Real-time PCR for early microbiological diagnosis: is it time?
José-Artur Paiva, Kevin B. Laupland
Intensive Care Medicine
Correspondence: jarturpaiva@gmail.com

Blood cultures are the classical gold standard for microbiological diagnosis of bloodstream infection (BSI) and sepsis. However, only % of blood cultures processed are positive, and finalized results typically take - h. Empirical antimicrobial therapy, administered until the etiological agent is identified and antimicrobial susceptibility test results are available, may be either excessive or inadequate, and unnecessary treatment with broad-spectrum antimicrobials can lead to significant collateral damage, including drug toxicity, antimicrobial drug resistance, increased length of stay, and additional cost. This is an important and relevant quality gap. It is evident that improved identification methods and practices that reduce the time to microbiological diagnosis and targeted therapy constitute a major quality improvement framework in antibiotic use [ ].

Diagnostic techniques that do not depend on growth of organisms in culture may offer a distinct advantage over current methods: they allow a shorter time to results and detection of microorganisms rendered non-cultivable by antibiotic pressure. Two recent studies have shown that matrix-assisted laser desorption/ionization time-of-flight mass spectrometry following isolation from clinical specimens, coupled with an antimicrobial stewardship programme (AST) intervention, decreases time to organism identification and to effective and optimal antibiotic therapy in adult [ ] and pediatric patients with BSI [ ]. In the adult population, acceptance of an AST intervention has also been associated with a trend toward reduced mortality on multivariable analysis. Moreover, nucleic acid amplification testing and mass spectrometry can identify selected antibiotic resistance patterns for vancomycin (vanA/vanB), methicillin (mecA), cephalosporins (beta-lactamases), and carbapenems (CPE) [ ].

Polymerase chain reaction (PCR) is well established for the diagnosis of "atypical" pathogens in severe community-acquired pneumonia [ ] and for the workup of ARDS with possible infectious etiology, namely for respiratory viruses (HSV and CMV), with virus load quantification, and also for Pneumocystis and Aspergillus [ ]. In a retrospective case-control study in adult ICU patients with pneumonia and severe sepsis or septic shock, a strategy of bronchoalveolar lavage (BAL) cultures plus BAL multiplex PCR led to a higher microbiological yield and less time to antibiotic therapy modification than a BAL culture strategy alone ( . ± . vs. . ± . h; p < . ) [ ].

However, several criticisms have been raised about the use of real-time PCR for the study of suspected sepsis and BSI. One study showed that the post-test probabilities of both a positive ( . , % CI . - . %) and a negative ( . , % CI . - . %) SeptiFast test indicated potential limitations of the technique in diagnosing BSI in patients who had been admitted to hospital for an average of days and had recently received antibiotics and organ support [ ]. A systematic review and meta-analysis showed that, in suspected sepsis, SeptiFast has higher specificity than sensitivity and is therefore better for ruling in than for ruling out infection [ ].
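The post-test probabilities cited for SeptiFast follow from Bayes' theorem. As a reminder of the standard arithmetic (general textbook definitions, not figures from this editorial), the post-test odds equal the pre-test odds multiplied by the likelihood ratio of the observed result:

```latex
\[
\mathrm{LR}^{+} = \frac{\text{sensitivity}}{1-\text{specificity}},
\qquad
\mathrm{LR}^{-} = \frac{1-\text{sensitivity}}{\text{specificity}}
\]
\[
\text{post-test odds} = \text{pre-test odds} \times \mathrm{LR},
\qquad
\text{probability} = \frac{\text{odds}}{1+\text{odds}}
\]
```

A test that is more specific than sensitive has a large LR+ but an LR- close to 1: its positive results move the post-test probability strongly upward, while its negative results change it little, which is exactly why SeptiFast is described as better for ruling in than for ruling out infection.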
There are a number of important considerations when critiquing studies comparing blood cultures with nucleic acid diagnostic techniques. False-negative PCR tests can occur through interference from human DNA and the presence of PCR inhibitors in the blood. Furthermore, PCR panels can detect only the pathogens that are specifically tested for. On the other hand, blood cultures are less sensitive, especially in the setting of recent exposure to antibiotics. Overall, blood cultures have only % specificity, and sensitivity is approximately % in suspected bacteremia, % in febrile neutropenia, % in severe sepsis, and % in septic shock [ ]. Defining a true positive result for evaluating diagnostic tests for infection is challenging and may be best accomplished by a composite measure that includes clinical status and the type and severity of infection.
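Because the studies that follow are summarized by sensitivity, specificity, and predictive values, it may help to make those definitions concrete. The sketch below is a minimal illustration using only the textbook 2x2 definitions and made-up counts; it does not reproduce data from any study cited here.

```python
def diagnostic_measures(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard test-performance measures from a 2x2 contingency table.

    tp, fp, fn, tn are counts of true positives, false positives,
    false negatives, and true negatives against the chosen reference
    standard (e.g., blood culture or a composite clinical measure).
    """
    return {
        "sensitivity": tp / (tp + fn),  # P(test positive | infected)
        "specificity": tn / (tn + fp),  # P(test negative | not infected)
        "ppv": tp / (tp + fp),          # P(infected | test positive)
        "npv": tn / (tn + fn),          # P(not infected | test negative)
    }

# Hypothetical counts, for illustration only:
print(diagnostic_measures(tp=80, fp=10, fn=20, tn=190))
# sensitivity 0.80, specificity 0.95, PPV ~0.89, NPV ~0.90
```

Note that, unlike sensitivity and specificity, PPV and NPV shift with the prevalence of infection in the tested population, which is one reason the post-test probabilities reported for molecular tests depend so heavily on the patient group studied.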
Recently, a comprehensive literature search was conducted to identify studies with measurable outcomes evaluating the effectiveness of different rapid diagnostic practices in decreasing time to targeted therapy for hospitalized patients with BSI [ ]. The authors concluded that rapid phenotypic techniques with direct communication likely improve the timeliness of targeted therapy, and that rapid molecular testing with direct communication significantly improves timeliness and significantly reduces mortality compared with standard testing. Since publication of this review, the RADICAL study [ ], an observational study of patients with suspected or proven BSI, pneumonia, or sterile fluid and tissue infection in nine ICUs, showed that PCR/electrospray ionization-mass spectrometry provides rapid pathogen identification with a sensitivity of %, specificity of %, and negative predictive value of % at h from sample acquisition, and that treatment could have been altered in up to % of patients. Further, Banerjee et al., in a prospective randomized controlled trial, studied patients with positive blood cultures, with stratified randomization into arms: standard blood culture processing (control), rapid multiplex PCR (rmPCR) reported with templated comments, or rmPCR reported with templated comments plus real-time audit and feedback of antimicrobial orders by an AST team [ ]. Antibiotic de-escalation occurred h faster in the rmPCR/AST group than in controls, with almost a % reduction in broad-spectrum antibiotic days of therapy.

The EVAMICA study, recently published in this journal, is an important addition to the body of literature investigating rapid diagnostic techniques in the ICU [ ]. This multicentre cluster-randomised crossover trial included patients and confirms that adding direct molecular detection of pathogens in the blood of patients hospitalized with severe sepsis to standard blood cultures results in an overall higher microbial diagnosis rate (an increase from . to . %) and a shorter time to results ( . vs. . h). Given the higher diagnostic sensitivity and faster turnaround, do the results of the EVAMICA study indicate that rapid diagnostic tests should be integrated into standard diagnostic laboratory practice? There are a number of considerations in this regard. First, it is important to recognize that, while in theory the availability of these results should lead to an increase in targeted therapy and a reduction in excessive or inadequate therapies, this study does not prove this. Second, it must be recognized that for rapid tests to be most useful they should ideally be offered h per day, days a week. In the EVAMICA study, tests were not offered at weekends and were batched for daily runs. Whether many clinical laboratories could implement this test for provision of prompt results in the "real world" setting remains to be determined. Third, while a significant improvement in diagnostic certainty was observed, the fact that the majority of patients remained undiagnosed for an infecting etiology leaves much to be desired.

So is it time? Yes! It is our contention that real-time PCR should be incorporated into the standard clinical management of patients with sepsis. However, use of these tests will still require adjunct use of standard blood culture methods and, for full benefit, coupling with AST. We must recognize that, even with the important gains we have witnessed with the use of new diagnostic tests, the majority of patients with sepsis will remain undiagnosed for a specific etiology. Research into further improving diagnostic certainty through ongoing development of rapid culture-independent microbiological identification methods, means to enhance swift communication of results between the microbiology laboratory and the ICU, and enhanced integration with AST is needed to improve individual patient outcomes and reduce the burden of excessive antibiotic use and the subsequent emergence of antimicrobial resistance.

References
- Effectiveness of practices to increase timeliness of providing targeted therapy for inpatients with bloodstream infections: a laboratory medicine best practices systematic review and meta-analysis
- Impact of rapid organism identification via matrix-assisted laser desorption/ionization time-of-flight combined with antimicrobial stewardship team intervention in adult patients with bacteremia and candidemia
- Impact of matrix-assisted laser desorption and ionization time-of-flight and antimicrobial stewardship intervention on treatment of bloodstream infections in hospitalized children
- Evaluation of matrix-assisted laser desorption ionization-time of flight mass spectrometry for rapid detection of β-lactam resistance in Enterobacteriaceae derived from blood cultures
- Community-acquired pneumonia related to intracellular pathogens
- Diagnostic workup for ARDS patients
- Impact of bronchoalveolar lavage multiplex polymerase chain reaction on microbiological yield and therapeutic decisions in severe pneumonia in the intensive care unit
- Diagnostic accuracy of SeptiFast multi-pathogen real-time PCR in the setting of suspected healthcare-associated bloodstream infection
- Accuracy of LightCycler SeptiFast for the detection and identification of pathogens in the blood of patients with suspected sepsis: a systematic review and meta-analysis
- Multi-pathogen real-time PCR system adds benefit for my patients: yes
- Rapid diagnosis of infection in the critically ill: a multicenter study of molecular detection in bloodstream infections, pneumonia, and sterile site infections
- Randomized trial of rapid multiplex polymerase chain reaction-based blood culture identification and susceptibility testing
- Performance and economic evaluation of the molecular detection of pathogens for patients with severe infections: the EVAMICA open-label, cluster-randomised, interventional crossover trial

Ferret Behavior
Peter G. Fisher
Exotic Pet Behavior

Polecats tend to be solitary and very territorial, with fighting between males having been observed, presumably over territory and sexual domain.
The domestic ferret, on the other hand, is very social and gregarious, enjoying play with conspecifics and preferring to sleep with other ferrets of the same or opposite sex. The polecat is quick, nervous, and easily frightened, and will show fear of people if left with the mother during the critical period of to weeks of age. The domestic ferret, however, was initially kept as a pest destroyer, normally raised in confinement and liberated to the field in order to hunt the intended prey. These ferrets were therefore raised to be easily handled and could not be nervous or fearful of humans. Further resemblances and disparities between the domestic ferret and the wild polecat will be noted in other sections of this chapter as we investigate the behavior of today's pet ferret.

Being domesticated from a crepuscular species, ferrets possess a tapetum lucidum, which allows for more effective vision at low levels of light. They do not see well in pitch dark and have difficulty adjusting to bright light. This means that a ferret must be allowed to adjust to the light and become fully awake before it is removed from under a blanket or from a cozy spot where it is sleeping, or the handler risks being bitten. Ferrets have binocular vision, and although they can swivel their eyes to look at different objects, most ferrets look forward and turn their heads to see things to the side. The pupil is horizontally slit, which is common in species that chase prey with gaits characterized by a hopping motion and explains the ferret's fascination with a bouncing ball. Ferrets have very good visual acuity at close range, which is important because the ferret uses varying body language and visual displays to communicate. They see less detail at greater distances and as a result pay more attention to complex visual stimuli such as moving objects.

The ear canals of a ferret do not open until approximately days postnatally (as compared with days in a cat), which coincides with the appearance of a startle response to loud hand claps and the recording of acoustically activated neurons in the midbrain (Figure - ). This late onset of hearing may explain why kits produce exceptionally loud, piercing sounds during the first weeks of life. Lactating jills are tuned in to kit vocalizations and will respond to high-frequency (greater than kHz) sounds in a maze test, whereas males and nonlactating females will ignore these sounds. Adult ferrets hear best when sounds are within a range of to kHz, which may explain why ferrets love squeaky toys, which produce sounds in this range.

Kits of wild polecats have a critical period for learning the scent of prey (olfactory imprinting), which according to Apfelbach ( ) is between and days of age. Except under duress, polecats will refuse to eat any prey whose smell they have not learned by that time. As adults, they will actively search for prey with which they were familiarized during this critical time period and will ignore other prey or food smells. This may explain why certain ferrets will eat only one type of diet and why kits exposed to only one brand of food at to days of age may be opposed to dietary changes later in life. It is therefore recommended that young kits be offered a variety of foods during their first months of life in order to prevent dietary selectivity driven by olfactory imprinting. The sense of smell in ferrets is particularly keen: wild Mustelidae hunt down their prey using their sense of smell to home in on the quarry.
During exploratory behavior the ferret spends a great deal of time with its nose to the ground, investigating its environment. Objects placed directly in front of a ferret will be examined first by smelling, followed by visual or tactile inspection. Polecats are a solitary species and leave marks throughout their home range by performing a repertoire of scent-marking actions that include wiping, body rubbing, and the anal drag (Figures - and - ). Observations of ferrets in an outside enclosure revealed that anal drags were performed at latrines near den sites and at an equal frequency by males and females throughout the year. Mustelidae also use urine for scent-marking and produce skin oils that are profoundly affected by circulating hormone levels. Hobs (male ferrets) in particular will produce intense seasonal skin oils that correspond to the increased testosterone levels associated with longer day lengths. Ferret anal scent gland odors are sexually dimorphic, and studies have demonstrated that ferrets can use these variations as a communication tool. Ferrets can distinguish between male and female anal sac odors, among strange, familiar, and their own odors, and between fresh and -day-old odors. These results are consistent with both a sex attraction role and a territorial defense role for anal sac odors.

Figure: The anal drag. The domestic ferret defines its territory by marking behavior such as backing into corners to defecate and following with the anal drag, as illustrated here. (Illustration courtesy Barb Lynch.)

Different messages are conveyed by the various marks. Kelliher and Baum showed that in the ferret, olfactory detection and processing of volatile odors from conspecifics is required for heterosexual mate choice. Males perform more body rubbing than females (jills), especially during the breeding season. Anal drags leave an olfactory signature of anal sac secretion for intersexual and intrasexual communication. Olfactory marking behavior also communicates territoriality and gives other ferrets knowledge of the marking ferret's sex and hormonal activity. Wiping and rubbing actions release the ferret's general body odor and may act as a threat signal in agonistic encounters. The response to olfactory stimuli and the scent-marking behavior of domestic ferrets are much less pronounced than those of their undomesticated counterparts, but domestic ferrets retain the actions of marking that are so important to their wild relatives. The ferret thrives in the company of other ferrets, readily sharing living quarters, hammocks in which they sleep, food bowls, and water bottles. Despite this harmony, ferrets are still instinctively territorial and lay claim to smaller, albeit significant, territories within their home environment. Like the wild polecat, domestic ferrets back up and defecate on objects or certain areas (and some even anal drag after defecation) in order to mark their territory. The domestic ferret tends to choose corners in which to defecate that may represent territory perimeters.

Figure: Wiping behavior. Domestic hobs possess preputial sebaceous glands that produce oils that they will wipe or mark on household items to communicate sexuality and territoriality. This behavior corresponds to the increased testosterone levels associated with longer day lengths. (Illustration courtesy Barb Lynch.)
When it comes to the postdefecation anal drag, operators of ferret shelters will note that this behavior increases in some ferrets when a new ferret is introduced to the household or when ferrets become more seasonally hormonal. This innate behavior occurs even in ferrets that are surgically descented (anal sacculectomy), as the ferret is unaware of its missing anatomy. Ferrets also possess perianal sebaceous glands that secrete oils used in scent-marking; the strength of the scent from these glands is reduced in neutered males. Worth mentioning is the way in which ferrets use their sense of smell in meet-and-greet behavior. When ferrets are introduced, they will often sniff each other's anal area and neck and shoulder region (Figure - ). This behavior may give a domestic ferret information about the other ferret's sex and hormonal status, and may be the domestic ferret's equivalent of the behavior in the wild counterpart by which sexual receptivity is assessed.

Figure: Meet-and-greet behavior. When ferrets are introduced, they will often sniff each other's anal area and the neck or shoulder region. This behavior may give the domestic ferret information about the other ferret's sex and hormonal status. (Courtesy Laura Powers.)

Although quiet most of the time, domestic ferrets do make a variety of vocalizations with which they communicate. In order to determine the meaning of ferret sounds, Shimbo recorded waveforms and sound spectrographs of various domestic ferret vocalizations. Interpretation of these auditory studies led to several generalizations: an increase in tonality on a basic signal indicates heightened excitement, a rising inflection indicates urgency, and a rising pitch of a string of sounds indicates displeasure. Any one or more of these alterations in inflection can be superimposed on any vocalization to alter its meaning. The following are descriptions and the interpretive significance of the most common ferret vocalizations as recognized by many ferret owners.

Also known as chuckling or "the buck," the dook is the most commonly used ferret vocalization. This vocal signal can be low- or high-pitched and is usually strung together in a series of chortles or chucks in undulating pitches. The dook usually signifies happiness or excitement and is commonly expressed during play and exploratory behavior. The greater the excitement level, the louder the intensity and volume.

The ferret and most other Mustelidae use a hissing sound to convey anger and frustration, but it can also denote fear or be used as a warning signal. It can be a short burst that warns a playing partner, "Hey, that hurt, back off a little," or serves as a fear response, forewarning that "my guard is up, be careful." Prolonged hissing usually indicates frustration.

A high-pitched screech is used when a ferret is startled, frightened, or in pain. When cornered by another animal, ferrets may scream to startle their opponent and thereby gain escape. Prolonged screaming is an indication that something is seriously wrong and may occur when a ferret is in intense pain; such screaming has also been reported to occur during seizures. All cases of continual or recurrent screaming warrant a medical workup.

An unusual loud chirp may occur as a defensive vocalization when a ferret is frightened or very excited, and some ferrets bark when they are angry. It is usually easy to discern a happy, curious ferret vocalization from one indicating anger, fear, or extreme pain.
Be aware that an apprehensive or distressed ferret may bite, and use appropriate caution with ferrets that are using these verbal signals. Ferrets also use body language and a variety of visual displays to communicate moods and feelings. They prefer to follow and attack prey moving at a velocity close to the escape speed of a mouse. This may help to explain their fascination with bouncing balls, toys pulled along the ground in front of them, and in general anything that moves. During exploration the inquisitive ferret will periodically demonstrate scouting behavior in the form of erect or alert posturing. This attention response is similar to (and probably stems from) actions shown by the European polecat while investigating unfamiliar surroundings. During this response the neck is raised, the head is held at degrees to the body, the ears are pricked, and the vibrissae are extended.

Piloerection in the form of a frizzed-out tail may be a sign of anger or excitement, either fearful or joyous (Figure - ). During a display of anger, the puffed tail is usually accompanied by an arched back and a vocal hiss or screech. If the display represents excitement and joy, the tail may fuzz out and flick back and forth. Piloerection of the tail may also be noted during an anaphylactic reaction such as that seen with a vaccine reaction.

Figure: Bottle brush tail. Piloerection in the form of a frizzed-out tail may be a sign of anger or excitement, either fearful or joyous. During a display of anger the puffed tail is usually accompanied by an arched back and a vocal hiss or screech. If the display represents excitement and joy, the tail may fuzz out and flick back and forth. (Courtesy Lisa Leidig.)

Normal locomotion in a ferret consists of alternating movements of all four feet, although a ferret can be seen to hop or gallop with the rear legs when running or at play. Many repeatable locomotor patterns can be noted that tell us the ferret is a happy, playful pet; these activities have been described and nicknamed by ferret owners. For example, the "dance of joy" or the "weasel war dance" is exhibited by the ferret that is happy and excited. The animated ferret tries to go in several directions at once: dancing from side to side, hopping forward, twisting back, flipping, and rolling on the floor, all at an energetic pace. There seems to be no apparent reason for this dance other than pure joy and happiness. The "alligator roll" is a form of intense play or wrestling between two ferrets in which one ferret grabs the other by the back of the neck and flips him upside down. Some feel this is a way for one ferret to show dominance. Because wild ferrets are solitary, any form of social hierarchy would be a reflection of domestication and the housing of multiple unrelated ferrets in close captivity.

It is obvious that ferrets are energetic, fun-loving animals. As a result of this high energy, ferrets need ample play time (preferably up to hours per day) and benefit greatly from environmental enrichment. In addition to the "dance of joy" and the "alligator roll" already discussed, play behavior may include other visual displays. During periods of intense play ferrets may suddenly stop, fall to the ground, and slump, with body flattened, eyes open, and back legs splayed. This usually indicates the ferret is worn out and is taking a short break. In a few minutes the ferret will usually engage the rear legs and inch forward by pushing with the hind feet only.
Once rested, or if teased by a playmate into resuming the fun, the ferret will jump up and again engage in full-blown play behavior. This slumping may stem from the silent stalking of polecat predatory-attack behavior, in which the body is held close to the ground. The actual predatory attack, in which the ferret springs forward, may be elicited by any rapid movement that triggers this preprogrammed attack response. Therefore further romping on the part of the domestic ferret's play partner initiates a return-to-play assault by the "slumping" ferret.

Because of their high metabolic rate, short gastrointestinal tract, and gastrointestinal transit time of about hours, ferrets defecate frequently and can mark the corners of their cages, much to the aggravation of the conscientious owner who keeps a clean litter box available at all times. It should be stressed that clean to the ferret often means unused, as many ferrets will avoid a litter box that has been soiled only once. Before defecating and urinating, ferrets will usually briefly explore their cage environment in order to find a suitable location in which to void. Most ferrets choose one or two corners within the cage as the favorite location. Once satisfied with the spot, the ferret will turn around, back into the corner, and, with back slightly arched and tail raised directly over the back, defecate using slight pulsing contractions of the abdomen. Ferrets do not bury their stool but will at times perform a postdefecation anal drag, in which they scoot their anus along the floor for a few seconds. When urinating, the ferret behaves similarly: having found the appropriate site, it squats with the rear legs spread slightly apart. The urination posture of males and females is similar, the only difference being that females squat slightly lower. Ferrets have an innate love of digging, and a clean litter box is a perfect setting for digging and play behavior, often resulting in an unused, tipped-over litter box. See Box - for litter box tips.

Box - : Litter box tips
• Spend some time observing your ferret's habits in the cage. When it backs into a cage corner to relieve itself, pick it up and place it in the corner of the litter box.
• Provide a large litter box that takes up most of the bottom of the cage; this is more likely to encourage use of the box. Punch holes in the litter box and wire it to the cage walls so that it can't be tipped over.
• Offer praise and food treats when your ferret uses the box.
• To discourage digging, use newspaper strips in the litter box, and slowly add a little bit of litter. Over a week or more, gradually add more litter and less newspaper. Most ferrets learn not to play in the litter fairly quickly. Newspaper doesn't deter odors, so it needs to be changed often.
• Buy a ferret-friendly litter box with one low side and a guard on the higher sides to prevent the ferret from backing up far enough to miss the box.
• Clean soiled corners, inside or outside the cage, with an appropriate pet odor neutralizer such as Urine-Off or Eliminodor.
• Provide litter boxes in the corners of rooms ferrets are allowed to explore; more than one litter box is ideal. If your ferret seems to prefer a certain corner, place the litter box there.
• Ferrets do not bury their stool as cats do; therefore only a shallow layer of litter covering the bottom of the litter box is needed. Avoid fine clumping litters, as they are messy and dusty, potentially resulting in respiratory problems.
• Recycled newspaper litter or plain clay litter are good choices. Avoid scented litters, as ferrets may avoid them.
• Change the litter box(es) often to encourage use.
• Most ferrets won't soil their beds or food bowls. Place bedding or food dishes in all non-litter box corners of the cage. Bedding that has been slept in and retains the ferret's body scent works best.
• Before out-of-the-cage play, place your ferret in the cage's clean litter box. Continue to place it in the box until it urinates or defecates, then reward it with play.

It is not uncommon for ferret owners to report that their ferret licks or drinks its own or a cagemate's urine. Physical examinations and health workups, including complete blood count, chemistry panel, and urinalysis, are usually unremarkable. It is possible that this behavior stems from the behavior of polecat hobs, which sometimes groom themselves with their own urine to make themselves more desirable to jills.

Ferrets usually reach sexual maturity at to months of age. Most reproductive behavior in the pet ferret is suppressed because of surgical sterilization and exposure to artificial, indoor lighting for consistent periods of time averaging hours per day. Knowledge of normal reproductive behavior is important when interpreting certain ferret play and aggressive behaviors, as well as for understanding the behavioral and physiologic changes associated with adrenal disease. Researchers have shown that both estrogen and testosterone contribute to masculine sexual behavior in male and female ferrets. Ferret hormonal activity is strongly influenced by endogenous circadian rhythms, which persist under conditions of constant light and constant darkness. However, these circadian rhythms are usually influenced by external factors such as light, temperature, barometric pressure, and hormones. Of these factors the most important is light, and ferret sexual behavior becomes more evident as natural day lengths increase.
As day lengths increase, circulating melatonin levels diminish and hypothalamic gonadotropin-releasing hormone (GnRH) is released in a pulsatile fashion, in turn triggering the release of pituitary luteinizing hormone (LH) and follicle-stimulating hormone (FSH), which stimulate the release of estrogen and testosterone from the gonads. This results in an increase in sexual activity and interest. The onset of puberty in hobs is denoted by the development of male sexual behaviors, such as showing more interest in jills and introducing neck gripping and pelvic thrusting into their play behavior. If exposed only to natural lighting, the hob will become reproductively active a full to months before the jill. A testosterone surge will result in reproductive behaviors associated with attracting the opposite sex and protecting territory. Hobs in rut will be more aggressive and will scent-mark in order to signal to potential breeding partners that they are ready to mate. Male ferrets have preputial gland secretions that they wipe on objects by dragging their bellies across the ground. Perianal scent glands are also used for scent-marking by dragging the anus, or scooting, across the ground (the anal drag). Numerous dermal sebaceous glands, most prominent at the nape of the neck, are used by rubbing and rolling onto inanimate objects that hobs wish to mark. Males have more sebaceous glands than females, and glandular production appears to be under androgenic control.
In a natural setting, all these reproductive behaviors would allow multiple polecat hobs to stake their territory and fight off potential competing male suitors so that, by the time the jill becomes sexually receptive, they can get down to the business of breeding. During the mount, the male grabs the nape of the jill's neck with his teeth and grips her body by wrapping his forelegs around her ribcage. Pelvic thrusts last variable lengths of time, up to minutes. Between pelvic thrust bursts are periods of rest during which the male simply lies over the female and holds on with the neck grip. At the point of penetration the male will increase the arch of his back anteriorly, causing his foreleg grip to slip behind the female's rib cage. Holding this position for a variable but usually prolonged period of time is best interpreted as penetration, at which time pelvic thrusting ceases. Occasionally the male will tense his pelvis, causing the tail to rise for short periods of time; at this time the female will occasionally flinch or remain flaccid. Variable mating times from minutes to hours have been reported, but in one study mating times recorded in pairs of ferrets lasted from to minutes. These prolonged intromissions appear necessary to ensure fertilization. Whether this allows for increased sperm deposition as a result of the male's multiple ejaculations, or is necessary to stimulate the LH surge and subsequent ovulation in the female, is open to debate. Neutered males with adrenal gland disease may display sexual behavior because of production of testosterone by the abnormal glands (see the discussion of adrenal disease).

Behavioral changes associated with rising estrogen levels and puberty in the female ferret are less pronounced. Some jills may show evidence of being more excitable and nervous, whereas most show no behavioral changes at all. Wheel-running activity was shown to increase during estrus, with the number of wheel revolutions doubling or tripling compared with totals in ovariohysterectomized or anestrous ferrets. With the onset of full estrus, food intake may decrease and jills may sleep less and become irritable. Before the onset of full estrus, jills will be unresponsive to the advances of a hob in rut. There will be a good bit of anal, genital, and neck sniffing, nose poking, and attempts by the male to grab the female by the neck, but the jill will ignore this behavioral foreplay or, when tired of it, hiss and nip at or attack the male. Dramatic edematous vulvar swelling in response to estrogen secretion by the ovaries is a clear signal that full estrus has occurred. At this time the jill will demonstrate the above behaviors but with more noise and intensity. These reproductive behaviors are very similar to those in other mammalian species, with much sniffing, genitalia display, and play fighting. When ready to breed, the female becomes flaccid and submissive, and mounting by the hob is allowed. Being an induced ovulator, the jill will remain in estrus for extended periods if not bred. If breeding does not occur, the vulvar tissue will remain swollen, and hyperestrinism can cause severe anemia that will not abate until ovariohysterectomy is performed or hormonal treatment is instituted. Adrenal disease may also cause a swollen vulva as a result of androgens oversecreted by the adrenal glands. Remnants of ovarian tissue may also cause hyperestrinism.
As a general rule, patterns of behavior and social relationships are developed through learning as well as heredity, and social animal groups are organized by social status, territoriality, and reproductive activities. The interplay of experience and innate* factors in the development of behavior is very subtle and can be difficult to separate. European polecat kits are dependent on their mothers to bring them meat meals from the time they are weaned at to weeks to the time they begin to hunt on their own at weeks of age. During this preweaning time kits have been observed to interact socially and play. However, by the time they are weeks old, a time when kits may leave their nests permanently and go out on their own, kits show various degrees of independence from one another. Adult polecats are essentially solitary, with one study finding ferrets sharing dens simultaneously with other ferrets on only . % of radio-tracking events. Adult ferrets also demonstrate intrasexual territoriality, with dominant males showing more spatial overlap with females than with subordinate males.

*Innate behaviors are those that do not seem to require specific experiences (learning) for their expression.

The domesticated ferret, on the other hand, shows much more diurnal activity, and many can be kept in pairs or groups without conflict. The best explanation for the difference in socialization patterns is that familiarization and habituation* play a significant role in the ferret's social response to both man and conspecifics. Familiarization in the form of imprinting may be involved, as young polecats removed from their mothers during this critical phase in their development ( to weeks) become imprinted on their human caretakers. Evidence to support this belief is the fact that young polecats follow their mother on foraging expeditions and that hand-reared ferrets readily follow a human being. It has also been shown that the presence of the mother appears to facilitate the development of fear of humans in the young. In captivity, however, fear of humans does not develop in wild polecats if they are removed from their mother at any time before the second day after their eyes have opened (typically to days). Socialized ferrets are also more likely to show habituation than isolated ferrets, demonstrating that socialization and domestication go hand in hand.

*Habituation is a decreased response to new objects and environments resulting from prolonged or repeated exposure.

Pet ferrets acclimate to their environment and will rise to the occasion when given an opportunity to play, explore, or interact with others. In other words, they become diurnal as their periods of activity coincide with those of their human household. Most ferrets enter the pet trade at any time from to weeks of age. In the United States most pet ferrets come from a few large commercial breeding farms and therefore are exposed to other ferrets and humans from the time their eyes open. From the above data, and from observing domestic ferrets and their obvious agreeable nature with both humans and cagemates, it seems safe to assume that ferrets, like dogs, do have a critical period of socialization. This period occurs between the time their eyes open at weeks of age and weeks. Through their observations, most ferret researchers and owners of multiple ferrets believe that ferrets do not form any kind of social hierarchy and that positioning for dominance does not occur. Nevertheless, ferrets will fight occasionally, especially when exposed to an unfamiliar ferret.
Some ferret rescue workers have recommended placing Ferretone on the necks and scruffs of all ferrets being introduced to an unfamiliar ferret. Ferrets consistently like this oily supplement and will likely lick each other in an appropriate manner while being less likely to demonstrate aggression.

Pet ferrets readily show affection for their human owners through gleeful greeting behavior and a willingness to shower owners with ferret kisses. Young ferrets, on the other hand, are not likely to enjoy quiet cuddle time: their exploratory behavior creates too strong an urge to get off an owner's lap and move on to investigate the environment around them. As ferrets mature, a combination of age, improved socialization, and a decrease in exploratory behavior results in a more staid ferret that enjoys periods of quiet snuggling and petting. Ferrets have been domesticated for over years; therefore it seems likely that, given the right environment, poorly socialized ferrets can become more affable and gregarious. This suggestion is supported by the fact that intact hobs kept together in a colony situation with minimal human handling can live in harmony outside the breeding season.

Ferrets groom their fur through licking and gentle nibbling motions. They normally maintain a smooth and shiny hair coat as long as they are kept on a balanced diet made up primarily of high-quality animal protein and fat. Ferrets have also been known to groom other ferrets to which they are bonded; this grooming is usually around a cagemate's ears and head as the ferrets lie side by side. Normal ferret skin is smooth and pale, without evidence of flaking, scabs, or inflammation. A dry, dull fur coat and evidence of flaking may be a reflection of poor diet or low environmental humidity. In the wild, ferrets spend a good part of the day in underground burrows in which the humidity is high and the temperature a consistent °F ( °C). The dry warmth of many homes during the winter months may cause the skin to dehydrate, with subsequent flaking and itching. Pruritus may also be a sign of external parasites or adrenal disease. A ferret's hair coat has a thick cream-colored undercoat covered by longer, coarse guard hairs. It is the color of these guard hairs that defines the various ferret coat colorations, from dark and light sable to cinnamon, silver, or white. Both intact and neutered ferrets undergo a hormonally influenced molt, usually twice a year, in which the hair coat thins in response to photoperiod. As daylight hours and environmental temperatures increase, corresponding with late spring in the Northern Hemisphere as well as the ferret breeding season, ferrets may lose most of their guard hairs over a period of several weeks. Ferrets can develop trichobezoars from grooming, and these can become large enough to cause gastric obstruction or irritation (Figure - ). Therefore the use of a hairball remedy in the form of a feline petroleum-based laxative is especially important during this seasonal molt. The skin of the ferret contains numerous sebaceous glands, the secretions of which give the ferret its characteristic musky odor. These secretions are also strongly influenced by seasonal hormonal changes, especially in the male, and may give the coat a greasy feel and an obvious yellow to orange appearance most noticeable over the dorsal shoulder area.
In the Northern Hemisphere these secretions are observable in the late spring to early summer, corresponding with the ferret's natural breeding season. If this coat discoloration and increased odor are particularly evident and do not diminish with time, they may be signs of adrenal disease, especially in the male ferret. If associated with adrenal disease, loss of guard hairs may occur concurrently, and areas of obvious alopecia may develop, as well as other systemic signs discussed elsewhere. During this time ferrets may also show increased scent-marking behavior and will rub their backs and shoulders along carpet and furniture, to the dismay of odor-conscious owners. Cage walls and bedding readily take on the yellow color and musky odor of these sebaceous secretions.

The ferret is an obligate carnivore with a short intestinal tract that lacks a cecum and ileocolic valve. The small intestine is approximately five times longer than the ferret's body, and the mean gastrointestinal transit time of food passage from stomach to rectum is minutes. This rapid transit time, along with the ferret's lack of intestinal brush-border enzymes, especially lactase, contributes to inefficiency in absorption. As a result, ferrets are less able than cats to absorb sufficient calories from carbohydrates. To compensate for the inefficiency of its digestive tract, the ferret requires a concentrated diet, high in protein and fat and low in fiber. Ferrets snack and eat multiple small meals throughout the day, and unless regularly fed very high-fat foods they generally eat as much as they want without becoming obese. Ferrets normally increase food intake approximately % in the winter and gain weight by depositing subcutaneous fat; this reverses as daylight lengthens in the spring. For maintenance, ferrets may consume to kcal/kg body weight daily. Daily food consumption averages g ( . oz) and g ( . oz) dry matter per kg body weight for male and female ferrets, respectively. Ferrets are solitary feeders that, when allowed free access to food, will eat or meals per day, as is true of most species when food is available ad libitum. In laboratory studies in which ferrets had to perform a task (a bar press) to gain access to food, meal frequency declined. There was a corresponding increase in meal size, allowing the ferrets to maintain a relatively constant total daily food intake sufficient to maintain normal growth and body weight. These shifts in feeding patterns in response to the increased work needed to procure a meal are similar to those in other species and are consistent with an ecologic analysis of foraging behavior. Generally, socially feeding animals increase procurement and consumption rates as food availability decreases, whereas solitary feeders, such as cats and ferrets, do not. This study demonstrates that ferrets could be maintained on meals fed once or twice daily rather than in a free-feeding situation. Many ferret owners like to offer raisins or other simple carbohydrates as treats, but these high-sugar treats are difficult for the ferret's gastrointestinal tract to digest and may be contraindicated because of the prevalence of insulinoma in ferrets. Instead, small pieces of cooked chicken, Totally Ferret, or N-Bones treats may be offered.
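The chapter's specific intake figures were elided in this copy, but the maintenance arithmetic it describes (a kcal/kg body-weight requirement, adjusted upward in winter) is straightforward. The sketch below is a minimal illustration only; the constants are hypothetical placeholders, not the chapter's values.

```python
def daily_energy_need(weight_kg: float,
                      kcal_per_kg: float = 250.0,
                      winter: bool = False) -> float:
    """Estimate a ferret's daily maintenance energy requirement.

    kcal_per_kg is a placeholder value for illustration; the chapter's
    actual per-kilogram figure is not reproduced here.
    """
    need = weight_kg * kcal_per_kg
    if winter:
        # The text notes ferrets increase intake in winter; the
        # percentage used here is likewise a placeholder.
        need *= 1.3
    return need

# A hypothetical 1.2 kg hob in winter:
print(f"{daily_energy_need(1.2, winter=True):.0f} kcal/day")
```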
The predatory behavior of ferrets consists mainly of instinctive behavior patterns that are elicited by external stimuli. In all higher animals a sudden change or stimulation will usually elicit a movement toward the source, a response described as the orientation response. The learning of an orientational component plays an important role in the development of functional sequences of behavior. Eibl-Eibesfeldt observed the prey-catching techniques of polecats and found that the normal movements of pursuit, grasping the neck of the prey, shaking it, and turning it over on its back occur the first time an appropriate object is presented. After several experiences the neck bite becomes properly oriented for quick killing of prey. More recently, similar behavior was demonstrated in the black-footed ferret, in which maturation, experience, and greater environmental complexity (an enriched cage, including encouragement of food-searching behaviors) all increased the likelihood of the ferret making a successful kill. This behavior becomes important when the domestic ferret is kept in a household with other exotic pets that it may perceive as prey. It is then that the ferret may cross the line from play behavior to prey-catching behavior. Both behaviors can look similar initially, as ferrets at play also demonstrate neck biting, but if stimulated by a perceived prey species (a pet bird, lizard, or rodent) the ferret may instinctively go beyond play and inflict harmful or fatal bite wounds on the other pet. Therefore, ferrets should not be left unsupervised with other small exotic pets. Apfelbach showed that for the ferret the time needed to catch and kill rats depended on the size of the rats in relation to the ferret: killing success decreases with a relative increase in prey size. This may explain why the domestic ferret tends to live harmoniously in a home with dogs and cats. These larger species do not stimulate instinctive prey-catching and killing behavior; instead, the response to the larger animal takes the form of the less intense, albeit similar, play behavior.

A number of investigations have demonstrated the existence in mammals of behavior not motivated by fear, thirst, or hunger and unrelated to general activity level. These have led to the postulation of another drive, the exploratory drive. Characteristically, such behavior is aroused by novel external stimulation; it involves either locomotor activity or manipulation of objects, and it declines as a function of time. In general, higher animals will approach and examine strange objects with whatever sensory equipment is available to them, and in a strange environment they usually move around and examine all of their surroundings. When an animal is exposed to a novel object or environment it will first familiarize itself with the new stimulus or situation. With a strange object, exploration must precede play. As familiarity increases, exploratory behavior decreases, and the animal's curiosity about the novelty may with time lead to play behavior. With other new situations, fear may lead to rejection instead of exploration. Whether fear or curiosity (and subsequent play behavior) is elicited depends on various factors, including the physiologic state of the individual animal and the magnitude, intensity, or strangeness of the eliciting stimulus. In general, a small change in the environment elicits investigation, whereas a major change may elicit fear. Degree of domestication can also affect exploration behavior.
In his work on identifying behavioral differences between the domesticated ferret and its wild counterpart, the European polecat, Poole ( ) showed that wild ferrets are less likely to examine, and more likely to avoid, strange objects than tame ferrets. In this study, attention responsiveness to new auditory stimuli was measured and exploration of new surroundings was observed. The attention response is essentially a method of scanning for stimuli. The European polecat shows extreme caution in exploring an unfamiliar environment: it takes frequent cover, uses definite pathways in the immediate vicinity of its den, and regularly returns to its home area after making forays into unfamiliar territory. The polecat also shows a more rapidly diminishing attention response to auditory stimuli. The domestic ferret, on the other hand, can be moved to a strange cage or placed in an unfamiliar area without showing signs of fear or disorientation. The ferret also shows a persistent response to repeated auditory stimuli, a feature that might also be related to the reactivity typical of a juvenile animal. These results again appear to support Lorenz's view that domesticated animals show more juvenile behavior than their wild counterparts. Ehrlich's work with the black-footed ferret demonstrated similar findings: increased handling (equivalent to a greater degree of domestication) leads to increased exploration behavior. Those findings may explain why today's domestic ferret shows intense curiosity about, and little fear of, its surroundings. When allowed to roam, the domestic ferret shows fearless exploratory behavior: the domestication process has been thorough in removing the ferret's fear response when it comes to exploration, and the ferret's inquisitive nature and love of exploration are boundless, sometimes to its detriment. As a result of being continuously handled and carried securely without being dropped, or perhaps because of its reportedly poor eyesight, the domestic ferret shows little fear of heights. This is opposite to what Shimbo found in her personal experience with undomesticated polecats, which appeared very frightened and uneasy when exposed to heights. This apparent lack of fear of heights in the domestic ferret can result in injury or death if it climbs out of an upstairs or apartment window. An open door may also be approached with fearless curiosity, and the urge to explore can lead the ferret to the outdoors, where it is limited in its ability to find food and to protect itself from predators or from extremes of weather.

Reflecting on the Lorenz hypothesis that the behavior of domestic animals resembles that of juvenile individuals of their wild counterparts, it can certainly be said that this holds true when it comes to play behavior. In general, motor patterns used in intraspecies play behavior are characterized by actions that occur frequently in other functional contexts (e.g., aggressive, sexual, and predatory behaviors). Poole observed polecats (Putorius putorius) at play and demonstrated the incomplete sequence of conflict behavior: four of the agonistic patterns were absent from intense play behavior, namely two extreme forms of attack ("sustained neck biting" and "sideways attack") and two extreme fear patterns ("defensive threat" and "screaming"). The play behavior imitated patterns of aggression, but in a less serious and less threatening manner.
Adolescent ferrets' play behavior also imitates sexual behavior, with juvenile male ferrets exhibiting higher levels of neck biting and "stand-over" behavior than females (Figure - ). Sex differences in the expression of prepubertal play behavior of ferrets apparently result from the differential exposure of males and females to androgen during the postnatal period. The same holds true for the domestic ferret. The ferret demonstrates an obvious love of play in a variety of forms, and we can imagine their merriment stemming from predatory, sexual, exploratory, and digging behaviors.

Figure: Mounting play behavior. Adolescent ferrets' play behavior also imitates sexual behavior, with juvenile male ferrets exhibiting higher levels of neck biting and "stand-over" behavior than females. Sex differences in the expression of prepubertal play behavior apparently result from the differential exposure of males and females to androgen during the postnatal period. (Courtesy Peter Fisher.)

The typical sequence for two ferrets at play begins with the chase, followed by an exaggerated approach or ambush, veering off, and reciprocal chasing, followed by mounting, rolling, and wrestling with inhibited neck biting (Figure - ). These mock sexual and predatory behaviors are accompanied by vocalizations that signal both excitement (dooking) and anger (hissing). The solitary ferret at play also demonstrates various behaviors that stem from normal behaviors seen in its wild counterpart. Predators stalk and chase quarry; observe the ferret playing with a hard rubber ball or squeaky toy, and you will see the same types of behavior. Hard rubber balls, such as Super Balls, really stimulate hunt-and-capture behavior. A rolling or bouncing ball captivates the ferret, which immediately begins the hunt. The ball is aggressively pursued until the ferret "captures" it by grabbing it, then bites hard and shakes it as if it were prey. Keep in mind that ferrets love soft rubber items and will readily ingest torn pieces of soft rubber, creating the potential for gastrointestinal foreign bodies (Figure - ). Therefore any ferret play ball needs to be hard enough and large enough that the ferret cannot readily tear off chunks that it might ingest (Figure - ).

Figure: Neck-biting play behavior. The typical sequence for two ferrets at play begins with the chase, followed by an exaggerated approach or ambush, veering off, and reciprocal chasing, followed by mounting, rolling, and wrestling with inhibited neck biting. These mock sexual and predatory behaviors may be accompanied by vocalizations that signal both excitement (dooking) and anger (hissing). (Courtesy Peter Fisher.)

Playing ferrets also love to dig. This digging behavior comes naturally, as the sharp claws and streamlined body of the polecat were designed for digging and tunneling deep underground in pursuit of game or for making below-ground burrows. This ancestral behavior may explain why ferrets love to dig at the carpet, the floor, and their litter box, and enjoy digging in the soil of potted plants. Some ferret owners satisfy this desire to dig by allowing the ferrets to dig away in a large plastic play box (such as a large cat litter box) filled two-thirds full with rice or potting soil. A neater option is to provide the ferret with tubing to explore; both flexible (ribbed plastic similar to clothes-dryer vent tubing) and rigid (PVC pipe) tubing make for great exploratory amusements.
Because of its keen olfactory sense, the ferret explores with its nose and can be observed searching back and forth across a room with its nose to the ground. When it finds an object of interest, the ferret will often drag it off to its "lair," a practice also known as "ferreting it away" (Figure - ). This ferret burrow is usually the most inaccessible location the ferret can find, such as a small cubby or a hole discovered under the kitchen cabinets or in the back of a closet. The ferret is instinctively seeking out a tight, dark, enclosed space, which mimics its native ancestor's underground burrow.

Figure: Hiding objects. When it finds an object of interest, the ferret will often drag it off to its "lair," also known as "ferreting it away." This burrow is usually the most inaccessible location the ferret can find, such as a small cubby or a hole under the kitchen cabinets or in the back of a closet. (Courtesy Peter Fisher.)

It is amazing to see the variety of objects the ferret has "stolen and stashed." This hoarding behavior probably stems, once again, from behaviors seen in polecats, which have a very high metabolic rate and energy need; for them, having a readily available food supply is a must. Instead of toys and objects of interest, polecats build a cache of leftover food items and prey on which to feed while resting in their burrows.

Studies suggest that environmental impoverishment, whether in the form of physical or social restriction or limitation of play objects, has wide-ranging effects on the overall well-being of ferrets. Chivers and Einon found that some of the isolation-induced effects on behavior seen in rats also occurred in ferrets, with deprivation of rough-and-tumble social play causing hyperactivity that persisted into adulthood. Work done by Korhonen ( ) showed that overall health, reflected by optimum weight and fur coat quality, occurred when ferrets were provided with increased housing floor space and compatible cagemates and were offered balls and bite cups with which to play. Although adult ferrets may appear perfectly content sleeping in their hammocks hours a day, this certainly is not mentally and physically stimulating. Unsupervised free time in a "ferret-proof" room is always recommended. Keep in mind that ferrets love human interaction, like to explore new places and objects, have a keen olfactory sense, and enjoy digging. A ferret that jumps back and forth in front of you and nips at your feet is telling you it wants to play. Simply getting down on your hands and knees and chasing a ferret will stimulate more ferret dancing and happy vocalizations, chuckling, or dooking. If the ferret is not prone to biting, try playing tug-of-war games with an old washcloth or a favorite plush toy (Figure - ). If the ferret is friendly toward other ferrets, try taking it to a fellow ferret fancier's ferret-proofed home for exposure to a whole new environment, complete with sights, smells, and ferret friends. Transmissible diseases, especially the gastrointestinal infection likely caused by a coronavirus and commonly referred to as epizootic catarrhal enteritis (ECE), should be considered when allowing initial contact between ferrets. Digging can be encouraged by hiding toys in a children's sandbox or in a litter box (Figure - ). Remember, however, never to leave ferrets unsupervised outdoors, as they tend to wander and may get lost. They are also relatively intolerant of extreme heat or cold.
inactive ferrets are prone to weight gain and its subsequent effects on overall health. constant captivity in an enclosed space may also lead to behavioral problems such as biting and conspecific aggression. it is the ferret owner's responsibility to ensure that this active, energetic pet's mental, physical, and sensory well-being is routinely stimulated so that it may lead a full and robust life. not all activities require human interaction, nor do they require a big monetary investment; many just require a little time, creativity, and imagination. box - describes some activities created by ferret owners-suggestions for inexpensively creating a fun and stimulating ferret household environment:

• use food as treats, with the following caveats. keep in mind that ferrets are strict carnivores with high protein requirements. they use fats more so than carbohydrates for energy needs. however, excessive high-fat treats will result in the ferret's caloric needs being met with minimal food intake, and as a result protein requirements may not be met. so if treating with high-fat oils such as ferretone, remember to use them in small quantities.
  - try rubbing a little ferretone on ping-pong balls and floating them in a shallow pan of water.
  - place a few pieces of food or a desirable treat in an egg carton, tape the lid shut, and cut a small hole in the top. make the ferret work for the treat. the same idea can be used with a milk carton.
  - place a few pieces of good-quality ferret food (try a different variety from its normal everyday food) in an -ounce ( ml) plastic soft-drink bottle, leave the top off, and let the ferret roll and play with it trying to make the treats come out.
• create handmade toys and amusement centers.
  - make tunnels from pvc pipe or empty oatmeal containers with the bottoms cut out and taped end to end.
  - tape cardboard boxes together, and cut holes in various locations for exploration.
  - glue a small bell inside a plastic easter egg.
  - make a ferret maze out of a large appliance box. fill the box with scrap cardboard rolled and taped into round or triangular tubes. hide food items at various spots within the box.
• fill a box with potting soil, rice, hay, plastic balls, or crumpled paper balls, and let the ferret fulfill its instinctive digging needs.
• use old towels to give a ferret a "magic carpet ride," or just twirl the towel around and over the ferret.
• use dryer hose to satisfy instinctive tunneling behavior. some owners like to stretch the hose out, using a beanbag chair to hold one end in place.
• obtain a bottle of deer or boar scent from the hunting section of a sporting goods store, and rub a drop or two on a favorite toy.
• tie plastic or a ping-pong ball to a piece of sturdy string and hang it from the ceiling to in ( cm) above the ground.
• put empty paper grocery bags on the floor. some of the bags can be filled with crumpled paper, ping-pong balls, or food treats.

[figure - : environmental enrichment. filling a large plastic litter box with recycled newspaper pellets or rice and hiding objects for the ferret to find helps satisfy their innate digging behavior. ferrets that enjoy playing in their water bowls will also enjoy recreation in a small wading pool with added ping-pong balls coated with ferretone. (courtesy peter fisher.)]

the primary function of aggressive behavior between conspecifics is to determine and maintain rank or territory. aggressive actions are among the most prominent social activities of animals, with patterns of aggressive behavior differing from species to species. although such actions often appear antisocial, the fighting, bluffing, and threatening serve to promote survival of the species. it appears that a species' disposition to aggression is innate, but many details of the aggressive behavior are learned or perfected through experience. in most animals early social experience greatly affects subsequent aggressive behavior. true fighting behavior between domestic ferrets is similar to that described by poole in his study of european polecats-an incident during which each animal attempts to bite the back of its opponent's neck with a sustained, immobilizing hold. successful bites (i.e., those during which the opponent was unable to break free) were sometimes accompanied by shaking or dragging of the immobilized animal. when the attacked animal was able to break free, it sometimes displayed evidence of intimidation, including screaming, defensive biting, hissing, fleeing, urinating, or defecating. however, serious injury did not usually occur. staton and crowell-davis reported the results of an experimental protocol to evaluate the effects of four factors on fighting behavior between pairs of domestic ferrets: familiarity (pairings of cagemates versus strangers); time of year (pairings during winter versus spring); sex (male-male, male-female, and female-female pairings); and neutering status (intact-intact, neutered-intact, and neutered-neutered pairings). awareness of factors that might affect the potential for aggression between unfamiliar ferrets may predict the likelihood of a fight. results of the staton and crowell-davis study suggest that familiarity, sex, and neutering status are all important determinants of aggression between ferrets. sixty percent of the attempts at pairing strangers resulted in combative behavior, whereas none of the familiar cagemates fought. based on previous information on aggressive behavior in intact male ferrets and on studies of other species, it was thought that intact male ferrets would, in general, be more aggressive than neutered animals. however, the study showed that intact male ferrets were not indiscriminately aggressive and that pairs of neutered males were just as likely as pairs of intact males to fight. in addition, the study showed that females were in general not less aggressive than males, with pairings of unfamiliar neutered female ferrets likely to result in aggression. the study also showed that if unfamiliar neutered ferrets are introduced, the pairing of two males or of a male and a female results in the lowest levels of aggression. it is also interesting to note that time of year (winter versus spring) did not affect the incidence of fighting behavior, even for intact animals in which circulating hormone concentrations are likely to change with the seasons. this may be because the animals in this study were housed under artificial lighting that was not altered to mimic the increase in daylight that stimulates the breeding season in ferrets. the fact that % of unfamiliar pairings did fight illustrates the difficulties faced by pet owners attempting to introduce a new ferret into the household, or by ferret shelters in which new additions and limited space result in frequent pairings of strange ferrets.
studies with kangaroo rats, pigs, and mice have shown that olfactory exposure, visual exposure, and sharing a common substrate may all play a key role in establishing familiarity between strangers and thus reduce fighting behavior. a second part of the staton and crowell-davis study showed that this was not the case with ferrets: housing strange ferrets next to each other for weeks, where they shared visual and olfactory stimuli, did not reduce fighting when the ferrets were later introduced. however, ferret owners claim that housing a new ferret next to an existing ferret or ferrets for a period of time before introduction does help. if introducing a female to a bonded male-female pair, experienced ferret owners advocate housing the new female with the male for a few days, as they are more likely to get along. if they get along, then putting all three together inside the original cage with additional sleeping arrangements can be tried. make sure close supervision is provided during these introductory periods. in addition, ferret shelter managers have found that a -to -day introductory period works best when familiarizing a new ferret to a multiple-ferret household. a small open room with a minimum of hiding areas, and one that does not house other ferrets, works best as a neutral meeting site. the new ferret member is introduced to the most congenial ferret in the household, usually an older, easygoing male, for minutes of chaperoned meet-and-greet time. if the ferrets seem to get along, the time together is extended. once the new ferret has accepted the introductory ferret, other ferrets are slowly introduced to see if the new ferret is capable of cohabiting with the group. ferrets are individuals, and this bonding procedure does not always work; some ferrets just prefer living alone. ferrets use their mouths in many behaviors, including play, attention seeking, defense, "hunting," fear, and response to pain. watch young kits wrestle and play and you will see them bite each other's necks and drag each other around while grasping any loose skin with their mouths. mother ferrets use their mouths to pick up and move their kits if they have wandered too far and to discipline them with gentle nips. ferrets playing with a toy will usually pick it up, grab it, and drag it around with their mouths. inappropriate nipping or biting may occur when ferrets perceive people as playmates, as an attention-getting device, or when ferrets are in pain or hungry. depending on the message they are trying to convey, ferrets may give a friendly nip or may grab a human's hand or foot, bite down, hold on, and shake their heads. this is how they would respond to another ferret, whose naturally thick skin and fur would lessen the intensity of the bite; to humans, however, the bite can be both painful and alarming. in ferrets with a history of consistent biting behavior, it is ideal to try to determine the cause. this begins with collecting a behavioral history and, in problem cases, having the owners fill out a behavioral questionnaire (box - ). once the type(s) of aggression and most probable causes for the aggression have been identified, the goal is to avoid situations that elicit the biting behavior and to diminish biting behavior if it occurs. box - summarizes the various causes of ferret aggression and potential situations in which biting behavior may occur. recently purchased or adopted ferrets may be especially problematic, as they come with limited socialization, training, or handling.
these (often young) ferrets may bite as a fear response to sudden movements and noise. this is probably a reflex response that stems from the ferret's wild counterparts, which were preyed on by larger mammals and birds of prey. if frightened or startled, the ferret may show a defense reaction much like that of the frightened, submissive dog: it will arch the back and fluff out the tail and body fur (piloerection) to look larger and stronger, open the mouth in a threatening way, and hiss or screech in order to frighten the perceived attacker or alarm other ferrets in the area. if not descented, the ferret may empty or express its anal sacs, again much as a frightened dog would do. depending on the level of socialization of the ferret, this fear response may also lead to biting.

box - : behavioral questionnaire for the biting ferret
• how long have you owned your ferret?
• are there certain situations that initiate the biting behavior?
• is this your first pet ferret?
• describe your ferret's environment.
• is there a new pet in the household?
• has the amount of "free time" and exercise your ferret gets recently changed?
• have there been any lifestyle changes that may be reflected in the ferret's behavior?
• have you experienced a recent move or change in living arrangements?
• have any new ferrets been brought into the household?
• do young children routinely handle your ferret?
• what do you do when your ferret bites?
• what forms of behavior modification have you tried? what is the response?
• are there situations or objects that stimulate biting behavior?
• do you smoke?
• do you apply hand cream to which the ferret may be attracted?

it is best to try to prevent this defense response by letting the fearful ferret know your whereabouts and intentions when handling. to ensure that the ferret will not be startled, make noise outside the cage or rattle the cage door so it is aware of your presence, and talk in a soothing manner when approaching. if the ferret continues to appear fearful, give it time to adjust to your presence before handling, then use a towel to pick it up while talking to it quietly. if a ferret is possessive of a particular toy, take away the guarded object, and discourage the ferret from obtaining objects that are off limits. if the problem persists, try redirecting the ferret's attention with an alternative activity such as ball chasing, or try desensitizing the possessive ferret with repeated relaxed exposures to the object or toy, rewarding with gentle praise or a treat when the ferret is not possessive. be careful not to reinforce this behavior by offering another, possibly more acceptable toy while the ferret is nipping. with fear-related or maternal aggression, avoid circumstances that might elicit aggression. it is important not to startle or grab fearful ferrets, especially when they are sleeping, and to respect a nursing jill's privacy and innate protective behavior. ferrets are rarely fearful once they are awake and aware of your presence; however, if they continue to be cautious and nippy, try to replace the fear response with a counteraction such as anticipation of play or food. extra care should be taken with deaf ferrets, which may startle more readily. congenital deafness occurs in ferrets, and anecdotal reports indicate an increased incidence of deafness in albino and/or black-eyed ferrets.
if a ferret stalks and nips at young children, it may be best to change the ferret's free time to a time when the child is napping or away from home. owners should be made aware that other household pets (e.g., birds, rabbits, or rodents) may be perceived as prey, and unsupervised contact time with such a pet should be discouraged. it is also a good idea to put the pet dog or cat in another room during the ferret's free time until the behavioral interaction of these pets is known.

box - : causes of ferret aggression
• play aggression-the most common underlying cause of biting in ferrets; a normal behavior, especially in young ferrets, that needs to be mitigated.
• possessive aggression-aggressive behavior directed at humans or other pets that approach the ferret when it is in possession of something it values, usually a favorite toy. may be exacerbated by restricting the ferret's free time and space.
• fear-related aggression-occurs when a ferret is startled or is poorly socialized and not used to handling. this type of aggression can occur as a result of punishment, traumatic experiences, or genetic factors.
• predatory aggression-a normal innate behavior of the polecat from which the ferret was domesticated. when directed toward people or other pets it results in a behavioral problem that may involve stalking, chasing, grasping, and biting.
• redirected aggression-occurs when the harmful behavior is directed toward a person or pet that is not the original stimulus for the aggressive behavior, such as when a person or pet interferes with two ferrets that are playing hard.
• maternal aggression-aggressive behavior directed toward humans or other pets that approach a jill with her kits.
• pain-induced or irritable aggression-caused by an underlying medical condition:
  - gastrointestinal foreign body, hairball, gastric ulcers
  - inflammatory bowel disease
  - hormonal changes associated with adrenal disease
  - any painful disease process
• sexual aggression-may explain conspecific ferret aggression in which mating behavior may be accompanied by intense biting.

overly exuberant play behavior, or play aggression, is the most common situation in which ferret frolicking can lead to biting. gentle nips are normal and natural to ferrets, which often bite at other ferrets to encourage play. it is therefore not uncommon for ferrets to nip their owners gently in order to gain their attention. play biting in ferrets is similar to the same misbehavior in puppies; box - outlines ideas for controlling this behavior problem. ferret play can escalate to the point where its frenzied commotion borders on aggressive behavior. one ferret may arch its neck and back and shove itself sidelong into the other in a very characteristic way. this fake challenge is an example of subdued or "domesticated" aggressive behavior turned to play. nose poking, ramming another ferret with the mouth open, and defensive threats in which the ferret stands very erect with back arched and tail possibly brushed up are other examples of playful behaviors that originate in polecat aggressive behavior. the magnitude of the actions and vocalizations differentiates play from aggressive behavior. young kits at play are particularly mouthy, as seen in their continual biting, mouthing, tugging, and dragging of each other. these behaviors are believed to reflect innate dominance behavior and learning.

box - : preventing and managing ferret play biting behavior
• avoid aggressive play (pushing, rolling, wrestling).
• avoid tug-of-war games.
• keep fingers curled when playing with an easily excited ferret.
• use time-outs- to minutes in a small room or pet carrier (do not use the ferret's cage for a time-out) with no toys, towels, or anything else with which the ferret could play.
• when the ferret bites, make a high-pitched sound (yip, ouch). this mimics the sound another ferret makes when play behavior gets out of hand.
• when the ferret starts biting during play, redirect it to appropriate toys such as hard rubber balls or plush toys.
• try a gentle scruffing, and wiggle the ferret in the air while making a hissing sound (this is how a mother ferret disciplines a kit), but be mindful that the ferret may get even more excited.
• display gentle but firm dominance over the ferret by holding it on its back for a few minutes.
• wrap the ferret snugly in a towel or baby blanket so that it cannot get out and bite. walk around while gently cuddling and talking to the ferret, petting it when it is calm. offer a food treat when the ferret remains calm.

similar to other mammals, a mother ferret will tug and pull at her young as a means of discipline and control. if kits are sold to pet retailers at a young age (some kits are placed in pet shops as early as weeks of age), mother-kit socialization patterns may be disrupted. this lack of maternal nurturing can lead to overly stormy play behavior, which may be perceived as aggression by new owners. keep in mind that the solitary pet ferret may perceive its human owner as its playmate, and nips, pokes, and attempted drags of arms or feet may be directed at him or her. frequent handling in a quiet, subdued environment, along with behavior modification in the form of positive reinforcement, time-outs, and counterconditioning, will go a long way toward properly socializing these belligerent kits. remember never to incorporate physical punishment into the behavioral modification routine; this may cause a frightened or excited ferret to become even more frenzied, resulting in more intense and perhaps vicious biting. ferrets may also be sensitive to certain odors. they may react to sweet-smelling hand lotion or soaps by licking or nipping the wearer, and some ferrets love the smell or taste of nicotine and may react by biting a smoker's fingers. finally, an adult ferret that suddenly becomes more aggressive and nippy should be assessed for underlying health issues such as pain or hormonal imbalances associated with adrenal disease. most wild mustelidae are considered nocturnal, but wild polecats have been observed hunting during the day. sleeping habits probably reflect habitat, territorial competition, and availability of food. under laboratory conditions, ferrets spend over % of the time sleeping, with approximately % of total sleep time in rapid eye movement (rem) sleep. this large amount of rem sleep is achieved by having a high number of rem sleep episodes rather than longer rem periods. domestic ferrets show diurnal activity in captivity and normally sleep from to hours in a typical day, with varying sleeping patterns. as a general rule, older ferrets demonstrate shorter, more frequent periods of activity and spend a greater part of their day sleeping. younger ferrets, on the other hand, tend to display longer periods of activity interspersed with sleep. regardless of age, the duration and timing of active wakefulness reflect the owner's schedule and how often the ferrets are given the opportunity to interact.
most ferrets are ready to explore and play at any time; the duration of these activities is a reflection of age. domestic ferrets often sleep very soundly, during which time respirations and heart rate decrease. the depth of sleep is so profound that many ferret owners mistake this deep phase of sleep for severe illness or death. ferrets, especially older ferrets, can take several moments to awaken from sleep even with vigilant attempts at arousal. ferret owners need to be aware of this and be patient in awakening their ferret with gentle prodding and soothing vocalizations. if deep sleep behavior becomes increasingly pronounced in duration and depth, clinical evaluation for illness, particularly hypoglycemia associated with insulinoma, is warranted. ferrets sleep in a variety of positions, and bonded pairs or groups will pile on one another to sleep (figure - ). ferrets may sleep curled up like dogs, on their backs with all four legs sprawled out, or even hanging upside down halfway out of their hammocks. quiet respirations are usually audible, and periodic soft whimpering sounds may be heard from the sleeping ferret. yawning is a normal behavior of most ferrets and is usually not of clinical concern. ferrets just waking up from a nap will begin their wakefulness with a stretch and a yawn. it is interesting to note that scruffing the ferret (restraining the ferret by holding the skin at the nape of the neck) often elicits a yawning reflex; this may facilitate a brief oral examination. the action of scruffing causes a relaxation response and is used as a form of restraint when necessary to calm an excited ferret. the relaxation elicited by scruffing is usually consistent and is similar to the method used by the jill when disciplining young kits or moving them from one location to another. learning to understand the behavior of animals is a very important aspect of diagnosis in veterinary medicine. the nonemergency physical examination begins with hands-off observation of the animal for any sensory signals that give an impression of overall health status. a few minutes of observing the ferret for behavioral signs of health or illness can disclose valuable information about overall patient well-being. a healthy ferret is alert and curious about its surroundings, demonstrating attentive and exploratory behaviors, and has bright, clear eyes and a smooth, shiny hair coat. before initiating the physical examination in any ferret, healthy or sick, it is also important to observe for temperament: behavioral signs of friendliness, fear, or potential aggression. a ferret that leans forward with interest, using its sense of smell to explore an outstretched hand, is usually a normal, friendly, inquisitive ferret. poorly socialized ferrets may show signs of fear such as backing up with the ears laid back and flat, and some may even give a vocal hiss. the aggressive ferret may signal its displeasure by trying to get away and/or hissing vocally, but it will not give other warning signs-no snarl or showing of the teeth; it will bite without warning. this tends to be a fairly intense bite that breaks the skin, and the ferret may hold on despite attempts by the holder to break free. during quiet observation the clinician can take the patient history, listening to the client for behavioral clues that might aid in making a differential diagnosis.
if the ferret is just lying on the examination table with a dull look in its eyes, the clinician immediately gets the impression that this ferret does not feel well. pawing at the mouth or salivating may be a sign of nausea, potentially caused by gastrointestinal discomfort secondary to gastric obstruction or gastric ulcers, or by hypoglycemia secondary to insulinoma. if clients state that their normally passive and friendly ferret has become intermittently aggressive toward cagemates, adrenal disease or pain should be ruled out. if the owner reports lethargy and difficulty in arousing the ferret from sleep, then hypoglycemia resulting from insulinoma should be considered. if the owner notices that the ferret has been standing in a hunched position with an arched back and wiggling its ears while grinding its teeth, then abdominal pain with secondary bruxism should be ruled out. pollakiuria and stranguria are abnormal urinary behaviors and common signs of cystitis or prostatitis, both of which may occur secondary to adrenal disease. male ferrets with urethral calculi, severe prostatitis, or periprostatic cysts associated with adrenal disease may become obstructed and unable to urinate. these ferrets will usually display repeated attempts to urinate, with urgency demonstrated by intense arching of the back, straining, and evidence of abdominal pain with or without vocalization. observe the respirations to see if the ferret is showing any signs of increased respiratory rate or distress. scratching may indicate external parasites or underlying adrenal disease. the ferret's normal physical changes in response to seasonal changes and photoperiod, such as weight gain in the winter and weight loss in late spring, as well as seasonal shedding patterns, need to be taken into consideration if the client is concerned about weight loss or a sudden onset of increased shedding in an otherwise healthy ferret. an understanding of ferret behaviors, both normal and abnormal, therefore serves as a great aid in assessing ferret health. hormonal abnormalities, including elevations in plasma estrogens and androgens, can occur secondary to adrenal disease. these imbalances may lead to increased sexual behavior even in neutered and spayed ferrets. these behaviors include neck gripping, mounting, and pelvic thrusting, which may be interpreted by owners of pet ferrets as an aggressive behavioral change for which they will seek veterinary counseling (figure - ). as a result, any healthy ferret that is presented because of a recent onset of conspecific aggressive or sexual behavior should be assessed for adrenal disease. other signs of adrenal disease include bilaterally symmetric alopecia (usually beginning on the tail and then extending up over the dorsum), pruritus, vulvar swelling in ovariohysterectomized female ferrets, and prostatic enlargement and cysts in neutered male ferrets (figure - ). another less commonly reported behavioral change is the increased mothering behavior of jills associated with increased circulating levels of progesterone secondary to adrenal disease; for example, we have seen a jill with adrenal disease that showed nesting behavior by taking favorite stuffed animals and mothering them. the underlying cause(s) of these behavioral manifestations may be clarified with a review of ferret adrenal disease physiology.
in the intact ferret, gonadal estradiol or testosterone exerts negative feedback on the hypothalamus and pituitary gland, thereby preventing excessive secretion of gnrh, lh, and fsh. it has been shown that the lack of negative gonadal hormonal feedback on hypothalamic gnrh in neutered ferrets results in persistently elevated gonadotropic lh, which may induce nonneoplastic and neoplastic adrenocortical enlargement. the ensuing hyperadrenocorticism may result in increases in plasma levels of one or more of the following sex steroids: estradiol, androstenedione, 17-alpha-hydroxyprogesterone, and dehydroepiandrosterone sulfate (dheas), which result in physical and behavioral changes dominated by features consistent with excessive production of these sex hormones.

[figure - : mounting behavior. ferret hyperadrenocorticism results in increases in plasma levels of one or more sex steroids. this hormonal imbalance may lead to increased sexual behavior including neck gripping, mounting, and pelvic thrusting, which may be interpreted by owners of pet ferrets as an aggressive behavioral change for which they will seek veterinary counseling. (courtesy peter fisher.)]

[figure - : pruritus. pruritus can be a behavioral sign of external parasites or adrenal disease; some ferrets with adrenal disease may manifest itching behavior as the only outward sign of this common endocrine disease. (courtesy peter fisher.)]

a diagnosis of adrenal disease based on history and clinical signs can be more definitively confirmed with ultrasonography of the adrenal glands and measurement of plasma levels of androstenedione, 17-alpha-hydroxyprogesterone, and estradiol (clinical endocrinology service, university of tennessee). insulin-secreting pancreatic islet cell tumors are among the most common neoplastic diseases affecting ferrets. synonyms include functional islet cell tumor, pancreatic β-cell tumor, pancreatic endocrine tumor, and insulinoma. the disease affects both male and female ferrets between the ages of and years but is most commonly diagnosed in ferrets to years of age. on histopathologic examination beta cell carcinoma is most often found, sometimes in combination with beta cell adenoma or hyperplasia. continuous hyperinsulinemia sustains the metabolic effect of insulin; therefore hepatic gluconeogenesis and glycogenolysis are inhibited, and peripheral uptake of glucose by tissue cells is increased. as the disease progresses, hypoglycemia ensues. insulinoma is another endocrine disease in which a history of certain behavioral changes will help the clinician narrow the differential diagnosis. the rate of development, magnitude, and duration of hypoglycemia are factors determining the severity of clinical signs. many ferrets are presented with a history of behavioral changes including intermittent weakness and lethargy, a decrease in play and exploratory behavior, and an increase in the length and depth of sleep. ferret owners may report that the pet is no longer animated and seems dull and confused. signs usually progress slowly over a period of weeks to months; many owners are slow to pick up on the changes in the pet's behavior or attribute the quiet, less-responsive behavior to old age. pawing at the mouth, teeth grinding, and hypersalivation-results of hypoglycemia-induced nausea-are other behavioral signs that may be associated with insulinoma. left untreated, hypoglycemia may result in seizures, coma, and death.
the definitive diagnosis of insulinoma depends on the histopathologic examination of pancreatic tissue. however, in most ferrets a diagnosis of insulinoma is made before surgery, by demonstration of hypoglycemia in association with history and clinical signs. other causes of hypoglycemia should be ruled out, including anorexia or starvation, severe gastrointestinal disease, sepsis, neoplasia, and hepatic disease. pain assessment in ferrets is often more difficult than in dogs and cats because, in general, veterinarians are less familiar with the normal behavior of ferrets. changes in behavior associated with pain can be subtle, but careful observation of the undisturbed ferret will allow the clinician to pick up on the various indicators of pain. an uncomfortable ferret will be reluctant to curl into its normal, relaxed sleeping position, may have a tucked appearance to the abdomen and a strained facial expression, and may have increased frequency and depth of respirations. the gait may be stiff, with the head elevated and extended forward. most ferrets in pain are lethargic and anorexic. a painful abdomen is a common sequela to ferret gastrointestinal diseases including gastric ulcers, gastrointestinal foreign bodies or trichobezoars, ece, and helicobacter infections. owners often report that the ferret is hunched up with an arched back, immobile, or walking with a stilted gait and is grinding its teeth-all common signs of abdominal pain. a less astute owner may not recognize spasmodic teeth-grinding behavior manifested in a ferret that holds its head down, rhythmically moves its facial muscles back and forth, and wriggles its ears in response to painful stimuli. postoperative and traumatic pain are usually manifested as a reluctance to move and a facial expression demonstrating dull, half-open, noninquisitive eyes, which are overall expressions of tension. it is remarkable to watch the change in behavioral attitude and facial relaxation once pain medication is administered. if possible, analgesics should be provided before a painful stimulus occurs. administering preemptive analgesia as part of the preanesthetic protocol, or administering analgesics intraoperatively before discontinuing general anesthesia, diminishes the wind-up effect of pain and decreases the postoperative pain caused by neuropathic and inflammatory mechanisms. each patient must be evaluated individually when analgesic protocols are chosen, with the frequency, duration, and type of analgesic based on clinical judgment, hematologic and biochemical values, and patient response. a return to normal attentive behavior, curling up under a towel to sleep, and a good appetite are all behavioral signs that postoperative analgesia is adequate.

references
• imprinting on prey odours in ferrets (mustela putorius f. furo l.) and its neural correlates
• instinctive predatory behavior of the ferret (putorius putorius furo) modified by chlordiazepoxide hydrochloride (librium)
• ferret nutrition
• feed consumption and food passage in mink (mustela vison) and european ferret (mustela putorius furo)
• available at: www.practical-pet-care
• behavior of mustela putorius furo (the domestic ferret)
• recognizing pain in exotic mammals
• evidence implicating aromatization of testosterone in the regulation of male ferret sexual behavior
• effects of early social experience on activity and object investigation in the ferret
• scent marking behavior of the ferret
• an olfactory recognition system in the ferret mustela furo l. (carnivora: mustelidae)
• the physiological analysis of aggressive behavior
• hybridization of the phylogenetic relationship between polecats and domestic ferrets in britain
• wheel-running during anoestrus and oestrus in the ferret
• exploratory behaviour of the black-footed ferret
• etkin w: theories of socialization and communication
• analgesia of small mammals
• growth, reproduction and breeding
• animal behaviour: a synthesis of ethology and comparative psychology
• domestic animal behavior for veterinarians and animal scientists
• foraging cost and meal patterns in ferrets
• nares occlusion eliminates heterosexual partner selection without disrupting coitus in ferrets of both sexes
• the effects of environmental enrichment in ferrets
• social play or the development of social behavior in ferrets (mustela putorius)?
• mustela putorius furo: a carnivore with extremely high proportion of rem sleep
• mechanisms of animal behavior
• seeing is believing: ferrets' eyes and vision
• assessing spatial activity in captive ferrets, mustela furo l. (carnivora: mustelidae), nz
• failure of fertilization following abbreviated copulation in the ferret (mustela putorius furo)
• late onset of hearing in the ferret
• dermatologic diseases
• the aggressive behavior of individual male polecats (mustela putorius, m. furo and hybrids) towards familiar and unfamiliar opponents
• some behavioural differences between the european polecat, mustela putorius, the ferret, m. furo, and their hybrids
• endocrine diseases
• the denning behavior of feral ferrets (mustela furo) in a pastoral habitat
• ferrets for dummies
• hyperadrenocorticism in ferrets
• a tao full of detours, the behavior of the domestic ferret
• factors associated with aggression between pairs of domestic ferrets
• effects of neonatal castration and testosterone on sexual partner preference in the ferret
• sexual differentiation of play behavior in the ferret
• a history of the ferret
• effects of experience and cage enrichment on predatory skills of black-footed ferrets (mustela nigripes)
• physiology of the ferret

key: cord- - idcbl authors: fennell, peter g.; melnik, sergey; gleeson, james p. title: limitations of discrete-time approaches to continuous-time contagion dynamics date: - - journal: phys rev e doi: . /physreve. . sha: doc_id: cord_uid: idcbl

continuous-time markov process models of contagions are widely studied, not least because of their utility in predicting the evolution of real-world contagions and in formulating control measures. it is often the case, however, that discrete-time approaches are employed to analyze such models or to simulate them numerically. in such cases, time is discretized into uniform steps and transition rates between states are replaced by transition probabilities. in this paper, we illustrate potential limitations to this approach. we show how discretizing time leads to a restriction on the values of the model parameters that can accurately be studied. we examine numerical simulation schemes employed in the literature, showing how synchronous-type updating schemes can bias discrete-time formalisms when compared against continuous-time formalisms. event-based simulations, such as the gillespie algorithm, are proposed as optimal simulation schemes both in terms of replicating the continuous-time process and computational speed. finally, we show how discretizing time can affect the value of the epidemic threshold for large values of the infection rate and the recovery rate, even if the ratio between the former and the latter is small.
a feature of our environment is the existence of networks, from real-life human contact networks, to virtual networks such as online social networks, to functional and technological networks such as transport networks and the internet [ ]. networks form a medium for contagions, which spread from node to node through the links of the networks. contagions can be physical [ , ], cultural [ , ], societal [ - ], or financial [ - ], and the modeling of such contagions [ - ]-along with an understanding of the suitability of various modeling approaches [ - ]-is vital for matters of the utmost public importance [ - ]. a common modeling paradigm for studying contagions is the framework of continuous-time markov processes [ - ], where events (such as the infection of a susceptible individual by an infected individual) occur at certain rates. the most well known of these models are epidemiological compartment models [ ], which, although introduced as models of disease spread [ ], are also widely used as models of social contagions such as the diffusion of information and innovations [ - ]. continuous-time markov process models can provide valuable insights into contagion processes, and have real value in both predicting and controlling contagious outbreaks [ - ]. one avenue to study continuous-time markov process models is by using discrete-time approximations [ - ]. such approaches can be either numerical (i.e., synchronous updating monte carlo simulations) or theoretical. in a discrete-time approach, time is discretized into time steps of length Δt (which usually takes the value Δt = 1), and events occur with certain probabilities. these probabilities are known as the state transition probabilities, and are simply the product of the corresponding rate and the time step Δt. although discrete-time approaches correspond to their continuous-time counterpart in the limit Δt → 0, they can differ significantly in the case that Δt is finite. allen, in her work [ ], shows that discrete-time susceptible-infected-susceptible (sis) and susceptible-infected-recovered (sir) models can produce complex behavior such as period doubling and chaotic effects for sufficiently large values of the time step and/or contact rate. this behavior is not possible in the continuous-time sis and sir models, and is thus no more than an artifact of discretizing time. similarly, gomez et al. [ ] observe that differences between continuous and discrete-time sis dynamics are substantial when an arbitrary time step of Δt = 1 is employed. an understanding of the discrepancies introduced as a result of discretizing time is thus important, allowing us to gauge the validity of discrete-time approaches and when they may accurately be employed. in this paper, we show the limitations of discrete-time approaches when used to study continuous-time contagion dynamics. our message is clear: the accuracy of such methods will be poor if state transition probabilities are too large, leading to deviations from the underlying continuous-time process. the repercussions of this are manifold. discrete-time theoretical approaches can be significantly inaccurate for large values of the contagion parameters (such as infection and recovery rates), and thus the analysis of such approaches will not be valid.
furthermore, discrete-time monte carlo simulations-often used as a gold standard [ , , ]-can be inaccurate for large parameter values, and such inaccurate simulations can lead to misleading conclusions. we illustrate this latter point with an example from the literature in sec. iv. our work highlights the consequences of erroneous approaches to studying continuous-time contagion dynamics, which has important implications not only for the academic study of these dynamics [ - ] but also for the implementation of such dynamics within large-scale simulators for real contagions [ , ]. to begin, we describe in some detail both continuous and discrete-time markov processes to illustrate mathematically the difference between the two. in continuous-time markov processes, events are described by rates $\lambda$, while events in the discrete-time analog are described by transition probabilities $\tilde\lambda$, where $\tilde\lambda = \lambda\,\Delta t$. in the course of our analysis we focus on the specific example of sis dynamics; however, our analysis holds for any continuous-time markovian dynamics, the core message being the limitations on the size of the transition probabilities $\tilde\lambda$ for which discrete-time approaches are accurate.

consider sis dynamics taking place on a network of $N$ nodes. this is a continuous-time markov process where at any time $t$ each node $i$ in the network has a corresponding state $X^i_t$, which is either susceptible ($X^i_t = S$) or infected ($X^i_t = I$) [ , , ]. the states of each node in the network change dynamically over time. susceptible nodes become infected through each of their infected neighbours at a rate $\beta$ per infected neighbor, while infected nodes recover at a rate $\mu$. "rate" here refers to instantaneous transition rates, which in continuous-time dynamics define the transitions between states; these are defined in terms of probabilities as

$$\mu = \lim_{\Delta t \to 0} \frac{P\!\left(X^i_{t+\Delta t} = S \mid X^i_t = I\right)}{\Delta t}, \qquad (1)$$

$$\beta = \lim_{\Delta t \to 0} \frac{P\!\left(X^i_{t+\Delta t} = I \text{ via } j \mid X^i_t = S,\ X^j_t = I\right)}{\Delta t}, \qquad (2)$$

where $\{X^i_{t+\Delta t} = I \text{ via } j\}$ is the event that a susceptible node $i$ became infected through an infected neighbor $j$. the fraction terms on the right-hand sides of eqs. (1) and (2) are the probabilities of state changes per unit time, and taking the limit of these fractions as $\Delta t \to 0$ leads to the concept of transition rates. in general, we can define $r_i$ as the rate at which a node $i$ changes from its current state to the opposite state; this is given by

$$r_i = \begin{cases} \beta\, m_{i,t} & \text{if } X^i_t = S, \\ \mu & \text{if } X^i_t = I, \end{cases} \qquad (3)$$

where $m_{i,t}$ is the number of infected neighbors of node $i$ at time $t$. the evolution of the dynamics in the network can be fully described by the master equation for the markov process [ , ]. if we denote by $Y_t = \{X^i_t\}_{i=1}^{N}$ the state of the network at time $t$, and by $P(y,t)$ the probability that the network is in state $Y_t = y$, then the master equation is given by

$$\frac{d}{dt} P(y,t) = \sum_{y' \neq y} \left[\, R_{y' \to y}\, P(y',t) - R_{y \to y'}\, P(y,t) \,\right], \qquad (4)$$

with initial conditions $P(y,0) = P_0(y)$. here $R_{y \to y'}$ is the instantaneous rate at which the network changes from state $y$ to $y'$ and is fully determined by the network structure and the transition rates $\mu$ and $\beta$. while the master equation is the gold standard-exactly describing the evolution of sis dynamics-the dimension of its sample space is $2^N$, which in general is prohibitively large for analytical or numerical studies. one way to tackle this problem is to study the dynamics as a series of individual transitions between states. in continuous-time dynamics nodes change state one at a time, or asynchronously [ ]. given the state of the network, the probability distributions governing both the length of time until the next state change and the node which will change state can be constructed.
these are given by the following lemmas (rigorous derivations can be found in the literature, e.g., ref. [ ]):

lemma 1. let $\tau$ be the holding time of the network, the length of time that the network remains in its current state before changing to the next state. then $\tau$ is an exponentially distributed random variable, and the parameter of the distribution is the sum of the individual node transition rates, i.e., $\sum_{i=1}^{N} r_i$.

lemma 2. the probability that the next node in the network to change state will be node $i$ is $r_i / \sum_{j=1}^{N} r_j$.

lemmas 1 and 2 describe how the network probabilistically evolves from one state to another. they are the basis of continuous-time stochastic simulation methods such as the well-known gillespie algorithm, also known as the stochastic simulation algorithm or kinetic monte carlo [ - ]. such simulations are often referred to as event-based simulations because the time intervals are not fixed but rather correspond to the time between consecutive state changes in the system. at each step in such algorithms, time advances by an amount $\tau$ and node $i$ changes its state, where $\tau$ and $i$ are random numbers drawn according to lemmas 1 and 2 (fig. 1). stochastic simulations give the opportunity to construct $P(y,t)$ empirically by running multiple realizations of the stochastic process and aggregating over the ensemble. such simulations are statistically exact, as they are fully based on lemmas 1 and 2, which are derived without approximation from the axioms of the markov process.

in a discrete-time framework, time is no longer treated as a continuous variable but rather takes the form of a discrete variable, which advances in time intervals of length $\Delta t$. instantaneous transition rates are replaced by transition probabilities. in a single time interval, susceptible nodes become infected through their infected neighbors with probability $\tilde\beta = \beta\,\Delta t$ per infected neighbor, while infected nodes recover with probability $\tilde\mu = \mu\,\Delta t$. note that $\Delta t$ is often assumed to take the value $\Delta t = 1$, but even in this case it should be included in the expressions for $\tilde\beta$ and $\tilde\mu$ to clarify that a rate needs to be multiplied by a time step before it can be expressed as a probability. the discretization of time in this manner leads to two deviations from the continuous-time process. these deviations arise both through the transition probabilities, which are used in place of transition rates, and through the parallel (synchronous) state changes in discrete-time systems that are uncharacteristic of continuous-time dynamics. to understand the roots of the deviations introduced through the transition probabilities, we can examine the definitions of $\mu$ and $\beta$ as rates given in eqs. (1) and (2). these equations can be rearranged to give transition probabilities in terms of these rates, i.e.,

$$\tilde\mu = \mu\,\Delta t, \qquad (5)$$

$$\tilde\beta = \beta\,\Delta t, \qquad (6)$$

where in this case $\Delta t$ is an infinitesimally small length of time. in the case that $\Delta t$ is not infinitesimally small, eqs. (5) and (6) become approximations.

[fig. 1: schematics of both (a) the gillespie algorithm and (b) the synchronous updating scheme. vertical ticks on the t axis indicate the moments through which the simulation advances; in synchronous updating the interval between these moments is a fixed time step of value $\Delta t$, while in the gillespie algorithm the interval is a random variable $\tau$ given by lemma 1. the light green and dark red circles are nodes in the susceptible and infected states, respectively. a square around a node means that the node has been chosen for updating at that moment and may change its state. in the gillespie algorithm a node is chosen according to lemma 2 and always changes its state, while in the synchronous updating scheme every node has the chance to change state and does so with a probability that depends on its own state and the states of its nearest neighbors.]
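to make lemmas 1 and 2 concrete, the following is a minimal c++ sketch of a single event-based (gillespie) update of the kind shown in fig. 1(a). it is an illustration rather than the authors' published code: the state representation, the toy network in main, and the brute-force recomputation of all node rates at every step are our own simplifying assumptions (efficient implementations update only the rates affected by the last event).

```cpp
// Minimal sketch of one Gillespie (event-based) SIS update.
#include <vector>
#include <random>

enum class State { S, I };

// Draws the holding time tau (lemma 1) and the reacting node (lemma 2),
// then flips that node's state. Returns false when the total rate is zero
// (the all-susceptible absorbing state has been reached).
bool gillespieStep(std::vector<State>& x,
                   const std::vector<std::vector<int>>& adj,
                   double beta, double mu, double& t, std::mt19937& rng) {
    const int N = static_cast<int>(x.size());
    std::vector<double> r(N);
    double rTotal = 0.0;
    for (int i = 0; i < N; ++i) {
        if (x[i] == State::I) {
            r[i] = mu;                                  // recovery rate
        } else {
            int m = 0;                                  // infected neighbours m_{i,t}
            for (int j : adj[i]) if (x[j] == State::I) ++m;
            r[i] = beta * m;                            // infection rate beta * m_{i,t}
        }
        rTotal += r[i];
    }
    if (rTotal == 0.0) return false;

    // Lemma 1: holding time is exponential with parameter sum_i r_i.
    std::exponential_distribution<double> expDist(rTotal);
    t += expDist(rng);

    // Lemma 2: node i is chosen with probability r_i / sum_j r_j.
    std::uniform_real_distribution<double> uni(0.0, rTotal);
    double u = uni(rng);
    int i = 0;
    for (double cum = r[0]; cum < u && i + 1 < N; cum += r[++i]) {}
    x[i] = (x[i] == State::I) ? State::S : State::I;
    return true;
}

int main() {
    // toy example: 3-node path graph, middle node initially infected
    std::vector<std::vector<int>> adj = {{1}, {0, 2}, {1}};
    std::vector<State> x = {State::S, State::I, State::S};
    std::mt19937 rng(7);
    double t = 0.0;
    while (t < 5.0 && gillespieStep(x, adj, 1.0, 1.0, t, rng)) {}
    return 0;
}
```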
in a time interval of length $\Delta t$ in the continuous-time markov process, the exact probability that an infected node will recover is $1 - e^{-\mu \Delta t}$, while the probability that a susceptible node will become infected by a given infected neighbor is $1 - e^{-\beta \Delta t}$. the transition probabilities $\tilde\mu$ and $\tilde\beta$ of eqs. (5) and (6) are approximations to $1 - e^{-\mu \Delta t}$ and $1 - e^{-\beta \Delta t}$, respectively, and an important question then arises of the effect these approximations have on the dynamics. figure 2 shows the actual probability $1 - e^{-\lambda \Delta t}$ along with the discrete-time probability $\tilde\lambda = \lambda\,\Delta t$, where we use the parameter $\lambda$ to represent either $\mu$ or $\beta$. we also plot the error $\epsilon$, defined as the difference between the discrete-time probability and the actual probability. when $\tilde\lambda < 0.1$, $\epsilon < 0.005$, and so the approximation is fairly accurate in this range. for larger values of the state transition probability $\tilde\lambda$, however, the approximation differs significantly from the true values: at $\tilde\lambda = 0.5$, $\epsilon \approx 0.11$, and when $\tilde\lambda = 1$, $\epsilon \approx 0.37$. these individual errors can accumulate and have significant implications for the dynamics as a whole; indeed, we show empirically in secs. iii and iv that although discrete-time approaches can be very accurate when $\tilde\mu$ and $\tilde\beta$ are very small, they begin to lose accuracy when $\tilde\mu$ and $\tilde\beta$ are of the order of magnitude of $10^{-1}$.

[fig. 2: the actual probability (blue solid line) that a rate-$\lambda$ event will occur in a time step of length $\Delta t$, plotted along with the approximate probability $\tilde\lambda$ (black dash-dotted line) as used in discrete-time formalisms. the error $\epsilon$ is defined as the absolute distance between the two.]

second, we comment on the synchronous updating nature of discrete-time approaches. this is in contrast to the continuous-time process, where nodes change state asynchronously and the change of state of one node immediately affects the transition rates of the other nodes (fig. 1). the strength of this effect will depend on the transition probabilities, as the values $\tilde\mu$ and $\tilde\beta$ dictate the number of state changes that take place in each time step and thus the propensity of multiple nodes to change state at the same time. thus, we arrive at a simple conclusion: the values of $\tilde\mu$ and $\tilde\beta$ (and thus $\mu$, $\beta$, and $\Delta t$) used in discrete-time approaches should be controlled so that these approaches are accurate representations of the continuous-time process. for large values of $\mu$ or $\beta$, the time step $\Delta t$ should be small, while if $\Delta t = 1$, as in the case of the majority of discrete-time approaches, the values of $\mu$ and $\beta$ should be relatively small. throughout the rest of this paper we give empirical evidence for this conclusion. finally, we comment on the discrete-time numerical simulation schemes that are used to stochastically simulate sis dynamics. a commonly used simulation scheme is synchronous updating, also referred to as rejection sampling (fig. 1) [ , , , ]. in this case, time advances in steps of one time unit, i.e., $\Delta t = 1$. in a single time unit, a susceptible node becomes infected by its infected neighbors with probability $\tilde\beta$ per infected neighbor, while infected nodes become susceptible with probability $\tilde\mu$.
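for comparison, here is a sketch of one synchronous-updating (rejection-sampling) step with $\Delta t = 1$, as just described. again this is an assumed minimal implementation, not the paper's code; note that all updates are computed from the states at the start of the step, which is precisely what produces the simultaneous state changes discussed above.

```cpp
// Minimal sketch of one synchronous-updating step (Delta t = 1).
#include <vector>
#include <random>

enum class State { S, I };

void synchronousStep(std::vector<State>& x,
                     const std::vector<std::vector<int>>& adj,
                     double betaTilde, double muTilde, std::mt19937& rng) {
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    std::vector<State> xNext = x;        // all updates read the old states only
    for (std::size_t i = 0; i < x.size(); ++i) {
        if (x[i] == State::I) {
            if (uni(rng) < muTilde) xNext[i] = State::S;   // recover w.p. mu~
        } else {
            // independent infection attempt through each infected neighbour
            for (int j : adj[i]) {
                if (x[j] == State::I && uni(rng) < betaTilde) {
                    xNext[i] = State::I;
                    break;
                }
            }
        }
    }
    x = std::move(xNext);
}

int main() {
    // toy example mirroring the Gillespie sketch above
    std::vector<std::vector<int>> adj = {{1}, {0, 2}, {1}};
    std::vector<State> x = {State::S, State::I, State::S};
    std::mt19937 rng(7);
    for (int step = 0; step < 5; ++step)
        synchronousStep(x, adj, 0.1, 0.1, rng);
    return 0;
}
```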
synchronous updating simulations are statistically exact realizations of the discrete-time dynamics; these dynamics are fully described by the discrete-time master equation

$$P(y, t + \Delta t) = \sum_{y'} Q_{y' \to y}\, P(y', t), \qquad (7)$$

where $Q_{y' \to y}$ is the probability that the network changes from state $y'$ to state $y$ in a time step of length $\Delta t = 1$ and is fully determined by the network structure and the transition probabilities $\tilde\mu$ and $\tilde\beta$ [ ]. because synchronous updating simulations exactly mimic the discrete-time dynamics and master equation, they will be used throughout this paper to gauge the accuracy of the discrete-time approach. in the remainder of the paper, we show how the approximations introduced in discrete-time approaches can lead to misrepresentation of the actual continuous-time dynamics. we begin in the next section by examining the discrete-time approximations of eqs. (5) and (6) for fixed $\mu$ and $\beta$ and various values of $\Delta t$. we show that discrete-time dynamics can accurately reproduce continuous-time dynamics for small values of $\Delta t$, but that they incur a breakdown in accuracy as $\Delta t$ increases. further to this, we show in sec. iv that when the time step is fixed to the value $\Delta t = 1$, as in much of the literature, discrete-time approaches break down in accuracy when the transition rates ($\mu$ and $\beta$) are too large. this limits the range of parameters that can be studied with discrete-time approaches. we illustrate this with an example from the literature, also showing how synchronous updating simulation schemes can favour discrete-time formalisms, leading to biased conclusions when comparing against continuous-time theories. finally, in sec. v we show that overly large values of $\tilde\beta$ and $\tilde\mu$ can affect the value of the epidemic threshold, even if the effective transition rate, defined as $\gamma = \beta/\mu = \tilde\beta/\tilde\mu$, is small.

in this section, we analyze the discrete-time approximations introduced in sec. ii b as a function of the size of the discrete time step $\Delta t$. we do this by carrying out synchronous updating simulations for various values of $\Delta t$ and comparing them against exact results obtained from the master equation. numerical simulations are carried out in c++ and the code is available online [ ]. as our example, we consider sis dynamics on a complete graph of $N$ nodes, i.e., a graph where every pair of nodes is connected. on such a graph, the sis dynamics are defined by the rate functions

$$r_i = \begin{cases} \beta\, Z_t & \text{if } X^i_t = S, \\ \mu & \text{if } X^i_t = I, \end{cases} \qquad (8)$$

where $Z_t$ is the number of infected nodes at time $t$, and $\beta$ and $\mu$ are the infection rate and recovery rate, respectively, consistent with eq. (3) for the complete graph. we choose the complete graph because on such a graph the master equation given in eq. (4) can be reduced from a system of $2^N$ equations for $P(y,t)$ to a system of $N+1$ equations for $p(n,t)$, the probability that there are $n$ infected nodes in the graph at time $t$ [ ]. this reduced system is given by

$$\frac{d}{dt}\, p(n,t) = \beta (n-1)(N-n+1)\, p(n-1,t) + \mu (n+1)\, p(n+1,t) - \left[\beta n (N-n) + \mu n\right] p(n,t), \qquad (9)$$

for $0 \leqslant n \leqslant N$, with initial conditions $p(n,0) = p_0(n)$. for small values of $N$, this system can easily be solved using standard differential-equation solvers, giving us a gold standard against which to compare the discrete-time simulations. we also perform gillespie algorithm simulations to illustrate the accuracy and speed of such simulations and thus their efficacy in simulating continuous-time dynamics. we present the results for sis dynamics with $\beta = 1$, $\mu = 1$ running on a complete graph of $N$ nodes in fig. 3. we plot the solution of eq. (9) as well as the numerical results given by the gillespie algorithm and synchronous updating schemes with different time steps $\Delta t$.
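as an indication of how the reduced system (9) can be integrated numerically, here is a self-contained forward-euler sketch. the solver step dt, the network size, and the initial condition are illustrative assumptions; the paper states only that standard differential-equation solvers are used, and any standard integrator would do.

```cpp
// Sketch: forward-Euler integration of the reduced (N+1)-state master
// equation (9) for SIS on a complete graph. The solver step dt is chosen
// small and is unrelated to the Delta t of the discrete-time contagion
// dynamics being assessed.
#include <vector>
#include <cstdio>

int main() {
    const int N = 100;                  // network size (example value)
    const double beta = 1.0, mu = 1.0;  // rates used in this section
    const double dt = 1e-4, tEnd = 10.0;

    std::vector<double> p(N + 1, 0.0), dp(N + 1);
    p[N] = 1.0;                         // example initial condition: all infected

    for (double t = 0.0; t < tEnd; t += dt) {
        for (int n = 0; n <= N; ++n) {
            double in = 0.0;            // probability flowing into state n
            if (n > 0) in += beta * (n - 1) * (N - n + 1) * p[n - 1];
            if (n < N) in += mu * (n + 1) * p[n + 1];
            double out = (beta * n * (N - n) + mu * n) * p[n];
            dp[n] = in - out;           // note: sum over n of dp is zero,
        }                               // so normalization is preserved
        for (int n = 0; n <= N; ++n) p[n] += dt * dp[n];
    }
    double eta = 0.0;                   // expected infected fraction at tEnd
    for (int n = 0; n <= N; ++n) eta += n * p[n];
    std::printf("eta(tEnd) = %f\n", eta / N);
    return 0;
}
```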
for the numerical simulations, we performed an ensemble of realizations and obtained the corresponding $p(n,t)$ by taking the fraction of realizations in which there are $n$ infected nodes at time $t$. for the synchronous updating simulations, we consider several time steps up to $\Delta t = 1$. from fig. 3, it is clear that these values of $\Delta t$, with $\mu = \beta = 1$, give a comprehensive range on which to judge the accuracy of the discrete-time approach, noting that for $\Delta t = 1$ and these parameter values the system is deterministic (the transition probabilities $\tilde\mu$ and $\tilde\beta$ are then exactly 1). we consider the sis process at an observation time $t^*$ by which the expected fraction of infected nodes $\eta_t = \frac{1}{N}\sum_{n=0}^{N} n\, p(n,t)$ has reached a metastable state (fig. 3) [ ]. at $t^*$ we empirically construct $p(n,t^*)$ from the synchronous updating simulations and compare it to $p(n,t^*)$ calculated from the master equation (9). the histogram of fig. 3(b) shows this comparison. from this histogram, it is clear that while the discrete-time simulations are quite accurate for small $\Delta t$, this accuracy can fully break down when $\Delta t$ is too large. the accuracy of the probability distribution in the metastable state depends highly on the value of the time step used to reach the metastable state. in the synchronous updating simulations with $\Delta t = 1$ the results are highly inaccurate, with all of the probability concentrated on $n = 0$, i.e., $p(n,t^*) = \delta_{n,0}$. even the smallest time step considered, while fairly accurate, shows discrepancies in both the probability distribution $p(n,t^*)$ and the expected fraction of infected nodes $\eta_t$ [fig. 3(a)]. considering that for $\mu = \beta = 1$ and such a small time step the error between $\tilde\mu$ ($\tilde\beta$) and $1 - e^{-\mu \Delta t}$ ($1 - e^{-\beta \Delta t}$) is very small (fig. 2), we conclude that these discrepancies are due to the simultaneous state changes in synchronous updating, which are uncharacteristic of the continuous-time process. in the histogram of fig. 3(c) we compare $p(n,t^*)$ constructed empirically from the gillespie algorithm to $p(n,t^*)$ calculated from the master equation. the gillespie algorithm is extremely accurate and matches the exact $p(n,t)$ to a high degree of precision. furthermore, this algorithm is computationally rapid. we performed a short comparison of the simulation algorithms in terms of speed, showing in table i the run times for the full ensemble of realizations for the gillespie algorithm and for synchronous updating with various values of $\Delta t$. for the smallest time step-corresponding to the simulations that most closely match the accuracy of the gillespie simulations-the gillespie algorithm is an order of magnitude faster. this computational speed, along with the natural precision of the algorithm, makes the gillespie algorithm an optimal algorithm for simulating continuous-time dynamics. to summarize, the accuracy of discrete-time approximations to continuous-time dynamics depends highly on the size of the discrete time step $\Delta t$ at which the system evolves. this has extremely important implications for real-world predictive models of epidemic spread that are discrete-time based [ , ], as overly large time steps can affect the prediction of both the expected evolution of a contagion and the distribution of possible outcomes (fig. 3). in the next section, we fix the time step at $\Delta t = 1$ and show how the accuracy breaks down when the infection and recovery rates are too large, showing that discrete-time formalisms using this approach are limited in the range of rate parameters that they can study and thus in their ability to match continuous-time dynamics.
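the empirical construction of $p(n,t^*)$ described above can be sketched as follows for the complete graph, where the gillespie procedure simplifies because the total rates depend only on the number of infected nodes $n$. the number of realizations, the initial condition, and the observation time below are placeholders, not the values used for fig. 3.

```cpp
// Sketch: empirical construction of p(n, t*) on the complete graph by
// aggregating Gillespie realizations. On the complete graph the dynamics
// depend only on the infected count n, so a single counter suffices.
#include <vector>
#include <random>
#include <cmath>
#include <cstdio>

int main() {
    const int N = 100, runs = 1000;     // placeholder ensemble size
    const double beta = 1.0, mu = 1.0, tStar = 10.0;
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    std::vector<long> counts(N + 1, 0);

    for (int r = 0; r < runs; ++r) {
        int n = N;                      // placeholder initial condition
        double t = 0.0;
        while (n > 0) {                 // n = 0 is absorbing
            double up = beta * n * (N - n);   // rate n -> n + 1
            double down = mu * n;             // rate n -> n - 1
            double total = up + down;
            t += -std::log(1.0 - uni(rng)) / total;  // exponential holding time
            if (t > tStar) break;       // record the state held at t*
            n += (uni(rng) * total < up) ? 1 : -1;
        }
        ++counts[n];
    }
    for (int n = 0; n <= N; ++n)
        std::printf("p(%d, t*) ~= %f\n", n, double(counts[n]) / runs);
    return 0;
}
```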
as mentioned in sec. ii b, synchronous updating has the same characteristics as discrete-time systems, which are characterized by transition probabilities and difference equations of the form

p(y, t + Δt) = Σ_{y′∈Ω} q_{y′→y} p(y′, t),

where p(y, t + Δt) — the probability that the system is in state y at time t + Δt — is a function of the probabilities p(y′,t) for all possible states y′ in the sample space Ω. on the other hand, continuous-time systems are characterized by transition rates and differential equations of the form given by the master equation ( ). although the discrete-time formulation coincides with the continuous-time one in the limit Δt → 0, the dynamics will differ for noninfinitesimal Δt. issues then arise when comparing discrete- and continuous-time systems, and the choice of numerical scheme becomes important. we illustrate this now with an example from the literature, while also showing how the accuracy of discrete-time approaches with Δt = 1 can be insufficient for large values of the transition rates. a prominent current strand of research is the behavior of the sis model on infinite networks with power-law degree distributions [ ]. in ref. [ ], chakrabarti et al. introduced the nonlinear dynamical systems (nlds) theory, a discrete-time approach to sis modeling built on a set of mean-field difference equations that express p_{i,t+1}, the probability that node i is infected at time t + 1, in terms of the infection probabilities of its neighbours at time t, for each node i in the network. they compare their results to two continuous-time formulations, the heterogeneous mean-field (hmf) approach of pastor-satorras and vespignani [ ] and the kephart-white (kw) approach [ ]. the basis of the comparison is synchronous updating numerical simulations with a time step Δt = 1, and it is found (see for example fig. of ref. [ ]) that the nlds theory is much closer to the numerical simulations than both the hmf and kw theories. however, the comparison of discrete-time and continuous-time formulations in this manner is biased. synchronous updating with a time step Δt = 1 is the correct procedure for numerically simulating discrete-time dynamics. on the other hand, to simulate continuous-time dynamics, either synchronous updating with a vanishingly small time step or a continuous-time simulation scheme such as the gillespie algorithm should be used. to illustrate the difference resulting from the use of the different updating methods we reproduce an example from [ ]. the example is sis dynamics on an erdős-rényi network of nodes and mean degree k = . figure shows various numerical simulations of these dynamics. again, the computer code used to perform the simulations is available from ref. [ ]. included in fig. are synchronous updating simulations with a time step Δt = 1, as in ref. [ ], along with synchronous updating simulations with a small time step Δt = . and gillespie algorithm simulations. in fig. (c) of ref. [ ], where μ = . and β = . , it can be seen that the fraction η̄ of infected nodes in the metastable state given by the nlds theory matches very closely the synchronous updating numerical simulations. however, as can be seen in fig. (b) here, these synchronous updating simulations differ quite significantly from continuous-time simulations, which plateau at the metastable state with η̄ ≈ . . the kw theory, which in ref. [ ] is rejected as being inaccurate, actually converges to a value much closer to the continuous-time simulations than the nlds theory. thus, using the correct simulation technique, the conclusions in ref. [ ] should be reversed: the kw model is more accurate than the nlds model.
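a sketch of forward-iterating discrete-time mean-field equations of this kind is shown below. note that the exact nlds update is not reproduced in this excerpt, so the update rule used here is an assumption based on the standard published form of the nlds theory; the network and rate values are also placeholders.

```python
import numpy as np

def nlds_iterate(A, beta_t, mu_t, p0, n_steps):
    """forward-iterate discrete-time mean-field (NLDS-style) infection
    probabilities p_{i,t} on adjacency matrix A; the update below is the
    standard NLDS form, assumed here since the excerpt omits the equation."""
    p = np.asarray(p0, dtype=float)
    for _ in range(n_steps):
        # zeta_i: probability node i receives no infection from any neighbour
        zeta = np.prod(1.0 - beta_t * A * p[None, :], axis=1)
        # node i is healthy at t+1 if it was healthy (or recovered) and
        # escaped infection from all of its neighbours
        p = 1.0 - (1.0 - p + mu_t * p) * zeta
    return p

# toy usage on an Erdős–Rényi graph with mean degree ~4 (illustrative)
rng = np.random.default_rng(1)
n = 500
A = (rng.random((n, n)) < 4.0 / n).astype(float)
A = np.triu(A, 1); A = A + A.T
p_meta = nlds_iterate(A, beta_t=0.1, mu_t=0.2, p0=np.full(n, 0.1), n_steps=500)
print(p_meta.mean())  # infected fraction in the (approximate) metastable state
```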
for fixed Δt = 1, the accuracy of the discrete-time approach decreases as μ and β increase. in the example above, when μ is decreased from μ = . to μ = . the discrete-time simulations match the continuous-time simulations more closely [fig. (a)], while when μ is decreased further to μ = . the discrepancy between the two simulations is negligible. chakrabarti et al. state that their model "outperforms (the kw model) when μ is high." however, the opposite is the case: their discrete-time approach breaks down in accuracy (as an approximation to the continuous-time process) as μ increases. we conclude with an observation to motivate the next section. as μ is increased from μ = . [fig. (a)] to μ = . [fig. (b)], the fraction of infected nodes in the metastable state η̄ (at, for example, t = ) decreases for both the continuous-time simulations and the discrete-time simulations. however, η̄ decreases more quickly for the continuous-time simulations, and so it would seem that the critical value μ_c at which η̄ first becomes zero will differ depending on whether a discrete-time or continuous-time approach is used. this has implications for the epidemic threshold, which is the focus of the next section. a characteristic of sis dynamics is the occurrence of phase transitions as the effective transition rate γ is varied. recall that the effective transition rate is defined as the ratio of the infection rate to the recovery rate, i.e., γ = β/μ. depending on the structure of the network and whether the network is finite or infinite, the critical point, or epidemic threshold, γ_c between different phases can vary. as mentioned in sec. iv, there remain open questions about the steady-state behavior of the sis model — particularly the value of the epidemic threshold on such networks — and so a good understanding of how different approximations affect the value of the epidemic threshold is important. in this section, we show that although the epidemic threshold is defined in terms of the ratio γ = β/μ = β̃/μ̃, the individual values of the transition probabilities β̃ and μ̃ used in discrete-time approaches affect the value of the epidemic threshold when it is calculated by (i) performing discrete-time numerical simulations or (ii) iterating a discrete-time system [such as eq. ( )] from a set of initial conditions (as, for example, in ref. [ ]). note however that the epidemic threshold predicted by steady-state analysis [i.e., setting p_{t+1} = p_t in eq. ( )], such as in ref. [ ], is completely valid. we show how μ̃ and β̃ affect the value of the epidemic threshold in the following manner. for a given network, we fix the value of μ̃ and vary β̃ so that the effective transition rate γ varies between γ_min and γ_max, where γ_min and γ_max are chosen so that the epidemic threshold lies between them, i.e., γ_min ≤ γ_c ≤ γ_max. thus when μ̃ is small (large), β̃ will be small (large) so that their ratio lies in the range γ_min ≤ β̃/μ̃ ≤ γ_max. we perform standard synchronous updating simulations (with Δt = 1) and obtain the critical value γ_c as the smallest value of γ such that the fraction of infected nodes in the metastable state is nonzero. if the epidemic threshold depends only on the ratio γ = β̃/μ̃ and is independent of the individual values of μ̃ and β̃, then γ_c should be the same regardless of the value of μ̃, which is fixed. however, we find that this is not the case. we perform this experiment on an erdős-rényi network with n = nodes and mean degree k = , similar to the network used in the example of sec. iv (fig. ) but with a greater number of nodes.
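a minimal sketch of the sweep just described is shown below, assuming the usual synchronous-updating convention that a susceptible node with k infected neighbours becomes infected with probability 1 − (1 − β̃)^k in one step (the excerpt does not spell out the per-step rule); network size, run length, and parameter grids are placeholders.

```python
import numpy as np

def metastable_fraction(A, beta_t, mu_t, t_max=2000, reps=10, rng=None):
    """synchronous-updating (dt = 1) estimate of the infected fraction at the
    end of a long run, averaged over independent realizations."""
    rng = rng or np.random.default_rng(0)
    n = A.shape[0]
    fracs = []
    for _ in range(reps):
        state = rng.random(n) < 0.5                 # start half infected
        for _ in range(t_max):
            inf_neigh = A @ state                   # infected neighbours per node
            p_inf = 1.0 - (1.0 - beta_t) ** inf_neigh
            new_inf = (~state) & (rng.random(n) < p_inf)
            recov = state & (rng.random(n) < mu_t)
            state = (state | new_inf) & ~recov      # simultaneous update
        fracs.append(state.mean())
    return float(np.mean(fracs))

def empirical_threshold(A, mu_t, gammas, eps=1e-3):
    """smallest gamma = beta_t / mu_t whose metastable fraction is nonzero;
    repeating this for several fixed mu_t reproduces the experiment above."""
    for g in sorted(gammas):
        if metastable_fraction(A, beta_t=g * mu_t, mu_t=mu_t) > eps:
            return g
    return None
```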
on such a network, the epidemic threshold is predicted by steady-state analysis of both the nlds and hmf theories as γ_c = . . from fig. we see that when μ̃ is small (μ̃ = . ), the epidemic threshold predicted by synchronous updating simulations corresponds to this value γ_c = . . however, as μ̃ (and thus β̃) increases, the accuracy of the discrete-time approach breaks down and both the fraction of infected nodes in the metastable state and the epidemic threshold deviate from the true values. the epidemic threshold decreases from γ_c = . when μ̃ = . to γ_c = . when μ̃ = , even though the ratio γ = β̃/μ̃ remains in the same range. thus, in discrete-time formalisms the steady-state behavior is not fully determined by the effective transition rate γ but also depends on μ̃ and β̃. from our analysis in sec. iii (fig. ) it is clear that the metastable state reached iteratively from an initial condition depends on the single-step transition probabilities μ̃ and β̃. if these are too large, the errors introduced in the discrete-time approximation become significant, affecting the metastable state and the value of the epidemic threshold. the results of this section have important implications for discrete-time approaches. first, they show that the epidemic threshold calculated empirically using synchronous updating simulations can be incorrect if μ̃ and β̃ are too large, even if the ratio between them is small. second, they have implications for calculating the epidemic threshold from discrete-time systems of the form p_{t+1} = f(p_t) by forward-iterating the system from an initial condition [ ]. if the transition probabilities μ̃ and β̃ used in such systems are too large, then the metastable state will be affected, possibly leading to a miscalculation of the epidemic threshold. in this paper, we have provided conclusive evidence of the limitations of discrete-time approaches as approximations to continuous-time contagion processes. when the state transition probabilities are too large, such approaches become inaccurate and misrepresentative of the underlying continuous-time processes, thus compromising their utility and their applicability to prediction and analysis. our message is clear: due care needs to be taken when implementing discrete-time methods as approximations to continuous-time dynamical processes. being constructive, we have briefly discussed alternatives. for simulations of continuous-time processes on networks, event-based simulations such as the gillespie algorithm are more favorable than synchronous updating schemes, both in terms of accuracy and speed. for theoretical analysis, continuous-time analogs [ , ] of discrete-time approaches should be employed, because they are unconstrained in the range of dynamics parameter values that can be studied.
references
- proc. natl. acad. sci. usa
- advances in network analysis and its applications
- performance analysis of complex networks and systems
- dynamical processes on complex networks
- the theory of stochastic processes
- proceedings of the 2nd international symposium on reliable distributed systems
- the mathematical theory of infectious diseases and its applications
- stochastic interacting systems: contact, voter and exclusion processes
- handbook of stochastic methods, springer series in synergetics
- interacting particle systems
- we refer to the metastable state as the state when the expected fraction of infected nodes has reached a plateau
- proceedings of the ieee computer society symposium on research in security and privacy

this work has been supported by science foundation

key: cord- - fp u
title: a cost effective real-time pcr for the detection of adenovirus from viral swabs
date: - -
journal: virol j
doi: . / - x- -
sha:
doc_id:
cord_uid: fp u

compared to traditional testing strategies, nucleic acid amplification tests such as real-time pcr offer many advantages for the detection of human adenoviruses. however, commercial assays are expensive and cost prohibitive for many clinical laboratories. to overcome fiscal challenges, a cost effective strategy was developed using a combination of homogenization and heat treatment with an "in-house" real-time pcr. in swabs submitted for adenovirus detection, this crude extraction method showed performance characteristics equivalent to viral dna obtained from a commercial nucleic acid extraction. in addition, the in-house real-time pcr outperformed traditional testing strategies using virus culture, with sensitivities of % and . %, respectively. overall, the combination of homogenization and heat treatment with a sensitive in-house real-time pcr provides accurate results at a cost comparable to viral culture. human adenoviruses (hadv) are ubiquitous dna viruses that cause a wide spectrum of illness [ ]. the majority of hadvs cause mild and self-limiting respiratory tract infections, gastroenteritis or conjunctivitis; however, more severe disease can occur, such as keratoconjunctivitis, pneumonitis, and disseminated disease in the immunodeficient host [ ]. hadv is increasingly being recognized as a significant viral pathogen, particularly in immunocompromised patients, where accurate and timely diagnosis can play an integral part of management [ ]. hadv diagnosis can be achieved using virus culture, antigen-based methods (immunofluorescence, enzyme immunoassays or immunochromatography), or nucleic acid amplification tests (naats). for respiratory viruses, naats are well established as the most sensitive methods for detection and have become front-line diagnostic procedures [ , ]. most commercially available naats are highly multiplexed assays and enable simultaneous detection of several respiratory pathogens; however, their poor performance for detecting hadv emphasizes the need for single-target detection [ , , ]. adenovirus-specific naats have been challenged by the diversity of hadv species, which now include more than different types [ , ]. commercial qualitative and quantitative naats are available for the detection of all hadv species and most types, yet these are cost prohibitive for many laboratories.
"in-house" real-time pcr assays are relatively inexpensive alternatives to commercial naats that provide rapid and accurate results [ , , , [ ] [ ] [ ] [ ] [ ] [ ] . wong and collaborators [ ] developed an in-house real-time pcr assay that has been designed for the detection of all hadv species. it has been extensively validated using a variety of clinical specimens [ , ] . in addition to the pcr reaction itself, extraction of nucleic acids prior to pcr is also a substantial contributor to cost. recently, a crude mechanical lysis using silica glass beads (i.e. homogenization) and heat treatment was shown to recover herpes simplex virus dna from swabs submitted in universal transport media (utm) [ , ] . while defying the traditional paradigm of specimen processing for molecular testing, homogenization with heat treatment was shown to be a cost effective alternative to nucleic acid extraction. this study evaluated whether the combination of homogenization and heat treatment with an in-house real-time pcr would be a cost effective strategy for the detection of hadv from viral swabs transported in utm. in patients suspected of respiratory or conjunctivitis, flocked nasopharyngeal or ocular swabs, respectively, were submitted for adenovirus detection. swabs were collected by clinicians at the capital health district authority (cdha) and were submitted to the cdha microbiology laboratory (halifax, ns, canada) between april and march . the swabs were transported in ml of utm (copan diagnostics inc., murrieta, ca) and kept at °c for no more than hours prior to processing. viral cultures were performed as part of routine diagnostic testing by experienced technologists. following virus culture, specimens were transferred in aliquots into cryotubes (without any identifiable patient information) and the anonymized specimen tubes were archived at − °c for retrospective molecular analyses. twentyseven virus culture-positive specimens and virus culture-negative specimens were randomly selected and tested for the presence of hadv using a well established in-house real-time pcr assay [ ] following recovery of viral dna was recovered by homogenization with heat treatment or automated nucleic acid extraction. the world medical association (wma) declaration of helsinki is a statement of ethical principles for medical research involving human subjects, including research on identifiable human material and data. since the purpose of this clinical validation was quality improvement of the laboratory detection of adenovirus and relied exclusively on anonymous human biological materials that did not use or generate identifiable patient information, research ethics board (reb) review was not required based on chapter , article . of the tri-council policy statement: ethical conduct for research involving humans ( nd edition). viral cultures were performed as part of routine diagnostic testing by experienced technologists in the cdha microbiology laboratory (halifax, ns, canada). briefly, μl of specimen was inoculated onto cultured a cells (atcc ccl- ), incubated at °c in a % co atmosphere, and monitored daily for the presence of characteristic cytopathic effect (cpe) [ ] . if cpe was observed, cells were fixed with acetone and stained using specific fluorescein isothiocyanate (fitc)-labeled monoclonal antibodies in the d ultra dfa reagent kit (diagnostic hybrids, athens, oh). in absence of cpe, cells were fluid changed on day and incubated for an additional days. 
on day , the culture was discontinued and a terminal stain was performed. a549 cells were propagated in nutrient mixture f-12 ham with l-glutamine (sigma-aldrich canada ltd., oakville, on) supplemented with % fetal calf serum (hyclone, thermo fisher scientific, ottawa, on), μg/ml amphotericin b (sigma-aldrich), μg/ml ampicillin (novapharm ltd, toronto, on), and mg/ml vancomycin (sigma-aldrich). for quantification, -fold dilutions of hadv-c, type (strain tonsil, atcc vr- ) were inoculated onto -well plates in volumes of μl. cells were maintained as described above and after days were subjected to direct immunofluorescence (dfa) staining to determine the 50% tissue culture infective dose (tcid50). results were expressed as tcid50/ml and represent replicates obtained in four independent experiments (n = ). prior to molecular testing, viral dna was recovered from specimens using either homogenization with heat treatment, as previously described [ ], or a commercial nucleic acid extraction performed as recommended by the manufacturer. for homogenization, μl of specimen and . g of various sized acid-washed silica beads (≤ μm; - μm; - μm, at a ratio of : : ; sigma-aldrich, oakville, on) were placed on a fastprep-24 homogenizer (mp biomedicals, solon, oh) at . m/s for s. following a brief centrifugation at , × g for min, μl of the supernatant was diluted in two volumes of te buffer ( mm tris-hcl, mm edta, ph . ). the homogenate was then heated at °c for min, cooled to room temperature, and μl was subjected to adenovirus real-time pcr. automated extractions were performed on μl of specimen using a magna pure total nucleic acid isolation kit (roche diagnostics, mannheim, germany) on a roche magnapure lc instrument. the elution volume was μl. specimens with discordant results during the method comparison were subjected to a manual dna extraction using a qiaamp dna blood mini kit (qiagen, toronto, on) with a sample volume of μl. the dna was eluted in μl and concentrated -fold using a qiagen minelute pcr purification kit. plasmid dna, used for the internal control, was purified from a ml overnight culture using a qiaprep spin miniprep kit (qiagen) as recommended by the manufacturer. for molecular typing, amplicon was purified using a qiaquick gel extraction kit (qiagen) with a final elution volume of μl. all nucleic acid extractions were performed according to the manufacturers' instructions. nucleic acids were used immediately following extraction, and aliquots were placed at − °c for long-term storage.
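the excerpt does not state which endpoint estimator was used for the tcid50 titrations described above; the reed-muench method is one common choice and is sketched below for illustration, with made-up well counts. the dilution series must bracket the 50% endpoint for the interpolation to be defined.

```python
import numpy as np

def reed_muench_log10_tcid50(log10_dilutions, n_pos, n_total):
    """Reed-Muench estimate of the 50% endpoint dilution (log10 scale).
    log10_dilutions: e.g. [-3, -4, ...], ordered most to least concentrated.
    n_pos / n_total: wells showing CPE (or DFA positivity) out of wells inoculated."""
    n_pos = np.asarray(n_pos)
    n_neg = np.asarray(n_total) - n_pos
    cum_pos = np.cumsum(n_pos[::-1])[::-1]   # positives accumulate toward high titre
    cum_neg = np.cumsum(n_neg)               # negatives accumulate toward low titre
    pct = 100.0 * cum_pos / (cum_pos + cum_neg)
    above = np.where(pct >= 50.0)[0][-1]     # last dilution with >= 50% infected
    pd = (pct[above] - 50.0) / (pct[above] - pct[above + 1])
    step = log10_dilutions[above] - log10_dilutions[above + 1]
    return log10_dilutions[above] - pd * step

# illustrative: 10-fold dilutions, 6 wells each -> endpoint at 10^-4.5,
# i.e. a titre of 10^4.5 TCID50 per inoculated volume
print(reed_muench_log10_tcid50([-3, -4, -5, -6], [6, 4, 2, 0], [6, 6, 6, 6]))
```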
the construct was synthesized, assembled, and transformed into escherichia coli k by life technologies (burlington, on). the final construct was verified by dna sequencing and restriction endonuclease digestion. e. coli harboring pgfp was inoculated into luria bertani broth supplemented with μg/ml kanamycin. plasmid dna was purified from a ml overnight culture and plasmid dna was quantified by spectrophotometry. ten-fold serial dilutions were used as template for the in-house realtime pcr. an inverse linear relationship (y = − . × + . ; r = . ) was generated by plotting crossing points (cp) values against plasmid concentration (data not shown). the linear range spanned cp values ranging from to , corresponding to concentrations of to copies per μl, respectively. for each pcr reaction, approximately copies were added. real-time pcr assay was performed using the light-cycler dna master hybprobe kit (roche diagnostics) in μl reactions consisting of: μl of template, × lightcycler faststart mix, mm mgcl ; . units of heat-labile uracil-n-glycosylase [ ] ; μl the internal control at copies/μl; nm of each adenovirus primer (adv f, adv r, adv f, adv r) and nm of probe (adv pr and adv pr); and nm of each pgfp primer (fgfp and rgfp) and nm of each probe (gfppr and gfppr ) ( table ). amplification and detection were performed using the lightcyler . instrument under the thermocycling conditions described for the roche hsv- / detection kit: initial activation at °c for min, followed by amplification cycles of denaturation at °c for s, annealing at °c for s, and elongation at °c for s. following amplification, melting temperature (tm) analysis was performed by measuring the fluorescent signal during the following cycling profile: °c for s, °c for s, and °c for s with a . °c/s transition. fluorescence was acquired at the annealing stage during amplification and continuously during the melting curve. cp and tm values were determined using software provided by the manufacturer. the nm (adenovirus) and nm (pgfp) channels were analyzed for presence or absence of target. pcr inhibition was suspected by either loss of positivity in the nm channel, or a shift in cp values greater than two standard deviations (cp ≥ . ) from the value obtained with the negative control. to resolve discrepant results obtained between the inhouse pcr assay and virus culture, or quantify the adenovirus dna during evaluation of the analytical sensitivity, the adenovirus r-gene kit (argene inc., sherley, ny) was used according to the manufacturer's protocol following a manual dna extraction. this internally controlled quantitative real-time pcr assay targets the hexon gene of adenovirus, and is validated for detection table nucleotide sequences of primers and probes used in this study sequence ( ′ to ′) reference cca gga cgc ctc gga gta [ ] adv r aaa ctt gtt att cag gct gaa gta cgt [ ] adv pr fam-agt ttg ccc gcg cca cca ccg -bhq * [ ] adv f gga cag gac gct tcg gag ta [ ] adv r ctt gtt ccc cag act gaa gta ggt [ ] adv pr fam-cag ttc gcc cgy gcm aca g -bhq * [ of types to [ ] . the kit contains: a ready-to-use premix contains (primers, probe, polymerase, and buffer) needed for amplification, quantification standards (at , , , , and , copies/reaction), and a sensitivity-control at copies/reaction. results were expressed as the number of copies per reaction. 
analytical specificity, limit of detection, and reproducibility. the analytical specificity was first determined in silico by performing a basic local alignment search tool (blast) search for the primer, probe, and entire amplicon sequences using the national center for biotechnology information website (http://www.ncbi.nlm.nih.gov). in addition, high-titer nucleic acids were extracted from a panel of microorganisms chosen based on their ability to cause similar diseases or their potential for being found in the clinical specimen as a pathogen or normal flora (figure and table ). the analytical sensitivity (or limit of detection, lod) of homogenization with heat treatment or nucleic acid extraction, each in combination with the real-time pcr, was determined using -fold serial dilutions (in utm) of a cultured hadv-c type . each dilution was simultaneously processed by both extraction methods, and an aliquot was immediately inoculated onto a549 cells for virus culture. the lod was defined by probit analysis [ ] using triplicate values obtained in four independent experiments by two different operators (n = ). each virus dilution was expressed as tcid50/ml in the original sample. the virus dilutions were also quantified using a commercial real-time pcr and expressed as target copies/reaction for each assay. intra- and inter-assay reproducibility were calculated for each dilution and expressed as % coefficients of variation (%cv). the performance of each method was compared to a modified gold standard to determine sensitivity, specificity, accuracy and precision. a case was defined by concordant results (positive or negative) between at least two assays. to resolve discrepant results obtained between the in-house real-time pcr assay and virus culture, dna was extracted manually and subjected to commercial real-time pcr. the virus culture-positive specimens were subjected to pcr targeting the conserved segments surrounding the hypervariable region (hvr ) of the hexon gene, and amplicons were sequenced using terminator chemistry on an applied biosystems dna sequencer. type designation was undertaken by blast analysis and confirmed by comparison to a database generated from sequences obtained from genbank [ ]. sequence analysis and multiple sequence alignments (clustalw analysis) were performed using the seqman and megalign components of lasergene software (dnastar, madison, wi). the phylogenetic tree was inferred using a neighbor-joining (nj) method with bootstrapping analysis for n = [figure: phylogenetic tree; clades are shaded to depict species a to f]. chi-square and two-tailed fisher's exact tests were used to compare proportions in -by- contingency tables. confidence intervals ( %) for the estimated parameters were computed by a general method based on "constant chi-square boundaries" [ ]. agreement between assays was measured using kappa statistics. the statistical package for the social sciences (spss) software v. was used, and p ≤ . was used to denote statistical significance. blast searches of the primer and probe sequences targeting the adenovirus hexon gene and the internal control revealed that these were highly specific targets. in fact, no cross-reactions were observed with high-titer nucleic acids extracted from other respiratory viruses or bacteria (table ). the in-house real-time assay was able to detect serogroups a to f, including a variety of genetically diverse types: , , , , , , , , , , and (figure and table ).
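the probit-based lod estimation described in the methods can be sketched as follows; the replicate hit/miss data here are invented for illustration, and the 95% detection level is an assumed target, since the excerpt's probability digits are lost.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

# replicate hit/miss data per log10 concentration (illustrative, not the study's data)
log10_conc = np.repeat([2.0, 1.5, 1.0, 0.5, 0.0], 12)   # 12 replicates per level
hits = np.concatenate([np.ones(12), np.ones(12),
                       np.r_[np.ones(10), np.zeros(2)],
                       np.r_[np.ones(5), np.zeros(7)],
                       np.zeros(12)])

# probit regression: P(detect) = Phi(b0 + b1 * log10(concentration))
model = sm.Probit(hits, sm.add_constant(log10_conc)).fit(disp=0)
b0, b1 = model.params
lod95_log10 = (norm.ppf(0.95) - b0) / b1   # level detected with 95% probability
print(10 ** lod95_log10, "copies/reaction (illustrative)")
```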
as seen in figure , the performance of the in-house pcr following the homogenization- or nucleic acid extraction-based protocols was equivalent. for each method, overlapping linear relationships were observed (y = − . x + . ; r = . , compared to y = − . x + . ; r = . , respectively) that spanned eight orders of magnitude, with cp values ranging from to (figure a). the intra- and inter-assay reproducibility of the real-time pcr following homogenization and heat treatment ranged from . to . %, and . to . %, respectively. similarly, the intra- and inter-assay reproducibility following the nucleic acid extraction protocol ranged from . to . % and . to . %. as expected, the highest %cv values observed for both methods were with virus dilutions near the lod. for hadv-c type , the lod for virus culture was . tcid50/ml. the in-house real-time pcr was reproducibly positive following nucleic acid extraction or homogenization at viral stock dilutions corresponding to . tcid50/ml ( / and / , respectively), and positive pcr reactions were frequently observed using virus dilutions of . tcid50/ml ( / and / , respectively). virus stock dilutions were quantified using the commercial real-time pcr assay, and the lods for the homogenization- or nucleic acid extraction-based protocols were shown to be approximately equivalent (figure ). with a probability of %, the lods for the homogenization- and nucleic acid extraction-based protocols were copies/reaction (log = . ) and copies/reaction (log = . ), respectively (figure b). dilutions corresponding to the lod for virus culture were also quantified by real-time pcr and estimated at approximately copies/reaction (figure b). of the clinical specimens, concordant negative and concordant positive results were obtained when comparing virus culture to the in-house pcr following either of the two extraction methods (figure a and table ). real-time pcr generated additional positive results that were later resolved as true positives using a manual dna extraction and a commercial real-time pcr (figure a). all pcr-positive culture-negative results were detected following the homogenization protocol, whereas were detected following nucleic acid extraction (figure a). the single discordant result between the molecular assays had a cp value of . , suggesting that it may be attributed to sampling error (poisson distribution) at low concentrations of template [ ]. since the internal control also failed to amplify in this sample, the negative result could also be attributed to pcr inhibition. upon repeat processing by automated and manual nucleic acid extractions, positive results were obtained. therefore, the original specimen result was considered a false negative. overall, compared to the modified gold standard, the sensitivity of the in-house real-time pcr following homogenization with heat treatment or nucleic acid extraction was approximately equivalent at % ( . - %) and . % ( . - . %), respectively (table ). in contrast, the sensitivity of virus culture was only . % ( . - . %) (table ). the accuracy of each method was % ( . - %), . % ( . - . %), and . % ( . - . %), respectively (table ). all assays showed a high degree of specificity and precision (table ). when comparing cp values for the positive results obtained with the real-time pcr following both extraction methods, a linear relationship was observed (y = . x + . ; r = . ) (figure b).
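the performance measures reported above follow directly from the 2-by-2 counts against the modified gold standard; a small sketch with placeholder counts (not the study's actual cell counts):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """sensitivity, specificity and accuracy against a (modified) gold standard."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    acc = (tp + tn) / (tp + fp + fn + tn)
    return sens, spec, acc

def cohens_kappa(pp, pn, np_, nn):
    """chance-corrected agreement between two assays from a 2x2 table,
    where pp = both positive, pn = assay A positive / assay B negative, etc."""
    n = pp + pn + np_ + nn
    p_obs = (pp + nn) / n
    p_exp = ((pp + pn) * (pp + np_) + (np_ + nn) * (pn + nn)) / n**2
    return (p_obs - p_exp) / (1 - p_exp)

# illustrative counts only
print(diagnostic_metrics(tp=30, fp=0, fn=1, tn=50))
print(cohens_kappa(30, 1, 1, 50))
```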
cp values for homogenization with heat treatment were consistently higher than those obtained using the nucleic acid extraction; however, no significant differences in sensitivity (analytical or clinical) were observed (figure and table ). as expected, virus culture-positive specimens had positive pcr results with low cp values, whereas the virus culture-negative specimens had pcr-positive results with cp values greater than (figure b). dna extracted from the real-time pcr-positive specimens was subjected to a conventional pcr targeting the conserved segments surrounding the hvr of the hexon gene [ ]. successful sequences were obtained from dna extracted from the specimens that were both virus culture- and real-time pcr-positive. a type could be assigned using multiple sequence alignment against sequences derived from genbank, as previously described [ ]. individual blast analyses yielded similar results. three serogroups were observed: b (types , , , and ), c (types and ), and d (types , , and ). the predominant types observed were: ( . %), ( . %), ( . %), and ( . %). the conventional pcr was unable to amplify the target sequences from dna extracted from the virus culture-negative/real-time pcr-positive specimens. the cp values for these specimens ranged from to , suggesting that only low quantities of virus were present (figure ). dna sequencing was also used to distinguish the prototypic hadv type p (strain de wit) from the newly emerged type p . adenovirus type p has been associated with severe disease in europe and north america [ ]. while the hexon hvr sequences obtained in this study share % identity with hadv type p , only two mutations (g a and g a) separate type p from p in this region. [figure: analytical sensitivity of the in-house real-time pcr. prior to amplification, -fold serial dilutions of hadv-c type were processed by homogenization and heat treatment (open circles, solid line), or nucleic acid extraction (filled squares, dashed line). in both cases, equivalent results were obtained with respect to: a) the linear range; and b) the lod determined by probit analysis (n = ). at a probability of %, the lods for the homogenization- and nucleic acid extraction-based protocols were copies/reaction (log = . ) and copies/reaction (log = . ), respectively. the same dilutions used to inoculate virus culture for dfa staining (open triangles, dotted line) were also quantified and demonstrated a lod of approximately copies/ml (log = . ).] to further characterize the virus, the fiber knob gene was sequenced with primer pair f mut and r mut (table ), using the reaction conditions, thermocycling parameters, and dna sequencing described for the molecular typing. compared to wild-type p, the fiber knob gene of hadv type p displays a -bp deletion (referred to as the k -e deletion) [ , , ]. the adenovirus type from this study harbored the characteristic -bp deletion, consistent with hadv type p (figure ). an exogenous internal control was used in this study; it is non-competitive (it contains a primer pair that does not target adenovirus). the addition of the internal control primers and probes to the in-house pcr reaction did not affect the analytical sensitivity of the assay (data not shown). since the internal control was added at the level of the pcr, both extraction methods could be directly evaluated for the presence of pcr inhibitors.
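the inhibition-flagging rule used with this internal control (loss of internal-control amplification, or an upward cp shift beyond roughly two standard deviations of the negative-control value) can be encoded in a few lines; the cutoff digits below are placeholders, and treating only upward shifts as suspicious is an assumption.

```python
def flag_inhibition(ic_cp, ic_cp_negctrl, sd_cutoff):
    """flag a reaction as possibly inhibited when the internal control fails
    to amplify (ic_cp is None) or its Cp is delayed beyond the cutoff
    (roughly two standard deviations of the negative-control IC Cp)."""
    if ic_cp is None:
        return True
    return (ic_cp - ic_cp_negctrl) > sd_cutoff

# illustrative values: the study's exact Cp cutoff digits are not reproduced here
print(flag_inhibition(ic_cp=36.2, ic_cp_negctrl=33.0, sd_cutoff=2.5))  # True
```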
despite the subsequent heat treatment and dilution step, homogenization is a crude method to recover viral dna and may not be sufficient to remove or inactivate pcr inhibitors. amplification of the internal control in adenovirus-negative specimens is consistent with a true negative result and not simply attributable to pcr inhibition. pcr inhibition was suspected by either loss of positivity in the nm channel, or a shift in cp values greater than two standard deviations (corresponding to approximately ± . cp) from the value obtained with the negative control. this value was established previously, where the internal control cp values from consecutive hsv-negative specimens were compared following homogenization and heat treatment or nucleic acid extraction [ ]. this cutoff value remains valid for the internal control used in this study. since the in-house pcr was performed as a duplex with an internal control added at the level of the pcr, the clinical specimens processed following homogenization and heat treatment or nucleic acid extraction could be monitored directly for the presence of potential pcr inhibitors. potential inhibitory substances were observed in two distinct cases: the first was a specimen that had been processed by homogenization with heat treatment, and the second, a specimen subjected to nucleic acid extraction. in both cases, pcr inhibition was not observed upon repeat processing, suggesting either that a processing error had occurred or that the pcr inhibitor was labile [ ]. therefore, pcr inhibition could be neither proven nor excluded. as a result, the rate of possible pcr inhibition with either extraction method was equivalent at . % ( / ). at cdha (halifax, ns, canada), the average number of specimens submitted yearly for adenovirus testing is (range to for years to ), and the turnaround time for virus culture can be up to days. a cost analysis was performed that assumed a more practical approach of bi-weekly molecular testing ( - specimens with positive, negative and reagent controls). excluding labor, the average cost of a commercial pcr following nucleic acid extraction would range from $ to $ (cad) per specimen. in comparison, the in-house real-time pcr following a nucleic acid extraction would reduce the cost approximately -fold ($ . to $ . ). replacement of the nucleic acid extraction with the homogenization-based protocol further reduces the cost approximately -fold ($ . to $ . ), which is comparable to the average cost of virus culture ($ . to . ). the time required for bi-weekly processing with either molecular method is approximately h/week, which is far lower than the time required for weekly maintenance and processing of specimens using cell culture and dfa staining. naats like real-time pcr have revolutionized the detection of human pathogens in clinical microbiology laboratories. rapid specimen throughput and excellent performance characteristics make them an appealing alternative to traditional culture methods; however, cost limits their use in many clinical laboratories. both the recovery of nucleic acids by extraction and the pcr reaction itself contribute to the cost. we have shown that combining a crude extraction method like homogenization with heat treatment [ ] and an in-house real-time pcr [ ] is a cost effective strategy for the detection of hadv from swabs submitted in utm. homogenization uses multidirectional motion to disrupt cells through contact with silica beads [ , ].
in combination with a subsequent heat treatment to inactivate heat-labile pcr inhibitors, this crude mechanical lysis has been shown to be a cost-effective method to recover viral dna from swabs transported in utm [ ]. the performance characteristics of this approach were equivalent to those of a traditional nucleic acid extraction, and both molecular methods far exceeded the performance of virus culture. replacing the nucleic acid extraction with the homogenization protocol did not affect the analytical (or clinical) sensitivity of the real-time pcr (figure and table ). using dilutions of hadv-c type , the lod for the homogenization protocol was approximately copies/reaction, which is consistent with previously reported values ( - copies/reaction) for hadv types and [ ]. this analytical sensitivity is approximately -fold better than the estimated lod for virus culture. furthermore, positive results could even be obtained at copies/reaction with a probability of . % (figure b). while no significant differences were observed between the molecular assays, both demonstrated a high level of analytical sensitivity. when comparing clinical specimens using a modified gold standard, the in-house pcr following homogenization and heat treatment or nucleic acid extraction demonstrated similar sensitivities of % and . %, respectively (table ). this far surpasses the performance of virus culture at . %. the % increase in positivity is consistent with the approximately -fold increase in analytical sensitivity and is not surprising, since similar results were observed when transitioning other viruses from culture to naats [ ]. when comparing positive results from the in-house real-time pcr, cp values obtained following the homogenization protocol were consistently higher than those obtained following nucleic acid extraction (figure b). however, the analytical and clinical sensitivities of each assay were not significantly different (figure and table ). it should be noted that all virus culture-negative/pcr-positive specimens had cp values greater than , corresponding to viral loads that fell below the lod for virus culture (figure b). the homogenization- or nucleic acid extraction-based protocols both showed excellent analytical specificity, with no cross-reactions from other organisms (table ). both methods were able to detect diverse hadv types spanning all the different species (figure and table ). of the virus culture-positive specimens, the most predominant types detected were , and , belonging to species b, d and c, respectively. these hadv types are well-recognized causes of acute respiratory tract and ocular infections, consistent with the distribution reported in other regions of canada [ , ]. interestingly, a variant of hadv type , termed p , has been described as an emerging pathogen associated with outbreaks and sporadic cases of acute respiratory disease in europe and the united states [ ]. while most recorded cases were mild infections, severe disease and deaths have occurred. hadv type p has a characteristic -bp deletion (k -e ) in the fiber knob gene [ , , ]. the adenovirus type from this study was consistent with type p and harbored these mutations (figure ). while there have been a number of reports of type p circulating in the us and europe, this variant has only once been reported in canada [ ]. the first adenovirus p cases in canada were reported from nova scotia's neighboring province, new brunswick, and included one fatality (figure ) [ ].
the specimen identified as p in this study was obtained from a fatal case dating back to the same time period as the new brunswick cases. further epidemiological investigations are underway. while severe and fatal cases associated with type p have been reported, similar outcomes have been reported with many other common hadv types [ , , , ]. the most likely culprit of disease severity is the immune status of the host, not the adenovirus type or species. it should be noted that the thermocycling conditions for the adenovirus pcr were modified to allow simultaneous processing of other real-time pcr assays (hsv and vzv) in the cdha microbiology laboratory [ ]. simultaneous processing of multiple pcr assays on the same lightcycler instrument allows more efficient batch testing when equipment availability is limited. interestingly, these modifications allowed the detection of hadv type , which had previously been problematic on an abi instrument [ ]. differences between assays can be attributed to numerous factors (e.g., instrumentation, kits); however, the most likely explanation in this case is the annealing temperature. using the original pcr protocol [ ], hadv type could only be detected when the annealing temperature was reduced from °c to °c [ ]. the annealing temperature in this study is °c. using the conditions described in this study, the detection of hadv type has now been replicated in both collaborating laboratories. a limitation of this study is that the validation of homogenization was only performed using swabs in utm. future experiments will need to examine whether homogenization can be applied to other relevant specimen types (urine, stool, blood and tissue); however, the real-time pcr following a nucleic acid extraction has been shown to be effective for this purpose [ , ]. secondly, the performance characteristics of homogenization may vary between pcr assays, and it should not be implemented without proper validation [ ]. while homogenization with heat treatment has been shown to be effective for the recovery of viral dna from hadv (this study), hsv [ ], and varicella zoster virus, decreased sensitivity was observed for enveloped rna viruses like mumps and influenza viruses ([ , ]; leblanc, j., unpublished data). homogenization and heat treatment showed performance characteristics equivalent to a commercial nucleic acid extraction for the detection of hadvs. in combination with a sensitive in-house real-time pcr, homogenization with heat treatment generated results far superior to virus culture, and at a comparable cost. modifying the thermocycling conditions to match those used by other assays in the cdha microbiology laboratory further streamlined workflow and facilitated the transition from virus culture to molecular testing. compared to virus isolation and propagation in culture, molecular testing also reduces the risk of laboratory-acquired infections [ ]. overall, homogenization with heat treatment combined with a sensitive in-house real-time pcr is a cost-effective method for the detection of hadvs.

references
- principles and practice of infectious diseases
- a community-based outbreak of severe respiratory illness caused by human adenovirus serotype
- severe pneumonia due to adenovirus serotype : a new respiratory threat?
- adenovirus serotype infection
- first reported cases of human adenovirus serotype p infection
- treatment of adenovirus infections in patients undergoing allogeneic hematopoietic stem cell transplantation
- comparison of in-house real-time quantitative pcr to the adenovirus r-gene kit for determination of adenovirus load in clinical samples
- quantification of adenovirus dna in plasma for management of infection in stem cell graft recipients
- t-cell immunotherapy for adenoviral infections of stem-cell transplant recipients
- clinical features and treatment of adenovirus infections
- high levels of adenovirus dna in serum correlate with fatal outcome of adenovirus infection in children after allogeneic stem-cell transplantation
- invasive adenoviral infections in t-cell-depleted allogeneic hematopoietic stem cell transplantation: high mortality in the era of cidofovir
- comparison of three multiplex pcr assays for the detection of respiratory viral infections: evaluation of xtag respiratory virus panel fast assay, respifinder assay and respifinder smart assay
- switching gears for an influenza pandemic: validation of a duplex reverse transcriptase pcr assay for simultaneous detection and confirmatory identification of pandemic (h1n1) influenza virus
- comparison of the filmarray respiratory panel and prodesse real-time pcr assays for detection of respiratory pathogens
- development of a respiratory virus panel test for detection of twenty human respiratory viruses by use of multiplex pcr and a fluid microbead-based assay
- detection of adenoviruses
- detection of a broad range of human adenoviruses in respiratory tract samples using a sensitive multiplex real-time pcr assay
- comparison of the luminex xtag respiratory viral panel with in-house nucleic acid amplification tests for diagnosis of respiratory virus infections
- members of the adenovirus research community: toward an integrated human adenovirus designation system that utilizes molecular and serological data and serves both clinical and fundamental virology
- real-time qualitative pcr for human adenovirus types from multiple specimen sources
- development of a pcr-based assay for detection, quantification, and genotyping of human adenoviruses
- molecular detection and quantitative analysis of the entire spectrum of human adenoviruses by a two-reaction real-time pcr assay
- multiplexed, real-time pcr for quantitative detection of human adenovirus
- pring-akerblom p: rapid and quantitative detection of human adenovirus dna by real-time pcr
- evaluation of type-specific real-time pcr assays using the lightcycler and j.b.a.i.d.s. for detection of adenoviruses in species hadv-c
- homogenization with heat treatment: a cost effective alternative to nucleic acid extraction for herpes simplex virus real-time pcr from viral swabs
- a reliable and inexpensive method of nucleic acid extraction for the pcr-based detection of diverse plant pathogens
- presumptive identification of common adenovirus serotypes by the development of differential cytopathic effects in the human lung carcinoma (a549) cell culture
- uracil-dna glycosylase (ung) influences the melting curve profiles of herpes simplex virus (hsv) hybridization probes
- probit analysis
- comprehensive detection and serotyping of human adenoviruses by pcr and sequencing
- logistic regression
- quantitation of targets for pcr by use of limiting dilution
- genome sequences of human adenovirus isolates from mild respiratory cases and a fatal pneumonia, isolated during - epidemics in north america
- genome sequence of the first human adenovirus type isolated in china
- inhibition and facilitation of nucleic acid amplification
- a comparison of cell culture versus real-time pcr for the detection of hsv-1/2 from routine clinical specimens
- adenovirus polymerase chain reaction assay for rapid diagnosis of conjunctivitis
- comparison of a commercial qualitative real-time rt-pcr kit with direct immunofluorescence assay (dfa) and cell culture for detection of influenza a and b in children
- efficacy of pcr and other diagnostic methods for the detection of respiratory adenoviral infections
- epidemiology of severe pediatric adenovirus lower respiratory tract infections in manitoba, canada
- characterization of culture-positive adenovirus serotypes from respiratory specimens
- genome type analysis of adenovirus types and isolated during successive outbreaks of lower respiratory tract infections in children
- detection of mumps virus rna by real-time one-step reverse transcriptase pcr using the lightcycler platform
- viral agents of human disease: biosafety concerns

we would like to thank members of the division of microbiology, department of pathology and laboratory medicine at cdha (halifax, nova scotia) for their ongoing support and for funding this project. in particular, we are indebted to wanda brewer for the propagation and maintenance of a549 cells, and to the various technologists responsible for routine virus culture. the authors declare that they have no competing interests.

authors' contributions: jl conceived the study. jl, th and rt participated in its design and coordination. ta, kb, and jl carried out the molecular testing. mw quantified the adenovirus stocks and established tcid50 values. ta and kb performed statistical analyses. jl analyzed the dna sequencing results. rt, sw and kp were involved in the phylogenetic analyses and typing of the adenoviruses as well as preparing the specificity panels. all authors were involved in the preparation of the manuscript. all authors have read and approved the final manuscript.

key: cord- -i puqauk
title: deep multivariate time series embedding clustering via attentive-gated autoencoder
date: - -
journal: advances in knowledge discovery and data mining
doi: . / - - - - _
sha:
doc_id:
cord_uid: i puqauk

nowadays, great quantities of data are produced by a large and diverse family of sensors (e.g., remote sensors, biochemical sensors, wearable devices), which typically measure multiple variables over time, resulting in data streams that can be profitably organized as multivariate time-series. in practical scenarios, the speed at which such information is collected often makes the data labeling task difficult and too expensive, thus limiting the use of supervised approaches. for this reason, unsupervised and exploratory methods represent a fundamental tool for the analysis of multivariate time series. in this paper we propose a deep-learning based framework for clustering multivariate time series data with varying lengths. our framework, namely detsec (deep time series embedding clustering), includes two stages: first, a recurrent autoencoder exploits attention and gating mechanisms to produce a preliminary embedding representation; then, a clustering refinement stage is introduced to stretch the embedding manifold towards the corresponding clusters. experimental assessment on six real-world benchmarks from different domains highlights the effectiveness of our proposal. nowadays, huge amounts of data are produced by a large and diverse family of sensors (e.g., remote sensors, biochemical sensors, wearable devices). modern sensors typically measure multiple variables over time, resulting in streams of data that can be profitably organized as multivariate time-series. while a major part of the recent literature about multivariate time-series focuses on tasks such as forecasting [ , , ] and classification [ , ], the study of multivariate time-series clustering has often been neglected. the development of effective unsupervised clustering techniques is crucial in practical scenarios, where labeling enough data to deploy a supervised process may be too expensive in terms of both time and money. moreover, clustering makes it possible to discover characteristics of multivariate time series data that go beyond a priori knowledge of a specific domain, serving as a tool to support subsequent exploration and analysis processes. while several methods exist for the clustering of univariate time series [ ], the clustering of multivariate time series remains a challenging task. early approaches were generally based on adaptations of standard clustering techniques to such data, e.g., density-based methods [ ], methods based on independent component analysis [ ] and fuzzy approaches [ , ]. recently, hallac et al. [ ] proposed a method, namely ticc (toeplitz inverse covariance-based clustering), that segments multivariate time series and, successively, clusters the subsequences through a markov random fields based approach. the algorithm leverages an em-like strategy, based on alternating minimization, that iteratively clusters the data and then updates the cluster parameters. unfortunately, this method does not produce a clustering solution over the original time series but rather a data partition in which the unit of analysis is the subsequence. as regards deep learning based clustering, such methods have recently become popular in the context of image and relational data [ , ], but their potential has not yet been fully exploited in the context of the unsupervised analysis of time series data. tzirakis et al.
[ ] recently proposed a segmentation/clustering framework based on agglomerative clustering which works on video data (time series of rgb images). the approach first extracts a clustering assignment via hierarchical clustering, then performs temporal segmentation and, finally, extracts a representation via a convolutional neural network (cnn). the clustering assignment is used as pseudo-label information to learn the new representation (training the cnn) and to perform video segmentation. the proposed approach is specific to rgb video segmentation/clustering and is not well suited to varying-length data. all these factors limit its use for standard multivariate time-series analysis. a method based on recurrent neural networks (rnns) has also been recently proposed in [ ]. the representation provided by the rnn is clustered using a divergence-based clustering loss function in an end-to-end manner. the loss function is designed to consider cluster separability and compactness, cluster orthogonality, and closeness of cluster memberships to a simplex corner. the approach requires training and validation data to learn parameters and to choose the hyperparameter setting, respectively. finally, the framework is evaluated on a test set, indicating that the approach is not completely unsupervised and, for this reason, not directly exploitable in our scenario. in this work, we propose a new deep-learning based framework, namely detsec (deep time series embedding clustering), to cope with multivariate time-series clustering. differently from previous approaches, our framework is general enough to deal with time-series coming from different domains, providing a partition at the time-series level as well as managing varying-length data. detsec has two stages: first, a recurrent autoencoder exploits attention and gating mechanisms to produce a preliminary embedding representation. then, a clustering refinement stage is introduced to stretch the embedding manifold towards the corresponding clusters. we provide an experimental analysis which includes a comparison with five state-of-the-art methods and an ablation analysis of the proposed framework on six real-world benchmarks from different domains. the results of this analysis highlight the effectiveness of the proposed framework as well as the added value of the newly learnt representation. the rest of the paper is structured as follows: in sect. we introduce the detsec framework, in sect. we present our experimental evaluation, and sect. concludes the work. in this section we introduce detsec (deep time series embedding clustering via attentive-gated autoencoder). let x = {x_i}_{i=1}^{n} be a multivariate time-series dataset. each x_i ∈ x is a time-series where x_ij ∈ r^d is the multidimensional vector of the time-series x_i at timestamp j, with 1 ≤ j ≤ t, d being the dimensionality of x_ij and t the maximum time-series length. we underline that x can contain time-series with different lengths. the goal of detsec is to partition x into a given number of clusters, provided as an input parameter. to this purpose, we propose to deal with the multivariate time-series clustering task by means of recurrent neural networks [ ] (rnns), in order to manage at the same time (i) the sequential information exhibited by time-series data and (ii) the multivariate (multi-dimensional) information that characterizes time-series acquired by real-world sensors.
our approach exploits a gated recurrent unit (gru) [ ] , a type of rnn, to model the time-series behavior and to encode the original time-series into a new vector embedding representation. detsec has two different stages. in the first one, the gru based autoencoder is exploited to summarize the time-series information and to produce the new vector embedding representation; this representation, obtained by forcing the network to reconstruct the original signal, integrates the temporal behavior and the multi-dimensional information. once the autoencoder network has been pretrained, the second stage of our framework refines such representation by taking into account a twofold task, i.e., the reconstruction one and another one devoted to stretching the embedding manifold towards clustering centroids. such centroids can be derived by applying any centroid-based clustering algorithm (e.g., k-means) on the new data representation. the final clustering assignment is derived by applying the k-means clustering algorithm on the embeddings produced by detsec.

figure visually depicts the encoder/decoder structure of detsec, consisting of three different components in our network architecture: i) an encoder, ii) a backward decoder and iii) a forward decoder. the encoder is composed of two gru units that process the multivariate time series: the first one (in red) processes the time-series in reverse order (backward) while the second one (in green) processes the input time-series in the original order (forward). successively, for each gru unit, an attention mechanism [ ] is applied to combine the information coming from different timestamps. attention mechanisms are widely used in automatic signal processing [ ] (signal or natural language processing) as they allow the information extracted by the rnn model at different timestamps to be merged via a convex combination of the input sources. the attention formulation we used is the following one:

$\lambda = \operatorname{softmax}\!\left(\tanh(H W_a + b_a)\, u_a\right)$

$h^{att} = \sum_{j=1}^{T} \lambda_{t_j} \odot h_{t_j}$

where $H \in \mathbb{R}^{T,l}$ is a matrix obtained by vertically stacking all feature vectors $h_{t_j} \in \mathbb{R}^{l}$ learned at the $T$ different timestamps by the gru and $l$ is the hidden state size of the gru network. the matrix $W_a \in \mathbb{R}^{l,l}$ and the vectors $b_a, u_a \in \mathbb{R}^{l}$ are parameters learned during the process. the symbol $\odot$ indicates element-wise multiplication. the purpose of this procedure is to learn a set of weights $(\lambda_{t_1}, \ldots, \lambda_{t_T})$ that allows the contribution of each timestamp $h_{t_j}$ to be combined. the softmax function is used to normalize the weights $\lambda$ so that their sum is equal to one. the results of the attention mechanism for the backward ($h^{att}_{back}$) and forward ($h^{att}_{forw}$) gru units are depicted with red and green boxes, respectively, in fig. .

fig. (caption): the network has three main components: i) an encoder, ii) a forward decoder and iii) a backward decoder. the encoder includes forward/backward gru networks. for each network an attention mechanism is employed to combine the sequential information. subsequently, the gating mechanism combines the forward/backward information to produce the embedding representation. the two decoder networks have a similar structure: the forward decoder reconstructs the original signal considering its original order (forward, green color) while the backward decoder reconstructs the same signal in inverse order (backward, red color). (color figure online)
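the attention computation above can be written in a few lines. this is a hedged sketch of that formulation, not the authors' code; the tensor shapes and the variable names (H, Wa, ba, ua) are assumptions matching the definitions given in the text:

```python
# minimal sketch, assuming tensorflow: computes
# lambda = softmax(tanh(H @ Wa + ba) @ ua) and the convex combination
# of the per-timestamp gru features, over a batch.
import tensorflow as tf

batch, T, l = 8, 20, 16
H = tf.random.normal((batch, T, l))          # stacked gru outputs h_tj
Wa = tf.Variable(tf.random.normal((l, l)))
ba = tf.Variable(tf.zeros((l,)))
ua = tf.Variable(tf.random.normal((l, 1)))

scores = tf.tanh(tf.matmul(H, Wa) + ba)                               # (batch, T, l)
lam = tf.nn.softmax(tf.squeeze(tf.matmul(scores, ua), -1), axis=-1)   # (batch, T)
h_att = tf.reduce_sum(tf.expand_dims(lam, -1) * H, axis=1)            # (batch, l)
```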
finally, the two sets of features are combined by means of a gating mechanism [ ] as follows:

$emb = gate(h^{att}_{forw}) \odot h^{att}_{forw} + gate(h^{att}_{back}) \odot h^{att}_{back}, \qquad gate(x) = \sigma(W x + b)$

where the gating function gate(·) performs a non-linear transformation of the input via a sigmoid activation function $\sigma$ and a set of parameters ($W$ and $b$) that are learnt at training time. the result of the gate(·) function is a vector of elements ranging in the interval $[0, 1]$ that is successively used to modulate the information derived by the attention operation. the gating mechanism adds a further decision level in the fusion between the $h^{att}_{forw}$ and $h^{att}_{back}$ information, since it has the ability to select (or retain a part of) the helpful features to support the task at hand [ ] .

the forward and backward decoder networks are fed with the representation (embedding) generated by the encoder. they deal with the reconstruction of the original signal considering the same order (resp. the reverse order) for the forward (resp. backward) decoder. this means that the autoencoder copes with the sum of two reconstruction tasks (i.e., forward and backward), where each reconstruction task tries to minimize the mean squared error between the original data and the reconstructed one. formally, the loss function implemented by the autoencoder network is defined as follows:

$L_{ae} = \frac{1}{n} \sum_{i=1}^{n} \left( \| x_i - dec(enc(x_i, \Theta_{enc}), \Theta_{dec}) \|_2^2 + \| rev(x_i) - dec_{back}(enc(x_i, \Theta_{enc}), \Theta_{back}) \|_2^2 \right)$

where $\|\cdot\|_2^2$ is the squared $\ell_2$ distance, $dec$ (resp. $dec_{back}$) is the forward (resp. backward) decoder network, $enc$ is the encoder network and $rev(x_i)$ is the time-series $x_i$ in reverse order. $\Theta_{enc}$ are the parameters associated to the encoder while $\Theta_{dec}$ (resp. $\Theta_{back}$) are the parameters associated to the forward (resp. backward) decoder.

the algorithm depicts the whole procedure implemented by detsec. it takes as input the dataset $X$, the number of epochs $nEpochs$ and the number of expected clusters $nClust$. the output of the algorithm is the new representation derived by the gru based attentive-gated autoencoder, named $embeddings$. the first stage of the framework trains the autoencoder reported in fig. for an initial number of epochs. successively, the second stage of the framework performs a loop over the remaining number of epochs in which, at each epoch, the current representation is extracted and a k-means algorithm is executed to obtain the current cluster assignment and the corresponding centroids. successively, the autoencoder parameters are optimized considering the reconstruction loss $L_{ae}$ plus a third term whose objective is to stretch the data embeddings closer to the corresponding cluster centroids:

$L = L_{ae} + \sum_{i=1}^{n} \sum_{l} \delta_{il}\, \| enc(x_i, \Theta_{enc}) - centroids_l \|_2^2$

where $\delta_{il}$ is a function that is equal to one if the data embedding of the time-series $x_i$ belongs to cluster $l$ and zero elsewhere, and $centroids_l$ is the centroid of cluster $l$. finally, the new data representation ($embeddings$) is extracted and returned by the procedure. the final partition is obtained by applying the k-means clustering algorithm on the new data representation.

require: $X$, $nEpochs$, $nClust$.
ensure: $embeddings$.
: initialize $i$
: while $i <$ (number of pretraining epochs) do
:     update $\Theta_{enc}$, $\Theta_{dec}$ and $\Theta_{back}$ by descending the gradient of $L_{ae}$
: end while
: while $i < nEpochs$ do
:     $embeddings \leftarrow enc(X, \Theta_{enc})$
:     $\delta$, $centroids$ = runkmeans($embeddings$, $nClust$)
:     update $\Theta_{enc}$, $\Theta_{dec}$ and $\Theta_{back}$ by descending the gradient of $L$
: end while
: $embeddings \leftarrow enc(X, \Theta_{enc})$
: return $embeddings$

in this section we assess the behavior of detsec considering six real-world multivariate time series benchmarks. to evaluate the performance of our proposal, we compare it with several competing and baseline approaches by means of standard clustering evaluation metrics. in addition, we perform a qualitative analysis based on a visual inspection of the embedding representations learnt by our framework and by competing approaches.
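to make the two-stage procedure concrete, here is a minimal runnable sketch of the training loop. it is our own simplification rather than the authors' implementation: the encoder and decoders are plain grus without attention or gating, and all sizes, epoch counts and names are toy assumptions:

```python
# minimal sketch, assuming tensorflow + scikit-learn: stage 1 pretrains a
# gru autoencoder on reconstruction; stage 2 alternates k-means on the
# current embeddings with gradient steps on L_ae plus the centroid term.
import numpy as np
import tensorflow as tf
from sklearn.cluster import KMeans

T, D, LATENT, NCLUST = 20, 3, 16, 4
X = np.random.rand(100, T, D).astype("float32")   # toy padded dataset

encoder = tf.keras.Sequential([tf.keras.layers.GRU(LATENT)])
decoder = tf.keras.Sequential([
    tf.keras.layers.RepeatVector(T),
    tf.keras.layers.GRU(LATENT, return_sequences=True),
    tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(D)),
])
opt = tf.keras.optimizers.Adam(1e-4)

def train_step(centroids=None, assign=None):
    with tf.GradientTape() as tape:
        emb = encoder(X)
        loss = tf.reduce_mean(tf.square(X - decoder(emb)))   # reconstruction
        if centroids is not None:                            # refinement term
            loss += tf.reduce_mean(tf.reduce_sum(
                tf.square(emb - tf.gather(centroids, assign)), axis=1))
    variables = encoder.trainable_variables + decoder.trainable_variables
    opt.apply_gradients(zip(tape.gradient(loss, variables), variables))

for _ in range(20):   # stage 1: autoencoder pretraining
    train_step()
for _ in range(20):   # stage 2: stretch embeddings towards centroids
    km = KMeans(n_clusters=NCLUST, n_init=10).fit(encoder(X).numpy())
    train_step(tf.constant(km.cluster_centers_, dtype=tf.float32), km.labels_)

final_labels = KMeans(n_clusters=NCLUST, n_init=10).fit_predict(encoder(X).numpy())
```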
for the comparative study, we consider the following competitors:

- the classic k-means algorithm [ ] based on euclidean distance.
- the spectral clustering algorithm [ ] (sc). this approach leverages spectral graph theory to extract a new representation of the original data; the k-means method is then applied to obtain the final data partition.
- the deep embedding clustering algorithm [ ] (dec), which performs partitional clustering through deep learning. similarly to k-means, this approach is suited for data with fixed length; in this case too we perform zero padding to fit all the time-series lengths to the size of the longest one.
- the dynamic time warping measure [ ] (dtw) coupled with the k-means algorithm. such a distance measure is especially tailored for time-series data of variable length.
- the soft dynamic time warping measure (softdtw), a differentiable distance measure recently introduced to manage dissimilarity evaluation between multivariate time-series of variable length. we couple this measure with the k-means algorithm.

note that when using k-means and sc, due to the fact that multivariate time series can have different lengths, we perform zero padding to fit all the time-series lengths to the longest one. for the dec method, we use the keras implementation. for the dtw and softdtw measures we use their publicly available implementations [ ] .

with the aim of understanding the interplay among the different components of detsec, we also propose an ablation study by taking into account the following variants of our framework:

- a variant of our approach that does not involve the gating mechanism. the information coming from the forward and backward encoders is summed directly without any weighting schema. we name this ablation detsec nogate.
- a variant of our approach that only involves the forward encoder/decoder gru networks, disregarding the use of the multivariate time series in reverse order. we name this ablation detsec noback.

our comparative evaluation has been carried out by performing experiments on six benchmarks with different characteristics in terms of number of samples, number of attributes (dimensions) and time length: auslan, japvowel, arabicdigits, remsensing, basicm and ecg. all datasets, except remsensing (which was obtained by contacting the authors of [ ]), are available online. the characteristics of the six datasets are reported in table .

clustering performances were evaluated by using two evaluation measures: the normalized mutual information (nmi) and the adjusted rand index (ari) [ ] . the nmi measure varies in the range $[0, 1]$ while the ari measure varies in the range $[-1, 1]$. these measures take their maximum value when the clustering partition completely matches the original one, i.e., the partition induced by the available class labels. due to the non-deterministic nature of all the clustering algorithms involved in the evaluation, we run the clustering process several times for each configuration, and we report the average and standard deviation for each method, benchmark and measure. detsec is implemented via the tensorflow python library. for the comparison, we set the size of the hidden units in each of the gru networks (forward/backward, encoder/decoder) to a smaller value for the basicm and ecg benchmarks and to a larger value for the auslan, japvowel, arabicdigits and remsensing benchmarks. this difference is due to the fact that the former group includes datasets with a limited number of samples that cannot be employed to efficiently learn recurrent neural networks with too many parameters.
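for reference, both evaluation measures are available in scikit-learn; a hedged scoring sketch with made-up label vectors:

```python
# minimal sketch: scoring one clustering run with the two measures used
# above; label vectors are illustrative placeholders.
import numpy as np
from sklearn.metrics import normalized_mutual_info_score, adjusted_rand_score

labels_true = np.array([0, 0, 1, 1, 2, 2])   # class labels of the benchmark
labels_pred = np.array([1, 1, 0, 0, 2, 2])   # k-means assignment on embeddings
print(normalized_mutual_info_score(labels_true, labels_pred))  # in [0, 1]
print(adjusted_rand_score(labels_true, labels_pred))           # in [-1, 1]
```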
to train the model, we fix the batch size, set the learning rate to a small value and use the adam optimizer [ ] to learn the parameters of the model. the models are trained for a fixed number of epochs: in the first part the autoencoder is pre-trained, while in the remaining epochs the model is refined via the clustering loss. experiments are carried out on a workstation equipped with an intel(r) xeon(r) e - v cpu, gb of ram and one titan x gpu.

table reports the performances of detsec and the competing methods in terms of nmi and ari. we can observe that detsec outperforms all the other methods on five datasets out of six. the highest gains in performance are achieved on the speech and activity recognition datasets (i.e., japvowel, arabicdigits, auslan and basicm). on such benchmarks, detsec outperforms the best competitor by at least points (auslan), with a maximum gap of points on arabicdigits. regarding the ecg dataset, we can note that the best performances are obtained by k-means and dec. however, it should be noted that in this case too detsec outperforms the competitors specifically tailored to manage multivariate time-series data (i.e., dtw and softdtw).

table reports the comparison between detsec and its ablations. it can be noted that there is no clear winner resulting from this analysis. detsec obtains the best performance (in terms of nmi and ari) on two benchmarks (arabicdigits and basicm), while detsec nogate and detsec noback appear to be more suitable for other benchmarks (even if the performances of detsec always remain comparable to the best ones). for instance, we can observe that detsec nogate achieves the best performances on ecg. this is probably due to the fact that this ablation requires fewer parameters to learn, which can be beneficial for processing datasets with a limited number of samples, timestamps and dimensions.

to proceed further in the analysis, we visually inspect the new data representation produced by detsec and by the two best competing methods (i.e., sc and dtw), using basicm as an illustrative example. we choose this benchmark since it includes a limited number of samples (to ease the visualization and avoid possible visual cluttering) and it is characterized by time-series of fixed length, which avoids the zero-padding transformation. the basicm benchmark includes examples belonging to four different classes that, in fig. , are depicted with four different colors: red, blue, green and black. figure (a), (b), (c) and (d) show the two-dimensional projections of the original data and of the representations produced by the dtw, sc and detsec approaches on this dataset. the two-dimensional representation is obtained via the t-distributed stochastic neighbor embedding (tsne) approach [ ] . in this evaluation, we clearly observe that detsec recovers the underlying data structure better than the competing approaches. the original data representation (fig. (a)) drastically fails to capture data separability. the dtw method (fig. (b)) retrieves the cluster involving the blue points, on the left side of the figure, but all the other classes remain mixed up. sc produces a better representation than the previous two cases but it still exhibits some issues in recovering the four-cluster structure: the green and black examples are slightly separated but some confusion is still present, while the red and blue examples lie in a very close region (a fact that negatively impacts the discrimination between these two classes).
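the qualitative projection described above can be reproduced in a few lines; the embeddings and labels below are placeholders, not the basicm data:

```python
# minimal sketch: project learnt embeddings to 2-d with t-sne and color
# points by their class ids, as in the visual inspection above.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

embeddings = np.random.rand(40, 16)     # placeholder for enc(X)
labels = np.repeat([0, 1, 2, 3], 10)    # placeholder class ids
xy = TSNE(n_components=2, perplexity=10).fit_transform(embeddings)
plt.scatter(xy[:, 0], xy[:, 1], c=labels)
plt.show()
```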
conversely, detsec is able to stretch the data manifold, producing embeddings that visually fit the underlying data distribution better than the competing approaches and that distinctly organize the samples according to their inner cluster structure. to sum up, we can underline that explicitly managing the temporal autocorrelation leads to better performances regarding the clustering of multivariate time-series of variable length. considering the benchmarks involved in this work, detsec exhibits a generally better behavior than the competitors when the benchmark contains enough data to learn the model parameters. this is particularly evident when speech or activity recognition tasks are considered. in addition, the visual inspection of the generated embedding representation is in line with the quantitative results and underlines the quality of the proposed framework.

in this paper we have presented detsec, a deep learning based approach to cluster multivariate time series data of variable length. detsec is a two-stage framework in which, firstly, an attentive-gated rnn-based autoencoder is learnt with the aim of reconstructing the original data and, successively, the reconstruction task is complemented with a clustering refinement loss devoted to further stretching the embedding representations towards the corresponding cluster structure. the evaluation on six real-world time-series benchmarks has demonstrated the effectiveness of detsec and its flexibility on data coming from different application domains. we also showed, through a visual inspection, how the embedding representations generated by detsec greatly improve data separability. as future work, we plan to extend the proposed framework by considering a semi-supervised and/or constrained clustering setting.
references:
- representation learning: a review and new perspectives
- efficient attention using a fixed-size memory representation
- a density based method for multivariate time series clustering in kernel feature space
- learning phrase representations using rnn encoder-decoder for statistical machine translation
- a fuzzy clustering model for multivariate spatial time series
- soft-dtw: a differentiable loss function for time-series
- optimizing dynamic time warping's window width for time series data mining applications
- wavelets-based clustering of multivariate time series
- duplo: a dual view point deep learning architecture for time series classification
- multivariate lstm-fcns for time series classification
- adam: a method for stochastic optimization
- clustering of time series data - a survey
- an ensemble model based on adaptive noise reducer and over-fitting prevention lstm for multivariate time series forecasting
- a tutorial on spectral clustering
- visualizing data using t-sne
- a survey of clustering with deep learning: from the perspective of network architecture
- improving speech recognition by revising gated recurrent units
- temporal pattern attention for multivariate time series forecasting
- mv-kwnn: a novel multivariate and multi-output weighted nearest neighbours algorithm for big data time series forecasting
- introduction to data mining, st edn
- tslearn: a machine learning toolkit dedicated to time-series data
- recurrent deep divergence-based clustering for simultaneous feature learning and clustering of variable length time series
- time-series clustering with jointly learning deep representations, clusters and temporal boundaries
- independent component analysis for clustering multivariate time series data
- learning kullback-leibler divergence-based gaussian model for multivariate time series classification
- gated multi-task network for text classification
- unsupervised deep embedding for clustering analysis

key: cord- - z us v authors: allen, edward e.; farrell, john; harkey, alexandria f.; john, david j.; muday, gloria; norris, james l.; wu, bo title: time series adjustment enhancement of hierarchical modeling of arabidopsis thaliana gene interactions date: - - journal: algorithms for computational biology doi: . / - - - - _ sha: doc_id: cord_uid: z us v

network models of gene interactions, using time course gene transcript abundance data, are computationally created using a genetic algorithm designed to incorporate hierarchical bayesian methods with time series adjustments. the posterior probabilities of interaction between pairs of genes are based on likelihoods of directed acyclic graphs. this algorithm is applied to transcript abundance data collected from arabidopsis thaliana genes. this study extends the underlying statistical and mathematical theory of the norris-patton likelihood by including time series adjustments.

cell signaling is accomplished via networks of transcriptional changes that lead to synthesis of distinct sets of proteins, which cause changes in growth, development, or metabolism. treatments that elevate levels of hormones result in cascades of changes in gene expression, driven by activation and synthesis of transcription factors which are required to turn on downstream genes. one approach to model these gene regulatory networks is to collect measurements of changes in abundance of gene transcripts across a time course.
the expression of a gene encoding a transcriptional activator or repressor protein may signal the next gene to either turn on or turn off downstream genes and their encoded proteins. thus, time course transcriptomic data sets contain important information about how genes drive these changes in biological networks. yet genome-wide transcript abundance assays examine tens of thousands of genes, so identification of patterns or networks within these large data sets is difficult. it is also critical to filter the meaningful transcript changes in these data sets to remove genes whose responses are not above background or that are dissimilar due to biological or technical variation. even though the bioinformatics community has developed statistical methods to filter the data [ ] , additional approaches are needed to identify the networks and patterns in these large data sets. an important modern approach to statistical modeling involves bayesian techniques based on likelihoods and posterior probabilities. here, we extend our previous work on this problem by incorporating time series adjustments into the computation of bayesian likelihoods. we apply this method to time course data generated in response to treatments that elevate the levels of the hormone ethylene in arabidopsis thaliana. we take advantage of a previously published genome-wide transcriptional data set [ ] , subjected to rigorous filtering and from which all the genes predicted to encode transcription factors have been identified. the goal is to predict gene regulatory networks that control time-matched developmental changes. the results in this paper are novel for several reasons. first, the methods use the hierarchical nature of the data sets. for example, replicate data are not averaged; rather, the method constructs a model over all of the data that uses each replicate as a source of information. the assumption is that at each level of the hierarchy there are commonalities in the data and parameters, and thus the replicate data are not treated as independent. second, the addition of a time series adjustment to improve the independence of the model's residuals gives these techniques stronger statistical foundations. third, the combination of bayesian model averaging with a cutting-edge genetic algorithm provides rigorous estimates of posterior probabilities for edges. these computational modeling algorithms are derived using rigorous mathematical and statistical techniques and are computationally efficient, and the models produced are easily understandable. many different techniques for modeling non-hierarchical gene expression data have been proposed; an excellent recent survey on this subject was given by emily [ ] . many techniques for modeling gene and protein networks, with various properties, are available in the literature. our technique in this paper is a bayesian regression-type method. variations of bayesian modeling can be found in [ , , ] . other methods that use types of regression include [ , ] , which focus on logistic regression techniques, and [ , ] , which use poisson regression. other approaches to modeling these types of problems include differential equations [ ] and boolean modeling [ ] . this bayesian likelihood computational algorithm incorporates additional important features relative to earlier versions. earlier variations included computing posterior probabilities for a single replicate [ ] and for multiple replicates with both hierarchical [ ] and independent [ ] structures.
over the course of this research, the search procedure has changed from metropolis-hastings to genetic algorithms. genetic algorithms' execution times are typically polynomial, rather than the doubly exponential execution time, in terms of the numbers of time points and genes, of metropolis-hastings. this variation also uses a bayesian version of the cross generational elitist selection, heterogeneous recombination, cataclysmic mutation algorithm (chc) [ ] . genetic algorithms are motivated by the operators of selection, crossover, and mutation. the chc variation does not allow the crossover of similar parents. once the population becomes too homogeneous, a cataclysmic mutation event regenerates the population from the current most fit parents. the bayesian chc (bchc) implemented in this paper uses a hierarchical statistical construct (the norris-patton likelihood) as the fitness function.

the hormone ethylene, whose levels are elevated by treatment with its precursor acc, is known to activate root growth responses in arabidopsis thaliana [ ] . transcription factors (tfs) are cellular proteins that bind to dna to turn genes either on (activation) or off (repression); developmental changes are controlled by these genes. the data set used in this modeling process was the complete set of abundance levels of the twenty-six tfs believed or known to be involved in the activation of the growth of roots, measured at eight time points after treatment with the ethylene precursor acc [ ] . here, constructing an appropriate network model has potential agricultural applications in that it should lead to a deeper understanding of root development. three network modeling paradigms are generally considered in the literature: cotemporal, next state one step, and next state one and two steps. a next state one step model predicts the transcript abundance relationships between genes at time $j$ based on the transcript abundance at time $j-1$. in this paper, we will only consider next state one step models; for simplicity, we will refer to next state one step as next state. the time series adjusted (tsa) next state models are an amalgamation of next state modeling with standard time series adjustments [ ] . the time series adjustment methodology makes the residuals (i.e., the estimated error terms) more independent.

a directed graph $g = (v, e)$ consists of a pair of collections: $v$, a set of vertices (or nodes), and $e$, a collection of directed edges between pairs of vertices. a cycle is a sequence of vertices and directed edges that begins and ends at the same vertex. directed acyclic graphs (dags) do not contain cycles. an example of a dag is given in fig. . in this modeling algorithm, dags form the mathematical foundation of our computational approach. the vertices of a dag represent genes and the directed edges are one-way relationships between pairs of vertices; when there is a directed edge from $v_i$ to $v_j$, we call $v_i$ a parent of $v_j$. for any dag $d$ with vertex set $v = \{v_1, v_2, \cdots, v_n\}$, the vertices can be topologically sorted. this gives a total order $>$ on $v$ such that if $v_i$ is an ancestor of $v_j$ in $d$, then $v_i > v_j$. conditional probability gives that for any two events $a$ and $b$, the probability $p(a \cap b) = p(b)\, p(a \mid b)$. similarly, the density function $f$ for two continuous variables $y_1$ and $y_2$ satisfies $f(y_1, y_2) = f(y_1)\, f(y_2 \mid y_1)$. recursively, using the order implied by topologically sorting a particular dag $d$ on the set of continuous variables, let $y_1$ be the gene that cannot have any parents, let $y_2$ be the gene that can have at most the parent $y_1$, and, similarly, let $y_h$ be the gene that can have parents from the collection $\{y_1, \cdots, y_{h-1}\}$. therefore, if we let $y_i$ represent the data of child $i$ for all of the $r$ replicates, we have for $d$
$f(y_1, y_2, \ldots, y_n \mid d) = f(y_1 \mid d)\, f(y_2 \mid y_1, d)\, f(y_3 \mid y_1, y_2, d) \cdots f(y_n \mid y_1, \cdots, y_{n-1}, d).$

statistical regression models of response (child) data on predictor (parent) data over time nearly always have residuals that are correlated over time. this is usually due to the remaining influence of the previous time's response data. in complicated modeling situations (like ours, where we need to obtain closed form likelihoods of dags within a hierarchical structure in order to produce posterior probabilities of edges), it is common to derive results as if there were non-correlated residuals, as we have done in previous work. our previous work has shown utility both for simulated and biological data, but we now rigorously incorporate a time series adjustment into our model. this should result in substantially less correlated residuals and thus more accurate likelihoods for the dags. since these likelihoods are the foundations for the edges' estimated posterior probabilities, these estimates should also be improved. our time series adjustment is a first-order autoregressive adjustment in the commonly used family of markov conditioning. it is a version of kedem's and fokianos' autoregressive model [ , page ]. in our setting, this simply adds the child's data at the previous time as an additional regressor for the child's data at the current time. thus, much of the influence of the child's data at the previous time would be regressed out, leaving less correlated, closer to independent, residuals from one time to the next.

for each $h$, with $1 \leq h \leq n$, $f(y_h \mid y_1, y_2, \ldots, y_{h-1}, d)$ gives the density of $y_h$ given $y_h$'s parents' data for dag $d$. now, let $_iy_c$ be the data vector of any given child $c$ from the $i$th replicate. the vector $_iy_c$ has dimension $t$, the number of utilized time points in the child $c$ data set for a given replicate $i$. the symbol $_ix_c$ is the $t \times k_c$ regressor matrix for $_iy_c$. for next state with time series adjustment, $t$ is the number of time points per replicate minus one, since at the first time point the child data has no previous parent data nor previous child (tsa) data; so the utilized child data starts at the second time point. the value of $k_c$ is the number of parents of $c$ plus two, since $_ix_c$ has a separate column for each of its parents' data at the previous time, a column of ones for the intercept, and a column of the child's data at the previous time (the time series adjustment). a $k_c$-dimensional slope vector for child $c$'s regressors is $_i\beta_c$. the common within-replicate residual variance of child $c$ is $\sigma_c$. assumptions which detail the hierarchical structure are imposed for a given child. the proof of the theorem uses the following lemmas, whose computations can be found in [ ] (a thesis from our research group). we include the proof of the lemma to show how the computation of the likelihood handles the slope parameters $_i\beta_c$ of each of the replicates separately.

proof. the slope parameters $_i\beta_c$ are integrated out analytically; letting $|M|$ denote the determinant of the matrix $M$, a closed form for the marginal density follows. extending the lemma to the product of density functions used in the subsequent lemma gives the closed form of the norris-patton likelihood. note that $g$, $v$ and $\sigma_c$ are positive free parameters; in our modeling algorithm, we fix $g$, $v$ and $\sigma_c$ to a common constant value.

the use of the time series adjusted next state norris-patton likelihood, along with a tailor-made genetic algorithm and bayesian model averaging, allows for the rigorous estimation of posterior probabilities for all gene pair interactions.
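as an illustration of the regressor matrix just described, the following sketch (our own; function and variable names are hypothetical) assembles $_iy_c$ and $_ix_c$ for one child in one replicate, with the intercept column, the parents' previous-time columns and the tsa column, and fits the per-replicate slopes by least squares:

```python
# minimal sketch, assuming toy data: build the next-state design matrix
# with the time series adjustment for one child gene in one replicate.
import numpy as np

def tsa_design(child, parents):
    """child: (T,) abundances; parents: (T, p) parent abundances.
    returns y of shape (T-1,) and X of shape (T-1, p+2):
    intercept, parents at t-1, and the child at t-1 (tsa column)."""
    T = child.shape[0]
    y = child[1:]                              # child at current times
    X = np.column_stack([np.ones(T - 1),       # intercept column of ones
                         parents[:-1],         # parents at previous time
                         child[:-1]])          # tsa: child at previous time
    return y, X

y, X = tsa_design(np.random.rand(8), np.random.rand(8, 2))
beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # per-replicate slope vector
```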
if indicator < threshold then
    p(t) ← cataclysm(p(t))
    reset indicator
end if
archive ← archive ∪ p(t)
end while
return archive
end procedure

simply put, a genetic algorithm (ga) takes the current population and produces the next generation using the operations of selection, crossover, and mutation [ ] . individuals (i.e., dags) are automatically moved to the next generation with preference given to those with the higher likelihoods (the elitist strategy). the first population must be initialized, and the genetic algorithm terminates after a specified number of iterations. the tbchc genetic algorithm is an extension of bchc [ ] , which was heavily influenced by chc [ ] . the tbchc fitness function includes the next state time series adjustment. the tbchc operators of selection, crossover, mutation, and repair are discussed in the following paragraphs.

the population of each generation consists of a fixed number of dags, each dag representing gene relationships. the genetic algorithm's aim is to move from the current population of dags to a new generation where the overall quality improves (as measured by the norris-patton likelihood). the elitist strategy only moves the top percentage of dags from the current generation to the next, and the balance is filled by crossover. as tbchc iterates, all distinct dags are archived; the final gene interaction model is produced from this archived collection.

generally, the selection operator chooses which members of the current population can potentially contribute children to the next generation. in fig. , selection is accomplished through a random pairing of all parents in the current population. by assuming equally likely prior probabilities for the dags, the posterior of a given dag $d$ is proportional to $d$'s npl [ ] ; thus, the fitness of a candidate $d$ can be computed using the npl. the crossover operator exchanges genetic information (i.e., directed edges) between two parents, producing two new offspring. the edges to be exchanged are chosen randomly. there is one caveat: if the two parents are too similar, as determined by the hamming distance between them, then the two selected parent dags are not allowed to produce offspring. in a simple genetic algorithm, all selected parents are allowed to produce offspring; this tbchc prohibition of mating by similar parents may result in fewer dags in the next population than in the current population. since the modeling process is based on dags, if the crossover operator introduces a cycle in the offspring, a repair operator is applied. selection and crossover are used exclusively in tbchc until the population becomes too similar. at that point, cataclysmic mutation is applied to reset the population by creating a new population of dags from the top npl dags.

there are no known techniques for assigning optimum values to the genetic algorithm parameters; however, experience and the literature give general criteria for appropriate values. still, values are often determined on a case by case basis. the tbchc algorithm parameters include the following: several parallel executions, each with a fixed number of generations; a fixed number of initial dags; a fixed crossover probability; and a limit on the number of parents of any given node. cataclysmic mutation causes the population of dags to be replaced by dags generated by crossover and mutation on the top fraction of the population, restoring the candidate population to its original size. this tbchc algorithm is implemented in python using the networkx [ ] and dispy packages [ ] .
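the chc-style mechanics described above (random pairing, hamming-distance incest prevention, and cycle repair) can be sketched with networkx. this is our own illustrative code with hypothetical helper names, not the tbchc implementation; it omits the npl fitness and cataclysmic mutation:

```python
# minimal sketch, assuming networkx: random dag population, hamming-based
# incest prevention, edge crossover, and repair of cycle-forming edges.
import random
import networkx as nx

def hamming(d1, d2):
    """number of directed edges present in exactly one of the two dags."""
    return len(set(d1.edges()) ^ set(d2.edges()))

def crossover(d1, d2, genes):
    child = nx.DiGraph()
    child.add_nodes_from(genes)
    for e in set(d1.edges()) | set(d2.edges()):
        if random.random() < 0.5:
            child.add_edge(*e)
            if not nx.is_directed_acyclic_graph(child):
                child.remove_edge(*e)   # repair: drop cycle-forming edges
    return child

genes = list(range(5))
pop = []
for _ in range(20):                     # random initial dag population
    g = nx.DiGraph()
    g.add_nodes_from(genes)
    for _ in range(6):
        u, v = random.sample(genes, 2)
        g.add_edge(u, v)
        if not nx.is_directed_acyclic_graph(g):
            g.remove_edge(u, v)
    pop.append(g)

threshold = 3
random.shuffle(pop)                     # random pairing of all parents
offspring = [crossover(a, b, genes)
             for a, b in zip(pop[::2], pop[1::2])
             if hamming(a, b) > threshold]   # incest prevention
```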
it is important to realize that each directed edge in the model is labeled by a number in the interval $[0, 1]$ indicating the posterior bayesian probability that the associated relationship exists in the biological network. using bayesian statistics, the posterior probability of an edge $e$ can be estimated over the archive $ar$ as

$\hat{p}(e \mid \text{data}) = \dfrac{\sum_{d \in ar,\; e \in d} npl(d)}{\sum_{d \in ar} npl(d)},$

which simply and appropriately weights each visited dag $d$ according to its likelihood. this methodology requires equally likely priors, since in such a situation the posterior for $d$ is proportional to its likelihood [ ] . in order for this estimate to reflect its true value, it is necessary that $ar$ contain a large and varied collection of dags of high likelihood.

using the transcript abundance data for the arabidopsis thaliana genes stimulated by acc, gene interaction models for next state with and without time series adjustment were computationally created, as shown in fig. . each edge is labeled by its posterior probability. figure provides comparisons of three similar models to those given in fig. . figure (a) shows a stronger and tighter distribution of posterior probabilities than fig. (b). there is significant agreement across the models for high and for low average posterior probabilities. however, for intermediate average posterior probabilities there is a great deal of variance, which reflects the lack of a strong posterior probability over this range.

a typical underlying assumption of statistical analysis is that the residuals are independent [ , page ]. it is well understood, however, that the residuals associated with time course data are not usually independent. by incorporating time series adjustments into the modeling process, the residuals' independence is much improved, thus yielding a less approximated, more accurate likelihood function. the continuation of this research includes four tasks. first, the computational networks have been sent to the muday lab for biological investigation, confirmation and interpretation. second, in this paper we investigated the enhancement of time series adjustment on a next state one step model. there are two other time paradigms, next state one and two steps and cotemporal, each of which has a time series adjustment analogue and a corresponding norris-patton likelihood; comparing and contrasting the computational results of these three distinct modeling methods, as well as their biological interpretations, is important in understanding the gene interaction models developed using this methodology. third, we will further consider higher order autoregressive adjustments to continue improving the independence of the residuals. fourth, effort is underway to implement nonuniform priors in the modeling techniques; this would permit construction of gene interaction models that reflect relationships found in the literature.
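the model-averaged edge posterior just defined can be computed directly from the archive; a hedged sketch with a toy archive of (edge set, log-likelihood) pairs, using log-sum-exp stabilization:

```python
# minimal sketch, assuming a toy archive: likelihood-weighted model
# averaging of edge indicators over the archived dags.
import math

archive = [({("g1", "g2"), ("g2", "g3")}, -10.2),   # (edges, log-likelihood)
           ({("g1", "g2")}, -11.0),
           ({("g2", "g3"), ("g1", "g3")}, -12.5)]

m = max(ll for _, ll in archive)                     # stabilize the exponentials
weights = [math.exp(ll - m) for _, ll in archive]
z = sum(weights)

def edge_posterior(edge):
    return sum(w for (edges, _), w in zip(archive, weights) if edge in edges) / z

print(edge_posterior(("g1", "g2")))
```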
references:
- modeling gene regulation networks using ordinary differential equations
- detecting gene-gene interactions that underlie human diseases
- probability and statistics, th edn
- a survey of statistical methods for gene-gene interaction in case-control genome-wide association studies
- the chc adaptive search algorithm: how to have safe search when engaging in nontraditional genetic recombination
- evolutionary computation - basic algorithms and operators
- using bayesian networks to analyze expression data
- exploring network structure, dynamics, and function using networkx
- identification of transcriptional and receptor networks that control root responses to ethylene
- bayesian model averaging: a tutorial
- continuous cotemporal probabilistic modeling of systems biology networks from sparse data
- regression models for time series analysis
- a bchc genetic algorithm model of cotemporal hierarchical arabidopsis thaliana gene interactions
- stochastic boolean networks: an efficient approach to modeling gene regulatory networks
- an introduction to genetic algorithms
- bayesian interaction and associated networks from multiple replicates of sparse time-course data
- bayesian probabilistic network modeling from multiple independent replicates
- hierarchical bayesian system network modeling of multiple related replicates
- bayesian network analysis of signaling networks: a primer
- dispy: distributed and parallel computing with/for python
- plink: a toolset for whole-genome association and population-based linkage analysis
- boost: a fast approach to detecting gene-gene interactions in disease data
- gboost: a gpu-based tool for detecting gene-gene interactions in genome-wide case control studies

acknowledgments. the authors thank the national science foundation for their support with a grant, nsf# .

key: cord- -u l jv authors: bao, yinyin; bossion, amaury; brambilla, davide; buriak, jillian m.; cai, kang; chen, long; cooley, joya a.; correa-baena, juan-pablo; dagdelen, john m.; fenniri, miriam z.; horton, matthew k.; joshi, hrishikesh; khau, brian v.; kupgan, grit; la pierre, henry s.; rao, chengcheng; rosales, adrianne m.; wang, dong; yan, qifan title: snapshots of life—early career materials scientists managing in the midst of a pandemic date: - - journal: chem mater doi: . /acs.chemmater. c sha: doc_id: cord_uid: u l jv

■ yinyin bao, group leader, institute of pharmaceutical sciences, eth zürich

i never expected that a simple assembly of certain macromolecules could have such a huge worldwide impact on human life, which is what we as polymer scientists endeavor to achieve but cannot. ironically, the coronavirus succeeded in making it happen. working in the second most densely infected country, as a father of two little kids and a junior group leader, i would not say that lockdown life is easy. with our lab closed, i can only focus on papers, reports and other writing work, and communication with students can only be done online. what is more complicated is that i have to shift my working time mainly to the night, so that i can have "bulk time" to concentrate on one thing. although i am less productive, fortunately i have more time to teach my daughter mathematics, read story books to my son, make handicrafts with them, and have other fun. this greatly eases my anxiety over the suspended research, which makes my lockdown life better than expected.
during the spread of the coronavirus, another thing i did not expect was the huge difference between east and west regarding people's reactions. a typical example is the big debate on whether uninfected people should wear masks. since the outbreak of sars, chinese and other asians have become very sensitive to unknown viruses, and wearing masks has been considered an effective method to prevent their spread. however, this is treated as a sign of sickness and overreaction by most europeans, and is thus not socially acceptable. three weeks ago, when i wore my first mask just before the lockdown of eth, i only saw asian people wearing masks in switzerland. surprisingly, as the covid-19 situation has gotten worse, i have started to see europeans wearing masks in the stores and on the street. i even read the news that austrians are required to wear masks in supermarkets. it is interesting to see this transformation due to the reconciliation between eastern and western culture, and i will continue to follow this trend during this special period.

■ amaury bossion, post-doctoral

while the new coronavirus pandemic is dominating headlines worldwide and the french president, mr. emmanuel macron, has just extended the stay-at-home order, here are my thoughts on the surreal atmosphere. as a young postdoctoral scientist, i must admit it is quite harsh not only sacrificing laboratory time but also socializing only remotely. it is even more frustrating that this crisis put an instantaneous halt to promising ongoing experiments. this bizarre and heavy atmosphere is all the more present in that i became, partly against my will, a troglodyte in my small parisian apartment. while lots of my friends and colleagues returned home to spend more time with their families at the beginning of the crisis, a mixture of logic and reflection advised me to stay home for the greater good, despite my family's call to come back. although i am organized and rigorous, covid-19 is putting a strain on me mentally, as remaining focused on writing ongoing articles and reviews all day is really demanding with all the other distractions surrounding me. physically, it has considerably reduced my daily physical activity, to the point where the most exercise i get is the few walking steps needed to move around my flat. although i sincerely hope this horrific time will end soon and that we will learn lessons for future preparedness, i definitely believe that we can take advantage of it to grow mentally and learn new things.

■ davide brambilla, assistant professor, faculté de pharmacie, université de montréal

as the son of a health professional in the red zone in italy, i was rapidly aware of the seriousness of the infection. nonetheless, the pandemic materialized as a storm when, from one day to the next, we had to shut down the laboratory and stop all nonessential experiments. after the initial phase of disorientation, my first work-related thoughts went to the teaching, the graduate students and their projects. while for undergraduate classes the université de montréal rapidly reacted and provided support to generate online classes, for the research, the initial stress slowly converted into the recognition that this could be a great opportunity to review and better plan our projects. now, after a month of the home office, i feel that this forced shutdown brought me out of a working routine, and made me appreciate even more the importance of our profession.
scientific research is the only actual weapon we have to fight this infection and to prevent, or respond rapidly to, future ones. deeply, i hope this pandemic will teach us all something, and that the opinions of scientists will receive higher consideration from society and decision-making institutions.

■ kang cai, post-doctoral fellow, department of chemistry, northwestern university

i am experiencing my second "stay-at-home" period in the us now. i am a postdoctoral researcher at northwestern university in the us. three months ago, i went back to china to attend an academic forum, and then stayed at my hometown in hunan province to spend the chinese spring festival with my family. in january, the covid-19 outbreak happened, and i experienced my first "stay-at-home" period. my return flight to the us, at the end of january, turned out to be united airlines' last flight from china to the us. after a two-week self-quarantine, i worked hard in the lab and tried to get as many results as possible, since i realized that universities in the us could also be shut down in the near future, which happened one month later. now, i have been staying in my apartment for three weeks. i work on manuscripts, read papers, think about proposals, do "reactions" in the kitchen, and watch tv. the group meeting is held every week via zoom video, which is good, because it has become the only "social activity" every week. currently i have plenty of work to do, which makes the "stay-at-home" days not that boring. but if the days continue for another one to two months, or even longer, which seems very likely to happen, i am not sure whether i will become anxious or not. it is a tough year for all of us. i am supposed to be on the job market this year, but now the situation has changed a lot. the future is full of uncertainty.

■ long chen, professor, department of chemistry, tianjin university

during the lockdown period in our city since february, although the laboratories are still closed and all the students remain in their hometowns, we all have great confidence that our country, and the entire world, can overcome this covid-19 pandemic crisis. we keep in touch with our group members via wechat, and also continue to hold group meetings online, with a focus on literature reports. although from the early days of this pandemic we were not allowed to enter the office on campus, all the online resources of the university can be conveniently accessed via a vpn connection. it is also an opportunity for the principal investigator and graduate students to analyze and summarize their research work. we recently managed to finish an invited review contribution and several manuscripts. with the situation in china steadily improving, some universities have already announced their gradual reopening schedules. it is my hope that humanity pulls together and builds a future in which we are united in fighting this virus, bringing about the fastest possible victory.

■ joya cooley, post-doctoral fellow, materials research laboratory, university of california santa barbara

i've read plenty that says a routine is the most helpful thing for maintaining overall sanity. i've been keeping a routine, but have found that keeping a flexible routine with some goals works better for me. some of those goals include: sit down at my desk and get some writing done, read some literature, do yoga, make meals, check on my parents.
i find that variation in my routine helps with maintaining sanity: if i stray from a rigid routine, i start to create more inner turmoil when i cannot keep up. however, if i just try to set some daily/weekly goals, i can tackle them based on how i'm feeling that particular day. i try to practice yoga a few times a week; some days it happens in the morning, some days in the evening, some days not at all. i write for at least one hour a day; some days that happens all at once, some days it is broken up into shorter sessions. i feel i'm in a precarious position as i'm wrapping up my postdoctoral work and gearing up to begin my independent career, but all i can do for now is take it one day at a time.

our laboratories closed unofficially in mid-march, as i worried about the health of my students and postdocs. it coincided with spring break, and some students were traveling abroad to see their families. i was hesitant to advise them on travel, and the institute had not officially closed, making it difficult to issue strong recommendations. georgia tech officially closed all nonessential activities one week later. the weeks after were tough, as students got stranded abroad and experiments were not finished to the extent we wanted. our laboratories had only opened the previous october, and we had spent the months since ramping up; it took one day to ramp everything down. nonetheless, this has presented itself as an opportunity to come up with strong experimental plans, revisit the literature, and compile and analyze data that will be going into future manuscripts. a recent week in april finally started feeling normal, and we are trying to make the most of it. i have a plan for each of my students and postdocs for the next two months; after that, i will have to reevaluate! will we turn into a computational group? we will see! as for teaching, transitioning to the online format for this millennial (me) was easy, and my students adapted quite well; they ask more questions than ever.

we are a team at lawrence berkeley national laboratory working with colleagues from the biomedical research community to make text mining and search tools specifically tailored to covid-19 research, with the goal of helping to accelerate our colleagues' research. our team is made up of a number of graduate student researchers and postdocs from lbnl and uc berkeley who specialize in natural language processing methods for analyzing materials science literature, but we were approached about a month ago by colleagues from the innovative genomics institute about applying some of our techniques to the covid-19 literature. since then, we have been working around the clock to build covidscholar.org, a knowledge portal designed to help researchers stay on top of the covid-19 literature. to our knowledge, our database is the most comprehensive and current source for covid-19 papers available today, and we are expanding to include patents and clinical trials in the near future. our site also includes features that leverage machine learning models which extract knowledge from the literature and help researchers make new connections they might have missed due to the sheer volume of research coming out every day. this project has been extremely motivating for everyone on the team, and we have been able to make rapid progress as a result.
■ miriam fenniri, undergraduate student, university of british columbia

i am a soon-to-be fourth (senior) year undergraduate student at the university of british columbia (ubc). i was fortunate enough to spend last summer in an organic chemistry lab at université laval, where i was working on the synthesis of low band gap conducting polymers in the laboratory of prof. mario leclerc. this summer, i was planning on staying on the ubc campus doing research and continuing my work as a teaching assistant, until covid-19 got in the way. not only is the course that i was going to assist with canceled, but the research center where i was going to spend the coming months is closed until further notice. i am still living on campus. many of my colleagues and classmates have returned home; some are still here, either by choice or because of travel restrictions. classes were moved online in march, and shortly thereafter laboratories closed and research was halted. the transition to online learning has been smooth; however, i never expected to have to write a midterm, much less three midterms, in my living room. at least i am now prepared to do the same for my final exams. to combat loneliness (and boredom), my family has been hosting weekly video chats and, strangely enough, i look forward to them!

■ matthew horton, materials project staff, lawrence berkeley national lab

i work in computational materials science, a field in which, i am enormously grateful, my colleagues and i can continue to make contributions remotely. however, this can only happen because of the people who are still performing the necessary maintenance and support for the high-performance computers and servers that we use. for me, these are the people still keeping the facilities at the national energy research scientific computing center (nersc) running smoothly here in berkeley, but i know that my colleagues greatly appreciate similar efforts all across the world. it is important that we recognize all of these people, and the personal risk they are taking on, as well as everyone working to keep laboratories in a safe and stable condition. beyond computation, much of my job is working to share the data we generate online at the materials project so that it is accessible to as broad a group of people as possible, and part of that is working to build a community. i have many open questions about how best to do this, but it feels more necessary now than ever that we better understand what steps we can take to make our community stronger and more inclusive. for my part, i'm enjoying talking to scientists on an online materials science discussion forum we recently launched, as well as helping to welcome new developers making contributions to the open-source codes we work on. as this situation evolves we can challenge ourselves to become more inventive in finding ways to connect and collaborate, and to carry these lessons forward.

■ hrishikesh joshi, graduate student

i have been living in germany for several years, pursuing a doctorate in chemistry. presently, i am preparing for a big day in my career: my ph.d. defense, "remotely." if i had to describe my current state in one word, it would be "uncertain," on all accounts. will the internet hold up during the defense? will my online presentation be good enough? will i find a job after my defense? the global economy is headed toward a recession, and most companies are downsizing.
as an international candidate, it will be more challenging for me to find a job now than before. on the one hand, i am thrilled that i at least get to defend my ph.d.; on the other hand, i am disappointed to miss out on the opportunity to share this day with people. working remotely has made me appreciate personal interactions even more. every thursday, i am very excited as i get to go to the lab (under reduced-workforce regulations) and feel a bit normal again. nevertheless, these times have also been productive, as i am working on overdue programming projects and experimenting a lot with cooking. i feel these times are uncertain and disappointing, but also full of opportunity in some ways.

the enactment of unfamiliar public health measures and the rapid breakdown of our status quo are two major emotional stressors associated with the ongoing covid-19 pandemic. as an early career scientist, it is easy to fall into the rut of futility that comes with leaving experiments half-finished with looming deadlines. instead of focusing on events outside of my control, i found it more productive to reframe the current situation as a unique opportunity to work on myself, whether through reading up on current literature or investing more time in hobbies. in the past month, i've invested more time in properly caring for my existing houseplants, repurposing a garment rack to create a diy grow light setup. if you are in need of a hobby, i recommend cultivating common, inexpensive vining plants such as pothos or philodendron! both plants display rapid growth and thrive even when you occasionally forget to water them, and with enough care and time they will grow into respectable, climbing foliage. houseplants are also inspiring metaphors for how we should live our lives; by constantly reaching for the sky while taking care of our essential needs, we can succeed and flourish in the face of unexpected change.
even with the closure of laboratories and the cancellation of a busy travel schedule, leaving synchrotron and magnet lab experiments indefinitely suspended, our scientific progress is still planned on the order of days. while i am building plans for my students to safely return to the lab, these may very well not be implemented: there is no justification to rush back without effective and organized testing. as my wife, daughter, and i prepare for the arrival of her baby brother in july, i am acutely aware that the changes and dangers of this new world will not abate soon. it is exhausting simultaneously meeting the demands of the moment and mitigating the risks of a nebulous "return." these risks are particularly worrisome in the vacuum of federal and state leadership. as we rebuild our institutions, scientists, engineers, and academics must demand the integration of our technical and organizational expertise to the structure and function of our governments. one bright spot in this debacle has been the competent and measured response of scientific leadership across disciplines and institutions. ■ chengcheng rao, graduate student, with respect to my own research, my ongoing experiments needed to be postponed, accompanied by a shift of focus to literature, writing, and paperwork. i had been wondering if it would become possible to execute my experiments automatically and/or remotely? currently, the answer is no, but i cannot help but observe that with the growth of ai and breakthroughs underpinned by intelligent robots, some experiments will be doable by machines with fewer hands-inthe-lab. it is an eye-opening moment to think about how to bridge and transfer this advanced technology to my research as well. for interpersonal communication, our group meetings have moved online to keep social distancing and self-isolation, which is a new format for me. hence, we need some time to get familiar with this new communication method/software as face-to-face communication is more productive. all graduate courses are all online as well, for graduate students who need to take courses, and this is a challenge. meanwhile, so much information is shared by email, and it sure feels like we are receiving literally tons of emails every dayit is very hard to follow every email as some have too much/little information, and some are duplicated or even conflicting. it is another challenge to obtain useful information effectively through the information explosion. for my graduation, as i approach the end of my ph.d., my defense was expected at the end of the summer of . will i have an online defense, and a virtual graduation convocation? i hope not. hence, i am always thinking about when the coronavirus outbreak will come to an end. wuhan's shutdown was lifted on april , and it shows that the epidemic will be brought under control if effective controls are taken. but will it manifest itself as a second wave? this is the part i am most worried about. after thinking about all aspects, it is necessary to create and maintain a routine during this ever-changing time. do some work on paper or computer, avoid going out unnecessarily, and be sure to get some exercise to strengthen immunitythese are the things that i am doing. although the temperature is still below zero celsius in edmonton, i do believe spring is on its way. ■ adrianne rosales, assistant professor, almost exactly one year ago, i was working from home for a different reason: the birth of my daughter. 
While I was over the moon to be a new mother, part of me struggled with the anxiety of staying productive. My research group was only two years old, and I had watched others continue to submit grants, write papers, and advise students soon after the births of their children. Whether that productivity was real or not, I held myself to an impossibly high standard on very little sleep. When COVID-19 began to spread, I was acutely aware of the implications of shutting down my lab space and working from home again, not even one year later. Nap time only goes so far, and my group was still mostly first- and second-year graduate students. Things were finally starting to work! Ultimately, it was clear that much larger consequences were at hand and that many would suffer. While it was still voluntary, I decided to pause our lab activity. Our university mandate came the following week, but in the meantime, it felt like a decision I made every day. And although I still hold high expectations for my group, we have worked together to make sure those expectations are also realistic. This will not be the last challenge my lab faces, and I hope that in addition to research productivity, I am training my lab in leadership and resiliency.

The COVID-19 pandemic is now largely under control in China. Most of our regular daily life has recovered, although the institute is only open to permanent staff. The "good" thing is that I have much more time to concentrate on research; I reviewed several papers during this time, probably more than I usually do, and I feel that I read each manuscript more carefully than before. Senior graduate students have not yet come back from their homes, which will certainly have a big impact on the progress of their theses and on the productivity of the group as a whole. As an experimental chemistry group, we await with great anticipation the resumption of normal experiments and research. Social media on the web have played an important role during this special period. I taught an online course to first-year graduate students, despite having no experience with online teaching, and with relief I can finally say that it went well. Many conferences have been canceled or postponed to next year; on the other hand, online conferences have become increasingly popular. I have already received several invitations to online Ph.D. defenses, which are critical for our young people's progress. Web-based conferences can save significant time and may become more and more popular even after the pandemic. I am confident that the pandemic will come under control worldwide, probably starting around the summer. However, the impact of the pandemic on the economy is already showing, and I hope that it will not affect the funding situation in the future.

During this winter vacation, I went to Wuhan to visit my family on th of January, one week before the lockdown of the city caused by the COVID-19 outbreak. I stayed at my parents' home with my family for more than two months, wondering about the fates of my entire family. The virus infected several of my relatives, four of whom were hospitalized. Luckily, they all recovered, except my -year-old grandfather. During the most dire days, like the people of Wuhan, I could not sleep at night and kept looking for the slightest hope in the news, while planning to limit the exposure of uninfected family members. With strong backing from the Chinese people, especially medical personnel, Wuhan survived and reopened on th of April.
I returned to Shanghai by flight on the first day Wuhan reopened, and was greeted by media instead of medical teams upon arrival. During the city lockdown, a journal article authored by one of my students and me was submitted in spite of difficulties caused by a lagging internet connection. After peer review, a reviewer raised stability issues, which required further experimental data. Luckily, my collaborators were able to provide the relevant results. I sincerely cannot anticipate what the outcome would have been had I written to the editor explaining that we could not provide any further experimental data in the near future due to the coronavirus outbreak, or had I asked for an unlimited extension of the revision deadline. As of now, we still do not have a schedule for reopening the laboratories. I am still waiting out days in Shanghai before the university can clear me to return to campus. Almost all my group members are in their respective hometowns, longing for a notice to return to Shanghai, especially the ones who are graduating this year. Online meetings are possible but difficult due to slow internet connections. I hold individual discussions of the literature with each one of them, hoping they are coping well with the current situation.

Chemistry of Materials wishes the best to all of our authors, readers, and reviewers. We are in this together, and we look forward to another set of snapshots in a month.

key: cord-fn zlutj authors: nan title: abstracts of the th annual meeting of the german society of clinical pharmacology and therapy: hannover, – september date: journal: eur j clin pharmacol doi: ./bf sha: doc_id: cord_uid: fn zlutj

Grapefruit juice may considerably increase the systemic bioavailability of drugs such as felodipine and nifedipine. This food-drug interaction is of potential practical importance because citrus juices are often consumed at breakfast time, when drugs are often taken. It is likely that a plant flavonoid in grapefruit juice, naringenin, is responsible for this effect (inhibition of cytochrome P450 enzymes in the liver or in the small intestinal wall). Ethinylestradiol (EE2), the estrogen of oral contraceptive steroids, shows a high first-pass metabolism in vivo. The purpose of this study was therefore to test the interaction between grapefruit juice and EE2. The area under the serum concentration-time curve (AUC – h) of EE2 was determined in a group of young healthy women (n = ) on day ± of the menstrual cycle. To allow intraindividual comparison, the volunteers were randomly allocated to two test days. The volunteers took µg EE2 together with either ml of herb tea or the same amount of grapefruit juice (naringenin content mg/l). Furthermore, on the day of testing the women drank ml of the corresponding fluid every three hours, up to four times. The AUC – h of EE2 amounted to . ± . pg·ml⁻¹·h after administration of the drug with grapefruit juice, i.e. % higher than the ± . pg·ml⁻¹·h after concomitant intake of tea. The mean Cmax also increased, to % (p ≤ . ; . ± . and . ± . pg/ml, respectively). These results show that the systemic bioavailability of EE2 increases after intake of the drug with grapefruit juice. The extent of this effect is smaller than the known interindividual variability.

Procarbazine is a tumoristatic agent widely used in Hodgkin's disease, non-Hodgkin's lymphomas, and tumours of the brain and lung.
Procarbazine is an inactive prodrug which is converted by a cytochrome P450-mediated reaction to its active metabolites, in the first step to azoprocarbazine. The kinetics of procarbazine and azoprocarbazine have not previously been described in humans. In tumour patients we investigated the plasma kinetics of both procarbazine and azoprocarbazine after oral administration of mg procarbazine in the form of capsules and drinking solution, respectively. An HPLC method with UV detection ( nm) and detection limits of and ng/ml was developed for procarbazine and azoprocarbazine, respectively. After both the capsules and the drinking solution, the parent drug could be detected in plasma for only h. In contrast, the terminal elimination half-life of azoprocarbazine was estimated in the range of . to . h, with a mean of . ± . h. The AUC of procarbazine was less than % of that of azoprocarbazine. Cmax values of azoprocarbazine were in the range of . to . µg/ml. Based on the plasma levels of azoprocarbazine, we determined a bioavailability of the therapeutically used procarbazine capsules, relative to the drinking solution, of . ± . %.

Prostaglandin E1 (PGE1) is used for the treatment of patients with peripheral arterial disease and is probably effective owing to its vasodilator and antiplatelet effects. L-arginine is the precursor of endogenously synthesized nitric oxide (NO). In healthy human subjects, L-arginine also induces peripheral vasodilation and inhibits platelet aggregation due to increased NO production. In the present study the influence of a single intravenous dose of L-arginine ( g, min) or PGE1 ( µg, min) on blood pressure, peripheral hemodynamics (femoral artery duplex sonography), and urinary NO3⁻ and cGMP excretion rates was assessed in ten patients with peripheral arterial disease (Fontaine III-IV). Blood flow in the femoral artery was significantly increased by L-arginine, by % (p < . ), and by PGE1, by % (p < . ). L-arginine decreased systolic and diastolic blood pressure more strongly than PGE1. The plasma arginine concentration was increased -fold by L-arginine but was unaffected by PGE1. Urinary excretion of NO3⁻ increased by % after L-arginine (p < . ) and by % after PGE1 (p = n.s.). Urinary cGMP excretion increased by % after L-arginine and by % after PGE1 (each p = n.s.). We conclude that intravenous L-arginine decreases peripheral arterial resistance, resulting in enhanced blood flow and decreased blood pressure in patients with peripheral arterial disease. These effects were paralleled by increased urinary NO3⁻ excretion, indicating that systemic NO production was enhanced by the infusion. The increased NO3⁻ excretion may be the sum of NO synthase substrate provision (L-arginine) and increased shear stress (PGE1 and L-arginine).

It is well established that the endothelial EDRF/NO-mediated relaxing mechanism is impaired in atherosclerotic and in hypertensive arteries. Recently it was suggested that primary pulmonary hypertension might be another disease in which the endothelial EDRF/NO pathway is disturbed. We tested the hypothesis that intravenous administration of L-arginine (L-Arg), the physiological precursor of EDRF/NO, stimulates the production of NO, subsequently increasing plasma cGMP levels and reducing systemic and/or pulmonary vascular resistance, in patients with coronary heart disease (CHD; n = ) and with primary pulmonary hypertension (PPH; n = ).
L-Arg ( g, min) or placebo (NaCl) was infused in the CHD patients, and L-Arg was infused in the PPH patients, all undergoing cardiac catheterization. Mean aortic (Pao) and pulmonary (Ppul) arterial pressures were continuously monitored. Cardiac output (CO; by thermodilution) and total peripheral resistance (TPR) were measured before and during the infusions. Plasma cGMP was determined by RIA. In the CHD patients, Pao decreased from . ± . to . ± . mmHg during L-Arg (p < . ), whereas Ppul was unchanged. TPR decreased from . ± . to . ± . dyn·s·cm⁻⁵ during L-Arg administration (p < . ). CO increased significantly during L-Arg (from . ± . to . ± . l/min, p < . ). Placebo did not significantly influence any of the haemodynamic parameters. cGMP increased slightly, by . ± . %, during L-Arg, but decreased slightly during placebo (− . ± . %) (p < . for L-Arg vs. placebo). In the PPH patients, L-Arg induced no significant change in Pao, TPR, or CO. Mean Ppul was . ± . mmHg at the beginning of the study and was only slightly reduced by L-Arg, to . ± . mmHg (p = n.s.). Plasma cGMP was not affected by L-Arg in these patients. We conclude that L-Arg stimulates NO production and induces vasorelaxation in CHD patients, but not in patients with primary pulmonary hypertension. Thus, the molecular defects underlying the impaired NO formation may be different in the two diseases. Institutes of Clinical Pharmacology, *Cardiology, and **Pneumology, Medical School, Hannover, Germany.

The influence of submaximal exercise on the urinary excretion of 2,3-dinor-6-keto-PGF1α (the major urinary prostacyclin metabolite), 2,3-dinor-TXB2 (the major urinary thromboxane A2 metabolite), and PGE2 (originating from the kidney), and on platelet aggregation, was assessed in untrained and endurance-trained male subjects before and after days of mg/day of aspirin. Urinary 2,3-dinor-TXB2 excretion was significantly higher in the athletes at rest (p < . ). Submaximal exercise increased urinary 2,3-dinor-6-keto-PGF1α excretion without affecting 2,3-dinor-TXB2 or PGE2 excretion or platelet aggregation. Aspirin treatment induced an – % inhibition of platelet aggregation and of 2,3-dinor-TXB2 excretion in both groups. However, urinary 2,3-dinor-6-keto-PGF1α was inhibited by only % in the untrained, but by % in the trained group (p < . ). Urinary PGE2 was unaffected by aspirin in both groups, indicating that cyclooxygenase activity was not impaired by a systemic aspirin effect. After low-dose aspirin administration, the same selective stimulatory effect of submaximal exercise on urinary 2,3-dinor-6-keto-PGF1α excretion was noted in both groups as before. The ratio of 2,3-dinor-6-keto-PGF1α to 2,3-dinor-TXB2 was increased by exercise; this effect was potentiated by aspirin (p < . ). Our results suggest that the stimulatory effect of submaximal exercise on prostacyclin production is not due to an enhanced prostacyclin endoperoxide shift from activated platelets to the endothelium, but is rather the result of activation of endothelial prostacyclin synthesis from endogenous precursors. mg/day of aspirin potentiates the favorable effect of submaximal exercise on endothelial prostacyclin production by selectively blocking platelet cyclooxygenase activity. Institute of Clinical Pharmacology, Medical School, Hannover, Germany.

Soluble guanylyl cyclases (GC-S) are heterodimeric hemeproteins consisting of two protein subunits ( kDa, kDa). The enzyme is activated by nitric oxide (NO) and catalyzes the formation of the signal molecule cGMP (cyclic guanosine-3',5'-monophosphate) from GTP.
Numerous physiological effects of cGMP are already very well characterized. However, detailed insight into the NO-activation mechanism of this enzyme has to date been described only in a hypothetical model ( ). Recently, this concept was supported by experimental data using site-directed mutagenesis to create an NO-insensitive soluble guanylyl cyclase mutant ( ). It is generally accepted that the prosthetic heme group plays a crucial role in the activation mechanism of this protein. Nonetheless, some interesting questions regarding the structure and regulation of soluble guanylyl cyclases remain open (e.g. activation by other free radicals, such as carbon monoxide). Since such studies have so far been limited by the difficulty of isolating large quantities of biologically active enzyme with conventional purification techniques, the recombinant protein was expressed in the baculovirus/insect cell system. We describe here the construction and characterization of recombinant baculoviruses harboring the genes that encode both protein subunits of the soluble guanylyl cyclase. Insect cells infected with these recombinant baculoviruses produce between and % (relative to total cell protein) of functional soluble guanylyl cyclase. Positive infection was monitored as a change in the morphology of the cells and by detection of the respective recombinant viruses by polymerase chain reaction (PCR). As far as examined, the recombinant enzyme exhibits physicochemical characteristics similar to those of the "natural" protein. Exogenous addition of several heme analogues to the infected cells can either stimulate or inhibit the enzymatic activity of GC-S. We are confident of purifying milligram amounts of the recombinant protein in the near future.

PET studies of myocardial pharmacology have principally concerned the sympathetic nervous system, and tracers have been developed to probe the integrity of both pre- and post-synaptic sites. The sympathetic nervous system plays a crucial role in the control of heart rate and myocardial contractility as well as in the control of the coronary circulation. Alterations of this system have been implicated in the pathophysiology of a number of cardiac disorders, in particular heart failure, ventricular arrhythmogenesis, coronary artery disease, and idiopathic dilated and hypertrophic cardiomyopathy. Several beta-blockers have been labelled with carbon-11 for imaging by PET. The most promising of these is CGP 12177, a non-selective beta-adrenoceptor antagonist particularly suited to PET studies owing to its high affinity and low lipophilicity, which enable the functional receptor pool on the cell surface to be studied. Studies in our institution in a group of young healthy subjects have yielded Bmax values of . ± . pmol/g myocardium. These data are consistent with literature values of Bmax for beta-adrenoceptors in human ventricular myocardium determined by a variety of in vitro assays. A recent study in patients with hypertrophic cardiomyopathy has shown that myocardial beta-adrenoceptor density is decreased by approximately – % relative to values in normal subjects. The decrease in receptor density occurs in both hypertrophied and non-hypertrophied portions of the left ventricle. These data are consistent with the hypothesis that sympathetic overdrive might be involved in the phenotypic expression of hypertrophic cardiomyopathy. A further decrease of myocardial beta-adrenoceptor density (to levels well below . pmol/g) has been observed in those patients with hypertrophic cardiomyopathy who progress to ventricular dilatation and heart failure.
CYP1A1 hydroxylates polycyclic aromatic hydrocarbons such as benzo(a)pyrene, occurring e.g. in cigarette smoke. Two hereditary mutations have been described: m1, a T-to-C transition bp downstream of exon ; and m2, located at position in exon , an A-to-G transition resulting in an isoleucine-to-valine substitution in the heme-binding region. Recently we demonstrated in Caucasians that carriers of the m2 mutation have an increased risk of lung cancer (Drakoulis et al., Clin. Investig. : , ), whereas the m1 mutation shows no such association. The phase-II enzyme GSTM1 catalyses the conjugation of glutathione to electrophilic compounds such as the products of CYP1A1. GSTM1 is absent in . % of the Caucasian population due to base deletions in exons and of the gene. We found no difference in the GSTM1 distribution, including the frequencies of type A and type B, among lung cancer patients (odds ratio = . , n = ; Cancer Res. : , ). Lung cancer patients and reference patients were investigated for mutations of CYP1A1 and GSTM1 by allele-specific PCR and RFLP. A statistically significantly higher risk of lung cancer among carriers of the m2 trait was found (odds ratio = . , p = . ). Interestingly, among lung cancer patients m2 alleles were less often linked to m1 than in controls (odds ratio = . , %-confidence limits . – . , p = . ). However, the frequency of CYP1A1 mutations did not differ between active and defective GSTM1 types. Consequently, we could not confirm in the Caucasian population the synergistic effect of CYP1A1 mutations (especially m2) and deficient GSTM1 as combined susceptibility factors for lung cancer described among the Japanese (Cancer Res. : , ).

In healthy subjects the effects of gastrointestinal hormones like somatostatin and glucagon on splanchnic hemodynamics are not well defined, owing to the invasiveness of direct measurements such as portal vein (PV) wedged pressure. Methods: We applied duplex sonography ( . MHz) and color-coded flow mapping to compare the effects of octreotide ( µg s.c.), a long-acting somatostatin agonist, and glucagon ( mg i.v.) on the hemodynamics of the PV, the superior mesenteric artery (SMA) and the common hepatic artery (HA) in healthy volunteers ( male, female; ± y; mean ± SEM). Basal values of PV flow ( . ± . cm/s), PV flow volume ( ± ml/min), SMA systolic (SF: ± cm/s) and diastolic flow (DF: ± cm/s), SMA Pourcelot index (PI) ( . ± . ), HA SF ( ± cm/s), HA DF ( ± cm/s) and HA PI ( . ± . ) agreed well with previously reported results. Within min, octreotide produced a decrease in SMA SF (− ± %), SMA DF (− ± %), HA SF (− ± %) and HA DF (− ± %). The maximum drop in PV flow (− ± %) and flow volume (− ± %) occurred at min. All effects had diminished by min. No significant change in vessel diameter or PI was seen. min after its application, glucagon caused a highly variable, only short-lasting increase in PV flow volume (+ ± %) and SMA DF (+ ± %). HA DF (+ ± %) showed a tendency to rise (n.s.). We conclude that in clinical pharmacology duplex sonography is a valuable aid for measuring the effects of hormones and drugs on splanchnic hemodynamics.

Anginal pain and signs of silent myocardial ischemia frequently occur in hypertensives, even in the absence of coronary artery disease (CAD) and/or left ventricular hypertrophy, probably due to a reduced coronary flow reserve.
Since the oxygen extraction of the heart is nearly maximal at rest, increases in oxygen demand cannot be balanced by increases in myocardial perfusion. To assess the frequency of ischemic-type ST-segment depressions in these patients and to determine the influence of heart rate (HR) and blood pressure (BP), simultaneous h Holter and h ambulatory BP monitoring were performed in hypertensives (age – years, f, m) without CAD, before and after four weeks of therapy with the β-blocker betaxolol. Episodes of significant ST-segment depression (> . mV, > min), with a total duration of min, were demonstrated in / patients ( %) without antihypertensive therapy. Systolic BP increased significantly from ± . mmHg (mean ± SD, p < . ) min before the ischemic episodes to a maximum of ± . mmHg during them; HR and the rate-pressure product (RPP) increased from ± . min⁻¹ and . ± . mmHg·min⁻¹ to ± . min⁻¹ and . ± . mmHg·min⁻¹ (p < . ). The extent of the ST-segment depressions correlated significantly with HR and RPP (p < . ). Drug therapy with – mg/d betaxolol for weeks significantly decreased mean HR and systolic and diastolic BP (p < . ). Ischemic episodes, with a total duration of min, were then recorded in only of the hypertensives ( . %; p < . ; χ²-test). In conclusion, increases in HR and systolic BP seem to be the most important factors inducing myocardial ischemia in hypertensives without CAD. As silent ischemia is an independent risk factor for sudden cardiac death and other cardiac events, specific antihypertensive therapy should aim not only to normalize blood pressure but also to reduce ischemic episodes, as demonstrated here.

Phosphodiesterase inhibitors exert their positive inotropic effects by inhibiting cAMP degradation and increasing the intracellular calcium concentration in cardiomyocytes. An identical phosphodiesterase type III has been demonstrated in platelets and vascular smooth muscle cells. We studied the influence of piroximone on platelet function in vitro and ex vivo, and the hemodynamic effects of a bolus application of piroximone in patients with severe heart failure (NYHA III-IV), using a Swan-Ganz catheter. To study the influence of piroximone on platelet function in vitro, platelet-rich plasma from healthy volunteers was incubated with piroximone ( – µmol/l) for minute to hours, and aggregation was induced by the addition of ADP. For the ex vivo experiments, platelet-rich plasma was obtained from patients who received piroximone in doses of . , . , . or . mg/kg b.w. Blood samples were drawn immediately before and , , , and minutes after the bolus application. ADP-induced platelet aggregation was inhibited time- and dose-dependently. The IC50 value for piroximone in vitro amounted to ± µmol/l. In the ex vivo experiments, the maximal inhibition of ADP-induced aggregation was obtained in PRP from patients who had received mg/kg b.w. piroximone minutes before. The administration of piroximone resulted in a marked hemodynamic improvement, with a dose-dependent increase in cardiac index and decreases in pulmonary artery pressure and resistance.

Intensive care patients receive numerous drugs to treat conditions associated with acute and chronic multiorgan dysfunction. Studies indicate that patients receive approximately ten drugs on average during their ICU stay, from several drug classes. Commonly prescribed drugs include narcotics, sedatives, antibiotics, antiarrhythmics, antihypertensives, drugs for stress ulcer prophylaxis, diuretics, vasopressors, and inotropes.
Reports suggest that surgical ICU patients cost the hospital an average of $ , per patient in unreimbursed costs under fixed-price reimbursement. Furthermore, the patients representing the greatest drain on revenue received catecholamines, triple antibiotics, or antifungal agents. Thrombolytics, antibiotics, plasma expanders, and benzodiazepines account for nearly two-thirds of the cost of drugs prescribed in medical and surgical ICUs. Agents with considerable economic impact include biotechnology drugs for sepsis. Pharmacoeconomic data in ICU patients suggest that increased attention should be directed towards several areas, including patients with pneumonia, intraabdominal sepsis, or nosocomial bloodstream infections, optimizing sedation and analgesic therapy, preventing persistent paralysis from neuromuscular blockers, preventing stress ulcers, treating hypotension, and providing optimal nutritional support. Studies are needed to assess the impact of strategies to improve ICU drug prescribing on length of stay and quality of life. If expensive drugs are shown to decrease the length of ICU stay, then their added costs can have positive economic benefits for the health care system.

The responses to min i.v. infusions of the β1- and β2-adrenoceptor agonist isoprenaline (ISO) and the β- (and α-) adrenoceptor agonist adrenaline (ADR) at constant rates of µg/min were evaluated noninvasively after pretreatment with placebo (PL), mg of the β1-selective adrenoceptor antagonist talinolol (TAL), or mg of the non-selective antagonist propranolol (PRO) in healthy subjects. The following were analysed: heart rate (HR, bpm), pre-ejection period (PEP, ms), ejection time (VET, ms), HR-corrected electromechanical systole (QS2c, ms), impedance-cardiographic estimates of stroke volume (SV, ml) and cardiac output (CO, l/min), and total peripheral resistance (TPR, dyn·s·cm⁻⁵) calculated from CO and mean blood pressure (SBP and DBP according to auscultatory Korotkoff-I and -IV sounds). The results indicate (1) that about half the rise of HR and CO and half the shortening of PEP is β1- and β2-determined, respectively; (2) that predominantly β-adrenergic responses, whilst not affecting VET, take optimal benefit from the inodilatory enhancement of pump performance; (3) that additional α-adrenergic stimulation is proportionally less efficient, as VET is dramatically shortened, blunting the gain in SV so that the rise in CO relies substantially on the amplified increase of HR; (4) that VET is more sensitive than QS2c in expressing additional β-adrenoceptor agonism; and (5) that prime systolic time intervals provide a less speculative and physiologically more meaningful representation of cardiac pump dynamics than HR-corrected ones. Zentrum für kardiovaskuläre Pharmakologie, Mathildenstraße, Mainz, BRD.

A regression between the blunting of the ergometric rise of heart rate and β1-adrenoceptor occupancies in healthy man. C. de Mey, D. Palm, K. Breithaupt-Grögler, G.G. Belz. The HR responses to supine bicycle ergometry ( min at approx. watt) were investigated at several time points after the administration of propranolol (PRO: , , mg), carvedilol (CAR: . , , , mg), talinolol (TAL: , , , mg), metoprolol (MET: mg) and celiprolol (CEL: mg) to healthy men. The effects of the agents (i.e. the difference in the ergometric response between active drug and placebo) were analysed for both the end values (END) and the increments (INC) from resting values immediately before ergometry up to END.
The effects were correlated with the %-β1-adrenoceptor occupancies estimated using a standard Emax model (sigmoidicity = 1) from the concentrations of active substance in plasma determined by β1-adrenoceptor-specific radioreceptor assay. The respective intercepts (i), slopes (s) and correlation coefficients (r) are detailed in the table.

Inhibition of leukotrienes is a promising approach to the treatment of several diseases, because excess formation of these lipid mediators has been shown to play an important role in a wide range of pathophysiological conditions. Since until recently specific drugs suppressing leukotriene biosynthesis or action were not available for clinical practice, we began investigating the effects of putative natural modulators of leukotriene biosynthesis such as fish oil. Healthy male volunteers were supplemented for days with fish oil providing mg eicosapentaenoic and docosahexaenoic acid per kg body weight per day. The urinary concentration of leukotriene E4 plus N-acetyl-leukotriene E4 served as a measure of endogenous leukotriene production. Treatment resulted in a significant increase in the eicosapentaenoate concentration in red blood cell membranes. Fish oil reduced endogenous leukotriene generation in of the volunteers. The effect was associated with a decrease in urinary prostaglandin metabolites, determined as tetranorprostanedioic acid. In contrast to what was expected from published in vitro and ex vivo experiments, no endogenously generated cysteinyl leukotrienes of the 5-series could be identified. The inhibitory effect of fish oil on endogenous leukotriene generation was not synergistic with the effect of vitamin E, which also exhibited some suppressive activity. Early clinical data on the effects of fish oil on leukotriene production in patients with allergy or rheumatoid arthritis are not yet conclusive. We conclude that fish oil exhibits some inhibitory activity on leukotriene production in vivo. The effectiveness of fish oil may be attenuated by concomitant modulation of other mediator systems, e.g. up-regulation of tumor necrosis factor production.

The number and affinity of platelet thromboxane A2 (TXA2) and prostacyclin (PGI2) receptors are regulated by several factors. We studied the influence of oral intake of acetylsalicylic acid (ASA), in ex vivo binding studies with human platelet membranes, on the binding of the specific thromboxane A2 antagonist [³H]SQ-29548 and the PGI2 agonist [³H]iloprost. The number of receptors (Bmax) and the binding affinity (Kd) were calculated using Scatchard plot analysis. In healthy male volunteers no significant difference was seen following intake of mg/d of ASA for days (mean ± SEM).
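The Scatchard analysis named above can be made concrete with a short numerical sketch. The following Python snippet is illustrative only: the saturation-binding data are invented, and a simple unweighted linear regression is assumed, since the abstract does not state how the fit was performed.

```python
import numpy as np

def scatchard(bound, free):
    """Scatchard analysis: regress bound/free against bound.
    For a one-site model, slope = -1/Kd and the x-intercept = Bmax."""
    ratio = bound / free
    slope, intercept = np.polyfit(bound, ratio, 1)
    kd = -1.0 / slope
    bmax = -intercept / slope  # x-intercept of the regression line
    return kd, bmax

# Hypothetical saturation-binding data (bound: fmol/mg protein, free: nM):
bound = np.array([12.0, 21.0, 28.0, 33.0, 36.0])
free = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
kd, bmax = scatchard(bound, free)
print(f"Kd = {kd:.2f} nM, Bmax = {bmax:.1f} fmol/mg")
```

In practice a nonlinear fit of the untransformed binding isotherm is statistically preferable; the Scatchard transformation is shown here only because it is the method the abstract names.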
The potency of meloxicam (MEL), a new anti-inflammatory drug (NSAID), in the rat is higher than that of well-known NSAIDs. In adjuvant arthritis rats, MEL is a potent inhibitor of both the local and the systemic signs of the disease. MEL is also a potent inhibitor of PG biosynthesis by leukocytes found in pleuritic exudate in rats. Conversely, its effects on PG biosynthesis in isolated enzyme preparations from bull seminal vesicles in vitro, on intragastric and intrarenal PG biosynthesis, and on the TXB2 level in rat serum are weak. In spite of its high anti-inflammatory potency in the rat, MEL shows low gastrointestinal toxicity and nephrotoxicity in rats. Cyclooxygenase-2 (COX-2) has recently been identified as an isoenzyme of cyclooxygenase. NSAIDs are anti-inflammatory through inhibition of PG biosynthesis by the inducible COX-2, and are ulcerogenic and nephrotoxic through inhibition of the constitutive COX-1. We investigated the effects of MEL and other NSAIDs on COX-1 of non-stimulated and on COX-2 of LPS-stimulated guinea pig peritoneal macrophages. Cells were cultured with and without LPS for hrs together with the NSAID. Arachidonic acid was then added for a further min, the medium was removed, and PGE2 was measured by RIA.

Bimakalim (EMD 52692) is a new investigational K⁺-channel activator with vasodilating properties. Single peroral doses of . mg bimakalim and mg diltiazem, either alone or in combination, were investigated in healthy male supine volunteers ( to years of age) in a placebo-controlled, period-balanced, randomised, double-blind, -way crossover design. Point estimates of the global effects of bimakalim [K], diltiazem [D] and their interaction [K×D; = in the case of mere additivity], including % confidence intervals (CI), were analysed for systolic and diastolic blood pressure (SBP, DBP; mmHg), heart rate (HR; bpm), PQ interval (ms), systolic time intervals (PEP, QS2c, LVETc; ms), cardiac output (CO; l·min⁻¹), total peripheral resistance (TPR; dyn·s·cm⁻⁵) and Heather index (HI), h after dosing (*statistically significant at α = . ). Afterload reduction and a drop in DBP occurred with bimakalim, associated with a rise in HR and a mild increase in cardiac performance; diltiazem (slightly) decreased afterload and BP, with little (reflectory) accompanying change, and had a negative dromotropic effect. The combination caused additive effects. Center for Cardiovascular Pharmacology, ZeKaPha GmbH, Mathildenstr., Mainz, Germany.

Rheumatoid arthritis (RA) is characterized by an immunologically mediated inflammatory reaction in the affected joints. Infiltration of granulocytes and monocytes is the pathophysiological hallmark of the initial phase of inflammation. These cells are able to synthesize leukotrienes. LTB4 is a potent chemotactic factor and could therefore be responsible for the influx of granulocytes from the circulation. The cysteinyl leukotrienes LTC4, LTD4 and LTE4 augment vascular permeability and are potent vasoconstrictors. LTB4 and cysteinyl leukotrienes have been detected in the synovial fluid of patients with RA. However, these results are difficult to interpret, because the sampling procedure is invasive and artefactual synthesis cannot be excluded. We used a different, noninvasive approach by assessing the excretion of LTE4 into urine. Studies with [³H]LTC4 have demonstrated that LTE4 is excreted unchanged into urine and is the major urinary metabolite of cysteinyl leukotrienes in man. Urinary LTE4 was isolated from an aliquot of a -hour urine collection by solid-phase extraction followed by HPLC and quantitated by RIA. Nine patients were enrolled in the present study; all met the American College of Rheumatology criteria for RA. The patients were treated with nonsteroidal anti-inflammatory drugs and disease-modifying drugs. Therapy with prednisolone was started after collection of the initial -hour urine sample. Disease activity was assessed by CRP (mean ± mg/l) and ESR (mean ± mm/hour).

Platelet aggregation is mediated by the binding of an adhesive protein, fibrinogen, to a surface receptor, the platelet glycoprotein IIb/IIIa.
GPIIb/IIIa is a member of a family of adhesion receptors, the integrins, which consist of a Ca²⁺-dependent complex of two distinct protein subunits. Under resting conditions, GPIIb/IIIa has a low affinity for fibrinogen in solution. However, activation of platelets by most agonists, including thrombin, ADP and thromboxane, results in a conformational change in the receptor and the expression of a high-affinity site for fibrinogen. Binding of fibrinogen to platelets is a common end-point for all agonists and is therefore a potential target for the development of antiplatelet drugs. Such agents have included chimeric, partially humanised antibodies (7E3), and peptides and peptidomimetics that bind to the receptor and prevent fibrinogen binding. The peptides often include the sequence RGD, which is present in fibrinogen and is one of the ligand's binding sites. When administered in vivo, antagonists of GPIIb/IIIa markedly suppress platelet aggregation in response to all known agonists, without altering platelet shape change, a marker of platelet activation. They also prolong the bleeding time in a dose- and perhaps drug-dependent manner, often to more than min. In experimental models of arterial thrombosis, GPIIb/IIIa antagonists have proved highly effective and are more potent than aspirin. Studies in man have focused on coronary angioplasty, unstable angina and coronary thrombolysis, and have given promising results. 7E3, given as a bolus and infusion combined with aspirin and heparin, reduced the need for urgent revascularisation in patients undergoing high-risk angioplasty, although bleeding was more common. Some compounds have shown oral bioavailability, raising the possibility that these agents could be administered chronically. Antagonists of the platelet GPIIb/IIIa provide a novel and potent approach to antithrombotic therapy.

Drug databases on computers are commonly text files, or consist of tables of generic names or prices, for example. Until now, pharmacokinetic data have not been readily available for routine use, because searching for parameters in a text file is time-consuming and personnel-intensive. Yet these pharmacokinetic data are the fundamental background of every dosage regimen and of individual dosage adjustment. For many drugs, elimination depends on the patient's renal function; renal failure leads to accumulation, possibly up to toxic plasma concentrations. We therefore decided to build a pharmacokinetic database. The aim is to achieve simplicity and effectiveness by using the basic rules. Only three parameters are needed to describe the pharmacokinetics: clearance (CL), volume of distribution (Vd) and half-life (t½); moreover, given any two of these parameters, the third can be calculated and checked by the equation CL = Vd · 0.693 / t½. Following the Dettli equation and Bayes' theorem, estimation of individual pharmacokinetic parameters will be performed by a computer program. The advantage is that the impact of therapeutic drug monitoring can be increased: using population data and the Bayesian approach, a single measurement of the serum drug concentration may be enough to derive an individual dosage regimen (El Desoky et al., Ther Drug Monit , : ). Higher therapeutic safety for the patient can thus be achieved. There is also a major pharmacoeconomic aspect: adapting drug dosage reduces costs (Susanka et al., Am J Hosp Pharm , : ). The basic database for future pharmacokinetic clinical decisions is now being built up.
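The two relationships this abstract builds on, CL = Vd · 0.693/t½ and the Dettli-type scaling of clearance with renal function, are compact enough to sketch directly. The snippet below is a minimal illustration with invented numbers; the function names and all parameter values are hypothetical and not taken from the abstract.

```python
from math import log

LN2 = log(2)  # 0.693, the constant in CL = Vd * 0.693 / t1/2

def clearance(vd_l, t_half_h):
    """Total clearance (l/h) from volume of distribution (l) and half-life (h)."""
    return vd_l * LN2 / t_half_h

def dettli_clearance(cl_normal, q0, clcr, clcr_normal=100.0):
    """Linear Dettli model: scale clearance with creatinine clearance (ml/min).
    q0 is the fraction of the drug eliminated extrarenally."""
    return cl_normal * (q0 + (1.0 - q0) * clcr / clcr_normal)

cl = clearance(vd_l=42.0, t_half_h=6.0)          # ~4.9 l/h in a normal subject
cl_ri = dettli_clearance(cl, q0=0.3, clcr=20.0)  # clearance in renal impairment
dose_ri = 200.0 * cl_ri / cl                     # maintenance dose scaled to clearance
print(f"CL {cl:.2f} -> {cl_ri:.2f} l/h; 200 mg -> {dose_ri:.0f} mg")
```

The same consistency check (any two of CL, Vd and t½ determine the third) can be used to validate entries before they are stored in the database.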
The pharmacokinetic interactions with grapefruit juice reported for many drugs are attributed to the inhibition of cytochrome P450 enzymes by naringenin, the aglycone of the bitter juice component naringin. However, only circumstantial evidence exists that naringenin is actually formed when grapefruit juice is ingested, and the lack of drug interaction when naringin solution is given instead of the juice is still unexplained. We investigated the pharmacokinetics of naringin, naringenin and its conjugated metabolites following ingestion of ml grapefruit juice per kg body weight, containing µM naringin, in male and female healthy adults. Urine was collected – , – , – , – , – , – , – and – hours after juice intake. Naringin and naringenin concentrations were measured by reversed-phase HPLC following extraction with ethyl acetate, with a limit of quantitation of nM. Conjugated metabolites in urine were converted by incubation with glucuronidase ( U/ml) / sulfatase ( U/ml) from abalone entrails for h at pH . and determined as the parent compounds. Additionally, naringin and naringenin concentrations were measured in plasma samples from grapefruit juice interaction studies conducted previously. Neither naringin nor its conjugated products were detected in any of the samples, and naringenin was not found in plasma. Small amounts of naringenin appeared in urine after a median lag time of hours and reached up to . % of the dose (measured as naringin). After treatment with glucuronidase/sulfatase, up to % of the dose was recovered in urine. The absence of naringin and its conjugates, and the lag time observed before naringenin appears in urine, suggest that cleavage of the sugar moiety may be required before the flavonoid can be absorbed as the aglycone. Naringenin itself undergoes rapid phase II metabolism. Whether the conjugated metabolite is a potent cytochrome P450 inhibitor is unknown, but it is not probable. The pronounced variability of naringenin excretion provides a possible explanation for the apparently contradictory results of grapefruit and/or naringin interaction studies.

Grapefruit juice increases the oral bioavailability of almost every dihydropyridine tested, presumably through inhibition of first-pass metabolism mediated by the cytochrome P450 isoform CYP3A4/5. The mean extent of the increase was up to threefold, observed for felodipine, and more pronounced drug effects have also been reported. Such an interaction may thus be of considerable clinical relevance. No data are yet available for nimodipine. We conducted a randomized crossover interaction study of the effects of concomitant intake of grapefruit juice on the pharmacokinetics of nimodipine and its metabolites (the pyridine analogue, the demethylated metabolite, and the demethylated pyridine analogue). Healthy young men ( smokers / nonsmokers) were included in the investigation. Nimodipine was given as a single mg tablet (Nimotop®) with either ml of water or ml of grapefruit juice (Döhler GmbH, Darmstadt; mg/l naringin). Concentrations of nimodipine and its metabolites in plasma withdrawn up to hours postdose were measured by GC-ECD, and model-independent pharmacokinetic parameters were estimated. The study was handled as an equivalence problem, and ANOVA-based % confidence intervals were calculated for the test (= grapefruit period) to reference (= water period) ratios. The absence of a relevant interaction was assumed if the CI lay within the . to . range.
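The equivalence analysis used in this and the following study, confidence intervals for the test/reference ratios of log-normally distributed parameters, can be sketched as follows. This is a deliberate simplification: it treats the crossover as paired within-subject log-ratios instead of fitting the full ANOVA with sequence and period effects, it assumes the customary 90% interval, and the AUC values are invented.

```python
import numpy as np
from scipy import stats

def gmr_ci(test, ref, alpha=0.10):
    """Geometric-mean test/reference ratio with (1 - alpha) CI,
    computed from within-subject log-ratios (paired simplification)."""
    d = np.log(test) - np.log(ref)
    n = len(d)
    mean, se = d.mean(), d.std(ddof=1) / np.sqrt(n)
    t = stats.t.ppf(1.0 - alpha / 2.0, df=n - 1)
    return np.exp(mean), np.exp(mean - t * se), np.exp(mean + t * se)

# Hypothetical AUC values (ng*h/ml) for 6 subjects, test vs. reference period:
auc_test = np.array([812.0, 645.0, 990.0, 702.0, 858.0, 731.0])
auc_ref = np.array([798.0, 690.0, 941.0, 745.0, 820.0, 755.0])
gmr, lo, hi = gmr_ci(auc_test, auc_ref)
print(f"GMR {gmr:.3f}, 90% CI {lo:.3f}-{hi:.3f}")
# The absence of a relevant interaction is concluded if the CI lies
# entirely within the predefined equivalence range (e.g. 0.80-1.25).
```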
Grapefruit juice has been reported to inhibit the metabolism of a variety of drugs, including dihydropyridines, verapamil, terfenadine, cyclosporine, and caffeine. These drugs are metabolized mainly by the cytochrome P450 isoforms CYP1A2 (caffeine and, in part, verapamil) and CYP3A4 (the others). Theophylline has a therapeutic range of – mg/l and is also in part metabolized by CYP1A2. We therefore conducted a randomized changeover interaction study of the effects of concomitant intake of grapefruit juice on the pharmacokinetics of theophylline. Healthy young male nonsmokers were included. Theophylline was given as a single dose of mg in solution (Euphyllin®), diluted with either ml of water or ml of grapefruit juice (Döhler GmbH, Darmstadt; mg/l naringin). Subsequently, additional fractionated volumes of . l of either juice or water were administered until hours postdose. Theophylline concentrations in plasma withdrawn up to hours postdose were measured by HPLC, and the pharmacokinetics were estimated using compartment-model-independent methods. The study was handled as an equivalence problem, and ANOVA-based % confidence intervals were calculated for the test (= grapefruit period) to reference (= water period) ratios (for tmax: differences). Thus, no inhibitory effect of grapefruit juice on theophylline pharmacokinetics was observed. The lower contribution of CYP1A2 to primary theophylline metabolism, or differences in naringin and/or naringenin kinetics, are possible explanations for the apparent contradiction between the effects of grapefruit juice on caffeine and on theophylline metabolism.

The physical stability of erythromycin stearate film tablets was studied according to a factorial design with the experimental variables temperature, relative humidity, and storage time. After half a year of storage at °C and % relative humidity, the fraction of the dose released within min in a USP XXI paddle apparatus under standard conditions decreased from % for the reference, stored at ambient temperature in intact blister packages, to % for the stress-tested specimens. Chemical degradation of the active ingredient did not become apparent before months of storage. Under all other storage conditions, no effects of physical aging upon drug release were found. The bioequivalence of the reference and stress-tested samples was studied in six healthy volunteers. The extent of relative bioavailability of the test product was markedly reduced (mean: . %; range: – %), and the mean absorption times of the test product were significantly prolonged. The results indicate that the product tested can undergo physical alteration upon storage under unfavourable conditions and lose its therapeutic efficacy. It can be expected that this phenomenon is reduced by suitable packaging, but the magnitude of the deterioration may cause concern. On the other hand, incomplete drug release is in this case easily detected by dissolution testing. Whether similar correlations exist for other erythromycin formulations remains to be demonstrated.

The efficacy of a drug therapy is considerably influenced by patient compliance. Within clinical trials, the effects of poor compliance on the interpretation of study results frequently lead to underestimation of the efficacy of the treatment. In the evaluation of the "Lipid Research Clinics Primary Coronary Prevention Trial" and the "Helsinki Heart Study", special attention was focused on compliance with medication.
The strong influence of compliance on clinical outcome and the dilutional effect of poor compliance on the efficacy of the respective drugs were evident in both of these trials. There are indirect methods (e.g. pill count, patient interview) and direct methods (e.g. measurement of drugs, metabolites or chemical markers in body fluids) for assessing compliance with drug therapy. The indirect methods mentioned are commonly considered unreliable. The direct methods can prove dose ingestion shortly before the sample is taken, but they cannot show the time history of drug use. An advanced method of measuring compliance is to use electronic devices. The integration of time/date-recording microcircuitry into pharmaceutical packaging, so as to compile a time history of package use, provides real-time data indicating when dosing occurred. This method supports a precise, quantitative definition of "patient compliance" as the extent to which the actual time history of dosing corresponds to the prescribed drug regimen. By taking real-time compliance data into account, clinical trials yield not only clearer evaluations of drug efficacy and of the dose-response relationship but also a better understanding of dose-dependent adverse drug reactions.
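Given timestamped package-opening events, the quantitative definition above, i.e. the extent to which the actual dosing history corresponds to the prescribed regimen, can be turned into a simple metric. The sketch below is one possible operationalization with invented data; the ± h tolerance window is an assumption of this example, not part of the abstract.

```python
from datetime import datetime, timedelta

def taking_compliance(openings, start, n_doses, interval_h=24, window_h=6):
    """Fraction of scheduled doses with at least one recorded package
    opening within +/- window_h hours of the scheduled time."""
    taken = 0
    for k in range(n_doses):
        due = start + timedelta(hours=k * interval_h)
        lo, hi = due - timedelta(hours=window_h), due + timedelta(hours=window_h)
        if any(lo <= t <= hi for t in openings):
            taken += 1
    return taken / n_doses

# Invented once-daily regimen over 5 days with one missed dose:
start = datetime(1994, 9, 1, 8, 0)
openings = [datetime(1994, 9, 1, 8, 10), datetime(1994, 9, 2, 9, 30),
            datetime(1994, 9, 4, 7, 45), datetime(1994, 9, 5, 8, 5)]
print(f"compliance: {taking_compliance(openings, start, n_doses=5):.0%}")  # 80%
```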
In the present study, we examined the usefulness of Eroderm-1 and Eroderm-2. Seventy-five impotent men, to years old, participated in the trial. The patients were divided into groups of patients each. The first group was treated with a cream containing only co-dergocrine mesilate (Eroderm-1); the second received a cream containing isosorbide dinitrate, isoxsuprine HCl and co-dergocrine mesilate (Eroderm-2); the third used a cream containing placebo. The cream was applied to the penile shaft and glans / – hr before sexual stimulation and intercourse, and the patients were asked to report their experience via questionnaire after one week. The results of treatment were as follows: seven patients ( %) who applied Eroderm-1 reported a full erection and successful intercourse; the use of Eroderm-2 restored potency in patients ( %) of the second group; and three men ( %) of the psychogenic type reported overall satisfaction with the placebo cream. Treatment of impotence with Eroderm cream was most successful in patients with psychogenic disorders, which often coincide with minor vascular or neurological disorders. Fair results were reported by patients afflicted by moderate neurological disorders. Except for one case of drug allergy following use of the cream, no side effects were reported. We believe that Eroderm cream has obvious advantages and may be a suitable treatment to try before resorting to less safe methods such as intracavernous medication.

A new type of topically applied drug (Eroderm creams) for impotence is presented. Eroderm creams contain vasoactive drugs that are able to penetrate the penile cutaneous tissue and facilitate erection. In the present study, we examined the usefulness of a further Eroderm cream in the treatment of erectile dysfunction; it contains tiemonium methylsulfate, a.f. piperazine and isosorbide dinitrate. A randomized, double-blinded, controlled trial in patients was performed, and the etiology of the impotence was investigated. All patients received the Eroderm cream and placebo cream. The patients were randomized into groups of : the first group received the Eroderm cream on day and placebo cream on day , whereas group two received placebo on day . The patients were advised to apply the cream to the penile shaft / – hr before sexual stimulation and intercourse, and they reported their experience via questionnaire. Overall, percent of patients demonstrated a response to the Eroderm cream; the other responders reported partial erection and tumescence. Three men ( %) reported a full erection and satisfactory intercourse with either cream; these patients had psychogenic impotence. Neither the Eroderm cream nor the placebo produced a marked response in patients. Four patients had venous leakage and were advised to use a tourniquet at the base of the penis / hr after cream application; only one of them reported a good response. The highest activity was observed in psychogenic impotence, with a lower success rate in patients with minor to moderate neurological and/or arterial disorders. No marked side effects were recorded. For these reasons, this Eroderm cream may be proposed as a first-line therapy for erectile dysfunction.

Control of cell proliferation is a basic homeostatic function in multicellular organisms. We studied the effects of some prostaglandins and leukotrienes, and of their pharmacological inhibitors, on cell proliferation in murine mast cells and mast cell lines, in a human promyelocytic cell line (HL-60 cells), and in Burkitt's lymphoma cell lines. In addition, prostaglandin and leukotriene production was investigated in mast cells, representing putative endogenous sources of these lipid mediators. Murine mast cells were derived from the bone marrow of BALB/c mice. Proliferation was estimated using a colorimetric assay (MTT test). Production of prostaglandin D2 (PGD2), PGJ2, Δ12-PGJ2, leukotriene C4 (LTC4) and LTB4 by mast cells was determined by the combined use of high-performance liquid chromatography and radioimmunoassay. PGD2 and its metabolites PGJ2 and Δ12-PGJ2 exhibited significant antiproliferative effects in the micromolar range in mast cells, mast cell lines, HL-60 and Burkitt's lymphoma cell lines, whereas inhibition of cyclooxygenase by indomethacin was without major effect. LTC4 and LTB4 had a small stimulatory effect on cell proliferation in HL-60 cells. Degradation, and possibly induction of cell differentiation, may have attenuated the actions of the leukotrienes. The leukotriene biosynthesis inhibitors AA-861 and MK-886 significantly reduced the proliferation of HL-60 and lymphoma cells but had no major effect on mast cell growth. On the other hand, mast cells stimulated with calcium ionophore produced PGD2 and its metabolites, as well as LTB4 and LTC4, in significant amounts. From our data we conclude that prostaglandins and leukotrienes may play an important role in the control of cell proliferation.

We compared the patterns of drug expenditure of several hospitals in (size: to beds). A and B are university hospitals in the "old" German federal states, C, D and E are university hospitals in the "new" federal states, and F is a community-based institution in an "old" federal state. The main data source was a set of lists comprising all drugs ranked by their expenditures, covering up to % of the total. Items were classified into i) pharmaceutical products including immunoglobulins, ii) blood and blood-derived products (cell concentrates, human albumin, clotting factors) and iii) X-ray contrast media. With regard to group i), the highest expenditures occurred in hospitals A and B, whereas drug costs in C–E were / lower and came to only % in hospital F. The main groups of drugs which together account for > % of these expenditures are shown in the table.
Group ii) products were about % up to % of group i) and highest in hospitals A, B and E, but about / lower in hospitals C and D. These results suggest meaningful differences in drug utilization between the old and new federal states, as well as between university institutions and community-based hospitals. However, although all the hospitals provide oncology and traumatology services and all the university hospitals offer kidney transplantation (NTx), differences in other subspecialties, e.g. bone marrow and liver transplantation and the treatment of patients with haemophilia, must also be considered. Dr. med. Sebastian Harder, Dept. of Clinical Pharmacology, University Hospital Frankfurt, Theodor-Stern-Kai, Frankfurt/Main, FRG.

M. Hönicka, R. Spahr, M. Feelisch, and R. Gerzer. Organic nitrates like glyceryl trinitrate (GTN) act as prodrugs and release nitric oxide (NO), which corresponds to the endogenously produced endothelium-derived relaxing factor. In vascular tissue, NO induces relaxation of smooth muscle cells, whereas in platelets it has an antiaggregatory effect. Both activities are mainly mediated via stimulation of soluble guanylyl cyclase (sGC) by NO. In contrast to compounds which release NO spontaneously, a membrane-associated biotransformation step is thought to be required for NO release from organic nitrates. Glutathione S-transferases and cytochrome P450 enzymes have been shown to metabolize organic nitrates in the liver, but little is known as to whether these enzymes are involved in the metabolic conversion of organic nitrates in the vasculature. Furthermore, it is still unclear whether platelets are capable of metabolizing organic nitrates to NO. We isolated the microsomal fraction of bovine aorta in order to characterize its activities towards organic nitrates, using the guanylyl cyclase reaction as an indirect, and the oxyhemoglobin technique as a direct, measure of NO liberation. GTN was metabolized to NO by the microsomal fraction under aerobic conditions even in the absence of added cofactors. This activity was not influenced by the cytochrome P450 inhibitors cimetidine and metyrapone. In contrast, the glutathione S-transferase substrate 1-chloro-2,4-dinitrobenzene and the glutathione S-transferase inhibitors sulfobromophthalein and ethacrynic acid did not affect NO release, but potently inhibited sGC activity. Blocking of microsomal thiol groups resulted in decreased NO release from GTN. Homogenates of human platelets isolated by thrombapheresis and stabilized by the addition of mM N-acetylcysteine did not show NO release from GTN, as determined by stimulation of the platelet sGC, even after addition of the possible cosubstrates glutathione and NADPH. These data demonstrate (1) that bovine aortic microsomes exhibit an organic nitrate-metabolizing and NO-releasing activity whose properties clearly differ from those of the classical cytochrome P450 enzymes and glutathione S-transferases, and (2) that human platelets themselves are not capable of bioactivating organic nitrates and therefore depend on organic nitrate metabolism in the vessel wall for antiaggregation to occur.

The bioavailability of Acesal®, Acesal® Extra and Micristin® (all mg acetylsalicylic acid, ASA) and Miniasal® ( mg ASA), OPW Oranienburg, relative to the respective listed references, was studied in female and male healthy volunteers (age – y, weight – kg, height – cm). ASA and salicylic acid (SA) were measured using an HPLC method validated from ng/ml to µg/ml. The extent of absorption was assessed by AUC (bioequivalence range . – . )
and the rate by Cmax/AUC (bioequivalence range . – . ). Geometric means and %-confidence limits of the test/reference ratios (multiplicative model) are shown in the table. Acesal® and Micristin® were bioequivalent to the reference formulations in both rate and extent of absorption. The fast-liberating Acesal® Extra was bioequivalent with respect to extent only. ASA from Miniasal® was absorbed more slowly than from an ASA solution (Cmax ( %-range): – ng/ml and – ng/ml; tmax (min–max): . – . h and . – . h). ASA from Micristin® and the corresponding reference was absorbed more slowly than from Acesal® and Acesal® Extra. This was accompanied by a decreased AUC of ASA (increased first-pass metabolism) and an increased apparent t½ (absorption being rate-limiting). All AUC(SA)/AUC(ASA) ratios after administration of mg ASA were markedly higher than after mg ASA. Thus, the formation of salicyluric acid from SA might be capacity-limited at doses of mg ASA.

In the study "Physicians' assessment of internal practice conditions and regional health-services conditions in relation to ambulatory patient management", a sample of primary care physicians, comprising GPs and internists, provided data for continuous analyses of ambulatory health care quality and structure. Focusing on the physicians' drug prescribing, the impact of the reform law (Gesundheitsstrukturgesetz, GSG) upon primary care providers and their therapeutic decisions was examined in . Four surveys were carried out during the year, dealing with frequent patients' reasons for encounter in GPs' offices. After a pretest, physicians reported on patient-physician encounters by means of mailed questionnaires. For every therapeutic change a patient received, the reasons for the change were recorded (e.g. reform law, medical indication), as well as the physicians' expectations regarding three quality criteria: (1) the physicians' assessment of the patients' satisfaction, (2) adverse drug effects, and (3) therapeutic benefit. Regarding therapeutic changes due to the reform law (drug budgets, blacklist), it can be stated that: (1) therapeutic changes due to the reform law were carried out with relevant frequency; (2) the reform law was of different concern for the different reasons for encounter we investigated; (3) the strength of the impact of the legal control mechanisms differed among several groups of physicians: those who had already been liable to recourse claims more often carried out therapeutic changes in line with the fixed drug budget (different multivariate logistic regression models yield an estimated odds ratio of about ); and (4) therapeutic changes in accordance with the reform law carried out at the beginning of the year more often suffered from negative expectations regarding therapeutic quality than changes made during the actual encounter (e.g. "joint pains").

… kU/l to ± min in those with a ChE ≤ . kU/l; the metabolic clearance rate (MCR) decreased from ± ml/min to ± ml/min. In patients on phenytoin, the t½β was reduced to % of …

… ( % of the platelet mass) was much more strongly affected by the DT-TX treatment: the mean area was reduced by ± % after mg, ± % after mg, ± % after mg, ± % after mg and ± % after mg DT-TX, versus ± % after placebo. In the presence of cells of the vessel wall (SMC), overall thrombus formation was reduced by up to ± % after only mg, ± % after mg, ± % after mg, ± % after mg and ± % after mg DT-TX, versus ± % after placebo.
dt-tx , a molecule combining potent and specific thromboxane synthetase inhibition with prostaglandin endoperoxide/thromboxane a receptor antagonism, has been examined in healthy male subjects. collagen-induced platelet aggregation in platelet-rich plasma prepared from venous blood was measured photometrically before and up to hours after a single oral dose of , , , or mg dt-tx in a placebo-controlled, double-blind study. platelet aggregation was induced in the ex vivo samples by collagen in concentrations between . and µg/ml to evaluate platelet aggregation in relation to the strength of the proaggregatory stimulus. the ec , i.e. the concentration of collagen required for a half-maximal aggregatory response (defined as the maximal change of the optical density), was determined. in the placebo-treated control group, the mean ec was ± ng/ml collagen (± se; n= ) before treatment. it then varied between ± and ± ng/ml collagen after treatment. the ratio of the post- to the individual pre-treatment ec values was . ± . (n= ) at . h, . ± . at h, . ± . at h, . ± . at h, . ± . at h and . ± . at h. this indicates that the sensitivity of the platelets to collagen was not affected by the placebo treatment. oral treatment with dt-tx , however, strongly inhibited the aggregatory response of the platelets to collagen stimulation. the ec ratio was increased to a maximum of .

the detection of endogenous opioids suggested the opinion that, in case of the presence in the organism of a receptor for an exogenous substance, there is probably a similar endogenous substance. the occurrence in the blood of persons who were not treated with cardiac glycosides of endogenous digoxin-like or ouabain-like factors confirms that opinion. in our study we took up the search for other drug-like factors in the blood serum of healthy people. in two hundred and twenty-five healthy volunteers ( m, f), non-smokers, not receiving any treatment before or during the test and aged between and y (mean age y), the occurrence of drug-like factors in blood serum was studied. the examinations were carried out with the use of the fluorescence polarization immunoassay (fpia, tdx abbott). the presence of the following endogenous drug-like factors in the blood serum was evaluated: quinidine, phenytoin, carbamazepine, theophylline, cyclosporine and gentamicin. the presence of endogenous phenytoin-like, theophylline-like and cyclosporine-like factors has been demonstrated. the drug-like factors were not found in the case of quinidine, carbamazepine and gentamicin. the phenytoin-like factor was found in , %, the theophylline-like factor in , % and the cyclosporine-like factor in , % of examined volunteers. the mean values of the drug-like factors were as follows: phenytoin , ± , pg/ml, theophylline , ± 0,11 pg/ml and cyclosporine , ± , ng/ml. the supposition may be propounded that the organism produces drug-like substances according to its needs.

the acetylation and oxidation phenotypes were studied in healthy volunteers ( m, f) aged between and years (mean y) in the wielkopolska region in poland. the acetylation phenotype was studied with the use of sulphadimidine, which was given in a dose of mg/kg b.w. per os. sulphadimidine was determined by a spectrophotometric method. the border value of m.r. was % in urine. the oxidation phenotype was studied with the use of sparteine, which was given in a dose of , mg/kg b.w. per os. sparteine was determined by a gas chromatographic method in urine. if mr was
cpb induced a significant decrease of pche (- %) (p< . ) and protein concentration (- %) (p< . ) and a less pronounced numerical reduction of the specific pche (- %) (p> . ). the reduction of pche and protein concentration was not significantly affected by ending cpb (p> . ), and the values remained low over the remaining operation time. there was no significant difference in pche, measured at °c in vitro, or protein concentration between the normothermic and hypothermic group (p> . ). furthermore, there was no correlation between serum heparin activity and pche reduction. pche in the plasma of healthy volunteers was not significantly affected by either heparin up to u/ml or aprotinin up to u/ml (p> . ). conclusion: ( )

the concentration of the antitumor antibiotic mitomycin c (mmc), used in ophthalmic surgery for its antiproliferative effects, was measured in the aqueous humor of glaucoma patients undergoing trabeculectomy. sponges soaked with mmc solution ( µl of mmc solution . mg/ml: mg) were applied intraoperatively under the scleral flap for min. to µl of aqueous humor were drawn with a needle min following the end of topical mmc treatment. samples were assayed for mmc using a reverse-phase hplc system with ultraviolet detection (c -column, elution: phosphate buffer ( . m, ph . ):methanol, v:v = : , nm). swabs were extracted in phosphate buffer ( . m, ph . ) before hplc analysis. external calibration was used for mmc quantitation. the quantitation limit was ng/ml. in all aqueous humor samples the mmc concentration was below ng/ml. mmc in the swabs amounted to % of the mmc amount applied. conclusion: after intraoperative topical application, the mmc concentration in the aqueous humor of patients is very low. the substantial loss of mmc from the swabs used for the topical mmc treatment suggests ( ) rapid systemic absorption of mmc and/or ( ) a loss through irrigation of the operative field following topical mmc application. institut für pharmakologie und *klinik für augenheilkunde, universität köln, gleuelerstrasse, köln

due to runaway costs of the national health service, which are reflected as well in growing expenditures for drugs at the university hospital of jena, investigation of indication-related drug administration patterns becomes more and more interesting. this holds especially true for intensive care units (itus), which are determined by similarly high costs for technical equipment as for drugs ( ), although any economic considerations seem questionable due to ethical reasons ( ). over a month period, indication-related drug administrations of surgical itus of the university hospital jena have been recorded and analyzed by using a pc notebook. total expenditures for all included patients add up to dm . . regarding those drugs and blood products which caused % of total costs in , the leading substances (antithrombin iii, human albumin %, prothrombin complex, ...) represent % of total costs, including blood products, antibiotics and igm-enriched intravenous immunoglobulin. therefore the indication of particularly these drugs became more interesting for further investigation. already during the study, discussions with the treating medical staff were held, leading to newly developed therapy recommendations. providing the same high standard of medical treatment, a remarkable cost saving for some drugs by more critical and purposeful use could already be achieved as a first result.
however, the results of the study impressively underline the benefit of such investigations for the improvement of drug treatment. the simple replacement of expensive drugs (e.g. prothrombin complex) by higher quantities of cheaper ones of the same indication group (e.g. fresh frozen plasma ( )) does not necessarily mean less expenditure in all cases but may cause considerable side effects. ( )

ketoconazole is known to decrease pituitary acth secretion in vitro and inhibits adrenal 11-hydroxylase activity. to work out the clinical significance of both effects, analysis of episodic secretion of acth, cortisol (f) and 11-deoxycortisol (df) was performed in patients with cushing's syndrome (cs) requiring adrenostatic therapy. methods: ketoconazole was started in 11 patients with cs ( acth-secreting pituitary adenomas [cd], adrenal adenoma [aa]). in of them ( cd, aa) blood samples were obtained for hours at min intervals ( samples/patient) before and again under treatment (mean dose mg/d, > weeks). hormone levels were measured by ria and secretion patterns analysed by means of pulsar, cluster and desade. patients were investigated only once because treatment was stopped due to side effects. results: we conclude that the observed % increase of plasma acth and the % decrease of the f/df ratio demonstrate that inhibition of adrenal 11-hydroxylase activity is the primary mode of action of ketoconazole in vivo. even at high doses, acth and f secretion patterns could not be normalized.

the improvement of pain and swelling conditions by means of drugs is an important method of achieving an enhanced perioperative quality of life in cases of dentoalveolar surgery. in prospective, randomised, double-blind studies the influence of various concentrations of local anaesthetics and accompanying analgesic and antioedematous drugs was investigated in the case of osteotomies. all of the studies were carried out according to a standardised study procedure. a comparison of the local anaesthetics articaine % and articaine % (study ) demonstrated the superior effect of articaine % with respect to onset of pain relief, period of effectiveness and ischaemia. recordings of the cheek swelling in the remaining studies were made both sonographically and with tape measurement, while the documentation of the pain was carried out by means of visual analogue scales on the day of operation and on the first and third post-operative days. the perioperative, exclusive administration of x mg dexamethasone (study ) resulted in a significant reduction in the swelling ( %), while the exclusive administration of x mg ibuprofen (study ) was accompanied by a marked decrease in pain ( %) but no significant reduction of swelling in comparison to the placebo group. the combination of x mg ibuprofen and mg methylprednisolone (study ) yielded a decrease in pain of . % and a reduction in swelling of %. a comparison between the mono-drug ibuprofen and a combination drug asa/paracetamol (study ) resulted in no significant difference in the reduction of swelling and pain and therefore highlighted no advantages for the combined drug. a mono-drug should therefore be given priority as an analgesic. the combination of ibuprofen and methylprednisolone offers the greatest reduction in pain and swelling. using the results of the randomised studies, a phased plan for a patient-oriented, anti-inflammatory therapy to accompany dentoalveolar surgery is presented.
in a placebo-controlled study, patients with congestive heart failure (nyha class ii) were treated orally for seven days with mg ibopamine t.i.d.; subjects had normal renal function (mean inulin clearance (gfr) ± , ml/min), patients suffered from chronic renal insufficiency (gfr ± , ml/min; x̄ ± sem). pharmacokinetic parameters of epinine, the maximum plasma concentration, the time to reach maximum plasma concentration and the area under the curve from to hours were unaltered in impaired renal function when measured on the first or on the seventh treatment day. however, plasma concentrations in both groups were significantly higher on the first treatment day than after one week of ibopamine administration. in this context, antipyrine clearance, as a parameter of oxidative liver metabolism which might have been induced by ibopamine, revealed no differences between placebo and ibopamine values. in conclusion, the kinetic and dynamic behaviour of ibopamine was not altered by impaired renal function.

human protein c (hpc) is a vitamin k-dependent glycoprotein produced in the liver with anticoagulant properties. activated protein c splits the coagulation factors va and viiia by means of limited proteolysis (kisiel et al.). its concentration in normal plasma is - µg/ml. hpc's biological importance became evident when a congenital protein c deficiency, which results in grave recurrent thromboembolic disease, was discovered (griffin et al.). the recognition of a congenital hpc deficiency, as well as of the connection between acquired protein c deficiency and the appearance of thromboembolic complications, by means of highly accurate and sensitive methods is therefore of great practical importance for the clinic. murine monoclonal antibodies (moabs) against hpc were raised. antibody-producing hybridomas were tested by an "indirect elisa" against soluble antigens. the plates were coated with purified hpc up to ng/µl. the peroxidase system was used to identify antibodies. the antibodies were tested with the remaining vitamin k-dependent proteins for cross-reactivity, as well as with hpc-deficient plasma for disturbances by other plasma proteins. the above-described experiment represents a sensitive and specific method for measuring the hpc concentration with moabs.

assessment of local drug absorption differences ("absorption window") in the human gastrointestinal tract is relevant for the development of prolonged-release preparations and for the prediction of possible absorption changes by modification of gastrointestinal motility. current methods are either invasive and expensive (catheterization of the intestine, hf-capsule method) or do not deliver the drug to a precisely defined localization. we evaluated the delay of drug release from tablets coated with methacrylic acid copolymer dissolving at different ph values as an alternative method. three coated preparations of caffeine tablets (onset of drug release in in vitro tests at ph . , . and . ) and an uncoated tablet (control) were given to six healthy male volunteers in a randomized order. caffeine was used because of its rapid and complete absorption and good tolerability. blood samples were drawn up to h postdose (coating ph . up to h postdose), and caffeine concentrations were measured by hplc.
auc, time to reach measurable caffeine concentrations (tlag), tmax, cmax and mean absorption time (mat) values for the coated preparations were compared to the reference tablet (mean ± sd of n= ): the relative bioavailability of the coated preparations did not differ from the reference, suggesting complete release of caffeine. all coatings delayed the onset of caffeine absorption. the tlag for the ph . preparation suggests that release started immediately after the tablet had left the stomach. the mean delay of . h for the ph . coating was highly reproducible and should reflect small-intestine release. the ph . coating delayed absorption to the highest extent; however, the drug was probably released before the colon was reached.

there is evidence that nitric oxide (no) plays a role in cardiovascular diseases like hypertension, myocardial ischemia and septic cardiomyopathy. no stimulates the guanylyl cyclase, leading to an increase in cgmp content. we investigated by immunoblotting the expression of the inducible nitric oxide synthase (inos) in left ventricular myocardium from failing human hearts due to idiopathic dilative cardiomyopathy (idc, n= ), ischemic cardiomyopathy (icm, n= ), becker muscular dystrophy (n= ) and sepsis (sh, n= ) compared to non-failing human hearts (nf, n= ). cytokine-stimulated mouse macrophages were used as positive controls. sds-polyacrylamide gel electrophoresis ( . %) was performed with homogenates of left ventricular myocardium and mouse macrophages, respectively. proteins were detected by enhanced chemiluminescence using a mouse monoclonal antibody raised against inos. furthermore, we measured the cgmp content in these hearts by radioimmunoassay. a band at about kda was observed in two out of three hearts from patients with sepsis and in stimulated mouse macrophages. no inos protein expression was detected in either non-failing human hearts (n= ) or failing human hearts due to idc, ihd or bmd. in ventricular tissue from patients with sepsis, cgmp content was increased to % ( ± fmol/mg ww, n= ) compared to non-failing hearts ( % or ± . fmol/mg ww, n= ). in left ventricular tissue from patients with heart failure due to idc, ihd and bmd, cgmp content did not differ from that in non-failing hearts. it is concluded that an enhanced inos protein expression may play a role in endotoxin shock, but is unlikely to be involved in the pathophysiology of end-stage heart failure due to idc, ihd and bmd. (supported by the dfg.)

nitric oxide (no) has been shown to be a major messenger molecule regulating blood vessel dilatation and platelet aggregation and serving as central and peripheral neurotransmitter; furthermore, no is a crucial mediator of macrophage cytotoxicity. no production can be assessed reliably by determination of its main metabolites nitrite and nitrate in serum, reflecting no synthesis at the time of sampling, or in h urine, reflecting daily no synthesis. farrell et al. (ann rheum dis ; : ) recently reported elevated serum levels of nitrite in patients with rheumatoid arthritis (ra). we report here total body nitrate production and the effect of prednisolone in patients with ra. nitrate excretion in h urines of patients with ra, as defined by the revised criteria of the american rheumatism association, was measured by gas chromatography at two times: first before the start of an antiinflammatory therapy with prednisolone, when the patients had high inflammatory activity as indicated by mean crp serum concentrations of ± sd mg/l and an elevated esr with a mean of ± after hour.
secondly, - weeks after the start of prednisolone therapy in a dosage of . mg/kg body weight, when the patients showed clinical and biochemical improvement (crp ± mg/l, p< . ; esr ± , p< . ; two-tailed, paired t-test). for comparison, h urines from healthy volunteers were obtained. before the start of prednisolone therapy, the urinary nitrate excretion in patients with ra (mean ± sd µmol/mmol creatinine) was more than twofold higher (p< . , two-tailed unpaired t-test) than in healthy volunteers ( ± µmol/mmol creatinine). the urinary nitrate excretion decreased significantly (p< . , two-tailed, paired t-test) to ± µmol/mmol creatinine under therapy with prednisolone, when inflammatory activity was reduced considerably. despite the decrease, the urinary nitrate excretion was still twofold higher (p< . , two-tailed, unpaired t-test) in patients with ra than in the control group. our data suggest that endogenous no production is enhanced in patients with ra. furthermore, the results indicate that this elevated no synthesis could be reduced in accordance with suppression of systemic inflammation by prednisolone therapy.

but now as ever, physicians are entitled to prescribe drugs which have to be prepared in a pharmacy for a particular patient. little information is available on the frequency and patterns of these prescriptions. we had occasion to analyse the prescriptions of drugs which were prepared in pharmacies in north thuringia (east germany) from october to december at the expense of a large health insurance company (allgemeine ortskrankenkasse). the selected pharmacies are localised in cities. we found prescriptions of drugs made up in pharmacies among a total number of reviewed drug prescriptions. this is . % of the total. most of these prescriptions were issued by dermatologists ( . %), general practitioners ( . %), paediatricians ( . %) and otolaryngologists ( . %). according to this, the most frequently prescribed groups of drugs were dermatics.

enteric-coated tablets with mg and mg acetylsalicylic acid (asa) have been developed which should avoid the known gastrointestinal adverse events by a controlled drug release mainly in the duodenum after having passed the stomach. a -way cross-over study in healthy male subjects, aged from - years, was conducted to investigate the pharmacokinetics, bioavailability, safety, and tolerance of asa and its metabolites salicylic acid and salicyluric acid following enteric-coated tablets in comparison with plain tablets. asa and its metabolites were determined by a sensitive, specific, and validated hplc method. pharmacokinetic parameters were determined by non-compartmental analysis. bioequivalence was assessed by % confidence intervals. following the administration of enteric-coated tablets, a delayed absorption can be observed for both the mg dose and the mg dose. this is likely due to a delayed release of the active substance from the enteric-coated tablets in the small intestine after gastric passage. considering the mean residence times (mrt), there is a difference of at least . h following the enteric-coated tablets compared to the plain tablets for asa and the two metabolites measured. this difference represents the sum of the residence time in the stomach plus the time needed to destroy the coating of the tablet once it has left the stomach. in general, the maximum observed concentrations of both enteric-coated formulations occurred - h post dose.
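several of the abstracts above assess bioequivalence via geometric means and %-confidence limits of test/reference ratios under a multiplicative (log-transformed) model. as a generic illustration only (not the analysis actually performed in these studies, whose data are not reproduced here), such a ratio and its confidence limits can be computed as in the following python sketch; the paired-design simplification and all names are assumptions.

```python
import math
from statistics import mean, stdev
from scipy import stats  # for the t-quantile

def geometric_ratio_ci(test_values, ref_values, level=0.90):
    """geometric mean ratio test/reference with a two-sided confidence
    interval, computed on log-transformed paired values (e.g. auc or cmax).
    simplified paired design; crossover period effects are ignored here."""
    logs = [math.log(t) - math.log(r) for t, r in zip(test_values, ref_values)]
    n = len(logs)
    se = stdev(logs) / math.sqrt(n)
    tq = stats.t.ppf(0.5 + level / 2.0, df=n - 1)
    point = math.exp(mean(logs))
    return point, math.exp(mean(logs) - tq * se), math.exp(mean(logs) + tq * se)

# bioequivalence is typically concluded when the whole interval lies
# within the predefined acceptance range quoted in the abstract.
```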
the pharmacokinetics of a novel immunoglobulin g (igg) preparation (bt , biotest, dreieich, frg) have been determined in healthy, male anti-hbs-negative volunteers. for this preparation only plasma from hiv-, hbv- and hcv-negative donors was used; the quality control for the product was in accordance with the ec guideline for virus removal and inactivation procedures. each volunteer received a single intravenous infusion of ml bt containing g igg and anti-hbs > , iu. anti-hbs was used as a simply measurable and representative marker for the igg. blood samples for determination of anti-hbs (ausab eia, abbott, frg) were drawn before and directly after the infusion, after , , , and hours, and on days , , , , , , , , , and . additionally, total protein, igg, iga, igm and c /c complement were measured and blood hematology and clinical chemistry parameters determined. the pharmacokinetic parameters of anti-hbs were calculated using the topfit pc program assuming a -compartment model.

pharmacoeconomic evaluations (pe) describe the relationship between a certain health care input (costs) for a defined treatment and the clinical outcome of patients, measured in common natural units (e.g. blood pressure reduction in mmhg), quality of life (qol) gained, lives saved, or even money saved due to the improvement in the patients' functional status. this implies that the efficacy of a treatment has been measured and proven in clinical trials. in addition, in order to transfer data obtained in clinical trials to the clinical setting, an epidemiological database for diseases and eventually drug utilization may be required. the evaluation of the efficacy depends on the disease to be treated or prevented and the mode of treatment. for acute, e.g. infectious, diseases the endpoint can be defined easily by the cure rate, but for pe the time (length of hospital stay) and other factors (e.g. no. of daily drug administrations) have to be considered. in the case of chronic diseases, e.g. hypertension or hypercholesterolaemia, surrogate endpoints (blood pressure or serum cholesterol reduction) and information on side effects may be acceptable for the approval, but cannot be used for a meaningful pe. the latter should include the endpoints of the disease, i.e. cardiovascular events (requiring hospitalisation and additional treatment) and mortality. furthermore, the qol has to be measured and considered for chronic treatment. several questionnaires have been developed to measure the overall qol or the health-related qol. especially the latter may be a more useful tool to detect and quantify the impact of a treatment on qol. combining the clinical endpoint mortality and qol by using qalys (quality-adjusted life-years) may be a useful tool to determine the value and costs of a given drug treatment but cannot be applied to all treatments under all circumstances.

sorbitol was used as a model substance to investigate the dynamics of the initial distribution process following bolus intravenous injection of drugs. to avoid a priori assumptions on the existence of well-mixed compartments, data analysis was based upon the concept of residence time density in a recirculatory system, regarding the pulmonary and systemic circulation as subsystems. the inverse gaussian distribution was used as an empirical model for the transit time distribution of sorbitol across the subsystems; distribution kinetics was evaluated by the relative dispersion of transit (circulation) times.
the distribution volumes calculated from the mean transit times were compared with the model-independent estimate of the steady-state volume of distribution. kinetic data and estimates of cardiac output were obtained from patients after percutaneous transluminal coronary angioplasty. each received a single . g iv bolus dose of sorbitol. arterial blood samples were collected over hours. while the disposition curve could be well fitted by a tri-exponential function, the results indicate that distribution kinetics is also influenced by the transit time through the lungs, in contrast to the assumption of a well-mixed plasma pool underlying compartmental modelling.

karité "butter" is used traditionally in west african manding culture as a cosmetic to protect the skin against the sun. gas chromatography was used to analyze the ingredients of karité butter from guinea. we found % palmitic acid, % stearic acid, % oleic acid and % linoleic acid, and . % of other fatty acids with higher chain lengths like arachidonic acid. some of these are essential fatty acids (vitamin f). furthermore, karité contains vitamins a and d as well as triterpene alcohols and phytosterols. an original extract was used to prepare a skin cream. this preparation was tested in volunteers ( women, men; age - y). the cream contained at least % karité, glycerol, emulsifiers and no preservative agent except for sorbic acid. of the volunteers very well tolerated the cream and thought it effective. the skin became more tender and elastic. good results were obtained when the volunteers suffered from very dry skin. two of them, who were known to be allergic to most available skin creams, had no problems in using our karité cream. pure karité butter was used for four months to treat an african infant with neurodermatitis. after this time the symptoms had markedly improved, whereas previous therapy trials with other usual topical medicaments had been unsuccessful. these pre-studies have shown that dermatologic preparations containing karité may be a good alternative in the treatment of therapy-resistant skin diseases and may in some cases be able to replace corticoid treatment.

) and a low-molecular-weight heparin preparation (fragmin®, iu/kg body weight s.c.) on coagulation and platelet activation in vivo by measuring specific coagulation activation peptides [prothrombin fragment + (f + ), thrombin-antithrombin iii complex (tat), β-thromboglobulin (β-tg)] in bleeding-time blood (activated state) and in venous blood (basal state). in bleeding-time blood, r-hirudin and the heparin preparations significantly inhibited the formation of both tat and f + . however, the inhibitory effect of r-hirudin on f + generation was short-lived and weaker compared to ufh and lmwh, and the tat/f + ratio was significantly lower after r-hirudin than after both ufh and lmwh. thus, in vivo, when the coagulation system is in an activated state, r-hirudin exerts its anticoagulant effects predominantly by inhibiting thrombin (iia), whereas ufh and lmwh are directed against both xa and iia. a different mode of action of ufh and lmwh was not detectable. in venous blood, r-hirudin caused a moderate reduction of tat formation and an increase (at hour) rather than a decrease of f + generation. formation of tat and f + was suppressed at various time points following both ufh and lmwh. there was no difference in the tat/f + ratio after r-hirudin and heparin. thus, a predominant effect of r-hirudin on iia (as found in bleeding-time blood) was not detectable in venous blood.
in bleeding-time blood, r-hirudin (but neither ufh nor lmwh) significantly inhibited β-tg release. in contrast, both ufh and lmwh caused an increase of β-tg hours after heparin application. our observation of a reduction of platelet function after r-hirudin, compared to delayed platelet activation following ufh and lmwh, suggests an advantage of r-hirudin over heparin, especially in those clinical situations (such as arterial thromboembolism) where enhanced platelet activity has been shown to be of particular importance.

the human cytochrome p isoform cyp a determines the level of a variety of drugs metabolized by the enzyme, including caffeine (ca) and theophylline (th). more than compounds are potential or proven inhibitors of this enzyme. some of them were reported to be substrates or inhibitors of cyp a in vitro; others caused pharmacokinetic interactions with drugs metabolised by cyp a . we characterized a series of these compounds with respect to their effect on cyp a in human liver microsomes in relation to published pharmacokinetic interactions in vivo. cyp a activity in vitro was measured as ca -demethylation at the high-affinity site in human liver microsomes, using min incubation at °c with - µm caffeine, an nadph-generating system, and inhibitor concentrations covering . orders of magnitude. apparent ki values were estimated using nonlinear regression analysis. for inhibitory effects on cyp a activity in vivo, the absorbed oral dose causing a % reduction in ca or th clearance (ed ) was estimated from all published interaction studies using the emax model.

%), followed by disinfectants ( . %), ointments ( . %) and solutions ( . %), were the most frequent drug forms %) or german ( . %). our results show that even now drugs prepared

trend analysis of the expenses at the various departments may be a basis for a rational and economic use of the drug budget. total drug expenses amounted to mill. dm in . mill. dm ( %) were used in surgical departments with intensive care units (icu) (general surgery, cardiovascular surgery, neurosurgery, gynecology, anaesthesiology), of which % are needed by the icu and % in the operating rooms. surgical departments without icu but with similar patient numbers (ophthalmology, ent, orthopedics and urology) get only % of the budget ( % needed for the operating rooms). the medical departments spent mill. dm, of which the icu needs only %, whereas the oncology (oncu) and antiinfective units use more than %. a similar relation could be seen in the children's hospital ( . mill. dm, %), where % were spent for the icu and % for the oncu. the departments of dermatology and neurology get %, the departments of radiology, nuclear medicine and radiation therapy only % of the budget. antiinfective drugs (antibiotics, antimycotics, virustatics) are most expensive ( % of budget), followed by drugs used for radiological procedures ( %). increasing the knowledge about the costs of medical items and their rational and economical use may stop the overproportional increase of the drug budget.

the s-enantiomer shows a -fold higher efficiency than the r-form. the elimination of the talinolol enantiomers was studied in healthy volunteers (age: - years, body weight: - kg) given a single oral dose ( mg) or an intravenous infusion ( mg) of the racemic drug. three volunteers were phenotypically poor metabolisers and nine were extensive metabolisers of the debrisoquine type of hydroxylation. the r- and s-enantiomers of talinolol were analysed in urine by an hplc method after enantioselective derivatisation.
the concentrations of the enantiomers within every sampling period, as well as the amounts of s- and r-enantiomer, were determined. this corresponds to an s/r ratio of , ± , . the mean total amount (= s- + r-enantiomer) eliminated was on average % of the administered dose. after oral administration, ± % of the dose was eliminated within h. the amounts of the talinolol enantiomers recovered were equal (s-enantiomer: ± µg). the ratios of s- to r-concentrations at every sampling interval and for every volunteer were assessed between , and , (mean: , after infusion and , after oral administration, respectively). medizinische fakultät carl gustav carus, technische universität dresden, fiedlerstr.

nitric oxide (no), synthesized by the inducible form of no synthase, has been implicated as an important mediator of specific and non-specific immune responses. little is known about the in vivo synthesis of no in inflammatory joint diseases. therefore we have studied the excretion of the major urinary metabolite of no, nitrate, in rats with adjuvant arthritis, a well-established model of polyarthritis. in addition we assessed the urinary excretion of cyclic gmp, which is known to serve as second messenger for the vascular effects of no synthesized by the constitutive form of no synthase, affecting blood vessels, platelet aggregation and neurotransmission. in h urines of male sprague dawley rats at day after induction of adjuvant arthritis, we measured nitrate excretion by gas chromatography and cyclic gmp by radioimmunoassay. for control, we determined the same parameters in h urines of non-arthritic rats of the same strain and age. we found a significant (p < , two-tailed, unpaired t-test), more than -fold increase of urinary nitrate excretion in arthritic rats (mean ± sd µmol/mmol creatinine) as compared to non-arthritic rats ( ± µmol/mmol creatinine). urinary cyclic gmp excretion was slightly, but not significantly, lower in arthritic rats ( ± nmol/mmol creatinine) than in controls ( ± nmol/mmol creatinine). there were no major differences in food or water intake which could account for these results. the increased urinary nitrate excretion accompanied by normal cyclic gmp excretion suggests that no production by the inducible form of no synthase is enhanced in rats with adjuvant arthritis. institute of clinical pharmacology, hannover medical school, d- hannover, germany and *research center grünenthal gmbh, zieglerstr., d- aachen, germany

background: pge has been shown to be efficacious in the treatment of critical leg ischemia. despite an almost complete first-pass metabolism in the lung, the clinical effects of intraarterial and intravenous pge do not differ significantly. in addition, it is not fully understood which of the various pharmacological actions of pge is the main factor; by most authors, however, it is thought to be the increase of cutaneous and muscular blood flow. by means of [15o]-h2o-pet, we studied muscular blood flow (mbf) of the leg in patients with peripheral arterial disease, comparing intraarterial and intravenous pge . patients and methods: patients ( f, m; mean age y) with pad were studied ( atherosclerosis, thromboangiitis obliterans). on the first day, µg pge were infused intraarterially within minutes; pet scanning of the lower leg was performed at minutes , and . on the following day, µg pge were infused intravenously within hours; pet scanning was performed at minutes , , and .
results: in the infused leg, the increase of mbf caused by intraarterial pge averaged ± % at minute and ± % at minute ; in the non-infused leg there was no effect. the increase in the infused leg was highly variable but did not correlate with sex, age, disease or clinical outcome. for intravenous pge , the change of mbf at any time averaged almost %. conclusion: unlike intraarterial pge , intravenous pge does not increase the muscular blood flow of the leg. provided a comparable clinical effect, an increase of muscular blood flow may not be considered the main mode of action of pge in critical leg ischemia.

estrogen (er) and progesterone (pr) receptor status as well as lymph node involvement are important factors in predicting prognosis and sensitivity to hormone and chemotherapy in patients with breast cancer. the prognostic relevance of ps protein, egfr and cathepsin d is currently under debate. especially ps and egfr expression appears to provide additional information regarding the responsiveness of the tumour tissue to tamoxifen. the aim of the present study was to investigate the relationships between these parameters and established prognostic factors in breast cancer. in a prospective study, ps and cathepsin d were assayed immunoradiometrically in the tumour cytosol of patients; egfr was measured by elisa. relating the level of these factors to lymph node involvement, menopausal status as well as tumour size, no significant association could be established. in our findings, er and pr are significantly correlated with the expression of ps , but neither is correlated with the cathepsin d status. egfr was shown to be inversely correlated with the content of er. a significant association between cathepsin d and ps could be established in patients with early recurrence. at a median follow-up of - months, recurrence was more common in patients with tumours having negative status for ps , independent of receptor status. in conclusion, because of their relative independence of the er and pr status and other prognostic factors, the influence on recurrence behaviour demonstrated here, and their role in promoting tumour dissemination and changing hormone therapy sensitivity, all three factors represent markers of prognostic relevance. departments of clinical pharmacology, nuclear medicine and surgery.

pharmacoeconomic studies, conducted either separately from or together with clinical trials, are increasing in both number and importance. in a period of limited health care budgets, political and medical decision makers alike run the risk of accepting the results of such studies without critical reflection. careful evaluation of those studies by state-of-the-art methods is one way out of the trap. another could be to refer to ethical considerations. the problem in this context is that the discussion concerning ethical aspects of pharmacoeconomic research, at least in europe, is just beginning. therefore, no widely accepted standards are available. but they are essential to answer four main questions: . who should perform a pharmacoeconomic study? . which objectives should be considered? . what kind of study should be performed (e.g. cost-effectiveness, cost-utility, cost-benefit analysis)? . which consequences will be drawn from the results? based on the case-study-orientated "moral cost-benefit model" (r. wilson, sci. tech. human values : - , ), a three-step decision and evaluation model is proposed to handle bioethical problems in pharmacoeconomic studies: . moral risk analysis,
. moral risk assessment, . moral risk management. possible practical consequences for decision making in research policy, study design and assessment of results are discussed.

hirudin is the most potent known natural inhibitor of thrombin and is presently gaining popularity as an anticoagulant since recombinant forms have become available. the aim of the present study was to compare platelet aggregation, sensitivity to prostaglandin e (pge ) and thromboxane a (txa ) release in r-hirudinized and heparinized blood. platelet aggregation was measured turbidimetrically using a dual-channel aggregometer (labor, germany) in blood samples of healthy volunteers anticoagulated with r-hirudin (behring) and heparin ( µg/ml blood each). aggregation was induced by arachidonic acid (aa; . , . and . mm) and adp ( . µm). pge in concentrations of , and ng/ml was used. plasma txb content was measured by gas chromatography/mass spectrometry. this study showed a significantly lower aa-induced platelet aggregation in r-hirudinized plasma. three minutes after the induction of aggregation by . mm aa, the plasma txb concentration was ng/ml in blood anticoagulated with r-hirudin and . ng/ml in heparin-anticoagulated blood. the extent of the adp-induced aggregation was nearly the same in r-hirudinized and heparinized plasma. platelet sensitivity to pge was significantly higher in r-hirudinized blood. thus, aa-induced platelet aggregation is significantly lower and sensitivity to pge higher in r-hirudin-anticoagulated blood in comparison with heparin-anticoagulated blood. university of tartu, puusepa str., tartu, estonia

anaemia has been reported in renal transplant (ntx) recipients treated with azathioprine (aza) and angiotensin converting enzyme inhibitors (ace-i). an abnormal aza metabolism with increased -thioguanine nucleotide (tgn) levels in erythrocytes is a possible cause of severe megaloblastic anaemia (lennard et al, br j clin pharmacol). methods: ntx patients receiving aza ( , ± , mg/kg/d), prednisolone ( , ± , mg/kg/d) and enalapril (ena) ( , ± , mg/kg/d) for more than months were studied prospectively. blood samples were taken before and h after administration of aza on visits during ena treatment and weeks after ena had been replaced by other antihypertensives (x). tgn in erythrocytes, -mercaptopurine (mp) and -thiouric acid (tua) in h post-dose plasma (p.) and h urine (u.) samples were analyzed by hplc using a mercurial cellulose resin for selective absorption of the thiol compounds. pharmacodynamic variables were hemoglobin (hb), erythropoietin (epo) and creatinine clearance (cl).

acetylcholine plays an important role in regulating various functions in the airways. in human lung, less is known about regional differences in cholinergic innervation and about receptor-mediated regulation of acetylcholine release. in the present study, the tissue content of endogenous acetylcholine and the release of newly synthesized [3h]acetylcholine were measured in human lung. human tissue was obtained at thoracotomy from patients with lung cancer. moreover, in isolated rat tracheae with intact extrinsic vagal innervation, possible effects of β-adrenoceptor agonists on evoked [3h]acetylcholine release were studied. endogenous acetylcholine was measured by hplc with ec detection; evoked [3h]acetylcholine release was measured after a preceding incubation of the tissue with [3h]choline.
human large (main bronchi) and small (subsegmental bronchi) airways contained similar amounts of acetylcholine ( pmol/mg), whereas significantly less acetylcholine was found in lung parenchyma ( pmol/mg). release of [3h]acetylcholine was evoked in human bronchi by transmural electrical stimulation (four s trains at hz). oxotremorine, an agonist at muscarinic receptors, inhibited evoked [3h]acetylcholine release, indicating the existence of neuronal inhibitory receptors on pulmonary parasympathetic neurones. scopolamine shifted the oxotremorine curve to the right, suggesting a competitive interaction (pa value: ; slope of the schild plot not different from unity). however, a rather shallow schild plot was obtained for pirenzepine. scopolamine, but not pirenzepine, enhanced evoked [3h]acetylcholine release. the present experiments indicate a dense cholinergic innervation in human bronchi; release of acetylcholine appears to be controlled by facilitatory and inhibitory muscarinic receptors. in isolated, mucosa-intact rat tracheae, isoprenaline ( nm) inhibited [3h]acetylcholine release evoked by preganglionic nerve stimulation. isoprenaline was ineffective in mucosa-denuded tracheae or in the presence of indomethacin. thus, β-adrenoceptor agonists appear to inhibit acetylcholine release in the airways by the liberation of inhibitory prostanoids from the mucosa.

the occurrence of non-enzymatic reactions between glucose and structural proteins is well known (vlassara h et al., lab invest : - ). the reaction between proteins and fructose (i.e. fructation), however, can also occur. like glucose-protein adducts, the fructose analogues are able to form so-called advanced glycation endproducts (age). the inhibition of early and advanced products of fructation may be important for the prevention of diabetic late complications (mcpherson jd et al., biochemistry : - ). we investigated the in vitro fructation of human serum albumin (hsa) and its inhibition by selected drugs. hsa was fructated by incubation with mmol/l fructose in . mol/l phosphate buffer, ph . , at °c for days. the rate of fructation was measured by the following methods: a colorimetric method based on deglycation of glycated proteins by hydrazine (kobayashi k et al., biol pharm bull : - ), affinity chromatography with aminophenyl-boronate-agarose, and fluorescence measurement for the determination of age. we used aminoguanidine, penicillamine, captopril and alpha-lipoic acid ( mmol/l) to study the inhibition of hsa fructation. after three weeks of incubation, the formation of early glycation products was inhibited by aminoguanidine ( %) and captopril ( %), whereas penicillamine and alpha-lipoic acid showed minimal inhibition. aminoguanidine inhibited the formation of age by %, penicillamine by %, alpha-lipoic acid by % and captopril by %. these results may suggest a potential use of the investigated drugs in the prevention of the formation of protein-fructose adducts.

key: cord- -esa w authors: pinzón, carlos; rocha, camilo; finke, jorge title: algorithmic analysis of blockchain efficiency with communication delay date: - - journal: fundamental approaches to software engineering doi: . / - - - - _ sha: doc_id: cord_uid: esa w

a blockchain is a distributed hierarchical data structure. widely-used applications of blockchain include digital currencies such as bitcoin and ethereum. this paper proposes an algorithmic approach to analyze the efficiency of a blockchain as a function of the number of blocks and the average synchronization delay.
the proposed algorithms consider a random network model that characterizes the growth of a tree of blocks by adhering to a standard protocol. the model is parametric on two probability distribution functions governing block production and communication delay. both distributions determine the synchronization efficiency of the distributed copies of the blockchain among the so-called workers and, therefore, are key for capturing the overall stochastic growth. moreover, the algorithms consider scenarios with a fixed or an unbounded number of workers in the network. the main result illustrates how the algorithms can be used to evaluate different types of blockchain designs, e.g., systems in which the average time of block production can match the average time of message broadcasting required for synchronization. in particular, this algorithmic approach provides insight into efficiency criteria for identifying conditions under which increasing block production has a negative impact on the stability of a blockchain. the model and algorithms are agnostic of the blockchain's final use, and they serve as a formal framework for specifying and analyzing a variety of non-functional properties of current and future blockchains.

a blockchain is a distributed hierarchical data structure that cannot be modified (retroactively) without alteration of all subsequent blocks and the consensus of a majority. it was invented to serve as the public transaction ledger of bitcoin [ ]. instead of relying on a trusted third party, this digital currency is based on the concept of 'proof-of-work', which allows users to execute payments by signing transactions using hashes through a distributed time-stamping service. resistance to modifications, decentralized consensus, and robustness for supporting cryptocurrency transactions unleash the potential of blockchain technology for uses in various industries, including financial services [ , , ], distributed data models [ ], markets [ ], government systems [ , ], healthcare [ , , ], iot [ ], and video games [ ].

technically, a blockchain is a distributed append-only data structure comprising a linear collection of blocks, shared among so-called workers, often also referred to as miners. these miners generally represent computational nodes responsible for working on extending the blockchain with new blocks. since the blockchain is decentralized, each worker possesses a local copy of the blockchain, meaning that two workers can build blocks at the same time on unsynchronized local copies of the blockchain. in the typical peer-to-peer network implementation of blockchain systems, workers adhere to a consensus protocol for inter-node communication and validation of new blocks. specifically, workers build on top of the longest blockchain. if they encounter two blockchains of equal length, then workers select the chain whose last produced block was first observed. this protocol generally guarantees an effective synchronization mechanism, provided that the task of producing new blocks is hard to achieve in comparison to the time it takes for inter-node communication. the effort of producing a block relative to that of communicating among nodes is known in the literature as 'proof of work'. if several workers extend different versions of the blockchain, the consensus mechanism enables the network to eventually select only one of them, while the others are discarded (including the data they carry) when local copies are synchronized.
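as a concrete reading of the consensus rule just described, the following python sketch (an illustration, not code from the paper) compares two candidate chains the way the protocol prescribes: prefer the longer chain, and break ties in favor of the chain whose last block was observed first. the representation of a candidate as a (chain, first-observation time) pair is an assumption.

```python
from typing import List, Tuple

# a candidate is (chain, time the tip was first observed); both fields are
# illustrative assumptions about how a worker might track its candidates.
Candidate = Tuple[List[int], float]

def preferred(a: Candidate, b: Candidate) -> Candidate:
    """consensus rule: the longest chain wins; on equal length, the chain
    whose last produced block was observed first wins."""
    chain_a, seen_a = a
    chain_b, seen_b = b
    if len(chain_a) != len(chain_b):
        return a if len(chain_a) > len(chain_b) else b
    return a if seen_a <= seen_b else b
```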
the synchronization process persistently carries on upon the creation of new blocks. the scenario of discarding blocks massively, which can be seen as an efficiency issue in a blockchain implementation, is rarely present in "slow" block-producing blockchains. the reason is that the time it takes to produce a new block is long enough for workers to synchronize their local copies of the blockchain. slow blockchain systems avert workers from wasting resources and time in producing blocks that are likely to be discarded in an upcoming synchronization. in bitcoin, for example, it takes on average minutes for a block to be produced and only . seconds to communicate an update [ ]. the theoretical fork-rate of bitcoin in was approximately . % [ ]. however, as the blockchain technology finds new uses, it is being argued that block production needs to be faster [ , ]. broadly speaking, understanding how speed-ups in block production can negatively impact blockchains, in terms of the number of blocks discarded due to race conditions among the workers, is important for designing new fast and yet efficient blockchains.

this paper introduces a framework to formally study blockchains as a particular class of random networks, with emphasis on two key aspects: the speed of block production and the network synchronization delays. as such, it is parametric on the number of workers under consideration (possibly infinite), the probability distribution function that specifies the time for producing new blocks, and the probability distribution function that specifies the communication delay between any pair of randomly selected workers. the model is equipped with probabilistic algorithms to simulate and formally analyze blockchains concurrently producing blocks over a network with varying communication delays. these algorithms focus on the analysis of the continuous process of block production in fast and highly distributed systems, in which inter-node communication delays are crucial. the framework enables the study of scenarios with fast block production, in which blocks tend to be discarded at a high rate. in particular, it captures the trade-off between speed and efficiency. experiments are presented to understand how this trade-off can be analyzed for different scenarios. as fast blockchain systems tend to spread to novel applications, the algorithmic approach provides mathematical tools for specifying, simulating, and analyzing blockchain systems.

it is important to highlight that the proposed model and algorithms are agnostic of the concrete implementation and final use of the blockchain system. for instance, the 'rewards' for mining blocks, such as the ones present in the bitcoin network, are not part of the model and are not considered in the analysis algorithms. on the one hand, this sort of feature can be seen as a particular mechanism of a blockchain implementation that is not explicitly required for the system to evolve as a blockchain. thus, including such features as part of the framework could narrow its intended aim as a general specification, design, and analysis tool. on the other hand, such features may be abstracted away into the proposed model by tuning the probability distribution functions that are its parameters, or by considering a more refined base of choices among the many probability distribution functions at hand for a specific analysis. therefore, the proposed model and algorithms are general enough to encompass a wide variety of blockchain systems and their analysis.
the contribution of this work is threefold. first, a random network model is introduced (in the spirit of, e.g., erdös-renyi [ ]) for specifying blockchains in terms of the speed of block production and communication delays for synchronization among workers. second, exact and approximation algorithms for the analysis of blockchain efficiency are made available. third, based on the proposed model and algorithms, empirical observations about the tensions between production speed and synchronization delay are provided.

the remaining sections of the paper are organized as follows. section summarizes basic notions of proof-of-work blockchains. sections and introduce the proposed network model and algorithms. section presents experimental results on the analysis of fast blockchains. section relates these results to existing research, and draws some concluding remarks and future research directions.

this section overviews the concept of proof-of-work distributed blockchain systems and introduces basic definitions, which are illustrated with the help of an example. a blockchain is a distributed hierarchical data structure of blocks that cannot be modified (retroactively) without alteration of all subsequent blocks and the consensus of the network majority. the nodes in the network, called workers, use their computational power to generate blocks with the goal of extending the blockchain. the adjective 'proof-of-work' comes from the fact that producing a single block for the blockchain tends to be a computationally hard task for the workers, e.g., a partial hash inversion.

definition . a block is a digital document containing: (i) a digital signature of the worker who produced it; (ii) an easy-to-verify proof-of-work witness in the form of a nonce; and (iii) a hash pointer to the previous block in the sequence (except for the first block, called the origin, which has no previous block and is unique).

technical definitions of blockchain as a data structure have been proposed by different authors (see, e.g., [ ]). most of them coincide on it being an immutable, transparent, and decentralized data structure shared by all workers in the network. for the purpose of this paper, it is important to distinguish between the local copy, independently owned by each worker, and the abstract global blockchain, shared by all workers. the latter holds the complete history of the blockchain.

definition . the local blockchain of a worker w is a non-empty sequence of blocks stored in the local memory of w. the global blockchain (or, blockchain) is the minimal rooted tree containing all workers' local blockchains as branches.

under the assumption that the origin is unique (definition ), the (global) blockchain is well-defined for any number of workers present in the network. if there is at least one worker, then the blockchain is non-empty. definition allows for local blockchains to be either synchronized or unsynchronized. the latter is common in systems with long communication delays or in the presence of anomalous situations (e.g., if a malicious group of workers is holding a fork intentionally). as a consequence, the global blockchain cannot simply be defined as a unique sequence of blocks, but rather as a distributed data structure against which workers are assumed to be partly synchronized.

figure presents an example of a blockchain with five workers, where blocks are represented by natural numbers.
on the left, the local blockchains are depicted as linked lists; on the right, the corresponding global blockchain is depicted as a rooted tree. some of the blocks in the rooted tree representation in figure are labeled with the identifier of a worker, which indicates the position of each worker in the global blockchain. for modeling, the rooted tree representation of a blockchain is preferred. on the one hand, it can reduce the amount of memory needed for storage and, on the other hand, it visually simplifies the inspection of the data structure. furthermore, storing a global blockchain with m workers containing n unique blocks as a collection of lists requires in the worst-case scenario o(mn) memory (i.e., with perfect synchronization). in contrast, the rooted tree representation of the same blockchain with m workers and n unique blocks requires o(n) memory for the rooted tree (e.g., using parent pointers) and an o(m) map for assigning each worker its position in the tree, totaling o(n + m) memory.

a blockchain tends to achieve synchronization among the workers for the following reasons. first, workers follow a standard protocol in which they are constantly trying to produce new blocks and broadcasting their achievements to the entire network. in the case of cryptocurrencies, for instance, this behavior is motivated by paying a reward. second, workers can easily verify (i.e., with a fast algorithm) the authenticity of any block. if a malicious worker (i.e., an attacker) changes the information of one block, that worker is forced to repeat the extensive proof-of-work process for that block and all its subsequent blocks in the blockchain. otherwise, its malicious modification cannot become part of the global blockchain. since repeating the proof-of-work process requires that the attacker spend a prohibitively high amount of resources (e.g., electricity, time, and/or machine rental), such a situation is unlikely to occur. third, the standard protocol forces any malicious worker to confront the computational power of the whole network, assumed to have mostly honest nodes.

algorithm presents a definition of the above-mentioned standard protocol, which is followed by each worker in the network. when a worker produces a new block, it appends it to the block it is standing on, moves to it, and notifies the network about its current position and new distance to the root. upon reception of a notification, a worker compares its current distance to the root with the incoming position. such a worker switches to the incoming position whenever it represents a greater distance. to illustrate the use of the standard protocol with a simple example, consider the blockchains depicted in figures and . in the former, either w or w produced block , but the other workers are not yet aware of its existence. in the latter, most of the workers are synchronized with the longest branch, which is typical of a slow blockchain system and results in a tree with few and short branches.
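a minimal python sketch of the standard protocol and of the o(n + m) rooted-tree representation discussed above (ours, not the paper's algorithm listing): the tree is kept as parent pointers with a depth table, plus the worker-to-block map; the (block, distance) message format is an assumption about what notifications carry.

```python
class BlockTree:
    """rooted-tree view of the global blockchain: parent pointers (o(n)),
    depths (distance to the root), and each worker's position (o(m))."""

    def __init__(self, num_workers: int):
        self.parent = {0: None}                       # block 0 is the unique origin
        self.depth = {0: 0}
        self.position = {w: 0 for w in range(num_workers)}

    def produce(self, worker: int, block: int):
        """standard protocol, producer side: append the new block to the block
        the worker is standing on, move to it, and return the notification
        (block, distance to the root) to be broadcast."""
        standing_on = self.position[worker]
        self.parent[block] = standing_on
        self.depth[block] = self.depth[standing_on] + 1
        self.position[worker] = block
        return block, self.depth[block]

    def receive(self, worker: int, block: int, distance: int) -> None:
        """standard protocol, receiver side: switch to the incoming position
        only if it lies farther from the root than the current one."""
        if distance > self.depth[self.position[worker]]:
            self.position[worker] = block
```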
on the one hand, sending the complete sequence from root to end as part of such a message is an accurate, but also expensive, approach in terms of bandwidth, computation, and time. on the other hand, sending only the last block as part of the message is modest on resources, but can represent a communication conundrum whenever the worker being notified about a new block x is not yet aware of the parent block of x. in contrast to slow systems, this situation may frequently occur in fast systems. the workaround is to use subsequent messages to query the previous blocks of x, as needed, thus extending the average duration of inter-worker communication. the network model generates a rooted tree representing a global blockchain from a collection of linked lists representing local blockchains (see definition ). it consists of three mechanisms, namely, growth, attachment, and broadcast. by growth it is meant that the number of blocks in the network increases by one at each time step. attachment refers to the fact that new blocks connect to an existing block, while broadcast refers to the fact that the newly connected block is announced to the entire network. the model is parametric in a natural number m specifying the number of workers, and two probability distributions α and β governing the growth, attachment, and broadcast mechanisms. internally, the growth mechanism creates a new block to be assigned at random among the m workers by taking a sample from α (the time it takes to produce such a block) and broadcasts a synchronization message, whose reception time is sampled from β (the time it takes the other workers to update their local blockchains with the new block). a network at a given discrete step n is represented as a rooted tree t_n = (v_n, e_n), with nodes v_n ⊆ ℕ and edges e_n ⊆ v_n × v_n, and a map w_n : {0, 1, ..., m − 1} → v_n. a node u ∈ v_n represents a block u in the network and an edge (u, v) ∈ e_n represents a directed edge from block u to its parent block v. the assignment w_n(w) denotes the position (i.e., the last block in the local blockchain) of worker w in t_n. definition . (growth model) let α and β be positive and non-negative probability distributions, respectively. the algorithm used in the network model starts with v_0 = {b_0}, e_0 = {} and w_0(w) = b_0 for all workers w, with b_0 = 0 the root block (origin). at each step n > 0, t_n evolves as follows. growth and attachment: uniformly at random, a worker w ∈ {0, 1, ..., m − 1} is chosen for the new block to extend its local blockchain. a new edge appears so that e_n = e_{n−1} ∪ {(w_{n−1}(w), n)}, and w_{n−1} is updated to form w_n with the new assignment w → n, that is, w_n(w) = n and w_n(z) = w_{n−1}(z) for any z ≠ w. broadcast: worker w broadcasts the extension of its local blockchain with the new block n to any other worker z with time β_{n,z} sampled from β. the rooted tree generated by the model in definition begins with block 0 (the root) and adds new blocks n = 1, 2, ... to some of the workers. at each step n > 0, a worker w is selected at random and its local blockchain, b_0 ← · · · ← w_{n−1}(w), is extended to b_0 ← · · · ← w_{n−1}(w) ← n = w_n(w). this results in a concurrent random global behavior, inherent to distributed blockchain systems, not only because the workers are chosen randomly due to the proof-of-work scheme, but also because the communication delays bring some workers out of sync. it is important to note that the steps n = 1, 2, 3, ... are logical time steps, not to be confused with the time units sampled from the variables α and β.
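as an illustration, one logical step of the growth model can be sketched as follows (a simplified reading, not the authors' pseudocode; all names are ours, and the bookkeeping of pending broadcast messages is reduced to appending to a plain list):

```python
import random

def growth_step(n, position, parent, alpha, beta, pending, m):
    # one logical step of the growth model: a worker is chosen uniformly at
    # random, block n extends its local blockchain, and broadcast messages
    # with delays sampled from beta are scheduled for the other workers.
    w = random.randrange(m)                 # attachment: uniform worker choice
    parent[n] = position[w]                 # new edge (w_{n-1}(w), n)
    position[w] = n                         # worker w moves onto block n
    production_time = alpha()               # growth: time to produce the block
    for z in range(m):
        if z != w:
            pending.append((z, n, beta()))  # broadcast: delay sampled from beta
    return production_time
```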
more precisely, although the model does not mention time advancement explicitly, it implicitly assumes that workers are synchronized at the corresponding point in the logical future. for instance, if w sends a synchronization message about a newly created block n to another worker z at the end of logical step n, taking β_{n,z} time, the message will be received by z during the logical step n′ ≥ n that satisfies t_{n′−1} ≤ t_n + β_{n,z} < t_{n′}, where t_k denotes the absolute time at which block k is created. another two reasonable assumptions are implicitly made in the model, namely: (i) the computational power of all workers is similar; and (ii) any broadcasting message includes enough information about the new and previous blocks, so that no re-transmission is required to fill block gaps (or, equivalently, these re-transmission times are included in the delay sampled from β). assumption (i) justifies why the worker producing the new block is chosen uniformly at random. thus, instead of simulating the proof-of-work of the workers to know who will produce the next block and at what time, it is enough to select a worker uniformly and take a sample time from α. assumption (ii) helps in keeping the model description simple. without assumption (ii), it would be mandatory to explicitly define how to proceed when a worker is severely out of date and requires several messages to get synchronized. in practice, the distribution α that governs the time it takes for the network, as a single entity, to produce a block is exponential with mean ᾱ. since proof-of-work is based on finding a nonce that makes a hashing function fall into a specific set of targets, the process of producing a block is statistically equivalent to waiting for a success in a sequence of bernoulli trials. such waiting times would correspond, at first, to a discrete geometric distribution. however, because the time between trials is very small compared to the average time between successes (usually fractions of microseconds against several seconds or minutes), the discrete geometric distribution can be approximated by a continuous exponential distribution function. finally, note that the choice of the distribution function β that governs the communication delay, and whose mean is denoted by β̄, heavily depends on the system under consideration and its communication details (e.g., its hardware and protocol). this section presents an algorithmic approach to the analysis of blockchain efficiency. the algorithms are used to estimate the proportion of valid blocks that are produced during a fixed number of growth steps, based on the network model introduced in section , for blockchains with a fixed and an unbounded number of workers. in general, although presented in this section for the specific purpose of measuring blockchain efficiency, these algorithms can be easily adapted to compute other metrics of interest, such as the speed of growth of the longest branch, the relation between confirmations of a block and the probability of being valid in the long term, or the average length of forks. definition . let t_n = (v_n, e_n) be a blockchain that satisfies definition . the proportion of valid blocks p_n in t_n is defined as the random variable given by the number of blocks in the longest branch of t_n divided by the total number of blocks produced up to step n. the proportion of valid blocks p produced for a blockchain (in the limit) is defined as the random variable p = lim_{n→∞} p_n. their expected values are denoted by p̄_n and p̄, respectively. note that p_n and p are random variables particularly useful to determine some important properties of blockchains. for instance, the probability that a newly produced block becomes valid in the long run is p̄.
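the geometric-to-exponential approximation invoked above for α can be made explicit; a short derivation sketch, where δ denotes the time between bernoulli trials and p the per-trial success probability (both symbols are ours):

```latex
% waiting time X for the first success among bernoulli trials spaced \delta apart,
% with per-trial success probability p and mean waiting time \bar\alpha = \delta / p:
P(X > t) \;=\; (1 - p)^{t/\delta}
        \;=\; \Bigl(1 - \tfrac{\delta}{\bar\alpha}\Bigr)^{t/\delta}
        \;\xrightarrow{\;\delta \to 0\;}\; e^{-t/\bar\alpha}
% so the discrete geometric waiting time converges to an exponential with mean \bar\alpha.
```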
the average rate at which the longest branch grows is approximated by p̄/ᾱ. moreover, the rate at which invalid blocks are produced is approximately (1 − p̄)/ᾱ and the expected time for a block to receive a confirmation is ᾱ/p̄. although p_n and p are random for any single simulation, their expected values p̄_n and p̄ can be approximated by averaging several monte carlo simulations. the three algorithms presented in the following subsections are sequential and single-threaded, designed to compute the value of p_n under the standard protocol (algorithm ). they can be used for computing p̄_n and, thus, for approximating p̄ for large values of n. the first and second algorithms compute the exact value of p_n for a bounded number of workers. while the first algorithm simulates the three mechanisms present in the network model (i.e., growth, attachment, and broadcast; see definition ), the second one takes a more time-efficient approach for computing p_n. the third algorithm is a fast approximation algorithm for p_n, useful in the context of an unbounded number of workers. it is of special interest for studying the efficiency of large and fast blockchain systems because its time complexity does not depend on the number of workers in the network. algorithm simulates the model with m workers running concurrently under the standard protocol for up to n logical steps. it uses a list b of m block sequences that reflect the local copy of each worker. the sequences are initially limited to the origin block and can be randomly extended during the simulation. each iteration of the main loop consists of four stages: (i) the wait for a new block to be produced; (ii) the reception of messages within the corresponding waiting period; (iii) the addition of the block to the blockchain of a randomly selected worker; and (iv) the broadcast of the new position of the selected worker in the shared blockchain to the other workers. the priority queue pq is used to queue messages for future delivery, thus simulating the communication delays. messages have the form (t, i, b), where t represents the arrival time of the message, i the recipient worker, and b the content, which informs that a (non-specified) worker holds the sequence of blocks b. the statements α() and β() draw samples from α and β, respectively. the overall complexity of algorithm depends, as usual, on specific assumptions about its concrete implementation. first, let the time complexity of querying α() and β() be o(1), which is a reasonable assumption in most computer programming languages. second, note that the following time complexity estimates may be higher depending on their specific implementations (e.g., if a histogram is used instead of a continuous function for sampling these variables). in particular, consider two implementation variants. for both variants, the average length of the priority queue for arbitrarily large n is expected to be o(m), more precisely, m β̄/ᾱ. consider a scenario in which the statement b_i ← b is implemented by creating a copy in o(n) time and the append statement takes o(1) time. the overall time complexity of the algorithm is then o(mn²). now consider a scenario in which b_i ← b merely copies the list reference in o(1) time and the append statement creates a copy in o(n) time. for the case where n ≫ m, under the assumption that the priority queue has log-time insertion and removal, the time complexity is brought down to o(n²). in either case, the spatial complexity is o(mn).
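the following self-contained python sketch is in the spirit of algorithm (it is not the authors' implementation; the interfaces are ours, and delivering queued messages in batches at production times is a simplifying assumption):

```python
import heapq, random

def simulate(m, n_blocks, alpha, beta, seed=None):
    # m workers, a priority queue of pending synchronization messages,
    # and local blockchains stored explicitly as lists of block ids.
    rng = random.Random(seed)
    chains = [[0] for _ in range(m)]      # every worker starts on the origin block 0
    pq = []                               # entries: (arrival_time, recipient, chain)
    now = 0.0
    for n in range(1, n_blocks):
        now += alpha(rng)                 # wait for the next block to be produced
        while pq and pq[0][0] <= now:     # deliver messages that arrived meanwhile
            _, i, b = heapq.heappop(pq)
            if len(b) > len(chains[i]):   # standard protocol: switch to longer chain
                chains[i] = b
        w = rng.randrange(m)              # the producing worker, chosen uniformly
        chains[w] = chains[w] + [n]       # append the new block (fresh copy)
        for z in range(m):                # broadcast the new position
            if z != w:
                heapq.heappush(pq, (now + beta(rng), z, chains[w]))
    longest = max(len(c) for c in chains)
    return longest / n_blocks             # proportion of valid blocks p_n

# example usage: exponential production times (mean 1.0), constant delay 0.1
p = simulate(m=10, n_blocks=2000,
             alpha=lambda r: r.expovariate(1.0),
             beta=lambda r: 0.1)
```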
a key advantage of algorithm is that, with a slight modification, it can return the blockchain itself instead of the proportion p_n, which enables a richer analysis in the form of additional metrics different from p; for example, the metrics mentioned at the beginning of this section can be computed directly from the returned tree.
algorithm : simulation of m workers using a priority queue.
algorithm : simulation of m workers using a matrix d.
algorithm is a faster alternative to algorithm . it uses a different encoding for the collection of local blockchains. in particular, algorithm stores the lengths of the blockchains instead of the sequences themselves. thereby, it suppresses the need for a priority queue. algorithm offers an optimized routine that can be called from algorithm . let t_k represent the (absolute) time at which block k is created, h_k the length of the local blockchain after being extended with block k, and z_k the cumulative maximum given by z_k = max{z_{k−1}, h_k}. the spatial complexity of algorithm is o(mn) due to the computation of the matrix d, and its time complexity is o(nm + n²) when algorithm is not used. note that there are n iterations, each requiring o(n) and o(m) time for computing h_k and d_k, respectively. however, if algorithm is used for computing h_k, the average overall complexity is reduced. in the worst-case scenario, the complexity of algorithm is o(k). however, the experimental evaluations suggest an average below o(β̄/ᾱ) (constant with respect to k). thus, the average runtime complexity of algorithm is bounded by o(nm + min{n², n + n β̄/ᾱ}), and this corresponds to o(nm) unless the blockchain system is extremely fast (β̄ ≫ ᾱ). algorithms and compute the value of p_n for a fixed number m of workers. both algorithms can be used to compute p_n for different values of m. however, the time complexity of these two algorithms heavily depends on the value of m, which presents a practical limitation when faced with the task of analyzing large blockchain systems. this section introduces an algorithm for approximating p_n for an unbounded number of workers. it also presents formal observations that support the proposed approximation. recall that p_n can be used as a measure of efficiency in terms of the proportion of valid blocks that have been produced up to step n in the blockchain t_n = (v_n, e_n), as formalized in definition . this definition assumes a fixed number of workers; that is, p_n can be written as p_{m,n} to represent the proportion of valid blocks in t_n with m workers. for the analysis of large blockchains, the challenge is to find an efficient way to estimate p_{m,n} for large values of m and n. in other words, to find an efficient algorithm for approximating the random variables p*_n and p* defined as p*_n = lim_{m→∞} p_{m,n} and p* = lim_{n→∞} p*_n. the proposed approach modifies algorithm by suppressing the matrix d. the idea is to replace the need for computing d_{i,j} by an approximation based on the random variable β and the length of the blockchain h_k in each iteration of the main loop. note that the first row can be assumed to be 0 wherever it appears because d_{0,j} = 0 for all j. for the remaining rows, an approximation is introduced by observing that if an element x_m is chosen at random from the matrix d of size (n − 1) × m (i.e., matrix d without the first row), then the cumulative distribution function of x_m is given by f_m(r) = 1/m + (1 − 1/m) p(β() ≤ r) for r ≥ 0. this is because the elements x_m of d are either samples from β, whose domain is r ≥ 0, or 0 with probability 1/m, since there is one zero per row.
therefore, given that the following functional limit converges uniformly (see theorem below), each d_{i,j} can be approximated by directly sampling the distribution β. as a result, algorithm can be used for computing h_k by replacing d_{i,j} with β(). theorem . let f_k(r) := p(x_k ≤ r) and g(r) := p(β() ≤ r). the functional sequence {f_k}_{k≥1} converges uniformly to g. proof. let ε > 0. define n := ⌈1/ε⌉ and let k be any integer with k > n. then, for every r, |f_k(r) − g(r)| = (1/k)|1 − g(r)| ≤ 1/k < 1/n ≤ ε. using theorem , the need for the bookkeeping matrix d and the selection of a random worker j are discarded from algorithm , resulting in algorithm . the proposed algorithm computes p*_n, an approximation of lim_{m→∞} p_{m,n}, in which the matrix entries d_{i,j} are replaced by samples from β each time they are needed, thus ignoring the arguably negligible hysteresis effects.
algorithm : approximation for lim_{m→∞} p_{m,n} (in the listing, algorithm * stands for algorithm with β() instead of d_{i,j}).
the time complexity of algorithm , implemented by using algorithm with β() instead of d_{i,j}, is o(n²) and its space complexity is o(n). if the pruning algorithm is used, the time complexity drops below o(n + n β̄/ᾱ) according to experimentation. this complexity can be considered o(n) as long as β̄ does not greatly exceed ᾱ. this section presents an experimental evaluation of blockchain efficiency in terms of the proportion of valid blocks produced by the workers for the global blockchain. the model in section is used as the mathematical framework, while the algorithms in section are used for experimental evaluation on that framework. the main claim is that, under certain conditions, the efficiency of a blockchain can be expressed as a ratio between ᾱ and β̄. experimental evaluations provide evidence on why algorithm (the approximation algorithm for computing the proportion of valid blocks in a blockchain system with an unbounded number of workers) is an accurate tool for computing the measure of efficiency p*. note that the speed of a blockchain can be characterized by the relationship between the expected values of α and β. definition . let α and β be the distributions according to definition . a blockchain is classified as: slow if ᾱ ≫ β̄, chaotic if ᾱ ≪ β̄, and fast if ᾱ ≈ β̄. definition captures the intuition about the behavior of a global blockchain in terms of how alike the times required for producing a block and for local block synchronization are. note that the bitcoin implementation is classified as a slow blockchain system because the time between the creation of two consecutive blocks is much larger than the time it takes for local blockchains to synchronize. in chaotic blockchains, a dwarfing synchronization time means that basically no (or relatively little) synchronization is possible, resulting in a blockchain in which rarely any block would be part of "the" valid chain of blocks. a fast blockchain, however, is one in which the times for producing a block and broadcasting a message are similar. the two-fold goal of this section is, first, to analyze the behavior of p̄* for the three classes of blockchains and, second, to understand how the trade-off between production speed and communication time affects the efficiency of the data structure by means of a formula. in favor of readability, the experiments presented next identify algorithms and as a_m and a_∞, respectively. furthermore, the claims and experiments assume that the distribution α is exponential, which holds true for proof-of-work systems.
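before turning to the claims, here is a heavily hedged python sketch of the unbounded-worker approximation referred to as a_∞, under our reading of it: every delay d_{i,j} is replaced by a fresh sample from β, so a new block extends the longest chain among the blocks whose resampled broadcast has already arrived (the pruning optimization is omitted, giving the o(n²) variant):

```python
import random

def approx_p(n_blocks, alpha, beta, seed=None):
    # sketch of the unbounded-worker approximation (our reading, not the
    # authors' listing): d[i][j] entries are replaced by fresh beta samples.
    rng = random.Random(seed)
    t = [0.0]          # t[k]: creation time of block k (origin at time 0)
    h = [0]            # h[k]: distance of block k to the root
    z = 0              # cumulative maximum of h = length of the longest branch
    now = 0.0
    for k in range(1, n_blocks):
        now += alpha(rng)
        # the producer stands on the highest block whose broadcast it received;
        # blocks that cannot improve `best` are skipped, which is harmless
        # because the beta samples are independent.
        best = 0
        for j in range(k - 1, 0, -1):
            if h[j] > best and t[j] + beta(rng) <= now:
                best = h[j]
        t.append(now)
        h.append(best + 1)
        z = max(z, h[-1])
    return z / n_blocks
```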
claim . unless the system is chaotic, the hysteresis effect of the matrix entries d_{i,j} is negligible. note that theorem implies that if the hysteresis effect of the random variables d_{i,j} is negligible, then algorithm is a good enough approximation of algorithm . however, it does not prove that this assertion holds in general. experimental evaluation suggests that this is indeed the case, as stated in claim . figure summarizes the average output of a_m and the region that contains half of these outputs, for several values of m. all outputs seem to approach that of a_∞, not only for the expected value (figure (a)), but also in terms of the generated p.d.f. (figure (b)). similar results were obtained with several distribution functions for β. in particular, the exponential, chi-squared, and gamma probability distribution functions were used (with k ∈ { , . , , , , }), all with different mean values. the resulting plots are similar to the ones depicted in figure . as the quotient β̄/ᾱ grows beyond , the convergence of a_m becomes much slower and the approximation error is noticeable. an example is depicted in figure , where a blockchain system produces on average blocks during the transmission of a synchronization message (i.e., the system is classified as chaotic). even after considering workers, the shape of the p.d.f. is shifted considerably. the error can be due to: (i) the hysteresis effect that is ignored by a_∞; or (ii) the slow rate of convergence. in any case, the output of this class of systems is very low, making them unstable and useless in practice. an intuitive conclusion about blockchain efficiency and speed of block production is that slower systems tend to be more efficient than faster ones. that is, faster blockchain systems have a tendency to overproduce blocks that will not become valid. claim . if the system is either slow or fast, then p* = ᾱ/(ᾱ + β̄). figure presents an experimental evaluation of the proportion of valid blocks in a blockchain in terms of the ratio β̄/ᾱ. for the left and right plots, the horizontal axis represents how fast blocks are produced in comparison with how slow synchronization is achieved. if the system is slow, then efficiency is high because most newly produced blocks tend to be valid. if the system is fast, however, then efficiency is balanced because a newly produced block is about equally likely to become valid or invalid. finally, note that for fast and chaotic blockchains, say for 10^{−1} ≤ β̄/ᾱ, there is still a region in which efficiency is arguably high. as a matter of fact, even if synchronization of local blockchains takes on average a tenth of the time it takes to produce a block, the proportion of blocks that become valid remains, by the formula above, close to 90%. in practice, this observation can bridge the gap between the current use of blockchains as slow systems and the need for faster blockchains.
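the claimed formula is straightforward to tabulate; a minimal illustration (the ratios below are arbitrary example values, not taken from the paper):

```python
# expected proportion of valid blocks according to the claim above:
# p* = alpha_bar / (alpha_bar + beta_bar), a function of the ratio beta_bar/alpha_bar.
def efficiency(ratio):       # ratio = beta_bar / alpha_bar
    return 1.0 / (1.0 + ratio)

for ratio in (0.001, 0.01, 0.1, 1.0, 10.0):
    print(f"beta/alpha = {ratio:>6}: p* = {efficiency(ratio):.3f}")
# slow systems (small ratio) are highly efficient; fast ones (ratio ~ 1) sit near 1/2.
```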
a comprehensive account of the vast literature on complex networks is beyond the scope of this work. the aim here is more modest, namely, the focus is on related work proposing and using formal and semi-formal algorithmic approaches to evaluate properties of blockchain systems. there are a number of recent studies that focus on the analysis of blockchain properties with respect to metaparameters. some of them are based on network and node simulators. other studies conceptualize different metrics and models that aim to reduce the analysis to the essential parts of the system. in [ ], a. gervais et al. introduce a quantitative framework to analyze the security and performance implications of various consensus and network parameters of proof-of-work blockchains. they devise optimal adversarial strategies for several attack scenarios while taking into account network propagation. ultimately, their approach can be used to compare the tradeoffs between blockchain performance and its security provisions. y. aoki et al. [ ] propose simblock, a blockchain network simulator in which blocks, nodes, and the network itself can be instantiated by using a comprehensive collection of parameters, including the propagation delay between nodes. towards a similar goal, j. kreku et al. [ ] show how to use the absolut simulation tool [ ] for prototyping blockchains in different environments and finding optimal performance, given some parameters, in constrained platforms such as raspberry pi and nvidia jetson tk . r. zhang and b. preneel [ ] introduce a multi-metric evaluation framework to quantitatively analyze proof-of-work protocols. their systematic security analysis of seven of the most representative and influential alternative blockchain designs concludes that none of them outperforms the so-called nakamoto consensus in terms of either chain quality or attack resistance. all these efforts have in common that simulation-based analysis is used to understand non-functional requirements of blockchain designs, such as performance and security, up to a high degree of confidence. however, in most cases the concluding results are tied to a specific implementation of the blockchain architecture. the model and algorithms presented in this work can be used to analyze each of these scenarios in a more abstract fashion by using appropriate parameters for simulating the blockchain growth and synchronization. an alternative approach for studying blockchains is through formal semantics. g. rosu [ ] takes a novel approach to the analysis of blockchain systems by focusing on the formal design, implementation, and verification of blockchain languages and virtual machines. his approach uses continuation-based formal semantics to later analyze reachability properties of the blockchain evolution with different degrees of abstraction. in this direction of research, e. hildenbrandt et al. [ ] present kevm, an executable formal specification of ethereum's virtual machine that can be used for rapid prototyping, as well as a formal interpreter of ethereum's programming languages. c. kaligotla and c. macal [ ] present an agent-based model of a blockchain system in which the behavior and decisions made by agents are detailed. they are able to implement a generalized simulation and a measure of blockchain efficiency from an agent-choice and energy-cost perspective. finally, j. göbel et al. [ ] use markov models to establish that some attack strategies, such as selfish-mine, cause the rate of production of orphan blocks to increase. the research presented in this manuscript uses random networks to model the behavior of blockchain systems. as future work, the proposed model and algorithms can be specified in a rewrite-based framework such as rewriting logic [ ], so that the rule-based approach in [ , ] and the agent-based approach in [ ] can both be extended to the automatic analysis of (probabilistic) temporal properties of blockchains. moreover, as is usual in a random network approach, topological properties of blockchain systems can be studied with the help of the model proposed in this manuscript.
in general, this paper differs from the above studies in the following aspects. the proposed analysis is not based on an explicit low-level simulation of a network or protocol, and it does not explore the behavior of blockchain systems in the presence of attackers. instead, this work simulates the behavior of blockchain efficiency from a meta-level perspective and investigates the strength of the system with respect to shortcomings inherent in its design. therefore, the proposed analysis differs from [ , , , ] and is rather closely related to studies which consider the core properties of blockchain systems prior to attacks [ , ]. the bounds for the meta-parameters are more conservative and less secure compared to scenarios in which the presence of attackers is taken into account. finally, with respect to studying blockchains through formal semantics, the proposed analysis is able to consider an artificial but convenient scenario of having an infinite number of concurrent workers. formal semantics, as well as other related simulation tools, cannot currently handle such scenarios. this paper presented a network model for blockchains and showed how the proposed simulation algorithms can be used to analyze the efficiency (in terms of production of valid blocks) of blockchain systems. the model is parametric on: (i) the number of workers (or nodes); and (ii) two probability distributions governing the time it takes to produce a new block and the time it takes the workers to synchronize their local copies of the blockchain. the simulation algorithms are probabilistic in nature and can be used to compute the expected value of several metrics of interest, both for a fixed and an unbounded number of workers, via monte carlo simulations. it is proven, under reasonable assumptions, that the fast approximation algorithm for an unbounded number of workers yields accurate estimates in relation to the other two exact (but much slower) algorithms. claims, supported by extensive experimentation, have been proposed, including a formula to measure the proportion of valid blocks produced in a blockchain in terms of the two probability distributions of the model. the model, algorithms, and experiments provide insights and useful mathematical tools for specifying, simulating, and analyzing the design of fast blockchain systems in the years to come. future work on the analytical treatment of the experimental observations contributed in this work should be pursued. this includes proving the two claims in section . first, that hysteresis effects are negligible unless the system is extremely fast. second, that the expected proportion of valid blocks in a blockchain system is given by ᾱ/(ᾱ + β̄), where ᾱ and β̄ are the means of the probability distributions governing block production and communication times, respectively. furthermore, the generalization of the claims to non-proof-of-work schemes, i.e., to different probability distribution functions for specifying the time it takes to produce a new block, may also be considered. finally, the study of different forms of attack on blockchain systems can be pursued with the help of the proposed model.
references (extracted titles):
- introducing blockchains for healthcare
- simblock: a blockchain network simulator
- blockchain technologies: the foreseeable impact on society and industry
- emergence of scaling in random networks
- application of public ledgers to revocation in distributed access control
- the limits to blockchain? scaling vs. decentralization
- on scaling decentralized blockchains
- information propagation in the bitcoin network
- on random graphs
- on the security and performance of proof of work blockchains
- bitcoin blockchain dynamics: the selfish-mine strategy in the presence of propagation delay (performance evaluation)
- blockchain application and outlook in the banking industry
- bc-med: plataforma de registros médicos electrónicos sobre tecnología blockchain (an electronic medical records platform on blockchain technology)
- kevm: a complete formal semantics of the ethereum virtual machine
- the application of blockchain technology in e-government in china
- managing iot devices using blockchain platform
- a generalized agent based framework for modeling a blockchain system
- blockchain solutions for big data challenges: a literature review
- evaluating the efficiency of blockchains in iot with simulations
- conditional rewriting logic as a unified model of concurrency
- challenges and security aspects of blockchain based on online multiplayer games
- bitcoin: a peer-to-peer electronic cash system
- blockchain in government: benefits and implications of distributed ledger technology for information sharing
- formal design, implementation and verification of blockchain languages
- blockchain technology in the chemical industry: machine-to-machine electricity market
- how blockchain is changing finance
- toward more rigorous blockchain research: recommendations for writing blockchain case studies
- early-phase performance exploration of embedded systems with absolut framework
- lay down the common metrics: evaluating proof-of-work consensus protocols' security
key: cord- -wutnt yk authors: lech, karolina; liu, fan; davies, sarah k.; ackermann, katrin; ang, joo ern; middleton, benita; revell, victoria l.; raynaud, florence j.; hoveijn, igor; hut, roelof a.; skene, debra j.; kayser, manfred title: investigation of metabolites for estimating blood deposition time date: - - journal: int j legal med doi: . /s - - -y sha: doc_id: cord_uid: wutnt yk trace deposition timing reflects a novel concept in forensic molecular biology involving the use of rhythmic biomarkers for estimating the time within a -h day/night cycle a human biological sample was left at the crime scene, which in principle allows verifying a sample donor's alibi. previously, we introduced two circadian hormones for trace deposition timing and recently demonstrated that messenger rna (mrna) biomarkers significantly improve time prediction accuracy. here, we investigate the suitability of metabolites, measured using a targeted metabolomics approach, for trace deposition timing. analysis of plasma metabolites collected around the clock at -h intervals for h from male participants under controlled laboratory conditions identified metabolites showing statistically significant oscillations, with peak times falling into three day/night time categories: morning/noon, afternoon/evening and night/early morning.
time prediction modelling identified independently contributing metabolite biomarkers, which together achieved prediction accuracies expressed as auc of . , . and . for these three time categories respectively. combining metabolites with previously established hormone and mrna biomarkers in time prediction modelling resulted in an improved prediction accuracy reaching aucs of . , . and . respectively. the additional impact of metabolite biomarkers, however, was rather minor, as the previously established model with melatonin, cortisol and three mrna biomarkers achieved auc values of . , . and . for the same three time categories respectively. nevertheless, the selected metabolites could become practically useful in scenarios where rna marker information is unavailable, such as due to rna degradation. this is the first metabolomics study investigating circulating metabolites for trace deposition timing, and more work is needed to fully establish their usefulness for this forensic purpose. knowing the time of the day or night when a biological trace was placed at a crime scene has valuable implications for criminal investigation. it would allow verifying the alibi and/or testimony of the suspect(s) and could indicate whether other, yet unknown suspects may be involved in the crime. as such, knowing the trace deposition time would provide a link, or lack thereof, between the sample donor, identified via forensic dna profiling, and the criminal event. therefore, finding a means to retrieve information about the deposition time of biological material is of inestimable forensic value. in principle, molecular biomarkers with rhythmic changes in their concentration during the -h day/night cycle that are analysable in crime scene traces would provide a useful resource for trace deposition timing. circadian rhythms are oscillations with a (near) -h period present in almost every physiological and behavioural aspect of human biology. they are generated on a molecular level by coordinated expression, translation and interaction of core clock genes and their respective protein products [ ]. together, these genes form a transcriptional-translational feedback loop driving the expression of various clock-controlled genes, which manifests as rhythms in numerous processes including metabolism [ ] [ ] [ ] [ ] [ ] [ ], where circadian timing plays a role in coordinating biochemical reactions and metabolic activities. because of this ubiquity of circadian rhythms and their association with many biological processes, the pool of potential rhythmic biomarkers is vast and diverse [ ]. in a proof-of-principle study, we previously introduced the concept of molecular trace deposition timing, i.e. establishing the day/night time when (not since) a biological sample was placed at the crime scene, by measuring two circadian hormones, melatonin and cortisol, in small amounts of blood and saliva, and demonstrated that the established rhythmic concentration pattern of both biomarkers can be observed in such forensic-type samples [ ]. recently, we identified various rhythmically expressed genes in the blood [ ] and subsequently demonstrated the suitability of such messenger rna (mrna) biomarkers for blood trace deposition timing by establishing a statistical model based on melatonin, cortisol and three mrna biomarkers for predicting three day/night time categories: morning/noon, afternoon/evening and night/early morning [ ]. here, we investigate a different type of molecular biomarker, namely metabolites, i.e.
intermediates or products of metabolism, for their suitability in trace deposition timing. metabolic processes are known to be coupled with the circadian timing system in order to properly coordinate and execute them [ , , ]. thus, many (by-)products of metabolism have been shown to exhibit rhythms in their daily concentration levels in metabolomics studies [ , , ], while none of them has yet been tested for trace deposition timing. using plasma obtained from blood samples collected every h across a -h period from healthy, young males, metabolites were screened via a targeted metabolomics approach to identify those with statistically significant rhythms in concentration. rhythmic markers, as shown previously with hormones and mrna [ , ], are able to predict day/night time categories. thus, we hypothesized that applying rhythmic metabolites (with or without previously established rhythmic biomarkers) to time prediction modelling could improve the categorical time prediction for trace deposition timing, which was assessed in this study. the plasma metabolite data used in this study were obtained from blood samples collected during the sleep/sleep deprivation study (s/sd) conducted at the surrey clinical research centre (crc) at the university of surrey, uk. full details of the study protocol and eligibility criteria have been reported elsewhere [ , , ]. for the present analysis, sequential two-hourly blood samples per participant (n = males, mean age ± standard deviation = ± years) were used, giving a total of observations for subsequent model building. these samples spanned the first h of the s/sd study (from : h on day to : h on day ). the samples covering the subsequent sleep deprivation condition, from : h on day to : h on day , were excluded from the analysis. full details of the blood sample collection, plasma extraction method, targeted lc/ms metabolomics analysis and subsequent statistical analyses have been described in the materials and methods and supplementary material sections of previous articles [ , , ]. concentration data of metabolites (μm), belonging to the acylcarnitine, amino acid, biogenic amine, hexose, glycerophospholipid and sphingolipid classes, were obtained using the absoluteidq p targeted metabolomics kit (biocrates life sciences ag, innsbruck, austria) run on a waters xevo tq-s mass spectrometer coupled to an acquity hplc system (waters corporation, milford, ma, usa). after correcting the metabolite data for batch effects, described in detail in [ ], we analysed the metabolite profiles with the single cosinor and nonlinear curve fitting (nlcf) methods to determine the presence of -h rhythmicity, as was done previously [ , ]. this first selection step of metabolites for time category prediction was based on the statistically significant outcomes from the nlcf and single cosinor methods: the selected metabolites had to have a statistically significant amplitude and acrophase, calculated with the nlcf method, and statistically significant fits to a cosine curve, as calculated with the single cosinor method. final selection of markers for prediction modelling was done using multiple regression, including all markers as the explanatory variables and the sampling time as the dependent variable, and ensuring that all of the selected markers had a statistically significant and independent effect on the overall model fit. the metabolite markers that did not show a statistically significant independent effect were excluded from the marker selection process.
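a minimal sketch of a single cosinor fit with a 24-h period, via linear least squares on the sine/cosine reparameterization (a generic textbook formulation, not the authors' software; inputs are assumed to be numpy arrays of sampling times in hours and concentrations):

```python
import numpy as np

def cosinor_fit(t_hours, y, period=24.0):
    # fit y(t) = mesor + A*cos(w*t + phi) by least squares on the equivalent
    # linear form mesor + b1*cos(w*t) + b2*sin(w*t), with w = 2*pi/period.
    w = 2 * np.pi / period
    X = np.column_stack([np.ones_like(t_hours),
                         np.cos(w * t_hours),
                         np.sin(w * t_hours)])
    (mesor, b1, b2), *_ = np.linalg.lstsq(X, y, rcond=None)
    amplitude = np.hypot(b1, b2)          # A = sqrt(b1^2 + b2^2)
    phi = np.arctan2(-b2, b1)             # acrophase angle in radians
    peak_time = (-phi / w) % period       # clock time of the peak, in hours
    return mesor, amplitude, phi, peak_time
```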
the most suitable predicted time categories were established based on the average peak times of the metabolite and hormone concentrations, as calculated with the nlcf method. the prediction model was built based on multinomial logistic regression, where the batch-corrected concentration values of metabolites were considered as the predictors and the day/night time categories as the response variable, as described elsewhere [ , ]. additionally, we combined the previously proposed circadian hormones melatonin and cortisol [ ] as well as the previously established rhythmic mrna biomarkers mknk , per and hspa b [ ] with the metabolites in a prediction model, to determine whether a combination of the different types of rhythmic markers improves the prediction accuracy of time estimations. the dataset used for prediction modelling consisted of observations, i.e. individuals and time points per individual. the multinomial logistic regression is written as ln(π_k/π_K) = β_{k,0} + Σ_j β_{k,j} x_j for each non-reference time category k, with x_j the marker concentrations and K the reference category, and the probability of a certain day/night time category can be estimated as π_k = exp(η_k)/Σ_l exp(η_l), with η_k the linear predictor of category k. the day/night category with max(π_1, π_2, π_3) was considered as the predicted time category. the model predicted the probabilities of different possible outcomes of a categorical dependent variable, given a set of variables (predictors), as previously described and applied for eye and hair colour prediction based on snp genotypes [ ] [ ] [ ] and for trace deposition time using circadian mrna biomarkers [ ]. because of the small sample size, the performance of the generated model(s) was evaluated using the leave-one-out cross-validation (loocv) method [ ]. this approach builds a prediction model from all observations minus one, in this case observations, and predicts the time category for the one remaining observation. the whole procedure is repeated once for each observation, i.e. in this case times. the area under the receiver operating characteristic (roc) curve (auc), which describes the accuracy of the prediction, was derived for each time category based on the concordance between the predicted probabilities and the observed time category. in general, auc values range from 0.5, which corresponds to random prediction, to 1.0, which represents perfect prediction. the concordance between the predicted and observed categories was categorized into four groups: true positives (tp), true negatives (tn), false positives (fp) and false negatives (fn). four accuracy parameters were derived: sensitivity = tp / (tp + fn) × 100, specificity = tn / (tn + fp) × 100, positive predictive value (ppv) = tp / (tp + fp) × 100 and negative predictive value (npv) = tn / (tn + fn) × 100. notably, the observations that were used in this study were not completely independent from each other; however, we aimed to minimize the bias by cross-validation using loocv.
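the modelling and validation pipeline just described can be sketched with scikit-learn (an assumption on our part; the authors do not name their software, and all identifiers are illustrative):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneOut

def loocv_auc(X, y):
    # X: (observations x markers) concentration matrix; y: time-category labels.
    # leave-one-out cross-validated class probabilities, then one-vs-rest AUCs.
    classes = np.unique(y)
    proba = np.zeros((len(y), len(classes)))
    for train, test in LeaveOneOut().split(X):
        # multinomial logistic regression (the default in recent scikit-learn);
        # assumes every class is present in each training fold
        model = LogisticRegression(max_iter=1000)
        model.fit(X[train], y[train])
        proba[test] = model.predict_proba(X[test])
    return {c: roc_auc_score(y == c, proba[:, i])
            for i, c in enumerate(classes)}
```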
from the metabolites analysed in the plasma samples, we identified metabolite biomarkers showing statistically significant oscillations with both the nlcf and cosinor methods (table ). next, these metabolites were assigned to day or night time categories based on their mean peak (acrophase) time estimates (table ). an overrepresentation of metabolites (n = , %) demonstrating peak concentrations in the afternoon, between : and : h, was noted. five out of ( %) metabolites had their highest concentration values during the night, between : and : h. only one metabolite showed a peak time in the early morning, around : h. consequently, we assigned all metabolites to three day/night time categories, i.e. morning/noon ( : - : h), afternoon/evening ( : - : h) and night/early morning ( : - : h), together comprising one complete -h day/night cycle. in the first step of the biomarker selection, we applied linear regression to all metabolites identified as significantly rhythmic, to select those with an independent contribution to the model for predicting the three day/night time categories: morning/noon, afternoon/evening and night/early morning, as previously done for mrna and hormone biomarkers [ ]. this analysis revealed a subset of metabolite biomarkers (ac-c , ac-c : , ac-c , isoleucine, proline, pcaac : , pcaac : , pcaec : , pcaec : and smc : ). the remaining metabolites were omitted from the subsequent model building and model validation analysis, as their effect on time category prediction was 'masked' by the metabolite biomarkers included in the model (table ). figure presents z-scored concentration values across the day/night cycle for these metabolite biomarkers. however, our previously established model based on two circadian hormones (melatonin and cortisol) and three mrna biomarkers (mknk , hspa b and per ) gave considerably higher auc values of . , . and . for the same three time categories respectively [ ] than those achieved here with the model based on the plasma metabolites. therefore, we performed time prediction modelling using the metabolite biomarkers highlighted here together with the previously identified hormone and mrna biomarkers. this analysis revealed a subset of seven independently contributing biomarkers: five metabolites (ac-c , ac-c : , ac-c , isoleucine and smc : ), one hormone (melatonin) and one mrna biomarker (mknk ). the auc values obtained with this combined biomarker model were . for morning/noon, . for afternoon/evening and . for night/early morning (table ). in this forensically motivated metabolomics study, metabolite biomarkers exhibiting significant daily rhythms in concentration were identified in plasma and further investigated for their suitability for estimating blood trace deposition time. the metabolites initially tested are included in the absoluteidq p targeted metabolomics kit (biocrates life sciences ag, innsbruck, austria), belong to five compound classes and are involved in major metabolic pathways, such as energy metabolism, ketosis, metabolism of amino acids, cell cycle and cell proliferation, and carbohydrate metabolism, to name a few. metabolism is interconnected with circadian rhythms, influencing them and, in turn, being influenced by them [ , , , , ]. among the metabolites with statistically significant oscillations identified here, we found a strong overrepresentation of those exhibiting peak concentrations in the afternoon, mainly from the phosphatidylcholine class (table ). although we currently cannot fully understand what causes this overrepresentation, the observed peak times agree with data showing that lipid metabolism transcripts in humans have maximum transcription levels during the day [ ]. the prediction model established here utilized metabolite biomarkers for estimating three day/night time categories.
table legend: auc = area under the receiver operating characteristic (roc) curve; ppv = positive predictive value; npv = negative predictive value; spec = specificity; sens = sensitivity; a = as established previously [ ] [ ].
however, the final comparison with the combined model, based on two hormones (melatonin, cortisol) and three mrna biomarkers (mknk , hspa b and per ), (iii) showed that the metabolite-based model was considerably less accurate, giving lower auc values by . , . and . , for morning/noon, afternoon/evening and night/early morning respectively [ ] . this final finding was the motivation to combine together in one time prediction model the metabolite biomarkers identified here, with the hormone and mrna biomarkers identified previously [ ] . the best combined model was based on five metabolites (ac-c , ac-c : , ac-c , isoleucine and smc : ), melatonin and the mknk and reached auc values of . for morning/noon, . for afternoon/evening and . for night/early morning. overall, this combined model was slightly more accurate in predicting the afternoon/evening and the night/early morning categories (auc increase of . ) and slightly less accurate in predicting the morning/noon category (auc decrease of . ) compared with the previously established combined hormone and mrna-based model [ ] . this rather minor impact of the newly tested metabolites, relative to the previously tested hormones and mrna biomarkers [ ] , questions the value of using plasma metabolites for trace deposition timing. the major subset of the metabolites identified in the current study peaked during the day, and this might reflect either the feeding-fasting schedule [ , ] or their original source. the original source of metabolites circulating in plasma is difficult to determine accurately since they can be derived from multiple organs that are regulated by different systemic and external cues influencing their function and rhythmicity, which, in turn, modifies the rhythms of the generated metabolites. consequently, if the metabolites identified here are sensitive to feeding and fasting cues, their applicability for trace deposition timing may be rather limited, but their value for monitoring peripheral circadian rhythms in the liver, for instance, may be crucial. furthermore, the previously introduced hormone and mrna biomarkers [ ] can feasibly be analysed by using an elisa assay and rt-qpcr respectively, techniques that nowadays are straightforward and require only basic laboratory instruments and have been shown to be suitable for forensic trace analysis. in comparison, relatively specialized lc/ms equipment and methodology are needed to simultaneously analyse a large number of metabolites circulating in plasma, even more so, when measuring a forensic trace sample. regardless of these constraints, it has been shown that measuring metabolites in dried blood is possible [ , ] , but needs to be studied further in the forensic context, where the quantity and the quality of dried blood stains are often compromised. however, in situations where intact rna is not available and the preferred mrna-based time estimation models can therefore not be used, metabolite markers might be the markers of choice. in such situation, metabolite analysis may provide valuable information on trace deposition time. the technical challenges should thus not impede future studies to fully establish whether plasma metabolites could be useful biomarkers for trace deposition timing, and if additional metabolites can achieve a more detailed and accurate time estimation than the metabolites identified here. 
additionally, more samples collected around the -h clock from more individuals need to be analysed to make the time prediction model more robust, and the analysis method, ideally a multiplex system, needs to be forensically validated, including sensitivity, specificity and stability testing, before final forensic casework application may be considered.
references (extracted titles):
- central and peripheral circadian clocks in mammals
- mammalian circadian clock and metabolism - the epigenetic link
- effects of sleep and circadian rhythm on the human immune system
- diurnal rhythms in blood cell populations and the effect of acute sleep deprivation in healthy young men
- effect of sleep deprivation on rhythms of clock gene expression and melatonin in humans
- metabolism and the circadian clock converge
- effect of sleep deprivation on the human metabolome
- improving human forensics through advances in genetics, genomics and molecular biology
- estimating trace deposition time with circadian biomarkers: a prospective and versatile tool for crime scene reconstruction
- dissecting daily and circadian expression rhythms of clock-controlled genes in human blood
- evaluation of mrna markers for estimating blood deposition time: towards alibi testing from human forensic stains with rhythmic biomarkers
- circadian rhythms, sleep, and metabolism
- circadian integration of metabolism and energetics
- identification of human plasma metabolites exhibiting time-of-day variation using an untargeted liquid chromatography-mass spectrometry metabolomic approach
- human blood metabolite timetable indicates internal body time
- eye color and the prediction of complex phenotypes from genotypes
- irisplex: a sensitive dna tool for accurate prediction of blue and brown eye colour in the absence of ancestry information
- the hirisplex system for simultaneous prediction of hair and eye colour from dna
- molecular classification of cancer: class discovery and class prediction by gene expression monitoring
- circadian rhythm and sleep disruption: causes, metabolic consequences, and countermeasures
- effects of insufficient sleep on circadian rhythmicity and expression amplitude of the human blood transcriptome
- plasma amino acid responses in humans to evening meals of differing nutritional composition
- biomarkers for nutrient intake with focus on alternative sampling techniques
- targeted metabolomics of dried blood spot extracts
ethical statement and informed consent: all procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the helsinki declaration and its later amendments or comparable ethical standards. informed consent was obtained from all individual participants included in the study.
key: cord- -ggoyd authors: valdano, eugenio; fiorentin, michele re; poletto, chiara; colizza, vittoria title: epidemic threshold in continuous-time evolving networks date: - - journal: nan doi: . /physrevlett. .
sha: doc_id: cord_uid: ggoyd current understanding of the critical outbreak condition on temporal networks relies on approximations (time scale separation, discretization) that may bias the results. we propose a theoretical framework to compute the epidemic threshold in continuous time through the infection propagator approach. we introduce the weak commutation condition allowing the interpretation of annealed networks, activity-driven networks, and time scale separation into one formalism. our work provides a coherent connection between discrete and continuous time representations applicable to realistic scenarios. contagion processes, such as the spread of diseases, information, or innovations [ ] [ ] [ ] [ ] [ ] , share a common theoretical framework coupling the underlying population contact structure with contagion features to provide an understanding of the resulting spectrum of emerging collective behaviors [ ] . a common keystone property is the presence of a threshold behavior defining the transition between a macroscopic-level spreading regime and one characterized by a null or negligibly small contagion of individuals. known as the epidemic threshold in the realm of infectious disease dynamics [ ] , the concept is analogous to the phase transition in nonequilibrium physical systems [ , ] , and is also central in social contagion processes [ , [ ] [ ] [ ] [ ] [ ] . a vast array of theoretical results characterize the epidemic threshold [ ] , mainly under the limiting assumptions of quenched and annealed networks [ , [ ] [ ] [ ] [ ] , i.e., when the time scale of the network evolution is much slower or much faster, respectively, than the dynamical process. the recent availability of data on time-resolved contacts of epidemic relevance [ ] has, however, challenged the time scale separation, showing it may introduce important biases in the description of the epidemic spread [ ] [ ] [ ] [ ] [ ] [ ] [ ] [ ] [ ] [ ] [ ] [ ] [ ] [ ] [ ] and in the characterization of the transition behavior [ , [ ] [ ] [ ] [ ] . departing from traditional approximations, a few novel approaches are now available that derive the epidemic threshold constrained to specific contexts of generative models of temporal networks [ , , , [ ] [ ] [ ] [ ] or considering generic discrete-time evolving contact patterns [ ] [ ] [ ] . in particular, the recently introduced infection propagator approach [ , ] is based on a matrix encoding the probabilities of transmission of the infective agent along time-respecting paths in the network. its spectrum allows the computation of the epidemic threshold at any given time scale and for an arbitrary discrete-time temporal network. leveraging an original mapping of the temporal network and epidemic spread in terms of a multilayer structure, the approach is valid in the discrete representation only, similarly to previous methods [ , , ] . meanwhile, a large interest in the study of continuously evolving temporal networks has developed, introducing novel representations [ , , , ] and proposing optimal discretization schemes [ , , ] that may, however, be inaccurate close to the critical conditions [ ] . most importantly, the two representations, continuous and discrete, of a temporal network remain disjoint in current network epidemiology. a discrete-time evolving network is indeed a multilayer object interpretable as a tensor in a linear algebraic representation [ ] .
this is clearly no longer applicable when time is continuous, as it cannot be expressed in the form of successive layers. hence, a coherent theoretical framework to bridge the gap between the two representations is still missing. in this letter, we address this issue by analytically deriving the infection propagator in continuous time. formally, we show that the dichotomy discrete time-continuous time translates into the separation between a linear algebraic approach and a differential one, and that the latter can be derived as the structural limit of the former. our approach yields a solution for the threshold of epidemics spreading on generic continuously evolving networks, and a closed form under a specific condition that is then validated through numerical simulations. in addition, the proposed novel perspective allows us to cast an important set of network classes into one single rigorous and comprehensive mathematical definition, including annealed [ , , ] and activity-driven [ , ] networks, widely used in both methodological and applied research. let us consider a susceptible-infected-susceptible (sis) epidemic model unfolding on a continuously evolving temporal network of n nodes. the sis model constitutes a basic paradigm for the description of epidemics with reinfection [ ]. infectious individuals (i) can propagate the contagion to susceptible neighbors (s) with rate λ, and recover to the s state with rate μ. the temporal network is described by the adjacency matrix a(t), with t ∈ [0, t]. we consider a discretized version of the system by sampling a(t) at discrete time steps of length Δt (fig. ). this yields a finite sequence of adjacency matrices {a_1, a_2, …, a_{t_step}}, where t_step = ⌊t/Δt⌋ and a_h = a(hΔt). the sequence approximates the original continuous-time network with increasing accuracy as Δt decreases. we describe the sis dynamics on this discrete sequence of static networks as a discrete-time markov chain [ , ]:

p_{h+1,i} = (1 − p_{h,i}) [1 − ∏_j (1 − λΔt a_{h+1,ji} p_{h,j})] + (1 − μΔt) p_{h,i},

where p_{h,i} is the probability that node i is in the infectious state at time step h, and μΔt (λΔt) is the probability that a node recovers (transmits the infection) during a time step Δt, for sufficiently small Δt. by mapping the system into a multilayer structure encoding both network evolution and diffusion dynamics, the infection propagator approach derives the epidemic threshold as the solution of the equation ρ[p(t_step)] = 1 [ , ], where ρ is the spectral radius of the following matrix:

p(t_step) = ∏_{h=1}^{t_step} [1 − μΔt + λΔt a_h],

with 1 denoting the identity matrix. the generic element p_{ij}(t_step) represents the probability that the infection can propagate from node i at the first time step to node j at time step t_step, when λ is close to λ_c and within the quenched mean-field approximation (locally treelike network [ ]). for this reason, p is denoted as the infection propagator. to compute the continuous-time limit of the infection propagator, we observe that p obeys the recursive relation p(h + 1) = p(h)[1 − μΔt + λΔt a_{h+1}]. expressed in continuous time and dividing both sides by Δt, the relation becomes

[p(t + Δt) − p(t)]/Δt = p(t)[−μ + λ a(t + Δt)],

which in the limit Δt → 0 yields a system of n² coupled differential equations whose components are

dp_{ij}(t)/dt = −μ p_{ij}(t) + λ Σ_k p_{ik}(t) a_{kj}(t).

the lhs of eq. ( ) is the derivative of p, which is well behaved if all entries are continuous functions of time. the entries a_{ij}(t) are, however, often binary, so that their evolution is a sequence of discontinuous steps. to overcome this, it is possible to approximate these steps with one-parameter families of continuous functions, compute the threshold, and then perform the limit of the parameter that recovers the discontinuity.
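the discrete-time threshold condition above can be evaluated directly; a minimal python sketch (ours, with illustrative names) computes ρ[p(t_step)] for a given sequence of adjacency matrices and locates λ_c by bisection, exploiting that the spectral radius is non-decreasing in λ for non-negative factors:

```python
import numpy as np

def propagator_radius(As, lam, mu, dt):
    # spectral radius of P(t_step) = prod_h (1 - mu*dt + lam*dt*A_h),
    # built from a given sequence of adjacency matrices As.
    n = As[0].shape[0]
    P = np.eye(n)
    for A in As:
        P = P @ (np.eye(n) * (1 - mu * dt) + lam * dt * A)
    return max(abs(np.linalg.eigvals(P)))

def threshold(As, mu, dt, lo=1e-4, hi=10.0, iters=60):
    # bisection on lam for rho[P] = 1; assumes the threshold lies in [lo, hi]
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if propagator_radius(As, mid, mu, dt) < 1 else (lo, mid)
    return 0.5 * (lo + hi)
```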
in order to check that our limit process correctly connects the discrete-time framework to the continuous-time one, let us now consider the standard markov chain formulation of the continuous dynamics. performing a linear stability analysis of the disease-free state [i.e., around $p_i(t) = 0$] in the quenched mean-field approximation [ , ], we obtain

$$\frac{dp_j(t)}{dt} = \sum_i p_i(t)\left[\lambda A_{ij}(t) - \mu\,\delta_{ij}\right].$$

we note that this expression is formally equivalent to the matrix equation derived above. in particular, each row of $P_{ij}$ satisfies it. furthermore, the initial condition $P_{ij}(0) = \delta_{ij}$ guarantees that, in varying the row $i$, we consider all vectors of the space basis as initial conditions. every solution $p(t)$ of the stability equation can therefore be expressed as a linear combination of the rows of $P(t)$. any fundamental matrix solution obeys this equation within the framework of the floquet theory of nonautonomous linear systems [ ]. the equivalence of the two equations shows that our limit of the discrete-time propagator encodes the dynamics of the continuous process.

it is important to note that the limit process entails a fundamental change of paradigm in the representation of the network structure and contagion process, where the linear algebraic representation suitable in discrete time turns into a differential geometrical description of the continuous-time flow. while network and spreading dynamics in discrete time are encoded in a multilayer adjacency tensor, the continuous-time description proposed here rests on a representation of the dynamical process in terms of a manifold whose points are adjacency matrices (or rank-2 tensors in the sense of ref. [ ]) corresponding to possible network and contagion states. the dynamics is then a curve on such a manifold, indicating which adjacency matrices to visit and in which order. in practice, we recover that the contagion process on a discrete temporal network, corresponding to an ordered subset of the full multilayer structure of ref. [ ], becomes in the limit $\Delta t \to 0$ a spreading process on a continuous temporal network represented through a one-dimensional ordered subset of a tensor field (formally the pullback on the evolution curve). the two frameworks, so far considered independent and mutually exclusive, thus merge coherently through a smooth transition in this novel perspective.

we now turn to solving the differential equation to derive an analytic expression of the infection propagator. by defining the rescaled transmissibility $\gamma = \lambda/\mu$, we can solve it in terms of a series in $\mu$ [ ], with $P^{(0)} = \mathbb{1}$ and under the assumption that $\gamma$ remains finite around the epidemic threshold for varying recovery rates. the recursion relation from which we derived the differential equation provides the full propagator for $t = T$; the series computed in $T$ therefore yields the infection propagator for the continuous-time adjacency matrix $A(t)$. its terms can be put in a compact form by using dyson's time-ordering operator $\mathcal{T}$ [ ], defined as $\mathcal{T} A(t_1) A(t_2) = A(t_1) A(t_2)\,\theta(t_1 - t_2) + A(t_2) A(t_1)\,\theta(t_2 - t_1)$, with $\theta$ being heaviside's step function. the expression of the propagator is thus

$$P(T) = \mathcal{T} \exp\!\left\{\int_0^T dt\, \mu\left[\gamma A(t) - \mathbb{1}\right]\right\}.$$

this represents an explicit general solution that can be computed numerically to arbitrary precision [ ]. the epidemic threshold in the continuous-time limit is then given by $\rho[P(T)] = 1$.
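as a numerical sketch of the statement that the time-ordered exponential can be evaluated to arbitrary precision, the following python snippet (our illustration; the switching network and all rates are placeholder assumptions) approximates the propagator by an ordered product of short-time matrix exponentials, applying factors chronologically as in the recursion; for undirected (symmetric) networks, reversing the ordering convention transposes the product and leaves the spectrum, and hence the threshold, unchanged.

import numpy as np
from scipy.linalg import expm

def propagator_continuous(A_of_t, T, dt, lam, mu):
    # ordered product of short-time exponentials approximating the
    # time-ordered exponential; accuracy increases as dt decreases
    n = A_of_t(0.0).shape[0]
    P = np.eye(n)
    steps = int(round(T / dt))
    for k in range(steps):
        P = P @ expm(dt * (lam * A_of_t(k * dt) - mu * np.eye(n)))
    return P

def A_of_t(t):
    # toy switching network: edge (0,1) in the first half, edge (1,2) afterwards
    A = np.zeros((3, 3))
    if t < 5.0:
        A[0, 1] = A[1, 0] = 1.0
    else:
        A[1, 2] = A[2, 1] = 1.0
    return A

P = propagator_continuous(A_of_t, T=10.0, dt=0.01, lam=0.3, mu=0.1)
print(max(abs(np.linalg.eigvals(P))))  # compare against 1 for the threshold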
we now discuss a special case where we can recover a closed-form solution for the propagator, and thus for the epidemic threshold. we consider continuously evolving temporal networks satisfying the following condition (weak commutation):

$$\left[A(t),\, \int_0^t dx\, A(x)\right] = 0,$$

i.e., the adjacency matrix at a certain time, $A(t)$, commutes with the aggregated matrix up to that time. in the introduced tensor field formalism, the weak commutation condition represents a constraint on the temporal trajectory, or equivalently, an equation of motion for $A(t)$. the condition implies that the order of factors in the propagator no longer matters. hence, we can simply remove the time-ordering operator $\mathcal{T}$, yielding

$$P(T) = \exp\!\left\{\mu T \left[\gamma \langle A \rangle - \mathbb{1}\right]\right\},$$

where $\langle A \rangle = \int_0^T dt\, A(t)/T$ is the adjacency matrix averaged over time. the resulting expression for the epidemic threshold for weakly commuting networks is then

$$\frac{\lambda_c}{\mu} = \frac{1}{\rho[\langle A \rangle]}.$$

this closed-form solution proves to be extremely useful, as a wide range of network classes satisfies the weak commutation condition. an important class is constituted by annealed networks [ ]. in the absence of dynamical correlations, the annealed regime leads to $\langle [A(x), A(y)] \rangle = 0$, as the time ordering of contacts becomes irrelevant. the weak commutation condition can thus be reinterpreted as $\langle [A(t), A(x)] \rangle_x = 0$, where the average is carried out over $x \in [0, t)$. for long enough $t$, $\int_0^t dx\, A(x)/t$ approximates well the expected adjacency matrix $\langle A \rangle$ of the annealed model, leading the annealed regime to satisfy the condition. this result thus provides an alternative mathematical framework for the conceptual interpretation of annealed networks in terms of weak commutation. originally introduced to describe disorder on quenched networks [ , ], annealed networks were mathematically described in probabilistic terms, with the probability of establishing a contact depending on the degree distribution $p(k)$ and the two-node degree correlations $p(k'|k)$ [ ]. here we show that temporal networks whose adjacency matrix $A(t)$ asymptotically commutes with the expected adjacency matrix are found to be in the annealed regime.

the condition can also be used to test the limits of the time scale separation approach, by considering a generic temporal network not satisfying the weak commutation condition. if $\mu$ is small, we can truncate the series of the infection propagator at the first order, $P = \mathbb{1} + \mu P^{(1)} + O(\mu^2)$, where $P^{(1)}(T) = T[\gamma \langle A \rangle - \mathbb{1}]$, to recover indeed the closed form above. the truncation thus provides a mathematical expression of the range of validity of the time-separation scheme for spreading processes on temporal networks, since temporal correlations can be disregarded when the network evolves much faster than the spreading process.

extending the result for annealed networks, we show that the weak commutation condition also holds for networks whose expected adjacency matrix depends on time through a scalar function (instead of being constant as in the annealed case), $\langle A(t) \rangle = c(t) \langle A(0) \rangle$. also in this case we have $\langle [A(x), A(y)] \rangle = 0$, so that the same treatment performed for annealed networks applies. examples are provided by global trends in activation patterns, as often considered in infectious disease epidemiology to model seasonal variations of human contact patterns (e.g., due to the school calendar) [ ].
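a minimal numerical sketch (our illustration, not the authors' code) of the weak commutation condition and the resulting closed-form threshold follows; the static test network is an assumption chosen because it trivially commutes with its own aggregate.

import numpy as np

def weak_commutation_residual(A_seq, dt):
    # largest frobenius norm of [A(t), int_0^t A(x) dx] along the trajectory
    cum = np.zeros_like(A_seq[0])
    worst = 0.0
    for A in A_seq:
        comm = A @ cum - cum @ A
        worst = max(worst, np.linalg.norm(comm))
        cum = cum + A * dt
    return worst

def closed_form_threshold(A_seq, dt, T):
    A_avg = sum(A_seq) * dt / T  # <A> = (1/T) int_0^T A(t) dt
    return 1.0 / max(abs(np.linalg.eigvals(A_avg)))  # gamma_c = 1 / rho[<A>]

A = np.array([[0., 1., 1.], [1., 0., 0.], [1., 0., 0.]])
seq = [A] * 100
print(weak_commutation_residual(seq, dt=0.1))       # ~ 0 for a static network
print(closed_form_threshold(seq, dt=0.1, T=10.0))   # = 1 / rho[A]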
when the time scale separation approach is not applicable, we find another class of weakly commuting temporal networks that is used as a paradigmatic network example for the study of contagion processes occurring on the same time scale as the contact evolution: the activity-driven model [ ]. it considers heterogeneous populations where each node $i$ activates according to an activity rate $a_i$, drawn from a distribution $f(a)$. when active, the node establishes $m$ connections with randomly chosen nodes lasting a short time $\delta$ ($\delta \ll 1/a_i$). since the dynamics lacks time correlations, the weak commutation condition holds, and the epidemic threshold can be computed from the closed form above. in the limit of large network size, it is possible to write the average adjacency matrix as $\langle A \rangle_{ij} = (m\delta/n)(a_i + a_j) + O(1/n^2)$. through row operations we find that the matrix has $\mathrm{rank}(\langle A \rangle) = 2$, and thus only two nonzero eigenvalues, $\alpha$ and $\sigma$, with $\alpha > \sigma$. we compute them through the traces of $\langle A \rangle$ ($\mathrm{tr}[\langle A \rangle] = \alpha + \sigma$ and $\mathrm{tr}[\langle A \rangle^2] = \alpha^2 + \sigma^2$) to obtain the expression of $\rho[\langle A \rangle] = \alpha$. the epidemic threshold becomes

$$\frac{\lambda_c \delta}{\mu} = \frac{1}{m\left(\langle a \rangle + \sqrt{\langle a^2 \rangle}\right)},$$

yielding the same result as ref. [ ], provided here that the transmission rate $\lambda$ is multiplied by $\delta$ to make it a probability, as in ref. [ ]. finally, we verify that for the trivial example of static networks, with an adjacency matrix constant in time, the closed form reduces immediately to the result of refs. [ , ].

we now validate our analytical prediction against numerical simulations on two synthetic models. the first is the activity-driven model with uniform activation rate $a_i = a$, fixed $m$, and average interactivation time $\tau = 1/a$, taken as the time unit of the simulations. the transmission parameter is the probability upon contact, $\lambda\delta$, and the model is implemented in continuous time. the second model is based on a bursty interactivation time distribution $p(\Delta t) \sim (\epsilon + \Delta t)^{-\beta}$ [ ], with a fixed exponent $\beta$ and $\epsilon$ tuned to obtain the same average interactivation time as before. we simulate an sis spreading process on the two networks with four different recovery rates $\mu$, ranging from a recovery time that is orders of magnitude longer than the time scale $\tau$ of the networks (slow disease) down to a recovery time equal to $\tau$ (fast disease). we compute the average simulated endemic prevalence for specific values of $\lambda$, $\mu$ using the quasistationary method [ ] and compare the threshold computed with the closed form with the simulated critical transition from extinction to the endemic state. as expected, we find the closed form to hold for the activity-driven model at all time scales of the epidemic process (fig. ), as the network lacks temporal correlations. the agreement with the transition observed in the bursty model, however, is recovered only for slow diseases, as at those time scales the network is found in the annealed regime. when network and disease time scales become comparable, the weakly commuting approximation no longer holds, as burstiness results in dynamical correlations in the network evolution [ ].
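the trace computation for the rank-2 average adjacency matrix can be verified numerically; the sketch below (our illustration; the pareto-distributed activities and all parameter values are assumptions) compares the eigenvalue obtained from the two traces with the closed-form spectral radius and with direct diagonalization.

import numpy as np

rng = np.random.default_rng(0)
N, m, delta = 2000, 1, 0.01
a = rng.pareto(2.5, size=N) + 0.01  # heterogeneous activities drawn from f(a)

A_avg = (m * delta / N) * (a[:, None] + a[None, :])  # <A>_ij = (m*delta/N)(a_i + a_j)

t1 = np.trace(A_avg)           # alpha + sigma
t2 = np.trace(A_avg @ A_avg)   # alpha^2 + sigma^2
alpha = (t1 + np.sqrt(2.0 * t2 - t1 ** 2)) / 2.0

rho_formula = m * delta * (a.mean() + np.sqrt((a ** 2).mean()))
rho_numeric = max(abs(np.linalg.eigvalsh(A_avg)))
print(alpha, rho_formula, rho_numeric)  # the three values should agree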
our theory offers a novel mathematical framework that rigorously connects the discrete-time and continuous-time critical behaviors of spreading processes on temporal networks. it uncovers a coherent transition from an adjacency tensor to a tensor field resulting from a limit performed on the structural representation of the network and contagion process. we derive an analytic expression of the infection propagator in the general case, which assumes a closed-form solution in the introduced class of weakly commuting networks. this allows us to provide a rigorous mathematical interpretation of annealed networks, encompassing the different definitions historically introduced in the literature. this work also provides the basis for important theoretical extensions, assessing, for example, the impact of bursty activation patterns or of adaptive dynamics in response to the circulating epidemic. finally, our approach offers a tool for applicative studies on the estimation of the vulnerability of temporal networks to contagion processes in many real-world scenarios for which the discrete-time assumption would be inadequate.

we thank luca ferreri and mason porter for fruitful discussions. this work is partially sponsored by the ec-health contract no. (predemics) and the anr contract no. anr- -monu- (harmsflu) to v. c., and the ec-anihwa contract no. anr- -anwa- - (liveepi) to e. v., c. p., and v. c. *present address: departament d'enginyeria informàtica i matemàtiques.

references:
modeling infectious diseases in humans and animals
generalization of epidemic theory: an application to the transmission of ideas
epidemics and rumours
epidemic spreading in scale-free networks
a simple model of global cascades on random networks
modelling dynamical processes in complex socio-technical systems
contact interactions on a lattice
on the critical behavior of the general epidemic process and dynamical percolation
cascade dynamics of complex propagation
propagation and immunization of infection on general networks with both homogeneous and heterogeneous components
dynamics of rumor spreading in complex networks
kinetics of social contagion
critical behaviors in contagion dynamics
epidemic processes in complex networks
resilience of the internet to random breakdowns
spread of epidemic disease on networks
epidemic spreading in real networks: an eigenvalue viewpoint
discrete time markov chain approach to contact-based disease spreading in complex networks
modern temporal network theory: a colloquium
impact of non-poissonian activity patterns on spreading processes
disease dynamics over very different time-scales: foot-and-mouth disease and scrapie on the network of livestock movements in the uk
epidemic thresholds in dynamic contact networks
how disease models in static networks can fail to approximate disease in dynamic networks
representing the uk's cattle herd as static and dynamic networks
impact of human activity patterns on the dynamics of information diffusion
small but slow world: how network topology and burstiness slow down spreading
dynamical strength of social ties in information spreading
high-resolution measurements of face-to-face contact patterns in a primary school
dynamical patterns of cattle trade movements
multiscale analysis of spreading in a large communication network
bursts of vertex activation and epidemics in evolving networks
interplay of network dynamics and heterogeneity of ties on spreading dynamics
predicting and controlling infectious disease epidemics using temporal networks (f1000prime reports)
the dynamic nature of contact networks in infectious disease epidemiology
activity driven modeling of time varying networks
temporal percolation in activity-driven networks
contrasting effects of strong ties on sir and sis processes in temporal networks
monogamous networks and the spread of sexually transmitted diseases
epidemic dynamics on an adaptive network
effect of social group dynamics on contagion
epidemic threshold and control in a dynamic network
virus propagation on time-varying networks: theory and immunization algorithms
analytical computation of the epidemic threshold on temporal networks
infection propagator approach to compute epidemic thresholds on temporal networks: impact of immunity and of limited temporal resolution
machine learning: ecml
effects of time window size and placement on the structure of an aggregated communication network
epidemiologically optimal static networks from temporal network data
limitations of discrete-time approaches to continuous-time contagion dynamics
mathematical formulation of multilayer networks
langevin approach for the dynamics of the contact process on annealed scale-free networks
thresholds for epidemic spreading in networks
controlling contagion processes in activity driven networks
beyond the locally treelike approximation for percolation on real networks
a course of modern analysis
some results in floquet theory, with application to periodic epidemic models
the magnus expansion and some of its applications
the radiation theories of tomonaga, schwinger, and feynman
optimal disorder for segregation in annealed small worlds
diffusion in scale-free networks with annealed disorder
recurrent outbreaks of measles, chickenpox and mumps: i. seasonal variation in contact rates
epidemic thresholds of the susceptible-infected-susceptible model on networks: a comparison of numerical and theoretical results

key: cord- -buctm o authors: mullick, shantanu; malshe, ashwin; glady, nicolas title: modeling the costs of trade finance during the financial crisis of – : an application of dynamic hierarchical linear model date: - - journal: information processing and management of uncertainty in knowledge-based systems doi: . / - - - - _ sha: doc_id: cord_uid: buctm o

the authors propose a dynamic hierarchical linear model (dhlm) to study the variations in the costs of trade finance over time and across countries in dynamic environments such as the global financial crisis of – . the dhlm can cope with the challenges that a dynamic environment entails: nonstationarity, parameters changing over time, and cross-sectional heterogeneity. the authors employ a dhlm to examine how the effects of four macroeconomic indicators (gdp growth, inflation, trade intensity, and stock market capitalization) on trade finance costs varied over a period of five years from to across countries. we find that the effect of these macroeconomic indicators varies over time, and that most of this variation is present in the years preceding and succeeding the financial crisis. in addition, the trajectories of the time-varying effects of gdp growth and inflation support the "flight to quality" hypothesis: the cost of trade finance reduces in countries with high gdp growth and low inflation during the crisis. the authors also note the presence of country-specific heterogeneity in some of these effects. the authors propose extensions to the model and discuss its alternative uses in different contexts.

trade finance consists of borrowing using trade credit as collateral and/or the purchase of insurance against the possibility of trade credit defaults [ , ]. according to some estimates, more than % of trade transactions involve some form of credit, insurance, or guarantee [ ], making trade finance extremely critical for smooth trade. after the global financial crisis of - , the limited availability of international trade finance has emerged as a potential cause for the sharp decline in global trade [ , , ] (see [ ] for counter evidence). as a result, understanding how trade finance costs varied over the period around the financial crisis has become critical for policymakers seeking to ensure adequate availability of trade finance during crisis periods in order to mitigate the severity of the crisis.
in addition, as the drivers of trade finance may vary across countries, it is important to account for heterogeneity while studying the effect of these drivers on trade finance [ ]. a systematic study of the drivers of trade finance costs can be challenging: modeling the effects of these drivers in dynamic environments (e.g., a financial crisis) requires a method that can account for nonstationarity and changes in parameters over time, as well as for cross-sectional heterogeneity [ ]. (studies that used surveys, such as those conducted by international organizations like the world trade organization (wto), the world bank (wb), and the international monetary fund (imf), to understand the impact of the financial crisis on trade finance costs [ ] are also susceptible to biases present in survey methods. first, survey responses have subjective components; if this subjectivity is common across the survey respondents, a strong bias will be present in their responses. for example, managers from the same country tend to exhibit a common bias in their responses [ ]. second, survey responses are difficult to verify: managers may over- or under-estimate their trade finance costs systematically, depending on the countries where their firms operate. finally, survey research is often done in one cross-section of time, making it impossible to capture variation over time.)

first, nonstationarity is an important issue in time-series analysis of observational data [ , ]. the usual approach to address nonstationarity requires filtering the data in the hope of making the time series mean- and covariance-stationary; methods like vector autoregression (var), for instance, often filter data to make it stationary [ , , ]. however, methods for filtering time series, such as first differences, can lead to distortion in the spectrum, thereby affecting inferences about the dynamics of the system [ ]. further, filtering the data to make the time series stationary can (i) hinder model interpretability, and (ii) emphasize noise at the expense of signal [ ].

second, the effect of the drivers of trade finance costs changes over time [ ]. these shifts happen due to technological advances, regulatory changes, and the evolution of the banking sector's competitive environment, among others. as we are studying the - global financial crisis, many drivers of the costs may have different effects during the crisis compared to the pre-crisis period. for example, during the crisis many lenders may prefer borrowers of the highest quality, thus exhibiting a "flight to quality" [ ]. to capture changes in model parameters over time, studies typically use either (1) moving windows to provide parameter paths, or (2) a before-and-after analysis. however, both of these methods suffer from certain deficiencies. models that yield parameter paths [ , ] by using moving windows to compute changes in parameters over time lead to inefficient estimates since, each time, only a subset of the data is analyzed. these methods also present a dilemma in terms of the selection of the window length, as short windows yield unreliable estimates while long windows imply coarse estimates and may also induce artificial autocorrelation. using before-and-after analysis [ , , ] to study parameter changes over time implies estimating different models before and after the event. the 'after' model is estimated using data from after the event under the assumption that these data represent the new and stabilized situation. a disadvantage of this approach is the loss in
statistical efficiency that results from ignoring effects present in part of the data. further, this approach assumes that the underlying adjustment (due to events such as the financial crisis) occurs instantaneously. however, in practice, it may take time for financial markets to adjust before they reach a new equilibrium. this also highlights a drawback of the approach: it assumes time-invariant parameters for the 'before' model as well as for the 'after' model.

third, the effects of the drivers of trade finance cost may vary across countries [ ], and we need to account for this heterogeneity. a well-accepted way to incorporate heterogeneity is to use hierarchical models that estimate country-specific effects of the drivers of trade finance cost [ ]. however, as hierarchical models are difficult to embed in time-series analysis [ ], studies tend to aggregate data across cross-sections, which leads to aggregation biases in the parameter estimates [ ].

nonstationarity, time-varying parameters, and cross-sectional heterogeneity render the measurement and modeling of factors that impact the dependent variable of interest (in our case, the cost of trade finance) challenging in dynamic environments such as a financial crisis. therefore, we propose a dynamic hierarchical linear model (dhlm) that addresses all three concerns and permits us to explain the variations in trade finance costs over several years, while also allowing us to detect any variation across countries, if present. our dhlm consists of three levels of equations. at the highest level, the observation equation specifies, for each country in each year, the relationship between trade finance costs and a set of macroeconomic variables (e.g., inflation in the country). the coefficients of the predictors in the observation equation are allowed to vary across cross-sections (i.e., countries) and over time. next, in the pooling equation we specify the relationship between the country-specific time-varying coefficients (i.e., parameters) from the observation equation and a new set of parameters that vary over time but are common across countries. thus, the pooling equation enables us to capture the "average" time-varying effect of the macroeconomic variables on trade finance cost. finally, this "average" effect can vary over time and is likely to depend on its level in the previous period. the evolution equation, which is the lowest level of the dhlm, captures these potential changes in the "average" effects of the macroeconomic variables in a flexible way through a random walk.

we employ our dhlm to study how the effects of four macroeconomic variables (gdp growth, trade intensity, inflation, and stock market capitalization) on trade finance costs varied across nations over a period of five years from to . although the objective of our paper is to introduce a model that can address the different challenges outlined earlier, our model estimates provide several interesting insights. we find that the effect of the macroeconomic indicators on the cost of trade finance varies over time and that most of this variation is present in the years preceding and succeeding the financial crisis. this is of interest to policymakers in deciding how long to implement interventions designed to ease the cost of trade finance.
in addition, the trajectory of the time-varying effects of gdp growth and inflation is consistent with the "flight to quality" story [ ]: during the crisis, the cost of trade finance reduces in countries that have high gdp growth and low inflation. the time-varying effect of trade intensity is also consistent with our expectations, but the time-varying effect of market capitalization is counter-intuitive. finally, we also note heterogeneity in the trajectories of the country-specific time-varying effects, primarily for the effects of stock market capitalization and trade intensity.

this research makes two contributions. first, we introduce a new model to the finance literature to study the evolution of the drivers of trade finance costs over time in dynamic environments such as a financial crisis, while also allowing the impact of these drivers to be heterogeneous across countries. our modeling approach addresses concerns related to nonstationarity, time-varying model parameters, and cross-sectional heterogeneity that are endemic to time-series analysis of dynamic environments. our model can be adopted to study the evolution of various other variables such as financial services costs and global trade. our model can also be extended to a more granular level to incorporate firm-level heterogeneity by using a second pooling equation. doing so can pave the way to identifying the characteristics of companies that may need assistance during a financial crisis. thus, our research can remove subjectivity in extending benefits to affected exporters and importers; even large-scale surveys may not be able to provide such granular implications for policymakers.

second, our research has substantive implications. using a combination of data from loan pricing corporation's dealscan database and the world bank, we complement the finance literature by empirically studying the evolution of the drivers of trade finance cost. we find that the impact of these drivers varies over time, with a large part of the variation present in the years preceding and succeeding the financial crisis. to the best of our knowledge, we are the first to study the time-varying impact of these macroeconomic drivers on trade finance, and this is of use to policymakers in deciding how long to extend benefits to parties affected by the crisis.

the paper proceeds as follows. in the first section, we describe the dhlm and provide the theoretical underpinnings necessary to estimate the model. next, we describe the data and variables used in the empirical analysis. in the fourth section, we provide a detailed discussion of the results. we conclude the paper with a discussion of the findings.

we specify the trade finance cost of a country as a function of country-specific macroeconomic variables and country-specific time-varying parameters using a dhlm. the dhlm has been used in previous studies in marketing and statistics [ , , , , ] to estimate time-varying parameters at the disaggregate level (e.g., at the level of a brand or store). a dhlm is a combination of dynamic linear models (dlm), which estimate time-varying parameters at an aggregate level [ , ], and a hierarchical bayesian (hb) model, which estimates time-invariant parameters at the disaggregate level [ ]. the dhlm and the hb model both have a hierarchical structure, which permits us to pool information across different countries to arrive at overall aggregate-level inferences.
shrinking the country-specific parameters toward an "average" effect of the key variables across countries has been used by other researchers to estimate country-specific tourism marketing elasticities [ ] and store-level price elasticities [ ]. we specify the trade finance cost of a country as a function of the country-level variables gdp growth, inflation, stock market capitalization, and trade intensity:

$$\mathrm{tradefinancecost}_{it} = a_{it} + b_{it}\,\mathrm{gdpgrowth}_{it} + c_{it}\,\mathrm{inflation}_{it} + d_{it}\,\mathrm{stockmarketcap}_{it} + f_{it}\,\mathrm{tradeintensity}_{it} + u_{it},$$

where $\mathrm{tradefinancecost}_{it}$ is the cost of trade finance of country $i$ at time $t$; $\mathrm{gdpgrowth}_{it}$, $\mathrm{inflation}_{it}$, $\mathrm{stockmarketcap}_{it}$, and $\mathrm{tradeintensity}_{it}$ are the gdp growth, inflation, stock market capitalization, and trade intensity of country $i$ at time $t$; $a_{it}$, $b_{it}$, $c_{it}$, $d_{it}$, and $f_{it}$ are country-specific time-varying coefficients; and $u_{it}$ is the error term.

in order to specify the equations in a compact manner, we cast this equation as the observation equation of the dhlm. a dhlm also consists of a pooling equation and an evolution equation; we specify the three equations below. the observation equation is

$$y_t = F_t\,\theta_t + v_t,$$

where an observation $y_t$ is a vector containing the country-specific trade finance costs at time $t$, and $F_t$ is a matrix containing the country-specific macroeconomic variables at time $t$. the vector of parameters $\theta_t$ contains all the country-specific time-varying parameters defined above: $a_{it}$, $b_{it}$, $c_{it}$, $d_{it}$, and $f_{it}$. the error term $v_t$ is multivariate normal and is allowed to have a heteroskedastic variance $\sigma^2_{v,i}$, with $I$ an identity matrix of appropriate dimension. we specify $y_t$, $F_t$, and $\theta_t$ similarly to [ , ]. the pooling equation is

$$\theta_t = \tilde{F}_t\,\Theta_t + \nu_t,$$

where we specify the country-specific time-varying parameters $\theta_t$ as a function of a new set of parameters $\Theta_t$ that vary only over time. this hierarchical structure pools information across countries at every point in time, so that $\Theta_t$ represents the "average" time-varying effect. hence, $\tilde{F}_t$ is a matrix of 0's and 1's that specifies the relationship between the average time-varying parameters $\Theta_t$ and the country-specific time-varying parameters $\theta_t$. the error term $\nu_t$ is multivariate normal, with $I$ an identity matrix of appropriate dimension. finally, we specify how the average time-varying parameters $\Theta_t$ evolve over time. following the dynamic linear models (dlm) literature [ ], we model the evolution of these parameters as a random walk. the evolution equation is

$$\Theta_t = G\,\Theta_{t-1} + \omega_t,$$

where the random walk specification requires $G$ to be an identity matrix, and $\omega_t$ is a multivariate normal error, with $I$ an identity matrix of appropriate dimension.

we compute the full joint posterior of the set of parameters ($\theta_t$, $\Theta_t$, and the variance parameters) conditional on the observed data. to generate the posteriors of the parameters we use the gibbs sampler [ ]. in the interest of space, we refer the reader to [ ] for more details. as a robustness check, we estimate our dhlm on simulated data to check whether our sampler is able to recover the parameters. the model we use to simulate the data is similar to the one [ ] used for their simulation study. we find that our sampler performs well and recovers the parameters used to simulate the data; space constraints prevent us from including further details.
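to make the three-level structure concrete, the following python sketch (our illustration; all dimensions and variance values are assumptions, not the paper's settings) simulates data from the evolution, pooling, and observation equations in turn; estimation via the gibbs sampler is omitted.

import numpy as np

rng = np.random.default_rng(1)
N, T, K = 8, 5, 5  # countries, years, coefficients (intercept + 4 macro vars)

# evolution equation: random walk for the "average" effects Theta_t
Theta = np.zeros((T, K))
Theta[0] = rng.normal(0.0, 1.0, K)
for t in range(1, T):
    Theta[t] = Theta[t - 1] + rng.normal(0.0, 0.2, K)

y = np.zeros((T, N))
for t in range(T):
    # pooling equation: country-specific deviations around Theta_t
    theta_t = Theta[t] + rng.normal(0.0, 0.3, (N, K))
    # observation equation: costs as a function of macro variables
    X = np.column_stack([np.ones(N), rng.normal(0.0, 1.0, (N, K - 1))])
    y[t] = (X * theta_t).sum(axis=1) + rng.normal(0.0, 0.5, N)

print(y.round(2))  # simulated T x N panel of trade finance costs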
for the empirical tests, the data are derived from two sources. the information on trade finance costs is obtained from loan pricing corporation's dealscan database, and the information on macroeconomic variables for the countries is obtained from the world bank. we briefly describe the data sources.

dealscan provides detailed information on loan contract terms including the spread above libor, maturity, and covenants since . the primary sources of data for dealscan are attachments on sec filings, reports from loan originators, and the financial press [ ]. as it is one of the most comprehensive sources of syndicated loan data, prior literature has relied on it to a large extent [ , , , , ]. the dealscan data record, for each year, the loan deals a borrowing firm makes; in some instances, a borrower firm may make several loan deals in a year. to focus on trade finance, we limit the sample to only those loans whose purpose was identified by dealscan as one of the following: trade finance, cp backup, pre-export, and ship finance. our trade finance costs are measured as the loan price for each loan facility, which equals the loan facility's at-issue yield spread over libor (in basis points). due to the limited number of observations, we do not differentiate between different types of loans; instead, the trade finance costs are averaged across different types of loans such as revolver loans, term loans, and fixed-rate bonds.

we use the world bank data to obtain information on the economic and regulatory climate, and on the extent of development of the banking sector, of the countries where the borrowing firms are headquartered. the economic and regulatory climate of a country is captured by gdp growth, inflation, stock market capitalization, and trade intensity. countries with high gdp growth are likely to face lower costs of trade finance, particularly during the financial crisis: as high gdp growth is an indicator of the health of the economy, during the financial crisis lenders are likely to move their assets to these economies. countries with higher inflation will likely have a higher cost of trade finance, as the rate of return on the loans will incorporate the rate of inflation. we include stock market capitalization scaled by gdp as a proxy for capital market development in the country; countries with higher stock market capitalization are likely to have more developed financial markets, so the cost of trade finance in such markets is likely to be lower. finally, we include total trade for the country scaled by the country's gdp as a measure of trade intensity. we expect that countries with a higher trade intensity will face a higher trade finance cost, since a greater reliance on trade may make a country more risky during a crisis.

as our objective is to study the phenomenon at the national level, we need to merge these two data sets. as our data from dealscan contain trade finance costs at the firm level in a given year, we use the average of the trade finance costs at the level of a borrowing firm's home country to derive country-specific trade finance costs. this permits us to merge the data from dealscan with macroeconomic data from the world bank. our interest is in modelling trade finance costs around the financial crisis of - . therefore, we use a -year time series starting in and ending in . this gives us a reasonable window that contains pre-crisis, during-crisis, and post-crisis periods. while we would like to use a longer window, we are constrained by the number of years for which the data are available to us from dealscan.
after merging the two databases, our final sample consists of eight countries for which we have information on trade finance costs as well as macroeconomic indicators for all five years: brazil, ghana, greece, russia, turkey, ukraine, the united kingdom (uk), and the united states (usa). we report the descriptive statistics for the sample in table . average trade finance costs are approximately basis points above libor. mean gdp growth is just . %, reflecting the lower growth during the financial crisis. although average inflation is at . %, we calculated the median inflation to be a moderate . %. on average, the stock market capitalization/gdp ratio is around % while the trade/gdp ratio is around %. more detailed summary statistics for the trade finance costs are depicted in fig. , which captures the variation in trade finance cost over time and across countries. we find that countries experienced a large increase in trade finance costs from to . also, except for greece, these costs came down in from their peak in . this suggests that the crisis impacted trade finance costs uniformly in our sample. we also see heterogeneity across countries in the manner in which these costs evolve over time.

we also tested for multicollinearity among the independent variables: gdp growth, inflation, stock market capitalization, and trade intensity. we specified a panel data regression model (i.e., without time-varying parameters) and calculated the variance inflation factors (vifs). the vifs we get for gdp growth, inflation, stock market capitalization, and trade intensity are . , . , . and . respectively. as the vifs are below the conventional cutoff, we can conclude that multicollinearity is not a concern [ ].
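a minimal sketch of this vif check follows (our illustration; the column names and the simulated data are assumptions standing in for the actual panel).

import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(2)
df = pd.DataFrame(rng.normal(size=(40, 4)),
                  columns=["gdp_growth", "inflation", "mkt_cap", "trade_intensity"])
X = sm.add_constant(df)
vifs = {col: variance_inflation_factor(X.values, i)
        for i, col in enumerate(X.columns) if col != "const"}
print(vifs)  # values below the usual cutoff indicate no multicollinearity concern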
in this section, we present the main results based on our dhlm, and subsequently compare our model to the benchmark hb model in which the parameters do not vary over time. we estimate our model using the gibbs sampler [ ]. we use , iterations and retain the last , iterations for computing the posterior, keeping every tenth draw. we verified the convergence of our gibbs sampler using standard diagnostics: (1) we plotted the autocorrelation plots of the parameters and see that the autocorrelation goes to zero [ ], and (2) we plot and inspect the posterior draws of our model parameters and find that they resemble a "fat, hairy caterpillar" that does not bend [ ].

we first present the estimates for the pooling equation ($\Theta_t$), which are summarized in fig. . these estimates represent the "average" effect across countries of the four macroeconomic variables: gdp growth, inflation, stock market capitalization, and the trade/gdp ratio. in fig. , each of the four panels depicts the "average" effect, over time, of one macroeconomic variable on the cost of trade finance; the dotted lines depict the % confidence interval (ci). we discuss these "average" time-varying effects in the subsequent paragraphs.

we see that for all four macroeconomic variables, the effects vary over time. in addition, a large part of the variation occurs between and , the span during which the financial crisis happened. these estimates will interest policymakers, as they imply that interventions to alleviate the impact of the crisis should start before its onset and should continue for some time after it has blown over. we find that gdp growth has a negative effect on trade finance costs, and this effect becomes more negative over time, especially during the years to . this implies that countries with high gdp growth faced monotonically decreasing costs of trade finance in the years before and during the financial crisis, which can be explained by the "flight to quality" hypothesis advanced in the finance literature [ ]. inflation has a positive effect on the cost of trade finance, and this effect becomes more positive over time, especially during to , the year preceding the crisis and the year of the crisis. this implies that countries with high inflation faced monotonically increasing costs of trade finance, which is also consistent with the "flight to quality" theory. stock market capitalization has a positive effect on the cost of trade finance. this effect seems somewhat counterintuitive, as we used stock market capitalization as a proxy for the development of financial markets, and one would expect trade finance costs to decrease as financial markets become more developed. we note that the trade/gdp ratio has a positive effect on the cost of trade finance, and this effect becomes more positive between the years to , similar to the pattern we noticed for the effects of inflation. since this variable measures the intensity of trade of a country, our results indicate that, during the financial crisis, a greater reliance on trade leads to higher costs of trade finance. this is expected, since higher reliance on trade may make a country riskier in a financial crisis; countries with higher trade intensity are also exposed to higher counterparty risks.

our model can also estimate the country-specific time-varying parameters of the observation equation ($\theta_t$). these estimates underscore the advantage of using a model such as ours: with only a short panel of observations of our dependent variable, we are able to obtain estimates that are both country-specific and time-varying. we note some heterogeneity in the country-specific trajectories of the effects of stock market capitalization and trade intensity. for example, we see that for some countries, such as ghana, russia, and greece, the effect of the trade/gdp ratio on the cost of trade finance witnesses a steeper increase compared to other countries, such as the usa and ukraine, in to , the year of the crisis; we are unable to present these results due to space constraints. however, these findings offer directions for future research.

to assess model fit, we compare the forecasting accuracy of our proposed model to that of a benchmark hierarchical bayesian (hb) model with time-invariant parameters. the hb specification mirrors the dhlm, with the major difference that the parameters do not vary over time: the dependent variables ($y$) and independent variables are the same as those in the proposed model, and a second-stage design matrix relates the country-specific parameters to the pooled, common parameters. we compare model fit by computing out-of-sample one-step-ahead forecasts for our proposed model and the benchmark model, and calculate the mean absolute percentage error (mape), a standard fit statistic for model comparison [ , ]. we find that the mape of our proposed model is . , while that of the benchmark hb model is . . thus, our proposed model forecasts more accurately than the benchmark hb model.
in this research, we attempt to shed light on the following question: how can we develop a model that permits us to examine variations in trade finance costs over time in dynamic environments (such as a financial crisis), while also accounting for possible variations across countries? we addressed this question by proposing a dhlm that can cope with the three challenges present when modeling data from dynamic environments: nonstationarity, changes in parameters over time, and cross-sectional heterogeneity. our model estimates detect variation over time in the effects of the macroeconomic drivers of trade finance, which is of interest to policymakers in deciding when and for how long to schedule interventions to alleviate the impact of a financial crisis. further, the trajectories of the time-varying effects of the macroeconomic indicators are in line with our expectations. we also note some degree of country-specific heterogeneity in the manner in which these drivers evolve over time, and a detailed scrutiny of these findings may prove fertile ground for future research.

the dhlm can be easily scaled up, thereby allowing us to extend our analysis. first, we can add another level in the model hierarchy by specifying a second pooling equation. this would permit us to study the problem at the firm level, since evidence suggests that, during the crisis, firms from developing countries and financially vulnerable sectors faced higher trade finance costs [ , ], and one can use recent nlp approaches [ ] to gather firm information across different data sources. second, more macroeconomic variables can be added in the observation equation. in addition, our model can be used to study other contexts that face dynamic environments, such as financial services costs and global trade. the suitability of our model for dynamic environments also implies that it can be used to study the impact of the recent coronavirus (covid- ) on financial activities, since reports from the european central bank have suggested that the virus can lead to economic uncertainty. in many ways, the way the virus impacts the economy is similar to that of the financial crisis: there is no fixed date on which the intervention starts and ends (unlike, for example, the imposition of a new state tax), and its impact may vary over time as the virus, as well as people's reaction to it, gains in strength and then wanes; it would be interesting to model these time-varying effects to see how they evolve over time.

references:
aggregate risk and the choice between cash and lines of credit
a theory of domestic and international trade finance
liquidity mergers
exports and financial shocks
the long-term effect of marketing strategy on brand sales
building brands
boosting trade finance in developing countries: what link with the wto? (working paper, ssrn elibrary)
response styles in marketing research: a cross-national investigation
do depositors discipline banks and did government actions during the recent crisis reduce this discipline? an international perspective
financial institutions and markets across countries and over time: data and analysis
the emergence of market structure in new repeat-purchase categories: the interplay of market share and retailer distribution
international and domestic collateral constraints in a model of emerging market crises
off the cliff and back? credit conditions and international trade during the global financial crisis
using market-level data to understand promotion effects in a nonlinear model
improving consumer mindset metrics and shareholder value through social media: the different roles of owned and earned media
debtor-in-possession financing and bankruptcy resolution: empirical evidence
the persistence of marketing effects on sales
sample-based approaches to calculating marginal densities
investigating the relationship between the content of online word of mouth, advertising, and brand performance
econometric analysis
decomposing the great trade collapse: products, prices, and quantities in the - crisis
time series analysis
foreign banks in syndicated loan markets
combining time series and cross sectional data for the analysis of dynamic marketing systems (working paper)
product line extensions and competitive market interactions: an empirical analysis
dynamic hierarchical models: an extension to matrix-variate observations
the collapse of international trade during the - crisis: in search of the smoking gun
financial intermediation and growth: causality and causes
the bugs book: a practical introduction to bayesian analysis
trade and trade finance developments in developing countries post
creating micro-marketing pricing strategies using supermarket scanner data
the long-term impact of promotion and advertising on consumer brand choice
price elasticity variations across locations, time and customer segments: an application to the self-storage industry
reducing food waste through digital platforms: a quantification of cross-side network effects
modeling and forecasting the sales of technology products
consumer bankruptcies and the bankruptcy reform act: a time-series intervention analysis, -
term based semantic clusters for very short text classification
who benefits from store brand entry?
marketing budget allocation across countries: the role of international business cycles
information asymmetry and financing arrangements: evidence from syndicated loans
the dynamic effect of innovation on market structure
bayesian forecasting and dynamic models
predicting nitrogen and chlorophyll content and concentrations from reflectance spectra ( - nm) at leaf and canopy scales

key: cord- -uilzmmxu authors: mo, baichuan; feng, kairui; shen, yu; tam, clarence; li, daqing; yin, yafeng; zhao, jinhua title: modeling epidemic spreading through public transit using time-varying encounter network date: - - journal: nan doi: nan sha: doc_id: cord_uid: uilzmmxu

passenger contact in public transit (pt) networks can be a key mediator in the spreading of infectious diseases. this paper proposes a time-varying weighted pt encounter network to model the spreading of infectious diseases through pt systems. social activity contacts at both local and global levels are also considered. we select the epidemiological characteristics of coronavirus disease (covid- ) as a case study, along with smart card data from singapore, to illustrate the model at the metropolitan level. a scalable and lightweight theoretical framework is derived to capture the time-varying and heterogeneous network structures, which makes it possible to solve the problem at the whole-population level with low computational costs. different control policies from both the public health side and the transportation side are evaluated. we find that people's preventative behavior is one of the most effective measures to control the spreading of epidemics.
from the transportation side, partial closure of bus routes helps to slow down but cannot fully contain the spreading of epidemics. identifying "influential passengers" using the smart card data and isolating them at an early stage can also effectively reduce epidemic spreading.

infectious diseases spread through social contacts, such as at schools (salathé et al., ; litvinova et al., ) or conferences (stehlé et al., ). past studies have shown that human mobility networks, like air transportation (colizza et al., ; balcan et al., ) or waterways (gatto et al., ), can transport pathogens or contagious individuals to widespread locations, leading to the outbreak of epidemics. recently, the outbreak of coronavirus disease confirmed the strong connection between human mobility networks and disease dynamics. the first case of covid- was reported in wuhan, china, at the beginning of dec. , and the disease then quickly spread to the rest of china through the airline and high-speed rail networks during the spring festival travel season (wu et al., ). besides the transmission of pathogens to destination local communities via the human mobility network, the densely populated urban public transit (pt) network may also become a key mediator in the spreading of influenza-like epidemics, with public transport carriers being the location of transmission (sun et al., ).

the pt system in large metropolitan areas plays a key role in serving the majority of urban commuting demand between highly frequented locations, as human trajectories present a high degree of spatiotemporal regularity following simple, reproducible patterns (gonzalez et al., ). by the end of , the annual patronage of urban metro systems worldwide had increased from billion in to billion, and in asia the systems carry more than billion passengers a year (international association of public transport (uitp), ). the urban pt system is often framed as a key solution for building sustainable cities, with concerns of environmental, economic, and social effectiveness (miller et al., ). but the indoor environment created by crowded metro carriages or buses can also make it easy for an infected individual to transmit the pathogen to others via droplet or airborne routes (xie et al., ; yang et al., ).

in recent years, scholars have begun to turn their attention to the spreading of epidemics through urban pt networks. rooted in people's daily behavioral regularity, individuals with repeated encounters in the pt network are found to be strongly connected over time, resulting in a dense contact network across the city (sun et al., ). such mobility features create great risks for outbreaks of infectious diseases to spread through bus and metro networks to the whole metropolitan area (sun et al., ; liu et al., ). based on the contact network developed by sun et al. ( ), a variety of human contact network structures have been proposed to characterize the movement and encounters of passengers in pt systems, which are then used to model epidemic spreading among passengers (bóta et al., a,b; hajdu et al., ; el shoghri et al., ).
besides, most of the previous studies focused on understanding epidemic spreading and identifying the risks in the pt system. few studies have discussed the pt operation-related epidemic control strategies. pt operation plays an important role in controlling epidemics. recently, a variety of epidemic control strategies in pt systems have been implemented to respond to the outbreak of covid- since late january . for example, in wuhan, almost all pt services have been shut down since jan. th. in wuxi, another chinese big city, except the arterial bus routes kept running with shortened operation hours, all other pt services (roughly % of bus routes) were suspended since feb. st. in milan, italy, the pt services were still in operation, but the suspension of pt has been officially proposed with the rapid surge of covid- cases in the lombardy area. the impacts of these strategies and other possible pt operation strategies (e.g., distributing passengers' departure time, limiting maximum bus load), however, have seldom been carefully explored. to fill this gap, this study proposes a time-varying weighted pt encounter network (pen) to model the spreading of the epidemic through urban pt systems. the social activity contacts at both local and global levels are also considered. we select the epidemiological characteristics of covid- as the case study along with high-resolution smart card data from singapore to illustrate the model at the metropolitan level. different control policies from both the public health side and the transportation side are evaluated. in this work, we do not attempt to reproduce or predict the patterns of covid- spreading in singapore, where a variety of outbreak prevention and control measures have been implemented (ministry of health (moh), ) and make most of epidemic prediction models invalid. instead, since the pt systems in many cities share the similar contact network structure despite the differences in urban structures, pt network layouts and individual mobility patterns (qian et al., ) , this study aims to employ the smart card data and the pt network of singapore as proximity to the universal pen to better understand the general spatiotemporal dynamics of epidemic spreading over the pt system, and to evaluate the potential effects of various measures for epidemic prevention in the pt systems, especially from the pt operation angle. the main contribution of this paper is threefold: • propose a pt system-based epidemic spreading model using the smart card data, where the timevarying contacts among passengers at an individual level are captured. • propose a novel theoretical solving framework for the epidemic dynamics with time-varying and heterogeneous network structures, which enables to solve the problem at the whole population level with low computational costs. • evaluate various potential epidemic control policies from both public health side (e.g., reducing infectious rate) and transportation side (e.g., distributing departure time, closing bus routes) the rest of the paper is organized as follows. in section , we elaborate on the methodology of establishing contact networks and solving the epidemic transmission model. section presents a case study using the smart card data in singapore to illustrate the general spatiotemporal dynamics of epidemic spreading through the pt system. in section , conclusions are made and policy implications are offered. 
the majority of previous studies investigated the epidemic process on a static network, where the network is virtually frozen on the time scale of the contagion process. however, static networks are only approximations of the real interplay between time scales. considering daily mobility patterns, no individual is in contact with all of their friends simultaneously all the time. on the contrary, contacts change in time, often on a time scale that is shorter than that of the whole spreading process. real contact networks are thus inherently dynamic, with connections appearing, disappearing, and being rewired on different characteristic time scales, and are better represented in terms of a temporal or time-varying network. therefore, modeling the epidemic process on pt should be based on a time-varying contact network.

pt passengers' encounter patterns have been studied by sun et al. ( ) through an encounter network, which is an undirected graph with each node representing a passenger and each edge indicating a pair of passengers who have stayed in the same vehicle. the network is constructed by analyzing smart card data, which include passengers' tap-in/tap-out times, locations, and corresponding bus ids. since the pen provides direct contact information for passengers, it is an ideal tool to investigate epidemic spreading through pt. extending the work by sun et al. ( ), we propose a time-varying weighted pen to model the epidemic process. we first evenly divide the whole study period into time intervals $t = 1, \ldots, T$, each of length $\tau$. for a specific time interval $t$, consider a weighted graph $G_t(\mathcal{N}, E_t, W_t)$, where $\mathcal{N} = \{i : i = 1, \ldots, N\}$ is the node set, with each node representing an individual and $N$ the total number of passengers in the system; $E_t$ is the edge set and $W_t$ is the weight set. the edge between $i$ and $j$ ($i, j \in \mathcal{N}$), denoted as $e^t_{ij}$, exists if $i$ and $j$ have stayed in the same vehicle during time interval $t$. the weight of $e^t_{ij}$, denoted as $w^t_{ij}$, is defined as $w^t_{ij} = d^t_{ij}/\tau$, where $d^t_{ij}$ is the duration of $i$ and $j$ staying in the same vehicle during time interval $t$. by definition, we have $0 \le w^t_{ij} \le 1$. the weight is used to capture the reality that epidemic transmission is related to the duration of contact.
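a minimal python sketch of this construction follows (our illustration; the record field names and the interval length are hypothetical assumptions, not the paper's specification): passengers on the same vehicle receive an edge whose weight is their clipped co-presence duration divided by the interval length.

import itertools
from collections import defaultdict

TAU = 3600.0  # interval length in seconds (an assumption)

def pen_edges(records, t_start):
    # records: dicts with hypothetical fields passenger, vehicle, tap_in, tap_out
    t_end = t_start + TAU
    by_vehicle = defaultdict(list)
    for r in records:
        by_vehicle[r["vehicle"]].append(r)
    weights = defaultdict(float)
    for trips in by_vehicle.values():
        for a, b in itertools.combinations(trips, 2):
            # co-presence duration d_ij, clipped to the interval [t_start, t_end)
            lo = max(a["tap_in"], b["tap_in"], t_start)
            hi = min(a["tap_out"], b["tap_out"], t_end)
            if hi > lo:
                key = tuple(sorted((a["passenger"], b["passenger"])))
                weights[key] += (hi - lo) / TAU  # w_ij = d_ij / tau
    return dict(weights)

recs = [dict(passenger="i", vehicle="bus1", tap_in=0.0, tap_out=1800.0),
        dict(passenger="j", vehicle="bus1", tap_in=600.0, tap_out=2400.0)]
print(pen_edges(recs, t_start=0.0))  # {('i', 'j'): 0.333...}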
in addition to contacts during rides on the pt, passengers may also contact each other during their daily social activities. given the heterogeneity of passengers' spatial distributions, people may have various possibilities of contacting different people. however, capturing the real connectivity of passengers in social activities requires a richer dataset (e.g., mobile phone, gps data), which is beyond the scope of this research. in this study, we make the following assumptions to build the sa contact network.
• global interaction: passengers may interact with any other individuals in the system during a time interval $t$ with a uniform probability $\theta_g$.
• local interaction: passengers with the same origins or destinations of pt trips may interact with each other during time interval $t$ with a uniform probability $\theta_l$.
since local interaction is more intense than global interaction, we have $\theta_l > \theta_g$. for the global interaction, we assume that the contact time for all connected individuals is $\tau$ for a specific time interval if there are no pt and local contacts between them; otherwise, the contact time is reduced by the pt and local contact duration (cd) in that time interval. for the local interaction, the contact time calculation is illustrated by the following example. consider passenger $i$ with pt trip sequence $\{(o^i_{t_1}, d^i_{t_2}), (o^i_{t_3}, d^i_{t_4})\}$, where $t_k$ is the time at which the passenger boards or alights a vehicle, and $o^i_{t_k}$ and $d^i_{t_k}$ are the trip origin and destination, respectively. the trip sequence is defined as a sequence of consecutive pt trips where every adjacent trip pair has an interval of fewer than h (e.g., $t_3 - t_2 <$ h). we call the interval between two consecutive pt trips (e.g., $[t_2, t_3]$) the activity time hereafter. since passengers may not stay in the same place between two consecutive trips, we may have $d^i_{t_2} \neq o^i_{t_3}$. we further assume that from time $t_2$ to $t_3$, the passenger spends half of the activity time at $d^i_{t_2}$ and half of the activity time at $o^i_{t_3}$. suppose passenger $j$ has a trip sequence $\{(o^j_{t_5}, d^j_{t_6}), (o^j_{t_7}, d^j_{t_8})\}$, that $d^j_{t_6} = o^i_{t_3}$, and that the overlap between the activity intervals $[t_2, t_3]$ and $[t_6, t_7]$ is not zero. this means passengers $i$ and $j$ may have local contact because they have stayed in the same place $d^j_{t_6} = o^i_{t_3}$ (by definition, the probability of having local contact is $\theta_l$). recall that we assume passengers spend half of the activity time at a specific origin or destination. if they have a local contact, the cd between passengers $i$ and $j$ is calculated as half of the overlapping time between interval $[t_2, t_3]$ and interval $[t_6, t_7]$. this calculation gives us the total cd of $i$ and $j$ at the local interaction level. for example, if $t_2 < t_6 < t_3 < t_7$, the total local cd between $i$ and $j$ is $(t_3 - t_6)/2$. analogously to the pen, the total local cd can be mapped to each time interval. for example, suppose $t^*$ is the time boundary between interval $t$ and interval $t+1$, and $t^* - \tau < t_6 < t^* < t_3 < t^* + \tau$. denote the local cd between $i$ and $j$ for time interval $t$ as $\tilde d^{l,t}_{ij}$ ($0 \le \tilde d^{l,t}_{ij} \le \tau$); then we have $\tilde d^{l,t}_{ij} = (t^* - t_6)/2$ and $\tilde d^{l,t+1}_{ij} = (t_3 - t^*)/2$. we denote the sa contact network as $\tilde g_t(\mathcal{n}, \tilde e_{g,t}, \tilde e_{l,t}, \tilde w_{g,t}, \tilde w_{l,t})$, where $\tilde e_{g,t}$ is the edge set of global interaction and $\tilde e_{l,t}$ is the edge set of local interaction. the edge of global interaction between any $i$ and $j$, denoted as $\tilde e^{g,t}_{ij}$, exists with probability $\theta_g$ for all $i, j \in \mathcal{n}$. when $i$ and $j$ share the same pt trip origins or destinations during time interval $t$, the edge of local interaction between $i$ and $j$ ($\tilde e^{l,t}_{ij}$) exists with probability $\theta_l$. $\tilde w_{g,t}$ and $\tilde w_{l,t}$ are the weight sets for global and local interaction edges, respectively. by the discussion above, we have $\tilde w^{l,t}_{ij} = \tilde d^{l,t}_{ij}/\tau$ for all $\tilde w^{l,t}_{ij} \in \tilde w_{l,t}$ and $\tilde w^{g,t}_{ij} = 1 - \tilde w^{l,t}_{ij} - w^t_{ij}$ for all $\tilde w^{g,t}_{ij} \in \tilde w_{g,t}$. by definition, the contacts from the three sub-networks (local, global, and pt) are mutually exclusive.
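the half-activity-time rule above can be implemented directly. the following sketch (with hypothetical activity windows as input) computes the local cd of a passenger pair and maps it to time intervals, under the assumptions stated in the text.

```python
def local_cd_per_interval(act_i, act_j, tau=1.0):
    """Local contact duration of two passengers mapped to time intervals.

    act_i, act_j: activity windows (start, end) spent at the same place,
    e.g. i's interval between alighting and next boarding. Following the
    half-activity-time assumption, only half of the overlap counts.
    Returns {t: d} with 0 <= d <= tau.
    """
    lo = max(act_i[0], act_j[0])
    hi = min(act_i[1], act_j[1])
    cd = {}
    if lo >= hi:
        return cd                      # no overlap, no local contact
    for t in range(int(lo // tau), int(hi // tau) + 1):
        part = max(0.0, min(hi, (t + 1) * tau) - max(lo, t * tau))
        if part > 0:
            cd[t] = part / 2.0         # half of the overlapping time
    return cd

# i stays around its destination from 10.2 to 11.6, j around the same stop 10.9 to 12.3
print(local_cd_per_interval((10.2, 11.6), (10.9, 12.3)))
# {10: 0.05, 11: 0.3} -> overlap [10.9, 11.6] split at the boundary t* = 11 and halved
```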
to illustrate the proposed epidemic contact network, we present a five-passenger system with a single bus route ($N = 5$) in figure . we consider a short study period with $\tau = 1$ h; for illustrative purposes, we neglect the global interaction and set the local interaction intensity $\theta_l = 1$. the bottom of the figure shows the passengers' trajectories along the bus route. in the first time interval ($t = 1$), passengers 1, 2, and 3 board the bus; since they share the same origin and are also in the same bus during $t = 1$, we have edges of the pen (colored green) and edges of local interaction of the sa contact network (colored orange) among them. accordingly, we have $d^1_{12} = d^1_{13} = d^1_{23} = 0.5$ h, and the weights are calculated as $w^1_{12} = w^1_{13} = w^1_{23} = 0.5/1 = 0.5$. meanwhile, from the trajectories at $t = 1$, we notice that two further passengers also share the same origin; thus, we also have an sa contact edge between them at $t = 1$. the local cd for passengers 1, 2, and 3 at time interval $t = 1$ is $\tilde d^{l,1}_{12} = \tilde d^{l,1}_{13} = \tilde d^{l,1}_{23} = \frac{1}{2} \times 0.5 = 0.25$ h, where the factor $\frac{1}{2}$ comes from the assumption that these passengers spend only half of their activity time around this bus station (per the half-activity-time assumption above). hence, the corresponding weights for the sa contact network are $\tilde w^{l,1}_{12} = \tilde w^{l,1}_{13} = \tilde w^{l,1}_{23} = 0.25/1 = 0.25$. the local cds and weights of the remaining pairs, and those for the later intervals, are calculated in the same way. the epidemic transition model is independent of the network representation: we can model various infectious diseases based on the proposed pen using different epidemic transition frameworks. for the case study considered here (covid-19), we employ the susceptible-exposed-infectious-removed (seir) diagram. the seir model is generally used to model influenza-like illness and other respiratory infections; for example, small and tse ( ) used this model to numerically study the evolution of the severe acute respiratory syndrome (sars), which shares significant similarities with covid-19. we first divide the population into four different classes/compartments depending on the stage of the disease (anderson et al., ; diekmann and heesterbeek, ; keeling and rohani, ): susceptible (denoted by s, those who can contract the infection), exposed (e, those who have been infected by the disease but cannot yet transmit it or can only transmit it with a low probability), infectious (i, those who contracted the infection and are contagious), and removed (r, those who are removed from the propagation process, either because they have recovered from the disease with immunization or because they have died). by definition, we have $\mathcal{n} = s \cup e \cup i \cup r$, where $\mathcal{n}$ is the set of the whole population. the diagram of the seir model is shown in figure ; it shows how individuals move through each compartment. the infectious rate, $\beta$, controls the rate of spread and is associated with the probability of transmitting the disease between a susceptible (s) and an exposed individual (e). the incubation rate, $\gamma$, is the rate at which exposed individuals (e) become infectious (i). the removal rate, $\mu$, is the combination of the recovery and death rates. the seir model typically assumes that recovered individuals will not be infected again, given the immunization obtained. it is worth noting that this study focuses on the early stage of an epidemic process, where the impact of outside factors on $\mathcal{n}$ (e.g., birth and natural death) is not considered. for epidemic process models, one is concerned with the steady state, the epidemic threshold, and the reproduction number. according to pastor-satorras et al. ( ), the number of infected individuals in the seir model always tends to zero in the long term (see figure a); this is obvious from the diagram of the seir model (figure ), where there is only one recurrent state, r. the basic reproduction number, denoted by $r_0$, is defined as the average number of secondary infections caused by a primary case introduced in a fully susceptible population (anderson et al., ).
in the standard seir model, we have $r_0 = \beta/\mu$. the epidemic threshold is, in many cases, defined based on the value of $r_0$: when $r_0 < 1$, the number of infectious individuals tends to decay exponentially and there is no epidemic; however, if $r_0 > 1$, the number of infectious individuals can grow exponentially with an outbreak of the epidemic (see figure b). typical epidemic modeling falls into two categories: the individual-based approach and the degree-based approach. generally, the individual-based approach models epidemic transmission at the individual level, while the degree-based approach captures the infection process at the group level, where each group includes a set of nodes (individuals) with the same degree. since the pen characterizes human interaction at the individual level, the individual-based framework is used in this study. we denote $s_{i,t}$, $e_{i,t}$, $i_{i,t}$, and $r_{i,t}$ as the bernoulli random variables describing whether individual $i$ is in class s, e, i, or r at time interval $t$, respectively (yes = 1). by definition we have $s_{i,t} + e_{i,t} + i_{i,t} + r_{i,t} = 1$ for all $i$ and $t$. let $p(x_{i,t} = 1) = p^x_{i,t}$, where $x \in \{s, e, i, r\}$ and $\sum_x p^x_{i,t} = 1$. since the contact network is defined in discrete time, we can describe the epidemic process of the seir model as a discrete markov process with specific transition probabilities. to match the epidemiological characteristics of covid-19, we assume that exposed individuals can also infect others, based on recent findings (rothe et al., ), which is not the common case in the seir model. let $\beta^I$ be the probability of a susceptible individual $i \in s$ getting infected by an infectious individual $j \in i$ at a time interval $t$ if $i$ and $j$ contact each other (either by pt or sa) for the entire time interval. since the actual transmission probability is related to the interaction duration, we can write the actual probability of $i$ getting infected by $j$ ($\beta^I_{i,j,t}$) as
$$\beta^I_{i,j,t} = a^t_{ij}\, h(w^t_{ij}, \beta^I) + \tilde a^{l,t}_{ij}\, h(\tilde w^{l,t}_{ij}, \beta^I) + \tilde a^{g,t}_{ij}\, h(\tilde w^{g,t}_{ij}, \beta^I),$$
where $h(\cdot, \cdot)$ is a function describing the actual transmission probability with respect to cd. it can take the form of a survival function (e.g., exponential, weibull) or a linear function (i.e., $h(w, \beta) = w\beta$, which is used in the case study). $a^t_{ij}$ ($\tilde a^{l,t}_{ij}$, $\tilde a^{g,t}_{ij}$) is an indicator variable showing whether $e^t_{ij}$ ($\tilde e^{l,t}_{ij}$, $\tilde e^{g,t}_{ij}$) exists. it is worth noting that $a^t_{ij}$ is a known constant, but $\tilde a^{l,t}_{ij}$ and $\tilde a^{g,t}_{ij}$ are random variables with bernoulli distributions: $\tilde a^{l,t}_{ij} \sim b(l^t_{ij}\theta_l)$ and $\tilde a^{g,t}_{ij} \sim b(\theta_g)$, where $l^t_{ij} = 1$ if $i$ and $j$ share the same origin or destination at time interval $t$ and $l^t_{ij} = 0$ otherwise. the expectation of $\beta^I_{i,j,t}$ then follows from these distributions. similarly, we define $\beta^E$ as the probability of a susceptible individual $i \in s$ getting infected by an exposed individual $j \in e$ at time interval $t$ if $i$ and $j$ contact each other for the entire time interval ($\beta^E < \beta^I$); the actual transmission probability $\beta^E_{i,j,t}$ considering interaction duration is defined analogously. note that if $i$ and $j$ have been in contact, we assume the transmission probability depends only on the cd; the variation of transmission probability due to spatial distribution is neglected. capturing spatial factors requires dedicated transmission models (e.g., the wells-riley model (wells et al., )) and an assumption on passengers' spatial distribution in a vehicle, which can be addressed in future work. let $\gamma$ be the probability of $e \to i$, which is unrelated to the network.
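as an illustration, the sketch below assembles the one-to-one infection probability from the three mutually exclusive contact channels with the linear $h$ used in the case study; the container names and the shares_od helper are assumptions made for the example, not part of the original study.

```python
import random

def h(w, beta):
    """Linear dose-response used in the case study: duration-weighted probability."""
    return w * beta

def infection_prob(i, j, t, pen, local_w, theta_l, theta_g, beta, shares_od):
    """One-to-one probability that j infects i during interval t.

    pen[t] holds PT weights w^t_ij; local_w[t] holds local SA weights;
    local/global SA edges are sampled as Bernoulli with rates theta_l / theta_g.
    The three channels partition the interval, so their contributions add.
    """
    key = tuple(sorted((i, j)))
    w_pt = pen[t].get(key, 0.0)                     # known from smart card data
    p = h(w_pt, beta)
    w_l = local_w[t].get(key, 0.0)
    if shares_od(i, j, t) and random.random() < theta_l:
        p += h(w_l, beta)                           # sampled local interaction
    if random.random() < theta_g:
        p += h(max(0.0, 1.0 - w_l - w_pt), beta)    # residual time, global channel
    return min(p, 1.0)
```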
$\mu$ is the probability of $i \to r$. these notations and the epidemic transmission mechanism allow us to write the corresponding system equations for the evolution of $p^s_{i,t}$, $p^e_{i,t}$, $p^i_{i,t}$, and $p^r_{i,t}$. calculating $p(s_{i,t} = 1, x_{j,t} = 1)$ requires the joint distribution of $s_{i,t}$ and $x_{j,t}$, which is usually unavailable. according to the individual-based mean-field approximation, we can assume that the states of neighbors are independent (hethcote and yorke, ; chakrabarti et al., ; sharkey, , ), which leads to $p(s_{i,t} = 1, x_{j,t} = 1) \approx p^s_{i,t}\, p^x_{j,t}$. by plugging this approximation into the system equations, we obtain a new group of solvable system equations. different from the typical seir model, the proposed epidemic model with the individual-based pen poses two challenges. first, the infection rate in a typical seir model is defined at the population level (i.e., a homogeneous network assumption); in the proposed framework, however, we consider one-to-one contagious behaviors at the individual level with heterogeneous contact networks. this heterogeneity is difficult to characterize by probabilistic models (e.g., degree distributions) because the contact structures are known exactly from the smart card data. second, the proposed framework relies on a time-varying network, for which the contagious behaviors and interacting individuals vary over time. one solution method for the system equations is simulation. similar to many other complex stochastic processes, simulation can output approximate values of $p^x_{i,t}$ for all $x \in \{s, e, i, r\}$ and $t$. the simulation process is described in algorithm 1, where $\bar p^x_t$ ($x \in \{s, e, i, r\}$) is the proportion of people in class $x$ for time interval $t$: the initialization assigns some seed infectious people to the system; at each time step $t$, susceptible individuals become exposed according to the one-to-one transmission probabilities $\beta^I_{i,j,t}$ and $\beta^E_{i,j,t}$, exposed individuals become infectious with probability $\gamma$, and infectious individuals are removed with probability $\mu = \mu_r + \mu_d$. computing the pairwise probabilities for each person in class s has time complexity $o(n^2)$, so the total time complexity of the simulation is $o(n^2 t)$, where $t$ is the total number of time intervals considered. the model also requires storing the network structures and individual states at each time step, so the space complexity is also $o(n^2 t)$.
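the following is a minimal python sketch of the simulation-based solver described above (our reading of algorithm 1); the contacts callback, which must return the duration-weighted pairwise probability for the given channel rate, is an assumed interface.

```python
import random

def simulate(N, T, contacts, beta_I, beta_E, gamma, mu, seeds):
    """Discrete-time individual-based SEIR simulation (sketch of Algorithm 1).

    contacts(i, j, t, beta) -> one-to-one infection probability for the pair,
    already weighted by contact duration and channel (PT / local / global).
    Returns the fraction of S, E, I, R per interval.
    """
    state = {i: "S" for i in range(N)}
    for s in seeds:
        state[s] = "I"
    history = []
    for t in range(T):
        nxt = dict(state)
        for i in range(N):
            if state[i] == "S":
                # survive every potentially infectious contact independently
                p_escape = 1.0
                for j in range(N):
                    if state[j] == "I":
                        p_escape *= 1.0 - contacts(i, j, t, beta_I)
                    elif state[j] == "E":
                        p_escape *= 1.0 - contacts(i, j, t, beta_E)
                if random.random() < 1.0 - p_escape:
                    nxt[i] = "E"
            elif state[i] == "E" and random.random() < gamma:
                nxt[i] = "I"
            elif state[i] == "I" and random.random() < mu:
                nxt[i] = "R"   # mu = mu_r + mu_d combines cure and death
        state = nxt
        history.append({x: sum(s == x for s in state.values()) / N
                        for x in "SEIR"})
    return history
```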
given the $o(n^2 t)$ time and space complexity, the simulation-based solving framework is hard to scale up to the whole population level ( . million in our case study); according to the numerical results, sample sizes beyond the tens of thousands cause memory errors on a personal computer. considering computational costs, a scalable and lightweight theoretical model is proposed to handle the epidemic dynamics. the theoretical model, though simplified from agent-level simulation, retains the flexibility to capture the behavioral, mechanical, networked, and dynamical features of the simulation-based models. the framework is separated into three steps. (1) we first build a multi-particle dynamics model for the epidemic process to represent the individual-based model. (2) considering properties of the contact network and multi-particle dynamics (gao et al., ), an effective model is employed to reduce the multi-dimensional (individual-based) dynamics to one dimension (mean-based). (3) previous effective models were developed for static network structures; to fit the time-varying contact network, we innovatively combine the effective model with a temporal network model by adding an energy-flow term to the equations, through which we can capture the impact of the time-varying contact networks on the entire dynamical system. for the multi-particle dynamics model, we focus on the early stage of the epidemic process, where the share of susceptible people is close to one and the share of removed people is close to zero. hence, we can use a taylor expansion to simplify the four-dimensional (s, e, i, and r) individual dynamical epidemic process described above to two dimensions (e and i). in this formulation, we embed the dynamical network structure into two tensors $b^E = \{\beta^E_{i,j,t}\}$ and $b^I = \{\beta^I_{i,j,t}\}$; these two tensors are non-negative, and each temporal slice (the matrix over $i, j$ for fixed $t$) is symmetric due to the nature of infection. according to previous studies (gao et al., ; tu et al., ), the infectious burst in this canonical system can be captured by a one-dimensional simplification of the individual-based model. this simplification is based on the fact that, in a network environment, the state of each node is affected by the state of its immediate neighbors; more details can be found in gao et al. ( ) and tu et al. ( ). therefore, we can characterize the effective state of the system using the average nearest-neighbor activity
$$p^x_{\mathrm{eff},t} = \frac{\sum_{i,j} \beta^x_{i,j,t}\, p^x_{j,t}}{\sum_{i,j} \beta^x_{i,j,t}}, \quad x \in \{e, i\},$$
which is the effective proportion of exposed (infectious) people in the system at time interval $t$. if we assume that all individuals hold a uniform probability of coming into contact with each other, $p^e_{\mathrm{eff},t}$ and $p^i_{\mathrm{eff},t}$ are good proxies for $\bar p^e_t$ and $\bar p^i_t$, the actual proportions of the exposed and infectious populations (i.e., $\bar p^x_t = \frac{1}{N}\sum_i p^x_{i,t}$). however, this assumption may not hold in reality; its relaxation is described below. $p^e_{\mathrm{eff},t}$ and $p^i_{\mathrm{eff},t}$ allow us to reduce the individual-based equations to effective mean-based equations in the two aggregate states. considering that people's interaction probabilities are in fact heterogeneous, to relax the uniform contact assumption we further consider the dynamics of the mobility network on the multi-particle system based on li et al. ( ), which recommends adding the energy flow $f^x_t = \frac{1}{N}\sum_{i,j \in \mathcal{n}} \beta^x_{i,j,t}$ ($x \in \{e, i\}$) to the general dynamical process, with coefficients $k$ to be estimated. the energy flow and the corresponding parameters are expected to capture the heterogeneous contacts in the network. the theoretical model is calibrated by a two-layer regression method. in the first layer, given a trajectory of the epidemic process, matching the observed increments against the effective mean-based equations leads to a linear regression problem with a total of $T$ samples, where the only unknown parameters are $k$; $p^x_{\mathrm{eff},t}$ and $f^x_t$ are calculated from the constructed contact network, and $\bar p^x_t$ is given ($x \in \{e, i\}$). therefore, $k$ can be obtained for every given epidemic trajectory. the epidemic trajectory is generated using the simulation-based solving framework for a small sample size (e.g., k passengers). in the second layer, we aim to obtain the relationship between $k$ and the epidemic/mobility parameters $\Theta$: for every combination of $\Theta$, we can use the simulation model to generate a trajectory and thus estimate $k$ (the first-layer regression described above). therefore, based on different values of $\Theta$, we can estimate a series of $k$. then, we assume a linear relationship between $\Theta$ and $k$ and use a linear regression model to fit this relationship based on the generated ($\Theta$, $k$) pairs.
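a sketch of the first-layer calibration is given below, assuming a linear functional form for the effective mean-based increment; the exact regressors used in the paper are partly lost to extraction, so the feature set here is an illustrative assumption.

```python
import numpy as np

def effective_state(B_t, p_t):
    """Nearest-neighbour-weighted average of individual states (Gao et al. style)."""
    tot = B_t.sum()
    return float((B_t @ p_t).sum() / tot) if tot > 0 else 0.0

def energy_flow(B_t, N):
    """Energy-flow term capturing heterogeneous contacts (after Li et al.)."""
    return float(B_t.sum()) / N

def fit_k(traj_P, B_slices):
    """First-layer regression: fit coefficients k so the one-dimensional model
    reproduces a simulated trajectory. traj_P is a (T, N) array of individual
    probabilities for one class (E or I); B_slices[t] is the (N, N) beta tensor."""
    T, N = traj_P.shape
    X, y = [], []
    for t in range(T - 1):
        p_eff = effective_state(B_slices[t], traj_P[t])
        f_t = energy_flow(B_slices[t], N)
        pbar = traj_P[t].mean()
        X.append([p_eff, f_t, pbar])             # assumed regressors
        y.append(traj_P[t + 1].mean() - pbar)    # observed mean increment
    k, *_ = np.linalg.lstsq(np.array(X), np.array(y), rcond=None)
    return k
```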
after the two-layer regression, the theoretical model can be used to flexibly predict the epidemic process under different policy conditions (i.e., different $\Theta$, $\beta^E$, or $\beta^I$). the theoretical model can smoothly accommodate different sizes of the studied population by scaling the effective exposed proportion $p^e_{\mathrm{eff},t}$, the effective infectious proportion $p^i_{\mathrm{eff},t}$, and the network energy flows $f^e_t$ and $f^i_t$; those variables can be extracted directly once the contact network is constructed. this model can also efficiently test different policy combinations at a low computational cost: according to the numerical tests, it can evaluate one million policy combinations for the full population ( . million) within seconds, with memory and computational complexity both $o(T)$. this allows us to find the optimal policy to control the contagion. in epidemiology, the basic reproduction number ($r_0$) of an infection can be viewed as the expected number of cases directly generated by one case in a population where all individuals are susceptible to infection (fraser et al., ). the most important use of $r_0$ is to determine whether an emerging infectious disease will spread throughout the population. in a common infection model, if $r_0 > 1$, the infection starts to spread throughout the population, but not if $r_0 < 1$ (see figure b). in general, the larger the value of $r_0$, the harder it is to control the epidemic (fine et al., ). in the ideal seir model, where the disease spreads uniformly over time and people have a uniform contact probability, $r_0$ is easy to define. to match the discrete-time setting of this study, let $\bar\beta$ be the average number of people infected by one infectious person within one time interval in the ideal seir system, and $\bar\mu$ be the probability that an infectious person (i) is removed (r) within one time interval (note that in a continuous-time context, $\bar\beta$ and $\bar\mu$ represent the infection and removal rates, respectively). the basic reproduction number for the ideal seir model is then $r_0 = \bar\beta/\bar\mu$. letting $i_t$ be the number of infectious people at time interval $t$, note that $\frac{i_{t+1} - i_t}{i_t} = \bar\beta$ for all $t$ holds only under the ideal seir system. in heterogeneous populations, the definition of $r_0$ is more subtle: it must take into account the fact that contact between people is not uniform. one person may contact only a small group of friends and be isolated from the rest of the population; on the temporal side, people's mobility patterns may vary every day, resulting in time-varying contact networks. this defeats the assumptions of the ideal seir system. to account for network heterogeneity (damgaard et al., ), we define $r_0$ as "the expected number of secondary cases of a typical infected person in the early stages of an epidemic", which focuses on the expected directly infected population at each time step in the early stage. let $e_t$ and $i_t$ be the numbers of exposed and infectious people at time interval $t$. given a trajectory of the epidemic process $[(e_t, i_t)]_{t = 1,...,T}$ (from either the simulation model or the theoretical model), we define the equivalent reproduction number $r_0(T)$ for time period $T$ from the per-interval growth of new exposures relative to the current infectious population. we assume that the incubation period $1/\gamma$ is much longer than one time interval, which holds for most diseases (e.g., in our case study, the incubation period is measured in days while $\tau = 1$ h); therefore, $e_{t+1} - e_t$ is a good proxy for the number of new infections per interval. the equivalent $r_0$ enables flexible and fair comparison of different epidemic processes with heterogeneous contacts and time-varying networks.
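since the exact normalization of the equivalent $r_0$ formula is lost in extraction, the sketch below is a hedged reconstruction: it counts new exposures per infectious person per interval and rescales by the mean infectious period $1/\mu$.

```python
def equivalent_R0(E, I, mu):
    """Equivalent reproduction number over a trajectory (hedged reconstruction).

    E, I: lists of exposed / infectious counts per interval. New exposures
    E[t+1] - E[t] proxy direct infections per interval (valid when the
    incubation period is much longer than one interval); dividing by I[t]
    gives infections per infectious person per interval, and 1/mu rescales
    to the whole infectious period.
    """
    rates = [(E[t + 1] - E[t]) / I[t] for t in range(len(E) - 1) if I[t] > 0]
    return (sum(rates) / len(rates)) / mu if rates else 0.0
```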
in the following sections, we use the defined equivalent $r_0$ as the major epidemic measure for the policy discussions. we use the singapore bus system as a proxy to demonstrate the dynamics of epidemic spreading through a pt network based on the developed time-varying pen approach. the time-varying pen is constructed based on daily mobility patterns in the bus system and is then calibrated according to the epidemiological characteristics of covid-19. a series of disease control policies are evaluated to exhibit the sensitivity of the developed pen. singapore is a city-state country where inter-city land transportation is relatively small. this provides an ideal testbed for focusing on epidemic spreading through intra-city transportation, especially bus systems, which account for a high share of travel modes in singapore (mo et al., ; shen et al., ). according to the singapore land transport authority (lta) ( ), the average daily ridership of buses is around . million, accounting for almost half of all travel modes. there are more than scheduled bus routes operated by four different operators, and a total of approximately , buses are currently in operation. in the case study, the mass rapid transit (mrt) system is neglected because (a) passengers' contacts in a bus are more conducive to epidemic transmission than in the mrt system, given the limited space in a bus, and (b) smart card data provide the exact bus id to identify direct contacts of passengers, whereas direct contact in trains is difficult to obtain from smart card data because the transactions are recorded at the station level. the smart card data used in this study are from august th (monday) to august st (sunday), , covering four weeks. the dataset contains . million bus trip transaction records from . million individual smart cardholders. given that the population of singapore in is around . million, the smart card data are representative of the population (accounting for % of the population) and can be used to model epidemic spreading for the whole city. figure shows the hourly ridership distribution for one week (averaged over the four weeks). the ridership on weekdays shows highly regular and recurrent patterns with morning and evening peaks; the ridership distributions on weekends differ from those on weekdays, with no prominent peaks observed. the usage of the bus system is related to daily activities, which represent the mobility patterns of travelers in singapore and can influence epidemic spreading. figure a shows the distribution of trip duration $p(td)$ over the four weeks, where $td$ denotes trip duration. most trips have short durations. from the inset of figure a, we find that the tail of $p(td)$ is well characterized by an exponential function: above a duration threshold, $p(td) \sim e^{-td/\lambda_{td}}$, where $\lambda_{td}$ is estimated by regression. as people tend to use the mrt for long-distance travel, the duration of bus trips is relatively short; on average it is . ± . min (mean ± standard deviation). in summary, singapore has intense usage of its bus system, with high ridership and user frequency, though trip durations are relatively short. this implies that for highly infectious diseases that can be transmitted by short-term exposure, the bus system may serve as a crucial platform for epidemic spreading.
as discussed in the previous section, the pen and the local interaction network (lin) highly depend on passengers' mobility patterns and exhibit time-varying properties. figure shows example networks of passengers extracted from the real-world data for a morning interval; the interval length $\tau = 1$ h is used in this study. for better visualization, these passengers are chosen from the same bus, and $\theta_l = 1$ is used for the lin. the properties of the contact network are essential for analyzing epidemic spreading. figure summarizes the degree and cd distributions of the pen and the lin (the global interaction network is, by definition, a simple random graph with homogeneous structure, so we do not plot it). given the time-varying properties of the networks, we consider three different time intervals: morning peak, noon off-peak, and evening peak. figures a and b show that the degree distribution of the pen displays a power-law tail ($p(k) \sim k^{-\lambda_k}$, where $k$ is the degree; a simple fitting sketch is given below), implying significant degree heterogeneity. most nodes are of low or medium degree; the number of super-nodes with high degree is limited, and the maximum degree is bounded, which is reasonable given the limited capacity of buses. these properties are consistent with the findings in qian et al. ( ) and indicate that pens are a type of small-world network (telesford et al., ). although the shapes of $p(k)$ for different times are similar, the exact values are still time-dependent. on weekdays, $p(k)$ for the morning and evening peaks are similar to each other but different from the off-peak curve, and pens in peak hours contain nodes of larger degree. on weekends, however, the degree distributions in the three time intervals are similar. figures c and d show the degree distribution of the lin (using the case-study value of $\theta_l$ derived below). similar to the pen distribution, a power-law tail is also observed; however, the degree distribution of lins is less concentrated and shows noisy patterns at high degrees. we also find that intermediate local cd values are nearly uniformly distributed: since a local interaction duration that equals neither zero nor the maximum value indicates that a trip starts or ends within the time interval, the uniform distribution implies poisson-like start and end times of bus trips within the interval.
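a simple way to check the reported power-law tail is a log-log regression on the empirical degree distribution, as sketched below (the sample degrees are made up for illustration).

```python
import numpy as np
from collections import Counter

def powerlaw_exponent(degrees, k_min=2):
    """Estimate lambda_k of a power-law tail p(k) ~ k^(-lambda_k) by a
    log-log least-squares fit over the empirical degree distribution."""
    counts = Counter(d for d in degrees if d >= k_min)
    ks = np.array(sorted(counts))
    pk = np.array([counts[k] for k in ks], dtype=float)
    pk /= pk.sum()
    slope, _ = np.polyfit(np.log(ks), np.log(pk), 1)
    return -slope

# degrees of one PEN slice, e.g. [d for _, d in g.degree()] with networkx
print(powerlaw_exponent([1, 1, 2, 2, 2, 3, 3, 4, 5, 8, 13]))
```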
covid-19, also known as 2019-ncov, is an infectious disease caused by sars-cov-2, a virus closely related to the original sars coronavirus. figure shows the numbers of confirmed (infectious), cured, and dead people from jan to feb , in wuhan; up to feb , there were more than thousand confirmed covid-19 cases, while the total numbers of healed and dead patients were far smaller (the inset plot zooms in on the numbers of cured and dead people). the sudden increase in confirmed cases on feb is due to a revision of the diagnosis criteria (adding cases of clinical diagnosis) (data source: ding xiang yuan). we select covid-19 as the case study for the following reasons. (a) covid-19 is extremely contagious: it is primarily spread between people via respiratory droplets from infected individuals when they cough or sneeze. according to the cdc ( ), anyone who has been within approximately m of a person with covid-19 infection for a prolonged period is considered at risk of infection; therefore, pt can be a significant intermediary for such a highly contagious disease. (b) even as the authors are writing this article, covid-19 remains a major threat to global public health, and singapore is also experiencing its impact (ministry of health (moh), ). the case study of covid-19 can thus provide disease control suggestions from the transportation side, which adds real-time value to this research. the seir model parameters are chosen based on the epidemiological characteristics of covid-19. the time from exposure to onset of symptoms (the latent or incubation period) is generally several days for covid-19; following the latent period suggested by read et al. ( ), we obtain $\gamma = 1/(24 \times \text{latent period in days})$, the hourly probability of moving from e to i. according to read et al. ( ), the transmission rate of covid-19 in the static seir model is . per day, which can be seen as the number of people that one infectious person can infect per day in a well-mixed network. therefore, assuming one person on average has close contact with a fixed number of others per day, the hourly one-to-one infection probability is $\beta^I = \text{daily transmission rate}/(24 \times \text{daily close contacts})$. although recent studies show $\beta^E > 0$ for covid-19 (rothe et al., ), calibrating the exact value of $\beta^E$ is difficult due to a lack of data; since people in the latent period (group e) usually have a much lower probability of transmission, we arbitrarily set $\beta^E$ to a fixed fraction of $\beta^I$. we calculate $\mu_r$ and $\mu_d$ using data from wuhan. figure shows the daily cure and death rates (the number of cured/dead people per day divided by the total number of confirmed people on that day) in wuhan (data source: ding xiang yuan); the high value on the first day may be due to inaccurate data. from the figure, we observe that the average daily cure and death rates are approximately % at the early stage; the hourly cure and death probabilities are then $\mu_r = \mu_d = \text{daily rate}/24$. for the status quo analysis, $\theta_l$ is calculated as follows: consider a community of a given size and assume each person has close contact with a few other people on average per hour locally; then $\theta_l = \text{hourly local contacts}/\text{community size}$. the global interaction captures an individual's probability of close contact with people outside his/her community. given that the population of singapore is around . million, we assume one person on average can closely contact a small number of people per day globally; then $\theta_g = \text{daily global contacts}/(24 \times \text{population})$, where the 24 in the denominator converts the daily probability into an hourly one. table summarizes all parameters of the status quo analysis, which can be seen as the reference scenario; the sensitivity analysis column indicates whether a value is varied in the following policy analysis sections. (table: tested ranges of the human-mobility parameters $\theta_l$ and $\theta_g$.)
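the parameter arithmetic can be reproduced as below; every numeric value is an assumption standing in for digits lost from the source, so only the structure of the derivation should be taken from this sketch.

```python
# Hedged reconstruction of the parameter arithmetic; every numeric value below
# is an assumption standing in for the digits lost from the source text.
HOURS = 24

latent_days = 4            # assumed latent period (Read et al. suggest a few days)
gamma = 1 / (latent_days * HOURS)          # hourly E -> I probability

daily_rate = 1.9           # assumed SEIR transmission rate per day
contacts_per_day = 20      # assumed close contacts per person per day
beta_I = daily_rate / (contacts_per_day * HOURS)   # hourly one-to-one probability

daily_cure = daily_death = 0.01            # assumed early-stage daily rates
mu_r = daily_cure / HOURS
mu_d = daily_death / HOURS

community = 1000           # assumed community size
local_contacts_per_hour = 2
theta_l = local_contacts_per_hour / community

population = 5.6e6         # approximate Singapore population
global_contacts_per_day = 1
theta_g = global_contacts_per_day / (population * HOURS)

print(gamma, beta_I, mu_r + mu_d, theta_l, theta_g)
```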
we first calibrate the theoretical model using epidemic dynamics generated by the simulation model: cases with different combinations of parameters (table ) for k sample passengers are simulated, and these cases are fed into the regression to obtain the parameters of the theoretical model. figure compares the numbers of infectious and exposed people between the simulation and the calibrated theoretical model; we observe a high goodness-of-fit, which implies that the proposed theoretical framework captures the epidemic spreading through pt and sa contacts. figure also compares the numbers of infectious and exposed people over time for five selected cases: generally, the two models show similar numbers of infectious people over time, and for the number of exposed people they show similar dynamic fluctuations with only slight differences in some periods. (figure: comparison between the simulation model and the calibrated theoretical model (k sample passengers); one dot represents the number of infectious/exposed people in a specific time interval. figure: comparison of infectious/exposed people by time for five cases (k sample passengers).) based on the parameters in table , we evaluate the epidemic process in singapore for all smart cardholders ( . million) using the calibrated theoretical model. figure shows the dynamics of the numbers of infectious and exposed people, with initial infectious passengers randomly assigned in the system. the results show that if there are no control policies, the number of infectious people will increase to more than , ( times the initial value) after four weeks, which is consistent with liu et al. ( )'s results on early human-to-human transmission of covid-19 in wuhan. the equivalent $r_0$ is . , consistent with many previous estimates ( . (majumder and mandl, ); . (imai et al., ); . (liu et al., )). the inset plot shows the intra-day dynamics of the number of exposed people in week : the sudden increases in the morning and evening peaks on weekdays highlight the transmissibility through pt, while during the weekends the number of exposed people shows a decreasing trend, implying lower transmission rates on weekends. (figure: epidemic process in the status quo scenario (whole population); the inset zooms in on the number of exposed people from monday to sunday.) motivated by current epidemic control strategies in pt systems worldwide, especially the covid-19 control policies exemplified in appendix a, we find hardly any general criteria to determine whether, when, and where to suspend urban pt services. the time-varying weighted pen developed in this work contributes to a better understanding of the spatiotemporal impacts of various pt operation strategies for epidemic control, thus facilitating the decision-making process for pt operation adjustments. we first evaluate the impact of $\beta^I$ and $\mu$. $\beta^I$ is related to people's preventive behavior, such as wearing masks and sanitizing hands, which decreases $\beta^I$; $\mu$ is related to medical interventions, such as increasing the cure rate and developing vaccines, which increase $\mu$. figure shows the impact of $\beta^I$ and $\mu$ on the equivalent $r_0$, with $\beta^I$ scaled down and $\mu$ scaled up over several orders of magnitude; we fix $\beta^E$ at the same fraction of $\beta^I$ throughout. figure b shows that the epidemic would fade out (equivalent $r_0 < 1$) only if transmissibility were reduced to a small fraction of its current value. however, figure c suggests that even if the cure rate were increased many times over, the epidemic would still occur, though the process would be delayed (with a smaller $r_0$). this implies that reducing transmissibility is more effective than increasing the cure rate for covid-19. figure a shows the joint impact of $\beta^I$ and $\mu$ and the critical bound of the equivalent $r_0$: if $\beta^I$ were decreased sufficiently and $\mu$ enlarged tenfold, the equivalent $r_0$ would drop below 1. if the cost of controlling each parameter were given, this graph could help optimize the control strategies under limited budgets. one typical control strategy for an epidemic is decreasing the trip occurrence rate in a city.
at the average level, this is equivalent to reducing the total contact time and the total squared contact time. figure shows the impact of different controlled percentages of trips (i.e., reducing the percentage of total contact time and squared contact time). we observe that controlling one trip type alone (pt, local, or global) cannot eliminate the epidemic. under the current parameter settings, reducing pt trips contributes more to controlling the epidemic process than the other two. the impact of reducing trips on $r_0$ is generally linear unless the reduction percentage is sufficiently large: when all trips are reduced by a large fraction, the reduction rate of $r_0$ starts to accelerate, and only when almost all trips are suppressed does the spreading process fade out, which implies that travel control is effective only at an extreme level. this corresponds to wu et al. ( )'s statement that a % reduction in inter-city mobility in wuhan had a negligible effect on the covid-19 epidemic dynamics. figure shows the influence of distributing departure times with different degrees of flexibility (up to ± min). note that the benchmark equivalent $r_0$ for the sample-passenger setting differs from the whole-population value, because fewer passengers in the system reduce human contacts and limit the epidemic process. we observe a decline in the equivalent $r_0$ as the departure time flexibility increases, because higher flexibility spreads riders across buses and thus reduces the number of contacted passengers; however, the effectiveness of distributing passengers is very limited, with only a small percentage decrease in $r_0$ even at the largest tested flexibility. as summarized earlier, the closure of bus routes is a strategy implemented in practice to reduce people's close contacts during the outbreak of covid-19; we treat the suspension of a bus service as a travel restriction on the corresponding bus route. while keeping the sa contact network unchanged, we evaluate four different strategies of closing various percentages of bus routes:
a) close from high-demand to low-demand routes (h-l);
b) close from low-demand to high-demand routes (l-h);
c) close randomly picked bus routes (random);
d) close by local planning area.
we assume that passengers who originally take the closed bus routes change to alternative routes if available and otherwise cancel their trips. again, given the computational burden, we evaluate this policy for k sample passengers. closing high-demand bus routes reduces the equivalent $r_0$ noticeably. it is also important to consider how many passengers are affected by each strategy, where affected passengers are defined as those who cannot find alternative bus routes when their original routes are closed. as expected, for the same closing percentage of bus routes, the h-l strategy affects more passengers than the other policies; however, the h-l strategy is more effective in terms of $r_0$ reduction per affected passenger. if we fix the percentage of affected passengers (black dashed line in the figure), the h-l strategy reduces $r_0$ further than the random and l-h strategies do. this may be because passengers on high-demand bus routes are more influential (e.g., have a higher degree in the pen) in the system. therefore, to control the epidemic with fewer people affected, pt agencies should close bus routes from high to low demand.
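the three demand-ordered closure strategies can be expressed compactly; the sketch below (with a made-up demand table) returns the set of routes to suspend for a given closure fraction.

```python
import random

def routes_to_close(route_demand, fraction, strategy="H-L", rng=None):
    """Pick bus routes to suspend under the demand-based closure strategies.

    route_demand: {route_id: number of passengers}, fraction in [0, 1].
    """
    routes = list(route_demand)
    if strategy == "H-L":                 # close high-demand routes first
        routes.sort(key=route_demand.get, reverse=True)
    elif strategy == "L-H":               # close low-demand routes first
        routes.sort(key=route_demand.get)
    else:                                 # random benchmark
        (rng or random).shuffle(routes)
    return routes[: round(fraction * len(routes))]

demand = {"r1": 5200, "r2": 800, "r3": 2300, "r4": 150}
print(routes_to_close(demand, 0.5, "H-L"))   # ['r1', 'r3']
```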
figure a shows the percentage reduction in $r_0$ resulting from the closure of bus routes by planning area, while figure b shows the corresponding percentage of affected pt passengers. generally, a high reduction in $r_0$ comes with a high number of affected passengers. closing bus routes in the main business and residential areas in the southern part of singapore island yields higher epidemic control effects than closing other areas. however, to minimize the impact on passengers' daily travel, pt agencies should first close bus routes in regions with relatively high $r_0$ reduction and a low number of affected passengers, such as the core central business district (cbd) areas: passengers who take buses crossing the core cbd can easily find alternative routes, so the concentrated demand in cbd areas can be distributed to other, less crowded routes, which yields an $r_0$ reduction with fewer affected passengers. although closing bus routes can postpone epidemic spreading, it also brings huge inconvenience to society, for example by decreasing people's accessibility to hospitals. a more moderate alternative is to preserve pt supply but limit the maximum bus load so as to reduce passengers' interactions. figure shows the impact of this policy. since we use the k sample data, the maximum bus loads tested are relatively small; passengers who cannot board a bus due to this policy are assumed to cancel their trips (these are the affected passengers). to compare with the route closure strategies, the x-axis is the percentage of affected passengers, and only the h-l strategy is plotted, as it is the most effective of the closure strategies. we find that limiting the maximum bus load takes effect only when the allowed maximum load is very small, and for the same percentage of affected passengers it is not as effective as closing bus routes from high to low demand. however, since this policy preserves the city's mobility capacity, it can be seen as a more moderate way to control epidemics than directly closing bus routes. (figure: impact of limiting the maximum bus load (k sample passengers); mbl: maximum bus load, cp: close percentage of the h-l strategy.) ideally, a more precise pandemic policy operates at the individual level: the government could identify influential passengers with a high potential to spread the virus and isolate them at an early stage. individual-based policies can outperform region-based or population-level policies in effectiveness and flexibility; however, due to the computational cost, it is hard to optimize the isolation decision for each individual directly. hence, we employ an isolation method based on k-core decomposition. different from traditional degree-based methods, the k-core method shows a higher impact on the dynamics of multi-particle systems (kitsak et al., ; morone et al., ; borge-holthoefer and moreno, ; yang et al., ). a k-core of a graph g is a maximal connected subgraph of g in which all vertices have degree at least k; that is, each vertex in a k-core is connected to at least k other nodes in the subgraph. a high k represents a highly concentrated local network structure, indicating the most clustered part of the whole network, and nodes in a core with larger k usually have larger degrees on average (dorogovtsev et al., ).
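with networkx, the candidate set for the k-core isolation policy can be obtained directly from the (aggregated) pen, as in this sketch with a toy edge list.

```python
import networkx as nx

def critical_passengers(edges, k):
    """Passengers whose core number in the aggregated PEN is at least k.

    edges: iterable of (i, j) contact pairs; returns the candidate set for
    the k-core isolation policy.
    """
    g = nx.Graph(edges)
    g.remove_edges_from(nx.selfloop_edges(g))   # core_number requires no self-loops
    core = nx.core_number(g)
    return {node for node, c in core.items() if c >= k}

edges = [(1, 2), (1, 3), (2, 3), (3, 4), (4, 5), (2, 4), (1, 4)]
print(critical_passengers(edges, 3))   # {1, 2, 3, 4}: the 3-core of the toy graph
```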
in the context of infectious diseases, if one node in a high k-core is infected, it has in expectation at least k chances to spread the disease to other nodes in that core in one time step, compared to an arbitrary node (if the network follows a scaling law) (serrano and boguná, ). therefore, we design the policy to first restrict the nodes in high k-cores, i.e., the more influential nodes. the population in a higher k-core is always smaller than that in a lower k-core; thus, this policy isolates a small portion of people to limit the spreading of the disease. figure shows the impact of isolating passengers in different k-cores and the corresponding numbers of isolated passengers. for comparison purposes, we also evaluate a random policy: for each k-core, the random policy isolates the same number of randomly picked passengers, which corresponds to implementing isolation at the population level. since the number of passengers with a high core number is low (in the k-sample-passenger case), the reduction in $r_0$ from isolating only the innermost cores is not significant; however, isolating all passengers in a sufficiently deep core, which accounts for a small percentage of the whole population, reduces the equivalent $r_0$ substantially, showing higher effectiveness than any of the region-based or route-based policies above. we also observe that the k-core isolation method consistently outperforms the benchmark random isolation method. (figure: impact of isolating critical passengers (k sample passengers); the "base" and "k-core" scenarios indicate no isolation and isolation of people with core number ≥ k in the pen, respectively.) this paper proposed a general time-varying weighted pen to model the spreading of infectious diseases over a pt system, with social-activity contacts at both the local and global levels also considered. the network is constructed from smart card data as an undirected graph, in which a node is a pt passenger, an edge is a pair of passengers staying in the same vehicle, and the weight of an edge captures the contact duration. we employ the seir diagram, a general framework for modeling influenza-like diseases, to model the disease dynamics, using the recent global outbreak of covid-19 as a case study. a scalable and lightweight theoretical framework is derived to capture the time-varying and heterogeneous network structures, which enables solving the problem at the whole population level with low computational costs. we use pt smart card data from singapore as a proxy to understand the general spatiotemporal dynamics of epidemic spreading over pt networks during one month. the status quo analysis shows that, without any disease control enforcement, the covid-19 infected population is expected to grow to many times its initial value by the end of the month. a series of disease control and prevention scenarios are envisioned from both the public health policy and pt operation sides. on the public health side, the model sheds light on people's preventive behavior: wearing face masks and sanitizing hands are considered the most effective measures to control the spreading of the epidemic, whereas an increased cure rate can only postpone the outbreak of the disease. from the perspective of pt operation adjustments, several policies are evaluated, including reducing trip occurrences, enlarging departure time flexibility, closing bus routes, limiting maximum bus loads, and isolating critical passengers.
in general, the control of the epidemic process starts to take effect only when a large majority of all trips are canceled; the equivalent $r_0$ can be reduced below 1 only if most trips are banned. in terms of bus operation policies, distributing departure times and limiting maximum bus loads can slightly decelerate the spreading process. closing high-demand bus routes, especially in the main business areas, is more effective than closing low-demand bus routes. the most effective approach is isolating influential passengers at an early stage, whereby the epidemic process can be significantly slowed with only a small proportion of people affected. many policy implications can be derived from the case study. on the public health side, the government should encourage people to adopt preventive behaviors, such as wearing face masks and sanitizing hands, to reduce the transmission probability. travel restriction policies can take effect (with an equivalent $r_0$ less than 1) only at extreme levels, such as in hubei province, china, where all travel was banned during the covid-19 outbreak in feb . on the pt operation side, according to our models, partial closure of bus routes and limiting the maximum bus load can postpone the spreading of epidemics; the most effective option is closing bus routes with high demand, especially those crossing cbd areas. in practice, a (partial) shutdown of pt services is a serious decision for authorities, and many related issues, such as equity and accessibility, should be considered when designing the suspension of pt services in a pandemic. for prevention purposes, if possible, the government could identify influential passengers with large core numbers based on smart card data and advise them to isolate themselves or reduce travel; similarly, all entities should cancel events with a high number of participants to avoid generating large k-core contact networks. several limitations of this study are as follows.
1) some parameters of the model (e.g., $\theta_l$, $\theta_g$) are determined by the authors' assumptions, which weakens the credibility of the results. although we end up with a reasonable $r_0$, suggesting these parameter values are plausible, more parameter calibration should be done in the future.
2) the policy evaluations are based on idealized assumptions; in the real world, many unexpected effects can occur. for example, closing bus routes may decrease people's accessibility to hospitals and thus decrease the cure rate. governments should think cautiously from multiple perspectives before applying any control strategy.
3) this study did not model the contacts of passengers on trains, due to the difficulty of identifying which vehicles passengers board. future research can incorporate a transit assignment model (e.g., zhu et al., ; mo et al., ) to infer passengers' boarded trains and construct the pen for trains; meanwhile, given the large space in a train, the variation of transmission probability due to passengers' spatial distribution should also be captured.
4) modes of transmission other than contact transmission are not considered (e.g., infectious passengers contaminating surfaces), which may result in an under-estimation of the transmissibility of pt systems.
5) population heterogeneity in infection is neglected; in reality, the infection probability may depend on age, gender, and health conditions. the demographic distribution can be incorporated in the future.
future work includes the following.
1) elaborate on the sa contacts based on other data sources (e.g., mobile phone data) and extend $\mathcal{n}$ to the whole population. due to a lack of data, the contacts of social activities are simplified and $\mathcal{n}$ is assumed to be the set of pt users in this paper; though pt users in singapore account for a large share of the population, future research can combine different data sources to model the sa contacts for the whole population in more detail (wang et al., ).
2) incorporate spatial effects and model the transmission probability more finely. the current transmission probability between two individuals depends only on the contact duration; the contact distance, passenger density, and distribution within a vehicle can be considered in future research.
3) conduct case studies in cities with covid-19 outbreaks (e.g., wuhan, new york city) to validate the model. such case studies can calibrate the model based on ground-truth data, quantify the contribution of pt systems to disease transmission, predict epidemic spreading, and evaluate the effects of different policies.
4) incorporate time-varying epidemic and mobility parameters to better fit reality. although this study does not attempt to predict or reproduce covid-19 spreading, the proposed model can potentially fit the epidemic process better, given its fine-grained framework; however, real-world complexity lies in the time-varying mobility and epidemic patterns, so future research can make the epidemic parameters time-dependent ($\Theta(t)$) instead of constant.
the authors confirm their contributions to the paper as follows: study conception: b. shen. all authors reviewed the results and approved the final version of the manuscript. the research is sponsored by the natural science foundation of china ( ) and the natural science foundation ( ).
appendix a. in practice, since the outbreak of covid-19 in late january , a variety of epidemic control strategies in pt systems, such as requiring pt riders to wear face masks, sterilizing bus and metro carriages, adjusting pt operation schedules, and closing bus routes, have been implemented. in china, the requirement to wear face masks in pt systems has been successively implemented in many provinces since late january . in addition, a variety of pt operation control strategies have been enforced in many cities. in cities of hubei province, especially in wuhan, along with the lockdown policies to control the spreading of covid-19, almost all pt services have been shut down since jan. rd and th, while patients with severe symptoms are transported by ambulance. after the lockdown and travel restrictions of hubei province, the pt operation adjustment strategies implemented in other chinese cities were largely diverse. different from wuxi, where only a share of the arterial bus routes kept running, in nanjing, another major city of jiangsu province, the pt services remained in operation but with shortened operation hours and reduced dispatching frequencies. in shanghai, both inter-provincial pt services and the bus services between rural districts like qingpu and jinshan were closed from jan. th, but most urban pt services remained in operation with limits on the maximum passenger load.
outside china, in italy, where reported covid-19 cases increased dramatically in march , the suspension of pt has been officially proposed in the lombardy area. in london, transport for london (tfl) is running reduced service across the network, closing a number of stations; no service on the waterloo and city line has been provided since march , . many us cities, including boston and washington d.c., have also reduced services, and similar adjustments have been made in cities of other countries with fewer reported covid-19 cases.
references (titles as cited in the text):
• infectious diseases of humans: dynamics and control
• multiscale mobility networks and the spatial spreading of infectious diseases
• absence of influential spreaders in rumor dynamics
• identifying critical components of a public transit system for outbreak control
• modeling the spread of infection in public transit networks: a decision-support tool for outbreak planning and control
• guidance for risk assessment and public health management of healthcare personnel with potential exposure in a healthcare setting to patients with covid-19
• epidemic thresholds in real networks
• the role of the airline transportation network in the prediction and predictability of global epidemics
• predicting and containing epidemic risk using friendship networks
• social and sexual function following ileal pouch-anal anastomosis
• mathematical epidemiology of infectious diseases: model building, analysis and interpretation
• covid-19 real-time data
• k-core organization of complex networks
• how mobility patterns drive disease spread: a case study using public transit passenger card travel data
• herd immunity: a rough guide
• pandemic potential of a strain of influenza a (h1n1): early findings
• universal resilience patterns in complex networks
• generalized reproduction numbers and the prediction of patterns in waterborne disease
• understanding individual human mobility patterns
• discovering the hidden community structure of public transportation networks
• gonorrhea transmission dynamics and control
• statistics brief: world metro
• comparing different approaches of epidemiological modeling
• stochastic dynamics; modeling infectious diseases in humans and animals
• identification of influential spreaders in complex networks
• public transport utilisation: average daily public transport ridership
• the fundamental advantages of temporal networks
• reactive school closure weakens the network of social interactions and reduces the spread of influenza
• investigating physical encounters of individuals in urban metro systems with large-scale smart card data
• time-varying transmission dynamics of novel coronavirus pneumonia in china
• early transmissibility assessment of a novel coronavirus in wuhan, china
• modelling cholera epidemics: the role of waterways, human mobility and sanitation
• public transportation and sustainability: a review
• past updates on covid-19 local situation
• capacity-constrained network performance model for urban rail systems
• impact of built environment on first- and last-mile travel mode choice
• the k-core as a predictor of structural collapse in mutualistic ecosystems
• epidemic processes in complex networks
• random walks and search in time-varying networks
• scaling of contact networks for epidemic spreading in urban transit systems
• novel coronavirus 2019-ncov: early estimation of epidemiological parameters and epidemic predictions
• transmission of 2019-ncov infection from an asymptomatic contact in germany
• a high-resolution human contact network for infectious disease transmission
• percolation and epidemic thresholds in clustered networks
• deterministic epidemiological models at the individual level
• deterministic epidemic models on contact networks: correlations and unbiological terms
• built environment and autonomous vehicle mode choice: a first-mile scenario in singapore
• small world and scale free model of transmission of sars
• simulation of an seir infectious disease model on the dynamic contact network of conference attendees
• efficient detection of contagious outbreaks in massive metropolitan encounter networks
• understanding metropolitan patterns of daily encounters
• the ubiquity of small-world networks
• collapse of resilience patterns in generalized lotka-volterra dynamics and beyond
• inferring metapopulation propagation network for intra-city epidemic control and prevention
• airborne contagion and air hygiene: an ecological study of droplet infections
• nowcasting and forecasting the potential domestic and international spread of the 2019-ncov outbreak originating in wuhan, china: a modelling study
• how far droplets can move in indoor environments: revisiting the wells evaporation-falling curve
• small vulnerable sets determine large network cascades in power grids
• the transmissibility and control of pandemic influenza a (h1n1) virus
• a probabilistic passenger-to-train assignment model based on automated data
on implementable timed automata
in this work, we consider a particular class of distributed real-time systems consisting of multiple components with (almost) synchronous clocks, yet without shared memory, a shared clock, or a global scheduler. prominent examples of such systems are distributed data acquisition systems such as data aggregation in satellite constellations [ , ] , the wireless fire alarm system [ ] , iot sensors [ ] , or distributed database systems (e.g. [ ] ). for these systems, a common notion of time is important (to meet real-time requirements or for energy efficiency) and is maintained up to a certain precision by clock synchronisation protocols, e.g., [ , , ] . global scheduling is undesirable because schedulers are expensive in terms of network bandwidth and computational power and the number of components in the system may change dynamically, thus keeping track of all components requires large computational resources. timed automata, in particular in the flavour of uppaal [ ] , are widely used to model real-time systems (see, for example, [ , ] ) and to reason about the correctness of systems as the ones named above. modelling assumptions of timed automata such as instantaneous updates of variables and zero-time message exchange are often convenient for the analysis of timed system models, yet they, in general, inhibit direct implementations of model behaviour on real-world platforms where, e.g., updating variables take time. in this work, we aim for the generation of distributed code from networks of timed automata with exactly one program per network component (and no other programs, in particular no implicit global scheduler), where all execution times are considered and modelled (including the selection of subsequent edges), and that comes with a comprehensible notion of correctness. our work can be seen as the first of two steps towards bridging the gap between timed automata models and code. we propose to firstly consider a simple, iterative programming language with an exact real-time semantics (cf. sect. ) as the target for code generation. in this step, which we consider to be the harder one of the two, we deal with the discrepancy between the atomicity of the timed automaton semantics and the non-atomic execution on real platforms. the second step will then be to deal with imprecise timing on real-world platforms. our approach is based on the following ideas. we define a short-hand notation (called implementable timed automata) for a sub-language of the well-known timed automata (cf. sect. ). we assume independency from a global scheduler [ ] as a sufficient criterion for the existence of a distributed implementation. for the timing aspect, we propose not to use platform clocks directly in, e.g., edge guards (see related work below) but to turn model clocks into program variables and to assume a "sleep" operation with absolute deadlines on the target platform (cf. sect. ). in sect. , we establish the strong and concrete notion of correctness that for each time-safe computation of a program obtained by our translation scheme there is a computation path in the network with the same observable behaviour. section shows that our short-hand notation is sufficiently expressive to support industrial case studies and discusses the remaining gap towards realworld programming languages like c, and sect. concludes. generating code for timed systems from timed automata models has been approached before [ , , , , ] . 
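to make the timing assumption above concrete: a minimal c sketch, assuming posix, of the "sleep" operation with an absolute deadline that the approach relies on; the function name sleepto and the choice of clock_monotonic are assumptions of this sketch, not prescribed by the paper. sleeping to an absolute instant avoids accumulating the error of many relative sleeps.

    #include <errno.h>
    #include <time.h>

    /* block until the monotonic platform clock reaches the absolute time
     * *deadline; TIMER_ABSTIME makes the argument an absolute instant,
     * and we retry if the sleep is interrupted by a signal */
    static void sleepto(const struct timespec *deadline)
    {
        while (clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME,
                               deadline, NULL) == EINTR)
            ;
    }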
all of the works cited above also generate code for a scheduler (as an additional, explicit component) that corresponds to the implicit, global scheduler introduced by the timed automata semantics [ ]. thus, these approaches do not yield the distributed programs that we aim for. a different approach in the context of timed automata is to investigate discrete sampling of the behaviour [ ] and so-called robust semantics [ , ]. a timed automaton model is then called implementable wrt. certain robustness parameters. bouyer et al. [ ] have shown that each timed automaton (not a network, as in our case) can be sampled and made implementable at the price of a potentially exponential increase in size. a different line of work is [ , , ]. they use timed automata (in the form of rt-bip components [ ]) as an abstract model of the scheduling of tasks. considering execution times for tasks, a so-called physical model (in a slightly different formalism) is obtained, for which an interpreter has been implemented (the real-time execution engine) that then realises a scheduling of the tasks. the computation time necessary to choose the subsequent task (including the evaluation of guards) is "hidden" in the execution engine (which at least warns if the available time is exceeded), and they state the unfortunate observation that time-safety does not imply time-robustness with their approach. there is an enormous amount of work on so-called synchronous languages like esterel [ ], signal [ ], lustre [ ] and time-triggered architectures such as giotto/htl [ ]. these approaches provide an abstract programming or modelling language such that for each program a deployable implementation, in particular for signal processing applications, can be generated. as modelling formalism (and input to code generation), we consider timed automata as introduced in [ ]. in the following, we recall the definition of timed automata for self-containedness. our presentation follows [ ] and is standard with the single exception that we exclude strict inequalities in clock constraints. a timed automaton a = (l, a, x, v, i, e, ℓ_ini) consists of a finite set l of locations (including the initial location ℓ_ini) and sets a, x, and v of channels, clocks, and (data) variables. a location invariant i : l → Φ(x) assigns a clock constraint over x from Φ(x) to each location. the finitely many edges in e are of the form (ℓ, α, ϕ, r, ℓ′), where the action α ∈ {a!, a? | a ∈ a} ∪ {τ} consists of input and output actions on channels and the internal action τ, the guard ϕ ∈ Φ(x, v) is a conjunction of clock constraints from Φ(x) and data constraints from Φ(v), and r ∈ r(x, v)* is a finite sequence of updates, where an update either resets a clock or updates a data variable. for clock constraints, we exclude strict inequalities as we do not yet support their semantics (of reaching the upper or lower bound arbitrarily close but not inclusive) in the code generation. in the following, we may write ℓ(e) etc. to denote the source location of edge e. the operational semantics of a network n = a_1 ∥ · · · ∥ a_n of timed automata as components, with pairwise disjoint sets of clocks and variables, is the (labelled) transition system t(n) = (c, Λ, {−λ→ | λ ∈ Λ}, c_ini) over configurations. a configuration c ∈ c = {⟨ℓ, ν⟩ | ν |= i(ℓ)} consists of a location vector ℓ (an n-tuple whose i-th component is a location of a_i) and a valuation ν : x(n) ∪ v(n) → r+ ∪ d of clocks and variables. the location vector has the invariant i(ℓ) = ⋀_{i=1}^{n} i(ℓ_i), and we assume a satisfaction relation between valuations and clock and data constraints as usual.
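to illustrate the definition just recalled, a hypothetical c encoding of a timed automaton (l, a, x, v, i, e, ℓ_ini) as plain data; all type and field names here are illustrative, not from the paper's code.

    /* guards, invariants and update sequences are function pointers */
    typedef enum { ACT_TAU, ACT_SEND, ACT_RECV } action_kind;

    typedef struct {
        int src, dst;        /* source and destination locations */
        action_kind kind;    /* tau, a! or a? */
        int channel;         /* index into the channel set a (unused for tau) */
        int  (*guard)(const int clk[], const int var[]);  /* phi */
        void (*update)(int clk[], int var[]);             /* update seq. r */
    } edge;

    typedef struct {
        int n_locations;
        int l_ini;                                    /* initial location */
        int (*invariant)(int loc, const int clk[]);   /* i : l -> phi(x) */
        int n_edges;
        const edge *edges;
    } timed_automaton;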
labels are Λ = {τ} ∪ r+, and the set of initial configurations consists of the configurations whose location vector is the vector of initial locations and whose valuation assigns zero to every clock and variable (provided it satisfies the invariant). there is a delay transition ⟨ℓ, ν⟩ −d→ ⟨ℓ, ν + d⟩ for a delay d ∈ r+ if and only if ν + d′ |= i(ℓ) for all d′ ≤ d. there is an internal transition ⟨ℓ, ν⟩ −τ→ ⟨ℓ′, ν′⟩ if and only if there is an edge e = (ℓ_i, τ, ϕ, r, ℓ_i′) enabled in ⟨ℓ, ν⟩ and ν′ is the result of applying e's update vector to ν. an edge is enabled in ⟨ℓ, ν⟩ if and only if its source location occurs in the location vector, its guard is satisfied by ν, and the resulting valuation satisfies the destination location's invariant. there is a rendezvous transition ⟨ℓ, ν⟩ −τ→ ⟨ℓ′, ν′⟩ if and only if there are edges e_1 = (ℓ_1, a!, ϕ_1, r_1, ℓ_1′) and e_2 = (ℓ_2, a?, ϕ_2, r_2, ℓ_2′) in two different automata enabled in ⟨ℓ, ν⟩ and ν′ is the result of first applying e_1's and then e_2's update vector to ν. a transition sequence of n is any finite or infinite, initial and consecutive sequence of the form ⟨ℓ_0, ν_0⟩ −λ_1→ ⟨ℓ_1, ν_1⟩ −λ_2→ · · ·. n is called deadlock-free if no transition sequence of n ends in a configuration c such that there are no configurations c′, c′′ with c −d→ c′ −τ→ c′′. next, deadline, boundary. given an edge e with source location ℓ and clock constraint ϕ_clk, and a configuration c = ⟨ℓ, ν⟩, we define next(c, ϕ_clk) = min{d ∈ r+ | ν + d |= i(ℓ) ∧ ϕ_clk} and deadline(c, ϕ_clk) = max{d ∈ r+ | ν + next(c, ϕ_clk) + d |= i(ℓ) ∧ ϕ_clk} if the minimum/maximum exist, and ∞ otherwise. that is, next gives the smallest delay after which e is enabled from c, and deadline gives the largest delay for which e remains enabled after next. the boundary of a location invariant ϕ_clk is a clock constraint ∂ϕ_clk s.t. ν + d |= ∂ϕ_clk if and only if d = next(c, ϕ_clk) + deadline(c, ϕ_clk). a simple sufficient criterion to ensure the existence of boundaries is to use location invariants of the form ϕ_clk = x ≤ q; then ∂ϕ_clk = x ≥ q. in the following, we introduce implementable timed automata, which can be seen as a definition of a sub-language of the timed automata recalled in sect. . as briefly discussed in the introduction, a major obstacle to implementing timed automata models is the assumption that actions are instantaneous. the goal of considering the sub-language defined below is to make the execution time of resets and the duration of message transmissions explicit. other works like, e.g., [ ], propose higher-dimensional timed automata, where actions take time. we propose to make action times explicit within the timed automata formalism. implementable timed automata distinguish internal, send, and receive edges by action and update, in contrast to timed automata. an internal edge models (only) updates of data variables or sleeping idle (which takes time on the platform), a send edge models (only) the sending of a message (which takes time), and a receive edge (only) models the ability to receive a message with a timeout. all kinds of edges may reset clocks. figure shows an example implementable timed automaton, using double-outline edges to distinguish the graphical representation from timed automata. one edge, for example, models that message 'lz[id]' may be transmitted between time s + g (including guard time g and operating time) and s + g + m, i.e., the maximal transmission duration here is m. the time n_l would be the operating time budgeted for its source location. the semantics of the implementable network n consisting of implementable timed automata i_1, . . . , i_n is the labelled transition system t(a_{i_1} ∥ · · · ∥ a_{i_n}). the timed automata a_{i_i} are obtained from i_i by applying the translation scheme in fig. edge-wise. the construction introduces fresh ×-locations.
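before moving on, a small c sketch of the next and deadline computations defined above, for the simple constraint shapes recommended there (location invariant x ≤ q, guard of the form p ≤ x); integer model clocks and the constants p, q are assumptions of this sketch.

    /* next: smallest delay after which the edge is enabled; deadline:
     * largest further delay for which it stays enabled */
    static int next_delay(int x, int p)              /* guard p <= x */
    {
        return x >= p ? 0 : p - x;
    }

    static int deadline_delay(int x, int p, int q)   /* invariant x <= q */
    {
        int n = next_delay(x, p);
        return (x + n <= q) ? q - (x + n) : -1;      /* -1: never enabled */
    }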
intuitively, a discrete transition to an ×-location marks the completion of a data update or message transmission in i_i that started at the next time of the considered configuration. after completion of the update or transmission, implementable timed automata always wait up to the deadline. if the update or transmission has a certain time budget, then we need to expect that the time budget may be completely used in some cases. using the time budget, possibly with a subsequent wait, yields a certain independence from platform speed: if one platform is fast enough to execute the update or transmission within the time budget, then all faster platforms are. note that the duration of an action may be zero in implementable timed automata (exactly as in timed automata), yet then there will be no time-safe execution of any corresponding program on a real-world platform. in [ ], the concept of independency from a global scheduler is introduced. intuitively, independency requires that sending edges are never blocked because no matching receive edge is enabled or because another send edge in a different component is enabled. that is, the schedule of the network behaviour ensures that at each point in time at most one automaton is ready to send, and that each automaton that is ready to send finds an automaton that is ready for the matching receive. similar restrictions have been imposed on timed automaton models in [ ] to verify the zeroconf protocol. whether a network depends on a global scheduler is decidable; for details, we refer the reader to [ ]. figure shows an artificial network of implementable timed automata whose independency from a global scheduler depends on the parameters s and w of the two components. if the sender reaches the source location of its send edge before the receiver reaches the source location of the matching receive edge, then the standard semantics of timed automata would (using the implicit global scheduler) block the sending edge until the receiving location is reached. yet in a distributed system, the sender should not be assumed to know the current location of the receiver. by choosing the parameters accordingly (i.e., by protocol design), we can ensure that the receiver is always ready before the sender, so that the sender is never blocked. in this case, we can offer a distributed implementation. in the following sections, we only consider networks of implementable timed automata that are deadlock-free, closed (no shared clocks or variables, no committed locations (cf. [ ])), and do not depend on a global scheduler. in this section, we introduce a timed programming language that provides the necessary expressions and statements to implement networks of implementable timed automata as detailed in sect. . the semantics is defined as a structural operational semantics (sos) [ ] that is tailored towards proving the correctness of the implementations obtained by our translation scheme from sect. . we use a dedicated time component in configurations of a program to track the execution times of statements, and we support a snapshot operator to measure the time that has passed since the execution of a particular statement. due to lack of space, we introduce expressions on a strict as-needed basis, including message, location, edge, and time expressions. in a general-purpose programming language, the former kinds of expressions can usually be realised using integers (or enumerations), and time expressions can be realised using platform-specific representations of the current system time. syntax. expressions of our programming language are defined wrt. given network variables v and x.
we assume that each constraint from Φ(x, v) or expression from Ψ(v) over v and x has a corresponding (basic type) program expression, and thus that each variable v ∈ v and each clock x ∈ x have corresponding (basic type) program variables v_v, v_x ∈ v_b. in addition, we assume typed variables for locations, edges, and messages, and for times (on the target platform). we additionally consider location variables v_l to store the current location, edge variables v_e to store the edge currently worked on, message variables v_m to store the outcome of a receive operation, and time variables v_t to store platform time. message expressions are of the form mexpr ::= m | a, with m ∈ v_m and a ∈ a; location expressions are of the form lexpr ::= l | ℓ | nextloc_i(mexpr), with l ∈ v_l and ℓ ∈ l; and edge expressions are of the form eexpr ::= e | e, where e ranges over edge variables from v_e and edge constants from e. a time expression has the form texpr ::= ◦ | t | t + expr, where ◦ denotes the current platform time and t ∈ v_t. note that time variables are different from clock variables. the values of a clock variable v_x are used to compute a new next time, which is then stored in a time variable, which can in turn be compared to the platform time. clock variables can be represented by platform integers (given that their range is sufficient for the model), while time variables will be represented by platform-specific data types like timespec with c [ ] and posix. in this way, model clocks are only indirectly connected (and compared) to the platform clock. table . statements s, statement sequences S, and programs p:
s ::= · · · | if e = eexpr_1 : S_1 . . . e = eexpr_n : S_n fi | while expr do S od
S ::= ε | s | •s | s; S | •s; S (ε; S ≡ S; ε ≡ S)
p ::= S_1 ∥ · · · ∥ S_n.
the sets of statements, statement sequences, and timed programs are given by the grammar in table . the term nextedge_i([mexpr]) represents an implementation of the edge selection in an implementable timed automaton that can optionally be called with a message expression. we denote the empty statement sequence by ε and introduce • as an artificial snapshot operator on statements (see below). the particular syntax with snapshot and non-snapshot statements allows us to simplify the semantics definition below. we use stmseq to denote the set of all statement sequences. a component configuration is a tuple π = ⟨S, (β, γ, w, u), σ⟩ consisting of a statement sequence S ∈ stmseq, the operating time of the current statement β ∈ r+ (i.e., the time passed since starting to work on the current statement), the time to completion of the current statement γ ∈ r+ ∪ {∞} (i.e., the time it will take to complete the work on the current statement), the snapshot time w ∈ r+ (i.e., the time since the last snapshot), the platform clock value u ∈ r+, and a type-consistent valuation σ of the program variables. we will use operating time and time to completion to define computations of timed while programs (with discrete transitions when the time to completion is zero), and we will use the snapshot time w as an auxiliary variable in the construction of predicates by which we relate program and network computations. the valuation σ maps basic type variables from v_b to values from a domain d_b that includes all values of data variables from d as used in the implementable timed automaton and all values needed to evaluate clock constraints (see below), i.e. σ(v_b) ⊆ d_b.
time variables from v_t are mapped to non-negative real numbers, i.e., σ(v_t) ⊆ r+; message variables from v_m are mapped to channels or the dedicated value ⊥ representing 'no message', i.e., σ(v_m) ⊆ a ∪ {⊥}; location variables from v_l are mapped to locations, i.e., σ(v_l) ⊆ l; and edge variables from v_e are mapped to edges, i.e., σ(v_e) ⊆ e. for the interpretation of expressions in a component configuration we assume that, if the valuation σ of the program variables corresponds to the valuation ν of the data variables, then the interpretation ⟦expr⟧(π) of a basic type expression expr corresponds to the value of expr under ν. other variables obtain their values from σ, too, i.e. ⟦t⟧(π) = σ(t), ⟦m⟧(π) = σ(m), ⟦l⟧(π) = σ(l), and ⟦e⟧(π) = σ(e); constant symbols are interpreted by their corresponding values, i.e. ⟦a⟧(π) = a, ⟦ℓ⟧(π) = ℓ, and ⟦e⟧(π) = e, and we have ⟦t + expr⟧(π) = ⟦t⟧(π) + ⟦expr⟧(π). there are two non-standard cases. the ◦-symbol denotes the platform clock value of π, i.e. ⟦◦⟧(π) = u, and we assume that ⟦nextloc_i([mexpr])⟧(π) yields the destination location of the edge that is currently processed (as given by e), possibly depending on a message name given by mexpr. if ⟦e⟧(π) denotes an internal action or send edge e, this is just the destination location ℓ′(e); for receive edges it is ℓ′(e) if mexpr evaluates to the special value ⊥, and an ℓ_i from an (a_i?, ℓ_i) pair in the edge otherwise. if the receive edge is non-deterministic, we assume that the semantics of nextloc_i resolves the non-determinism. program computations. table gives an sos-style semantics with discrete reduction steps of a statement sequence (or component). note that the rules in table (with the exception of receive) apply when the time to completion is zero, that is, at the point in time where the current statement completes. each rule then yields a configuration with the time to completion γ of the new current statement. the new snapshot time w is zero if the first statement in S is a snapshot statement •s, and w otherwise. rule (r ) updates m to a, which is a channel or, in case of timeout, the 'no message' indicator '⊥'. rule (r ) is special in that it is supposed to represent the transition relation of an implementable timed automaton. depending on the program valuation σ, (r ) is supposed to yield a triple of the next edge to work on, this edge's next, and its deadline. for simplicity, we assume that the interpretation of nextedge_i([mexpr]) is deterministic for a given valuation of program variables. a configuration of program p = S_1 ∥ · · · ∥ S_n is an n-tuple Π = (⟨S_1, (β_1, γ_1, w_1, u_1), σ_1⟩, . . . , ⟨S_n, (β_n, γ_n, w_n, u_n), σ_n⟩) of component configurations; c(p) denotes the set of all configurations of p. the operational semantics of a program p is the labelled transition system on system configurations defined as follows. there is a delay transition Π −δ→ Π′ if no current statement completes strictly before δ. there is an internal transition Π −τ→ Π′ if, for some i, ≤ i ≤ n, a discrete reduction rule from table applies to the i-th component configuration. there is a synchronisation transition Π −τ→ Π′ if some component i completes the sending of a message on a channel and some other component j takes the message, i.e. its component configuration ⟨S_j, (β_j, γ_j, w_j, u_j), σ_j⟩ is reduced by (r ), and β_j ≥ β_i, i.e. if component j has been listening at least as long as component i has been sending. note that this definition of synchronisation allows multiple components to send at the same time (which may cause message collision on a shared medium) and that, similar to the rendezvous communication of timed automata, out of multiple receivers, only one takes the message.
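a direct, hypothetical c transcription of a component configuration π = ⟨S, (β, γ, w, u), σ⟩ from the semantics above; the bound max_vars and the representation of σ are assumptions of this sketch.

    #define MAX_VARS 16

    typedef struct {
        int    v_b[MAX_VARS];  /* basic-type variables: data, model clocks */
        double v_t[MAX_VARS];  /* time variables (absolute platform times) */
        int    v_m, v_l, v_e;  /* message, location and edge variables */
    } valuation;

    typedef struct {
        int       pc;     /* position in the statement sequence S */
        double    beta;   /* operating time of the current statement */
        double    gamma;  /* time to completion (may be infinite) */
        double    w;      /* snapshot time since the last snapshot */
        double    u;      /* platform clock value */
        valuation sigma;  /* current values of the program variables */
    } component_config;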
in our application domain these two cases do not happen, because we assume that implementable networks do not depend on a global scheduler. that is, the program of an implementable network never exhibits any of these two behaviours. a program configuration is called initial if and only if the k-th component configuration, ≤ k ≤ n, is at the start of S_k, with any β_k, with γ_k = 0, w_k = 0, u_k = 0, and any σ_k with σ_k(v_b) = 0. we use c_ini(p) to denote the set of initial configurations of program p. a computation of p is an initial and consecutive sequence of program configurations ζ = Π_0, Π_1, . . . , i.e. Π_0 ∈ c_ini(p) and for all i ∈ n there exists λ ∈ r+ ∪ {τ} such that Π_i −λ→ Π_{i+1} as defined above. we need not consider terminating computations of programs here, because we assume networks of implementable timed automata without deadlocks. the program of the network of implementable timed automata n = i_1 ∥ · · · ∥ i_n is p(n) = S(i_1) ∥ · · · ∥ S(i_n) (cf. table c). the edges' work is implemented in the corresponding line of the statement sequences in tables a and b. the remaining lines include the evaluation of guards to choose the edge to be executed next. the result of choosing the edge is stored in the program variable e, which (by the while loop and the if-statement) moves control to the line of the implementation of that edge. the program's timing behaviour is controlled by the variable t and is thus decoupled from clocks in the timed automata model. after line , the value of t denotes the absolute time at which the execution of the next edge is due. that is, clocks in the program are not directly compared to the platform time (which would raise issues with the precision of platform clocks) but are used to determine points in time that the target platform is supposed to sleep to. by doing so, we also lower the risk of accumulating imprecisions in the sleep operation of the target platform when sleeping for many relative durations. the idea of scheduling work and operating time is illustrated by the timing diagram in fig. . row (a) shows a naïve schedule for comparison: from time t_{i-1}, decide on the next edge to execute and determine this edge's next time at t_i (light grey phase: operating time, must complete within the next edge's next time n_e), then sleep up to the next time (dashed grey line), then execute the edge's actions (dark grey phase: work time, must complete within the edge's deadline d_e), then sleep up to the edge's deadline at t_{i+1}, and start over. the program obtained by our translation scheme implements the schedule shown in row (b). the program begins with determining the next edge right after the work phase and then has only one sleep phase up to, e.g., t_{i+1}, where the next work phase begins. in this manner, we require only one interaction with the execution platform that implements the sleep phases. row (c) illustrates a possible extension of our approach where operating time is needed right before the work phase, e.g., to prepare the platform's transceiver for sending a message. we call the program p(n) a correct implementation of network n if and only if for each observable behaviour of a time-safe execution of p(n) there is a corresponding computation path of n. in the following, we provide our notion of time-safety and then elaborate on the above-mentioned correspondence between program and network computations.
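a minimal c sketch of the per-component main loop that row (b) describes: edge selection right after the work phase, then a single sleep to the absolute due time. nextedge, work, sleepto and ts_add are assumed helpers (the latter adding a duration to an absolute timespec); none of these names come from the paper.

    #include <time.h>

    typedef struct { int id; double next, deadline; } edge_choice;

    extern edge_choice nextedge(void);
    extern void work(edge_choice e);
    extern void sleepto(const struct timespec *deadline);
    extern void ts_add(struct timespec *t, double seconds);

    void component_main(void)
    {
        struct timespec t;              /* absolute due time, variable t */
        clock_gettime(CLOCK_MONOTONIC, &t);
        edge_choice e = nextedge();     /* initial edge selection */
        ts_add(&t, e.next);             /* t := t + next(e) */
        for (;;) {
            sleepto(&t);                /* the single sleep phase, row (b) */
            work(e);                    /* update / send / receive */
            ts_add(&t, e.deadline);     /* work must end by the deadline */
            e = nextedge();             /* operating time right after work */
            ts_add(&t, e.next);         /* due time of the next work phase */
        }
    }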
intuitively, a computation of p(n) is not time-safe if either the execution of an edge's statement sequence takes longer than the admitted deadline, or if the next time of the subsequent edge is missed, e.g., by an execution platform that is too slow. note that, in a given program computation, the performance of the platform is visible in the operating time β and the time to completion γ. we write Π_k : line n of edge e to denote that the program counter of component k is at line n of the statement sequence of edge e. we use σ|_{x∪v} to denote the (network) configuration encoded by the values of the corresponding program variables. we assume that for each program variable v, the old value, i.e., the value before the last assignment in the computation, is available as @v. a computation of p(n) is called time-safe if, whenever the i-th configuration completes (γ_{i,k} = ) the line of an edge's statement sequence, not more time than admitted by its deadline has been used (visible in the snapshot time w_k), and the sleepto statement in line completes exactly after the deadline of the previously worked-on edge plus the current edge's next time. ♦ note that, by definition , operating times may be larger than the subsequent edge's next time in a time-safe computation (if the execution of the current edge completes before its deadline). stronger notions of time-safety are possible. for the correctness of p(n), recall that we introduced timed while programs to consider the computation time that is needed to compute the transition relation of an implementable network on the fly. in addition, program computations have a finer granularity than network computations: in network computations, the current location and the valuation of clocks and variables are updated atomically in a transition. in the program p(n), these updates are spread over three lines. we show that, for each time-safe computation ζ of program p(n), there is a computation of network n that is related to ζ in a well-defined way. the relation between program and network configurations decouples both computations in the sense that at some times (given by the respective timestamp) the, e.g., clock values in the program configuration are "behind" the network clocks (i.e., correspond to an earlier network configuration), at some times they are "ahead", and there are points where they coincide. figure illustrates the relation for one edge e. the top row of fig. gives a timing diagram of the execution of the program for edge e of one component. the rows below show the values over time for each program variable v up to e, n, and d. for example, the value of l will denote the source location ℓ of e until line is completed, and then denotes the destination location ℓ′. similarly, v and x denote the effects of the update vector of e on data variables and clocks. note that, during the execution of line , we may observe combinations of values for v and l that are never observed in a network computation, due to the atomic semantics of networks. the two bottom lines of fig. show related network configurations aligned with their corresponding program lines. note that the execution of each line except for line may be related to two network configurations, depending on whether the program timestamp is before or after the current edge's deadline. figure illustrates the three possible cases: the execution of the program's work line (work time, dark gray) is related to network configurations with the source location of the current edge. right after the work time, the network location ℓ× is related, and at the current edge's deadline the destination location ℓ′ is related.
in the related network computation, the transition from ℓ× to ℓ′ always takes place at the current edge's deadline. this point in time may, in the program computation, be right after the work time (fig. a, no delay in ℓ×), in the operating time (fig. b), or in the sleep time (fig. c). the relation between program and network configurations as illustrated in fig. can be formalised by predicates over program and network configurations, one predicate per edge and program line. the following lemma states the described existence of a network computation for each time-safe program computation. the relation gives a precise, component-wise and phase-wise relation of program computations to network computations. in other words, we obtain a precise accounting of which phases of a time-safe program computation correspond to a network computation and how. we can argue component-wise by the closed-component assumption from sect. . for the induction base, the statements of table c reach the line of a send or receive edge (cf. table a and b) and establish a related network configuration. for the induction step, we need to consider delays and discrete steps of the program. from the time-safety of ζ we can conclude to possible delays in n for the related configurations, with a case split wrt. the deadline (cf. fig. ). when the program time is at the current edge's deadline, the network may delay up to the deadline in an intermediate location ℓ×, take a transition to the successor location ℓ′, and possibly delay further. for discrete program steps, we can verify that n has enabled discrete transitions that reach a network configuration that is related to the next program line. here, we use our assumption from the program semantics that update vectors have the same effect in the program and the network, and we use the convenient property of our program semantics that the effects of statements only become visible with the discrete transitions. for synchronisation transitions of the program, we use the assumption that the considered network of implementable timed automata does not depend on a global scheduler, in particular that send actions are never blocked, or, in other words, that whenever a component has a send edge locally enabled, then there is a receiving edge enabled on the same channel. our main result in theorem is obtained from lemma by a projection onto observable behaviour (cf. definition ). intuitively, the theorem states that at each point in time with a discrete transition to line , the program configuration exactly encodes a configuration of network n right before taking an internal, send, or receive edge. definition . let ξ_k = ⟨ℓ_{k,0}, ν_{k,0}⟩, ⟨ℓ_{k,1}, ν_{k,1}⟩, . . . be the projection of a computation path ξ of the implementable network n onto component k, ≤ k ≤ n, labelled such that each configuration ⟨ℓ_{k,j}, ν_{k,j}⟩ is initial or reached by a discrete transition to a source location of an internal, send, or receive edge. the sequence of configurations ⟨ℓ_{k,j}, ν_{k,j} + d_j⟩, where d_j is the largest delay such that between c := ⟨ℓ_{k,j}, ν_{k,j}⟩ and ⟨ℓ_{k,j}, ν_{k,j} + d_j⟩ exactly next(c) time units have passed, is called the observable behaviour of component k in ξ. ♦ theorem . let n be an implementable network and let ζ_k = π_{0,0}, . . . , π_{0,n_0}, π_{1,0}, . . . be the projection onto the k-th component of a time-safe computation ζ of p(n), labelled such that the transitions from π_{i,n_i} to π_{i+1,0} are exactly those transitions in ζ from a line to the subsequent line . then (σ_{i,0}(l), σ_{i,0}|_{x∪v}, u_{i,0})_{i∈n} is an observable behaviour of component k on some computation path of n. ♦ fig. : timed automaton obtained from the implementable timed automaton (after applying the scheme from fig. ) for the lz-protocol of sensors [ ].
the work presented here was motivated by a project to support the development of a new communication protocol for a distributed wireless fire alarm system [ ], without shared memory, only assuming clock synchronisation and message exchange. we provided modelling and analysis of the protocol a priori, that is, before the first line of code had been written. in the project, the engineers manually implemented the model and appreciated how the model indicates exactly which action is due in which situation. later, we were able to study the handwritten code and observed (with little surprise) striking regularities and similarities to the model. so we conjectured that there exists a significant sublanguage of timed automata that is implementable. in our previous work [ ], we identified independency from a global scheduler as a useful precondition for the existence of a distributed implementation (cf. sect. ). for this work, we have modelled the lz-protocol of sensors in the wireless fire alarm system from [ ] as an implementable timed automaton (cf. fig. ; fig. shows the timed automaton obtained by applying the scheme from fig. ). hence our modelling language supports real-world, industrial case studies. implementable timed automata also subsume some models of time-triggered, periodic tasks, which we would model by internal edges only. from the program obtained by the translation scheme given in table , we have derived an implementation of the protocol in c. clock, data, location, edge, and message variables become enumerations or integers; time variables use the posix data structure timespec. the implementation runs timely for multiple days. although our approach of sleeping to absolute times reduces the risk of drift, there is jitter on real-world platforms. the impact of timing imprecision needs to be investigated per application and platform when refining the program of a network to code, e.g., following [ ]. in our case study, jitter is much smaller than the model's time unit. another strong assumption that we use is synchrony of the platform clocks and synchronised starting times of programs, which can in general not be achieved on real-world platforms. in the wireless fire alarm system, component clocks are synchronised in an initialisation phase and kept (sufficiently) synchronised using system time information in messages. robustness against limited clock drift is obtained by including so-called guard times [ , ] in the protocol design. in the model, this is the constant g: components are ready to receive g time units before message transmission starts in another component. note that theorem only applies to time-safe computations. whether an implementation is time-safe needs to be analysed separately, e.g., by conducting worst-case execution time (wcet) analyses of the work code and of the code that implements the timed automata semantics. the c code for the lz-model mentioned above actually implements a sleepto function that issues a warning if the target time has already passed (thus indicating non-time-safety). the translation scheme could easily be extended by a statement between lines and that checks whether the deadline was kept and issues a warning if not. then, theorem would strengthen to the statement that all computations of p(i) either correspond to observable behaviour of i or issue a warning.
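a minimal sketch of the sleepto variant described above, which warns when the target time has already passed (indicating a non-time-safe run); the function name sleepto_checked is hypothetical.

    #include <errno.h>
    #include <stdio.h>
    #include <time.h>

    static void sleepto_checked(const struct timespec *deadline)
    {
        struct timespec now;
        clock_gettime(CLOCK_MONOTONIC, &now);
        /* warn if the absolute deadline is already in the past */
        if (now.tv_sec > deadline->tv_sec ||
            (now.tv_sec == deadline->tv_sec &&
             now.tv_nsec > deadline->tv_nsec))
            fprintf(stderr, "warning: target time already passed "
                            "(non-time-safe execution)\n");
        while (clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME,
                               deadline, NULL) == EINTR)
            ;
    }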
note that, in contrast to [ , , ] , our approach has the practically important property that time-safety implies time-robustness, i.e., if a program is time-safe on one platform then it is time-safe on any 'faster' platform. furthermore, we have assumed a deterministic choice of the next edge to be executed for simplicity and brevity of the presentation. non-deterministic models can be supported by providing a non-deterministic semantics to the nextedge i function in the programming language and the correctness proof. we have presented a shorthand notation that defines a subset of timed automata that we call implementable. for networks of implementable timed automata that do not depend on a global scheduler, we have given a translation scheme to a simple, exact-time programming language. we obtain a distributed implementation with one program for each network component, the programs are supposed to be executed concurrently, possibly on different computers. we propose to not substitute (imprecise) platform clocks for (model) clocks in guards and invariants, but to rely on a sleep function with absolute deadlines. the generated programs do not include any "hidden" execution times, but all updates, actions, and the time needed to select subsequent edges are taken into account. for the generated programs, we have established a notion of correctness that closely relates program computations to computation paths of the network. the close relation lowers the mental burden for developers that is induced by other approaches that switch to a slightly different, e.g., robust semantics for the implementation. our work decomposes the translation from timed automata models to code into a first step that deals with the discrepancy between atomicity of the timed automaton semantics and the non-atomic execution on real platforms. the second step, to relate the exact-time program to real platforms with imprecise timing is the subject of future work. 
references:
model-based implementation of real-time applications
rigorous implementation of real-time systems - from theory to application
synthesis of ada code from graph-based task models
code synthesis for timed automata
on global scheduling independency in networks of timed automata
modeling heterogeneous real-time components in bip
a tutorial on uppaal
synchronous programming with events and relations: the signal language and its semantics
compositional abstraction in real-time model checking
the esterel synchronous programming language: design, semantics, implementation
timed automata can always be made implementable
spanner: google's globally distributed database
higher-dimensional timed automata
automated analysis of aodv using uppaal
ready for testing: ensuring conformance to industrial standards through formal verification
parameterized verification of track topology aggregation protocols
clock synchronization of distributed, real-time, industrial data acquisition systems
ridesharing: fault tolerant aggregation in sensor networks using corrective actions
the synchronous data flow programming language lustre
translating uppaal to not quite c
giotto: a time-triggered language for embedded programming
programming languages - c
formal approach to guard time optimization for tdma
optimizing guard time for tdma in a wireless sensor network - case study
automatic translation from uppaal to c
real-time systems - formal specification and automatic verification
a structural approach to operational semantics
dynamical properties of timed automata
on generating soft real-time programs for non-realtime environments
a methodology for choosing time synchronization strategies for wireless iot networks
model-based implementation of parallel real-time systems
ad hoc routing protocol verification through broadcast abstraction

key: cord- - rlvmwce authors: christman, ananya; chung, christine; jaczko, nicholas; li, tianzhi; westvold, scott; xu, xinyue; yuen, david title: new bounds for maximizing revenue in online dial-a-ride date: - - journal: combinatorial algorithms doi: . / - - - - _ sha: doc_id: cord_uid: rlvmwce

in the online-dial-a-ride problem (oldarp) a server travels to serve requests for rides. we consider a variant where each request specifies a source, destination, release time, and revenue that is earned for serving the request. the goal is to maximize the total revenue earned within a given time limit. we prove that no non-preemptive deterministic online algorithm for oldarp can be guaranteed to earn more than half the revenue earned by opt. we then investigate the segmented best path (sbp) algorithm of [ ] for the general case of weighted graphs. the previously-established lower and upper bounds for the competitive ratio of sbp are and , respectively, under reasonable assumptions about the input instance. we eliminate the gap by proving that the competitive ratio is (under the same assumptions). we also prove that when revenues are uniform, sbp has competitive ratio . finally, we provide a competitive analysis of sbp on complete bipartite graphs.

in the on-line dial-a-ride problem (oldarp), a server travels through a graph to serve requests for rides.
each request specifies a source, which is the pick-up (or start) location of the ride, a destination, which is the delivery (or end) location, and the release time of the request, which is the earliest time the request may be served. requests arrive over time; specifically, each arrives at its release time and the server must decide whether to serve the request and at what time, with the goal of meeting some optimality criterion. the server has a capacity that specifies the maximum number of requests it can serve at any time. common optimality criteria include minimizing the total travel time (i.e. makespan) to satisfy all requests, minimizing the average completion time (i.e. latency), or maximizing the number of served requests within a specified time limit. in many variants preemption is not allowed, so if the server begins to serve a request, it must do so until completion. on-line dial-a-ride problems have many practical applications in settings where a vehicle is dispatched to satisfy requests involving pick-up and delivery of people or goods. important examples include ambulance routing, transportation for the elderly and disabled, taxi services including ride-for-hire systems (such as uber and lyft), and courier services. we study a variation of oldarp where in addition to the source, destination and release time, each request also has a priority and there is a time limit within which requests must be served. the server has unit capacity and the goal for the server is to serve requests within the time limit so as to maximize the total priority. a request's priority may simply represent the importance of serving the request in settings such as courier services. in more time-sensitive settings such as ambulance routing, the priority may represent the urgency of a request. in profit-based settings, such as taxi and ride-sharing services, a request's priority may represent the revenue earned from serving the request. for the remainder of this paper, we will refer to the priority as "revenue," and to this variant of the problem as roldarp. note that if revenues are uniform the problem is equivalent to maximizing the number of served requests. the online dial-a-ride problem was introduced by feuerstein and stougie [ ] and several variations of the problem have been studied since. for a comprehensive survey on these and many other problems in the general area of vehicle routing see [ ] and [ ] . feuerstein and stougie studied the problem for two different objectives: minimizing completion time and minimizing latency. for minimizing completion time, they showed that any deterministic algorithm must have competitive ratio of at least regardless of the server capacity. they presented algorithms for the cases of finite and infinite capacity with competitive ratios of . and , respectively. for minimizing latency, they proved that any algorithm must have a competitive ratio of at least and presented a -competitive algorithm on the real line when the server has infinite capacity. ascheuer et al. [ ] studied oldarp with multiple servers with the goal of minimizing completion time and presented a -competitive algorithm. more recently, birx et al. [ ] studied oldarp on the real line and presented a new upper bound of . for the smartstart algorithm [ ] , which improves the previous bounds of . [ ] and . [ ] . for oldarp on the real line, bjelde et al. [ ] present a preemptive algorithm with competitive ratio . . the online traveling salesperson problem (oltsp), introduced by ausiello et al. 
[ ] and also studied by krumke [ ], is a special case of oldarp where for each request the source and destination are the same location. there are many studies of variants of oldarp and oltsp [ , , , ] that differ from the variant we study; we omit them here due to space limitations. in this paper, we study oldarp where each request has a revenue that is earned if the request is served, and the goal is to maximize the total revenue earned within a specified time limit; the offline version of the problem was shown to be np-hard in [ ]. more recently, it was shown that even the special case of the offline version with uniform revenues and uniform weights is np-hard [ ]. christman and forcier [ ] presented a -competitive algorithm for oldarp on graphs with uniform edge weights. christman et al. [ ] showed that if edge weights may be arbitrarily large, then regardless of revenue values, no deterministic algorithm can be competitive. they therefore considered graphs where edge weights are bounded by a fixed fraction of the time limit, and gave a competitive algorithm for this problem. note that this is a natural subclass of inputs, since in real-world dial-a-ride systems drivers would be unlikely to spend a large fraction of their day moving to or serving a single request. in this work we begin with improved lower and upper bounds for the competitive ratio of the segmented best path (sbp) algorithm that was presented in [ ]. we study sbp because it has the best known competitive ratio for roldarp and is a relatively straightforward algorithm. in [ ], it was shown that sbp's competitive ratio has lower bound and upper bound , provided that the edge weights are bounded by a fixed fraction of the time limit, i.e. t/f where t is the time limit and < f < t, and that the revenue earned by the optimal offline solution (opt) in the last t/f time units is bounded by a constant. this assumption is imposed because, as we show in lemma , no non-preemptive deterministic online algorithm can be guaranteed to earn this revenue. we note that as t grows, the significance of the revenue earned by opt in the last two time segments diminishes. we then close the gap between the upper and lower bounds of sbp by providing an instance where the lower bound is (sect. . ) and a proof for an upper bound of (sect. . ). we note that another interpretation of our result is that under a weakened-adversary model where opt has two fewer time segments available, while sbp has the full time limit t, sbp is -competitive. we then investigate the problem for uniform revenues (so the objective is to maximize the total number of requests served) and prove that sbp earns at least / the revenue of opt, minus an additive term linear in f, the number of time segments (sect. ). this variant is useful for settings where all requests have equal priorities, such as not-for-profit services that provide transportation to elderly and disabled passengers and courier services where deliveries are not prioritized. we then consider the problem for complete bipartite graphs; for these graphs every source is from the left-hand side and every destination is from the right-hand side (sect. ). these graphs model the scenario where only a subset of locations may be source nodes and a disjoint subset may be destinations, e.g. in the delivery of goods from commercial warehouses, only the warehouses may be sources and only customer locations may be destinations. we refer to this problem as roldarp-b.
we first show that if edge weights are not bounded by a minimum value, then roldarp on general graphs reduces to roldarp-b. we therefore impose a minimum edge weight of kt/f for some constant k such that < k ≤ . we show that if revenues are uniform, sbp has competitive ratio /k . finally, we show that if revenues are nonuniform, sbp has competitive ratio /k , provided that the revenue earned by opt in the last t/f time units is bounded by a constant. (this assumption is justified by lemma , which says that no non-preemptive deterministic algorithm can be guaranteed to earn any fraction of what is earned by opt in the last t/f time units.) table summarizes our results. table . bounds on the algorithm sbp for roldarp variants: competitive ratio ρ of sbp for roldarp with uniform and nonuniform revenue. † this upper bound assumes the optimal revenue of the last two time segments is bounded by a constant. ‡ this upper bound assumes the number of time segments is constant. § k is a constant where < k ≤ such that the minimum edge weight is kt/f, where t is the time limit and < f < t. the revenue-online-dial-a-ride problem (roldarp) is formally defined as follows. the input is an undirected complete graph g, where for each pair of nodes u, v there is a weight w_{u,v} > , which represents the amount of time it takes to traverse the edge (u, v). one node in the graph, o, is designated as the origin and is where the server is initially located (i.e., at time ). the input also includes a time limit t and a sequence of requests, σ, that are dynamically issued to the server. each request is of the form (s, d, t, p), where s is the source node, d is the destination, t is the time the request is released, and p is the revenue (or priority) earned by the server for serving the request. the server does not know about a request until its release time t. to serve a request, the server must move from its current location x to s, then from s to d. the total time for serving the request is equal to the length (i.e., travel time) of the path from x to s to d, and the earliest time a request may be released is at t = . for each request, the server must decide whether to serve the request and, if so, at what time. a request may not be served earlier than its release time, and at most one request may be served at any given time. once the server decides to serve a request, it must do so until completion. the goal for the server is to serve requests within the time limit so as to maximize the total earned revenue. (the server need not return to the origin and may move freely through the graph at any time, even if it is not traveling to serve a request.)
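for illustration, a hypothetical c record for a request (s, d, t, p) as just defined; field names are illustrative only.

    typedef struct {
        int    src, dst;   /* source (pick-up) and destination nodes */
        double release;    /* release time t */
        double revenue;    /* revenue p earned for serving the request */
        int    served;     /* bookkeeping: has the request been served? */
    } request;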
it continues this way, alternating between moving to the source of first request in the max-revenuerequest-set during one time segment, and serving this request-set in the next time segment. to find the max-revenue-request-set, the algorithm maintains a directed auxiliary graph, g to keep track of unserved requests (an edge between two vertices u,v represents a request with source u and destination v). it finds all paths of length at most t /f between every pair of nodes in g and returns the path that yields the maximum total revenue (please refer to [ ] for full details). it was observed in [ ] that no deterministic online algorithm can be guaranteed to serve the requests served by opt during the last time segment and the authors proved that sbp is -competitive barring an additive factor equal to the revenue earned by opt during the last two time segments. more formally, let rev(sbp(t j )) and rev(opt(t j )) denote the revenue earned by sbp and opt respectively during the j-th time segment. it was also shown in [ ] that as t grows, the competitive ratio of sbp is at best (again with the additive term equal to rev(opt(t f )) + rev(opt(t f − ))), resulting in a gap between the upper and lower bounds. we first present a general lower bound for this problem and show that no non-preemptive deterministic online algorithm (e.g. sbp) can be better than -competitive with respect to the revenue earned by the offline optimal schedule (ignoring the last two time segments; see lemma , below). can be guaranteed to earn more than half the revenue earned by opt in the first t − t /f time units. this is the case whether revenues are uniform or nonuniform. proof (sketch). the adversary repeatedly releases requests such that depending on which request(s) the algorithm serves, other request(s) are released that the algorithm cannot serve in time. this scheme requires carefully constructed edge weights, release times, and revenues so that the optimal offline revenue is always twice that of any online algorithm. please see the full version of the paper for details [ ] . we now show that no non-preemptive deterministic online algorithm (e.g. sbp) can be competitive with the revenue earned by opt in the last two segments of time. we note that this claim applies to the version of non-preemption where, as in real-world systems like uber/lyft, once the server decides to serve a request, it must move there and serve it to completion. proof (). the adversary releases a request in the last two time segments and if the online algorithm chooses not to serve it, no other requests will be released. if the algorithm chooses to serve it, another batch of requests will be released elsewhere that the algorithm cannot serve in time. please see the full version of the paper for details [ ] . in this section we improve the lower and upper bounds for the competitive ratio of the segmented best path algorithm [ ] . in particular, we eliminate the gap between the lower and upper bounds of and , respectively, from [ ] , by providing an instance where the lower bound is and a proof for an upper bound of . note that throughout this section we assume the revenue earned by opt in the last two time segments is bounded by some constant. we must impose this restriction on the opt revenue of the last two time segments because, as we showed in lemma , no non-preemptive deterministic online algorithm can be guaranteed to earn any constant fraction of this revenue. theorem . 
theorem . if the revenue earned by opt in the last two time segments is bounded by some constant, and sbp is γ-competitive, then γ ≥ . proof (sketch). for the formal details, please refer to the proof of theorem in the full version [ ]. consider the instance depicted in fig. . since t = hf in this instance, h represents "half" the length of one time segment, so only one request of length h + fits within a single time segment for sbp. the general idea of the instance is that while sbp is serving every other request across the top row of requests (since the other half across the top are not released until after sbp has already passed them by), opt serves the entire bottom row in one long chain, and then also has time to serve the top row as one long chain. we now show that sbp is -competitive by creating a modified, hypothetical sbp schedule that has additional copies of requests. first, we note that sbp loses a factor of due to the fact that it serves requests during only every other time segment. then, we lose another factor of two to cover requests in opt that overlap between time segments. finally, by adding at most one more copy of the requests served by sbp to make up for requests that sbp "incorrectly" serves prior to when they are served by opt, we end up with copies of sbp being sufficient for bounding the total revenue of opt. note that while this proof uses some of the techniques of the proof of the -competitive upper bound in [ ], it reduces the competitive ratio from to by cleverly extracting the set of requests that sbp serves prior to opt before making the additional copies. let rev(opt) and rev(sbp) denote the total revenue earned by opt and sbp over all time segments t_j, j = 1, . . . , f. theorem . if the revenue earned by opt in the last two time segments is bounded by some constant c, then sbp is -competitive, i.e., if rev(opt(t_f)) + rev(opt(t_{f-1})) ≤ c, then Σ_{j=1}^{f} rev(opt(t_j)) ≤ Σ_{j=1}^{f} rev(sbp(t_j)) + c. note that another interpretation of this result is that under a resource-augmentation model where sbp has two more time segments available than opt, sbp is -competitive. proof. we analyze the revenue earned by sbp by considering the time segments in pairs (recall that the length of a time segment is t/f for some < f < t). we refer to each pair of consecutive time segments as a time window, so if there are f time segments, there are f/ time windows. note that the last time window may have only one time segment. for notational convenience we consider a modified version of the sbp schedule, which we refer to as sbp′, which serves exactly the same set of requests as sbp, but does so one time window earlier. specifically, if sbp serves a set of requests during time window i ≥ 2, sbp′ serves this set during time window i − 1 (so sbp′ ignores the set served by sbp in window 1). we note that the schedule of requests served by sbp′ may be infeasible, and that it will earn at most the amount of revenue earned by sbp. let b_i denote the set of requests served by opt in window i that sbp′ already served before in some window j < i, and let b be the set of all requests that have already been served by sbp′ in a previous window by the time they are served in the opt schedule; formally, b = ∪_i b_i. let opt(t_j) denote the set of requests served by opt in time segment t_j. let opt_i denote the set of requests served by opt in the time segment of window i with greater revenue, i.e. opt_i = arg max{rev(opt(t_{2i-1})), rev(opt(t_{2i}))}. note this set may include a request that was started in the prior time segment, as long as it was completed in the time segment of opt_i. let rev(opt_i) denote the revenue earned in opt_i. let sbp_i denote the set of requests served by sbp′ in window i, and let rev(sbp_i) denote the revenue earned by sbp_i. let h denote the chronologically ordered set of time windows w where rev(opt_w) > rev(sbp_w), and let h_j denote the j-th time window in h. we refer to each window of h as a window with a "hole," in reference to the fact that sbp′ does not earn as much revenue as opt in these windows. in each window h_j there is some amount of revenue that opt earns that sbp′ does not. in particular, there must be a set of requests that opt serves in window h_j that sbp′ does not serve in h_j. note that this set must be available for sbp′ in h_j, since opt does not include the set b. let opt_{h_j} = a_j ∪ c*_j, where a_j is the subset of requests served by both opt and sbp′ in h_j and c*_j is the subset of opt requests available for sbp′ to serve in h_j but that sbp′ chooses not to serve. let us refer to the set of requests served by sbp′ in h_j as sbp_{h_j} = a_j ∪ c_j for some set of requests c_j. note that if opt_{h_j} = a_j ∪ c*_j can be executed within a single time segment, then rev(c_j) ≥ rev(c*_j) by the greediness of sbp′. however, since h_j is a hole, we know that the set opt_{h_j} cannot be served within one time segment. our plan is to build an infeasible schedule sbp′′ that will be similar to sbp′ but contain additional "copies" of some requests, such that no windows of sbp′′ contain holes. we first initialize sbp′′ to have the same schedule of requests as sbp′. we then add additional requests to h_j for each j = 1, . . . , |h|, based on opt_{h_j}. consider one such window with a hole h_j, and let k be the index of the time segment corresponding to opt_{h_j}. we know opt must have begun serving a request of opt_{h_j} in time segment t_{k-1} and completed this request in time segment t_k. let us use r* to denote this request that "straddles" the two time segments. after the initialization of sbp′′ = sbp′, recall that the set of requests served by sbp′ in h_j is sbp_{h_j} = a_j ∪ c_j for some set of requests c_j. we add to sbp′′ a copy of a set of requests. there are two sub-cases, depending on whether r* ∈ c*_j or not. case r* ∈ c*_j.
note that this set may include a request that was started in the prior time segment, as long as it was completed in the time segment of opt_i. let rev(opt_i) denote the revenue earned in opt_i. let sbp_i denote the set of requests served by sbp′ in window i and let rev(sbp_i) denote the revenue earned by sbp_i. let h denote the chronologically ordered set of time windows w where rev(opt_w) > rev(sbp_w), and let h_j denote the j-th time window in h. we refer to each window of h as a window with a "hole," in reference to the fact that sbp′ does not earn as much revenue as opt in these windows. in each window h_j there is some amount of revenue that opt earns that sbp′ does not; in particular, there must be a set of requests that opt serves in window h_j that sbp′ does not serve in h_j. note that this set must be available for sbp′ in h_j, since opt does not include the set b. let opt_{h_j} = a_j ∪ c*_j, where a_j is the subset of requests served by both opt and sbp′ in h_j, and c*_j is the subset of opt requests available for sbp′ to serve in h_j but which sbp′ chooses not to serve. let us refer to the set of requests served by sbp′ in h_j as sbp_{h_j} = a_j ∪ c_j for some set of requests c_j. note that if opt_{h_j} = a_j ∪ c*_j could be executed within a single time segment, then rev(c_j) ≥ rev(c*_j) by the greediness of sbp′; however, since h_j is a hole, we know that the set opt_{h_j} cannot be served within one time segment. our plan is to build an infeasible schedule sbp″ that is similar to sbp′ but contains additional "copies" of some requests, such that no windows of sbp″ contain holes. we first initialize sbp″ to have the same schedule of requests as sbp′. we then add additional requests to h_j for each j = 1, ..., |h|, based on opt_{h_j}. consider one such window with a hole h_j, and let k be the index of the time segment corresponding to opt_{h_j}. we know opt must have begun serving a request of opt_{h_j} in time segment t_{k−1} and completed this request in time segment t_k; let us use r* to denote this request that "straddles" the two time segments. after the initialization of sbp″ = sbp′, recall that the set of requests served by sbp″ in h_j is sbp_{h_j} = a_j ∪ c_j for some set of requests c_j. we add to sbp″ a copy of a set of requests; there are two sub-cases depending on whether r* ∈ c*_j or not. case r* ∈ c*_j. in this case, by the greediness of sbp, and the fact that both r* alone and c*_j \ {r*} can separately be completed within a single time segment, we have: rev(c_j) ≥ max{rev(r*), rev(c*_j \ {r*})} ≥ rev(c*_j)/2. we then add a copy of the set c_j to the sbp″ schedule, so there are two copies of c_j in h_j. note that for sbp″, h_j will no longer be a hole, since: rev(opt_{h_j}) = rev(a_j) + rev(c*_j) ≤ rev(a_j) + 2 · rev(c_j) = rev(sbp″_{h_j}). case r* ∉ c*_j. in this case c*_j can be served within one time segment but sbp′ chooses to serve a_j ∪ c_j instead, so we have rev(a_j) + rev(c_j) ≥ rev(c*_j); therefore we know either rev(a_j) ≥ rev(c*_j)/2 or rev(c_j) ≥ rev(c*_j)/2. in the latter case, we can do as we did in the first case above and add a copy of the set c_j to the sbp″ schedule in window h_j, to get rev(opt_{h_j}) ≤ rev(sbp″_{h_j}), as above. in the former case, we instead add a copy of a_j to the sbp″ schedule in window h_j. then again, for sbp″, h_j will no longer be a hole, since this time: rev(opt_{h_j}) = rev(a_j) + rev(c*_j) ≤ 2 · rev(a_j) + rev(c_j) = rev(sbp″_{h_j}). note that for all windows w ∉ h that are not holes, we already have rev(sbp_w) ≥ rev(opt_w).
so we have Σ_i rev(opt′_i) ≤ Σ_i rev(sbp″_i) ≤ 2 · Σ_i rev(sbp′_i), where the second inequality is because sbp″ contains no more than two instances of every request in sbp′. combining this with the fact that sbp′ earns at most what sbp does yields Σ_i rev(opt′_i) ≤ 2 · Σ_i rev(sbp_i). since sbp serves requests in only one of the two time segments per window, we have Σ_i rev(sbp_i) = Σ_j rev(sbp(t_j)). hence, since opt′_i is the higher-revenue segment of window i, we can say Σ_j rev(opt′(t_j)) ≤ 4 · Σ_j rev(sbp(t_j)) + rev(opt(t_{f−1})) + rev(opt(t_f)). now we must add in any request in b, i.e., any request that opt serves in a time window after sbp′ serves it. by definition of b (as the set of all requests that have been served by sbp′ in a previous window), b may contain at most the same set of requests served by sbp′; therefore rev(b) ≤ rev(sbp′), so rev(b) ≤ rev(sbp). by the definition of opt′, opt = opt′ + b, and by combining the above inequalities with the fact that rev(b) ≤ rev(sbp), we have rev(opt) ≤ 5 · rev(sbp) + c. we now consider the setting where revenues are uniform among all requests, so the goal is to maximize the total number of requests served. this variant is useful for settings where all requests have equal priority, for example for not-for-profit services that provide transportation to elderly and disabled passengers. the proof strategy is to carefully consider the requests served by sbp in each window and track how they differ from those of opt. the final result is achieved through a careful accounting of the differences between the two schedules, and by bounding the revenue of the requests that are "missing" from sbp. we note that the lower bound instance of the theorem above can be modified to become a uniform-revenue instance that has ratio − /f. we further note that the lower bound instance provided in [ ] immediately establishes a lower bound instance for sbp that has a ratio of . we now show that opt earns at most times the revenue of sbp in this setting, if we assume the revenue earned by opt in the last two time segments is bounded by a constant, and allow sbp an additive bonus of f. note that even when revenues are uniform, no non-preemptive deterministic online algorithm can earn the revenue earned by opt in the last two time segments (the lemma above also holds here). we begin with several definitions and lemmas. as in the proof of the theorem above, we consider a modified version of the sbp schedule, which we refer to as sbp′, that serves exactly the same set of requests as sbp, but does so one time window earlier. for all windows i = 1, 2, ..., m, where m = f/2 − 1, we let s_i denote the set of requests served by sbp′ in window i and s*_i the set of requests served by opt during the higher-revenue time segment of window i (this notation, together with the sets a_i, x*_i, y*_i, x_i and y_i, is defined precisely below). suppose the set s*_i cannot be served within one time segment. this means there must be one request in s*_i that opt started serving in the previous time segment; we refer to this straddling request as r*. there are three sub-cases based on where r* appears. (a) if r* ∈ y*_i, then due to the greediness of sbp′, we know that rev(x_i) + rev(y_i) ≥ rev(r*), since otherwise sbp′ would have chosen to serve r*. we also know that rev(x_i) + rev(y_i) ≥ rev(y*_i \ {r*}), since otherwise sbp′ would have chosen to serve y*_i \ {r*}. from the first inequality we have |x_i| + |y_i| ≥ 1, and from the second |x_i| + |y_i| ≥ |y*_i| − 1. in the remaining sub-cases, r* is served by both opt and sbp′. we know that a_i ∪ y*_i \ {r*} can be served within one time segment, since r* is the only request that causes s*_i to straddle two time segments; again by the greediness of sbp′, we obtain a corresponding bound on rev(x_i) + rev(y_i). therefore, for all cases, for window i, we have a bound relating rev(s*_i) to rev(s_i). now we will build an infeasible schedule sbp″ that is similar to sbp′ but contains additional "copies" of some requests, such that no windows of sbp″ contain holes, i.e. such that rev(sbp″) ≥ Σ_{i=1}^{m} rev(s*_i). we define a modified opt schedule, which we refer to as opt′, such that, by the lemma and eq.
( ), we can say that rev(opt′) is bounded as in ( ). this tells us that to form an sbp″ whose revenue is at least that of opt′, we must "compensate" sbp″ by adding to it at most copies of all requests in the set y_i for all i = 1, 2, ..., m, plus m "dummy requests." in other words, since the total revenue of all y_i cannot exceed the total revenue of sbp′, we have ( ). combining ( ) and ( ), we get rev(opt′) ≤ rev(sbp′) + m, which means ( ). recall that s*_i is the set of requests served by opt during the time segment of window i with greater revenue; in other words ( ), which, combined with ( ), gives us ( ). we assumed that the total revenue of requests served in the last two time segments by opt is bounded by c; from ( ), we get ( ). we also know that the total revenue of requests served by sbp′ during the first m windows is less than or equal to the total revenue of sbp. therefore, from ( ), we have Σ_{j=1}^{f} rev(s*(t_j)) ≤ Σ_{j=1}^{f} rev(s(t_j)) + m + c. in this section, we consider roldarp for complete bipartite graphs g = (v = v_1 ∪ v_2, e), where only nodes in v_1 may be source nodes and only nodes in v_2 may be destination nodes. one node is designated as the origin, and there is an edge from this node to every node in v_2 (so the origin is a node in v_1). due strictly to space limitations, most proofs of theorems in this section are deferred to the full version of the paper [ ]. we refer to this problem as roldarp-b and to the offline version as rdarp-b. we first show that if the edge weights of the bipartite graph are not bounded by a minimum value, then the offline version of roldarp on general graphs, which we refer to as rdarp, reduces to rdarp-b. since rdarp has been shown in [ , ] to be np-hard (even if revenues are uniform), this means rdarp-b is np-hard as well. theorem. the problem rdarp is poly-time reducible to rdarp-b; also, rdarp with uniform revenues is poly-time reducible to rdarp-b with uniform revenues. proof (sketch). the idea of the reduction is to split each node into two nodes connected by an edge in the bipartite graph with a distance of ; then each edge in the original graph is turned into two edges in the bipartite graph. please see the full version for details [ ]. we then show that for bipartite graph instances, if revenues are uniform, we can guarantee that sbp earns a fraction of opt equal to the ratio between the minimum and maximum edge-length. proof (sketch). the proof idea is akin to that of the theorem below; please see the full version of the paper for details [ ]. in this section we show that even if revenues are nonuniform, we can still guarantee that sbp earns a fraction of opt equal to the ratio between the minimum and maximum edge-length, minus the revenue earned by opt in the last window. recall that we refer to each pair of consecutive time segments as a time window. note that no non-preemptive deterministic online algorithm can be competitive with any fraction of the revenue earned by opt in the last t/f time units (i.e. the lemma above also holds for roldarp-b with nonuniform revenues). due to space limitations, please refer to the full version of this work [ ] for the proof of the following theorem.
the notation used in the uniform-revenue analysis above is defined precisely as follows. s*_i denotes the set of requests served by opt during the time segment of window i with greater revenue, i.e. s*_i = arg max{rev(opt(t_{2i−1})), rev(opt(t_{2i}))}, where rev(opt(t_j)) denotes the revenue earned by opt in time segment t_j. we define a new set j*_i as the set of requests served by opt during the time segment of window i with less revenue, i.e. j*_i = arg min{rev(opt(t_{2i−1})), rev(opt(t_{2i}))}. (1) a_i is the set of requests that appear in both s*_i and s_i; (2) x*_i is the set of requests that appear in s_w for some w = 1, 2, ..., i − 1 (note there is only one possible w for each individual request r ∈ x*_i, because each request can be served only once); (3) y*_i is the set of requests such that no request from y*_i appears in s_w for any w = 1, 2, ..., i − 1, i; (4) x_i is the set of requests that appear in s*_w for some w = 1, 2, ..., i − 1 (again, there is only one possible w for each individual request r ∈ x_i, because each request can be served only once); (5) y_i is the set of requests such that no request from y_i appears in s*_w for any w = 1, 2, ..., m, or may not appear in any other sets. also note that, since each request can be served at most once, we have ( ). given the above definitions, we have the following lemma, whose proof has been deferred to the full version of the paper [ ]. it states that at any given time window, the cumulative requests of opt that were earlier served by sbp are no more than the number that have been served by sbp but not yet by opt. proof (of the uniform-revenue bound; the case where s*_i does not fit within one time segment was analyzed above). note that since revenues are uniform, the revenue of a request-set u is equal to the size of the set u, i.e., rev(u) = |u|. consider each window i where rev(s*_i) > rev(s_i). note that the set s*_i may not fit within a single time segment; we consider two cases based on s*_i. first, suppose the set s*_i can be served within one time segment. note that within s*_i = a_i ∪ x*_i ∪ y*_i, x*_i is not available for sbp′ to serve, because sbp′ has served the requests in x*_i prior to window i. among requests that are available to sbp′, sbp′ greedily chooses to serve the maximum-revenue set that can be served within one time segment; therefore, we have rev(x_i) + rev(y_i) ≥ rev(y*_i). since revenues are uniform, we also have |x_i| + |y_i| ≥ |y*_i|. if this were not the case, then sbp′ would have chosen to serve y*_i instead of x_i ∪ y_i, since it is feasible for sbp′ to do so because the entire s*_i can be served within one time segment.
references:
- maximizing the number of rides served for dial-a-ride
- online dial-a-ride problems: minimizing the completion time
- algorithms for the on-line travelling salesman
- tight analysis of the smartstart algorithm for online dial-a-ride on the line
- improved bounds for open online dial-a-ride on the line
- tight bounds for online tsp on the line
- new bounds for maximizing revenue in online dial-a-ride
- revenue maximization in online dial-a-ride
- maximizing revenues for on-line dial-a-ride
- on-line single-server dial-a-ride problems
- generalized online routing: new competitive ratios, resource augmentation, and asymptotic analyses
- online vehicle routing problems: a survey
- online travelling salesman problem on a circle
- online optimization: competitive analysis and beyond
- on minimizing the maximum flow time in the online dial-a-ride problem
- typology and literature review for dial-a-ride problems
key: cord- - k oegu authors: turky, ayad; rahaman, mohammad saiedur; shao, wei; salim, flora d.; bradbrook, doug; song, andy title: deep learning assisted memetic algorithm for shortest route problems date: - - journal: computational science - iccs doi: . / - - - - _ sha: doc_id: cord_uid: k oegu finding the shortest route between a pair of origin and destination is known to be a crucial and challenging task in intelligent transportation systems. current methods assume fixed travel time between any pair of points; the efficiency of these approaches is thus limited, because the travel time in reality can dynamically change due to factors including the weather conditions, the traffic conditions, the time of the day and the day of the week. to address this dynamic situation, we propose a novel two-stage approach to find the shortest route. firstly, deep learning is utilised to predict the travel time between a pair of origin and destination; weather conditions are added into the input data to increase the accuracy of travel time prediction. secondly, a customised memetic algorithm is developed to find the shortest route using the predicted travel time. the proposed memetic algorithm uses a genetic algorithm for exploration and local search for exploiting the current search space around a given solution. the effectiveness of the proposed two-stage method is evaluated based on the new york city taxi benchmark dataset. the obtained results demonstrate that the proposed method is highly effective compared with state-of-the-art methods. finding shortest routes is crucial in intelligent transportation systems. shortest route information can be utilised to enable route planners to compute and provide effective routing decisions [ , , , , ]. however, shortest route computation is a challenging task, partially due to dynamic environments [ ]. for instance, the shortest path is impacted by various spatio-temporal factors, which are dynamic in nature, including weather, the time of the day, and the day of the week; this makes current shortest route computation techniques ineffective [ , ]. moreover, it is challenging to incorporate these dynamic factors into shortest route computation. in recent years, the proliferation of pervasive technologies has enabled the collection of spatio-temporal big data associated with user mobility and travel routes in a real-time manner [ ]. modern cars are equipped with telematics devices, including in-car gps (global positioning system) devices, which can be used as a source of valuable information in traffic modelling [ ]. the traces generated from gps devices have been leveraged in many scenarios, such as spatio-temporal context recognition, taxi-passenger queue time prediction, the study of city dynamics, and transport demand estimation [ , , , , ]. one important aspect of finding shortest routes in realistic environments, which are inherently dynamic, is travel time prediction [ , ]. due to the dynamic nature of travel routes, traditional machine learning methods cannot be applied directly to travel time prediction. one key challenge for traditional machine learning models is the need for hand-crafted features, which requires substantial involvement of domain experts. one relevant approach is the recent use of evolutionary algorithms in other domains to work along with deep learning models for effective feature extraction and selection [ ] [ ] [ ] [ ].
in this study, we aim to identify relevant features for shortest route finding between an origin and destination, leveraging the auto-feature generation capability of deep learning. thereby, we propose a novel two-stage architecture for the travel time prediction and route finding task. in particular, we design a customized memetic algorithm to find the shortest route based on the predicted travel time from the earlier stage. the contributions of this research are summarised as follows:
-a novel two-stage architecture for shortest route finding under dynamic environments.
-development of a deep learning method to predict the travel time between an origin-destination pair.
-a customised memetic algorithm to find the shortest route using the predicted travel time.
the rest of the paper is organized as follows. in sect. , we present our proposed methodology for this study. section describes the experimental settings, which is followed by the discussion of experimental results in sect. . finally, we conclude the paper in sect. . in this paper, we propose a deep learning assisted memetic algorithm to solve shortest route problems. the proposed method has two stages: (i) a prediction stage and (ii) an optimisation stage. the prediction stage is responsible for predicting the travel times between a pair of origin and destination along the given route by using deep learning. the second stage uses a memetic algorithm to actually find the shortest path visiting all locations along the given route. in the following subsections, we discuss the main steps of the proposed method and the components of each stage in detail; figure shows our proposed approach. conventional route finding methods assume a fixed cost or travel time between any pair of points, which is rarely the case in reality. one approach to the dynamic travel time issue is prediction. in this work, we incorporate weather data along with temporal-spatial data to develop a deep learning predictive approach. the goal of the proposed predictive approach is to predict future travel time between any points in the problem, based on historical observations and weather conditions. specifically, given a group of historical travel time data, weather data and road network data, the aim is to predict the travel time between source (s) and destination (d), s_i, d_i ∈ r, i ∈ [1, 2, ..., n], where n is the number of locations in the road network. our predictive approach tries to predict the travel time at t+1 based on the given data at t. the proposed predictive approach has three parts: the input data, data cleaning and aggregation, and the prediction approach. figure shows the deep learning approach. input data. in this work, we use data from three different sources, involving around . million trip records. these include travel time data, weather data and road network data.
-travel time data. the travel times between different locations were collected using nyc yellow cab trip record data.
-weather data. we use the weather data in new york city - . the data involves: date, maximum temperature, minimum temperature, average temperature, precipitation, snow fall and snow depth.
-road network data. the road network data involves temporal and spatial information as follows:
• id -a trip identifier.
• vendor id -a code indicating whether the provider is involved with the trip record.
• pickup date-time -date and time when the meter was started.
• drop-off date-time -date and time when the meter was disconnected.
• passenger count -indicates the total number of riders in the vehicle.
• pickup longitude -the longitude where the passenger was picked up.
• pickup latitude -the latitude where the passenger was picked up.
• dropoff longitude -the longitude where the passenger was dropped off.
• dropoff latitude -the latitude where the passenger was dropped off.
• store flag -indicates whether the trip record was saved in vehicle memory before being sent to the vendor (y = store and forward; n = not a store and forward trip).
• trip duration -duration of the trip in seconds.
data cleaning and aggregation. this process involves the removal of all error values and outliers, the imputation of missing values, and data aggregation. to facilitate the prediction, we bound the data to the range from (average − × standard deviation) to (average + × standard deviation); values outside of this range are considered outliers and are removed. the missing values are imputed by the average values. any overlapping pick-up and drop-off locations are also removed. in the aggregation step, we combine the travel time data, weather data and road network data at each time step so that they can be fed into our deep networks. prediction approach. the main goal of this step is to provide a highly accurate prediction of the travel times between different locations in the road network. the processed and aggregated data is provided as input for the prediction approach; once the prediction model is trained and retrieved, it is ready to actually predict the travel times between given locations. in this work, we propose a deep learning technique based on a feedforward neural network to build our prediction approach. the deep neural network consists of one input layer, multiple hidden layers and one output layer. each layer (input, hidden and output) involves a set of neurons. the total number of neurons in the input layer is the same as the number of input variables in our input data. the output layer has one single neuron, which represents the predicted value. in the deep neural network, we have m hidden layers and each one has k neurons. the input layer takes the input data and then feeds it into the hidden layers; the outputs of the hidden layers are used as input for the output layer. given the input data x (x = x_1, ..., x_n) and the output value y, the prediction approach aims to find the estimated value y_est; for a single layer, a simple approach is as follows: y_est = f(w · x + b), where w is the weight, b is the bias and f is the activation function. using a four-layer (one input, two hidden and one output) neural network as an example, y_est can be calculated as follows: y_est = f_3(w_3 · f_2(w_2 · f_1(w_1 · x + b_1) + b_2) + b_3), where y_est is the output of the network and f_1, f_2, f_3 are the activation functions. in this work, keras [ ] based on tensorflow [ ] is used to develop our prediction model.
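as a concrete illustration, the following minimal keras sketch builds a feedforward regressor of this shape; the layer sizes, optimizer and training settings are our own illustrative assumptions, not the configuration reported by the authors.

from tensorflow import keras
from tensorflow.keras import layers

def build_travel_time_model(n_features):
    # one input layer, two hidden layers, one output neuron, mirroring
    # the four-layer network described above (layer sizes are assumed)
    model = keras.Sequential([
        layers.Input(shape=(n_features,)),
        layers.Dense(64, activation="relu"),
        layers.Dense(64, activation="relu"),
        layers.Dense(1),  # single output neuron: the predicted travel time
    ])
    model.compile(optimizer="adam", loss="mse",
                  metrics=[keras.metrics.RootMeanSquaredError()])
    return model

# x: aggregated trip + weather + road-network features; y: trip durations
# model = build_travel_time_model(x.shape[1])
# model.fit(x, y, epochs=10, batch_size=256, validation_split=0.1)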
this subsection presents the proposed memetic algorithm (ma) for shortest route problems. ma is a population-based metaheuristic that combines the strengths of a local search algorithm with a population-based metaheuristic to improve the convergence process [ , ]. in this paper, we use a genetic algorithm (ga) and a local search (ls) algorithm to form our proposed ma: ga is responsible for exploring new areas in the search space of solutions, while ls is used to accelerate the search convergence. the pseudocode of the proposed ma is shown in ( ). the overview of the process is given below, followed by a detailed description of each step. our proposed algorithm starts by setting the parameters, creating a population of solutions, calculating the quality of each solution and identifying the best solution in the current population. next, the main steps of ma iterate over a number of generations until the stopping criterion is met. at each generation, good solutions are selected from the population by the selection procedure; then the crossover operator is applied on the selected solutions to generate new solutions. after that, the mutation operator is applied on the new solutions by randomly changing them. a repair procedure is applied to check the feasibility of the generated solutions and fix the infeasible ones, as some solutions may no longer be feasible. afterwards, a local search algorithm is invoked to iteratively improve the current solutions. if one of the stopping criteria is satisfied, the whole ma procedure stops and the current best solution is returned as the output; otherwise, the fitness of the current pool of solutions is calculated and the population is updated, since new solutions have been generated by crossover, mutation, the repair procedure and local search. after that, a new iteration starts from the selection procedure again. parameter setting. the main parameters of the proposed ma are initialised in this step. the proposed ma has several parameters: the population size, the number of generations, the crossover rate, the mutation rate and the number of non-improvement iterations for the local search. initial population. the initial population is randomly generated. each solution is represented as one chromosome, e.g. a one-dimensional array; each cell of the array contains an integer number which represents a location. fitness function. in this step, the fitness value of each solution is calculated based on the objective function. the better the fitness value, the higher the chance that the solution will be selected to reproduce the next generation of solutions. for shortest route problems, the fitness is the total travel time between the origin and destination locations; therefore, the solution with the shortest travel time is the better one. selection procedure. this step is responsible for selecting solutions for producing the next generation. in this paper, we adopted the traditional tournament selection mechanism [ ] [ ] [ ]. the tournament size is set to 2, indicating that each tournament has two solutions competing with each other: at each call, two solutions are randomly selected from the current population, and the one with the highest fitness value is added to the reproduction pool. crossover. this step generates new solutions by taking the selected solutions and mixing their genetic material to produce new offspring. in this paper, the single-point crossover method is used, which only swaps genetic material at one point [ , ]: it first finds a common point between the source node and destination node, and then all points behind the common point are exchanged between the two solutions, resulting in two offspring. mutation. the mutation operator helps explore a large search space by producing some random changes in various solutions. in this paper, we used a one-point mutation operator [ ]: a mutation point is randomly selected, and then all points behind the selected point are replaced with a random sequence. repair procedure. the aim of this step is to turn infeasible solutions into feasible ones. after crossover and mutation operations, the resulting solutions may become infeasible [ , ]; the ma in our experiments therefore includes a repair procedure that ensures all infeasible solutions are repaired. a sketch of these genetic operators is given below.
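the following python sketch (our own illustration; names and details are assumptions, not the authors' code) shows one plausible realization of the operators just described, for routes encoded as arrays of location ids with the source and destination fixed at the two ends.

import random

def tournament_select(population, fitness, k=2):
    # tournament selection with tournament size 2: pick k random
    # solutions and keep the one with the best (lowest) travel time
    contenders = random.sample(population, k)
    return min(contenders, key=fitness)

def single_point_crossover(parent_a, parent_b):
    # find a point common to both routes (excluding source/destination)
    # and exchange everything behind it, producing two offspring;
    # offspring may visit a location twice, which the repair procedure
    # described above is responsible for fixing
    common = [p for p in parent_a[1:-1] if p in parent_b[1:-1]]
    if not common:
        return parent_a[:], parent_b[:]  # no common point: copy parents
    cut = random.choice(common)
    i, j = parent_a.index(cut), parent_b.index(cut)
    return parent_a[:i] + parent_b[j:], parent_b[:j] + parent_a[i:]

def one_point_mutation(route):
    # replace everything behind a random point with a random sequence
    # (here: a random permutation of the same tail locations)
    point = random.randrange(1, len(route) - 1)
    tail = route[point:-1]
    random.shuffle(tail)
    return route[:point] + tail + [route[-1]]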
local search algorithm. the main role of this step is to improve the convergence of the search process in order to attain higher-quality solutions [ , ]. in this paper, the utilised local search algorithm is the steepest descent algorithm, a simple variation of the gradient descent algorithm. it starts with a given solution as input and uses a neighbourhood structure to move the search process to other, possibly better, solutions. it uses an "accept only improving" acceptance criterion, whereby only a better solution is used as a new starting point: given s_i, it applies a neighbourhood structure to create s_n, and replaces s_i with s_n if s_n is better. the pseudocode of the steepest descent algorithm is shown in ( ).
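a minimal python sketch of this accept-only-improving loop follows (our own illustration; the neighbourhood structure and the non-improvement limit are assumptions):

def steepest_descent(solution, neighbours, travel_time, max_no_improve=20):
    # accept-only-improving local search: move to the best neighbour
    # whenever it improves the current solution; stop after a number of
    # consecutive non-improving iterations (a parameter of the ma)
    current, current_cost = solution, travel_time(solution)
    no_improve = 0
    while no_improve < max_no_improve:
        candidates = list(neighbours(current))  # e.g. all 2-swaps of locations
        if not candidates:
            break
        best = min(candidates, key=travel_time)
        if travel_time(best) < current_cost:
            current, current_cost = best, travel_time(best)
            no_improve = 0
        else:
            no_improve += 1
    return current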
stopping condition. if the stopping condition is met, the search process terminates and the best solution found is returned. our proposed memetic algorithm stops when the maximum number of generations is reached; otherwise, the search continues from the selection procedure. in this section, the parameter settings of the deep learning model and the proposed algorithm are provided. the parameter values were selected empirically based on our preliminary experiments, where we tested the deep learning model and the proposed algorithm with different parameter combinations, using different values for each parameter. the values of these parameters were determined one by one, by manually changing the value of one parameter while fixing the others, and then recording the best values for all parameters. the final parameter values of the deep learning model and the proposed algorithm are presented in tables and . this section is divided into two subsections: the first examines the performance comparison between the deep learning approach and other machine learning models; the second assesses the benefit of incorporating the proposed components on search performance. in this paper, we implemented a number of machine learning models and compared their results with the deep learning model proposed in this work. we tested the following methods: xgboost, random forest, artificial neural network and multivariate regression. the root-mean-squared error (rmse) was used as the evaluation metric. table shows the results in terms of rmse on the nyc taxi dataset, with the best obtained result highlighted in bold. from the table, it can be seen that our deep prediction model is superior to the other machine learning models in terms of rmse: the lowest rmse is achieved by our approach, followed by random forest, xgboost, multivariate regression and artificial neural network. this good result can be attributed to the fact that deep learning considers all input features and then utilises the best ones through its internal learning process; the other machine learning methods, on the other hand, require a feature engineering step to identify the best subset of features, which is very time consuming and needs a human expert. this section evaluates the effectiveness of the machine learning models and the proposed memetic algorithm. to this end, genetic algorithm (ga) and memetic algorithm (ma) variants with different machine learning models are tested and compared against each other. these are: ga with xgboost, ga with random forest, ga with artificial neural network, ga with multivariate regression, ga with the deep prediction model, ma with xgboost, ma with random forest, ma with artificial neural network, ma with multivariate regression and ma with the deep prediction model. the main aim is to evaluate the benefit of using our deep prediction model and the local search algorithm within ma. to ensure a fair comparison between the algorithms, the initial solution, number of runs, stopping condition and computer resources are the same for all instances. all algorithms were executed for independent runs over all instances. we also used instances with different numbers of locations, which can be seen as small, medium, large and very large. the computational comparisons of the above algorithms are presented in tables and , in terms of the best cost (travel time) and standard deviation (std) for each number of locations, where lower is better; the best results are highlighted in bold. a close scrutiny of these tables reveals that, over all instances, the proposed ma algorithm with the deep learning approach outperforms the other algorithms. we can make the following observations: ga with the deep prediction model obtained better results compared to ga with all other prediction models across all instances; this justifies the benefit of using the deep learning approach to predict the travel time, and of the proposed memetic algorithm to exploit the current search space around a given solution. in this study, we proposed a novel two-stage approach for finding the shortest route under a dynamic environment where travel time changes. firstly, we developed a deep learning method to predict the travel time between the origin and destination; we also added the weather conditions into the input to demonstrate that our approach can predict the travel time more accurately. secondly, a customised memetic algorithm was developed to find the shortest route using the predicted travel time. the effectiveness of the proposed method was evaluated on the new york city taxi dataset. the obtained results lead to our conclusion that the proposed two-stage approach is effective compared with conventional methods, and that the proposed deep prediction model and memetic algorithm are beneficial.
references:
- tensorflow: large-scale machine learning on heterogeneous systems
- deepist: deep image-based spatio-temporal network for travel time estimation
- a comparative analysis of selection schemes used in genetic algorithms
- genetic algorithms and machine learning
- genetic algorithms
- travel time estimation without road networks: an urban morphological layout representation approach
- multi-task representation learning for travel time estimation
- on evolution, search, optimization, genetic algorithms and martial arts: towards memetic algorithms. caltech concurrent computation program, c3p report
- memetic algorithms and memetic computing optimization: a literature review
- solving multiple travelling officers problem with population-based optimization algorithms
- predicting imbalanced taxi and passenger queue contexts in airport
- queue context prediction using taxi driver knowledge
- coact: a framework for context-aware trip planning using active transport
- using big spatial data for planning user mobility
- capra: a contour-based accessible path routing algorithm
- wait time prediction for airport taxis using weighted nearest neighbor regression
- optimising deep belief networks by hyper-heuristic approach
- an evolutionary hyper-heuristic to optimise deep belief networks for image reconstruction
- evolutionary model construction for electricity consumption prediction
- multi-resolution selective ensemble extreme learning machine for electricity consumption prediction
- when will you arrive? estimating travel time based on deep neural networks. in: thirty-second aaai conference on artificial intelligence
- ridesourcing systems: a framework and review
- learning to estimate the travel time
acknowledgements. this work is supported by the smarter cities and suburbs grant from the australian government and the mornington peninsula shire council.
key: cord- -h j q authors: pianini, danilo; mariani, stefano; viroli, mirko; zambonelli, franco title: time-fluid field-based coordination date: - - journal: coordination models and languages doi: . / - - - - _ sha: doc_id: cord_uid: h j q emerging application scenarios, such as cyber-physical systems (cpss), the internet of things (iot), and edge computing, call for coordination approaches addressing openness, self-adaptation, heterogeneity, and deployment agnosticism. field-based coordination is one such approach, promoting the idea of programming system coordination declaratively from a global perspective, in terms of functional manipulation and evolution in "space and time" of distributed data structures, called fields. more specifically, regarding time, in field-based coordination it is assumed that local activities in each device, called computational rounds, are regulated by a fixed clock, typically, a fair and unsynchronized distributed scheduler. in this work, we challenge this assumption, and propose an alternative approach where the round execution scheduling is naturally programmed along with the usual coordination specification, namely, in terms of a field of causal relations dictating what is the notion of causality (why and when a round has to be locally scheduled) and how it should change across time and space. this abstraction over the traditional view on global time allows us to express what we call "time-fluid" coordination, where causality can be finely tuned to select the event triggers to react to, so as to achieve improved balance between performance (system reactivity) and cost (usage of computational resources). we propose an implementation in the aggregate computing framework, and evaluate it via simulation on a case study. emerging application scenarios, such as the internet of things (iot), cyber-physical systems (cpss), and edge computing, call for software design approaches addressing openness, heterogeneity, self-adaptation, and deployment agnosticism [ ]. to effectively address this issue, researchers strive to define increasingly higher-level concepts, reducing the "abstraction gap" with the problems at hand, e.g., by designing new languages and paradigms.
in the context of coordination models and languages, field-based coordination is one such approach [ , , , , , ]. in spite of its many variants and implementations, field-based coordination is rooted in the idea of programming system coordination declaratively and from a global perspective, in terms of distributed data structures called (computational) fields, which span the entire deployment in space (each device holds a value) and time (each device continuously produces such values). regarding time, which is the focus of this paper, field-based coordination typically abstracts from it in two ways: (i) when a specific notion of local time is needed, this is accessed through a sensor as for any other environmental variable; and (ii) a specification is actually interpreted as a small computation chunk to be carried on in computation rounds. in each round a device: (i) sleeps for some time; (ii) gathers information about the state of the computation in the previous round, messages received by neighbors while sleeping, and contextual information (i.e. sensor readings); and (iii) uses such data to evaluate the coordination specification, storing the state information in memory, producing a value output, and sending relevant information to neighbors. so far, field-based coordination approaches have considered computational rounds as being regulated by an externally imposed, fixed distributed clock: typically, a fair and unsynchronized distributed scheduler. this assumption, however, has a number of consequences and limitations, both philosophical and pragmatic, which this paper aims to address. under a philosophical point of view, it follows a pre-relativity view of time that meets general human perception, i.e., where time is absolute and independent of the actual dynamics of events. this hardly fits with more modern views connecting time with a deeper concept of causality [ ], seeing it as only meaningful relative to the existence of events, as in relational interpretations of space-time [ ], or even as a mere derived concept introduced by our cognition [ ], as in loop quantum gravity [ ].
under a practical point of view, consequences on field-based coordination are mixed. the key practical advantage is simplicity. first, the designer can abstract from time, leaving the scheduling issue to the underlying platform. second, the platform itself can simply impose local schedulers statically, using fixed frequencies that at most depend on the device's computational power or energy requirements. third, the execution in proactive rounds allows a device to discard messages received a few rounds before the current one, thus considering non-proactive senders to have abandoned the neighborhood, and simply modeling the state of communication by maintaining the most recent message received from each neighbor. however, there is a price to pay for such a simple approach. the first is that "stability" of the computation, namely, situations in which the field will not change after a round execution, is ignored; as a consequence, sometimes "unnecessary" computations are performed, consuming resources (both energy and bandwidth capacity), and thus reducing the efficiency of the system. symmetrically, there is a potential responsiveness issue: some computations may need to be executed more quickly under some circumstances. for instance, consider a crowd monitoring and steering system for urban mass events as the one exemplified in [ ]: in case the measured density of people gets dangerous, a more frequent evaluation of the steering advice field is likely to provide more precise and timely advice. similar considerations apply, for example, to the area of landslide monitoring [ ], where long intervals of immobility are interspersed with sudden slope movements: the sensor sampling rate can and should be low most of the time, but it needs to be promptly increased on slope changes. this generally suggests a key unexpressed potential for field-based computation: the ability to provide improved balance between performance (system reactivity) and cost (usage of computational resources). for instance, the crowd monitoring and landslide monitoring systems should ideally slow down (possibly, halt entirely) the evaluation in case of sparse crowd density or absence of surface movements, respectively, and they should start being more and more responsive with growing crowd densities or in case of landslide activation. the general idea that the distribution of round executions can dynamically depend on the outcome of the computation itself can be captured in field-based coordination by modeling time by a causality field, namely, a field programmable along with (and hence intertwined with) the usual coordination specification, dictating (at each point in space-time) what are the triggers whose occurrence should correspond to the execution of computation rounds. programming causality along with coordination leads us to a notion of time-fluid coordination, where it is possible to flexibly control the balance between performance and cost of system execution. accordingly, in this work we discuss a causality-driven interpretation of field-based coordination, proposing an integration with the field calculus [ ] with the goal of evaluating a model for time-fluid, field-based coordination. in practice, we assume computations are not driven by time-based rounds, but by perceivable local event triggers provided by the platform (hardware/software stack) executing the aggregate program, such as messages received, changes in sensor values, and time passing by. the aggregate program specification itself, then, may affect the scheduling of subsequent computations through policies (expressed in the same language) based on such triggers. the contribution of this work can be summarized under three points of view. first, the proposed model enriches the coordination abstraction of field-based coordination with the possibility to explicitly and possibly reactively program the scheduling of coordination actions; second, it enables a functional description of causality and observability, since manipulation of the interaction frequency among the single components of the coordinated system reflects in changes in how causal events are perceived, and actions are taken in response to event triggers; third, the most immediate practical implication of time-fluid coordination, when compared to a traditional time-driven approach, is improved efficiency, intended as improved responsiveness at the same resource cost. the remainder of this work is as follows: sect. frames this work with respect to the existing literature on the topic; sect. introduces the proposed time-fluid model and discusses its implications; sect.
presents a prototype implementation in the framework of aggregate computing, showing examples and evaluating the potential practical implications via simulation; finally, sect. discusses future directions and concludes the work. time and synchronization have always been key issues in the area of distributed and pervasive computing systems. in general, in distributed systems, the absence of a globally shared physical clock among nodes makes it impossible to rely on absolute notions of time. logical clocks are hence used instead [ ], realizing a sort of causally-driven notion of time, in which the "passing time" of a distributed computation (that is, the ticks of logical clocks) directly expresses causal relations between distributed events. as a consequence, any observation of a distributed computation that respects such causal relations, independently of the relative speeds of processes, is a consistent one [ ]. our proposal absorbs these foundational lessons, and brings them forward to consider the strict relations between the spatial dimension and the temporal dimension that situated aggregate computations have to account for. in the area of sensor networks, acquiring a (as accurate as possible) globally shared notion of time is of fundamental importance [ ], to properly capture snapshots of the distributed phenomena under observation. however, global synchronization also serves energy saving purposes: when not monitoring or not communicating, the nodes of the network should go to sleep to avoid energy waste, but this implies that to exchange monitoring information with each other they must periodically wake up in a synchronized way. in most existing proposals, though, this is done in awakening and communication rounds of fixed duration, which makes it impossible to adapt to the actual dynamics of the phenomena under observation. several proposals exist for adaptive synchronization in wireless sensor networks [ , , ], dynamically changing the sampling frequency (and hence the frequency of communication rounds) so as to adapt to the dynamics of the observed phenomena. for instance, in the case of crowd monitoring systems, it is likely that people (e.g., during an event) stay nearly immobile for most of the time, then suddenly start moving (e.g., at the end of the event). similarly, in the area of landslide monitoring, the situation of a slope is stable for most of the time, with periodic occurrences of (sometimes very fast) slope movements. in these cases, waking up the nodes of the network periodically would not make any sense and would waste a lot of energy; nodes should rather sleep most of the time, and wake up only upon detectable slope movements. such adaptive sampling approaches challenge the underlying notion of time, but they tend to focus on the temporal dimension only (i.e., adapting to the dynamics of a phenomenon as locally perceived by the nodes). our approach goes further, by making it possible to adapt in time and space as well: not only to how fast a phenomenon changes in time, but to how fast it propagates and induces causal effects in space. for instance, in the case of landslide monitoring or crowd monitoring, this means adapting not only to the dynamics of locally perceived movements, but also to the overall propagation speed of such movements across the monitored area.
besides sensor networks, the issue of adaptive sampling has recently landed in the broader area of iot systems and applications [ ], again with the primary goal of optimizing the energy consumption of devices while not losing relevant phenomena under observation. however, unlike what is promoted in sensor networks, such optimizations typically take place in a centralized (cloud) [ ] or semi-decentralized (fog) way [ ], which again disregards spatial issues and the strict space-time relations of phenomena. since coordination models and languages typically address a crosscutting concern of distributed systems, they have historically been concerned with the notion of time in a variety of ways. for instance, time is addressed in space-based coordination since javaspaces [ ], and in the corresponding foundational calculi for time-based linda [ , ]: the general idea is to equip tuples and query operations with timeouts, which can be interpreted either in terms of global or local clocks. the problem of abstracting the notion of time became crucial when coordination models started addressing self-adaptive systems, and hence openness and reactivity. in [ ], it is suggested that a tuple may eventually fade, with a rate that depends on a usefulness concept measuring how many new operations are related to such a tuple. in the biochemical tuple-space model [ ], tuples have a time-dynamic "concentration" driven by stochastic coordination rules embedded in the data-space. field-based coordination emerged as a coordination paradigm for self-adaptive systems focusing more on "space" rather than "time", in works such as tota [ ], the field calculus [ , ], and fixpoint-based computational fields [ ]. however, the need for dealing with time is a deep consequence of dealing with space, since propagation in space necessarily impacts "evolution". these approaches tend to abstract from the scheduling dynamics of local field evolution, in various ways. in tota, the update model for distributed "fields of tuples" is an asynchronous event-based one: anytime a change in network connectivity is detected by a node, the tota middleware triggers an update of the distributed field structures so as to immediately reflect the new situation. in the field calculus and aggregate computing [ ], as already mentioned, an external, proactive clock is typically used. in [ ] this issue is mostly neglected, since the focus is on the "eventual behavior", namely the stabilized configuration of a field, as in [ ]. for all these models, the scheduling of updates is always transparent to the application/programming level, so the application designer cannot intervene on coordination so as to possibly optimize communication, energy expenditure, and reactivity.
in this section, we introduce a model for time-fluid field-based coordination. the core idea of our proposed approach is to leverage field-based coordination itself for maintaining a causality field that drives the dynamics of computation of the application-level fields. our discussion is in principle applicable to any field-based coordination framework; however, for the sake of clarity, we here focus on the field calculus [ ]. considering a field calculus program p, each of its rounds can be thought of as consuming: (i) a set of valid messages received from neighbors, m ∈ m; and (ii) some contextual information s ∈ s, usually obtained via so-called sensors. the platform or middleware in charge of executing field calculus programs has to decide when to launch the next evaluation round of p, also providing valid values for m and s; note that in general the platform could execute many programs concurrently. in order to support causality-driven coordination, we first require the platform to be able to reactively respond to local event triggers, each representing some kind of change in the values of m or s, e.g., "a new message has arrived", "a given sensor provides a new value", or "one second has passed". we denote by t the set of all possible local event triggers the platform can manage. then, we propose to associate to every field calculus program p a guard policy g (policy in short), which itself denotes a field computation, and can hence be written in the same language as p, as will be detailed in the next section. more specifically, whenever evaluated across space and time, the field computation of a policy can be locally modeled as a function g : m × s → {0, 1} × p(t), where p(t) denotes the powerset of t. namely, a policy has the same input as any field computation, but specifically returns a pair of a boolean b ∈ {0, 1} and a set of event triggers t_c ⊆ t. t_c is essentially the set of "causes": g will get evaluated next time by the platform only when a new event trigger is detected that belongs to t_c. such an evaluation then produces the second output b: when this is true (value 1), it means that the program p associated with the policy must be evaluated as soon as possible. on system bootstrap, every policy gets evaluated for the first time. in the proposed framework, hence, computations are caused by a field of event triggers (the causality field) computed by a policy, which is used to (i) decide whether to run the actual application round immediately, and (ii) decide which event triggers will cause a re-evaluation of the policy itself. this mechanism thus introduces a sort of guard mediating between the evolution of the causality field and the actual execution of application rounds, allowing for fine control over the actual temporal dynamics, as exemplified in sect. . crucially, the ability to sense context (namely, the contents of s) and to express event triggers (namely, the possible contents of t) has a large impact on the expressivity of the proposed model. for the remainder of this work, we will assume the platform or middleware hosting a field computation to provide the following set of features, which we deem reasonable for any such platform; this is for the sake of practical expressiveness, since even a small set of event triggers could be of benefit. first, t must include changes to any value of s; this allows the computation to be reactive to changes in the device's perception or, symmetrically speaking, makes such changes the cause of the computation. second, timers can be easily modeled as special boolean sensors flipping their value from false to true, making the classic time-driven approach a special case of the proposed framework. third, which specific event trigger caused the last computation should be available in s, accessible through an appropriate sensor. fourth, the most recent result of any field computation p that should affect the policy must be available in s; this is crucial for field computations to depend on each other or, in other words, for a field computation to be the cause of another, possibly more intensive, field computation.
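the interplay between a policy g and its program p can be summarized by the following python sketch of a hypothetical platform loop; all names are our own illustration, not the actual api of the prototype described later.

def platform_loop(g, p, context, wait_for_any, broadcast):
    # g(messages, sensors) -> (run_now, causes): the guard policy;
    # p(messages, sensors) -> value: the field calculus program;
    # context() returns the current (messages, sensors) pair;
    # wait_for_any(causes) blocks until a trigger in causes fires;
    # broadcast(value) ships the round result to neighbors.
    run_now, causes = g(*context())      # every policy is evaluated at bootstrap
    while True:
        if run_now:                      # b = 1: run p as soon as possible
            broadcast(p(*context()))
        fired = wait_for_any(causes)     # sleep until a cause in t_c occurs;
        # 'fired' is exposed to the next evaluation through a sensor in s
        run_now, causes = g(*context())  # re-evaluate the policy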
for instance, consider the crowd sensing and steering application mentioned in sect. to be decomposed into two sub-field computations: the former, lightweight, computing the local crowd density under a policy triggering the computation anytime a presence sensor counts a different number of people in the monitored area; the latter, resource intensive, computing a crowd steering field guiding people out of over-crowded areas, whose policy can leverage the value of the density field to raise the evaluation frequency when the situation gets potentially dangerous. fifth, the conclusion of a round of any field program is a valid source of event triggers, namely, t also contains a boolean indicating whether a field program of interest completed its round. programming the space-time and propagating causality. as soon as we let the application affect its own execution policy, we are effectively programming the time (instead of in time, as is typically done in field-based coordination): evaluating the field computation at different frequencies actually amounts to modulating the perception of time from the application standpoint. for instance, sensors' values may be sampled more often or more sparsely, affecting the perception the application has of its operating environment along the time scale. in turn, as stemming from the distributed nature of the communicating system at hand, such an adaptation along time immediately causes adaptation across space too, by affecting the communication rate of devices, and hence the rate at which events and information spread across the network. it is worth emphasizing that this is a consequence of embracing a notion of time founded on causality: in fact, while we are aware of computational models adaptive to the time fabric, as mentioned in sect. , we are not aware of any model allowing the perception of time to be programmed at the application level. adapting to causality. being able to program the space-time fabric as described above necessarily requires the capability of being aware of the space-time fabric in the first place. when the notion of space-time is crafted upon the notion of causality between events, such a form of awareness translates to awareness of the dynamics of causal relations among events. under this perspective, the application is no longer adapting to the passage of time and the extent of space, but to the temporal and spatial distribution of causal relations among events. in other words, the application is able to "chase" events not only as they travel across time and space, but also as their "traveling speed" changes. for instance, whenever in a given region of space some event happens more frequently, devices operating in the same area may compute more frequently as well, increasing the rate of communications among devices in that region, thus leading to an overall better recognition of the quickening dynamics of the phenomenon under observation. controlling situatedness. the ability to control both the above mentioned capabilities at the application level enables unprecedented fine control over the degree of situatedness exhibited by the overall system, along two dimensions: the ability to decide the granularity at which event triggers should be perceived, and the ability to decide how to adapt to changes in event dynamics. in modern distributed and pervasive systems, the ability to quickly react to changes in environment dynamics is of paramount importance [ ].
for instance, in the mentioned case of landslide monitoring, as anomalies in measurement increase in frequency, intensity, and geographical coverage, the monitoring application should match the pace of the accelerating dynamics. on the practical side, associating field computations with programmable scheduling policies brings both advantages and risks (as most extensions to expressiveness do). one important gain in expressiveness is the ability to let a field computation affect the scheduling policy of other field computations, as in the examples of crowd steering or landslide monitoring: the denser some regions get, the faster the steering field will be computed; the more intense the vibrations of the ground get, the more frequently monitoring is performed. on the other hand, this opens the door to circular dependencies among field computations and their scheduling policies, which can possibly lead to deadlocks or livelocks. therefore, it is good practice in time-fluid field coordination systems that at least one field computation depends solely on local event triggers, and that dependencies among diverse field computations are carefully crafted and possibly enriched with local control. pure reactivity and its limitations. technically, replacing a scheduler guided by a fixed clock with one triggering computations as a consequence of events turns the system from time-driven to event-driven. in principle, this makes the system purely reactive: the system is idle unless some event trigger happens. depending on the application at hand, this may be a blessing or a curse: since pro-activity is lost, the system is chained to the dynamics of event triggers, and cannot act of its own will. of course, it is easy to overcome such a limitation: assuming a clock is available in the pool of event triggers makes pro-activity a particular case of reactivity, where the tick of the clock dictates the granularity. furthermore, since policies allow the specification of a set of event triggers causing re-evaluation, the designer can always devise a "fall-back" plan relying on the expiration of a timer: for instance, it is possible (and reasonable) to express a policy such as "trigger as soon as the event of interest happens, or timer τ expires, whichever comes first". the proposed model has been prototypically reified within the framework of aggregate computing [ ]. in particular, we leveraged the alchemist simulator's [ ] pre-existing support for the protelis programming language [ ] and the scafi scala dsl [ ], and we produced a modified prototype platform supporting the definition of policies using the same aggregate programming language used for the actual software specification. the framework has been open sourced and publicly released, and it has been exercised in a paradigmatic experiment. in this section we first briefly provide details about the protelis programming language, which we use to showcase the expressive power of the proposed system by examples; then we present an experiment showing how the time-fluid architecture may allow for improved precision as well as reduced resource use. this protelis language primer is intended as a quick reference for understanding the subsequent examples. a full treatment of the language is out of the scope of this work; only the set of features used in this paper will be introduced. protelis is a purely functional, higher-order, interpreted, and dynamically typed aggregate programming language interoperable with java.
programs are written in modules, and are composed of any number of function definitions and an optional main script. module some:namespace creates a new module whose fully qualified name is some:namespace. modules' functions can be imported locally using the import keyword followed by the fully qualified module name. the same keyword can be used to import java members, with org.protelis.builtins, java.lang.Math, and java.lang.Double being pre-imported. similarly to other dynamic languages such as ruby and python, in protelis top-level code outside any function is considered to be the main script. def f(a, b) { code } defines a new function named f with two arguments a and b, which executes all the expressions in code upon invocation, returning the value of the last one. in case the function has a single expression, a shorter, scala/kotlin-style syntax is allowed: def f(a, b) = expression. the rep (v <- initial) { code } expression enables stateful computation by associating v with either the previous result of the rep evaluation or with the value of the initial expression; the code block is then evaluated, and its result is returned (and used as the value for v in the subsequent round). the if (condition) { then } else { otherwise } expression requires condition to evaluate to a boolean value; if such a value is true, the then block is evaluated and the value of its last expression is returned, while if the value of condition is false, the otherwise code block gets executed and the value of its last expression is returned. notably, rep expressions that find themselves in a non-evaluated branch lose their previously computed state, hence restarting the state computation from the initial value. this behavior is peculiar to the field calculus semantics, where the branching construct is lifted to a distributed operator with the meaning of domain segmentation [ ]. the let v = expression statement adds a variable named v to the local namespace, associating it with the value of the expression evaluation. square brackets delimit tuple literals: [] evaluates to an empty tuple, and [ , , "foo"] to a tuple of three elements with two numbers and a string. methods can be invoked with the same syntax as java: obj.method(a, b) tries to invoke the method member on the result of the evaluation of expression obj, passing the results of the evaluation of expressions a and b as arguments. the special keywords self and env allow access to contextual information. self exposes sensors via direct method call (typically leveraged for system access), while env allows dynamic access to sensors by name (hence supporting more dynamic contexts). anonymous functions are written with a syntax reminiscent of kotlin and groovy: { a, b -> code } evaluates to an anonymous function with two parameters and code as body. protelis also shares with kotlin the trailing lambda convention: if the last parameter of a function call is an anonymous function, it can be placed outside the parentheses; if the anonymous function is the only argument to that call, the parentheses can be omitted entirely. the following calls are in fact equivalent: [ , ].map { a -> a + } // returns [ , ]
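since rep is the primer's only stateful construct and underpins every example that follows, the fragment below models its round-by-round semantics in scala (an analogy only, under the assumption that rounds can be rendered as successive iterations; protelis itself evaluates rep once per device round).

```scala
object RepModel {
  // rep (v <- initial) { body } threads state through successive rounds:
  // the body receives the previous result (the initial value on the first
  // round) and returns the value for the current round
  def rep[A](initial: A)(body: A => A): Iterator[A] =
    Iterator.iterate(initial)(body).drop(1)

  // e.g. a per-device round counter, akin to rep (count <- 0) { count + 1 }
  val firstThreeRounds: List[Int] = rep(0)(_ + 1).take(3).toList // List(1, 2, 3)
}
```

note that this analogy does not capture the branch-reset behavior described above: in the field calculus, a rep inside a non-evaluated branch restarts from its initial value.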
in this section we exemplify how the proposed approach allows a single field-based coordination language to be used for expressing both p and g. in the following discussion, event triggers provided by the platform (i.e., members of t) will be highlighted in green. in our first example, we show a policy recreating the round-based, classic execution model, thus demonstrating how this approach supersedes the previous one. consider the following protelis functions, which detect changes in a value: where current is the current value of the signal being tracked, and condition is a function comparing the current value with the previously memorized one and returning true if the new value should replace the old one. function changed is the simplest use of updated, returning true whenever the input signal current changes. in the showcased code, the second argument to updated is provided using the trailing lambda syntax (see sect. ). timer-based event triggers can be leveraged for writing a policy sensitive to platform timeouts. for instance, in the following code, we write a policy that gets re-evaluated every second (of all the possible event triggers in t, we only return timer( )), and whose associated program runs if at least one second has passed since the last round. finally, we articulate a case in which the result of an aggregate computation is the cause for another computation to get triggered. consider the crowd steering system mentioned in sect. : we would like to update the crowd steering field only when there is a noticeable change in the perceived density of the surroundings. to do so, we first write a protelis program leveraging the scr pattern [ ] to partition space into regions meters wide and compute the average crowd density within them. functions s (network partitioning at the desired distance), summarize (aggregation of data over a spanning tree and partition-wide broadcast of the result), and distanceto (computation of distance) come from the protelis-lang library shipped with protelis [ ]. its execution policy could be, for instance, reactive to updates from neighbors and to changes in a "people counting sensor" reifying the number of people perceived by this device (e.g., via a camera). now that the density computation is in place, the platform reifies its final result as a local sensor, which can in turn be used to drive the steering field computation with a policy in which a low-pass filter, exponentialbackoff, avoids running the program in case of spikes (e.g., due to the density computation re-stabilizing). note that access to the density computation is realized by accessing a sensor with the same name as the module containing the density evaluation program, thus reifying a causal chain between field computations; minimal sketches of these policies are given below.
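the protelis listings for these examples did not survive extraction; the following scala sketch models the described behaviors under explicit assumptions (the names updated, changed, and the low-pass gate mirror the text, but the signatures and the constants alpha and epsilon are guesses rather than the protelis-lang api).

```scala
object PolicyExamples {
  // `updated` memorizes the previous value of a tracked signal and applies a
  // user-supplied condition deciding whether the new value replaces it;
  // `changed` is its simplest use, firing whenever the signal differs
  final class Updated[A](initial: A) {
    private var memorized: A = initial
    def apply(current: A)(condition: (A, A) => Boolean): Boolean = {
      val replace = condition(memorized, current)
      if (replace) memorized = current
      replace
    }
  }
  def changed[A](tracker: Updated[A], current: A): Boolean =
    tracker(current)(_ != _)

  // timer-gated policy: re-evaluated only on the timer trigger, it lets the
  // associated program run if at least one second passed since the last round
  def timerPolicy(lastRound: Double, now: Double): (Boolean, Set[String]) =
    (now - lastRound >= 1.0, Set("timer"))

  // steering-field gate: a low-pass filter (standing in for the text's
  // exponentialBackoff; alpha and epsilon are hypothetical) absorbs spikes,
  // so the program runs only on noticeable changes in the smoothed density
  final class BackoffGate(alpha: Double, epsilon: Double) {
    private var smoothed = 0.0
    private var atLastRun = 0.0
    def apply(density: Double): (Boolean, Set[String]) = {
      smoothed = alpha * smoothed + (1 - alpha) * density
      val run = math.abs(smoothed - atLastRun) > epsilon
      if (run) atLastRun = smoothed
      (run, Set("density"))
    }
  }
}
```

for instance, a gate built as new BackoffGate(alpha = 0.9, epsilon = 0.5) (values hypothetical) returns, for each density reading, the (b, causes) pair consumed by the scheduler sketched earlier.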
we exercise our prototype by simulating a distance computation over a network of situated devices. we consider a × irregular grid of devices, each located randomly in a disc centered on the corresponding position of a regular grid, and a single mobile node positioned at the top left of the network, free to move at a constant speed v from left to right. once the mobile device leaves the network, exiting on the right side, another identical one enters the network from the left-hand side. mobile devices and the leftmost device at the bottom are "sources", and the goal for each device is to estimate the distance to the closest source. computing the distance from a source without a central coordinator in arbitrary networks is a representative application of aggregate computing, for which several implementations exist [ ]. in this work, since the goal is exploring the behavior of the platform rather than the efficiency of the algorithm, we use an adaptive bellman-ford [ ], even though it is known not to be the most efficient implementation for the task at hand [ ]. we choose to compute the distance from a source (a gradient) as our reference algorithm because it is one of the most common building blocks over which other, more elaborate forms of coordination get built [ , ]. we expect that an improvement in performance on this simple algorithm may lead to a cascading effect on the plethora [ ] of algorithms based on it, hence our choice of it as a candidate for this experiment. we let devices compute the same aggregate program with diverse policies. the baseline for assessing our proposal is the classic approach to aggregate computing: time-driven, unsynchronized, and fair scheduling of rounds set at hz. we compare the classic approach with time-fluid versions whose policy is: run if a new message is received or an old message timed out, and the last round was at least f⁻¹ seconds ago. the latter clause sets an upper bound on the number of event triggers a device can react to, preventing well-known limit situations such as the "rising value problem" of the adaptive bellman-ford algorithm [ ] used in this work; a sketch of both the gradient round and this gating policy is given below. we run several versions of the reactive algorithm with diverse values for f, and we also vary v. for each combination of f and v, we perform simulations with different random seeds, which also alter the irregular grid shape. we measure the overall number of executed rounds, which is a proxy metric for resource consumption (both network and energy), and the root mean square error of each device. the simulation has been implemented in alchemist [ ], writing the aggregate programs in protelis [ ]. data has been processed with xarray [ ], and charts have been produced via matplotlib [ ]. for the sake of reproducibility, the whole experiment has been automated, documented, and open sourced. intuitively, devices situated closer to the static source than to the trajectory of mobile sources should be able to execute less often. figure confirms such intuition: there is a clear border separating devices always closer to the static source, which execute much less often, from those that at times are instead closer to the mobile source. figure shows the precision of the computation for diverse values of v and f, compared to the baseline: the performance of the baseline is equivalent to that of the time-fluid version with f = hz. figure depicts the cost to be paid for the algorithm execution. the causal version of the computation has a large advantage when there is nothing to recompute: if the mobile device stands still and the gradient value does not need to be recomputed, the computation is fundamentally halted. when v = , the resource consumption grows; however, compared to the classic version, we can sustain f = . hz with the same resource consumption. considering that the performance of the classic version gets matched at f = hz, and cost gets equalized at f = . hz, when hz < f < . hz we achieve both better performance and lower cost. in conclusion, the time-fluid version provides a higher performance/cost ratio. fig. : root mean squared error for diverse v. when the network is entirely static (top left), raising f has a minimal impact on the overall cost of execution, as the network stabilizes and recomputes only in case of timeouts. in dynamic cases, instead, higher f values come with a cost to pay.
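for concreteness, the following scala sketch (illustrative only; the experiment's actual programs are written in protelis, and all names here are assumptions) shows one synchronous relaxation round of the adaptive bellman-ford gradient and the time-fluid gating policy just described.

```scala
object GradientSketch {
  // one relaxation round of a Bellman-Ford-style gradient: sources clamp
  // their estimate to zero; every other device takes the minimum over
  // neighbours of (neighbour estimate + distance to that neighbour)
  def gradientRound(
      isSource: Boolean,
      neighbours: List[(Double, Double)] // (estimate, distanceToNeighbour)
  ): Double =
    if (isSource) 0.0
    else neighbours
      .map { case (estimate, distance) => estimate + distance }
      .minOption // Scala 2.13+; devices with no neighbours report infinity
      .getOrElse(Double.PositiveInfinity)

  // time-fluid gate: re-evaluated when a new message arrives or an old one
  // times out, it runs the program only if at least 1/f seconds have passed
  // since the last round, bounding the reaction rate
  def timeFluidPolicy(f: Double, lastRound: Double, now: Double): (Boolean, Set[String]) =
    (now - lastRound >= 1.0 / f, Set("newMessage", "messageTimeout"))
}
```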
however, in the proposed experiment, the cost of the baseline algorithm matches the cost of the time-fluid version with f = . hz, which in turn has lower error (as shown in fig. ). in this work we introduced a different concept of time for field-based coordination systems. inspired by causal models of space-time in physics, we introduced the concept of a field of causality for field computations, intertwining the usual coordination specification with its own actual evaluation schedule. we introduced a model that allows expressing the field of causality with the coordination language itself, and discussed the impact of its application. a model prototype was then implemented in the alchemist simulation platform, supporting the execution of the aggregate computing field-based coordination language protelis and demonstrating the feasibility of the approach. finally, the prototype was exercised in a paradigmatic experiment, highlighting the practical relevance of the approach by showing how it can improve efficiency, intended as precision in field evaluation relative to resource consumption. future work will be devoted to providing more in-depth insights by evaluating the impact of the approach in realistic setups, both in terms of scenarios (e.g., using real-world data) and evaluation precision (e.g., by leveraging network simulators such as omnet++ or ns). moreover, further work is required both for the current prototype to become a full-fledged implementation, and for the model to be implemented in practical field-based coordination middlewares.
towards an adaptive synchronization policy for wireless sensor networks
optimal single-path information propagation in gradient-based algorithms
a higher-order calculus of computational fields
consistent global states of distributed systems: fundamental concepts and mechanisms
aggregate programming for the internet of things
process calculi for coordination: from linda to javaspaces
modelling and simulation of opportunistic iot services with aggregate computing
self-organising coordination regions: a pattern for edge computing
a lyapunov analysis for the robust stability of an adaptive bellman-ford algorithm
description and composition of bio-inspired design patterns: a complete overview
towards a foundational api for resilient distributed systems design
javaspaces: principles, patterns, and practice
adaptive sensing scheme using naive bayes classification for environment monitoring with drone
xarray: n-d labeled arrays and datasets in python
matplotlib: a 2d graphics environment
decentralized control of adaptive sampling in wireless sensor networks
time, clocks, and the ordering of events in a distributed system
monitoring of iot data for reducing network traffic
software engineering for self-adaptive systems: research challenges in the provision of assurances
on the expressiveness of timed coordination via shared dataspaces
asynchronous distributed execution of fixpoint-based computational fields
nature of time and causality in physics
field-based coordination for pervasive multiagent systems
programming pervasive and mobile computing applications: the tota approach
the fading concept in tuple-space systems
chemical-oriented simulation of computational systems with alchemist
protelis: practical aggregate programming
landslide monitoring with sensor networks: experiences and lessons learnt from a real-world deployment
quantum mechanics without time: a model
relational quantum mechanics
loop quantum gravity
pervasive social context: taxonomy and survey
clock synchronization for wireless sensor networks: a survey
optimized on-demand data streaming from sensor nodes
low-cost adaptive monitoring techniques for the internet of things
engineering resilient collective adaptive systems by self-stabilisation
from field-based coordination to aggregate computing
biochemical tuple spaces for self-organising coordination
simulating large-scale aggregate mass with alchemist and scala
linda in space-time: an adaptive coordination model for mobile ad-hoc environments
acknowledgements. this work has been supported by the miur prin project "fluidware". the authors want to thank dr. lorenzo monti for the fruitful discussion on causality, the shape and fabric of space and time, and physical models independent of time.
key: cord- -z bt hp authors: grote, gudela; pfrombeck, julian title: uncertainty in aging and lifespan research: covid- as catalyst for addressing the elephant in the room date: - - journal: work aging retire doi: . /workar/waaa sha: doc_id: cord_uid: z bt hp
uncertainty is at the center of debates on how best to cope with the covid- pandemic. in our exploration of the role of uncertainty in current aging and lifespan research, we build on an uncertainty regulation framework that includes both the reduction and the creation of uncertainty as viable self-regulatory processes. in particular, we propose that future time perspective, a key component in models of successful aging, should be reconceptualized in terms of uncertainty regulation. we argue that by proactively regulating the amount of uncertainty one is exposed to, individuals' future time perspective can be altered. we show how extant research might be (re)interpreted based on these considerations and suggest directions for future research, challenging a number of implicit assumptions about how age and uncertainty are interlinked. we close with some practical implications for individuals and organizations for managing the covid- crisis. the covid- pandemic has painfully exposed our global vulnerability. we are all called upon to cope with the ensuing uncertainties. it might be considered a particularly inappropriate time to discuss potential benefits of uncertainty. however, there is also much debate about what we can learn from this massive change in the way we live and work for a more positive future. we follow this reasoning and propose a model of uncertainty regulation which includes mechanisms that both reduce and create uncertainty. we use this as a basis for extending research and practice on successful aging. notably, some age-related research has already exposed positive attitudes toward uncertainty by examining young children's curiosity and desire to learn (e.g., kidd & hayden, ; oudeyer & smith, ) and older adults' willingness to proactively approach new identities as part of their transition to retirement (e.g., bordia, read, & bordia, ). there also appears to be a direct positive link between age and tolerance for uncertainty (basevitz, pushkar, chaikelson, conway, & dalton, ; laguerre & barnes-farrell, ). we take these findings as inspiration for exploring the role of uncertainty in aging and lifespan research. consistent with griffin and grote's (in press) uncertainty regulation model, we consider that individuals may not always reduce uncertainty, but rather regulate uncertainty towards an optimal level, which contributes to fostering a more positive future time perspective as a crucial resource for successful aging.
we aim to delineate new avenues for aging and lifespan research regarding uncertainty and to promote a fuller understanding of individuals' uncertainty management, which is particularly relevant in difficult times such as the current pandemic. adopting such a fresh look at uncertainty may raise awareness of opportunities amidst the many personal, social, and economic threats. in this commentary, we start with a brief introduction to griffin and grote's (in press) uncertainty regulation model. we then discuss future time perspective as a key component of self-regulatory processes in aging and position it within an uncertainty regulation framework. we show how extant research that touches on the age-uncertainty relationship might be (re)interpreted based on this framework. we propose directions for future research, challenging a number of implicit assumptions about how age and uncertainty are interlinked. we close with a few practical considerations for individuals and organizations that are relevant for managing the covid- crisis but should also be applicable in happier times. griffin and grote (in press) postulate that individuals not only self-regulate their efforts to achieve certain goals at work but also regulate the amount of uncertainty they are exposed to in this process. they outline a work performance model in which individuals align their preferred level of endogenous uncertainty (that is, uncertainty over which individuals have immediate control) with the requirements for uncertainty management inherent in the task they are trying to accomplish. griffin and grote argue that the appraisal of uncertainty can create an aversive and/or a desirable state depending on individuals' predispositions and situational demands. based on that appraisal, individuals reduce or increase uncertainty in a self-regulatory feedback loop. whereas uncertainty reduction aims to (re)establish predictability and control, uncertainty creation is founded in the desire to enhance learning opportunities and to expand one's vision of possible futures well beyond the currently known, in an act of expansive agency. in griffin and grote's model, a second self-regulatory loop is postulated through which individuals align the regulation of endogenous uncertainty with the requirements stemming from exogenous uncertainty. exogenous uncertainty is largely determined by the broader environment, such as uncertainties at the macroeconomic level caused by the current pandemic. in the following, we focus on the self-regulatory cycles of managing endogenous uncertainty. especially in relation to future time perspective, we see endogenous uncertainty regulation at the center of intraindividual processes linked to individuals' aging experience. toward the end of this commentary, we broaden our perspective and discuss how individuals' uncertainty regulation may help in managing some of the exogenous challenges related to the covid- crisis. griffin and grote (in press) developed their model of uncertainty regulation in line with the fundamental principles of self-regulation.
most theories in lifespan development, on which aging research is based, also build on self-regulation as the core process through which individuals plan and implement action in pursuit of valued goals; examples are the motivational theory of lifespan development (heckhausen, wrosch, & schulz, ), the theory of selection, optimization, and compensation (baltes, ; baltes & baltes, ), and socioemotional selectivity theory (carstensen, ; carstensen, isaacowitz, & charles, ). opportunities for self-determined goal striving and sufficient control and self-efficacy are key for successful personal development and aging. however, such opportunities and the capabilities to capitalize on them are assumed to dwindle across the life course, beginning in midlife with the proverbial midlife crisis (heckhausen, ) and declining more rapidly in old age (heckhausen et al., ). a variety of compensatory processes to cope with the loss of opportunities and primary control have been theorized, which center on individuals' adaptive capacity and a shift from achievement goals to a search for emotionally rewarding experiences and larger meaning. for example, the motivational theory of lifespan development suggests an optimization of primary and secondary control strategies by means of adaptive goal engagement and disengagement (heckhausen et al., ), where primary control aims at changing external conditions to better fit personal needs and interests and secondary control does the opposite, that is, changes the self to better cope with external forces. the selection, optimization, and compensation model proposes the selection of goals, the optimization of skills or resources, and the compensation of aging-related resource losses as coping strategies. socioemotional selectivity theory emphasizes emotion regulation, with a focus on emotionally rewarding experiences and closer relationships in later age (carstensen, ). much of this adaptation is assumed to hinge on individuals' future time perspective (carstensen, ; carstensen et al., ), where a more open-ended future time perspective, which includes more opportunities and a longer timeframe, promotes successful lifespan development and aging (henry, zacher, & desmette, ; kooij, kanfer, betts, & rudolph, ; rudolph, kooij, rauvola, & zacher, ). however, with progressing chronological age, individuals have persistently been found to perceive fewer opportunities and less remaining time for making use of those opportunities (baltes, wynne, sirabian, krenn, & de lange, ; rudolph et al., ; weikamp & göritz, ; zacher & frese, ). besides age, studies have also found socioeconomic status, health, personality dimensions, and other dispositional characteristics such as self-efficacy, locus of control, and optimism, the experience of aging-related gains and losses, and the adoption of a growth mindset to influence future time perspective (fasbender, wöhrmann, wang, & klehe, ; kooij et al., ; rudolph et al., ; weiss, job, mathias, grah, & freund, ). in addition to individual factors, contextual factors such as job autonomy and job complexity have been identified as antecedents. in the following, we propose that future time perspective is also influenced by individuals' uncertainty regulation. with its focus not only on remaining time per se but also on the opportunities that may arise and could be taken advantage of, future time perspective seems a natural ally of uncertainty regulation.
a longer future and more possibilities offered by that future imply more unpredictability and thereby more uncertainty. however, uncertainty to date has not been explicitly included in theories of how future time perspective affects lifespan development and aging. we argue that by regulating the amount of uncertainty one is exposed to, future time perspective can be altered. this consideration complements prior research that has focused on the impact of experienced aging-related gains and losses on perceived future time perspective (fasbender et al., ; weiss et al., ). in figure , we illustrate how, in addition to past experiences, an individual's uncertainty regulation might influence future time perspective. at the center of figure , key processes from griffin and grote's (in press) model are depicted, where individuals strive to maintain a desired level of uncertainty by engaging in either opening or closing behaviors. opening behaviors are proactive and future-oriented and generate uncertainty as opportunities for learning and the exploration of entirely new goals, such as changing one's occupation. in contrast, closing behaviors rely on existing knowledge and intend to exploit that knowledge, thereby also reducing uncertainty, for example, by selecting a task one can do particularly well. by regulating the amount of experienced uncertainty, individuals can increase or reduce the perceived future opportunities and remaining time, that is, their future time perspective, in a recursive cycle. some similarities between the proposed processes and existing aging models and research are apparent. for example, kooij, zacher, wang, and heckhausen's (in press) model of successful aging (defined as older workers' ability and motivation to continue working) centers around proactive and adaptive processes of goal (dis)engagement. these processes are aimed at restoring person-environment fit after anticipated or experienced discrepancies between personal needs and abilities on the one hand and environmental demands and resources on the other. we suggest that individuals may not only continue to strive for their goal in the face of adverse conditions, but may occasionally give up more routine goals in search of learning opportunities and the expression of expansive agency. over their careers, individuals may dynamically switch back and forth between exploiting existing skills and competencies and exploring new knowledge domains to achieve both optimal exposure to uncertainty and successful goal striving. in the study by laguerre and barnes-farrell ( ), future time perspective mediated the positive relationship between tolerance for uncertainty and motivation to continue working after retirement, as well as financial risk tolerance. in griffin and grote's model, tolerance for uncertainty is discussed as an individual predisposition that influences the level of desirable uncertainty, which in turn constitutes the set value for uncertainty regulation. thus, laguerre and barnes-farrell's ( ) findings might be considered tentative support for our proposition that future time perspective is shaped by, and exerts its influence through, processes of uncertainty regulation. in their study of entrepreneurial activity, gielnik, zacher, and wang ( ) found that the relationship between opportunity identification and entrepreneurial intentions was weaker for employees with a more limited future time perspective. at the same time, prior entrepreneurial experience strengthened the relationship between entrepreneurial intentions and activity.
based on our theorizing, one might investigate whether entrepreneurial activity affects future time perspective via the experience of successfully regulating endogenous uncertainty, for instance having exploited existing personal networks to explore a new business sector. we believe that these examples underscore the value of exploring the role of uncertainty regulation in the development and impact of future time perspective. by achieving a more balanced uncertainty regulation, which includes both the exploitation of existing knowledge and the deliberate encounter with and exploration of the unknown, the perception of future opportunities in one's life may be promoted. in the final part of our commentary, we employ our proposed model to exemplarily (re)interpret research that touches on the age-uncertainty relationship and discuss how researchers could further examine uncertainty regulation in relation to individuals' future time perspective. we conclude with some practical considerations for managing the uncertainties which ensue from the covid- pandemic. the covid- pandemic has brought to the fore uncertainty, which, despite or possibly because of its omnipresence, often remains implicit in psychology and management research. if it is explicitly addressed, it is usually treated as an aversive state individuals try to avoid (e.g., cooper & thatcher, ; heckhausen, ; hogg, ). an important first step we propose is to explicitly include uncertainty in the study designs used in lifespan and aging research and to approach the experience of uncertainty as something potentially positive as well. in support of this proposal, dweck ( ) argues that individuals strive not for complete, but for optimal predictability concerning the relationships among events and among things in the world, as one basic need that drives development. quite similarly to griffin and grote (in press), dweck ( ) postulates that complete predictability is not desirable because people are motivated to experience new and complex situations. in our literature search for this commentary, we identified a number of studies whose findings might be interpreted differently, and possibly even more convincingly, if an uncertainty regulation perspective were used. for instance, meta-analytical evidence shows that job autonomy and complexity have positive effects on occupational future time perspective. the authors explain these results with the fact that these two job characteristics act as resources. however, one may also argue that job autonomy and complexity imply more uncertainty through the discretion they offer to the job holder. along with this uncertainty, opportunities arise, for instance for competence development, learning, and exploration, leading also to a more general perception of occupational opportunities expressed in occupational future time perspective. similarly, the finding that older employees experience less strain when confronted with role ambiguity may not (only) be explained by their greater reliance on crystallized abilities, which help them to manage uncertainty (abbasi & bordia, ), but also by their interest in capitalizing on the opportunities that arise from uncertainty. such an interpretation receives some backing from the finding by basevitz and colleagues ( ) that older adults generally are more tolerant of uncertainty and worry less, presumably due to their increased focus on and capability for emotion regulation (scheibe, spieler, & kuba, ; toomey & rudolph, ).
ainsworth's ( ) data on more and more successful older entrepreneurs may also be interpreted as an indicator of older adults being more willing to expose themselves to uncertainty. uncertainty regulation may be one of those processes that should be scrutinized in response to heckhausen's ( ) call to examine new constructs as sources of differences between more and less successful developmental paths. future studies may examine whether age differences in uncertainty regulation or in preferred levels of uncertainty exist. moreover, researchers may want to explore whether engaging in expansive agency to create uncertainty at different time points in life may be an effective strategy for successful aging. there might be more gain-oriented development in older age than accounted for in contemporary models of aging, based on opportunities generated by an openness to, and possibly an active search for, uncertainty. gains may include primary and secondary control aimed at the maintenance of workability, but also an exploration of new routes to meaningful engagement in society. for instance, building on the finding by bordia and colleagues ( ) that openness to shed old identities and explore new ones was important for successful retirement transitions, interventions could be designed to get individuals to reflect on, and possibly change, their personal approach to managing uncertainty. thereby, not only secondary control of uncertainty could be strengthened, for instance through developing more tolerance for uncertainty, but also primary control, as individuals are enabled to choose more freely between reducing and increasing uncertainty for themselves. this may support individuals both in their daily life and during major transformations. studying the impact of such interventions would also allow researchers to better understand the relationships between uncertainty regulation, future time perspective, and successful aging. although the covid- pandemic has primarily led to societal and economic uncertainties that represent serious threats, these difficult times may also hold a promise. specifically, they may allow for a future where individuals and societal actors become better equipped for regulating uncertainty in ways that balance the rewards of exploiting the known and exploring the opportunities offered by the unknown. as we have aimed to show, research on aging and lifespan development is well positioned to investigate this promise further. in addition, we want to point out some practical implications of our suggested new perspective on uncertainty, which are related to mastering the immediate challenges that covid- has brought upon us. most fundamentally, a systematic reflection on the uncertainties caused by the pandemic and the threats and opportunities they imply is paramount, so that individuals and organizational actors can develop a more measured approach to uncertainty management. rather than proclaiming false certainties (e.g., that there will be a vaccine within a year), as we experience daily from many politicians, we need an understanding of how covid- -related uncertainties might develop and how we can best brace ourselves to master and possibly even take advantage of them. this may sound harsh, but even job loss has been found to open new avenues for personal development (zikic & richardson, ). for some older workers, this experience turned out to be an opportunity to reflect upon their careers and engage in career exploration, opening avenues to alternative career paths such as self-employment.
if, as prior research has shown, older individuals have more measured responses to uncertainty due to their superior emotion regulation, they may not only be better at coping with uncertainty related to themselves, but they can also be valuable resources in their organizations by helping others to cope with covid- -induced uncertainties (settersten et al., ). lastly, the proven resourcefulness of organizations and individuals in managing uncertainties during this crisis could be harnessed more broadly in a dialogue between employers and employees on more flexible and self-directed forms of working that lie at the heart of well-being at work across all ages.
thinking, young and old: cognitive job demands and strain across the lifespan
aging entrepreneurs and volunteers: transition in late career
on the incomplete architecture of human ontogeny
psychological perspectives on successful aging: the model of selective optimization with compensation
future time perspective, regulatory focus, and selection, optimization, and compensation: testing a longitudinal model
age-related differences in worry and related processes
retiring: role identity processes in retirement transition
selectivity theory: social activity in life-span context
the influence of a sense of time on human development
taking time seriously: a theory of socioemotional selectivity
identification in organizations: the role of self-concept orientations and identification motives
from needs to goals and representations: foundations for a unified theory of motivation, personality, and development
is the future still open? the mediating role of occupational future time perspective in the effects of career adaptability and aging experience on late career planning
age in the entrepreneurial process: the role of future time perspective and prior entrepreneurial experience
when is more uncertainty better? a model of uncertainty regulation and effectiveness
adaptation and resilience in midlife
social inequalities across the life course: societal unfolding and individual agency
a motivational theory of life-span development
future time perspective in the work context: a systematic review of quantitative studies
uncertainty-identity theory
the psychology and neuroscience of curiosity
future time perspective: a systematic review and meta-analysis
successful aging at work: a process model to guide future research and practice
the role of intolerance of uncertainty in predicting future time perspective and retirement-related outcomes
how evolution may work through curiosity-driven developmental process
occupational future time perspective: a meta-analysis of antecedents and outcomes
an older-age advantage? emotion regulation and emotional experience after a day of work
understanding the effects of covid- through a life course lens
age-conditional effects in the affective arousal, empathy, and emotional labor linkage: within-person evidence from an experience sampling study
how stable is occupational future time perspective over time? a six-wave study across years
the end is (not) near: aging, essentialism, and future time perspective
remaining time and opportunities at work: relationships between age, work characteristics, and occupational future time perspective
unlocking the careers of business professionals following job loss: sensemaking and career exploration of older workers
g. g. and j. p. contributed equally to this study.
key: cord- - athnjkh authors: etemad, hamid title: managing uncertain consequences of a global crisis: smes encountering adversities, losses, and new opportunities date: - - journal: j int entrep doi: . /s - - -z sha: doc_id: cord_uid: athnjkh
more and faster, they faced uncertainties concerning the length of time that the higher demand would continue to justify the additional investments. the phenomenon of newly found (or lost) opportunities and associated uncertainties occupied most smes. generally, smes depend intensely on their buyers, suppliers, employees, and resource providers without much slack in their optimally tuned value creation system. a fault, disruption, slow-down, strike, or the like anywhere in the system would travel rapidly upstream and downstream within a value creation cycle, with minor differences in its adverse impact on nearly every member. when a crisis strikes somewhere in the value creation stream, all members soon suffer the pains. consider, for example, the impact of national border closures on international supplies and sales. generally, disruptions in logistics, including international closures, could stop ismes' flow of international supplies; after the depletion of inventories, shipping and international deliveries would be forced to stop, which in turn would expose nearly all other members of the value-net to slow-downs and stoppages, indirectly if not directly, sooner rather than later. in spite of the many advantages of smes relying on collaborative international networks, the covid- crisis pointed out that all members will need to devise alternative contingency plans for disruptions that may affect them more severely than otherwise. the rapidly emerging evidence suggests that capable, far-sighted, and innovative enterprises perceived the slow-downs, or stoppages in some cases, as an opportunity for starting, or increasing, alternative ways of sustaining activities, including on-line and remote activities and involvements, in order to compensate for the shrinkage in their pre-covid demand, while the short-sighted or severely resource-constrained smes faced the difficult decision of closure in favor of a "survival or self-preservation" strategy, thus losing expansion opportunities. the silver lining of the covid darkness is that we have collectively learned invaluable lessons that deserve a review and re-examination from entrepreneurial and internationalization perspectives in order to prepare for the next unexpected crisis, regardless of its cause, location, magnitude, and timing. in a few words, the world experienced a crisis of massive scale for which it was unprepared, and even after some months there is no effective remedial strategy (or solution) for crises caused by the covid- pandemic in sight. the inevitable lesson of the above exposition is that even the most prepared institutions of the society's last resort nearly collapsed. given such societal experiences, the sufferings of industries and enterprises, especially smaller ones, are understandable, and the scholarly challenge before us is what should be researched and learned about, or from, this crisis to avoid the near collapse of smaller enterprises and industries, on which society depends; society may not easily absorb another crisis of similar, or even smaller, scale.
the main intention of this introduction is not to review the emergence and unfolding of a major global crisis that inflicted massive damage on smes in general and on ismes in particular, but to search for pathways out of the covid- darkness to brighter horizons. accordingly, the logical questions in need of answers are: were there strategies that could reduce, if not minimize, the adverse impact of the crisis? could (or should) smes or ismes have prepared alternative plans to protect themselves or possibly avoid the crippling impact of the crisis? why were smes affected so badly? are there lessons to be learned to fight the next one, regardless of location, time, and scale? in spite of the dominating context of the ongoing and still unfolding covid- crisis, there is a need to understand the world's experiences, both effective and difficult, at this point in time, although these are beyond the aims and scope of this journal. rather, this issue aims to analyze and learn about the bright rays of light that can potentially enlighten entrepreneurial and human innovative ingenuity to find pathways from the darker to the brighter side of this global and non-discriminatory crisis, within the scope of international entrepreneurship. naturally, in seeking those pathways, one is expected to encounter barriers, obstacles, and institutional rigidities that could still pose nearly insurmountable challenges, similar to those that society, and especially smes and ismes, have experienced in the past, partially due to endemic rigidities (aparicio et al. ; north ). on the positive side of the ledger, many of the above adverse factors are among the host of smaller crisis-like challenges that entrepreneurial enterprises face regularly and manage to bridge across to realize fertile and promising opportunities. learning how such bridges are built and maintained is not only entrepreneurial, but may also help the causes of humanity by showing the way out of this, and other similar, crises. this will be a noble objective if it can be accomplished, which should motivate many to take up the corresponding challenges in spite of the low chances of success. we will return to this topic at the end of this article. a cautionary note. it is very important to note that the next four articles appearing in this issue were neither invited for this issue nor received any special considerations. they are included in this issue as they offer concepts, contents, contexts, and issues that are relevant to the overriding theme of this issue and may assist smes trying to manage a crisis facing them, as well as scholars interested in investigating related issues. without exception, they were subjected to the journal's rigorous and routine double-blind peer-review processes prior to their acceptance. they were then published in the journal website's "on-line first" option, waiting for placement in an issue with a coherent theme drawing on the research of each and all of the selected articles for that issue. the highlights of the four articles that follow are presented in the next section of this article. they offer promising arguments and plausible pathways, based on their scholarly research, relevant to an emerging or unfolding crisis. structurally, this article comprises five parts. a developmental discussion of uncertainties and their types, causes, and remedies, as well as enabling topics relating to crisis-like challenges, follows this brief introduction in "developmental arguments."
a brief highlight of each of the four articles appearing in this issue, and their interrelationships, will be presented in "the summary highlight of articles in this issue." "discussions" provides discussions related to the overriding theme of this article. conclusions and implications for further research, management, and conducive public policy will appear in "conclusion and implications." the extraordinary socio-economic pains and the added stress of the covid- crisis exposed entrepreneurs, smes, larger enterprises, and even national governments to unprecedented conditions and issues. as stated earlier, there is a need for understanding how and why it became such a major world crisis and what factors contributed to expanding and amplifying its impact in the early quarters of the year . although the primary aim of this issue is not to review the crippling impact of the covid- crisis, for that is done elsewhere (e.g., etemad ; surico and galeotti ), some of its influential factors emerged and stood out early on, affecting international business and entrepreneurial institutions from the very beginning; and yet, it took some time to enact defensive private and public actions against it. although covid- was not the first world-wide health crisis, many enterprises, one after another, were defenselessly affected by it, even after a few months. while we have learned about some of the contributing factors to this expansive crisis, we are still in the dark as to why the broader community, and even resourceful enterprises, failed to foresee the emergence and unfolding of such a crisis (surico and galeotti ) or to prepare potential defenses against it. the crisis's high magnitude and broad scope involved nearly everyone worrying and learning about its impact first-hand as it unfolded; but it appears that top management teams (tmts) had not fully learned from the past or taken precautions against the emergence of potential crises, including this one, in a timely fashion. however, the literature on managing major crises of the past, mainly in large enterprises, has pointed out a few known forces, potential factors, and issues that contributed to past crises; these are briefly reviewed below. uncertainties. as similarly broad world-wide crisis-like challenges involving nearly all institutions have not been experienced in the recent past, enterprises, and especially smaller firms, found themselves unprepared and encountered high levels of discomfort and taxing uncertainties in their daily lives. generally, such effects are more disabling when enterprises are in the earlier stages of their life cycle, when they suffer from a lack of rich experience to guide them forward and do not have access to the necessary capabilities and resources to support them through (etemad a, b, ). entrepreneurial enterprises that have already internationalized, or aspire to internationalize, encounter the customary risks and uncertainties of "foreignness" (e.g., hymer ; zaheer ; zaheer and mosakowski ), lack of adequate "experiential experience" (e.g., eriksson et al. ; johanson and vahlne ), "outsidership" (johanson and vahlne ), and the liability of "newness" (stinchcombe ).
the covid crisis added risks and uncertainties arising from national lockdowns, unprecedented regulatory restrictions, closures of international borders not experienced since the second world war, and the near collapse of international supply chains and logistics, among many others, most of which became effective without much prior notice, and each of which alone could push smaller enterprises to the edge of demise due to the consequent shortages, operational dysfunctions, closures, and potential bankruptcies. survival during the heights of the covid- crisis required rapid strategic adaptations, mostly technological, and the use of alternative online facilities, capabilities, and novel strategies to reach stakeholders (customers, suppliers, employees, investors, and the like) to quickly compensate, if not substitute, for previous arrangements that had become dysfunctional. smaller firms that had prepared alternative contingency plans, supported with reserved dynamic capabilities and resources (eisenhardt and martin ; jantunen et al. ), viewed the dysfunctionality of rivals as opening opportunities and rapidly managed a successful transition to exploit them in a timely fashion, either through their own or through established others' functional "platform-type operations" (e.g., amazon, alibaba, shopify, spotify, and many similar multi-sided platforms). they were viewed by others as exceptional, progressive, even somewhat "disruptive" (utterback and acee ) and creatively destructive to others (chiles et al. ), in some cases, as internationalized firms re-strategized and refocused on their domestic markets in reaction to the closure of borders and international logistics dysfunctions. however, such adaptations, deploying innovative industry . technologies (e.g., additive technologies, artificial intelligence, the internet of things (iot), robotics, -d printing, and the like) (hannibal, forthcoming) or collaborating with established on-line or off-line establishments, faced their own unexpected operational difficulties nationally, while their counterparts experienced them internationally, including "cross-cultural communication and misunderstandings" (noguera et al. ; mitchell et al. ; mcgrath et al. ), national and international logistics problems, and supply chain disruptions, among many others, mostly attributable to covid-related restrictions. among such unexpected international factors were forced rapid changes in consumer behavior and national preferences in exporting countries (verging on implicit discriminatory practices), worsening diplomatic relations, rising international disputes, regulatory restrictions, and a host of other well-documented causes, exposing firms to unforeseen risks and uncertainties not experienced for decades. therefore, the concepts of risk and uncertainty, and the ways of mitigating, or getting over, true or perceived crises, deserve discussion, as they are pertinent to resolving the crisis-like challenges facing smaller firms, regardless of their particular timing and situation. similarly, factors contributing to, or mitigating, the experienced level(s) of ex-ante unknowns, or "un-knowables" (huang and pearce ), that feed uncertainties merit equal consideration.
for example, canada goose, manufacturer of luxury winter clothing, began making personal protective garments for hospital staff (see an article entitled as "toronto -canada goose holdings inc. is moving to increase its domestic production of personal protective equipment for health-care workers across canada at https://globalnews.ca/news/ /canada-goose-production-medicalppe-coronavirus/ visited on april , ). similarly, many other companies, including ccm sporting equipment and yoga jeans, began producing protective visors, glasses, and gowns for essential workers and hospital staff members (see article entitled as "quebec companies answer the call to provide protective equipments" at https://montrealgazette.com/business/local-business/masks-and-ppes...visited on june , ). for all of the above companies, their sales required very different distribution channels, such as pharmacies and hospital supply companies that are far from clothing and sport equipment. the us-based m was ordered not to ship n face mask to canada in march-april . similarly, some chinese suppliers refused to ship previously placed and paid-for ordered supplies. considerations. nearly all articles appearing in this issue relate to such contributing factors and offer different bridging pathways, if not causeways, over the sea of scholarly challenges faced by international entrepreneurs in quests for their success in entrepreneurial internationalization. in the context of the ongoing crisis, the pertinent discussion of uncertainties is extensive (liesch et al. ; bylund and mccaffrey ; coff ; dimov ; dow ; huff et al. ; liesch et al. ; matanda and freeman ; mckelvie et al. ; mcmullen and shepherd ) and ranges from one extreme to another classic view at the other extreme-namely, from the akerlofian cross-sectional (akerlof ) to the knightian longitudinal uncertainties (knight ) . at the root of both is in the absence of objective, or reliable information and knowledge with very different density distributions. the akerlof's cross-sectional uncertainty relates to a relatively shorter term and the information and knowledge (erikson and korsgaard ) discrepancies (between or among agents) favoring those who have more of them and exposing those who have less, or lack of them. consider, for example, the case of buying a used car (or second-hand equipment). generally, the seller has more reliable, if not near perfect, knowledge about the conditions of his offerings in terms of its age, performance, repairs, faults, and the likes than a potential buyer who will have to assume, predict, or perceive the offer's conditions without reliable information in order to justify his decision to either buy the car (or the equipment) or not. the potential buyer may ask for more detailed information about the offers' conditions or seek assurances (or even guarantees) against its dysfunctions to pursue the transaction or not when he is in doubt. the noteworthy point is that the objective information(or knowledge) is available but the buyer cannot access it to assess it objectively-thus, the cross-sectional uncertainty is due to the asymmetric state of information and knowledge among parties involved in a transaction (townsend et al. ) , which clears soon after the transaction is consummated. 
in williamson's transaction cost approach (williamson ), such discrepancies are viewed as transaction frictions between the parties, where at least one party acts opportunistically to maximize self-interest at a cost to the other(s), while the other party(ies) is incapable of securing the necessary objective information to form the knowledge required for a prudent decision. within the uncertain state of the covid-19 crisis, both of the above phenomena (asymmetry and opportunistic behavior) were clearly observable and contributed to creating subsidiary crises of different magnitudes: relatively larger ones for smaller enterprises and smaller ones for the larger companies, some of which were unduly amplified by the lack of objective information and by opportunistic behavior at the time. retrospectively, we collectively learned, for example, that there was no worldwide shortage of health-care equipment and supplies; rather, major suppliers, or intermediaries, withheld their supplies and did not ship orders on time as they usually would have done, which created the perception of acute shortages and forced prices higher. they knew well that buyers were incapable of assessing the true availability of inventoried supplies in order to demand lower prices, especially when richer buyers (e.g., national governments) were willing to bid up prices due to the urgency of their situations. this is not far from a discriminating monopolist taking advantage of its uninformed buyers. a similar situation arises when a small company fails to plan for contingencies to cover emerging uncertainties, ordering just enough of its regularly needed supplies (e.g., the minimum order quantity) to minimize the short-term costs of holding inventory. the longer-term overall cost of over-ordering to build contingency supplies is the cumulative cost of holding excess inventory over time, which could be viewed as an insurance premium for avoiding supply shortages, or stock-outs, while the true costs of such imprudent internal strategies become much higher when, for example, potential customers switch to other available brands, or when there are uncertain and adverse external conditions, including artificially created shortages, as discussed earlier. generally, the top management of resource-constrained smaller companies aims to ensure the efficiency of their resources, including supplies, and to preserve adequate cash flows, to avoid short-term uncertainties of insolvency, akin to the akerlofian type (akerlof ). in contrast, the absence of reliable information (or assurances) about steady supplies may contribute to, if not cause, a change in potential buyers' consumer behavior and further contribute to suppliers' over-estimation of buyers' demand trajectory over a longer time period, for fear of facing acute adverse conditions such as those discussed in the previous case. however, such internal (e.g., management oversights) or external (e.g., suppliers withholding shipments or changes in consumer behavior) causes, rooted in the absence of the required information, of reliable forecasts or estimates, and of perfect knowledge over time, begin to pertain more to knightian uncertainties than to akerlofian types (i.e., those across transacting agents, which are comparatively shorter-term and more frequently encountered uncertainties).
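a stylized sketch, with purely hypothetical figures, of the inventory trade-off just described: the cost of holding contingency stock as an "insurance premium" against the expected loss from a stock-out:

```python
# all numbers are hypothetical illustrations, not from the text
holding_cost_per_unit = 2.0      # yearly cost of keeping one extra unit
stockout_loss = 50_000.0         # loss if supply fails (lost customers)
p_disruption = 0.05              # assumed yearly probability of disruption
buffer_units = 500               # contingency stock under consideration

premium = holding_cost_per_unit * buffer_units       # cost of preparedness
expected_loss_unprepared = p_disruption * stockout_loss

print(f"yearly 'insurance premium' (buffer stock): {premium:.0f}")
print(f"expected yearly loss without buffer:       {expected_loss_unprepared:.0f}")
# holding the buffer pays off whenever the premium is below the expected loss
```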
the impact of resources and capabilities
naturally, a firm's level of resources (wernerfelt ; barney et al. ) or institutional inadequacies and restrictions (bruton et al. ; north ; kostova ; yousafzai et al. ) may play influential roles in mitigating encountered uncertainties. consider, for example, the difference in abilities between smaller, resource-constrained enterprises in continual need of minimizing fixed costs and larger, richer institutions (such as national governments) with higher priorities (than costs) in enforcing performance contract(s). the richer resources of larger institutions pose a more credible threat of suing the supplier(s) later on for the potential damages of higher costs or delayed shipments than those of smaller firms, thus reducing the temptation for opportunistic behaviors (williamson ) over time. as transaction cost theory suggests (williamson ), the ever-present threat of such action may dissuade suppliers from delaying and withholding shipments in the hope of higher revenues. furthermore, even the opportunists may be exposed, as other lower-cost suppliers may recognize the opportunities and respond with lower prices.
time, timing, and longer-term uncertainties
the above demonstrative discussions point to the critical role of the timely, planned acquisition of capabilities and resources over time, before emergencies, or shortages, become acute. the time dimension of this discussion relates to knight's ( ) longitudinal uncertainties. the future is inherently uncertain, but one's needs and their corresponding transaction costs are more predictable at the time, as, for example, transactions can be consummated at the prevailing prices. delaying a transaction in the hope of buying at lower costs exposes the transaction to longitudinal uncertainties, as the uncertainties' ex-post costs and the true prices are only revealed in the due course of time. similarly, the longer-term costs of preparedness and security can minimize the short-term costs to individual employees and other corporate persons. accordingly, the intensity of a crisis, and its cumulative costs, may force national and local authorities to bid up prices and absorb the much higher short-term costs at the time to ensure the acquisition of essential supplies, in order to avoid the difficult-to-predict costs of longitudinal uncertainties. for smaller enterprises, however, the state of their resources and the extent of their prior experience may influence their decisions at the time. this will be discussed below.
past experience and the firm's stage of life-cycle
generally, smaller and younger companies are short of excess resources and lack the rich experience that would provide them with a longer, far-sighted outlook for avoiding longitudinal uncertainties of the knightian type by, for example, keeping a level of contingency inventories for difficult conditions and rainy days. however, even smaller start-ups with experienced serial entrepreneurs at the helm can benefit from the past experiences of their founding entrepreneur(s) through what etemad calls "the carry-over of paternal heritage" (etemad a), enabling them to plan and provide for their necessary resources.
the state of competition
at one extreme, a monopolist can control supplies and create artificial shortages to force prices up under normal conditions; under distress and unusual conditions, customers may bid up prices to gain priority access to the available supplies. under perfect competition, at the other extreme, many suppliers compete to attract demand, and prices remain relatively competitive due to highly elastic demand.
practically, however, the state of global competition is likely to be closer to a combination of regional oligopolistic (or monopolistic) competition and globally competitive conditions, where suppliers perceive themselves to have certain monopolistic powers to manipulate prices (e.g., due to their brand equity, location, or product quality), while they need to compete in a nearly hyper-competitive state (chen et al. ) with other competitors who provide similar offerings. knowledge of the competitive and institutional structures (jepperson ; yousafzai et al. ; welter and smallbone ) is therefore essential, especially for smes, for deciding as optimally as possible, which all depends on both the buyers' and the suppliers' state of information, communication, and knowledge, as further discussed below.
the state of communication and information
the advanced state of a firm's information and communication technology (ict) is highly likely to enable it to decide prudently. as discussed earlier, uncertainties depend on one's state of reliable information, which impacts the achievement of optimality, which in turn depends on the state of information at the time. in short, a small firm's potential exposure to cross-sectional and longitudinal risks and uncertainties is also likely to depend on information about a combination of influential factors, some of which are discussed above. among such influential factors is reliable information about the firm's operating context at the time and its probable trajectory in the near future. similar arguments apply prominently to national preparedness and national security over time, to shield individual and corporate citizens from bearing high short-term or long-term costs: the national costs per capita may pale relative to the immeasurable costs of the human mortalities paid by the deceased and their families, of massive unemployment, or of shortages in major crises, such as the covid-19 pandemic. furthermore, nearly all of the emerging advances in management and production, including additive technologies, depend heavily on information (hannibal , forthcoming). finally, the next section discusses the above elements within the articles that follow.
this part consists of summary highlights of the contributions of the four double-blind, peer-reviewed articles with materials relevant to an emerging or unfolding crisis. the second article in this issue is entitled "muddling through akerlofian and knightian uncertainty: the role of socio-behavioral integration, positive affective tone, and polychronicity" and is co-authored by daniel leunbach, truls erikson, and max rapp-ricciardi. as discussed earlier, uncertainty and risk-taking propensity have long been recognized as integral parts of general entrepreneurship (e.g., gartner and liao ), and this article focuses on studying them as they relate to individual entrepreneurs' affective and socio-behavioral characteristics and to the way entrepreneurs function, including how they perceive their situation, manage, progress, and adjust their outlook within the environment(s) that exposes them to perceived risks and uncertainties. from an entrepreneurial perspective, it is on the combined interaction of time and the flow of information, or lack thereof, forming a knowledge base, that entrepreneurial decisions depend.
when entrepreneurs need to make decisions without perfect cognition (based on their information and knowledge about the state of affairs at the time) within a relatively short time period, they and their decisions are exposed to an uncertain state of the world. such uncertainty within a relatively short time span is termed cross-sectional uncertainty. generally, it is difficult to acquire nearly perfect information due to the shortage of time or the cost of searching for the information. george akerlof ( ) suggested that such perceived uncertainty is not so much due to a lack of pertinent information as to the asymmetric distribution of information (brown ) and the corresponding knowledge among agents, i.e., between those who have the more potent information and those who do not have it but need it. consider a typical entrepreneur in need of acquiring a good, or service, from a supplier or service provider who has nearly perfect information about, or knowledge of, the good or service he offers but, out of self-interest, does not fully disclose it to the entrepreneur, which gives rise to the asymmetric distribution of information (or knowledge) between the supplier and the potential buyer. generally, this is also termed "akerlofian uncertainty." time is an important factor in entrepreneurial decisions, and the influence of time is as significant as the state of information. for example, the urgency of a decision deprives the entrepreneur of sufficient time for conducting informative research to enrich his state of information and forces him to decide earlier, rather than later, with some discomfort and reservation due to his insufficient information. with more time for acquiring sufficient information to form a supportive knowledge base, he can comfortably decide whether or not to consummate a particular transaction. the time cost of switching to another supplier or of conducting research may increase the transaction costs and expose the transaction to longitudinal uncertainties as well. in contrast to the asymmetric distribution of information across individuals in the short term (brown ), the time required to acquire, or develop, the information about the relevant state of affairs needed to form the corresponding knowledge for portraying the future, or the near future, gives rise to longitudinal uncertainty, as suggested by frank knight (knight ) and known as "knightian uncertainty." generally, the future is uncertain, and it is not prudent to assume that it will be a linear extension of the current state of affairs or, alternatively, that it will be perfectly predictable. again, both information and time are influential factors, as more pertinent information is revealed over time. entrepreneurial start-ups and new ventures, for example, suffer from shortages of both time and information and thus offer a fertile context for exploring not only the interaction of time and information, but also how new venture teams (nvts) perceive the gravity of the risks and uncertainties facing them. therefore, exploring uncertainty within new venture teams, especially those based on new science and technology, which usually encounter higher uncertainties and commercialization risks, enables a deeper understanding of how uncertainties are perceived and managed by new venture teams.
as discussed at some length in the paper, the authors' research methodology enabled them to observe the impact of nvts' socio-behavioral and psychological characteristics and to explore their reactions and responses to both the shorter- and the longer-term uncertainties facing them. in the context of a major crisis, smes' top management teams (tmts) suffer in most cases from both inadequate information and a shortage of time, neither of which they can control or extend into the future. a major crisis entails complex uncertainties with unclear prospects that come without prior warning: what will have an immediate adverse impact, will it increase or subside, how long will it last, and what will be the magnitude of the accumulated damage when the crisis is nearly over? these are among the many questions that have no certain answers at the time. similar to young start-ups, where founder-entrepreneurs suffer from shortages of time, knowledge (or reliable information), and resources, in addition to uncertainties associated with consumer behavior and market reactions as well as regulatory restrictions, smes, and especially ismes, suffer from complex uncertainties for which they were not prepared, nor would they have sufficient time and resources to deal with the unfolding crisis satisfactorily. furthermore, their normal sources of help and advice, including their social networks and support agencies, such as lenders, service providers, and suppliers, would be facing even larger problems of their own and be incapable of assisting them in a timely fashion, which calls for adequate alternative contingency plans for rainy days, as discussed earlier and further reviewed in a later section. this discussion points to the need to examine the potential role of the environmental context in increasing uncertainties or mitigating them. the next article examines this very topic.
the third article of this issue examines the context within which entrepreneurial decisions are made. it is entitled "home country institutional context and entrepreneurial internationalization: the significance of human capital attributes" and is co-authored by the team of vahid jafari-sadeghi, jean-marie nkongolo-bakenda, léo-paul dana, robert b. anderson, and paolo pietro biancone. nearly all decisions are embedded in a context, and the context for most international entrepreneurship decisions is perceived as more complex than that of the home market, as extensively discussed in the internationalization literature. internationalization involves at least two contexts: one characterized by formal institutional structures and informal socio-cultural values (hofstede ; hofstede et al. ) at home, both helpful and restrictive, and the other in the host country environment, where each country's institutional structures differ from the others (chen et al. ; li and zahra ). even in the european union's (eu) single market, where the eu has increasingly harmonized regulatory and institutional requirements across member countries, different local socio-cultural and behavioral forces influence decisions differently, especially those affecting consumer behavior and market-sensitive aspects, which are more deeply embedded in a country's institutional structures than others, encouraging or restricting certain entrepreneurial actions.
generally, international entrepreneurs and their corresponding entrepreneurial actions are deeply embedded in their more complex contexts (barrett, jones, and mcevoy ; granovetter ; wang and altinay ; yousafzai et al. ), and highly internationalized smes (ismes), and even larger firms, need to respond sensitively to the various local (i.e., contextual) facets and adapt their practices accordingly (welter ), which in turn adds incremental complexities and exposes the firm's early entrepreneurial activities, and especially its marketing, to a higher degree of risk and cross-sectional uncertainty than at home, which is more familiar than elsewhere. however, firms learn from their host environment, and also from their competitors, how to mitigate their risks and remove the information (and knowledge) asymmetries over time in order to operate successfully after their early days in the host country environment. naturally, the entrepreneurial activities of innovative start-ups face higher risks and uncertainties at the outset, as discussed earlier. although the cross-sectional methodology of this article's research across european country-environments, using structural equation modeling (sem), could not examine the specific impact of various environmental characteristics on entrepreneurial orientation and entrepreneurial practices at the local level in each country context over time, the overall indicators pointed to contextual influences strongly affecting various facets of internationalization and international entrepreneurship. it is noteworthy that the entrepreneurial intentions and orientations of "non-entrepreneurs," which portray their context, had a significant positive influence on the creation of entrepreneurial businesses and their internationalization. in summary, the findings of this research strongly support the notion that the true, or perceived, state of a firm's environment influences the strategic management of its regular affairs as well as the management of an emerging or unfolding crisis, regardless of its magnitude and timing.
the fourth article in this issue complements the previous article through a deeper examination of institutional impacts from a women entrepreneurs' perspective. it is entitled "the neglected role of formal and informal institutions in women's entrepreneurship: a multi-level analysis" and is co-authored by daniela gimenez-jimenez, andrea calabrò, and david urbano. this article draws on, and extends, the impact of the institutional context, discussed above, to include what the authors term the impact of "informal institutions" on women entrepreneurs. as discussed in the context of european countries earlier, the socio-cultural and behavioral aspects of societies vary and influence different entrepreneurship initiatives differently. in contrast to the tangible influences and effects of formal institutions, the socio-cultural values of a society remain nearly invisible, but quite influential. what the authors call the neglected "informal institutions" are widely portrayed as a society's "software" by cultural anthropologists such as geert hofstede, among others (hofstede ; hofstede et al. ). in contrast to the "hardware," which is structural and tangible, the "software" remains hidden, if not intangible, and is so neglected and ignored; it functions consistently with the society's socio-cultural values and daily behavioral routines, which act as design parameters woven into the software's programs, quietly controlling social functions.
the article's underlying multi-level research methodology, analyzing the entrepreneurial experience of several thousand women across many countries, suggests that both the formal and the informal institutions of a society impact entrepreneurship, and women entrepreneurs especially significantly and profoundly, and yet they have remained "neglected." in the context of a major crisis facing society in general, and entrepreneurial smes in particular, the question of how the formal and informal institutions of a society assist or hamper effective crisis management, especially by women executives, assumes high importance. casual observation of the conditions imposed by the covid-19 crisis over the past months suggests that both the formal and the informal institutions of the affected environments imposed higher expectations, if not more responsibilities, on women as their previous family setting was transformed into home and office at the same time, which adversely affected women's time, effort, and attention in effectively managing their firm's crisis while also attending to their family as they did previously. assuming that crisis management requires more intensive attention and effort from management than in normal times, the important question for women executives is: how should the required additional efforts of busier women executives be assisted? and if they cannot be, who should bear the additional costs and the consequent damages to both the women's families and their firms? specifically, what should be the uncodified, but understood, societal expectations of women executives? are they expected to sacrifice their family's wellbeing, or not to concentrate fully on managing their enterprise's crisis effectively? naturally, the preliminary answer lies in what is consistent with the society's informal socio-cultural value systems as well as with what is formally codified in the society's laws, regulations, and broadly accepted behaviors. this discussion provides a socio-cultural bridge to the next article.
the fifth article of this issue is entitled "market orientation and strategic decisions on immigrant and ethnic small firms" and is co-authored by eduardo picanço cruz, roberto pessoa de queiroz falcão, and rafael cuba mancebo. as the title suggests, this research is about entrepreneurs facing a new, and possibly different, environmental context than the familiar one at home, thus exposing them to the fear, if not the uncertainty, of unknowns, including hidden and intangible socio-cultural value systems. they need to decide on their overall strategy, including their market orientation, in their newly adopted environment. immigrant entrepreneurs face the challenge of belonging to two environments: one at home, which they left behind, and the other in their unfamiliar new home (the host country), in which they aim to succeed (etemad b). when there are significant differences between the two, they face a minor crisis in terms of the uncertainty of whether their innate home strategic orientation or that of their host can serve them best. either of the two strategic choices exposes them to certain uncertain costs and benefits, which are not clear at the time. naturally, their familiarity with their previous home's socio-cultural environment, within which they feel comfortable and need nearly no new learning and adaptation, pushes them to operate in an environment similar to home, which gives them certain advantages and possibly lower risks and uncertainties.
this orientation attracts them towards their ethnic and immigrant community, or enclave, based primarily on the perception that their ethnic communities, enclaves, and market segments in their adopted home still resemble their home environment's context, which in turn suggests that they can capitalize on them by relying on their common ethnic social capital (davidsson and honig ), using their home language, culture, and routine practices with minimal cognitive and psychic pain of adapting to the new context. however, that perception, or assumption, may not be valid or functional where the society's socio-cultural values encourage rapid adaptation and change so that immigrants become like other native citizens. although a market orientation concentrating on the ethnic community in the adopted country has its advantages, including lower perceived short-term uncertainty (e.g., of the akerlofian type), it may not work, or may prove restrictive in the longer term, because, for example, the community may be small, decreasing in size, and gradually adapting to the host country's prevailing socio-cultural values, thus posing an uncertainty of the knightian type, where the future state is difficult to predict. in contrast, adopting a strategic and market orientation towards attractive market segments reflecting the new home's socio-cultural values and routine practices may expose the young entrepreneurial firm to the other well-documented risks and uncertainties, similar to the difficulties encountered by a firm starting a new operation in a foreign country (hymer ; johanson and vahlne , ; zaheer ; zaheer and mosakowski ; among many others). this strategy may also force the nascent firm to compete with entrenched competition, of both immigrant and indigenous origins, unless it can offer innovative, or unique, products (or services) similar to other native innovative start-ups. the noteworthy point, as discussed earlier (in the "introductions" and "developmental arguments" sections), is that the state of the firm's resources and the extent of the entrepreneur's (or the firm's top management team's) experience, information, and knowledge may make the difference between ultimate success and mere survival in either of the above strategies. the rich multi-method and longitudinal research methodology of this article, carried out over a multi-year period and involving interviews, ethnographic observation, and regular data collection among ethnic and immigrant entrepreneurs in brazilian enclaves world-wide, enabled the authors to offer a conceptual framework and complementary insights based on their findings and experiential knowledge. in summary, the articles and the research supporting them in this part are both consistent with and supportive of the arguments presented in "introductions" and "developmental arguments." they will also serve as a basis for the arguments in the following "discussions."
as stated in the "introductions" section, this issue's release coincides with the world in the midst of the coronavirus pandemic. initially, and on the face of it, the pandemic was perceived as a health-care problem in china, followed by other countries in east and south-east asia; but it soon turned into a world-wide crisis reaching far beyond health care, quickly affecting nearly all aspects of life in country after country before inflicting them with unfolding crises of their own.
generally, health-care institutions are viewed as society's institutions of last resort and are expected to deal with the potential crises of others, rather than becoming the epicenters of a crisis and posing challenges to others and to their respective societies as a whole. the health-care system in publicly financed countries is given resources, is held in high regard because of its highly capable human resources, is assumed to be well managed and ready to resolve health-care-related problems, if not crises, and is consequently expected to solve all health-related challenges effectively as they arise. however, regardless of their orientation (privately held or publicly supported), health-care systems traveled to the brink of breakdown and collapse; although they had previously dealt with similar, but smaller, outbreaks of regional and seasonal flu and other epidemics (e.g., the hiv/aids outbreak of the 1980s, now endemic world-wide; the sars epidemic of 2003; and the h1n1 flu of 2009 that became a pandemic, among others in recent memory), the covid-19 pandemic overwhelmed them. retrospectively, the health-care institutions, and the system as a whole, were not the only sector experiencing high levels of systemic fatigue, stress, and strain nearing breakdown, suggesting that some countries were not prepared to deal with a major crisis. naturally, institutions less prominent than a country's health-care system, and subsequently governments alike, were not spared; many ad-hoc experimental procedures had to be used, and valuable lessons had to be learned in a hurry, in various institutions and on many occasions, in the hope of saving precarious lives and livelihoods. such rapidly developing phenomena, seemingly beyond control initially, influenced the overall theme of this issue, although the already accepted articles waiting to be placed in a regular publication were not written on the topic of crisis management. given the gravity of the covid-19 pandemic pushing many institutions into their own crises of survival, this issue adopted the overriding thematic topic of a crisis management perspective to enable a richer discussion of the different components of crisis management, with a focus on smes and ismes, based on the specific research of each of the articles accepted through the journal's rigorous double-blind review process. expectedly, the resource-constrained small- and medium-sized enterprises (smes) and their internationalized counterparts (ismes) suffered deeply due to the lockdown of their customers, employees, and service providers. similarly, the sudden stoppage of major national and international economic activities in many advanced countries, including those on the european and american continents, paralyzed them initially, as the early impacts were totally unexpected. the health-care system was not the only sector experiencing dangerous levels of stress and strain: entertainment, the hotel and lodging industries, the performing and creative arts, the hospitality and restaurant industries, and tourism and their complementary goods and service providers, mostly smaller enterprises, among many other smes, were caught off-guard and suffered deeply from the lack of demand due to the rapid economic slow-down, fears of infection, and the enforcement of lockdowns in many affected countries.
similarly, integrated manufacturing systems, such as the automobile industry, where parts had to arrive from different international sources on time, if not just in time, came to a halt because of the near collapse of international supply chains, in addition to the national protectionism of the past showing its ugly face after some time. such conditions had not been seen for some seven decades, since the second world war, which triggered the multilateral agreement at bretton woods in 1944 and the subsequent multi-national conferences that created the world's enduring institutions, such as the gatt (replaced by the wto), the imf, and the world bank. at the socio-cultural and economic levels, the imposed self-isolation and lockdown of cities and communities, intended to avoid further transmission of the coronavirus to unsuspecting others, entailed immobility, and the imposition of social distancing disrupted all normal routine behaviors. many industries could no longer operate, as safe social distances could not be provided. international, national, and even regional travel was shut down as national borders were nearly closed. as a direct result, small- and medium-sized enterprises in the affected industries, which depend intensively on others, suffered a massive double whammy: their demand had collapsed, and their supplies had stopped. some were ordered closed, and others had no reason to operate due to the immobility of employees, customers, and clients alike, as well as severe shortages of parts and supplies. consequently, they had to shut down to minimize unproductive fixed costs. in short, the world has been, and in some cases still is, struggling with the covid-19 crisis at the time of this writing in june 2020. only months earlier, in december 2019, not many people imagined the emergence of the crisis in their locale, let alone a massive global pandemic, crippling community after community, which revealed deep and unattended socio-cultural, economic, and institutional faults. collectively, they pointed to the unpreparedness of many unsuspecting productive enterprises and institutions alike. in contrast, more far-sighted and conservative institutions with alternative contingency plans, based on their previous, relatively minor crisis-like experiences, such as transportation and labor strikes, not comparable to covid-19, activated their relevant contingency plans. consider, for example, that the alternative of online marketing and sales could compensate for the immobility of customers and the loss of in-person sales transactions. naturally, enterprises with online capabilities either gained market share or suffered less severely. in short, the overriding lesson of this crisis, as discussed at some length in "developmental arguments" and in "the summary highlight of articles in this issue" and in response to the queries raised in "introductions," is that institutional under- and unpreparedness, regardless of level, location, and size, inflicted far higher harm than the incremental costs of carrying alternative contingency plans for rainy days, as evidenced by the considerable success of online expansion and the quick reconfiguration of flexible manufacturing to accommodate the unexpected oddities of the unfolding crisis. aside from the global scale of the covid-19 crisis, similar, if not more severe, crises have happened in different locations, some repeatedly, and humanity has suffered and should have learned.
consider, for example, the torrential rains and subsequent flooding and mudslides in the temperate regions; the massive snowfalls in the northern hemisphere shutting down activities for days, if not weeks, at a time; the massive earthquakes destroying residential and office buildings without warning (e.g., in christchurch, new zealand, and the ancient trading city of bam in southeastern iran); the heavy ice storm in eastern canada destroying electrical transmission lines, which shut down cities for many days; and the massive and widespread indian ocean tsunami destroying coastal areas in about a dozen countries with a quarter-million casualties, among many others: all of these should have served as wake-up calls. while many of the previously stricken areas still remain exposed and vulnerable to recurrence, reinforcement and warning systems are in place for only a minority of them. for example, the earthquake detection systems in the deep seas neighboring tsunami-prone areas have provided ample warning to the vulnerable regions and have avoided major damage. conversely, the massive tsunami of eastern japan that destroyed the fukushima daiichi nuclear power plant, in addition to causing large financial, property, and human losses as well as untold missing persons, could have been avoided: in an earthquake-prone country such as japan, the protective barrier walls, but for their design faults, should have protected the fukushima nuclear power plant and prevented the release of toxic nuclear emissions. the above discussions point to a few noteworthy lessons and implications, as follows:
• the possibility of recurrence, possibly with higher striking probabilities than before, is not out of the realm of reality, thus calling for planned precautions and, where these are not adequate, preparations for preparedness.
• the organs, institutions, and systems weakened by a crisis, regardless of its magnitude and gravity, are in need of rebuilding and reinforcement to endure the next adverse impacts; this includes the smes that came near demise and whose management systems were nearly compromised by the emerging covid-19 crisis, its still uncertain unfolding, and the post-covid aftermath.
• the primary vital support systems, especially the support systems of last resort, including first responders, emergency systems, and warning and rescue systems, among others, need to develop alternative and functional contingencies and stay near readiness, as the timing of the next crisis may remain a true uncertainty (of the knightian type discussed earlier).
• the immediate support systems, agencies, or persons need to have planned redundancies and be ready to act as backups should their clients be affected by unforeseen events.
• sustainability and resilience need to become an integral part of all contingency plans, as the strength of the collectivity depends on the strength and resilience of the weakest link(s) (blatt ).
• the prevention of a natural disaster of covid scale possibly engulfing humanity requires supra-national institutions with effective plans, incentives, and sanctions to prevent the pursuit of self-interest at a cost to the larger collectivity, if not to humankind.
the immediate implication of the above discussions is that the post-mortem analysis of a crisis, regardless of its scale and magnitude, should identify the causes of, and the reasons for, the failure to stop, and possibly reverse, its effects in a timely fashion.
in the context of smes and ismes, management training, simulation to test the efficacy and reliability of crisis scenarios for alternative contingency plans, and the plans' feasibility and functionality, among others, are of critical importance, and they point to four equally important efforts:
• crisis management needs to become an indispensable part of education at all professional levels to enable individuals to protect themselves and assist others in need, as well as to reduce the burden and gravity of the collective harm.
• the societal backbone institutions and institutional infrastructures on which others depend must be strengthened so that they can withstand the impact of the next crisis, regardless of its timing and origin, and support their dependents.
• the widespread lessons learned from the covid-19 crisis should be utilized to prepare for a more massive crisis in the not-so-distant future.
• smes, and especially ismes, as socio-economic institutions with societal impact, need to re-examine their dependencies on others and take steps to avoid their recurrence in ways consistent with their long-term aims and objectives.
in the final analysis, the experience of the covid-19 pandemic indicates that humanity is fragile and that only collective action can provide the necessary capabilities and resources for dealing with the next potential disaster. similarly, the smaller institutions that provide the basic ingredients, parts, and support for the full functioning of their networks and the livelihood of their respective members need the assurance of mutual support in order to survive and to deliver the vital support needed of them. on a final note, it is an opportune privilege for the journal of international entrepreneurship to take this invaluable opportunity to reflect on the ongoing crisis, which is still able to inflict further harm and damage nearly beyond the control of national governments. similarly, and on behalf of the journal, i invite the scholarly community to take up the challenge of educating and preparing us for the next crisis, regardless of its nature, location, and timing. the journal is prepared to offer thematic and special issue(s) covering the management of crisis in smes and ismes alike.
references
the market for "lemons": quality uncertainty and the market mechanism
institutional factors, opportunity entrepreneurship and economic growth: panel data evidence
the resource-based view of the firm: ten years after
resilience in entrepreneurial teams: developing the capacity to pull through
the palgrave encyclopedia of strategic management
institutional theory and entrepreneurship: where are we now and where do we need to move in the future?
a theory of entrepreneurship and institutional uncertainty
navigating hypercompetitive environment: the role of action aggressiveness and tmt integration
home country institutions, social value orientation, and the internationalization of ventures
beyond creative destruction and entrepreneurial discovery: a radical austrian approach to entrepreneurship
how buyers cope with uncertainty when acquiring firms in knowledge-intensive industries: caveat emptor
the role of social and human capital among nascent entrepreneurs
uncertainty about uncertainty. foundations for new economic thinking
experiential knowledge and cost in the internationalization process
knowledge as the source of opportunity
early strategic heritage: the carryover effect on entrepreneurial firm's life cycle
advances and challenges in the evolving field of international entrepreneurship: the case of migrant and diaspora entrepreneurs
actions, actors, strategies and growth trajectories in international entrepreneurship
management of crisis by smes around the world
risk-takers and taking risks
economic action and social structure: the problem of embeddedness
the influence of additive manufacturing on early internationalization: considerations into potential avenues of ie research
the cultural relativity of organizational practices and theories
culture's consequences: comparing values, behaviors, institutions and organizations across nations
cultures and organizations: software of the mind
managing the unknowable: the effectiveness of early-stage investor gut feel in entrepreneurial investment decisions
a conversation on uncertainty in managerial and organizational cognition. in: uncertainty and strategic decision making
the international operations of national firms: a study of direct foreign investment
entrepreneurial orientation, dynamic capabilities and international performance
institutions, institutional effects, and institutionalism
the internationalization process of the firm: a model of knowledge development and increasing foreign market commitments
the uppsala internationalization process model revisited: from liability of foreignness to liability of outsidership
country institutional profiles: concept and measurement
formal institutions, culture, and venture capital activity: a cross-country analysis
risk and uncertainty in internationalisation and international entrepreneurship studies. the multinational enterprise and the emergence of the global factory
effect of perceived environmental uncertainty on exporter-importer interorganizational relationships and export performance improvement
elitists, risk-takers, and rugged individualists? an exploratory analysis of cultural differences between entrepreneurs and non-entrepreneurs
unpacking the uncertainty construct: implications for entrepreneurial action
entrepreneurial action and the role of uncertainty in the theory of the entrepreneur
cross-cultural cognitions and the venture creation decision
socio-cultural factors and female entrepreneurship
institutions, institutional change and economic performance
social structure and organizations
the economics of a pandemic: the case of covid-19
uncertainty, knowledge problems, and entrepreneurial action
disruptive technologies: an expanded view
social embeddedness, entrepreneurial orientation and firm growth in ethnic minority small businesses in the uk
contextualizing entrepreneurship: conceptual challenges and ways forward
institutional perspectives on entrepreneurial behavior in challenging environments
resource-based view of the firm
the resource-based view of the firm: ten years after
the economics of organization: the transaction cost approach
the new institutional economics: taking stock, looking ahead
institutional theory and contextual embeddedness of women's entrepreneurial leadership: evidence from countries
overcoming the liability of foreignness
the dynamics of the liability of foreignness: a global study of survival in financial services
its natural abundance of almost 100%, its ubiquitous occurrence, and the high mobility of water protons in living matter are further prerequisites for using the low-sensitivity nmr method for imaging in human subjects. this low sensitivity compared with other imaging methods, e.g., positron emission tomography, cannot be emphasized enough: the sensitivity difference between the two methods amounts to several orders of magnitude. this fact has to be taken into account when magnetic resonance imaging is envisioned for specific probe imaging, nowadays known as molecular imaging. in spite of the abovementioned low sensitivity of mr, proton imaging is possible in humans because of the high magnetic moment, the almost 100% natural abundance, the high concentration, and the high mobility of protons in tissue. the following considerations will be restricted to hydrogen nuclei only. the basis of magnetic resonance imaging is a simple resonance phenomenon. without an external magnetic field, the magnetic moments of a specimen are not oriented at all; in an external magnetic field, however, the magnetic moments are no longer randomly oriented. the application of an external magnetic field b0 forces the magnetic moments μ to align along the magnetic field. due to basic physics principles, the orientation has two quantum states with respect to the external magnetic field: first the parallel, and second the antiparallel state, both of which have different magnetic energies em, the energy difference being Δem = γ · ħ · b0, with γ and ħ being the gyromagnetic ratio and planck's constant, respectively. in thermal equilibrium, both states possess different occupation numbers, with the low-energy parallel state having a higher probability of occupation than the high-energy antiparallel state, resulting in a macroscopic and therefore measurable net magnetization parallel to the orientation of the external magnetic field. this thermal equilibrium state can be distorted by irradiation with an alternating electromagnetic field having a radiation energy erf identical to the energy splitting Δem caused by the magnetic field, with erf = ħ · ω0, where ω0 is the resonance frequency of the spin system, the so-called larmor frequency. due to the resonant irradiation, the spin system takes up additional energy that can be dissipated only if the system is coupled to its microenvironment. this coupling strength is described by the so-called t1 relaxation time (also known as the longitudinal or spin-lattice relaxation time). the equivalent for the coupling of the spins to each other is the t2 relaxation time (also known as the transversal or spin-spin relaxation time). typical t1 relaxation times for tissues are on the order of several hundred to a few thousand milliseconds, and t2 relaxation times on the order of tens to a few hundred milliseconds. mr imaging utilizes pulsed nmr: the alternating electromagnetic field, the so-called radiofrequency (rf) field, is applied only for a short period of time (in general, pulses last some milliseconds). the short rf pulse excites the spin system via a transmitter coil. after irradiation of the nuclear spin system, a receiver coil can detect a damped time-dependent signal with a frequency of ω0. this signal is called the free induction decay (fid). the damping of the signal is governed by the t2 relaxation time, and its period by the strength of the external magnetic field (constant magnetic moment assumed).
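before turning to the practical complications, a minimal python sketch of the resonance condition and the thermal-equilibrium population difference discussed above; the field strength of 1.5 t and body temperature are assumed values chosen for illustration, and the tiny polarization also illustrates numerically why the method's sensitivity is low:

```python
import math

# physical constants (SI units)
hbar = 1.054571817e-34      # planck's constant (reduced) [J s]
k_B = 1.380649e-23          # boltzmann constant [J/K]
gamma_p = 2.675221874e8     # gyromagnetic ratio of the proton [rad/(s T)]

B0 = 1.5                    # assumed main field strength [T]
T = 310.0                   # body temperature [K]

# larmor (angular) frequency and zeeman energy splitting
omega0 = gamma_p * B0                 # omega0 = gamma * b0 [rad/s]
f0 = omega0 / (2 * math.pi)           # [Hz]
dE = hbar * omega0                    # delta e = gamma * hbar * b0 [J]

# fractional population difference of the two proton spin states
# (high-temperature expansion of the boltzmann distribution)
polarization = dE / (2 * k_B * T)

print(f"larmor frequency: {f0 / 1e6:.2f} MHz")                 # ~63.87 MHz
print(f"relative population difference: {polarization:.2e}")   # ~5e-6
```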
in practical terms, not only does the t2 relaxation time influence the damping of the signal, but so does the technically unavoidable inhomogeneity of the external magnetic field. the signal damping caused by this inhomogeneity is characterized by the t2* relaxation time and is in general much stronger than that caused by t2 relaxation. only special pulse sequences (e.g., spin-echo sequences) can eliminate the influence of the inhomogeneity of the external magnetic field and thus allow the measurement of the t2 relaxation time specific to the substance or tissue. the influence of the t1 relaxation time is mainly limited to the amplitude of the signal. a prerequisite for image reconstruction (sect. . ) is exact information about the origin of the mr signal. this spatial information can be generated by space-dependent magnetic fields additionally applied along the three spatial coordinates. these space-dependent magnetic fields, called magnetic field gradients, are small compared with the main external field and are generated by special coils mounted in the bore of the magnet. due to these additional magnetic field gradients, the total magnetic field is slightly different in each volume element (voxel), and so, in turn, is the resonance frequency of the spin system in each voxel. as a result, irradiation with an rf pulse of defined frequency ω′ excites only the nuclei in those voxels where the larmor frequency ω0 given by the local field strength matches the resonance condition. suitable changes of the field gradients allow the volume element fulfilling this condition to be moved through space. keeping in mind that the signal intensity of a volume element is determined by the number of spins in the volume element, the relaxation times of the tissue, and the specific measurement parameters (e.g., pulse repetition time, echo time, etc.), this signal intensity is assigned to the corresponding picture element (pixel). in this manner, the region of interest can be sampled by moving the volume element through space, and an image can be constructed pixel by pixel. this method requires a long time to acquire images: assuming every experiment needs about 1 s to measure one voxel (and hence one pixel), the measurement of an image of, say, 256 × 256 pixels would require more than 65,000 s to complete. nowadays, 2d-, 3d-, and/or phase-encoding methods as well as half-fourier methods are applied, allowing data acquisition times of minutes or even less. special fast imaging techniques (e.g., flash, rare, and epi sequences) allow a further reduction of the acquisition time (cf. sect. . ). in contrast to x-ray computed tomography, where the attenuation is governed purely by the electron density, in mri the signal intensity is, as mentioned above, a complex function of the proton density and the t1, t2, and t2* relaxation times. additionally, the signal intensity, and hence the image contrast, can be influenced by the measurement parameters (e.g., echo time, repetition time) set at the scanner. knowledge of how these different parameters influence the signal intensity, and hence the image contrast, is mandatory for interpreting mr images correctly. the mr scanner is a complex system (sect. . ). its main components are the magnet, the rf system, and the gradient coils. the entire system is controlled and supervised by a computer. the development of mr imaging became possible only after the development of fourier-transform nmr as well as of fast computers calculating fast fourier transforms within minutes.
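a minimal numerical sketch of the frequency-position relation underlying the spatial encoding described above, ω(x) = γ(b0 + gx · x); the field and gradient values are freely chosen illustrative assumptions:

```python
import numpy as np

gamma = 2.675221874e8   # proton gyromagnetic ratio [rad/(s T)]
B0 = 1.5                # assumed main field [T]
G_x = 10e-3             # assumed gradient strength along x [T/m] (10 mT/m)

# five positions across an assumed 25.6-cm field of view
x = np.linspace(-0.128, 0.128, 5)            # [m]

# local larmor frequency: omega(x) = gamma * (b0 + g_x * x)
f = gamma * (B0 + G_x * x) / (2 * np.pi)     # [Hz]
f_center = gamma * B0 / (2 * np.pi)          # frequency at isocenter

for xi, fi in zip(x, f):
    # the frequency offset relative to the isocenter encodes the position
    print(f"x = {100 * xi:+6.1f} cm -> f = {fi / 1e6:.4f} MHz "
          f"(offset {(fi - f_center) / 1e3:+8.2f} kHz)")
```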
the development of large-bore superconducting magnets with fields of 0.5-1.5 t in the 1980s accelerated the development and the application of mri in clinical practice. nowadays, 3-t scanners are in routine clinical use, and scanners with even higher field strengths are installed and will further accelerate the development of mri and mrs. most magnets are built as solenoid coils. other magnet types, such as scanners with a helmholtz coil configuration, give better access to the patient but are installed mostly for special purposes, e.g., in an operating suite. mr scanners with conventional resistive magnets and low field strengths are rarely used, except in countries with a short supply of helium or with other restrictions that may not allow the installation of a superconducting system. the risk of side effects is assumed to be low at fields up to 1.5 t, except for the danger caused by ferromagnetic objects accelerated into the magnet. nevertheless, at fields of 3 t and above, the knowledge about side effects is scarce, especially concerning the long-term exposure of organisms to high static magnetic fields, gradient fields, and rf fields. the problems concerning safety are extensively discussed in sect. . . in the early days of mri, the simplicity and the wide range with which contrast can be manipulated in mri by changing the imaging parameters led to the conclusion that the development of mr contrast agents was dispensable. however, experience taught that contrast media significantly improve mr diagnostics, not only in the central nervous system but also in other diagnostic procedures. in contrast to x-ray contrast agents, where absorption is the dominating physical effect producing the contrast, mr contrast media are based on other principles: the paramagnetic and/or superparamagnetic properties of the contrast media influence the relaxation times of tissue, or they change contrast by obliterating the signal of protons and thus increase contrast. whereas in x-ray imaging the contrast is proportional to the concentration of the contrast medium, in mr the dependency on the concentration is in general much stronger than linear, most often exponential. mr contrast media are described in sect. . . the intrinsic sensitivity of nmr to motion was already observed in the early 1950s. in mr imaging, motion, in particular flow, is often recognized in the form of artifacts. however, these phenomena can also be used to measure flow and/or to depict the vascular system. two effects are used for these kinds of measurements: the time-of-flight phenomenon (or the wash-in/wash-out effect) and the spin-phase phenomenon. in time-of-flight measurements, moving spins are excited at one location (in the vessel), and detection of the spins is performed downstream at another known location (slice). the delay time between excitation and detection can be used to calculate the flow velocity. several modifications of the method exist (e.g., presaturation, bolus tracking) and are used depending on the setup of the measurement and the sequences employed. the spin-phase phenomenon can be used for angiographic imaging as well. the phase of the transverse magnetization of moving spins along a field gradient changes according to the larmor equation. these phase-shift effects are observed for flow in all directions. the phase changes depend on different flow parameters (e.g., velocity, turbulence, acceleration) and on the pulse sequences used. the signal variations produced by the two effects can be used to produce images of the vascular structures.
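a minimal sketch of the spin-phase effect just described, assuming a simple rectangular bipolar gradient pair; the gradient amplitude and lobe duration are freely chosen illustrative values, not parameters from the text:

```python
import numpy as np

gamma = 2.675221874e8   # proton gyromagnetic ratio [rad/(s T)]

# assumed bipolar velocity-encoding gradient: two opposed rectangular
# lobes of amplitude +/-G, each of duration tau
G = 20e-3               # lobe amplitude [T/m]
tau = 1e-3              # duration of each lobe [s]

# first moment of the bipolar pair (magnitude): m1 = G * tau**2
M1 = G * tau**2

# stationary spins acquire no net phase; spins moving with constant
# velocity v acquire phi = gamma * m1 * v (phase proportional to velocity)
for v in [0.0, 0.1, 0.5, 1.0]:          # velocities [m/s]
    phi = gamma * M1 * v
    print(f"v = {v:4.1f} m/s -> phase = {np.degrees(phi):7.1f} deg")

# in practice the gradient is scaled so that the largest expected
# velocity maps to 180 degrees (the so-called venc)
```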
Using phase-sensitive effects, magnitude subtraction is a common procedure: a dephased and a rephased image are acquired sequentially and subtracted. Using time-of-flight effects, mostly maximum-intensity projection is used to construct images of the vasculature. The angiographic techniques are described in detail in Sect. ….

Diffusion-weighted and diffusion-tensor imaging are methods first applied to clinical problems of the brain, e.g., stroke, characterization of brain tumors, multiple sclerosis, etc. Molecules in gases and fluids undergo microscopic random motions due to the thermal energy, which is proportional to the temperature of the gas or fluid. If the molecules (in this context, only water molecules are considered) are embedded in a structure, for instance in tissue, the random-walk motion may be restricted by the cellular tissue structure, which reduces the diffusion constants. If the tissue structure has a preferred direction, diffusion is no longer isotropic; it has higher components along the preferred direction of the tissue. This kind of diffusion is called anisotropic diffusion. In mathematical terms, anisotropic diffusion can be represented by a tensor. The so-called apparent diffusion coefficient can be measured, and the anisotropy of the diffusion can be determined; it contains information about the structure of the tissue. The basics of diffusion imaging are elucidated in Sect. ….

G. Brix

All nuclei with an odd number of protons and/or neutrons possess in their ground state a non-zero angular momentum or nuclear spin I, which results from the intrinsic angular momenta and the orbital angular momenta of the constituent protons and neutrons. As with any other angular momentum at the atomic and nuclear level, the angular momentum vector I is quantized. This quantization is described by the following fundamental postulates of quantum physics:
• Quantization of the magnitude: the magnitude (length) |I| of the angular momentum vector can only take the discrete values |I| = ħ√(I(I+1)), with ħ being Planck's constant (ħ = 1.055 × 10⁻³⁴ Js) and I the spin quantum number, which is either integer or half-integer.
• Quantization of the direction: the component Iz of the angular momentum vector I along the direction of an external magnetic field is quantized. For a given value of I, only the discrete values Iz = mħ are admitted, where m is the magnetic quantum number, which is limited to the values −I, −I+1, …, I−1, I. In total, only 2I+1 orientations of the angular momentum vector I are thus allowed.

Example: Figure … illustrates spin quantization in the form of a vector diagram for a nucleus with the spin quantum number I = 3/2. In this case, there are 2I+1 = 2 · 3/2 + 1 = 4 allowed orientations of the spin vector I with the magnitude (length) |I| = √(I(I+1)) ħ = √(3/2 · (3/2 + 1)) ħ ≈ 1.94 ħ.

Remark: the spin quantum number I is frequently referred to as the "nuclear spin," which means that the maximum (minimum) component of the vector I along the chosen axis is ħI (−ħI).

The angular momentum I of an atomic nucleus is always related to a magnetic moment μ. This nuclear magnetism forms the basis of magnetic resonance.

Remark: an atomic nucleus can be imagined as a rotating, positively charged sphere (Fig. …). The rotation of the charge results in a circular electric current, which induces a magnetic dipolar field. Both the direction and the magnitude of this field are characterized by the magnetic moment μ.
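The two quantization postulates can be checked numerically. A minimal sketch (the chosen spin values are just the examples discussed here, I = 1/2 for hydrogen and I = 3/2 for the vector-diagram example):

    import numpy as np

    HBAR = 1.055e-34  # Planck's constant / 2*pi in J*s

    def spin_states(I):
        """Magnitude |I| and allowed z-components for spin quantum number I."""
        magnitude = HBAR * np.sqrt(I * (I + 1))
        m = np.arange(-I, I + 1)          # magnetic quantum numbers -I ... +I
        return magnitude, m * HBAR

    for I in (0.5, 1.5):                  # 1H has I = 1/2; the example uses I = 3/2
        mag, iz = spin_states(I)
        print(f"I = {I}: |I| = {mag/HBAR:.3f} hbar, "
              f"{len(iz)} orientations, Iz/hbar = {iz/HBAR}")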
In this simple model, the vector μ is collinear with the mechanical angular momentum of the sphere. Surprisingly, in quantum physics this simple relationship remains valid even when the angular momentum is an inherent property of a particle (e.g., an electron or a nucleus) that is not associated with a mechanical rotation. As shown by a large number of experiments, there is a linear relationship between the nuclear magnetic moment and the nuclear spin:

μ = γI.

The proportionality constant γ is denoted as the gyromagnetic ratio and is a characteristic property of a nuclide.

[Fig. …: In the classical model, the rotation of a charged particle, described by its angular momentum I, results in an electric current, which induces a magnetic dipolar field. Direction and magnitude of this field are described by the magnetic moment μ. The vector μ is directed collinear to the angular momentum I of the sphere (magnetomechanical parallelism).]

Whereas all nuclei with I ≠ 0 can in principle be used for spectroscopic MR examinations, the nucleus of the hydrogen atom, which has a spin quantum number of I = 1/2, is used almost exclusively in MRI for two reasons:
• It is the most abundant nucleus in biological systems.
• It has the largest gyromagnetic ratio of all stable nuclei.

In the absence of a magnetic field, all allowed orientations of the magnetic moment μ = γI are energetically equal. This corresponds to the well-known fact that a bar magnet can be positioned arbitrarily within field-free space; its potential energy is independent of its orientation. However, if the nucleus is located in a homogeneous static magnetic field with the magnetic flux density B0 (magnitude B0 = |B0|) directed along the z-axis of a coordinate system, the nucleus has an additional potential energy.

[Fig. …: Splitting of the energy levels of a nucleus with the spin quantum number I = 3/2 in an external magnetic field with the flux density B0. The energy difference between the four equidistant nuclear Zeeman levels is ΔE = ħω0 = γħB0.]

The B0 field represents the "real" magnetic field that interacts with the magnetic moments of the nuclei. The relation between the two magnetic field quantities is explained in Sect. ….

When considering an isolated magnetic moment within a static magnetic field, one finds that transitions between the different energy levels are prohibited due to the law of energy conservation. Transitions can only be induced by an additional time-dependent electromagnetic RF field that interacts with the magnetic moment; this effect is known as magnetic resonance (MR). In MR, transitions are induced by a magnetic RF field B1(t) with the angular frequency ωRF, which is irradiated perpendicular to the direction of the static magnetic field B0. Such a time-dependent magnetic field, however, can only induce transitions fulfilling the selection rule Δm = ±1, i.e., transitions between neighboring energy levels. As a consequence, the energy ERF = ħωRF of a photon of the RF field must be identical with the energy difference ΔE = ħω0 = γħB0 between two neighboring energy levels, which yields the resonance condition

ωRF = ω0 = γB0.

Remarkably, Planck's constant ħ does not occur in this fundamental equation of magnetic resonance. This indicates that the basic principles of magnetic resonance can be described not only by quantum physics but also by a classical approach, which is mediated by the intuitive semiclassical model described in the next section.
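A short sketch evaluating the resonance condition ωRF = ω0 = γB0 and the Zeeman level spacing ΔE = ħω0 for the hydrogen nucleus; the chosen field strengths are representative, illustrative values:

    import math

    GAMMA_H = 2.675e8        # gyromagnetic ratio of 1H in rad/(s*T)
    HBAR = 1.055e-34         # J*s

    for B0 in (1.0, 1.5, 3.0):              # static field strengths in T
        omega0 = GAMMA_H * B0               # Larmor angular frequency, rad/s
        print(f"B0 = {B0:.1f} T: f0 = {omega0/(2*math.pi)/1e6:6.2f} MHz, "
              f"dE = {HBAR*omega0:.3e} J")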
In an external magnetic field, a cylindrical permanent magnet, characterized by a magnetic moment μ, experiences a mechanical torque that tends to align it parallel to the external field and thus to minimize the potential energy of the system. However, if the permanent magnet rotates around its longitudinal axis and thus possesses an angular momentum ("magnetic gyroscope"), it cannot align parallel to the external field due to the conservation of angular momentum. In this situation, it experiences a torque perpendicular to both the direction of the magnetic field and the angular momentum, which results in a rotation (precession) of the magnet on a cone about the direction of the external B0 field (see Fig. …b). The frequency of this precession, the Larmor frequency, corresponds to the resonance frequency ω0 given by the resonance condition above.

The precession of a nuclear magnetic moment in a magnetic field can be illustrated by a mechanical analog: when a child's spinning top is deflected so that its axis is not parallel to the direction of the gravitational field, it will continue rotating around its axis, but the axis itself starts rotating; the top precesses on a cone around the direction of the gravitational field (Fig. …a). It should be noted, however, that the child's top and the nucleus differ in that the angular momentum L of the top has to be initiated mechanically, whereas the nucleus possesses an intrinsic angular momentum I.

The quantization of the direction of the nuclear magnetic moment μ can be integrated into this classical description by limiting the angle between the field axis and the precession cone to the discrete values that correspond to the 2I+1 permitted orientations of the angular momentum I. For a spin-1/2 nucleus, this results in a double precession cone, as shown in Fig. …. However, this semiclassical model remains questionable, because the classical concept of a continuous trajectory in space is hardly compatible with the quantization of physical quantities. For instance, what would the trajectory of the vector μ look like when transitions between the various precession cones, reflecting discrete energy levels, are induced by an RF field, such as, for a spin-1/2 nucleus, the transition from the lower to the upper precession cone (cf. Fig. …)? Is it possible to assign to the vector μ a well-defined direction in space at any point in time, and would this direction change continuously over time? If so, this would negate the postulate of discrete energy and angular momentum levels. This contradiction can only be resolved by a rigorous quantum mechanical treatment of the system. However, when considering only the mean values of physical quantities averaged over a large ensemble of nuclei, which is all that can be measured in a real MR experiment, it becomes apparent that the models and laws of classical physics remain valid.

In field-free space, the magnetic moments of the nuclei in a macroscopic sample are randomly oriented due to their thermal motion and thus mutually compensate each other. In a homogeneous static magnetic field B0, however, only 2I+1 discrete orientations of the magnetic moments with respect to the direction of the external field are permitted, the energy levels of which differ according to the Zeeman splitting described above.
In thermal equilibrium, the population of the 2I+1 levels (spin states) is described by Boltzmann statistics: the lower the energy Em = −γħB0m of a state with the magnetic moment μz = γħm in the z-direction, the greater its occupation number.

Example: let us consider an ensemble of hydrogen nuclei in a static magnetic field with a flux density of B0 = 1 T. According to Boltzmann statistics, more nuclei occupy the state of lower energy (m = +1/2, μz parallel to B0) than the state of higher energy (m = −1/2, μz antiparallel to B0) (Fig. …). Compared with the thermal energy, however, the difference between the two energy levels is extremely small, so that the difference in the occupation numbers of the two levels is also very small: at a body temperature of 37 °C, the relative population difference amounts to only about 3 × 10⁻⁶ of the total number of spins!

[Fig. …: Double precession cone for a nucleus with the nuclear spin quantum number I = 1/2. The two permitted spin states (precession cones) are characterized by the magnetic quantum numbers m = ±1/2.]

[Fig. …: Origin of the nuclear magnetization. In thermal equilibrium, the distribution of an ensemble of spin-1/2 nuclei on the two allowed precession cones is described by Boltzmann statistics. The occupation number of the state of lower energy (m = +1/2, μz parallel to B0) is somewhat higher than that of the state of higher energy (m = −1/2, μz antiparallel to B0), which leads to a macroscopic (bulk) magnetization M0.]

Although the difference in the occupation numbers is extremely small, it results in a measurable bulk magnetic moment along the direction of the B0 field due to the large number of nuclei in a macroscopic sample ("nuclear paramagnetism"). The macroscopic magnetization in thermal equilibrium is described by the magnetization vector M0, which is defined as the vector sum of the nuclear magnetic moments per unit volume V. The magnitude of the equilibrium magnetization is given by

M0 = (N/V) γ²ħ²I(I+1)B0 / (3kT),

where N is the total number of nuclei in the sample, T the absolute temperature of the sample, and k Boltzmann's constant (k = 1.38 × 10⁻²³ J/K). The ratio ρ = N/V is called the spin density. As both the body temperature and the spin density cannot be altered in living beings, the equilibrium magnetization M0 can only be increased by increasing the magnetic flux density B0.
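The following sketch puts numbers to the Boltzmann population difference and to the Curie-law expression for M0. The proton spin density of water used here is an assumed, approximate value, and the field and temperature are the illustrative numbers from the example above:

    import numpy as np

    GAMMA = 2.675e8        # rad/(s*T), 1H
    HBAR = 1.055e-34       # J*s
    K_B = 1.381e-23        # J/K

    B0, T = 1.0, 310.0     # field in T, body temperature in K (illustrative)

    # Relative population difference of the two Zeeman levels (high-T limit):
    # dN/N ~ gamma*hbar*B0 / (2*k*T)
    dn = GAMMA * HBAR * B0 / (2 * K_B * T)
    print(f"relative population difference: {dn:.2e}")

    # Curie law for spin-1/2: M0 = rho * gamma^2 * hbar^2 * B0 / (4*k*T)
    rho = 6.7e28           # protons per m^3 in water (approximate assumption)
    M0 = rho * GAMMA**2 * HBAR**2 * B0 / (4 * K_B * T)
    print(f"equilibrium magnetization: M0 = {M0:.3e} A/m")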
The equilibrium state of a spin system can be disturbed by a magnetic RF field B1(t) with a frequency ωRF equal to the Larmor frequency ω0, which tilts the magnetization M. Whereas a nuclear magnetic moment μ can only take 2I+1 discrete orientations relative to the static magnetic field B0 (quantization of direction), the macroscopic magnetization M can take any direction in space and change it continuously. The action of a magnetic RF field B1(t), which rotates with the Larmor frequency ω0 around the direction of the static B0 field, can be analyzed most effectively in a rotating frame, i.e., a coordinate system that rotates with the Larmor frequency around the z-axis (Fig. …). The change to a rotating frame with the axes (x′, y′, z) has two advantages:
• As the x′-y′-plane of the rotating frame is synchronized with the RF field, the B1 vector remains stationary in this frame. In the following analysis, we will assume that the B1 field points along the x′-axis (Fig. …).
• As shown in Sect. …, a nuclear magnetic moment μ precesses with the Larmor frequency ω0 around the direction of the B0 field (see Fig. …). Of course, this holds equally for the sum of the nuclear magnetic moments, i.e., for the macroscopic magnetization M. Therefore, an observer watching the precession of the magnetization M from the rotating frame will conclude that the position of the magnetization does not change. From this point of view, the magnetization behaves as if the B0 field were absent (Larmor's theorem).

Summarizing both considerations, it can be concluded that the dynamics of the magnetization M in the rotating frame are determined only by the B1 field. If it points along the x′-axis, the magnetization M precesses around the x′-axis (Fig. …a). Analogous to the resonance condition, the frequency ω1 of this precession is given by

ω1 = γB1.

When looking at this simple rotation of the magnetization M in the y′-z-plane of the rotating frame from a laboratory frame of reference (x, y, z), the movement is superimposed by a markedly faster rotation (B0 > B1) around the z-axis. Thus, within the laboratory frame of reference, the tip of the vector M moves in a helical manner on the surface of a sphere around the B0 field; the length of the vector M remains constant (Fig. …b).

If the magnetization M points along the static field B0 before the RF field B1(t) is switched on, the magnetization is rotated away from its equilibrium position under the influence of the RF field during its duration tp by the flip angle

α = γB1tp.

If the duration tp of the RF field is chosen so as to rotate the magnetization in the rotating frame by 90°, the pulse is denoted as a 90° or π/2 pulse (Fig. …a). Accordingly, the magnetization M is rotated by 180° when the duration of the RF pulse is doubled at the same flux density B1. This pulse, which inverts the magnetization from the positive to the negative z-direction, is called a 180° or π pulse (Fig. …b).

[Fig. …: The RF field in a stationary and in a rotating frame of reference. In the stationary frame (x, y, z), the magnetic RF field B1(t) rotates with the angular frequency ωRF in the x-y-plane around the z-axis. If one observes this rotation from a rotating frame (x′, y′, z), which rotates with the angular frequency ωRF around the z-axis, the B1 vector is stationary. Typically, the rotating frame is chosen in such a way that the B1 field points in the x′-direction.]

Remark: strictly speaking, a short RF pulse with the carrier frequency ωRF excites not only the nuclei that exactly fulfill the resonance condition ωRF = ω0, but also nuclei whose resonance frequency differs slightly from ωRF. This is because the frequency spectrum of an RF pulse of finite duration consists of a continuous frequency band around the nominal frequency ωRF (Fig. …). The width of the frequency distribution is inversely proportional to the duration tp of the pulse: the shorter the pulse, the more broadly the frequency spectrum is distributed around ωRF. If the RF field is irradiated over a very long period (tp → ∞), the spectrum becomes quasi-monochromatic.

To simplify the following analysis, the magnetization M is separated into two components: the longitudinal magnetization Mz, which is parallel to the direction of the static magnetic field B0, and the transverse magnetization Mxy, which is perpendicular to it (Fig. …). In the laboratory frame, the transverse magnetization Mxy precesses with the Larmor frequency ω0; in the rotating frame it remains stationary.
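Given the flip-angle relation α = γB1tp, the pulse duration required for a 90° or 180° rotation follows directly; the B1 amplitude below is an illustrative assumption:

    import numpy as np

    GAMMA = 2.675e8          # rad/(s*T), 1H
    B1 = 10e-6               # RF field amplitude in T (10 uT, illustrative)

    # Flip angle alpha = gamma * B1 * tp  ->  pulse duration for a target angle
    for alpha_deg in (90, 180):
        alpha = np.deg2rad(alpha_deg)
        tp = alpha / (GAMMA * B1)
        print(f"{alpha_deg:3d} deg pulse at B1 = {B1*1e6:.0f} uT: "
              f"tp = {tp*1e6:.1f} us")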
It is instructive to describe the effect of a 90° or 180° pulse on an ensemble of spin-1/2 nuclei within the semiclassical model described in Sect. …. As can be shown, the magnetic RF field induces transitions between the two permitted spin states (precession cones) until the occupation numbers are either equal (90° pulse) or inverted (180° pulse). Furthermore, irradiation of a 90° pulse results in a phase synchronization of the nuclear magnetic moments of the sample, which yields a macroscopic transverse magnetization Mxy whose magnitude is equal to that of the equilibrium magnetization M0. Figuratively speaking, this means that the precession of the transverse magnetization Mxy can be described as a common (phase-coherent) precession of a "spin packet" (Fig. …).

Up to this point, we have assumed that interactions of the nuclear spins with one another and with their environment can be neglected. This assumption is not valid for real spin systems, however, as the magnetization returns to its equilibrium state (Mxy = 0, Mz = M0) after RF excitation. This process is called relaxation. Two different relaxation processes have to be distinguished:
• The relaxation of the longitudinal magnetization Mz, characterized by the longitudinal or spin-lattice relaxation time T1
• The relaxation of the transverse magnetization Mxy, characterized by the transverse or spin-spin relaxation time T2.

[Fig. …: Resonance excitation. a In a rotating frame of reference, which rotates with the Larmor frequency ω0 around the direction of the B0 field, the magnetization M precesses with the frequency ω1 around the stationary B1 field. b In the stationary frame, this simple rotation is superimposed by the markedly faster rotation around the z-axis; the tip of the vector M therefore moves in a helical manner on the surface of a sphere.]

[Fig. …: 90° and 180° pulse. If one chooses the rotating frame so that the RF pulse is irradiated along the x′-axis, the magnetization M is rotated (a) by a 90° pulse to the y′-direction and (b) by a 180° pulse to the negative z-direction.]

In real spin systems, every nucleus is surrounded by other intra- and intermolecular magnetic moments, which are in motion due to rotations, translations, and vibrations of molecules as well as exchange processes. These processes induce an additional fluctuating magnetic field Blok(t) at the position of a given nucleus, which adds to the external field. As the movements and exchange processes are random, the fluctuating fields differ in time from nucleus to nucleus, in contrast to the coherent RF field B1(t) irradiated from outside. Like any other temporal process, the locally fluctuating magnetic fields Blok(t) can be decomposed into their frequency components.

Remark: the decomposition of a function into harmonic (i.e., sinusoidal) basis functions is denoted as Fourier analysis; the mathematical operation that gives the intensity (amplitude) of the harmonic basis functions is the Fourier transformation. If the given function is periodic with period T, it can be decomposed into a sum of sine and/or cosine functions with the discrete frequencies ω, 2ω, 3ω, … (ω = 2π/T). In contrast, a nonperiodic function has a continuous spectrum of frequencies.

The contribution of the different frequency components to the fluctuating local field Blok(t) is described by the spectral density function J(ω). A general feature of this function is that the more rapid the molecular motion, the broader the frequency spectrum (Fig. …).

[Fig. …: RF pulse in the time and frequency domain. a RF pulse with carrier frequency ωRF and duration tp. b Fourier transformation of the RF pulse. Due to its finite duration, the frequency spectrum of the pulse is not monochromatic but contains an entire frequency band distributed around the nominal frequency ωRF.]
[Fig. …: Phase synchronization by a 90° pulse. The 90° pulse leads to a synchronization of the phases of the magnetic moments μ of the nuclei in the sample (spin packet), which results in a macroscopic transverse magnetization Mxy, the magnitude of which corresponds to that of the longitudinal magnetization before irradiation of the 90° pulse. Only that part of the magnetic moments of the sample which is distributed anisotropically on the precession cone is shown.]

[Fig. …: Definition of the longitudinal and transverse magnetization. As the macroscopic magnetization M precesses in the stationary frame around the z-axis, it is beneficial to split it into two components: the rotating transverse magnetization Mxy and the longitudinal magnetization Mz.]

In order to understand the effect of the fluctuating local magnetic fields Blok(t) on a spin system, the components parallel and perpendicular to B0 have to be discussed separately. Whereas the parallel component contributes exclusively to T2 relaxation, the perpendicular component influences both T1 and T2 relaxation:
• The field component perpendicular to the B0 field induces, in analogy to the external RF field B1(t), transitions between the energy levels (precession cones) of an individual spin. The probability of these transitions depends on the intensity of the frequency component of the fluctuating fields oscillating at the Larmor frequency ω0: the higher the spectral density J(ω0), the more transitions are induced. As Fig. … shows, J(ω0) assumes a maximum when the limiting frequency ωg of the spectral density function is comparable to the Larmor frequency ω0. This relaxation process allows the excited spin system to emit and absorb photons of energy ħω0 until the Boltzmann distribution over the energy levels is reached. The energy difference between the excited state and the equilibrium state is dissipated to the surrounding medium, the "lattice." Since the change in the occupation numbers of the spin states (precession cones) is related to a change in the macroscopic longitudinal magnetization Mz, the described mechanism contributes to longitudinal relaxation. Moreover, it contributes to T2 relaxation, as the locally induced transitions between the precession cones destroy the phase coherence between those spins which form, as a spin packet, the macroscopic transverse magnetization (cf. Fig. …).
• The component of the fluctuating field Blok(t) oriented parallel to the z-axis locally modulates the static field B0 at the position of a nucleus and thereby changes the precession frequency ω0 of its nuclear magnetic moment μ. Since the local fluctuations seen by the nuclei are spatially uncorrelated, the precessing magnetic moments within a sample lose their phase coherence, which causes the transverse magnetization to decay (see Fig. …). Given that the effect of the high-frequency components of the fluctuating field vanishes when averaged over time, only the quasi-static frequency components, the intensity of which is approximately given by J(ω = 0), have a measurable effect on the transverse magnetization (see Fig. …). As no transitions between the energy levels (precession cones) are induced by this mechanism, the longitudinal magnetization Mz remains unchanged; the mechanism therefore contributes solely to transverse relaxation.
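The text characterizes the spectral density J(ω) only qualitatively. A common analytic model, assumed here purely for illustration, is a Lorentzian J(ω) ∝ 2τc/(1 + ω²τc²) with a correlation time τc describing the molecular mobility; the sketch shows that J(ω0) peaks when the motion is "matched" to the Larmor frequency, as described above:

    import numpy as np

    def J(omega, tau_c):
        """Lorentzian spectral density (common model, assumed here):
        J(omega) = 2*tau_c / (1 + (omega*tau_c)**2)."""
        return 2 * tau_c / (1 + (omega * tau_c) ** 2)

    omega0 = 2 * np.pi * 63.9e6            # Larmor frequency at 1.5 T, rad/s

    # Correlation times for slow (solid-like), matched, and fast (fluid) motion
    for label, tau_c in [("slow", 1e-6), ("matched", 1 / omega0), ("fast", 1e-12)]:
        print(f"{label:8s} tau_c = {tau_c:.2e} s -> "
              f"J(omega0) = {J(omega0, tau_c):.3e} s")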
The qualitative discussion of the relaxation mechanisms reveals that their effectiveness depends on two different factors, namely the magnitude and the temporal characteristics of the field fluctuations. The dependence on the magnitude is exploited when using paramagnetic contrast agents (see Sect. …), which possess unpaired electron spins and consequently a magnetic moment. Considering that the magnetic moment of an electron is about 660 times larger than that of a proton, one can easily understand why even the slightest amounts of paramagnetic substances can lower the relaxation times considerably.

For spin systems with a sufficiently high molecular mobility, the relaxation processes can be described by exponential functions with the time constants T1 and T2: the longitudinal magnetization increases exponentially toward its equilibrium value Mz = M0, and the transverse magnetization decreases exponentially toward Mxy = 0. Figure … shows the exponential relaxation of both magnetization components after excitation of the spin system by a 90° pulse and provides a simple interpretation of the relaxation times T1 and T2:
• The longitudinal relaxation time T1 gives the time required for the longitudinal magnetization to grow back, after a 90° pulse, to 63% of its equilibrium value M0.
• The transverse relaxation time T2 gives the time required for the transverse magnetization to drop, after a 90° pulse, to 37% of its original magnitude.

[Fig. …: Schematic representation of the spectral density function J(ω) for three substances with a different thermal mobility of the constituent atoms or molecules. a If the atoms or molecules move very slowly (as in solids), the intensity of the high-frequency components is very low. b This is different in fluids, where the atoms or molecules move very rapidly, so that the spectral density function contains high-frequency components to a significant degree. c At a given frequency ω0, the intensity J(ω0) attains a maximum if the cut-off frequency ωg of the spectral density function approximately corresponds to ω0. At low frequencies, J(ω) is nearly independent of the frequency, so that the density of the quasi-static frequency components can be approximated by J(ω = 0).]

[Fig. …: Dephasing of the transverse magnetization. The transverse magnetization Mxy of the sample is split into several magnetization components, which precess with slightly differing Larmor frequencies around the direction of the B0 field. a Immediately after the 90° pulse, all magnetization components are aligned in parallel. b-d Afterward, the components dephase due to their different Larmor frequencies, and the macroscopic transverse magnetization decays.]

The process of transverse relaxation can also be described intuitively on the macroscopic level. To this end, the transverse magnetization Mxy is split into different magnetization components, or spin packets. Whereas the spins within each spin packet precess with the same Larmor frequency, the spins of different packets differ slightly in their Larmor frequencies. Right after excitation, all components of the magnetization point in the same direction; shortly afterward, however, some parts precess more quickly than others around the direction of the B0 field.
Due to this, the components fan out (dephasing), and the resulting transverse magnetization decreases (Fig. …). In real MR experiments, macroscopic samples are always examined, so that not only the fluctuating local magnetic fields but also spatial inhomogeneities of the external field B0, introduced by technical imperfections, contribute to the transverse relaxation. As both effects superimpose, the resulting effective relaxation time T2* is always shorter than the real, substance-specific transverse relaxation time T2.

Relaxation times in solids and fluids differ markedly (Fig. …). Whereas the longitudinal relaxation in solids can take hours or even days, in pure fluids it takes only some seconds. This difference arises because the spectral density J(ω0) at the Larmor frequency is much larger in fluids than in solids, in which the low-frequency components dominate (see Fig. …). For the same physical reason, the T2 relaxation time in solids usually amounts to only some microseconds, whereas in fluids it is only slightly shorter than the longitudinal relaxation time T1. Soft tissues range, based on their consistency, between solids and pure fluids: with regard to their relaxation behavior, they can in general be treated as viscous fluids. Table … summarizes representative proton relaxation times for different biological tissues. Due to the considerable differences in the tissue relaxation times, it is possible to acquire MR images with an excellent tissue contrast even when the proton densities of the tissues or organs differ only slightly from one another. When interpreting relaxation times, two aspects have to be taken into account:
• The relaxation time T1 of biological tissues strongly depends on the Larmor frequency, whereas the relaxation time T2 is nearly independent of it. When comparing T1 values, one therefore needs to consider the magnetic flux density B0 at which the T1 measurement was performed.
• Relaxation processes often consist of multiple components, so that the description by a mono-exponential function is only a rough approximation. The relaxation times given in Table … therefore represent only weighted mean values of an entire spectrum of exponential functions characterizing the relaxation behavior of protons in different cell and tissue compartments between which the water exchange is slow. At the timescale relevant for MRI, however, the relaxation processes of most tissues can be approximated rather well by a single exponential function. An exception is fat-containing tissue (such as subcutaneous fatty tissue or bone marrow), for which at least two exponential functions have to be considered to parameterize the observed relaxation processes.

Figure … shows the general setup of an MR experiment; technical details are presented in Sect. …. The sample to be examined is located within a very homogeneous static magnetic field B0, which is created either by a permanent magnet or by a (superconducting) coil. The RF field required for the excitation of the spin system is generated by a transmit coil connected to the RF transmit system. This RF coil is positioned in such a way that the RF field B1(t) is irradiated perpendicular to the B0 field into the sample volume.
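A minimal sketch of the mono-exponential relaxation laws and of the 63%/37% interpretation given above; the T1 and T2 values are illustrative, tissue-like numbers:

    import numpy as np

    M0 = 1.0
    T1, T2 = 0.9, 0.08      # illustrative tissue-like values in s

    def Mz(t):
        """Longitudinal recovery after a 90 deg pulse: Mz(t) = M0*(1 - exp(-t/T1))."""
        return M0 * (1 - np.exp(-t / T1))

    def Mxy(t):
        """Transverse decay after a 90 deg pulse: Mxy(t) = M0*exp(-t/T2)."""
        return M0 * np.exp(-t / T2)

    # At t = T1 the longitudinal magnetization has recovered to 1 - 1/e = 63%,
    # at t = T2 the transverse magnetization has dropped to 1/e = 37%.
    print(f"Mz(T1)/M0  = {Mz(T1):.2f}")    # -> 0.63
    print(f"Mxy(T2)/M0 = {Mxy(T2):.2f}")   # -> 0.37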
Remark: whereas atomic nuclei with a spin quantum number of I ≥ 1 can interact with both the electric and the magnetic component of the electromagnetic RF field, spin-1/2 nuclei are only affected by the magnetic component B1(t) of the RF field.

After excitation of the spin system by an RF pulse, the precessing transverse magnetization Mxy in turn induces a weak alternating voltage in a receiver coil, which in general is identical to the transmit coil (Fig. …a). The measured voltage is amplified, filtered, digitized, and fed to the computer of the MR system. The measured MR signal s(t) has the form of a damped oscillation (Fig. …b), which is denoted as the free induction decay (FID). The FID signal has the following characteristic features:
• It oscillates with the Larmor frequency ω0 of the excited nuclei.
• It decays in time with the time constant T2*.
• Its initial amplitude is proportional to the number N of excited spins in the sample (N = ρV ∝ M0V).

If the sample contains nuclei of a given type whose resonance frequencies differ slightly due to intramolecular interactions (see Sect. …), the MR signal induced in the receiver coil will consist of several interfering decay curves. Such a curve is rather complicated to analyze and interpret. Therefore, the detected curve is usually split up into its frequency components (Fourier analysis, see Sect. …) and presented as a frequency spectrum. Both types of description are merely different representations of the same data, which can be transformed into one another mathematically by a Fourier transformation.

Example: Figure …b,c illustrates the relation between the description of the MR signal in the time and the frequency domain using the example of a substance whose MR spectrum shows only one resonance line.

[Fig. …: Principal setup of an MR experiment. The object to be measured is placed within a homogeneous static magnetic field B0. Excitation of the spin system is performed by an RF field B1(t) irradiated perpendicularly to B0 by an RF coil. After excitation, the MR signal of the sample is detected by an RF coil and transferred via a receiver channel to the computer of the MR system. (For details, see Sect. ….)]

For the quantitative analysis of an MR spectrum, the following features are important:
• The center of the resonance curve is at the Larmor frequency ω0.
• The full width Δω at half maximum of the curve is related to the characteristic time constant T2* of the FID by the relation Δω = 2/T2*.
• The area under the curve is approximately proportional to the number of excited nuclei in the sample.

In an MR experiment, only the RF signal induced by the rotating transverse magnetization Mxy in the receiver coil can be determined by measurement (cf. Sect. …). Nevertheless, a large variety of MR experiments can be realized that differ in the way the spin system is excited and prepared by means of RF pulses before the signal is acquired. A defined sequence of RF pulses, which is usually repeated several times, is called a pulse sequence. In the following, three "classical" pulse sequences that are frequently used for MR experiments are described (imaging sequences are described in Sect. …):
• The saturation recovery sequence
• The inversion recovery sequence
• The spin-echo sequence.
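The relation Δω = 2/T2* between the decay of the FID and the linewidth of its spectrum can be verified numerically. The sketch simulates a single-line FID and measures the FWHM of the absorption-mode spectrum; all parameters are illustrative, and in hertz the expected width is Δω/(2π) = 1/(πT2*):

    import numpy as np

    T2s = 0.05                              # effective decay time T2* in s (illustrative)
    f0 = 200.0                              # offset frequency in Hz (illustrative)
    dt, n = 1e-4, 2 ** 16                   # sampling interval and number of points
    t = np.arange(n) * dt

    fid = np.exp(-t / T2s) * np.exp(2j * np.pi * f0 * t)
    spec = np.fft.fft(fid).real             # absorption-mode spectrum
    freqs = np.fft.fftfreq(n, dt)           # frequency axis in Hz

    half = spec.max() / 2
    fwhm_hz = np.ptp(freqs[spec >= half])   # width of the region above half height
    print(f"measured FWHM: {fwhm_hz:.2f} Hz")
    print(f"expected 1/(pi*T2*): {1/(np.pi*T2s):.2f} Hz")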
The saturation recovery (SR) sequence consists of only a single 90° pulse, which rotates the longitudinal magnetization Mz into the x-y-plane. The FID signal is acquired immediately after the RF excitation of the spin system. After a delay, the repetition time TR, the sequence is repeated. The SR sequence is thus described schematically by the pulse scheme (90° - AQ - TR), where AQ denotes the signal acquisition period (Fig. …a). If the repetition time TR is long compared with T1, the magnetization M relaxes back to its equilibrium state (see Fig. …). In this case, the initial amplitude of the FID, even after repeated excitations, depends only on the equilibrium magnetization M0 and does not show any T1 dependence. However, if the repetition time TR is shortened to a value comparable with T1, the longitudinal magnetization Mz will not fully relax between excitations, and the following 90° pulse will rotate the reduced longitudinal magnetization Mz(TR) = M0[1 − exp(−TR/T1)] into the x-y-plane (Fig. …b, c). Under the assumption that the transverse magnetization has decayed to zero after the repetition time TR (TR >> T2*), the following expression is obtained for the initial amplitude SSR of the FID signal:

SSR ∝ N (1 − e^(−TR/T1)),

which depends exclusively on the relaxation time T1 and the number N of excited spins in the sample.

[Fig. …: Free induction decay (FID) and frequency spectrum. a After excitation of the spin system by a 90° pulse, the magnetization Mxy precesses with the Larmor frequency ω0 around the direction of the B0 field and induces an electric voltage in the receiver coil. b The measured FID signal s(t) has the form of a damped oscillation, the frequency of which is given by the Larmor frequency ω0; the decay of the signal is governed by the time constant T2*. c A Fourier transformation of the FID signal yields the frequency spectrum of the MR signal. The resonance curve has its center at the Larmor frequency ω0; its full width at half maximum (FWHM) is related to the characteristic time constant T2* of the FID by the relation Δω = 2/T2*.]

[Fig. …: Saturation recovery sequence. a Pulse scheme of the SR sequence (AQ: signal acquisition). b The 90° pulse rotates the current longitudinal magnetization into the x-y-plane. During the repetition time TR, the longitudinal magnetization relaxes toward the equilibrium magnetization M0; the speed of this process is described by the longitudinal relaxation time T1. Note that the first 90° pulse rotates the equilibrium magnetization M0 into the x-y-plane, whereas the subsequent 90° pulses rotate the reduced longitudinal magnetization Mz(TR) = M0[1 − exp(−TR/T1)]. c Temporal evolution of the transverse magnetization Mxy in the rotating frame. d Induced MR signal SSR(t).]
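A short sketch of the SR signal equation, showing how the signal saturates as TR grows beyond T1 (all values illustrative):

    import numpy as np

    def s_sr(tr, t1, n=1.0):
        """Initial FID amplitude of the saturation recovery sequence:
        S_SR = N * (1 - exp(-TR/T1))."""
        return n * (1 - np.exp(-tr / t1))

    T1 = 0.9                                  # illustrative T1 in s
    for tr in (0.25, 0.5, 1.0, 2.0, 5.0):     # repetition times in s
        print(f"TR = {tr:4.2f} s -> S_SR/N = {s_sr(tr, T1):.3f}")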
In the inversion recovery (IR) method, the longitudinal magnetization is first inverted by a 180° pulse (inversion pulse), which is followed after an inversion time TI by a 90° pulse (readout pulse). Immediately after the 90° pulse, which rotates the partially relaxed longitudinal magnetization Mz(TI) into the x-y-plane, the FID signal is acquired (Fig. …). The IR sequence is described by the pulse scheme (180° - TI - 90° - AQ).

[Fig. …: Inversion recovery sequence. a Pulse scheme of the IR sequence (AQ: signal acquisition). b Initially, the longitudinal magnetization is inverted by the 180° pulse (inversion pulse); after an inversion time TI, the 90° pulse (readout pulse) rotates the existing longitudinal magnetization Mz(TI) into the x-y-plane. After the 90° pulse, the longitudinal magnetization relaxes toward the equilibrium magnetization M0. c Temporal evolution of the transverse magnetization Mxy in the rotating frame. d Induced MR signal SIR(t).]

The initial amplitude SIR of the FID signal is directly proportional to the longitudinal magnetization immediately before irradiation of the readout pulse, just as in the SR method. In contrast to the SR sequence, however, the change in the longitudinal magnetization is twice as large, and thus, in analogy to the SR signal equation, the following expression is obtained (compare Figs. … and …):

SIR ∝ N (1 − 2e^(−TI/T1)).

The derivation of this relation is based on the assumption that the spin system is in its equilibrium state before it is excited by the inversion pulse. When repeating the IR sequence, one therefore has to make sure that the repetition time TR is markedly longer than the relaxation time T1.

Remark: if the IR sequence is repeated several times with different inversion times TI, it is possible to sample the temporal course of the longitudinal magnetization step by step, since the initial amplitude of the FID signal is directly proportional to the longitudinal magnetization at the time TI (see Fig. …). This procedure is frequently applied in order to determine the relaxation time T1 of a sample from the IR signal equation.

As explained in Sect. …, the temporal decay of the transverse magnetization Mxy is caused by two effects: fluctuating local magnetic fields and spatial inhomogeneities of the magnetic field B0. The transverse magnetization Mxy therefore relaxes not with the substance-specific relaxation time T2 but with the effective time constant T2* (T2* < T2). When determining the relaxation time T2, it is therefore important to compensate for the effect of the field inhomogeneities. This can be done, as E. Hahn showed as early as 1950, by using the so-called spin-echo (SE) sequence. This sequence exploits the fact that the dephasing of the transverse magnetization caused by B0 inhomogeneities is reversible, since they do not vary in time, whereas the influence of the fluctuating local magnetic fields is irreversible.

In order to understand the principle of the SE sequence with the pulse scheme (90° - τ - 180° - τ - AQ; see Fig. …a), we initially neglect the influence of the fluctuating local magnetic fields and consider solely the static magnetic field inhomogeneities.

[Fig. …: Explanation of the spin-echo experiment in the rotating frame. For the sake of simplicity, the substance-specific transverse relaxation is not considered in this figure. a The 90° pulse rotates the longitudinal magnetization into the x′-y′-plane. b,c In the course of time, the magnetization components, which together form the transverse magnetization Mxy, dephase, so that the transverse magnetization decays with the characteristic time constant T2* (see Fig. …). d,e Irradiation of the 180° pulse along the x′-axis mirrors the dephased magnetization vectors at the x′-axis. As neither the precession direction nor the precession velocity of the magnetization components is altered by the 180° pulse, the components rephase and the transverse magnetization increases again. The regeneration of the transverse magnetization is called a spin echo. f At the time TE = 2τ, all magnetization components point in the same direction again. Due to the rephasing effect of the 180° pulse, the amplitude of the spin echo at TE = 2τ is independent of the static inhomogeneities of the B0 field.]

Immediately after the 90° pulse, all magnetization components composing the transverse magnetization Mxy point along the y′-axis (Fig. …a).
Shortly afterward, some components precess faster, others more slowly, around the direction of the B0 field, so that the initial phase coherence is lost (see Fig. …). Viewed from the rotating frame, one observes a fanning out of the magnetization components around the y′-axis (Fig. …b, c). If a 180° pulse is applied along the x′-axis after a delay τ, the magnetization components are mirrored with respect to this axis (Fig. …d). The 180° pulse does not change the rotational direction of the magnetization components, however, but merely inverts their distribution: the faster components now trail the slower ones (Fig. …e). After the time t = 2τ, all magnetization components again point in the same direction, and the signal reaches a maximum (Fig. …f). The 180° pulse thus induces a rephasing of the dephased transverse magnetization, which causes the MR signal to increase again and to form a spin echo (Fig. …). After the spin-echo time TE = 2τ, the echo decays again, as the original FID does, with the time constant T2*.

Due to the rephasing effect of the 180° pulse, the spin-echo signal SSE(TE) is independent of the inhomogeneities of the static magnetic field: the loss of signal at the time t = TE as compared with the initial signal SSE(0) is determined exclusively by the substance-specific relaxation time T2. If one irradiates a sequence of K 180° pulses at the times τ, 3τ, 5τ, …, (2K−1)τ, a spin echo can be detected between each pair of subsequent 180° pulses (Fig. …). The envelope of the echo signals SSE(2τk) (k = 1, 2, 3, …, K) decays exponentially with the relaxation time T2:

SSE(2τk) ∝ N e^(−2τk/T2).

The major advantage of this multi-echo sequence is that the T2 decay can be sampled very efficiently within a single measurement (Fig. …).
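The multi-echo relation SSE(2τk) ∝ N·e^(−2τk/T2) suggests a simple way to estimate T2 from an echo train by a log-linear fit. A sketch with simulated, slightly noisy echo amplitudes (all values illustrative):

    import numpy as np

    # Multi-echo experiment: echo amplitudes S(2*tau*k) = N * exp(-2*tau*k / T2).
    # Simulate an echo train and recover T2 by a log-linear least-squares fit.
    rng = np.random.default_rng(0)
    T2_true, tau, K = 0.08, 0.01, 8          # illustrative values (s, s, echoes)

    t_echo = 2 * tau * np.arange(1, K + 1)   # echo times 2*tau*k
    amps = np.exp(-t_echo / T2_true) * (1 + 0.01 * rng.standard_normal(K))

    slope, _ = np.polyfit(t_echo, np.log(amps), 1)
    print(f"fitted T2 = {-1/slope*1e3:.1f} ms (true: {T2_true*1e3:.0f} ms)")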
All considerations so far have been based on the assumption that the external magnetic field is not altered by the electrons surrounding a nucleus. However, this is not the case, as the electrons interact with the applied external magnetic field. In biological tissues, in which atoms are covalently bound, two related effects need to be considered: diamagnetism and the chemical shift.

Diamagnetism is a general feature of matter and arises because the electrons attempt to shield the interior of the sample against the external magnetic field. In electrodynamics, this effect is described by Lenz's law, which states that the current induced in a circuit by the change of a magnetic field is directed in such a way that the secondary magnetic field induced by the current weakens the primary magnetic field (Fig. …). If a sample is positioned in an external magnetic field, a current is induced in the electron shells of the atoms and molecules, whose magnetic moment is, following Lenz's law, directed against the external field. However, in contrast to the electrons within a macroscopic circuit, the electrons of the electron shell are "frictionless," which means that an induced electron current remains constant until the external magnetic field changes or the sample is removed from the field. The sum of the induced magnetic moments of the electrons per volume is, in analogy to the nuclear magnetization M, denoted as the electron magnetization Me.

For averaging, the volume has to be chosen such that, on the one hand, it contains a great number of atoms and molecules and, on the other hand, it is small compared with the volume of the sample (for example, 1 µm³ of water contains about 3.3 × 10¹⁰ water molecules). The magnetization Me thus represents a macroscopic quantity per definitionem.

Remark: for practical reasons, a distinction is made in electrodynamics between free and bound currents: free currents are experimentally controllable and are linked to macroscopic circuits, whereas bound currents are linked to atomic and molecular magnetic moments in matter. The field related to the free currents is denoted as the magnetic field H (unit: ampere/meter); the field created by the total current, i.e., by both the free and the bound currents, as the magnetic flux density B (unit: tesla). At every point in space, the vector quantities H, B, and Me are related by

Me = χH,

where the dimensionless proportionality constant χ is called the magnetic susceptibility. For diamagnetic substances, χ is, according to Lenz's law, always negative and has a very small absolute value (e.g., water: χ ≈ −9 × 10⁻⁶).

When a diamagnetic sample is brought into an originally homogeneous magnetic field, a magnetization Me is induced according to this relation, which itself creates a magnetic field that counteracts the primary field. Therefore, the field distribution of the magnetic flux density B differs both inside and outside the sample from the original field distribution.

Example: Figure … shows the field distribution of the magnetic flux density B inside and outside a homogeneously magnetized sphere (χ = constant) that has been brought into an originally homogeneous field B0. Inside the sphere, the magnetic flux density is given by B = (1 + 2χ/3)B0. It should be noted, however, that the homogeneously magnetized sphere represents an ideal case in which the B field is homogeneous inside the sphere, whereas in general the B field is inhomogeneous inside an object as well.

[Fig. …: Multi-echo sequence. a Pulse scheme (AQ: signal acquisition). b Decay of the echo amplitudes as a function of time. The decay of the echo amplitudes is determined exclusively by the substance-specific transverse relaxation time T2, whereas the decay and the regeneration of the FID are essentially determined by technically conditioned field inhomogeneities.]

[Fig. …: Lenz's law. When a circuit is moved toward a bar magnet with the magnetic flux density B, a current I is induced in the circuit. This current induces a magnetic dipolar field, which is directed in such a way that it weakens the primary magnetic field. Magnitude and orientation of the dipolar field are described by the magnetic moment μ.]

This discussion yields three important aspects with respect to MR examinations of humans:
• The distribution of the magnetic flux density B in the human body depends on the position, size, form, and magnetic susceptibility of all tissues and organs of the body.
• At the interfaces between tissues with different magnetic susceptibilities, there are local field inhomogeneities.
• The distortion of the external magnetic field caused by the body adds to the technical imperfections of the external field B0.

In MRI, susceptibility-related inhomogeneities of the static magnetic field inside the body are obviously unavoidable and can result in image artifacts. In spectroscopic examinations, however, this problem can be reduced by acquiring MR signals only from small, morphologically homogeneous tissue regions.
Furthermore, it is possible to locally adjust the B0 field by means of external shim coils generating a weak additional magnetic field, so that the homogeneity within the examined region fulfills the requirements. When speaking of the homogeneity of the B0 field within a given region of the body, this refers to the average macroscopic field; on the microscopic scale, the magnetic field is always inhomogeneous.

The Larmor frequency of a nucleus is determined by the local magnetic field Blok at the position of the nucleus, not by the macroscopic field B, which is averaged over a small but microscopically large volume surrounding the nucleus. Denoting the perturbation of the mean field B at the position of a nucleus caused by the surrounding electrons of the molecule by ΔB, we obtain the relation

Blok = B + ΔB.

As experimental and theoretical investigations have shown, the small local field perturbation ΔB is proportional to the macroscopic field,

ΔB = −σB,

which yields the resonance frequency of the nucleus ω = γB(1 − σ). The dimensionless "shielding constant" σ gives the relative resonance frequency shift, which is independent of the magnitude of the magnetic field. This shift depends on the distribution of the electrons around the nucleus and thus has different values in different molecules. The magnitude of the additional field ΔB at the position of a nucleus generally depends on the orientation of the molecule relative to the macroscopic field B. In molecules that rotate rapidly, such as in fluids and soft tissues, this chemical-shift anisotropy averages out, so that the quantity σ can be treated as a direction-independent constant describing the shielding effect of the electron shell averaged over all spatial directions.

As the absolute value of the frequency shift cannot easily be measured, it is usually determined relative to the resonance frequency ωR of a reference substance. The difference (ω − ωR) of the resonance frequencies is expressed as a dimensionless constant relative to the frequency ω0 = γB0 of the MR system in parts per million (ppm):

δ = (ω − ωR)/ω0 × 10⁶.

The chemical shift δ provides information about how the atom containing the nucleus under study is bonded in the molecule and thus makes MR spectroscopy a powerful tool for the determination of the structure of molecules as well as for the investigation of biochemical processes. For the ¹H nucleus, which is surrounded by only one electron, the chemical-shift range is about 10 ppm; for atoms with several electrons (e.g., ¹³C, ¹⁹F, and ³¹P), it can amount to several hundred ppm. To resolve these small frequency differences, a strong and homogeneous static magnetic field is necessary (on the order of 1 T or more, with a relative field homogeneity ΔB/B at the ppm to sub-ppm level, depending on the nucleus).

[Fig. …: Variation of the magnetic field by a diamagnetic sphere. Distribution of the B field inside and outside a homogeneously magnetized sphere, which was positioned in an originally homogeneous magnetic field. It should be noted that field variations caused by biological tissue are markedly weaker than illustrated here.]
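Two small worked numbers for this section: the water-fat chemical shift expressed in hertz at a given field, and the field offset inside a homogeneously magnetized water sphere according to B = (1 + 2χ/3)B0. The field strength is an illustrative choice; the shift and χ are the approximate values quoted above:

    import math

    GAMMA = 2.675e8                  # rad/(s*T), 1H
    B0 = 1.5                         # T (illustrative)
    f0 = GAMMA * B0 / (2 * math.pi)  # Larmor frequency in Hz

    # Chemical shift: delta [ppm] corresponds to a frequency offset
    # df = delta * 1e-6 * f0, e.g., the ~3.4 ppm water-fat shift.
    delta_ppm = 3.4
    print(f"{delta_ppm} ppm at {B0} T -> {delta_ppm*1e-6*f0:.0f} Hz")

    # Susceptibility: field offset inside a water sphere (chi ~ -9e-6).
    chi = -9e-6
    print(f"field offset inside a water sphere: {2*chi/3*B0*1e6:+.1f} uT")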
If the object to be imaged, such as the human body, is divided into small cuboid volume elements (voxels), the task in MR imaging is to distinguish the signal contributions of the individual voxels to the detected summation signal from one another and to present them in the form of sectional images (tomograms). This can be achieved by superimposing on the homogeneous magnetic field B0 an additional magnetic field with a well-defined dependence on the spatial position, so that the Larmor frequency of the MR signal becomes a function of space. In practice, image reconstruction is achieved almost exclusively by means of magnetic gradient fields. These are three additional magnetic fields Bx, By, and Bz, whose field vectors point in the z-direction and whose field strengths depend linearly on the spatial position x, y, or z, respectively (Fig. …). If the z-components of the three magnetic gradient fields are denoted by Bx, By, and Bz, the fields can be expressed as

Bx = Gx·x, By = Gy·y, Bz = Gz·z,

where the proportionality constants Gx, Gy, and Gz describe the magnitude or steepness of the orthogonal gradient fields.

Remark: the magnetic gradient fields are referred to for short as x-, y-, or z-gradients. What is meant are magnetic fields Bx, By, and Bz whose magnitude varies linearly along the x-, y-, or z-axis, respectively (see Fig. …). In order to avoid image distortions, the magnitude of the gradients has to be chosen such that the local field variations are markedly greater than the local inhomogeneities of the main magnetic field B0; typical values are between about 10 and 50 mT/m. Technically, the gradient fields Bx, By, and Bz are produced by three coil systems (gradient coils), which can be operated independently of one another.

Example: assuming a patient diameter in the x-direction of x = 0.5 m, a magnetic flux density of the static field of B0 = 1 T, and a gradient strength of Gx = 10 mT/m, the magnetic field B = B0 + Gx·x increases within the patient (−x/2 ≤ x ≤ +x/2) linearly from 0.9975 to 1.0025 T.

In MR imaging, the magnetic gradient fields are used in two different ways:
• For the selective excitation of the nuclear spins in a partial body region (e.g., a slice)
• For position encoding within an excited partial body region (e.g., a slice).

[Fig. …: For image reconstruction, the homogeneous magnetic field B0 shown in a is superimposed by additional magnetic fields Bx, By, and Bz, so-called gradient fields, the field vectors of which point in the z-direction and the magnitude of which (length of the black arrows) depends linearly on the spatial coordinate x, y, or z, respectively. b and c show the field distributions when the field B0 is superimposed by a gradient field By or Bz, respectively. The open arrows indicate the direction of the field variation, whereas the constants Gy and Gz represent the magnitude of the field variation per unit length. If the z-components of the two magnetic gradient fields are given by By and Bz, then the gradient strengths are defined by Gy = ΔBy/Δy and Gz = ΔBz/Δz, respectively.]

An MR signal can in principle only be detected from the volume in which the nuclei have previously been excited by an RF pulse. This fact is used in planar imaging methods in order to reduce the primarily 3D reconstruction problem to a 2D one by selectively exciting only the nuclei in a thin slice of the body.

Remark: depending on the type of the selectively excited partial body volume, one distinguishes between single-point, line, planar, and volume sampling strategies (Fig. …). As the intensity of the detected MR signal is proportional to the number of nuclei within the excited volume, the different strategies differ markedly in the time required for the acquisition of qualitatively comparable MR images.
Due to the long measurement times, single-point and line-scanning techniques have not been successful in clinical practice. In order to selectively excite a distinct slice of the body, the homogeneous static magnetic field B0 is superimposed by a gradient field (slice-selection gradient) that varies perpendicular to the slice, i.e., for an axial slice, a gradient field in the longitudinal direction of the body. Due to this superposition, the Larmor frequency ω of the nuclei varies along the direction of the gradient. If we consider, for instance, a z-gradient of magnitude Gz, the Larmor frequency is given by (see Fig. …c)

ω(z) = γ(B0 + Gz·z).

Consequently, an object slice z1 ≤ z ≤ z2 is characterized by a narrow frequency interval γ(B0 + Gz·z1) ≤ ω ≤ γ(B0 + Gz·z2). If one irradiates an RF pulse whose frequency spectrum coincides with this frequency range, only the nuclei within the chosen slice are excited (Fig. …). For the definition of a body slice, this has two implications:
• The width d = z2 − z1 of the slice can be varied by changing either the bandwidth of the RF pulse, i.e., the width of its frequency distribution, or the gradient strength Gz.
• The position of the slice can be altered by shifting the frequency spectrum of the RF pulse.

[Fig. …: The main magnetic field B0 is superimposed by a magnetic gradient field Bz = Gz·z in the z-direction (slice-selection gradient), so that the Larmor frequency ω(z) = γ(B0 + Gz·z) of the nuclei depends linearly on the spatial coordinate z. The slice z1 ≤ z ≤ z2 within the object is thus unambiguously described by the frequency interval ω(z1) ≤ ω(z) ≤ ω(z2). If one irradiates an RF pulse with a frequency spectrum that corresponds to this frequency interval, only the nuclei within the chosen slice are excited.]

The practical realization of the concept of slice-selective excitation requires not only the shape of the RF pulse but also the switching scheme of the slice-selection gradient to be carefully optimized:
• Pulse modulation: as shown in Fig. …, the frequency spectrum of a rectangular RF pulse consists of several frequency bands of varying intensity. If such a pulse were used for the RF excitation, the profile of the excited slice would be defined insufficiently. In order to obtain a uniform distribution of the transverse magnetization over the slice width, the shape of the selective RF pulse is modulated so that its frequency spectrum becomes as rectangular as possible ("sinc pulse," see Fig. …a).
• Compensation gradients: if one imaginatively dissects an object slice into several thin subslices, the magnetization components of all subslices are deflected by the same angle from the z-direction when an optimized RF pulse is used for slice-selective excitation. However, the magnetization components are dephased at the end of the excitation period tp, since the Larmor frequencies of the distinct subslices differ from one another while the slice-selection gradient is switched on. This effect can be compensated by reversing the polarity of the gradient field for a well-defined period after the RF excitation (Fig. …b).

[Fig. …: Pulse modulation and gradient refocusing. a In order to obtain an approximately rectangular slice profile, an RF pulse is used whose envelope is not rectangular but modulated in time. b If one dissects a thicker slice of the object into several thin subslices, an optimized RF pulse deflects the magnetization of each subslice by the same angle from the z-direction, but the magnetization components are dephased after the RF excitation, because the Larmor frequencies in the subslices differ. If a 90° pulse of duration tp is used for slice-selective excitation, the dephasing effect can be compensated to a good approximation by inverting the gradient field after the excitation for the duration tp/2.]
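From ω(z) = γ(B0 + Gz·z) it follows that the slice thickness is d = 2πΔf/(γGz) for an RF pulse of bandwidth Δf, and that shifting the carrier frequency moves the slice. A sketch with illustrative numbers (field, gradient, bandwidth, and frequency offset are all assumed values):

    import numpy as np

    GAMMA = 2.675e8                     # rad/(s*T), 1H
    B0, GZ = 1.5, 10e-3                 # field (T) and slice gradient (T/m), illustrative

    # Slice thickness from the RF bandwidth: d = 2*pi*BW / (gamma*Gz),
    # where BW is the RF pulse bandwidth in Hz.
    BW = 1000.0                         # RF bandwidth in Hz (illustrative)
    d = 2 * np.pi * BW / (GAMMA * GZ)
    print(f"slice thickness: d = {d*1e3:.2f} mm")

    # Slice position from the carrier-frequency offset df relative to gamma*B0/2pi:
    df = 8518.0                         # offset in Hz (illustrative)
    z0 = 2 * np.pi * df / (GAMMA * GZ)
    print(f"slice center: z0 = {z0*1e3:.1f} mm")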
for the sake of simplicity, relaxation effects are neglected in this connection. if the body to be imaged is placed in a homogeneous magnetic field with the flux density b0, the magnetization components of all voxels in the excited slice will precess with the same frequency around the direction of the b0 field. thus, the frequency spectrum consists of only a single resonance line at the larmor frequency ω0 = γb0 - it does not contain any spatial information. however, if a magnetic gradient field, e.g., bx = gx x, is switched on during the acquisition phase of the mr signal (fig. . . ), the larmor frequency is related to the position x (see fig. . . ) by the resonance condition ω(x) = γ(b0 + gx x). in other words, nuclei in parallel strips oriented perpendicular to the direction of the readout (or frequency-encoding) gradient will experience a different magnetic field and thus contribute with different larmor frequencies ω(x) to the detected mr signal of the excited slice - the spatial information is encoded in the resonance frequency. in order to determine the contribution of the distinct frequency components to the summation signal, a fourier transformation of the measured fid has to be performed (cf. sect. . . ). the intensity i(ω) of the resulting spectrum at the frequency ω is proportional to the number of nuclei precessing with this frequency, i.e., to the number of nuclei that are, according to eq. . . , at the position x = (ω - γb0) / (γ gx). the frequency spectrum of the fid signal therefore gives the projection of the spin density distribution in the excited slice onto the direction of the readout gradient (fig. . . ). pulse modulation and gradient refocusing: a in order to obtain an approximately rectangular slice profile, one uses an rf pulse, the envelope of which is not rectangular but modulated in time. b if one dissects a thicker slice of the object into several thin subslices, an optimized rf pulse will deflect the magnetization of each subslice by the same angle from the z-direction, but the magnetization components are dephased after rf excitation, because the larmor frequencies in the subslices differ. if a 90° pulse with duration tp is used for slice-selective excitation, then the dephasing effect can be compensated in good approximation by inverting the gradient field after excitation for the duration tp/2. readout or frequency-encoding gradient: in frequency encoding, the main magnetic field b0 is superimposed with a gradient field (here bx = gx x) during the acquisition of the rf signal, so the precession frequency of the transverse magnetization in the selectively excited slice becomes a function of the coordinate x. remark: when explaining the concept of frequency encoding, we assumed that there is only one resonance line in the frequency spectrum of the excited body region in the absence of the readout gradient. if this assumption is not fulfilled, i.e., if the spectrum contains several resonance lines due to the chemical shift effects described in sect. . . . , then these frequency shifts will be interpreted by the decoding procedure (i.e., the fourier analysis) of the fid signal as position information. consequently, the spin density projections of molecules with different chemical shifts are shifted in space against one another. in 1h imaging, the situation is rather easy, as only two dominant proton components contribute to the mr signal of the organism, the protons of the water molecules and those of the ch2 groups of fatty acids. as the resonance frequencies of the two components differ by about . ppm (fig. . . ), fat- and water-containing structures of the body are slightly shifted against one another in the readout direction. this chemical-shift artifact becomes apparent predominantly at the interfaces between fat- and water-containing tissue.
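the decoding step - fourier transformation of the fid measured under a readout gradient - can be illustrated with a small numerical experiment. the sketch below simulates, in the rotating frame and ignoring relaxation, the summation signal of a 1d spin density and recovers its projection by an fft. the object, gradient strength, and sampling parameters are arbitrary assumed values, and the result is exact only up to the usual fft index conventions (fftshift, possible mirroring).

import numpy as np

GAMMA = 2 * np.pi * 42.577e6        # rad/(s*t) for 1h
gx, dt, n = 5e-3, 10e-6, 256        # readout gradient, dwell time, samples (assumed)

fov = 2 * np.pi / (GAMMA * gx * dt) # field of view consistent with the sampling theorem
x = (np.arange(n) - n // 2) * fov / n

rho = np.zeros(n)                   # a simple 1d "object": one rectangular strip
rho[n // 2 + 10 : n // 2 + 50] = 1.0

t = np.arange(n) * dt               # demodulated fid: sum over all strips, each
s = np.exp(-1j * GAMMA * gx * np.outer(t, x)).dot(rho)   # precessing at gamma*gx*x

projection = np.abs(np.fft.fftshift(np.fft.fft(s)))      # fourier transform decodes position
print(projection.argmax())          # peak lies inside the strip (up to fft conventions)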
from a technical point of view, the fid signal s(t) can only be sampled and stored in discrete steps over a limited period of time taq. consequently, there is only a limited number n = taq/Δt of data points that can be used for the fourier transformation. for this reason, the spatial sampling interval Δx of the spin density projection is limited too. the following relations hold between the number n of data points, the maximum object size x, the spatial sampling interval Δx, the temporal sampling interval Δt, and the gradient strength gx (sampling theorem): x = n Δx and Δx = 2π / (γ gx taq) = 2π / (γ gx n Δt). example: if the fid signal is sampled times at Δt = µs in the presence of a magnetic gradient field of the strength gx = . mt/m, then the resolution along the x-axis is Δx = . mm, and the maximum object size that can be imaged is x = nΔx = cm. principle of frequency encoding: if the fid signal from a slice is measured in the presence of a gradient field bx = gx x (see fig. . . ), then nuclei in strips oriented perpendicular to the direction of the gradient contribute with different larmor frequencies ω(x) = γ(b0 + gx x) to the measured mr signal. the contribution i(ω) of the distinct frequency components to the summation signal can be calculated by a fourier transformation of the fid signal. as the intensity i(ω) of the resulting spectrum at the frequency ω is, on the one hand, proportional to the number of nuclei precessing with this frequency and, on the other hand, the spatial information is encoded in the frequency, the fourier transformation yields the projection of the spin density distribution within the considered object slice onto the direction of the readout gradient. in vivo 1h spectrum of a human thigh at . t: the two resonance lines can be attributed to protons in water and in the ch2 groups of fatty acids.
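the sampling-theorem relations can be checked with a few lines of python; the numbers below are assumed stand-ins for the elided values in the example above, so the printed results will differ from the book's.

GAMMA_BAR = 42.577e6                # hz/t; 1/(gamma_bar*g*taq) equals 2*pi/(gamma*g*taq)
n = 256                             # number of sampled data points (assumed)
dt = 10e-6                          # temporal sampling interval in s (assumed)
gx = 10e-3                          # readout gradient strength in t/m (assumed)

taq = n * dt                        # total acquisition time
dx = 1.0 / (GAMMA_BAR * gx * taq)   # spatial sampling interval
fov = n * dx                        # maximum object size

print(f"taq = {taq * 1e3:.2f} ms, dx = {dx * 1e3:.2f} mm, fov = {fov * 1e2:.1f} cm")

note the trade-offs encoded in the formula: a longer readout or a stronger gradient refines dx, whereas the field of view is fixed by the dwell time and the gradient strength alone.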
with frequency encoding, the mr signal is sampled at discrete points in time tn = nΔt (1 ≤ n ≤ n). according to eq. . . , the transverse magnetization at the position x precesses under the influence of the readout gradient until the time tn by the angle φn(x) = γ gx x tn. the spatial information is therefore encoded via the frequency ω(x) in the phase angles φn(x) (1 ≤ n ≤ n). however, the same phase angles can be realized by increasing the gradient strength gnx = nΔgx in equidistant steps Δgx at a fixed switch-on time of the gradient. this equivalent approach is called phase encoding. the concept of phase encoding can easily be realized by applying a magnetic gradient field, e.g., bx = gx x, for a fixed time tx before the fid signal is detected (fig. . . ). under the effect of this phase-encoding gradient, the magnetization at the position x precesses by the phase angle φn(x) = γ gnx x tx = kn x, where the parameter kn = γ gnx tx = γ nΔgx tx is named spatial frequency. after switching off the phase-encoding gradient, the magnetization components of the voxels in the slice precess again with the original, position-independent larmor frequency ω0 = γb0 around the direction of the b0 field - now, however, with position-dependent phase angles φn(x). this is to say, in phase encoding, all magnetization components of the excited voxels contribute to the detected mr signal with the same frequency ω0, but with differing phases φn(x). in order to calculate the projection of the spin density distribution in the slice onto the direction of the phase-encoding gradient, the chosen sequence is repeated n times with different spatial frequencies kn = n(γΔgx tx) = nΔk (1 ≤ n ≤ n) (fig. . . ). however, in contrast to frequency encoding, during phase encoding not the entire fid is sampled, but only the mr signal s(kn, t0) at a definite time t0. after n measurements (sequence cycles), the spin density projection can be calculated by a fourier transformation of the acquired data set s(Δk, t0), s(2Δk, t0), s(3Δk, t0), …, s(nΔk, t0). in practical mri, techniques of image reconstruction have prevailed that merely differ in the way the aforementioned techniques of selective excitation and spatial encoding are combined. phase-encoding gradient: in phase encoding, a gradient field (here bx = gx x), the magnitude of which is increased in equidistant steps Δgx at each sequence cycle, is switched on for a fixed time tx before the fid signal is acquired. this method, which p. lauterbur used in 1973 to generate the first mr image, is based on a technique of image reconstruction used in computed tomography. its basic idea is easy to understand: if projections of the spin density distribution of an object slice are available for various viewing angles Φn (1 ≤ n ≤ n), the spin density distribution in the slice can be reconstructed by "smearing back" the (filtered) profiles over the image plane along their viewing directions (fig. . . ). this approach can be implemented easily by making use of the frequency-encoding technique by repeating the sequence shown in fig. . . several times while rotating step by step the direction of the readout gradient in the slice plane. in order to reconstruct a planar image of n × n picture elements (pixels), a minimum of n projections with n data points each is needed. the stepwise rotation of the readout gradient by the angle ΔΦ = 180°/n is performed electronically by a weighted superposition of two orthogonal gradient fields. the projection reconstruction method is easy to understand, but both the mathematical description and the data processing are rather complex. furthermore, it carries the disadvantage that magnetic field inhomogeneities and patient movements result in considerable image artifacts. for these reasons, the fourier techniques described in the following sections are preferred for the reconstruction of mr images. principle of phase encoding (displayed in the rotating frame): as shown in fig. . . , a phase-encoding gradient gx is switched on for a fixed time tx before the fid signal is acquired (aq). a-d the sequence is repeated several times at equidistantly increasing gradient strengths. under the influence of the gradient field bx = gx x, the magnetization components of the different voxels in the slice precess with different larmor frequencies. if the gradient is switched off after the time tx, all components rotate again with the original, position-independent frequency ω0 = γb0 around the direction of the b0 field. however, magnetization components which precess more quickly during the operating time tx of the gradient field will maintain their advance compared with the slower ones. this advance is described by the phase angle φ(x) = γ gx x tx of the different magnetization components. the figure shows the dependence of the phase angle φ(x) on the gradient strength gx and the spatial coordinate x schematically for four different gradient strengths (gx = 0, Δgx, 2Δgx, and 3Δgx) and three adjacent voxels (with the magnetization vectors m1, m2, and m3). as shown in b, the magnetization m1 will rotate at the position x1 under the influence of the gradient field by the phase angle φ(x1) = ° and the magnetization m2 at the position x2 by the phase angle φ(x2) = °.
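phase encoding can be simulated in the same spirit as the frequency-encoding sketch above: one sample per sequence cycle, with the gradient amplitude stepped between cycles. the toy model below uses the same assumptions as before (rotating frame, no relaxation, assumed parameter values).

import numpy as np

GAMMA = 2 * np.pi * 42.577e6      # rad/(s*t)
n, tx = 128, 3e-3                 # phase-encoding steps and switch-on time (assumed)
dg = 2e-5                         # gradient increment dg_x in t/m (assumed)

fov = 2 * np.pi / (GAMMA * dg * tx)       # fov fixed by the step dk = gamma*dg*tx
x = (np.arange(n) - n // 2) * fov / n
rho = np.zeros(n)
rho[50:70] = 1.0                          # 1d object along the encoding direction

ks = GAMMA * dg * tx * np.arange(-n // 2, n // 2)   # spatial frequencies k_n
s = np.exp(-1j * np.outer(ks, x)).dot(rho)          # one sample per cycle: s(k_n)

projection = np.abs(np.fft.fftshift(np.fft.fft(s))) # same decoding as frequency encoding

apart from the bookkeeping (n sequence cycles instead of n time samples), the acquired data set is of the same kind as in frequency encoding: the spatial information sits in the phases, and a fourier transform recovers the projection.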
comparison between frequency and phase encoding: with both encoding techniques, the transverse magnetizations of all voxels within the excited slice contribute to the detected mr signal; the spatial information is encoded in both cases by the different phases of the magnetization components, which have developed under the influence of a magnetic gradient field (here bx = gx x) up to the moment of signal detection. in order to calculate the projection of an object onto the direction of the gradient, the mr signal s has to be measured n times (e.g., n = or ), with the phase difference between the magnetization components of the different voxels varying in a well-defined way from measurement to measurement. the difference between the two encoding techniques merely consists in the manner in which the data set {s1, s2, . . . , sn} is acquired. a with frequency encoding, the mr signal is sampled at equidistant time steps Δt in the presence of a constant gradient field (in the figure Δgx). as the magnetization components of the voxels within the excited slice steadily dephase under the influence of the gradient field, there is a different phase difference between them at every point in time tn = nΔt (1 ≤ n ≤ n), so the whole data set {s1, s2, . . . , sn} can be detected by a single application of the sequence. the figure shows the first three values of the signal. it should be noted that only the temporal change of the mr signal caused by the gradient field is shown here, whereas the rapid oscillation of the signal with the larmor frequency ω0 = γb0 as well as the t2* decay of the signal is neglected. b with phase encoding, a phase-encoding gradient is switched on for a fixed duration tx before the fid signal is acquired. the magnitude of this gradient is increased at each sequence repetition by Δgx. during the switch-on period of the gradient field, the magnetization components of the different voxels precess with different frequencies, so that a phase difference is established between them which is proportional to the magnitude of the applied gradient (see fig. . . ). after switching off the gradient, all components rotate again with the original, position-independent frequency ω0 = γb0 around the direction of the b0 field. in the chosen description, one therefore observes an mr signal sn that is constant over time. in order to acquire the entire data set {s1, s2, …, sn}, the sequence needs to be repeated n times with different gradient strengths gx = nΔgx. the figure shows the dependence of the mr signal on the gradient strength schematically for three different gradient strengths (gx = 0, Δgx, 2Δgx). if the product of the gradient strength and the switch-on time of the gradient up to the time of signal detection is equal in both encoding techniques, the phase difference between the various magnetization components at the time of signal detection is also identical, and thus the same mr signal is measured. the product of the two quantities is indicated in the figure by the dark areas.
in the planar version of fourier imaging, just as in projection reconstruction, the spins in a slice are selectively excited by an rf pulse in the first step. afterwards, however, spatial encoding of the spins in the slice is not done by a successive rotation of a readout gradient, but by a combination of frequency and phase encoding using two orthogonal gradient fields. if we consider an axial slice parallel to the x-y-plane, then these gradients are gx and gy (fig. . . ). the sequence is repeated n times for different values of the phase-encoding gradient gnx = nΔgx (1 ≤ n ≤ n), with the mr signal being measured m times during each sequence cycle at the times tm = mΔt (1 ≤ m ≤ m) in the presence of the readout gradient gy. thus, one obtains a measurement value for each combination (kn, tm) of the parameters kn = γnΔgx tx and tm = mΔt, i.e., a matrix of n × m data points. a 2d fourier transformation of this data set, the so-called hologram or k-space matrix (see fig. . . ), yields the mr image of the slice with a resolution of n × m pixels (a small numerical illustration follows below). in order to extend the 2d fourier method to a 3d one, the slice-selection gradient is replaced by a second phase-encoding gradient as shown in fig. . . . this means that the rf pulse excites all spins in the sensitive volume of the rf coil and that the spatial information is encoded exclusively by orthogonal gradients - by two phase-encoding gradients and one frequency-encoding gradient. the spatial resolution in the third dimension is defined by the strength of the related phase-encoding gradient and the number k of the phase-encoding steps. depending on the choice of these parameters, the voxels have a cubic or cuboidal shape (isotropic or anisotropic resolution). in order to acquire a 3d k-space matrix with n × m × k independent measurement values, the imaging sequence needs to be repeated n × k times. a 3d fourier transformation of the acquired 3d k-space matrix yields the 3d image data set of the partial body region excited by the rf pulse. based on this image data set, multiplanar images in any orientation can be reformatted, which offers - among others - the possibility to look at an organ or a body structure from various viewing directions. reconstruction by back projection: the figure shows three different projections of two objects in the field of view. if many projections are acquired at different viewing angles, an image can be reconstructed by (filtered) back projection of the profiles. for the measurement of the various projections, the frequency-encoding technique is used, with the readout gradient rotating step by step. fig. . . shows a typical pulse and gradient sequence in 2d fourier imaging: gz is the slice-selection gradient, gx the phase-encoding gradient, and gy the readout gradient. in addition to the described conventional fashion of filling the k-space in fourier imaging, there are a number of alternative strategies: • spiral acquisition. as will be described later, the epi sequence commonly uses an oscillating frequency-encoding gradient. if both the phase-encoding and the frequency-encoding gradient oscillate with increasing gradient amplitudes, the acquired data points will lie along a spiral trajectory through k-space. that is why such an acquisition is called spiral epi. • radial acquisition. if the direction of the frequency-encoding gradient is rotated as described in sect. . . . , the k-space trajectories will form a star. • blade, propeller, multivane. these hybrid techniques sample k-space data in blocks (so-called blades), each of which consists of some parallel k-space lines. in order to successively cover the entire k-space, the direction of the blades is rotated with a fixed radial increment. this sampling strategy offers some advantages: since each blade contains data points close to the center of the k-space, patient movements can, for example, be easily detected and corrected. to reconstruct mr images from alternative k-space trajectories by means of a conventional 2d or 3d fourier transformation, it is necessary to re-grid the sampled k-space data to a rectangular grid.
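the statement that a 2d fourier transformation of the k-space matrix yields the image can be verified in a few lines of numpy; the phantom below is an arbitrary toy object, and an ideal, noise-free acquisition on a rectangular grid is assumed.

import numpy as np

n, m = 128, 128
phantom = np.zeros((n, m))
phantom[40:80, 50:90] = 1.0            # toy spin density of one "slice"

# ideal 2d fourier imaging: the acquired n x m data matrix (hologram, k-space)
# is the 2d fourier transform of the spin density - one row per phase-encoding
# step, one column per readout sample.
kspace = np.fft.fft2(phantom)

image = np.abs(np.fft.ifft2(kspace))   # 2d inverse fft reconstructs the slice
print(np.allclose(image, phantom))     # -> True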
data acquisition by 2d imaging techniques can be carried out very efficiently when considering the fact that the time required for slice-selective excitation, spatial encoding, and acquisition of the mr signal is much shorter than the time needed by the spin system to relax at least partially after rf excitation before it can be excited once again. the long waiting periods can be used to excite - in a temporally shifted manner - adjacent slices and to detect the spatially encoded mr signal from these slices. thus, mr images from different parallel slices can be acquired simultaneously without prolongation of the total acquisition time (fig. . . ). example: let us consider that ms are needed for excitation, spatial encoding, and data acquisition per sequence cycle and that the sequence is repeated after tr = , ms. then mr data from adjacent slices can be acquired simultaneously without prolonging the measurement time. typical pulse and gradient sequence in 3d fourier imaging: gx and gy are the phase-encoding gradients and gz the readout gradient. however, when using the multiple-slice technique, one has to consider that the distance between slices may not be too small, as the slice profile usually is not rectangular, but bell-shaped. in order to avoid repeated excitation of spins in overlapping slice regions, the gap between adjacent slices should correspond approximately to the width of the slice itself. images from adjacent slices can be obtained in an interleaved manner by applying the sequence twice: in the first measurement, data are acquired from the odd slices and in the second from the even slices (fig. . . ). principle of the multiple-slice technique: in most 2d imaging sequences, the time t required for slice-selective excitation, spatial encoding, and detection of the mr signal is markedly shorter than the repetition time tr. the long waiting periods can be used to subsequently excite spins in parallel slices and to detect the spatially encoded signals from these slices. consideration of the slice profile in the multiple-slice technique: the profile of a slice is generally not rectangular but rather bell-shaped. the thickness (th) of the slice is therefore usually defined by the full width at half maximum. in order to prevent overlapping of adjacent slices in the multiple-slice technique, a sufficient gap (g) between two adjacent slices has to be chosen (g ≥ th). often the distance (d = g + th) between the slices is indicated instead of the gap g. images from adjacent slices can be detected without overlap by using in a first step a sequence that acquires data from the even slices and in a second step a sequence that acquires data from the odd slices. in both measurements, the gap g should be identical with the slice thickness th (d = g + th = 2 th).
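the bookkeeping of the multiple-slice technique amounts to fitting as many excitation/encoding/readout blocks as possible into one repetition time; a minimal sketch with assumed timings:

t_block = 0.050   # s needed for excitation, spatial encoding, and readout (assumed)
tr = 1.0          # s repetition time (assumed)

n_slices = int(tr // t_block)   # parallel slices acquired "for free" within one tr
print(n_slices)                 # -> 20 slices without prolonging the total scan time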
the main advantage of mr imaging, apart from the flexibility in slice orientation, is the excellent soft-tissue contrast of the reconstructed mr images. it is based on the different relaxation times t1 and t2 of the tissues, which depend on the complex interaction between the hydrogen nuclei and their surroundings. compared to that, differences in proton densities (pd) are only of minor relevance, at least when considering soft tissues. the term proton density in mr imaging designates only those hydrogen nuclei whose magnetization contributes to the detectable image signal. essentially, this refers to hydrogen nuclei in the ubiquitous water molecules and in the methylene groups of the mobile fatty acids (see sect. . . . and fig. . . ). hydrogen atoms that are included in cellular membranes, proteins, or other relatively immobile macromolecular structures usually do not contribute to the mr signal; their fid signal has already decayed to zero at the time of data acquisition (t2 << te) (brix). another important contrast factor is the collective flow of the nuclei. the influence of flowing blood on the image signal will be discussed separately in sect. . , in the context of mr angiography. whereas the image contrast of a ct scan depends only on the electron density of the tissues considered (as well as on the tube voltage and beam filtering), the mr signal and thus the character of an mr image is determined by the intrinsic tissue parameters pd, t1, and t2 as well as by the type of the sequence used and by the selected acquisition parameters. this variability offers the opportunity to enhance the image contrast between distinct tissues by cleverly selecting the type of sequence and the corresponding acquisition parameters, and thus to optimize the differentiation between these tissue structures. however, the subtle interplay of the many parameters bears the danger of misinterpretations. in order to prevent these, several mr images are always acquired in clinical routine, with different sequence parameters that are selected in such a way that the tissue contrast of the various images is determined mainly by a single tissue parameter; in this context, one uses the terms t1-, t2-, or pd-weighted images. sometimes, one even goes one step further and calculates "pure" t1, t2, and pd parameter maps on the basis of several mr images that were acquired with different acquisition parameters. the advantage of doing this consists in the fact that the image contrast of the calculated parameter maps is usually more accentuated than that of the weighted images. the calculated tissue parameters can furthermore be used to characterize various normal and pathological tissues. however, experience has shown that a characterization or typing of tissues by means of calculated mr tissue parameters is only possible with reservations (bottomley et al.; higer and bielke; pfannenstiel et al.). this may be due not only to the insufficient measurement and analysis techniques used, but also to the fact that the morphological information of the mr images as well as the clinical expertise of the radiologist have been left aside in many cases. these considerations indicate that each mr practitioner should be aware of the dependence of the image contrast on the selected type of imaging sequence as well as on the sequence and tissue parameters in order to fully benefit from the potential of mri and to avoid misinterpretations.
the term imaging sequence designates the temporal sequence of rf pulses and magnetic gradient fields that are used to determine the image contrast and for image reconstruction, respectively. the foregoing section has made intuitive use of the term image contrast in order to describe the possibility to distinguish between adjacent tissue structures in an mr image. we will now define this term. if one describes the signal intensities of two adjacent tissue structures a and b by sa and sb, the image contrast between the two tissues can be expressed by the absolute value of the signal difference cab = |sa - sb| ( . . ) or by the normalized difference ( . . ). remark: the delineation of a tissue structure depends, of course, also on the signal-to-noise ratio (s/n), as tiny, weakly contrasted structures can be masked by image noise. some authors therefore proposed to use the contrast-to-noise ratio for evaluating the detectability of a detail. however, the explanatory power of this quantity can hardly be objectified, since the contrast-detail detectability strongly depends on the signal detection in the human retina as well as on the signal processing in the central visual system of the observer. in the following, we will therefore use the absolute contrast defined in eq. . . . example: in order to analyze the influence of the tissue and acquisition parameters on the image contrast by an example, we will consider in the following the contrast between white and gray brain matter. representative tissue parameters, which have been measured for a patient collective at . t, are summarized in table . . . in clinical routine, the spin-echo (se) sequence is still a frequently applied imaging sequence, for two reasons: • it is rather insensitive to static field inhomogeneities and other inaccuracies of the mr system. • it allows for the acquisition of t1-, t2-, and pd-weighted images by an appropriate choice of the acquisition parameters tr and te. • t1 dependence: complete relaxation of the longitudinal magnetization requires a repetition time of several times t1. usually, however, the sequence is repeated much earlier, so that the longitudinal magnetization at the beginning of the next sequence cycle will be reduced compared to the equilibrium magnetization by the t1 factor [1 - exp(-tr/t1)]. accordingly, the t1 contrast of an se image can be varied by the choice of the repetition time tr. in fig. . . a, the t1 factor is plotted for white and gray brain matter. as this example shows, the t1 contrast reaches a maximum if the repetition time tr is between the t1 relaxation times of the two tissues considered. if tr is markedly longer than the longer t1 time, then the t1 contrast vanishes. • t2 dependence: the influence of the t2 relaxation process on the signal intensity is described by the t2 factor exp(-te/t2) in the signal equation. for a given t2 time, the signal loss becomes the larger, the longer the echo time te. in fig. . . b, the t2 factor is plotted for white and gray brain matter versus the echo time te. the contrast will reach a maximum when the echo time te ranges between the t2 relaxation times of the two tissues considered. for small te values (te << t2), the contrast approximates zero, as the signal intensities in this case are independent of t2. a t1 factor for white (wm) and gray (gm) brain matter: as can be seen, the t1 contrast between the two tissues approaches 0 for very long as well as for very short repetition times. the highest t1 contrast is obtained for tr ~ ms, i.e., for a repetition time in between the t1 times of the two tissues considered (see table . . ). b the same considerations hold for the t2 factor exp(-te/t2); the t2 contrast maximum is at te ~ ms.
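the two contrast maxima can be reproduced numerically from the se signal equation pd · [1 - exp(-tr/t1)] · exp(-te/t2). the relaxation times below are assumed generic values for white and gray matter at 1.5 t, stand-ins for the table values referenced above, so the printed optima are illustrative only.

import numpy as np

t1_wm, t2_wm = 0.60, 0.080    # white matter, s (assumed values)
t1_gm, t2_gm = 0.95, 0.100    # gray matter, s (assumed values)

def s_se(pd, t1, t2, tr, te):
    # se signal equation: pd * t1 factor * t2 factor
    return pd * (1.0 - np.exp(-tr / t1)) * np.exp(-te / t2)

tr = np.linspace(0.05, 4.0, 1000)    # vary the repetition time at a short te
c_t1 = np.abs(s_se(1, t1_wm, t2_wm, tr, 0.015) - s_se(1, t1_gm, t2_gm, tr, 0.015))
print(f"t1 contrast maximum at tr = {tr[c_t1.argmax()] * 1e3:.0f} ms")

te = np.linspace(0.005, 0.4, 1000)   # vary the echo time at a long tr
c_t2 = np.abs(s_se(1, t1_wm, t2_wm, 3.0, te) - s_se(1, t1_gm, t2_gm, 3.0, te))
print(f"t2 contrast maximum at te = {te[c_t2.argmax()] * 1e3:.0f} ms")

both maxima fall between the respective relaxation times of the two tissues, exactly as argued in the text.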
influence of the acquisition parameters tr and te on the contrast behavior of an se image: the figure shows the interplay of the longitudinal and transversal relaxation for this sequence at the example of white (wm) and gray brain matter (gm) for a fixed repetition time of tr = . ms. in the left part, the temporal evolution of the longitudinal magnetization mz during the recovery period (0 ≤ t ≤ tr) is depicted. at t = tr, the partially relaxed longitudinal magnetization is flipped into the x-y-plane by the 90° excitation pulse. the t2 relaxation of the resulting transversal magnetization mxy is plotted in the right part as a function of the echo time te. as can be seen, there is a reversal behavior of the t1 and t2 contrast. for te = ms, the contrast is 0, so that the two types of brain matter cannot be differentiated in the related se image in spite of differing tissue parameters (see fig. . . d). note that the detected mr signal is directly proportional to the transversal magnetization mxy. in general, adjacent tissues differ in all three tissue parameters pd, t1, and t2, so the different factors of the se signal equation, which can partially compensate one another, need to be considered altogether. this holds even more as the relaxation times are usually positively correlated, i.e., tissues with longer t1 times usually also have longer t2 times. in order to illustrate this statement, fig. . . shows the course of the longitudinal and transverse magnetization for both white and gray brain matter for a repetition time of tr = , ms. as can be seen, the transverse magnetization of both substances - and therefore the signal intensities (sse ∝ mxy) - are identical at an echo time of te = ms, so the tissues cannot be distinguished on the related se image in spite of different tissue parameters. fig. . . shows the contrast between white and gray matter as a function of both the repetition and echo time. as expected, there are two regions with a high tissue contrast, which are separated by a low contrast region (cf. fig. . . ). in the inversion recovery (ir) sequence, the longitudinal magnetization is inverted by a 180° pulse prior to excitation; the resulting t1 factor of the signal depends on the inversion time ti as well as on the repetition time tr. in order to optimize the t1 contrast, the inversion time ti is usually varied, whereas the parameter td, respectively tr, is chosen as high as possible (td >> t1), to allow the recovery of a considerable longitudinal magnetization after rf excitation. the maximum range of values of the t1 factor is between -1 and +1, thus being double the range of values of the se sequence. there are two types of ir sequences depending on signal interpretation: if only the absolute values of the signal are considered (magnitude reconstruction, irm), the range of values is limited de facto to the interval between 0 and 1, as in the se sequence. if this mode of data representation is chosen, then the t1 factor will initially decrease to 0 and then converge toward the equilibrium magnetization m0. figure . . shows the dependence of the t1 factor on the inversion time ti for both possible modes of data representation for white and gray brain matter (tr = , ms). as this example reveals, the neglect of the sign of the t1 factors in the absolute value representation leads to a destructive t1 contrast behavior in the region between the zeros of both t1 functions considered. an ir sequence differentiating between parallel or antiparallel alignment of the longitudinal magnetization at the time of the excitation pulse is called phase sensitive.
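the ir t1 factor can be sketched directly. for tr >> t1, the phase-sensitive factor 1 - 2 exp(-ti/t1) + exp(-tr/t1) passes through zero at ti = t1 · ln 2, which is the basis of the stir and flair nulling mentioned further below; the relaxation times used here are assumed representative values.

import numpy as np

def t1_factor(ti, tr, t1):
    # ir t1 factor, phase-sensitive reconstruction (range -1 ... +1)
    return 1.0 - 2.0 * np.exp(-ti / t1) + np.exp(-tr / t1)

t1_fat, t1_csf = 0.26, 4.0   # s, assumed representative values at 1.5 t

for name, t1 in [("fat (stir)", t1_fat), ("csf (flair)", t1_csf)]:
    ti_null = t1 * np.log(2.0)   # zero crossing in the limit tr >> t1
    print(f"{name}: ti ~ {ti_null * 1e3:.0f} ms, "
          f"residual factor at tr = 10 s: {t1_factor(ti_null, 10.0, t1):+.3f}")

the small residual printed for csf shows that the simple ln 2 rule is only exact for tr much longer than t1; in practice the inversion time is adjusted accordingly.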
for evaluation of the tissue contrast, the pd and t2 dependence of the image signal sir needs to be included in the considerations, too. figure . . demonstrates this with an example. in this figure, the tissue contrast between white and gray brain matter is plotted as a function of the echo time te for ti = ms and tr = , ms. for the chosen ti value, there is a reversal behavior of the t1 and t2 contrast, as the relaxation times of the two tissues examined are positively correlated, i.e., the substance with the longer t1 time also has a longer t2 time. if the last-mentioned mode of data representation is used, there will be a destructive t1 contrast behavior in the region between the zeros of the two tissue curves; the t1 contrast maximum in the considered case is at ti ~ ms, i.e., in between the t1 times of the two tissues considered (see table . . ). in order to fully grasp the complex interplay between the different tissue and acquisition parameters as a whole, the image contrast between white and gray brain matter is plotted in fig. . . . the influence of the acquisition parameters te, ti, and td on the contrast of an ir image is summarized in table . . . in order to maximize the t1 contrast (t1-weighted image), the ti time should be between the t1 times of the two tissues considered, and the echo time te should be chosen as short as possible. as even for the acquisition of t1-weighted images relatively long repetition times are needed (td >> t1), the ir sequence requires much more time than the se sequence. advantages occur mainly when the image signal of a given tissue structure is to be suppressed, e.g., the retrobulbar fatty tissue for the evaluation of the optic nerve. in this case, the acquisition parameter ti needs to be chosen so that the t1 factor of the tissue to be suppressed is approximately 0 (fig. . . ). remark: an ir sequence with a very short ti time is called a stir (short-tau inversion recovery) sequence. if the ti time is selected to suppress the signal from the cerebrospinal fluid, the sequence is called flair (fluid-attenuated inversion recovery). influence of the acquisition parameters ti and te on the contrast behavior of the ir sequence (absolute value representation): the figure shows the interplay of the longitudinal and transversal relaxation for this sequence at the example of white (wm) and gray brain matter (gm) for a fixed inversion time ti = ms and a fixed repetition time tr = , ms. the left part shows the temporal evolution of the longitudinal magnetization mz during the inversion phase (0 ≤ t ≤ ti). at t = ti, the partially relaxed longitudinal magnetization is flipped into the x-y-plane by the 90° excitation pulse. the t2 relaxation of the resulting transversal magnetization mxy is plotted in the right part as a function of the echo time te. in the case considered, there is a reversal behavior of the t1 and t2 contrast, so that the contrast between the two brain tissues rapidly decreases with prolonged echo time. note that the detected mr signal is directly proportional to the transversal magnetization mxy. the total acquisition time of an imaging sequence is given by the product of the repetition time tr, the number nph of phase-encoding steps, and the number naq of acquisitions (averages): t = tr · nph · naq. • in some cases, the imaging sequence is repeated several times (e.g., naq = or ), in order to improve the signal-to-noise ratio. this is especially valid for t1-weighted se images, which have a relatively low s/n ratio due to the short repetition time. example: based on these considerations, the following representative acquisition times are obtained for se images: t = . min for a t1-weighted image (tr = ms, nph = , naq = ) and t = . min for a t2-weighted and/or a pd-weighted image (tr = , ms, nph = , naq = ).
by using the multiple-slice technique described in sect. . . , one can simultaneously acquire mr images from multiple parallel slices within the given acquisition times, but the overall time required for the acquisition of the images will not be reduced. in clinical practice, this basic limitation of conventional imaging sequences leads to the following problems: • depending on the clinical question, the time needed for a patient examination ranges between and min. • this demands high cooperation from the patient, as the patient will be asked to remain motionless during the examination in order to assure the comparability of differently weighted mr images. • critically ill patients may not be examined full scale or might not be fit for examination at all. • the image quality is impaired by motion artifacts (such as heart beat, blood flow, breathing, or peristaltic movement). this problem is especially acute in patients with thoracic and abdominal diseases, as mr images in general cannot be acquired completely during breath hold, as is the case in ct. • dynamic imaging studies are limited. to overcome these limitations, several methods aiming to shorten the examination times have been developed. they can be categorized into two groups, depending on whether the repetition time tr or the number of sequence cycles nph needed for phase encoding is reduced (see eq. . . ). the two strategies will be discussed in the following sections at the example of some selected imaging sequences. an almost complete overview of the clinically used fast imaging sequences will be provided in sect. . . . the long scan times of conventional imaging sequences are due to the fact that the 90° excitation pulse rotates the entire longitudinal magnetization into the x-y-plane, so the pulse sequence can only be repeated when the longitudinal magnetization has - at least partially - recovered by t1 relaxation processes. to acquire mr images with an acceptable s/n, the sequence repetition time tr has to be of the order of the t1 relaxation time. this basic problem of conventional imaging can be avoided, however, by using an rf pulse with a flip angle of α < 90° to excite the spin system: although only a part of the longitudinal magnetization mz is rotated into the x-y-plane, one nevertheless obtains a relatively large transverse magnetization. example: if, for instance, a flip angle of α = ° is used, then the longitudinal magnetization mz will be reduced by %, whereas the transverse magnetization mxy amounts to % of the maximum value (fig. . . ). in order to discuss the principle of low-flip angle excitation, we will initially neglect the gradient fields needed for spatial encoding and consider the simple sequence shown in fig. . . a. it consists of a single rf pulse with a flip angle α < 90° and a spoiler gradient, which destroys the remaining transverse magnetization after the acquisition of the fid. remark: as an alternative to spoiler gradients, the phase of the rf excitation pulse may be varied with every sequence cycle in order to prevent the buildup of a steady state of the transverse magnetization (rf spoiling). if the considered sequence is repeated several times, the spin system reaches a dynamic equilibrium after only a few sequence cycles.
figure . . shows the transient behavior of the longitudinal magnetization in white and gray brain matter. the value of the steady-state longitudinal magnetization depends not only on the flip angle α of the excitation pulse, but also on the repetition time tr and the longitudinal relaxation time t1; it becomes smaller as α becomes bigger. for α = 90°, the longitudinal magnetization reaches the steady-state value mz,ss = m0 [1 - exp(-tr/t1)] after the first excitation, as expected. however, the mr signal is not given by the longitudinal magnetization but by the transverse magnetization mxy at the time of data acquisition. by using eq. . . , the amplitude s of the mr signal can be described by s = mz,ss sinα exp(-te/t2*) ( . . ). whereas the factor exp(-te/t2*) describes the decay of the fid signal during the delay time te, the factor sinα gives the fraction of the steady-state magnetization mz,ss that is rotated into the x-y-plane (see fig. . . b,c). to illustrate this relation, fig. . . shows the signal intensity s as a function of the ratio tr/t1. from this plot, two important statements can be derived: • as compared with conventional 90° excitation, low-flip angle excitation yields considerably higher signal values for short repetition times. • when using low-flip angle excitation, the signal is largely independent of t1 already for tr < t1. the signal increase realized by low-flip angle excitation in combination with short repetition times is obtained, however, by omitting the 180° pulse generating a spin-echo, as the 180° pulse inverts not only the phase of the transverse magnetization, but also the longitudinal magnetization (see fig. . . ). in contrast to the conventional imaging sequences, the nomenclature of the gre sequences is not unified, but is handled differently by the different manufacturers. in the following, the fundamentals of gre imaging will be discussed in detail at the example of two representative sequences denoted by the acronyms flash and truefisp. the excitation of the spin system and the position encoding are identical in both sequences; they differ only in that the transverse magnetization is destroyed after acquisition of the mr signal in the flash sequence (spoiled gre sequence), whereas it is maximized in the truefisp sequence (refocused gre sequence). this difference, however, leads to an entirely different contrast behavior. dephasing of the transversal magnetization caused by the slice-selection and the readout gradient is compensated by two additional inverted gradients, so that a gradient-echo occurs; the figure shows the de- and rephasing process of two magnetization components (a,b), which are at different positions and therefore precess under the influence of the gradient fields with different larmor frequencies. φx, φy, and φz are the corresponding phase angles. the steady-state signal reaches its maximum at the so-called ernst angle; however, this does not imply that for this angle the tissue contrast between two structures is also at its maximum. in fig. . . , the tissue contrast between white and gray brain matter is plotted for te << t2*, i.e., exp(-te/t2*) ≈ 1, as a function of the repetition time tr and the flip angle α. for low flip angles, two contrast regions can be distinguished: for short tr times, the t1 contrast dominates (t1-weighted images); for longer tr times, the pd contrast (pd-weighted image). example: in order to illustrate the discussed contrast behavior, see fig. . . . the influence of the acquisition parameters on the contrast of a flash image is summarized in table . . .
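a minimal sketch of the spoiled-gre steady state, assuming ideal spoiling: the closed form mz,ss = m0 (1 - e1)/(1 - cosα · e1) with e1 = exp(-tr/t1) and the ernst angle cosα = e1 are the standard textbook expressions, and the parameter values below are assumed.

import numpy as np

def s_flash(alpha, tr, t1, te=0.0, t2s=np.inf, m0=1.0):
    # steady-state signal of a spoiled gre (flash-type) sequence, ideal spoiling assumed:
    # s = mz,ss * sin(alpha) * exp(-te/t2*), mz,ss = m0*(1 - e1)/(1 - cos(alpha)*e1)
    e1 = np.exp(-tr / t1)
    return m0 * np.sin(alpha) * (1 - e1) / (1 - np.cos(alpha) * e1) * np.exp(-te / t2s)

tr, t1 = 0.010, 0.60                        # s, assumed values
alpha_ernst = np.arccos(np.exp(-tr / t1))   # flip angle that maximizes the signal
print(f"ernst angle ~ {np.degrees(alpha_ernst):.1f} deg")
print(f"signal: 90 deg -> {s_flash(np.pi / 2, tr, t1):.3f}, "
      f"ernst -> {s_flash(alpha_ernst, tr, t1):.3f}")

for these values the low-flip-angle signal exceeds the 90° signal severalfold, which is precisely the first of the two statements derived from the plot above.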
in 1986, oppelt et al. introduced a gre sequence with the acronym fisp (fast imaging with steady precession), which differs considerably in its contrast from the flash sequence. this sequence was later renamed truefisp (see below). the pulse and gradient scheme of this sequence is shown in fig. . . . influence of the acquisition parameters tr and α on the t1 contrast of a flash image: for low excitation angles α, there will only be a considerable t1 contrast (here between white and gray brain matter) when short repetition times are selected. if the flip angle is increased, the t1 contrast maximum will shift to a higher tr value. two different contrast regions can be distinguished for low flip angles: for short tr times, the t1 contrast dominates (t1-weighted image); for longer tr times, the pd contrast (pd-weighted image). if the flip angle is increased, then the t1 contrast curve will gradually approach the known contrast behavior of the se sequence (α = 90°). correspondingly, long repetition times need to be chosen in order to acquire pd-weighted or t2-weighted images. instead of the spoiler gradient of the flash sequence, refocusing gradient pulses are introduced in the slice-selection direction as well as in the direction of frequency and phase encoding, through which the transverse magnetization is not destroyed after the data acquisition of the mr signal, but rather rephased or refocused (fig. . . ). as practice has shown, the truefisp sequence is very susceptible to inhomogeneities of the static magnetic field, which are rendered visible as disturbing image artifacts. a more favorable behavior is achieved by omitting the gradient pulses (which have been shaded darkly in fig. . . ). in this case, only the dephasing of the transverse magnetization caused by the slice-selection and phase-encoding gradients is completely compensated. this realization is called the fisp sequence. in the truefisp sequence, not only the longitudinal but also the transversal magnetization reaches an equilibrium state after several sequence cycles. as both magnetization components are different from zero at the end of a sequence cycle, they will be mixed by the following rf pulse, i.e., a part of the longitudinal magnetization is flipped into the x-y-plane and a part of the transverse magnetization into the z-direction. consequently, both magnetization components depend on t1 and on t2. the t2 dependence increases in proportion to the magnitude of the transverse magnetization remaining at the end of the sequence cycle (i.e., with decreasing tr/t2 ratio). vice versa, this means that the fisp signal for high tr values (tr >> t2) will approximate the flash signal. remark: the difference in the latter case merely consists in the fact that in the flash sequence the transverse magnetization is rapidly destroyed by a spoiler gradient after the acquisition of the fid, whereas in the fisp sequence it decays with the time constant t2. therefore, the flash sequence is more useful for the acquisition of t1-weighted and pd-weighted images than the fisp sequence. as the discussion has shown, the characteristic signal behavior of the fisp sequence manifests itself only for very short repetition times.
for this special situation, the dependence of the signal intensity of the fisp sequence on the tissue parameters t1, t2, and pd can be described approximately by the expression ( . . ), which is independent of the repetition time tr. as this equation shows, the signal intensity of a fisp image at very short repetition times is determined by the proton density and the ratio of the relaxation times t2/t1 of the tissue rather than by tr. due to the complex gradient switching, the dephasing of the transversal magnetization caused by the three gradients is completely compensated after acquisition of the gradient-echo, so the transversal magnetization is restored before irradiation of the subsequent excitation pulse; the figure shows the de- and rephasing process for two magnetization components (a,b), which are at different positions and therefore precess with different larmor frequencies under the influence of the gradient fields. φx, φy, and φz are the corresponding phase angles. at the end of the sequence, both magnetization components are in phase again (φx = φy = φz = 0), independent of their spatial position. pulse and gradient scheme of the truefisp sequence: α flip angle of the excitation pulse, gz slice-selection gradient, gx phase-encoding gradient, gy readout gradient. instead of the spoiler gradient used in the flash sequence, there are refocusing gradient pulses in all three gradient directions, so that the transversal magnetization after acquisition of the gradient-echo is not destroyed but reconstructed (see fig. . . ). in practice, the gradient pulses (marked darkly) are frequently omitted in order to reduce the susceptibility of the sequence to artifacts, leading to a fisp sequence. whereas the t1 times of the two proton pools that contribute to the 1h mr signal of biological tissue - protons in free water (hf) and protons bound to macromolecules with reduced mobility (hr) - are of comparable order, the t2 times differ strongly (see fig. . . ). the t2 times of the hf nuclei are generally higher than ms, whereas for hr nuclei they are less than µs due to the strong dephasing effect of neighboring spins. the different t2 times are mirrored in the 1h spectrum (fig. . . ). as the width Δω of a resonance line is inversely proportional to the t2 time (see sect. . . ), the hf pool has a line width of a few hertz, whereas the spectral width of the hr nuclei is more than khz. it is crucial that the two 1h pools interact due to intermolecular processes (spin-spin interaction) and/or chemical exchange processes (wolff and balaban). for this reason, any change of the magnetization in one pool results in an alteration of the magnetization of the other pool. this effect is called magnetization transfer (mt). to utilize this effect for mr imaging, the magnetization of the hr pool is saturated by frequency-selective preparation pulses (saturation transfer). due to the mt effect, this leads to a significant reduction of the mr signal of the hf nuclei and thereby to a reduction of the image signal. in the simplest case, the frequency spectrum of the preparation pulse is defined by a rectangular function below and/or above the resonance frequency of the hf pool, as shown in fig. . . . when doing this, the offset frequency has to be chosen big enough that local variations of the resonance frequencies of the hf nuclei due to inhomogeneities of the static magnetic field and differences in tissue susceptibilities do not lead to a direct effect on the hf magnetization. mt preparation pulses are often used in mr angiography to increase the blood-tissue contrast. this interesting application is based on the fact that the mt effect reduces the hf magnetization of stationary tissue, whereas the magnetization of flowing blood is not affected.
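returning to the fisp/truefisp steady state discussed above: the expression elided in the text is not recoverable here, but a commonly quoted closed form for fully refocused sequences in the short-tr, on-resonance regime shows the same key property - the signal depends on the ratio t2/t1 and not on tr. the sketch below uses that commonly quoted form under these assumptions, with assumed tissue values.

import numpy as np

def s_truefisp(alpha, t1, t2, m0=1.0):
    # commonly quoted bssfp steady state for tr << t1, t2 (on resonance):
    # depends only on the ratio t1/t2, not on tr
    r = t1 / t2
    return m0 * np.sin(alpha) / ((r + 1) - np.cos(alpha) * (r - 1))

t1, t2 = 4.0, 2.0                # s, fluid-like tissue with a large t2/t1 (assumed)
a_opt = np.arccos((t1 / t2 - 1) / (t1 / t2 + 1))   # flip angle of maximum signal
print(f"optimum flip angle ~ {np.degrees(a_opt):.0f} deg")
print(f"s_max = {s_truefisp(a_opt, t1, t2):.3f}")   # = 0.5*sqrt(t2/t1)

the maximum signal 0.5 · sqrt(t2/t1) explains why fluids such as csf and blood appear bright on truefisp images.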
imaging in magnetic resonance is based on spin-warp imaging but is commonly referred to as fourier imaging. the main underlying principle is the use of magnetic field gradients for the slice-selective excitation and for the phase and frequency encoding of the signal that is induced by the rotating transverse magnetization. the motivation of continued sequence development is fuelled by the aim to improve the tissue distinction and to shorten the measurement time. in recent years, a great number of sequences have been developed (see table . . ), each of which is utilized in routine clinical applications. the following paragraphs provide a systematic overview of the sequence families. schematic depiction of a 1h spectrum of biological tissues: apart from the resonance line of 1h nuclei in free water (hf), with a low spectral line width (< hz), there is a broad background due to 1h nuclei in macromolecules with reduced mobility (hr), the mr signal of which cannot be detected directly because of their short t2 times. note that the spectral widths are not depicted to true scale. the frequency spectrum of an mt preparation pulse is marked in gray. a first sequence classification can be performed by assigning the type of sequence to either a spin-echo or a gradient-echo group. the main difference between se and gre is the influence of susceptibility gradients on the image contrast. in general, in gre imaging susceptibility gradients lead to a faster decay of the signal, whereas in se imaging dephasing mechanisms that are fixed in location and consistent over time are refocused by the 180° refocusing rf pulse. se image contrast depends on the tissue-specific transversal relaxation time t2, whereas gre image contrast is a function of the effective transversal relaxation time t2*. some gre techniques utilize the excitation pulse also as a refocusing pulse, causing spin-echo components to contribute to the image contrast. within the se and the gre group, the contrast can be manipulated by preparing the longitudinal magnetization prior to starting the imaging sequence or prior to the measurement of a fourier line. in multi-echo imaging, the transverse magnetization is refocused and reutilized after the collection of a fourier line, omitting the necessity of a further excitation for the collection of another fourier line. this method is applicable within the se group as well as the gre group. again, a preparation of the magnetization generates another sequence family. using only one excitation and multiple phase-encoded echoes to acquire all required k-space lines without a further excitation is called a single-shot technique. figure . . shows an overview scheme that provides one possible sequence classification, among them 3d gre imaging with low-flip-angle excitation, "spoiling" after the data acquisition of a single fourier line, and fourier interpolation in the direction of partition encoding. if one combines the half-fourier method with a tse sequence to a degree at which only a single excitation pulse suffices to fill the raw-data matrix with the following spin-echoes, one applies the so-called haste technique (half-fourier single-shot turbo spin-echo). the mix of spin-echoes with gradient-echoes or, more precisely, the acquisition of gradient-echoes within an se envelope leads to the tgse (turbo gradient spin-echo) sequence, also called grase (gradient and spin-echo). as expected, with the introduction of gradient-echoes within a multi-echo spin-echo sequence, the contrast behavior is also t2* related.
this sequence is also called a hybrid. similar to the se sequences, the gre sequences can be grouped into: • conventional gre sequences (e.g., flash, fisp, truefisp, dess, ciss, psif) • gre sequences with preparation of the magnetization (e.g., turboflash, mp-rage) • multi-echo gre sequences (e.g., medic, segmented epi) • multi-echo gre sequences with preparation of the magnetization (e.g., segmented dw-se-epi) • single-shot gre sequences (epi) • single-shot gre sequences with preparation of the magnetization (e.g., dw-se-epi). as indicated above, conventional gre sequences can be further divided into: • the ssi group (steady-state incoherent), which only aims at a steady state of the longitudinal magnetization (e.g., flash, spgr, t1-ffe), and • the ssc group (steady-state coherent), in which the steady state of the transversal magnetization equally contributes to the signal (e.g., fisp, truefisp, grass, fiesta, ffe, bffe). acronyms of the ssi group are flash (fast low-angle shot), spgr (spoiled gradient-recalled acquisition in the steady state), and t1-ffe (t1-fast field echo). in the ssc group there are truefisp (true fast imaging with steady precession), grass (gradient-recalled acquisition in the steady state), and ffe (fast field echo). within the ssc group, there is a slow transition toward spin-echoes, as the excitation pulses do not only excite, but also refocus various echo paths of a remaining or refocused transverse magnetization. the extreme form is psif (a backward-running fisp), also named ssfp (general electric) or t2-ffe (philips). in these techniques, the excitation pulse of the following measurement operates as a refocusing pulse for the transverse magnetization of the previous excitation. the contrast is t2 weighted, as the effective echo time amounts to almost two repetition times. a combination of a fisp echo and a psif echo is called dess (double-echo steady state), and having the fisp and the psif echo coincide in time results in a ciss (constructive interference steady state) or truefisp sequence. the same preparation of the magnetization utilized for se techniques can be applied to gre techniques. with a very rapid gre sequence (rage, or rapid acquired gradient-echoes), with the aim of measuring as fast as possible, the tr is set to a minimum and consequently so is te, and the excitation angle is set to an optimum (ernst angle) in order to generate as much signal as possible. to re-establish a t1 weighting, a saturation or inversion pulse is applied, not prior to each fourier line as in se imaging, but at the beginning of the whole measurement. those techniques are called turboflash, fspgr (fast spoiled gradient-recalled acquisition in the steady state), tfe (turbo field echo) or, placing the inversion within the partition loop of a 3d sequence, mp-rage (magnetization-prepared rapid acquired gradient-echoes). as is the case for fast se sequences, gre sequences can also make use of multi-echo acquisitions. medic (multi-echo data image combination) uses multiple echoes for averaging, thus improving snr and t2* contrast. the classical form of a single-shot gre technique, in which the raw-data matrix is filled after a single excitation with several phase-encoded gradient-echoes, is called epi (echo planar imaging). simply collecting the free induction decay with multiple phase-encoded gradient-echoes is called fid-epi. placing the gradient-echoes beneath an se envelope is called se-epi.
the most common magnetization prepared single shot gradient-echo technique is the diffusion-weighted spinecho echo planar imaging sequence (dw-se-epi). the idea of using multiple phase-encoded spin-echoes to fill the k-space more rapidly as compared with conventional imaging has surfaced as early as , with the acronym rare-rapid acquisition with relaxation enhancement ( fig. . . ). the number of applied echoes is directly proportional to the potential reduction in measurement time. the overall image contrast is dominated by the weighting of those fourier lines acquired in the center of k-space (effective echo time). the qualities of the early images were not close to the quality of conventional t -weighted se imaging. in the course of hard- and software developments, mulkern and melki "re-discovered" multi-echo se imaging during a search for a fast t -localizer, creating the acronym fse (fast spin-echo). siemens and philips use the acronym tse for turbo spin-echo. since the higher spatial frequencies, the "outer" k-space lines, are usually acquired using late echoes, early concern has been that small objects might be missed. fortu-nately, the time saving achieved with the use of multiple echoes has been utilized to improve the contrast by selecting longer repetition times and to improve the spatial resolution by increasing the matrix size. both measures have more than compensated the effect of an under-representation of high spatial frequencies within the k-space matrix. t -weighted tse has replaced conventional spinecho imaging in all clinical applications. the acquired phase-encoded echo train can also be used to create pdweighted and t -weighted images similar to conventional dual-echo spin-echo imaging. the use of phase-encoded echoes for the k-space of the pd-weighted image as well as the k-space of the t -weighted image is customary and this procedure is called shared echo. t -weighted tse imaging is also an option for some applications, although additional echoes will increase (unwanted) t -weighting. t -weighted tse imaging is rarely applied to the central nervous system as the use of additional echoes prolongs the time needed for a single slice and the number of necessary slices may not fit into the desired tr. for the genitourinary system (uterus, cervix, bladder, etc.) about three echoes are used to improve snr or to reduce measurement time. in areas where the amount of t -weighting is less of an issue, e.g. t weighted imaging of the cervical and the lumbar spine for degenerative disease, tse is usually used with an echo train length (etl) of five echoes. the same protocol is applied for enhanced and unenhanced studies of suspected vertebral metastases. t -weighted tse imaging for the abdomen is not an issue, since the restriction of the measurement time down to a breath hold period is suggesting t -weighted gre imaging. the remaining point of concern in comparing tse imaging with conventional se imaging is the reduced sensitivity to susceptibility artifacts. hemorrhagic lesions appear less suspicious on tse imaging as compared with conventional se imaging. the fourier transformation assumes a consistent signal contribution for all fourier lines. any violation of this assumption will lead to over- or under-representation of spatial frequencies, with a correlated image blurring. although tse violates this assumption in using multiple phase-encoded echoes to r structure of the tse sequence. 
• First, especially for T2-weighted imaging, the signal amplitudes of the late and closely spaced echoes can be approximated as being constant. TSE sequences are mainly used for the acquisition of T2-weighted images, and as lesions usually have a long T2 relaxation time, the signal loss caused by T2 decay does not play a major role during data acquisition.
• Second, the matrix dimensions used in TSE imaging are usually higher than in conventional SE imaging. This significantly reduces the risk of missing small objects.
• Third, in T2-weighted TSE imaging the repetition time is increased considerably, which leads to a remarkable improvement in contrast, again reducing the potential risk of missing small objects due to k-space asymmetry.
In a typical TSE protocol, several echoes per excitation are used for imaging, implying a theoretical shortening of the measurement time by a factor equal to the echo train length. In practice, the shortening is smaller: longer repetition times are selected for improved PD and T2 weighting, and larger image matrices are used for improved spatial resolution, both diminishing the potential time saving of multiple phase-encoded echoes (a simple scan-time estimate is sketched below). As the mentioned influence on the spatial encoding is only present in the phase-encoding direction, the effect can be demonstrated by exchanging the frequency- and phase-encoding gradients. Figure … shows this for the example of the cauda equina. Apart from their use for high-resolution images, TSE sequences are also applied in cardiology, as shown in Fig. … Fast spin-echo imaging demonstrates two essential differences in appearance compared with conventional spin-echo imaging: fat appears bright, and there is a reduced sensitivity to hemorrhagic lesions. The bright appearance of fat is related to the J-coupling. The so-called J-coupling (see Sect. …) of the carbon-bound protons provides a slow dephasing of the transverse magnetization in conventional SE imaging, in spite of the refocusing pulse. If the refocusing pulses follow shortly after one another, as is the case in TSE imaging, the J-coupling is overcome and the dephasing is suppressed. Consequently, fat tissue appears brighter in the TSE image than in a conventional image. If desired, fat saturation or fat suppression (see Sect. …) can be utilized to suppress this appearance. The susceptibility-related artifact of hemorrhagic lesions in spin-echo imaging is due to diffusion between excitation, refocusing, and data acquisition.
[Fig. …: Image of the cauda equina, acquired with a TSE sequence with (a) the phase-encoding direction from left to right and (b) the phase-encoding direction from head to foot. The longitudinal structures of the relatively thin nerves are better visible if the frequency-encoding direction is perpendicular to the nerve fibers, since the resolution in the frequency-encoding direction is not influenced by T2 decay as much as the resolution in the phase-encoding direction.]
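As a back-of-the-envelope illustration of the trade-off just described (an addition, not from the original text), the acquisition time of a TSE sequence can be estimated from TR, the number of phase-encoding lines, the echo train length, and the number of averages; all parameter values below are assumptions for illustration.

    def tse_scan_time_s(tr_s: float, n_phase: int, etl: int, nex: int = 1) -> float:
        """Acquisition time of a multi-shot TSE scan: one train of `etl`
        phase-encoded echoes is acquired per TR."""
        shots_per_average = -(-n_phase // etl)  # ceil(n_phase / etl)
        return tr_s * shots_per_average * nex

    # Conventional SE: 256 lines, one echo per TR
    print(tse_scan_time_s(tr_s=2.5, n_phase=256, etl=1))   # 640 s
    # TSE with ETL = 16, but longer TR and a larger matrix partly eat the gain
    print(tse_scan_time_s(tr_s=4.0, n_phase=320, etl=16))  # 80 s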
Reducing this diffusion time by rapidly succeeding refocusing pulses also reduces the related artifacts, thus making TSE imaging less sensitive to hemorrhagic lesions. The TSE sequence, like the conventional SE technique, can be used with an inversion pulse for preparation of the longitudinal magnetization. It thus becomes possible to achieve a suppression of the fat signal, based on the short T1 relaxation time of fat (see Fig. …). Relaxation-dependent fat suppression using an inversion pulse prior to the fast spin-echo train is routinely used to demonstrate bone infarctions and bone marrow abnormalities such as bone marrow edema, e.g., in sickle cell anemia. This fat suppression scheme is also used in genitourinary applications, where the high signal intensity of fat may obscure contrast-enhanced tumor spread. Since only the fat suppression is desired, the inversion recovery technique used here is not phase sensitive; only the magnitude of the longitudinal magnetization is used. The acronym used in this case is TIRM (turbo inversion recovery with magnitude consideration). The structure of a turbo inversion recovery (TIR) sequence is presented in Fig. … The reduction in measurement time due to the use of multiple phase-encoded spin-echoes permits inversion times on the order of seconds while keeping the measurement time acceptable. A long inversion time (on the order of 2.5 s) provides a relaxation-dependent suppression of the cerebrospinal fluid (CSF) signal (Fig. …). The utilization of a long inversion time is called fluid-attenuated inversion recovery, or FLAIR. In combination with TSE imaging (see Fig. … for the sequence structure), the technique is called TurboFLAIR or simply TIRM. Since CSF has the longest T1 relaxation time, the longitudinal magnetization within all other tissues will already be aligned parallel to the main magnetic field, and it is not necessary to have a phase-sensitive IR method for this application. The attenuated CSF signal allows a better differentiation of periventricular lesions and has demonstrated a superior sensitivity for focal white matter changes in the supratentorial brain, whereas lesions located in the posterior fossa can be missed. The TurboFLAIR method apparently allows the identification of hyperacute subarachnoid hemorrhage with MR, precluding the need for an additional CT. The time-consuming IR method has been used in the past for studying the development of white matter tracts in developmental pediatrics. This technique has been replaced by an inversion pulse prior to the spin-echo train of a TSE sequence. The selected inversion time (on the order of a few hundred milliseconds) allows a better delineation of small differences in T1 relaxation times, e.g., for documenting the development of the pediatric brain. The improved tissue characterization between gray matter and white matter tracts allows, e.g., the demonstration of mesial temporal sclerosis and the visualization of hippocampal atrophy. For this application a so-called phase-sensitive inversion recovery is required, to differentiate nuclear magnetization aligned parallel to the magnetic field from antiparallel alignment at the time of excitation. The corresponding acronym is TIR. The residual transverse magnetization after measuring a single Fourier line or, in the case of multi-echo imaging, after measuring the "package" of Fourier lines, is usually spoiled.
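The choice of inversion time for tissue nulling can be made concrete with a small sketch (an addition, not from the original text). Assuming complete relaxation between repetitions (TR much longer than T1), the longitudinal magnetization of a tissue crosses zero at TI = T1·ln 2; the T1 values below are rough literature-order estimates for 1.5 T, used only for illustration. In practice, a finite TR makes the optimal FLAIR inversion time shorter than this simple estimate.

    import math

    def null_ti_ms(t1_ms: float) -> float:
        """Inversion time that nulls a tissue with the given T1, assuming TR >> T1."""
        return t1_ms * math.log(2.0)

    print(f"Fat (T1 ~ 260 ms):  TI ~ {null_ti_ms(260):.0f} ms (STIR)")
    print(f"CSF (T1 ~ 4000 ms): TI ~ {null_ti_ms(4000):.0f} ms (FLAIR, upper bound)")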
A later-introduced concept refocuses the transverse magnetization at the end of the echo train and uses an RF pulse to "restore" the residual transverse magnetization back to the longitudinal direction. The method improves the recovery of the longitudinal magnetization for tissues with long relaxation times, allowing a further shortening of the repetition time without loss of contrast. The technique is called RESTORE (Siemens), fast recovery fast spin-echo, FRFSE (GE), and DRIVE (Philips). It does not make a difference whether the magnetization is prepared after the measurement of a Fourier line or at the very beginning of a new excitation cycle; for this reason it is justified to list RESTORE as a turbo spin-echo scheme with preparation of the longitudinal magnetization. Multi-echo spin-echo imaging has the potential to acquire T1- and/or T2-weighted spin-echo images of the beating heart within a breath hold. The only obstacle that needs to be addressed is the significant flow artifacts caused by the flowing blood. The introduction of the dark-blood preparation scheme finally revolutionized cardiac MR imaging. With this preparation scheme, it is now possible to acquire T1- and T2-weighted images of the beating heart within a breath hold, without any flow artifacts. The magnetization of the whole imaging volume is inverted non-selectively, followed by a selective re-inversion of the slice. This is done at end diastole, triggered by the detection of the QRS complex. During the waiting period that follows, most of the re-inverted blood is washed out of the slice and replaced by inverted blood, and the spin-echo train, acquired again toward end diastole, will show "black" blood. A double inversion pulse even allows not only the black-blood preparation but also the suppression of the fat signal, which is helpful in characterizing fatty infiltration of the myocardium in arrhythmogenic right ventricular dysplasia (ARVD). Single shot, per definition, refers to a single excitation pulse and the use of multiple phase-encoded echoes to fill the required Fourier lines. The original RARE was published as a single-shot spin-echo technique. Other acronyms found in the literature are SS-FSE for single-shot fast spin-echo (GE) and SS-TSE for single-shot turbo spin-echo (Philips). The combination with a half-Fourier technique allows a further reduction in measurement time and has been named HASTE: half-Fourier acquired single-shot turbo spin-echo. As elaborated earlier, the first and the last data points of a Fourier line are characterized by the transverse magnetization of adjacent voxels pointing in opposite directions. The same situation is found for the first and last Fourier lines within k-space, considering the transverse magnetization of adjacent voxels along the phase-encoding direction: k-space is symmetrical. Although this Hermitian symmetry is an idealization and reality is slightly different, it has been claimed that the deviations from the ideal situation are only of a coarse nature and that a few (e.g., eight) Fourier lines measured beyond the center of k-space are sufficient to correct for this insufficiency. As an example, for a 256 × 256 matrix, a single-shot spin-echo technique using the half-Fourier approach would use 256/2 + 8 = 136 phase-encoded echoes to fill k-space. The measurement time using this acquisition method is about a second per slice.
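A hedged illustration of the echo count and slice time just given (an addition; the inter-echo spacing is an assumed value, not from the text):

    def haste_echoes(n_phase: int, overscan: int = 8) -> int:
        """Phase-encoded echoes needed for a half-Fourier single-shot acquisition."""
        return n_phase // 2 + overscan

    echoes = haste_echoes(256)                 # 136 echoes for a 256-line matrix
    echo_spacing_ms = 5.0                      # assumed inter-echo spacing
    print(echoes, f"{echoes * echo_spacing_ms / 1000.0:.2f} s per slice")  # ~0.7 s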
The high number of echoes suggests that this technique is only useful for T2-weighted imaging, and the blurring due to signal variation in k-space as a result of T2 decay is prohibitive for high-resolution studies. Nevertheless, it is an alternative, even in the brain, for a fast T2-weighted study in patients who are not able or willing to cooperate. Since it is the perfect technique to visualize fluid-filled cavities, HASTE is used, e.g., for MRCP (magnetic resonance cholangiopancreatography). A typical result of this sequence is shown in Fig. … Progress in hardware development and the correlated improvement in image quality, together with pioneering research within this field, have recently led to an impressive increase in the utilization of HASTE for obstetric imaging. Although sonography remains the imaging technique of choice for prenatal assessment, the complementary role of MR imaging is becoming more and more important in the early evaluation of brain development of the unborn child, or even in the early detection of complications within the fetal circulatory system. The search for shorter measurement times for faster imaging led to the group of gradient-echo (GRE) sequences in 1986. An MR signal can be detected immediately after the excitation pulse; that signal is the free induction decay (FID). In addition to the spin-spin interaction causing the T2 relaxation, other dephasing mechanisms contribute to the image contrast, mechanisms that are based on differences in precessional frequencies due to magnetic field variations across a voxel. The main sources of local magnetic field variations are differences in tissue-specific susceptibility values. Since these dephasing mechanisms are fixed in location and constant over time, they are refocused by the 180° RF refocusing pulse in SE imaging. Omitting this pulse leads to a contribution of these dephasing mechanisms to the image contrast. The observed tissue-specific relaxation parameter is then called T2* rather than T2, with

1/T2* = 1/T2 + 1/T2′,

where T2′ is both machine and sample dependent. Although the missing 180° RF refocusing pulse causes a rapid dephasing of the transverse magnetization, and with that a rapidly decaying MR signal, the echo time can be reduced as well, and so can the repetition time. The shorter echo time allows, in most cases, the detection of a signal despite the rapid dephasing of the transverse magnetization. Since the echo is now formed using a bipolar gradient pulse in the direction of frequency encoding, these techniques are called GRE. With shorter repetition times, an extension of phase encoding to the direction of slice or slab selection can be considered, and 3D imaging becomes feasible. The short excitation pulses used in common GRE imaging result in a less perfect slice profile compared with the slice profile achieved with the 90°–180° combination of longer RF pulses typically utilized in SE imaging. As a result, there are significant signal contributions from the low-angle-excited outer regions of a slice, explaining the basic difference in contrast between a GRE image and an SE image, even if a 90° excitation angle is utilized. Similar to the spin-echo acquisition scheme, there is residual transverse magnetization left at the end of the acquisition of one Fourier line, and similar to the spoiling of the transverse magnetization at the end of the measurement in SE imaging, the same process can be applied to GRE imaging as well.
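A short numeric sketch of the relation just given (an addition for illustration; the tissue values are assumptions): the effective decay constant T2* is always shorter than T2, and the GRE signal decays as exp(−TE/T2*).

    import math

    def t2_star_ms(t2_ms: float, t2_prime_ms: float) -> float:
        """Effective decay time: 1/T2* = 1/T2 + 1/T2'."""
        return 1.0 / (1.0 / t2_ms + 1.0 / t2_prime_ms)

    t2s = t2_star_ms(t2_ms=100.0, t2_prime_ms=50.0)   # -> 33.3 ms
    te = 15.0
    print(f"T2* = {t2s:.1f} ms, relative GRE signal at TE={te} ms: {math.exp(-te / t2s):.2f}")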
Spoiling can be done with a gradient pulse, distributing the transverse magnetization evenly so that the next excitation pulse will not generate a stimulated echo; alternatively, the phases of the excitation pulses can be randomized in order to avoid the buildup of a steady state of the transverse magnetization (RF spoiling). Spoiled gradient-echo imaging has been introduced as fast low-angle shot (FLASH), T1-weighted fast field echo (T1-FFE), and spoiled gradient-recalled acquisition in the steady state (SPGR). FLASH imaging allows multislice imaging in measurement times short enough for breath-hold acquisitions. Since the contrast mainly depends on the T1 relaxation time, FLASH images are usually called T1 weighted. In clinical routine, FLASH sequences have been introduced for diagnosing cartilage lesions (Fig. …), for abdominal breath-hold T1-weighted imaging (Fig. …), and for dynamic contrast-enhanced studies. As discussed in Sect. …, not only the amplitude of the signal but also the basic contrast behavior can be controlled: for instance, when using an extremely small excitation angle and moderate repetition times, one can minimize the influence of the T1 relaxation time (see Fig. …) and thus obtain proton-density-weighted contrast. The alternative to spoiling the residual transverse magnetization after the end of the Fourier-line acquisition is to rephase what has been dephased for spatial encoding. This was introduced as fast imaging with steady precession (FISP) (Oppelt et al. 1986), later to be called TrueFISP. The original implementation and publication of FISP uses gradient refocusing in the phase-encoding as well as in the frequency-encoding and slice-select directions. As this sequence was susceptible to artifacts at the time, the implemented and released FISP sequence, still used today, is refocused only in the phase-encoding direction, with no refocusing in the readout and slice-selection directions. Such a sequence has been called ROAST (resonant offset acquired steady state [Haacke et al.]). For the FISP sequence, the phase encoding is reversed after the acquisition of the Fourier line, undoing the dephasing that was applied for spatial encoding. This approach leads to a steady state not only of the longitudinal magnetization but also of the transverse magnetization, for tissues with long T2 relaxation times. Differences in FISP contrast compared with FLASH will only be visible for short repetition times and large excitation angles, and will only enhance the signal of tissues with long T2 relaxation times. General Electric introduced this technique as gradient-recalled acquisition in the steady state (GRASS); Philips uses fast field echo (FFE). If one rephases, at the end of the measurement of a Fourier line, all parts of the transverse magnetization that have been dephased for spatial encoding, and if one compensates in advance for the dephasing to be expected while the slice-selection gradient is switched on, one obtains the TrueFISP sequence (Fig. …). This technique combines the advantages of the FISP sequence and the PSIF sequence, with further echo paths contributing to the overall signal. A clinical application of this sequence is shown in Fig. …
[Fig. …: Structure of the TrueFISP sequence. The sequence appears "balanced" due to its symmetry in time; all components of the transverse magnetization are refocused at the end of the measurement, leading to a steady state.]
This original approach of refocusing all the transverse magnetization at the end of the measurement of one Fourier line will not only establish a steady state of the transverse magnetization; the next excitation pulse will also operate as a refocusing pulse. The excitation pulse will not only convert longitudinal magnetization into transverse magnetization, but will also generate a spin-echo. The sequence appears symmetric, "balanced." The challenge is to get all the generated echoes to have one phase; otherwise the echoes will interfere destructively, causing band-like artifacts. The positions of these bands also depend on the starting phase of the RF pulse. Adding another acquisition with a phase-shifted RF pulse leads to a technique called constructive interference steady state (CISS), or phase-cycled fast imaging employing steady-state acquisition (PC-FIESTA, the acronym used by GE). Since CISS contains spin-echo components, the technique is useful even in regions with significant susceptibility gradients, e.g., nerve imaging at the base of the skull. Since it is a fast technique with a hyperintense appearance of fluid-filled cavities, it is primarily applied to study abnormalities of the internal auditory canal. The originally published FISP is the TrueFISP, where all the dephasing is reversed and even the slice-selection gradient prepares for the dephasing to be expected during the first half of the next excitation pulse. The TrueFISP technique is a fast gradient-echo sequence with spin-echo contributions, leading to a hyperintense appearance of all tissues with long T2 relaxation times. The technique is primarily used in fast cardiac imaging, for cine snapshots of the beating heart. General Electric uses the acronym FIESTA for the same technique; Philips uses bFFE (balanced fast field echo). The previously mentioned spin-echo component of a balanced technique can be isolated and used to generate an image. The PSIF sequence shown in Fig. … appears at first to violate causality: a FISP sequence running backward. The signal-inducing transverse magnetization is produced by the first excitation at the end of the first cycle, refocused by the second excitation at the end of the second cycle, and induces a signal at the beginning of the third cycle. The effective echo time therefore amounts to almost two repetition times, and the resulting images consequently show a remarkable T2 weighting. (Note that in this case it is a spin-echo, not a gradient-echo.) The PSIF sequence is insensitive to susceptibility gradients. In contrast to CISS, however, PSIF is very sensitive to flow and motion; it is therefore not applied for imaging of the internal auditory canal, but rather used as an adjunct to demonstrate abnormal CSF flow patterns. General Electric calls this technique simply steady-state free precession (SSFP), while Philips uses the acronym T2-FFE. Imaging of the cochlea (Fig. …) is no longer performed with PSIF but rather with CISS, due to the intrinsic flow insensitivity of the latter.
When combining a FISP image with a PSIF image, one obtains an image with a T2* weighting via the GRE signal and a T2 weighting via the SE signal. Such a sequence is called double-echo steady state (DESS, Fig. …). The DESS sequence is routinely used in orthopedic imaging (Fig. …). It links the advantages of the FISP sequence with the additional signal enhancement of the PSIF sequence for tissues with long T2 (e.g., edema and joint effusion). In theory, all the magnetization preparation schemes previously applied to the SE group can also be applied to the GRE sequences. However, it has to be kept in mind that GRE sequences usually use shorter TE and shorter TR than SE imaging does, and therefore some preparation schemes have to be slightly altered. Consider, for example, the fat saturation scheme: the time necessary for a spectral saturation pulse followed by a gradient spoiler adds to the slice-loop time of an otherwise short-TR GRE sequence. A feasible modification is to skip the fat saturation for a few slices; this is referred to as "quick fat sat." Although a slice-dependent recovery of the fat signal is observed, the compromise is in general acceptable, since the fat signal stays low and more slices can be measured per TR. The quality of a spectral saturation pulse depends on the overall field homogeneity within the imaging volume. In addition, the spectrally fat-saturating RF pulse is very close to the water resonance, causing a loss in overall SNR. For a non-selective excitation, it is theoretically possible to simply excite either fat or water using the tissue-specific Larmor frequency. In practice, such an approach is very prone to artifacts due to imperfect field homogeneity within the volume of interest. Better results in water excitation or fat excitation have been achieved with binomial pulses (1-1, 1-2-1, or 1-3-3-1). The mechanism of, e.g., a 1-2-1 RF pulse can be described as follows, leading finally to a 90° excitation of water only. After an initial 22.5° RF pulse, there is a waiting period, allowing the magnetization of fat to fall behind the magnetization of water. At the point of opposite orientation of the two magnetizations, a 45° excitation moves the magnetization of water to a 67.5° position with respect to the longitudinal direction, whereas the magnetization of fat is flipped back to a 22.5° position. After another such waiting period, a final 22.5° excitation pulse accomplishes the 90° excitation of water, while the magnetization of fat is restored to the longitudinal position and does not contribute to the MR signal. Another advantage of these binomial pulses is that they can be executed using either non-selective or selective RF pulses; in the latter case they are called spatial-spectral or simply composite pulses. Short-TR, short-TE GRE imaging utilizing the Ernst angle leads to PD-weighted rather than T1-weighted images. In SE imaging, the T1 contrast is improved by placing an inversion pulse prior to the acquisition of each Fourier line. This approach is not feasible in GRE imaging, since the inversion time would be much larger than the commonly used repetition times. In fast GRE imaging, an inversion pulse is therefore applied prior to the whole imaging sequence (Fig. …). That concept has been introduced as TurboFLASH (also SnapshotFLASH [Haase et al.], fast SPGR (FSPGR), or turbo field echo (TFE)). The minor drawback is that the longitudinal magnetization, and consequently the generated transverse magnetization, changes throughout the measurement. The resulting violation of k-space symmetry causes an under- or over-representation of some spatial frequencies, producing a slight image blurring that is typical for TurboFLASH imaging.
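The interpulse delay of such a binomial pulse follows from the fat-water chemical shift: the wait must equal half a period of the frequency difference. A hedged sketch with assumed values (not from the text):

    # Interpulse delay for a binomial water-excitation pulse.
    GAMMA_MHZ_PER_T = 42.58      # proton gyromagnetic ratio / 2*pi
    SHIFT_PPM = 3.4              # approximate fat-water chemical shift

    def binomial_delay_ms(b0_tesla: float) -> float:
        """Delay for fat to accumulate 180 deg of phase relative to water."""
        delta_f_hz = SHIFT_PPM * 1e-6 * GAMMA_MHZ_PER_T * 1e6 * b0_tesla
        return 1000.0 / (2.0 * delta_f_hz)

    print(f"1.5 T: {binomial_delay_ms(1.5):.2f} ms")  # ~2.3 ms
    print(f"3.0 T: {binomial_delay_ms(3.0):.2f} ms")  # ~1.15 ms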
When using this method, one has to consider the following three facts:
• The longitudinal component of the macroscopic magnetization recovers after the inversion pulse with the T1 relaxation time. This relaxation process also takes place during data acquisition, so the various measured Fourier lines have different T1 weightings. The image contrast is dominated by the T1 weighting of the Fourier lines measured at the center of k-space.
• The recovery of the longitudinal magnetization is influenced by the excitation angles of the rapid GRE data acquisition. In order to minimize this influence and to obtain a maximum effect of the preparation pulse on the image contrast, the rapid GRE acquisition needs to be executed with small excitation angles.
• Every Fourier line is measured with a different phase-encoding gradient and contains the spatial information of the object in the phase-encoding direction. The k-space symmetry is significantly violated due to the change in signal contribution for each measured spatial frequency as a consequence of the T1 relaxation during sampling. As a result, the images appear blurred, with imprecise edges and coarse signal oscillations parallel to the edges.
[Fig. …: After an inversion pulse and an inversion time, the small-angle excitation is repeated several times until the raw data matrix is filled.]
With the introduction of short-TR gradient-echo acquisition schemes, 3D imaging became feasible. The application of an inversion pulse prior to a 3D acquisition scheme is not very promising, since the preparation of the longitudinal magnetization would vastly diminish during the relatively long measurement time and the significant number of low-angle excitation pulses. A feasible alternative is to repeat the preparation of the longitudinal magnetization in either the partition-encoding loop or the phase-encoding loop. Although the time savings would be larger when placing the inversion pulse prior to the longer phase-encoding loop, at the time it was only possible to place the inversion pulse prior to the partition-encoding loop. This turned out to be fortunate, because the previously described TurboFLASH artifact based on the over- and under-representation of k-space lines is thereby avoided. The phase-encoding gradient is prepared, the inversion pulse is applied, and the partition-encoding loop is rapidly executed, during which the amount of longitudinal magnetization changes according to the course of the T1 relaxation (a recovery influenced by the low-angle excitation pulses). After this, the next phase-encoding line is prepared, the inversion pulse applied, and again the whole partition loop executed. The amount of signal within each partition is identical for all phase-encoding steps, the k-space is again symmetric, and the images are artifact free. This technique has been introduced as magnetization-prepared rapid acquired gradient-echoes (MP-RAGE, Mugler and Brookeman) (Fig. …). Figure … shows a typical application of the MP-RAGE sequence: the medial-sagittal T1-weighted slice of an examination covering the entire skull in only a few minutes. The sequence showed some promise to replace conventional T1-weighted spin-echo imaging of the brain, since it allows the gapless coverage of the whole brain within minutes; but it is a gradient-echo sequence, and susceptibility gradients, especially at the base of the skull, cause geometrically distorted representations of the anatomy or even signal voids.
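The course of the longitudinal magnetization during such an inversion-prepared low-angle echo train, which underlies both the TurboFLASH blurring and the MP-RAGE contrast, can be simulated in a few lines. This is an illustrative sketch with assumed parameters, not a reproduction of any implementation from the text.

    import math

    def inv_prepared_train(t1=1000.0, tr=8.0, ti=300.0, flip_deg=8.0, n_lines=128):
        """Longitudinal magnetization (in units of M0) seen by each excitation
        of an inversion-prepared low-flip-angle GRE train."""
        cos_a = math.cos(math.radians(flip_deg))
        e_ti, e_tr = math.exp(-ti / t1), math.exp(-tr / t1)
        mz = 1.0 - 2.0 * e_ti                    # recovery during TI after inversion
        out = []
        for _ in range(n_lines):
            out.append(mz)                       # Mz just before the pulse
            mz *= cos_a                          # low-angle excitation
            mz = 1.0 + (mz - 1.0) * e_tr         # T1 recovery during TR
        return out

    mz = inv_prepared_train()
    print(f"first: {mz[0]:+.2f} M0, center: {mz[64]:+.2f} M0, last: {mz[-1]:+.2f} M0")

The signal varies from line to line, so the weighting of the k-space center (acquired mid-train) dominates the contrast, exactly as described above.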
Another disturbing effect is the appearance of contrast enhancement in active lesions. Due to the commonly "squishy" content of lesions, their appearance is usually iso- to hypointense in T1-weighted imaging. MP-RAGE allows better control over the T1 weighting, potentially rendering a lesion more hypointense than in SE imaging. In conjunction with contrast uptake, lesions will show up hyperintense on T1-weighted SE imaging; they may or may not show up hyperintense on MP-RAGE imaging. The appearance has been reported to be inconsistent, likely due to the better T1 weighting (a hypointense lesion may appear isointense after contrast uptake). The rapid acquisition of a gradient-echo or steady-state sequence following an inversion is sometimes referred to as a single-shot technique. This is not quite correct, since as many low-angle excitation pulses are applied as Fourier lines are needed to fill the k-space; but the single-shot nomenclature allows a differentiation from the segmented, or multi-phase, imaging of the beating heart. Similar to the STIR approach used in fat-suppressed imaging, the inversion pulse prior to a single-shot technique enables the nulling of the signal of a specific tissue (depending on its T1 relaxation time). The TurboFLASH technique is used to study the first pass of a contrast bolus through the cardiac chambers, showing delayed enhancement in perfusion-restricted ischemic myocardium. The inversion time is adjusted so that normal myocardium gives no signal. In the early phase, normal myocardium is perfused with the T1-shortening contrast agent, whereas the perfusion-restricted ischemic myocardium remains hypointense. The same method of tissue signal nulling can be applied to TrueFISP; this technique has been used to demonstrate the late enhancement of infarcted myocardium. An advantage of TrueFISP over TurboFLASH is the additional signal contribution due to refocusing and balancing (spin-echo components), allowing a higher-bandwidth acquisition correlated with a shorter TE, a shorter TR, and therefore a shorter measurement time. In addition, TrueFISP has a significantly lower sensitivity to flow and motion artifacts than TurboFLASH, leading to (almost) artifact-free images.
Both methods are currently being evaluated regarding their value in characterizing myocardial viability. The single-shot gradient-echo technique is echo planar imaging (EPI). Similar to TSE imaging, EPI makes use of several phase-encoded echoes to fill the raw data matrix (Fig. …). There are multiple ways to acquire the data. A single excitation can be utilized, followed by multiple phase-encoded gradient-echoes with a small, constant phase-encoding gradient activated during the readout period. Such a technique is called FID-EPI, since signal sampling is done during the free induction decay. Another variant places the gradient-echoes under an SE envelope (Fig. …); in this case the central k-space contains a T2 contrast, as opposed to the T2* contrast of the FID-EPI version, and the SE-EPI sequence shows a lower sensitivity to susceptibility gradients. Figure … shows an SE-EPI with an alternative approach to phase encoding: the phase encoding is done using gradient "blips" during the ramping of the frequency-encoding gradient. Such a technique is called blipped EPI.
[Fig. …: FID-EPI sequence. After an excitation pulse, multiple GREs are generated using an oscillating frequency-encoding gradient; in this example the phase encoding is achieved with a low-amplitude, constant phase-encoding gradient throughout the measurement.]
[Fig. …: SE-EPI sequence. After an excitation and a refocusing pulse, multiple GREs are generated using an oscillating frequency-encoding gradient; the phase encoding is achieved using small gradient pulses (blips) during the ramping of the frequency-encoding gradient.]
Addressing a different way of k-space sampling, both the "frequency-encoding" gradient and the "phase-encoding" gradient may oscillate, causing a spiral trajectory through k-space. Such a method is known as spiral EPI. The quotation marks indicate that the magnetic field gradients no longer have the literal meaning of frequency and phase encoding. The high sensitivity of EPI to local field inhomogeneities is utilized in (brain) perfusion imaging and for monitoring the blood oxygen level to identify cortical activation in BOLD (blood oxygenation level-dependent) imaging. In spite of many limitations, EPI sequences have attained high clinical potential in functional imaging and in perfusion studies. The preparation of the longitudinal magnetization is possible not only with the previously described multi-shot techniques, but also with the single-shot version. Single shot, per definition, means one excitation pulse and multiple phase-encoded gradient-echoes for the sampling of all Fourier lines. The primary preparation scheme for single-shot gradient-echo imaging is diffusion weighting. Any magnetic field gradient in the presence of a transverse magnetization causes the Larmor frequency to become a function of location. Sometimes this effect is desired, as in any phase encoding, and sometimes it is a byproduct of another desired functionality, e.g., the frequency encoding. To rephase or refocus the dephased transverse magnetization, a magnetic field gradient of opposite polarity can be applied prior to the frequency-encoding gradient. But this only works if the transverse magnetization does not change its position in the meantime, which is not the case for diffusion. If the transverse magnetization changes position, its phase history differs from that of stationary tissue at the new location, and the rephasing will be insufficient. Insufficient rephasing results in a reduced signal. The signal drop is characterized by

S = S0 · exp(−b·D),

with b being a system- or method-specific parameter and D being the diffusion coefficient of the tissue. The parameter b is a function of the gradient amplitude G used, the duration δ of each gradient lobe, and the temporal distance Δ between the two gradient lobes; Δ is also called the diffusion time. For rectangular gradient lobes, b = γ²G²δ²(Δ − δ/3). A typical value is b = 1,000 s/mm². A sequence illustration is given in Fig. …, and the result of the application of such a technique in a patient with an acute infarction is shown in Fig. … Diffusion-weighted imaging allows an evaluation of the extent of cerebral ischemia in a period in which possible interventions could limit or prevent further brain injury. The diffusion anisotropy that can be measured with this method allows the mapping of neuronal connectivity and offers an exciting perspective for brain research. The turbo gradient spin-echo sequence (TGSE), also called gradient and spin-echo (GRASE), is a combination of multiple gradient-echoes acquired within multiple SE envelopes of a TSE sequence, as shown in Fig. …
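A hedged numerical check of the b-value relation just given (an addition; the gradient timing values are assumptions chosen for illustration):

    GAMMA_RAD_PER_S_T = 2.675e8   # proton gyromagnetic ratio (rad/s/T)

    def b_value_s_per_mm2(g_mt_per_m: float, delta_ms: float, Delta_ms: float) -> float:
        """Stejskal-Tanner b-value for rectangular diffusion gradient lobes:
        b = (gamma * G * delta)^2 * (Delta - delta/3)."""
        g = g_mt_per_m * 1e-3        # T/m
        delta = delta_ms * 1e-3      # s
        Delta = Delta_ms * 1e-3      # s
        b_si = (GAMMA_RAD_PER_S_T * g * delta) ** 2 * (Delta - delta / 3.0)  # s/m^2
        return b_si * 1e-6           # s/mm^2

    # e.g., 30 mT/m lobes of 20 ms duration separated by 30 ms -> roughly 600 s/mm^2
    print(f"b = {b_value_s_per_mm2(30.0, 20.0, 30.0):.0f} s/mm^2")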
This method holds several advantages compared with the "simple" TSE sequence. The use of several phase-encoded gradient-echoes has the potential of further shortening the measurement time; Fig. … shows a transversal T2-weighted head image with a high-resolution matrix, measured with a TGSE sequence in a few minutes. Another advantage is the fact that, with the use of several gradient-echoes per spin-echo envelope, the gap between the refocusing pulses widens. The J-coupling therefore remains intact, fat appears darker, and the contrast approaches that of conventional SE sequences. Further, the enhanced sensitivity to susceptibility gradients introduced with the gradient-echoes allows a better depiction of blood breakdown products, similar to conventional SE imaging. Clinical MR systems come in various types and shapes; however, the fundamental components of a clinical MR tomograph are essentially the same. These are:
• The magnet: the magnet creates a static and homogeneous magnetic field B0, which is needed to establish a longitudinal magnetization.
• The gradients: the gradient coils generate additional, linearly varying magnetic fields that can be switched on and off. The gradient fields allow assigning a spatial location to the received MR signals (spatial encoding). For an image acquisition, three independent gradient systems in the x-, y-, and z-directions are required.
• The radiofrequency (RF) system: to rotate the longitudinal magnetization from its equilibrium orientation along B0 into the transverse plane, an oscillating magnetic field B1 is required. This RF field is generated by a transmitter and coupled into the patient via an antenna, the RF coil. Radiofrequency coils are also used to receive the weak induced MR signals from the patient, which are then amplified and digitized.
• The computer system: measurement setup and image post-processing are performed by (distributed) computers that are controlled by a host computer. At this host computer, new measurements are planned and started, and the reconstructed images are stored and analyzed.
A schematic of the components of a clinical MR system is shown in Fig. …; more detailed descriptions can be found in the works of Oppelt, Vlaardingerbroek et al., and Chen and Hoult. In the upper part, a cross-section through a superconducting magnet can be seen, with the field-generating magnet windings embedded in a cryostatic tank. Closer to the patient, the gradient coil and the whole-body RF coil are located outside the cryostat. The magnet is surrounded by an electrically conducting cabin (Faraday cage), which is needed to optimally detect the weak MR signals without RF background from other RF sources (e.g., radio transmitters). In the lower part, the computing architecture and the hardware control cabinets are shown. A hardware computer controls the gradient amplifier, the RF transmitter, and the receiver. The received and digitized MR signals are passed on to an image-reconstruction computer, which finally transfers the reconstructed image data sets to the host computer for display and storage. To generate the main magnetic field, three different types of magnets can be utilized: permanent magnets, resistive magnets, and superconducting magnets (Oppelt; Vlaardingerbroek; Chen). The choice of an individual magnet type is determined by the requirements on the magnetic field.
Important characteristics are the field strength B0, the spatial field homogeneity, the temporal field stability, patient accessibility, and construction and servicing costs. As outlined in Sect. …, a high magnetic induction is desirable, as the MR signal S is approximately proportional to B0² and the signal-to-noise ratio (SNR) increases approximately linearly with B0. It is thus expected that with increasing field strength the measurement time can be substantially decreased. The field strength is limited, however, for the following reasons:
• For tissues in typical magnetic fields of 0.5 T and higher, the longitudinal relaxation time T1 increases with field strength. If the same pulse sequence with identical measurement parameters (TR, TE, etc.) were used at low field and at high field, the T1 contrast would be less pronounced in the high-field image, since image contrast typically depends on the ratio of TR over T1. To achieve a similar T1 contrast with a conventional SE or GRE pulse sequence, TR (and thus the total measurement time) needs to be increased.
• The resonance frequency ω0 increases linearly with field strength according to ω0 = γB0. At higher frequencies, the wavelengths of the RF waves are of the order of, or even smaller than, the dimensions of the objects to be imaged. Under these circumstances, standing waves can be created in the human body, which manifest as areas of higher RF field (hot spots) and neighboring areas of reduced RF intensity. These unwanted RF inhomogeneities are difficult to control, as they depend on the geometry and the electric properties of the imaged object.
• The power deposited in the tissue during RF excitation rises quadratically with ω0 (and thus with B0). To ensure patient safety at all times during the imaging procedure, the specific absorption rate (SAR), i.e., the amount of RF power deposited per kilogram of body weight, is monitored and limited by the MR system. With increasing field strength, the RF power generated by a pulse sequence increases, and the flip angle has to be lowered to stay within the guidelines of SAR monitoring. Since most pulse sequences require certain flip angles (e.g., a 90°–180° pulse pair for an SE), the RF pulses need to be lengthened at higher field strength to reduce the RF power per pulse. Additionally, the time-averaged power can be lowered by increasing the TR.
• At high field strengths, only superconducting magnets can be used for whole-body imaging systems. These magnets become very heavy and expensive; a high-field whole-body magnet weighs several times more than a conventional 1.5-T magnet. Shielding of the stray fields, which is, e.g., necessary to avoid interference with cardiac pacemakers, becomes increasingly difficult.
• The absolute differences in resonance frequency between chemical substances increase with field strength. This effect is beneficial for high-resolution spectroscopy, as high field strengths allow the individual resonance lines to be separated. During imaging, however, a substantially increased chemical shift artifact (i.e., a geometric shift of the fatty tissues versus the water-containing tissues) is seen, which can only be compensated using higher readout bandwidths.
• Differences in magnetic susceptibility between neighboring tissues create a static field gradient at the tissue boundaries. The strength of these unwanted intrinsic field gradients scales with B0. Therefore, increasingly stronger imaging gradients are required at higher field strength to encode the imaging signal without geometric distortion; gradient strengths, however, are technically limited. On the other hand, in neurofunctional MRI (fMRI) the increased sensitivity at higher field strengths is utilized to visualize those brain areas where local susceptibility differences in the blood are modulated during task performance.
[Fig. …: Schematic of the components of a clinical MR system. The cryotank is filled with liquid helium and houses the primary magnet coils together with the shielding coils that create the magnetic field; the cryotank is embedded in a vacuum tank. In a separate tubular structure in the magnet bore, the gradient coil and the RF body coil are mounted. An MR measurement is initiated by the user from the host computer; the timing of the sequence is monitored by the hardware computer, which controls (among others) the RF transmitter, the RF receiver, and the gradient system. During the measurement, the RF pulses generated by the transmitter are applied (typically) via the integrated body coil, whereas signal reception is done with multiple receive coils. The digitized MR signals are reconstructed at the image-reconstruction computer, which finally sends the image data to the host for further post-processing and storage.]
In the clinical environment, magnetic field strengths between about 0.2 and 3 T are common. Low-field MR systems (below 1 T) are often used for orthopedic or interventional MRI, where access to the patient during the imaging procedure is important. Higher field strengths between 1 and 3 T are used for all other diagnostic imaging applications. Recently, whole-body MR systems with field strengths up to 8 T have been realized (Robitaille et al.). With these systems, in particular neurofunctional studies, high-resolution imaging and spectroscopy, as well as non-proton imaging (e.g., for molecular imaging) are planned, since these applications are expected to profit most from the high static magnetic field. In the following, the three types of magnet that are used to create the static magnetic field are described. Permanent MRI magnets are typically constructed of the magnetic material NdFeB. Permanent-magnet materials are characterized by their hysteresis curve, which describes the non-linear response of the material to an external magnetic field. If an external field is slowly increased, the magnetization of the material will also increase until all magnetic domains in the material are aligned; at this point the magnet is saturated, and no further amplification of the external field is possible. If the external field is then switched off, a constant, non-vanishing magnetic field remains in the material, because some of the domains remain aligned. Permanent magnets offer very high remanence field strength and require nearly no maintenance, because they provide the magnetic field directly without any electrical components. Permanent magnets often use a design with two poles, which are either above and below (Fig. …) or at the sides of the imaging volume. Within this volume, the magnetic field lines should be as parallel as possible (high field homogeneity), which is achieved by shaping the pole shoes. Due to this construction, the magnetic field is typically orthogonal to the patient axis, whereas high-field superconductors use solenoid magnets with a parallel field orientation.
Magnetic field lines are always closed; therefore, an iron yoke is used in permanent magnets to guide the magnetic flux between the pole shoes. With increasing field strength, permanent magnets become very heavy (several tons and more), and the high price of the material NdFeB becomes a limiting factor. Additionally, to achieve high temporal field stability, the material requires a constant room temperature that varies by no more than about 1 K. For these reasons, permanent magnets are typically used only at low field strengths. If an electrical current flows through a conductor, a magnetic field is created perpendicular to the flow direction that is proportional to the current amplitude. Unfortunately, in conventional conductors (e.g., copper wire) the electric resistance converts most of the electric energy into unwanted thermal energy and not into a magnetic field. Therefore, a permanent current supply is required to maintain the magnetic field and to compensate for the ohmic losses in the wire. Additionally, to dissipate the thermal energy, resistive magnets need permanent water cooling, as their power consumption reaches several kilowatts. Resistive magnets use iron yokes to amplify and guide the magnetic field created by the electric currents. The iron yoke is surrounded by the current-bearing wires so that the field lines stay within the iron. In the simplest form, the closed iron yoke has a gap at the imaging location, and the magnet takes the form of a C-arc, which can also be rotated by 90° to provide good access to the patient (Fig. …). Other magnet designs use two or four iron posts connecting the pole shoes. The magnetic field of a resistive magnet is typically not as homogeneous as that of a superconducting magnet of the same size. To achieve high field homogeneity within the imaging field of view, both the diameter of the pole shoes and the pole separation must be substantially larger than the desired diameter of the imaging volume (DFOV); the usable imaging volume is therefore considerably smaller than the pole geometry suggests. Resistive magnets are susceptible to field variations caused by instabilities of the electric power supply. To minimize this effect, the magnetic field of the magnet can be stabilized using an independent method of measuring the field strength (e.g., electron spin resonance); the difference between the actual and the desired field strength is then used to regulate the current in the magnet in a closed feedback loop. To create magnetic fields of more than about 0.5 T with a bore size sufficient for whole-body imaging, today typically superconducting magnets are utilized (Fig. …). In principle, these magnets operate in a similar fashion to resistive magnets without an iron yoke: superconducting magnets also generate their magnetic field by wire loops carrying a current. Instead of copper wire, superconductors use special metallic alloys such as niobium-titanium (NbTi). These alloys completely lose their electric resistance below a certain transition temperature that is characteristic for the material; this effect is called superconductivity. The transition temperature itself is a function of the magnetic field, so that lower temperatures are required when a current is flowing through the wire. Unfortunately, an upper limit for the current density in the wire exists, which is also a function of the temperature and the magnetic field.
To maintain the required low temperatures, cooling with liquid helium is typically necessary (T ≈ −269 °C). The imaging volume of the MR system is kept at room temperature, whereas the surrounding superconducting wires require temperatures near absolute zero (−273.15 °C). To maintain this enormous temperature gradient, the field-generating superconducting coils are encased in an isolating tank, the cryostat. The cryostat is a non-magnetic steel structure that contains radiation shields to minimize heat radiation, heat conduction, and heat transport. If this isolation is not working properly and the wire locally warms up above the transition temperature, that section of the wire becomes normally conducting, and the energy stored in the current is dissipated as heat. The heat is then transported to adjacent sections of the wire, which in turn lose their superconductivity. This very rapid process is called a quench. When the magnet wire heats up, the liquid helium evaporates, and the cryostat is exposed to an enormous pressure. To prevent the cryostat from exploding, a so-called quench tube is connected to superconducting magnets with helium cooling, which safely guides the cold helium vapor out of the magnet room.
[Fig. …: Iron-frame electromagnet MR system (Upright™ MRI, Fonar) with a horizontal magnetic field. This special construction allows imaging in both upright and lying positions, which is especially advantageous for MR imaging of the musculoskeletal system.]
Recently, superconducting wires have also been made from materials on the basis of niobium-tin (Nb3Sn) alloys. The brittle Nb3Sn alloys show a higher transition temperature (about −255 °C) and thus do not necessarily require liquid helium cooling. If the cryostat is equipped with a good thermal vacuum isolation, a conventional cooling system (e.g., a Gifford-McMahon cooler) can be used to maintain the temperature. This technology has been realized both in a dual-magnet system (General Electric Signa SP, B0 = 0.5 T) and in a low-field open MR system (Toshiba OPART, B0 = 0.35 T). Because a helium-filled cryostat requires more space than a system without helium, these magnets can be installed in smaller rooms than comparable magnets with helium. In recent years, several MR magnets have been equipped with helium liquefiers to recover the evaporated helium gas in the magnet. Once filled with helium, these so-called zero boil-off magnets can in principle operate without any additional helium filling. Magnets without helium liquefiers require replenishment of the helium at intervals between several months and a few years, depending on the quality of the cryostat and the usage of the MR system. The most widespread form of a superconducting magnet is the solenoid, where the windings of the superconducting wire form loops around the horizontal bore of the cylindrical magnet. At a typical inner bore diameter of about 60 cm for clinical MRI systems, solenoid magnets can create very homogeneous magnetic fields with variations of only a few parts per million (ppm). Because the relatively bulky magnet structure limits access to the patient, shorter magnets with wider bore diameters of 70 cm have been designed (Siemens Magnetom Espree, B0 = 1.5 T) (Fig. …). In these magnets, obese patients can be imaged more conveniently, claustrophobic patients feel more at ease, and some MR-guided percutaneous interventions might become feasible.
Another variant of the solenoid is a dual-magnet MR system consisting of two collinear short solenoid magnets (General Electric Signa SP, B0 = 0.5 T); here the imaging area is located between the two magnets, and even intra-operative MR imaging is possible. Recently, two-pole systems with a magnet design similar to low-field resistive magnets have also become available, offering good patient access in combination with higher field strengths (Fig. …). Outside a superconducting magnet, the field strength falls off with the inverse third power of the distance (1/r³), so that the stray fields can extend far outside the MR room. Magnetic fields in commonly accessible areas must not exceed 0.5 mT, because higher fields can affect pacemakers and other active electric devices (Fig. …). For this reason, two shielding technologies have been utilized to reduce the magnetic fringe fields. With passive shielding, ferromagnetic materials such as steel are mounted near the magnet. This shielding technique confines the field lines to the interior of the shielding material, and the stray fields are reduced. Unfortunately, the amount of shielding material rapidly increases with increasing magnetic field, and many tons of steel are required to shield a high-field magnet (Schmitt et al.). With active shielding, a second set of wire loops is integrated in the cryostat of the magnet. The shielding coils create a magnetic field in the opposite direction of the imaging field, so that the stray field falls off more rapidly. The shielding coils have a larger diameter than the field-generating primary coils; the desired magnetic field within the magnet can thus be maintained by increasing the current in both coil systems. Additionally, the shielding coils and the primary coils repel each other (Lorentz forces), which requires a magnet design with more stable coil formers. The attractive forces acting on paramagnetic or ferromagnetic objects near such an actively shielded magnet are significantly higher than near an unshielded magnet; device compatibility and safety should therefore always be specified with regard to the investigated magnet type.
[Fig. …: Conventional 1.5-T superconducting MR magnet (Magnetom Symphony, Siemens). The system is equipped with an in-room monitor, which allows controlling the MR system from within the RF cabin.]
[Fig. …: Short, wide-bore magnet design. The additional bore diameter compared with conventional solenoid MR systems and the shorter magnet length offer better access to the patient, so that, e.g., percutaneous interventions can be performed in this magnet structure.]
To localize the MR signals emitted by the imaging object, a linearly increasing magnetic field, the gradient G, is superimposed on the static magnetic field B0. The gradient fields are created by gradient coils located between the magnet and the imaging volume (Schmitt et al.). For each spatial direction (x, y, and z), a separate gradient coil is required, and angulated gradient fields are realized by linear superposition of the physical gradient fields. In a cylindrical-bore superconducting magnet, the gradient coils are mounted on a cylindrical structure, which is often made of epoxy resin. This gradient tube considerably reduces the free bore diameter available for the patient.
The functional principle of a gradient system is best illustrated by a setup of two coaxial wire loops of radius a separated by a distance d (Fig. …). If the two coils carry the same current, but in counter-propagating directions, their respective magnetic fields cancel at the iso-center of the setup. At distances not too far from the iso-center, the magnetic field increases linearly with position, which is exactly the desired behavior of a gradient field. To achieve the most linear gradient field, the condition d = √3·a must be met (Maxwell coil pair) (Jin). In commercially available gradient systems, much more complicated wiring paths are utilized, which are optimized using the so-called target-field approach (Turner). This often results in wire patterns that, when plotted on a sheet of paper, resemble fingerprints (fingerprint design). Nevertheless, a common feature of all gradient systems is the absence of current at the central plane, which allows the gradient coils to be separated, e.g., for C-arc-type magnets. The quality of the gradient system is characterized by several parameters: the maximum gradient strength Gmax, the slew rate Smax, the homogeneity, the duty cycle, the type of shielding, and the gradient pulse stability and precision. Today, clinical MR systems reach maximum gradient strengths of several tens of mT/m at whole-body bore diameters. Even higher gradient strengths can be realized when so-called gradient inserts with smaller diameters are used (e.g., for head imaging). The maximum gradient strength is limited by the capabilities of the power supply of the gradient system; modern gradient systems use power supplies that deliver voltages on the order of a few thousand volts and currents of several hundred amperes. Another limiting factor for Gmax is gradient heating: with increasing current through the gradient coil, the windings heat up to levels at which the gradient coil could be destroyed. Therefore, to remove the heat from the gradient tube, pipes for water cooling are integrated in the gradient coils. The maximum slew rate Smax is the ratio of Gmax over the shortest time required to switch on the gradient (the rise time). When the current in the gradient coil is increased during gradient switching, a voltage is induced in the coil that, according to Lenz's law, opposes the change. It thus counteracts the switching process, and the rise time cannot be made infinitely short. During MR imaging, however, very short rise times (i.e., high slew rates) are desirable, as these times only prolong the imaging process. Clinical MRI systems have slew rates on the order of 100-200 mT/m/ms. If the gradient coil is connected to a capacitance via a fast switch, very short rise times can be achieved, as the inductance of the gradient coil and the capacitance form a resonance circuit. Such a resonant gradient system has the disadvantage that the characteristic frequency of the resonance circuit determines the possible rise times; additionally, gradients can only be switched on after the capacitances have been charged. Resonant gradient systems have nevertheless been successfully applied to EPI studies, in which the sinusoidal gradient waveforms are beneficial, and multistage resonant systems have been utilized to approximate trapezoidal gradient waveforms (Harvey). When the gradient is switched on, the maximal rate of field change is observed at the ends of the gradient coil (i.e., at FOVmax/2): dB/dt = Smax · FOVmax/2.
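A small worked example of these last two relations (the values are illustrative assumptions, not manufacturer specifications):

    def rise_time_us(g_max_mt_per_m: float, slew_t_per_m_s: float) -> float:
        """Shortest time to ramp the gradient from 0 to g_max."""
        return (g_max_mt_per_m * 1e-3) / slew_t_per_m_s * 1e6

    def max_db_dt(slew_t_per_m_s: float, fov_max_m: float) -> float:
        """Peak rate of field change at the coil ends: dB/dt = S_max * FOV_max / 2."""
        return slew_t_per_m_s * fov_max_m / 2.0

    # Assumed system: 40 mT/m maximum gradient, 200 T/m/s (= 200 mT/m/ms) slew rate, 0.5 m FOV
    print(f"rise time: {rise_time_us(40.0, 200.0):.0f} us")                # 200 us
    print(f"max dB/dt: {max_db_dt(200.0, 0.5):.0f} T/s at the coil ends")  # 50 T/s

It is this dB/dt quantity, not the gradient strength itself, that determines the physiologic stimulation effects discussed next.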
a changing magnetic field induces currents in electrically conducting structures in its vicinity - outside the gradient coil this structure is given by the cryostat, and on the inside the patient can act as a conductor. to avoid these parasitic currents (eddy currents) in the cryostat, which in turn create magnetic fields counteracting the gradients, often a second, outer gradient structure is integrated in the gradient tube. the inner and outer gradient coils are designed so that their combined gradient field vanishes everywhere outside the gradient coil, whereas the desired gradient amplitudes are realized on the inside. this technique is called active shielding and is conceptually similar to the active shielding of superconducting magnets (mansfield and chapman; harvey). gradient-induced currents in the human body pose a more severe problem, as these currents can potentially lead to painful peripheral nerve stimulation or, at higher amplitudes, to cardiac stimulation (mansfield and harvey; schaefer; liu et al.). these physiologic effects depend not only on the amplitude, but also on the frequency of the field change. for clinical mr systems, different theoretical models have been established to determine the threshold for peripheral nerve stimulation. to make the best use of the available gradient system, some fast pulse sequences (e.g., for contrast-enhanced mra or epi) operate very close to these threshold values. as individuals are more or less susceptible to peripheral nerve stimulation, for some patients the individual threshold might be exceeded, and they experience a tickling sensation during fast mr imaging. this physiologic effect currently prohibits the use of stronger gradient systems. since the field change is lower at shorter distances from the iso-center, peripheral nerve stimulation can be avoided if shorter gradient systems are used. unfortunately, a shorter gradient system only covers a limited fov, and the anatomical coverage is compromised. to overcome this limitation, a combined gradient system with a shorter, more powerful inner coil and a longer, less intense outer coil has been proposed (twin gradients) (harvey). such a system can be used, e.g., to rapidly image the beating heart with the small coil, or to acquire image data from the surrounding anatomy at lower frame rates. when the gradient system is mounted in the mr magnet, strong mechanical forces act on the gradient tube, which are proportional to the gradient current. these forces are generated by the interaction of the gradient field with the static magnetic field and thus increase with b0. the permanent gradient switching creates time-varying forces that lead to acoustic noise. several techniques have been proposed to reduce noise generation, which can otherwise exceed dangerous sound pressure levels of 100 db. the wire paths in the coil can be designed in such a way that the forces are locally balanced, the gradient tube can be mechanically stabilized, the gradients can be integrated in a vacuum chamber to prevent sound propagation in air, or the gradient system can be mounted externally to reduce acoustic coupling to the cryostat (pianissimo gradient, toshiba). another possibility to reduce acoustic noise is to limit the slew rates in the pulse sequences to lower values than technically possible; in some pulse sequences (e.g., spin-echo sequences), this does not significantly affect the pulse sequence performance, but markedly improves patient comfort.
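the threshold behavior of peripheral nerve stimulation is often described by a hyperbolic strength-duration curve; the sketch below uses such a model with illustrative rheobase and chronaxie values (assumptions, not regulatory limits) to compare the peripheral db/dt = smax · fovmax/2 of two hypothetical gradient settings against the threshold:

# Sketch: an empirical strength-duration model for peripheral nerve
# stimulation (PNS). The tolerable dB/dt rises as the rise time shortens.
# Rheobase and chronaxie values below are illustrative assumptions only.

RHEOBASE = 20.0    # asymptotic dB/dt limit for long rise times [T/s] (assumed)
CHRONAXIE = 0.36   # strength-duration time constant [ms] (assumed)

def dbdt_threshold(rise_time_ms: float) -> float:
    """Hyperbolic strength-duration curve: PNS threshold in T/s."""
    return RHEOBASE * (1.0 + CHRONAXIE / rise_time_ms)

def pns_check(slew_mT_m_ms: float, fov_max_m: float, rise_time_ms: float):
    """Compare dB/dt at the coil end (smax * fovmax / 2) with the threshold."""
    dbdt = slew_mT_m_ms * fov_max_m / 2.0    # 1 mT/(m*ms) * m = 1 T/s
    limit = dbdt_threshold(rise_time_ms)
    status = "OK" if dbdt < limit else "may stimulate"
    print(f"dB/dt = {dbdt:5.1f} T/s, threshold = {limit:5.1f} T/s -> {status}")

pns_check(slew_mT_m_ms=100.0, fov_max_m=0.5, rise_time_ms=0.3)
pns_check(slew_mT_m_ms=250.0, fov_max_m=0.5, rise_time_ms=0.2)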
shimming is a procedure to make the static magnetic field in the mr system as homogeneous as possible. inhomogeneities of the magnetic field that arise during the manufacturing of the magnet structure can be compensated with small magnetic plates (passive shim). after a localized measurement of the initial magnetic field, the position of the plates is calculated, and the plates are placed in the magnet. this procedure is repeated until the desired homogeneity of the field is achieved (e.g., a few ppm over a spherical imaging volume). during mr imaging, objects are present in the static magnetic field that distort the homogeneous static field. field distortion is caused by susceptibility differences at the tissue interfaces and is thus specific for each patient. to compensate these field distortions at least locally, adjustable magnetic fields are required (active shim). if the field distortion is linear in space, then the gradient coils can be used for compensation. for higher-order field variations, additional shim coils are required; shim coils up to fifth order are present in typical mr systems. higher-order shimming is particularly important for mr spectroscopy, where the field homogeneity directly affects the spectral line width. to optimize the shim currents, an interactive measurement process (the shim) is started after the patient is positioned in the magnet. during active shimming, the field homogeneity is measured (e.g., using localized mr spectroscopy or a field-mapping technique), and the currents are then adjusted to improve the field homogeneity (webb and macovski). the radiofrequency (rf) system of an mr scanner is used both to create the transverse magnetization via resonant excitation and to acquire the mr signals (oppelt; vlaardingerbroek et al.; chen and hoult). the rf system consists of a transmit chain and a receive chain, whose details are described in the following. the mr signals acquired by the rf coils of the mr system are typically very weak. to optimally detect these low signals, any other electromagnetic signals (e.g., radio waves) must be suppressed. therefore, the mr system is placed in a radiofrequency cabin (also called a faraday cage), which dampens rf signals at the resonance frequency, typically by about 100 db or more. in low-field mr tomographs, the rf screening is sometimes realized as a wire mesh that is integrated in the mr system. this has the advantage that rf-emitting equipment such as television screens can be placed very close to the mr unit. at larger magnet dimensions, these local screens are often not suitable. here, the whole mr room is designed as an rf cabin, and the screening material is integrated into the walls, doors, and windows. for screening, often copper sheets are used, which are glued to the wall panels, or the cabin consists completely of steel plates. to be able to transmit signals to and receive signals from the rf cabin, openings are integrated in the cabin. in general, one distinguishes between so-called filter plates, which contain electronic filters, and open waveguides. waveguides are realized as open tubes with a certain length-to-diameter ratio, which depends on the wavelength of the rf frequency. waveguides are used to deliver anesthesia gases to the rf cabin and to guide the quench tube out of the shielded room. at the beginning of the transmit chain, the rf transmitter is found, which consists of a synthesizer with high frequency stability and an rf power amplifier.
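the active-shimming step described above amounts to a linear least-squares problem: a measured field map is decomposed into spatial basis functions, and the negated coefficients give the corrective coil settings. a minimal sketch with a simulated field map and an assumed basis (linear terms plus z²):

import numpy as np

# Sketch: active shimming as linear least squares. Field values, noise level,
# and the basis choice are illustrative assumptions.

rng = np.random.default_rng(0)
n = 200
x, y, z = rng.uniform(-0.1, 0.1, size=(3, n))      # sample points [m]

# Simulated inhomogeneity in microtesla: linear terms + a z^2 term + noise
b_map = 40.0 * x - 25.0 * y + 300.0 * z**2 + rng.normal(0, 0.01, n)

# Basis: gradient coils handle x, y, z; a dedicated shim coil handles z^2
A = np.column_stack([x, y, z, z**2])
coeffs, *_ = np.linalg.lstsq(A, b_map, rcond=None)

residual = b_map - A @ coeffs                      # field after correction
print("fitted terms:", np.round(coeffs, 2))
print("peak-to-peak before: %.3f uT, after: %.3f uT"
      % (np.ptp(b_map), np.ptp(residual)))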
the low-power synthesizer oscillates at the larmor frequency. its output signal is modulated by a digitally controlled pulse shaper to form the rf pulse, which is then amplified by the power amplifier. for typical clinical mr systems, the transmitter needs to provide a peak power output at the larmor frequency of several kilowatts and more. besides high peak power, the rf transmitter should also allow for a high time-averaged power output, as several pulse sequences such as fast spin-echo sequences require rf pulses at short repetition times. the rf power is then transferred into the rf cabin via a shielded cable and is delivered to the transmit rf coil. to guarantee safe operation of the transmitter and to limit the rf power to values below the regulatory constraints for the specific absorption rate (sar), directional couplers are integrated in the transmission line. these couplers measure the rf power sent to the rf coil as well as the reflected power. high power reflection is an indicator of a malfunction of the connected coil, which could endanger the patient. if the reflected power exceeds a given threshold (a certain fraction of the forward power), the transmitter is switched off, as the rf amplifier could otherwise be damaged by the reflected rf power. to couple the rf power of the rf transmitter to the human body, an rf antenna is required, the so-called rf coil. before mr imaging starts, the coil is tuned to the resonance frequency of the mr system (rf tuning). simultaneously, the properties of the connecting circuitry are dynamically changed to match the resistance of the coil loaded with the imaging object (loaded coil) to the resistance of the transmit cable (rf matching). once the coil is tuned and matched, the transmitter is adjusted. during this procedure, the mr system determines the transmitter voltage required to create a certain flip angle. for a given reference rf pulse shape sref(t), the transmitter voltage uref is varied until the desired reference flip angle αref is realized. during the subsequent imaging experiments, use is made of the fact that the flip angle is linearly proportional to the (known) integral over the rf pulse shape, so that the required voltages can be computed from the reference values by linear scaling. radiofrequency coils are categorized into transmit (tx) coils, receive (rx) coils, and transmit/receive (txrx) coils. tx coils are only used to expose the imaging object to an rf b1 field during rf excitation, whereas rx coils detect the weak echo signal emitted from the human body - only if a coil performs both tasks is it called a txrx coil. a typical example of a txrx coil is the body coil integrated into most superconducting mr systems; in some modern mr systems, however, it is used as a tx coil only, due to its suboptimal receive characteristics. rx-only rf coils are the typical local coils found in mr systems that possess a (global) body coil, and local txrx coils are used in all other mr systems without a body coil (ultra-high field, dedicated interventional systems, open-configuration low field). during signal reception, the oscillating magnetization in the human body induces a voltage in the rf coil. for optimal detection of this weak signal, the rf coil should be placed as close to the imaging volume as possible. for this reason, optimized imaging coils exist for nearly every part of the human body. the largest coil of an mr system is typically the body coil (if present), which is often integrated in the magnet cover.
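the transmitter adjustment described above reduces to one linear scaling; a small sketch, with assumed pulse shapes and an assumed calibration result, shows how the voltage for any pulse and flip angle follows from the reference measurement:

import numpy as np

# Sketch: flip angle ~ voltage * integral(|s(t)|), so once a reference voltage
# u_ref producing alpha_ref is known, any other pulse scales linearly. Pulse
# shapes and the calibration result are illustrative assumptions.

def pulse_integral(shape: np.ndarray, dt: float) -> float:
    """Area under the normalized RF envelope."""
    return float(np.sum(np.abs(shape)) * dt)

dt = 1e-5                               # 10 us raster time (assumed)
t = np.arange(-128, 128) * dt
sinc_pulse = np.sinc(t / 8e-4)          # assumed excitation envelope
rect_pulse = np.ones(64)                # short hard pulse (assumed)

u_ref, alpha_ref = 150.0, 90.0          # assumed calibration result [V, deg]
ref_area = pulse_integral(sinc_pulse, dt)

def required_voltage(shape, dt_shape, alpha_deg):
    """u = u_ref * (alpha / alpha_ref) * (ref_area / pulse_area)."""
    return u_ref * (alpha_deg / alpha_ref) * ref_area / pulse_integral(shape, dt_shape)

print(f"180 deg sinc refocusing pulse: {required_voltage(sinc_pulse, dt, 180):.1f} V")
print(f" 90 deg hard pulse          : {required_voltage(rect_pulse, dt, 90):.1f} V")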
to image the head or the knee, smaller volume resonators are used, where the imaging volume is in the interior of the rf coil. flexible coils exist that can be wrapped around the imaging volume (e.g., the shoulder). small circular surface coils are used to image structures close to the body surface (e.g., the eyes). unfortunately, the sensitivity of these coils decreases rapidly with distance from the coil center, so that they are not suitable for imaging experiments in which a larger volume needs to be covered. during rf transmission, rx coils need to be deactivated, because a tuned and matched rx coil would ideally absorb the transmitted rf power, and a significant amount of the rf energy would be deposited in the coil. to avoid electronic damage, the coil is actively detuned during rf transmission; this is often accomplished by fast electronic switches (e.g., pin diodes), which connect a dedicated detuning circuitry. to combine the high sensitivity of small surface coils with the volume coverage of a large volume resonator, the concept of so-called phased-array coils has been introduced (roemer et al.). a phased-array coil consists of several small coil elements, which are directly connected to individual receiver channels of the mr system. the separate reconstruction of the coil elements is technically demanding, because a full set of receiver electronics (amplifiers, analog-to-digital converters) as well as an individual image reconstruction are required for each coil element. the signals of the individual coil elements are finally combined using a sum-of-squares algorithm, which yields a noise-optimal signal combination. under certain conditions, when snr can be sacrificed, a suboptimal image reconstruction can also be achieved by a direct combination of the coil element signals, which reduces the number of receive channels and shortens the image reconstruction time. to be able to manually trade snr against reconstruction overhead, special electronic mixing circuits have been introduced that allow combining, e.g., three coil elements into a primary, a secondary, and a tertiary signal (total imaging matrix tim, siemens). in a phased-array coil, the coil elements are positioned in such a way that a voltage induced in one element does not couple to the adjacent element - this can be achieved by an overlapping arrangement of the coil elements (geometric decoupling). phased-array coils with several dozen elements have been realized; typically, however, a smaller number of elements is used. today, mri systems with many independent receiver channels are available, to which an even larger number of coil elements can be connected simultaneously. the individual coil elements can be selected manually or automatically to achieve the highest possible snr for a given imaging location. phased-array coils are not only required to achieve a high snr. the individual coil elements can also be used to partially encode the spatial location in the image; this procedure is called parallel imaging. the simplest version of parallel imaging uses two adjacent coil elements with non-overlapping sensitivities. if one wants to image the full fov covered by both coils, only fov/2 needs to be encoded, since each coil element is sensitive over this distance only. if the phase-encoding direction is chosen along this direction, the phase fov can be reduced by a factor of 2, which in turn halves the total acquisition time.
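the sum-of-squares combination mentioned above is a one-line operation once the per-element images exist; the following sketch uses a synthetic object and synthetic sensitivity profiles:

import numpy as np

# Sketch: sum-of-squares (SoS) combination of phased-array coil images.
# Image size, sensitivities, and noise level are synthetic assumptions.

rng = np.random.default_rng(1)
ny, nx, n_coils = 64, 64, 4

obj = np.zeros((ny, nx))
obj[16:48, 16:48] = 1.0                                   # toy object

# Each element sees the object through its own smooth sensitivity profile
yy, xx = np.mgrid[0:ny, 0:nx] / ny
sens = np.stack([np.exp(-((yy - cy)**2 + (xx - cx)**2) / 0.15)
                 for cy, cx in [(0, 0), (0, 1), (1, 0), (1, 1)]])

coil_images = sens * obj + 0.02 * (rng.normal(size=(n_coils, ny, nx))
                                   + 1j * rng.normal(size=(n_coils, ny, nx)))

# SoS: a magnitude image without needing explicit sensitivity maps
sos = np.sqrt(np.sum(np.abs(coil_images) ** 2, axis=0))
print("single-element peak:", np.abs(coil_images[0]).max().round(3))
print("SoS peak           :", sos.max().round(3))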
in practice, the sensitivity profiles of the coil elements overlap, and more sophisticated techniques such as smash (sodickson and manning) or sense (pruessmann et al.) are required to reconstruct the image. nevertheless, in parallel imaging the intrinsic spatial encoding present in the different locations of the imaging coils is exploited to reduce the number of phase-encoding steps. [figure: various receive coils on the patient table of a clinical 1.5-t mr system (magnetom avanto, siemens): a head coil combined with a neck coil, multiple flexible anterior phased-array coils with corresponding posterior coils integrated in the patient table, and dedicated surface coils (flexible coil, open-loop coil, small-loop coil) that share a common amplifier interface.] because the phase-encoding direction differs between slice orientations, the optimal phased-array coil for parallel imaging offers coil elements with separated sensitivity profiles in all directions. for mr spectroscopy and non-proton imaging, rf coils with resonance frequencies for the respective nuclei are required. these non-proton coils can also incorporate a coil at the proton resonance frequency to acquire proton images without the need for patient repositioning. double-resonant coils are also important in situations when both frequencies are used at the same time, e.g., in decoupling experiments. for interventional mri, dedicated tracking coils have been developed that are attached to the interventional devices (e.g., catheters or needles). the signal from these coils can be used for high-resolution imaging (e.g., of the vessel wall), but it is often only utilized to determine the position of the device (dumoulin et al.). in these tracking experiments, the signal of the coil is encoded in a single direction using a non-selective rf excitation, and the position of the coil in this direction is extracted after a one-dimensional fourier transform. the mr signal received by the imaging coil is a weak, analog, high-frequency electric signal. to perform an image reconstruction or a spectral analysis, this signal must be amplified, digitized, and demodulated. the signal amplification is typically performed very close to the imaging coil to avoid signal interference from other signal sources. if the rf coil is a txrx coil, the signal passes a transmit-receive switch that separates the transmit from the receive path. the amplified analog signal still contains the high-frequency component at the larmor frequency. to remove this unwanted frequency component, the signal is sent to a demodulator, which receives the information about the current larmor frequency from the synthesizer of the transmitter. after demodulation, the mr signal contains only the low-frequency information imposed by the gradients. finally, the analog voltage is converted into a digital signal using an analog-to-digital converter (adc). in recent years, the conversion into a digital signal has increasingly been performed at an earlier stage in the receiver chain (e.g., before demodulation), with all subsequent steps carried out in the digital domain. at the end of the receiver chain, the digital signal is handed over to the image reconstruction computer.
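for the simplest parallel-imaging case sketched above (two coils, acceleration factor 2), the unfolding reduces to solving a 2 × 2 system per pixel pair; the following toy example with synthetic sensitivities illustrates the principle behind sense-type reconstruction:

import numpy as np

# Sketch: SENSE-type unfolding for R = 2 with two coils. Halving the phase
# FOV superimposes each pixel with its counterpart FOV/2 away; with known
# sensitivities, the per-pixel 2x2 system separates them. All profiles here
# are synthetic assumptions.

ny = 8                                        # full FOV (one column shown)
true = np.arange(1.0, ny + 1)                 # "object" intensities

s1 = np.linspace(1.0, 0.2, ny)                # coil 1 sensitivity along y
s2 = np.linspace(0.2, 1.0, ny)                # coil 2 sensitivity along y

half = ny // 2                                # R = 2: y aliases with y + ny/2
fold1 = s1[:half] * true[:half] + s1[half:] * true[half:]
fold2 = s2[:half] * true[:half] + s2[half:] * true[half:]

unfolded = np.zeros(ny)
for y in range(half):
    A = np.array([[s1[y], s1[y + half]],
                  [s2[y], s2[y + half]]])     # per-pixel sensitivity matrix
    unfolded[y], unfolded[y + half] = np.linalg.solve(A, [fold1[y], fold2[y]])

print("true    :", true)
print("unfolded:", np.round(unfolded, 6))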
the computing system of an mr tomograph is typically realized by a system of distributed computers that are connected by a local high-speed network. the requirements for the computing system are manifold: for the user of the system, it should provide an intuitive interface for measurement control, image processing, archiving, and printing. during sequence execution, the computers should control the hardware (i.e., gradients, rf, adcs, patient monitoring, etc.) in real time. additionally, the computing system must reconstruct and visualize the incoming mr data. since a single computer cannot perform all of these tasks at the same time, typically three computers are used in an mr system: the host computer for interaction with the user, the hardware control computer for real-time sequence control, and the image reconstruction computer for high-speed data reconstruction. the host computer provides the interface between the user and the mr system. through the mr user interface, the whole mr system can be controlled, mr measurements can be started, and the patient monitoring is visualized. at the host computer, the incoming images are sorted into an internal database for viewing, post-processing, and archiving. the internal database stores and sorts the images by patients, studies, and series. the database is often connected to the picture archiving and communication system (pacs) of the hospital, from where it retrieves the patient information to maintain a unique patient registry. a 256 × 256 mr image at 16 bits per pixel requires about 128 kb of storage space, and for each patient investigation several hundred to a few thousand images are acquired. on an average working day, a dozen or more patients can be examined. the data of all of these patients need to be stored in the database, so that a storage volume of several gigabytes per day should be provided. with increasing matrix sizes and image acquisition rates, these numbers can easily be multiplied several-fold. the host computer is also used to transfer the acquired data to archiving media such as magneto-optical disks (mod), tapes, compact disks (cd), digital versatile disks (dvd), or external computer archives (typically the pacs). data transfer is increasingly accomplished using the image standard dicom (digital imaging and communications in medicine), which regulates not only the image data format, but also the transfer protocols. it is due to this imaging standard that images can be exchanged between systems from different vendors and can be shared between different modalities. for post-processing, typically different software packages are integrated. in mr spectroscopy, software packages for spectral post-processing are available that calculate, e.g., peak integrals automatically. for mr diffusion measurements, the apparent diffusion coefficient can be mapped. with flow-evaluation software, flow velocities and flow volumes can be assessed. to visualize three-dimensional data sets, often multi-planar reformatting tools or projection techniques such as the maximum intensity projection (mip) are used. all of these software packages retrieve the image data from the integrated image database, into which the calculated images are finally stored. dedicated computer monitors are connected to the host computer for image visualization, which fulfill the special requirements for diagnostic imaging equipment.
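the storage estimate can be reproduced with a few lines of arithmetic; all workload figures below are illustrative assumptions:

# Sketch: back-of-the-envelope sizing of the image database, following the
# reasoning in the text. All workload numbers are illustrative assumptions.

matrix = 256                       # image matrix (assumed)
bytes_per_pixel = 2                # 16-bit grey values
kb_per_image = matrix * matrix * bytes_per_pixel / 1024
print(f"one {matrix} x {matrix} image: {kb_per_image:.0f} KB")   # 128 KB

images_per_patient = 1000          # assumed typical examination
patients_per_day = 15              # assumed daily throughput
gb_per_day = kb_per_image * images_per_patient * patients_per_day / 1024**2
print(f"daily storage volume: about {gb_per_day:.1f} GB")

# Doubling the matrix quadruples the volume; higher frame rates multiply further
print(f"at 512 x 512: about {gb_per_day * 4:.0f} GB per day")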
in addition, these screens must not be susceptible to distortions due to the magnetic field; for this reason, liquid crystal monitors based on thin-film transistor (tft) technology are increasingly used. for interventional mr, shielded monitors for in-room image display have been designed, where the monitor is shielded against electromagnetic interference. these monitors can be used within the faraday cage of the mr system without interfering with the image acquisition. the control of the imaging hardware (i.e., the gradients in x, y, and z, the rf sub-system, the receiver, and the patient-monitoring system) requires a computer with a real-time operating system. compared with conventional operating systems, where the instructions are processed in an order and at a time that are influenced by many factors, a real-time operating system ensures that operations are executed on an exactly defined time scale. this real-time execution is necessary to maintain, e.g., the phase coherence during spin-echo mri or to ensure that a given steady state is established during balanced ssfp imaging. during sequence execution, the different instructions for the hardware are typically sent by the control program to digital signal processors (dsp) that control the individual units. thus, new instructions can be prepared by the control program, whereas the actual execution is controlled close to the individual hardware. to ensure that enough hardware instructions are available, many time steps are computed in advance during sequence execution. for real-time pulse sequences, this advance calculation needs to be minimized to be able to interactively change sequence parameters such as the slice position (controlled by the rf frequency) or orientation (controlled by the gradient rotation matrix). in real-time sequences, the information about the current imaging parameters is thus retrieved not only once at the beginning of the scan, but continuously during the whole imaging experiment. the reconstruction of the data arriving at the adcs is performed by the image-reconstruction computer. the amount of data this computer needs to process can be estimated as follows: during high-speed data acquisition, several hundred raw data points of several bytes each arrive per imaging coil at time intervals of a few milliseconds, so that with many rx coils data rates of tens of mb/s result. these incoming data need to be rearranged, corrected, fourier transformed, combined, and corrected for geometric distortion before the final image is sent to the host computer. to perform this task, today multiprocessor cpus are used to carry out some of these steps in parallel. in particular, the image reconstruction for multiple coils lends itself naturally to parallelization, since each of the coils is independent of the others. additionally, some manufacturers include simple post-processing steps in the standard image reconstruction. since the reconstruction computer does not provide a direct user interface, these reconstruction steps need to be designed in such a way that no user interaction is necessary. this is the case for the calculation of activation maps in fmri, for mip calculations under standard views in mr angiography, or for the calculation of the arrival time of a contrast agent bolus in perfusion studies. at the end of the image reconstruction, the image data are transferred to the host computer via the internal computer network. special mr imaging techniques require additional mr components that are not necessarily available at every mr scanner.
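the data-rate estimate for the reconstruction computer is equally simple arithmetic; readout length, word size, coil count, and tr below are assumed values:

# Sketch: estimating the raw data rate the reconstruction computer must
# sustain. All parameters are illustrative assumptions.

points_per_readout = 512        # complex samples per line (assumed)
bytes_per_point = 8             # complex pair of 32-bit values
n_coils = 32                    # receive channels (assumed)
tr_s = 0.005                    # repetition time of a fast sequence: 5 ms

bytes_per_tr = points_per_readout * bytes_per_point * n_coils
rate_mb_s = bytes_per_tr / tr_s / 1e6
print(f"{bytes_per_tr / 1024:.0f} KB per TR -> {rate_mb_s:.0f} MB/s sustained")

# Per-coil FFTs are independent, so the load parallelizes naturally:
print(f"per reconstruction worker (one coil): {rate_mb_s / n_coils:.1f} MB/s")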
these components often monitor certain physiologic signals such as the electrical activity of the heart (electrocardiogram, ecg) or breathing motion. typically, the measured physiologic signals are not used to assess the health status of the patient but to synchronize the image acquisition with the organ motion, since heart and breathing motion can cause significant artifacts during abdominal imaging. synchronization of the image acquisition is performed with either prospective or retrospective gating. with prospective gating (or triggering), the imaging is started with the arrival of a certain physiologic signal (e.g., the r wave in the ecg). therefore, the physiologic signal is post-processed (e.g., thresholding and low-pass filtering) to create a trigger signal when the physiologic condition is present. with retrospective gating, the measurement is not interrupted; data are acquired continuously, and for each measured data set the physiologic state is stored with the data (e.g., the time elapsed since the last r wave). during image reconstruction, the measured data are sorted in such a way that images are formed from data with similar physiologic signals (e.g., diastolic measurements). the advantage of retrospective over prospective data acquisition is the continuous measurement without gaps that could lead to artifacts in steady-state pulse sequences. the post-processing effort for retrospectively acquired data is higher, because the data need to be analyzed and sorted before image reconstruction. additionally, on average more data need to be acquired than with prospective triggering to ensure that at least one data set is present for each physiologic condition (over-sampling). to measure the ecg in the mr system, mr-compatible electrodes made of silver-silver chloride (ag/agcl) are used. the measurement of the ecg in an mr system is difficult, because the switching of the gradients can induce voltages in the ecg cables that completely mask the ecg signal. this effect can be minimized if short and loopless ecg cables are utilized. short ecg cables are additionally advantageous, since long cables with loose contact to the skin can cause patient burns induced by the interaction with the rf field during rf excitation (kugel et al.). to reduce this potential danger to a minimum, ecg systems have been developed that amplify the ecg signal close to the electrodes and transmit it to the mr system either via optical cables (felblinger et al.) or as an rf signal at a frequency different from the larmor frequency. with this technology, ecg signals can be acquired even during echo-planar imaging, when gradients are permanently switched on and off (ives et al.). it should be noted that the ecg signal in the mr system differs significantly from the signal outside the magnet. the electrically conducting blood flows at different velocities over the cardiac cycle. within the magnetic field, the blood flow induces velocity-dependent electric fields (hall effect) across the blood vessels, which in turn change the electric potentials measured at the ecg electrodes. typically, the t wave of the ecg is augmented, an effect that is more pronounced at higher field strengths (kangarlu and robitaille). for this reason, the ecg acquired in the mr system should not be regarded as of diagnostic quality. pulse oximeters measure the absorption of a red and an infrared light beam that is sent through perfused tissue (e.g., a finger).
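retrospective gating is essentially a binning operation on time-stamped readouts; the sketch below simulates time stamps relative to the r wave and sorts them into cardiac-phase bins:

import numpy as np

# Sketch: retrospective cardiac gating. Each readout carries the time since
# the last R wave; readouts are then sorted into cardiac-phase bins before
# reconstruction. The acquisition timing is a synthetic assumption.

rng = np.random.default_rng(2)
n_readouts, n_phases = 4000, 20

# Per-beat cycle length with a little physiologic variability [s]
rr = 0.8 * (1 + 0.05 * rng.normal(size=n_readouts))
t_since_r = rng.uniform(0, 1, n_readouts) * rr     # stored with each readout

# Normalize to the individual cycle length and bin into cardiac phases
phase = t_since_r / rr
bins = np.minimum((phase * n_phases).astype(int), n_phases - 1)

counts = np.bincount(bins, minlength=n_phases)
print("readouts per cardiac-phase bin:", counts)
print("worst-filled bin:", counts.min(), "-> over-sampling guards against gaps")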
the absorption is proportional to the oxygen content, so that these devices can determine the partial oxygen pressure (po2). additionally, the pulsation of the blood leads to a pulse-related variation of the transmitted light signal, which is used in mr systems to derive a pulse-related trigger signal (shellock et al.). since the pulse wave arrives at the periphery with a significant delay after the onset of systole, it is difficult to use the po2 signal for triggering in systolic mr imaging. pulse oximeters consist solely of non-magnetic and non-conducting optical elements, so that they are not susceptible to any interference with the gradient or rf activity. [figure: whole-body imaging with array coils covering the patient from head to toe (excelart vantage, toshiba); since not all coils are in the imaging volume of the mr system at the same time, a lower number of receiver channels is sufficient for signal reception.] to detect breathing motion, several mechanical devices such as breathing belts or cushions have been introduced. essentially, all these systems are air filled and change their internal pressure as a function of the breathing cycle when they are attached to the thorax of the patient. the pressure is continuously monitored and used as an indicator of the breathing status. as with the pulse oximeters, these systems are also free of any electrically conducting elements, so that no rf heating is expected. in clinical practice, however, breathing triggering can pose a problem in long-lasting acquisitions, since patients start to relax over time, and the initial breathing pattern is not reproduced. an alternative approach to the measurement of the breathing cycle is offered by mr itself: if a single image line oriented in the head-foot direction through the thorax is excited (using, e.g., a 90° and a 180° slice that intersect along the desired line), then the signal of this line has high contrast at the liver-lung interface. this diaphragm position can be detected automatically and used to extract the relative position in the breathing cycle. this technique is called a navigator echo (ehman and felmlee), since an additional echo for navigation needs to be inserted into the pulse sequence. similar approaches using low-resolution two- or three-dimensional imaging can be used to correct for patient motion in long-lasting image acquisitions such as fmri (welch et al.). here, the change in position is determined and used to realign the imaging slices (prospective motion correction). for neurofunctional studies, electroencephalogram (eeg) systems have been developed that can be operated in the mr tomograph (muri et al.). compared with the ecg, the voltages induced during brain activity are roughly two orders of magnitude smaller in eeg recordings, which poses a significant detection problem (goldman et al.; sijbers et al.). blood pulsation, patient motion, as well as induced voltages during gradient and rf activity can cause spurious signals in the eeg leads, which obscure the true eeg signal. to remove the imaging-related artifacts, dynamic filtering can be used, which removes all signal contributions associated with the basic frequencies of the mr system. a large variety of mr systems with different magnet types, coil configurations, and gradient sets is currently available for diagnostic and interventional mr imaging.
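the navigator-echo evaluation reduces to an edge detection on a one-dimensional profile; the sketch below simulates a liver-lung profile and locates the diaphragm as the steepest signal drop:

import numpy as np

# Sketch: extracting the diaphragm position from a navigator echo. The 1D
# head-foot profile is bright below the diaphragm (liver) and dark above
# (lung); the steepest intensity drop tracks the breathing cycle. The
# profile itself is simulated here.

def navigator_profile(diaphragm_pos: float, n: int = 128) -> np.ndarray:
    """Simulated 1D magnitude profile with a sharp liver-lung transition."""
    zpos = np.linspace(0.0, 1.0, n)
    profile = 1.0 / (1.0 + np.exp((zpos - diaphragm_pos) / 0.01))
    return profile + 0.02 * np.random.default_rng(3).normal(size=n)

def detect_edge(profile: np.ndarray) -> int:
    """Index of the steepest signal drop = liver-lung interface."""
    return int(np.argmin(np.diff(profile)))

for true_pos in (0.50, 0.55, 0.60):         # simulated breathing positions
    idx = detect_edge(navigator_profile(true_pos))
    print(f"true diaphragm at {true_pos:.2f} -> detected index {idx} / 128")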
to choose from these systems, the desired imaging applications as well as economic factors need to be considered: a small hospital with few mr patients might want to use a low-field permanent magnet system with low maintenance cost, whereas a university hospital with a diverse patient clientele and high patient throughput is better served by a high-field mr system with state-of-the-art gradient systems. [figure: physiologic monitoring and triggering units. the three electrodes of the ecg system as well as the tube of the breathing sensor are connected to a transceiver that transmits both signals to the patient-monitoring unit of the mr system. the optical pulse sensor is attached to the finger, and the signals are guided via optical fibers to the detection unit. for increased patient safety, the ecg system must be used together with a holder system (not shown), which provides additional distance between the ecg leads and the patient body.] during the pioneering period of mr imaging, expectations were that the high inherent contrast in mr imaging would make the use of contrast agents superfluous. however, increasing use of the modality in the clinical setting has revealed that a number of diagnostic questions require the application of a contrast agent. similar to other imaging modalities, the use of contrast agents in mr imaging aims at increasing sensitivity and specificity and, thereby, the diagnostic accuracy. the main contrast parameters in mr imaging are proton density, relaxation times, and magnetic susceptibility (the ability of a material or substance to become magnetized by an external magnetic field). mr imaging contrast agents act upon relaxation times and susceptibility. most of them are either para- or superparamagnetic. the most efficient elements for use as mr imaging contrast agents are gadolinium (gd), manganese (mn), dysprosium (dy), and iron (fe). the magnetic field produced by an electron is much stronger than that produced by a proton. however, in most substances the electrons are paired, resulting in a weak net magnetic field. gd, with its seven unpaired electrons, possesses the highest ability to alter the relaxation time of adjacent protons (relaxivity). for mr contrast agents, a differentiation between positive and negative agents has to be made. the paramagnetic contrast agents gd and mn have a similar effect on t1 and t2 and are classified as positive agents. since the t1 of tissues is much longer than the t2, the predominant effect of these contrast agents at low concentrations is that of t1 shortening. thus, tissues that take up gd- or mn-based agents become bright in t1-weighted sequences. on the other hand, negative contrast agents influence signal intensity by shortening t2 and t2*. superparamagnetic agents belong to this group and produce inhomogeneities of the local magnetic field. t2 is reduced due to the diffusion of water through these field gradients. magnetite, fe3o4, is such a superparamagnetic particle. coated with inert material (e.g., dextranes, starch), it can be used for oral or intravenous applications. in addition to the classification into positive and negative agents, mr contrast agents can be differentiated according to their target tissue. the targeting of an agent is determined by the pharmaceutical profile of the substance.
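the distinction between positive and negative agents follows directly from the standard fast-exchange relaxivity relations 1/t1 = 1/t1,0 + r1·c and 1/t2 = 1/t2,0 + r2·c; the sketch below, with assumed tissue values and gd-like relaxivities, shows why the relative t1 effect dominates at low concentration:

# Sketch: action of a T1-shortening contrast agent via the relaxivity
# relations. Tissue T1/T2 and the relaxivities are illustrative assumptions.

T1_0 = 1.0      # native tissue T1 [s] (assumed)
T2_0 = 0.1      # native tissue T2 [s] (assumed)
r1 = 4.0        # longitudinal relaxivity [l/(mmol*s)] (assumed, gd-like)
r2 = 5.0        # transverse relaxivity [l/(mmol*s)] (assumed)

for c in (0.0, 0.1, 0.5, 1.0):                  # concentration [mmol/l]
    t1 = 1.0 / (1.0 / T1_0 + r1 * c)
    t2 = 1.0 / (1.0 / T2_0 + r2 * c)
    print(f"C = {c:4.1f} mmol/l: T1 = {t1*1e3:6.1f} ms, T2 = {t2*1e3:5.1f} ms")

# Because T1 (1000 ms) is much longer than T2 (100 ms), the relative T1
# shortening dominates at low concentrations -- hence the bright signal on
# T1-weighted images.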
in the clinical environment, we currently differentiate three classes of agents:

• unspecific extracellular fluid space agents
• blood-pool and intravascular agents
• targeted and organ-specific agents

unspecific extracellular fluid space (ecf) agents are low-molecular-weight paramagnetic contrast agents that distribute into the intravascular and extracellular fluid space of the body. their contrast effect is caused by the central metal ion. all approved ecf agents contain a gd ion, which carries seven unpaired electrons. because gd itself is toxic, the ion is bound in highly stable complexes. the different complexes and the physicochemical properties of all clinically used agents are listed in the accompanying table. the agents are not metabolized and are excreted in unchanged form via the kidneys. bound in this way, they form low-molecular-weight, water-soluble contrast agents. gadopentetate dimeglumine (magnevist, bayer schering pharma, berlin, germany) and gadoterate meglumine (dotarem, laboratoires guerbet, aulnay-sous-bois, france) are ionic high-osmolality agents, whereas gadodiamide (omniscan, ge healthcare, buckinghamshire, uk) and gadoteridol (prohance, bracco imaging, milan, italy) are non-ionic low-osmolality agents. due to the low total amount of contrast agent usually applied in mr imaging, no difference in tolerance between the two classes could be demonstrated (oudkerk et al.; shellock). a substantial fraction of an ecf agent such as gadopentetate dimeglumine is cleared from the vascular space into the extravascular compartment already on the initial passage through the capillaries. two agents in the group of ecf agents have to be mentioned separately. gadobenate dimeglumine (multihance, bracco imaging) is an agent with a weak protein binding in human plasma. the bound fraction of the agent has a higher relaxivity than the unbound fraction. in sum, the relaxivity of gadobenate dimeglumine in plasma is markedly higher than that of gadopentetate dimeglumine, and the effect of the higher relaxivity is largest at low field strengths. the concentration of the contrast agent is 0.5 mol/l. gadobenate dimeglumine was primarily developed as a liver-specific mr imaging agent and is currently approved both for the detection of focal liver lesions and for mr angiography. most of the injected dose of gadobenate is excreted unchanged in urine within hours, although a small fraction of the injected dose is eliminated through the bile and recovered in the feces (spinazzi et al.). the second particular ecf agent, gadobutrol (gadovist, bayer schering pharma), is approved in a higher concentration (1 mol/l) than all other available mr imaging contrast agents. in addition, gadobutrol has a higher relaxivity than most extracellular 0.5-molar contrast agents on the market. the higher concentration has proven to be particularly useful for mr perfusion studies and mr angiography (tombach et al.). blood-pool agents stay within the intravascular space with no or only slow physiologic extravasation. these agents can be used for first-pass imaging and delayed blood-pool-phase imaging. the prolonged imaging window allows a more favorable image resolution and signal-to-noise ratio. the absence of early extravasation also improves the contrast-to-noise ratio. the pharmacokinetic properties of blood-pool agents are expected to be well suited to mr angiography and coronary angiography, perfusion imaging, and permeability imaging (detection of ischemia and tumor grading).
currently, three types of blood-pool agents are being developed:

• gd compounds with a strong but reversible affinity to human proteins such as albumin
• macromolecular-bound gd complexes
• ultra-small or very small superparamagnetic particles of iron oxide (uspio and vsop)

there are important differences between the three groups regarding pharmacokinetics in the body, i.e., distribution and elimination. gd compounds with a strong but reversible affinity to human proteins such as albumin exhibit a prolonged plasma elimination half-life and increased relaxivity. elimination occurs by glomerular filtration of the unbound fraction. given the equilibrium between the bound and unbound fraction in the presence of albumin, the excreted molecules are immediately replaced by dissociation of agent from the agent-albumin complex. two agents with affinity to albumin were developed and tested in clinical trials: gadofosveset (vasovist, bayer schering pharma), which is largely bound in human plasma (lauffer et al.), and gadocoletic acid (b22956/1, bracco imaging), with a somewhat higher protein binding in humans (cavagna et al.; la noce et al.). currently, gadofosveset is the only blood-pool agent approved (for mra in europe). all other contrast agents with blood-pool characteristics are in clinical or earlier-phase development. gadofosveset is a stable gd diethylenetriaminepentaacetic acid (gd-dtpa) chelate substituted with a diphenylcyclohexylphosphate group. plasma concentrations measured several hours after bolus injection remain a substantial fraction of the concentration reached minutes after injection, and relative to the reported clearance values of the non-protein-bound mri contrast agents, the clearance of gadofosveset is markedly slower. gadofosveset is provided in a concentration of 0.25 mol/l, and a dose of 0.03 mmol/kg body weight is recommended for mra (perrault et al.). as a further benefit, gd compounds with a strong but reversible affinity to human proteins provide a long-lasting blood-pool effect even when small amounts of the substance leak out of the vasculature. the blood-pool effect persists because albumin remains highly concentrated in plasma, while it shows a two- to three-times lower concentration in the extravascular space. thus, even when vasovist leaks from the vasculature, the receptor-induced magnetization enhancement (rime) effect within the vascular spaces ensures that the signal enhancement in the blood dominates the mri contrast. in rabbits, enhancement with gadofosveset persisted at relatively constant levels from a few minutes up to several hours after injection, whereas the enhancement of ecf agents had virtually disappeared within minutes (lauffer et al.). the second blood-pool agent with binding to human serum albumin, gadocoletic acid, has been tested in coronary mra (paetsch et al.). compared with gadofosveset, the slightly higher percentage of bound agent may result in a lower percentage of extravasation and a further decreased elimination rate. macromolecular gd-based blood-pool agents are large molecules with sizes in the kilodalton range. they are eliminated by glomerular filtration, and due to their large size they do not extravasate into the interstitial space. the two agents used in clinical trials were gadomer (schering) and p792 (laboratoires guerbet, aulnay-sous-bois). gadomer contains multiple gd ions per molecule.
p792 is a monodisperse, monogadolinated macromolecular compound based on a gadoterate core (port et al.). four hydrophilic arms account for its intravascular properties. in a preclinical study, p792 allowed the acquisition of high-quality mr angiograms. image quality was rated as superior for p792 in the post-bolus phase images compared with ecf agents. the intravascular properties lead to an excellent signal in the vasculature with limited background enhancement (ruehm et al.). the first clinical use of uspio was in specific parenchymal organ imaging, exploiting the incorporation of uspio/spio into cells of the reticuloendothelial system of the liver, bone marrow, spleen, or lymphatic tissue. these particles produce a strong augmentation of the local magnetic field. the predominant shortening of t2 and t2* produces a loss of signal intensity on mr images. the agents that have been developed as blood-pool agents provide different characteristics, with a predominating t1 effect and a prolonged intravascular residence time due to the small size of the particles. nc100150 (clariscan, ge healthcare) was the first uspio tested for mra (taylor et al.; weishaupt et al.). it is a strictly intravascular agent with an oxidized starch coating and a small overall particle diameter; it has a plasma half-life of many minutes and has been shown to reduce the t1 of blood substantially (wagenseil). another iron oxide particle mr contrast agent in the phase of clinical development is vsop-c184 (ferropharm, teltow, germany). it is classified as a vsop, with a core diameter of a few nanometers and a total diameter below 10 nm, and is coated with citrate. its t1 and t2 relaxivities in water at clinical field strength, as well as its plasma elimination half-life in rats and pigs, have been reported; the short half-life still keeps the t1 of plasma low for the duration of a first-pass examination (wagner et al.). qualitative evaluation of image quality, contrast, and delineation of vessels showed that the results obtained with vsop-c184 were similar to those of gadopentetate dimeglumine at standard doses. vsop-c184 is suitable for first-pass mra and thus, in addition to its blood-pool characteristics, allows for selective visualization of the arteries without interfering venous signal (schorr et al.). another uspio is sh u 555 c (supravist, bayer schering pharma), a formulation of the carboxydextran-coated ferucarbotran (resovist, bayer schering pharma; formerly identified as sh u 555 a) optimized with respect to t1-weighted mr imaging. sh u 555 c has a mean core particle size of a few nanometers and a mean hydrodynamic diameter of a few tens of nanometers in an aqueous environment; its r1 and r2 relaxivities in water have been reported (reimer et al.). the efficacy of mri contrast agents is determined not only by their pharmacokinetic properties (distribution and time dependence of their concentration in the area of interest), but also by their magnetic properties, described by their t1 and t2 relaxivities. for all commercially available mri contrast agents, relaxivities are published and listed in the respective package inserts. however, the field strength most commonly used for relaxation measurements (0.47 t) differs from the field strengths of the currently most frequently used clinical mri instruments (1.5-3 t). rohrer et al. evaluated, in a well-conducted and standardized phantom measurement study, the t1 and t2 relaxivities of all currently commercially available mr contrast agents in water and in blood plasma
at 0.47, 1.5, 3, and 4.7 t, as well as in whole blood at 1.5 t (rohrer et al.). they quantified significant dependencies of the relaxivities on field strength and solvent. protein binding leads to increased field-strength and solvent dependencies and hence to significantly altered t1 relaxivity values at higher magnetic field strengths. mr contrast agents have been in clinical use since the late 1980s, and a wide experience has been reported. severe or acute reactions after single intravenous injection of gd-based ecf agents are rare. in two large multiple-year surveys comprising tens of thousands of examinations each, the reported incidence of acute adverse reactions was well below one percent (li et al.; murphy et al.). the severity of these adverse reactions was classified as mild (the large majority), moderate, or severe (rare). typical non-allergic adverse reactions include nausea, headache, taste perversion, or vomiting, and typical reactions resembling allergy include hives, diffuse erythema, skin irritation, or respiratory symptoms. the incidence of severe anaphylactoid reactions is very low (de ridder et al.; li et al.; murphy et al.). the reported life-threatening reactions resembling allergy were severe chest tightness, respiratory distress, and periorbital edema. known risk factors for the development of adverse reactions are prior adverse reactions to iodinated contrast media, prior reactions to a gd-based contrast agent, asthma, and a history of drug/food allergy. concerning liver-specific contrast media, a higher percentage of associated adverse reactions was reported for mangafodipir trisodium and ferumoxides (runge). the recently approved bolus-injectable agent ferucarbotran (resovist, bayer schering pharma) has shown a better tolerance profile during clinical development compared with ferumoxides. even bolus injections caused no cardiovascular side effects, lumbar back pain, or clinically relevant laboratory changes (reimer and balzer). for the two approved gd-based liver-specific agents, gadobenate dimeglumine and gadoxetic acid, far fewer patients have been examined to date. according to the results of the clinical trials conducted for the approval of both agents, they are comparable to gd-based ecf agents in terms of safety (bluemke et al.; halavaara et al.; huppertz et al.). post-marketing surveillance of gadobenate dimeglumine covering a very large number of administered doses revealed a very low overall adverse event incidence, with serious adverse events being rare (kirchin et al.). in the class of blood-pool agents, only gadofosveset (bayer schering pharma) has recently been approved in some european countries. the tolerance of the agent must therefore be estimated on the basis of the clinical trials. based on these data, gadofosveset is well tolerated, and the incidence and profile of undesired side effects is very similar to that of ecf agents (goyen et al.; petersein et al.; rapp et al.). magnetic resonance contrast agents, particularly the gd-based agents, are extremely safe (niendorf et al.) and, at the usually applied diagnostic dosage, lack the nephrotoxicity associated with iodinated contrast media. nevertheless, health care personnel should be aware of the (extremely uncommon) potential for severe anaphylactoid reactions in association with the use of mr contrast media and be prepared should complications arise. nephrogenic systemic fibrosis (nsf) is a rare disease occurring in renal insufficiency that has only been described since 1997.
in 2006, a first report about a potential relationship with the intravenous administration of the gd-based mr contrast medium gadodiamide was published (us food and drug administration). nsf appears to occur in patients with kidney failure, along with high levels of acid in body fluids (a condition known as metabolic acidosis), which is common in patients with kidney failure. the disease is characterized by skin changes that mimic progressive systemic sclerosis, with a predilection for peripheral extremity involvement that can extend to the torso. however, unlike scleroderma, nsf spares the face and lacks the serologic markers of scleroderma. nsf may also result in fibrosis, or scarring, of body organs. the diagnosis of nsf is made by examining a skin sample under a microscope. the risk of nsf in patients with advanced renal insufficiency does not appear to be the same for all gd-based contrast agents, because their distinct physicochemical properties affect their stabilities and thus the release of free gd ions (bundesinstitut für arzneimittel und medizinprodukte [federal institute for drugs and medical devices]). some gd-based contrast media are more likely than others to release free gd3+ through a process called transmetallation with endogenous ions from the body (thomsen et al.). gadodiamide and gadoversetamide differ from other gd-based contrast media in that they are formulated with the largest amount of excess chelate and are more likely to release free gd3+ than are other agents. cyclic molecules offer better protection and binding of gd3+ compared with linear molecules (thomsen et al.), and the linear, non-ionic chelates gadodiamide and gadoversetamide indeed seem to be associated with the highest risk of nsf (broome et al.; sadowski et al.). the recommendations to prevent the development of nsf are nonspecific (us food and drug administration):

• gd-containing contrast agents, especially at high doses, should be used only if clearly necessary in patients with advanced kidney failure (those currently requiring dialysis or with a glomerular filtration rate (gfr) of 30 ml/min or less).
• it may be prudent to institute prompt dialysis in patients with advanced kidney dysfunction who receive a gd-contrast mra, although there are no data to determine the utility of dialysis to prevent or treat nsf in patients with decreased kidney function.

the use of contrast agents in neuroimaging is an accepted standard for the assessment of pathological processes, which utilizes the extravasation of contrast agents through a compromised blood-brain or blood-spinal cord barrier. compared with contrast-enhanced ct, mr imaging with gd-based contrast agents is far more sensitive and depicts even subtle disruptions of the blood-brain barrier caused by a variety of noxious processes such as neoplastic or inflammatory disease and ischemic stress. moreover, mr contrast agents are increasingly used to evaluate brain perfusion in clinical practice for a variety of applications, including tumor characterization, stroke, and dementia. the contrast-enhanced brain perfusion mr examination is based on a magnetic susceptibility contrast phenomenon that occurs owing to the t2- and t2*-relaxation effects of a rapidly intravenously bolus-injected contrast agent. the contrast agents in current use are the standard ecf gd chelates. these extracellular agents show no appreciable differences in their enhancement properties and biologic behavior (akeson et al.;
brugieres et al.; grossman et al.; oudkerk et al.; valk et al.; yuh et al.). they equilibrate rapidly between the intra- and extracellular spaces of soft tissues and enter central nervous system lesions only at sites of a damaged blood-brain barrier. the standard dose for mr imaging of the central nervous system is 0.1 mmol/kg body weight; however, it has been shown that a higher dose of gd chelate-based contrast agents may help reveal more subtle disease states of the blood-brain barrier, regardless of whether they are caused by tumors or by inflammatory lesions (bastianello et al.; haustein et al.; yuh et al.). this raises the question to what extent gd contrast agents with a higher concentration, such as gadobutrol, or agents with a higher relaxivity, such as gadobenate dimeglumine, help to increase the sensitivity and accuracy of lesion detection compared with standard gd chelates. for gadobutrol, no comparative studies against standard gd chelates exist up to now; however, based on smaller cohorts it can be assumed that the higher amount of gd, which can be delivered owing to the higher gd concentration, is of value for lesion detection and characterization (vogl et al.). moreover, in animal experiments the amount of gd in gliomas was higher after injection of gadobutrol than after gadopentetate dimeglumine, although identical doses of gd per kilogram body weight were injected for both contrast agents (le duc et al.). gadobenate dimeglumine provided significantly superior tumor enhancement of intraaxial enhancing primary and secondary brain tumors at a dosage of 0.1 mmol/kg body weight compared with the same dosage of gadopentetate dimeglumine (knopp et al.). similar results were also obtained in comparisons of gadobenate dimeglumine with other contrast agents, as well as in special populations such as pediatric patients (colosimo et al.). the increased contrast enhancement also resulted in an increased number of detected brain metastases. dynamic susceptibility-weighted contrast-enhanced (dsc) mr imaging is increasingly used for the assessment of cerebral perfusion in many different clinical settings, such as ischemic stroke (parsons et al.), neurovascular diseases (doerfler et al.), brain tumors (essig et al.), and neurodegenerative disorders (bozzao et al.). unlike mr angiography, which depicts the blood flow within larger vessels, perfusion-weighted mr techniques are sensitive to perfusion on the level of the capillaries. the technique is based on the intravenous injection of a t2*-relaxing contrast agent and subsequent bolus tracking using a fast susceptibility-weighted imaging sequence. after converting the voxel signal into concentration values, parametric maps of regional cerebral blood volume (rcbv) and blood flow (rcbf) can be calculated by deconvolving the tissue concentration curves with the concentration curve of the feeding artery. the contrast agents used for dynamic susceptibility-weighted mr perfusion are usually standard gd chelates; however, the dosages of gd per kilogram of body weight as well as the value of more highly concentrated agents have been widely discussed. during the first pass of the gd chelate, the high intravascular concentration of gd causes the t2* effects, which can be measured by rapid imaging techniques.
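the core arithmetic of dsc perfusion can be sketched in a few lines: the signal drop during bolus passage is converted to a concentration-time curve via c(t) ∝ -ln(s(t)/s0)/te, and rcbv follows from the area under the tissue curve normalized to the arterial input. all curves below are simulated, and the relaxivity scaling is an arbitrary assumption:

import numpy as np

# Sketch: DSC-MRI signal-to-concentration conversion and rCBV estimate.
# Echo time, bolus shapes, and the T2* scaling factor are assumptions.

TE = 0.03                                   # echo time [s] (assumed)
t = np.arange(0.0, 60.0, 1.0)               # dynamic scans, 1 s apart

def gamma_bolus(t, t0, scale):
    """Gamma-variate-shaped first-pass bolus (a common model)."""
    tt = np.clip(t - t0, 0.0, None)
    return scale * tt**3 * np.exp(-tt / 1.5)

s0 = 100.0                                  # baseline signal
k = 100.0                                   # assumed T2* relaxivity scaling
conc_tissue = 0.002 * gamma_bolus(t, 10.0, 1.0)
conc_artery = 0.020 * gamma_bolus(t, 8.0, 1.0)   # feeding artery

signal_tissue = s0 * np.exp(-TE * k * conc_tissue)   # simulated signal drop
signal_artery = s0 * np.exp(-TE * k * conc_artery)

# Back-calculate concentration (up to the scaling k) from measured signals:
c_tissue = -np.log(signal_tissue / s0) / TE
c_artery = -np.log(signal_artery / s0) / TE

dt = 1.0                                    # sampling interval [s]
rcbv = (c_tissue.sum() * dt) / (c_artery.sum() * dt)
print(f"relative cerebral blood volume (tissue / artery): {rcbv:.3f}")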
the length and the peak concentration of the bolus seem to influence the resulting measured signal, with a highly concentrated small bolus of contrast agent being advantageous for mr brain perfusion imaging (essig et al.; heiland et al.). among the standard gd chelates, no notably different behavior of the available agents has been published up to now. the recommended dose for dsc perfusion mri is in the range of about 0.1-0.2 mmol/kg body weight, with most authors preferring the lower end of this range, because the volume of the bolus becomes too high when higher dosages are applied (bruening et al.). therefore, the use of more highly concentrated contrast agents or agents with higher relaxivity is also interesting for cerebral perfusion mri. again, studies were able to demonstrate the value of the 1 m gadobutrol and of gadobenate dimeglumine. tombach et al. showed that 1 m gadobutrol resulted in a significantly improved quality of the perfusion examination in comparison to 0.5 m gadobutrol at the same dosage in mmol/kg body weight. the results were explained by the sharper, more concentrated bolus, which could be achieved owing to the smaller injection volume. essig et al. directly compared 1 m gadobutrol and 0.5 m gadobenate dimeglumine at 1.5 t with a similar dosage in mmol/kg body weight and found no significant differences between the two agents (essig). the benefit of a double dose was observed only as a trend and was not considered to be of clinical relevance. similar results in a comparison between the two agents were recently also obtained on a 3-t system (thilmann et al.). for both agents, sufficiently high-quality perfusion examinations can be achieved with an acceptable injection volume, which is helpful for their clinical application in daily practice and can be considered superior to standard gd chelates (essig et al.; thilmann et al.; tombach et al.). in a limited number of proof-of-concept studies, uspio were also used in neuroimaging (corot et al.; manninger et al.). the long blood-circulating time and the progressive macrophage uptake in inflammatory tissues of uspios are two properties of major importance for pathologic tissue characterization. in the human carotid artery, uspio accumulation in activated macrophages induced a focal drop in signal intensity compared with unenhanced mri. the uspio signal alterations observed in ischemic areas of stroke patients are probably related to the visualization of inflammatory macrophage recruitment into human brain infarction, since animal experiments in such models demonstrated the internalization of uspio into the macrophages localized in these areas. in brain tumors, uspio particles, which do not pass the ruptured blood-brain barrier at early times post injection, can be used to assess tumoral microvascular heterogeneity. twenty-four hours after injection, when the cellular phase of uspio uptake takes place, the uspio tumoral contrast enhancement was higher in high-grade than in low-grade tumors. several experimental studies and a pilot clinical trial in multiple sclerosis patients have shown that uspio contrast agents can reveal the presence of inflammatory lesions related to multiple sclerosis. the enhancement with uspio does not completely overlap with the gd-chelate enhancement. during the last few years, magnetic resonance angiography (mra) has been established as a non-invasive alternative to conventional x-ray angiography in the diagnosis of arteriosclerotic and other vascular diseases (meaney et al.; meaney).
with the exception of imaging of the intracerebral vessels (gibbs et al.; ozsarlak et al.), contrast-enhanced techniques have proven superior to non-contrast-enhanced techniques such as the time-of-flight (tof mra) or phase-contrast (pc mra) technique (sharafuddin et al.). the main advantages over unenhanced techniques are the possibility to acquire larger volumes, allowing, e.g., demonstration of the carotid artery from its origin to the intracranial portion, shorter acquisition times, and a reduced sensitivity to flow artifacts. contrast-enhanced mr angiography can be performed during the first pass of a contrast agent, preferably in breath-hold technique after rapid bolus injection, or during steady-state conditions after injection of vascular-specific blood-pool agents. most experience has been reported for first-pass mra after injection of ecf contrast agents. the demands on the agent are a strong effect on the signal intensity of blood after injection and the possibility of a fast and compact bolus injection. the most commonly applied group of contrast agents are the 0.5 molar ecf agents. in the last years, two novel ecf agents with innovative properties were used for mra. the first one, the 0.5 m contrast agent gadobenate dimeglumine, offers a higher t1 relaxivity. in studies in which gadobenate dimeglumine was compared at equal dose with other gd-based mr contrast agents without relevant protein binding in plasma, gadobenate dimeglumine has consistently shown significantly better quantitative and qualitative performance (goyen and debatin). even at lower doses compared with gadopentetate dimeglumine, the greater relaxivity of gadobenate dimeglumine provides a higher intravascular signal and signal-to-noise ratio (pediconi et al.). thus, gadobenate dimeglumine can be considered to have a very favorable risk-benefit ratio for mra. the second one, gadobutrol, is available in 1.0 m concentration. in combination with a higher relaxivity compared to other ecf agents, the agent has revealed in quantitative evaluations a significant increase in signal-to-noise and contrast-to-noise ratios in comparison to gadopentetate dimeglumine in pelvic mra and in whole-body mra (goyen et al.). a better delineation of arterial morphology was reported especially for small vessels, but no statistically significant difference in image quality could be seen. two different options for injection have been described: reduction of the injection rate by 50% compared with injection protocols using 0.5 m ecf agents (equimolar dosing), or reduction of the injection time by 50%. equimolar dosing mainly exploits the higher relaxivity potential of gadobutrol. in this case, the injection duration is identical to that of a corresponding protocol using a 0.5 m contrast agent, and a similar bolus geometry and contrast delivery in the roi are obtained (i.e., half the volume of gadobutrol is injected at half the injection rate compared with a corresponding gadopentetate dimeglumine protocol).
hence, well-known protocols can be adopted with good results. the second option keeps the injection speed unchanged in comparison to the 0.5 m agent protocol, resulting in a shortening of the initial bolus duration by a factor of two (fink et al.). the philosophy is to use a very compact, high-relaxivity bolus and to fully exploit the potential of 1.0 m gadobutrol. this approach is particularly recommended in conjunction with very fast acquisition techniques, e.g., time-resolved (often referred to as 4d) mra. although the effective bolus geometry in the respective roi is broadened, depending on individual physiology and mainly influenced by the lung passage, this approach places higher demands on precise bolus timing and is recommended for users with advanced mra experience and ultrafast imaging equipment. in addition, a further approach was reported that reduces the amount of contrast agent by a factor of two in abdominal mra (vosshenrich et al.). here, the injection speed was kept constant in comparison to a 0.5 m agent protocol, resulting in a very short total bolus duration. vosshenrich et al. used an amount of 0.05 mmol/kg body weight. they compared the examinations qualitatively and quantitatively with exams acquired after injection of gadopentetate dimeglumine (0.1 mmol/kg) and concluded that, for mra of the hepatic arteries and the portal veins, gadobutrol can be used at half the dosage recommended for a standard 0.5 m contrast agent. the concept of contrast-enhanced mra based on ecf agents has some limitations. the primary problem is the rapid extravasation of the contrast agents, limiting the acquisition time and therefore the spatial resolution as well as the contrast-to-noise ratio. to improve spatial resolution, it is necessary to prolong the imaging time. intravascular contrast agents are able to overcome these restrictions on spatial resolution. the longer acquisition period can be used to decrease the voxel size, to repeat measurements, or to trigger acquisitions by ecg and/or respiratory gating. the second limitation of currently used mra is the quantification of arterial stenoses, which still seems to be inferior to invasive catheter angiography. the cause is the inferior spatial resolution of mra using ecf agents, for which the increase of spatial resolution is limited by the acquisition time during the first pass (arterial phase). with intravascular contrast agents, a longer data acquisition during the distribution phase is possible. the spatial resolution can be increased to a level similar to catheter angiography, and therefore the accuracy of stenosis quantification is significantly increased. optimally, a blood-pool agent permits a long acquisition window, including first-pass mra as well as the possibility of separate imaging of arteries and veins by timing the injection and data acquisition. gadofosveset, the first mr blood-pool agent approved for clinical use, permits both a high-resolution approach with a long acquisition window and first-pass contrast-enhanced mra (see the figure below). the approval was based on the data of clinical trials in all different types of arterial vessels, including high-flow vessels with large diameter (e.g., the pelvic arteries), low-flow vessels (e.g., foot arteries), and high-flow vessels with a small diameter (e.g., the renal arteries).

fig.: whole-body mra of a healthy volunteer after bolus injection of gadofosveset (0.03 mmol/kg body weight). first-pass and steady-state acquisitions, acquired immediately (a) and several minutes (b) after injection of the contrast agent; t1-weighted 3d gradient-recalled echo sequence. first-pass imaging depicts exclusively the arteries, whereas steady-state imaging shows an enhancement of both arteries and veins; due to the higher concentration of the contrast agent during first-pass imaging, the absolute level of enhancement is higher in (a).
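summarizing the injection options discussed above, the sketch below works through the bolus arithmetic for 1.0 m gadobutrol against a 0.5 m reference protocol; the 75-kg body weight, 0.1 mmol/kg dose, and 2 ml/s reference rate are assumed example values, not protocol recommendations.

```python
# bolus arithmetic for 1.0 m gadobutrol vs. a 0.5 m reference ecf protocol.
# assumed example values: 75-kg patient, 0.1 mmol/kg dose, 2 ml/s reference rate.
weight_kg, dose_mmol_per_kg = 75, 0.1
dose_mmol = weight_kg * dose_mmol_per_kg            # 7.5 mmol gd in total

def protocol(concentration_m, rate_ml_s, dose=dose_mmol):
    volume = dose / concentration_m                 # ml needed for the dose
    return volume, rate_ml_s, volume / rate_ml_s    # (ml, ml/s, bolus seconds)

reference = protocol(0.5, 2.0)     # 15 ml at 2 ml/s -> 7.5-s bolus
equimolar = protocol(1.0, 1.0)     # rate halved: 7.5 ml at 1 ml/s -> same 7.5-s geometry
compact = protocol(1.0, 2.0)       # rate kept: 7.5 ml at 2 ml/s -> bolus halved to 3.75 s
half_dose = protocol(1.0, 2.0, dose_mmol / 2)  # 3.75 ml -> very short total bolus

for name, (vol, rate, dur) in [("0.5 m reference", reference), ("equimolar", equimolar),
                               ("compact", compact), ("half dose", half_dose)]:
    print(f"{name}: {vol:.2f} ml at {rate} ml/s -> {dur:.2f} s bolus")
```

the numbers make explicit why equimolar dosing reproduces the familiar bolus geometry, while the unchanged-rate option produces the very compact bolus favored for time-resolved acquisitions.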
ecf contrast agents are widely used in mr imaging of soft-tissue lesions. the enhancement of both inflammatory and neoplastic lesions makes their use indispensable for the detection and characterization of soft-tissue lesions. relevant anatomical sites that are subject to mr imaging in daily clinical practice are the female breast and the soft tissues of the musculoskeletal system. for the female breast, mr imaging with extracellular contrast agents (mr mammography) is nowadays widely used for the detection and characterization of unclear breast tumors (morris et al.). the histopathological basis of the different enhancement patterns in breast masses is not yet fully understood; however, it is well known that angiogenesis, with the formation of new vessels, is an important aspect (knopp et al.). the amount of angiogenesis and contrast agent extravasation is considered to differ between several benign and malignant lesions; however, the visible phenomenon of differential enhancement is usually too subtle to be analyzed only visually. the discrete changes of contrast agent enhancement are therefore usually evaluated semiquantitatively, with region-of-interest measurements at different time-points (kuhl et al.). the enhancement kinetics thereby obtained, as represented by the time-signal intensity curves, differ significantly for benign and malignant enhancing lesions and are used as an aid in differential diagnosis. usually, four to six measurements at intervals of one to two minutes are applied in daily clinical practice (kuhl et al.; pediconi et al.). a recently published study showed that the temporal resolution for the assessment of time-signal intensity curves is not as critical as the spatial resolution; therefore, the recommendations for dynamic postcontrast mr imaging tend toward longer measurement intervals with a high spatial resolution (e.g., a full imaging matrix) (kuhl et al.). a more detailed evaluation of perfusion parameters, however, needs a very high temporal resolution in the range of a few seconds. first results for the differentiation of unclear breast tumors in an investigational setting are very promising; however, due to the high temporal resolution, only single slices can be measured, which is not feasible for daily practice (brix et al.). usually, standard gd chelates at a dose of 0.1 mmol/kg body weight are used for contrast-enhanced mr mammography. first results indicate that the use of the high-relaxivity mr contrast agent gadobenate dimeglumine at the same dosage can achieve a superior detection and identification of malignant breast lesions at mr imaging as compared with gadopentetate dimeglumine; however, up to now, gadobenate dimeglumine is not officially approved for this indication. there are also first approaches to perform mr mammography with blood-pool contrast agents. a major limitation of ecf agents is that they extravasate nonselectively from the vasculature into the interstitium of both normal and pathological tissues in the breast. it is hypothesized that the degree of microvascular endothelial disruption inherent to cancer vessels, with the resulting extravasation of macromolecular contrast agents, may predict tumor aggressiveness and tumor grade more accurately than standard gd chelates (daldrup-link and brasch; daldrup-link et al.). first results with uspio have shown an improved characterization of unclear breast tumors, at the expense of the tumor enhancement that is important for tumor detection.
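picking up the semiquantitative curve-type analysis described above, the short sketch below classifies a lesion's time-signal intensity curve into the three patterns commonly used as a diagnostic aid (persistent enhancement, plateau, washout); the +/-10% thresholds, the choice of the early time point, and the synthetic curves are illustrative assumptions, not validated cutoffs.

```python
import numpy as np

def curve_type(signal, early_index=2):
    """classify a breast dce time-signal curve from roi measurements.

    signal: intensities at successive postcontrast time points
    returns 'persistent', 'plateau', or 'washout'
    """
    early = signal[early_index]          # early postcontrast value
    late = signal[-1]                    # last measurement
    change = (late - early) / early      # relative late-phase change
    if change > 0.10:                    # continued rise: suggestive of benignity
        return "persistent"
    if change < -0.10:                   # signal drop: suspicious washout
        return "washout"
    return "plateau"                     # indeterminate

print(curve_type(np.array([100, 180, 220, 235, 250, 265])))  # -> persistent
print(curve_type(np.array([100, 200, 240, 230, 215, 200])))  # -> washout
```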
an interesting approach is also the use of small-molecular gd chelates that bind reversibly to plasma proteins, such as gadofosveset. this might allow for an improved sensitivity and specificity due to the simultaneous presence of small and large (protein-bound) molecules (daldrup-link et al.). the assessment of microvascular changes in experimental breast tumors, however, does not seem to be reliably possible with these agents, in contrast to the macromolecular albumin-gd-dtpa (daldrup-link et al.; turetschek et al.). clinical experience in breast tumors does not yet exist. although potential diagnostic applications have been investigated with various-sized albumin-gd-dtpa preparations, this contrast agent is considered a poor candidate for development as a clinical drug due to its slow and incomplete elimination and a potential immunologic toxicity (daldrup-link and brasch). for soft-tissue or bone lesions in the musculoskeletal system, the application of extracellular gd contrast agents has become a clinical standard for characterization, staging of the local extent, biopsy planning, and therapy monitoring (verstraete and lang). the basic principle of contrast-enhanced imaging is, as described above, the distribution of the gd chelates in the intravascular space, showing enhancement in tumors with dense vascularity and neoangiogenesis, as well as distribution into the extracellular space. for these clinical standard applications, there seem to be no relevant differences in diagnostic performance between the different extracellular gd chelates, similar to neuroimaging. the role of gd-enhanced mri for exact tissue characterization is still very limited. a differential diagnosis between different sarcomas, nerve sheath tumors, or other mesenchymal tumors is not possible based on the contrast agent behavior up to now. the differentiation between benign and malignant tumors is also often very limited, even with tools like dynamic, time-resolved contrast-enhanced mri (verstraete and lang). nevertheless, surrogate parameters for angiogenesis, like histological tumor-vessel density, can be correlated with this method (van dijke et al.). one major limitation is the extravasation of standard gd chelates through the intact endothelium, so that the pathological extravasation through the disrupted endothelium of tumor vessels cannot be separated from the physiological distribution. therefore, experimental studies mainly focus on contrast agents that show no or only minor physiological extravasation. different studies, mainly at the animal-experimental stage, were able to show that the characterization of benign and malignant tumors, the evaluation of angiogenesis, and even tumor grading are feasible with blood-pool contrast agents (daldrup et al.; kobayashi et al.; preda et al. a, b). there have been promising results with albumin-gd-dtpa; however, as mentioned above, this agent is unlikely to become available for diagnostic use in humans (daldrup et al.; daldrup-link and brasch). similar to breast tumors, uspio have also been utilized in the past for the evaluation of perfusion and for the characterization of soft-tissue tumors (bentzen et al.). the basic group of contrast agents for hepatobiliary imaging is the group of ecf gd-based contrast agents; however, tissue-specific contrast agents are also available, which allow for an improved detection and characterization of focal and diffuse liver disease.
liver-specific contrast agents can be divided into two groups. on the one hand, there are iron-oxide particles (spio, superparamagnetic particles of iron oxide), which are targeted to the reticuloendothelial system (res), i.e., to the so-called kupffer cells. these agents cause a signal decrease in t2/t2*-weighted sequences by inducing local inhomogeneities of the magnetic field. on the other hand, there is the group of hepatobiliary contrast agents, which are targeted directly to the hepatocyte and are excreted via the bile. these agents cause a signal increase in t1-weighted sequences by shortening the t1 relaxation time. several different liver-specific contrast agents are available on the european market. the basic principle behind spio is the fact that malignant liver tumors usually contain no kupffer cells, in contrast to the normal liver parenchyma and to solid benign liver lesions. therefore, in the liver-specific phase, which starts for ferucarbotran after about ten minutes and for ferumoxide considerably later, a high contrast is produced between malignant liver lesions and normal liver parenchyma. due to the signal loss in normal liver parenchyma, the malignant lesions are contrasted as hyperintense lesions in t2*-weighted and t2-weighted sequences against the dark liver parenchyma. the first spio on the market in europe was ferumoxide (endorem®, guerbet, aulnay-sous-bois, france). since then, the bolus-injectable ferucarbotran (resovist®, bayer schering pharma ag, berlin, germany) has become available in most european countries and in asia. with regard to the basic principle of imaging, there is no difference between the two agents; however, direct comparative studies have not been performed so far. the most striking advantage of ferucarbotran is the better workflow due to the possibility of injecting it as a bolus. the bolus applicability of ferucarbotran results from its different particle size and the coating of the particles; these properties are also responsible for the lower rate of side effects (especially fewer events of severe back pain) encountered with ferucarbotran. in earlier clinical trials, the effects of spio particles were evaluated almost exclusively on t2-weighted fse and t2*-weighted gre sequences, whereas usually not much attention was paid to the t1 effects. however, the effect of spio particles on proton relaxation is not confined to t2 and t2*: at low concentrations, they also influence t1 relaxivity, with increased signal intensity on t1-weighted gre sequences (chambon et al.). this gave rise to the hope that the vascularity of focal liver lesions could be depicted with the bolus-injectable ferucarbotran; however, investigations have shown that the ferucarbotran-enhanced early dynamic examination with t1-weighted sequences does not permit the evaluation of lesion vascularity, since (with the exception of the cotton-wool-like puddling of hemangioma) the expected enhancement patterns cannot be seen reliably (zech et al.). with regard to the t2/t2* effects, there might be differences between the two agents, which could be related to their different average particle sizes (approximately 150 nm for ferumoxide and 60 nm for ferucarbotran). with the help of spio-enhanced mr, an accurate liver lesion detection can be achieved. there have been several studies comparing spio with ct during arterial portography (ctap), which has been considered best practice and the reference standard. these studies showed high detection rates (ba-ssalamah et al.; vogl et al.).
in comparison to ctap, these detection rates were comparable; moreover, spio-enhanced mr is more specific than ctap, in which false-positive lesions are encountered frequently. the above-cited references investigated spio-enhanced mr in mixed patient collectives; publications focusing on the cirrhotic liver showed that, in these patients, the combination of spio and extracellular gd contrast agents has to be considered the gold standard for lesion detection (ward et al.). with regard to lesion characterization, spio particles can be of help in the differential diagnosis of focal liver lesions based on the cellular composition and function of the different lesions (or, rather, based on their different kupffer cell density and function). when the same mr sequence is acquired pre-contrast and after a defined time interval, the signal loss in normal liver parenchyma and in different focal liver lesions can be evaluated quantitatively. this is helpful for the differentiation of benign and malignant lesions: when an appropriate threshold for the percentage signal loss is chosen, lesions with less signal loss than the threshold can be classified as malignant with high sensitivity and specificity (namkung and zech et al.). however, the sequence must have identical parameters pre- and postcontrast (including the same acceleration factor in case parallel imaging is used), since the application of parallel imaging introduces systematic changes in the spatial distribution of image noise (zech et al.). the second important group is the group of hepatobiliary contrast agents. the basic principle behind this group is the specific uptake directly into the hepatocyte. since these agents all shorten the t1 relaxation time, they cause a signal increase in normal liver parenchyma and in solid benign lesions, whereas malignant lesions such as metastases show no specific uptake. these lesions therefore contrast as hypointense lesions against the bright liver parenchyma. approved agents in europe are the manganese-based agent mangafodipir trisodium (teslascan®, ge healthcare) and the gd-based agents gadobenate dimeglumine and gadoxetic acid (primovist®, bayer schering pharma). mangafodipir has the drawback that it must not be administered as a bolus, but only as a short infusion; therefore, dynamic studies are not possible with mangafodipir. however, its liver specificity is high, and the strong uptake in normal liver parenchyma enables imaging of, e.g., metastases with high contrast to the surrounding liver parenchyma. gadobenate dimeglumine and gadoxetic acid are injectable as a bolus. with both contrast agents, a valid early dynamic examination is feasible, allowing differentiation of lesions with regard to their hyper- or hypovascularity (huppertz et al.; petersein et al. b). due to the lower liver specificity of gadobenate dimeglumine, the liver-specific imaging phase starts only about 60 min after injection, whereas gadoxetic acid allows imaging about 20 min after injection. this can be of value with regard to the workflow in the mr department. similar to the situation with the spio agents, direct comparative studies between the agents have not been published yet; therefore, the following remarks again hold true for all hepatobiliary contrast agents. with regard to lesion characterization, however, only gadoxetic acid has official approval for this indication. all three hepatobiliary agents are approved for lesion detection.
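returning to the quantitative pre-/postcontrast spio evaluation described above, the sketch below shows the percentage signal intensity loss (psil) arithmetic; the 30% decision threshold is an assumed illustrative cutoff, since the exact values reported in the cited studies are study-specific.

```python
def psil(signal_pre, signal_post):
    """percentage signal intensity loss between identical pre- and
    post-spio acquisitions of the same roi."""
    return 100.0 * (signal_pre - signal_post) / signal_pre

def classify_lesion(signal_pre, signal_post, threshold_percent=30.0):
    """lesions losing less signal than the threshold are flagged as
    suspicious for malignancy (few kupffer cells -> little spio uptake).
    the 30% default is an assumed, illustrative cutoff."""
    loss = psil(signal_pre, signal_post)
    return ("suspicious (low spio uptake)" if loss < threshold_percent
            else "likely benign (preserved kupffer cell function)")

print(psil(500, 230))              # normal parenchyma: 54% signal loss
print(classify_lesion(480, 440))   # only ~8% loss -> suspicious
```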
in comparison to the spio agents, a potential advantage of the hepatobiliary agents is the fact that t1-weighted sequences can usually be performed with shorter acquisition times, fewer artifacts, and substantially higher spatial resolution. this holds true especially for t1-weighted 3d-gre sequences derived from mr angiography sequences, such as vibe (volumetric interpolated breath-hold examination; siemens medical solutions, erlangen, germany) or lava (liver acquisition with volume acceleration; ge healthcare). to what extent these high-resolution sequences, with a slice thickness usually below 3 mm, allow a further increase in the detection of small (<1 cm) malignant lesions has to be investigated in the future. the present data for the hepatobiliary agents were acquired mostly with conventional 2d-gre sequences with considerably larger slice thicknesses; however, even in this setting, the detection of lesions <1 cm was improved in comparison to baseline mri and spiral ct (bartolozzi et al.; gehl et al.; huppertz et al.; petersein et al. b). an earlier trial showed a slight superiority of spio-enhanced mri versus hepatobiliary mri in the detection of liver metastases; however, the potential advantages of modern t1-weighted 3d-gre sequences were not available for this evaluation (del frate et al.). a recent evaluation showed comparable detection rates between these two contrast agent groups (kim et al.). for the diagnosis of solid benign liver lesions (such as focal nodular hyperplasia [fnh] and hepatocellular adenoma), the basis is still the extracellular contrast agent behavior, with a flush-like, mostly homogeneous arterial hypervascularization and a fast but only faint washout, so that these lesions are mostly slightly hyperintense in the portovenous and equilibrium phases (and not hypointense, in contrast to the strong washout of malignant lesions). contrast agents used for patients with suspected solid benign lesions must allow this information to be acquired. therefore, spio agents or mangafodipir alone are not sufficient for this indication; however, spio in particular can contribute to the diagnosis of these lesions in combination with extracellular contrast agents, which then have to show the above-mentioned enhancement pattern. with spio agents, solid benign lesions typically show a liver-specific uptake in the range of normal liver parenchyma, thereby allowing the differentiation from malignant lesions such as hepatocellular carcinoma (hcc). with regard to the differential diagnosis between fnh and adenoma, results in a limited number of patients indicated that the quantification of iron uptake can be helpful, because in our cohort adenoma showed a stronger iron uptake than fnh, with only minimal overlap of the percentage signal intensity loss (psil) measured in a t2-weighted fse sequence with fat saturation (namkung and zech et al.). with the hepatobiliary contrast agents gadobenate dimeglumine and gadoxetic acid, the diagnosis of fnh and adenoma is also possible, based on the one hand on the extracellular contrast phenomena and on the other hand on the liver-specific uptake of the agents into these lesions (grazioli et al.; huppertz et al.). there are also valid data indicating that the differentiation between fnh and adenoma is feasible with hepatobiliary contrast agents. therefore, with the bolus-injectable agents of this class, a time- and presumably cost-effective diagnosis can be achieved (see the case example in the figure below).
fig.: mr images of a male patient with a formerly unclear liver lesion. the primary mr examination (upper row) with gadopentetate dimeglumine and ferucarbotran shows a lesion (arrow) with strong arterial enhancement in the gd-enhanced t1-weighted 3d-gre sequence (a) and ongoing washout in the portovenous and equilibrium phases (not shown). the t2-weighted fse sequence with fat saturation after application of ferucarbotran (b) depicts the lesion as nearly liver-isointense; the calculated percentage iron uptake compared with the pre-contrast t2-weighted sequence demonstrated the benignity of the lesion. based on these imaging features, the lobulated margins, and the central scar, the diagnosis of an fnh was made. the follow-up study (lower row) was performed with gadoxetic acid after a single bolus injection: in the arterial-phase t1-weighted 3d-gre sequence (c), the same enhancement characteristics as in the prior study can be delineated, and in the delayed t1-weighted 3d-gre sequence (d), the presence of hepatocytes is proven by the liver-specific enhancement, with excellent delineation of the central scar. in contrast to the primary examination, the follow-up study provided information about vascularity and tissue composition with a single contrast agent injection only.

in patients with extrahepatic malignoma, confirming or ruling out liver metastasis is often crucial for the therapeutic management. moreover, the exact staging of metastatic disease of the liver is becoming more and more important, since sophisticated, stage-adapted therapeutic regimens exist, with options ranging from atypical liver resection over local ablative, minimally invasive treatment (e.g., radiofrequency ablation) up to extended liver resection. therefore, contrast agents used for this indication have to provide an excellent detection rate for focal liver lesions; however, the characterization of these lesions is also important, especially the differential diagnosis of small cystic metastases versus small benign cysts or atypical hemangioma. after injection of extracellular contrast agents, metastases can be differentiated by their vascularity into hypo- and hypervascular lesions. hypovascular metastases appear as hypointense lesions in the portovenous phase, whereas hypervascular metastases appear as hyperintense lesions in the arterial-dominant phase. in contrast to the enhancement patterns of benign lesions (the nodular, cotton-wool-like puddling of hemangioma or the homogeneous enhancement of fnh and adenoma), metastases typically show a heterogeneous, ring-like enhancement with strong washout in the portovenous and equilibrium phases, resulting in first hyper- and then hypointense lesions. however, in very small lesions, the morphology of vascularization of the different entities becomes more and more similar, so that the presence or absence of liver-specific uptake is an additional criterion for differentiation. several publications have shown that the detection of metastases is feasible with the highest accuracy with the help of liver-specific contrast agents, regardless of whether spio or hepatobiliary agents are used. according to the literature, all liver-specific agents can be used for the detection of liver metastases with a very high diagnostic reliability and with superiority over merely extracellular mri or spiral ct (bartolozzi et al.; ba-ssalamah et al.; del frate et al.; gehl et al.; huppertz et al.; kim et al.; petersein et al. b; vogl et al.).
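as a purely didactic summary, the following sketch encodes the enhancement patterns described above as a simple lookup over arterial and portovenous behavior; it illustrates the textbook logic only, is not diagnostic software, and all category names and rules are informal assumptions.

```python
def liver_lesion_pattern(arterial, portovenous, peripheral_nodular=False):
    """map enhancement behavior (relative to liver: 'hyper', 'iso', 'hypo')
    to the didactic lesion patterns described in the text."""
    if peripheral_nodular:
        return "hemangioma-like (cotton-wool-like puddling)"
    if arterial == "hyper" and portovenous == "hypo":
        return "malignant-like (strong washout, e.g., hypervascular metastasis)"
    if arterial == "hyper" and portovenous in ("iso", "hyper"):
        return "fnh/adenoma-like (faint washout, stays slightly hyperintense)"
    if arterial in ("iso", "hypo") and portovenous == "hypo":
        return "hypovascular metastasis-like"
    return "indeterminate"

print(liver_lesion_pattern("hyper", "hypo"))   # washout pattern
print(liver_lesion_pattern("hyper", "iso"))    # solid benign pattern
```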
a difficult situation is imaging of the cirrhotic liver. it is known that extracellular agents are helpful for the detection and characterization of hcc nodules, with good sensitivity and specificity for dynamic t1-weighted 3d-gre sequences (burrel et al.). moreover, for the diagnosis of hcc according to the accepted guidelines, hypervascularity has to be demonstrated; therefore, a valid early dynamic phase is a mandatory part of imaging in the cirrhotic liver. because regenerative nodules, which are frequently found in the cirrhotic liver, can also show hypervascularity, the differentiation between these nodules and hcc nodules is a crucial issue for the management of patients suffering from liver cirrhosis. this is the reason that liver-specific contrast agents play an important role in imaging of the cirrhotic liver. it has been demonstrated that hcc shows no relevant uptake of spio particles, in contrast to benign regenerative nodules (bhartia et al.; imai et al.; ward et al.; namkung and zech et al.). since fibrotic areas are frequently present in the cirrhotic liver, spio alone are not sufficient to evaluate the cirrhotic liver. a reasonable approach for the diagnosis of hcc based on imaging alone is the correlation of hypervascularity with missing, or at least decreased, iron uptake (bhartia et al.; ward et al.). however, there is also an indefinite area of overlapping phenomena between dysplastic nodules and well-differentiated hcc, which is the reason for false-negative findings, i.e., well-differentiated hcc with substantial iron uptake (imai et al.). with regard to lesion detection, the availability of high-resolution mr sequences gives advantages for gd-enhanced arterial-phase imaging, either alone (kwak et al.) or in combination with spio as a double-contrast approach (ward et al.). imaging with hepatobiliary contrast agents is considered inferior to spio agents in the cirrhotic liver, mainly due to a substantial overlap in the liver-specific uptake between well-differentiated hcc and regenerative nodules. the assessment of lymph node involvement by metastatic tissue is currently based on morphologic parameters including lymph node size, shape, irregular border, and signal intensity inhomogeneities (brown et al.; zerhouni et al.). for all of these parameters, no clear cut-off values or cut-off characteristics can be defined. the definition of a cut-off value in individual studies is the result of finding a compromise between sensitivity and specificity (e.g., larger cut-off diameters around 10 mm give a high specificity but low sensitivity, whereas the reverse, low specificity and high sensitivity, is observed when smaller cut-off diameters are defined). the use of unspecific gd-based extracellular contrast agents has not been shown to overcome this limitation. lymphotropic mr contrast agents were, therefore, developed to increase the diagnostic accuracy for positive lymph node involvement. currently, none of these agents is approved for clinical use, and the experience with the different formulations is limited to clinical studies. the most frequently used agents are uspios. they are administered intravenously and, as a result of their small diameter and their electrical neutrality, pass the first lymphatic barriers, i.e., the liver and the spleen. in the lymphatic nodes, they are phagocytosed by local macrophages.
in healthy lymphatic tissue, the local accumulation of iron oxides results in a significant shortening of t2 and t2* relaxation times, and thus in a marked decrease of signal in t2- and t2*-weighted sequences. in contrast, metastatic tissue replacing the lymphatic tissue shows no relevant uptake of uspio, and no relevant change in signal intensity can be observed. t2*-weighted gradient-recalled echo sequences are considered the most accurate for detecting the signal loss in non-metastatic nodes. the application of uspios not only offers the possibility to differentiate between tumor-free, reactive (koh et al.), and tumor-positive lymph nodes, but also enables the depiction of micrometastases when sequences with high spatial resolution are used (harisinghani et al.). one representative of the group of lymph node-specific uspio, ferumoxtran-10 (sinerem®, guerbet, paris, france), is infused after dilution. the recommended dose is 2.6 mg fe/kg body weight, and the optimal time-point for postcontrast imaging is 24-36 h after application. during their clinical development, uspios have been shown to be effective in staging lymph nodes of patients with various primary malignancies (deserno et al.; jager et al.; michel et al.; nguyen et al.). the usual diagnostic approach is to perform an initial precontrast scan and to compare these images with postcontrast images acquired 24-36 h after infusion of uspios, looking for signal changes between the two time-points. the type, onset, and intensity of adverse events after application of ferumoxtran-10 were evaluated in phase iii studies and seem to be similar to those related to the infusion of ferumoxides (anzai et al.). in patients with esophageal or gastric cancer, uspios revealed a high sensitivity and specificity for the diagnosis of metastatic nodes (nishimura et al.; tatsumi et al.). in patients with carcinomas of the upper aerodigestive tract, the application of ferumoxtran-10 has been shown to increase sensitivity while maintaining specificity, compared with precontrast imaging (curvo-semedo et al.). in patients with rectal cancer, uspios have shown well-predictable signal characteristics in normal and reactive lymph nodes and were able to differentiate the latter from malignant lymph nodes (koh et al.). dissimilarly, keller et al. studied females with uterine carcinoma and were able to show a high specificity, but a low sensitivity, for metastatic lymph nodes; mainly micrometastases of only a few millimeters in diameter were missed. a possible way to further improve the diagnostic accuracy for the detection of small positive lymph nodes could be the use of 3-t high-field scanners, which allow a higher spatial resolution (heesakkers et al.). different results were published concerning the necessity of both pre- and postcontrast images. whereas the majority of clinical publications using uspio for lymph node imaging used both pre- and postcontrast images, and stets et al. were able to statistically prove the advantage of pre- and postcontrast studies, harisinghani et al. showed that on ferumoxtran-10-enhanced mr lymphangiography, contrast-enhanced images alone may be sufficient for lymph node characterization. however, a certain level of interpretation experience seems to be required before contrast-enhanced images can be used alone. both uspios (rogers et al.) and spios (maza et al.) can alternatively be administered by subcutaneous or submucosal injection.
this application route is able to identify sentinel lymph nodes and lymphatic drainage patterns (see figure below). additionally, a high diagnostic accuracy of interstitial mr lymphography using blood-pool gd-based agents has been described (herborn et al.). using different macromolecular agents or gd-based agents with high protein binding in animal models, herborn et al. were able to show that the differentiation of tumor-bearing lymph nodes from reactive inflammatory and normal nodes is possible, based on a contrast uptake pattern assessed both qualitatively and quantitatively. in contrast to intravenous administration, subcutaneous injection offers the possibility to acquire the mr images as early as a few minutes after application. the use of gd-based non-lymphotropic blood-pool agents induced a relatively short and inhomogeneous lymph node enhancement (misselwitz et al.). with the aim of becoming more specific, a new generation of lymphotropic t1 contrast agents was developed and tested in animal models after subcutaneous injection. these perfluorinated gd chelates were able to visualize the fine lymphatic vasculature, even the thoracic duct, in animal models (staatz et al.).

fig.: lymphatic drainage of a mucosal melanoma in the left nasal cavity to a lymph node located in the left submandibular region; mr images (upper row) and spect images (lower row). a: concordant alignment of the hot spots caused by the skin markers on the spect images with the vitamin e caps on the mr images (yellow arrows). b: accurate sentinel lymph node localization (blue arrow) after subcutaneous injection of ferucarbotran, t2*-weighted 2d gradient-recalled echo sequence; the homogeneous signal intensity decrease in the depicted lymph node (arrow) indicates normal lymphatic tissue, thereby ruling out metastatic involvement (maza et al.).

bowel mr contrast agents are generally classified as either positive (bright lumen) or negative (dark lumen) agents. in addition to enteral contrast agents specifically approved for mr imaging, several existing pharmaceutical preparations, such as methyl cellulose, mannitol, and polyethylene glycol preparations, licensed for enteric applications other than mri, have also been exploited. enteral contrast media have to meet a number of specific demands. for enteral and rectal application, a special formulation of gadopentetate dimeglumine was developed (magnevist enteral, schering). the agent contains mannitol to prevent absorption of the fluid simultaneously introduced into the gi tract, thus allowing a homogeneous filling, distension, and constant gd concentration during the entire examination period. in the early development, an increase in diagnostic accuracy was shown for examinations of the pancreas, in the diagnostics of abdominal lymphoma, and in pelvic mr imaging (claussen et al.). negative agents provide desirable contrast for pathologic processes that are signal-intense. they have been shown to improve the quality of images obtained by techniques such as mr cholangiopancreaticography (mrcp) and mr urography by eliminating unwanted signal from fluid-containing adjacent bowel loops, thus allowing a better visualization of the pancreatic/biliary ducts and the urinary tract. an alternative to oral spios was described in the form of ordinary pineapple juice.
it was demonstrated that pineapple juice decreased the t2 signal intensity on a standard mrcp sequence to a similar degree as a commercially available negative contrast agent (ferumoxsil) (riordan et al.). oral spio preparations usually contain larger particles than injectable agents do. in europe, two spio preparations are approved for oral use: ferumoxsil (lumirem, laboratoires guerbet, france) and the oral magnetic particle preparation abdoscan (ge healthcare). they are coated with a non-biodegradable and insoluble matrix (siloxane for lumirem and polystyrene for abdoscan) and suspended in viscosity-increasing agents (usually based on ordinary food additives, such as starch and cellulose). these preparations prevent the ingested iron from being absorbed, keep the particles from aggregating, and improve a homogeneous contrast distribution throughout the bowel. if spio particle aggregation does occur, magnetic susceptibility artifacts may result, especially when a high magnetic field strength and gradient-echo pulse sequences are used (wang et al.). lumirem is composed of crystals of approximately 10 nm; the hydrodynamic diameter of the coated particle is approximately 300 nm (debatin and patak). oral spio are administered slowly over a period of minutes, with a larger volume for contrast enhancement of the whole abdomen and a smaller volume for imaging of the upper abdomen. oral spio suspensions are well tolerated by patients (haldemann et al.); the iron is not absorbed, and the intestinal mucosal membrane is not irritated. the combination of gd-enhanced t1-weighted sequences and t2-weighted sequences after oral spio contrast has revealed the highest accuracy in the evaluation of crohn's disease (maccioni et al.). furthermore, it has been shown that mri with a negative superparamagnetic oral contrast agent is comparable to endoscopy in the assessment of ulcerative colitis. in contrast to patients with crohn's disease, double-contrast imaging does not provide more information than single oral contrast in this setting (de ridder et al.). in mrcp, negative oral contrast agents can be given before the examination to provide a non-superimposed visualization of the bile and pancreatic ducts. there is no negative influence of the oral contrast agents on the diameter of the ducts (petersein et al. a). in cardiac mri, contrast agents are obligatory for the assessment of myocardial perfusion, for the evaluation of the enhancement of cardiac masses, and for the evaluation of myocardial viability. in addition, contrast agents are frequently used when mr angiography of the coronary arteries is performed. myocardial perfusion imaging is a promising and rapidly growing field in cardiac mr imaging. in comparison to radionuclide techniques, mr imaging has several advantages, including a higher spatial resolution, no radiation exposure, and no attenuation problems related to anatomical limitations. the examination is performed after rapid intravenous administration (e.g., 3-5 ml/s) of a contrast agent and evaluation of the first-pass transit of the agent through the myocardium. with the use of fast scan techniques, perfusion imaging can be performed as a multislice technique, with imaging of three to five slice levels per heartbeat, possibly allowing coverage of the entire ventricle. from a series of images, signal intensity-time curves are derived from regions of interest in the myocardial tissue for the generation of parametric images. the majority of data have been published using ecf agents.
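as a concrete illustration of the semiquantitative evaluation of such signal intensity-time curves, the sketch below derives a simple upslope-based perfusion index, normalizing the myocardial upslope by the blood-pool (lv cavity) upslope; this is one common heuristic from the perfusion literature, shown with assumed synthetic curves and sampling interval rather than as a validated implementation.

```python
import numpy as np

def max_upslope(curve, dt):
    """steepest signal increase (a.u./s) of a first-pass signal-time curve."""
    return np.max(np.diff(curve)) / dt

def perfusion_index(myo_curve, lv_curve, dt):
    """semiquantitative index: myocardial upslope normalized by the
    arterial (lv blood pool) upslope, compensating for bolus differences."""
    return max_upslope(myo_curve, dt) / max_upslope(lv_curve, dt)

# synthetic example, one image per heartbeat (dt = 1 s assumed)
t = np.arange(40.0)
lv = 100 + 900 * np.exp(-((t - 12) / 3.0) ** 2)    # sharp blood-pool bolus
myo = 100 + 150 * np.exp(-((t - 17) / 6.0) ** 2)   # delayed, damped myocardial curve
print(f"perfusion index: {perfusion_index(myo, lv, dt=1.0):.3f}")
```

a perfusion reserve can then be estimated as the ratio of this index under stress versus at rest.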
in clinical practice, most investigators use fast t1-weighted imaging and bolus injection of gd doses up to 0.1 mmol/kg body weight (edelman). for the evaluation, both quantitative and qualitative approaches can be used; in the case of ecf agents, generally semiquantitative assessments are applied. to quantify myocardial perfusion, a calculation was published by wilke et al. on the basis of first-pass data acquired after fast bolus injection of a low dose of an ecf agent (wilke et al.). in practice, the mr signal increases approximately linearly with the agent concentration only up to moderate gd concentrations in blood; above this range, the relative increase in signal intensity begins to saturate (schwitter et al.). when the evaluation is performed visually, a higher dose should be preferred to reach a better myocardial enhancement and image quality. a good correlation between the perfusion reserve measured with mr imaging and the coronary flow reserve measured with doppler ultrasonography has been demonstrated. blood-pool agents also have the potential to be applied for quantitative measurements, because their volume of distribution is limited to the intravascular space. a requirement for quantitative perfusion measurements is that the relation between the measured signal intensity on the mr images and the contrast agent concentration in the blood is known (brasch). there are two major differences between first-pass curves obtained from blood-pool agents and from extracellular contrast agents. first, blood-pool contrast agents reach a lower tissue signal, because their volume of distribution is limited to the intravascular space (wendland et al.; wilke et al.). second, there is a better return to baseline for blood-pool contrast agents. the wash-in kinetics and the signal intensity in the myocardial tissue depend on the concentration of the contrast agent, the coronary flow rate, the diffusion of the contrast agent into the interstitium, the relative tissue volume fractions, the bolus duration, and recirculation effects (burstein et al.). absolute quantification of myocardial perfusion has been performed in animal models using the blood-pool agent nc100150, and a high correlation was found between mri and contrast-enhanced ultrasound (johansson et al.). delayed enhancement allows direct visualization of necrotic or scarred tissue and is an easy and robust method to assess myocardial viability. by measuring the transmural extent of late enhancement, a prognosis of the degree of functional recovery of cardiac tissue may be possible. although several studies have aimed at describing the mechanisms of late enhancement, these could not be fully explained up to now. the extent of late enhancement possibly depends on the time-point after injection as well as on the time elapsed since myocardial infarction. the relevant publications about delayed enhancement report data after the administration of 0.5 m ecf agents in a dose range of 0.1-0.2 mmol/kg. ecf agents are probably more efficient in assessing cellular integrity when they are distributed homogeneously through the damaged myocardium (wendland et al.), but a homogeneous distribution is not always the case, as in microvascular obstruction (kroft and de roos). differences exist between the distribution patterns of extracellular and blood-pool agents, and hypo-enhanced cores may be observed earlier using blood-pool agents (schwitter et al.).
the sensitivity of blood-pool agents for myocardial infarction and, therefore, their potential value for the evaluation of myocardial viability is unknown. a different strategy to determine myocardial viability is the use of necrosis-specific mr contrast agents. gadophrin-2 and -3 (schering) have been shown to possess a marked and specific affinity for necrotic tissue components and showed a persistent enhancement in necrotic myocardium. in preclinical studies, however, these agents did not prove superior to ecf agents in the estimation of infarcts, and their further development was therefore not continued (barkhausen et al.). to depict the coronary arteries with mri, both unenhanced and contrast-enhanced techniques are used. a frequently performed contrast-enhanced examination strategy, with acquisition of multiple 3d slabs in breath-hold depicting each coronary artery separately, was first described by wielopolski et al. for 3d coronary mr angiography, however, the contrast between blood and myocardium in relation to the inflow of unsaturated protons is reduced. thus, the use of an intravascular contrast agent may be particularly convenient due to the reduction of the t1 relaxation time of blood. the application of ecf agents has proven most effective for breath-hold acquisitions. however, the concentration of ecf agents declines rapidly as they extravasate into the interstitial space, thereby reducing the contrast between blood and myocardium. newly developed strategies use high-resolution, free-breathing mr sequences for coronary mra. in this situation, ecf agents are less beneficial due to the relatively long acquisition time of these free-breathing 3d sequences. this problem can be solved by the use of intravascular contrast agents. an additional benefit of the application of a blood-pool agent is a longer acquisition window, which may be used to further increase the signal-to-noise ratio and/or the image resolution (nassenstein et al.). in an animal model, use of the macromolecular gd-based agent p792 with a free-breathing technique allowed a more distal visualization of the coronary arteries than did an ecf agent or non-enhanced mr images (dirksen et al.).

one of the strengths of mri is the ability to visualize soft tissues with different image contrasts. additionally, various two- and three-dimensional mr imaging techniques for morphologic and functional examinations exist. among the functional techniques, the visualization and measurement of blood flow is of particular interest, since nearly all physiologic processes rely on an adequate blood supply. as with many other mr imaging techniques, the sensitivity of mri to blood flow was first observed in artifacts visible near larger blood vessels. to suppress these artifacts, new imaging methods were investigated. in a further refinement of these techniques, the artifact (here, the blood flow) was made the primary source of the imaging technique; thus, in the search for new methods of flow artifact suppression, the blood flow itself became the contrast-generating element.
the delineation of the vascular tree with mri, mr angiography (mra), is such a development: in t1-weighted 3d gradient-echo data, it was observed that the blood vessel signal in the marginal partitions was significantly higher than at the center of the image stack. furthermore, signal voids were seen in regions of turbulent flow, and in blood vessels with pulsating flow, ghost images of the vessel were visible in the phase-encoding direction. in the following, the underlying physical phenomena of these artifacts will be discussed, as they form the basis for time-of-flight mra, phase-contrast mra, and mr flow measurements. since the pioneering work of prince, many mr angiographies are acquired using contrast-enhanced acquisition techniques. in contrast-enhanced mra, the signal difference between the bright blood vessel and the dark surrounding tissue is induced by a reduction of the blood's t1 relaxation time. again, this technique has evolved from an unwanted vascular signal artifact in spin-echo images acquired after contrast agent injection into a major mr application. with the development of new contrast agents with a longer half-life in the vascular system, the so-called intravascular contrast agents, contrast-enhanced mra has been developed even further. in this section, techniques for mra with either intravascular or extracellular contrast agents will be presented. the visualization of the blood vessels with mri relies particularly on the specific properties of blood. blood consists nearly entirely of liquid, so that it has a very high spin density and thus yields a strong mr signal. the t1 time of blood is long compared to that of other tissues (approximately 1,200 ms at 1.5 t) (gomori et al.), and it depends on its oxygenation state. long t1 values are a disadvantage in t1-weighted acquisition strategies, as the signal decreases with increasing t1. this disadvantage can be converted into an advantage if the blood signal needs to be suppressed (black-blood angiography) for visualization of the vessel walls. furthermore, using an inversion (or saturation) recovery technique, the prepared magnetization can be tracked for a longer time, as the preparation persists much longer than in other tissues; this is the basis of arterial spin-labeling techniques. typical t2 values of blood are of the order of 100-200 ms (at 1.5 t). this t2 value is long enough to provide a high signal in the blood vessels using dedicated t2-weighted image acquisition strategies. with conventional t2-weighted spin-echo techniques, mr angiographies are difficult to acquire, since the motion of the blood needs to be compensated; nevertheless, t2-weighted mra pulse sequences for imaging of the peripheral vasculature have been reported (miyazaki et al.). another approach is the use of balanced ssfp pulse sequences, where the contrast depends on the ratio t2/t1; these fast pulse sequences have found widespread use in the visualization of the cardiac system. in addition to the relaxation times, blood velocity is an important parameter. in healthy arterial vessels, velocities ranging from roughly 100 cm/s (e.g., in the aortic arch) down to a few tens of cm/s (e.g., in the intracranial vessels) are common, whereas much lower values are found in the venous vasculature. high blood flow velocities lead to a pronounced inflow of fresh, unsaturated magnetization into an imaging slice, which increases the signal in the blood vessels; this is the well-known time-of-flight contrast.
furthermore, the velocity in the arterial system is not constant but changes as a function of time in the cardiac cycle. this pulsatility can be exploited to separate arterial from venous vessels, if the image data are acquired with cardiac synchronization. all of the presented mra techniques rely on these properties of the blood; some exploit only one of them, whereas others use a combination of them to increase the vascular contrast even further. in any mr pulse sequence, the magnetization in a measurement slice is exposed to a series of radiofrequency (rf) pulses. if the magnetization does not move out of the measurement slice (e.g., in static tissue), it approaches a so-called steady state which, for a spoiled gradient-echo pulse sequence, depends on the flip angle, the repetition time tr, and the relaxation times t1 and t2. the steady-state magnetization is smaller than the magnetization at the beginning of the experiment; it is partially saturated. fresh, unsaturated blood flowing into the imaging slice carries the full magnetization and thus generates a significantly higher mr signal (see figure below); this is known as time-of-flight (tof) contrast (anderson and lee; potchen et al.). a major disadvantage of tof mra is the sensitivity to blood signal saturation: the longer the inflowing blood remains in the measurement slice, the more its signal is saturated. in situations where the blood vessel is oriented over a long distance parallel to the imaging slice (or 3d slab), the inflowing magnetization is progressively saturated. thus, blood appears bright near the entry site but is seen less intense with increasing distance from this position. to maintain the tof contrast over the whole imaging volume, tof mra should therefore be performed as a 2d acquisition with thin slices or, if a 3d acquisition technique is preferred, in the arterial vasculature, where high flow velocities result in fewer saturation pulses for the arterial blood. the inflow effect can be maximized if the measurement slice is oriented perpendicular to the blood vessel. this is often possible for straight arterial vessels (e.g., the carotids in the head), but can be difficult for extended vascular territories with tortuous vessels. in 3d acquisitions of larger vessel structures, the saturation effect can be partially compensated if the flip angle is increased from the entry side of the slab to the exit side. thus, the saturation effect is less pronounced during entry, and the magnetization is still visible when it enters smaller vessels that are far away from the entry side. often, an rf pulse with a linearly increasing flip angle is utilized (tilted optimized non-saturating excitation, or tone [nagele et al.]). for an optimal vessel contrast, the blood flow velocity, the repetition time, the mean flip angle, and the slope of the rf pulse profile are important parameters. in 2d tof mra, a very strong tof contrast can be achieved if the slice thickness d is chosen such that the magnetization flowing with a velocity v is completely replaced during one tr interval, i.e., d ≤ tr · v.

fig.: transient longitudinal magnetization subjected to a series of excitation pulses at a fixed repetition time after entering the readout slice at t = 0 during tof mra. the longer the blood spins remain in the slice, the more they are saturated, and a differentiation between blood and surrounding tissue becomes difficult.
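the saturation behavior sketched in the figure above, and the slice-replacement condition d ≤ tr · v, can both be made concrete with a few lines of code; the flip angle, tr, t1, and velocity values below are assumed example numbers, not protocol recommendations.

```python
import numpy as np

def mz_after_pulses(n_pulses, flip_deg=30.0, tr=0.010, t1=1.2):
    """longitudinal magnetization (in units of m0) of spins that remain in
    the slice for n_pulses excitations of a spoiled gradient-echo sequence."""
    cos_a, e1 = np.cos(np.radians(flip_deg)), np.exp(-tr / t1)
    mz = 1.0                                  # fresh, fully relaxed inflowing spins
    for _ in range(n_pulses):
        mz = 1.0 - (1.0 - mz * cos_a) * e1    # rf pulse, then t1 recovery over tr
    return mz

for n in (0, 1, 5, 20, 100):                  # progressive saturation in the slice
    print(f"after {n:3d} pulses: mz = {mz_after_pulses(n):.3f}")

def max_slice_thickness_mm(tr_s, velocity_cm_s):
    """thickest 2d tof slice (mm) still fully refreshed every tr: d <= tr * v."""
    return tr_s * velocity_cm_s * 10.0        # cm -> mm

for tr, v in [(0.010, 50.0), (0.030, 20.0), (0.030, 5.0)]:
    print(f"tr = {tr*1e3:.0f} ms, v = {v:.0f} cm/s "
          f"-> d <= {max_slice_thickness_mm(tr, v):.1f} mm")
```

the last line illustrates why slow (e.g., venous) flow forces very thin slices, which is one reason tof acquisitions of extended territories become so time-consuming.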
fig.: 3d tof mra data set of the intracranial vasculature in lateral (top) and axial (bottom) maximum intensity projections. to minimize saturation, a tone rf pulse was used for excitation, and the signal from static brain tissue was additionally suppressed using magnetization transfer pulses.

for typical tr values and arterial blood flow velocities, the slice thickness should thus not be larger than a few millimeters. with such small slice thicknesses, the data acquisition in larger vascular territories such as the legs is very time-consuming, and patient movements cannot be excluded during the several minutes of scan time. patient movements lead to artificial vessel shifts between the imaging slices, which are particularly observed in orthogonal data reformats; these artifacts can mimic pathologies such as stenoses and thus significantly reduce the diagnostic quality of the data sets (see the neck mra figure below). 3d tof mra is advantageous over sequential 2d tof mra because an isotropic spatial resolution in all directions can be achieved. to reduce the saturation effects in 3d tof mra, not one thick slab but several thinner 3d slabs are acquired consecutively. thus, the saturation effects are smaller for the individual slabs, and a stronger tof contrast is seen. unfortunately, the flip angle in fast 3d acquisitions is not constant over the slab, but declines towards its margins. this inhomogeneous excitation results in a higher signal for the stationary tissue at the slab margin, providing an inhomogeneous signal background in lateral views of the data. combined with a higher tof contrast at the entry side compared with the exit side, a spatially varying signal intensity is seen in lateral views of the whole data set (venetian blind artifact). to reduce this artifact, overlapping 3d slabs are acquired (multiple overlapping thin slab acquisition, or motsa [parker et al.]), and the marginal slices of each slab are removed; however, this results in an increased total scan time. to increase the contrast between the blood vessels and the surrounding tissue in tof mra, magnetization transfer pulses are often included in the pulse sequences (edelman et al.). using off-resonant rf pulses, the magnetization transfer contrast (mtc) selectively saturates those tissues in which macromolecules are present. for brain tissue, these additional rf pulses can substantially reduce the signal from the background tissue, which increases the conspicuity especially of the smaller blood vessels. the use of magnetization transfer pulses, however, increases the minimally achievable tr and, thus, the total acquisition time. additionally, through the integration of the mtc pulses, more rf power is applied to the patient, so that the regulatory limits for the specific absorption rate (sar) might be exceeded, an effect that is more pronounced at higher field strengths. nevertheless, mtc is often included in intracranial tof mra protocols, where longer trs can be an advantage, as long trs additionally reduce the saturation effect. the flow velocity in arteries is typically not constant but varies over the cardiac cycle. thus, the tof contrast is a function of time, so that for image acquisition times longer than one cardiac cycle, a signal variation during k-space sampling is present. this periodic signal variation results in phantom images of the blood vessels in the phase-encoding direction after image reconstruction: the so-called pulsation artifacts or ghost images (haacke and patrick; wood and henkelman).
to avoid pulsation artifacts, the image acquisition can be synchronized with the cardiac cycle using ecg triggering, which typically prolongs the total acquisition time, as only part of the measurement time is used for data acquisition. another option to reduce pulsation artifacts is to saturate the inflowing blood in a slice upstream of the imaging slice. for this purpose, a slice-selective rf excitation is applied in a (typically parallel) saturation slice, so that the magnetization of the inflowing blood is significantly reduced. spatial presaturation avoids pulsation artifacts; however, the interior of the blood vessel now has a negative contrast, and the positive tof contrast is gone.

another important ingredient of a tof mra pulse sequence is flow compensation (cf. paragraphs on flow measurements, below): the movement of the spins causes an additional velocity-dependent phase shift that is seen in tof mra data sets without flow compensation as a displacement. if multiple velocities are present, as in turbulent flow, the different phases can cause signal cancellation (intra-voxel dephasing) that manifests, e.g., as a signal void behind a stenosis (saloner et al. ). with special compensation gradients, the velocity-dependent phase shifts can be reduced; however, this typically prolongs the echo time te.

fig. . . tof mra data sets of the arterial vasculature in the neck. due to swallowing, the blood vessel can move from one acquisition to the next, and the edges appear with discontinuities in the lateral views of the maximum intensity projection

tof mra is susceptible to several artifacts and is strongly dependent on a sufficient inflow velocity of unsaturated blood. therefore, 3d tof mra techniques are typically only used in the head, where the arterial flow velocities are high and enough time is available for imaging. for abdominal studies, tof techniques are of minor interest, because long measurement times are not possible due to respiratory motion.

in conventional tof mra, the difference in longitudinal magnetization between the saturated stationary tissue and the unsaturated inflowing blood is exploited to create a positive contrast between blood and tissue. with arterial spin-labeling techniques, a similar approach is taken to the visualization of the inflowing blood; however, here only a certain fraction of the inflowing blood is tagged (or labeled) and subsequently visualized, whereas in tof mra all inflowing material is detected (detre et al. ). spin-labeling pulse sequences typically consist of a labeling section, during which an rf pulse is applied to the spins upstream of the imaging slice (fig. . . ). for labeling, often adiabatic inversion pulses are used, which are less susceptible to motion during the inversion and allow inverting the magnetization even in rf coils with a limited transmit homogeneity (e.g., a transmit/receive head coil). after an (often variable) inflow delay time ti, during which the labeled blood is flowing into the vascular target structure, the signal in the imaging slice is acquired. for signal reception, different image-acquisition strategies can be employed, such as segmented spoiled gradient-echo (flash), fast spin-echo (rare, haste), or even echo planar imaging (epi). note: this image data set contains both the signal from the labeled blood and the static background tissue. in a second acquisition, the entire pulse sequence is repeated without labeling of the blood, and a second image data set is acquired.
to selectively visualize only the labeled blood, the two data sets are subtracted; since the signal intensity of the blood differs in both acquisitions, a non-vanishing blood signal is seen, whereas the signal contribution from static tissue cancels. if the phase of the second image data set is shifted by 180° compared to the first, labeled data set, the images can be added (the minus sign is provided by the phase), and the technique is called signal targeting with alternating radiofrequencies (star) (edelman et al. ). in clinical mri systems, arterial spin labeling (asl) is typically implemented with the described labeling pulses, which are applied only once per data readout; this approach is also termed pulsed arterial spin labeling (pasl). another method for asl uses a small transmit coil for the labeling pulse, which continuously applies an rf pulse to the arterial vessel and thus achieves a much higher degree of inversion. unfortunately, these continuous asl techniques often cannot be used in a clinical mr system due to the regulatory constraints for the maximum rf power applied to the patient (sar limits). asl techniques are typically applied to study perfusion in the brain and other organs, where the inflow delay is chosen long enough for the labeled blood to have reached the capillary bed (golay et al. ). unfortunately, the labeling in the blood does not persist for much longer than one t1 time (i.e., - s at . t), and the signal differences are generally very small ( - % of the total signal), which makes asl perfusion measurement a time-consuming procedure. another application of asl is the time-resolved visualization of blood flow, e.g., in intracranial malformations (essig et al. ), where saturation effects limit the diagnostic quality of conventional 3d tof mras. here, dynamic asl data sets are acquired at a series of inflow delays to visualize the transit of the labeled bolus through the nidus of the malformation and, more importantly, the arrival of the blood in the draining venous vessels that cannot be seen on the tof data sets (fig. . . ).

fig. . . concept of arterial spin labeling: magnetization is prepared (e.g., using a slice-selective inversion pulse) in a section of the artery (red). after an inflow delay of several hundred milliseconds, the magnetization has reached the imaging slice (green), and an image is acquired. the procedure is repeated without preparation, and the two data sets are subtracted to remove the signal background from static tissue

in addition to a morphologic representation of these blood vessels, transit-time measurement of the blood becomes feasible, which could be used as an indicator, e.g., for an increase in vascular resistance after a radiation therapy. mr angiographies can also be acquired using the special contrast properties of blood. in balanced steady state free precession pulse sequences (bssfp, truefisp, fiesta), an image contrast is created that depends on the ratio of the relaxation times t2 and t1 (oppelt et al. ). for blood, this ratio is high, and thus the interior of the blood vessels is shown with higher signal intensity than the surrounding tissue. unfortunately, other liquid-filled spaces such as the ventricles also appear with a bright signal, so that conventional mra post-processing strategies such as the maximum intensity projection cannot be used to visualize the vascular tree (fig. . . ).
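a minimal sketch of the pasl label/control subtraction described above, assuming an inversion efficiency and a t1 of blood (both values are assumptions, not from the text); it illustrates why the label decays away within roughly one t1 time.

import numpy as np

def asl_difference(ti, t1_blood=1.2, inv_efficiency=0.95, m0=1.0):
    """ideal pasl label/control difference signal: the inverted label
    relaxes back toward equilibrium with the t1 of blood, so the
    subtraction signal 2*alpha*m0*exp(-ti/t1) shrinks with inflow delay ti."""
    return 2.0 * inv_efficiency * m0 * np.exp(-ti / t1_blood)

for ti in (0.3, 0.8, 1.5):   # inflow delays in s (assumed values)
    print(f"ti = {ti:.1f} s -> label/control difference = {asl_difference(ti):.3f}")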
despite their short repetition times and balanced gradient schemes, these pulse sequences are susceptible to flow artifacts caused by intra-voxel dephasing, which can be compensated using flow-compensation gradients (storey et al. ; bieri and scheffler ). another problem with balanced steady state pulse sequences is the susceptibility to off-resonance artifacts: since both transverse and longitudinal magnetizations contribute to the mr signal, perfect phase coherence must be maintained within one tr to establish the desired contrast. in off-resonant regions, this phase coherence is perturbed, and a contrast variation is seen in the form of dark bands. the banding artifacts can be reduced using a repetition time that is shorter than the inverse of the off-resonance frequency, i.e., for a -hz off-resonance, the tr should be shorter than ms. off-resonance frequencies scale with field strength, so that banding artifacts become an increasing problem at higher field strengths. nevertheless, fast balanced ssfp pulse sequences are increasingly used in mra studies of the heart and the neighboring vessels in combination with ecg triggering to visualize the vascular anatomy and to assess cardiac function.

in conventional spin-echo images, one often observes that the interior of the blood vessels is darker than the surrounding tissue. this so-called black-blood contrast is caused by an incomplete signal refocusing of the 180° pulse. compared with tof mra with gradient-echo sequences, where the inflow of blood causes signal amplification, spin-echo sequences attenuate the signal from flowing blood because spins leave the imaging slice between the 90° excitation pulse and the 180° refocusing pulse and thus do not contribute to the mr signal. therefore, blood signal attenuation can be increased with a longer spacing between the two rf pulses, i.e., with longer echo times te. to further suppress signals from slowly flowing blood near the vessel walls, often additional strong gradients are introduced in the black-blood pulse sequences, which cause an increased intra-voxel dephasing and thus suppress the signal (lin et al. ).

fig. . . tof mra (top) and time-resolved dynamic mra with arterial spin labeling (bottom) of an intracranial arteriovenous malformation. in the tof mra, the nidus of the malformation is clearly seen, but the draining vein can hardly be identified because the inflowing blood is already completely saturated when it arrives in this part of the avm. in the three asl images acquired , , and ms after signal preparation, the filling of the nidus and the drainage through the vein is clearly visible

a different technique for blood signal suppression makes use of an inversion recovery blood signal preparation (edelman et al. ): similar to arterial spin labeling, a non-selective 180° inversion pulse is applied; however, the signal in the imaging slice is reinverted by a subsequent slice-selective inversion pulse. with this preparation, the magnetization of the blood (and of all other tissues) outside the imaging slice is selectively inverted. after a delay time that is chosen to achieve a zero crossing of the longitudinal magnetization of the inverted blood, an image is acquired. if the blood has been completely exchanged during the delay, then the signal of the labeled blood is nulled and only the static tissue is visible. this technique is often used in combination with cardiac triggering to visualize, e.g., the myocardium (fig. . . ).
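the banding rule of thumb mentioned above (tr shorter than the inverse of the off-resonance frequency) can be tabulated for a few assumed off-resonance values; the frequencies below are illustrative assumptions only.

# banding-artifact rule of thumb for balanced ssfp: keep tr below the
# inverse of the expected off-resonance frequency (assumed example values).
for delta_f in (75.0, 150.0, 300.0):        # off-resonance in hz
    tr_max_ms = 1.0 / delta_f * 1e3         # tr limit in ms
    print(f"off-resonance {delta_f:5.0f} hz -> tr should stay below {tr_max_ms:.1f} ms")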
in cardiac black-blood applications, both techniques are combined, which is possible because data are acquired during diastole when the heart is nearly at rest, whereas the signal preparation is applied during systole.

fig. . . in this image, blood is seen with high signal intensity; however, the surrounding tissue also appears with a strong mr signal. in the ascending aorta, signal voids are seen which are caused by turbulent flow, and banding artifacts are visible in the subcutaneous fat. nevertheless, balanced ssfp sequences provide good angiographic overview images in very short acquisition times, without the need for contrast agent injection. in the contrast-enhanced data acquisition, a better background suppression is possible, and the projection image of the 3d data set clearly delineates the aorta and the adjacent vessels

fig. . . ecg-triggered dark blood image of the heart acquired with a single-shot fast spin-echo technique (haste). the blood signal both in the heart and in the cross-section of the descending aorta is completely suppressed

the tof contrast relies on the increase in signal amplitude due to the inflow of unsaturated magnetization. in addition to elevated signal amplitude, the spin movement can also create a change in the phase of the mr signal. if a gradient g(t) is turned on and blood moves along the gradient direction (here: the x-direction), the phase ϕ(t) of the mr signal is given by

ϕ(t) = γ ∫ g(t′) x(t′) dt′ ≈ γ x0 ∫ g(t′) dt′ + γ v0 ∫ t′ g(t′) dt′ = γ (x0 m0 + v0 m1).

here, the motion x(t) of the magnetization is expressed as a taylor series, and only the constant term (i.e., the initial position x0) and the linear term (i.e., the velocity v0) are considered. the two integrals m0 and m1 solely depend on the gradient timing and are called the zeroth and first moment of g(t). the next higher-order term is proportional to the acceleration of the spins; however, the corresponding moment m2 only becomes large if long time scales are considered; thus, the estimation of the spin phase from the zeroth and first moments is justified for gradient-echo sequences with short echo times. if the gradient timing is modified such that the first moment is zero, the gradients are called flow compensated. flow compensation is an important ingredient in many mr pulse sequences: if a range of velocities is present in a single voxel, then the mr signal amplitude is attenuated due to the incoherent addition of the signals. with flow compensation, the individual signals all have the same phase, and the signals of the different velocities add up coherently. flow compensation is especially important in regions of high velocity gradients, e.g., turbulent jets or highly angulated vessels. in general, both m0 and m1 will be non-vanishing, and the phase of the signal will become proportional to the local spin velocity. unfortunately, many other factors such as off-resonance, field inhomogeneity, or chemical shift also affect the spin phase, so that a direct velocity measurement is not possible with a single mr experiment alone. to create an mr image that is dependent on the local velocity, a minimum of two image acquisitions is required. in the first, velocity-sensitized acquisition, a gradient timing is used with a carefully selected, non-vanishing first gradient moment (the zeroth moment is defining the spatial encoding, i.e., the k-space trajectory). in a second, flow-compensated acquisition, a gradient timing is chosen that cancels m1.
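the gradient moments m0 and m1 introduced above can be evaluated numerically for simple gradient shapes; in the following sketch (amplitudes and raster time are assumed values), a bipolar pair nulls m0 but not m1 and is therefore velocity sensitive, whereas a 1:-2:1 scheme nulls both moments and is flow compensated.

import numpy as np

def moments(g, dt):
    """zeroth and first gradient moments m0 = integral g(t) dt and
    m1 = integral t*g(t) dt, evaluated numerically on a time grid."""
    t = np.arange(len(g)) * dt
    return np.trapz(g, t), np.trapz(t * g, t)

dt = 1e-5                                  # 10 us raster time (assumed)
# bipolar pair (+g, -g): m0 vanishes, m1 does not -> velocity sensitive
bipolar = np.concatenate([np.full(100, 10e-3), np.full(100, -10e-3)])  # t/m
# 1:-2:1 scheme: both m0 and m1 vanish -> flow compensated
compensated = np.concatenate([np.full(100, 10e-3),
                              np.full(200, -10e-3),
                              np.full(100, 10e-3)])
for name, g in (("bipolar", bipolar), ("1:-2:1", compensated)):
    m0, m1 = moments(g, dt)
    print(f"{name:8s} m0 = {m0:+.2e} t*s/m, m1 = {m1:+.2e} t*s^2/m")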
if the two phase images are directly subtracted, the result is a phase difference image that is linearly dependent on the spin velocity: this is the basis of an mr flow measurement (bryant et al. ). since phase data are unambiguous only in the angular range of ±180°, so is the velocity information in an mr flow measurement. to avoid artifacts due to multiple rotations of the spin phase (so-called wrap-around artifacts), the first moment needs to be chosen such that the maximum velocity in the image creates a phase shift of 180°. in general, this velocity is set via the so-called velocity encoding, or venc, parameter in the pulse sequence. higher venc values require weaker encoding gradients, which can be realized in shorter echo times. besides an inadequate choice of the venc value, mr flow measurements are also susceptible to phase noise, which is present in regions of low signal amplitude. if the snr is 1 or less, the phase in the image is nearly uniformly distributed between -180° and +180°; under these conditions a meaningful flow measurement is not possible. unfortunately, phase noise is also often present near blood vessels (e.g., in the air-filled spaces of the lung, close to the pulmonary vessels). here, the measurement of velocity values requires a very careful placement of the regions of interest (rois) to avoid systematic errors from included noise pixels. in a conventional flow measurement, velocity encoding is typically performed in slice-selection direction only, because the orthogonal placement of the flow measurement slice induces a high tof signal in the cross-section of the vessel lumen. additionally, a parallel orientation of the velocity encoding direction with the image plane makes the image acquisition susceptible to systematic errors due to displacement, as, e.g., the readout gradients are used for both spatial encoding and velocity measurement simultaneously. flow measurements in arterial vessels are often performed with cardiac synchronization to account for the pulsatility of the blood flow. cardiac synchronization can be performed by prospective ecg triggering or retrospective ecg gating. with prospective triggering, data acquisition is started by a trigger signal, which is generated by the ecg electronics during the qrs complex of the ecg (fig. . . ). after data have been acquired for a certain number of cardiac phases, the measurement sequence is stopped until a new cardiac trigger signal is detected. in retrospective gating, image data are continuously acquired, and the time between the last trigger and the current data set is stored. later, data are resorted into predefined time intervals (bins) in the cardiac cycle, and the images are reconstructed. prospective triggering is less time-consuming during image reconstruction and is very precise in the delineation of the cardiac activity; however, a temporal gap at the end of the cardiac cycle is required and thus flow measurements at late diastole are difficult. retrospective gating uses continuous image acquisition, and the magnetization steady state is always maintained. unfortunately, more data need to be acquired than with prospective triggering to ensure a sufficient coverage of the cardiac cycle, and a temporal blurring due to the interpolation is seen in the velocity data. when the complex image data of the two acquisitions are subtracted instead of the phases, and the magnitude of the difference is displayed, a so-called phase-contrast mra image is created (dumoulin ).
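a small sketch of the venc mapping and of velocity aliasing; the venc value and the true velocity are assumed example numbers.

import numpy as np

def pc_velocity(phase_deg, venc):
    """velocity implied by a phase-contrast phase difference:
    the venc value maps a phase of +/-180 deg onto +/-venc."""
    return venc * phase_deg / 180.0

venc = 100.0                       # cm/s (assumed protocol value)
true_v = 130.0                     # cm/s, exceeds venc -> aliasing
phase = (true_v / venc * 180.0 + 180.0) % 360.0 - 180.0   # wrapped phase
print("wrapped phase:", phase, "deg")                      # -126 deg
print("apparent velocity:", pc_velocity(phase, venc), "cm/s")  # -70 cm/s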
this pc mra image is not only dependent on the velocity of the spins, but also on the signal amplitude in both acquisitions; thus, every pc mra data set always has an overlaid tof contrast (fig. . . ). an advantage of pc mra is the fact that the signal background of the surrounding stationary tissue is almost completely suppressed, and vessels can be traced further into the vascular periphery than with tof mra, with comparable measurement parameters. in pc mra, often not only the velocity in one spatial direction is encoded, but in all three directions. since separate velocity-encoded acquisitions have to be performed for each direction, the measurement time of a pc mra is two- to fourfold longer than that of a tof mra. a careful selection of the venc value is especially important in pc mra. if, e.g., the maximum velocity in the imaging slice is twice the venc value, a phase shift of 360° (or 0°, which cannot be distinguished) is created. under these conditions, the velocity-encoded and the velocity-compensated acquisition have the same phase, and no pc mra signal would be observable. as blood does not flow with a constant velocity and velocity values can be reduced by pathologies (aneurysms) or increased (stenoses), the optimum choice of venc value is often difficult. because pc mra is more time-consuming, is susceptible to artifacts, and suffers from the same signal saturation as tof mra, it is rarely used in clinical routine.

with tof and pc mra techniques, the blood motion is used to create a signal difference between the vessel lumen and the surrounding tissue, whereas contrast-enhanced mra utilizes the reduction of the longitudinal relaxation time t1 after administration of a contrast agent. when a contrast agent is injected, the t1 of blood is shortened from t1,blood = . s (for b0 = . t) to less than ms during the first bolus passage (first pass). the relaxation rate R1 (i.e., 1/t1) is a function of the local contrast agent concentration:

R1(c) = R1,0 + r1 · c,

where R1,0 is the relaxation rate without contrast agent. the proportionality constant between contrast agent concentration c and the change in relaxation rate is called the relaxivity r1. the relaxivity is different for each contrast agent; typical values range from to l mmol⁻¹ s⁻¹. in general, high relaxivities are desirable because lower contrast-agent concentrations are needed to achieve the same change in image contrast. to enhance the signal in the contrast agent bolus and to suppress the signal background from static tissue, heavily t1-weighted spoiled gradient-echo sequences (flash) with very short repetition times (tr < ms) and high flip angles (α = °- °) are used (fig. . . ). the use of short trs is advantageous because very short acquisition times of only a few seconds can be achieved even for the acquisition of a complete 3d data set. these short acquisition times are needed because the contrast agent is progressively diluted during the passage, which reduces the vessel-to-background contrast. short acquisition times are also favorable because mra data sets can thus be acquired in a single breath hold; for this reason, contrast-enhanced mra techniques are especially suited for abdominal applications (prince ; sodickson and manning ). to ensure isotropic visualization of the vascular territories, typically 3d techniques are used for data acquisition.
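to get a feeling for the magnitude of this t1 shortening, the following sketch evaluates the relation R1(c) = R1,0 + r1 · c for a few concentrations; the native t1 of blood and the relaxivity are assumed, illustrative values.

# t1 shortening by a gadolinium contrast agent: R1 = R1_0 + r1 * c
# (all numbers are assumed illustrative values; r1 in l/(mmol*s), c in mmol/l)
t1_blood = 1.2          # s, native t1 of blood (assumed)
r1 = 4.5                # relaxivity (assumed)
for c in (0.0, 0.5, 1.0, 5.0):
    rate = 1.0 / t1_blood + r1 * c
    print(f"c = {c:4.1f} mmol/l -> t1 = {1000.0/rate:6.1f} ms")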
conventional 3d techniques have measurement times of several minutes, so that even with short repetition times larger parts of the k-space data are acquired after the contrast agent concentration has fallen to levels where only a weak signal enhancement is observable. for this reason, the measurement times are reduced using partial k-space sampling, parallel imaging, and view sharing between subsequent 3d data sets (sodickson and manning ; wilson et al. ; goyen ).

in general, contrast agents can be categorized into extracellular agents that can leave the blood stream and intravascular agents that are specifically designed to remain in the vascular system. historically, the first approved mr contrast agent was gd-dtpa (gadopentetate dimeglumine, magnevist, schering, germany), an extracellular agent, which has the paramagnetic gd3+ ion as the central atom in an open-chain ionic complex (chelate). over the years, several similar extracellular agents such as gd-bt-do3a (gadovist, schering), gd-dota (dotarem, guerbet, france), gd-dtpa-bma (omniscan, ge healthcare), gd-hp-do3a (prohance, bracco imaging, italy), and gd-bopta (multihance, bracco, italy) have been approved for clinical use, which only slightly differ in the stability of the gd chelates, pharmacokinetic properties, and safety profiles. in general, the most recently approved contrast agents have higher relaxivities and thus allow acquiring mra data sets with higher contrast at the same dose or with similar contrast at lower dose. only recently, the first intravascular contrast agent gadofosveset trisodium (vasovist, schering) has been approved for clinical use in europe (goyen ). this molecule has a diphenylcyclohexyl group covalently bound to a gd complex, which creates a reversible, non-covalent binding of the molecule to serum albumin that significantly prolongs the half-life of the agent in blood to about h. after injection of the agent, at first a more rapid decline of concentration is observed, because the fraction of the contrast agent bound to albumin is dependent on the contrast agent concentration; thus, a steady-state concentration is established after the unbound fraction is renally excreted.

fig. . . phase-contrast images encoding flow in head-foot (top) and left-right (center) direction, and phase contrast mra image (bottom). in the flow images, a velocity-sensitive and a velocity-compensated data set are subtracted, whereas the pc mra image is generated by complex subtraction of the respective signal amplitudes. note that the pc mra image has nearly no background signal from static tissue

both extracellular and intravascular agents can be imaged during the first pass of the contrast agent, when a high vessel-to-background contrast is present, whereas intravascular contrast agents additionally allow angiographic imaging during the subsequent steady state. the t1 shortening is dependent on the contrast agent concentration, which decreases within a few seconds after infusion of the contrast agent, as the contrast agent bolus in the blood is increasingly diluted and, for the extracellular agents, the contrast agent is extravasated. therefore, contrast-enhanced mra techniques usually use pulse sequences with very short acquisition times (ta < s). the short passage time of the contrast agent bolus of a few seconds requires that imaging be precisely synchronized with the contrast agent infusion.
the transit time of the bolus from the point of injection (usually a vein in the arm) through to the vascular target structure (e.g., the renal arteries) varies significantly with the heart rate and cardiac output and can be difficult to predict. therefore, various synchronization and acquisition techniques have been proposed for a reliable mra data acquisition: an automatic technique to start the 3d mra data acquisition (smartprep™, general electric) uses a fast pulse sequence before the 3d mra, which continuously acquires the signal in the vascular target region. after administration of the contrast agent, this signal exceeds a certain signal threshold, and the 3d mra acquisition is automatically started. if the signal threshold is selected too low, image noise can mimic a bolus arrival and the measurement is triggered too early, whereas a too high value of the threshold can lead to an omission of the data acquisition. with the test bolus technique, a small bolus of a few milliliters of the contrast agent is infused, and the passage of the bolus is imaged near the target vessel with a fast time-resolved 2d mr measurement (earls et al. ). the trigger delay td for the subsequent 3d measurement is then calculated from the transit time of the bolus tt and the acquisition time of the 3d mra ta as: td = tt - necho × ta. here, necho denotes the fraction of ta before the center of k-space is acquired (fig. . . ). as in the automatic start of the sequence, fluoroscopic previews (carebolus™, siemens medical solutions) image the contrast bolus during its passage; however, here fast 2d sequences are used with real-time image reconstruction and display (riederer et al. ; wilman et al. ). once the bolus has reached the target region, the operator of the mr scanner manually switches to the predefined 3d mra pulse sequence, which is then executed with minimal time delay.

fig. . . signal intensity as a function of t1-contrast in a spoiled gradient-echo pulse sequence (flash). at high flip angles and short repetition times the signal from tissue (t1 > ms at . t) is nearly completely saturated, whereas a high intraluminal signal is seen due to the high concentrations of the contrast agent

• multiphase mra: time-resolved mra has been increasingly used to completely avoid manual or automated synchronization. multiphasic acquisitions consecutively acquire 3d mra data sets during the bolus passage so that the optimal vessel contrast is obtained in at least one of the data sets. various methods of measurement acceleration are combined to ensure adequate temporal resolution; these include parallel imaging, asymmetric k-space readout, and temporal data interpolation (korosec et al. ; fink et al. ). nevertheless, time-resolved mra data sets are usually of a lower spatial resolution than are optimally acquired mra data with bolus synchronization (fig. . . ).

artifacts arise if the 3d mra data acquisition is not perfectly synchronized with the bolus passage. the appearance of these artifacts depends on the relative timing of the k-space acquisition and the concentration-time curve of the contrast agent. if the bolus arrives too late in the target vessel, then the center of the k-space has already been sampled and large structures, such as the interior of the blood vessel, appear dark, whereas fine structures, such as vessel margins, have a high signal if the bolus arrives during sampling of the k-space periphery.
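the test bolus timing formula can be turned into a small helper; the transit time, the acquisition time, and the k-space ordering fraction (a hypothetical parameter name mirroring necho above) are assumed example values.

def trigger_delay(transit_time, ta, f_center):
    """start delay for the 3d mra so that the k-space center coincides
    with the bolus peak: td = tt - f_center * ta, where f_center is the
    fraction of the acquisition time elapsed before the k-space center."""
    return transit_time - f_center * ta

# assumed example: 18 s transit time, 20 s scan; centric ordering samples the
# k-space center right at the start (f_center ~ 0), linear ordering at 0.5
print("centric ordering:", trigger_delay(18.0, 20.0, 0.0), "s")   # 18 s
print("linear ordering: ", trigger_delay(18.0, 20.0, 0.5), "s")   # 8 s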
if the data acquisition is started too late, then the bolus has already reached the target vessel and the contrast has partly disappeared; thus, the signal is significantly reduced compared with an optimally synchronized data acquisition (maki et al. ; wilman et al. ; svensson et al. ). another disadvantage of suboptimal bolus timing is the fact that the bolus may have passed from the arterial to the venous system in some vascular regions (e.g., the extremities), so that both veins and arteries are seen in the images. this venous contamination makes the interpretation of the image data difficult in cases where arterial and venous vessels are parallel to each other.

the variation in contrast agent concentration over time also results in a reduction in the achievable image resolution (fain et al. ). the spatial resolution of an mr image sampled with cartesian data acquisition is always uniquely defined by the measured number of k-space lines. with increasing number of k-space lines (i.e., larger k-space coverage), finer image details are encoded in the image. this so-called nyquist sampling theorem only applies if the signal intensity is constant during data acquisition. even with perfect synchronization with the contrast agent bolus, the contrast agent concentration is only optimal during acquisition of the central k-space lines. later on, the concentration is reduced and the peripheral k-space regions are acquired with significantly reduced signal intensity (fig. . . ). this different weighting of the k-space regions results in a reduction in spatial resolution (blurring), which is mathematically described by the point-spread function, psf. the psf is the image of a point object; for linear imaging systems, it is used to describe the imperfections of the image acquisition system. the psf depends on the acquisition time, the contrast agent dynamics, and the measurement parameters of the pulse sequence. the deviation from an ideal psf is particularly visible in those spatial directions that are acquired with the lowest sampling velocity. in conventional 3d acquisition, this is either the phase encoding direction or the partition encoding direction.

fig. . . test bolus measurement in the heart (blue) and the aorta (red) of a patient. for an optimal visualization of the aortic arch, a time delay of s is required between contrast agent injection and the acquisition of the central k-space lines

to eliminate this asymmetry and to evenly distribute the blurring in both spatial directions, elliptical scanning of the phase and partition encoding steps has been proposed (bampton et al. ), where the encoding steps are acquired along an elliptical path, starting from the center of the k-space. using the signal-time curve of a test bolus, the signal variation during data acquisition can be avoided. for this purpose, an injection scheme is calculated for the signal-time curve using linear system theory, where the injection rate is modulated such that there is a constant contrast agent concentration (i.e., an ideal psf) in the target region throughout the data acquisition. this technique requires a programmable contrast-agent injector and additional computations, and the constant concentration can only be achieved in a limited target volume. another option for reducing the intensity changes is to induce a blood flow stasis for a brief period after contrast agent inflow.
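the blurring caused by a decaying bolus can be made concrete with a small 1d toy model; the decay constant, matrix size, and centric line ordering below are assumptions chosen purely for illustration, not measured contrast agent dynamics.

import numpy as np

# 1d toy model of psf blurring: k-space lines read later see less gadolinium,
# so k-space is multiplied by a decaying weighting; the fourier transform of
# that weighting is the point-spread function.
n = 256
ky = np.arange(n) - n // 2
order = np.argsort(np.abs(ky))              # centric ordering: center first
acq_time = np.empty(n)
acq_time[order] = np.linspace(0.0, 1.0, n)  # normalized readout time per line
weights = np.exp(-6.0 * acq_time)           # assumed strong bolus decay
psf = np.abs(np.fft.fftshift(np.fft.ifft(np.fft.ifftshift(weights))))
print("relative peak amplitude:", round(psf.max(), 3))       # < 1: signal loss
print("fwhm in pixels:", int(np.sum(psf > psf.max() / 2)))   # > 1: blurring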
this can be achieved in the peripheral blood vessels, without endangering the patient, by using an inflatable cuff that temporarily blocks the blood flow during data acquisition; this technique has been successfully applied to contrast-enhanced mra studies of the hand (zhang et al. ) and the legs (zhang et al. ; vogt et al. ).

in addition to reducing t1, an mr contrast agent always also reduces t2 (and t2*). the reduction in t2* can lead to a significant signal reduction in the mra image at high contrast agent concentrations; often the venous vessels through which the bolus is infused are seen dark in the mra data sets (albert et al. ). to avoid these artifacts, the contrast agent concentration or the echo time te can be reduced. besides reducing t2*, the contrast agent also causes a concentration-dependent resonance frequency shift. radial or spiral k-space data acquisitions are especially susceptible to these frequency shifts that cause blurring artifacts, which can be compensated using dedicated off-resonance correction algorithms. contrast-enhanced mra studies are also susceptible to artifacts known from tof mra. in particular, pulsation artifacts are visible in contrast-enhanced measurements (al-kwifi et al. ). intra-voxel dephasing is observed, although the effect is much lower due to the shorter tes used here. to keep the acquisition time short, flow compensation is generally not integrated into contrast-enhanced mra pulse sequences, because the additional gradients significantly prolong the measurement time.

contrast-enhanced mra offers substantial advantages over tof or pc mra, because the saturation effects seen in tof mra are almost completely avoided. thus, extended vascular structures, for example in the abdomen or the extremities, can be visualized with a few slices oriented parallel to the vessel. the short acquisition times of contrast-enhanced mra allow breath-held acquisitions (ta < s), which significantly reduces motion artifacts. the dynamic information of multiphase 3d mra provides information about vascular anatomy, flow direction (e.g., in aortic aneurysms), tissue perfusion (e.g., in the kidney), and vascular anomalies, which might not be visible on a single mra data set.

using intravascular agents, the contrast agent concentration in the blood is maintained over time spans of minutes to hours. in general, the same pulse sequences can be applied for mra with intravascular contrast agents as with the extracellular contrast agents, as they share the same contrast mechanism; however, as the concentration of intravascular contrast agent in blood attains an equilibrium state after a few re-circulations (typically - s), mra data sets can also be acquired over longer acquisition times. this prolonged acquisition window can be used to increase the image resolution, because an ideal psf can be achieved and, thus, no blurring should be present (van bemmel et al. ; grist et al. ). because data acquisition does not have to be synchronized with the contrast agent bolus, data acquisition can be started once the contrast agent concentration has reached equilibrium. the contrast agent injection does not need to be performed with a contrast agent pump, but can be infused manually via a venous access port even prior to the mr examination. with intravascular contrast agents, the acquisition time ta is not limited by the transit time of the bolus, and data sets can be acquired over much longer acquisition periods.
with longer acquisition times, special trigger and gating techniques (ecg triggering, respiratory gating, navigator echoes [ahlstrom et al. ]) are required to suppress motion artifacts. if image data are acquired in the equilibrium phase, venous overlay is a fundamental problem in mra with intravascular contrast agents (fig. . . ). venous contamination particularly hampers images that are calculated with a projection technique such as the maximum intensity projection (mip), because the depth information is lost. a separation of arterial and venous vessels is possible with the help of dedicated post-processing software. for this purpose, a region in an arterial vessel is identified and a region-growing algorithm is used to find all connected regions. unfortunately, when arteries and veins are in close proximity, the algorithm may artifactually connect arterial to venous vessels. in equilibrium-phase mra data sets of intravascular contrast agents, these artifacts are easier to avoid than in first-pass studies because of the higher spatial resolution. nevertheless, for direct arteriovenous connections or shunts, a manual correction of the segmentation is always required (van bemmel et al. ; svensson et al. ).

the use of intravascular contrast agents is not limited to the equilibrium phase, but can be combined with a first-pass study during the initial contrast agent injection to obtain both the dynamics of the contrast agent passage as well as the vascular morphology (grist et al. ). additionally, the dynamic information can be utilized to separate arteries from veins in the high-resolution equilibrium-phase 3d mra data sets (bock et al. ). although the long half-life of the intravascular contrast agent is advantageous for intraluminal studies, it can become a problem if a dynamic study has to be repeated. with extracellular contrast agents, this is possible within a few minutes, whereas after a study with an intravascular contrast agent one may have to wait up to several hours. in practice, intravascular contrast agents are still advantageous for mra, as they allow combining high spatial resolution in the equilibrium phase with dynamic information during first passage. additionally, these contrast agents can be used to quantify perfusion (prasad et al. ) or to delineate vessels during mr-guided intravascular procedures (wacker et al. ; martin et al. ).

various techniques for mr angiography and mr flow measurements exist that make use of the different physical properties of blood: flow, pulsation, or signal variation following administration of contrast agent. tof mra is often used in anatomical regions where a high inflow is present, and long measurement times can be tolerated.

fig. . . surface rendering (left) and mip (right) visualization of an abdominal mra with an intravascular contrast agent. the three-dimensional character of the data set is better captured with the surface display, whereas the finer details are better visualized on the mip. the presence of venous signal makes the interpretation of the data more difficult; however, a significantly higher spatial resolution can be achieved with intravascular contrast agents

phase-contrast flow measurements provide a quantitative assessment of blood flow when combined with cardiac triggering. contrast-enhanced studies are favorable in abdominal regions and the periphery, where saturation effects are a limiting factor for tof mra.
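the artery/vein separation described above relies on connectedness; a minimal region-growing sketch on hypothetical toy data (6-connectivity, binary vessel mask) illustrates the principle and its failure mode: anything touching the seeded vessel is swept into the same region.

from collections import deque
import numpy as np

def region_grow(mask, seed):
    """collect all voxels of a binary vessel mask connected to the seed
    (6-connectivity); a minimal sketch of artery/vein separation by region
    growing, assuming arteries and veins do not touch in the mask."""
    grown = np.zeros_like(mask, dtype=bool)
    queue = deque([seed])
    while queue:
        z, y, x = queue.popleft()
        if (0 <= z < mask.shape[0] and 0 <= y < mask.shape[1]
                and 0 <= x < mask.shape[2]) and mask[z, y, x] and not grown[z, y, x]:
            grown[z, y, x] = True
            queue.extend([(z+1,y,x), (z-1,y,x), (z,y+1,x),
                          (z,y-1,x), (z,y,x+1), (z,y,x-1)])
    return grown

# toy volume with two separate "vessels"; growing from a seed in one of them
vol = np.zeros((3, 3, 8), dtype=bool)
vol[1, 1, 0:3] = True      # artery-like segment
vol[1, 1, 5:8] = True      # vein-like segment, not connected
artery = region_grow(vol, (1, 1, 0))
print("voxels in grown region:", int(artery.sum()))   # 3, vein untouched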
intravascular contrast agents further extend the capabilities of contrast-enhanced mra because high-resolution data sets can be acquired over an extended time.

diffusion in the context of diffusion-weighted mri or diffusion tensor imaging (dti) refers to the stochastic thermal motion of molecules or atoms in fluids and gases, a phenomenon also known as brownian motion. this motion depends on the size, the temperature, and in particular on the microscopic environment of the examined molecules. diffusion measurements can therefore be used to derive information about the microstructure of tissue. in mri, stochastic molecular motion can be observed as signal attenuation. this was first recognized in the nmr spin-echo experiment by e.l. hahn (hahn ), long before the invention of actual magnetic resonance imaging. a number of more sophisticated experiments were described in the following years that allowed the quantitative measurement of the diffusion coefficient (carr and purcell ; torrey ; woessner ). of particular importance is the pulsed-gradient spin-echo (pgse) technique proposed by stejskal and tanner (stejskal and tanner ), which is described in detail in sect. . . . diffusion as an imaging-contrast mechanism was first incorporated into mri pulse sequences (taylor and bushell ; merboldt et al. ) and applied in vivo (lebihan et al. ). its great potential for clinical mri became evident around 1990, when diffusion-weighted images were recognized to be extremely valuable for the early detection of stroke (moseley et al. ; chien et al. ). areas of focal cerebral ischemia appear hyperintense in diffusion-weighted images only minutes after the onset of symptoms (see also chap. , sect. . ). having thus gained publicity, diffusion-weighted imaging was evaluated in many other applications such as the characterization of brain tumors (tien et al. ; sugahara et al. ; okamoto et al. ) and of multiple sclerosis lesions (cercignani et al. ; filippi and inglese ), but none of these reached the clinical significance of stroke diagnosis. mainly due to limitations of image quality, there are considerably fewer publications about diffusion-weighted imaging outside the central nervous system. examples of these are studies with the purpose of differentiating osteoporotic and malignant vertebral compression fractures (baur et al. ; herneth et al. ) or benign and malignant lesions of the liver (moteki et al. ; taouli et al. ) and the kidneys (cova et al. ).

molecular diffusion is a three-dimensional process and is, depending on the tissue microstructure, in general anisotropic, i.e., the extent of molecular motion depends on spatial orientation. a physical quantity called the diffusion tensor is required to fully describe anisotropic diffusion. mri techniques to measure the diffusion tensor have been introduced in the 1990s (basser et al. ; pierpaoli et al. ) and gained considerably more popularity when tracking algorithms were proposed for three-dimensional reconstruction of white matter fiber tracts (mori et al. ; conturo et al. ). today, diffusion tensor imaging is a valuable research tool with applications, e.g., in neurodevelopment (snook et al. ), neuropsychiatry (taber et al. ; sullivan and pfefferbaum ), or aging (moseley ; sullivan and pfefferbaum ; sullivan et al. ). all molecules in fluids or gases perform microscopic random motions.
this motion is called molecular diffusion or brownian motion after robert brown, who observed a minute motion of plant pollen floating in water (brown ). these pollens were constantly hit by fast-moving water molecules, resulting in a visible irregular motion of the much larger particles. due to brownian motion, a tracer such as a droplet of ink given into water will diffuse into its surroundings, resulting in spatially and temporally varying tracer concentrations, until the ink is diluted homogeneously in the water. however, brownian molecular motion does not require concentration gradients, but occurs also in fluids consisting of only a single kind of molecule. the molecules of any arbitrary droplet of water within a larger water reservoir will stochastically disperse into their surroundings; this process is called diffusion or, to emphasize that the observed molecules do not diffuse into an external medium, self-diffusion. it should be noted that diffusion always refers to a stochastic and not directed motion and is strictly to be distinguished from any kind of directional flow of a liquid.

the molecules in fluids or gases perform random motions due to their thermal kinetic energy, e_kin, which is proportional to the temperature, t: e_kin = (3/2) k t (k = 1.38 × 10⁻²³ j/k is the boltzmann constant). this energy corresponds to a mean velocity v = √(2 e_kin / m) for a molecule of mass m; in the case of water at room temperature (t = k), the mean velocity is about m/s. strictly speaking, this is the square root of the mean value of the squared velocity, i.e., √(mean(v²)), which differs by a factor of √(3π/8) ≈ 1.09 (about 9%) from the actual mean velocity due to the asymmetry of the maxwell distribution. due to frequent collisions with other particles, however, molecules do not move linearly in a certain direction but follow a random course that can be visualized in a random-walk simulation as shown in fig. . . a. this figure also demonstrates that, macroscopically, the mean displacement or diffusion distance, s, after a time t is much more interesting than is the linear velocity of the molecule. the mean diffusion distance of a particle is proportional to the square root of the diffusion time t and is governed by the diffusion coefficient d: s ∝ √(d · t). this relation is shown in fig. . . b for a water molecule with a diffusion coefficient of d = . × - mm /s at a temperature of °c. since diffusion is a stochastic process, the diffusion distance after the time t is not the same for all molecules but is described by a gaussian probability distribution as illustrated in fig. . . . as shown in this illustration, after a diffusion time t most molecules are still found at or close to their original position; the diffusion distance s corresponds to the standard deviation of the shown distributions. typical diffusion distances for free water molecules at room temperature are about µm after a diffusion time of ms and µm after s.

in contrast to free diffusion in pure water (fig. . . a), the water molecules in tissue cannot move freely, but are hindered by the cellular tissue structure, in particular by cell membranes, cell organelles, and large macromolecules, as shown schematically in fig. . . b. due to additional collisions with these obstacles, the mean diffusion distance of water molecules in tissue is reduced compared to that of free water, and a decreased effective diffusion coefficient, called the apparent diffusion coefficient (adc), is found in tissue.
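the square-root growth of the diffusion distance can be checked with a small random-walk simulation; the molecule count, step count, and step size are arbitrary, illustrative choices.

import numpy as np

rng = np.random.default_rng(0)

def random_walk_msd(n_mol, n_steps, step):
    """mean squared displacement of a 3d random walk: each molecule takes
    isotropic gaussian steps, and the msd grows linearly with the number
    of steps, i.e., the diffusion distance grows with sqrt(time)."""
    pos = np.zeros((n_mol, 3))
    msd = []
    for _ in range(n_steps):
        pos += rng.normal(0.0, step, size=pos.shape)
        msd.append(np.mean(np.sum(pos**2, axis=1)))
    return np.array(msd)

msd = random_walk_msd(n_mol=5000, n_steps=100, step=1.0)
# linear msd growth: after 10x more steps the msd is ~10x larger, so the
# mean distance itself grew only by a factor of ~sqrt(10)
print(round(msd[99] / msd[9], 1))   # ~10.0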
obviously, the adc depends on the number and size of obstacles and therefore on the cell types that compose the tissue. hence, diffusion properties can be used to distinguish different types of tissue. examples of diffusion coefficients in different tissues and in fluids at different temperatures are summarized in table . . . not only does the number and size of organelles influence diffusion, but also the geometrical arrangement of the cell membranes. in particular, the diffusion of water molecules can reflect an anisotropic arrangement of cells as indicated in fig. . . c. since cell membranes are barriers for diffusing molecules, water diffuses more freely along the long axis of the cell than perpendicular to it (beaulieu ) . hence, the adc measured in the direction parallel to the cellular orientation will be greater than that measured in an orthogonal direction. this property, the dependence of a quantity on its orientation in space, is called anisotropy. it has proven useful to illustrate the diffusion properties by spheres for isotropic diffusion and by three-dimensional ellipsoids for anisotropic diffusion as shown in fig. . . d-f. these shapes visualize the probability density function of diffusion distances in space. isotropic diffusion can be completely described by its (apparent) diffusion coefficient, d, which corresponds to the radius of the sphere (fig. . . d,e) . more quantities are required for a complete description of anisotropic diffusion, e.g., three angles that define the orientation of the ellipsoid in space, and the length of the three principal axes describing the magnitude of the diffusion coefficients. in physics or mathematics, a quantity that corresponds to such a three-dimensional ellipsoid is called a tensor. the physical object called tensor can also be explained by comparing it to more commonly known objects such as scalars and vectors. a scalar is a quantity that can be measured or described as a single number; typical examples are the temperature, the mass, or the density of an object. in imaging, the image intensity, e.g., in a t -weighted image, is a scalar: a single number is required for each pixel to describe the intensity. as demonstrated in the last example, scalars can be spatially dependent and be visualized as intensity maps; another example is a temperature map of an object that describes the temperature as scalar quantity for each spatial position of an object. other physical quantities cannot be described by a single number, such as the velocity or acceleration of a particle in space or the flow of a liquid. these quantities are vectors and require both a direction in space and a magnitude to be fully described. a vector is typically visualized as an arrow. for example, in the case of velocity, the direction of the arrow describes the direction of motion and the length of the arrow represents the magnitude of the vector, e.g., as measured in meters/second. such an arrow can be mathematically described by three independent numbers: either by its length and two angles defining its orientation or by three coordinates (x-, y-, and z-component of the vector). these coordinates are often presented as a column or row vector, e.g., v = (vx vy vz). vectors as well as scalars can depend on the spatial position; a flowing liquid can be described by a velocity vector at each position. a full data set consisting of a vector (i.e., an arrow) at each point in space is called a vector field. 
some quantities such as the molecular diffusion cannot be fully described as scalars or vectors; they are tensors. as mentioned above, the diffusion properties can be depicted by a three-dimensional ellipsoid and therefore require six independent numbers to define the direction and length of all axes. these six values are visualized in fig. . . as the three lengths of the axes defining the shape of the ellipsoid and the three angles describing its orientation. however, instead of using angles, tensors can equally well be described by six coordinates arranged in a symmetric 3 × 3 matrix, in analogy to the three coordinates of a vector. these coordinates are called dxx, dyy, dzz, dxy, dxz, and dyz and form the matrix representation

d = ( dxx dxy dxz
      dxy dyy dyz
      dxz dyz dzz ).

this matrix is called symmetric because the elements are mirrored at the diagonal. of these matrix elements, only the diagonal elements, dxx, dyy, dzz, can be measured directly in mri and correspond to the diffusion in the x-, y-, and z-directions; the off-diagonal elements must be determined indirectly from further measurements as described in sect. . . . in the case of isotropic diffusion, i.e., if the ellipsoid is a sphere, then this matrix has a very simple form, because a single diffusion coefficient suffices to describe the diffusion. this diffusion coefficient is found on the diagonal of the matrix, and all off-diagonal elements are zero:

d = ( d 0 0
      0 d 0
      0 0 d ).

the diffusion tensor has some properties that are important to understand in order to measure and interpret diffusion imaging data. the mean diffusivity, i.e., the diffusion coefficient averaged over all spatial orientations, can be derived from the trace of the diffusion tensor, i.e., the sum of its diagonal elements: d̄ = trace(d)/3 = (dxx + dyy + dzz)/3. an mri measurement of the mean adc is therefore also called trace imaging. to analyze the non-isotropic properties of the diffusion tensor, a process called diagonalization of the tensor is used. the meaning of tensor diagonalization can be visualized as finding the three axes (i.e., their length and orientation) that define the ellipsoid in fig. . . . mathematically, the tensor matrix is transformed into a form where all off-diagonal elements are zero:

d′ = ( d1 0 0
       0 d2 0
       0 0 d3 ).

since six parameters are still required to fully describe the tensor, in addition to the three diagonal elements three vectors, v1, v2, v3, are determined, which are called eigenvectors. the eigenvectors, which are always orthogonal and have unit length, define the orientation of the ellipsoid and are shown as thick grey arrows in fig. . . d. the ratios of the diffusion eigenvalues describe the isotropy or anisotropy of diffusion. in the case of isotropic diffusion, all eigenvalues are the same, d1 = d2 = d3, and diffusion is represented by a sphere; see fig. . . a. if the largest eigenvalue is much greater than the two other eigenvalues, d1 >> d2 ≈ d3, then the tensor is represented by a cigar-like shape as in fig. . . b. in this case, diffusion in one direction is much less hindered than in the other directions and is sometimes called linear diffusion; this is typically found in white matter fiber tracts, where the motion of water molecules is restricted by the cell membranes and the glial cells perpendicular to the fiber tract orientation. the orientation of the fiber tracts is described by the eigenvector v1 belonging to the large eigenvalue, d1. if two large eigenvalues are much greater than the third one, d1 ≈ d2 >> d3, then the diffusion tensor is represented by a pancake-like shape; see fig. . . c.
this tensor corresponds to preferred diffusion within a two-dimensional plane, which can occur in layered structures and is referred to as planar diffusion. in order to describe the diffusion anisotropy quantitatively, several anisotropy indices have been introduced to reduce the diffusion tensor to a single number, i.e., a scalar, measuring the anisotropy.

fig. . . tensor visualized as three-dimensional ellipsoid: six independent numbers are required to define a tensor, three lengths (eigenvalues), d1, d2, d3, corresponding to the length of the principal axes of the ellipsoid (shown three-dimensionally in a and in two-dimensional sections b), and three angles, α, β, γ, describing the spatial orientation of the axes (c,d). the eigenvectors of the tensor are shown as thick gray arrows in d

most frequently used is the fractional anisotropy (fa), defined as

fa = √(3/2) · √((d1 - d̄)² + (d2 - d̄)² + (d3 - d̄)²) / √(d1² + d2² + d3²),

where d̄ = (1/3)(d1 + d2 + d3) is the mean diffusivity. the fractional anisotropy ranges from 0 (isotropic diffusion) to 1 (maximum anisotropy) and can be interpreted as the fraction of the magnitude of the tensor that can be ascribed to anisotropic diffusion. a similar index is the relative anisotropy (ra), defined as

ra = √((d1 - d̄)² + (d2 - d̄)² + (d3 - d̄)²) / (√3 · d̄).

the relative anisotropy is the magnitude of the anisotropic part of the tensor divided by its isotropic part and ranges from 0 (isotropy) to √2 ≈ 1.41 (maximum anisotropy). in order to scale the maximum value of the ra to 1 as well, a normalized (or scaled) definition with an additional factor of 1/√2 is sometimes used (and often called ra as well): nra = ra/√2. a less frequently used index of anisotropy is the volume ratio (vr), vr = (d1 · d2 · d3)/d̄³. all these anisotropy indices can be used to describe the diffusion anisotropy (kingsley and monahan ), but fractional anisotropy may be considered the preferred index that is currently most frequently used. some typical values of these indices are compared in table . . ; fa, ra, nra, and vf start with 0 for isotropic diffusion and increase with increasing anisotropy. the volume ratio is 1 in the case of isotropic diffusion and decreases with increasing anisotropy.

to introduce diffusion weighting in mri pulse sequences, today almost exclusively a technique proposed by stejskal and tanner (stejskal and tanner ) is used. the basic idea is to insert additional gradients (usually referred to as diffusion gradients) into the pulse sequence in order to measure the stochastic molecular motion as signal attenuation. originally, these were two identical gradients on both sides of the refocusing 180° rf pulse of a spin-echo sequence: the so-called pulsed-gradient spin echo (pgse) technique. however, to simplify the explanation, we will replace this scheme with two gradients with opposite signs that do not require a 180° pulse in between, as shown in fig. . . . the contrast mechanism is the same for both gradient schemes. as illustrated in fig. . . , the diffusion gradients superpose a linear magnetic field gradient over the static field, b0. since the larmor frequency of the spins is proportional to the magnetic field strength, spins at different positions now precess with different larmor frequencies and, thus, become dephased. if the spins are stationary (no diffusion, i.e., diffusion coefficient d = 0) and remain at their position, the second diffusion gradient with opposite sign exactly compensates the effect of the first one and rephases the spins. hence, without diffusion, the signal after the application of the pair of diffusion gradients is the same as before (neglecting relaxation effects).
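returning to the tensor quantities defined above: mean diffusivity, fa, and ra can be computed directly from the six tensor coefficients via an eigendecomposition. the example tensor below is an assumed, strongly anisotropic "fiber-like" case chosen for illustration.

import numpy as np

def tensor_metrics(d6):
    """mean diffusivity, fa, and ra from the six tensor coefficients
    (dxx, dyy, dzz, dxy, dxz, dyz); eigendecomposition via numpy."""
    dxx, dyy, dzz, dxy, dxz, dyz = d6
    d = np.array([[dxx, dxy, dxz],
                  [dxy, dyy, dyz],
                  [dxz, dyz, dzz]])
    evals = np.linalg.eigvalsh(d)                  # d1, d2, d3
    md = evals.mean()                              # trace(d) / 3
    num = np.sqrt(((evals - md) ** 2).sum())
    fa = np.sqrt(1.5) * num / np.sqrt((evals ** 2).sum())
    ra = num / (np.sqrt(3.0) * md)
    return md, fa, ra

# assumed example: anisotropic "fiber" tensor in units of 10^-3 mm^2/s
md, fa, ra = tensor_metrics((1.6, 0.4, 0.4, 0.0, 0.0, 0.0))
print(f"md = {md:.2f}, fa = {fa:.2f}, ra = {ra:.2f}")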
in the case of diffusing spins, the second diffusion gradient cannot completely compensate the effect of the first one, since spins have moved between the first and second gradient. the additional phase the spins gained during the first diffusion gradient is not reverted during the second one. consequently, rephasing is incomplete after the second diffusion gradient, resulting in diffusion-dependent signal attenuation. as can be deduced from this explanation, the signal attenuation is larger if the diffusivity, i.e., the mobility of the spins, is larger. quantitatively, the signal attenuation depends exponentially on the diffusion coefficient, dg, in the direction defined by the diffusion gradient gd:

s(dg, b) = s · exp(-b · dg),

where s is the original (unattenuated) signal and s(dg, b) is the attenuated diffusion-weighted signal. the b-value, b, is the diffusion weighting that plays a similar role for diffusion-weighted imaging as the echo time for t2-weighted imaging: the diffusion contrast, i.e., the signal difference between two tissues with different adcs, is low at small b-values and can be maximized by choosing the optimal b-value as discussed below. the b-value is expressed in units of s/mm² and depends on the timing and the amplitude, gd, of the diffusion gradients:

b = γ² · gd² · δ² · (∆ - δ/3).

as illustrated in fig. . . , δ is the duration of each diffusion gradient, and ∆ is the interval between the onsets of the gradients; γ is the gyromagnetic ratio of the diffusing spins. a typical b-value used for diffusion-weighted imaging of the brain is 1,000 s/mm²; for other applications b-values range between s/mm² (dark blood liver imaging) and about , s/mm² (imaging of the diffusion q-space [assaf et al. ; wedeen et al. ]). to obtain b-values of about , s/mm², diffusion gradients are required to be much longer (e.g., δ = ms) and have larger amplitudes (e.g., gd = mt/m) than normal imaging gradients applied in mri; hence, diffusion-weighted imaging can be demanding for the gradient amplifiers and is often acoustically noisy. the formula for the b-value given above is valid only for a pair of stejskal-tanner diffusion gradients. the diffusion weighting of arbitrary time-dependent diffusion gradient shapes, gd(t), applied between t = 0 and t = t, can be calculated according to (stejskal and tanner ) as

b = γ² ∫0^t ( ∫0^t′ gd(t″) dt″ )² dt′.

by applying diffusion gradients, diffusion-weighted images can be acquired in which the signal intensity depends on the adc, e.g., structures with large adc such as liquids appear hypointense. to quantify the adc, at least two diffusion-weighted measurements with different diffusion weightings (i.e., different b-values) are required, as shown in fig. . . . by determining the signal intensity at the lower b-value, s(b1), and the higher b-value, s(b2), the adc can be calculated as

adc = ln( s(b1) / s(b2) ) / (b2 - b1).

this can be done either for the mean signal intensities in a region of interest or pixel by pixel in order to calculate an adc map as in fig. . . . the adc can also be calculated from more than two b-values by fitting an exponential to the measured signal intensities or by linear regression analysis applied to the logarithm of signal intensities. it should be noted that diffusion-weighted images generally exhibit a mixture of different contrasts. many diffusion-weighted pulse sequences require relatively long echo times between and ms because of the long duration of the diffusion preparation. thus, diffusion-weighted images are often also t2-weighted, and it can be difficult to differentiate image contrast due to diffusion and t2 effects.
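a short numerical sketch of both formulas; the gradient timing and the signal values are assumed, illustrative numbers.

import numpy as np

GAMMA = 2.675e8                       # gyromagnetic ratio of 1h in rad/(s*t)

def b_value(g, delta, Delta):
    """stejskal-tanner b-value b = gamma^2 g^2 delta^2 (Delta - delta/3),
    returned in s/mm^2 (si inputs: g in t/m, times in s)."""
    b_si = GAMMA**2 * g**2 * delta**2 * (Delta - delta / 3.0)   # s/m^2
    return b_si * 1e-6

def adc(s1, s2, b1, b2):
    """apparent diffusion coefficient from two signal measurements:
    adc = ln(s(b1)/s(b2)) / (b2 - b1)."""
    return np.log(s1 / s2) / (b2 - b1)

# assumed gradient timing: 40 mt/m, delta = 25 ms, Delta = 35 ms
print("b =", round(b_value(0.040, 0.025, 0.035)), "s/mm^2")     # ~1,900
# assumed signals at b1 = 0 and b2 = 1000 s/mm^2
print("adc =", adc(1.0, 0.45, 0.0, 1000.0), "mm^2/s")           # ~0.8e-3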
this is a typical problem in diffusion-weighted mri of the brain and known as the t2 shine-through effect (burdette et al. ). a further consequence of the long minimum echo times due to the diffusion preparation is the relatively low signal-to-noise ratio of diffusion-weighted images. the combined effects of diffusion weighting, which particularly decreases the signal of fluids, and of t2 weighting, which predominantly reduces the signal of other (non-fluid) tissue, result in globally low signal intensity on diffusion-weighted images. therefore, signal-increasing techniques such as increasing the voxel volume or (magnitude) averaging are often required for diffusion-weighted mri. in addition, adc calculation can be corrected for the decreasing signal-to-noise ratio at higher b-values (dietrich et al. a).

(fig. . . in the case of diffusing spins, rephasing is incomplete since spins have moved between the first and second gradient; thus, diffusion-dependent signal attenuation is observed (red arrow).)

(fig. . . acquisition of two images with different diffusion weightings (b-values b1 and b2) in order to calculate an adc map. note the large signal attenuation in csf at the higher b-value, b2, and the correspondingly high diffusion coefficient in the adc map.)

the range of b-values chosen for a diffusion-weighted mri experiment should depend on the typical diffusion coefficients that are measured and on the signal-to-noise ratio of the diffusion-weighted image data. as a rule of thumb, the signal attenuation should be at least about 60%, i.e., the product of the diffusion coefficient and the b-value range, bmax − bmin, should be approximately 1 (xing et al. ). this corresponds to a b-value difference of about 1,000-1,500 s/mm² in brain tissue with adcs of the order of 0.7 × 10⁻³ to 1.0 × 10⁻³ mm²/s. however, the choice of the largest b-value is frequently limited by signal-to-noise considerations, and thus, the maximum diffusion weighting is often reduced in order to maintain a sufficient signal-to-noise ratio. a second point to consider is the choice of the lowest b-value. although a b-value of 0 is often chosen, a slightly higher value can be advantageous in order to suppress the influence of perfusion effects (lebihan et al. ; van rijswijk et al. ).
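a small sketch of this rule of thumb, assuming an optimum product of about 1 (the exact optimum value cited in the literature differs slightly from this):

```python
def b_value_range(adc, target_product=1.0):
    """Rule-of-thumb b-value range so that adc * (bmax - bmin) is close
    to the assumed optimum product of ~1."""
    return target_product / adc

# typical brain adcs in mm^2/s (assumed values)
for adc in (0.7e-3, 1.0e-3):
    print(f"adc = {adc:.1e} mm^2/s -> b-range ~ {b_value_range(adc):.0f} s/mm^2")
```

for brain-like adcs this yields b-value ranges of roughly 1,000-1,400 s/mm², consistent with the values given above.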
historically, the first mri pulse sequences with inserted diffusion gradients were stimulated-echo (taylor and bushell ; merboldt et al. ) and spin-echo sequences (lebihan et al. ); a schematic spin-echo pulse sequence with diffusion gradients is shown in fig. . . . in this diagram, diffusion gradients are added for all three spatial directions (readout, phase, and slice direction); however, they are usually switched on in only one or two of the three directions at a time. since the spins are refocused by a 180° pulse, both diffusion gradients have the same polarity. the main disadvantages of the diffusion-weighted spin-echo sequence are that it requires long acquisition times of many minutes per data set and that it is extremely sensitive to motion. examples of images acquired with a diffusion-weighted spin-echo sequence are shown in fig. . . a,b. the volunteers were asked to avoid any movements, but no head fixation was applied; severe motion artifacts degrade the images.

these artifacts are caused by inconsistent phase information of the complex-valued raw data; stimulated-echo and spin-echo sequences are particularly sensitive to these effects because the diffusion preparation must be repeated for each raw data line, and different states of motion will occur during these diffusion preparations. more details about the motion sensitivity of diffusion-weighted sequences and about approaches to reduce these artifacts are described in the following section. with techniques such as cardiac gating and navigator-echo correction, the image quality of diffusion-weighted spin-echo sequences can be dramatically improved (fig. . . c,d). today, the most commonly used pulse sequence for diffusion-weighted mri (particularly of the brain) is the single-shot echo planar imaging (epi) sequence with spin-echo excitation. the diffusion preparation of this sequence is the same as in the conventional spin-echo sequence of fig. . . , but instead of acquiring a single echo after each excitation, the full k-space can be read out. the advantages of the diffusion-weighted epi sequence are a very short acquisition time of less than 100 ms per slice and its insensitivity to motion. however, the image resolution is typically limited to matrices of about 128 × 128, and echo planar imaging is very sensitive to susceptibility variations as demonstrated in fig. . . a,b: different susceptibilities of soft tissue, bone, and air cause severe image distortion and signal cancellation close to interfaces between soft tissue and air or bone. these effects can be reduced with newer imaging methods known as parallel imaging or parallel acquisition techniques (see sect. . ). the underlying idea is to use several receiver coil elements with spatially different coil sensitivity profiles to acquire multiple data sets with reduced k-space sampling density in the phase-encode direction. these data sets are used to calculate a single image corresponding to a fully sampled k-space during post-processing. reducing the number of phase-encode steps shortens the epi echo train, decreases the minimum echo time as well as the total acquisition time, and increases the effective receiver bandwidth in the phase-encode direction. as a result, susceptibility-induced distortions are reduced as shown in fig. . . c,d. diffusion-weighted imaging of the brain is almost exclusively performed with single-shot epi sequences (with or without parallel imaging). other organs or body areas, however, are less suited for echo-planar single-shot acquisitions because of much more severe susceptibility effects that often result in images of non-diagnostic quality. depending on the slice orientation and receiver coil system, these distortions can be reduced either with parallel imaging as described above or with segmented (i.e., multi-shot) epi sequences that assemble the raw data from multiple shorter echo trains (holder et al. ; ries et al. ; einarsdottir et al. ). a disadvantage of this approach is the increased motion sensitivity, since several excitations (and diffusion preparations) are required for a single data set, resulting in potentially inconsistent phase information; segmented epi sequences are therefore often combined with additional motion correction techniques. several other pulse sequences have been proposed for diffusion-weighted imaging. diffusion gradients can be added to single-shot fast spin-echo sequences with echo trains of multiple spin echoes (see also sect. . ),
such as haste or rare sequences (norris et al. ). however, the additionally inserted diffusion gradients cause an irregular timing of the originally equidistant refocusing rf pulses. in combination with motion-dependent phase shifts, this violates the cpmg condition, which requires a certain phase relation between excitation and refocusing pulses. thus, in order to avoid artifacts, various modifications of diffusion-weighted fast spin-echo sequences have been suggested, such as additional gradients (norris et al. ), a split acquisition of echoes of even and odd parity (schick ), or modified rf pulse trains (alsop ). these modified diffusion-weighted single-shot fast spin-echo sequences are fast and insensitive to motion; disadvantages are a relatively low signal-to-noise ratio and a certain image blurring that is characteristic of all single-shot fast spin-echo techniques. they have been applied in the brain (alsop ; lovblad et al. ), the spine (tsuchiya et al. ; clark and werring ), and in several non-neuro applications such as imaging of musculoskeletal (dietrich et al. ) or breast (kinoshita et al. ) tumors. in contrast to echo-planar imaging, these techniques are insensitive to susceptibility variations and, thus, particularly suited for applications outside the brain.

(fig. . . diffusion-weighted spin-echo acquisitions of two healthy and cooperative volunteers. a,b uncorrected images acquired without cardiac gating. c,d images after navigator echo correction acquired with cardiac gating.)

(fig. . . images acquired with a diffusion-weighted epi sequence. a,b conventional epi sequence exhibiting severe distortions (arrows). c,d epi sequence with parallel imaging showing reduced susceptibility artifacts. for better visualization of the artifacts, only images without diffusion weighting (b = 0) are shown.)

another, however only infrequently used, alternative to echo-planar sequences are fast gradient-echo techniques (flash, mp-rage) with diffusion preparation (lee and price ; thomas et al. ). a special sequence type that has successfully been employed for diffusion-weighted imaging is based on steady-state free-precession (ssfp) sequences (see also sect. . ). pulse sequences known as ce-fast or psif sequences (the acronym psif refers to a reverted fast imaging with steady precession, i.e., fisp, sequence) have been adapted to diffusion-weighted imaging by inserting a single diffusion gradient (lebihan ; merboldt et al. ). however, in contrast to all previously described sequences, the diffusion weighting of this technique cannot easily be determined quantitatively. the observed signal attenuation does not only depend on the diffusion coefficient and the diffusion weighting, but also on the relaxation times, t1 and t2, and the flip angle (buxton ). since these quantities are usually not exactly known, the adc cannot be determined. instead, these sequences have been used to acquire diffusion-weighted images that are evaluated based only on the visible image contrast. a general advantage of diffusion-weighted psif sequences is the relatively short acquisition time due to short repetition times; thus, they exhibit only low motion sensitivity. the most important application of this sequence type is the differential diagnosis of osteoporotic and malignant vertebral compression fractures (baur et al. ). other applications include diffusion-weighted imaging of the brain (miller and pauly ) and the cartilage (miller et al. ).
as mentioned above, an unwanted side effect arising in virtually all diffusion-weighted pulse sequences is their extreme motion sensitivity (trouard et al. ; norris ). by introducing diffusion gradients, the pulse sequence is made sensitive to molecular motion in the micrometer range, but it also becomes susceptible to very small macroscopic motions of the imaged object, since the diffusion gradients do not distinguish between stochastic molecular motion and macroscopic bulk motion. hence, even very small and involuntary movements of the patient, e.g., caused by cardiac motion, cerebrospinal fluid pulsation, breathing, swallowing, or peristalsis, can lead to severe image degradation due to gross motion artifacts. typical appearances of these artifacts are signal voids and ghosting in the phase-encode direction. several techniques and pulse-sequence modifications have been proposed to reduce the motion sensitivity of diffusion-weighted mri. on the one hand, any kind of motion should be minimized. depending on the body region being imaged, this can be achieved by improved fixation of the patient to the scanner, by imaging during breath hold, or by applying cardiac gating. effects of motion can also be reduced by decreasing the acquisition time of a pulse sequence, i.e., by using fast acquisition techniques. this is particularly effective if single-shot sequences such as echo planar imaging techniques are applied. most motion artifacts in diffusion-weighted imaging arise from inconsistent phase information in the complex-valued raw data set. this is caused by different states of motion in the repeated diffusion preparations of the acquisition. in single-shot sequences, only a single diffusion preparation is applied, and thus inconsistent phase information is avoided. it should be noted, however, that even single-shot sequences might be affected by inconsistent phase information if the complex data of several measurements are averaged. instead, only magnitude images should be averaged in diffusion-weighted mri in order to improve the signal-to-noise ratio. another approach to reduce motion artifacts is to correct for motion-related phase errors in the acquired raw data. this can be done using navigator echo-correction techniques (ordidge et al. ; anderson and gore ; dietrich et al. ). the navigator echo is an additional echo without phase encoding acquired after each diffusion preparation. in the absence of motion, all navigator echoes should be identical. thus, by comparing the acquired navigator echoes, bulk motion can be detected, and degraded image echoes can be discarded or a phase correction can be applied. more advanced navigator-echo techniques acquire several navigator echoes in different spatial directions (butts et al. ) or use spiral navigator readouts (miller and pauly ). certain pulse sequences are self-navigated, i.e., a subset of the acquired raw data can be used as navigator echo without the need for an extra navigator acquisition. examples are pulse sequences with radial or spiral k-space trajectories that acquire the origin of k-space in every readout (seifert et al. ; dietrich et al. b). an improved self-navigation is possible with the propeller diffusion sequence, which repeatedly acquires a large area around the origin of k-space (pipe et al. ). some image reconstruction techniques have been proposed that do not use the often inconsistent phase information of the raw data at all.
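the benefit of averaging magnitude rather than complex data can be illustrated with a small numerical experiment, in which each simulated shot carries a random motion-induced phase (a deliberately simplified model; image noise is omitted for clarity):

```python
import numpy as np

rng = np.random.default_rng(0)

s_true = 100.0   # true diffusion-weighted signal of one pixel (arbitrary units)
n_avg = 16

# motion during the diffusion preparation adds a random phase per shot
phases = rng.uniform(-np.pi, np.pi, n_avg)
shots = s_true * np.exp(1j * phases)

complex_avg = np.abs(np.mean(shots))    # random phases partly cancel the signal
magnitude_avg = np.mean(np.abs(shots))  # phase information is discarded first

print(f"complex average:   {complex_avg:.1f}")    # typically much smaller than 100
print(f"magnitude average: {magnitude_avg:.1f}")  # recovers 100
```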
in sequences with radial k-space trajectories, images can be reconstructed by filtered back projection of magnitude projection images (gmitro and alexander ). another spin-echo-based approach known as line-scan diffusion imaging assembles the image from one-dimensional lines of magnitude data (gudbjartsson et al. ). in addition to substantially reduced motion sensitivity, repetition times and thus image acquisition times can be considerably reduced, since the one-dimensional lines are acquired independently of each other. on the other hand, the signal-to-noise ratio of line-scan sequences is substantially lower than that of conventional acquisition techniques, and the spatial resolution of this approach is limited as well. a second unwanted side effect of diffusion-weighted sequences is eddy-current effects caused by the extraordinarily long and strong diffusion gradients. eddy currents are electric currents induced in conducting structures of the mr system that occur after switching magnetic fields on or off. these currents then create unwanted additional gradient fields, resulting in shifted or distorted images and in incorrect diffusion weightings. whereas most mri gradient systems compensate very well for eddy-current effects after the switching of the short gradients typically used for imaging, the longer diffusion gradients are often not as well compensated. hence, diffusion-weighted images are sometimes distorted depending on the diffusion weighting and the direction of the diffusion gradients, resulting in artifacts on adc maps such as enhanced edges. to avoid these artifacts, several techniques have been suggested. diffusion gradients can be shortened by using bipolar diffusion gradients (alexander et al. ) or by adding additional 180° pulses during the diffusion preparation (reese et al. ); eddy currents can be partially compensated for by an additional long gradient before the 90° excitation pulse (alexander et al. ); or diffusion-weighted images can be acquired twice with diffusion gradients of opposite polarity (bodammer et al. ). other eddy-current correction schemes are based on the acquisition of diffusion gradient-dependent field maps and data correction in k-space (horsfield ; papadakis et al. ). in general, (automated) image registration as the first step of post-processing is recommended to reduce influences from both patient motion and eddy-current effects. imaging with the stejskal-tanner diffusion preparation as described above in sect. . . . is only sensitive to molecular diffusion parallel to the direction of the diffusion gradient. the diffusion preparation causes a dephasing of spins that move in the direction of the applied field gradient, i.e., between positions with different magnetic field strengths, as illustrated in fig. . . . molecular motion perpendicular to this direction does not contribute to the signal attenuation. in general, the diffusion displacement of spins depends on the considered spatial direction; e.g., protons of water molecules in nerve fibers move more freely parallel to the fiber direction than they do in perpendicular directions. this dependence of the diffusion on the spatial orientation can be measured by applying diffusion gradients in different spatial directions, e.g., separately in slice, readout, and phase direction as demonstrated in fig. . . . the resulting diffusion-weighted images show substantial signal differences in areas with strongly anisotropic diffusion such as the corpus callosum.
the signal intensity of the corpus callosum is decreased if diffusion gradients in the left-right direction (the readout direction in the example) are applied, but increased for diffusion gradients in the head-foot (slice) direction or the anterior-posterior (phase-encode) direction. this finding is explained by the fact that water molecules diffuse more freely in the left-right direction (parallel to the nerve fibers) than they do in perpendicular directions, i.e., the effective diffusion coefficient is greater in the left-right direction than it is in other directions, and thus the signal attenuation is increased. this orientation dependence is visible in the adc maps as well: the adc in the left-right direction of the corpus callosum is increased compared to the adcs in perpendicular directions. other areas such as gray matter or the csf do not show significant differences depending on the diffusion gradient direction, indicating approximately isotropic diffusion. if the mean (or average) diffusivity of molecules in tissue is to be measured, then the diffusion coefficients for all spatial directions must be averaged as shown in fig. . . d; the corresponding adc map is given by the mean value of the three direction-dependent maps. since the direction-independent or mean adc of tissue is proportional to the trace of the diffusion tensor, this measurement is also referred to as diffusion trace imaging. the measurement of such a direction-independent diffusion-weighted image can be very important to avoid misinterpreting hyperintense areas due to high anisotropy as tissue with generally reduced adc such as areas of focal ischemia. therefore, diffusion-weighted stroke mri is generally based on isotropically diffusion-weighted images. if only a single direction-independent diffusion-weighted image is required for diagnosis, it appears disadvantageous to perform three orthogonal diffusion measurements at the cost of a three-times-increased acquisition duration. it should be noted that it is not possible to simply apply gradients in all three directions simultaneously for this purpose; this results in a single magnetic field gradient in a diagonal direction, which is again only sensitive to diffusion parallel to this diagonal. however, the stejskal-tanner diffusion preparation can be extended by a more sophisticated series of gradient pulses in different directions to achieve an isotropic diffusion weighting within a single diffusion measurement (wong et al. ; mori and van zijl ; chun et al. ; cercignani and horsfield ). isotropically diffusion-weighted images can thus be acquired by either a single or three orthogonal diffusion preparations. however, three measurements are not yet sufficient to determine the properties of diffusion anisotropy in all cases. for example, if a nerve fiber is oriented diagonally to all three coordinate axes, then the diffusion attenuation in this fiber will be the same for the three measurements and cannot be distinguished from isotropic diffusion. the measurement of the full diffusion tensor (cf. sect. . . . ) is required to cope with these more general cases. in spite of this limitation, some studies have used the ratio of the largest and the smallest of three perpendicular diffusion coefficients as an estimate of the anisotropy (holder et al. ). however, this approach should be regarded as inferior to a full diffusion tensor evaluation and is generally not recommended.
to determine the diffusion tensor, i.e., to fully measure anisotropic diffusion, more than three diffusion-sensitized measurements with diffusion gradients in different spatial directions are required. however, only the diagonal elements of the tensor, i.e., dxx, dyy, dzz, can be measured directly; these elements are exactly the direction-dependent adcs determined in the example above. the other three (off-diagonal) tensor components dxy, dxz, dyz do not describe diffusion in a single spatial direction but the correlation of diffusion in two different directions; they cannot be measured directly, but must be calculated as linear combinations of several measurements. the minimum number of measurements required to determine the full diffusion tensor can be deduced from the form of the diffusion tensor matrix: the tensor has six independent components dxx, dyy, dzz, dxy, dxz, dyz and, thus, at least six independent diffusion measurements are required. each of these measurements is based on images with at least two different b-values; in order to reduce the total number of measurements, usually a b-value of 0 is chosen as a direction-independent reference. thus, this reference image has to be acquired only once instead of separately for each diffusion direction. a possible and frequently used choice of seven diffusion-weighted acquisitions that are sufficient to determine the diffusion tensor (basser and pierpaoli ) is shown in fig. . . . none of the six tensor components dxx, dyy, dzz, dxy, dxz, or dyz is measured directly by this gradient scheme; instead, all components must be calculated as linear combinations of the diffusion coefficients in these six directions. this calculation is based on the so-called b-matrix (basser and pierpaoli ), a symmetric 3 × 3 matrix describing the diffusion weighting for an arbitrary diffusion gradient direction,

$$\mathbf{b} = b\,\hat g\,\hat g^{\mathrm T},$$

where $\hat g\,\hat g^{\mathrm T}$ denotes the dyadic product of the (unit) gradient direction vector with itself. this matrix is used to describe the signal attenuation due to the diffusion gradient as

$$s = s_0 \exp\big(-\mathrm{trace}(\mathbf{b}\,\mathbf{d})\big),$$

where $\mathbf{b}\,\mathbf{d}$ denotes the matrix product of the b-matrix and the diffusion tensor matrix. the elements of the diffusion tensor dij can be determined by solving a system of linear equations, since the b-matrix and the signal attenuation are known. the result of this calculation is shown in fig. . . . the three calculated diagonal elements correspond to the direct adc measurements of fig. . . . the off-diagonal elements are generally much lower than the diagonal elements (note the differently scaled intensity maps) and are close to zero in areas with predominantly isotropic diffusion (gray matter and csf).

(fig. . . diffusion-weighted imaging in different spatial directions. a diffusion gradients in slice (s), readout (r), and phase (p) direction; the row vectors (s, r, p) denote the selected gradients. b corresponding diffusion-weighted images. c calculated adc maps corresponding to the diffusion directions in a and the images in b. d averaged adc map; all adcs are in units of 10⁻³ mm²/s. note the differing contrast in the diffusion-weighted images and adc maps depending on the diffusion gradient direction (e.g., in the corpus callosum).)

a simple protocol for diffusion tensor imaging consists of one reference measurement without diffusion weighting (b-value of 0) and six diffusion-weighted measurements with different gradient directions. these gradient directions should be "as different as possible," i.e., pointing isotropically in all spatial directions. a typical b-value for dti measurements of the brain is 1,000 s/mm².
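a minimal sketch of this tensor calculation, assuming a gradient scheme of the (1,1,0)-type described above, one b = 0 reference image, and noise-free signals (real implementations work per pixel and use least-squares fitting over more directions):

```python
import numpy as np

# six unit gradient directions of the (1,1,0)-type scheme
dirs = np.array([[1, 1, 0], [1, -1, 0], [1, 0, 1],
                 [1, 0, -1], [0, 1, 1], [0, 1, -1]]) / np.sqrt(2.0)
b = 1000.0  # s/mm^2

def fit_tensor(s0, signals):
    """Solve for the six tensor elements from one b=0 value (s0) and six
    diffusion-weighted values (signals) of one pixel."""
    # design-matrix row for unit direction g = (gx, gy, gz):
    # b * (gx^2, gy^2, gz^2, 2*gx*gy, 2*gx*gz, 2*gy*gz)
    A = b * np.column_stack([dirs[:, 0]**2, dirs[:, 1]**2, dirs[:, 2]**2,
                             2 * dirs[:, 0] * dirs[:, 1],
                             2 * dirs[:, 0] * dirs[:, 2],
                             2 * dirs[:, 1] * dirs[:, 2]])
    y = np.log(s0 / np.asarray(signals))   # = A @ (dxx, dyy, dzz, dxy, dxz, dyz)
    dxx, dyy, dzz, dxy, dxz, dyz = np.linalg.solve(A, y)
    return np.array([[dxx, dxy, dxz],
                     [dxy, dyy, dyz],
                     [dxz, dyz, dzz]])

# synthetic test: simulate signals from a known tensor and recover it
d_true = np.diag([1.7e-3, 0.3e-3, 0.3e-3])   # mm^2/s
sims = [1000.0 * np.exp(-b * g @ d_true @ g) for g in dirs]
print(fit_tensor(1000.0, sims))               # ~ d_true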
averaging of multiple acquisitions is frequently performed to increase the snr, especially of the images with diffusion weighting. however, all these parameters (b-values, diffusion directions, number of averages) have been evaluated in a number of studies with the aim of optimizing the accuracy of diffusion tensor data. several studies investigated the optimum choice of the b-values both for conventional diffusion-weighted imaging and for diffusion tensor imaging. although the results of these studies vary to a certain extent, b-values in the range of roughly 1,000 s/mm² have generally been found to provide the highest accuracy of diffusion measurements in the brain (jones et al. ; armitage and bastin ; kingsley and monahan ). the optimum number of averages depends on the b-values, which influence the signal attenuation and, thus, the signal-to-noise ratio of the diffusion-weighted images. in general, a higher number of averages is recommended for the acquisition with the high b-value than for the reference image with low b-value or without any diffusion weighting. as shown by jones et al. ( ) for their choice of b-values, the optimum ratio of the total number of acquisitions with high b-value to those with low b-value is about 9.

(fig. . . diffusion tensor imaging. a choice of diffusion gradients (s slice, r readout, p phase direction; the row vector denotes the selected gradients and their polarity) and b corresponding diffusion-weighted images for the determination of the diffusion tensor. note the different contrast in the diffusion-weighted images depending on the diffusion gradient direction (e.g., in the corpus callosum).)

the number of diffusion gradients and their directions has also been investigated in several studies. generally, the accuracy of diffusion tensor data, especially of the diffusion anisotropy and the main diffusion direction, is improved when the number of different diffusion directions is increased (jones et al. ; papadakis et al. ; skare et al. ; jones ). if the number of different directions is fixed, then the accuracy of the measurements can be increased by choosing an optimized set of diffusion directions (skare et al. ; hasan et al. ). no final consensus about the optimum number and choice of directions of diffusion gradients has yet been established, but protocols with larger numbers of diffusion directions (often 20 or more) are currently recommended by many research groups. the diffusion tensor contains complex information about the tissue microstructure that is best visualized as a three-dimensional ellipsoid as discussed in sect. . . . . however, diffusion tensor data may be insufficient to describe tissue in certain geometrical situations. a well-known example is the crossing of white-matter fibers within a single voxel as illustrated in fig. . . . water diffusion in such voxels cannot be fully described by a single ellipsoid, i.e., by the diffusion tensor. to overcome this limitation, more complex measurement techniques such as high-angular-resolution diffusion imaging (hardi) (frank ; tuch et al. ) and q-ball imaging (tuch ) have been proposed. all these techniques use a large number of different diffusion directions (e.g., several dozen to a few hundred [frank ; tuch ]) distributed isotropically in space. diffusion data are measured with high angular resolution in order to determine the spatial distribution of diffusion in more detail as indicated in fig. . . d. a further generalization of diffusion tensor measurements loosens the assumption of gaussian diffusion, which was illustrated in fig. . . .
if diffusion is severely restricted, e.g., by cell membranes, no or very few molecules will move through this border; the probability distribution of diffusion distances will be limited to distances within the cell volume and will no longer be gaussian. the exact displacement probabilities in restricted diffusion can be measured with methods called q-space diffusion imaging (assaf et al. ) or diffusion spectrum imaging (wedeen et al. ). both techniques require the acquisition of images with a large number of different b-values and, in the case of diffusion spectrum imaging, of different diffusion directions; the total number of diffusion measurements reported by wedeen et al. ( ), for example, runs into the hundreds. obviously, this large number of measurements severely limits the applicability of these new techniques in clinical studies; such studies should therefore be regarded as experimental work. another approach to overcome the limitations of models based purely on gaussian diffusion has been proposed by jensen et al. as diffusional kurtosis imaging (jensen et al. ). diffusion data are acquired for several b-values over a large range, up to a few thousand s/mm², similarly to the way data are acquired in q-space imaging, but with a different mathematical model of the non-exponential decay. this method is related to several other studies that investigated diffusion properties in tissue at high b-values and found non-mono-exponential diffusion attenuation curves (inglis et al. ; clark et al. ). this observation has frequently been attributed to the simultaneous measurement of water molecules in different environments such as the intracellular and the extracellular space; however, no final agreement on the interpretation of these data has been established (sehy et al. ).

(fig. . . diffusion tensor data calculated from the measurements shown in fig. . . ; all diffusion coefficients are in units of 10⁻³ mm²/s. a the three diagonal elements dxx, dyy, and dzz, and b the off-diagonal elements dxy, dxz, and dyz of the tensor matrix. note the different intensity scales for diagonal and off-diagonal elements. some remaining eddy-current artifacts can be seen as enhanced edges in the maps of the off-diagonal elements.)

diffusion tensor imaging results in a large amount of data: a full diffusion tensor, i.e., a symmetric 3 × 3 matrix, is determined for each pixel of the image dataset. due to this complex data structure, there is no simple way to visualize the complete diffusion tensor as a single intensity or color map. it would be straightforward to display the six independent elements of the tensor as separate maps as shown in fig. . . ; however, this would not be very helpful for the interpretation or quantitative evaluation of diffusion tensor data. instead, several techniques are used to reduce the diffusion tensor information to simpler datasets that can be displayed and interpreted as easily as other imaging data. most results of imaging examinations are presented as either signal intensity images or scalar parameter maps. these images and maps have the advantage that they can easily be manipulated, e.g., the contrast can be interactively adjusted, and they can be quantitatively evaluated by statistics over regions of interest. in order to obtain similar parameter maps of diffusion tensor data, a single scalar reflecting a certain tensor property must be calculated. the most important examples of such scalars are the mean diffusivity or trace of the diffusion tensor and the anisotropy of the tensor.
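as an illustration of such scalar maps, the following sketch computes mean-diffusivity and fractional-anisotropy maps from a per-pixel tensor field using an eigenvalue decomposition (array shapes and function names are illustrative, not from a particular software package):

```python
import numpy as np

def tensor_parameter_maps(d_field):
    """Compute mean-diffusivity and fa maps from a tensor field
    d_field of shape (ny, nx, 3, 3) (illustrative sketch)."""
    evals = np.linalg.eigvalsh(d_field)       # shape (ny, nx, 3)
    md = evals.mean(axis=-1)                  # mean diffusivity map
    dev = evals - md[..., np.newaxis]
    fa = np.sqrt(1.5 * (dev**2).sum(-1) / (evals**2).sum(-1))
    return md, fa

# single-pixel check with a cigar-shaped tensor
d = np.zeros((1, 1, 3, 3))
d[0, 0] = np.diag([1.7e-3, 0.3e-3, 0.3e-3])
md, fa = tensor_parameter_maps(d)
print(md[0, 0], fa[0, 0])    # ~0.77e-3 mm^2/s and high fa (~0.8)
```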
the mean diffusivity of a diffusion tensor measurement, i.e., the diffusion coefficient averaged over all spatial directions, is displayed as parameter maps in fig. . . a,b. the same data can be displayed either as an intensity-coded map (fig. . . a) or as a color-coded map (fig. . . b). both maps illustrate, e.g., the high diffusivity of csf and the typical adcs of about 0.7 × 10⁻³ mm²/s in the white matter. many different scalar measures have been proposed to describe the diffusion anisotropy, cf. sect. . . . . the two most important are the fractional anisotropy and the relative anisotropy shown in fig. . . c,d. the maps are very similar; both show the high anisotropy of white matter as hyperintense areas in contrast to the low anisotropy in gray matter or csf. these two scalars derived from the diffusion tensor are by far the most important quantities for the clinical evaluation of diffusion tensor data. the vast majority of clinical studies based on diffusion tensor imaging determine the mean diffusivity and the anisotropy in regions of interest in order, e.g., to statistically compare these data between certain patient groups or between patients and healthy controls. the mean diffusivity and the anisotropy contain certain important information about the diffusion tensor; if the diffusion tensor is visualized as an ellipsoid, then the diffusivity reflects the volume of the ellipsoid and the anisotropy its deviation from a spherical shape. however, any information about the main diffusion direction, i.e., the orientation of the longest axis of the diffusion tensor ellipsoid, is missing. this direction corresponds to the microstructural orientation of tissue, e.g., the orientation of white-matter tracts, and is determined as the eigenvector of the largest eigenvalue of the tensor (cf. sect. . . . ). there are two common methods to visualize the direction of this eigenvector: color coding and direct vector display. the direction can be color-coded using the red-green-blue (rgb) color model. each direction in space is defined by a three-component vector v = (vx, vy, vz). if this three-component vector is interpreted as an rgb color specification, vectors in the x-direction, v = (1, 0, 0), appear as red pixels, vectors in the y-direction as green pixels, and vectors in the z-direction as blue pixels. eigenvectors in other directions are displayed as (additive) mixtures of different colors, e.g., the vector v = (1, 0, 1) as a mixture of red and blue, yielding violet pixels. the resulting color map is finally scaled with the diffusion anisotropy, since the main diffusion direction is of interest only in areas with high anisotropy.

(fig. . . the diffusion tensor cannot represent diffusion properties in voxels with crossing nerve fibers. voxels with a single predominant fiber direction (a,b) show diffusion tensor ellipsoids whose longest axes correspond to the fiber orientation. voxels with crossing fibers (c) result in a diffusion tensor ellipsoid with reduced anisotropy pointing in an averaged fiber direction. advanced methods such as high-angular-resolution diffusion imaging (d) can resolve different fiber orientations within a single voxel.)

some examples of these color-coded vector maps are shown in fig. . . . the red color of the corpus callosum demonstrates that the nerve fibers are predominantly oriented in the left-right direction. white-matter areas in green and blue are oriented in the anterior-posterior direction and the head-foot direction, respectively.
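a minimal sketch of this color-coding scheme (the absolute value is taken because the sign of an eigenvector is arbitrary):

```python
import numpy as np

def direction_color_map(eigvecs, fa):
    """Map the principal eigenvector of each pixel to an rgb color,
    scaled by the fractional anisotropy.

    eigvecs: array (ny, nx, 3) of unit principal eigenvectors
    fa:      array (ny, nx) of fractional anisotropy values in [0, 1]
    returns: array (ny, nx, 3) of rgb values in [0, 1]
    """
    rgb = np.abs(eigvecs)              # sign of the eigenvector is arbitrary
    return rgb * fa[..., np.newaxis]   # dark where diffusion is isotropic

# left-right fiber in a highly anisotropic pixel -> bright red
v = np.zeros((1, 1, 3)); v[0, 0] = [1.0, 0.0, 0.0]
print(direction_color_map(v, np.array([[0.9]])))   # [[[0.9, 0, 0]]]
```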
alternatively, the main diffusion direction can be directly displayed by a small line in each pixel; some authors refer to this technique as whisker plots. this visualization is on the one hand more intuitive than color coding, but on the other hand difficult to display for large areas because of the large number of pixels (and hence lines) in a complete image. an example is shown in fig. . . ; the magnified area shows again the corpus callosum, where the diffusion directions follow the anatomical orientation of the nerve fibers. a general problem and limitation of the visualization of the main diffusion direction is that it is based on the assumption of linear diffusion, i.e., the diffusion ellipsoid is supposed to have a cigar-like shape. this is usually true in white-matter tracts, but may lead to deceptive graphical depictions at crossing fibers or if diffusion is described by a planar tensor. another disadvantage is that vector maps are difficult to compare or to evaluate statistically. it is also possible to visualize the full diffusion tensor using the diffusion ellipsoid introduced in sect. . . . . as in the vector plots, it is often difficult to visualize the entire amount of data belonging to a single image slice at once. therefore, this 3d tensor visualization is usually combined with tools to zoom into the illustration and to rotate the slice in order to view the tensors in specific areas of the brain, as demonstrated in fig. . . .

(fig. . . parameter maps displaying scalar quantities calculated from the diffusion tensor. direction-independent mean diffusivity shown as a gray-scaled map (a) and as a color-coded map (b); the diffusion coefficients are given in units of 10⁻³ mm²/s. (c) fractional anisotropy and (d) relative anisotropy.)

(fig. . . color-coded visualization of the main diffusion orientation in four different slices (a-d). the main diffusion direction (orientation of the longest axes of the diffusion ellipsoid) is shown in red, green, and blue for left-right, anterior-posterior, and head-foot orientation, respectively, as indicated in e. the green rim at the frontal brain is caused by remaining eddy-current and susceptibility artifacts.)

the ellipsoids are additionally color-coded to emphasize the direction of their longest axis (the main diffusion direction); their brightness is scaled by the anisotropy. thus, the ellipsoid visualization combines features of the techniques described in the previous sections; e.g., csf is displayed as large but relatively dark spheres (denoting a high diffusion coefficient and low anisotropy), while the tensors in fiber tracts appear as bright elongated ellipsoids corresponding to linear diffusion in a single predominant orientation. the exact depiction of the tensor information is not standardized but may look different depending on the tools used. an alternative visualization may substitute the ellipsoids by cuboids with equivalent dimensions as shown in fig. . . . the presented information is the same as before, but the computational cost required to display cuboids is substantially lower than with smooth ellipsoids. thus, interactive manipulation of the 3d datasets may be faster using the cuboid visualization. close inspection of the main diffusion directions in figs. . . or . . suggests that the shape of white-matter tracts can be reconstructed by connecting several diffusion directions in an appropriate way. this process is illustrated in fig. . . , based on a magnification of fig. . . .
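a minimal streamline-tracking sketch of this kind, assuming a principal-eigenvector field and an fa map on a voxel grid, with nearest-neighbor lookup and a simple euler step (real implementations add interpolation and further stopping criteria):

```python
import numpy as np

def track_fiber(seed, e1_field, fa_field, step=0.5, fa_min=0.2, n_max=1000):
    """Follow the principal eigenvector field e1_field (shape (n1, n2, n3, 3),
    components in index order) from a seed point, stopping at low
    anisotropy or at the volume border."""
    pos = np.asarray(seed, dtype=float)
    path, direction = [pos.copy()], None
    for _ in range(n_max):
        idx = tuple(np.round(pos).astype(int))     # nearest-neighbor lookup
        if (min(idx) < 0
                or any(i >= s for i, s in zip(idx, e1_field.shape[:3]))
                or fa_field[idx] < fa_min):
            break                                  # stopping criteria
        e1 = e1_field[idx]
        if direction is not None and np.dot(e1, direction) < 0:
            e1 = -e1                               # keep a consistent orientation
        pos = pos + step * e1                      # simple euler step
        direction = e1
        path.append(pos.copy())
    return np.array(path)

# synthetic test: fibers along the first axis everywhere, fa = 0.8
e1 = np.zeros((10, 10, 10, 3)); e1[..., 0] = 1.0
fa = np.full((10, 10, 10), 0.8)
path = track_fiber((5.0, 5.0, 1.0), e1, fa)
print(f"tracked {len(path)} points, end at {path[-1]}")
```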
by choosing a start point and following the main diffusion direction, trajectories can be constructed that visualize the fiber tracts of white matter. a typical example is shown in fig. . . , where a seed region was placed within the corpus callosum, and all fibers through this seed region were reconstructed. the color of the fibers reflects the local anisotropy in this case, but various other color schemes could be used instead. fiber tracking or diffusion tractography was developed in the late 1990s (mori et al. ; conturo et al. ; mori and van zijl ; melhem et al. ), and a multitude of different algorithms to reconstruct fibers have been proposed since then. most techniques include data interpolation to increase the spatial resolution, and all require certain criteria to decide when the tracking of a fiber should be stopped (e.g., at pixels with low anisotropy or at sudden changes of the diffusion direction). fiber tracking is usually based either on a single-region approach, in which all fibers are tracked that go through a user-defined region of interest, or on a two-region approach, where connecting fibers between two regions are reconstructed. fiber tracking depends on good image quality, with sufficient signal-to-noise ratio and without substantial distortion artifacts. increased noise can reduce the calculated anisotropy (jones and basser ) and, thus, the length of the reconstructed fibers. image distortions cause a mismatch of the anatomical fiber orientation and the measured diffusion direction and thus can lead to erroneous tractography results. therefore, parallel imaging and eddy-current correction techniques can improve the results of white-matter tractography. it is generally assumed that isotropic spatial image resolution is preferable for fiber tracking applications; a typical protocol suggested by jones et al. acquires data of the whole brain at an isotropic resolution of a few millimeters (jones et al. b). fiber tracking is a valuable tool to visualize white-matter structures of the brain. however, it is still very difficult to evaluate tractography results quantitatively, to assess the accuracy of reconstructed fibers, or to compare the results of different examinations. first approaches to these questions include the spatial normalization of tensor data (jones et al. a) and the determination and visualization of uncertainties of diffusion tensor results (jones ; jones and pierpaoli ).

(fig. . . three-dimensional visualization of the full diffusion tensor as color-coded cuboids; the cuboids are colored as in fig. . . .)

(fig. . . reconstruction of white matter tracts starting at a seed region in the corpus callosum. visualization was performed with the "dti task card" provided by the mgh/mit/hms athinoula a. martinos center for functional and structural biomedical imaging (ruopeng wang).)

risks and safety issues related to mr examinations

with the rapid development of mr technology and the significant growth in the number of patients examined with this versatile imaging modality, the consideration of possible risks and health effects associated with the use of mr procedures is gaining in importance.
as described in detail in the previous chapters, three types of fields are employed:

• a high static magnetic field generating a macroscopic nuclear magnetization
• rapidly alternating magnetic gradient fields for spatial encoding of the mr signal
• radiofrequency (rf) electromagnetic fields for excitation and preparation of the spin system

in the following, the biophysical interaction mechanisms and biological effects of these fields are summarized, as well as exposure limits and precautions to be taken to minimize health hazards and risks to patients and volunteers undergoing mr procedures. in the recent past, a number of excellent reviews and books related to this topic have been published. for details and supplementary information, the reader is referred to these publications quoted in the following and to the bibliographies given therein. because no ionizing radiation is used in mri, it is generally deemed safer than diagnostic x-ray or nuclear medicine procedures in terms of health protection of patients. in this context, a fundamental difference between ionizing and non-ionizing radiation has to be noted: exposure to ionizing radiation - at least at the relatively low doses occurring in medical imaging - results in stochastic effects, whereas biological effects of (electro)magnetic fields are deterministic. a stochastic process is one where the exposure determines the probability of the occurrence of an event but not the magnitude of the effect. in contrast, deterministic effects are those for which the magnitude is related to the level of exposure and a threshold may be defined (international commission on non-ionizing radiation protection [icnirp] ). as a consequence, the probabilities of detrimental effects caused by diagnostic x-ray and nuclear medicine examinations performed over many years accumulate, whereas the physiological stress induced by mr procedures is related to the acute exposure levels of a particular examination and does, to the present knowledge, not accumulate over years. in the recent past, regulations concerning mr safety have been largely harmonized. there are two comprehensive reviews by international commissions that form the basis for both national safety standards and the implementation of monitor systems by the manufacturers of mr devices: the recommendation of the international commission on non-ionizing radiation protection (icnirp) and the standard of the international electrotechnical commission (iec), which are referred to in the following sections.

the magnetization induced in matter by an external magnetic field h is described by the magnetic susceptibility χ, with $b = \mu_0 (1+\chi)\,h$ and $\mu_0 = 1.257 \times 10^{-6}$ vs/(am) the magnetic permeability in vacuum. due to the covalent binding of atoms, electron shells in most molecules are completely filled and thus all electron spins are paired. nevertheless, these diamagnetic materials can be weakly magnetized in an external magnetic field. as described in sect. . . . , this universal effect is caused by changes in the orbital motion of electrons in an external magnetic field. the induced magnetization is very small and in a direction opposite to that of the applied field (χ < 0). paramagnetic materials, on the other hand, contain molecules with single, unpaired electrons. the intrinsic magnetic moments related to these electrons tend - comparable to the much weaker nuclear magnetic moments (cf. sect. . . ) - to align in an external magnetic field. this effect increases the magnetic field in paramagnetic materials (small positive χ). in ferromagnetic materials - such as iron, cobalt, or nickel - unpaired electron spins align spontaneously with each other in the absence of a magnetic field in a region called a domain. these materials are characterized by a large positive magnetic susceptibility (χ >> 1).
biomolecules are in general diamagnetic and contain at most some paramagnetic centers. in almost all human tissues, the concentration of paramagnetic components is so low that the tissues are characterized by susceptibilities differing by no more than about 20% from that of water (χ ≈ −9 × 10⁻⁶) (schenck ). as a consequence, there is virtually no effect of the human body on an applied magnetic field (b ≅ μ0 h). there are several established physical mechanisms through which static magnetic fields can interact with biological tissues and organisms. the most relevant mechanisms are discussed in the following. even in a uniform magnetic field, molecules or structurally ordered molecule assemblies with either a field-induced (diamagnetic) or permanent (paramagnetic) magnetic moment mmol experience a mechanical torque that tends to align their magnetic moment parallel (or antiparallel) to the external magnetic field and thus to minimize the potential energy (fig. . . a). orientation effects, however, can only occur when the molecules or molecule clusters have a non-spherical structure and/or when the magnetic properties are anisotropically distributed. moreover, the alignment must result in an appreciable reduction of the potential energy (emag ∝ −mmol · b) of the molecules in the external field with respect to their thermal energy (etherm ∝ kt). at higher temperatures, as for example in the human body, the alignment of molecules with small magnetic moments is prevented by their thermal movement (brownian motion). in a non-uniform magnetic field, as for example in the periphery of an mr system, paramagnetic and ferromagnetic materials are, moreover, attracted and thus can quickly become dangerous projectiles (fig. . . b).

magneto-hydromechanical interactions. static magnetic fields also exert forces (called lorentz forces) on moving electrolytes (ionic charge carriers), giving rise to induced electric fields and currents. for an electrolyte with charge q, the lorentz force, which acts perpendicular to the direction of the magnetic field, b, and the velocity, v, of the electrolyte, is given by

$$\mathbf{f} = q\,(\mathbf{v} \times \mathbf{b}).$$

since electrolytes with a positive or negative charge moving, for example, through a cylindrical blood vessel orientated perpendicular to a magnetic field are accelerated into opposite directions, this mechanism gives rise to an electrical voltage across the vessel, which is commonly referred to as the blood flow potential (fig. . . ). moreover, the induced transversal velocity component also interacts with the magnetic field according to the equation above, which results in a lorentz force that is directed antiparallel to the longitudinal velocity component. at very high magnetic field strengths, this secondary effect can reduce the flow velocity and alter the flow profile of blood in large vessels (tenforde ). theoretical modeling of magneto-hydromechanical interaction processes was performed by tenforde ( ) based on the navier-stokes equation describing the flow of an electrically conductive fluid in the presence of a magnetic field, using the finite element technique. induced current densities in the region of the sinoatrial node were predicted to exceed physiologically relevant levels only at field strengths of several tesla and above in an adult human. moreover, magneto-hydromechanical interactions were predicted to reduce the volume flow rate of blood in the human aorta by at most a few percent at field levels in the tesla range.
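the order of magnitude of the blood flow potential can be estimated from the lorentz-force model as u = b · v · d for a vessel of diameter d oriented perpendicular to the field; the velocity and diameter below are assumed typical aortic values, not measured data:

```python
def flow_potential(b_field, velocity, diameter):
    """Estimate of the maximum blood flow potential u = b * v * d (volts)
    across a vessel of diameter d (m) oriented perpendicular to b (T)."""
    return b_field * velocity * diameter

# assumed values: peak aortic velocity ~0.6 m/s, diameter ~0.025 m, at 3 T
print(f"{flow_potential(3.0, 0.6, 0.025) * 1e3:.0f} mV")   # ~45 mV
```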
magnetic effects on chemical reactions. as shown by in vitro studies, several classes of organic chemical reactions can be influenced by static magnetic fields under appropriate, non-physiological conditions (grissom ; world health organization [who] ).

(fig. . . magneto-mechanical effects. a orientation of a molecule with a magnetic moment m in a uniform magnetic field. b attraction of a paramagnetic or ferromagnetic object in a non-uniform magnetic field. the direction of the acting forces f is indicated by arrows.)

an established effect consists in the modification of the kinetics of chemical reactions with radicals as intermediate products, brought about by splitting and modification of electron spin states in the magnetic field. an example is the conversion of ethanolamine to acetaldehyde by the bacterial enzyme ethanolamine ammonia lyase. radical-pair magnetic field effects are thus used as a tool for in vitro studies of enzyme reactions (who ). for individual macromolecules, the extent of orientation in strong magnetic fields is very small. for example, measurements on dna in solution have shown that very high magnetic flux densities are required to produce orientation of even a small fraction of the molecules (maret et al. ). in contrast, there are several examples of molecular aggregates that can be oriented to a large extent by static magnetic fields, such as outer segments of retinal rod cells, muscle fibers, and filamentous virus particles (icnirp ; who ). an example of an intact cell that can be oriented magnetically is the erythrocyte. it has been shown that both resting and flowing sickled erythrocytes align in sufficiently strong fields with their long axis perpendicular to the magnetic flux lines (brody et al. ; murayama ). higashi et al. ( ) reported that normal erythrocytes could be oriented with their disk planes parallel to the magnetic field direction; this effect was detectable already at moderate flux densities, and almost all of the cells were oriented at the highest flux densities investigated. on the other hand, calculations performed by schenck ( ) indicated that all of these orientation effects observed in vitro are probably too small to affect the orientation of the equivalent structures in vivo. however, although biophysical models make it possible to roughly estimate the magnitude of static magnetic field effects, the reality is so complex that calculations cannot in principle rule out physiological effects (hore ). based on the evidence at present, there is no strong likelihood of major physiological consequences arising from radical-pair magnetic field effects on enzymatic reactions. reasons against are the efficacy of homeostatic buffering and the fact that the contrived conditions needed to observe a magnetic field response in the laboratory are unlikely to occur under physiological conditions (hore ). there have been only a few studies on the effects of static magnetic fields at the cellular level. they reveal that exposure to static magnetic fields alone has no or extremely small effects on cell growth, cell cycle distribution, and the frequency of genetic damage, regardless of the magnetic flux density. however, in combination with other external factors such as ionizing radiation or some chemicals, there is evidence to suggest that a static magnetic field can modify their effects (miyakoshi ). with regard to possible effects on reproduction and development, no adverse effects of static magnetic fields have been consistently demonstrated; however, few good studies have been carried out, especially for fields in excess of several tesla (icnirp ; saunders ; who ).
several studies indicate that implantation as well as prenatal and postnatal development of the embryo and fetus are not affected by exposure for varying periods during gestation to magnetic fields with flux densities up to the tesla range (konermann and mönig ; murakami et al. ; okazaki et al. ; sikov et al. ). on the other hand, mevissen et al. ( ) reported that continuous exposure of rats to a millitesla-range field slightly decreased the number of viable fetuses per litter. electric flow potentials generated across the aorta and other major arteries by the flow of blood in a static magnetic field can routinely be seen in the ecg of animals and humans exposed to sufficiently strong fields. in humans, the largest potentials occur across the aorta after ventricular contraction and appear superimposed on the t-wave amplitude of the ecg. different animal studies demonstrated effects of static magnetic fields on blood flow, arterial pressure, and other parameters of the cardiovascular system, often at comparatively low flux densities (saunders ). the results of these studies, however, have to be interpreted with caution because it is difficult to reach any firm conclusion from cardiovascular responses observed in anaesthetized animals (saunders ; who ). on the other hand, two recent studies on humans exposed to a maximum flux density of 8 t (chakeres et al. ; kangarlu et al. ) did not yield clinically relevant changes in the heart rate, respiratory rate, diastolic blood pressure, finger pulse oxygenation levels, or core body temperature. the only physiologic parameter that was found to be altered significantly by high-field exposure was the measured systolic blood pressure. this is consistent with a hemodynamic compensatory mechanism to counteract the drag on blood flow exerted by magneto-hydrodynamic forces as described in sect. . . . (chakeres and de vocht ).

(fig. . . positively and negatively charged electrolytes moving with a velocity v through a blood vessel oriented perpendicular to a magnetic field are accelerated into opposite directions and thus induce an electric voltage uh across the vessel (blood flow potential). cross-hatches indicate the direction of the magnetic field into the paper plane.)

various behavioral studies yielded that the movement of laboratory rodents in strong static magnetic fields may be unpleasant, inducing aversive responses and conditioned avoidance (who ). such effects are thought to be consistent with magneto-hydrodynamic effects on the endolymph of the vestibular apparatus (who ). this is in line with reports that some volunteers and patients exposed to very strong static magnetic fields experienced sensations of vertigo, nausea, and a metallic taste in the mouth (chakeres et al. ; kangarlu et al. ; schenck ). moreover, some of them reported magnetophosphenes occurring during rapid eye movements in strong fields, which may be attributable to weak electric fields induced by movements of the eye, resulting in an excitation of structures in the retina (reilly ; schenck ). two recent studies evaluated neurobehavioral effects among subjects exposed to strong static magnetic fields using a neurobehavioral test battery. performance in an eye-hand coordination test and a near-visual contrast sensitivity task slightly declined in one study (de vocht et al. ), whereas a small negative effect on short-term memory was noted at the highest field strength in the other (chakeres et al. ).
taking also into account the results of other neurobehavioral studies, it can be concluded that there is at present no evidence of any clinically relevant modification of human cognitive function related to static magnetic field exposure (chakeres and de vocht ). there are only a few epidemiological studies available that were specifically designed to study health effects of static magnetic fields. the majority of these have been focused on cancer risks. in 2002, the international agency for research on cancer (iarc) reviewed the epidemiological studies focused on cancer risks. generally, these studies have not pointed to higher risks, although the number of studies was small, the numbers of cancer cases were limited, and the information on individual exposure levels was poor. therefore, the available evidence from epidemiological studies is at present not sufficient to draw any conclusions about potential health effects of static magnetic field exposure (feychting ; who ). some epidemiological studies have investigated reproductive outcomes for workers involved in the aluminum industry or in mri. kanal et al. ( ), for example, evaluated a large number of pregnancies of women working at clinical mr facilities. comparing these pregnancies with those occurring in employees at other jobs, they did not find significantly increased risks for spontaneous abortion, preterm delivery, reduced birth weight, or male gender of the offspring. however, no studies of high quality have been carried out on workers occupationally exposed to very strong fields. although there are initial experiences concerning the examination of volunteers and patients in ultra-high-field mr systems, most clinical mr procedures have so far been performed at static magnetic fields of a few tesla or below. as summarized in sect. . . . , the literature does not indicate any serious adverse health effects from the exposure of healthy human subjects up to a flux density of 8 t (icnirp ). however, because movements in static magnetic fields above 2 t can produce nausea and vertigo, both the iec standard and the icnirp recommendation (table . . ) regulate that mr examinations above this static magnetic flux density should be performed in the controlled operating mode under medical supervision. the recommended upper limit for the controlled operating mode is 4 t, due to the limited information concerning possible effects above this magnetic flux density. for mr examinations performed in the experimental operating mode, there is no upper limit for the magnetic flux density. in a safety document issued in 2003, the us food and drug administration (fda) deemed mr devices significant risk only when a static magnetic field of more than 8 t is used. according to faraday's law, a time-varying magnetic field b(t) induces an electric field e(t), which has two important characteristics: the field strength is proportional to the time rate of change of the magnetic flux density, db(t)/dt, and the field lines form closed loops around the direction of the magnetic field. time-varying magnetic fields are used in mri - among others - to spatially encode the mr signals arising from the different volume elements within the human body. to this end, three independent gradient coils are used to produce magnetic fields directed parallel to the static magnetic field b0 = (0, 0, b0), with a field strength varying in a linear manner along the x-, y-, and z-directions as shown in fig. . . .
for the special case of a spatially uniform magnetic field directed in the z-direction, b_z(t), the electric field strength along a circular (conductive) loop of radius r in the x-y-plane is given by e(t) = (r/2) · db_z(t)/dt. this equation reveals that the electric field strength in the considered circular loop increases linearly with its radius as well as with the rate of change, db_z(t)/dt. this model gives, for example, the electric field induced by the magnetic gradient field b = (0, 0, g_z · z) of the z-gradient coil. in contrast, the distribution of the electric fields induced by the time-varying magnetic gradient fields b = (0, 0, g_x · x) and b = (0, 0, g_y · y) is much more complex, since the magnetic flux density of these fields is not uniform over the x-y-plane. moreover, the generation of these gradient fields is inevitably connected, due to fundamental principles of electrodynamics, with the occurrence of magnetic field components directed along the x- and y-directions, i.e., b = (b_x, 0, 0) and b = (0, b_y, 0), respectively. although these "maxwell terms" are of no relevance for the acquisition of mr images, they have to be considered carefully with respect to biological effects. the distribution of the electric fields induced by time-varying magnetic fields directed parallel and perpendicular to the long axis of the human body is shown schematically in fig. . . . the precise spatial and temporal distribution of the electric fields in the human body, of course, strongly depends both on the technical characteristics of the gradient coils implemented in a specific mr system and on the morphology of the body region exposed, and thus cannot be described by a simple mathematical expression. for worst-case estimations, however, it can be assumed that the electric field induced by a non-uniform magnetic field is equal to or smaller than the electric field produced by a uniform magnetic field with a field strength equal to the maximum magnetic flux density of the non-uniform field (schmitt et al. ). for a uniform magnetic field, the electric field strength in general reaches a maximum when the magnetic field is oriented perpendicular to the coronal plane of the body (see fig. . . , right), since the extension of conductive loops is largest in this direction (reilly ). in conductive media, such as biological tissues, the internally induced electric field e(t) results in circulating eddy currents, j(t); the two quantities are related by the electric conductivity of the medium, σ: j(t) = σ · e(t). calculation of the current distribution in the human body is complicated by the widely differing conductivities of the various tissue components. for rough estimations, however, the body can be treated as a homogeneous medium with an average conductivity of σ = . s/m (reilly ). according to the two relations above, for example, a current density of ma/m² is induced at a radius of cm by a rate of change of the magnetic flux density of db_z/dt = t/s. (fig. . . : schematic representation of the electric field induced by time-varying magnetic fields b(t) directed parallel (left) and perpendicular (right) to the long axis of the human body; the electric field lines form closed loops around the direction of the magnetic field.)
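the loop model above lends itself to a quick numerical check. the following python sketch evaluates e = (r/2) · db_z/dt and j = σ · e; all numerical values (conductivity, loop radius, switching rate) are illustrative assumptions of this sketch, since the chapter's own numbers were not preserved in this copy:

# induced electric field and eddy-current density for a circular conductive
# loop in a uniform time-varying field b_z(t), as described above.
# all numbers are illustrative assumptions, not values from the chapter.
sigma = 0.2   # assumed average tissue conductivity, s/m
r = 0.20      # loop radius, m
dbdt = 20.0   # rate of change of the flux density, t/s

e = 0.5 * r * dbdt   # induced electric field along the loop, v/m
j = sigma * e        # resulting eddy-current density, a/m^2
print(f"e = {e:.2f} v/m, j = {1e3 * j:.0f} ma/m^2")   # e = 2.00 v/m, j = 400 ma/m^2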
the magnetic flux density of the gradient fields used in mri is about two orders of magnitude lower than that of the static magnetic field b0. therefore, the time-varying magnetic fields produced by gradient coils in mri can be neglected in comparison with the strong static magnetic field as far as interactions of magnetic fields with biological tissues and organisms are concerned (cf. sect. . . . ). in contrast, however, biophysical effects related to the electric fields and currents induced by these magnetic fields have to be considered carefully. in general, the rise times of magnetic gradients in mri are longer than µs, resulting in time-varying electric fields and currents with frequencies below khz. in this frequency range, the conductivity of cell membranes is several orders of magnitude lower than that of the extra- and intracellular fluid (foster and schwan ). as illustrated in fig. . . , this has two important consequences. first, the cell membrane tends to shield the interior of the cell very effectively from current flow, which is thus restricted to the extracellular fluid. second, voltages are induced across the membranes of cells. when these voltages exceed a tissue-specific threshold level, they can stimulate nerve and muscle cells (foster and schwan ). theoretical models describing cardiac and peripheral nerve stimulation have been presented by various scientists (e.g., by irnich, mansfield, and reilly). a detailed discussion of the underlying assumptions of the different models and the differences between them can be found, among others, in (schaefer et al. ; schmitt et al. ). the best approximation to experimental data is given by a hyperbolic strength-duration expression, db/dt = ḃ∞ · (1 + τc/t), which relates the stimulation threshold, expressed as the rate of change db/dt of the magnetic flux density, to the stimulus duration t, i.e., the ramp time of the magnetic gradient field (schaefer et al. ; schmitt et al. ). a hyperbolic model comparable to this expression was first established by g. weiss in for an electric current pulse and the corresponding electric charge; this "fundamental law of electrostimulation" has meanwhile been confirmed in numerous studies for neural and cardiac excitation as well as for defibrillation (schaefer et al. ). as shown in fig. . . , the threshold strength of a stimulus decreases with its duration. the asymptotic stimulus strength, ḃ∞, for an infinite duration is denoted as the "rheobase"; the characteristic response time constant, τc, as the "chronaxie". it should be mentioned that, according to a model presented by irnich et al., stimulation depends on the mean (rather than the peak) db/dt and is thus independent of the particular shape of the gradient pulse (schaefer et al. ). in current safety regulations, however, exposure limits are unanimously expressed as maximum db/dt values.
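to make the strength-duration law concrete, the short python sketch below evaluates db/dt = ḃ∞ · (1 + τc/t) for a few ramp times; the rheobase and chronaxie used here are assumptions chosen for illustration only, since the fitted values quoted in the text were not preserved:

def stimulation_threshold(t_ms, rheobase, chronaxie_ms):
    """hyperbolic strength-duration law: threshold db/dt (t/s) for a
    gradient ramp of duration t_ms (ms); the threshold rises for short
    ramps and approaches the rheobase for long ramps."""
    return rheobase * (1.0 + chronaxie_ms / t_ms)

rb, tc = 20.0, 0.36   # assumed rheobase (t/s) and chronaxie (ms)
for t in (0.1, 0.36, 1.0, 3.0):
    print(f"ramp {t:4.2f} ms -> threshold {stimulation_threshold(t, rb, tc):6.1f} t/s")

note how the threshold doubles when the ramp time equals the chronaxie and grows steeply for shorter ramps.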
in accordance with the biophysical mechanisms described in the previous section, there is now a strong body of evidence suggesting that the transduction processes through which induced electric fields and currents can influence cellular properties involve interactions at the level of the cell membrane (icnirp ). in addition to the stimulation of electrically excitable tissues, changes in membrane properties, such as ion binding to membrane macromolecules, ion transport across the membrane, or ligand-receptor interactions, may trigger transmembrane signaling events. cardiac and peripheral nerve stimulation: experimental studies with magnetic stimulation of the heart have been carried out since about , with the introduction of improved gradient hardware for epi. (fig. . . : in the frequency range below khz, the conductivity of cell membranes (σ_m) is several orders of magnitude lower than that of the extra- and intracellular fluid (σ_ext and σ_int, respectively), so that the induced electric fields, and also the resulting electric currents, are mainly restricted to the extracellular fluid (e_ext > e_int); as a result, electric voltages are generated across the membranes of cells that can stimulate nerve and muscle cells.) these experiments were, of course, not performed on humans but on dogs. the data, which are listed and reviewed by reilly ( ), reveal that magnetic stimulation is most effective when it is delivered during the t wave of the cardiac cycle. moreover, excitation thresholds for the heart are substantially greater than those for nerves as long as the pulse duration is sufficiently less than the chronaxie time of the heart, which is about ms. therefore, the avoidance of peripheral sensations in a patient provides a conservative safety margin with respect to cardiac stimulation. bourland et al. ( ) determined a mean value of . ± . for the ratio of cardiac stimulation thresholds (the induction of ectopic beats) to muscle stimulation thresholds in dogs for a pulse duration of µs, which is quite close to the theoretical heart/nerve ratio of . estimated by reilly ( ). various studies have shown that the cardiac threshold variability of healthy persons is surprisingly low, which is confirmed by the experimental and clinical experience that pacing thresholds are rather uniform (schmitt et al. ). drugs and changes in electrolyte concentrations can lower thresholds, but not below about % of the normal value (schmitt et al. ). peripheral nerve stimulation has been investigated in various volunteer studies. a systematic evaluation of the available data was presented by schaefer et al. in ; they recalculated published threshold levels, often reported for different gradient coils and pulse shapes in different terms, to the maximum db/dt occurring during the maximum switch rate of the gradient coil at a radius of cm from the central axis of the mr system, i.e., at the border of the volume normally accessible to patients. in fig. . . , the recalculated threshold levels are plotted for the y- (anterior/posterior) and z- (superior/inferior) gradient coils, as compared to model estimates by reilly. as expected, y-gradient coils have a lower stimulation threshold for a given ramp time than x-gradient coils, since the x-z cross-sections of the body are usually larger than the x-y cross-sections. by fitting the hyperbolic strength-duration relationship given above to the mean peripheral nerve stimulation thresholds measured by bourland et al. ( ) in human subjects, schaefer et al. estimated the following rheobase/chronaxie values: . t/s / . ms for the y-gradient and . t/s / . ms for the z-gradient. as shown in fig. . . , the db/dt intensity needed to induce a sensation that subjects described as uncomfortable or even painful was significantly above the sensation threshold. bourland et al. ( ) also analyzed their stimulation data in the form of cumulative frequency distributions, which give, for a given db/dt level, the number of subjects that had already reported perceptible, uncomfortable, or intolerable sensations. they found that the db/dt level needed for the lowest percentile for uncomfortable stimulation is approximately equal to the median threshold for perception, and that the lowest percentile for intolerable stimulation occurs at a db/dt level approximately % above the median perception threshold.
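because the law is linear in the parameters (ḃ∞, ḃ∞ · τc) when written as threshold = ḃ∞ + ḃ∞ · τc · (1/t), rheobase and chronaxie can be recovered from measured thresholds by ordinary least squares, in the spirit of the recalculation by schaefer et al.; the data points in the sketch below are hypothetical stand-ins, since the measured values were not preserved here:

import numpy as np

# hypothetical (ramp time in ms, measured threshold in t/s) pairs
t = np.array([0.1, 0.2, 0.5, 1.0, 2.0])
thresh = np.array([92.0, 56.0, 34.4, 27.2, 23.6])

# db/dt = rb * (1 + tc/t) = rb + (rb * tc) / t, i.e., linear in (rb, rb*tc)
a = np.vstack([np.ones_like(t), 1.0 / t]).T
(rb, rbtc), *_ = np.linalg.lstsq(a, thresh, rcond=None)
print(f"rheobase = {rb:.1f} t/s, chronaxie = {rbtc / rb:.2f} ms")
# with the synthetic data above: rheobase = 20.0 t/s, chronaxie = 0.36 ms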
time-varying magnetic fields can also result in the perception of magnetophosphenes due to the induction of electrical currents, presumably in the retina (cf. sect. . . . ). a unique feature of phosphenes, which are not considered to be hazardous to humans, is their low excitation threshold and their sharply defined excitation frequency of about hz, as compared to other forms of neural stimulation (reilly ). in general, a combination of magnetic gradient fields from all three gradient coils is used in mri; in this case, the biologically relevant time-varying magnetic field is approximately given by the vector sum of the magnetic field components. a detailed discussion of the effect of stimulus shape, number of stimuli, and other experimental settings on stimulation thresholds can be found in (reilly ; schmitt et al. ). (fig. . . : hyperbolic strength-duration expression relating the stimulation threshold, expressed as the rate of change db/dt of the magnetic flux density, to the stimulus duration t, i.e., the ramp time of the magnetic gradient field; the asymptotic stimulus strength, ḃ∞, for an infinite duration is denoted as the rheobase, and the characteristic response time constant, τc, as the chronaxie.) a comprehensive review of the current scientific evidence on the biological effects of low-frequency electromagnetic fields in the frequency range up to khz was published by icnirp in . the majority of the reviewed studies focus on extremely low-frequency (elf) magnetic fields associated with the use of electricity at power frequencies of 50 or 60 hz. according to the icnirp review, cellular studies do not provide convincing evidence that low-frequency magnetic fields alter cell division, calcium homeostasis, or signaling pathways. furthermore, no consistent effects were found in animals and humans with respect to genotoxicity, reproduction, development, immune system function, or endocrine and hematological parameters. on the other hand, a number of laboratory and field studies on humans demonstrated an effect of low-frequency magnetic fields at higher exposure levels on the power spectrum of different eeg frequency bands and on sleep structure. in the light of cognitive and performance studies yielding a number of biological effects, further studies are necessary to clarify the significance of the observed effects for human health. over the last two decades, a large number of high-quality epidemiological investigations of long-term disease endpoints, such as cancer and cardiovascular and neurodegenerative disorders, have been performed in relation to time-varying, mainly elf, magnetic fields. following the icnirp review mentioned above ( ), the results can be summarized as follows. among all the outcomes evaluated, childhood leukemia in relation to postnatal exposure to 50 or 60 hz magnetic fields at flux densities above . µt is the one for which there is most evidence of an association; however, the results are difficult to interpret in the absence of supporting evidence from cellular and animal studies. there is also evidence for an association of amyotrophic lateral sclerosis (als) with occupational emf exposure, although confounding is a potential explanation. whether there are associations with breast cancer and cardiovascular disease remains unresolved. from a safety standpoint, the primary concern with regard to the rapid switching of magnetic gradients is cardiac fibrillation, because it is a life-threatening condition.
in contrast, peripheral nerve stimulation is of practical concern, because uncomfortable or intolerable stimulation would interfere with the examination (e.g., through patient movements) or would even result in its termination. in the current safety recommendations issued by iec ( ) and icnirp ( ), the maximum db/dt values of the time-varying magnetic fields created by gradient coils are limited, for patient and volunteer examinations performed in the normal and the controlled operating mode, to db/dt levels of % and % of the mean perception threshold for peripheral nerve stimulation, respectively. to this end, mean perception threshold levels have to be determined by the manufacturer for any given type of gradient system by means of experimental studies on human volunteers. as an alternative, an empirical hyperbolic strength-duration expression of the form db/dt = ḃ∞ · (1 + τc/t_eff), with the rheobase and chronaxie values laid down in the standard, can be used for the mean threshold for peripheral nerve stimulation (expressed as the maximum change of the magnetic flux density, in t/s). in this expression, t_eff is the effective stimulation duration (in milliseconds), i.e., the duration of the period of monotonically increasing or decreasing gradient. a mathematical definition for arbitrary gradient shapes can be found in the iec standard ( ).
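a scanner-side check of a gradient waveform against the two operating modes might look as follows; the mode fractions (80 %/100 % of the mean perception threshold), the coefficients of the default threshold expression, and the vector-sum combination of the three coil contributions (cf. the remark above on the vector sum) are all stated here as assumptions to be verified against the applicable iec/icnirp documents:

import math

def mean_pns_threshold(t_eff_ms, rheobase=20.0, chronaxie_ms=0.36):
    """assumed default expression for the mean pns perception threshold
    (t/s) as a function of the effective stimulation duration (ms)."""
    return rheobase * (1.0 + chronaxie_ms / t_eff_ms)

def operating_mode(dbdt_x, dbdt_y, dbdt_z, t_eff_ms):
    """combine the three coil contributions as a vector sum and classify
    the stimulus; the 80 %/100 % mode boundaries are assumed values."""
    dbdt = math.sqrt(dbdt_x**2 + dbdt_y**2 + dbdt_z**2)
    threshold = mean_pns_threshold(t_eff_ms)
    if dbdt <= 0.8 * threshold:
        return "normal operating mode"
    if dbdt <= threshold:
        return "controlled operating mode"
    return "not permitted"

print(operating_mode(10.0, 15.0, 18.0, t_eff_ms=0.5))   # -> normal operating mode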
time-varying magnetic fields used for the excitation and preparation of the spin system in mri (b1 fields, cf. sect. . . ) typically have frequencies above mhz. in this rf range, the conductivity of cell membranes is comparable to that of the extra- and intracellular fluid, which means that no substantial voltages are induced across the membranes (foster and schwan ). for this reason, stimulation of nerve and muscle cells is no longer a matter of concern; instead, thermal effects due to tissue heating are of importance. energy dissipation of rf fields in tissues is described by the frequency-dependent conductivity σ(ω), which characterizes energy losses due to the induction and orientation of electrical dipoles as well as the drift of free charge carriers in the induced time-varying electric field (foster and schwan ). the energy absorbed per unit of tissue mass and time, the so-called specific absorption rate (sar, in w/kg), is approximately given by sar ≈ σ e² / ρ = j · e / ρ, where e is the induced electric field, j the corresponding current density, and ρ the tissue density (cf. sect. . . . ). absorption of energy in the human body strongly depends on the size and orientation of the body with respect to the rf field as well as on the frequency and polarization of the field. theoretical and experimental considerations reveal that rf absorption in the body approaches a maximum when the wavelength of the field is of the order of the body size. unfortunately, the wavelength of the rf fields used in mri falls into this "resonance range". in order to discuss the effect of various measurement parameters on rf absorption, let us consider a simple mr sequence with only one rf pulse, such as a 2d or 3d flash sequence. in this case, the time-averaged sar can be described approximately by the expression sar ∝ b0² · α² · (tp/tr) · ns, i.e., the time-averaged sar is proportional
• to the square of the static magnetic field, b0, which means that energy absorption is markedly higher at high-field as compared to low-field mr systems;
• to the square of the pulse angle, α, so that a sequence with a 90° or even a 180° pulse will result in a much higher sar value than a sequence with a low-angle excitation pulse;
• to the duty cycle, tp/tr, of the sequence, i.e., the ratio of the pulse duration tp to the repetition time tr of the pulse or sequence;
• to the number of slices, ns, subsequently excited within the repetition time of a 2d sequence (multi-slice technique, cf. sect. . . ; ns = 1 for 3d sequences).
in the case of a more complex mri sequence with several rf pulses, e.g., a spin-echo or a turbo spin-echo sequence, the contributions of the different rf pulses to patient exposure have to be summed. the most relevant quantity for the characterization of physiological effects related to rf exposure is the temperature rise in the various body tissues, which depends not only on the localized sar and the duration of exposure but also on the thermal conductivity and the microvascular blood flow (perfusion). in the case of a partial-body rf exposure, the latter two factors lead to fast temperature equalization within the body (adair ). based on the bioheat equation, it can be shown (brix et al. ) that for this particular exposure scenario the temperature response in the center of a homogeneous tissue region, which is larger in each direction than the so-called thermal equilibration length, λ, is given by a convolution of the exposure-time course, sar(t), with a tissue-specific system function, exp(−t/τ): t(t) = t_a + (1/c) ∫ sar(t′) · exp(−(t − t′)/τ) dt′, where τ is the thermal equilibration time, t_a the constant temperature of arterial blood, and c the specific heat capacity of the tissue. for representative tissues, equilibration lengths and times are between . and mm and . and min, respectively (brix et al. ). both parameters are inversely related to tissue perfusion and thus vary considerably. in the case of a continuous rf exposure, the temperature rise even in poorly perfused tissues is less than . °c for each w/kg of power dissipated. using a simple model of power deposition in the head, athey ( ) showed that continuous rf exposure over h is unlikely to raise the temperature of the eye by more than . °c when the average sar to the head is less than . w/kg. more complex computations were performed by gandhi and chen ( ) for a high-resolution model of the human body using the finite-difference time-domain method in order to assess sar distributions in the body for different rf coils. their calculations indicate that the maximum sar averaged over g of tissue can be ten times greater than the whole-body average sar ("hot spots").
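the sar proportionality and the first-order thermal response above can be combined into a small numerical sketch; the proportionality constant is left out, and the specific heat capacity and equilibration time are assumed values for illustration:

import numpy as np

def relative_sar(b0, alpha_deg, tp, tr, n_slices):
    """time-averaged sar of a single-pulse sequence up to a
    system-dependent constant: sar ~ b0^2 * alpha^2 * (tp/tr) * ns."""
    alpha = np.deg2rad(alpha_deg)
    return b0**2 * alpha**2 * (tp / tr) * n_slices

ref = relative_sar(b0=1.5, alpha_deg=90, tp=1e-3, tr=10e-3, n_slices=10)
print(relative_sar(3.0, 90, 1e-3, 10e-3, 10) / ref)   # doubling b0 -> 4.0
print(relative_sar(1.5, 45, 1e-3, 10e-3, 10) / ref)   # halving alpha -> 0.25

def temperature_rise(sar, dt_s, tau_s=300.0, c=3500.0):
    """first-order thermal response: discrete convolution of sar(t) with
    exp(-t/tau)/c; tau (equilibration time, s) and c (specific heat,
    j/(kg*k)) are assumed values."""
    t = np.arange(len(sar)) * dt_s
    kernel = np.exp(-t / tau_s) * dt_s / c
    return np.convolve(sar, kernel)[: len(sar)]

# a constant 2 w/kg exposure for 30 min (1-s steps) saturates near
# sar * tau / c, i.e., a sub-degree rise per w/kg for these parameters
rise = temperature_rise(np.full(1800, 2.0), dt_s=1.0)
print(f"steady-state rise ~ {rise[-1]:.2f} deg c")   # ~ 0.17 deg c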
established biological effects of the rf fields used for mr examinations are primarily caused by tissue heating. therefore, it is important to critically evaluate the numerous studies focused on temperature effects, from the cellular and tissue level to the whole-body level, including potential effects on vulnerable persons. in contrast, non-thermal (or athermal) effects are not well understood but seem, as far as this can be assessed at the moment, to have no relevance with respect to the assessment of adverse effects associated with mr examinations. non-thermal effects are those which can only be explained in terms of mechanisms other than increased random molecular motion (i.e., heating) or which occur at sar levels so low that a thermal mechanism seems unlikely (icnirp ). as summarized in a review by lepock ( ), relatively short exposures of mammalian cells to temperatures in excess of - °c result in a variety of effects, such as inhibition of cell growth, cytotoxic changes, alteration of signal transduction pathways, and an increased sensitivity to other stresses such as ionizing radiation and chemical agents. this suggests that damage is not localized to a single target, but that multiple heat-labile targets are damaged. extensive protein denaturation has been observed at temperatures of - °c sustained for moderate periods. the most sensitive animal responses to heat loads are thermoregulatory adjustments, such as reduced metabolic heat production, vasodilatation, and an increased heart rate. the corresponding sar thresholds are between about . and w/kg (icnirp ). the observed cardiovascular changes reflect normal thermoregulatory responses that facilitate the conduction of heat to the body surface in order to maintain normal body temperature. direct quantitative extrapolation of the animal (including primate) data to humans, however, is difficult given the marked species differences in basal metabolism and thermoregulatory ability (who ). at levels of rf exposure that cause body temperature rises of °c or more, a large number of additional, in most cases reversible, physiological effects have been observed in animals, such as alterations in neural and neuromuscular functions, increased blood-brain barrier permeability, stress-associated changes in the immune system, and hematological changes (icnirp ; michaelson and swicord ; who ). thermal sensitivities and thresholds for irreversible tissue damage from hyperthermia have been summarized by dewhirst et al. ( ). the organs most sensitive to acute damage are the testes and brain as well as portions of the eye (lens opacities and corneal abnormalities). the sar threshold for irreversible effects caused by rf exposure, however, is greater than w/kg even in the most sensitive tissues under normal environmental conditions (icnirp ). the effects of heat on the embryo and fetus have been thoroughly reviewed by edwards et al. ( ). processes critical to embryonic development, such as cell proliferation, migration, differentiation, and apoptosis, are adversely affected by elevated maternal temperatures. therefore, hyperthermia in animals during pregnancy can cause embryonic death, abortion, growth retardation, and developmental defects. the development of the central nervous system is especially susceptible to heat. however, most animal data indicate that implantation and the development of the embryo and fetus are unlikely to be affected by rf exposures that increase maternal body temperature by less than °c (who ). in humans, epidemiological studies suggest that an elevation of maternal body temperature by °c for at least h during fever can cause a range of developmental defects, but there is little information on temperature thresholds for shorter exposures (edwards et al. ). humans possess comparatively effective heat-loss mechanisms: in addition to a well-developed ability to sweat, the dynamic range of blood-flow rates in the skin is much higher than in other species.
studies focused on rf-induced heating of patients during mr procedures have been summarized and evaluated in a review by shellock ( ). they indicate that exposure of resting humans for - min to rf fields producing a whole-body sar of up to w/kg results in a body temperature increase of between . and . °c (who ). of special interest is an extensive mr study reported by shellock et al. ( ). in this study, the thermal and physiologic responses of healthy volunteers undergoing an mr examination over min at a whole-body averaged sar of . w/kg were investigated in a cool ( °c) and a warm ( °c) environment. in both cases, significant variations of various physiologic parameters were observed, such as increases in heart rate, systolic blood pressure, and skin temperature. however, all variations were in a range that can be physiologically tolerated by an individual with normal thermoregulatory function (shellock et al. ). generally, these studies are supported by mathematical modeling of human thermoregulatory responses to mr exposure (adair ; adair and berglund , ). it should be noted, however, that heat tolerance or thermoregulation may be compromised in some individuals undergoing an mr examination, such as the elderly, the very young, and people with certain medical conditions (e.g., obesity, hypertension, impaired cardiovascular function, diabetes, fever, etc.) and/or taking certain medications (e.g., beta-blockers, calcium channel blockers, sedatives, etc.) (donaldson et al. ; goldstein et al. ; icnirp ; shellock ). some regions of the human body, in particular the brain, are particularly vulnerable to the thermal stress induced by mild-to-moderate hyperthermia (body temperature less than °c): it affects, for example, cognitive performance (sharma and hoopes ) and can produce specific alterations in the cns that may have long-term physiological and neuropathological consequences (hancock and vasmatzidis ). there have been a large number of epidemiological studies over several decades, particularly on cancer, cardiovascular disease, and cataract, in relation to occupational, residential, and mobile-phone rf exposure. as summarized in a review published by the icnirp standing committee on epidemiology (ahlbom et al. ), the results of these studies give no consistent or convincing evidence of a causal relation between rf exposure and adverse health effects. it has to be noted, however, that the studies considered not only have too many deficiencies to rule out an association but also focus on chronic exposures at relatively low levels, an exposure scenario that is not comparable to mr examinations of patients. as reviewed in the previous section, no adverse health effects are expected if the rf-induced increase in body core temperature does not exceed °c. in the case of infants, pregnant women, or persons with cardiocirculatory impairment, it is desirable to limit body core temperature increases to . °c. as indicated in table . . , these values have been laid down in the current safety recommendations (iec, icnirp) to limit the body core temperature in the normal and controlled operating mode. additionally, local temperatures of the head, trunk, and extremities are limited for each of the two operating modes to the values given in table . . . however, temperature changes in the different parts of the body are difficult to control during an mr procedure in clinical routine.
therefore, sar limits that should not be exceeded in order to keep the temperature rise within the values given in table . . have been derived on the basis of experimental and theoretical studies. as only parts of the body, at least in the case of adult patients, are exposed simultaneously during an mr procedure, not only the whole-body sar but also the partial-body sars for the head, the trunk, and the extremities have to be estimated by means of suitable patient models (e.g., brix et al. ) and limited to the values given in table . . for the normal and the controlled operating mode. (table notes: short-term sar: the sar limit over any -s period shall not exceed times the corresponding average sar limit. a: partial-volume sars as given by iec; icnirp limits sar exposure to the head to w/kg. b: partial-body sars scale dynamically with the ratio r between the patient mass exposed and the total patient mass: normal operating mode, sar = ( − . r) w/kg; controlled operating mode, sar = ( − . r) w/kg. c: in cases where the eye is in the field of a small local coil used for rf transmission, care should be taken to ensure that the temperature rise is limited to °c.) with respect to the application of the sar levels defined in table . . , the following points should be taken into account:
• when a volume coil is used to excite a greater field-of-view homogeneously, the partial-body and the whole-body sars have to be controlled; in the case of a local rf transmit coil (e.g., a surface coil), the local and the whole-body sars (iec ).
• partial-body sars scale dynamically with the ratio r between the patient mass exposed and the total patient mass: for r → 1 they converge to the corresponding whole-body values, and for r → 0 to the localized sar level of w/kg established by icnirp for occupational exposure of the head and trunk (icnirp ); a minimal numerical sketch of this scaling follows the list.
• the recommended sar limits do not relate to an individual mr sequence, but rather to running sar averages computed over each 6-min period, which is assumed to be a typical thermal equilibration time for smaller masses of tissue.
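as announced in the list above, the dynamic scaling of the partial-body sar limit with the exposed-mass ratio r can be written down directly; the coefficients below merely reproduce the convergence behaviour described in the text, using an assumed localized level of 10 w/kg for r → 0 and assumed whole-body limits of 2 and 4 w/kg for r → 1, and should be checked against the current edition of the standard:

def partial_body_sar_limit(r, mode="normal"):
    """partial-body sar limit in w/kg versus the exposed-mass ratio
    r = (patient mass exposed) / (total patient mass), 0 <= r <= 1.
    the coefficients are assumptions chosen to reproduce the convergence
    described in the text, not values quoted from the standard."""
    if not 0.0 <= r <= 1.0:
        raise ValueError("r must lie between 0 and 1")
    if mode == "normal":
        return 10.0 - 8.0 * r   # converges to an assumed 2 w/kg at r = 1
    return 10.0 - 6.0 * r       # controlled mode: assumed 4 w/kg at r = 1

for r in (0.0, 0.5, 1.0):
    print(r, partial_body_sar_limit(r), partial_body_sar_limit(r, "controlled"))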
but even if mr examinations are performed within the established sar limits, severe burns can occur under unfavorable conditions at small focal skin-to-skin contact zones. the potential danger is illustrated in fig. . . by the case of a patient who developed third-degree burns at the calves after conventional mr imaging. in this case, the contact between the calves resulted in the formation of a closed conducting loop and in high current densities near the small contact zone. therefore, patients should always be positioned in such a way that focal skin-to-skin contacts are avoided (e.g., by means of foam pads) (knopp et al. ). (fig. . . : current-induced third-degree burns due to a small focal skin-to-skin contact between the calves during the mr examination; from knopp et al. , with permission of springer-verlag.) to protect volunteers, patients, accompanying persons, and uninformed healthcare workers from possible hazards and accidents associated with the mr environment, it is indispensable to properly control access to the mr environment. the greatest potential hazard comes from metallic, in particular ferromagnetic, materials (such as coins, pins, hair clips, pocketknives, scissors, nail clippers, etc.) that are accelerated in the inhomogeneous magnetic field in the periphery of an mr system (cf. sect. . . . ) and quickly become dangerous projectiles (missile effect). this risk can only be minimized by a strict and careful screening for metallic objects of all individuals entering the mr environment. every patient or volunteer should complete a detailed questionnaire prior to the mr examination to ensure that every item posing a potential safety issue is considered. an example of such a form can be found in shellock and crues ( ) or can be downloaded from http://www.mrisafety.com. next, an oral interview should be conducted to verify the information given on the form and to allow discussion of any questions or concerns the patient may have before undergoing the mr procedure. an in-depth discussion of the various aspects of screening patients for mr procedures and individuals for the mr environment can be found in various publications by shellock (e.g., shellock ; shellock and crues ) and on the webpage mentioned above; here, only a condensed summary of the most important risks and contraindications can be given. all patients (and volunteers) undergoing mr procedures should, at the very least, be monitored visually (e.g., by using a camera system) and/or acoustically (using an intercom system). moreover, physiologic monitoring is indicated whenever a patient requires observation of vital functions due to a health problem or whenever the patient is unable to communicate with the mr technologist regarding pain, respiratory problems, cardiac stress, or other difficulties that might arise during the examination (shellock ). this holds especially in the case of sedated or anesthetized patients. for patient monitoring, special mr-compatible devices are available (shellock ). pregnant patients undergoing mr examinations are exposed to the static magnetic field, to time-varying gradient fields, and to rf fields. the few studies concerning the combined effects of these fields on pregnancy outcome in humans following mr examinations have not revealed any adverse effects, but they are very limited owing to the small numbers of patients involved and to difficulties in the interpretation of the results (colletti ; icnirp ). it is thus advised that mr procedures be performed in pregnant patients, in particular in the first trimester, only after critical risk/benefit analysis and with the verbal and written informed consent of the mother or parents (colletti ). the standard of care is that mr imaging may be used in pregnant women if other non-ionizing forms of diagnostic imaging (e.g., sonography) are inadequate or if the examination provides important information that would otherwise require exposure to ionizing radiation (e.g., fluoroscopy or ct) (colletti ; shellock and crues ). in any case, however, the exposure levels of the normal operating mode should not be exceeded, and the duration of exposure should be reduced as far as possible (icnirp ). mr examinations of patients with implants or metallic objects (such as bullets or pellets) are always associated with a serious risk, even if all procedures are performed within the established exposure limits summarized in the previous sections. this risk can only be minimized by a careful interview of the patient, an evaluation of the patient's file, and by contacting the implanting clinician and/or the manufacturer for advice on the mr safety and compatibility of the implant (medical devices agency ). in any case, mr procedures should be performed only after a critical risk/benefit analysis. it should be noted that having undergone a previous mr procedure without incident does not guarantee a safe subsequent mr examination, since various factors (type of mr system, orientation of the patient, etc.)
can substantially change the scenario (shellock and crues ). in the case of passive implants, e.g., vascular clips and clamps, intravascular stents and filters, vascular access ports and catheters, heart valve prostheses, orthopedic prostheses, sheets and screws, intrauterine contraceptive devices, etc., it has to be clarified whether they are made of or contain ferromagnetic materials. as already mentioned, strong forces act on ferromagnetic objects in a static magnetic field. these forces (astm a) may result in movement and dislodgment of ferromagnetic objects that could injure vessels, nerves, or other critical tissue structures. comprehensive information on the mr compatibility (astm b) of more than implants and other metallic objects is available in a reference manual published by shellock ( ) and online at http://www.mrisafety.com. mr examinations are deemed relatively safe for patients with implants or objects that have been shown to be non-ferromagnetic or only weakly ferromagnetic (shellock and sawyer-glover ). furthermore, patients with certain implants that have relatively strong ferromagnetic qualities may safely undergo mr procedures when the object is held in place by sufficient retentive forces, is not located near vital structures, and will not heat excessively (shellock and sawyer-glover ). however, such examinations should be restricted to essential cases and should be performed on mr systems with a low magnetic field strength. examinations of patients with active implants or life-support systems are strictly contraindicated on conventional mr systems if the patient's implant card does not explicitly state their safety in the mr environment. in addition to the risks already mentioned above, there is the possibility that the function of the active implant is changed or perturbed, which may result in a health hazard for the patient. clinically important examples are cardiac pacemakers, implantable cardioverter defibrillators, infusion pumps, programmable hydrocephalus shunts, neurostimulators, cochlear implants, etc. (medical devices agency ; shellock and sawyer-glover ). the induction of electric currents by rf fields during imaging in implants made from conducting materials can result in excessive heating and thus may pose risks to patients. excessive heating is typically associated with implants that have elongated configurations and/or are electronically activated, as for example the leads of cardiac pacemakers or neurostimulation systems (shellock and crues ). the same holds for electrically conductive objects (e.g., ecg leads, cables, wires, etc.), in particular when they form conductive loops in the bore of the mr system. to avoid severe burns, the instructions for proper operation of the equipment provided by the manufacturer of the implant or device have to be strictly followed. practical recommendations concerning this issue can be found in (shellock and sawyer-glover ). in various reports, transient skin irritations, cutaneous swellings, or heating sensations have been described in relation to the presence of both permanent (cosmetic) and decorative tattoos. these findings seem to be associated with the use of iron oxide or other metal-based pigments that are prone to magnetic field-related interactions and/or rf-induced heating, in particular when the pigments are organized as loops or rings.
according to a survey performed by tope and shellock ( ), however, this side effect has an extremely low rate of occurrence in a population of subjects with tattoos and should not prevent patients, after informed consent, from undergoing a clinically indicated mr procedure (shellock and crues ). as a precautionary measure, a cold compress may be applied to the tattoo site during the mr examination (tope and shellock ).
key: cord- -c xit tf authors: javid, alireza m.; liang, xinyue; venkitaraman, arun; chatterjee, saikat title: predictive analysis of covid- time-series data from johns hopkins university date: - - journal: nan doi: nan sha: doc_id: cord_uid: c xit tf

we provide a predictive analysis of the spread of covid- , also known as sars-cov- , using the dataset made publicly available online by the johns hopkins university. our main objective is to provide predictions of the number of infected people for different countries in the next days. the predictive analysis is done using time-series data transformed on a logarithmic scale. we use two well-known methods for prediction: polynomial regression and neural networks. as the number of training data for each country is limited, we use a single-layer neural network called the extreme learning machine (elm) to avoid over-fitting. due to the non-stationary nature of the time-series, a sliding window approach is used to provide a more accurate prediction.

the covid- pandemic has led to a massive global crisis, caused by its rapid spread rate and severe fatality, especially among those with a weak immune system. in this work, we use the available covid- time-series of infected cases to build models for predicting the number of cases in the near future. in particular, given the time-series up to a particular day, we make predictions for the number of cases in the next τ days, where τ ∈ { , , , }. this means that we predict for the next day, after days, after days, and after days. our analysis is based on the time-series data made publicly available on the covid- dashboard by the center for systems science and engineering (csse) at the johns hopkins university (jhu) (https://systems.jhu.edu/research/public-health/ncov/) [ ] . let y_n denote the number of confirmed cases on the n-th day of the time-series after the start of the outbreak. then, we have the following setup:

• the input consists of the last n samples of the time-series, given by y_n = [y_1, y_2, · · · , y_n].
• the predicted output is t_n = ŷ_{n+τ}, τ ∈ { , , , }.
• due to the non-stationary nature of the time-series data, a sliding window of size w is used over y_n to make the prediction, and w is found via cross-validation.
• the predictive function f( · ) is modeled either by a polynomial or by a neural network, and is used to make the prediction ŷ_{n+τ} = f(y_n).

the dataset from jhu contains the cumulative number of cases reported daily for different countries. we base our analysis on the countries listed in table i. for each country, we consider the time-series y_n starting from the day when the first case was reported. given the current day index n, we predict the number of cases for the day n + τ by considering as input the number of cases reported for the past w days, that is, for the days n − w + 1 to n. we use data-driven prediction approaches without considering any other aspect, for example, models of infectious disease spread [ ] . we apply two approaches to analyze the data and make predictions, or in other words, to learn the function f:

• polynomial model approach: the simplest curve fit or approximation model, where the number of cases is approximated locally with polynomials − f is a polynomial.
• neural network approach: a supervised learning approach that uses training data in the form of input-output pairs to learn a predictive model − f is a neural network.
we describe each approach in detail in the following subsections.

a. polynomial model

) model: we model the expected value of y_n as a third-degree polynomial function of the day number n:

$\mathbb{E}[y_n] = p_3 n^3 + p_2 n^2 + p_1 n + p_0$

the set of coefficients {p_0, p_1, p_2, p_3} is learned using the available training data. given the highly non-stationary nature of the time-series, we consider local polynomial approximations of the signal over a window of w days, instead of using all the data to estimate a single polynomial f( · ) for the entire time-series. thus, at the n-th day, we learn the corresponding polynomial f( · ) using y_{n,w} = [y_{n−w+1}, · · · , y_{n−1}, y_n].

) how the model is used: once the polynomial is determined, we use it to predict for the (n + τ)-th day as

$\hat{y}_{n+\tau} = f(n + \tau)$

for every polynomial regression model, we construct the corresponding polynomial function f( · ) by using y_{n,w} as the most recent input data of size w. the appropriate window size w is found through cross-validation.

b. neural networks

) model: we use the extreme learning machine (elm) as the neural network model to avoid overfitting to the training data. as the length of the time-series data for each country is limited, the number of training samples for the neural network would be quite small, which can lead to severe overfitting in large-scale neural networks such as deep neural networks (dnns), convolutional neural networks (cnns), etc. [ ], [ ] . elm, on the other hand, is a single-layer neural network which uses random weights in its first hidden layer [ ] . the use of random weights has gained popularity due to its simplicity and effectiveness in training [ ]- [ ] . we now briefly describe elm. consider a dataset containing N samples of pair-wise p-dimensional input data x ∈ R^p and the corresponding q-dimensional target vector t ∈ R^q as D = {(x_n, t_n)}_{n=1}^{N}. we construct the feature vector as z_n = g(W x_n) ∈ R^h, where

• the weight matrix W ∈ R^{h×p} is an instance of a normal distribution,
• h is the number of hidden neurons, and
• the activation function g( · ) is the rectified linear unit (relu).

to predict the target, we use a linear projection of the feature vector z_n onto the target. let the predicted target for the n-th sample be O z_n. note that O ∈ R^{q×h}. by using ℓ2-norm regularization, we find the optimal solution of the following convex optimization problem:

$\mathbf{O}^{\star} = \arg\min_{\mathbf{O}} \sum_{n} \| \mathbf{t}_n - \mathbf{O}\mathbf{z}_n \|_2^2 + \lambda \| \mathbf{O} \|_F^2$

where ‖ · ‖_F denotes the frobenius norm. once the matrix O^⋆ is learned, the prediction for any new input x is given by

$\hat{\mathbf{t}} = \mathbf{O}^{\star} g(\mathbf{W}\mathbf{x})$

) how the model is used: when using elm to predict the number of cases, we define x_n = [y_{n−w+1}, ..., y_{n−1}, y_n]^⊤ and t_n = [y_{n+τ}]. note that x_n ∈ R^w and t_n ∈ R. for a fixed τ ∈ { , , , }, we use cross-validation to find the proper window size w, the number of hidden neurons h, and the regularization hyperparameter λ.

in this subsection, we make predictions based on the time-series data which is currently available until today, may , , for τ ∈ { , , }. we estimate the number of cases for the last days of the countries in table i. for each value of τ ∈ { , , }, we compare the estimated number of cases ŷ_{n+τ} with the true value y_{n+τ} and report the estimation error in percentage, i.e.,

$\text{error} = \frac{|\hat{y}_{n+\tau} - y_{n+\tau}|}{y_{n+\tau}} \times 100$

we carry out two sets of experiments for each of the two approaches (polynomial and elm) to examine their sensitivity to newly arriving training samples. in the first set of experiments, we implement cross-validation to find the hyperparameters without using the newly observed samples of the time-series as we proceed through the days' span.
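as a concrete illustration of the local polynomial model above, the following minimal python sketch fits a cubic to the most recent w points of a (log-transformed) case series and extrapolates τ days ahead. the window size, lead, and the synthetic series are illustrative placeholders, not the paper's (elided) settings:

```python
import numpy as np

def poly_predict(y, w=14, tau=7, degree=3):
    """Fit a cubic to the last w points of the (log-scale) case series y
    and extrapolate tau days ahead."""
    n = len(y)
    days = np.arange(n - w, n)                 # day indices of the window
    coeffs = np.polyfit(days, y[-w:], degree)  # least-squares fit of p3..p0
    return np.polyval(coeffs, n - 1 + tau)     # evaluate at day (n-1)+tau

# toy usage on a synthetic, roughly exponential cumulative series
y = np.log10(np.cumsum(np.random.poisson(50, size=60)) + 1)
print(poly_predict(y, w=14, tau=7))
```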
in the second set of experiments, we implement cross-validation in a daily manner as we observe new samples of the time-series. in the latter setup, the window size w is varied with respect to time to find the optimal hyperparameters as we proceed through time. we refer to this setup as 'elm time-varying' and 'poly time-varying' in the rest of the manuscript. we first show the reported and estimated number of infection cases for sweden by using elm time-varying for different τ's in figure . for each τ, we estimate the number of cases up to τ days after the day on which the jhu data was collected. in our later experiments, we show that elm time-varying is typically more accurate than the other three methods (polynomial, poly time-varying, and elm). this better accuracy is consistent with the non-stationary behavior of the time-series data, or in other words, with the fact that the best model parameters change over time. hence, the result of elm time-varying is shown explicitly for sweden. according to our experimental results, we predict that a total of , , and people will be infected in sweden on may , may , and may , , respectively. histograms of the error percentage of the four methods are shown in figure for different values of τ. the histograms are calculated by using a nonparametric kernel-smoothing distribution over the past days for all countries. the daily error percentage for each country in table i is shown in figures - . note that the reported error percentage of elm is averaged over monte carlo trials. the average and the standard deviation of the error over days are reported (in percentage) in the legend of each of the figures for all four methods. it can be seen that daily cross-validation is crucial to preserve a consistent performance throughout the pandemic, resulting in a more accurate estimate. in other words, the variations of the time-series as n increases are significant enough to change the statistics of the training and validation sets, which, in turn, leads to different optimal hyperparameters as the length of the time-series grows. it can also be seen that elm time-varying provides a more accurate estimate, especially for large values of τ. therefore, for the rest of the experiments, we only focus on elm time-varying as our favored approach. another interesting observation is that the performance of elm time-varying improves as n increases. this observation verifies the general principle that neural networks typically perform better as more data becomes available. we report the average error percentage of elm time-varying over the last days of the time-series in table ii. we see that as τ increases, the estimation error increases. when τ = , elm time-varying works well for most of the countries. it does not perform well for france and india. this poor estimation for a few countries could be due to a significant amount of noise in the time-series data, possibly even caused by inaccurately reported daily cases. in this subsection, we repeat the prediction based on the time-series data which is available until today, may , , for τ ∈ { , , }. in subsection iv-a, we predicted the total number of cases in sweden on may , may , and may , . the reported number of cases on these days for sweden turned out to be , , and , respectively, which is within the range of error that is reported in table ii. we show the reported and estimated number of infection cases for sweden by using elm time-varying for different τ's in figure .
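the elm described above admits a closed-form ridge solution, which is what makes the daily re-training behind 'elm time-varying' cheap. the sketch below is a minimal numpy implementation under the stated assumptions (fixed random hidden weights, relu features); the dataset builder, function names, and all hyperparameter values are illustrative, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_dataset(y, w, tau):
    """Inputs are the last w samples, targets the value tau days ahead."""
    X = np.array([y[i - w + 1:i + 1] for i in range(w - 1, len(y) - tau)])
    T = np.array([[y[i + tau]] for i in range(w - 1, len(y) - tau)])
    return X, T

def train_elm(X, T, h=100, lam=1e-2):
    """Closed-form ridge solution for the output weights O, with fixed
    random hidden weights W and relu features z = g(Wx)."""
    W = rng.standard_normal((h, X.shape[1]))
    Z = np.maximum(W @ X.T, 0.0)                           # (h, N)
    O = T.T @ Z.T @ np.linalg.inv(Z @ Z.T + lam * np.eye(h))
    return W, O

def elm_predict(W, O, x):
    return O @ np.maximum(W @ x, 0.0)

# daily retraining ('elm time-varying'): re-fit, and re-cross-validate
# w, h, and lam, each day as new samples of the series arrive.
```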
for each τ, we estimate the number of cases up to τ days after the day on which the jhu data was collected. according to our experimental results, we predict that a total of , , and people will be infected in sweden on may , may , and may , , respectively. histograms of the error percentage of the four methods are shown in figure for different values of τ. these experiments verify that elm time-varying is the most consistent approach as the length of the time-series increases from may to may . we report the average error percentage of elm time-varying over the last days of the time-series in table iii. we see that as τ increases, the estimation error increases. when τ = , elm time-varying works well for all of the countries except india, even though the number of training samples has increased compared to subsection iv-a. in this subsection, we repeat the prediction based on the time-series data which is available until today, may , , for τ ∈ { , , }. in subsection iv-b, we predicted the total number of cases in sweden on may , may , and may , . the reported number of cases on these days for sweden turned out to be , , and , respectively, which is within the range of prediction error that is reported in table iii. we increase the prediction range τ in this subsection, and we show the reported and estimated number of infection cases for sweden by using elm time-varying for τ = , , and in figure . for each τ, we estimate the number of cases up to τ days after the day on which the jhu data was collected. according to our experimental results, we predict that a total of , , and people will be infected in sweden on may , may , and june , , respectively. histograms of the error percentage of the four methods are shown in figure for different values of τ. these experiments verify that elm time-varying is the most consistent approach as the length of the time-series increases from may to may . we report the average error percentage of elm time-varying over the last days of the time-series in table iv. we see that as τ increases, the estimation error increases. when τ = , elm time-varying works well for all of the countries, so we increase the prediction range to days. we observe that elm time-varying fails to provide an accurate estimate for several countries such as france, india, iran, and the usa. this experiment shows that long-term prediction of the spread of covid- can be investigated as an open problem. however, by observing tables ii-iv, we expect the performance of elm time-varying to improve in the future as the number of training samples increases during the pandemic. we studied the estimation capabilities of two well-known approaches to deal with the spread of the covid- pandemic. we showed that a small-sized neural network such as elm provides a more consistent estimation compared to its polynomial regression counterpart. we found that a daily update of the model hyperparameters is of paramount importance to achieve a stable prediction performance. the proposed models currently use only the samples of the time-series data to predict the future number of cases. a potential future direction to improve the estimation accuracy is to incorporate constraints such as infectious disease spread models, non-pharmaceutical interventions, and authority policies [ ].

[ ] christian szegedy, alexander toshev, and dumitru erhan, "deep neural networks for object detection," in advances in neural information processing systems.
[figure: daily error percentage of the last days of the countries for elm and polynomial regression; the legend of each panel reports the average and standard deviation of the error (%) for elm time-varying, elm, poly time-varying, and poly.]

key: cord- -mcit luk authors: gupta, chitrak; cava, john kevin; sarkar, daipayan; wilson, eric; vant, john; murray, steven; singharoy, abhishek; karmaker, shubhra kanti title: mind reading of the proteins: deep-learning to forecast molecular dynamics date: - - journal: biorxiv doi: . / . . . sha: doc_id: cord_uid: mcit luk

molecular dynamics (md) simulations have emerged to become the backbone of today's computational biophysics. simulation tools such as namd, amber, and gromacs have accumulated more than , users. despite this remarkable success, now also bolstered by compatibility with graphics processing units (gpus) and exascale computers, even the most scalable simulations cannot access biologically relevant timescales - the number of numerical integration steps necessary for solving differential equations in a million-to-billion-dimensional space is computationally intractable. recent advancements in deep learning have made it possible to find patterns in high-dimensional data. in addition, deep learning has also been used for simulating physical dynamics. here, we utilize lstms in order to predict future molecular dynamics from current and previous timesteps, and examine how this physics-guided learning can benefit researchers in computational biophysics. in particular, we test fully connected feed-forward neural networks and recurrent neural networks with lstm / gru memory cells, within tensorflow and pytorch frameworks, trained on data from namd simulations to predict conformational transitions in two different biological systems. we find that non-equilibrium md is easier to train and that performance improves under the assumption that each atom is independent of all other atoms in the system. our study represents a case study for high-dimensional data that switches stochastically between fast and slow regimes. resolving such data sets will allow real-world applications in the interpretation of data from atomic force microscopy experiments.
molecular dynamics or md simulations have emerged to become the cornerstone of today's computational biophysics, enabling the description of structure-function relationships at atomistic detail [ ] . these simulations have brought forth milestone discoveries including resolving the mechanisms of drug-protein interactions, protein synthesis and membrane transport, molecular motors and biological energy transfer, and viral maturation, encompassing a number of our contributions [ ] . more recently, we have employed molecular modeling to predict mortality rates from sars-cov- [ ] , showcasing its application in epidemiology. in md simulations, the chronological evolution of an n-particle system is computed by solving newton's equations of motion. methodological developments in md have pushed the limits of computable system-sizes to hundreds of millions of interacting particles, and timescales from femtoseconds ( − second) to microseconds ( − second), allowing all-atom simulations of an entire cell organelle [ ] . high-performance computing, parallelized architectures, speciality hardware, and gpu-accelerated simulations have made notable contributions towards this progress. however, in spite of significant advancements in both development and applications, the computational resources required to achieve biologically relevant system-sizes and timescales in brute-force md simulations remain prohibitively "expensive". notably, md involves solving newtonian dynamics by integrating over millions of coupled linear equations. a universal bottleneck arises from the time span chosen to perform the numerical integration. akin to any paradigm in dynamic systems, the time span for numerical integration is limited by the dynamics of the fastest mode. in biological systems, this span is femtoseconds (fs) or lower, owing to the physical limitations of capturing the fast vibrations of hydrogen atoms. thus, an md simulation of at least a microsecond, wherein biologically relevant events occur, requires the computation of million fs-resolved time steps. each step involves the calculation of the interaction of every particle with its neighbors, which scales as n² or n log n. when n = - million atoms, these simulations are only feasible on peta- to exascale supercomputers. several techniques have been employed to accelerate atomistic simulations, which can broadly be classified into two categories: coarse-graining and enhanced sampling. in the former, the description of the system under study is simplified in order to reduce the number of particles required to completely define the system [ ] . in the latter, either the potential energy surface and gradients (or forces) that drive the molecular dynamics are made artificially long-range so as to accelerate the movements, or multiple short replicas of the system are simulated in order to sample a broader range of molecular movements than a long brute-force md [ ] . a major contention with these techniques is that the simulated protein movements can be attributed neither chemical precision nor a realistic time label [ ] . we explore machine-learning methodologies for predicting the outcomes of md simulations while preserving their accurate time labels. this idea will greatly reduce the computational expenses associated with performing md, making it broadly accessible beyond the current user-base of scientific researchers to high schools and colleges, where computational resources are sparse.
the developments will imminently expedite the efforts of nearly , users of our open-source md engine namd [ ] . in this resource paper, we present two types of data sets, the dynamic correlations within which pose significant challenges to existing machine-learning techniques for predicting the real-time nonlinear dynamics of proteins.

[figure: trajectory (green: high dimension, red: reduced dimension) visualized in d and rendered in d using the molecular visualization software vmd [ ] ; (c) deviation from gaussian behavior (quantified by kurtosis, where a higher value denotes larger deviation) of the distribution of the x, y, and z positions of each of the particles (shown in red in b).]

the underlying physics of these data sets represents out-of-equilibrium and in-equilibrium conditions, wherein the n-particle systems evolve in the presence vs. absence of external perturbations. beyond tracking the nonlinear transformations, these examples also create an opportunity to study whether the prediction accuracy of future outcomes with fs-resolution improves if prior knowledge is utilized to enhance the signal-to-noise ratio of key features in the training set. a number of works in the past have focused on predicting protein structures from protein sequence/compositional information by training on the so-called sequence-structure relationship using massive data sets accrued over the pdb and psi databases [ ] . however, knowledge of stationary d coordinates offers little to no information on how the system evolves in time following the laws of classical or quantum physics. little data is available to train algorithms on such time series information despite the imminent need to predict molecular dynamics [ ] . the presented data sets capture both the linear and nonlinear movements of molecules, resolved contiguously across millions of time points. these time series data enable the learning of the spatio-temporal correlation or memory-effect that underpins the newtonian dynamics of large biomolecules - a physical property that remains obscure to the popular sequence-structure models constructed from stationary data. we establish that the success of any deep learning strategy towards predicting the dynamics of a molecule with fs precision is contingent on accurately capturing these many-body correlations. thus, the resolution of our md data sets will result in novel training strategies that decrypt an inhomogeneously evolving time series. as a publicly accessible resource, our md simulation trajectories of even larger systems ( - particles) [ ] will be provided in the future to seek generalizable big-data solutions to fundamental physics problems. in what follows, we use equilibrium and non-equilibrium md to create high-dimensional time series data with atom-scale granularity. for simplicity, we derive a sub-space of intermediate size composed only of carbon atoms. in this intermediate-dimensional space, where the data distribution is dense and highly correlated, we train state-of-the-art time sequence modeling techniques including recurrent neural networks (rnns) with long short-term memory (lstm) cells to predict the future state of the system (fig. ). we explore how a kirchhoff decomposition [ ] of the many-body problem dramatically enhances the learning accuracy under both equilibrium and non-equilibrium data, even when the number of hidden layers is much smaller than the number of atoms. the hardness of the time series is captured in terms of root mean square deviation (rmsd) errors, computed at different lead-times.
the rmsd between two n-dimensional data points a and b is defined as:

$\mathrm{RMSD}(a, b) = \sqrt{\frac{1}{N}\sum_{i=1}^{N}(a_i - b_i)^2}$

where a and b could be either real or predicted points. we also define history time and lead time to be moving windows of cumulative time steps (in units of fs), respectively in the past and in the future of a given data point in the time series, over which training and predictions are achieved. modeling accuracy was evaluated by varying the amount of historical data points incorporated during the training phase, and then comparing its prediction accuracy against that of a static or linear model. surprisingly, we find that the equilibrium md time series is more challenging to learn, despite the non-gaussian distribution of atoms associated with the non-equilibrium md. henceforth, we discuss how these new data-set resources can be used for future research on modeling high-dimensional, high-frequency, event-driven md time series data. in the recent past, machine learning approaches have been successful in analyzing the results of md simulations. support vector machines and variational auto-encoders have been developed to extract free energy information from md simulations [ ] . kinetic properties of small molecules have also been extracted using neural networks. it has also been shown that neural networks trained on limited data selected from very expensive md simulations can resurrect the entire boltzmann distribution for small proteins [ ] . however, none of these approaches are aimed at resurrecting the real-time (i.e. fs-resolution) molecular movements of biological molecules - one of the central goals of md simulations [ ] . rnns and lstms have been used to predict md [ ] , but the tested data sets fail to wholly capture the dynamical complexity of a biological molecule. a key observation made therein that inspires our current investigations is that training on molecular dynamics beyond particles is improbable. the data sets we present in the next section challenge this seminal bottleneck that must be overcome to forecast md simulations of real biological systems. from a computational perspective, any dynamically evolving system can be regarded as event-driven time series data; in this sense, md simulations are essentially high-dimensional, high-frequency time series data, and sequence modeling techniques like recurrent neural networks [ ] , hidden markov models, and arma can be used to model md trajectories. deep learning has recently emerged as a popular paradigm for modeling dynamically evolving time series and predicting future events. these techniques have also been vastly studied in special application areas like business and finance [ ] , healthcare [ ] , and power and energy [ ] . at room temperature, where biology exists, the newtonian mechanics of the molecules becomes stochastic, as described by the fluctuation-dissipation theorem. the ensuing molecular trajectories converge to boltzmann-distributed ensembles at infinitely long times. it has been established that protein dynamics in cells can be modeled as motions of molecules within a medium that is highly viscous. imposing this so-called friction-dominated condition on the stochastic newton's equations, and assuming that a complete set of the degrees of freedom for describing the dynamical system is known, molecular dynamics is deemed to be a markovian process.
in simpler terms, it is a process for which predictions can be made regarding future outcomes based solely on its present state, and most importantly, such predictions are just as good as the ones that could be made knowing the process's full history. the equation of motion of a particle of mass (m), at position (x) in time (t), within an environment of friction coefficient (γ) becomes:

$m\ddot{x}(t) = -\nabla U(x) - \gamma \dot{x}(t) + \zeta(t)$

where the random force ζ is constrained by requiring the integral of its autocorrelation function to be inversely related to the friction coefficient. however, we often cannot find a complete set of descriptors to probe the molecular dynamics of proteins. the problem becomes particularly challenging once the number of amino acids in the protein sequence becomes more than [ ] (i.e. roughly n = atoms). the associated phase space (of n positions, x = x_1, x_2, ..., x_n, and n momenta) for systems of these sizes (or higher) becomes too extended for physics-based methods such as md to visit all the possible points in the n-dimensional space. this incomplete description of the phase space, together with the well-known finite-size artifacts [ ] , introduces "memory" into any realistic md simulation. introduced originally by zwanzig and used in ref. [ ] , this memory shows up as a "long-time" tail in the auto-correlation functions of atoms undergoing simulation. in fully equilibrated systems, this memory is short-term, vanishing within picoseconds ( − seconds) for the carbon, hydrogen, and oxygen atoms that primarily compose proteins [ ] . in non-equilibrium simulations that are often employed to accelerate md [ ] , the long-time tail stretches to nanoseconds ( − seconds). noting that every integration time step in md is - fs ( − seconds), there exist orders of magnitude in time within which the memory of the system is relevant and offers the opportunity to leverage deep learning techniques for making predictions. computational modeling of any complex dynamics essentially boils down to a multivariate time series forecasting task, and hence time series trajectory data capturing an evolving biological system is necessary to analyze and computationally learn the underlying molecular dynamics. below we first present some basic definitions and notations we will use to characterize the md time series.

- lead time: for a forecasting problem, the lead time specifies how far ahead the user wants to predict the future positions of atoms. predicting far ahead (high lead) enables faster md simulation and, at the same time, makes the forecasting task more challenging.
- history size: next, we must decide how much historical data we wish to use to predict the future positions of atoms. this value is known as the history size.
- prediction window: the prediction window indicates the discrete time-window in the future used for creating the prediction outcome. for simplicity, in this paper, we always use a prediction window of fs.
- prediction error: error is defined as the root mean square deviation (rmsd, eq. ) between real and predicted structures at a given time point. during the learning stages, the error across individual iterations is denoted loss.
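to make the rmsd metric and the history/lead bookkeeping concrete, here is a minimal numpy sketch; the array shapes and the helper names are illustrative, not part of the released data set's tooling:

```python
import numpy as np

def rmsd(a, b):
    """Root mean square deviation between two conformations a and b,
    given as arrays of particle positions (the rmsd equation above)."""
    a, b = np.ravel(a), np.ravel(b)
    return np.sqrt(np.mean((a - b) ** 2))

def history_lead_pairs(traj, history, lead):
    """Build (input, target) pairs from a trajectory of shape
    (frames, atoms, 3): inputs stack the last `history` frames,
    targets are the frame `lead` steps ahead."""
    X, Y = [], []
    for t in range(history - 1, len(traj) - lead):
        X.append(traj[t - history + 1:t + 1])
        Y.append(traj[t + lead])
    return np.array(X), np.array(Y)

# toy usage on a random walk of 10 particles in 3d
traj = np.cumsum(np.random.randn(1000, 10, 3), axis=0)
X, Y = history_lead_pairs(traj, history=8, lead=4)
print(X.shape, Y.shape)  # (989, 8, 10, 3) (989, 10, 3)
```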
an analysis of these data sets will bring to light how the effects of history (or correlation) in the time series data can be described at different lead times and prediction windows to model a real-time, dynamically evolving md time series. the training objective here is to minimize the prediction error for a sufficiently large batch of training instances over a historical time span. we introduce two data sets from two distinct kinds of md simulation systems. illustrated in fig. , the first data set is an equilibrium simulation of the enzyme adenosine kinase (adk). the second one is a steered molecular dynamics (smd), or non-equilibrium, simulation of the -alanine polypeptide helix (fig. ). in smd, an external force is applied to the system along a chosen direction. we applied a force of nanonewton along one end of the -alanine helix, unfolding the protein [ ] . we have generated high- as well as low-dimensional data for both systems. in high dimension, the position of every atom is explicitly defined, resulting in a × coordinate array for adk. in both data sets, the shape transformation of a -dimensional ( d) many-body system is recorded over time. for adk, a transition from an open to a closed d shape is observed due to concomitant rearrangements of particles (fig. b), while in -alanine, a more non-linear helix-to-coil transition is probed by tracking the changes in position of particles (fig. a). beyond such high dimensionality of the data sets, the uniqueness of the equilibrium md time series is in its dynamical evolution - the kinetic behavior stochastically switches between fast and slowly evolving regimes. using the rmsd values of all the particle positions with respect to the very first (t = ) position, we showcase these sudden changes in single-particle as well as collective dynamics in fig. a. for the non-equilibrium time series data of -alanine, the movements occur in the presence of an external force. these simulations produce less noisy data than the equilibrium md of adk (fig. b vs. a). however, given that the shape changes are highly directed, we find that there are multiple classes of single-particle dynamics hidden under a collective behavior. unlike the equilibrium md simulations, where the positions of all the particles are gaussian-distributed about a mean, at least two different classes of particle distributions are observed in the non-equilibrium time series (fig. c vs. c). the distribution of the significant majority of atoms is non-gaussian, reflecting the positional biases from the high external forces to which they are subjected.
the change in rmsd at different lead times also serve as a direct probe for the correlation in the data. if the lead time is short ( or fs) then it is simple to computationally probe the . - . Å scale changes in molecular position (fig. , black and red traces) by analyzing the associated short-time correlations (fig. d) . in contrast, if the lead time is too long ( and fs), then key short-time correlations within the data are missed. thus, the associated small d shape changes may not be accurately learnt at this scale. one advantage of this data set is that all the particles are "well-behaved" and their dynamics is gaussian distributed (fig. c) . thus, an optimal lead time is desired which is sufficiently large (far into the future) to be interesting from a biological standpoint, and at the same time, can be used to train a machine learning model aimed at replacing computationally expensive md. data preparation. a starting d protein model of adk was generated using an x-ray diffraction crystal structure obtained from the pdb [ ] . the atomic coordinates of adk are encoded in the traditional pdb format presenting the x, y, z positions. x-ray is unable to resolve hydrogen atom positions. thus, the position of hydrogen atoms were inferred using the run adk.py script located in the equilibrium md simulation of the github for this project [ ] . thus, a complete initial model was determined. the goal of equilibrium md is to recreate the native dynamics of a protein of interest. therefore, the forces acting on each atom of the protein is defined using a potential energy function or force field. the amber force field, ff sbonlysc, was used for the adk simulation [ ] . an implicit water model, gb-neck , was chosen to capture the equilibrium adk environment; it is computationally efficient and enhances conformational sampling through decreased friction (γ in eq. ) [ ] . after force field and water model selection, the energy of the protein model is minimized. the energy minimization corrects atoms that are in erroneously close contact due to artifacts from structural determination. if uncorrected, the simulation can produce unrealistic forces that cause the simulation to become unstable. once minimized using conjugate gradients, the all-atom model is ready for production simulation. the adk simulation was performed for timesteps with a periodic update frequency of fs, and atomic models were saved every fs. this results in a . nanosecond ( steps× fs/step) simulation of the adk protein, providing in time series of data points. the simulation of adk was performed using the openmm python library [ ] . five copies simulations were performed at a temperature of k. collective dynamics of adk was monitored by computing its rmsd relative to the t = time point (fig. a) . a plateau in this profile suggests that equilibrium is attained at . × fs. the trajectory data, containing time points or snapshots, was initially stored in single precision binary fortran files known as dcd files. the positional coordinates (x,y,z) of all atoms in each snapshot were extracted from the dcd file resulting in a rank- tensor which was ( × × ) for the high dimensional space and ( × × ) for the low dimensional data. the entire simulation can be reproduced with a single openmm python script located in the equilibrium md simulation on github [ ] . life as we know, exists out of equilibrium. 
traditionally, experiments focusing on the non-equilibrium behavior of proteins were performed by either adding heat or inducing chemical perturbations. another factor that can bring proteins out of equilibrium is mechanical stress (e.g. stretching of the molecules). such stretching arises naturally in proteins located in muscle tissue. the response of these proteins to mechanical stress can be studied by investigating an individual domain's response to stretching within atomic force microscopy or afm experiments [ ] . these molecular events are analogous to the process of pulling a rubber band while holding one end fixed in our hand (fig. a). now, we employ non-equilibrium md simulations for computationally recreating the afm experiments. in particular, steered md or smd is used to generate a relevant and challenging data set for learning algorithms to be trained and validated on. it is notable that events from such non-equilibrium pulling experiments, or their equivalent smd simulations, have never been used within an rnn, in particular an lstm, framework for time series forecasting. the challenge in smd is commensurate with that of equilibrium md in that an optimal lead time should be derived respecting the correlation limits of the data. however, the subtleties are twofold: first, for the same lead time steps, the rmsd error bars in smd are much higher (fig. ), consistent with more prominent d shape changes than those observed for equilibrium md simulations of adk (fig. a vs. b). yet, the longer correlation times (fig. e) indicate smoother shifts within the time series. second, there are multiple classes of atoms with different dynamics distributions (fig. c).

data preparation. the -alanine helix was prepared using the avogadro software on a single cpu. the external force acts on the c-terminus of the long helical protein, while the n-terminus region remains constrained. as the molecule is stretched, it undergoes a gradual conformational change, transitioning from an α-helix to a random coil (fig. a). typically, there are two variants of smd: constant force and constant velocity pulling. the equation for the external pulling force (f_spring) acting on the atom in the c-terminal region of the protein is given by

$F_{\mathrm{spring}} = k\,(v t - x)$

here, x is the displacement of the pulled atom in the protein from its original position, v is the prescribed pulling velocity, and k is the spring constant. in the presence of this external force, the equation of motion (eq. ) becomes

$m\ddot{x}(t) = -\nabla U(x) - \gamma \dot{x}(t) + \zeta(t) + F_{\mathrm{spring}}$

for our data set, we adopt the smd with constant velocity (smd-cv) protocol from our open-source namd tutorial [ ] . the smd-cv simulations are performed using the langevin dynamics scheme of md at a constant temperature of k in generalized-born implicit solvent with the charmm m force field [ ] . one end of the molecule (n-terminus) is constrained while the other end (c-terminus) is free to move along the z-axis with a constant speed of . Å/ps and a force constant of kcal/mol/Å , exerting an overall force of nanonewton (fig. a) [ ] . a set of copies of smd is used to generate an ensemble of conformations when subject to smd-cv pulling. all simulations are performed using a recent build of namd (version nightly build) with a time step of fs, a dielectric constant of , and a user-defined cut-off for coulomb forces with a switching function starting at a distance of Å which plateaus to zero at Å. a simulation time of fs is required for extension of the helix to a random coil.
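as a small worked illustration of the constant-velocity protocol, the spring force above can be evaluated along a toy trajectory; the numbers are arbitrary placeholders rather than the protocol's actual settings:

```python
import numpy as np

def smd_cv_force(k, v, t, x):
    """Constant-velocity smd spring force: the restraint point moves at
    speed v, so the spring extension at time t is (v*t - x), where x is
    the displacement of the pulled atom along the pulling axis."""
    return k * (v * t - x)

# placeholder values: k in kcal/mol/Å^2, v in Å/ps, t in ps, x in Å
t = np.linspace(0.0, 100.0, 11)
x = 0.35 * t                      # toy response of the pulled atom
print(smd_cv_force(k=7.0, v=0.4, t=t, x=x))
```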
here, we save the trajectory every fs, mainly to generate a large data set of points to train the lstm model in sect. . the data presented in figs. and are saved at even longer time intervals, namely fs, to reduce the number of time points to for computing lead times and correlations. the full data set of ( or × × ) points, which is used in the lstms below, is accessible through the google drive link provided on github [ ] . a tcl script, smd.constvel.namd, is used to implement the outlined simulation protocol. the script includes all the standard namd parameters, outlined above, to perform smd. this script, together with all other input files, is freely available through github [ ] and the namd website [ ] to reproduce our data set for non-equilibrium md simulations. our md data is documented in tutorial files, scripts, and an openly accessible github page [ ] , so any user with access to a single cpu or gpu node will be able to reproduce the results. the full time series can be loaded, visualized in d, and analyzed for rmsd using the molecular visualization tool vmd (figs. and ). the presented data set exemplifies arguably a first attempt at capturing the entire range of time series variations typical of a biomolecule. we describe two broad classes of data with distinct correlation timescales. more importantly, the data clearly shows how external physical forces can alter time series correlations and provides an avenue to experiment with machine learning models for probing such external factors. accordingly, a data scientist can choose a suite of different learning algorithms to model these fast-evolving, high-dimensional md trajectory data. the equilibrium data at a single-particle level appears to be well behaved with relatively uniform kurtosis values (fig. c), but offers difficulties in training owing to the rapid variability in rmsds (fig. a, multiple shaded regions). in contrast, the non-equilibrium data shows non-gaussian statistics at a single-particle level (fig. c), eliciting complexity at a single-particle level, but manifests smooth changes in the time series when treated together (fig. b). a key question these data sets pose is whether a common learning algorithm can ever be introduced to work with all the limits of biomolecular dynamics. a second question the data sets raise pertains to the identification of limits that are easier to model using popular sequence modeling techniques like rnns with lstm or gru cells, either in isolation or in concert. finally, will the learning algorithms scale if the dimensions of the data sets increase from the hundred-to-thousand variables chosen here for simplicity to the more realistic million-to-billion-dimensional spaces? these three questions also offer the opportunity to think about the use of the existing petascale or the upcoming exascale resources for handling convoluted biomolecular problems with data science methodologies. put together, these data sets place a machine learning expert in a position to address one of the central questions at the interface of the life sciences and computer sciences, namely, to what extent can numerical simulation schemes be by-passed using machine learning tools? the community of computational biophysics, with nearly , namd users and a - fold larger cadre of researchers applying md, will immediately benefit from answering this question. the findings from this data set are further generalizable to any domain with quantitative data on high-dimensional, rapidly fluctuating time series.
due to the recent success of recurrent neural networks (rnns) for modeling time series data [ ] , we conducted an exploratory study with rnns to model the two new dynamically evolving md trajectory data sets. we used long short-term memory (lstm) cells in the hidden layers and trained rnns on both equilibrium and non-equilibrium md simulations to decipher which data set is more amenable to learning. more specifically, we conducted a series of experiments to produce baseline accuracy numbers for lstms as well as to tune the different hyperparameters associated with the same. below we present a brief summary of the experiments that were conducted and report our findings to facilitate in-depth future research in this direction. as a starting point, we set the static model as our baseline, where we assume that the position of an atom at a future timestamp, x_{t+lead}, does not change relative to its last known position, i.e. x_t, where t is the current timestamp. the assumption is incorrect, but it still helps us set a realistic baseline for evaluating the performance of advanced machine learning techniques like lstms. figures a,b (adk) and a,b (smd) show the rmsd distributions of the static model for lead time steps and , respectively. for starters, we trained an rnn with lstm units in the hidden layer, a learning rate of . , history size , and varying lead time steps of { , , , , }. the output layer used linear activation, and mean squared loss was used as the training loss function. below we report some of our key observations from the experiments.

curse of dimensionality and kirchhoff decomposition: we found that learning by treating the entire protein structure at a given timestamp as a single training instance is very challenging due to the high dimensionality of the problem, generating higher errors than the static model. to deal with this issue, we assumed that the positions of the atoms within the protein structure are independent of one another and can be modeled as separate one-dimensional time-series. this so-called kirchhoff decomposition scheme boosted the performance of the lstm significantly.

adk vs -alanine: we report the rmsd of each simulated system, i.e., adk and smd (figs. a and b). we found that the rmsd of the smd simulation of -alanine is one order of magnitude higher than that of the equilibrium md simulation of adk. this is due to the non-equilibrium nature of the former, where an external force is used to pull the system. this difference is also reflected in the static model error at varying lead time steps (figs. and ).

effect of lead time: increasing lead time makes time series forecasting harder, which we expected would justify the use of complex sequence modeling techniques like the lstm. in other words, we hypothesized that an increase in lead time will cause the lstm error to increase less than the static model error. we found this to be true for the -alanine smd simulation. with lead time steps of and , the lstm loss was higher than the static model error. however, with a lead time step of , the lstm performed better than the static model, and the improvement over the static model increased further at even higher lead time steps ( and ). due to lack of space, we only present the results for lead times and (fig. a,b). in contrast, we have not been able to achieve lower lstm losses compared to the static model loss for the equilibrium md simulation of adk, for lead time steps through . the equilibrium md simulation of adk decorrelates much faster than smd, in the picosecond regime (fig. a).
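a minimal pytorch sketch of the per-coordinate setup described above (the kirchhoff decomposition: each atomic coordinate treated as its own one-dimensional series) might look as follows; the hidden size, learning rate, and window lengths are illustrative stand-ins for the paper's (elided) settings, and the static baseline is included for comparison:

```python
import torch
import torch.nn as nn

class CoordLSTM(nn.Module):
    """One-dimensional lstm applied independently to each atomic
    coordinate, with a linear output layer (mse training loss)."""
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                # x: (batch, history, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])  # value `lead` steps ahead

def static_baseline(x):
    """Baseline: the coordinate does not move, x_{t+lead} = x_t."""
    return x[:, -1, :]

model = CoordLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)  # illustrative rate
loss_fn = nn.MSELoss()

def train_step(x, y):                    # y: (batch, 1)
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
    return loss.item()
```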
this yields the interesting and surprising result that equilibrium md trajectories were more difficult to model than the non-equilibrium md trajectories, which is indeed counterintuitive.

effect of history: for this set of experiments, we hypothesized that an increase in history size will reduce the lstm training error, as we are using more information from the past. indeed, the results confirm our hypothesis (figs. a,b and a,b). more specifically, we varied the history size among { , , } and found that increasing the history actually reduces the lstm training errors for both the adk and -alanine trajectories.

effect of learning rate: we trained the lstm network separately while varying the learning rate among { . , . , . , . , . }. we found that rates of . and . were unstable, while . and . were too slow to converge for the smd simulations (figs. ) [results for the md simulations were similar, and are provided on github [ ] ]. thus, we recommend . as the learning rate.

summary of hyper-parameter tuning study: based on our exploratory study, we recommend the set of empirical values for each hyper-parameter shown in table .

with regard to future directions for methods that can be applied to the data set, there are still more ways to improve training with lstms. one possible improvement is through more stacked lstms, which would be able to learn more nonlinear dynamical relationships between the points. other than lstms, we can also borrow from deep learning in natural language processing by utilizing attention models, which have recently been getting state-of-the-art results without the use of a recurrent hidden layer [ ] . another consideration for future directions is the ability to reformulate the d structural input of the data as a d point cloud. there have been recent deep learning architectures used in d point cloud segmentation and classification, such as voxelnet and pointnet [ ] . both architectures leverage the underlying d relationship between points and objects in d space for the supervising task. with voxelnet, the data is voxelized into fixed voxels on which a d convolutional neural network is used. however, with architectures like pointnet, the input can be variable. in this case, a future direction can be the addition of a data set in which the number of atoms per dynamical system can be varied. with architectures that deal with data in d space, there is the consideration of new loss functions. here, we utilized the mse loss in optimizing our lstm. loss functions such as earth mover's distance (emd) and chamfer loss are two of the most notable losses used for d point generation [ ] . moreover, emd can be extended to graphs, which can be useful for learning not only the d geometrical relationships but also the graph relationships between atoms. the external information sought in the current data sets from afm or force measurements to improve temporal correlation can also be derived from other experimental modalities such as x-ray crystallography [ ] or cryo-electron microscopy [ ] . finally, recovery of the all-atom description from an lstm-predicted reduced space of only heavy atoms opens the door to inverse-boltzmann approaches for reverse coarse-graining [ ] . in the present study, we report two new data sets describing equilibrium and non-equilibrium protein dynamics produced by physics-based simulations.
these data sets fill a much-needed knowledge gap in the protein-learning field, providing a synergistic augmentation to the popular existing data sets used for learning molecular structure [ ] . protein dynamics was represented as time-series data and was modeled through a recurrent neural network with lstm cells in the hidden layer. we found that the learning of both data sets was improved when using a kirchhoff decomposition on models with a constant number of hidden layers. the ability to forecast future structure was shown to be dependent on the correlation among the recent past structures. specifically, dynamics within the non-equilibrium molecular dynamics simulations were highly correlated, and thus protein dynamics were effectively learned. conversely, the movements of a protein at thermal equilibrium were poorly correlated, making accurate forecasting more difficult. increasing history size improved the prediction accuracy for both data sets, and the lstm outperformed the static baseline while forecasting at higher lead times. overall, lstms provide an exciting tool to model non-equilibrium protein dynamics. virtually all biologically relevant actions occur out of equilibrium; therefore, these results indicate an exciting advance with far-reaching implications.

on the range of applicability of the reissner-mindlin and kirchhoff-love plate bending models
the protein data bank
pointnet: deep learning on point sets for d classification and segmentation
recurrent neural networks for multivariate time series with missing values
a thorough review on the current advance of neural network structures
openmm : rapid development of high performance algorithms for molecular dynamics
shear viscosity of the hard-sphere fluid via nonequilibrium molecular dynamics
a point set generation network for d object reconstruction from a single image
computational methodologies for real-space structural refinement of large macromolecular complexes
reconstructing potentials of mean force through time series analysis of steered molecular dynamics simulations
cikm md prediction
vmd: visual molecular dynamics
generalized scalable multiple copy algorithms for molecular dynamics simulations in namd
ensemble of multi-headed machine learning architectures for time-series forecasting of healthcare expenditures
boltzmann generators: sampling equilibrium states of many-body systems with deep learning
calculating potentials of mean force from steered molecular dynamics simulations
accelerating molecular simulations of proteins using bayesian inference on weak information
predicting improved protein conformations with a temporal deep recurrent neural network
time series forecasting of petroleum production using deep lstm recurrent networks
financial time series forecasting with deep learning: a systematic literature review
order parameters for macromolecules: application to multiscale simulation
atoms to phenotypes: molecular design principles of cellular energy metabolism
molecular dynamics-based refinement and validation for sub- Å cryo-electron microscopy maps
total predicted mhc-i epitope load is inversely associated with mortality from sars-cov- . medrxiv

key: cord- -we lmrps authors: yoo, geunsik title: real-time information on air pollution and avoidance behavior: evidence from south korea date: - - journal: popul environ doi: . /s - - - sha: doc_id: cord_uid: we lmrps

this study provides new empirical evidence on the relationship between information about air pollution and avoidance behavior.
many countries provide real-time information to describe the current level of air pollution exposure. however, little research has been done on people's reactions to that real-time information. using data on attendance at professional baseball games in south korea, this study investigates whether real-time information on particulate matter affects individuals' decisions to participate in outdoor activities. regression models that include various fixed effects are used for the analysis, with the results showing that real-time alerts reduce the number of baseball game spectators by %, and that the size of the effect is not statistically different from that of air pollution forecasts. the study demonstrates that providing real-time information can be a way to protect the public's health from the threat of air pollution. moreover, the findings suggest that having easy access to the relevant information and an awareness of the risks involved are necessary for a real-time information policy to succeed. the hazards of air pollution are well known, and government authorities worldwide have implemented various policies to protect their people from the threat it presents. providing information on the level of air pollution is one of these efforts and is based on the expectation that people will adjust their behavior in response to the information. thus, the information provided typically includes behavioral guidelines to explain the actions the public should take in response to elevated levels of air pollution. a number of studies have shown that providing information on air pollution, such as forecasts, prompts avoidance behavior (neidell ; graff zivin and neidell ; janke ; altindag et al. ). with developments in information and communication technologies, providing and acquiring information have become easier, and the type of information that can be exchanged has become more diverse than ever before. many countries now provide the public with real-time information on air pollution, which more accurately describes the current level of pollution exposure than what air pollution forecasts can offer. in addition, individuals can obtain this information easily and immediately through smart devices (mobile phones, tablets, etc.). while the expectation is that people will adjust their behavior based on real-time information, little has been studied on people's actual reactions to that information. providing access to information in real time does not guarantee that people will respond to it. therefore, this study analyzes whether real-time information about air pollution triggers avoidance behavior, based on data about air pollution levels and baseball game attendance in south korea from to . since the mid- s, the south korean government has been providing real-time information on air pollutants via a website created for this purpose (www.airkorea.or.kr). the information has also been disseminated through an open api system since december , enabling the public to access the information from various portals or mobile applications. this study focuses on information regarding particulate matter (pm) among the various air pollutants tracked by the south korean government. many studies have investigated the health effects of pm, which is associated with morbidity and mortality from cardiovascular and respiratory diseases (pope and dockery ; epa ). pm also negatively affects non-health aspects, such as human capital formation, cognitive abilities, and labor productivity (neidell ; roth ; shier et al. ).
pm is the most significant threat among the air pollutants in south korea. the annual average pm concentration in south korea as of was μg/m , more than double the world health organization (who) standards for acceptable risk. kim et al. ( ) reported that south koreans perceived "micro dust" as the most significant among public health threats. the most popular mobile application that presents real-time information on pm in south korea, named "misemise," has been downloaded more than one million times from the google play store (as of april ). therefore, it is reasonable to assume that some south koreans would adjust their behavior based on real-time pm information. this study focuses on pm , an aerodynamic particle with a diameter smaller than μm, because information about pm . , the other form of particulate matter, was made available only after . given that the typical avoidance behavior in response to air pollution is to reduce one's outdoor activities, the reaction to real-time pm information is measured using the change in attendance at professional baseball games. baseball is one of the most popular sports in south korea. focusing on data about attendance at professional baseball games has some useful attributes for this study. first, since the baseball season runs from march to october, we observe the extensive variations in pm levels throughout the season. second, given that baseball games are typically held at night during the summer, the effects of ozone, another type of pollutant that triggers avoidance behavior, can be eliminated (neidell ). third, the data are suitable for investigating avoidance behavior considering that the amount of time it takes to play a baseball game (approximately h) is generally longer than for other sports, and the amount of time spent at an outdoor event is directly linked to an individual's exposure to air pollution on highly polluted days. the results of this study show that people do adjust their behavior in response to real-time pm information. i find that attendance at baseball games decreases by approximately % when real-time information shows the level of pm to be bad or very bad, and the results are highly robust to alternative specifications. the effects of real-time information have increased drastically since due to the greater accessibility of the information and heightened sensitivity to pm. i also find that the size of the effect of real-time information is not statistically different from that of air pollution forecasts. these results demonstrate that providing real-time information can be a way to protect people's health from the threat of air pollution and suggest that having easily accessible channels of information and awareness of the risk are necessary for a real-time information policy to succeed. the risks created by air pollution provide incentives to avoid it, and a typical avoidance behavior is to reduce one's outdoor activities. previous studies investigated the relationship between information on levels of air pollution and changes in outdoor activities. neidell ( ), janke ( ), and altindag et al. ( ) reported that information about air pollution such as smog alerts led people to reduce their outdoor activities, reducing the adverse health effects of air pollution. the scope of those studies was more comprehensive than what this study addresses in that they focused not only on the existence of avoidance behavior but also on the health effects.
however, the previous studies did not consider real-time information, which is of growing importance. in this way, this study offers a new perspective. nam and jeon ( ) examined the effect of pm on the number of spectators in attendance at professional baseball games in south korea. the authors reported that a high level of pm on the morning of a game day decreased the number of spectators. they concluded that poor visibility due to the high level of pm caused the reduced attendance but did not consider the effect of information. in contrast, this study shows that information about the level of pm is a key factor in the decline in spectators. there are other notable studies concerning avoidance behavior. moretti and neidell ( ) estimated the welfare costs of avoidance behavior using daily boat traffic as an instrumental variable. graff zivin et al. ( ) showed that responses to information about water quality violations led people to buy bottled water. sheldon and sankaran ( ) reported that poor air quality due to forest fires in indonesia induced people to stay indoors, resulting in increased electricity demand by households in singapore. zhang and mu ( ) and liu et al. ( ) found that a high level of pm increased online searches for and sales of anti-pm . masks and air filters. kim ( ) showed that estimates of health effects based on data from south korea could be biased when avoidance behavior is not considered. yoon ( ) reported that retail sales declined when the level of pm became worse than the "bad" category, which indicated that people were reacting to government-enacted air quality standards. eom and oh ( ) demonstrated that an ambient level of pm indirectly affects avoidance behavior by changing subjective risk perceptions. the south korean government and its local agencies operate more than air quality monitoring stations (as of ) and collect hourly data on various pollutants, including so , no , co, o , pm , and pm . . the national institute of environmental research (nier) of korea receives data from these monitoring stations and disseminates it to the public via various channels, such as a government-operated website (www.airkorea.or.kr), portals, and mobile applications. real-time pm information is provided as numerical values and by categories that are determined by the numerical values as follows: - , good; - , moderate; - , bad; and over , very bad. each of these categories is represented by a different color (blue, green, yellow, and red, respectively) to express the risk levels visually and intuitively. behavioral guidelines associated with each category are provided along with the real-time information, as shown in table . the nier distributes real-time information on pm at various levels, from the individual monitoring station to province-level data. this study uses county-level information that is generated by averaging the pm concentration of monitors within a county. information about air pollution forecasts is also considered in this study. the south korean government has been operating an air quality forecasting system since that provides forecasts of pm and ozone levels for the next day, four times a day at the following times: a.m., a.m., p.m., and p.m. this study uses information from the p.m. forecast because it is disseminated via the main evening news, which is presumed to be the most widely viewed news source. these forecasts are provided in terms of the four categories, namely, good, moderate, bad, and very bad, but not as numerical values.
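to make the category-based alert definition concrete, here is a minimal python helper; note that the numeric breakpoints are stripped from the extracted text above, so the values below are assumed (they follow the commonly cited south korean pm breakpoints, in μg/m) and should not be read as quoted from the paper.

```python
# minimal sketch: map a pm10 reading to the four categories described above.
# the breakpoints are an assumption (common south korean pm10 standard), since
# the exact values are missing from the extracted text.
def pm10_category(ug_m3: float) -> str:
    if ug_m3 <= 30:        # "good" (blue)
        return "good"
    elif ug_m3 <= 80:      # "moderate" (green)
        return "moderate"
    elif ug_m3 <= 150:     # "bad" (yellow)
        return "bad"
    return "very bad"      # (red)

def real_time_alert(ug_m3: float) -> int:
    # the "real-time alert" dummy used later: 1 if the category is bad or very bad
    return int(pm10_category(ug_m3) in ("bad", "very bad"))
```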
since the forecast is disseminated at the province level, all counties within a province receive the same forecast information. data on the pm forecast are available on www.airkorea.or.kr. data on professional baseball games are obtained from the korea baseball organization, including information about the ballpark, the home team, the away team, and the number of attendees per game. the baseball season in south korea runs from march to october, and each team plays approximately games per season. the league consisted of eight teams in , and two teams were added during the analysis period; thus, there were ten teams in the korean professional baseball league as of . three teams changed their home ballparks during the analysis period. (the eight teams existed in , and two teams joined the league in and ; one team moved its ballpark in , and two teams moved in .) only data from the regular season were used; in other words, post-season (playoff/championship) games were not considered. occasionally, a home team plays at a substitute field rather than at its home ballpark. these cases are also excluded from the sample because the number of such games is negligible. weather data are obtained from the climate data open portal (data.kma.go.kr) operated by the korea meteorological administration. this portal provides hourly data from more than weather monitors across the country. for the several counties that have no weather stations, the weather observed from the station nearest to the administrative office of a county is defined to be the weather of that county. table shows summary statistics for the data. the study covers baseball games, and all variables were linked based on location and day. the average number of fans in attendance per game was , , and the average pm concentration for the overall sample was approximately μg/m . over % of real-time pm information was categorized as good or moderate. real-time alerts, indicating that real-time pm information was in the "bad" or "very bad" category, occurred for games. (this summary statistic refers to the real-time information available at the reservation cancelation deadline of a game; the reason for using this variable is explained in section .) forecast alerts, referring to instances when the forecasted level of pm on a game day was expected to be bad or very bad, occurred for games. forecast alerts have a value of before , when the air pollution forecast system did not exist. for those days where real-time alerts occurred, only . % of them were simultaneously affected by forecast alerts. even if the time period is limited to or later, in which the forecasting system was in operation, only . % of the real-time alerts appeared together with forecast alerts. this means that the correlation between the types of alerts may not cause problems of multicollinearity in estimations. (the reason why real-time and forecast alerts do not match each other well is that they cover different regions and periods: the forecast provides information on the daily average pollution levels of provinces, while real-time information represents the hourly information of counties.) (table : behavioral guidelines by pm category, e.g., limit extended or strenuous outdoor activities; people should avoid strenuous outdoor activities; sensitive people should stay indoors. source: www.airkorea.or.kr. the pm classification follows the standard of the south korean government, which is less stringent than the who standard.)
figure displays the variations in attendance, the frequency of real-time alerts as a percentage of games played, and the average pm concentration by year and by month. pm pollution is more severe in the spring; therefore, real-time alerts appear more frequently in the first half of the baseball season. the number of attendees per game is especially large in march and may. the reason for the high average number of spectators in march is that the season starts at the end of march and many people are eager to attend a baseball game at the start of the season, especially the popular first game of the year. may has a large average number of spectators because it is family month in korea. figure shows the locations of the ballparks. the average annual pm levels in counties that have a ballpark ranged from to during the analysis period. a total of ten teams participate in the korean baseball league, but two of them share the same ballpark, so only nine ballparks are shown on the map. figure shows the distribution of the percentage of real-time alert occurrences among games held on the same day (days when the percentage is zero are excluded from the figure). the histogram shows variation in the real-time alerts within a given day, not only for the entire country but also in areas where a number of ballparks are clustered, such as the northwest region. the influence of real-time information is estimated using the real-time pm information available as of the reservation cancelation deadline in the county where the game is to be held. the assumption behind the use of this variable is that people adjust their behavior by canceling their reservations, and that they refer to the real-time pm information available at the deadline by which ticketholders must decide whether or not to cancel. later in the analysis, i verify this assumption. only those seats that have not been reserved can be purchased on-site, and people have to line up for on-site purchases; therefore, those who want to go to a ball game usually make reservations in advance. a report on professional sports in south korea revealed that for the season, only . % of people purchased baseball game tickets on-site (kpsa ). although the rules for canceling a reservation differ across teams, all reservations can be canceled up to to h before the game. the cost of canceling is approximately $ plus % of the ticket price, which varies from $ to $ , and cancelation is not possible after the deadline. actual data on canceled reservations would be helpful in this analysis, but none were available; therefore, i was only able to use the number of attendees in the study. as mentioned above, real-time pm information is provided not only as numeric values but also as categories. although the actual level of pm and the associated category are both pieces of information, people are more likely to respond to the category than to the actual level of pm because each category has its own color and behavioral guidelines corresponding to the risk level. moreover, high levels of pm are directly related to poor visibility or physical reactions, so the effect estimated from the actual level of pm could not be considered entirely as the result of viewing the information. on the other hand, people's reactions to a category can be interpreted as the direct effect of information, as the thresholds for each category are arbitrarily determined by authorities.
therefore, a dummy variable indicating whether the real-time pm is categorized as bad or very bad as of the cancelation deadline is used to estimate the real-time information effects. a number of previous studies have used air pollution forecasts to identify avoidance behavior (neidell ; janke ; altindag et al. ). in this study, however, forecasts can be a confounding factor that affects the estimation of the impact of real-time information. the forecast is associated with both real-time information and the number of attendees simultaneously, so omitting it would cause omitted variable bias. therefore, all regression models in this study include a forecast alert variable indicating whether the forecasted pm for a ballpark location is categorized as bad or very bad. the regression model controls for the average pm level before the game starts. the pm level before the game can affect the number of attendees because people can identify a high level of pollution based on poor visibility or physical reactions (e.g., difficulty breathing, eye irritation). also, the higher the level of pm before a game, the more likely a real-time alert will occur. although pm levels during a game can also be associated with real-time alerts, this is less likely to affect the number of attendees: the attendance data only show the number of people who enter the ballpark; therefore, if people leave the game early due to high levels of pollution, it does not influence the number of attendees. the regression model used to estimate the effect of real-time information is as follows:

y_ct = β0 + β1 ra_ct + f(pm_ct) + β2 fa_pt + g(w_ct) + team + time + ε_ct ( )

where y_ct is the (log) number of attendees at the game held in county (or ballpark) c on day t. ra indicates that the real-time information available at the cancelation deadline is categorized as either bad or very bad; if real-time information affects attendance, the effect will be shown in β1. pm is the average level of pm before the game starts and enters through f(·) as a quadratic function or as μg/m interval dummies. fa is the indicator for forecast alerts, which indicates that the forecasted pm is categorized as bad or very bad; fa is provided at the province level (p), so counties within the same province have the same fa. w represents a set of weather variables, including temperature, precipitation, wind speed, and relative humidity. omitting weather variables could also create omitted variable bias because weather affects both attendance and the level of air pollution; weather variables enter through g(·) as second-order polynomials. team is a vector of home team fixed effects, away team fixed effects, and home-away team fixed effects. home team and away team fixed effects capture the time-invariant characteristics of teams when they are the home team or an away team. the unique relationships between teams can also influence the number of attendees: as shown in fig. , some teams are clustered in the northwest and southeast regions of the country, and two teams share a ballpark (there are ten teams across the country and four teams in the northwest region). this indicates that the number of fans for the away team is likely to vary depending on the location of the home team's ballpark. in addition, rivalries between two teams can affect game attendance. home-away fixed effects, measured by the interaction terms of the home team and away team dummies, account for these unique relationships between teams. time represents a set of time fixed effects including year, month, day of the week, holiday, and the mers outbreak, to capture temporal and seasonal trends in baseball game attendance and real-time alerts. during the analysis period, a few teams changed their ballparks; the interaction terms for the home team and year dummy variables are included in the model to account for such changes.
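as a concrete (hypothetical) rendering of eq. ( ), the specification could be written with the statsmodels formula api roughly as follows; the dataframe df and every column name are invented for illustration and are not taken from the study's code or data.

```python
# sketch of eq. ( ) in statsmodels; df is assumed to be a pandas dataframe
# holding one row per game with the (made-up) column names used below.
import numpy as np
import statsmodels.formula.api as smf

formula = (
    "np.log(attendance) ~ real_time_alert + forecast_alert"
    " + pm_pre_game + I(pm_pre_game ** 2)"             # f(pm): quadratic control
    " + temperature + I(temperature ** 2)"             # g(w): second-order weather terms
    " + precipitation + I(precipitation ** 2)"
    " + wind_speed + I(wind_speed ** 2)"
    " + humidity + I(humidity ** 2)"
    " + C(home_team) * C(year)"                        # home-team, year, and home-team-by-year fe
    " + C(away_team) + C(home_team):C(away_team)"      # away-team and home-away fe
    " + C(month) + C(day_of_week) + holiday + mers"    # remaining time fe
)
result = smf.ols(formula, data=df).fit(cov_type="HC1")  # heteroskedasticity-robust standard errors
print(result.params["real_time_alert"])                 # beta_1, the coefficient of interest
```

the home-team-by-year interaction stands in for the ballpark changes noted above, and swapping the quadratic pm terms for interval dummies would reproduce the alternative functional form.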
the results obtained from the regression specified in eq. ( ) are shown in table . each column provides results for different specifications of the model, but weather, team fixed effects, and time fixed effects are included in all of the specifications. results show that real-time information available about pm at the cancelation deadline reduces baseball game attendance by approximately %. moreover, the coefficient of the impact of real-time alerts is largely unaffected by the functional form of the average pm level before the start time of the game and is not sensitive to the forecast alert. the results suggest that the issuance of real-time alerts affects participation in outdoor activities independently of the actual pm level or the forecast. (table notes: robust standard errors in parentheses; ***p < . , **p < . , *p < . ; the analysis period is from to ; real-time alert indicates that real-time information on pm is categorized as bad or very bad; forecast alert indicates that the forecasted level of pm is categorized as bad or very bad; pm indicates the average level of pm before the game starts and is included as a quadratic function or μg/m interval dummies; weather, team-fe, and time-fe are controlled.) considering nam and jeon ( )'s results reporting that the number of baseball spectators decreased by . % when the average pm on the morning of the game day was high ( μg/m or over), we can infer that real-time information is a key factor for the decrease. in this analysis, reverse causality can exist in that the regional air pollution level could be influenced by game-related traffic (locke ). although home team fixed effects capture the average level of local traffic, the daily traffic variations due to baseball games can affect the level of air pollution and, consequently, real-time alerts. however, even if reverse causality does exist, it would only cause a downward bias on the effect of real-time information because traffic and air pollution are positively correlated. despite the possibility of underestimation, the results of the analysis are statistically significant. this study uses the information available as of the cancelation deadline as the real-time information to which people respond. the underlying assumption is that people adjust their behavior in response to information about the level of pm by canceling their reservations, and that they refer to the information that is available as of the cancelation deadline to make their decision. i check the validity of this assumption by including all of the real-time pm information available before and after the cancelation deadline. real-time alerts occurring within h before and after the deadline are included in the model. since real-time information at a given time can be highly correlated with that of adjacent times, real-time alerts around the deadline are included at -to -h intervals. table and fig. show that real-time alerts occurring after the cancelation deadline have no significant effect on attendance.
this supports the hypothesis that the people who do respond to real-time information adjust their behavior by canceling their reservations prior to the deadline. in addition, although the effects of real-time alerts increase as the deadline approaches, the alert as of the deadline has the largest effect and is the only statistically significant source of information. therefore, we conclude that our underlying assumption, namely, that people adjust their behavior by canceling their reservation and that they refer to information as of the cancelation deadline to decide whether or not to cancel, is reasonable. the effects of real-time information can differ by year if factors such as accessibility and sensitivity to real-time information vary over time. the south korean government began to publish air pollution data through an open api system in december , allowing people to easily check pm information in real time via various portals and mobile applications. furthermore, the international agency for research on cancer (iarc) classified pm as a group carcinogen in october (iarc ). figure shows the google search trend for pm in south korea, revealing significant increases after these events. therefore, in this section, i examine whether the effect of real-time information differs before and after . the interaction term of real-time alerts and a dummy variable indicating the years after is included in the main regression model to estimate this effect. column ( ) of table shows the result of the main regression model, which estimates the average impact of real-time alerts over the entire analysis period. column ( ) shows the differential effect over time. the results show that real-time alerts on pm do not have a noticeable effect prior to and that the impact increases dramatically after . this finding confirms that the increased accessibility of real-time information and sensitivity to the risks of pm exposure have changed the way people respond to real-time information about pm levels. unfortunately, it is not possible to decompose the contributions of accessibility and sensitivity to the overall change due to data limitations. however, we can infer that easily accessible information and education about risks are necessary to the success of a real-time information policy. figure , which displays the real-time information effects by year, also supports this finding. (table notes: robust standard errors in parentheses; ***p < . , **p < . , *p < . ; the analysis period is from to ; real-time alert indicates that real-time information on pm is categorized as bad or very bad; in all analyses, pm, forecast alerts, weather, team-fe, and time-fe are controlled.) the effect of real-time information can also be differentiated by the importance of the particular baseball game. for example, more people may decide to go to the ballpark despite a high level of air pollution when the game is crucial for their team. this section investigates whether there are differentiated effects based on the importance of specific games by including the ranking of the home team and the difference in ranking between the home and away teams.
variables related to ranking are converted as follows: the home team's ranking ranges from to , with representing first place and representing last place, while ranking differences range from to , with larger values indicating that the ranks of the home and away teams are closer. table shows the results. consistent with our expectations, fans are more likely to attend a baseball game when their team is highly ranked, or when their team is playing another team with a similar ranking. however, real-time information about pm levels does have a greater impact on more crucial games. the real-time alert decreases the number of spectators by an additional . % when the home team's ranking increases by one unit. this may be because those spectators whose behavior is affected by the importance of the game are relatively less devoted to their team and are therefore more likely to give up going to a ballpark on a high-pollution day. considering that a one-unit increase in ranking increases the number of spectators by . % when the pm level does not trigger an alert, a real-time alert reduces approximately % of this increase. column ( ) shows that the effect of real-time information is greater for fans who decide to attend a game based on the ranking difference (i.e., the importance of the game to their team). this section compares the magnitude of the effects of real-time alerts and forecast alerts using data after . given that the forecast system was implemented in and that the effects of real-time alerts have surged since then, the data after are deemed most appropriate for this comparison. column ( ) includes the interaction term of real-time and forecast alerts, while columns ( ) and ( ) estimate the effect of one type of alert on days when the other type of alert does not appear. table shows that real-time alerts reduce baseball game attendance by approximately - %, which is comparable to the main results. forecast alerts are shown to decrease the number of spectators by approximately - %. although the regression coefficient of forecast alerts is slightly greater than that of real-time alerts across all specifications, the difference is not statistically significant. (the variance inflation factors (vif) of the real-time alert and the forecast alert are . and . for column ( ), and . and . for column ( ); the vifs reconfirm that the correlation between real-time and forecast alerts does not cause a serious problem in the regression.) therefore, the results suggest that people adjust their behaviors in response to real-time information, and the extent to which they depend on that information is almost the same as their dependence on forecasts. in this section, i conduct additional robustness checks on the effect of real-time information, and the findings support the main results. column ( ) in table estimates the regression with the dependent variable expressed in levels rather than log values. it shows that a real-time alert reduced attendance at baseball games in south korea by approximately individuals, or . % of average attendance. this percentage is slightly larger than the main result but is quite comparable. column ( ) presents the results where dummy variables representing the week are included instead of month dummies. although month fixed effects account for seasonal trends, if time trends exist in our dependent and key explanatory variables within a month, the regression results could show a spurious relationship. the result in column ( ) shows that the effect of real-time information is not affected by the inclusion of week dummies. column ( ) in table represents the result of a multi-pollutant model.
pm is associated with other pollutants, given that gases such as sulfur oxides and nitrogen oxides can be transformed into pm through chemical reactions. this model controls for the average levels of so , co, o , and no before the game, and real-time alerts on these air pollutants are also included. (there were real-time alerts for o , while there were no alerts for the other pollutants.) the result is almost unchanged, even when the other pollutants are considered. in column ( ), the games that attract full-capacity crowds are excluded from the sample. the main regression model can be considered as a censored model with upper limits because each ballpark has a maximum number of spectators it can accommodate. i perform an analysis that excludes the games that attracted full-capacity crowds ( cases) to remove any distortion related to these upper limits. the result using the restricted sample shows that the effects are still largely negative and statistically significant. the models in columns ( ) and ( ) investigate the impact of real-time information on indoor activities. in , nexen, one of the professional baseball teams in korea, changed its home stadium to the gocheok skydome, the first and only domed stadium in korea. the influence of information about pm on games played in this stadium could differ from that of other ballparks if people believe that games held in the domed stadium are not affected, or are significantly less affected, by pm. column ( ) estimates the differentiated effect of the domed stadium by including the interaction term of real-time alerts and the gocheok skydome dummy in the model. (the gocheok skydome fixed effect is already captured in the main model because the interaction terms of home teams and year dummy variables are included in that model.) separately, column ( ) examines the effect of real-time pm information on basketball game attendance, as all basketball games are held indoors. the results in columns ( ) and ( ) show that attendance at games held in the gocheok skydome, as well as at basketball games, is negatively influenced by real-time pm alerts. this finding may imply that fewer people are willing to go out on highly polluted days, resulting in a lower number of visitors even to indoor facilities. (table notes: robust standard errors in parentheses; ***p < . , **p < . , *p < . ; the analysis period is from to ; real-time alert indicates that real-time information on pm appears as bad or very bad; in all analyses, pm, forecast information, weather, team-fe, and time-fe are controlled; in the multi-pollutant model, so , co, o , and no are considered; data on basketball games of the - and - seasons are used for column ( ).) this study investigates the impact of real-time information regarding the level of pm on outdoor activities using data on attendance at professional baseball games in south korea. the main results show that real-time alerts reduced the number of spectators at baseball games by approximately %. this result is robust under various model specifications. the finding suggests that people adjust their behavior based on real-time information, and that the dependence on real-time information about pollution levels is not statistically different from the dependence on air pollution forecasts. this study uses the real-time information about pm levels that is available as of the deadline for canceling baseball game reservations as the real-time information to which people respond, and additional analysis confirmed that the use of this variable is reasonable.
i find that the effect of real-time information has dramatically increased since due to a change in the accessibility of the information and the public's sensitivity to the risks associated with pm exposure. real-time information on pm has a greater impact on spectators whose attendance is affected by the importance of the game. in addition, i find that the desire to avoid air pollution affects attendance even at indoor facilities. with the development of technology, authorities can more easily provide information to the public about air pollution in real time. despite its growing importance, little research has been done regarding the impact of real-time information. this study identified that real-time information can be a way to protect people's health from the threat of air pollution by triggering avoidance behavior. these findings may apply not only to air pollution but also to other fields of health policy where real-time information can be provided and applied in a practical way. the results also indicate that for a real-time information policy to succeed, authorities should provide real-time information through easily accessible channels such as mobile applications or portals and should increase people's awareness of health risks through education. unlike some previous research, this study focuses only on a behavioral response and does not consider the costs or benefits of a specific action. unfortunately, i was not able to analyze the differential effects with respect to groups with greater-than-average health risks. nevertheless, this study is meaningful in that it provides new empirical evidence on the impact of real-time information and how it can help improve health policy decisions.
chinese yellow dust and korean infant health. social science and medicine
health risks from particulate matters (pm ) and averting behavior: evidence from the reduction of outdoor leisure activities
air quality index: a guide to air quality and your health
days of haze: environmental information disclosure and intertemporal avoidance behavior
water quality violations and avoidance behavior: evidence from bottled water consumption
air pollution and cancer: iarc scientific publication no
air pollution, avoidance behaviour and children's respiratory health: evidence from england
air pollution, health, and avoidance behavior: evidence from south korea, working paper
national risk awareness of public health issues and application for future policy developments
avoidance behavior against air pollution: evidence from online search indices for anti-pm . masks and air filters in chinese cities
estimating the impact of major league baseball games on local air pollution
pollution, health, and avoidance behavior: evidence from the ports of los angeles
a study on the impact of air pollution on the korean baseball attendance
information, avoidance behavior, and health: the effect of ozone on asthma hospitalizations
air pollution and worker productivity
health effects of fine particulate air pollution: lines that connect
air pollution, educational achievements, and human capital formation
averting behavior among singaporeans during indonesian forest fires
ambient air pollution and children's cognitive outcomes
effects of particulate matter (pm ) on tourism sales revenue: a generalized additive modeling approach
air pollution and defensive expenditures: evidence from particulate-filtering facemasks
acknowledgments: this article is based on the first chapter of the author's ph.d. dissertation at seoul national university. the author would like to thank chulhee lee, dea-il kim, sok chul hong, jungmin lee, sung won kang, and anonymous referees for their useful comments and suggestions. funding: this research was supported by the bk plus program (future-oriented innovative brain raising type, b ) funded by the ministry of education (moe, korea) and the national research foundation of korea (nrf).
key: cord- - tc ksf authors: schaap, andrew; weeks, kathi; maiguascha, bice; barvosa, edwina; bassel, leah; apostolidis, paul title: the politics of precarity date: - - journal: contemp polit theory doi: . /s - - -z sha: doc_id: cord_uid: tc ksf
the forms that political agency and solidarity might take in response to precarity, and the appropriate site within which precarious social conditions can be contested and transformed, are controversial. precarity refers to a situation lacking in predictability, security or material and social welfare. importantly, this condition is socially produced by the development of post-fordist capitalism (which relies on flexible employment practices) and neoliberal forms of governance (which remove social protections) (see azmanova, ). precarity entails social suffering, which is manifested in the declining mental and physical health of both working and 'out of work' people and compounded by the attribution of personal responsibility to individuals for their politically induced predicament (apostolidis, , pp. - ). precarity leads to social isolation as workers find themselves segregated and alienated by work processes while the capacity to sustain community is undermined (pp. - ). moreover, precarity leads to temporal displacement, with precarious workers finding they have no time to do much else than work: they must constantly make time to find and prepare for work and, in doing so, become out of sync with the normal rhythms of social life (pp. - ). precarity involves social dislocation as people are forced to relocate to adapt to precarious situations at the same time as their movements are constrained and policed (pp. - ). importantly, precarity is distributed unequally, with people of colour, women, low-status workers and many in the global south experiencing its most devastating effects. at the same time, however, some of its aspects penetrate all social strata.
as apostolidis ( , p. ) puts it, 'if precarity names the special plight of the world's most virulently oppressed human beings, it also denotes a near-universal complex of unfreedom'. recognizing that anti-capitalist struggle has always been a fight for time, apostolidis ( , p. ) reflects on how this fight should be adapted to our present political conjuncture. to develop this vision of radical democratic politics, he turns to the experience of migrant day labourers to both diagnose contemporary social pathologies and envision alternative social possibilities. the research for the book is based on apostolidis's involvement in the activities of two worker centres located in seattle, washington, and portland, oregon. in addition to participating in various activities of the centres (such as staffing phones and running occupational health and safety sessions), the research team conducted interviews with migrant day labourers. through interpreting the interviews, apostolidis practices a kind of political theory inspired by paulo freire, which he characterises as 'critical-popular analysis' (p. ). by attending to the self-interpretations of the research participants, apostolidis characterises precarity and considers the possibility of its transformation in terms of four generative themes around which the book is structured. the first three themes speak to the experience of precarity: 'desperate responsibility', 'fighting for the job' and 'risk on all sides, eyes wide open'. the fourth theme envisions an anti-precarity politics in terms of a 'convivial politics'. as apostolidis acknowledges, there is an ethnographic dimension to this project since it provides a thick description of the everyday experiences and practices of migrant day labourers. however, it also entails critical-popular analysis since apostolidis aims to co-create political theory with the research participants. he does so by staging a constructive dialogue between the self-interpretations and practical insights of day labourers and the systematic and defamiliarized perspective afforded by critical theory. the fight for time not only provides insight into how some of the most vulnerable people in society experience, negotiate and resist precarity: from this social perspective, it aims to generate a wider understanding of what agency all working (and 'out of work') people have to challenge the precaritisation of social life. as such, the book pivots on a fundamental distinction between day labour as exception and day labour as synecdoche. as kathi weeks explains below, this paradigmatic understanding of the precarity of day labouring enables a perspectival shift from the singular experiences and ideas of migrant day labourers to the more general social condition of precarity and the possibility of its transformation. on the one hand, apostolidis considers those exceptionalising forms of precarity that dominate day labourers' lives, differentiating them from other members of society. on the other hand, however, apostolidis considers the significance of day labour as synecdoche for how precarity permeates social relations on a much broader social scale. a synecdoche is a figure of speech in which a part represents the whole. an often remarked-on synecdoche in political language is that of the people, whereby the poor (those who do not participate in politics) speak in the name of the citizenry (the people as a whole).
similarly, apostolidis treats day labour as synecdoche, according to which the exceptional forms of precarity experienced by labourers might make visible the precarity that increasingly conditions all social relations. in the final chapters, apostolidis explores how worker centres might also function synecdochally insofar as the purpose of association is construed not only instrumentally, as protection against the risks associated with precarity, but in terms of their constitutive potential to sustain convivial networks of political possibility for more mutually supportive, creative and pluralistic forms of solidarity than those afforded by traditional unionised spaces. it is in these spaces, which are both mundane and potentially extraordinary, that apostolidis discerns a nascent form of radical democratic politics that consists in a struggle against precarity. this entails three key elements: first, the refusal of work, i.e. the refusal to allow one's life to be consumed according to one's role as worker within capitalist social relations; second, the constitution of spaces for egalitarian social interaction that resist the imperatives of neoliberal governance; and third, the reclamation of people's time from capitalist and state powers (p. ). this recuperation of time (the time robbed from people's lives, which is symptomatic of alienated labour) is fundamental to understanding how day labour might function as synecdoche both of the wider social condition of precarity and the possibility of its transformation. as apostolidis explains, 'working people are running out of time and living out of time' (p. ; emphasis in original). in this context, he suggests, day labourers' socialized activities within the 'time-gaps' of the precarious work economy indicate how the 'time of everyday precarity' might be remade into 'novel, unpredictable, and politically generative temporalities' (p. ). the contributors to this critical exchange engage with two key aspects of the politics of precarity. the first relates to the subject of an anti-precarity politics and the extent to which the exceptional but inevitably partial experiences of day labourers can function as a synecdoche for the precarity of all. edwina barvosa questions whether identification with precarity provides an adequate basis for an emancipatory politics, given that it may condition unreflexive modes of action. bice maiguashca suggests that an intersectional politics would require attending to multiple exceptions, each with their own set of experiences and aspirations, as the basis for a coalitional anti-precarity politics. leah bassel similarly advocates building a politics of migrant justice from the knowledge and experiences that are generated by a matrix of oppression, which requires acknowledging struggles against patriarchy and racism as well as capitalist domination. in this context, she emphasises the political imperative of making settler colonialism visible in any analysis of migrant justice, including acknowledging the social position of migrants as settlers. in contrast, kathi weeks highlights how certain appropriations of the marxian category of lumpenproletariat resonate with apostolidis's synecdochal interpretation of day labour. as such, it can be interpreted as a conceptual articulation of a heterogeneous, rather than a homogenizing, political subject.
indeed, in his response, apostolidis clarifies that the use of the term synecdoche indicates that the perspectival shift from the experience of day labour to the general social condition of precarity is intended as a contingent act of representation rather than a reductive empirical truth. the second issue relates to the mode and site of political organizing against precarity, encapsulated in apostolidis's demand of 'workers' centres for all'. weeks emphasises the urgency of politicizing workplace death and injury, which is obscured by the managerial appropriation of discourses of health and well-being that ties them to the increased productivity of workers. yet, she is concerned that worker centres might be susceptible to co-optation. moreover, she wonders whether worker centres require embodied social interaction to be effective or might also be realised in virtual spaces. bassel highlights how such anti-precarity spaces are both sustained by the affective labour of women and may reproduce other forms of oppression. maiguashca wonders what the visionary pragmatism that apostolidis ascribes to day laborers has in common with the principled pragmatism that she and catherine eschle observed among feminist activists involved in the global justice movement. barvosa questions the assumption that global inequality is most effectively redressed through the mobilization of oppressed groups according to a salt-of-the-earth script. she invokes instead an alternative keep-only-a-competency script, according to which social inequality might be more effectively reduced by the voluntary giving of the wealthy. in response, apostolidis elaborates on the benefits of the critical-popular approach he adopts in the book. while the practical focus of the fight for time supports a coalitional politics as a key mode of struggle, apostolidis highlights the limits of a 'coalitional epistemology', which would require a cumulative assemblage of particularised knowledges prior to envisioning a desirable form of mass solidarity. lois mcnay ( ) has rightly highlighted how radical democratic theory risks becoming 'socially weightless' to the extent that it treats the social world as contingent, devoid of any significance of its own and able to be reshaped in limitless ways through political action. radical democrats tend to over-estimate the agency of members of oppressed groups when they neglect the mundane experiences of social suffering, which undermine individuals' capacity to participate in politics (mcnay, , pp. , - ). as this critical exchange demonstrates, the fight for time challenges theorists of radical democracy to recognise the weight of the world while reflecting on how political agency is shaped, constrained and enabled by the conditions that it seeks to transform. moreover, it challenges us to reflect on how political solidarity is possible across the differences and inequalities that are currently being exacerbated and intensified by the social production of precarity in response to the covid- pandemic.
andrew schaap
the future of anti-precarity politics
the discussion that follows is constructed around three insights gleaned from the fight for time about how to formulate an anti-precarity politics in the u.s. today. the first concerns one target for such a politics, the second its political subject, and the third one of its organizational sites.
all three draw on apostolidis's approach to day labouring as both singular and paradigmatic, as at once an exceptional case and an exemplar of precarious work in the contemporary economy. i will begin with one of the targets of an anti-precarity politics apostolidis identifies that seems critically important today: publicizing and politicizing the incidents of work-related death and injury. this is one of the aspects of day labouring which might be distinctive insofar as it is more hazardous than many other jobs, but it is also appallingly common to precarious work under post-fordism more generally. (if we include the household as a site of unwaged work as well, the rate of workplace injury and death increases dramatically.) apostolidis mentions briefly an encounter with a nurse who talked about the dangers of working intimately with bodies in need, and this certainly squares with the literature on other forms of care work, especially of home health aides (one of the fastest growing jobs in the u.s.), whose privatized places of work, and complex as well as under-regulated employment relations, can easily render workers unsafe. publicizing this issue is difficult because, as apostolidis notes, the problem of workplace death and injury is strangely absent from popular consciousness. public awareness is only occasionally piqued when massive disasters are reported: 'intervallic evocations of shock enable an overall scheme of normalization' (p. ). the anarchist polemicist bob black, in his essay 'the abolition of work', speaks to this normalization - using his own inimitable brand of sarcasm in a bid for attention to the issue - by claiming that we have made homicide a way of life: 'we kill people in the six-figure range (at least) in order to sell big macs and cadillacs to the survivors' (black, , p. ). i was struck by the effort with which ferguson ( ), in her book on emma goldman, attempts to make visible the violence that capital and the state used against workplace organizing in the late th and early th centuries, which was rarely reported at the time and remains largely absent from our history books. ferguson ( , p. ) even offered, to powerful effect, a visual aid in the form of a six-page list, a 'bloody ledger', of what she could find of the documented instances of violence levied by public and private armies against striking or resistant workers. for the most part, this spectacular, overt wielding of force and violence over workers by the state and capital has been replaced by brutality meted out through the tools and within the routines of the labour process, such that the perpetrators are typically less directly involved or clearly identifiable. i agree with apostolidis when he argues that anti-precarity political activism requires 'a self-conscious, strategically eclectic, affectively inventive politics of the body' (p. ). the trick, as i see it, is how not only to publicize but also to politicize the issue of bodily harm, given how extensively the idiom of health has been rendered amenable to the logics and aims of biopolitical management. what vocabulary can be used when the seemingly most obvious and most legible candidate, the language of health, has become so tightly sutured to measures of productivity and complicit with the 'workplace wellness' programs dedicated to its restoration and maximization?
although it may still be a language through which the problem of work-related death and injury can be publicized, particularly in light of the ways it is currently deployed to pathologize various modes of indiscipline, i am less certain that the individualizing and biologizing vocabulary of health can be used as a tool of work's politicization. the second aspect of the analysis that i want to consider once again draws on the day labourer as both a specific figure and an archetype of precarious work in order to think further about how to conceptualize a political subject adequate to a broad anti-precarity politics. the case of day labourer activism would seem to lend support to the proposition that the marxist category of the lumpenproletariat is once again resonant. the concept is not offered as a form of self-identification, but rather as a mechanism of conceptual articulation, particularly across lines of gender, race, and citizenship, that might serve as alternatives to the analytical and political categories of proletariat and working class. famously disparaged by marx and engels as the sub-working class, or, more precisely, a de-classed and disparate collection that includes vagabonds, former prisoners, pickpockets, brothel keepers, porters, tinkers, and beggars (marx, , p. ), the lumpenproletariat was negatively contrasted to the upstanding 'labouring nation' exemplified by the economically and socially integrated - hence, powerful and politically reliable - industrial proletariat. (although it should be noted that marx and engels include some discards from other classes as well, including the bourgeoisie.) even the unemployed members of the industrial reserve army were posited as fully inside capitalist relations, as opposed to the surplus population relegated to the outside: that subaltern, disorganized, and politically untrustworthy non-class of people 'without a definite occupation and a stable domicile' (engels, cited in draper, , p. ). engels included day-laborers in his list of the lumpenproletariat, and those who have since tried to reclaim and revalue the category - most notably, bakunin, fanon, and the black panthers - have added as well various modes of petty criminality, maids, sex workers, and 'the millions of black domestics and porters, nurses' aides and maintenance men, laundresses and cooks, sharecroppers, unpropertied ghetto dwellers, welfare mothers, and street hustlers' with 'no stake in industrial america' (brown, , p. ). while i am interested in the category as a way to make particular connections among prison workers, domestic workers, day laborers, sex workers, laborers in various underground economies, and undocumented migrants, it has also been used to identify linkages among a host of precariously employed people (see, for example, bradley and lee, ). indeed, refusing the original distinction between proletariat and lumpenproletariat, the latter category could serve as the general designation that links the lumpen to the proletariat through the hinge category of the precariat. engels once criticized kautsky for using the label proletariat as inclusive of what engels sought to set apart as the lumpen class; kautsky's proletariat was a 'squinty-eyed' concept because it looks in both directions, thereby blurring an important distinction (draper, , p. ). perhaps today the lumpenproletariat could serve as a squinty-eyed, broad category, more adequate to a u.s. political economy where the differences between formal and informal employment, employment and unemployment, work and nonwork are breaking down.
political economy where the differences between formal and informal employment, employment and unemployment, work and nonwork are breaking down. the specific advantages of this formulation of the lumpen category include its breadth. stallybrass ( , p. ) notes how the lumpenproletariat is often described in terms of the 'spectacle of multiplicity' it evokes in contrast to the unified sameness of the conception of the proletariat. this heterogeneous breadth would seem especially appropriate to a political economy in which, as apostolidis notes, rather than determine who exactly counts as a precarious worker, 'the better question might be: who does not belong to the vast population of the precaritised?' (apostolidis , p. ; emphasis in original). another attraction of the concept is how marx and engels's pejorative characterization of the lumpen class betrays some of the ways that the moralized understanding of work and family - recall the description of the lumpen as lacking or marginal to the stabilizing force of both occupation and family - haunts their analyses. for this reason, some, myself included, are interested in how the lumpenproletariat can, as thoburn ( , p. ) notes, be figured as the 'class of the refusal of work' - and, i would add, the refusal of family. finally, i am interested in how it was conceived as politically unreliable in a way that seems more realistic than the tendency for some to attribute some kind of special 'wokeness' to the working class, only to be disappointed when they turn out to be politically erratic, sometimes acting against what are taken to be their class interests. the third and last point of particular interest for me in apostolidis's theorizing about the politics of work today was the argument about the worker centre as a mode of labour organizing for precarious workers. in thinking about analogous organizational innovations, two examples come to mind. both share some resemblances with the worker centre even if they are associated with more privileged workers. the first is what might be characterized as a dystopian version of the worker centre that goes by the label coworking. interestingly, coworking originated from below as activist projects to create spaces of community and collaboration among elements of the white-collar precariat, but as de peuter et al. ( , p. ) note: 'inside a decade, an innovation from below was drawn out of the margins, harnessed by capital and imprinted with corporate power relations'. today, by way of these global real estate ventures, capital can both appropriate the value waged workers create and charge them rent, just as we pay for the households where so much of our free reproductive labour is enacted. but what might seem quite distant from the worker centres apostolidis describes comes a little closer if we take seriously the contradictory (merkel, ) or ambivalent (de peuter et al., ) status of coworking, which may provide opportunities for the convivial mutualism that apostolidis finds in the worker centre while also interpellating members as entrepreneurial individuals, and which 'is animated by a tension between accommodating precarity and commoning against it' (de peuter et al., , p. ). i am left with a question that i think might be worth pursuing: is coworking best understood as a specular image against which we can recognize the progressive potential of the worker centre, or is it a cautionary tale about its potential to be co-opted?
the second comparison is to a very different model of labour organizing for precarious workers. this is a project based in new york city called wage, an acronym for working artists and the greater economy. it started in as a project committed to helping artists be remunerated for all the work they do with non-profit arts organizations and museums. their 'womanifesto' says they demand payment 'for making the world more interesting' (wage, ). among other initiatives, wage's efforts involve knowledge production about various arts organizations and the contracts they make with independent artistic workers, the development of a platform that helps artists negotiate fair compensation, and a certification for which arts institutions can apply. this approach to organizing precarious workers is comparable to the model of the worker centre in the sense that each of the projects seeks at once to facilitate work and to acknowledge anti-work critical languages and agendas. one of the questions that the comparison with this project raises is whether the forms of convivial mutualism and politicization apostolidis found in the worker centre require the kind of 'embodied social interaction' (p. ) and face-to-face encounters that platform models of organizing do not necessarily prioritize.

kathi weeks

in , i co-authored a book with catherine eschle entitled making feminist sense of the global justice movement, which sought to make visible, audible and intelligible a strand of feminist anti-capitalist activism that was being consistently ignored in the international relations and social movement literatures (eschle and maiguashca, ). driven by the conviction that taking seriously the words and deeds of the women engaged in these struggles would not only yield a more intricate and complete empirical map of the movement, but also prompt a re-conceptualisation of its meaning and trajectory, we embarked on fieldwork in several countries as well as interviews with activists over a period of several years. by seeking to expose the gendered power relations that marginalise women within the world social forum process, as well as in the academic literature about this movement, and by choosing to speak to and from the feminist struggles that emerged to confront them, the book was written in solidarity with feminist anti-capitalist activists. paul apostolidis' book the fight for time encapsulates a very similar kind of intellectual-political project, as it also seeks to capture the self-understandings of migrant day labourers in their everyday struggles, to reflect on how they resonate with contemporary critical theoretical concepts and to learn how, taken together, these empirical and conceptual insights may lead us to a renewed vision of what a left politics might look like for our age. like our book, paul's is unashamedly political in intent and, as such, it embodies a form of 'militant research', which 'activates enlivening moments of contact between the popular conceptions of day labourers and scholars' attempts to describe and account for precarity in sociostructural terms' (p. ). like our project, paul's research wants to bring what has been rendered marginal, both politically and academically, to the centre of our scholarship and theorising. and like my own work, more generally, paul's is driven by a commitment to revitalising both the theory and practice of left politics.
in my contribution to this critical exchange i will draw out the points of contact between our respective approaches as well as tease out what i take to be our differences. in doing so, i aim to underline not only what is distinctive about paul's efforts, but also the shared challenges that we face as critical theory scholars attempting to chart a path for the theory and practice of a collective, transformative politics. more specifically, i want to highlight two broad lines of inquiry that emerge when undertaking this kind of politicised scholarship. the first line of inquiry seeks to open up a dialogue about the challenges that implicitly accompany the quest to construct a critical theory that can simultaneously speak to and from 'the exception' and 'the synecdoche', or, to put it otherwise, that can light a path from the particular to the universal. the second theme concerns the role of utopian thinking in galvanising and giving direction to a radical left politics that is inclusive and that is fit for purpose in the 21st century. turning first to the task of critical theory, understood in marx's terms as the self-clarification of the wishes and struggles of the age, it is imperative that one grounds one's analysis in the practices and aspirations of a particular marginalised subject. elaborating on this point, leonard ( , p. ) states, 'without the recognition of a class of persons who suffer oppression, conditions from which they must be freed, critical theory is nothing more than an empty intellectual enterprise'. now, while apostolidis and i agree on this, and both of us have chosen 'addressees' that are subjected to oppressive power relations that undermine their life chances and denigrate their ways of knowing and feeling, the conditions and experiences which give rise to and shape their respective ideas and practices are significantly different. indeed, despite some important overlaps, the radical politics and utopian imagination that emerge from each constituency - precarious labourers, on the one hand, and feminist activists, on the other - diverge considerably. so, what are these differences, and what lessons might be drawn from this comparative analysis for those of us seeking to develop a comprehensive critical theory that can move seamlessly from the exception to the synecdoche? apostolidis' chosen addressee is the migrant day labourer living precariously from day to day in a hostile environment in the us. framed as an exploited class, apostolidis' chosen subject wages his struggle for survival and dignity on the terrain of labour relations. while paul rightly recognises that day labourers, as a group, are also gendered and racialised subjects, his study remains primarily focused on the collective efforts of male labourers to resist forms of denigration and harm that mark their lives as workers and to overturn the destructive and exploitative practices of an unregulated capitalist economy, more generally. by contrast, my feminist interlocutors were relatively privileged economically in comparison to other women in their respective societies - and certainly to the day labourers of apostolidis's book. moreover, most of these women were well educated and, although many lived precarious professional lives (e.g. their ngo funding had to be secured year on year), the women themselves were, in the main, leading comparatively secure lives both materially and socially (they had families and belonged to social movement networks).
finally, all of our activists were already politicised and involved in consciousness-raising activities (e.g. our fieldwork in brazil exposed popular education as a common practice) and, to this extent, were engaged in a form of feminist praxis that quite self-consciously and explicitly sought to transform the world they lived in. in sum, pace apostolidis' claim that precarity is a 'near universal complex of unfreedom' (p. ), it is not the obvious starting point for conceptualising the challenges faced by these women. given these different starting points, what kind of politics emerges from each constituency, what utopian visions accompany them, and to whom are they directed? for apostolidis, an anti-precarity politics demands a 'post-work' future, one in which we all refuse to assume the responsibility for facing up to and accepting the consequences of precarity as an inevitable condition of life. instead, we are entreated to engage in a 'politics of demand' that seeks to reclaim our wages and our time ('for what we will') from predatory capitalist powers. more concretely, apostolidis outlines several attendant policies, including the introduction of a universal basic income and the creation of affective spaces of embodied social interaction, including multiple work centres. as he puts it, 'if all working people could gain access to workers centres like those that are inspiring such utopian effulgence … such a politics could well find masses of adherents and assume more fully developed form in our common precarious world' (p. ). this is a resolutely anti-capitalist vision of a transformed world demanded by and imagined for all workers. or, to put it in fraser's ( ) terms, this is a bold call for a social politics of redistribution. turning to the feminist activists of my project, we find an alternative vision of what a better, more just future looks like. and while it is also anti-capitalist in orientation, it refuses to make either the realm of 'work' or 'workers' its central axis of liberation. instead, the politics of demand that emerges from this politicised subject targets not only capitalism as a systemic power relation but also patriarchy and racism. in this context, all three systems of power are understood as interlinked and pervasive to the extent that they cut across all social realms (economic, social, political, cultural) and are reproduced in both the public and private spheres. each, however, is also sui generis and therefore requires specific strategies to be overturned. moreover, on the affirmative side, our feminist interlocutors articulated their vision for the future in terms of two sets of demands. the first took the form of multiple proposals for policy change that seek to address context-specific problems, such as violence against women, reproductive health, labour rights (including women's right to work) and environmental degradation. the second was normative and universal in nature and revolved around the identification and defence of a set of ethical values - bodily integrity, equality, fulfilment of basic needs, peace and respect for the environment - that go beyond the concrete wish lists of different groups and pertain to all human beings. thus, the feminist anti-capitalist activism that i explored embodied a self-consciously intersectional politics in which demands for material redistribution and social justice were combined with equally important claims for cultural recognition.
thus, here we have different struggles, different self-understandings and different visions of a progressive left politics. but if, as apostolidis suggests, 'we need a politics that merges universalist ambitions to change history, which are indispensable to structural change, with responsiveness to group differences that matter because minimizing them means leaving some people out' ( , p. ; emphasis in original), then how do we knit together these connected and yet distinct visions of emancipation? how do we move from the exception to the synecdoche if we have multiple exceptions, each with their own sets of experiences, analyses and aspirations? after all, linking 'universal ambitions' to radical social change requires that we have a shared understanding not only of which structures of power need to be transformed or challenged the most, but also of how we go about building a common struggle. and whatever the intellectual synergies, programmatic overlaps and emotional affinities between the struggles of day labourers in the us and those of women worldwide, their utopian dreams would take us along very different, perhaps even incommensurable, paths. given this challenge, the question becomes one of deciding whether we need multiple critical theories running parallel to each other, animated by different kinds of oppressions and degrees of marginality, or whether we are still looking for a singular revolutionary subject, the one catalyst for change who is able to be both an exception and a universal exemplar, thereby embodying all the demands of the oppressed. this is not just a quibble about who gets to lead the charge: it is about what radical, progressive change should actually look like. as a feminist scholar seeking to find and defend space for an intersectional politics that refuses to be contained and streamlined in any way, i think it is imperative that critical theorists resist the temptation of elevating one concrete subject to the status of a universal one. instead, we must engage in far more patient, painstaking ethnographic work of the kind that apostolidis has undertaken on male migrant day labourers, with a range of other addressees or marginalised subjects (e.g. the experiences of female day labourers are, as apostolidis suggests, one good place to start). it is only once these varied, complex mappings of power and resistance are drawn, with the recognition that they cannot be easily merged, that we can begin to look for connections across them and identify possible sites of bridge building which may lead to a convivial politics of the left and to the emergence of a collective dream. whatever it ends up being, my sense is that it will have to take the form of a coalitional politics, one in which sui generis struggles fight alone and together for radical change. the second theme is the role of utopian thinking in galvanising and giving direction to a radical left politics. although day workers are burdened by a 'relentless presentism' that does not allow them to think about, let alone strive for, a better future, apostolidis clearly believes that their 'demand' politics is suffused with utopian aspirations (p. ). drawing on coles ( ), apostolidis describes their aspirations in terms of a 'visionary pragmatism' (p. ) that combines an overt disruptive politics, which makes them visible and audible to the wider public, with more mundane, everyday practices of solidarity, mutual aid and self-government.
interestingly, this view of utopian thinking as granular, incremental and cumulative, as well as eventful, unruly and confrontational, resonates very strongly with the dreams and impulses of feminist anti-capitalist activists. in fact, we deployed the notion of 'principled pragmatism' as a way of capturing their mode of action in general and its pre-figurative orientation in particular. for what became clear to us as researchers is that our feminist activists were concerned with articulating not only the political substance of their alternative future and the values that underpin it, but also an ethos by which this future should be brought into being. in this way, the 'principled' part of principled pragmatism sought to underline the highly ethical nature of both the goals/ends of their mode of action and the means designed to achieve them. moreover, we found that this normative mode of action embodied a specific temporality, which was open-ended and processual as well as nonlinear. this is, in part, due to the commitment of feminist activists to enabling women to speak and act for themselves, a project which, by its very nature, is unpredictable. it is nonlinear because its pre-figurative orientation demands that the future be lived out in the present. in this way, principled pragmatism is anchored by the imperative of getting things done in the 'here and now' of everyday life, without giving up the goal of radical change in the future. as a mode of praxis that pursues incremental, context-specific change, feminist anti-capitalist activism presents us with an inspiring alternative to the clichéd dualism of reformism and revolution. the question here is whether the 'visionary pragmatism' of day workers is generalizable to other forms of contestation and, if not, in what ways it might be different from the 'principled pragmatism' of the feminist activists outlined above and what might be at stake in these differences. whatever our different starting points, what all the contributors to this exchange share is an abiding interest in generating explicitly normative, politicised scholarship, or what apostolidis refers to as 'emancipatory scripts'. in other words, we all resist the path of what mcnay calls 'socially weightless' theorising, referred to by andrew schaap in his introduction to this critical exchange, opting instead to grapple with the messy world of politics, the material social conditions that hold it in place, and the suffering it engenders. to this extent, we all believe that what we write about and how we conceptualise it matters, not just intellectually, but also politically. for in the end, the stories we tell about the world and the 'politics of resistance' that bubble up within it can contribute to opening up (or closing down) the spaces of possibility for its realisation. pursuing this intuition is becoming harder, however, not only because academia continues to extol the virtues of scientific knowledge, but also because of changes in the political landscape. with 'populism' now elevated as the threat du jour, all resistance against the status quo is in danger of being discursively contained by politicians and academics alike. moreover, the increasingly trenchant calls to drop the left-right distinction in favour of other political cleavages (e.g. 'people vs elites', 'people from somewhere' vs 'people from nowhere') are making it harder to reclaim a politics for and by the left.
in this context, critical theorists of all ilks need to stick together, learn from each other and engage in a form of 'epistemological coalition building'. while it may not be the only route to progressive change, as paul rightly points out, it is one worth sustaining, in my view, and critical exchanges of this sort provide one step in this direction.

bice maiguashca

fighting from fear or creating collaboration across economic divides?

in the fight for time paul apostolidis offers readers a powerful meditation on the problem and politics of precarity. he contends that precarity is a global problem shared by virtually all who toil in the global economy. through his study of latino day laborers in the us, apostolidis argues that day laborers present a proxy for the precarity of laborers worldwide (pp. - ). through his portrait of the cruel trials faced by day laborers, apostolidis wisely proposes that work centers for all, popular education practices and consciousness raising, as well as a 'demand politics' for better and safer labor conditions, fair pay, and flexible time are necessary to improve the lot of all laborers everywhere. his valuable work thus provides a vision of collective practices that might, if we are persistent and lucky, ease the plight of billions of precariously placed workers across all walks of life worldwide. along with my admiration, this book's fine and yet familiar tones raise for me two questions that i pose here in the spirit of conversation and in sharing in paul's quest for the best ways to realize global prosperity and peace that recoup the time that all human beings need to explore and express their best qualities and capacities. my first question is whether inviting widespread personal identification with precarity - as opposed to identifying with peace, justice, or other motivating concepts - is a necessary step to ignite awareness and action for economic change that recoups time for all (pp. - ). a recent national public radio/harvard university poll shows that in the us, the majority of both the wealthy ( %) and the poor ( %) already share the view that extreme economic inequality is a widespread and serious problem that presents risks to everyone in the global economy (harvard, ). while wealth and poverty are facts of a balance sheet, precarity is experienced as a feeling or state of mind. this is acknowledged implicitly by apostolidis in his application of lauren berlant's concept of 'cruel optimism', in which precarity is considered not as economic hardship alone, but as an 'affective syndrome' (p. ). thus while wealth and poverty shape experience in material ways, the feeling of precarity is a choice to embrace and/or identify emotionally with a fearful state of dangerous insecurity. but is the choice to identify oneself with the feelings and fears of precarity wise or helpful? dangerous insecurities may arise for anyone, and even the comparatively well off may feel fear of sudden destitution. yet as frankl ( ) observed in man's search for meaning, the response that we choose to a threat - particularly one's capacity to choose not to succumb to fear - is a central factor in securing human freedom under any conditions. as frankl himself exhibits, even in the life-threatening conditions of a nazi concentration camp, his humanity and true freedom could not be extracted from him because freedom lies in our capacity to choose our own responses to violent and destructive conditions, even unfathomable extremes.
thus, in contrast to berlant's cruel optimism, frankl's observation is that even within the vicissitudes of illness, exposure, and hunger, those who faced the concentration camps with dignity, self-worth, and courage were far more likely to survive, and eventually escape those conditions, than those who surrendered to a mindset of fear-based terror and precarity. in short, our chosen mindsets under hardship also shape our prospects for resolution and escape from extremity, for better or worse. thus, to choose to embrace affective fear and precarity may ultimately undermine the strength and survivability of the self. if fear of precarity is widely embraced, this may in turn subvert the capacity for collective action in pursuit of economic justice and the reclaimed time that all workers, as apostolidis deftly shows, so desperately need. beyond frankl's philosophy and experience, neuroscience also illuminates the possible hazards of self-identifying with a precarity mindset. in ledoux's ( ) influential work on the interface of emotion and human physiology, the emotion of fear, particularly mortal fear, triggers neurological subsystems of the body that enable rapid responses by bypassing and making temporarily inaccessible the neocortex - the brain-centers of conscious reflection - which are too slow to address risks to mortal safety. in other words, when humans are in fear, we cannot physically access our capacity for conscious reflection until our fear subsides (ledoux, , p. ). instead, when in fear, the human body defaults to operating on autopilot through whatever neurologically encoded scripts the emergency systems of a given body happen to have for its fear responses, typically including fight, flight or freeze. arguably, this can be seen in chapter three of the fight for time, in which paul shows day laborers - fearful of missing out on even an extractive job in their precarious conditions - inflicting violent harm on one another in a 'surly wrestling match' as a car approaches (p. ). does such fear-based reaction help? not as much as it endangers people, fosters increasing fear and dissension among laborers, and drives away would-be employers. yet this kind of scrum is not a poor conscious choice. instead it is a scripted embodied impulse that is the anticipated neurological consequence of adopting a fearful approach to experience and thereby hobbling conscious response. on this analysis, choosing a precarity mindset risks disabling physical access to conscious, thoughtful reasoning and response in fearful moments in favor of the fear-based impulses and reactions attendant to such moments. these risks of identifying with precarity raise my second question. what blind spots might exist in the familiar narrative of economic reforms championed in the fight for time? the proposed path to reform invites readers to embrace work centers for all and collective action based in common experiences of deprivation that address intra-group biases and divisions along the way. this is an inherited social script that is long-treasured and often invoked. as a common social inheritance among scholars and activists alike, it has been portrayed eloquently before in such powerful retellings as that of salt of the earth, the once-blacklisted film narrating a famous new mexico labor strike. in this valuable and familiar approach, echoed here by paul, laborers come together to confront and overcome their mutual biases, and then pursue together demands for better wages and benefits.
paul's recruitment into one work center's 'theatre of the oppressed,' intended to help workers address their biases, is an example of this longstanding approach in action (p. ). in this script, rich capitalists appear as universally greedy and cruel hoarders whose victims, the long-suffering poor, must now muster the courage to see their commonalities across divisions of race and gender in order to demand a fair shake from capitalists. this story is rewarding. and it is true that workers everywhere would be better off if this familiar scenario were consistently fulfilled. yet the gains of this approach over time have been slow, sporadic, labor intensive, and often hobbled by the stubbornly persistent biases, suspicions, and enmities of many laborers - as well as owners - weaknesses to which all of humanity is still often prone. in contrast, from a chicana feminist perspective, such as that of gloria anzaldúa, the enduring problem of economic inequality does not call only for looking within workers' groups for sources of intra-group conflict and dissension. it also calls for searching across polarized social divides - of workers and owners, of the haves and the have-nots - to explore and create the conditions for peaceful resolution of economic inequality. although venerated in death, anzaldúa was at times scorned in her lifetime for proposing that true peace and justice required people to eventually come together to work across entrenched social divides: people of color working with whites, women with men, immigrants with non-immigrants, and so on (anzaldúa, ). this anzaldúan chicana feminist perspective urges us not to overlook the possibility of working generatively across the divides between workers and owners, a possibility in the blind spot of the salt of the earth narrative, in which economic benefits must always be fought for and hard won rather than produced through collaborative vision and effort. following this traditional script, the fight for time's focus on work centers and the fight of traditional labor activism implies that attempts to collaboratively bridge the worker-owner divide may be futile, naïve, or at best irrelevant. yet among the ultra-rich, practices of large-scale philanthropy are emerging which suggest that there is more transformative common ground between laborers and some owners than the traditional salt of the earth viewpoint can yet acknowledge. if so, then attending to this common ground may help remedy the lack of time, economic freedom, and financial stability needed by everyone more quickly and effectively than the fights and struggles of work centers, strikes, and direct actions have historically achieved. specifically, in recent years carnegie's ( ) assertion that successful capitalists should ideally end their financial careers by giving away all of their wealth, retaining only a personal competency - defined by carnegie as enough wealth to meet their own life needs and those of their family - has been gaining a following. reflecting this view, in two of the world's wealthiest billionaires, bill gates and warren buffett, created an organizational structure called the giving pledge ( ), in which ultra-wealthy people across the world pledge to give away the majority, or at least half, of their wealth in their lifetime or upon their death. to date, over ultra-wealthy individuals and families have made this pledge, including five of the top thirteen billionaires on earth (i.e. bill gates, warren buffett, elon musk, mark zuckerberg and mackenzie scott).
in july , these five pledgers commanded a combined total net worth of $ billion usd (bloomberg bi, ), representing an estimated philanthropic giving over time of at least $ billion usd by those five pledgers alone. if a growing number of the ultra-rich are voluntarily committed to giving away their wealth for the benefit of others, then - by adopting an anzaldúan perspective on working across economic and other social divides - it becomes valid to explore beyond the familiar salt of the earth script hailed in the fight for time. doing this would involve considering how engagement across the social divides of workers and owners may help direct emerging philanthropy into social justice philanthropy that could potentially ease global financial inequities more quickly and resoundingly than the efforts of work centers and traditional labor actions have done to date. such a move could potentially recoup both time and transformative possibilities for the benefit of laborers as well as owners, and provide sustainability benefits for the planet from a revised economy. by shining an anzaldúan chicana feminist perspective into the blind spots of the fight for time, apostolidis's project is not abandoned, but augmented by bringing unforeseen possibilities into view. new possibilities might arise from organizing with willing and openhearted owners, rather than fighting against them as a class, to retrieve the time and financial freedoms precious to all. in moving beyond the view that labor and owners are always divided (rather than only often so), it becomes possible to imagine, for example, efforts in large-scale social justice philanthropy that could provide everyone on earth with a carnegiesque financial competency. for the sake of discussion, let's imagine that such a personal competency would be $ million usd per person worldwide. with . billion people now on earth, the core funding for a $ million safety-trust for each person at present on earth would require . billion usd. that sum seems large, yet it is less than % of the combined minimum pledge of the five signatories to the giving pledge named above. of those five givers, mackenzie scott herself is committed to giving away all of her $ . billion, a sum that alone could handily endow a universal personal competency worldwide. thus, at least in terms of core capital resources (even accounting for the illiquidity of many assets of the ultra-wealthy), a universal competency could be funded by a small fraction of the funds already pledged for giving by the world's ultra-rich. in this context, self-identifying with fearful precarity and fighting through work centers and labor actions for the changes so urgently needed in the (now pandemic-stricken) world may be worthy within our traditional, socially inherited script of salt of the earth-style social change. yet this accustomed approach arguably may now be less wise and expeditious than other emerging options. if so, it is worthwhile to explore the limitations of our commonplace labor-related scripts and to confront as needed our own potential blind spots regarding the diversity among the ultra-rich, which could - in an anzaldúan manner - help us to better see new possibilities for bridging economic divides and to open ourselves to collaboratively producing transformations that can benefit all people and the planet upon which we reside together. is resolving the pain of global poverty through philanthropic giving so farfetched?
it is not as implausible as so often thought. alongside the kinds of labor actions hailed in the fight for time, in recent months one us billionaire chose to pay the college debt of an entire graduating class of morehouse college, totaling over $ million usd. another man paid the college debt of his uber driver, a single mother, thereby enabling her to finish her college degree. by chance, the latter giver is a well-off white man and the recent graduate an african american woman. meeting as strangers by chance, the two have now become friends and their story has gained popular attention. if giving to strangers in need is not merely feasible but also appealing, why is it perhaps emerging more visibly now? it may be because many humans are learning that beyond a meaningful competency, wealth does not necessarily create happiness, but that human connection and giving often do. if so, then a season of transformational giving may be on the near horizon. even if these events reveal a nascent turning of the tide, there are still many obstacles on the path of philanthropic giving-for-global-prosperity. if a pathway to funding a universal competency could be created through social justice philanthropy, for instance, this would also need to involve further measures for healing the poverty-related traumas so aptly described in the fight for time. beyond a basic endowment, provisions would be needed to provide for new learning, safeguards, and other supports for recipients in order to truly solve the lingering problems of precarity. why? because those who come into sudden wealth from poverty and lack often risk experiencing poverty once again through missteps, fraud, or other hazards arising from a rapid change in economic conditions. thus even if furnished with a financial competency, in the context of the hazardous grafts, frauds and other pitfalls that remain mainstays of us culture (young, ), latino day laborers - like the vast majority of other workers alluded to in the fight for time - would need additional training to cultivate the skill sets and mindsets needed for living with meaningful wealth after having had little or no prior knowledge or instruction in how to hold, manage, or grow the would-be competency that could furnish them at last with time and freedom from extractive labor. is the idea of philanthropic solutions to global economic inequalities simply another example of 'cruel optimism'? by berlant's ( , p. ) definition, optimism is cruel only if the desired change is truly 'impossible or too possible and toxic'. clearly, however, changes are emerging that make meaningful large-scale social justice philanthropy possible, even if those changes are growing in the shadow of predatory economic practices. with these changes in view, it is worth asking whether paul apostolidis's fine call to 'fight' to retrieve time for all laborers might be best served by extending our willingness to seek common cause not only among diverse workers, but also with those openhearted wealthy owners who are willing to give back their wealth to benefit the well-being of all humanity. if so, it may be worth our time not to fight for time, but instead to work collaboratively and creatively for time and wealth to become equitably available to everyone in unexpected ways.

edwina barvosa

whose politics? whose time?

traditionally, political theory has not co-theorised. it has spoken from on high among 'male, pale, stale' companions. hence my defection from these ranks.
in this dialogue with paul apostolidis' the fight for time, i would like to recognise the attempt to co-theorise. in this work some migrant day labourers' voices, described as latino, are represented through ethnographic moments. bodies, presumably cis-male, are portrayed in struggle. this day labour is proposed as 'synecdoche' - the part that stands for the whole - by which is meant precarity on the grand social scale (p. ). thus, the collective fight for time is staged. demands include: a politics that goes beyond seeking marginal relief from overwork and instead seeks fundamental alternatives; a repudiation of the work ethic that prescribes personal responsibility in the face of desperation; the demand to restore time as well as wages to the people; a refusal of work 'as the axial concept that constricts working people's social and political imaginaries' (p. ). i can only respond from outside of the social and political world the book portrays. i am not latinx/latin@ (hence the unsatisfactory use of terms that are, themselves, the site of struggle), but white, cis female, and belong to many other privileged social locations. from my vantage point i explore struggles for migrant justice and against austerity and precarity at the intersections, drawing on lessons from black feminism and indigenous scholars writing in the context of the ongoing violence of settler colonialism. i ask: whose politics? whose time?

whose politics?

whose knowledge counts as the basis for politics? i cannot accept proposals, as in this book, to radiate outwards from some bodies and experiences - people presented as cis-gender latino men, workers - as the part that stands for the whole, the synecdoche. this is a project of inclusion: generative themes are based primarily on these experiences, to which others must then align. this story has been told before. it is of a linear, sequential march toward 'justice'. some are at the centre, in the lead, and others need to wait their turn to then be included. add and stir. who must wait their turn? in this work, this sounds like (presumably cis) women domestic workers, who are mentioned but peripheral to this study, as well as those who experience misogyny and harassment at the worker centres (pp. , , ) that are to be the incubators of progressive alternatives and the collective fight for time. we could add here the women who founded and run the worker centres in this book, who are barely visible but are also key protagonists of anti-precarity and anti-deportation struggles. those who must wait also surely encompass male-presenting others who do not identify with what are referred to in the book as the 'normative' masculinities deployed in the worker centres (p. ). what happens when the political knowledge of queer, non-conforming, differently gendered actors is parked for consideration later on? what politics is generated when these experiences and these intersections are named at the end of a book (pp. - ), after the contours of struggle have been determined against precaritisation 'as the array of social dynamics that structure these settings' (p. )? it becomes possible to call for 'workers centres for all workers'. and thus a space for the resistance of some is built on the oppression of others. theorising this as synecdoche does not name the problem or open up the space for resistance to multiple, intersecting oppressions.
it does not centre as part of the theory the messy and vital struggles of workers' centres to change representation on governing boards, to reconfigure resistance to border control in recognition of the specific brutality experienced by lgbtq migrants (p. ) and to bring into focus all forms of work (p. ). this call, 'workers centres for all workers', chills me without scrutiny of all gender relations and all gendered labour - and i mean all, beyond gender binaries, at multiple intersections. what can the 'repudiation of work' mean without naming cis heteropatriarchal relationships of domination, in ableist and racialized capitalist systems that pervade all 'public' and 'private' realms? this book asks how various groups of workers articulate terms of their consent, how regimens and discontinuities of body-time on the job vary between different groups. but this undertaking is impossible without articulating at the same time the terms of consent to cis heteropatriarchal relations in and outside of the workplace. oppressors are not only employers. they are also other workers, community and family members, who are cis men and women embedded in hierarchies that include gender, class, race and legal status. what would it look like to build a politics for migrant justice, against austerity and precarity, starting with the knowledge of experiences of a matrix of oppression (hill collins, )? this is no synecdoche. it is the challenge of forging justice at the intersections. these are not new lessons to learn and there is no way to do justice here to all the illustrations of this kind of politics in practice. from my past work, one example from france in the s may provide purchase on us-based challenges. in paris, madjiguene cissé led movements for the regularisation of 'sans papiers' - people 'without papers'. she described the 'struggle within the struggle' by women 'sans papières' (the feminised version of 'sans papiers') for gender equality within the movement, as well as regularisation of immigration status. this was a struggle against patriarchy as well as the racism of the french mainstream. the knowledge that sans papières women imparted in the struggle meant that they were in charge of their own thought and politics but without excluding others (hill collins, , p. ), and they did not project separatist solutions to oppression because they were sensitive to how these same systems oppress others (hill collins, , p. ). women revitalised the movement and kept it together: 'a role of cement' (cissé and quiminal, ). cissé explains how women kept the group together, particularly when the government attempted to divide them by offering to regularise the 'good files' of some families, but not those of single men. sans papières very firmly opposed this proposal, arguing that if single men were abandoned, they would never get their papers. migrant justice, anti-austerity and precarity politics look different when built at these intersections. the difference lies in who is present and also in what results. care and self-care are centred as 'an act of political warfare' in a system in which some were never meant to survive (lorde, ). self-help, self-care and self-organising are alternative, sometimes complementary spaces, and an important source of personal support, resilience, information and community, beyond white-dominated, politically raceless, misogynistic anti-austerity/precarity spaces (emejulu and bassel, ).
no part can stand for any whole when other spaces are unsafe and sites of violence rather than a collective fight for time.

whose time?

in our work exploring the activism of women of colour across europe, akwugo emejulu and i have argued that epistemic justice is about women of colour producing counter-hegemonic knowledges for and about themselves to counter the epistemic violence that defines white supremacy (emejulu and bassel, , p. ). epistemic justice is not a correction or adjustment to 'include' unheard voices, but a break away from destructive hierarchical binaries of european modernity. it is a break away from the 'persistent epistemic exclusion that hinders one's contribution to knowledge production' (dotson, , p. ) and renders women of colour invisible, inaudible and illegitimate to both policymakers and ostensible social movement 'allies'. epistemic justice at the intersections makes settler colonialism visible, whether in the united states of this study or so-called canada, where i grew up. this means going much further than the possibilities briefly flagged in the book: kindling a critical sense of historical time and orientation to the future that is fuelled by an awakened sense of historical injustice (pp. - ). it is necessary to go much further because the fight for time cannot be founded on indigenous erasure. erasure does not create a path toward solidarity 'with other colonised populations who understand their past experiences in somewhat parallel ways' (p. ). this book discusses workers turning a portland day-labour corner, where jobs are fought over, into a space of musical performance. these are important moments to explore and co-theorise. but when they are described as transforming the space into a 'site of freedom' (p. ), indigenous struggles are erased. these performances are taking place on stolen land in what is now referred to as 'portland'. tuck and yang's ( ) key work 'decolonisation is not a metaphor' rattles the kind of settler logic that allows for this erasure. they discuss the occupy movement and argue that claiming land for the commons and asserting consensus as the rule of the commons erases existing, prior, and future native land rights, decolonial leadership, and forms of self-government. occupation is a move towards innocence that hides behind the numerical superiority of the settler nation, which elides democracy with justice and the logic that what became property under the % rightfully belongs to the other %. in contrast to the settler labour of occupying the commons, homesteading, and possession, some scholars have begun to consider the labour of de-occupation in the undercommons, permanent fugitivity, and dispossession as possibilities for a radical black praxis … [that] includes both the refusal of acquiring property and of being property (tuck and yang, , p. ). the fight against precarity and for migrant justice must be reconfigured, if it is to be in solidarity with indigenous struggles. this means changing whose understandings of time and labour are at the centre of analysis. the land where this study took place is not an 'immigrant-receiving country' but a settler colony, founded on indigenous genocide, dispossession and slavery. when time is decolonised, the refusal of work is recast in relation to the refusal of the settler colonial state (simpson, ) and the formations of race, class, gender that it engenders.
these formations, rooted in settler colonialism, shape the lives of the migrant day labourers who are 'here' because the united states was 'there' (sivanandan, n.d.) and who must contend with entangled colonial legacies from different social locations. this requires a shift in vocabulary, when 'migrants' are in fact settlers. but with this comes also a shift in politics. in undoing border imperialism, walia ( ) shows how movements such as no one is illegal (noii) in what is now called canada have reconsidered their understandings of migrant justice. this has required recognizing the ways in which their actions have been premised on an understanding of sovereignty and territory that perpetuates the colonial legacy that has dispossessed and disenfranchised indigenous peoples (walia, ). noii activists consequently re-centre ongoing colonialism and reconfigure understandings of land, movement, and sovereignty when claiming that 'no one is illegal'. specifically, activists have tried to consider how their calls for 'no borders' undermine indigenous struggles for title and against land loss, to reclaim land and nation. solidarity means reshaping the political agenda of noii beyond token acknowledgements, to move from a politics of 'no borders, no nation' to 'no one is illegal, canada is illegal' (fortier, ).

and now?

i asked two questions here: whose politics? whose time? they remain unanswered. but they are a path to solidarity rather than solutions. so it goes in the messy world of politics, not political theory.

leah bassel

representing precarity: health, social solidarity, and the limits of coalitional epistemology

in her contribution to this critical exchange, kathi weeks poses an unexpectedly timely question about how to politicise precaritisation in the form of heightened bodily risk at work. writing prior to the coronavirus outbreak, weeks echoes my observation in the book that, apart from the temporary rush of reporting when an occupational safety and health (osh) disaster strikes somewhere in the world, 'the problem of workplace death and injury is strangely absent from public consciousness'. how quickly things can change. i am writing this response in april in london, now in its fifth week of 'lockdown'. in this context, weeks's reflections prompt two questions: first, in what specific ways has the covid- crisis made workplace threats to life and health newly legible? second, what ramifications do state and employer responses to the pandemic have for the pressing issue of how 'to politicise the issue of bodily harm given how extensively the idiom of health has been rendered amenable to the logics and aims of biopolitical management', as weeks aptly puts it? i still see the outlines of an answer to the second question in the politics of solidarity around osh matters that day labourers have developed through worker centres. today's work-culture construes the task of sustaining the worker's health as the worker's personal responsibility, which the worker also exercises as a productivity-oriented social duty. many day labourers abet this tendency through their own themes of meeting the 'risk on all sides' by individually keeping their 'eyes wide open'. yet day labourers also demonstrate how health-related language, desires and practices can be cathected with a different figuration of social and individual conscientiousness: responsibility as autonomously collective solidarity. day labourers pose this alternative in three main ways.
first, through convivial relations at worker centres, day labourers bolster one another to stand up to abusive employers, to refuse dangerous jobs and to de-throne work and income from their primacy in everyday affairs. second, day labourers contest biopolitical power-knowledge by fusing their own analyses of work-hazards to responsive practices of their own devising, as they teach one another about risky work processes, materials and employer conduct through popular education. third, day labourers are hatching visionary ideas about how distinct working populations can recognise their common stakes in ending the bodily precaritising dimensions of work, such as by organising with, not just against, their middle-class employers. in all these ways, at day labour centres, the talk of putting 'health' first mobilises a complexly social vernacular. one's 'own' health is always a concern, but the worker's understanding of 'health' does not stop with the individual. instead, this idiom positions health as stemming from social interactions that are contingent on power-differences, which are amenable to workers' collective re-formulations, which, in turn, need not be determined by the ideal of productivity. politically, these initiatives by day labourers imply that disentangling health-talk from the corporate wellness apparatus depends on autonomous action from below in tandem with cross-class organising. the role of the wizened welfare state in such efforts, however, is not clear - and that brings us back to the coronavirus. talk about 'biopolitical management'. the crisis has precipitated massive deployments of state resources to expand public health knowledge-systems and to use statistical probability calculations to foster mass populations' biological vigour and protection from disease, albeit in racially selective and gender-unequal ways. must this tidal wave of emergency mobilisation re-sediment personal responsibility and productivism as the norms that regulate occupational safety and health? or, as this surge recedes, could it leave behind institutional beachheads for fighting precarity on the level, and within the sinews, of the working body? even as the present apotheosis of biopolitics applies itself globally and to entire nations, it targets micro-practices in the workplace and affects precarity's configuration of work as a zone of bodily hazard. overall, the covid crisis reduces to the point of vanishing the already quite faint and episodic awareness of how mounting osh threats have made the workplace increasingly dangerous to workers' health for decades, across occupations. the fight for time discusses how these threats principally entail work-environmental hazards, especially poor air quality as more work is done indoors, ergonomically dysfunctional work-processes, and debilitating stress due to corporate downsizing and rising job insecurity. ironically, the pandemic's sudden re-framing of the workplace as replete with health dangers focuses on the work environment. it does so, however, in terms that reproduce the moral individualism of the precaritised osh culture, while occluding the work-environmental systems that generate endemic hazards. thus the exhaled breath of a single co-worker becomes the respiratory threat, rather than the air circulation machinery in the office or warehouse.
health-conscious bodily comportment means obeying the individual remonstrance to keep six feet away from any colleague rather than ensuring that the ergonomics of work-procedures avoid forcing workers to contort their bodies and overstrain their tendons. the stress of losing one's job, having work hours reduced, or fearing these things because of the virus's immediate economic effects normalises the ongoing anxiety that is baked into precarious work-life and linked to heart disease. the hyper-individualisation of osh hazards in the covid- crisis and the fingering of co-workers as those who pose lethal hazards to us also clearly discourage building safer and healthier workplaces through solidarity among workers. such miscasting of fellow workers as the culprits whose irresponsible conduct explains why everyone's health is in jeopardy bedevils many day labourers' attempts to rationalise the contradiction between expectations of personal responsibility and the power-relations governing their work. the pandemic further embeds this thought-habit of precarity. meanwhile, consigning 'essential' workers in some occupations to higher risk exposures while others 'shelter at home' and assemble via zoom aggravates the difficulties of organising across class lines. in all these ways, the pandemic has made it harder to dislodge health discourses from their current ensnarement in norms of productivity and individual responsibility. yet the sheer size and weight of institutional responses to covid- also present an opportunity to argue that, if states and employers can so speedily muster these titanic responses to this virus, then the capabilities are there, more obviously than ever, to tackle the endemic osh challenges that constitute the bodily mortifying facets of precarity even in 'normal' times. this will only happen, however, if working people redouble their organising efforts. and that makes the project of founding worker centres for all workers even more vital: extending the scaffolding for leadership development and autonomously collective organisation-building along with new ventures in state-sponsored redistribution, such as a universal basic income.

bice maiguashca correctly observes that she and i share aspirations to pursue critical theory in ways informed by the ideas she cites from marx, leonard and militant research, and i am glad she sees in my book the work of a fellow traveller. for us both, this means doing theoretically evocative social research from positions of active engagement within political struggles against oppression and with the aim of contributing something tangible to those struggles. maiguashca and eschle's research with feminist anti-capitalist activists also illuminates how political agents quite different from those who occupy centre stage in my book can pinpoint 'systemic power relations', including gender, that are fundamental in their own right and need to be contested both as such and via the demands these women raise. in response to maiguashca, let me also underscore that, notwithstanding the near-exclusive focus of my fieldwork on male, latino day labourers, the fight for time affirms, explicitly and in its intellectual practice, the need to theorise political-economic power and contestation in ways that attend to the complex gendered and racialised aspects of work. maiguashca allows that my book 'recognises that day labourers … are gendered and racialised subjects', but the book does more than this.
it probes the masculine ideals woven into these workers' themes, explores how the racial state constitutes precarity through policing migrants, distinguishes day labourers' varied renderings of latino identity, and draws on my own supplementary fieldwork and secondary literature to suggest how domestic workers' conceptions would likely both differ from and align with those of day labourers. maiguashca also implies that the book searches 'for a singular revolutionary subject' and anoints the day labourer as 'the one catalyst for change', but the fight for time does neither. if my statements in the book to the contrary do not suffice to show this, then it should still be apparent from the book's premise of basing a critique of capitalism on research with workers who, as weeks notes, resemble marx's disparaged and heterogeneous lumpenproletariat rather than the traditional proletariat. i stand firmly in sympathy with the efforts of weeks and other theorists influenced by autonomism to widen and complicate the notion of 'the working class', as weeks does by training our attention on women's reproductive labour in households, and as studying day labourers does by foregrounding a liminal and ambiguously gendered realm between productive and reproductive labour. the analytical rubric that positions day labour as both exception and synecdoche in relation to precarity writ large appears to lie at the heart of what most troubles maiguashca and leah bassel. let me thus address further what this interpretive framework means, going somewhat beyond what is already in the book. the exception/synecdoche formulation is intended as a strategy of provocation: a prod to imagine how the critical language of one especially benighted group, which has done a remarkable job of building itself up politically, could shake loose new ways of construing overarching forms of power and domination. such general structures, systems and flows of power and domination exist, and they need to be named in order to be engaged politically. this does not obviate the fact that any act of naming by a situated subject is also bound to yield misnomers because of that person's or group's particularised social location. moreover, as mezzadra and neilson (2019) argue, capital itself regenerates, accumulates and dominates both through systemic processes that integrate the globe and through localised 'operations' that proliferate heterogeneities of experience, identity and activity (including work-activity). this, however, makes it imperative to theorise capital on both levels at the same time, through critical procedures that juxtapose the general and the particular, teasing out their resonances and tensions. one models the whole with the help of closely scrutinising an always-insufficient particular, then re-envisions the systemic through considering other concrete-particulars, and so forth. a synecdoche is a part that stands in for the whole, but this notion's origin in literary theory bespeaks self-awareness that this figuration is a contingent act of representation, rather than a straightforward declaration of truth. furthermore, critical-popular analysis does not simply infer the whole from a part but rather effects mutual mediations between self-expressions of the part and conceptions of general dynamics. the fight for time pursues this path by reading day labourers' themes together with allied concepts from critical and political theory about broad formations of precarity.
this is certainly a different way of reaching a provisional sense of society-wide power than that preferred by maiguashca, but it has its virtues. one virtue has to do with the temporality and affectivity of collective action that seeks to confront thoroughly pervasive forms of social, political and economic power. having exhorted readers to pursue with other groups more of the fine-grained ethnographic analysis that my book provides, maiguashca then cautions: 'it is only once these varied, complex mappings of power and resistance are drawn, with the recognition that they cannot be easily merged, that we can begin to look for connections across them and identify possible sites of bridge building which may lead to a convivial politics of the left and to the emergence of a collective dream.' this statement conveys a political temporality of postponement as well as an ascetic tinge, and i question both. if capital and other systemic forms of power are perpetually in motion, always mutating, and never ceasing to employ both universalising and particularising modes of operation, then it makes little sense for theory to hold its own generalising capacities in reserve until it has amassed some critical mass of analyses of situated perspectives (and how could a non-arbitrary threshold be specified?). strategically, this appears unwise. affectively, something also seems awry with the gesture of renunciation one must make to defer the invigoration that comes from battling broad-scale domination, while also letting systemically generated suffering endure without being called out as such. the critical-popular approach, in contrast, partakes in the affective spirit of weeks's 'politics of the demand'. this means taking seriously both the re-constituting of desiring subjects in the midst of utopian struggle and the value of fighting for a 'collective dream' that is massive and radical, like 'worker centres for all workers' or 'wages for housework', but neither totalising nor conclusive. another virtue of the critical-popular approach to theorising the whole, in comparison to mapping specific differences and then building localised bridges, is that the former offers not just an alternative to the latter but also a prelude to it. my book not only juxtaposes day labourers' popular themes with academic concepts to theorise precarity writ large and anti-precarity struggle, but also shows how worker centres, the day labour movement and a broader anti-precarity politics all depend on developing popular consciousness and political action-plans through molecular processes and alliance formation. the book's practical contribution to day labour centres' popular education programming, through workshops i conducted, as well as a report i wrote with additional dialogue options, further shows this project's commitment to fostering intersectional interactions of the kind that maiguashca and bassel endorse. the fight for time thus supports coalitional politics as one key mode of struggle needed to define and confront precarity. it takes issue, however, with what we might call a 'coalitional epistemology', or the idea that understanding power on the broadest levels and identifying desirable forms of mass solidarity can only occur through the cumulative, piece-by-piece assembling of particularised knowledges into progressively larger composites.
along these lines, it bears emphasis that the fight for time is one of two inaugural books in my publisher's series 'subaltern studies in latina/o politics', edited by alfonso gonzales and raymond rocco. i am honoured to have my book involved in this effort to support work that brings together latino studies and political theory. the series is also promoting research on latino/latin-american transnationalism (félix, 2018), contentious citizenship and gender among salvadorans in the us, and religion, gender and local agency in mexican shelters for central american migrants. colleagues interested in how my book contributes to more wide-ranging discussions of race, ethnicity, migration and gender, and to coalitional politics, should be aware of this context. for the most part, my responses to maiguashca, and my defence of the critical-popular method above, comprise my answer to leah bassel as well. bassel shares with maiguashca a similar orientation toward critique and political action, which bassel describes as embracing 'the challenge of forging justice at the intersections'. bassel argues, however, that rather than either encouraging consideration of other oppressed groups' experiences or incorporating such analysis into the book, the fight for time suppresses and erases such experiences. i strongly disagree. as i have explained, there are good reasons for understanding the logic of the synecdoche as evoking provisional renderings of broad power dynamics in ways that invite, rather than discourage, contestation. readers hoping to join a 'linear, sequential march toward "justice"' will search in vain for marching orders in my book. bassel also does not mention how the book frames day labour as both exception and synecdoche in relation to precarity writ large. this dual optic makes basic to the book an appreciation for the specificity of day labourers' social experiences. it thus signals clearly that attentiveness to situated subjectivity is a sine qua non, though not the sole legitimate basis, of critique. in this way, my book underscores how the forms of precarity thematised by day labourers reflect, for instance, their particular position in the urban construction economy and their specific vulnerability to the racialized and gendered homeland security state. this implicitly affirms the value of hearing what other groups of workers, situated distinctly, would say about precarity. at the same time, bassel's commentary neglects a different problem with which my book grapples: the need to challenge the invidious naturalisation of assumed group differences. white middle-class americans, for instance, certainly need to understand better what makes the lives of working-class migrants in the us both different and harder. but the former also need a better grasp of how their own economic, political and bodily fortunes resemble those of the latter much more closely than most would like to admit. anderson (2019) calls for 'migrantizing citizenship' as a tactic for waking britons up to how the shrill demand to save 'british jobs for british workers' has precaritised work for everyone. in a similar spirit, the fight for time appeals for precaritised workers throughout society to recognise their shared stakes in a common struggle, even while observing how the stakes are graver, and different, for some than for others.
i do see it as a limitation of my research that, although it delved into the complexities of day labourers' commentaries and traced their interactions with an eclectically convened set of theoretical interlocutors, it did not include substantial fieldwork with other precaritised workers. thus, i could not critically compare such workers' generative themes with the themes spotlighted in the book. the conception of critical-popular research is in its formative stages, and maiguashca's and bassel's comments have fuelled my interest in exploring how a future project could bring such critical moves into the heart of the inquiry. planning such work with migrant and indigenous subjects (including indigenous migrants) would offer one attractive pathway for doing this, especially given the anti-capitalist trajectories of leading critiques of settler colonialism, which prioritise spatial and temporal politics that may both align and conflict with migrant endeavours (coulthard, 2014). in the meantime, i appreciate maiguashca's and weeks's invitations to speculate about how day labourers' themes and organisational spaces might relate to those of other groups. i see an affinity between feminist world social forum (wsf) activists' embrace of an 'ethos' whereby organising processes 'prefigure' radically altered social relations and the day labourers' anticipatory enactment of the 'refusal of work', even as they desperately pursue jobs, and even though the day labour network takes no stand for such a refusal. as these lines suggest, however, day labourers pursue social change by generating transformation from within, and by virtue of acutely contradictory circumstances. i wonder whether a similar catalysis of power-from-contradiction plays a role in the wsf activists' undertakings, or whether perhaps these women's class privileges permit a more confident sense that an ethically consistent programme of action is possible in ways that are precluded for day labourers. that said, it would be intriguing to know if the activists in maiguashca's research feel subjected to class-transcending temporal contradictions of precarity, such as the clash between oppressively continuous and jarringly discontinuous patterns of work. even if precarity does not furnish the express 'starting point' for these women's advocacy, it might still provide a basis for solidarity with the day labour movement in the broad fight against capital. barvosa asks whether encouraging people to identify with the timorous mind-state of precarity might be politically counter-productive, given how fear induces corporeal responses that shut down complex thinking, induce self-preserving automatism and impede cooperation. as the book shows, however, the emotions that pervade precarity include not just fear but also guilt, hopefulness, self-satisfaction, resentment, boredom, numbness, compassion and more. precisely because precarity is so emotionally plural, it both acquires compelling force and spawns opportunities from within itself for its own contestation. in addition, precarity is more than a 'state of mind'. it is also a socially and politically constituted condition that stems from the convergence of protracted welfare-state austerity with the transformation of employment norms and institutions. precarity, moreover, is a hegemonic formation that relies on working people's consent, which day labourers provide, for instance, through the individualism of their generative themes.
yet precisely for this reason, and because it is structured in contradiction, especially temporally, precarity can be transformed from within. as my book argues, many workers prefer to see the worker centre-community as just a 'workforce' and in this way 'identify emotionally with a fearful state of dangerous insecurity', as barvosa fittingly puts it. yet more day labourers respond to fear, along with confusion, rash self-confidence, impatience and loneliness, by acknowledging these tangled emotions and converting their affective energy into bonds of solidarity. as to gates and buffett, i am glad they are giving away mounds of money and have updated philanthropy's ethical framework, but relying on a programme to broaden beneficent actions does not strike me as a viable response to precarity. as azmanova (2020) argues, in ways complementary to the fight for time, the systemic roots of precarity lie in the competitive pursuit of profit, and precarity's structural foundations abide in the re-organisation of work and the de-funding of the welfare state. absent a coordinated and democratic (anti-oligarchic) movement by masses of working people to tackle power on these levels, precarity will persist. the emancipatory script proposed by my book, far from simply pitting poor downtrodden workers against greedy bosses, casts working people at all levels of the economic hierarchy as potential collaborators in the fight against precarity, which must also be a struggle against gargantuan wealth, and a fight for time.
paul apostolidis
new directions in migration studies: towards methodological de-nationalism
now let us shift … the path of conocimiento … inner work, public acts
the fight for time: migrant day laborers and the politics of precarity
capitalism on edge: how fighting precarity can achieve radical change without crisis or utopia
cruel optimism: on marx, loss and the senses
the abolition of work
a taste of power: a black woman's story
the gospel of wealth. www.carnegie.org/about/our-history/gospelofwealth
visionary pragmatism: radical and ecological democracy in neoliberal times
red skin, white masks: rejecting the colonial politics of recognition
the ambivalence of coworking: on the politics of an emerging work practice
conceptualizing epistemic oppression
the concept of the 'lumpenproletariat' in marx and engels
the politics of survival: minority women, activism and austerity in france and britain
making feminist sense of the global justice movement
spectres of belonging: the political life cycle of mexican migrants
emma goldman: political thinking in the streets
no one is illegal, canada is illegal! negotiating the relationships between settler colonialism and border imperialism through political slogans
man's search for meaning
justice interruptus: from redistribution to recognition
school of public health. life experiences and income inequality in the united states
learning from the outsider within: the sociological significance of black feminist thought
black feminist thought: knowledge, consciousness and the politics of empowerment
the emotional brain: the mysterious underpinnings of emotional life
critical theory as political practice
the misguided search for the political
freelance isn't free: co-working as a critical urban practice to cope with informality in creative labour markets
the politics of operations: excavating contemporary capitalism
mohawk interruptus: political life across the borders of settler states
marx and heterogeneity: thinking the lumpenproletariat
difference in marx: the lumpenproletariat and the proletarian unnameable
decolonization is not a metaphor
undoing border imperialism
the a. sivanandan collection. race & class
bunk: the rise of hoaxes, humbug, plagiarists, phonies, post-facts, and fake news
key: cord- -d joq authors: arthur, ronan f.; jones, james h.; bonds, matthew h.; ram, yoav; feldman, marcus w. title: adaptive social contact rates induce complex dynamics during epidemics date: - - journal: biorxiv doi: . / . . . sha: doc_id: cord- cord_uid: d joq
the covid-19 pandemic has posed a significant dilemma for governments across the globe. the public health consequences of inaction are catastrophic, but the economic consequences of drastic action are likewise catastrophic. governments must therefore strike a balance in the face of these trade-offs. but with critical uncertainty about how to find such a balance, they are forced to experiment with their interventions and await the results of their experimentation. models have proved inaccurate because behavioral response patterns are either not factored in or are hard to predict. one crucial behavioral response in a pandemic is adaptive social contact: potentially infectious contact between people is deliberately reduced either individually or by fiat, and this must be balanced against the economic cost of having fewer people in contact and therefore active in the labor force. we develop a model for adaptive optimal control of the effective social contact rate within a susceptible-infectious-susceptible (sis) epidemic model, using a dynamic utility function with delayed information. this utility function trades off the population-wide contact rate with the expected cost and risk of increasing infections. our analytical and computational analysis of this simple discrete-time deterministic model reveals the existence of a non-zero equilibrium, oscillatory dynamics around this equilibrium under some parametric conditions, and complex dynamic regimes that shift under small parameter perturbations. these results support the supposition that infectious disease dynamics under adaptive behavior-change may have an indifference point, may produce oscillatory dynamics without other forcing, and constitute complex adaptive systems with associated dynamics. implications for covid-19 include an expectation of fluctuations, for a considerable time, around a quasi-equilibrium that balances public health and economic priorities, that shows multiple peaks and surges in some scenarios, and that implies a high degree of uncertainty in mathematical projections. author summary: epidemic response in the form of social contact reduction, such as has been utilized during the ongoing covid-19 pandemic, presents inherent tradeoffs between the economic costs of reducing social contacts and the public health costs of neglecting to do so.
such tradeoffs introduce an interactive, iterative mechanism that adds complexity to an infectious disease system. consequently, infectious disease modeling has typically not included dynamic behavior change that must address such a tradeoff. here, we develop a theoretical model that introduces lost or gained economic and public health utility through the adjustment of social contact rates with delayed information. we find this model produces an equilibrium, a point of indifference where the tradeoff is neutral, and at which a disease will be endemic for a long period of time. under small perturbations, this model exhibits complex dynamic regimes, including oscillatory behavior, runaway exponential growth, and eradication. these dynamics suggest that for epidemic response that relies on social contact reduction, secondary waves and surges with accompanying business re-closures and shutdowns may be expected, and that accurate projection under such circumstances is unlikely.
the covid-19 pandemic had infected millions of people and caused hundreds of thousands of deaths worldwide as of june 2020 [ ]. in the absence of effective therapies and vaccines [ ], many governments responded with lockdown policies and social distancing laws to reduce the rate of social contacts and curb transmission of the virus. prevalence of covid-19 in the wake of these policies in the united states indicates that they may have been successful at decreasing the reproduction number (r_t) of the epidemic [ ]. however, they have also led to economic recession, with the unemployment rate at its highest level in decades, the stock market in decline, and the federal government forced to borrow heavily to financially support businesses and households. solutions to these economic crises may conflict with public health recommendations. thus, governments worldwide must decide how to balance the economic and public health consequences of their epidemic response interventions.
behavior-change in response to an epidemic, whether autonomously adopted by individuals or externally directed by governments, affects the dynamics of infectious diseases [ , ]. prominent examples of behavior-change in response to infectious disease prevalence include measles-mumps-rubella (mmr) vaccination choices [ ], social distancing in influenza outbreaks [ ], condom purchases in hiv-affected communities [ ], and social distancing during the ongoing covid-19 pandemic [ ]. behavior is endogenous to an infectious disease system because it is, in part, a consequence of the prevalence of the disease, which in turn responds to changes in behavior [ , ]. individuals and governments have greater incentive to change behavior as prevalence increases; conversely, they have reduced incentive as prevalence decreases [ , ]. endogenous behavioral response may then theoretically produce a non-zero endemic equilibrium of infection. this happens because, at low levels of prevalence, the cost of avoidance of a disease may be higher than the private benefit to the individual, even though the collective, public benefit in the long term may be greater. however, in epidemic response we typically think of behavior-change as an exogenously-induced intervention without considering associated costs. while guiding positive change is an important intervention, neglecting to recognize the endogeneity of behavior can lead to a misunderstanding of incentives and a resurgence of the epidemic when behavior change is reversed prematurely.
although there is growing interest in the role of adaptive human behavior in infectious disease dynamics, there is still a lack of general understanding of the most important properties of such systems [ , , ]. behavior is difficult to measure, quantify, or predict [ ], in part due to the complexity and diversity of the human beings who make the decisions. one early modeling approach simply allowed the transmission parameter (β) to be a negative function of the number infected, effectively introducing an intrinsic negative feedback to the infected class that regulated the disease [ ]. modelers have since used a variety of tools, including agent-based modeling [ ], network structures for the replacement of central nodes when sick [ ] or for behavior-change as a social contagion process [ ], game-theoretic descriptions of rational choice under changing incentives, as with vaccination [ , , ], and a branching process for heterogeneous agents and the effect of behavior during the west africa ebola epidemic of 2014 [ ]. a common approach to incorporating behavior into epidemic models is to track co-evolving dynamics of behavior and infection [ , , ], where behavior represents an i-state of the model [ ]. in a compartmental model, this could mean separate compartments (and transitions therefrom) for susceptible individuals in a state of fear and those not in a state of fear [ ].
periodicity (i.e., multi-peak dynamics) has long been documented empirically in epidemiology [ , ]. periodicity can be driven by seasonal contact-rate changes (e.g., when children are in school) [ ], seasonality in the climate or ecology [ ], sexual behavior change [ ], and host immunity cycling through new births of susceptibles or a decay of immunity over time. some papers in nonlinear dynamics have studied delay differential equations in the context of epidemic dynamics and found periodic solutions [ ]. although it is atypical to include delay in modeling, delay is an important feature of epidemics. for example, if behavior responds to mortality rates, there will inevitably be a lag with an average duration of the incubation period plus the life expectancy upon becoming infected. in a tightly interdependent system, reacting to outdated information can result in an irrational response and periodic cycling.
the original epidemic model of kermack and mckendrick [ ] was first expressed in discrete time. then, by allowing 'the subdivisions of time to increase in number so that each interval becomes very small', the famous differential equations of the sir epidemic model were derived. here we begin with a discrete-time susceptible-infected-susceptible model that is adjusted on the principle of endogenous behavior-change through an adaptive social-contact rate, which can be thought of as either individually motivated or institutionally imposed. we introduce a dynamic utility function that motivates the population's effective contact rate at a particular time period. this utility function is based on information about the epidemic size that may not be current. this leads to a time delay in the contact function that increases the complexity of the population dynamics of the infection. results from the discrete-time model show that the system approaches an equilibrium in many cases, although small parameter perturbations can lead the dynamics to enter qualitatively distinct regimes.
the analogous continuous-time model retains periodicities for some sets of parameters, but numerical investigation shows that the continuous-time version is much better behaved than the discrete-time model. this dynamical behavior is similar to models of ecological population dynamics, and a useful mathematical parallel is drawn between these systems.
to represent endogenous behavior-change, we start with the classical discrete-time susceptible-infected-susceptible (sis) model [ ], which, when incidence is relatively small compared to the total population [ , ], can be written in terms of the recursions
s_{t+1} = s_t − b s_t i_t / n + γ i_t,
i_{t+1} = i_t + b s_t i_t / n − γ i_t,
where at time t, s_t represents the number of susceptible individuals, i_t the number of infected individuals, and n_t the number of individuals that make up the population, which is assumed fixed in a closed population. we can therefore write n for the constant population size. here γ, with 0 < γ < 1, is the rate of removal from i to s due to recovery. this model in its simplest form assumes random mixing, where the parameter b represents a composite of the average contact rate and the disease-specific transmissibility given a contact event. in order to introduce human behavior, we substitute for b a time-dependent b_t, which is a function of both b1, the probability that disease transmission takes place on contact, and a dynamic social rate of contact c_t whose optimal value, c*_t, is determined at each time t as in economic epidemiological models [ ], namely
b_t = b1 c*_t,
where c*_t represents the optimal contact rate, defined as the number of contacts per unit time that maximize utility for the individual. here, c*_t is a function of the number of infecteds reported with a delay ∆. the utility function is assumed to take the form
u(c) = α1 − α2 (c − ĉ)^2 − α3 [1 − (1 − (i_{t−∆}/n) b1)^c].
here u represents utility for an individual at time t given a particular number of contacts per unit time c, and α1 is a constant that represents the maximum potential utility, achieved at a target contact rate ĉ. the second term, α2 (c − ĉ)^2, is a concave penalty for deviating from ĉ. the third term weights the risk of becoming infected given c contacts, with the delayed count i_{t−∆} reflecting the delay in information acquisition and the speed of response to that information. we note that (1 − (i/n) b1)^c can be approximated by 1 − c (i/n) b1 when (i/n) b1 is small and c (i/n) b1 << 1. we thus assume (i/n) b1 is small, and approximate u(c) accordingly. this utility function assumes a strictly negative relationship between the number of infecteds and contact. we assume an individual or government will balance the cost of infection, the probability of infection, and the cost of deviating from the target contact rate ĉ to select an optimal contact rate c*_t, namely the number of contacts that takes into account the risk of infection and the penalty for deviating from the target contact rate. this captures the idea that individuals trade off how many people they want to interact with against their risk of getting sick, or that authorities wanting to reopen the economy during a pandemic have to trade off morbidity and mortality from increasing infections against the need to allow additional social contacts to help the economy restart. this optimal contact rate can be calculated by finding the maximum of u with respect to c. differentiating the approximated utility, we have
du/dc = −2 α2 (c − ĉ) − α3 b1 i_{t−∆} / n,
which vanishes at the optimal contact rate c*, which we write as c*_t to show its dependence on time. then
c*_t = ĉ − (α3 / (2 α2)) b1 i_{t−∆} / n,
which we assume to be positive. therefore, as i_t increases, c*_t decreases and total utility decreases. utility is maximized at each time step, rather than over the course of lifetime expectations.
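as a check on this formulation, the recursion can be iterated directly. the following is a minimal python sketch, assuming the reconstruction of the model given above (b1 the per-contact transmission probability, ĉ the target contact rate, α the composite trade-off parameter introduced below, γ the recovery rate, ∆ the information delay); the parameter values are illustrative placeholders, not those used in the paper's tables.

```python
import numpy as np

def optimal_contact(i_delayed, n, b1, c_hat, alpha):
    # c*_t = c_hat - alpha * b1 * i_{t-delta} / n, the interior optimum of the
    # quadratic utility; clipped at 0 so the contact rate cannot go negative
    return max(c_hat - alpha * b1 * i_delayed / n, 0.0)

def simulate_sis(n=10_000, i0=1.0, b1=0.5, c_hat=1.5, alpha=100.0,
                 gamma=0.2, delta=2, steps=500):
    """iterate the adaptive-contact sis recursion with information delay."""
    i = np.zeros(steps + 1)
    i[0] = i0
    for t in range(steps):
        i_delayed = i[max(t - delta, 0)]       # information available at time t
        c_star = optimal_contact(i_delayed, n, b1, c_hat, alpha)
        b_t = b1 * c_star                      # effective transmission rate
        i[t + 1] = i[t] + b_t * i[t] * (n - i[t]) / n - gamma * i[t]
        if not 0.0 <= i[t + 1] <= n:           # trajectory left [0, n]: 'collapse'
            i = i[:t + 2]
            break
    return i

trajectory = simulate_sis()
print(trajectory[-5:])
```

because c*_t is clipped at zero, the sketch also makes explicit the point at which a trajectory leaves the legitimate range [0, n], which is how the 'collapse' regimes described below show up numerically.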
in addition, the expression for c*_t assumes a strictly negative relationship between the number of infecteds at time t − ∆ and c*. while behavior at high degrees of prevalence has been shown to be non-linear and fatalistic [ , ], in this model prevalence (i.e., b1 i_t / n) is assumed to be small, consistent with the approximation above. we introduce the new parameter α = α3 / (2 α2), so that c*_t = ĉ − α b1 i_{t−∆} / n. we can now rewrite the recursion for the infecteds, replacing c_t with c*_t, as
i_{t+1} = f(i_t) = i_t + b1 c*_t i_t (n − i_t) / n − γ i_t.
when ∆ = 0 and there is no time delay, f(·) is a cubic polynomial, given by
f(i) = i + b1 (ĉ − α b1 i / n) i (n − i) / n − γ i.
for the susceptible-infected-removed (sir) version of the model, we include the removed category and write the (discrete-time) recursion system as
s_{t+1} = s_t − b1 c*_t s_t i_t / n,
i_{t+1} = i_t + b1 c*_t s_t i_t / n − γ i_t,
r_{t+1} = r_t + γ i_t,
with ĉ the baseline contact rate and c*_t specified as above. with b_t = b, say, and not changing over time, these recursions form the discrete-time version of the classical kermack-mckendrick sir model [ ]. the inclusion of the removed category entails that ĩ = 0 is the only equilibrium of the sir system; unlike the sis model, there is no equilibrium with infecteds present. in general, since c*_t includes the delay ∆, the dynamic approach to ĩ = 0 is expected to be quite complex. intuitively, since the infecteds are ultimately removed, we do expect that from any initial frequency i0 of infecteds, all n individuals will eventually be in the r category. numerical analysis of this sir model shows strong similarity between the sis and sir models for several hundred time steps before the sir model converges to ĩ = 0 with r = n. in the section 'numerical iteration and continuous-time analog' we compare the numerical iteration of the sis and sir recursions with the integration of the continuous-time (differential equation) versions of the sis and sir models.
to determine the dynamic trajectories of the sis recursion without time delay, we first solve for its fixed point(s), i.e., the value or values of i such that f(i) = i. it is clear that i = 0 is an equilibrium, as no new infections can occur in the next time-step if none exist in the current one. this is the disease-free equilibrium, denoted by ĩ. the other equilibria are the solutions of the quadratic
b1 (ĉ − α b1 i / n)(n − i) / n = γ.
we label the solution with the + sign i* and the one with the − sign î. it is important to note that î is a legitimate equilibrium only under appropriate parametric conditions; in particular, the local-stability inequalities and ĉ b1 > γ must hold for î to be positive and locally stable. however, even if both of these conditions hold, the number of infecteds may not converge to î. it is well known that iterations of discrete-time recursive relations, of which this recursion (with ∆ = 0) is an example, may produce cycles or chaos depending on the parameters and the starting frequency i0 of infecteds. the accompanying table shows an array of possible asymptotic dynamics with ∆ = 0 found by numerical iteration for a specific set of parameters and an initial frequency i0 = 1. there are examples for which, beginning with a single infected, the number of infecteds explodes, becoming unbounded; of course, this is an illegitimate trajectory since i_t cannot exceed n. however, in the case marked *, î is locally stable, and with a large enough initial number of infecteds there is damped oscillatory convergence to î. in the case marked **, with i0 = 1 the number of infecteds becomes unbounded, but î is locally unstable, and starting with i0 close to î a stable two-point cycle is approached; in this case df(i)/di at i = î is less than −1.
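the interior equilibria and the derivative test just described can be computed numerically. this sketch, again under the reconstructed parameterization and with illustrative values, solves the quadratic for the interior equilibria and evaluates df/di at each; derivative values below −1 signal the period-doubling behavior reported in the table.

```python
import numpy as np

def interior_equilibria(n, b1, c_hat, alpha, gamma):
    # with x = i/n, b1*(c_hat - alpha*b1*x)*(1 - x) = gamma rearranges to
    # alpha*b1*x**2 - (c_hat + alpha*b1)*x + (c_hat - gamma/b1) = 0
    roots = np.roots([alpha * b1, -(c_hat + alpha * b1), c_hat - gamma / b1])
    real = roots[np.isreal(roots)].real
    # zero, one, or two interior equilibria in (0, n); the smaller is i_hat
    return sorted(n * x for x in real if 0 < x < 1)

def map_derivative(i_eq, n, b1, c_hat, alpha, gamma, h=1e-4):
    # numerical df/di for the delta = 0 cubic map f, evaluated at i_eq
    f = lambda i: i + b1 * (c_hat - alpha * b1 * i / n) * i * (n - i) / n - gamma * i
    return (f(i_eq + h) - f(i_eq - h)) / (2 * h)

n, b1, c_hat, alpha, gamma = 10_000, 0.5, 1.5, 100.0, 0.2
for eq in interior_equilibria(n, b1, c_hat, alpha, gamma):
    d = map_derivative(eq, n, b1, c_hat, alpha, gamma)
    print(f"equilibrium i = {eq:8.1f}, f'(i) = {d:+.3f}")  # f' < -1 => cycles
```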
stability analysis of the sis model is more complicated when ∆ ≠ 0, and in the appendix we outline the procedure for local analysis of the recursion near î. local stability is sensitive to the delay time ∆, as can be seen from the numerical iteration of the recursion for the specific set of parameters shown in the second table; some analytical details related to that table are in the appendix. the table reports an array of dynamic trajectories for some choices of parameters and, in two cases, an initial number of infecteds other than i0 = 1. the first three rows show three sets of parameters for which the equilibrium values of î are very similar but the trajectories of i_t are different: a two-point cycle, a four-point cycle, and apparently chaotic cycling above and below î. in all of these cases, df(i)/di at î is less than −1. clearly the dynamics are sensitive to the target contact rate ĉ in these cases. the fourth and eighth rows show that i_t becomes unbounded (tends to +∞) from i0 = 1, but a two-point cycle is approached if i0 is close enough to î; df(i)/di at î is less than −1 in this case. for the parameters in the ninth row, if i0 is close enough to î there is damped oscillation into î: here df(i)/di at î lies between −1 and 0. the fifth and sixth rows exemplify another interesting dynamic starting from i0 = 1: i_t becomes larger than î (overshoots) and then converges monotonically down to î; in each case df(i)/di at î lies between 0 and 1. for the parameters in the seventh row, there is oscillatory convergence to î from i0 = 1 (df(i)/di at î between −1 and 0), while in the last row there is straightforward monotone convergence to î.
a continuous-time analog of the discrete-time recursion, in the form of a differential equation, substitutes di/dt for i_{t+1} − i_t. we then solve the resulting delay differential equation numerically using the vode differential equation integrator in scipy [ , ] (source code available at https://github.com/yoavram/sanjose). using the parameters in the table, trajectories were computed starting from i0 = 1. with no delay (∆ = 0) and a one-unit delay (∆ = 1), the discrete and continuous dynamics are very similar, both converging to î. however, with a two-unit delay the differential equation oscillates into î while the discrete-time recursion enters a regime of inexact cycling around î, which appears to be a state of chaos. for still longer delays, the discrete recursion 'collapses': i_t becomes negative and appears to go off to −∞ (in the figure, this is cut off at i = 0). the continuous version, however, in these cases enters a stable cycle around î. in fig. s s there appears to be convergence to î, but in fig. s l, after a number of time units, in both the discrete- and continuous-time sir versions, the number of infecteds begins to decline towards zero. it is worth noting that if the total population size n decreases over time, for example if we take n(t) = n exp(−zt), with z = b1 ĉ γ, then the short-term dynamics of the sis model begin to closely resemble the sir version. this is illustrated in supplementary fig. s n, where b1, ĉ and γ are, as in figs. s s and s l, the same as in the first figure. with n decreasing to zero, both s and i will approach zero in the long run.
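the authors solve the continuous-time delay model with a vode-based dde solver (repository linked above). as an independent, minimal sketch, a fixed-step euler scheme with a lagged history buffer reproduces the qualitative behavior; it assumes the same reconstructed parameterization and illustrative values as the earlier sketches, and it is not the paper's implementation.

```python
import numpy as np

def simulate_dde(n=10_000, i0=1.0, b1=0.5, c_hat=1.5, alpha=100.0,
                 gamma=0.2, delay=2.0, t_max=200.0, dt=0.01):
    """fixed-step euler integration of
       di/dt = b1 * c*(i(t - delay)) * i * (n - i) / n - gamma * i,
       with constant history i(t) = i0 for t <= 0."""
    steps = int(t_max / dt)
    lag = int(delay / dt)
    i = np.empty(steps + 1)
    i[0] = i0
    for k in range(steps):
        i_lag = i[k - lag] if k >= lag else i0          # delayed state
        c_star = max(c_hat - alpha * b1 * i_lag / n, 0.0)
        di = b1 * c_star * i[k] * (n - i[k]) / n - gamma * i[k]
        i[k + 1] = i[k] + dt * di
    return i

i = simulate_dde()
print(f"i(t_max) = {i[-1]:.1f}")
```

a small dt is used because the delayed feedback can make coarse euler steps artificially unstable; a proper dde integrator, as used by the authors, adapts the step size instead.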
our model makes a number of simplifying assumptions. we assume, for example, that all individuals in the population will respond in the same fashion to government policy. we assume that governments choose a uniform contact rate according to an optimized utility function, which is homogeneous across all individuals in the population. finally, we assume that the utility function is symmetric around the optimal number of contacts, so that increasing or decreasing contacts above or below the target contact rate yields the same reduction in utility. these assumptions allowed us to create the simplest possible model that includes adaptive behavior and time delay. in holling's heuristic distinction in ecology between tactical models, built to be parameterized and predictive, and strategic models, which aim to be as simple as possible to highlight phenomenological generalities, this is a strategic model [ ].
we note that the five distinct kinds of dynamical trajectories seen in these computational experiments come from a purely deterministic recursion. this means that oscillations and even erratic, near-chaotic dynamics and collapse in an epidemic may not necessarily be due to seasonality, complex agent-based interactions, changing or stochastic parameter values, demographic change, host immunity, or socio-cultural idiosyncrasies. this dynamical behavior in the number of infecteds can result from the mathematical properties of a simple deterministic system with homogeneous endogenous behavior-change, similar to the complex population dynamics of biological organisms [ ]. the mathematical consistency with population dynamics suggests a parallel in ecology: the indifference point for human behavior functions in a similar way to a carrying capacity, below which a population will tend to grow and above which it will tend to decline. if individuals are incentivized to change their behavior to protect themselves, they will, and they will cease to do this when they are not [ ]. further, our results show that certain parameter sets can lead to limit-cycle dynamics, consistent with other negative feedback mechanisms with time delays [ , ]. this is because the system is reacting to conditions that were true in the past, but not necessarily true in the present. in our discrete-time model, there is the added complexity that the non-zero equilibrium may be locally stable but not attained from a wide range of initial conditions, including the most natural one, namely a single infected individual. observed epidemic curves of many transient disease outbreaks typically inflect and go extinct, as opposed to this model, which may oscillate perpetually or converge [ ]; yet multi-peaked outbreaks are also observed, as are the surges and fluctuations in covid-19 cases seen globally [ ]. there may be many causes for such double-peaked outbreaks, one of which may be a lapse in behavior-change after the epidemic begins to die down due to decreasing incentives [ ], as represented in our simple theoretical model. this is consistent with findings that voluntary vaccination programs suffer from decreasing incentives to participate as prevalence decreases [ , ]. it should be noted that the continuous-time version of our model can support a stable cyclic epidemic whose interpretation in empirical terms will depend on the time scale, and hence on the meaning of the delay ∆. one of the responsibilities of infectious disease modelers (e.g., covid-19 modelers) is to predict and project forward what epidemics will do in the future, in order to better assist in the proper and strategic allocation of preventative resources. covid-19 models have often proved wrong by orders of magnitude because they lack the means to account for adaptive response.
an insight from this model, however, is that prediction becomes very difficult, perhaps impossible, if we allow for adaptive behavior-change, because the system is qualitatively sensitive to small differences in the values of key parameters. these parameters are very hard to measure precisely; they change depending on the disease system and context, and their inference is generally subject to large errors. further, we do not know how policy-makers weight the economic trade-offs against the public health priorities (i.e., the ratio between α2 and α3 in our model) to arrive at new policy recommendations. to maximize the ability to predict and minimize loss of life or morbidity, outbreak response should seek to minimize not only the reproduction number, but also the length of time taken to gather and distribute information. another approach would be to use a predetermined strategy for the contact rate, as opposed to a contact rate that depends on the number of infecteds. in our model, complex dynamic regimes occur more often when there is a time delay. if behavior-change arises from fear, and fear is triggered by high local mortality and high local prevalence, such delays seem plausible, since death counts and incubation periods are lagging epidemiological indicators. lags mean that people can respond sluggishly to an unfolding epidemic crisis, but they also mean that people can abandon protective behaviors prematurely. developing approaches to incentivize protective behavior throughout the duration of any lag introduced by the natural history of the infection (or otherwise) should be a priority in applied research. this paper represents a first step in understanding endogenous behavior-change and time-lagged protective behavior, and we anticipate further developments along these lines that could incorporate long incubation periods and/or recognition of asymptomatic transmission.
in the neighborhood of the equilibrium î, write i_t = î + ε_t and i_{t−∆} = î + ε_{t−∆}, where ε_t and ε_{t−∆} are small enough that quadratic terms in them can be neglected in the expression for i_{t+1} = î + ε_{t+1}. the linear approximation is then
ε_{t+1} = a ε_t + b ε_{t−∆},
where a and b are the partial derivatives of f with respect to i_t and i_{t−∆}, respectively, evaluated at î and constant with respect to time. in the case ∆ = 0, this reduces to ε_{t+1} = (a + b) ε_t = l(î) ε_t; recalling that î satisfies the equilibrium condition and substituting for γ from it, l(î) is simply df(i)/di evaluated at î. now we turn to the general case ∆ ≠ 0. local stability of î is then determined by the properties of the linear recursion, whose solution involves its characteristic equation
λ^(∆+1) − a λ^∆ − b = 0.
in principle there are ∆ + 1 real or complex roots of this equation, which we represent as λ1, λ2, . . . , λ_{∆+1}, and the solution of the linear recursion can be written as a linear combination of powers of these roots, with coefficients found from the initial conditions. convergence to, and hence local stability of, î is determined by the magnitude of the absolute value (if real) or modulus (if complex) of the roots λ1, λ2, . . . , λ_{∆+1}: î is locally stable if the largest among the ∆ + 1 of these is less than unity. in the corresponding table, results of numerically iterating the complete recursion are listed for a range of delays ∆, all starting from i0 = 1, with the stated population size and parameters. the figure illustrates the discrete- and continuous-time dynamics summarized in the table. for ∆ = 1, the characteristic equation is a quadratic with complex roots . ± . i, whose modulus is . , which is less than 1. the complexity implies cyclic behavior, and since the modulus is less than one, we see locally damped oscillatory convergence to î.
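before turning to longer delays, note that local stability for an arbitrary ∆ can be checked numerically by building the characteristic polynomial from the two partial derivatives of the map and finding its roots. the sketch below does this by finite differences, using the illustrative equilibrium î ≈ 218 obtained with the earlier sketch's placeholder parameters; as before, this assumes the reconstructed model rather than the paper's exact parameter tables.

```python
import numpy as np

def characteristic_roots(i_hat, delta, n, b1, c_hat, alpha, gamma, h=1e-4):
    # f(i, j): next state when current infecteds = i and delayed infecteds = j
    f = lambda i, j: (i + b1 * max(c_hat - alpha * b1 * j / n, 0.0)
                      * i * (n - i) / n - gamma * i)
    a = (f(i_hat + h, i_hat) - f(i_hat - h, i_hat)) / (2 * h)  # df/di_t
    b = (f(i_hat, i_hat + h) - f(i_hat, i_hat - h)) / (2 * h)  # df/di_{t-delta}
    # characteristic polynomial: lambda**(delta + 1) - a*lambda**delta - b = 0
    coeffs = ([1.0, -a] + [0.0] * (delta - 1) + [-b]) if delta >= 1 \
             else [1.0, -(a + b)]
    return np.roots(coeffs)

roots = characteristic_roots(i_hat=218.0, delta=2, n=10_000, b1=0.5,
                             c_hat=1.5, alpha=100.0, gamma=0.2)
print(max(abs(roots)))   # i_hat is locally stable iff this is < 1
```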
for ∆ = 2, the characteristic equation is the cubic λ^3 − a λ^2 − b = 0, which has one real root . and a pair of complex roots . ± . i. here the modulus of the complex roots is . , which is greater than unity, so that î is not locally stable. in this case the dynamics depend on the initial value i0: for i0 below a threshold, i_t oscillates but not in a stable cycle; for i0 above it, the oscillation becomes unbounded.
world health organization. coronavirus disease (covid-19): situation report
scientific and ethical basis for social-distancing interventions against covid-19. the lancet infectious diseases
social factors in epidemiology
modelling the influence of human behaviour on the spread of infectious diseases: a review
evolving public perceptions and stability in vaccine uptake
game theory of social distancing in response to an epidemic
the responsiveness of the demand for condoms to the local prevalence of aids
nine challenges in incorporating the dynamics of behaviour in infectious diseases models
impact and behaviour: the importance of social forces to infectious disease dynamics and disease ecology
economic epidemiology and infectious diseases
erratic flu vaccination emerges from short-sighted behavior in contact networks
capturing human behaviour
a generalization of the kermack-mckendrick deterministic epidemic model
a hybrid epidemic model: combining the advantages of agent-based and equation-based approaches. winter simulation conference, ieee
the effect of a prudent adaptive behaviour on disease transmission
coupled contagion dynamics of fear and disease: mathematical and computational explorations
a general approach for population games with application to vaccination
ebola cases and health system demand in liberia
the spread of awareness and its impact on epidemic outbreaks. proceedings of the national academy of sciences
a review
the dynamics of physiologically structured populations
periodicity in epidemiological models
measles in england and wales i: an analysis of factors underlying seasonal patterns
seasonal and interannual cycles of endemic cholera in bengal in relation to climate and geography
etiology of newly emerging marine diseases
epidemic cycles driven by host behaviour
periodic solutions of delay differential equations arising in some models of epidemics
a contribution to the mathematical theory of epidemics. the royal society
modeling infectious diseases in humans and animals. princeton university press
time series modelling of childhood diseases: a dynamical systems approach
adaptive human behavior in epidemiological models
choices, beliefs, and infectious disease dynamics
higher disease prevalence can induce greater sociality: a game theoretic coevolutionary model
global stability of an sir epidemic model
global stability for the seir model in epidemiology
scipy-based delay differential equation (dde) solver
the strategy of building models of complex ecological systems
simple mathematical models with very complicated dynamics
journal of the fisheries board of canada
time-delay versus stability in population models with two and three trophic levels
time delays are not necessarily destabilizing
different epidemic curves for severe acute respiratory syndrome reveal similar impacts of control measures
rational epidemics and their public control
group interest versus self-interest in smallpox vaccination policy
key: cord- -lr ubz authors: droit-volet, sylvie; gil, sandrine; martinelli, natalia; andant, nicolas; clinchamps, maélys; parreira, lénise; rouffiac, karine; dambrun, michael; huguet, pascal; dubuis, benoît; pereira, bruno; bouillon, jean-baptiste; dutheil, frédéric title: time and covid-19 stress in the lockdown situation: time free, «dying» of boredom and sadness
date: - - journal: plos one doi: . /journal.pone. sha: doc_id: cord_uid: lr ubz
a lockdown of people has been used as an efficient public health measure to fight against the exponential spread of the coronavirus disease (covid-19) and allows the health system to manage the number of patients. the aim of this study (clinicaltrials.gov nct ) was to evaluate the impact of both the perceived stress aroused by covid-19 and the emotions triggered by the lockdown situation on the individual experience of time. a large sample of the french population responded to a survey on their experience of the passage of time during the lockdown compared to before the lockdown. the perceived stress resulting from covid-19 and stress at work and at home were also assessed, as were the emotions felt. the results showed that people experienced a slowing down of time during the lockdown. this time experience was not explained by the levels of perceived stress or anxiety, although these were considerable, but rather by the increase in boredom and sadness felt in the lockdown situation. the increased anger and fear of death explained only a small part of the variance in the time judgment. the conscious experience of time therefore reflected the psychological difficulties experienced during lockdown and was not related to the perceived level of stress or anxiety.
in 2020, faced with a virus that is uncontrollable because of its unknown [ ] and virulent nature (sars-cov-2), the governments of different countries of the european union, as well as of the whole world, found themselves obliged to impose a lockdown on their citizens. this unprecedented public measure is thought to allow the health system to manage the number of patients in hospital and ensure that they receive proper care in the context of the covid-19 outbreak. in france, confinement was officially imposed in march 2020 (on march 17th at 12:00 noon). this lockdown, which requires a large number of people to stay at home, thus depriving them of their liberty, is a situation never previously encountered, and its psychological consequences in the short and medium term are not yet known. researchers into time perception can nevertheless easily imagine that this life in lockdown completely changes individuals' relationship to time, i.e., their experience of time. however, to our knowledge, no studies have as yet investigated this question. very recent large-scale surveys or survey projects on covid-19 conducted all around the world (e.g., china, korea, iran and the united kingdom) suggest that the lockdown situation generates new or heightened emotional states in the form of an increase in psychological distress [ ] [ ] [ ] [ ] [ ]. nonetheless, in the different distress scales used, the different dimensions of emotion (valence and arousal) were not dissociated, and no survey has examined their relationships to time experience, even though emotion and the experience of time are known to be intrinsically linked. the aim of the present study was thus to conduct a large-scale survey of an as yet untested population, french people, in order to assess not only the perceived stress related to covid-19 but also the emotions (happiness, boredom, arousal) felt during as compared to before the lockdown, and their links to the subjective experience of time. the experience of time corresponds to one's feeling about time, i.e., the conscious judgment of the speed of the passage of time [ , ].
this has received relatively little attention from researchers in the field when compared to research into individuals' abilities to perceive short durations (< 1 minute). this is probably due to the challenge of objectively examining just what makes up the experience of each individual, and therefore the role of higher-level cognitive mechanisms (e.g., consciousness, memory, self-awareness) [ ] [ ] [ ]. indeed, the judgment of the passage of time can be seen as a mirror of the subjective experience of one's internal state [ ] [ ] [ ]. for example, contrary to the generally held belief that time seems to pass faster as we get older, some studies have demonstrated that the feeling of the passage of time in the immediate moment is not directly related to age (young adult vs. older adult), but rather to people's subjective emotional experience and lived activities [ , , ]. the passage of time is in fact a sensitive index of the emotional experience felt in the present moment and of its variations as a function of life conditions. it is thus important to investigate individuals' judgments about how fast time seems to pass in the exceptional situation of lockdown and the factors explaining these. from a general standpoint, the literature provides evidence of the role of emotional experience as a critical factor in the experience of time. nevertheless, the famous expression 'time flies when you feel good; time drags when you feel bad' is not straightforward to explain, as negative feelings are diverse and may involve varying mechanisms. more precisely, the emotional experience can be divided into two fundamental dimensions, valence (pleasure vs. displeasure) and activation (calmness vs. excitement/alertness) [ , ]. these two dimensions interact in the characterization of any given emotion. for example, while the emotions of sadness and fear are both negative, the former is weakly activating (or even deactivating) while the latter is strongly activating. accordingly, the level of felt arousal has been shown to be a prominent factor in temporal mechanisms: the more individuals report being in a state of arousal, the faster time is reported to pass. several studies have shown a lengthening of estimates of short temporal intervals in situations of acute stress, for example when participants are faced with unpleasant stimuli [ ] [ ] [ ] or when they imminently expect a very unpleasant event, e.g., an electric shock [ , ]. however, few studies have examined the effect of chronic stress on time judgments, such as that experienced by people with the covid-19 virus or subjected to lockdown. in the context of chronic stress, i.e., when stress is extended over several days or weeks as in the case of hospital nurses, cocenas-silva et al. [ ] showed that duration judgments were no longer altered by physiological stress as measured by physiological markers, but rather by subjective psychological stress as assessed by a self-reported scale. in addition, one can assume that different mechanisms are at work in the case of an emotion such as fear (an immediate and ephemeral negative state directed towards a specific event), compared to a more diffuse affective state like anxiety or perceived stress (a prolonged negative state whose origin is not necessarily identified) [ ]. the covid-19 pandemic, i.e., the risk that you or your loved ones will be affected by the disease, as well as uncertainty about this disease, could produce chronic stress that has consequences for mental and physical health.
it is well known that chronic stress affects the immune system, suppressing protective and increasing pathological immune responses [ ]. there is thus a risk in this period of pandemic that the chronic stress related to covid-19 and its corollaries (anxiety, fear of death) are particularly high and therefore impact the subjective experience of time by speeding up the perceived passage of time. consequently, we hypothesized a significant relationship between stress and time experience during the lockdown imposed by the covid-19 pandemic. furthermore, in this covid-19 period, it is critical to consider not only the disease-related perceived stress but also the consequences for life of being locked down at home, as well as the direct and indirect effects on daily psychological and social functioning. as a recent survey highlighted, confining people increases their sense of boredom [ ]. boredom corresponds to 'the aversive state of wanting, but being unable, to engage in satisfying activity' and involves, in particular, low arousal and negative affects [ , p ]. in particular, some studies have shown that boredom produces a feeling of the slowing down of time rather than a speeding up [ , ]. an alternative hypothesis was thus that boredom would prevail over stress in the experience of time. since boredom is associated with negative emotion of a low level of arousal, we thus expected participants to experience a slowing down of time with the boredom experienced during the lockdown. it was not possible a priori to identify which hypothesis would be valid, i.e., which factors are related to and influence the experience of time in a lockdown situation: the perceived stress in the stressful situation of covid-19 and/or, by contrast, other affective states characterized by a decrease in arousal, such as boredom. indeed, on the one hand, the fear and distress generated by the morbid nature of the crisis and its repercussions (fear for one's health and for that of one's family and friends), or by inappropriate housing quality (stress at home) or working conditions (job stress), could increase people's sense of alertness and therefore lead to a speeding up of the passage of time. on the other, confinement at home and social distancing could result in an increased sense of sadness (i.e., less happiness) and boredom, and thus in the feeling that the passage of time slows down. here, a large sample of french people were asked to answer a scale survey during the lockdown period. this consisted of a series of questions, i.e., demographic questions but also questions on the stress perceived (covid-19 stress, home stress, job stress, anxiety), the emotions (happiness, arousal, boredom) felt during as compared to before the lockdown, and the experience of time. the participants were asked to assess their experience of the passage of time according to three periods of the lockdown, in the immediate moment, during the day and during the last week, as well as before the lockdown for comparison purposes. the sample consisted of french participants, women and men (mean age = . , sd = . , min = , max = , n - years = ). the participants completed the questionnaire at home ( . %) or at work ( . %). the study was reviewed and approved by the human ethics committee sud est vi, france (clinicaltrials.gov nct ). all participants were volunteers and were informed of the objective of the survey and that their data would be processed anonymously and used for research purposes.
the ethics committee waived the need for written consent, considering that people who respond to the questionnaire by going to the website are thereby giving their consent, which they can withdraw at any time. the few minors who completed the questionnaire did so with the consent of their parents, who sent them the survey. the responses to the demographic questions allowed us to characterize the surveyed population. . % of participants were married or equivalent (civil partner, etc.) and . % were single ( % other). their distribution as a function of education level was: . % certificate of general education, . % high school vocational certificate, % high school diploma, . % bachelor's degree, . % master's degree and % doctoral degree. the percentage of participants per professional category was: jobseekers, . %; students, . %; farmers, . %; craftsmen/shopkeepers/business executives, . %; white-collar workers, %; manual workers, . %; intermediate professions, . %; retired, . % ( . % no response). we implemented an open epidemiological, observational, descriptive study by administering a self-reported questionnaire proposed to volunteers using redcap software available through the covistress.org website. the redcap questionnaire was hosted by the university hospital of clermont-ferrand. the questions analyzed in this manuscript were specific questions included in a larger questionnaire composed of different thematic sections (s questions). the thematic sections were presented in random order after the demographic questions. the online questionnaire was distributed several times through mailing lists held by institutions and french social groups. there were no exclusion criteria. the data that we analyzed were obtained for the period of lockdown from march th to april th, 2020, whereas the french lockdown was ordered on march 17th, 2020 at 12:00 noon. completing the survey took between and minutes on average, depending on sub-items. for the main outcomes, we used a visual analog scale (vas), i.e., a non-calibrated line of 100 mm, ranging from 0 to 100 [ , ] . the subjective experience of time was thus assessed using this vas, which went from very slowly (0) to very fast (100). the question was "what are your feelings about the speed of the passage of time?". there were four time questions, one for the passage of time before the lockdown, and three for during the lockdown: now, for the day, and for the week. the stress resulting from covid-19, as well as job stress, home stress, health-related and financial concerns, and anxiety, were assessed using the same vas. the emotional dimensions tested were also assessed with the vas for the period before the lockdown and during the lockdown (now): fear of death (not at all vs. a lot), arousal (calm vs. excited), happiness (sad vs. happy), anger (peaceful vs. angry), boredom (occupied vs. bored). the quality of sleep and level of fatigue were also examined in the survey using the vas. as explained above, these different questions were presented in different thematic sections in a random order (s questions). we performed analyses of variance on the subjective experience of time. we also examined correlations and ran a linear regression model on all the measures of interest using the standardized data. we used the variance inflation factor (vif) to examine multicollinearity in the regression analysis [ ] .
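to make this analysis plan concrete, the following is a minimal illustrative sketch in python (the authors used spss, so this is not their code); the file name and column names are hypothetical, and the temporal difference index anticipates the definition given in the results below.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# hypothetical file of vas ratings (0-100); all column names are assumptions
df = pd.read_csv("covistress_survey.csv")

# temporal difference index: passage-of-time rating before the lockdown minus
# the rating for the present moment; positive = time feels slower in lockdown
df["time_diff"] = df["pot_before"] - df["pot_now"]

predictors = ["covid_stress", "home_stress", "job_stress",
              "anxiety", "boredom", "happiness", "arousal"]
data = df[predictors + ["time_diff"]].dropna()
z = (data - data.mean()) / data.std()           # standardize, as in the paper

X = sm.add_constant(z[predictors])
model = sm.OLS(z["time_diff"], X).fit()         # linear regression on z-scores
print(model.summary())

# variance inflation factors; values below ~5 suggest no problematic collinearity
for i, name in enumerate(predictors, start=1):  # skip the constant (column 0)
    print(name, round(variance_inflation_factor(X.values, i), 2))
```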
finally, to examine the results of the linear regression model in more detail, we also performed an analysis of mediation. the analyses were performed with spss, and the bonferroni correction was systematically applied when necessary. a preliminary analysis of variance performed on the subjective experience of time showed a marked difference between the experience of time before and during the lockdown (fig ), and that time passed faster when a longer period of time was considered, i.e., a week compared to a day or the present moment (bonferroni comparisons, p < . ). to simplify the results, the subsequent statistical analyses are based on the difference between the time rating for the period before the lockdown and that for the present moment (during the lockdown). indeed, the meaning of the temporal judgment during the lockdown is relative to that before the lockdown. in addition, the results were similar when the analyses were performed only on the ratings for the present moment. a positive value of our temporal difference index therefore indicates that the individuals experienced a slowing down of time during the lockdown, a negative value a speeding up of time, and a null value no difference. the anova performed on this temporal difference index, with level of education, professional category and whether the individuals were at work or at home as factors, did not show any significant effect (all f < 1). there was indeed no significant difference in time experience before the lockdown situation as a function of these factors. only a small effect of professional category was observed on the present-time judgment during the lockdown. the anova on the temporal index with sex and marital status (single vs. not single) as factors showed a significant main effect of sex, f( , ) = . , p < . , ηp² = . , and of status, f( , ) = . , p < . , ηp² = . , with no sex x status interaction (p > . ). this suggests that the single people in our sample tended to experience a greater difference in the flow of time during the lockdown when compared to before ( . vs. . ). indeed, in the lockdown situation, time in the present was judged to pass more slowly by the single people (m = . , sd = . ) than by the others (m = . , sd = . ). the women also tended to feel a greater slowing down of time than the men ( . vs. . ) during as compared to before the lockdown, but time passed faster for the women than for the men before the lockdown ( . vs. . ), f( , ) = . , p < . , ηp² = . . nevertheless, their responses to the stress questions indicated that they tended to be more stressed than the men, even though the sex difference only explained a very small proportion of variance. table shows the correlation matrix (s table) between the subjective experience of time (the difference in the judgment of the passage of time between before the lockdown and the present moment, i.e., during the lockdown) and the different tested factors. an examination of table reveals that several dimensions were associated with the slowing down of time during as compared to before the lockdown. with regard to stress, the participants experienced time as passing more slowly, rather than faster, as the level of perceived stress increased, i.e., the perceived stress related to covid-19 (r = . ) as well as the stress at home (r = . ) and at work (r = . ). a slowing down of time was therefore observed as the stress level increased.
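a hedged sketch of the sex by marital-status analysis reported above, again in python rather than the spss actually used, with hypothetical variable names:

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.read_csv("covistress_survey.csv")            # hypothetical file
df["time_diff"] = df["pot_before"] - df["pot_now"]   # positive = time slows down

# two-way anova on the temporal difference index; 'sex' and 'single' are
# hypothetical categorical columns coding sex and marital status
model = smf.ols("time_diff ~ C(sex) * C(single)", data=df).fit()
print(anova_lm(model, typ=2))                        # main effects + interaction
```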
this deceleration of subjective time was observed even though the stress values reported on the vas were high, and higher for covid-19-related stress than for home and job stress (covid-19 stress, m = . , sd = . ; job stress, m = . , sd = . ; home stress, m = . , sd = . ; f( , ) = . , p < . , ηp² = . ; all bonferroni tests, p < . ). the rating for each type of stress was indeed significantly different from zero (t( ) = . , t( ) = . , t( ) = . , respectively, all p < . ). finally, the stress resulting from covid-19 was more closely associated with anxiety (r = . , p < . ) and the fear of death (r = -. , p < . ) than it was with experienced time per se. inconsistent with our first hypothesis, the correlation between the experience of time and covid-19-related stress was therefore very low, and this was also the case for stress in the other contexts (home, work). as table suggests, the experience of time was more strongly correlated with boredom (r = -. , p < . ) and decreased happiness (r = . , p < . ) than with the level of perceived stress. the participants therefore experienced a slowing down of time as boredom increased and happiness decreased during the lockdown. as the time judgment was significantly correlated with several dimensions, to identify the best predictor of the subjective experience of time we performed a regression analysis on the time judgments with the different significant dimensions entered into the same model (table ). the examination of multicollinearity in the regression analysis using the vif indicated no problematic presence of multicollinearity (all vif < ) [ ] . the results of this regression analysis indicated that the perceived stress resulting from covid-19 and its spread was not a significant predictor.
table . correlations between the passage of time (difference between before the lockdown and for the present, i.e., during the lockdown) and the different tested factors (z-scores).
the more bored the participants were in the lockdown situation, the more they experienced a slowing down of time. indeed, time was experienced as passing increasingly slowly in the present moment compared to before the lockdown as the level of boredom rose (fig ). it also seemed to slow down as happiness decreased, i.e., as sadness increased (fig ). increasing boredom and decreasing happiness were therefore the two main predictors of the experience of the passage of time during the lockdown. since these two dimensions are related, we conducted statistical analyses to estimate whether boredom mediated the effect of emotion on the experience of time and, conversely, whether emotion mediated the effect of boredom on the experience of time. the mediation analyses indicated that boredom contributes to explaining the effect of emotion on the experience of the passage of time, with a significant indirect effect (β = . , se = . , 95% ci [. ; . ], z = . , p < . ; . % of mediation) (fig ). however, the direct effect of emotion (sadness) on the time experience remained significant (β = . ).
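since the mediation analysis was run in spss, here is a minimal percentile-bootstrap sketch of the reported indirect effect (emotion -> boredom -> passage of time); the file and column names are hypothetical and the variables are assumed to be z-scored as in the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("covistress_survey.csv")            # hypothetical file
df["time_diff"] = df["pot_before"] - df["pot_now"]
cols = ["happiness", "boredom", "time_diff"]         # hypothetical column names
z = ((df[cols] - df[cols].mean()) / df[cols].std()).dropna()

def indirect_effect(x, m, y):
    # a-path: emotion -> boredom; b-path: boredom -> time, controlling for emotion
    a = sm.OLS(m, sm.add_constant(x)).fit().params[1]
    b = sm.OLS(y, sm.add_constant(np.column_stack([x, m]))).fit().params[2]
    return a * b

x, m, y = (z[c].to_numpy() for c in cols)
rng = np.random.default_rng(0)
boots = []
for _ in range(5000):                                # percentile bootstrap
    idx = rng.integers(0, len(x), len(x))
    boots.append(indirect_effect(x[idx], m[idx], y[idx]))
low, high = np.percentile(boots, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(x, m, y):.3f}, "
      f"95% ci [{low:.3f}, {high:.3f}]")
```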
the results of our survey showed that the stress felt by a broad cross-section of the french population during the lockdown was high, in particular with regard to stress relating to the covid-19 pandemic, as indicated by the rating of . (+/- . ) on a 100-mm vas. the level of perceived stress linked to covid-19 was even higher than the stress at work and at home. covid-19 stress was, in fact, related to the participants' anxiety and their fear of death. the more anxious and frightened they were about death, the more stressed they were in the face of this disease. these results are entirely consistent with the initial results of surveys on covid-19 conducted, in particular, in china [ , ] and iran [ ] , which have shown an increase in psychological distress as a result of the covid-19 pandemic. however, as reported by qiu et al. [ ] , it is noteworthy that people's distress does not reach a pathological level (m = . ), with only % of the population suffering from severe distress and % from mild or moderate distress. in addition, the proportion of individuals presenting psychological distress disorders before the covid-19 pandemic is unknown. however, the chinese respondents suffered less psychological distress and reported greater life satisfaction when working in the office than at home, whereas the opposite seems to be the case in the french population, as suggested by the significantly lower level of stress at home than at work. this suggests that there are some differences in culture or living conditions between people in different countries with regard to stress management in similar social isolation situations. the originality of our results is to show that, although the level of stress was quite high, it had little impact on the current subjective experience of time. indeed, the participants did not feel a speeding up of time related to the increase in their stress level. this is contrary to the results of studies on timing which have described a lengthening of duration estimates and the experience of a faster passage of time when levels of stress and anxiety are high [ , , ] . however, these findings were obtained in intense and acutely emotional situations, when the subjects were faced with or expecting a forthcoming threatening event, or in individuals with high trait anxiety. in the situation of lockdown at home, the current level of stress was therefore not high enough to affect the sense of time. indeed, the level of arousal remained low, although it increased slightly between the period before and during the lockdown. one might nevertheless object that it would have been more convincing to record physiological markers of stress. however, this was not possible in the lockdown situation, which was rapidly decided on by the public authorities [ , ] . in addition, cocenas-silva et al. [ ] recently showed that perceived stress was a better predictor of changes in time estimates than physiological stress per se in the case of prolonged stressful situations, for example in the case of hospital nurses at work. in addition, the likelihood of encountering a series of intensely stressful events may be reduced in the present isolation situation. family life involving the care of children can obviously be a source of stress. our study did indeed indicate that women were more stressed at home than men, but were even more so when they were single than when part of a family, and that the number of children only slightly increased the stress level at home (r = . , p < . ). rather than covid-19-related stress or home and job stress, our study showed that it was the emotional experience of everyday life during the lockdown that influenced the sense of time. indeed, the participants clearly reported experiencing a slowing down of the passage of time during the lockdown in comparison to before it. and the most reliable predictors of this slowing down were the feelings of boredom and sadness.
our results are consistent with those of recent studies on time judgments that have pointed out the critical role of emotion in human beings' sense of time [for a review, see ] and of boredom [ , , ] . these studies have indeed found a slowing down of time as both sadness and boredom increase. in line with theoretical models of boredom [ ] , the present study found that the degree of boredom experienced was related not only to arousal but mostly to negative emotional experience: the more bored people were in lockdown, the sadder they were. boredom is known to be linked to depression [ , ] , and depressed people feel a slowing down of time [ ] . consequently, the experience of boredom in the lockdown and the judgment of a slower passage of time may have increased sadness and could lead to pathological depression. however, in the lockdown situation, the level of boredom explained a proportion, but not all, of the effect of sadness on the experience of the passage of time. other factors that we need to examine in a future study could also help to explain sadness and the experience of time in the lockdown, such as social withdrawal. the changes in the sense of time in lockdown were therefore due to the significant increase in both boredom and sadness. the literature on boredom suggests that it is involved in a multitude of behaviors and psychological dimensions and that it has a negative side, as in the sadness observed in our study, as well as a positive side. indeed, trait boredom is associated with psychological difficulties (e.g., drug abuse, depression, anxiety, binge eating) [ , ] . however, some recent functional approaches have also suggested that boredom constitutes a key signal to change behavior by orientating humans to try to find a more satisfying situation [ ] . in the context of lockdown, one may therefore wonder what influence this feeling of boredom has on the development of pro-social behaviors or on compliance with the containment measures in the short or longer term (does it only result in bad things or also in good things?). in the lockdown situation, people may have more time. however, they "die" of boredom and sadness, and time slows down, drags on. the sense of the passage of time is, ultimately, a phenomenological time that is closely related to the self and the sense of existence [ ] . as jean-paul sartre stated, human beings are defined by their acts and their effects on others. however, when they have more time but are isolated and cannot act, when they have nothing to do, they are overwhelmed by sadness and boredom. it would seem important for future surveys to examine whether this feeling is valid in all cultures and for all people. it also seems important to identify whether other factors specific to individual characteristics or living conditions, or to representations and beliefs regarding covid-19 or government policies, contribute to changes in the sense of time in the lockdown situation. some authors nevertheless defend the benefits of boredom. however, this raises the question of individual abilities to cope with the feeling of boredom in industrial societies. individual differences in coping with boredom can potentially predict psychological difficulties, health problems and increased vulnerability to psychopathologies such as depression [ ] . it is thus a serious problem and one which has to be taken into account.
in conclusion, the changes in the sense of time in the lockdown situation, imposed as an efficient solution to the covid-19 pandemic, reflect the major psychological difficulties that people are experiencing during the lockdown.
supporting information: s table (docx). members of the research group: nicolas andant, maélys clinchamps; peter dieckmann, copenhagen academy for medical education and simulation (cames), denmark. the covistress network is headed by pr. frédéric dutheil (frederic.dutheil@uca.fr), chu clermont-ferrand.
references:
how will country-based mitigation measures influence the course of the covid-19 epidemic?
the psychological impact of quarantine and how to reduce it: rapid review of the evidence
multidisciplinary research priorities for the covid-19 pandemic: a call for action for mental health science
the distress of iranian adults during the covid-19 pandemic: more distressed than the chinese and with different predictors. medrxiv
a nationwide survey of psychological distress among chinese people in the covid-19 epidemic: implications and policy recommendations
unprecedented disruption of lives and work: health, distress and life satisfaction of working adults in china one month into the covid-19 outbreak
passage of time judgements
intertwined facets of subjective time
passage of time judgments in everyday life are not related to duration judgments except for long durations of several minutes
passage of time judgments are not duration judgments: evidence from a study using experience sampling methodology
what day is today? a social-psychological investigation into the process of time orientation
mindfulness meditation, time judgment and time experience: importance of the time scale considered (seconds or minutes)
awareness of the passage of time and self-consciousness: what do meditators report? psych journal
individual differences in self-rated impulsivity modulate the estimation of time in a real waiting situation
time does not fly but slow down in old age
experience sampling methodology reveals similarities in the experience of passage of time in young and elderly adults
a circumplex model of affect
core affect, prototypical emotional episodes, and other things called emotion: dissecting the elephant
the effect of expectancy of a threatening event on time perception in human adults
time estimation of fear cues in human observers
negative emotionality influences the effects of emotion on time perception
fear and time: fear speeds up the internal clock
emotional modulation of interval timing and time perception
chronic stress impairs temporal memory. timing time percept
anxiety makes time pass quicker while fear has no effect
effects of stress on immune function: the good, the bad, and the beautiful
the unengaged mind: defining boredom in terms of attention
what happens while waiting? how self-regulation affects boredom and subjective time during a real waiting situation
clinical stress assessment using a visual analogue scale
validity of occupational stress assessment using a visual analogue scale
extracting the variance inflation factor and other multicollinearity diagnostics from typical regression results
when time slows down: the influence of threat on time perception in anxiety
the effects of valence and arousal on time perception in individuals with social anxiety
jobstress study: comparison of heart rate variability in emergency physicians working a -hour shift or a -hour night shift: a randomized trial
urinary interleukin- is a biomarker of stress in emergency physicians, especially with advancing age: the jobstress* randomized trial
the temporal dynamic of emotional effect on judgments of durations
proneness to boredom mediates relationships between problematic smartphone use with depression and anxiety severity
relationships between boredom proneness, mindfulness, anxiety, depression, and substance use
time perception in depression: a meta-analysis
time flies when you're having fun: temporal estimation and the experience of boredom
psychometric measures of boredom: a review of the literature
high boredom proneness and low trait self-control impair adherence to social distancing guidelines during the covid-19 pandemic
intrinsic enjoyment and boredom coping scale: validation with personality, evoked potential and attention measures
key: cord- - cs z x authors: baraitser, lisa title: the maternal death drive: greta thunberg and the question of the future date: - - journal: psychoanal cult soc doi: . /s - - -y sha: doc_id: cord_uid: cs z x
the centenary of freud's beyond the pleasure principle (freud, 1920a/ ) falls in 2020, a year dominated globally by the covid-19 pandemic. one of the effects of the pandemic has been to reveal the increasingly fragile interconnectedness of human and non-human life, as well as the ongoing effects of social inequalities, particularly racism, on the valuing of life and its flourishing. drawing on earlier work, this paper develops the notion of a 'maternal death drive' that supplements freud's death drive by accounting for repetition that retains a relation to the developmental time of 'life' but remains 'otherwise' to a life drive. the temporal form of this 'life in death' is that of 'dynamic chronicity', analogous to late modern narratives that describe the present as 'thin' and the time of human futurity as running out. i argue that the urgency to act on the present in the name of the future is simultaneously 'suspended' by the repetitions of late capitalism, leading to a temporal hiatus that must be embraced rather than simply lamented. the maternal (death drive) alerts us to a new figure of a child whose task is to carry expectations and anxieties about the future and bind them into a reproductive present. rather than seeing the child as a figure of normativity, i turn to greta thunberg to signal a way to go on in suspended 'grey' time.
and why should i be studying for a future that soon will be no more, when no one is doing anything whatsoever to save that future? (greta thunberg)
this paper is late. not just a little late but seriously forestalled. there is some pressure - an urgency produced by the centenary of freud's beyond the pleasure principle falling in 2020 - and the desire and pleasure in partaking in a collaborative, timely celebration of the work. there are the ordinary repetitions that are holding this up: a chronic relation to my own thoughts, veering towards and away from the satisfactions and disturbances of ideas connecting or linking; the chronic overwhelm produced by the difficulty of saying 'no' and resisting the temptations of an overloaded life; and the realities of overload brought on not by a chronic relation to limits but by their obliteration by the institutions and systems that govern our lives.
then, of course, as 2020 has deepened, there have been the temporalities of illness, care and grief; of the suspension of time under conditions of lockdown; and of the stop-start of uncertainty and helplessness. for some, it has been a time of permanent and dangerous work; of intolerable waiting for others; and of the fault-lines of inequality and racial injustice urgently rupturing the otherwise monotonous rhythm of a global pandemic. in 2020, everything and nothing went on hold. during this time, i continued to work with patients, albeit 'remotely', in the strange temporality of a five-times-per-week psychoanalysis. even with so much time, the wait between sessions can be felt to be intolerable. to be in an analysis is to be held in suspension from one session to the next. one of my patients describes the wait as an agonizing 'blank time', like the crackling of an old-fashioned tv. it is not dead time as such but the incessant noise of nothing happening. to be in the session, however, produces a different kind of disturbance: an utterly absorbing kind of time that they liken to the colour blue. we move between the absorbing blue time of the sessions and the blank, crackling, maddening time between them. there is a 'session-time' analyst, who is blue, and a 'between-session-time' analyst, who maddens with a blank, crackling absence. time is both interminable - a wait between the sessions that feels like it goes on forever - and chronic: the repetition of blue, blank, blue, blank, blue, blank… beyond the pleasure principle is freud's meditation on the temporalities of repetition and return as species-time articulates with the time of the subject. in many ways, the death drive is a temporal concept, holding together the paradoxical time in which repetition contains within it a backwards pull towards the no-time of the living organism, even as the shape of this relation describes 'a life'. one hundred years later, time in the early decades of the 21st century appears oddly analogous: it seems to loop or repeat but is undercut by a pull towards no-time, since the human and planetary future is not just foreshortened but now 'foreclosed' by the immanent twin disasters of capitalist and (neo)colonial expansion (baraitser, a, p. ). franco 'bifo' berardi has long argued that our collective human future has come and gone and that the future has outlived its usefulness as a concept (berardi, ). time after the present will come, but it will not bring the promises of bettering the conditions of the now for most, this having been a central aspect of european and north american future narratives in the post-war period (toffler, ; lee, ; luhmann, ). in fact, as naomi klein ( ) argues, the very folding of disaster into capitalist discourses, governmental policies and institutional practices does not stave off disaster but profits further from it, pushing the relations between the human and non-human world to the brink of sustainability. what this implies is that disaster is not a future horizon we must urgently draw back from but a condition we have already incorporated, profited from and continue to sustain in the present. in these conditions of 'crisis capitalism', whole populations are kept in a 'chronic state of near-collapse' (invisible committee, , p. ), a kind of temporal hiatus in which one goes on but without a future.
amy elias ( ) has noted the intensive discussions about the 'presentism' of post-wwii globalized societies that have revolved around the idea of the loss of history (p. ). in these narratives, a sense of a saturated, elongated, thin present is a product of a traumatized western collective consciousness confronting the unprecedented 'event' of wwii. however, these narratives, she argues, have given way in the 21st century, as humankind 'has created its own version of durational time inside (rather than outside) the box of historicity' (p. ). this durational time is not bergson's duration that teems with experience (bergson, / , / ) but the empty, timeless time of a 'marketplace duration' (elias, , p. ), closer to the maddening crackling of nothing happening that my patient describes. in addition, as time is increasingly synchronized in the post-war period in terms of economic, cultural, technological, ecological and planetary registers, the 'present' itself becomes the management of a tension between time that is felt to be synced or simultaneous and time that is multiple or heterogeneous to simultaneity (burges and elias, , p. ). we could think of this tension as produced by the dominating effects of european models of time (mills, , ). european time is constantly imposed by the west on 'the rest' through the temporal structures of empire and enacted through colonization, exploitation, extraction and enslavement. european time comes to mediate representations of the world through the imposition of a particular account of the world-historical present on other temporal organizations - cosmic time, geological time, earth time, soil time, indigenous time, women's time, queer time, to name a few (chakrabarty, ; freeman, ; kristeva, / ; nanni, ; puig de la bellacasa, ). another way to put this is that, although freud proposes that repetition leads to the ultimate suspension of time - the return to non-being - the state of non-being produced by temporal suspension in the early 21st century is radically unequally distributed. writing under conditions of lockdown during the covid-19 pandemic, achille mbembe ( ) states: for we have never learned to live with all living species, have never really worried about the damage we as humans wreak on the lungs of the earth and on its body. thus, we have never learned how to die. with the advent of the new world and, several centuries later, the appearance of the 'industrialized races,' we essentially chose to delegate our death to others, to make a great sacrificial repast of existence itself via a kind of ontological vicariate. non-being, or death, is a luxury that hasn't yet been learnt by the 'human', non-being having been delegated to slaves - those humans who are denied status as humans against which the category of 'human' is both founded and flounders - as well as to non-human others. unless we recognize the 'universal right to breath' (emphasis added) for all organic matter, mbembe argues, we will continue to fail to die for ourselves, the death drive being projected, that is, into the body of that which is deemed non-human. if we go on collectively refusing to die for ourselves, we could say that the temporality of the current human predicament is closer to what martin o'brien calls 'zombie time' (o'brien, ). as an artist and writer living with cystic fibrosis, which gives rise to symptoms very similar to covid-19 (coughing, shortness of breath, exhaustion), o'brien has now outlived his own life expectancy.
he writes: zombie time insists on a different temporal proximity to death. like the hollywood zombie, which holds within it a paradox, in that it is both dead and alive, those of us living in zombie time experience death as embodied in life […] we had come to terms with the fact that we are about to die, and then we didn't. freud's movement towards death is circular: a repetitive arc that leads us back to the inorganic, so that in some sense it too describes zombie time, the fact that we have always already surpassed our death date, whereby a life is an act of return. each organism follows its own path, he tells us, to death, and that deviation is a life. a path, however, is not quite what o'brien is suggesting. here the presence of death is sutured to every aspect of life, closer perhaps to melanie klein's insistence on the death drive as a permanent unconscious phantasy that must be managed as a life-long psychic struggle (klein, / ). two questions arise from this. firstly, does recognizing 'death as embodied in life' lead us to begin to die for ourselves? in this 'hour of autophagy', as mbembe ( ) puts it, we will no longer be able to delegate death to an other. we do, indeed, have to die not just in our own fashion but on our own behalf. in one reading of freud's death drive, it is associated with the freedom to do one's own thing, to follow one's own path, and it stands as a marker of an independent life in many ways free from others - even if, as lacan would have it, not free from the big other. but, as so many feminist, queer, disability, and black studies scholars have attested, living an independent life is a fantasy; it is always premised on dependency or interdependency, which so often requires the temporary or permanent tethering of the life of an other or, more profoundly, the harnessing of 'life' itself. judith butler ( ) writes in the force of non-violence that we are all born into a condition of 'radical dependency' (p. ), that no one stands on their own, that we are all at some level propped up by others. freud's suggestion of 'eternal return' requires practices of maintenance that have largely been accorded to women, people of colour, animals, and other non-human others. these practices of maintenance entail the temporalities of often mind-numbing repetition: reproductive and other forms of labour that support, sustain, and maintain all living systems. in order to 'deviate', someone or something else needs to preserve, maintain, protect, sustain, and repeat. those 'others' stay on the side of life, not as progression or even deviation towards death but as a permanent sustaining of life-processes. death in life requires a simultaneous articulation, in other words, of life in death, in which the temporalities of progression, regression, and repetition can be understood as supported and supplemented by another temporal element within the death drive that operates through 'dynamic chronicity': an element that animates 'life' in such a way as to allow the subject to die in its own fashion. i call this life in death the 'maternal death drive' (baraitser, a) to distinguish it from the pleasure principle or the 'life' drive. secondly, if the time of the 'now', as i have elaborated above, takes the form of dynamic chronicity, a suspended yet chronically animated time that pushes out temporal multiplicity, what work needs to be done in order that this form of time retains some connection to a futurity for all?
do the repetitions of 'blue blank', in their own circular fashion, retain within them a relation to futurity, even if they don't exactly lead us somewhere else? i would hope, after all, that my patient may eventually, with time, come to experience the 'blue-session' analyst and the 'blank-absent' analyst as one and the same analyst, even as the agonies of having and losing may continue to be difficult. from a kleinian perspective, the time that this requires is the time in which what is hated and what is loved come to have a relation to one another, which klein calls 'depression' (klein, / ) and which may entail 'depressing time'. we could say that it is the time in which we come to be concerned about the damage done to what is loved, the time whereby what is loved and what is hated can come to matter to one another, making the time of working through that of 'mattering' itself. furthermore, mbembe ( ) writes: community - or rather the in-common - is not based solely on the possibility of saying goodbye, that is, of having a unique encounter with others and honoring this meeting time and again. the in-common is based also on the possibility of sharing unconditionally, each time drawing from it something absolutely intrinsic, a thing uncountable, incalculable, priceless. (emphases in original) this would suggest that, supplementary to the time of blue-blank (saying goodbye again and again), there is another time: that of the 'in-common'. this is a time of permanent mattering, which also takes time to recognize. it is, if you like, the time in which depressive guilt survives and hence the time it takes for a future to be recognized within the present, rather than being the outward edge, the longed-for time that is yet to come. in what follows, and taking my cue from beyond the pleasure principle itself, i attempt to rework freud's death drive by drawing attention to a particular form of developmental time that lies inside the time of repetition, which i link to 'life in death'. in chapter ii of freud's essay, in the midst of his struggle with the meaning of repetition, pleasure and unpleasure, he turns to a child. the function of the child at this point in the text is to provide the case of 'normalcy' - the play of children - in order to help him understand the 'dark and dismal topic of traumatic neurosis' (freud, 1920b/ ). the child will be 'light' (read white) and playful but turns out to be deeply troubled. instead of dragging the cotton reel along the floor as the adults intended, so it could turn and check its existence at any point, the child, standing outside the cot, throws the reel into the cot, accompanied by an o-o-o-o sound, so it cannot be seen, and then pulls it out with a 'da!' that freud describes as 'joyful' (p. ). the pleasure of refinding, however, is postponed: in the time between 'gone' and 'found', the child plays at waiting, as it attempts to remaster the experience, freud tells us, of its 'gone' mother. this is of course also an attempt to deal with its own goneness from the imagined place of the mother; the child is standing outside the cot, after all. the passivity of being left is repeated but transformed through an act of 'revenge', a repetitive act of aggression in which, through psychic substitution, something essentially unpleasurable is turned into something 'to be remembered and to be processed in the psyche' (p. ).
my aim is to repeat freud's impulse, re-inserting a mother and child into the scene of the death drive 'proper' as a way to signal how to die on our own behalf and therefore how to go on in the suspended hiatus we appear to be living through. the maternal, as i will elaborate, appears as a non-normative developmental temporality within the death drive. in my account, the child reappears, however, in the figure of the child-activist greta thunberg. she is the child who has been invested in symbolically to carry hope for the future, a hope that she is decidedly pushing back towards those of the generation who came before her, calling on them to take action now, before it is too late. although thunberg names her vision of the world in terms of 'black and white' thinking, i draw on laura salisbury's notion of 'grey time' (salisbury, in press) in order to understand what to do with the time that remains in which action can still take place. it is always an uncomfortable thing to do, to insert a mother and child into a scene where they are ostensibly not wanted. it carries the sour smells of heteronormativity and essentialism that still cling to discussions of the maternal and that relegate mother-child configurations to the counterpoint of those who are 'not fighting for the children', as lee edelman ( ) suggested in his famous polemic no future. for edelman, the death drive is a queer refusal of futurity that allows negativity to operate as a 'pulsive force' that would otherwise trap queer as a determinate stable position (p. ). the child and mother come to represent the ultimate trap, that of development itself: the unfolding of the normative temporalities of birth, growth, development, maturation, reproduction, wealth generation and death. in some ways, this is what makes the insertion of mother-child back into discourses about the death drive rather 'queer'. in doing so, i deliberately refuse the association between motherhood and normativity and suggest that motherhood is the name for any temporal relation of 'unfurling' whereby the unfurling of one life occurs in relation to the unfurling of another, albeit out of sync. in fact, as i will elaborate below, for a life to unfurl there needs to be the presence of another life that is prepared to wait whilst life and death come to have a relation to one another. this suspended time of waiting for life to unfurl is a non-teleological, crystalline form of developmental time based on the principle of life in death (baraitser, a, p. ). whilst motherhood is always in danger of being squeezed out of this kind of queer theory, it is also in danger of being squeezed out of feminist theories that purport to make space for the maternal. julia kristeva's essay 'women's time' ( / ), for instance, conceptualized female subjectivity as occupying two forms of time: cyclical time (repetition) and monumental time (eternity without cleavage or escape). these two 'feminine' forms of time, she argued, work to conceal the inherent logic of teleological, historical, 'masculine' time, which is linear, progressive, unfolding and yet constantly rupturing, an 'anguished' time (p. ). masculine time rests on its own stumbling block, which is death. cyclical time and 'monumental' or eternal time, kristeva argued, are both accessed through the feminine, so that the feminine signifies a less 'anguished' time because it is uncoupled from the death of the subject and more concerned with suturing the subject to extra-subjective time.
although this has been rightly critiqued for essentializing 'the feminine' through the normative positioning of the female subject on the side of the biological, as well as for mobilizing a non-political appeal to 'nature', i have argued elsewhere that, in attempting to separate the feminine from cyclical and monumental time, feminist theory designates the maternal as the keeper of species-time, in which the mother becomes a biologistic and romanticized subject attached to the rhythms of nature (baraitser, , p. ). toril moi ( ) writes of kristeva's essay that the question for kristeva was not so much how to valorize the feminine but how to reconcile maternal time with linear (political and historical) time (p. ). without a theory of the desire to have children (a desire that can permeate any gender configuration and that i name as maternal regardless of the gendered body that desires it), we leave the door open to the consequences of a failure to theorize, and the maternal falls out of signification, time and history. moreover, motherhood is not just the desire for children but a particular form of repetitive labour relegated largely to women and particularly, in the global north, to women of colour and women from the south. although the concept of 'social reproduction' has been expanded to incorporate a much broader array of activities than caring for children, maternal labour remains distinct from other forms of domestic labour. joy james ( ) argues that the ongoing trauma and theft involved in slavery, for instance, produces not only western democracy but a repudiated 'twin' within western theory that she names 'the black matrix' (p. ). where mothers in captivity and slavery have always provided the reproductive and productive labour that underpins wealth and culture, they are systematically erased - not just in culture but in what she calls 'womb theory' (theory, for instance, that accommodates feminism, intersectionality and antiracism, whilst still denying the maternal captive). despite this, she claims, the black matrix can act as a 'fulcrum' that leverages power against captivity (p. ). i would argue that this power comes, in part, from the impossibility of the maternal captive remaining indifferent to her labour. subsistence farming, cooking, cleaning, household maintenance, support work and the production of status are forms of repetition from which it remains possible to emotionally disattach. but the 'labour' of maternity is 'affective, invested, intersubjective' (sandford, , p. ) and retains an ethical dimension that is distinct. here the maternal emerges as a figuration of the subject that is deeply attached to its labouring, whose labouring is a matter of attachment to that labour, as well as providing the general conditions for attachment (the infant's psychic struggle to become connected to the world) to take place. we could say, then, that the time of repetition under the condition that is maternity becomes the time of mattering, as opposed to the 'meaningless' time of reproduction: the time, that is, in which repetition may come to matter. this time can be felt as obdurate, distinctively uncertain in its outcome, both intensive and 'empty', and bound to the pace of the unfurling other. what is at play is a kind of crystalline developmental time within the time of history. it takes the form of repetition, but this repetition holds open the possibility of something coming to matter, rather than the death drive understood only as a return to non-being.
what might this conjunction mean? freud always maintained that the two elements of psychic life that couldn't be worked through were the repudiation of femininity in both men and women, by which he meant the repudiation of passivity; and the death drive, the repetitive return again and again to our psychic dissolution or unbinding. in 'analysis terminable and interminable', written in the last years of his life, freud ( freud ( / ) named these the 'bedrocks' of psychic life, evoking an immoveable geological time. the permanent fixtures of psychic life that an analysis cannot shift are the hatred of passivity and the simultaneous impulse to return to an ultimate passive state, suturing the feminine to death in psychoanalysis. earlier, in beyond the pleasure principle, freud had offered an hypothesis in which, despite his conception of drives as exerting the pressure that presses for change, they are constrained by a conservatism, meaning they do not operate according to one singular temporality. this double temporality within the death drive is drawn out by adrian johnston ( ) , who has noted freud's ( freud's ( / ) developmental account of the drive in three essays on the theory of sexuality and later in 'instincts and their vicissitudes ' ( / ) , where the drive is articulated as maturing over time. johnston ( ) maintains that freud's drive is simultaneously timeless and temporal, both interminable (it repeats) and containing an internal tendency to deviate, to change its object and its aim (it develops or alters) (p. ). after all, something happens, according to freud, that shifts the human organism from one that dies easily to one that diverges ever more widely from the original course of life (that is, death) and therefore makes ever more complicated detours before reaching death. for johnston, alteration can be understood as an intra-temporal resistance to the time of iteration, a negation of time transpiring within time. this means that the death drive therefore includes rather than negates developmental time. this is not a developmental tendency separated off and located within the selfpreservative drives or a 'life' drive but a death drive that contains within it its own resistance to negation. i would want to reclaim this doubled death drive as 'maternal', the drive that includes within it the capacity for development, for what johnston calls 'alteration', which always mediates the axis of repetition or 'iteration' (p. ). the maternal death drive would describe the unfolding of another life in relation to one's own path towards death and marks the point that alteration and iteration cross one another. if we move from freud to klein, we see how this double temporality plays out between the maternal and child subject. i have described elsewhere how, in love, guilt and reparation, klein ( klein ( / tells us that anxiety about maternal care and dependency on the maternal body in very early life -the relationship, that is, with a feeding-object of some kind that could be loosely termed 'breast' -is a result of both the frustrations of that breast (its capacities to feed but also to withhold or disappear at whim) and what the infant does with the hatred and aggressive feelings stirred up by those experiences of frustration that rebound on it in the form of terrifying persecutory fantasies of being attacked by the breast itself (pp. - ; see also baraitser, b, p. ). 
klein's conceptual infant swings in and out of psychic states that are full of envious rage and makes phantasized aggressive raids on the maternal body in an attempt to manage the treacherous initial experiences of psychical and physical survival. klein ( / ) moves us closer to a more thing-like internal world, permeated less with representations and more with dynamic aggressive phantasies of biting, hacking at and tearing the mother and her breasts into bits, and attempts to destroy her body and everything it might be phantasized to contain (p. ). in klein's thinking, libido gives way to aggression, so that the defences themselves are violent in their redoubling on the infant in the form of persecutory anxiety. one's own greed and aggressiveness themselves become threatening, along with the maternal object that evokes them, and have to be split off from conscious thought. coupled with this are feelings of temporary relief from these painful states of mind (p. ), and these 'good' experiences form the basis for what we could think of as love. it is only as the infant moves towards a tolerance of knowing that good and bad 'things' and experiences are bound up in the same person (that is, both (m)other and self) that guilt arises as an awareness that we have tried to destroy what we also love. whilst this can overwhelm the infant with depressive anxiety that also needs to be warded off, there is a chance that this guilt can be borne and a temporary state of ambivalence can be achieved that includes the desire to make good the damage done. 'unfurling', then, arises out of the capacity to tolerate the proximity of love and hate towards the mother, but the mother also needs to tolerate the time this takes: to be prepared to go back 'again and again' to the site of mattering without becoming too overwhelmed or rejecting. it is here that futurity emerges, not as that which is carried forward by the child but as this element within the death drive that i am naming as maternal, which is a capacity to tolerate repetition within the present. to return to a lacanian formulation, chenyang wang ( ), in his work on differentiating real, imaginary and symbolic time in lacan, shows how lacan's death drive is not so much the reinsertion of the bodily or biological into the human subject but the traumatic intrusion of the symbolic into the organism at the expense of the imaginary, which evokes the real body. wang describes how what he calls the 'real future' (p. ) does not involve the human subject. where the ego may continue to imagine a future of fulfilled wishes, hopes and expectations, in which the present is characterized as a mode of 'waiting' until the future unfolds, the death drive in fact interrupts the fantasy of the future as something unreachable or unattainable and instead returns the future to the subject as something that has already structured it. for wang, real time opens the subject to the real present, which is neither instantaneous nor immediate but the freedom of returning to the same place in one's own way. he sees this as the offer of the possibility of freedom that transcends the isolated, egoic individual, otherwise trapped in its established temporal order (p. ). we could say, then, that the death drive includes rather than negates developmental time and holds out the possibility of a time that breaks free of the ego's imaginary sense of past, present and future.
developmental time, from this perspective, is precisely a suspension of the flow of time, a capacity to wait for the other to unfold. maternity, in its failure to be indifferent to the specificity of its labour, implies a return, again and again, to a scene that matters, a kind of repetition that is not quite captured by the death drive as excessive access to jouissance, nor by the death drive as a deviation towards a unique form of death, but that might after all have something to do with generativity, indeed with freedom, not of the self, but of the other. the return to a scene that matters is not a kind of flowing time (anyone who has spent time with small children will know this), nor the stultifying time of indifferent labour, but living in a suspended or crystalline time, which is the time it takes for mattering to take place. finally, we can link the maternal death drive to elizabeth freeman's ( ) concept of 'chronothanatopolitics' (p. ), which extends mattering beyond the mother-child relation to the politics of mattering in the contemporary moment. in her discussion of 'playing dead' in 19th-century african-american literature, freeman notes that many african-american stories involve 'fictive rebirths' (p. ). these are stagings of death and rebirth, not just once but multiple times, so that in these stories slaves and their descendants are constantly moving towards and away from death. feigning death, she argues, does not solve the problem of having not been 'born' as human - a position well established within afropessimist thought - but allows an engagement, through repetitive staged dying, with what jared sexton ( ) has called 'the social life of social death' (quoted in freeman, , p. ). freeman therefore builds on freud's death drive to develop a concept of 'chronothanatopolitics' in which life is not simply the opposite of death but the opposite of the 'presence' of death (p. ), a temporary 'disappearing' of death within life, the counterpart to the maternal death drive as life in death. staging one's death again and again, she states, is a way of managing the life/death binary, rather than simply a commitment to life or an acceptance of unchanging black deathliness. where freud's death drive does refuse any simple opposition between life and death, freeman notes, it nevertheless proposes a universal and purely psychic drive. she calls instead for recognition of a socio-political death drive enacted by white supremacy: chronothanatopolitics is the 'production of deathliness and nonbeing by historical forces external to the subjectivity it creates for nonblack people, and forecloses for people of african descent' (p. ). in the 21st century, we see 'playing dead' resurfacing in the 'die-ins' revived by the protest movement black lives matter. time becomes central, creating what freeman terms 'temporal conjoinments' with death (p. ) through counting 'i can't breathe' eleven times, as eric garner did. we have seen this repeated in 2020, when protesters hold a silence or take the knee for 8 minutes and 46 seconds, the time that george floyd had his neck knelt on by the police officer who killed him on 25 may. 'mattering', in the sense of black life coming to matter, freeman notes, captures the double meaning of coming to importance and becoming inert substance or matter, giving the phrase an ambivalent valence. mattering refuses the afropessimist insight that black life is structurally foreclosed and instead implies a more open stance towards non-being.
by miming death rather than life, black lives matter activists 'commit to an (a)social life within death even as they fight for an end to the annihilation of blackness' (p. ). here, life in death is the 'social' work of activism that counts the time that is left within black life even as it is extinguished, just as it is the social work of mothering that waits for life to unfurl towards its death without knowing when or how this will take place. miming death, again and again, is analogous to returning to the scene of mattering again and again, the hiatus within the path towards death that i have described as the maternal death drive. however, freeman's work provides the corrective to an easy universalizing of the drive, pointing us towards the way that black lives matter politicizes repetition in the name of life in death.
recently i've seen many rumours circulating about me and enormous amounts of hate. (greta thunberg)
in the child to come: life after the human catastrophe, rebekah sheldon ( ) charts a recent shift in the use of the child to suture the image of the future. the child, metonymic with the fragility of the planetary system and therefore in need of protection, has become 'the child as resource' (p. ). as resource, the child is used to carry both expectations and anxieties about the future. unlike earlier iterations, the child as resource is premised on a future that cannot be taken for granted. much of the affect around ecological disaster - anxiety, fear, terror, hopelessness, despair, guilt, determination, protectiveness - comes not so much from an awareness of the current effects of global climate change as they play out in the present as from the projected harm to the future that it portends. and the future, sheldon reminds us, is the provenance of the child. sheldon describes the history of this relationship between child and future as emanating from the 19th century, at the same point as modern theories of 'life' begin to proliferate in darwin and of course in freud. 'the link forged between the child and the species', she writes, 'helped to shape eugenic historiography, focalized reproduction as a matter of concern for racial nationalism, and made the child a mode of time-keeping' (p. ). in the face of anxious concerns about the deep biological past of the human species, the child held open a future through a coordination of the trio 'life, reproduction and species' with that of 'race, history and nation'. freud's child, for instance, caught both in the relentless unfolding of developmental time and the timelessness of unconscious life, is also the site of the regulation of 'life' itself. whilst these two axes of temporality (development and timelessness), as we saw above, cross one another, the figure of the child is nevertheless a 'retronaut, a bit of the future lodged in the present' (p. ). yet, at the same time, sheldon's child is already melancholic. it knows its childness can't be preserved; it will be lost, just as the future is felt also to be something constantly slipping away. sheldon suggests that, as a melancholic figure, the child as resource has a very specific task right now: to cover over the complex systems at work in biological materiality. as non-human animacy becomes more visible in conditions of planetary crisis, with it comes the terrifying potential (at least for the human world) of nature to slip its bonds. the child stands in for life itself at a time of vibrant and virulent reassertion of materialisms in all their forms.
the child's new task, according to sheldon, becomes one of binding nonhuman vibrancy back into the human, into something safer, and into the frame of human reproduction. this perhaps helps us modulate how we might respond to the figure of greta thunberg, the climate activist who describes herself as both 'autistic' and living with asperger's, and to her work as a 'cry for help' (thunberg). during 2018, when she was 15 years old, thunberg started to skip school to sit outside the swedish parliament with a sign reading 'skolstrejk för klimatet' [school strike for climate]. as a result of the school climate change movement that grew around thunberg's 'fridays for future' actions, there has been an intensive, rapid sanctification of the plain-speaking, white, plaited-haired child now simply known as 'greta'. although she herself acknowledges that she is not unique and is part of a network of youth movements in the global south who bear the brunt in the present for the effects of climate disaster largely produced by the global north, she has nevertheless become an enormously influential figure through whom climate discussions now pass. some describe her influence as simply the 'greta effect' (watts). there is a specific and careful simplicity to the way thunberg talks. in a speech entitled 'almost everything is black and white', she states, 'i have asperger's syndrome, and to me, almost everything is black or white' (thunberg). utilizing what others may see as a disability, a difficulty in seeing shades of grey, she speaks against the need for more complexity, more reflection, more science; in short, a more 'grown up' approach to climate chaos: 'we already have all the facts and solutions. all we have to do is to wake up and change [...] everything needs to change. and it has to start today'. it is this rhetorical insistence that there is no more time, and that the future of her generation has been stolen by the inaction of the generation that has come before, that positions her as not so much future-orientated but backed up against a closing future, looking back towards those who came before her as they continue to gaze ahead towards what they imagine is her future. as she states, 'we children are doing this to wake the adults up. we children are doing this for you to put your differences aside and start acting as you would in a crisis. we children are doing this because we want our hopes and dreams back'. in many ways, we could see thunberg as performing a call, in the name of a human reproductive future, for the binding of nonhuman vibrancy back into the human, into something safe and stable, the child's new task that sheldon describes. we could also make a critical reading of the ways thunberg, as a contemporary incarnation of maisie in henry james's what maisie knew (1897), where the child-protagonist is sacrificed to save a negligent and damaged society, re-mobilizes a discourse that re-stabilizes the differences between the generations in the name of the reproduction of the white heteronormative social bond. however, i want to read thunberg's 'black and white' thinking as metonymic with my patient's blank and blue: the oscillation between the absorbing blue of the analytic session and the suspended time of nothing happening between the sessions; the time of no-analyst and the agonies of waiting. thunberg states: 'there are no grey areas when it comes to survival. either we go on as a civilization or we don't. we have to change'.
in many ways, she refuses 'development' in the sense of klein's depressive position functioning, where blue and blank come to be understood as having a relation to one another, and insists instead on their separation, on what klein would call 'paranoid-schizoid' thinking, in which blue and blank are radically split apart, as a viable place to speak from. indeed, she goes on insisting she is a child, and that development is precisely what has got us into so much trouble. she warns us that, from the perspective of blank time (the time of nothing happening), blue time is absorbing for sure, but it is short, cannot last, and time itself needs urgently to come to matter if we are to find a way out of the current predicament. if we want to repair a relationship with monumental time, there is only action or no action, blue or blank, as we have now run out of time. despite the obvious occlusion of the many brown and black children who have protested, spoken out, organized school strikes and presented to the un over the years and gained no coverage, what is striking is that the white child claims that it is her unusual perspective, in which black and white remain separate, that is our only way out. in describing what she calls 'grey time', laura salisbury (in press) reminds us that grey is not, strictly speaking, a colour at all; rather, it is a shade. as such, it is achromatic, composed of black and white in various shades of intensity, rather than hues. moving from colour to time, salisbury claims that grey time can be thought of as similarly a time that contains intensities of affect, naming grey time as 'anachromistic', a form of intensive temporality that belongs to and traverses the perceiving subject and the aesthetic object:

to speak of grey time as anachromistic is to evoke an aesthetic experience that is against colour or hue, but, with its echo of anachronism, also produces a slub in the fabric of time as it is usually thought. the double gesture of the term anachromism is the attempt to speak to time's intensity rather than, as is more usual, concentrating on its flow or movement, while trying to capture an atmosphere where there is a weaving or binding in of blank, uncertain, colourless 'colour', and affect into what is felt of time. (emphasis in original)

grey time, then, is an intensity of time that moves us beyond the impasse of action and no action, or blue and blank, by acting as a slub or thickening in the oscillation between the two. this thickening, if we follow salisbury, both reveals time's stuck oscillation between black and white and at the same point acts to bind greyness into what is felt of time. grey inhabits black and white without resolving the oscillation, both intensifying the sense of time's stuckness and drawing attention to the affect of greyness, of uncertainty. whilst the time for grey thinking, as thunberg states, may have passed, perhaps salisbury's attention to grey time is important. as the existential dangers facing humanity deepen (by mbembe's description, the destruction of the biosphere, the criminalization of resistance and the rise of determinisms, whether genetic, neuronal, biological or environmental), so perhaps greta thunberg's urgency cannot be heard until we bind the blank, uncertain, colourless affect of the grey 'now' into what is felt of time. mbembe writes of the covid-19 virus: of all these dangers, the greatest is that all forms of life will be rendered impossible.
[…] at this juncture, this sudden arrest arrives, an interruption not of history but of something that still eludes our grasp. since it was imposed upon us, this cessation derives not from our will. in many respects, it is simultaneously unforeseen and unpredictable. yet what we need is a voluntary cessation, a conscious and fully consensual interruption. without which there will be no tomorrow. without which nothing will exist but an endless series of unforeseen events. (emphasis in original)

this is, indeed, grey time: a voluntary cessation, a conscious and fully consensual interruption to business as usual as a response to the profound uncertainty that is the reality of the interdependencies of all forms of life. although i know that there is no way for 'couch time' to have an effect without a 'session-time' analyst and a 'between-session-time' analyst eventually coming together in the time that is an analysis, it may be that we have simply run out of time. then a new psychoanalytic temporality may be needed, one that understands the simultaneous need for and suspension of development in the name of really knowing about the death drive; one in which action would no longer be simply understood as acting out, but in which the mutative interpretation, the one that brings about change, can be grey, ill-timed, coming too soon and too late, before it is too late.

maternal encounters: the ethics of interruption
postmaternal, postwork and the maternal death drive. special issue: the postmaternal
after the future
time and free will: an essay on the immediate data of consciousness
matter and memory
the schoolchildren strikes
when the kids are united
introduction: time studies today
the force of non-violence
the climate of history: four theses
no future: queer theory and the death drive
past/future
time binds: queer temporalities, queer histories. durham and london
beside you in time: sense methods and queer sociabilities in the american nineteenth century
three essays on the theory of sexuality
instincts and their vicissitudes
beyond the pleasure principle
analysis terminable and interminable
a queer place and time
lose your mother: a journey along the atlantic slave route
to our friends. new york: semiotext(e). invisible committee
now. new york: semiotext(e). invisible committee
what maisie knew
the womb of western theory: trauma, time theft and the captive maternal
time driven: metapsychology and the splitting of the drive
notes on some schizoid mechanisms
love, guilt and reparation
the shock doctrine: the rise of disaster capitalism
women's time
chronophobia: on time in the art of the 1960s
the future cannot begin: temporal structures in modern society
the universal right to breathe. translated by c. shread. critical inquiry
white time: the chronic injustice of ideal theory
the chronopolitics of racial time
introduction to women's time
the colonisation of time: ritual, routine and resistance in the british empire
you are my death: the shattered temporalities of zombie time
matters of care: speculative ethics in more than human worlds
grey time: anachromism and waiting for beckett
what is maternal labour?
the social life of social death: on afro-pessimism and black optimism
the child to come: life after the human catastrophe
no one is too small to make a difference. london: penguin random house
future shock
subjectivity in-between times: exploring the notion of time in lacan's work
the greta thunberg effect: at last, mps focus on climate change. the guardian

the research in this paper was funded by a wellcome trust collaborative award, 'waiting times', grant number [ /a/ /z] (see waitingtimes.exeter.ac.uk). data sharing is not applicable as no datasets were generated and/or analysed for this study. in doing so, she was perhaps unwittingly building on a long history of school strikes in the uk, in which schoolchildren mobilised against caning and later came out on strike as part of a number of localised general strikes. see bloom.

key: cord- -pnjt aa authors: ordun, catherine; purushotham, sanjay; raff, edward title: exploratory analysis of covid-19 tweets using topic modeling, umap, and digraphs date: - - journal: nan doi: nan sha: doc_id: cord- cord_uid: pnjt aa

this paper illustrates five different techniques to assess the distinctiveness of topics, key terms and features, speed of information dissemination, and network behaviors for covid-19 tweets. first, we use pattern matching and, second, topic modeling through latent dirichlet allocation (lda) to generate twenty different topics that discuss case spread, healthcare workers, and personal protective equipment (ppe). one topic specific to u.s. cases would start to uptick immediately after live white house coronavirus task force briefings, implying that many twitter users are paying attention to government announcements. we contribute machine learning methods not previously reported in the covid-19 twitter literature. this includes our third method, uniform manifold approximation and projection (umap), which identifies unique clustering behavior of distinct topics to improve our understanding of important themes in the corpus and to help assess the quality of generated topics. fourth, we calculated retweeting times to understand how fast information about covid-19 propagates on twitter. our analysis indicates that the median retweeting time of covid-19 messages for a sample corpus in march 2020 was . hours, approximately minutes faster than repostings from chinese social media about h7n9 in march 2013. lastly, we sought to understand retweet cascades by visualizing the connections of users over time from fast to slow retweeting. as the time to retweet increases, the density of connections also increases, and in our sample we found distinct users dominating the attention of covid-19 retweeters. one of the simplest highlights of this analysis is that early-stage descriptive methods like regular expressions can successfully identify high-level themes that were consistently verified as important through every subsequent analysis.

monitoring public conversations on twitter about healthcare and policy issues provides one barometer of american and global sentiment about covid-19. this is particularly valuable as the situation with covid-19 changes every day and is unpredictable during these unprecedented times.
twitter has been used as an early warning notifier, emergency communication channel, public perception monitor, and proxy public health surveillance data source in a variety of disaster and disease outbreaks, from hurricanes [ ], terrorist bombings [ ], tsunamis [ ], earthquakes [ ], seasonal influenza [ ], and swine flu [ ], to ebola [ ]. in this paper, we conduct an exploratory analysis of topics and network dynamics of covid-19 tweets. since january 2020, there have been a growing number of papers that analyze twitter activity during the covid-19 pandemic in the united states. we provide a sample of papers published since january 2020 in table i. chen et al. analyzed the frequency of different keywords such as "coronavirus", "corona", "cdc", "wuhan", "sinophobia", and "covid-19" across million tweets from january to march 2020 [ ]. thelwall also published an analysis of topics for english-language tweets from march 2020 [ ]. singh et al. [ ] analyzed the distribution of languages and the propagation of myths, sharma et al. [ ] implemented sentiment modeling to understand perception of public policy, and cinelli et al. [ ] compared twitter against other social media platforms to model information spread. our contributions are machine learning methods not previously applied to covid-19 twitter data, mainly uniform manifold approximation and projection (umap) to visualize lda-generated topics, and directed graph visualizations of covid-19 retweet cascades. topics generated by lda can be difficult to interpret, and while coherence values [ ] exist that are intended to score the interpretability of topics, they remain difficult to interpret and subjective. as a result, we apply umap, a dimensionality reduction algorithm and visualization tool that "clusters" documents by topic. vectorizing the tweets using term frequency-inverse document frequency (tf-idf) and plotting a umap visualization with the assigned topics from lda allowed us to identify strongly localized and distinct topics. we then visualized "retweet cascades", which describe how a social media network propagates information [ ], through the use of graph models, to understand how dense networks become over time and which users dominate the covid-19 conversations. in our retweeting time analysis, we found that the median time for covid-19 messages to be retweeted is approximately minutes faster than h7n9 messages during a march 2013 outbreak in china, possibly indicating the global nature, volume, and intensity of the covid-19 pandemic. our keyword analysis and topic modeling were also rigorously explored, where we found that specific topics were triggered to uptick by live white house briefings, implying that covid-19 twitter users are highly attuned to government broadcasts. [table i, flattened in extraction, compares the sampled studies and their collection periods: chen et al. (jan.-apr. 2020), medford et al. (jan. 2020), singh et al. (jan.-mar. 2020), lopez et al. (jan.-mar. 2020), cinelli et al. (jan.-feb. 2020), kouzy et al. (feb. 2020), alshaabi et al. (mar. 2020), sharma et al. (mar. 2020), chen et al. (mar. 2020), schild et al. (nov. 2019-mar. 2020), yang et al. (mar. 2020), ours (mar.-apr. 2020), and yasin-kabir et al. (mar.-apr. 2020), along with the techniques each applied.] we think this is important because it highlights how other researchers have identified that government agencies play a critical role in sharing information via twitter to improve situational awareness and disaster response [ ].
our lda models confirm that topics detected by thelwall et al. [ ] and sharma et al. [ ], who analyzed twitter during a similar period of time, were also identified in our dataset, which emphasized healthcare providers, personal protective equipment such as masks and ventilators, and cases of death. this paper studies five research questions:
1) what high-level trends can be inferred from covid-19 tweets?
2) are there any events that lead to spikes in covid-19 twitter activity?
3) which topics are distinct from each other?
4) how does the speed of retweeting in covid-19 compare to other emergencies, and especially similar infectious disease outbreaks?
5) how do covid-19 networks behave as information spreads?
the paper begins with data collection, followed by the five stages of our analysis: keyword trend analysis, topic modeling, umap, time-to-retweet analysis, and network analysis. our methods and results are explained in each section. the paper concludes with limitations of our analysis. the appendix provides additional graphs as supporting evidence.

ii. data collection
similar to the researchers in table i, we collected twitter data by leveraging the free streaming api. from march to april 2020, we collected , , ( gb) tweets. note that in this paper we refer to the twitter data interchangeably as both "dataset" and "corpora" and refer to the posts as "tweets". our dataset is a collection of tweets from the different time periods shown in table v. using the twitter api through tweepy, a python twitter mining and authentication api, we first queried the twitter track on twelve query terms to capture a healthcare-focused dataset: 'icu beds', 'ppe', 'masks', 'long hours', 'deaths', 'hospitalized', 'cases', 'ventilators', 'respiratory', 'hospitals', '#covid', and '#coronavirus'. for the keyword analysis, topic modeling, and umap tasks, we analyzed non-retweets, which brought the corpus down to , , tweets. in the time-to-retweet and network analysis, we included retweets but selected a sample out of the larger . million corpus of , tweets. our preprocessing steps are described in the data analysis section that follows. prior to applying keyword analysis, we first had to preprocess the corpus on the "text" field. first, we removed retweets using regular expressions, in order to focus the text on original tweets and authorship, as opposed to retweets that can inflate the number of messages in the corpus. we use the no-retweet corpora for both the keyword trend analysis and the topic modeling and umap analyses. further, we formatted datetimes to utc, removed digits, removed short words, extended the nltk stopwords list to also exclude "coronavirus", "covid19", "2020", and "covid", removed "https:" hyperlinks, removed "@" signs for usernames, removed non-latin characters such as arabic or chinese characters, and implemented lower-casing, stemming, and tokenization. finally, using regular expressions, we extracted tweets matching the keyword groups listed in table vi; the frequencies of tweets per minute are reported in table ii. the greatest rate of tweets occurred for the group containing the term "mask" (mean . ) in table ii, followed by "hospital" (mean . ) and "vent" (mean . ). groups with less than . mean tweets per minute concerned testing positive, being in serious condition, exposure, cough, and fever. this may indicate that people are discussing the issues around covid-19 more frequently than symptoms and health conditions in this dataset.
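a minimal sketch of the preprocessing pipeline described above, assuming nltk and its stopwords resource are available; the exact short-word length threshold and the full list of stopword extensions are elided in the text, so the values used here are assumptions:

```python
import re
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer

# extended stopword list; the covid-specific additions follow the text above
stop_words = set(stopwords.words("english")) | {"coronavirus", "covid19", "covid", "2020"}
stemmer = PorterStemmer()

def preprocess(text):
    text = re.sub(r"^rt @\w+:?\s*", "", text, flags=re.I)  # drop retweets by prefix
    text = re.sub(r"https?://\S+", "", text)               # remove hyperlinks
    text = text.lower()
    text = re.sub(r"[@#]", "", text)                       # strip @ and # signs
    text = re.sub(r"[^a-z\s]", " ", text)                  # digits / non-latin chars out
    tokens = [t for t in text.split() if len(t) > 3 and t not in stop_words]  # assumed length cutoff
    return [stemmer.stem(t) for t in tokens]

docs = [preprocess(t) for t in tweet_texts]  # tweet_texts: raw "text" fields
```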
we will later find out that several themes consistent with these keyword findings are mentioned in topic modeling, including personal protective equipment (ppe) such as ventilators and masks, and healthcare workers such as nurses and doctors.

lda models are mixture models, meaning that documents can belong to multiple topics and membership is fractional [ ]. further, each topic is a mixture of words, where words can be shared among topics. this allows for a "fuzzy" form of unsupervised clustering where a single document can belong to multiple topics, each with an associated probability. lda is a bag-of-words model where each vector is a count of terms. lda requires the number of topics to be specified. similar to methods described by syed et al. [ ], we ran different lda experiments varying the number of topics and selected the model with the highest coherence value score. we selected the lda model that generated twenty topics, with a medium coherence value score of . . röder et al. [ ] developed the coherence value as a metric that aggregates the agreement of word pairs and word subsets, with their associated word probabilities, into a single score. in general, topics are interpreted as being coherent if all or most of the terms are related. our final model generated twenty topics using the default parameters; the figure includes the terms generated and each topic's coherence score measuring interpretability. similar to the high-level trends inferred from extracting keywords, themes about ppe and healthcare workers dominate the nature of topics. the terms generated also indicate emerging words in public conversation, including "hydroxychloroquine" and "asymptomatic". our results also show four topics that are in non-english languages. in our preprocessing, we removed non-latin characters in order to filter out a high volume of arabic and chinese characters. in twitter there exists a tweet object metadata field of "lang" for language, which can be used to filter tweets by a specific language like english ("eng"). however, we decided not to filter against the "lang" element because, upon observation, approximately . % of the dataset consisted of an "undefined" language tag, meaning that no language was indicated. although it appears to be a small fraction, removing even the "undefined" tweets would have removed several thousand tweets. some of these tweets that are tagged as "undefined" are in english but contain hashtags, emojis, and arabic characters. as a result, we did not filter for english language, leading our topics to be a mix of english, spanish, italian, french, and portuguese. although this introduced challenges in interpretation, we feel it demonstrates the global nature of worldwide conversations about covid-19 occurring on twitter. this is consistent with the variety of languages singh et al. [ ] reported in covid-19 tweets upon analyzing over million tweets. as a result, we labeled the four topics by the language of the terms in the respective topics: "spanish", "portuguese", "italian", and "french". we used google translate to infer the language of the terms. when examining the distribution of the topics across the corpora, the topics "potus", "case.death.new", "mask.ppe.ventil", and "like.look.work" were among the top five in the entire corpora. for each plot, we labeled each topic with the first three terms of each topic for interpretability.
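the model-selection loop can be sketched with gensim as follows; the sweep range, pass count, and worker count are assumptions, since the paper's exact values are elided:

```python
from gensim.corpora import Dictionary
from gensim.models import CoherenceModel, LdaMulticore

dictionary = Dictionary(docs)                      # docs: token lists from preprocessing
corpus = [dictionary.doc2bow(d) for d in docs]

results = {}
for k in range(5, 45, 5):                          # assumed sweep over topic counts
    lda = LdaMulticore(corpus, num_topics=k, id2word=dictionary,
                       workers=4, passes=1, random_state=0)
    cv = CoherenceModel(model=lda, texts=docs,
                        dictionary=dictionary, coherence="c_v").get_coherence()
    results[k] = (cv, lda)

best_k = max(results, key=lambda k: results[k][0])  # highest c_v score wins
best_model = results[best_k][1]
```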
in our trend analysis, we summed the number of tweets per minute and then applied a moving weighted average: one window size for the march topics and a longer one for the march-to-april topics. we provided two different plots in order to visualize smaller time frames. both figures show similar trends on a time-series basis per minute across the entire corpora of , , tweets. these plots are in a style of "broken axes" (https://github.com/bendichter/brokenaxes) to indicate that the corpora are not continuous periods of time, but discrete time frames, which we selected to plot on one axis for convenience and legibility. we direct the reader to table v for reference on the start and end datetimes, which are in utc format, so please adjust accordingly for time zone. the x-axis denotes the number of minutes, where the entire corpora spans minutes of tweets. the first figure shows that for the march corpora, topic "potus" and topic "mask.ppe.ventil" (denoted in hash-marked lines) trended greatest. for the later time periods of march and april, topic "potus" and topic "mask.ppe.ventil" (also in hash-marked lines) continued to trend high. it is also interesting that the "potus" topic was never replaced as the top trending topic across a span of days, potentially because it may have been a proxy for active government listening. the time series would temporally decrease in frequency during overnight hours. we applied change point detection to the time series of tweets per minute for the "potus" topic in the march and april datasets, to identify whether the live press briefings coincided with inflections in time. using the ruptures python package [ ], which contains a variety of change point detection methods, we used binary segmentation [ ], a standard method for change point detection. given a sequence of data $y_{1:n} = (y_1, \dots, y_n)$, the model will have $m$ changepoints with positions $\tau_{1:m} = (\tau_1, \dots, \tau_m)$, where each changepoint position is an integer between $1$ and $n-1$. the $m$ changepoints split the time series into $m+1$ segments, with the $i$th segment containing $y_{(\tau_{i-1}+1):\tau_i}$. changepoints are identified by minimizing a cost function $\mathcal{C}$ over the segments plus a penalty $\beta f(m)$ to prevent overfitting:

$$\sum_{i=1}^{m+1} \mathcal{C}\big(y_{(\tau_{i-1}+1):\tau_i}\big) + \beta f(m),$$

where twice the negative log-likelihood is a commonly used cost function. binary segmentation detects multiple changepoints across the time series by repeatedly testing on different subsets of the sequence. it checks whether a $\tau$ exists that satisfies

$$\mathcal{C}(y_{1:\tau}) + \mathcal{C}(y_{(\tau+1):n}) + \beta < \mathcal{C}(y_{1:n}).$$

if not, then no changepoint is detected and the method stops. but if a changepoint is detected, the data are split into two segments consisting of the time series before and after the changepoint, and the test is repeated on each segment. we can clearly see that the timing of the white house briefing indicates a changepoint in time, giving us the intuition that this briefing influenced an uptick in the number of tweets. we provide additional examples in the appendix. our topic findings are consistent with the published analyses on covid-19 and twitter, such as [ ], who found major themes of healthcare and illness and international dialogue, as we noticed in our four non-english topics. they are also similar to thelwall et al. [ ], who manually reviewed tweets from a corpus of million tweets occurring earlier than and overlapping our dataset.
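a sketch of the changepoint test with ruptures; the cost model ("l2") and the single breakpoint are assumptions consistent with, but not confirmed by, the elided parameters reported in the appendix:

```python
import numpy as np
import ruptures as rpt

# tweets_per_minute: counts for the "potus" topic around one briefing (assumed input)
signal = np.asarray(tweets_per_minute, dtype=float).reshape(-1, 1)

algo = rpt.Binseg(model="l2").fit(signal)  # binary segmentation, least-squares cost
bkps = algo.predict(n_bkps=1)              # indices of the detected changepoint(s)
```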
similar topics from their findings to ours include "lockdown life", "politics", "safety messages", "people with covid-19", "support for key workers", "work", and "covid-19 facts/news". further, our dataset of covid-19 tweets from march to april 2020 occurred during a month of exponential case growth; by the end of our data collection period, the number of cases had increased to , cases [ ]. the key topics we identified using our multiple methods were representative of the public conversations being had in news outlets during march and april.

term frequency-inverse document frequency (tf-idf) [ ] is a weight that signifies how valuable a term is within a document in a corpus and can be calculated at the n-gram level. tf-idf has been widely applied for feature extraction on tweets used for text classification [ ], [ ], analyzing sentiment [ ], and text matching in political rumor detection [ ]. with tf-idf, unique words carry greater information and value than common, high-frequency words across the corpus. tf-idf can be calculated as

$$w_{i,j} = tf_{i,j} \times \log\!\left(\frac{N}{df_i}\right),$$

where $i$ is the term, $j$ is the document, and $N$ is the total number of documents in the corpus. tf-idf multiplies the term frequency $tf_{i,j}$ (the frequency of $i$ in $j$ divided by the count of all terms in $j$) by the log of the inverse document frequency $N/df_i$ (the total number of documents in the corpus divided by the number of documents containing term $i$). using the scikit-learn implementation of tfidfvectorizer and setting max_features, we transformed our corpus of , , tweets into a sparse $n \times k$ matrix. we chose to visualize how the topics grouped together using uniform manifold approximation and projection (umap) [ ]. umap is a dimension reduction algorithm that finds a low-dimensional representation of data with similar topological properties as the high-dimensional space. it measures the local distance of points across a neighborhood graph of the high-dimensional data, capturing what is called a fuzzy topological representation of the data. optimization is then used to find the closest fuzzy topological structure by first approximating nearest neighbors using the nearest-neighbor-descent algorithm and then minimizing local distances of the approximate topology using stochastic gradient descent [ ]. when compared to t-distributed stochastic neighbor embedding (t-sne), umap has been observed to be faster [ ], with clearer separation of groups. due to compute limitations in fitting the entire high-dimensional matrix of nearly . m records, we randomly sampled one million records. we created an embedding of the vectors along two components and fit the umap model with the hellinger metric, which compares distances between probability distributions:

$$H(P, Q) = \frac{1}{\sqrt{2}} \sqrt{\sum_{i} \left(\sqrt{p_i} - \sqrt{q_i}\right)^2}.$$

we visualized the word vectors with their respective labels, which were the assigned topics generated from the lda model, using the default parameters of n_neighbors = 15 and min_dist = 0.1. the resulting figure presents the visualization of the tf-idf word vectors for each of the one million tweets with their labeled topics. umap is supposed to preserve the local and global structure of data, unlike t-sne, which separates groups but does not preserve global structure.
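the vectorization and embedding steps can be sketched as follows; the max_features value is elided in the text and assumed here, and the n_neighbors/min_dist values are the library defaults the authors report using:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
import umap

vectorizer = TfidfVectorizer(max_features=5000)   # assumed feature cap
X = vectorizer.fit_transform(sampled_tweets)      # sparse tf-idf matrix, one row per tweet

reducer = umap.UMAP(n_components=2, metric="hellinger",
                    n_neighbors=15, min_dist=0.1, random_state=0)
embedding = reducer.fit_transform(X)              # 2-d coordinates, plotted colored by lda topic
```

the hellinger metric operates on the non-negative tf-idf rows as if they were unnormalized distributions, which is why it pairs naturally with sparse text vectors here.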
as a result, umap visualizations intend to allow the reader to interpret distances between groups as meaningful. in the figure, each topic is color-coded. the umap plots appear to provide further evidence of the quality and number of topics generated. our observation is that many of these topic "clusters" appear to have a single dominant color, indicating distinct grouping. there is strong local clustering for topics that were also prominent in the keyword analysis and topic modeling time series plots. a very distinct and separated mass of purple tweets represents the "n/a" topic, which is an undefined topic. this means that the lda model output equal scores across all topics for any single tweet; as a result, we could not assign a topic to these tweets because they all had uniform scores. but this visualization informs us that the contents of these tweets were uniquely distinct from the others. examples of tweets in this "n/a" category include "see, #democrats are always guilty of whatever", "why are people still getting in cruise ships?!?", "thank you mike you are always helping others and sponsoring anchors media shows.", "we cannot let this woman's brave and courageous actions go to waste! #chinaliedpeopledied #chinaneedstopay", and "i wish people in this country would just stay the hell home instead of going to the beach". other observations reveal that the mask-related topic in purple, and potentially a combination of two topics in red, are distinct from the mass of noisy topics in the center of the plot. we can also see distinct separation of the aqua-colored "potus" topic and potentially two topics in yellow. we refer the reader to other examples where umap has been leveraged for twitter analysis, including darwish et al. [ ] for identifying clusters of twitter users with controversial topic similarity, vargas [ ] for event detection, political polarization by darwish et al. [ ], and estimating the political leaning of users by [ ].

retweeting is a special activity reserved for twitter, where any user can "retweet" messages, which allows them to disseminate their messages rapidly to their followers. further, a highly retweeted tweet might signal that an issue has attracted attention in the highly competitive twitter environment, and may give insight about issues that resonate with the public [ ]. whereas in the first three analyses we used no retweets, in the time-series and network modeling that follows, we exclusively use retweets. we began by measuring time-to-retweet. wang et al. [ ] call this "response time" and used it to measure response efficiency and the speed of information dissemination during hurricane sandy. wang analyzed , tweets and found that % of retweets occur within h [ ]. we researched how fast other users retweet in emergency situations, such as what spiro [ ] reported for natural disasters, and what earle [ ] reported as seconds for retweeting about an earthquake. we extracted metadata from our corpora for the tweet, user, and entities objects. for reference, we direct the reader to the twitter developer guide, which provides a detailed overview of each object [ ]. due to compute limitations, we selected a sample of , tweets that included retweets from the march corpora. however, since we were only focused on retweets, out of this corpus of , tweets, we reduced it to the , ( %) that were only retweets.
the metadata we used for both our time-to-retweet and directed graph analyses in the next section included:
1) created_at (string): the utc time when the tweet was created.
2) text (string): the actual utf-8 text of the status update (see twitter-text for details on what characters are currently considered valid).
3) from the user object, the id_str (string): the string representation of the unique identifier for the user.
4) from the retweeted_status object (tweet): the created_at utc time when the retweeted message was created.
5) from the retweeted_status object (tweet): the id_str, which is the unique identifier for the retweeted user.
we used the corpus of retweets and analyzed the time between the tweet created_at and the retweeted created_at, i.e., time-to-retweet = tw_object − rt_object. here, the rt_object is the datetime in utc format at which the message that was retweeted was originally posted, and the tw_object is the datetime in utc format at which the current tweet was posted. as a result, the datetime for the rt_object is older than the datetime for the current tweet. this measures the time it took for the author of the current tweet to retweet the originating message. this is similar to kuang et al. [ ], who defined the response time of the retweet as the time difference between the time of the first retweet and that of the origin tweet. further, spiro et al. [ ] call these "waiting times". the median time-to-retweet for our corpus was . hours, meaning that half of the retweets occurred within this time (less than what wang reported as . hours), and the mean was . hours. one figure shows the histogram of the number of tweets by their time-to-retweet in seconds, and another shows it in hours. further, we found that, compared to the avian influenza (h7n9) outbreak in china described by zhang et al. [ ], covid-19 retweeters sent more messages earlier. zhang analyzed the log distribution of , h7n9-related posts during april 2013 and plotted the reposting time of messages on sina weibo, a chinese twitter-like platform and one of the largest microblogging sites in china. zhang found that h7n9 reposting occurred with a median time of minutes (i.e., . hours) and a mean of minutes (i.e., hours). compared to zhang's study, we found our median retweet time to be . hours, about minutes faster than the h7n9 reposting time of . hours.
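the time-to-retweet computation described above can be sketched with pandas, assuming the two created_at fields have been flattened into columns with the hypothetical names used below:

```python
import pandas as pd

df["tw_time"] = pd.to_datetime(df["created_at"], utc=True)            # current tweet
df["rt_time"] = pd.to_datetime(df["retweeted_created_at"], utc=True)  # original message
df["delay_sec"] = (df["tw_time"] - df["rt_time"]).dt.total_seconds()

median_hours = df["delay_sec"].median() / 3600  # the reported median time-to-retweet
mean_hours = df["delay_sec"].mean() / 3600
```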
when comparing the two distributions, it appears that covid-19 retweeting does not completely slow down until . hours later ( seconds), whereas for h7n9 it appears to slow down much earlier, by seconds. unfortunately, few studies appear to document retweeting times during infectious disease outbreaks, which made it hard to compare covid-19 retweeting behavior against similar situations. further, the h7n9 outbreak in china occurred seven years ago and may not be a comparable set of data for numerous reasons: chinese social media may not exhibit behaviors similar to american twitter, and this analysis does not take into account multiple factors that influence retweeting behavior, including the context, the user's position, and the time the tweet was posted [ ]. we also analyzed what rapid retweeters, those retweeting messages even faster than the median, in less than , seconds, were saying. we plotted the top tf-idf features by their scores for the text of the retweets. it is intuitive to see that urls are being retweeted quickly, given the presence of "https" in the body of the retweeted text. this is also consistent with studies by suh et al. [ ], who indicated that tweets with urls were a significant factor impacting retweetability. we found terms that were frequently mentioned during the early-stage keyword analysis and topic modeling mentioned again: "cases", "ventilators", "hospitals", "deaths", "masks", "test", "american", "cuomo", "york", "president", "china", and "news". when analyzing the descriptions of the users who were retweeted, we ran the tf-idf vectorizer on bigrams in order to elicit more interpretable terms. user accounts whose tweets were rapidly retweeted appeared to describe themselves as political, news-related, or some form of social media account, all of which are difficult to verify as real or fake.

vii. network modeling
we analyzed the network dynamics of nine different time periods within the march covid-19 dataset and visualized them based on their speed of retweeting. these types of graphs have been referred to as "retweet cascades", which describes how a social media network propagates information [ ]. similar methods have been applied for visualizing rumor propagation by jin et al. [ ]. we wanted to analyze how covid-19 retweeting behaves at different time points. we used published disaster retweeting times to serve as benchmarks for selecting time periods. as a result, the graphs are plotted by the retweeting time of known benchmarks: from the median time to retweet after an earthquake, which implies rapid notification, to the median time to retweet after a funnel cloud has been seen, all the way to a one-day, or 24-hour, time period. we did this to visualize a retweet cascade from fast to slow information propagation. we used the median retweeting times published by spiro et al. [ ] for the time it took users to retweet messages based on hazardous keywords like "funnel cloud", "aftershock", and "mudslide". we also used the h7n9 reposting time of . hours published by zhang et al. [ ]. we generated a directed graph for each of the nine time periods, where the network consisted of a source, which was the author of the tweet (user object, the id_str), and a target, which was the original retweeter, as shown in table iv. the goal was to analyze how connections change as the retweeting speed increases. the nine networks are visualized in one figure; see the sketch after this paragraph for how each window was built and drawn. graphs were plotted using networkx and drawn using the kamada kawai layout [ ], a force-directed algorithm. we modeled users for each graph, as we found that including more nodes became too difficult to interpret. the size of a node indicates its number of degrees, or users that it is connected to: it can mean that the node has retweeted others several times, or that the node itself has been retweeted by others several times. the density of each network increases over time. very rapid retweeters, in the time it takes to retweet after an earthquake, start off with a sparse network with a few nodes in the center being the focus of retweets. by the fourth time window, the retweeted users are much more clustered in the center and there are more connections and activity. the top retweeted user in our median time network was a news network that tweeted "the team took less than a week to take the ventilator from the drawing board to working prototype, so that it can". by hours out, we see a concentrated set of users being retweeted, and in the final window one account appears to dominate the space, being retweeted many times.
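one way to build and draw a single time-window cascade with networkx, assuming the sample has been sliced by retweet delay and the user-id columns are named as below:

```python
import networkx as nx
import matplotlib.pyplot as plt

G = nx.DiGraph()
for _, row in window.iterrows():                   # window: retweets within one delay bucket
    G.add_edge(row["user_id"], row["rt_user_id"])  # author -> original retweeted user

pos = nx.kamada_kawai_layout(G)                    # force-directed layout
sizes = [20 + 15 * G.degree(n) for n in G]         # node size scales with degree
nx.draw(G, pos, node_size=sizes, width=0.3, arrows=True)
print(G.number_of_nodes(), nx.density(G))          # the per-window stats reported in table iv
plt.show()
```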
this dominant account was retweeting the following message several times: "she was doing #chemotherapy couldn't leave the house because of the threat of #coronavirus so her line sisters...". in addition, the number of nodes generally decreased from the "earthquake" window to the one-week window, and the density also generally increased, as shown in table iv. these retweet cascade graphs provide only an exploratory analysis. network structures like these have been used to predict the virality of messages, for example memes over time as the message is diffused across networks [ ]. but analyzing them further could enable 1) an improved understanding of how covid-19 information diffusion differs from other outbreaks or global events, 2) an understanding of how information is transmitted differently from region to region across the world, and 3) identification of which users and messages are being concentrated on over time. this would support strategies to improve government communications, emergency messaging, dispelling medical rumors, and tailoring public health announcements. there are several limitations to this study. first, our dataset is discontinuous, and trends that span an interruption in time should be taken with caution. although there appears to be a trend between one discrete time and another, without the missing data it is impossible to confirm this as a trend. as a result, it would be valuable to apply these techniques on a larger and continuous corpus without any time breaks; we aim to repeat the methods in this study on a longer continuous stream of twitter data in the near future. second, the corpus we analyzed was already pre-filtered with thirteen "track" terms from the twitter streaming api that focused the dataset towards healthcare-related concerns. this may be the reason why the high-level keywords extracted in the first round of analysis were consistently mentioned throughout the different stages of modeling. however, after review of the similar papers indicated in table i, we found that despite having filtered the corpus on healthcare-related terms, topics still appear to be consistent with analyses where corpora were filtered on limited terms like "#coronavirus". third, the users and conversations on twitter are not a direct representation of the u.s. or global population. the pew research foundation found that only % of american adults use twitter [ ] and that this group is different from the majority of u.s. adults, because they are on average younger, more likely to identify as democrats, more highly educated, and possess higher incomes [ ]. the users were also not verified and should be considered a possible mixture of human and bot accounts. fourth, we reduced our corpus to remove retweets for the keyword and topic modeling analyses, since retweets can obscure the message by introducing virality and altering the perception of the information [ ]. as a result, this reduced the size of our corpus by nearly % from , , tweets to , , tweets; however, there appears to be variability in corpora sizes across the twitter analysis literature, both in table i and elsewhere. fifth, our compute limitations prohibited us from analyzing a larger corpus for the umap, time-series, and network modeling. for the lda models we leveraged the gensim ldamulticore model, which allowed us to leverage multiprocessing across workers, but for umap and the network modeling we were constrained to use a cpu. however, as stated above, visualizing more nodes for our graph models was uninterpretable.
applying our methods across the entire . million corpora for the umap and network models may yield more meaningful results. sixth, we were only able to iterate over lda models by changing the number of topics, whereas syed et al. [ ] iterated over many more models to select coherent ones. we believe that applying a manual grid search of the lda parameters, such as iterations, alpha, gamma threshold, chunksize, and number of passes, would lead to a more diverse representation of lda models and possibly more coherent topics. seventh, it was challenging to identify papers that analyzed twitter networks according to their speed of retweets for public health emergencies and disease outbreaks. zhang et al. [ ] point out that there are not enough studies of temporal measurement of public response to health emergencies. we were lucky to find papers by zhang et al. [ ] and spiro et al. [ ], who published on disaster waiting times. chew et al. [ ] and szomszor et al. [ ] have published about twitter analysis in h1n1 and the swine flu, respectively: chew analyzed the volume of h1n1 tweets and categorized different types of messages, such as humor and concern, while szomszor correlated tweets with uk national surveillance data. tang et al. [ ] generated a semantic network of tweets on measles during an outbreak to understand keywords mentioned about news updates, public health, vaccines, and politics. however, it was difficult to compare our findings against other disease outbreaks due to the lack of similar modeling and published retweet cascade times and network models.

we answered five research questions about covid-19 tweets during march to april 2020. first, we found high-level trends that could be inferred from keyword analysis. second, we found that live white house coronavirus briefings led to spikes in the "potus" topic. third, using umap, we found strong local "clustering" of topics representing ppe, healthcare workers, and government concerns; umap allowed for an improved understanding of the distinct topics generated by lda. fourth, we used retweets to calculate the speed of retweeting and found that the median retweeting time was . hours. fifth, using directed graphs, we plotted the networks of covid-19 retweeting communities from rapid to longer retweeting times; the density of each network increased over time as the number of nodes generally decreased. lastly, we recommend trying all techniques indicated in table i to gain an overall understanding of covid-19 twitter data. while applying multiple methods is an exploratory strategy, there is no technical guarantee that the same combination of five methods analyzed in this paper will yield insights on a different time period of data. as a result, researchers should attempt multiple techniques and draw on existing literature.

changepoint models were calculated using the ruptures python package. we also applied an exponentially weighted moving average using the ewm pandas function, with one span for the march dataset and another span for the april datasets. our parameters for binary segmentation included selecting the "l " cost model to fit the points for the "potus" topic, using n_bkps breakpoints.
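the smoothing step from the appendix as a pandas sketch; the span values are elided above, so the 60 used here is an assumption:

```python
import pandas as pd

# counts: pandas series of tweets per minute for one topic, indexed by minute
smoothed = counts.ewm(span=60).mean()  # exponentially weighted moving average
```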
references
crisis information distribution on twitter: a content analysis of tweets during hurricane sandy
evaluating public response to the boston marathon bombing and other acts of terrorism through twitter
twitter tsunami early warning network: a social network analysis of twitter information flows
twitter earthquake detection: earthquake monitoring in a social world
a case study of the new york city influenza season with daily geocoded twitter data from temporal and spatiotemporal perspectives
what can we learn about the ebola outbreak from tweets?
covid-19: the first public coronavirus twitter dataset
retweeting for covid-19: consensus building, information sharing, dissent, and lockdown life
a first look at covid-19 information and misinformation sharing on twitter
coronavirus on social media: analyzing misinformation in twitter conversations
the covid-19 social media infodemic
using twitter and web news mining to predict covid-19 outbreak
a large-scale covid-19 twitter chatter dataset for open scientific research: an international collaboration
an "infodemic": leveraging high-volume twitter data to understand public sentiment for the covid-19 outbreak
understanding the perception of covid-19 policies by mining a multilanguage twitter dataset
coronavirus goes viral: quantifying the covid-19 misinformation epidemic on twitter
how the world's collective attention is being paid to a pandemic: covid-19 related n-gram time series for languages on twitter
an early look on the emergence of sinophobic behavior on web communities in the face of covid-19
prevalence of low-credibility information on twitter during the covid-19 outbreak
coronavis: a real-time covid-19 tweets analyzer
exploring the space of topic coherence measures
detection and analysis of us presidential election related rumors on twitter
analysis of twitter users' sharing of official new york storm response messages
latent dirichlet allocation
full-text or abstract? examining topic coherence scores using latent dirichlet allocation
selective review of offline change point detection methods
optimal detection of changepoints with a linear computational cost
get your mass gatherings or large community events ready
trump says fda will fast-track treatments for novel coronavirus, but there are still months of research ahead
the white house: presidential memoranda
using tf-idf to determine word relevance in document queries
twitter trending topic classification
predicting popular messages in twitter
opinion mining and sentiment polarity on twitter and correlation between events and sentiment
umap: uniform manifold approximation and projection for dimension reduction
how umap works: understanding umap
unsupervised user stance detection on twitter
event detection in colombian security twitter news using fine-grained latent topic analysis
predicting the topical stance of media and popular twitter users
bad news travel fast: a content-based analysis of interestingness on twitter
waiting for a retweet: modeling waiting times in information propagation
omg earthquake! can twitter improve earthquake response?
introduction to tweet json (twitter developers)
predicting the times of retweeting in microblogs
social media as amplification station: factors that influence the speed of online public response to health emergencies
want to be retweeted? large scale analytics on factors impacting retweet in twitter network
an algorithm for drawing general undirected graphs
virality prediction and community structure in social networks
share of u.s. adults using social media, including facebook, is mostly unchanged since 2018
how twitter users compare to the general public
retweets are trash
characterizing diabetes, diet, exercise, and obesity comments on twitter
comparing twitter and traditional media using topic models
empirical study of topic modeling in twitter
characterizing twitter discussions about hpv vaccines using topic modeling and community detection
topic modeling in twitter: aggregating tweets by conversations
twitter-network topic model: a full bayesian treatment for social network and text modeling
pandemics in the age of twitter: content analysis of tweets during the 2009 h1n1 outbreak
tweeting about measles during stages of an outbreak: a semantic network approach to the framing of an emerging infectious disease
software framework for topic modelling with large corpora
lda model parameters

[appendix figure legends list the topic labels: patient; china.thank.lockdown; case.spread.slow; day.case.week; test.case.hosp; die.world.peopl; mask.face.wear; make.home.stay; hospit.nurs.le; case.death.new; mask.ppe.ventil; portuguese; case.death.number; italian; great.god.news; potus; spanish; like.look.work; hospit.realli.patient.]

the authors would like to acknowledge john larson from booz allen hamilton for his support and review of this article.

topic models were built with the gensim framework [ , ], which provides four different coherence metrics. we used the "c_v" metric for coherence, developed by röder [ ]. coherence metrics are used to rate the quality and human interpretability of a generated topic. all models were run with the default parameters using an ldamulticore model with parallel computing across workers, the default gamma threshold of . , a chunksize of , , iterations, and passes. note: sudden decreases in the figure signal may be due to temporary internet disconnection.

key: cord- -kzt vmf authors: huang, x.; li, z.; lu, j.; wang, s.; wei, h.; chen, b. title: time-series clustering for home dwell time during covid-19: what can we learn from it? date: - - journal: nan doi: . / . . . sha: doc_id: cord- cord_uid: kzt vmf

in this study, we investigate the potential driving factors that lead to the disparity in the time-series of home dwell time, aiming to provide fundamental knowledge that benefits policy-making for better mitigation strategies of future pandemics. taking metro atlanta as a study case, we perform a trend-driven analysis by conducting kmeans time-series clustering using fine-grained home dwell time records from safegraph, and further assess the statistical significance of sixteen demographic/socioeconomic variables from five major categories. we find that demographic/socioeconomic variables can explain the disparity in home dwell time in response to the stay-at-home order, which potentially leads to disparate exposures to the risk from covid-19. the results further suggest that socially disadvantaged groups are less likely to follow the order to stay at home, pointing out that extensive gaps in the effectiveness of social distancing measures exist between socially disadvantaged groups and others. our study reveals that the long-standing inequity issue in the u.s. stands in the way of the effective implementation of social distancing measures.
policymakers need to carefully evaluate the inevitable trade-off among different groups, making sure the outcomes of their policies reflect the interests of the socially disadvantaged groups.

• we perform a trend-driven analysis by conducting kmeans time-series clustering using fine-grained home dwell time records from safegraph.
• we find that demographic/socioeconomic variables can explain the disparity in home dwell time in response to the stay-at-home order.
• the results suggest that socially disadvantaged groups are less likely to follow the order to stay at home, potentially leading to more exposures to covid-19.
• policymakers need to make sure the outcomes of their policies reflect the interests of the disadvantaged groups.

regardless of their unique characteristics, all selected mobility datasets suggest a statistically significant positive correlation between mobility reduction and income at the u.s. county scale. despite the above efforts, the soundness of correlating disparity in response with demographic/socioeconomic variables is hampered by coarse geographical units, as mitigation policies may vary in different countries, states, and even counties; therefore, the documented disparity in response may result from discrepancies in mitigation policies, not from the varying demographic/socioeconomic indicators. thus, the examination of fine-grained mobility records (e.g., at the census tract or block group level) is in great need. in addition, most existing studies utilize indices summarized during a specific period to quantify the mobility-related response, neglecting the dynamic perspectives revealed by time-series data. in comparison, time-series trend-based analytics may provide valuable insights in distinguishing different dynamic patterns of mobility records, thus warranting further investigation. the objective of this study is to explore the capability of time-series clustering in categorizing fine-grained mobility records during the covid-19 pandemic, and further investigate which demographic/socioeconomic variables differ among the categories with statistical significance. taking advantage of home dwell time at the census block group (cbg) level from safegraph [ ], and using the atlanta-sandy springs-roswell metropolitan statistical area (msa) (hereafter referred to as metro atlanta) as a study case, this study investigates the potential driving factors that lead to the disparity in the time-series of home dwell time during the covid-19 pandemic, providing fundamental knowledge that benefits policy-making for better mitigation measures of future pandemics. the contributions of this work are summarized as follows:
• we perform a trend-driven analysis by conducting kmeans time-series clustering using fine-grained home dwell time records from safegraph.
• we assess the statistical significance of sixteen selected demographic/socioeconomic variables among categorized groups derived from the time-series clustering. those variables cover economic status, races and ethnicities, age and household type, education, and transportation.
• we discuss the potential demographic/socioeconomic variables that lead to the disparity in home dwell time during the covid-19 pandemic, how they reflect the long-standing health inequity in the u.s., and what can be suggested for better policy-making.
the remainder of the paper is organized as follows. section introduces the datasets used in this study. section presents the methodological approaches we applied.

the home dwell time records are derived from safegraph (https://www.safegraph.com/), a data company that aggregates anonymized location data from numerous applications in order to provide insights about physical places. safegraph aggregates data using a panel of gps points from anonymous mobile devices and determines the home location as the common nighttime location of each mobile device over a six-week period at a geohash-7 granularity (approximately 153 m × 153 m) [ ]. to enhance privacy, safegraph excludes cbg information if fewer than five devices visited an establishment in a month from a given cbg. the data records used in this study are the median home dwell time in minutes for all devices within a certain cbg on a daily basis: for each device, the observed minutes at home across the day are summed, and the median value for all devices within a certain cbg is then calculated [ ]. the raw safegraph dataset we used spans from january to august 2020 ( days), with daily home dwell records (in minutes) for a total of , cbgs. heat maps of home dwell time for these cbgs are
We next describe the context of the study case (metro Atlanta), present the results of the time-series clustering and of the analysis of variance together with the discussion, and finally conclude the article.

The home dwell time records are derived from SafeGraph (https://www.safegraph.com/), a data company that aggregates anonymized location data from numerous applications in order to provide insights about physical places. SafeGraph aggregates data using a panel of GPS points from anonymous mobile devices and determines the home location as the common nighttime location of each mobile device over a six-week period, to a geohash-7 granularity (∼153 m × ∼153 m) [ ]. To enhance privacy, SafeGraph excludes CBG information if fewer than five devices visited an establishment in a month from a given CBG. The data records used in this study are the daily median home dwell time, in minutes, over all devices within a certain CBG: for each device, the observed minutes at home across the day are summed, and the median value over all devices within a CBG is then calculated [ ]. The raw SafeGraph dataset we used spans from January to August 2020 ( days), with daily home dwell records (in minutes) for a large set of CBGs. A heat map of home dwell time for these CBGs is presented in figure . The impact of COVID-19 can be observed, as home dwell time notably increased after the declaration of the national emergency on March 13, 2020 [ ] (figure ), despite the disparity in the intensity of the increase. After the lifting of strict social distancing measures in early May, however, home dwell time started to decrease and returned to the pre-pandemic level (figure ). The increased variation of home dwell time after the national emergency declaration indicates that CBGs responded differently to the pandemic and to the government order. Despite the large number of CBGs, not all of them contain sufficient records to derive stable time series usable for clustering; the details of the preprocessing steps are presented below.

Demographic and socioeconomic variables in this study are derived from the American Community Survey (ACS), collected by the U.S. Census Bureau. The ACS is an ongoing nationwide survey that gathers a variety of aggregated information about U.S. residents at different geographic levels every year [ ]. The ACS randomly selects monthly samples based on housing-unit addresses and publishes annual estimate datasets (i.e., 12-month samples). In addition to the 1-year datasets, the ACS also releases 3-year estimates (i.e., 36-month samples) and 5-year estimates (i.e., 60-month samples). Compared to the 1-year and 3-year datasets, the 5-year estimates cover the most areas, have the largest sample size, and contain the most reliable information [ ]. In this study, we use the latest 5-year ACS estimates, obtained from Social Explorer (https://www.socialexplorer.com/). We recode the variables from the ACS data into five major categories: 1) economic status; 2) races and ethnicities; 3) gender, age, and household type; 4) education; 5) transportation. Previous empirical studies suggested that these variables could be associated with the pattern of daily travel and participation in out-of-home activities [ ] [ ] [ ] [ ].
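To make the SafeGraph-style aggregation described above concrete, the following is a minimal sketch of the median home dwell time computation; the input layout and column names are illustrative assumptions, not SafeGraph's actual schema.

```python
import pandas as pd

# Hypothetical device-level records: one row per device per day, with the
# total observed minutes at home and the device's home CBG.
records = pd.DataFrame({
    "cbg": ["130890201001", "130890201001", "130890201002"],
    "date": ["2020-03-14"] * 3,
    "minutes_at_home": [1310, 845, 1440],
})

# For each CBG and day, take the median home dwell time across all devices,
# then pivot so that each row is one CBG's daily time series.
median_dwell = (
    records.groupby(["cbg", "date"])["minutes_at_home"]
    .median()
    .unstack("date")
)
print(median_dwell)
```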
The detailed information on the variables within the five categories is presented in table ; under economic status, for example, pct_low_income is the percentage of households with income below a given threshold. In addition, CBG boundaries are derived from the TIGER/Line shapefiles by the U.S. Census Bureau (https://www.census.gov/cgi-bin/geo/shapefiles/index.php).

Several preprocessing steps are applied to ensure that CBGs within the study area contain sufficient and valid records to derive stable time series that can be used for clustering. We first select the CBGs that fall within the study area, i.e., metro Atlanta (more details on metro Atlanta are given later). As SafeGraph uses digital devices to measure home dwell time, the number of available devices in each CBG greatly determines the representativeness and the stability of the time series. We plot the spatial distribution of the median daily device count within the metro Atlanta area and observe that CBGs dominated by non-residential zones tend to have a lower daily device count (figure a), presumably due to the low number of home locations identified via SafeGraph's algorithm. We keep CBGs with a sufficient number of days of home dwell time records (relative to the full study period) to ensure that reliable time series can be generated. To fill the missing data, we adopt the approach of Huang et al. [ ], where missing values are filled via simple linear interpolation, assuming that home dwell time changes linearly between two consecutive available records. Our preliminary investigation suggests that stable time series of daily home dwell time are achieved once the daily device count reaches a certain threshold; we therefore calculate the median daily device count for each CBG over the study period and keep the CBGs whose median is equal to or larger than that threshold. We also observe that some CBGs present abnormal home dwell patterns, with consecutive zero values for extended periods of time. To avoid the problems such CBGs could cause for the clustering algorithm, we remove CBGs with zero values spanning more than three consecutive days. A reduced set of CBGs remains after the aforementioned preprocessing steps, and their representativeness is presented in figure b. Representativeness is defined as the ratio between the median daily device count and the population from the ACS 5-year estimates; for most CBGs it falls within a range (figure b) considerably higher than that of Twitter [ ], a commonly used open-source platform for deriving mobility-related statistics.

Time-series clustering is the process of partitioning a time-series dataset into a certain number of clusters according to a certain similarity criterion. In this study, we aim to cluster the time series of home dwell time of the CBGs within the study area. We adopt the design of k-means [ ], an unsupervised partition-based clustering algorithm in which observations are assigned to the cluster with the nearest mean. The choice of the similarity measurement in k-means is crucial to the detection of clusters [ ].
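Before turning to the distance computation, here is a minimal sketch of the preprocessing chain just described; the thresholds and data layout are assumptions for illustration, since the exact values are not stated here.

```python
import pandas as pd

def preprocess(dwell: pd.DataFrame, devices: pd.DataFrame,
               min_days: int = 200, min_devices: int = 20) -> pd.DataFrame:
    """dwell/devices: rows = CBGs, columns = days (home dwell minutes and
    daily device counts). min_days/min_devices are illustrative placeholders."""
    # 1) Keep CBGs with enough non-missing daily records.
    dwell = dwell.loc[dwell.notna().sum(axis=1) >= min_days]

    # 2) Fill missing days, assuming home dwell time changes linearly
    #    between two consecutive available records.
    dwell = dwell.interpolate(axis=1, limit_direction="both")

    # 3) Keep CBGs whose median daily device count reaches the threshold.
    stable = devices.median(axis=1) >= min_devices
    dwell = dwell.loc[dwell.index.intersection(stable[stable].index)]

    # 4) Drop CBGs with zero dwell time on more than three consecutive days.
    def max_zero_run(row: pd.Series) -> int:
        is_zero = row.eq(0)
        runs = is_zero.groupby((~is_zero).cumsum()).sum()
        return int(runs.max())

    return dwell[dwell.apply(max_zero_run, axis=1) <= 3]
```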
Considering that the time series of home dwell time of the majority of the CBGs present a similar shape but vary in intensity (figure ), we decided to calculate the Euclidean distance between two time series. Given a dataset of time series $X = \{x_1, x_2, \dots, x_n\}$, we aim to partition $X$ into a total of $K$ clusters $C = \{C_1, C_2, \dots, C_K\}$ by minimizing the objective function $J$, given as:

$$J = \sum_{k=1}^{K} \sum_{x_i \in C_k} \lVert x_i - \mu_k \rVert^2$$

where $x_i$ denotes a time series in cluster $C_k$, $\mu_k$ denotes the cluster center of $C_k$, and $\lVert \cdot \rVert$ denotes the similarity measurement between $x_i$ and the cluster center. Let $x_i$ and $\mu_k$ each be a $d$-dimensional vector, where $d$ equals the length of the time series (the number of days in the study period). As the Euclidean distance is selected as the similarity measurement in this study, $\lVert x_i - \mu_k \rVert$ can be rewritten as:

$$\lVert x_i - \mu_k \rVert = \sqrt{\sum_{j=1}^{d} \left( x_{i,j} - \mu_{k,j} \right)^2}$$

Further, k-means uses an iterative procedure with the following steps to derive the final category of each time-series candidate:
1. Initialize the cluster centroids $\mu_1, \mu_2, \dots, \mu_K$ arbitrarily.
2. Assign each time series $x_i$ to the cluster $C_k$ with the nearest centroid, according to $\lVert x_i - \mu_k \rVert$.
3. Update each centroid $\mu_k$ as the mean of the time series currently assigned to $C_k$.
4. Repeat steps 2 and 3 until the assignments no longer change.

k-means time-series clustering requires pre-specifying the total number of clusters $K$, which inevitably introduces subjectivity in deciding what constitutes reasonable clusters [ ]. Through investigation of the time-series dataset, we set $K = 3$, expecting to find three CBG clusters with different home dwell time patterns following the stay-at-home order: 1) CBGs with a significant increase in home dwell time; 2) CBGs with a moderate increase; 3) CBGs with unnoticeable changes. After the time-series clustering, three CBG clusters are therefore formed, each with a unique distribution pattern of daily home dwell time. Identifying the statistical differences in demographic/socioeconomic variables among these clusters facilitates a better understanding of which variables potentially lead to the disparity in home dwell time during the COVID-19 pandemic. Qualitatively, we label the CBG clusters, plot them spatially, and compare the spatial pattern of the clusters with the spatial patterns of several major demographic/socioeconomic variables in the study area (see figure below). Quantitatively, we apply one-way ANOVA (analysis of variance) [ ] to assess the statistical significance of the variables in the five major categories (see table ) among the categorized CBG groups derived from the time-series clustering. As ANOVA does not provide insights into particular differences between pairs of cluster means, we further conduct Tukey's test [ ], a common and popular post-hoc analysis following ANOVA, to assess the statistical difference of each demographic/socioeconomic variable between cluster pairs.

The study area defined in this study is referred to as metro Atlanta, designated by the United States Office of Management and Budget (OMB) as the Atlanta–Sandy Springs–Alpharetta, Georgia (GA) metropolitan statistical area (MSA). Metro Atlanta is the twelfth-largest MSA in the U.S. and the most populous metro area in GA [ ]. The study area includes a set of GA counties (listed in table a) with a total estimated population of several million, according to the ACS 5-year estimates. Metro Atlanta has grown rapidly over recent decades.
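A minimal sketch of the clustering step just described: with the Euclidean metric, running standard k-means on whole series minimizes exactly the objective $J$ above, so scikit-learn's KMeans can serve directly. The matrix dimensions and data below are synthetic stand-ins.

```python
import numpy as np
from sklearn.cluster import KMeans

# X: one row per CBG, one column per day of preprocessed home dwell time.
rng = np.random.default_rng(0)
X = rng.uniform(400, 1400, size=(100, 244))  # synthetic stand-in data

# No z-normalization is applied on purpose: the study clusters series that
# share a shape but differ in intensity, which Euclidean distance preserves.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
labels = km.labels_              # cluster assignment per CBG
centroids = km.cluster_centers_  # mean daily dwell-time curve per cluster
```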
Despite its rapid growth, however, metro Atlanta has shown widening disparities, including class and racial divisions, underlying its uneven growth and development and making it one of the metro regions with the most inequity [ ] [ ] [ ]. This is the main reason why we chose this metro region to explore the disparity in responses to the COVID-19 pandemic. In the last few decades, the north metro area has absorbed most of the new growth, thanks to the northward shift of the metro region's white population and to rapid office, commercial, and retail development [ ]. After the increasingly unbalanced development of recent decades, metro Atlanta presents a distinct north–south spatial disparity in many demographic/socioeconomic variables (figure ). Compared to the south metro region, the north region is characterized by higher income (figure a), higher white percentages (figure b), higher education (figure c), and higher percentages of work-from-home workers (figure d).

In contrast to the substantial spatial heterogeneity of socioeconomic status, GA's governmental reactions to the COVID-19 pandemic were rather homogeneous in space. On March 14, 2020, Governor Brian P. Kemp announced the public health state of emergency in GA. Twenty days later (April 3), the shelter-in-place order took effect for the entire state [ ]. The strict social distancing measures lasted until late April, when GA started to reopen gradually: resuming restaurant dine-in services (April 27), reopening bars and nightclubs with capacity limits (June), allowing larger gatherings (June), and reopening conventions and live performances (July) [ ].

CBGs in the first identified cluster did not respond strongly to the social distancing measures implemented in March and April (figure a). CBGs in the second cluster experienced a moderate increase in home dwell time during the implementation of strict social distancing measures (figure b). Compared to the moderate-increase cluster, CBGs in the third cluster saw a far more dramatic increase, with home dwell time for most of its CBGs approaching the full 1,440 minutes of the day in March and April, suggesting that mitigation measures greatly changed people's travel behavior in these CBGs (figure c). Note that the three identified clusters contain different numbers of CBGs. Figure shows the spatial distribution of the three CBG clusters, which presents a certain level of spatial autocorrelation, especially for the unnoticeable-change and strong-increase clusters. The global Moran's I [ ] for the distribution of the three identified clusters is significant, confirming the positive spatial autocorrelation. In general, the spatial distribution implies that demographic/socioeconomic variables potentially drive the disparity in home dwell time during the pandemic.
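A sketch of the spatial autocorrelation check, using libpysal/esda (assumed available). The file name is hypothetical, and whether queen contiguity or another weighting scheme was actually used is not stated, so Queen weights are an assumption here.

```python
import geopandas as gpd
from libpysal.weights import Queen
from esda.moran import Moran

gdf = gpd.read_file("cbg_clusters.shp")  # hypothetical CBG polygons + labels
w = Queen.from_dataframe(gdf)            # contiguity-based spatial weights
w.transform = "r"                        # row-standardize the weights

# Moran's I needs a numeric variable; here, membership in one cluster.
y = (gdf["cluster"] == "strong_increase").astype(int)
mi = Moran(y, w)
print(mi.I, mi.p_sim)  # statistic and permutation-based pseudo p-value
```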
The distribution of CBGs in the strong-increase cluster suggests a high correlation between home dwell time and income, as the spatial pattern of this cluster and that of CBGs with high household income (see figure a) are largely similar. North metro Atlanta, where CBGs with high percentages of work-from-home workers and high educational levels are concentrated, exhibits a strong influence of the stay-at-home orders, evidenced by the high concentration there of CBGs belonging to the cluster with significantly increased home dwell time in March and April.

The selected sixteen demographic/socioeconomic variables present distinct distribution patterns in the three identified clusters (figure ). Compared with the other two clusters, the strong-increase cluster is characterized by a high median household income, a high percentage of high-earning groups, a low percentage of low-earning groups, and a low unemployment rate, suggesting that residents of affluent CBGs responded to the stay-at-home order more aggressively by considerably reducing their out-of-home activities. This indicates that financial resources can, to a certain degree, influence the effectiveness of policies, as stated in other studies [ , ]. In terms of racial composition, the three clusters are distinctly different: CBGs in the unnoticeable-change cluster present much higher Black percentages than those in the strong-increase cluster, revealing that the stay-at-home order was less effective in CBGs with higher Black percentages. This finding coincides with other recent studies that identified racial disparities during the COVID-19 pandemic [ , ]. As expected, the unnoticeable-change cluster also presents a higher percentage of single-parent families, given that a high percentage of single-parent families is usually seen in Black communities [ ]. In contrast, the three identified clusters present similar Hispanic and female percentages, indicating the weaker role of these variables in distinguishing patterns of home dwell time. As for education, the unnoticeable-change and moderate-increase clusters show similar distributions of the percentage of low education, while the strong-increase cluster shows a considerably lower percentage; a reversed pattern is found for high education, where the strong-increase cluster presents a notably higher percentage than the other two. The percentage of short-commuters remains similar across all three clusters, while the percentage of long-commuters differs, pointing out that a stronger increase in home dwell time goes hand in hand with a higher percentage of long-commuters.

Figure : selected demographic/socioeconomic variables in the three identified clusters. The descriptions of these variables can be found in table .

We perform ANOVA to assess the statistical difference of the demographic/socioeconomic variables among the three identified clusters, and a post-hoc Tukey's test to evaluate the statistical difference between specific cluster pairs.
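A minimal sketch of this testing step with SciPy and statsmodels, assuming a hypothetical table cbg_variables.csv with one row per CBG carrying its cluster label and the sixteen variables (pct_low_income is used as the example):

```python
import pandas as pd
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.read_csv("cbg_variables.csv")  # hypothetical file name

# One-way ANOVA: does the variable differ among the three clusters?
groups = [g["pct_low_income"].values for _, g in df.groupby("cluster")]
f_stat, p_value = f_oneway(*groups)
print(f"ANOVA: F={f_stat:.2f}, p={p_value:.4f}")

# Post-hoc Tukey HSD: which specific cluster pairs differ?
tukey = pairwise_tukeyhsd(df["pct_low_income"], df["cluster"], alpha=0.05)
print(tukey.summary())
```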
The results from the ANOVA suggest that all selected variables, except the percentage of females (pct_female) and the percentage of short-commuters (pct_short_commute), show a statistically significant difference among the three clusters (table ). In other words, gender and the percentage of short-commuters do not differ significantly among the means of the three identified clusters, indicating that these two variables play a weaker role in explaining the disparity in patterns of home dwell time.

To provide deeper insight into the comparisons of the selected variables between specific pairs of clusters, we further conduct a post-hoc Tukey's test (figure ). For the variables regarding economic status, the strong-increase cluster is statistically different from the other two clusters in all four economic variables, i.e., pct_low_income, pct_high_income, median_hhinc, and pct_unemployrate, while the remaining cluster pair presents a weaker difference in median_hhinc and no significant difference in pct_high_income. The results for the racial and ethnic variables suggest that the three clusters are statistically different from one another in pct_white, pct_black, and pct_hispanic, despite a weaker difference in pct_hispanic between one pair of clusters. The difference in education (pct_low_edu and pct_high_edu) is not significant between the unnoticeable-change and moderate-increase clusters, but is significant when comparing the strong-increase cluster with either of the other two. This suggests that CBGs in the strong-increase cluster are characterized by highly educated residents, statistically different from the other two clusters. In addition, the three clusters are statistically different from one another in terms of long-commuters (pct_long_commute) and car ownership (pct_car), suggesting that these two variables partially explain the disparity in home dwell time.

This study applies a time-series clustering technique to categorize fine-grained mobility records (at the CBG level) during the COVID-19 pandemic. Through the investigation of the demographic/socioeconomic variables in the identified time-series clusters, we find that they are able to explain the disparity in home dwell time in response to the stay-at-home order, which potentially leads to disproportionate exposure to the risk of COVID-19. This study also reveals that socially disadvantaged groups are less likely to follow the order to stay at home, pointing out that extensive gaps in the effectiveness of social distancing measures exist between socially disadvantaged groups and others. To make things worse, the existing disparities induced by socioeconomic status are often exaggerated by the shortcomings of U.S. protection measures (e.g., health insurance, minimum incomes, unemployment benefits), potentially causing long-term negative outcomes for socially disadvantaged populations [ ]. In addition to the many pieces of epidemiological evidence of a strong relationship between social inequality and health outcomes [ , ], this study offers evidence from the COVID-19 pandemic we are facing.
Specifically, we find that all selected variables, except the percentage of females (pct_female) and the percentage of short-commuters (pct_short_commute), show a statistically significant difference among the three identified clusters. CBGs in the cluster with a strong response in home dwell time are characterized by a high median household income, a low Black percentage, a high percentage of high-earning groups, a low unemployment rate, high education, a low percentage of single parents, high car ownership, and a high percentage of long-commuters. The statistically significant differences of the demographic/socioeconomic variables in this cluster collectively point out the privilege of the advantaged groups, usually the white and the affluent. The weak response of the socially disadvantaged groups in home dwell time can possibly be explained by the fact that policies can sometimes unintentionally create discrimination among groups with different socioeconomic status [ ], as people react to policies according to the financial resources they have [ ], which in return influences the effectiveness of the policies. Our study reveals that the long-standing inequity issue in the U.S. stands in the way of the effective implementation of social distancing measures. Thus, policymakers need to carefully evaluate the inevitable trade-off among different groups and make sure the outcomes of their policies reflect not only the preferences of the advantaged but also the interests of the disadvantaged.

It is important to mention several limitations of this study and to provide guidelines for future directions. First, we acknowledge the subjectivity of predefining the number of clusters in the k-means clustering algorithm. In this study, we set the number of clusters to three (i.e., k = 3) via investigation and interpretation of the home dwell time records from SafeGraph. We notice that, even after the preprocessing, some CBGs still present unstable temporal patterns due to low and varying daily device counts. Our interpretation of the data records reveals three distinct temporal patterns, with a strong, moderate, and unnoticeable increase in home dwell time during March and April (hence k is predefined as 3). Selecting the number of clusters in k-means via prior knowledge (a priori) is common and ensures the interpretability of the clusters; however, we acknowledge that approaches like the elbow curve [ ] and silhouette analysis [ ] are widely adopted to optimize k without prior knowledge (see the sketch below). When conducting a cross-city comparison or reproducing our approach in another region, we advise re-investigating the pattern of the time series or adopting the aforementioned approaches to derive a reasonable setting of k.
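Where prior knowledge of k is unavailable, the elbow and silhouette heuristics mentioned above can be sketched as follows; X is the dwell-time matrix as in the clustering sketch earlier, and the candidate range of k is illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(1)
X = rng.uniform(400, 1400, size=(100, 244))  # synthetic stand-in data

for k in range(2, 8):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    inertia = km.inertia_                  # elbow: look for the bend
    sil = silhouette_score(X, km.labels_)  # silhouette: higher is better
    print(f"k={k}: inertia={inertia:.0f}, silhouette={sil:.3f}")
```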
Second, we construct and cluster the time series of home dwell time using the 2020 data (January to August), without considering changes relative to the previous year. It is reasonable to assume that deriving a cross-year change index would facilitate the identification of CBGs that behave differently from how they did in 2019. However, we must acknowledge that involving the 2019 data records inevitably introduces a certain level of uncertainty, as daily device counts may vary substantially, leading to different representativeness of the same CBG between the two years. In addition, the k-means time-series clustering algorithm in this study takes the whole multi-month period as input; further efforts can be directed towards exploring how CBGs behave differently within a specific time window, e.g., March and April, when strict social distancing measures were implemented. Third, this study selects a total of sixteen variables from five major categories and explores their distributions in the three identified clusters. Although previous studies have demonstrated a strong linkage between these variables and participation in out-of-home activities, we cannot rule out the possible contribution of other demographic/socioeconomic variables not included in this study. Future studies need to incorporate more variables to understand their roles in how social distancing guidelines are practiced. Moreover, it is reasonable to assume that these variables drive the disparity in home dwell time not independently but collectively; statistical approaches like multinomial logit regression [ ] can therefore be used to further investigate the interactions among these variables in generating the time-series-based clusters. Finally, it should be noted that demographic structure, spatial patterns, and the built environment vary substantially across areas, especially across densely populated urban fabrics [ , ]. Thus, the influence of demographic/socioeconomic variables on the disparity in home dwell time following the stay-at-home order may not hold everywhere and tends to vary geographically. In addition, local governments responded differently to the pandemic, with varying strictness of the implemented social distancing measures, potentially leading to an unequal impact that disfavors disadvantaged groups. This study only explores the situation in metro Atlanta, which cannot be generalized to other regions without caution. It is therefore necessary to conduct comparative studies including multiple regions to better understand the contribution of demographic/socioeconomic variables to the impact of the COVID-19 pandemic on mobility-related behaviors.

This study categorizes the time series of home dwell time records during the COVID-19 pandemic and further explores which demographic/socioeconomic variables differ among the categories with statistical significance. Taking the Atlanta–Sandy Springs–Roswell metropolitan statistical area (metro Atlanta) as a study case, we investigate the potential driving factors that lead to the disparity in the time series of home dwell time, providing fundamental knowledge that benefits policy-making for better mitigation measures against future pandemics. We find that demographic/socioeconomic variables can explain the disparity in home dwell time in response to the stay-at-home order, which potentially leads to disproportionate exposure to the risk of COVID-19. The results further suggest that socially disadvantaged groups are less likely to follow the order to stay at home, pointing out that extensive gaps in the effectiveness of social distancing measures exist between socially disadvantaged groups and others.
Specifically, we find that CBGs with a strong response to the stay-at-home order are characterized by a high median household income, a low Black percentage, a high percentage of high-earning groups, a low unemployment rate, high education, a low percentage of single parents, high car ownership, and a high percentage of long-commuters, pointing out the privilege of the advantaged groups, usually the white and the affluent. In other words, populations with lower socioeconomic status may lack the freedom or flexibility to stay at home, leading to exposure to more risks during the pandemic. Our study reveals that the long-standing inequity issue in the U.S. stands in the way of the effective implementation of social distancing measures. Thus, policymakers need to carefully evaluate the inevitable trade-off among different groups and make sure the outcomes of their policies reflect not only the preferences of the advantaged but also the interests of the disadvantaged.

References:
• Coronavirus disease (COVID-19) — events as they happen.
• Coronavirus disease (COVID-19) — weekly epidemiological update.
• The COVID-19 vaccine development landscape.
• Social distancing responses to COVID-19 emergency declarations strongly differentiated by income.
• The effect of human mobility and control measures on the COVID-19 epidemic in China.
• Transmission potential and severity of COVID-19 in South Korea.
• COVID-19 and Italy: what next?
• First cases of coronavirus disease (COVID-19) in France: surveillance, investigations and control measures.
• Unemployment effects of stay-at-home orders: evidence from high frequency claims data. Institute for Research on Labor and Employment working paper.
• The characteristics of multi-source mobility datasets and how they reveal the luxury nature of social distancing in the U.S. during the COVID-19 pandemic.
• The determinants of the differential exposure to COVID-19 in New York City and their evolution over time. Covid Economics: Vetted and Real-Time Papers.
• Economic and social consequences of human mobility restrictions under COVID-19.
• Social distancing, internet access and inequality (no. w ).
• The benefits and costs of social distancing in rich and poor countries.
• Urban residents in states hit hard by COVID-19 most likely to see it as a threat to daily life.
• Are stay-at-home orders more difficult to follow for low-income groups? Working paper.
• American Community Survey information guide.
• When to use 1-year, 3-year, or 5-year estimates.
• Distance traveled in three Canadian cities: spatial analysis from the perspective of vulnerable population segments.
• A time-use investigation of shopping participation in three Canadian cities: is there evidence of social exclusion?
• My car, my friends, and me: a preliminary analysis of automobility and social activity participation.
• Relative accessibility deprivation indicators for urban settings: definitions and application to food deserts in Montreal.
• Human mobility, and COVID-19. arXiv preprint.
• An efficient k-means clustering algorithm: analysis and implementation.
• Clustering of time series data — a survey.
• Selection of k in k-means clustering.
• Analysis of variance (ANOVA).
• Tukey's honestly significant difference (HSD) test.
• Metropolitan and micropolitan statistical areas population totals and components of change.
• Multi-city study of urban inequality.
• Inequities of transit access: the case of sprawl.
• Atlanta: social equity dimensions of uneven growth and development.
• Atlanta: race, class, and urban expansion.
• Kemp — Office of the Governor.
• Where states reopened and cases spiked after the U.S. shutdown. The Washington Post.
• Local spatial autocorrelation statistics: distributional issues and an application.
• The impact of social vulnerability on COVID-19 in the US: an analysis of spatially varying relationships.
• Assessing racial and ethnic disparities using a COVID-19 outcomes continuum for New York State.
• The COVID-19 pandemic: a call to action to identify and address racial and ethnic disparities.
• The changing demographic and socioeconomic characteristics of single parent families.
• Anthropology, inequality, and disease: a review.
• The income-associated burden of disease in the United States.
• Disadvantage, inequality, and social policy.
• Review on determining the number of clusters in k-means clustering.
• Selecting variables for k-means cluster analysis by using a genetic algorithm that optimises the silhouettes.
• Multinomial logistic regression algorithm.
• Impact of metropolitan-level built environment on travel behavior.
• How built environment affects travel behavior: a comparative analysis of the connections between land use and vehicle miles traveled in US cities.

key: cord- -n cwg b authors: bernini, antonio; bonaccorsi, lorella; fanti, pietro; ranaldi, francesco; santosuosso, ugo title: use of it tools to search for a correlation between weather factors and onset of pulmonary thromboembolism date: - - journal: nan doi: nan sha: doc_id: cord- cord_uid: n cwg b

Pulmonary embolism (PE) and deep vein thrombosis (DVT) are grouped together as venous thromboembolism (VTE), the third most common cardiovascular disease. Recent studies suggest that meteorological parameters such as atmospheric pressure, temperature, and humidity could affect PE incidence, but the relationship between the two phenomena is still debated and the evidence is not completely explained. The clinical experience of the Department of Emergency Medicine at the AOUC hospital suggests that a relationship may effectively exist. We collected data on the emergency medicine unit admissions of PE patients to test this hypothesis; at the same time, atmospheric parameters were collected from the LaMMA Consortium of the Tuscany Region. We implemented new IT models and statistical tools, processing the dataset with semi-hourly, high-temporal-resolution weather records. We borrowed tools from econometrics, such as moving averages, and studied anomalies through the search for peaks and possible patterns. We created a framework in Python to represent and study time series, to analyze data, and to plot graphs; the project has been uploaded on GitHub. Our analyses highlighted a strong correlation between the moving averages of atmospheric pressure and those of the number of hospitalizations (r = − . , p < . ), although causality is still unknown. We also detected an increase in the number of hospitalizations in the days following short-to-medium periods characterized by a high number of half-hourly pressure changes. The study of the spectrograms obtained by the Fourier transform requires a larger dataset: the analyzed data (especially the hospitalization data) were too few to carry out this kind of analysis.

Aim of the study. The aim of our study is to describe a method to evaluate the relationship between spikes in atmospheric parameters and the admissions to the emergency unit at the Careggi hospital of subjects diagnosed with PE.
To set up the study, we retrospectively collected the clinical data of subjects diagnosed with PE among the hospital admissions. We built a database containing the demographic and pathological parameters of the subjects, and a second database collecting the meteorological data on atmospheric parameters for the investigated period of time.

Results. To approach this issue we used approaches from econometrics, chronobiology (time-series analysis), and pattern recognition. The moving average of the time series of daily hospitalizations, over an annual window, shows an increasing trend; conversely, the annual moving average of pressure values shows a decreasing trend. Our analyses highlighted a strong correlation between the moving averages of atmospheric pressure and those of the number of hospitalizations, although causality is still unknown. We also detected an increase in the number of hospitalizations in the days following short-to-medium periods of time characterized by a high number of half-hourly pressure changes: the average pressure variation over days with half-hourly records, grouped by the number of hospitalizations occurring a few days later, shows an increase of . % in the single test year and of . % over the whole studied period. The study of the spectrograms obtained by the Fourier transform requires a larger dataset, as the analyzed data (especially the hospitalization data) were too few to carry out this kind of analysis.

Conclusions. In conclusion, our results show a correlation between pulmonary embolism and meteorological parameters, in particular atmospheric pressure (r = − . , p < . ), which is more relevant than temperature and wind speed. Further data collection will extend the investigated time span and the number of enrolled subjects, treating the information with more complex computational approaches in order to confirm our results.

Studying the seasonality of certain diseases and the possible influence of weather factors on their onset and mortality is a current and much debated topic, also in light of recent studies done in this sense on the diffusion of COVID-19 [ ] [ ] [ ]. In particular, some recent research seems to suggest the existence of a correlation between weather factors and the onset and mortality of deep vein thrombosis and/or pulmonary embolism, two diseases strongly linked together (see appendix B); however, since the studies are recent, conclusions are often not statistically significant or contradict each other. Indeed, some analyses highlight an increase in cases during the spring months, which are characterized by a lower atmospheric pressure [ ], while other analyses find peaks of cases in winter [ ]. In any case, most studies agree that some kind of correlation probably exists, but that more investigation is needed [ ].

The clinical experience of the Department of Emergency Medicine at the AOUC hospital seems to suggest that a connection between the number of hospitalizations for pulmonary embolism and weather factors actually exists. Hence this research, whose purpose is to use statistical tools and models to verify this correlation and, where present, to describe it. Besides attempting methods already used by other researchers, we also tried to tackle the problem with different, new tools. In particular, we found that much of the existing literature tends to focus on studying annual and monthly means, which nevertheless have the defect of flattening values; this leads to a loss of information.
Therefore we tried to study the problem at a higher data resolution, using daily and semi-hourly means, the latter obtained thanks to a concession from the Laboratory for Meteorology and Environmental Modelling of the LaMMA Consortium of the Tuscany Region. We also used tools from econometrics, such as moving averages, and studied anomalies through the search for peaks and possible patterns. Everything was realized using Python: we created a framework to represent and study the time series, which can be used for future developments of this study or to analyze other data regarding other diseases, pulmonary or not, or statistical samples from different sources.

Software tools and preliminary analysis of the data. We created a Python project and used it to analyse the data and to plot the graphs shown in this document. The project uses the pyplot library [ ] and can be consulted and downloaded on GitHub [ ]. For this study we crossed data from different sources. Daily hospitalization data were obtained from the medical records of cases of pulmonary embolism provided by the AOUC hospital; the corresponding time series is shown in figure . Daily weather data (in particular atmospheric pressure; minimum, maximum, and average temperature; and minimum, maximum, and average wind speed) for the same period were initially obtained from meteo.it [ ]; the corresponding time series is shown in figure . Later, since we needed semi-hourly rather than daily values, we used data provided by the LaMMA Consortium; these data are publicly available, only in the form of graphs, at [ , lamma.txt]. For a narrow range in August without data, we manually assigned the standard value of 1013 mbar. Since we worked most of the time with meteo.it's data, as we did not initially need semi-hourly means, in the rest of the document weather data always refer to the latter source, unless otherwise specified.

As a first approach, the autocorrelation of the daily values of atmospheric pressure and, separately, of the number of hospitalizations was investigated. The tool used to represent and study autocorrelation is the correlogram, also known as the autocorrelogram. It is a graph constructed by examining the correlation between a time series and several delayed copies of it: in other words, the time series $Y_t = Y_1, \dots, Y_T$ is correlated with its translations of amplitude $k$. For every lag $k$, the index

$$r_k = \frac{\sum_{t=1}^{T-k} (Y_t - \bar{Y})(Y_{t+k} - \bar{Y})}{\sum_{t=1}^{T} (Y_t - \bar{Y})^2}$$

is calculated, where $\bar{Y}$ represents the arithmetic mean of the values of the starting, untranslated time series. The correlogram is then constructed by plotting the pairs of values $(k, r_k)$ in a Cartesian chart. As can be seen in figure , the correlogram of pressure looks rather flat, with a monotonic trend that is highlighted by computing the autocorrelation of the normalized time series (figure ); for details about the normalization of a time series, see appendix A. The correlogram of hospitalizations (figure ) is instead more variegated, with an isolated peak. This is a predictable result: as regards hospitalizations, the past has almost no influence on the present and the accidental component prevails. In the pressure values, on the contrary, the trend component prevails, with the present being strongly influenced by the past, in particular by the nearest past, with decreasing influence moving backwards, as evidenced by the decreasing correlogram [ ].
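A minimal sketch of the correlogram computation just described — the sample autocorrelation $r_k$ for lags $k = 1, \dots, K$ — on a NumPy array; the data below are a synthetic stand-in for the pressure series.

```python
import numpy as np

def autocorrelation(y: np.ndarray, max_lag: int) -> np.ndarray:
    """Sample autocorrelation r_k for k = 1..max_lag."""
    y = np.asarray(y, dtype=float)
    y_dev = y - y.mean()
    denom = np.sum(y_dev ** 2)
    # r_k = sum_t (y_t - ybar)(y_{t+k} - ybar) / sum_t (y_t - ybar)^2
    return np.array([
        np.sum(y_dev[:-k] * y_dev[k:]) / denom for k in range(1, max_lag + 1)
    ])

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=365))  # synthetic trending series
r = autocorrelation(y, max_lag=60)   # plot (k, r_k) to get the correlogram
```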
For the algorithm used, see [ , auto_correlation.py]. Since the graph in figure shows peaks of annual seasonality, in an attempt to seasonally adjust the series, its moving average was computed over an annual window. The resulting time series, shown in figure , has a decreasing trend; a similar graph is obtained using the data provided by LaMMA, so this time series can be considered fully reliable. The trend observed is not surprising: barometric variations of this amplitude are in fact normal over the time spans considered. Computing the moving average of the time series of daily hospitalizations, again over an annual window, we obtain a graph (figure ) with an increasing trend, which cannot be ignored when analysing the results obtained later. Some hypotheses about the cause of this growing trend include:
• improvement of diagnostic tools;
• increased awareness of the pathology, with a consequent increase in the frequency of diagnosis;
• increased prescription of drugs with pulmonary embolism as a side effect;
• increase of triggering factors such as atmospheric phenomena and/or polluting agents.
Note that only the last of the hypotheses formulated above can lead to finding a correlation between the data in our possession. If, on the other hand, the cause were one of the other hypothesised phenomena (or were in any case of another nature), the growing trend found would make the data less "clear" for the purposes of the study, constituting an obstacle to the data analysis.

Search for a correlation between atmospheric pressure and number of hospitalizations. The most immediate tool for finding a correlation is the scatter plot; however, if used alone, it can be conditioned by subjective interpretations. For this reason, the use of this tool was accompanied by two of the most widely used correlation coefficients. The Pearson coefficient is defined as

$$r = \frac{\sum_{i} (x_i - m_x)(y_i - m_y)}{\sqrt{\sum_{i} (x_i - m_x)^2}\,\sqrt{\sum_{i} (y_i - m_y)^2}}$$

where $x_i$ and $y_i$ represent the values assumed by the two random variables X and Y whose correlation we are studying, while $m_x$ and $m_y$ represent their averages. The p value associated with the calculation of the Pearson coefficient indicates the probability that a randomly generated system of variables has a degree of correlation at least equal to that of the system examined. Note, however, that the calculation of p is completely correct only under the assumption that all the variables examined have a normal distribution, which is not always ensured in the analyses made below, which is why the reported values of p could be subject to errors [ ].

The Spearman's rank correlation coefficient, or Spearman's coefficient, indicates the degree of monotonic correlation between two random variables [ ]. Like the Pearson coefficient, it assumes values in the range [−1, 1], where 0 indicates the absence of a monotonic correlation, while −1 and 1, on the contrary, indicate an exact monotonic correlation, negative in the first case and positive in the second. It is defined as a particular case of the Pearson coefficient in which the values are converted into ranks before computing the coefficient [ ].
Compared to the Pearson coefficient, the Spearman coefficient has the advantage of also detecting non-linear (albeit always monotonic) correlations, and of not requiring normally distributed variables for the calculation of the associated p value, which is however sufficiently reliable only for datasets above a minimum size [ ], a hypothesis that, unlike what happens for the Pearson coefficient, is always satisfied in the present study.

A first approach in the search for a correlation between the time series of hospitalizations and that of atmospheric pressure was to generate a scatter plot (figure ), which however did not highlight anything in particular, showing at first sight a fairly random relationship between the data. Something that, at least apparently, seems more relevant is shown in figure , where a scatter plot was generated between the time series of hospitalizations and the time series of the day-to-day variations of the pressure values. The resulting graph deviates slightly more from a completely random distribution, with the upper-left part of the figure completely empty; however, computing the Pearson and Spearman coefficients produced no relevant results. For the algorithm used, see [ , corr_pressure.py]. Note that in both figures many of the points actually represent multiple occurrences, not visible because they overlap.

Correlating the seasonally adjusted time series mentioned above, the result obtained is completely different. The dispersion graph in figure shows a negative linear correlation, confirmed by the Pearson (− . , p = . ) and Spearman (− . , p = . ) coefficients, both close to −1. This result is in line with studies in the literature showing a higher incidence of pulmonary embolism in months characterized by low atmospheric pressure [ ]. In general, the negative correlation strengthens as the width of the moving-average window increases, as highlighted by the graphs in figure (notice that graph (d) is the same as shown in figure ) and by the corresponding correlation coefficients in table . For the algorithm used, see [ , corr_mobile_means.py]. Although the correlation between the seasonally adjusted values is undeniable, causality is not necessarily so: we have already discussed the possible causes of the increase in hospitalizations, and several have been hypothesized that have nothing to do with atmospheric pressure. Investigations of this causality and its possible explanations will be the subject of future research in the context of a collaboration with biomedical and experimental doctors.

Given the nature of the biological phenomenon that causes pulmonary embolisms and the possible correlation with environmental factors (see appendix B), it is reasonable to consider the analysis of the peaks of the time series of such data, and of their patterns, as a valid tool to look for some kind of correlation between the data. It was initially thought that a sudden pressure jump could cause the detachment of the thrombus, with the consequent appearance of the embolus (see appendix B). Let $A = a_1, a_2, \dots, a_n$ be a time series, let $w \in \mathbb{N}$ with $w > 1$ and $f \in \mathbb{R}$ with $f > 0$, and denote by $\mathrm{mean} : \mathbb{R}^w \to \mathbb{R}$ and $\mathrm{std} : \mathbb{R}^w \to \mathbb{R}$ the arithmetic mean and the standard deviation functions, respectively.
We define the set $\{P^+\}$ of positive peaks as

$$\{P^+\} = \{\, a_i \mid i \ge w,\; a_i - \mathrm{mean}(a_{i-w+1}, \dots, a_i) > f \cdot \mathrm{std}(a_{i-w+1}, \dots, a_i) \,\}$$

and, similarly, the set $\{P^-\}$ of negative peaks as

$$\{P^-\} = \{\, a_i \mid i \ge w,\; \mathrm{mean}(a_{i-w+1}, \dots, a_i) - a_i > f \cdot \mathrm{std}(a_{i-w+1}, \dots, a_i) \,\}$$

Once the time series has been transformed into a peak time series, by simply setting 1 and −1 as the values marking, respectively, positive and negative peaks, and 0 for all other values, it is possible to obtain a time series of patterns. This is achieved by analyzing all the consecutive n-tuples (with n equal to the length of the considered pattern) of the peak series (a sequence of −1, 0, and 1 values), and setting 1 for each occurrence of the pattern, 0 otherwise. Note that, for the chosen w and n, the occurrence of a pattern involves an interval of consecutive days: the day the pattern is found and the preceding days covered by the window. The figures obtained by running [ , pattern_hospitalizations.py] show a comparison between the incidence of some patterns of atmospheric pressure peaks and the number of daily hospitalizations. The chosen patterns are those revealing the greatest pressure changes over restricted periods. Notice how one of these patterns never occurs, from which we deduce that in the geographical area of study the atmospheric pressure does not change too quickly, or at least it did not do so in the period of interest.

As an illustration of the procedure on a short series of values with a window w: the sequence of the moving average is computed first; the absolute value of the difference between the last value of each interval and its moving average is then compared with the threshold, yielding the peak series and, from it, the pattern time series. Note that the same result would have been obtained even if some of the values had been swapped.

At least with the data in our possession, it is difficult to describe a relationship between the occurrence of a certain pattern (represented in the graphs by vertical orange segments) and the number of hospitalizations in the following days. For this reason, in order to better study the effect of pressure surges on the number of hospitalizations, we also used other tools (described below), which required semi-hourly pressure data. Almost all parameters of the procedure are easily modifiable, in particular:
• the meteorological data to be considered;
• the length of the intervals used to generate the time series of the variations;
• the values of w and f for the peak search;
• the length of the maximum translation between the hospitalization time series and the peak time series;
• the pattern of peaks to search for.
With the configuration we used to run our algorithm, a large number of correlations was examined. The algorithm returns a comparison graph between the time series, the dispersion graph, and the correlation indices of the series which, after correlation with the daily hospitalizations, has the highest Pearson index, and of the one with the highest Spearman index (both in absolute value). Since neither index reached a relevant value, the results are omitted. However, this failed attempt motivates the choice to continue the investigation focusing on atmospheric pressure values only, since neither temperature nor wind speed seems to have a particular correlation with the number of hospitalizations. In this study, the pressure measurements show that small, short, and continuous pressure variations are present.
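Before moving to the half-hourly analysis, a sketch of the peak-and-pattern machinery defined above; w, f, and the pattern are illustrative placeholders, and the pressure series is synthetic.

```python
import numpy as np

def peak_series(a: np.ndarray, w: int, f: float) -> np.ndarray:
    """Mark each day as +1/-1 when it deviates from the trailing w-day
    moving average by more than f standard deviations, else 0."""
    peaks = np.zeros(len(a), dtype=int)
    for i in range(w - 1, len(a)):
        window = a[i - w + 1 : i + 1]
        dev = a[i] - window.mean()
        if abs(dev) > f * window.std():
            peaks[i] = 1 if dev > 0 else -1
    return peaks

def pattern_occurrences(peaks: np.ndarray, pattern: tuple) -> np.ndarray:
    """1 where the trailing len(pattern) peak values match `pattern`."""
    n = len(pattern)
    out = np.zeros(len(peaks), dtype=int)
    for i in range(n - 1, len(peaks)):
        if tuple(peaks[i - n + 1 : i + 1]) == pattern:
            out[i] = 1
    return out

rng = np.random.default_rng(0)
pressure = 1013 + np.cumsum(rng.normal(0, 1, size=365))  # synthetic series
peaks = peak_series(pressure, w=5, f=2.0)
hits = pattern_occurrences(peaks, (1, -1, 1))            # example pattern
```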
In order to study the effects these variations could have on the incidence of pulmonary embolism cases, analysing the daily averages of atmospheric pressure was no longer sufficient. For this reason an analysis was carried out using the pressure values recorded every half hour (i.e., 48 records per day, provided by the LaMMA Consortium for the whole study period). In accordance with the hypothesis that pressure changes are determinant for human health, the total pressure variation during the day was calculated for each day of the sample interval. This methodology proved more effective than analyzing peaks and patterns. The choice of this methodology was also due to empirical observations: imagine a glass tube full of fluid with an impurity not occluding the lumen of the vessel, for example a gas bubble. To detach the impurity from the wall and then remove it, it is more effective to apply a series of repeated, delicate taps to the tube rather than a single strong shot. By analogy, it was thought that a series of repeated, small, and rapid changes in pressure was a more probable cause of the detachment of a thrombus from the venous wall than a single strong surge.

The calculation of the variations was done in the following way: for every day $d$, given the half-hourly values $d_1, d_2, \dots, d_{48}$, the daily pressure change $\Delta_d$ was calculated as

$$\Delta_d = \sum_{i=1}^{47} \lvert d_{i+1} - d_i \rvert$$

A graph was then drawn showing the averages of the variations $\Delta_d$ as a function of the number of daily hospitalizations, on the same day and in the three preceding days. Initially, only a single test year was considered as the data set. The result is shown in figure (the size of the points in the graph is directly proportional to the number of occurrences). This graph shows that, in the periods immediately preceding days with two or more hospitalizations, there is an average pressure variation $\Delta_d$ higher (by . %) than the average variation over the whole period considered. This result seems to confirm the hypothesis that a greater number of pressure changes corresponds, in the short term, to an increase in cases of pulmonary embolism. For better confirmation, the procedure was repeated on all the data in our possession, i.e., on the whole multi-year period. The analyses we carried out seem to confirm only partially what was found: the graph shown in figure is more flattened, and shows an increase in the variation in the periods preceding days with more hospitalizations of only . %. All this is implemented in the script [ , daily_variation.py]. Only the graphs obtained by analyzing the averages over the time interval that produced the best results over the whole period, i.e., four days, have been reported. For completeness, the results obtained for some of the other ranges analysed are given in table . Although with more or less incisive results, it should be noted that an increase is present independently of the amplitude of the range and the length of the sample analysed.

[table : percentage increase in pressure changes in the periods preceding days with two or more hospitalizations, relative to the average of the variations over the whole period.]
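A minimal sketch of the daily-variation analysis described above: the sum of absolute half-hourly pressure changes per day, then the average of that quantity over the days preceding days with two or more hospitalizations. The data, the lag of four days, and the thresholds mirror the text; everything else is a synthetic stand-in.

```python
import numpy as np

def daily_variation(half_hourly: np.ndarray) -> np.ndarray:
    """half_hourly: shape (days, 48). Returns Delta_d for each day."""
    return np.abs(np.diff(half_hourly, axis=1)).sum(axis=1)

rng = np.random.default_rng(0)
pressure = 1013 + rng.normal(0, 0.5, size=(365, 48)).cumsum(axis=1)
admissions = rng.poisson(0.6, size=365)  # synthetic daily counts

delta = daily_variation(pressure)
lag = 4  # days preceding the admission day, one of the windows examined
busy_days = np.where(admissions >= 2)[0]
preceding = [delta[max(0, i - lag):i].mean() for i in busy_days if i > 0]
increase = (np.mean(preceding) / delta.mean() - 1) * 100
print(f"increase over the whole-period average: {increase:+.1f}%")
```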
Finally, it can be observed that, by extending the length of the sample (up to three years), the increase is always scaled down. Having increased, at least for the pressure, the sampling frequency, we now have a much larger data set. This made it possible to apply the fast Fourier transform to compute the spectrograms of the pressure time series and of the hospitalization time series [ ]. In order to have two series of equal length and to compare the spectrograms more easily, the series of hospitalizations was also expanded, by a formal artifice, to semi-hourly samples. Since the distribution of hospitalizations within the day was unknown, a homogeneous distribution was assumed: each daily observation was divided into 48 semi-hourly observations, each equal to 1/48 of the daily hospitalizations. The two spectrograms obtained are those in figures and , where the frequency on the x-axis is expressed in 1/days. Note that, to make the graphs more readable, in both figures the value of the first frequency (0 Hz) was manually set to 0; this was done because, given the properties of the signals treated, it would normally have a very high value. To see the algorithm used in detail, and the graphs with the first frequency values left unchanged, see [ , fft.py]. No relationship or analogy seems to emerge by comparing the two time series in the frequency domain. This could also be due to the method by which the series of hospitalizations was expanded. Therefore, no particular conclusion can be drawn with a sample of this size.

Many of the attempts we made led to no statistically significant results, thus confirming the difficulty of the existing literature in giving a definite answer on the subject. Despite this, the research carried out highlighted the strong correlation between the moving averages of atmospheric pressure and those of the number of hospitalizations discussed above. The existence of this correlation, from the results obtained, is undeniable for the sample studied. As already mentioned, however, a possible causal link between the time series of pressures and that of hospitalizations is far from certain: the factors that could have caused the increase in the number of hospitalizations are manifold, and many of them probably do not concern meteorological factors at all (for example, the improvement of diagnostic tools). For this reason, the result obtained is solid, but more studies in this field, analyzing different samples, need to be started. It would be interesting to see whether, in a geographical area (the same or another) showing, over a period of time similar to the one studied, an increase in the annual moving average of atmospheric pressure, a decrease in the moving average of the number of hospitalizations occurs. In this case, many of the possible causes of a different nature could be ruled out, thus obtaining a more certain answer. Such efforts should be a priority in any future development, given the severity of the disease and the difficulty of its diagnosis. Another significant finding is the existence of an increase in the number of hospitalizations in the days following short-to-medium periods of time characterized by a high number of half-hourly pressure changes, observed above.
the results obtained seem to give credit to the hypothesis that the physical phenomenon of thrombus detachment is the effect of very small, recurring pressure variations. however, unlike the moving average, this is not an unequivocal result, considering that the result over the whole - period is much more contained than the one related to the single test year only. in this sense it would be interesting to confirm or deny our results by studying what happens over longer periods of time, covering several years. the study of the spectrograms obtained by the fourier transform will undoubtedly be at the centre of future developments: the data in our possession (especially hospitalization data) were found to be too few to carry out analyses of this type, but better results could emerge by repeating the procedure on larger datasets. in conclusion, although further confirmations are needed, there seems to be some kind of correlation between pulmonary embolism and meteorological parameters; in particular, atmospheric pressure seems to be more relevant than temperature and wind speed, the latter of which, moreover, is strongly related to pressure variations. the min-max normalization is given by
$$x'_i = a + \frac{(x_i - \min)(b - a)}{\max - \min},$$
where $x_i$ is a generic value of the random variable $x$, $\min$ and $\max$ are respectively the minimum and maximum values assumed by $x$, and $[a, b]$ is the new range within which the values of the random variable will be scaled. this transformation is implemented in an efficient and intuitive way in the method normalise(self, feature_range), where the feature_range pair, which if not specified is equal to (0, 1), indicates the extremes a and b. variation_series(self, length: int, unsigned: boolean), executed on an object of type series representing a historical series of length n, returns an object of the same type representing a historical series of length n − length + 1, where the value at instant t contains: • in the case unsigned = false (the default option), the difference between the first and the last value of the interval [t, t + length] of the starting time series; • in the case unsigned = true, the sum of the absolute values of the differences between each value of the starting time series and the next one, within the interval [t, t + length]. mobile_mean(self, window: int, ws: list), executed on an object of type series representing a time series of length n, returns an object of the same type representing a time series of length n − window + 1, where the value at instant t contains the moving average weighted according to the weights contained in ws over the interval [t, t + window]. if the weights are not specified, the arithmetic moving average is computed. eventuality(self), executed on an object of type series representing a time series of length n, returns an object of the same type representing a time series of the same length, where the value at instant t contains: • if the starting time series is also at instant t; • otherwise. peakseries extends series, with the additional method pattern_series(self, pattern: tuple), generating the time series containing the occurrences of a given peaks pattern. objects of this class are generated by an object of type peakmaker by means of the method get_peaks(self). a further class, composed of two objects of the series class, provides different methods to calculate correlation indices between the two time series, to draw graphs of various types, or to translate one series with respect to the other.
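the behaviour of these methods can be condensed into a short python sketch; this is a reconstruction from the descriptions above, not the actual weatherpe code, and any detail beyond those descriptions (such as the sign convention of the signed variation) is an assumption:

class Series:
    """Historical series: a list of values plus a descriptive label."""

    def __init__(self, values, label=""):
        self.values = list(values)
        self.label = label

    def normalise(self, feature_range=(0, 1)):
        # min-max normalization onto the interval [a, b]
        a, b = feature_range
        lo, hi = min(self.values), max(self.values)
        return Series([a + (x - lo) * (b - a) / (hi - lo) for x in self.values],
                      self.label)

    def variation_series(self, length, unsigned=False):
        out = []
        for t in range(len(self.values) - length + 1):
            w = self.values[t:t + length]
            if unsigned:
                # sum of absolute value-to-next-value differences
                out.append(sum(abs(w[i + 1] - w[i]) for i in range(length - 1)))
            else:
                # difference between the first and last value (sign assumed)
                out.append(w[-1] - w[0])
        return Series(out, self.label)

    def mobile_mean(self, window, ws=None):
        ws = ws if ws is not None else [1.0] * window  # arithmetic mean by default
        total = sum(ws)
        out = [sum(wgt * x for wgt, x in zip(ws, self.values[t:t + window])) / total
               for t in range(len(self.values) - window + 1)]
        return Series(out, self.label)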
appendix b: definitions of medical and meteorological terms.
medical terms. a thrombus, colloquially called a blood clot, is a semisolid substance consisting of cells and fibrin that can form anywhere in the circulatory system, such as arteries and veins, and is attached to the inner wall of the blood vessels. a clot is a healthy response to injury intended to prevent bleeding, but when a clot obstructs blood flow through healthy blood vessels it can become the leading cause of some severe pathologies, in which case we speak of a thrombus [ ]. when thrombosis occurs within deep veins, usually at the level of the lower limbs, we speak of deep vein thrombosis (dvt) [ ]. an embolism occurs when a thrombus is detached from the wall of the blood vessel to which it is attached. the thrombus, or some of its parts, can enter the blood circulation until it stops in a blood vessel smaller than the source vessel, reducing the blood supply to the downstream tissues [ ]. pulmonary embolism (pe) is a blockage of an artery in the lungs by a clot that has moved from other districts of the body through the bloodstream (embolism). pe usually results from a blood clot in the leg that travels to the lung. the main signs of pe include low blood oxygen levels, rapid breathing and rapid heart rate, which cause circulatory and respiratory problems. in most cases, pe is preceded by deep vein thrombosis (dvt). pe and dvt share both their risk factors and their triggers; among these, advanced age and some pathological conditions have been found to be the main risk factors. in italy, pe occurs in one patient per and is the cause of about % of hospital deaths, which rise to % if it is not treated correctly [ ]. pulmonary embolism and deep vein thrombosis are two closely related pathological manifestations, which can be described by a single pathological process known as venous thromboembolism (vte) or thromboembolism [ ]. meteorological terms. atmospheric pressure measures the total weight exerted on a horizontal unit surface by the air column above it [ ]. atmospheric pressure is measured with an instrument called a barometer. generally, atmospheric pressure is expressed in atmospheres (atm) or millibars (mbar). however, neither of these two units of measurement is the one adopted by the international system of units, which instead adopts the pascal (pa) for the measurement of pressure [ ]. in this study the adopted unit of measurement of atmospheric pressure is the millibar; one millibar corresponds to 100 pa. the average daily atmospheric pressure is the arithmetic mean of all atmospheric pressure values recorded over a full day, normally from midnight to midnight. recordings are made at regular intervals throughout the day, discretizing the continuous pressure signal into a fixed number of registrations. although atmospheric pressure values in a given area tend to remain the same over the long term, these pressure values can change from day to day or from month to month due to weather phenomena. different average values of annual pressure are also possible due to climate factors.
• correlation between weather and covid- pandemic in
• humidity and latitude analysis to predict potential spread and seasonality for covid-
• high temperature and high humidity reduce the transmission of covid-
• barometric pressure and the incidence of pulmonary embolism
• venous thromboembolism in denmark: seasonality in occurrence and mortality
• meteorological parameters and seasonal variations in pulmonary thromboembolism
• serie storiche economiche
• the proof and measurement of association between two things
• research design and statistical analysis, second edition
• medicina preventiva e riabilitativa, padova, piccin
• trombosi venosa profonda, manuale msd versione per i pazienti
• malattie dell'apparato respiratorio
• bureau international des poids et mesures, the international system of units (si)
this appendix is intended as a tool to help the reader understand some scripts and functions of the python project weatherpe. the project was created to carry out the analysis of this study, and was designed to be modular and applicable to the analysis of the other statistical samples mentioned in the document. this appendix is not intended as documentation or a full explanation of the code; for that, see the code itself [ ]. the project is divided into four folders:
• res: contains the data files used;
• weape: contains the classes and functions used;
• launchers: contains executable code to generate all results included in this document;
• tests: contains tests of methods and functions.
the file series.py implements the series class, which has been used to represent historical series. this class includes only two attributes, values and label, storing respectively a list containing the values of the series and a string containing its name. below, the most important methods are documented, for the purposes of understanding this document. due to the various meanings it assumes in different scientific disciplines, the concept of normalisation always causes some ambiguity unless it is explicitly stated [ ]. for this project, the term normalization has been used to mean a transformation of random variables, also known as min-max normalization, described by the formula given above for the normalise method. key: cord- -bxpmxvkk title: a moment in time: leisure and the manifestation of purpose date: - - journal: int j sociol leis doi: . /s - - - sha: doc_id: cord_uid: bxpmxvkk there has been little consideration given to understanding the concept of time within leisure. just what is time when considered as an ordering mechanism of our leisure behaviors? most leisure research has approached the concept of time through a largely western, monochronic understanding which emphasizes time for its linear ordering and quantifiable qualities. the dominance of this implicit understanding of time is also notably influenced by pressing ideologies that define western society, such as neoliberalism, which can distort our personal discourse with our own time: we see it as a commodity – something to be used efficiently and to be invested. what this thought-piece aims to do is consider the existential properties of time, particularly the "moment," as an opportunity to "achieve [the] total realization of a possibility" as illustrated by lefebvre. "there is a wide discrepancy between time as it is lived and time as it is considered." -edward t. hall ( ) "time" has always been a central component to understanding leisure.
the narrative of modern, industrialized nations with -h work weeks led to the emergence of our dualistic view of the work and leisure that comprise our lives, and thus early concepts of leisure were defined by time outside work, instilling the centrality of time in how we understand leisure (godbey ). as a result, most research on time in the field of leisure studies has focused on the use of time, typically through time diaries (robinson and godbey ) and other self-report time-use studies (bureau of labor statistics ; godbey ). building from this, then, has been the utility function of time, where the purposeful use of it is considered an essential ingredient to a life well-lived in the pursuit of the "american dream" (hunnicutt ). time has been implicitly explored through the frameworks of serious leisure (e.g., stebbins ), recreation specialization (e.g., scott and shafer ), and enduring involvement (e.g., mcintyre and pigram ). in each of these frameworks, time manifests as forms of commitment, or the behavioral components that tie individuals to routines of leisure behavior (habitual use of time) and a level of involvement indicative of devotion to an activity (intentional use of time). other scholars have looked at the segmented phases of leisure experiences and their effect on future participation (harmon and dunlap ), as well as the continuation of leisure experiences after participation (scott and harmon ), both key aspects of understanding the role of temporality, particularly as it relates to continuity in leisure repertoires. likewise, with the interest in intentional leisure experience design, there has been more recent attention paid to the temporal episodes that occur within experiences and the methods to study these across the duration of an experience (e.g., the experience sampling method; see ito et al. ; quinlan cutler et al. ; zajchowski et al. ). though there is no one guiding theory of time, there has been little consideration given to understanding the concept of time within leisure. just what is time when considered as an ordering mechanism of our leisure behaviors? time has been theorized as both absolute (merrifield ) and abstract (hall ), but our internalization of it is often entirely subjective, as something that can be transcended (berdyaev / ) or, in many cases, as something that has control over us (rose ). what this thought-piece aims to do is consider the existential properties of time, particularly the "moment," as an opportunity to "achieve [the] total realization of a possibility" (lefebvre, as cited in merrifield , p. ). moments, according to the work of henri lefebvre, are "the delirious climax of pure feeling, of pure immediacy, of being there and only there," and a desire to "endure" in the ephemeral properties of bliss, revelation, and connection (merrifield , p. ). while moments evaporate as a requirement of their fleeting qualities, their effects can endure. moments are understood as the opposite of alienation, which is reflected as "absence, a dead moment empty of critical content"; the lefebvrian moment signifies "presence, a fullness, [being] alive and [feeling] connected" (merrifield , p. ; emphasis original). while presence has certainly been explored in leisure through the concept of flow (e.g., csikszentmihalyi ), ruminations in this area have focused solely on absorption into the leisure activity, absent any significant consideration of the temporal component aside from simply "losing track of time."
that is, most leisure research has approached the concept of time through a largely western, monochronic understanding which emphasizes time for its linear ordering and quantifiable qualities. the dominance of this implicit understanding of time is also notably influenced by pressing ideologies that define western society, such as neoliberalism, which can distort our personal discourse with our own time: we see it as a commodity – something to be used efficiently and to be invested. metaphors are flippantly attributed to the experience of time without any depth of consideration in definition or description as it relates to the temporal aspects of experience: we "speak of it as being saved, spent, wasted, lost, made up, crawling, killed, and running out," but we rarely explore the complexities that lie beneath the surface (hall , p. ). as hall ( ) stated, "time is so thoroughly woven into the fabric of existence that we are hardly aware of the degree to which it determines and coordinates everything we do, including the molding of relations with others in many subtle ways" (p. ). the essential value of time can be found in "the manifestation of purpose" (berdyaev / ), something often sought through, and attributed to, meaningful leisure experiences. thus, the moment, in leisure, is the point of departure, the pivotal processing of experience where we make decisions in the existential reckoning of our lives in the pursuit of transcendence. below, three cases are examined through the lens of lefebvre's moments in time. each demonstrates the notion that moments pass, yet the imprint they leave on us can go on to define the trajectory of our lives and meaning in life. there are generations of americans who remember exactly where they were when president john f. kennedy and dr. martin luther king, jr. were assassinated, when the space shuttle challenger blew up, or when the twin towers fell on / ; all moments that were jarring and decentering to the life that followed. building on lefebvre's work, elden et al. ( ) state that a moment not only defines a form, but a form is also defined by it. moments can redefine or confirm one's trajectory. moments are recognized through individual-level consciousness, but those recognitions can take place in the broader tapestry of social consciousness and collective memory. that a moment could either confirm, deny, or disrupt routines, understandings, or beliefs suggests that it can lead to shockwaves that reorder social life, even if only for a finite period of time, but in some instances permanently. much like being in a multi-car accident, there is a cascading effect from the point of impact on all involved that can extend beyond just those onsite, especially if the injuries of those involved are serious. the injuries sustained, and the rehabilitation required to follow, or the long-lasting effects of the permanence of injury, can alter the time which is still yet to come. in the moment, a blink of the eye, whole lives can be forever changed. just the same, moments can be pivotal and positive in terms of how they reorient life, interests, relationships, goals, and meanings. moments can be ambiguous, their importance not immediately recognizable until the series of events that follows has completed its evolution or escalation. elden et al. ( ) state that "there is no moment except in so far as it embraces and aims to constitute an absolute" (p. ).
but that "absolute" might not be recognized immediately for lack of information or context, might only come into being through reflection on historical events, or may not be understood until the sequences that initiated it, or follow it, are put into place. in what follows, we explore the state of altered time through life during the covid- pandemic and the moment "it all changed"; the death of george floyd as the breaking point of public consciousness after centuries of injustice culminating in feelings of "enough"; and the moment when one recognizes their life is no longer lived as their own when there is a loss of control of one's time through incarceration. the surreality of life, both in the united states and across the globe, once covid- forced itself on humankind was quite abrupt, even though there were signs indicating we should have been better prepared. the everyday, taken-for-granted "normalcy" of life was disrupted, toppled on its head. in the united states alone, millions lost their jobs; those who did not, in many instances, had to work from home, where they were met with competition for their time. for many, it came in the form of children who were now out of school, sent home to continue their "education," albeit with parents playing a significant role in their instruction (burk et al. ). still others were not so lucky, as the realization of just what and who was "essential" became very clear (rose ). those on the "front line" were forced to navigate the precarious circumstances of the new world order, as daily mis/information and ever-evolving updates from scrambling medical professionals and beleaguered governmental bureaucrats tried to keep up with what had become an almost minute-by-minute news cycle, seemingly always two steps behind the best practices for life during a global pandemic. our lives, almost literally, changed moment-by-moment as new information surfaced. realization of a "new normal" also came through moments that captured changes in the most mundane aspects of everyday life: scrolling tv listings for live sports and finding none, negotiating a transaction at the post office through a makeshift protective barrier, or participating in telehealth appointments instead of visiting a doctor's office. all of these examples illustrate lefebvre's notion that moments can be those events that simply register the possibility of a life that could look, feel, and be different. however, the case of covid- is not only about the moments that signify the new normal; it in itself became a disrupter of how time is spent, with the oft-used descriptor "covid time." with the shake-up in daily routines came a perceived glut of "free time," in large part because most social and recreational outlets and activities were halted for the foreseeable future. entertainment, edification, and exercise were all now fully our responsibility, whether we liked it or not (simpson ; williams ). slowly but surely, we were forced to recognize that the hours in our days were our own to put in order and make work for us in the face of global uncertainty. numerous moments presented themselves to us as opportunities for engagement and action; still others seemed to give us permission to not take advantage of our situations, retreating to the couch to binge-watch netflix, or engaging in passive activities, and not making accommodations to otherwise previously-held healthy leisure routines. some had few choices due to other health or social forces (son et al.
), and for many, the moments that followed lockdown likely resulted in a sense of isolation, loneliness, or alienation (palgi et al. ). but all were opportunities for enacting agency, for recognizing the potential to "seize the moment," to make the best of a situation that continued to remain unclear. through the long, cruel, and unjust history of chattel slavery, jim crow, segregation, and structural racism in the united states, there have been a series of moments that signaled atrocity and horror and the need for change. central in this rising awareness of injustice and brutality is the rash of police killings of unarmed, and oftentimes innocent, black men in the twenty-first century (staggers-hakim ). while public shock and sadness simmered to the top with every black man and woman lost to senseless state-imposed violence, it was the killing of one man, george floyd, on may 25, 2020, where the anger boiled over in the broader public, and a moment, captured in an eight minute and 46 s video, ignited action. while the black lives matter movement had had a significant presence since the murder of michael brown on august 9, 2014 (see rickford ), with the death of george floyd it took on a new surge of power, brought on by a public who had finally had enough; the moment for action and change was ripe. the departures from normal routine brought about by covid- intersected with the moment triggered by floyd's death; it created the opportunity for people to seize that altered state of time to capitalize on the historical moment and take to the streets in protest (abbady ). for many, out of work and enraged by police violence, and without structured outlets for leisure, activism and protest became a renewed forum for solidarity and meaning making: the streets became the surrogate for ball fields, basketball courts, dancefloors, and festivals; leisure blended with social justice to create the forum for revolution (arora ). it is the temporal relations that occur in shared space where "societal sense-making" takes place (poell , p. ), the communal processing of generations of injustices in a collective call to action instigated by a pivotal moment in history. incarceration is the ultimate form of alienation (cochran and mears ). there is an old adage that when you go to prison you only "do" two days: the day you go in, and the day you come out. the rest of the time spent behind bars is not yours. not only does being sentenced to jail or prison alienate the inmate from their family, work, and leisure, but it also alienates them from society, and the self (barry et al. ). while many convicted of crimes may have been living a life of alienation before arrest and sentencing, it is only fully realized once the individual is locked up. elden et al. ( ) stated that, metaphorically, "the alienated person locks himself in the moment; he makes himself its prisoner" (p. ); but this is likely literally true as well. either at the moment of capture or sentencing, but certainly once an inmate crosses into government confinement to serve their sentence, it is a series of moments, beginning with the decision to engage in a crime, through sentencing, to incarceration, where past decisions in life can be seen as part of a greater series of decisions, outlining how one loses control of their own time (severson et al. ). prisons and jails are the epicenters of unfree time, the ultimate paradox and contradiction where agency is removed and inmates are subject to the impositions of state-sanctioned mandates (shaw and elger ).
the same can be said of leisure in this blended temporal-spatial environment. while inmates have the opportunity to engage in certain types of "leisure" due to an excess of "free time," they have little control over much of it, and more often than not that leisure is used primarily to help pass time (johnsen and johansen ). while an individual's initial decision to commit a crime was, in most instances, the key moment in their incarceration, they are constrained in their decision-making ability due to their imprisonment. moments of opportunity are few and far between, and the onus of moments shifts to maintaining habits and making smart decisions until they are released and given back their agency and ability to respond to future moments and sculpt their lives as they see fit. time, in a western context, is an intentionally "disciplining instrument meant to indoctrinate a particular set of arrangements and values among those operating within its purview" (saul , p. ), something that is evidenced equally, but differently, through the covid- and incarceration examples. frequently, time has the appearance of a "homogenous medium" evoking the endurance properties of humans as they work, socialize, reflect, and recreate, but in reality time is experienced heterogeneously, as a multitude of series in which we have the opportunity to process our thoughts and actions in situ (bergson / ). rojek ( ) said that leisure is shaped by history and that all theories of leisure must be situated in time, something that is captured by the social unrest, activism, and protests related to police brutality. yet, often, what is perceived as "free time" is infused with anxiety-inducing properties, causing individuals to seek out some form of instrumental or practical structure within which to situate their leisure (batchelor et al. ). in our intentional use of "excess" free time at home during covid- , in the justice movements we collectively try to develop through social protest, and in the routines we create for ourselves while in confinement, our decisions in the moment redirect us back to regimented patterns of behavior that, while familiar, if not necessarily comfortable, can also be limiting in their predictability to our personal evolution. as we search for "the manifestation of purpose" (berdyaev / ) in our lives through leisure, whatever the context, we must be reminded of the importance of the opportunity to "achieve [the] total realization of a possibility" when presented in the moment (lefebvre, as cited in merrifield , p. ). leisure research is uniquely suited to explore the complexities and intricacies of human experience and behavior and their resulting impact on identity, perception, and potential. while the field has explored the temporality of leisure experiences for more than half a century (cf. clawson and knetsch ), there has been scant effort in the dissection of the units of experience, which are often richly rooted in histories both personal and social. relatedly, the sociopsychological processing of experience is inextricable from time, and therefore there must be a reconstituted effort to investigate how moments in time not only define the form of a leisure experience, but how the form of leisure is also defined by time (elden et al. ). phenomenological investigation and oral histories, common methodologies in leisure studies, are apt for generating more intimate knowledge of the micro-temporal moments in leisure.
leisure scholars can approach investigations of time, particularly the moment, by placing more emphasis on points of origin, integral instances, realizations, calls-to-action, ultimatums, and the role of personal responsibility and accountability in leisure. some expedient examples of the exploration of a leisure moment could be the point when one begins to feel a sense of accomplishment or competence (e.g., breaking a personal record, learning to play the guitar capably), or when one reaffirms their purpose (e.g., seeing the sunrise over a lake, being acknowledged for one's contributions). moments are not static, and they are not uniform in their duration, but they have the potential to be life-changing, and it is the duty of the field of leisure studies to understand how. as a final posit, this paper makes an explicit call for more attention to be paid to the concept of time within leisure research. simply, researchers interested in self-realization through leisure should revisit assumptions regarding time. this paper approached time through a lefebvrian lens using his theory of moments; however, future research can draw from a range of philosophers, theoretical physicists, and theologians concerned with time, purpose, and meaning making in life.
• how to protest safely in a pandemic
• usurping public leisure space for protest: social activism in the digital and material commons
• functional disability, depression, and suicidal ideation in older prisoners
• precarious leisure: (re)imagining youth, transitions and temporality
• solitude & society
• time and free will: an essay on the immediate data of consciousness
• pandemic motherhood and the academy: a critical examination of the leisure-work dichotomy
• economics of outdoor recreation
• social isolation and inmate behavior: a conceptual framework for theorizing prison visitation and guiding and assessing research
• flow: the psychology of optimal experience
• history, time and space
• leisure matters: the state and future of leisure studies
• the dance of life: the other dimension of time
• the temporal phases of leisure experience: expectation, experience and reflection of leisure participation
• free time: the forgotten american dream
• a comparison of immediate and retrospective affective reports in leisure contexts
• serving time: organization and the affective dimension of time
• recreation specialization re-examined: the case of vehicle-based campers
• henri lefebvre: a critical introduction
• the loneliness pandemic: loneliness and other concomitants of depression, anxiety and their comorbidity during the covid- outbreak
• social media, temporality, and the legitimacy of protest
• the experience sampling method: examining its use and potential in tourist experience research
• black lives matter: toward a modern practice of mass struggle
• time for life: the surprising ways americans use their time
• the labour of leisure: the culture of free time
• "never enough hours in the day": employed mothers' perceptions of time pressure
• biopolitics, essential labor, and the political-economic crises of covid-
• temporality and inequity: how dominant cultures of time promote injustices in schools
• extended leisure experiences: a sociological conceptualization
• recreation specialization: a critical look at the construct
• prisoner reentry programming: who recidivates and when
• improving public health by respecting autonomy: using social science research to enfranchise prison populations
• mass hysteria, manufacturing crisis and the legal reconstruction of acceptable exercise during a pandemic
• promoting older adults' physical activity and social well-being during covid-
• the nation's unprotected children and the ghost of mike brown, or the impact of national police killings on the health and social development of african american boys
• amateurs, professionals, and serious leisure
• from gym rat to rock star! negotiating constraints to leisure experience via a strengths and substitutability approach
• the experiencing self and the remembering self: implications for leisure science
key: cord- -x e cz a title: analysing the behaviour of doubling rates in major countries affected by covid- virus date: - - journal: journal of oral biology and craniofacial research doi: . /j.jobcr. . . sha: doc_id: cord_uid: x e cz a abstract background and aims: sars-cov is a novel coronavirus that is transmitted to humans through zoonosis and characterised by mild to moderate pneumonia-like symptoms. the outbreak began in wuhan, china, and has now spread on a global scale. doubling time is the amount of time taken for a particular entity (that tends to grow over time) to double its size/value. this study's prime target is to develop relationships between the variation in the doubling time of the number of cases of the covid- virus and the various socio-economic factors responsible for it. these frameworks focus on relationships rather than relational data, so here, using graph structures, we have generated different patterns of doubling rates and drawn inferences from them. methods: only the major countries affected by the covid- virus are studied, and datasets of the growth of cases were accordingly collected in the form of spreadsheets. the doubling rate is determined by calculating the doubling time for each day and then plotting these datasets in graphical form. results: the doubling time of various countries is vastly affected by the preventive measures taken and the success of lockdown implementation. higher testing rates helped identify the hosts of the virus; thus, countries with mass testing have lower doubling rates. countries where the virus spread started earlier had less time to prepare themselves, and while they were in the initial stages, the doubling time suffered. a sudden dip in doubling time is due to a large gathering of people or an ineffective lockdown; thus, people's attitude plays an essential role in affecting the doubling time. conclusion: the relationships between the spread of the virus and various factors, such as dissimilarities in ethnic values, demographics, governing bodies, human resources, economy, and tourism of major countries, are analysed to understand the differences in the virus's behaviour. this fast-moving pandemic has shown various defects and weaknesses in our healthcare systems, political organisations and economic stability, and gives numerous lessons on how to enhance the ways that global societies address similar epidemics. one component that shares the same denominator is the necessity of requisite healthcare systems and medical staff; still, a shortage of this component does not necessarily mean that taking the necessary steps would be ineffective. transmission of covid- to humans by zoonosis reveals that the global community is required to be observant concerning similar pandemics in the future. in december, an outbreak with pneumonia-like symptoms broke out in wuhan, china.
the natural hosts of this virus are considered to be bats, yet other species are also regarded as sources. epidemiologists have not yet accumulated enough information to conclude how the virus spreads and affects patients' bodies on a cellular level, but the figures indicate that the disease's reproduction number lies between and . covid- was announced as the name of this new disease on th february; the disease is caused by severe acute respiratory syndrome coronavirus (sars-cov- ). although similar relations were seen with sars-cov and mers-cov after genomic characterisation was done, the novel virus is more aggressive than the other coronaviruses. so far, confirmed cases are touching million, with more than . million deaths worldwide. preliminary data from the eu/eea show that around - % of confirmed covid- patients are hospitalised, and % are in severe condition. for patients aged above , or those having other medical conditions, there is an increase in hospitalisation rates [ , ]. the incubation period for the disease is around days, with a high possibility of symptoms showing at . days [ ]. the covid- outbreak is putting a massive strain on societies due to the considerable mortality and morbidity, the profound impact on healthcare, and the societal and economic harm that comes with the physical distancing measures. countries vary in geography, economy, culture, tourism, healthcare, education and leadership; several such factors are responsible for altering doubling rates, which can explain why the outbreak in a few countries has proceeded at an alarming rate. the developing countries have an immense amount of air traffic (easing the spread of foreign diseases inside the country) but have overpopulated cities and underfunded healthcare systems; thus, in the long term, these countries may observe a slight increase in the doubling rates and show an exploding number of cases [ ] [ ] [ ] [ ]. the measures taken by the governing bodies are also an essential factor in the coronavirus's behaviour in different countries. we intend to add value to this discussion by analysing the doubling rates of major countries and drawing inferences that can act as a resource for containing the outbreak of the covid- virus. problem definition and research objectives of the paper: the main focus of this paper is identifying the doubling rate of the number of covid- positive cases. this analysis aims to assess the relationship between the variations in the doubling rates of various countries and factors such as geography, culture, government, economy and tourism. our objective is to identify these relations, draw patterns and accumulate the ones which seem effective worldwide. scope of the paper: the study focuses on drawing insights from publicly available datasets and statistics. the data considered for the study is the number of covid- positive patients for each country, and spreadsheets are used to accumulate the data on a single platform. the study's depth is limited to analysing the current factors responsible for altering doubling rates through graphical representation, and will not cover any sort of data forecasting. the study will be limited to exploratory data analysis, data analysis for correlation and cause-and-effect relationships, bivariate analysis, and data visualisation, rather than prediction; it will not cover any kind of data forecasting. the datasets used in the study are publicly available and taken for eight major countries.
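to make the method concrete, the per-day doubling time can be derived from cumulative case counts as in the short python sketch below; this is an illustrative reconstruction (the study itself used spreadsheets and r), and the assumption that counts grow day by day is ours:

import math

def doubling_times(cumulative_cases):
    """Doubling time (in days) for each day, from cumulative case counts.

    Uses t_d = ln(2) / ln(N_t / N_{t-1}), i.e. the time cases would need
    to double if the current daily growth rate persisted.
    """
    times = []
    for prev, curr in zip(cumulative_cases, cumulative_cases[1:]):
        if prev > 0 and curr > prev:
            times.append(math.log(2) / math.log(curr / prev))
        else:
            times.append(float("inf"))  # no growth: doubling time undefined
    return times

# example: doubling_times([100, 120, 150, 160]) -> [~3.8, ~3.1, ~10.7]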
microsoft excel is used to gather the data on a single platform. the programming language used for converting the data into graphical form is r (designed explicitly for statistical computing and graphics). data used: www.worldometers.info/coronavirus is used as the data source for accumulating the datasets for each country. in our study, epidemic doubling time refers to the sequence of intervals in which the number of covid- cases doubles in value. it is an important factor in determining the rate at which the virus is multiplying in various countries. figure shows that, days after the first infected patient was registered, the doubling rate was still at an alarming rate of hours. on st march, the state of são paulo issued a complete lockdown, and all sorts of non-essential services were closed down for two weeks starting from rd march. this lockdown somewhat helped to improve the doubling rate from days to . days, as most cases were in the state of são paulo. however, as the cases spread, the doubling time suffered, because the country's health system is underfunded. president jair bolsonaro is strictly against lockdown, which can be seen on the graph as numerous rises and dips. the increase in the doubling time is due simply to the citizens' own precautions, and the country may suffer in the long term. the country's test positivity rate is %, which suggests that the increase in doubling time may be an artefact of limited testing. in china, initially, the government did not act quickly enough and even punished doctors who sounded the alarm, which caused initially low doubling times and also allowed the virus to spread. figure shows the rapid improvement in the doubling time in china from day . this increase can be attributed to the government's vigorous measures to reallocate a vast chunk of its healthcare system to respond to the outbreak's centre and to build new facilities specifically for the patients. the government also made testing free of charge, even if the results were negative. it took quick action to alert the public about infection symptoms, to isolate confirmed cases, and to track their close contacts to find the origins of infection clusters. on day , china updated its data with , new cases, which can be seen as a sharp dip in the graph. being the epicentre, china has the most knowledge and experience in dealing with the virus, which also acts as an essential factor in the doubling time's steady rise. china has now successfully contained the epidemic, with the doubling time reaching more than days. germany recorded its first case earlier than italy, but its doubling time was far better than that of other european countries, as seen in figure . the initial low value of doubling time is due to the german carnival, a hotspot for the virus's early spread. however, comprehensive mass testing and an associated quality healthcare system are the main reasons behind the success in improving the doubling rate. germany has successfully kept the doubling rate low through a well-thought-out strategy and an adequately funded healthcare system with strong top leadership support. the country has tested far more people than other countries, which has allowed authorities to slow the transmission of the disease by isolating known cases while they are infectious. therefore, health officials understood the situation early on and took the required measures; this can be seen as a steady increase in the graph from day . germany has the building blocks of strong public trust and smoothly functioning leadership.
the improvement in doubling time also displays the significance of governing bodies and transparent data in controlling the virus's extent. thus, the country managed to increase its doubling time at a significant rate. earlier, india was only testing citizens with travel history, which was then broadened only to symptomatic cases. thus, not adopting extensive testing (which helps find mild and asymptomatic cases) allowed the virus to spread across the vast majority of the country. on day , the government of india issued a nationwide lockdown. as we can see from figure , starting from day , i.e. from the th march, there has been an improvement in the doubling time. this is directly related to the lockdown issued by the government, allowing for the incubation period. the lockdown was put in place when the number of positive covid- cases in india was around . the lockdown slowed the pandemic, by th april, to a rate of doubling every six days, and by th april, to a rate of doubling every eight days. another factor responsible for mass spreading is the massive economic migration in the country: migrants in lockdown were forced to return to their homes (many on foot). although the lockdown was implemented early on, the governing bodies were slower in developing an effective strategy to contain the spread of the virus. the slow response can worsen the situation, as mass testing and contact tracing are not being adopted on a mass scale. poor health infrastructure has also contributed to the lowering of the doubling rate. as seen in figure , the starting days of the coronavirus spread in iran show no improvement in the doubling time, mostly due to the government's light response; the country even held nationwide parliamentary elections on st february. the president called the outbreak a conspiracy by enemy countries to shut down the nation, showing no sign of declaring a lockdown until the conditions got out of hand. after the cases hit more than , the country went under lockdown, and it showed a slight improvement in the doubling rate from day to day . the country then suffered a second wave of corona cases due to people disobeying the nowruz holiday restrictions. thus, starting from day , there was a steady improvement, with a doubling time of . days. the initial doubling rates also display the significance of governing bodies and transparent data in controlling the virus's extent. in italy, the initial days show an exponential growth of corona cases, as the doubling time tends to alter between and days, as shown in figure . a large number of people in northern italy showed pneumonia-like symptoms, and these areas thus became sites of infection. there is a high possibility that there were already people infected with the virus long before the first registered case appeared. the country had less time to prepare for the massive explosion that took place in various cities. in mid-february, the doubling time was on the lower side; still, no norms or rules limiting the population's movement were put in place. on th march, the whole country went under lockdown, strict measures were taken, and movement in and out of areas was prohibited. thus, the graph shows a steady upward trend starting on day . in the united kingdom, the majority of cases came from people travelling from spain, france and italy. the government initially adopted the mitigation approach, in which it tried not to react and to let the outbreak continue with only minor measures.
under this approach, enough people are supposed to get infected to create herd immunity and permanently reduce r (the reproduction number) below 1. for this reason, the cases multiplied quickly until day , as seen in figure . on th march, the government changed its approach and started promoting social distancing and self-isolation. later in march, the country went into lockdown; slowly, there was an upward trend in the doubling time of the number of cases. the country had earlier adopted reactive testing, i.e. prioritising the testing of people showing severe symptoms; thus, those not actively seeking out tests and people with mild or no symptoms were left out. in the united states, the initial days show the country's conditions worsening, with the doubling rate altering between and days, as seen in figure . this can be attributed to the slow response to the pandemic and to people travelling into the country from china without any virus testing being done. social distancing precautions were taken in mid-march, when the cases overflowed, and mass testing was carried out. the lack of improvement in the doubling rate was also due to cultural issues and the top leadership's poor role. the country has one of the best healthcare systems, but a large number of residents have not been following social distancing. the country has shown the highest number of covid- positive cases, but the governing bodies have achieved an improvement in the doubling rate. after days, the doubling time started to increase due to mass testing being done and social distancing being implemented. as of nd april, % of the population was under lockdown, which is reflected from day till day . the doubling time further increased as more testing was done and preventive measures were taken. three different analyses are carried out for the eight major countries, and results are drawn. the doubling time of the various countries is vastly affected by the preventive measures taken, the top leadership's role in lockdown implementation, and the attitude of the citizens. in countries such as brazil, where a lockdown has still not been enacted, the cases have risen steadily. higher testing rates helped identify the hosts of the virus; thus, countries with mass testing have a higher doubling time. the doubling time graph can also help countries where the doubling time is low to recognise that increased medical staff and healthcare facilities are needed, including extra testing centres and ppe kits in countries with high case counts. in terms of doubling rates, the worst affected countries are the developing countries, due to their weak healthcare systems covering an overpopulated expanse. if the conditions are not controlled, this may pose a severe threat to such countries. countries with a weak economy suffered more, due to underfunded health systems and economic migration (causing the virus to spread to more regions). unemployment, recession and unstable jobs also cause vulnerable individuals to disobey the lockdown in order to meet their daily survival needs. the concerned government plays the most crucial role; thus, early and rapid action is needed from all governments to control the virus's outreach. therefore, countries such as china that acted quickly to contain the spread did not have an explosion of cases, whereas iran and brazil lacked swift action. countries such as brazil, where the political assemblies did not take the necessary actions and ignored the importance of lockdown and social norms, suffered heavily.
the strategy of contact tracing and aggressive testing is not easy to replicate in countries with large populations; thus, countries with high population density will usually show lower doubling rates (china being the outlier). limitations of the study, tool and data: a barrier to increasing the number of people getting tested is the limited number of testing facilities, medical staff, and healthcare facilities. the test positivity rate of countries such as brazil and the united kingdom is high, which shows that many other people suffering from the virus are not getting tested. thus, inadequate information may be available for large areas. some governments' failure to provide transparent, up-to-date information about the spread of the disease poses a barrier to precise results. there are specific patterns or sequences where the length of the path is unknown upfront, so it is hard to state with absolute certainty the outcome of the growth in the doubling time of the number of coronavirus positive cases. analysis of the covid- pandemic requires multilayered parameters; here we have chosen an elementary model that includes only the fundamental aspects of the dissimilarities in the doubling rate of covid- cases. another source of possible bias is that the data used do not cover all periods and countries from when the first case was recorded, thus making it difficult to study the outcomes homogeneously. understanding this study of the covid- outbreak can help the authorities adapt healthcare measures and other systems so as to act more successfully on other diseases lurking at the current time, and to prepare more efficiently for any future outbreaks. the datasets can be used in conjunction with other systems, such as analytics clouds or machine learning. the patterns in these datasets give insights into what further measures can be taken by the governing bodies to combat the deadly virus. horizontal scalability is possible, such that no matter what amount of data there is, one can add more resources to the infrastructure and carry out further analysis.
countries, where the virus spread early, had less time to be prepared and thus in initial stages, the doubling time suffered and vice versa. the people's attitude towards the government and the lockdown also alters the rate at which the doubling time increases. thus countries such as germany and south korea did far better than the united states of america and iran. the healthcare system and the economic conditions also affect the doubling time, where countries such as peru and brazil are immensely affected. the developing countries are the worst hit due to overpopulation and underfunded healthcare system and must take strictest measures to contain the virus spread. naming the coronavirus disease (covid- ) and the virus that causes it statement on the second meeting of the international health regulations remdesivir in adults with severe covid- : a randomised, double-blind, placebocontrolled, multicentre trial. the lancet a systematic review of covid- epidemiology based on current evidence ostwald growth rate in controlled covid- epidemic spreading as in arrested growth in a quantum complex matter effect of changing case definitions for covid- on the epidemic curve and transmission parameters in none key: cord- -kiyix qd authors: grzesik, piotr; mrozek, dariusz title: comparative analysis of time series databases in the context of edge computing for low power sensor networks date: - - journal: computational science - iccs doi: . / - - - - _ sha: doc_id: cord_uid: kiyix qd selection of an appropriate database system for edge iot devices is one of the essential elements that determine efficient edge-based data analysis in low power wireless sensor networks. this paper presents a comparative analysis of time series databases in the context of edge computing for iot and smart systems. the research focuses on the performance comparison between three time-series databases: timescaledb, influxdb, riak ts, as well as two relational databases, postgresql and sqlite. all selected solutions were tested while being deployed on a single-board computer, raspberry pi. for each of them, the database schema was designed, based on a data model representing sensor readings and their corresponding timestamps. for performance testing, we developed a small application that was able to simulate insertion and querying operations. the results of the experiments showed that for presented scenarios of reading data, postgresql and influxdb emerged as the most performing solutions. for tested insertion scenarios, postgresql turned out to be the fastest. carried out experiments also proved that low-cost, single-board computers such as raspberry pi can be used as small-scale data aggregation nodes on edge device in low power wireless sensor networks, that often serve as a base for iot-based smart systems. in the recent years we have been observing iot systems being applied for multiple use cases such as water monitoring [ ] , air quality monitoring [ ] , and health monitoring [ ] , generating a massive amount of data that is being sent to the cloud for storing and further processing. this is becoming a more significant challenge due to the need for sending the data over the internet. due to that, a new computing paradigm called edge computing started to emerge [ ] . 
the main idea behind edge computing is to move data processing from the cloud to devices that are closer to the source of data, in order to reduce the volume of data that needs to be sent to the cloud, improve reaction time to the changing state of the system, provide resilience, and prevent data loss in situations where the internet connection is not reliable, or even not available most of the time. to achieve that, edge computing devices need to be able to ingest data from sensors, analyze it, aggregate metrics, and send them to the cloud for further processing if required. for example, while collecting and processing environmental data on air quality, the edge device can be responsible for aggregating data and computing the air quality index (aqi) [ ], instead of sending raw sensor readings to the environmental monitoring center. in systems with multiple sensors generating data at a fast rate, an efficient storage and analytics system running on the edge device becomes a crucial component. due to the time-series nature of sensor data, dedicated time series databases seem like a natural fit for this type of workload. this paper aims to evaluate several time series databases in the context of their use on a low-cost, constrained edge computing device, in the form of a raspberry pi, that processes data from environmental sensors. the paper is organized as follows. in sect. , we review the related works. in sect. , we describe the databases selected for comparison. section describes the testing environment, the data model used, and the testing methodology. section contains a description of the performance experiments that we carried out. finally, sect. concludes the results of the paper. in the literature, there is little research concerning the comparison of various time-series databases. in the paper [ ], tulasi priyanka sanaboyina compared two time-series databases, influxdb and opentsdb, based on the energy consumption of the physical servers on which the databases were running, under several reading and writing scenarios. the author concludes the research with claims that influxdb consumes less energy than opentsdb in comparable situations. bader et al. [ ] focused on open source time-series databases, examined different solutions during their research, and focused on the comparison of twelve selected databases, including influxdb, postgresql and opentsdb, among others. all selected solutions were compared based on their scalability, supported functions, granularity, available interfaces and extensions, as well as licensing and support. in their research [ ], goldschmidt et al. benchmarked three open-source time-series databases, opentsdb, kairosdb and databus, in a cloud environment with up to nodes, in the context of industrial workloads. the main objective of the research was to evaluate the selected databases to determine their scalability and reliability features. out of the three technologies, kairosdb emerged as the one that meets the initial hypotheses about scalability and reliability. wlodarczyk, in his article [ ], provides an overview and comparison of four offerings: chukwa, opentsdb, tempodb, and squwk. the analysis focused on feature differences between the selected technologies, without any performance benchmarks. the author identified opentsdb as the most popular choice for time series storage. pungilȃ et al. [ ] compared databases for use in a system that stores large volumes of sensor data from smart meters.
during the research, they compared three relational databases (sqlite, mysql, postgresql), one time-series database (ibm informix with the datablade module), as well as three nosql databases (monetdb, hypertable, and oracle berkeleydb). during the experiments, it was determined that hypertable offers the highest number of insert operations per second, but is slower when it comes to scanning operations. the authors suggested that berkeleydb offers a compromise when there is a need for a workload that has a balanced number of both insert and scan operations. fadhel et al. presented research [ ] concerning the evaluation of databases for a low-cost water quality sensing system. the authors identified influxdb as the most suitable solution, listing the ease of installation and maintenance, support for multiple interface formats, and the http gui as the deciding factors. in the second part of the research, they conducted performance experiments and determined that influxdb can handle the load from sensors. in his article [ ] , kiefer provided a performance comparison between postgresql and timescaledb for storage and analytics of large-scale, time-series data. the author presented that at the scale of millions of rows, timescaledb offers up to × higher ingest rates than postgresql, while at the same time making time-based queries even , × faster. the author also mentions that for simple queries, e.g., indexed lookups, timescaledb will be slower than postgresql due to more considerable planning time. boule, in his work [ ] , described a performance comparison for insert and read operations between influxdb and timescaledb. it is based on a simulated dataset of metrics for a fleet of trucks. according to the results obtained during the experiments, timescaledb offers better read performance than influxdb in the tested scenarios. based on the above, it can be concluded that most of the current research focuses on the use of time-series databases for large-scale systems running in cloud environments. one exception to that is the research [ ] , where the authors evaluate several databases in the context of a low-cost system but present performance tests for only one of them, influxdb. in contrast to the mentioned works, this paper focuses on the comparison of the performance of several database systems for storing sensor data on edge devices that have limited storage and compute capabilities. a time series database (tsdb) is a database type designed and optimized to handle timestamped or time-series data, which is characterized by a low number of relationships between data and temporal ordering of records. most time series workloads consist of a high number of insert operations, often in batches. query patterns include some form of aggregation over time. it is also important to note that in such workloads, data usually does not require updating after being inserted. to accommodate these requirements, time-series databases store data in the form of events, metrics, or measurements, typically numerical, together with their corresponding timestamps and additional labels or tags. data is very often chunked based on timestamp, which in turn allows for fast and efficient time-based queries and aggregations. most tsdbs offer advanced data processing capabilities such as window functions, automatic aggregation functions, time bucketing, and advanced data retention policies. there are currently a few approaches to building a time-series database.
some of them, like opentsdb or timescaledb, depend on already existing databases, such as hbase or postgresql, respectively, while others are standalone, independent systems such as influxdb. in recent years, according to the db-engines ranking, as seen in fig. , the growth rate of the popularity of time series databases is the highest out of all classified database types. for the experiments, databases were selected based on their popularity, offered aggregation functionalities, support for the arm architecture, and sql or sql-like query language support, as well as their availability without a commercial license. timescaledb is an open-source, time-series database, written in the c programming language and distributed as an extension of the relational database postgresql. it is developed by timescale inc., which also offers enterprise support and cloud hosting in the form of the timescale cloud offering. timescaledb is optimized for fast ingest and complex queries [ ] . thanks to the support for all sql operations available in postgresql, it can be used as a drop-in replacement for a traditional relational database, while also offering significant performance improvements for storing and processing time-series data. by taking advantage of automatic space-time partitioning, it enables horizontal scaling, which in turn can further improve the ingestion capabilities of the system. it stores data in structures called hypertables, which serve as an abstraction for a single, continuous table. internally, timescaledb splits hypertables into chunks that correspond to a specific time interval and partition keys. chunks are implemented using regular postgresql tables [ ] . being an extension of the postgresql dbms, it supports the same client libraries that support postgresql. according to the db-engines ranking [ ] , it is the th most popular time-series database. influxdb is an open-source, time-series database, written in the go programming language, developed and maintained by influxdb inc., which also offers enterprise support and a cloud-hosted version of the database. internally, it uses a custom-built storage engine called the time-structured merge (tsm) tree, which is optimized for time series data. it has no external dependencies and is distributed as a single binary, which in turn allows for an easy deployment process on all major operating systems and platforms. influxdb supports influxql, which is a custom, sql-like query language with support for aggregation functions over time series data. it supports advanced data retention policies as well as continuous queries, which allow for automatic computation of aggregate data to speed up frequently used queries [ ] . it uses shards to partition data and organizes them into shard groups, based on the retention policy and timestamps. influxdb is also a part of the tick stack [ ] , which is a data processing platform that consists of a time-series database in the form of influxdb; kapacitor, which is a real-time streaming data processing engine; telegraf, the data collection agent; and chronograf, a graphical user interface to the platform. client libraries in programming languages like go, python, java, ruby, and others are available, as well as the command-line client "influx". according to the db-engines ranking [ ] , it is the most popular time-series database management system. riak ts is an open-source, distributed nosql database, optimized for time series data and built on top of the riak kv database [ ] , created and maintained by basho technologies.
riak ts is written in the erlang programming language and supports a masterless, multi-node architecture to ensure resiliency to network and hardware failures. this type of architecture also allows for efficient scalability with a near-linear performance increase [ ] . it supports an sql-like query language with aggregation operations over time series data. it offers both http and pbc apis as well as dedicated client libraries in java, python, ruby, erlang, and node.js. in addition, it has a native apache spark [ ] connector for in-memory analytics. according to the db-engines ranking [ ] , it is the th most popular time-series database. postgresql is an open-source relational database management system written in the c language and currently maintained by the postgresql global development group. postgresql runs on all major operating systems, is acid [ ] compliant, and supports various extensions, such as timescaledb. it supports a major part of the sql standard and offers many features, including, but not limited to, triggers, views, transactions, and streaming replication. it uses multi-version concurrency control (mvcc) [ ] . in addition to being a relational database, it also offers support for storing and querying document data thanks to the json, jsonb, xml, and key-value data types [ ] . there are client libraries available in programming languages like python, c, c++, java, go, erlang, rust, and others. according to the db-engines ranking [ ] , it is the th most popular database overall. it does not offer any dedicated support or optimizations for time-series data. sqlite is an open-source relational database written in the c language. the sqlite source code is currently available in the public domain. it is lightweight, stores the whole database in a single file, and, unlike most databases, is implemented as a library and does not require a separate server process. sqlite provides all functionalities directly through function calls. its simplicity makes it one of the most widely used databases, especially popular in embedded systems. sqlite has a full-featured sql standard implementation with support for functionalities such as triggers, views, indexes, and many more [ ] . similar to postgresql, it does not offer any specific support for time series data. moreover, it does not provide a data type for storing time and requires users to save it as numerical timestamps or strings. according to the db-engines ranking [ ] , it is the th most popular relational database and the th most popular database overall. the testing environment was based on a lowpan sensor network that is part of an environment monitoring system, which consists of an edge router device that additionally serves as a database and analytical engine. it is also responsible for sending aggregated metrics to the analytic system in the cloud for further processing. another part of the network is composed of ten sensor nodes that send measurements such as air quality and weather condition metrics to the edge router device. figure presents the network diagram of the described system. in this research, we focused on the performance evaluation of the edge database functionality of the presented system. to simplify the testing environment and allow for running tests multiple times in a reasonable amount of time, we developed a small python application to serve as a generator of sensor readings instead of using data generated by the physical network.
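the generator's source code is not included in the paper; as a rough illustration, a simulator of this kind could look as follows. the field names anticipate the data point structure described just below, while the value ranges, the reading interval, and the duration are our illustrative assumptions, not the authors' exact implementation:

```python
import random
from datetime import datetime, timedelta

# ten simulated sensors, matching the ten sensor nodes of the test network;
# the location labels are placeholders.
SENSORS = [(f"sensor-{i}", f"location-{i % 3}") for i in range(10)]

def generate_readings(start: datetime, interval_s: int, duration_h: int):
    """Yield simulated data points ordered by timestamp."""
    steps = int(duration_h * 3600 / interval_s)
    for step in range(steps):
        ts = start + timedelta(seconds=step * interval_s)
        for sensor_id, location in SENSORS:
            yield {
                "timestamp": ts,
                "sensor_id": sensor_id,
                "location": location,
                "no2": random.uniform(5.0, 200.0),          # air quality metrics
                "pm2_5": random.uniform(0.0, 120.0),
                "pm10": random.uniform(0.0, 180.0),
                "temperature": random.uniform(-5.0, 35.0),  # weather metrics
                "pressure": random.uniform(980.0, 1040.0),
                "humidity": random.uniform(20.0, 95.0),
            }

if __name__ == "__main__":
    points = list(generate_readings(datetime.now(), interval_s=30, duration_h=24))
    print(f"generated {len(points)} data points")
```

materializing the generated points in memory like this also makes it easy to replay exactly the same workload against each database under test.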
as an edge device, we decided to use a raspberry pi single-board computer. each data point sent by a sensor consists of air quality metrics in the form of no and dust particle size metrics, pm . and pm . in addition, it also carries information about weather conditions such as ambient temperature, pressure, and humidity. each reading is timestamped and tagged with the location of the sensor and the unique sensor identifier. table shows the structure of a single data point with the corresponding data types. for the experiments, we generated data from simulated sensors, where each sensor sends a reading every s over h. this resulted in , data points used for performance testing. for testing, a small python application was developed separately for each of the selected databases. the application was responsible for reading simulated time-series data, inserting that data into the database, and reading the data back from the database, while measuring the time it took to execute all of the described operations. table presents the list of the databases along with their corresponding client libraries. it also shows the versions of the software used during the experiments. to evaluate the insertion and querying performance, we conducted several experiments. firstly, we ran a test to assess the writing capabilities of all selected databases by simulating the insertion of data points in two ways: one by one and in batches of points. the reason for that was to accommodate the fact that databases can offer better performance for batch insertions, and it is possible to buffer data before saving it to the database. in this step, for each database, we ran the simulation times (except for sqlite, where simulations were run times due to the relatively long simulation time). secondly, we ran experiments to evaluate the query performance of all selected solutions in three scenarios. in the first scenario, we evaluated a query for the average temperature in the chosen period, grouped by location. in the second scenario, we tested a query for the minimum and maximum values of no , pm . , and pm in the selected period, once again grouped by location. in the last, third scenario, we evaluated the performance of a query that counts data points grouped by sensor id in the selected period for which no was larger than a selected value and the location was equal to a specific one. each query was executed times. the query scenarios were selected in order to test the performance of the databases for the most common aggregation queries that can be used in scenarios where the analysis has to be performed directly on the edge device or when the data needs to be aggregated before sending to the cloud in order to reduce the volume of transferred data. in the first simulation, we evaluated the insertion performance in two different scenarios. figure presents the obtained results in the form of the average number of data points inserted per second in both scenarios. for one-by-one insertion, we observe postgresql and timescaledb as the best performing solutions, with and points inserted per second, respectively, followed by riak ts. in the following experiments, we tested the reading performance for three different queries. results are presented in the form of the average query execution time in milliseconds for each database.
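the concrete queries appear only in the paper's figures; for the sql-based engines, the three scenarios described above could be expressed along the following lines. the readings table and column names mirror the data point structure above and are our assumptions (influxdb would use analogous influxql statements, and sqlite would use :name-style placeholders):

```python
# Hypothetical SQL equivalents of the three query scenarios, written with
# psycopg2-style named parameters for the PostgreSQL/TimescaleDB flavour.
QUERIES = {
    # scenario 1: average temperature in a period, grouped by location
    "avg_temperature": """
        SELECT location, AVG(temperature)
        FROM readings
        WHERE timestamp BETWEEN %(start)s AND %(end)s
        GROUP BY location;
    """,
    # scenario 2: min/max of the air quality metrics, grouped by location
    "min_max_air_quality": """
        SELECT location,
               MIN(no2),   MAX(no2),
               MIN(pm2_5), MAX(pm2_5),
               MIN(pm10),  MAX(pm10)
        FROM readings
        WHERE timestamp BETWEEN %(start)s AND %(end)s
        GROUP BY location;
    """,
    # scenario 3: count of points per sensor where no2 exceeded a threshold
    # at one specific location
    "no2_above_threshold": """
        SELECT sensor_id, COUNT(*)
        FROM readings
        WHERE timestamp BETWEEN %(start)s AND %(end)s
          AND no2 > %(threshold)s
          AND location = %(location)s
        GROUP BY sensor_id;
    """,
}
```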
because query execution for riak ts was in all cases - times slower than for all other solutions, the results for riak ts were removed from further comparison to improve the readability of the presented charts. figure shows both the query used in the first scenario and the obtained results. in this scenario, influxdb emerged as the fastest solution with an average query execution time of ms, followed by postgresql and timescaledb with and ms, respectively. sqlite was the slowest, recording an average query execution time of ms. next, a comparison was made for the results obtained during the evaluation of the second query, computing minimum and maximum aggregations of the air quality metrics. the recorded results and queries are shown in fig. . in this example, postgresql turned out to be the fastest solution with an average query execution time of ms; next was influxdb with ms and timescaledb with ms. the tested query took the longest time to execute on sqlite, taking on average ms. we can observe a general trend of increased query execution time with more aggregations performed, in comparison to the first testing scenario. the last experiment was performed for the third tested query, evaluating the number of times no was higher than the predefined threshold. figure presents the query used and the results obtained during that simulation. once again, postgresql was the fastest solution with an average query execution time of ms, followed by influxdb with ms. the two slowest databases were timescaledb and sqlite, with and ms per execution on average. considering the results of all presented simulations, we can observe that in almost all cases, postgresql is the best performing solution for the evaluated workloads, except for influxdb, which turned out to be faster for the first aggregation query. it was validated that batching data points for insertion brings performance gains, as high as . times more data points ingested per second for influxdb. with the exception of riak ts, all databases executed the tested queries on average in less than ms, and the relative differences in performance for queries are not as high as in the case of insertion. the selection of a proper storage system with declarative querying capabilities is an essential element of building efficient systems with edge-based analytics. this research aimed to compare the performance of several databases in the context of edge computing in wireless sensor networks for iot-based smart systems. we believe that the experiments and the analysis of the results presented in the paper complement the performance evaluation of influxdb presented in [ ] by showcasing performance results for multiple databases, and can serve as a reference when selecting an appropriate database for low-cost, edge analytics applications. as it turned out, at a smaller scale it might make sense to choose a more traditional, relational database like postgresql, which offers the best performance in all but one tested case. however, when features such as data retention policies, time bucketing, and automatic aggregations are crucial for the developed solution, dedicated time-series databases such as timescaledb and influxdb become a better choice.
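as a rough illustration of the batching effect noted above, the two insertion modes could be implemented as follows with psycopg2, a common client library for postgresql and timescaledb (the paper's table of client libraries is not reproduced here, so the library choice, the table name, and the batch size are our assumptions):

```python
from psycopg2.extras import execute_values

COLUMNS = ("timestamp, sensor_id, location, no2, pm2_5, pm10, "
           "temperature, pressure, humidity")

def _row(p):
    # Flatten one generated data point into a tuple of column values.
    return (p["timestamp"], p["sensor_id"], p["location"], p["no2"],
            p["pm2_5"], p["pm10"], p["temperature"], p["pressure"],
            p["humidity"])

def insert_one_by_one(conn, points):
    # One INSERT statement (and one round trip) per data point.
    with conn.cursor() as cur:
        for p in points:
            cur.execute(
                f"INSERT INTO readings ({COLUMNS}) "
                "VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s)",
                _row(p),
            )
    conn.commit()

def insert_in_batches(conn, points, batch_size=100):
    # execute_values expands many rows into a single INSERT statement,
    # which is where the multi-fold ingest speed-up mainly comes from.
    rows = [_row(p) for p in points]
    with conn.cursor() as cur:
        for i in range(0, len(rows), batch_size):
            execute_values(
                cur,
                f"INSERT INTO readings ({COLUMNS}) VALUES %s",
                rows[i:i + batch_size],
            )
    conn.commit()
```

batching amortizes per-statement parsing and network round trips across many rows, which is consistent with the multi-fold ingest gains reported above.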
dbms popularity broken down by database model
influxdb on db-engines ranking
postgresql on db-engines ranking
riak ts on db-engines ranking
sqlite on db-engines ranking
timescaledb on db-engines ranking
timescaledb: sql made scalable for time-series data
survey and comparison of open source time series databases
concurrency control in distributed database systems
how to benchmark iot time-series workloads in a production environment
a comparison of time series databases for storing water quality data
scalability and robustness of time-series databases for cloud-native monitoring of industrial processes
a review on air quality indexing system
postgresql for time-series: x higher inserts, x faster deletes, . x- , x faster queries
air quality monitoring system and benchmarking
fog computing-based iot for health monitoring system
benchmarking database systems for the requirements of sensor readings
performance evaluation of time series databases based on energy consumption
optimize cloud computations using edge computing
overview of time series storage and processing in a cloud environment
advanced ebusiness transactions for b b-collaborations
key: cord- - ij fkrh authors: walsh, froma title: loss and resilience in the time of covid‐ : meaning making, hope, and transcendence date: - - journal: fam process doi: . /famp. sha: doc_id: cord_uid: ij fkrh this article addresses the many complex and traumatic losses wrought by the covid‐ pandemic. in contrast to individually‐based, symptom‐focused grief work, a resilience‐oriented, systemic approach with complex losses contextualizes the distress and mobilizes relational resources to support positive adaptation. applying a family resilience framework to pandemic‐related losses, discussion focuses on the importance of shared belief systems in ( ) meaning‐making processes; ( ) a positive, hopeful outlook and active agency; and ( ) transcendent values and spiritual moorings for inspiration, transformation, and positive growth. practice guidelines are offered to facilitate adaptation and resilience. matriarch. a death is often experienced as a hole in the heart of a family that will never again feel intact. sudden deaths, most common in rapidly progressing, severe cases of covid- , are jolting experiences for families. a recovering loved one may suddenly take a turn for the worse. there is often extreme physical suffering before death, which is agonizing for loved ones, helpless on the sidelines and lacking treatment options. with quarantine restrictions, family members are unable to be at the bedside, to provide comfort and say their good-byes. additional heartache ensues when gatherings are prohibited for funeral and burial rituals that help families and their communities to honor the deceased, share grief, and provide mutual support (imber-black, ). my extended family experienced a heartbreaking death from coronavirus. in march, i received an anguished email from my cousin: she had been informed by her mother's nursing home that her mother had contracted covid- , was in isolation and declining rapidly, but could receive no visitors. family members hovered outside the building, unable to be with her as she declined and died. they weren't allowed to see her body or to hold a funeral gathering. a week later, her daughter, who had visited her mother just before symptoms appeared, contracted the virus herself, was in quarantine, and worried about having spread it to other grieving family members.
i was relieved to hear, a month later, that she was recovering from a mild case. but she and her siblings were deeply distressed over their mother's death and furious that the facility had not informed them that other residents had tested positive before her mother's diagnosis. they were wracked with remorse that they had let her go to a care facility and had not insisted upon taking her in to live with them. such heart-wrenching situations are all too common for families losing a loved one in this time of high contagion. the elderly and others with underlying medical conditions face heightened risk. with an unexpected loss, family members lack time to prepare emotionally or practically, to deal with unfinished business, or to say their goodbyes. grief can be complicated with regrets that it is too late to repair wounded bonds. in some cases, families and emergency care providers must make agonizing end-of-life decisions to forego or end life support efforts. strong disagreements or religious concerns can lead to long-lasting family distress. the isolating constraints of social distancing heighten awareness that our connections with others are vital to thrive. in traumatic experiences like a pandemic, when helplessness and confusion are common, we have an urgent need to turn to one another for support, comfort, and safety. separations are keenly felt. with high risks of severe illness and death for elders and those with chronic medical conditions, loved ones are fearful of bringing the virus to them. travel safety concerns limit visits by those living at a distance. elders miss out on the rapid developments of grandchildren and yearn for a hug, a kiss, the scent of a baby's breath. individuals in prolonged isolation, living alone or in care facilities, can suffer a sense of disconnection and loneliness, which increases risks for physical and mental decline, substance use, emotional despair, and death (cacioppo, cacioppo, & capitanio, ; killgore, cloonan, taylor, & dailey, ) . families need to sustain connections across distance: phone and internet contact, cards and letters, and children's drawings all offer vital lifelines. the severe economic shockwaves of the covid- pandemic have far-reaching impact for financial security and wellbeing in families. job loss and the looming threat of prolonged unemployment, business closures, and uncertain economic recovery can be devastating, especially for lower-income families who lack savings and barely scrape by, paycheck to paycheck. the loss of essential income can have cascading effects with loss of homes, disruptive relocations, and persistent housing and food insecurity. an untimely death in the pandemic is especially heartbreaking for families. the loss of a child, even one in early adulthood, upends life cycle expectations and shatters hopes and dreams for all that might have been. in the rapid spread of the coronavirus, anticipatory loss (rolland, ) is a constant concern, with worry about one's own safety and the threatened loss of loved ones. dire forecasts of a prolonged economic recession generate deep anxieties about future livelihoods and retirement security. young adults, facing the loss of educational and job plans, fear the loss of life dreams: in pursuing careers, gaining financial independence, finding life partners, and starting a family. the loss of a sense of normalcy is widespread.
life as we have known it has been derailed. life forward is on hold, the future uncertain, and the road ahead unclear. there is much talk about the "old normal" and the "new normal." yet, like the aftershocks of an earthquake, the ground keeps shifting, and nothing feels normal. these harrowing times take a mental, physical, and emotional toll. daily news reports increase a sense of overwhelm, with confusing and conflicting information and changing forecasts on what lies ahead. a cartoon depicts a couple in their living room, with flames rising up around them. as one partner sits on the sofa, trying to read a book, the other stands transfixed in front of the large screen tv watching the breaking news bulletin: "hell still on fire." in this unprecedented pandemic, there is a collective experience of shattered assumptions in our worldview: our taken-for-granted beliefs and expectations about our lives and our connections to our world (janoff-bulman, ) . the invisibility of the virus, its lethal potential, and the possible spread by non-symptomatic persons heighten fears of infection. the death of a loved one, and loss of physical contacts, life structures, and future life visions can shatter core beliefs and make our world seem unpredictable and unjust. as one father lamented, "everything i thought i knew is shaken." one global mental health specialist coined the term "covid cognitive cloud" to describe the disorganizing impact of the pandemic. ambiguities cloud our thinking and decision-making. who is trustworthy for leadership, information, and guidance? where and with whom are we safe? we feel trapped and angry at a loss of freedoms with lockdown and restrictions. paradoxically, we also feel unmoored and adrift, swept by strong currents in a perfect storm of extreme events beyond our comprehension and control. the impact of loss is compounded with situational risks, larger systemic/structural forces, and/or complex family dynamics. high risk situations and socio-economic disparities. the risk and pain of loss is intensified when loved ones are working on the front lines and in jobs with repeated exposure to the virus. it is heartbreaking for families of healthcare emergency workers who contract coronavirus while providing critical care, often lacking protective equipment, without respite from the overload of cases, and suffering emotionally when lives can't be saved. those who self-isolate to protect their own family members miss their support. socio-economic and racial disparities render disadvantaged and marginalized communities at higher risk for multiple losses in major disasters worldwide (norris, ) . in a pandemic, crowded living conditions, job and environmental hazards, chronic medical conditions, and discrimination in disaster response heighten risks. blacks and latinx have been disproportionately affected by coronavirus across the united states and all age groups (oppel, gabeloff, et al., ) . stark disparities are seen in the highest death rates, particularly among low-paid workers and their family members. many employees are caught between troubling options: going to work for a needed paycheck or losing their jobs and income if they stay home to keep themselves and loved ones safe. prolonged unemployment and financial insecurities have long-term effects. ambiguous loss. ambiguity surrounding risk and loss generates anxiety, depression, and conflict, interfering with adaptation (boss, ) .
with covid- , ambiguities persist about how the virus is spread and whether a death was due to coronavirus. unclarity about the diagnosis, symptoms, and severity can be an impediment in getting emergency care. family members may fault themselves for not having understood risks or acted to prevent a death and remain unclear about their future risks. unacknowledged and stigmatized losses. when losses are unacknowledged, hidden, or minimized, they leave families unsupported (doka, ) . the denial of the human tragedy of illness and deaths in the spread of covid- by national authorities renders their suffering invisible. the stigma of possible contagion surrounding a covid-related death fosters misinformation, secrecy, and estrangement, impairing social support as well as critical health and mental health care. reports are also emerging of a spike in suicides and addiction-related deaths, with concerns about further increases with long-term effects in the economy and vulnerable groups (gunnell, appleby, et al., ) . deaths by suicide or overdose are tormenting for families, who struggle to comprehend them and may need help with anger, blame, shame, or guilt over how they might have made a difference (walsh, in press ). as the first wave of the pandemic surges in many places, with a second wave expected, most families experience a roller-coaster course in efforts to cope and adapt. families can be overwhelmed by the emotional, relational, and functional impact of the many stresses in their lives. adaptation can be further complicated in highly conflicted, abusive, or estranged relationships or with reactivation of painful emotions around past trauma or loss (walsh & mcgoldrick, ) . the dominant anglo-american culture has fostered avoidance in facing death and loss, minimizing their impact, and encouraging people to quickly get "closure" and move on from losses and painful emotions (walsh & mcgoldrick, ) . some seek reassurance that death happens to others who are unfortunate or at fault, to assuage anxieties about their own risks. many are uncomfortable in responding to others' loss experiences and may distract attention or avoid contact. reflecting the cultural aversion, many therapists working with families have been hesitant in addressing significant losses, leaving grief to bereavement specialists and pastoral counselors. moreover, there's no safe professional boundary from emotional spill-over: therapists, as well as clients, are impacted by the pandemic and are dealing with losses, disruptions, and anxieties in both work and family spheres of life. like our clients, we are trying to hold it all together. in a larger cultural context and mental health field that favors brief solution-oriented approaches, therapists need to appreciate that loss is not a problem to solve. we can't bring back a deceased loved one or a livelihood or way of life that is gone. we can listen openheartedly to pain and suffering in families, facilitate their mutual support, and encourage active efforts for positive adaptation. the cultural ethos of the "rugged individual" fosters expectations for self-reliance and fierce independence in dealing with serious life challenges. vulnerability and dependence on others are shame-laden, viewed as weakness and deficiency. associated cultural images of masculinity constrain many men's emotional expression and strain relational bonds.
in couples, a distraught spouse may feel abandoned by an emotionally unavailable partner when mutual support is needed most. this ethos also encourages individuals to tough it out on their own: "i should be able to manage it all myself." "i don't want to ask for help or burden others." such expectations lead to burnout, especially for single parents, and leave no time to attend to emotional needs or find respite from pandemic-related stresses. vulnerability is part of the human condition. distress is normal in abnormal times. although some families are more vulnerable in this pandemic, most face losses and upheaval. false assurances of invulnerability are foolhardy. acknowledgment of grief, suffering, and hardship is a strength that can rally mutual support and collective efforts for recovery. we are relational beings. recognition of our essential interdependence is vital for our wellbeing and resilience. in turning to others for help, we can pay it back and pay it forward. mobilizing kin and social support, while challenging with social distancing restrictions, is crucial to build family and community resource teams. as a society, we are all going through this pandemic together. we need and depend on each other for our lives and our future. in this time of pandemic, there is much talk about widespread grief. it's important to clarify current research-based understandings of loss and common misconceptions from earlier theories positing a single, universal model of "normal" or "healthy" grief. epidemiological and cross-cultural studies have found wide diversity in responses to loss, with variation in the timing, expression, and intensity of normal grief responses (walsh, in press) . in families, members may not be in sync, requiring respect for differences. grief and recovery processes do not follow an orderly stage sequence or timetable as proposed by kubler-ross and kessler ( ) . common reactions of shock and disbelief, anger, bargaining, sorrow, and acceptance are better seen as facets of grief, which ebb and flow over time. while usually decreasing in intensity, various facets can surface unexpectedly, particularly around nodal events. in the covid- pandemic, initial shock and disbelief are common, but unshakable denial becomes detrimental in not facing the reality that must be dealt with. in families, tolerance is needed for different reactions: one member may be consumed by sadness and yearning while another is enraged by the unfairness of a loss. a breadwinner may need to keep emotions under wraps to function at work. small children may show anxious clinging or need constant contact while adolescents may distance (walsh & mcgoldrick, ) . adaptation involves a dynamic oscillation in attention alternating between loss and restoration, focused at times on grief and at other times on emerging challenges. with pressing demands, many don't have the time and space to process complicated losses, which may find expression in substance use, relational conflict, or child-focused problems. many only seek counseling much later, after initial social support wanes and the full impact of loss-related challenges is felt. this will require pacing of interventions attuned to each family, weaving back and forth in attention to grief, coping efforts, and future directions. adaptation to loss does not mean full recovery or resolution in the sense of some complete, once and for all, getting over it.
recovery is best seen in terms of adaptation over time, rather than a final outcome. many recover from coronavirus, yet some suffer long-term sequelae not yet understood. recovery from the economic effects of the pandemic may be partial, as will be recovery of aspects of past ways of life. efforts will be needed for both continuity and adaptive change. likewise, resilience in response to loss and other major disruptions does not mean "just bounce back," quickly rallying and moving on unscathed (walsh, b) . healing and resilience occur gradually over time. grief is a healing process: we don't get over grief--we go through it. resilience is forged through suffering and setbacks; it involves struggling well and integrating painful loss experiences into our life passage. the concept of resilience--the capacity to overcome adversity--is finding valuable application in situations of widespread disaster, collective trauma, and loss (landau, ; masten & motti-stefanidi, ; saul, ; walsh, ; b) . with advances in research, resilience is now understood as involving dynamic multilevel systemic processes over time. the response to a disaster by communities and larger systems can make the difference for individual and family wellbeing and resilience. for instance, abysmal failures in the government response to hurricane katrina compounded widespread suffering and loss. in contrast, the coordinated response to the oklahoma city bombing tragedy by community leaders and agencies provided immediate support and fostered long-term positive adaptation (walsh, ; b) . family resilience refers to capacities in family functioning to withstand and rebound from disruptive life challenges in adversity. more than surviving loss and coping with disruptions, resilience involves positive adaptation: regaining the ability to thrive, with the potential for transformation and positive growth forged through the searing experience. a family resilience orientation is finding broad application in strengths-based, collaborative, systemic training, practice, and research (walsh, a, b). a resilience-oriented approach with loss (a) contextualizes the distress; (b) attends to the challenges, suffering, and struggles of families; and (c) strengthens relational processes that support coping, adaptation, and growth. with a multisystemic lens, this approach draws on extended kin, social, community, sociocultural, and spiritual resources, and strengthens larger systemic/structural supports. to help families forge resilience in response to pandemic-related losses and the myriad of challenges they face, therapists can usefully apply this author's family resilience framework. designed as a practice map to guide intervention with families facing extreme adversity, it has been applied to traumatic and complicated losses in communities and with widespread disaster (walsh, ; b) . the covid- pandemic is a perfect storm of stressors, involving acute crisis and loss events, disruptions in many aspects of life, and ongoing multistress challenges with evolving conditions. this situation is so extreme that families are experiencing the strains of grief and sadness over so much loss, fears for loved ones, and anxieties about the future. how a family deals with stress and loss is crucial; therapists can help families strengthen key transactional processes for mutual support and mobilize active efforts to overcome challenges.
in gaining resilience, they strengthen bonds and resourcefulness in meeting future challenges. the walsh family resilience framework identified nine key processes--facilitative beliefs and practices--in three domains of family functioning: family belief systems, family organizational processes, and communication / problem-solving processes (walsh, , b). discussion in this paper focuses on the powerful influence of family belief systems in the covid- pandemic. shared facilitative beliefs are the heart and soul of family resilience. each family's belief system, rooted in multi-generational and sociocultural influences, comes to the fore in times of crisis and loss, shaping members' experience and their pathways in adaptation. family resilience is fostered by shared beliefs ( ) to make meaning of the crisis and challenges; ( ) to (re)gain a positive, hopeful outlook that supports active agency; and ( ) for transcendence: to rise above suffering and hardship through larger values, spiritual beliefs and practices, and experiencing transformations in new priorities, a sense of purpose, and deeper bonds. core beliefs ground and orient families, providing a sense of reality, normalcy, meaning, or purpose in life. well-being is fostered by expectations that others can be trusted; that communities are safe; that life is orderly and events predictable; and that society is just. when the losses and upheavals in this pandemic shatter such assumptions, as noted above, there is a deep need to restore order, meaning, and purpose (janoff-bulman, ). meaning making and recovery involve a struggle to understand what has been lost, how to build new lives, and how to prevent future tragedy. meaning reconstruction is a central process in healing in response to trauma involving both death and non-death losses (neimeyer & sands, ) . it involves sense-making efforts over time, not simply a final stage in resolving grief, an "aha" moment when everything makes sense. in this pandemic, at first it is hard to understand what is happening, without previous experience to relate it to. as we grapple with the implications, we gradually try to come to terms with the situation, what can be known and the uncertainties that persist. in families, meaning-making processes involve shared attempts to make sense of the loss, put it in perspective to make it more bearable, and, over time, integrate it into personal and relational life passage (nadeau, ) . resilience is strengthened in helping families gradually forge a sense of coherence through shared efforts to make loss-related challenges comprehensible, manageable, and meaningful to tackle (antonovsky & sourani, ) . this requires dealing with ongoing negative implications, including the loss of hopes and dreams. contextualizing members' distress as common and understandable in their situation--normal in abnormal times--can depathologize intense reactions and reduce blame, shame, and guilt. in the context of covid- , therapists need to explore both the factual circumstances of losses and the implications they hold for family members in their social and developmental contexts. commonly, they grapple with painful questions: "how did this happen?" "could it have been prevented?" "what will happen to us?" "what does it mean for our lives?"
such concerns persist when, for instance, the source of viral transmission, the development of vaccines and treatments, or the future of the economy remain unclear. causal attributions concerning blame, self-blame, and guilt can be strong when questions of failed responsibility or negligence arise, such as not following public health guidelines. meaning-making efforts and future planning are hampered by repeated unclear and inconsistent information by government authorities. frustrations may boil over in anger that more should have been done to prevent widespread viral contagion and economic losses. systemic therapists can help family members to voice such concerns, come to terms with reasonable limits of control in the situation, and seek greater accountability and leadership by those in charge at local and national levels. families may struggle to envision a new sense of normality, identity, and relatedness to adapt to altered conditions. they can become trapped in helplessly waiting to hear what will happen next or in the future. a sense of active agency is vital for resilience: what can we do about it? what are our options? clinicians can support efforts to gain and share helpful information and become involved in community efforts. helping professionals are cautioned not to ascribe meaning to a family's unique experience. our role is not to provide meaning for those who are struggling, but to facilitate their meaning-making process (frankl, ) . the multiple meanings of a particular loss evolve over time as they find expression in continuing patterns of interaction and are integrated with other life experiences. over time, adaptation involves weaving the painful experience and the resilience forged into the fabric of individual and collective identity and life passage. abundant research has found the importance of a positive outlook for resilience (walsh, b) . yet this should not be seen as relentless optimism and good cheer. in confronting significant challenges with covid- , it is common to experience discouraging setbacks. sadness and nostalgic yearning are intensified when former lives can't be restored. many persons report that there were times when they didn't know if they could face another day, or felt that life no longer had meaning, but with the support--or needs--of others, they vowed to carry on. family members' mutual encouragement bolsters active efforts to take initiative and to persevere. affirming individual and family strengths in the midst of difficulties can counter a sense of helplessness, failure, and despair as it reinforces shared pride, confidence, and a "can do" spirit. hope is most essential in times of overwhelm and despair, fueling energies and efforts to cope and rebuild lives. we hold onto hope in the midst of uncertainty. weingarten ( ) cautions us to practice reasonable hope and to avert false hopes. wishful thinking does not make a pandemic go away. flaskas ( ) notes the complex dynamics of hope and hopelessness within intimate relationships, embedded in family history, community, and social processes, which can support or undermine hope. in a couple, one partner may lose hope while the other holds hope for both. therapists can witness the coexistence of hope and hopelessness in a way that nurtures hope and yet emotionally holds both. in working with covid-related loss, we can help families reorient hope: as some hopes are lost, what can realistically be hoped for and worked toward?
support may be needed to tolerate prolonged uncertainties and lengthy recovery processes, while holding hope in future possibilities with sustained efforts. as studies have found, resilience is fostered by focusing efforts to master the possible, accepting that which is beyond control, and coming to terms with what cannot be changed (walsh, a, b). transcendent values and connections enable families to view losses and suffering beyond their immediate plight. cultural and spiritual connections are valuable resources to support adaptation, providing assistance to honor and grieve all that was lost and move forward with life (rosenblatt, ) . in the time of covid- , transcendent values and practices help families to endure and rise above losses and disruptions, by fostering meaning, harmony, connection, and purpose. they offer opportunities to reaffirm identity, relatedness, and core social values of caring and compassion for others. in times of loss and deep suffering, spiritual matters commonly come to the fore, whether based in religious or existential concerns (wright & bell, ) . clinicians are encouraged to attend to the spiritual dimension of experience to explore issues that constrain adaptation and to draw on spiritual resources that fit clients' preferences within and/or outside organized religion (walsh, b) . research has documented the positive effects of deep faith, belief in a higher power, prayer and meditative practices, and congregational support in times of crisis (koenig, ) . in this time of sequestering and social restrictions, connections with nature are important to nourish spirits--from soothing bonds with companion animals (walsh, ) , to the rhythm of waves on the shore, the songs of birds, and the hopeful renewal of life with new birth. the transcendent power of music and the creative arts fosters resilience, expressing unbearable sorrow and restoring the spirit to rise above adversity. in this prolonged period of angst, i find music most uplifting; it also connects me with my mother, a gifted musician, whom i lost too soon. times of great tragedy can bring out the best in the human spirit: ordinary people show extraordinary courage, compassion, and generosity in helping kin, neighbors, and strangers to recover and rebuild lives. for many, their spirituality is connected to a purposeful dedication to social justice or climate change activism. creativity is vital in our lives, as we need to invent new ways to overcome pandemic-related challenges. in some communities, individuals and multi-generational families are exploring safe ways to come together by creating "social pods"--contact clusters for interpersonal connection and practical support. mental health professionals need to transform ways of providing therapy when social distancing and face coverings constrain in-person "face-to-face" office sessions. therapists and clients are gaining new skills and comfort with telehealth therapy, despite the limitations. many notice a silver lining: increased access to therapy for those whose stress overload, incompatible schedules, disabilities, or distances from offices prevented in-person sessions. finding creative ways to celebrate important events, such as birthdays and graduations, can revitalize spirits and reconnect all with the rhythms of life.
one young couple, saddened when plans for an elaborate wedding had to be cancelled, instead held a simple, yet deeply meaningful backyard ceremony, under a homemade wooden arch covered with a trellis of white blossoms. we witnessed the couple's joyful union via zoom, along with family members across two continents, snapping memorable photos with a screen-saving click. they look forward to a festive post-pandemic party. whatever our adverse situation, we can make the most of it, practicing the "art of the possible": "do all you can, with what you have, in the time you have, in the place you are." more than surviving loss or managing stressful conditions, family processes in resilience can yield personal and relational transformation and positive growth. in struggling through loss and hardship, in active coping efforts, and in reaching out to others, families tap resources that they may not have drawn on otherwise, and gain new perspective on life (walsh, b) . similarly, studies of posttraumatic growth have found that individuals often emerge from life-shattering losses with remarkable transformations: gaining appreciation of life and new priorities; warmer, closer relationships; enhanced personal strengths; recognition of new possibilities or paths in life; and deepened spirituality (tedeschi & calhoun, ) . the experience of suffering and loss often sparks compassionate actions to benefit others or address harmful conditions. clinicians can encourage families to find pride, dignity, and purpose from their darkest times through altruistic initiatives. many report stronger bonds forged through shared dedication, such as mobilizing community action coalitions or medical research funding (walsh, b) . bereaved families can find strength to surmount heartbreaking loss and go on in meaningful lives by bringing benefit to others from their own suffering. one african-american family lost their beloved matriarch to covid- . she had worked tirelessly as a home healthcare provider but never had the healthcare herself that she needed. when the family sought testing and care for her symptoms of coronavirus, their community lacked essential resources that might have saved her life. her children were devastated by her loss, but agreed that she wouldn't want them to become consumed by anger and grief. they vowed to do something meaningful to honor her life and her memory. as her son related, "we want something good to come out of our tragedy. we're taking up fierce advocacy for changes in our healthcare system so everyone gets quality care, no community is left behind, and no family will suffer as ours has. she will be smiling down on us with pride." in the wake of loss, families cannot bring back a deceased loved one or recover all that was valued and lost, yet their suffering and struggle can yield new purpose and life priorities. many report that a major life challenge spurred them to reappraise their priorities and stimulated greater investment in meaningful pursuits. at the peak of covid- hospitalizations, as neighborhoods put up lawn signs thanking healthcare workers, a teenager in one family expressed outrage: "thanking them is nice, but we should value them by making sure they have the equipment they need and by paying them what they are worth!" she and her parents mobilized community members to lobby for changes. many become more keenly aware of urgent needs for children and families.
in the pandemic, parents are juggling incompatible demands of jobs, housework, childcare, and home schooling, with planned school openings precarious and daycare resources unavailable. gender disparities are starkly revealed for women who provide essential income and carry most responsibilities for homemaking and childrearing. with the economy reopening, many parents are between a rock and a hard place: forced to give up jobs to attend to children's needs. the difficulties experienced in home-based learning sharpened awareness of the vital importance of quality education and the undervalued and underpaid role of teachers in our society. it also exposed the striking lack of access for remote learning in under-resourced, low-income and largely minority neighborhoods, setting children back from achieving their potential. transforming new insights into meaningful actions requires initiative, persistence, and creative solutions. we are living through time out of the ordinary. with our life course seemingly on hold and future forecasts cloudy, we cope by trying to "be here now," focused on getting through each day and week. while we are restricted in our social space, we need not be trapped in time in the "here and now." time out: as the initial overwhelm with covid-related loss and disruption eases and we contemplate a long haul, it affords the opportunity to reflect on our personal and collective lives and to re-appraise our values and aspirations (bruner, ) . a crisis can be a wake-up call, heightening attention to what matters and who matters. in thinking more deeply about the "old normal" and "new normal" we realize that many aspects of our pre-covid lives that were normalized need to be changed for the better. as we expand our vision beyond our personal struggles, we see needs for broader systemic changes with more urgency. reconnecting with the past: we can learn and grow stronger from the past. this is the time to deepen understandings and connections to our past, to the joys and sorrows experienced. we can encourage family members to share sweet memories to revive spirits and bonds in these hard times. we can reminisce together over photos, make scrapbooks and pass on keepsakes to cherish. using technology or old-school pen and paper, adults and their children can interview family elders and record their life stories: how grandparents fell in love; what their lives were like in their times. in hearing about experiences of crisis and challenge, it's important to draw out accounts of resilient responses alongside the difficulties. how did they and their families get through the great depression and world war ii? the courage, tenacity, and ingenuity in dealing with past loss and hardships can inspire current efforts. moreover, gaining perspective on elders' lived experience can increase compassion for their shortcomings and deepen bonds (walsh, b) . re-envisioning the future. in a pandemic that is novel, complex, and changing, long-term forecasts are hazy. we must learn to live with considerable uncertainty with flexibility to adapt to new developments. many joke about making plans a, b, c and beyond. we can't control everything that will happen, but we can dream and direct our energies toward our preferred vision. when future hopes and dreams are lost, therapists can help family members to reorient hope and envision new possibilities. linking the past, present, and future. when death ends a life, it does not end relationships.
research finds that healthy adaptation to loss involves not a "letting go" or detachment, but rather a transformation from physical presence to continuing bonds (klass, ; stroebe, schut, & boerner, ) . these bonds can be sustained through spiritual connections, memories, stories, keepsakes, deeds, and legacies. in this time of covid- , families will need to find innovative ways to honor and sustain connections: through meaningful memorial events, whether virtual or postponed; in websites to share tributes and remembrances. in varied ways, family members can find meaning and resilience in "saying 'hullo' again" (white, ) and re-membering those who have been lost. where bonds were frayed, they can be healed. therapists can foster an evolutionary perspective that integrates painful experiences and yields meaning and hope for the future. present time / precious time. the pandemic sharpens our awareness of the fragility of life and jolts us not to take future time for granted. the inevitability of losing others becomes more salient: what would we regret--things unsaid, undone--if a loved one died, or as we faced our own impending death. loss and threatened loss can heighten appreciation of loved ones taken for granted and spur efforts to repair grievances in wounded bonds. time does not heal all wounds, but offers new perspectives, experiences, and connections that can help people forge new meaning and purpose in their lives. over time, we will need to integrate the pandemic experience into the chapters of our individual and shared lives, strengthening the relational connections that matter to us: with the families we were born into, those we choose, and our wider communities. there is no love--or life--without loss. we are all mourners now, trying to guide one another as we navigate our way forward and strive to make a better world out of tragedy. our resilience is relationally-based, nurtured and fortified through our interconnections. by facing our vulnerability and by supporting one another through the worst of times, we are better able to overcome daunting challenges to live and love fully. resilience is commonly thought of as "bouncing back," like a spring, to our pre-crisis norm. however, when events of this magnitude occur, we cannot return to "normal" life as we knew it. as our world changes, we must change with it. in the wake of the / terrorist attacks, i suggested that a more apt metaphor for resilience might be "bouncing forward" to face an uncertain future (walsh, ) . this involves constructing a new sense of normalcy as we recalibrate our lives to face unanticipated challenges ahead. over the ages, individuals, families, and communities have shown that, in coming together, they could endure the worst forms of suffering and loss, and with time and concerted effort, rebuild and grow stronger. the painful experiences in this pandemic will require time and shared reflection for meaning making, questioning old assumptions and grappling with a fundamentally altered conception of ourselves and our interconnections with all others in our shared world. taking a systemic view, the pandemic and our response will generate reverberations we can't foresee or control. mastering these challenges will require great wisdom and humanity in the months and years ahead.
family sense of coherence and family adaptation
the denial of death
loss, trauma, and human resilience: have we underestimated the human capacity to thrive after extremely aversive events?
ambiguous loss
actual minds / possible worlds
the neuroendocrinology of social isolation
disenfranchised grief
holding hope and hopelessness: therapeutic engagements with the balance of hope
man's search for meaning
covid- pandemic, unemployment, and civil unrest: underlying deep racial and socioeconomic divides
risk and prevention during the covid- pandemic
rituals in the time of covid- : imagination, responsiveness and the human spirit
shattered assumptions: towards a new psychology of trauma
on grief and grieving
loneliness: a signature mental health concern in the era of covid-
enhancing resilience: families and communities as agents for change
multisystemic resilience for children and youth in disaster: reflections in the context of covid- . adversity and resilience science
meaning-making in bereaved families: assessment, intervention, and future research
meaning reconstruction in bereavement: from principles to practice
, disaster victims speak: part . an empirical review of the empirical literature
the fullest look yet at the racial inequality of coronavirus
family grief in cross-cultural perspective
collective trauma, collective healing: promoting community healing in the aftermath of disaster
the dual process model of coping with bereavement: a decade on
continuing bonds in adaptation to bereavement: toward theoretical integration
beyond the concept of recovery: growth and the experience of loss
bouncing forward: resilience in the aftermath of
traumatic loss and major disasters: strengthening family and community resilience
human-animal bonds: i. the relational significance of companion animals
spiritual resources in family therapy
applying a family resilience framework in training, practice, and research: mastering the art of the possible
strengthening family resilience ( rd ed.)
complicated loss: fostering healing & resilience
loss and the family: a systemic perspective
living beyond loss: death in the family
bereavement: a family life cycle perspective
reasonable hope: construct, clinical applications, and supports
saying hullo again: the incorporation of the lost relationship in the resolution of grief
beliefs and illness: a model for healing
the myths of coping with loss
key: cord- -hiqqx a authors: abdellatif, amal; gatto, mark title: it's ok not to be ok: shared reflections from two phd parents in a time of pandemic date: - - journal: gend work organ doi: . /gwao. sha: doc_id: cord_uid: hiqqx a adopting an intersectional feminist lens, we explore our identities as single and co-parents thrust into the new reality of the uk covid- lockdown. as two phd students, we present shared reflections on our intersectional and divergent experiences of parenting and our attempts to protect our work and families during a pandemic. we reflect on the social constructions of 'masculinities' and 'emphasised femininities' (connell, ) as complicated influences on our roles as parents. finally, we highlight the importance of time and self-care as ways of managing our shared realities during this uncertain period.
through sharing reflections, we became closer friends in mutual appreciation and solidarity as we learned about each other's struggles and vulnerabilities. protecting your family is one of the most important roles you can play as a parent, but what happens when you cannot shield yourself or your loved ones from the threat of trauma (cobham & newnham, )? these reflections provide a glimpse into the lives of two phd parents. amal is a second-year phd student (international), an associate lecturer, and a single parent to a year old and year old. mark is a third-year phd student (home), a research assistant, and a co-parent with a -month-old baby ( months old during the reflections), and his wife works in the nhs. we are both exploring gender in the workplace for our phds. our shared stories of the uk covid- lockdown acted as both individual catharsis and collective empowerment. through sharing, we both learned more about our intersectional identities and our efforts to act as protective shields for our families during this traumatic time. importantly, we also chose to write together to expose and resist patriarchal models of gender through our divergent parental roles and converging feminist principles towards gender equity. we present our reflective stories in three acts represented in a single day: the morning, afternoon and evening of april th, , 'good friday' (additionally recorded as a shared time-log exercise). this method provided a reassuring structure for us to work with, while also framing our lived experiences thematically and over a longer time period. we include snapshots of our 'good friday' to highlight how our days progressed with various points of similarity and difference. we also include reflections on our developing response to the lockdown across a three-week period from the start of the uk lockdown on march rd until april th. we intentionally shared our reflections with each other after each new entry to enrich our collective writing experience, a process which had the additional benefit of deepening our friendship and mutual admiration. we were inspired in our collective writing approach by 'writing resistance together' (ahonen et al., ). we also drew on grenier ( ) as a model for constructing our shared autoethnography in a quasi-conversational form that expresses insights into our shared truths. we present our captured stories as both ordered and messy, including occasional spelling and grammatical errors; a messiness that reflects our lives. by making ourselves vulnerable in this way, we hope our reflective stories can pay tribute to the canon of emancipatory feminist writing (for examples, see cixous, cohen, & cohen, ; haraway, ) that challenges how we write about ourselves and our experiences as feminists who aspire to transcend gender binaries. "it is not sufficient to have an experience in order to learn. without reflecting on this experience, it may quickly be forgotten, or its learning potential lost." graham gibbs ( ). we share similar identities as phd students and parents. through our reflective diaries, we found our experiences converge around these two intersectional identities (crenshaw, ). yet, our diaries also reflect how our experiences diverge from the other identities we hold: gender, ethnicity, and co-parenting vs single parenting, all of which influenced our pandemic reality.
we echo rodriguez, holvino, fletcher, and nkomo ( ) in moving beyond the favoured triumvirate of gender, race and class to building a more complex ontology of intersecting socio-cognitive categories in our experiences. as we both believe in the principles of social equity, we examined and acknowledged where our identities were privileged or discriminated against in a pandemic. we feel this represents a foundational step of our feminist reflective praxis. our circumstances are different to other phd students': we must protect our study time, just as we must protect our time with our families, and these two worlds naturally cross over, which always means that we strain to separate and retain difference (hooks, ). we found resemblance between our diary reflections around feminism and examining our femininity and masculinity in the context of covid- . we reflected on mark's experiences of 're-embodied masculinity' (connell, ) to embrace his caring role against the cultural template of 'hegemonic masculinity'. at the same time, amal reflected on her resistance, through single-parenting, to the cultural template of 'emphasised femininity' (connell & messerschmidt, ). we present our non-conformant femininities and masculinities as a challenge to the institutionalised, 'assumed' and regulated practices around our gender (butler, ). : - after lunch, helped her this time to draw on the paper rather than the walls! also coloured an easter bunny and little eggs. (amal) my son (the older) was practicing some masculine domination over his sister, for example, asking her not to talk in a certain way as she is a 'girl'. i observed similar behaviour from my daughter towards my son: for example, seeing him wearing or playing with something that does not conform to his gender, she directly says "this is not for boys, this is for girls". here, i realised how i come out as a feminist rather than only a mother and intervene in the conversation. i try to challenge the way i was raised (and resisted). sartre ( , p. ). our experiences of time have been stretched, squeezed and snapped at various stages of this pandemic. as our working days stretched into evenings, we tended to squeeze our time with more intensity until, with fatigue, we snapped. some experiences converged, especially our moments of vulnerability and need to take time for self-care, while others diverged. our need to compartmentalise our time as parents and phd students acts as an ever-present pressure we both battle with. work always feels like a race against time. it is a race that i will never win. (mark) as phd parents, we cannot escape the ticking of time and looming deadlines; we constantly feel this pressure, even at the best of times. the lockdown has meant we cannot produce the volume, nor achieve the quality of focus and output, required to meet our own perceived expectations. with each passing minute, we experience prodding fingers and shouts for attention, which wrench us away from the immersion needed to produce our best work. in such a competitive discipline as academia, where success is measured on the 'publish or perish' continuum, our very survival as early career academics is at the forefront of our minds. "in the midst of every crisis, lies great opportunity." albert einstein. "to be a complete individual, equal to man, woman has to have access to the male world as man does to the female one, access to the other; but the demands of the other are not symmetrical in the two cases." beauvoir ( , p. ).
reviewing our reflections, we have both been comforted by our two lives lived in simultaneously divergent, yet similar, moments of vulnerability with our families. as we have shared reflections during these early days and weeks, we have grown closer as friends, despite the enforced distance we must observe. we have glimpsed behind the veil of our professional selves, allowed ourselves to share our precious private lives, and gained something far more valuable in our mutual admiration for each other as people. while amal has embodied the total parent, from teacher to chef, carer, friend and protector, while squeezing in her studies, mark has experienced periods of re-embodied masculinity as transient sole primary carer and support to his wife. our experiences are unequal, but we have both gained unplanned access to the 'other' as working parents, peers and friends. it is this 'other' that builds our case to embrace our vulnerabilities as parents towards a collective strength that could endure beyond this lockdown. is this a beginning to an end or an end to a new beginning? will we get back to the life we knew? in these ambiguous, uncertain times, there are plenty of unanswered questions. even with this ambiguity, we think everyone by now will already have a long 'to do after the lockdown' list. this could be something as simple as a friend's hug, a cautious handshake, staff kitchen gossip, a chilled drink at the pub, or that long overdue haircut! since the lockdown started, each household has become a huge conglomerate of organisations: we are the university, the school, the nursery, the gym, the restaurant, the library, and the hairdresser. will we see this as an ugly experience that brought all social inequalities and injustice to the surface? or will we see it as a great opportunity for family, self-discovery, open vulnerability, resilience, love, compassion and solidarity? will we value one another differently, or will it be a matter of time before we get back to the 'old' reality of busy bees buzzing around the hive? all we do know is that this shared experience has meant more to us than we anticipated. we helped each other see the light at the end of our separate tunnels and, out of our solidarity as feminists, our friendship has blossomed.
writing resistance together. gender, work & organization
the second sex
undoing gender. london: routledge
toward a field of intersectionality studies: theory, applications, and praxis. signs
the laugh of the medusa
trauma and parenting: considering humanitarian crisis contexts
masculinities. berkeley and los angeles
hegemonic masculinity: rethinking the concept
demarginalizing the intersection of race and sex: a black feminist critique of antidiscrimination doctrine, feminist theory, and antiracist politics. feminist legal theory
learning by doing: a guide to teaching and learning methods
autoethnography as a legitimate approach to hrd research: a methodological conversation at , feet
a cyborg manifesto: science, technology, and socialist-feminism in the late twentieth century
yearning: race, gender, and cultural politics
the theory and praxis of intersectionality in work and organisations: where do we go from here? gender, work & organization
being and nothingness: an essay on phenomenological ontology
we both wish to formally thank our supervisor, professor jamie callahan, whose continuous mentorship and empathy have ensured we felt fully supported, especially during these traumatic times.
we also want to thank jamie for encouraging us to write collectively and learn from each other. we both feel very fortunate to work with such an inspiring leader.
key: cord- -mb j zs authors: agapiou, sergios; anastasiou, andreas; baxevani, anastassia; christofides, tasos; constantinou, elisavet; hadjigeorgiou, georgios; nicolaides, christos; nikolopoulos, georgios; fokianos, konstantinos title: modeling of covid- pandemic in cyprus date: - - journal: nan doi: nan sha: doc_id: cord_uid: mb j zs the republic of cyprus is a small island in the southeast of europe and a member of the european union. the first wave of covid- in cyprus started in early march (imported cases) and peaked in late march-early april. the health authorities responded rapidly and rigorously to the covid- pandemic by scaling up testing, increasing efforts to trace and isolate contacts of cases, and implementing measures such as closures of educational institutions and travel and movement restrictions. the pandemic was also a unique opportunity that brought together experts from various disciplines, including epidemiologists, clinicians, mathematicians, and statisticians. the aim of this paper is to present the efforts of this new, multidisciplinary research team in modelling the covid- pandemic in the republic of cyprus. coronavirus disease , an infection caused by the novel coronavirus sars-cov- (coronaviridae study group of the international committee on taxonomy of viruses ( )) that first emerged in wuhan, china (zhu et al. ( )), counts now more than million cases and has claimed nearly , lives (world health organization ( )). despite some advances in therapy (beigel et al. ( )) and considerable progress in vaccine development, with some vaccine candidates reaching phase iii trials (jackson et al. ( )), there are still many gaps in our understanding of the new pandemic disease, including some epidemiological parameters. epidemic modelling is a fundamental component of epidemiology, especially with regard to infectious diseases. following the pioneering work of r. ross, w. kermack, and a. mckendrick in the early twentieth century (kermack and mckendrick ( )), the discipline has established itself and comprises a major source of information for decision makers. for instance, in the united kingdom, the scientific advisory group for emergencies (sage) is a major body that collects evidence from multiple sources, including inputs from mathematical modelling, to advise the british government on its response to the complex covid- situation.
here we report our work, including results from statistical and mathematical models used to understand the epidemiology of covid- in cyprus during the period from the beginning of march till the end of may . we propose a range of different models that capture different aspects of the covid- pandemic. the analysis consists of several methods applied to understand the evolution of the pandemic in the long and short run. we use change-point detection and count time series methods for short-term projections, and compartmental models for long-term projections. we estimate the effective reproduction number by using three different methods and obtain consistent results irrespective of the method used. results are cross-validated against observed data with considerable consistency. besides providing a comprehensive data analysis, we illustrate the importance of mathematical models to epidemiology. in this section, after a brief introduction to the testing protocol, we introduce the different techniques and models that have been used for the modelling and analysis of covid- infections in cyprus. the unit for surveillance and control of communicable diseases (usccd) of the ministry of health operates covid- surveillance. the lab-based surveillance system consists of laboratories ( public and private) that carry out molecular diagnostic testing for sars-cov- . sociodemographic, epidemiological, and clinical data of individuals with sars-cov- infection are routinely collected from laboratories and clinics, and reported to an electronic platform of the usccd. a confirmed covid- case is a person, symptomatic or asymptomatic, with a respiratory swab (nasopharynx and/or pharynx) positive for sars-cov- by a real-time reverse-transcription polymerase chain reaction (rrt-pcr) assay. cases are considered imported if they have travel history from an affected area within days of the disease onset. locally-acquired cases are individuals who test positive for sars-cov- and have the earliest onset date in cyprus without travel history from affected areas. people with symptomatic covid- are considered recovered after the resolution of symptoms and two negative tests for sars-cov- at least -hour apart from each other. for asymptomatic cases, the negative tests to document virus clearance are obtained at least days after the initial positive test. a person with a positive test at days is further isolated for one week and finally released at days after the initial diagnosis without further laboratory tests. testing approaches in the republic of cyprus included: a) targeted testing of suspect cases and their contacts; of repatriates at the airport and during their -day quarantine; of teachers and students when schools re-opened in mid-may; of employees in essential services that continued their operation throughout the first pandemic wave (e.g., customer services, public domain); and of health-care workers in public hospitals; and b) population screenings following random sampling in the general population of most districts and in two municipalities with increased disease burden. by june nd , , pcr tests had been performed ( , . per , population).
public health measures were taken in phases: period 1 ( - march, ) included closures of educational institutions and cancellation of public gatherings (> persons); period 2 ( - march, ) involved closure of entertainment venues (for instance, malls, theatres, etc.), allowance of person per square meters in public service areas, and restrictions on international travel (for example, access to the republic of cyprus was permitted only for specific persons and after sars-cov- testing); period 3 ( - march, ) included closure of most retail services; and period 4 ( march - may) included the suspension of incoming flights with few exceptions (for instance, repatriated cypriot citizens), a stay-at-home order, and a night curfew. change-point detection is an active area of statistical research that has attracted a lot of interest in recent years and plays an essential role in the development of the mathematical sciences. a non-exhaustive list of application areas includes financial econometrics (schröder and fryzlewicz, ), credit scoring (bolton and hand, ), and bioinformatics (olshen et al., ). the focus is on the so-called a posteriori change-point detection, where the aim is to estimate the number and locations of certain changes in the behaviour of a given data sequence. for a review of methods of inference for single and multiple change-points (especially in the context of time series) under the a posteriori framework, see jandhyala et al. ( ). detecting these change-points enables us to separate the data sequence into homogeneous segments, leading to a more flexible modelling approach. advantages of discovering such heterogeneous segments include interpretation and forecasting. interpretation naturally associates the detected change-points to real-life events and/or political decisions; in this way, a better description of the observed process and the impact of any intervention can be communicated. forecasting is based on the final detected segment, which is important as it allows for more accurate prediction of future values of the data sequence at hand. methods developed in this context are based on a given model. for the purpose of this paper, we work with the following signal-plus-noise model

x_t = f_t + σ ε_t, t = 1, 2, …, T, ( )

where x_t denotes the daily incidence of covid- cases and f_t is a deterministic signal with structural changes at certain time points; details about f_t are given below. the sequence {ε_t} consists of independent and identically distributed (iid) random variables with mean zero and variance equal to one, and σ > 0. we denote the number of change-points by K and their respective locations by r_1, r_2, …, r_K. the locations are unknown and the aim is to estimate them based on ( ). the daily incidence cases of the covid- outbreak in cyprus are investigated by using the following two models for f_t of ( ):

1. continuous, piecewise-linear signals: f_t = µ_{j,1} + µ_{j,2} t for t = r_{j-1}+1, r_{j-1}+2, …, r_j, with the additional constraint µ_{k,1} + µ_{k,2} r_k = µ_{k+1,1} + µ_{k+1,2} r_k for k = 1, 2, …, K. the change-points r_k satisfy f_{r_k - 1} + f_{r_k + 1} ≠ 2 f_{r_k}.

2. piecewise-constant signals: f_t = µ_j for t = r_{j-1}+1, r_{j-1}+2, …, r_j, and f_{r_j} ≠ f_{r_j + 1}.

in this work, we are using the isolate-detect (id) methodology of anastasiou and fryzlewicz ( ) to detect changes based on ( ) by using linear and constant signals, as described above; see appendix a- for a description of the method.
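to make the estimation target concrete, the short sketch below simulates data from model ( ) with a single kink in a piecewise-linear signal and locates the change-point by an exhaustive least-squares scan. this is only a toy illustration of the a posteriori detection problem, not the isolate-detect algorithm itself, and all numerical values are made up.

```python
import numpy as np

rng = np.random.default_rng(1)

# piecewise-linear mean with one change-point (kink) at r = 60
t = np.arange(120)
f = np.where(t <= 60, 1.0 + 0.8 * t, 1.0 + 0.8 * 60 + 2.0 * (t - 60))
x = f + 5.0 * rng.standard_normal(t.size)   # model ( ): x_t = f_t + sigma * eps_t

def sse_two_lines(x, r):
    """sum of squared errors when fitting separate lines left/right of r."""
    sse = 0.0
    for seg in (slice(0, r + 1), slice(r + 1, x.size)):
        tt = np.arange(x.size)[seg]
        coef = np.polyfit(tt, x[seg], 1)
        sse += np.sum((x[seg] - np.polyval(coef, tt)) ** 2)
    return sse

# exhaustive scan for the single best change-point
r_hat = min(range(5, t.size - 5), key=lambda r: sse_two_lines(x, r))
print("estimated change-point:", r_hat)
```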
the analysis of count time series data (like the daily incidence data we consider in this work) has attracted considerable attention; see kedem and fokianos ( , sec. & ) for several references and fokianos ( ) for a more recent review of this research area. in what follows, we take the point of view of generalized linear modelling as advanced by mccullagh and nelder ( ). this framework naturally generalizes the traditional arma methodology and includes several complicated data generating processes besides count data, such as binary and categorical data. in addition, fitting of such models can be carried out by likelihood methods; therefore testing, diagnostics and all types of likelihood arguments are available to the data analyst. the logarithmic function is the most popular link function for modelling count data; in fact, this choice corresponds to the canonical link of generalized linear models. suppose that {x_t} denotes a daily incidence time series and assume that, given the past, x_t is conditionally poisson distributed with mean λ_t. define ν_t ≡ log λ_t. a log-linear model with feedback for the analysis of count time series (fokianos and tjøstheim ( )) is defined as

ν_t = d + a_1 ν_{t-1} + b_1 log(x_{t-1} + 1). ( )

in general, the parameters d, a_1, b_1 can be positive or negative, but they need to satisfy certain conditions to obtain stability of the model. the inclusion of the hidden process makes the mean of the process depend on the long-term past values of the observed data. further discussion on model ( ) can be found in appendix a- , which also includes some discussion about interventions. an intervention is an unusual event that has a temporary or a permanent impact on the observed process. computational methods for discovering interventions, in the context of ( ), under a general mixed poisson framework have been discussed by liboschik et al. ( ). in this work, we will consider additive outliers (ao) defined by

ν_t = d + a_1 ν_{t-1} + b_1 log(x_{t-1} + 1) + Σ_{k=1}^{K} γ_k I(t = r_k),

where the notation follows closely that of sec. . and I(.) denotes the indicator function. inclusion of the indicator function shows that at the time point r_k the mean process has a temporary shift, whose effect is measured by the parameter γ_k, on the log-scale. other types of interventions can be included (see appendix a- ) whose effect can be permanent and, in this sense, intervention analysis and change-point detection methodologies address similar problems but from a different point of view. model fitting is based on maximum likelihood estimation and its implementation has been described in detail by liboschik et al. ( ).
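before moving on to the compartmental models, a minimal simulation of the log-linear feedback model ( ) may help fix ideas; the parameter values below are invented for illustration and are not the estimates obtained for the cyprus data.

```python
import numpy as np

rng = np.random.default_rng(2)

# hypothetical parameter values; stability roughly requires |a1 + b1| < 1
d, a1, b1 = 0.3, 0.5, 0.4

T = 200
x = np.zeros(T, dtype=int)   # simulated daily counts
nu = np.zeros(T)             # nu_t = log lambda_t

for t in range(1, T):
    nu[t] = d + a1 * nu[t - 1] + b1 * np.log(x[t - 1] + 1.0)
    x[t] = rng.poisson(np.exp(nu[t]))

print("simulated daily counts (last 10 days):", x[-10:])
```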
compartmental models in epidemiology, like the susceptible-infectious-recovered (sir) and susceptible-exposed-infectious-recovered (seir) models and their modifications, have been used to model infectious diseases since the early 's (see keeling and rohani ( ), nicolaides et al. ( ), among others). the basic assumptions for these models are the existence of a closed community, i.e. without influx of new susceptibles or mortality due to other causes, with a fixed population, say n, and also that the individuals who recover from the illness are immune and do not become susceptible again. in the basic seir model, at any point in time t, each individual is either susceptible (s(t)), exposed (e(t)), infectious (i(t)) or recovered (r(t), including death). the epidemic starts at time t = with one infectious individual, usually thought of as being externally infected, and the rest of the population being susceptible. people progress between the different compartments, and this motion is usually described through a system of ordinary differential equations that can be put in a stochastic framework. a variety of seir modifications and extensions exist in the literature, and a multitude of them emerged recently because of the covid- epidemic. in this work, we consider four such modifications, based on the models proposed in peng et al. ( ) and li et al. ( ) for the analysis of the covid- epidemic in wuhan and the rest of the chinese provinces. initially, we employ the seir model based on the meta-population model of li et al. ( ), but simplified to take into account only a single population. the novelty compared to the standard seir model is that this model takes into account the existence of undocumented/asymptomatic infections, which transmit the virus at a potentially reduced rate. the model tracks the evolution of four state variables at each day t, representing the number of susceptible, exposed, infected-reported and infected-unreported individuals, s(t), e(t), i_r(t), i_u(t) respectively. the parameters of the model are the transmission rate β (days⁻¹), the relative transmission rate µ representing the reduction in transmission for asymptomatic individuals, the average latency/incubation period z (days), the average infectious period d (days) and the reporting rate α representing the proportion of infected individuals which are reported. for a graphic description of the model see figure . the time evolution of the system is defined by the following set of differential equations (recall that n denotes the population size):

ds/dt = -β s(t) i_r(t)/n - µβ s(t) i_u(t)/n,
de/dt = β s(t) i_r(t)/n + µβ s(t) i_u(t)/n - e(t)/z,
di_r/dt = α e(t)/z - i_r(t)/d,
di_u/dt = (1 - α) e(t)/z - i_u(t)/d. ( )

following li et al. ( ), we use a stochastic version of this model with a delay mechanism. each term, say u, on the right-hand side of ( ) is replaced by a poisson random variable with mean u. at each day, we use the 4th order runge-kutta numerical scheme to integrate the resulting equations and obtain the values of the four state variables on the next day. for each new reported infection, we draw a gamma random variable with mean τ_d days to determine when this infection will be recorded. for the main analysis we use τ_d = days as the average reporting delay between the onset of symptoms and the recording of an infection; see also li et al. ( ). note that the results are robust with respect to the value of the reporting delay. the final output of this model is the number of recorded infections on each day t, y = y(t). we also use the meta-population model of li et al. ( ). it models the transmission dynamics in a set of populations, indexed by i, connected through human mobility patterns m_ij. this is implemented by incorporating information on human movement between the main districts of cyprus: nicosia, limassol, larnaca, paphos and ammochostos. in this case, i = , , , , and m_ij denotes the daily number of people traveling from district i to district j, i ≠ j. such information is based on the census data obtained from the cyprus statistical service. the time evolution of the four compartmental states in each district i is defined by the following set of coupled differential equations, which extend ( ) with mobility-exchange terms in the susceptible, exposed and unreported compartments:

ds_i/dt = -β s_i i_i^r/n_i - µβ s_i i_i^u/n_i + θ Σ_j m_{ij} s_j/(n_j - i_j^r) - θ Σ_j m_{ji} s_i/(n_i - i_i^r),
de_i/dt = β s_i i_i^r/n_i + µβ s_i i_i^u/n_i - e_i/z + θ Σ_j m_{ij} e_j/(n_j - i_j^r) - θ Σ_j m_{ji} e_i/(n_i - i_i^r),
di_i^r/dt = α e_i/z - i_i^r/d,
di_i^u/dt = (1 - α) e_i/z - i_i^u/d + θ Σ_j m_{ij} i_j^u/(n_j - i_j^r) - θ Σ_j m_{ji} i_i^u/(n_i - i_i^r), ( )

where the notation follows the notation given in sec. . . in addition to the four state variables, this model also updates at each time step the population n_i of each area i by n_i ← n_i + θ Σ_j m_{ij} - θ Σ_j m_{ji}, where the multiplicative factor θ is assumed to be greater than one to reflect under-reporting of human movement. like model ( ), model ( ) is implemented in its stochastic version with the same reporting-delay mechanism. further, we consider the meta-population model of peng et al. ( ).
this is a generalisation of the classical seir model, consisting of seven states: (s(t), p(t), e(t), i(t), q(t), r(t), d(t)). at time t, the susceptible cases s(t) become insusceptible p(t) at rate ζ, or exposed e(t) at rate β, that is, infected but not yet infectious, i.e. in a latent state. the exposed cases eventually become infectious at rate γ; infectious means they have the capacity to infect but are not yet quarantined q(t). the introduction of the new quarantined state q(t) into the classical seir model, formed by the infected cases at a constant rate δ, allows us to consider the effect of preventive measures. finally, the quarantined cases are split into cured cases r(t), at rate λ(t), and deaths d(t), at mortality rate κ(t). the model's parameters are the transmission rate β, the protection rate ζ, the average latent time γ⁻¹ (days), the average quarantine time δ⁻¹ (days), as well as the time-dependent cure rate λ(t) and mortality rate κ(t). the relations are characterized by the following system of difference equations:

s(t+1) - s(t) = -β s(t) i(t)/n - ζ s(t),
p(t+1) - p(t) = ζ s(t),
e(t+1) - e(t) = β s(t) i(t)/n - γ e(t),
i(t+1) - i(t) = γ e(t) - δ i(t),
q(t+1) - q(t) = δ i(t) - λ(t) q(t) - κ(t) q(t),
r(t+1) - r(t) = λ(t) q(t),
d(t+1) - d(t) = κ(t) q(t). ( )

the total population size is assumed to be constant and equal to n = s(t) + p(t) + e(t) + i(t) + q(t) + r(t) + d(t). according to the official reports, the numbers of quarantined cases, recovered cases and deaths due to covid- are available. however, the recovered and death cases are directly related to the number of quarantined cases, which plays an important role in the analysis, especially since the numbers of exposed (e) and infectious (i) cases are very hard to determine. the latter two are therefore treated as hidden variables. this implies that we need to estimate the four parameters ζ, β, γ⁻¹, δ⁻¹ and both the time-dependent cure rate λ(t) and mortality rate κ(t). notice here that, while the rest of the parameters are considered fixed during the pandemic, we allow the cure and mortality rates to vary with time. we expect that the former will increase with time, given that social distancing measures have been put in place, while the latter will decrease. finally, this is an optimization problem, and the methodology we have followed in order to address it can be found in appendix a- . the last model we consider is a modified version of a solution created by bettencourt and ribeiro ( ) to estimate the real-time effective reproduction number r_t using a bayesian approach on a simple susceptible-infected (si) compartmental model, in which the expected number of new cases on day t grows approximately as k_t ≈ k_{t-1} exp(γ(r_t - 1)), with γ the reciprocal of the serial interval. we use bayes' rule to update the beliefs about the true value of r_t based on our predictions and on how many new cases have been reported each day. having seen k new cases on day t, the posterior distribution of r_t is proportional to (denoted by ∝) the prior beliefs of the value of r_t times the likelihood of r_t given that we have recorded k new cases, i.e., p(r_t | k) ∝ p(r_t) × l(r_t | k). to make this iterative, every day that passes by we use last day's posterior p(r_{t-1} | k_{t-1}) as today's prior p(r_t). therefore, in general,

p(r_t | k_1, …, k_t) ∝ p(r_0) ∏_{s=1}^{t} l(r_t | k_s).

however, in the above model the posterior is influenced equally by all previous days. thus, we propose a modification, suggested in systrom ( ), that shortens the memory and incorporates only the last m days of the likelihood function, p(r_t | k) ∝ ∏_{s=t-m+1}^{t} l(r_t | k_s). the likelihood function is modelled with a poisson distribution. recall the compartmental models discussed in sec. . . and . . ; for these, the effective reproduction number is given by r_e = αβd + (1 - α)µβd; see the supplement of li et al. ( ).
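a compact numerical sketch of this windowed bayesian update is given below. the serial-interval constant, the window length and the case counts are all placeholder choices for illustration, not the values used in the analysis.

```python
import numpy as np
from scipy.stats import poisson

GAMMA = 1 / 7            # assumed reciprocal of the serial interval (days^-1)
M = 7                    # memory window in days (a tuning choice)
r_grid = np.linspace(0.1, 6.0, 300)

def rt_posteriors(cases):
    """grid posterior over R_t from daily case counts (flat prior)."""
    cases = np.asarray(cases)
    # poisson rate for each candidate R: k_{t-1} * exp(GAMMA * (R - 1))
    lam = np.outer(np.maximum(cases[:-1], 1), np.exp(GAMMA * (r_grid - 1.0)))
    loglik = poisson.logpmf(cases[1:, None], lam)
    posts = []
    for t in range(loglik.shape[0]):
        lp = loglik[max(0, t - M + 1): t + 1].sum(axis=0)  # last M days only
        p = np.exp(lp - lp.max())
        posts.append(p / p.sum())
    return np.array(posts)

# toy usage with made-up counts
posts = rt_posteriors([2, 4, 8, 12, 20, 24, 20, 15, 12, 9, 7, 5])
print("median R_t on the final day:",
      r_grid[np.searchsorted(posts[-1].cumsum(), 0.5)])
```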
we estimate r_t in ( ) during consecutive fortnight periods, for which its value is considered to be constant. to achieve this we estimate the parameters of each model, also assumed to be constant for each fortnight, using daily incidence data for cyprus. to estimate the parameters we employ bayesian statistics; that is, we postulate prior distributions on the parameters and incorporate the data and the model (through the likelihood) to obtain the posterior distributions on the parameters. the posterior distributions capture our updated beliefs about the parameters after combining the prior with the observed data; see, for example, bernardo and smith ( ). for the model defined by ( ), we consider the whole area of cyprus as a single uniform population. for this case, the observations are not sufficiently informative to identify all five parameters of the model. a solution would be to enforce identifiability by postulating strongly informative prior distributions on the parameters. instead, we choose to make the assumption that the parameters z, d and µ have globally constant values, fixed over time. in particular we set d = . and µ = . , as estimated in li et al. ( ), and z = . , which appears to be the globally accepted mean incubation period. we thus only need to infer the reporting rate α and the transmission rate β, which vary both between different fortnights and for different countries, because of the amount of testing and the degree of adherence to the social distancing policies. on the other hand, the model defined by ( ) is sufficiently informative to infer all six model parameters. all computational methods, prior modelling and assumptions in relation to both compartmental models discussed in sec. . . and . . are given in appendix a- . in addition to the above methods, we further consider the method of cori et al. ( ) as a benchmark to compare all methodologies for estimating the effective reproduction number. by the end of may , cases of covid- were diagnosed in the republic of cyprus. of these, . % were male (n = ) and the median age was years (iqr: - years). the setting of potential exposure was available for cases ( . %). of these, . % (n = ) had a history of travel or residence abroad during a -day period before the onset of symptoms. locally-acquired infections were ( . %), with . % (n = ) related to a health-care facility in one geographical setting (cluster a) and . % (n = ) clustered in another setting (cluster b). the epidemic curve by date of sampling and date of symptom onset is shown in figure . the number of cases started to decline in april, reaching very low levels in late may.
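before turning to the long-term projections, a minimal sketch of how system ( ) is iterated forward may be useful; every numerical value below (population, rates, initial state) is an invented placeholder rather than a fitted quantity.

```python
import numpy as np

def peng_step(state, beta, zeta, gamma, delta, lam_t, kappa_t, N):
    """one daily update of the generalised seir difference equations."""
    S, P, E, I, Q, R, D = state
    new_exposed     = beta * S * I / N
    new_protected   = zeta * S
    new_infectious  = gamma * E
    new_quarantined = delta * I
    new_recovered   = lam_t * Q
    new_deaths      = kappa_t * Q
    return (S - new_exposed - new_protected,
            P + new_protected,
            E + new_exposed - new_infectious,
            I + new_infectious - new_quarantined,
            Q + new_quarantined - new_recovered - new_deaths,
            R + new_recovered,
            D + new_deaths)

N = 875_000                         # rough population figure, an assumption
state = (N - 5, 0, 3, 2, 0, 0, 0)   # toy initial state
for t in range(120):
    lam_t = 0.08 * (1 - np.exp(-0.05 * t))   # toy saturating cure rate
    state = peng_step(state, beta=0.9, zeta=0.03, gamma=1/3,
                      delta=1/5, lam_t=lam_t, kappa_t=0.002, N=N)
print("quarantined after 120 days:", round(state[4]))
```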
in this section, we investigate the long-term impact of covid- on cyprus. towards this, we give long-term projections for the daily incidence and death rates. we fit system ( ) to covid- data collected during the period from the st of march till the st of may in cyprus. we treat all the reported cases without making the distinction between local and imported. the model parameters are estimated using the methodology described in appendix a- . once the model is fitted to the data, it can be used to forecast the epidemic. in order to study the evolution of the model as new data are added, and the quality of the respective forecasts, we have fitted model ( ) using four different time periods. specifically, the four datasets were formed using the daily reported incidences from the beginning of the observation period until and including / / , / / , / / and / / , respectively. the dates were chosen according to the change-points detected using the methodology described in section . ; see also section . . the fitted model in each case was used to predict the pandemic's evolution until / / . in figure , we show the number of predicted exposed plus infectious cases (green solid lines) and the number of predicted recovered cases (blue solid lines) for the duration of the prediction period, and compare them to the observed cases, which are indicated by circles and triangles. we use circles for data that have been used in the prediction and triangles for the observed data that are used for validation. visual inspection shows that, after a period of about two months during which the model overestimates the number of active cases and underestimates the number of recovered (figure , top), model ( ) was able to capture accurately the evolution of the pandemic (figure , bottom). the performance of the predictions can also be evaluated by means of the relative error (re) of the model predictions y_t with respect to the observed data x_t. the re for the recovered cases equals . %, . %, . % and . % for the four time periods respectively, with the corresponding re for the active cases being high in the beginning ( %, . %) but then dropping considerably ( . % and . %), reflecting the fact that the model caught up with the evolution of the pandemic. overall, system ( ) gives adequate predictions, especially when data from longer time periods are used. figure shows the number of deaths and their respective predictions using subsets of data as described above. in the duration of the first data set there were no deaths registered, and therefore the prediction was identically zero, giving also an re equal to % (figure , top left). as more deaths are registered, the model's ability to predict the correct number of deaths improves (figure ). the recovery rate λ(t) is modelled by an exponential-type function with non-negative parameters λ_i, the idea being that the recovery rate, as time increases, should converge towards a constant. in figure (left), the fitted recovery rate (solid line) is plotted against the observed number of recovered cases (stars). finally, the model can be used to estimate the unobserved numbers of exposed, e(t), and infectious, i(t), cases during the development of the pandemic. the maximum number of exposed cases occurs on the st of march and is estimated to be cases (figure , right, blue line), with the maximum of infectious individuals ( ) being attained on the of march . we can observe a delay in the transition of exposed to infectious in the order of days, which suggests a day latent time of covid- .
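a small helper for this kind of prediction assessment is sketched below; it uses one common aggregate convention for the relative error, which may differ from the exact definition used in the paper, and the numbers are purely illustrative.

```python
import numpy as np

def relative_error(y_pred, x_obs):
    """aggregate relative error of predictions against observations
    (one common convention, stated here as an assumption)."""
    y_pred = np.asarray(y_pred, dtype=float)
    x_obs = np.asarray(x_obs, dtype=float)
    return np.sum(np.abs(y_pred - x_obs)) / np.sum(np.abs(x_obs))

# toy numbers for illustration only
print(f"RE = {relative_error([95, 110, 130], [100, 115, 125]):.1%}")
```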
) and the result obtained by using the above intervention analysis, we observe that both approaches give similar prediction intervals that include future observed incidence data. indeed, the observed data for the week ahead ( / / - / / ) were , , , , , and cases. recall the effective reproduction number r t defined by ( ). we perform bayesian analysis using ( ) (see for the data concerning all incidents, the first recorded incident was on / / , hence, as detailed in appendix a- , we initialize our analysis of the outbreak days earlier, on / / . figure posterior probabilities of the event r t < . analysis using the full data. next, we consider the estimation model described in li et al. ( ) where cyprus is divided in subpopulations (nicosia, limassol, larnaca, paphos, ammochostos) and the mobility patterns between them are taken into account (as described in metapopulation compartmental model ). the effective reproduction number is given by ( ). the compartmental model structure was integrated stochastically using a th order runge-kutta (rk ) scheme. we use uniform prior distributions on the parameters of the model, with ranges similar to li et al. ( ) as follows: relative trasmissibility . ≤ µ ≤ , movement factor ≤ θ ≤ . ; latency period . ≤ z ≤ . ; infectious period ≤ d ≤ . for the infection rate we choose . ≤ β ≤ . before the lockdown and ≤ β ≤ . after the lockdown and for the reporting rate we choose . ≤ α ≤ . note that the ensemble adjustment kalman filter (eakf, described in appendix a- ) is not constrained by the initial priors and can migrate outside these ranges to obtain system solutions. for the initialization purposes we assume that all districts are potential origins with an undocumented infected and exposed population drawn from a uniform distribution [ , ] a week before the first documented case. initial condition does not affect the outcome of the inference. transmission model does not explicitly represent the process of infection confirmation. thus, we mapped simulated documented infections to confirmed cases using a separate observational delay model. in this delay model, we account for the time interval between a person transitioning from latent to contagious and observational confirmation of that individual infection through a delay of t d . we assume that t d follows a gamma distribution g(a, τ d /a) where τ d = days and a = . as derived by li et al. ( ) using data from china. inference is robust with respect to the choice of τ d . for the inference we use incidents from local transmission in cyprus as were reported by the ministry of health. in figure we plot the time evolution of the weekly effective reproduction number r t . while at the beginning of the outbreak the effective reproduction number was close to . , after the lockdown measures, it dropped below and stayed consistently there until the end of june . we then use the methodology proposed by bettencourt and ribeiro ( ) and recently modified by systrom ( ) as described in detail in section . . . for that method we also use the incidents from local transmission in cyprus as were reported by the ministry of health. figure shows the daily median value as well as the % credible intervals for the effective reproductive number using that method. the work presented in this report is the result of intensive collaboration of an interdisciplinary team which was formed shortly after the pandemic started. 
the main motivation was to give guidance to cypriot government for controlling this major infectious disease outbreak. accordingly, we developed models and methods that are of critical importance in appreciating how this disease is developing and what will be its next stage and in what kind of time framework. this is a valuable information for outbreak control, resource utilization and to initiate again the normal daily life. we followed diverse paths to accomplish this by appealing to different modeling approaches and methods. we have shown that the government interventions were successful on containing covid- in cyprus, by the end of may, even though the disease initiated with a high value of r t . the government lockdown helped reduce the reproduction number, as the data shows, by applying different methodology. in addition, we have shown by change-point methodology and time series analysis the effect of various measures taken and have developed short-term predictions. the models we applied are based on simple surveillance data, seem to work well, give similar results, and can certainly help epidemiologists and public health officials quantify and understand changes in the transmission intensity of future epidemics and the drivers of these changes. finally, we feel that our approach to bring together experts from various fields avoids misunderstandings and gaps in communication between scientists, and maximizes the effectiveness of efforts to deal with public health emergencies. the existing change-point detection techniques for the scenarios mentioned in section . are mainly split into two categories based on whether the change-points are detected all at once or one at a time. the former category mainly includes optimization-based methods, in which the estimated signal is chosen based on its least squares or log-likelihood criterion penalized by a complexity rule in order to avoid overfitting. the most common example of a penalty function is the bayesian information criterion (bic); see schwarz ( ) and yao ( ) for details. in the latter category, in which change-points are detected one at a time, a popular method is binary segmentation, which performs an iterative binary splitting of the data on intervals determined by the previously obtained splits. even though binary segmentation is conceptually simple, it has the disadvantage that at each step of the algorithm, it looks for a single change-point, which leads to its suboptimality in terms of accuracy, especially for signals with frequent change-points. one method that works towards solving this issue is the isolate-detect (id) methodology of anastasiou and fryzlewicz ( ) ; it is the method used for the analysis carried out in this paper. the concept behind id is simple and is split into two stages; firstly, the isolation of each of the true change-points within subintervals of the domain [ , , . . . , t ], and secondly their detection. the basic idea is that for an observed data sequence of length t and with λ t a positive constant, id first creates two ordered sets of k = t /λ t right-and left-expanding intervals as follows. the j th right-expanding interval is for clarity of exposition, we give below a simple example. figure covers a specific case of two change-points, r = and r = . we will be referring to phases and involving six and four intervals, respectively. these are clearly indicated in the plot and they are only related to this specific example, as for cases with more change-points will entertain more such phases. 
at the beginning, s = , e = t = , and we take the expansion parameter λ t = . then, r gets detected in {x s * , x s * + , . . . , x e }, where s * = . recall ( ) and that the parameters d, a , b can be positive or negative but they need to satisfy certain conditions so that we obtain stable behavior of the process. note that the lagged observations of the response x t are fed into the autoregressive equation for ν t via the term log(x t− + ). this is a one-to-one transformation of x t− which avoids zero data values. moreover, both λ t and x t are transformed into the same scale. covariates can be easily accommodated by model ( ). when a = , we obtain an ar( ) type model in terms of log(x t− + ). in addition, the log-intensity process of ( ) can be rewritten as after repeated substitution. hence, we obtain again that the hidden process {ν t } is determined by past functions of lagged responses, i.e. ( ) belongs to the class of observation driven models; see cox ( ) . models like ( ) ( ), is determined by a latent process. therefore a formal linear structure, as in the case of gaussian linear time series model does not hold any more and interpretation of the interventions is a more complicated issue. hence, a method which allows detection of interventions and estimation of their size is needed so that structural changes can be identified successfully. important steps to achieve this goal are the following; see chen and liu ( ) : . a suitable model for accommodating interventions in count time series data. . derivation of test procedures for their successful detection. . implementation of joint maximum likelihood estimation of model parameters and outlier sizes. . correction of the observed series for the detected interventions. all these issues and possible directions for further developments of the methodology have been addressed by liboschik et al. ( ) under the poisson and mixed poisson distributional framework. ( ) according to the official reports, the number of quarantined cases (q), recovered (r) and deaths (d), due to covid- , are available. however, the recovered and death cases are directly related to the number of quarantine cases, which plays an important role in the analysis, especially since the numbers of exposed (e) and infectious (i) cases are very hard to determine. the latter two are therefore treated as hidden variables. this implies that we need to estimate the four parameters ζ, β, γ − , δ − and both the time dependent cure rate λ(t) and mortality rate κ(t). this is an optimization problem that we solve as follows: first we allow the latent time γ − to vary between and days and for each fixed γ − , we explore its influence on the rest of the parameters. the system of differential equations ( ) is solved numerically using the runge-kutta numerical scheme. the left plot of figure shows that the protection rate ζ and the transmission rate β both attain their corresponding maximum value when γ − is equal to days. note that ζ takes values between . and . , while β converges very fast to . the reciprocal of the quarantine time δ − is increasing with the latent time γ − . one would suspect that longer latent time results to higher transmission rate and as the latent time increases almost every unprotected person will be infected after a direct contact with a covid- patient. the right plot of figure shows the effect of the latent time on the total number of infected cases (exposed and infectious e(t) + i(t)) but not yet quarantined. 
the peak of the infection was achieved between the st and the th of march, depending on the latent time with the estimated number of infected people ranging between and , depending again on the latent time considered. hence, once the latent time γ − is fixed, the fitting performance depends on the values of ζ, β and δ − . after a small sensitivity analysis the latent time was finally determined as days. the mortality rate κ(t) is constantly very small and almost equal to zero, therefore we have not attempted to fit any function to it. for the cure rate λ(t) we have fitted the exponential function given in , the idea behind being that with time the recovery should converge to a constant rate. for the parameter estimation we have used a modified version of the matlab code given by cheynet ( ) because cyprus is a small country and this fact needs to be taken properly into account. figure : sensitivity analysis on the parameters for the model defined by ( ). the influence of the latent time γ − on the protection rate ζ, the transmission rate β and the quarantine time δ − (left plot ), the sum of exposed and infectious cases e(t) + i(t) (right plot). we present a bayesian analysis, for the model defined by ( ) consideration. in particular, in the first period (when the number of tests was relatively low) we employ a symmetric prior around the value α = . , while for later periods (when the number of targeted and random tests increased) we let the prior become progressively skewed towards . for the transmission rate β > , in the first period we use a gamma( / , / ) prior, which puts high probability around , while for later periods we use an exponential( ) prior which puts more mass closer to zero. this choice reflects the existence of super-spreaders in the early stages of the outbreak with higher probability compared to later on. in each time-period under consideration we also need to initialize the outbreak in cyprus. for the first period in both datasets, we use a uniform prior supported in { , , . . . , } on the number of exposed and the number of undocumented infected days before the first recorded incident. the two priors are independent, while the number of susceptible individuals is taken equal to cyprus' population and the number of infected-reported equal to zero. for later periods, we use as priors on the four state variables, their posterior distributions at the end of the previous period (corrected appropriately based on the observation at the end of the previous period). following li et al. ( ) , we assume that the daily number of reported cases are independent gaussian random variables and use an empirical variance given as by recalling that y(t) denotes the number of infected cases at day t. this allows us to build a gaussian likelihood for the parameters α and β. combining this likelihood with the prior distributions, we can deduce a formula for the posterior distribution on α, β. this distribution is not available in a closed form, hence in order to compute posterior estimates and their respective uncertainty quantification, we need to sample it. in the relatively simple setting of model , it is feasible to employ markov chain monte carlo methods, (see robert and casella ( ) ), in order to sample the posterior (namely, we use an independence sampler). this is in contrast to the model defined by ( ) in sec. . . , see li et al. 
this is in contrast to the model defined by ( ) in sec. . . ; see li et al. ( ), where one has to use the ensemble adjustment kalman filter (eakf), which introduces some approximations to the posterior distribution due to the more complex meta-population structure. originally developed for use in weather prediction, the eakf assumes that both the prior distribution and the likelihood are gaussian, and thus fully characterized by their first two moments (mean and variance), and it adjusts the prior distribution to a posterior using bayes' rule deterministically. the update scheme for ensemble members is computed using bayes' rule (posterior ∝ prior × likelihood) via the product of the two gaussian distributions (see li et al. ( ) for the implementation). we report the results obtained after fitting a piecewise-constant signal plus noise model, as described in sec. . the scenario here is that at each change-point we have a sudden jump in the mean level of the signal; the results are shown in figure . data and code are available at github (https://github.com/chrisnic /covid_cyprus).
detecting multiple generalized change-points by isolating single ones
remdesivir for the treatment of covid- - preliminary report
bayesian theory
real time bayesian estimation of the epidemic potential of emerging infectious diseases
statistical fraud detection: a review
joint estimation of model parameters and outlier effects in time series
generalized seir epidemic model (fitting and computation)
a new framework and software to estimate time-varying reproduction numbers during epidemics
the species severe acute respiratory syndrome-related coronavirus: classifying -ncov and naming it sars-cov-
statistical analysis of time series: some recent developments
impact of non-pharmaceutical interventions (npis) to reduce covid- mortality and healthcare demand
statistical analysis of count time series models: a glm perspective
log-linear poisson autoregression
an mrna vaccine against sars-cov- - preliminary report
inference for single and multiple change-points in time series
regression models for time series analysis
modeling infectious diseases in humans and animals
a contribution to the mathematical theory of epidemics
substantial undocumented infection facilitates the rapid dissemination of novel coronavirus (sars-cov- )
tscount: an r package for analysis of count time series following generalized linear models
generalized linear models
hand-hygiene mitigation strategies against global disease spreading through the air transportation network
circular binary segmentation for the analysis of array-based dna copy number data
epidemic analysis of covid- in china by dynamical modeling
monte carlo statistical methods
adaptive trend estimation in financial time series via multiscale change-point-induced basis recovery
estimating the dimension of a model
the metric we need to manage covid- : rt, the effective reproduction number
coronavirus disease (covid- ) - situation report
estimating the number of change-points via schwarz' criterion
a novel coronavirus from patients with pneumonia in china
key: cord- -bycskjtr authors: mönke, gregor; sorgenfrei, frieda a.; schmal, christoph; granada, adrián e. title: optimal time frequency analysis for biological data - pyboat date: - - journal: biorxiv doi: . / . . . sha: doc_id: cord_uid: bycskjtr methods for the quantification of rhythmic biological signals have been essential for the discovery of function and design of biological oscillators.
Advances in live measurements have allowed recordings of unprecedented resolution, revealing a new world of complex heterogeneous oscillations with multiple noisy, non-stationary features. However, our understanding of the underlying mechanisms regulating these oscillations has been lagging behind, partially due to the lack of simple tools to reliably quantify these complex non-stationary features. With this challenge in mind, we have developed pyBOAT, a Python-based, fully automatic, stand-alone software that integrates multiple steps of non-stationary oscillatory time series analysis into an easy-to-use graphical user interface. pyBOAT implements continuous wavelet analysis, which is specifically designed to reveal time-dependent features. In this work we illustrate the advantages of our tool by analyzing complex non-stationary time-series profiles. Our approach integrates data visualization, optimized sinc-filter detrending, amplitude envelope removal and a subsequent continuous-wavelet-based time-frequency analysis. Finally, using analytical considerations and numerical simulations, we discuss unexpected pitfalls in commonly used smoothing and detrending operations.

Oscillatory dynamics are ubiquitous in biological systems. From the transcriptional to the behavioral level, these oscillations can range from milliseconds in the case of neuronal firing patterns up to years for the seasonal growth of trees or the migration of birds (Goldbeter et al. [ ], Gwinner [ ], Rohde and Bhalerao [ ]). To gain biological insight from these rhythms, it is often necessary to implement time-series analysis methods to detect and accurately measure key features of the oscillatory signal. Computational methods that enable analysis of periods, amplitudes and phases of rhythmic time series data have been essential to unravel function and design principles of biological clocks (Lauschke et al. [ ], Ono et al. [ ], Soroldoni et al. [ ]). Here we present pyBOAT, a framework and software package with a focus on usability and generality of such analysis. Many readily available time series analysis methods rely on the assumption of stationary oscillatory features, i.e. that oscillation properties such as the period remain stable over time. A plethora of methods based on the assumption of stationarity have been proposed; these can be divided into those working in the frequency domain, such as fast Fourier transforms (FFT) or Lomb-Scargle periodograms (Lomb [ ], Ruf [ ]), and those working in the time domain, such as autocorrelations (Westermark et al. [ ]), peak picking (Abraham et al. [ ]) or harmonic regressions (Edwards et al. [ ], Halberg et al. [ ], Naitoh et al. [ ], Straume et al. [ ]). In low-noise systems with robust and stable oscillations, these stationary methods suffice to reliably characterize oscillatory signals. Recordings of biological oscillations, however, frequently exhibit noisy and time-dependent features such as drifting period, fluctuating amplitude and trend. Animal vocalization (Fitch et al. [ ]), temporal changes in the activatory pathways of somitogenesis (Tsiairis and Aulehla [ ]), or reversible and irreversible labilities of properties in the circadian system due to aging or environmental factors (Pittendrigh and Daan [ ], Scheer et al. [ ]) are typical examples where systematic, often non-linear changes in oscillation period occur.
In such cases, the assumption of stationarity is questionable and often not valid, hence the need for non-stationarity-based methods that capture time-dependent oscillatory features. Recently, the biological data analysis community has developed tools that implement powerful methods tailored to specific steps of time-series analysis, such as rhythmicity detection (Hughes et al. [ ], Thaben and Westermark [ ]), de-noising and detrending, and the characterization of non-stationary oscillatory components (Leise [ ], Price et al. [ ]). To extract time-dependent features of non-stationary oscillatory signals, methods can be broadly divided into those that rely on operations using a moving time window (e.g. the wavelet transform) and those that embed the whole time series into a phase-space representation (e.g. the Hilbert transform). These two families are complementary, having application-specific advantages and disadvantages, and in many cases both are able to provide equivalent information about the signal (Quiroga et al. [ ]). Due to its inherent robustness in handling noisy oscillatory data and its interpretability advantages, we implemented a continuous-wavelet-transform approach at the core of pyBOAT. As a software package, pyBOAT combines multiple steps in the analysis of oscillatory time series in an easy-to-use graphical user interface that requires no prior programming knowledge. With only two user-defined parameters, pyBOAT is able to proceed without further intervention with optimized detrending, amplitude envelope removal, spectral analysis, detection of the main oscillatory components (ridge detection), readout of oscillatory parameters and visualization plots (Figure a). pyBOAT is developed under an open-source license, is freely available for download and can be installed on multiple operating systems. In the first section of this work we lay out the mathematical foundations at the core of pyBOAT. In the subsequent section we describe artifacts generated by widely used smoothing and detrending techniques and how they are resolved within pyBOAT. We then describe the theory behind spectral readouts in the special case of complex amplitude envelopes, and finalize the manuscript with a short description of the user interface and software capabilities.

Figure caption (fragment): a Morlet wavelet of a given scale s, shown together with a sweeping signal whose instantaneous period coincides with the Morlet of that scale exactly at τ; bottom panel: result of the convolution of the sliding Morlet Ψ_(s,τ)(t) along the signal f(t) - the power quickly decreases away from τ, and the curve corresponds to one row of the wavelet power spectrum of panel d); c) synthetic signal with periods sweeping between two values; d) the wavelet power spectrum shows the time-resolved (instantaneous) periods.

In this section we aim to lay down the basic principles of wavelet analysis as employed in our signal analysis tool, while the more mathematical subtleties are deferred to the appendix. The classic approach to frequency analysis of periodic signals is the well-known Fourier analysis. Its working principle is the decomposition of a signal f(t) into sines and cosines, known as basis functions. These harmonic components have no localization in time but are sharply localized in frequency: each harmonic component carries exactly one frequency, effective everywhere in time. Thus, straightforward Fourier analysis underperforms in cases of time-dependent oscillatory features, such as when the period of the oscillation changes in time (Figure c).
The goal behind wavelets is to reach an optimal compromise between time and frequency localization (Gabor [ ]). Gabor introduced a Gaussian-modulated harmonic component, also known as the Morlet wavelet:

Ψ(t) = π^(-1/4) e^(iω₀t) e^(-t²/2).

The basis functions for time-frequency analysis are then generated from this mother wavelet by scaling and translation:

Ψ_(s,τ)(t) = s^(-1/2) Ψ((t − τ)/s).

Varying the time localization τ slides the wavelet left and right on the time axis. The scale s changes the center frequency of the Morlet wavelet according to ω_center(s) = ω₀/s (see also the appendix). Higher scales therefore generate wavelets with lower center frequency. The Gaussian envelope suppresses the harmonic component with frequency ω_center farther away from τ, thereby localizing the wavelet in time (Figure b, top panel). The frequency ω_center(s) is conventionally taken as the Fourier-equivalent (or pseudo-) frequency of a Morlet wavelet with scale s. It is noteworthy that wavelets are in general not as sharply localized in frequency as their harmonic counterparts (Figure S ). This is a trade-off imposed by the uncertainty principle to gain localization in time (Gröchenig [ ]). The wavelet transform of a signal f(t) is given by the following integral expression:

W(τ, s) = ∫ f(t) (1/√s) Ψ̄((t − τ)/s) dt.

For a fixed scale, this equation has the form of a convolution, as denoted by the '∗' operator, where Ψ̄ denotes the complex conjugate of Ψ. For an intuitive understanding it is helpful to consider the above expression as the cross-correlation between the signal and the wavelet of scale s (or center frequency ω_center(s)). The translation variable τ slides the wavelet along the signal. Since the wavelet decays quickly away from τ, only the instantaneous correlation of the wavelet with center frequency ω_center and the signal around τ contributes significantly to the integral (Figure b, middle and lower panels). Using an array of wavelets with different frequencies (or periods) allows scanning for multiple frequencies in the signal in a time-resolved manner. The result of the transform, W: f(t) → F(τ, ω), is a complex-valued function of two variables: frequency ω and time localization τ. In the following, we implicitly convert scales to frequencies via the corresponding central frequencies ω_center(s) of the Morlet wavelets. To obtain a physically meaningful quantity, one defines the wavelet power spectrum:

P^f_W(τ, ω) = |W(τ, ω)|² / σ²,

where we adopted the normalization with the variance σ² of the signal from Torrence and Compo [ ], as it allows for a natural statistical interpretation of the wavelet power. By stacking the transformations in frequency order, one constructs a two-dimensional time-frequency representation of the signal, in which the power itself is usually color-coded (Figure d), using a dense set of frequencies to approximate the continuous wavelet transform. It is important to note that the time-averaged wavelet power spectrum is an unbiased estimator of the true Fourier power spectrum P^f_F of a signal f (Percival [ ]). This allows Fourier power spectra to be compared directly with the wavelet power. White noise is the simplest noise process and may serve as a null hypothesis. Normalized by variance, white noise has a flat mean Fourier power of one for all frequencies; hence a variance-normalized wavelet power of one likewise corresponds to the mean expected power for white noise, P^f_W,WN(ω) = 1 (Figure c). This serves as a universal unit with which to compare different empirical power spectra.
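As an illustration of these definitions, here is a minimal NumPy sketch of a Morlet-based continuous wavelet power spectrum. It follows the continuous-integral conventions written out above; the base frequency ω₀ = 2π and the exact power normalization are assumptions for the sketch and differ in detail from pyBOAT's internals.

```python
import numpy as np

OMEGA0 = 2 * np.pi  # Morlet base frequency; the exact value is a convention

def morlet(t):
    # Gaussian-modulated harmonic (mother wavelet), standard normalization.
    return np.pi ** -0.25 * np.exp(1j * OMEGA0 * t) * np.exp(-t ** 2 / 2)

def cwt_power(signal, dt, periods):
    """Variance-normalized Morlet wavelet power; rows = periods, cols = time."""
    x = signal - signal.mean()
    var = x.var()
    n = len(x)
    power = np.empty((len(periods), n))
    for i, T in enumerate(periods):
        s = T * OMEGA0 / (2 * np.pi)              # scale for pseudo-period T
        half = min((n - 1) // 2, int(4 * s / dt)) # truncate the wavelet support
        t = np.arange(-half, half + 1) * dt / s
        w = np.conj(morlet(t)) * (dt / np.sqrt(s))
        # For a real-valued signal the power is unaffected by conjugation.
        power[i] = np.abs(np.convolve(x, w, mode="same")) ** 2 / var
    return power

# Example: a noisy chirp whose period sweeps between two values
dt, n = 1.0, 500
inst_period = np.linspace(20, 60, n)
phi = 2 * np.pi * np.cumsum(dt / inst_period)     # phase = integral of frequency
sig = np.cos(phi) + 0.5 * np.random.randn(n)
P = cwt_power(sig, dt, periods=np.linspace(10, 80, 100))
```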
Extending these arguments to random fluctuations of the Fourier spectrum allows the calculation of confidence intervals on wavelet spectra. If a background spectrum P_f(ω) is available, confidence power levels can easily be calculated as a scaled background spectrum, C · P_f(ω). Assuming normality for the distribution of the complex Fourier components of the background spectrum, one can derive that the power itself is chi-square distributed (Chatfield [ ]). Thus, picking a desired confidence quantile of the χ² distribution with two degrees of freedom gives the scaling factor C for the background spectrum. Only wavelet powers greater than this confidence level are then considered to indicate oscillations with the chosen confidence; the interested reader may find more details in Torrence and Compo [ ]. For white noise the confidence level is frequency-independent (Figure b), and the resulting constant power threshold should be considered the absolute minimum power required to report 'oscillations' in a signal. It should be noted that, especially for biological time series, white noise is often a poor choice of null model, due to correlations present also in non-oscillatory signals (see also Supplementary Information A. ). A possible solution is to estimate the background spectrum from the data itself; this, however, is beyond the scope of this work.

Optimal filtering - do's and don'ts. A biological recording can be decomposed into the components of interest and those elements which blur and challenge their analysis, most commonly noise and trends. Various techniques for smoothing and detrending have been developed to deal with these issues. Often overlooked is the fact that both smoothing and detrending operations can introduce spectral biases, i.e. attenuation and amplification of certain frequencies. In this section we lay out a mathematical framework to understand and compare the effects of these two operations, show examples of the potential pitfalls, and at the same time provide a practical guide to avoid these issues. Finally, we discuss how pyBOAT minimizes most of these common artifacts. The operation which removes the fast, high-frequency (low-period) components of a signal is colloquially called smoothing. This is most commonly done as a sliding-time-window operation (convolution). In general terms, given a window function w(t), the smoothed signal is given as:

f_s(t) = (f ∗ w)(t).

By the convolution theorem, the Fourier transform of the smoothed signal is simply the product of the individual Fourier transforms. It follows that the Fourier power spectrum of the smoothed signal reads:

P^(f_s)_F(ω) = |ŵ(ω)|² P^f_F(ω).

A few steps of Fourier algebra show that, after variance normalization, the original power spectrum P^f_F gets modified by the low-pass response of the window function |ŵ|², scaled by the ratio of variances σ²_f / σ²_(f_s). Even without resorting to mathematical formulas, smoothing and its effect on time-frequency analysis can easily be grasped visually. A broad class of filtering methods falls into the category of convolutional filtering, meaning that some operation is applied to the data in a sliding window, e.g. moving-average, LOESS or Savitzky-Golay filtering (Savitzky and Golay [ ]). The moving-average filter is a widely used smoothing technique, defined simply by a box-shaped window that slides in the time domain. In Figure [ ] we summarize the spurious effects that this filter can have on noisy biological signals.
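The white-noise confidence threshold just described can be computed in one line; a short sketch (the two degrees of freedom come from the complex-valued transform, and the division by two reflects the mean-one normalization assumed here):

```python
from scipy.stats import chi2

# For variance-normalized power with mean 1 under white noise, the power is
# distributed as a chi-square with 2 degrees of freedom, divided by 2.
confidence = 0.95
C = chi2.ppf(confidence, df=2) / 2   # approximately 3.0
print(f"wavelet power threshold at {confidence:.0%} confidence: {C:.2f}")

# Against a non-flat background spectrum P_bg(omega), the frequency-dependent
# threshold would be C * P_bg(omega).
```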
White noise, commonly used as a descriptor for fluctuations in biological systems, is a random signal with no dominant period. The lack of a dominant period can be seen from a raw white noise signal (Figure a) and, more clearly, from the almost flat landscape of its power spectrum (Figure b). Applying a moving-average filter with a window of a few times the signal's sampling interval ∆t to the raw white noise yields a smoothed noise signal (Figure c) that now has multiple dominant periods, as seen by the emergence of high-power islands in Figure d. Comparing the original spectrum (Figure b) with the spectrum of the smoothed white noise (Figure d), it becomes evident that smoothing introduces a strong increase in wavelet power at longer periods. In other words, smoothing perturbs the original signal by creating multiple high-power islands at long periods, also referred to as spurious oscillations. To better capture the statistics behind these smoothing-induced spurious oscillations, it is best to look at the time-averaged wavelet spectrum. Figure e shows the mean expected power after smoothing white noise with a moving-average filter. A zone of attenuated short periods becomes visible - the sloppy stop-band for periods around ∆t. These are the fast, high-frequency components which get removed from the signal. For larger periods, however, the wavelet power gets amplified several-fold. It is this gain, given by σ²_f/σ²_(f_s), which leads to spurious results in the analysis. As stated before, variance-normalized white noise has a mean power of one for all frequencies or periods (P^f_W,WN(ω) = 1). This allows a straightforward numerical method to estimate a filter response |ŵ(ω)|²: apply the smoothing operation to simulated white noise and time-average the resulting wavelet spectra. This Monte Carlo approach works for every (also non-convolutional) smoothing method. Results for the Savitzky-Golay filter applied to white noise signals can be found in Supplementary Figure S . Convolutional filters will in general produce more gain, and hence more spurious oscillations, with increasing window size in the time domain. If smoothing, even with a rather small window, already potentially introduces false-positive oscillations, what does that mean for practical time-frequency analysis? For wavelet analysis the answer is plain and clear: smoothing is simply not needed at all. A close inspection of the unaltered white-noise wavelet spectrum shown in Figure b reveals the same structures at higher periods as in the spectrum of the smoothed signal (Figure d). The big difference is that, even though these random apparent oscillations get picked up by the wavelets, their low power directly indicates their low significance. As wavelet analysis (see previous section) is based on convolutions, it already has power-preserving smoothing built in. As an illustration, Figure f shows a raw noisy signal with lengthening period (a noisy chirp) and the corresponding power spectrum (Figure f, lower panel). Without any smoothing, the main periodic component can be clearly identified in the power spectrum. Thus, wavelet analysis does not require smoothing for the detection of oscillations even in very noisy signals. For all other spectral analysis methods which rely on explicit smoothing, the characteristics of the background noise and the signal-to-noise ratio are crucial to avoid detecting spurious oscillations - and both are usually quantities not readily available a priori or in practice.
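The Monte Carlo estimate of a filter response described above can be sketched in a few lines. Here, for brevity, ensemble-averaged Fourier power replaces the time-averaged wavelet power; for stationary noise both estimate the same response curve:

```python
import numpy as np

def mc_filter_response(smooth, n_samples=2000, n_points=1024):
    """Estimate the variance-rescaled filter response by smoothing white noise
    and averaging the Fourier power; the white-noise baseline is ~1 per bin."""
    gain = np.zeros(n_points // 2 + 1)
    for _ in range(n_samples):
        noise = np.random.randn(n_points)
        smoothed = smooth(noise)
        # Renormalizing to unit variance includes the sigma_f^2/sigma_fs^2 factor.
        smoothed = (smoothed - smoothed.mean()) / smoothed.std()
        gain += np.abs(np.fft.rfft(smoothed)) ** 2
    return gain / (n_samples * n_points)

def moving_average(x, m=5):
    return np.convolve(x, np.ones(m) / m, mode="same")

response = mc_filter_response(moving_average)
# Bins with response > 1 mark frequency bands where the filter amplifies
# renormalized noise power - the source of spurious oscillations.
```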
Complementary to smoothing, an operation which removes the slow, low-frequency components of a signal is generally called detrending. Strong trends can dominate a signal by effectively carrying most of its variance and power. There are at least two broad classes of detrending techniques: parametric fitting and convolution based. Both aim to estimate the trend as a function over time, to be subtracted from the original signal. A parametric fit is always the best choice if the deterministic processes leading to the trend are known and well understood. An example is the so-called photobleaching encountered in time-lapse fluorescence imaging experiments; here an exponential trend can often be fitted well to the data based on first-principles considerations (Song et al. [ ]). However, there are often other slow processes, like cell viability or cells drifting in and out of focus, which usually cannot be readily described parametrically. For all these cases, convolutional detrending with a window function w(t) is a good option and can be written as:

f_d(t) = f(t) − (f ∗ w)(t).

The trend here is nothing other than the smoothed original signal, f(t) ∗ w(t), with the oscillatory signal itself falling into the stop-band of the low-pass filter, so that ideally no signal components are captured and subtracted. Using basic algebra in the frequency domain, we obtain an expression relating the window w(t) to the power spectrum of the original signal f(t): as in the case of smoothing, the high-pass response of the window function is given by |1 − ŵ(ω)|², scaled by the ratio of variances σ²_f/σ²_(f_d). In strong contrast to smoothing, there is no overall gain in power within the range of periods passing through the filter (the passband). For the moving average and other time-domain filters, however (see also Figure S ), there is no simple passband region. Instead there are rippling artifacts in the frequency domain, meaning that some periods get amplified and others attenuated by substantial fractions. To showcase why this can be problematic, we constructed a synthetic chirp signal sweeping through a range of periods T₁ to T₂, this time modified by a linear and an oscillatory trend (Figure a). The oscillatory component of the trend was chosen for clarity with a specific time scale given by its period T_trend, which is three times the longest period found in the chirp signal. Strongly depending on the specific window size chosen for the moving-average filter, there are various effects on both the time and the frequency domain (shaded area in Figure b), such as the introduction of amplitude envelopes and/or incomplete trend removal (Figure c). A larger window size is better for reducing the effect of the ripples inside the passband; however, the filter decay (roll-off) towards larger periods then becomes very slow, which in turn means that trends cannot be fully eliminated. Smaller window sizes perform better at detrending, but their passband can be dominated by ripples (see also Supplementary Figure S ). In practice, sticking to filters originally designed for the time domain, without oscillatory signals in mind, can easily lead to biased results of a time-frequency analysis. However, given the moderate gains of the detrending filter response, there is a much smaller chance of mistakenly detecting spurious oscillations compared to the case of smoothing.
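The rippling high-pass response of moving-average detrending can be inspected directly, since the Fourier transform of a length-M box window is a Dirichlet kernel; a short numerical sketch (frequencies in cycles per sample):

```python
import numpy as np

def movavg_highpass_response(M, freqs):
    """High-pass (detrending) response |1 - w_hat(f)|^2 of an M-sample
    moving average; w_hat is the Dirichlet kernel of the box window."""
    w_hat = np.sinc(freqs * M) / np.sinc(freqs)
    return np.abs(1 - w_hat) ** 2

freqs = np.linspace(1e-4, 0.5, 2000)   # exclude f = 0 to avoid the DC bin
for M in (10, 30, 90):
    r = movavg_highpass_response(M, freqs)
    # The maximum exceeds 1: some passband periods are amplified by the
    # negative sidelobes of the Dirichlet kernel, others are attenuated.
    print(f"M={M}: max passband gain {r.max():.2f}")
```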
The sinc filter, also known as the optimal filter in the frequency domain, is a function with a constant value of one in the passband. In other words, frequencies which pass through are neither amplified nor attenuated. Accordingly, this filter is also constantly zero in the stop-band, i.e. for the frequencies (or periods) which should be filtered out. This optimal low-pass response can be formulated in the frequency domain simply as:

ŵ_sinc(ω) = 1 for |ω| ≤ ω_c, and 0 otherwise.

Here ω_c is the cut-off frequency, an infinitely sharp transition band dividing the frequency range into pass- and stop-band. It is effectively a box in the frequency domain (dashed lines in Figure d). Note that the optimal high-pass, or detrending, response simply and exactly swaps the pass- and stop-band. In the time domain, via the inverse Fourier transform, this can be written as:

w_sinc(t) = sin(ω_c t) / (π t).

This function is known as the sinc function, hence the name sinc filter; an alternative name used in electrical engineering is brick-wall filter. In practice this optimal filter has a non-zero roll-off, as shown for two different cut-off periods (T_c = 2π/ω_c) in Figure d. The sinc function mathematically requires the signal to be of infinite length; therefore, every practical implementation uses windowed sinc filters (Smith et al. [ ]; see also Supplementary Information S for possible implementations). Strikingly, even then there are no ripples or other artifacts in the frequency response of the windowed sinc filter, and hence also the 'real-world' version allows for a bias-free time-frequency analysis. As shown in Figure e, the original signal can be exactly recovered via detrending. To showcase the performance of the sinc filter, we numerically compared it against two other common methods, the Hodrick-Prescott and the moving-average filter (Figure f). The stop- and passband separation of the sinc filter is clearly the best, although the Hodrick-Prescott filter with a parameterization as given by Ravn and Uhlig [ ] also gives acceptable results (see also Supplementary Figure S ). The moving average is generally inadvisable, due to its amplification right at the start of the passband. In addition to its advantages in practical signal analysis, the sinc filter also allows one to analytically calculate the gains from filtering pure noise (see also Supplementary Information A. ). The gain, and therefore the probability of detecting spurious oscillations, introduced by smoothing is typically much larger than that introduced by detrending. However, if much of the energy of the noise is concentrated in the slow, low-frequency bands, detrending with small cut-off periods alone can also yield substantial gains (see Figures S and S ). Importantly, when using the sinc filter, the background spectrum of the noise will always be uniformly scaled by a constant factor in the passband; there is no mixing of attenuation and amplification as for time-domain filters like the moving average (Figure b and c). If the spectrum of the noise can be estimated, or an empirical background spectrum is available, the theory presented in A. allows the correct confidence intervals to be calculated directly.
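A minimal windowed-sinc detrending sketch along these lines is shown below; the Kaiser window and the kernel-length rule are illustrative choices, not pyBOAT's exact implementation:

```python
import numpy as np

def sinc_detrend(signal, dt, cutoff_period, kernel_factor=3):
    """Remove the trend (periods longer than cutoff_period) with a
    windowed-sinc low-pass; the detrended signal is signal - trend.
    Assumes the signal is longer than the resulting kernel."""
    fc = dt / cutoff_period                      # cut-off in cycles per sample
    half = int(kernel_factor * cutoff_period / dt)
    n = np.arange(-half, half + 1)
    kernel = 2 * fc * np.sinc(2 * fc * n)        # ideal low-pass impulse response
    kernel *= np.kaiser(len(kernel), beta=8.6)   # window to curb truncation ripples
    kernel /= kernel.sum()                       # enforce unit DC gain
    trend = np.convolve(signal, kernel, mode="same")
    return signal - trend, trend

# Usage: detrended, trend = sinc_detrend(raw_signal, dt=10.0, cutoff_period=300.0)
```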
Extraction of the instantaneous period, amplitude and phase of the main oscillatory component is of prime interest for the practitioner. In this section we show how to obtain these important features using wavelet transforms as implemented in pyBOAT. From the perspective of wavelet power spectra, such main oscillatory components are characterized by concentrated, time-connected regions of high power. Wavelet ridges are a means to trace these regions in the time-period plane. For the vast majority of practical applications, a simple maximum ridge extraction is sufficient. This maximum ridge can be defined as:

r(t_k) = argmax_T P^f_W(t_k, T), for k = 1, ..., N,

with N being the number of sample points in the signal. Thus the ridge r(t_k) maps every time point t_k to a row of the power spectrum, and therewith to a specific instantaneous period T_k (Figure c and d). Evaluating the power spectrum along a ridge gives a time series of powers, P^f_W(t_k, r(t_k)). Setting a power threshold is recommended to avoid evaluating the ridge in regions of the spectrum where noise dominates (in Figure c a fixed threshold is used). Alternatively to simple maximum ridge detection, more elaborate strategies for ridge extraction have been proposed (Carmona et al. [ ]). A problem often encountered when dealing with biological data is a general time-dependent amplitude envelope (Figure a). Under our wavelet approach, the power spectrum is normalized with the overall variance of the signal. Consequently, regions with low signal amplitude but robust oscillations are nevertheless represented as very low power, blurring them with the spectral floor (Figure c). This leads to the impractical situation where even a noise-free signal with an amplitude decay will show very low power towards its end (Figures e, f and S ), defeating the statistical purpose of the normalization. A practical solution in this case is to estimate an amplitude envelope and subsequently normalize the signal with this envelope (Figure a and b). We specifically show here non-periodic envelopes, estimated by a sliding window (see also Methods). After normalization, lower amplitudes are no longer penalized, and an effective power-thresholding of the ridge is possible (Figure d and f). A limitation of convolutional methods, including wavelet-based approaches, are edge effects: at the edges of the signal the wavelets only partially overlap with it, leading to the so-called cone of influence (COI) (Figure c and d). Even though the periods remain close to their actual values, the phases and especially the power should not be trusted inside the COI (see Discussion and Supplementary Figure S ). Once the trace of consecutive wavelet power maxima has been determined and thresholded, evaluating the transform along it yields the instantaneous envelope amplitude, normalized amplitude and phases (see Figure e, f and g).
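Given a power matrix such as the one computed in the earlier sketch (rows indexed by period, columns by time), maximum ridge extraction with power thresholding reduces to a few lines:

```python
import numpy as np

def extract_ridge(power, periods, threshold=3.0):
    """Maximum ridge: for each time point pick the period of maximal power;
    mask time points whose ridge power falls below the confidence threshold."""
    ridge_idx = np.argmax(power, axis=0)               # r(t_k)
    t_idx = np.arange(power.shape[1])
    ridge_power = power[ridge_idx, t_idx]              # P(t_k, r(t_k))
    inst_period = np.asarray(periods, dtype=float)[ridge_idx]
    inst_period[ridge_power < threshold] = np.nan      # below confidence
    return inst_period, ridge_power

# Usage with the cwt_power sketch above:
# inst_period, ridge_power = extract_ridge(P, np.linspace(10, 80, 100))
```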
Applications - After introducing the different time series analysis steps using synthetic data for clarity, in this paragraph we discuss examples of pyBOAT applications to real data. To showcase the versatility of our approach, we chose datasets obtained from different scientific fields. In Figure a we display COVID-19 infections in Italy as reported by the European Centre for Disease Prevention and Control (ECDC). A sinc-filter trend identification, with a cut-off period chosen on the order of days, reveals a steep increase in newly reported infections at the beginning of March and a steady decline after the beginning of April. Subtracting this non-linear trend clearly exposes oscillations with a stable period of one week (Figure b; see Supplementary Figure S a for the power spectrum analysis). Similar findings were recently reported by Ricon-Becker et al. [ ]. The signals shown in Figure c show cycles in hare-lynx population sizes, as inferred from the yearly number of pelts trapped by the Hudson Bay Company; the data were taken from Odum and Barrett [ ]. The corresponding power spectra are shown in the supplement (Figure S b) and reveal a fairly stable periodicity of around ten years. After extracting the instantaneous phases with pyBOAT, we calculated the time-dependent phase differences, as shown in Figure b. Interestingly, the phase difference slowly varies between being almost perfectly out of phase and being in phase for a few years. The next example signal is a single-cell trajectory of a U2OS cell carrying a Geminin-CFP fluorescent reporter (Granada et al. [ ]). Geminin is a cell-cycle reporter, accumulating over the course of the cell cycle and then steeply declining during mitosis. Applying pyBOAT to these non-sinusoidal oscillations reveals the cell-cycle length over time (Figure f), showing a slowing down of cell-cycle progression for this cell. Ensemble dynamics for a control and a cisplatin-treated population are shown in Supplementary Figure S c. The final example data set is taken from Mönke et al. [ ]; here populations of MCF7 cells were treated with different dosages of the DNA-damaging agent NCS, which elicits a dose-dependent and heterogeneous p53 response, tracked in the individual cells for each condition (Figure g). pyBOAT also features readouts of the ensemble dynamics: Figure h shows the time-dependent period distribution in each population, and Figure i the phase coherence over time. The latter is calculated as

R(t) = (1/N) |Σ_j e^(iφ_j(t))|.

It ranges from zero to one and is a classical measure of synchronicity in an ensemble of oscillators (Kuramoto [ ]). The strongly stimulated cells (high NCS dose) show stable oscillations with a period of several hours and remain more phase-coherent after an initial drop in synchronicity. The moderately stimulated cells start to slow down on average already after the first pulse; both the spread of the period distribution and the low phase coherence indicate a much more heterogeneous response. Two individual cells and their wavelet analysis are shown in Supplementary Figure S d.

Graphical interface - The extraction of period, amplitude and phase is the final step of our proposed analysis workflow, which is outlined in the screen captures of the corresponding figure. The user interface is separated into several sections. First, the 'DataViewer' allows visualization of the individual raw signals, the trend determined by the sinc filter, the detrended time series and the amplitude envelope. Once satisfactory parameters have been found, the actual wavelet transform, together with the ridge, is shown in the 'Wavelet spectrum' window. After ridge extraction, the instantaneous observables can be plotted in a 'Readout' window. Each plot produced from the interface can be panned, zoomed and saved separately if needed. Once the fidelity of the analysis has been checked for individual signals, it is also possible to run the entire analysis as a 'batch process' for all imported signals. One aggregated result which we found quite useful is to determine the 'rhythmicity' of a population by creating a histogram of the time-averaged powers of the individual ridges; a classification of signals into 'oscillatory' and 'non-oscillatory' based on this distribution, e.g. using classical thresholds (Otsu [ ]), is a potential application. Examples of the provided ensemble readouts are shown in Figures h and i and Supplementary Figure S c. Finally, pyBOAT also features a synthetic signal generator, allowing one to quickly explore its capabilities even without a suitable dataset at hand. A synthetic signal can be composed of up to two different chirps, and AR(1) noise and an exponential envelope can be added to simulate challenges often present in real data (see also Material and Methods and Supplementary Figure S ).
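As a concrete illustration of the ensemble readouts described above, the phase-coherence measure R(t) can be computed directly from the instantaneous phases of the individual oscillators:

```python
import numpy as np

def phase_coherence(phases):
    """Kuramoto order parameter R(t) = |mean_j exp(i * phi_j(t))|.
    `phases` has shape (n_oscillators, n_timepoints); R(t) is 1 for a
    perfectly synchronized ensemble and ~0 for uniformly scattered phases."""
    return np.abs(np.mean(np.exp(1j * phases), axis=0))

# Example: two oscillators with slightly different periods
t = np.linspace(0, 10, 200)
phases = np.vstack([2 * np.pi * t / 3.0, 2 * np.pi * t / 3.3])
R = phase_coherence(phases)   # varies as their phase difference grows
```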
Installation and user guidelines for pyBOAT can be found in the GitHub repository.

Figure caption (fragment): the signal gets detrended with a cut-off period given in hours, and an amplitude envelope is estimated via a sliding window of a given size in hours; see labels and main text for further explanations. The example trajectory displays a circadian rhythm of about 24 h and is taken from the data set published in Abel et al. [ ].

Recordings of biological oscillatory signals can be conceptualized as an aggregate of multiple components: those coming from the underlying system of interest, and additional confounding factors such as noise, modulations and trends that can disguise the underlying oscillations. In cases of variable period with noisy amplitude modulation and non-stationary trends, the detection and analysis of oscillatory processes is a non-trivial endeavour. Here we introduced pyBOAT, a novel software package that uses a statistically rigorous method to handle non-stationary rhythmic data. pyBOAT integrates pre- and post-processing steps without making a priori assumptions about the sources of noise or the periodicity of the underlying oscillations. We showed how the signal-processing steps of smoothing, detrending, amplitude envelope removal, signal detection and spectral analysis can be resolved by our hands-off, standalone software. Artifacts introduced by time series analysis methods themselves are a common problem that inadvertently disturbs the results of time-frequency analysis of periodic components (Wilden et al. [ ]). Here we first analyzed the effects of data smoothing on a rhythmic noisy signal and showed how common smoothing approaches disturb the original recordings by introducing non-linear attenuations and gains to the signal (Figures , S and S ). These gains easily lead to spurious oscillations that were not present in the original raw data. Such artifacts have long been characterized for the commonly used moving-average smoothing method, known as the Slutzky-Yule effect (Slutzky [ ]). Using an analytical framework, we describe the smoothing process as a filter operation in the frequency domain. This allows us to quantify and directly compare the effects of diverse smoothing methods by means of response curves. Importantly, we show how any choice of smoothing unavoidably transforms the original signal in a non-trivial manner. One potential reason for the prevalence of this problem is that practitioners often apply a smoothing algorithm without quantitatively comparing the spectral components before and after smoothing. pyBOAT avoids this problem by implementing a wavelet-based approach that per se evades the need to smooth the signal. Another source of artifacts are detrending operations. We therefore studied the spectral effects that signal detrending has on rhythmic components. Our analytical and numerical approaches allowed us to compare the spectral effects of different detrending methods in terms of their response curves (see Figure ). Our results show that detrending also introduces non-trivial boosts and attenuations to the oscillatory components of a signal, strongly depending on the background noise (Figures S and S ). In general there is no universal approach; optimally, a detrending model is based on information about the sources generating the trend. In cases without prior information with which to formulate a parametric detrending in the time domain, we suggest that the safest method is the convolution-based sinc filter, as it is an 'ideal' (step-function) filter in the frequency domain (Figures c and S ).
Furthermore, we compared the performance of the sinc filter with two other methods commonly applied to remove non-linear trends in data (Figure f), i.e. the moving-average (Díez-Noguera [ ]) and Hodrick-Prescott (Myung et al. [ ], Schmal et al. [ ], St. John and Doyle [ ]) filters. In addition to smoothing and detrending, amplitude normalization by means of amplitude envelope removal is another commonly used data-processing step that pyBOAT is able to perform. We further show how, for decaying signals, amplitude normalization ensures that the main oscillatory component of interest can be properly identified in the power spectrum (Figure a to d). This main component is identified by a ridge-tracking approach that can then be used to extract instantaneous signal parameters such as amplitude, power and phase (Figure e to g). Rhythmic time series can be categorized into those showing stationary oscillatory properties and non-stationary ones, where periods, amplitudes and phases change over time. Many currently available tools for the analysis of biological rhythms rely on methods aimed at stationary oscillatory data, using either a standalone software environment, such as BRASS (Edwards et al. [ ], Locke et al. [ ]), ChronoStar (Klemz et al. [ ]) and Circada (Cenek et al. [ ]), or an online interface such as BioDare (Zielinski et al. [ ]). Continuous wavelet analysis allows one to reveal non-stationary period, amplitude and phase dynamics and to identify multiple frequency components across different scales within a single oscillatory signal (Leise [ ], Leise et al. [ ], Rojas et al. [ ]), and is thus complementary to approaches designed to analyze stationary data. In contrast to the R-based waveclock package (Price et al. [ ]), pyBOAT can be operated as a standalone software tool that requires no prior programming knowledge, as it can be fully operated via its graphical user interface (GUI). An integrated batch-processing option allows the analysis of large data sets within a few 'clicks'. For the programming-interested user, pyBOAT can also easily be scripted without using the GUI, making it simple to integrate into individual analysis pipelines. pyBOAT also distinguishes itself from other wavelet-based packages (e.g. Harang et al. [ ]) by adding robust sinc-filter-based detrending and a statistically rigorous framework, supporting the interpretation of results through statistical confidence considerations. pyBOAT is not specifically designed to analyze oscillations in high-throughput 'omics' data; for that purpose, specialized algorithms such as ARSER (Yang and Su [ ]), JTK_CYCLE (Hughes et al. [ ]), MetaCycle (Wu et al. [ ]) or RAIN (Thaben and Westermark [ ]) are more appropriate. Its analysis reveals basic oscillatory properties such as the time-dependent (instantaneous) rhythmicity, period, amplitude and phase, but is not aimed at more specific statistical tests, such as tests for differential rhythmicity as implemented in DODR (Thaben and Westermark [ ]). The continuous wavelet analysis underlying pyBOAT requires equidistant time series sampling with no gaps. Methods such as Lomb-Scargle periodograms or harmonic regressions are more robust with respect to, or even specifically designed for, unevenly sampled data (Lomb [ ], Ruf [ ]). Being beyond the scope of this manuscript, it will be interesting in future work to integrate the ability to analyze unevenly sampled data into the pyBOAT software, either by the imputation of missing values (e.g.
by linear interpolation) or by the usage of wavelet functions specifically designed for this purpose (Thiebaut and Roques [ ]). pyBOAT is a fast, easy-to-use and statistically robust analysis routine, designed to complement existing methods and advance the efficient time series analysis of biological rhythms research. In order to make it publicly available, pyBOAT is a free, open-source, multi-platform software based on the popular Python (van Rossum and Drake [ ]) programming language. It can be downloaded from https://github.com/tensionhead/pyboat and is available via the Anaconda distribution (through the conda-forge channel).

Software - pyBOAT is written in the Python programming language (van Rossum and Drake [ ]). It makes extensive use of Python's core scientific libraries NumPy and SciPy (Virtanen et al. [ ]) for the numerics. Additionally, we use Matplotlib (Hunter [ ]) for visualization and pandas (McKinney [ ]) for data management. pyBOAT is released under the open-source GPL license, and its code is freely available from https://github.com/tensionhead/pyboat. The README in this repository contains further information and installation instructions. pyBOAT is also hosted on the popular Anaconda distribution, as part of the conda-forge community (https://conda-forge.org/). To estimate the amplitude envelope in the time domain, we employ a moving window of size L and determine the minimum and maximum of the signal inside the window for each time point t. The amplitude at that time point is then given by a(t) = (max(t) − min(t))/2. This works very well for envelopes with no periodic components, such as an exponential decay; however, this simple method is not suited for oscillatory amplitude modulations. It is also recommended to sinc-detrend the signal before estimating the amplitude envelope. Note that L should always be larger than the maximal expected period in the signal, as otherwise the signal itself gets distorted. A noisy chirp signal can be written as f(t_i) = A cos(φ(t_i)) + d·x(t_i), where φ(t_i) is the instantaneous phase and the x(t_i) are samples from a stationary stochastic process (the background noise). The increments of the t_i are the sampling interval: t_(i+1) − t_i = ∆t, with i = 1, 2, ..., N samples. Starting from a linear sweep through angular frequencies, with ω(0) = ω₁ and ω(t_N) = ω₂, we have ω(t) = ((ω₂ − ω₁)/t_N)·t + ω₁, and the instantaneous phase is then given by integrating ω(t) over time. Sampling N times from a Gaussian distribution with standard deviation equal to one yields Gaussian white noise ξ(t_i); with x(t_i) = ξ(t_i), the signal-to-noise ratio (SNR) is then controlled by the ratio of A and d. A realization of an AR(1) process can be simulated by a simple generative procedure: the initial x(t₁) is a sample from the standard normal distribution, and each subsequent sample is given by x(t_i) = α x(t_(i−1)) + ξ(t_i), with |α| < 1. Simulating pink noise is less straightforward, and we use the Python package colorednoise (https://pypi.org/project/colorednoise) for these simulations; its implementation is based on Timmer and Koenig [ ].
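The synthetic-signal recipe above translates directly into code; a compact sketch combining the linear chirp with AR(1) background noise (parameter names are illustrative):

```python
import numpy as np

def synthetic_chirp(n, dt, T1, T2, A=1.0, d=0.5, alpha=0.0, seed=0):
    """Noisy linear chirp sweeping from period T1 to period T2, with
    additive AR(1) noise (alpha = 0 gives Gaussian white noise)."""
    rng = np.random.default_rng(seed)
    t = np.arange(n) * dt
    w1, w2 = 2 * np.pi / T1, 2 * np.pi / T2
    omega = w1 + (w2 - w1) * t / t[-1]            # linear frequency sweep
    phi = np.cumsum(omega) * dt                   # phase = integral of omega
    xi = rng.standard_normal(n)                   # white-noise innovations
    x = np.empty(n)
    x[0] = xi[0]
    for i in range(1, n):                         # AR(1): x_i = a*x_{i-1} + xi_i
        x[i] = alpha * x[i - 1] + xi[i]
    return t, A * np.cos(phi) + d * x

t, sig = synthetic_chirp(n=500, dt=1.0, T1=30, T2=60, alpha=0.6)
```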
References:
- Functional network inference of the suprachiasmatic nucleus
- Quantitative analysis of circadian single cell oscillations in response to temperature
- Identification of chirps with continuous wavelet transform
- Circada: Shiny apps for exploration of experimental and synthetic circadian time series with an educational emphasis
- Problem solving: a statistician's guide
- Methods for serial analysis of long time series in the study of biological rhythms
- Quantitative analysis of regulatory flexibility under changing environmental conditions
- Calls out of chaos: the adaptive significance of nonlinear phenomena in mammalian vocal production
- Theory of communication. Part: the analysis of information
- Systems biology of cellular rhythms
- The effects of proliferation status and cell cycle phase on the responses of single cells to chemotherapy
- Foundations of time-frequency analysis
- Circannual rhythms in birds
- Circadian system phase - an aspect of temporal morphology; procedures and illustrative examples
- WAVOS: a MATLAB toolkit for wavelet analysis and visualization of oscillatory systems
- JTK_CYCLE: an efficient nonparametric algorithm for detecting rhythmic components in genome-scale data sets
- Matplotlib: a 2D graphics environment
- Hilbert transformer and time delay: statistical comparison in the presence of Gaussian noise
- Reciprocal regulation of carbon monoxide metabolism and the circadian clock
- Chemical turbulence
- Scaling of embryonic patterning based on phase-gradient encoding
- Wavelet analysis of circadian and ultradian behavioral rhythms
- Persistent cell-autonomous circadian oscillations in fibroblasts revealed by six-week single-cell imaging of PER2::LUC bioluminescence
- Extension of a genetic network model by iterative experimentation and mathematical analysis
- Least-squares frequency analysis of unequally spaced data
- McKinney, Wes: Data structures for statistical computing in Python
- Excitability in the p53 network mediates robust signaling with tunable activation thresholds in single cells
- Online period estimation and determination of rhythmicity in circadian data, using the BioDare data infrastructure
- Period coding of Bmal1 oscillators in the suprachiasmatic nucleus
- Circadian rhythms determined by cosine curve fitting: analysis of continuous work and sleep-loss data
- Fundamentals of ecology
- Dissociation of Per1 and Bmal1 circadian rhythms in the suprachiasmatic nucleus in parallel with behavioral outputs
- A threshold selection method from gray-level histograms
- On estimation of the wavelet variance
- Circadian oscillations in rodents: a systematic increase of their frequency with age
- waveclock: wavelet analysis of circadian oscillation
- Performance of different synchronization measures in real data: a case study on electroencephalographic signals
- On adjusting the Hodrick-Prescott filter for the frequency of observations
- A seven-day cycle in COVID-19 infection and mortality rates: are inter-generational social interactions on the weekends killing susceptible people? medRxiv
- Plant dormancy in the perennial context
- Beyond spikes: multiscale computational analysis of in vivo long-term recordings in the cockroach circadian clock
- The Lomb-Scargle periodogram in biological rhythm research: analysis of incomplete and unequally spaced time-series
- Smoothing and differentiation of data by simplified least squares procedures
- Plasticity of the intrinsic period of the human circadian timing system
- Measuring relative coupling strength in circadian systems
- The summation of random causes as the source of cyclic processes
- The scientist and engineer's guide to digital signal processing
- Photobleaching kinetics of fluorescein in quantitative fluorescence microscopy
- A Doppler effect in embryonic pattern formation
- Quantifying stochastic noise in cultured circadian reporter cells
- Least squares analysis of fluorescence data
- Detecting rhythms in time series with RAIN
- Differential rhythmicity: detecting altered rhythmicity in biological data
- Time-scale and time-frequency analyses of irregularly sampled astronomical time series
- On generating power law noise
- A practical guide to wavelet analysis
- Self-organization of embryonic genetic oscillators into spatiotemporal wave patterns
- Python reference manual. CreateSpace
- Quantification of circadian rhythms in single cells
- Subharmonics, biphonation, and deterministic chaos in mammal vocalization
- MetaCycle: an integrated R package to evaluate periodicity in large scale data
- Analyzing circadian expression data by harmonic regression based on autoregressive spectral estimation
- Strengths and limitations of period estimation methods for circadian data

We gratefully thank Bharath Ananthasubramaniam, Hanspeter Herzel, Pedro Pablo Rojas and Shaon Chakrabarti for fruitful discussions and comments on the manuscript. We further thank Jelle Scholtalbers and the GBCS unit at the EMBL in Heidelberg for technical support. We thank members of the Aulehla and Leptin labs for comments, support and helpful advice.

key: cord- -ilxt b g authors: Zhao, Liang title: Event prediction in the big data era: a systematic survey date: - - journal: nan doi: nan sha: doc_id: cord_uid: ilxt b g

Events are occurrences with specific locations, times, and semantics that nontrivially impact either our society or nature, such as civil unrest, system failures, and epidemics. It is highly desirable to be able to anticipate the occurrence of such events in advance in order to reduce the potential social upheaval and damage they cause. Event prediction, which has traditionally been prohibitively challenging, is now becoming a viable option in the big data era and is thus experiencing rapid growth. There is a large amount of existing work focused on addressing the challenges involved, including heterogeneous multi-faceted outputs, complex dependencies, and streaming data feeds. Most existing event prediction methods were initially designed for specific application domains, though the techniques and evaluation procedures utilized are usually generalizable across different domains. However, it is imperative yet difficult to cross-reference these techniques across different domains, given the absence of a comprehensive literature survey for event prediction. This paper aims to provide a systematic and comprehensive survey of the technologies, applications, and evaluations of event prediction in the big data era.
First, a systematic categorization and summary of existing techniques is presented, which facilitates domain experts' searches for suitable techniques and helps model developers consolidate their research at the frontiers. Then, a comprehensive categorization and summary of the major application domains is provided. Evaluation metrics and procedures are summarized and standardized to unify the understanding of model performance among stakeholders, model developers, and domain experts across various application domains. Finally, open problems and future directions for this promising and important domain are elucidated and discussed.

Anticipating such events before they occur is the focus of this survey. Accurate anticipation of future events enables one to maximize the benefits and minimize the losses associated with an event in the future, bringing huge benefits both for society as a whole and for its individual members in key domains such as disease prevention [ ], disaster management [ ], business intelligence [ ], and economic stability [ ].

"Prediction is very difficult, especially if it's about the future." - Niels Bohr

Event prediction has traditionally been prohibitively challenging across different domains, due to the lack or incompleteness of our knowledge regarding the true causes and mechanisms driving event occurrence in most domains. With the advent of the big data era, however, we now enjoy unprecedented opportunities that open up many alternative approaches for dealing with event prediction problems, sidestepping the need to develop a complete understanding of the underlying mechanisms of event occurrence. Based on large amounts of data on historical events and their potential precursors, event prediction methods typically strive to build a predictive mapping from these observations to future events, utilizing predictive analysis techniques from domains such as machine learning, data mining, pattern recognition, statistics, and other computational models [ , , ]. Event prediction is currently experiencing extremely rapid growth, thanks to advances in sensing techniques (physical sensors and social sensors), prediction techniques (artificial intelligence, especially machine learning), and high-performance computing hardware [ ]. Event prediction in big data is a difficult problem that requires the invention and integration of related techniques to address the serious challenges caused by its unique characteristics, including: 1) Heterogeneous multi-output predictions. Event prediction methods usually need to predict multiple facets of an event, including its time, location, topic, intensity, and duration, each of which may utilize a different data structure [ ]. This creates unique challenges, including how to jointly predict these heterogeneous yet correlated facets of the output. Due to the rich information in the outputs, label preparation is usually a highly labor-intensive task performed by human annotators, with automatic methods introducing numerous errors in items such as event coding. So, how can we improve the label quality as well as the model robustness under corrupted labels? The multi-faceted nature of events makes event prediction a multi-objective problem, which raises the question of how to properly unify the prediction performance across different facets. It is also challenging to verify whether a predicted event 'matches' a real event, given that the various facets are seldom, if ever, predicted with 100% accuracy.
So, how can we set up the criteria needed to discriminate between a correct prediction ('true positive') and a wrong one ('false positive')? 2) Complex dependencies among the prediction outputs. Beyond conventional isolated tasks in machine learning and predictive analysis, in event prediction the predicted events can correlate with and influence each other [ ]. For example, an ongoing traffic-incident event could cause congestion on the current road segment in the first few minutes but then lead to congestion on other contiguous road segments some minutes later. Global climate data might indicate a drought in one location, which could then cause famine in the area and lead to a mass exodus of refugees moving to another location. So, how should we account for the correlations among future events? 3) Real-time streams of prediction tasks. Event prediction usually requires continuous monitoring of the observed input data in order to trigger timely alerts of potential future events [ ]. During this process, however, the trained prediction model gradually becomes outdated, as real-world events continually change, concepts are fluid, and distribution drift is inevitable. For example, the share of the United States population using social media has grown dramatically over the past decade, including among older users [ ]. Not only the data distribution, but also the number of features and input data sources, can vary in real time. Hence, it is imperative to periodically upgrade the models, which raises further questions concerning how to train models on non-stationary distributions while balancing cost (such as computation cost and data annotation cost) against timeliness. In addition, event prediction involves many other common yet open challenges, such as imbalanced data (for example, data lacking positive labels in rare-event prediction) [ ], data corruption in the inputs [ ], the uncertainty of predictions [ ], longer-term predictions (including how to trade off prediction accuracy against lead time) [ ], trade-offs between precision and recall [ ], and how to deal with high dimensionality [ ] and sparse data involving many unrelated features [ ]. Event prediction problems provide unique testbeds for jointly handling such challenges. In recent years, considerable research has been devoted to the development and application of event prediction techniques in order to address the aforementioned challenges [ ]. There has been a recent surge of research that both proposes and applies new approaches in numerous domains, though event prediction techniques are generally still in their infancy. Most existing event prediction methods have been designed for specific application domains, but their approaches are usually general enough to handle problems in other application domains. Unfortunately, it is difficult to cross-reference these techniques across the different application domains serving totally different communities. Moreover, the quality of event prediction results requires sophisticated and specially designed evaluation strategies, due to the subject matter's unique characteristics, for example its multi-objective nature (e.g., accuracy, resolution, efficiency, and lead time) and heterogeneous prediction results (e.g., heterogeneity and multi-output).
As yet, however, we lack systematic standardization and comprehensive summarization approaches with which to evaluate the various event prediction methodologies that have been proposed. This absence of a systematic summary and taxonomy of existing techniques and applications in event prediction causes major problems for those working in the field, who lack clear information on the existing bottlenecks, traps, open problems, and potentially fruitful future research directions. To overcome these hurdles and facilitate the development of better event prediction methodologies and applications, this survey aims to provide a comprehensive and systematic review of the current state of the art for event prediction in the big data era. The paper's major contributions include: • A systematic categorization and summarization of existing techniques. Existing event prediction methods are categorized according to their event aspects (time, location, and semantics), problem formulation, and corresponding techniques, to create the taxonomy of a generic framework. Relationships, advantages, and disadvantages among the different subcategories are discussed, along with details of the techniques under each subcategory. The proposed taxonomy is designed to help domain experts locate the most useful techniques for their targeted problem settings. • A comprehensive categorization and summarization of major application domains. The first taxonomy of event prediction application domains is provided. The practical significance and problem formulation are elucidated for each application domain or subdomain, enabling it to be easily mapped to the proposed technique taxonomy. This will help data scientists and model developers to search for additional application domains and datasets with which to evaluate their newly proposed methods, and at the same time to expand their advanced techniques to new application domains. • Standardized evaluation metrics and procedures. Due to the nontrivial structure of event prediction outputs, which can contain multiple fields such as time, location, intensity, duration, and topic, this paper proposes a set of standard metrics to standardize existing ways of pairing predicted events with true events. Additional metrics are then introduced and standardized to evaluate the overall accuracy and quality of the predictions, assessing how close the predicted events are to the real ones. • An insightful discussion of the current status of research in this area and of future trends. Based on the comprehensive and systematic survey and investigation of existing event prediction techniques and applications presented here, an overall picture and the shape of the current research frontiers are outlined. The paper concludes by presenting fresh insights into bottlenecks, traps, and open problems, as well as a discussion of possible future directions.

This section briefly outlines previous surveys in various domains that have some relevance to event prediction in big data, in three categories, namely: 1. event detection, 2. predictive analytics, and 3. domain-specific event prediction. Event detection has been extensively explored over many years. Its main purpose is to detect historical or ongoing events rather than to predict as yet unseen events in the future [ , ]. Event detection typically focuses on pattern recognition [ ], anomaly detection [ ], and clustering [ ], which are very different from the tasks in event prediction.
there have been several surveys of research in this domain in the last decade [ , , , ]. for example, deng et al. [ ] and atefeh and khreich [ ] provided overviews of event extraction techniques in social media, while michelioudakis et al. [ ] presented a survey of event recognition with uncertainty. alevizos et al. [ ] provided a comprehensive literature review of event recognition methods based on probabilistic approaches. predictive analysis covers the prediction of target variables given a set of explanatory variables. these target variables are typically homogeneous scalar or vector data describing items such as economic indices, housing prices, or sentiments, and they are not necessarily values in the future. larose [ ] provides a good tutorial and survey for this domain. predictive analysis can be broken down into subdomains such as structured prediction [ ], spatial prediction [ ], and sequence prediction [ ], enabling users to handle different types of structure for the target variable. fülöp et al. [ ] provided a survey and categorization of applications that utilize predictive analytics techniques to perform event processing and detection, while jiang [ ] focused on spatial prediction methods that predict indices with spatial dependency. bakır et al. [ ] summarized the literature on predicting structured data such as geometric objects and networks, and arias et al. [ ], phillips et al. [ ], and yu and kak [ ] all proposed techniques for predictive analysis using social data. as event prediction methods are typically motivated by specific application domains, there are a number of surveys of event prediction for domains such as flood events [ ], social unrest [ ], wind power ramp forecasting [ ], tornado events [ ], temporal events without location information [ ], online failures [ ], and business failures [ ]. however, in spite of its promise and its rapid growth in recent years, the domain of event prediction in big data still lacks a comprehensive and systematic literature survey covering all its various aspects, including relevant techniques, applications, evaluations, and open problems. the remainder of this article is organized as follows. the next section presents generic problem formulations for event prediction and the evaluation of event prediction results; a taxonomy and comprehensive description of event prediction techniques then follows, after which the various applications of event prediction are categorized and summarized. the article then lists the open problems, suggests future research directions, and concludes with a brief summary. this section begins by examining the generic denotation and formulation of the event prediction problem and then considers ways to standardize event prediction evaluations. an event refers to a real-world occurrence that happens at a specific time and location with a specific semantic topic [ ]. we can use $y = (t, l, s)$ to denote an event, where its time $t \in \mathcal{T}$, its location $l \in \mathcal{L}$, and its semantic meaning $s \in \mathcal{S}$. here, $\mathcal{T}$, $\mathcal{L}$, and $\mathcal{S}$ represent the time domain, location domain, and semantic domain, respectively. note that these domains are defined very generally, covering a wide range of types of entities.
for example, the location $l$ can include any features that can be used to locate the place of an event, in terms of a point or a neighborhood in either a euclidean space (e.g., coordinates or a geospatial region) or a non-euclidean space (e.g., a vertex or subgraph in a network). similarly, the semantic domain $\mathcal{S}$ can contain any type of semantic features that are useful for elaborating the semantics of an event's various aspects, including its actors, objects, actions, magnitude, textual descriptions, and other profiling information. for example, (" am, june , ", "hermosillo, sonora, mexico", "student protests") and ("june , ", "berlin, germany", "red cross helps pandemic control") denote the time, location, and semantics of two events, respectively. an event prediction system requires inputs that could indicate future events, called event indicators; these can contain both critical information on occurrences that precede the future event, known as precursors, and irrelevant information [ , ]. event indicator data can be denoted as $X \subseteq \mathcal{T} \times \mathcal{L} \times \mathcal{F}$, where $\mathcal{F}$ is the domain of the features other than location and time. if we denote the current time as $t_{now}$ and define the past time and future time as $\mathcal{T}^- \equiv \{t \mid t \le t_{now}, t \in \mathcal{T}\}$ and $\mathcal{T}^+ \equiv \{t \mid t > t_{now}, t \in \mathcal{T}\}$, respectively, the event prediction problem can be formulated as follows: definition (event prediction). given the event indicator data $X \subseteq \mathcal{T}^- \times \mathcal{L} \times \mathcal{F}$ and historical event data $Y \subseteq \mathcal{T}^- \times \mathcal{L} \times \mathcal{S}$, event prediction is a process that outputs a set of predicted future events $\hat{Y} \subseteq \mathcal{T}^+ \times \mathcal{L} \times \mathcal{S}$, such that each predicted future event $\hat{y} = (t, l, s) \in \hat{Y}$ satisfies $t > t_{now}$. not every event prediction method focuses on predicting all three domains of time, location, and semantics simultaneously; some predict only a subset of them. for example, when predicting a clinical event such as the recurrence of disease in a patient, the event location might not always be meaningful [ ]; when predicting outbreaks of seasonal flu, the semantic meaning is already known and the focus is the location and time [ ]; and when predicting political events, sometimes the location, time, and semantics (e.g., event type, participant population type, and event scale) are all necessary [ ]. moreover, due to the intrinsic nature of time, location, and semantic data, the respective prediction techniques and evaluation metrics are necessarily different, as described in the following. event prediction evaluation essentially investigates the goodness of fit of a set of predicted events $\hat{Y}$ against the real events $Y$. unlike the outputs of conventional machine learning models, such as the simple scalar values used to indicate class types in classification or numerical values in regression, the outputs of event prediction are entities with rich information. before we can evaluate the quality of a prediction, we need to determine the pairs of predictions and labels that will be used for the comparison. hence, we must first optimize the process of matching predictions and real events before evaluating the prediction error and accuracy. matching predicted events and real events. the following two types of matching are typically used:
• prefixed matching: the predicted events will be matched with the corresponding ground-truth real events if they share some key attributes.
for example, for event prediction at a particular time and location point, we can evaluate the prediction against the ground truth for that time and location. this type of matching is most common when each of the prediction results can be uniquely distinguished along predefined attributes (for example, location and time) that have a limited number of possible values, so that one-on-one matching between the predicted and real events is easily achieved [ , ]. for example, to evaluate the quality of a predicted event on june , in san francisco, usa, the true event occurrence on that date in san francisco can be used for the evaluation.
• optimized matching: in situations where one-on-one matching is not easily achieved for any event attribute, the set of predicted events might need to be matched against the set of real events via an optimized matching strategy [ , ]. for example, consider two predictions, prediction 1: (" am, june , ", "nogales, sonora, mexico", "worker strike") and prediction 2: (" am, june , ", "hermosillo, sonora, mexico", "student protests"). the two ground-truth events that these can usefully be compared with are real event 1: (" am, june , ", "hermosillo, sonora, mexico", "teacher protests") and real event 2: ("june , ", "navojoa, sonora, mexico", "general-population protest"). none of the predictions is an exact match for any of the attributes of the real events, so we need to find a "best" matching among them, which in this case pairs prediction 2 with real event 1 and prediction 1 with real event 2. this type of matching allows some degree of inaccuracy by quantifying the distance between the predicted and real events along all the attribute dimensions. the distance metrics are typically either euclidean distance [ ] or some other distance metric [ ]. some researchers have hired referees to manually check the similarity of semantic meanings [ ], but another way is to use event coding to code the events into an event type taxonomy and then consider a match to have been achieved if the event types match [ ]. based on the distance between each pair of predicted and real events, the optimal matching will be the one that results in the smallest average distance [ ]. however, if there are $m$ predicted events and $n$ real events, there can be as many as $m \cdot n$ candidate pairs, making it prohibitively difficult to find the optimal solution by enumeration. moreover, there can be different rules for matching. for example, the "multiple-to-multiple" rule shown in figure (a) allows one predicted (real) event to match multiple real (predicted) events [ ], while "bipartite matching" only allows one-to-one matching between predicted and real events (figure (b)). "non-crossing matching" additionally requires that the real events matched by the predicted events follow the same chronological order (figure (c)). in order to utilize any of these types of matching, researchers have suggested using event matching optimization to learn the optimal set of "(real event, predicted event)" pairs [ ].
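to make the optimized matching concrete, the following is a minimal sketch (not drawn from any of the cited works) of bipartite matching between predicted and real events using the hungarian algorithm from scipy; the event tuple format, the attribute weights, and the 0/1 semantic distance are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def event_distance(pred, real, w_time=1.0, w_loc=1.0, w_sem=1.0):
    # Each event is (time_in_hours, (x, y), semantic_label); the weights and
    # the 0/1 semantic distance are illustrative assumptions.
    dt = abs(pred[0] - real[0])
    dl = np.hypot(pred[1][0] - real[1][0], pred[1][1] - real[1][1])
    ds = 0.0 if pred[2] == real[2] else 1.0
    return w_time * dt + w_loc * dl + w_sem * ds

def match_events(predicted, real):
    # Build the m-by-n pairwise distance matrix and solve the one-to-one
    # ("bipartite") matching that minimizes the total distance.
    cost = np.array([[event_distance(p, r) for r in real] for p in predicted])
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows, cols)), cost[rows, cols].mean()

predicted = [(9.0, (0.0, 0.0), "protest"), (11.0, (5.0, 5.0), "strike")]
real      = [(10.0, (0.1, 0.2), "protest"), (30.0, (5.2, 4.9), "strike")]
pairs, avg_dist = match_events(predicted, real)
print(pairs, avg_dist)
```

under the non-crossing rule, an additional chronological-order constraint would have to be imposed on the returned pairs; the hungarian solver above implements only the plain bipartite case.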
the effectiveness of the event predictions is evaluated in terms of two indicators: 1) goodness of matching, which evaluates performance metrics such as the number and percentage of matched events [ ], and 2) quality of matched predictions, which evaluates how close the predicted event is to the real event for each pair of matched events [ ].
• goodness of matching. a true positive means a real event has been successfully matched by a predicted event; if a real event has not been matched by any predicted event, it is called a false negative; and a false positive means a predicted event has failed to match any real event, which is referred to as a false alarm. assume the total number of predictions is $N$, the number of real events is $\bar{N}$, the number of true positives is $N_{TP}$, the number of false negatives is $N_{FN}$, and the number of false positives is $N_{FP}$. then the following key evaluation metrics can be calculated: precision $= N_{TP}/(N_{TP} + N_{FP})$, recall $= N_{TP}/(N_{TP} + N_{FN})$, and f-measure $= 2 \cdot \text{precision} \cdot \text{recall}/(\text{precision} + \text{recall})$ (a minimal sketch is given at the end of this subsection). other measurements such as the area under the roc curve are also commonly used [ ]. this approach can be extended to include other items such as multi-class precision/recall and precision/recall at top k [ , , , ].
• quality of matched predictions. if a predicted event matches a real one, it is common to go on to evaluate how close they are. this reflects the quality of the matched predictions in terms of the different aspects of the events. event time is typically a numerical value and hence can easily be measured in terms of metrics such as mean squared error, root mean squared error, and mean absolute error [ ]. this is also the case for a location in euclidean space, which can be measured in terms of the euclidean distance between the predicted point (or region) and the real point (or region). some researchers consider the administrative unit resolution; for example, a predicted location ("new york city", "new york state", "usa") has a distance of from the real location ("los angeles", "california", "usa") [ ]. others prefer to measure multi-resolution location prediction quality as follows: $(1/3)(l_{country} + l_{country} \cdot l_{state} + l_{country} \cdot l_{state} \cdot l_{city})$, where $l_{city}$, $l_{state}$, and $l_{country}$ can only be either 0 (i.e., no match to the truth) or 1 (i.e., completely matches the truth) [ ]. for a location in a non-euclidean space such as a network [ ], the quality can be measured in terms of the shortest path length between the predicted node (or subgraph) and the real node (or subgraph), or by the f-measure between the detected subsets of nodes and the real ones, which is similar to the approach used for evaluating community detection [ ]. for event topics, in addition to conventional ways of evaluating continuous values such as population size, ordinal values such as event scale, and categorical values such as event type, actors, and actions, more complex semantic values such as texts can be evaluated using natural language processing measurements such as edit distance, bleu score, top-k precision, and rouge [ ].
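the goodness-of-matching metrics above can be computed directly from the matching counts; the following minimal sketch, with hypothetical counts, is purely illustrative:

```python
def matching_metrics(n_tp, n_fp, n_fn):
    # Goodness-of-matching metrics from the counts defined above.
    precision = n_tp / (n_tp + n_fp) if (n_tp + n_fp) else 0.0
    recall = n_tp / (n_tp + n_fn) if (n_tp + n_fn) else 0.0
    f_measure = (2 * precision * recall / (precision + recall)
                 if (precision + recall) else 0.0)
    return precision, recall, f_measure

# e.g., 8 matched events, 2 false alarms, 4 missed real events
print(matching_metrics(n_tp=8, n_fp=2, n_fn=4))
```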
this section focuses on the taxonomy and representative techniques utilized for each category and subcategory. due to the heterogeneity of the prediction output, the technique types depend on the type of output to be predicted, such as time, location, and semantics. as shown in figure , all the event prediction methods are classified in terms of their goals, including time, location, semantics, and the various combinations of these three. these are then further categorized in terms of the output forms of each goal and the corresponding techniques normally used, as elaborated in the following.
fig. : taxonomy of event prediction problems and techniques.
event time prediction focuses on predicting when future events will occur. based on their time granularity, time prediction methods can be categorized into three types: 1) event occurrence: binary-valued prediction of whether an event does or does not occur in a future time period; 2) discrete-time prediction: in which future time slot will the event occur; and 3) continuous-time prediction: at which precise time point will the future event occur. occurrence prediction is arguably the most extensively studied, classical, and generally simplest type of event time prediction task [ ]. it focuses on identifying whether there will be an event occurrence (positive class) or not (negative class) in a future time period [ ]. this problem is usually formulated as a binary classification problem, although a handful of other methods instead leverage anomaly detection or regression-based techniques.
1. binary classification. binary classification methods have been extensively explored for event occurrence prediction. the goal here is essentially to estimate and compare the values of $f(y = \text{"yes"} \mid X)$ and $f(y = \text{"no"} \mid X)$, where the former denotes the score or likelihood of event occurrence given observation $X$ while the latter corresponds to no event occurrence. if the value of the former is larger than the latter, a future event occurrence is predicted; if not, no event is predicted (a minimal sketch of this formulation is given below). to implement $f$, the methods typically rely on discriminative models, where dedicated feature engineering is leveraged to manually extract potential event precursor features to feed into the models. over the years, researchers have leveraged various binary classification techniques, ranging from the simplest threshold-based methods [ , ] to more sophisticated methods such as logistic regression [ , ], support vector machines [ ], (convolutional) neural networks [ , ], and decision trees [ , ]. in addition to discriminative models, generative models [ , ] have also been used to embed human knowledge for classifying event occurrences using bayesian decision techniques. specifically, instead of assuming that the input features are independent, prior knowledge can be directly leveraged to establish bayesian networks among the observed features and variables based on graphical models such as (semi-)hidden markov models [ , , ] and autoregressive logit models [ ]. the joint probabilities $p(y = \text{"yes"}, X)$ and $p(y = \text{"no"}, X)$ can thus be estimated using graphical models and then utilized to estimate $f(y = \text{"yes"} \mid X) = p(y = \text{"yes"} \mid X)$ and $f(y = \text{"no"} \mid X) = p(y = \text{"no"} \mid X)$ using bayesian rules [ ].
2. anomaly detection. alternatively, anomaly detection can be utilized to learn a "prototype" of normal samples (typical values corresponding to the situation of no event occurrence) and then identify whether any newly-arriving sample is close to or distant from the normal samples, with distant ones being identified as future event occurrences. such methods are typically utilized to handle "rare event" occurrences, especially when the training data is highly imbalanced, with little to no data for "positive" samples. anomaly detection techniques such as one-class classification [ ] and hypothesis testing [ , ] are often utilized here.
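as an illustration of the binary classification formulation above, the following minimal sketch trains a logistic regression model on hypothetical precursor features and compares the "yes" score against the "no" score; the synthetic data and feature construction are assumptions for demonstration only:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical precursor features per day (e.g., keyword counts, sensor
# statistics) and a binary label: does an event occur in the next period?
X = rng.normal(size=(500, 10))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 1.0).astype(int)

clf = LogisticRegression().fit(X[:400], y[:400])

# f(y="yes"|x) vs. f(y="no"|x): predict an occurrence when the "yes" score wins.
scores = clf.predict_proba(X[400:])
predicted_occurrence = scores[:, 1] > scores[:, 0]
print(predicted_occurrence[:10])
```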
3. regression. in addition to simply predicting occurrence or not, some researchers have sought to extend the binary prediction problem to ordinal and numerical prediction problems, including event count prediction based on (auto)regression [ ], event size prediction using linear regression [ ], and event scale prediction using ordinal regression [ ].
discrete-time prediction. in many applications, practitioners want to know the approximate time (i.e., the date, week, or month) of future events in addition to just their occurrence. to do this, the time is typically first partitioned into different time slots, and the various methods focus on identifying which time slot future events are likely to occur in. existing research on this problem can be classified into either direct or indirect approaches.
1. direct approaches. these methods discretize the future time into discrete values, which can take the form of some number of time windows or time scales such as near future, medium future, or distant future. these are then used to directly predict the integer-valued index of the future time window of the event occurrence using (auto)regression methods [ , ], or to predict the ordinal values of future time scales using ordinal regression or classification [ ].
2. indirect approaches. these methods adopt a two-step approach (sketched below). the first step is to place the data into a series of time bins and then perform time series forecasting using techniques such as autoregressive models [ ], based on the historical time series $X = \{x_1, \cdots, x_t\}$, to obtain the future time series $\hat{X} = \{x_{t+1}, \cdots, x_{\bar{t}}\}$. the second step is to identify events in the predicted future time series $\hat{X}$, using either unsupervised methods such as burstiness detection [ ] and change detection [ ], or supervised techniques based on learning an event characterization function. for example, existing works [ , ] first represent the predicted future time series $\hat{X} \in \mathbb{R}^{\bar{t} \times D}$ using time-delayed embedding as $\hat{X}' \in \mathbb{R}^{\bar{t} \times D'}$, where each observation at time $t$ is represented by the preceding observations within the embedding window. an event characterization function $f_c(\hat{x}_t)$ is then established to map $\hat{x}_t$ to the likelihood of an event, which can be fitted based on the event labels provided in the training set. overall, the unsupervised methods require users to assume the type of pattern (e.g., burstiness or change) of future events based on prior knowledge but do not require event label data; in cases where the event time series pattern is difficult to assume but label data is available, supervised learning-based methods are usually used. discrete-time prediction methods, although usually simple to establish, suffer from several issues. first, their time resolution is limited to the discretization granularity; increasing this granularity significantly increases the computational resources required, which means the resolution cannot be arbitrarily high. moreover, this trade-off is itself a hyperparameter that is sensitive to the prediction accuracy, rendering it difficult and time-consuming to tune during training.
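the following minimal sketch illustrates the indirect two-step approach under simple assumptions: a least-squares ar(p) model forecasts the binned event-count series, and an unsupervised burstiness-style threshold (mean plus two standard deviations, an arbitrary choice) flags future time slots as candidate events:

```python
import numpy as np

def fit_ar(series, p=3):
    # Least-squares AR(p): x_t ≈ a_1 x_{t-1} + ... + a_p x_{t-p}.
    rows = [series[t - p:t][::-1] for t in range(p, len(series))]
    X, y = np.array(rows), series[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def forecast(series, coef, steps):
    # Roll the fitted AR model forward to obtain the future time series.
    hist, p = list(series), len(coef)
    for _ in range(steps):
        hist.append(float(np.dot(coef, hist[-1:-p - 1:-1])))
    return np.array(hist[len(series):])

rng = np.random.default_rng(1)
counts = rng.poisson(5, size=200).astype(float)  # hypothetical binned counts
coef = fit_ar(counts, p=3)
future = forecast(counts, coef, steps=12)

# Step 2, unsupervised: flag future slots whose forecast exceeds
# mean + 2*std of the historical series (threshold is an assumption).
threshold = counts.mean() + 2 * counts.std()
print(np.where(future > threshold)[0])
```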
to address these issues, a number of techniques instead directly predict the continuous-valued event time [ ], usually by leveraging one of three techniques.
1. simple regression. the simplest methods directly formalize continuous-event-time prediction as a regression problem [ ], where the output is the numerical-valued future event time [ ] and/or its duration [ , ]. common regressors such as linear regression and recurrent neural networks have been utilized for this. despite its apparent simplicity, this is not straightforward, as simple regression typically assumes a gaussian distribution [ ], which does not usually reflect the true distribution of event times. for example, the future event time needs to be left-bounded (i.e., larger than the current time), as well as being typically non-symmetric and usually periodic, with recurrent events having multiple peaks in the probability density function along the time dimension.
2. point processes. as they allow more flexibility in fitting true event time distributions, point process methods [ , ] are widely leveraged and have demonstrated their effectiveness for continuous-time event prediction tasks (a numerical sketch is given below). they require a conditional intensity function, defined as follows: $$\lambda(t \mid X) = \frac{g(t \mid X)}{1 - G(t \mid X)} = \lim_{dt \to 0} \frac{\mathbb{E}[N(t, t + dt) \mid X]}{dt},$$ where $g(t \mid X)$ is the conditional density function of the event occurrence probability at time $t$ given an observation $X$, $G(t \mid X)$ is its corresponding cumulative distribution function, and $N(t, t + dt)$ denotes the count of events during the time period between $t$ and $t + dt$, with $dt$ an infinitely small time period. hence, by leveraging the relation between the density and cumulative functions and rearranging the above, the following conditional density function is obtained: $$g(t \mid X) = \lambda(t \mid X) \cdot \exp\Big(-\int_{t_{now}}^{t} \lambda(u \mid X)\, du\Big).$$ once the above model has been trained using a technique such as maximum likelihood [ ], the time of the next event in the future is predicted as: $$\hat{t} = \int_{t_{now}}^{\infty} t \cdot g(t \mid X)\, dt.$$ although existing methods typically share the same workflow as that shown above, they vary in the way they define the conditional intensity function $\lambda(t \mid X)$. traditional models typically utilize prescribed distributions such as the poisson distribution [ ], gamma distribution [ ], hawkes process [ ], weibull process [ ], and other distributions [ ]. for example, damaschke et al. [ ] utilized a weibull distribution to model volcano eruption events, while ertekin et al. [ ] instead proposed the use of a non-homogeneous poisson process to fit the conditional intensity function for power system failure events. however, in many other situations where there is no information regarding appropriate prescribed distributions, researchers must leverage nonparametric approaches to learn sophisticated distributions from the data using expressive models such as neural networks. for example, simma and jordan [ ] utilized an rnn to learn a highly nonlinear function for $\lambda(t \mid X)$.
3. survival analysis. survival analysis [ , ] is related to point processes in that it also defines an event intensity or hazard function, but in this case based on survival probability considerations, as follows: $$h(t \mid X) = \frac{g(t \mid X)}{\xi(t \mid X)},$$ where $h(t \mid X)$ is the so-called hazard function denoting the hazard of event occurrence between time $(t - dt)$ and $t$ for a given observation $X$, and $\xi(t \mid X)$ is the survival probability, i.e., the probability that no event has occurred up to time $t$. either $h(t \mid X)$ or $\xi(t \mid X)$ can be utilized for predicting the time of future events. for example, the event occurrence time can be estimated as the time when $\xi(t \mid X)$ drops below a specific value. also, one can obtain $\xi(t \mid X) = \exp\big(-\int_0^t h(u \mid X)\, du\big)$ from the above relation [ ]. here $h(t \mid X)$ can adopt any one of several prescribed models, such as the well-known cox hazard model [ , ]. to learn the model directly from the data, some researchers have recommended enhancing it using deep neural networks [ ]. vahedian et al. [ ] suggest learning the survival probability $\xi(t \mid X)$ and then applying the function $h(\cdot \mid X)$ to indicate an event at time $t$ if $h(t \mid X)$ is larger than a predefined threshold value.
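to make the point-process workflow concrete, the following minimal numerical sketch evaluates the conditional density $g(t \mid X)$ on a grid and computes the expected next event time by the trapezoid rule; the intensity function used here is a hypothetical baseline-plus-decay form, not one of the cited models:

```python
import numpy as np

def next_event_time(intensity, t_now=0.0, horizon=60.0, n=6000):
    # Evaluate g(t|X) = λ(t|X)·exp(-∫_{t_now}^{t} λ(u|X) du) on a grid and
    # return the expected next event time  t̂ = ∫ t·g(t|X) dt.
    t = np.linspace(t_now, horizon, n)
    lam = intensity(t)
    dt = np.diff(t)
    cum = np.concatenate([[0.0], np.cumsum((lam[1:] + lam[:-1]) / 2 * dt)])
    g = lam * np.exp(-cum)
    integrand = t * g
    return float(np.sum((integrand[1:] + integrand[:-1]) / 2 * dt))

# A hypothetical conditional intensity: a baseline rate plus an exponentially
# decaying excitation term (loosely Hawkes-like); purely illustrative.
intensity = lambda t: 0.2 + 0.5 * np.exp(-0.3 * t)
print(next_event_time(intensity))
```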
a classifier can also be utilized. instead of using the raw sequence data, the conditional intensity function can also be projected onto additional continuous-time latent state layers that eventually map to the observations [ , ]. these latent states can then be extracted using techniques such as hidden semi-markov models [ ], which ensure the elicitation of the continuous-time patterns. event location prediction focuses on predicting the location of future events. location information can be formulated as one of two types: 1) raster-based: here, a continuous space is partitioned into a grid of cells, each of which represents a spatial region, as shown in figure (a). this type of representation is suitable for situations where the spatial size of the event is non-negligible. 2) point-based: in this case, each location is represented by an abstract point of infinitely small size, as shown in figure (b). this type of representation is most suitable for situations where the spatial size of the event can be neglected, or where the location regions of the events can only be in discrete spaces such as network nodes. several types of techniques are used for raster-based event location prediction, including spatial clustering, spatial interpolation, spatial convolution, and trajectory destination prediction.
1. spatial clustering. in raster-based representations, each location unit is usually a regular grid cell with the same size and shape. however, regions with similar spatial characteristics typically have irregular shapes and sizes, which can be approximated as composite representations of a number of grid cells [ ]. the purpose of spatial clustering here is to group contiguous regions that collectively exhibit significant patterns. the methods are typically agglomerative in style: they start from the original finest-grained spatial raster units and proceed by merging the spatial neighborhood of a specific unit in each iteration, with different research works defining different criteria for instantiating the merging operation. for example, wang and ding [ ] merge neighborhoods if the unified region after merging can maintain the spatially frequent patterns, while xiong et al. [ ] chose an alternative approach, sequentially merging spatial neighbor locations into the current locations until the merged region possesses event data that is sufficiently statistically significant. these methods usually run in a greedy style to ensure their time complexity remains smaller than quadratic. after the spatial clustering is completed, each spatial cluster is input into a classifier to determine whether or not there is an event corresponding to it.
2. spatial interpolation. unlike spatial clustering-based methods, spatial interpolation-based methods maintain the original fine granularity of the event location information. the estimated event occurrence probability can be interpolated for locations with no historical events, thereby achieving spatial smoothness. this can be accomplished using commonly-used methods such as kernel density estimation [ , ] and spatial kriging [ , ]. kernel density estimation is a popular way to model the geo-statistics in numerous types of events such as crimes [ ] and terrorism [ ]: $$K(s) = \frac{1}{n\gamma} \sum_{i=1}^{n} k\Big(\frac{s - s_i}{\gamma}\Big),$$ where $K(s)$ denotes the kernel estimate for the location point $s$, $n$ is the number of historical event locations, each $s_i$ is a historical event location, $\gamma$ is a tunable bandwidth parameter, and $k(\cdot)$ is a kernel function such as a gaussian kernel [ ].
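a minimal sketch of kernel density estimation over a raster grid follows; the gaussian kernel normalization for two-dimensional locations, the synthetic event coordinates, and the bandwidth value are illustrative assumptions:

```python
import numpy as np

def kde_intensity(s, locations, gamma=1.0):
    # K(s) = (1/(n*gamma^2)) * sum_i k((s - s_i)/gamma) with a Gaussian kernel;
    # for 2-d locations the kernel's normalizing constant is 1/(2*pi).
    diffs = (locations - s) / gamma
    k = np.exp(-0.5 * np.sum(diffs ** 2, axis=1)) / (2 * np.pi)
    return k.sum() / (len(locations) * gamma ** 2)

# Hypothetical historical event coordinates (e.g., past crime locations).
rng = np.random.default_rng(2)
events = rng.normal(loc=[3.0, 3.0], scale=0.8, size=(100, 2))

# Evaluate the estimated event density at each cell center of a coarse grid.
grid = np.array([[x, y] for x in range(7) for y in range(7)], dtype=float)
density = np.array([kde_intensity(c, events, gamma=0.5) for c in grid])
print(grid[np.argmax(density)])  # hottest cell center
```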
more recently, ristea et al. [ ] further extended kde-based techniques by leveraging localized kde and then applying spatial interpolation techniques to estimate spatial feature values for the cells in the grid. since each cell is an area rather than a point, the center of each cell is usually leveraged as the representative of that cell. finally, a classifier takes this as its input to predict the event occurrence for each grid cell [ , ].
3. spatial convolution. in the last few years, convolutional neural networks (cnns) have demonstrated significant success in learning and representing sophisticated spatial patterns from image and spatial data [ ]. a cnn contains multiple convolutional layers that extract the hierarchical spatial semantics of images. in each convolutional layer, a convolution operation is executed by scanning a feature map with a filter, which results in another smaller feature map with higher-level semantics. since raster-based spatial data and images share a similar mathematical form, it is natural to leverage cnns to process them. existing methods [ , , , ] in this category typically formulate a spatial map as input to predict another spatial map that denotes future event hotspots. such a formulation is analogous to the "image translation" problem popular in recent years in the computer vision domain [ ]. specifically, researchers typically leverage an encoder-decoder architecture, where the input images (or spatial maps) are processed by multiple convolutional layers into a higher-level representation, which is then decoded back into an output image of the same size through a reverse convolutional operation known as transposed convolution [ ].
4. trajectory destination prediction. this type of method typically focuses on population-based events whose patterns can be interpreted as the collective behaviors of individuals, such as "gathering events" and "dispersal events". these methods share a unified procedure that typically consists of two steps: 1) predict future locations based on the observed trajectories of individuals, and 2) detect the occurrence of the "future" events based on the future spatial patterns obtained in step 1. the specific methodologies for each step are as follows:
• step 1: here, the aim is to predict each location an individual will visit in the future, given a historical sequence of visited locations. this can be formulated as a sequence prediction problem. for example, wang and gerber [ ] sought to predict the probability of the next time point's location $s_{t+1}$ based on all the preceding time points, $p(s_{t+1} \mid s_{\le t}) = p(s_{t+1} \mid s_t, s_{t-1}, \cdots, s_1)$, using various strategies including a historical volume-based prior model, markov models, and multi-class classification models. vahedian et al. [ ] adopted bayes' theorem, $p(s_{t+1} \mid s_{\le t}) = p(s_{\le t} \mid s_{t+1}) \cdot p(s_{t+1})/p(s_{\le t})$, which requires the conditional probability $p(s_{\le t} \mid s_{t+1})$ to be stored. however, in many situations there is a huge number of possible trajectories for each destination; even for a moderately sized grid, the number of possible trajectories is far too large to store directly. to improve the memory efficiency, the consideration can be limited to just the source and current locations, leveraging a quad-tree-style architecture to store the historical information. to achieve more efficient storage and to speed up queries of $p(s_{\le t} \mid s_{t+1})$, vahedian et al. [ ] further extended the quad-tree into a new technique called vigo, which removes duplicate destination locations in different leaves.
• step 2: the aim in this step is to forecast future event locations based on the future visiting patterns predicted in step 1. the most basic strategy here is to consider each grid cell independently. for example, wang and gerber [ ] adopted supervised learning strategies to build a predictive mapping between the visiting patterns and event occurrence. a more sophisticated approach is to consider spatial outbreaks composed of multiple grid cells. scalable algorithms have been proposed to identify regions containing statistically significant hotspots [ ], such as spatial scan statistics [ ]. khezerlou et al. [ ] proposed a greedy heuristic tailored to the grid-based data formulation, which extends the original "seed" grid cell containing a statistically large future event density in four directions until the extended region is no longer a statistically significant outbreak.
unlike the raster-based formulation, which covers the prediction of a contiguous spatial region, point-based prediction focuses specifically on locations of interest, which can be distributed sparsely in either a euclidean space (e.g., a geographical region) or a non-euclidean space (e.g., a graph topology). these methods can be categorized into supervised and unsupervised approaches.
1. supervised approaches. in supervised methods, each location is classified as either "positive" or "negative" with regard to a future event occurrence. the simplest setting is based on the independent and identically distributed (i.i.d.) assumption among the locations, where each location is predicted by a classifier independently using its respective input features. however, given that different locations usually exhibit strong spatial heterogeneity and dependency, further research has been proposed to tackle these properties across different locations' predictors and outputs, resulting in two research directions: 1) spatial multi-task learning, and 2) spatial auto-regressive methods.
• spatial multi-task learning. multi-task learning is a popular learning strategy that can jointly learn the models for different tasks such that the learned models can not only share knowledge but also preserve some exclusive characteristics of the individual tasks [ ]. this notion coincides very well with spatial event prediction tasks, where combining the outputs of models from different locations needs to consider both their spatial dependency and heterogeneity. zhao et al. [ ] proposed a spatial multi-task learning framework as follows (a minimal sketch is given at the end of this subsection): $$\min_{W} \sum\nolimits_{i=1}^{m} \mathcal{L}\big(Y_i, f(W_i, X_i)\big) + \mathcal{R}(W, M), \quad \text{s.t. } W \in \mathcal{C},$$ where $m$ is the total number of locations (i.e., tasks), and $W_i$ and $Y_i$ are the model parameters and true labels (event occurrence for all time points), respectively, of task $i$. $\mathcal{L}(\cdot)$ is the empirical loss, $f(W_i, X_i)$ is the predictor for task $i$, and $\mathcal{R}(\cdot)$ is the spatial regularization term based on the spatial dependency information $M \in \mathbb{R}^{m \times m}$, where $M_{i,j}$ records the spatial dependency between locations $i$ and $j$. $\mathcal{C}$ represents the spatial constraints imposed over the corresponding models to enforce them to remain within a valid space. over recent years, multiple studies have proposed different strategies for $\mathcal{R}(\cdot)$ and $\mathcal{C}$. for example, zhao et al. [ ] assumed that all the locations are evenly correlated and enforced similar sparsity patterns for feature selection, while gao et al. [ ] further extended this to differentiate the strength of the correlations between different locations' tasks according to the spatial distance between them.
this approach has been further extended to tree-structured multi-task learning to handle the hierarchical relationships among locations at different administrative levels (e.g., cities, states, and countries) [ ], in a model that also considers the logical constraints over the predictions from different locations that have hierarchical relationships. instead of assuming even similarity, zhao et al. [ ] further estimated the spatial dependency utilizing inverse distance with gaussian kernels, while ning et al. [ ] proposed estimating the spatial dependency based on the event co-occurrence frequency between each pair of locations.
• spatial auto-regressive methods. spatial auto-regressive models have been extensively explored in domains such as geography and econometrics, where they are applied to perform predictions in which the i.i.d. assumption is violated due to the strong dependencies among neighboring locations. the generic framework is as follows: $$\hat{Y}_{t+1} = \rho M \hat{Y}_{t+1} + f(X_t, W),$$ where $X_t \in \mathbb{R}^{m \times D}$ and $\hat{Y}_{t+1} \in \mathbb{R}^{m \times 1}$ are the observations at time $t$ and the event predictions at time $t+1$ over all $m$ locations, and $M \in \mathbb{R}^{m \times m}$ is the spatial dependency matrix with zero-valued diagonals. this means the prediction for each location, $\hat{Y}_{t+1,i} \in \hat{Y}_{t+1}$, is jointly determined by its input $X_{t,i}$ and its neighbors $\{j \mid M_{i,j} \ne 0\}$, with $\rho$ a positive value that balances these two factors. since event occurrence requires discrete predictions, simple threshold-based strategies can be used to discretize $\hat{Y}_i$ into $\hat{Y}'_i \in \{0, 1\}$ [ ]. moreover, due to the complexity of event prediction tasks and the large number of locations, it is sometimes difficult to define the whole matrix $M$ manually. zhao et al. [ ] proposed jointly learning the prediction model and the spatial dependency from the data using graphical lasso techniques. yi et al. [ ] took a different approach, leveraging conditional random fields to instantiate the spatial autoregression, where the spatial dependency is measured by gaussian-kernel-based metrics. yi et al. [ ] then went on to propose leveraging a neural network model to learn the locations' dependency.
2. unsupervised approaches. without supervision from labels, unsupervised methods must first identify potential precursors and determinant features in different locations. they can then detect anomalies that are characterized by specific feature selection and location combinatorial patterns (e.g., spatial outbreaks and connected subgraphs) as the future event indicators [ ]. the generic formulation is as follows: $$\max_{F \in \mathcal{M}(\mathbb{G}, \beta),\; R \in \mathcal{C}} Q(F, R),$$ where $Q(\cdot)$ denotes a scan statistic that scores the significance of each candidate pattern, represented by both a candidate location combinatorial pattern $R$ and a feature selection pattern $F$. specifically, $F \in \{0, 1\}^{D' \times n}$ denotes the feature selection results (where "1" means selected and "0" otherwise) and $R \in \{0, 1\}^{m \times n}$ denotes the $m$ involved locations for the $n$ events. $\mathcal{M}(\mathbb{G}, \beta)$ and $\mathcal{C}$ are the sets of all feasible solutions of $F$ and $R$, respectively. $Q(\cdot)$ can be instantiated by scan statistics such as kulldorff's scan statistic [ ] and the berk-jones statistic [ ], which can be applied to detect and forecast events such as epidemic outbreaks and civil unrest events [ ]. depending on whether the embedding space is a euclidean region (e.g., a geographical region) or a non-euclidean region (e.g., a network topology), the pattern constraint $\mathcal{C}$ can require either predefined geometric shapes such as a circle, a rectangle, or an irregular shape, or subgraphs such as connected subgraphs, cliques, and k-cliques. the above problem is nonconvex and sometimes even discrete, and hence difficult to solve. a generic approach is to optimize $F$ using sparse feature selection (a useful survey is provided in [ ]), while $R$ can be determined using the two-step graph-structured matching method detailed in [ ]. more recently, new techniques have been developed that are capable of jointly learning both the feature and location selection [ , ].
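returning to the spatial multi-task learning formulation above, the following minimal sketch instantiates the spatial regularizer $\mathcal{R}(\cdot)$ as a graph-laplacian penalty that pulls neighboring locations' logistic models together; this particular penalty, the evenly-correlated dependency matrix, and the synthetic data are illustrative assumptions rather than the cited papers' exact designs:

```python
import numpy as np

def spatial_multitask_fit(X, Y, M, lam=0.1, lr=0.01, epochs=500):
    # X: (m, n, d) features per location/task; Y: (m, n) binary labels;
    # M: (m, m) spatial dependency matrix. Minimizes per-task logistic loss
    # plus a Laplacian penalty sum_{ij} M_ij ||w_i - w_j||^2 that couples
    # neighboring locations' models (one possible instantiation of R(·)).
    m, n, d = X.shape
    W = np.zeros((m, d))
    L = np.diag(M.sum(axis=1)) - M  # graph Laplacian of the dependency matrix
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-np.einsum("mnd,md->mn", X, W)))
        grad = np.einsum("mnd,mn->md", X, p - Y) / n + 2 * lam * (L @ W)
        W -= lr * grad
    return W

rng = np.random.default_rng(3)
m, n, d = 4, 200, 5
X = rng.normal(size=(m, n, d))
true_w = rng.normal(size=d)
Y = (np.einsum("mnd,d->mn", X, true_w) > 0).astype(float)
M = np.ones((m, m)) - np.eye(m)  # evenly correlated locations (assumption)
print(np.round(spatial_multitask_fit(X, Y, M), 2))
```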
event semantics prediction addresses the problem of forecasting topics, descriptions, or other meta-attributes of future events, in addition to their times and locations. unlike time and location prediction, the data in event semantics prediction usually involve symbols and natural language in addition to numerical quantities, which means different types of techniques may be utilized. the methods can be categorized into three types based on how the historical data are organized and utilized to infer future events. the first category covers rule-based methods, where future event precursors are extracted by mining association or logical patterns in historical data. the second type is sequence-based, considering event occurrence to be a consequence of temporal event chains. the third type further generalizes event chains into event graphs, where additional cross-chain contexts need to be modeled. these are discussed in turn below.
association rule-based methods are amongst the most classic approaches in the data mining domain for event prediction, typically consisting of two steps: 1) learn the associations between precursors and target events, and then 2) utilize the learned associations to predict future events. for the first step, an association could be, for example, x = {"election", "fraud"} → y = "protest event", which indicates that serious fraud occurring during an election process could lead to future protest events. to discover all the significant associations from the ocean of candidate rules efficiently, frequent itemset mining [ ] can be leveraged. each discovered rule needs to come with both sufficient support and confidence: support is defined as the number of cases where both "x" and "y" co-occur, while confidence is the ratio indicating how often "y" occurs once "x" has happened (a minimal sketch is given below). to better estimate these rules, further temporal constraints can be added that require the occurrence times of "x" and "y" to be sufficiently close to be considered "co-occurrences". once the frequent rules have been discovered, pruning strategies may be applied to retain the most accurate and specific ones, with various strategies available for generating the final predictions [ ]. specifically, given a new observation x′, one of the simplest strategies is to output the events that are triggered by any of the association rules starting from event x′ [ ]. other strategies first rank the predicted results based on their confidence and then predict just the top r events [ ]. more sophisticated and rigorous strategies tend to build a decision list where each element in the list is an association rule mapping; once a generative model has been built for the decision process, maximum likelihood can be leveraged to optimize the order of the decision list [ ].
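the following minimal sketch mines association rules with support and confidence from hypothetical event chains; the chains, the thresholds, and the simple "x precedes y in the same chain" notion of co-occurrence are illustrative assumptions:

```python
from collections import Counter
from itertools import combinations

# Hypothetical event chains: each is a time-ordered list of event types.
chains = [
    ["election", "fraud", "protest"],
    ["election", "fraud", "protest"],
    ["election", "rally", "protest"],
    ["drought", "famine", "migration"],
    ["election", "fraud", "riot"],
]

pair_counts, antecedent_counts = Counter(), Counter()
for chain in chains:
    for i, j in combinations(range(len(chain)), 2):  # event i precedes event j
        pair_counts[(chain[i], chain[j])] += 1
    for e in set(chain):
        antecedent_counts[e] += 1

# support = co-occurrence count; confidence = support / count(antecedent).
min_support, min_confidence = 2, 0.6
for (x, y), support in pair_counts.items():
    confidence = support / antecedent_counts[x]
    if support >= min_support and confidence >= min_confidence:
        print(f"{x} -> {y} (support={support}, confidence={confidence:.2f})")
```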
this type of research leverages the causality inferred among historical events to predict future events. the methods typically share a generic framework consisting of the following procedures: 1) event representation, 2) event graph construction, and 3) future event inference.
step 1: event semantic representation. this approach typically begins by extracting events from the target texts using natural language processing techniques such as sanitization, tokenization, pos-tag analysis, and named entity recognition. several types of objects can be extracted to represent the events: i) noun phrase-based [ , , ], where a noun phrase corresponds to each event (for example, "sichuan earthquake"); ii) verb- and noun-based [ , ], where an event is represented as a set of noun-verb pairs extracted from news headlines; and iii) tuple-based [ ], where each event is represented by a tuple consisting of objects (such as actors, instruments, or receptors), a relationship (or property), and time. an rdf-based format has also been leveraged in some works [ ].
step 2: event causality inference. the goal here is to infer the cause-effect pairs among historical events. due to its combinatorial nature, narrowing down the number of candidate pairs is crucial. existing works usually begin by clustering the events into event chains, each of which consists of a sequence of time-ordered events under the relevant semantics, typically the same topics, actors, and/or objects [ ]. the causal relations among the event pairs can then be inferred in various ways. the simplest approach is to consider the likelihood that y occurs after x has occurred throughout the training data. other methods utilize nlp techniques to identify causal mentions such as causal connectives, prepositions, and verbs [ ]. some formulate cause-effect relationship identification as a classification task where the inputs are the cause and effect candidate events, often incorporating contextual information including related background knowledge from web texts; here, the classifier is built on a multi-column cnn that outputs either "1" or "0" to indicate whether the candidate pair is causal or not [ ]. in many situations, the cause-effect rules learned directly using the above methods can be too specific and sparse, with low generalizability, so a typical next step is to generalize the learned rules. for example, "earthquake hits china" → "red cross help sent to beijing" is a specific rule that can be generalized to "earthquake hits [a country]" → "red cross help sent to [the capital of this country]". to achieve this, an external ontology or knowledge base is typically needed to establish the underlying relationships among items or provide necessary information on their properties, such as wikipedia (https://www.wikipedia.org/), yago [ ], wordnet [ ], or conceptnet [ ]. based on these resources, the similarity between two cause-effect pairs $(c_i, \varepsilon_i)$ and $(c_j, \varepsilon_j)$ can be computed by jointly considering the respective similarities of the putative causes and effects: $\sigma((c_i, \varepsilon_i), (c_j, \varepsilon_j)) = (\sigma(c_i, c_j) + \sigma(\varepsilon_i, \varepsilon_j))/2$. an appropriate algorithm can then apply hierarchical agglomerative clustering to group the pairs and hence generate a data structure that can efficiently manage the task of storing and querying cause-effect pairs.
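the following minimal sketch applies the pairwise similarity $\sigma((c_i, \varepsilon_i), (c_j, \varepsilon_j)) = (\sigma(c_i, c_j) + \sigma(\varepsilon_i, \varepsilon_j))/2$ with hierarchical agglomerative clustering; the toy token-overlap similarity stands in for an ontology-based one, and the pairs and distance threshold are illustrative assumptions:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def sim(a, b):
    # Toy Jaccard token-overlap similarity; an ontology- or knowledge-base-
    # backed similarity would be used in practice.
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb)

pairs = [
    ("earthquake hits china", "red cross help sent to beijing"),
    ("earthquake hits japan", "red cross help sent to tokyo"),
    ("election fraud reported", "protest erupts in capital"),
]

# Distance = 1 - average of cause similarity and effect similarity.
n = len(pairs)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        s = (sim(pairs[i][0], pairs[j][0]) + sim(pairs[i][1], pairs[j][1])) / 2
        dist[i, j] = dist[j, i] = 1.0 - s

# Agglomerative clustering over the pairwise distances; each cluster
# corresponds to one generalized cause-effect rule.
condensed = dist[np.triu_indices(n, k=1)]
labels = fcluster(linkage(condensed, method="average"), t=0.8, criterion="distance")
print(labels)
```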
") and then leverage a hierarchical causal network to organize the generalized rules. step : future event inference. given an arbitrary query event, two steps are needed to infer the future events caused by it based on the causality of events learned above. first, we need to retrieve similar events that match the query event from historical event pool. this requires the similarity between the query event and all the historical events to be calculated. to achieve this, lei et al. [ ] utilized context information, including event time, location, and other environmental and descriptive information. for methods requiring event generalization, the first step is to traverse the abstraction tree starting from the root that corresponds to the most general event rule. the search frontier then moves across the tree if the child node is more similar, culminating in the nodes which are the least general but still similar to the new event being retrieved [ ] . similarly, [ ] proposed another tree structure referred to as a "circular binary search tree" to manage the event occurrence pattern. we can now apply the learned predicate rules starting from the retrieved event to obtain the prediction results. since each cause event can lead to multiple events, a convenient way to determine the final prediction is to calculate the support [ ] , or conditional probability [ ] of the rules. radinsky et al. [ ] took a different approach, instead ranking the potential future events by their similarity defined by the length of their minimal generalization path. for example, the minimal generalization path for "london" and "paris" is "london" alternatively, zhao et al. [ ] proposed embedding the event causality network into a continuous vector space and then applying an energy function designed to rank potential events, where true cause-effect pairs are assumed to have low energies. these methods share a very straightforward problem formulation. given a temporal sequence for a historical event chain, the goal is to predict the semantics of the next event using sequence prediction [ ] . the existing methods can be classified into four major categories: ) classical sequence prediction; ) recurrent neural networks; ) markov chains; and ) time series predictions. sequence classification-based methods. these methods formulate event semantic prediction as a multi-class classification problem, where a finite number of candidate events are ranked and the top-ranked event is treated as the future event semantic. the objective isĈ = arg max c i u(s t + = c i |s , · · · , s t ), where s t + denotes the event semantic in time slot t + andĈ is the optimal semantic among all the semantic candidates c i (i = , · · · ). multi-class classification problems can be split into events with different topics/semantic meaning. three types of sequence classification methods have been utilized for this purpose, namely feature-based methods, prototype-based methods, and model-based methods such as markov models. • feature-based. one of the simplest methods is to ignore the temporal relationships among the events in the chain, by either aggregating the inputs or the outputs. tama and comuzzi [ ] formulated historical event sequences with multiple attributes for event prediction, testing multiple conventional classifiers. 
another type of approach based on this notion utilizes compositional methods [ ] that typically leverage an assumption of independence among the historical input events to simplify the original problem $u(s_{t+1} \mid s_1, s_2, \cdots, s_t) = u(s_{t+1} \mid s_{\le t})$ into $v\big(u(s_{t+1} \mid s_1), u(s_{t+1} \mid s_2), \cdots, u(s_{t+1} \mid s_t)\big)$, where $v(\cdot)$ is simply an aggregation function representing a summation over all the components. each component function $u(s_{t+1} \mid s_i)$ can then be calculated by estimating how likely it is that event semantics $s_{t+1}$ and $s_i$ $(i \le t)$ co-occur in the same event chain. granroth-wilding and clark [ ] investigated various models for this, ranging from straightforward similarity scoring functions, through bigram models and word embeddings combined with similarity scoring functions, to newly developed compositional neural networks that jointly learn the representations of $s_{t+1}$ and $s_i$ and then calculate their coherence. other researchers have gone further and considered the dependency among the historical events. for example, letham et al. [ ] proposed optimizing the correct ordering among the candidate events: $$\min_{W} \sum\nolimits_{i \in I,\, j \in J} \big[\, u(s_i; W) \ge u(s_j; W) \,\big],$$ where the semantic candidates in the set $I$ should be ranked strictly lower than those in $J$, the goal being to penalize "incorrect orderings". here, $[\cdot]$ is a discrete indicator function; since $[b \ge a] \le e^{b-a}$, the exponential can be utilized as an upper bound, relaxing the problem into an exponential-based approximation that can be optimized effectively using gradient-based algorithms [ ]. $W$ is the set of parameters of the function $u(\cdot)$. other methods focus on first transforming the sequential data into sequence embeddings that encode the latent sequential context. for example, fronza et al. [ ] apply random indexing to represent the words as vector representations, embedding the information from neighboring words into each word, before utilizing conventional classifiers such as support vector machines (svms) to identify the future events.
• model-based. markov-based models have also been leveraged to characterize temporal patterns [ ]; these typically use $e_i$ to denote each event of a specific type, with $\mathcal{E}$ denoting the set of event types. the goal here is to predict the type of the next event to occur in the future. in [ ], the event types are modeled using a markov model, so given the current event type, the next event type can be inferred simply by looking up the state with the highest probability in the transition matrix (a minimal sketch is given below). a tool called wayeb [ ] has been developed based on this method. laxman et al. [ ] developed a more complicated model based on a mixture of hidden markov models, introducing new assumptions and the concept of episodes composed of subsequences of event types. they assumed that different event episodes should have different transition patterns, so they started by discovering the frequent episodes for events, each of which they modeled with a specific hidden markov model over the various event types. this made it possible to establish a generative process for each future event type $s$ based on the mixture of the above episode markov models. when predicting, the likelihood of the currently observed event sequence under each possible generative process, $p(X \mid \Lambda^y)$, is evaluated, after which a future event type can be predicted as one whose likelihood is either larger than some threshold (as in [ ]) or the largest among all the different $y$ values (as in [ , ]).
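a minimal sketch of the markov-based lookup described above follows; the event types, the sequences, and the add-one smoothing are illustrative assumptions:

```python
import numpy as np

EVENT_TYPES = ["calm", "protest", "riot"]
idx = {e: i for i, e in enumerate(EVENT_TYPES)}

# Hypothetical historical sequences of event types.
sequences = [
    ["calm", "protest", "riot", "calm"],
    ["calm", "protest", "protest", "riot"],
    ["calm", "calm", "protest", "riot"],
]

# Estimate the first-order Markov transition matrix from bigram counts
# (add-one smoothing keeps unseen transitions at nonzero probability).
counts = np.ones((len(EVENT_TYPES), len(EVENT_TYPES)))
for seq in sequences:
    for a, b in zip(seq, seq[1:]):
        counts[idx[a], idx[b]] += 1
transition = counts / counts.sum(axis=1, keepdims=True)

# Predict the next event type by looking up the most probable transition.
current = "protest"
print(EVENT_TYPES[int(np.argmax(transition[idx[current]]))])
```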
• prototype-based. adhikari et al. [ ] took a different approach, utilizing a prototype-based strategy that first clusters the event sequences according to their temporal patterns. when a new event sequence is observed, its closest cluster's centroid is leveraged as a "reference event sequence", whose subsequent events are referred to when predicting future events for the new sequence.
recurrent neural network (rnn)-based methods. approaches in this category can be classified into two types: 1) attribute-based models and 2) descriptive-based models. attribute-based models ingest feature representations of events as input, while descriptive-based models typically ingest unstructured information such as texts to directly predict future events.
• attribute-based methods. here, each event $y = (t, l, s)$ at time $t$ is recast and represented as $e_t = (e_{t,1}, e_{t,2}, \cdots, e_{t,k})$, where $e_{t,i}$ is the $i$-th feature of the event at time $t$. the features here can include location and other information such as event topic and semantics. each sequence $E = (e_1, \cdots, e_t)$ is then input into a standard rnn architecture to predict the next event $e_{t+1}$ in the sequence at time point $t+1$ [ ]. various types of rnn components and architectures have been utilized for this purpose [ , ], but a vanilla rnn [ , ] for sequence-based event prediction can be written in the following form: $$a_i = W h_{i-1} + U e_i, \qquad h_i = \tanh(a_i), \qquad o_i = V h_i,$$ where $h_i$, $o_i$, and $a_i$ are the latent state, output, and activation for the $i$-th event, respectively, and $W$, $U$, and $V$ are the model parameters for fitting the corresponding mappings. the prediction $e_{t+1} := \psi(t+1)$ can then be calculated in a feedforward way from the first event, and the model can be trained by back-propagating the error from the output layer. existing work typically utilizes variants of the vanilla rnn to handle the gradient vanishing problem, especially when the event chain is not short. the most commonly used variants for event prediction are lstm and gru [ ]. for example, the core lstm updates can be written as: $$c_i = \zeta_i^{f} \odot c_{i-1} + \zeta_i^{in} \odot \tanh(W_c h_{i-1} + U_c e_i), \qquad h_i = \zeta_i^{o} \odot \tanh(c_i),$$ where the gates $\zeta_i = \sigma(W_\zeta h_{i-1} + U_\zeta e_i)$ and the additional components $c_{i-1}$ and $\zeta_i$ are introduced to keep track of the previous "history" and to gate the information to be forgotten, in order to handle longer sequences. some researchers leverage a simple lstm architecture to extend rnn-based sequential event prediction [ , ], while others leverage variants of lstm such as bi-directional lstm [ , ], and yet others prefer gated recurrent units (gru) [ ]. moving beyond the chain relationships among events, li et al. [ ] generalized this to graph-structured relationships to better incorporate event contextual information via the narrative event evolutionary graph (neeg). an neeg is a knowledge graph where each node is an event and each edge denotes the association between a pair of events, enabling the neeg to be represented by a weighted adjacency matrix $A$. the basic architecture, as detailed in the paper [ ], extends the recurrent update above so that the current activation $a_i$ depends not only on the previous time point but is also influenced by the node's neighbors in the neeg.
• descriptive-based methods. attribute-based methods require extra effort during preprocessing to convert the unstructured raw data into feature vectors, a process which is not only computationally intensive but also not always feasible. therefore, multiple architectures have been proposed to directly process raw (textual) event descriptions and use them to predict future event semantics or descriptions. these models share a similar generic framework [ , , , , , ], which begins by encoding each sequence of words into an event representation utilizing an rnn architecture, as shown in figure . the sequence of events is then characterized by another, higher-level rnn to predict future events. under this framework, some works begin by decoding the predicted future candidate events into event embeddings, after which the candidates are compared with each other and the one with the largest confidence score is selected as the predicted event. these methods are usually constrained to a known list of event types, but sometimes we are interested in open-set predictions where the predicted event type can be a new type that has not previously been seen in the training set. to achieve this, other methods focus on directly generating future events' descriptions, characterizing event semantics that may or may not have appeared before, by designing an additional sequence decoder that decodes the latent representation of future events into word sequences. more recent research has enhanced the utility and interpretability of the relationships between words and relevant events, and between all the previous events and the relevant future event, by adding hierarchical attention mechanisms. for example, yu et al. [ ] and su and jiang [ ] both proposed word-level and event-level attention, while hu [ ] leveraged word-level attention in both the event encoder and the event decoder.
this section discusses research into ways of jointly predicting the time, location, and semantics of future events. existing work in this area can be categorized into three types: 1) joint time and semantics prediction, 2) joint time and location prediction, and 3) joint prediction of time, location, and semantics. association rule-based methods are a classical approach to jointly predicting event time and semantics. for example, vilalta and ma [ ] defined the lhs of a rule as a tuple $(e_l, \tau)$, where $\tau$ is a time window before the target in the rhs, predefined by the user; only the events occurring within the time window $\tau$ before the event in the rhs will satisfy the lhs. similar techniques have also been leveraged by other researchers [ , ]. however, $\tau$ is difficult to define beforehand, and it is preferable for it to be flexible enough to suit different target events. to handle this challenge, yang et al. [ ] proposed a way to automatically identify a continuous time interval from the data. here, each transaction is composed not only of items but also of continuous time duration information; the lhs is a set of items (e.g., previous events), while the rhs is a tuple $(e_r, [t_1, t_2])$ consisting of a future event's semantic representation and its time interval of occurrence. to automatically learn the time interval in the rhs, [ ] proposed two different methods (a minimal sketch of both is given below). the first is called the confidence-interval-based method, which leverages a statistical distribution (e.g., gaussian or student-t [ ]) to fit all the observed occurrence times of the events in the rhs and then treats the statistical confidence interval as the time interval. the second method is known as minimal temporal region selection, which aims to find the temporal region with the smallest interval that covers all historical occurrences of the event in the rhs.
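the following minimal sketch illustrates both interval-learning methods under simple assumptions: a student-t fit for the confidence-interval-based method and a [min, max] interval for minimal temporal region selection; the observed occurrence times are hypothetical:

```python
import numpy as np
from scipy import stats

# Hypothetical observed occurrence times (in hours after the lhs events)
# of the rhs event across historical transactions.
delays = np.array([10.5, 12.0, 9.8, 11.2, 13.1, 10.9, 12.4, 11.7])

# Confidence-interval-based method: fit a student-t distribution to the
# occurrence times and report the 95% interval as the predicted window.
lo, hi = stats.t.interval(0.95, df=len(delays) - 1,
                          loc=delays.mean(), scale=stats.sem(delays))
print(f"predicted time interval: [{lo:.1f}, {hi:.1f}] hours")

# Minimal temporal region selection: the smallest interval covering all
# historical occurrences is simply [min, max].
print(f"minimal temporal region: [{delays.min()}, {delays.max()}] hours")
```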
time expression extraction. in contrast to the above statistics-based methods, another way to achieve joint prediction of event time and semantics comes from the pattern recognition domain, aiming to directly discover time expressions that mention (planned) future events. as this type of technique can simultaneously identify time, semantics, and other information such as locations, it is widely used; it will be discussed in more detail later as part of the discussion of planned future event detection methods. time series forecasting-based methods. methods based on time series forecasting can be separated into direct methods and indirect methods. direct methods typically formulate the event semantic prediction problem as a multivariate time series forecasting problem, where each variable corresponds to an event type c_i (i = 1, ..., C), and the event type predicted for a future time t̂ is calculated as ŝ_t̂ = argmax_{c_i} f(s_t̂ = c_i | X). for example, in [ ], a longitudinal support vector regressor is utilized to predict multi-attribute events, where a set of support vector regressors, each corresponding to one attribute, is built to predict the next time point's attribute values. weiss and page [ ] took a different approach, leveraging multiple point process models to predict multiple event types. to further estimate the confidence of their predictions, biloš et al. [ ] first leveraged an rnn to learn the historical event representation and then input the result into a gaussian process model to predict future event types. to better capture the joint dynamics across the multiple variables in the time series, brandt et al. [ ] extended this to bayesian vector autoregression. indirect-style methods, in contrast, focus on learning a mapping from the observed event semantics down to a low-dimensional latent-topic space using tensor decomposition-based techniques. for example, matsubara et al. [ ] proposed a 3-way topic analysis of the original observed event tensor Y ∈ ℝ^{d_o × d_a × d_c}, which consists of three factors, namely objects, actors, and time. they decompose this tensor into latent variables via three corresponding low-rank matrices P_o ∈ ℝ^{d_k × d_o}, P_a ∈ ℝ^{d_k × d_a}, and P_c ∈ ℝ^{d_k × d_c}, where d_k is the number of latent topics. for the prediction, the time matrix P_c is extrapolated into the future as P̂_c via multivariate time series forecasting, after which a future event tensor Ŷ is estimated by multiplying the predicted time matrix P̂_c with the known actor matrix P_a and object matrix P_o.
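to illustrate the factorization-and-extrapolation idea above, here is a compact sketch assuming the tensorly library is available; the rank, the toy poisson tensor, and the simple linear-trend extrapolation of the temporal factor are illustrative choices, not the method of the cited paper.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

# toy event-count tensor Y with modes (object, actor, time)
Y = tl.tensor(np.random.poisson(3.0, size=(6, 5, 12)).astype(float))

# low-rank cp decomposition into per-mode factor matrices (rank = #latent topics)
weights, (P_o, P_a, P_c) = parafac(Y, rank=3)

# extrapolate each latent topic's temporal loading one step ahead with
# a simple linear trend (an illustrative choice, not the cited forecaster)
t = np.arange(P_c.shape[0])
next_row = np.array([np.polyval(np.polyfit(t, P_c[:, k], 1), t[-1] + 1)
                     for k in range(P_c.shape[1])])

# recover the predicted future slice from the known object/actor factors
Y_next = tl.cp_to_tensor((weights, [P_o, P_a, next_row[None, :]]))
print(Y_next.shape)   # (6, 5, 1): estimated event tensor at time t+1
```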
raster-based. these methods usually formulate the data into temporal sequences of spatial snapshots. over the last few years, various techniques have been proposed to characterize the spatial and temporal information for event prediction. the simplest way to consider spatial information is to directly treat the location as one of the input features and feed it into predictive models such as linear regression [ ], lstm [ ], and gaussian processes [ ]. during model training, zhao and tang [ ] leveraged the spatiotemporal dependency to regularize their model parameters. most methods in this domain aim to jointly consider the spatial and temporal dependencies for prediction [ ]. at present, the most popular framework is the cnn+rnn architecture, which implements sequence-to-sequence learning problems such as the one illustrated in the corresponding figure. here, the multi-attributed spatial information for each time point can be organized as a series of multi-channel images, which can be encoded using convolution-based operations. for example, huang et al. [ ] proposed adding convolutional layers to process the input into vector representations. other researchers have leveraged variational autoencoders [ ] and cnn autoencoders [ ] to learn a low-dimensional embedding of the raw spatial input data. this allows the learned representation of the input to be fed into a temporal sequence learning architecture. different recurrent units have been investigated, including rnn, lstm, convlstm, and stacked convlstm [ ]. the resulting representation of the input sequence is then passed to the output sequence model, where another recurrent architecture is established. the output of the unit for each time point is fed into a spatial decoder component, which can be implemented using transposed convolutional layers [ ], transposed convlstm [ ], or the spatial decoder of a variational autoencoder [ ]. a conditional random field is another popular technique often used to model the spatial dependency [ ]. point-based. the spatiotemporal point process is an important technique for spatiotemporal event prediction, as it models the rate of event occurrence in terms of both spatial and temporal points. it is defined via the conditional intensity

$$\lambda(t, l \mid X) = \lim_{\Delta t \to 0,\, |B(l, \Delta l)| \to 0} \frac{\mathbb{E}\big[N([t, t + \Delta t] \times B(l, \Delta l)) \mid X\big]}{\Delta t \, |B(l, \Delta l)|},$$

where N(·) counts the events falling in a time-space region and B(l, Δl) is a neighborhood of location l. various models have been proposed to instantiate this framework. for example, liu and brown [ ] began by assuming conditional independence among spatial and temporal factors and hence achieved a decomposition of the form

$$\lambda(t, l, f \mid X) = \lambda_1(l, f \mid t, X)\, \lambda_2(t \mid X),$$

where X, l, t, and f denote the whole input indicator data as well as its different facets, namely location, time, and other semantic features, respectively. the term λ_1(·) can then be modeled using a markov spatial point process, while λ_2(·) can be characterized using temporal autoregressive models. to handle situations where explicit assumptions about the model distributions are difficult, several methods have been proposed that involve deep architectures in the point process. most recently, okawa et al. [ ] proposed an intensity of the form

$$\lambda(t, l \mid X) = \sum_{(t', l') \in X} k\big((t, l), (t', l')\big)\, g_\theta\big(f(t', l')\big),$$

where k(·, ·) is a kernel function, such as a gaussian kernel [ ], that measures similarity in the time and location dimensions, f(t', l') ⊆ F denotes the feature values (e.g., event semantics) for the data at location l' and time t', and g_θ(·) can be a deep neural network that is parameterized by θ and returns a nonnegative scalar. the model selection of g_θ(·) depends on the specific data types. for example, these authors constructed an image attention network by combining a cnn with the spatial attention model proposed by lu et al. [ ].
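a minimal numpy sketch of a kernel-weighted intensity of the form shown in the last equation above; the gaussian bandwidths and the softplus stand-in for g_θ(·) are illustrative assumptions.

```python
import numpy as np

# historical events: (time, x, y, feature score f in [0, 1])
events = np.array([[1.0, 0.2, 0.3, 0.7],
                   [2.5, 0.8, 0.1, 0.4],
                   [4.0, 0.5, 0.5, 0.9]])

def k_time(t, tp, h=1.0):
    return np.exp(-0.5 * ((t - tp) / h) ** 2)            # gaussian kernel in time

def k_space(l, lp, h=0.3):
    return np.exp(-0.5 * np.sum((l - lp) ** 2) / h ** 2)  # gaussian kernel in space

def g_theta(f):
    # stand-in for a deep network g_theta(.): any nonnegative scalar map works here
    return np.log1p(np.exp(f))                             # softplus keeps it >= 0

def intensity(t, l):
    """lambda(t, l | X): kernel-weighted sum of g_theta over past events."""
    return sum(k_time(t, e[0]) * k_space(l, e[1:3]) * g_theta(e[3])
               for e in events)

print(intensity(4.5, np.array([0.5, 0.5])))   # predicted event rate at (t, l)
```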
in this section, we introduce strategies that jointly predict the time, location, and semantics of future events, which can be grouped into either system-based or model-based strategies. system-based. the first type of system-based method considered here is the model-fusion system. the most intuitive approach is to leverage and integrate the aforementioned techniques for time, location, and semantics prediction into a single event prediction system. for example, a system named embers [ ] is an online warning system for future events that can jointly predict the time, location, and semantics (including the type and population) of future events. this system also provides information on the confidence of the predictions obtained. using an ensemble of predictive models for time [ ], location, and semantics prediction, this system achieves a significant performance boost in terms of both precision and recall. the trick here is to first prioritize the precision of each individual prediction model by suppressing its recall; then, due to the diversity and complementary nature of the different models, the fusion of their predictions eventually results in a high recall. a bayesian fusion-based strategy has also been investigated [ ]. another system, named carbon [ ], leverages a similar strategy. the second type involves crowd-sourced systems that implement fusion strategies over the event predictions made by human predictors. for example, in order to handle the heterogeneity and diversity of the human predictors' skill sets and background knowledge under limited human resources, rostami et al. [ ] proposed a recommender system for matching event forecasting tasks to human predictors with suitable skills, in order to maximize the accuracy of their fused predictions. li et al. [ ] took a different approach, designing a prediction market system that operates like a futures market, integrating information from different human predictors to forecast future events. in this system, the predictors can decide whether to buy or sell "tokens" (using virtual dollars, for example) for each specific prediction they have made, according to their confidence in it. they typically make careful decisions, as they will receive corresponding rewards (for correct predictions) or penalties (for erroneous predictions). planned future event detection methods. these methods focus on detecting planned future events, usually from media sources such as social media and news, and typically rely on nlp techniques and linguistic principles. existing methods typically follow a workflow similar to the one shown in the corresponding figure, consisting of four main steps: 1) content filtering, where methods are leveraged to retain only the texts relevant to the topic of interest; existing works utilize either supervised methods (e.g., textual classifiers [ ]) or unsupervised methods (e.g., querying techniques [ , ]); 2) time expression identification, utilized to identify future reference expressions and determine the time to event; these methods either leverage existing tools such as the rosetta text analyzer [ ] or propose dedicated strategies based on linguistic rules [ ] (a toy sketch follows this list); 3) future reference sentence extraction, the core of planned event detection, implemented either by designing regular expression-based rules [ ] or by textual classification [ ]; and 4) location identification, where, because the expression of locations is typically highly heterogeneous and noisy, existing works rely heavily on geocoding techniques that can resolve the event location accurately. in order to infer event locations, various types of locations are considered by different researchers, such as article locations [ ], authors' profile locations [ ], locations mentioned in the articles [ ], and authors' neighbors' locations [ ]. when multiple candidate locations exist, they have been combined by taking a geometric median [ ] or fused using logical rules such as probabilistic soft logic [ ].
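as promised in step 2 above, here is a toy sketch of rule-based future time expression identification; real systems use far richer linguistic rules or dedicated tools, so these regular expressions are purely illustrative.

```python
import re
from datetime import datetime, timedelta

FUTURE_PATTERNS = [
    (r"\btomorrow\b", lambda now: now + timedelta(days=1)),
    (r"\bnext week\b", lambda now: now + timedelta(weeks=1)),
    (r"\bin (\d+) days?\b", lambda now, m: now + timedelta(days=int(m.group(1)))),
]

def extract_future_times(text, now=None):
    """return (expression, resolved date) pairs mentioning planned future events."""
    now = now or datetime(2020, 3, 1)
    hits = []
    for pattern, resolve in FUTURE_PATTERNS:
        for m in re.finditer(pattern, text, flags=re.IGNORECASE):
            try:
                when = resolve(now, m)      # pattern with a captured group
            except TypeError:
                when = resolve(now)         # fixed-offset pattern
            hits.append((m.group(0), when.date()))
    return hits

print(extract_future_times("a rally is planned in 3 days near the city square"))
# [('in 3 days', datetime.date(2020, 3, 4))]
```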
tensor-based methods. some methods formulate the data in tensor form, with dimensions including location, time, and semantics. tensor decomposition is then applied to approximate the original tensor as the product of multiple low-rank matrices, each of which is a mapping from latent topics to one dimension. finally, the tensor is extrapolated toward future time periods by various strategies. for example, mirtaheri et al. [ ] extrapolated the time-dimension matrix only, which they then multiplied with the other dimensions' matrices to recover an estimated tensor extrapolated into the future. zhou et al. [ ] took a different approach, choosing instead to add "empty values" for the entries at future times in the original tensor and then using tensor completion techniques to infer the missing values corresponding to future events. healthcare. this category generally consists of two types of event prediction: 1) population-level, which includes disease epidemics and outbreaks, and 2) individual-level, which relates to clinical longitudinal events. there has been extensive research on disease outbreaks for many different types of diseases and epidemics, including seasonal flu [ ], zika [ ], h1n1 [ ], ebola [ ], and covid-19 [ ]. these predictions target both the location and time of future events, while the disease type is usually fixed to a specific type for each model. compartmental models such as sir models are among the classical mathematical tools used to analyze, model, and simulate epidemic dynamics [ , ]. more recently, individual-based computational models have begun to be used to perform network-based epidemiology grounded in network science and graph-theoretical models, where an epidemic is modeled as a stochastic propagation over an explicit interaction network among people [ ]. thanks to the availability of high-performance computing resources, another option is to construct a "digital twin" of the real world, by considering a realistic representation of a population, including members' demographic, geographic, behavioral, and social contextual information, and then using individual-based simulations to study the spread of epidemics within each network [ ]. the above techniques rely heavily on model assumptions regarding how the disease progresses in individuals and is transmitted from person to person [ ]. the rapid growth of large surveillance and social media data sets, such as twitter and google flu trends, in recent years has led to a massive increase of interest in using data-driven approaches to directly learn the predictive mapping [ ]. these methods are usually both more time-efficient and less dependent on assumptions, while the aforementioned computational models are more powerful for longer-term prediction thanks to their ability to take specific disease mechanisms into account [ ]. finally, there have also been reports of synergistic research that combines both techniques to benefit from their complementary strengths [ , ]. the individual-level research thread focuses on the longitudinal predictive analysis of individual health-related events, including death [ ], adverse drug events [ ], sudden illnesses such as strokes [ ] and cardiovascular events [ ], as well as other clinical events [ ] and life events [ ] for different groups of people, including the elderly and people with mental disease. the goal here is usually to predict the time before an event occurs, although some researchers have attempted to predict the type of event as well. the data sources are essentially the electronic health records of individual patients [ , ]. recently, social media, forum, and mobile data have also been utilized for predicting adverse drug events [ ] and events arising during chronic disease management (e.g., chemotherapy, radiation, and surgery) [ ].
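for the individual-level, time-to-event formulation just described, here is a minimal sketch assuming the lifelines library; the covariates, column names, and synthetic records are invented for illustration.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "age": rng.normal(65, 10, n),                 # illustrative covariates
    "num_prior_admissions": rng.poisson(2, n),
    "duration": rng.exponential(365, n),          # days until event or censoring
    "event": rng.integers(0, 2, n),               # 1 = event was observed
})

# cox proportional-hazards model for time-to-event prediction
cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="event")
cph.print_summary()                               # hazard ratios per covariate

# predicted median time-to-event for two new patients
new = df[["age", "num_prior_admissions"]].iloc[:2]
print(cph.predict_median(new))
```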
media. this category focuses on predicting events based on information held in various types of media, including video-based, audio-based, and text-based formats. the core issue is to retrieve key information related to future events through semantic pattern recognition on the data. video- and audio-based. while event detection has been extensively researched for video data [ ] and audio mining [ ], event prediction is more challenging and has been attracting increasing attention in recent years. the goal here is usually to predict the future status of the objects in the video, such as the next action of soccer players [ ] or basketball players [ ], or the movement of vehicles [ ]. text- and script-based. a huge amount of news data has accumulated in recent decades, much of which can be used for big data predictive analytics of news events. a number of researchers have focused on predicting the location, time, and semantics of various events. to achieve this, they usually leverage immense historical news corpora and knowledge bases in order to learn the association and causality among events, which is then applied to forecast events given the current ones. some studies have even directly generated textual descriptions of future events by leveraging nlp techniques such as sequence-to-sequence learning [ , , , , , , , , ]. transportation. this category can be classified into: 1) population-based events, including dispersal events, gathering events, and congestion; and 2) individual-level events, which focus on fine-grained patterns such as human mobility behavior prediction. group transportation patterns. here, researchers typically focus on transportation events such as congestion [ , ], large gatherings [ ], and dispersal events [ ]. the goal is to forecast the future time period [ ] and location [ ] of such events. data from traffic meters, gps, and mobile devices are usually used to sense real-time human mobility patterns, and transportation and geographical theories are usually considered to determine the spatial and temporal dependencies for predicting these events. another research thread focuses on individual-level prediction, such as predicting an individual's next location [ , ] or the likelihood or time duration of car accidents [ , , ]; sequential and trajectory analyses are usually used to process trajectory and traffic flow data. engineering systems. different types of engineering systems have begun to routinely apply event forecasting methods, including: 1) civil engineering, 2) electrical engineering, 3) energy engineering, and 4) other engineering domains. despite the variety of systems in these widely different domains, the goal is essentially to predict future abnormal or failure events in order to support the system's sustainability and robustness. both the location and time of future events are key factors for these predictions, and the input features usually consist of sensing data relevant to the specific engineering system.
• civil engineering. this covers a wide range of problems in diverse urban systems, such as smart-building adverse event prediction [ ], emergency management equipment failure prediction [ ], manhole event prediction [ ], and other events [ ]. • electrical engineering. this includes failures in teleservice systems [ ] and unexpected events in wire electrical discharge machining operations [ ]. • energy engineering. event prediction is also a hot topic in energy engineering, as such systems usually require strong robustness to handle disturbances from the natural environment. active research domains here include wind power ramp prediction [ ], solar power ramp prediction [ ], and adverse events in low-carbon energy production [ ]. • other engineering domains. there is also active research on event prediction in other domains, such as irrigation event prediction in agricultural engineering [ ] and mechanical fault prediction in mechanical engineering [ ]. cybersecurity. here, the proposed prediction models generally focus on either network-level or device-level events. for both types, the general goal is essentially to predict the likelihood of future system failures or attacks based on various indicators of system vulnerability. so far, these two categories have essentially differed only in their inputs. the former relies on network features, including system specifications, web access logs and search queries, mismanagement symptoms, spam, phishing, and scamming activity, although some researchers are investigating the use of social media text streams to identify semantics indicating potential future ddos attack targets [ , ]. for device-level events, the features of interest are usually the binary file-appearance logs of machines [ , ]. some work has also been done on micro-architectural attacks [ ], by observing and proactively analyzing speculative branches, out-of-order executions, and shared last-level caches [ ]. political event prediction has become a very active research area in recent years, largely thanks to the popularity of social media. the most common research topics can be categorized as: 1) offline events and 2) online activism. offline events. this includes civil unrest [ ], conflicts [ ], violence [ ], and riots [ ]. this type of research usually targets future events' geo-location, time, and topics by leveraging social sensors that indicate public opinions and intentions. utilizing social media has become a popular approach for these endeavors, as social media is a source of vital information during the event development stage [ ]. specifically, many aspects are clearly visible in social media, including complaints from the public (e.g., toward the government), discussions of intentions regarding specific political events and targets, as well as advertisements for planned events. due to the richness of this information, further details of future events, such as the type of event [ ], the anticipated participant population [ ], and the event scale [ ], can also be discovered in advance. online events. due to the major impact of online media such as online forums and social media, many events on such platforms, such as online activism, petitions, and hoaxes, also involve strong motivations toward some political purpose [ ]. beyond simple detection, the prediction of various types of events has been studied in order to enable proactive intervention to sidetrack events such as hoax and rumor propagation [ ]. other researchers have sought to foresee the results of future political events in order to benefit particular groups of practitioners, for example by predicting the outcome of online petitions or presidential elections [ ].
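as a toy illustration of the social-media-based political event forecasting described above, the following sklearn sketch scores short posts for planned-protest signals; the snippets, labels, and features are fabricated and far simpler than the cited systems.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# fabricated training snippets: 1 = mentions a planned protest-like event
texts = [
    "march to the ministry planned for friday, bring banners",
    "we will gather at the main square to protest the new law",
    "strike announced next week against the fuel price hike",
    "lovely weather today, going to the beach with family",
    "the team won the match last night, great goal",
    "new restaurant opened downtown, the food is great",
]
labels = [1, 1, 1, 0, 0, 0]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)

new_posts = ["protest planned at the square on friday"]
print(clf.predict_proba(new_posts)[:, 1])   # probability of an unrest signal
```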
natural disasters. different types of natural disasters have been the focus of a great deal of research. typically, these are rare events, but mechanistic models, long historical records (often extending back dozens or hundreds of years), and domain knowledge are usually available. the input data are typically collected by sensors or sensor networks, and the output is the risk or hazard of future potential events. since these events are typically rare but very high-stakes, many researchers strive to cover all event occurrences and hence aim to ensure high recall. geophysics-related. earthquakes. predictions here typically focus on whether there will be an earthquake with a magnitude larger than a specified threshold in a certain area during a future period of time. to achieve this, the original sensor data are usually processed using geophysical models such as the gutenberg–richter inverse law, the distribution of characteristic earthquake magnitudes, and seismic quiescence [ , ]. the processed data are then input into machine learning models that treat them as features for predicting the output, which can be either a binary indicator of event occurrence or a time-to-event value. some studies are devoted to identifying the time of future earthquakes and their precursors based on an ensemble of regressors and feature selection techniques [ ], while others focus on aftershock prediction and the consequences of the earthquake, such as fire prevention [ ]. it is worth noting that social media data have also been used for such tasks, as they often support early detection of the first-wave earthquake, which can then be used to predict the aftershocks or earthquakes in other locations [ ]. fire events. research in this category can be grouped into urban fires and wildfires, and often focuses on the time at which a fire will affect a specific location, such as a building. the goal here is to predict the risk of future fire events. to achieve this, both the external environment and the intrinsic properties of the location of interest are important; therefore, both static input data (e.g., natural conditions and demographics) and time-varying data (e.g., weather, climate, and crowd flow) are usually involved. shin and kim [ ] focus on building fire risk prediction, where the input is the building's profile. others have studied wildfires, where weather data and satellite data are important inputs; this type of research focuses primarily on predicting both the time and location of future fires [ , ]. other researchers have focused on rarer events such as volcanic eruptions. for example, some leverage chemical prior knowledge to build a bayesian network for prediction [ ], while others adopt point processes to predict the hazard of future events [ ].
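returning to the gutenberg–richter processing step mentioned in the earthquake paragraph above, the standard aki maximum-likelihood estimate of the b-value can be computed from a magnitude catalog as follows; the synthetic catalog and the assumed completeness magnitude are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
mc = 3.0                                   # assumed completeness magnitude
# gutenberg-richter implies magnitudes above mc are roughly exponential;
# here we synthesize a catalog consistent with b = 1 (rate = b * ln 10)
mags = mc + rng.exponential(scale=1.0 / (1.0 * np.log(10)), size=500)

# aki (1965) maximum-likelihood estimator: b = log10(e) / (mean(M) - Mc)
b = np.log10(np.e) / (mags.mean() - mc)
a = np.log10(len(mags)) + b * mc           # so that log10 N(>=M) = a - b*M
print(f"estimated b-value: {b:.2f}")

# implied count of catalog events with magnitude >= 5 under log10 N = a - b*M
print(f"expected count M>=5: {10 ** (a - b * 5):.2f}")
```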
atmospheric science-related. flood events. floods may be caused by many different factors, including atmospheric (e.g., snow and rain), hydrological (e.g., ice melting, wind-generated waves, and river flow), and geophysical (e.g., terrain) conditions. this makes the forecasting of floods a highly complicated task that requires multiple diverse predictors [ ]. flood event prediction has a long history, with the latest research focusing especially on computational and simulation models based on domain knowledge. this usually involves using ensemble prediction systems as inputs for hydrological and/or hydraulic models to produce river discharge predictions; for a detailed survey of flood computational models, please refer to [ ]. however, it is prohibitively difficult to comprehensively consider and model all the factors correctly while avoiding all the accumulated errors from upstream predictions (e.g., precipitation prediction). another direction, based on data-driven models such as statistical and machine learning models for flood prediction, is deemed promising and is expected to complement existing computational models. these newly developed machine learning models are often based solely on historical data, requiring no knowledge of the underlying physical processes; representative models are svms, random forests, and neural networks, together with their variants and hybrids. a detailed recent survey is provided in [ ]. tornado forecasting. tornadoes usually develop within thunderstorms, and hence most tornado warning systems are based on the prediction of thunderstorms; for a comprehensive survey, please refer to [ ]. machine learning models, when applied to tornado forecasting tasks, usually suffer from high-dimensionality issues, which are very common in meteorological data, and some methods have therefore leveraged dimensionality reduction strategies to preprocess the data before prediction [ ]. research on other atmosphere-related events, such as droughts and ozone events, has also been conducted [ ]. astrophysics-related. there is a large body of prediction research focusing on events outside the earth, especially those affecting the star closest to us, the sun. methods have been proposed to predict various solar events that could impact life on earth, including solar flares [ ], solar eruptions [ ], and high-energy particle storms [ ]. the goal here is typically to use satellite imagery of the sun to predict the time and location of future solar events and the activity strength [ ]. business intelligence applications can be grouped into company-based events and customer-based events. customer activity prediction. among the most important customer activities in business are whether a customer will continue doing business with a company and how long a customer is willing to wait before receiving a service. a great deal of research has been devoted to these topics, which can be categorized by the type of business entity, namely enterprises, social media, and education, which are primarily interested in churn prediction, site migration, and student dropout, respectively. the first of these focuses on predicting whether and when a customer is likely to stop doing business with a profitable enterprise [ ]. the second aims to predict whether a social media user will move from one site, such as flickr, to another, such as instagram, a movement known as site migration [ ]. while site migration is not widespread, attention migration may actually be much more common, as a user can "move" their major activities from one social media site to another. the third type, student dropout, is a critical domain for educational data mining, where the goal is to predict the occurrence of unjustified absenteeism from school for a continuous number of days; a comprehensive survey is available in [ ]. for all three types, the procedure is first to collect features of a customer's profile and activities over a period of time, after which conventional or sequential classifiers or regressors are used to predict the occurrence or time-to-event of the future targeted activity.
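following the generic procedure just described (profile and activity features over a window, then a conventional classifier), here is a minimal sklearn churn sketch; the features, labels, and the synthetic rule generating them are fabricated for illustration.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 1000
X = pd.DataFrame({
    "months_active": rng.integers(1, 60, n),     # illustrative profile features
    "logins_last_30d": rng.poisson(10, n),       # illustrative activity features
    "support_tickets": rng.poisson(1, n),
})
# synthetic rule: inactive users with many tickets tend to churn
y = ((X["logins_last_30d"] < 5) | (X["support_tickets"] > 2)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("auc:", round(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]), 3))
```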
financial events. financial event prediction has been attracting a huge amount of attention for risk management, marketing, investment prediction, and fraud prevention. multiple information resources, including news, company announcements, and social media data, can be utilized as input, often taking the form of time series or temporal sequences. these sequential inputs are used to predict the time and occurrence of future high-stakes events such as company distress, suspensions, mergers, dividends, layoffs, bankruptcy, and market trends (rises and falls in a company's stock price) [ , , , , , , ]. crime. it is difficult to deduce the precise location and time of individual crime incidents; the focus is therefore instead on estimating the risk and probability of the location, time, and types of future crimes. this field can be naturally categorized by crime type. political crimes and terrorism. this type of crime is typically highly destructive and hence attracts huge attention for its anticipation and prevention. terrorist activities are usually aimed at religious, political, iconic, economic, or social targets; the attacker typically targets large numbers of people, and the evidence related to such attacks is retained in the long run. though it is extremely challenging to predict the precise location and time of individual terrorism incidents, numerous studies have shown the potential utility of predicting the regional risks of terrorist attacks based on information gathered from many data sources, such as geopolitical, weather, and economic data. the global terrorism database is the most widely recognized dataset recording descriptions of worldwide terrorism events over recent decades. in addition to terrorism, other similar events such as mass killings [ ] and armed-conflict events [ ] have also been studied using similar problem formulations. urban crimes. most studies on this topic focus on predicting the types, intensity, count, and probability of crime events across defined geo-spatial regions. to date, urban crimes have most commonly been the topic of research due to data availability. the geospatial characteristics of urban areas, their demographics, and temporal data such as news, weather, economics, and social media data are usually used as inputs. the geospatial dependency and correlation of crime patterns are usually leveraged during the prediction process, using techniques originally developed for spatial prediction such as kernel density estimation and conditional random fields. some works simplify the task by focusing only on specific types of crimes, such as theft [ ], robbery, and burglary [ ]. organized and serial crimes. unlike the above research on regional crime risks, some recent studies strive to predict the next incidents of criminal individuals or groups.
this is because different offenders may demonstrate different behavioral patterns, such as targeting specific regions (e.g., wealthy neighborhoods) or victims (e.g., women), or seeking specific benefits (e.g., money). the goal here is thus to predict the next crime site and/or time, based on the historical crime event sequence of the targeted criminal individual or group. models such as point processes [ ] or bayesian networks [ ] are usually used to address such problems. despite the major advances in event prediction in recent years, there are still a number of open problems and potentially fruitful directions for future research, as follows. increasingly sophisticated forecasting models have been proposed to improve prediction accuracy, including those utilizing ensemble models, neural networks, and the other complex systems mentioned above. however, although accuracy can be improved, the event prediction models are rapidly becoming too complex to be interpreted by human operators. the need for better model accountability and interpretability is becoming an important issue; as big data and artificial intelligence techniques are applied to ever more domains, this can lead to serious consequences in applications such as healthcare and disaster management. models that are not interpretable by humans will find it hard to build the trust needed if they are to be fully integrated into the workflow of practitioners. a closely related key feature is the accountability of the event prediction system. for example, disaster managers need to thoroughly understand a model's recommendations if they are to be able to explain the reason for a decision to displace people in a court of law. moreover, an ever-increasing number of laws in countries around the world are beginning to require adequate explanations of decisions reached based on model recommendations. for example, articles in the european union's general data protection regulation (gdpr) [ ] require algorithms that make decisions that "significantly affect" individuals to provide explanations (a "right to explanation"). similar laws have also been established in countries such as the united states [ ] and china [ ]. the massive popularity of the proposal, development, and deployment of event prediction is stimulating a surge of interest in developing ways to counter-attack these systems, so it will be no surprise if techniques to obfuscate event prediction methods begin to appear in the near future. as with many state-of-the-art ai techniques applied in other domains such as object recognition, event prediction methods can be very vulnerable to noise and adversarial attacks. the famous failure of google flu trends, which missed the peak of the flu season by a large percentage due to low relevance and high disturbance in the input signal, remains a vivid memory for practitioners in the field [ ]. many predictions relying on social media data can also easily be influenced or flipped by injecting scam messages. event prediction models also tend to over-rely on low-quality input data that can easily be disturbed or manipulated, and they lack sufficient robustness to survive noisy signals and adversarial attacks. similar problems threaten other application domains such as business intelligence, crime, and cyber systems.
over the years, many domains have accumulated a significant amount of knowledge and experience about the mechanisms of event development and occurrence, which can provide important clues for anticipating future events; examples include epidemiological models, socio-political models, and earthquake models. all of these models focus on simplifying real-world phenomena into concise principles in order to grasp the core mechanism, discarding many details in the process. in contrast, data-driven models strive to fit large historical data sets accurately, relying on sufficient model expressiveness, but they cannot guarantee that the true underlying principles and causality of event occurrence are modeled accurately. there is thus a clear motivation to combine their complementary strengths, and although this has already attracted a great deal of interest [ , ], most of the models proposed so far are merely ensemble-learning-based and simply merge the final predictions from each model. a more thorough integration is needed that can directly embed the core principles to regularize and instruct the training of data-driven event prediction methods. moreover, existing attempts are typically specific to particular domains and are thus difficult to develop further, as they require in-depth collaboration between data scientists and domain experts. a generic framework encompassing multiple different domains is imperative and would be highly beneficial for the various domain experts. the ultimate purpose of event prediction is usually not just to anticipate the future but to change it, for example by avoiding a system failure or flattening the curve of a disease outbreak. however, it is difficult for practitioners to determine how to act appropriately and implement effective policies in order to achieve the desired results. this requires a capability that goes beyond simply predicting future events based on the current situation: it requires taking into account the new actions being taken in real time and then predicting how they might influence the future. one promising direction is counterfactual event prediction [ ], which models what would have happened if different circumstances had occurred. another related direction is prescriptive analysis, where different actions can be merged into the prediction system and future results anticipated or optimized. related work has been developed in a few domains, such as epidemiology, but sufficient research is still lacking in many other domains if we are to develop generic frameworks that can benefit them all. existing event prediction methods focus primarily on accuracy. however, decision makers who utilize the predicted events usually need much more, including key factors such as event resolution (e.g., time resolution, location resolution, description details), confidence (e.g., the probability that a predicted event will occur), efficiency (e.g., whether the model can predict per day or per second), lead time (how many days ahead the prediction can be made before the event occurs), and event intensity (how serious it is). there are typically trade-offs among these metrics and accuracy, so merely optimizing accuracy during training will inevitably mean the results drift away from the overall optimal event-prediction-based decision; this calls for multi-objective optimization over, for example, accuracy, confidence, and resolution.
a system that can flexibly balance the trade-offs among these metrics based on decision makers' needs and achieve such multi-objective optimization is the ultimate goal for these models. this survey has presented a comprehensive overview of existing methodologies developed for event prediction in the big data era. it provides an extensive review of event prediction challenges, techniques, applications, evaluation procedures, and future outlook, summarizing the research presented in a large number of publications, most of which were published in the last five years. event prediction challenges, opportunities, and formulations have been discussed in terms of the event elements to be predicted, including event location, time, and semantics, after which we proposed a systematic taxonomy of the existing event prediction techniques according to the formulated problems and the types of methodologies designed for them. we have also analyzed the relationships, differences, advantages, and disadvantages of these techniques across various domains, including machine learning, data mining, pattern recognition, natural language processing, information retrieval, statistics, and other computational models. in addition, a comprehensive and hierarchical categorization of popular event prediction applications has been provided, covering domains ranging from the natural sciences to the social sciences. based upon the numerous historical and state-of-the-art works discussed in this survey, the paper concludes by discussing open problems and future trends in this fast-growing domain.
key: cord- -lnzz chk authors: chakraborty, tanujit; ghosh, indrajit; mahajan, tirna; arora, tejasvi title: nowcasting of covid-19 confirmed cases: foundations, trends, and challenges journal: nan doi: nan cord_uid: lnzz chk

the coronavirus disease (covid-19) has become a public health emergency of international concern, affecting a large number of countries and territories worldwide. as of september 2020, it had caused a pandemic outbreak with tens of millions of confirmed infections and around a million reported deaths worldwide. several statistical, machine learning, and hybrid models have previously tried to forecast covid-19 confirmed cases for profoundly affected countries. due to the extreme uncertainty and nonstationarity in the time series data, forecasting covid-19 confirmed cases has become a very challenging job. for univariate time series forecasting, various statistical and machine learning models are available in the literature. but epidemic forecasting has a dubious track record, and its failures have become more prominent due to insufficient input data, flaws in modeling assumptions, high sensitivity of estimates, lack of incorporation of epidemiological features, inadequate past evidence on the effects of available interventions, lack of transparency, errors, lack of determinacy, and lack of expertise in crucial disciplines. this chapter focuses on assessing different short-term forecasting models that can forecast daily covid-19 cases for various countries. in the form of an empirical study on forecasting accuracy, this chapter provides evidence that no universal method is available that can accurately forecast pandemic data. still, forecasters' predictions are useful for the effective allocation of healthcare resources and will act as an early-warning system for government policymakers.
-with an emerging disease such as covid- , many transmission-related biologic features are hard to measure and remain unknown;
-the most obvious source of uncertainty affecting all models is that we don't know how many people are or have been infected;
-ongoing issues with virologic testing mean that we are certainly missing a substantial number of cases, so models fitted to confirmed cases are likely to be highly uncertain [ ] ;
-the problem of using confirmed cases to fit models is further complicated because the fraction of confirmed cases is spatially heterogeneous and time-varying [ ] ;
-finally, many parameters associated with covid- transmission are poorly understood.
amid enormous uncertainty about the future of the covid- pandemic, statistical, machine learning, and epidemiological models are critical forecasting tools for policymakers, clinicians, and public health practitioners [ , , , , , ] . covid- modeling studies generally follow one of two general approaches that we will refer to as forecasting models and mechanistic models. although there are hybrid approaches, these two model types tend to address different questions on different time scales, and they deal differently with uncertainty [ ] . compartmental epidemiological models have been developed over nearly a century and are well tested on data from past epidemics. these models are based on modeling the actual infection process and are useful for predicting long-term trajectories of the epidemic curves [ ] . short-term forecasting models are often statistical, fitting a line or curve to data and extrapolating from there, like seeing a pattern in a sequence of numbers and guessing the next number, without incorporating the process that produces the pattern [ , , ] . well-constructed statistical frameworks can be used for short-term forecasts, using machine learning or regression. in statistical models, the uncertainty of the prediction is generally presented as statistically computed prediction intervals around an estimate [ , ] . given that what happens a month from now will depend on what happens in the interim, the estimated uncertainty should increase as one looks further into the future. these models yield quantitative projections that policymakers may need to allocate resources or plan interventions in the short term. forecasting time series has been a traditional research topic for decades, and various models have been developed to improve forecasting accuracy [ , , ] . there are numerous methods available to forecast time series, including traditional statistical models and machine learning algorithms, providing many options for modelers working on epidemiological forecasting [ , , , , , , ] . many research efforts have focused on developing a universal forecasting model but have failed, which is also evident from the "no free lunch theorem" [ ] . this chapter focuses on assessing popularly used short-term forecasting (nowcasting) models for covid- from an empirical perspective. the findings of this chapter will fill the gap in the literature on nowcasting of covid- by comparing various forecasting methods, understanding the global characteristics of pandemic data, and discovering real challenges for pandemic forecasters. the upcoming sections present a collection of recent findings on covid- forecasting. additionally, twenty nowcasting (statistical, machine learning, and hybrid) models are assessed for five countries: the united states of america (usa), india, brazil, russia, and peru.
finally, some recommendations for policy-making decisions and the limitations of these forecasting tools are discussed. researchers face unprecedented challenges during this global pandemic in forecasting future real-time cases with traditional mathematical, statistical, forecasting, and machine learning tools [ , , , , ] . studies in march that used simple yet powerful forecasting methods, such as the exponential smoothing model, predicted cases ten days ahead and, despite a positive bias, achieved reasonable forecast error [ ] . early linear and exponential model forecasts were conducted for better preparation regarding hospital beds, icu admission estimation, resource allocation, emergency funding, and the proposal of strong containment measures [ ] ; they projected about icu and icu admissions in italy for march , . health-care workers went through immense mental stress, left with the formidable choice of prioritizing young and healthy adults over the elderly for the allocation of life support and, mostly unwillingly, ignoring those who are extremely unlikely to survive [ , ] . real estimates of mortality with a -day delay demonstrated underestimation of the covid- outbreak and indicated a grave future, with a global case fatality rate (cfr) of . % in march [ ] . contact tracing, quarantine, and isolation efforts have a differential effect on mortality due to covid- among countries. even though the cfr of covid- appears lower than that of other deadly epidemics, there are concerns about it eventually returning like the seasonal flu, causing a second wave or a future pandemic [ , ] . mechanistic models, like the susceptible-exposed-infectious-recovered (seir) frameworks, try to mimic the way covid- spreads and are used to forecast or simulate future transmission scenarios under various assumptions about the parameters governing transmission, disease, and immunity [ , , , , ] . mechanistic modeling is one of the only ways to explore possible long-term epidemiologic outcomes [ ] . for example, the model from ferguson et al. [ ] that has been used to guide policy responses in the united states and britain examines how many covid- deaths may occur over the next two years under various social distancing measures. kissler et al. [ ] ask whether we can expect seasonal, recurrent epidemics if immunity against the novel coronavirus functions similarly to immunity against the milder coronaviruses that we transmit seasonally. in a detailed mechanistic model of boston-area transmission, aleta et al. [ ] simulate various lockdown "exit strategies". these models are a way to formalize what we know about viral transmission and to explore possible futures of a system that involves nonlinear interactions, which is almost impossible to do using intuition alone [ , ] . although these epidemiological models are useful for estimating the dynamics of transmission, targeting resources, and evaluating the impact of intervention strategies, the models require parameters and depend on many assumptions. several statistical and machine learning methods for real-time forecasting of the new and cumulative confirmed cases of covid- have been developed to overcome the limitations of the epidemiological model approaches and to assist public health planning and policy-making [ , , , , ] . real-time forecasting with reliable predictions is required to reach statistically validated conclusions in this current health crisis.
some of the leading-edge research concerning real-time projections of covid- confirmed cases, recovered cases, and mortality using statistical, machine learning, and mathematical time series modeling is given in table . a univariate time series is the simplest form of temporal data: a sequence of real numbers collected regularly over time, where each number represents a value [ , ] . there are broadly two major steps involved in univariate time series forecasting [ ] :
-studying the global characteristics of the time series data;
-analysis of the data with the 'best-fitted' forecasting model.
understanding the global characteristics of pandemic confirmed cases data can help forecasters determine what kind of forecasting method will be appropriate for the given situation [ ] . as such, we aim to perform a meaningful data analysis, including the study of time series characteristics, to provide a suitable and comprehensive knowledge foundation for the subsequent step of selecting an apt forecasting method. thus, we take the path of using statistical measures to understand pandemic time series characteristics to assist method selection and data analysis. these characteristics carry summarized information about the time series, capturing the 'global picture' of the datasets. based on the recommendations of [ , , , ] , we study several classical and advanced time series characteristics of covid- data. this study considers eight global characteristics of the time series: periodicity, stationarity, serial correlation, skewness, kurtosis, nonlinearity, long-term dependence, and chaos. this collection of measures provides quantified descriptions and gives a rich portrait of the pandemic time series' nature. a brief description of these statistical and advanced time-series measures is given below. a seasonal pattern exists when a time series is influenced by seasonal factors, such as the month of the year or the day of the week. the seasonality of a time series is defined as a pattern that repeats itself over fixed intervals of time [ ] . in general, seasonality can be found by identifying a large autocorrelation coefficient or a large partial autocorrelation coefficient at the seasonal lag. since periodicity is very important for determining seasonality and examining the cyclic pattern of the time series, periodicity feature extraction becomes a necessity. unfortunately, many time series available from datasets in different domains do not always have a known frequency or regular periodicity. seasonal time series are sometimes also called cyclic series, although there is a significant distinction between them: cyclic data have varying frequency lengths, but seasonality is of a fixed length over each period. for time series with no seasonal pattern, the frequency is set to . the seasonality is tested using the 'stl' function within the "stats" package in r statistical software [ ] .
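to make the periodicity check concrete, the following minimal r sketch applies the 'stl' decomposition to a daily series; the vector 'daily_cases' and the weekly frequency are hypothetical choices for illustration, not part of the original study:

# 'daily_cases' is a hypothetical numeric vector of daily confirmed counts
y <- ts(daily_cases, frequency = 7)        # treat a week as the candidate seasonal period
decomp <- stl(y, s.window = "periodic")    # errors out for non-seasonal series (frequency 1)
plot(decomp)                               # visually inspect the extracted seasonal component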
stationarity is the foremost fundamental statistical property tested for in time series analysis because most statistical models require that the underlying generating process be stationary [ ] . stationarity means that a time series (or rather the process rendering it) does not change over time. in statistics, a unit root test tests whether a time series variable is nonstationary and possesses a unit root [ ] . the null hypothesis is generally defined as the presence of a unit root, and the alternative hypothesis is either stationarity, trend stationarity, or an explosive root, depending on the test used. in econometrics, kwiatkowski-phillips-schmidt-shin (kpss) tests are used for testing the null hypothesis that an observable time series is stationary around a deterministic trend (that is, trend-stationary) against the alternative of a unit root [ ] . the kpss test is done using the 'kpss.test' function within the "tseries" package in r statistical software [ ] . serial correlation is the relationship between a variable and a lagged version of itself over various time intervals. serial correlation occurs in time-series studies when the errors associated with a given time period carry over into future time periods [ ] . we have used the box-pierce statistic [ ] in our approach to estimate the serial correlation measure and extract the measures from covid- data. the box-pierce statistic was designed by box and pierce in 1970 for testing residuals from a forecast model [ ] . it is a common portmanteau test for computing the measure. the mathematical formula of the box-pierce statistic is as follows:

q = n \sum_{k=1}^{h} r_k^2,

where n is the length of the time series, h is the maximum lag being considered (usually h = 20 is chosen), and r_k is the autocorrelation function at lag k. the portmanteau test is done using the 'box.test' function within the "stats" package in r statistical software [ ] . nonlinear time series models have been used extensively to model complex dynamics not adequately represented by linear models [ ] . nonlinearity is one important time series characteristic for determining the selection of an appropriate forecasting method [ ] . there are many approaches to testing for nonlinearity in time series models, including a nonparametric kernel test and a neural network test [ ] . in comparative studies between these two approaches, the neural network test has been reported to have better reliability [ ] . in this research, we used teräsvirta's neural network test [ ] for measuring time series data nonlinearity. it has been widely accepted and reported that it can correctly model the nonlinear structure of the data [ ] . it is a test for neglected nonlinearity, likely to have power against a range of alternatives based on the nn model (augmented single-hidden-layer feedforward neural network model). this statistic takes large values when the series is nonlinear and values near zero when the series is linear. the test is done using the 'nonlinearitytest' function within the "nonlineartseries" package in r statistical software [ ] . skewness is a measure of symmetry, or more precisely, the lack of symmetry. a distribution, or dataset, is symmetric if it looks the same to the left and the right of the center point [ ] . a skewness measure is used to characterize the degree of asymmetry of values around the mean value [ ] . for univariate data y_t, the skewness coefficient is

skewness = \frac{1}{n} \sum_{t=1}^{n} \left( \frac{y_t - \bar{y}}{\sigma} \right)^3,

where \bar{y} is the mean, σ is the standard deviation, and n is the number of data points. the skewness for a normal distribution is zero, and any symmetric data should have skewness near zero. negative values of skewness indicate data that are skewed left, and positive values indicate data that are skewed right. in other words, left skewness means that the left tail is heavier than the right tail; similarly, right skewness means the right tail is heavier than the left tail [ ] . skewness is calculated using the 'skewness' function within the "e1071" package in r statistical software [ ] .
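a minimal sketch of the stationarity, serial correlation, nonlinearity, and skewness checks described above, using the r functions named in the text (the object 'y' is the hypothetical series from the previous sketch):

library(tseries)            # kpss.test
library(nonlinearTseries)   # nonlinearityTest
library(e1071)              # skewness

kpss.test(y)                               # null hypothesis: (trend-)stationarity
Box.test(y, lag = 1, type = "Box-Pierce")  # portmanteau test of serial correlation
nonlinearityTest(y)                        # battery of tests, including terasvirta's
skewness(y)                                # > 0: right tail heavier; < 0: left tail heavier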
kurtosis is a measure of whether the data are peaked or flat relative to a normal distribution [ ] . a dataset with high kurtosis tends to have a distinct peak near the mean, decline rather rapidly, and have heavy tails. datasets with low kurtosis tend to have a flat top near the mean rather than a sharp peak. for a univariate time series y_t, the kurtosis coefficient is

kurtosis = \frac{1}{n} \sum_{t=1}^{n} \left( \frac{y_t - \bar{y}}{\sigma} \right)^4.

the kurtosis for a standard normal distribution is 3; therefore, the excess kurtosis is defined as

excess kurtosis = kurtosis - 3,

so the standard normal distribution has an excess kurtosis of zero. positive excess kurtosis indicates a 'peaked' distribution and negative excess kurtosis indicates a 'flat' distribution [ ] . kurtosis is calculated using the 'kurtosis' function within the "performanceanalytics" package in r statistical software [ ] . processes with long-range dependence have attracted a good deal of attention from a probabilistic perspective in time series analysis [ ] . with the increasing importance of 'self-similarity' or 'long-range dependence' as one of the time series characteristics, we include this feature in the group of pandemic data characteristics. the definition of self-similarity is most related to the self-similarity parameter, also called the hurst exponent (h) [ ] . the class of autoregressive fractionally integrated moving average (arfima) processes [ ] provides a good estimation method for computing h. in an arima(p, d, q) model, p is the order of the ar part, d is the degree of first differencing involved, and q is the order of the ma part. if the time series is suspected of exhibiting long-range dependency, the parameter d may be replaced by certain non-integer values in the arfima model [ ] . we fit an arfima(0, d, 0) model by maximum likelihood, which is approximated by using the fast and accurate method of haslett and raftery [ ] . we then estimate the hurst parameter using the relation h = d + 0.5. the self-similarity feature can only be detected in the raw data of the time series. the value of h can be obtained using the 'hurstexp' function within the "pracma" package in r statistical software [ ] . many systems in nature that were previously considered random processes are now categorized as chaotic systems. for several years, lyapunov characteristic exponents have been of interest in the study of dynamical systems as a way to characterize quantitatively their stochasticity properties, related essentially to the exponential divergence of nearby orbits [ ] . nonlinear dynamical systems often exhibit chaos, characterized by sensitive dependence on initial values, or more precisely by a positive lyapunov exponent (le) [ ] . recognizing and quantifying chaos in time series are essential steps toward understanding the nature of random behavior and revealing the extent to which short-term forecasts may be improved [ ] . le, as a measure of the divergence of nearby trajectories, has been used to quantify chaos by giving a quantitative value [ ] . the algorithm for computing le from a time series is applied to continuous dynamical systems in an n-dimensional phase space [ ] . le is calculated using the lyapunov exponent functions within the "tserieschaos" package in r statistical software [ ] .
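the remaining three characteristics can be computed along the following lines; the embedding parameters passed to 'lyap_k' are illustrative guesses (adapted from the package examples) rather than values used in the study:

library(PerformanceAnalytics)   # kurtosis
library(pracma)                 # hurstexp
library(tseriesChaos)           # lyap_k

kurtosis(y)     # excess kurtosis: > 0 suggests a peaked distribution, < 0 a flat one
hurstexp(y)     # several hurst estimates; values near 1 suggest strong persistence
ly <- lyap_k(y, m = 2, d = 1, s = 200, t = 40, ref = 170, k = 2, eps = 4)
plot(ly)        # the slope of the divergence curve estimates the largest lyapunov exponent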
time series forecasting models work by taking a series of historical observations and extrapolating future patterns. these are great when the data are accurate and the future resembles the past. forecasting tools are designed to predict possible future alternatives and help current planning and decision making [ ] . there are essentially three general approaches to forecasting a time series [ ] :
1. generating forecasts from an individual model;
2. combining forecasts from many models (forecast model averaging);
3. hybrid experts for time series forecasting.
single (individual) forecasting models are either traditional statistical methods or modern machine learning tools. we study ten popularly used single forecasting models from the classical time series, advanced statistics, and machine learning literature. there has been a vast literature on forecast combinations, motivated by the seminal work of bates & granger [ ] and followed by a plethora of empirical applications showing that combination forecasts are often superior to their counterparts (see, [ , ] , for example). combining forecasts using a weighted average is considered a successful way of hedging against the risk of selecting a misspecified model [ ] . a significant challenge lies in choosing an appropriate set of weights, and many attempts to do this have been worse than simply using equal weights, something that has become known as the "forecast combination puzzle" (see, for example, [ ] ). to overcome this, hybrid models became popular with the seminal work of zhang [ ] and were further extended for epidemic forecasting in [ , , ] . the forecasting methods can be briefly reviewed and organized in the architecture shown in figure . the autoregressive integrated moving average (arima) model is one of the well-known linear models in time-series forecasting, developed in the early 1970s [ ] . it is widely used to track linear tendencies in stationary time-series data. it is denoted by arima(p, d, q), where the three components have significant meanings: the parameters p and q represent the orders of the ar and ma parts, respectively, and d denotes the level of differencing needed to convert nonstationary data into a stationary time series [ ] . the arima model can be mathematically expressed as follows:

y_t = c + \sum_{i=1}^{p} \beta_i y_{t-i} + \varepsilon_t + \sum_{j=1}^{q} \alpha_j \varepsilon_{t-j},

where y_t denotes the actual value of the variable at time t (after differencing d times), ε_t denotes the random error at time t, and β_i and α_j are the coefficients of the model. some necessary steps to be followed for any given time-series dataset to build an arima model are as follows:
-identification of the model (achieving stationarity);
-use of the autocorrelation function (acf) and partial acf plots to select the ar and ma model parameters, respectively, and estimation of the model parameters for the arima model;
-selection of the 'best-fitted' forecasting model using the akaike information criterion (aic) or the bayesian information criterion (bic); finally, one checks the model diagnostics to measure its performance.
an implementation in r statistical software is available using the 'auto.arima' function under the "forecast" package, which returns the 'best' arima model according to either aic or bic values [ ] .
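a minimal sketch of the arima workflow just described, using 'auto.arima' (the 15-day horizon is an assumption matching the test periods used later in the chapter):

library(forecast)
fit.arima <- auto.arima(y)               # automatic selection of p, d, q
summary(fit.arima)
checkresiduals(fit.arima)                # residual diagnostics (acf plot, ljung-box test)
fc.arima <- forecast(fit.arima, h = 15)  # 15-days-ahead forecasts with prediction intervals
plot(fc.arima)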
wavelet analysis is a mathematical tool that can reveal information within signals in both the time and scale (frequency) domains. this property overcomes the primary drawback of fourier analysis: the wavelet transform maps the original signal data (especially in the time domain) into a different domain for data analysis and processing. wavelet-based models are most suitable for nonstationary data, unlike the standard arima model. most epidemic time-series datasets are nonstationary; therefore, wavelet transforms are used as a forecasting model for these datasets [ ] . when conducting wavelet analysis in the context of time series analysis [ ] , the selection of the optimal number of decomposition levels is vital to determine the performance of the model in the wavelet domain. the following formula, w_l = int[log(n)], is used to select the number of decomposition levels, where n is the time-series length. the wavelet-based arima (warima) model transforms the time series data by using a hybrid maximal overlap discrete wavelet transform (modwt) algorithm with a 'haar' filter [ ] . daubechies wavelets can capture events in the observed time series that most other time series prediction models fail to recognize. the necessary steps of a wavelet-based forecasting model, defined by [ ] , are as follows. firstly, the daubechies wavelet transformation and a decomposition level are applied to the nonstationary time series data. secondly, the series is reconstructed by removing the high-frequency component, using the wavelet denoising method. lastly, an appropriate arima model is applied to the reconstructed series to generate out-of-sample forecasts of the given time series data. wavelets were first considered as a family of functions by morlet [ ] , constructed from the translations and dilations of a single function, which is called the "mother wavelet". these wavelets are defined as follows:

\phi_{m,n}(t) = \frac{1}{\sqrt{|m|}} \, \phi\left( \frac{t - n}{m} \right), \quad m \neq 0,

where the parameter m (≠ 0) is denoted the scaling parameter or scale, and it measures the degree of compression. the parameter n is used to determine the time location of the wavelet, and it is called the translation parameter. if |m| < 1, then the wavelet in m is a compressed version (smaller support in the time domain) of the mother wavelet and primarily corresponds to higher frequencies, and when |m| > 1, then φ_{m,n}(t) has a larger time width than φ(t) and corresponds to lower frequencies. hence wavelets have time widths adapted to their frequencies, which is the main reason behind the success of the morlet wavelets in signal processing and time-frequency signal analysis [ ] . an implementation of the warima model is available using the 'waveletfittingarma' function under the "waveletarima" package in r statistical software [ ] . fractionally autoregressive integrated moving average or autoregressive fractionally integrated moving average (arfima) models are a generalized version of the arima model in time series forecasting that allows non-integer values of the differencing parameter [ ] . it may sometimes happen that our time-series data are not stationary, but differencing with the parameter d restricted to integer values may over-difference them. to overcome this problem, it is necessary to difference the time series data using a fractional value. these models are useful in modeling time series whose deviations from the long-run mean decay more slowly than an exponential decay; such models can deal with time-series data having long memory [ ] . arfima models can be mathematically expressed as follows:

\left( 1 - \sum_{i=1}^{p} \phi_i b^i \right) (1 - b)^d y_t = \left( 1 + \sum_{j=1}^{q} \theta_j b^j \right) \varepsilon_t,

where b is the backshift operator, p and q are the arima parameters, and d is the differencing term (allowed to take non-integer values). an r implementation of the arfima model is available with the 'arfima' function under the "forecast" package [ ] . an arfima(p, d, q) model is selected and estimated automatically using the hyndman-khandakar (2008) [ ] algorithm to select p and q and the haslett and raftery (1989) [ ] algorithm to estimate the parameters, including d.
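the arfima and warima fits sketched below use the functions named above; the 'WaveletFittingarma' argument names follow that package's documentation as we recall it and may need checking against the installed version:

library(forecast)
fit.arfima <- arfima(y)                   # fractional d via haslett-raftery estimation
fc.arfima <- forecast(fit.arfima, h = 15)

library(WaveletArima)
fit.warima <- WaveletFittingarma(ts = y, filter = "haar",
                                 Waveletlevels = floor(log(length(y))),
                                 MaxARParam = 5, MaxMAParam = 5,  # search bounds (assumed)
                                 NForecast = 15)
fit.warima$Finalforecast                  # point forecasts for the next 15 days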
exponential smoothing state space methods are very effective in time series forecasting. exponential smoothing was proposed in the late 1950s [ ] and has motivated some of the most successful forecasting methods. forecasts produced using exponential smoothing methods are weighted averages of past observations, with the weights decaying exponentially as the observations get older. the ets models belong to the family of state-space models and consist of three components: an error component (e), a trend component (t), and a seasonal component (s). this method is used to forecast univariate time series data. each model consists of a measurement equation that describes the observed data and some state equations that describe how the unobserved components or states (level, trend, seasonal) change over time [ ] ; hence, these are referred to as state-space models [ ] . an r implementation of the model is available in the 'ets' function under the "forecast" package [ ] . as an extension of the autoregressive model, the self-exciting threshold autoregressive (setar) model is used to model time series data in order to allow for a higher degree of flexibility in the model parameters through a regime-switching behaviour [ ] . given time-series data y_t, the setar model is used to predict future values, assuming that the behavior of the time series changes once the series enters a different regime. the switch from one regime to another depends on the past values of the series. the model consists of k autoregressive (ar) parts, each for a different regime. the model is usually denoted as setar(k, p), where k is the number of thresholds, there are k + 1 regimes in the model, and p is the order of the autoregressive part. for example, suppose an ar(1) model is assumed in both regimes; then a 2-regime setar model is given by [ ] :

y_t = \phi_{0,1} + \phi_{1,1} y_{t-1} + \varepsilon_t  if  y_{t-1} \leq c,
y_t = \phi_{0,2} + \phi_{1,2} y_{t-1} + \varepsilon_t  if  y_{t-1} > c,

where, for the moment, the ε_t are assumed to be an i.i.d. white noise sequence conditional upon the history of the time series, and c is the threshold value. the setar model assumes that the border between the two regimes is given by a specific value of the threshold variable y_{t-1}. the model can be implemented using the 'setar' function under the "tsdyn" package in r [ ] . bayesian statistics has many applications in statistical techniques such as regression, classification, clustering, and time series analysis. scott and varian [ ] used structural time series models to show how google search data can be used to improve short-term forecasts of economic time series. in the structural time series model, the observation at time t, y_t, is defined as follows:

y_t = x_t^{\top} \beta_t + \varepsilon_t,

where β_t is the vector of latent variables, x_t is the vector of model parameters, and the ε_t are assumed to follow normal distributions with zero mean and h_t as the variance. in addition, β_t is represented as follows:

\beta_{t+1} = \beta_t + \delta_t,

where the δ_t are assumed to follow normal distributions with zero mean and q_t as the variance. a gaussian distribution is selected as the prior of the bsts model since the observed frequency values range from 0 to ∞ [ ] . an r implementation is available under the "bsts" package [ ] , where one can add local linear trend and seasonal components as required. the state specification is passed as an argument to the 'bsts' function, along with the data and the desired number of markov chain monte carlo (mcmc) iterations, and the model is fit using an mcmc algorithm [ ] .
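a combined sketch of the ets, setar, and bsts fits described above; the two-regime setar specification and the weekly seasonal component in bsts are illustrative assumptions:

library(forecast)
fit.ets <- ets(y)                           # error-trend-seasonal state space model

library(tsDyn)
fit.setar <- setar(y, m = 2, thDelay = 1)   # 2-regime setar, threshold on a lagged value

library(bsts)
ss <- AddLocalLinearTrend(list(), y)
ss <- AddSeasonal(ss, y, nseasons = 7)      # weekly seasonality (assumed)
fit.bsts <- bsts(y, state.specification = ss, niter = 1000)
pred <- predict(fit.bsts, horizon = 15, burn = 200)  # posterior predictive forecasts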
the 'theta method' or 'theta model' is a univariate time series forecasting technique that performed particularly well in the m3 forecasting competition and is of continuing interest to forecasters [ ] . the method decomposes the original data into two or more lines, called theta lines, and extrapolates them using forecasting models. finally, the predictions are combined to obtain the final forecasts. the theta lines can be estimated by simply modifying the 'curvatures' of the original time series [ ] . this change is obtained from a coefficient, called the θ coefficient, which is directly applied to the second differences of the time series:

y''_{new,t}(\theta) = \theta \, y''_t,

where y''_t = y_t - 2 y_{t-1} + y_{t-2} at time t for t = 3, 4, ..., n, and {y_1, y_2, ..., y_n} denotes the observed univariate time series. in practice, the coefficient θ can be considered a transformation parameter that creates a series with the same mean and slope as the original data but with a different variance. now, the above equation is a second-order difference equation and has a solution of the following form [ ] :

y_{new,t}(\theta) = a_\theta + b_\theta (t - 1) + \theta y_t,

where a_θ and b_θ are constants and t = 1, 2, ..., n. thus, y_new(θ) is equivalent to a linear function of y_t with a linear trend added. the values of a_θ and b_θ are computed by minimizing the sum of squared differences

\sum_{t=1}^{n} \left[ y_t - y_{new,t}(\theta) \right]^2.

forecasts from the theta model are obtained by a weighted average of forecasts of y_new(θ) for different values of θ. also, the prediction intervals and likelihood-based estimation of the parameters can be obtained based on a state-space model, as demonstrated in [ ] . an r implementation of the theta model is possible with the 'thetaf' function in the "forecast" package [ ] . the main objective of the tbats model is to deal with complex seasonal patterns using exponential smoothing [ ] . the name is an acronym for the key features of the model: trigonometric seasonality (t), box-cox transformation (b), arma errors (a), trend (t), and seasonal (s) components. tbats makes it easy for users to handle data with multiple seasonal patterns. this model is preferable when the seasonality changes over time [ ] . a tbats model can be described as follows:

y_t^{(\mu)} = l_{t-1} + \phi b_{t-1} + \sum_{i} s^{(i)}_{t - m_i} + d_t,

where y_t^{(μ)} is the time series at time point t (box-cox transformed), s_t^{(i)} is the i-th seasonal component (with seasonal period m_i), l_t is the local level, b_t is the trend with damping (damping parameter φ), d_t is the arma(p, q) process for the residuals, and e_t is the gaussian white noise driving that arma process. the tbats model can be implemented using the 'tbats' function under the "forecast" package in r statistical software [ ] .
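theta and tbats forecasts can be produced directly from the 'forecast' package, for example:

library(forecast)
fc.theta <- thetaf(y, h = 15)            # theta method with prediction intervals
fit.tbats <- tbats(y)                    # trigonometric seasonality, box-cox, arma errors
fc.tbats <- forecast(fit.tbats, h = 15)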
forecasting with artificial neural networks (ann) has received increasing interest in various research and applied domains since the late 1980s. it has been given special attention in epidemiological forecasting [ ] . multi-layered feed-forward neural networks with back-propagation learning rules are the most widely used models, with applications in classification and prediction problems [ ] . in a simple feed-forward neural net there is a single hidden layer between the input and output layers, and weights connect the layers. denote by ω_ji the weights between the input layer and the hidden layer, and by ν_kj the weights between the hidden and output layers. based on the given inputs x_i, a neuron's net input is calculated as the weighted sum of its inputs. the output of the neuron, y_j, is based on a sigmoidal function indicating the magnitude of this net input [ ] . for the j-th hidden neuron, the net input and output are calculated as

net^h_j = \sum_i \omega_{ji} x_i  and  y_j = f(net^h_j),

and for the k-th output neuron,

net^o_k = \sum_{j=1}^{J} \nu_{kj} y_j  and  o_k = f(net^o_k),  with  f(net) = \frac{1}{1 + e^{-\lambda \, net}},

where λ ∈ (0, 1) is a parameter used to control the gradient of the function and J is the number of neurons in the hidden layer. the back-propagation [ ] learning algorithm is the most commonly used technique in ann. in the error back-propagation step, the weights in the ann are updated by minimizing

e = \frac{1}{2} \sum_{p} \sum_{k} (d_{pk} - o_{pk})^2,

where d_pk is the desired output of neuron k for input pattern p. a common formula for selecting the number of neurons in the hidden layer is h = (i + j)/2 + \sqrt{d}, where i and j are the numbers of input and output neurons, respectively, and d denotes the number of training patterns [ ] . the application of ann to time series data is possible with the 'mlp' function under the "nnfor" package in r [ ] . the autoregressive neural network (arnn) received attention in the time series literature in the late 1990s [ ] . the architecture of a simple feedforward neural network can be described as a network of neurons arranged in an input layer, a hidden layer, and an output layer in a prescribed order. each layer passes information to the next layer using weights that are obtained using a learning algorithm [ ] . the arnn model is a modification of the simple ann model especially designed for prediction problems involving time series datasets [ ] . the arnn model uses a pre-specified number of lagged values of the time series as inputs, and the number of hidden neurons in its architecture is also fixed [ ] . the arnn(p, k) model uses p lagged inputs of the time series data in a one-hidden-layer feedforward neural network with k hidden units. let x denote the vector of p lagged inputs and f be a neural network of the following architecture:

f(x) = c_0 + \sum_{j=1}^{k} a_j \, \phi(w_j + b_j^{\top} x),

where c_0, the a_j, and the w_j are connecting weights, the b_j are p-dimensional weight vectors, and φ is a bounded nonlinear sigmoidal function (e.g., the logistic squasher function or the tangent hyperbolic activation function). these weights are trained using gradient descent backpropagation [ ] . a standard ann faces the dilemma of choosing the number of hidden neurons in the hidden layer, and the optimal choice is unknown. for the arnn model, however, we adopt the formula k = [(p + 1)/2] for non-seasonal time series data, where p is the number of lagged inputs in an autoregressive model [ ] . the arnn model can be applied using the 'nnetar' function available in the r statistical package "forecast" [ ] .
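a short sketch of the two neural network models; the number of training repetitions for 'mlp' is an arbitrary illustrative choice:

library(nnfor)
fit.mlp <- mlp(y, reps = 20)       # feed-forward net, forecasts averaged over 20 trainings
fc.mlp <- forecast(fit.mlp, h = 15)

library(forecast)
fit.arnn <- nnetar(y)              # arnn(p, k); by default k is about (p + 1)/2
fc.arnn <- forecast(fit.arnn, h = 15)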
the idea of ensemble time series forecasting was given by bates and granger (1969) in their seminal work [ ] . forecasts generated from arima, ets, theta, arnn, and warima models can be combined with equal weights, weights based on in-sample errors, or cross-validated weights. in the ensemble framework, cross-validation for time series data with user-supplied models and forecasting functions is also possible, to evaluate model accuracy [ ] . combining several candidate models can hedge against an incorrect model specification. bates and granger (1969) [ ] suggested such an approach and observed, somewhat surprisingly, that the combined forecast can even outperform the single best component forecast. while combination weights selected equally or proportionally to past model errors are possible approaches, many more sophisticated combination schemes have been suggested. for example, rather than normalizing weights to sum to unity, unconstrained and even negative weights could be possible [ ] . the simple equal-weights combination might appear woefully obsolete and probably non-competitive compared to the multitude of sophisticated combination approaches or advanced machine learning and neural network forecasting models, especially in the age of big data. however, such simple combinations can still be competitive, particularly for pandemic time series [ ] . a flow diagram of the ensemble method is presented in figure . the ensemble method of [ ] produces forecasts out to a horizon h by applying a weight w_m to each of the n model forecasts in the ensemble. the ensemble forecast f(i) for time horizon 1 ≤ i ≤ h, with individual component model forecasts f_m(i), is then

f(i) = \sum_{m=1}^{n} w_m f_m(i).

the weights can be determined in several ways (for example, supplied by the user, set equally, determined by in-sample errors, or determined by cross-validation). the "forecasthybrid" package in r includes these component models in order to enhance the "forecast" package base models with easy ensembling (e.g., the 'hybridmodel' function in r statistical software) [ ] . the idea of hybridizing time series models and combining different forecasts was first introduced by zhang [ ] and further extended by [ , , , ] . hybrid forecasting models are based on an error re-modeling approach, and there are broadly two types of error calculations popular in the literature, given below [ , ] . in the additive error model, the forecaster treats the expert's estimate as a variable, ŷ_t, and thinks of it as the sum of two terms:

ŷ_t = y_t + e_t,

where y_t is the true value and e_t is the additive error term. in the multiplicative error model, the forecaster treats the expert's estimate ŷ_t as the product of two terms:

ŷ_t = y_t \cdot e_t,

where y_t is the true value and e_t is the multiplicative error term. now, even if the relationship is of product type, on the log-log scale it becomes additive. hence, without loss of generality, we may assume the relationship to be additive and expect the (additive) errors of a forecasting model to be random shocks [ ] . these hybrid models are useful for complex correlation structures where little knowledge is available about the data generating process. a simple example is the daily confirmed covid- cases for various countries, where very little is known about the structural properties of the current pandemic. the mathematical formulation of the proposed hybrid model (z_t) is as follows:

z_t = l_t + n_t,

where l_t is the linear part and n_t is the nonlinear part of the hybrid model. we can estimate both l_t and n_t from the available time series data. let l̂_t be the forecast value of the linear model (e.g., arima) at time t and e_t represent the error residuals at time t, obtained from the linear model. then we write

e_t = y_t - l̂_t.

these left-out residuals are further modeled by a nonlinear model (e.g., ann or arnn) and can be represented as follows:

e_t = f(e_{t-1}, e_{t-2}, ..., e_{t-n}) + \varepsilon_t,

where f is a nonlinear function, the modeling is done by the nonlinear ann or arnn model as defined above, and ε_t is supposed to be the random shock. therefore, the combined forecast can be obtained as follows:

ẑ_t = l̂_t + n̂_t,

where n̂_t is the forecast value of the nonlinear time series model. an overall flow diagram of the proposed hybrid model is given in figure . in the hybrid model, a nonlinear model is applied in the second stage to re-model the left-over autocorrelations in the residuals, which the linear model could not capture. thus, this can be considered an error re-modeling approach. this is important because, due to model misspecification and disturbances in the pandemic rate time series, the linear models may fail to generate white noise behavior in the forecast residuals. thus, hybrid approaches can eventually improve the predictions for epidemiological forecasting problems, as shown in [ , , ] . these hybrid models only assume that the linear and nonlinear components of the epidemic time series can be separated individually.
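the following sketch shows both an equal-weight ensemble via 'hybridModel' and the two-stage hybrid scheme formulated above; the component-model letters and the use of 'nnetar' on the residuals are assumptions consistent with the chapter's description, not a verbatim reproduction of its code:

library(forecast)
library(forecastHybrid)

# equal-weight ensemble; letters select components (a = auto.arima, e = ets,
# f = thetam, n = nnetar)
fit.ens <- hybridModel(y, models = "aefn", weights = "equal")
fc.ens <- forecast(fit.ens, h = 15)

# hybrid arima-arnn: re-model the arima residuals with an autoregressive neural net
fit.lin <- auto.arima(y)
res <- residuals(fit.lin)                     # left-out residuals e_t
fit.nl <- nnetar(res)                         # nonlinear model on the residual series
fc.hybrid <- forecast(fit.lin, h = 15)$mean + # combined forecast: z_t = l_t + n_t
             forecast(fit.nl, h = 15)$mean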
the implementation of the hybrid models used in this study is available in [ ] . five time series covid- datasets, for the usa, india, russia, brazil, and peru, are considered for assessing twenty forecasting models (individual, ensemble, and hybrid). the datasets are mostly nonlinear, nonstationary, and non-gaussian in nature. we have used the root mean square error (rmse), mean absolute error (mae), mean absolute percentage error (mape), and symmetric mape (smape) to evaluate the predictive performance of the models used in this study. since the number of data points in these datasets is limited, advanced deep learning techniques would over-fit the datasets [ ] . we use publicly available datasets to compare the various forecasting frameworks. covid- cases of the five countries with the highest number of cases were collected [ , ] . the datasets and their descriptions are presented in table . characteristics of these five time series were examined using the hurst exponent, the kpss test, the terasvirta test, and other measures as described in section . the hurst exponent (denoted by h), which ranges between zero and one, is calculated to measure the long-range dependency in a time series and provides a measure of long-term nonlinearity. for values of h near zero, the time series under consideration is mean-reverting: an increase in the value will be followed by a decrease in the series and vice versa. when h is close to 0.5, the series has no autocorrelation with past values; these types of series are often called brownian motion. when h is near one, an increase or decrease in the value is most likely to be followed by a similar movement in the future. all five covid- datasets in this study possess hurst exponent values near one, which indicates that these time series have a strong trend, with an increase followed by an increase or a decrease followed by another decline. kpss tests are performed to examine the stationarity of a given time series. the null hypothesis for the kpss test is that the time series is stationary; thus, the series is nonstationary when the p-value is less than a threshold. from table , all five datasets can be characterized as nonstationary, as the p-value < . in each instance. the terasvirta test examines the linearity of a time series against the alternative that a nonlinear process has generated the series. it is observed that the usa, russia, and peru covid- datasets are likely to follow a nonlinear trend. on the other hand, the india and brazil datasets have some linear trends. further, we examine the serial correlation, skewness, kurtosis, and maximum lyapunov exponent for the five covid- datasets. the results are reported in table . the serial correlation of the datasets is computed using the box-pierce test statistic for the null hypothesis of independence in a given time series. the p-values related to each of the datasets were found to be below the significance level (see table ). this indicates that these covid- datasets have no serial correlation when the lag equals one. skewness for the russia covid- dataset is found to be negative, whereas the other four datasets are positively skewed. this means that for the russia dataset, the left tail is heavier than the right tail; for the other four datasets, the right tail is heavier than the left tail. the kurtosis values for the india dataset are found to be positive, while the other four datasets have negative kurtosis values.
therefore, the covid- dataset of india tends to have a peaked distribution, and the other four datasets may have a flat distribution. we observe that each of the five datasets is non-chaotic in nature, i.e., the maximum lyapunov exponents are less than unity. a summary of the implementation tools is presented in table . we used four popular accuracy metrics to evaluate the performance of the different time series forecasting models. the expressions of these metrics are given below:

rmse = \sqrt{ \frac{1}{n} \sum_{i=1}^{n} (y_i - ŷ_i)^2 },  mae = \frac{1}{n} \sum_{i=1}^{n} |y_i - ŷ_i|,
mape = \frac{1}{n} \sum_{i=1}^{n} \left| \frac{y_i - ŷ_i}{y_i} \right| \times 100,  smape = \frac{1}{n} \sum_{i=1}^{n} \frac{2 |y_i - ŷ_i|}{|y_i| + |ŷ_i|} \times 100,

where the y_i are the actual series values, the ŷ_i are the predictions by the different models, and n represents the number of data points in the time series. the model with the lowest accuracy metric values is the best forecasting model. this subsection is devoted to the experimental analysis of confirmed covid- cases using the different time series forecasting models. the test period is chosen to be days and days, whereas the rest of the data is used as training data (see table ). in the first columns of tables and , we present the training data and test data for the usa, india, brazil, russia, and peru. the autocorrelation function (acf) and partial autocorrelation function (pacf) plots are also depicted for the training period of each of the five countries in tables and . the acf and pacf plots are generated after applying the required number of differences to each training dataset using the r function 'diff'. the required order of differencing is obtained using the r function 'ndiffs', which estimates the number of differences required to make a given time series stationary. the integer-valued order of differencing is then used as the value of 'd' in the arima(p, d, q) model. the other two parameters, 'p' and 'q', of the model are obtained from the acf and pacf plots, respectively (see tables and ). however, we choose the 'best' fitted arima model using the aic value for each training dataset. table presents the training data (black colored) and test data (red colored) and the corresponding acf and pacf plots for the five time-series datasets. further, we checked twenty different forecasting models as competitors for the short-term forecasting of covid- confirmed cases in the five countries. -days and -days ahead forecasts were generated for each model, and accuracy metrics were computed to determine the best predictive models. from the ten popular single models, we choose the best one based on the accuracy metrics. on the other hand, one hybrid/ensemble model is selected from the rest of the ten models. the best-fitted arima parameters, ets, arnn, and arfima models for each country are reported in the respective tables. table presents the training data (black colored) and test data (red colored) and the corresponding plots for the five datasets. twenty forecasting models are implemented on these pandemic time-series datasets. table gives the essential details about the functions and packages required for implementation. results for usa covid- data: among the single models, arima ( , , ) performs best in terms of accuracy metrics for -days ahead forecasts. tbats and arnn ( , ) also have competitive accuracy metrics. the hybrid arima-arnn model improves the earlier arima forecasts and has the best accuracy among all hybrid/ensemble models (see table ). hybrid arima-warima also does a good job and improves the arima model forecasts. in-sample and out-of-sample forecasts obtained from the arima and hybrid arima-arnn models are depicted in fig. (a) . out-of-sample forecasts are generated using the whole dataset as training data.
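as a minimal sketch of how each model's test-period accuracy is computed throughout these comparisons (the 15-day split, the model choice, and the vectors 'actual' and 'pred' are illustrative assumptions):

n <- length(y)
train <- window(y, end = time(y)[n - 15])                   # hold out the last 15 days
actual <- as.numeric(window(y, start = time(y)[n - 14]))
pred <- as.numeric(forecast(auto.arima(train), h = 15)$mean)

rmse  <- sqrt(mean((actual - pred)^2))
mae   <- mean(abs(actual - pred))
mape  <- 100 * mean(abs((actual - pred) / actual))
smape <- 100 * mean(2 * abs(actual - pred) / (abs(actual) + abs(pred)))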
arfima( , , ) is found to have the best accuracy metrics for -days ahead forecasts among single forecasting models. bsts and setar also have good agreement with the test data in terms of accuracy metrics. the hybrid arima-warima model has the best accuracy among all hybrid/ensemble models (see table ). in-sample and out-of-sample forecasts obtained from the arfima and hybrid arima-warima models are depicted in fig. (b) . results for india covid- data: among the single models, ann performs best in terms of accuracy metrics for -days ahead forecasts. arima( , , ) also has competitive accuracy metrics in the test period. the hybrid arima-arnn model improves the arima( , , ) forecasts and has the best accuracy among all hybrid/ensemble models (see table ). hybrid arima-ann and hybrid arima-warima also do a good job and improve the arima model forecasts. in-sample and out-of-sample forecasts obtained from the ann and hybrid arima-arnn models are depicted in fig. (a) . out-of-sample forecasts are generated using the whole dataset as training data (see fig. ). ann is found to have the best accuracy metrics for -days ahead forecasts among single forecasting models for the india covid- data. the ensemble ann-arnn-warima model has the best accuracy among all hybrid/ensemble models (see table ). in-sample and out-of-sample forecasts obtained from the ann and ensemble ann-arnn-warima models are depicted in fig. (b) . results for brazil covid- data: among the single models, setar performs best in terms of accuracy metrics for -days ahead forecasts. the ensemble ets-theta-arnn (efn) model has the best accuracy among all hybrid/ensemble models (see table ). in-sample and out-of-sample forecasts obtained from the setar and ensemble efn models are depicted in fig. (a). warima is found to have the best accuracy metrics for -days ahead forecasts among single forecasting models for the brazil covid- data. the hybrid warima-ann model has the best accuracy among all hybrid/ensemble models (see table ). in-sample and out-of-sample forecasts obtained from the warima and hybrid warima-ann models are depicted in fig. (b) . results for russia covid- data: bsts performs best in terms of accuracy metrics for -days ahead forecasts in the case of the russia covid- data among single models. theta and arnn( , ) also show competitive accuracy measures. the ensemble ets-theta-arnn (efn) model has the best accuracy among all hybrid/ensemble models (see table ). ensemble arima-ets-arnn and ensemble arima-theta-arnn also perform well in the test period. in-sample and out-of-sample forecasts obtained from the bsts and ensemble efn models are depicted in fig. (a) . setar is found to have the best accuracy metrics for -days ahead forecasts among single forecasting models for the russia covid- data. the ensemble arima-theta-arnn (afn) model has the best accuracy among all hybrid/ensemble models (see table ). all five ensemble models show promising results for this dataset. in-sample and out-of-sample forecasts obtained from the setar and ensemble afn models are depicted in fig. (b) . results for peru covid- data: warima and arfima( , . , ) perform better than the other single models for -days ahead forecasts in peru. the hybrid warima-arnn model improves the warima forecasts and has the best accuracy among all hybrid/ensemble models (see table ). in-sample and out-of-sample forecasts obtained from the warima and hybrid warima-arnn models are depicted in fig. (a) . arfima( , , ) and ann show competitive accuracy metrics for -days ahead forecasts among single forecasting models for the peru covid- data.
the ensemble ann-arnn-warima (aaw) model has the best accuracy among all hybrid/ensemble models (see table ). in-sample and out-of-sample forecasts obtained from the arfima( , , ) and ensemble aaw models are depicted in fig. (b) . results from all five datasets reveal that none of the forecasting models performs uniformly well, and therefore, one should carefully select the appropriate forecasting model when dealing with covid- datasets. in this study, we assessed several statistical, machine learning, and composite models on the confirmed covid- case datasets for the five countries with the highest number of cases. thus, covid- cases in the usa, followed by india, brazil, russia, and peru, are considered. the datasets mostly exhibit nonlinear and nonstationary behavior. twenty forecasting models were applied to the five datasets, and an empirical study is presented here. the empirical findings suggest that no universal method exists that can outperform every other model for all the datasets in covid- nowcasting. still, the future forecasts obtained from the models with the best accuracy will be useful in decision- and policy-making for government officials and policymakers to allocate adequate health care resources for the coming days in responding to the crisis. however, we recommend updating the datasets regularly and comparing the accuracy metrics to obtain the best model. as is evident from this empirical study, no model can perform consistently as the best forecasting model, so one must update the datasets regularly to generate useful forecasts. time series of epidemics can oscillate heavily due to various epidemiological factors, and these fluctuations are challenging to capture adequately for precise forecasting. all five countries, except brazil and peru, will face a diminishing trend in the number of new confirmed cases of the covid- pandemic. based on both the short-term out-of-sample forecasts reported in this study, the lockdown and shutdown periods can be adjusted accordingly to handle the uncertain and vulnerable situations of the covid- pandemic. authorities and health-care providers can modify their planning of stockpiles and hospital beds, depending on these covid- pandemic forecasts. models are constrained by what we know and what we assume, but used appropriately, and with an understanding of these limitations, they can and should help guide us through this pandemic. since purely statistical approaches do not account for how transmission occurs, they are generally not well suited for long-term predictions about epidemiological dynamics (such as when the peak will occur and whether resurgence will happen) or for inference about intervention efficacy. several forecasting models, therefore, limit their projections to two weeks or a month ahead. in this research, we have focused on analyzing the nature of the covid- time series data and understanding the data characteristics of the time series. this empirical work studied a wide range of statistical forecasting methods and machine learning algorithms. we have also presented more systematic representations of the single, ensemble, and hybrid approaches available for epidemic forecasting. this quantitative study could be used to assess and forecast covid- confirmed cases, which will benefit epidemiologists and modelers in their real-world applications.
considering the scope of this study, we can present a list of challenges of pandemic forecasting (short-term) with the forecasting tools presented in this chapter:
-collect more data on the factors that contribute to daily confirmed cases of covid- ;
-model the entire predictive distribution, with particular focus on accurately quantifying uncertainty [ ] ;
-there is no universal model that can generate 'best' short-term forecasts of covid- confirmed cases;
-continuously monitor the performance of any model against real data and either re-adjust or discard models based on accruing evidence;
-developing models in real time for a novel virus, with poor-quality data, is a formidable task and a real challenge for epidemic forecasters;
-epidemiological estimates and compartmental models can be useful for long-term pandemic trajectory prediction, but they often make some unrealistic assumptions [ ] ;
-future research is needed to collect, clean, and curate data and to develop a coherent approach to evaluate the suitability of models with regard to covid- predictions and forecast uncertainties.
for the sake of repeatability and reproducibility of this study, all codes and data sets are made available at https://github.com/indrajitg-r/forecasting-covid- -cases.
references:
-github repository
-our world in data
-worldometers data repository
-modeling the impact of social distancing, testing, contact tracing and household quarantine on second-wave scenarios of the covid- epidemic. medrxiv
-forecasting time series using wavelets
-athanasios tsakris, and constantinos siettos. data-based analysis, modelling and forecasting of the covid- outbreak
-infectious diseases of humans: dynamics and control
-stability analysis and numerical simulation of seir model for pandemic covid- spread in indonesia
-principles of forecasting: a handbook for researchers and practitioners
-the theta model: a decomposition approach to forecasting
-the combination of forecasts
-real estimates of mortality following covid- infection. the lancet infectious diseases
-lyapunov characteristic exponents for smooth dynamical systems and for hamiltonian systems; a method for computing all of them
-long-term storage: an experimental study
-package 'pracma'
-the combination of forecasts: a bayesian approach
-time series analysis: forecasting and control
-distribution of residual autocorrelations in autoregressive-integrated moving average time series models
-refining the global spatial limits of dengue virus transmission by evidence-based consensus
-time series: theory and methods
-ensemble method for dengue prediction
-theta autoregressive neural network model for covid- outbreak predictions. medrxiv
-forecasting dengue epidemics using a hybrid methodology
-an integrated deterministic-stochastic approach for predicting the long-term trajectories of covid- . medrxiv
-real-time forecasts and risk assessment of novel coronavirus (covid- ) cases: a data-driven analysis
-time-series forecasting
-the analysis of time series: an introduction
-a time-dependent sir model for covid- with undetectable infected persons
-multiplicative error modeling approach for time series forecasting
-combining forecasts: a review and annotated bibliography
-years of time series forecasting
-forecasting time series with complex seasonal patterns using exponential smoothing
-fair allocation of scarce medical resources in the time of covid-
-analysis and forecast of covid- spreading in china, italy and france
-time series forecasting with neural networks: a comparative study using the air line data
-chaotic attractors of an infinite-dimensional dynamical system
-predicting chaotic time series. physical review letters
-impact of nonpharmaceutical interventions (npis) to reduce covid mortality and healthcare demand
-non-linear time series models in empirical finance
-nonlineartseries: nonlinear time series analysis
-deep learning
-an introduction to long-memory time series models and fractional differencing
-improved methods of combining forecasts
-critical care utilization for the covid- outbreak in lombardy, italy: early experience and forecast during an emergency response
-measuring skewness and kurtosis
-clinical characteristics of coronavirus disease in china
-business forecasting
-space-time modelling with long-memory dependence: assessing ireland's wind power resource
-the elements of statistical learning: data mining, inference, and prediction
-seir modeling of the covid- and its dynamics
-practical implementation of nonlinear time series methods: the tisean package. chaos: an interdisciplinary
-feasibility of controlling covid- outbreaks by isolation of cases and contacts. the lancet global health
-wrong but useful-what covid- epidemiologic models can and cannot tell us
-the effectiveness of quarantine of wuhan city against the corona virus disease (covid- ): a well-mixed seir model analysis
-artificial intelligence forecasting of covid- in china
-clinical features of patients infected with novel coronavirus in wuhan, china. the lancet
-forecasting with exponential smoothing: the state space approach
-forecasting: principles and practice
-unmasking the theta method
-automatic time series for forecasting: the forecast package for r. number / . monash university, department of econometrics and business statistics
-forecasting for covid- has failed
-an introduction to statistical learning
-multivariate bayesian structural time series model
-nonlinear time series analysis
-an artificial neural network (p, d, q) model for timeseries forecasting. expert systems with applications
-statistical notes for clinical researchers: assessing normal distribution ( ) using skewness and kurtosis
-projecting the transmission dynamics of sars-cov- through the postpandemic period
-nnfor: time series forecasting with neural networks
-early dynamics of transmission and control of covid- : a mathematical modelling study. the lancet infectious diseases
metalearning: a survey of trends and technologies
meta-learning for time series forecasting and forecast combination
trend and forecasting of the covid- outbreak in china
the end of social confinement and covid- re-emergence risk
arma models and the box-jenkins methodology
time series modelling to forecast the confirmed and recovered cases of covid-
global spread of dengue virus types: mapping the year history
fforma: feature-based forecast model averaging
introduction to the theory of statistics
the assessment of probability distributions from expert opinions with an application to seismic fragility curves
social contacts and mixing patterns relevant to the spread of infectious diseases
comparative study of wavelet-arima and wavelet-ann models for temperature time series data in northeastern bangladesh
wavelet methods for time series analysis
comparing sars-cov- with sars-cov and influenza pandemics. the lancet infectious diseases
forecasting the novel coronavirus covid-
a review of epidemic forecasting using artificial neural networks
testing for a unit root in time series regression
beta autoregressive fractionally integrated moving average models
the many estimates of the covid- case fatality rate. the lancet infectious diseases
predictions, role of interventions and effects of a historic national lockdown in india's response to the covid- pandemic: data science call to arms. harvard data science review
short-term forecasting covid- cumulative confirmed cases: perspectives for brazil
log-periodogram regression of time series with long range dependence. the annals of statistics
real-time forecasts of the covid- epidemic in china from february th to february th
facing covid- in italy - ethics, logistics, and therapeutics on the epidemic's front line
a practical method for calculating largest lyapunov exponents from small data sets. physica d: nonlinear phenomena
learning internal representations by error propagation
package 'bsts' (depends: boom, boom-spikeslab; linkingto: boom)
bayesian variable selection for nowcasting economic time series
predicting the present with bayesian structural time series
fast and accurate yearly time series forecasting with forecast combinations
forecasthybrid: convenient functions for ensemble time series forecasts
the kpss stationarity test as a unit root test
a simple explanation of the forecast combination puzzle
generalizing the theta method for automatic forecasting
a machine learning forecasting model for covid- pandemic in india
power of the neural network linearity test
linear models, smooth transition autoregressions, and neural networks for forecasting macroeconomic time series: a re-examination
forecast combinations. handbook of economic forecasting
nonlinear time series analysis since : some personal reflections
non-linear time series: a dynamical system approach
tseries: time series analysis and computational finance. r package version
the "spanish flu" in spain
nonlinearity tests for time series
time series and forecasting: brief history and future research
multiple time scales analysis of hydrological time series with wavelet transform
rule induction for forecasting method selection: meta-learning the characteristics of univariate time series
forecasting sales by exponentially weighted moving averages
no free lunch theorems for optimization
nowcasting and forecasting the potential domestic and international spread of the -ncov outbreak originating in wuhan, china: a modelling study
time series forecasting using a hybrid arima and neural network model
neural network forecasting for seasonal and trend time series
forecasting with artificial neural networks: the state of the art
estimation of local novel coronavirus (covid- ) cases in wuhan, china from off-site reported cases and population flow data from different sources. medrxiv

key: cord- -hnu gw w authors: buising, kirsty l; thursky, karin a; black, james f; macgregor, lachlan; street, alan c; kennedy, marcus p; brown, graham v title: improving antibiotic prescribing for adults with community acquired pneumonia: does a computerised decision support system achieve more than academic detailing alone? – a time series analysis date: - - journal: bmc med inform decis mak doi: . / - - - sha: doc_id: cord_uid: hnu gw w

background: the ideal method to encourage uptake of clinical guidelines in hospitals is not known. several strategies have been suggested. this study evaluates the impact of academic detailing and a computerised decision support system (cdss) on clinicians' prescribing behaviour for patients with community acquired pneumonia (cap). methods: the management of all patients presenting to the emergency department over three successive time periods was evaluated; the baseline, academic detailing and cdss periods. the rate of empiric antibiotic prescribing that was concordant with recommendations was studied over time, comparing pre and post periods and using an interrupted time series analysis. results: the odds ratio for concordant therapy in the academic detailing period, after adjustment for age, illness severity and suspicion of aspiration, compared with the baseline period was or = . [ . , . ], p < . , and for the computerised decision support period compared to the academic detailing period was or = . [ . , . ], p = . . during the first months of the computerised decision support period an improvement in the appropriateness of antibiotic prescribing was demonstrated, which was greater than that expected to have occurred with time and academic detailing alone, based on predictions from a binary logistic model. conclusion: deployment of a computerised decision support system was associated with an early improvement in antibiotic prescribing practices which was greater than the changes seen with academic detailing. the sustainability of this intervention requires further evaluation.

with the rapidly expanding body of medical knowledge, clinicians need access to appropriate, relevant information to guide their clinical decision making. for many conditions, clinical experts have used available evidence and experience to generate guidelines that endeavour to assist clinicians and improve patient outcomes. a major problem, however, has been finding the best strategies to implement these guidelines in a busy hospital environment. [ ] [ ] [ ]
group lectures, one to one academic detailing, laminated cards and advertising material such as posters have all been tried with variable success. [ ] [ ] [ ] [ ] with the increasing role played by computers as a source of information in the hospital setting, computerised decision support may provide a useful alternative strategy. [ ] [ ] [ ] [ ] at the royal melbourne hospital, a transferable web-based computerised decision support system was developed, with the capacity to present any guideline or algorithm. [ ] we chose in the first instance to deploy a guideline for the management of patients with community acquired pneumonia (cap), as this is one of the most common conditions presenting to hospital emergency departments. international and national guidelines have been produced to guide the management of cap [ ] [ ] [ ] , but uptake has been poor. [ ]

the general aim of this study was to describe the impact of different methods of guideline promotion on clinician prescribing behaviour. more specifically, a comparison of the impact of both academic detailing (ad) and a computerised decision support system (cdss) on the management of patients with cap in an emergency department (ed) was examined. the outcomes of interest included the prescription of antibiotics that were concordant with guideline recommendations, the early identification of severely ill patients and adjustment of antibiotics to meet recommendations for prescribing in the severely ill group, and adjustment of antibiotics to accommodate known patient allergies.

the design was a two-stage pre and post intervention cohort study, and a time series analysis. the study was performed at the royal melbourne hospital, an urban adult tertiary teaching hospital with beds including intensive care unit (icu) beds. the emergency department assesses , patients per year, leading to , admissions to hospital. this hospital did not have an electronic medical record or a computerised order entry system. over different doctors were working in the ed at any point in time over the study periods, and the allocation of doctors to patients was not structured. a computerised antibiotic approval system restricting access to ceftriaxone was also in operation over all three time periods of this study; its implementation predated the commencement of this study. it approved ceftriaxone use for all patients with severe pneumonia, and its content agreed with the cap guideline content. this study described the prescribing behaviour of doctors (both senior and junior medical staff) managing patients in the ed. specifically, the study focused on antibiotic prescribing for all patients who were initially diagnosed with cap by the treating clinician in the ed.

the study extended over three distinct time periods; the first was the 'baseline' period. during the baseline period, electronic and paper copies of national antibiotic prescribing guidelines were available to staff in the ed [ ] but no particular additional efforts were made to encourage uptake of the guideline. at the start of the second ('academic detailing') time period, a program of academic detailing was initiated at the hospital. this involved training two senior ed clinicians, a pharmacist and a nurse to provide academic detailing to their colleagues. they spent one on one time educating colleagues (doctors and pharmacists) about antibiotic prescribing recommendations. these activities were opportunistic and occurred during the usual rostered hours.
interactions were not scheduled, and no formal documentation of ad encounters was made. posters and laminated cards with information about severity assessments and appropriate antibiotic choices for patients with cap were distributed and actively promoted throughout the ed during the academic detailing period. these personnel and advertising material remained available throughout the following ('computerised decision support') time period, but were not specifically promoted.

at the commencement of the computerised decision support period, the guideline for the management of patients with cap was deployed on an existing decision support tool. this tool is a web-based transferable system that was designed at the hospital using a .net framework and implemented in january . the cap algorithm used the pneumonia severity index (psi) to guide site of management decisions (inpatient vs. outpatient care) and the modified british thoracic society severity score (curb) to highlight patients with severe pneumonia who were likely to need review by the intensive care unit (icu) staff. [ , ] the program was integrated with hospital databases containing patient demographics and pathology results to facilitate rapid calculation of the scores required for these prediction rules. use of these scores was not, however, mandated; users could choose to skip the score to obtain antibiotic advice alone. antibiotic allergy reminders were included: if a user had previously registered an allergy for a patient this was presented, otherwise a reminder was given to check with the patient. detailed information was included about unusual pathogens to consider, the most appropriate choice of empiric antibiotics, the duration of therapy, and the timing of change from intravenous to oral antibiotic therapy. users had access to medical literature via the internet, along with local interpretation of this literature within the cdss. users could browse the cdss content without logging a patient in, so it could be used as an educational tool as well as providing patient-specific advice. there was general agreement between the empiric antibiotic recommendations made in the national guideline, the ad directives and the content of the cdss. the cdss was available hospital-wide and its use was entirely voluntary. all hospital clinicians could access it via a shortcut on the desktop of any hospital computer. no specific incentives were provided to encourage its use, it was not triggered by any other computer systems, and it resided alongside other electronic hospital guidelines. an introductory demonstration was provided to the ed staff and to all staff at a hospital grand round. thereafter, infectious diseases registrars or pharmacists provided demonstrations informally.
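to make the severity scoring concrete, the kind of curb-style calculation the tool automated from linked demographic and pathology data can be sketched as follows. this is a schematic under assumed field names and the published curb-65 thresholds; the hospital system's exact rule set (the modified bts score) may have differed.

```python
# Schematic CURB-65-style severity scoring, of the kind the decision support
# tool computed automatically from linked pathology and demographic data.
# Field names are illustrative; thresholds follow the published CURB-65
# criteria, which may differ from the modified BTS "CURB" score used here.
def curb65(confusion: bool, urea_mmol_l: float, resp_rate: int,
           sbp: int, dbp: int, age: int) -> int:
    score = 0
    score += int(confusion)                 # new-onset confusion
    score += int(urea_mmol_l > 7.0)         # urea > 7 mmol/L
    score += int(resp_rate >= 30)           # respiratory rate >= 30/min
    score += int(sbp < 90 or dbp <= 60)     # low systolic or diastolic BP
    score += int(age >= 65)                 # age >= 65 years
    return score

patient = dict(confusion=False, urea_mmol_l=9.2, resp_rate=32,
               sbp=100, dbp=70, age=71)
score = curb65(**patient)
# A score of 3 or more is conventionally treated as severe pneumonia,
# prompting consideration of ICU review.
print(score, "-> severe" if score >= 3 else "-> not severe")
```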
all patient presentations to the ed were available for inclusion in the study. patients were prospectively identified from a database in the ed where the treating doctor already routinely recorded the patient's diagnosis. all patients with a diagnosis of pneumonia, chest infection, lower respiratory tract infection, pleuritic chest pain, cough, shortness of breath, and/or aspiration were identified. patients were included in the study if they had a new respiratory symptom, a new chest x-ray infiltrate consistent with pneumonia, and if the initial assessment made by the treating doctor was that the patient had pneumonia. exclusion criteria included: age < years; immunocompromise (corticosteroids ≥ mg prednisolone/day for ≥ weeks, hiv positive with cd < umol/l, transplant recipients on immunosuppressive therapy); suspected or known severe acute respiratory syndrome (sars); nosocomial pneumonia (discharged from hospital in the previous weeks, after an admission longer than hours); and/or known suppurative lung diseases such as cystic fibrosis or bronchiectasis.

data were prospectively collected from the medical history by a single trained research nurse, according to a set of specified rules. a single clinician was assigned to make judgements about any difficult issues, and a random sample of these cases was cross-checked with a second infectious diseases physician; this group comprised % of the total patient cohort ( patients). specific clinical, pathological, and radiological data available within the first hours were sought to allow calculation of severity scores. [ , ] clinicians' comments about suspicion of aspiration, and documentation of known antibiotic allergies, were recorded. the time to antibiotic therapy was calculated using the time of presentation, documented electronically by the ed triage nurse, and the time of antibiotic administration as documented on the medication chart by the nurse in the ed or on the ward. information regarding ongoing antibiotic use was collected; any antibiotics that were clearly being used to treat a separate infection (as described in the patient's medical record) were not included. where the duration of treatment after discharge was not recorded, it was assumed to be days. antibiotic costs were calculated using pharmacy purchasing data; no actual changes in the cost of drugs commonly prescribed for pneumonia occurred over the study period. the admission criteria for icu were based entirely upon the treating clinician's assessment in all time periods, and no protocols or guidelines were enforced. clinicians were not aware that the study was being conducted, and the researchers had no clinical role in the ed over the study period. there were no major changes in the number or composition of staff in the ed, or their responsibilities, over the study period. this study was approved by the ethics committee of melbourne health; individual consent from the clinicians or the patients involved was not required.

the primary outcome assessed was the prescription of empiric antibiotic therapy that adequately covered the likely pathogens (both typical and atypical) and was concordant with recommendations. this included the combination of a recommended beta lactam (amoxicillin, ampicillin, benzylpenicillin, ceftriaxone, cefotaxime or cefuroxime) plus either a macrolide (erythromycin, roxithromycin, clarithromycin or azithromycin) or doxycycline. the use of moxifloxacin alone was also classed as appropriate. patients who received additional antibiotics were still classed as appropriate, so long as their antibiotic regimen included the recommended drugs (reflecting that the patients at least received appropriate cover). the possibility that antibiotics were required for other concomitant problems was appreciated, and without detailed clinical information it was not possible to determine if this additional antibiotic use was unnecessary.
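the primary outcome just defined is essentially a classification rule, and can be encoded directly. the drug lists below follow the definition in the text; the function name and input format are illustrative, and whether moxifloxacin plus extra agents also counts is an assumption.

```python
# Direct encoding of the primary-outcome rule: empiric therapy is concordant
# if it includes a recommended beta-lactam plus either a macrolide or
# doxycycline, or moxifloxacin (extra antibiotics do not break concordance,
# per the text; moxifloxacin-with-extras is assumed to count as well).
BETA_LACTAMS = {"amoxicillin", "ampicillin", "benzylpenicillin",
                "ceftriaxone", "cefotaxime", "cefuroxime"}
MACROLIDES = {"erythromycin", "roxithromycin", "clarithromycin", "azithromycin"}

def concordant(prescribed):
    prescribed = set(prescribed)
    has_beta_lactam = bool(prescribed & BETA_LACTAMS)
    has_atypical = bool(prescribed & MACROLIDES) or "doxycycline" in prescribed
    return (has_beta_lactam and has_atypical) or "moxifloxacin" in prescribed

print(concordant({"ceftriaxone", "azithromycin"}))                   # True
print(concordant({"ceftriaxone", "azithromycin", "metronidazole"}))  # True (extras allowed)
print(concordant({"ceftriaxone"}))                                   # False (no atypical cover)
```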
a number of secondary outcomes were also examined. for patients who required icu intervention at any time during their admission, the proportion that were admitted directly from the ed to the icu was evaluated as a marker of early recognition of severe disease. similarly, the proportion of patients requiring icu management at any time during their admission who were initially prescribed the recommended empiric broad-spectrum antibiotics for severe pneumonia in the ed was compared. appropriate therapy for this group was defined as ceftriaxone (or benzylpenicillin plus gentamicin), in combination with either intravenous azithromycin or erythromycin; the use of moxifloxacin alone was also deemed appropriate. the number of patients prescribed an antibiotic to which they had a documented allergy was examined. the overall pattern of antibiotics prescribed, and the average cost of antibiotics per patient, were assessed in each time period. finally, the time between presentation to the ed and the administration of antibiotics was recorded.

baseline characteristics of subjects were compared between the three periods using a chi-squared test of homogeneity for categorical variables and analysis of variance for continuous variables. an a priori level of statistical significance of . was assumed. the baseline period extended over one year to give an indication of the baseline pattern of change in the rate of concordant prescribing over time, in the absence of any intervention. the academic detailing period included enough patients to detect an improvement in mean concordance from % to % ( patients, power = . and p = . ). the computerised decision support period included enough patients to detect an expected further improvement in concordance from % to % ( patients, power = . , p = . ). multivariable logistic models were used to compare the mean proportions of concordance across the three periods, while adjusting for disease severity, age, and suspected aspiration. secondary outcome measures were assessed in the same way. specifically, among the patients who required icu admission, the proportion directly admitted from ed to the icu, and the proportion administered appropriate broad-spectrum empiric antibiotic therapy, were compared; this was specifically recorded as a measure of the degree of recognition of markers of severe illness, which were a key focus of the guideline content. the proportion of patients with a known antibiotic allergy who received that antibiotic was also compared. time to antibiotic administration was recorded as a measure of whether the cdss delayed decision making to any extent.

a time series analysis was performed to evaluate changes in concordance of prescribing over time, covering all three time periods. the rate of concordant prescribing was expected to improve over time. change in concordance over time was assessed with a binary logistic model, incorporating month of treatment as a continuous variable. the 'expected' proportion of concordant treatment at any given time then plausibly corresponds to a regression line fitted through the data. we hypothesized that the rate of concordant prescribing after the intervention (in the third time period) would be greater than that expected given the observed trend before the intervention (the first and second time periods). statistical analysis was performed using stata version . . [ ]
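the interrupted time-series comparison just described can be sketched as follows: fit a binary logistic model of concordance on calendar month using only pre-intervention data, then extrapolate the fitted trend into the intervention period as the 'expected' concordance. the data here are simulated for illustration, and python with statsmodels stands in for the stata analysis actually used.

```python
# Sketch of the interrupted-time-series logic: a logistic trend is fitted to
# the pre-intervention months and projected forward as the "expected"
# concordance, against which observed post-intervention rates are compared.
# All numbers are simulated; the study's real month counts and effect sizes
# are not reproduced here.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
months = np.repeat(np.arange(36), 20)   # 36 months, ~20 patients per month
pre = months < 24                       # months 0-23: baseline + detailing
# Simulated truth: slow upward drift pre-intervention, a jump at month 24.
p_true = 1 / (1 + np.exp(-(-0.8 + 0.03 * months + 1.2 * (~pre))))
concordant = rng.binomial(1, p_true)

X_pre = sm.add_constant(months[pre].astype(float))
fit = sm.Logit(concordant[pre], X_pre).fit(disp=0)

post_months = np.arange(24, 36)
expected = fit.predict(sm.add_constant(post_months.astype(float)))
observed = np.array([concordant[months == m].mean() for m in post_months])
for m, e, o in zip(post_months, expected, observed):
    print(f"month {m}: expected {e:.2f} vs observed {o:.2f}")
```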
the demographic details of the patients in each of the three time periods are presented in table . during the computerised decision support period (cdss), patients were generally older than those in the other two time periods (a greater proportion were aged > years), and less likely to have received antibiotic therapy prior to presentation. the observed death rate during the cdss period appeared to be higher than for the other two periods, but this was largely explained by differences in the proportion of patients aged over years, and differences in the number of patients who died in the ed for whom supportive therapy was not thought appropriate (or = . [ . , . ], p = . ). the estimated effect over time within each cohort did not appear to be substantially altered by the inclusion of these covariates.

the effect of change over time was observed in more detail. figure illustrates the percentage of empiric antibiotic prescriptions that were concordant with recommendations per month over the entire period. prescribing patterns improved slowly over time. one year after release of the guideline, in the absence of any promotional efforts (that is, at the end of the baseline period), the concordance rate was around %. the change in the proportion of concordant prescribing between the last month of the baseline period and the first month of the academic detailing period was + . % over months. the change in the proportion of concordant prescribing between the last month of the academic detailing period and the first month of the computerised decision support period was + . % over months. at the end of the study period, the rate of concordant prescribing was high. the first month after the cdss intervention had a very high concordance rate ( %), and thereafter the rate remained around %, although the study was not long enough to demonstrate whether this level was maintained beyond months.

further analysis was performed to compare the observed results with those that would be expected based upon an underlying trend of improvement over time [ ] . the observed behaviour in the preceding time periods (over years) was used to predict the expected prescribing behaviour in the latter month period of the study. figure shows the three regression lines that best fit the observed rate of concordance over the three separate time periods, and the concordance predicted from a logistic regression model based upon the first and second time periods extrapolated forward through the third time period (the 'expected' concordance). while such a regression line may be sensitive to outliers, there were in fact few outliers in these data, and their likely effect would be small. during the first six months of the cdss period, the proportion of patients who were prescribed concordant therapy was greater than would be expected based on the observed trend. a confidence interval around the trend line was determined, and this described the likelihood of the observed results in the first month of the cdss period, based on the existing trend alone, as having a p value of . .

secondary outcomes were analysed as a measure of the impact of the changes in prescribing on key areas of interest. regarding those patients who required icu support, the likelihood that recommended broad-spectrum empiric antibiotics were received in the ed increased over time. the time to antibiotic administration did not increase, and was actually found to fall progressively over the three time periods, from to and then minutes, p < . .
this study demonstrates the pattern of behavioural change in emergency department clinicians over three and a half years, and describes the changes surrounding different interventions to promote a particular prescribing strategy. in particular, it demonstrates that the implementation of a computerised decision support system was associated with greater improvement in prescribing practices than would have been expected based upon the predictions made from actual prescribing observed over the preceding years.

the baseline period provides an example of the rate of change of prescribing behaviour with passive, informal means of information transfer. it shows that change is slow, and that the rate of change falls with time. this is consistent with the suggestion that while some clinicians respond to recommendations early, others may be more difficult to access, or more resistant to change, and change may be harder to achieve in the later time periods. the improvement in concordance of prescribing was not dramatic with academic detailing, but appeared to be greatest immediately after the cdss was deployed. it is likely that the interest generated by a novel system, and the attention it received during early education sessions, contributed to the high initial concordance. junior staff in this ed rotated on average every three months, which means that the impact of ad may not be sustained as new staff enter the unit. it is important to note that % concordance should not be expected in this context. the cap guideline represents a basic recommendation, and individual patients vary from the average; in the case of cap, experienced clinicians would be expected to vary from the guidelines for valid clinical reasons. it is impossible to separate the effect of the computerised decision support system itself from the effect of the education sessions, which would have increased awareness of the cap guideline and its recommendations. a longer duration of follow-up after deployment of the cdss would be required to comment upon the sustainability of any change.

the cdss was associated with changes in many of the secondary outcomes of interest that were not demonstrated with academic detailing. in particular, better recognition of patients with severe pneumonia, suggested by increased use of recommended broad-spectrum empiric antibiotics in those requiring icu care, was noted. this change occurred without a major increase in the overall rate of cephalosporin use or the average antibiotic costs per patient. this may be because the content of the decision support system highlighted this perceived problem, and the advice was consistent for all users; in contrast, with passive transfer and academic detailing, advice might be less consistent.

one of the strengths of this paper is that our statistical analysis has taken into account the expectation that prescribing practices would improve over time, in the absence of intervention. [ ] this improvement is presumably due to a 'learning effect' as information is disseminated. it demonstrates that trends in prescribing practices were already present before any specific intervention, and these should be acknowledged. this is one of the first papers to compare the impact of a cdss with academic detailing alone in the same clinical setting. to date, academic detailing has been one of the more common strategies used to promote guidelines, but it can be a labour-intensive exercise.
the staff members who provided academic detailing attended a two-day training session, and thereafter dedicated a portion of their clinical time to training purposes. the information provided to different staff members may have varied due to time constraints or the interest of the trainer, and particular areas may not have been discussed. the cdss, in contrast, provided consistent advice, and could be accessed whenever required by the clinicians. it required an initial investment of clinicians' time to develop and test the algorithm, but thereafter did not consume any additional staff resources. to date, most evaluations of cdss in hospitals have described large purpose-built systems, often in academic centres in the usa with a specific interest in computerisation. [ , ] this paper, in contrast, describes a transferable web-based computerised decision support system which can be integrated with many existing clinical databases in other hospitals. this study describes a clinical setting that would be familiar to most tertiary australian hospitals. previous reviewers have noted the lack of reports of systems outside of the usa, and this paper therefore provides an important contribution. [ ]

the major limitation of this study is that the changes were not compared with a separate control group. this study used the same group of clinicians at different time points as controls. in order to do this, the effect of time needed to be taken into account. the predictions of prescribing patterns that we have described are extrapolations beyond the actual data, and make assumptions about patterns of practice remaining similar over time. in this hospital, it would not have been practical to separate control and intervention groups without cross-contamination. in addition, such a study might increase clinician awareness and introduce bias affecting prescribing practices.

figure: proportion of concordant therapy prescribed over time. the solid lines indicate regression lines that best fit the observed data in each of the three time periods, demonstrating the percentage of empiric antibiotic therapy that was concordant with recommendations per month over time. the broken line is a regression line that best fits the observed data in just the first and second time periods; this line is projected forward over the third time period to demonstrate the 'predicted' concordance if the underlying trend from the first two time periods was to continue. the horizontal arrows demonstrate the timing of the two interventions. the vertical arrow represents the difference between the 'predicted' concordance and the observed concordance after the computerised decision support system (cdss) intervention.

although multiple testing issues are a concern where several hypothesis tests are performed, in this study the findings comparing time periods were relatively consistent across different variables, and the statistical significance of the effect was generally better than the . level. it is also important to recognize that the successful implementation of cdss depends heavily on the personnel and the setting; hence, separate hospitals or wards do not necessarily provide accurate control groups for comparison. the 'culture' within an institution has important effects on guideline implementation strategies.
exploration of the effect of a computerised decision support system on the prescribing practices in other institutions would, therefore, be of interest. this study has demonstrated improved antibiotic prescribing practices in a hospital setting associated with two different strategies for implementation of guidelines. the improvement in prescribing practices was initially more significant with the computerised decision support system than with academic detailing alone, although this may represent the effect of increased attention being given to a novel system. further exploration of the role of computerised decision support systems in hospitals is warranted, particularly to assess the sustainability of the effect on clinician decision-making at the point of care.

references:
antibiotic guidelines: improved implementation is the challenge
what has evidence based medicine done for us? bmj
what's the evidence that nice guidance has been implemented? results from a national evaluation using time series analysis, audit of patients' notes, and interviews
a simple intervention to improve hospital antibiotic prescribing
improving compliance with hospital antibiotic guidelines: a time-series intervention analysis
evaluating the impact of education by a clinical pharmacist on antibiotic prescribing and administration in an acute care state psychiatric hospital
printed educational materials: effects on professional practice and health care outcomes
a computer-assisted management program for antibiotics and other antiinfective agents
reduction of broad-spectrum antibiotic use with computerized decision support in an intensive care unit
improving empirical antibiotic treatment using treat, a computerized decision support system: cluster randomized trial
interventions to improve antibiotic prescribing practices for hospital inpatients
the experience with web-based computerised decision support systems at the royal melbourne hospital - the search for transferability and maintainability. icaac; washington
therapeutic guidelines: antibiotic. version ed
bts guidelines for the management of community acquired pneumonia in adults
guidelines for the initial management of adults with community-acquired pneumonia: diagnosis, assessment of severity, and initial antimicrobial therapy
empiric management of community-acquired pneumonia in australian emergency departments
a prediction rule to identify low-risk patients with community-acquired pneumonia
community acquired pneumonia: aetiology and usefulness of severity criteria on admission
release . version. college station, tx: stata corporation
use of computerized decision support systems to improve antibiotic prescribing
clinical decision support systems and antibiotic use

acknowledgements: ms thao nguyen and ms annmarie sherman for their assistance with data collection and data management.

the authors declare no financial conflict of interest. all authors have been employed by melbourne health, who now hold the rights to the computerised decision support system evaluated in this study. melbourne health had no influence over the findings described in this study. the authors have no other personal financial interests in the cdss.

kb and kt designed the study, carried out data collection and data analysis. jb and lm provided specific advice regarding statistical evaluation at the study design and analysis stages. as, gb and mk participated in study design and analysis. all authors contributed to the final manuscript.
the pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/ - / / /prepub

key: cord- -o xb vx authors: osserman, jordan; lê, aimée title: waiting for other people: a psychoanalytic interpretation of the time for action date: - - journal: wellcome open res doi: . /wellcomeopenres. . sha: doc_id: cord_uid: o xb vx

typical responses to a confrontation with failures in authority, or what lacanians term 'the lack in the other', involve attempts to shore it up. a patient undergoing psychoanalysis eventually faces the impossibility of doing this successfully; the other will always be lacking. this creates a space through which she can reimagine how she might intervene in her suffering. similarly, when coronavirus forces us to confront the brute fact of the lack in the other at the socio-political level, we have the opportunity to discover a space for acting rather than continuing symptomatic behaviour that increasingly fails to work.

'a strike is precisely that kind of rapport that connects a group to work.' (lacan, , p. )

we were on the picket lines when the uk woke up to the reality that responding to covid- was going to require mass shut-downs. we had been thinking and speaking, in university college union 'teach-outs', about how participation in industrial action opens up a particular and generative kind of temporal space. withdrawing one's labour dramatically disrupts the 'on-go' of daily life. one is thrown into a situation where time takes on a different quality: our relationship to the past is called into question ('what has brought me to this point? where have i been placed within the economic structure?'), and we gain a new sense of agency over the future through a rearticulation of the self. we thought this had something in common with the scenario of a patient undergoing psychoanalytic therapy, and we were attempting to tease out relevant parallels. this was the beginning of theorising an aspect of the psychic life of time rooted in a joyful form of collective struggle.

it came to a dramatic halt with covid- , which suspended and indefinitely postponed strike action, while simultaneously throwing the causes for the dispute into sharp relief. what will happen to precarious staff employed on hourly and temporary contracts about to expire, accustomed to regularly moving across the country (or indeed the world) for insecure academic work, in the context of a pandemic and economic crash? how will university pensions, held in investment portfolios, endure a stock market freefall? will we be told, yet again, that 'now is not the time' for rectifying the bame and gender wage gaps, and that taking on unsustainable workloads in the shift to online teaching is simply part of being a team player during a 'chaotic time'?

neoliberal economics has shaped our healthcare provision (and indeed our health) for decades, ever since the introduction of 'internal markets' to the nhs, but the extent to which health has been deprioritised in order to create an 'efficient' and profitable health service is now showing its true face. prior to the outbreak, hospital occupancy had repeatedly hit all-time record highs, routinely exceeding % of capacity, leading % of doctors in a bma survey to say that the nhs is 'in a state of year-round crisis' (bma, ). the doctrine of profitability means no margin of 'waste' - which means no ability to cope with everyday volumes of patients, much less an actual crisis.
it has become increasingly clear that our physical health relies not only on epidemiology but on the questions of politics, economics and analyses of social life more traditionally associated with the humanities and social sciences. the boundary between the physical and the social body has fallen. here, we attempt to offer some suggestions with regard to these extraordinary times.

concomitant with widespread fear of illness and economic ruin associated with covid- , we have observed the emergence of an unusual form of optimism. as governments around the world begin to implement stimulus and rescue packages designed to mitigate the economic effects of the disease - associated in the popular imaginary with wartime spending measures - some are beginning to hope that if we simply 'wait' (or 'hang tight') under quarantine, the government will ensure that things will be 'okay'. things will 'return to normal' eventually (as if returning to the state of affairs that gave rise to this crisis would be desirable), or even (in its more left-wing formulation), with the advent of socialist spending, a new and more equitable social order will arrive. keeping the racial implications of waiting in mind, we might remember colonial injunctions that the time was never right (so colonial subjects had to wait) for independence (chakrabarty, ), or in the us context, for emancipation and subsequently civil rights, about which langston hughes wrote a lyric to the 'hesitation blues': 'how long/ have i got to wait?/ can i get it now-/ or must i hesitate?' (hughes, , p. ).

so do we wait for these conclusions to sink in? this is a question of time, and also, clearly, a question of power. it is evocative of an early experiment in behavioural psychology, the stanford 'marshmallow experiment', meant to explore the connection between delayed gratification and later successful life outcomes (mischel & ebbesen, ). in the experiment, children were given the choice between an immediate reward (a marshmallow or pretzel), or two rewards if they were willing to wait for minutes. the study, and subsequent others like it, linked children who waited with better test scores, better jobs, even better bodies (casey et al., ; mischel et al., ; shoda et al., ). in mass media, the results of the study were promoted as a kind of neo-calvinist doctrine of the persevering rich, as well as providing a handy economic allegory about the importance of obedience and trust when facing apparent deprivation. if you follow the rules (and don't, for example, hoard toilet paper), the second marshmallow will be coming along any second now…

the researcher who dispenses the marshmallows is playing a role known psychoanalytically as the 'big other'. as theorised by lacan, the big other stands for the place from which people imagine that authority ultimately emanates, a kind of 'necessary illusion' that grounds the otherwise potentially infinite uncertainty of subjective speech and behaviour. ('the other must first of all be considered a locus, the locus in which speech is constituted' [lacan, , p. ].) individuals take on the mantle of the big other insofar as they successfully appear to be a guarantor of futurity: my hands hold the keys to your fate. this is a structural relation between parent and child which, although eventually surmounted to varying degrees, becomes 'transferred' onto figures of authority actual and spectral.
however, as derek hook clarifies, 'we should not fix the other in any one personage, or view it in a static way as embodied in certain lofty or powerful figures. … we as subjects constantly call upon, reiterate and thus reinstate the other … [it] is a (trans)subjective presupposition which exists only insofar as we act as if it exists' (hook, , p. ). consider the way investors are speaking about 'the market': 'the market right now is really shellshocked'; 'until the market sees some evidence that we've got the virus under control ... there isn't going to be a lot of confidence to buy'. this anthropomorphic creature we call 'the market' is, of course, the sum total of individual investors' financial behaviour. yet these investors do not decide whether to buy or sell stocks based directly on what they think other investors will do, but through the mechanism of a presupposed, transubjective third: what i think other people think 'the market' is going to do (see tuckett, ).

in his late teaching, lacan placed crucial emphasis on the notion of a lack in the big other. at certain pivotal moments, we begin to realise that nobody is actually behind the curtain. the 'glue' that holds together a social order starts to melt. the covid- crisis is, of course, a prime example of such a moment. it is difficult to overstate just how incompetent and incoherent our political leaders have made themselves out to be. from boris johnson boasting that he was shaking hands with covid- patients before contracting the virus (the guardian, ); to the government denying that it promoted 'herd immunity' (walker, ); to cabinet ministers openly contradicting who guidance in order to obscure the government's failure to procure adequate testing, hospital equipment, and ppe (itv news, ) - it has become clear that there no longer exists a stable authority upon whose pronouncements we can rely (see especially recent exposes in the guardian [conn et al., ] and sunday times [calvert et al., ]).

one of the ways lacanian psychoanalysts approach the question of diagnosis is to consider how a patient responds when he is confronted with a lack in the big other. similarly, with the void in power that has emerged as a consequence of covid- , we are witnessing a variety of what we might call 'symptomatic' responses that index the coordinates of individuals' psychic structures:

• denial: the big other is perfectly intact. the novel coronavirus isn't any worse than the ordinary flu; people are needlessly panicking due to social media and liberal commentators intent on discrediting our political leaders.
• conspiracy: we are being duped; a malevolent big other is pulling the strings. china designed covid- as a biological weapon to destroy the west.
• deferral: give the big other some time, and it will reconstitute itself. things are messy now, but if we just wait it out, they will return to normal. once the government secures enough antibody tests, we can go back to work, the pubs will reopen, our holidays abroad will resume.
• panicked incapacitation: without the big other, we are doomed. the government is sending us all to our deaths and nothing can be done.

in different ways, each of these responses indicates an attempt or wish to shore up the big other, to retrieve some kind of guarantor of the body politic in the midst of its apparent breakdown.
here we might also consider how a depoliticised portrayal of 'science' itself constitutes a kind of big other. as the clinician thomas svolos notes, 'if psychoanalysis has something to offer here, it is to recognize ... the proper place of the lack in the other, and the very personal nature of the fantasies we make to cover over it, so that people can soberly address the unknown' (svolos, ). in other words, there is another approach: proceeding with the understanding that the lack in the other was there from the beginning. in a sense, we all knew this was coming.

as feminist and critical race studies engagements with psychoanalysis have highlighted, the way one imagines and relates to the big other and its inconsistencies is mediated through history, symbolic inheritance, and structural positioning along multiple axes of difference including race and gender (e.g. christopher & lane, ; fanon, ; mitchell, ; spillers, ). likewise, the fallout from covid- has differential impacts; while it is beyond the scope of this piece to explore, it is important to emphasise that the consequences of this disease will exacerbate existing inequalities and forms of oppression.

people were already perceiving that nobody was properly in charge. regularly we received dire warnings about the nhs: waiting times at record highs, hospitals operating beyond capacity. yet our transference towards the nhs as a safe parental figure (or 'brick mother') seemed to persist: people continued to believe that when they fell ill the nhs would provide adequate care (see baraitser & salisbury, ; moore, , waiting in pandemic times). similarly, as fixed-term academics, we've long known that universities are simply not offering enough permanent posts for the majority of academics to do their work securely in the sector. yet as a group we nevertheless persist as if we'll all eventually find the right job. (ucu's qualitative study on casualisation found an 'inability to project into the future' to be one of the significant mental health consequences of precarious academic work [megoran & mason, , p. ].)

psychoanalytically, the practice of simultaneously accepting and rejecting a traumatic truth - continuing to behave as if it isn't true - is called disavowal, summarised in the phrase: 'i know very well, but nevertheless' (mannoni, ). in our daily life before covid- , we were already constantly surrounded by pronouncements of apocalypse, post-history, crisis and collapse - but these were always warnings, as it were, from 'within' the current coordinates, as society as a whole appeared to continue as normal (see flexer, , this collection). we were both present during the california wildfires of , and despite the massive loss of life and environmental destruction, economic activity continued as usual, with the occasional addition of masks, respirators and so on. this seems to be a model for the way our government initially hoped we would respond to coronavirus.

before covid- , appeals for redistributive policies were easily defused with the familiar language of technocratic neoliberalism: 'the numbers don't add up', 'this is not how it works', etc. the message was: 'your material suffering, while regrettable, does not have any bearing on the immutable laws of the economy'. with the sudden emergence of massive government spending - as we were writing this, the government cancelled £ . billion of nhs debt - we're witnessing this logic disappear before our very eyes.
this suspension of daily economic activity and the seemingly iron-clad principles that upheld it, alongside the threat of the virus, has interrupted the circuitry that forced us to act as if the big other existed, even when all available evidence indicated otherwise. we began from the transformative potential of suspended time in strike activity, which relies on the conscious decision of workers to withhold our labour. now we have entered a different kind of suspended time. from the collectivity of the strike, we have gone into self-isolation, imposed by the current crisis. these are also not mutually exclusive; workers as well as renters have seized this time to strike. in both cases, however, different kinds of suspended time produce an opportunity for the subject to consider her own agency in relation to the lack in the big other.

it's common for a patient to seek out analysis because a feeling of enjoyment, or what lacanians call 'jouissance', is somehow no longer available. this instability provides an opportunity to reconsider the relation to the other. in the current moment, we have arrived at a kind of analytic situation through simply suspending the function of enjoyment. the stock market is crashing, but of course in neoliberal capitalism what is also crashing is our jouissance. our typical release valves - going to the pubs, shopping - are gone. amazon is deprioritising shipping anything but 'essentials'; only 'key workers' and urgent tasks are allowed. we actually have to live in a time that is supposed to be a 'waiting time' - subjectively experience it as our reality in the here and now.

lacan, in , famously criticised student activists for posing what he took to be their hysterical demands to the powers that be: 'you want a master. you will get one' (see frosh, ). the protests of ' were an explosion of activity, which we could counterpose to today's means of reinstating a powerful other through passivity. the act, as theorised in lacanian psychoanalysis, has to be distinguished from 'acting out', or everyday action. the true act has such stakes that it simultaneously abolishes and transforms (in hegelian terms, sublates) the symbolic coordinates of a given social order.

so, how and when do we act? first, we have to find a way of acting within the context of there being no big other. this means our actions cannot be verified or guaranteed to succeed from the outset. nor, however, can we rely on an authority to predictably stop or punish us in the way transgression is often intended. acts will always appear to us as risks - serious ones. this is even true when they are the self-evidently 'right things to do' in retrospect. the corollary to this lack of divine verification is that the time to act never arrives. even as people fall ill with coronavirus, and are no longer waiting to potentially contract it, the question of what to do is not resolved; it is even intensified. we can say that an act never emerges from nothing, but only appears to in retrospect. we must be careful not to fetishize a moment of rupture for its own sake, or, as baraitser ( ) reminds us, to fail to account for the pre-existing context of endurance within an impossible situation upon which any significant rupture depends (the pre-existing 'state of year-round crisis' in the nhs, for example, which has led to this point). these would be further forms of acting out. lastly, an act must be collective, but each of us cannot wait for another to start it.
those of us advocating for radical emancipatory change cannot simply make our individual appeals to 'socialism' as a self-evident intellectual solution to the problems we face, but must directly intervene to build it and create our own vehicles of mass struggle. only through action can we instate a new symbolic situation. we can envision the collapse of neoliberal capitalism - a system that literally cannot function in the present situation - but without an alternative we will remain in the same symbolic coordinates. people are already beginning to figure out ways of coordinating activity during lockdown without risking their health, as technology creates an opportunity for greater international solidarity. the emergence of 'mutual aid' groups across the country is an example of people coordinating responses to the crisis in the absence of adequate government provision. it is a first step but, at present, relies on the voluntary goodwill of people able to share what little they have with each other. the next step would be recognising the production and planning of resources in society - those zones where our intervention was once strictly forbidden - and seizing our right to provision directly for people's material needs rather than obeying market logic. (it is a consequence of attempting to act that one may come to embody the big other. this is a very interesting problem and should be dealt with in a subsequent essay.)

we need to push our governments to value human life over economic gain, but we must also recognise that our own activity is what will make this possible, not the benevolence of a prime minister. revisiting the period of post-war reforms that delivered the nhs should make this clear. while claiming to support the principle of a health service in theory, churchill's opposition voted against the establishment of the nhs over a dozen times, including at second and third reading. the nhs was founded despite strong opposition from the tories and the right-wing press, both of whom now praise it as a national achievement. none of the institutions we rely on now - especially during this crisis - came about because they were handed down from above. they were formed through processes of social antagonism. this poses the question: why do people today view themselves as outside of the historical process?

attempting to pose these questions to ourselves as well, we decided to act, to directly engage with universities to demand two years' extension of employment for all casualised staff: a #coronacontract (https://coronacontract.org/). we have reached a point where continuing within the existing framework of society is no longer possible. the question is: will we desperately search for another way to shore up the big other, relying on symptomatic behaviour even as it fails to work - or can we find a way to act?

all data underlying the results are available as part of the article and no additional source data are required.

reviewer report (school of political science, aristotle university of thessaloniki, thessaloniki, greece): obviously, the effects of covid- extend far beyond the biological domain. they encompass many biopolitical, psychosocial and (psycho)political aspects in addition to health and welfare stricto sensu. this paper attempts to map and illuminate in an innovative way some of these effects. in particular, special emphasis is placed on the collapse of guarantees, the confrontation with failures in authority this crisis involves (as crises often do), a rubric that merits broader discussion.
such failures are traced and framed on a variety of levels (economics, power, time, psychic life, etc.) and are then cogently theorized - through lacanian theory - as encounters with the so-called 'lack in the other', meaning the various instances in which one is bound to feel and, perhaps, have the opportunity to register the cracks in the fantasmatic consistency and the ultimately arbitrary (contingent) foundations of our socio-symbolic order. how are such encounters usually dealt with? and how were they negotiated within the context of the covid- crisis? in other words, are we doomed to reproduce a sisyphean struggle to cover over this lack, which continuously reappears? perhaps psychoanalysis can point to an alternative type of agency beyond this vicious circle, thus enabling a different ethos of political acting. all in all, the paper deals with a highly original, topical and timely theme. it performs an analysis which is simultaneously accessible and rigorous, straightforward and conceptually sophisticated (drawing on a very pertinent lacanian apparatus). the argument is indeed challenging, ambitious, witty and to the point. thus, the paper does contribute significantly to the state-of-the-art in this field and is bound to influence the ongoing public debate in revealing ways. what is particularly suggestive is the axis of temporality, which is highlighted at various turns of the argumentation.

references:
baraitser l: enduring time. bloomsbury publishing.
'containment, delay, mitigation': waiting and care in the time of a pandemic
the seminar of jacques lacan. book iii, the psychoses
je sais bien, mais quand même
second class academic citizens: the dehumanising effects of casualisation in higher education
mischel w, ebbesen eb, zeiss ar: cognitive and attentional mechanisms in delay of gratification
debating sexual difference, politics, and the unconscious: with discussant section by jacqueline rose
'containment and delay': covid- , the nhs and high-risk patients
bloomberg once suggested farming, factory work don't require much 'gray matter'
spillers hj: 'all the things you could be by now if sigmund freud's wife was your mother': psychoanalysis and race. boundary
svolos t: coronavirus and the hole in the big other. the lacanian review.
the guardian: 'i shook hands with everybody'
revisiting the marshmallow test: a conceptual replication investigating links between early delay of gratification and later outcomes
webster c: conflict and consensus: explaining the british health service

is the study design appropriate and is the work technically sound?
if applicable, is the statistical analysis and its interpretation appropriate? not applicable
are all the source data underlying the results available to ensure full reproducibility? no source data required
are the conclusions drawn adequately supported by the results? yes

competing interests: no competing interests were disclosed.
reviewer expertise: psychoanalysis, freud, lacan, discourse theory, political theory, populism
i confirm that i have read this submission and believe that i have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.

reviewer report, august . https://doi.org/ . /wellcomeopenres. .r © mcgowan t.
this is an open access peer review report distributed under the terms of the creative commons attribution license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

university of vermont, burlington, vt, usa

"waiting for other people" outlines the effect of the coronavirus pandemic on the contemporary political situation. it points out that one of the main effects of the outbreak is that it exposes the lack in the other, or the failure of the big other. social authority is unable to deal with the disease, and as a result, subjects' investment in the figure of the big other comes into question. the most widespread response, the authors claim, is the attempt to shore up the big other, to obscure its lack. but at the same time, the virus presents us with another opportunity - the possibility of the genuine political act that occurs through the other's failure. this essay represents an outstanding intervention in the psychoanalysis of the effects of the pandemic. i have read several psychoanalytic accounts of our political situation today, and this is the best. i don't view any changes as necessary.

if applicable, is the statistical analysis and its interpretation appropriate? not applicable
are all the source data underlying the results available to ensure full reproducibility? yes
competing interests: no competing interests were disclosed.
i confirm that i have read this submission and believe that i have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.

key: cord- -wwi uydi authors: spadon, gabriel; hong, shenda; brandoli, bruno; matwin, stan; rodrigues-jr, jose f.; sun, jimeng title: pay attention to evolution: time series forecasting with deep graph-evolution learning date: - - journal: nan doi: nan sha: doc_id: cord_uid: wwi uydi

time-series forecasting is one of the most active research topics in predictive analysis. a still open gap in that literature is that statistical and ensemble learning approaches systematically present lower predictive performance than deep learning methods, as they generally disregard the data-sequence aspect entangled with multivariate data represented in more than one time series. conversely, this work presents a novel neural network architecture for time-series forecasting that combines the power of graph evolution with deep recurrent learning on distinct data distributions; we named our method recurrent graph evolution neural network (regenn). the idea is to infer multiple multivariate relationships between co-occurring time-series by assuming that the temporal data depends not only on inner variables and intra-temporal relationships (i.e., observations from itself) but also on outer variables and inter-temporal relationships (i.e., observations from other-selves). an extensive set of experiments was conducted comparing regenn with dozens of ensemble methods and classical statistical ones, showing sound improvement of up to . % over the competing algorithms. furthermore, we present an analysis of the intermediate weights arising from regenn, showing that by looking at inter- and intra-temporal relationships simultaneously, time-series forecasting is majorly improved if paying attention to how multiple multivariate data synchronously evolve.

time series refers to the persistent recording of a phenomenon along time, a continuous and intermittent unfolding of chronological events subdivided into past, present, and future.
in the last decades, time series analysis has been vital to predict dynamical phenomena on a wide range of applications, varying from climate change [ ] - [ ] , financial market [ ] , [ ] , land use monitoring [ ] , [ ] , anomaly detection [ ] , [ ] , energy consumption and price forecasting [ ] , besides epidemiology and healthcare studies [ ] - [ ] . in such applications, an effective data-driven decision frequently requires precise forecasting based on time series [ ] . a prime example is the sars-cov- , covid- , or coronavirus pandemic [ ] , which is known for being highly contagious and causing increased pressure on healthcare systems worldwide. in this case, time-series modeling

fig. : an example of a multiple multivariate time-series forecasting problem, where each multivariate time-series (i.e., sample) shares the same domain, timestream, and variables. when stacking the time-series together, we assemble a tridimensional tensor with the axes describing samples, timestamps, and variables. the multiple samples have the same variables recorded during the same timestamps, meaning that samples are unique, but every sample is observed in the same way. by tackling the problem altogether, we leverage inner and outer variables besides intra- and inter-temporal relationships to improve the forecasting.

graph convolutional networks (gcn) [ ] , the latter with significant applications on traffic forecasting [ ] , [ ] . meanwhile, others used unsupervised auto-encoders on time-series forecasting tasks [ ] , [ ] . notwithstanding, all former approaches are bounded to a bidimensional space, in which forecasting time-series can be summarized by a non-linear function between time and variables. from a different perspective, we hypothesize that time-series are dependent not only on their inner variables, which are observations from themselves, but also on outer variables provided by different time series that share the same timestream. for instance, the evolution of a biological species is related not solely to observations from itself, but also to observations from other species that share the same environment, as they are all part of the same food chain. by considering the variables and the dependency aspect during the analysis, the time series gains an increased dimensionality. a previously bidimensional problem, in which the forecasting ability of a model comes from observing relationships of variables over time, now becomes tridimensional, where forecasting means understanding the entanglement between variables of different time-series that co-occur in time. accordingly, time-series define an event that is not a consequence of a single chain of observations, but a set of synchronous observations of many time-series. for example, during the coronavirus pandemic, it is paramount to understand the disease's time-aware behavior in every country. despite progressing in different moments and locations, the underlying mechanisms behind the pandemic are supposed to follow similar (and probably interconnected) patterns. along these lines, looking individually at the development of the pandemic in each country, one can describe the problem in terms of multiple variables, like the number of confirmed cases, recovered people, and deaths. however, when looking at all countries at once, the problem yields an additional data dimension, and each country becomes a multivariate sample of a broader problem, such as depicted in fig. . in linguistic terms, we refer to such a problem as multiple multivariate time-series forecasting.
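to make the tridimensional arrangement concrete, the sketch below assembles such a tensor with numpy; the country names, shapes, and random values are illustrative stand-ins rather than the datasets used in the study.

```python
import numpy as np

# hypothetical per-country tables, each of shape (timestamps x variables),
# e.g., daily confirmed cases, recoveries, and deaths.
brazil = np.random.rand(120, 3)
canada = np.random.rand(120, 3)
france = np.random.rand(120, 3)

# stacking the samples yields the (samples x timestamps x variables) tensor
# from the figure: unique samples observed over the same timestream/variables.
tensor = np.stack([brazil, canada, france], axis=0)
print(tensor.shape)  # (3, 120, 3) -> s x t x v
```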
along with these premises, in this study, we contribute with an unprecedented neural network that emerges from a graph-based time-aware auto-encoder with linear and non-linear components working in parallel to forecast multiple multivariate time-series simultaneously, named recurrent graph evolution neural network (regenn). we refer to evolution as the natural progression of a process where the neural network iteratively optimizes a graph representing observations from the past until it reaches an evolved version of itself that generalizes on future data still to be observed. accordingly, the underlying network structure of regenn is powered by two graph soft evolution (gse) layers, a further contribution of this study. the gse stands for a graph-based learning-representation layer that enhances the encoding and decoding processes by learning a shared graph across different time-series and timestamps. the results we present are based on an extensive set of experiments, in which regenn surpassed a set of competing algorithms from the fields of deep learning, machine learning, and time-series; among which are single-target, multi-output, and multitask regression algorithms in addition to univariate and multivariate time-series forecasting algorithms. aside from surpassing the state-of-the-art, regenn remained effective after three rounds of ablation tests through distinct hyperparameters. all experiments were carried out over the sars-cov- , brazilian weather, and physionet datasets, detailed in the methods. in the task of epidemiology modeling on the sars-cov- dataset, we had improvements of at least . %. we outperformed the task of climate forecasting on the brazilian weather dataset by at least . % and patient monitoring on intensive care units on the physionet dataset by . %.
furthermore, we analyzed the results by using the cosine similarity on the evolution weights from the gse layers, which are the intermediate hidden adjacency matrices that arise from the graph-evolution process, showing that graphs shed new light on the understanding of non-linear black-box models.

tab. : summary of the notations.
ω ∈ n+ : sliding window size
w, z ∈ n+ : number of training and testing (i.e., stride) timestamps
s, t, v ∈ n+ : number of samples, timestamps, and variables
t ∈ r^{s×t×v} : tensor of multiple multivariate time-series
y ∈ r^{s×ω×v} : batched input of the first gse and the autoregression layers
y_α ∈ r^{s×ω×v} : output of the first gse and input of the encoder layers
y_ε ∈ r^{s×ω×v} : output of the encoder and input of the decoder layers
y_ε ∈ r^{s×z×v} : output from the first recurrent unit and input to the second one
y ∈ r^{s×z×v} : output of the second recurrent unit and input of the second gse layer
y_ψ ∈ r^{s×z×v} : non-linear output yielded by the second gse layer
y_λ ∈ r^{s×z×v} : linear output provided by the autoregression layer
y ∈ r^{s×z×v} : final result from the merging of the linear and non-linear outputs
g = ⟨v, e⟩ : graph in which v is the set of nodes and e the set of edges
a ∈ r^{v×v} : adjacency matrix of co-occurring variables
a_µ ∈ r^{v×v} : adjacency matrix shared between gse layers
a_φ ∈ r^{v×v} : evolved adjacency matrix produced by the second gse layer
u • v : batch-wise hadamard product between matrices u and v
u · v : batch-wise scalar product between matrices u and v
‖·‖_f : the frobenius norm of a given vector or matrix
ϕ(·) : dropout regularization function
σ_g(·) : sigmoid activation function
σ_h(·) : hyperbolic tangent activation function
cos θ(·) : cosine matrix-similarity
relu(·) : rectified linear unit
softmax(·) : normalized exponential function

since higher-dimensional time-series forecasting is a topic to be further explored, we understand regenn has implications in other research areas, like economics, social sciences, and biology, in which different time-series share the same timestream and co-occur in time, mutually influencing one another. in order to present our contributions, this paper is further divided into four sections. we begin by proposing a layer and a neural network architecture, besides detailing the methods used along with the study. subsequently, we display the experimental results compared to previous literature. next, we provide an overall discussion on our proposal and the achieved results. finally, we present the conclusions and final remarks. the supplementary material exhibits extended results and additional methods.

hereinafter, we use bold uppercase letters to denote multidimensional matrices (e.g., x), bold lowercase letters to denote vectors (e.g., x), and calligraphic letters to denote sets (e.g., x). matrices, vectors, and sets can be used with subscripts. for example, the element in the i-th row and j-th column of a matrix is x_ij, the i-th element of a vector is x_i, and the j-th element of a set is x_j. the transposed matrix of x ∈ r^{m×n} is x^t ∈ r^{n×m}, and the transposed vector of x ∈ r^{m×1} is x^t ∈ r^{1×m}, where m and n are arbitrary dimensions; further symbols are defined as needed, but tab. presents a summary of the notations.

graph soft evolution (gse) stands for a representation-learning layer that, given a training dataset, builds a graph in the form of an adjacency matrix, as in fig. . the gse layer receives no graph as input, but a set of multiple multivariate time-series.
the graph is built by tracking pairs of co-occurring variables, one sample at a time, and merging the results into a single co-occurrence graph shared among samples and timestamps. we define co-occurring variables as two variables, from a multivariate time-series, with a non-zero value in the same timestamp - in such a case, we say that one variable influences another and is influenced back. the co-occurrence graph is the projection of a tridimensional tensor, t ∈ r^{s×w×v}, into a bidimensional one, a ∈ r^{v×v}, describing pair-wise time-invariant relationships among variables. the co-occurrence graph g = ⟨v, e⟩ is symmetric and weighted. it is composed of a set v of |v| nodes equal to the number of variables, and another set e of |e| non-directed edges equal to the number of co-occurring variables. a node v ∈ v corresponds to a variable from the time-series multivariate domain, and an edge e ∈ e is an unordered pair ⟨u, v⟩ ≡ ⟨v, u⟩ of co-occurring variables u, v ∈ v. the weight f of the edges corresponds to the summation of the values of the variables u, v ∈ v whenever they co-occur in time, such that $f(u, v) = \sum_{i=0}^{s-1} \sum_{j=0}^{w-1} (t_{i,j,u} + t_{i,j,v})$. notice that the whole graph is bounded to w, the number of timestamps existing in the training portion of the input tensor, and if a pair of variables never co-occur in such a subset of data, no edge will be assigned to the graph, which means that ⟨u, v⟩ ∉ e and f(u, v) = 0.

given an adjacency matrix a ∈ r^{v×v}, we formulate a gse layer through three equations, in which w_α, w_η, w_µ ∈ r^{v×v} are the weights and b_α, b_η, b_µ ∈ r^v the biases to be learned. in further detail:

fig. : illustration of the graph soft evolution layer's representation-learning, in which the set of multiple multivariate time-series is mapped into adjacency matrices of co-occurring variables. the matrices are element-wise summed to generate a shared graph among samples, which, after a linear transformation, goes through a similarity activation-like function and is scaled by an element-wise multiplication to produce an intermediate adjacency matrix holding the similarity properties inherent to the shared graph.

in eq. . , the layer starts by employing a linear transformation to the shared adjacency matrix, which, after multiple iterations, provides a more generic version of the matrix across the samples. subsequently, in eq. . , it uses the cosine similarity on the output of eq. . , which works as an intermediate activation-like function that provides a similarity index for each pair of variables; see the supplementary material. the resulting matrix goes through an element-wise matrix multiplication to transform it back into an adjacency matrix while holding the similarity properties inherent to the shared graph. following, in eq. . , it performs a batch-wise matrix-by-matrix multiplication between the adjacency matrix from eq. . and the batched input tensor (i.e., y) so as to combine the information from the graph, which generalizes samples and timestamps, with the time-series. the result will be followed by a dropout regularizer [ ] and a batch-wise matrix-by-matrix multiplication, from which the final features joining both tensors will be produced. the evolution concept comes from the cooperation between two gse layers, one at the beginning (i.e., right after the input) and the other at the end (i.e., right before the output) of a neural network, such as in the example shown in fig. .
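since the layer's equations did not survive extraction, the following pytorch sketch is a speculative reading assembled strictly from the prose above: the co-occurrence rule that builds the shared graph, and the three described steps of linear transformation, cosine-similarity activation with element-wise rescaling, and batch-wise multiplication with the input. every name, shape, and default below is an assumption, not the published formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def co_occurrence_graph(t: torch.Tensor) -> torch.Tensor:
    # t: (s, w, v) training tensor; returns the shared (v, v) adjacency matrix
    # with f(u, v) = sum of t[i, j, u] + t[i, j, v] whenever both are non-zero.
    s, w, v = t.shape
    a = torch.zeros(v, v)
    for i in range(s):
        for j in range(w):
            idx = torch.nonzero(t[i, j], as_tuple=False).flatten()
            for u in idx:
                for q in idx:
                    if u != q:
                        a[u, q] += t[i, j, u] + t[i, j, q]
    return a

class GSE(nn.Module):
    # speculative reconstruction of the three steps described in the text.
    def __init__(self, num_vars: int, dropout: float = 0.1):
        super().__init__()
        self.linear = nn.Linear(num_vars, num_vars)  # w_mu and b_mu, step 1
        self.drop = nn.Dropout(dropout)

    def forward(self, y: torch.Tensor, a: torch.Tensor):
        # y: (s, w, v) batched input; a: (v, v) shared adjacency matrix
        a_mu = self.linear(a)                              # step 1: generalize graph
        sim = F.cosine_similarity(a_mu.unsqueeze(0),
                                  a_mu.unsqueeze(1), dim=-1)
        a_sim = sim * a_mu                                 # step 2: similarity rescaling
        y_alpha = torch.matmul(self.drop(y), a_sim)        # step 3: join graph and series
        return y_alpha, a_mu                               # a_mu feeds the target layer
```

in this reading, the returned a_mu is what the source layer would hand to the target layer for the evolution process described next.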
as evolution arises from sharing hidden weights between a pair of non-sequential layers, we named this process soft evolution. accordingly, the first layer (i.e., source) aims to learn the weights that will scale the matrix and produce a_µ. such a result is the input of the second gse layer (i.e., target), and it will be used for learning the evolved version of the adjacency matrix, referred to as a_φ and produced as in eq. . . notice that in fig. , the source layer is different from the target one, as we disregard the regularizer ϕ, the trainable weights w_α, and the bias b_α from eq. . . this is because they aim to enhance the feature-learning processes when multiple layers are stacked together. as the last layer, gse provides the output from already learned features through a scalar product between the data propagated throughout the network, i.e., y, and the intermediate evolved adjacency matrix, i.e., a_ψ.

fig. : graph soft evolution layers assembled for evolution-based learning. in such a case, the output of the first gse layer (i.e., source) will feed further layers of the neural network, whose result goes through the second gse layer (i.e., target). the gse, as the last layer, does not use regularizers or linear transformations before the output. contrarily, it provides the final predictions by the scalar product between the output of the representation-learning process and the data propagated throughout the network.

one can see that the source gse layer has two constant inputs, the graph and the input tensor. differently, the target gse layer has two dynamic inputs, the shared graph from the source gse layer and the input propagated throughout the network. in the scope of this work, we use an auto-encoder in between gse layers to learn data codings from the output of the source layer, which will be decoded into a representation closest to the expected output and later re-scaled by the target layer. in this sense, while the first layer learns a graph from the training data (i.e., past data), working as a pre-encoding feature-extraction layer, the second layer re-learns (i.e., evolves) a graph at the end of the forecasting process based on future data, working as a post-decoding output-scaling layer. when joining the gse layers with the auto-encoder, we assemble the recurrent graph evolution neural network (regenn), introduced in detail as follows.

regenn is a graph-based time-aware auto-encoder with linear and non-linear components with parallel data-flows working together to provide future predictions based on observations from the past. the linear component is the autoregression implemented as a feed-forward layer, and the non-linear component is made of an encoder and a decoder module powered by a pair of gse layers. fig. shows how these components communicate from the input to the output, and, in the following, we detail their operation. the non-periodical changes and constant progressions of the series across time usually decrease the performance of the network. that is because the scale of the output loses significance compared to the input, which comes from the complexity and non-linear nature of neural networks in tasks of time-series forecasting [ ] . following a systematic strategy to deal with such a problem [ ] , [ ] , regenn leverages an autoregressive (ar) layer working as a linear feed-forward shortcut between the input and output, which, for a tridimensional input, is algebraically defined as $y_{λ}[i, k, v] = \sum_{j=1}^{ω} y[i, j, v]\, w_{j,k} + b_k$, where w ∈ r^{ω×z} are the weights and b ∈ r^z the bias to be learned.
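a minimal sketch of such a shortcut, assuming the layer simply projects the window axis onto the stride axis, shared across every sample and variable; the class name and shapes are illustrative.

```python
import torch
import torch.nn as nn

class AutoRegression(nn.Module):
    # linear shortcut mapping the window axis (omega) to the stride axis (z);
    # a single projection is applied to every sample and variable.
    def __init__(self, window: int, stride: int):
        super().__init__()
        self.proj = nn.Linear(window, stride)  # w in r^{omega x z}, b in r^z

    def forward(self, y: torch.Tensor) -> torch.Tensor:
        # y: (s, omega, v) -> move time last, project, move it back
        return self.proj(y.transpose(1, 2)).transpose(1, 2)  # (s, z, v)
```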
the output of the linear component, i.e., y_λ ∈ r^{s×z×v} as in eq. , is element-wise added to the non-linear component's output, i.e., y_ψ ∈ r^{s×z×v}, so as to produce the final predictions of the network, y ∈ r^{s×z×v}, formally given as y = y_λ + y_ψ. subsequently, we describe the auto-encoder functioning that produces the non-linear output of regenn. we use a non-conventional transformer encoder [ ] , which employs self-attention, to learn an encoding from the features forwarded by the gse layer. the self-attention consists of multiple encoders joined through the scaled dot-product attention into a single set of encodings through the multi-head attention.

fig. : ... the non-linear one, however, has an auto-encoder and two gse layers. the last gse layer, although equal to the first, yields an early output as it is not stacked with another gse layer.

the number of features expected by the transformer encoder must be a multiple of the number of heads in the multi-head attention. our encoder's non-conventionality comes from the fact that the first gse layer's output goes through a single scaled dot-product attention on a single-head attention task. that is because the number of features produced by the encoder is equal to the length of the sliding window, and, through single-head attention, the window can assume any length. the encoder module is defined through four equations, where the self-attention in eq. a is a particular case of the multi-head attention, in which the input query q, key k, and value v of the scaled dot-product attention, i.e., $\mathrm{softmax}(q \cdot k^{t} / \sqrt{d_k}) \cdot v$, are equal; and d_k is the dimension of the keys. the attention results are followed by a dropout regularization [ ] , a residual connection [ ] , and a layer normalization [ ] as in eq. b, which ensure generalization. the first two layers work to avoid overfitting and gradient vanishing, while the last one normalizes the output such that the samples among the input have zero mean and unit variance, i.e., $γ(\Delta(y_ε + ϕ(y_ε))) + β$, where Δ is the normalization function, and γ and β are parameters to be learned. after, in eq. c, the intermediate encoding goes through a double linear layer, a point-wise feed-forward layer, which, in this case, consists of two linear transformations in sequence with a relu activation in between, having the weights w_ε, w_ι and biases b_ε, b_ι as optimizable parameters. finally, the transformed encoding goes through one last set of generalizing operations, as shown in eq. d. the resulting encoding y_ε ∈ r^{s×ω×v} is a tensor with the time-axis length matching the size of the sliding window ω, the same dimensions as the input tensor.

the previous encoding will be decoded by two sequence-to-sequence layers, which, in this case, are long short-term memory (lstm) [ ] units. the decoder operates on two of the tridimensional axes of the encoding, the time axis and the variable axis, one at a time. during the time-axis decoding, the first recurrent unit translates the window-sized input into a stride-sized output, where v is the v-th variable of the t-th time-series group and the weights w_o ∈ r^z are parameters to be learned. along with eq. , we refer to f as the forget gate's activation vector, i as the input and update gate's activation vector, o as the output gate's activation vector, c̃ as the cell input activation vector, c as the cell state vector, and h as the hidden state vector. the last hidden state vector goes through a dropout regularization ϕ before the next lstm in the sequence.
the next recurrent unit decodes the variable axis from the partially-decoded encoding without changing the input dimension. the set of variables within the time-series does not necessarily imply a sequence; that does not interfere with the decoding process as long as the variables are kept in the same position. the second lstm in the sequence, in which t is the t-th timestamp of the v-th variable group, works analogously, where y_ε is the partially-decoded encoding and the weights w_o ∈ r^z are parameters to be learned. the description of the notations within eq. holds for eq. . the difference, besides the decoding target, is the residual connection with the partially-decoded encoding y_ε at the last hidden state vector after the dropout regularization ϕ. finally, the output of the last recurrent unit, y, goes through the last gse layer, so as to produce the non-linear output y_ψ of regenn.

regenn operates on a tridimensional space shared between samples, timestamps, and variables. in such a space, it carries out a time-based optimization strategy. the training process iterates over the time-axis of the dataset, showing the network how the variables within a subset of time-series behave as time goes by, and later repeating the process through subsets of different samples. the network's weights are shared among the entire dataset and optimized towards best generalization simultaneously across samples, timestamps, and variables. in this work, we used adam [ ] , a gradient descent-based algorithm, to optimize the model. as the optimization criterion, we used the mean absolute error (mae), which is a generalization of the support vector regression [ ] with soft-margin criterion, $\mathcal{l}(\Omega) = \frac{1}{n} \sum_{i=1}^{n} |y_i - t_i|$, where Ω is the set of internal parameters of regenn, y is the network's output, t the ground truth, and n the number of predicted values.

because sars-cov- behaves as a streaming time-series, we adopted a transfer-learning approach to train the network on that dataset only. transfer learning shares knowledge across different domains by using the pre-trained weights of another neural network. the approach we adopted, although different, resembles online deep learning [ ] . the main idea is to train the network on incremental slices of the time-axis, such that the pre-trained weights of a previous slice are used to initialize the weights of the network in the next slice. the purpose of this technique is not only to achieve better forecasting performance but also to show that regenn is superior to other algorithms throughout the pandemic.
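a compact sketch of this optimization scheme follows; train_slice and sliding_batches are hypothetical helpers (the latter is sketched in the pre-processing discussion below), and the epoch count and learning rate are placeholders rather than the study's settings.

```python
import torch

def train_slice(model, t_train, window, stride,
                epochs=100, lr=1e-3, prev_state=None):
    # warm-start from the previous time-slice, mimicking the described scheme
    if prev_state is not None:
        model.load_state_dict(prev_state)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = torch.nn.L1Loss()  # the mean absolute error criterion
    for _ in range(epochs):
        for x, target in sliding_batches(t_train, window, stride):
            optimizer.zero_grad()
            loss = criterion(model(x), target)
            loss.backward()
            optimizer.step()
    return model.state_dict()  # weights handed to the next slice
```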
however, the neural network output is transformed back to the initial scale before computing any of the evaluation metrics. a simplistic yet effective approach to train time-series algorithms is through the sliding window technique [ ] , also referred to as rolling window. the window size is well known to be a highly sensitive hyperparameter [ ] , [ ] . consequently, we followed a non-tunable approach, in which we set the window size before the experiments, just taking into consideration the context and domain of the datasets. these values were used across all windowbased experiments, including the baselines and ablation . available together with the implementation source-code. tests. it is noteworthy that most of the machine-learning algorithms are not meant to handle time-variant data, such that no sliding window was used in those cases. conversely, we considered training timestamps as features and those reserved for testing as tasks of a multitask regression. on the deep learning algorithms, we used a window size of days for training and reserved days for validation (between the training and test sets) to predict the last days of the sars-cov- dataset. the - - split idea comes from the disease incubation period, which is of days. on the other hand, we used a window size of days and reserved days for validation to predict the last days in the brazilian weather dataset. the - - split is based on the seasonality of the weather data, such that we will look to the previous months (a weather-season window) to predict the last months of the upcoming season. finally, we used a window size of hours for training and hours for validation to predict the last hours of the physionet dataset. the - - split comes from the fact that patients in icus are in a critical state, such that predictions within hours are more useful than long-term predictions. many existing algorithms are limited because they neither support multitask nor multi-output regression, making these algorithms even more limited to tasks when data is tridimensional. the most straightforward yet effective approach we followed to compare them to regenn was to create a chain of ensembles . in such a case, each estimator makes predictions on order specified by the chain using all of the available features provided to the model plus the predictions of models that are earlier in the chain. the number of estimators in each experiment varies according to the type of the ensemble and the type of the algorithms, and the final performance is the average of each estimator's performance. for simplicity sake, we grouped the algorithms into five categories, as follows: corresponds to tridimensional compliant algorithms of single estimators; ○ describes multivariate algorithmss estimators, one estimator for each sample; ◎ consists of multi-output and multitask algorithmsv estimators, one estimator per variable; indicates single-target algorithms -v×z estimators, one estimator per variable and stride; and, represents univariate algorithms -s×v estimators, one estimator for each sample and variable. as time-series forecasting poses as a time-aware regression problem, our goal remains in predicting values that resemble the ground truth the most. hyperparameter tuning unlike many neural networks, regenn has only two hyperparameters able to change the size of the weights' tensors, which are the window size (i.e., input size) and the stride size (i.e., output size). 
as already discussed, both were set before the experiments, and none of them were tuned towards any performance improvement. the trade-off of having fewer hyperparameters is to spend more energy on training the network towards a better performance. we are focusing on the network optimizer, gradient clipping, learning rate scheduler, and dropout regularization when we refer to tunable hyperparameters. along these lines, we followed a controlled and limited exploratory approach similar to a random grid-search, starting with pytorch's defaults. the tuning process was on the validation set, intentionally reserved for measuring the network improvement. the tuning process follows by updating the hyperparameters whenever observing better results on the validation set, leading us to a set of optimized but no optimum hyperparameters. we used the set of optimized hyperparameters to evaluate the network on the test set. we used the default values for all the other algorithms [ ] - [ ] unless explicitly required for working with a particular dataset, as was the case of lstnet [ ] , dsanet [ ] , and mlcnn [ ] . the list of hyperparameters of regenn and further deep-learning algorithms are in the supplementary material. the experiments related to machine-learning and timeseries algorithms were carried out on a linux-based system with cpus and gb of ram for all the datasets. the experiments related to deep learning on the sars-cov- dataset were carried out on a linux-based system with cpus, gb of ram, and gpus (titan x -pascal). the brazilian weather and physionet datasets were tested on a different system with cpus, gb of ram, and gpus (titan x -maxwell). while cpu-based experiments are even across all cpu architectures, the same does not hold for gpus, such that the gpu model and architecture must match to guarantee reproducibility. aiming at complete reproducibility, we disclose not only the source code of regenn on github , but also the scripts, pre-processed data, and snapshots of all the networks trained on a public folder at google drive . . available at https://bit.ly/ ydbroo. . available at https://bit.ly/ x cwn. subsequently, we go through the experimental results in each one of the benchmarking datasets. due to not all the algorithms performed evenly across the datasets, we display the most prominent ones out of the tested algorithms; for the extended results, see the supplementary material. additionally, we also discuss the ablation experiments, which were carried out with regenn's hyperparameters; in the supplementary material, we provide two other rounds of this same experiment using as hyperparameters the pytorch's defaults and others recurrently employed on the literature. at the end of each experiment, we draw explanations about the evolution weights, i.e., intermediate adjacency matrices from the gse layers, by using the cosine similarity on pairs of co-occurring variables. the sars-cov- has being updated daily since the beginning of the coronavirus pandemic. we used a self-to-self transfer-learning approach to train the network in slices of time due to such a dataset's streaming nature. 
in short, the network was re-trained every days with new incoming data, using as starting weights, the pre-trained weights from the network trained in the past as a result of the analysis of the dataset in time-slices, we were able to notice that, as time goes by and more information is available on the sars-cov- dataset, the problem becomes more challenging to solve by looking individually at each country, and more natural when looking to all of them together. although countries have their particularities, which make the disease spread in different ways, the main goal is to decrease the spreading, such that similarities between the historical data of different countries provide for finer predictions. furthermore, we also observed that not all the estimators within an ensemble perform in the same way in the face of different countries. due to the regenn capability of observing inter-and intra-relationships between time-series, it performs better on highly uncertain cases like this one. subsequently, we present the ablation results, in which we utilized the same data-flow as regenn but no gse . available at https://bit.ly/ quwrsd. . results presented in descending order of metric mae (i.e., from worst to best performance); the algorithms are symbol-encoded according to their number of estimators. we use gray arrows to describe the standard deviation of the results; the negative deviation, which is equal to the positive one, was suppressed for better visualization. the results confirmed regenn's superior performance as it is the algorithm with the lowest error and standard deviation compared to the others, such that the improvement in the experiment was no lower than . %. layer while systematically changing the decoder architecture. we provide results using different recurrent units (ru), which are the elman rnn [ ] , lstm [ ] , and gru [ ] . we also varied the directional flag of the recurrent unit between unidirectional (u) and bidirectional (b). that because a unidirectional recurrent unit tracks only forward dependencies while a bidirectional one tracks both forward and backward dependencies. additionally, the network architecture of each test is described by a summarized tag. for example, given the architecture (e → u ru + b ru)+ar, it means the model has a transformer encoder as the encoder, a unidirectional recurrent unit as the time-axis decoder, and a bidirectional recurrent unit as the variable-axis decoder. besides, the output of the decoder is element-wise added to the autoregression (ar) output. furthermore, the table shows results with and without the encoder and autoregression component, as well as cases when using a single recurrent unit only for time-axis decoding. according to the ablation results detailed in tab. , one can observe that the improvement of regenn is slightly reduced than previously reported. that is because the performance of it does not only comes from the gse layer but also from how the network handles the multiple multivariate time-series data. consequently, the ablation experiments reveal that some models without gse layers are enough to surpass all the competing algorithms. however, when using regenn, we can improve them further and achieve . % of additional reduction on the mae, . % on the rmse, and . % on the msle. fig. shows the evolution weights originated from applying the cosine similarity on the hidden adjacency matrices of regenn. when comparing the input and evolved graphs, the number of cases and deaths has a mild similarity. 
that might come from the fact that, at the beginning of the pandemic, diagnosing infected people was already a broad concern. the problem did not go away, but more infected people were discovered as more tests were made, and also because the disease spread worldwide. a similar scenario can be drawn from the number of recovered and the number of cases, as infected people with mild or no symptoms were unaware of being infected. contrarily, we can see that the similarity between the recovered and deaths decreases over time, which comes from the fact that, as more tests are made, the mortality rate drops to a stable threshold due to the increased number of recovered people. the brazilian weather dataset is a highly seasonal dataset with a higher number of samples, variables, and timestamps than the previous one. for simplicity's sake, in this experiment, regenn was trained on the whole training set at once. the results are in fig. , in which regenn was the first-placed algorithm, followed by the elman rnn in second. regenn overcame the elman rnn by . % on the mae, . % on the rmse, and . % on the msle. we noticed that all the algorithms perform quite similarly for this dataset. the major downside for most algorithms comes from predicting small values that are close to zero, as noted by the msle results. in such a case, the ensembles showed a high variance when compared to regenn. we believe this is why the elman rnn shows performance closer to regenn rather than to exponential smoothing, the third-placed algorithm, as regenn has a single estimator, while the exponential smoothing is an ensemble of estimators. another understanding of why some algorithms underperform on the msle might be related to their difficulty to track temporal dependencies, which embraces the weather seasonality. the ablation results are in tab. , in which we observed again that the network without the gse layers already surpasses the baselines. when decommissioning the gse layers of regenn and using gru instead of lstm on the decoder, we observed a . % improvement on the mae, . % on the rmse, and . % msle when compared to the elman rnn results. using regenn instead, we achieve a further performance gain of . % on the mae and . % on the rmse over the ablation experiment. fig. depicts the evolution weights for the current dataset, in which we can observe a consistent similarity between pairs of variables in the input graph, which does not repeat in the evolved graph, implying different relationships. on the evolved graph, we observe that the similarity between all pairs of variables increased as the graph evolved. the pairs solar radiation and rain, maximum autoregressive baselines results for the brazilian weather dataset ordered from worst to best mae performance. along with the image, the algorithms are symbol-encoded based on their type and number of estimators, and we use gray arrows to report the standard deviation of the results; the negative deviation, which is equal to the positive one, was suppressed for improved readability. in such an experiment, regenn once more outperformed all the competing algorithms, demonstrating versatility by performing well even on a highly-seasonal dataset with improvement no lower than . %. in the face of seasonality, the elman rnn surpassed the exponential smoothing, the previously second-placed algorithm. temperature and rain, and solar radiation and minimum temperature stood out. 
those pairs are mutually related, which comes from solar radiation interfering in both maximum and minimum temperature and also in the precipitation factors, where the opposite relation holds. what can be extracted from the evolution weights, in this case, is the notion of importance between pairs of variables, so that the pairs that stood out are more relevant and provide better information to predict the forthcoming values for the variables in the dataset. the physionet dataset presents a large number of samples and an increased number of variables, but little information on the time axis, a setting in which ensembles still struggle to perform accurate predictions, as depicted in fig. . once again, regenn keeps steady as the first-placed algorithm in performance, showing solid improvement over the linear svr, the second-placed algorithm. the improvement was . % on the mae and . % on the msle, while the rmse achieved by regenn laid within the standard deviation of the linear svr, pointing out an equivalent performance between them. the linear svr is an ensemble with multiple estimators, while regenn uses a single one, which makes it better accurate and more straightforward for dealing with the current dataset. as in tab. , the ablation results reveal that a neural network architecture without the gse layers can achieve a better performance than the baseline algorithms. in this specific case, we see that by using a bidirectional lstm instead of unidirectional on the decoder module of the neural network, we can achieve a performance almost as good as regenn, but not enough to surpass it, as regenn still shows an improvement of . % on the mae and . % on the rmse over the ablation experiment with bidirectional lstm. in this specific case, regenn learns by observing multiple icu patients. however, one cannot say that an icu patient's state is somehow connected to another patient's state. contrarily, the idea holds as in the first experiment, where although the samples are different, they have the same domain, variables, and timestream, such that the information from one sample might help enhance future forecasting for another one. that means regenn learns both from the past of the patient but also from the past of other patients. nevertheless, we must be careful about drawing any understanding about these results, as the reason each patient is in the icu is different, and while some explanations might be suited for a small set of patients, it tends not to generalize to a significant number of patients. when analyzing the evolution weights in fig. aided by a physician, we can say that there is a relationship between the amount of urine excreted by a patient and the arterial blood pressure, and also that there is a relation between the systolic and diastolic blood pressure. however, even aided by the evolution weights, we cannot further describe these relations once there are variables of the biological domain that are not being taken into consideration. the evolution weights are intermediate weights of the representation-learning process (see fig. ), which are optimized throughout the network's training. such weights are time-invariant and are a requirement for the featurelearning behind the gse layer. although time does not flow through the adjacency matrix, the network is optimized as a whole, such that every operation influences the gradients resulting from the backward propagation process. 
that means that the optimizer, influenced by the gradients of both time-variant and invariant data, will optimize all the weights towards a better forecasting ability. such a process depends not only on the network architecture but also on the reliability of the optimization process. that increases uncertainty, which is the downside of re-genn, demanding more time to train the neural network, and causing the improvement not to be strictly uprising. baseline results for the physionet dataset arranged from the worst to the best mae performance, in which, regenn was the first-placed algorithm followed by the linear svr in the second one. the improvement from one to another was no lower than . %, but, in this case, regenn yielded an rmse compatible with the linear svr. along with the image, the algorithms are symbol-encoded based on type and number of estimators, and gray arrows depict the standard deviation of the results; the negative deviation, which is equal to the positive one, was suppressed to provide better visualization for the results. fig. : evolution weights extracted from regenn after training on the physionet dataset, in which we use cosine similarity to compare the relationship between pairs of variables. we use "abp" as shortening for arterial blood pressure, "ni" as non-invasive, "dias" as diastolic, and "sys" as systolic. consequently, training might take long sessions, even with consistently reduced learning rates on plateaus or simulated annealing techniques; this is influenced by the fact that the second gse layer has two dynamic inputs, which arise from the graph-evolution process. however, we observed that through the epochs, the evolution weights reaches a stable point with no major updates, and as a result, the network demonstrates a remarkable improvement in its last iterations when the remaining weights more intensely converge to a near-optimal configuration. even though regennhas a particular drawback, it shows excellent versatility, which comes from its superior performance in the task of epidemiology modeling on the sars-cov- dataset, climate forecasting on the brazilian weather, and patient monitoring on intensive care units on the physionet dataset. consequently, we see regenn as a tool to be used in data-driven decision-making tasks, helping prevent, for instance, natural disaster, or during the preparation for an upcoming pandemic. as a precursor in multiple multivariate time-series forecasting, there is still much to be improved. for example, reducing the uncertainty that harms regenn without decreasing its performance should be the first step, followed by extending the proposal to handle problems in the spatio-temporal field of great interest to traffic forecasting and environmental monitoring. another possibility would be to remove the recurrent layers within the decoder while tracking the temporal dependencies through multiple graphs, which would provide a whole new temporal-modeling way. notwithstanding, in some cases, where extensive generalization is not required, the analysis of singular multivariate time-series may be preferred to multiple multivariate time-series. that because, when focusing on a single series at a time, some but not all samples might yield a lower forecasting error, as the model will be driven to a single multivariate sample. 
however, both approaches for tackling time-series forecasting can coexist in the state-of-the-art, and, as a consequence, the decision to work on a higher or lower dimensionality must relate to which problem is being solved and how much data is available to solve it. this paper tackled multiple multivariate time-series forecasting tasks by proposing the recurrent graph evolution neural network (regenn), a graph-based time-aware auto-encoder powered by a pair of graph soft evolution (gse) layers, a further contribution of this study that stands for a graph-based learning-representation layer. the literature handles multivariate time-series forecasting with outstanding performance, but, up to this point, we lacked a technique with increased generalization over multiple multivariate time-series with sound performance. previous research might have avoided tackling such a problem, as a neural network for that matter is challenging to train and usually yields poor results. that is because one aims to achieve good generalization on future observations for multivariate time-series that do not necessarily hold the same data distribution. because of that, regenn is a precursor in multiple multivariate time-series forecasting, and even though this is a challenging problem, regenn surpassed all baselines and remained effective after three rounds of ablation tests through distinct hyperparameters. the experiments were carried out over the sars-cov- , brazilian weather, and physionet datasets, showing improvements, respectively, of at least . %, . %, and . %. as a consequence of the results, regenn shows a new range of possibilities in time-series forecasting, starting by demonstrating that ensembles perform worse than a single model that understands the entanglement between different variables by looking at how variables interact as time goes by and multiple multivariate time-series evolve.

this work was partially supported by the coordenação de aperfeiçoamento de pessoal de nível superior - brazil (capes) - finance code ; fundação de amparo à pesquisa do estado de são paulo (fapesp), through grants / - , / - , / - , / - , and / - ; conselho nacional de desenvolvimento científico e tecnológico (cnpq), through grants / - and / - ; national science foundation awards iis- , ccf- , and iis- ; and the national institutes of health awards nih r r ns - and r hl . we thank jeffrey valdez for his aid with sunlab's computer infrastructure, lucas scabora for his careful review of the paper, and gustavo merchan, m.d., for his analysis of the evolution weights on the physionet dataset.

baselines notes. table lists the acronym and full name of all algorithms we tested during the baselines computation. the subsequent tables present detailed information from the experiments discussed along with the main manuscript. the following tables regard the tests using transfer learning on the sars-cov- dataset, in which a new network was trained every days, starting days after the pandemic began and up to days of its duration.

cosine similarity. the cosine similarity, which has been widely applied in learning approaches, accounts for the similarity between two non-zero vectors based on their orientation in an inner product space [ ] . the underlying idea is that the similarity is a function of the cosine angle θ between vectors; hence, when θ = , the two vectors in the inner product space have the same orientation; when θ = , these vectors are oriented at ° relative to each other; and when θ = − , the vectors are diametrically opposed.
the cosine similarity between the vectors u and v is defined as $\cos\theta(u, v) = \frac{u \cdot v}{\|u\| \, \|v\|}$, where u · v denotes the dot product between u and v, and ‖u‖ represents the norm of the vector, ‖u‖ = √(u · u), while u_i is the i-th variable of the object represented by the vector. in the scope of this work, the cosine similarity is used to build similarity adjacency matrices, which measure the similarity between all nodes in a variables' co-occurrence graph. the similarity between two nodes in the graph describes how likely those two variables are to co-occur at the same time for a time-series. in this case, the similarity ends up acting as an intermediate activation function, enabling the graph-evolution process by maintaining the similarity of the relationships between pairs of nodes. in such a particular case, we define the cosine-matrix similarity as $\cos\theta(a) = \frac{a \cdot a^{t}}{\|a\| \, \|a\|}$, where a · a^t denotes the dot product between the matrix a and the transposed a^t, while ‖a‖ represents the norm of that same matrix with respect to any of its ranks, as we consider a to be a squared adjacency matrix.

horizon forecasting. horizon forecasting stands for an approach used for making non-continuous predictions by accounting for a future gap in the data. it is useful in a range of applications by considering, for instance, that recent data is not available or is too costly to be collected. thereby, it is possible to optimize a model that disregards the near future and focuses on the far-away future. however, such an approach forgoes additional information that could be learned from continuous timestamp predictions [ ] . by not considering the near past as a variable that influences the near future, we might end up with a non-stochastic view of time, meaning that the algorithm focuses on long-term dependencies rather than on both long- and short-term dependencies. along these lines, both lstnet [ ] and dsanet [ ] comply with horizon forecasting, and, to make our results comparable, we set the horizon to one on both of them. thus, we started assessing the test results right after the algorithms' last validation step because, the closer to the horizon, the more accurate these models should be.

a simplistic yet effective approach to train time-series algorithms is through the sliding window technique [ ] , which is also referred to as rolling window (see fig. ). such a technique fixes a window size, which slides over the time axis, predicting a predefined number of future steps, referred to as the stride. some studies on time-series have been using a variant technique known as the expanding sliding window [ , ] . this variant starts with a prefixed window size, which grows as it slides, showing more information to the algorithm as time goes by. regenn holds to the traditional technique, as it is bounded to the tensor weights' dimensions. those dimensions are of a preset size and cannot be effortlessly changed during training, as doing so comes with increased uncertainty from continuously changing the number of internal parameters, such that a conventional neural network optimizer cannot handle it properly. nevertheless, the window size of the sliding window is well known to be a highly sensitive hyperparameter [ , ] ; to avoid an increased number of parameters, we followed a non-tunable approach, in which we set the window size before the experiments, taking into consideration the context of the datasets; such values were even across all window-based trials, including the baselines and ablation.

optimization strategy.
regenn operates on a three-dimensional space shared between samples, time, and variables. in such a space, it carries out a time-based optimization strategy. the training process iterates over the time-axis of the dataset, showing to the network how the variables within a subset of time-series behave as time goes by, and later repeating the process through subsets of different samples. the network's weights are shared among the entire dataset and optimized towards best generalization simultaneously across samples, time, and variables. the dataset t ∈ r^{s×t×v} is sliced along the time axis into training t ∈ r^{s×w×v} and testing t ∈ r^{s×z×v}, with the first w timestamps used for training and the following z timestamps (i.e., the stride) reserved for testing. once the data is sliced, we follow by using a gradient descent-based algorithm to optimize the model. in the scope of this work, we used adam [ ] as the optimizer, as it is the most common optimizer among time-series forecasting problems. as the optimization criterion, we used the mean absolute error (mae), which is a generalization of the support vector regression [ ] with soft-margin criterion, $\min_{w} \frac{1}{2}\|w\|_{F}^{2} + c \sum_{i=1}^{n} (\xi_i + \xi_i^*)$, where w is the set of optimizable parameters, ‖·‖_f is the frobenius norm, and both c and ρ are hyperparameters. the idea, then, is to find w that better fits ⟨y_i, x_i⟩ ∀ i ∈ [1, n], so that all values are in [ρ + ξ_i, ρ + ξ*_i], where ξ_i and ξ*_i are the two farther opposite points in the dataset. a similar formulation on the linear svr implementation for horizon forecasting was presented by lai et al. [ ] . due to the higher dimensionality among the multiple multivariate time-series used in this study, in which we consider the time to be continuous, the problem becomes one of minimizing the absolute deviations between the network output and the ground truth under the same soft-margin criterion, where Ω is the set of internal parameters of regenn, y is the output of the network, and t the ground truth. when disregarding c and setting ρ to zero, we can reduce the problem to the mae loss formulation, $\mathcal{l}(\Omega) = \frac{1}{n} \sum_{i=1}^{n} |y_i - t_i|$. square- and logarithm-based criteria can also be used with regenn. we avoid doing so, as this is a decision that should be made based on each dataset. contrarily, we follow the svr path towards the evaluation of absolute values, which is less sensitive to outliers and enables regenn to be applied on a range of applications.

transfer-learning approach. we adopted a transfer-learning approach to train the network on the sars-cov- dataset that, although different, resembles online deep learning [ ] . the idea is to train the network on incremental slices of the time-axis, such that the pre-trained weights of a previous slice are used to initialize the weights of the network in the next slice (see fig. ). the purpose of this technique is not only to achieve better performance towards the network but also to show that regenn is useful throughout the pandemic. slice hyperparameter adjustment is usually required when transferring the weights from one network to another, mainly the learning rate; for the list of hyperparameters we used, see tab. . besides, we deliberately applied a % dropout on all tensor weights outside the network architecture and before starting the training. the aim behind that decision was to insert randomness in the pipeline and avoid local optima. it is worth mentioning that we did not observe any decrease in performance, but, in some cases, the optimizer's convergence was slower.

baselines algorithms. open-source python libraries provided the time-series and machine-learning algorithms used along with the experiments. time-series algorithms came from statsmodels, while the machine-learning ones came majorly from scikit-learn.
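a hedged sketch of how two of those baseline families might be invoked; the series, lag construction, and forecast lengths are illustrative, and the clipping mirrors the zero-forcing of negative outputs mentioned next.

```python
import numpy as np
from sklearn.svm import LinearSVR
from statsmodels.tsa.holtwinters import ExponentialSmoothing

series = np.abs(np.random.rand(100))  # one univariate series (illustrative)

# statsmodels: one estimator per sample and variable (univariate family)
es_fit = ExponentialSmoothing(series).fit()
es_forecast = np.clip(es_fit.forecast(14), 0, None)  # negatives forced to zero

# scikit-learn: lagged timestamps as features, as in the multitask framing
lags = np.stack([series[i:i + 10] for i in range(80)])
target = series[10:90]
svr = LinearSVR(max_iter=10000).fit(lags, target)
svr_forecast = np.clip(svr.predict(lags[-1:]), 0, None)
```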
Baseline algorithms. Open-source Python libraries provided the time-series and machine-learning algorithms used in the experiments: the time-series algorithms came from statsmodels, while the machine-learning ones came mostly from scikit-learn. Further algorithms, such as XGBoost, LightGBM, and CatBoost, have proprietary open-source implementations, which were preferred over the others. We used the default hyperparameters in all experiments, performing no fine-tuning. However, because all the datasets we tested are strictly positive, we forced all negative outputs to zero, as a ReLU activation function would do. A list with the names of the algorithms tested in the experiments is provided in the table; it contains more algorithms than we report in the main paper, because we list all algorithms, even the ones removed from the pipeline for being incapable of working with the input data and yielding exceptions.

[Tables: hyperparameters for the ablation tests, and detailed results for the first three slices of the SARS-CoV-2 dataset. Legend: algorithms with the best performance are in bold, those marked "-" yielded exceptions, and those marked "***" were suppressed due to poor performance. The numerical values are not recoverable.]
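The ReLU-like post-processing of the baselines' outputs might look as follows; the regression model is an arbitrary stand-in and the data are synthetic.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.random.rand(100, 4)
y = np.random.rand(100)                 # strictly positive targets, as in our datasets

pred = LinearRegression().fit(X, y).predict(X)
pred = np.maximum(pred, 0.0)            # ReLU-like clipping: negative outputs become zero
```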
References (titles as recovered, duplicates removed):
Effects of climate and land-use changes on fish catches across lakes at a global scale
Rising river flows throughout the twenty-first century in two Himalayan glacierized watersheds
The impact of climate change and glacier mass loss on the hydrology in the Mont-Blanc massif
Stock market prediction using optimized deep-ConvLSTM model
Stock market analysis using candlestick regression and market trend prediction (CKRM)
Robust Landsat-based crop time series modelling
Continuous monitoring of land disturbance based on Landsat time series
Generic and scalable framework for automated time-series anomaly detection
Multivariate time series anomaly detection: a framework of hidden Markov models
A review on time series forecasting techniques for building energy consumption
Multi-layer representation learning for medical concepts
Temporal phenotyping of medically complex children via PARAFAC tensor factorization
Patient trajectory prediction in the MIMIC-III dataset, challenges and pitfalls
TASTE: temporal and static tensor factorization for phenotyping electronic health records
An interpretable mortality prediction model for COVID-19 patients
On the responsible use of digital data to tackle the COVID-19 pandemic
Temporal dynamics in viral shedding and transmissibility of COVID-19
Temporal aggregation of univariate and multivariate time series models: a survey
Artificial neural networks: a tutorial
A simulation study of artificial neural networks for nonlinear time-series forecasting
Modeling long- and short-term temporal patterns with deep neural networks
Empirical evaluation of gated recurrent neural networks on sequence modeling
DSANet: dual self-attention network for multivariate time series forecasting
DSTP-RNN: a dual-stage two-phase attention-based recurrent neural network for long-term and multivariate time series prediction
Attention is all you need
Towards better forecasting by fusing near and distant future visions
Gradient-based learning applied to document recognition
Long short-term memory
Recurrent neural networks for multivariate time series with missing values
GeoMAN: multi-level attention networks for geo-sensory time series prediction (IJCAI, International Joint Conferences on Artificial Intelligence Organization)
Semi-supervised classification with graph convolutional networks
T-GCN: a temporal graph convolutional network for traffic prediction
Spatio-temporal graph convolutional networks: a deep learning framework for traffic forecasting
A dual-stage attention-based recurrent neural network for time series prediction
Multivariate time series forecasting via attention-based encoder-decoder framework
Dropout: a simple way to prevent neural networks from overfitting
Highway networks
Deep residual learning for image recognition
Layer normalization
Adam: a method for stochastic optimization
Support vector method for function approximation, regression estimation and signal processing
Online deep learning: learning deep neural networks on the fly
An interactive web-based dashboard to track COVID-19 in real time
Predicting in-hospital mortality of ICU patients: the PhysioNet/Computing in Cardiology Challenge
Segmenting time series: a survey and novel approach
Input window size and neural network predictors
Time series prediction and neural networks
Scikit-learn: machine learning in Python
XGBoost: a scalable tree boosting system
CatBoost: unbiased boosting with categorical features
Finding structure in time
Introduction to data mining
Picture fuzzy time series: defining, modeling and creating a new forecasting method
Machine learning time series regressions with an application to nowcasting

key: cord- -pfx eh b; authors: Sotolongo-Costa, Oscar; Weberszpil, José; Sotolongo-Grau, Oscar; title: A fractal viewpoint to COVID-19 infection; journal: nan; doi: nan; cord_uid: pfx eh b

One of the central tools to control the COVID-19 pandemic is knowledge of its spreading dynamics. Here we develop a fractal model capable of describing this dynamics in terms of daily new cases, and we provide quantitative criteria for some predictions. We propose a fractal dynamical model using the conformable derivative and a fractal time scale. A Burr-XII-shaped solution of the fractal-like equation is obtained. The model is tested using data from several countries, showing that a single function is able to describe very different shapes of the outbreak. The diverse behavior of the outbreak in those countries is presented and discussed. Moreover, a criterion to determine the existence of the pandemic peak and an expression to find the time to reach herd immunity are also obtained.

The worldwide pandemic provoked by the SARS-CoV-2 coronavirus outbreak has attracted the attention of the scientific community due to, among other features, its fast spread. Its strong contamination capacity has created a fast-growing population of people enduring COVID-19, its related disease, and a non-negligible peak of mortality.
The temporal evolution of contagion across different countries and worldwide exhibits a common dynamic characteristic, in particular a fast rise to a maximum followed by a slow decrease (incidentally, very similar to other epidemic processes), suggesting some kind of relaxation process. Relaxation is, essentially, a process in which the parameters characterizing a system are altered, followed by a tendency toward equilibrium values. In physics, clear examples are, among others, dielectric or mechanical relaxation; in other fields (psychology, economics, etc.) there are also phenomena in which an analogy with "common" relaxation can be established. In relaxation, the temporal behavior of the parameters is of central methodological interest, which is why the pandemic can be conceived as a process in which this behavior is also present. For this reason we are interested, despite the existence of statistical and dynamical-systems methods, in introducing a phenomenological equation containing parameters that reflect the system's behavior, from which its dynamics emerges.

We are interested in studying the daily new cases, not the current cases by day. This must be noted to avoid confusion in the interpretation: we study not the cumulative number of infected patients reported in databases, but its derivative. The relaxation process is, for us, a scenario that, by analogy, serves to model the dynamics of the pandemic. This is not an ordinary process: because of the concurrence of many factors that make its study very complex, its description must turn to a non-classical one. We will therefore consider the dynamics of this pandemic to be described by a "fractal" or internal time [ ]. The network formed by people in their daily activity forms a complex field of links that is very difficult, if not impossible, to describe. However, we can take a simplified model in which all the nodes belong to a small-world network, but the transmission time from one node to another differs for each link. In order to study this process, we thus assume that the spread occurs in "fractal time" or internal time [ , ]. This is not a new tool in physics: in refs. [ - ] this concept has been successfully introduced, and here we keep in mind the possibility of a fractal-like kinetics [ ], generalizing it as a nonlinear kinetic process. We follow what we refer to as a "relaxation-like" approach to model the dynamics of the pandemic, which justifies the fractal time: by analogy with relaxation, an anomalous relaxation, we build up a simple nonlinear equation with fractal time. We also regain the analytical results using a deformed-derivative approach, the conformable derivative (CD) [ ]. In ref. [ ], one of the authors (J.W.) showed the intimate relation of this derivative with complex systems and non-additive statistical mechanics; this was done without resorting to the details of any specific entropy definition.

Our article is outlined as follows. We first present the fractal model, formulated in terms of conformable derivatives, and develop the relevant expressions used to adjust the COVID-19 data. We then show the results and figures referring to the data fitting, along with discussions. Finally, we cast our general conclusions and possible paths for further investigation.

Let f(t) denote the number of contagions up to time t. The CD is defined as [ ]

$$D^{\alpha} f(t) = \lim_{\varepsilon \to 0} \frac{f\!\left(t + \varepsilon\, t^{\,1-\alpha}\right) - f(t)}{\varepsilon}.$$

Note that the deformation is placed in the independent variable.
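A quick numerical sanity check of this definition, using the property (derived next) that for a differentiable f the CD reduces to $t^{1-\alpha} f'(t)$; the test function and step size are our own illustrative choices.

```python
import numpy as np

def conformable_derivative(f, t, alpha, eps=1e-6):
    """Finite-difference approximation of D^alpha f(t) = lim (f(t + e*t**(1-alpha)) - f(t)) / e."""
    return (f(t + eps * t ** (1.0 - alpha)) - f(t)) / eps

f = np.exp
t, alpha = 2.0, 0.7
numeric = conformable_derivative(f, t, alpha)
closed_form = t ** (1.0 - alpha) * np.exp(t)   # t^{1-alpha} f'(t) for differentiable f
print(numeric, closed_form)                    # the two agree to high precision
```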
For differentiable functions, the CD can be written as

$$D^{\alpha} f(t) = t^{\,1-\alpha}\, \frac{df}{dt}.$$

An important point to notice here is that the deformations affect different functional spaces, depending on the problem under consideration. For the conformable derivative [ ], the deformations are put in the independent variable, which can be a space coordinate, in the case of, e.g., position-dependent-mass problems, or the time or spacetime variables, for temporally dependent parameters or relativistic problems. Since we are dealing with a complex system, a search for a mathematical approach that can take some fractality or hidden variables into account seems adequate; this idea is also based on the fact that we do not have full information about the system under study. In this case, deformed derivatives with fractal time seem a good option for dealing with this kind of system. Deformed derivatives are present in, and connected to, the context of generalized statistical mechanics [ ], where the authors also showed that the q-deformed derivative has a dual derivative and a related q-exponential function [ ]. Here, in the case under study, the deformation is considered for the solution space, or dependent variable, that is, the number f(t) of contagions up to time t. One should also consider that the justification for the use of deformed derivatives finds its physical basis in the mapping into the fractal continuum [ , - ]: one considers a mapping from a fractal coarse-grained (fractal, porous) space, which is essentially discontinuous in the embedding Euclidean space, to a continuous one [ ]. In our case the fractality lies in the temporal variable, so the CD is taken with respect to time.

A nonlinear relaxation model can be proposed here, again based on a generalization of the Brouers-Sotolongo fractal kinetic model (BSF) [ , , ], but here represented by a nonlinear equation written in terms of the CD, of the form

$$D^{\alpha} f(t) = -\frac{f^{\,q}(t)}{\tau^{\alpha}},$$

where τ is our "relaxation time" and q and α are real parameters; we do not impose any limits on these parameters. This equation has as its well-known solution a function with the shape of a Burr-XII distribution [ ], and the corresponding density (similar in form to a PDF, though here it is not one) follows by differentiation. For data-adjustment purposes the density can be expressed in a simpler form in which a, b, c, A and B appear as parameters to simplify the fitting, although the true adjustment constants are, clearly, q, τ and α. This is very similar, though not equal, to the function proposed by Tsallis [ , ] in an ad hoc way; here, however, a physical representation by the method of analogy is proposed to describe the evolution of the pandemic. Note that we impose no restrictive values on the parameters, and there is no need to demand that the solution always converge. The derivation of the Burr-XII distribution has to impose restrictions, but this is not the case here: in Burr XII the function is used as a probability distribution, whereas here it describes a dynamic, which can be explosive, as will be shown for the curves of Brazil and Mexico. Therefore, if we consider an infinite population, a peak will never be reached unless the circumstances change (treatments, vaccines, isolation, etc.); our model does not impose finiteness on the solution.
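A hedged sketch of the data-adjustment step, assuming a scaled Burr-XII density as the fitted form; the exact parameterization used in the paper is elided, so the function below is a plausible stand-in and the data are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def daily_cases(t, A, c, k, tau):
    """Scaled Burr-XII density: a hypothesized stand-in for the paper's fitting form."""
    u = (t / tau) ** c
    return A * c * k * u / (t * (1.0 + u) ** (k + 1.0))

t = np.arange(1, 151, dtype=float)                # days since the outbreak onset
observed = daily_cases(t, 5e4, 2.5, 1.2, 60.0)    # synthetic "daily new cases"
observed += np.random.normal(0, 50, t.size)       # observation noise

params, _ = curve_fit(daily_cases, t, observed,
                      p0=(1e4, 2.0, 1.0, 50.0), maxfev=20000)
```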
In this model, the possibility of a decay of the pandemic in a given region requires the fulfillment of a peak condition expressing the property that the function has a local maximum. If this condition is not accomplished, the pandemic does not have a peak and, therefore, the number of cases increases forever in this model. In this case there is, apart from a change of the propagation and development conditions, the possibility for a given country that does not satisfy the peak condition to reach "herd immunity", i.e., the state in which the number of contagions has reached a sufficiently large share of the population; in that case we may calculate the time t_hi to reach such a state from the fitted solution. We will also work with a second characteristic time, which we refer to here as t_r, the time for the daily infection rate to reach a given level; it seems to make more sense and bring more information.

With the fitted expression, let us adjust the data of the epidemic worldwide. The data were extracted from Johns Hopkins University [ ] and from a companion website [ ] used to process the data for several countries. We covered the infected cases from a January starting date, taken as the first day, up to June. The behavior of new daily infections is shown in the figure; the fitting was made with gnuplot. As it seems, the worldwide pandemic shows some sort of "plateau", so the present measures of prevention are not able to eliminate the propagation of the infection in the short term, but it can be seen that the peak condition is weakly fulfilled (fitting parameters in Table I).

In the particular case of Mexico, the peak condition is not fulfilled: in terms of our model, this means that the peak is not predictable within the present dynamics. Something similar occurs with Brazil, whose data do not fulfill the peak condition either; there is no forecast of a peak, and the data for Mexico and Brazil reveal a dynamics in which the peak, if it exists, seems to be quite far away. But there are some illustrative cases where the peak is reached: the progression of the outbreak in Cuba and Iceland satisfies the peak condition in both countries, and the curve of the infection rate descends at a good speed after passing the peak. Now consider the United States data: the USA outbreak is characterized by very fast growth until the peak and then an evidently very slow decay of the infection rate; as discussed above, the outbreak will take an almost infinite time to be controlled in this dynamics. There are also some intermediate cases, such as Spain and Italy, whose data exhibit the same behavior as the USA, a fast initial growth and a very slow decay after the peak, but where the outbreak is controlled in a finite amount of time.

In Table I we present the relevant fitting parameters, including the herd-immunity time t_hi and the time t_r to reach a given rate of daily infections (the latter for the countries that have not reached the epidemic peak, Mexico and Brazil), together with the population P of each country. [Figure captions, recoverable information only: daily infections fitted by the model for the world, Mexico (peak condition not satisfied), Brazil, Cuba, Iceland, the USA (where the peak looks already surpassed and the behavior is well described by the fitted expression), Spain, and Italy (peak condition satisfied); fitting parameters in Table I.]

But let us briefly comment on herd immunity. Those countries that have managed to stop the outbreak, even with relatively high mortality, such as Spain and Italy, will not reach herd immunity; as a matter of fact, t_hi cannot even be calculated for those countries.
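A sketch of the herd-immunity time computation under the same assumed Burr-XII form as above; the cumulative curve and the threshold fraction (0.7 below) are illustrative assumptions, since the paper's threshold value is elided.

```python
import numpy as np
from scipy.optimize import brentq

def cumulative_cases(t, A, c, k, tau):
    """Integral of the Burr-XII-shaped rate above: A times the Burr-XII CDF."""
    return A * (1.0 - (1.0 + (t / tau) ** c) ** (-k))

def herd_immunity_time(A, c, k, tau, population, threshold=0.7):
    target = threshold * population
    if A <= target:                  # the curve never reaches the threshold
        return np.inf                # e.g. the USA case discussed in the text
    return brentq(lambda t: cumulative_cases(t, A, c, k, tau) - target, 1e-9, 1e9)

print(herd_immunity_time(A=8e7, c=2.5, k=1.2, tau=60.0, population=1e8))
```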
Then we can see countries like Brazil where, if the way of dealing with the outbreak does not change, herd immunity will be reached. Even when it seems desirable, the ability to reach herd immunity brings a high payload with it: for a country like Brazil, herd immunity would exact a toll of millions of infected people, much the same as if a non-small war devastated the country. There is a similar scenario in Mexico, but the difference there is that the value of t_hi is so high that SARS-CoV-2 could even turn into a seasonal virus, at least for some years; we can expect roughly the same mortality, but scattered over a few years. The USA deserves a special observation, since there t_hi tends to infinity: we can expect a continuous infection rate for a very long time. The outbreak is controlled, but not enough to eradicate the virus; the virus will not disappear for several years, but maybe the healthcare system can manage it. The virus will become endemic, and immunity will never be reached; however, the associated infection and mortality rates can, hypothetically, be small compared with Mexico and Brazil.

We can also compare the speed of the outbreak in different countries. As already said, Table I reports t_r for some countries; it should be noticed that this time is not calculated from the first day, which is always in January, but from the approximate day when the outbreak began in the corresponding country. For example, Brazil had no cases in January; the first cases were detected around March, so both the data fitting and t_r were calculated from March onward.

In this work, for the first time, we presented a model built by the method of analogy, in this case with a nonlinear relaxation-like behavior. With it, a good fit to the observed behavior of the daily number of cases over time is obtained. The explicit expressions obtained may be used as a tool to approximately forecast the development of the COVID-19 pandemic in different countries and worldwide; in principle, this model can be used as a help in elaborating or changing actions. The model does not incorporate any property particular to this pandemic, so we think it could be used to study pandemics with different sources: with the data collected at the early times of a pandemic, the model can predict the possibility of a peak, indefinite growth, the time to herd immunity, and so on. What seems clear from the COVID-19 data, the fitting, and the values shown in Table I is that SARS-CoV-2 is far from being controlled at the world level. Even when some countries appear to control the outbreak, the virus is still a menace to their health systems; furthermore, in the nowadays interconnected world, it is impossible for any country to keep closed borders and pay attention only to what happens inside. All isolation measures must be halted at some time, and we can then expect new outbreaks in countries like Spain, or herd immunity. Indeed, the model made it possible to make an approximate forecast of the time to reach herd immunity, which may be useful in the design of actions and policies regarding the pandemic. We have also introduced t_r, which gives information about the early infection behavior in populous countries. A possible improvement of this model is the formal inclusion of a formulation using the dual conformable derivative [ , ]; this will be published elsewhere.

[Reference fragment, recoverable information only: Proceedings of the IEEE Conference on Electrical Insulation and Dielectric Phenomena (CEIDP).]

We acknowledge Dr.
Carlos Trallero-Giner for helpful comments and suggestions. The authors declare that they have no conflict of interest.

key: cord- -yay kq; authors: Sun, Chenxi; Hong, Shenda; Song, Moxian; Li, Hongyan; title: A review of deep learning methods for irregularly sampled medical time series data; journal: nan; doi: nan; cord_uid: yay kq

Irregularly sampled time series (ISTS) data has irregular temporal intervals between observations and different sampling rates between sequences. ISTS commonly appears in healthcare, economics, and geoscience. Especially in the medical environment, the widely used electronic health records (EHRs) contain abundant typical irregularly sampled medical time series (ISMTS) data. Developing deep learning methods on EHRs data is critical for personalized treatment, precise diagnosis and medical management. However, it is challenging to directly use deep learning models on ISMTS data. On the one hand, ISMTS data has intra-series and inter-series relations, and both the local and the global structures should be considered. On the other hand, methods should consider the trade-off between task accuracy and model complexity while remaining general and interpretable. So far, many existing works have tried to solve the above problems and have achieved good results. In this paper, we review these deep learning methods from the perspectives of technology and task. Under the technology-driven perspective, we summarize them into two categories: missing data-based methods and raw data-based methods. Under the task-driven perspective, we also summarize them into two categories: data imputation-oriented and downstream task-oriented. For each of them, we point out their advantages and disadvantages. Moreover, we implement some representative methods and compare them on four medical datasets with two tasks. Finally, we discuss the challenges and opportunities in this area.

Time series data are widely used in practical applications, such as health [ ], geoscience [ ], sales [ ] and traffic [ ]. The popularity of time-series prediction, classification and representation has attracted increasing attention, and many efforts have been made to address the problem in the past few years [ , , , ]. The majority of the models assume that the time-series data are even and complete. However, in the real world, time-series observations usually have non-uniform time intervals between successive measurements. Three reasons can cause this characteristic: 1) missing data exist in the time series due to broken sensors, failed data transmissions or damaged storage; 2) the sampling machine itself does not have a constant sampling rate; 3) different time series usually come from different sources that have various sampling rates. We call such data irregularly sampled time series (ISTS) data. ISTS data naturally occur in many real-world domains, such as weather/climate [ ], traffic [ ] and economics [ ]. In the medical environment, irregularly sampled medical time series (ISMTS) is abundant: the widely used electronic health records (EHRs) contain large numbers of ISMTS. EHRs are the real-time, patient-centered digital version of patients' paper charts, and they provide opportunities to develop advanced deep learning methods that improve healthcare services and save more lives by assisting clinicians with diagnosis, prognosis, and treatment [ ].
Many works based on EHRs data have achieved good results, such as mortality risk prediction [ , ], disease prediction [ , , ], concept representation [ , ] and patient typing [ , , ]. Owing to the special characteristics of ISMTS, the most important step is establishing suitable models for it; this is especially challenging in medical settings, and various tasks need different adaptation methods. Data imputation and prediction are the two main tasks: the imputation task is a processing task performed while modeling the data, the prediction task is the downstream task serving the final goal, and the two may be intertwined. Standard techniques, such as mean imputation [ ], singular value decomposition (SVD) [ ] and k-nearest neighbours (KNN) [ ], can impute data, but they still leave a big gap between the imputed and the true data distribution and have no capability for the downstream task, such as mortality prediction. Linear regression (LR) [ ], random forests (RF) [ ] and support vector machines (SVM) [ ] can predict, but fail on ISTS data. State-of-the-art deep learning architectures have been developed to perform not only supervised tasks but also unsupervised ones relating to both the imputation and the prediction task. Recurrent neural networks (RNNs) [ , , ], auto-encoders (AEs) [ , ] and generative adversarial networks (GANs) [ , ] have achieved good performance in medical data imputation and medical prediction thanks to the learning and generalization abilities obtained from their complex nonlinearity; they can carry out the prediction or the imputation task separately, or both at the same time through the splicing of neural-network structures.

Existing deep learning methods embody different understandings of the characteristics of ISMTS data, which we summarize as the missing data-based perspective and the raw data-based perspective. The first perspective [ , , , , ] treats an irregular series as one having missing data and solves the problem through more accurate data calculation. The second perspective [ , , , , ] starts from the structure of the raw data itself and models ISMTS directly by utilizing the irregular time information. Neither view defeats the other; either way, it is necessary to grasp the data relations comprehensively for more effective modeling. We distinguish two relations of ISMTS: intra-series relations (data relations within a time series) and inter-series relations (data relations between different time series). All the existing works model one or both of them; these relate to the local and global structures of the data, introduced in the section on characteristics.

Besides, different EHR datasets may lead to different performance for the same method. For example, the real-world MIMIC-III [ ] and CinC [ ] datasets record multiple different diseases; the records of different diseases have distinct data characteristics, and the prediction results of general methods [ , , , ] vary between the disease subsets. Thus, many existing methods model the records of one specific disease, like sepsis [ ], atrial fibrillation [ , ] and kidney disease [ ], and have improved the prediction accuracy.
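As a reference point for the standard imputation techniques mentioned above (mean imputation and KNN), scikit-learn implementations are one-liners; the toy matrix is illustrative.

```python
import numpy as np
from sklearn.impute import SimpleImputer, KNNImputer

X = np.array([[1.0, np.nan, 3.0],
              [4.0, 5.0, np.nan],
              [7.0, 8.0, 9.0]])

mean_filled = SimpleImputer(strategy="mean").fit_transform(X)   # column-mean imputation
knn_filled = KNNImputer(n_neighbors=2).fit_transform(X)         # KNN-based imputation
```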
The rest of the paper is organized as follows. We first give the basic definitions and abbreviations, then describe the features of ISMTS from two viewpoints, intra-series and inter-series, and then introduce the related works from the technology-driven perspective and the task-driven perspective. In each perspective, we summarize the methods into specific categories and analyze their merits and demerits. We then compare experiments with some of the methods on four medical datasets with two tasks and, finally, raise the challenges and opportunities of modeling ISMTS data and conclude. A summary of abbreviations is given in the table.

A typical EHR dataset consists of a number of patient entries that include demographic information and in-hospital information. The in-hospital information has a hierarchical patient-admission-code form, shown in the figure. Each patient has a number of admission records, as he or she may have been in hospital several times, and the codes comprise diagnoses, lab values and vital-sign measurements. Each record r_i consists of many codes, including a static diagnosis code set d_i and a dynamic vital-sign code set x_i, and each code has a time stamp t.

EHRs contain many ISMTS because of two aspects: 1) the multiple admissions of one patient and 2) the multiple time-series records within one admission. The multiple admission records of each patient have different time stamps: because of health-status dynamics and some unpredictable reasons, a patient will visit the hospital at varying intervals. In the figure's example, the admissions fall in March and July of one year and in February of a later year, so the time interval between the 1st and 2nd admissions is a couple of months, while the interval to the later admission spans years. Each time series in one admission, like blood pressure, also has varying time intervals: as shown for one admission in the figure, the sampling time is not fixed, and different physiological variables are examined at different times due to changes in symptoms. Not every possible test is regularly measured during an admission; when a certain symptom worsens, the corresponding variables are examined more frequently, and when the symptom disappears, they are no longer examined.

Without loss of generality, we only discuss univariate time series; multivariate time series can be modeled in the same way. The definition highlights three important quantities of an ISMTS: the value x, the time t and the time interval δ. Some missing value-based works (introduced below) additionally use a masking vector m ∈ {0, 1} to represent the missing values.
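A minimal sketch of these quantities computed from raw timestamps; the convention that the first interval is zero is an assumption on our part.

```python
import numpy as np

def values_intervals_mask(timestamps, values):
    """Per the definition above: values x, observation times t, elapsed intervals delta.
    The masking vector m marks which aligned points carry an observation (1) or not (0)."""
    t = np.asarray(timestamps, dtype=float)
    x = np.asarray(values, dtype=float)
    delta = np.diff(t, prepend=t[0])        # delta[0] = 0 by convention
    m = (~np.isnan(x)).astype(int)          # NaN encodes a missing value after alignment
    return x, delta, m

x, delta, m = values_intervals_mask([0.0, 1.0, 4.0, 16.0], [5.2, np.nan, 4.8, 5.0])
```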
Characteristics of irregularly sampled medical time series. Medical measurements are frequently correlated both within streams and across streams. For example, the blood-pressure value of a patient at a given time could be correlated with the blood pressure at other times, and it could also have a relation with the heart rate at that given time. We therefore introduce the irregularity of ISMTS in two aspects: 1) intra-series and 2) inter-series.

Intra-series irregularity refers to the irregular time intervals between neighboring observations within a stream. As shown in the figure, a blood-pressure time series can have intervals of one hour, several hours or longer, and the intervals add a time-sparsity factor when the gaps between observations are large [ ]. Two approaches can handle the irregular-interval problem: 1) determining a fixed interval and treating the time points without data as missing data, or 2) directly modeling the time series, treating the irregular time intervals as information. The first approach requires a function to impute the missing data [ , ]; for example, some RNNs [ , , , , , ] can impute sequence data effectively by considering the order dependency. The second approach usually uses the irregular time intervals as inputs; for example, some RNNs [ , ] apply a time decay to the order dependency, which can weaken the relation between neighbors separated by long time intervals.

Inter-series irregularity is mainly reflected in the multiple sampling rates among different time series. As shown in the figure, vital signs such as heart rate (ECG data) have a high sampling rate (in seconds), while lab results such as pH are measured infrequently (in days) [ , ]. Two approaches can handle the multi-sampling-rate problem: 1) considering the data as one multivariate time series, or 2) processing multiple univariate time series separately. The first aligns the variables of the different series along the same dimension and then has to solve the missing-data problem [ ]; the second models the different time series simultaneously and then designs fusion methods [ ].

Numerous related works are capable of modeling ISMTS data; we categorize them from two perspectives, 1) technology-driven and 2) task-driven, and describe each category in detail. Based on the technology-driven view, we divide the existing works into two categories, 1) a missing data-based perspective and 2) a raw data-based perspective; the specific categories are shown in the figure.

The missing data-based perspective regards every time series as having uniform time intervals; the time points without data are considered missing data points. As shown in the figure, converting irregular time intervals to regular ones makes missing data show up. The missing rate r_missing measures the degree of missingness at a given sampling rate r_sampling:

$$r_{missing} = \frac{\#\ \text{of time points with missing data}}{\#\ \text{of time points}}.$$

ISMTS in real-world EHRs have a severe missing-data problem. For example, Luo et al. [ ] gathered statistics on a CinC dataset [ , ]: over time, the maximum missing rate at each timestamp stays consistently high, most variables' missing rates are large, and the mean missing rate is substantial, as shown in the figure. The other three real-world EHR datasets, the MIMIC-III dataset [ ], another CinC dataset [ , ] and a COVID-19 dataset [ ], are also affected by missing data, as shown in the same figure.
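The missing rate defined above can be computed directly; the per-variable variant mirrors the per-variable statistics quoted for the CinC data.

```python
import numpy as np

def missing_rate(X: np.ndarray) -> float:
    """Fraction of time points that carry at least one missing value after
    discretization; X has shape (time points, variables), NaN = missing."""
    return float(np.isnan(X).any(axis=1).mean())

def per_variable_missing_rate(X: np.ndarray) -> np.ndarray:
    """Missing rate of each variable, matching per-variable dataset statistics."""
    return np.isnan(X).mean(axis=0)
```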
the missing values damage temporal dependencies of sequences [ ] and make applying many existing models directly infeasible, such as linear regression [ ] and recurrent neural networks (rnn) [ ] . as shown in figure , because of missing values, the second valley of the blue signal is not observed and cannot be inferred by simply relying on existing basic models [ , ] . but the valley values of blood pressure are significant for icu patients to indicate sepsis [ ] , a leading cause of patient mortality in icu [ ] . thus, missing values have an enormous impact on data quality, resulting in unstable predictions and other unpredictable effects [ ] . many prior efforts have been dedicated to the models that can handle missing values in time series. and they can be divided into two categories: ) two-step approaches and ) end-to-end approaches. two-step approaches ignore or impute missing values and then process downstream tasks based on the preprocessed data. a simple solution is to omit the missing data and perform analysis only on the observed data. but it can result in a large amount of useful data not being available [ ] . the core of these methods is how to impute the missing data. some basic methods are dedicated to filling the values, such as smoothing, interpolation [ ] , and spline [ ] . but they cannot capture the correlation between variables and complex patterns. other methods estimate the missing values by spectral analysis [ ] , kernel methods [ ] , and expectation-maximization (em) algorithm [ ] . however, simple reasoning design and necessary model assumptions make data imputation not accurate. recently, with the vigorous development of deep learning, these methods have higher accuracy than traditional methods. rnns and gans mainly realize the deep learning-based data imputation methods. a substantial literature uses rnns to impute the missing data in ismts. rnns take sequence data as input, recursion occurs in the direction of sequence evolution, and all units are chained together. their special structure endows them with processing sequence data by learning order dynamics. in a rnn, the current state h t is affected by the previous state h t− and the current input x t and is described as rnn can integrate basic methods, such as em [ ] and linear model (lr) [ ] . the methods first estimate the missing values and again uses the re-constructed data streams as inputs to a standard rnn. however, em imputes the missing values by using only the synchronous relationships across data streams (inter-series relations) but not the temporal relationships within streams (intra-series relations). lr interpolates the missing values by using only the temporal relationships within each stream (intra-series relations) but ignoring the relationships across streams (inter-series relations). meanwhile, most of the rnn-based imputation methods, like simple recurrent network (srn) and lstm, which have been proved to be effective to impute medical data by kim et al. [ ] , are also learn an incomplete relation with considering intra-series relations only. chu et al. [ ] have noticed the difference between these two relations in ismts data and designed multi-directional recurrent neural network (m-rnn) for both imputation and interpolation. m-rnn operates forward and backward in the intra-series directions according to an interpolation block and operates across inter-series directions by an imputation block. 
A substantial literature uses RNNs to impute the missing data in ISMTS. RNNs take sequence data as input, recursion occurs along the direction of sequence evolution, and all units are chained together; this special structure endows them with the ability to process sequence data by learning the order dynamics. In an RNN, the current state h_t is affected by the previous state h_{t-1} and the current input x_t, described as

$$h_t = f\left(h_{t-1},\, x_t\right).$$

RNNs can integrate basic methods such as EM [ ] and linear models (LR) [ ]: these methods first estimate the missing values and then use the reconstructed data streams as inputs to a standard RNN. However, EM imputes the missing values by using only the synchronous relationships across data streams (inter-series relations) and not the temporal relationships within streams (intra-series relations), while LR interpolates the missing values by using only the temporal relationships within each stream (intra-series relations) and ignoring the relationships across streams (inter-series relations). Meanwhile, most RNN-based imputation methods, such as the simple recurrent network (SRN) and the LSTM, which have been shown effective for imputing medical data by Kim et al. [ ], also learn an incomplete relation by considering intra-series relations only.

Chu et al. [ ] noticed the difference between these two relations in ISMTS data and designed the multi-directional recurrent neural network (M-RNN) for both imputation and interpolation. M-RNN operates forward and backward in the intra-series direction through an interpolation block and across the inter-series direction through an imputation block. They implemented imputation with a bi-RNN structure, denoted by a function Φ, and interpolation with fully connected layers, denoted by a function Ψ; the final objective is the mean squared error between the real data and the calculated data, expressed over the data values x, the masking m and the time intervals δ defined earlier. A bi-RNN (bidirectional RNN) [ ] is an advanced RNN structure with forward and backward chains: it keeps two hidden states for each time point, one per direction, which are concatenated or summed into the final value at that point. Unlike a basic bi-RNN, the timing of the inputs into M-RNN's hidden layers is lagged in the forward direction and advanced in the backward direction.

However, in M-RNN the relations between missing variables are dropped: the estimated values are treated as constants that cannot be sufficiently updated. To solve this problem, Cao et al. [ ] proposed bidirectional recurrent imputation for time series (BRITS), which predicts missing values through bidirectional recurrent dynamics. In this model, the missing values are regarded as variables of the model graph and receive delayed gradients in both forward and backward directions with consistency constraints, which makes the estimation of the missing values more accurate. BRITS updates the predicted missing data with a combined objective L built from three errors, that of the history-based estimation x̂, the feature-based estimation ẑ and the combined estimation ĉ: it thus models not only the relations between missing and known data but also the relations among the missing data themselves, which M-RNN ignores. BRITS, however, does not take both inter-series and intra-series relations into account, which M-RNN does.

GANs are deep learning models that train generative deep models through an adversarial process [ ]. From the perspective of game theory, GAN training can be seen as a minimax two-player game [ ] between a generator G and a discriminator D with the objective $\min_G \max_D V(D, G)$. Typical GANs, however, require fully observed data during training. In response, Yoon et al. [ ] proposed the generative adversarial imputation nets (GAIN) model. Unlike a standard GAN, its generator receives both noise z and a mask m as input, the masking mechanism making missing data admissible as input, and its discriminator outputs both real and fake components; meanwhile, a hint mechanism gives the discriminator additional information in the form of a hint vector h, and GAIN modifies the basic GAN objective accordingly. To improve GAIN, Camino et al. [ ] used multiple inputs and multiple outputs for the generator and the discriminator, splitting the variables by means of dense layers connected in parallel for each variable. Zhang et al. [ ] designed a Stackelberg GAN based on GAIN to impute missing medical data with computational efficiency; it can generate more diverse imputed values by using multiple generators instead of a single one and by applying the ensemble of the standard GAN losses over all generator pairs.
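A sketch of how GAIN composes its generator input and hint vector; the 0.5 convention for unrevealed hint entries follows the GAIN paper, and the shapes and noise range are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))              # data matrix, missing entries already zero-filled
m = rng.integers(0, 2, size=(4, 3))      # mask: 1 = observed, 0 = missing

z = rng.uniform(0, 0.01, size=x.shape)   # noise injected into the missing slots
g_input = m * x + (1 - m) * z            # the generator sees observed values plus noise

b = rng.integers(0, 2, size=x.shape)     # hint-sampling vector
h = b * m + 0.5 * (1 - b)                # hint vector revealed to the discriminator
```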
The main goal of the above two-step methods is to estimate the missing values in the converted time series of ISMTS (irregularly sampled features recast as missing-data features). In the medical context, however, the ultimate goal is to carry out medical tasks such as mortality prediction [ , ] and patient subtyping [ , , ]. Two separated steps may lead to suboptimal analyses and predictions [ ], as the missing patterns are not effectively explored for the final tasks. Thus, some research has proposed solving the downstream tasks directly rather than first filling in missing values.

End-to-end approaches process the downstream tasks directly while modeling the time series with missing data. The core objective is prediction, classification or clustering; data imputation is an auxiliary task or no task at all. Lipton et al. [ ] demonstrated a simple strategy, using a basic RNN to cope with missing data in sequential inputs, the RNN output providing the final features for prediction. Addressing the task of multilabel classification of diagnoses from clinical time series, they found that RNNs can make remarkable use of binary indicators for missing data, improving AUC and F1 significantly; in their follow-up work [ ] they therefore approached missing data through heuristic imputation while directly modeling missingness as a first-class feature. Similarly, Che et al. [ ] used the RNN idea for direct medical prediction and designed a kind of marking vector as the indicator for missing data, in which the value x, the time interval δ and the masking m jointly impute a missing value x*: missing entries are first replaced with mean values, and a feedback loop then updates the imputed values, which become the input of a standard RNN for prediction. They further proposed GRU-Decay (GRU-D) to model EHRs data for medical predictions with trainable decays, where a decay rate γ weighs the correlation between the missing value x_t and the other data (the previous observation and the empirical mean). In the same research, the authors plotted the Pearson correlation coefficients between the variable missing rates of the MIMIC-III dataset and the labels, observing that the missing rate is correlated with the labels and thereby demonstrating the usefulness of missingness patterns in solving a prediction task.

The above models [ , , , , ], however, are limited to the local information (the empirical mean or the nearest observation) of ISMTS. GRU-D, for example, assumes that a missing variable can be represented as a combination of its corresponding last observed value and the mean value; the global structure and statistics are not directly considered, and the local statistics become unreliable when the data are missing continuously (as shown in the figure) or when the missing rate rises. Tang et al. [ ] realized this problem and designed LGnet, which explores the global and local dependencies simultaneously: GRU-D is used to model the local structure, grasping intra-series relations, while a memory module models the global structure, learning inter-series relations. The memory module G has L rows and captures the global temporal dynamics for missing values together with the variable correlations A; meanwhile, an adversarial training process further enhances the modeling of the global temporal distribution.
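Returning to the decay mechanism that GRU-D (and LGnet's local component) relies on, a minimal sketch with fixed rather than learned parameters; in GRU-D, w and b are trainable.

```python
import numpy as np

def grud_input_decay(x, m, delta, x_last, x_mean, w, b):
    """GRU-D-style input decay: a missing value is replaced by a mixture of the
    last observation and the empirical mean, decaying toward the mean over time."""
    gamma = np.exp(-np.maximum(0.0, w * delta + b))     # decay rate in (0, 1]
    x_hat = gamma * x_last + (1.0 - gamma) * x_mean     # estimate for the missing slot
    return m * x + (1 - m) * x_hat                      # keep observed values as-is

# e.g. heart rate missing for 12 hours: the estimate drifts toward the mean
x_t = grud_input_decay(x=0.0, m=0, delta=12.0, x_last=80.0, x_mean=72.0, w=0.1, b=0.0)
```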
The alternative to processing sequences with missing data after pre-discretizing ISMTS is constructing models that can directly receive ISMTS as input. The intuition of the raw data-based perspective comes from the characteristics of the raw data itself, the intra-series relation and the inter-series relation: the former is reflected in the irregular time intervals between two neighboring observations within one series, the latter in the different sampling rates of different time series. The two subcategories are therefore 1) irregular time interval-based approaches and 2) multi-sampling rate-based approaches.

In the EHR setting, the time lapse between successive elements in a patient record can vary from days to months, which is the irregular-interval characteristic of ISMTS; a better way to handle it is to model the unequally spaced data using the time information directly. Basic RNNs process only uniformly distributed longitudinal data, assuming that the sequences have an equal distribution of time differences, so the design of traditional RNNs may lead to suboptimal performance. T-LSTM therefore applies a memory discount, in coordination with the elapsed time, to capture the irregular temporal dynamics, adjusting the hidden memory c_{t-1} of a basic LSTM to a new state c*_{t-1}. However, when the ISMTS is univariate, T-LSTM is not a completely irregular-interval-based method, and for multivariate ISMTS it has to align the multiple time series and fill the missing data first, where the missing-data problem arises again; the research did not mention the specific filling strategy and used simple interpolation, such as mean values, during preprocessing.

For multivariate ISMTS and the alignment problem, Tan et al. [ ] gave an end-to-end dual-attention time-aware gated recurrent unit (DATA-GRU) to predict patients' mortality risk. DATA-GRU uses a time-aware GRU structure, T-GRU, like that of T-LSTM; besides, the authors give a strategy for the multivariate data-alignment problem. When aligning different time series along multiple dimensions, previous missing-data approaches such as GRU-D [ ] and LGnet [ ] assigned equal weights to observed and imputed data, ignoring the relatively larger unreliability of imputation compared with actual observations; DATA-GRU tackles this difficulty with a novel dual-attention structure, an unreliability-aware attention α_u with a reliability score c and a symptom-aware attention α_s, which jointly considers data quality and medical knowledge. Further, the attention-like structure makes DATA-GRU explainable through interpretable embeddings, an urgently needed property in medical tasks.

Instead of using RNNs to learn the order dynamics in ISMTS, Bahadori et al. [ ] proposed methods for analyzing multivariate clinical time series that are invariant to temporal clustering. The events in EHRs may appear within a single admission together or may disperse over multiple admissions; the authors postulated, for example, that whether a series of blood tests is completed at once or in rapid succession should not alter predictions. They designed a data augmentation technique, temporal coarsening, that exploits temporal-clustering invariance to regularize deep neural networks optimized for clinical prediction tasks, and they proposed a multi-resolution ensemble (MRE) model over the coarsening-transformed inputs to improve predictive accuracy.
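As a concrete illustration of the memory discount used by T-LSTM and DATA-GRU's T-GRU above, a simplified sketch: the real T-LSTM learns the short-term memory subspace, which we replace here with a given vector, and g(delta) = 1/log(e + delta) is one decay suggested for long elapsed times.

```python
import numpy as np

def t_lstm_adjust(c_prev, c_short, delta):
    """Decay only the short-term component of the LSTM memory by the elapsed time."""
    g = 1.0 / np.log(np.e + delta)       # elapsed-time discount, g -> 1 as delta -> 0
    c_long = c_prev - c_short            # long-term part is kept intact
    return c_long + c_short * g          # adjusted memory c*_{t-1}

c_star = t_lstm_adjust(c_prev=np.array([1.0, 0.5]),
                       c_short=np.array([0.4, 0.2]),
                       delta=24.0)       # e.g. 24 hours since the last observation
```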
Only modeling the irregular time intervals of the intra-series relation, however, ignores the multi-sampling-rate, inter-series relation; moreover, modeling the inter-series relation is also a way of considering the global structure of ISMTS. The above RNN-based methods of the irregular time interval-based category consider only the local order dynamics. Although LGnet [ ] integrates global structure, it incorporates the information from all time points into one interpolation model, which is redundant and poorly adaptive. Some models can learn the global structure of time series, such as the basic Kalman filter [ ] and the deep Markov model [ ], but this kind of model mainly processes time series with a stable sampling rate.

Che et al. [ ] focused on the problem of modeling multi-rate multivariate time series and proposed a multi-rate hierarchical deep Markov model (MR-HDMM) for healthcare forecasting and interpolation tasks. MR-HDMM learns a generation model and an inference network with auxiliary connections and learnable switches; the latent hierarchical structure, reflected in the states and switches s, factorizes the joint probability p layer by layer over the latent variables z, and these structures capture the temporal dependencies and the data-generation process. Similarly, Binkowski et al. [ ] presented an autoregressive framework for regression tasks on ISMTS data whose core idea is roughly similar to that of MR-HDMM. However, these methods consider the different sampling rates between series but ignore the irregular time intervals within each series: they process each time series at a stable sampling rate (uniform time intervals) and therefore have to use forward or linear interpolation, where the global structure is omitted again for the sake of uniform intervals.

Gaussian processes can build global interpolation layers for multi-sampling-rate data, a technique used by Li et al. [ ] and Futoma et al. [ ]; but for multivariate time series the covariance functions are challenging due to their complicated and expensive computation. Satya et al. [ ] designed a fully modular interpolation-prediction network (IPN). IPN has an interpolation network that accommodates the complexity of ISMTS data and provides a multi-channel output by modeling three kinds of information, broad trends χ, transients τ and local observation frequencies λ, computed respectively by a low-pass interpolation θ, a high-pass interpolation γ and an intensity function λ. IPN also has a prediction network that operates on the regularly partitioned inputs produced by the interpolation module. In addition to taking care of the data relations from multiple perspectives, IPN makes up for the lack of modularity in [ ] and sidesteps the complexity of the Gaussian-process interpolation layers in [ , ].
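Before moving on, a simplified sketch of IPN's three interpolation channels described above; IPN learns its interpolation parameters, whereas the RBF kernels and bandwidths below are fixed, illustrative stand-ins.

```python
import numpy as np

def ipn_channels(t_obs, x_obs, t_ref, alpha=0.5, kappa=10.0):
    """Interpolate one irregular series onto a regular grid t_ref, producing
    a low-pass trend, a high-pass transient and an observation-intensity channel."""
    def rbf(a):
        return np.exp(-a * (t_ref[:, None] - t_obs[None, :]) ** 2)

    w_low, w_high = rbf(alpha), rbf(kappa * alpha)   # wide vs. narrow kernels
    trend = w_low @ x_obs / w_low.sum(axis=1)        # broad trend (chi)
    local = w_high @ x_obs / w_high.sum(axis=1)
    transient = local - trend                        # transients (tau)
    intensity = w_low.sum(axis=1)                    # observation frequency (lambda)
    return trend, transient, intensity

t_obs, x_obs = np.array([0.0, 2.0, 9.0]), np.array([1.0, 3.0, 2.0])
trend, transient, intensity = ipn_channels(t_obs, x_obs, np.linspace(0, 10, 11))
```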
Modeling ISTS data aims at two main tasks: 1) missing-data imputation and 2) downstream tasks; the specific categories are shown in the figure. Missing-data imputation is of practical significance: as machine learning has become pervasive, obtaining large amounts of complete data has become an important issue, yet in the real world it is almost impossible to get complete data for many reasons, like lost records. In many cases, a time series with missing values becomes useless and is thrown away, which results in a large amount of data loss, and the incomplete data have adverse effects when learning a model [ ]. Basic methods, such as interpolation [ ], kernel methods [ ] and the EM algorithm [ , ], were proposed long ago. With the popularity of deep learning in recent years, most new methods are implemented with artificial neural networks (ANNs). One of the most popular models is the RNN [ ], which can capture long-term temporal dependencies and use them to estimate missing values; existing works [ , , , , , ] have designed special RNN structures to adapt to missingness and have achieved good results. Another popular model is the GAN [ ], which generates plausible fake data through adversarial training and has been successfully applied to face completion and sentence generation [ , , , ]; based on this generative ability, some research [ , , , ] has applied GANs to time-series data generation while incorporating sequence information into the process.

Downstream tasks generally include prediction, classification and clustering. For ISMTS data, medical prediction (such as mortality prediction, disease classification and image classification) [ , , , , ], concept representation [ , ] and patient typing [ , , ] are the three main tasks. Downstream task-oriented methods calculate missing values and perform the downstream task simultaneously, which is expected to avoid the suboptimal analyses and predictions caused by separating imputation from the final task, since the missing patterns are then not effectively explored [ ]. Most methods [ , , , , , , , ] use deep learning to achieve higher task accuracy.

In this section, we apply the above methods to four datasets and two tasks and analyze the methods through the experimental results. Four datasets were used to evaluate the performance of the baselines. The first CinC dataset [ ] consists of records from thousands of ICU stays, with multivariate clinical time series from adult patients admitted for a wide variety of reasons to cardiac, medical, surgical and trauma ICUs; each record is a multivariate time series of roughly 48 hours containing variables such as albumin, heart rate and glucose. The second CinC dataset [ ] is publicly available and comes from two hospitals; it contains patient admission records, among them records of diagnosed sepsis cases, and is a set of multivariate time series with related features covering several vital signs, laboratory values and demographics; the time interval is one hour, and most sequences are short. The COVID-19 dataset [ ] was collected between January and February from Tongji Hospital of Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China; it contains patients with blood-sample records, split into a training set and a test set with a fixed set of characteristics. The MIMIC-III dataset [ ] completes the four.

The experiments have two tasks: 1) mortality prediction and 2) data imputation. The mortality-prediction task uses the time series of the hours before the onset time from the four datasets; the imputation task uses features selected with the method in [ ], from which a fixed share of the observed measurements is eliminated, the eliminated data serving as the new ground truth. For the RNN-based methods, we fix the dimension of the hidden state; for the GAN-based methods, the series inputs also use an RNN structure; for the final prediction, all methods use two stacked fully connected layers of fixed widths. All methods apply the Adam optimizer [ ] with fixed α, β1 and β2, and we use the learning-rate decay

$$\alpha_{current} = \alpha_{initial} \cdot \gamma^{\,global\_step / decay\_steps}$$

with a fixed decay rate γ and decay step. k-fold cross-validation is used for both tasks.
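The learning-rate schedule above in code form, with illustrative values in place of the elided ones:

```python
def decayed_lr(alpha_initial: float, gamma: float,
               global_step: int, decay_steps: int) -> float:
    """alpha_current = alpha_initial * gamma ** (global_step / decay_steps)."""
    return alpha_initial * gamma ** (global_step / decay_steps)

# e.g. with gamma = 0.5 the rate halves every `decay_steps` optimizer steps
lr = decayed_lr(alpha_initial=1e-3, gamma=0.5, global_step=2000, decay_steps=1000)
```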
[Table: AUC-ROC (mean ± standard deviation) of the mortality-prediction task on the four datasets for the baselines: a base recurrent model [ ] (name not recoverable), LSTM [ ], GRU-D [ ], M-RNN [ ], BRITS [ ], T-LSTM [ ], DATA-GRU [ ], LGnet [ ] and IPN [ ]; the numerical values are not recoverable.]

The prediction results were evaluated by the area under the receiver-operating-characteristic curve (AUC-ROC). The ROC is a curve of the true positive rate (TPR) against the false positive rate (FPR); with TP, TN, FP and FN standing for true positives, true negatives, false positives and false negatives,

$$TPR = \frac{TP}{TP + FN}, \qquad FPR = \frac{FP}{TN + FP}.$$

We evaluate the imputation performance in terms of the mean squared error (MSE) over the N missing values, where for the i-th item x_i is the real value and x̂_i the predicted value:

$$MSE = \frac{1}{N} \sum_{i=1}^{N} \left( x_i - \hat{x}_i \right)^{2}.$$

The first table shows the performance of the baselines on the mortality-prediction task. Of the two technology-driven categories, each has its own merits, but the irregularity-based methods work relatively well: the missing data-based methods account for the smaller share of the top results, while the irregularity-based methods account for the larger share. Regarding whether the two series relations are considered, the methods that take both the inter-series relation and the intra-series relation (both global and local structures) into account perform better; IPN, LGnet and DATA-GRU achieve relatively good results. The methods also behave differently on different datasets: for example, COVID-19 is a small dataset, unlike the other three, and relatively simple methods perform better on it, such as T-LSTM, which does not perform very well on the other three datasets.

The imputation results are better on the sepsis and COVID-19 datasets, perhaps because the time series in these two datasets come from patients suffering from the same disease; that is probably also why these datasets show relatively better results on the prediction task. A further table reports the mortality-prediction performance of a basic RNN model trained on each baseline's imputed data. Unlike in the first table, the RNN-based imputation methods perform better there, taking most of the top results while the GAN-based methods take few; the reason may be that the RNN-based approaches integrate the downstream task while imputing, so the data they generate are more suitable for the final prediction task.
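To recap the evaluation protocol of this section, a minimal sketch computing both metrics with scikit-learn on toy arrays:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, mean_squared_error

y_true = np.array([0, 0, 1, 1])                  # mortality labels
y_score = np.array([0.1, 0.4, 0.35, 0.8])        # predicted risk scores
auc = roc_auc_score(y_true, y_score)             # AUC-ROC for mortality prediction

x_real = np.array([1.0, 2.0, 3.0])               # held-out eliminated measurements
x_imputed = np.array([1.1, 1.9, 3.2])            # values produced by an imputer
mse = mean_squared_error(x_real, x_imputed)      # imputation error
```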
the mortality prediction results show that, of the two categories of technology-driven methods, each has its own merits, but the irregularity-based methods work relatively well: counting the top-ranked results, they take a somewhat larger share than the missing data-based methods. regarding whether the relations between and within series are considered, the methods that take both inter-series and intra-series relations (both global and local structures) into account perform better; ipn, lgnet and data-gru achieve relatively good results. the methods also behave differently on different datasets: for example, as covid-19 is a small dataset, unlike the other three, relatively simple methods perform better on it, such as t-lstm, which does not perform very well on the other three datasets. for the imputation task, performance is better on the sepsis and covid-19 datasets, perhaps because the time series in these two datasets come from patients suffering from the same disease; that is probably also why these datasets show relatively better results in the prediction task. a further table reports a basic rnn model's performance on the mortality prediction task when trained on the baselines' imputed data. in contrast to the earlier results, the rnn-based methods perform better here, taking most of the top results, while the gan-based methods take fewer. the reason may be that the rnn-based approaches integrate the downstream task while imputing, so the data they generate are more suitable for the final prediction task.

according to the analysis of the technologies and the experimental results, we now discuss the ismts modeling task from three perspectives: 1) the imputation task versus the prediction task, 2) intra-series versus inter-series relations (local versus global structure), and 3) missing data versus raw data. the conclusions about the approaches in this survey are summarized in a table. based on the above perspectives, we summarize the challenges as follows.

how to balance imputation with prediction? different kinds of methods suit different tasks: gans prefer imputation while rnns prefer prediction. however, in the medical setting, and depending on the dataset, this conclusion does not always hold. for example, missing data are imputed better by rnns than by gans on the covid-19 dataset, and the two-step gan-based methods are no worse for mortality prediction than using rnns directly. it therefore seems difficult to achieve a single general and effective modeling method in medical settings; the method should be chosen according to the specific task and the characteristics of the dataset.

how to handle the intra-series relations together with the inter-series relations of ismts? in other words, how to trade off the local structure against the global structure. in the ismts format, a patient has several time series of vital signs connected to disease diagnoses or the probability of death. when these time series are seen as a single multivariate sample, intra-series relations are reflected in longitudinal and horizontal dependencies: the longitudinal dependencies include the sequence order and context, the time intervals, and the decay dynamics, while the horizontal dependencies are the relations between different dimensions. the inter-series relations are then reflected in the patterns of the time series across different samples. however, when these time series are instead seen as separate samples from one patient, the relations change: intra-series relations become the dependencies between values observed at different time steps in a univariate ismts, and the features of the varying time intervals must be handled, while inter-series relations become the relations between the patterns of different patients' samples and between different time series of the same vital sign. at the structural level, modeling intra-series relations is basically local, while modeling inter-series relations is global. it is not yet clear which consideration and which structure yield better results: modeling both local and global structures seems to perform better in mortality prediction, but such methods are more complex and not universal across datasets.

how to choose the modeling perspective, missing data-based or irregularity-based? both kinds of methods have advantages and disadvantages. most existing works are missing data-based, and methods for estimating missing data have existed for a long time [ ]. under the missing data-based perspective, the discretization interval length is a hyper-parameter that needs to be determined: if the interval size is large, there are fewer missing values, but several observed values may fall into the same interval; if the interval size is small, the missing data become more numerous. [table: advantages and disadvantages of the approach categories; the recoverable fragments note low applicability for multivariate data and incomplete data relations as drawbacks of discretization, while multi-sampling-rate-based methods [ , , , ] introduce no artificial dependency and need no data imputation, at the cost of implementation complexity and assumptions about the data generation patterns.] no values in an interval hamper the performance, while too many values in an interval require an ad-hoc selection method. meanwhile, missing data-based methods have to interpolate new values, which may artificially introduce dependencies that do not occur naturally; over-imputation may result in an explosion in data size, and the pursuit of multivariate alignment may lead to the loss of raw data dependencies. thus, of particular interest are irregularity-based methods that can learn directly from multivariate, sparse and irregularly sampled time series as input, without the need for separate imputation. however, although raw data-based methods have the merit of introducing no artificial dependencies, they can suffer from failing to achieve the desired results, complex designs, and large numbers of parameters. irregular time intervals-based methods are not complex, as they can be realized by simply injecting time decay information (a minimal sketch of this idea follows below); but on specific tasks, such as mortality prediction, these methods do not seem as good as one might expect (as concluded from the experiments section). meanwhile, for multivariate time series, these methods have to align values across dimensions, which leads to missing data problems again.
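the following sketch illustrates the time-decay idea mentioned above. it is a simplified, hypothetical variant in the spirit of gru-d's decay gate, not the exact implementation of any surveyed method: the hidden state is shrunk according to the time elapsed since the previous observation before the recurrent update.

    import torch
    import torch.nn as nn

    class DecayGRUCell(nn.Module):
        # gru cell whose hidden state is decayed by the elapsed interval
        def __init__(self, input_dim, hidden_dim):
            super().__init__()
            self.cell = nn.GRUCell(input_dim, hidden_dim)
            self.decay = nn.Linear(1, hidden_dim)  # interval -> decay logits

        def forward(self, x_t, delta_t, h_prev):
            # gamma in (0, 1]: longer gaps shrink the old hidden state more
            gamma = torch.exp(-torch.relu(self.decay(delta_t)))
            return self.cell(x_t, gamma * h_prev)

    # one irregular step with a 2.5-hour gap (toy numbers)
    cell = DecayGRUCell(input_dim=4, hidden_dim=8)
    x_t = torch.randn(1, 4)          # current observation
    delta_t = torch.tensor([[2.5]])  # time since the last observation
    h = torch.zeros(1, 8)
    h = cell(x_t, delta_t, h)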
multi-sampling-rate-based methods do not cause missing data. however, processing multiple univariate time series at the same time requires more parameters and is not friendly to batch learning, and modeling an entire univariate series may require assumptions about the data generation model.

considering the complexity of patient states, the amount of intervention and the real-time requirements, data-driven approaches that learn from ehrs are the desiderata for helping clinicians. although some difficulties have not yet been solved, deep learning methods do show a better ability to model medical ismts data than the basic methods. the basic methods cannot model ismts completely: interpolation-based methods [ , ] exploit only the correlation within each series, imputation-based methods [ , ] exploit only the correlation among different series, and matrix completion-based methods [ , ] assume that the data are static and ignore the temporal component. deep learning methods learn data structures through parameter training, and many basic methods can be integrated into the design of neural networks. the deep learning methods introduced in this survey largely overcome the problems of the common methods and have achieved state-of-the-art results in medical prediction tasks, including mortality prediction, disease prediction, and admission stay prediction. therefore, deep learning models based on ismts data have broad prospects in medical tasks.

the deep learning methods, both the rnn-based and the gan-based methods mentioned in this survey, are troubled by poor interpretability [ , ], and clinical settings prefer interpretable models. although this defect is difficult to resolve because of the models' characteristics, some researchers have made breakthroughs and progress; for example, the attention-like structures used in [ , ] can provide an explanation for medical predictions.

this survey introduced a kind of data, irregularly sampled medical time series (ismts). combined with medical settings, we described the characteristics of ismts. we then investigated the relevant methods for modeling ismts data and classified them from a technology-driven perspective and a task-driven perspective. for each category, we divided the subcategories in detail and described each specific model's implementation. meanwhile, based on the imputation and prediction experiments, we analyzed the advantages and disadvantages of several methods and drew conclusions. finally, we summarized the challenges and opportunities of the ismts modeling task.
recurrent neural networks for multivariate time series with missing values
convolutional lstm network: a machine learning approach for precipitation nowcasting
restful: resolution-aware forecasting of behavioral time series data
tensorized lstm with adaptive shared memory for learning trends in multivariate time series
clustering and classification for time series data in visual analytics: a survey
time2graph: revisiting time series modeling with dynamic shapelets
adversarial unsupervised representation learning for activity time-series
revisiting spatial-temporal similarity: a deep learning framework for traffic prediction
deep ehr: a survey of recent advances in deep learning techniques for electronic health record (ehr) analysis
predicting in-hospital mortality of icu patients: the physionet/computing in cardiology challenge
holmes: health online model ensemble serving for deep learning models in intensive care units
dipole: diagnosis prediction in healthcare via attention-based bidirectional recurrent neural networks
learning to diagnose with lstm recurrent neural networks
retain: an interpretable predictive model for healthcare using reverse time attention mechanism
multi-layer representation learning for medical concepts
mime: multilevel medical embedding of electronic health records for predictive healthcare
patient subtyping via time-aware lstm networks
deep computational phenotyping
a survey of methodologies for the treatment of missing values within datasets: limitations and benefits
singular value decomposition and least squares solutions
an efficient nearest neighbor classifier algorithm based on pre-classify. computer science
simple linear regression in medical research
predicting disease risks from highly imbalanced data using random forest
a modified svm classifier based on rs in medical disease prediction
alzheimer's disease neuroimaging initiative. rnn-based longitudinal analysis for diagnosis of alzheimer's disease
estimating brain connectivity with varying-length time lags using a recurrent neural network
on clinical event prediction in patient treatment trajectory using longitudinal electronic health records
bidirectional recurrent auto-encoder for photoplethysmogram denoising
a deep learning method based on hybrid auto-encoder model
research and application progress of generative adversarial networks
an accurate saliency prediction method based on generative adversarial networks
joint modeling of local and global temporal dynamics for multivariate time series forecasting with missing values
directly modeling missing data in sequences with rnns: improved classification of clinical time series
brits: bidirectional recurrent imputation for time series
recurrent neural networks with missing information imputation for medical examination data prediction
data-gru: dual-attention time-aware gated recurrent unit for irregular multivariate time series
temporal-clustering invariance in irregular healthcare time series. corr, abs
interpolation-prediction networks for irregularly sampled time series
hierarchical deep generative models for multi-rate multivariate time series
mimic-iii, a freely accessible critical care database. sci. data
early prediction of sepsis from clinical data: the physionet/computing in cardiology challenge
an intelligent warning model for early prediction of cardiac arrest in sepsis patients
k-margin-based residual-convolution-recurrent neural network for atrial fibrillation detection
opportunities and challenges of deep learning methods for electrocardiogram data: a systematic review
risk prediction for chronic kidney disease progression using heterogeneous electronic health record data and time series analysis
learning from irregularly-sampled time series: a missing data perspective. corr, abs
time series analysis: forecasting and control
forecasting in multivariate irregularly sampled time series with missing values. corr, abs
estimating missing data in temporal data streams using multi-directional recurrent neural networks
long short-term memory
empirical evaluation of gated recurrent neural networks on sequence modeling
temporal belief memory: imputing missing data during rnn training
survey of clinical data mining applications on big data in health informatics
analysis of incomplete and inconsistent clinical survey data
modeling irregularly sampled clinical time series
multivariate time series imputation with generative adversarial networks
physiobank, physiotoolkit, and physionet: components of a new research resource for complex physiologic signals
early prediction of sepsis from clinical data - the physionet computing in cardiology challenge
an interpretable mortality prediction model for covid-19 patients
ua-crnn: uncertainty-aware convolutional recurrent neural network for mortality risk prediction
a hybrid residual network and long short-term memory method for peptic ulcer bleeding mortality prediction
raim: recurrent attentive and intensive model of multimodal patient monitoring data
linear regression with censored data
a learning algorithm for continually running fully recurrent neural networks
arterial blood pressure during early sepsis and outcome
hospital deaths in patients with sepsis from independent cohorts
data cleaning: overview and emerging challenges
the effects of the irregular sample and missing data in time series analysis
wavelet methods for time series analysis. (book reviews)
comparison of correlation analysis techniques for irregularly sampled time series
multiple imputation using chained equations. issues and guidance for practice
pattern classification with missing data: a review. neural computing and applications
a solution for missing data in recurrent neural networks with an application to blood glucose prediction
speech recognition with missing data using recurrent neural nets
framewise phoneme classification with bidirectional lstm and other neural network architectures
a survey of missing data imputation using generative adversarial networks
stable and improved generative adversarial nets (gans): a constructive survey
gain: missing data imputation using generative adversarial nets
improving missing data imputation with deep generative models. corr, abs
medical missing data imputation by stackelberg gan
strategies for handling missing data in electronic health record derived data
kalman filtering and neural networks
hidden markov and other models for discrete-valued time series
autoregressive convolutional neural networks for asynchronous time series
a scalable end-to-end gaussian process adapter for irregularly sampled time series classification
learning to detect sepsis with a multitask gaussian process rnn classifier
doctor ai: predicting clinical events via recurrent neural networks
generative face completion
generative adversarial nets
ambientgan: generative models from lossy measurements
approximation and convergence properties of generative adversarial learning
seqgan: sequence generative adversarial nets with policy gradient
learning from incomplete data with generative adversarial networks
adam: a method for stochastic optimization
a study of handling missing data methods for big data
multiple imputation for nonresponse in surveys
spectral regularization algorithms for learning large incomplete matrices
temporal regularized matrix factorization for high-dimensional time series prediction
interpretable machine learning: a guide for making black box models explainable. online
interpretability of machine learning-based prediction models in healthcare
iterative robust semi-supervised missing data imputation
medical time-series data generation using generative adversarial networks
unsupervised online anomaly detection on irregularly sampled or missing valued time-series data using lstm networks. corr, abs
kernels for time series with irregularly-spaced multivariate observations. corr, abs
timeautoml: autonomous representation learning for multivariate irregularly sampled time series
a distributed descriptor characterizing structural irregularity of eeg time series for epileptic seizure detection
a bio-statistical mining approach for classifying multivariate clinical time series data observed at irregular intervals
automatic classification of irregularly sampled time series with unequal lengths: a case study on estimated glomerular filtration rate
mcpl-based ft-lstm: medical representation learning-based clinical prediction model for time series events
a comparison between discrete and continuous time bayesian networks in learning from clinical time series data with irregularity
multi-resolution networks for flexible irregular time series modeling (multi-fit)

key: cord- -hi rkp l authors: zhang, shu-ning; li, yong-quan; liu, chih-hsing; ruan, wen-qi title: a study on china's time-honored catering brands: achieving new inheritance of traditional brands date: - - journal: journal of retailing and consumer services doi: . /j.jretconser. . sha: doc_id: cord_uid: hi rkp l

the time-honored brand is the best brand retained from centuries of business and handicraft competition, representing inestimable brand, economic and cultural value. however, it has encountered the issue of inheritance in the new era. to address this issue, in view of the critical role of customer word-of-mouth (wom) in brand inheritance and reputation, this study constructed and examined the wom path of time-honored catering brands by surveying customers. its conclusions highlight brand authenticity as a positive antecedent of customers' in-person wom and ewom. the path is shaped by the mediation mechanisms of response (awakening of interest), cognition (brand experience) and affection (brand identification).
moreover, the interaction between creative performance and brand authenticity can positively promote customers' brand experience. however, cultural proximity plays different roles in the stages of customers' brand attitudes and behaviors. the results provide managerial implications for how to promote the sustainable inheritance of traditional brands.

around the world, many traditional brands are struggling with their own decline (li et al., ). similarly, in china, the time-honored brand is the best brand to survive centuries of business and handicraft competition after generations of inheritance, showing irreplaceable value. unfortunately, most traditional brands won in history but have lost today. a survey by the ministry of commerce of the people's republic of china showed that the number of china time-honored brands has dropped sharply (li et al., ); only a small share are thriving, and many time-honored brands' survival and growth are at serious risk. especially with the explosive growth of modern restaurants, traditional catering brands are experiencing unprecedentedly fierce market competition and challenges (cheng et al., ; troiville et al., ; phung et al., ; koh et al., ). how traditional catering brands can survive and be inherited today is an urgent issue, one related to the sustainable development of traditional excellent brands, which are a common treasure of mankind.

the customer perspective is critical for evaluating the development of catering companies. the traditional catering literature has explored consumer decisions, intention, satisfaction, service experience, brand image, etc. (sürücü et al., ; halim and hamed, ; stanujkic et al., ; tan and chang, ). however, previous conclusions mainly highlight customers' responses to the consumption process and pay little attention to the future survival of traditional catering (chen and huang, ). importantly, brand inheritance is mainly influenced by consumer word-of-mouth (wom). the connotation and development of the time-honored catering brand indicate that it has historically relied on customers' in-person wom to earn social reputation and loyalty (he, ; tian et al., ). today, information technology and social media are advancing rapidly, and electronic word-of-mouth (ewom), unconstrained by time and space, has an even wider promotional effect. accordingly, customer wom exerts greater influence than traditional marketing tools such as enterprise publicity (steven podoshen, ); customer wom can change consumers' brand attitudes, brand behaviors, and brand choices (chu and kim, ; east et al., ; séraphin et al., ). therefore, an excellent means of exploring the inheritance path of time-honored catering brands is to examine how the customer wom path is formed. this new attempt can fill the theoretical gaps in traditional catering inheritance and provide managerial implications for its sustainable development in the new era.

exploring the inheritance of time-honored brands in effect means constructing the formation path of customer wom. several issues need to be addressed. first, what is the guiding factor in customers' wom about time-honored brands? different from fast food and creative restaurants, the essence and evaluation standards of the time-honored brand depend on recipe originality, craftsmanship, historical culture, brand spirit and so forth (li, ). inheritance points to brand authenticity as the core element of time-honored catering brands.
in other words, brand authenticity represents a symbol of the unique chinese catering culture (tsai and lu, ; mody and hanks, ); it is also regarded as a brand symbol and an attraction that determines customers' consumption motivation (moulard et al., ). therefore, this study asserts that brand authenticity is the critical leading factor in the inheritance of time-honored chinese catering brands. second, how does the authenticity of time-honored brands promote customers' wom? few scholars have attempted to examine the relationship between brand authenticity and customer wom (dipietro and levitt, ), especially in the context of time-honored catering brands. however, stimulus-organism-response (sor) theory provides a theoretical foundation for investigating the wom path of time-honored catering brands (chang et al., ). to this end, this study supposes that brand authenticity, as a brand stimulus, may promote customers' wom in stages: awaken, experience and identify (moulard et al., ; tsai and wang, ; kim et al., ). recently, culture and creativity have come to be considered important development factors (zhang et al., a,b). in the new era, can traditional catering improve the customer experience through creative performance? moreover, time-honored brands have profound cultural heritage: what role do customers' cultural backgrounds play in brand inheritance? this study systematically answers these questions to clarify the inheritance path of traditional catering brands.

the perspectives of wom and creativity provide a new research scheme for the inheritance path of time-honored catering brands. our research not only addresses urgent issues, providing a theoretical path for the brand inheritance of time-honored catering brands and clarifying the specific roles of the influencing factors, but also extends consumer behavior theory (liu and jang, ), brand management theory (hyun, ) and cultural theory (arnould and thompson, ). more importantly, our results are of great practical value, providing specific managerial guidance for the sustainable development of traditional catering brands.

the phrase "time-honored brands" refers to well-recognized brands with unique regional cultural and historical value: traditional well-known brands that have been passed down through several generations and were established before 1956. these brands have long histories of recipes, crafts or services and have won a wide range of social praise (he, ; tian et al., ). because they combine unique traditional historical value with modern brand marketing concepts, time-honored brands have become a popular research topic in academic circles (shang and chen, ). related studies focus on the development of traditional catering brands, including traditional food technology and cooking (li and hsieh, ), protection and innovation (lee, ; koh et al., ) and the business models of traditional restaurants (indrawan et al., ). some references reflect consumers' attitudes toward traditional food, including consumer brand image evaluation (almli et al., ; sürücü et al., ), perception (guerrero et al., ), motivation (wang et al., ), and preferences (balogh et al., ). moreover, china's time-honored catering brands are the most unique and important branch of traditional catering. however, a large number of time-honored catering brands have gradually disappeared from the market (mu, ).
few studies have focused on the realistic and severe problems of inheritance and development that time-honored catering brands face. current research on time-honored catering brands mainly explores two aspects. the first is the importance of brand value and its realistic problems, such as the relationship between the brand equity of time-honored brands and generational transfer (he, ) and discussions of intangible value and brand value (li, ; grace and o'cass, ; sarker et al., ). the second is the exploration of time-honored brand development strategies, including micromarketing strategies (leng, ), brand activation and revitalization (forêt and mazzalovo, ), and brand image design (guo and kwon, ). these studies mainly discuss the development of time-honored brands from the perspective of business management, and few scholars have conducted quantitative research from the perspective of customers. for example, mu ( ) found, by constructing a purchase intention model for time-honored brands, that consumers' nostalgic psychology plays a key positive role in cognition and purchase intention. huang ( ) analyzed the differences between chinese and foreign customers' experiences with time-honored catering brands from a cross-cultural perspective. accordingly, previous studies show the following gaps. (1) previous studies mainly took the perspective of enterprise management, ignoring the fact that customer wom is a critical factor in time-honored brands' ability to earn a reputation and in their historical heritage (tian et al., ); this study is therefore innovative in its research perspective and explores brand inheritance from the perspective of customer wom. (2) current studies do not pay enough attention to the core element of brand authenticity or clearly explore the inheritance path of time-honored brands from the customer perspective; this study therefore constructs a brand inheritance model for time-honored catering brands. (3) although some scholars have paid attention to the impact of traditional catering on customer experience from a cross-cultural perspective or the perspective of innovation (huang, ; koh et al., ), few studies have answered whether cultural factors and creativity can improve customers' cognitive attitudes toward and behaviors regarding time-honored catering brands. to address these unresolved issues, we introduce the moderator variables of creative performance and cultural proximity.

stimulus-organism-response (sor) theory was first proposed by mehrabian and russell ( ) and later modified by jacoby ( ). sor theory emphasizes that external influences provoke and change an individual's emotional and cognitive condition, leading to certain behavioral outcomes (kamboj et al., ). the s-o-r framework consists of three components: stimulus, organism and response. the first, "stimulus", refers to an influence that arouses the individual. in the restaurant experience, stimulation is an expression of the core features the restaurant provides. undoubtedly, brand authenticity, as a brand packaging attraction (moulard et al., ), is an external stimulus through which time-honored catering brands awaken customers' interest and passion; creative performance, as a means of innovation, also belongs, to some extent, to the external stimulus. the "organism", the second component, refers to the customer's cognition and affection; it lies in the process between the stimuli and the customer's responses (kamboj et al., ).
cognition represents people's understanding of things or phenomena and specifically includes psychological experience processes such as feeling, perception, imagination, and thinking. affection is the attitude that people form about whether objective things meet their own needs. after interest is awakened by brand authenticity, customers are more likely to engage with the brand, resulting in brand experience. brand experience is a critical attribute creating direct and lasting links between customers and time-honored brands (tsai and wang, ; ong et al., ). when the experience notably meets customers' expectations of the brand, it stimulates the higher-level affection of brand identification (kempf, ). behavior, the final stage, is the customer's response to external stimuli. positive brand experience and brand identification encourage customers to form cognition of and emotion toward traditional brands and ultimately lead to wom behavior (kim et al., ). in other words, to realize wom behavior regarding time-honored catering brands, customers may need to pass through the intermediary mechanism of awakened interest, brand experience and brand identification.

in addition, we need to consider some key accelerators. some time-honored chinese catering brands perform poorly and are being eliminated (li, ) because they fail to match changing consumer trends or lack innovation (leng, ; munthree et al., ). creativity is a manifestation of the originality and genuineness of time-honored brands and may be a great approach for helping customers better understand the brands (horng et al., ). further, cultural background, such as cultural proximity, is considered another significant factor affecting customers' experiences and value evaluations (chang et al., ), yet the role of cultural background in customer attitudes and behaviors is still unknown. the time-honored brand has a distinctive cultural character, so in this scenario it is of great significance to explore the influence of cultural proximity on the customer's dining process. based on these viewpoints, the study also explores the moderating effects of creative performance and cultural proximity. consequently, based on sor theory, this study constructed a more complex two-stage mediation-moderation model (fig. ).

brand authenticity refers to the degree to which a brand is perceived to be original and authentic, meaning that it is unique rather than derivative (akbar and wymer, ). if food is produced by traditional or manual methods, the product may be considered authentic (cinelli and leboeuf, ). the connotations and historical value of time-honored catering brands all point to the brands' high level of uniqueness and originality, and these characteristics are the core of the brands' attraction and of the stimulation of consumer demand. experiencing authentic food is a primary motivation for customers to become interested and make decisions, which highlights the influence of authenticity on consumer decisions and behaviors (jiménez-barreto et al., ). guèvremont and grohmann ( ) assert that the nature of brand authenticity can awaken great interest in consumers. the awakening of interest reflects the stimulation of a customer's potential interest in something, which serves as an important factor in predicting customer behavior (machleit et al., ).
if sensory and affective interest is awakened, customers will have a strong need to participate in the brand experience and to be satisfied through it. in a word, the brand authenticity of time-honored catering brands will awaken customers' interest and enhance their in-depth brand experience, because customer interest can arouse positive emotions. when customer interest is aroused, the impression of the authenticity of time-honored catering brands is deepened, thereby affecting the customer's brand experience. thus, this study concludes that the authenticity of time-honored catering brands can enhance customers' experience by awakening their interest.

hypothesis 1. brand authenticity will affect customers' brand experience through awakening their interest.

in an ever more competitive market, brands must offer memorable experiences to their customers if they want to differentiate themselves and build a solid competitive position (iglesias et al., ). brand-related stimuli (i.e., product, design, atmosphere, packaging and publicity) are the main sources of consumers' subjective responses and feelings, which are called brand experience (brakus et al., ; carlson et al., ; liljander et al., ; ong et al., ). the concept of brand experience is of great interest to marketers because it is crucial in determining consumers' brand attitudes and behaviors. scholars have confirmed that brand experience is a leading factor driving customers' positive brand attitudes (shamim and mohsin butt, ) and brand identity (rahman, ). customers' understanding and identification of time-honored brands are generated and enhanced easily when they have deep experiences, such as the mobilization of the senses, affection, behavior, and even intelligence (füller et al., ). further, some scholars assert that consumer brand experience is often closely associated with the authenticity represented by traditional culture (alexander, ). brand authenticity is often used by companies to stimulate customers (laub et al., ), because authenticity has become an important indicator of a brand, enriching the brand experience (mody and hanks, ). if time-honored catering brands can show the characteristics of being original and genuine, their customers' brand experience will be enhanced. therefore, the authenticity of time-honored catering brands can enhance consumers' brand experience and ultimately lead to positive brand identification.

hypothesis 2a. brand authenticity will affect customers' brand identification through brand experience.

another viewpoint on the effects of brand experience emphasizes that when a brand awakens consumers' interest, internally consistent consumption desires will enhance the in-depth brand experience (coelho et al., ). the rationale is that brand experience is a subjective, internal consumer response. over time, brand experiences may produce changes in affective bonds, participation behaviors, senses and intelligence (prentice et al., ; brakus et al., ). in this study, the sensory experience comprises the tactile, visual, auditory, olfactory, and gustatory stimulation generated in customers by brands (iglesias et al., ). because of the time-honored brand's sense of history and cultural heritage, the affective dimension captures the degree to which customers perceive it as an emotional brand (iglesias et al., ).
the intellectual and behavioral levels refer, respectively, to the imagination and curiosity and to the customer attitudes and behaviors initiated by the brand (das et al., ). these aspects compose the internal results of the stimulus that awakens the experience; in particular, contact through the senses and behaviors greatly enhances the experience (pinker, ). in the early stage of customer visits, marketers try to enhance the experience by awakening visitors with promotional materials (kim and ko, ). evidence from other research fields also shows that commercial websites enhance customers' online experiences by attracting their interest (nah et al., ). in this study, the original attraction of time-honored catering brands can successfully awaken customer interest, which triggers customers' motivation to participate in the experience and enhances their level of brand experience (jones and runyan, ; chhabra, ). when customers' interest is aroused, they will come to identify with the time-honored catering brands through in-depth brand experience. therefore, the hypothesis is as follows:

hypothesis 2b. awakening customers' interest will affect their brand identification through brand experience.

customers' brand identification is regarded as a high degree of brand understanding and recognition (popp and woratschek, ), which affects further customer behavior toward the brand, including positive wom and other supportive behaviors (zhu et al., ). wom has become one of the most effective marketing tools (prentice et al., ). arnett, german, and hunt ( ) assert that brand wom is a way of expressing and improving self-identity and a behavioral response to customer identification. individuals achieve identification with a specific brand after experience, which ultimately leads to a positive behavioral outcome (alexander, ); hence, brand experience is often regarded as a predictor of consumer behavior (zarantonello and schmitt, ). further, brand identity, as a link establishing consistency between brand and customer, promotes customers' intention to recommend (berrozpe et al., ). more importantly, brand identification mainly occurs after a memorable positive brand experience (merk and michel, ), because a positive brand experience forms an identity between the customer and the brand (han et al., ). when the degree of customers' brand experience with time-honored catering brands is high, customers will show positive brand identification and support the brands through wom (han et al., ). further, according to sor theory, brand experience is the cognitive process of interacting with time-honored catering brands, identification represents the customer's brand attitude, and wom is the display of the customer's brand behavior. as brand experience accumulates, customers gain a comprehensive understanding of the brand and take action (wom) (hanna and rowley, ; saleem et al., ). in other words, after obtaining a higher brand experience, customers will strengthen their recognition of time-honored brands and generate brand wom to support their identification.

additionally, wom includes two important forms of communication: in-person wom and ewom (eelen et al., ). in-person wom is the real-time interaction between time-honored brand fans and other potential customers after the former's brand recognition (klesse et al., ); with its strong credibility, it has a high success rate in influencing other customers to visit.
in the process of ewom, customers have more time to think about and reconstruct the communicated content regarding time-honored brands (sijoria et al., ). whether in-person wom in the traditional period or ewom in the internet era, both represent the important brand communication behavior that follows customers' brand identification. therefore, this study asserts that customers' brand experience will influence brand identification, resulting in both positive in-person wom behavior and positive ewom behavior.

hypothesis 3a. customers' brand identification mediates the relationship between their brand experience and in-person wom.

hypothesis 3b. customers' brand identification mediates the relationship between their brand experience and ewom.

creative performance is described as the ability to generate new ideas, behaviors, concepts, designs and service programs; it refers to updating old ideas into new and unique ones (wang and netemeyer, ). previous studies indicate that creative performance can present innovative service forms and improve product quality (sternberg, ), which are key factors in improving brand experience (füller et al., ). in particular, the combination of authenticity and creativity has become a key means of developing tourism and leisure experiences in the new period (zhang et al., ). the authenticity of time-honored brands is a key feature for customers in choosing to visit a restaurant (ponnam and balaji, ). however, customers evaluate the brand experience not only on the brand's authenticity but also on its creativity (darvishmotevali et al., ). in building time-honored catering brands, different creative means can be adopted to improve customers' brand experience, such as the presentation of unique ideas, new marketing strategies and new services (darvishmotevali et al., ). the creative development of innovative products and services can satisfy customers' changing needs (chang and teng, ). moreover, creativity is regarded as the best means of expressing and transmitting brand authenticity in its conservation and inheritance: it not only helps customers enhance their cognition and understanding of the original nature of the brand but also influences customers' brand experience and strengthens the emotional connection between customers and brands (schmitt, ). therefore, this study concludes that the interaction between the authenticity and the creative performance of time-honored catering brands can enhance customers' brand experience.

hypothesis 4. creative performance will positively moderate the relationship between brand authenticity and brand experience.

unlike other brands, time-honored brands are well known for their splendid culture (mu, ) and for the characteristics of regional cultures in china (forêt and mazzalovo, ). some scholars argue that cultural background factors (e.g., cultural differences, cultural proximity, and cultural distance) can explain a variety of customer dining behaviors (sheldon and fox, ). for example, chang et al. ( ) found that customers evaluate local cuisine based on their own culinary culture and habits. in the context of time-honored catering brands, the process of customers' cognition (brand experience), attitude (brand identification) and behavior (wom) may be affected by cultural proximity.
although many studies have proven that cultural proximity influences customers' visit motivation, interest, and familiarity (weiermair, ; huang et al., ), no direct evidence has confirmed that cultural proximity plays an important role in determining customers' brand attitudes toward, and subsequent behaviors regarding, time-honored catering brands. kivela and crotts ( ) believe that restaurants have become the main channel through which customers experience local cultures and that the degree of cultural proximity is likely to affect their attitudes toward a destination. time-honored catering brands represent unique regional cultures, and the degree of cultural proximity can affect the degree of customers' recognition of these brands (forêt and mazzalovo, ), because previous research has shown that a unique food culture determines which customer groups are targeted (kivela and johns, ). therefore, customers with similar cultural backgrounds are likely to show more positive brand identification after an in-depth brand experience, and another hypothesis follows:

hypothesis 5. cultural proximity will positively moderate the relationship between brand experience and brand identification.

cultural background may also influence the conversion of customer attitude into positive behavior. culture includes factors such as common values, beliefs, attitudes, behavioral norms, customs, rituals, ceremonies and perceptions (warner and joynt, ). especially in china, there are cultural differences across ethnic groups, geographical locations, and regions (zhang et al., ), causing differences in customer preferences, tastes, and habits, because cultural characteristics are deeply rooted and influence many individual behaviors, as does the distinct cultural background of time-honored catering brands (atkins and bowler, ). for example, barreto ( ) asserts that cultural differences can affect the expression of customers' wom. when the cultural background is similar, customers are more likely to express their opinions and recommendations through wom after understanding and recognizing time-honored catering brands (yaveroglu and donthu, ). as part of a place's intangible cultural heritage, time-honored catering brands reflect local cultural characteristics and create a sense of place (gordin and trabskaya, ). customers with stronger cultural proximity can find cultural resonance with such brands and prefer to recommend them to others; with a similar cultural background, customers can easily understand time-honored catering brands. additionally, cultural proximity in a region plays social and other comprehensive roles as a background factor (sahin and baloglu, ), affecting customer wom (in-person wom and ewom). therefore, this study concludes that customers with higher cultural proximity are more likely to engage in positive in-person wom and ewom after identification.

hypothesis 6a. cultural proximity will positively moderate the relationship between customers' brand identification and in-person wom.

hypothesis 6b. cultural proximity will positively moderate the relationship between customers' brand identification and ewom.

this study used a questionnaire survey to collect data and structural equation modeling (sem) for data analysis. the latter is a quantitative, positivist research method often used in restaurant research (chou et al., ; han et al., ; chen et al., ).
questionnaire surveys can reduce the surveyor's interference with the respondents. furthermore, sem can test the relationships among multiple variables at the same time: it provides a more complete test of the entire proposed theoretical model, avoiding the inaccurate standard error estimates or evaluation biases caused by non-independent observations (liu, ). therefore, the method is well suited to testing the hypothesized relationships and the formation of the customer wom path in this study.

to make the investigation more accurate, data were collected using questionnaires completed by customers; an intercept approach was utilized in fujian province (zhang and xu, ). we followed several lines of reasoning and consulted several resources to select the samples. first, according to the definition of and industry standards for china's time-honored brands, a brand must have been established before 1956 and must possess unique products, skills or services; further, the brand must show bright regional cultural characteristics and historical and cultural value and have a good reputation. second, as the conditions for becoming a time-honored brand are relatively strict, we selected time-honored brands with broad social awareness distributed throughout china (shown in fig. ). for example, quanjude (全聚德) was founded in 1864, during the qing dynasty, and has a history of more than 150 years. its founder, yang quanren, famous for making beijing roast duck, pioneered its signature roasting technique, and this roast duck technique has now been selected as a national intangible heritage project. moreover, quanjude's "all-duck feast (全鸭席)" and its many special dishes are known as "the first chinese food"; zhou enlai, the former premier of the people's republic of china, repeatedly chose quanjude's "all-duck feast" for state banquets. overall, fig. shows the typicality and representativeness of our research objects. third, before the survey, we asked the respondents to choose the time-honored catering brand of which they had the deepest impression or experience and to complete the questionnaire for that brand; people who had no consumption experience with time-honored catering were not asked to participate.

to gather quality research data efficiently, the data were collected in the following steps. first, we trained the investigators in, for example, the questionnaire distribution process and anonymous surveying. second, several trained assistants were instructed to intercept customers who passed through the research area near the restaurants and to distribute the questionnaire (zhang et al., ). third, we clarified the purpose of the questionnaire and the procedure for completing it and answered any doubts. fourth, the respondents were asked to fill in the items one by one; the research assistants then checked each questionnaire before it was collected and gave restaurant coupons as a token of appreciation. the questionnaires were completed from january to april. of the questionnaires distributed, most were recovered; after checking their validity, invalid questionnaires (with straight-lined or incomplete answers) were excluded, leaving the remainder for the data analysis. a table summarizes the detailed statistics of the respondents.
the study scales consisted of the following constructs: brand authenticity, awakening of interest, brand experience, brand identification, creative performance, cultural proximity, in-person wom and ewom. the scales were originally developed in english and then translated into chinese; thus, a back-translation procedure (brislin, ) was conducted by four professors and researchers in the tourism management field to retain the original meanings of the items and obtain the chinese version of the scale. moreover, to make the measurement more accurate, we adopted a likert scale ranging from "totally disagree" to "totally agree" (stylidis et al., ). mature scales based on the previous literature were used in this study. specifically: (1) brand authenticity was measured using the scale of mody and hanks ( ), which comprises the two sub-dimensions of originality and genuineness. (2) we assessed awakening of interest using items from tercia et al. ( ), measuring the degree of customers' interest in time-honored catering. (3) from brakus et al. ( ), we used items covering the sensory, intellectual, affective and behavioral dimensions to measure brand experience. (4) five items adapted from popp and woratschek ( ) were used to measure customers' brand identification, reflecting the degree to which customers identified with the time-honored brands. (5) items adopted from darvishmotevali et al. ( ) were used to measure creative performance, showing customers' perception of the creative ideas, behaviors or measures conveyed by time-honored brands. (6) cultural proximity was measured by items from huang et al. ( ), reflecting the degree of proximity between the culture of the customer's permanent residence and the culture of the time-honored brand visited. (7) in-person wom and ewom were measured by items from eelen et al. ( ). (8) some demographic variables were controlled, including gender, age, educational background, monthly income and the number of experiences with the relevant brands (fu et al., ; pan et al., ).

the mean, standard deviation, factor loading, composite reliability (cr), average variance extracted (ave) and cronbach's alpha of the measurement variables are shown in a table. all the cronbach's alpha values were above the 0.7 cut-off, indicating a high level of reliability for each construct (ryu et al., ). the standardized factor loading of each item was significant and satisfied the recommended threshold (gieling and ong, ). moreover, the cr values were above the 0.7 threshold and the ave values were above the 0.5 threshold, demonstrating the reliability and convergent validity of each construct (bagozzi and yi, ). further, cronbach's alpha was higher than the cut-off recommended by chow and chan ( ), showing the high level of internal consistency of each construct. to confirm the construct validity, we used confirmatory factor analysis (cfa) to examine the first- and second-order factor structures (zhang et al., ). the cfa results indicated that the first- and second-order factor structure models were acceptable for further study. as shown in a table, there were high correlations between the constructs; therefore, a variance inflation factor (vif) test was needed to determine whether there was serious collinearity between the constructs. the results showed that all the vifs were below the recommended cut-off (liu, ), indicating that collinearity was not a serious issue.
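the reliability statistics reported above follow standard formulas. the sketch below, with hypothetical loadings and toy likert data, shows how cronbach's alpha, cr and ave are typically computed (alpha from raw item scores; cr and ave from standardized cfa loadings); it is an illustration, not the authors' software.

    import numpy as np

    def cronbach_alpha(items):
        # items: (n_respondents, k_items) matrix of likert scores
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_vars.sum() / total_var)

    def composite_reliability(loadings):
        # cr = (sum lambda)^2 / ((sum lambda)^2 + sum(1 - lambda^2))
        lam = np.asarray(loadings)
        return lam.sum() ** 2 / (lam.sum() ** 2 + (1 - lam ** 2).sum())

    def ave(loadings):
        # average variance extracted = mean of squared loadings
        lam = np.asarray(loadings)
        return np.mean(lam ** 2)

    # hypothetical standardized loadings for a 5-item construct
    lams = [0.78, 0.82, 0.75, 0.80, 0.77]
    print(composite_reliability(lams), ave(lams))  # above the 0.7 / 0.5 cut-offs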
further, this study confirmed the discriminant validity between each pair of constructs, because the square roots of the ave values were greater than the corresponding correlation coefficients (fornell and larcker, ). in addition, this study used similar steps to test for serious correlations between the variables (craighead et al., ). following previous suggestions, common method variance (cmv) needed to be assessed (podsakoff et al., ); we applied satorra-bentler's scaled chi-square difference method using spss. the results of the common factor model showed that the variance explained by the first extracted factor was below the 50% threshold (podsakoff, ); therefore, there was no concern about potential common method variance in this study.

to test the research hypotheses, a two-step procedure was followed. first, sem was applied in amos to test the overall model structure; we then used a bootstrap confidence interval approach with monte carlo resampling, 95% bias-corrected confidence intervals (ci) and p-values (liu, ) to examine the mediating effects. second, this study employed hierarchical regressions in stata to test the moderating effects (zhang et al., ). fig. shows the standardized path coefficients; each direct path was significant, and the overall model fit the data well according to the χ², χ²/df, cfi, ifi, tli, nfi, agfi, rfi and rmsea indices.

[table: measurement items by construct (residue); the recoverable items include, e.g., a reverse-coded brand experience item ("this time-honored catering brand is not action oriented"); customers' brand identification (popp and woratschek): "this time-honored catering brand says a lot about the kind of person i am", "this brand's image and my self-image are similar in many respects", "this brand plays an important role in my life", "i am very attached to the brand", "the brand raises a strong sense of belonging"; creative performance (darvishmotevali et al.): "employees could carry out routine tasks in resourceful ways", "the brand could come up with novel ideas to satisfy customer needs", "the restaurant offers a variety of dishes to choose from"; and ewom: "expressing your opinion about this brand online", "sharing ideas for new products and experiences online", "participating in a discussion on the brand website", "liking this brand on wechat or weibo", "sending or sharing online messages or promos to others", "writing an online review", "writing something or posting a video about this brand online".]

this study tested the mediators of awakening of interest, brand experience and customers' brand identification using sem. first, hypothesis 1 proposed that awakening of interest mediates the relationship between brand authenticity and brand experience. as illustrated in fig., the two sub-dimensions of brand authenticity (originality and genuineness) directly and significantly affected the mediator of awakening of interest, and awakening of interest was positively and significantly associated with brand experience.
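to illustrate the logic of the bootstrap indirect-effect tests reported here (the authors used amos with bias-corrected intervals; this is only a hedged sketch with simulated data and hypothetical path values), one can resample the product of the a and b paths and check whether the 95% interval excludes zero:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 500  # hypothetical sample size

    # simulated standardized data for a simple x -> m -> y chain
    x = rng.normal(size=n)
    m = 0.5 * x + rng.normal(size=n)
    y = 0.6 * m + rng.normal(size=n)

    def indirect(x, m, y):
        a = np.polyfit(x, m, 1)[0]                        # path x -> m
        design = np.column_stack([m, x, np.ones(len(x))])
        b = np.linalg.lstsq(design, y, rcond=None)[0][0]  # path m -> y given x
        return a * b

    boot = []
    for _ in range(2000):
        idx = rng.integers(0, n, n)  # resample cases with replacement
        boot.append(indirect(x[idx], m[idx], y[idx]))
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"indirect effect 95% ci: [{lo:.3f}, {hi:.3f}]")  # excludes 0 -> mediation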
the average indirect effect of brand authenticity on brand experience through awakening of interest was significant; therefore, hypothesis 1 was supported. second, the mediating effect of brand experience was examined for the predictions of hypotheses 2a and 2b. the direct effects of brand authenticity on the brand experience dimensions (sensory, intellectual, affective and behavioral) were positive and significant, and brand experience had a positive effect on customers' brand identification. thus, brand experience mediated both the relationship between brand authenticity and customers' brand identification and the relationship between awakening of interest and customers' brand identification; as such, hypotheses 2a and 2b were supported. hypotheses 3a and 3b predicted that customers' brand identification mediates the relationships among brand experience, in-person wom and ewom. customers' brand identification affected in-person wom and ewom positively and significantly. the results showed that customers' brand identification mediated the relationship between brand experience and in-person wom, and brand experience had a positive and significant effect on ewom through customers' brand identification, demonstrating that hypotheses 3a and 3b were supported. as seen in the corresponding table, no confidence interval of the two-tailed tests contained zero (saleem et al., ), confirming that the mediating roles of awakening of interest, brand experience and customers' brand identification proposed in hypotheses 1 to 3 were fully supported.

the relationships between brand authenticity and customer wom were also moderated by creative performance and cultural proximity. the baseline models included the control variables and the independent variables of brand authenticity and brand experience; the subsequent models added the interaction effects, which examined hypotheses 4 and 5. the results indicated that the coefficient of the interaction term brand authenticity × creative performance was positive and significant for brand experience. further, a slope test was conducted and a two-dimensional diagram was drawn to confirm the specific trend of the interaction effect: fig. demonstrates that when customers perceived that the time-honored catering brands had a high level of creative performance, the effect of brand authenticity on brand experience was enhanced. moreover, the results showed that the interaction effect of brand experience and cultural proximity on customers' brand identification was not significant, which did not support hypothesis 5. [table notes: *p < .05; **p < .01; ***p < .001; correlations above the reported threshold were significant; the square roots of the ave are shown in bold on the diagonal.]

a similar procedure was used to examine hypotheses 6a and 6b. a further table summarizes the moderating effects of cultural proximity: the interaction of cultural proximity with customers' brand identification was positive and significant for in-person wom, and there was also a positive interaction effect of customers' brand identification and cultural proximity on ewom.
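the moderation tests above amount to an interaction term in a hierarchical regression followed by a simple-slope probe. the sketch below, with simulated data and hypothetical variable names, mirrors that procedure using statsmodels; it illustrates the analysis logic rather than reproducing the authors' stata runs.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n = 500  # hypothetical sample size
    df = pd.DataFrame({"ident": rng.normal(size=n), "prox": rng.normal(size=n)})
    df["wom"] = (0.4 * df.ident + 0.2 * df.prox
                 + 0.15 * df.ident * df.prox + rng.normal(size=n))

    # mean-center the predictors, then estimate main and interaction effects
    df["ident_c"] = df.ident - df.ident.mean()
    df["prox_c"] = df.prox - df.prox.mean()
    fit = smf.ols("wom ~ ident_c * prox_c", data=df).fit()
    print(fit.params["ident_c:prox_c"])  # moderation coefficient

    # simple slopes of identification at +/- 1 sd of the moderator
    for sd in (-1, 1):
        slope = (fit.params["ident_c"]
                 + fit.params["ident_c:prox_c"] * sd * df.prox_c.std())
        print(f"slope at {sd:+d} sd of proximity: {slope:.3f}")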
the simple slope analysis showed that at a higher level of cultural proximity, the positive relationships between customers' brand identification and both in-person wom and ewom were stronger. thus, hypotheses a and b were supported. to examine whether the results of this study are stable, the same procedure, including sem and regression analyses, was used to test the mediating and moderating models (tsai et al., ; liu, ). the independent variable of brand authenticity was separated into its two dimensions of originality and genuineness in the alternative model. fig. summarizes the output path estimates, which showed that all direct paths were significant. the overall fit of the alternative model was worse than that of the proposed model (χ² = . , p < . ; χ²/df = . ). specifically, the mediating effects of awakening of interest on the relationship between originality and brand experience (β = . ; p < . ) and on the relationship between genuineness and brand experience (β = . ; p < . ) were significant, which provided evidence regarding hypothesis . further, originality (β = . ; p < . ) and genuineness (β = . ; p < . ) affected brand experience through awakening of interest. additionally, brand experience mediated the effect of awakening of interest on customers' brand identification (β = . ; p < . ), which still supported hypotheses a and b. moreover, customers' brand identification still positively and significantly mediated the relationships among brand experience, in-person wom and ewom (h a: β = . , h b: β = . ; all p < . ). therefore, hypotheses a and b were fully supported. next, we evaluated the two moderators. the interaction effect of originality × creative performance was not significant for brand experience (β = . ; p = . ), but genuineness × creative performance (β = . ; p < . ) was positive and significant for brand experience, which partially supported hypothesis . similarly, the moderating test results for cultural proximity were identical to those of the proposed model. overall, the structural model proposed in this study was robust. the inheritance path of traditional catering brands has long been an issue requiring an urgent theoretical response (tian et al., ). this study analyzes the development characteristics of time-honored catering brands and establishes the significance of customer wom for brand heritage. based on sor theory, we construct and test the wom path. first, this study has shown that brand authenticity is a critical leading factor and that the interaction between brand authenticity and creative performance can promote traditional brand inheritance. brand authenticity reflects the originality and genuineness of time-honored catering brands, which is one of the characteristics that distinguish these brands from other catering brands. however, the heritage of time-honored catering brands needs creative elements to improve customer experience; creative performance reflects the combination of historical and innovative elements to enhance customer experience, leading to customer brand wom behavior. second, customers' responses to the brand (awakening of interest), cognition (brand experience) and attitudes (brand identification) are the important mediating factors for their wom. sor theory consolidates the theoretical foundation of the inheritance path model. as the stimulus, brand authenticity can successfully awaken customers' potential interest in the experience.
the brand experience process mobilizes customers' positive senses, behaviors, affections, intellect and other comprehensive feelings (brakus et al., ), leading to brand identification with traditional brands. consequently, customers manifest desired behaviors such as in-person wom and ewom, showing positive results for brand recognition (dimitriadis and papista, ). this study highlights the positive effects of cognition (brand experience) and affection (identification) on customers' wom in the context of time-honored catering brands. third, an interesting finding shows that cultural proximity can strengthen the behavior (wom) of customers after brand identification, but it cannot strengthen the formation process of customers' brand identification. customers with higher cultural proximity are willing to express common cultural emotions and generate the desired behavior, and it is easier to convey to them the cultural connotation and essence of time-honored brands. brand identification is a direct consequence of affective mobilization in the process of brand experience (stryker, ). further, customers' brand identification is essentially the evaluation of intuitive feelings in the process of high-level experience (lin and sung, ). this evaluation is mainly aimed at the experience of the catering products themselves, especially the food, rather than at cultural backgrounds; it is therefore difficult for cultural proximity to enhance or weaken the influence of customer brand experience on brand identification. given the decline and failures stemming from serious issues with time-honored catering brands (li, ), the study results contribute significant theoretical value to the research field. first, this study is the first to provide a specific path of traditional catering brand inheritance. the theoretical process highlights the core driver of brand authenticity and the accelerator of creative performance. although many scholars have emphasized the unique historical and cultural value of traditional catering brands (tian et al., ), to date, few studies have focused on addressing brand decline and constructing a theoretical path of traditional catering brand inheritance, especially one remodeled from a customer perspective rather than an enterprise perspective (li et al., ; he, ). importantly, the critical leading factor of brand authenticity is placed in the present and the future, not just in the historical period (lu et al., ; sims, ). only the combination of old originality and new creativity can generate high levels of customer wom (lu et al., ; wang and netemeyer, ). this study is a new attempt to explore the creative performance of time-honored catering brands in terms of products and services, because this variable has often been used to represent the influence of employee service behavior (darvishmotevali et al., ). therefore, creativity grounded in brand authenticity needs to be valued in the future. second, the findings clarify the different influences of cultural background factors on traditional catering customers in the attitude and behavior processes. the conclusions respond to previous arguments about the influence of cultural distance on individuals (huang et al., ). they further help us to reconsider the different roles of cultural proximity in the three stages of before, during and after a visit.
many studies have proven that customers with greater cultural differences have more novelty interest and motivation before they visit (huang, ; huang et al., ). however, the findings in this study provide direct evidence that cultural proximity strengthens or weakens customer attitudes and behaviors after visits rather than before visits, as indicated in a previous study (sims and rebecca, ). moreover, building on the influence of cultural differences on customer motivation, we distinguish and expand the different roles of cultural proximity in different stages of the customer experience, especially in the context of traditional cultural brands. the results complement the research on cross-cultural customer psychology and behavior and on cultural theory (huang, ). third, this study provides a new perspective from which to address and confirm the relationship between brand authenticity and customer behavior. although previous studies have highlighted that authenticity, as an attraction, plays an important role in customer satisfaction, evaluation and motivation (zeng et al., ; dipietro and levitt, ), this study expands the research on the influence of brand authenticity on customer wom. more importantly, the research identifies the three-stage organism process through customers' responses (awakening of interest), cognitive processes (brand experience) and attitude formation (brand identification) (lu et al., ; dipietro and levitt, ). this conclusion highlights that customer cognition and attitude are significant bridges in the process of forming their wom (moulard et al., ). in addition, the paths of the mediating mechanisms broaden sor theory and facilitate theorizing about how to realize traditional brand inheritance in the new period. the research results provide an important development path for the realization of time-honored catering brand inheritance. first, time-honored catering enterprises should protect their unique traditional secret formulas and manual skills to ensure the original flavor, for example by providing sufficient funding to train restaurant artisans and by using original recipes. in addition, the management of process inheritance and brand expansion needs to learn from modern enterprise development models while building on the brands' own traditional characteristics, which helps them adapt to changing customer demands. on the other hand, it is necessary to ensure the consistency of brand culture in services, products and marketing; the core position of the traditional elements of time-honored catering brands, including the brand logo, brand style and brand design, must be ensured. for example, authentic food ingredients and decoration features should be used, and the staff should wear cultural costumes, which will enhance the aesthetic and create a more unique and authentic service experience (lu et al., ). in addition, unlike other forms of catering such as takeout, authenticity requires a sense of presence, and live production can enhance the sensory and emotional experience. however, it is important to maintain social distancing between the dining tables so as to serve the customers while allowing them to enjoy the authenticity in an uncrowded space. in each link, time-honored catering enterprises should pay attention to cultural storytelling to highlight their historical competitiveness, such as the origin of each dish, the production process and the founder's experience. second, authenticity alone may no longer be enough to ensure the sustainable development of a traditional restaurant (lu et al., ).
time-honored catering brands must realize that creative products and services are the driving factors of competitive advantage (liang and james, ). this may include product windows and creative feature films and projects that enhance customers' intuitive perceptions of the brand (bogicevic et al., ). the presentation of time-honored products can be more diversified and creative, including the modeling of dishes, plate presentation, tableware, etc. further, the combination of traditional and modern elements needs to be emphasized. time-honored brands can also use creative derivatives to stimulate a diverse consumption experience, such as creative tableware souvenirs, time-honored seasonings, etc. (ryu and zhong, ). further, personalized service should be available when necessary. a transparent window can be used to show the cooking process, which allows customers not only to taste the food but also to understand the traditional brand visually. moreover, due to differences in regional tastes, managers should make appropriate improvements to adapt the taste to local people, such as reducing the spiciness. third, customers' wom is spontaneous behavior, but providing online and offline wom platforms for customers is still an initiative that time-honored catering brands must take seriously. therefore, creating social media platforms and online communities is essential (bernritter et al., ). for example, managers can design community activities with a strong cultural tone and style and a sense of identity, which will maintain emotional bonds through social media interaction. further, festivals and special events should be actively used to provide customers with promotional material about time-honored brands, encouraging customers to share brand information (collins-dodd and lindley, ; eelen et al., ). moreover, establishing an internal connection between time-honored brands and customers is the key to maintaining wom, as it can highlight customers' identification. for example, providing time-honored brand memberships, benefits and feedback systems can enhance the offline one-to-one connection between customers and brands, acting as a channel for highlighting customers' unique identities. although this study creates a new path model for solving the issue of time-honored catering brand inheritance and the results make significant contributions, some limitations can provide insights for future research. this study collected data from chinese customers of time-honored catering brands. however, research on cultural proximity needs to involve cross-cultural samples, and it would be better to use a multinational sample to highlight the influence of cultural distance on customer attitudes and behaviors. future research may examine different groups in transnational cultures to ensure the universality and external validity of the hypothesized model proposed in this study (hwang and ok, ). further, an examination of the results through a comparative study of transnational samples, such as in different countries, is needed. in addition, sem was used in this study to discuss the inheritance of time-honored catering brands from the perspective of customers' wom. a multilevel model may be the best way to examine the influence of brand authenticity on customers' wom. unfortunately, there were a total of authorized time-honored enterprises in , and only one in ten of them is thriving.
further, time-honored catering brands are few, and there may be fewer than typically successful time-honored catering brands that satisfy our study sample needs (li et al., ); this is insufficient to meet the data requirements of the multilevel method. future research need not be limited to samples of time-honored catering brands but could also study traditional catering enterprises. the robustness of the model may be further examined through a multilevel analysis method. finally, future research can use qualitative interviews to investigate the relationship between spatial distance, degree of crowding and the customer brand authenticity experience in a new, post-covid world order.
references:
refining the conceptualization of brand authenticity
brand authentication: creating and maintaining brand auras
general image and attribute perceptions of traditional food in six european countries
the identity salience model of relationship marketing success: the case of nonprofit marketing
consumer culture theory (cct): twenty years of research
food in society
on the use of structural equation models in experimental designs
consumer willingness to pay for traditional food products
the word-of-mouth phenomenon in the social media era
why nonprofits are easier to endorse on social media: the roles of warmth and brand symbolism
am i ibiza? measuring brand identification in the tourism context
virtual reality presence as a preamble of tourism experience: the role of mental imagery
brand experience: what is it? how is it measured? does it affect loyalty?
back-translation for cross-cultural research
enhancing brand relationship performance through customer participation and value creation in social media brand communities
intrinsic or extrinsic motivations for hospitality employees' creativity: the moderating role of organization-level regulatory focus
attributes that influence the evaluation of travel dining experience: when east meets west
understanding the importance of food tourism to chongqing
nostalgic emotion, experiential value, brand image, and consumption intentions of customers of nostalgic-themed restaurants
an empirical study on culinary tourism destination brand personality and its impact in the context of confucian culture
back to the past: a sub-segment of generation y's perceptions of authenticity
the critical criteria for innovation entrepreneurship of restaurants: considering the interrelationship effect of human capital and competitive strategy, a case study in taiwan
social network, social trust and shared goals in organizational knowledge sharing
keeping it real: how perceived brand authenticity affects product perceptions
on the relationship between consumer-brand identification, brand community, and brand loyalty
store brands and retail differentiation: the influence of store image and store brand attitude on store own brand perceptions
addressing common method variance: guidelines for survey research on information technology, operations, and supply chain management
emotional intelligence and creative performance: looking through the lens of environmental uncertainty and cultural intelligence
does brand experience translate into brand commitment?: a mediated-moderation model of brand passion and perceived brand ethicality
integrating relationship quality and consumer-brand identification in building brand relationships: proposition of a conceptual model
restaurant authenticity: factors that influence perception, satisfaction and return intentions at regional american-style restaurants
measuring the impact of positive and negative word of mouth on brand purchase probability
the differential impact of brand loyalty on traditional and online word of mouth: the moderating roles of self-brand connection and the desire to help the brand
the long march of the chinese luxury industry towards globalization: questioning the relevance of the "china time-honored brand"
evaluating structural equations models with unobservable variables and measurement error
reality tv, audience travel intentions, and destination image
why co-creation experience matters? creative experience and its impact on the quantity and quality of creative contributions (r&d management)
brand community members as a source of innovation
warfare tourism experiences and national identity: the case of airborne museum 'hartenstein' in oosterbeek
the role of gastronomic brands in customer destination promotion: the case of st. petersburg
service branding: consumer verdicts on service brands
perception of traditional food products in six european regions using free word association
does brand authenticity alleviate the effect of brand scandals?
the remodeling of brand image of the time-honored restaurant brand of wuhan based on emotional design
consumer purchase intention at traditional restaurant and fast food restaurant
antecedents and the mediating effect of customer-restaurant brand identification
towards a strategic place brand-management model
transference or severance: an exploratory study on brand relationship quality of china's time-honored brands based on intergenerational influence
creativity as a critical criterion for future restaurant space design: developing a novel model with dematel application
the dining experience of beijing roast duck: a comparative study of the chinese and english online consumer reviews
cultural proximity and intention to visit: destination image of taiwan as perceived by mainland chinese visitors
the antecedents and consequence of consumer attitudes toward restaurant brands: a comparative study between casual and fine dining restaurants
creating a model of customer equity for chain restaurant brand formation
how does sensory brand experience influence brand equity? considering the roles of customer satisfaction, customer affective commitment, and employee empathy
a business model canvas: traditional restaurant "melayu"
stimulus-organism-response reconsidered: an evolutionary step in modeling (consumer) behavior
destination brand authenticity: what an experiential simulacrum! a multigroup analysis of its antecedents and outcomes through official online platforms
brand experience and brand implications in a multichannel setting (the international review of retail)
examining branding co-creation in brand communities on social media: applying the paradigm of stimulus-organism-response
attitude formation from product trial: distinct roles of cognition and affect for hedonic and functional products
do social media marketing activities enhance customer equity? an empirical study of luxury fashion brand
experience, brand prestige, perceived value (functional, hedonic, social, and financial), and loyalty among grocerant customers
tourism and gastronomy: gastronomy's influence on how customers experience a destination
restaurants, gastronomy and customers: a novel method for investigating customers' dining out experiences
the effect of preference expression modality on self-control
impact of brand recognition and brand reputation on firm performance: us-based multinational restaurant companies' perspective
how archetypal brands leverage consumers' perception: a qualitative investigation of brand loyalty and storytelling
how to protect traditional food and foodways effectively in terms of intangible cultural heritage and intellectual property laws in the republic of korea
research on the reasons and countermeasures for the lagging development of china's traditional brand
brand revitalization of heritage enterprises for cultural sustainability in the digital era: a case study in china
intangible assets are more valuable than the tangible: study on the innovation and development of the traditional time-honored brands
traditional chinese food technology and cuisine
the low-cost carrier model in china: the adoption of a strategic innovation
nothing can tear us apart: the effect of brand identity fusion in consumer-brand relationships
examining social capital, organizational learning and knowledge transfer in cultural and creative industries of practice
the relationships among intellectual capital, social capital, and performance: the moderating role of business ties and environmental uncertainty
perceptions of chinese restaurants in the us: what affects customer satisfaction and behavioral intentions?
modelling consumer responses to an apparel store brand: store image as a risk reducer
authenticity perceptions, brand equity and brand choice intention: the case of ethnic restaurants
the mature brand and brand interest: an alternative consequence of ad-evoked affect
an approach to environmental psychology
the dark side of salesperson brand identification in the luxury sector: when brand orientation generates management issues and negative customer perception
parallel pathways to brand loyalty: mapping the consequences of authentic consumption experiences for hotels and airbnb (tarik dogru)
brand authenticity: testing the antecedents and outcomes of brand management's passion for its products
the study on activation strategy of time-honored brand
a framework for brand revitalization through an upscale line extension
enhancing brand equity through flow and telepresence: a comparison of d and d virtual worlds
impact of brand experience on loyalty
development and validation of a destination personality scale for mainland chinese travelers
the effect of authenticity perceptions and brand equity on brand choice intention
how the mind works
self-reports in organizational research: problems and prospects
common method biases in behavioral research: a critical review of the literature and recommended remedies
matching visitation-motives and restaurant attributes in casual dining restaurants
consumers' relationships with brands and brand communities: the multifaceted roles of identification and satisfaction
the influence of brand experience and service quality on customer engagement
differentiated brand experience in brand parity through branded branding strategy
effect of a brand story structure on narrative transportation and perceived brand image of luxury hotels
antecedents and consequences of customers' menu choice in an authentic chinese restaurant context
the effects of brand experiences, trust and satisfaction on building brand loyalty: an empirical research on global brands
brand personality and destination image of istanbul
drivers of customer loyalty and word of mouth intentions: moderating role of interactional justice
conceptualising consumer-based service brand equity (cbsbe) and direct service experience in the airline sector
customer experience management: a revolutionary approach to connecting with your customers
destination branding and overtourism
a critical model of brand experience consequences
a study on development strategies of time-honored catering brand from the perspective of food tourism
the role of foodservice in vacation choice and experience: a cross-cultural analysis
impact of the antecedents of electronic word of mouth on consumer based brand equity: a study on the hotel industry
food, place and authenticity: local food and the sustainable tourism experience
an approach to determining customer satisfaction in traditional serbian restaurants
the assessment of creativity: an investment-based approach
word of mouth, brand loyalty, acculturation and the american jewish consumer
integrating emotion into identity theory
testing an integrated destination image model across residents and customers
brand awareness, image, physical quality and employee behavior as building blocks of customer-based brand equity: consequences in the hotel context
development and evaluation of an rfid-based e-restaurant system for customer-centric service
conveying pre-visit experiences through travel advertisements and their effects on destination decisions
old names meet the new market: an ethnographic study of classic brands in the foodservice industry in shantou
definition, conceptualization and measurement of consumer-based retailer brand equity
authentic dining experiences in ethnic theme restaurants
experiential value in branding food tourism
work environment and atmosphere: the role of organizational support in the creativity performance of tourism and hospitality organizations
salesperson creative performance: conceptualization, measurement, and nomological validity
motives for consumer choice of traditional food and european food in mainland china
customers' perceptions towards and satisfaction with service quality in the cross-cultural service encounter: implications for hospitality and tourism management
cultural influences on the diffusion of new products
using the brand experience scale to profile consumers and predict consumer behaviour
paradox of authenticity versus standardization: expansion strategies of restaurant groups in china
a structural model of liminal experience in tourism
critical factors in the identification of word-of-mouth enhanced with travel apps: the moderating roles of confucian culture and the switching cost view
how does authenticity enhance flow experience through perceived value and involvement: the moderating roles of innovation and cultural identity
effect of social support on customer satisfaction and citizenship behavior in online brand communities: the moderating role of support source
key: cord- -ta hebbg authors: balachandar, s.; zaleski, s.; soldati, a.; ahmadi, g.; bourouiba, l. title: host-to-host airborne transmission as a multiphase flow problem for science-based social distance guidelines date: - - journal: nan doi: . /j.ijmultiphaseflow. . sha: doc_id: cord_uid: ta hebbg
the covid- pandemic has strikingly demonstrated how important it is to develop fundamental knowledge related to the generation, transport and inhalation of pathogen-laden droplets and their subsequent possible fate as airborne particles, or aerosols, in the context of human-to-human transmission. it is also increasingly clear that airborne transmission is an important contributor to rapid spreading of the disease. in this paper, we discuss the processes of droplet generation by exhalation, their potential transformation into airborne particles by evaporation, transport over long distances by the exhaled puff and by ambient air turbulence, and final inhalation by the receiving host as interconnected multiphase flow processes. a simple model for the time evolution of droplet/aerosol concentration is presented based on a theoretical analysis of the relevant physical processes. the modeling framework, along with detailed experiments and simulations, can be used to study a wide variety of scenarios involving breathing, talking, coughing and sneezing in a number of environmental conditions, such as humid or dry atmospheres and confined or open environments. although a number of questions remain open on the physics of evaporation and its coupling with the persistence of the virus, it is clear that with a more reliable understanding of the underlying flow physics of virus transmission one can set the foundation for an improved methodology in designing case-specific social distancing and infection control guidelines. the covid- pandemic has made clear the fundamental role of airborne droplets and aerosols as potential virus carriers. the importance of studying the fluid dynamics of exhalations, starting from the formation of droplets in the respiratory tract to their evolution and transport as a turbulent cloud, can now be recognized as the key step towards understanding sars-cov- transmission. respiratory droplets are formed and emitted at high speed during a sneeze or cough [ ], and at a lower speed while talking or breathing. the virus-laden droplets are then initially transported as part of the coherent gas puff of buoyant fluid ejected by the infected host [ ].
the very large drops of o(mm) in size, which are visible to the naked eye, are minimally affected by the puff. they travel semi-ballistically with only minimal drag adjustment, but rapidly fall down due to gravitational pull. they can exit the puff either by overshooting or by falling out of the puff at the early stage of emission (fig. ). smaller droplets (o( µm)) that remain suspended within the puff are advected forward. as the suspended droplets steadily evaporate within the cloud, the virus takes the form of potentially inhalable droplet nuclei when the evaporation of water is complete. meanwhile, the velocity of the turbulent puff continues to decay both due to entrainment and drag. once the puff slows down sufficiently, and its coherence is lost, the eventual spreading of the virus-laden droplet nuclei becomes dependent on the ambient air currents and turbulence. the isolated respiratory droplet emission framework was introduced by wells [ ] in the s and remains the framework used for guidelines by public health agencies, such as the who, cdc and others. however, it does not consider the role of the turbulent gas puff within which the droplets are embedded. regardless of their size and their initial velocity, the ejected droplets are subject to both gravitational settling and evaporation [ ]. although droplets of all sizes undergo continuous settling, droplets with settling speed smaller than the fluctuating velocity of the surrounding puff can remain trapped longer within the puff (fig. ). furthermore, the water content of the droplets continuously decreases due to evaporation. when conditions are appropriate for near complete evaporation, the ejected droplets quickly become droplet nuclei of non-volatile biological material. the settling velocity of these droplet nuclei is sufficiently small that they can remain trapped as a cloud and get advected by ambient air currents and dispersed by ambient turbulence. based on the above discussion, we introduce the following terminology that will be consistently used in this paper:
• puff: warm, moist air exhaled during breathing, talking, coughing or sneezing, which remains coherent and moves forward during early times after exhalation
• cloud: the distribution of ejected droplets that remain suspended even after the puff has lost its coherence. the cloud is advected by the air currents and is dispersed by ambient turbulence
• exited droplets: droplets that have either overshot the puff/cloud or settled down due to gravity
• airborne (evaporating) droplets: droplets which have not completed evaporation and are retained within the puff/cloud
• (airborne) droplet nuclei: droplets that remain airborne within the puff/cloud and that have fully evaporated, which will also be termed aerosols.
host-to-host transmission of virus-laden droplets and droplet nuclei generally occurs through direct and indirect routes [ , , ]. the direct route of transmission involves the larger droplets that may ballistically reach the recipient's mucosa. this route is currently thought to involve either the airborne route or drops that have settled on surfaces. the settled drops remain infectious, to be later picked up by the recipient, and are generally thought to be localized to the vicinity, or at close range, of the original infectious emitter. with increased awareness and modified physical distancing norms, it is possible to minimize the spreading of the virus by such a direct route.
the indirect route of transmission is one that does not necessarily involve a direct or close interaction between the infectious individual and the recipient or for the two to be synchronously present figure : image reproduction showing the semi-ballistic largest drops, visible to the naked eye, and on the order of mm, which can overshoot the puff at its early stage of emission [ , ] . the puff continues to propagate and entrain ambient air as it moves forward, carrying its payload of a continuum of drops [ ] , over distances up to meters for violent exhalations such as sneezes [ ] . in the same contaminated space at the same time. thus, the indirect route involves respiratory droplets and fully-evaporated droplet nuclei that are released to the surrounding by the infected individual, which remain airborne as the cloud carries them over longer distances [ , , , ] . the settling speeds of the airborne droplets and droplet nuclei are so small, that they remain afloat for longer times [ ] , while being carried by the background turbulent airflow over distances that can span the entire room or even multiple rooms within the building (o( − ) feet). a schematic of the two routes of transmission is shown in fig. and in this paper we will focus on the indirect airborne transmission. another factor of great importance is the possibility of updraft in the region of contamination, due to buoyancy of the virus-laden warm ejected air-mass. these slight updrafts can keep the virusladen droplets suspended in the air and enhance the inhalability of airborne droplets and droplet nuclei by recipients who are located farther away. the advection of airborne droplets and nuclei by the puff and subsequently as a cloud may represent transmission risk for times and distances much longer than otherwise previously estimated, and this is a cause of great concern [ , ] . note that if we ignore the motion of the puff of air carrying the droplets, as in the analysis of wells, the airborne droplets and nuclei would be subjected to such high drag that they could not propagate more than a few cm away from the exhaler, even under conditions of fast ejections, such as in a sneeze. this illustrates the importance of incorporating the correct multiphase flow physics in the modeling of respiratory emissions [ ] , which we shall discuss further here. it has been recently reported that the covid- virus lives in droplets and aerosols for many hours in laboratory experiments [ ] . at the receiving end, an increased concentration of virusladen airborne droplets and nuclei near the breathing zone increases the probability of them settling on the body or, more importantly, being inhaled. depending on its material and sealing properties, the use of a mask by the infected host can help reduce the number of virus-laden droplets ejected into the air. the use of a mask or other protective devices by the receiving host may reduce the probability of inhalation of the virus-laden airborne droplets and nuclei in a less effective way. the above description provides a clear sketch of the sequence of processes by which the virus is transferred host-to-host. this simplistic scenario, though pictorially evocative, is tremendously insufficient to provide science-based social distancing guidelines and recommendations. 
there is substantial variability (i) in the quantity and quality of contaminated droplets and aerosols generated by an infected person, (ii) in the manner by which the contaminated droplets and droplet nuclei remain afloat over longer distances and times, (iii) in the possibility of the contaminant being inhaled by a recipient and (iv) in the effectiveness of masks and other protection devices. violent exhalations, such as sneezing and coughing, yield many more virus-laden droplets and aerosols than breathing and talking [ , ]. not all coughing and sneezing events are alike: the formation of droplets by breakup of mucus and saliva varies substantially between individuals. significant variation in initial droplet size and velocity distribution has been reported in [ , , , ]. the measured droplet size distribution, particularly for transient biological emissions such as respiratory exhalations, also depends on ambient temperature and humidity and on the methodology and instrumentation used to characterize the size distribution [ , , ]. furthermore, it is of importance to consider the volume of air, and the pathogen load, being inhaled during breathing by the receiving host. thus, there is great variability in how many of the virus-laden aerosols reach the receiving host from the infected host. although less violent, breathing has also been suggested to be a significant source of contagion, since it occurs with great regularity and thus much more frequently [ , , , ]. furthermore, these works suggest different possible mechanisms of droplet generation in the lower respiratory tract for these less violent periodic ejection events, and as a result the ejected droplets and aerosols are typically much smaller. as a result, the effectiveness of ordinary cotton and gauze masks has been questioned [ ] (leung, bae). though the general mathematical framework to be presented in this paper applies to all forms of exhalations, our particular focus of demonstration will be on more violent ejections in the form of coughing and sneezing. the cdc guideline of social distancing of meters ( feet) is based on the disease transmission theory originally developed in the s and later improved by others [ , , ]. the current recommendation of feet as the safe distance is somewhat outdated, being based on the assumption that the direct route is the main mechanism of transmission. therefore, it can be improved in several ways: (i) by accurately accounting for the distance traveled by the puff and the droplets contained within it, while some droplets continuously settle out of the puff, (ii) by accurately evaluating the evaporation of droplets and the subsequent advection and dispersal of droplet nuclei as a cloud [ ], (iii) by incorporating the effect of adverse flow conditions that prevail in confined indoor environments, including elevators, aircraft cabins and public transit, or the favorable conditions of open spaces with a good breeze or cross ventilation, and (iv) by correctly assessing the effectiveness of masks and other protective devices [ ]. thus, a mechanistic, evidence-based understanding of the exhalation and dispersal of expelled respiratory droplets, and of their subsequent fate as droplet nuclei in varying scenarios and environments, is important. we must therefore revisit the safety guidelines and update them to reflect modern understanding. in particular, a multi-layered guideline that differentiates crowded classrooms, auditoriums, buses, elevators and aircraft cabins from open outdoor cafes is desired.
only through a reliable understanding of the underlying flow physics of virus transmission can one arrive at such nuanced guidance in designing case-specific social distancing guidelines. the objective of the paper is to aid in the development of a comprehensive scientific guideline for social distancing that (i) considers airborne transmission via state-of-the-art understanding of respiratory ejections and (ii) substantially improves upon the older models of [ , ]. towards this objective we present a coherent analytic and quantitative description of the droplet generation, transport, conversion to droplet nuclei, and eventual inhalation processes. we will examine the available quantitative relationships that describe the above processes and adapt them to the present problem. the key outcomes that we desire are (i) a simple universal description of the initial droplet size spectrum generated by sneezing, coughing, talking and breathing activities, a description that must recognize the current limitations of measurements of droplet size distribution under the highly transient conditions of respiratory events, (ii) a first-order mathematical framework that describes the evolution of the cloud of respiratory droplets and their conversion to droplet nuclei as a function of time, and (iii) a simple description of the inhalability of the aerosols, along with the corresponding evaluation of the effectiveness of different masks based on existing data reported to date. the physical picture and the quantitative results to be presented can then be used to study a statistical sample of different scenarios and derive case-specific guidelines. we anticipate the present paper to spawn future research in the context of host-to-host airborne transmission. after presenting the mathematical framework in section , the three different stages of transmission, namely droplet generation, transport and inhalation, will be independently analyzed in sections , and . these sections will consider the evolution of the puff of exhaled air and the droplets contained within it. section will put together the different models of the puff and droplet evolution described in the previous sections, underline their simplifications, and demonstrate their ability to make useful predictions. finally, conclusions and future perspectives are offered in section . we wish to describe the three main stages involved in the host-to-host transmission of the virus: droplet generation during exhalation, airborne transport, and inhalation by the receiving host. in the generation stage, virus-laden drops are generated throughout the respiratory tract by the exhalation air flow, which carries them through the upper airway toward the mouth, where they are ejected along with the turbulent puff of air from the lungs. the ejected puff of air can be characterized by the following four parameters: the volume q_pe, the momentum m_pe and the buoyancy b_pe of the ejected puff, along with the angle θ_e to the horizontal at which the puff is initially ejected. the initial momentum and buoyancy of the puff are given by m_pe = ρ_pe q_pe v_pe and b_pe = (ρ_a − ρ_pe) q_pe g, where v_pe is the initial velocity of the ejected puff, ρ_pe and ρ_a are the initial densities of the puff and the ambient, respectively, and g is the gravitational acceleration. the ejected droplets are characterized by their total number n_e, size distribution n_e(d), droplet velocity distribution v_de(d) and droplet temperature distribution t_de(d), where d is the diameter of the droplet.
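to fix ideas, the puff parameters just defined can be evaluated for a representative exhalation; every numerical value in this sketch (densities, volume, velocity) is an assumption chosen for illustration rather than a measured value from the text.

```python
# illustrative parameter values (assumed, not taken from the text)
rho_a = 1.204    # ambient air density at ~20 C [kg/m^3]
rho_pe = 1.172   # initial density of the warm, humid exhaled puff [kg/m^3]
Q_pe = 1.0e-3    # ejected puff volume [m^3]
v_pe = 10.0      # initial puff velocity [m/s]
g = 9.81         # gravitational acceleration [m/s^2]

M_pe = rho_pe * Q_pe * v_pe          # initial puff momentum [kg m/s]
B_pe = (rho_a - rho_pe) * Q_pe * g   # initial puff buoyancy [N]; positive means the puff rises
print(f"M_pe = {M_pe:.3e} kg m/s, B_pe = {B_pe:.3e} N")
```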
to simplify the theoretical formulation, here we assume the velocity and temperature of the ejected droplets to depend only on the diameter and to show no other variation. as we shall see in section , this assumption is not very restrictive, since the velocity and temperature of the droplets that remain within the puff very quickly adjust to those of the puff. both the ejected puff of air and the detailed distribution of droplets depend on the nature of the exhalation event (i.e., breathing, talking, coughing or sneezing), and also on the individual. this is followed by the transport stage, where the initially ejected puff of air and droplets are transported away from the source. the volume of the puff of air increases due to entrainment of ambient air. the puff velocity decreases due to both entrainment of ambient air and drag. since the temperature and moisture content of the ejected puff of air are typically higher than those of the ambient, the puff is also subjected to a vertical buoyancy force, which alters its trajectory from a rectilinear motion. the exhaled puff is turbulent, and both the turbulent velocity fluctuations within the puff and the mean forward velocity of the puff decay over time. the time evolution of the puff during the transport stage can then be characterized by the following quantities: the volume q_p(t), the momentum m_p(t) and the buoyancy b_p(t) of the puff, along with the density ρ_p(t) of air within the puff, which changes over time due to entrainment and evaporation. the trajectory of the puff is defined in terms of the distance traveled s(t) and the angle to the horizontal θ(t) of its current trajectory. following the work of bourouiba et al. [ ] we have chosen to describe the puff trajectory in terms of s(t) and θ(t). this information can be converted to horizontal and vertical positions of the centroid of the puff as a function of time. if we ignore the effects of thermal diffusion and ambient stratification between the puff and the surrounding air, then the buoyancy of the puff remains a constant, b_p(t) = b_pe. furthermore, as will be seen below, the buoyancy effects are quite weak in the early stages when the puff remains coherent, and thus the puff, to good approximation, can be taken to travel along a straight-line path, as long as other external flow effects are unimportant. to characterize the time evolution of the virus-laden droplets during the transport stage, we distinguish the droplets that remain within the puff, whose diameter is less than a cutoff (i.e., d < d_exit), from the droplets (i.e., d > d_exit) that escape out of the puff. as will be discussed subsequently in § , the cutoff droplet size d_exit decreases with time. thus, the total number of droplets that remain within the puff can be estimated as
\[ N(t) = \int_0^{d_{exit}(t)} n(d, t)\, dd. \]
however, the size distribution of droplets at any later time, denoted as n(d, t), is not the same as that at ejection. due to evaporation, the size distribution shifts to smaller diameters over time. we introduce the mapping d(d_e, t), which gives the current diameter of a droplet initially ejected as a droplet of diameter d_e. then, assuming a well-mixed condition within the puff, the airborne droplet and nuclei concentration (number per volume) distribution can be expressed as
\[ n(d, t) = \frac{1}{Q_p(t)}\, n_e\!\left(D^{-1}(d, t)\right) \frac{\partial D^{-1}(d, t)}{\partial d}, \]
where the inverse mapping d^{-1} gives the original ejected diameter of a droplet whose current size is d. the prefactor 1/q_p(t) accounts for the decrease in concentration due to the enlargement of the puff over time.
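a minimal numerical sketch of the concentration mapping above is given below; the d²-law evaporation mapping, the pareto-type ejected distribution and the power-law puff growth are all assumptions introduced here for illustration, standing in for the detailed models developed later.

```python
import numpy as np

K = 1.0e-9                 # d^2-law evaporation constant [m^2/s] (assumed)
Q_pe, t_e = 1.0e-3, 0.1    # initial puff volume [m^3] and virtual-origin time [s] (assumed)

def D_inv(d, t):
    # ejected diameter of a droplet whose current diameter is d (d^2-law)
    return np.sqrt(d**2 + K * t)

def n_ejected(d_e, B=1.0e-4):
    # assumed pareto-type ejected size distribution n_e = B / d^2
    return B / d_e**2

def Q_p(t):
    # puff volume growth by entrainment, Q_p ~ s'^3 with s' ~ t'^(1/4) (assumed)
    return Q_pe * ((t + t_e) / t_e) ** 0.75

def n(d, t):
    # airborne concentration distribution:
    # n(d, t) = n_e(D^-1(d, t)) * (dD^-1/dd) / Q_p(t)
    d_e = D_inv(d, t)
    jacobian = d / d_e          # derivative of D^-1 with respect to d
    return n_ejected(d_e) * jacobian / Q_p(t)

d = np.logspace(-6, -4, 3)      # current diameters from 1 to 100 microns
print(n(d, t=1.0))
```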
in this model, the airborne droplets and nuclei that remain within the coherent puff are assumed to be in equilibrium with the turbulent flow within the puff. under this assumption, the velocity v_d(d, t) and temperature t_d(d, t) of the droplets can be estimated with the equilibrium eulerian approximation [ , ]. when the puff's mean and fluctuating velocities fall below those of the ambient, the puff can be taken to lose its coherence. thus, the puff remains coherent and travels farther in a confined, relatively quiescent environment, such as an elevator, classroom or aircraft cabin, than in an open outdoor environment with cross-wind or in a room with strong ventilation. we define a transition time t_tr, below which the puff is taken to be coherent and the above described puff-based transport model applies. for t > t_tr, we take the aerosol transport and dilution to be dominated by ambient turbulent dispersion. accordingly, the late-time behavior of the total number of airborne droplets and nuclei and of their number density distribution is given by the theory of turbulent dispersion. it should be noted that the value of the transition time will depend both on the puff properties and on the level of ambient turbulence (see section . ). we now consider the final inhalation stage. depending on the location of the recipient host relative to that of the infected host, the recipient may be subjected either to the puff that still remains coherent, carrying a relatively high concentration of virus-laden droplets or nuclei, or to the more dilute dispersion of droplet nuclei, or aerosols. these factors determine the number and size distribution of virus-laden airborne droplets and nuclei the recipient host will be subjected to. the inhalation cycle of the recipient, along with the use of masks and other protective devices, will then dictate the aerosols that reach sensitive areas of the respiratory tract where infection can occur. following the above outlined mathematical framework, we will now consider the three stages of generation, transport and inhalation. knowing the droplet sizes, velocities and ejection angles resulting from an exhalation is the key first step in the development of a predictive ability for droplet dispersion and evolution. respiratory droplet size distributions have been the object of a large number of studies, as reviewed in [ ], and among them, those of duguid [ ] and loudon & roberts [ ] have received particular scrutiny as a basis for studies of disease transmission by nicas, nazaroff & hubbard [ ]. there are substantial differences in the methodologies used for quantification of respiratory emission sprays. few studies have used common instrumentation with enough overlap to reconstruct the full distribution of sizes. for example, there are important gaps in reporting the total volume or duration of air sampling; in addition, there are issues in reporting the effective evaporation rates used to back-compute the initial distribution and in the documentation of assumptions about the optical or shape properties of the droplets being sampled. furthermore, sensitivity analyses are often missing regarding the role of orientation or calibration of sensing instruments with respect to highly variable emissions from human subjects. finally, regarding direct high-speed imaging methods [ , ], the tools for precise quantification of complex unsteady fragmentation and atomization processes are only now being developed [ , , ].
there are far fewer studies on the velocities and angles of the droplets produced by atomizing flows. the studies of duguid and loudon & roberts were performed by allowing the exhaled droplets to impact various sheets or slides, with different procedures being used for droplets smaller than µm. the size of the stains on the sheets was observed, and the original droplet size was inferred from the size of the stains. to account for the difference between the droplet and the stain sizes, an arbitrary factor is applied, and droplets smaller than or microns are processed differently than larger droplets. the whole process makes the determination of the number of droplets smaller than microns less reliable. the data are replotted in fig. . many authors have attempted to fit the data with a log-normal probability distribution function. in that case, the number of droplets between diameter d and d + dd is n_e(d) dd, and the frequency of the ejected droplet size distribution is given by
\[ n_e(d) = \frac{b}{d}\, \exp\!\left[ -\frac{(\ln d - \hat{\mu})^2}{2 \hat{\sigma}^2} \right], \]
where dd is a relatively small diameter increment or bin width, b is a normalization constant, μ̂ is the expected value of ln d, also called the geometric mean, and σ̂ is the standard deviation of ln d, also called the geometric standard deviation (gsd). on the other hand, there have also been numerous studies of the fragmentation of liquid masses in various physical configurations other than the exhalation of mucosalivary fluid [ , , , ]. these configurations include spray formation on wave crests [ ], droplet impacts on solids and liquids [ ], wave impacts on vertical or finite walls/surfaces [ , , ], and jet atomization [ ]. these studies reveal a number of qualitative similarities between the various processes, which can best be described as a sequence of events. those events include a primary instability of sheared layers in high speed air flows [ ], followed by the nonlinear growth of the perturbation into thin liquid sheets. the sheets themselves may be destabilized by two routes, one involving the formation of taylor-culick end rims [ , ] and their subsequent deformation into detaching droplets [ ]. the other route to the formation of droplets is the formation of holes in the thin sheets [ , , ]. the holes then expand and form free hanging ligaments, which fragment into droplets through the rayleigh-plateau instability [ ]. considering the apparent universality of the process, one may infer that a universal distribution of droplet sizes may exist. indeed, the log-normal distribution has often been fitted to experimental [ ] and numerical data on jet formation [ , ], to droplet impacts on solid surfaces [ ], and to wave impacts on solid walls [ ]. the fit of the numerical results of [ ] is shown in fig. . however, this apparent universality of the log-normal distribution is questionable for several reasons. first, many other distributions, such as exponential, poisson, weibull-rosin-rammler, beta, or families of gamma or compound gamma distributions [ , ], capture to some extent the complexity of atomization physics. second, the geometric standard deviation (gsd) of the log-normal fits to the many numerical and experimental measurements is relatively small (of the order of . [ ] or . [ ]), while the wide range of scales in fig. seems to indicate a much larger gsd. indeed, nicas, nazaroff & roberts [ ] obtain a considerably larger σ̂.
one explanation for the smaller gsd in jet atomization studies, both numerical and experimental, is that the numerical or optical resolution is limited at the small scales. indeed, as grid resolution is increased, the observed gsd also increases [ ]. third, many authors [ , ] observe multimodal or bimodal distributions, which can be obtained for example by the superposition of several physical processes. this would arise in a very simple manner if the taylor-culick rim route produced drops of a markedly different size than the holes-in-film route. the non-newtonian nature of the fluid will also influence the instabilities and thereby the droplet generation process. other, less violent processes could lead to the formation of small droplets, such as the breakup of small films and menisci described in [ ], without going through the sequence of events described above.
figure : frequency of droplet size distribution n_e(d) ( /microns), replotted from the duguid [ ] and loudon & roberts [ ] cough data. the pareto b/d² fit is also plotted.
in order to elucidate this discrepancy, we take another look at the fit of the duguid data in fig. . we replot the data that were provided in table of duguid. since the data are given as counts n_i in bins defined by the intervals (d_i, d_{i+1}), we approximate n_e(d) at the collocation points d̄_i = (d_i + d_{i+1})/2 by n_i/(d_{i+1} − d_i) and replot them in fig. , since if plotted in the variables x = ln d and y = ln[d n_e(d)] the distribution ( ) appears as a parabola. when one attempts to fit a parabola between and µm, one obtains a log-normal distribution with σ̂ = . and μ̂ = ln( ) (for diameters in microns). however, the data above µm are completely outside this distribution. if instead the whole range from to µm is fit to a log-normal distribution, one obtains a very wide log-normal or, alternatively, a pareto distribution of the form b/d², shown in the figures together with the loudon & roberts data.
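the replotting procedure just described is easy to reproduce; in the sketch below the bin edges and counts are invented placeholders (not duguid's table), and the fits simply illustrate that a log-normal appears as a parabola and a pareto law as a straight line in the (ln d, ln[d n_e(d)]) variables.

```python
import numpy as np

# placeholder bin edges [microns] and per-bin counts (not the actual data)
edges = np.array([2., 4., 8., 16., 30., 50., 100., 250., 500., 1000.])
counts = np.array([50., 90., 110., 80., 40., 25., 12., 5., 1.])

d_c = 0.5 * (edges[:-1] + edges[1:])   # collocation points (bin midpoints)
n_e = counts / np.diff(edges)          # approximate frequency n_e(d)

x = np.log(d_c)
y = np.log(d_c * n_e)

# pareto n_e = b/d^2 gives y = ln(b) - x: a straight line of slope -1
slope, intercept = np.polyfit(x, y, 1)
# log-normal gives a parabola y = ln(b) - (x - mu)^2 / (2 sigma^2)
a2, a1, a0 = np.polyfit(x, y, 2)

print("fitted straight-line slope:", round(slope, 2))
if a2 < 0.0:
    print("sigma-hat (gsd) estimate:", round(float(np.sqrt(-1.0 / (2.0 * a2))), 2))
```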
If the smallest length scale is the thickness at which the thin liquid sheets break, then experimental observations in water [ ] suggest a scale of O( ) nm. Other fluids, including biological fluids or biologically contaminated fluids such as those investigated in [ , , ], may yield different length scales. Based on the above considerations, we take a histogram of droplet sizes that reads n_e(d) = b/d² for d₁ ≤ d ≤ d₂ (and zero otherwise), where d₁ is set to O( nm) and d₂ to O( mm) for simplicity; the total volume of the droplets follows by integration. (Figure: log-normal and Pareto fits to the data; the log-normal fit is adequate only up to µm, so only a fraction of the reliable data fits the log-normal, while the Pareto distribution is a reasonable capture of the data in the to µm range. In log-log coordinates the log-normal distribution appears as a parabola while the Pareto distribution is a straight line.) Since d₁ is four orders of magnitude smaller than d₂, the total number of droplets is well approximated by N_e ≈ b/d₁, and the cumulative number of droplets F(x) = N_e(d₁ ≤ d ≤ x), i.e., the number of droplets with diameter smaller than x, is very well approximated by F(x) ≈ b(1/d₁ − 1/x), so that F(10 d₁)/N_e = 90% of the droplets are of size less than 10 d₁. In other words, a numerical majority of the droplets are near the lower diameter bound; on the other hand, a majority of the volume of fluid is in the larger droplet diameters. The distribution of velocities and ejection angles has been investigated in the atomization experiments of [ ], which follow approximately the geometry of a high-speed stream peeling by a gas layer. These experiments were qualitatively reproduced in the numerical simulations of [ ]. To cite ref. [ ], "most of the ejection angles are in the range ° to °; however, it occurs occasionally that the drops are ejected with angles as high as °". On the other hand, there are to our knowledge no experimental data on the velocity of droplets as they are formed in an atomizing jet that could be used directly to estimate the ejection speed of droplets in exhalation. There are, however, numerical studies [ , ] in the limit of very large Reynolds and Weber numbers. The group velocity of waves formed on a liquid layer below a gas stream has been estimated by Dimotakis [ ] to scale as the gas velocity multiplied by the square root of the gas-to-droplet density ratio, where ρ_d is the droplet density. In [ , ] it was shown that this is also the vertical velocity of the interface perturbation. It is thus likely that this velocity plays a role at the end of the first instability stage of atomization. After this stage, droplets are detached and immersed in a gas stream of initial ejection velocity v_pe. Since the density ratio ρ_p/ρ_d is O(10⁻³), we expect the initial velocity of the ejected droplets at the point of their formation to be small. As we show below, it is interesting to note that the large-Reynolds-number limit may apply at the initial injection stage to a wide range of droplets in the spectrum of sizes found above. Indeed, the ejection Reynolds number of a droplet ejected at a velocity v_de into a surrounding air flow of velocity v_pe is Re_e = |v_pe − v_de| d/ν_a, where ν_a is the kinematic viscosity of the ejected puff of air (here taken to be the same as that of the ambient air). The largest Reynolds number is obtained for the upper bound d = d₂. For example, if the droplet's initial velocity is set to v_de ≈ 0, and the air flow velocity in some experiments [ ] is as high as m/s, the largest ejection Reynolds number is large, and the Reynolds number will stay above unity for droplets down to micron size.
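A quick numerical check of the ejection Reynolds number across the size spectrum is shown below. The air viscosity is standard; the ejection velocity is an assumed illustrative value, and the droplet velocity at formation is taken as approximately zero, as argued above.

```python
import numpy as np

nu_a = 1.5e-5        # kinematic viscosity of air, m^2/s
v_pe = 10.0          # assumed puff ejection velocity, m/s (illustrative)
v_de = 0.0           # droplet velocity at formation, taken ~0 as argued above

d = np.logspace(-7, -3, 5)                 # diameters from 100 nm to 1 mm
re_e = np.abs(v_pe - v_de) * d / nu_a      # Re_e = |v_pe - v_de| d / nu_a
for di, re in zip(d, re_e):
    print(f"d = {di*1e6:8.2f} um  ->  Re_e = {re:10.3f}")
```

With these values the Reynolds number crosses unity near micron sizes, consistent with the statement that the large-Re limit applies to most of the spectrum at injection.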
But as the puff of air and the droplets move forward, the droplet Reynolds number rapidly decreases, for the following reasons: (i) as will be seen below, the puff velocity decreases due to entrainment and drag; (ii) the droplet diameter decreases rapidly due to evaporation; (iii) the time scale τ_v on which the droplet accelerates to the surrounding fluid velocity of the puff is quite small; and (iv) very large droplets quickly fall out of the puff and do not form part of the airborne droplets. Thus, it can be established that droplets smaller than µm quickly equilibrate with the puff within the first few cm after exhalation. This section considers the evolution of the puff of hot moist air, with its droplets, after the initial ejection. First we present a simple modified model for the evolution of the puff of exhaled air, evaluating the effects of drag and of the inertia of the droplets within it. This enables us to then discuss the evolution of the droplet size spectrum and of the velocity and temperature distributions with simple first-order models. We then discuss the effect of non-volatiles on the droplet evolution and the formation of fully evaporated droplet nuclei, or aerosol particles. Late-time turbulent dispersion of the virus-laden droplet nuclei, once the puff of air within which they are contained stops being a coherent entity, is addressed at the end of the section. For the puff model, we follow the approach of Bourouiba et al. [ ], but include the added effects of drag and the mass of the injected droplets. In addition, a perturbation approach is pursued to obtain a simple solution with all the added effects included. Fig. shows the evolution of the puff along with the quantities that define it [ ]. We define t to be the time elapsed from exhalation and s(t) to be the distance traveled by the puff since exhalation. For analytical considerations we define the virtual origin to be at a distance s_e behind the real source, and t_e to be the time it takes for the puff to travel from the virtual origin to the real source. We define t′ = t + t_e to be the time from the virtual origin and s′ = s + s_e the distance traveled from the virtual origin; their introduction simplifies the analysis. From the theory of jets, plumes, puffs, and thermals [ ], the volume of the exhaled puff grows by entrainment. Bourouiba et al. [ ] defined the puff to be spheroidal in shape, with the transverse dimension evolving in a self-similar manner as r(t′) = α s′(t′), where α is related to the entrainment coefficient. The volume of the puff is then Q_p(t′) = η r³(t′) = η α³ s′³(t′), and the projected, or cross-sectional, area of the puff is A(t′) = β r²(t′) = β α² s′²(t′), where the constants η and β depend on the shape of the spheroid; for a spherical puff, η = 4π/3 and β = π. As defined earlier, the ejected puff at the real source (i.e., at t′ = t_e) is characterized by the volume Q_pe = η α³ s_e³, momentum M_pe = ρ_pe Q_pe v_pe, buoyancy B_pe = Q_pe(ρ_a − ρ_pe)g, and ejection angle θ_e. From the assumption of self-similar growth, the virtual origin is fixed by these ejection conditions, where the constant C depends on the drag coefficient of the puff and will be defined below. If we assume a spherical puff with an entrainment factor α = . [ ], the distance s_e depends only on the ejected volume. Experimental measurements suggest Q_pe to vary over the range . to . m³; accordingly,
s_e can vary from . to . m. Similar estimates of t_e can be obtained for a spherical puff: as Q_pe varies from . to . m³ and as the ejected velocity varies from to m/s, the value of t_e varies over the range . to . s. The horizontal and vertical momentum balances, in dimensional terms, involve the drag force on the puff, characterized by the drag coefficient C_D, and the momentum M_d of the droplets within the puff. While the puff velocity decreases rapidly over time, the velocity of the larger droplets changes slowly; note that in the analysis to follow we take the velocity of those droplets that remain within the puff to be the same as the puff velocity. (Figure: evolution of a typical cloud of respiratory multiphase turbulent droplet-laden air following breathing, talking, coughing, and sneezing activities; image adapted from [ ].) We use s_e and t_e as the length and time scales to define the nondimensional quantities s̃ = s′/s_e and t̃ = t′/t_e. With this definition the virtual origin becomes t̃ = 0, s̃ = 0 and the real source becomes t̃ = 1, s̃ = 1. In terms of the nondimensional quantities, the governing momentum equations involve three nondimensional parameters: the mass ratio of the initially ejected droplets to the initial air puff, r_m = ρ_d Q_de/(ρ_p Q_pe); the scaled drag coefficient C, proportional to C_D β/(η α); and the buoyancy parameter A = B_pe t_e²/(ρ_pe Q_pe s_e). In these equations, r_m is defined in terms of the mass of the initially ejected droplets. This is an approximation, since some of the droplets exit the puff over time. Even though the droplet mass decreases due to evaporation, the associated momentum is not lost from the system, since it remains within the puff. In any case, it will soon be shown that the value of r_m is small and that the role of the ejected droplets in the momentum balance is negligible. It should also be noted that, under the Boussinesq approximation, the small difference in density between the puff and the ambient is important only in the buoyancy term; for all other purposes the two are taken to be the same, and as a result the time variation of the puff density is not of importance (i.e., ρ_p = ρ_pe = ρ_a). The importance of the inertia of the ejected droplets, of the drag on the puff, and of buoyancy effects can now be evaluated in terms of the magnitudes of the nondimensional parameters. Typical experimental measurements of breathing, talking, coughing, and sneezing indicate that the value of r_m is smaller than . and often much smaller. Furthermore, as droplets fall out continuously [ ] from the turbulent puff, this ratio changes over time. Here we obtain an upper bound on the inertial effect of the injected droplets by taking the value of r_m to be . . The drag coefficient of a spherical puff of air is also typically small; again as an upper bound we take C_D = . , which yields C = . for a spherical puff. The value of the buoyancy parameter A depends on the density difference between the ejected puff of air and the ambient, which in turn depends on the temperature difference. For the entire range of ejected volumes and velocities, the value of A comes out smaller than . for temperature differences of the order of ten to twenty degrees between the exhaled puff and the ambient. Since all three parameters r_m, C, and A can be considered small perturbations, the governing equations can be readily solved in their absence to obtain the classical expressions for the nondimensional puff location and velocity, s̃ = t̃^(1/4) and ṽ = (1/4) t̃^(−3/4). With the inclusion of the drag term the governing equations become nonlinear.
Nevertheless, they allow a simple exact solution, which can be expressed as s̃ = t̃^(1/(4+C)). Thus, as is to be expected, the forward propagation of the puff slows down with increasing nondimensional drag parameter C. For small values of C the above can be expanded in a Taylor series, and a comparison of the exact solution with the asymptotic expansion shows its adequacy for small values of C. For small non-zero values of r_m, C, and A, the governing equations can be solved using regular perturbation theory, and the resulting expression is accurate to O(C², r_m², A²). Although the effect of buoyancy is to curve the trajectory of the puff, the leading-order effect of buoyancy is only to alter the speed of rectilinear motion. Also, as expected, the effect of non-zero r_m is to add to the total inertia and thereby slow down the motion of the puff. On the other hand, the effect of buoyancy is to slow the puff down if the initial ejection is angled down (i.e., if θ_e < 0) and to speed it up if the ejection is angled up, provided the ejected puff is warmer than the ambient. The time evolution of the puff as predicted by the above analytical expression is shown in fig. . Note that the point of ejection is given by t̃ = 1, s̃ = 1, and the initial non-dimensional velocity is ṽ(t̃ = 1) = 1/4. The results for four different combinations of C and r_m are shown; the buoyancy parameter has very little effect on the results and is therefore not shown. It should be noted that at late stages, when the puff velocity slows down, the effect of buoyancy can start to play a role, as indicated in experiments and simulations. It can be seen that the inertia of the ejected droplets, even with the upper bound of holding their mass constant at the initial value, has a negligible effect; only the drag on the puff has a significant effect in reducing the distance traveled. It can then be taken that the puff evolution is represented to good accuracy by ( ). Over a time span of nondimensional units the puff has traveled about . s_e and the velocity has dropped to about % of the initial velocity; by nondimensional units the puff has traveled about . s_e and the velocity has dropped to about . % of the initial velocity. The ejected droplets are made of a complex fluid that is essentially a mixture of oral fluids, including secretions from both the major and minor salivary glands. It is supplemented by several constituents of non-salivary origin, such as gingival crevicular fluid, exhaled bronchial and nasal secretions, serum and blood derivatives from oral wounds, bacteria and bacterial products, viruses and fungi, desquamated epithelial cells, other cellular components, and food debris [ ]. Therefore, it is not easy to determine precisely the transport properties of the droplet fluid. Although the measured surface tension is similar to that of water, the viscosity can be one or two orders of magnitude larger [ ], making drops less prone to coalescence [ , ]. In the present context, viscosity and surface tension may be important because they can influence the droplet size distribution, specifically by controlling coalescence and breakage. These processes matter only during the ejection stage; once droplets are in the range below µm, coalescence and breakup processes are impeded. Due to the dilute dispersed nature of the flow, droplet-droplet interaction can be ignored. The ejected swarm of droplets is characterized by its initial size spectrum as given in ( ).
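The drag-corrected power law above is straightforward to evaluate. The sketch below implements s̃ = t̃^(1/(4+C)), which reduces to the classical puff result s̃ = t̃^(1/4), ṽ(1) = 1/4 when C = 0; the values of C, s_e, and t_e used here are assumptions for illustration, not the paper's calibrated values.

```python
import numpy as np

def puff_state(t_tilde, C=0.1):
    """Nondimensional puff distance and velocity from the drag-corrected power law
    s~ = t~^(1/(4+C)); C = 0 recovers the classical puff scaling."""
    n = 1.0 / (4.0 + C)
    s = t_tilde ** n
    v = n * t_tilde ** (n - 1.0)
    return s, v

# Dimensional trajectory for assumed virtual-origin scales s_e, t_e.
s_e, t_e = 0.5, 0.1          # m, s (illustrative)
for t in (0.1, 1.0, 10.0):   # seconds after exhalation
    s_nd, v_nd = puff_state((t + t_e) / t_e, C=0.1)
    # s = s' - s_e converts back to distance from the real source.
    print(f"t = {t:5.1f} s: s = {s_nd * s_e - s_e:6.2f} m, v = {v_nd * s_e / t_e:7.3f} m/s")
```

The rapid decay of ṽ with t̃ is what justifies treating droplet inertia and buoyancy as small perturbations in the discussion that follows.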
The time evolution of the spectrum of droplets that remain within the puff, in terms of droplet size, velocity, and temperature, is the object of interest in this section. The evolution of the ejected droplets depends on four important parameters: the time scale τ_v on which the droplet velocity relaxes to the puff fluid velocity (in the absence of other forcings), the time scale τ_T on which the droplet temperature relaxes to the puff fluid temperature, the settling velocity w of the droplet within the puff fluid, and the Reynolds number Re based on the settling velocity. These quantities are given in [ , , ], where ρ is the droplet-to-air density ratio, c_r the droplet-to-air specific heat ratio, g the acceleration due to gravity, and ν_p and κ_p the kinematic viscosity and thermal diffusivity of the puff. In the above, Φ = 1 + 0.15 Re^0.687 and Nu = 2 + 0.6 Re^(1/2) Pr^(1/3) are the finite-Reynolds-number drag and heat transfer correction factors, where the latter is the well-known Ranz-Marshall Nusselt (or Sherwood) number correlation. Both corrections simplify in the Stokes regime for drops smaller than about µm. Here we take the Prandtl number of air to be Pr ≈ 0.7. In the Stokes limit, the velocity and thermal time scales and the settling velocity of the droplet increase as d², while the Reynolds number scales as d³. The values of these four parameters for varying droplet size are presented in fig. , where it is clear that the effect of finite Re becomes important only for droplets larger than µm; for smaller droplets, τ_v, τ_T, w, and Re are well approximated by their Stokes-regime expressions. The size of the droplets under investigation is sufficiently small, and the swarm sufficiently dilute, to prevent their coalescence. Furthermore, the droplet Weber number We = ρ_p w² d/σ can be estimated to be quite small even for droplets of size µm, where σ is the surface tension of the droplet and the relative velocity is well approximated by the settling velocity, as will be shown in the next section. Therefore, secondary breakup of droplets within the puff can be ignored, and the only way in which droplets change their size is via evaporation. According to the analysis of Langmuir [ ], the rate of mass loss due to evaporation of a small sphere depends on the diffusion of the vapor layer away from the sphere surface and, under reasonable hypotheses [ , , , ], can be expressed as dm/dt = −π d ρ_p D Nu ln(1 + B_M), where m is the mass of a droplet of diameter d, D is the diffusion coefficient of the vapor, ρ_p is the density of the puff air, and B_M = (Y_d − Y_p)/(1 − Y_d) is the Spalding mass number, in which Y_d is the mass fraction of water vapor at the droplet surface and Y_p is the mass fraction of water vapor in the surrounding puff. Under the assumption that Nu and B_M are nearly constant for small droplets, the above equation can be integrated [ ] to obtain the following d²-law (mapping) for the evolution of the droplet: d²(t) = d_e² − K t, where d_e is the initial droplet diameter at ejection and K = 4 D Nu ln(1 + B_M)/ρ has units of m²/s and thus represents an effective evaporative diffusivity. It is important to observe that this law would predict a loss of mass per unit area tending to infinity as the diameter of the drop tends to zero. This implies that the droplet diameter goes to zero in the finite time t_evap = d_e²/K, so that any time t yields a critical value of droplet diameter, d_e,evap(t) = (K t)^(1/2), and all droplets that were smaller than or equal to this at exhalation (i.e., d_e ≤ d_e,evap) have fully evaporated by t. The only parameter is K.
Assuming Nu = 2 and D = O(10⁻⁵) m²/s, even for very small values of B_M we obtain the evaporation time for a µm droplet to be less than a second. However, it appears that below a certain critical size the loss of mass due to evaporation slows down [ ]. This could partly be due to the presence of non-volatiles and other particulate matter within the droplet, whose effects were ignored in the above analysis and will be addressed below. It seems that ( ) can give reliable predictions for droplet diameter down to a few µm, with much slower evaporation rates at smaller sizes. Irrespective of whether the water completely evaporates, leaving only the non-volatile droplet nuclei, or the droplet evaporation merely slows down, the important consequence for the evolution of the droplet size distribution is that it becomes narrower and potentially centered around micron size. We now consider the motion of the ejected droplets while they rapidly evaporate. The equation of motion of the droplet is Newton's law, in which e_z is the unit vector along the vertical direction, m_p is the mass of puff fluid displaced by the droplet, and v_d and v_p are the vector velocities of the droplet and of the surrounding puff. Provided the droplet time scale τ_v is smaller than the time scale of the surrounding flow, which is the case for droplets of diameter smaller than µm, the above ODE can be solved perturbatively to obtain a leading-order solution [ , , ]. According to this solution, the equilibrium Eulerian velocity of the droplet is the local fluid velocity, plus the still-fluid settling velocity w of the droplet, plus a third term that arises from the inertia of the droplet. Though at ejection the droplet speed is smaller than the surrounding gas velocity, as argued earlier, the droplets quickly accelerate to approach the puff velocity. In fact, since the puff is decelerating, the droplet velocity will soon be larger than the local fluid velocity. As long as the droplet stays within the puff, the velocity and acceleration of the surrounding fluid can be approximated by those of the puff as |v_p| = ds/dt and |dv_p/dt| = d²s/dt². This allows evaluation of the relative importance of the third term (the inertial slip velocity) in terms of the puff motion [ ]: this ratio takes its largest value at the initial time of injection and then decays as 1/t. Using the range of possible values of t_e given earlier, this ratio is small for a wide range of initial droplet sizes. We thus confirm that, for the most part, droplet inertia can be ignored in its motion, and the droplet velocity can be taken to be simply the sum of the local fluid velocity and the still-fluid settling velocity of the droplet. While the effect of buoyancy on the puff was shown to be small, the same cannot be said of the droplets. The vertical motion of a droplet with respect to the surrounding puff, due to its higher density, depends only on the fall velocity w, which scales as d², with d in turn decreasing as given in ( ) due to evaporation. The droplet's gravitational settling velocity can be integrated over time to obtain the distance over which it falls as a function of time. We now set this fall distance (left-hand side) equal to the puff radius (right-hand side), where the droplet diameter at exhalation for which the two balance is denoted d_e,exit, indicating that a droplet of initial diameter d_e,exit has fallen a distance equal to the puff size at time t.
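Before drawing the consequences of this balance, here is a minimal numerical sketch of the two critical diameters just defined: d_e,evap from the d²-law and d_e,exit from equating the Stokes fall distance of an evaporating droplet to the puff radius. The water/air property values are standard; K, α, s_e, t_e, and C are assumed illustrative values, and the drag-corrected puff radius from the earlier sketch is reused.

```python
import numpy as np
from scipy.optimize import brentq

rho   = 833.0      # droplet-to-air density ratio (water/air, approximate)
nu_p  = 1.5e-5     # kinematic viscosity of puff air, m^2/s
g     = 9.81       # m/s^2
K     = 1e-9       # effective evaporative diffusivity, m^2/s (assumed)
alpha, s_e, t_e, C = 0.1, 0.5, 0.1, 0.1   # illustrative puff parameters

def d_e_evap(t):
    """Initial diameter that just finishes evaporating at time t (d^2-law)."""
    return np.sqrt(K * t)

def fall_distance(d_e, t):
    """Stokes fall distance of an evaporating droplet with d^2(tau) = d_e^2 - K tau."""
    tau = min(t, d_e ** 2 / K)   # integrate only until complete evaporation
    return rho * g / (18.0 * nu_p) * (d_e ** 2 * tau - 0.5 * K * tau ** 2)

def puff_radius(t):
    return alpha * s_e * ((t + t_e) / t_e) ** (1.0 / (4.0 + C))

def d_e_exit(t):
    """Initial diameter whose fall distance equals the puff radius at time t."""
    return brentq(lambda d_e: fall_distance(d_e, t) - puff_radius(t), 1e-7, 5e-3)

for t in (0.1, 1.0, 10.0):
    print(f"t = {t:5.1f} s: d_e,evap = {d_e_evap(t)*1e6:7.2f} um, "
          f"d_e,exit = {d_e_exit(t)*1e6:7.2f} um")
```

As t increases, d_e,evap grows while d_e,exit shrinks; their crossing is the limiting time t_lim discussed next.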
Thus all larger droplets, of size d_e > d_e,exit, have fallen out of the puff by time t, and we have been referring to these as the exited droplets. It should be pointed out that in the above simple analysis the vertical motion of the particle ignores the vertical component of the fluid velocity arising both from turbulent fluctuations and from the entrainment process. The two critical initial droplet diameters, d_e,evap and d_e,exit, are plotted in fig. a as functions of t. The only other key parameter of importance is K, whose value is varied over several orders of magnitude (in m²/s). In evaluating d_e,exit using ( ), apart from the property values of water and air, we have used the nominal values α = . , s_e = . m, and t_e = . s (as an example). The solid lines correspond to d_e,exit, which decreases with increasing t; for each value of K there exists a minimum d_e below which there is no solution to ( ), since the droplet fully evaporates before falling out of the puff. The dotted lines correspond to d_e,evap, which increases with t. The intersection of the two curves is marked by a solid square, which corresponds to the limiting time t_lim(K), beyond which the puff contains only fully evaporated droplet nuclei containing the viruses. Correspondingly, we can define a limiting droplet diameter d_e,lim(K). Given sufficient time, all initially ejected larger droplets (i.e., d_e > d_e,lim) would have fallen out of the puff, and all smaller droplets (i.e., d_e ≤ d_e,lim) would have evaporated to become droplet nuclei. At times smaller than the limiting time (i.e., for t < t_lim) we have the interesting situation of some droplets falling out of the puff (exited droplets), some still remaining as partially evaporated airborne droplets, and some fully evaporated to become droplet nuclei. This scenario is depicted in fig. a with an example of t = . s for one value of K, plotted as a dashed line. There can be a significant presence of non-volatile material, such as mucus, bacteria and bacterial products, viruses and fungi, and food debris, in the ejected droplets [ ]. However, the fraction of the ejected droplet volume Q_de that is made up of these non-volatiles varies substantially from person to person. The presence of non-volatiles alters the analysis of the previous sections in two significant ways. First, each ejected droplet, as it evaporates, will reach a final size dictated by the amount of non-volatiles initially in it: the larger the droplet size at initial ejection, the larger its final size after evaporation, since it contains a larger amount of non-volatiles. If ψ is the volume fraction of non-volatiles in the initial droplet, the final diameter of the droplet nucleus after complete evaporation of the volatile matter (i.e., water) is d_dr = ψ^(1/3) d_e. This size depends on the initial droplet size and composition. Note that even a small non-volatile fraction leaves the nucleus a substantial fraction of the initial ejected droplet size: ψ = 1%, for example, gives d_dr ≈ 22% of d_e. It has also been noted that the evaporation of water can be partial, depending on local conditions in the cloud or environment; we simply assume the fraction ψ also to account for any residual water retained within the droplet nucleus. The second important effect of the non-volatiles is to reduce the rate of evaporation. As evaporation occurs at the droplet surface, a fraction of the surface will be occupied by the non-volatiles, reducing the rate of evaporation. For small values of ψ, the effect of non-volatiles is quite small only at the beginning.
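The cube-root relation between ejected and final nucleus diameters is trivial to encode; a one-function sketch follows, with the example values chosen purely for illustration.

```python
def nuclei_diameter(d_e, psi):
    """Final droplet-nucleus diameter after the volatile (water) fraction evaporates.
    psi is the non-volatile volume fraction of the ejected droplet: d_dr = psi^(1/3) * d_e."""
    return psi ** (1.0 / 3.0) * d_e

# Example: a 10 um ejected droplet with 1% non-volatiles leaves a ~2.2 um nucleus.
print(f"{nuclei_diameter(10.0, 0.01):.2f} um")
```

Because of the cube root, an order-of-magnitude change in ψ moves the final nucleus size by only about a factor of two, which is why even trace non-volatile content matters.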
The effect of non-volatiles will increase over time, since the volume fraction of non-volatiles grows as the volatile matter evaporates. Because of this ever-decreasing evaporation rate, it may take longer for a droplet to shrink from its ejection diameter d_e to its final droplet nucleus diameter d_dr than is predicted by ( ). It should also be noted that intermittency of turbulence and heterogeneity of the vapor concentration and droplet distribution within the puff will influence the evaporation rate [ , , ]. Nevertheless, for simplicity, and for the purposes of the present first-order mathematical framework, we use the d²-law given in ( ), but with a smaller value of the effective K to account for the effects of non-volatiles and turbulence intermittency. This approximation is likely to be quite accurate in describing the early evolution of the droplet; only at late stages, as the droplet approaches its final diameter d_dr, will the d²-law be in significant error. Applying the analysis of the previous sections, taking into account the presence of non-volatiles, we separate the two time regimes t ≤ t_lim and t ≥ t_lim. In the case t ≤ t_lim, we have three types of droplets: (i) exited droplets, whose initial size at injection is greater than d_e,exit; (ii) droplets of size at ejection smaller than d_e,evap, which have completely evaporated to become droplet nuclei of size d_dr; and (iii) intermediate-size airborne droplets that are within the puff and still undergoing evaporation. We assume an equation of the form ( ) to apply approximately even in the presence of non-volatiles. With this balance between the fall distance of a droplet and the puff radius we obtain an expression for d_e,exit; the corresponding limiting diameter of complete evaporation can be obtained by setting d = d_e,evap ψ^(1/3) and d_e = d_e,evap in ( ). While these two estimates are in terms of the droplet diameter at injection, the corresponding current diameters at time t follow from the mapping. From the above expressions, we define t_lim to be the time when d_e,exit = d_e,evap, which in terms of the current droplet diameters becomes d_exit = d_evap. Beyond this limiting time (i.e., for t > t_lim) the droplets can be separated into only two types: (i) exited droplets, whose initial size at injection is greater than d_e,exit = d_e,evap, and (ii) droplets whose size at ejection is smaller, which have become droplet nuclei. The variation of t_lim and d_e,lim as functions of K is presented in fig. b. It is clear that as K varies over a wide range, t_lim ranges from . s to s, and correspondingly d_e,lim varies from to µm. We now put together all the above arguments to present a predictive model of the droplet concentration within the puff. The initial condition for the size distribution is set by the ejection process discussed earlier, and the simple Pareto distribution given in ( ) provides an accurate description. Based on the analysis of the previous sections, we separate the two time regimes t ≤ t_lim and t ≥ t_lim. In the case t ≤ t_lim, the droplet/aerosol concentration (the number per unit volume of the puff) can be expressed using the fact that equation ( ) is the mapping between the current droplet size and its size at injection. Due to the turbulent nature of the puff, the distribution of airborne droplets and nuclei is taken to be uniform within the puff.
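A compact sketch of this concentration model, for the simpler pure-water case, is given below: the current diameter d is mapped back to the ejection diameter through the d²-law, the initial Pareto spectrum is evaluated there, the Jacobian of the mapping is applied, and the result is diluted by the puff volume. All numerical values (K, the Pareto bounds, Q, and the fallout cutoff) are illustrative assumptions.

```python
import numpy as np

K = 1e-9                     # effective evaporative diffusivity, m^2/s (assumed)
d1, d2 = 1e-7, 1e-3          # Pareto bounds at ejection, m (assumed)

def n_e_pareto(d_e, b=1.0):
    """Initial (at-ejection) Pareto number density n_e = b/d_e^2 on [d1, d2]."""
    return np.where((d_e >= d1) & (d_e <= d2), b / d_e ** 2, 0.0)

def concentration(d, t, Q, d_exit):
    """Airborne number density per unit puff volume at time t (pure-water sketch).
    Maps the current size d back to the ejection size via the d^2-law, applies
    the Jacobian dd_e/dd of the mapping, and dilutes by the puff volume Q."""
    d = np.asarray(d, dtype=float)
    d_e = np.sqrt(d ** 2 + K * t)      # inverse of d^2(t) = d_e^2 - K t
    jacobian = d / d_e
    phi = n_e_pareto(d_e) * jacobian / Q
    phi[d_e > d_exit] = 0.0            # droplets that have already settled out
    return phi

d = np.logspace(-6, -4, 5)             # current diameters, 1-100 um
print(concentration(d, t=1.0, Q=5e-3, d_exit=1e-4))  # Q in m^3, d_exit in m (illustrative)
```

With non-volatiles, the same structure applies, but fully evaporated sizes pile up at d_dr = ψ^(1/3) d_e rather than vanishing.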
Quantities such as s̃, d_evap, and d_exit are as defined above, and the pre-factor 1/Q(t) accounts for the expansion of the puff volume. In the case t ≥ t_lim, only droplet nuclei remain within the puff, and the droplet number density spectrum simplifies accordingly. Here, the size of the largest droplet nucleus within the puff is related to its initial unevaporated droplet size by d_lim = d_e,lim ψ^(1/3), and the plot of d_e,lim as a function of K for a specific example of puff and droplet ejection was shown in fig. b. In this subsection we briefly consider the droplet temperature, since it plays a role in determining the saturation vapor pressure and the value of K. Following Pirhadi et al. [ ], we write the thermal equation of the droplet in terms of c_pw, the specific heat of water; k_p, the thermal conductivity of the puff air; L, the latent heat of vaporization; and T_d and T_p, the temperatures of the droplet and the surrounding puff. The first term on the right accounts for convective heat transfer from the surrounding air, and the second term accounts for the heat needed for phase change during evaporation. It can readily be established that the major portion of the heat required for droplet evaporation must come from the surrounding air through convective heat transfer. The equilibrium Eulerian approach [ ] can again be used to obtain the asymptotic solution of the thermal equation, and the droplet temperature can then be written explicitly in terms of the thermal time scale τ_T of the droplet introduced earlier. The second term on the right is negative and thus contributes to the droplet temperature being lower than that of the surrounding puff. A simple calculation with typical values shows that the contribution of the third term is quite small and can be ignored. As a result, the temperature difference between the droplet and its surroundings is largely controlled by the evaporation rate dm/dt, which decreases over time. Again, using the properties of water and air, and typical values of Nu and B_M, we can evaluate the temperature difference T_p − T_d to be typically a few degrees; thus, the evaporating droplets need to be only a few degrees cooler than the surrounding puff for evaporation to continue. When the puff equilibrates with the surroundings and its velocity falls below the ambient turbulent velocity fluctuation, the subsequent dynamics of the droplet cloud is governed by turbulent dispersion. This late-time evolution of the droplet cloud depends on many factors that characterize the surrounding air. This is where the difference between a small enclosed environment, such as an elevator or an aircraft cabin, and an open field matters, along with factors such as cross breeze and ventilation. A universal analysis of the late-time evolution of the droplet nuclei cloud is thus not possible, due to problem-specific details. The purpose of this brief discussion is to establish a simple scaling relation indicating when the puff evolution model presented above gives way to advection and dispersion by ambient turbulence. It should again be emphasized that the temperature difference between the puff fluid containing the droplet nuclei cloud and the ambient air may induce buoyancy effects, which for model simplicity will be taken into account as part of turbulent dispersion.
We adopt the classical scaling analysis of Richardson [ ], according to which the radius of a droplet cloud in the inertial range increases as the 3/2 power of time, r_lt = c ε^(1/2) (t + t₀)^(3/2), where c is a constant, ε is the dissipation rate (taken to be a constant property of the ambient turbulence), and t₀ is the time shift required to match the cloud size at the transition time between this simple late-time model and the puff model. The subscript lt stands for the late-time behavior of the radius of the droplet-laden cloud. We now make the simple proposal that there exists a transition time t_tr, below which the rate of expansion of the puff as given by the puff model is larger than dr_lt/dt computed from the above expression. During this early time, ambient dispersion effects can be ignored in favor of the puff model; for t > t_tr, ambient dispersion of the droplet-laden cloud becomes the dominant effect. The constants t₀ and t_tr can be obtained by satisfying two conditions: (i) the size of the droplet-laden cloud given by ( ) at t_tr matches the puff radius at that time, given by α s_e ((t_tr + t_e)/t_e)^(1/(4+C)), and (ii) the rate of expansion of the droplet-laden cloud by turbulent dispersion matches the rate of puff growth given by the puff model. From these two simple conditions we obtain the final expression for the transition time. Given a puff, characterized by its initial ejection length and time scales s_e and t_e, and the ambient level of turbulence characterized by ε, the value of the transition time can be estimated. If we take the entrainment coefficient α = . , the constant C = , and typical values of s_e = . m and t_e = . s, we can estimate t_tr = . s for a typical ambient dissipation rate. The transition time t_tr increases (or decreases) slowly with decreasing (or increasing) dissipation rate. Thus, the early phase of droplet evaporation described by the puff model is valid for O( ) s, before being taken over by ambient turbulent dispersion. However, it must be stressed that the Richardson scaling relation likely over-estimates ambient dispersion, as there is experimental and computational evidence suggesting that the power-law exponent in ( ) is lower than the Richardson value [ ]. But it must be remarked that, even with corresponding changes to late-time turbulent dispersion, the impact on the transition time can be estimated to be not very large. Also, it must be cautioned that, according to classical turbulent dispersion theory, during this late-time dispersal the concentration of virus-laden droplet nuclei within the cloud will not be uniform, but will tend to decay from the central region to the periphery. Nevertheless, for the sake of simplicity, here we assume ( ) to apply and take the droplet nuclei distribution to be uniform. According to the above simple hypothesis, the effect of late-time turbulent dispersion on the number density spectrum is primarily through the expansion of the cloud, while the total number of droplet nuclei within the cloud remains the same. Thus, the expressions ( ) and ( ) still apply; however, the expression for the volume of the cloud must be appropriately modified, while the location of the center of the expanding cloud of droplets is still given by the puff trajectory s(t), which has considerably slowed down during late-time dispersal. The strength of the above model is in its theoretical foundation and analytical simplicity.
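The two matching conditions can be reduced to a single root-finding problem: dividing the size match by the growth-rate match eliminates t₀, leaving one equation for t_tr. A sketch follows, reusing the drag-corrected puff radius and treating the Richardson constant and the dissipation rate as assumed illustrative values.

```python
import numpy as np
from scipy.optimize import brentq

alpha, s_e, t_e, C = 0.1, 0.5, 0.1, 0.1   # illustrative puff parameters
c_disp  = 1.0      # Richardson constant (assumed O(1))
epsilon = 1e-4     # ambient dissipation rate, m^2/s^3 (assumed)

def r_puff(t):
    return alpha * s_e * ((t + t_e) / t_e) ** (1.0 / (4.0 + C))

def drdt_puff(t):
    n = 1.0 / (4.0 + C)
    return alpha * s_e * n * ((t + t_e) / t_e) ** (n - 1.0) / t_e

def transition_residual(t):
    """Eliminate t0 via the growth-rate match, then require the size match at t.
    From dr_lt/dt = dr_puff/dt: (t + t0) = (3/2) * r_puff / (dr_puff/dt)."""
    t_plus_t0 = 1.5 * r_puff(t) / drdt_puff(t)
    r_late = c_disp * np.sqrt(epsilon) * t_plus_t0 ** 1.5
    return r_late - r_puff(t)

t_tr = brentq(transition_residual, 1e-3, 1e3)
print(f"transition to ambient dispersion at t_tr = {t_tr:.2f} s")
```

Varying epsilon over a few decades shifts t_tr only slowly, consistent with the weak sensitivity to the dissipation rate noted above.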
But the validity of the approximations and simplifications must be verified in applications to the specific scenarios being considered. For example, considering the variability in composition, turbulence intermittency, initial conditions of emission, and the state of the ambient, direct observations show that the transition between the puff-dominated and the ambient-flow-dominated fate of respiratory droplets varies over O( - ) s [ ]. This section mainly surveys the existing literature on what fraction of the droplets and aerosols at any location gets inhaled by the receiving host, and how this is modified by the use of masks. These effects, modeled as inhalation (aspiration) and filtration efficiencies, will then be incorporated into the puff-cloud model. Pulmonary ventilation (breathing) has a cyclic variation that changes markedly with age and metabolic activity. The intensity of breathing (minute ventilation) is expressed in L/min of inhaled and exhaled air. At rest, the ventilation rate is about - L/min, and it increases to about - L/min for mild activities; during exercise, ventilation increases significantly, depending on age and the metabolic needs of the activity. In the majority of earlier studies of airflow and particle transport and deposition in human airways, the transient nature of breathing was ignored, for simplification and to reduce the computational cost. Haubermann et al. [ ] performed experiments on a nasal cast and found that particle deposition for constant airflow is higher than for cyclic breathing. Shi et al. [ ] performed simulations of nanoparticle deposition in the nasal cavity under cyclic airflow and found that the effects of transient flow are important. Grgic et al. [ ] and Horschler et al. [ ] performed experimental and numerical studies, respectively, of flow and particle deposition in a human mouth-throat model and in the human nasal cavity. Particle deposition in a nasal cavity under cyclic breathing conditions was investigated by Bahmanzadeh et al. [ ], Naseri et al. [ ], and Kiasadegh et al. [ ], using unsteady Lagrangian particle tracking; they found differences in the predicted local deposition between unsteady and equivalent steady flow simulations. In many of these studies, a sinusoidal variation of the inhaled air flow is used, that is, Q(t) = Q_max sin(2πt/T), where Q_max is the maximum flow rate and T is the period of the breathing cycle for an adult during rest or mild activity. The period of breathing also changes with age and the level of activity. Haghnegahdar et al. [ ] investigated the transport, deposition, and immune-system response of droplets laden with the low-strain influenza A virus (IAV). They noted that the shape of the cyclic breathing is subject-dependent and also changes between nose and mouth breathing, and they provided an eight-term Fourier series for a more accurate description of the breathing cycle; the hygroscopic growth of droplets was also included in their study. The aspiration of particles through the human nose was studied by Ogden and Birkett [ ] and Armbruster and Breuer [ ]. Accordingly, the aspiration efficiency η_a is defined as the ratio of the concentration of inhaled particles to the ambient concentration. Using the results of earlier studies and also his own work, Vincent [ ] proposed a correlation for evaluating the inhalability of particles: the aspiration efficiency of particles smaller than 100 µm is given as η_a(d) = 0.5[1 + exp(−0.06 d)] for d < 100 µm.
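The inhalability correlation is one line of code; the sketch below encodes it (the elided correlation in the text appears to correspond to the standard inhalable-fraction convention, restored above, with d the aerodynamic diameter in microns).

```python
import numpy as np

def aspiration_efficiency(d):
    """Inhalability vs aerodynamic diameter d (microns):
    eta_a(d) = 0.5 * (1 + exp(-0.06 d)) for d < 100 um."""
    d = np.asarray(d, dtype=float)
    return np.where(d < 100.0, 0.5 * (1.0 + np.exp(-0.06 * d)), 0.5)

print(aspiration_efficiency([1.0, 10.0, 50.0]))  # close to 1 for micron-sized aerosols
```

Note that the curve is nearly 1 below a few microns, which is the size range into which the exhaled droplets collapse; this is why near-perfect inhalability is concluded below.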
(Figure: influence of the thermal plume on aspiration efficiency [ ].) Here, d is the aerodynamic diameter of the particles in microns. While the above correlation captures the general trend that larger particles are more difficult to inhale, it has a number of limitations: it was developed for mouth-breathing with the head oriented toward the airflow direction, with speeds in the range of m/s to m/s. Experimental investigations of aerosol inhalability were reported by Hsu and Swift [ ], Su and Vincent [ , ], Aitken et al. [ ], and Kennedy and Hinds [ ]. Dai et al. [ ] performed in-vivo measurements of the inhalability of large aerosol particles in calm air and fitted their data to several correlations; for the calm-air condition they suggested a correlation in which d must again be in microns. Computational modeling of the inhalability of aerosol particles has been reported by many researchers [ , , , , , ], and interpersonal exposure was studied in [ , ]. The influence of the thermal plume was studied by Salmanzadeh et al. [ ]. Naseri et al. [ ] performed a series of computational simulations and analyzed the influence of the thermal plume on particle aspiration efficiency when the body temperature is higher or lower than the ambient. Their results are reproduced in figure . Here the case where the body temperature T_b = . °C exceeds the ambient temperature T_a = . °C (upward thermal plume) and the case where T_b = . °C is below T_a = . °C (downward thermal plume) are compared with the isothermal case studied by Dai et al. [ ]. It is seen that when the body is warmer than the surroundings, the aspiration ratio increases; when the ambient air is at a higher temperature than the body, the inhalability decreases compared to the isothermal case. In light of the results of the previous section, it can be concluded that at a distance of O( ) m the ejected, mostly water, droplets have sufficiently reduced in size that these O( ) µm aerosols have near-perfect inhalability. Using a respiratory face mask is a practical approach against exposure to airborne viruses and other pollutants. Among the available facepiece respirators, N95 and surgical masks are considered to be highly effective [ , ]. (Figure: filtration efficiency of different respiratory masks under normal breathing conditions [ , ].) The N95 mask has a filtration efficiency of more than % in the absence of face leakage [ , ]. Surgical masks are used extensively in hospitals and operating rooms [ ]; nevertheless, there have been concerns regarding their effective filtration of airborne bacteria and viruses [ , , ]. There is often discomfort in wearing respiratory masks for extended durations, which increases the risk of spread of infection. The breathing resistance of a mask is directly related to the pressure drop of the filtering material. The efficiency of respiratory masks varies with several factors, including the intensity and frequency of breathing as well as the particle size [ ]. The filtration efficiencies of different masks under normal breathing conditions, as reported by Zhang et al. [ ] and Feng et al. [ ], in the absence of leakage, are shown in figure . As an example, the measured filtration efficiency of the surgical mask can be fitted by a correlation in which the droplet nucleus diameter d must be in microns. It is seen that the filtration efficiencies of different masks vary significantly, with N95 having the best performance, followed by the surgical mask. It is also seen that all masks can capture large particles.
The N95, surgical, and procedure masks remove aerosols larger than a couple of microns. Cotton and gauze masks capture a major fraction of particles larger than µm. The capture efficiency of all masks also shows an increasing trend as the particle size becomes smaller than nm, due to the Brownian motion of nanoparticles. The figure also shows that the filtration efficiencies of all respiratory masks drop for particle sizes in the range of nm to about µm; this is because, in this size range, both inertial impaction and the Brownian diffusion effect are small, so the mask capture efficiency is reduced. Based on these results, and on the earlier finding that most ejected droplets within the cloud reduce their size substantially and can become sub-micron aerosol particles by about O( − ) m distance, it can be stated that only professional masks such as N95, surgical, and procedure masks provide a reliable reduction in the inhaled particles. Hence, it is important for healthcare workers to have access to high-grade respirators upon entering a room or space with infectious patients [ ]. Another benefit of a mask is that it eliminates the momentum of the expelled puff during sneezing, coughing, speaking, and breathing, and reduces the distance over which the droplet cloud is transported; therefore, wearing a mask will reduce the chance of transmission of infectious viruses. It should be emphasized that the concentration that a receiving host will inhale (φ_inhaled) depends on the local concentration in the breathing zone adjusted by the aspiration efficiency given by equations ( ) and ( ) (or plotted in figure ). When the receiving host wears a mask, an additional important correction is needed, multiplying by a factor (1 − η_f), where η_f is the filtration efficiency plotted in figure . That is, φ_inhaled(d, t) = φ(d, t) η_a(d) [1 − η_f(d)], where φ(d, t) is the droplet nuclei concentration in the breathing zone given in ( ) or ( ). It is seen that the concentration of inhaled droplets in the super-micron range decreases significantly when a mask is used, but the exposure to smaller droplets, particularly in the size range of nm to µm, varies with the kind of mask used. The object of this section is to put together the different models of puff and droplet evolution described in the previous sections, underline their simplifications, and demonstrate their ability to make useful predictions. Such results under varying scenarios can then potentially be used for science-based policy making, such as establishing multi-layered social distancing guidelines and other safety measures. In particular, we aim at modeling the evolution of the puff and the concentration of airborne droplets and nuclei that remain within the cloud, so that the probability of potential transmission can be estimated. As discussed earlier, the virus-laden droplets exhaled by an infected host undergo a number of transformations before reaching the next potential host. To prevent transmission, current safety measures impose a safety distance of two meters. Furthermore, cloth masks are widely used by the public, and their effectiveness has been shown to be questionable for droplets and aerosols of size about a micron. The adequacy of these common recommendations and practices can be evaluated by investigating the concentration of airborne droplets and nuclei at distances larger than one meter and the probability of them being around a micron in diameter, since such an outcome would substantially increase the chances of transmission.
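The inhaled-concentration correction is a pointwise product of three curves. In the sketch below, the aspiration efficiency is the standard correlation from the earlier sketch; the surgical-mask curve is a hypothetical stand-in for the elided fit (high capture at both ends of the size range with a dip near a few hundred nanometres, as the figure describes), and the breathing-zone concentrations are illustrative numbers, not model output.

```python
import numpy as np

def aspiration_efficiency(d):
    # Standard inhalable-fraction correlation (d in microns), as in the earlier sketch.
    return 0.5 * (1.0 + np.exp(-0.06 * np.asarray(d, dtype=float)))

def surgical_mask_efficiency(d):
    """Hypothetical stand-in for the elided surgical-mask fit: efficiency dips
    near ~0.3 um where neither impaction nor Brownian diffusion is effective."""
    d = np.asarray(d, dtype=float)
    dip = 0.55 * np.exp(-0.5 * (np.log10(d / 0.3) / 0.5) ** 2)
    return np.clip(1.0 - dip, 0.0, 1.0)

def inhaled_concentration(phi, d):
    """phi_inhaled(d, t) = phi(d, t) * eta_a(d) * (1 - eta_f(d))."""
    return phi * aspiration_efficiency(d) * (1.0 - surgical_mask_efficiency(d))

d   = np.array([0.3, 1.0, 5.0, 20.0])    # diameters, microns
phi = np.array([40.0, 25.0, 10.0, 1.0])  # droplets per liter near the breathing zone (illustrative)
print(inhaled_concentration(phi, d))
```

Swapping in a cotton or gauze efficiency curve in place of the surgical one shows immediately why exposure in the sub-micron range depends so strongly on the kind of mask.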
In the following we examine two effects: the presence of small quantities of non-volatile matter in the ejected drops, which remain as droplet nuclei after evaporation, and the adequacy of the log-normal or Pareto distribution to quantify the number of droplets in the lower diameter classes. First, we consider predictions based on a currently used model, in which the droplets are allowed to evaporate fully. Then we consider improved predictions based on the present model, in which the effect of non-volatiles and the motion of the puff are accurately modeled. Let us consider the situation of speaking or coughing, whose initial puff volume and momentum are such that they yield s_e ≈ . m and t_e ≈ . s. Under this specific condition, as shown in figure , the puff travels about m in about s. For this simple example scenario, we examine our ability to predict airborne droplet and nuclei concentration, as an important step towards estimating the potential for airborne transmission in commonly encountered situations. In most countries, current guidelines are based on the work of Xie et al. [ ], who revisited previous guidelines [ ] with improved evaporation and settling models. They identified the possibility that, due to evaporation, the droplets quickly become vanishingly small before reaching a significant distance and may thus represent a minor danger for transmission, due to their minimal virus loading. This scenario is shown in figure , where we present the evolution of the drop size spectrum while the droplets are transported by the ejected puff. (Figure: evolution of the drop size distribution spectra according to the currently used evaporation models [ , ].) The initial droplet size distribution is taken to be that measured by Duguid [ ], modeled with a log-normal distribution, which in the Monte Carlo approach is randomly sampled with one million droplets divided into one thousand diameter classes. Each droplet is then followed while evaporating and falling. The evaporation model is taken to be ( ), with a small effective diffusion coefficient K. This value is computed under the assumption that the drops are made of either pure water or a saline solution [ ] and that the air has about % humidity; therefore, this is an environment unfavorable to evaporation, and consequently drop size reduction happens relatively slowly. However, from the figure it is clear that, even in this extreme case, after a few tens of centimeters, and within a second, all droplets have evaporated down to a size below µm. This is in line with the predictions of Xie et al. [ ]. Naturally, if the air is drier, the effective evaporation coefficient will be larger and the droplet size spectrum will evolve even faster, leaving virtually all droplets in the puff smaller than µm. In the model, we set the minimum diameter that all drops can achieve equal to µm (shown by the single point indicated in the figure) so as to emphasize this effect of the model. Recall that intermittency of turbulence within the puff can create clusters of droplets and concentrations of vapor, and thereby significantly alter the evaporation rate [ , , ]; hence, our estimate of the evaporation time, as governed by the d²-law ( ), is a lower bound. As discussed earlier, there is current consensus that droplets ejected during sneezing or coughing contain, in addition to water, other biological and particulate non-volatile matter.
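The Monte Carlo procedure with non-volatiles described next reduces, per droplet, to a fallout test plus the cube-root shrinkage; a minimal sketch is given below. The log-normal parameters, the non-volatile fraction, and the fallout cutoff are all assumed illustrative values, standing in for the full time-dependent settling calculation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample a million ejection diameters (um) from an illustrative log-normal.
mu, sigma = np.log(12.0), 0.7
d_e = rng.lognormal(mu, sigma, 1_000_000)

psi       = 0.01   # non-volatile volume fraction (assumed)
d_exit_um = 60.0   # fallout cutoff for this time and puff (illustrative)

airborne = d_e <= d_exit_um                     # droplets above the cutoff settle out
d_nuclei = psi ** (1.0 / 3.0) * d_e[airborne]   # each survivor shrinks to its nucleus

print(f"{airborne.mean():.1%} of ejected droplets remain airborne;")
print(f"median nucleus diameter = {np.median(d_nuclei):.2f} um "
      f"(a rigid shift of the initial distribution)")
```

The rigid shift printed here is exactly the behavior seen in the figures discussed below: the airborne part of the initial distribution survives, translated toward smaller diameters.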
Specifically, viruses themselves are of size almost 0.1 µm. Here we examine the evolution of the droplet size distribution in the presence of non-volatile matter. It will become clear that, in this case, even a small amount of non-volatile matter plays an important role, with the evaporation coefficient being a minor factor in deciding how fast the final state is reached. In figure , we show the final distribution of droplets under two scenarios, where the initially ejected droplets contain . % and . % of non-volatile matter. In figure a, the initial drop size distribution is modeled as a log-normal distribution (i.e., as in fig. ), whereas in figure b it is modeled according to the Pareto distribution, with initial droplet sizes varying between and µm. This range is smaller than that suggested earlier; however, drops larger than µm fall out of the cloud and therefore are not important for airborne transmission, and droplets initially smaller than µm carry a much smaller viral load. Here the "final droplet size distribution" indicates the number of droplets that remain within the puff after all the larger droplets have fallen out and all others have completed their evaporation to become droplet nuclei. This final number of droplet nuclei as a function of size does not vary with time or distance. The size distribution is computed here as in figure , with random sampling from the initial log-normal or Pareto distribution. As before, these computations use a small effective evaporation coefficient K. However, there are two important differences. First, each droplet is allowed to fall vertically according to its time-dependent settling velocity w, which decreases over time as the droplet evaporates; integration of the fall velocity over time gives the distance traveled by the droplet relative to the puff, and droplets whose fall distance exceeds the size of the puff are removed from consideration. Second, each droplet that remains within the puff evaporates to the limiting droplet-nucleus size dictated by the initial amount of non-volatile matter contained within it. For ψ = . % non-volatile matter, the final aerosol size cannot decrease below % of the initial droplet diameter, whereas for . % of non-volatile matter, the final droplet size cannot decrease below % of the initial diameter. From fig. , it is clear that when evaporation is complete, the drop size distribution rigidly shifts towards smaller diameters, with a cut-off upper diameter due to the settling of large drops (these cut-offs are the upper limits of the blue and red curves). Essentially, the viruses initially carried by droplets of size smaller than d_e,exit still remain within the cloud almost unchanged, representing a more dangerous source of transmission than predicted by the conventional assumption of near-full evaporation. Again, it is important to note that the final droplet size distribution is established rapidly, even with a somewhat low effective evaporative diffusivity, and without accounting for the effect of localized moisture in the cloud in further reducing the rate. Figure also illustrates the important difference between the drop size distributions: the Pareto distribution predicts a much larger number of drops in the micron and sub-micron range, possibly the most dangerous range with respect to both aspiration efficiency and filtration inefficiency.
In this section we demonstrate the efficacy of the simple model presented in ( ) and ( ) for the prediction of droplet/aerosol concentration. In contrast to the Monte Carlo approach of the previous subsection, where the evolution of each droplet was accurately integrated, here we use the analytical prediction along with its simplifying assumptions. The cases considered are identical to those presented in figure , with ψ = . % and the same value of K. The initial droplet size distributions considered are again the log-normal and Pareto distributions. In this case, however, we underline that the quantity of importance in airborne transmission is not the total number of droplet nuclei, but rather their concentration in the proximity of a susceptible host. Accordingly, we plot in figure the airborne droplet and nuclei concentration (per liter of volume) as a function of droplet size. These results do not take into account the aspiration and filtration efficiencies given in ( ). Here the area under the curve between any two diameters yields the number of droplets within that size range per liter of volume within the cloud. At the early times of t = . and . s, we see that larger droplets above a certain size have fallen out of the cloud, while droplet nuclei smaller than d_evap have fully evaporated and their distribution is a rigidly shifted version of the original distribution; the distribution of intermediate-size airborne droplets reflects the fact that they are still undergoing evaporation. Unlike in figure , the concentration continues to fall even after t_lim ≈ . s, when the number and size of droplets within the cloud have reached their limiting values. This is simply because the volume of the puff continues to increase, which continuously dilutes the aerosol concentration. Most importantly, the results of the simple model presented in ( ) and ( ) are in excellent agreement with those obtained from the Monte Carlo simulation. The increasing size of the contaminated cloud with time can be predicted with ( ), and the centroid is given by the scaling law ( ). As the final step, we include the effect of the aspiration and filtration efficiencies to compute the concentration of droplet nuclei that get into the receiving host. In computing φ_inhaled using ( ), we take the droplet/nuclei concentration at the location of the receiving host to be that computed and presented in figure . We consider the receiving host to be using a surgical mask, whose efficiency was shown in figure and given in ( ); the aspiration efficiency of the receiving host is taken to be that given in ( ). The results are presented in figure , which includes the initial log-normal and Pareto distributions (green lines). It is clear that, due to the filtration efficiency of the surgical mask, no droplet nuclei of size greater than µm get into the receiving host. For smaller droplet nuclei, the inhaled concentration is substantially lower due to both the aspiration and the filtration efficiencies. Clearly, the inhaled concentration will be higher and the size range wider, approaching those shown in figure , with the use of cotton or gauze masks. (Figure: droplet/aerosol concentration evolution as predicted by the analytical model presented in ( ) and ( ); the left frame shows the evolution starting from the log-normal distribution and the right frame the evolution starting from the Pareto distribution. Both cases use the same value of K.)
The primary goal of this paper is to provide a unified theoretical framework that accounts for all the physical processes of importance, from the ejection of droplets by breathing, talking, coughing, and sneezing to the inhalation of the resulting aerosols by the receiving host. These processes include: (i) forward advection of the exhaled droplets with the puff of air initially ejected; (ii) growth of the puff by entrainment of ambient air and its deceleration due to drag; (iii) gravitational settling of some of the droplets out of the puff; (iv) modeling of droplet evaporation, assuming that the d²-law prevails; (v) the presence of non-volatile compounds, which form the droplet nuclei left behind after evaporation; and (vi) late-time dispersal of the droplet-nuclei-laden cloud due to ambient turbulent dispersion. Despite the complex nature of the physical processes involved, the theoretical framework results in a simple model for the airborne droplet and nuclei concentration within the cloud as a function of droplet diameter and time, which is summarized in equations ( ), ( ), and ( ). This framework can be used to calculate the concentration of virus-laden nuclei at the location of any receiving host as a function of time. As additional processes, the paper also considers (vii) the efficiency of aspiration of the droplet nuclei by the receiving host and (viii) the effectiveness of different kinds of masks in filtering nuclei of varying size. It must be emphasized that the theoretical framework has been designed to be simple and therefore involves a number of simplifying assumptions; hence, it must be considered a starting point. By relaxing the approximations and by adding additional physical processes of relevance, more complex theoretical models can be developed. One of the primary advantages of such a simple theoretical framework is that varying scenarios can be considered quite easily: these different scenarios include varying the initial puff volume, puff velocity, number of droplets ejected, their size distribution, non-volatile content, ambient temperature, humidity, and ambient turbulence. (Figure: droplet nuclei concentration inhaled by the receiving host wearing a surgical mask, as predicted by the analytical model presented in ( ) and ( ) with the aspiration and filtration efficiencies given in ( ) and ( ); the left and right frames show the results for initial log-normal and Pareto distributions, and both cases use the same value of K.) The present theoretical framework can be, and perhaps must be, improved in several significant ways in order for it to become an important tool for reliable prediction of transmission. (i) Accurate quantification of the initially ejected droplets still remains a major challenge. Further high-quality experimental measurements and high-fidelity simulations [ ] are required, especially mimicking the actual processes of breathing, talking, coughing, and sneezing, to fully understand the entire range of droplet sizes produced during exhalation. (ii) As demonstrated above, the rate at which an ejected droplet evaporates plays an important role in determining how fast it reaches its fully evaporated state. It is thus important to calculate more precisely the evaporation rate of realistic, non-volatile-containing droplets resulting from human exhalation. The precise value of the evaporation rate may not be important when droplets evaporate fast, since all droplets remaining within the puff will have completed their evaporation.
but under slow evaporation conditions, accurate evaluation of the evaporation rate is important. (iii) the assumption of a uniform spatial distribution of droplets within the puff, and later within the dispersing cloud, is a serious approximation [ ]. the intermittency of turbulence within the initial puff, and later within the droplet cloud, is important to understand and to couple with the evaporation dynamics of the droplets. in addition to the role of intermittency, even the mean concentration of airborne droplets and nuclei may decay from the center to the outer periphery of the puff/cloud. characterization of this inhomogeneous distribution will improve the predictive capability of the model. (iv) the presence of significant ambient mean flow and turbulence, either from indoor ventilation or outdoor cross-flow, will greatly influence the dispersion of the virus-laden droplets, but accounting for their effects can be challenging even in experimental and computational approaches. detailed experiments and highly-resolved simulations of specific scenarios should be pursued, but it will not be possible to cover all possible scenarios with such an approach. a simpler approach, in which the above theoretical framework is extended to include additional models such as random-flight models (similar to those pursued in calculations of atmospheric dispersion of pollutants [ ]), may be more promising.
key: cord- - tj eve authors: porter, mason a. title: nonlinearity + networks: a vision date: - - journal: nan doi: nan sha: doc_id: cord_uid: tj eve i briefly survey several fascinating topics in networks and nonlinearity. i highlight a few methods and ideas, including several of personal interest, that i anticipate to be especially important during the next several years. these topics include temporal networks (in which the entities and/or their interactions change in time), stochastic and deterministic dynamical processes on networks, adaptive networks (in which a dynamical process on a network is coupled to dynamics of network structure), and network structure and dynamics that include "higher-order" interactions (which involve three or more entities in a network). i draw examples from a variety of scenarios, including contagion dynamics, opinion models, waves, and coupled oscillators.
in its broadest form, a network consists of the connectivity patterns and connection strengths in a complex system of interacting entities [ ]. the most traditional type of network is a graph g = (v, e) (see fig. a), where v is a set of "nodes" (i.e., "vertices") that encode entities and e ⊆ v × v is a set of "edges" (i.e., "links" or "ties") that encode the interactions between those entities. however, recent uses of the term "network" have focused increasingly on connectivity patterns that are more general than graphs [ ]: a network's nodes and/or edges (or their associated weights) can change in time [ , ] (see section ), nodes and edges can include annotations [ ], a network can include multiple types of edges and/or multiple types of nodes [ , ], it can have associated dynamical processes [ ] (see sections , , and ), it can include memory [ ], connections can occur between an arbitrary number of entities [ , ] (see section ), and so on. associated with a graph is an adjacency matrix a with entries a_ij. in the simplest scenario, edges either exist or they don't. if edges have directions, a_ij = 1 when there is an edge from entity j to entity i and a_ij = 0 when there is no such edge. when a_ij = 1, node i is "adjacent" to node j (because we can reach i directly from j), and the associated edge is "incident" from node j and to node i. the edge from j to i is an "out-edge" of j and an "in-edge" of i. the number of out-edges of a node is its "out-degree", and the number of in-edges of a node is its "in-degree". for an undirected network, a_ij = a_ji, and the number of edges that are attached to a node is the node's "degree". one can assign weights to edges to represent connections with different strengths (e.g., stronger friendships or larger transportation capacity) by defining a function w : e −→ r. in many applications, the weights are nonnegative, although several applications [ ] (such as in international relations) incorporate positive, negative, and zero weights. in some applications, nodes can also have self-edges and multi-edges. the spectral properties of adjacency (and other) matrices give important information about their associated graphs [ , ]. for undirected networks, it is common to exploit the beneficent property that all eigenvalues of symmetric matrices are real. traditional studies of networks consider time-independent structures, but most networks evolve in time. for example, social networks of people and animals change based on their interactions, roads are occasionally closed for repairs and new roads are built, and airline routes change with the seasons and over the years. to study such time-dependent structures, one can analyze "temporal networks". see [ , ] for reviews and [ , ] for edited collections. the key idea of a temporal network is that networks change in time, but there are many ways to model such changes, and the time scales of interactions and other changes play a crucial role in the modeling process. there are also other important modeling considerations.
[figure caption: an example of a multilayer network with three layers. we label each layer using different colours for its state nodes and its edges: black nodes and brown edges (three of which are unidirectional) for layer , purple nodes and green edges for layer , and pink nodes and grey edges for layer . each state node (i.e., node-layer tuple) has a corresponding physical node and layer, so the tuple (a, ) denotes physical node a on layer , the tuple (d, ) denotes physical node d on layer , and so on. we draw intralayer edges using solid arcs and interlayer edges using broken arcs; an interlayer edge is dashed (and magenta) if it connects corresponding entities and dotted (and blue) if it connects distinct ones. we include arrowheads to represent unidirectional edges. the networks were drawn using tikz-network (jürgen hackl, https://github.com/hackl/tikz-network), which allows one to draw networks (including multilayer networks) directly in a latex file. panel (b) is inspired by fig. of [ ]. panel (d), which is in the public domain, was drawn by wikipedia user cflm and is available at https://en.wikipedia.org/wiki/simplicial_complex.]
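to make the bookkeeping above concrete, here is a minimal sketch in python (using the numpy and networkx libraries; the toy graph is hypothetical) of the adjacency-matrix convention above, in which a_ij = 1 encodes an edge from entity j to entity i:
```python
import networkx as nx
import numpy as np

# A small hypothetical directed graph; each pair is (source, target).
edges = [("a", "b"), ("b", "c"), ("c", "a"), ("a", "c")]
G = nx.DiGraph(edges)

nodes = sorted(G.nodes())
index = {v: k for k, v in enumerate(nodes)}

# Build A with the text's convention: A[i, j] = 1 for an edge from j to i.
A = np.zeros((len(nodes), len(nodes)))
for src, dst in G.edges():
    A[index[dst], index[src]] = 1

# With this convention, in-degrees are row sums and out-degrees are column sums.
print("in-degrees:", dict(zip(nodes, A.sum(axis=1))))
print("out-degrees:", dict(zip(nodes, A.sum(axis=0))))

# For an undirected network, A is symmetric, so all of its eigenvalues are real.
A_und = nx.to_numpy_array(nx.Graph(edges), nodelist=nodes)
print("eigenvalues:", np.linalg.eigvalsh(A_und))
```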
to illustrate potential complications, suppose that an edge in a temporal network represents close physical proximity between two people in a short time window (e.g., with a duration of two minutes). it is relevant to consider whether there is an underlying social network (e.g., the friendship network of mathematics ph.d. students at ucla) or if the people in the network do not in general have any other relationships with each other (e.g., two people who happen to be visiting a particular museum on the same day). in both scenarios, edges that represent close physical proximity still appear and disappear over time, but indirect connections (i.e., between people who are on the same connected component, but without an edge between them) in a time window may play different roles in the spread of information. moreover, network structure itself is often influenced by a spreading process or other dynamics, as perhaps one arranges a meeting to discuss a topic (e.g., to give me comments on a draft of this chapter). see my discussion of adaptive networks in section . for convenience, most work on temporal networks employs discrete time (see fig. (b)). discrete time can arise from the natural discreteness of a setting, discretization of continuous activity over different time windows, data measurement that occurs at discrete times, and so on. one way to represent a discrete-time (or discretized-time) temporal network is to use the formalism of "multilayer networks" [ , ]. one can also use multilayer networks to study networks with multiple types of relations, networks with multiple subsystems, and other complicated networked structures. a multilayer network m (see fig. (c)) has a set v of nodes (these are sometimes called "physical nodes", and each of them corresponds to an entity, such as a person) that have instantiations as "state nodes" (i.e., node-layer tuples, which are elements of the set v_m) on layers in l. one layer in the set l is a combination, through the cartesian product l_1 × · · · × l_d, of elementary layers. the number d indicates the number of types of layering; these are called "aspects". a temporal network with one type of relationship has one type of layering, a time-independent network with multiple types of social relationships also has one type of layering, a multirelational network that changes in time has two types of layering, and so on. the set of state nodes in m is v_m ⊆ v × l_1 × · · · × l_d, and the set of edges is e_m ⊆ v_m × v_m. an edge ((i, α), (j, β)) ∈ e_m indicates that there is an edge from node j on layer β to node i on layer α (and vice versa, if m is undirected).
for example, in fig. (c), there is a directed intralayer edge from (a, ) to (b, ) and an undirected interlayer edge between (a, ) and (a, ). the multilayer network in fig. (c) has three layers, |v| = physical nodes, d = aspect, |v_m| = state nodes, and |e_m| = edges. to consider weighted edges, one proceeds as in ordinary graphs by defining a function w : e_m −→ r. as in ordinary graphs, one can also incorporate self-edges and multi-edges. multilayer networks can include both intralayer edges (which have the same meaning as in graphs) and interlayer edges. the multilayer network in fig. (c) has directed intralayer edges, undirected intralayer edges, and undirected interlayer edges. in most studies thus far of multilayer representations of temporal networks, researchers have included interlayer edges only between state nodes in consecutive layers and only between state nodes that are associated with the same entity (see fig. (c)). however, this restriction is not always desirable (see [ ] for an example), and one can envision interlayer couplings that incorporate ideas like time horizons and interlayer edge weights that decay over time. for convenience, many researchers have used undirected interlayer edges in multilayer analyses of temporal networks, but it is often desirable for such edges to be directed to reflect the arrow of time [ ]. the sequence of network layers, which constitute time layers, can represent a discrete-time temporal network at different time instances or a continuous-time network in which one bins (i.e., aggregates) the network's edges to form a sequence of time windows with interactions in each window. each d-aspect multilayer network with the same number of nodes in each layer has an associated adjacency tensor a of order (d + 1). for unweighted multilayer networks, each edge in e_m is associated with a 1 entry of a, and the other entries (the "missing" edges) are 0. if a multilayer network does not have the same number of nodes in each layer, one can add empty nodes so that it does, but the edges that are attached to such nodes are "forbidden". there has been some research on tensorial properties of a [ ] (and it is worthwhile to undertake further studies of them), but the most common approach for computations is to flatten a into a "supra-adjacency matrix" a_m [ , ], which is the adjacency matrix of the graph g_m that is associated with m. the entries of the diagonal blocks of a_m correspond to intralayer edges, and the entries of the off-diagonal blocks correspond to interlayer edges. following a long line of research in sociology [ ], two important ingredients in the study of networks are examining (1) the importances ("centralities") of nodes, edges, and other small network structures and the relationship of measures of importance to dynamical processes on networks and (2) the large-scale organization of networks [ , ]. studying central nodes in networks is useful for numerous applications, such as ranking web pages, football teams, or physicists [ ]. it can also help reveal the roles of nodes in networks, such as those that experience high traffic or help bridge different parts of a network [ , ]. mesoscale features can impact network function and dynamics in important ways. small subgraphs called "motifs" may appear frequently in some networks [ ], perhaps indicating fundamental structures such as feedback loops and other building blocks of global behavior [ ].
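as a concrete illustration of the flattening just described, the following sketch builds a supra-adjacency matrix for a toy two-layer temporal network; the layers, the value of the interlayer coupling ω, and the choice of undirected "diagonal" interlayer edges between copies of the same physical node are all illustrative assumptions:
```python
import numpy as np

# Intralayer adjacency matrices of two hypothetical time layers on 3 physical nodes.
A1 = np.array([[0, 1, 0],
               [1, 0, 1],
               [0, 1, 0]], dtype=float)
A2 = np.array([[0, 0, 1],
               [0, 0, 1],
               [1, 1, 0]], dtype=float)
omega = 0.5  # interlayer coupling strength (a modeling choice)
N = A1.shape[0]

# Diagonal blocks hold intralayer edges; off-diagonal blocks hold interlayer
# edges, here coupling each state node (i, t) to its copy (i, t + 1).
A_supra = np.block([
    [A1,                omega * np.eye(N)],
    [omega * np.eye(N), A2               ],
])
print(A_supra)
```
row and column t * n + i of the supra-adjacency matrix correspond to the state node of physical node i in time layer t.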
various types of larger-scale network structures, such as dense "communities" of nodes [ , ] and core-periphery structures [ , ], are also sometimes related to dynamical modules (e.g., a set of synchronized neurons) or functional modules (e.g., a set of proteins that are important for a certain regulatory process) [ ]. a common way to study large-scale structures is inference using statistical models of random networks, such as through stochastic block models (sbms) [ ]. much recent research has generalized the study of large-scale network structure to temporal and multilayer networks [ , , ]. various types of centrality, including betweenness centrality [ , ], bonacich and katz centrality [ , ], communicability [ ], pagerank [ , ], and eigenvector centrality [ , ], have been generalized to temporal networks using a variety of approaches. such generalizations make it possible to examine how node importances change over time as network structure evolves. in recent work, my collaborators and i used multilayer representations of temporal networks to generalize eigenvector-based centralities to temporal networks [ , ]. one computes the eigenvector-based centralities of nodes for a time-independent network as the entries of the "dominant" eigenvector, which is associated with the largest positive eigenvalue (by the perron-frobenius theorem, the eigenvalue with the largest magnitude is guaranteed to be positive in these situations) of a centrality matrix c(a). examples include eigenvector centrality (by using c(a) = a) [ ], hub and authority scores (by using c(a) = aa^t for hubs and a^t a for authorities) [ ], and pagerank [ ]. given a discrete-time temporal network in the form of a sequence of adjacency matrices a^(t), where a^(t)_ij denotes a directed edge from entity i to entity j in time layer t, we construct a "supracentrality matrix" c(ω), which couples the centrality matrices c(a^(t)) of the individual time layers. we then compute the dominant eigenvector of c(ω), where ω is an interlayer coupling strength. in [ , ], a key example was the ranking of doctoral programs in the mathematical sciences (using data from the mathematics genealogy project [ ]), where an edge from one institution to another arises when someone with a ph.d. from the first institution supervises a ph.d. student at the second institution. by calculating time-dependent centralities, we can study how the rankings of mathematical-sciences doctoral programs change over time and the dependence of such rankings on the value of ω. larger values of ω impose more ranking consistency across time, so centrality trajectories are less volatile for larger ω [ , ]. multilayer representations of temporal networks have been very insightful in the detection of communities and how they split, merge, and otherwise evolve over time. numerous methods for community detection, including inference via sbms [ ], maximization of objective functions (especially "modularity") [ ], and methods based on random walks and bottlenecks to their traversal of a network [ , ], have been generalized from graphs to multilayer networks. they have yielded insights in a diverse variety of applications, including brain networks [ ], granular materials [ ], political voting networks [ , ], disease spreading [ ], and ecology and animal behavior [ , ]. to assist with such applications, there are efforts to develop and analyze multilayer random-network models that incorporate rich and flexible structures [ ], such as diverse types of interlayer correlations.
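the computational core of the eigenvector-based centralities described above is extracting a dominant eigenvector of a (supra)centrality matrix. here is a minimal sketch, assuming c(a) = a on each layer (i.e., eigenvector centrality) and using toy layers and an illustrative coupling strength; the actual supracentrality construction in the cited papers has a more specific coupling structure:
```python
import numpy as np

def dominant_eigenvector(C, tol=1e-12, max_iter=100000):
    """Power iteration for the dominant eigenvector of a nonnegative matrix."""
    v = np.ones(C.shape[0]) / C.shape[0]
    for _ in range(max_iter):
        w = C @ v
        w = w / np.linalg.norm(w)
        if np.linalg.norm(w - v) < tol:
            return w
        v = w
    return v

# Two toy time layers with C(A) = A, coupled with interlayer strength omega.
A1 = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
A2 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
omega = 1.0
N = A1.shape[0]
C = np.block([[A1, omega * np.eye(N)], [omega * np.eye(N), A2]])

v = dominant_eigenvector(C)
# Entry t * N + i is a joint centrality of node i in time layer t; larger omega
# couples layers more strongly, making centrality trajectories less volatile.
print(v.reshape(2, N))
```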
activity-driven (ad) models of temporal networks [ ] are a popular family of generative models that encode instantaneous time-dependent descriptions of network dynamics through a function called an "activity potential", which encodes the mechanism to generate connections and characterizes the interactions between entities in a network. an activity potential encapsulates all of the information about the temporal network dynamics of an ad model, making it tractable to study dynamical processes (such as ones from section ) on networks that are generated by such a model. it is also common to compare the properties of networks that are generated by ad models to those of empirical temporal networks [ ]. in the original ad model of perra et al. [ ], one considers a network with n entities, which we encode by the nodes. we suppose that node i has an activity rate a_i = ηx_i, which gives the probability per unit time to create new interactions with other nodes. the scaling factor η ensures that the mean number of active nodes per unit time is η⟨x⟩n. we define the activity rates such that x_i ∈ [ε, 1], where ε > 0, and we assign each x_i from a probability distribution f(x) that can either take a desired functional form or be constructed from empirical data. the model uses the following generative process (see the code sketch below):
• at each discrete time step (of length ∆t), start with a network g_t that consists of n isolated nodes.
• with a probability a_i ∆t that is independent of other nodes, node i is active and generates m edges, each of which attaches to other nodes uniformly (i.e., with the same probability for each node) and independently at random (without replacement). nodes that are not active can still receive edges from active nodes.
• at the next time step t + ∆t, we delete all edges from g_t, so all interactions have a constant duration of ∆t. we then generate new interactions from scratch. this is convenient, as it allows one to apply techniques from markov chains.
because entities in time step t do not have any memory of previous time steps, f(x) encodes the network structure and dynamics. the ad model of perra et al. [ ] is overly simplistic, but it is amenable to analysis and has provided a foundation for many more general ad models, including ones that incorporate memory [ ]. in section . , i discuss a generalization of ad models to simplicial complexes [ ] that allows one to study instantaneous interactions that involve three or more entities in a network. many networked systems evolve continuously in time, but most investigations of time-dependent networks rely on discrete or discretized time. it is important to undertake more analysis of continuous-time temporal networks. researchers have examined continuous-time networks in a variety of scenarios. examples include a compartmental model of biological contagions [ ], a generalization of katz centrality to continuous time [ ], generalizations of ad models (see section . . ) to continuous time [ , ], and rankings in competitive sports [ ]. in a recent paper [ ], my collaborators and i formulated a notion of "tie-decay networks" for studying networks that evolve in continuous time. we distinguished between interactions, which we modeled as discrete contacts, and ties, which encode relationships and their strength as a function of time. for example, perhaps the strength of a tie decays exponentially after the most recent interaction.
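returning to the generative process of the original ad model, here is a minimal sketch of a single time step; the network size, the value of m, and the uniform activity distribution are illustrative assumptions:
```python
import random

def ad_model_step(N, activities, m, dt=1.0):
    """Generate the edges of one time layer of the AD model of Perra et al.

    activities[i] = a_i is node i's probability per unit time of activating.
    Each active node generates m edges to distinct partners chosen uniformly
    at random; inactive nodes can still receive edges.
    """
    edges = set()
    for i in range(N):
        if random.random() < activities[i] * dt:
            partners = random.sample([j for j in range(N) if j != i], m)
            for j in partners:
                edges.add((min(i, j), max(i, j)))  # undirected edge
    return edges  # all edges are deleted before the next time step

# Illustrative activity rates a_i = eta * x_i with x_i uniform on [0.1, 1].
N, m, eta = 100, 2, 0.5
activities = [eta * random.uniform(0.1, 1.0) for _ in range(N)]
print(len(ad_model_step(N, activities, m)), "edges in this layer")
```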
more realistically, perhaps the decay rate depends on the weight of a tie, with strong ties decaying more slowly than weak ones. one can also use point-process models like hawkes processes [ ] to examine similar ideas using a node-centric perspective. suppose that there are n interacting entities, and let b(t) be the n × n time-dependent, real, non-negative matrix whose entries b_ij(t) encode the tie strength between agents i and j at time t. in [ ], we made the following simplifying assumptions:
1. as in [ ], ties decay exponentially when there are no interactions:
db_ij/dt = −α b_ij ,
where α ≥ 0 is the decay rate.
2. if two entities interact at time t = τ, the strength of the tie between them grows instantaneously by 1.
see [ ] for a comparison of various choices, including those in [ ] and [ ], for tie evolution over time. in practice (e.g., in data-driven applications), one obtains b(t) by discretizing time, so let's suppose that there is at most one interaction during each time step of length ∆t. this occurs, for example, in a poisson process. such time discretization is common in the simulation of stochastic dynamical systems, such as in gillespie algorithms [ , , ]. consider an n × n matrix a(t) in which a_ij(t) = 1 if node i interacts with node j at time t and a_ij(t) = 0 otherwise. for a directed network, a(t) has exactly one nonzero entry during each time step when there is an interaction and no nonzero entries when there isn't one. for an undirected network, because of the symmetric nature of interactions, there are exactly two nonzero entries in time steps that include an interaction. we write
b(t + ∆t) = e^{−α ∆t} b(t) + a(t + ∆t) .
equivalently, if interactions between entities occur at times τ^(k) such that 0 ≤ τ^(1) < τ^(2) < . . . < τ^(T), then at time t ≥ τ^(T), we have
b(t) = Σ_{k=1}^{T} e^{−α (t − τ^(k))} a(τ^(k)) .
in [ ], my coauthors and i generalized pagerank [ , ] to tie-decay networks. one nice feature of this tie-decay pagerank is that it is applicable not just to data sets, but also to data streams, as one updates the pagerank values as new data arrives. by contrast, one problematic feature of many methods that rely on multilayer representations of temporal networks is that one needs to recompute everything for an entire data set upon acquiring new data, rather than updating prior results in a computationally efficient way. a dynamical process can be discrete, continuous, or some mixture of the two; it can also be either deterministic or stochastic. it can take the form of one or several coupled ordinary differential equations (odes), partial differential equations (pdes), maps, stochastic differential equations, and so on. a dynamical process requires a rule for updating the states of its dependent variables with respect to one or more independent variables (e.g., time), and one also has (one or a variety of) initial conditions and/or boundary conditions. to formalize a dynamical process on a network, one needs a rule for how to update the states of the nodes and/or edges. the nodes (of one or more types) of a network are connected to each other in nontrivial ways by one or more types of edges. this leads to a natural question: how does nontrivial connectivity between nodes affect dynamical processes on a network [ ]? when studying a dynamical process on a network, the network structure encodes which entities (i.e., nodes) of a system interact with each other and which do not. if desired, one can ignore the network structure entirely and just write out a dynamical system.
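returning briefly to tie-decay networks, here is a minimal sketch of the bookkeeping above: ties decay exponentially between time steps, and each interaction instantaneously adds 1 to the corresponding tie strength (the interaction stream is hypothetical):
```python
import numpy as np

def update_ties(B, A_new, alpha, dt):
    """Discrete-time tie-decay update: B(t + dt) = exp(-alpha * dt) * B(t) + A(t + dt)."""
    return np.exp(-alpha * dt) * B + A_new

# A hypothetical stream with at most one undirected interaction per time step.
N, alpha, dt = 3, 0.1, 1.0
B = np.zeros((N, N))
stream = [(0, 1), None, (1, 2), None, (0, 1)]
for pair in stream:
    A = np.zeros((N, N))
    if pair is not None:
        i, j = pair
        A[i, j] = A[j, i] = 1.0  # the two symmetric nonzero entries
    B = update_ties(B, A, alpha, dt)
print(B)  # current tie strengths; B can be updated as new data arrive
```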
however, keeping track of network structure is often a very useful and insightful form of bookkeeping, which one can exploit to systematically explore how particular structures affect the dynamics of particular dynamical processes. prominent examples of dynamical processes on networks include coupled oscillators [ , ], games [ ], and the spread of diseases [ , ] and opinions [ , ]. there is also a large body of research on the control of dynamical processes on networks [ , ]. most studies of dynamics on networks have focused on extending familiar models, such as compartmental models of biological contagions [ ] or kuramoto phase oscillators [ ], by coupling entities using various types of network structures, but it is also important to formulate new dynamical processes from scratch, rather than only studying more complicated generalizations of our favorite models. when trying to illuminate the effects of network structure on a dynamical process, it is often insightful to provide a baseline comparison by examining the process on a convenient ensemble of random networks [ ]. a simple, but illustrative, dynamical process on a network is the watts threshold model (wtm) of a social contagion [ , ]. it provides a framework for illustrating how network structure can affect state changes, such as the adoption of a product or a behavior, and for exploring which scenarios lead to "virality" (in the form of state changes of a large number of nodes in a network). the original wtm [ ], a binary-state threshold model that resembles bootstrap percolation [ ], has a deterministic update rule, so stochasticity can come only from other sources (see section . ). in a binary-state model, each node is in one of two states; see [ ] for a tabulation of well-known binary-state dynamics on networks. the wtm is a modification of mark granovetter's threshold model for social influence in a fully-mixed population [ ]. see [ , ] for early work on threshold models on networks that developed independently from investigations of the wtm. threshold contagion models have been developed for many scenarios, including contagions with multiple stages [ ], models with adoption latency [ ], models with synergistic interactions [ ], and situations with hipsters (who may prefer to adopt a minority state) [ ]. in a binary-state threshold model such as the wtm, each node i has a threshold r_i that one draws from some distribution. suppose that r_i is constant in time, although one can generalize it to be time-dependent. at any time, each node can be in one of two states: 0 (which represents being inactive, not adopted, not infected, and so on) or 1 (active, adopted, infected, and so on). a binary-state model is a drastic oversimplification of reality, but the wtm is able to capture two crucial features of social systems [ ]: interdependence (an entity's behavior depends on the behavior of other entities) and heterogeneity (as nodes with different threshold values behave differently). one can assign a seed number or seed fraction of nodes to the active state, and one can choose the initially active nodes either deterministically or randomly. the states of the nodes change in time according to an update rule, which can either be synchronous (such that it is a map) or asynchronous (e.g., as a discretization of continuous time) [ ]. in the wtm, the update rule is deterministic, so this choice affects only how long it takes to reach a steady state; it does not affect the steady state itself.
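here is a minimal sketch of the wtm with synchronous updates, using the standard rule (an inactive node activates when its fraction of active neighbors reaches its threshold, as detailed in the next passage); the graph and threshold values are illustrative:
```python
import numpy as np

def wtm_step(A, states, thresholds):
    """One synchronous WTM update on an undirected network.

    states is a 0/1 vector; active nodes stay active (monotonic dynamics).
    """
    degrees = A.sum(axis=1)
    active_neighbors = A @ states
    frac = np.divide(active_neighbors, degrees,
                     out=np.zeros_like(degrees), where=degrees > 0)
    return np.maximum(states, (frac >= thresholds).astype(int))

# A path graph 0-1-2-3 with seed node 0 and homogeneous thresholds r_i = 0.5.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
states = np.array([1, 0, 0, 0])
thresholds = np.full(4, 0.5)
for _ in range(4):
    states = wtm_step(A, states, thresholds)
print(states)  # the cascade spreads along the path until all nodes are active
```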
with a stochastic update rule, the synchronous and asynchronous versions of ostensibly the "same" model can behave in drastically different ways [ ]. in the wtm on an undirected network, to update the state of a node, one compares its fraction s_i/k_i of active neighbors (where s_i is the number of active neighbors and k_i is the degree of node i) to the node's threshold r_i. an inactive node i becomes active (i.e., it switches from state 0 to state 1) if s_i/k_i ≥ r_i; otherwise, it stays inactive. the states of nodes in the wtm are monotonic, in the sense that a node that becomes active remains active forever. this feature is convenient for deriving accurate approximations for the global behavior of the wtm using branching-process approximations [ , ] or when analyzing the behavior of the wtm using tools such as persistent homology [ ]. a dynamical process on a network can take the form of a stochastic process [ , ]. there are several possible sources of stochasticity: (1) choice of initial condition, (2) choice of which nodes or edges to update (when considering asynchronous updating), (3) the rule for updating nodes or edges, (4) the values of parameters in an update rule, and (5) selection of particular networks from a random-graph ensemble (i.e., a probability distribution on graphs). some or all of these sources of randomness can be present when studying dynamical processes on networks. it is desirable to compare the sample mean of a stochastic process on a network to an ensemble average (i.e., to an expectation over a suitable probability distribution). prominent examples of stochastic processes on networks include percolation [ ], random walks [ ], compartment models of biological contagions [ , ], bounded-confidence models with continuous-valued opinions [ ], and other opinion and voter models [ , , , ]. compartmental models of biological contagions are a topic of intense interest in network science [ , , , ]. a compartment represents a possible state of a node; examples include susceptible, infected, zombified, vaccinated, and recovered. an update rule determines how a node changes its state from one compartment to another. one can formulate models with as many compartments as desired [ ], but investigations of how network structure affects dynamics typically have employed examples with only two or three compartments [ , ]. researchers have studied various extensions of compartmental models, contagions on multilayer and temporal networks [ , , ], metapopulation models on networks [ ] for simultaneously studying network connectivity and subpopulations with different characteristics, non-markovian contagions on networks for exploring memory effects [ ], and explicit incorporation of individuals with essential societal roles (e.g., health-care workers) [ ]. as i discuss in section . , one can also examine coupling between biological contagions and the spread of information (e.g., "awareness") [ , ]. one can also use compartmental models to study phenomena, such as dissemination of ideas on social media [ ] and forecasting of political elections [ ], that are much different from the spread of diseases. one of the most prominent examples of a compartmental model is a susceptible-infected-recovered (sir) model, which has three compartments. susceptible nodes are healthy and can become infected, and infected nodes can eventually recover. the steady state of the basic sir model on a network is related to a type of bond percolation [ , , , ].
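as a concrete illustration, here is a minimal sketch of a discrete-time stochastic sir model on a network; it uses the independent-transmission infection probability 1 − (1 − λ dt)^s that is derived in the next passage, and the ring network and parameter values are illustrative:
```python
import numpy as np

rng = np.random.default_rng(42)
S, I, R = 0, 1, 2  # compartments

def sir_step(A, state, lam, mu, dt):
    """One discrete-time step of a stochastic SIR model on a network."""
    infected = (state == I).astype(float)
    s_counts = A @ infected                      # infected neighbors of each node
    p_inf = 1.0 - (1.0 - lam * dt) ** s_counts   # independent transmission events
    new_state = state.copy()
    new_state[(state == S) & (rng.random(state.size) < p_inf)] = I
    new_state[(state == I) & (rng.random(state.size) < mu * dt)] = R
    return new_state

# A ring of 10 nodes with one initially infected node.
N = 10
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1.0
state = np.zeros(N, dtype=int)
state[0] = I
for _ in range(500):
    state = sir_step(A, state, lam=0.3, mu=0.1, dt=0.1)
print("S, I, R counts:", np.bincount(state, minlength=3))
```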
there are many variants of sir models and other compartmental models on networks [ ]. see [ ] for an illustration using susceptible-infected-susceptible (sis) models. suppose that an infection is transmitted from an infected node to a susceptible neighbor at a rate of λ. the probability of a transmission event on one edge between an infected node and a susceptible node in an infinitesimal time interval dt is λ dt. assuming that all infection events are independent, the probability that a susceptible node with s infected neighbors becomes infected (i.e., for a node to transition from the s compartment to the i compartment, which represents both being infected and being infective) during dt is
1 − (1 − λ dt)^s .
if an infected node recovers at a constant rate of µ, the probability that it switches from state i to state r in an infinitesimal time interval dt is µ dt. when there is no source of stochasticity, a dynamical process on a network is "deterministic". a deterministic dynamical system can take the form of a system of coupled maps, odes, pdes, or something else. as with stochastic systems, the network structure encodes which entities of a system interact with each other and which do not. there are numerous interesting deterministic dynamical systems on networks (just incorporate nontrivial connectivity between entities into your favorite deterministic model), although it is worth noting that some stochastic features (e.g., choosing parameter values from a probability distribution or sampling choices of initial conditions) can arise in these models. for concreteness, let's consider the popular setting of coupled oscillators. each node in a network is associated with an oscillator, and we want to examine how network structure affects the collective behavior of the coupled oscillators. it is common to investigate various forms of synchronization (a type of coherent behavior), such that the rhythms of the oscillators adjust to match each other (or to match a subset of the oscillators) because of their interactions [ ]. a variety of methods, such as "master stability functions" [ ], have been developed to study the local stability of synchronized states and their generalizations [ , ], such as cluster synchrony [ ]. cluster synchrony, which is related to work on "coupled-cell networks" [ ], uses ideas from computational group theory to find synchronized sets of oscillators that are not synchronized with other sets of synchronized oscillators. many studies have also examined other types of states, such as "chimera states" [ ], in which some oscillators behave coherently but others behave incoherently. (analogous phenomena sometimes occur in mathematics departments.) a ubiquitous example is coupled kuramoto oscillators on a network [ , , ], which is perhaps the most common setting for exploring and developing new methods for studying coupled oscillators. (in principle, one can then build on these insights in studies of other oscillatory systems, such as in applications in neuroscience [ ].) coupled kuramoto oscillators have been used for modeling numerous phenomena, including jetlag [ ] and singing in frogs [ ]. indeed, a "snowbird" (siam) conference on applied dynamical systems would not be complete without at least several dozen talks on the kuramoto model. in the kuramoto model, each node i has an associated phase θ_i(t) ∈ [0, 2π).
in the case of "diffusive" coupling between the nodes , the dynamics of the ith node is governed by the equation where one typically draws the natural frequency ω i of node i from some distribution g(ω), the scalar a i j is an adjacency-matrix entry of an unweighted network, b i j is the coupling strength on oscillator i from oscillator j (so b i j a i j is an element of an adjacency matrix w of a weighted network), and f i j (y) = sin(y) is the coupling function, which depends only on the phase difference between oscillators i and j because of the diffusive nature of the coupling. once one knows the natural frequencies ω i , the model ( ) is a deterministic dynamical system, although there have been studies of coupled kuramoto oscillators with additional stochastic terms [ ] . traditional studies of ( ) and its generalizations draw the natural frequencies from some distribution (e.g., a gaussian or a compactly supported distribution), but some studies of so-called "explosive synchronization" (in which there is an abrupt phase transition from incoherent oscillators to synchronized oscillators) have employed deterministic natural frequencies [ , ] . the properties of the frequency distribution g(ω) have a significant effect on the dynamics of ( ). important features of g(ω) include whether it has compact support or not, whether it is symmetric or asymmetric, and whether it is unimodal or not [ , ] . the model ( ) has been generalized in numerous ways. for example, researchers have considered a large variety of coupling functions f i j (including ones that are not diffusive) and have incorporated an inertia term θ i to yield a second-order kuramoto oscillator at each node [ ] . the latter generalization is important for studies of coupled oscillators and synchronized dynamics in electric power grids [ ] . another noteworthy direction is the analysis of kuramoto model on "graphons" (see, e.g., [ ] ), an important type of structure that arises in a suitable limit of large networks. an increasingly prominent topic in network analysis is the examination of how multilayer network structures -multiple system components, multiple types of edges, co-occurrence and coupling of multiple dynamical processes, and so onaffect qualitative and quantitative dynamics [ , , ] . for example, perhaps certain types of multilayer structures can induce unexpected instabilities or phase transitions in certain types of dynamical processes? there are two categories of dynamical processes on multilayer networks: ( ) a single process can occur on a multilayer network; or ( ) processes on different layers of a multilayer network can interact with each other [ ] . an important example of the first category is a random walk, where the relative speeds and probabilities of steps within layers versus steps between layers affect the qualitative nature of the dynamics. this, in turn, affects methods (such as community detection [ , ] ) that are based on random walks, as well as anything else in which the diffusion is relevant [ , ] . two other examples of the first category are the spread of information on social media (for which there are multiple communication channels, such as facebook and twitter) and multimodal transportation systems [ ] . for instance, a multilayer network structure can induce congestion even when a system without coupling between layers is decongested in each layer independently [ ] . 
examples of the second category of dynamical process are interactions between multiple strains of a disease and interactions between the spread of disease and the spread of information [ , , ]. many other examples have been studied [ ], including coupling between oscillator dynamics on one layer and a biased random walk on another layer (as a model for neuronal oscillations coupled to blood flow) [ ]. numerous interesting phenomena can occur when dynamical systems, such as spreading processes, are coupled to each other [ ]. for example, the spreading of one disease can facilitate infection by another [ ], and the spread of awareness about a disease can inhibit spread of the disease itself (e.g., if people stay home when they are sick) [ ]. interacting spreading processes can also exhibit other fascinating dynamics, such as oscillations that are induced by multilayer network structures in a biological contagion with multiple modes of transmission [ ] and novel types of phase transitions [ ]. a major simplification in most work thus far on dynamical processes on multilayer networks is a tendency to focus on toy models. for example, a typical study of coupled spreading processes may consider a standard (e.g., sir) model on each layer, and it may draw the connectivity pattern of each layer from the same standard random-graph model (e.g., an erdős-rényi model or a configuration model). however, when studying dynamics on multilayer networks, it is particularly important in future work to incorporate heterogeneity in network structure and/or dynamical processes. for instance, diseases spread offline but information spreads both offline and online, so investigations of coupled information and disease spread ought to consider fundamentally different types of network structures for the two processes. network structures also affect the dynamics of pdes on networks [ , , , , ]. interesting examples include a study of a burgers equation on graphs to investigate how network structure affects the propagation of shocks [ ] and investigations of reaction-diffusion equations and turing patterns on networks [ , ]. the latter studies exploit the rich theory of laplacian dynamics on graphs (and concomitant ideas from spectral graph theory) [ , ] and examine the addition of nonlinear terms to laplacians on various types of networks (including multilayer ones). a mathematically oriented thread of research on pdes on networks has built on ideas from so-called "quantum graphs" [ , ] to study wave propagation on networks through the analysis of "metric graphs". metric graphs differ from the usual "combinatorial graphs", which in other contexts are usually called simply "graphs". in metric graphs, in addition to nodes and edges, each edge e has a positive length l_e ∈ (0, ∞]. for many experimentally relevant scenarios (e.g., in models of circuits of quantum wires [ ]), there is a natural embedding into space, but metric graphs that are not embedded in space are also appropriate for some applications. as the nomenclature suggests, one can equip a metric graph with a natural metric. if a sequence {e_j}, with j = 1, . . . , m, of edges forms a path, the length of the path is Σ_j l_{e_j}. the distance ρ(v_1, v_2) between two nodes, v_1 and v_2, is the minimum path length between them. we place coordinates along each edge, so we can compute a distance between points x_1 and x_2 on a metric graph even when those points are not located at nodes.
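here is a minimal sketch of distances on a metric graph, assuming the two points sit on distinct edges (the toy graph and edge lengths are hypothetical, and the same-edge shortcut is omitted for brevity):
```python
import networkx as nx

# A hypothetical metric graph: each edge carries a positive length l_e.
G = nx.Graph()
G.add_edge("u", "v", length=2.0)
G.add_edge("v", "w", length=1.5)
G.add_edge("u", "w", length=4.0)

# Node-to-node distances rho are minimum path lengths.
dist = dict(nx.shortest_path_length(G, weight="length"))

def point_distance(p1, p2):
    """Distance between points on distinct edges of a metric graph.

    A point (a, b, s) sits on edge {a, b} at arc-length s from node a.
    Minimize over the four ways of routing through the host edges' endpoints.
    """
    (a1, b1, s1), (a2, b2, s2) = p1, p2
    l1, l2 = G[a1][b1]["length"], G[a2][b2]["length"]
    return min(
        s1 + dist[a1][a2] + s2,
        s1 + dist[a1][b2] + (l2 - s2),
        (l1 - s1) + dist[b1][a2] + s2,
        (l1 - s1) + dist[b1][b2] + (l2 - s2),
    )

p1 = ("u", "v", 0.5)  # half a unit along edge {u, v} from u
p2 = ("v", "w", 1.0)  # one unit along edge {v, w} from v
print(point_distance(p1, p2))  # 2.5: route through node v
```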
traditionally, one assumes that the infinite ends (which one can construe as "leads" at infinity, as in scattering theory) of infinite edges have degree 1. it is also traditional to assume that there is always a positive distance between distinct nodes and that there are no finite-length paths with infinitely many edges. see [ ] for further discussion. to study waves on metric graphs, one needs to define operators, such as the negative second derivative or more general schrödinger operators. this exploits the fact that there are coordinates for all points on the edges, not only at the nodes themselves, as in combinatorial graphs. when studying waves on metric graphs, it is also necessary to impose boundary conditions at the nodes [ ]. many studies of wave propagation on metric graphs have considered generalizations of nonlinear wave equations, such as the cubic nonlinear schrödinger (nls) equation [ ] and a nonlinear dirac equation [ ]. the overwhelming majority of studies of metric graphs (with both linear and nonlinear waves) have focused on networks with a very small number of nodes, as even small networks yield very interesting dynamics. for example, marzuola and pelinovsky [ ] analyzed symmetry-breaking and symmetry-preserving bifurcations of standing waves of the cubic nls equation on a dumbbell graph (with two rings attached to a central line segment and kirchhoff boundary conditions at the nodes). kairzhan et al. [ ] studied the spectral stability of half-soliton standing waves of the cubic nls equation on balanced star graphs. sobirov et al. [ ] studied scattering and transmission at nodes of sine-gordon solitons on networks (e.g., on a star graph and a small tree). a particularly interesting direction for future work is to study wave dynamics on large metric graphs. this will help extend investigations, as in odes and maps, of how network structures affect dynamics on networks to the realm of linear and nonlinear waves. one can readily formulate wave equations on large metric graphs by specifying relevant boundary conditions and rules at each junction. for example, joly et al. [ ] recently examined propagation of the standard linear wave equation on fractal trees. because many natural real-life settings are spatially embedded (e.g., wave propagation in granular materials [ , ] and traffic-flow patterns in cities), it will be particularly valuable to examine wave dynamics on (both synthetic and empirical) spatially-embedded networks [ ]. therefore, i anticipate that it will be very insightful to undertake studies of wave dynamics on networks such as random geometric graphs, random neighborhood graphs, and other spatial structures. a key question in network analysis is how different types of network structure affect different types of dynamical processes [ ], and the ability to take a limit as model synthetic networks become infinitely large (i.e., a thermodynamic limit) is crucial for obtaining many key theoretical insights. dynamics of networks and dynamics on networks do not occur in isolation; instead, they are coupled to each other. researchers have studied the coevolution of network structure and the states of nodes and/or edges in the context of "adaptive networks" (which are also known as "coevolving networks") [ , ].
whether it is sensible to study a dynamical process on a time-independent network, a temporal network with frozen (or no) node or edge states, or an adaptive network depends on the relative time scales of the dynamics of network structure and the states of nodes and/or edges of a network. see [ ] for a brief discussion. models in the form of adaptive networks provide a promising mechanistic approach to simultaneously explain both structural features (e.g., degree distributions) and temporal features (e.g., burstiness) of empirical data [ ]. incorporating adaptation into conventional models can produce extremely interesting and rich dynamics, such as the spontaneous development of extreme states in opinion models [ ]. most studies of adaptive networks that include some analysis (i.e., that go beyond numerical computations) have employed rather artificial adaptation rules for adding, removing, and rewiring edges. this is relevant for mathematical tractability, but it is important to go beyond these limitations by considering more realistic types of adaptation and coupling between network structure (including multilayer structures, as in [ ]) and the states of nodes and edges. when people are sick, they stay home from work or school. people also form and remove social connections (both online and offline) based on observed opinions and behaviors. to study these ideas using adaptive networks, researchers have coupled models of biological and social contagions with time-dependent networks [ , ]. an early example of an adaptive network of disease spreading is the susceptible-infected (si) model in gross et al. [ ]. in this model, susceptible nodes sometimes rewire their incident edges to "protect themselves". suppose that we have an n-node network with a constant number of undirected edges. each node is either susceptible (i.e., of type s) or infected (i.e., of type i). at each time step, for each edge between nodes of different types (a so-called "discordant edge"), the susceptible node becomes infected with probability λ. for each discordant edge, with some probability κ, the incident susceptible node breaks the edge and rewires to some other susceptible node. this is a "rewire-to-same" mechanism, to use the language from some adaptive opinion models [ , ]. (in this model, multi-edges and self-edges are not allowed.) during each time step, infected nodes can also recover to become susceptible again. gross et al. [ ] studied how the rewiring probability affects the "basic reproductive number", which measures how many secondary infections on average occur for each primary infection [ , , ]. this scalar quantity determines the size of a critical infection probability λ* to maintain a stable epidemic (as determined traditionally using linear stability analysis of an endemic state). a high rewiring rate can significantly increase λ* and thereby significantly reduce the prevalence of a contagion. although results like these are perhaps intuitively clear, other studies of contagions on adaptive networks have yielded potentially actionable (and arguably nonintuitive) insights. for example, scarpino et al. [ ] demonstrated using an adaptive compartmental model (along with some empirical evidence) that the spread of a disease can accelerate when individuals with essential societal roles (e.g., health-care workers) become ill and are replaced with healthy individuals.
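here is a minimal sketch of one sweep of an adaptive contagion model in the spirit of the rewire-to-same scheme of gross et al.; the updating order within a sweep and the parameter values are illustrative choices, and published variants differ in such details:
```python
import random
import networkx as nx

def adaptive_step(G, state, lam, kappa, mu):
    """One sweep: discordant edges rewire (prob. kappa) or transmit (prob. lam);
    infected nodes then recover to susceptible (prob. mu)."""
    for u, v in list(G.edges()):
        if state[u] == state[v]:
            continue  # only discordant edges rewire or transmit
        s, i = (u, v) if state[u] == "S" else (v, u)
        if random.random() < kappa:
            # The susceptible node protects itself: break the discordant edge
            # and rewire to another susceptible node (no self- or multi-edges).
            targets = [w for w in G
                       if state[w] == "S" and w != s and not G.has_edge(s, w)]
            if targets:
                G.remove_edge(s, i)
                G.add_edge(s, random.choice(targets))
        elif random.random() < lam:
            state[s] = "I"
    for w in G:
        if state[w] == "I" and random.random() < mu:
            state[w] = "S"

random.seed(1)
G = nx.erdos_renyi_graph(100, 0.05, seed=1)
state = {v: ("I" if random.random() < 0.1 else "S") for v in G}
for _ in range(50):
    adaptive_step(G, state, lam=0.1, kappa=0.3, mu=0.05)
print(sum(1 for v in G if state[v] == "I"), "infected nodes remain")
```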
another type of model with many interesting adaptive variants is opinion models [ , ], especially in the form of generalizations of classical voter models [ ]. voter dynamics were first considered in the s by clifford and sudbury [ ] as a model for species competition, and the dynamical process that they introduced was dubbed "the voter model" by holley and liggett shortly thereafter [ ]. voter dynamics are fun and are popular to study [ ], although it is questionable whether it is ever possible to genuinely construe voter models as models of voters [ ]. holme and newman [ ] undertook an early study of a rewire-to-same adaptive voter model. inspired by their research, durrett et al. [ ] compared the dynamics from two different types of rewiring in an adaptive voter model. in each variant of their model, one considers an n-node network and supposes that each node is in one of two states. the network structure and the node states coevolve. pick an edge uniformly at random. if this edge is discordant, then with probability 1 − κ, one of its incident nodes adopts the opinion state of the other. otherwise, with complementary probability κ, a rewiring action occurs: one removes the discordant edge, and one of the associated nodes attaches to a new node, either through a rewire-to-same mechanism (choosing uniformly at random among the nodes with the same opinion state) or through a "rewire-to-random" mechanism (choosing uniformly at random among all nodes). as with the adaptive sis model in [ ], self-edges and multi-edges are not allowed. the models in [ ] evolve until there are no discordant edges. there are several key questions. does the system reach a consensus (in which all nodes are in the same state)? if so, how long does it take to converge to consensus? if not, how many opinion clusters (each of which is a connected component, perhaps interpretable as an "echo chamber", of the final network) are there at steady state? how long does it take to reach this state? the answers and analysis are subtle; they depend on the initial network topology, the initial conditions, and the specific choice of rewiring rule. as with other adaptive network models, researchers have developed some nonrigorous theory (e.g., using mean-field approximations and their generalizations) for adaptive voter models with simplistic rewiring schemes, but they have struggled to extend these ideas to models with more realistic rewiring schemes. there are very few mathematically rigorous results on adaptive voter models, although some do exist, under various assumptions on initial network structure and edge density [ ]. researchers have generalized adaptive voter models to consider more than two opinion states [ ] and more general types of rewiring schemes [ ]. as with other adaptive networks, analyzing adaptive opinion models with increasingly diverse types of rewiring schemes (ideally with a move towards increasing realism) is particularly important. in [ ], yacoub kureh and i studied a variant of a voter model with nonlinear rewiring (where the probability that a node rewires or adopts is a function of how well it "fits in" within its neighborhood), including a "rewire-to-none" scheme to model unfriending and unfollowing in online social networks. it is also important to study adaptive opinion models with more realistic types of opinion dynamics. 
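a minimal simulation of the durrett et al.-style adaptive voter model described above: a uniformly random edge is picked, a discordant edge triggers adoption with probability 1 − κ and rewiring otherwise, and the run stops at absorption (no discordant edges). the graph size, edge count, and other parameter values are illustrative.

```python
import random

# Adaptive voter model with "rewire-to-same" or "rewire-to-random".
# Self-edges and multi-edges are disallowed, as in the text.
def adaptive_voter(n=200, m=400, kappa=0.5, mode="to-same", seed=1):
    rng = random.Random(seed)
    state = [rng.randint(0, 1) for _ in range(n)]
    edges = set()
    while len(edges) < m:                       # simple random graph
        u, v = rng.randrange(n), rng.randrange(n)
        if u != v:
            edges.add((min(u, v), max(u, v)))
    edges = list(edges)
    while True:
        discordant = [e for e in edges if state[e[0]] != state[e[1]]]
        if not discordant:
            break                               # absorbing configuration
        u, v = rng.choice(discordant)
        if rng.random() < 1.0 - kappa:          # adoption step
            if rng.random() < 0.5:
                state[u] = state[v]
            else:
                state[v] = state[u]
        else:                                   # rewiring step
            keep = u if rng.random() < 0.5 else v
            pool = [w for w in range(n) if w != keep
                    and (mode == "to-random" or state[w] == state[keep])
                    and (min(keep, w), max(keep, w)) not in edges]
            if pool:
                edges.remove((u, v))
                w = rng.choice(pool)
                edges.append((min(keep, w), max(keep, w)))
    return sum(state), n

ones, n = adaptive_voter()
print(f"{ones}/{n} nodes hold opinion 1 at absorption")
```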
a promising example is adaptive generalizations of bounded-confidence models (see the introduction of [ ] for a brief review of bounded-confidence models), which have continuous opinion states, with nodes interacting either with other nodes or with other entities (such as media [ ]) whose opinion is sufficiently close to theirs. a recent numerical study examined an adaptive bounded-confidence model [ ]; this is an important direction for future investigations. it is also interesting to examine how the adaptation of oscillators, including their intrinsic frequencies and/or the network structure that couples them to each other, affects the collective behavior (e.g., synchronization) of a network of oscillators [ ]. such ideas are useful for exploring mechanistic models of learning in the brain (e.g., through adaptation of coupling between oscillators to produce a desired limit cycle [ ]). one nice example is by skardal et al. [ ], who examined an adaptive model of coupled kuramoto oscillators as a toy model of learning. first, we write the kuramoto system as

$\dot{\theta}_i = \omega_i + \sum_{j=1}^{N} b_{ij}\, f_{ij}(\theta_j - \theta_i)\,,$

where $f_{ij}$ is a 2π-periodic function of the phase difference between oscillators i and j. one way to incorporate adaptation is to define an "order parameter" $r_i$ (which, in its traditional form, quantifies the amount of coherence of the coupled kuramoto oscillators [ ]) for the ith oscillator by

$r_i = \frac{1}{\lambda_d} \sum_{j=1}^{N} a_{ij}\, e^{\mathrm{i}\theta_j}$

and to consider the following dynamical system, in which the phases, the delayed order parameters, and the coupling strengths coevolve:

$\dot{\theta}_i = \omega_i + \sum_{j=1}^{N} b_{ij}\sin(\theta_j - \theta_i)\,, \qquad \tau\,\dot{z}_i = r_i - z_i\,, \qquad T\,\dot{b}_{ij} = \alpha + \beta\,\mathrm{Re}\!\left(z_i^{*}\, e^{\mathrm{i}\theta_j}\right),$

where re(ζ) denotes the real part of a quantity ζ and im(ζ) denotes its imaginary part. in this model, λ_d denotes the largest positive eigenvalue of the adjacency matrix a, the variable z_i(t) is a time-delayed version of r_i with time parameter τ (with τ → 0 implying that z_i → r_i), and z_i^* denotes the complex conjugate of z_i. one draws the frequencies ω_i from some distribution (e.g., a lorentz distribution, as in [ ]), and we recall that b_ij is the coupling strength on oscillator i from oscillator j. the parameter T gives an adaptation time scale, and α ∈ R and β ∈ R are parameters (which one can adjust to study bifurcations). skardal et al. [ ] interpreted scenarios with β > 0 as "hebbian" adaptation (see [ ]) and scenarios with β < 0 as anti-hebbian adaptation, as they observed that oscillator synchrony is promoted when β > 0 and inhibited when β < 0. most studies of networks have focused on networks with pairwise connections, in which each edge (unless it is a self-edge, which connects a node to itself) connects exactly two nodes to each other. however, many interactions, such as playing games, coauthoring papers and other forms of collaboration, and horse races, often occur between three or more entities of a network. to examine such situations, researchers have increasingly studied "higher-order" structures in networks, as they can exert a major influence on dynamical processes. perhaps the simplest way to account for higher-order structures in networks is to generalize from graphs to "hypergraphs" [ ]. hypergraphs possess "hyperedges" that encode a connection among an arbitrary number of nodes, such as between all coauthors of a paper. this allows one to make important distinctions, such as between a k-clique (in which there are pairwise connections between each pair of nodes in a set of k nodes) and a hyperedge that connects all k of those nodes to each other, without the need for any pairwise connections. one way to study a hypergraph is as a "bipartite network", in which nodes of a given type can be adjacent only to nodes of another type. 
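returning to the adaptive kuramoto model above, the following sketch integrates one plausible version of such a system. because the displayed equations are reconstructions of a garbled passage, the adaptation law used here (relaxation of z_i toward r_i, and a coupling drive proportional to re(z_i* e^{iθ_j})) should be read as an assumption rather than as skardal et al.'s exact model; all parameter values are illustrative.

```python
import numpy as np

# Euler integration of an adaptive Kuramoto system: phases theta, delayed
# local order parameters z, and coupling strengths B coevolve. beta > 0
# corresponds to the "Hebbian" regime discussed in the text.
rng = np.random.default_rng(0)
N, dt, steps = 50, 0.01, 5000
T, tau, alpha, beta = 1.0, 0.1, 0.1, 0.5
A = (rng.random((N, N)) < 0.2).astype(float)  # adjacency matrix
np.fill_diagonal(A, 0.0)
A = np.maximum(A, A.T)                        # undirected
lam_d = np.linalg.eigvalsh(A)[-1]             # largest eigenvalue
theta = rng.uniform(0.0, 2.0 * np.pi, N)
omega = rng.standard_cauchy(N)                # Lorentz-distributed frequencies
B = np.full((N, N), 0.5)                      # coupling strengths b_ij
z = np.zeros(N, dtype=complex)

for _ in range(steps):
    r = (A @ np.exp(1j * theta)) / lam_d      # local order parameters r_i
    dtheta = omega + np.sum(A * B * np.sin(theta[None, :] - theta[:, None]),
                            axis=1)
    dz = (r - z) / tau                        # z_i: time-delayed r_i
    dB = (alpha + beta * np.real(np.conj(z)[:, None]
                                 * np.exp(1j * theta)[None, :])) / T
    theta += dt * dtheta
    z += dt * dz
    B += dt * dB

R = abs(np.mean(np.exp(1j * theta)))          # global synchrony in [0, 1]
print("global order parameter:", round(R, 3))
```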
for example, a scientist can be adjacent to a paper that they have written [ ], and a legislator can be adjacent to a committee on which they sit [ ]. it is important to generalize ideas from graph theory to hypergraphs, such as by developing models of random hypergraphs [ , , ]. another way to study higher-order structures in networks is to use "simplicial complexes" [ , , ]. a simplicial complex is a space that is built from a union of points, edges, triangles, tetrahedra, and higher-dimensional polytopes (see fig. d). simplicial complexes approximate topological spaces and thereby capture some of their properties. a p-dimensional simplex (i.e., a p-simplex) is a p-dimensional polytope that is the convex hull of its p + 1 vertices (i.e., nodes). a simplicial complex K is a set of simplices such that (1) every face of a simplex from K is also in K and (2) the intersection of any two simplices σ_1, σ_2 ∈ K is a face of both σ_1 and σ_2. an increasing sequence K_1 ⊂ K_2 ⊂ ··· ⊂ K_L of simplicial complexes forms a filtered simplicial complex; each K_i is a subcomplex. as discussed in [ ] and references therein, one can examine the homology of each subcomplex. in studying the homology of a topological space, one computes topological invariants that quantify features of different dimensions [ ]. one studies "persistent homology" (ph) of a filtered simplicial complex to quantify the topological structure of a data set (e.g., a point cloud) across multiple scales of such data. the goal of such "topological data analysis" (tda) is to measure the "shape" of data in the form of connected components, "holes" of various dimensionality, and so on [ ]. from the perspective of network analysis, this yields insight into types of large-scale structure that complement traditional ones (such as community structure). see [ ] for a friendly, nontechnical introduction to tda. a natural goal is to generalize ideas from network analysis to simplicial complexes. important efforts include generalizing configuration models of random graphs [ ] to random simplicial complexes [ , ]; generalizing well-known network growth mechanisms, such as preferential attachment [ ]; and developing geometric notions, like curvature, for networks [ ]. an important modeling issue when studying higher-order network data is the question of when it is more appropriate (or convenient) to use the formalisms of hypergraphs or simplicial complexes. the computation of ph has yielded insights on a diverse set of models and applications in network science and complex systems. examples include granular materials [ , ], functional brain networks [ , ], quantification of "political islands" in voting data [ ], percolation theory [ ], contagion dynamics [ ], swarming and collective behavior [ ], chaotic flows in odes and pdes [ ], diurnal cycles in tropical cyclones [ ], and mathematics education [ ]. see the introduction to [ ] for pointers to numerous other applications. most uses of simplicial complexes in network science and complex systems have focused on tda (especially the computation of ph) and its applications [ , , ]. in this chapter, however, i focus instead on a somewhat different (and increasingly popular) topic: the generalization of dynamical processes on and of networks to simplicial complexes to study the effects of higher-order interactions on network dynamics. 
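the closure condition (1) above is easy to operationalize. the following sketch closes a set of simplices under taking faces and builds a small filtered complex in which a 2-simplex fills a 1-dimensional hole, visible through the euler characteristic; the example simplices are illustrative.

```python
from itertools import combinations

# Close a set of simplices under taking faces, so the result satisfies
# condition (1) of the simplicial-complex definition. Each simplex is a
# frozenset of node labels.
def simplicial_closure(simplices):
    complex_ = set()
    for sigma in simplices:
        nodes = tuple(sorted(sigma))
        for size in range(1, len(nodes) + 1):
            for face in combinations(nodes, size):
                complex_.add(frozenset(face))
    return complex_

# A filtered complex K_1 ⊂ K_2 ⊂ K_3: edges, then a hollow triangle
# (one 1-dimensional hole), then the 2-simplex that fills the hole.
K1 = simplicial_closure([{0, 1}, {1, 2}])
K2 = K1 | simplicial_closure([{0, 2}])
K3 = K2 | simplicial_closure([{0, 1, 2}])

for name, K in [("K1", K1), ("K2", K2), ("K3", K3)]:
    dims = sorted({len(s) - 1 for s in K})
    euler = sum((-1) ** (len(s) - 1) for s in K)   # V - E + F - ...
    print(name, "dims:", dims, " Euler characteristic:", euler)
```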
simplicial structures influence the collective behavior of the dynamics of coupled entities on networks (e.g., they can lead to novel bifurcations and phase transitions), and they provide a natural approach to analyze p-entity interaction terms, including for p ≥ 3, in dynamical systems. existing work includes research on linear diffusion dynamics (in the form of hodge laplacians, such as in [ ]) and generalizations of a variety of other popular types of dynamical processes on networks. given the ubiquitous study of coupled kuramoto oscillators [ ], a sensible starting point for exploring the impact of simultaneous coupling of three or more oscillators on a system's qualitative dynamics is to study a generalized kuramoto model. for example, to include both two-entity ("two-body") and three-entity interactions in a model of coupled oscillators on networks, we write [ ]

$\dot{x}_i = f_i(x_i) + \sum_{j=1}^{N} \sum_{k=1}^{N} w_{ijk}(x_i, x_j, x_k)\,,$

where $f_i$ describes the intrinsic dynamics of oscillator i and the three-oscillator interaction term $w_{ijk}$ includes two-oscillator interaction terms $w_{ij}(x_i, x_j)$ as a special case. an example of N coupled kuramoto oscillators with three-term interactions, in which pairwise sinusoidal coupling terms (with coefficients $a_{ij}$ and $b_{ij}$ and phase lags $\alpha_{ij}$) are combined with triplet coupling terms (with coefficients $c_{ijk}$ and phase lags $\alpha_{ijk}$), appears in [ ]; one draws these coefficients and phase lags from various probability distributions. including three-body interactions leads to a large variety of intricate dynamics, and i anticipate that incorporating the formalism of simplicial complexes will be very helpful for categorizing the possible dynamics. in the last few years, several other researchers have also studied kuramoto models with three-body interactions [ , , ]. a recent study [ ], for example, discovered a continuum of abrupt desynchronization transitions with no counterpart in abrupt synchronization transitions. there have been mathematical studies of coupled oscillators with interactions of three or more entities using methods such as normal-form theory [ ] and coupled-cell networks [ ]. an important point, as one can see in the above discussion (which does not employ the mathematical formalism of simplicial complexes), is that one does not necessarily need to explicitly use the language of simplicial complexes to study interactions between three or more entities in dynamical systems. nevertheless, i anticipate that explicitly incorporating the formalism of simplicial complexes will be useful both for studying coupled oscillators on networks and for other dynamical systems. in upcoming studies, it will be important to determine when this formalism helps illuminate the dynamics of multi-entity interactions in dynamical systems and when simpler approaches suffice. several recent papers have generalized models of social dynamics by incorporating higher-order interactions [ , , , ]. for example, perhaps somebody's opinion is influenced by a group discussion of three or more people, so it is relevant to consider opinion updates that are based on higher-order interactions. some of these papers use some of the terminology of simplicial complexes, but it is mostly unclear (except perhaps for [ ]) how the models in them take advantage of the associated mathematical formalism, so arguably it often may be unnecessary to use such language. nevertheless, these models are very interesting and provide promising avenues for further research. petri and barrat [ ] generalized activity-driven models to simplicial complexes. 
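to give a concrete feel for the generic model class above, the following sketch integrates all-to-all kuramoto dynamics with an added triplet coupling term of the commonly studied form sin(θ_j + θ_k − 2θ_i), evaluated efficiently through a mean-field identity. this is not the exact example from the garbled equation above; the all-to-all coupling and all parameter values are illustrative.

```python
import numpy as np

# Kuramoto oscillators with pairwise and three-body interactions.
# The triplet sum uses the identity
#   sum_{j,k} sin(th_j + th_k - 2 th_i) = Im(S^2 e^{-2i th_i}),
# where S = sum_j e^{i th_j}.
rng = np.random.default_rng(1)
N, dt, steps = 40, 0.01, 4000
K2, K3 = 0.4, 0.8                       # pairwise and triplet couplings
theta = rng.uniform(0.0, 2.0 * np.pi, N)
omega = rng.normal(0.0, 0.1, N)

for _ in range(steps):
    pair = np.sin(theta[None, :] - theta[:, None]).sum(axis=1) / N
    S = np.exp(1j * theta).sum()
    triple = np.imag(S ** 2 * np.exp(-2j * theta)) / N**2
    theta += dt * (omega + K2 * pair + K3 * triple)

R = abs(np.mean(np.exp(1j * theta)))    # global order parameter
print("order parameter after integration:", round(R, 3))
```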
such a simplicial activity-driven (sad) model generates time-dependent simplicial complexes, on which it is desirable to study dynamical processes (see section ), such as opinion dynamics, social contagions, and biological contagions. the simplest version of the sad model is defined as follows.
• each node i has an activity rate a_i that we draw independently from a distribution f(x).
• at each discrete time step (of length ∆t), we start with n isolated nodes. each node i is active with a probability of a_i ∆t, independently of all other nodes. if it is active, it creates a (p − 1)-simplex (forming, in network terms, a clique of p nodes) with p − 1 other nodes that we choose uniformly and independently at random (without replacement). one can either use a fixed value of p or draw p from some probability distribution.
• at the next time step, we delete all edges, so all interactions have a constant duration. we then generate new interactions from scratch.
this version of the sad model is markovian, and it is desirable to generalize it in various ways (e.g., by incorporating memory or community structure). iacopini et al. [ ] recently developed a simplicial contagion model that generalizes an sis process on graphs. consider a simplicial complex K with n nodes, and associate each node i with a state x_i(t) ∈ {0, 1} at time t. if x_i(t) = 0, node i is part of the susceptible class s; if x_i(t) = 1, it is part of the infected class i. the density of infected nodes at time t is $\rho(t) = \frac{1}{N}\sum_{i=1}^{N} x_i(t)$. suppose that there are D parameters β_1, ..., β_D (with D ∈ {1, ..., n − 1}), where β_d represents the probability per unit time that a susceptible node i that participates in a d-dimensional simplex σ is infected from each of the faces of σ, under the condition that all of the other nodes of the face are infected. that is, β_1 is the probability per unit time that node i is infected by an adjacent node j via the edge (i, j). similarly, β_2 is the probability per unit time that node i is infected via the 2-simplex (i, j, k) in which both j and k are infected, and so on. the recovery dynamics, in which an infected node i becomes susceptible again, proceeds as in the sis model that i discussed in section . one can envision numerous interesting generalizations of this model (e.g., ones that are inspired by ideas that have been investigated in contagion models on graphs). the study of networks is one of the most exciting and rapidly expanding areas of mathematics, and it touches on myriad other disciplines in both its methodology and its applications. network analysis is increasingly prominent in numerous fields of scholarship (both theoretical and applied), it interacts very closely with data science, and it is important for a wealth of applications. my focus in this chapter has been a forward-looking presentation of ideas in network analysis. my choices of which ideas to discuss reflect their connections to dynamics and nonlinearity, although i have also mentioned a few other burgeoning areas of network analysis in passing. through its exciting combination of graph theory, dynamical systems, statistical mechanics, probability, linear algebra, scientific computation, data analysis, and many other subjects, and through a comparable diversity of applications across the sciences, engineering, and the humanities, the mathematics and science of networks has plenty to offer researchers for many years. 
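returning to the simplicial contagion model described above, the following sketch simulates it on the 3-cliques of a random graph, with pairwise rate beta1, triangle ("2-simplex") rate beta2, and sis-style recovery mu. treating 3-cliques as 2-simplices and all parameter values are illustrative choices.

```python
import random
from itertools import combinations

# Simplicial SIS contagion: infection via edges (beta1) and via triangles
# whose other two nodes are both infected (beta2); infected nodes recover
# with probability mu per step.
def simplicial_sis(n=300, p=0.05, beta1=0.02, beta2=0.1, mu=0.05,
                   steps=300, seed=2):
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    for i, j in combinations(range(n), 2):
        if rng.random() < p:
            adj[i].add(j); adj[j].add(i)
    triangles = [(i, j, k) for i in range(n) for j in adj[i] if j > i
                 for k in adj[j] if k > j and k in adj[i]]
    x = [1 if rng.random() < 0.1 else 0 for _ in range(n)]  # 1 = infected
    for _ in range(steps):
        new_x = x[:]
        for i in range(n):
            if x[i] == 1:
                if rng.random() < mu:
                    new_x[i] = 0                 # recovery
                continue
            for j in adj[i]:                     # edge-based infection
                if x[j] == 1 and rng.random() < beta1:
                    new_x[i] = 1
                    break
        for i, j, k in triangles:                # 2-simplex infection
            for s, a, b in ((i, j, k), (j, i, k), (k, i, j)):
                if x[s] == 0 and x[a] == 1 and x[b] == 1 \
                        and rng.random() < beta2:
                    new_x[s] = 1
        x = new_x
    return sum(x) / n

print("approximate stationary prevalence:", simplicial_sis())
```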
congestion induced by the structure of multiplex networks
tie-decay temporal networks in continuous time and eigenvector-based centralities
multilayer networks in a nutshell
temporal and structural heterogeneities emerging in adaptive temporal networks
synchronization in complex networks
mathematical frameworks for oscillatory network dynamics in neuroscience
turing patterns in multiplex networks
morphogenesis of spatial networks
evolving voter model on dense random graphs
generative benchmark models for mesoscale structure in multilayer networks
birth and stabilization of phase clusters by multiplexing of adaptive networks
network geometry with flavor: from complexity to quantum geometry
chaos in generically coupled phase oscillator networks with nonpairwise interactions
topology of random geometric complexes: a survey
explosive transitions in complex networks' structure and dynamics: percolation and synchronization
factoring and weighting approaches to clique identification
mathematical models in population biology and epidemiology
how does active participation effect consensus: adaptive network model of opinion dynamics and influence maximizing rewiring
anatomy of a large-scale hypertextual web search engine
a model for the influence of media on the ideology of content in online social networks
frequency-based brain networks: from a multiplex network to a full multilayer description
statistical physics of social dynamics
bootstrap percolation on a bethe lattice
configuration models of random hypergraphs
annotated hypergraphs: models and applications
hebbian learning
architecture and evolution of semantic networks in mathematics texts
a model for spatial conflict
reaction-diffusion processes and metapopulation models in heterogeneous networks
multiple-scale theory of topology-driven patterns on directed networks
generalized network structures: the configuration model and the canonical ensemble of simplicial complexes
structure and dynamics of core/periphery networks
the physics of spreading processes in multilayer networks
mathematical formulation of multilayer networks
navigability of interconnected networks under random failures
identifying modular flows on multilayer networks reveals highly overlapping organization in interconnected systems
explosive phenomena in complex networks
graph fission in an evolving voter model
a practical guide to stochastic simulations of reaction-diffusion processes
persistent homology of geospatial data: a case study with voting
limitations of discrete-time approaches to continuous-time contagion dynamics
is the voter model a model for voters?
the use of multilayer network analysis in animal behaviour
on eigenvector-like centralities for temporal networks: discrete vs. continuous time scales
community detection in networks: a user guide
configuring random graph models with fixed degree sequences
nine challenges in incorporating the dynamics of behaviour in infectious diseases models
modelling the influence of human behaviour on the spread of infectious diseases: a review
anatomy and efficiency of urban multimodal mobility
random hypergraphs and their applications
elementary applied topology
two's company, three (or more) is a simplex
binary-state dynamics on complex networks: pair approximation and beyond
quantum graphs: applications to quantum chaos and universal spectral statistics
the structural virality of online diffusion
patterns of synchrony in coupled cell networks with multiple arrows
finite-size effects in a stochastic kuramoto model
dynamical interplay between awareness and epidemic spreading in multiplex networks
threshold models of collective behavior
on the critical behavior of the general epidemic process and dynamical percolation
a matrix iteration for dynamic network summaries
a dynamical systems view of network centrality
adaptive coevolutionary networks: a review
epidemic dynamics on an adaptive network
pathogen mutation modeled by competition between site and bond percolation
ergodic theorems for weakly interacting infinite systems and the voter model
modern temporal network theory: a colloquium
nonequilibrium phase transition in the coevolution of networks and opinions
temporal networks
temporal network theory
an adaptive voter model on simplicial complexes
simplicial models of social contagion
turing instability in reaction-diffusion models on complex networks
games on networks
the large graph limit of a stochastic epidemic model on a dynamic multilayer network
a local perspective on community structure in multilayer networks
structure of growing social networks
wave propagation in fractal trees
synergistic effects in threshold models on networks
hipsters on networks: how a minority group of individuals can lead to an antiestablishment majority
drift of spectrally stable shifted states on star graphs
maximizing the spread of influence through a social network
second look at the spread of epidemics on networks
centrality prediction in dynamic human contact networks
mathematics of epidemics on networks
multilayer networks
authoritative sources in a hyperlinked environment
dynamics of multifrequency oscillator communities
finite-size-induced transitions to synchrony in oscillator ensembles with nonlinear global coupling
pattern formation in multiplex networks
quantifying force networks in particulate systems
quantum graphs: i. some basic structures
fitting in and breaking up: a nonlinear version of coevolving voter models
from networks to optimal higher-order models of complex systems
hawkes processes
complex spreading phenomena in social systems: influence and contagion in real-world social networks
wave mitigation in ordered networks of granular chains
centrality metric for dynamic networks
control principles of complex networks
resynchronization of circadian oscillators and the east-west asymmetry of jet-lag
transitivity reinforcement in the coevolving voter model
ground state on the dumbbell graph
random walks and diffusion on networks
the nonlinear heat equation on dense graphs and graph limits
multi-stage complex contagions
opinion formation and distribution in a bounded-confidence model on various networks
network motifs: simple building blocks of complex networks
portrait of political polarization
six susceptible-infected-susceptible models on scale-free networks
a network-based dynamical ranking system for competitive sports
community structure in time-dependent, multiscale, and multiplex networks
multi-body interactions and non-linear consensus dynamics on networked systems
scientific collaboration networks. i. network construction and fundamental results
network structure from rich but noisy data
collective phenomena emerging from the interactions between dynamical processes in multiplex networks
nonlinear schrödinger equation on graphs: recent results and open problems
complex contagions with timers
a theory of the critical mass. i. interdependence, group heterogeneity, and the production of collective action
interaction mechanisms quantified from dynamical features of frog choruses
a roadmap for the computation of persistent homology
chimera states: coexistence of coherence and incoherence in networks of coupled oscillators
network analysis of particles and grains
epidemic processes in complex networks
topological analysis of data
master stability functions for synchronized coupled systems
cluster synchronization and isolated desynchronization in complex networks with symmetries
bayesian stochastic blockmodeling
modelling sequences and temporal networks with dynamic community structures
activity driven modeling of time varying networks
simplicial activity driven model
the multilayer nature of ecological networks
network analysis and modelling: special issue
dynamical systems on networks: a tutorial
the role of network analysis in industrial and applied mathematics
a network analysis of committees in the u.s. house of representatives
communities in networks
spectral centrality measures in temporal networks
reality inspired voter models: a mini-review
the kuramoto model in complex networks
core-periphery structure in networks (revisited)
dynamic pagerank using evolving teleportation
memory in network flows and its effects on spreading dynamics and community detection
recent advances in percolation theory and its applications
dynamics of dirac solitons in networks
simplicial complexes and complex systems
comparative analysis of two discretizations of ricci curvature for complex networks
dynamics of interacting diseases
null models for community detection in spatially embedded, temporal networks
modeling complex systems with adaptive networks
social diffusion and global drift on networks
the effect of a prudent adaptive behaviour on disease transmission
random walks on simplicial complexes and the normalized hodge 1-laplacian
multiopinion coevolving voter model with infinitely many phase transitions
the architecture of complexity
the importance of the whole: topological data analysis for the network neuroscientist
abrupt desynchronization and extensive multistability in globally coupled oscillator simplexes
complex macroscopic behavior in systems of phase oscillators with adaptive coupling
sine-gordon solitons in networks: scattering and transmission at vertices
topological data analysis of continuum percolation with disks
from kuramoto to crawford: exploring the onset of synchronization in populations of coupled oscillators
motor primitives in space and time via targeted gain modulation in recurrent cortical networks
multistable attractors in a network of phase oscillators with three-body interactions
analysing information flows and key mediators through temporal centrality metrics
topological data analysis of contagion maps for examining spreading processes on networks
eigenvector-based centrality measures for temporal networks
supracentrality analysis of temporal networks with directed interlayer coupling
tunable eigenvector-based centralities for multiplex and temporal networks
topological data analysis: one applied mathematician's heartwarming story of struggle, triumph, and ultimately, more struggle
topological data analysis of biological aggregation models
partitioning signed networks
on analytical approaches to epidemics on networks
using persistent homology to quantify a diurnal cycle in hurricane felix
resolution limits for detecting community changes in multilayer networks
analytical computation of the epidemic threshold on temporal networks
epidemic threshold in continuous-time evolving networks
network models of the diffusion of innovations
graph spectra for complex networks
non-markovian infection spread dramatically alters the susceptible-infected-susceptible epidemic threshold in networks
temporal gillespie algorithm: fast simulation of contagion processes on time-varying networks
forecasting elections using compartmental models of infection
ranking scientific publications using a model of network traffic
coupled disease-behavior dynamics on complex networks: a review
social network analysis: methods and applications
a simple model of global cascades on random networks
braess's paradox in oscillator networks, desynchronization and power outage
inferring symbolic dynamics of chaotic flows from persistence
continuous-time discrete-distribution theory for activity-driven networks
an analytical framework for the study of epidemic models on activity driven networks
modeling memory effects in activity-driven networks
models of continuous-time networks with tie decay, diffusion, and convection
key: cord- - uzk pi authors: soriano, joan b. title: humanistic epidemiology: love in the time of cholera, covid- and other outbreaks date: - - journal: eur j epidemiol doi: . /s - - -y sha: doc_id: cord_uid: uzk pi nan colombian nobel prize awardee gabriel garcía márquez suffered cholera and many bouts of malaria during his life. in love in the time of cholera, one of his many masterpieces, he wrote that persistence (and handwashing!) was rewarded with love after a life of living with countless cholera outbreaks. i am a respiratory epidemiologist; and literally, at the peak of the covid- pandemic, we are now being bombarded with descriptive epidemiology statistics and standard, cold figures: "as of today the death toll of covid- worldwide is , "; "the peak resource use of respirators and icu rooms in the usa is expected on april , …", and counting. in the distant past, there were devastating epidemics of infectious disease, such as cholera, the flu (wrongly called spanish), and the plague's black death. other more recent outbreaks, like sars, mers, or ebola, were considered exotic, faraway occurrences. yet we were not ready for this one. and, at least for the last four generations, we are now living in unprecedented times. no one, even in the wildest nightmares of any hollywood-based science fiction screenwriter, would have anticipated that would start with such drama and suffering. when we were raising our glasses and toasting on new year's eve for a happy , few were aware of a safety alert reported that morning in wuhan, hubei province, china, due to a cluster of pneumonia cases of unknown etiology [ ]. it took only days, on january , , for china's cdc to report that a novel coronavirus was the causative agent of that local outbreak. as, for good and for bad, all is globally interconnected, that minute incident in china is the reason why we live in lockdown, basic civil liberties are limited, there have been many deaths and much suffering, and, locally, my hospital has been near collapse. hospital de la princesa, an old -bed tertiary hospital in downtown madrid, spain, had its d-day on march , , when a total of covid- patients were admitted and + more patients were in the emergency room, impatiently waiting to be admitted [ ]. many two-patient rooms already had three, even four occupants. our petite, modern icu room with beds had to be stretched to beds, by invading two surgical theatres turned into critical care, as well as the entire psychiatric ward. mirroring ancient times, all mentally ill patients, including those with active, severe paranoid schizophrenia or major depression, were sent home with their relatives to make room for others requiring invasive mechanical ventilation, mostly with improvised ventilators, or by reusing disposable ones, or duplicating machines with home-made technology. even friends who have been veteran volunteers with médecins sans frontières in syria's civil war, or in sierra leone's ebola zone, were not ready. using military terminology, la princesa was a war-time hospital in the front line; my respiratory department, with thirteen staff plus eight residents, suffered eleven "casualties", counting quarantines plus infections plus one admission with severe, bilateral pneumonia. but other madrid hospitals were hit even harder; colleagues at hospital la paz or gregorio marañón were suffering an even worse avalanche of patients to care for. 
all like a modern hecatomb, literally from the ancient greek ἑκατόν (hekatón, "one hundred") and βοῦς (boũs, "ox"), a religious sacrifice of a hundred oxen to indicate a great catastrophe with great mortality, or the end of the world. we are still facing a cruel disease and global epidemic, both of biblical proportions [ ]. it is still severely and seriously affecting our old ones and others with heart, lung and other chronic diseases. but not only them. several colleagues of mine, young, completely healthy, even athletic, have been admitted into the same ward where, the previous day, they were caring for patients. we still don't know whether an immunological or genetic factor, a combination of risk factors, or serendipity makes this little rna virus collapse your bronchi and lungs with a thick "snail snot or slime", accompanied by an inflammatory outburst killing some perfectly healthy lungs. as dr landete was explaining to junior residents in the morning clinical round: "this is the first time that i have seen the occurrence of acute sudden respiratory distress syndrome (ards) in front of my eyes. in the emergency room i was examining a walk-in yr-old female patient with temperature, malaise and a dry cough, and within min i had to call an ent colleague to intubate this patient, as she had developed the fastest, quickest ards i've ever seen". even after all their greatest efforts and in the best hands, they could not save her. it is indeed a nasty little bug [ ]. however, there is always hope, and as seen in great literature, times of crisis bring out the best in us. to date, i have seen residents choosing to stay longer after finishing a -h duty to try and save one more critically ill patient; auxiliary nurses improvising aprons and boots with trash bags, who, on finally receiving their space suits, posed for posterity like a football team, always with a ready smile (fig. ); residents in neurology, immunology or pathology becoming chest medicine residents; medical students volunteering to learn the practicalities of lung mechanics and gas exchange; a department head creating a blog aimed at praising individuals for outstanding bravery and commitment; and i have been privileged to lead a small think tank including nurses, doctors, physicists, engineers and other friends who, since saturday march , have met on a daily basis to brainstorm initiatives by videoconference at am, just before seeing patients or awakening their families. many of the above have been living for weeks in hotels next to our hospital, extremely and severely sleep-deprived for a month already (fig. ). our hospital administrators recommended all staff not to take weekends off until further notice. no one disagreed, trade unions included, out of a + headcount. and this has been going on for nearly a month. again, all always with a ready smile. this is the so-called espíritu de la princesa. i, a humble respiratory epidemiologist who has dedicated his professional life to research on copd, asthma and tobacco, had to go back to the textbooks and online resources for a fast-track, hands-on crash course on outbreak research, counting the number of deaths, infected cases, r0 infectivity, and the like [ ]. that was the easy part. realizing that behind every case there was a personal tragedy, a family loss, slowly broke my heart and my lungs. so many people dying alone in elderly homes and residences, without medical care, without any care. i imagine not even anyone holding their hands. 
for the sake of hygiene and competing priorities, no one was available to say a prayer while they were buried or cremated, alone. it will take time to accept this sad passing away, a cruel ending for many. we must live on this planet; there is no other earth or planet/plan b. and we have observed that air pollution and planetary health can be improved within weeks, with concerted individual and societal efforts [ ]. children, confined at home for weeks already, have appreciated playing with their brothers and sisters, or talking with neighbors across the balconies, even remotely with their school friends and teachers. they should be the first to end confinement. and we need to learn the lessons from history. this is not our first epidemic. it is the toll we have to pay for living in society and in cities. if we were still collectors and hunters in the wild, no such thing would have happened. yet humans are emotional, social animals, and beyond our species homo sapiens, scholars say we are of the emotionalis subspecies. as human animals we are not meant to live alone, or die alone, or in solitude. i have no doubt that when this crisis is over, and i am positive it will be over soon, music, theatre, movies, literature and the arts in general will help to restore balance and make us all wiser, better persons. the so-called move from omics to humanomics [ ]. beyond modern, ever more technical and robotized medicine, medical humanism in the twenty-first century will be more important than ever [ ]. as pangloss, candide's optimistic teacher in voltaire's masterpiece, said: "everything happens for a reason". pangloss chants over and over: "… all is for the best in the best of all possible worlds", while candide leads an outrageous life illustrating that it is patently false. but we have no room for pessimism. i remember reading essay on blindness by the portuguese author josé saramago; happily, the panic and selfishness in his outbreak of sudden blindness only occurred in his literature. let's only imagine if gabo's inspiration were today's covid- pandemic, and his love in the time of cholera were rewritten. or la peste by the french novelist albert camus who, at the premature age of , died in a car accident near sens. camus, not wearing a safety belt in the passenger seat, died instantly. but, what a life! la peste tells the story of a plague sweeping the french algerian city of oran. nevertheless, it is not a medical book, but one about human passions during and after an outbreak. can't wait to re-read it. in all of these books, and others, health personnel have been rightfully characterized and praised as heroes and martyrs. yet, last but certainly not least, i wish to make a call to remember the crucial role played by our non-health-related hospital staff. thoroughly and well-deservedly, nurses and doctors are credited, since they must frequently and harshly endure the pains of covid- . however, their work would all be a lost effort without the cleaning personnel, wardens, cooks and cafeteria caterers, administrative workers, security forces, lab technicians, and other hospital-based job groups. they suffer this modern plague equally, often without protection, mostly without recognition, but always proudly; working 24/7, weekends included, and again always with a ready smile. these critical workers should be praised and acknowledged equally since, with no cleaners and cooks, our hospitals would instantly collapse. 
as this is neither the last outbreak nor, in all likelihood, "the" last big one, we need to learn one more lesson from the past. in the future, let's never again take for granted those simple things that during confinement we have suddenly seen as precious: a bear hug, a slap on the back and, of course, a ready smile without a face mask. i have no doubt that medical humanism and the arts are already helping; and they will help us to learn to take better care of our patients, our loved ones, and ourselves. jb soriano, md. madrid, april , .
emergencies preparedness, response. pneumonia of unknown cause - china
situación de covid- en españa
offline: covid- - what countries must do now
a novel coronavirus from patients with pneumonia in china
in snow's footsteps: commentary on shoe-leather and applied epidemiology
the report of the lancet countdown on health and climate change: ensuring that the health of a child
the need for humanomics in the era of genomics and the challenge of chronic disease management
opening editorial - the importance of the humanities in medical education
conflict of interest: there are no conflicts of interest or competing interests to report.
key: cord- - mf jmqp authors: rosen, claire b.; joffe, steven; kelz, rachel r. title: covid- moves medicine into a virtual space: a paradigm shift from touch to talk to establish trust date: - - journal: ann surg doi: . /sla. sha: doc_id: cord_uid: mf jmqp nan counterbalance the value of an in-person exam. during the pandemic, the added risk of exposure to sars-cov- tips the scales further in favor of virtual visits. the doctor-patient relationship hinges on mutual respect and trust. in a world where online dating dominates the singles scene and video chatting with licensed therapists allows patients critical access to mental health care, surgeons should believe that their ability to establish a relationship based on trust does not require physical contact. shouldn't it be possible for a surgeon to inspire her patients to believe in her ability during virtual visits where she faces the patient, his caregivers, and the electronic health record simultaneously? might it, in fact, be easier to make meaningful connections with patients when one can see them on time, in the convenience of their home or place of work? wouldn't it be more respectful and financially responsible to offer a visit free from wasted travel and waiting-room time? prior to the covid pandemic, with diminishing reimbursements and the advent of the electronic health record, physicians were already spending less face-to-face time with patients in favor of more face-to-screen time. in addition, expensive parking fees or transportation costs, coupled with crowded waiting rooms, were the norm. to make time for medical visits, patients often needed to take time off from work, and some have faced the threat of unemployment in order to meet the demands of their medical needs. 
telehealth dramatically reduces the time and economic burden of routine medical care, and, in times of contagion, eliminates the risk of transmission of infectious diseases in overcrowded waiting rooms.
more screen time, less face time - implications for ehr design
improving value and access to specialty medical care for families: a pediatric surgery telehealth program
association of paid sick leave with job retention and financial burden among working patients with colorectal cancer
patient preference for time-saving telehealth postoperative visits after routine surgery in an urban setting
development of a telehealth monitoring service after colorectal surgery: a feasibility study
influence of an early recovery telehealth intervention on physical activity and functioning following coronary artery bypass surgery (cabs) among older adults with high disease burden
medicare telemedicine health care
key: cord- -dqmztvo authors: oghaz, toktam a.; mutlu, ece c.; jasser, jasser; yousefi, niloofar; garibay, ivan title: probabilistic model of narratives over topical trends in social media: a discrete time model date: - - journal: nan doi: nan sha: doc_id: cord_uid: dqmztvo online social media platforms are turning into the prime source of news and narratives about worldwide events. however, a systematic summarization-based narrative extraction that can facilitate communicating the main underlying events is lacking. to address this issue, we propose a novel event-based narrative summary extraction framework. our proposed framework is designed as a probabilistic topic model, with categorical time distribution, followed by extractive text summarization. our topic model identifies topics' recurrence over time with a varying time resolution. this framework not only captures the topic distributions from the data, but also approximates the user activity fluctuations over time. furthermore, we define significance-dispersity trade-off (sdt) as a comparison measure to identify the topic with the highest lifetime attractiveness in a timestamped corpus. we evaluate our model on a large corpus of twitter data, including more than one million tweets in the domain of the disinformation campaigns conducted against the white helmets of syria. our results indicate that the proposed framework is effective in identifying topical trends, as well as extracting narrative summaries from text corpora with timestamped data. social media and microblogging platforms, such as twitter and facebook, are becoming the primary sources of real-time content regarding ongoing socio-political events, such as the united states presidential election in [ ], and natural and man-made emergencies, such as the covid- pandemic in [ ]. however, without the appropriate tools, the massive textual data from these platforms makes it extremely challenging to obtain relevant information on significant events, distinguish between high-quality and unreliable content [ ], or identify the opinions within a polarized domain [ ]. the challenges mentioned above have been studied from different aspects related to topic detection and tracking within the field of natural language processing (nlp). (in this study, we have used the terms narrative and story interchangeably. *equal contribution.) researchers have developed automatic document summarization tools and techniques, which intend to provide concise and fluent summaries over a large corpus of textual data [ ]. 
preserving the key information in the summary and producing summaries that are comparable to human-created narratives are the primary goals of the extractive and abstractive approaches for automatic text summarization [ ]. news websites are a prime example of such techniques, where automatic text summarization algorithms are applied to generate news headlines and titles from the news content [ ]. the shortage of labeled data for text analysis has encouraged researchers to develop novel unsupervised algorithms that consider the co-occurrence of words in documents, as well as emerging new techniques such as exploiting an additional source of information, similar to wikipedia knowledge-based topic models [ , ]. additionally, unsupervised learning enables training general-purpose systems that can be used for a variety of tasks and applications as strong classifiers [ ]. in this regard, statistical models of co-occurrence, such as latent dirichlet allocation (lda) [ ], discover the relevant structure and co-occurrence dependencies of words within a collection of documents to capture the distribution of the topic latent variable from the data. although abundant timestamped textual data, particularly from social media platforms and news reports, are available for analysis, the changes in the distribution of data over time have been neglected in most of the topic mining algorithms proposed in the literature [ ]. for instance, time-series analysis on datasets over the events relative to the us presidential election suggests that modeling topics and extracting summaries without considering the text-time relationship leads to missing the rise and fall of topics over time, the changes in term correlations, and the emergence of new topics [ ]. although continuous-time topic models such as [ ] have been proposed in the literature, topical models with continuous-time distribution cannot model many modes in time, which leads to deficiency in modeling the fluctuations. additionally, continuous-time models suffer from instability problems in the case of analyzing a multimodal dataset that is sparse in time. in this paper, we propose a probabilistic model of topics over time with categorical time distribution to detect topical recurrence, designed as an lda-based generative model. to achieve probabilistic modeling of narratives over topical trends, we incorporate the components of narratives, including named-entities and temporal-causal coherence between events, into our topical model. we believe that what differentiates a narrative model from topic analysis and summarization approaches is the ability to extract relevant sequences of text relative to the corresponding series of events associated with the same topic over time. accordingly, our proposed narrative framework integrates unsupervised topic mining with extractive text summarization for narrative identification and summary extraction. we compare the narratives identified by our model with the topics identified by latent dirichlet allocation (lda) [ ] and topics over time (tot) [ ]. this comparison includes presenting numerical results and analysis for a large corpus of more than one million tweets in the domain of disinformation campaigns conducted against the white helmets of syria. the collected dataset contains tweets spanning months within the years and . our results provide evidence that our proposed method is effective in identifying topical trends within timestamped data. 
furthermore, we define a novel metric called significance-dispersity trade-off (sdt) in order to compare and identify topics with higher lifetime attractiveness in timestamped data. finally, we demonstrate that our proposed model discovers time-localized topics over events that approximate the distribution of user activities on social media platforms. the remainder of this paper is organized as follows: first, an overview of the related works is provided in section . in section , we provide a detailed explanation of our proposed method, followed by the experimental setup and results. finally, in section we conclude the paper and discuss future directions. in this section, we first provide a background on narrative analysis and how the literature has investigated stories in social media. then, we present an overview of topic modeling and text summarization. narratives can be found in all day-to-day activities. the fields of research on narrative analysis include narrative representation, coherence and structure of narratives, and the strategies, aims, and functionality of storytelling [ ]. from a computational perspective, narratives may relate to topic mining, text summarization, machine translation [ ], and graph visualization. the latter can be achieved via directed acyclic graphs (dags) that demonstrate relationships over the network of entities [ ]. narrative summaries can be constructed from an ordered chain of individual events, with causality relationships amongst the events that appear within a specific topic [ ]. the narrative sequence may report fluctuations over time relative to the underlying events. additionally, a story-like interpretation of the text is a must to imply a narrative [ ]. since social media have been accepted as a component of today's society, many studies have investigated narratives in social media content [ , , ]. these narratives contain small autobiographies that have been developed in personal profiles and cover trivial everyday life events. other types of narratives appearing in social media platforms consist of breaking news and long stories of past events [ ]. some types of narratives, such as breaking news, result in the emergence of other narratives related to predictions or projections of events in the near future [ ]. this literature views social media conversation cascades as stories that are co-constructed by the tellers and their audience and that circulate amongst the public within and across social media platforms. moreover, events have been considered as the causes of online user activity, which can be identified via activity fluctuations over time [ , ]. developing appropriate tools for social media narrative analysis can facilitate communicating the main ideas regarding the events in large data. as social media activities generate abundant timestamped multimodal data, many studies such as [ ] have presented algorithms to discover the topics and develop descriptive summaries over social media events. topic models are probabilistic models that discover word patterns which reflect the underlying topics in a set of document collections [ ]. the most commonly used approach to topic modeling is latent dirichlet allocation (lda) [ ]. lda is a generative probabilistic model with a hierarchical bayesian network structure that can be used for a variety of applications with discrete data, including text corpora. using lda for topic mining, a document is a bag-of-words that has a mixture of latent topics [ ]. 
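a minimal lda run on a toy corpus, standing in for the lda baseline discussed above; the documents, the library choice (scikit-learn), and the parameter values are illustrative only.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Fit LDA on a tiny bag-of-words corpus and inspect the learned
# topic-word distributions (phi) and document-topic mixtures (theta).
docs = [
    "rescue workers pulled survivors from the rubble",
    "volunteers deliver aid and medical supplies",
    "the campaign spread false claims on social media",
    "bots amplified the disinformation campaign online",
    "medical volunteers treat survivors after the airstrike",
    "false claims about rescue workers spread online",
]
vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topic = lda.fit_transform(X)                 # theta: topics per document

vocab = vec.get_feature_names_out()
for z, word_weights in enumerate(lda.components_):   # phi: words per topic
    top = word_weights.argsort()[::-1][:5]
    print(f"topic {z}:", ", ".join(vocab[i] for i in top))
print("document-topic proportions:\n", doc_topic.round(2))
```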
many advanced topic modeling approaches have been derived from lda, including hierarchical topic models [ , ] that learn and organize the topics into a hierarchy to address super-sub topic relationships. this approach is well-suited for analyzing social media and news stories that contain rich data over a series of real-world events [ ]. topic models over time with continuous-time distribution [ ] and dynamic topic models [ ] intend to capture the rise and fall of topics within a time range. however, continuous-time topic models, such as those with beta or normal time distributions, cannot model many modes in time. furthermore, the smooth time distribution over topics does not allow recognizing distinct topical events in the timestamped dataset, where topical events reflect the event-based topic activity fluctuations over time. topic modeling and summarization of social media data is challenging as a result of certain restrictions, such as the maximum number of characters allowed on the twitter platform. as short texts or microblogs have low word co-occurrence and contextual information, models designed for short-text topic analysis and summarization may obtain context information with short-text aggregation to enrich the relevant context before further analysis [ ]. document summarization techniques are generally categorized into abstractive and extractive text summarization models. herein, we consider extractive text summarization methods. several algorithms for extractive text summarization have been proposed in the literature that assign a salience score to sentences [ ]. to summarize a text corpus with short text, [ ] presents an automatic summarization algorithm with topic clustering, cluster ranking and assigning scores to the intermediate features, and sentence extraction. some other approaches, particularly for twitter data, include aggregating tweets by hashtags or conversation cascades [ , ], and obtaining summaries for a targeted event of interest as one or a set of tweets that are representative of the topics [ ]. additionally, neural network-based summarization models [ , ], commonly with an encoder-decoder architecture, leverage an attention mechanism for contextual information among sentences, or the rouge evaluation metric, to identify discriminative features for sentence ranking and summarization. however, these architectures require labeled datasets and might not apply to short text. text summarization with compression using neural networks is proposed by [ ], which applies joint extraction and syntactic compression to rank compressed summaries with a neural network. our focus in the present work is on probabilistic topic modeling and extractive text summarization to provide descriptive narratives for the underlying events that occurred over a period of time. in this section we explain our narrative framework. the framework comprises two steps: (i) narrative modeling based on topic identification over time, and (ii) extractive summarization from the identified narratives. to discover the narratives over topical events, first, we use our discrete-time generative narrative model as an unsupervised learning algorithm to learn the distribution of textual content from daily conversation cascades. then, we extract narrative summaries over topical events from sentences in the time categories. this is achieved by sampling from the identified distribution of narratives and performing sentence ranking. the narrative modeling and summarization steps are explained below in separate subsections. 
to model narratives, we design our topic model such that the discovered topics present a series of timely ordered topical events. accordingly, the topical events deliver a narrative covering distinct events over the same topic. in this regard, we present narratives over categorical time (noc), a novel probabilistic topic model that discovers topics based on both word co-occurrence and temporal information to present a narrative of events. according to the topic-time relationship explained above, we refer to the topics as narratives, to topical events as events, and to the extracted, timely ordered sentences of documents with a high probability of belonging to each event as the extracted narrative summary. to fully comply with the definition of narrative, we assume a causality relation between the conversation cascades in social media. however, we do not investigate the causality relation across the conversation cascades or named-entities. the differences between our narrative model and dynamic topic models [ ], topic models with continuous time distribution [ ], and hierarchical topic models [ , ] include: not filtering the data for a specific event, imposing sharp transitions for topic-time changes with time slicing, discovering topical events without scalability and sparsity issues, allowing multimodal topic distributions in time as a result of the categorical time distribution, and selecting an appropriate slicing size such that distinct topical events are recognizable. additionally, the categorical time distribution enables discovering topical events with varying time resolution, for instance, weekly, biweekly, and monthly. time discretization brings the question of selecting the appropriate slicing size, or the number of categories, which depends on the characteristics of the dataset under study. on the contrary, topical models with continuous time distribution cannot model many modes in time. additionally, continuous-time models such as [ ] suffer from instability problems if the dataset is multimodal and sparse in time. furthermore, categorical time enables discovering topic recurrence, which results in identifying topical events related to distinct narrative activities, which is of interest in this paper. narrative activities in social media refer to the amount of textual content that is circulating in online platforms over time, corresponding to a specific topic. the generative process in noc models timestamps and words per document using gibbs sampling, which is a markov chain monte carlo (mcmc) algorithm. the graphical model of noc is illustrated in figure . as can be seen from the graphical model, the posterior distribution of topics is dependent on both text and time modalities. this generative procedure can be described as follows: i. for each topic z, draw t multinomials ϕ_z from a dirichlet prior β; ii. for each document d, draw a multinomial θ_d from a dirichlet prior α; iii. for each word w_di in d: (a) draw a topic z_di from multinomial θ_d; (b) draw a word w_di from multinomial ϕ_{z_di}; and (c) draw a time slice t_di from the categorical distribution ψ_{z_di}. in this model, gibbs sampling provides approximate inference instead of exact inference. 
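the generative procedure above is easy to simulate forward. the following sketch draws a toy corpus from per-topic word distributions φ, per-document topic mixtures θ, and per-topic categorical time distributions ψ; the per-token timestamp step (c) follows the gap-filled description above, and all sizes and priors are illustrative.

```python
import numpy as np

# Forward simulation of the NOC generative process on a toy corpus.
rng = np.random.default_rng(0)
T, V, K, D, L = 2, 20, 6, 5, 30     # topics, vocab, time slices, docs, words/doc
alpha, beta = 1.0, 0.1
phi = rng.dirichlet([beta] * V, size=T)     # word distribution per topic
psi = rng.dirichlet([1.0] * K, size=T)      # time-slice distribution per topic
docs = []
for d in range(D):
    theta_d = rng.dirichlet([alpha] * T)    # topic mixture for document d
    words, times = [], []
    for _ in range(L):
        z = rng.choice(T, p=theta_d)            # (a) draw a topic
        words.append(rng.choice(V, p=phi[z]))   # (b) draw a word
        times.append(rng.choice(K, p=psi[z]))   # (c) draw a time slice
    docs.append((words, times))

print("first document word ids:", docs[0][0][:10])
print("first document time slices:", docs[0][1][:10])
```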
to calculate the probability of assigning a topic to word w_di, we first form the joint probability of the dataset, p(z_di, w_di, t_di | w_−di, t_−di, z_−di, α, β, ψ), and use the chain rule to derive p(z_di | w, t, z_−di, α, β, ψ), where the subscript −di refers to all tokens except w_di. in the standard collapsed-gibbs form for this class of models, the conditional reads

$$p(z_{di}=z \mid \mathbf{w},\mathbf{t},\mathbf{z}_{-di},\alpha,\beta,\psi)\;\propto\;(m_{dz}+\alpha)\,\frac{n_{zw_{di}}+\beta}{\sum_{v}(n_{zv}+\beta)}\;\psi_{z,k}\quad\text{for } t_{di}\in b_k,$$

where n_zv is the number of times word v is assigned to topic z, m_dz is the number of word tokens in document d assigned to topic z, and b_k represents the kth time slice. details of the gibbs sampling derivation can be found in the appendix. after each iteration of gibbs sampling, we update the probability p(t_z ∈ b_k) as

$$p(t_{z}\in b_k)=\frac{\sum_{i} I(t_{z_{di}}\in b_k)}{\sum_{k'}\sum_{i} I(t_{z_{di}}\in b_{k'})},$$

where i(.) equals 1 when t_{z_di} ∈ b_k and 0 otherwise. in this paper, we report results with a bi-weekly categorical time resolution.

to determine values for the hyper-parameters α and β, and to investigate the sensitivity of the model to these values, we repeated our experiment with symmetric dirichlet distributions using values α ∈ [ . , . , ] and β ∈ [ . , . , . , . , ]. we observed that the model was not significantly sensitive to these values; we therefore fix α = and β = . , both as symmetric dirichlet priors. we initialize the hyper-parameter ψ in two ways for comparison: i. random initialization (the model referred to as noc_r) and ii. initialization based on the probability of user activity per time category, illustrated in figure c (the model referred to as noc_a).

to estimate the number of topics for our experiments, we first visualize the tweets' hashtag co-occurrence graph and measure its modularity to examine the community structure. we observe the highest modularity score of . using a modularity resolution of . . figure illustrates a downsampled version of this graph, where each color represents a modularity class and the edges are weighted by the number of hashtag co-occurrences in the document collection. our modularity analysis suggests that few distinct hashtag communities exist; additionally, the dataset under study contains tweets associated with a single domain. as a result, we assume the number of topics to be relatively low. to choose an appropriate number of topics, we repeated lda with the number of topics t ∈ [ , . . . , ] in increments of size . we evaluated the c_v coherence of the topics identified by lda and observed the highest coherence scores for t = and t = , respectively; we therefore report our experimental results using these values.

we employ the discovered probabilities of topics over documents, θ, of words per topic, ϕ, and of topics per time category, ψ, to perform sentence ranking. this ranking extracts the sentences with the highest scores of belonging to each topic. it is achieved by performing weighted sampling on the collection of documents based on the probabilities of topics per time category, ψ, and drawing d documents from θ. the weighted sampling draws more documents from the time categories b_k with higher ψ, as these time slices contain more documents related to the topic z. each document contains a sequence of sentences (s_1, s_2, . . . , s_J) ∈ d from the aggregated daily conversation cascades. information on the aggregation of conversation cascades and document preparation can be found in section . .
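the c_v coherence sweep described above can be reproduced with off-the-shelf tooling. the sketch below uses gensim (an assumption about tooling; the paper does not name its implementation), scanning candidate topic counts and reporting the most coherent one.

```python
from gensim.corpora import Dictionary
from gensim.models import LdaModel
from gensim.models.coherencemodel import CoherenceModel

def best_topic_count(texts, candidates):
    """texts: list of tokenized documents; candidates: iterable of topic counts."""
    dictionary = Dictionary(texts)
    corpus = [dictionary.doc2bow(t) for t in texts]
    scores = {}
    for t in candidates:
        lda = LdaModel(corpus, num_topics=t, id2word=dictionary,
                       random_state=0, passes=5)
        cm = CoherenceModel(model=lda, texts=texts,
                            dictionary=dictionary, coherence="c_v")
        scores[t] = cm.get_coherence()
    return max(scores, key=scores.get), scores

# usage: best_t, scores = best_topic_count(tokenized_docs, range(2, 21, 2))
```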
since the social media narrative activity over a topic evolves from the circulation of identical or similar textual content on the platform, the content involves significant similarity. for instance, twitter conversation cascades include replies, quotes, and comments, where replies and quotes duplicate the textual content. we therefore applied the jaro-winkler distance over the timely ordered sentences and dismissed sentences with similarity above %, keeping the longest sentence. after removing redundant text in this way, we calculate the probability of each sentence s_j by summing the topic probabilities of the words w_di ∈ s_j; we then select the sentences with the highest accumulated probability of words w per topic z. summary coherence was induced, as suggested in [ ] , by ordering the extracted sentences according to their timestamps such that the oldest sentences appear first. table in the appendix contains the extracted narrative summaries for topics from a sample run.

as mentioned earlier, the topics discovered by noc present a series of timely ordered topical events; thus, the topical events deliver a narrative covering distinct social media events over the same topic. figure shows the narrative distributions generated with noc when the hyper-parameter ψ was randomly initialized (referred to as noc_r). the figure shows that the narratives identified by our model are distinct from one another and that the collapsed distribution of all narratives approximates the distribution of social media user activity over time.

the identified narratives can be evaluated using standard evaluation metrics for topic models. accordingly, we measure the coherence of a topic z using pointwise mutual information (pmi) [ ] , averaged over the pairs of top words:

$$\mathrm{pmi}(z)=\frac{2}{K(K-1)}\sum_{j<k}\log\frac{p(w_j,w_k)}{p(w_j)\,p(w_k)},$$

where k is the number of most probable words for each narrative, p(w_j) and p(w_k) are the occurrence probabilities of words w_j and w_k, and p(w_j, w_k) is the probability of co-occurrence of the two words in the collection of documents.

we compare our model with lda and with tot [ ] , a probabilistic topic-over-time model with a beta distribution for time. table displays the average coherence score across the topics discovered by lda, tot, and noc. for noc, we investigate initializing the parameter ψ randomly and from user activity, referred to as noc_r and noc_a, respectively. we consider the k = most probable words from each topic. the comparison suggests that the narratives identified by noc are more coherent than the topics identified by lda, with an improvement in coherence of about %; the improvement relative to tot was about %. additionally, initializing the hyper-parameter ψ in noc using the distribution of user activity improves narrative coherence by about %.

the attractiveness of a topic to social media users can be investigated through the length of conversation cascades, the amount of initiated textual content, and the number of unique users performing an activity related to the underlying topic. the user activity fluctuations of timestamped data may contain activity bursts that indicate significant events. similarly, the generation and propagation of textual content within an online platform can illustrate the narrative activity connected with events over time, where a burst represents significant narrative activity.
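the deduplication and ranking step can be sketched as follows. the jaro-winkler similarity comes from the jellyfish package (a tooling assumption) and the 0.90 threshold is a placeholder, since the paper's exact cutoff is not recoverable from the extracted text; phi_z is a word-to-probability mapping for topic z, mirroring ϕ above.

```python
import jellyfish  # provides jaro_winkler_similarity (tooling assumption)

SIM_THRESHOLD = 0.90  # placeholder cutoff

def deduplicate(sentences):
    """Drop near-duplicate sentences, keeping the longest of each cluster.
    `sentences` are assumed to already be in timestamp order."""
    kept = []
    for s in sentences:
        dup = next((i for i, k in enumerate(kept)
                    if jellyfish.jaro_winkler_similarity(s, k) > SIM_THRESHOLD), None)
        if dup is None:
            kept.append(s)
        elif len(s) > len(kept[dup]):
            kept[dup] = s  # keep the longest variant
    return kept

def rank_sentences(sentences, phi_z, top_n=5):
    """Score each sentence by the summed topic probability of its words."""
    scored = [(sum(phi_z.get(w, 0.0) for w in s.split()), s) for s in sentences]
    return [s for _, s in sorted(scored, reverse=True)[:top_n]]
```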
additionally, the recurrence of a topic can be considered a measure of its attractiveness. in this regard, we propose the significance-dispersity trade-off (sdt) metric to compare the identified narratives against each other. sdt measures the lifetime attractiveness of the identified narratives based on the distribution of narratives over topical events. the proposed metric quantifies the significance of narrative activities and the recurrence of a topic by employing the shannon entropy of the discovered narrative distributions. the intuition behind the sdt score is that the entropy is maximal when the probability distribution is uniform and minimal when the distribution is a delta function; this is visualized in figure in the appendix. we define the dispersity of a categorical-time topic distribution as a measure of the dispersion of its time categories. based on this definition, the sdt score of topic z is the weighted geometric mean of h and h_max − h,

$$\mathrm{sdt}_{\alpha}(z)=h^{\alpha}\,(h_{\max}-h)^{1-\alpha},$$

where h is the shannon entropy of the categorical time distribution of topic z,

$$h=-\sum_{k=1}^{K}\psi_{z,k}\log\psi_{z,k},\qquad h_{\max}=\log K,$$

and k refers to the number of time slices in the distribution. we assume that social media topics with high lifetime attractiveness are both significant and recurrent; however, the probability distribution imposes a trade-off between the two. the parameter α weights the geometric mean of h and h_max − h, enabling either significance or recurrence to be promoted, depending on the application under study: a larger value of α promotes dispersity in the sdt score, while a smaller value promotes mode significance. the bounds of the sdt score follow from the bounds on h: h = h_max occurs when the distribution under study is uniform, and h = 0 corresponds to a delta distribution.

since the categorical time distribution of our narrative model allows many modes in time, recurrent narratives can be identified, and narrative activity fluctuations can be modeled with a categorical time distribution in topic analysis. table compares the sdt scores measured for the identified narratives using varying values of α; the distributions of the extracted narratives are shown in figure a . the figure shows that narratives and have the highest dispersity, whereas narratives and have the highest significance. we compare sdt_i for narrative i with the user activity count associated with that narrative. the results suggest that the sdt score can be used to identify the narrative with the highest lifetime attractiveness in a timestamped dataset; in our experiments, this is achieved for topic when the value of α is greater than or equal to . . as can be seen, this topic is associated with the highest user activity count, reported in the same table.

to analyze topical events and provide narratives, we investigate a twitter dataset on the domain of the white helmets of syria over a period of month from april to april . this dataset was provided to us by leidos inc. as part of the computational simulation of online social behavior (socialsim) program initiated by the defense advanced research projects agency (darpa). we analyze more than , , tweets from april to april . to prepare the model inputs, we filter out non-english tweets and then clean the data by removing usernames, short urls, and emoticons.
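a minimal numeric sketch of the sdt score, assuming the weighted-geometric-mean form written above (any normalization the paper may apply is not recoverable, so the scores below are comparable only with one another):

```python
import numpy as np

def sdt(psi_z, alpha=0.5, eps=1e-12):
    """SDT score for one topic's categorical time distribution psi_z."""
    p = np.asarray(psi_z, dtype=float)
    p = p / p.sum()
    h = -np.sum(p * np.log(p + eps))   # shannon entropy
    h_max = np.log(len(p))
    return h ** alpha * (h_max - h) ** (1.0 - alpha)

uniform = np.full(26, 1 / 26)                    # fully dispersed activity
delta = np.eye(26)[0]                            # one sharp burst
bimodal = np.zeros(26); bimodal[[3, 17]] = 0.5   # recurrent, two bursts
for name, dist in [("uniform", uniform), ("delta", delta), ("bimodal", bimodal)]:
    print(f"{name:8s} sdt = {sdt(dist, alpha=0.5):.3f}")
```

note that both extremes (uniform and delta) score zero under this form, while a recurrent multi-burst distribution scores highest, which matches the trade-off intuition described in the text.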
additionally, we remove stopwords and perform part-of-speech (pos) tagging and named entity recognition (ner) on each tweet using the stanford named entity recognizer. using the ner tool, we extract persons, locations, and organizations, and we remove all pseudo-documents that do not contain named entities, similar to [ ] . furthermore, we remove tweets shorter than words. as twitter maintains a maximum character limit of characters, collected tweets lack context information and have very low word co-occurrence. we tackle the challenge of topic modeling on short-text tweets, and include plentiful context information, by preparing pseudo-documents as model inputs via aggregating the daily root, parent, and reply/quote/retweet comments in each conversation cascade, maintaining the order of the conversation according to the timestamps of the tweets. this text aggregation yields pseudo-documents rich in context and related words at a daily time resolution. we use the output of this pre-processing phase as the model input pseudo-documents, referred to as documents in this paper.

in this paper, we addressed the problem of narrative modeling and narrative summary extraction for social media content. we presented a narrative framework consisting of i. narratives over categorical time (noc), a probabilistic topic model with a categorical time distribution, and ii. extractive text summarization. the proposed framework identifies narrative activities associated with social media events. identifying topic recurrence and significance over time categories with our model allowed us to propose the significance-dispersity trade-off (sdt) metric, which can be employed as a comparison measure to identify the topic with the highest lifetime attractiveness in a timestamped corpus. results on real-world timestamped data suggest that the narrative framework is effective in identifying distinct and coherent topics, and that the identified narrative distributions approximate the user activity fluctuations over time. moreover, informative and concise narrative summaries for timestamped data are produced. the narrative framework could be further improved by taking the causality relations across social media conversation cascades and social media events into account. other future directions include identifying topical hierarchies and extracting summaries associated with each hierarchy.

starting with the joint distribution p(w, t, z | α, β, ψ), we can use conjugate priors to simplify the equations as

$$p(\mathbf{w},\mathbf{t},\mathbf{z}\mid\alpha,\beta,\psi)=p(\mathbf{w}\mid\mathbf{z},\beta)\,p(\mathbf{t}\mid\psi,\mathbf{z})\,p(\mathbf{z}\mid\alpha),$$

where p and p refer to the probability mass function (pmf) and probability density function (pdf), respectively. the conditional probability p(z_di | w, t, z_−di, α, β, ψ) follows by the chain rule, as given in the main text, and the probability p(t_di ∈ b_k) is measured as the normalized count of topic tokens falling in slice b_k, where the indicator i(.) equals 1 when t_{z_di} ∈ b_k and 0 otherwise.

[table , appendix: extracted narrative summaries for topics from a sample run]

remember first they said the video including the pics of the chlorine cylinder was fake. whitehelmets one america news pearson sharp visits hospital in douma where white helmets filmed chemical attack hoax multiple eyewitness doctors say no chemical attack took place syria. this is the video evidence of the airstrike on zardana an idlib town controlled by very expensive camera on the helmet of the whitehelmets rescuer. white helmets making films of chemical attacks with children in idlib.
top topic words: chemical, attack, douma, terrorist, fake, child, propaganda, video, russian, russia

from the fabrication of the plays of the chemist and coverage of the crimes of terrorism to the public cooperation with the israeli army the white helmets. they are holding children! another chemical attack is imminent its all they've got left! dead including two children and more than wounded mostly women and children. love the white helmets propaganda almost as untruthful as the bbc. trumps usa has built a rationale for its public that it will need to support rebels in holding on to a large chunk of syria. i wonder how it is possible that criminal associations such as whitehelmets and the syrian human rights observatory can make the world go round as they want by influencing the policies of world leaders. u.s. freezes funding for syrias white helmets. white helmets are terrorists. former head of royal navy lord west on bbc white helmets aren't neutral they're on the side of the terrorists.

the summaries provided here are the results of a sample run of the proposed narrative framework and do not reflect the authors' personal opinions.

references:
- a survey of topic modeling in text mining
- text summarization techniques: a brief survey
- leveraging burst in twitter network communities for event detection
- sentence ordering in multidocument summarization
- dynamic topic models
- latent dirichlet allocation
- a survey of multi-label topic models
- automatic summarization of events from social media
- the covid- social media infodemic
- summarizing microblogs during emergency events: a comparison of extractive summarization algorithms
- twitter as arena for the authentic outsider: exploring the social media campaigns of trump and clinton in the us presidential election
- themedelta: dynamic segmentations over temporal topic models
- polarization in social media assists influencers to become more influential: analysis and two inoculation strategies
- small stories research: a narrative paradigm for the analysis of social media. the sage handbook of social media research methods
- hieve: a corpus for extracting event hierarchies from news stories
- hierarchical topic models and the nested chinese restaurant process
- claimbuster: the first-ever end-to-end fact-checking system
- skip n-grams and ranking functions for predicting script events
- latent dirichlet allocation (lda) and topic modeling: models, applications, a survey
- twitter based event summarization
- real-time entity-based event detection for twitter
- models of narrative analysis: a typology
- ranking sentences for extractive summarization with reinforcement learning
- automatic evaluation of topic coherence
- seriality and storytelling in social media
- large-scale hierarchical topic models
- short and sparse text topic modeling via self-aggregation
- leveraging contextual sentence relations for extractive summarization using a neural attention model
- sumblr: continuous summarization of evolving tweet streams
- sub-story detection in twitter with hierarchical dirichlet processes
- from neural sentence summarization to headline generation: a coarse-to-fine approach
- seq seq models for recommending short text conversations
- narrative information extraction with non-linear natural language processing pipelines
- make data sing: the automation of storytelling
- topics over time: a non-markov continuous-time model of topical trends
- neural extractive text summarization with syntactic compression
- incorporating wikipedia concepts and categories as prior knowledge into topic models. intelligent data analysis
- concept over time: the combination of probabilistic topic model with wikipedia knowledge

key: cord- - vln erl authors: bhardwaj, rajneesh; agrawal, amit title: likelihood of survival of coronavirus in a respiratory droplet deposited on a solid surface date: - - journal: phys fluids ( ) doi: . / . sha: doc_id: cord_uid: vln erl

we predict and analyze the drying time of respiratory droplets from a covid- infected subject, which is a crucial time window for infecting another subject. drying of the droplet is predicted using a diffusion-limited evaporation model for a sessile droplet placed on a partially wetted surface with a pinned contact line. variations in droplet volume, contact angle, ambient temperature, and humidity are considered. we analyze the chances of survival of the virus present in the droplet based on the lifetime of the droplets under several conditions and find that the chances of survival are strongly affected by each of these parameters. the magnitude of the shear stress inside the droplet computed using the model is not large enough to obliterate the virus. we also explore the relationship between the drying time of a droplet and the growth rate of the spread of covid- in five different cities and find that they are weakly correlated.

previous studies have reported that infectious diseases such as influenza spread through respiratory droplets, which can transmit the virus from one subject to another through the air. these droplets can be produced by sneezing and coughing. han et al. measured the size distribution of sneeze droplets expelled from the mouth and reported that the geometric mean of the droplet size of sneezes of healthy subjects is around μm for a unimodal distribution and μm for a bimodal distribution. liu et al. reported an approximately % longer drying time of saliva droplets compared with water droplets deposited on a teflon-printed slide; they also predicted and compared these times with a model that considered the solute effect (raoult's effect) due to the presence of salts/electrolytes in saliva, and the slower evaporation of the saliva droplet is attributed to the presence of the solute. xie et al. developed a model for estimating the droplet diameter, temperature, and falling distance as functions of time as droplets are expelled during various respiratory activities, and reported that large droplets expelled horizontally can travel a long distance before hitting the ground. in a recent study, bourouiba provided evidence that droplets expelled during sneezing are carried to a much larger distance (of - m) than previously found; the warm and moist air surrounding the droplets helps carry them to such a large distance. while the role of virus-laden droplets in spreading infectious diseases is well known, the drying time of such droplets after falling on a surface has not been well studied. in this context, buckland and tyrrell experimentally studied the loss of infectivity of different viruses upon drying of virus-laden droplets on a glass slide; at room temperature and % relative humidity, the mean log reduction in titer was reported to be in the range . - . for the viruses they considered. the need for studying the evaporation dynamics of virus-laden droplets has also been recognized in the recent article by mittal et al. furthermore, to reduce transmission of the covid- pandemic caused by sars-cov- , the use of a face mask has been recommended by the who.
the infected droplets could be found on a face mask or on a surface inside a room, which necessitates regular cleaning of surfaces exposed to droplets. the present study therefore examines the drying times of such droplets, which correlate with the time during which the chances of transmission of the virus are high. ,

first, we present the components of the model used to estimate the drying time and shear stress. we consider aqueous respiratory droplets on the order of - nl deposited on a solid surface; this volume range is consistent with previous measurements. the corresponding diameters of the droplets in air are around μm and μm, and the probability density function (pdf) of the normal distribution of droplet diameter in air is plotted in fig. , with a mean diameter of μm and a standard deviation of μm. droplets smaller than μm are not considered in this study because such droplets are expected to remain airborne, while larger, heavier droplets settle down.

the droplet is assumed to be deposited as a spherical cap on the substrate. since the wetted diameter of the droplet is smaller than the capillary length ( . mm for water), the droplet maintains a spherical-cap shape throughout evaporation. the volume v and contact angle θ of a spherical cap of height h and wetted radius r are

$$V=\frac{\pi h}{6}\left(3R^{2}+h^{2}\right),\qquad \theta=2\tan^{-1}\!\left(\frac{h}{R}\right).$$

we consider diffusion-limited, quasi-steady evaporation of a sessile droplet with a pinned contact line on a partially wetted surface (fig. ). the assumption of quasi-steady evaporation is valid for t_h/t_f < . , as suggested by larson, where t_h and t_f are the heat equilibration time in the droplet and the drying time, respectively, and the ratio scales as

$$\frac{t_{h}}{t_{f}}\sim\frac{D\,c_{sat}}{\rho\,\alpha}\,\frac{h}{R},$$

where d, α, h, r, c_sat, and ρ are the diffusion coefficient of liquid vapor in air, the thermal diffusivity of the droplet, the droplet height, the wetted radius, the saturation liquid vapor concentration, and the droplet density, respectively. in the present work, the maximum value of t_h/t_f is estimated to be around . at °c, the maximum water droplet temperature considered here, and at a contact angle of ° (h/r = 1). the values of d, α, and ρ are set as . × − m /s, . × − m /s, and kg/m , respectively. the assumption of quasi-steady evaporation is therefore justified.

the mass loss rate (kg/s) of an evaporating sessile droplet with a pinned contact line is given by the diffusion-limited expression of hu and larson,

$$\dot m=\pi R D\,(1-H)\,c_{sat}\,\big(0.27\,\theta^{2}+1.30\big),$$

where h and θ are the relative humidity and the static contact angle (in radians), respectively. the saturated concentration (kg/m ) of water vapor at a given temperature is obtained using a third-order polynomial in t, , where t is the temperature in °c ( °c ≤ t < °c); the temperature dependence of the diffusion coefficient (m /s) of water vapor is likewise taken from a fitted expression , (the fitted coefficients are given in the cited references). assuming a linear rate of change of droplet volume for a sessile droplet pinned on the surface, , the drying time of the droplet is

$$t_{f}=\frac{\rho V_{0}}{\dot m},$$

where v_0 and ρ are the initial volume and density of the droplet, respectively. the properties of pure water are employed in the present calculations to determine the drying time and shear stress; since the thermo-physical properties of saliva are not very different from those of water, the present results provide a good estimate of the evaporation time under different scenarios.
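as a rough numerical illustration of these relations, the sketch below evaluates t_f = ρV₀/ṁ with the hu-larson flux factor. the saturation-concentration and diffusivity fits are stand-ins (a magnus-type formula and a power-law temperature scaling, respectively) for the paper's polynomial fits, whose coefficients are not recoverable from the extracted text.

```python
import numpy as np

RHO = 998.0  # density of liquid water, kg/m^3

def c_sat(T):
    # Magnus-type saturation vapour pressure (assumption, stands in for the
    # paper's third-order polynomial), converted to mass concentration
    p_sat = 610.94 * np.exp(17.625 * T / (T + 243.04))   # Pa
    return p_sat * 0.018 / (8.314 * (T + 273.15))        # kg/m^3, ideal gas

def diffusivity(T):
    # water-vapour diffusivity in air; simple power-law scaling (assumption)
    return 2.5e-5 * ((T + 273.15) / 298.15) ** 1.8

def drying_time(V0, theta_deg, T, H):
    """Drying time (s) of a pinned sessile droplet: t_f = rho * V0 / mdot."""
    theta = np.radians(theta_deg)
    b = np.tan(theta / 2.0)                                 # h/R of the cap
    R = (6.0 * V0 / (np.pi * b * (3.0 + b**2))) ** (1.0 / 3.0)
    # Hu-Larson diffusion-limited evaporation rate for a pinned contact line
    mdot = np.pi * R * diffusivity(T) * (1.0 - H) * c_sat(T) * (0.27 * theta**2 + 1.30)
    return RHO * V0 / mdot

if __name__ == "__main__":
    for T in (25.0, 40.0):
        t = drying_time(V0=5e-12, theta_deg=40.0, T=T, H=0.5)  # 5 nL droplet
        print(f"T = {T:.0f} C, 5 nL, 40 deg, RH 50%: t_f ~ {t:.0f} s")
```

with these assumed fits, a 5 nl droplet dries in roughly tens of seconds at room temperature and several times faster at the higher temperature, consistent with the trends reported below.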
furthermore, we obtain an expression for the maximum shear stress τ on the nm diameter sars-cov- suspended in the sessile water droplet and estimate its range for the droplet sizes considered. the shear stress on the virus is largest for a virus adhered to the substrate surface (fig. ). assuming a linear velocity profile across the cross section of the virus,

$$\tau=\frac{\mu\,u}{d_{v}},$$

where μ, u, and d_v are the viscosity of the droplet, the flow velocity at the virus apex (fig. ), and the virus diameter, respectively. the flow inside the droplet is driven by the diffusive loss of liquid vapor; we neglect the flow caused by marangoni stress, since an evaporating water droplet in ambient conditions does not exhibit this stress. , , the non-uniform evaporative mass flux on the liquid-gas interface, j (kg m−2 s−1), takes the form

$$J(r)=J_{0}(\theta)\left[1-\left(\frac{r}{R}\right)^{2}\right]^{-\lambda(\theta)},\qquad \lambda(\theta)=\frac{1}{2}-\frac{\theta}{\pi},$$

where r is the radial coordinate (fig. ). this expression is singular at r = R, and the maximum value of j (say, j_max) occurs near the contact line region (say, at r = . R). the magnitude of the evaporation-driven flow velocity (m s−1) scales with the evaporative flux as u ≈ j_max/ρ, so that the maximum shear stress follows as

$$\tau_{\max}\approx\frac{\mu\,J_{\max}}{\rho\,d_{v}}.$$

second, we present the effect of ambient temperature, surface wettability, and relative humidity on the drying time of the droplet. we examine the drying time of a deposited droplet at two ambient temperatures, °c and °c, representative of a room with air-conditioning and of outdoors in summer, respectively. figure shows the variation of evaporation time with droplet volume at the two temperatures; the contact angle and humidity for these simulations are set to ° and %, respectively. at °c, the evaporation time is about s for small droplets, increasing to s for large droplets. the evaporation time increases as the square of the droplet radius, i.e., as the two-thirds power of the volume. an increase in ambient temperature reduces the evaporation time substantially (by about % for a °c rise in temperature); an increase in ambient temperature is therefore expected to drastically reduce the chance of infection through contact with an infected droplet.

the effect of the surface on which the droplet falls is modeled here through an appropriate value of the contact angle: ° corresponds to a water droplet on glass, while ° corresponds to a water droplet on the touch screen of a smartphone (table i). the results of simulations for these two contact angles are plotted in fig. , with ambient temperature and humidity set to °c and %, respectively. the figure shows that the effect of the surface can be quite profound; the evaporation time can increase by % on the more hydrophobic surface. the droplet spreads more as the contact angle decreases, enhancing the rate of mass loss from the droplet to the ambient; consequently, for a surface with a smaller contact angle, the evaporation time of the droplet is shorter. the effect of the surface can be further manifested through temperature differences across the surface, brought about by differences in surface material (and hence emissivity) or by differential cooling (for example, due to the corner effect).
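reusing the drying_time sketch above, the contact-angle effect can be checked numerically; the specific angles below (glass-like vs. screen-like) are illustrative stand-ins for the garbled values of table i.

```python
# effect of wettability at fixed volume, temperature, and humidity
V0, T, H = 5e-12, 25.0, 0.50          # 5 nL droplet (illustrative conditions)
t_glass = drying_time(V0, theta_deg=30.0, T=T, H=H)    # hydrophilic, glass-like
t_screen = drying_time(V0, theta_deg=90.0, T=T, H=H)   # less wettable, screen-like
print(f"glass-like (30 deg):  {t_glass:5.1f} s")
print(f"screen-like (90 deg): {t_screen:5.1f} s "
      f"({100 * (t_screen / t_glass - 1):.0f}% longer)")
```

under these assumptions the less wettable surface dries noticeably more slowly, in line with the text: the smaller wetted radius outweighs the larger flux factor f(θ).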
even a slight difference in surface temperature can further aggravate the surface effect by influencing the evaporation time. sars-cov- has a lipid envelope, and in general the survival tendency of such viruses, when suspended in air, is larger at a lower relative humidity of %- %, compared with several other viruses that lack a protective lipid layer. here, we examine the effect of relative humidity on the survival of the virus inside a droplet deposited on a surface. figure shows that relative humidity has a strong effect on the evaporation time; the contact angle and ambient temperature for these calculations are set to ° and °c, respectively. the evaporation time of a droplet increases almost sevenfold as the humidity increases from % to %, and exceeds min for large droplets at high humidity. with the increase in humidity in coastal areas in summer, and later in other parts of asia in july-september with the advent of the monsoon, this may become an issue, as there will be sufficient time for the virus to spread from the droplet to new hosts upon contact with the infected droplet. higher humidity therefore increases the survival of the virus when it is inside a droplet, although it decreases its chances of survival when the virus is airborne.

finally, we discuss the relevance of the present results in the context of the covid- pandemic. the evaporation time of a droplet is a critical parameter, as it determines the duration over which infection can spread from the droplet to a person coming into contact with it. the virus needs a medium to stay alive; once the droplet has evaporated, the virus is not expected to survive, so the evaporation time can be taken as an indicator of the survival time of the virus. it is generally regarded that a temperature of °c maintained for more than min inactivates most viruses; however, contradictory reports on the effect of temperature on the survivability of sars-cov- exist. , our results indicate that the survival time of the virus depends on the surface on which the droplet has fallen, along with the temperature and humidity of the ambient air.

the present results are expected to be relevant in two scenarios: when droplets are generated by an infected person coughing or sneezing (in the absence of a protective mask), and when fine droplets are sprayed on a surface for cleaning or disinfecting it. a wide range of droplet sizes is expected in both cases. the mutual interaction of the droplets, such that they interfere in each other's evaporation dynamics, is expected to be weak because the distance between droplets is large compared with their diameter. the virus inside a droplet is subjected to shear stress generated by the evaporation-induced flow inside the droplet; however, the magnitude of this shear stress is estimated to be small, and the virus is unlikely to be disrupted by it.

to assess the likelihood of survival of the droplet and the virus on the surface, we determine the mean and standard deviation of the probability density function (pdf) of the normal distribution of droplet drying times for the different cases of ambient temperature, contact angle, and relative humidity. the means and standard deviations are plotted as bars and error bars, respectively, in fig. .
the likely lifetime is in the range ( - ) s for h ≤ %, while it is in the range ( - ) s for h = %. the drying time is thus likely to be around five times longer at high relative humidity, thereby increasing the chances of survival of the virus.

furthermore, we examine the connection between the drying time of a droplet and the growth of the infection; a similar approach was recently tested for droplets suspended in air in ref. . we hypothesize that, since the drying time of a respiratory droplet on a surface is linked to the survival of the droplet, it is correlated with the growth of the pandemic. since the drying time is a function of weather, we compare the growth of infection with the drying time in different cities, selected to span cold/warm and dry/humid weather. the growth of the total number of infections during the pandemic is plotted for the cities in fig. ; the infection data were obtained from public repositories. , the data were fitted with straight lines using the least-squares method, and the slope of each fit represents the growth rate (number of infections per day) of the respective city. the growth rates of new york city and singapore are the highest and the lowest, respectively.

for the different cities, we compute the drying time of a droplet of nl volume, the mean volume obtained from the pdf of the distribution (fig. ), taking the ambient temperature and relative humidity as the means of the respective ranges listed in table ii. as discussed earlier, the drying time increases with humidity but decreases with ambient temperature, so the combined effect of the two dictates the final drying time. this is illustrated by comparing the drying times of singapore and new york city plotted in fig. : the time is shorter for the former despite its larger humidity ( %- % vs. %- %).

finally, fig. compares the growth rate and drying time in the different cities using vertical bars and symbols, respectively. the growth rate appears to be weakly correlated with the drying time, i.e., a larger (smaller) growth rate corresponds to a larger (smaller) drying time. qualitatively, these data support the idea that when a droplet evaporates slowly, the chance of survival of the virus is enhanced and the growth rate is augmented.

we recognize that our model has limitations, which can be addressed in subsequent studies. in particular, the air has been assumed stationary; the evaporation time would be shorter in the presence of convective currents, so the predicted evaporation times are on the conservative side. the effect of the solute present in saliva/mucus (i.e., raoult's law) has not been modeled, and the contact angle and drying behavior of these biological fluids could differ slightly from those of pure water on a solid surface; the impact of these effects on the drying time is, however, expected to be small. furthermore, the model does not consider the interaction of droplets. respiratory droplets expelled from the mouth and/or nose may deposit adjacent to each other on a surface and interact while evaporating; they may also interact while falling, and a falling droplet may coalesce with a droplet already deposited on the surface.
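the least-squares growth-rate estimate described above amounts to a one-line linear fit; a minimal sketch follows, with a made-up case-count series standing in for the repositories' data.

```python
import numpy as np

def growth_rate(days, cumulative_cases):
    """Slope of the least-squares line through the cumulative case counts,
    i.e., the growth rate in infections/day."""
    slope, _intercept = np.polyfit(days, cumulative_cases, deg=1)
    return slope

days = np.arange(30)
toy_counts = 120.0 * days + 40.0 * np.random.rand(30)  # placeholder series
print(f"estimated growth rate: {growth_rate(days, toy_counts):.1f} cases/day")
```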
in addition, receding of the contact line may influence the drying time, which is not considered in the present work.

in sum, we have examined the likelihood of survival of sars-cov- suspended in respiratory droplets originating from a covid- infected subject. the droplet is considered to evaporate under ambient conditions on different surfaces, with droplet volumes in the range ( , ) nl. the datasets of drying times presented here for different ambient conditions and surfaces will be helpful for future studies. the likelihood of survival of the virus increases roughly fivefold under humid conditions compared with dry conditions. the growth rate of covid- was found to be weakly correlated with the outdoor weather. while the present letter discusses the results in the context of covid- , the model is also valid for respiratory droplets of other transmissible diseases, such as influenza a. the data that support the findings of this study are available from the corresponding author upon reasonable request.

references:
- characterizations of particle size distribution of the droplets exhaled by sneeze
- evaporation and dispersion of respiratory droplets from coughing
- how far droplets can move in indoor environments-revisiting the wells evaporation-falling curve
- turbulent gas clouds and respiratory pathogen emissions: potential implications for reducing transmission of covid-
- loss of infectivity on drying various viruses
- the flow physics of covid-
- a review of coronavirus disease- (covid- )
- inactivation of influenza a viruses in the environment and modes of transmission: a critical review
- aerobiology and its role in the transmission of infectious diseases
- transport and deposition patterns in drying sessile droplets
- crc handbook of chemistry and physics
- evaporation of a sessile droplet on a substrate
- pattern formation during the evaporation of a colloidal nanoliter drop: a numerical and experimental study
- a combined computational and experimental investigation on evaporation of a sessile water droplet on a heated hydrophilic substrate
- evaporative deposition patterns: spatial dimensions of the deposit
- analysis of the microfluid flow in an evaporating sessile droplet
- self-assembly of colloidal particles from evaporating droplets: role of dlvo interactions and proposition of a phase diagram
- dynamics of water spreading on a glass surface
- wetting of wood
- on the collision of a droplet with a solid surface
- water wetting and retention of cotton assemblies as affected by alkaline and bleaching treatments
- preparation and adhesion performance of transparent acrylic pressure sensitive adhesives for touch screen panel
- the effect of environmental parameters on the survival of airborne infectious agents
- high temperature and high humidity reduce the transmission of covid-
- no association of covid- transmission with temperature or uv radiation in chinese cities
- modeling ambient temperature and relative humidity sensitivity of respiratory droplets and their role in determining growth rate of covid- outbreaks
- evaporation-induced transport of a pure aqueous droplet by an aqueous mixture droplet
- effect of viscosity on droplet-droplet collisional interaction
- coalescence dynamics of a droplet on a sessile droplet
- on the lifetimes of evaporating droplets with related initial and receding contact angles

key: cord- -rn dow authors: gunson, r.n.; collins, t.c.; carman, w.f.
title: practical experience of high throughput real time pcr in the routine diagnostic virology setting date: - - journal: j clin virol doi: . /j.jcv. . . sha: doc_id: cord_uid: rn dow

the advent of pcr has transformed the utility of the virus diagnostic laboratory. in comparison with traditional gel-based pcr assays, real time pcr offers increased sensitivity and specificity in a rapid format. over the past years, we have introduced a number of qualitative and quantitative real time pcr assays into our routine testing service. during this period, we have gained substantial experience relating to the development and implementation of real-time assays, and we have developed strategies that have allowed us to increase our sample throughput while maintaining or even reducing turn around times. the issues resulting from this experience (some of it bad) are discussed in detail, with the aim of informing laboratories that are only just beginning to investigate the potential of this technology.

the advent of the polymerase chain reaction (pcr) has transformed the utility of the virus diagnostic laboratory. in comparison with traditional methods, pcr offers a highly sensitive and specific result within - h. the routine use of this test in diagnostic laboratories has led to many benefits, including improved patient management and increased ascertainment of previously under-diagnosed and undetectable viruses. real time pcr technologies have further improved upon these already significant benefits (arya et al., ; aslanzadeh, ; bustin and nolan, ; mackay, ; tan et al., ). in comparison with traditional gel-based pcr assays, real time pcr offers increased sensitivity and specificity in a rapid format (turn around time from sample receipt to result < h). unlike traditional systems, which rely upon endpoint analysis, real time pcr assays visualise the reaction as it takes place, allowing quantification and reaction analysis (e.g., pcr efficiency). since real time pcr reactions are performed in a closed system (no gel analysis needed), the risk of contamination is substantially reduced, which has also reduced the requirement for a stringent laboratory structure. the increasing number of chemistries and platforms available for real time pcr has reduced its overall cost significantly, making it an increasingly attractive technique.

over the past years we have introduced a number of qualitative and quantitative real time pcr assays into our routine testing service. these include assays for the detection of influenza a, b and c, human metapneumovirus, respiratory syncytial viruses (rsv) a and b, rhinovirus, parainfluenza viruses - , coronaviruses nl , oc and e, chlamydia pneumoniae, mycoplasma pneumoniae, pneumocystis jiroveci, varicella zoster virus (vzv), herpes simplex virus (hsv) and , cytomegalovirus (cmv), epstein-barr virus (ebv), hhv- , hhv- , norovirus, adenovirus, rotavirus, astrovirus, sapovirus, erythrovirus b , mumps, chlamydia trachomatis, mycoplasma genitalium, neisseria gonorrhoeae and enterovirus. each year we carry out more than , pcr tests. during this period we have gained substantial experience relating to the development and implementation of real time assays, and we have developed strategies that have allowed us to increase our sample throughput while maintaining or even reducing turn around times.
the issues resulting from this experience (some of it bad) are discussed in detail below, with the aim of informing laboratories that are only just beginning to investigate the potential of this technology.

there are numerous chemistries available for real time pcr. these include dual labelled probes (often known as taqman™ probes), minor groove binding (mgb) probes, molecular beacons, fluorescence energy transfer (fret) probes, intercalating dyes (such as sybr green), and more recently developed fluorescent labelled primers such as sunrise™, lux™ or scorpion™ primers. the advantages and disadvantages of each chemistry are discussed elsewhere (arya et al., ; aslanzadeh, ; bustin and nolan, ; mackay, ; tan et al., ) (table ). most published real time probe-based pcr assays for viral diagnosis utilise either molecular beacons or dual labelled probes, although more recent publications tend to favour dual labelled probes. currently all our real time pcr tests are dual labelled probe assays. however, we have experience of both methods and have noted several important differences between the two systems that should be considered before developing or implementing a diagnostic virology test.

molecular beacons are very specific (tyagi and kramer, ). the specificity is a direct result of their structure: in free solution, a molecular beacon adopts a hairpin-loop conformation in which the reporter fluorescence is quenched by its proximity to the quencher molecule (fig. ). this is a very stable state, and a molecular beacon will only bind to a target sequence, become linear and fluoresce if the target is highly complementary. any nucleotide differences between the beacon and the target sequence will greatly reduce the target binding efficiency of the probe. as a result, molecular beacons have an increased propensity for false negative results.

we encountered this problem during the implementation of a previously published molecular beacon based test for the detection of parainfluenza viruses. during the initial assessment, this assay detected all culture/direct immunofluorescence parainfluenza positive samples detected between and . however, all parainfluenza positive samples detected by dif or culture in were negative when tested with this assay. to assess whether the primers were amplifying the parainfluenza viral rna, sybr green was added to the pcr reaction in place of the molecular beacon, and the formation of pcr product was observed. using melt curve analysis, identical melting peaks were observed in all parainfluenza samples and controls (fig. ). running the pcr products on an agarose gel and observing a band of the expected size confirmed the successful amplification of parainfluenza rna by the primers. based on these results, we deduced that the molecular beacon was no longer complementary to the amplified target sequence. consequently, following analysis of more recently available sequences in the database, a new molecular beacon was designed, which detected all parainfluenza samples.
consequently, dual labelled probes are less likely to result in false negative reactions and may, in comparison with molecular beacons, be of greater use in viral diagnosis where occasional changes in even the most conserved target sequence may be expected to occur (although it should be noted that mismatches can also lead to false negative reactions with dual labelled probes). however, either method will be useful if targeting a highly conserved region. the second difference between molecular beacons and dual labelled probe chemistries is related to the normalised fig. . figure showing assessment of whether real time molecular beacon primers were amplifying the parainfluenza viral rna. in this reaction, sybr green was added to the pcr reaction in place of the molecular beacon (all samples negative by molecular beacon assay). the formation of pcr product was observed. using melt curve analysis, identical melting peaks were observed in all parainfluenza samples and controls (http://probes.invitrogen.com/handbook/figures/ .html). fig . . dual labelled probes (also known as taqman tm probes) are oligonucleotides that contain a fluorescent dye on the base, and a quenching dye located on the base. when excited the flourescent dye transfers energy to the nearby quenching dye molecule rather than fluorescing, resulting in a non-fluorescent probe. dual labelled probes are designed to hybridize to an internal region of a pcr product. during pcr, when the polymerase replicates a template on which the probe is bound, the -exonuclease activity of the polymerase cleaves the probe. this separates the fluorescent and quenching dyes and fret no longer occurs, allowing detection of the signal from the reporter dye. fluorescence increases in each cycle, proportional to the rate of probe cleavage. change in fluorescence ( r n ) produced during successful real time pcr. we have found in most, but not all cases, that dual labelled probes produce a greater fluorescent change than molecular beacons (fig. ) . a larger r n allows easier interpretation of results, as low positive results maybe more easily differentiated from the variable background. dual labelled probes provide a greater fluorescent change as the reporter dye is irreversibly released from the quencher during the extension stage of each pcr cycle. consequently there is a cumulative and permanent record of successful amplification, which is added to during subsequent pcr cycles. molecular beacons are not destroyed at the end of each cycle, but return to free solution during the denaturation phase and revert back to their hairpin-loop structure. consequently there is no accumulation of free reporter dye and the extra fluorescence produced is less after each cycle than when compared to dual labelled probes. since real time pcr is a relatively new technique, published assays may not be available for all viral pathogens. as a result many laboratories may wish to develop novel in-house real time pcr assays. the initial stages in developing a real time pcr assay are the same as those required for designing traditional gel based pcr tests. the first step is to identify a conserved region of the viral (or other pathogen) genome in which to design the assay. a literature review will often reveal which genes are conserved, and most often these will be genes encoding nonimmunogenic proteins. once a gene is identified, a blast search (http://www.ncbi.nlm.nih.gov/blast/) is performed to locate the most conserved regions within this gene. 
as real time amplicons are short and contain a third oligonucleotide (i.e., the probe), the ideal region in which to design an assay would be - nucleotides long, with stretches of bases devoid of all base degeneracies. it is best to find a conserved region of - bases to allow the software to identify a number of potential assays. several software programs are available to design real time assays, and software is often provided by the instrument supplier. beacon designer (premier biosoft) can be used to design either molecular beacon or dual labelled probe based assays, and has additional functionality such as blast and secondary structure searching. primer express (applied biosystems) is another useful tool for designing taqman based assays, and is the only software currently available for designing assays based on applied biosystems (abi) mgb probes. once the software has suggested a primer and probe set, it is important to ensure that they meet the criteria in table :

table : main factors to consider when developing a dual labelled probe pcr assay
- identify a conserved region of the viral (or other pathogen) genome
- identify a region within this area of ∼ - bases in length
- check that the probe sequence contains more c residues than g residues
- ensure that the probe does not begin with a g
- the optimal primer t_m values are - °c
- the optimal probe t_m should be ∼ °c higher
- the amplicon should not exceed bp in length
- primers should not contain more than / g or c residues at the ′ end
- check the amplicon for secondary structure, and for specificity

assuming the primers and probe meet these criteria, it is advisable to check the amplicon for secondary structure and for specificity. secondary structure prediction software is available on the internet; for example, michael zuker's m-fold server (http://www.bioinfo.rpi.edu/∼zukerm/rna/) is particularly useful. a highly structured amplicon (more negative Δg) may reduce the efficiency of reverse transcription or primer annealing, and hence the overall sensitivity of the assay (fig. ). the final stage of the design process is to check the amplicon for specificity using the blast algorithm (http://www.ncbi.nlm.nih.gov/blast/). the assay should be specific for the sequence/organism of interest and should not detect other sequences. non-specific matches may be picked up, but closer analysis of the primer and probe binding sites often confirms that these sequences will not be amplified or detected, owing to multiple base changes.

fig. : comparison of the Δr_n produced when using dual labelled probes (a) vs. molecular beacons (b).

fig. : structure within pcr amplicons may affect the sensitivity of an assay. the respiratory syncytial virus (rsv)-a detection limit is copies/reaction, while rsv-b, which is more structured, has a detection limit of - copies per reaction.
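the criteria in the table lend themselves to a simple automated pre-screen. the sketch below is illustrative only: the numeric thresholds are placeholders (the exact values are garbled in the extracted text), and biopython's nearest-neighbour t_m estimate stands in for whatever the design software computes.

```python
from Bio.SeqUtils import MeltingTemp as mt  # Biopython (tooling assumption)

PRIMER_TM_RANGE = (58.0, 60.0)   # placeholder values
PROBE_TM_OFFSET = 10.0           # probe Tm above primer Tm (placeholder)
MAX_AMPLICON_LEN = 150           # placeholder
MAX_GC_AT_3PRIME = 2             # max G/C in the last 5 bases (placeholder)

def check_assay(fwd, rev, probe, amplicon):
    """Return a list of criteria violations for a candidate primer/probe set."""
    problems = []
    for name, primer in (("forward", fwd), ("reverse", rev)):
        tm = mt.Tm_NN(primer)
        if not PRIMER_TM_RANGE[0] <= tm <= PRIMER_TM_RANGE[1]:
            problems.append(f"{name} primer Tm {tm:.1f} C outside target range")
        tail = primer[-5:]
        if tail.count("G") + tail.count("C") > MAX_GC_AT_3PRIME:
            problems.append(f"{name} primer 3' end too GC-rich")
    if probe.startswith("G"):
        problems.append("probe begins with a G")
    if probe.count("C") <= probe.count("G"):
        problems.append("probe should contain more C than G residues")
    if mt.Tm_NN(probe) < mt.Tm_NN(fwd) + PROBE_TM_OFFSET:
        problems.append("probe Tm not sufficiently above primer Tm")
    if len(amplicon) > MAX_AMPLICON_LEN:
        problems.append("amplicon too long")
    return problems or ["assay passes all checks"]
```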
when performing gel-based pcr it is essential to fully optimise primer concentrations to achieve the best sensitivity of the assay and the best end-point signal (gunson et al., ). in real time pcr, the signal is detected early in the amplification process, so the end-point variation seen in gel-based assays does not affect the result, and careful design of the assay can reduce primer-dimer formation and increase the efficiency of the specific amplification reaction. because of this, many manufacturers of real time pcr equipment and of oligonucleotide primers and probes no longer recommend optimising primer and probe concentrations for real time taqman assays. despite this, we still perform an initial optimisation of both primer and probe concentrations to ensure our real time pcr assays run at their most sensitive and efficient. although for the majority of our assays the optimal concentrations are : : nm (forward:reverse:probe), we have observed on several occasions that the optimal primer and probe concentrations differed from the recommended values. our method for primer and probe optimisation is available online (www.clinical-virology.org); other methods are also available.

optimisation of a real time pcr requires positive control material. where a positive control is not available (for example, a virus that cannot be cultured, or a highly pathogenic virus such as h influenza or the sars coronavirus), dna or rna oligonucleotide targets may be ordered. these are also useful as alternatives to plasmids as standards in quantitative assays. it should be noted that such oligonucleotide controls must be ordered from a separate supplier to prevent contamination of the primer-probe set, and should be diluted in a separate laboratory prior to use, as they may contain up to target copies per ml and are therefore a considerable source of potential contamination. we have observed contamination in erythrovirus b primers purchased several months after a full length oligonucleotide control was ordered from the same supplier (fig. ).

once the assay is optimised, it is essential to check the sensitivity and specificity of the new pcr assay using a selection of sample 'panels'. there is much debate about what constitutes an acceptable validation process. the panels should minimally include clinical samples known to be positive by the current standard assay, and should consist of the sample types commonly submitted for examination for the virus in question. clinical samples tested negative by the previous method should also be examined, to determine whether the new assay is more sensitive than the current test, and samples known to be positive for other agents should be tested to confirm assay specificity.

fig. : contamination of primers and probes with assay target produced at the same facility. a full length dna oligonucleotide representing the amplicon of a b real time pcr assay was synthesised by supplier a. during a later investigation into assay contamination following a reagent change, primers and probes were again purchased from supplier a (salt free and hplc purified) and from an alternative supplier b. the reagents purchased from supplier b were clean, whilst those from supplier a were contaminated with the previously synthesised positive control, even after hplc purification. mean c_t values: supplier a salt free negative control, . ; supplier a salt free positive control (− ), . ; supplier a hplc negative control, . ; supplier a hplc positive control (− ), . ; supplier b negative control, . ; supplier b positive control (− ), . .
serial dilution series of known positive samples may also be prepared and tested in parallel in the new and previous assay systems to determine which assay is more sensitive. ideally these dilution panels should represent all subgroups of the target virus, to ensure the test is sensitive for all types. a new test must be at least as sensitive as the assay in current use, and should ideally detect a wider range of virus subtypes/variants. an additional way to validate a new assay is to test it using samples from an external quality assurance programme. panels may be obtained from various sources, including the national external quality assessment service (neqas) and quality control for molecular diagnostics (qcmd), and the expected results may be compared with those obtained from the old and new assays run in parallel. the use of such panels also allows comparison of assays currently in use by different laboratories.

when implementing a newly designed or previously published assay, a number of changes can be made to reduce the turn around time of the assay and increase laboratory throughput.

multiplex real time pcr assays allow the detection of multiple pathogens within a single tube. such assays reduce overall testing costs and turn around times, enabling high throughput. a number of multiplex real time pcr assays are described in the literature (draganov and kulvachev, ; gunson et al., ; hindiyeh et al., ; richards et al., ; templeton et al., ); we recently described triplex assays designed to detect respiratory viral pathogens. designing a multiplex real time pcr is a complicated process, often requiring a great deal of trial and error; some general criteria that may prove useful when designing such assays are outlined below.

to design appropriate primers and probes, users should follow the development protocols outlined above, taking care that there are no primer or probe interactions that might reduce the sensitivity or efficiency of the pcr reaction. most primer design software allows primer-probe interactions to be examined. to optimise the multiplex assay, each separate pcr should be optimised individually before being assessed in combination (see the section above for details). further experiments should assess the sensitivity of the multiplex assay for the simultaneous detection of mixed infections (real or spiked) and of low copy targets in high copy backgrounds. ideally, no loss in sensitivity should be observed when additional primers are added. if the multiplex assay is less sensitive, altering the ratio of primer/probe concentrations may prove useful; alternatively, changing the concentrations of pcr reagents (enzyme, mg2+, dntps, etc.) may be beneficial. some manufacturers now produce real time reaction mixes specifically designed for multiplex assays and provide guidelines on the optimal primer and probe concentrations to use. if all these measures fail to improve the sensitivity of the multiplex assay, then some or all of the primers and probes may have to be redesigned.

the number of targets detected in one assay is limited by the number of detection channels available on the real time platform and by the number of fluorescent dyes available. newer machines tend to have five channels. although many fluorescent dyes are available, many of their excitation/emission spectra overlap, and thus only certain combinations can be used.
at present we are using fam, vic, and cy detectors, as these are optimal for the filter set utilised in the abi (please note that this may differ when using other platforms). syndrome-based testing policies are ideal for rapid, high-throughput testing. in our laboratory we offer a number of such "menus", which negate the need for clinical coding and allow samples to be tested immediately upon receipt (table ). for example, all csf samples from patients with neurological illnesses such as encephalitis or meningitis are tested for enterovirus, hsv ( and ), vzv, ebv, cmv and hhv- regardless of patient or clinical details. similar testing protocols are in place for urethritis, gastroenteritis, respiratory illness and eye infections. however, although such policies aid high throughput and reduce turnaround times (from sample receipt until the result is ready), it should be noted that they may be more expensive and will occasionally produce results that are difficult to interpret, e.g. herpes viruses in respiratory samples (see below). automation of the extraction and liquid handling process has led to significant improvements in turnaround times and allows high throughput with a reduced risk of user error. many manufacturers now supply automated equipment for the extraction of nucleic acid from diagnostic samples (table ). some manufacturers provide open platforms, which can be used with other suppliers' kits and reagents, while others provide complete extraction solutions. although universal extraction kits (dna and rna pathogens and most specimen types) are available, it should also be noted that different kits can be used for particular sample types and pathogens (e.g., rna or dna) and may be more sensitive for a particular application. although automated extraction has many advantages, laboratories should also consider complementing this service with a manual extraction system. this can be used for testing emergency samples that arrive in the laboratory after an automated extraction has begun, or for samples requiring special processing not suited to automation, e.g. tissue. many suppliers also supply automated liquid handling equipment, which can facilitate the set-up of large numbers of pcr reactions. traditionally, most published and in-house developed real-time pcr methods consist of the following standard parameters: a taq dna polymerase activation step (usually °c for - min depending upon the pcr kit manufacturer) followed by - cycles of °c denaturation for - s and an annealing/extension step of °c for s. if an rna virus is to be detected, an additional min reverse transcription step is required before the taq dna polymerase activation. overall, the reaction run time for a real-time pcr is between and min. we have repeatedly shown, using dilution series of a number of dna and rna viral pathogens, that reducing the duration of the reverse transcription, denaturation and annealing/extension steps by % can reduce the reaction run time of the assay significantly without any concurrent loss in sensitivity (table ). overall, our reaction run time has been reduced to approximately min ( min for rt-pcr), freeing up pcr machines for further tests and allowing more testing within the working day. most dual-labelled probe real-time pcr assays are designed to utilise the same pcr parameters (i.e., a denaturation step of °c for s and an annealing and extension step of °c for s). theoretically, multiple different real-time pcr assays can therefore be carried out at the same time on the same plate.
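the effect of shortening cycling steps on total run time is simple arithmetic; the sketch below assumes a generic two-step protocol with an optional reverse transcription step, and all durations are hypothetical placeholders since the actual times are not preserved in the text (ramp times are ignored for simplicity).

```python
# minimal sketch: total run time for a two-step real-time pcr protocol,
# before and after shortening steps. all durations are hypothetical placeholders.

def run_time_min(activation_min, cycles, denat_s, anneal_s, rt_min=0.0):
    """total reaction run time in minutes (instrument ramp times ignored)."""
    return rt_min + activation_min + cycles * (denat_s + anneal_s) / 60.0

standard = run_time_min(activation_min=10, cycles=45, denat_s=15, anneal_s=60, rt_min=30)
# e.g. shortening the rt, denaturation and annealing/extension steps:
shortened = run_time_min(activation_min=10, cycles=45, denat_s=8, anneal_s=30, rt_min=15)

print(f"standard: {standard:.0f} min, shortened: {shortened:.0f} min")
```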
we have also shown that, where dna and rna reagents are purchased from the same supplier, and therefore have identical taq activation requirements, dna assays do not suffer any loss in performance when run through rt-pcr cycling conditions. this will allow laboratories greater flexibility and provide a rapid service. pre-prepared, frozen real-time pcr reagents are user friendly and lead to reduced turnaround times and improved quality control (qc) when compared to the preparation of pcr mixes from separate reagents. we have assessed two different methods of pre-preparing real-time pcr reagents: frozen aliquots of pooled primers and probes, and frozen aliquots containing all real-time pcr reagents. both systems have been assessed over a relatively short time period (up to a maximum of weeks, which corresponds to the maximum period of time a pool would last before running out); ideally these would have been assessed over a longer period. we find that the pooled primer and probe approach best suits seasonal assays, such as those for respiratory pathogens, whereas the latter approach is more suited to assays which are performed regularly throughout the year on a standard number of samples. the advantages and disadvantages are listed in table . we have introduced pooled primers and probes for the majority of our routine dna and rna tests. this has proved especially useful for our high-throughput assays such as the 'respiratory screen', which consists of five triplex real-time rt-pcr assays. for each multiplex assay, the operator needs only to mix three tubes containing pre-aliquotted reagents: an aliquot of mastermix containing rox reference dye, one containing enzyme mix (rt + taq), and an aliquot of primer-probe pool (containing three sets of primers and probes, and sufficient water). in this way, mastermix can be prepared rapidly. the reagents have been carefully quality controlled, and the possibility of pipetting or calculation errors at the time of preparation is reduced. the production of a large number of aliquots at the same time (sufficient for approximately tests) also facilitates inter-run reproducibility and assists in maintaining the quality of the results. while some mix is unavoidably wasted, the time saved and the reduced number of failed runs compensate for this cost, and during the summer months, when sample numbers are much reduced, smaller aliquots can be prepared. table shows the ct values obtained from the coronavirus triplex assay using pooled primers and probes stored for up to weeks, demonstrating the stability of the reagents when stored at −°c. we have now been using the same lot number of pooled primer-probe for the coronavirus assay for in excess of weeks without loss of performance. with this system and pooled controls (see below) in use, we are now able to provide an efficient and reliable service.
frozen pools of primers, probes, mastermix and enzyme. the use of aliquots of frozen mastermix (containing all pcr reagents except template) is an alternative to the frozen primer and probe aliquots described above. the laboratory user need only remove the desired number of aliquots (or plates, if frozen in this format), defrost, and then add the template to be tested. frozen aliquots are easier to use than the pooled primer and probe method, facilitate rigorous quality control and reduce overall turnaround times. however, they are less flexible than the primer-probe aliquot system, and wastage will be more expensive as it includes enzyme.
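assembling mastermix from pre-aliquotted tubes lends itself to a small volume calculator; the per-reaction volumes and overage below are invented placeholders, not the laboratory's actual recipe.

```python
# minimal sketch: per-run volumes when assembling a triplex mastermix from
# three pre-aliquotted tubes (mix + rox, enzyme, pooled primers/probes).
# per-reaction volumes are hypothetical placeholders.

PER_REACTION_UL = {"mastermix_rox": 12.5, "enzyme_mix": 0.5, "primer_probe_pool": 4.5}
OVERAGE = 1.1  # assume 10% extra to cover pipetting losses

def volumes_for_run(n_reactions):
    """microlitres of each tube needed for a run of n_reactions."""
    return {reagent: round(v * n_reactions * OVERAGE, 1)
            for reagent, v in PER_REACTION_UL.items()}

print(volumes_for_run(96))  # e.g. one full plate
```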
furthermore, any mistakes in making up the aliquots will result in the loss of primers and probe as well as expensive mastermix. we have shown, using positive controls, that both rna and dna mastermix from a number of companies (applied biosystems, invitrogen, and qiagen) can be frozen for at least month with no loss of sensitivity. positive and negative controls are an essential part of any diagnostic pcr service. until recently, we, and many other laboratories, utilised two dilutions of a positive control for each virus to be tested: the end-point of a dilution series of cultured virus tested in the relevant assay (acting as a sensitivity control), and the dilution log less dilute. as a result, for each robot extraction run of wells, a substantial number of wells were required for the positive controls alone, and the inclusion of negative extraction controls further reduces the number of extractions available for samples. the use of numerous controls increases the cost per sample and the turnaround times of the service. pooled controls are a significant improvement on the previous method, and we now use separate pools containing respiratory viruses and gastrointestinal pathogens. in order to develop a pooled respiratory or gastrointestinal control, each virus culture or stool extract was serially diluted and an end point established. [table: ct of positive control on frozen primer/probe pools at , and weeks.] for the respiratory virus control, a 'high' positive control pool was prepared by adding an equal volume of the dilution logs above the end point for each of the culture fluids. a further dilution of this 'high' positive control was prepared to produce the 'low' positive control. we now include just two respiratory controls on our robot extractions, freeing up additional wells for other samples. the preparation of a large volume of control at one time allowed better qc and reproducibility to be achieved. aliquots of control are stored at −°c and have been found to be stable for up to months so far. we have previously experienced lot-to-lot variation of both primers and probes, resulting in reductions in test sensitivity. when a new batch of reagents is purchased, we now run a performance test (using the new reagents at the same concentrations previously determined as optimal) by testing the 'old' and 'new' primer-probe sets in parallel with the same positive control on the same pcr run. if the ct and rn values observed are comparable (newly prepared reagents must produce ct values falling within two standard deviations of the mean value determined for the reagents previously in use when testing identical positive controls), the new reagents are released for routine use. if the assay is less sensitive than the previous assay, then primer and/or probe optimisation should take place. ideally this should be done several weeks before the next batch is required for routine use, as new probes or primers may need to be re-ordered. our experience over the winter of - is that re-optimisation has not been required for any of the respiratory assays. for validation of each real-time pcr run we recommend the following. the ct of the positive control should be documented with each run and compared to the value derived from previous runs; this should help identify any loss in sensitivity due to user error or degradation of pcr reagents.
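the lot-acceptance rule described above (new reagents must produce ct values within two standard deviations of the historical mean on identical positive controls) can be expressed directly; the ct values below are hypothetical placeholders.

```python
# minimal sketch of the lot-acceptance rule: a new primer/probe lot is released
# only if its ct on the standard positive control falls within two standard
# deviations of the historical mean for the lot currently in use.
# ct values are hypothetical placeholders.

from statistics import mean, stdev

historical_cts = [28.1, 28.4, 27.9, 28.2, 28.0, 28.3]  # current lot, same control
new_lot_ct = 28.6

mu, sd = mean(historical_cts), stdev(historical_cts)
accepted = abs(new_lot_ct - mu) <= 2 * sd
verdict = "released" if accepted else "needs re-optimisation"
print(f"mean={mu:.2f} sd={sd:.2f} -> new lot {verdict}")
```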
if the ct falls significantly below the expected value (outwith two standard deviations of the mean value determined by previous runs when testing identical positive controls), the run should be repeated. if the ct remains low or reduces further, new controls and pcr reagents may be required. in addition, the overall fluorescence change should also be monitored with each run. reductions in fluorescence may cause interpretation difficulties and may also highlight a problem in the pcr reaction. as with changes in ct, large reductions in fluorescence may result in the need to repeat the pcr or to introduce a new batch of controls and pcr reagents. ideally, real-time pcr tests should include an internal control in order to ensure confidence in negative results. there are many internal control pcr tests available, targeting animal viruses (added to the sample before extraction), synthetic controls (added to the mastermix), and human genes. however, the inclusion of such controls can be expensive, as they may have to be carried out separately from the diagnostic assay; as a result, many laboratories (including ourselves) do not use such controls on all tests. any laboratory performing real-time pcr assays can perform quantitative assays with the addition of suitable standard quantitative controls to the assay, although a uniform sample type is required to obtain meaningful results (e.g., blood, urine). attempting to quantify virus in non-uniform sample types (such as respiratory samples or stool) is not recommended without thorough assessment of sampling reproducibility. in common with many laboratories, we prepare our quantitative standards (oligos or plasmids) in bulk, test these for acceptable linearity and slope (− . ) for a good -fold dilution series (we allow a range of . - . , which equates to a variation of ± % in the efficiency of the reaction), and then aliquot this into volumes sufficient to last week at °c. aliquots are stored frozen at −°c until required. it is essential to track the ct values of the controls to check that the assay is performing satisfactorily, and to enable a smooth transition to a new control set when required (fig. ). we record our ct values in the form of a shewhart control chart (davies). newly prepared standards (produced annually) must have ct values falling within two standard deviations of the mean value determined for the standards previously in use. a second issue with quantitative assays that do not use extracted material as quantitative controls is that these assays are sensitive to changes in extraction methodology or efficiency. we have recently moved to a more efficient extraction kit (qiaamp virus robot kit), but as our standards are plasmid or cellular dna based and are not extracted alongside the specimens, we are now reporting higher viral loads for the same sample than with previous extraction procedures. this observed change in viral loads is only a problem during the crossover period from one extraction procedure to another, as subsequent samples will be analysed in the light of the new baseline level. to ensure intra-run extraction consistency, a positive or an internal control (of known quantity) should be extracted and run at the same time as the samples to be tested. this control should be monitored in the same way as outlined above (see routine validation of each real-time pcr run).
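two of the checks described above, the standard-curve efficiency derived from the slope and the shewhart-style two-standard-deviation warning limit, are illustrated below; the relationship e = 10^(−1/slope) − 1 is the usual one for a ct-versus-log10-copies curve, and all numeric values are placeholders.

```python
# minimal sketch: standard-curve efficiency and two-sd control-chart flags.
# all numbers are hypothetical placeholders.

def efficiency_from_slope(slope):
    """amplification efficiency implied by a standard-curve slope
    (ct vs log10 copies); a slope near -3.32 corresponds to ~100% efficiency."""
    return 10 ** (-1.0 / slope) - 1.0

print(f"{efficiency_from_slope(-3.4):.1%}")  # ~96.8%

def shewhart_flags(cts, mean_ct, sd_ct, k=2.0):
    """flag control ct values outwith k standard deviations of the mean
    (two standard deviations is the usual warning level)."""
    return [abs(ct - mean_ct) > k * sd_ct for ct in cts]

print(shewhart_flags([28.0, 28.3, 29.4], mean_ct=28.1, sd_ct=0.3))
```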
once an assay (or a number of assays) has been introduced into routine service, it is important to re-assess the sensitivity of the assay in relation to currently circulating viruses. this can be carried out using positive samples detected by an alternative test, or by comparing primers and probes to new sequences stored in surveillance databases.
[fig.: application of a shewhart control chart to track potential changes in assay performance. the mean ct values obtained for the × copies per ml standard were plotted over time; the average of these ct values was calculated and plotted (red line) for each data set, along with two standard deviations above (pink line) and below (blue line) the average. two standard deviations are generally accepted as the warning level in such analyses. the first 'jump' (a) represents a change in the set of standards used and, while this is not ideal, results in a much more reproducible assay. the second jump (b) was caused by a change in the primer-probe pool in use and shows a significant change in the sensitivity of the assay; as a result of this analysis, another batch of primer-probe pool was prepared and the results obtained returned to the acceptable range.]
although most assays target a conserved region of the viral genome, small changes in the target can result in false negative reactions due to primer and/or probe mismatches. if a loss in sensitivity occurs, primer or probe sequences may need to be updated or a new assay may have to be developed. interpreting real-time pcr results is a relatively straightforward process: in a fully optimised assay, all positive results should show increases in fluorescence in a characteristic exponential curve. however, there are still pitfalls that we feel users should be made aware of when interpreting data. occasionally samples may show "signal drift" (traces that increase in fluorescence as the pcr progresses but are not exponential) (fig. ). signal drift can be produced for a number of reasons. true positive samples may show signal drift because of sub-optimal pcr conditions, inhibition, or primer mismatches. occasionally negative samples may also show signal drift; this may be due to probe breakdown resulting in a fluorescence increase. signal drift often occurs towards the end of the pcr reaction. some platforms allow multicomponent analysis of weak positive traces. this allows users to assess the changes of each fluorescent label in the reaction: genuine positive traces will show an exponential increase in the fluorescent signal, whereas signal drift is often due to a change in the normalisation dye (e.g., rox). we currently repeat all positive samples with cts greater than cycles, as we feel these may be either low-copy-number positive samples or non-specific reactions. some -well real-time plates require sealing with optically clear plate seals before pcr can take place. on occasion these may not seal properly and pcr reagents evaporate during cycling; as a result, a curve may be produced that mimics a positive pcr reaction. the correct placing of the threshold line is essential to allow accurate ct measurement. some of the computer software available with current real-time pcr formats can automatically place the threshold line during result analysis; any sample with fluorescence above this line will be regarded as positive by the computer.
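one possible heuristic for separating genuine exponential amplification from linear signal drift is to compare a linear and a log-linear fit over the final cycles of a background-subtracted trace; the sketch below uses synthetic traces, not instrument data, and the multicomponent analysis described above remains the more reliable check.

```python
# minimal sketch: does the tail of a trace look exponential (amplification)
# or linear (signal drift)? synthetic illustration, not instrument data.
import numpy as np

cycles = np.arange(1, 41)
genuine = 0.01 * 1.9 ** np.clip(cycles - 28, 0, None)  # flat, then exponential rise
drift = 0.002 * cycles                                  # slow linear creep only

def looks_exponential(signal, window=8):
    """genuine amplification: a log-linear fit beats a linear fit on the tail."""
    y = np.asarray(signal[-window:], dtype=float)
    x = np.arange(window)
    def r2(values):
        resid = values - np.polyval(np.polyfit(x, values, 1), x)
        return 1.0 - resid.var() / values.var()
    return r2(np.log(y)) > r2(y)

print(looks_exponential(genuine), looks_exponential(drift))  # True False
```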
always check the automatic placement of the threshold line, as we have found that the computer will sometimes place it wrongly, resulting in both false positive and false negative results (fig. ). an alternative is to use a fixed threshold line. the use of such a system will ensure the real-time assay is directly comparable to previous runs, although this should not preclude careful analysis of the data.
[fig.: example of negative samples showing signal drift. the two samples shown in blue show an increase in fluorescence when examined using the quantification option (left); analysis of the raw cycling data (right) shows no increase in fluorescence of the kind usually associated with a positive sample.]
the increased sensitivity of real-time pcr means that, like nested pcr, occasionally positive results will be obtained that are not in keeping with accepted knowledge. for example, herpes viruses in throat swabs should be interpreted with care. we often detect low-positive ebv, hhv- , hsv or cmv in throat swabs in cases of respiratory infection. whether these are the cause of disease or unrelated re-activations is unclear. although these findings may be irrelevant to the clinical illness, it is important that these results are not ignored, for as we gain more experience with these sensitive assays we may identify new, previously unrecognised syndromes attributed to particular viral pathogens. there is no doubt that in the coming years an increasing number of virology laboratories will utilise real-time pcr assays. as a result, virology laboratories will be able to offer more tests and process more samples while reducing turnaround times. this can be highlighted by our winter respiratory surveillance service (servis), which currently tests for pathogens. during the - season, samples were tested, with % of results reported within days of the samples arriving in the laboratory (fig. ). for - , when gel-based pcr was used to detect influenza a and b, rsv and picornavirus, samples were tested in total; only . % of results were available in days, with most results returned to users within days.
[fig.: the turnaround time of respiratory samples in - and - .]
with a slightly extended working day, real-time pcr results ought to be reported within h of receipt. the routine use of real-time pcr will have several benefits. first, it will aid patient management (prognosis, treatment guidance and infection control) and may assist in the development of new antiviral therapies. real-time pcr will also improve the sensitivity of the surveillance of viral pathogens, increasing our understanding of these important infections, providing accurate assessments of the morbidity and economic cost of disease, and facilitating the implementation of public health prevention measures.
which real-time pcr chemistry is best for viral diagnosis?
references
- automated extraction and liquid handling equipment
- continual assessment of the sensitivity of real time pcr primers and probes
- basic principles of real time quantitative pcr
- preventing pcr amplification carryover contamination in a clinical laboratory
- pitfalls of quantitative real time reverse-transcription polymerase chain reaction
- a coloured version of the j chart or the amc-d j-chart
- molecular techniques for detection, identification and analysis of human papillomaviruses (hpvs)
- my favourite primers: real time rt-pcr detection of respiratory viral infections in four triplex reactions
- optimisation of pcr reactions using primer chessboarding
- evaluation of a multiplex real time reverse transcriptase pcr assay for detection and differentiation of influenza viruses a and b during the - influenza season in israel
- allelic discrimination by nick-translation pcr with fluorogenic probes
- real time pcr in the microbiology laboratory
- genogroup i and ii noroviruses detected in stool samples by real time reverse transcription-pcr using highly degenerate universal primers
- diagnostic value of real time capillary thermal cycler in virus detection
- rapid and sensitive method using multiplex real time pcr for diagnosis of infections by influenza a and influenza b viruses, respiratory syncytial virus, and parainfluenza viruses , , and
- molecular beacons: probes that fluoresce upon hybridization

key: cord- - r n x authors: lemmon, gordon h; gardner, shea n title: predicting the sensitivity and specificity of published real-time pcr assays date: - - journal: ann clin microbiol antimicrob doi: . / - - - sha: doc_id: cord_uid: r n x

background: in recent years real-time pcr has become a leading technique for nucleic acid detection and quantification. these assays have the potential to greatly enhance efficiency in the clinical laboratory. choice of primer and probe sequences is critical for accurate diagnosis in the clinic, yet current primer/probe signature design strategies are limited, and signature evaluation methods are lacking. methods: we assessed the quality of a signature by predicting the number of true positive, false positive and false negative hits against all available public sequence data. we found real-time pcr signatures described in recent literature and used a blast search based approach to collect all hits to the primer-probe combinations that should be amplified by real-time pcr chemistry. we then compared our hits with the sequences in the ncbi taxonomy tree that the signature was designed to detect. results: we found that many published signatures have high specificity (almost no false positives) but low sensitivity (high false negative rate). where high sensitivity is needed, we offer a revised methodology for signature design, which may designate that multiple signatures are required to detect all sequenced strains. we use this methodology to produce new signatures that are predicted to have higher sensitivity and specificity. conclusion: we show that current methods for real-time pcr assay design have unacceptably low sensitivities for most clinical applications. additionally, as new sequence data become available, old assays must be reassessed and redesigned. a standard protocol for both generating and assessing the quality of these assays is therefore of great value. real-time pcr has the capacity to greatly improve clinical diagnostics. the improved assay design and evaluation methods presented herein will expedite adoption of this technique in the clinical lab.
real-time pcr assays are gaining popularity as a clinical tool for detecting and quantifying the presence of both viral and bacterial pathogens, as reviewed in [ ]. compared to traditional culturing methods used in identification, real-time pcr is fast and cost effective. in addition, it can be quantitative and sensitive, in some cases greatly exceeding the sensitivity of conventional testing methods. commercially distributed kits are available for pcr-based pathogen diagnostics, and pcr is no longer thought of merely as confirmatory to culture. however, real-time pcr assays are limited by the quality of the primers and probes chosen. these primers and probes must be sensitive enough to match all target organisms yet specific enough to exclude all others. a common approach to developing a primer/probe combination is to use commercial software such as primerexpress® (applied biosystems, foster city, ca, usa). this software asks the user to upload a dna sequence file and then finds possible primer/probe sets that meet the assay criteria. generally, a researcher will provide as input a gene region conserved throughout the taxa that the assay is being designed to detect. the software then provides possible primer/probe sets, and the researcher chooses a representative signature. if there are single nucleotide polymorphisms (snps) within the chosen conserved region, a signature with consensus primers and probes is often chosen. next, a blast [ ] search is performed to ensure that the primers are not hitting other targets. finally, the signature is verified in vitro with laboratory strains. while this design approach may work acceptably well in the research laboratory, the clinical laboratory calls for a more thorough analysis to ensure detection of novel, diverse, and uncommon strains. these may appear, for example, as a result of spread by foreign travel or migration. whole-genome-based automated signature design [ ] presents a great improvement over the common method. however, in addition to better design strategies, methods for automated signature evaluation are needed. as additional sequence data become available, it is necessary to regularly reassess the predicted efficacy of a given signature. this analysis must include the predicted false negative and false positive rates for the developed signatures and consider all available public sequence data. we have analyzed a number of real-time pcr assays found in the literature based on public sequence data. herein we report how well these signatures performed, offer a revised approach to pcr assay design, and use this approach to produce new assays predicted to have higher sensitivity and specificity. the literature was combed for recently published articles reporting real-time pcr assays for the clinical detection of bacterial and viral taxa. the primer and probe sequences were accumulated, with a preference for taqman assays; however, intercalating dye assays were also selected. papers reporting nucleotide sequences that could not easily be copied from an online source were avoided. in total, signatures from papers were analyzed. local oracle databases have been constructed from the complete genome sequence data available at ncbi genbank, tigr, embl, img (jgi), and baylor hgsc. we used our "all_virus" and "all_bacteria" databases to find signature matches and predict false negatives and false positives. these databases were designed to contain only whole genomes and whole segments from segmented genomes.
however, the heuristics used to separate whole genomes from partial sequences are not fail-proof, due to inconsistency in sequence annotation within the public databases. consequently, many sequences in these databases may show up as false negatives when they are actually just a section or segment of a genome that is not expected to contain the signature, and we manually sorted these sequences into true or false negatives. a freely available real-time pcr analysis tool called taqsim [ ] was used to find public sequences that would match the primer/probe assay in question. taqsim uses blast searches to find sequences that match both forward and reverse primers and probe. to be reported as a "hit", the primers and probe must match in the required orientations relative to one another, and the primers must be in sufficiently close proximity. the forward/reverse primers may fall on either the plus or minus strand, so long as the orientation relative to one another is appropriate. there may not be mismatches at the 3' end of either primer. for each hit, taqsim calculates the primer and probe melting temperatures as bound to the candidate hit sequence (accounting for mismatches) based on reaction conditions (reagent concentrations and hybridization temperature), and returns sequences predicted to be amplified. instead of replicating the various exact reaction conditions reported in each paper, very lenient settings were applied in all cases, essentially removing the screen for primer/probe vs. candidate hit tm by setting this threshold to °k, and instead checking for specificity by requiring that hits have fewer than mismatches per primer or probe. taqsim's predicted sequence hits were compared with sequences listed under a given set of ncbi taxonomy tree nodes. for instance, if a signature was reported to detect hepatitis b, then its set of taqsim hits would be compared with the set of sequences under node , corresponding to hepatitis b virus. sequences in both sets were considered true positives, sequences in the taqsim output that were missing from the chosen taxonomy nodes were considered false positives, and sequences that were in the taxonomy tree but missing from the taqsim output were considered false negatives. test statistics such as specificity and sensitivity (power) were then calculated; the definitions used in this paper follow below. the primary research articles were read carefully to determine what the authors had designed their primers/probes to detect, and ncbi taxonomy nodes were chosen to represent these target organisms. this was not a trivial task, since many articles lack clarity as to which taxa, specifically, their assay should detect. for instance, the cytomegalovirus assay did not detect all sequences in the cytomegalovirus genus (taxonomy node ), but rather all sequences in the human herpesvirus species (taxonomy node ). none of the articles specified a taxonomy node for their signatures. perl scripting [ ] was used to help compare blast hits and taxonomy node sequences and to count false negative, false positive and true positive sequence matches. however, some sequences required hand sorting due to the wide array of sequence types and annotations. these often represented segmented genomes, in which case many of the would-be false negative sequences simply represent a different segment than the one on which the signature lands, so we manually tabulated them as true negatives. they may also represent plasmids.
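the sensitivity and specificity formulas referred to above did not survive extraction; assuming the conventional definitions, sensitivity = tp/(tp + fn) and specificity = tn/(tn + fp), where tp, fp, fn and tn are counted as in the preceding paragraph. a minimal python sketch of this bookkeeping, with invented accession ids in place of real data:

```python
# minimal sketch: compare predicted hits from an in-silico pcr search against
# the sequences under the target taxonomy node. accession ids are placeholders.

taqsim_hits = {"NC_0001", "NC_0002", "NC_0009"}               # predicted amplifications
taxonomy_node = {"NC_0001", "NC_0002", "NC_0003", "NC_0004"}  # intended targets

true_positives = taqsim_hits & taxonomy_node
false_positives = taqsim_hits - taxonomy_node
false_negatives = taxonomy_node - taqsim_hits

sensitivity = len(true_positives) / (len(true_positives) + len(false_negatives))
print(len(true_positives), len(false_positives), len(false_negatives),
      f"sensitivity={sensitivity:.2f}")
```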
in these situations, a careful review of the genbank entry, and sometimes of the primary article cited by genbank, was necessary to determine whether the sequence of interest was truly a false negative. although we attempt to include only complete genomes, because of inconsistencies in the annotation of sequence data some partial sequences nevertheless make it into our databases. any of these partial cds's documented as containing the target gene on which the signature was supposed to land were counted as false negatives, but those partial cds's not documented to contain the target gene were eliminated from the false negative pool, because it is possible the signature could land on the unsequenced section containing the target gene. our database also contains "glued fragments", which represent draft genomes "glued" together with hundreds of "n"s as a simple way to keep the separate contigs associated as part of the same genome. while we report false negatives from these draft genomes, it is possible that the signatures could land on gaps between the contigs, and that finished sequencing could result in re-classification as a true positive. tables , , , and summarize our analysis of various dna signatures; all but three are taqman signatures, and the three intercalating-dye assays are bolded in the tables. details of all true positive, false positive and false negative sequences are available from the authors. note that these are in silico results; no laboratory testing was performed for verification, so by stating that an organism is "detected" we mean that this is our prediction based on sequence data. a few notes of interest concerning the data in the tables are described below. the two human coronavirus strains, e and oc, are a frequent cause of the common cold [ ]. a taqman assay for e was predicted to perform perfectly, while an assay for oc turned up a number of false positives, all of which were animal coronaviruses. a coxsackie b virus assay [ ] performed well, but a coxsackie b assay [ ] hit many other human coxsackie, echo, and entero viruses. four out of false negatives for a marburg virus assay [ ] were of the lake victoria variety. false negatives associated with a yellow fever signature [ ] included trinidad, french neurotropic, french viscerotropic, and vaccine strains. the filoviridae (ebola/marburg) assay [ ] detected only ebola viruses. staphylococcus aureus [ ] and enterobacteriaceae assays [ ] had low sensitivity. an escherichia coli assay [ ] hit shigella and vibrio sequences. many of these signatures [ ] had high sensitivity, and combining several of them into a multiplex assay would probably improve sensitivity further. these signatures were designed using a minimal set clustering approach [ ]. while individual signatures have decent sensitivity, combining several signatures in one assay, as advocated in the publication, greatly improved sensitivity. the signatures for hepatitis a are currently undergoing laboratory screening by the fda and are performing well (g. hartman, personal communication). several reported signatures produced no predicted hits. these include assays for several flaviviruses [ , , ] and s rrna assays [ ] for several bacteria. examination of blast output showed that in these cases either a primer or internal oligo (probe) did not have blast hits to target, there were too many mismatches per primer or probe sequence above the threshold specified in our analyses, or there were mismatches at the 3' end of a primer relative to target.
it is possible that, if the sequences of the samples used in the laboratory differ from available genomic data, or if the pcr reactions are performed at low stringency (e.g. low annealing temperatures or high salt concentrations), these assays could in fact work in the laboratory. however, according to the genomic data available, a better match of primers and probes to target is possible and is usually desired for high-sensitivity detection. targeting a number of the organisms for which currently published signatures were predicted to perform poorly, as well as some for which additional signatures may be desired (even though published signatures may perform well), we generated new signatures using minimal set clustering (msc) according to methods previously described [ , ]. msc begins by removing non-unique regions from consideration as primers or probes from each of the target sequences, relative to a database of non-target bacterial and viral sequences. the remaining unique regions of each target sequence are mined for all or many candidate signatures, without regard for conservation among other targets, yet satisfying user specifications for primer and probe length, tm, gc%, avoidance of long homopolymer runs, and amplicon length. all candidate signatures are compared to all targets and clustered by the subset of targets they are predicted to detect. signatures within a given cluster are equivalent, in that they are predicted to detect the same subset of targets, so by clustering we reduce the redundancy and size of the problem to finding a small set of signatures that detect all targets. nevertheless, finding the optimal solution of the fewest clusters to detect all targets is an np-complete problem, so for large data sets we use a greedy algorithm to find a small number of clusters that together should pick up all targets (a sketch of this step is given below). in the supplementary table, we often provide more than one alternative signature to detect a given equivalence group of genomes, to serve as a backup should a signature perform poorly in laboratory testing. some of the signatures may have mismatches to some of their intended targets, although these mismatches are not predicted to reduce the tm of primer/probe hybridizing to target below typical taqman reaction conditions. none of these computationally predicted signatures have been screened in the laboratory, as this is beyond the scope of this paper. [table: new signature targets include pseudomonas aeruginosa, escherichia coli, and neisseria meningitidis.] as expected, we found that false negatives were much more common than false positives. though signatures are generally based on conserved gene regions, they often fail to take into account all of the variation within a target set of organisms. this may be because the signatures were developed using sequence data from a handful of strains, rather than a thorough study of all strains publicly available. these false negatives may also represent sequences that have become available since the publication of the given signature. since new sequence data are made available at an ever-increasing rate, there is great benefit in re-evaluating clinically used dna signatures regularly. when new sequence data lead to false negative predictions for a signature, one of two explanations can be given: the new sequences either represent recently recognized variation that has been around since the time the signature was published, or new variation, the result of mutation and natural selection.
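the greedy step referred to above can be illustrated with a small self-contained sketch; the signature names and target subsets are invented placeholders, and the authors' implementation operates on clustered candidate signatures rather than these toy sets.

```python
# minimal sketch of greedy set cover for minimal set clustering:
# repeatedly pick the signature cluster covering the most still-undetected targets.
# signature names and target ids are invented placeholders.

clusters = {
    "sig_a": {"t1", "t2", "t3", "t4"},
    "sig_b": {"t3", "t4", "t5"},
    "sig_c": {"t5", "t6"},
    "sig_d": {"t6"},
}
targets = set().union(*clusters.values())

chosen, uncovered = [], set(targets)
while uncovered:
    best = max(clusters, key=lambda s: len(clusters[s] & uncovered))
    if not clusters[best] & uncovered:
        break  # remaining targets are not detectable by any candidate
    chosen.append(best)
    uncovered -= clusters[best]

print(chosen)  # ['sig_a', 'sig_c'] covers all six targets here
```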
in either case, an improved or additional signature should be designed. high false positive or false negative rates do not necessarily indicate a "bad" dna assay: the quality of an assay must be considered in light of the milieu in which the testing will take place. in the clinical laboratory, a signature with high sensitivity but perhaps low specificity may be preferred over a test with lower sensitivity in cases where the putative pathogen requires immediate treatment or may spread quickly; the case of antibiotic-resistant bacteria probably falls in this category. on the other hand, the nation's basis and biowatch programs insist on zero false positives, so as to avoid public disturbances due to false alarms, while still aiming for zero false negatives [ ]. one must also consider the type of false negative and false positive results to determine their relevance. for instance, in this article an assay for human coronavirus oc [ ] was predicted to detect several animal coronaviruses; what about such a match in a clinical lab in africa? on the other hand, the echovirus sequences that the coxsackie b assay [ ] can detect could produce misleading results in any clinical lab. the false negative and false positive rates presented in this study may vary substantially from those seen empirically. this is because the strains available in a laboratory may differ significantly from the sequence data available, or because the empirical protocol is more or less stringent than the sequence-based requirements we imposed, which allowed no more than mismatches per primer or probe for detection. we believe that as more target sequences become available, our predicted false negative rates will tend to increase for a given published signature, both as a result of better sampling of diversity and as a result of failure to detect newly evolved variants. it has been estimated that a minimum of - genomes is needed to computationally design taqman pcr signatures likely to detect most strains, with the isolates chosen for sequencing selected to span gradients of geographic, phenotypic, and temporal variation [ ]. even more than genomes are needed for particularly diverse organisms. thus, older signatures may not perform as well as newly developed signatures built from the most up-to-date sequence data. a future study of interest would be a longitudinal look at how these rates continue to change over time as additional sequences become available; this study could be performed retrospectively, since sequence submission dates are easily obtained from public databases. we also hypothesized that the wider the intended scope of a signature, the lower its sensitivity would tend to be. the point is illustrated loosely in our data tables: twenty-six of the signatures with less than publicly available target sequences had sensitivities of (i.e. zero false negatives), while signatures with or more targets had an average sensitivity of . . however, this approach only considers scope in the context of the sequence data available. we tried to demonstrate the relationship between specificity and scope at a more fundamental level by grouping signatures by the taxonomic level of their target, as shown in figure . however, the results are misleading: in virology, taxonomic level is not a good indicator of nucleotide diversity. for instance, there is more diversity in the influenza a species than in the entire filoviridae family, which consists of only two known genera: ebola-like viruses and marburg viruses.
a better approach might be to calculate nucleotide diversity as a function of phylogenetic branch length or shared k-mer clusters within a target taxonomy node. finally, we averaged the sensitivities of microbes by genome type, as shown in table . note that the ssrna-rt category includes only hiv- . this chart demonstrates that creating signatures with high sensitivity becomes more difficult for target organisms with high mutation rates. current real-time pcr assay design approaches produce signatures with sensitivities generally too low for clinical use. we suggest that a rigorous approach involving false positive and false negative analysis should be the standard by which an initial assessment of signature quality is made. signatures must also regularly be reassessed as sequence data become available. for targets with wide nucleotide diversity, it becomes necessary to develop a set of signatures, for which we suggest a minimal set clustering approach that may also include signatures with degenerate/inosine bases.
[supplementary file: newrealtimepcrsigatures, fifty-seven taqman pcr primer/probe combinations we predict to have higher sensitivity/specificity than currently published assays; available at http://www.biomedcentral.com/content/supplementary/ - - - -s .doc]
[figure: sensitivity by taxonomy level. each colored diamond represents a real-time pcr assay examined in this paper. black bars indicate the mean, grey bars the median. the top and bottom of each box indicate the th and th percentiles, and grey lines at whisker ends denote min and max values. the wide-ranging sensitivities demonstrate both inconsistency in genetic diversity at a given taxonomy level and inconsistency in signature design approaches.]
references basic local alignment search tool comprehensive dna signature discovery and validation the perl directory frequent detection of human coronaviruses in clinical specimens from patients with respiratory tract infection by use of a novel real-time reverse-transcriptase polymerase chain reaction the interferon inducer ampligen markedly protects mice against coxsackie b virus-induced myocarditis coxsackievirus b infection of human fetal thymus cells rapid detection protocol for filoviruses rapid detection and quantification of rna of ebola and marburg viruses, lassa virus, crimean-congo hemorrhagic fever virus, rift valley fever virus, dengue virus, and yellow fever virus by real-time reverse transcription-pcr a ' nuclease pcr (taq-man) high-throughput assay for detection of the meca gene in staphylococci algorithm for the identification of bacterial pathogens in positive blood cultures by real-time lightcycler polymerase chain reaction (pcr) with sequence-specific probes. diagnostic microbiology and infectious disease development of quantitative gene-specific real-time rt-pcr assays for the detection of measles virus in clinical specimens limitations of taqman pcr for detecting divergent viral pathogens illustrated by hepatitis a, b, c, and e viruses and human immunodeficiency virus development of mulitplex real-time reverse transcriptase pcr assays for detecting eight medically important flaviviruses in mosquitoes development of real-time reverse transcriptase pcr assays to detect and serotype dengue viruses comparative genomics tools applied to bioterrorism defense rapid development of nucleic acid diagnostics sequencing needs for viral diagnostics design and validation of an h taqman real-time one-step reverse transcription-pcr and confirmatory assays for diagnosis and verification of influenza a virus h infections in humans lion t: real-time quantitative pcr assays for detection and monitoring of pathogenic human viruses in immunosuppressed pediatric patients rapid reverse transcription-pcr detection of hepatitis c virus rna in serum by using the taqman fluorogenic detection system rapid detection of west nile virus from human clinical specimens, field-collected mosquitoes, and avian samples by a taqman reverse transcriptase-pcr assay development of a quantitative real-time detection assay for hepatitis b virus dna and comparison with two commercial assays sensitive and accurate quantitation of hepatitis b virus dna using a kinetic fluorescence detection system (tagman pcr) comparison of two quantitative cmv pcr tests, cobas amplicor cmv monitor and taqman assay, and pp -antigenemia assay in the determination of viral loads from peripheral blood of organ transplant patients differentiation of herpes simplex virus types and in clinical samples by a real-time taqman pcr assay development of a flurogenic polymerase chain reaction assay (taqman) for the detection and quantitation of varicella zoster virus rapid and sensitive detection of mumps virus rna directly from clinical samples by real-time pcr development of a real-time reverse-transcription pcr for detection of newcastle disease virus rna in clinical samples transfer and evaluation of an automated, low-cost real-time reverse transcription-pcr test for diagnosis and monitoring of human immunodeficiency virus type infection in a west african resource-limited setting rapid detection of enterovirus rna in cerebrospinal fluid specimens with a novel single-tube real-time reverse transcription-pcr assay use of applied biosystems ht 
sequence detection system and taqman assay for detection of quinolone-resistant neisseria gonorrhoeae comparison of a new quantitative ompa-based real-time pcr taqman assay for detection of chlamydia pneumoniae dna in respiratory specimens with four conventional pcr assays a lightcycler taqman assay for detection of borrelia burgdorferi sensu lato in clinical samples detection of medically important ehrlichia by quantitative multicolor taqman real-time polymerase chain reaction of the dsb gene we thank beth vitalis, jason smith, and tom slezak for helpful discussion and for encouraging this work, and kari allmon for entering the references. we gratefully acknowledge financial support from the intelligence technology innovation center. lawrence livermore national laboratory is operated by lawrence livermore national security, llc, for the u.s. department of energy, national nuclear security administration under contract de-ac - na . the authors declare that they have no competing interests. gl found real time pcr signatures in the literature, wrote perl scripts, and performed the analysis of published signatures. sg conceived of the research, designed new signatures, and provided guidance throughout the study. publish with bio med central and every scientist can read your work free of charge http://www.ann-clinmicrob.com/content/ / / key: cord- -xailjga authors: wang, xiaoli; zeng, daniel; seale, holly; li, su; cheng, he; luan, rongsheng; he, xiong; pang, xinghuo; dou, xiangfeng; wang, quanyi title: comparing early outbreak detection algorithms based on their optimized parameter values date: - - journal: j biomed inform doi: . /j.jbi. . . sha: doc_id: cord_uid: xailjga background: many researchers have evaluated the performance of outbreak detection algorithms with recommended parameter values. however, the influence of parameter values on algorithm performance is often ignored. methods: based on reported case counts of bacillary dysentery from to in beijing, semi-synthetic datasets containing outbreak signals were simulated to evaluate the performance of five outbreak detection algorithms. parameters’ values were optimized prior to the evaluation. results: differences in performances were observed as parameter values changed. of the five algorithms, space–time permutation scan statistics had a specificity of . % and a detection time of less than half a day. the exponential weighted moving average exhibited the shortest detection time of . day, while the modified c , c and c exhibited a detection time of close to one day. conclusion: the performance of these algorithms has a correlation to their parameter values, which may affect the performance evaluation. following the outbreak of severe acute respiratory syndrome [ ] in , there has been a growing recognition of the necessity and urgency of early outbreak detection of infectious diseases. in january , the national disease surveillance, reporting and management system were launched in china. the system which covers infectious diseases has the potential to provide timely analysis and early detection of outbreaks. however, as the passive surveillance system relies on accumulated case and laboratory reports, which are often delayed and sometimes incomplete, the opportunity to contain the spread of the disease is often missed. as increasing numbers of early outbreak detection algorithms are now being used in public health surveillance [ ] [ ] [ ] [ ] [ ] [ ] [ ] [ ] , there is a need to evaluate their performance. 
due to a lack of complete and real data pertaining from historical outbreaks, the perfor-mance of these systems have been previously difficult to evaluate [ ] . adding to these difficulties is the fact that the information obtained from historical outbreaks may be heterogeneous, due to changes in the outbreak surveillance criteria's over time. in order to compensate for missing or heterogeneous information, semisynthetic datasets can be created which contain the outbreak signals, using a software tool. by using this tool, the parameters of the outbreak including the desired duration, temporal pattern and the magnitude (based on a predefined criteria), can be specially set. this approach has been documented in a number of previous studies, which have compared the performance of early outbreak detection algorithms using simulated outbreaks [ ] [ ] [ ] [ ] [ ] [ ] [ ] [ ] . the simulation enables the performance assessment and provides much-need comparative findings about outbreak detection algorithms. however, there are still limited studies examining how the performance varies with the values of these algorithm parameters. our study aimed to observe the relationship between the algorithms' performance and their parameters values. the outcomes of this study may help improve the accuracy and objectivity of the evaluation of these algorithms and provide guidelines for future research and implementation. bacillary dysentery is one of the key epidemic potential diseases in beijing. it commonly occurs in summer and in regions with high population densities. with economic development and improvements in sanitary conditions in china, the incidence of bacillary dysentery has decreased substantially from to [ ] . between and , data from the national disease surveillance reporting and management system showed that the average incidence rate of bacillary dysentery was . cases per , in beijing. whilst there has been a substantial decline in the disease burden, bacillary dysentery continues to be a major public health problem in beijing. the observed daily case counts of bacillary dysentery from to in beijing were extracted from the national disease surveillance reporting and management system [ ] for this study. the onset date of illness and area code at the sub-district level was extracted for each reported case. this data was used as the baseline for the outbreak simulation. data from to were used to adjust and optimize the parameter values of the algorithms, while data from were used to evaluate the algorithms. the outbreak criteria was defined on the basis of the bacillary dysentery reporting criteria specified in the national protocol for information reporting and management of public health emergencies (trial) [ ] . this protocol was issued by the health emergency office of the ministry of health (moh) at the end of . in the protocol, a bacillary dysentery outbreak was defined as the occurrence of or more bacillary dysentery cases in the same school, natural village or community within - days. based on this definition, there was only one actual outbreak in the summer of . during this outbreak, children from a middle school were clinically diagnosed as having bacillary dysentery and four were culture positive for shigella sonnei. the first case became ill on the evening of the st of july and was taken to hospital the next day. two cases were reported on the nd of july, a further four on the rd of july, and two on the th july. 
as there were insufficient documents collected during the outbreak, a simulated outbreak signal had to be produced. before the simulation, the actual outbreak was excluded by replacing actual data with a -day moving average for fear of contamination. our simulation approach used semi-synthetic data, that was, authentic baseline data injected with artificial signals [ ] . the aegis-cluster creation tool (aegis-cct) was used to generate outbreak signals [ ] . first, the duration was fixed at three days and the outbreak magnitude varied from to . the outbreak magnitude was fixed at cases and the duration was varied from one to three days. the temporal progression of these outbreaks included a random, a linear, and an exponential growth spread ( signals for each temporal progression pattern). a total of different outbreak signals were finally simulated. considering the spatial distribution and seasonal variability of bacillary dysentery, we randomly selected ( for each pattern) from a possible sub-districts (townships), where the incidence was higher than the average incidence in beijing, and then randomly selected one day as the starting date of an outbreak from the high incidence seasons. the remaining six outbreak signals were randomly added to the low incidence seasons and areas. simulations injected into the baseline data from the selected sub-districts ( - ) were used to observe the relationship between the algorithm performance and the parameter value. this data allowed us to select the optimal combination of parameter values. simulations added to the baseline data from were used to evaluate the algorithm. in order to reduce sampling errors, means were calculated by repeating the sampling times. evaluation indices included sensitivity, specificity and time to detection [ ] . an outbreak was considered to be detected when a signal was triggered: ( ) within the same period as the start and end date of the particular simulated outbreak; and ( ) within the same sub-district as what the simulation was geographically located in. in our study, sensitivity was defined as the number of outbreaks in which p day was flagged, divided by the number of simulated outbreaks. specificity was defined as the number of days that were not flagged divided by the number of non-outbreak days. time to detection was defined as the interval between the beginning of the simulated outbreak and the first day flagged by the algorithm, divided by the number of simulated outbreaks. time to detection was zero, if the algorithm flagged a simulated outbreak on the first day. time to detection was three, if the algorithm did not produce a flag on any of the days during the period of the simulated outbreak. time to detection is an integrated index that reflects both timeliness and sensitivity of an algorithm. we intended to find a simple and practical criterion to evaluate the performance of these algorithms. generally, the parameter values with the shortest time to detection were considered as preferable. the disparity in specificity between the parameter values was also taken into consideration. priority was given to the value with the higher specificity, if the time to detection was either equal to or had a difference of less than half a day and the difference between the specificities was > . %. we compared the performance of five outbreak detection algorithms, the exponential weighted moving average (ewma), c -mild (c ), c -medium (c ), c -ultra (c ) and the spacetime permutation scan statistic model. 
we calculated the ewma using a -day baseline running from day t − through day t − within each sub-district [ ] . if the observed values were x_i ~ n(μ, σ²), the weighted daily counts of each sub-district were calculated with the standard ewma recursion, z_t = k·x_t + (1 − k)·z_{t−1}, and an aberration was flagged when z_t exceeded the control limit μ + K·σ·sqrt(k/(2 − k)). in the algorithm, k ( < k < ) was the weighting factor and K was the control limit coefficient [ , ] ; these are the adjustable parameters. based on the range of values of k found in the previous literature [ ] , k was set as < k ≤ . the adjustment intervals for k and K were set as . and . , respectively. the moving standard deviation (s) was used as the estimate of σ, and the moving average (ma) was used as the estimate of μ. the cumulative sum (cusum) algorithm keeps track of the accumulated deviation between the observed and expected values. for cusum, the accumulated deviation s_t was defined as s_t = max(0, s_{t−1} + (x_t − (μ + k·σ_{xt}))/σ_{xt}), with s_0 = 0, where k·σ_{xt} is the allowed shift from the mean that is to be detected, s_t is the current cusum value and s_{t−1} is the previous cusum value. an aberration is signalled when the mean μ shifts to μ + k·σ_x; h is the decision value. in ears, k was set as , and an alarm would be triggered when s_t > h = [ ] . when the denominator σ_{xt} equalled zero, ears substituted a value of . ; however, as both sides of the inequality s_t = max(0, s_{t−1} + (x_t − (μ + k·σ_{xt}))/σ_{xt}) > h can be multiplied by σ_{xt}, we changed the decision value to h·σ_{xt} (still referred to as h), and we used the actual value of σ_{xt} rather than . when it was zero. biosense originally implemented the c , c and c methods but has since modified the c method (referred to as w ). in our study, we did not use the threshold, k or decision values set in ears; rather, we adjusted these values to achieve a preferable efficiency for aberration detection. based on the previous literature [ , , ] , we set the value ranges of h and k as σ ≤ h ≤ σ and < k ≤ . , respectively. the adjustment intervals for k and h were set as . and . σ, respectively. we refer to the three modified cusum methods, originally c , c and c , as c , c and c when reporting the results of this study. for the third method, the statistic was computed as the sum c_t + c_{t−1} + c_{t−2} of the statistics derived from the second method. ma and s were the moving sample average and moving standard deviation of the case counts reported during the baseline period; for the lagged methods they were computed with a -day lag. the moving standard deviation (s) was used as the estimate of σ, and the moving average (ma) as the estimate of μ. the length of the baseline comparison period for all three methods was days, in order to account for the day-of-the-week effect [ , ] .
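a minimal python sketch of the two control-chart detectors just described (the study implemented them in java); the window length, the lag, the start-up handling, and the constant-σ simplification in the rescaled cusum are our assumptions:

```python
import numpy as np

def ewma_flags(x, k, K, baseline=28, lag=2):
    """Flag day t when z_t = k*x_t + (1-k)*z_{t-1} exceeds
    mu + K*sigma*sqrt(k/(2-k)), with mu and sigma taken from a moving
    baseline window ending `lag` days before t."""
    x = np.asarray(x, dtype=float)
    z = np.empty_like(x)
    z[0] = x[0]
    for t in range(1, len(x)):
        z[t] = k * x[t] + (1 - k) * z[t - 1]
    flags = np.zeros(len(x), dtype=bool)
    for t in range(baseline + lag, len(x)):
        window = x[t - baseline - lag:t - lag]
        mu, sigma = window.mean(), window.std(ddof=1)
        flags[t] = z[t] > mu + K * sigma * np.sqrt(k / (2 - k))
    return flags

def cusum_flags(x, k, h, baseline=28, lag=2):
    """EARS-style CUSUM with the decision value rescaled to h*sigma,
    i.e. the recursion is tracked after multiplying through by sigma
    (exact only when sigma is roughly constant over the window)."""
    x = np.asarray(x, dtype=float)
    s, flags = 0.0, np.zeros(len(x), dtype=bool)
    for t in range(baseline + lag, len(x)):
        window = x[t - baseline - lag:t - lag]
        mu, sigma = window.mean(), window.std(ddof=1)
        s = max(0.0, s + x[t] - (mu + k * sigma))
        flags[t] = s > h * sigma
    return flags
```

in the actual analysis the same loop runs separately for every sub-district, so each flag carries both temporal and spatial information.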
the space-time permutation scan statistic model utilizes thousands to millions of overlapping cylinders to define the scanning window, each of which is a possible candidate for an outbreak. the circular base represents the geographical area of the potential outbreak, from zero up to some designated maximum value, and the height of the cylinder represents the time period of a potential cluster. for a given cylinder a, let c_zd be the observed number of cases in subzone z on day d, let C be the total number of observed cases during the whole study phase t for the whole study region, and let c_a be the observed case count scanned in cylinder a. the expected count for the cylinder is μ_a = (1/C) · Σ_{(z,d) in a} c_z · c_d, where c_z and c_d are the zone and day totals, and the probability function for any given window is proportional to the generalized likelihood ratio [ , ] , in its standard form glr = (c_a/μ_a)^{c_a} · ((C − c_a)/(C − μ_a))^{C − c_a}, which was calculated as a measure of the evidence that cylinder a contains an outbreak. among the many cylinders evaluated, the one with the maximum glr constitutes the space-time cluster of cases that is least likely to be a chance occurrence and, hence, is the primary candidate for a true outbreak. the size and location of the scanning window are varied dynamically [ ] . the maximum temporal cluster size was determined by considering the incubation period of the disease studied; for bacillary dysentery, the average incubation period is - days, so the maximum temporal cluster size in this study was set to one of ( d, d, d and d). the maximum spatial cluster size can be determined on the basis of either the geographical area or the proportion of the whole population; since data on the proportion of the population in each sub-district were unavailable, the maximum spatial cluster size in this study was set to one of ( , , and km), with reference to the geographical area of each sub-district.
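the cylinder-level computation can be sketched as follows; in practice satscan evaluates all candidate cylinders and assesses significance by monte carlo permutation, so this fragment only shows the glr for a single cylinder:

```python
import numpy as np

def cylinder_glr(counts, zones, days):
    """GLR for one space-time cylinder; `counts` is a (zones x days)
    array of observed case counts, `zones`/`days` index the cylinder."""
    C = counts.sum()            # total cases over the whole study phase
    cz = counts.sum(axis=1)     # per-zone totals
    cd = counts.sum(axis=0)     # per-day totals
    mu_a = sum(cz[z] * cd[d] for z in zones for d in days) / C
    c_a = counts[np.ix_(zones, days)].sum()
    if c_a <= mu_a:             # only excess-risk cylinders are of interest
        return 1.0
    return (c_a / mu_a) ** c_a * ((C - c_a) / (C - mu_a)) ** (C - c_a)
```

the cylinder maximizing this quantity is the primary cluster candidate described above.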
performance differences were assessed using p values of . . analyses were undertaken using excel, spss software (version . for windows; spss inc., chicago, il), aegis-cct (available from http://sourceforge.net/projects/chipcluster/), java (available from http://java.com/zh_cn/) and satscan (available from www.satscan.org). spss was used for data processing, descriptive statistics and the chi-square test. the bonferroni correction was applied for multiple comparisons to control the family-wise error rate; the significance level α for an individual test was calculated by dividing the family-wise error rate ( . ) by the number of tests [ ] . ewma and the cusum methods were implemented in java to determine whether the incidence level was abnormal. satscan was used to analyze the clustering of cases in different sub-districts in beijing based on the space-time permutation scan statistic and to determine whether the incidence level was abnormal. the correlation coefficients between the three evaluation indices (sensitivity, specificity and time to detection) and the parameter values were calculated; table shows the pearson's r correlation coefficients with p values. all algorithms showed a strong relationship between the evaluation indices and the parameter values, except for the space-time permutation scan statistic, and the great majority of the correlations were statistically significant, with p values less than . (two-tailed). for the space-time permutation scan statistic, however, specificity showed no relation to the spatial cluster size, and only when the maximum temporal cluster size was set as d did both sensitivity and time to detection exhibit a significant correlation with the spatial cluster size (p < . ). figs. - show the average sensitivity, specificity and time to detection of the five algorithms. the top plot of fig. shows sensitivity versus k values for the three control limit coefficients (K). in all of the combinations of k and K values, the sensitivities were greater than %, and as k increased from to . the sensitivity also increased. the middle plot of fig. shows the specificity for the three K values; specificity followed a similar trend for each K value, increasing until k = . and then declining gradually. the bottom plot of fig. shows the effect of k values on the detection timeliness of ewma: time to detection declined gradually with increasing k values. among these combinations of different k and K values, k = . , K = . showed the shortest detection time, with a specificity of . %. there were only two combinations of k and K values that had a detection time longer than half a day (k = . , K = . and k = . , K = . ). of the remaining combinations, had specificity greater than . %; within these, k = . , K = . showed the greatest specificity ( . %). according to the evaluation criteria, we concluded that k = . , K = . was the optimal parameter combination for ewma. fig. shows the influence of different h and k values on sensitivity, time to detection and specificity. sensitivity decreased as k increased, and as sensitivity decreased, time to detection increased. among the combinations of h and k values, (h = σ, k = . ) had the shortest time to detection, of . day (specificity: %). there were combinations with a detection time within half a day of (h = σ, k = . ); all of these had specificities greater than %, with the highest being . % when h = σ, k = . . according to the evaluation criteria, (h = σ, k = . ) was found to be the optimal combination for c . the relationship between performance and the combination of h and k values for c is shown in fig. . we found that sensitivity declined as k increased from . to . , while specificity and time to detection increased as sensitivity declined. the combination (h = σ, k = . ) showed the shortest detection time ( . d), with a specificity of . %. similarly, combinations had a detection time within half a day of (h = σ, k = . ); the specificities for all of these combinations were greater than . %, with the highest recorded at . % when h = σ, k = . . accordingly, (h = σ, k = . ) was considered the optimal combination for c . fig. shows the influence of h and k values on sensitivity, time to detection and specificity for c . specificity and time to detection grew overall with the k value, while sensitivity declined gradually as k increased. among the combinations of h and k values, (h = σ, k = . ) had the shortest time to detection ( . d), with a specificity of . %. likewise, there were combinations with a detection time within half a day of (h = σ, k = . ); of these, had specificities greater than . %, the highest being . % when h = σ and k = . . consequently, (h = σ, k = . ) was taken as the optimal combination for c . the space-time permutation scan statistic exhibited no real difference in specificity when the parameter combinations were changed (table ) . when the maximum temporal cluster size was set as d and the maximum spatial cluster size as km, the detection time was the shortest; this combination also produced the highest specificity and sensitivity. thus the optimal parameters were taken as d (maximum temporal cluster size) and km (maximum spatial cluster size). the five algorithms were then evaluated by comparing their performance at the optimized parameter values; the performance of these algorithms is shown in table with p values. following bonferroni's procedure, the significance level α for an individual test was calculated by dividing the family-wise error rate ( . ) by four, giving . . of the algorithms evaluated, the space-time permutation scan statistic had a higher average specificity than any other algorithm (p < . ), followed by ewma ( . %), while c showed the lowest specificity ( . %). ewma had the shortest time to detection ( . d), while c showed the longest time to detection, of one day. the space-time permutation scan statistic had a relatively longer time to detection than ewma ( . d), but this difference was not statistically significant (p = . > . ).
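the selection rule applied throughout these comparisons can be summarized in a few lines; the tie thresholds below (half a day for time to detection, and the specificity gap) stand in for the study's stated criteria, with the exact specificity-gap value elided in the text:

```python
def pick_optimal(results, ttd_tie=0.5, spec_gap=0.005):
    """results: list of (params, sensitivity, specificity, ttd) tuples.
    Prefer the shortest time to detection; among near-ties (TTD within
    `ttd_tie` days of the best), switch to the combination whose
    specificity is higher by more than `spec_gap`."""
    best = min(results, key=lambda r: r[3])
    ties = [r for r in results if r[3] - best[3] < ttd_tie]
    top = max(ties, key=lambda r: r[2])
    return top[0] if top[2] - best[2] > spec_gap else best[0]
```

running this rule over the full parameter grid of each algorithm yields the optimal combinations reported above.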
according to the evaluation criteria and statistical tests, we could conclude that the space-time permutation scan statistic was the optimal algorithm, followed by ewma. the space-time permutation scan statistic had a specificity of . %, which meant that only one false alarm occurred per days, whereas ewma was evaluated to trigger one false alarm every days. the burden of bacillary dysentery has long been thought to be great in many developing countries [ ] . detecting outbreaks in their early stages may prevent secondary infections, and subsequently an epidemic, from occurring; the benefits of this extend not only to the individual but also to the community, in terms of morbidity prevented and costs saved. in the case study from , the outbreak was detected only when the accumulated number of cases reached the threshold ( cases in days within the same geographic area). the problem with this method of detection is that the optimal opportunity to curb an outbreak is often missed; in the event of a pandemic influenza or another emerging infection, missing this opportunity may have national or global implications. we observed that the effects of the same algorithm varied significantly with different parameter values. for example, the specificity and time to detection were . % and . d for c (h = σ, k = . ) versus . % and . d for c (h = σ, k = . ). if the performance of c and c were compared at these values, c (h = σ, k = . ) would seem better than c (h = σ, k = . ) according to the evaluation criteria, which might lead to the conclusion that c was more effective than c . in fact, c (h = σ and k = . ) had a detection time of . d and a specificity of . %, . % higher than the . % of c (h = σ, k = . ); in this case, c (h = σ and k = . ) was better than c (h = σ and k = . ). the apparent difference in performance between the two algorithms is thus largely caused by the difference between parameter values. therefore, parameter values should be optimized prior to the performance evaluation of algorithms. a wide range of outbreak detection algorithms is available, including temporal, spatial and spatial-temporal methods [ ] . in this study, we used both the temporal and the spatial information of the reported cases: the temporal information refers to the onset date of the illness, and the spatial information refers to the sub-district where the case resides. cusum and ewma are commonly used to analyze temporal data, as they can be adjusted to identify a meaningful change from the expected range of data values. we calculated the daily case counts reported for each sub-district, and then judged whether the change from the expected value was significant within each sub-district; in our study, therefore, cusum and ewma also provide both temporal and spatial information about a signal. our study focused on the correlation between algorithm parameter values and performance. by calculating the correlation coefficients and comparing the performance of different algorithms at various values, we observed a strong correlation between them: differences in performance among these algorithms may result from differences in their parameter values. consequently, we recommend that, before evaluating the effectiveness of an outbreak detection algorithm, parameter values should be optimized to remove the noise that results from the influence of parameter values for a given disease.
in our study we found that the space-time permutation scan statistic and ewma outperformed the other algorithms, both in terms of timeliness and accuracy, for detecting bacillary dysentery outbreaks. ewma applies weighting factors that decrease exponentially, and the choice of the weighting factor k is key to successful outbreak detection: with a proper k value, the ewma control procedure can be adjusted to be sensitive to a small or gradual drift in the process. we feel that adjusting the k value should be an imperative step before applying ewma in practice. the space-time permutation scan statistic considers both temporal and spatial factors, and the scanning window is varied dynamically to avoid selection bias. however, space-time scan statistics do not consider population movements, and they can only identify clusters with simple, regular shapes; if a cluster does not conform to a regular shape, the algorithm may perform poorly. therefore, when the space-time permutation scan statistic is used to detect outbreaks, it is imperative to understand the cluster shape; only for clusters of suitable shape can the method demonstrate high detection efficacy. these limitations aside, the space-time permutation scan statistic allowed early outbreak detection for bacillary dysentery. previously, hutwagner et al. [ ] compared times to detection using simulations based on influenza-like illness and pneumonia data; in that study, c , c and c were found to have increasing times to detection, whereas we found a decline in detection time across our modified c , c and c . the differences in how time to detection was calculated may explain the differences between the two studies: in our study, when an algorithm failed to detect a simulated outbreak, time to detection was set to the largest value ( days). as c , c and c have increasing sensitivities, the number of missed outbreaks decreased as sensitivity increased from c to c , and consequently the time to detection declined accordingly. an integrated time to detection might be recommended in order to address this limitation [ ] . theoretically, the optimal parameter value maximizes an algorithm's ability to detect aberrations in disease incidence while minimizing the probability of producing a false alarm; the balance between accuracy and timeliness is still a matter of debate. in our study, we set simple and practical evaluation criteria: considering that time to detection integrates the effect of sensitivity, we simplified the three evaluation indices to two, time to detection and specificity, the former reflecting both timeliness and sensitivity, and the latter reflecting the accuracy of outbreak detection. we made timeliness the priority over accuracy because of bacillary dysentery's short incubation period and the fact that it can be both food-borne and water-borne. when deciding which index should be given priority, practitioners should take the length of incubation, the mode of transmission and the current situation (climatic, social, demographic, economic factors, etc.) into consideration. the variation in the evaluation indices with changing parameter values observed in our study was consistent with previous related studies [ , , , , , ] . for example, hutwagner et al. [ ] observed that c , c and c had increasing sensitivity, but decreasing specificity as the sensitivity increased.
in our study, we also observed this pattern of sensitivity and specificity in our modified c , c and c . we further observed growth in sensitivity and specificity as the weighting value increased from to . , and it seemed that weighting values in the range . to . enabled better performance; a similar recommendation was made by jackson et al. [ ] , who suggested weighting values of . and . for ewma. there are several factors which may limit the generalizability of our findings. to apply these five algorithms, information on the specific setting (workplaces, schools, etc.) is often required, and this information is usually not available in the current national disease surveillance reporting and management system in china. consequently, the sensitivity of the five algorithms may be lower when a bacillary dysentery outbreak occurs in a school, as the cases may be scattered across different sub-districts; it is therefore important to collect extra information on workplaces, schools and other units. due to the lack of actual outbreaks, we injected simulated outbreaks into the baseline in order to undertake a performance assessment of these outbreak detection algorithms. we varied the size, magnitude, temporal progression pattern, season and spatial distribution of the simulated bacillary dysentery outbreaks in order to test a variety of outbreak conditions. as these are approximations, it is difficult to evaluate how close our simulations came to an actual outbreak; consequently, further research is needed to predict the actual performance of these algorithms.
references:
advisors of expert sgoha, yung rwh, peiris jsm. effectiveness of precautions against droplets and contact in prevention of nosocomial transmission of severe acute respiratory syndrome (sars)
a model-adjusted space-time scan statistic with an application to syndromic surveillance
the bioterrorism preparedness and response early aberration reporting system (ears)
national bioterrorism syndromic surveillance demonstration program
ambulatory-care diagnoses as potential indicators of outbreaks of gastrointestinal illness -minnesota
wsare: what's strange about recent events?
time series modeling for syndromic surveillance
disease outbreak detection system using syndromic data in the greater washington dc area
measuring outbreak-detection performance by using controlled feature set simulations
bio-alirt biosurveillance detection algorithm evaluation
evaluating detection of an inhalational anthrax outbreak
an evaluation model for syndromic surveillance: assessing the performance of a temporal algorithm
comparing syndromic surveillance detection methods: ears' versus a cusum-based methodology
comparing aberration detection methods with simulated data
a simulation study comparing aberration detection algorithms for syndromic surveillance
simulation for assessing statistical methods of biologic terrorism surveillance
an open source environment for the statistical evaluation of outbreak detection methods
approaches to the evaluation of outbreak detection methods
analysis about epidemic situation of dysentery in near upon fourteen years in beijing
conceptual model for automatic early warning information system of infectious diseases based on internet reporting surveillance system
national protocol for information reporting and management of public health emergencies (trial)
a software tool for creating simulated outbreaks to benchmark surveillance systems
the exponentially weighted moving average (ewma) rule compared with traditionally used quality control rules
statistical quality control methods in infection control and hospital epidemiology, part i: introduction and basic theory
evaluation and extension of the cusum technique with an application to salmonella surveillance
the cusum chart method as a tool for continuous monitoring of clinical outcomes using routinely collected data
evaluating cluster alarms: a space-time scan statistic and brain cancer in
a space-time permutation scan statistic for disease outbreak detection
multiple comparison procedures updated
a multicentre study of shigella diarrhoea in six asian countries: disease burden, clinical manifestations, and microbiology
algorithms for rapid outbreak detection: a research synthesis
a simulation model for assessing aberration detection methods used in public health surveillance for systems with limited baselines
evaluation of school absenteeism data for early outbreak detection

key: cord- -hcj jmbm
title: quantifying the immediate effects of the covid- pandemic on scientists
date: - -
journal: nan
doi: nan
sha:
doc_id:
cord_uid: hcj jmbm

the covid- pandemic has undoubtedly disrupted the scientific enterprise, but we lack empirical evidence on the nature and magnitude of these disruptions. here we report the results of a survey of approximately , principal investigators (pis) at u.s.- and europe-based research institutions. distributed in mid-april , the survey solicited information about how scientists' work changed from the onset of the pandemic, how their research output might be affected in the near future, and a wide range of individuals' characteristics. scientists report a sharp decline in time spent on research on average, but there is substantial heterogeneity, with a significant share reporting no change or even increases.
some of this heterogeneity is due to field-specific differences, with laboratory-based fields being the most negatively affected, and some is due to gender, with female scientists reporting larger declines. however, among the individuals' characteristics examined, the largest disruptions are connected to a usually unobserved dimension: childcare. reporting a young dependent is associated with declines similar in magnitude to those reported by the laboratory-based fields and can account for a significant fraction of gender differences. amidst scarce evidence about the role of parenting in scientists' work, these results highlight the fundamental and heterogeneous ways this pandemic is affecting the scientific workforce, and may have broad relevance for shaping responses to the pandemic's effect on science and beyond. by mid-april , the cumulative number of deaths due to covid- had reached approximately , with nearly , deaths per day in the u.s. and , deaths per day in europe . throughout the u.s. and europe, schools and workplaces were typically required to be closed, and restrictions on gatherings of more than people were in place in most countries . for scientists, not only did this drastically change their daily lives, it severely limited the possibility of using traditional workspaces, as most institutions had suspended "non-essential" activities on campus [ ] [ ] [ ] [ ] [ ] [ ] [ ] [ ] . to collect timely data on how the pandemic affected scientists' work, we disseminated a survey to u.s.- and europe-based scientists across a wide range of institutions, career stages, and demographic backgrounds. we identified the corresponding authors for all journal articles indexed by the web of science in the past decade, and then randomly sampled , u.s.- and europe-based email addresses (see si s for more). we distributed the survey on monday april th, , about one month after the world health organization declared the covid- pandemic. within one week, the survey received full responses from , individuals who self-identified as faculty or pis at academic or non-profit research institutions. respondents were located in all states in the u.s. ( . % of the sample, figure s a), countries in europe ( . % of the sample, figure s b), and were affiliated with the full spectrum of research fields listed in the survey. for more on the response rate, sampling method, and a comparison to a national survey of doctorate-level researchers, see si s .
first, there is a sharp decline in total work hours, with the average dropping from . hours per week pre-pandemic to . at the time of the survey (diff.=- . , s.e.= . ). in particular, . % of scientists reported that they worked hours or less before the pandemic, but this share increased nearly six-fold to . % by the time of the survey (diff.= . , s.e.= . ). second, there is large heterogeneity in changes across respondents. although . % reported a decline in total work hours, . % reported no change, and . % reported an increase in time devoted to work. this significant fraction of scientists reporting no change or increases in their work hours is notable given that . % of respondents reported their institution was closed for non-essential personnel. to decompose these changes, we compare scientists' reported time allocations across four broad categories of work: research (e.g., planning experiments, collecting or analyzing data, writing), fundraising (e.g., writing grant proposals), teaching, and all other tasks (e.g., administrative, editorial, or clinical duties). we find that among the four categories, research activities have seen the largest negative changes. whereas total work hours decrease by . % on average, research hours have declined by . % (teaching, fundraising, and "all other tasks" decrease by . %, . %, and . %, respectively). comparing the share of time allocated across the tasks ( figure c -f), we find that research is the only category that sees an overall decline in the share of time committed (median changes: - . % for research, % for fundraising, + . % for teaching, and + . % for all other tasks). overall, these results indicate that scientists' research time has been disrupted the most, and the declines in time spent on the other three categories are mainly due to the decline in total work hours. furthermore, correlations suggest that research may be a substitute for each of the three other tasks (see si s . and figure s ). still, despite the large negative changes in research time, substantial heterogeneity remains, as . % reported no change and . % reported spending more time on research. the sizable heterogeneity begs the question as to what factors are most responsible for the observed heterogeneous effects among scientists. to unpack the varied effects of the pandemic, we first examine across-field differences. figure a depicts the average change in reported research time across the different fields we surveyed. fields that tend to rely on physical laboratories and time-sensitive experiments -such as biochemistry, biological sciences, chemistry and chemical engineering -report the largest declines in research time, in the range of - % below pre-pandemic levels. conversely, fields that are less equipment-intensive -such as mathematics, statistics, computer science, and economics -report the lowest average declines in research time. the difference between fields can be as large as four-fold, again highlighting the heterogeneity in how certain scientists are being affected. these field-level differences may be due to the nature of work specific to each field, but may also be due to differences in the characteristics of individuals that work in each field. to untangle these factors, we use a lasso regression approach to select amongst ( ) a vector of field indicator variables, and ( ) a vector of flexible transformations of demographic controls and pre-pandemic features (e.g., research funding level, time allocations before the pandemic). 
the lasso is a datadriven approach to feature selection that minimizes overfitting by selecting only variables with significant explanatory power , . we then regress the reported change in research time on the lasso-selected variables in a post-lasso regression, allowing us to estimate conditional associations for each variable selected (see si s ). comparing figure a and b, we find that the contrast between the "laboratory" or "bench science" fields versus the more computational or theoretical fields is still significant in the post-lasso regression, indicating that differences inherent to these fields are likely important mediators of how the pandemic is affecting scientists. although we cannot reject a null hypothesis of no change, there is also suggestive evidence of an increase in research time for the health sciences, possibly due to work related to covid- . importantly, we also find that most of the variation across fields is diminished once we condition on the individual-level features selected by the lasso, which suggests a large amount of heterogeneity is due to these individual-level differences. indeed, the standard deviation of the twenty field-level averages of reported changes in research time is . %. by contrast, the standard deviation of the individual-level residuals from these fieldlevel averages-that is, how much each individual's response differs from the average in their field-is . %, indicating there is substantial variation across individuals even within the same field. to illustrate the raw individual-level variation, we measure the average change in reported research time across demographic and other group features ( figure c ). given the persistent gender gap in science - , we include interactions with the female indicator to explore potential gender-specific differences. we find that there are indeed widespread changes across the range of individual-level features we examined. yet, when we use the lasso and regression to control for the field differences documented in figure a , we find marked changes in the relevance of certain individual-level features. figure d plots the post-lasso regression coefficients associated with the demographic and careerstage characteristics and reveals four main results. first, career stage appears to be a poor predictor of the impacts of the pandemic, as conditional changes in research time for older versus younger and tenured versus untenured faculty are statistically indistinguishable. second, scientists who report being subject to a facility closure also report only minor unconditional differences in their research time ( figure c ), and this feature is not selected by the lasso as a relevant predictor for changes in research time. third, there is a clear gender difference. holding field and all other observable features fixed, female scientists report a . % larger decline in research time (s.e.= . ). fourth, child dependent care is associated with the largest effect. reporting a dependent under years old is associated with a . % (s.e.= . ) larger decline in research time, showing a substantially larger effect than any other individual-level features. reporting a dependent to years old is also associated with a negative impact, ceteris paribus, but that decline is smaller than the decline associated with dependents under years old. this is consistent with shifts in the demands of childcare as children age. having multiple dependents is associated with an additional . % decline (s.e.= . ) in research time. 
overall, these results are consistent with preliminary reports of differential declines in female scientists' productivity during the pandemic , . our findings further indicate that some of the gender discrepancy can be attributed to female scientists being more likely to have young children as dependents ( . % of female scientists in our sample report having dependents under the age of , compared to . % of male and other scientists, s.e. of diff.= . ). for further results related to the other three task categories, see si s . . to estimate the potential downstream impact of the pandemic, we also asked respondents to forecast how their research publication output in and -in terms of the quantity and impact of their publications-will compare to their output in and . we randomly assigned respondents to make a forecast for one of six possible scenarios where they were to take as given the duration of the pandemic to be , , , , , or months from the time of the survey. for more on how we use this introduced random variation and adjust scientists' forecasts to account for underlying trends in publication output, see si s . . figure a plots the distribution of the estimated changes in publication quantity and impact due to the pandemic. we find that, on average, quantity is projected to decline . % (s.d.= . ). for comparison, prior estimates show that in the biomedical sciences, receiving a grant of approximately one million dollars from the national institutes of health raises a pi's short run publication output by - % , , suggesting that a projected decline of % is not negligible. moreover, the decline in output is not limited to quantity, as impact is projected to decline by . % on average (s.d.= . ). to understand which scientists are most likely to forecast larger declines in their output due to the pandemic, we repeat the lasso-based regression approach using these forecasts as dependent variables. these analyses uncover two notable findings ( figure b ). first, all of the features selected as relevant are related to caring for dependents. as in the case of research time, reporting a dependent under years old is associated with the largest declines. second, gender differences in these forecasts appear attributable to differential changes associated with dependents. reporting a -to -year-old dependent is associated with a . % (s.e.= . ) and . % (s.e.= . ) lower forecast of publication quantity and impact, respectively, but only for female scientists (see si s . for the field-level results). we find that most of the same groups currently reporting the largest disruptions to research time also report the worst outlook for future publications. the correlations between reported change in research time and forecasted publication output are . for quantity (p-value < . ) and . for impact (p-value < . ). while understanding the relationships between time input and research output is beyond the scope of this study, we repeat the analysis, including the changes in reported time allocations to test if they moderate the effects we observe. we find that, while the post-lasso regression coefficients associated with the selected demographic features generally become smaller, a statistically significant relationship remains in most cases even when conditioning on the (lasso-selected) change in research time. this suggests the forecasted declines associated with reporting young dependents are not simply explained by the direct change in time spent on research ( figure s ). 
we further investigate how these publication forecasts may depend on the expected duration of the covid- pandemic by plotting the (randomized) expectation shown to the survey respondent against the estimated net effect of the pandemic ( figure c) . a linear fit indicates that, for every month that the pandemic continues past april , scientists expect a . % decrease in publication quantity (s.e.= . ) and a . % decrease in impact (s.e.= . ) due to the pandemic. these marginal effects may appear small relative to the others documented in this paper, but it is important to note that they are on a similar scale as economic forecasts for the u.s. and europe, which (as of may ) project economic declines in the range of . - . % per month ( - % for ) . still, these results could also reflect uncertainties or errors inherent to these forecasts, or strong personal beliefs about the timeline for the pandemic that are not easily swayed by the survey's suggestion. our results shed light on several important considerations for research institutions as they consider reopening plans and develop policies to address the pandemic's disruptions. the findings regarding the impact of childcare reveal a specific way in which the pandemic is impacting the scientific workforce. indeed, "shelter-at-home" is not the same as "work-from-home" when dependents are also at home and need care. because childcare is often difficult to observe and rarely considered in science policies (aside from parental leave immediately following birth or adoption), addressing this issue may be an uncharted but important new territory for science policy and decision makers. furthermore, it suggests that unless adequate childcare services are available, researchers with young children may continue to be affected regardless of the reopening plans of institutions. and since the need to care for dependents is by no means unique to the scientific workforce, these results may also be relevant for other labor categories. more broadly, many institutions have announced policy responses such as tenure clock extensions for junior faculty. of u.s. university policies we identified that provided some form of tenure extension due to the pandemic, appeared to guarantee the extension for all faculty (see si s . for more). institutions may favor such uniform policies for several reasons such as avoiding legal challenges. but given the heterogeneous effects of covid- we identify, it raises further questions whether these uniform policies, while welcoming, may have unintended consequences and could exacerbate pre-existing inequalities . while this paper focuses on quantifying the immediate impacts of the pandemic, circumstances will continue to evolve and there will likely be other notable impacts to the research enterprise. the heterogeneities we observe in our data may not converge, but instead may diverge further. for example, when research institutions begin the process of reopening, there may be different priorities for "bench sciences" versus work that involves human subjects or that requires travel to field sites. and research requiring international travel could be particularly delayed; all of which could lead to new productivity differences across certain groups of scientists. furthermore, individuals with potential vulnerabilities to covid- may prolong their social distancing beyond official guidelines. 
in particular, senior researchers may have incentives to continue avoiding in-person interactions , which historically facilitate mentoring and hands-on training of junior researchers. the possibility of a resurgence of infections suggests that institutions may anticipate a reinstatement of preventative measures such as social distancing; this possibility could direct focus toward research projects that can be more easily stopped and restarted. funders seeking to support high-impact programs may have similar considerations, favoring proposals that appear more resilient to uncertain future scenarios. lastly, although we have focused on two of the denser geographic regions of scientific output in this study, the pandemic is having a substantial impact on research worldwide. in the coming years, researchers may be less willing or able to pursue positions outside of their home nation, which may deepen or alter global differences in scientific capacity. future work expanding our understanding of how the pandemic is affecting researchers across different countries, at different institutions, and at different points in their lives and careers could provide valuable insights to more effectively protect and nurture the scientific enterprise. the strong heterogeneities we observe, and the likely development of new impacts in the coming months and years, both argue for a targeted and nuanced approach as the worldwide research enterprise rebuilds.
kitchener, c. women academics seem to be submitting fewer papers during coronavirus. 'never seen anything like it,' says one editor. https://www.thelily.com/women-academics-seem-to-be-submitting-fewer-papers-duringcoronavirus-never-seen-anything-like-it-says-one-editor/
the study protocol has been approved by the institutional review board (irb) from harvard university and northwestern university. informed consent was obtained from all participants.
figure s reports the results from a similar exercise focusing on field-level differences: we find that the same three fields associated with the largest declines in research time (biochemistry, biology, and chemistry) also forecast the largest pandemic-induced declines in their publication output quantity, ceteris paribus. figure c caption: average estimated changes in publication outputs per the randomized duration of the pandemic respondents were asked to assume for their forecasts (either , , , , , or months from the time of the survey, mid-april ).
contents of the supplementary information: s additional results (time spent on different tasks; non-research tasks by groups; forecast results); s supplementary tables; s supplementary figures; s references for supplementary information.
to compile a large, plausibly random list of active scientists, we leverage the web of science (wos) publication database. the wos database is useful for two reasons: ( ) it is one of the most authoritative citation corpora available and has been widely used in recent science of science studies - ; and ( ) among other large-scale publication datasets, wos is the only one, to our knowledge, with systematic coverage of corresponding-author email addresses. we are primarily interested in active scientists residing in the u.s. and europe. we start from million wos papers published in the last decade ( - ). in an attempt to focus on scientists likely to still be active and in a more stable research position, we link the data to journal impact factor information (wos journal citation reports) and exclude papers published in journals in the bottom % of the impact factor distribution for their wos-designated category. we use the journal impact factor calculated for the year of publication, and for papers published in , we use the latest version ( ). we then extract all author email addresses associated with these papers. for each email address in this list, we consider it as a potential participant if: ( ) it is associated with at least two papers in the ten-year period; and ( ) the most recent country of residence, defined by the first affiliation of the most recent paper, is in the u.s. or europe. we have approximately . million unique email addresses after filtering, with about , in the u.s. and , in europe. we then randomly shuffled the two lists separately and sampled roughly , email addresses from the u.s. and , from europe; we oversampled the u.s. as part of a broader outreach strategy underlying this and other research projects. we recruited participants by sending them email invitations. we build on the field classifications used in national surveys such as the u.s. survey of doctorate recipients (sdr) to categorize fields in our survey, aggregating to ensure sufficient sample sizes within each field. the notable additions we make to the fields used in these other surveys are business management, education, communication, and clinical sciences; these fields reflect major schools at most universities and/or did not immediately map to the default fields used in the sdr (e.g., the "health sciences" field in the sdr does not include medical specialties). out of a total of , emails sent, approximately , were directly bounced, owing either to incorrect spellings in the wos data or to terminated email accounts. in hopes of soliciting a larger sample, we also undertook snowball sampling by encouraging respondents to share the survey with their colleagues. overall, , individuals entered the survey and , continued past the consent stage. of those that did not, were not an active scientist, post-doc, or graduate student and thus not within our population of interest, did not consent, and did not make any consent choice. when a respondent continued past the consent stage, we asked them to report the type of role they were in. out of the , consenting responses, there were , responses from faculty or principal investigators (pis), , responses from post-doctoral researchers, from graduate students in a doctoral program, and from retired scientists; the remaining respondents held some other type of position or did not report their position.
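a hypothetical pandas sketch of this filtering-and-sampling pipeline; the column names, the year window, the percentile cutoff, and the sample sizes are placeholders for the elided specifics, not the study's actual values:

```python
import pandas as pd

def build_frame(papers, n_us, n_eu, seed=0):
    """papers: one row per (email, paper), already restricted to the
    U.S. and Europe, with assumed columns 'email', 'year', 'country',
    and 'jif_pctile' (impact-factor percentile within the journal's
    WoS category)."""
    recent = papers[papers["year"].between(2010, 2019)]
    strong = recent[recent["jif_pctile"] > 0.05]     # drop bottom of JIF dist.
    strong = strong.sort_values(["email", "year"])
    per_email = strong.groupby("email").agg(
        n_papers=("year", "size"),
        country=("country", "last"),                 # most recent affiliation
    )
    frame = per_email[per_email["n_papers"] >= 2]
    us = frame[frame["country"] == "US"].sample(n_us, random_state=seed)
    eu = frame[frame["country"] != "US"].sample(n_eu, random_state=seed)
    return pd.concat([us, eu])
```

separate draws for the two regions implement the deliberate oversampling of the u.s. described above.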
this yields an estimated response rate of approximately . %. our low response rate may reflect the disruptive nature of the pandemic, but it also raises concerns about the generalizability of our results. after we received feedback from the initial distribution that many individuals had received the email in their "junk" folder, we became concerned that our distribution was being automatically flagged as spam. based on spot checks of five individuals whom we ex post identified as having been randomly selected into our sample, and with whom we had professional relationships, we found that in four of the five cases the recruitment email had been flagged as spam. we know of no systematic way of estimating the true spam-flagging rate (nor of avoiding these spam filters when using email distributions at this scale) without using high-end, commercial-grade products. additionally, as with any opt-in survey, there may be correlations between which scientists opt in and the experiences they want to report. for example, scientists who felt strongly about sharing their situation, whether they experienced large positive or negative changes, may have been more likely to respond, which would increase the heterogeneity of the sample. furthermore, there may also be non-negligible gender differences that arise not from actual differences in outcomes but from differences in reporting known to occur across genders [ ] [ ] [ ] [ ] [ ] . for our analyses, we focus entirely on responses from the sample of faculty/pis. from the full sample of pis, we retain respondents who reported working for a "university or college", "nonprofit research organization", "government or public agency", or "other", excluding responses from individuals who reported working for a "for-profit firm". we also restrict the sample to respondents whose ip address originated from the united states or europe (dropping , responses from elsewhere). we then drop observations that have missing data for any of the variables used in our analyses: responses do not report their time allocations, do not report their age, do not report the type of institution they work at, and do not report their field of study. altogether, this amounts to dropping observations. given the relatively small subset of our sample dropped due to missing data, we do not impute missing variables, as this would introduce unnecessary noise . summary statistics for the final sample used in the analyses are reported in figure s , and the geographic distribution of respondents is shown in figure s . to estimate the generalizability of our respondent sample, we use the public microdata from the survey of doctorate recipients (sdr) as the best available estimates of the population of principal investigators in the u.s. the sdr is conducted by the national center for science and engineering statistics within the national science foundation, sampling from individuals who have earned a science, engineering, or health doctorate degree from a u.s. academic institution and are less than years of age. the survey is conducted every two years, and we use the latest data available (the cycle). for this comparison, we focus only on university faculty in both our survey and the sdr, and we restrict the sample to fields of study with a clear mapping to the sdr categories.
the sdr focuses only on researchers with ph.d.-type degrees, and so it does not capture researchers with other degrees who are still actively engaged in research (e.g., researchers with only m.d.s). this means we exclude "architecture and design," "business management," "medicine," "education," "humanities," and "law and legal studies." figure s compares respondents between our sample and the sdr sample. figure s a illustrates differences in demographics and career-stage features, including raw differences as well as those adjusted by field. we find only a small difference in age and no difference in partner status. our survey oversamples female scientists, those with children, and untenured faculty, and these differences persist after conditioning on the scientist's reported field. that we ultimately find female scientists and those with young dependents to report the largest disruptions suggests that these individuals may have been more likely to respond to the survey in order to report their circumstances. the geographic distributions are relatively similar, with slight oversampling of the west and undersampling of the south. lastly, we find a significant but small oversampling of u.s. citizens. we also compare the distribution of research fields (fig. s b). overall, the distributions are relatively similar: we appear to oversample most significantly on "atmospheric, earth, and ocean sciences" and "other social sciences," while we undersample most significantly on the biological sciences, "mathematics and statistics," and "electrical and mechanical engineering". there does not seem to be a clear pattern in these field-level differences, as we undersample fields that ultimately report disruptions across the whole spectrum (e.g., mathematics and statistics reports some of the smallest disruptions, while the biological sciences are amongst the most disrupted). the unconditional changes reported by each group of scientists are informative about how the pandemic affected researchers overall, but they do not allow us to infer whether groups reporting larger or smaller disruptions do so for reasons inherent to that group (i.e., the nature of work in certain fields, or the demands of home life unique to certain individuals) or because the individuals who select into that group tend also to be disrupted for unrelated reasons. this motivates a multivariate regression analysis to explore whether the changes associated with a group of individuals change after conditioning on other observables. however, selecting which of an available set of covariates (or transformations thereof) to include in a regression is notoriously challenging. the lasso method provides a data-driven approach to this selection problem by excluding covariates that do not improve the fit of the model , . when using the lasso, our general approach is to include a vector of indicator variables for the fields or demographic/career groups of interest, along with an additional set of controls; when focusing on differences across fields, we include the demographic/career variables in the control set, and vice versa. the control variables common to all lasso-based analyses are: the pre-pandemic levels of time allocations and totals, the pre-pandemic shares of time allocations, the pre-pandemic funding estimate, and indicators for the type of institution (academic, non-profit, government, or other) and the location (state if in the u.s., country if in europe).
to make minimal assumptions about the functional form of the control variables, we conduct the following transformations to expand the set of controls: for all continuous variables we use the inverse hyperbolic sine (which approximates a logarithmic transformation while allowing zeros) as well as square and cubic transformations, and we interact all indicator variables with the linear versions of the continuous variables. we perform the lasso using the lasso linear package in stata© software, with the defaults for constructing initial guesses, tuning parameters, number of folds (ten), and stopping criteria. we use the two-step cross-validated "adaptive" lasso model, where an initial instance of the algorithm makes a first selection of variables and a second instance is then run using only the variables selected in the first instance. the variables selected after this second run are then used in a standard post-lasso ols regression with heteroskedasticity-robust standard errors.
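the control-set expansion just described is easy to reproduce; a minimal python sketch, with column names assumed since the paper's analyses were run in stata:

```python
import numpy as np
import pandas as pd

def expand_controls(df, continuous, indicators):
    """Inverse hyperbolic sine, squares, and cubes of each continuous
    control, plus every indicator interacted with each linear
    continuous control."""
    new = {}
    for c in continuous:
        new[f"ihs_{c}"] = np.arcsinh(df[c])   # log-like, but defined at zero
        new[f"{c}_sq"] = df[c] ** 2
        new[f"{c}_cu"] = df[c] ** 3
    for ind in indicators:
        for c in continuous:
            new[f"{ind}_x_{c}"] = df[ind] * df[c]
    return pd.concat([df, pd.DataFrame(new, index=df.index)], axis=1)
```

the expanded matrix is then handed to the adaptive lasso, which prunes the transformations that carry no explanatory power.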
our assumption is that these projections can approximate what scientists would have forecasted in the absence of the pandemic: they provide a crude counterfactual. given the short timeframes involved, and the rich observable data we possess, we hypothesize that the room for significant biases or deviations is small relative to the across-individual variation. due to data quality limitations, we are only able to connect % of respondents to their publication records, but a comparison of observables indicates that there are no meaningful differences between those scientists connected to their publication record and those not (see figure s ). since we observe the variables used in these projections for all respondents, we can project out trends for all scientists in our sample. while the measurement of publication quantity is straightforward, the measurement of quality or, as it was asked in the survey, "impact" is not. following a long line of science of science research, we use citation counts as the best available proxy for quality. we follow the state of the art in adjusting and counting these citations in a manner that does not conflate across-field differences.

the lasso-based projection proceeds as follows. first, we demean the publication measures at the year level. this is because we do not want to attribute aggregate year-to-year variations across the entire sample to actual changes in net output, since these fluctuations can very plausibly be linked to changes in web of science (wos) coverage over time, and we are much more concerned with differential trends amongst different fields and/or different individuals. next, we use the lasso to select which of the observables are the best predictors of publication counts and citations. the major difference between this lasso-based approach and the others used in this paper is that, here, we interact all observables with flexible time trends (i.e., squared, cubic, and inverse hyperbolic sine transformations of the year variable) to allow differential trends across groups. finally, we project out these expected output measures as a function of the selected covariates and their corresponding coefficients from a post-lasso ols regression. importantly, we project out of sample just two years so that we have estimates of the counterfactual trends for and . with these estimates of respondents' counterfactual forecasts in hand, we then simply subtract them from scientists' actual reported forecasts to arrive at our estimate of scientists' forecast of the "net effect" of the pandemic. figure s plots the distributions of the unadjusted forecasts and these net effects for both the quantity and impact measures. the adjustment does not substantially change the distribution, but we are more confident in these estimates as "pandemic effects" for the aforementioned reasons.

figure s plots the reported changes in research time (y-axes) against the reported changes in time allocated to the other three task categories (x-axes). the figures are binned scatterplots, and linear fits of the data suggest that research may be a substitute for the other categories. a % increase in fundraising, teaching, or all other tasks is associated with a decline in research of . % (s.e.= . ), . % (s.e.= . ), and . % (s.e.= . ), respectively.
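the projection step can likewise be sketched in code. the snippet below carries the same caveats: it is a python stand-in, the column names ("year", "pubs") and the BASE_YEAR constant are hypothetical, and it reuses the adaptive_lasso_postols helper from the previous sketch. it demeans the outcome by year, interacts observables with flexible time trends, fits the post-lasso model on historical rows, projects the two out-of-sample years, and subtracts the projection from the reported forecast to obtain the "net effect".

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

BASE_YEAR = 2000  # hypothetical origin for the time trend

def build_design(df, features):
    """interact every observable with flexible time trends
    (linear, squared, cubic, and asinh of the year index)."""
    t = df["year"] - BASE_YEAR
    trends = {"t": t, "t2": t ** 2, "t3": t ** 3, "asinh_t": np.arcsinh(t)}
    cols = {}
    for f in features:
        cols[f] = df[f]
        for name, tr in trends.items():
            cols[f"{f}_x_{name}"] = df[f] * tr
    return pd.DataFrame(cols)

def pandemic_net_effect(panel, future, features, reported_forecast):
    """panel: historical scientist-year rows with a 'pubs' outcome;
    future: the same scientists with the two out-of-sample years."""
    df = panel.copy()
    # demean the outcome at the year level so aggregate shifts in
    # wos coverage are not read as changes in net output
    df["pubs_dm"] = df["pubs"] - df.groupby("year")["pubs"].transform("mean")
    X = build_design(df, features)
    fit = adaptive_lasso_postols(X, df["pubs_dm"])  # sketch above
    # project the selected model two years out of sample
    Xf = build_design(future, features)
    Xf = sm.add_constant(Xf[fit.model.exog_names[1:]], has_constant="add")
    counterfactual = np.asarray(fit.predict(Xf))
    # subtracting the projection from the reported forecast gives
    # the estimated "net effect" of the pandemic
    return reported_forecast - counterfactual
```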
we lack exogenous variation in the data that can clearly shift the time allocated to one (or a subset) of the tasks, so we cannot identify the extent to which these correlations reflect actual substitution patterns or unobserved factors. still, the magnitudes and precision of these relationships suggest that further investigations are certainly warranted to better understand how scientists allocate their time.

figures s a and s b replicate figures b and d from the main text, respectively, instead using each of the other three task categories for the dependent variable. for the analysis focused on fields (fig. s a), no clear patterns emerge with respect to changes in time spent fundraising or teaching. reported time changes in teaching may be due to a combination of reasons: first, during the pandemic, the demand for these activities is likely relatively stable (e.g., most academic institutions have moved classes online, but there are few reports of suspension of classes); and second, impacts due to the transition to online teaching may have taken place earlier and hence were not captured by our survey. there is evidence that clinical scientists and biochemists are spending an increasing amount of time on the "all other tasks" category, which could plausibly be due to a redirection of effort directly towards pandemic-related (non-research) work. for the analysis focused on demographic groups (fig. s b), we find that scientists reporting a dependent under years old tend to also report larger declines across all task categories. this result is consistent with the unsurprising hypothesis that these dependents require care that leads scientists to decrease their total work hours. the fact that there does not appear to be any substitution away from research towards these other categories for the individuals with young dependents suggests the association is driven by factors inherent to having a dependent at home, and not that these individuals also tend to select alternative work structures that have them performing less research and more of other tasks.

figure s recreates figure b from the main text, but using the field-level lasso approach. forecasted changes in output are almost entirely confined to publication quantity (as opposed to impact), with the same fields of biology and chemistry that reported the largest declines in research time also forecasting the largest declines in publication output, here in the range of - % relative to what would have been expected otherwise. notably, some fields expect to publish more because of the pandemic, again highlighting the heterogeneous experiences scientists are having due to the pandemic. figure s recreates figure b from the main text, but while including the reported changes in time allocated to each of the four task categories (in addition to the pre-pandemic reported time allocations as before). again, we find a similar set of dependent-related variables to be most predictive of forecasted publication changes, even though the reported change in research time is also selected as relevant by the lasso. for comparison, the forecasted disruption associated with a dependent under years old (a . % decline in expected publication count) is approximately the same magnitude as the implied effect associated with a % decrease in research time.

using internet searches, we attempted to identify university-level tenure clock extension policies put in place as a result of the covid- pandemic.
while not a comprehensive list, we identified policies for universities, encompassing both public and private, small and large institutions. of these universities, some have automatically applied a tenure clock extension to all faculty, with individuals having the ability to opt out; others require applications, but these are automatically approved. four universities have not established unilateral policies. instead, they have either created a separate application process or added covid- -related impact to the list of reasons a faculty member may apply for an extension.

table s . summary statistics. summary statistics for the main survey sample. "mean, with pubs." and "mean, miss. pubs." report the averages for the sub-samples that can and cannot be connected to their publication record in wos, respectively. the "t-stat" column reports the t-statistic from a test of mean differences between these two sub-samples. the two wos-based variables are "pub. quantity (number) since " (the sum of the author's number of publications in the wos record) and "pub. impact (eucl. citations) since " (the field-demeaned euclidean sum of citations to the author's publications in the wos record).

figure s . geographic distribution. plots respondent locations in the u.s. (a.) and europe (b.), aggregated to preserve anonymity.

figure s . comparison to u.s. university-based sdr respondents. summary statistics for demographic variables and fields common to both our survey and the u.s. survey of doctoral recipients (sdr). all comparisons are based on u.s.-located faculty or pis at universities or colleges that report affiliation with a field of study present in both surveys (note: all fields present in the sdr are present in our survey, but not vice versa). a. describes the sample averages for both samples and the mean differences in both the raw data ("diff.") and after adjusting for the different composition of fields in each sample ("diff., field adjusted"). b. plots the share of respondents in each sample that affiliate with each of the fields common to both surveys. (*** p< . ; **p< . ; *p< . )

figure s . publication changes, raw and inferred pandemic effects. plots the distribution of changes to publication output. blue lines indicate publication quantity, red lines indicate impact. solid lines indicate the raw responses from the survey (which asked only about changes in publication output from - to - ), and dashed lines indicate our estimates of the implied effect due to the covid- pandemic based on the removal of group-specific trends in publication output. see the methodology section for more.

figure s . recreates figure b from the main text, also including the scientists' reported changes in time committed to each of the four task categories. the error bars indicate % confidence intervals, and only variables selected in the corresponding lasso selection exercises are included in the post-lasso regression. the coefficient corresponding to the " %change time, research" variable indicates the percent change in the scientists' forecasted quantity or impact associated with a % increase in the change in reported research time. for example, we estimate that scientists who reported a % larger decline in their research time forecast that the pandemic will cause them to produce . % fewer publications in - .
(covid- ) deaths. our world in data
policy responses to the coronavirus pandemic. our world in data
science-ing from home
how research funders are tackling coronavirus disruption
safeguard research in the time of covid-
coronavirus outbreak changes how scientists communicate
the pandemic and the female academic
early-career scientists at critical career junctures brace for impact of covid-
how early-career scientists are coping with covid- challenges and fears
how covid- could ruin weather forecasts and climate records
productivity differences among scientists: evidence for accumulative advantage
research productivity over the life cycle: evidence for academic scientists
the economics of science
faculty time allocation: a study of change over twenty years
incentives and creativity: evidence from the academic life sciences
regression shrinkage and selection via the lasso
regression shrinkage and selection via the lasso: a retrospective
web of science as a data source for research on scientific and scholarly activity
increasing dominance of teams in production of knowledge
quantifying long-term scientific impact
atypical combinations and scientific impact
highly confident but wrong: gender differences and similarities in confidence judgments
boys will be boys: gender, overconfidence, and common stock investment
trouble in the tails? what we know about earnings nonresponse years after lillard, smith, and welch
measurement error in survey data
response error in earnings: an analysis of the survey of income and program participation matched with administrative data
flexible imputation of missing data
how to count citations if you must
extension of the probationary period for tenure-track faculty due to covid- disruptions
harvard offers many tenure-track faculty one-year appointment extensions due to covid-
extending the reappointment/promotion/tenure review timeline
covid- and tenure review
response to covid- disruption: extension of the tenure clock. the university of alabama in huntsville
memo on tenure-track probationary period extensions due to covid- . university of virginia office of the executive vice president and provost
extension of tenure clock in response to covid-
rule waivers: tenure clock extensions, leaves of absence, conversions, dual roles
extension of the tenure-clock
guidelines for contract extension and renewal. iowa state university office of the senior vice president and provost
tenure clock extension due to covid- disruption
faculty promotion and tenure
tenure-track faculty: extension of tenure clock due to covid- . the ohio state university office of academic affairs
promotion/tenure clock extensions due to covid- -faculty
one-year opt-in tenure clock extension
covid- guidance for faculty: extensions of tenure clock
probationary period extensions for tenure-track faculty. the university of texas at austin office of the executive vice president and provost
tenure rollback policy for covid-

we thank alexandra kesick for invaluable help. this work is supported by the air force office of scientific research under award number fa - - - , national science foundation sbe , and the alfred p. sloan foundation g- - and g- - .

key: cord- - ekgb zx authors: hjálmsdóttir, andrea; bjarnadóttir, valgerður s. title: "i have turned into a foreman here at home." families and work‐life balance in times of covid‐ in a gender equality paradise. date: - - journal: gend work organ doi: . /gwao.
sha: doc_id: cord_uid:

this article explores the gendered realities of work‐life balance in iceland during the covid‐ pandemic, in particular how these societal changes reflect and affect the gendered division of unpaid labor, such as childcare and household chores. the study draws on open-ended real‐time diary entries, collected for two weeks during the peak of the pandemic in iceland. the entries represent the voices of mothers in heteronormative relationships. the findings imply that, during the pandemic, the mothers took on greater mental work than before. they also described intense emotional labor, as they tried to keep everyone calm and safe. the division of tasks at home lay on their shoulders, causing them stress and frustration. the findings suggest that, even in a country that has been at the top of the gender gap index for several years, an unprecedented situation like covid‐ can reveal and exaggerate strong gender norms and expectations towards mothers.

the covid- pandemic is not only a health emergency and economic hazard but has also resulted in dramatic changes in people's personal lives, and roles within families have been disrupted. during the pandemic, many countries have taken drastic measures to reduce the spread of the virus, such as social distancing, lockdowns, and closing schools, public institutions, and workplaces. children and adults alike have been forced to stay at home for a shorter or longer time and upturn their lives as the home became the school, the workplace, the playground, the sports facility, and the family sanctuary. unesco has estimated that more than % of the world's student population, or around , billion students, has been affected by either temporary school closings or restricted services (unesco, ). this entails increased care responsibilities for parents across the world. even though the number of dual-earner households has been increasing for the last decades, the findings of several studies indicate that women still bear the burden of childrearing and household labor in industrialized countries (alon, doepke, olmstead-rumsey, & tertilt, ; carlson, petts, & pepin, ; friedman, ; knight & brinton, ; t. miller, ; schwanen, ). it can therefore be assumed that they are more affected by the closing of schools than their male partners. in fact, several studies (alon et al., ; andrew et al., ; carlson et al., ) and media coverage (see e.g. ascher, ; c. c. miller, ; topping, ) on the impact of covid- on families have indicated complications and challenges, as this unprecedented situation appears to have revealed or exaggerated existing gender inequalities and divisions within families. some have even referred to this strange situation as the s revisiting home life (ferguson, ), indicating a backlash in terms of gender equality and power positions in the home during these circumstances. during previous crises, women have been more likely to either reduce their working hours or temporarily step down from work (alon et al., ; andrew et al., ). we still lack comprehensive evidence, but studies conducted during the pandemic suggest that mothers have been under pressure for the last few months and that mothers have spent less time on paid work and more time on household responsibilities as compared to fathers during the pandemic (andrew et al., ; carlson et al., ; collins, landivar, ruppanner & scarborough, ; craig & churchill, ; hennekam & shymko, ; manzo & minello, ; qian & fuller, ).
studies have indicated that young children tend to seek help and attention by interrupting their mothers, and that the mothers in turn experience time as more fragmented (collins, ; collins et al., ; sullivan & gershuny, ), which can become a bigger challenge in a lockdown such as the one during covid- . since the lockdown, more of the mothers participating in andrew et al.'s ( ) research have reduced their working hours, and those who have stopped working do twice as much childcare and household duties as their male partners who are still working. conversely, in families where the male partner has stopped working but not the female, the parents share childcare and household duties equally even though the mother works at least five hours of paid work a day. qian and fuller ( ) argue that the pandemic is far from being an equalizer when it comes to gender equality, as their research indicates a widening gender employment gap among canadian parents with young children.

the pandemic has not only affected schools, as many companies and businesses have been forced to adapt to the circumstances with more working-from-home and telecommuting opportunities for their workers (alon et al., ). juggling childcare and paid work has been very challenging for parents, but then again, this has meant increased flexibility for many employees, flexibility that has often been discussed as the solution to a better work-life balance, especially for women (gatrell, burnett, cooper, & sparrow, ; sullivan, ; wheatley, ). however, there are various intricacies around the interactions of gender equality and work-life balance in normal times, which seem to have intensified during the pandemic as the pressure on parents' time increases (e.g. andrew et al., ; carlson et al., ).

iceland has been considered a frontrunner, even among the other nordic countries, in gender equality (the world economic forum, ), which makes it a particularly interesting setting in this regard. we believe that times like the covid- pandemic provide a unique opportunity to explore and shed light on deeply entrenched and gendered social structures within the organization of the family. in fact, research has already pointed in that direction (auðardóttir & rúdólfsdóttir, ). thus, the focus of this study is to look at how the societal changes reflect and affect the gendered division of labor, especially concerning the unpaid labor of childcare and household chores, from the perspectives of mothers in heterosexual relationships. this was done by collecting daily real-time diary entries from almost mothers for two weeks during the peak of the pandemic in iceland while severe restrictions were being followed.

important steps towards gender equality have been taken in the western part of the world over the years, not least in the nordic countries. these include improved legal frameworks, rising female employment and educational levels, and improvement in fathers' involvement in childrearing (evertsson, ; eydal & gíslason, ; gíslason & símonardóttir, ; jóhannsdóttir & gíslason, ). despite these steps, the gender pay gap remains unbridged, reflecting the persistent idea of the male provider role (petersen, penner, & høgsnes, ; snaevarr, ). iceland's reputation as the most gender equal country in the world has been quite prominent in public discourse and in the media, both in iceland and around the world. this media discourse has portrayed iceland as a paradise for women, implying that gender equality has more or less been achieved in iceland (hertz, ; jakobsdóttir, ; kilpatrick, ;
this media discourse has portrayed iceland as a paradise for women, implying that gender equality has more or less been achieved in iceland (hertz, ; jakobsdóttir, ; kilpatrick, ; this article is protected by copyright. all rights reserved. tuttle, ), which has even been used for international branding purposes (einarsdóttir, ) . despite the importance of recognizing that the ranking of gender equality as practiced by the global gender gap index, among others, has its limitations and overlooks important institutional variables such as social norms and values (einarsdóttir, ) , certainly iceland is doing well in international comparisons. women's educational attainment in iceland has steadily increased over the last few decades (bjarnason & edvardsson, ) , and in the year , icelandic women had the highest labor ratio among the oecd countries at . %. the same applies to men's labor force participation of . % (oecd, ) . despite this active participation in the labor force, icelandic women have established families at relatively young ages and the average birthrate has been rather high up until very recently in comparison with other northern european countries (hognert et al., ; jónsson, ) . in iceland, as elsewhere, women work part-time jobs in higher numbers, and mothers reduce their labor participation following childbirth more often than do fathers (gíslason & símonardóttir, ) . regardless of international trends towards increased active female participation in the workforce, the labor market is still very gender divided, and the rates of gender segregation both in line of work and educational choices are striking (dinella, fulcher, & weisgram, ) . the same manifestation applies to iceland (snaevarr, ) . over the last few decades, the government of iceland has taken some important steps in making laws and policies to facilitate fathers' involvement in childrearing responsibilities. the most substantial step is probably an act on shared parental leave passed in , which gave parents nine months in total, "dividing the nine months so that three are sharable while each parent has three that are strictly non-transferable" (gíslason & símonardóttir, , p. ) , and was lengthened by a month on january , (act on maternity/paternity leave and parental leave no. / with amendments). in iceland, research has indicated that discourses on motherhood in this article is protected by copyright. all rights reserved. relation to breastfeeding imply more intensive mothering that starts when the children are very young. this is somehow in opposition to the governmental emphasis on gender equality that aim to get fathers more in involved in parenting (gíslason & símonardóttir, ) . despite all these advancements, there are some signs that these have been achieved at a cost and there are some cracks in icelandic's glossy image as the frontrunner of gender equality (einarsdóttir, ) . in recent years, media coverage about people experiencing burnout has been more common, especially among professions like nurses and elementary school teachers (halldórsdóttir, skúladóttir, sigursteinsdóttir, & agnarsdóttir, ; the icelandic nurses' association, ; the icelandic teachers union, n.d.) , which in iceland are typically female professions. it appears that people are increasingly experiencing stress in their everyday live, which, if prolonged, can result in both poor physical and mental health (jónsdóttir, ) . 
over the last few years, research results from iceland have indicated that conflicts between work and family are quite frequent among icelandic parents, even though they do not consider housework alone to be a great burden (Þórsdóttir, ). family obligations and issues related to the care of children are more likely to be woven into mothers' working hours than fathers' (hjálmsdóttir & einarsdóttir, ). there are also indications that parents are more likely to express difficulties when it comes to everyday chores than are workers without children and that parents experience conflict in balancing work and family (eyjólfsdóttir, ; hjálmsdóttir & einarsdóttir, ; Þórsdóttir, ).

work-life balance refers to the ability of every individual, regardless of gender, to coordinate work and family obligations successfully. work, in this context, refers to paid labor performed outside the home (wheatley, ). studies have found that, when parents manage to balance family and working life, they are more satisfied with their life, which positively impacts their mental and physical health (haar et al., ). successful work-life balance can, therefore, be considered an important public health issue (lunau, bambra, eikemo, van der wel, & dragano, ). a growing number of people describe increased time pressure in their daily lives and experience time as a scarce resource for all the tasks in their daily schedules (fyhri & hjorthol, ). time is gendered, and bryson and deery ( ) have claimed that gender inequalities are sustained by differences in the use and experience of time among men and women and "that 'time cultures' are bound up with power and control" (p. ). research has indicated that men have, on average, more control over their time outside work than women. more claims are laid on women's time by family members. women feel more rushed in their daily lives and are more likely to be expected to attend to household work. they are also more inclined to multitask than men (bryson, ; craig & brown, ; friedman, ; rafnsdóttir & heijstra, ; sullivan & gershuny, ).

for the last few decades, some countries have been changing their policies to improve the opportunity parents have to balance work and family (gatrell, burnett, cooper, & sparrow, ; sullivan, ; wheatley, ). such policies are often based on more access to subsidized childcare or flexibility. work flexibility has been argued to be desirable and a step towards gender equality, since it has enabled people's work-life balance (gatrell et al., ; haar et al., ; sullivan, ; wheatley, ). alon et al. ( ) predict that the somewhat forced flexibility of many workplaces caused by covid- might last after the pandemic has run its course and be beneficial for both mothers and fathers. nevertheless, work-related flexibility has both pros and cons and can even cause stress. the division between work and home can become more blurred when employees bring their work home and take care of family matters during working hours (hjálmsdóttir & einarsdóttir, ; wheatley, ). it has also been argued that not all professions offer an opportunity to enjoy the benefits of taking work home or having different working hours. such flexibility is often dependent on educational level, as well as being related to the gendered division of the labor market (pedulla & thébaud, ).
female-dominated professions, like teaching and nursing, often have strict attendance obligations in their workplaces and less opportunity for work flexibility (pétursdóttir, ; wheatley, ). men more often enjoy the opportunity to have flexible working hours or work from home, and flexibility is more likely to have a negative effect on women's careers (friedman, ). as such, seemingly supportive policies can have different consequences for men and women (pedulla & thébaud, ).

the structure of the family as an institution has changed in recent years, including the composition of families and the roles of the genders, and each family member now has more complex roles (júlíusdóttir, ). starting a family and having children has turned out to have different effects on the lives of men and women, and it seems to be less beneficial for mothers. more families now rely on dual earnings, and although the number of women working in paid labor has been on the increase, there is still a lack of active participation among men in the home. this applies to iceland and many other countries (gíslason, ; petersen, penner, & høgsnes, ). having children and family relations maintain and support gendered positions and divisions of labor in public and private lives. petersen et al. ( ) underline how important it is to take such aspects into consideration when it comes to the positions of men and women on the labor market. t. miller ( ) claims that the reasons behind caring practices and their gendered performances "can be multiple and are interrelated, operating at the interpersonal and broader structural, political, policy and cultural levels" (p. ). research has indicated that social structures and prevailing attitudes can influence the gendered division of labor in relationships (dotti sani, ; evertsson, ).

household labor has often been referred to as invisible work (hochschild & machung, ), and the conceptualization of family work can be ambiguous since scholars often use different explanations of what such work actually entails (robertson, anderson, hall, & kim, ). here, we follow these lines of thought and the three constructs of family work commonly referred to in family work studies: housework, childcare, and emotional labor. emotional labor relates to activities relevant to the emotional wellbeing of other family members and giving them emotional support (curran, mcdaniel, pollitt, & totenhagen, ). in an attempt to distinguish between emotional labor and mental work, robertson et al. ( , p. ) suggest mental work as the fourth construct of family work, which "includes the invisible mental work related to managerial and family caregiving responsibilities", such as managing, monitoring, scheduling, knowing, and organizing family life. mental work cannot be delegated to someone who does not belong to the family, and within families, mothers are much more likely to be the household managers (ciciolla & luthar, ; curran et al., ; hjálmsdóttir & einarsdóttir, ; robertson et al., ). this type of work often goes unnoticed by other family members, along with the mental burden that such responsibilities entail, but it impacts the mother's wellbeing with feelings of being rushed and strained in everyday life (ciciolla & luthar, ; craig & brown, ). it has also been pointed out that it can be difficult to detect mental work since it is quite often closely connected with other activities related to the family (robertson et al., ).
in addition, many parents, especially mothers, experience work-family guilt when combining work and family, experiencing conflict between the tasks in the public and private spheres (borelli, nelson, river, birken, & moss-racusin, ), which can add to the mental load of everyday life.

during the ban on gatherings in iceland, people also had to ensure that they kept a distance of at least two meters between individuals. this entailed the closing of swimming pools, gyms, pubs, and museums. however, no changes were made to the organization of schools (government of iceland, b) from the previous measures. due to these actions, those who possibly could work from home were encouraged to do so (sveinsdóttir, ). schools operated under restrictions (the directorate of health, ), including no more than children in the same group and groups not being allowed to interact. it was common for students to attend school every other day, for school days to be shorter, and for meals to be available for only a small part of the student body. parents were, in some cases, encouraged to let their children stay at home if they possibly could, while parents in occupations such as doctors, nurses, and police were identified as priority groups. this meant that they were somewhat less affected by school closures and restrictions. students in th to th grade ( -to -year-olds) had to study from home via distance education. after-school care was closed; sports and other extracurricular activities were cancelled, and children were encouraged to only meet with the kids in their small groups outside school (icelandic association of local authorities, ). as in other countries, all these measures had a severe impact on families with children, even though the schools technically never closed and lockdowns were not imposed. this is the context in which this study was conducted in march and april of . on may , , social distancing restrictions were eased, meaning that all children's activities were more-or-less back to normal (government of iceland, a), at least for the time being.

this article draws from a real-time diary study conducted during the ban on public gatherings in iceland. the first week of the diary study started on march th, and the second week followed; the diary design combined structured time-use measures (kan, ; kitterød & lyngstad, ) with open-ended entries. for the purpose of this study, we only analyze and present findings from the open diary entries. according to bolger et al. ( ), diary studies are well suited to capturing the experiences and particulars of the life of the participants. since this is a real-time study with a minimum of time lapse between the experience and the reflections, the likelihood of retrospection bias is minimized. one of the benefits of real-time diary studies like this one is that events are reported in a natural, spontaneous context. by doing so, the data becomes richer, and important contextual information and meanings can be pieced together and included in the study.
in an effort to shed light on the everyday life of mothers during covid- , we analyzed the open diary entries from female participants in heteronormative relationships, or mothers. about half of them lived in the reykjavík metropolitan area (n = ) while the others were spread around the country. the number of children in the homes of these mothers varied from one to six, but the majority (n = ) of the mothers had two children. the educational level of the participants was rather high, as a majority of participants held a university degree, with bachelor's degrees and with master's degrees. twenty-eight were in paid labor, four were on parental leave, one was an independent laborer, one was a student, one was both studying and working, one was on sick leave, and one was on disability. in most of the cases, both parents primarily or solely worked from home during the time of the study, and most of them were working full-time the whole period, even though some worked reduced hours due to the pandemic. in all cases, the children could attend schools up to some extent, but with severe restrictions of many sorts. after providing informed consent, participants were asked to answer a questionnaire consisting of background questions. then, they received a daily questionnaire via microsoft forms for two weeks. the purpose of the questionnaire was twofold; to collect structured time-use data (fisher, et al., ) , and open-ended diary entries in which participants would this article is protected by copyright. all rights reserved. write an "old style" diary, reflecting on everyday life during covid- . in the diary entries, participants were asked to reflect on their day, the impact of covid- on their life, division of household duties and responsibilities, and other issues they wanted to share. it is important to consider the risk of failure in distinguishing participants' reports of atypical experiences related to or caused by a major event or general experiences (bolger et al., ) . therefore, participants were asked to reflect specifically on their experiences in the context of covid- . the total word count of the written reflections was around . words, which provided us with rich qualitative data. we analyzed the written reflections drawing on braun and clarke's ( ) phases of thematic analysis. the text was sorted by date and participant before we read it several times, added notes, and discussed the content together. then, we coded the text, applying an inductive approach. this means that the initial coding of the diary entries was open and emphasized understanding the participants' experiences without engaging too much with existing literature and theories. similar codes and text segments were then collated in order to identify repeated patterns of meaning across the data: stress, work-life balance, and division of household duties. participants were promised confidentiality and that measures would be taken to prevent identification. we provided participants with a random personal participant number to ensure their anonymity. information that could link participants' names to the number was deleted right after the data collection period. participants were able to withdraw from the study at any time, and some did for unknown reasons. due to the limited time for the study, we decided to use the most convenient way possible to share information about the research and recruit participants, facebook. 
this probably affected both the number of participants, as the window of time to recruit participants was limited, and how homogeneous the group became, particularly in terms of educational level.

analysis of the data generated two themes, presented in two sections. the first concerns the complexities of work-life balance in covid- times, particularly the gendered interactions of stress, work-life balance, and mental work. the second section specifically draws on the emotional labor performed by the women in the study, some of which is represented by how conscious the women were of the well-being of their family members.

the diary entries quite clearly described complications and stressful situations as the women were trying to juggle their time between work duties and childcare. they described how strained they were and how their stress level was increasing, using words like overwhelmed, frustrated, tired, annoyed, and angry to describe their situations. below are a few diary entries from mothers who were all working - % that reflect this. in the following example, a mother of a -year-old working in mass media, who worked entirely from home as did her husband, described one of her days like this: "i'm a little anxious because of all this, the situation in society. then, i do not have the energy to do much, only the necessary things. the child wore pajamas the whole day." she mentioned how the whole situation made her feel anxious and drained her energy. this was true of many of the other women, as this mother of three ( , and ), who worked in a nursing home, explained: "now we have spent more than a month in quarantine and home-schooling. it has started to take its toll mentally, and the day today was difficult. i was almost in tears." her husband was still working in his workplace while she had taken a leave for the first weeks of covid- .

juggling home-schooling, childcare, and work created a lot of pressure on the mothers, and some of them described the guilt they were experiencing from feeling that they could not keep up with everything. the next example is from a mother who worked full time at her workplace. she had two children, and years old, and wrote about her experience in the following way:

i experienced a slight panic attack on the way home over juggling all these different duties, and i cried a little. i went to the grocery store to get some time for myself and shopped for my sister who is in quarantine . . . no one has energy to start putting the kids to bed, so they went to sleep too late. . . jesus, how the parental-fuse is short, and i feel guilty about that.

as these examples show, the mothers experienced stress, a lack of energy, and even guilt. as during 'normal' times, mothers are more likely to experience work-family guilt, feeling guilty about not being at their best and not spending enough time with their kids despite being on the run all the time (borelli et al., ; hjálmsdóttir & einarsdóttir, ). during covid- , this pattern seems to have intensified, as supported by research from other contexts as well (e.g. hennekam & shymko, ). the level of guilt and how it affected them was addressed by more participants. this mother had two children ( and years old) and was working full time. she and her husband were both working from home:

i feel as if i should be able to organize my time better.
the day passes, and i have not had time to enjoy one cup of coffee in peace. i do not sit down, but still the apartment is in chaos, the children neglected, and work unfinished.

these examples show how much time pressure these mothers were under and how they experienced guilt over not being able to complete their tasks, whether work or family related. studies have shown that parents are under significant time pressure in their daily lives (fyhri & hjorthol, ), especially women (sullivan & gershuny, ). this pressure seemingly increased greatly during the pandemic, as other research has indicated as well (alon et al., ; andrew et al., ). the above example also indicates a level of multitasking, as did entries from several other mothers in the study. according to previous studies (e.g. bryson, ; craig & brown, ; friedman, ; rafnsdóttir & heijstra, ; sullivan & gershuny, ), women multitask more often than men. the experiences of these women indicate that their perceived time pressure and increased need for multitasking lay heavily on their shoulders.

towards the end of the study, when the restrictions because of covid- were somewhat lifted, some mothers mentioned that they had just realized how much constraint was caused by having to erase the boundaries between work and family life. in the following diary entry, a mother with a six-month-old child, who worked as a manager in a half-time job, explained how:

i went to my workplace for the first time in weeks. it was so different. i do not think that i realized until yesterday how much constraint comes from working from home with a child at home. i cannot wait until i can return to my workplace every day and create these boundaries between private life and work.

this description is interesting in light of how flexibility and working from home have often been portrayed as the solution to work-life balance, especially for women, and as a way to improve parents' opportunities to better balance work with home life (gatrell et al., ; wheatley, ). some of the other mothers also described how the boundaries between home and work had been blurred. these experiences indicate that working from home can be difficult for parents, particularly mothers, since they find their work time being interrupted by other duties. this has been documented in previous research (e.g. wheatley, ). alon et al. ( ) predicted that changes in working practices adopted during covid- might be permanent, but we argue that working from home and flexible working hours must be considered very carefully in favor of working parents, bearing in mind gendered social structures.

it was clear from the diaries that these unprecedented times revealed or intensified unequal divisions of duties at home, which made the mothers realize and reflect on their positions at home. a mother of two ( and years old), who was a teacher working full time but had started working from home, as had her husband, said:

today, there was a little clash at home. i have noticed that i usually write the diary before dinner, and a lot of work awaits me afterwards. i usually put the kids to bed, bathe them, tidy up endlessly (usually in the evenings when they are asleep), read, and tuck them in. today, i threw a tantrum over this, . . . but we had a good conversation, and everyone agreed to contribute more . . .
[my husband] agreed with me that he could be more present in these daily routines around the kids and home.

this example shows how being responsible for the kids and home was on her shoulders, as well as the responsibility for taking action to change the balance. a few days later, the same woman explained how she was starting to realize how the situation affected the division of tasks, partially because her husband prioritized differently, e.g., around work or exercise, and also because the children asked her for help even though their father was also at home:

we knew that the division of tasks is rather equal in our everyday life, but now that we are both working from home, it is obvious that he takes his space when he needs to attend to 'his' things, and i run, and i sprint from my work much more than he does.

this example shows how the mother was easily interrupted with household responsibilities, which is in accord with other research findings suggesting that mothers' time is more often fragmented (collins, ; collins et al., ; sullivan & gershuny, ). according to andrew et al.'s ( ) study, mothers more frequently combined their paid work with other activities during the pandemic. this illustration also supports the notion of time being gendered (bryson & deery, ), as she perceived that her husband had more control over his time to tend to matters unrelated to work or family. this is in accordance with previous studies on gendered control of time among parents (bryson, ; friedman, ) and new research conducted during covid- indicating that unpaid work performed by mothers has increased during the pandemic (craig & churchill, ; manzo & minello, ).

the responsibility for dividing duties at home lay on the mothers' shoulders, as they explained in several diary entries. this shows how mental work (robertson et al., ) was central to their gendered realities. as one said, "everyone has to have certain duties in the home if domesticity is supposed to work without me losing my mind." this mother had two teenagers and was working full time from home while her husband worked in his workplace. another one, who had two children ( and years old) and was working full time, explained her situation in this way:

it is not easy working from home with a two-year-old. i had to make sure that his father takes him to his parents' home, while they were away, so that i could get some peace. then, i put him down to nap after lunch and had to make sure that father and son woke up at the right time . . . usually, i must make sure that things work . . . how are you supposed to be an employee, parent, leisure worker, cook, and a teacher all at once?

this outlines quite well how she experiences the responsibility of managing the household. the father is a participant, but she is the manager and carries responsibilities that add to the mental burden of everyday life (ciciolla & luthar, ), exacerbating the mental drain women have felt during covid- (hennekam & shymko, ). another mother, with a two-year-old child, who worked full time from home along with her husband, similarly wrote:

i have turned into a foreman here at home. i am trying to get a clearer oversight over what has to be done and activate my husband to prevent everything from becoming a mess, and i do not want to take care of it all by myself. so, i had a family meeting and put up a clear division of duties.
this mother also wrote that, on an everyday basis, they did not have a clear division of tasks, but during covid- , it became necessary. this indicates that times of crisis can reveal deeply rooted norms and structures around gender roles within the home. the experience of another mother, who had three children ( , and years old), further supports this. she was a care worker, and she and her husband were both working in their workplaces:

i became tired today and reprimanded my husband. i take care of the management, division of tasks and responsibility for the children's education and practices. i feel like we are dangerously close to the gender development as it was before the middle of the last century. also, it is my responsibility to remind [him] that this is not supposed to be like this, so that also adds to my basket of duties.

all of these examples show how the situation during the pandemic revealed and exaggerated the mothers' roles as household managers (ciciolla & luthar, ; curran et al., ). they planned and organized family life to make sure that everything worked. this is consistent with research from australia, where mothers felt unsatisfied with the division of labor in their homes during covid- (craig & churchill, ). drawing on previous studies (e.g. craig & brown, ), this invisible mental work became a burden for the women and clearly affected their everyday wellbeing. interestingly, it also added to their duties, as they became somewhat responsible for getting other people in the household, particularly the fathers, to take on more responsibility to even the load.

some of the women in the study described how they made an effort to hide their stress and anxiety from their children and other family members in order to ease the atmosphere and keep the family calm. in accordance with studies and theories of the gendered aspects of emotional labor (ciciolla & luthar, ; craig & brown, ; robertson et al., ), the women performed that kind of labor in addition to other duties. this is reflected in the words of a mother of two children, nine and ten, working full time mostly from home with a husband who mostly worked away from home:

the days are getting really difficult, and i will take my first summer holiday tomorrow. the younger child is not happy about [the situation] and cries over everything that seems like adversity, even little things like when she is asked to read or tidy up. the little patience i have is running out, but i try my best not to let her see it.

the day after, the situation became worse, as the family was facing possible quarantine and they were waiting for further directions from a national team of contact tracers. she wrote this in her diary:

now we possibly have to start days of quarantine. we will know tomorrow. at least we have to remain in quarantine for hours until the test results. i am pained by this situation, but i try to stay positive, especially with my husband and children. they may not see [my] anxiety because then they become afraid. i continue to meditate and do yoga; everything will be ok.
she described how difficult her day was, as one of her children cried a lot because she missed her friends so dearly. the day "was spent tending emotionally to the children." the women in the study had to devote time to emotional labor instead of work. another reflected on how she tried to calm the people around her. i am really focused on being well informed so that i can answer [questions] and calm elderly people and children around me. i am very cautious and try to follow up with my children on how to be careful without frightening them. one of the women explained how her husband was irritated because of the situation and tired because he was working shifts, so much so that he "exploded" at times. therefore, she made an effort to try to make sure that his irritation did not affect the children ( , and years old) too much. she was working % from home while he was working away from home. she explained. i take care of the children and the home every day, since he is asleep until he has to go to work or loafs around on the computer. everyone has a short fuse, but i make sure that i intervene and suggest a break, that everyone goes out, plays or when the children are starting to nag. it is difficult to be able to concentrate on work. this article is protected by copyright. all rights reserved. another example of the women's emotional labor included dealing with difficult thoughts and decisions related to the pandemic. a mother of two ( and years old) wrote that: despite a lot of physical resting lately, my mind has been spinning around worries and difficult decisions. should the children attend school or not? can i meet my father [who has heart problems] if i keep a m distance? is it necessary to disinfect all the groceries? according to curran et al. ( ) , this kind of work can be called emotional labor, as these women emphasize how they tend to the emotional wellbeing of other family members. this kind of labor was not limited to their children; it also applied to other relatives. for example, the emotional labor involved phone calls to parents or other relatives, sometimes several times a day. other studies have shown that this is often part of women's routines (ciciolla & luthar, ; robertson et al., ) . the months of covid- have been and are quite challenging for many families, and the drastic measures that have been taken to prevent its spread have meant severe changes to people's participation in everyday live and social contact (brooks et al., ) . in accordance with new research on the effect of covid- on everyday life (andrew et al., ; carlson et al., ; collins et. al., ; craig & churchill, ; manzo & minello, ; qian & fuller, ) , the time during the social restrictions was not easy. it is apparent from the diary entries of our participants that the period with the tightest restrictions was challenging for the mothers and their families, and they expressed feelings of frustration and being overwhelmed. despite advances in gender equality over the last decades, drastic events such as during the this article is protected by copyright. all rights reserved. covid- pandemic, can elicit situations that we do not necessarily pay attention to in our busy daily lives or even resist recognizing. 
in iceland, which has been portrayed as a "paradise for women" (jakobsdóttir, ) and which is considered a global frontrunner when it comes to gender equality (the world economic forum, ), parents face challenges related to gendered realities, and gender equality has not been achieved regardless of what the dominant discourse may say. despite remarkably high labor participation, there are indications that women in iceland shoulder the greater burden of childcare and household labor (hjálmsdóttir & einarsdóttir, ; Þórsdóttir, ), as elsewhere around the world (alon et al., ; knight & brinton, ; t. miller, ). the diary entries of the mothers in the study demonstrate a gendered reality in which they experience burdens that seem to have escalated during the pandemic. it was stated in media coverage that the covid- pandemic had brought back the s regarding the gendered division of labor (ferguson, ). the same phrase was used by one of our participants. some of the women wrote about how surprised they were about how much of the household chores and the childcare remained on their shoulders. despite some steps towards gender equality in the last few decades, there are few signs of a revolution, especially within the home. the focus of the struggle for gender equality has somehow been more on the public sphere, as reflected in the measures used for gender equality indexes, which overlook the gendered division of labor in the home along with social norms and values (einarsdóttir, ).

one of the patterns identified in the reflections of the women in our study was how they seemed to be stunned by how uneven the division of labor turned out to be during the pandemic and how much time and energy they devoted to household chores and the management of the household, carrying out the mental work within the family. their experiences support the idea of time being gendered (bryson, ), as they described how their time was more restricted by childcare and household chores and how they prioritized their children's needs over work. when the families were pushed into the home due to lockdowns and social restrictions, women faced an uneven division of labor that they might have been too busy in their daily lives to observe or might have found difficult to acknowledge. we argue, based on this study as well as emerging findings from larger studies from different countries (andrew et al., ; collins et al., ; craig & churchill, ; manzo & minello, ; qian & fuller, ), that the situation caused by the pandemic brought to light pre-existing gendered performances and social structures more than it caused a drastic gendered division of labor in the home. in iceland, where the dominant discourses have centered on the country as a global leader in gender equality, the existing inequalities have been overlooked. our findings suggest that there is an uneven division of labor within icelandic homes, as the mothers in the study bore the burdens of housework, childcare, emotional labor, and household mental work. if the aim is to close the gender gap in both the public and the private sphere, a focus on the gendered division of labor within the home is essential.

the impact of covid- on gender equality. crc tr discussion paper series
how are mothers and fathers balancing work and family under lockdown? the institute for fiscal studies
coronavirus: 'mums do most childcare and chores in lockdown'. bbc news
Chaos ruined the children's sleep, diet and behaviour: Gendered discourses on family life in pandemic times. Gender, Work & Organization. Online advance publication.
University pathways of urban and rural migration in Iceland.
Diary methods: Capturing life as it is lived.
Household time allocation: Theoretical and empirical results from Denmark.
Gender differences in work-family guilt in parents of young children.
Successful qualitative research: A practical guide for beginners.
The psychological impact of quarantine and how to reduce it: Rapid review of the evidence.
Time, power and inequalities.
Public policy, 'men's time' and power: The work of community midwives in the British National Health Service.
US couples' divisions of housework and childcare during COVID-19 pandemic.
Invisible household labor and ramifications for adjustment: Mothers as captains of households.
Is maternal guilt a cross-national experience? Qualitative Sociology. Advance online publication.
COVID-19 and the gender gap in working hours. Gender, Work & Organization. Advance online publication.
Feeling rushed: Gendered time quality, work hours, nonstandard work schedules, and spousal crossover.
Dual-earner parent couples' work and care during COVID-19. Gender, Work & Organization. Advance online publication.
Gender, emotion work, and relationship quality: A daily diary study.
Sex-typed personality traits and gender identity as predictors of young adults' career interests.
Men's employment hours and time on domestic chores in European countries.
All that glitters is not gold: Shrinking and bending gender equality in rankings and nation branding.
Gender ideology and the sharing of housework and child care in Sweden.
Samræming fjölskyldulífs og atvinnu: Hvernig gengur starfsfólki á íslenskum vinnumarkaði að samræma fjölskyldulíf og atvinnu? [Balancing family life and work: How do workers in the Icelandic labor market manage to balance family life and work?] (M.Sc. dissertation).
Iceland Magazine.
'I feel like a 1950s housewife': How lockdown has exposed the gender divide. The Guardian.
Exploring new ground for using the Multinational Time Use Study. ISER Working Paper Series.
Still a "stalled revolution"? Work/family experiences, hegemonic masculinity, and moving toward gender equality.
Children's independent mobility to school, friends and leisure activities.
Work-life balance and parenthood: A comparative review of definitions, equity and enrichment.
Parents, perceptions and belonging: Exploring flexible working among UK fathers and mothers.
Fæðingar- og foreldraorlof á Íslandi: Þróun eftir lagasetninguna árið [Birth and parental leave in Iceland: Developments after the legislation].
Mothering and gender equality in Iceland: Irreconcilable opposites?
Iceland eases restrictions - all children's activities back to normal.
Stricter measures enforced in Iceland: Ban on gatherings of more than people.
Outcomes of work-life balance on job satisfaction, life satisfaction and mental health: A study across seven cultures.
Tengsl streituvaldandi þátta í starfsumhverfi, svefns og stoðkerfisverkja hjá millistjórnendum í heilbrigðisþjónustu [Correlation between stressful factors in the working environment, sleep, and musculoskeletal pain among middle managers in healthcare].
Coping with the COVID-19 crisis: Force majeure and gender performativity. Gender, Work & Organization.
Why Iceland is the best place in the world to be a woman. The Guardian.
Mér finnst ég stundum eins og hamstur í hjóli [I sometimes feel like a hamster in a wheel].
The second shift: Working parents and the revolution at home.
High birth rates despite easy access to contraception and abortion: A cross-sectional study.
Icelandic Association of Local Authorities.
How to build a paradise for women: A lesson from Iceland.
Vinnutengd streita: Orsakir, úrræði og ranghugmyndir [Work-related stress: Causes, resources, and misconceptions]. Ársrit um ….
Childbearing trends in Iceland: Fertility timing, quantum, and gender preferences for children in a Nordic context.
Fjölskyldur: Umbreytingar, samskipti og skilnaðarmál [Families: Transitions, communication, and divorce]. Reykjavík: Félagsvísindastofnun Háskóla Íslands.
Measuring housework participation: The gap between "stylised" questionnaire estimates and diary-based estimates.
Iceland has become the first country to officially require gender pay equality.
Diary versus questionnaire information on time spent on housework: The case of Norway.
One egalitarianism or several? Two decades of gender-role attitude change in Europe.
A balancing act? Work-life balance, health and well-being in European welfare states.
Mothers, childcare duties, and remote working under COVID-19 lockdown in Italy: Cultivating communities of care.
Nearly half of men say they do most of the home schooling. 3 percent of women agree. The New York Times.
Paternal and maternal gatekeeping? Choreographing care.
Auglýsing um takmörkun á skólastarfi vegna farsóttar [Announcement on the restriction of school activities due to the epidemic].
Labour force statistics.
Can we finish the revolution? Gender, work-family ideals, and institutional constraint.
From motherhood penalties to husband premia: The new challenge for gender equality and family policy, lessons from Norway.
Within the aura of gender equality: Icelandic work cultures, gender relations and family responsibility.
COVID-19 and the gender employment gap among parents of young children.
Balancing work-family life in academia: The power of time.
Mothers and mental labor: A phenomenological focus group study of family-related thinking work.
Gender differences in chauffeuring children among dual-earner families.
Launamunur karla og kvenna [The pay gap between men and women].
Key figures, statistics.
'Bad mum guilt': The representation of 'work-life balance' in UK women's magazines.
Speed-up society? Evidence from the UK and time use diary surveys.
Kórónuveiran: Fyrirtæki hvött til að þjálfa fólk í fjarvinnu [The coronavirus: Companies encouraged to train workers for remote work].
The Directorate of Health and the Department of Civil Protection and Emergency Management.
The Icelandic Teachers' Union (n.d.). Streita og kulnun [Stress and burnout].
The Global Gender Gap Report.
Working mothers interrupted more often than fathers in lockdown - study. The Guardian.
One country is making sure all employers offer equal pay to women.
COVID-19 educational disruption and response.
Good to be home? Time-use and satisfaction levels among home-based teleworkers.
Vinna og heimilislíf [Work and home life]. Reykjavík: Félagsvísindastofnun Háskóla Íslands.
key: cord- -u jm y authors: Catty, Jocelyn title: Lockdown and adolescent mental health: reflections from a child and adolescent psychotherapist date: - - journal: Wellcome Open Res doi: . /wellcomeopenres. . sha: doc_id: cord_uid: u jm y
The author, a child and adolescent psychoanalytic psychotherapist working in the UK NHS, ponders the varied impacts of 'lockdown' on adolescents, their parents and the psychotherapists who work with them during the COVID-19 pandemic. She asks, particularly, how psychological therapies are positioned during such a crisis, and whether the pressures of triage and emergency can leave time and space for sustained emotional and psychological care.
She wonders how psychoanalytic time, with its sustaining rhythm, can be held onto in the face of the need for triage on the one hand and the flight to online and telephone delivery on the other. Above all, the author questions how the apparent suspension of time during lockdown is belied by the onward pressure of adolescent time, and how this can be understood by, and alongside, troubled adolescents.
The time of the COVID-19 virus brings a strange shifting of priorities to my professional life as a child and adolescent psychoanalytic psychotherapist working in a child and adolescent mental health service (CAMHS). COVID-19: the name itself encapsulates delay (Flexer, Waiting in Pandemic Times). Building the origins of the virus in 2019 into the term, it provides a stark reminder that, having ignored warnings from the medical world and then the evidence before our eyes, we are now always already trying to catch up (Horton). The world is in crisis, but it is hard to position the acute and chronic crises of mental health work in the NHS against the unfolding crisis we see on our screens. Are we high priority or low? Frontline or routine? Do we, like primary care staff, rush to 'man the barricades' (Davies, Waiting in Pandemic Times)? Anxiety about the possibility of redeployment is spreading among mental health staff, even where they are entirely untrained for physical health care. Or do we hunker down at home to conduct therapy online for the foreseeable future? (What is foreseeable about the future, now, for the young patients, depressed, anxious or enduring the turbulence of adolescence, for whom the future was only hazily in view in the first place?) Mental health has traditionally been lamented as the poor relation within the National Health Service (NHS), with psychiatry under-valued and repeated cries to achieve parity between mental and physical health ignored. How, then, are we to consider the seriousness of psychological and emotional labour conducted in services such as CAMHS during a national crisis? Talking to young people and children about their anxieties, or even their considerable distress, appears low priority when compared to doctors and nurses battling COVID-19; yet an adolescent death by suicide remains one of the most catastrophic events imaginable, for family, friends and professionals alike. In the time of the virus, we are thus adrift in the prevailing geo-spatial metaphors of the age: nowhere near the 'front line', we may find ourselves thrust suddenly towards it if a teenager attempts to harm him- or herself. The world gives the impression of having halted adolescent time. Exams are cancelled; school is out, or virtual; universities have sent their students home. For those in their teens, the COVID-19 pandemic arrives at a crucial time in development, as they transition from childhood to adulthood. Yet the time of adolescence itself often feels both chronic and acute, its difficulties regarded as perennial, even predictable, yet often plunging the young into crisis. Disturbed adolescents may try to arrest a march of time that feels relentless by retreating into depression, or into their bedrooms: to halt their progress towards a future that is perceived as bleak, or simply unimaginable. What can we learn about time, now, in the time of COVID-19, from this sudden suspension of time which is not actually a suspension at all? And from this questioning of the future which is, curiously, so familiar to many of the young people whose mental health elicits our care?
The decision to award GCSE and A-level results, rather than postpone the exams, could be seen as a shocking pronouncement: that time waits for no one, that adolescent progress cannot, must not, be halted, even if, for those awarded a grade less than that which they might have achieved, progress is thwarted. Like their younger counterparts at the top of primary school, they must, even from their bedrooms, be ushered forwards to the brink at which they bid their school lives farewell. Those struggling with the pressure of work and exams may be relieved, but their world has also crashed down upon them and many are disappointed. Some lament a lack of control: the final academic effort, for which they were preparing, is denied them, and teachers, or government, will decide upon their grades. Yet for some, for whom the pressure of external life has been unbearable, perhaps there is the possibility of respite, and the lockdown may provide them with much-needed time for recovery. Adolescent development 'runs unevenly' (Waddell): how the time of COVID-19 intersects with each individual trajectory will vary hugely. While the media portray the young as oblivious, gathering in parks, spitting defiantly in the faces of police or the elderly, we hear our young patients report their varying responses, almost always ambivalent, anxious. For those with depression, existential despair, sometimes born of inter-generational trauma and loss, is known to dominate (Catty, ed.): how are they to believe that the future holds any promise when it appears to have been cancelled, or at least indefinitely postponed? For some, this will confirm a pre-existing belief, a bleakness. Meanwhile, they worry about grandparents, parents and, increasingly, each other. There is an idea that psychoanalytic work with adults involves the recollection and processing of remembered trauma, that it is, as Wordsworth wrote of poetry, 'emotion recollected in tranquillity', while therapy with children and adolescents is conducted during and alongside the unfolding of their key emotional dramas. Theory and clinical practice afford many contradictions of this dichotomy; yet it remains meaningful to conceptualise adolescent therapy as a 'being alongside' a teenager as they live through their most turbulent of times. How does lockdown impact on this sense of immediacy? During lockdown, young people are suffering a crisis that we appear to share with them, at least in this basic way: we too sit in our homes as we engage them in their therapy. Keeping a focus on the particularity of their experience, the extent to which the national crisis may or may not be impacting on their internal dramas, will need close attention. Yet perhaps they have something to tell us about uncertainty, about the future, about the passing of time, that they have long feared we did not understand. For some, we have finally entered into their world. There are implications here, too, for our work with their parents, now that we feel ourselves to share their most immediate circumstances: we are all in lockdown; we are all worried about our ageing parents; we are all, increasingly, worried about the young. Crisis time in adolescent mental health services relies on a red-amber-green system of case-flagging. Now only the reddest of the red cases can be seen in person, anxiously diverted from accident and emergency departments to the community clinic to avoid contamination.
While those on duty manage these most critical of crises in person, the rest of the team connect to their patients via telephone and video-conferencing. Fears that mental health work will be deemed such low priority as to justify sending therapists into the medical settings for which they would be entirely, shockingly, unprepared seem to abate as authorities determine that mental health emergencies are themselves a 'priority'. At the same time, the urgency of attending to an unfolding mental health crisis is becoming clearer, articulated in a recent 'call for action' to include data collection on the psychological, social and neuroscientific effects of the pandemic on the general population, vulnerable groups and those with the virus (Holmes et al.). What, then, are the implications of mental health triage in this new world? In the early weeks of the lockdown, we wonder whether to activate a crisis response by focusing only on emergencies, keeping in touch with our regular patients through more frequent, but briefer, telephone updates. Implicitly, we are invoking ideas of triage (focusing only on emergencies in any detail or depth) and support (finding out how our patients are managing, rather than working with them). Yet it is clear that such a model will not serve us well in the longer term: if nearly the whole CAMHS population is provided with brief, intermittent support rather than treatment, logic dictates that their mental health will deteriorate. Yet does such a distinction between support and treatment hold in a time of crisis? It is a distinction that has always been uncomfortable where it privileges the activity of psychological therapists over other mental health specialisms, such as nursing, occupational therapy or social work (deemed to be providing 'support' or 'risk management'); yet it has enabled us to retain an emphasis on the 'work' that is involved in psychological treatment and the process that unfolds between the participants in psychotherapy, patient and therapist. What the nature of such work may be during lockdown remains to be seen. Meanwhile, mental health emergencies among the teenage population seem to have plummeted, and we wonder: where are they? Have they too been suspended? There is anxiety about when the dam may break; an increase in anxiety, depression and self-harm is expected in the population as a whole (Holmes et al.). For those who do come in, we find ourselves contorting the familiar NHS language of 'risk': do we mean suicide risk or COVID-19 risk? Where is 'safe' for a teenager determined to kill herself, or for another who has taken an overdose? A mother asks whether, were her teenage son to harm himself, she would be allowed to be with him in hospital; we cannot advise her. The focused maternal care that a teenager may specifically crave in such desperate moments becomes the one thing he would deprive himself of; the choices facing those with suicidal thoughts become starker now. We ask ourselves: can we provide a reassuring presence dressed in protective mask and goggles? Or should we retreat behind a computer or smartphone screen, through which we can, at least, be seen as ourselves? How do we keep time in such a crisis? There is a rhythm that psychotherapists and their patients come to live and breathe: the regular pulse of the psychoanalytic session, whether weekly or more frequent; the predictability of the starting time; the inevitability of the session end or the week's wait.
This rhythm underpins the duration of a therapy as it unfolds in time and is the bedrock of the 'containment' (Bion) that psychoanalysis offers (Baraitser & Salisbury, Waiting in Pandemic Times). Can this rhythm, based on the fifty-minute hour, be maintained over the telephone or protected with the same boundaries as in the clinic? In the rush of psychotherapists to online platforms and the telephone, can we maintain this steady pulse? For a teenage patient, does it still feel like his session time if he knows his therapist is going to ring? Will it still feel like time to stop if we are wrapped in the cocoon of sound provided by a telephone call in a quiet room, or if we have been trying to focus on each other's faces in a shaky video call? Despite the fact that most teenagers are more familiar with online discourse than we are, this shift raises issues of space too. Is it intrusive to conduct therapy online with an adolescent, looking into that most private of spaces, their bedroom? Alternatives are unlikely when families are crammed together conducting school and home lives under one roof. What is it like for a depressed adolescent to know that his therapist is telephoning from her own home? Or for a troubled teenage girl, reliant on self-harm to embody her misery, to bring her therapist into her home on a smartphone screen? Decisions continue to need making: despite the impression that time has been suspended, in fact it waits for nobody. An offer of time-limited psychotherapy for a girl of seventeen-and-a-half is paused: can it still be done? The time-frame provided by the therapy model was to fit neatly into the time that remains for her as a CAMHS patient: upon her eighteenth birthday, she will be discharged. Despite the impression that the world has stopped turning, time is marching on. Nothing sums up better the paradox of the crisis for adolescents, or gives the lie more obviously to the notion of shutdown, suspension or postponement. Time is still passing.
All data underlying the results are available as part of the article and no additional source data are required.
This paper was written in the first two weeks after lockdown, when emergency presentations nationally were hugely reduced (BMJ); by the time of publication, it could be anecdotally observed that emergency presentations of adolescents in a state of mental health crisis had increased.
The child and adolescent psychoanalytic psychotherapist Jocelyn Catty reflects on how psychological therapies are positioned during a crisis such as the COVID-19 pandemic. The author questions how the psychoanalytic session can be maintained over online platforms and telephone consultations. Furthermore, Catty offers essential reflections on how the crisis leaves time for emotional and psychological care in a period characterized by the pressure of triage and emergency. The introduction outlines the immediate and dark consequences for young people: their school achievements, mental health, social development, and the opportunity to get adequate treatment. She points to the specific developmental challenges of COVID-19, putting young people at high risk of lagging behind in this important transitional phase into adulthood. The manuscript is well written and contains highly needed questions regarding adolescent mental health during a crisis such as this pandemic. It emphasizes the uncertainty in therapy rooms, and improvised therapy rooms at home, worldwide.
The paper raises the urgency for young people and the need for society to take their situation seriously. Jocelyn Catty fears that only mental health triage will be offered to young people and that a generation will be deprived of the opportunity for treatment. The paper is formulated as a warning, bringing into the discussion some problematic sides of video consultations. The paper is submitted as a research article; however, we read it as an opinion article. Thus, evaluation according to research standards is not applicable. The paper does not provide sufficient details of method and analysis to allow replication by others, nor are any conclusions drawn, as there are no results and the manuscript lacks both qualitative and quantitative data. As an opinion article, however, it is a structured and well-written piece. The manuscript might be more useful to a broad clinical readership if the author moved beyond the rather pessimistic undertone regarding psychotherapy during a crisis and explored alternative perspectives, as there is evidence, as well as clinical experience from the ongoing pandemic, that some young people, in some periods of therapy, might profit even more from video consultations. In addition, video consultations give the therapist an opportunity to follow the young person and offer treatment even when students move elsewhere owing to a change in their school or study situation. Some questions the author may consider: How does the therapist's attitude towards online therapy or telephone consulting affect the therapy delivered on these platforms? How is the therapist marked by the current crisis and shaped by being forced to deliver therapy on alternative platforms? Might it be that the physical distance from the therapist, for some adolescents, facilitates a greater emotional closeness to the psychotherapy, and to the therapist? The manuscript ends rather abruptly, with many important questions to reflect on but no clear take-home message. Overall, this is a timely and much-needed essay. It is written nicely, with rich metaphor and moving examples of how these last months have changed the entire field of psychotherapy, both for patients and for most therapists. The paper might be improved by modifications addressing the detailed comments below, and perhaps by a mention of alternative perspectives to better recognize the complexity of this important matter.
Are all the source data underlying the results available to ensure full reproducibility? No source data required.
Are the conclusions drawn adequately supported by the results? Partly.
'Containment, delay, mitigation': Waiting and care in the time of a pandemic.
This paper was developed in collaboration with colleagues working on the Waiting Times research project (see waitingtimes.exeter.ac.uk). We confirm that we have read this submission and believe that we have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.
This is a provocative and timely article. Catty has used the remarkable interruption of normal CAMHS psychotherapy practice by the COVID-19 crisis to explore and illuminate the importance of time and 'the future' for adolescents and the often overlooked impact of rhythm in the psychotherapeutic process. The importance of the article would seem to lie in its engagement with a rapidly changing (and novel) challenge to practice.
The trade-offs between (masked) face-to-face sessions and digital but unmasked consultations are well noted. Like many writing about COVID-19, Catty appears to accept the 'looming mental health epidemic' it will cause, while observing that referrals initially fell. It might be worthwhile to revisit previous national crises (e.g., WWII), when predictions of mass psychological casualties were found to be baseless. Catty acknowledges that her thinking is based on the first few weeks of lockdown; a follow-up article after a couple of months, comparing her thoughts with what transpires, would be of considerable interest.
Are all the source data underlying the results available to ensure full reproducibility? No source data required.
Are the conclusions drawn adequately supported by the results? Yes.
Competing interests: No competing interests were disclosed. Reviewer expertise: Social psychiatry and the application of psychotherapeutic principles in adult disorders. I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.
key: cord- - hf axps authors: Tull, Matthew T.; Barbano, Anna C.; Scamaldo, Kayla M.; Richmond, Julia R.; Edmonds, Keith A.; Rose, Jason P.; Gratz, Kim L. title: The prospective influence of COVID-19 affective risk assessments and intolerance of uncertainty on later dimensions of health anxiety date: - - journal: J Anxiety Disord doi: . /j.janxdis. . sha: doc_id: cord_uid: hf axps
The COVID-19 pandemic is likely to increase risk for the development of health anxiety. Given that elevated health anxiety can contribute to maladaptive health behaviors, there is a need to identify individual difference factors that may increase health anxiety risk. This study examined the unique and interactive relations of COVID-19 affective risk assessments (worry about risk for contracting/dying from COVID-19) and intolerance of uncertainty to later health anxiety dimensions. A U.S. community sample of participants completed online self-report measures at a baseline assessment (Time 1) and one month later (Time 2). Time 1 intolerance of uncertainty was uniquely associated with the Time 2 health anxiety dimension of body vigilance. Time 1 affective risk assessments and intolerance of uncertainty were uniquely associated with later perceived likelihood that an illness would be acquired and anticipated negative consequences of an illness. The latter finding was qualified by a significant interaction, such that affective risk assessments were positively associated with anticipated negative consequences of having an illness only among participants with mean and low levels of intolerance of uncertainty. Results speak to the relevance of different risk factors for health anxiety during the COVID-19 pandemic and highlight targets for reducing health anxiety risk.
Beginning in late 2019, a severe acute respiratory syndrome coronavirus began to rapidly spread across the globe, becoming an unprecedented public health event (Centers for Disease Control and Prevention [CDC]; World Health Organization [WHO]). On January 30, 2020, the WHO announced that COVID-19 was a public health emergency of international concern, and in March 2020, pandemic status was reached. Currently, over million confirmed cases of COVID-19 have been reported worldwide, and over , people have died from the disease (CDC; WHO). Within the U.S. alone, there have been over . million confirmed cases of COVID-19, with over , mortalities attributed to the virus (CDC).
Due to COVID-19's long incubation period, ease of transmission, high mortality rate (relative to the seasonal flu), and the lack of pharmacological interventions (Linton et al.; Shereen, Khan, Kazmi, Bashir, & Siddique), governments worldwide have had to implement extraordinary physical distancing interventions in an attempt to slow the spread of the virus, reduce COVID-19 mortality rates, and minimize the burden on the health care system. Within the U.S., implementation of stay-at-home orders began in mid-March 2020, with most states having such orders in place by early April (Mervosh, Lu, & Swales). Although no vaccine or established treatments for COVID-19 are currently available, strict stay-at-home orders within the U.S. are beginning to ease. Specifically, all states have taken steps to reopen businesses throughout May 2020, with most moving to rescind stringent stay-at-home orders.
Health anxiety stems from the interpretation of bodily sensations (e.g., muscle soreness, shortness of breath, sore throat) as an indication of illness, infection, or some other threat to physical health (Taylor & Asmundson). At high levels, health anxiety may contribute to increased body vigilance, catastrophic misinterpretation of bodily sensations, and illness behavior (e.g., reassurance seeking on the internet, frequent and unnecessary visits to a doctor or emergency room, excessive collection of personal protective equipment; Asmundson et al.; Asmundson & Taylor; Taylor & Asmundson). In the context of a pandemic, individuals with elevated health anxiety may be particularly likely to experience an increase in awareness and catastrophic misinterpretation of bodily sensations that results in maladaptive safety-seeking behavior (Asmundson & Taylor; Taylor). For example, a recent study found that health anxiety was associated with COVID-19-related anxiety and cyberchondria (i.e., the repeated carrying out of health-related internet searches in an attempt to obtain reassurance or reduce health-related anxiety; Jungmann & Witthöft). Given the potential negative consequences associated with health anxiety-related behaviors in the context of a pandemic (e.g., increased doctor visits may overwhelm the health care system; stockpiling of personal protective equipment may decrease or eliminate its availability to others in need), there is a need to identify individual difference factors that may increase risk for health anxiety in the context of the current COVID-19 pandemic.
One such risk factor for health anxiety may be an individual's perceived likelihood of becoming infected with or dying from COVID-19. Past research has found that individuals with elevated health anxiety are more likely to cognitively overestimate their risk for illness (Hadjistavropoulos, Craig, & Hadjistavropoulos; Marcus & Church). However, health behavior models increasingly highlight the relevance to psychological outcomes of affect-laden risk or vulnerability assessments (vs. more cognitively based assessments in which individuals deliberately estimate the probability or likelihood of a particular health event), emphasizing the relative importance of the extent to which individuals feel that they are at risk for, or worry about, certain health events (i.e., affective risk assessments; Loewenstein, Weber, Hsee, & Welch; Janssen, van Osch, Lechner, Candel, & de Vries; Janssen, Waters, van Osch, Lechner, & de Vries). For example, Janssen, van Osch, et al.
found that affective risk assessments about cancer risk were more strongly related to cancer-specific health anxiety than cognitive risk assessments. Likewise, affective risk assessments have been found to be more highly related to behavioral intentions and health behaviors than cognitive risk assessments (Janssen, van Osch, et al.; Janssen, Waters, et al.). Given evidence that worry states may increase attentional bias to threatening stimuli (Mogg & Bradley; Mogg, Mathews, & Eysenck), individuals who experience greater worry about their perceived risk for COVID-19 infection and mortality may be more likely to notice and attend to bodily sensations that could be indicative of COVID-19 infection (e.g., muscle pain, shortness of breath, cough, chills), resulting in increased health anxiety over time.
Given the unpredictability and variability associated with COVID-19 symptom presentations, as well as the potentially long incubation period associated with this virus (i.e., symptoms may present themselves anywhere from 2 to 14 days following exposure), the association between COVID-19 affective risk assessments and health anxiety may be particularly strong for individuals with high intolerance of uncertainty. Intolerance of uncertainty is broadly defined as a cognitive and emotional tendency to react negatively to uncertain situations or unpredictable future events (Freeston, Rhéaume, Letarte, Dugas, & Ladouceur), and it has been identified as a key factor in the development and maintenance of problematic worry (Buhr & Dugas; Dugas, Freeston, & Ladouceur; Freeston et al.). In addition to demonstrating a relationship with numerous anxiety disorders (Boelen & Reijntjes; Carleton et al.; Gentes & Ruscio; Holaway, Heimberg, & Coles), intolerance of uncertainty has been associated with increased health anxiety and catastrophic health appraisals. Inhibitory facets of intolerance of uncertainty (e.g., diminished functioning in the face of uncertainty) have been shown to predict health anxiety among medically healthy community-dwelling adults (Fergus & Bardeen). Further, intolerance of uncertainty has been found to moderate the relationship between the frequency of internet searches for health information and health anxiety among medically healthy adults in the community (Fergus). Research has also found that intolerance of uncertainty moderates the relationship between catastrophic health appraisals and health anxiety among medically healthy college students, with this relationship emerging as significant only among individuals with high intolerance of uncertainty (Fergus & Valentiner). More recently, Asmundson and Taylor identified intolerance of uncertainty as a potential individual difference factor that may increase risk for COVID-19-related anxiety. In the context of the COVID-19 pandemic, high intolerance of uncertainty may further exacerbate the worry and negative affect associated with perceived risk for COVID-19 infection and mortality, contributing to heightened health anxiety. Moreover, given that intolerance of uncertainty may increase the likelihood that ambiguous experiences are perceived as threatening (Byrne, Hunt, & Chang), high COVID-19 affective risk perceptions may be more likely to prompt catastrophic misinterpretations of benign bodily sensations as an indication of illness.
The goals of the present study were to examine the prospective relations of COVID-19 affective risk assessments and intolerance of uncertainty to health anxiety dimensions one month later, as well as the moderating role of intolerance of uncertainty in the relations of COVID-19 affective risk perceptions to later health anxiety. We predicted that both COVID-19 affective risk perceptions and intolerance of uncertainty would predict later health anxiety dimensions, controlling for health anxiety at baseline. Further, we predicted that the relationship between COVID-19 affective risk assessments and health anxiety would be strongest among individuals with high (vs. mean or low) levels of intolerance of uncertainty.
Participants were a nationwide community sample of adults from states across the U.S. who completed a prospective online study of health and coping in response to the COVID-19 pandemic through an internet-based platform (Amazon's Mechanical Turk; MTurk). Participants completed an initial assessment (Time 1) in March and April 2020, and a follow-up assessment (Time 2) approximately one month later, between April and May 2020. The study was posted to MTurk via CloudResearch (cloudresearch.com), an online crowdsourcing platform connected to MTurk that allows additional data collection features (e.g., creating selection criteria; Chandler, Rosenzweig, Moss, Robinson, & Litman; Litman, Robinson, & Abberbock). MTurk is an online labor market that provides "workers" with the opportunity to complete different tasks in exchange for monetary compensation, such as completing questionnaires for research. Data provided by MTurk-recruited participants have been found to be as reliable as data collected through more traditional methods (Buhrmester, Kwang, & Gosling). Likewise, MTurk-recruited participants have been found to perform better on attention check items than college student samples (Hauser & Schwarz) and comparably to participants completing the same tasks in a laboratory setting (Casler, Bickel, & Hackett). Studies also show that MTurk samples have the advantage of being more diverse than other internet-recruited or college student samples (Buhrmester et al.; Casler et al.). For the present study, inclusion criteria consisted of (1) U.S. residency and (2) at least a % approval rating as an MTurk worker.
% reporting an income of > $ , . finally, % of participants reported having a current medical condition (e.g., diabetes, hypertension, asthma) that would increase risk of complications from a covid- infection and . % reported living alone. across both assessments, few participants j o u r n a l p r e -p r o o f health anxiety during covid- reported having sought out testing for covid- ( %) or having a confirmed covid- infection ( . %). a demographic form was completed by all participants at the time and time assessments. information collected from the demographic form included age, sex, gender, racial/ethnic background, income level, highest level of education attained, employment status, the number of people in the household, state of residence, current medical conditions that could increase risk for susceptibility to and/or complications from covid- , whether participants had sought out testing for covid- , and whether participants had been infected with covid- . covid- affective risk was assessed at time using a -item self-report measure specifically created for this study. participants responded to questions about covid- -related worry about risk (i.e., "how worried are you about your level of risk…") in three domains: (a) contracting covid- , (b) dying from covid- , and (c) spreading covid- to others (should they have it). participants responded to each item using a -point likert-type scale ranging from (not at all worried) to (extremely worried). research using similar self-report items (e.g., klein, ; rose, ) has shown that affective risk assessments are highly correlated with behavioral intentions and health behaviors. given that few participants in this sample reported having a confirmed covid- infection, as well as our interest in evaluating personal affective risk assessments (vs. assessments of others' risks), only the items pertaining to contracting and dying from covid- were used. these items were summed to create a covid- affective risk index. internal consistency was acceptable in this sample (α = . ). the intolerance of uncertainty scale-short form (ius- ; carleton, norton, & asmundson, ) was used to assess intolerance of uncertainty at the time assessment. the ius- is a -item measure that assesses prospective and inhibitory anxiety. this scale was adapted from the -item intolerance of uncertainty scale (freeston et al., ) that was originally designed to measure six elements related to the inability to withstand uncertainty (i.e., emotional and behavioral consequences of being uncertain, beliefs that uncertainty reflects one's character, expectations that the future is predictable, frustration when the future is not predictable, efforts aimed at controlling the future, and inflexible responses during uncertain situations). example items include, "a small unforeseen event can spoil everything, even with the best of planning," and "i can't stand being taken by surprise." participants rate the extent to which they agree with each item on a -point likert-type scale ( = "not at all characteristic of me;" = "a little characteristic of me;" = "somewhat characteristic of me;" = "very characteristic of me;" = "entirely characteristic of me"). for the present study, responses to each item were summed to create an overall index of intolerance of uncertainty, with possible scores ranging from - and higher scores reflecting greater intolerance of uncertainty. although carleton et al. 
( ) found that the ius- has a stable two-factor structure, recent studies have demonstrated that the majority of the measure's variance is accounted for by a single latent variable; consequently, it is recommended that a single, overall ius- score is used (hale et al., ; lauriola, mosca, & carleton, ; shihata, mcevoy, & mullan, ) . there is evidence for the reliability and construct validity of the ius- within non-clinical and community samples (carleton et al., ; carleton, collimore, & asmundson, ; lauriola et al., ) . internal consistency for this measure in this sample was acceptable (α = . ). the short health anxiety inventory (shai; salkovskis, rimes, warwick, & clark, ) is an -item measure that was used to assess different dimensions of health anxiety at the time and time assessments. the shai was modified to assess health j o u r n a l p r e -p r o o f health anxiety during covid- anxiety symptoms over the past week (vs. the past -months on the original measure). found that the shai assesses three dimensions of health anxiety: (a) illness likelihood (i.e., the perceived likelihood that a serious illness will be acquired, as well as intrusive thoughts about one's health; items); (b) body vigilance (i.e., attention to bodily sensations or changes in bodily sensations; items); and (c) illness severity (i.e., anticipated burden, impairment, or negative consequences associated with having a serious illness; items). depression and anxiety symptom severity at time were assessed using the -item version of the depression anxiety stress scales (dass- ; lovibond & lovibond, ) . the current study utilized the depression and anxiety symptom severity subscales as covariates. the dass- is a self-report measure that assesses the unique symptoms of depression, anxiety, and stress. participants rate the items on a -point likert-type scale indicating how much each item applied to them in the past week ( = "did not apply to me at all;" = "applied to me some of the time;" = "applied to me a good part of the time;" = "applied to me most of the time"). this measure has demonstrated good reliability and validity (antony, bieling, cox, enns, & j o u r n a l p r e -p r o o f health anxiety during covid- swinson, ; roemer, ) . internal consistency of the depression (α = . ) and anxiety (α = . ) symptom severity subscales in this sample were acceptable. all procedures received prior approval from the university of toledo's institutional review board. to ensure that the study was not being completed by a bot (i.e., an automated computer program used to complete simple tasks), participants responded to a completely automatic public turing test to tell computers and humans apart (captcha) at the time assessment prior to providing informed consent. participants were also informed on the consent form that "…we have put in place a number of safeguards to ensure that participants provide valid and accurate data for this study. if we have strong reason to believe your data are invalid, your responses will not be approved or paid and your data will be discarded." initial data were collected in blocks of nine participants at a time and all data, including attention check items and geolocations, were examined by researchers before compensation was provided. 
attention check items included three explicit requests embedded within the questionnaires (e.g., "if you are paying attention, choose ' ' for this question"), two multiple-choice questions (e.g., "how many words are in this sentence?"), a math problem (e.g., "what is plus ?"), and a free-response item (e.g., "please briefly describe in a few sentences what you did in this study"). participants who failed one or more attention check items were removed from the study (n = of completers of the time assessment). workers who completed the initial assessment and whose data were considered valid (based on attention check items and geolocations; n = ) were compensated $ . for their participation and invited to participate in the one-month follow-up assessment. health anxiety during covid- one-month following completion of the time assessment, participants were contacted via cloudresearch (litman et al., ) to complete the time assessment. this online platform allows researchers to email participants a link to follow-up assessments while maintaining anonymity (i.e., study personnel never see email addresses) by using their amazon worker id numbers (provided by mturk). of the participants who completed the initial assessment, % (n = ) completed the follow-up assessment. there were no significant differences in time intolerance of uncertainty or health anxiety dimensions between participants who completed (vs. did not complete) the follow-up assessment (ps > . ); however, participants procedures for assessing the validity of the time data (i.e., examining attention check items and geolocations) were similar to those used for the time assessment. participants who failed two or more attention check items at the time assessment were removed from the study (n = ); the remainder were compensated $ . for their participation. in addition, two participants were excluded for non-reconcilable differences in demographic data between the time and time assessments, and additional participants were excluded for incomplete data on the primary variables of interest, resulting in a final sample size of . results of the hierarchical linear regression analyses examining the main and interactive effects of time covid- affective risk and intolerance of uncertainty on time health anxiety dimensions are presented in table . the overall model was significant, accounting for % of the variance in the time illness likelihood dimension of health anxiety, f ( , ) = . , p < . , f = . . the addition of time covid- affective risk and intolerance of uncertainty in the second step of the model accounted for additional significant variance in time illness likelihood above and beyond time illness likelihood, Δr = . , f ( , ) = . , p < . , f = . , with both variables demonstrating a significant unique positive association with time illness likelihood. the addition of the interaction term did not significantly improve the model, Δr = . , f ( , ) = . , p = . , f = . . the addition of the interaction term did not significantly improve the model, Δr = . , f ( , ) = . , p = . , f = . . 
the overall model was significant, accounting for % of the variance in the time to ensure that the significant interaction could not be attributed to other demographic or psychiatric variables, the regression analysis was rerun with the following covariates included in this study sought to examine the unique and interactive prospective relations of covid- affective risk assessments (i.e., worry about risk for contracting or dying from and intolerance of uncertainty to health anxiety one month later. hypotheses were partially supported. first, as predicted, covid- affective risk assessments and intolerance of uncertainty at time were uniquely associated with later perceived likelihood that a serious illness would be acquired (i.e., illness likelihood subscale on the shai) and anticipated negative consequences of having a serious illness (i.e., illness severity subscale on the shai). these findings are consistent with past research demonstrating relationships between health anxiety and both intolerance of uncertainty (e.g., and concerns regarding perceived vulnerability to disease (e.g., duncan, schaller, & park, ). however, only intolerance of uncertainty at time was found to be uniquely associated with time body vigilance. the items assessing body vigilance on the shai focus on bodily sensations in general or aches and pains. although worry and anxiety regarding risk for contracting or dying from covid- j o u r n a l p r e -p r o o f health anxiety during covid- would be expected to amplify sensitivity to bodily sensations (consistent with a seek to avoid process; barlow, ) , it is possible that this process might be more evident for bodily sensations that are specifically associated with covid- infection (e.g., fever, shortness of breath, headache). however, as an individual difference factor that is not unique to covid- , intolerance of uncertainty may be more likely to increase awareness of bodily sensations in general to identify any potential sources of health threat, thus increasing a sense of certainty, control, or predictability. contrary to hypotheses, intolerance of uncertainty was not found to moderate the association between time covid- affective risk assessments and time illness likelihood or body vigilance. in addition, although intolerance of uncertainty was found to moderate the association between covid- affective risk assessments and time illness severity, the nature of this interaction was different than what was predicted. specifically, time covid- affective risk assessments were significantly positively associated with time illness severity only at mean and low levels of intolerance of uncertainty. at high levels of intolerance of uncertainty, no significant association was found between covid- affective risk assessments and health anxiety. this finding highlights the multiple ways in which individuals may develop anxiety surrounding the potential negative consequences associated with illness. even in the absence of an established vulnerability for the development of health anxiety (i.e., intolerance of uncertainty), elevated worry about risk for contracting or dying from covid- appears to be sufficient for the greater anticipation of negative consequences associated with having an illness. the experience of frequent worry thoughts surrounding risk for covid- infection or mortality may increase health anxiety by contributing to the increased generation of potential catastrophic outcomes that could occur if one were infected with the virus. 
indeed, in other health conditions j o u r n a l p r e -p r o o f (e.g., irritable bowel syndrome), worry has been found to contribute to increased suffering through catastrophizing (lackner & quigley, ) . however, among individuals high in intolerance of uncertainty, covid- affective risk assessments seem less relevant to later health anxiety, providing further evidence that intolerance of uncertainty may be a strong risk factor for the development or exacerbation of health anxiety. such a finding is consistent with previous research showing that intolerance of uncertainty predicts health anxiety above and beyond other established anxiety risk factors (e.g., anxiety sensitivity, negative affect; fergus & bardeen, ) . study limitations warrant consideration. first, all outcomes were assessed using selfreport questionnaires, which have the potential to be influenced by social desirability biases or recall difficulties. in addition, we used an unpublished, two-item measure developed specifically for the purposes of this study to assess covid- affective risk assessments. although this measure demonstrated associations with our other variables in the expected direction, it is possible that our measure did not provide a comprehensive evaluation of covid- affective risk assessments. at the time this study began, other validated measures of covid- affective risk assessments were not available. however, since that time, several measures have been published that may provide a better assessment of covid- affective risk assessments or the stress and anxiety associated with covid- more generally, such as the covid stress scales (taylor et al., ) and the coronavirus anxiety scale . in addition, our measures of intolerance of uncertainty and health anxiety were not specific to covid- ; thus, findings cannot speak to the extent to which intolerance of uncertainty surrounding the covid- pandemic in particular influences anxiety surrounding the experience and consequences of covid- related bodily sensations. in addition, given our recruitment methods and sample j o u r n a l p r e -p r o o f health anxiety during covid- (i.e., self-selected mturk workers), results may also not generalize to the larger u.s. population, adults in other countries, or particularly vulnerable populations (e.g., individuals with chronic medical conditions; health care workers; hospitalized patients). replication of our findings is needed within other samples. in addition, although covid- affective risk assessments and intolerance of uncertainty were found to predict later health anxiety, it is important to note that average health anxiety levels at time were not at clinical levels (mean shai scores among individuals with hypochondriasis = . ; alberts et al., ) . moreover, it is not clear if the levels of health anxiety observed in this study are associated with engagement in adaptive or maladaptive health behaviors. health anxiety is conceptualized as a dimensional variable (taylor & asmundson, ) , and moderate levels of health anxiety may be functional in the context of a pandemic, increasing motivation to engage in protective behaviors such as social distancing, hand washing, and wearing a mask when outside of the home. studies employing multiple follow-up assessments are needed to determine whether the health anxiety stemming from covid- affective risk assessments and intolerance of uncertainty predicts later engagement in adaptive or maladaptive health behaviors. 
likewise, research is needed to examine the impact of the covid- pandemic on health anxiety within more vulnerable populations, such as individuals with pre-existing illness anxiety disorder or generalized anxiety disorder. despite limitations, findings lend support to the hypothesis that the covid- pandemic will result in elevated health anxiety (asmundson & taylor, b) , and add to the growing body of literature on the mental health consequences of this pandemic (cao et al., ; gonzález-sanguino et al., ; harper et al., ; huang & zhao, ; jungmann & witthöft, ; lee et al., ; mckay et al., ; moghanibashi-mansourieh, ; zhang et al., ). specifically, our findings demonstrate that covid- affective risk assessments and intolerance of uncertainty are uniquely associated with various dimensions of health anxiety one month later. moreover, in addition to providing further evidence that high levels of intolerance of uncertainty may increase risk for later health anxiety, results highlight one pathway (i.e., affective-based risk assessments) through which individuals without high levels of intolerance of uncertainty may still be susceptible to later health anxiety during this time. specifically, the extent to which individuals feel that they are at risk for covid- infection and death was associated with elevated health anxiety one-month later among individuals with mean and low levels of intolerance of uncertainty. as such, findings highlight a number of potential targets for preventing the development of severe health anxiety that could lead to maladaptive behaviors during the current pandemic. for example, acceptance-and mindfulness-based behavioral interventions (e.g., acceptance-based behavioral therapy for generalized anxiety disorder; roemer, orsillo, & salters-pedneault, ) may be particularly useful for addressing worry about risk for contracting or dying from covid- . psychoeducation on effective behaviors for mitigating risk for covid- infection may also reduce worry, and ultimately health anxiety, by modifying risk assessments and increasing a sense of control. cognitive-behavioral interventions that specifically target intolerance of uncertainty (e.g., hebert & dugas, ; ladouceur et al., ) may also have utility in reducing risk for future health anxiety during this particularly stressful and indeed uncertain time. j o u r n a l p r e -p r o o f the short health anxiety inventory: psychometric properties and construct validity in a non-clinical sample health anxiety, hypochondriasis, and the anxiety disorders the short health anxiety inventory: a systematic review and meta-analysis psychometric properties of the -item and -item versions of the depression anxiety stress scales in clinical groups and a community sample health anxiety: current perspectives and future directions coronaphobia: fear and the -ncov outbreak how health anxiety influences responses to viral outbreaks like covid- : what all decision-makers, health authorities, and health care professionals need to know anxiety and its disorders intolerance of uncertainty and social anxiety investigating the construct validity of intolerance of uncertainty and its unique relationship with worry amazon's mechanical turk: a new source of inexpensive, yet high-quality, data? 
comparing the roles of ambiguity and unpredictability in intolerance of uncertainty
the psychological impact of the covid- epidemic on college students in china
"it's not just the judgements-it's that i don't know": intolerance of uncertainty as a predictor of social anxiety
increasingly certain about uncertainty: intolerance of uncertainty across anxiety and depression
fearing the unknown: a short version of the intolerance of uncertainty scale
separate but equal? a comparison of participants and data gathered via amazon's mturk, social media, and face-to-face behavioral testing
coronavirus (covid- )
online panels in social science research: expanding sampling methods beyond mechanical turk
intolerance of uncertainty and problem orientation in worry
perceived vulnerability to disease: development and validation of a -item self-report instrument
cyberchondria and intolerance of uncertainty: examining when individuals experience health anxiety in response to internet searches for medical information
anxiety sensitivity and intolerance of uncertainty: evidence of incremental specificity in relation to health anxiety
intolerance of uncertainty moderates the relationship between catastrophic health appraisals and health anxiety
the consequences of covid- pandemic on mental health and implications for clinical practice
why do people worry
a meta-analysis of the relation of intolerance of uncertainty to symptoms of generalized anxiety disorder, major depressive disorder, and obsessive-compulsive disorder
mental health problems and social media exposure during the covid- outbreak
cognitive and behavioral responses to illness information: the role of health anxiety
resolving uncertainty about the intolerance of uncertainty scale- : application of modern psychometric strategies
functional fear predicts public health compliance in the covid- pandemic
attentive turkers: mturk participants perform better on online attention checks than do subject pool participants
introduction to mediation, moderation, and conditional process analysis: a regression-based approach
behavioral experiments for intolerance of uncertainty: challenging the unknown in the treatment of generalized anxiety disorder
a comparison of intolerance of uncertainty in analogue obsessive-compulsive disorder and generalized anxiety disorder
generalized anxiety disorder, depressive symptoms and sleep quality during covid- outbreak in china: a web-based cross-sectional survey
thinking versus feeling: differentiating between cognitive and affective components of perceived cancer risk
the importance of affectively-laden beliefs about health risks: the case of tobacco use and sun protection
health anxiety, cyberchondria, and coping in the current covid- pandemic: which factors are related to coronavirus anxiety?
the shape of and solutions to the mturk quality crisis
comparative risk estimates relative to the average peer predict behavioral intentions and concern about absolute risk
pain catastrophizing mediates the relationship between worry and pain suffering in patients with irritable bowel syndrome
efficacy of a cognitive-behavioral treatment for generalized anxiety disorder: evaluation in a controlled clinical trial
hierarchical factor structure of the intolerance of uncertainty scale short form (ius- ) in the italian version
clinically significant fear and anxiety of covid- : a psychometric examination of the coronavirus anxiety scale
incubation period and other epidemiological characteristics of novel coronavirus infections with right truncation: a statistical analysis of publicly available case data
turkprime.com: a versatile crowdsourcing data acquisition platform for the behavioral sciences
risk as feelings
manual for the depression anxiety stress scales
are dysfunctional beliefs about illness unique to hypochondriasis?
anxiety regarding contracting covid- related to interoceptive anxiety sensations: the moderating role of disgust propensity and sensitivity
see how all states are reopening
see which states and cities have told residents to stay at home
attentional bias in generalized anxiety disorder versus depressive disorder
attentional bias to threat in clinical anxiety states
assessing the anxiety level of iranian general population during covid- outbreak
suicide mortality and coronavirus disease - a perfect storm
practitioner's guide to empirically based measures of anxiety
efficacy of an acceptance-based behavior therapy for generalized anxiety disorder: evaluation in a randomized controlled
are direct or indirect measures of comparative risk better predictors of concern and behavioural intentions?
the health anxiety inventory: development and validation of scales for the measurement of health anxiety and hypochondriasis
covid- infection: origin, transmission, and characteristics of human coronaviruses
a bifactor model of intolerance of uncertainty in undergraduate and clinical samples: do we need to reconsider the two-factor model? psychological assessment
the psychology of pandemics: preparing for the next global outbreak of infectious disease
treating health anxiety: a cognitive-behavioral approach
development and initial validation of the covid stress scales
mental health and psychosocial considerations during the covid- outbreak
rolling updates on coronavirus disease (covid- )
use of hydroxychloroquine and chloroquine during the covid- pandemic: what every clinician should know
the differential psychological distress of populations affected by the covid- pandemic
[table: correlations among covid- affective risk assessments, intolerance of uncertainty, and the short health anxiety inventory subscales across time points; coefficients omitted in the source. affective risk = covid- affective risk assessments; ius = intolerance of uncertainty scale; illness likelihood = short health anxiety inventory illness likelihood subscale; body vigilance = short health anxiety inventory body vigilance subscale; illness severity = short health anxiety inventory illness severity subscale]
key: cord- - tgtstd authors: ferranti, erin p.; wands, lisamarie; yeager, katherine a.; baker, brenda; higgins, melinda k.; wold, judith lupo; dunbar, sandra b. title: implementation of an educational program for nursing students amidst the ebola virus disease epidemic date: - - journal: nursing outlook doi: . /j.outlook. . .
sha: doc_id: cord_uid: tgtstd abstract background the global ebola virus disease (evd) epidemic of / prompted faculty at emory university to develop an educational program for nursing students to increase evd knowledge and confidence and decrease concerns about exposure risk. purpose the purpose of this article is to describe the development, implementation, and evaluation of the evd just-in-time teaching (jitt) educational program. methods informational sessions, online course links, and a targeted, self-directed slide presentation were developed and implemented for the evd educational program. three student surveys administered at different time points were used to evaluate the program and change in students' evd knowledge, confidence in knowledge, and risk concern. discussion implementation of a jitt educational program effectively achieved our goals to increase evd knowledge, decrease fear, and enhance student confidence in the ability to discuss evd risk. these achievements were sustained over time. conclusion jitt methodology is an effective strategy for schools of nursing to respond quickly and comprehensively during an unanticipated infectious disease outbreak. the ebola virus disease (evd) epidemic of / presented atlanta-area health care providers, health care professions schools, and students a unique challenge to quickly prepare for the care of evd-infected aid workers from african countries affected by this disease. the decision to accept these patients resulted in the activation and expansion of the serious communicable diseases unit (scdu) at emory university hospital (feistritzer, hill, vanairsdale, & gentry, ) . intense public interest followed the decision and resulted in tremendous media coverage. between july and september , , > , stories went out on broadcast and > , print stories were written mentioning emory and ebola ("telling the story," ). some of the attention heightened the fear and anxiety associated with caring for individuals in our community because of the highly infectious nature of evd. people spoke out on social media, fearing that our caring for these patients put our larger community at risk. in response to the public outcry, susan grant, the chief nurse for emory healthcare wrote in the washington post, "we can either let our actions be guided by misunderstanding, fear, and self-interest or we can lead by knowledge, science and compassion. we can fear, or we can care." (grant, ) . the emory university nell hodgson woodruff school of nursing (nhwsn) is located on the same campus as emory university hospital and is also adjacent to the centers for disease control and prevention (cdc). both the cdc and emory healthcare are key partners for the clinical and public health education of our student nurses. the treatment of patients with evd at emory university hospital, combined with our cdc colleagues' response to the evd epidemic in africa and the status of atlanta being a major international transportation hub, necessitated a swift response by key public health faculty and administration of the nhwsn to educate our students and fellow faculty colleagues and staff members about evd. evd education needed to include modes of transmission, risk for exposure and transmission, signs and symptoms of infection, therapy, and counseling techniques to allay fear and anxiety associated with living in atlanta and working or training within the health care facilities treating evd-infected patients. 
it was our goal to increase evd knowledge, decrease fear, and enhance students' confidence in their ability to discuss evd risk with family and friends. just-in-time teaching (jitt) is an online educational approach that can be used to rapidly disseminate important information in an efficient and effective way to address learning needs during a crisis (chotani et al., ) . jitt approaches have been used to quickly disseminate information after large-scale disasters and public health epidemics, such as the global outbreak of severe acute respiratory syndrome (sars) that occurred in the early s (o'connor et al., ; yang et al., ) . providing information expeditiously during complex humanitarian emergencies, such as a disease outbreak, is essential to quelling the fears of nursing students, who may encounter affected patients during clinical rotations, and communities who are uncertain about essential facts and who might be influenced by media coverage that at times dwells on unpleasant details and fuels the public's apprehensions (stirling, harmston, & alsobayel, ; "teaching in a time," ) . to respond to the emergent evd epidemic, we designed a comprehensive and targeted approach to educate our students. the purpose of this article is to describe the development, implementation, and evaluation of this educational effort. early in the fall semester of , we arranged for lunch-and-learn presentations, inviting all community members to learn more about the evd outbreak in africa. we invited colleagues from the cdc to present information about their experiences in sierra leone, one of the evd-affected countries. interested students and faculty attended other educational events at our university's school of public health. specific to information about the evd patients being cared for at emory's scdu, many attended a town hall meeting held jointly with the medical school where attendees heard directly from the scdu team that was caring for the individuals with evd. in addition to the opportunities provided to learn more about evd across our campus, the faculty decided that because our undergraduate nursing students were engaging in clinical training within the health care facility caring for patients with evd, a more comprehensive and targeted approach to educate our students was needed. additional goals for providing education were to increase student knowledge of evd risks and ways to mitigate exposure, decrease fear of evd, and enhance students' confidence in discussing evd with others, including family, friends, and patients. faculty course coordinators of classes addressing professional role content for each cohort of undergraduate students created an ebola information page on their electronic course sites. the ebola information content included links to cdc, emory healthcare, and other atlanta-area health care evd policies and guidelines. in addition, a -slide powerpoint presentation was developed using cdc guidelines and the newly developed emory healthcare ebola preparedness protocols. the presentation, posted on the course sites, included an overview of the evd outbreak, evd facts, modes of transmission, signs and symptoms of early and later stage infection, emory university's evd-specific travel policies, emory healthcare's publicly available ebola preparedness protocols, and cdc's published "frequently asked questions" and answers. the presentation was designed for students' self-directed viewing and learning.
participants targeted for this educational program were all undergraduate student nurses enrolled in our prelicensure bachelor of science in nursing (bsn) program at nhwsn in fall and spring . inclusion criteria included all enrolled undergraduate students; there were no exclusion criteria. sample size was determined by the size of the enrolled undergraduate student population. this target group consisted of a total of undergraduate students who were % women, % white, % black, % asian, % hispanic, and % multiracial/ethnic or undeclared. consultation with the emory university institutional review board confirmed that this project met exemption criteria. early in december , the project's data manager (b.b.) invited all students in the bsn program to participate in the evd self-directed education program described previously, via e-mail. the data manager did not serve in a faculty capacity to any of the students at the time. the e-mail stated that faculty were interested in students' perceptions about educational information that had already been provided to them about evd and their experiences and level of comfort with discussing evd with others, such as family members. the e-mail invitation described that the new education program included completing a pretest, viewing a powerpoint slideshow, and completing two post-tests (immediately after the training and five weeks later) and that participation was completely voluntary. the pre- and post-tests were identical. students were enrolled in one of three classes, and faculty members teaching those classes agreed to offer a small incentive in the form of bonus points to students who participated in the study. application of the bonus points was determined by the individual faculty member for each course. the invitation e-mail included a link to the pretest, which was hosted on the research electronic data capture (redcap) platform. redcap allowed for tracking participants for comparison on pre- and post-test results by linking student identification numbers that were loaded into the system. time to complete the pretest was estimated to be about min. the pretest link remained active for days after the initial e-mail was sent. the pre-test survey consisted of three demographic questions, one item related to who they may have already provided any evd information to, two questions related to the student's confidence level providing education to others about evd, one item asking if they felt they needed additional evd training, knowledge questions, two questions related to the student's level of concern about their risk of evd, and one question about attendance at recent campus educational programs about evd. a few of the students either did not receive the initial e-mail or did not receive a valid link to the pre-test; these issues were resolved by sending e-mails to these students individually. the survey link was e-mailed again to students who experienced technological difficulties; no duplicate surveys occurred as a feature of the redcap system. three days after the pre-test survey closed, faculty of the students' classes posted the powerpoint slideshow on their online course sites, and participants were directed where to find the slideshow for viewing. participants did not have access to the powerpoint slideshow before completing the pre-test. time to completely view the slideshow was estimated to be about minutes. the powerpoint slideshow was available to view for three days.
students who participated in the pretest were invited via e-mail to complete the post-test. the post-test was also hosted on the redcap platform, and survey responses were linked by student identification numbers. the post-test link remained active for three days. after the post-test, students were on an academic break for approximately five weeks. when classes resumed in january , students who had participated in the pre- and post-test surveys were invited to participate in a follow-up post-test. the purpose of this follow-up post-test was to assess the retention of knowledge and any changes in confidence in addressing evd concerns after a major school break. the redcap system linked survey results across the three tests with student identification numbers. the data manager provided student identification numbers to faculty for the purpose of awarding bonus credit. bonus credit was awarded to students who completed all three surveys. statistical analyses were performed using the statistical package for the social sciences, version (spss, chicago, il). statistical significance was set at p < . a priori. data were reviewed for completeness. any skipped or missing items were summarized and reported for items not related to the knowledge test. missing items on the knowledge test portion were treated as incorrect responses. most of the data collected were categorical and ordinal in nature; thus, most items were summarized using percentages and frequencies. age was normally distributed, so the mean and standard deviation were reported. descriptive statistics were compiled for all student characteristics, demographics, knowledge test scores, training needs, comfort, and confidence items. knowledge scores were computed as the percentage correct out of the evd content items. the two concern items and three confidence items were scaled with four response ordinal categories (not at all, somewhat, very, and extremely). the two concern items were averaged together, and the three confidence items were averaged together. reliability was assessed for each of the averaged items using standardized cronbach alpha and the split-half spearman-brown formula (eisinga, grotenhuis, & pelzer, ) . average concern was significantly right skewed and still ordinal in nature and was dichotomized into subjects who were "not at all" concerned (score = ) vs. those who were somewhat to extremely concerned (scores > ). average confidence was also right skewed and still ordinal in nature and was also dichotomized into subjects who were "not at all" or "somewhat" confident (scores ≤ ) and those who were "very" or "extremely" confident (scores > ). multilevel modeling (mlm) instead of repeated measures analysis of variance was used to test for changes over time for the three time points for the continuous knowledge scores because mlm uses all available data and adjusts for missing data over time (hedeker & gibbons, ) . for the dichotomized items for concern, confidence, and needing additional training, generalized multilevel modeling (gzmlm) was used for these binary response variables with logit link function (e.g., logistic regression) to test for changes over time. age was also included as a covariate since older students may have had higher confidence or comfort levels. for consistency, age was included in all the mlm/gzmlm models. for all models, pairwise comparisons were made between responses at all three time points using sidak type i error rate-adjusted p values ("ibm spss statistics for windows, version . ," ).
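to make two of the computations above concrete, the following is a minimal sketch in python (not the authors' spss syntax; the data and variable names are hypothetical): the split-half spearman-brown reliability used for the two-item scales, and the sidak adjustment applied to the pairwise p values.

import numpy as np

rng = np.random.default_rng(1)

# hypothetical responses to the two "concern" items (4-point ordinal scale)
item1 = rng.integers(1, 5, size=200)
item2 = np.clip(item1 + rng.integers(-1, 2, size=200), 1, 4)

# split-half spearman-brown reliability of a two-item scale from the
# inter-item correlation r: 2 * r / (1 + r)
r = np.corrcoef(item1, item2)[0, 1]
spearman_brown = 2 * r / (1 + r)
print(f"inter-item r = {r:.2f}, spearman-brown reliability = {spearman_brown:.2f}")

# sidak type i error rate adjustment for m pairwise tests: p_adj = 1 - (1 - p)^m
p_raw = np.array([0.01, 0.04, 0.30])  # hypothetical t1-t2, t1-t3, t2-t3 p values
p_sidak = 1 - (1 - p_raw) ** len(p_raw)
print("sidak-adjusted p values:", np.round(p_sidak, 3))

the multilevel models themselves could be fitted with, e.g., statsmodels' mixedlm for the continuous knowledge scores; that would be an analogue of the spss models described above, not a reproduction of them.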
pairwise differences between t and t evaluated the initial improvements immediately after training, between t and t evaluated the longer term improvements from baseline, and between t and t evaluated the sustained or retained effects from the training. baseline surveys were completed by ( %) of eligible undergraduate students. the age of students participating in this study ranged from to years with an average age of . (standard deviation = . ) years. the majority were female ( . %) with . % juniors, . % seniors, and . % accelerated undergraduates. when asked who the students had previously provided any information about the ebola outbreak to, the majority (> %) said friends and family, and slightly less than half ( %) said fellow students (table ) . of the who completed the baseline surveys, ( . %) completed the immediate post-test survey and ( . %) completed the final post-test survey. the who did not complete all three surveys were not significantly different from the who completed all three in age, gender, baseline knowledge, concern, confidence, or wanting additional training. initially, the students scored . ( . ) on the knowledge test and improved immediately after training with scores averaging . ( . ) which was significantly higher than baseline ( p < . ; table , figure ). their knowledge scores were well retained by the third time point with average scores of . ( . ), with no significant loss in knowledge scores from time ( p = . ). [table notes: * evd knowledge test scores analyzed using multilevel modeling (mlm). † dichotomous outcomes analyzed using generalized multilevel modeling (gzmlm) with binary responses with logit link functions; the categories indicated by the counts and percents reported were the target category for the binary response logit link gzmlm. ‡ one subject skipped answering the concern items at time .] at baseline, only half ( . %) were not at all concerned about their risk (averaged from concern as a health care provider and as an atlanta city resident). these two items showed good internal consistency and reliability with a standardized cronbach alpha and split-half spearman-brown coefficient of . . this percentage of students not at all concerned did increase significantly over time with improvements from baseline to time ( p = . ) with . % not at all concerned by time (table , figure ). when looking at the two individual concern items (concern as a health care provider and concern as a resident), the levels of not at all concerned were consistently lower for risk as a health care provider, but both showed steady increases over all three time points (table ) . at baseline, slightly more than half ( . %) stated they did need additional training about evd, but this decreased significantly over time with significant decreases from time to time ( p = . ) and from time to time ( p = . ) with overall decreases from time to time ( p < . ) down to only . % wanting additional training by time (table , figure ). at baseline, only . % of the students were very or extremely confident in their average ability to discuss evd with family/friends, answer questions about evd transmission, and convey a calm message about the general public risk for evd. these three items showed high internal consistency and reliability with a
standardized cronbach alpha of . . [figure note: estimated percentages (means and % confidence intervals) from multilevel models; all estimated means and % confidence intervals were adjusted for age as a covariate in the multilevel models. test scores were analyzed using multilevel models (mlms) and the other three outcomes (additional training [% yes], average concern [% not at all], and confidence average [% very or extremely]) were analyzed using generalized multilevel models (gzmlms) with binary responses and logit link functions. p values are provided for each pairwise comparison between the three time points and were adjusted using sidak pairwise error rate correction.] the average confidence increased significantly from baseline to immediate post-test to . % ( p < . ), but this confidence level decreased slightly by the final post-test at time down to . %, which was not significantly less than time ( p = . ) and was still significantly higher than baseline ( p < . ; table , figure ). when looking at the individual confidence level items, the lowest confidence levels were for discussing evd with family/friends and answering questions about evd transmission. the confidence levels for conveying a calm message about the general public's risk for evd were consistently higher across time (table ) . a final detailed summary of the percentage of correct answers to the individual knowledge test items at all three time points is provided in table . reviewing this table shows that the weakest knowledge areas were for knowing how the ebola virus infection is diagnosed (item with baseline knowledge at . %), knowing how long protection lasts for people who recover from ebola (item with baseline knowledge at . %), and knowing whether you can still contract the virus from a person not showing symptoms (item with baseline knowledge at . %). these three items all showed improvement from baseline to immediate post-test at time , but items and showed the poorest retention by the third time point at the final post-test. two additional items with lower levels of knowledge at baseline were item with only . % knowing if supportive care was currently the only treatment available for ebola patients and item with only . % knowing how health care responders returning from other countries should be monitored. however, after training, both of these items showed significant and sustained improvement with knowledge levels above %. the remaining knowledge test items showed reasonable levels of knowledge at baseline above % that improved to % and higher over time. the west african evd epidemic of / that brought ebola-infected patients to the metro-atlanta area and to a hospital in which our student nurses were completing clinical education rotations provided a unique opportunity for the faculty at the nhwsn to prepare our student nurses for a major, fear-provoking public health event and to test the effectiveness of an educational program. there has been little study devoted to the response of health professional schools, particularly schools of nursing, in the event of an unforeseen infectious disease outbreak such as evd (stirling et al., ) . furthermore, there is little guidance for how to swiftly and effectively prepare nursing students for such events, both in their roles as patient providers and community educators. the implementation of a jitt educational program effectively achieved our goals to increase evd knowledge, decrease fear, and enhance students' confidence in their ability to discuss evd risk. furthermore, these achievements were sustained over time.
this demonstration educational program highlights the effectiveness of self-directed learning, especially in times of a threatening disease outbreak. limitations to this educational program included a substantial decrease in the number of student participants who completed the final survey from the baseline measurement time point. this decrease aligned with the level of course credit or bonus points provided to students, indicating greater student motivation to complete the full program when credit was awarded in meaningful ways to students. giving extra credit points could also be a limitation of the program findings as it may not be representative of students who did not need extra credit (i.e., students with better course grades). the challenge with implementing consistent bonus points was having differing courses over two separate semesters. greater coordination among faculty and throughout the courses might have helped to encourage student participation. the student nurse population at emory university is primarily female, reflecting common gender norms of the nursing profession. this may, however, limit the generalizability of these findings to other more gender-balanced student groups. the jitt methodology and self-directed learning are effective means of increasing knowledge and confidence and decreasing risk concern among student nurses. in this era of globalization, when any communicable illness is "only a plane ride away" and intense media coverage can increase fear and anxiety, jitt is a successful method of delivering evidence-based information to students in a timely manner. schools of nursing must have the tools and resources to respond quickly and comprehensively during an unanticipated infectious disease outbreak to protect their students and staff, to prevent disease, and to be empowered advocates of accurate information in the midst of an epidemic.
references
just-in-time lectures
the reliability of a two-item scale: pearson, cronbach or spearman-brown?
care of patients with ebola virus disease
i'm the head nurse at emory. this is why we wanted to bring the ebola patients to the u.s. the washington post
longitudinal data analysis
statistics for windows, version . . armonk
risk communication with nurses during infectious disease outbreaks: learning from sars
an education programme for nursing college staff and students during a mers-coronavirus outbreak in saudi arabia
telling the story
chinese disasters and just-in-time education
key: cord- - rmm hfb authors: faes, c.; abrams, s.; van beckhoven, d.; meyfroidt, g.; vlieghe, e.; hens, n. title: time between symptom onset, hospitalisation and recovery or death: a statistical analysis of different time-delay distributions in belgian covid- patients date: - - journal: nan doi: . / . . . sha: doc_id: cord_uid: background there are different patterns in the covid- outbreak in the general population and amongst nursing home patients. different age-groups are also impacted differently. however, it remains unclear whether the time from symptom onset to diagnosis and hospitalization or the length of stay in the hospital is different for different age groups, gender, residence place or whether it is time dependent. methods sciensano, the belgian scientific institute of public health, collected information on hospitalized patients with covid- hospital admissions from participating hospitals in belgium. between march , and june , , a total of , covid- patients were registered.
the time of symptom onset, time of covid- diagnosis, time of hospitalization, time of recovery or death, and length of stay in intensive care are recorded. the distributions of these different event times for different age groups are estimated accounting for interval censoring and right truncation in the observed data. results the truncated and interval-censored weibull regression model describes the time between symptom onset and diagnosis/hospitalization best, whereas the length of stay in hospital is best described by a truncated and interval-censored lognormal regression model. conclusions the time between symptom onset and hospitalization and between symptom onset and diagnosis are very similar, with median length between symptom onset and hospitalization ranging between and . days, depending on the age of the patient and whether or not the patient lives in a nursing home. patients coming from a nursing home facility have a slightly prolonged time between symptom onset and hospitalization (i.e., days). the longest delay time is observed in the age group - years old. the time from symptom onset to diagnosis follows the same trend, but on average is one day longer as compared to the time to hospitalization. the median length of stay in hospital varies between and . days, with the length of stay increasing with age. however, a difference is observed between patients that recover and patients that die. while the hospital length of stay for patients that recover increases with age, we observe the longest time between hospitalization and death in the age group - . and, while the hospital length of stay for patients that recover is shorter for patients living in a nursing home, the time from hospitalization to death is longer for these patients. but, over the course of the first wave, the length of stay has decreased, with a decrease in median length of stay of around days.
the world is currently faced with an ongoing coronavirus disease (covid- ) pandemic. the disease is caused by the severe acute respiratory syndrome coronavirus (sars-cov- ), a new strain of the coronavirus, which was never detected before in humans, and is a highly contagious infectious disease. the first outbreak of covid- occurred in wuhan, hubei province, china, in december . since then, several outbreaks have been observed throughout the world. on february , , a cluster of covid- cases was confirmed in italy, the first european country affected by the virus. one week later, several imported cases were reported in belgium, after a week of school holidays. as from march , the first generation of infected individuals as a result of local transmission was confirmed in belgium. there is currently little detailed knowledge on the time interval between symptom onset and hospital admission, nor on the length of stay in hospital. however, information about the length of stay in hospital is important to predict the number of required hospital beds, both for beds in general hospital and beds in the intensive care unit (icu), and to track the burden on hospitals (vekaria et al., ) . the time delay from illness onset to death is important for the estimation of the case fatality ratio (donnelly et al., ) . individual-specific characteristics, such as the gender, age and co-morbidity of the individual, could potentially explain differences in length of stay in the hospital and are therefore important to correct for. therefore, in the present study, we investigate the time from symptom onset to hospitalization and the time from symptom onset to diagnosis, as well as the length of stay in hospital. more specifically, we consider and compare parametric distributions for these event times enabling us to appropriately account for truncation and interval censoring. in section , we introduce the epidemiological data and the statistical methodology used for the estimation of the parameters associated with the aforementioned delay distributions. the results are presented in section and avenues of further research are discussed in section . the hospitalized patients clinical database is an ongoing multicenter registry that collects information on hospital admission related to covid- infection. the data are regularly updated as more information from the hospitals is sent in. at the time of writing this manuscript, the data were available until june , .
the individual patients' data are collected through online questionnaires: one with data on admission and one with data on discharge. in the survey, there is information about , patients, hospitalized between march , and june , , including age and gender. from these, , of the hospitalized patients are females and , are males. hospitalized patients are less than years old, , individuals are between and years of age, , are between and years of age and , have an age above years. from these patients, it is known that , live in a nursing home and , do not. table shows that a large proportion of the hospitalized + patients are known to live in a nursing home facility (about % for patients aged - and % for patients aged +). as expected, below the age of years, there is only a very small proportion of patients that come from a nursing home facility. the survey contains information on , patients hospitalized during the initial phase of the outbreak (between march and march ); , patients in the increasing phase of the outbreak (between march and march ); , in the descending phase (between april and april ); and , individuals at the end of the first wave of the covid- epidemic (between april and june ). the time trend in the number of hospitalizations is presented in figure . black dots represent the number of patients included in the national surveillance survey and the red dots show the reported number of confirmed hospitalizations in the population. the time trend in the survey matches well with the time trend of the outbreak in the whole population, though with some under-reporting in april and may. the date variables were checked for consistency. observations identified as inconsistent were excluded for analyses related to the inconsistent dates. a flow diagram of the exclusion criteria is displayed in figure . the time of symptom onset and time of hospitalization is available for , patients. the date of symptom onset is determined based on the patient anamnesis history made by the clinicians. patients that were hospitalized before the start of symptoms (i.e., patients) were not included. these include patients with nosocomial infections admitted prior to covid- infection for other long-term pathologies, who then got infected at hospital and developed covid- -related symptoms long after admission. patients reporting a delay between symptoms and hospitalization of more than days (i.e., patients) were also not included, because it is unclear for these patients whether the reason for hospital admission was covid- infection. a sensitivity analysis including patients with event times above days is conducted. patients with missing information on age (i.e., patients) or gender (i.e., patients) were not included in the statistical analysis. this resulted in a total of , patients which were used to estimate the distribution of the time between symptom onset and hospitalization. the time of symptom onset and time of diagnosis is available for , patients. some of these were diagnosed prior to having symptoms ( ) or experienced symptoms more than days before diagnosis ( ), and are excluded as these might be errors in reporting dates. similarly, the delay between symptoms and detection time is truncated at days, but a sensitivity analysis including these patients is performed. in total, patients were removed because of missing information on age and/or gender, resulting in , patients used in the analysis of the time from symptom onset to diagnosis.
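the exclusion rules described above translate directly into data-frame filters. the sketch below (python/pandas) is an illustration only: the column names and the cut-off constant max_delay_days are hypothetical stand-ins, since the exact values are not shown here.

import pandas as pd

MAX_DELAY_DAYS = 30  # hypothetical stand-in for the cut-off used in the paper

def clean_onset_to_admission(df: pd.DataFrame) -> pd.DataFrame:
    # dates are assumed to be parsed to datetime64 columns already
    delay = (df["admission_date"] - df["onset_date"]).dt.days
    keep = (
        delay.ge(0)                 # drop admissions recorded before symptom onset
        & delay.le(MAX_DELAY_DAYS)  # drop implausibly long delays (unclear admission reason)
        & df["age"].notna()         # drop records with missing age
        & df["sex"].notna()         # drop records with missing gender
    )
    return df.loc[keep].assign(delay_days=delay[keep])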
the time between hospitalization and discharge from hospital is available for , patients, either discharged alive or dead. for patients that were hospitalized before the start of symptoms (i.e., patients), we use the time between the start of symptoms and discharge. patients with negative time intervals ( patients) are excluded for further analysis. another patients were discarded because of missing covariate information with regard to their age or gender. from these patients, we know that , recovered from covid- , while , died. from the hospitalized patients, there is information about the length of stay at icu for , patients. note that we analyzed an anonymized subset of data from the hospital covid- clinical surveillance database of the belgian public health institute sciensano. data from sciensano was shared with the first author through a secured data transfer platform. as there exist large differences between healthcare systems in different countries, the reporting lag and time to hospitalization can be very different amongst countries (who, ). in this section, we describe the observed delay from symptom onset to hospitalization, from symptom onset to diagnosis and the length of stay in hospital during the first wave of covid- infections in belgium. statistical analysis results thereof are presented in section . the observed distribution of the delay from symptom onset to hospitalization (left panel) and to diagnosis (right panel) is presented in figure . the observed length of stay in hospital and in icu is presented in figure for all patients as well as separately for those that died or recovered from the disease. summary information about these distributions is presented in tables a and a in the appendix. note that the empirical distributions shown in figure do not explicitly account for truncation of the event times at the end of the study. more specifically, the relative frequencies of short-term stays in the hospital are inflated by the absence of patients with larger lengths of stay that are still in hospital at the end of the study period, and therefore missing in the data. consequently, these graphs should be interpreted with care. while the observed delay between symptom onset and hospitalization is between and days, % of the hospitalizations occur within days after symptom onset. this is however shorter in the youngest age group (< years) and in the elderly group (> years). also patients coming from nursing homes seem to be hospitalized faster as compared to the general population. over the course of the first wave, the observed time between symptom onset and hospitalization was largest in the increasing phase of the epidemic (between march and march ). the time between symptom onset and diagnosis is very similar, ranging between and days, with % of the diagnoses occurring within days after symptom onset. it should be noted that these observations are based on hospitalized patients, and non-hospitalized patients might have a quite different evolution in terms of their symptoms. as non-hospitalized patients were rarely tested in the initial phase of the epidemic, no conclusions can be made for this group of patients. the observed median length of stay in hospital is days, with % of the patients having values ranging between and days. % of the patients stay longer than days in the hospital. the median length of stay seems to increase with age (from days in age group < to in age group − , in age group − and days in age group > ).
on the other hand, with time since introduction of the disease in the population, the length of stay seems to decrease, though this might be biased due to incomplete reporting of los in patients who are actually still admitted at the time of writing. therefore, these observed statistics should be interpreted with care. similar results are observed for the length of stay in icu. different flexible parametric non-negative distributions can be used to describe the delay distributions, such as the exponential, weibull, lognormal and gamma distributions (held et al., ) . however, as the reported event times are expressed in days, the discrete nature of the data should be accounted for in the estimation of the distributional parameters with regard to the respective delay distributions. different techniques are used in literature to take this into account (e.g., donnelly et al.). we use interval-censoring methods originating from survival analysis to deal with the discrete nature of the data, to acknowledge that the observed time is not the exact event time (sun, ) . let x_i be the recorded event time. instead of assuming that x_i is observed exactly, it is assumed that the delay is in the interval (l_i , r_i ), with l_i = x_i − . and r_i = x_i + . for x_i ≥ and l_i = ε (a small positive value) and r_i = . for x_i = . as a sensitivity analysis, we compare this assumption with the wider interval (x_i − , x_i + ). in addition, the delay distribution is often truncated, either because there is a maximal clinical delay period (e.g., time between symptom onset and hospitalization is at most days) or because the hospitalization is close to the end of the study (e.g., if hospitalization is days before the end of the study, the observed length of stay cannot exceed days) and partial information about patients still being hospitalized is not part of the database. we therefore use a likelihood function accommodating the right-truncated and interval-censored nature of the observed data to estimate the parameters of the distributions (cowling et al., ) . the likelihood function is given by l(θ) = ∏_i { [ f(r_i ; θ) − f(l_i ; θ) ] / f(t_i ; θ) }^{w_i}, in which t_i is the (individual-specific) truncation time, f(·) is the cumulative distribution function corresponding to the density function, and each patient's contribution is raised to its post-stratification weight w_i (equivalently, the log-likelihood contributions are weighted). we truncate the time from symptom onset to diagnosis and the time from symptom onset to hospitalisation to days (t_i ≡ ). the length of stay in hospital is truncated at t_i = e − h_i , in which h_i is the time of hospitalization and e denotes the end of the study period (june , ). in addition, to account for possible underreporting in the survey, the post-stratification weight w_i ≡ w_t is defined as w_t = (n_t / m_t) / (∑_t n_t / ∑_t m_t), where t is the day of hospitalization for patient i, n_t is the number of hospitalizations in the population on day t and m_t is the number of reported hospitalizations in the survey on day t. we assume a weibull and lognormal distribution for the delay time distribution. the two parameters of each distribution are regressed on age, gender, nursing home and time period. a maximum likelihood approach is used for parameter estimation. the bayesian information criterion (bic) is used to select the best fitting parametric distribution and the best regression model among the candidate distributions/models. only significant parameters are included in the final model. in addition, the delay distributions are summarized by their estimated mean, median, th, th, th and th quantiles, as these can be helpful in guiding policy decision making and future covid- modeling approaches.
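a minimal sketch of this estimation approach for the weibull case is given below (python/scipy). it illustrates the weighted, right-truncated, interval-censored likelihood written above; the simulated data, the small lower bound for day-zero intervals, the starting values and the truncation window are illustrative choices, not the values used in the paper.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

def neg_log_lik(params, L, R, T, w):
    # parameters on the log scale to keep shape and scale positive
    shape, scale = np.exp(params)
    F = lambda t: weibull_min.cdf(t, shape, scale=scale)
    contrib = (F(R) - F(L)) / F(T)  # interval censoring + right truncation
    return -np.sum(w * np.log(np.clip(contrib, 1e-300, None)))

def fit_truncated_weibull(x, T, w):
    # day counts x are interval censored: (x - 0.5, x + 0.5), with a small
    # positive lower bound for x = 0 (the epsilon here is illustrative)
    L = np.where(x == 0, 1e-3, x - 0.5)
    R = x + 0.5
    res = minimize(neg_log_lik, x0=np.log([1.5, 5.0]),
                   args=(L, R, T, w), method="Nelder-Mead")
    return np.exp(res.x)  # fitted (shape, scale)

# simulated example with an illustrative truncation window
rng = np.random.default_rng(0)
x = np.round(rng.weibull(1.4, size=800) * 6.0)
x = x[x <= 30.0]
shape_hat, scale_hat = fit_truncated_weibull(
    x, T=np.full(x.size, 31.0), w=np.ones(x.size))
print(f"fitted shape {shape_hat:.2f}, scale {scale_hat:.2f}")

maximizing this weighted likelihood rather than a naive one is what corrects for the bias noted earlier, namely that long delays close to the end of the study cannot yet appear in the data.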
overall, the delay between symptom onset and hospitalization can be described by a truncated weibull distribution with shape parameter . and scale parameter . . the overall average delay is very similar to the one obtained by abrams et al. using an erlang delay distribution. however, there are significant differences in the time between symptom onset and hospitalization between males and females, among different age groups, between living statuses (nursing home, general population or unknown) and between different reporting periods. as the truncated weibull distribution has a lower bic as compared to the lognormal distribution (bic of , and , for weibull and lognormal distributions, respectively), results for the weibull distribution are presented. in table , the regression coefficients of the scale (λ) and shape (γ) parameters of the weibull distribution are presented. [table : summary of the regression of the scale (λ) and shape (γ) parameters for reported delay time between symptom onset and hospitalization and between symptom onset and diagnosis, based on a truncated weibull distribution: parameter estimate, standard error and significance ( * corresponds to p-value < . ; * * to p-value < . and * * * to < . ). the reference group used are females of age > living in a nursing home that are hospitalized in the period - to - .] the impact on the time between symptom onset and hospitalization is visualized in figure , showing the model-based %, %, %, % and % quantiles of the delay times. age has a major impact on the delay between symptom onset and hospitalization, with the youngest age group having the shortest delay (median of day, but with a quarter of the patients having a delay longer than . days). the time from symptom onset to hospitalization is more than doubled in the age groups - and - as compared to the age group < (median close to days and a delay of more than . days for a quarter of the patients). in contrast the increase in time between symptom onset and hospitalization is % in the age group + as compared to the youngest age group < (median delay of . days, with a quarter of the patients having a delay longer than . days). after correcting for age, it is observed that the time delay is somewhat higher when patients come from a nursing home facility, with an increase of approximately days. note that in the descriptive statistics, we observed shorter delay times for patients coming from nursing homes. this stems from the fact that + year olds have shorter delay times as compared to patients of age - , but the population size in the + group is much larger as compared to the - group in nursing homes. this is known as simpson's paradox. and although statistically significant differences were found for gender and period, we observe very similar time delays between males and females and in the different time periods (see figure a in the appendix). note, however, that there are indeed differences, but mainly in the tails of the distribution; with, e.g., the % longest delay times between symptoms and hospitalizations observed for males. the time between symptom onset and diagnosis is also best described by a truncated weibull distribution (shape parameter . , scale parameter . ). as again the truncated weibull distribution has a lower bic value as compared to the lognormal distribution (bic values of , and , for weibull and lognormal, respectively), results for the weibull distribution are presented. parameter estimates are very similar to the distribution for symptom onset and hospitalization, and are presented in table . the median delay between symptom onset and diagnosis is approximately one day longer as compared to the median delay between symptom onset and hospitalization. the diagnosis was typically made upon hospital admission to confirm covid- infection. this is why the date of admission is very close to the date of diagnosis. the same effects of age and nursing home are found for the time between symptom onset and diagnosis, as compared to the time to hospitalization.
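the bic comparison and the model-based quantiles reported here are direct by-products of such a fit; a small helper is sketched below (python, placeholder numbers rather than the paper's estimates).

import numpy as np
from scipy.stats import weibull_min

def bic(nll, k, n):
    # bayesian information criterion: k * ln(n) + 2 * negative log-likelihood
    return k * np.log(n) + 2 * nll

# hypothetical comparison: the smaller BIC indicates the better-supported model
print("weibull  :", bic(nll=12000.0, k=2, n=11000))
print("lognormal:", bic(nll=12050.0, k=2, n=11000))

# model-based quantiles of the delay for a fitted (shape, scale); placeholders
shape_hat, scale_hat = 1.4, 6.0
q = weibull_min(shape_hat, scale=scale_hat).ppf([0.05, 0.25, 0.5, 0.75, 0.95])
print("5/25/50/75/95% quantiles (days):", np.round(q, 1))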
parameter estimates are very similar to the distribution for symptom onset and hospitalization, and are presented in table . the median delay between symptom onset and diagnosis is approximately one day longer as compared to the median delay between symptom onset and hospitalization. the diagnosis was typically made upon hospital admission to confirm covid- infection. this is why the date of admission is very close to date of diagnosis. the same ef-fects of age and nursing home are found for the time between symptom onset and diagnosis, as compared to the time to hospitalization. especially at the increasing phase of the epidemic, the time between symptom onset and diagnosis was longer as compared to the time between symptom onset and hospitalization (see figure a ), but this delay has shortened over time. as a sensitivity analysis, a comparison is made with an analysis without truncating the time between symptom onset and hospitalisation or diagnosis. results are presented in figure a , and are very similar to the once presented here. in addition, a sensitivity analysis assuming that the time delay is interval censored with time intervals defined as (x i − , x i + ) is presented in figure a , yielding very similar results. it was also investigated whether or not there a difference between neonati (with virtually no symptoms, but diagnosed at the time of birth or at the time of the mothers testing prior to labour) and other children. for all children < years of age, we found a median time from symptom onset to hospitalization and diagnosis to be and . days, respectively. if we only consider children > years of age, a small increase is found ( . ( . - . ) days for time to hospitalization and . ( . - . ) for time to diagnosis). a summary of the estimated length of stay in hospital and icu is presented in table and figure based on the lognormal distribution. the lognormal distribution has a slightly smaller bic value as compared to the weibull distribution for the length of stay in hospital (bic value of , for weibull and , for lognormal) and for the length of stay in icu (bic value of , for weibull and , for lognormal). the median length of stay in hospital is close to days in the age group less than years old, but % of these patients stay longer than . days in hospital for females and more than . days for males and % thereof stay longer than days for females and days for males. the length of stay increases with age, with a median length of stay of around . days for females aged - and . days for males aged - . a quarter of the patients in age group - stay longer than days and % stay longer than days. this further increases for patients above years of age, with a median length of stay of around . and . days for female and male patients aged - years and . and . days for female and male patients above years of age. a large proportion of the elderly patients stay much longer in hospital. a quarter of these patients stay longer than . - . days for patients of age - years and longer than . - days for patients of age above . some very long hospital stays are observed in this age group, with % of the stays being longer than and days for females and males in the age group - years, and and days in the age group +. no significant difference is found for patients coming from nursing homes. over the course of the first wave, the length of stay has slightly decreased, with a decrease in median length of stay of around days from the first period to later periods. 
note that this result is corrected for the possible bias of prolonged lengths of stay being less probable for more recently admitted patients. therefore, this might be related to better clinical experience and improved treatments. the length of stay in icu (based on the lognormal distribution) is on average . days for patients below years of age, with a quarter of the patients staying longer than . days in icu. similar to the length of stay in hospital, the length of stay in icu also increases with age. the median length of stay in the age group - years is . , in age group - . , while in age group + it is slightly shorter ( . days). again, it is observed that a quarter of the patients in age group - stay longer than days in icu, in age group - . days and in + days. patients living in nursing homes stay approximately days longer in intensive care. no major difference is observed in the length of stay in icu between males and females, though some prolonged stays are observed in males as compared to females. similar to the overall length of stay in hospital, the length of stay in icu has decreased over time (with a decrease of days from the first period to the later periods, and an additional days in the last period). table summarizes the length of stay in hospital for patients that recovered or passed away. the lognormal distribution has the smallest bic value for the time from hospitalization to recovery and the weibull distribution for the time from hospitalization to death. figure also displays these results. [table : summary of the regression of the log-mean (µ) and log-standard deviation (σ) parameters for length of stay in hospital for recovered patients and patients that died, based on the lognormal distribution and weibull distribution: parameter estimate, standard error and significance ( * corresponds to p-value < . ; * * to p-value < . and * * * to < . ). the reference group used are females of age > living in a nursing home that are hospitalized in the period - to - .] for patients that recovered, the length of stay in hospital increased with age (the median length of stay in age group < is days, which increases to days in age group - years, days in age group - years and days in age group +). in contrast to previous results, we observe that patients living in nursing homes leave hospital approximately day faster as compared to the general population. however, the % longest stays in hospital before recovery are longer for patients living in nursing homes. but, while the length of stay in hospital for patients that recover increases with age for all age groups, the survival time of hospitalized patients that died is lower for the age groups - years (median time of . days) and + (median time of . days) as compared to the age group − years (median time of . days). also large differences are observed amongst patients coming from nursing homes or not, with the time between hospitalization and death being approximately days longer for patients living in a nursing home. no significant differences are found between males and females. as a sensitivity analysis, the assumption that the time delay is interval censored by (x_i − , x_i + ) was again assessed; results are presented in figure a and are almost identical to the previously presented results.
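because the length-of-stay models are reported on the log scale (log-mean µ and log-standard deviation σ), translating table entries into median stays and tail quantiles requires the lognormal back-transformation (median = exp(µ)). a sketch of that conversion follows, with placeholder parameter values rather than the paper's estimates.

import numpy as np
from scipy.stats import lognorm

mu, sigma = 2.1, 0.8  # placeholders, not the paper's estimates
# scipy parameterization of a lognormal with underlying normal mean mu, sd sigma
los = lognorm(s=sigma, scale=np.exp(mu))
print(f"median length of stay: {los.median():.1f} days")
print("25/75/90% quantiles (days):", np.round(los.ppf([0.25, 0.75, 0.90]), 1))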
indeed, the length of stay in hospital for the youngest age group increases slightly, to . ( . , . ) days for males and . ( , . ) days for females, if we leave out the children of years of age. the length of stay in hospital for recovered patients increases to . ( . , ) days for males and . ( . , . ) days for females of age between and years, making it very similar to that of the − year old patients that recovered. no impact was observed on the length of stay in icu. previous studies in other countries reported a mean time from symptom onset to hospitalization of . days in singapore, . days in hong kong and . days in the uk (pellis et al., ). other studies report mean values of the time to hospitalization ranging from to . days (linton et al., ; kraemer et al., ; ferguson et al., ). in belgium, the overall mean time from symptom onset to hospitalization is . days, which is slightly longer than the delays reported in other countries, but depending on the patient population, estimates range between and . days. the time from symptom onset to hospitalization is largest in the age group - years old, followed by the - years old. comparing patients within the same age group, the time delay is somewhat higher when patients come from a nursing home facility, with an increase of approximately days. the time from symptom onset to diagnosis shows the same behaviour, with a slightly longer delay as compared to the time from symptom onset to hospitalization. to investigate the length of stay in hospital, we should distinguish between patients that recover and patients that die. while the median length of stay for patients that recover varies between days (in the age group < ) and . days (in the age group +), the median length of stay for patients that die varies between . days (in the age group +) and . days (in the age group − ). over all patients, the median length of stay varies between . days (in the age group > ) and days (in the age group +). in general, the length of stay in hospital for patients that recover increases with age, and males need a slightly longer time to recover than females. patients living in nursing homes, however, leave hospital sooner than patients of the same age group from the general population. in contrast, the time between hospitalization and death is longest for the age group - years, with shorter survival times in the age groups - years and +. the length of stay in hospital for patients that die is longer for patients coming from nursing homes, as compared to patients of the same age group from the general population. a similar trend is observed for the length of stay in icu. the length of stay in belgian hospitals is within the range of those observed in other countries, though especially the length of stay in icu seems short in belgian hospitals (rees et al., ). the different sensitivity analyses indicated that the results are robust to some of the assumptions made in the modeling. however, alternative methods could still be investigated to improve the estimation of the delay distributions. first, alternative distributions with more than two parameters, and thus more flexibility, could be used, e.g., generalized gamma distributions (for which the gamma, exponential and weibull distributions are special cases). second, a truncated doubly-interval-censored method could be considered to account for the uncertainty in both time points determining the observed delays (and their intervals).
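as a sketch of the first suggested extension (again with made-up delays, not the study's data), scipy's gengamma can be fitted alongside the simpler families to check via bic whether the extra flexibility of the generalized gamma is warranted.

    import numpy as np
    from scipy import stats

    delays = np.array([1, 3, 2, 5, 4, 7, 2, 3, 6, 4], dtype=float)  # illustrative delays

    for name, dist in [("exponential", stats.expon),
                       ("gamma", stats.gamma),
                       ("weibull", stats.weibull_min),
                       ("generalized gamma", stats.gengamma)]:
        params = dist.fit(delays, floc=0.0)  # fix the location parameter at zero
        loglik = np.sum(dist.logpdf(delays, *params))
        k = len(params) - 1                  # loc is fixed, so it is not an estimated parameter
        bic = k * np.log(len(delays)) - 2 * loglik
        print(f"{name}: k = {k}, bic = {bic:.1f}")

because the two-parameter families are nested within the generalized gamma, a clearly smaller bic for gengamma would signal that the simpler distributions are too restrictive for the delay data.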
finally, the impact of severity of illness and co-morbidity on the length of stay in hospital is very important. this was not investigated in this study, as this information was not made available, but it is an important factor to investigate in future analyses.

references:
- epidemiological determinants of spread of causal agent of severe acute respiratory syndrome in hong kong
- estimating incubation period distributions with coarse data
- incubation period and other epidemiological characteristics of novel coronavirus infections with right truncation: a statistical analysis of publicly available case data
- robust reconstruction and analysis of outbreak data: influenza a (h n )v transmission in a school-based population
- estimation of the serial interval of influenza
- rapid establishment of a national surveillance of covid- hospitalizations in belgium
- simid covid- team, beutels, p., hens, n. ( ). modeling the early phase of the belgian covid- epidemic using a stochastic compartmental model and studying its implied future trajectories
- hospital length of stay for covid- patients: data-driven methods for forward planning
- covid- length of hospital stay: a systematic review and data synthesis

key: cord- -uu aykoy authors: johnston largen, kristin title: two things can be true at once: surviving covid‐ date: - - journal: dialog doi: . /dial. sha: doc_id: cord_uid: uu aykoy

as i sit in california, currently under a "stay at home" order to help stop the spread of the covid- pandemic, i have been reflecting on worship, the role of the church in providing healthcare, and our sacramental life. colleagues have asked for my thoughts on liturgy in the midst of a public health crisis, so i have decided to put things on paper so that others may be included in the conversation. many have already published resources on moving worship into the online environment, but a large percentage of those resources have approached it from a practical perspective rather than a theological or historical one. the title of my essay is a riff on luther's open letter "whether one may flee from a deadly plague," which has made a resurgence during this health crisis. the letter has always been a favorite of mine: i assigned it during the four years i taught introduction to lutheranism, and it served as a conversation partner in my dissertation on liturgical rites of healing. quotes from the letter have circulated on facebook, especially as luther (near the end of the letter) provides very practical advice during a plague. the main point of the letter is that pastors and city officials are to work together to care physically and spiritually for those affected by disease, and that those who are not bound by such responsibility should be free to leave without burdening their conscience. but before we use luther's letter as our urtext for the church's response during covid- , we must remember that it seems unthinkable for luther (at least in the letter itself) that people would not be able to attend worship during a plague. our situation is different, with local, state, and federal governments providing recommendations and orders to stay home and not gather in groups. our understanding of science is also different: on this side of the scientific revolution, we have a better grasp of how germs spread. luther's science is not our science, and that must be considered when reading his specific recommendations on liturgical practices during health crises.
on the other hand, the letter reminds us that the church must work alongside civil authorities in preventing a plague from spreading, which means that congregations must follow the orders not to gather. attempting to spiritualize this pandemic as being the will of god, whether as punishment or as an opportunity to repent, is dangerous. the theological and historical concerns that i raise below are as i interpret our lutheran traditions, drawing on luther and the book of concord. the practical suggestions i offer that differ from our customary practices are understood to be in extremis, in an emergency like the one we are facing today. as luther reminds his readers at the beginning of the letter, all christians must "come to their own decision and conclusion" (luther, , p. ).

one thing that must be addressed before reflecting on particular issues is to define worship from a theological perspective or, to use john witvliet's ( ) modes of liturgical discourse, in terms of "deep meaning and purpose." for lutherans the primary theological understanding of worship is as a dialogue between god and humans, or as luther says in his torgau sermon, "where our dear [god] may speak to us through [the] holy word and we respond to [god] through prayer and praise" (luther, , p. ; see also luther, , p. ). distinct from some other christian traditions, lutheran worship is primarily about god coming to us through the means of grace, which are concrete and external ways that god makes godself known in the worship event: primarily preaching, the sacraments, absolution, and other liturgical practices. at the same time, lutheran worship also includes a participatory response, the second half of luther's torgau definition. active participation ("full, active, conscious," as would be articulated four centuries later at the second vatican council) is the other half of the dialogue between god and humans. participation is a diverse thing, just as christ was incarnated into a diverse human reality. the response belongs to the entire congregation, not just to worship leaders or a select few. i think this should give us pause when we look at livestreaming (rather than web conferencing), as it calls into question the participatory character of digital worship. unlike some theological traditions, lutherans are under no obligation to go to worship. yet the question of obligation actually misses the point if we attend to our definition of worship: it is through worship that we know who god is and how god operates in the world. worship contains external forms so that god's word may exert its power more publicly (luther, , p. ). in fact, luther ( , p. ) believes worship is so important that we should have it daily, but understands that sunday as the chief day of the week is handed down from ancient times. in her book @worship: liturgical practices in digital worlds, liturgical theologian teresa berger ( , p. ) cautions us against too quickly succumbing to the false dichotomy of "real" and "virtual." such a distinction can equate "virtual" with "non-real," which automatically privileges face-to-face relationships and practices over technologically mediated ones. god primarily operates through means (neighbor, preaching, sacraments); thus, technologically mediated communication can certainly be the medium of god's own communication.
another theological concept that helps bridge the gap between the so-called "virtual" and the "real" is one that was used during the reformation to describe christ's presence in the sacrament. luther argued that, because of the ascension, christ has the attribute of ubiquity, meaning that he is available wherever he has promised to be. for luther, in his disagreements with the reformed, this meant that christ is truly present as the elements of the lord's supper on every altar. the same logic can be used for the digital environment: christ has promised to be present among gatherings of christians and in the midst of those who suffer (the theology of the cross). if that is true, then christ can be present online in a way that transcends time and geography, just like christ's presence in the sacrament. theologian deanna thompson ( ; , march) follows this argument, claiming that virtual community is real community, mediating the body of christ. one of the arguments i heard in my previous work of helping faculty teach online is the assumption that online coursework and whole-person formation contradict one another, since the online environment is supposedly all about the mind. from my experience of teaching online, i know this is not true. berger ( , p. ) affirms this by stating that the online environment definitely does have a physical effect on users. the types of relationships that can occur online can be described as "low stakes," meaning that they are "not associated with any cost, friction or risk" (simanowski, , p. ). does such a "low threshold" (berger, , p. ) for commitment allow one to flee from any sort of responsibility in the relationship, or does it allow for the freedom of openness without the increased possibility of negative consequences? constructing intentional community, of the kind that already exists (or should exist) in our regular congregations, differs from the low-stakes approach one sees on facebook. in our current situation, the online environment operates parallel to the communities that would gather in person if the health crisis did not exist. such relationships are thus actually "high stakes," as this moment is temporary and these relationships are assumed to continue beyond the technology. the phrase that health officers are using is "social distancing," the clinical term that encourages people to stay out of public spaces and larger gatherings. while limiting contact and large groups is an important step in reducing the pandemic peak, the term creates another set of problems. it is physical distancing that reduces the spread of germs; we must continue social encounters in this crisis even while limiting physical encounters. social networking and web conferencing technology, which i have been using since spring break to teach the remainder of the semester, can foster and help us maintain our neighborliness in the midst of covid- . maintaining both physical and social distancing can lead to isolation, which can make the health situation (especially mental health) worse. in some of the discourse i have seen online in the last weeks, i have noticed a discrepancy in terminology. when it comes to using video-based technology to broadcast worship online, two terms usually appear: livestreaming and web conferencing. although both practices use webcams and microphones, their levels of interactivity are quite different. the livestreaming approach is unidirectional, which is how one currently watches television and youtube.
the broadcaster creates the material, and those who watch consume it. participation is at best passive and could be considered analogous to a pre-reformation understanding of the mass: the main role of the worshiper is to watch at the important moments while simultaneously engaging in their own devotional practices. livestreaming (and admittedly web conferencing, if the feature is enabled) allows for recording, meaning those unable to participate at the scheduled time can join in when they are able, but this can increase the individualism already present in contemporary society. also, taking seriously the role of the holy spirit means that it would be nearly impossible to replicate the live action in a recording (spadaro, , p. ). the web conferencing approach is bidirectional and multidirectional. it allows for both proclamation and response through the same online tool, which is not the case with livestreaming. the "congregation" is part of the interactivity just as much as the worship leaders. this better reflects the dialogical nature of lutheran worship that i defined earlier. in his letter, luther ( , p. ) encouraged his readers to continue to participate in the weekly proclamation of the word through the sermon. he understands that central to christian life is the preaching and practice of god's word (luther, , p. ). historically, the service of the word has been the primary sunday liturgy for lutherans. some may wish to dispute this because our confessional documents and luther himself assume a weekly celebration of the sacrament (see melanchthon, , p. ; luther, , p. ). but we know that this was the ideal, and various circumstances usually prevented or hampered attaining it: a shortage of pastors in early american efforts, laity feeling unworthy to receive, the assumption that frequent reception would diminish the sacrament's specialness. the trend toward restoring the weekly celebration of the lord's supper came with the early work done alongside the ecumenical liturgical renewal movement of the mid-twentieth century. and still, among many lutheran congregations, weekly communion is not yet the practiced norm, although it is assumed in the newest worship books. when visiting my family in minnesota, the congregation where we worship still celebrates the sacrament twice a month. this attempt at restoring weekly sacramental celebrations has in some places turned into an overcorrection, with the assumption that every time the congregation, or even a subset of the congregation, gathers, the liturgy is deficient if the sacrament is absent. this practice has led to a phenomenon that looks more like votive masses than regular worship, in which the eucharistic liturgy appears to be offered for particular intentions ("votums") rather than as the means of grace. the language around these votive (and sometimes private) masses becomes about mutual union and friendship, professing the faith we share, which is contrary to our understanding of the sacrament (melanchthon, , p. ). these votive-like celebrations often separate the proclamation of the word from the sacrament, such that the sacramental elements become the primary action rather than maintaining the historic order and balance (see evangelical lutheran church in america, , p. ). one problem identified by those who advocate for perpetual fasting during this pandemic is that the sacrament must be celebrated within the assembly, as articulated in the use of the means of grace, principle (evangelical lutheran church in america, , p.
). yet this neglects the fact that the online environment is an assembly gathered for worship, while raising no objections to sacramental celebrations outside the assembly (e.g., church council meetings, retreats, etc.). the best advice would be to fast from the sacrament as long as possible, even in the midst of desiring it. the season of lent provides a scheduled opportunity to do so, as we prepare ourselves for the annual celebration of christ's death and resurrection (lange, , march ). as many congregations are already doing, these services of the word can easily take place in the online environment, especially through the communal nature of web conferencing. but the current health crisis may run many months; it is already running into eastertide and beyond. this requires other solutions (see below). because of the centrality of the proclamation of the word, lutheran congregations have not replaced the sunday liturgy with daily prayer; this was the custom in many anglican parishes pre- prayer book. daily prayer in its ideal form is daily worship that does not occur on sundays, as the readings assigned are primarily from other parts of scripture. the three main offices (morning prayer, evening prayer, night prayer) focus the gospel on the invariable canticles from luke's gospel, rather than on the reading of a gospel lectionary text. the daily prayer offices are also better suited to smaller gatherings than to the presence of the entire worshipping congregation. this may make the most sense in the online environment, as web conferencing technology works best with smaller groups, and these rites are particularly suited as domestic (at-home) rituals rather than rituals of the congregation's worship space. the music that accompanies daily prayer, especially the historic orders with their assigned chants, can be done with little to no accompaniment and can easily be spread among many people (both rostered and lay) for leadership. it is with the celebration of the lord's supper (holy communion, eucharist) that we encounter the most difficulty in this public health crisis. even many advocates of digitally mediated worship stop short of endorsing "online communion." the main argument is that "the christian faith is deeply incarnational, and that means wedded to physicality and matter. … [o]ffering communion online short-circuits the communal, embodied nature of the eucharist" (berger, , p. ). these critiques are important, as they lift up one of the many layers of meaning of the sacrament, namely its physicality and incarnational nature. this physicality of the means of grace connects with our own physicality to remind us that our human/bodily nature is a god-given gift. the external nature of the sacraments provides the needed certainty to which our faith can cling (luther, , p. ). when the sacramental elements are not possible in any way, the words of institution themselves serve the role of comfort and healing. the brandenburg church order notes that lay people could use the words of institution without administering the elements (which they could not do) so that the sick could "feed on the word" (rittgers, , p. ). this is a natural extension of luther's claim that the sole source of comfort for christians is the word (rittgers, , p. ). the proclamation of the gospel, aural and edible, is in service of consolation (treu, , p. ). the church's ministry to the sick has usually included bringing communion to those who are unable to attend sunday worship.
this tradition dates to the second-century writings of justin martyr, in his description of christianity to roman officials. in narrating an outline of the sunday liturgy, he describes the role of the deacons as the ones who bring the lord's supper to those who are absent (chapters and ). the deacons, since they did not have the role of "consecrating" the sacrament (justin assigns that to the "president of the assembly"), would have brought the already-consecrated elements as an extension of the assembly's sunday worship. this practice has continued to the present day and is seen in the current lutheran tradition in lbw's "distribution of holy communion to those in special circumstances" and elw's "sending of holy communion." i think the lbw's title better lifts up the issues we face today: we are in "special circumstances" that require us to rethink our customary practices. and this reevaluation makes it necessary that we be creative, especially since our congregations are all dealing with different restrictions and situations. extending our practices for "special circumstances" would be the ideal solution for distributing communion during these times. in contexts that are not in a quarantine-like state, where visits are still allowed, a minister of communion would bring the sacrament to individuals or small groups in their homes or other arranged places. the caveat is that in many places the assembly is not gathering on sunday mornings, so how would the distribution be extended from something that is not happening? if allowed under the civil orders, ministers of communion would gather with the pastor for a full eucharistic liturgy (including the proclamation of the gospel), receive the sacrament themselves, and then carry it to those who are not allowed to be present. preaching, which importantly connects to the distribution of the sacrament, could be recorded so that it could be played in the remote locations when the sacrament is distributed. it would be important that the minister give communion to the other person (and vice versa) so that the communal nature of communion continues. the more difficult context is when gatherings in general are disbanded by order of civil officials. one might argue that churches could exempt themselves from such a situation (e.g., on two-kingdoms grounds), but luther ( , p. ) reminds us in his letter that responsibility for both body and soul means doing what it takes not to spread infection; in fact, it is considered sinful not to avoid places and persons in the case of possible infection. yet there are possibilities even here. while maintaining physical distancing, it would be possible to deliver the sacrament to households, as is permitted for food delivery. even though the sacrament is not mere bread and wine as served at the table, luther ( , pp. ) still calls it food (and medicine) for both body and soul, nourishing us in a different mode than regular table food. again, pastors and ministers of communion would need to find a way to maintain the intimate connection between the proclamation of the gospel and the sacrament, so that we do not privilege one over the other. as suggested above, preaching could happen remotely through digital means, and people would be able to receive communion. it is the distribution and reception, the "for you," that is the central action of the sacrament, so ideally it would be someone else who distributes communion (luther, , p. ; formula of concord, , p. ). as thomas schattauer ( , p.
) notes, the lutheran mass "culminated in the reception of the sacrament." this could occur among family members in a household, roommates in other situations, medical professionals and patients in a healthcare facility, and so on. this is the most difficult part of the discussion of digital worship in a health crisis. i have read essays on both sides of the argument, and most seem to talk past one another. the ideal response is, as i have stated above, to fast from receiving the sacrament. in the small catechism, luther notes that the benefits of the sacrament are "forgiveness of sins, life and salvation," which makes it different from baptism (and the necessity of baptism for salvation). yet in the large catechism, luther ( , p. ) provides additional benefits: comfort, new strength, and refreshment. the sacrament is also "a pure, wholesome, soothing medicine that aids you and gives life in both soul and body" (luther, , p. ). so while the lord's supper is not salvifically necessary, it certainly could be considered pastorally necessary. before musing on what online communion may look like, i want to offer three caveats. the first is that the sacrament remains unimpaired even if we handle or use it unworthily (luther, , p. ; formula of concord, , p. ). this is not to excuse bad sacramental practice or to justify doing whatever we want with the lord's supper; rather, it provides some comfort as we attempt to adjust to an unthinkable situation in a public health crisis. the second is that no one should deter someone from receiving the sacrament (luther, , p. ). especially in times of pastoral necessity, the lord's supper should be received. luther ( , p. ) saw this as a requirement for pastors ministering to the sick and dying. it is part of the full work of ministry in the midst of a health crisis: preaching, teaching, exhorting, consoling, visiting, and administering the sacrament (luther, , p. ). but when visits are not allowed, the last two tasks in this list must be rethought using the tools we have in our time. the third is that we should not doubt that christ the word can certainly accomplish what he promises (formula of concord, , p. ). this simple argument was central to luther's disagreements with zwingli at marburg: if jesus promises (as he states in the words of institution) to be truly present, then we should not doubt those words or attempt to construe them to mean something else. so, is it possible to have online communion? i hesitate to answer definitively, but i provide here some theological rationale for doing so in extremis. by online communion i mean having worshippers gather in community through web conferencing, with their own bread and cup taken from their own pantries. the two main objections raised against this are ( ) that the lord's supper requires contact, and ( ) that it is akin to self-communion. the first objection should not be taken lightly, as generations of christians have gathered physically to participate as the ecclesial body of christ in the sacramental body of christ. distribution is important in lutheran theology, and it regularly happens from person to person (see luther, , p. ). unfortunately, in many dire situations that is not possible and can even be dangerous; recall luther's exhortation that ministers are also responsible for not spreading infection (luther, , p. ).
an even stronger objection related to contact is that the sacramental event must happen in person, which would preclude the sacraments happening remotely. prior to the technological revolution, such a remote sacramental event would have been unthinkable and thus would not have come up in the theological discourse, certainly not during the reformation. to me this is a question of "use" and "action," the two words that the concordists identify as best expressing luther's sacramental theology in his "confession concerning christ's supper." the right use of the sacrament is reception and faith (schattauer, , p. ). the sacrament cannot be present separate from its intended use (formula of concord, , p. ). this is the closest the lutheran tradition comes to defining the "how" of the sacrament, as that was not the important question in debating the lord's supper (the "who," "what," and "why" were primary). in the regular celebration of the lord's supper, it is the presiding minister who completes steps one through three, and then the recipient of the sacrament completes steps four and five. yet the concordists do not appear to say anything about the necessity of the presiding minister doing steps one and three, only step two, speaking (or singing) the words of institution, because of having the proper "ministry or office" (formula of concord, , p. ). christ's body and blood are truly present as the sacramental bread and wine in their use and action, which i would argue can extend over digital means in the midst of the online community. the concordists insist on the language of use and action to prevent misuse of the sacrament through eucharistic adoration/reservation and corpus christi processions (formula of concord, , p. ). any adoration of christ occurs when the community gathers for sacramental worship, not as adoration of the sacrament itself. the distribution extra nos prevents self-communion, which is the second objection. i find it peculiar that such an objection is raised with regard to online communion, when generations of lutheran pastors have communed themselves during the eucharistic liturgy (rather than having an assisting minister commune them) without much objection. in fact, the use of the means of grace, application a, permits such practice (evangelical lutheran church in america, , p. ). the role of the presiding minister in all of this is to proclaim the words of institution, just as the preacher proclaims the gospel; both of these are understood as "showing forth" christ (melanchthon, , p. ). the presiding minister is not acting in persona christi as a show or example of christ's action at the last supper (melanchthon, , p. ). rather, the presiding minister is to proclaim christ through the audible and edible word (aural and sacramental), because christ is the word. in his liturgical reforms, luther underscores these points ritually by requiring the words of institution to be spoken publicly and by eliminating the manual acts with the elements (schattauer, , pp. , ). such reforms cause the eucharistic liturgy to focus on "christ's entire life and the meaning of that life for human salvation," rather than on reenacting a particular moment in christ's life (wandel, , p. ). when this essay is published online, most people will still be under "stay at home" or quarantine orders, and that may also be the case once this essay is published in hardcopy.
while community continues to happen through online means, christians will be unable to gather in worship spaces for easter, the culmination of the liturgical year, in order to protect the vulnerable among us. this new life proclaimed in the death and resurrection of christ is a constant reminder that all christians are called to care for the neighbor in both their physical and spiritual life. during this time of being 'alone together,' i have been reflecting on the lectionary, especially the gospels for the second and third sundays of easter. like thomas, who missed the first appearance of the resurrected jesus in the locked room, we may doubt christ's true presence unless we see and experience what has always happened in the past. yet christ still comes to us, even when we do not experience church as in previous days. like the two disciples on the road to emmaus, we may not understand these events that have taken place, where our easter expectations have been disrupted by things outside our control. yet christ still comes to us, even when we cannot gather physically to break bread as we have in previous days. pastors, deacons and all christians are responsible for the well-being of all people during this time, which may also include adjusting sacramental practice in extremis. the debate over online or virtual communion is not new, but the current health crisis has brought it to the foreground, and the end of the covid- pandemic will not stop the debate. as we continue to figure out how to be church in the st century, we will need to attend to the many layers of meaning inherent in our practices. luther ( , p. ) argues that those who do not desire the sacrament actually despise it. this is what luther means by connecting the elements with the word (luther, , pp. , ). although the apology does seem to assume that the presiding minister is the one who distributes, this does not necessarily align with today's practice as articulated in the use of the means of grace, principle (evangelical lutheran church in america, , p. ).

kyle kenneth schiefelbein-guerrero (https://orcid.org/ - - - )

spadaro, a. ( ). cybertheology: thinking christianity in the era of the internet, m. way (trans.). new york: fordham university.
thompson, d. ( ). the virtual body of christ in a suffering world. nashville, tn: abingdon press.
thompson, d. ( , march). lutherjahrbuch, , - .
wandel, l. ( ). the eucharist in the reformation: incarnation and liturgy. cambridge: cambridge university.
witvliet, j. ( ). teaching worship as a christian practice: musings on practical theology and pedagogy in seminaries and church-related colleges. reformed journal. https://reformedjournal.com/teaching-worship-as-a-christian-practice-musing-on-practicaltheology-and-pedagogy-in-seminaries-and-church-related-colleges/.

doi: . /dial.

should christians practice "virtual communion" in time of a plague? perhaps surprising to liturgical christians, but surely surprising to the public at large, if they cared, is that during this coronavirus/covid- pandemic a parochial debate has also gone viral; well, viral within our subculture. this debate concerns whether holy communion is legitimate when done "virtually" over the internet. this debate is serious. sometimes it has been viscerally reactive, claiming that some pastors just want to do their "own thing," or even that such centering on the eucharist implies fetishism.
i am grateful to observe instead that the conversation has become more civil as the involuntary fasting from the lord's supper extends into many weeks. more often now "both" sides recognize that they argue from common conviction: that we dearly cherish (rather than fetishize) the eucharist. still, i fear that a higher love has been missing from much of the conversation. i would prefer not to join the public conversation insofar as it already seems premised on privileging doctrine over human wholeness. i do not like the terms on which the debate is set: "pro or con, care for holy things is more important than care for human lives." it is a hidden premise with a faulty disjunction. it forgets that lutheran doctrine has always carried within itself the quality of a quatenus: we hold and must hold certain things doctrinally high insofar as they convey the promise of the gospel, and the gospel itself holds highest god's loving intention for the wholeness of human being, what the gospel of john thematizes as abundant life. so i join with a different premise. god's intention that the gospel be proclaimed in word and act to bless human beings with wholeness of life now and forever is the point. this requires that we be freed from self-preoccupation so as to serve others in the same love with which god embraces us all. luther, of course, as in his treatise on why christians should not flee in a time of plague, reminds us that self-care is required for other-care. that treatise defined in stark relief the ultimacy of the office of ministry's call to loosen and break the bonds of despair and anxiety as zealously as we can, as loving service in itself and as reinforcing god's algorithmic formula in our temporal terms for the health/salvation (salus) of all; you know, "god's work, our hands." so much for prolegomena. but what about the weighty doctrinal loci that we also sincerely hold dear (me included, if that is not clear to some) and that bear on the presenting question? these include justification, the church, the sacraments, and the office of the ministry. these are the first and primary steps in the augsburg confession and, indeed, display a particular logic or trajectory we sometimes fail to see. further inputs for our thinking include some basic anthropology and some beyond-basic metaphysics. many on both sides have written fine constructive theology on the matter in more general and popular terms. i will explore these lutheran premises with an eye for those whose questions are more dogmatically impelled. i will conclude with a coda in praise of the mystical body of christ, our appreciation of which is regrettably understated, if extant at all. no mere editorial (however longish) such as this can explore these loci fully. but i hope i can add some nuance to a mutually respectful dissensus. i have long sloganized that the vocation of the church is "the objectification of justification." human beings are fickle folk. emotional dis-ease routinely subsumes rational equanimity. anxiety is a chronic condition. when stressors of many kinds set us off, we can be locked into moral and spiritual trauma. all the dis-ease (and diss-ease) is caused in some way by sin, ours or another's. ptsd, moral injury, and spiritual trauma are contemporary names for the manifestation of the ancient general category of sin. depression and despair collude. one's whole physical and emotional and moral being is inhabited by these demons, and only an-other can evict them.
in other words, we are captive to our subjectivity, and only an objective other can save us. concomitantly, only an-other carrying an-other's word in-with-under other objects objectively brings the saving grace word, spoken and acted, to us. the primal human need for a good word from outside oneself sets the initial logical ordering of the augsburg confession (ca). it makes sense, of course, for the first article to be about god. we start with the ultimate, however abstract the concept may be. then there is history, that is, sin and alienation. with article ii, in other words, the topic is more empirical, concrete, "objective." not to belabor the point, but the confession gets ever more "objective." jesus is our real and accessible rescue from alienation (iii). justification is the lutheran grammar to speak that (iv). the church (v), then, is the historical and empirically objective "paying forward" of justification as the very body of christ that evokes the life of new obedience (vi) and "is" church only as and when it proclaims the word and administers the sacraments (vii). the trajectory in this ordering underscores that any rescue of humanity from its alienation from god and self must come from a historical, palpable "other." oswald bayer (a lutheran's lutheran) states the same. the augsburg confession and the smalcald articles insist that justifying faith "comes in the promise of the gospel and the alien righteousness (iustitia aliena) of christ that comes with it only in this manner must always receive proper emphasis." the gospel is never one's private possession. in other words, subjectivity cannot have the day (martin luther's theology: a contemporary interpretation, , p. ). external markers constitute the sacramental event. in the lord's supper those are (a) "the social and concurrently natural-cultural moment" of shared eating and drinking; (b) the actualization of a "definitive communal relationship between god and humanity" taking place within a physical assembly; (c) convened by and "through the performative word that has been addressed" to the assembly through bread and wine; (d) the whole action of which is empowered by the presence of the resurrected crucified jesus (bayer, pp. - ). then the perhaps surprising remark: the public external character of this word event and the private freedom of the individual "are correlates and empower and support one another." religion is surely not a private affair. but neither is it a form of heteronomy. alien righteousness must assume priority, but neither is it coercive (bayer, ). hold high the objective otherness, the alien work of god. yet do not dismiss the integrity of the communicant's trust and desire. do it all with and within the objectivity of what is physical. there are two points in this fine summary that bear especially on our subject. the first concerns the relationship of physicality, communication, and location. as much as we emphasize the priority of the objectified grace of god in the elements of water, bread, and wine, it is puzzling to suppose nevertheless that the finite object that conveys grace (finitum capax infiniti) is bounded. bounded by what? well, it has been said forcefully in this debate on "virtual" communion that it is only legitimate when one assembly gathers around one loaf. that assembly must be physical and gathered in one place. but other scenarios that challenge that point are very familiar to us.
suppose that the assembly is physically present but is numbered in the thousands or even tens of thousands, as with churchwide assemblies and youth gatherings. there, of course, no one questions the subjectivity of adolescents gathered around and given the body and blood in the forms of hundreds of loaves and hundreds of cups. one loaf and one cup are lifted up at the central table, and thousands of morsels and sips are consumed by de facto house churches without walls around the stadium, even behind walls and in other rooms by "overflow crowds." also, the distant eyes in the last row of the upper deck would not even be able to see the loaf and cup lifted were it not for the jumbotron screens hanging high over the arena. communication happens in multiple modes, most personally at distances of less than six feet. after that, from hearing aids to massive amplifiers, bluetooth bridges, towers and satellites convey the same grace in and with sound and light waves; the same intended original intimacy of god's voice in human voice to human ears still says "for you." why start and stop with one loaf and one very local assembly? of course we prefer that. in our subjectivity we prefer that. we respond more readily to the very familiar, to what has become intimate and intuitive. but infinity does not stop at walls; the real incarnate and divine christ shows up in emmaus, in closed upper rooms, to roman military converts, under the floorboards with wwii american prisoners in the philippines (a true story), and shares his incarnate divinity (communicatio idiomatum) wherever christ pleases. does the power of god, which for fickle human consciousness necessarily begins at the physical, end with the physical? well, yes, actually, but for comfort's sake christ does so even over great distances, metaphorically concomitant with entangled quantum particles. as bayer avers, the sacramental action begins with the physical but addresses (and redresses) subjectivity. the sacrament cannot begin with subjectivity. but enfleshed grace means to go to and through subjectivity so as to move and change the receiver ever more into christ. remember (re-member): christ counters zwingli's astral-projection direction and, as promised always, comes to us. the point of counter-precedents to the norm of one loaf in local assemblies in real time is very clear. what we already do already proves agreeable exceptions to the norm. we have already practiced (enthusiastically so!) virtual communion. and, of course, we have always found ways to commune those who are sight and hearing impaired. we do not insist on an "ableist" assembly. the objective character of the eucharist is never purely so. it cannot be, because communication is not like that. thank goodness, still, the accent on the "other" remains more than on the receiver. new mass communication software platforms do not change that accent. if we argue that they do, we reveal our bondage to an aristotelian metaphysical stipulation of eucharistic conditions that fogs our memory that christ's promises hold wherever he wants, including in the space/time-relative and quantum qualities of postmodernity. perhaps the real error in this debate is not that we lack real communication and presence to each other when communion happens over longer distances, but that we have attached the word "virtual" to it. modalities have changed because metaphysical understandings have changed.
we are not talking about donning headsets and entering an alternate reality, as if being church were like going to the "feelies" predicted by huxley. we are talking about a real objectified message of forgiveness and liberation and re-union into a holy and incarnated comm-union that is just as real as the most localized assembly of two or three people in one space. since the stone was moved, we have always said this; we have always prayed this. our communion at the table happens with the saints and angels of all time and space. "with angels and archangels" we lift and consume bread and cup. might we not be at that blinking point of awakened insight now of actually converging our metaphysics and communion practices with the mystical poetry we have sung for millennia? let us just say it. the eucharist has always been "virtual," and the communion of saints has been our infinitum capax finiti. we physical and fickle folk are embraced by a boundless communion that graces and feeds our return to the holy and beloved community. indeed, we physical folk are incarnated with christ no matter the visible or invisible walls. "virtual" in this frame does not mean fake. nor does it mean "spiritualist." the subjectivity of human perception does not, and is commanded not to, place bounds on the precise objects with and through which god gets to us. "virtual" here at least connotes the extension and incarnation of christ's physical body and blood beyond artificial boundaries of time and space, as the risen christ first did with walls of stucco and wood. the ubiquitous christ will be the incarnate christ, and vice versa. one other predicate of "objectification" requires attention, at least for now. it is the function of the eucharistic presider, and so bears on the office of ministry. the reformers were clear that all the baptized are of the priesthood of all believers, and that the baptized are thus also servants in the spiritual estate, not only the temporal. but ca v is not thereby collapsed into ca xiv. it is for the sake of good order that the pastoral office is distinct from the service of all the baptized. what all can do by virtue of their baptism cannot be done by all at the same time. the result would be chaos. so lw : : "because we are all priests of equal standing, no one must push himself forward and take it upon himself, without our consent and election, to do that for which we all have equal authority. for no one dare take upon himself what is common to all without the authority and consent of the community." and so it is that a pastor is one who is rightly called (rite vocatus) by his or her community of faith. the call and the command make one a pastor of the divine word (lw : ). this call comes from the faith community and is for the good order of the faith community ( corinthians : ). there are vital presuppositions in the lutheran understanding of the pastoral office. "vital," remember, has to do both with being "central" and with being "life-giving" (vita). we have already reviewed the necessity that the gospel come from outside the receiver's subjectivity. someone, an-other, a selected and called one, must proclaim the gospel in its purity and see to it that the sacraments are administered rightly. that someone, the pastor, attends not only to the proclamation and the giving. he or she attends to the subjectivity of the receivers too. pastors know their people. knowing their people implies a reciprocal relationship of trust between pastor and people.
the shaping of sermons and, i submit, the contextual understanding and framing of a sacramental occasion require such mutual trust. the communicator of alien righteousness, in other words, is herself not at all alien to the receiver. good order does not imply that the pastor "does it all," however. pastoral "control" of the whole of parish life, including the manner of its sacramental worship, does not belong to this conception of ministry. one wonders whether a pastor does not exhibit a privileged clericalism when the self-control volume level goes past , as if one held a personal sense of ontological difference between the pastoral office and the rest of the baptized priesthood. the life of a congregation managed by such a relationship may happen on time, concordant with a stiff lip of upper reine lehre. but that is not necessarily good order. good order resonates in a faith community when people are known for what they can do well for each other, including tasks of prayer and lay eucharistic ministry, even lay preaching, and maybe even, on rare occasion, lay sisters and brothers communing each other under the express permission and direction of the pastor. characterizing all of such a congregation's life is a living trust, a deep and respectful loving relationship that shapes the worship community and precedes its gathering. this is why "online communion" can be understood to be as real as a more "concrete" local assembly. the figure on the video monitor is known and trusted. the gospel proclaimed and the word acted in the words of institution, to be sure, are effective no matter the trust level (otherwise there is that donatism matter). but the christian community, already in relationship and rightly ordered, completes the circuitry, however dispersed in space and time the already palpably related assembly is. let me be clear. it is always "better than good order" to commune together as one particular assembly within the shining affinity and infinity of all the saints and angels. that is the normative good for our subjectivities. but in a time of exigency, when a fast from the eucharist is involuntary, it is not pastorally caring, after a surfeit of heteronomously imposed fasting days, to tell the flock to "remember what you ate" as if that were the same as the call to "remember your baptism." we need the manna. god means us to have food for the journey in the wilderness. and when the counsel in such days says that prayer and meditation and listening to the word are "just as effective anyway," does that logic not undercut the very reasons the same counselors once argued for regular celebration of holy communion? does it not in itself betray a favored "spiritualism," if not even a closeted gnosticism? yes, god comes to us in many ways, and can be seen to do so in many places, but only after christ is revealed to us in the indissoluble nexus of word and sacrament (so wrote luther when writing on the pun of "crystal," christall, christ-in-all). there comes a time in an exigency when god's people, threatened deeply in our subjectivity during just such times, gotta eat. ignatius of antioch's apt synonym, "the medicine of immortality," is meant from faith for faith in such days. much more could and should be said, but it is not necessary here. the evangelical effect of "virtual" communication (though not yet virtual communion) has been so very consequential and beautiful in the life of the congregation i am called to serve. great stories can and will be told.
"virtual communion" is a responsible step in in extremis times for the encouragement and continued formation of the faithful individually and together. i do not mean this as "normative," as if this should regularly replace the side-by-side body language of the local worship assembly. i intend this argument as the exception that proves the rule. it is an interim measure that in and by the holy spirit's power will console and move from "inside-out" god's people further in the way of trust and loving service to this dis-eased world. and when the spirit brings us as a local assembly more palpably back together around the font and table, we will be the more grateful that we were re-membered as christ's body even as we were too long apart. university of houston doi: . /dial. "the voice of one crying out in the wilderness" there is something strange going on in our weather system. for the past months our island, located in the northern part of the atlantic, has been literally closed down more than dozen times. this means that there have been no flights, international or domestic, roads have been closed down (either in parts of the country, or the whole country), schools have been closed, and electricity has been out in certain areas, for hours up to days, all because of the weather. this is a huge concern to all of us who live here in iceland, and even as i write this, we are in the midst of one of these events. there is really nothing "normal" about it, and questions about its relationship to a changing climate are compelling. but it is too soon to draw any conclusions. patterns have to have time to develop. at the same time, our glaciers are melting, right in front of our eyes, because of increase in temperature, which also are warming up the sea around the island, causing big changes, and real threats, to our fishing practices, as some fish species are leaving, seeking cooler waters elsewhere, while new arrive. it takes time for the fish industry to adapt, and the uncertainty is challenging for people, especially in the small fishing towns around the country, to say nothing of our whole economy. like everywhere else, icelanders have been slow to wake up to the seriousness of a warming climate, but gradually people are realizing that this means that life cannot go on like usual any longer. the swedish teenage girl, greta thunberg, who started school strike for the climate in august of , has made a huge impact in our country and elsewhere, by directing people's attention to the alarming reports scientists have been writing for years about the serious impact of global warming. because of greta, young people in iceland are starting their own school strike for the climate, and by doing that they have put much needed pressure on our government to act according to their commitment to the paris agreement, from december . it has been breathtaking to watch what has happened since the greta thunberg school strike for the climate started, less than years ago, outside of the swedish parliament. greta was only fifteen years old, and this was her own initiative. her parents supported her, although reluctantly to begin with, because they worried about her health and how the publicity would affect her. her aim was to remind swedish politicians of the climate crisis and their responsibility to react to the crises, three weeks before the fall election . 
after the election greta decided she would continue her strike until the day the swedish government fulfilled its promises to meet the conditions of the agreement reached in paris and reiterated at other climate conferences. so her strike goes on, but she is certainly no longer by herself. what started as a one-person act has gradually developed into a worldwide movement which, it is safe to say, has made a greater impact than any other climate initiative. by speaking in clear terms and making radical decisions, like not flying, greta has managed, at her young age, to bring people all over the world out into the streets, demanding responsible action from those in charge, as well as from individuals who contribute to the climate crisis through their daily behavior. there is something about her either/or rhetoric that makes people pay attention. she herself has said that the reason she tends to see things in black and white is that she has asperger's syndrome. she has also told the story of her childhood, and how she became severely depressed after hearing about climate change at an early age and realizing that people were not doing anything about it. after suffering for years from eating disorders and selective mutism, she was able to overcome her life-threatening condition, and start to eat and talk again, by speaking up and actively fighting for a responsible reaction to the climate crisis. it is clear that for greta this is about life and death, and that is the message she wants to convey. during the past year and a half, greta has been invited to speak at numerous rallies, as well as at exclusive meetings such as the european parliament, the houses of parliament in london, the united states congress, and the united nations. true to her black and white worldview, greta insists that we have to stop our emission of greenhouse gases; "either we do that or we don't" has been her repeated message. there is something profoundly prophetic about her "clear text" rhetoric. it is not simply about actions but also about a change of heart and mind. speaking to the european economic and social committee in brussels in february , greta challenged her audience to do their homework, because "once you have done your homework," she insisted, "you realize that we need new politics, we need new economics where everything is based on a rapidly declining and extremely limited remaining carbon budget." but, to greta, "that is not enough." what is needed is "a whole new way of thinking." instead of political systems based on competition, we need to cooperate and work together and to share the resources of the planet in a fair way. we need to start living within the planetary boundaries, focus on equity and take a few steps back for the sake of all living species. we need to protect the biosphere, the air, the oceans, the soil, the forests. for greta there is no compromise, no "lukewarm, and neither cold nor hot" way of thinking (rev. . ); you are either for or against, either willing to save the planet, and our future, or not. the burning house is a compelling metaphor, painted in strong colors. "our house is on fire. i am here to say, our house is on fire," greta said in her address to the world economic forum in davos in january . she concluded her speech with this powerful, no-beating-around-the-bush message: we must change almost everything in our current societies.
there is something not right when our kids and teenagers are missing out on school in order to protest and fight for the future of our planet, our common home, and the future of all of us who live here now, as well as future generations. there is something strange going on, and the young people are getting it. once again civil disobedience is proving to be an important tool against unjust systems that protect the few and do not care for the rest. this is what climate justice is all about. it reminds us that those who have contributed the least to the climate crisis are suffering the most: the poor, women, and children in the global south. those of us who belong to the privileged part of the world need to start thinking globally; we need to look at the bigger picture. we cannot continue to think just about ourselves and our economy. greta thunberg argues that what it all boils down to is the choice between money and the environment. there are multiple ways we can respond responsibly to the current crisis we are faced with, not only highly technical and financially costly solutions. a book called drawdown: the most comprehensive plan ever proposed to reverse global warming (2017) lists, for example, education of girls as the sixth most important solution, and family planning as number seven, right after refrigeration, wind turbines, reduced food waste, plant-rich diets, and tropical forests, and before solar farms and rooftop solar. it is no surprise that people who are paying close attention to the discourse about the climate crisis are worrying about the future. eco-anxiety is a growing concern, especially among the youth. melting glaciers, higher temperatures, and severe storms are among the signs of climate change that are raising the awareness, and even anxiety, of the people in iceland. it is important that people realize that something can still be done. greta thunberg has warned all of us that talk about hope can indeed keep us away from action. at the un climate change conference in katowice, poland, in december 2018, greta gave a powerful talk in front of world leaders, climate scientists, and other participants. she concluded her talk with these words: until you start focusing on what needs to be done rather than what is politically possible, there's no hope. we cannot solve a crisis without treating it as a crisis. we need to keep the fossil fuels in the ground and we need to focus on equity. and if solutions within this system are so impossible to find, then maybe we should change the system itself? we have not come here to beg world leaders to care. you have ignored us in the past and you will ignore us again. you've run out of excuses and we're running out of time. we've come here to let you know that change is coming whether you like it or not. the real power belongs to the people. a prophetic voice; a challenging, encouraging, and compelling voice. but will she be able to move us into action? only time will tell. trauma, eco-spirituality, and transformation in frozen 2: guides for the church and climate change i recently became captivated by the film frozen 2. i was in florida for a psychotherapy professional training and one night decided to take myself on a date. nothing fancy; i was intentionally looking for something not too thought-provoking or activating: just dinner and a movie. little did i anticipate how disney's new animated film would capture my imagination, heart, and theological intrigue.
while there is enough material in my thoughts and consciousness to fill out a book (keep your eyes out for one in the future; the proposal is already in the works), i wanted to share a few reflections on how disney's frozen 2 can provide a lens for trauma, transformation, and the essential call for our faith communities to step more fully into an eco-spirituality as a means of fully incarnated repair. warning: spoilers ahead! first things first. "trauma," as i am using it, refers to any experience that overwhelms our capacity to respond to the challenges in our environment and results in either an over-constriction or an over-expansion. trauma is less about the event or experience itself and more about the ways in which it impacts us as individuals, communities, or global ecology. when faced with a significant threat that overwhelms our capacity for resiliency, we are at risk of developing symptoms of traumatic response. in its simplest form, traumatic responses cause us to be smaller or less than we truly are in an effort to protect ourselves from further wounding. we either shrink to escape further blows or we build and reside behind walls to project a larger image. trauma and transformation are the beating heart of frozen 2. as olaf wisely queries, "did you know that an enchanted forest is a place of transformation?" just as trauma entrenches us in protective patterns, transformation calls from the beyond, into the unknown, and into the promise of authentic flow. transformation often requires us to enter into liminal places, the spaces betwixt and between, where our familiar habits are tested. these spaces, either geographically or relationally, disrupt our habits of constriction or fleeing protection and generate opportunities and wiggle room for the new. they require courage and offer hope for connection, fullness, and completion. the heartbeat pulsing through the film begins in the opening scene in which elsa and anna play with snow toys. anna explores the narrative that love, in this instance between a distressed damsel and a "fancy" prince, will save the day. elsa, meanwhile, weaves a story of trapped fairies and "the fairy princess who breaks the spell and saves everyone." their play prompts their father to tell a story of a real enchanted forest and how he became king. his story paints a picture of colonialism and subsequent acts of violence that rend the connection among the elemental spirits, the northuldra people (based on the sami people), and arendelle, and that begin a cascade separating anna from elsa, and their family from their community. while initially told from the perspective of king agnarr, the driving quest of the film is to discover the truth, brave the trauma of the truth, and make amends or reparation, thus breaking the spell. at the center of the tale is elsa's quest to follow the lure of the voice that calls her into the unknown and toward the source that holds memory and truth. along the way elsa must show her power to befriend the elemental spirits and witness the pieces of truth they hold. from the wind, she sees her parents as children and meets the northuldra people. the fire spirit shows her that she is not alone in hearing the call. the water spirit challenges her to recognize the limits of her power and to depend on another to go the distance. the earth giants, through the prompting of anna, break the wall that is the origin and symbol of violence and mistrust.
it is only through befriending and partnering with the elemental spirits that elsa finds her way home to who she fully is, and anna steps into her power. the origins of trauma and separation in frozen 2 are located in the deceptive "gift" of building the dam, which stops the flow of water and connection, thus weakening the elemental spirits and leading to violence. transformation occurs by venturing into the unknown, befriending the elemental spirits and indigenous communities, courageously witnessing the source violence of trauma, and taking concrete actions to break the spell and restore resiliency and vitality. so, what wisdom can the church glean from frozen 2? first, in the midst of our global ecological crisis, we must find the courage to venture into the unknown. what are more sustainable practices? how do we speak with confidence about the limits of our solidified patterns and hope of restored connection? second, we need to find the fortitude to witness the ways in which we have enacted violence against one another, the earth, and non-human beings, and the conviction to change, dismantle the dams we have built to enhance our power while limiting the magic of the natural world, and make reparations. humanity's chronic history of violence has profound implications for planetary health and eco-diversity. as we move forward in this critical period of ecological viability, who will we show ourselves to be? will we extend our awareness of the creation narrative in genesis and live into a renewed confession that yhwh created the earth and all of her creatures and they are good? as the fires in california and, more acutely, australia have made clear, the loss of animal life as a consequence of our unchecked impact on climate is devastating. will we protect the children of the earth from our unfettered goblin of destruction or will we break the spell and save everyone? as communities of faith, we have an opportunity to mend the traumatic wounds of colonial violence, humbly seek forgiveness and understanding for our histories of collective violence toward indigenous peoples, and offer reparations (in whatever form is appropriate) to our intra- and interspecies siblings. we must find the courage to befriend those who frighten us in their efforts to protect themselves from our histories of violence and to join with them to heal the traumas that threaten to freeze and drown. transformation is formed through the courage to venture into the unknown, the willingness to listen, witness, and befriend, the moral fortitude to break down the walls that were erected in fear, and the resolve to connect ever more fully to the elemental spirits of the planet and, through those connections, to who we are meant to be. we are the ones we have been waiting for. can we fully step into our power and show our self, made in the image of the divine? grounding flight wellness center from the redwood forests and the cedars of lebanon to the tree of good and evil in the garden of eden, far back into the groundswells of the archaic human imagination, the experience of "treehood" (paul tillich) has claimed human hearts and minds all around this good earth for countless generations. i myself, in my own mundane way, have been captivated by existential encounters with trees, real or imagined, ever since i can remember. but much as i have self-consciously and enthusiastically lived with, thought about, and contemplated trees my whole life, i have never explored that experience itself.
i want to make a start at doing that here, with the hope that this might prompt others, particularly members of american christian communities, to go and do likewise, in fresh ways. the first tree i ever fell in love with was a lombardy poplar. i grew up in an exurban setting, near buffalo, new york. one side of the family land was lined with these tall, cylinder-shaped trees, which had already grown to full height when i was a child. usually without my parents knowing, i would on occasion climb up one of those trees as high as i dared. the branches were fragile, but, for a slim youngster, that climb was safe, or so i thought back then. on those ascents, i often imagined myself to be a kind of heroic adventurer. i would station myself maybe forty feet above the ground for a spell, as i surveyed our house below and the fields beyond, and felt the wind bending the tree and brushing my face. it was a boy's dream. for those moments, i lived ecstatically, in another world, thanks to that poplar tree. in retrospect, i can imagine that those tree-climbing adventures must have had an important psychological function for me. i was an unhappy child at times, a condition that i only began to understand some years later when i was in therapy during my college years. high up in one of those trees, i suppose that i was able to leave those familial tensions behind, if only for a short time. in therapy, i came to understand that, among other family dynamics, i had had a conflicted relationship with my father. he was a kind and caring man, but i began to realize that he was also distant at some deeper level. enter the world of trees. perhaps thanks to his german heritage (germans typically cherished their parks, perhaps more than other ethnic groups), my father loved trees. one of his uncles, who was also of german descent, had a top position in the buffalo parks department in the late nineteenth century. that uncle oversaw the implementation of a plan to plant what turned out to be many thousands of sweepingly gracious elm trees along both sides of many of the city's parkways. in those days, long before the onset of dutch elm disease, buffalo was elm city without the name. that history behind him, my father often found times to take his mind off his busy professional life (he was a dentist) by planting and caring for trees all around our sizeable property. and he often enlisted me to work with him, which was always a joy for me. those were some of the times when i truly felt close to him and when, i believe, he truly felt close to me. adventurous joy with those poplar-climbings, and warm personal bonding with those tree-plantings and that tree-care with my father: those were some of the deeper experiences of my younger years which i came to cherish as i grew into adulthood. also, during my high school years, my family had the means to travel to many of the nation's great national parks during extended summer vacations. under my father's tutelage on those trips, i came to affectionately know many trees: the majestic redwoods of california, for example, or the effervescent quaking aspens of utah. during the years of my doctoral studies, i found a way to read every volume of the collected works of john muir, even though those works were obviously not immediately germane for my chosen field of academic research, twentieth-century german theology. john muir then led me to the much more famous henry david thoreau.
i think, in retrospect, that i read muir first, and thoroughly, because he was so deeply imbued with calvin's theology, whether he fully understood that or not, and since, by that time, i had immersed myself in calvin's thought, along with luther's, both of whom, i came to believe and then subsequently to argue, were dedicated champions of the goodness of creation and the glories and the mysteries of the natural world, in particular. i read muir and thoreau, ironically perhaps, at the same time that i was working on my doctoral dissertation on the great karl barth's (highly problematical) theology of nature. barth's theology as a whole, seminal indeed as it was, never helped me to understand, much less to affirm, my longstanding love for trees. muir's and then thoreau's encounters with trees did. the result was a theological proposal, on my part, for a new way to understand my love, or anyone's love, for trees. barth had adopted what was, at the time, a more or less conventional theological way to understand human relationships with other creatures, a theme developed by many thinkers in his era, but which was most often associated with the name of the jewish philosopher martin buber and his book i and thou. buber contrasted an i-thou relation, which he thought of in intimate, personalistic terms, with an i-it relation, which he defined as an objectifying relationship between a person and a thing. so, when someone says to his or her partner, authentically, "i love you," that is an i-thou relationship. when he or she picks up a hammer and hits a nail, that is an i-it relationship. buber and others (among them, barth) who gave this way of thinking currency were eager to protect and then to celebrate the authenticity of genuine human relationships and to reject any kind of objectifying relationships between humans and other humans. humans should always be regarded as ends-in-themselves, according to this way of thinking, and should never be treated as objects to be manipulated. what, then, about my relationship with trees? the i-thou, i-it way of thinking does not account for my love of trees. trees are not persons. you cannot communicate with a tree the way you can communicate with your spouse, as a thou. are all trees, therefore, in truth mere objects? was that lombardy poplar which i adored as a boy merely an object i used, like a ladder, to climb up into the sky? or was it, in truth, a creature in its own right, worthy of my respect, even adulation? wasn't it the case that i not only clung to that tree, forty feet above ground, for safety's sake, but also to embrace it? that tree, for me, back then was no mere object. it was something else. but what? buber recognized this problem in an appendix to the second edition of i and thou. he even imagined a relationship to a tree that is somehow akin to an i-thou relationship, but he self-consciously chose not to try to think that through. i decided that i myself would give it a try. in my first scholarly article, i argued that a revision of buber's thought was required. hence my title: "i-thou, i-it, and i-ens." i wanted to be able to talk about the trees that i loved as ends-in-themselves, no longer as mere objects. in that article, to illustrate i-ens relationships, i drew attention not only to the praxis of thinkers like thoreau and muir with regard to nature, but also to luther's and calvin's visions of earthly creatures.
both reformers, like thoreau and muir, portrayed those creatures in non-objectifying terms and indeed celebrated those creatures as ends in themselves, as, in some sense, charged with the mystery of god. luther saw miracles in nature everywhere and stood in awe of them. calvin considered the whole of nature to be a theater of divine glory and celebrated that glory enthusiastically. in ensuing publications, i employed the constructs of i-thou, i-it, and i-ens as a kind of silent interpretive key to open up the whole sweep of classical christian theology in a new way. i argued that, notwithstanding lynn white jr.'s then widely hailed critique of the christian tradition as ecologically bankrupt (alleging that christians have almost always treated nature as a mere object, something to be manipulated), we can trace a major christian tradition that richly affirmed the natural world in its own right. that way of thinking i could have called the ens-tradition. in retrospect, i think that my reflections about buber's way of thinking and my historical investigations were existentially dependent on my early encounters with treehood. likewise for my conversion to environmental activism, along the way. that happened, emphatically, after i first began to work my way through books like rachel carson's silent spring and stewart udall's the quiet crisis in the early 1960s. it was natural, as it were, for me in those days, and subsequently, not only to love trees in their own right, but also to do all that i could do to protect them, along with the whole world of god's earthly creatures. but my life with trees by no means came to expression just in youthful encounters or in mid-life scholarly writings or even in longstanding commitments to environmental or ecojustice activism. i also have been blessed throughout my life by rich encounters with a range of particular trees. this story has unfolded in several locations, but i want to mention only one here: the old farmhouse at hunts corner, in southwestern maine, which has been a home away from home for me and my family for more than forty years. at hunts corner, notwithstanding the human incursions here and there and the ominous pipeline in particular, i have developed cordial relationships over the years with many of the trees on our land, i-ens relationships as i think about them. i have learned to call many of those trees by name and sometimes greet them, when no other humans are around. our plot was in all likelihood farmland long ago. the west side of our land is marked by one of those famous stone walls that defined the farm fields in historic new england. the oldest trees tend to be near that wall or to be growing from an adjacent, steep and stony incline, which never could have been farmed. one mother oak, in particular, has fascinated me ever since i first noticed it. it is enormous. i cannot put my arms even halfway around its mammoth base. the poor tree has been hammered and seared over its long lifetime by the elements. the top of its central trunk was apparently sheared off, perhaps decades ago. but the tree has lived on. near that mother oak grow a number of smaller but nevertheless sizeable descendants. i once walked through that area with a neighbor, and he eagerly explained to me that i could make a lot of money if i were to have those oaks cut down for commercial sale. grand old towering mother white pines also grow in that area and elsewhere on our land.
my brother, gary, and i once cut down one of those giants after it had died, for safety reasons. i did not want it to fall on anyone, particularly on my grandchildren, who sometimes had ventured out near that tree, at the edge of the forest. treehood should not be romanticized. a tearful older father once told me, in a long, quiet conversation, how he had lost his daughter to a tree, in the prime of her life. this was the story that onlookers reported. his daughter and her two toddlers had been picnicking in a park. on their way home, she was watching them run on playfully ahead of her. at one point, she saw a large tree falling down onto the children. she ran desperately to push them out of the way, which she did. but she herself was killed. that story was in my mind, as were my own grandchildren, all the time my brother and i were working to take down that immense but dead pine tree at the edge of our forest. huge it was. gary and i barely had the strength together to roll pieces from that tree's trunk into the woods to their final resting places. early on in my family's tenure at hunts corner, i began to carve out paths in the back forest, where that mother oak and a number of the great white pines live and where american beeches are now moving in. closer to our house, i have planted a variety of individual trees over the years or occasionally cut away competitors, in order to allow some extant trees to flourish. perhaps the most striking of all the tree planting that laurel and i have done over the years was the operation that she and i once performed on what was, for us at the time, a nameless sapling. it was march, early on in our experience with the world of rural maine. what we did was sheer, youthful folly. laurel had decided at that time that, come the next spring, we would turn over a plot just back of our house, where she would begin to create a perennial garden. but there stood that large sapling right in the middle of that space! without much thought, we decided that we would try to move that tree, right then. the ground was frozen, of course. i had to use an ax to cut out the ball of the roots. once cut free, we could barely drag that ball out of its earthen socket. now what? we decided to roll it maybe forty yards to the western side of our land. there, using the ax again, and a pick-ax, i hollowed out a cavity for that big, frozen root ball. finally, we were able to slide that sapling and the mass of its frozen roots into that hole. it was only then that it dawned on us that we had planted that tree close to the church next door, a pristine, white, wooden building, which easily could have appeared on some new england calendar cover. but that was that. never mind that sapling. the church building appeared to be as picturesque as ever. we hurried on into the house to warm ourselves by the franklin stove. little did we know back then that that nameless sapling, many years later, would magically turn into a graceful and fulsome red maple whose sumptuous branches would completely cover our vista of the whole church building! that iconic structure is gone from our angle of vision for much of the year. there may be a parable hidden in this ironic tale, but, if so, i have yet to discover what it is. sadly, the sugar maple i planted at the front of our property many years ago recently died. it was painful for me to observe that large and lovely tree die over the course of several seasons and then to witness it standing there, barren, a skeleton, all by itself.
true, stories like these sometimes have a blessed ending, according to one of the central themes of the christian faith: from death comes life. over the many years that we have lived at hunts corner, mostly from the early spring through the late fall, we have used our old iron stove in the kitchen steadily, sometimes even on cool summer nights. and we obtain fuel for those fires almost always from standing deadwood, which we cut down at various places on our land and then drag in, cut up, split, and stack. that was to be the story of that dead sugar maple. we would give thanks for it one more time, so i thought, as it would later warm both our kitchen and our hearts. but that dead sugar maple's transition to firewood was not as smooth as i had anticipated. that project turned out to be an adventure. for many years, my brother and i have helped each other with forest and other chores at our respective rural homes, his in western connecticut. he learned to love trees the same way i did, working with our father on the grounds of our exurban buffalo home. after various childhood and adolescent skirmishes, some of them harsh, gary and i have remained close over the years and have grown even closer in these our golden years, especially by assisting each other outdoors either in connecticut or maine, for days at a time. that towering dead sugar maple had to be cut so that it would fall away from the street, not onto the street, where it might block or even hit some speeding car that was passing by. with some anxiety, i admit, i nevertheless trusted gary to cut that tree just so that it would fall precisely where it was supposed to. i had witnessed gary "place" (his term) falling trees in just the right locations many times. when this tree began to undulate, however, it did not immediately fall away from the street as gary had cut it to fall. the tree just stood there trembling, not falling in any direction! what was going to happen? with some sense of urgency (!) and with a long rope tying him to that oh-so-perilously oscillating tree, as it was readying itself to fall in one direction or another, gary dashed to a spot far away from the street and then pulled on the rope, again and again, until the tree finally fell toward him (it crashed down a few feet to his left!) and not onto some unsuspecting car that might have been speeding up or down our road. quite a feat for one who was at that time about to turn eighty! as i am constantly aware, trees are not always our friends. but thankfully, in this case, gary was able to coax that tree in a friendly direction, narrowly escaping injuring himself or anyone else. i have saved for last what is for me the best news about treehood. some years ago, laurel and i purchased a then ten-foot-tall purple beech sapling and planted it in our hidden garden. long before, we had come to adore the gigantic hundred-year-old purple beeches we had encountered in mt. auburn cemetery, near our massachusetts home. in this finite world, those great trees are, for me, the best natural symbols of eternal life that i can imagine. laurel and i have decided to have our ashes interred at the base of our own purple beech, which now rises high above us in the hidden garden.
i have affixed a foot-high celtic cross, made of cement, at the base of our purple beech, which one day will not only mark the place of our buried ashes, but will also announce the truth, for those who have ears to hear, that has claimed my own soul self-consciously since the first days of my theological study to these my octogenarian years, predicated on a reading of colossians 1:15 ff.: the crucified and risen lord is the cosmic christ, both now and forever: "…[a]ll things have been created through him and for him. he himself is before all things, and in him all things hold together" (col. 1:16 f. nrsv). i saw that cosmic christology in the figures and designs on the historic celtic crosses that i encountered during a trip to ireland with laurel, along with throngs of other spiritual seekers. i concluded then that the classical celtic saints were by no means essentially nature mystics, as many who have been fascinated with them in our time have believed. no, their spirituality of nature was consistently an eschatological celebration of the cross and resurrection. for the great celtic saints, the love of the seas and the earth and its creatures and the love of jesus christ, crucified and risen from the dead, is the same love, now and forever. hence i was overjoyed when i found and then was able to buy that cement celtic cross at home depot. i eagerly carried it off to implant it in the earth next to the purple beech in our hidden garden. i wanted to announce that someone believes (or that someone, whose ashes are interred there, once did believe) that that tree, marked by that cross, is, or was, for that believer the lignum vitae. i cannot imagine the story i am telling here ending otherwise, for i now realize that my world, from the days of my childhood on, always has been, is, and, i hope, always will be, the world of treehood. treehood, rightly construed, has a justice dimension. think of the remarkable work of nobel laureate wangari maathai (d. 2011), who started the green belt movement in kenya, which has planted many millions of trees in africa, in order to fight erosion, to create firewood, to give work to poor women, and, generally, to reestablish the health of the whole earthly biotic community. wangari's work presupposed that trees have their own standing, that trees, essentially, are not first and foremost objects for capitalist exploitation, whether directly, through commercial development, or indirectly, through the destructions wrought by impoverished peoples. nor, in wangari's perspective, were trees essentially a means for the wealthy temporarily to escape from the contradictions of modern industrial society, under the rubric of "ecotourism." carson, r. (1962). silent spring. new york: houghton mifflin; udall, s. (1963). the quiet crisis. for an account of my engagement with celtic spirituality, see santmire, h. p. (minneapolis, mn: fortress). the church is a global network. it has a presence around the world that is almost unsurpassed by any other organization or movement. the church has contact with other religions at all levels, and cooperates with a wide range of humanitarian organizations. it is in dialogue with world leaders, not least via the un system. the church has a presence in many places around the world that are not readily accessible. during crises and disasters, the church is often there before they happen, while they are happening, and long after the immediate relief work has been phased out.
this is an obligation in an era in which the world must learn to live with the climate crisis and its consequences. we know that those people who have contributed least to global warming are often those most severely affected by climate change. we know that social challenges such as poverty, migration, and the global health situation are directly linked to environmental and climate issues. there is a need for climate justice. the issue is how we humans interact with the natural environment, of which we are a part. we therefore have to take action based on what feels most meaningful in our lives. we must talk about the sacrifices that we can make together, so that our children and the children of others can have a future. the climate crisis is exacerbated by lifestyles that make greed seem like a virtue. resolving it will be difficult for as long as people and nature are viewed only from the perspective of economics and technology. only when we actually distinguish between our needs and our desires can we achieve fair and just climate goals. when will we learn to say, "enough is enough!"? what we think about and feel about nature really matters. is it a mechanism that simply keeps on rolling? an unlimited source of raw materials? our recreation area? our enemy? a place of endless harmony and balance? a system involving a constant battle for survival? how we relate to nature as creation reveals how we relate to the very basis of existence, which we call god. the churches in the east and the west have developed somewhat differing points of focus with regard to humankind and creation. put in simple terms, the western tradition has developed a deep trust in rationality and science. this has contributed to a demystifying of nature and of humankind's role in creation. its secrets were dissolved in measurability. humans came to understand themselves to be rulers of nature, rather than stewards who are responsible for and have to care for something that they do not actually own. the emphasis was put on humankind's function. theologians in the east have talked more about nature as a mystery that cannot be fully described, not even with the most excellent measuring instruments available in the world of science. nature meets us and shows itself to us, but never fully. as humans, we are part of this mystery. each human being is itself a miniature cosmos, a microcosm. here, the relationship is at the forefront. the western view has a tendency to see too little concreteness, and something romantic, in this approach. but the fact is that a full understanding of our role as human beings requires both perspectives: function and relationship, doing and being. it is a characteristic of being human that we can have an in-depth understanding of ourselves based on the relationships in which we are involved: to ourselves, to each other, to the entire creation, and to the ground of being itself. we can also gain a deep understanding of our mission as human beings, our function: why are we actually here? as we face the climate crisis, we need to focus on rational action inspired by the best science available, while also needing an existential understanding of how and why we feel and act as we do. destroying biodiversity; wrecking forests and wetlands; poisoning water, soil, and air: all these are violations of our mission as human beings. theology calls it sin. this sin arises from our inability to see the earth as our home, a sacrament of community.
our natural environment unites all the people on earth with every living thing, in a way that transcends any differences in faiths and convictions that may exist between us humans. experiencing the beauty of nature means a lot to us. but we are also created for another type of beauty: that people have quality of life, live in harmony with nature, meet in peace, and help each other. if we want to have an ecologically, socially, economically, and spiritually sustainable approach to the world (which we must have), individual or commercial solutions will never be sufficient. this is why spiritual maturity is now required. such maturity means being able to see the difference between what i want and what the world needs. it can understand that the climate crisis is rooted in human greed and selfishness. it can elevate us above fear, greed, and fundamentally unhealthy ties. if we want technological development, fair and just economic systems, ecological balance, and social cohesion to work together to create a sustainable future on our earth, we also need a conversion, a new state of mind. a renewal of our humanity (in the dual sense of the word). it is not sufficient for us to only address the symptoms if we really want healing and wholeness. like pope francis, we are of the opinion that we are in urgent need of a humanism that is able to bring together different areas of knowledge, including economics, to form a more integrated and integrating vision. science, politics, business, culture, and religion (everything that is an expression of humankind's dignity) need to work together to put our earthly home on a more stable footing. real stature among leaders and rulers of various kinds becomes apparent when, in difficult times, we can maintain high moral principles and focus on the long-term common good. in these days, the bishops of the church of sweden will be issuing a bishops' letter about the climate that highlights these issues in more detail. the climate deadline is coming ever closer. indecision and negligence are the language of death. we must choose life. give the earth the opportunity to heal, so that it can continue to provide for us and so that people can live in a world characterized by fairness, justice, and freedom. archbishop antje jackelén thoughts while sheltering … in the midst of a crisis it is easy to make statements that later seem unnecessarily alarmist. i do not think i am the kind of person who normally sounds alarmist (but who really thinks that they are alarmist?), but it is hard to imagine that covid-19 will not remake our lives in ways we never could have envisioned a few months ago. it is hard to imagine that our lives, collectively and individually, will not be forever changed. some thoughts: 1. i do not generally read god's wrath and judgement into current events and i am not prepared to do that now. that said, i find myself wondering if covid-19 will not change our lives in a manner similar to the tower of babel (genesis 11). this story is, among other things, an account of how a united humanity became divided. i wonder if covid-19 will not threaten to (further?) divide us. 2. others have written and commented about the relationship between covid-19 and climate change. many of these people are much more knowledgeable and smarter than i am. i plan to listen even harder to them. for much of my adult life, i have attempted to walk with a light environmental footprint (e.g., i have walked or ridden a bicycle to work for many years).
in the past year, my wife and i have doubled down on such practices and we walk with an even lighter environmental footprint. i have a feeling that others might be joining me in the future. 3. i wonder if covid-19 will not accelerate the already fast pace of secularization in western societies. the church, fairly or not, has been associated with the status quo, and thereby irrelevance, by many people (especially younger people). with "physical distancing" forcing worshipping communities to meet virtually and disembodied, can they matter? what is life together if it is virtual and disembodied? i am not a luddite who wants to destroy technological tools. i am, after all, writing this because of the miracle of modern computer technology. however, i think that social media and virtual life supplement, not supplant, embodied life. 4. i wonder if covid-19 might not reverse, or at least stem the tide of, the pattern of secularization and irrelevancy of the church. the church mattered in the early middle ages because of its commitment to caring for the sick and vulnerable. i am thinking, for example, of gregory the great, who, while he was pope, used the wealth of the church to feed the hungry and care for the poor. is covid-19 such a moment for the church? 5. i think about hospitals and monasteries in the middle ages. they were beacons and refuges for christians fleeing plague and pestilence, and places from which christians cared for others who were fleeing plague and pestilence. these hospitals and monasteries were outposts of civilization in wildernesses of savagery and barbarism. does the church need to reclaim that part of its history and heritage and make it more central to its mission and identity? 6. i think also of the babylonian exile. israel had to rethink what it meant to be faithful when it had no temple for people to worship in and bring their offerings to. what will it mean for us in the 21st century if we cannot gather together in the ways we have always gathered? i finish here with only six thoughts. god worked for six days and then rested. i will rest also with these six thoughts and observe a kind of sabbath. i will be thinking on god and the ways of god and who we are called to be. my seventh thought is a sabbath thought. it is a thought, such as it is, of worship, prayer, and contemplation. david c. ratke the corona crisis unmasks prevailing social ideologies the current covid pandemic shows that dominant ideologies of our age, from individualism to social constructivism, fall short in meeting reality by disregarding the wider ecological community in which we are situated. human beings, for sure, play an increasing role in cultivating, shaping, and also destroying our shared world. maybe the coronavirus experience teaches us to recover the importance of human communities as well as our place in ecological communities? it seems that neither individualism nor social constructivism stands the test of reality. if there is anything the corona crisis teaches us, it is that our lives are interconnected. there is no human being who only inhabits his or her own little world, and who is in charge. we are part of a great human community, for good and evil. we infect each other, yes, but we also live off each other's infectious smiles. what would our lives be like without close eye contact and bodily expressions of welcome?
community is the first and most important part of our lives, and during quarantine we experience how much we miss the normal social interaction with each other. in the meantime, we are thrown back on ourselves, or on our very closest ones. there resides some truth in every ideology; otherwise it could not attract our attention and be infectious. liberalism is the view that every citizen should have as much freedom to live as possible. most of us agree on this value across the political spectrum from left to right. yet the fact is that we are the blacksmiths not only of our own happiness, but also of our misfortune. the misery is that we cannot know in advance. but more than that, we also share the misfortune of others. self-restraint is necessary precisely because it is a primary fact that my desire for freedom and movement can put others in bondage and immobility. since the mid-twentieth century, existentialism has been a very widespread ideology. it still is, under the guise of being against all ideology. since jean-paul sartre, existentialism has argued that you are what you do. it is your free decisions that give you the essence and character of your particular humanity. existentialism is a humanism, as sartre called his program. true it is that we live every day with small choices about where to go, but the idea that we are "decision-makers" all day long is an extremely forced view. fortunately, most of us do as we usually do, and if we did not, others would not be able to count on us. a human being who constantly decides pro or con would be an incalculable human being, a constantly ticking bomb under enduring relationships. an existentialism without the "humanism of the other person" (levinas) is a monster beyond the possibility of attunement and self-correction. the ideology of existentialism lies in its individualism. fortunately, however, we live in communities where we, as resonating beings, constantly tune in to each other. hopefully, we also live the greater part of our lives in a pre-conscious stream of experience that precedes our small and large decisions. otherwise, we would quickly become sleepless persons, incarcerated in a hyperactive consciousness, and eventually we would become insane. in short: it is not very often my decisions that determine who i am. rather, it is the sum of the resonance-and-dissonance experiences of my life that determines who i am and what i do. the community exists before the conscious self-awareness of the ego. alongside the over-spiritualized view of existentialism, we find another ideology, which sees a human being as the exclusive owner of a physiological body, curved in around itself. we could call it the skin-and-hair ideology. bodies, however, do not only include skin and hair, bones, and internal organs, for we live as socially and ecologically extended bodies. what we can learn from biology is that our bodies are in constant exchange with everyone else's. humans are "holobionts," an organismic space for a variety of lifeforms. in discussing the nature-nurture problem, the controversy has been about how much genes (nature) determine us, and how much our society (nurture). but inside our body we carry not only our specific human genome but also a wider microbiome, made up of all the viruses, bacteria, and fungi that have entered our body from the outside into our nose, throat, ears, and not least gut, through food consumption, fluids, and the inhalation of air.
overall, we should be grateful for the world of microbiota, for most microorganisms are symbiotic. without bacteria and viruses, we would curl up on the floor with abdominal pain, and we would never be able to "make decisions." overall, we need to be good friends with our bacteria and viruses. only a few are as harmful as covid-19, and here we naturally have to resort to counter-procedures such as cleansing, preventing, and quarantining. as far as medical science goes, we do not have a cure at present. hence, the respirators will be running until the corona infection is over. let me now address what i see as the most widespread ideology in our time, at least in the academy: social constructivism. this ideology has spread from sociology to psychology, politics, and pedagogy, and social constructivism has ended up being a quasi-orthodox consensus ideology within the humanities and substantial parts of theology as well. this movement's first epicenter was the book the social construction of reality, written by sociologists thomas luckmann and peter l. berger in 1966, followed up by berger's the sacred canopy in 1967, focusing on the religious construction of reality. being a circumspect scholar, berger soon afterward realized the weaknesses of social constructivism, but by then the ideology had already infected wider parts of the social sciences. its thesis was, and remains, that human societies construct reality through language, perception, and the maneuvers of rhetorical, political, and social engineering, and that human societies do so under only minimal resistance from pre-linguistic reality, including nature. again, there is an aspect of truth in this idea. the ways in which we use language and discursively define the boundaries of society do indeed have an impact on the public perception of reality. for example, the political responses to the corona crisis show the considerable impact of our political constructions of reality. state leaders are the ones who have the capacity to define states of emergency, and in an exceptional situation governments act as quasi-sovereign powers that determine the social reality for the general population. it cannot be different, but political decisions differ from country to country. do we proceed as in south korea and taiwan (with large screenings of the population and subsequent quarantines)? do we do as in denmark (with an early and strict lockdown but initially without many corona tests)? or do we choose, like sweden, to avoid strong coercion but appeal to the population? by comparison, the overarching federal strategy in the united states still (march 2020) seems to oscillate. this being the case, social constructivism does not sit well with a common-sense realism: it is the de facto spread of the covid-19 infection, and the subsequent fatalities, that will determine whether our political measures have worked or not. in an infectious world, politics combines a wait-and-see attitude with post hoc maneuvers. even the most powerful politicians cannot talk away covid-19. either you have it or you do not have it. either you infect others or you do not. either you will see an exponential spread or you will see a flattening curve. thus, it is the spread of infections that tests politics, not the other way around. social and political constructs are not capable of defining reality. it seems that even the most clumpy-dumpy politicians are beginning to understand the reality test, after having tried to downplay covid-19 rhetorically.
accordingly, what we need in the academy is a thorough revision of the prevailing ideologies of our age: individualism, social constructionism, discourse theory, etc. we need a biocultural and ecological paradigm shift within the social and human sciences, including theology. otherwise we see people of faith as individual faith decision-makers, and we overburden one another with overheated appeals to letting god come to our mind, as if we could conceptually enframe god. yet if god is at all, god is prior to our consciousness and our self-aware pious decisions. if god is, god is present to the child, and present to us when we are using our full energy and attention in solving a problem, when we are falling asleep, when we are aging and entering into states of dementia and no longer in conscious contact with god. the faith of any individual (each in his or her individual manner) is rather about tuning into a deeper reality, a reality which is already there, as the prime and pervasive source of resonance, present in a divine personal form beyond my own little personhood. faith is about plugging in, about moving into the prior reality of the divine self-communication, in words as well as beyond words. similarly, revising the assumptions of the skin-and-hair ideology, jesus is not a bygone entity, a "composite entity" of (a) a divine entity, (b) a skin-and-hair body, and (c) a particular lonely soul, as some analytical theologians redescribe chalcedonian christology in a so-called "compositional christology." god was not incarnate in a man cave; rather, god conjoined the shared flesh of humanity, shared also with non-human creatures beyond the skin of jesus. by becoming incarnate in jesus and in his extended body (also called the reign of god), god is no less present in the compressed respirator tents than in the open sunlight and fresh air. god is radically being there, being there with others and being there for others, not least for us who are gasping for fresh air. now back to us who hope to survive and go on. what kind of a reality do we hope to wake up to after the corona crisis? i guess we are waking up to a deeper sense of how much we miss one another, after we have had to separate ourselves from each other. we are missing the ability to look each other in the eyes (not mediated through a screen), missing handshakes and hugs. we are missing the deep meaning of having skin. i hope that we may rediscover that our community is prior to me as an individual, and that the interests of others precede my considerations of myself. it seems to be obvious that the corona crisis has unmasked the castles in the air that we have erected in our ruling ideologies, not least within the academy. individualism and social constructivism both presuppose a remoteness of human existence from the world of which we are part. both tend to see individuals and communities as isolated islands, ceaselessly at work imposing a human order onto a presumably blank world slate. yet nature is not a blank slate, but is full of multiple life and regenerative powers. moreover, nature is not just "out there" but also "in here." we carry nature deep within ourselves, and our entire existence and well-being depends on it. this should not come as a surprise to theologians who speak of god as the benevolent creator of all that is: in the fields and in work life, in our houses, and in ourselves. we need to be more than humanists in order to be truly humane.
we can no longer pretend not to be deeply connected to circuits larger than ourselves, for we are at once symbolic creatures, living in cultures, and symbiotic creatures that benefit from the rich world of viruses, bacteria, and fungi. at the same time, however, we are also vulnerable beings. this has always been the case, but in a global world it has become even clearer, because we travel as much as we do and live as close to each other as we do in the big cities. covid-19 does something to us before we do anything about it. every moment, awake or asleep, our immune system trains at capturing the viruses and bacteria that make us sick. let us hope that the self-generative powers of nature, endowed by god the creator, will be strong enough to handle covid-19 in most of us, until we can someday find a vaccine. in the meantime, let us look forward to being able to return to our beloved communities. "into the community" could conveniently become the new mantra after the corona era. university of copenhagen the covid cross pandemic. it is not a word that falls easily from the lips. in a highly scientific and technological society it may strike one as a bit odd, like something from a more primitive past. that is the power of nature and a sobering reminder that while we have come to control many things in it, nature still can transcend our power and understanding, even with fatal results. this pandemic has reminded us all too clearly how limited human power is. it also brings into clear focus how thin and vulnerable human society is when the whole world can be turned upside down in a matter of weeks. in such a world being ravaged by an "invisible enemy," where is one to turn? the fact that one cannot see it or easily trace it places in the heart a fear and anxiety not unfamiliar from the middle ages. the existential experience is the same. we are left with a feeling of vulnerability against an unknown power greater than ourselves and for which we as yet do not have any strong defenses. evolutionary biology crashes into human society. to "shelter in place" and "social distance" are pretty basic but limited responses, ones not unfamiliar from centuries ago. we have been driven back to the most elemental of human responses, isolation. where, then, is god in the midst of pandemic? here incarnation meets the deepest of human needs, affirming god's identification with and understanding of our suffering and anxiety on the covid cross. when there is no obvious ultimate cause or reason, perhaps the only possible source is god. but, if god, then why would a good god do such a thing? for divisive theological dualists the next step is natural: god must be mad at us for something we have done and is punishing us. since it cannot be our fault, the search is then on for a scapegoat, whether it be 'gays' with the aids crisis, new orleans' perceived licentiousness for hurricane katrina, or america's secularism for 9/11. for some today the source must be china, the lgbtq community, or environmentalists. it is theodicy at its most brutal, and it must be challenged. a free creation and human greed combine to make an international disaster, not divine intervention. it is here that the cross confronts the covid-19 virus, not with platitudes or panaceas, not with naming and blaming, but with the affirmation that god is with us. the first-century world of jesus was a time of disease and death, such that much of jesus' ministry was spent in healing from disease and disabilities. it was not unfamiliar to him.
such is the nature of enfleshment. if one takes enfleshment with all biological seriousness, as niels gregersen does in his concept of "deep incarnation" (see the cross of christ in an evolutionary world), we can understand that god identifies with human suffering at the most basic of biological levels. the suffering of the covid virus is not foreign to god, and therefore we are not left alone within it. it means that god is with us in all the biological suffering of an evolutionary world. while the source of the virus is not definitively confirmed, currently it is believed to have originated in bats (as a number of other coronaviruses have), which perhaps bit a pangolin (a sort of plated anteater), which, as an endangered species, was illegally captured and sold at an illegal wild animal market in wuhan, china. the source of the pandemic? human greed. to paraphrase winston churchill, "never have so few done such harm to so many." theologically we would call this a result of human sin. it requires human capacity to take something biologically derived and place it on the world market. had the pangolin been left alone, perhaps this would not have happened. at such a time of anxiety and isolation, there is a deep longing for hope, meaning, and perhaps forgiveness. to understand the enfleshment of god as deep incarnation, connecting throughout all biological creation, means that no creature, including the human, is truly separated from god, especially those who are dying alone from the virus. if god is truly present to us at the most intimate levels of our existence, then so too is the divine promise. this takes immanuel, "god with us," to a whole new level and connects the present suffering from the covid-19 virus to the cross of christ. it affirms that even if our cognitive faculties or awareness are not functioning well (or at all), god is still with us. it is not our awareness of god that makes god's grace effective in our lives but god's awareness of us! that is the ground of our hope, not our own reason or strength, even as we pray that a medical solution may soon be found. as creator to creation, one might metaphorically say that god is "entangled" (non-local, relational holism) with creation, ourselves included, at the foundational levels of material existence, analogous to entangled subatomic particles (see simmons, the entangled trinity: quantum physics and theology). deep incarnation is a way of thinking christologically about the redemptive entanglement of the creator with the whole of creation, giving us hope and release from fear and anxiety as this is carried up into and transformed by god. this foundational relationality then grounds divine presence in a suffering world and provides a connectivity for accompaniment and hope in the midst of decline and loss. it is the covid cross. such accompaniment is also expressed through the medical professionals and others who are working tirelessly, and with some personal risk, to help everyone survive throughout the world. this too is an expression of god's care and love within an entangled creation. transcending one's self-interest for the sake of the ill other can certainly be understood as a gift of the spirit. pandemic reminds us that we too are part of that same entangled creation and that we are also our brothers' and sisters' keepers, for we are all in this together. perhaps this may be one of the most hopeful outcomes from such a horrible pandemic.
there is the existential angst that comes with self-quarantine and the awareness of why it is necessary-we call it "plague dread." and then there are the various levels of explanation, the micro-meanings, you might say. and then there is the mystery-the big meaning, macro-meaning. each of us will fill in the dread with the facts of our own life. i am approaching age , with at least three of what the media call "underlying conditions"-more than enough empirical ground for me to dread the coronavirus. almost hourly, we hear precise scientific descriptions of the virus. these descriptions are crucial, because they enable competent people-physicians, nurses, and researchers-to treat the disease and even prevent its spread. the scientific theory of evolution helps me understand our situation. the coronavirus is an example of an evolutionary process wrapped within larger evolutionary processes. the behavior of the virus follows darwinian expectations. all of the processes that take place within our bodies-from the nano and molecular levels to the cells-follow the same evolutionary pattern. these evolutionary processes within us are fundamentally ambiguous in that they bring us life and they also bring us death. leonard hummel and gayle woloschak describe this ambiguity in their fine book, chance, necessity, love: an evolutionary theology of cancer (cascade books). this presents us with a dilemma-we are grateful for the life-giving work of our internal body processes, and we dread the deadly work of those processes. like cancer, the presence of coronavirus is fully "natural." nature within us is "naturally" ambiguous. further, these micro-evolutionary processes take place within a much larger story of evolution with several chapters: the evolution of life, which began millions of years ago, within the larger billion-year-long story of planet earth's evolution, within the still larger story of cosmic evolution, billion years in the telling. our response to covid-19 is to resist the flow of evolution and redirect it. that is what our practice of medicine is about, the attempt to redirect evolutionary processes in our favor. the long processes of evolution bend because of our efforts. this reminds me how infinitesimally small we are, and yet how amazingly gifted we are. evolution has brought us life and also the skill to reorder evolution itself. nevertheless, despite our efforts, even when they are successful, the struggle with evolution takes its toll-and that means injury and death. in my case, evolution in my mother's womb caused me to be born with spina bifida, which, though moderate in severity, has radically impacted the last years of my life. even as i write, i am aware of the mystery (note the capital "m") that wraps around us. we-and these incomprehensible processes of evolution-float in a sea of mystery. why is it that our existence is woven on this vast and complex loom of evolution? why has god chosen this particular way of bringing us into life and sustaining us? many thinkers down the millennia have pondered this "why?"-and they have given us no satisfying final answers. we can probe mystery, but we cannot resolve it like a puzzle. the book of job speaks to me at this point. when job raised the question and demanded god's response, the voice from the whirlwind spoke to him: your mind is too small and weak to comprehend the heights and depths of mystery-you simply must accept it and trust it.
the existentialist albert camus acknowledged the mystery, and he believed it is indifferent to human hopes and longings; we cry out for answers for our lives, but in return we hear only silence-he called it ultimate absurdity, absurdity with a capital "a." his novel the plague is the story of life during a plague. the plague was indifferent to human existence, the epitome of absurdity. others have called the mystery enemy, malevolent, intending to destroy us if it can. christian faith calls the mystery friend, redeemer, suffering god. much like the message of job: death at the hands of the mystery is real; our attempts to understand it are futile; but the same mystery is our redeemer. we can trust it. after all, evolution is a process-faith believes the process is going somewhere, and that "somewhere" is in the life of god. the life of god is love, which is why in the midst of plague we find love, caring for others. medically, for most people our current plague will not have serious consequences. psychologically and economically, it will damage most people, at least to some degree. a small percentage of people will die. all of us will be borne along the same evolutionary process into our future. and for all of us, that future will be god's gift to us. think of the image of a train. some of us will get off the train at this station; everyone will get off sooner or later, at different stops. every station's name will be the same, "god's destination-love."

to imagine that my words may speak to you well by the time they reach you seems like magical thinking. no one seems to know exactly where we are. our slow and then sudden awareness of the impacts of coronavirus left us in an existentially halted, almost eschatological space: we were caught in a world incredibly arrested and incredibly new at the same time. we're deeply aware of old tensions of injustice and vulnerability pulling taut, and simultaneously many of us feel the grit of the irreducible relationality of our bodies and planet anew. we've picked up familiar embodied routines in vital work and mundane practices, and yet now many of the familiar kin that once nourished us with convivial learning, signs of peace, earthly delights, bread and wine are learning to do so again with virtual creativity or picking up pieces. if we are honest with each other, there have been many world-ending plagues before-many apocalypses "now and then," as catherine keller says. native peoples know well the injustices and radical loss of histories of settler violence and plague; so too do lgbtiq folks know the ways that homophobia shaped responses to hiv/aids crises. even theologians from julian of norwich to martin luther knew the risks of bodied life together. we are "mutually bound" in moments like this one, luther himself wrote in his now much-cited letter, "whether one may flee from a deadly plague." these moments of crisis won't be the last, as much as we aim to prevent loss. earthly creatures are vulnerable and resilient, enfleshed with possibilities both tragic and felicitous. if the old kingdom of our everydayness met its match in the new kingdom of the present, the coming future that cultivates such anxiety in so much of our theological and ethical communities only intensifies with unknowns. as life began to shift in ireland, i was waist-deep in a sabbatical, researching the complex emotional, affective, and felt responses to the climate crises of our time.
from eco-anxiety to environmental despair to climate grief, the present and anticipated losses aggregate and will continue to do so. the affective and emotional energies that mutually bind us in the midst of our planetary crises-including that of pandemic-are just as much part of the crises and ethical responses as the scientific approaches we desperately need. in an interview with the harvard business review, david kessler, an expert on grieving (known especially for his work with elisabeth kübler-ross), reflected that what the pandemic brings with it is "a number of different griefs." we (in all of the manifold diversity that term names) are grieving our imagined present, a sense of normalcy, our planet, our loved ones, our work, our relationships, our habits of interactions in the world, and more. more particularly, kessler argues that something called "anticipatory grief" is in the air. "anticipatory grief," he says, "is that feeling we get about what the future holds when we're uncertain." in unhealthy ways, anticipatory grief morphs into shifting anxieties and end-of-the-world imaginaries. in richer ways, it acknowledges that our lives undergo transformation into the future and that we must find ways to re-story the present, cultivate resilience, and imagine and take action for better future societies. honoring grief is an active process that we undertake together in moments like these. and sometimes that process means we do the long, hard work of actively grieving our loved ones and gentle hopes for our future as they really do change forever. outside of pastoral care, affect, emotions, or feelings rarely get much consideration in systematic or constructive theology. theology, in its rational and patriarchal guises, so often belittles affective archives as beneath the intellectual purity of doctrinal thinking. yet theology at its richest and most compelling is felt, is inscribed with emotional depth-not as cheap sentimentality or sensationalism, but as imaginative wondering, grieving, transforming, pacing in awe and praise in the middle of the night, lamenting loss in the middle of the day, and crying in terror or joy. even the driest of systematic theology sometimes can't escape tears when it anticipates our own angst and anticipations. when we do, we human animals make theology and theopoetics with everything we've got. we unleash our manifold imaginations to handle newness, especially in times of immense cultural grief. i want my theology to learn how to grieve better, especially in a time of pandemic. most researchers into climate grief will tell you that learning how to grieve a present moment opens up the possibilities of our relational connection. these psychologists, literary theorists, scholars of environmental humanities, and poets ask society to move beyond feelings of ethical individuality (e.g., if only i made "greener" choices) to ethical collectivity (e.g., if only we organized for structural transformation to a better world). grieving means we are thinking about relationality, shared worlds, and communal possibilities, human and more-than-human. deep calls to deep, and the pathos of shared imagination can cultivate attentiveness to those who need care. that connectivity of spirit may lead to collectively questioning and lamenting power structures or unjust relationships in the world. questioning may lead to refiguring expectations to ask what the next possible course of action might be. that's just one possible route.
along that route, the most curious feature of the literature of environmental despair is a persistent emphasis on the importance of play for times of transformation and collective grief. the vitality of playfulness may seem counterintuitive when everything is so dour. think, however, of the creativity emergent in our moment: churches playing with virtual connection, people taking up sourdough starters and knitting, movie nights with strangers over twitter, students coloring rainbows for their windows to encourage, reenergized hikes and reimagined forms of community, families performing skits and songs. play is how we grow, open our minds to what is next, and learn to create with what materials we have. even in moments of dire need, new creation can begin to emerge to help us connect and feel our way out in new imaginative and physical planetary landscapes. playing and creating joy in the wasteland, making possibilities in the midst of the ruins of dashed hopes, is just another name for theology. it seems like a good model for divine creativity: divinity that grieves and transforms in response to our common life; divinity that cocreates out of playfulness with an unfolding creation still called "good."

how are you doing? if you ask me that question, i have two very different answers, both of which are true. the first one is that i am fine, and i have much to be thankful for: my health is good, and so is the health of my family; i have a safe home and plenty of food; i have a job and discretionary income to buy hiking poles when i decided that hiking is my new covid-19 passion; and i am able to get outside for long runs and long walks. the second one is, i am not doing great. i miss my routine, and i am anxious and disoriented. i feel like i am not very useful right now, and that is extremely painful. i miss my students in particular, and my colleagues and friends as well-i miss being with them in person, and i am sick of zoom. i am still grieving the loss of holy week and easter services, and i wonder what church is going to look like when we can finally gather again. and i am missing being able to travel and see friends and family. as i said, both of these things are true. i share this because i wonder if you are having some of the same feelings, and if you are, i want to encourage you that it is ok. on the one hand, it is important to acknowledge and give thanks for your blessings; on the other hand, it is important to acknowledge your feelings of frustration and anxiety. it is important to both support and nourish others when we can, and also to have a good cry and even a little tantrum when we need to-do not go crazy, however; presumably there are others in your house who might be startled by your screaming. we are in uncharted territory, all adjusting to a new normal that seems to continually take from us, and we need to give ourselves permission to take time to recalibrate. but even in the midst of it all, we do not lose hope. even if we cannot see it, because the end of the tunnel still seems so far away, there is light there waiting for us. we will get through this, and we will find ourselves on the other side. we will be together once more, and my hope is that we will treasure the daily rhythm of our lives, and the people we share it with, all the more for their absence. in the meantime, care for yourselves as best you can, and care for others. accept mediocrity in some things-now is not the time for perfection. do not lose heart. persevere. breathe. love. and when in doubt, love some more.
united lutheran seminary philnevahefner@gmail.com

global christianity and theological education: introduction to "dialogue in dialog"

the papers published in this issue's "dialogue in dialog" were initially presented in two successive luther colloquies held by united lutheran seminary in and . the essays by madipoane masenya and elieshi ayo mungure were written for a colloquy on "theology and exegesis in african contexts," along with the essay by andrea ng'weshemi that appeared in the spring issue of dialog. the essays by timothy wengert, kristopher norris, and david brondos were written for a colloquy on "theological education in the lutheran tradition." the purpose of united lutheran seminary's luther colloquy is to explore the legacy of luther and the lutheran reformation for modern, global, and ecumenical christianity. readers may be interested in the logic behind, and the connection between, these particular topics. the topics are intimately connected: on the one hand, the future shape of the church in africa will be determined partly by the accessibility of theological education and the appropriateness of curricula and methods to african contexts. the contributions by masenya, mungure, and ng'weshemi richly demonstrate this point. in turn, the vitality of the church in america may depend on our continued willingness to hear voices that remind us of our connectedness to the global church and our embeddedness in a global society-by our willingness to hear voices that remove the blinders we inherit simply by being born into a particular context and by accepting its structures and self-justifications as given and just. faith in the gospel gives us eyes to see the world anew, to see god present and active and redeeming even where chaos and death seem to abound. but faith comes from hearing, and we in north america need to open ourselves to the power of hearing christians from contexts other than our own and to living in mutual care for one another. theological education plays no small part in inculcating and practicing these habits of hearing and caring. as i remarked at the beginning of the colloquy, many of the luther biographies that rolled off the presses to mark the supposed 500th anniversary of the reformation spoke of the unintended consequences and even the failure of luther's efforts. in this telling, luther aspired to reform the universal church, but he ended up the leader of a particular church; and the ensuing competition between particular churches and the political authorities aligned with them produced primarily oppression and warfare, before giving way to skepticism and, after a long and weary journey, the separation of church and state that we prize and the pervasive unbelief that we in the church lament. there is much to unpack in this grand narrative stretching "from luther to unbelief"-and this is not the place. but i will say two things. first, judged by luther's own standards, the reformation is not a failure as long as the church lives, the church gathered by the holy spirit through word and sacrament, the church sent into the world to proclaim and serve. luther knew full well that the church is constantly assailed by the false worship of gods less than god. he may not have been so unable to comprehend our world as we sometimes assume! the church today is and can be a force to repudiate the worship of lesser gods and to offer in their place the fullness of god's life and meaning.
the second thing to be said is that the story of a straight line from luther to secularization is a story of the northern, western world-a story that readily occludes from view anyone but ourselves. it is a story that is somewhat defensible as an exercise in european and north american self-understanding; it is indefensible as a story that assumes the only meaningful chapter in the story of reformation christianity unfolds between wittenberg and gettysburg, between scandinavia and minnesota. the well-documented shifts in global christian population (including in the lutheran communion) need not be reviewed here. suffice it to say: the majority of christians now reside outside of north america and europe, and in due time, the largest body of lutherans will probably be found in sub-saharan africa. there is no question that appropriate remembrance of the reformation in the church should recognize that our past, present, and future are global. that global context, in turn, becomes the context for theological education no matter where it occurs. in my introduction to the colloquy on theological education, i made these remarks: we live in a moment when theological education-in the seminary context, at least-faces massive challenges. on the one hand, there is declining enrollment; on the other hand, there are the rising costs of doing business, including high property costs for older schools with residential campuses. there is also the challenge of serving new populations of seminary students: many students now come to seminary as second-, third-, or fourth-career students, as mature adults with significant obligations to family and community. whether first or later career, students come as part-time students, as commuter students, as distance-learning students. how are their needs to be met, so that they can meet the needs of christians? and if the church is to proclaim the gospel in every place of need, how do we train students for those contexts? one thing is for certain: when graduates leave seminary, they will find a church that needs them. in fact, they will find a church that needs many times more of them. this fact reminds us that it is not only theological education in the seminary that must be discussed; it is not only the education of pastors that needs to be discussed; the question is: how can church leaders of diverse vocations-pastors, deacons, and others-take their education, go forth, and educate through word and deed as part of their broader vocation? in moments of challenge, we are always in danger of finding ourselves in a reactive state. monumental decisions are suddenly demanded, and one simply does the best one can with faith, acting on principle but on the basis of limited information and limited prior reflection. the resulting action is inevitably constrained both by practical limitations-what else can we do?-and by intellectual constraints-what else can we imagine? what can we imagine if we have not had the time to reflect and study? it is urgent that we use the time we now have to study and imagine, that we think about the purposes of theological education and the ways that theological education must respond to changing contexts-a changing church, a changing world-on the basis of our enduring commitments, above all, our commitment to serve christ's church. as i planned this colloquy, i did not invite speakers to weigh in on any particular set of current proposals for seminary education.
in order to evaluate this or that current proposal, in order to imagine alternatives faithful to the mission of the church, we need to bring to bear the insights of our tradition, of theology and history, and of our global church body. i thus invited speakers to address changes and innovation in theological education that occurred in moments of great pressure and even crisis-the reformation itself, the rise of nazi germany-and in the complicated history of christian expansion around the globe. such investigations give us insight into how those who came before us responded to the call of theological education in concrete, difficult circumstances. we can learn much from the thoughts and actions of those who have gone before us, from their successes and their failures: christianity does not invent itself ex nihilo with every new generation; rather, we carry into the future a vibrant, living, diverse tradition, grounded in the greatest gift handed down to us, the heart and sum of our tradition, the gospel. theological education is not a task for the seminary alone; it is a core task of the entire church. while the term today often refers to seminary education, what we do at seminary is educate educators. we educate those who must educate others not only about basic doctrinal teachings but also about the depths of christian theological reflection and insight into scripture. we educate those who must teach others not only about doctrine and theology, but also about how we might worship, live, and work together as the church. shaped by seminary, church leaders in turn shape flocks and publics that will go forth and witness to the gospel (i.e., teach others about the gospel), including through the faithful exercise of vocation. theological education is a broad venture, and among the sixteenth-century confessions, lutherans were uniquely concerned with the education of the rural peasantry-no other confession in the sixteenth century produced so much literary material aimed at rural and small-church ministry, at the "simplest" of pastors. this is a tradition that we ought to be proud of; it is a tradition that we carry forward not only in our concern for rural and small-church ministry, but fundamentally in our concern that theological education is for all christian peoples in all contexts. today, we look toward a future marked by big challenges and consequential decisions, and as we survey this future and seek to chart our way through it, we are standing on ground that has already shifted. this leads me to the final point i wish to make: as fallen human beings in a fallen world, we frequently respond to change with trepidation; an uncertain future stirs anxiety. much of the movement that has occurred in theological education in recent decades, however, ought to give us cause for hope: we now have women as well as men engaged in theological study and proclaiming the gospel; our understanding of the gospel is now enriched by diverse voices; we have long been a global church, and we are now better aware of and better prepared to listen to witnesses from other parts of the world. diverse and global perspectives on scripture and theology and the life of the church challenge us and enrich us in our own contexts. the gospel is a magnificent thing to behold, and the church is stronger for seeing its truth and work from different perspectives.
tradition is not a zero-sum thing, as if adding a new voice drowns out the old-indeed, new voices can help us see better the depths of what martin luther and so many others wanted to teach us. the richer our field of study, the better we are prepared to do god's work in the world. in conclusion, i want to underline one further groundshift that i have already mentioned: many seminarians, many church leaders in training, are now second-, third-, or fourth-career students. many, whether first or later career, take on the challenge of higher theological study in diverse life circumstances. their willingness to undertake this work is a gift of god. they bring-all students bring-a great diversity of experiences, insights, talents, and vocational skills that strengthen the ministry of the church-that help the church to proclaim the gospel effectively to more people in more walks of life. this too is not a zero-sum game: we are all one in christ, who uses diverse gifts, who welcomes diverse forms of worship, who alone is our redeemer. theological education does face challenges, but it will go on as long as the church goes on. and, as isaiah holds, the word of the lord endures forever. all of us who are involved in theological education-in other words, every committed believer-are called to undertake the venture of theological learning and theological teaching in the spirit of faith, with joyous confidence in both god's direction of our paths and god's redemption of our failings. and we are called, too, to be learners at the feet of the great cloud of witnesses, the great company of teachers who came before us and who span the globe around us. we are called to be learners first from our divine teacher.

references
@worship: liturgical practices in digital worlds. london: routledge.
the use of the means of grace: a statement of the practice of word and sacrament.
digital worship and sacramental life in a time of pandemic.
sermon at the dedication of the castle church, torgau. mn: fortress. (original work published ce)
whether one may flee from a deadly plague. mn: fortress. (original work published ce)
book of concord: the confessions of the evangelical lutheran church.
first apology. (original work published ca. ce)
apology of the augsburg confession. in book of concord: the confessions of the evangelical lutheran church.
the reformation of suffering: pastoral theology and lay piety in late medieval and early modern germany.
from sacrifice to sacrament: eucharistic practice in the lutheran reformation.
facebook society: losing ourselves in sharing ourselves.

key: cord- - eycqf o authors: robertson, colin; nelson, trisalyn a.; macnab, ying c.; lawson, andrew b. title: review of methods for space–time disease surveillance date: - - journal: spat spatiotemporal epidemiol doi: . /j.sste. . . sha: doc_id: cord_uid: eycqf o

a review of some methods for analysis of space–time disease surveillance data is presented. increasingly, surveillance systems are capturing spatial and temporal data on disease and health outcomes in a variety of public health contexts. a vast and growing suite of methods exists for detection of outbreaks and trends in surveillance data, and the selection of appropriate methods in a given surveillance context is not always clear. while most reviews of methods focus on algorithm performance, in practice, a variety of factors determine what methods are appropriate for surveillance.
in this review, we focus on the role of contextual factors such as scale, scope, surveillance objective, disease characteristics, and technical issues in relation to commonly used approaches to surveillance. methods are classified as testing-based or model-based approaches. reviewing methods in the context of factors other than algorithm performance highlights important aspects of implementing and selecting appropriate disease surveillance methods.

early detection of unusual health events can enable coordinated response and control activities such as travel restrictions, movement bans on animals, and distribution of prophylactics to susceptible members of the population. our experience with severe acute respiratory syndrome (sars), which emerged in southern china in late and spread to over countries in months, indicates the importance of early detection (banos and lacasa, ). disease surveillance is the principal tool used by the public health community to understand and manage the spread of diseases, and is defined by the world health organization as the ongoing systematic collection, collation, analysis and interpretation of data and dissemination of information in order for action to be taken (world health organization, ). surveillance systems serve a variety of public health functions (e.g., outbreak detection, control planning) by integrating data representing human and/or animal health with statistical methods (diggle, ), visualization tools (moore et al., ), and, increasingly, linkage with other geographic datasets within a gis (odiit et al., ). surveillance systems can be designed to meet a number of public health objectives, and each system has different requirements in terms of data, methodology and implementation.

outbreak detection is the intended function of many surveillance systems. in syndromic surveillance systems, early-warning signals are provided by analysis of pre-diagnostic data that may be indicative of people's care-seeking behaviour during the early stages of an outbreak. in contrast, systems designed to monitor food- and water-borne (e.g., cholera) pathogens are designed for case detection, where one case may trigger a response from public health workers. similarly, where eradication of a disease in an area is a public health objective, surveillance may be designed primarily for case detection. alternatively, where a target disease is endemic to an area, perhaps with seasonal variation in incidence, such as rabies, monitoring space-time trends may be the primary surveillance objective (childs et al., ).

surveillance systems differ with respect to a number of qualities which we term contextual factors. for evaluation of surveillance systems this is well known, as the evaluative framework set out by the centers for disease control and prevention (cdc) encompasses assessment of simplicity, flexibility, data quality, acceptability, sensitivity, predictive value positive, representativeness, timeliness, and stability (buehler et al., ). selection of appropriate methods for space-time disease surveillance should consider system-specific factors indicative of the context under which they will be used. these factors are summarized in table 1, and are the axes along which we will review methods for space-time disease surveillance. there has been rapid expansion in the development of automated disease surveillance systems.
following the bioterrorism attacks in the united states, there was expanded interest and funding for the development of electronic surveillance networks capable of detecting a bioterrorist attack. many of these were designed to monitor data that precede diagnoses of a disease (i.e., syndromic surveillance). by may there were an estimated syndromic surveillance systems in development throughout the us (buehler et al., ). due to the noisy nature of syndromic data, these systems rely heavily on advanced statistical methods for anomaly detection. as the data being monitored in syndromic systems precede diagnoses, they contain a signal that is further removed from the pathogen than in traditional disease surveillance, so in addition to having potential for early warning, there is also greater risk of false alarms (i.e., mistakenly signaling an outbreak) (stoto et al., ).

one example is a national surveillance system called biosense developed by the cdc in the united states. biosense is designed to support early detection and situational awareness for bioterrorism attacks and other events of public health concern (bradley et al., ). data sources used in biosense include veterans affairs and department of defense facilities, private hospitals, national laboratories, and state surveillance and healthcare systems. the broad mandate and national scope of the system necessitated the use of general statistical methods insensitive to widely varying types, quality, consistency and volume of data. the first of two methods used in biosense is a generalized linear mixed model, which estimates counts of syndrome cases based on location, day of the week, and effects due to seasonal variation and holidays. counts are estimated weekly for each syndrome-location combination. a second temporal surveillance approach, computed for each syndrome under surveillance, is a cumulative sum of counts where events are flagged as unusual if the observed count is two standard deviations above the moving average (see the sketch below). the selection of surveillance methods in biosense considered factors associated with heterogeneity of data sources and data volume, among others.
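to make the biosense-style flagging rule concrete, the following minimal sketch (in python) implements it as literally described: flag a count that is more than two standard deviations above the moving average of recent counts. the window length and all names are illustrative assumptions on our part, not details of the biosense implementation.

```python
import numpy as np

def moving_average_alarms(counts, window=7, threshold_sd=2.0):
    # flag periods where the count exceeds the moving average of the
    # preceding `window` periods by more than `threshold_sd` standard
    # deviations (the rule described for biosense above)
    counts = np.asarray(counts, dtype=float)
    alarms = []
    for t in range(window, len(counts)):
        baseline = counts[t - window:t]
        mu, sd = baseline.mean(), baseline.std(ddof=1)
        if sd > 0 and counts[t] > mu + threshold_sd * sd:
            alarms.append(t)
    return alarms

# a quiet syndrome series with a jump on the final day
print(moving_average_alarms([3, 4, 2, 5, 3, 4, 3, 4, 2, 3, 12]))  # -> [10]
```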
another example is provided by a state-level disease surveillance system developed for massachusetts called the automated epidemiological geotemporal integrated surveillance (aegis) system, where both time-series modelling and spatial and space-time scan statistics are used (reis et al., ). the modular design of the system allowed for 'plug-in' capacity so that functionality already implemented in other software (i.e., satscan) could be leveraged. in aegis, daily visit data from emergency department facilities are collected and analyzed. the reduced data volume and greater standardization enable more advanced space-time methods to be used, as well as tighter integration with the system's communication and alerting functions (reis et al., ).

decisions on method selection and utilization are based on a variety of factors, yet most reviews of statistical methods for surveillance data compare and describe algorithms from a purely statistical or computational perspective (e.g., buckeridge et al., ; sonesson and bock, ). the selection of statistical approaches to surveillance for implementation as part of a national surveillance system is greatly impacted by design constraints due to scalability, data quality and data volume, whereas the use of surveillance data for a standalone analysis by a local public health worker may be more impacted by software availability, learning curve, and interpretability. selection of appropriate statistical methods is key to enabling a surveillance system to meet its objectives.

a frequently cited concern of surveillance systems is how to evaluate whether they are meeting their objectives (reingold, ; sosin and dethomasis, ). a framework for evaluation developed by the cdc considers outbreak detection a function of timeliness, validity, and data quality (buehler et al., ). the degree to which these factors contribute to system effectiveness may vary for different surveillance systems, especially where objectives and system experiences differ. for example, newly developed systems in developing countries may place a greater emphasis on evaluating data quality and representativeness, as little is known about the features of the data streams at early stages of implementation (lescano et al., ).

table 1. contextual factors for evaluation of methods for space-time disease surveillance.
scale: the spatial and temporal extent of the system (e.g., local/regional/national/international)
scope: the intended target of the system (e.g., single disease/multiple diseases, single host/multiple hosts, known pathogens/unknown pathogens)
function: the objective(s) of the system (outbreak detection, outbreak characterization, outbreak control, case detection, situational awareness (mandl et al., ; buehler et al., ), biosecurity and preparedness (fearnley, ))
disease characteristics: is the pathogen infectious? is this a chronic disease? how does it spread? what is known about the epidemiology of the pathogen?
technical: the level of technological sophistication in the design of the system and its users (data type and quality, algorithm performance, computing infrastructure and/or reliability, user expertise)

algorithm performance is usually measured by sensitivity, specificity and timeliness. sensitivity is the probability of an alarm given an outbreak, and specificity is the probability of no alarm when there is no outbreak. timeliness is measured in the number of time units to detection, and has been a focus of systems developed for early outbreak detection (wagner et al., ). the importance of each of these measures of performance needs to be evaluated in light of the system's contextual factors outlined in table 1.
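as a concrete illustration of these three performance measures, the sketch below scores a daily alarm series against known outbreak periods. the day-level matching and the function name are our own simplifications for illustration, not a standard scoring scheme from the evaluation literature.

```python
def detection_performance(alarm_days, outbreak_periods, n_days):
    # day-level sensitivity and specificity, plus timeliness measured
    # as days from each outbreak start to the first alarm within it
    alarm_days = set(alarm_days)
    outbreak_days = set()
    for start, end in outbreak_periods:
        outbreak_days.update(range(start, end + 1))
    tp = len(alarm_days & outbreak_days)
    fp = len(alarm_days - outbreak_days)
    fn = len(outbreak_days - alarm_days)
    tn = n_days - tp - fp - fn
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    delays = []
    for start, end in outbreak_periods:
        hits = sorted(d for d in alarm_days if start <= d <= end)
        delays.append(hits[0] - start if hits else None)  # None = missed
    return sensitivity, specificity, delays

# one five-day outbreak starting on day 10; alarms on days 4 and 12
print(detection_performance([4, 12], [(10, 14)], n_days=30))
# -> (0.2, 0.96, [2])
```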
our goal in this review of approaches to space-time disease surveillance is to synthesize major surveillance methods in a way that focuses on the feasibility of implementation and highlights contrasts between different methods. first, we aim to place methods in the context of some key aspects of practical implementation. second, we aim to highlight how methods of space-time disease surveillance relate to different surveillance contexts. disease surveillance serves a number of public health functions under varying scenarios, and methods need to be tailored and suited to particular contexts. finally, we provide guidance to public health practitioners in understanding methods of space-time disease surveillance. we limit our focus to methods that use data encoded with both spatial and temporal information. this paper is organized as follows. the next section describes space-time disease surveillance. following is a description of different statistical approaches to space-time disease surveillance with respect to the contextual factors outlined in table 1. we conclude with a summary and brief discussion of our review.

methods for space-time disease surveillance can address a surveillance objective in a variety of ways. most methods assume a study area made up of smaller, non-overlapping sub-regions where cases of disease are being monitored. the variable under surveillance is the count of the number of cases. in retrospective analysis, the data are fixed and methods are used to determine whether an outbreak occurred during the study period, or to characterize the spatial-temporal trends in disease over the course of the study period (marshall, ). in the prospective scenario, the objective is to determine whether any single sub-region or collection of sub-regions is currently undergoing an outbreak, and analysis occurs in an automated, sequential fashion as data accumulate over time. prospective methods require special consideration, as the data do not form a fixed sample from which to make inferences (sonesson and bock, ). parallel surveillance methodologies compute a test statistic separately for each sub-region and signal an alarm if any of the sub-regions are significantly anomalous (fig. a). in vector accumulation methods, test statistics in a parallel surveillance setting are combined to form one general alarm statistic (fig. b). conversely, a scalar accumulation approach computes one statistic over all sub-regions for each time period (frisen and sonesson, ) (fig. c). for example, rogerson ( ) used the tango ( ) statistic to monitor changes in spatial point patterns. the three architectures are contrasted in the sketch below.
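the following is a minimal sketch of that contrast, under the simplifying assumptions that each sub-region's statistic is a standardized count and that the thresholds are chosen arbitrarily; the maximum is only one of several ways a vector of statistics could be reduced to a single alarm statistic.

```python
import numpy as np

# per-region standardized statistics for a single time period
counts = np.array([11.0, 10.0, 10.0])
expected = np.array([6.0, 6.0, 6.0])
z = (counts - expected) / np.sqrt(expected)

# parallel surveillance (fig. a): alarm if any single region is extreme
parallel_alarm = bool((z > 2.5).any())
# vector accumulation (fig. b): combine the regional statistics into
# one general alarm statistic, here simply their maximum
vector_alarm = bool(z.max() > 2.5)
# scalar accumulation (fig. c): one statistic over all regions per
# period, here a chi-square-like sum of squared z-scores
scalar_alarm = bool((z ** 2).sum() > 9.0)

# a modest rise everywhere trips only the region-wide scalar statistic
print(parallel_alarm, vector_alarm, scalar_alarm)  # False False True
```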
statistical tests in space-time disease surveillance generally seek to determine whether disease incidence in a spatially and temporally defined subset is unusual compared to the incidence in the study region as a whole. thus, this class of methods is designed to detect clusters of disease in space and time, and suits surveillance systems designed for outbreak detection. most spatial cluster detection methods, such as the geographical analysis machine (openshaw et al., ), density estimation (bithell, ; lawson and williams, ), turnbull's method (turnbull et al., ), the besag and newell ( ) test, spatial autocorrelation methods such as the gi* (getis and ord, ) and lisas (anselin, ), and the spatial scan statistic (kulldorff and nagarwalla, ), are types of statistical tests. the development of methods for space-time cluster detection naturally evolved from these purely spatial methods. we can stratify methods in the statistical test class into three types: tests for space-time interaction, cumulative sum methods, and scan statistics.

space-time interaction of disease indicates that the cases cluster such that nearby cases in space occur at about the same time. the form of the null hypothesis is usually conditioned on population, and can factor in risk covariates such as age, occupation, and ethnicity. detecting the presence of space-time interaction can be a step towards determining a possible infectious etiology for new or poorly understood diseases (aldstadt, ). additionally, non-infectious diseases exhibiting space-time interaction may suggest the presence of an additional causative agent, such as a point source of contamination and/or pollution or an underlying environmental variable. these methods require fixed samples of space-time data representing cases of disease. all tests for space-time interaction consider the number of cases of disease that are related in space-time, and compare this to an expectation under a null hypothesis of no interaction (kulldorff and hjalmars, ).

the knox test ( ) uses a simple test statistic which is the number of case pairs close both in space and in time. this count is compared to the null expectation conditional on the number of pairs close only in space and the number of pairs close only in time; i.e., the times of occurrence of the cases are independent of case location. a major shortcoming of the knox ( ) method is that the definition of "closeness" is arbitrary. mantel's ( ) test addresses this by summing across all possible space-time pairs, while diggle et al. ( ) identify clustering at discrete distance bands in the space-time k function. for infectious diseases, it is likely that near space-time pairs are of greater importance, so mantel suggests a reciprocal transformation such that distant pairs are weighted less than near pairs. the mantel test can in fact be used to test for association between any two distance matrices, and is often used by ecologists to test for interaction between space and another distance variable such as genetic similarity (legendre and fortin, ). the reciprocal transformation used in the mantel statistic assumes a distance decay effect. while this may be appropriate for infectious diseases, for non-infectious diseases or diseases about which little is known, this assumed functional form of disease clustering may be inappropriate. a different approach is taken by jacquez ( ), where relations in space and time are defined by a nearest neighbour relation rather than distance. here, the test statistic is defined by the number of case pairs that are k nearest neighbours in both space and time. when space-time interaction is present, the test statistic is large. another method for testing an infectious etiology hypothesis, given by pike and smith ( ), assesses clustering of cases relative to another control disease, though selection of appropriate controls can be difficult.

the scale of the disease surveillance context can impact the selection of space-time interaction tests because these tests are sensitive to changes in the underlying population at risk (population shift bias). large temporal scales will therefore be more likely to exhibit changes in population structure and introduce population shift bias. an unbiased version of the knox test given by kulldorff and hjalmars ( ) accounts for this by adjusting the statistic by the space-time interaction inherent in the background population. changes in background population over time can be incorporated into all space-time interaction tests using a significance test based on permutations conditioned on population changes. however, this obviously requires data on the population over time, which may not always be easy to obtain. space-time interaction tests are univariate and therefore only suitable for testing cases of a single disease. consideration of multiple-host diseases is possible, though there is no mechanism to test for interaction or relationships between different host species. another major consideration is the function of the surveillance system or analytic objective. interaction tests can only report the presence or absence of space-time interaction. they give no information about the spatial and temporal trends in cases, nor do they consider naturally occurring background heterogeneity. a final point is that these tests use case data, and therefore require geo-coded singular event data, making these methods unsuitable when disease data are aggregated to administrative units.
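a minimal sketch of the knox statistic and its permutation test follows. the critical distances are arbitrary placeholders, which is precisely the arbitrariness of "closeness" noted above, and the permutation of times over locations mirrors the null hypothesis that occurrence times are independent of case location.

```python
import numpy as np

def knox_test(x, y, t, s_crit, t_crit, n_perm=199, seed=1):
    # knox statistic: number of case pairs close in both space (within
    # s_crit) and time (within t_crit); significance by permuting the
    # case times over the case locations
    x, y, t = map(np.asarray, (x, y, t))
    n = len(t)

    def stat(times):
        close = 0
        for i in range(n):
            for j in range(i + 1, n):
                ds = np.hypot(x[i] - x[j], y[i] - y[j])
                if ds <= s_crit and abs(times[i] - times[j]) <= t_crit:
                    close += 1
        return close

    observed = stat(t)
    rng = np.random.default_rng(seed)
    exceed = sum(stat(rng.permutation(t)) >= observed
                 for _ in range(n_perm))
    return observed, (1 + exceed) / (1 + n_perm)

# eight cases: four tight in space and time, four scattered
x = [0.1, 0.2, 0.1, 0.3, 5.0, 9.0, 2.0, 7.0]
y = [0.2, 0.1, 0.3, 0.2, 4.0, 1.0, 8.0, 6.0]
t = [1, 2, 2, 3, 40, 80, 120, 160]
print(knox_test(x, y, t, s_crit=1.0, t_crit=7.0))
```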
cumulative sum (cusum) methods for space-time surveillance developed out of traditional statistical surveillance applications such as quality-control monitoring of industrial manufacturing processes. in cusum analysis, the objective is to detect a change in an underlying process. in application to disease surveillance, the data are in the form of case counts for sub-regions of a larger study area. a running sum of deviations is recalculated at each time period. for a given sub-region, a count y_t of cases at time t is monitored as follows:

s_t = max(0, s_{t-1} + (y_t - k)), with s_0 = 0 (1)

where s_t is the cumulative sum alarm statistic and k is a parameter which represents the expected count, so that observed counts in exceedance of k are accumulated. at each time period, an alarm is signalled if s_t is greater than a threshold parameter h. if a cusum is run long enough, false alarms will occur as exceedances are incrementally accumulated. the false-positive rate is controlled by the expected time it takes for a false alarm to be signalled, termed the in-control average run length, denoted arl_0. arl_0 is directly related to the threshold value h, which can be difficult to specify in practice. high values of h yield long arl_0 and vice versa. in practice, approximations are used to estimate a value for h for a chosen arl_0 (siegmund, ), though this remains a key issue in cusum methods.

the basic univariate cusum in (1) can be extended to incorporate the spatial aspect of surveillance data. in this sense, cusum is a temporal statistical framework around which a space-time statistical test can be built. in an initial spatial extension, rogerson ( ) coupled the (global) tango statistic ( ) for spatial clustering with a cusum framework. for a point pattern of cases of disease, the spatial statistic is computed, and this value of the statistic is used to condition the expected value at the next time period. observed and expected values are used to derive a z-score which is then monitored as a cusum (rogerson, a). one scalar approach taken by rogerson ( b) is to monitor only the most unexpected value, or peak, of each time period as a gumbel variate (the gumbel distribution is used as a statistical distribution for extreme values). an additional approach is to compute a univariate cusum in a parallel surveillance framework (woodall and ncube, ). here the threshold parameter h must be adjusted to account for the multiple tests occurring across the study area. yet this approach takes no account of spatial relationships between sub-regions (i.e., spatial autocorrelation).
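the recursion in (1) is straightforward to implement. in the sketch below, the reference value k and the threshold h are set arbitrarily for illustration (in practice h would be chosen to achieve a target arl_0, as discussed above), and restarting the sum after a signal is one common convention rather than a required part of the method.

```python
def cusum_monitor(counts, k, h):
    # one-sided cusum of eq. (1): accumulate exceedances of the counts
    # over the reference value k; signal whenever s_t crosses h
    s, alarms = 0.0, []
    for t, y in enumerate(counts):
        s = max(0.0, s + (y - k))
        if s > h:
            alarms.append(t)
            s = 0.0  # restart after a signal (one common choice)
    return alarms

# a baseline of roughly two cases per day, then a sustained rise
print(cusum_monitor([2, 1, 3, 2, 2, 1, 2, 5, 6, 7, 6], k=3.0, h=5.0))
# -> [9]
```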
cusum surveillance of multiple sub-regions can be considered a multivariate problem where a vector of differences between the observed and expected counts for each sub-region is accumulated. spatial relationships between sub-regions can be incorporated by explicitly modelling the variance-covariance matrix. rogerson and yamada ( ) demonstrate this approach by monitoring a scalar variable representing the multivariate distance of the accumulated differences between observed and expected counts over all sub-regions. this is modelled as

mc_t = (s_t Σ^{-1} s_t')^{1/2}

where Σ is a p × p variance-covariance matrix capturing spatial dependence, and s_t is a 1 × p vector of differences between observed and expected cases of disease in time t for each of the p sub-regions (rogerson and yamada, ).

cusum methods are attractive for prospective disease surveillance because they offer a temporal statistical framework within which spatial statistics can be integrated. they therefore overcome one of the limitations of traditional spatial analysis applied to surveillance, in that repeated testing over time (and space) can be corrected for. a full description of the inferential properties of the cusum framework is given by rogerson ( a). these methods are therefore most appropriate for long temporal scales, especially when historical data are used to estimate the baseline. the multivariate cusum given by rogerson and yamada ( ) is for a single disease over multiple sub-regions, but could be used to monitor multiple diseases over multiple sub-regions. this may be most applicable in a syndromic surveillance application. the simplicity of the univariate cusum makes training and technical expertise less of a factor than in the multivariate case. the multivariate cusum is also more difficult to interpret, and specification of the threshold parameter requires simulation experimentation or a large temporal extent from which to establish a baseline.
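the distance statistic reconstructed above can be computed directly. in the sketch below, the covariance matrix encoding spatial dependence is simply assumed, and the accumulate-and-restart rule is a simplified stand-in for rogerson and yamada's exact procedure.

```python
import numpy as np

def multivariate_cusum(counts, expected, sigma, h):
    # accumulate observed-minus-expected vectors over the p sub-regions
    # and monitor the distance mc_t = sqrt(s_t sigma^-1 s_t')
    sigma_inv = np.linalg.inv(sigma)
    s = np.zeros(counts.shape[1])
    alarms = []
    for t in range(counts.shape[0]):
        s = s + (counts[t] - expected[t])
        mc = float(np.sqrt(s @ sigma_inv @ s))
        if mc > h:
            alarms.append((t, round(mc, 2)))
            s[:] = 0.0  # restart after a signal
    return alarms

# three sub-regions over ten periods; region 2 drifts upward from day 5
rng = np.random.default_rng(0)
expected = np.full((10, 3), 4.0)
counts = rng.poisson(4.0, size=(10, 3)).astype(float)
counts[5:, 2] += 4.0
sigma = np.array([[2.0, 0.5, 0.0],   # assumed variance-covariance
                  [0.5, 2.0, 0.5],   # matrix encoding dependence
                  [0.0, 0.5, 2.0]])  # between neighbouring regions
print(multivariate_cusum(counts, expected, sigma, h=6.0))
```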
scan statistics, developed originally for temporal clustering by naus ( ), test whether cases of disease in a temporally defined subset exceed the expectation given a null hypothesis of no outbreak. the length of the temporal window is varied systematically in order to detect outbreaks of different lengths. this approach was first extended to spatial cluster detection in the geographical analysis machine (openshaw et al., ). the spatial approach looks for clusters by scanning over a map of cases of disease using circular search areas of varying radii. kulldorff and nagarwalla ( ) refined spatial scanning with the development of the spatial scan statistic, which adjusts for the multiple testing of many circular search areas. the spatial scan statistic overcomes the multiple-testing problem (common to many local spatial analysis methods) by taking the most likely cluster, defined by maximizing the likelihood that the cases within the search area are part of a cluster compared to the rest of the study area. significance testing for this one cluster is then assessed via monte carlo randomization. secondary clusters can be assessed in the same way and ranked by p-value. in kulldorff ( ), the spatial scan statistic is extended to space-time, such that cylindrical search areas are used, where the spatial search area is defined by cylinder radius and the temporal search area by cylinder height. in prospective analysis, candidate cylinders are limited to those that start at any time during the study period and end at the current time period (i.e., alive clusters). significance is determined through randomization, comparing random permutations to the likelihood-ratio-maximizing cylinder in the observed data. an additional consideration to take account of multiple hypothesis testing over time (correlated sequential tests) is given by including previously tested cylinders (which may be currently 'dead') in the randomization procedure (kulldorff, ).

the space-time scan statistic (kulldorff, ) approaches the surveillance problem in a novel way and aptly handles some key shortcomings of other local methods (multiple testing, locating clusters, pre-specifying cluster size). however, a limitation is that the expectation is conditional on an accurate representation of the underlying population at risk, data which may be hard to obtain. in long-term space-time surveillance scenarios, accurate population estimates between decennial censuses are rare or must be interpolated. in syndromic applications, where cases are affected by unknown variations in care-seeking behaviours, the raw population numbers may not accurately reflect the at-risk population. in kulldorff et al. ( ), the expected value for each unit under surveillance is estimated from historical case data rather than population data. generating the expected value from the history of the process under surveillance is most suitable for real-time prospective surveillance contexts where the current state of the process is of interest. this extension allows the application of the space-time scan statistic in a wider range of surveillance applications. a remaining limitation of the cylindrical space-time scan statistic is the use of circular search areas over the map. the power of scan statistics that use circular-based search areas declines as clusters become more irregular in shape, for example, for cases clustered along a river valley or where disease transmission is linked to the road network. the spatial scan statistic has been extended to detect irregularly shaped clusters in patil and taillie ( ) and tango and takahashi ( ). extensions of these approaches to space-time are active areas of research. a space-time version of the tango and takahashi ( ) method uses spatial adjacency of areal units, added incrementally up to k nearest-neighbour units, which are connected through time to form three-dimensional prism search areas (takahashi et al., ). a similar approach is given by costa et al. ( ). however, these methods are very computationally intensive.

scan statistics are one of the most widely used statistical methods for outbreak detection in surveillance systems. space-time scan statistics are able to detect and locate clusters of disease, and can condition expected counts for individual sub-regions on population data or on previous case data, making these methods suitable for implementation where data volume is large. the scope of scan statistics, like most statistical tests, is limited to monitoring case data, either case event point data or counts by sub-region. scan statistics are best suited to detecting and locating discrete localized outbreaks. secondary clusters can be identified by ranking candidate clusters by their likelihood ratio. yet region-wide outbreaks cannot be detected with scan statistics because of the assumed form of a cluster as a compact geographical region where cases are greater than expected. novel space-time methods that search for raised incidence via graph-based connectivity may model spatial relationships of disease processes more accurately than circular search areas. however, the computational burden and complexity of these approaches limit their use to expert analysts and researchers.
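the likelihood-ratio comparison at the heart of the scan statistic can be illustrated for a single candidate cylinder using the standard poisson formulation; the sketch below deliberately omits the search over all cylinders and the monte carlo randomization that the full procedure requires.

```python
import math

def poisson_scan_llr(c_in, e_in, c_tot, e_tot):
    # log-likelihood ratio for one candidate cylinder in the poisson
    # formulation of the scan statistic: observed versus expected cases
    # inside the cylinder compared with the remainder of the study area
    c_out, e_out = c_tot - c_in, e_tot - e_in
    if c_in / e_in <= c_out / e_out:   # only elevated risk is of interest
        return 0.0
    llr = 0.0
    if c_in > 0:
        llr += c_in * math.log(c_in / e_in)
    if c_out > 0:
        llr += c_out * math.log(c_out / e_out)
    return llr

# 30 of 120 cases fall inside a cylinder where only 12 were expected
print(round(poisson_scan_llr(30, 12.0, 120, 120.0), 2))  # -> 11.08
```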
at the root of the problem is a conceptual discrepancy between the definition of a disease outbreak (which disease surveillance systems are often interested in detecting) and a disease cluster (defined by spatial proximity), which is common to all statistical testing methods for space-time surveillance (lawson, ).

model-based approaches to surveillance developed recently as the need emerged to include other variables in the specification of our expectation of disease incidence. for example, we often expect disease prevalence to vary with age, gender, and workplace of the population under surveillance. statistical models allow for these influences to adjust the disease risk through space and time. a second impetus for the development of statistical models for disease surveillance is that a large part of epidemiology concerned with estimating relationships between environmental variables and disease risk (i.e., ecological analysis) provided a methodological basis from which to draw. modelling for space-time disease surveillance is relatively recent, and this is a very active area of statistical surveillance research. again we stratify statistical models into three broad classes: generalized linear mixed models, bayesian models, and models of specific space-time processes.

generalized linear mixed models (glmms) offer a regression-based framework to model disease counts or rates using any of the exponential family of statistical distributions. this allows flexibility in the expected distribution of the response variable, as well as flexibility in the relationship between the response and the covariate variables (the link function). one application of this approach to prospective disease surveillance for detection of bioterrorist attacks is given by kleinman et al. ( ). here, the number of cases of lower respiratory infection syndromes in small geographic areas acts as a proxy for possible anthrax inhalation. a glmm approach is used to combine fixed effects for covariate variables (i.e., season, day of the week) with a random effect that accounts for varying baseline risks in different geographic areas. in kleinman et al. ( ), the logit link function is used in a binomial logistic model to estimate the expected number of cases y_it in area i for time t. this is a function of the probability of an individual being a case in area i at time t and the number of people n_it in area i at time t. this expectation is conditional on a location-specific random effect b_i, and is then converted to a z-score and evaluated to determine if it is unusual (i.e., an emerging cluster). this approach was extended to a model using poisson random effects in kleinman ( ). the use of glmms in prospective surveillance has also been suggested for west nile virus surveillance, due to the ease with which covariates can be included and the flexibility in model specification (johnson, ). the glmm approach has attractive advantages as a flexible modelling tool. particularly, relaxation of distributional assumptions, flexibility in link functions, and the ability to model spatial relationships (at multiple spatial scales) as random effects make glmms useful for prospective space-time disease surveillance. the scale and scope of the surveillance context do not limit a model-based approach, and models may be even more useful when data abnormalities such as time lags occur (as estimates can be based on covariates alone).
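once such a model has been fitted, the surveillance step reduces to comparing observed counts against model-based binomial expectations. the sketch below shows only that step, taking the fitted case probabilities as given rather than fitting the mixed model itself; the alarm threshold is an arbitrary illustrative choice.

```python
import math

def binomial_zscores(observed, fitted_p, population, threshold=3.0):
    # convert area-level counts into z-scores against the binomial
    # expectation n_it * p_it implied by a previously fitted model,
    # flagging areas whose z-score exceeds the chosen threshold
    flags = []
    for area, (y, p, n) in enumerate(zip(observed, fitted_p, population)):
        mu = n * p
        sd = math.sqrt(n * p * (1.0 - p))
        z = (y - mu) / sd
        if z > threshold:
            flags.append((area, round(z, 2)))
    return flags

# three areas on one day, with case probabilities from a fitted model
obs = [4, 30, 7]
p_hat = [0.001, 0.002, 0.0015]
pop = [5000, 8000, 4000]
print(binomial_zscores(obs, p_hat, pop))  # -> [(1, 3.5)]
```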
one feature of glmms that is important for many disease surveillance contexts is the ease with which spatial hierarchies can be incorporated. ecological relationships that are structured hierarchically and that impact disease emergence (e.g., climate, vegetation, vector life-cycle development) can be represented and accounted for. further, human drivers of disease emergence (e.g., land-use policies, travel patterns, demographics) are often organized hierarchically through administrative units. in the social sciences, glmms (i.e., multi-level models) are often used to incorporate these 'contextual effects' on an outcome variable. a further advantage of glmms is their ability to incorporate spatial variation in the underlying population at risk by conditioning the expected value on the random effect component ($b_i$ above). where fewer people are present, the expected value is adjusted toward the mean. this can somewhat account for the small-numbers problem of smrs in epidemiology, reducing the likelihood of estimating extremely low expected values in rural areas. bayesian models have been used extensively in disease mapping studies (best et al., ; lawson, ). analysis of disease in a bayesian framework centres on inference on unknown area-specific relative risks. inference on this unknown risk distribution is based on the observed data y and a prior distribution. these are combined via a likelihood function to create a distribution for model parameters which can be sampled for prediction. bayesian models have been applied for retrospective space-time surveillance (e.g., macnab, ) and are now being developed for prospective space-time disease surveillance. the basic bayesian model can incorporate space and time dependencies. in abellan et al. ( ) a model is described where the counts of disease are taken to be binomially distributed, and the next level of the model is composed of a decomposition of the unknown risks into model parameters for general risk, spatial effects, temporal effects, and space-time interaction. estimation requires specifying prior distributions for each of the model components and sampling the posterior distribution via markov chain monte carlo (mcmc) methods. here, the authors describe space-time bayesian models for explanation of overall patterns of disease, speculating on their use in disease surveillance contexts. rodeiro and lawson ( a) offer a similar model based on a poisson-distributed disease count. specifically, the counts $y_{ij}$ are poisson with mean a function of the expected number of cases $e_{ij}$ in location i at time j and the area-specific relative risk $rr_{ij}$. similar to abellan et al. ( ), the $\log(rr_{ij})$ are decomposed into spatial effects $u_i$, uncorrelated heterogeneity $v_i$, temporal trend $t_j$, and space-time interaction $c_{ij}$:

$$y_{ij} \sim \text{poisson}(e_{ij}\, rr_{ij}), \qquad \log(rr_{ij}) = u_i + v_i + t_j + c_{ij}.$$

again, these components need prior distributions specified. for the spatial correlation term, a conditional autoregressive (car) model is suggested for modelling spatial autocorrelation. residuals are then extracted from model predictions for incoming data and can be used to assess how well the data fit the existing model. as discussed in rodeiro and lawson ( a), monitoring residuals in this way makes the detection of specific types of disease process change feasible by adjusting how residuals are evaluated. while adding to the complexity of the analysis, this may be of great use in a surveillance application.
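a minimal illustration of the shrinkage idea underlying bayesian relative-risk estimation is the conjugate poisson-gamma model, where the posterior mean risk is available in closed form. the sketch below uses a method-of-moments prior; it is a toy empirical bayes smoother, not the full spatiotemporal models discussed above.

```python
import numpy as np

def eb_relative_risks(y, e):
    """Poisson-Gamma empirical Bayes smoothing of area relative risks.

    y: observed counts per area; e: expected counts (e.g., age-adjusted).
    With prior RR_i ~ Gamma(a, b), the posterior is Gamma(a + y, b + e),
    so the posterior mean shrinks the raw SMR y/e toward the prior mean.
    """
    smr = y / e
    m, v = smr.mean(), smr.var()
    # method-of-moments prior: mean m, variance v  ->  a = m^2/v, b = m/v
    a = m * m / max(v, 1e-9)
    b = m / max(v, 1e-9)
    return (y + a) / (e + b)    # posterior mean relative risk per area
```

areas with small expected counts (the rural small-numbers problem mentioned above) are pulled strongly toward the global mean, while data-rich areas keep risks close to their raw smr.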
alternative proposals, such as bayesian cluster models with an 'a priori' cluster component for spatiotemporal disease counts, were developed by yan and clayton ( ). more recently, bayesian and empirical bayes semi-parametric spatiotemporal models with temporal spline smoothing were developed for the analysis of univariate spatiotemporal small-area disease and health outcome rates (macnab, a; macnab and gustafson, ; ugarte et al., ) and multivariate spatiotemporal disease and health outcome rates (macnab, b). tzala and best ( ) also proposed bayesian hierarchical latent factor models for the modelling of multivariate spatiotemporal cancer rates. these spatiotemporal models, with related bayesian and empirical bayes methods of inference, may also be considered for disease surveillance applications. the statistical methodology for applying bayesian models to surveillance in space-time is still being developed, and as such these approaches are suited primarily to researchers. bayesian models are attractive because they allow expert and local knowledge of disease processes to be incorporated via the specification of prior distributions on model parameters. however, this can also be a drawback, as a subjective element is introduced to the model. it is generally recommended that sensitivity analysis be conducted on a variety of candidate priors for model parameters (e.g., macnab and gustafson, ; macnab, a). these technical aspects of model fitting require advanced statistical training. a further complexity of bayesian models is estimation. mcmc methods are required for generating the posterior distributions for these types of models and are computationally very demanding (although see rodeiro and lawson, b). this might negate the use of these approaches in surveillance contexts that require daily refitting of models (i.e., fine temporal resolution); however, monthly or annual model refitting may be possible. as with glmms, bayesian models lend themselves to modelling hierarchical spatial relationships, and this can be important for both ecological and human-mediated drivers of disease emergence. some modelling approaches to surveillance have been designed to model specific types of spatial processes, generally represented as a realization from a statistical distribution. while all models require some distributional assumptions, those considered here purport to associate specific statistical processes with disease processes in the context of surveillance. in held et al. ( ), a model is based on a poisson branching process whereby outcomes are dependent on both model parameters describing a particular property (e.g., periodicity) and past observed data. spatial and space-time effects can also be included as an ordinary multivariate extension. a useful aspect of this formulation for disease surveillance is the separation of the disease process at time t into two parts: an endemic part $\nu$ and an epidemic part with conditional rate $\lambda y_{t-1}$:

$$y_t \mid y_{t-1} \sim \text{poisson}(\nu_t + \lambda\, y_{t-1}).$$

the endemic component can also be adjusted for seasonality, day-of-the-week effects and other temporal trends. extended to the multivariate case, the model becomes

$$y_{it} \mid y_{i,t-1} \sim \text{poisson}(n_{it}\,\nu_t + \lambda\, y_{i,t-1}),$$

where the endemic rate is adjusted by the number of people $n_{it}$ in area i at time t, and the epidemic part depends on area-specific previous observations. spatial dependence can be incorporated by adding a spatial effects term that accounts for correlated estimates in $\lambda y_{i,t-1}$ via a weights matrix. however, this type of model yields separate parameters for each geographical unit.
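the branching-process separation into endemic and epidemic parts can be demonstrated in a few lines. the sketch below simulates the univariate model and recovers the epidemic parameter by conditional least squares, assuming the endemic rate is known; the actual held et al. methodology uses likelihood-based inference, so this is only an illustration of the model structure.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_ee(T=365, lam=0.4, nu0=2.0, amp=0.6):
    """Simulate the univariate endemic-epidemic count series:
    y_t ~ Poisson(nu_t + lam * y_{t-1}), with a seasonal endemic rate."""
    y = np.zeros(T, dtype=int)
    for t in range(1, T):
        nu_t = nu0 * (1 + amp * np.sin(2 * np.pi * t / 365))
        y[t] = rng.poisson(nu_t + lam * y[t - 1])
    return y

def fit_lambda(y, nu):
    """Conditional least squares: since E[y_t - nu_t | y_{t-1}] equals
    lam * y_{t-1}, regress the endemic-adjusted counts on y_{t-1}
    through the origin."""
    r, x = y[1:] - nu[1:], y[:-1]
    return float(r @ x) / float(x @ x)

y = simulate_ee()
t = np.arange(365)
nu = 2.0 * (1 + 0.6 * np.sin(2 * np.pi * t / 365))   # endemic rate assumed known
print("estimated lambda:", fit_lambda(y, nu))        # should be near 0.4
```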
a point process methodology for prospective disease surveillance is presented in diggle et al. ( ). point data representing cases are modelled with separate terms for spatial variation, temporal variation, and residual space-time variation. the method is local, in the sense that recent cases are used for prediction, producing continuously varying risk surfaces. however, there are also global model parameters which estimate the background variation in space and time from historical data. outbreaks are flagged when variation in the residual space-time process exceeds a threshold value c. different values for the threshold parameter are evaluated and exceedance probabilities are mapped. model parameters are fixed, allowing the model to be run daily on new data. however, as noted in diggle et al. ( ), this may fail to capture unknown temporal trends, and periodic refitting may be required. a different approach is given by järpe ( ), which, instead of decomposing the process into separate components, monitors a single parameter of spatial relationships in a surveillance setting. this is similar in spirit to rogerson's work (rogerson, ) monitoring point patterns with spatial statistics, though here a specific underlying process is assumed: the ising model. the ising model represents a binary-state two-dimensional lattice. it has two parameters: one governs the overall intensity (the probability of a site being in the active state), and the other the spatial interaction (the probability of nearby sites being alike). in järpe ( ), the intensity parameter is assumed equal and unchanging, and surveillance is performed on the interaction parameter under different lattice sizes and types of change. the interaction parameter is essentially a global measure of spatial autocorrelation. it can then be monitored using temporal surveillance statistics such as cusum. since the properties of the underlying model are known, järpe is able to detect very small changes in spatial autocorrelation which could indicate the shift of a disease from endemic to epidemic. while significant spatial autocorrelation is often present in both endemic and epidemic states, changes in clustering can reveal threshold dynamics of the process in a surveillance setting. this is a common feature of forest insect epidemics (peltonen et al., ). further, the effect of the lattice size can easily be estimated, and as lattice size is increased, sensitivity to changes in the interaction parameter increases as well. while most methods discussed thus far have been developed with the analysis of aggregated counts of disease in mind, analysis of sites on a lattice may have applicability in certain disease surveillance contexts. for example, square lattices are used for remotely sensed image processing, and surveillance of the presence or absence of a disease in these sampling units using an ising model-based approach could incorporate remotely sensed environmental covariates (e.g., a normalized differential wetness index), as is commonly done for zoonotic disease risk mapping and forecasting (kitron et al., ; rogers et al., ; wilson, ). however, it is unclear how covariates would be included in the ising model. this highlights an important point with model-based approaches to prospective surveillance: the main advantage of models is to incorporate extra information and to estimate smooth relative risks, yet as models grow in complexity they become more difficult to re-fit.
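the idea of monitoring a single global clustering summary with a temporal control chart can be sketched as follows: a join-count statistic (the proportion of like-valued neighbour pairs, a simple stand-in for the ising interaction parameter) is computed on each day's lattice and fed to a one-sided cusum. the thresholds here are arbitrary demonstration values, and järpe's actual procedure exploits the known ising likelihood rather than this ad hoc summary.

```python
import numpy as np

def join_count(grid):
    """Proportion of like-valued horizontal/vertical neighbour pairs on a
    binary lattice: a crude global spatial autocorrelation summary that
    rises as the interaction (clustering) strengthens."""
    h = (grid[:, :-1] == grid[:, 1:]).sum()
    v = (grid[:-1, :] == grid[1:, :]).sum()
    n_pairs = grid[:, :-1].size + grid[:-1, :].size
    return (h + v) / n_pairs

def cusum(series, target, k=0.01, h=0.05):
    """One-sided CUSUM on the clustering summary; signals an upward shift
    that could indicate an endemic-to-epidemic transition."""
    s, alarms = 0.0, []
    for t, x in enumerate(series):
        s = max(0.0, s + (x - target - k))
        if s > h:
            alarms.append(t)
            s = 0.0                         # restart the chart after a signal
    return alarms

# daily presence/absence lattices (e.g., disease detected per grid cell)
rng = np.random.default_rng(0)
lattices = [rng.integers(0, 2, size=(20, 20)) for _ in range(30)]
stats = [join_count(g) for g in lattices]
print(cusum(stats, target=np.mean(stats[:10])))
```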
this has implications for how suitable models are in different surveillance contexts. where the temporal scale is large, expected counts can be based on observed data rather than on census or other data sources. this is particularly important where diseases follow seasonal trends. with limited temporal data available, estimation of model parameters may be impacted by regular variation in disease occurrence. for surveillance systems monitoring many small areas (i.e., large spatial scale), the held et al. ( ) model would be of limited value, as separate parameters need to be estimated for every sampling unit. broad-scale patterns over large areas might be better captured by the point process approach of diggle et al. ( ), although here case-event data with fine spatial resolution are required. all modelling approaches demand complex decisions, such as which covariates to include, how often to re-fit the model, and how to test incoming data for fit against the existing model, all of which require advanced statistical knowledge. this limits the applicability of modelling approaches to advanced analysts and researchers, except for use in a black-box sense by analysts and public health practitioners. surveillance models can be tailored to detect specific types of disease process changes, such as a region-wide increase, or small changes in spatial autocorrelation suggesting a shift from endemic to epidemic states. however, models also require additional tests to determine whether incoming data differ from the expected (i.e., modelled) pattern of cases. thus, in practice, surveillance models are best utilized to estimate a realistic relative risk, and can then be combined with statistical tests such as cusum (järpe, ) and scan statistics. research into space-time disease surveillance methods has increased dramatically over the last two decades. many new methods are designed for specific surveillance systems, or are in experimental/developmental stages and not used in practical surveillance. here, we report on some newly developed approaches for public health surveillance to alert readers to the most recent developments in these emerging research areas. while test- and model-based approaches to surveillance build on classical statistical methods, many recent space-time disease surveillance methods have been developed specifically to take advantage of advanced computing power and new data sources. these approaches include networks (reis et al., ; wong and moore, ), simulation-based methods such as agent-based models (eubank et al., ) and bootstrap models (kim and o'kelly, ), and hidden markov models (madigan, ; sun and cai, ; watkins et al., ). other new methods are designed to address limitations of existing surveillance methods. one problem for most methods of surveillance is the specification of the null hypothesis, or expected disease prevalence. while expected rates are generally conditional on population data, spatial heterogeneity in the background rates is rarely accounted for. that is, complete spatial randomness (csr) is the underlying null model. goovaerts and jacquez ( ) have used geostatistical approaches, estimating the spatial dependence of background rates via the semivariogram, to develop more realistic null models for disease cluster detection. the geostatistical framework has the advantage of estimating spatial dependence from the data, rather than defining it a priori via a spatial weights matrix as is common in disease mapping models.
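estimating spatial dependence from the data, as in the geostatistical approach, starts from the empirical semivariogram. a minimal matheron-type estimator is sketched below; the lag spacing and tolerance are user choices, and a parametric model would normally be fitted to the resulting points before simulating neutral (null) rate surfaces.

```python
import numpy as np

def empirical_semivariogram(xy, z, lags, tol):
    """Classical (Matheron) semivariogram of rates z at locations xy:
    gamma(h) = 0.5 * mean[(z_i - z_j)^2] over pairs at distance ~ h."""
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    dz2 = (z[:, None] - z[None, :]) ** 2
    iu = np.triu_indices(len(z), k=1)         # keep each pair once
    d, dz2 = d[iu], dz2[iu]
    gamma = []
    for h in lags:
        sel = np.abs(d - h) <= tol            # pairs in this lag bin
        gamma.append(0.5 * dz2[sel].mean() if sel.any() else np.nan)
    return np.array(gamma)
```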
another problem common to most surveillance methods is that maps of disease represent either home addresses (case events) or small areas (tract counts). unusual clusters on the map imply that heightened risk is associated with those locations. however, the movement of animals and people decouples the location of diagnosis from disease risk by modifying exposure histories. methods that account for mobility may be an important area for future surveillance, especially in the context of real-time, prospective outbreak detection. the relationship between case, location, and exposure is further complicated by disease latency periods, which give rise to space-time lags in diagnoses (schaerstrom, ). this may be most important in the context of retrospective cluster analysis and investigation of possible environmental risk factors. statistical tests have been developed to account for exposure history and mobility for case-control data (jacquez and meliker, ) and case-only data (jacquez et al., ). kernel-based approaches to risk estimation that incorporate duration at each location have been utilized for amyotrophic lateral sclerosis (sabel et al., ). the general approach is to model and analyze the space-time paths of individuals in the sense of hägerstrand ( ). as personal location data continue to become ubiquitous due to new technology such as gps-enabled cell phones, surveillance methods that account for individual space-time histories may see more application in public health surveillance. the development of space-time disease surveillance systems holds great potential for improving public health via early warning and monitoring of health. the selection of which method(s) to implement in a given context depends on a variety of factors (table ). this review has demonstrated that there is no best method for all systems. there are many aspects to consider when thinking about methods for space-time disease surveillance. many of the methods described in this review are active areas of research, and new methods are constantly being developed. as more data sources become available, this trend is expected to continue, and the methods described here provide a snapshot of the options available to public health analysts and researchers. a brief outline of some of the factors reviewed and how they relate to surveillance methods is given below. the spatial scale of the surveillance context is an important factor for selecting appropriate methods. spatial effects (i.e., clustering) are likely only of interest when cases/counts collected over a relatively large, heterogeneous area are being analyzed. over smaller, more homogeneous areas, where spatial effects are negligible, temporal surveillance is optimal. when space-time surveillance is warranted, the choice of surveillance approach may be impacted by how spatial effects can be incorporated. where the spatial scale is small, one would likely focus on either process models or statistical tests which use an underlying distribution for the null hypothesis (i.e., a poisson model). the temporal scale of surveillance is also important. large temporal scales can use either testing or modelling methods, and best suit methods where baselines are estimated from previous cases, such as the space-time permutation scan statistic. short temporal scales are not appropriate for models when diseases have complex day-of-the-week effects or seasonal variation in incidence. scale will also affect the computational burden placed on the system.
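the duration-weighted kernel idea can be illustrated directly: each residence location contributes to the risk surface in proportion to the time the individual spent there. the sketch below assumes gaussian kernels and a fixed bandwidth; it is a schematic of the approach, not the published als analysis.

```python
import numpy as np

def duration_weighted_kernel(cases_xy, durations, grid_xy, bandwidth):
    """Gaussian kernel intensity surface with residence-duration weights.

    cases_xy:  (n, 2) residence locations along individuals' space-time paths
    durations: (n,)   time spent at each location (the exposure weight)
    grid_xy:   (G, 2) evaluation points for the risk surface
    """
    d2 = ((grid_xy[:, None, :] - cases_xy[None, :, :]) ** 2).sum(axis=2)
    k = np.exp(-0.5 * d2 / bandwidth**2) / (2 * np.pi * bandwidth**2)
    return k @ (durations / durations.sum())   # (G,) weighted intensity
```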
many approaches reviewed here, particularly statistical tests such as scan statistics, use approximate randomization to generate the distribution of a test statistic under the null hypothesis. methods that utilize randomization procedures, while powerful, impose constraints when applied to large spatial-temporal datasets. most methods are designed for a single disease, and all methods are suitable for single-host diseases, but finer detail in case distribution may be important for multiple-host zoonotic diseases. stratification into separate diseases by host type will result in a loss of information, as associations between host types will be lost. as zoonotic diseases make up the majority of emerging infectious diseases (greger, ), multiple-host surveillance methods are required. multivariate tests such as the multivariate cusum can be used to monitor multiple signals, as sketched below. modelling approaches can also be used by creating a generalized risk index as the variable under surveillance. multivariate extensions to existing methods can be used to monitor associations between two diseases, for example, human and animal strains of the same pathogen. the objective of surveillance is one of the main drivers of method selection. statistical tests are commonly used for outbreak detection. in general, modelling approaches are better suited to monitoring space-time trends. for what has been termed situational awareness, multiple signals are usually monitored. this is often the case in large syndromic applications such as biosense and essence. these contexts are best suited to a modelling approach, as heterogeneity often needs to be modelled with covariates. consideration of technical expertise is required for practical disease surveillance. broadly speaking, greater statistical expertise is required for model-based methods than for testing (understanding model assumptions, parameterizing models, preparing covariate data, and interpreting output), while testing concepts are generally easier to grasp. however, for epidemiologists already familiar with generalized linear mixed models, some model approaches that incorporate space and time may be quickly attainable, such as that of kleinman et al. ( ). yet for analysts from a health geography or spatial analysis background, testing methods might be more familiar. in any case, the use of space-time surveillance methods in public health will only increase in the future, and it is important that training and education keep pace with the changing methods available for surveillance data analysis.
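for concreteness, the sketch below implements a crosier-type multivariate cusum, one standard way to jointly monitor several correlated streams (for example, counts of the human and animal strains of the same pathogen). the reference values k and h are illustrative; in practice they would be chosen to achieve a target in-control run length.

```python
import numpy as np

def mcusum(x, mu, sigma_inv, k=0.5, h=5.0):
    """Crosier-type multivariate CUSUM.

    x:         (T, p) observations over time
    mu:        (p,)   in-control mean vector
    sigma_inv: (p, p) inverse covariance of the streams
    Returns the list of alarm times.
    """
    s = np.zeros(mu.shape)
    alarms = []
    for t, xt in enumerate(x):
        w = s + xt - mu
        c = np.sqrt(w @ sigma_inv @ w)          # Mahalanobis length of drift
        s = np.zeros_like(s) if c <= k else w * (1 - k / c)
        if np.sqrt(s @ sigma_inv @ s) > h:      # signal on accumulated shift
            alarms.append(t)
            s[:] = 0.0                          # restart after a signal
    return alarms
```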
use of space-time models to investigate the stability of patterns of disease
an incremental knox test for the determination of the serial interval between successive cases of an infectious disease
local indicators of spatial association - lisa
spatio-temporal exploration of sars epidemic
the detection of clusters in rare diseases
a comparison of bayesian spatial models for disease mapping
an application of density estimation to geographical epidemiology
biosense: implementation of a national early event detection and situational awareness system
algorithms for rapid outbreak detection: a research synthesis
syndromic surveillance and bioterrorism-related epidemics
framework for evaluating public health surveillance systems for early detection of outbreaks
predicting the local dynamics of epizootic rabies among raccoons in the united states
a space time permutation scan statistic with irregular shape for disease outbreak detection
second-order analysis of space-time clustering
statistical analysis of spatial point patterns
point process methodology for on-line spatio-temporal disease surveillance
modelling disease outbreaks in realistic urban social networks
signals come and go: syndromic surveillance and styles of biosecurity
spatial and syndromic surveillance for public health
the analysis of spatial association by use of distance statistics
accounting for regional background and population size in the detection of spatial clusters and outliers using geostatistical filtering and spatial neutral models: the case of lung cancer in long island
the human/animal interface: emergence and resurgence of zoonotic infectious diseases
innovation diffusion as a spatial process
a statistical framework for the analysis of multivariate infectious disease surveillance counts
a k nearest neighbour test for space-time interaction
in search of induction and latency periods: space-time interaction accounting for residential mobility, risk factors and covariates
case-control clustering for mobile populations
surveillance of the interaction parameter of the ising model
prospective spatial prediction of infectious disease: experience of new york state (usa) with west nile virus and proposed directions for improved surveillance
a bootstrap based space-time surveillance model with an application to crime occurrences
spatial analysis of the distribution of tsetse flies in the lambwe valley, kenya, using landsat tm satellite imagery and gis
generalized linear models and generalized linear mixed models for small-area surveillance
a generalized linear mixed models approach for detecting incident clusters of disease in small areas, with an application to biological terrorism
a model-adjusted space-time scan statistic with an application to syndromic surveillance
the detection of space-time interactions
prospective time periodic geographical disease surveillance using a scan statistic
a space-time permutation scan statistic for the early detection of disease outbreaks
the knox method and other tests for space-time interaction
spatial disease clusters: detection and inference
spatial and syndromic surveillance for public health
bayesian disease mapping: hierarchical modeling for spatial epidemiology
applications of extraction mapping in environmental epidemiology
spatial pattern and ecological analysis
statistical analyses in disease surveillance systems
a bayesian hierarchical model for accident and injury surveillance
spline smoothing in bayesian disease mapping
mapping disability-adjusted life years: a bayesian hierarchical model framework for burden of disease and injury assessment
regression b-spline smoothing in bayesian disease mapping: with an application to patient safety surveillance
spatial and syndromic surveillance for public health
implementing syndromic surveillance: a practical guide informed by the early experience
the detection of disease clustering and a generalized regression approach
a review of methods for the statistical analysis of spatial patterns of disease
visualization techniques and graphical user interfaces in syndromic surveillance systems. summary from the disease surveillance workshop
the distribution of the size of the maximum cluster of points on a line
using remote sensing and geographic information systems to identify villages at high risk for rhodesiense sleeping sickness in uganda
a mark geographical analysis machine for the automated analysis of point data sets
upper level set scan statistics for detecting arbitrarily shaped hotspots
spatial synchrony in forest insect outbreaks: roles of regional stochasticity and dispersal
a case-control approach to examine diseases for evidence of contagion, including diseases with long latent periods
if syndromic surveillance is the answer, what is the question?
aegis: a robust and scalable real-time public health surveillance system
monitoring changes in spatio-temporal maps of disease
online updating of space-time disease surveillance models via particle filters
predicting the distribution of tsetse flies in west africa using temporal fourier processed meteorological satellite data
surveillance systems for monitoring the development of spatial patterns
a set of associated statistical tests for spatial clustering
monitoring spatial maxima
monitoring change in spatial patterns of disease: comparing univariate and multivariate cumulative sum approaches
spatial clustering of amyotrophic lateral sclerosis in finland at place of birth and place of death
apparent and actual disease landscapes. some reflections on the geographical definition of health and disease
sequential analysis: tests and confidence intervals
a review and discussion of prospective statistical surveillance in public health
evaluation challenges for syndromic surveillance - making incremental progress
syndromic surveillance: is it worth the effort?
large-scale multiple testing under dependence
a flexibly shaped space-time scan statistic for disease outbreak detection and monitoring
a class of tests for detecting 'general' and 'focused' clustering of rare diseases
a flexibly shaped spatial scan statistic for detecting clusters
monitoring for clusters in disease: application to leukemia incidence in upstate new york
bayesian latent variable modelling of multivariate spatiotemporal variation in cancer mortality
spatio-temporal modeling of mortality risks using penalized splines
the emerging science of very early detection of disease outbreaks
disease surveillance using a hidden markov model
emerging and vector-borne diseases: role of high spatial resolution and hyperspectral images in analyses and forecasts
multivariate cusum quality-control procedures
classical time-series methods for biosurveillance
world health organization: global early warning system for major animal diseases, including zoonoses (glews)
a review of public health syndromic surveillance systems
a cluster model for space-time disease counts

this project was supported in part by the teasdale-corti global health research partnership program, the natural sciences and engineering research council of canada, and geoconnections canada. the authors would like to thank dr. barry boots for direction and suggestions during the starting phase of this research.

key: cord- - k ttvfq authors: dabachine, yassine; taheri, hamza; biniz, mohamed; bouikhalene, belaid; balouki, abdessamad title: strategic design of precautionary measures for airport passengers in times of global health crisis covid : parametric modelling and processing algorithms date: - - journal: j air transp manag doi: . /j.jairtraman. . sha: doc_id: cord_uid: k ttvfq

presently, the negative results of a pandemic loom in a threatening manner on an international scale. facilities such as airports have contributed significantly to the global spread of the covid- virus. therefore, in order to address this challenge, studies on sanitary risk management and the proper application of countermeasures should be carried out. to measure the consequences for passenger flow, simulation modelling has been set up at casablanca mohammed v international airport. several scenarios using daily traffic data were run under different circumstances. this allowed the development of some assumptions regarding the overall capacity of the airport. the proposed simulations make it possible to calculate the number of passengers that can be processed with the available check-in counters under the proposed sanitary measures.

the aviation sector has been experiencing an unprecedented crisis since march . indeed, almost all airports have been paralyzed following the outbreak of the covid- pandemic. eurocontrol had announced a significant % reduction in the number of flights by may [ , ]. the flow of international traffic contributed significantly to the spread of the virus worldwide [ ]. in europe, for example, it seems that the areas least affected by the virus are those where no international airport is located. one of the main characteristics of covid- is its long incubation period, which currently averages . days [ ]. contagiousness during the incubation period is one of the reasons why covid- spreads so widely compared to other viruses, making it extremely difficult to exclude the possibility of asymptomatic passengers passing through the airport [ , ]. according to the international civil aviation organization (icao) [ ], international air traffic will experience a significant decline, on the order of % to % in the number of international passengers in compared to . the airports council international (aci) [ ] estimates that airports will lose two fifths of their passenger traffic, or more than $ billion in revenues, in compared to the status quo. the international air transport association (iata) [ ], which represents the airlines, estimates that revenue passenger-kilometres will decrease by % in compared to . over time, economic activity will resume as governments strive to restore economic growth and recover, which will require a resumption of airport activity. however, airports need to be reopened gradually, while remaining aware of the potential risk generated by the hypermobility experienced so far, in order to avoid a second wave of the pandemic.
to date, there is no published solution for a passenger flow management system within the framework of the new health constraints. nevertheless, certain rules and standards are defined by iata to guarantee the quality requirements of the passenger assistance service in the terminal area, and we have based our proposals for additional measures, in line with the sanitary requirements of such a pandemic, on these standards. social distancing is one of the main measures agreed upon, and it will affect the capacity of the airport, although the distances recently adopted by some airports make this issue a subject for debate. accordingly, this study seeks to determine the necessary parameters for passenger distancing in order to minimize the potential spread of the virus without compromising the airport's ability to manage the flow of passengers. this study therefore proposes simulations and discussions of the possible effects of these measures and tests their applicability. the proposed solution differs from other passenger flow management systems in that it introduces the preventive measures mandated by the world health organization (who), as well as health precautionary measures at airports. this has led us to study the movement of passengers in this context in order to develop a parametric model capable of adjusting the health measures to the expected flow of passengers, and also to verify its usefulness. it was decided to carry out the study in the country's busiest and best-equipped terminal, based on statistical data relative to times of congestion. this makes it possible to evaluate the actions taken and to determine whether they have had viable results. this document proposes a simulation tool to better manage the flow of passengers, as part of an approach that integrates quality-of-service standards and the new requirements of health regulations within airports. this paper is divided into eight sections: the first section highlights the immediate and lasting impact on the aviation sector in the wake of the covid- pandemic crisis. the second section outlines context information for the simulation process. in the third section, the conception is presented. the fourth section covers the mathematical modelling of the variable parameters. the fifth section presents the simulation. the model validation and the simulation analysis are then discussed. in the seventh section, results and discussion are presented. in the last section, some conclusions are drawn. casablanca mohammed v international airport has three terminals with a total capacity of million passengers per year and is the hub of the moroccan airport network. the surface area of each terminal is respectively , for t , , for t and , for t . it is connected to more than international destinations by weekly frequencies operated by airlines. the t terminal is used in this study because of its capacity and its state-of-the-art infrastructure and equipment, which meet current international standards for safety, security and quality of service [ ]. under the international health regulations (ihr), airport authorities are required to establish effective contingency plans and arrangements to deal with events that may constitute a public health emergency of international concern. the current outbreak of a novel coronavirus disease (covid- ) has spread across several borders, resulting in demands for the detection and management of suspected cases.
in order to implement the sanitary passenger flow management model proposed in this article, it is assumed that the guidelines for the detection and management of sick travellers suspected of being infected are applied in accordance with the interim guidelines for the management of ill travellers published on february [ ], and that the prevention rules defined by the world health organization (who) are implemented within airport facilities [ ]. in addition, we have proposed additional precautionary measures, such as physical separators between passengers and users, barriers between boarding gates, and signage indicating itineraries, so that departing and arriving passengers do not cross each other. attention to the los here is essential, especially in times of crisis. indeed, it is an indicator that makes it possible to observe the fluidity of passenger flow processing. in times of crisis, it is easy to observe overflows, passengers who become angry because of the conditions, and so on. compliance with the los partly makes it possible to avoid having to manage overflows due, for example, to long waiting times. this is why it remains an indicator that must be watched even in times of crisis. international standards have been established for terminals. the study's approach advocates that measures relating to waiting time, queue size and passenger handling rates should follow the iata quality-of-service standards [ ] illustrated in section . table presents the quality-of-service scale defined by iata [ , ]. the occupancy of a waiting area varies considerably according to the time spent by a passenger in that area. waiting time at the various modules is a key factor in the quality of service and an essential parameter in the sizing and capacity study of a terminal. it is extremely difficult to establish a precise relationship between waiting time, level of service and available space per passenger. a first indicator of the quality of service is the space available per passenger in waiting and circulation areas, translated on the level-of-service scale into a space allocation ratio expressed in per passenger. table presents the ratios recommended by iata for each passenger as a function of quality of service. a simplified way to approach the problem is to set maximum acceptable waiting times. table shows the maximum waiting times, in minutes, recommended by iata for each processing module based on quality of service [ ], [ ]. these maximum acceptable waiting times are to be adapted according to the context, the airport's service quality objectives and the type of traffic [ ]. generally speaking, for a passenger, a waiting time is unacceptable as soon as it exceeds minutes [ ]. passenger flow management is based on the concept of a faucet filling a leaking bucket, as shown in figure : the faucet represents the flow of arriving passengers and the leak the flow of processed passengers, while the filling of the bucket results from the difference between incoming and processed passengers [ , ]. passengers are required to proceed to the check-in area in accordance with the rules in force and to follow the process laid down for departing passengers [ ], as illustrated in figure . the simulation model contains a central processing unit and a display unit. the input data are divided into two groups, as shown in figure . the first group, in blue, corresponds to flight data, passenger data, check-in surface and resources.
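the faucet-and-bucket analogy reduces to a one-line recursion: the backlog at each interval is the previous backlog plus arrivals minus what the counters can absorb, floored at zero. a minimal sketch (the numbers below are invented for illustration):

```python
def queue_accumulation(arrivals, capacity):
    """'Faucet filling a leaking bucket': track the backlog of passengers
    when counter capacity is fixed per time interval.

    arrivals: passengers arriving in each interval
    capacity: passengers the open counters can process per interval
    """
    backlog, trace = 0, []
    for a in arrivals:
        backlog = max(0, backlog + a - capacity)   # bucket level
        trace.append(backlog)
    return trace

# e.g., a morning peak in 10-minute intervals against a fixed capacity
print(queue_accumulation([40, 90, 120, 100, 60, 30], capacity=80))
```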
the second group, in green, represents the variable parameters, which include the speed of processing, passengers' movement models, social force and deviations, and the distribution of pre-departure time [ ]. the input flight data are retrieved from the open data provided by the air navigation service provider. a distinction is made between national and international passengers, as they do not share the same process within the airport structure. in order to have the most realistic and accurate simulation possible for the management of passenger flows, the data used are based on the airport's summer period. this makes it possible to check its applicability. on the basis of the number of movements per hour, taken from the actual timetables, the passenger flow can be obtained, as illustrated in figure , which presents the evolution of passenger flows over one day during disembarking and boarding. the flight slots are such that the largest number of departing passengers occurs between . am and . am as well as between : pm and : pm. casablanca mohammed v international airport handles an average of , passengers a day. the number of embarking and disembarking passengers is roughly balanced, with a ratio of , passengers embarking for , passengers disembarking. the peak hour counts passengers boarding from : am to : am in the morning, while passengers board between : pm and : pm. the random, non-directional motion model is a unidirectional motion with the option of moving forward one position or remaining stationary. the steps are assumed to be independent; subsequent steps therefore depend only on the current position x, a random variable. the occupation probability $P(x,t)$ that position x is occupied at time t results from the transition probabilities of leaving the position, $p(x \to x')$, or entering it from outside, $p(x' \to x)$. a forward transition is defined by

$$p(x \to x+1) = q,$$

and immobility by

$$p(x \to x) = 1 - q.$$

the occupation probabilities $P(x, t-1)$ and $P(x', t-1)$ indicate the probability with which the respective positions were filled at the previous time step. if the transition probabilities are weighted by these occupation probabilities, the occupation probability of position x at time t follows as

$$P(x,t) = P(x,t-1)\, p(x \to x) + P(x-1,t-1)\, p(x-1 \to x).$$

for the spatial mapping of the random walk, a fixed position $x_t$ is assumed and the following position $x_{t+1}$ is determined by the addition of a random variable $z_t$:

$$x_{t+1} = x_t + z_t.$$

in order not to induce a preferred direction of movement, the step probabilities are assumed to be symmetric:

$$P(z_t = +1) = P(z_t = -1) = \tfrac{1}{2}.$$

since the same decision has to be made at each time step, the probability of arriving at a point k after n steps is given by a binomial distribution. in general, the spread increases with the number n of steps taken, with mean and standard deviation

$$\mu = n f, \qquad \sigma = \sqrt{n f (1-f)}.$$

according to the central limit theorem, as $n \to \infty$ the binomial distribution $B(n,f)$ approaches a normal distribution $N(nf,\, nf(1-f))$. departing passengers are distributed according to their arrival time at the airport. all departing passengers are generated according to a flight number. this generation is based on the probability density function of a horizontally shifted normal distribution,

$$f(t) = \frac{1}{\sigma\sqrt{2\pi}} \exp\!\left(-\frac{(t-\mu)^2}{2\sigma^2}\right),$$

given the time in minutes prior to departure (t), the average arrival time ($\mu$) before departure, and the standard deviation ($\sigma$).
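the growth of the spread with the square root of the number of steps, and hence the normal limit, can be checked empirically. the snippet below simulates symmetric ±1 walks and compares the standard deviation of the final positions with √n (for ±1 steps the position spread equals √n exactly):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_walk_positions(n_steps, n_walkers):
    """Unbiased +/-1 random walk; final positions follow the binomial law
    above, with spread growing as sqrt(n_steps)."""
    steps = rng.choice([-1, 1], size=(n_walkers, n_steps))
    return steps.sum(axis=1)

for n in (25, 100, 400):
    x = random_walk_positions(n, 20_000)
    print(n, round(x.std(), 2), round(np.sqrt(n), 2))   # std tracks sqrt(n)
```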
the normal distribution has been demonstrated to be well suited to passenger arrivals at airports [ , ]; a rightward asymmetry has also been detected, in which case johnson's distribution was used to obtain a better fit than the normal distribution. as passenger arrival time is a critical point in determining the percentage of travellers missing their flight due to sanitary barriers, as well as in determining the level of quality of service, five strategies with different arrival distributions are studied; they are presented in figure . the abscissa values represent the time remaining until the departure of the aircraft, so the origin represents the departure of the aircraft. a buffer time (tp) is included in the horizontal offset of the distribution function. this guarantees that all passengers are generated in the airport's boarding lounge at least min before departure time. the buffer time assumption is based on information provided by airlines. possible waiting times at security checks are not taken into account in the first instance. table gives an overview of the strategies studied, with varying average arrival times ( ), standard deviations ( ) and buffer times ( ). from strategy to strategy , the mean arrival time and standard deviation were gradually increased by minutes or minutes, respectively, which means that the time range within which passengers arrive was extended. strategy is a variant of strategy with a -minute delay from the departure time. the processing times for security checks are based on iata-recommended measures, with an average processing time of seconds. the processing delays during the various operations in the check-in area are based on the operators' speed of processing passengers. the accumulation of passengers in the check-in zone results from the arrival of passengers according to their presentation profile and the occupancy rate of the check-in zone. the difference between arrivals at the airport and processed passengers gives the fill rate of the check-in zone. the speed of passenger processing depends on resources and increases with the number of operators (n) at the absorption rate (v):

$$v_{\text{total}} = n \cdot v.$$

given the possible differences that exist according to the age and gender of the different passengers, an average obstacle-free walking speed was suggested by [ ]. the mean value is . km/h with a standard deviation of . km/h, which corresponds to approximately . m/s and . m/s. this distribution of values was accepted as an input to generate the preferred walking speed for the simulation. similar data on preferred walking speeds have already been successfully used by [ ]. furthermore, both minimum ( . km/h) and maximum ( . km/h) walking speeds were established, approaching the transition speed to running [ , ]. during the simulation, actual walking speeds are influenced by the presence of social forces [ ]. as a result, obstacles can slow down the speed within a certain period of time. on the other hand, thrust forces coming from behind can increase speed or even cause walking to deviate [ ]. mathematically, the basis of the social force model is now formulated [ , , ]; an interpretation in terms of optimal control and differential games has also recently been given [ ].
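sampling presentation profiles from the shifted, buffered normal distribution above is straightforward. the values used below (mean 120 min before departure, sd 30 min, 45 min buffer) are placeholders for illustration and are not the strategy values from the paper's table:

```python
import numpy as np

rng = np.random.default_rng(42)

def arrival_times(n_pax, mean_before, sd, buffer_min):
    """Sample arrival times (minutes before departure) from the shifted
    normal profile, truncated so that every passenger is present at
    least buffer_min minutes before departure."""
    t = rng.normal(mean_before, sd, size=n_pax)
    return np.clip(t, buffer_min, None)

t = arrival_times(1000, mean_before=120, sd=30, buffer_min=45)
print(round(t.mean(), 1), round(t.min(), 1))   # earliest arrivals respect the buffer
```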
the position of a passenger $\alpha$ can be represented by a point $x_\alpha(t)$ in space, which changes continuously over time t, so the velocity $v_\alpha(t)$ is given by

$$\frac{dx_\alpha(t)}{dt} = v_\alpha(t).$$

indeed, if the global social force $F_\alpha(t)$ represents the sum of the different systematic influences of environmental factors on a passenger's behaviour, and the fluctuation term $\xi_\alpha(t)$ reflects random behavioural variations resulting from voluntary or involuntary deviations from optimal behaviour, the following equation for passenger acceleration or deceleration and change of direction is obtained:

$$\frac{dv_\alpha(t)}{dt} = F_\alpha(t) + \xi_\alpha(t).$$

in the description of $F_\alpha(t)$, an acceleration force $F^0_\alpha$ is taken into account, together with repulsive effects $F_{\alpha B}$ due to boundaries, repulsive interactions $F_{\alpha\beta}$ with other passengers $\beta$, and attraction effects $F_{\alpha i}$:

$$F_\alpha(t) = F^0_\alpha + \sum_B F_{\alpha B} + \sum_\beta F_{\alpha\beta} + \sum_i F_{\alpha i}.$$

the single-force terms are discussed in turn. each passenger has his or her own desired speed $v^0_\alpha$ in the direction $e_\alpha$ of the next destination. deviations of the actual velocity from the desired velocity $v^0_\alpha e_\alpha$ due to disturbances (by obstacles or avoidance manoeuvres) are corrected within the so-called 'relaxation time' $\tau_\alpha$:

$$F^0_\alpha = \frac{1}{\tau_\alpha}\left(v^0_\alpha e_\alpha - v_\alpha\right).$$

under normal circumstances, the desired speed is approximately gaussian distributed with a mean value of . m/s, possibly smaller, and a standard deviation of around . m/s. in order to make up delays, the desired speed $v^0_\alpha(t)$ is often increased over time. it can be described, for instance, by the formula

$$v^0_\alpha(t) = \left[1 - n_\alpha(t)\right] v^0_\alpha(0) + n_\alpha(t)\, v^{\max}_\alpha,$$

where $v^{\max}_\alpha$ is the maximum desired velocity and $v^0_\alpha(0)$ the initial one, which corresponds to the planned departure speed [ ]. the parameter $n_\alpha(t)$, which is time-dependent, reflects the nervousness or impatience of passengers:

$$n_\alpha(t) = 1 - \frac{\bar{v}_\alpha(t)}{v^0_\alpha(0)},$$

where $\bar{v}_\alpha(t)$ indicates the average speed in the desired motion direction. basically, long waiting times decrease the actual speed while the desired speed increases. tragically, at high pressures, crowding effects can occur and people may find themselves at risk of losing the social distance separating them [ ]. to avoid this type of situation, passengers must keep a certain distance from the barriers at all times. the closer the barrier is, the more uncomfortable a passenger feels [ ]. this effect can be described by a repulsive force $F_{\alpha B}$, which decreases monotonically with the distance $\lVert x_\alpha - x_B \rVert$ between the place $x_\alpha(t)$ of the passenger and the nearest point $x_B$ of the barrier. in the simplest case, this force can be expressed in terms of a repulsive potential $U_{\alpha B}$:

$$F_{\alpha B}(t) = -\nabla_{x_\alpha} U_{\alpha B}\!\left(\lVert x_\alpha - x_B \rVert\right).$$

similar repulsive force terms $F_{\alpha\beta}$ describe the fact that each passenger keeps a situation-dependent distance from other passengers. the simulations performed in this paper define the repulsive interaction force according to the formula

$$F_{\alpha\beta}(t) = A_\alpha \exp\!\left(\frac{r_{\alpha\beta} - d_{\alpha\beta}}{B_\alpha}\right) n_{\alpha\beta},$$

where $A_\alpha$ and $B_\alpha$ are interaction constants, $r_{\alpha\beta}$ is the sum of the passengers' radii, $d_{\alpha\beta}$ the distance between their centres, and $n_{\alpha\beta}$ the normalized vector pointing from passenger $\beta$ to passenger $\alpha$. using the arrival distribution above and the iata airport terminal manual [ ], we present the pattern of arrival earliness at the check-in area of terminal of casablanca mohammed v international airport for domestic flights (figure ) and international flights (figure ). the program is aimed at obtaining the appropriate passenger distribution. it is developed using python functions and consists of worksheets: arrival distribution, input data, daily distribution, and chart. figures and show the passenger flow rate at check-in at intervals of ten minutes before departure time. they also both show that the pattern differs depending on the time of day.
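a minimal, self-contained integration step of the social force model described above (desired-velocity relaxation plus exponential pairwise repulsion, omitting the boundary and attraction terms) might look as follows. the parameter values are common illustrative choices from the social force literature, not calibrated values from this study:

```python
import numpy as np

def social_force_step(x, v, goal, v0, tau=0.5, A=2.0, B=0.3, r=0.3, dt=0.1):
    """One Euler step of a stripped-down social force model.

    x, v:  (n, 2) positions and velocities
    goal:  (n, 2) next destination of each passenger
    v0:    (n,)   desired speeds
    tau:   relaxation time; A, B: repulsion constants; r: body radius
    """
    e = goal - x
    e /= np.linalg.norm(e, axis=1, keepdims=True) + 1e-9  # desired direction
    acc = (v0[:, None] * e - v) / tau                     # relaxation term
    diff = x[:, None, :] - x[None, :, :]                  # pairwise offsets
    dist = np.linalg.norm(diff, axis=2) + np.eye(len(x))  # pad diagonal
    n_ab = diff / dist[..., None]                         # unit vectors
    f = A * np.exp((2 * r - dist) / B)[..., None] * n_ab  # repulsion
    f[np.arange(len(x)), np.arange(len(x))] = 0.0         # drop self term
    acc += f.sum(axis=1)
    v = v + acc * dt
    return x + v * dt, v
```

in a full simulation this step would be applied repeatedly, with the fluctuation term added as small random noise and speeds clamped to the minimum and maximum walking speeds quoted above.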
three different periods are applied: from : to : , from : to : , and from : to : . the time slot between . and . is not taken into account, as it is a low-traffic period. the comparison of passenger arrival earliness is shown in figure , which highlights the difference in arrival earliness between international and domestic flights [ ]. for international flights, the last passengers should arrive minutes before departure time. for domestic flights, passengers may arrive much later. however, passengers on both international and domestic flights now have to arrive earlier, as they must undergo additional formalities and respect the sanitary measures described in the next section. there may also be mandatory quarantine and testing. the flight and passenger data displayed in the simulations correspond to a typical summer peak departure time ( : a.m.) per day. around this hour, there are about flights for a total of passengers, with . % travelling in boeing b- s, . % in embraer e- s, . % in b- s and . % in b- s. the flights operate out of terminal t , reserved essentially for the national airline royal air maroc. as the terminal complies with iata standards, according to the design standards [ , ], the check-in area measures no more than for a total of check-in counters. check-in counters as they are organised today (figure ) do not provide any physical separation between check-in operators and passengers. it is therefore necessary to establish a procedure for managing passengers during check-in formalities in compliance with the required sanitary rules [ ]. the simulations support two possible scenarios. the first scenario considers the closure of one out of every two counters in the absence of plexiglass separation panels. the second scenario assumes the installation of separation panels between the queues and the operators as well as between side-by-side counters, thus bringing all the counters into operation. each scenario runs three simulations in order to adjust the social distancing between passengers: the first simulation operates with a distance of m, the second with a distance of . m and the third with a distance of m. normally, passengers must be present at the airport hours before departure for international flights. the simulation program is developed to manage the flow of passengers in the check-in areas considering the new sanitary measures. display screens of the simulator are shown in figure and figure . according to the input parameters (distance between two passengers, capacity of the check-in area, average processing time) illustrated in figure , the simulation program estimates, in steps of minutes and according to the open check-in counters, the cumulative percentage and number of passengers present in the check-in area, the passengers registered and those in the process of being checked in, as illustrated in figure . in order to validate the proposed model, a turing test [ , ] was carried out to observe the behaviour of the system under extreme conditions, allowing data to be collected and compared with real data from terminal of casablanca mohammed v international airport. figure represents, for each of the three simulations performed in the first scenario, the queue length observed (in units of people) before the aircraft's departure. with a social distance of m (red curve), the accumulation of passengers is very fast. a peak is observed h before departure, with people still waiting.
overall, at the departure of the aircraft, passengers have still not been processed. based on the closing time for check-in, which usually occurs minutes before departure, passengers will not have been processed. with a social distance of . m (orange curve), the accumulation of passengers is fast. a peak is also observed h before departure, with passengers still waiting. at the end, minutes before departure, there are still people left in the queue. since check-in closes minutes before the departure of the aircraft, people will not yet have been processed at that time. with a social distance of m (green curve), the accumulation of passengers is slower and their processing is smoother. as in the other two simulations, a peak in the queue is observed hour minutes before departure, with people still waiting. minutes before departure, there are still passengers in the queue as check-in is about to close. in all three simulations, the departing passengers are not all processed in the time available, despite the variation in the social distance measure. indeed, even with a distance of m, there are still passengers in the queue minutes before the check-in counters close. the solution of opening only one check-in counter in two to ensure the application of health measures is therefore not an optimal solution in terms of passenger flow management. figure represents, for each of the three simulations in the second scenario, the observed queue length (in units of people) before the aircraft's departure. with a social distance of m (red curve), the observed accumulation of passengers is rapid. a peak is observed h before departure, with people still waiting. however, contrary to what was observed previously, all passengers could be processed minutes before departure. with a social distance of . m (orange curve), the accumulation of passengers is slower. a peak is also observed h before departure, with people still waiting; again, all passengers were processed minutes before departure. with a social distance of m (green curve), queueing is almost non-existent and the processing of passengers is fluid. a slight peak is observed, as in the two other simulations, h before departure, with people waiting; however, all passengers could be processed h before departure. with more check-in counters open, it is easier to comply with social distancing measures while ensuring an efficient passenger processing flow. indeed, in all simulations of the first scenario there are still passengers unable to board the aircraft because they remain in the queue, whereas in all simulations of the second scenario all passengers manage to finish checking in. an effective measure to maintain proper management of passenger flow would therefore be a system of separation between the check-in counters and the associated queues, making it possible to open as many counters as possible and thus contain the flow of passengers arriving at the departure point. in the case of a large influx of passengers, the requested capacity may exceed the possible processing capacity, which creates queue accumulation.
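the qualitative contrast between the two scenarios can be reproduced with a deliberately simple deterministic queue: arrivals follow the shifted-normal presentation profile, and processing capacity scales with the number of open counters. the counter counts, service time and passenger total below are invented for illustration (the paper's actual figures are not reproduced); with half the counters the backlog persists past departure, while with all counters open it drains in time:

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_checkin(arrivals, n_counters, service_s=90, step_s=60):
    """Minute-by-minute check-in queue: arrivals[t] passengers join at
    minute t; each open counter handles step_s / service_s pax per minute."""
    rate = n_counters * step_s / service_s
    q, trace = 0.0, []
    for a in arrivals:
        q = max(0.0, q + a - rate)
        trace.append(q)
    return trace

# 180 simulated minutes; departure at minute 180
t_before = np.clip(rng.normal(120, 30, size=900), 45, 175)
arrivals = np.histogram(180 - t_before, bins=np.arange(0, 181))[0]
one_in_two = simulate_checkin(arrivals, n_counters=7)   # scenario 1
all_open = simulate_checkin(arrivals, n_counters=14)    # scenario 2
print(max(one_in_two), one_in_two[-1])   # peak queue, residual at departure
print(max(all_open), all_open[-1])       # larger capacity drains the queue
```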
figure shows the waiting-time variation for a distance measurement of m in the first scenario. the grey line represents the number of passengers arriving at the check-in counters. the peak of passenger presentation occurs h before departure, with people. the red line represents the number of passengers who are processed in time. with only check-in counters open, it is not possible to exceed a maximum of passengers processed in a -minute interval. this limits capacity and creates waiting as passengers arrive. the blue line represents the passengers remaining in the queue. a peak in the total number of remaining passengers can be observed hour minutes before the flight's departure, with passengers waiting. it can be seen here that the accumulated number of passengers cannot be absorbed, because even after departure passengers are still waiting. the capacity to handle the flow of departing passengers was therefore exceeded: the limited number of check-in counters opened, due to the distancing measures, did not allow the flow to be processed. in the second scenario, by contrast, the total number of passengers is absorbed: minutes before departure there are no passengers left in the queue. there was therefore good management of the capacity to handle the flow of departing passengers, as the higher number of check-in counters made it possible to process the flow. the simulation program provides a fairly accurate representation of the passenger flow and complies with the iata measures for passenger processing. however, despite its effectiveness, the simulation program has limitations in its use. the surface area of the check-in counters is not allocated according to the distribution of flights, which means that flights are treated uniformly, without distinction: passengers are treated for all flights together and the treatment is assumed to be homogeneous. in addition, since the iata table distributing the volume of space per passenger is very extensive, the simulation was carried out for a particular parameter present at check-in and not according to all the parameters. also, the surface area of the check-in hall is calculated according to dgac standards and not from a real measurement carried out within the terminal in question. persons with reduced mobility and those with assistance are not included in the simulation; they were considered to pass through a separate lane specifically assigned to them. similarly, the time required for additional potential health measures before the check-in area is not taken into account in the calculation. the simulation models used are stochastic and dynamic. they represent a close approximation of the real system and incorporate most of its main features. the analysis of the results obtained attests to the proper use of the proposed passenger flow management solution. in times of health crisis, the parameter-setting tool makes it possible to ensure the application of the required distances while anticipating the saturation of the check-in area. however, its use remains non-exhaustive and limited to a case study of terminal t , where the treatment of flights is considered uniform, as the terminal is dedicated exclusively to the national airline. this does not necessarily apply to the other terminals.
the simulations showed that it would be possible to support the processing of passengers at the airport using considerable resources (indeed, it would be necessary here to open all check-in positions). under this condition alone, it is therefore possible to maintain acceptable performance in terms of adherence to schedules and an acceptable los in terms of waiting times. while this study reveals the weaknesses of traditional passenger flow management and underlines the need for a fully integrated management system to meet service-quality requirements, the proposed tool will help to keep ahead of unforeseen events, avoiding bottlenecks and long waiting times, while ensuring that sanitary measures such as social distancing are maintained during an international health crisis. the question arises as to what extent such an allocation of resources (only check-in is considered here, but other crossing points also require intervention) would be economically bearable, and whether an adjustment of the schedule or a decrease in the frequency of flights would reduce the resource requirements necessary to maintain performance. in addition, this research could also be extended to other areas of the airport facing similar challenges.
highlights: • the aviation sector has been experiencing an unprecedented crisis since march due to the covid- pandemic, and a solution needs to be found for gradual reopening and risk minimisation. • this article proposes simulations of, and discussion on, the possible effects of social distancing measures and tests their applicability in an international airport. • a bibliographic study of recent work in the same field as this article. • distribution of passengers according to various scenarios. • mathematical modelling of the problem.

key: cord- -eydkfqi authors: feng, mingxiang; shaw, shih-lung; fang, zhixiang; cheng, hao title: relative space-based gis data model to analyze the group dynamics of moving objects date: - - journal: isprs j photogramm remote sens doi: . /j.isprsjprs. . . sha: doc_id: cord_uid: eydkfqi

the relative motion of moving objects is an essential research topic in geographical information science (giscience), which supports the innovation of geodatabases, spatial indexing, and geospatial services. this analysis is very popular in the domains of urban governance, transportation engineering, logistics and geospatial information services for individuals or industries. importantly, data models of moving objects are one of the most crucial approaches to support the analysis of dynamic relative motion between moving objects, even in the age of big data and cloud computing. traditional geographic information systems (gis) usually organize moving objects as point objects in an absolute coordinate space. the derivation of relative motions among moving objects is not efficient because of the additional geo-computation of transformation between absolute space and relative space. therefore, current giss require an innovative approach to directly store, analyze and interpret the relative relationships of moving objects to support their efficient analysis. this paper proposes a relative space-based gis data model of moving objects (rsmo) to construct, operate and analyze moving objects' relationships and introduces two algorithms (relationship querying and relative relationship dynamic pattern matching) to derive and analyze the dynamic relationships of moving objects. three scenarios (epidemic spreading, tracker finding, and motion-trend derivation of nearby crowds) are implemented to demonstrate the feasibility of the proposed model. the experimental results indicate that the execution times of the proposed model are approximately - % those of the absolute gis method for the same functions in these three scenarios, i.e., better computational performance than the absolute methods of a well-known commercial gis package when analyzing the relative relationships of moving objects. the proposed approach fills a gap in traditional gis and shows promise for relative space-based geo-computation, analysis and service.
moving objects are the most common and important components in a diverse range of phenomena, such as human mobility (fang et al., ; jiang et al., ; almuhisen et al., ), urban transportation (tang et al., ; tu et al., ), ship logistics in the ocean (yu et al., b; fang et al., ) and even animal migration (bastille-rousseau et al., ). many research projects have been driven and improved by moving-object data analysis, such as individual/group behavior analysis, path discovery and behavior prediction. because of the large number of moving objects in real applications, these analyses require a powerful gis data model to store, analyze and interpret the physical and contextual information (e.g., absolute topology and relative motion) of moving objects. current gis data models usually record the essential information of moving objects within an absolute space. in absolute space, geocoded locations are bound to previously existing geometry and topology relationships among the corresponding points in the space (meentemeyer, ; couclelis, ). therefore, moving objects are always represented as a series of observations that consist of an id, location and time (hornsby and egenhofer, ; spaccapietra et al., ), alongside additional information such as activities (wang and cheng, ; shaw and yu, ) and semantics (vandecasteele et al., ). furthermore, these models describe movement processes and interactions alongside basic geographical information (such as land use, pois, and events). in fact, these models require a large amount of computation to support analysis from the perspective of individual or groups of moving objects (which could be called relative space). moreover, several analyses, such as the surrounding dynamics and motion trends of nearby crowds, are critical to moving objects in highly complex and dynamic environments, according to the personal requirements of decision-making (e.g., a relaxed lifestyle, a feeling of safety). current gis data models must be improved in terms of the geo-computation of these analyses. in fact, relative space is an instinctive frame of reference for moving objects and a powerful theoretical framework to represent the surrounding dynamics and motion trends of moving objects in nearby crowds. traditionally, relative space is studied in the research communities of mathematical theory (einstein, ), physics (veltman, ; ruggiero, ), astronomy (rots et al., ) and aerospace science (sinclair et al., ). currently, very few gis data models have been built for relative space, so additional space-transformation computations are required to implement this instinctive approach in applications. in relative space, relative dynamic relationships between moving objects are easy to build, independently of whether they can be geocoded by coordinates. importantly, the analysis of moving objects in relative space can easily follow instinctive requirements. therefore, the motivation of this paper is to create a relative space-based gis data model of moving objects and propose some basic gis operators for analyzing moving objects, which changes the analysis of current absolute space-based gis models and facilitates the efficient computation of real-time relative relationship dynamics, such as the surrounding dynamics and motion trends of crowds near moving objects.
the contributions of this paper are summarized as follows: • a relative space-based gis data model of moving objects (rsmo) is introduced to construct, operate and analyze moving objects' relative relationships. • a relationship-querying algorithm and a relative dynamic pattern-matching algorithm are introduced to analyze the dynamic relationships of moving objects. • three scenarios (epidemic spreading, tracker finding, and derivation of the motion trends of nearby crowds) are implemented to demonstrate the feasibility of the proposed model, which also shows better performance compared to absolute methods. this paper is structured as follows. related work is discussed in section . section proposes the modeling process, structure and operation of this model. section illustrates the implementation of rsmo and the two algorithms. experiments with three case studies and performance testing are reported in section . section discusses the proposed model. finally, conclusions are presented in section . moving objects are a very popular representation requirement in current gis applications because of the large volume of their tracked trajectories, e.g., cars in cities, vessels at sea, and even people around the world. therefore, much research has been conducted on storing, managing and querying moving-object data in the communities of computer science and geographical information science. in computer science, several data structures have been defined to support the storage and querying of moving objects. the first structure is the storage unit in dbmss (database management systems). güting and his group abstracted moving objects into moving points and moving regions and developed a discrete model in a dbms to store, manage and query moving objects' data (forlizzi et al., ; güting et al., , ; güting and schneider, ). based on these abstracted data types, moving objects can be managed and queried using sql. these works provided a solid foundation for research on moving objects. the second structure is trees. moving objects are usually abstracted as points (ranu et al., ), single continuous polynomials (ni and ravishankar, ), or non-regulated sequences of roads in transportation networks (sandu popa et al., ). these objects are then indexed by derivatives of trees, for example, b+-trees (sandu popa et al., ), r-trees (yue et al., ), d r-trees (xu et al., ), fn (fixed network) r-trees, pa-trees (ni and ravishankar, ), and grid trees (yan et al., ). these tree-based indices facilitate efficient queries on moving objects. the third structure is moving object-oriented databases. to satisfy the demand for faster processing of big data, postgresql, mongodb, monetdb (boncz et al., ) and cassandra (hospital et al., ) have been used to store and analyze large volumes of moving objects' information. in the community of geographical information science, traditional gis data models, such as field-based models (e.g., raster-based models) (cova and goodchild, ) and feature models (e.g., object- and vector-based models) (tang et al., ), usually depend on absolute coordinates in their reference frames to describe a spatial object's motion or relationships. these models are often used to represent key concepts related to moving objects, i.e., place, activity and semantics. the concept of "place" is used to indicate the location of activity, which usually complies with a certain upper-bound distance and lower-bound duration (kang et al., ).
human daily activities are often represented as a sequence of place visits (do and gatica-perez, ). additionally, the space-time path is a key concept in the classical time-geography framework that is used to represent and analyze activities and interactions in a hybrid physical-virtual space (shaw and yu, ) and in social space (yin and shaw, ). several basic semantics of moving objects can be derived from their trajectories, for example, episodes, stops, and moves (ilarri et al., ). based on these basic semantics, researchers (parent et al., ; bogorny et al., ; jiang et al., ; wan et al., ) have built semantic trajectories from semantic point sequences. semantic trajectories are easy for urban planners and migration or transportation agencies to use to mine spatial-temporal mobility patterns. moving object-oriented operators (e.g., range queries and nearest-neighbor queries) have been derived to support the analysis of moving objects' relationships. the first common operator is the range query, which specifies a value of moving objects to fall within a lower and upper boundary (zhan et al., ; xu et al., ; yue et al., ), e.g., finding all the objects of a specific traveler between am and am. filtering and refining are two important steps of range queries. filtering determines candidate locations (such as a node in a tree structure) that contain the specified attribute values and overlap the query area in space and time, while refining retrieves well-matched objects under the querying conditions through advanced filtering techniques, such as statistics-based approaches (zhan et al., ) and time-based partitioning techniques (yue et al., ). the second common operator is the nearest-neighbor (nn) query, which finds the nearest neighbors to a given object in a database for a specific time interval. the two similar phases of nn queries are searching for candidate units based on the idea of pruning nodes with large coverage, and determining the well-matched results according to various conditions, such as time limits (güting et al., ), obstacles (gao et al., ), spatial-network structures (cheema et al., ), reverse nearest neighbors (cheema et al., ) and movement uncertainties (niedermayer et al., ). these operators are usually applied by calculating the distance between the coordinates of moving objects to derive relative metrics such as distance, direction and time. this situation can produce repeated computations and misrepresent the relative relationships of moving objects without a geocoding technique. a wide spectrum of applications can be conducted for individuals, groups or the public. the first application is the movement-behavior analysis of moving objects, for example, of individuals (gonzález et al., ; ando and suzuki, ; gao et al., ; renso et al., ; song et al., ) and groups (zheng et al., ; li et al., ; gupta et al., ; mcguire et al., ; liu et al., ). the second application is path recommendation from historical trajectories (luo et al., ; dai et al., ; yang et al., ; zheng et al., ). the third application is location prediction for tourists (lee et al., ; yu et al., a), navigators (li et al., ; besse et al., ), and driverless vehicles. these applications require that the dynamic relationships of moving objects in space over time be inferred. however, very few models can directly organize the dynamic relationships of moving objects, so these applications depend greatly on computationally intensive infrastructures.
studies of relative space mainly concern vessels in the maritime field and aircraft in aviation. in the maritime field, moving-object data models can be applied to assessing collision risk (bye and aalberg, ; fang et al., ), predicting vessel behavior (zissis et al., ; xiao et al., ) and planning paths (cummings et al., ; hornauer et al., ). in the aviation field, motion guidance and control (yu et al., ; sun et al., ; li and zhu, ; zhu et al., ) for spacecraft rendezvous, as well as position and attitude estimation (philip and ananthasayanam, ; qiao et al., ) between chaser and target satellites, also involve applications of moving-object data models. these studies aim to control a moving object in real time to avoid risk and complete an established movement in the ocean or in aerospace, where no appropriate global reference frame exists. they focus on real-time motion control in response to surrounding moving objects in highly dynamic scenarios, and these missions likewise need an efficient data model to handle the relationships of moving objects. in short, current research on moving objects rarely organizes their dynamic relationships directly, because relative space-based data models and analytic frameworks are lacking in the communities of computer science and geographical information science. this paper attempts to fill this gap by introducing a relative space-based gis data model to analyze the group dynamics of moving objects, which should reduce the intensive computation that occurs when deriving the relationships of moving objects and provide better analytic performance than traditional absolute coordinate-based gis data models. this section introduces the relative space-based gis data model of moving objects (rsmo). rsmo is extended from the space-time cube (stc) structure in arcgis. an stc is a three-dimensional cube that includes space-time bins in the x and y dimensions and time in the t dimension and represents limited space coverage over a fixed period. the basic idea of rsmo is to directly record moving objects' relationships and relative dynamics in a three-dimensional space-time cube and to facilitate transfer to absolute space by maintaining the current locations of moving objects via additional space-time bins. this data model avoids the recalculation of moving objects' relationships and relative dynamics required in traditional gis data models, providing efficient relative space-based queries between moving objects for real applications with cars, vessels, people, etc. rsmo is designed to support basic functions such as the storage and organization of moving objects' motion, queries based on relative relationships, relative motion-pattern analysis and mining, and transformation between absolute space and relative space. the following section first introduces rsmo and then the basic functions that depend on it. before introducing the proposed rsmo, this section first defines some basic concepts as follows: definition : a space-time cube (stc) is a three-dimensional ( d) euclidean space consisting of a d geographical space (x and y) plus time (t) for the visualization and analysis of objects' behavior and interactions across space and time (bach et al., ). definition : a space-time bin (stb) is a basic unit of the space-time cube, which lies in a fixed position based on its x and y dimensions in d geographical space and its t dimension in time, representing a limited location during a certain time.
definition : a relative relationship bin (rrb) is a derived stb with a substituted reference framework of reference object, target object and time (fig. ). it contains a quadruple < t, ref_obj, tar_obj, relationship >, which indicates the relationship of the corresponding reference object (ref_obj) and target object (tar_obj) during a specific time t. definition : a relative relationship matrix is a matrix of all moving objects' rrbs at time t within a specific environment. definition : a relative relationship cube is a time series of relative relationship matrices, organized in three-dimensional space (two object dimensions and one time dimension) according to time. the proposed rsmo model extends the structural organization of space-time bins in arcgis to model the relative relationships among moving objects and facilitate their relative space-based analysis. here, motion in the stc is represented by the relationships of moving objects, which is a natural approach to represent the actual cognitive processes of moving objects in an environment. fig. illustrates the extended entity-relationship (eer) diagram of the rsmo model, which shows the elements (entities, relationships and attributes) and the hierarchical relationships among these elements. this model defines six basic entity types, i.e., "object", "relative location", "relationship", "relative relationship bin", "relative relationship matrix" and "relative relationship cube". here, an object represents a moving object in a real scenario, such as the people in fig. , vehicles/unmanned ground vehicles, unmanned aerial vehicles, vessels, planes, and so on. a relative location is the position of any object relative to another object (fig. ), which can be represented by the relative distance and angle. a relationship indicates the mutual spatial or social relationships among objects, for example, closeness, friendship or being colleagues. this organization does not require much additional computation to build relationships among objects from the coordinates in traditional stcs, and it provides a natural approach to analyze a moving object's behavior. fig. provides an example of rsmo's organization. fig. (a) illustrates the spatial trajectories of five moving objects (labelled a, b, c, d and e). fig. (b) illustrates a right-handed coordinate system for detecting objects' relative locations from each object's view. fig. (c) shows a geometric transformation method from relative locations to relative relationships. fig. (d) shows a relative relationship matrix in which the relative relationships of moving objects are organized akin to x and y coordinates: rows represent "reference" objects and columns represent their detected "target" objects. fig. (e) shows a relative relationship cube that includes all relative relationship matrices sorted by time. the proposed model replaces the orthogonal coordinate axes (x, y) in the stc with two object axes that represent reference and target objects to store their relationships. however, recording only the changing relationships between moving objects is insufficient, because this allows relationship comparison only between two different moving objects and cannot support transformation with absolute space to match the computational tasks in current gis tools. to solve these problems, an additional object r (called the initial reference object), which has a fixed coordinate in absolute space, was added to rsmo (the object r in fig. (b)). this object represents a fixed location acting as a local reference system. by comparing location changes in this local reference system, each object's motion relative to itself can be derived. once its coordinates in absolute space are determined, this object also facilitates transformation between relative space and absolute space; the details of this procedure are introduced in section . . . this relative relationship cube forms the basic structure of rsmo.
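to make this structure concrete, the following minimal python sketch (ours, not the authors' c++ implementation; all names are illustrative) stores relative relationship bins in a map keyed by a (reference, target) pair with a nested map from time to relationship, mirroring the cube-as-map description given later for the query algorithm.

    from dataclasses import dataclass

    @dataclass
    class Relationship:
        """Relative relationship between a reference and a target object."""
        distance: float  # relative distance (e.g., metres)
        angle: float     # relative angle in the reference object's frame (degrees)

    class RelativeRelationshipCube:
        """Minimal sketch of the RSMO cube: (ref, tar) -> {time: Relationship}."""
        def __init__(self):
            # composite index: (ref_obj, tar_obj) -> {t: Relationship}
            self._bins = {}

        def put(self, t, ref_obj, tar_obj, rel):
            """Store one relative relationship bin <t, ref, tar, relationship>."""
            self._bins.setdefault((ref_obj, tar_obj), {})[t] = rel

        def get(self, t, ref_obj, tar_obj):
            """Point extraction: the relationship of one object pair at time t."""
            return self._bins.get((ref_obj, tar_obj), {}).get(t)

a usage example would be cube.put(t, "a", "b", Relationship( . , . )) followed by cube.get(t, "a", "b"); the nested-map layout is what lets the later operators avoid recomputing relationships from coordinates.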
this section introduces a set of basic operators for the implemented gis functions, organized into six classes according to the type of relationship outcome. ( ) query processing obtains the direct relative relationship, in terms of relative distance and angle, between objects over time. ( ) group relative dynamic calculation derives the relative dynamic characteristics of a group. ( ) transformation between absolute space and relative space transforms data between absolute space (e.g., coordinates) and relative space (e.g., relative distance and angle). ( ) initial reference object transformation changes the reference object of the relative relationship cube and updates the relative relationships of all objects. ( ) attribute transformation derives relationship attributes, such as closeness, based on the relative relationship cube. ( ) relative relationship dynamic pattern transformation derives relative dynamic patterns between objects, such as their moving trends. these operators are expanded and developed from stcs to suit the features of moving objects' motion in relative space. the following subsections describe these classes in detail. query processing searches for the relative mutual relationships of moving objects and their changes. the six operators of query processing are explained as follows: ( ) point extraction retrieves the relationship between designated objects at a specific time. this operator is a basic function for querying and analysis. ( ) time drilling retrieves the dynamic process of the relationship between designated objects. ( ) target drilling extracts a specific reference object's relationships with other target objects at a designated time. ( ) relationship curvilinear drilling retrieves the target objects that present a specific relationship with the designated reference object. this operator can be used to extract the reference object's behavior relative to other objects, such as the interactions between two moving objects if "distance < m". the reference object's interactions can be expressed as a planar d curve that consists of target objects for each time. ( ) time cutting retrieves all objects' mutual relationships at any designated time. ( ) reference cutting retrieves the movement of all objects relative to the reference object. detailed descriptions of these query-processing operators are listed in table .
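as an illustration of how these operators reduce to simple lookups on the cube sketch above, here is a hedged python rendering of three of them (ours, with illustrative names; cube._bins is the map from the earlier sketch):

    def time_drilling(cube, ref_obj, tar_obj):
        """All relationships of one (reference, target) pair over time."""
        return dict(sorted(cube._bins.get((ref_obj, tar_obj), {}).items()))

    def time_cutting(cube, t):
        """All objects' mutual relationships at one designated time."""
        return {pair: rels[t] for pair, rels in cube._bins.items() if t in rels}

    def reference_cutting(cube, ref_obj):
        """Movement of all target objects relative to one reference object."""
        return {(ref, tar): rels for (ref, tar), rels in cube._bins.items()
                if ref == ref_obj}

note that none of these touch coordinates: they slice the cube along its time or object axes, which is the source of the efficiency gains reported in the experiments.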
the two flattening operators of group relative dynamic calculation follow these procedures. time flattening: . obtain relative relationship bins with different reference and target objects by time drilling; . obtain the results from fun(), whose input is the relative space-time lines from the previous step; . fill the results into the corresponding positions of the two-dimensional matrix built by the reference and target axes. target flattening: . obtain relative relationship bins with different reference and target objects by target drilling; . obtain the results from fun(), whose input is the relative space-time lines from the previous step; . fill the results into the corresponding positions of the two-dimensional matrix built by the reference and time axes. note: in group relative dynamic calculations, min in the example is used to find the minimum in the dataset. the process details the procedures in the illustration and provides text to help readers understand the operator. flattening(axis, fun()) is used to express a uniform form. two parameters are required: the axis indicates the input of the operator, and fun() is a function to process the values along the axis; additionally, fun() can be set by the user for different purposes, as shown in table . group relative dynamic calculation computes group relative dynamic characteristics (e.g., average distance, interaction frequency in social media, and the closeness of social relationships) based on the relationship changes of all objects. the two operators, time flattening and target flattening, are explained as follows: ( ) time flattening computes the relative dynamic statistical characteristics between each pair of objects during the entire period and retrieves a matrix that contains the results. for instance, the average distance is calculated through a statistics function, and each unit in the resulting matrix shows the average distance between a pair of objects, which reflects the closeness of their relationship during the entire period. ( ) target flattening computes the relative dynamic statistical characteristics of all the target objects relative to each reference object at each time stamp and retrieves a matrix that contains the results. for example, to calculate the interaction frequency, we can obtain a result matrix that indicates the interaction changes of each reference object relative to other objects over time. detailed descriptions of these group relative dynamic calculations are listed in table . the purpose of the next operator is to build a transformation method between absolute space and relative space. this operator is used to adapt current gis modules, because most gis software analyzes moving objects only in absolute space. if we know the coordinates (x_r, y_r) of the initial reference object r in absolute space and the relative distance d and relative angle α of object a with respect to r, the increments in the x and y directions (Δx, Δy) in absolute space can be calculated by distance decomposition (eq. ( )): Δx = d·cos(α), Δy = d·sin(α). based on (Δx, Δy), the coordinates of a in absolute space can then be expressed as (x_a, y_a) = (x_r + Δx, y_r + Δy) (eq. ( )).
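the decomposition above is the standard polar-to-cartesian conversion; a minimal sketch of both directions (ours, assuming angles are measured counterclockwise from the x axis, which the extracted text does not state explicitly):

    import math

    def relative_to_absolute(x_ref, y_ref, distance, angle_deg):
        """Recover a target's absolute coordinates from the reference object's
        absolute position plus the stored relative distance and angle."""
        dx = distance * math.cos(math.radians(angle_deg))
        dy = distance * math.sin(math.radians(angle_deg))
        return x_ref + dx, y_ref + dy

    def absolute_to_relative(x_ref, y_ref, x_tar, y_tar):
        """Inverse direction: derive the relative distance and angle."""
        dx, dy = x_tar - x_ref, y_tar - y_ref
        return math.hypot(dx, dy), math.degrees(math.atan2(dy, dx))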
initial reference transformation transforms the relationships of all moving objects with one initial reference object into their relationships with another. this process changes the analysis view between objects, which supports the relative space-based analysis of any individual object in parallel. a detailed description of initial reference transformation is listed in table . below, we present an example to explain this process (fig. (a)). a is the initial reference object of object a at t . if we want to shift the initial reference object from a to b (object b at t ), where b is the new initial reference object, we must compute all objects' unknown relationships with b . for example, fig. (a) shows the unknown mutual relationships b -a and b -c in the new cube's matrix. the mutual relationship between b and c (fig. (b)) can be derived by the following steps. first, the coordinates in absolute space are calculated with the transformation operator between absolute space and relative space from section . . . second, the included angle between a b and a c is derived by computing the difference between ang a,c and ang a,b, and the distance between b and c (dis b,c) is then calculated by the law of cosines. finally, the angle between b and c is calculated by the following rotation method to ensure uniqueness: the rotations of b and c relative to object a (yaw a,b and yaw a,c) are calculated with equation ( ). thus, we set shift(a , x) as the transform function to convey the initial reference object from a to x. the purpose of attribute transformation is to label new attributes on each relative relationship bin and to filter bins under specific conditions. the two operators (labeling and filtering) are explained as follows: ( ) labeling adds new attributes to each relative relationship bin based on the conditions or classifier set in fun(). note: label(fun()) and filter(fun()) are used to express uniform forms. fun() in labeling is a function that derives new attributes based on the relative relationships of bins, while fun() in filtering selects relative relationship bins based on the new attributes from labeling (e.g., closeness is intimate); in both operators, fun() can be set by the user for different purposes. ( ) filtering removes some bins from the results of the labeling step. detailed descriptions of these attribute transformations are listed in table . the goal of relative relationship dynamic pattern transformation is to analyze the motion features of moving objects in relative space. two common operators are explained as follows: ( ) trending computes motion features (such as distance changes, angle changes, relative speeds and relative accelerations) by comparing relationship changes between each pair of objects. ( ) matching discovers relative motion patterns given the condition of a movement pattern, such as accompaniment or tracking. detailed descriptions of these relative relationship dynamic pattern transformations are listed in table . a prototype (fig. ) was developed to implement the proposed rsmo model. all the functions that store, manage and analyze relative-space data were encoded into a dynamic link library in a c++ environment. the visualization of this prototype was developed with the qgis (quantum gis) framework in a python environment. two typical analysis modules were implemented, namely, query processing and relative relationship dynamic pattern transformation. query processing contained all six operators (point extraction, time drilling, target drilling, time curvilinear drilling, time cutting and target cutting). two relative relationship dynamic pattern-matching algorithms (accompaniments and trackers) were implemented in this prototype. a query algorithm called query processing is described here for retrieving relative relationships in relative space. the main ideas of this algorithm are to organize the relative relationships along three dimensions, namely, reference object, target object and time, based on the model's structure, and then to index the relative relationships by the corresponding objects and time. by using a composite index consisting of reference object, target object and time, this algorithm can quickly find the relative relationships completely covered by the given restricted objects and time. in a query, this algorithm retrieves relationships by accessing the relative relationships through the corresponding composite index of the given objects and time, instead of extracting the relative relationship bins of the reference or target object one by one from the cube.
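a hedged python sketch of this composite-index lookup (our rendering of the described algorithm, not the authors' c++ code; the parameter names follow the paper's cube/c_ref/c_tar/c_t, and treating None as the wildcard "*" is our assumption):

    def query_processing(cube, c_ref=None, c_tar=None, c_t=None):
        """Retrieve relationships restricted by reference, target and/or time
        (None acts as the wildcard '*'). When both objects are given, the
        composite (ref, tar) key allows a direct lookup instead of a scan."""
        if c_ref is not None and c_tar is not None:
            pairs = {(c_ref, c_tar): cube._bins.get((c_ref, c_tar), {})}
        else:
            pairs = {k: v for k, v in cube._bins.items()
                     if (c_ref is None or k[0] == c_ref)
                     and (c_tar is None or k[1] == c_tar)}
        out = []
        for (ref, tar), rels in pairs.items():
            times = [c_t] if c_t is not None else sorted(rels)
            out.extend((t, ref, tar, rels[t]) for t in times if t in rels)
        return out

for example, query_processing(cube, c_ref="a", c_t=t) reproduces target drilling, while query_processing(cube, c_ref="a", c_tar="b") reproduces time drilling.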
therefore, this algorithm can be integrated with all the operators shown in section . . it is constrained by six parameters, namely, cube, query_type, c_ref, c_tar, c_t, and rel, which are explained in algorithm . cube is a map structure whose key consists of a reference and a target object, and whose value is another map whose key is the time and whose value is the relationship. the details of the query processing algorithm are described in algorithm . compared with traditional gis querying in absolute space, this algorithm avoids many redundant computations on coordinates, which improves the efficiency of relative relationship querying; we test this in section . . algorithm , called pattern matching, illustrates the process of searching for a specific relative pattern in relative space. in absolute space, a dynamic pattern is represented as a space-time trajectory (bao et al., ) that consists of specific locations (shown in fig. ). given a specific pattern (pattern_s), each object's distance to pattern_s must be calculated to measure its similarity to pattern_s; finally, the trajectory whose distances all lie within the matching threshold (fig. ) is obtained as the outcome of pattern matching in absolute space. in relative space, a dynamic pattern is represented as a series of relative relationship bins with specific relative distances. in this algorithm, the bins with each object's relative distance to pattern_s are obtained directly by reference cutting; they are then labeled according to whether their relative distance is within the matching threshold; finally, the series of relative relationship bins that match pattern_s at all times is obtained. this algorithm provides a more efficient way to perform relative relationship dynamic pattern matching, simplifying the spatial computation used to measure similarity on the basis of reference cutting and labeling in the proposed model. the details are described in algorithm : pattern matching (cube, c_ref, pattern_s). input: cube, the relationships between all objects in relative relationship cube form; c_ref, the reference object, an index on the ref attribute of relative relationship bins in the cube; pattern_s, the specific relative relationship pattern to be found, expressed as a time series of (time, rel) pairs. output: the objects in the cube that produce a trend matching pattern_s relative to c_ref. let q be a sequence of bins in the cube, containing pairs of the form < objects_pair, relptr >, where objects_pair is the index on the attributes ref and tar in pair form and relptr is a map structure whose key is the time and whose value is the relationship; let objectslist be the set of codes of all objects in the scenario; let timestamps be the total number of time stamps in the data; let * be any value; let labeling be the function that obtains the relative relationship bins matching pattern_s. this algorithm allows users to search for objects with specific motion patterns relative to dynamic reference objects. this advantage can help users discover meaningful behaviors of objects hidden in a moving group, for example, being followed.
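the body of algorithm is only partially reproduced in the extracted text, so the following python sketch is an assumption that follows the described reference-cutting-plus-labeling flow (it reuses the reference_cutting function and the Relationship dataclass from the earlier sketches; representing pattern_s as a map from time to a distance interval is our illustrative choice):

    def pattern_matching(cube, c_ref, pattern_s):
        """Find objects whose relationship to c_ref matches pattern_s at every
        time stamp. pattern_s maps time -> (min_dist, max_dist), i.e. the bin
        series of relative distances that defines the pattern."""
        matches = []
        for (ref, tar), rels in reference_cutting(cube, c_ref).items():
            ok = all(
                t in rels and lo <= rels[t].distance < hi
                for t, (lo, hi) in pattern_s.items()
            )
            if ok and tar != c_ref:
                matches.append(tar)
        return matches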
the proposed rsmo model was tested in three application scenarios: epidemic spreading, tracker finding, and the motion-trend derivation of nearby crowds. each case study was used to demonstrate the feasibility and advantages of the proposed model. this paper collected pedestrians' (experimental subjects') walking trajectories on the street in the three experimental scenarios. a lenovo phab pro phablet was used to record their trajectories in both absolute and relative space. this device is openly available and equipped with sensors for motion tracking, and it can record the position and rotation angle relative to the starting pose of the device. the gps receiver recorded the motion in absolute space and relative space. fig. shows all the trajectories of the experimental subjects walking along a road in wuhan city, china. the detailed information includes the following fields (table ): • "user id" is the unique identifier for each object. • "time stamp" is the time when the location was recorded. • "longitude" and "latitude" record the pedestrian's location in absolute space from the gps receiver. • "x increment" and "y increment" show the relative locations in the reference frame built from the user's starting pose. • "rotation angle" records the change in angle relative to the initial pose in the counterclockwise direction. • "initial azimuth" is the angle of the user's face relative to north. the collected data needed to be pre-processed before being saved into the proposed model, as follows. ( ) derive the relative distance and angle between all objects. first, the relative distance was computed by the operator of transformation between absolute space and relative space, using the coordinates in the longitude and latitude fields of the collected data. then, the relative angle was computed based on the movement-direction azimuth ( absolute ), which was derived from the rotation angle ( ) and the initial azimuth ( ) according to equation ( ). ( ) set the initial reference object for the model. the initial reference object was set as object at t in this experiment by computing all objects' relationships to object at t . the operator of transformation between absolute space and relative space was implemented for this task under the condition that each object's coordinates (longitude and latitude), rotation angle and initial azimuth are known from the collected data. the first scenario involved finding the persons who had contact with an epidemic carrier. in the research community of epidemiology, the spread of diseases is closely related to the spatial and social activities of patients, and the propagation of diseases greatly depends on the trajectory of a patient's social activities. thus, tracing the back-propagation path of a virus and finding close contacts are meaningful for preventing the spread of diseases. this experiment assumed a severe acute respiratory syndrome (sars) carrier's walking path in order to find its close contacts. sars is a viral respiratory disease of zoonotic origin that is caused by the sars coronavirus (sars-cov). the primary route of transmission for sars is the contact of mucous membranes with respiratory droplets or fomites (world health organization, ). research on respiratory droplet transmission shows that the largest droplets evaporate completely before falling m away (xie et al., ). therefore, a person is identified as a close contact if the distance to the carrier (whose id is given in fig. ) is less than m, and this paper uses m as the parameter for querying close contacts. this query was implemented by combining the reference cutting and labeling operators (shown in fig. (a) and (b)); a sketch of this combination is given below, and the step-by-step description follows it.
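a hedged python sketch of the reference-cutting-plus-labeling query (ours; the distance threshold is passed as a parameter because its exact value is elided in the extracted text, and the function again builds on the reference_cutting sketch above):

    def query_close_contacts(cube, carrier, max_dist):
        """Label every (carrier, target) bin whose relative distance is below
        max_dist and return each contact with the times it was too close."""
        contacts = {}
        for (ref, tar), rels in reference_cutting(cube, carrier).items():
            close_times = [t for t, rel in rels.items()
                           if rel.distance < max_dist]
            if close_times:
                contacts[tar] = sorted(close_times)
        return contacts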
first, the relative relationships (distances) of all the experimental subjects relative to the carrier were derived by reference cutting. here, a matrix (shown in fig. ) was used to show the change in each experimental subject's distance to the carrier; blue means a closer distance to the carrier, and red means a farther distance. then, the relative relationship bins indicating close contact (in green) were determined by labeling distances below m. thus, subjects and were found to be close contacts. in the corresponding figure, reference cutting (a) and labeling (b) are used to search the relative relationship bins containing close contacts in the data model, which are labeled in red; the close contacts' relative locations to the carrier are marked in (c), and their locations in absolute space are shown in (d) via the operator of transformation between relative space and absolute space. using this transformation, the study could also locate the exact positions of the close contacts. fig. (c) and (d) show the two locations of subjects and in relative space and absolute space: one was very close to the carrier, first at . m at : : and later at . m at : : . according to these querying results, public health agencies should take action to control these two people. thus, this query would be very helpful for public health agencies seeking to control epidemic spreading if embedded into a real-time gis analytical system. the second scenario was finding any tracker of a subject in a crowd. in crime research, tracking is a basic behavior preceding further crime (beauregard et al., ), so finding trackers helps in taking precautions to prevent crime. usually, a tracker's behavior is similar to that of other normal pedestrians in the crowd; therefore, victims and public security organizations may find it difficult to discover a tracker in absolute space. in this experiment, we assumed that the subject carried a large amount of cash and had to be aware of potential trackers. to remain unnoticed, trackers usually stay within a certain range of the target, so as neither to be noticed nor to lose the target. this distance range is a critical parameter for building specific patterns for trackers. according to sociological theory (moore et al., ), trackers are easily noticed if their distance to the victim is less than m; therefore, this study set m as the lower distance limit for this query. on the other hand, the upper distance limit must guarantee that the target remains within the tracker's sight. this study set the upper distance limit as m to ensure that the tracker did not lose the target. this parameter was calculated from the waiting time at intersections in the study area ( s): s × . m/s = m, where . m/s is the normal average walking speed (fitzpatrick et al., ). sometimes, the distance may exceed m, which could cause the tracker to lose the target; the tracker will then speed up to follow the target. in this case, the tracker must find the target within s because of the constraints of traffic lights. the conditions for tracker finding can thus be expressed as follows: ( ) the distance between the tracker and the target always lies in [ m, m); ( ) the time during which the distance between the tracker and the target is not in [ m, m) is below s, which occurs only once, while speeding up, because of the traffic-light locations in the study area.
three main operators (reference cutting, labeling and matching) were used to find the tracker according to the above conditions. as in the close-contact querying of the epidemic-spreading scenario, this study used the reference cutting operator to obtain a matrix with all subjects' relationships relative to . this process then used the labeling operator to label the relative distance of all relative relationship bins. fig. shows the labeled results: bins labeled in red and yellow had a relative distance less than m, while bins labeled in blue had a relative distance larger than m. fig. illustrates the change in the relative distances between all the subjects and the target. two related patterns could be derived through matching: ( ) the subject was close to the target most of the time (i.e., , and ), or ( ) the tracker lost the target for only a very short time (i.e., ). fig. shows the speed pattern of this subject: obvious acceleration was observed between t and t , meeting the second condition for identifying a tracker. based on the above results, the four subjects were identified as trackers. to demonstrate the advantages of the proposed approach, this study also examined the trajectories of and . using the operator of transformation between absolute space and relative space, the trajectories of these individuals and the target did not present obviously abnormal features compared to other trajectories in absolute space (fig. (a)). however, their trajectories relative to the target presented an obvious accompaniment feature in relative space (fig. (b) and (c)). therefore, the proposed approach can mine hidden patterns that are ignored by traditional gis data models. the third scenario was deriving the motion trends of nearby crowds. the motion trend of a nearby crowd is an important feature of surrounding dynamics and has a large influence on decision-making processes (fang et al., ). awareness of nearby motion trends is necessary for individuals to avoid heavy congestion or stampedes. this experiment assumed the subject to be a walker who hoped to be aware of his surroundings, for example, how many people moved closer to or farther from him and whether they were accelerating towards him. five main operators (reference cutting, trending, labeling, filtering and time flattening) were used to implement this application. first, the relative relationships of all the experimental subjects relative to the walker were derived by reference cutting. then, a matrix containing the motion trends (relative distance change and relative acceleration) was calculated through the trending operator. this matrix was labeled with a legend of colors (fig. ) through the labeling operator, which illustrates changes in the relative motion trends of the other subjects compared to the walker. in fig. , red bins in the matrix mean that the subjects are accelerating towards the walker, orange bins mean that the subjects are decelerating towards the walker, gray bins mean that the subjects are accelerating away from the walker, blue bins mean that the subjects are decelerating away from the walker, and yellow bins mean no change in distance or speed.
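the five categories in this legend can be expressed as a simple classification over the sign of the distance change and of the relative acceleration; the following sketch is ours and rests on that assumption (the paper's exact trending formula is not reproduced in the extracted text):

    def classify_trend(d_prev, d_curr, accel):
        """Map one bin's distance change and relative acceleration to the five
        motion-trend categories used in the matrix legend (assumed semantics:
        decreasing distance = moving towards the walker)."""
        dd = d_curr - d_prev
        if dd == 0 and accel == 0:
            return "no change"                        # yellow
        if dd < 0:                                    # moving towards the walker
            return ("towards, accelerating" if accel > 0    # red
                    else "towards, decelerating")           # orange
        return ("away, accelerating" if accel > 0           # gray
                else "away, decelerating")                  # blue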
the gray, blue and yellow relative relationship bins were then removed with the filtering operator; fig. (a) shows the results of this operation. next, the number of bins in which subjects moved towards the walker was counted by time flattening, which counts the red and orange bins. the result is shown in fig. (b), indicating that more than half of the individuals were moving closer to the walker from t to t ( : : - : : ). similarly, fig. (c) shows the number of subjects accelerating towards the walker at each time, indicating that more than half of the subjects were accelerating towards the walker at eight times (t , t , t -t , t , t , t , t -t and t -t ). this study transformed these bins from relative space to absolute space with the operator of transformation between absolute space and relative space; the locations of the surrounding people are shown in fig. (a) and (b) at : : and : : , respectively. by combining all the locations of surrounding people close to the walker, we could find the concentrated area of surrounding people, labeled in red in fig. (c). in this case, the walker may have felt uncomfortable and potentially at risk on the crowded road. setting the walker as the initial reference object, the model derived the crowd's motion trends for all subjects with the operator of initial reference transformation. fig. shows the results of initial reference transformation from object at t to object at t : those who were moving closer to subject in fig. (a) show no clear motion trend towards the new walker (subject at t ) in fig. (b). this method is helpful for people-oriented routing services, such as pedestrian navigation or tourism. this study tested the performance of the proposed model in two respects: the time complexity of the querying functions, and the time complexity of the proposed model compared with the absolute method in an arcgis geodatabase. this study used a dataset (gramaglia et al., ) of vehicle trajectories on two highways around madrid, spain. the detailed information in these trajectories contained five fields (time, id, x position (m), y position (m) and speed (m/s)). before the comparison, this study derived datasets from this open-access dataset containing m subjects at n times, where m = , , , , , and n = , or . table lists these datasets and their storage sizes. all the computational tasks were implemented single-threaded on a dell desktop (four intel(r) xeon(r) processors with cpu e - v @ . ghz, g of ram and a -bit operating system). six main functions (point extraction, time drilling, reference cutting, querying close contacts, finding trackers and deriving the motion trends of nearby crowds (deriving trend)) were selected to test the time complexity. table lists the execution times for the datasets, and fig. illustrates the computation time of each function across these datasets. both show that the computation time of all the selected functions was less than s when the data volume of the dataset was smaller than gb. the execution times increased significantly when the dataset contained more than . * records. among the six functions, point extraction and finding trackers took significantly less time than the other four functions when the records increased to * in the dataset. this result shows that the proposed model can work effectively with more than gb of data and meet behavior-analysis requirements in scenarios with subjects.
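the paper does not describe its timing harness, so the following is only an illustrative sketch of the kind of single-threaded wall-clock measurement such experiments typically use:

    import time

    def time_function(fn, *args, repeats=5):
        """Return the mean wall-clock execution time of fn(*args) in seconds,
        averaged over several repeats to smooth out noise."""
        elapsed = []
        for _ in range(repeats):
            start = time.perf_counter()
            fn(*args)
            elapsed.append(time.perf_counter() - start)
        return sum(elapsed) / repeats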
the second comparison examined the time efficiency of the proposed model against the absolute model (a geodatabase in arcgis). the proposed model presents an obvious advantage when conducting three functions (point extraction, time drilling and reference cutting), because these functions require only a simple query, while the absolute model requires an additional relative-relationship transformation. the other three functions (querying close contacts, finding trackers and deriving the motion trends of nearby crowds) are meaningful in real situations and require a relatively complicated implementation. therefore, this study compared this second set of three functions as implemented by the proposed model and by the absolute model. in arcgis, "buffer" and "overlay analysis" were used to implement the close-contact query: the point features of the reference subject's trajectory were the input features of buffer analysis in arctoolbox, whose outputs were face features representing the close-contact area; the intersect tool in overlay analysis was then used to check whether the subjects' trajectories fell inside or outside the close-contact area. however, no appropriate tools exist in arcgis to directly implement tracker finding or the derivation of nearby crowds' motion trends. for tracker finding, we computed each subject's distance relative to the reference subject at each time from the coordinates in the geodatabase and filtered the subjects according to the conditions in section . . for deriving the motion trends of nearby crowds, we calculated all the subjects' relative speeds and relative accelerations relative to the reference subject and statistically counted the five types of motion trends, as described in section . . table lists the execution times of the three selected functions for the proposed model and the absolute gis method on of the datasets; the last two datasets were omitted because the absolute gis method stopped responding when the volume of the dataset increased to . gb. fig. illustrates the computation time of each function on these datasets. the execution times of the proposed model were approximately - % those of the absolute gis method for the same functions. the computation time of querying close contacts with the proposed model was approximately % that of the absolute gis method, the most evident improvement from the proposed model. this improvement increased considerably with increasing dataset volume (fig. (a)); the execution-time difference was s when the volume was . gb. the second function was deriving the motion trends of nearby crowds, whose time-efficiency improvement was similar to that of querying close contacts: the execution time of the proposed model was approximately % that of the absolute gis method, and the improvement became significant when the dataset exceeded * records (fig. (c)). the execution-time difference was s when the volume was . gb. the execution time of the third function (finding trackers) with the proposed model was only %- % that of the absolute gis method; this improvement was not as evident as for the previous two functions. in fig. (b), the execution times of the proposed model and the absolute gis method increase steadily as the number of dataset records increases; however, the execution-time difference only reached . s when the dataset's volume was . gb, an insignificant difference compared to the previous two functions.
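for comparison, the absolute-space buffer-and-intersect baseline described above can be sketched as follows (using the open-source shapely library as a stand-in for the arcgis buffer/intersect tools, which is not the authors' actual setup; track layouts are illustrative):

    from shapely.geometry import Point

    def close_contacts_absolute(carrier_track, other_tracks, max_dist):
        """Absolute-space baseline: buffer the carrier's position at each time
        stamp and test every other subject's position against that buffer.
        carrier_track: {t: (x, y)}; other_tracks: {subject: {t: (x, y)}}."""
        contacts = set()
        for t, (cx, cy) in carrier_track.items():
            buffer_area = Point(cx, cy).buffer(max_dist)  # circular buffer
            for subject, track in other_tracks.items():
                if t in track and buffer_area.contains(Point(*track[t])):
                    contacts.add(subject)
        return contacts

note how the buffer must be rebuilt at every time stamp, which is exactly the repeated geometric work that the relative-space reference cutting avoids.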
this result indicates that the proposed model is more efficient at implementing these functions than the absolute gis method. in this study, the execution times of the three selected functions (querying close contacts, finding trackers and deriving the motion trends of nearby crowds) for the proposed model and the absolute gis method on of the datasets are used to indicate the efficiency improvement of the proposed model. this section therefore discusses the improvement with respect to the model's structure and operators for the three case studies. ( ) in the case of epidemic spreading and close-contact querying, this paper analyzed the processes of the proposed model and the absolute gis method for querying the close contacts of the carrier. in absolute gis, this task is achieved by buffer analysis in arcgis software: corresponding buffer areas related to the carrier have to be built continually as it moves through absolute space. in the proposed model, however, there is no need to build many buffer areas to recognize relationships over time; reference cutting is executed only once to obtain all objects' relationships to the carrier, no matter how many objects are in the scenario. this is the fundamental reason for the better performance of our model in this case study. table and fig. (a) show that more relationships have to be recognized in absolute gis as the number of objects in the scenario increases, which causes the execution time to grow more quickly than with the proposed model. ( ) in the case of tracker finding, the pattern-matching approach in absolute space must first compute the relative relationships of moving objects, then use a data structure to store the time series of relative relationships between the target object and the others, and finally filter the time series of relative relationships with distance constraints to match the tracker pattern. this is a complicated computation process in the absolute gis method. the proposed model directly extracts each object's relationship to the target in relative space by reference cutting; the operators of trending and matching are then used to find the tracker. these computation processes are simple matrix-based computations; therefore, the proposed model achieves better computational performance than the pattern-matching approach in absolute space. the results in table and fig. (b) demonstrate that the execution times of the proposed model are much shorter than those of the absolute gis method. ( ) in the case of deriving the motion trends of nearby crowds, the absolute gis method must first compute each object's distance to the walker from coordinates at each time, then compare the time series of relative distances to find the trends of moving closer or farther away, and finally plot them as a distribution map in absolute space to support the choice of walking routes. this is also a complicated computation process. the proposed model can analyze the distance-change characteristics relative to the walker with the operators of trending and matching, which are again matrix-based computations well suited to a computing environment. table and fig. (c) show that the proposed model saved approximately % of the execution time compared to the absolute space-based gis method. in short, the proposed model outperforms the absolute space-based approach in relative motion analysis by simplifying the processes of relationship computation and querying.
by extending the stc (space-time cube) in arcgis, this paper presents a novel model for storing, managing and analyzing moving objects based on their relationships in relative space. in this model, the x and y dimensions of the stc are replaced by the reference and target objects, and the stb in the stc is extended to store the data structure describing the relative relationships among moving objects. to support the analysis of moving objects, this paper introduces six classes of operators according to the type of relationship outcome, namely query processing, group relative dynamic calculation, transformation between absolute space and relative space, initial reference object transformation, attribute transformation, and relative relationship dynamic pattern transformation. then, two commonly used algorithms, relative relationship querying and relative relationship dynamic pattern matching, are introduced on the basis of these operators. the former algorithm organizes the relative relationships along three dimensions (reference object, target object and time) based on the model's structure, and queries relative relationships by the corresponding objects and time. the latter algorithm implements reference cutting and labeling to match each object's relative motion with a specific pattern, which simplifies the similarity measure used in absolute space. this study collected the trajectories of walking pedestrians and used these data to demonstrate the feasibility of the proposed model. finally, the proposed model was successfully tested in three real-life scenarios to analyze dynamic and complex relationships between moving objects. the results validated the capabilities of the designed functions and indicated that the proposed model saved up to %- % of execution time compared with traditional absolute gis methods; the proposed model could therefore be widely applied in the domains of public health, security, and individual-based services. therefore, the contributions of this paper can be summarized as follows. (1) a relative space-based gis data model of moving objects (rsmo) is proposed to construct, operate and analyze moving objects' relative relationships in relative space. (2) a relative space-based relationship-querying algorithm and a relative dynamic pattern-matching algorithm are proposed on the basis of the proposed model. (3) three real-life scenarios are implemented to demonstrate the feasibility and usefulness of the proposed model in discovering interesting behavior hidden in crowds (e.g., querying close contacts, finding trackers and deriving a crowd's motion trends). moreover, the computation time of these real-life scenario analyses is reduced by up to - % compared with absolute space-based methods.

(fig. caption: curves of the execution times for "querying close contacts" (a), "finding tracker" (b) and "deriving the motion trends of nearby crowds" (c) with the proposed model (blue line) and geodatabase (red line).)

the proposed model still has one major disadvantage: the data volume of the proposed data model is much larger than that of the traditional absolute space-based data organization, and the compression of these relative data is a challenge. future research needs to study efficient compression algorithms to simplify data pre-processing and reduce the volume of relative data. the handling of multiple types of relationships, e.g., social and mental relationships, could also be examined in this model.
powerful operators (i.e., closeness analysis in groups) in the social domain could be integrated into the model to support the analysis of social phenomena. moreover, high-performance computing technologies such as cloud computing could be incorporated with the proposed model to design more efficient algorithms for relative relationship querying and pattern mining.
the research was supported in part by the national natural science foundation of china (grants , ), and the fundamental research funds for the central university.

key: cord- - aa cut authors: clavijo, nathalie title: reflecting upon vulnerable and dependent bodies during the covid‐ crisis date: - - journal: gend work organ doi: . /gwao. sha: doc_id: cord_uid: aa cut

this paper is a short narrative on how feminism helped me find a balance in my life and how this balance has been disrupted by the covid‐ crisis. i reflect on how this crisis is showing our vulnerabilities as human beings. this crisis reflects how our bodies depend on each other, moving away from the dominant patriarchal ontology that perceives bodies as being independent (butler, ). i reflect on how this crisis is leaving the most vulnerable in situations of survival because the infrastructures (butler, ) that support our bodies are not functioning. at the same time, this crisis is providing visibility to certain occupations that are dominated by issues of race, class and gender. these occupations are being at least temporarily rehabilitated to their central position in society. we are living a time where we could show, through our teaching, possible resistance to the neoliberal ontology that has captured humanity. keywords: vulnerability; gender; covid- ; dominated occupations

before embracing an academic career, i worked for several years in a company where i was a management accountant. at the age of thirty, i ticked all the boxes that helped maintain masculine domination in my professional and, worse, deep in my personal life. i had two small sons, had taken the decision to change my contract to a part-time basis and was married to a man who always believed his work was more important than mine. i was in charge of taking my sons to day-care and school in the morning and in the evening. whenever there was a problem at school or at day-care, i was the one receiving the calls and running to find a solution. when i would have to go for business travel, i would leave my sons with the nanny, take the plane at am in the morning and come back running at pm to pick them up. i remember crying a lot, feeling so much guilt and so much anger. if my sons' father did not do his part of parental work, it was simply because he was too tired and had to concentrate on his career. i felt so much pain at the time; so much guilt; so much oppression. in , my manager told me at an annual evaluation: "nathalie, you are a confirmed management accountant but you will not go up the ladder because you have children. but you know it's normal, my wife is living the same situation". i felt stronger and more beautiful than i had ever been. most of all, i felt free and had liberated myself from the guilt that comes with the different identities one can embody as a mother, a partner, a researcher, a professor… of course, nothing is easy. of course, i am still vulnerable to gendered norms but at least feminist theories have helped me construct strategies to resist these norms (butler, ) . the problem is that my partner and the people who know me see me as someone extremely strong, who can cope with anything and always stays up at times when many would fall down. whenever i am facing an issue, my partner and parents just say: "don't worry, you'll find a way out". that is about the only thing they will say.
they do not believe me when i tell them that i am weaker than they think and that sometimes i would need more support than they can imagine. yes, i am still seen as a sponge, ready to absorb the family, economic and social pressures. everything collapsed around me. everything i had taken so many years to build, to find the right personal balance, went to dust. many parents are experiencing right now the same difficult days i am going through: organizing my work, working sometimes at am because i really cannot think of any other timeslot for work, my zoom conferences while my sons are playing in the room next door, homeschooling a -year-old boy, a -year-old boy and a -year-old boy at the same time, thinking about meals, laundry, calling family to make sure everyone is fine, etc. when my mother asks me how i am doing, i tell her it is really difficult to handle everything. she simply answers: "you'll be fine, you're used to multi-tasking". you know what came back to my face like a boomerang during this crisis? guilt, that horrible guilt. although i am with my boys all day, i do not feel like i am appreciating that time with them because my mind is thinking of all the work i have to cover. i also feel guilty because i feel useless being home during the crisis we are living. i feel that i am taking dirty care upon myself again. two weeks ago, my -year-old hurt himself while playing with his older brother. when it happened, i was leading a zoom conference. my boy came into my office screaming. i was so angry at him because he was disturbing the conference and he "knew the rules". i muted my microphone, asked him to "shut up" because i still had one hour of zoom. when i ended my conference, i went to see him. he was still crying. his wrist was hurting. i told him he would be fine. deep inside of me, i hoped he was ok because i needed to work for at least half a day the next day and i did not want to spend time going to see a doctor or going to the hospital's emergency unit, especially during pandemic times: i had to work. well, the next day, i went with him to the emergency unit. he had cried all night; i was told his wrist was broken. guilt. i had put my work before my son's well-being. guilt. he must have been in so much pain during the last hours. when we were at the hospital, a nurse was taking care of him when my -year-old asked her: "have you seen your children? are they ok?" the nurse looked at me. i saw the pain in her eyes. she smiled at my son and said she would see her little girl tonight. i think i will never forget that nurse's pain in her eyes. her eyes were clearly saying: "no, i have not seen my children". i had been telling my sons how much sacrifice care workers are making for the common good. i had told them care workers are working at least hours a day and many of them were not seeing their children very often. it is one thing to be conscious of such a situation and to tell your children about it because you want them to understand others' sacrifices for the common good. it is a completely different thing to see the pain in a nurse's eyes. when we left the hospital, my son said: "i don't think her daughter is going to see her". my family and i have been all together at home for weeks now. i still feel frustration and guilt but i have also tried to look at what this crisis is showing us. if many of us have felt that
our lives have collapsed, part of the reason is that some of the infrastructures (associations, schools, day care, stores, offices…) that support our bodies (butler, ) are not functioning during this crisis. we are living a situation where bodies need each other, where we depend on each other, but access to infrastructure is reduced or impossible. this situation is a real-life case where we, as scholars, will be able to show our different audiences that the dominant patriarchal ontology that thinks of the body as independent is over. during this crisis, i have thought of feminism as the act of putting myself aside for a while because bodies that are in a much more dominated position than i am are in real pain. women and men with violent partners are stuck at home with very few ways to escape. children with a violent parent are left on their own. the government has offered alternatives in these difficult times to help those who suffer. for example, the government has declared in the media that because it is difficult for women to call the police when they are home with their aggressor, women can now seek help when they go to a pharmacy. it strikes me how the government keeps conceiving violence within a heterosexual matrix where it is systematically a woman who suffers from the violence of a man. this type of discourse might be blocking persons who are in a non-heterosexual relationship from seeking help. all of us are over-consuming media networks; therefore, violent partners know that pharmacies have become an alternative to calling the police. what about children? social workers are still working but are not allowed to go to people's homes. at a time when the vulnerable would need the support of the infrastructure even more, they are left on their own. this crisis has also brought to the fore the tremendous inequalities that exist in terms of education. according to the government, schools have completely lost contact with about % of children; the most vulnerable ones. what do you need to be able to do homeschooling? a computer, a phone, the internet, a printer and paper. some of us may take this equipment for granted, but the problem is that not all families possess it. what else do you need? at least one parent who is able to help children organize their work and help them understand lessons. what about those parents who left school too early to be able to accomplish these tasks? what about those parents who simply do not know how to teach (and there are many!)? in normal times, the poorest children in france can eat at their school canteen for euro per meal. french school canteens offer healthy meals with a starter, main course, cheese, bread and a dessert. for these children it is sometimes the only proper meal they are able to eat during the day. what meals are they having during this crisis? the infrastructures supporting our bodies are not functioning. vulnerable bodies are trying to literally survive through the crisis. i have also been thinking about bodies who are fighting for the common good. i am thinking of dominated occupations where race, class and gender play a significant role in rendering them invisible in normal times. just like the nurse i was mentioning earlier, care workers are not home like i am; they are not even able to see their children as much as i am.
in fact, in france, their children are being taken care of by other women who are working in day care and schools that remain open for the needs of what the government has called "essential" occupations. most of all, these occupations are exposing themselves to unimaginable risk for the common good. for the time being, these occupations are more vulnerable than i am; therefore, it is fine for me to put myself aside for a little while and reflect on what is happening. at pm at night, i go to my balcony with my sons and applaud for a minute all these occupations that were invisible before the crisis. i have mentioned care workers but we also applaud cashiers, garbage collectors, truck drivers, teachers… at pm at night, my neighbors and i also hit our saucepans with a spoon to protest against the last years of neoliberal decisions that have weakened our health system. society should not be paying for the decisions of a so-called elite. at the same time, society is learning what « essential » occupations are. for feminist researchers, the essential role of these occupations seems obvious, but for french society, it is not always the case. before the covid- crisis, these occupations were considered peripheral; now they seem to have become central. society is temporarily providing recognition to these workers but i surely hope this recognition will be more than symbolic in the future. a debate is rising in france regarding the low levels of remuneration that these "essential" occupations have accepted for so many years. society seems to be struck by the strong decorrelation that exists between an occupation's salary and its central role for the common good. on april th , president macron's speech on television mentioned the following: "we will also have to remember that our country, today, stands entirely on women and men whom our economies recognize and pay so poorly. "social distinctions can only be based on common utility." the french wrote these words more than years ago. today, we must take up the torch and give full force to this principle" https://www.elysee.fr/emmanuel-macron/ / / /adresse-aux-francais- -avril- i did not know whether to laugh or cry at this comment. i find so much hypocrisy in such words because it is the neoliberal system that president macron supports which has worsened such misrecognition. at least, french society might remember president macron's words and act on them in the future. feminist research has been debating this crucial point for so many years and has a lot to offer in these debates to understand the structural norms that have led to such misrecognition. feminist research can also contribute to finding ways of reconciling these occupations with their own power to act. if we want recognition to run in the long term, for those of us who teach, one possible way to start is by educating our audiences, starting with our students. i personally teach accounting in a business school, providing knowledge to future managers in big corporations. this audience has been educated in a context where they have been taught to become entrepreneurs of the self and to constantly maximize their individual performances (brown, ; cooper, ) . this crisis illustrates how vulnerable we are; how taking care of the other is central. an entrepreneur of the self cannot survive without support, without infrastructure (butler, ).
an entrepreneur of the self fully depends on others. how many entrepreneurs have had to stop their activity? how many are struggling to pay their bills, to survive? before teaching what financial performance is, we should start teaching what social justice is. my colleagues and i are building up a course called "accounting for the common good" that will start during the fall semester. we are mobilizing feminist theories to educate future managers. these difficult times act as a reflection of what feminist research has been exposing for so many years. our goal is to put social justice at the heart of accounting. our goal is to teach our future managers what this crisis has taught us. this is a difficult time where we might feel scared and lost. it is also a time of hope. a time where we can show resistance to the neoliberal ontology that has captured humanity. many of our countries are paying the consequences of neoliberalism during this crisis because finance was at the heart of it all. we are vulnerable and dependent human beings. without social good, everything will collapse again and again. bibliography: neo-liberalism and the end of liberal democracy; rethinking vulnerability in resistance; entrepreneurs of the self: the development of management control since . accounting; se défendre: une philosophie de la violence. zones.

key: cord- - rvlnqxk authors: li, zhi-chun; huang, hai-jun; yang, hai title: fifty years of the bottleneck model: a bibliometric review and future research directions date: - - journal: transportation research part b: methodological doi: . /j.trb. . . sha: doc_id: cord_uid: rvlnqxk

abstract the bottleneck model introduced by vickrey in has been recognized as a benchmark representation of peak-period traffic congestion due to its ability to capture the essence of congestion dynamics in a simple and tractable way. this paper aims to provide a th anniversary review of the bottleneck model research since its inception. a bibliometric analysis approach is adopted for identifying the distribution of all journal publications, influential papers, top contributing authors, and leading topics in the past half century. the literature is classified according to recurring themes into travel behavior analysis, demand-side strategies, supply-side strategies, and joint strategies of demand and supply sides. for each theme, typical extended models developed to date are surveyed. some potential directions for further studies are discussed.

the bottleneck model was first introduced by vickrey in , aiming at addressing the departure time choices of commuters on a bottleneck-constrained highway during the morning rush hours. in this model, all individuals are assumed to have an identical preferred time to arrive at their destination and incur a schedule delay cost proportional to the amount of time that they arrive early or late. commuters choose their departure time to minimize their own travel cost based on a trade-off between the bottleneck congestion delay cost and the schedule delay cost of early or late arrival. this model is able to capture the formation and dissipation of queuing behind the bottleneck in a simple and tractable way, thus making it a benchmark representation of the dynamics of peak-period traffic congestion. the past years (from to ) have witnessed significant progress in the bottleneck model research since the pioneering work of vickrey ( ) .
a lot of insights into understanding the features of traffic congestion in peak period have been obtained via the bottleneck model. these insights cover various aspects, such as behavioral analysis (e.g., the nature of shifting peak, inefficiency of unpriced equilibria, behavioral difference of heterogeneous commuters, connection between morning and evening commutes, effects of commuter scheduling preferences), demand management (e.g., congestion / emission / parking pricing and tradable credit schemes, relationship between bottleneck congestion tolling and urban structure), and supply management (e.g., bottleneck / parking capacity expansion). the insights also play an important role in deeply understanding the essence of commuters' travel behavior during morning/evening peak periods, and in evaluating and making reasonable transport policies for alleviating peak-period traffic congestion. to date, there have been a few reviews on the topic of bottleneck models or their variations (e.g., arnott et al., ; lindsey and verhoef, ; small, ; small, ) . these early reviews appeared in different years, aiming to track the development of the bottleneck models on some selected specific topics. because the research area is still growing and new disruptive trends of automation and sharing in mobility are emerging, it is timely to provide a state-of-the-art review of this area, particularly on the occasion of its th anniversary. this paper attempts to provide a systematic and critical review that differs from previous reviews in several aspects. first, it is meaningful to conduct a bibliometric study of the large body of literature to celebrate the th anniversary of vickrey's bottleneck model. to do so, we carry out a literature review to analyze the research progress on the bottleneck model research throughout the past half century. the review tries to cover all relevant topics published in journals rather than some specific ones as covered in the previous review papers. second, a bibliometric analysis approach is adopted that can trace the footprints underlying the scholarly publications by constructing network connections of the publications, journals, researchers, and keywords. with the aid of visualization technique (e.g., a software called vosviewer), the bibliometric approach can map the landscape of the knowledge domain of the bottleneck model studies, allowing us to clearly identify the distribution of publications by journal, influential papers, top contributing authors, and leading topics. third, based on the bibliometric analysis, a critical review on the previous relevant studies is provided, together with some discussions on the current research gaps and opportunities. it is noted that a bottleneck system consists of the following elements: users, the authority (or the government), and the bottleneck (i.e., transport infrastructure). from the perspectives of these elements, we categorize the literature into four classes: travel behavior analysis, demand-side strategies, supply-side strategies, and joint strategies of demand and supply sides. the travel behavior analysis from the users' perspective focuses on the equilibrium analysis of commuters' travel choice behavior, such as the choices of departure time, route, mode, and/or parking. the demand-side strategies from the government's perspective refer to the travel demand management strategies, such as congestion / emission / parking pricing and tradable credit schemes. 
the supply-side strategies from the transport infrastructure's perspective include such topics as bottleneck capacity expansion and parking capacity design. the joint strategies of demand and supply sides from both the government's and the transport infrastructure's perspectives are a hybrid of demand-side and supply-side strategies. for each theme, typical models proposed in previous studies are reviewed. the remainder of this paper is organized as follows. in the next section, a bibliometric study is conducted. section presents a literature review based on the literature categorization. in section , some potential directions for further studies are discussed. finally, section concludes the paper. this section provides a general bibliometric analysis of various bottleneck model studies. the bibliometric analysis uses quantitative methods to classify bibliometric data and build up representative summaries. it has been recognized as a useful approach for analyzing the performances of journals, institutes and authors, as well as the characteristics of research fields or topics. with the aid of visualization techniques (e.g., the vosviewer software), bibliometric networks, such as the co-citation network, co-authorship network, and keyword co-occurrence network, can be constructed and visually presented. to measure the influences of publications, authors and journals, various bibliometric indicators are considered, including the number of publications, total citations, and citations per paper. in order to collect the publication data since , we searched the three well-recognized journal databases or search engines, namely web of science core collection, scopus, and google scholar, using such topics or keywords as bottleneck, bottleneck model(s), morning commute or commuting, and bottleneck congestion. we further retrieved literature by tracking the references cited by the papers found in the three databases. in particular, we checked all references citing the original work of vickrey ( ) , entitled "congestion theory and transport investment". after repeated sifting and checking, a total of relevant papers during the period of - were finally retrieved. during the years of - , this topic received growing attention, with a total of relevant papers published, and the number of relevant publications per five years exceeds , more than the total number of publications during the first years ( - ). during the past years from to , this topic attracted further increasing interest, and a total of relevant papers were published, accounting for . % of the total number of publications in the past years. particularly, the largest amount emerges in the most recent years of - , with publications (about . % of the total number of publications). this continued growing tendency clearly shows that the bottleneck model is still an important and active research topic in the field of transportation, and this tendency is expected to continue in coming years. table shows the top journals (over a total of journals) by the number of published related papers. it can be seen that these journals mainly belong to the "transportation" and "economics" categories in terms of the journal categories in the jcr (journal citation reports) published by thomson reuters. transportation research part b (tr-b for short, a leading journal in the transportation field) leads table with papers (accounting for .
% of the total number of publications), followed by journal of urban economics (jue, a leading journal in the urban economics field) with papers (accounting for . %). the total percentage of papers published in these two journals (a total of papers) reaches nearly half of the total number of publications (about . %). (the other journals listed in table include transportation research part c, economics of transportation, transportation research record, transportation research part e, regional science and urban economics, journal of transport economics and policy, american economic review, transportmetrica a: transport science, applied economics, journal of public economics, and transportmetrica b: transport dynamics.) the number of papers published in each of transportation science (ts), transportation research part a and part c (tr-a, tr-c) reaches or more. notably, as a young journal founded in , economics of transportation published papers on this topic. in order to examine the interrelations among the journals publishing these papers, a bibliographic coupling of the journals is conducted, as shown in fig. . the size of a solid circle (or vertex) represents the number of publications related to the topic of the bottleneck model in a journal. the line between circles represents the co-citation relationship between journals. the color of the line represents the cluster of journals, such as the journal categories of economics or transportation. the width of the lines between circles represents the co-citation degree or intensity between journals (i.e., the total number of co-citations of the documents in the journals concerned). specifically, a thick line means a strong co-citation degree between journals, and vice versa. it can be seen that the papers published in tr-b have been highly cited by those published in tr-a, tr-c, tr-e, jue, ts, trr (transportation research record), economics of transportation, transportmetrica a, jtep (journal of transport economics and policy), networks and spatial economics, and journal of public economics. the papers published in jue have strong co-citation relationships with those published in tr-b, rsue (regional science and urban economics), and economics of transportation. we now look at the most influential papers on the topic of the bottleneck model during the past five decades, which are determined according to total citations or average citations per year. it should be pointed out that in this paper, citation counts are based on the sci/ssci citation databases. here, sci/ssci means science citation index expanded and social science citation index in the web of science core collection. table shows the top most influential papers, each having more than citations. it can be noted that the top most influential papers are vickrey ( ) , small ( ) and adl ( a) in terms of the total number of citations. here, "adl" refers to the first letters of the surnames of the three scholars arnott r, de palma a, and lindsey r. all the most cited papers are from american economic review (aer), which is a well-recognized top economics journal. particularly, the pioneering work of vickrey ( ) is the most influential paper, with the highest total citations of and the highest average citations of . per year. there are papers each having more than citations, and papers each having an average of no less than citations per year.
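as a sketch of how such a coupling network can be constructed, the fragment below builds a small bibliographic coupling graph (a toy example, not the review's actual pipeline: the paper ids and reference sets are invented, and networkx is just one possible library choice):

import itertools
import networkx as nx

# toy corpus: paper id -> set of cited reference ids. the ids below are
# invented for illustration; in the review, such records would come from
# web of science / scopus exports.
refs = {
    "vickrey1969": {"r1", "r2"},
    "adl1990":     {"r1", "r2", "r3"},
    "small1982":   {"r2", "r3"},
    "fosgerau2010": {"r3", "r4"},
}

# bibliographic coupling: two papers are linked when they cite common
# references, with edge weight equal to the number of shared references.
G = nx.Graph()
G.add_nodes_from(refs)
for p, q in itertools.combinations(refs, 2):
    shared = len(refs[p] & refs[q])
    if shared:
        G.add_edge(p, q, weight=shared)

# weighted degree serves as a simple proxy for a node's prominence,
# mirroring the circle sizes in the coupling maps.
strength = dict(G.degree(weight="weight"))
print(sorted(strength.items(), key=lambda kv: -kv[1]))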
it should be pointed out that the work of fosgerau and karlstrom ( ) , entitled "the value of reliability", has had a total of citations and ranks second in terms of average citations per year, despite its short publication history. out of the top papers are from tr-b, from ts, from jue, and from tr-a. in terms of total citations, out of the top most influential papers were written by adl. in terms of average citations per year, out of the top most influential papers were published in tr-b, with publication years between and , and research focusing on tradable credit schemes ( nie and yin, ; xiao et al., ) and the values of travel time and its variance ( fosgerau and karlstrom, ; fosgerau and engelson, ) . in order to understand the co-citation relationships of authors in the bottleneck model research, fig. shows the bibliographic coupling network of authors, in which a solid circle represents a researcher and an edge represents the co-citation between a pair of researchers. the size of a solid circle represents the number of papers published by a researcher, and the width of an edge represents the co-citation intensity between studies by that pair of authors. it can be noted that, as far as the total number of related publications is concerned, authors such as de palma a, lindsey r, huang hj, yang h, fosgerau m, arnott r, verhoef et, zhang hm, liu w, and van den berg vac are the most productive and influential (see also table ), as they are associated with large circles. table further shows the top influential authors in terms of total number of publications (no less than papers), together with the total number of citations and average citations per paper. the research institute and country/area of the associated authors are also indicated in this table. it can be seen that de palma a leads the list in the total number of publications, with publications, followed by lindsey r and huang hj, each having publications. authors have more than publications. small ka, adl, and daganzo cf are the top most influential authors in terms of average citations per year. de palma a leads this table in the total number of citations ( ), and small ka leads in average citations per paper, reaching an average of citations per paper. the other author having more than citations per paper is arnott r, reaching an average of citations per paper. in order to identify the research hotspots in the bottleneck model research, fig. shows the bibliographic coupling network of keywords, in which the size of a solid circle represents the number of occurrences of a keyword, and the width of a line represents the co-occurrence degree of the two keywords connected by that line. one can find some high-frequency keywords in fig. , such as "traffic congestion", "bottleneck model", "transportation", "commuting", "morning commute", "travel time", "costs", "travel behavior", "traffic control", "numerical model", "traffic management", "scheduling", "departure time choice", "user equilibrium", "parking", "road pricing", "congestion pricing", "travel time variabilities", and "heterogeneity". in order to make the review clearer, a cluster analysis of the papers is conducted. we categorize the papers into four classes in terms of their research focuses: travel behavior analysis, travel demand management (i.e., demand-side strategies), infrastructure operations and management (i.e., supply-side strategies), and joint strategies of demand and supply sides.
the travel behavior analysis mainly focuses on the trip and/or activity scheduling behavior of travelers through building various travel choice behavior models, covering such dimensions as departure time / route / parking / mode choices, morning vs evening commutes, piecewise constant vs time-varying scheduling preferences, normal congestion vs hypercongestion, homogeneous vs heterogeneous users, individual vs household, deterministic vs stochastic situations, single vs multiple bottlenecks, and the analytical approach vs the dta (dynamic traffic assignment) approach. travel demand management focuses on a set of strategies and policies to reduce travel demand, or to redistribute the demand in space and/or time, including congestion / emission / parking pricing and their effects on the urban system. infrastructure operations and management involves determining the optimal capacity or service level of infrastructure elements (e.g., road bottleneck, parking lot, airport, port). joint strategies are a hybrid of both demand-side and supply-side strategies. among these modules, travel behavior analysis is the basis of the travel demand management studies and the infrastructure operations and management studies. the travel demand management strategies and the infrastructure operations and management strategies interplay through demand-supply interaction. the interrelationships among them are shown in fig. . the shaded part in fig. represents the joint strategies of demand and supply management. in the following section, we will provide a systematic review of the bottleneck model studies published in the past half century based on the classification in fig. . vickrey's classical bottleneck model aims to describe the departure time choice behavior of commuters during the morning commute. for the convenience of readers, the detailed formulation of the classical bottleneck model is provided in the appendix. in this subsection, some basic assumptions underlying this model are presented. various extensions to relax these assumptions are then reviewed. these extensions include considerations of other travel choice dimensions (e.g., route / parking / mode choices), morning-evening commutes, time-varying scheduling preferences, vehicle physical length in queue and hypercongestion, heterogeneous users, household travel and carpooling, stochastic models and information, multiple bottlenecks, and dta-approach bottlenecks. vickrey's classical bottleneck model, as a stylized representation of the dynamics of traffic congestion, has been widely recognized as an important tool for modeling the formation and dissipation of queuing at a bottleneck in rush hours. in the model, it is assumed that homogeneous commuters travel from a single origin (home) to a single destination (workplace) along a single road that has a bottleneck with a fixed capacity during the morning rush hours. all commuters choose their departure time based on a trade-off between the bottleneck queuing delay and the schedule delay of arriving early or late. equilibrium is reached when no individual has an incentive to alter his/her departure time. the attractiveness of vickrey's bottleneck model lies in its ability to derive closed-form solutions for the equilibrium departure interval (i.e., the departure times of the first and last commuters from home), the equilibrium departure rate, the equilibrium queuing delay at the bottleneck, and the equilibrium cumulative departures and arrivals.
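to make these closed-form solutions concrete, the following python fragment evaluates the standard equilibrium quantities of the α − β − γ model (these are the textbook formulas implied by the description above; the parameter values are purely illustrative, and the free-flow travel time is normalized to zero):

# closed-form equilibrium quantities of the classical bottleneck model,
# with purely illustrative parameter values (alpha > beta is required).
N = 6000.0        # number of commuters
s = 3600.0        # bottleneck capacity (vehicles/hour)
t_star = 9.0      # common desired arrival time (hours, o'clock)
alpha, beta, gamma = 10.0, 5.0, 20.0   # unit costs of travel time, earliness, lateness

delta = beta * gamma / (beta + gamma)

cost_eq    = delta * N / s                           # identical equilibrium trip cost
t_first    = t_star - gamma / (beta + gamma) * N / s # first departure (rush hour start)
t_last     = t_star + beta / (beta + gamma) * N / s  # last departure (rush hour end)
rate_early = alpha * s / (alpha - beta)              # departure rate of early arrivals
rate_late  = alpha * s / (alpha + gamma)             # departure rate of late arrivals
delay_max  = delta / alpha * N / s                   # queuing delay of the on-time commuter

print(f"rush hour [{t_first:.2f}, {t_last:.2f}] h, "
      f"equilibrium cost {cost_eq:.2f}, peak delay {delay_max:.2f} h")

at these values the rush hour lasts exactly N/s hours, and every commuter incurs the same generalized cost regardless of departure time, which is the defining property of the departure-time equilibrium.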
the derivations of these analytical solutions are built on some strong assumptions, stated as follows.

1) departure time choice for the morning commute. the classical bottleneck model involves only the departure time choice dimension for the morning commute. the other travel choice dimensions, such as route, mode and parking choices, and the evening commute are not taken into account. in reality, commuters may also decide on their travel route and/or travel mode besides departure time, subject to parking capacity constraints. moreover, their travel decisions are usually based on day-long schedules, not on the morning or evening activity schedule only. some studies, e.g., de palma and lindsey ( a) , zhang et al., ( ) , and li et al., ( ) , showed that commuters' morning and evening departure-time decisions are interdependent under some conditions, and the morning and evening departure patterns for specific individuals are not symmetric. it is thus necessary to consider multiple travel choice dimensions of commuters on a whole-day basis.

2) piecewise constant scheduling preferences. vickrey's bottleneck model assumes that the value of travel time and the value of schedule delay of arriving early or late are constants, usually denoted by three parameters α, β and γ , respectively. however, some empirical studies have confirmed that the marginal utility of time for performing an activity at a certain location changes over time (see, e.g., tseng and verhoef, ; jenelius et al., ; hjorth et al., ; peer and verhoef, ; peer et al., ) . it is therefore meaningful to relax the assumption of piecewise constant scheduling preferences to develop a scheduling model with time-varying marginal activity utilities.

3) normal congestion with point queue (or vertical queue). vickrey's bottleneck model assumes that traffic flow does not fall under heavily congested conditions and flow increases with density, which is called "normal (or ordinary) congestion". however, in reality, the phenomenon of hypercongestion (i.e., flow decreases with density) may occur in the downtown areas of major cities during rush hours. on the other hand, the point-queue or vertical-queue assumption holds that any vehicle that has to queue before passing through a bottleneck is stacked in a vertical pile at the bottleneck, i.e., vehicles stack vertically and queues take place at a point. a vertical queue does not occupy any road space and has no influence on upstream approaching vehicles. however, in reality, vehicles have physical lengths, which influence the movements of vehicles at the bottleneck and thus the queuing delays; in particular, queue spillback may block the upstream link.

4) homogeneous individuals. the traditional bottleneck models mainly focus on individuals' travel choice behavior, and assume that the travel choice decision of an individual in a household is independent of that of other individuals. however, in reality, the interdependencies between household members (e.g., due to a limited number of cars) indeed influence the activity schedules of household members. the classical bottleneck model also assumes that all commuters are homogeneous, i.e., they have the same desired arrival time and the same values of travel time and schedule delay. however, some studies have shown that there are big differences in the travel choice behavior of heterogeneous commuters due to their different travel preferences. there is thus a need to consider the heterogeneity of users.

5) deterministic model.
a bottleneck system is in general a dynamic and stochastic system. the dynamicity and stochasticity result from various random events, ranging from non-recurrent random incidents, such as traffic accidents, vehicle breakdowns, signal failures, adverse weather and earthquakes, to recurrent fluctuations in travel demand and capacity by time of day, day of week, and season. the travel time or queuing delay at the bottleneck is thus a stochastic variable. furthermore, commuters may not have perfect information about the traffic condition, and thus cannot perceive the travel time accurately, leading them to make travel choice decisions somewhat haphazardly.

6) only one bottleneck. the classical vickrey's bottleneck model assumes that there is only a single bottleneck on the highway connecting commuters' home and workplace. however, in reality a commuter often traverses multiple bottlenecks on his/her way to work, e.g., a y-shaped highway corridor with upstream and downstream bottlenecks. it is thus worthwhile to extend the single-bottleneck model to a multi-bottleneck case.

7) analytical approach. the classical bottleneck model has well-defined analytical solutions, as shown in the appendix. this is because it tackles a single bottleneck only. in order to promote the realistic applicability of the model, it is necessary to extend the analytical bottleneck model to a general network with many links and many od (origin-destination) pairs. to do this, a point-queue dta approach (i.e., treating the queues on congested links in the network as point bottlenecks) and a traffic simulation technique may be adopted.

these aforementioned assumptions play a significant role in deriving the analytical solutions of the bottleneck model and in revealing the nature of congestion dynamics. however, they also restrict the model's explanatory power and applications for the general case, because many realistic characteristics of the traffic system are ignored. to strengthen the realism of the bottleneck model, these assumptions have been relaxed in the literature through various extensions, which are in turn reviewed as follows. the classical vickrey's bottleneck model concerns only the departure time choice of commuters. in the literature, some extensions have been made to incorporate other travel choice dimensions, such as the route / parking / mode choice dimensions. in terms of the route choice dimension, arnott et al., ( b) presented a simultaneous departure time and route choice model for a network with one od pair and two parallel routes. it showed that at the no-toll equilibrium, the number of users on each route coincides with that in the social optimum. optimal uniform and step tolls divert users towards longer routes, but only slightly. an optimal time-varying toll eliminates queuing without affecting route usage. arnott et al., ( ) and liu and nie ( ) proposed multi-class departure time and route choice models for identifying the behavioral differences of different user classes. siu and lo ( ) , among others, further addressed the simultaneous departure time and route choice problem in a bottleneck system with uncertain route travel time. recently, kim ( ) empirically estimated the social cost of traffic congestion in the us using a simultaneous departure time and route choice bottleneck model. it was shown that the annual cost of congestion borne by all us commuters is about billion dollars.
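since assumption 3) and the point-queue dta approach in assumption 7) rest on the same vertical-queue mechanics, a minimal sketch of those mechanics may be helpful. the python fragment below (the capacity, inflow profile and time step are illustrative assumptions, not taken from any cited study) evolves a deterministic point queue and the waiting time it implies:

import numpy as np

# deterministic point (vertical) queue: vehicles stack at a point, so the
# queue is simply cumulative inflow minus cumulative bottleneck discharge.
s, dt = 1.0, 0.01                 # capacity (veh/unit time) and time step
t = np.arange(0.0, 10.0, dt)

# illustrative inflow profile: a surge that temporarily exceeds capacity.
inflow = np.where((t > 1.0) & (t < 4.0), 2.0, 0.2)

queue = np.zeros_like(t)
for k in range(1, t.size):
    # the queue grows with excess inflow and drains at capacity,
    # but can never become negative.
    queue[k] = max(0.0, queue[k - 1] + (inflow[k - 1] - s) * dt)

delay = queue / s                 # waiting time of a vehicle joining now
print(f"max queue {queue.max():.2f} veh, max delay {delay.max():.2f}")

because the queue occupies no space in this formulation, upstream traffic is unaffected however long the pile grows, which is precisely the limitation that the physical-queue extensions reviewed later seek to remove.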
these aforementioned simultaneous departure time and route choice models have derived some insights into understanding the nature of commuters' route choice behavior during the morning commute. however, these studies usually considered only one od pair and an auto-only bottleneck system. they also assumed that total travel demand was fixed (or inelastic), and parking, as a source of congestion at the destination, was ignored. besides the route choice dimension, the parking dimension has also been incorporated in the bottleneck model studies, as shown in table . it can be seen in table that most of the parking studies considered the morning commuting problems from home to work through a bottleneck-constrained route. some exceptions are zhang et al., ( , ), who studied the integrated morning and evening commutes through one two-way route with one bottleneck each way. some studies, such as arnott et al., ( b) , zhang et al., ( , ), and liu ( ) , made a strong assumption about the parking order, i.e., commuters park outwards from (or inwards to) the cbd. qian et al., ( , ) divided the parking lots into two discrete classes: a closer and a farther parking cluster. others, like yang et al., ( ) and liu et al., ( a) , considered parking reservation issues without/with expiration time and the effects of parking space constraints on the departure time and parking location choices. liu and geroliminis ( ) examined the effects of cruising-for-parking on commuters' departure time choices using the mfd (macroscopic fundamental diagram) approach. however, none of these studies considered the parking duration issue ( lam et al., ; li et al., ) , which directly affects the parking turnover and thus the real-time number of available parking spaces in a parking lot. in addition, the bottleneck model has also been extended to consider the mode choice dimension. for the convenience of readers, we summarize in table some principal contributions to the multi-modal bottleneck problems. it can be noted that most studies considered two physically separated modes (auto and rail), and thus could not capture the congestion interaction between modes. moreover, some studies, such as tabuchi ( ) , danielis and marcucci ( ) , and gonzales and daganzo ( ) , ignored the effects of passenger crowding discomfort in transit vehicles on commuters' travel choices. however, some studies, such as huang ( , ), huang et al., ( ) , and de palma et al. ( ) , showed that in-vehicle passenger crowding discomfort has a significant effect on passengers' travel choices. in order to achieve a social-optimum system, the in-vehicle passenger crowding in transit vehicles should be incorporated in the transit service optimization, together with passenger wait time at transit stops due to insufficient vehicle capacity. as previously stated, the standard bottleneck model focuses mainly on morning commuting problems, and little attention has been paid to evening or day-long commuting problems. this may be because the evening commuting is usually seen as a
some studies, such as vickrey ( ) , de palma and lindsey ( a) , gonzales and daganzo ( ) , and li et al., ( ) , have shown that the morning and evening equilibrium departure patterns are not symmetric under some conditions, e.g., the bottleneck system has multiple alternative travel modes, or commuters are heterogeneous in terms of their preferred work start/end times and/or the values of travel time and schedule delay. although investigation of the morning and evening commuting problems in isolation may provide some important insights, in reality commuters usually make travel decisions based on their day-long activity schedules. to date, only a few published papers have involved the analysis of day-long commuting problems. for example, zhang et al., ( ) presented an integrated day-long commuting model that links the morning and evening commuting trips via parking location choice. gonzales and daganzo ( ) incorporated mode choice dimension in the integrated morning and evening commuting problem. daganzo ( ) further examined the two-mode day-long commuting problem when the wish times of arriving at and departing from workplace follow a continuous distribution. the day-long commuting models mentioned above adopted a trip-based modeling approach, and thus the time allocations of commuters for activities and travel during a day cannot be properly addressed. different from the trip-based morning-evening commuting models, zhang et al., ( ) presented a day-long activitytravel scheduling model to address commuters' time allocations among activities and travel during a day. their model connects the home-to-work commute in the morning and the work-to-home commute in the evening via work duration. li et al., ( ) investigated the properties of the day-long activity-travel scheduling model. they presented a sufficient and necessary condition of interdependence between the morning and evening departure-time decisions, i.e., the marginal utility of work activity is not a constant, but depends on both the clock time of day and the work duration (implying a flexible work-hour scheme). recently, zhang et al., ( ) further investigated autonomous vehicles oriented morning-evening commuting and parking problems. these previous studies usually considered a simple activity chain, namely home-work-home chain. however, in reality commuters may engage in other activities before work (e.g., taking the kid to school) or after work (e.g., shopping or recreation). it is thus meaningful to incorporate other activity participations in the day-long activity-travel scheduling model. vickrey's bottleneck model assumes that the value of travel time and the values of schedule delays of arriving early and late are constants α, β, and γ , respectively (see eq. (a ) in appendix). this assumption has been widely adopted in various extensions or variations of vickrey's bottleneck model. however, some previous empirical studies have confirmed that the marginal activity utility varies in time and space. vickrey ( ) formulated the departure time choice model for the morning commuting problem, in which the utilities derived from time spent at home and at work are linear functions of time. tseng and verhoef ( ) estimated the scheduling model of the morning commuting problem, in which marginal utilities vary nonlinearly over time of a day. jenelius et al., ( ) explored the effects of activity scheduling flexibility and interdependencies between different segments in a daily trip chain on delay cost and value of time. 
hjorth et al. ( ) empirically estimated different types of activity scheduling preference functions (including const-step, const-affine, and const-exp formulations) and compared them to a more general form (the exp-exp formulation) with regard to model fit, based on stated preference survey data collected from car commuters traveling in the morning peak in the city of stockholm. abegaz et al. ( ) used stated preference data to compare the valuation of travel time variability under a structural model where trip-timing preferences are defined in terms of time-dependent utility rates (i.e., a "slope model") against its reduced-form model where departure time is assumed to be optimally chosen. fosgerau and small ( ) presented a dynamic model of traffic congestion in which scheduling preferences are endogenously determined, in contrast to traditional activity scheduling models, in which the scheduling preferences are assumed to be exogenously given. a related study developed an activity-based bottleneck model for investigating the step tolling problem, in which the activity scheduling utilities of commuters at home and at work vary by the time of day; it showed that ignoring the preference heterogeneity of commuters would underestimate the efficacy of a step toll. recently, li and huang ( ) investigated the user equilibrium problem of a single-entry traffic corridor with continuous scheduling preferences. the results showed that introducing continuous scheduling preferences makes the inflow rate of early arrivals first increase and then decrease. even though continuous scheduling preferences can smooth the departure rate of commuters and make the user equilibrium flow pattern more stable, a series of shock waves still exists due to discontinuities in departure rates or a sharply decreasing inflow rate at the entry point of the corridor. these aforementioned studies mainly focused on evaluating or comparing the effects of different forms of the activity scheduling preference functions, while other important factors were ignored. for example, the marginal utilities of commuters may vary with gender, travel mode, income level, and so on. it is meaningful to reveal the effects of these heterogeneities on the activity scheduling preferences and to empirically calibrate the scheduling preference functions of various activities through field surveys. the point-queue assumption in vickrey's bottleneck model significantly facilitates the calculation of queuing delay at the bottleneck. however, because it ignores the physical lengths of vehicles in the queue, it cannot account for the influence of vehicle queuing on upstream approaching vehicles. lago and daganzo ( ) investigated two important aspects: queue spillovers caused by insufficient road space, and merging interactions caused by the convergence of trips in a two-origin single-destination network with limited storage space. they obtained some unexpected findings, e.g., that ramp metering is beneficial and that providing more freeway storage is counterproductive. chen et al. ( ) explored the impact of queue-length-dependent capacity on travelers' departure time choices in the morning commute problem. it was shown that multiple equilibria and even a continuum of equilibria may exist, and the equilibrium cost may be a locally decreasing function of the number of users. the standard model for analyzing traffic congestion with vehicle queue length consideration usually incorporates a relationship between the volume, speed, and density of traffic flow.
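before turning to these flow-based models, the point-queue mechanics contrasted above with physical queues can be stated in a few lines. the sketch below is a minimal illustration with an assumed inflow profile, not drawn from any cited study: vickrey's vertical queue grows at the inflow rate minus capacity whenever it is positive.

    def simulate_point_queue(inflow, capacity, dt=0.01):
        # vickrey's point queue: a vertical stack with no physical length,
        # so dQ/dt = r(t) - s whenever Q(t) > 0, and Q is never negative.
        queue, history = 0.0, []
        for r in inflow:
            queue = max(0.0, queue + (r - capacity) * dt)
            history.append(queue)
        return history

    # a 3-hour demand surge (6000 veh/h) against a 4000 veh/h bottleneck
    inflow = [6000.0 if 300 <= k < 600 else 1000.0 for k in range(1000)]
    q = simulate_point_queue(inflow, capacity=4000.0)   # dt is in hours
    print(max(q), max(q) / 4000.0)   # peak queue (veh) and peak delay (h)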
there is a well-defined inverse-u-shaped relationship between traffic volume and density. however, most traffic flow models focus on the situation of "ordinary (or normal) congestion", in which traffic volume increases as traffic density increases (or travel speed decreases as traffic volume increases), because traffic flow is believed not to fall even under heavily congested conditions. in reality, however, the phenomenon of hypercongestion may occur, especially in the downtown areas of major cities during rush hours. hypercongestion refers to traffic jam situations where traffic volume decreases as traffic density increases. small and chu ( ) presented tractable models for handling demand fluctuations on a straight uniform highway and in a dense street network located in a central business district (cbd). for the cbd model, they employed an empirical speed-density relationship for dallas, texas, to characterize hypercongested conditions. arnott ( ) presented a bathtub model of downtown rush-hour traffic congestion that captures the hypercongestion phenomenon. it was shown that when demand is high relative to capacity, applying an optimal time-varying toll can generate benefits that may be considerably larger than those obtained from standard models and that exceed the toll revenue collected. fosgerau and small ( ) combined a variable-capacity bottleneck with α − β − γ scheduling preferences for a special case with only two possible levels of capacity. it showed that the marginal cost of adding a traveler is especially sensitive to the low level of capacity, and that under hypercongestion the policies considered (an optimal toll, a coarse toll, and metering) can be designed so that travelers gain even before counting any toll revenue. fosgerau ( ) extended the bathtub model to assess the effects of road pricing, transit provision, and traffic management policies under hypercongestion. it showed that when the speed of the alternative transit mode is high enough that hypercongestion does not occur in equilibrium, the unregulated nash equilibrium is also the social optimum among a wide range of potential outcomes and any reasonable road pricing scheme would be welfare-decreasing; large welfare gains can be achieved through road pricing when there is hypercongestion and travelers are heterogeneous. gonzales ( ) further considered the hypercongestion issue in a multi-modal context. it showed that hypercongestion may arise when modes are not priced, that a stable steady equilibrium state can emerge when cars and high-capacity transit are used simultaneously, and that there always exist fixed coordinated prices (i.e., a fixed difference of prices) for cars and transit that achieve a stable equilibrium state without hypercongestion. in order to derive a closed-form solution for the no-toll equilibrium under hypercongestion, arnott et al. ( ) proposed a special bathtub model that adapts the simplest bottleneck model to an isotropic downtown area where the congestion technology entails velocity being a negative linear function of traffic density. liu and geroliminis ( ) adopted an mfd approach to model the hypercongestion effects of cruising-for-parking in a congested downtown network. in addition to the hypercongestion issues, the traffic flow model that describes the relationship between velocity and density has also been extended for the investigation of continuum corridor problems (see, e.g., arnott and depalma, ; depalma and arnott, ; li and huang, ; lamotte and geroliminis, ).
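to make the hypercongested branch concrete, the sketch below uses a linear speed-density relation of the kind assumed in the bathtub model just described; the free-flow speed and jam density are illustrative assumptions, not values from any cited study.

    v_f, k_j = 60.0, 120.0   # free-flow speed (km/h) and jam density (veh/km)

    def flow(k):
        # flow q = k * v(k) with v(k) = v_f * (1 - k / k_j): flow peaks at
        # k = k_j / 2 and falls beyond it, which is the hypercongested branch.
        return k * v_f * (1.0 - k / k_j)

    print(flow(30.0), flow(60.0), flow(90.0))
    # 1350.0 1800.0 1350.0 -- at k = 90 > k_j / 2, density rises but flow falls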
it should be mentioned that tractability is a major challenge for models with hypercongestion consideration. this is because the travel time of a traveler is determined by the decisions of other travelers throughout the duration of the trip, as pointed out by fosgerau and small ( ). therefore, it is difficult to derive analytical solutions for a general case with general scheduling preferences, heterogeneous users, or travel time uncertainty, and a simulation method thus has to be used. the standard bottleneck model makes the strong assumption that commuters are homogeneous, i.e., all commuters have the same preference for arriving early or late and an identical value of time. this assumption has been relaxed in the literature to consider the heterogeneity of commuters, such as heterogeneity in travel preferences and in work start times (e.g., flexible or staggered work hours). the heterogeneity may be represented in discrete or continuous form. the discrete type of heterogeneity means that all commuters are divided into several groups, and the commuters in one group are assumed to have the same preference and work start time. the continuous type of heterogeneity assumes a continuous distribution for the preference and/or work start time. considering user heterogeneity is important for achieving accurate estimates of the welfare effects of various policy measures, such as congestion pricing, ramp metering, capacity investment, and flexible work schedules. table provides a summary of bottleneck model studies involving heterogeneous users. it can be seen that the existing studies mainly focused on the case of piecewise constant scheduling preferences (i.e., α − β − γ preferences) and considered users' heterogeneities in the following ways: (i) identical preferred arrival time and discretely/continuously distributed scheduling preference parameters; (ii) discretely distributed preferred arrival times and identical/discretely distributed scheduling preference parameters; and (iii) continuously (including uniformly) distributed preferred arrival times and identical/continuously distributed scheduling preference parameters. however, the cases of discretely (continuously) distributed preferred arrival times but continuously (discretely) distributed scheduling preference parameters have not been investigated yet, which provides a research opportunity for further study. it can also be seen that most studies concerned proportional heterogeneity (i.e., all commuters have the same ratios β/α and γ/α), which helps derive the departure order of different commuter groups and analytical equilibrium solutions. some studies, such as newell ( ), lindsey ( ), ramadurai et al. ( ), doan et al. ( ), and liu et al. ( c), have relaxed this assumption to consider a general heterogeneity structure (i.e., α, β, and γ are allowed to vary independently). some properties of the model (e.g., the existence and uniqueness of the solution) have been discussed. however, it is difficult to derive an analytical solution of the model, which poses the challenge of designing an efficient solution algorithm. in addition, the marginal utility of an activity generally varies over time, as previously stated. it is thus necessary to relax the assumption of α − β − γ scheduling preferences to consider the case of time-varying scheduling preferences.
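as a minimal illustration of what time-varying scheduling preferences entail, the sketch below solves the slope-model first-order condition: for a fixed travel time T, a commuter departs when the marginal utility of remaining at home falls to the marginal utility of already being at work. the marginal-utility profiles are purely illustrative assumptions; with step-shaped profiles, the constant α − β − γ preferences are recovered as a special case.

    def h(t):
        # marginal utility of time at home, falling through the morning (per h)
        return max(0.0, 10.0 - 1.0 * (t - 6.0))

    def w(t):
        # marginal utility of time at work, rising toward mid-morning (per h)
        return min(12.0, 2.0 * (t - 6.0))

    T = 0.5   # fixed travel time (h)

    def foc(t_d):
        # first-order condition of the slope model: h(t_d) = w(t_d + T)
        return h(t_d) - w(t_d + T)

    lo, hi = 6.0, 11.0   # bisection on the monotone first-order condition
    for _ in range(50):
        mid = (lo + hi) / 2.0
        lo, hi = (mid, hi) if foc(mid) > 0 else (lo, mid)
    print(round(lo, 3))   # -> 9.0: the utility-maximizing departure time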
most of the previous bottleneck model studies focused on individual-based trips and assumed that each household member makes activity-travel scheduling decisions independently. however, in reality, a large number of morning commute trips are household-based travel, i.e., multi-person trips among household members rather than single-person trips. the interdependency between household members could influence the activity participation of each household member. therefore, intra-household interaction should be considered in activity-travel scheduling models. de palma et al. ( ) proposed a variant of vickrey's bottleneck model of the morning commute in which individuals live as couples and value time at home more when together than when alone. the results showed that the cost of congestion is higher for couples than for single individuals because the cost of arriving early rises proportionally more than the cost of arriving late decreases. the costs can be even higher if spouses collaborate with each other when choosing their departure times. jia et al. ( ) explored the departure time choice problem of household travel (a commuter and children) in a home-school-work trip chain with two preferred arrival times (a school start time and a work start time). liu et al. ( b) further considered a hybrid of household travel (home-school-work trip chain) and individual travel (home-work trip). the findings showed that by appropriately coordinating the schedules of work and school, the traffic congestion at the highway bottleneck and thus the total travel cost can be reduced. zhang et al. ( ) further investigated and compared the morning commuting equilibrium solutions in the "school near workplace" and "school near home" networks. it was shown that the dynamic commuting equilibrium solution is significantly affected by school locations, and that in the "school near home" network, households always arrive at school no later than the desired school arrival time. these abovementioned studies considered the morning trip-timing decisions of couples (de palma et al., ) and of a parent and his/her children (jia et al., ; liu et al., b; zhang et al., ). however, in a family with two workers (husband and wife), the couple must decide who takes the children to school in the morning and when, and who brings them back home in the evening. a parent has to trade off not only his/her own schedule convenience with that of his/her spouse, but also the schedules of the children. it is therefore meaningful to address the morning-evening activity scheduling issues with intra-household interaction consideration. carpooling or ridesharing refers to the case in which multiple persons travel together in an auto by sharing the cost. with carpooling, the seat capacity of an auto can be utilized more efficiently, and the average individual travel costs, such as fuel cost, tolls, and the stress of driving, are reduced. carpooling is also recognized as a more environmentally friendly and sustainable way to commute, since it reduces vehicular carbon emissions as well as the need for parking spaces. recently, xiao et al. ( ) incorporated carpooling behavior in the morning commute problem while considering the parking space constraint at the destination. three modes, namely solo-driving, carpooling, and transit, were considered.
it was shown that the departure period of solo drivers covers the departure period of carpoolers, and that as the number of parking spaces decreases, the number of solo drivers decreases gradually, while the number of carpoolers first increases and then decreases. liu and li ( ) examined the morning commute problem in the presence of a ridesharing program, in which commuters simultaneously choose their departure time from home and their role in the program (solo driver, ridesharing driver, or ridesharing rider). ma and zhang ( ) further explored the dynamic ridesharing problem on a highway with a single bottleneck together with parking, and designed schemes with different ridesharing payments and shared parking prices. a recent study presented a variable-ratio charging-compensation scheme to investigate the dynamic ridesharing problem using the bottleneck model approach, considering different objectives of the platform, including minimization of system disutility, maximization of platform profit, and minimization of system disutility subject to zero platform profit. yu et al. ( ) incorporated users' heterogeneities in the carpooling problem based on the traditional bottleneck model, and revealed the effects of heterogeneities on the efficiency of carpool subsidization. all the aforementioned studies are based on a corridor with a single bottleneck, and thus cannot consider the interaction between flows on different links in a network (i.e., network effects). extending the single-bottleneck model to a general network with multiple od pairs and multiple bottlenecks could help deepen understanding of the effects of carpooling services on the urban transport system. carpooling services can be implemented through mobile platforms on which passengers can call for riding services and drivers can respond to the service requests. the existing related studies considered a single carpooling platform; it is meaningful to examine the competition and/or collaboration among multiple carpooling platforms. transportation systems are stochastic, dynamic, and nonlinear systems due to various disturbance factors on the supply side and/or demand side, such as traffic accidents, bad weather, and within-day and/or day-to-day demand variations. considering the impacts of stochastic factors on transportation systems has important implications for promoting the resilience and reliability of the systems. table lists some major studies incorporating uncertainty effects in bottleneck problems. it can be noted that most previous studies focused on supply uncertainty caused by travel time variation or capacity randomness. it is somewhat surprising that no studies take into account the effects of demand-side uncertainty in bottleneck problems, though there are a few publications involving joint fluctuations on both the supply and demand sides (e.g., arnott et al., ; fosgerau, ). in order to model the uncertainty effects of a bottleneck system, a probability distribution function needs to be specified for the random variables concerned. in this regard, most studies adopted a general distribution, while a few studies adopted specific distributions, such as the uniform distribution (e.g., xiao et al., ; wang and xu, ; zhang et al., ), the exponential distribution (tian and huang, ), uniform and exponential distributions (noland et al., , ), and the gumbel distribution (xiao and fukuda, ). in terms of modeling method, the expectation value model is usually adopted in stochastic optimization problems.
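the expectation value model just mentioned is risk-neutral; it can be contrasted in a few lines with the risk-averse alternatives discussed below. the sketch assumes a lognormal travel-time distribution and an illustrative risk weight, neither taken from any cited study, and computes the expectation, a mean-variance objective, and cvar for the same random travel time.

    import random, statistics

    # assumed lognormal travel-time distribution (minutes); illustrative only
    random.seed(1)
    samples = sorted(random.lognormvariate(3.4, 0.3) for _ in range(10000))

    mean = statistics.fmean(samples)                        # risk-neutral expectation
    mean_var = mean + 0.5 * statistics.pvariance(samples)   # risk weight 0.5
    cvar_95 = statistics.fmean(samples[int(0.95 * len(samples)):])  # worst 5%

    print(round(mean, 1), round(mean_var, 1), round(cvar_95, 1))
    # the three measures rank departure-time options differently when the
    # travel-time distribution is skewed, as here.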
to better capture the risk attitudes of travelers towards random fluctuations on the demand and/or supply sides, some studies have also incorporated the effects of travel time variation in the objective functions of their models, such as fosgerau ( ), borjesson et al. ( ), engelson and fosgerau ( ), and xiao et al. ( ). it should be mentioned that in the field of travel time variability or reliability, fosgerau and his collaborators have produced a number of studies using models of time-varying scheduling preferences. for instance, they presented a new measure of travel time variability and explored the relationships among different measures of the cost of travel time variability (engelson and fosgerau, ). they also derived the value of travel time variability (fosgerau and fukuda, ) and revealed the relationship between the mean and variance of travel delay in dynamic queues with random capacity and demand (fosgerau, ). for a systematic review of the values of travel time and travel time reliability, interested readers may refer to small ( ). it should be pointed out that these previous studies are mainly based on the expectation value (risk-neutral) model or the mean-variance (risk-averse) model. it is meaningful to consider other measures of risk, such as var (value at risk) and cvar (conditional value at risk), which could lead to a difference in the value of travel time variability. moreover, these previous studies assumed that the scheduling preferences are exogenously given. incorporating endogenous scheduling preferences, as presented in fosgerau and small ( ), is also an important direction for further studies. obviously, variability or uncertainty on the supply and/or demand sides involves a lack of information about how a stochastic process is realized. its analysis therefore naturally invites consideration of the effects of information provision. travelers may have only partial information about traffic conditions before or during a trip. with the aid of various information and communication technologies (e.g., global navigation satellite systems, the global positioning system), real-time traffic information can be collected and disseminated efficiently, and travelers can adjust their activity and travel schedules through day-to-day learning and traffic information guidance. in this regard, noland ( ) examined the congestion effects of providing commuters with pre-trip information and found that information provision does not necessarily bring benefits to the commuters using the information. ziegelmeyer et al. ( ) investigated the impact of public information about past departure rates on congestion level and travel cost based on a learning model and the adl bottleneck model. liu et al. ( a) considered the effect of travelers' inertia in day-to-day behavioral adjustment due to traffic information updating. khan and amin ( ) studied the effects of heterogeneous information (market penetration and accuracy) on traffic congestion. another study explored the impact of the cost of information provision on the information quality provision strategy. zhu et al. ( ) examined the day-to-day departure time adjustment process of travelers with bounded rationality based on long-term historical knowledge (or short-term travel experience) and real-time information provision. however, all the aforementioned studies only considered the simplified case of a single route, which may not capture the full impact of information on traffic congestion.
it is thus necessary to extend these models to a general network in a further study. some studies have relaxed the assumption that commuters pass through only one bottleneck during the commuting peak period to consider the case of passing through multiple bottlenecks during a trip. kuwahara ( ) analyzed the equilibrium queuing patterns at a two-tandem bottleneck on a freeway on which some commuters may pass through both bottlenecks. arnott et al. ( b) studied a y-shaped highway corridor with two upstream bottlenecks and one downstream bottleneck. they found that expanding the capacity of one of the upstream bottlenecks can raise total travel cost (i.e., a paradox occurs), and that metering access to reduce effective upstream capacity can improve efficiency; the optimal capacity of an upstream bottleneck is equal to, or smaller than, the optimal capacity of the downstream bottleneck. kim ( ) further analyzed the dynamic equilibrium queuing patterns for a two-tandem bottleneck with two origins and one destination. it was found that in some cases a queue does not occur at the upstream bottleneck, since the departure rate there is always equal to its capacity at equilibrium, and that in order to avoid traffic congestion at a two-tandem bottleneck, the downstream bottleneck should be enlarged prior to the upstream bottleneck. a further study demonstrated the bottleneck paradox phenomenon by an experimental method in a y-shaped bottleneck network with two groups of commuters, where the commuters in group one pass through only the downstream bottleneck, whereas the commuters in group two must pass through both the upstream and downstream bottlenecks; the observed departure times at the aggregate level were in close agreement with the equilibrium solution. akamatsu et al. ( ) discussed the existence and uniqueness of the solution of the departure-time choice equilibrium for a corridor with multiple discrete bottlenecks and heterogeneous users. these previous related studies mainly focused on specific configurations (e.g., a two-tandem or y-shaped bottleneck structure), and thus the results obtained might not be applicable to a general network; further investigations on general bottleneck networks are needed. the bottleneck models presented in the aforementioned literature usually adopted analytical approaches because they treated only some simple cases with one or two routes. in order to apply the bottleneck models to real large-scale networks, a dta-based bottleneck modeling approach was presented, inspired by vickrey's bottleneck model; in this approach, the usual components of vickrey's bottleneck model are applied separately to the links in the network. in this regard, de palma and his colleagues have developed a dynamic network model, called metropolis, in which travel mode, departure time, and route choices can be endogenously determined. metropolis has been implemented both with a vertical queue for each link (i.e., the physical length of a queue is not considered) and with a horizontal queue, which means that the queue on one link can affect other links (i.e., queue spillback effects). the model is solved using microsimulation, and has been applied to evaluate various policies, such as congestion pricing (see de palma and lindsey, ; de palma et al., , ). for more details of metropolis, please refer to de palma et al. ( ) and de palma and marchal ( ). besides the aforementioned various topics, some other topics related to the bottleneck model have also been studied, summarized as follows.
i. properties of the equilibrium solution. smith ( ) showed the existence of the user equilibrium solution for a single-bottleneck model with homogeneous users. daganzo ( ) proved the uniqueness of the user equilibrium solution. newell ( ) extended the analysis to the case of heterogeneous linear schedule delay functions. an elastic demand version of the bottleneck model was analyzed by arnott et al. ( a). ii. variations of model formulation and solution algorithm. de palma et al. ( ) proposed a stochastic departure time choice logit model to consider commuters' perception errors of utility. han et al. ( a, b) reformulated vickrey's bottleneck model as a partial differential equation formulation. otsubo and rapoport ( ) presented a discrete version of vickrey's bottleneck model and a solution algorithm for computing the equilibrium solution. nie and zhang ( ) proposed numerical solution procedures for the morning commute problem. guo and sun ( ) considered personal perception in the travel cost function, aiming to incorporate commuters' psychological tastes towards early arrival at the workplace in the bottleneck model. iii. doubly dynamic adjustments (day-to-day and within-day dynamics). ben-akiva et al. ( ) presented a dynamic simulation model to describe the evolution of queues and delays from day to day. ben-akiva et al. ( ) further presented a framework for evaluating the effects of traffic information systems based on the doubly dynamic adjustment model incorporating drivers' information acquisition and integration. guo et al. ( ) considered the bounded rationality factor due to individuals' limited cognitive levels and imperfect information in the doubly dynamic bottleneck model. iv. time-varying bottleneck capacity. the classical bottleneck model usually assumes a constant or invariant bottleneck capacity. using optimal control theory, yang and huang ( ) presented a bottleneck model with queue-dependent capacity and elastic demand for the design of time-varying toll schemes, and found that queues must not be eliminated in the optimal state of the system. zhang et al. ( ) presented another bottleneck model in which the bottleneck capacity varies exogenously over time, in discrete steps; they derived the user equilibrium and system optimal traffic patterns with (exogenously) time-varying capacities and the optimal tolls leading to the system optimum pattern. v. ramp metering. arnott et al. ( b) suggested an optimal metering policy to improve the efficiency of a y-shaped bottleneck system. o'dea ( ) found that in the bottleneck model, metering can produce a sizable benefit and should not be regarded as a substitute for congestion pricing. shen and zhang ( ) designed a pareto-improving metering strategy for a multi-ramp linear freeway based on an analysis of the priority order of the ramps. these studies mainly focused on a simple bottleneck system or a freeway, and can be extended to a general network in a further study. queuing delay is a pure deadweight loss to society and results in inefficient use of transportation infrastructure. in order to make efficient use of transportation resources, congestion pricing has been widely suggested as a viable measure to internalize the externalities caused by queuing at the bottleneck, so as to relieve peak-period traffic congestion. congestion pricing schemes are generally based on the economic theory of marginal cost pricing and are a mechanism to improve social benefit.
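as a concrete point of reference before surveying this literature, the sketch below computes the textbook fine (time-varying) toll of the standard bottleneck model, which replicates the equilibrium queueing-delay cost and thereby removes the queue; all parameter values are illustrative assumptions, not estimates from any cited study.

    beta, gamma = 3.9, 15.21              # schedule-delay values (per hour)
    N, s, t_star = 8000.0, 4000.0, 9.0    # demand (veh), capacity (veh/h)

    delta = beta * gamma / (beta + gamma) # composite schedule-delay parameter
    peak_toll = delta * N / s             # toll at the preferred arrival time
    t_first = t_star - peak_toll / beta   # start of the peak window (length N/s hours)
    t_last = t_star + peak_toll / gamma   # end of the peak window

    def fine_toll(t):
        # rises linearly (slope beta) before t_star and falls (slope gamma)
        # after it, mirroring the queueing delay it replaces; zero outside
        if t_first <= t <= t_star:
            return peak_toll - beta * (t_star - t)
        if t_star < t <= t_last:
            return peak_toll - gamma * (t - t_star)
        return 0.0

    for t in (t_first, 8.5, t_star, t_last):
        print(round(t, 2), round(fine_toll(t), 2))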
for comprehensive reviews of congestion pricing, readers can refer to lindsey et al. ( ), van den berg ( ), and fosgerau and van dender ( ). a substantial stream of research has been conducted on bottleneck congestion pricing. table provides a summary of the bottleneck congestion pricing studies, whose entries can be summarized as follows. it should be pointed out that multi-modal bottleneck tolling studies are not shown in table , and readers can refer to table .
time-varying tolling: arnott et al. ( a, ), arnott ( ), bernstein and muller ( ), mun ( ), daganzo and garcia ( ); a social optimum toll to totally eliminate the bottleneck queue during the morning peak, or a pareto-improving time-varying toll in a time window; the objective is to minimize total social cost or user cost.
elastic travel demand: braid ( ), arnott et al. ( a), yang and huang ( ); consider the elasticity of travel demand with respect to travel cost; the objective is to maximize total social surplus.
step tolling: laih ( , ), arnott et al. ( a, , a), fosgerau ( ), lindsey et al. ( ), van den berg ( ), gonzales and christofa ( ), knockaert et al. ( ), ren et al. ( ), bao et al. ( ), xu et al. ( ); single-step or multi-step tolling schemes as a substitute for the time-varying tolling scheme; typical models are the adl model, the laih model, and the braking model of lindsey et al.
route or lane substitute: braid ( ), hall ( ); two routes, one tolled and one free, or a portion of lanes are tolled.
heterogeneous travelers: cohen ( ), arnott et al. ( , ).
reward scheme: rouwendal et al. ( ), yang and tang ( ); a reward scheme means a subsidy instead of a penalty of tolling, e.g., a fare-reward scheme for transit users.
regulatory regime of the bottleneck: de palma and lindsey ( b, , ), fu et al. ( ); private regime (profit maximization), public regime (welfare maximization), and mixed regime.
stochastic environments: yao et al. ( , ), hall and savage ( ); stochastic toll and stochastic capacity.
it can be seen in table that many of the existing studies focused on the topics of step tolling, users' heterogeneity, and tradable credit schemes. based on different assumptions, the step bottleneck tolling studies can be classified into three main categories: the adl model of arnott et al. ( a, a, ), the laih model of laih ( , ), and the braking model of lindsey et al. ( ) and xiao et al. ( ). the laih model implicitly assumed that separate queues exist for tolled users and untolled users who arrive before the toll is turned off. despite this strong assumption, the laih model is useful for estimating the approximate efficiency of a multi-step toll scheme. the adl model assumed that a mass of commuters departs just after the toll is lifted. the braking model considered that as the end of the tolling period approaches, drivers have an incentive to stop before reaching the tolling point and wait until the toll is switched off. the congestion pricing studies shown in table usually fall into the family of piecewise constant α − β − γ preferences and focus on the case of normally recurrent traffic congestion. a further study investigated the congestion tolling problems in a framework of time-varying scheduling preferences, comparing single-step and multi-step toll schemes with linear time-varying and piecewise constant marginal activity utilities. arnott ( ) and fosgerau ( ) incorporated the hypercongestion phenomenon into the congestion tolling problems using the bathtub model. vehicular use also causes environmental externality, besides congestion externality. in order to control vehicle pollution emissions and improve air quality, an emission tax policy has been suggested. bulteau ( ) proposed a microeconomic model of an urban toll system to internalize the negative externality effects (congestion and pollution) generated by vehicular use. in the proposed model, two modes of transportation (cars and public transport) were taken into account, and the vehicle emission rate was implicitly assumed to be a constant. based on the bottleneck congestion model of arnott et al. ( a), three alternative toll schemes were compared: a fine toll (time-varying toll), a coarse toll (varying between the peak period and the off-peak period), and a uniform toll (constant over time). the policy of redistributing the gains from the urban tax to public transport was also evaluated. liu et al. ( b) presented a variable speed limit scheme to reduce total traffic emissions and travel costs based on vickrey's bottleneck model and a constant vehicle emission rate assumption, and evaluated its effectiveness in improving the traffic flow efficiency of the bottleneck system. these studies only considered one od pair with one route and assumed a constant vehicle emission rate. it is meaningful to extend these studies to incorporate network effects and vehicle emissions that vary by vehicle type and speed. tradable credit schemes have recently been advocated as a useful tool for regulating the externalities caused by vehicular use and as a promising substitute for congestion pricing schemes, because such schemes do not involve money transfer from the public to the government, which can significantly increase public acceptability (yang and wang, ). one study examined various parking permit schemes in a many-to-one network, in which each origin is connected to a single destination by a bottleneck-constrained highway and a parallel transit line, comparing three parking permit distribution schemes for commuters living at different origins: uniform, pareto-improving, and system optimum schemes. liu et al. ( a, b) further developed tradable parking permit schemes to realize parking reservations for homogeneous or heterogeneous commuters in terms of their values of time. nie and yin ( ) proposed a general analytical framework for the design of a system optimal tradable credit scheme and the analysis of the efficiency of a tradable credit scheme for a two-route system. their results showed that the tradable credit scheme could provide substantial efficiency gains for a wide range of scenarios. tian et al. ( ) examined the efficiency of a tradable credit scheme in a competitive highway/transit network with continuous heterogeneity in individuals' values of time. xiao et al. ( ) explored the efficiency and effectiveness of a tradable credit system with identical and non-identical commuters, in which credits are tradable between the commuters and the credit price is determined by a competitive market; the credit system consists of a time-varying credit charged at the bottleneck and an initial credit distribution to the commuters. nie ( ) proposed a market-based tradable credit scheme for managing traffic congestion at critical bottlenecks (e.g., bridges and tunnels), assuming that users who avoid traveling in the peak-time window are rewarded with mobility credits while those who do not pay a congestion toll in the form of credits or cash.
the travelers may trade their credits with each other. it was shown that the best choice of the rewarding-charging ratio is 1, i.e., each peak-time user is charged one credit and each off-peak user is awarded one credit. shirmohammadi and yin ( ) designed a tradable credit scheme to keep the queue length at the bottleneck below a threshold specified by the authority. sakai et al. ( ) proposed a model for designing a pareto-improving pricing scheme with bottleneck permits for a v-shaped two-to-one merging bottleneck. they showed that the first-best pricing scheme for this v-shaped network does not always achieve a pareto improvement, because the cost of one group of drivers is increased by the permit pricing. xiao et al. ( ) presented two tradable parking permit schemes for a corridor system with three alternative travel modes (transit, driving alone, and carpooling) when the parking supply at the destination is insufficient. it was found that the prices of parking permits, regardless of whether the trip is completed as a carpool or not, decrease with the parking supply, and that the price a solo driver should pay is higher than that a carpooler should pay; the tradable uniform parking permit scheme is more efficient than the tradable differentiated parking permit scheme for solo-driving and carpooling travelers. it can be noted that these abovementioned studies mainly focused on a single od pair with one or two routes, and the transaction costs of trading the credits were usually ignored. it is thus important to look at the impacts of network effects and transaction costs on the effectiveness of tradable credit schemes. in reality, markets always create speculators, so it is also meaningful to look at the effects of collusive behavior among credit purchasers. in addition, when the parking supply at the destination is insufficient, one may first park his/her car at a park-and-ride lot and then transfer to a transit vehicle to reach the final destination. it is therefore necessary to extend the existing models to incorporate park-and-ride services in a further investigation. the redistribution of toll revenue is an important factor influencing the public acceptability of toll schemes and thus their practical implementation. adler and cetin ( ) developed an analytical model for a two-node two-route network, aiming to explore a direct redistribution approach in which money collected from the drivers on a more desirable route is directly transferred to the users on a less desirable route. it was shown that this toll collection and subsidization model would reduce the travel cost for all travelers and totally eliminate the wait time in the queue; compared with the social optimal solution, the direct redistribution model yields almost identical results. mirabel and reymond ( ) analyzed the impact of toll redistribution on total cost and on the modal split between railroad and road based on the two-mode model of tabuchi ( ), in which toll revenue from the road is redistributed to public transport. two kinds of road toll regimes were considered, i.e., a fine toll and a uniform toll. it was shown that a toll policy is more efficient as long as toll revenue is directed towards public transport when the railroad fare is equal to average cost. these previous studies mainly focused on the redistribution of toll revenue for public transport improvement purposes.
it would also be meaningful to take into account other uses, such as transportation infrastructure investment and fiscal revenue, and to determine the optimal redistribution proportions among different uses. although the first-best time-varying toll may eliminate queuing completely, a congestion toll scheme may not be politically feasible. parking charging can be considered a possible substitute for congestion tolling, because parking charges seem to be much easier to implement than congestion tolls. similar to congestion tolls, parking charges may be used to disperse demand over time so as to reduce congestion and gain efficiency. zhang et al. ( ) presented a morning-evening commuting model to determine a location-dependent parking fee scheme that optimizes the commuters' morning/evening commuting patterns. another study analyzed regulatory schemes for the parking market, namely price-ceiling and quantity tax/subsidy schemes. it was shown that both price-ceiling and quantity tax/subsidy regulations can efficiently reduce system cost and commuter cost under certain conditions, and help ensure the stability of the parking market. fosgerau and de palma ( ) determined the optimal parking charges and evaluated the benefits of parking pricing as an alternative to congestion tolls. zhang and van wee ( ) proposed a duration-dependent parking fee scheme and compared it with three other pricing regimes: no charging, optimal time-varying road tolls, and a combination of optimal time-varying road tolls and location-dependent parking fees. ma and zhang ( ) derived dynamic parking charges for a bottleneck system with ridesharing, in which all travelers were assumed to participate in the ridesharing program, i.e., a traveler is either a driver or a passenger. as a substitute for parking pricing, parking permit schemes have also been studied in the literature (liu et al., b; xiao et al., ), as presented in subsection . . the aforementioned studies did not consider commuters' time spent searching for available parking spaces. the search for parking comprises a wasteful commuting component that contributes to traffic congestion, and thus should be included in commuting cost. on the other hand, parking facilities are usually supplied by both private firms and the public sector. it would be interesting to examine this mixed market and compare it with the extreme cases of either a private-only or a public-only parking provision regime. in addition, a mixed market consisting of solo-driving and ridesharing should be investigated to analyze the effects of parking pricing on the market. ships queuing and waiting at a general anchorage to enter a berth under port congestion are similar to autos queuing and waiting at a road bottleneck, and the congestion pricing concept for a road bottleneck has been extended to address port congestion pricing issues. in this regard, laih and his collaborators have undertaken a number of studies (see laih and hung, ; laih et al., ; laih and chen, ; laih and sun, ). they derived optimal time-varying and/or step toll schemes to eliminate or decrease port congestion. by levying port congestion tolls, the departure schedules of container ships can be rationally changed, and thus the arrival times of container ships at a busy port can be smoothed or dispersed; as a result, the queuing delays of container ships for port entry decrease. they also derived the resultant changes in container ships' departure schedules after levying port congestion tolls.
however, they did not consider the redistribution of port congestion charges, which can help promote the public acceptability of a port congestion charging scheme. in the literature, there are also some studies on airport congestion pricing issues. for example, daniel ( ) proposed an airport runway congestion pricing model (i.e., a bottleneck model with time-dependent stochastic queuing) for estimating congestion prices and capacities for large hub airports. the proposed stochastic bottleneck model combines stochastic queuing, time-varying traffic rates, and intertemporal adjustment of traffic in response to queuing delay and fees. daniel and pahwa ( ) showed that the stochastic bottleneck model of daniel ( ) can generate more realistic traffic patterns than earlier models, such as the deterministic bottleneck model of vickrey ( ). daniel and harback ( , ) adopted the stochastic bottleneck model to address airport congestion pricing issues for major us hub airports. daniel ( ) further determined the equilibrium congestion pricing schedules, traffic rates, queuing delays, layover times, and connection times by time of day for four canadian airports (toronto, vancouver, calgary, and montreal). daniel ( ) also examined the efficiency and practicality of airport slot constraints using a deterministic bottleneck model of landing and takeoff queues; it was shown that slot constraints at us airports would be ineffective, and effective slot constraints require many narrow slot windows. silva et al. ( ) studied airlines' interactions and scheduling behavior, together with airport pricing, using a combination of a deterministic bottleneck model and a vertical structure model that explicitly considers the roles of airlines and passengers. wan et al. ( ) treated terminal congestion and runway congestion separately, and studied the implications for the design of optimal airport charges and/or terminal capacity investment. to capture the difference between these two types of congestion, they adopted a deterministic bottleneck model for the terminal and a conventional congestion model for the runways. they showed that welfare-optimal uniform airfares do not yield the first-best outcome, and that the first-best fares charged to business passengers are higher than the leisure passengers' fares if and only if the relative schedule-delay cost of business passengers is higher than that of leisure passengers. these airport pricing studies usually focused on a single airport and did not consider the effects of airport pricing on the competition and collaboration among regional airports (e.g., the airports of hong kong, guangzhou, shenzhen, zhuhai, and macao in the greater bay area of china), which deserve a further study. queuing delays at the bottleneck during the morning and evening commutes may be an important factor influencing household residential location choice, which shapes the urban spatial structure of a city (mun et al., ). arnott ( ) incorporated departure time choice into a model of urban spatial structure by using vickrey's bottleneck model. it was shown that, in contrast to the standard static model (without a time dimension), congestion tolling in the bottleneck model can cause urban form to become less concentrated, and thus may have less pronounced effects on urban spatial structure than was previously thought. fosgerau and de palma ( ) introduced spatial heterogeneity into the bottleneck model by considering dynamic congestion in an urban setting where trip origins are spatially distributed.
it was shown that at equilibrium travelers sort according to their distances to the destination; the queue is always unimodal regardless of the spatial distribution of trip origins; and travelers located beyond a critical distance from the cbd tend to gain from tolling, even when toll revenue is not redistributed, while nearby travelers lose. gubins and verhoef ( ) considered a monocentric city with a traffic bottleneck located at the entrance to the cbd, in which commuters' departure times, household residential locations, and lot sizes are all endogenously determined. they showed that road pricing may lead to urban sprawl, even when the collected toll revenue is not redistributed back to the city inhabitants. takayama and kuwahara ( ) further developed a model considering commuters' heterogeneity, departure time, and residential location choices in a monocentric city with a single bottleneck. the results showed that commuters sort themselves temporally and spatially according to their values of time and schedule delay flexibility, and that imposing a congestion toll without redistributing toll revenue causes the physical expansion of the city, which is opposite to the results of traditional location models. franco ( ) examined the effects of changes in downtown parking supply on urban welfare, mode choice, and urban spatial structure using a general spatial equilibrium model of a closed monocentric city with two transport modes, endogenous residential parking supply, and bottleneck congestion at the cbd. xu et al. ( ) presented an integrated model of urban spatial structure and traffic congestion for a two-zone monocentric city in which the two zones are connected by a congested highway and a crowded railway; the commuters' departure time and mode choices are governed by a bottleneck model, and the endogenous interactions between travel and residential relocation choices are analyzed. fosgerau et al. ( ) presented a unified model of the bottleneck model and the monocentric city model, which generates a number of new insights regarding the interaction between congestion dynamics and urban spatial equilibrium. unlike the traditional static congested city models, their model leads to an optimal city that is less dense in the center and denser in the suburbs than the city at the laissez-faire equilibrium; this result is similar to that of gubins and verhoef ( ). vandyck and rutherford ( ) developed a spatial general equilibrium model to study the economy-wide and distributional implications of congestion pricing in the presence of agglomeration externalities and unemployment. fosgerau and kim ( ) presented a new monocentric city framework that combines a discrete urban space with multiple vickrey-type bottlenecks. they confirmed empirically the relationship between residential location choice and trip-timing choice, i.e., commuters traveling a longer distance tend to arrive at work early or late (at off-peak times), while commuters with a shorter distance tend to arrive at the peak time. these aforementioned studies considered the role of households' residential location decisions in shaping urban spatial structure but ignored the role of firms' location decisions; in a further study, the effects of both households' and firms' location decisions should be considered simultaneously in the analysis of urban spatial equilibrium. the classical vickrey bottleneck model has also been employed to address transit passengers' travel choice behavior and transit system optimization issues.
kraus and yoshida ( ) incorporated the commuter's time-of-use decision into a model of transit pricing and transit service optimization, in which waiting time at a transit stop was treated analogously to queuing time at the highway bottleneck. it was shown that increased ridership leads to higher average user cost, and that the relationship between service frequency and ridership does not conform to the well-known square root principle. yoshida ( ) further studied the effects of passengers' queuing rules at transit stops (including first-in-first-out and random-access queuing) on mass-transit policies, such as the number of trains and runs, scheduling, and pricing. the results showed that when the shadow value of a unit of waiting time exceeds that of a unit of time late for work, the passengers' queuing discipline does not have any effect on the optimal or second-best mass-transit policy; otherwise, the aggregate travel cost with random-access queuing is lower than that with first-in-first-out queuing. tian et al. ( ) analyzed the equilibrium properties of commuters' trip timing during the morning commute in a many-to-one linear corridor transit system, considering the in-vehicle passenger crowding effect and schedule delay cost. monchambert and de palma ( ) considered a bi-modal competitive system consisting of a public transport mode (bus), which may be unreliable, and an alternative mode (taxi). the results showed that the public transport service reliability at the competitive equilibrium increases with the taxi fare, and that the public transport service reliability and thus patronage at equilibrium are lower than those at the first-best social optimum. de palma et al. ( ) investigated the trip-timing decisions of rail transit users who trade off in-vehicle passenger crowding costs and the disutility of traveling early or late. three fare regimes, namely no fare, an optimal uniform fare, and an optimal time-dependent fare, were studied and compared, together with the determination of the optimal long-run number and capacities of trains. wang et al. ( ) designed transit subsidy policies (including cost and passenger subsidies), funded from either government funding or road toll revenue, to circumvent the downs-thomson paradox appearing in a competitive highway/transit system. yang and tang ( ) proposed a fare-reward scheme for managing rail transit peak-hour congestion with homogeneous commuters, in which a commuter is rewarded with one free trip during pre-specified shoulder periods after taking a certain number of paid trips during the peak hours. such a fare-reward scheme aims to shift commuters' departure times to reduce their queuing at stations in an incentive-compatible manner while keeping the transit operator's revenue intact. tang et al. ( ) further considered heterogeneous commuters, in terms of commuters' scheduling flexibility (i.e., the arrival time flexibility interval), and proposed an incentive-based hybrid fare scheme, which combines the fare-reward scheme with a non-rewarding uniform fare scheme. it was shown that the hybrid fare scheme can create a revenue-preserving win-win-win situation for the transit operator, flexible commuters, and non-flexible commuters. these previous studies have provided many insights into understanding the travel choice behavior of transit passengers, the operations and scheduling of transit services, and the effects of various transit policies, such as transit service pricing and subsidies.
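for reference, the square root principle against which the kraus and yoshida result above is contrasted can be stated in a few lines. the sketch below is a minimal illustration with hypothetical numbers, not the kraus-yoshida model itself: it minimizes the sum of riders' expected waiting cost and operating cost with respect to service frequency.

    import math

    def sqrt_rule_frequency(n_riders, wait_value, cost_per_run):
        # with headway h = 1 / f, expected wait is half a headway, so total
        # cost n * w / (2 f) + c * f is minimized at f* = sqrt(n * w / (2 c)):
        # frequency grows with the square root of ridership.
        return math.sqrt(n_riders * wait_value / (2.0 * cost_per_run))

    # hypothetical inputs: 5000 riders/h, wait valued at 10/h, 200 per run
    print(round(sqrt_rule_frequency(5000, 10.0, 200.0), 1))  # about 11.2 runs/h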
however, these transit studies usually consider the transit mode only, or two physically isolated modes (e.g., auto and rail). in reality, autos and buses share the same roadway, and thus the interaction between them cannot be ignored. the congestion externality caused by intermodal interaction should be considered in transit fare pricing, together with the in-vehicle crowding externality in transit vehicles. arnott et al. ( a) addressed the capacity expansion issue of a road bottleneck with homogeneous commuters. it was shown that the self-financing result (i.e., toll revenue exactly covers the capital cost) holds even when the variation of the toll by time of day is constrained (e.g., a coarse toll). arnott and kraus ( ) investigated under what circumstances the first-best pricing and investment rules (i.e., the first-best self-financing rule, or trip price equal to marginal cost) for a congestible bottleneck facility apply when both the time variation of the congestion charge is constrained and users differ in unobservable characteristics, so that the same congestion charge must be applied to heterogeneous users in terms of work start time or value of time. their findings indicated that the first-best self-financing rule holds if the congestion externality is anonymous, i.e., independent of user type. thereby, marginal cost pricing of a congestible facility is feasible even if users differ in observationally indistinguishable ways, when a completely flexible toll is employed; but when there are constraints on the time variation of the toll (e.g., a uniform toll), marginal cost pricing is infeasible and a variant of ramsey pricing is (second-best) optimal. liu et al. ( a) designed a highway use reservation system to allocate highway space to potential users for different time intervals, and evaluated the efficiency of the reservation system. lamotte et al. ( ) addressed the capacity allocation issue of a road shared between two vehicle types (conventional and bookable autonomous vehicles), using a variant of the bottleneck model. these studies usually assumed a fixed total travel demand, a single travel mode, and a deterministic environment; in further studies, these assumptions can be relaxed to consider elastic demand, multiple travel modes, and/or stochastic situations. qian et al. ( , ) investigated the design problems of parking capacity, parking fees, and access time when all parking lots in the parking market are operated by multiple profit-driven private operators or by a welfare-driven social planner. franco ( ) examined how changes in cbd parking supply affect residential land rents, residential parking supply, mode choice, welfare, air pollution, the share of auto users, population densities, and city size, and whether the self-financing theorem holds in the context of the urban spatial model. liu ( ) presented an equilibrium model of departure time and parking location choices for optimizing the parking supply that minimizes the total system cost (i.e., the sum of travel cost and the social cost of parking supply) under either the user equilibrium or system optimum pattern; he found that the optimal planning of parking with autonomous vehicles is significantly different from that without autonomous vehicles. zhang et al. ( ) further analyzed the optimal parking supply strategy for autonomous vehicles to minimize the total system cost based on an integrated morning-evening commuting model.
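the self-financing result noted above for the road bottleneck can be illustrated numerically. the sketch below uses the textbook bottleneck formulas, with all parameter values being illustrative assumptions: with the optimal fine toll, toll revenue equals the eliminated queueing cost, and at the welfare-optimal capacity under a linear capacity cost it exactly covers the capital cost.

    import math

    beta, gamma = 3.9, 15.21                  # illustrative preference values
    delta = beta * gamma / (beta + gamma)     # composite schedule-delay parameter
    N, k = 9000.0, 2.0                        # commuters; capacity cost per unit

    # with the fine toll, toll revenue is delta * N^2 / (2 s); total social
    # cost including the linear capital cost k * s is minimized at s_star.
    s_star = N * math.sqrt(delta / (2.0 * k))
    toll_revenue = delta * N ** 2 / (2.0 * s_star)
    capital_cost = k * s_star
    print(round(toll_revenue, 2), round(capital_cost, 2))   # identical values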
these parking studies did not consider the competition between different parking types (e.g., on-street and off-street) or parking facility ownership issues (private versus public), which can be considered in further studies. in the literature, there are a few studies involving joint strategies of capacity investment and demand management. for instance, arnott et al. ( ) explored the welfare effects of a toll-financed capacity expansion (i.e., toll revenues are used to finance transport investment) using a bottleneck model with user heterogeneity consideration. it was shown that if the initial capacity is sufficiently small, a toll-financed expansion leaves all drivers better off. xiao et al. ( ) studied the feasibility of expanding bottleneck capacity using toll revenue. the results showed that if the revenue generated by the optimal flat toll is used to finance the capacity expansion, the trip cost of each commuter is reduced in the long run; however, the revenue from the optimal flat toll can never cover the capital cost of constructing the optimal capacity for minimizing total system cost under constant returns. qian et al. ( ) derived the optimal parking capacity, fee, and access time that altogether yield the minimum total social cost. wan et al. ( ) investigated the joint impacts of airport terminal capacity expansion and a time-varying terminal fine toll on passenger demand (including business and leisure passengers) and the airport system. these previous studies usually assumed constant returns to scale and piecewise constant scheduling preferences, which can be extended to consider other returns to scale (e.g., increasing) and time-varying scheduling preferences. in the previous subsections, we have reviewed the literature on bottleneck model studies from the perspectives of travel behavior analysis, demand-side strategies, supply-side strategies, and joint strategies of both demand and supply sides. in spite of the broad extensions conducted since the pioneering work of vickrey ( ), there are still some limitations in the existing related studies, summarized as follows. (i) as shown in section . , various strong assumptions are often made in the related studies, aiming to simplify the model and derive analytical solutions. such simplicity may lead to a large deviation of the model results from actual values, and thus restricts the explanatory power and real applications of the model. in order to capture more realism, it is necessary to relax these assumptions in further studies. (ii) the existing studies have mainly focused on the topics of travel behavior analysis and demand-side strategies (particularly congestion tolling). however, only limited attention has been paid to the topics of supply-side strategies (e.g., financing modes for capacity expansion under fiscal deficit) and joint strategies (e.g., using congestion tolls to finance capacity expansion). the disposition of toll revenue also lacks adequate research. these topics provide potential research opportunities for further studies. (iii) the driving effects of information technology innovation on social development, such as the sharing economy and smart mobility, are seldom incorporated in the previous related studies. the rapid development of new technologies has been bringing about significant social reform, which is changing people's behavior and reshaping urban development. by incorporating these factors causing social changes, the bottleneck model could continue to provide new theoretical insights.
according to the literature review and the analysis of the limitations of existing studies presented in the previous section, one can identify some new and important gaps and opportunities for further studies, presented as follows. one solution to bottleneck congestion is to expand the capacity of the bottleneck. such expansion requires a huge capital outlay, which imposes a heavy financial burden on local authorities. in order to broaden the range of fiscal sources for bottleneck capacity expansion, various franchising programs, such as build-operate-transfer (bot) or public-private partnership (ppp) projects, have been implemented in practice to encourage the private sector to invest in massive transit projects. in a bot contract, the private investor negotiates with the government to finance, design, construct, and operate transportation infrastructure for a certain period (i.e., a concession period). upon the expiration of the concession period, the government takes over the infrastructure. a ppp contract, as another procurement model for public projects or services, implies a collaborative agreement between the private sector and the government targeted at financing, designing, implementing and operating infrastructure and services. partnerships between the private sector and the government provide advantages to both parties. the technology and innovation of the private sector can help provide better public services through improved operational efficiency. the government provides the private sector with incentives to deliver projects on time and within budget. the ppp contract specifies the rights and obligations of each party, embodying the risk and revenue allocations between the parties. it is important to address the bot or ppp contract design issues of bottleneck capacity expansion, particularly under a shortage of funds. congestion pricing schemes have been operating for years in a few countries and regions, such as singapore, london, stockholm, and milan. such schemes are not yet implemented worldwide due to low public acceptance, which is caused by the following factors: privacy, equity, complexity, and uncertainty (gu et al., ). the privacy issue means that the itineraries of travelers are recorded by the charging facilities at different locations. the equity issue implies that congestion pricing prices the poor out of using road facilities and makes road resources a privilege of the rich. the complexity issue concerns the desire for a simple and well-understood proposal for the calculation of congestion charges. the uncertainty issue includes uncertainty in the effectiveness of the proposed scheme and uncertainty in revenue allocation. in order to improve public acceptance of congestion pricing policy, the redistribution of toll revenue from congestion pricing is a critical issue. the government should make a reasonable allocation scheme for toll revenue to improve people's livelihood, such as expanding road capacity, improving public transit services, and reducing taxation. to achieve strong public support, the details of the use of toll revenue should be publicized. it is well known that the main economic principle behind congestion pricing is to internalize the congestion externality caused by transportation. besides the congestion externality, transportation also contributes to environmental externalities through vehicular pollution emissions.
in order to control air pollution levels and improve air quality, clean air action programs have been launched in some large chinese cities, such as beijing and shanghai. the measures adopted in these programs include subsidizing the use of clean energy (e.g., electric or natural gas vehicles), retrofitting old motorized vehicles, and purifying vehicular pollutant emissions (e.g., free provision of vehicular exhaust purifiers). to achieve financial sustainability, it is proposed to levy emission taxes and redistribute part of the emission tax revenue to fund the aforementioned programs. therefore, further studies can focus on how to redistribute the emission tax revenue, which will affect the practical implementation of an emission pricing scheme and public acceptance of it. auto sharing or ridesharing, as an emerging hot topic in the field of transportation, may have a significant effect on auto ownership rationing. it is expected that the implementation of ridesharing has the potential to reduce the maximum number of autos and parking spaces required in the transportation system, which affects the traffic congestion level, the residential location choice and thus the spatial distribution of residents in the urban system. in a ridesharing service system, the ridesharing platform (e.g., didi or uber) plays an important role in matching shared autos and passengers, and the fleet size, service price or subsidy for ridesharing can help adjust the shared auto utilization rate, balance the modal split, and thus relieve the traffic congestion level of the system. further studies can, therefore, consider the relationships among ridesharing, auto ownership rationing, and urban spatial structure, and investigate the fleet sizing, pricing or subsidizing problem of ridesharing in a competitive multi-modal transportation system. the competition and collaboration between different ridesharing platforms, and between ridesharing platforms and public transit, are also important directions for future study. it is widely recognized that the rapid development of information and communication technologies has significantly changed people's learning, work and lifestyles. for example, telecommuting or teleworking, as an alternative work arrangement, is a growing trend in the information age. telecommuting will drive people away from workplaces, and thus save office space in urban areas and shift household residential location choices farther from workplaces, leading to a more spread-out city. it will also reduce the number of work trips and thus the demand for ground transportation, leading to reductions in energy consumption, traffic congestion and air pollution. however, telecommuting reduces the chance of and time for teamwork and face-to-face communication. as a result, team productivity may actually suffer, which hurts the productivity of an individual's firm and the urban economy. despite these two sides, telecommuting has recently become a major working mode in various professions due to the outbreak of covid- across the globe, making people more aware of its importance. on the other hand, the rapid development of new technologies also changes the mobility of people and goods. it is believed that the emerging 5g and self-driving technologies will revolutionize the transportation industry. the 5g technology will enable road users and transportation infrastructure to communicate with everything else on the road.
the self-driving technology can drive vehicles automatically; car users thus do not need to carry out the driving task and can spend their in-vehicle time in autonomous cars on work or leisure activities, yielding extra activity utility. the end-to-end connectivity across the city enabled by the 5g technology allows autonomous vehicles to drive close to each other through cooperation and platooning technologies, leading to increased network capacity and decreased traffic congestion in the peak period. the 5g technology can also alert autonomous vehicles to changes in traffic conditions, such as collisions, weather and traffic accidents, through direct and real-time vehicle-to-vehicle communication, increasing safety and reliability on the road. it is therefore natural to investigate the effects of the new technology revolution on the movement behavior of people and goods and to design an efficient and sustainable urban system. the goal of this paper is to undertake a broad literature review of the bottleneck model research over the past half century. the review undertaken in this paper uses a bibliometric analysis approach, in which the literature data of a total of relevant papers are extracted from three well-recognized journal databases or search engines, namely web of science core collection, scopus, and google scholar. this analysis identifies the leading topics, top contributing authors, influential papers, and distributions of publications by journal, allowing readers to track how and where the literature has evolved. the literature is classified in terms of recurring themes into four main categories: travel behavior analysis, demand-side strategies, supply-side strategies, and joint strategies of demand and supply sides. for each theme, typical models proposed in previous studies are reviewed. based on this systematic review, we have identified some main gaps and opportunities in the bottleneck model research, which provide potential avenues for future research in this important and exciting area. by incorporating technological progress in the new digital era, the bottleneck model research keeps pace with the times and can thus contribute to new theoretical developments. we are grateful to professor robin lindsey and three anonymous referees for their helpful comments and suggestions on earlier versions of the paper. the work described in this paper was jointly supported by grants from the national key research and development program of china ( yfb ), the national natural science foundation of china ( , / ), and the nsfc-eu joint research project ( ). the second author made a presentation entitled "the bottleneck and corridor problems" at the international workshop on transport modeling held in auckland, new zealand, on january - , , and had a heated discussion with the other two authors of this paper. this discussion led them to write a 50th anniversary review of the bottleneck model, as presented in this paper. however, the opinions expressed here are those of the authors themselves. the classical bottleneck model describes the departure time choice of commuters during the morning commute. every morning, n homogeneous commuters travel from home to work along a highway containing a bottleneck with capacity s. to simplify the analysis, all commuters want to reach the workplace at an identical preferred arrival time t*. without loss of generality, the free-flow travel time from home to work is assumed to be zero.
thus, a commuter arrives at the bottleneck immediately after leaving home and arrives at the workplace immediately after leaving the bottleneck. when the arrival rate at the bottleneck exceeds the bottleneck's capacity, a queue develops. those who arrive early or late face a schedule delay cost. commuters choose their departure times based on a trade-off between the bottleneck congestion and the schedule delay cost. let c(t) denote the travel cost of commuters departing from home to work at time t. it is composed of the queuing delay cost at the bottleneck and the schedule delay cost of arriving early or late. let T(t) be the queuing delay time at the bottleneck at time t. c(t) is then given as

c(t) = α·T(t) + β·max{0, t* − t − T(t)} + γ·max{0, t + T(t) − t*}, (a1)

where α is the unit cost of travel time, β is the unit cost of arriving early, and γ is the unit cost of arriving late. according to the empirical study of small ( ), the relationship γ > α > β should hold. the queuing delay time T(t) equals the queue length D(t) divided by the bottleneck capacity s, i.e., T(t) = D(t)/s, where D(t) is the difference between the cumulative arrivals and cumulative departures by that time, i.e.,

D(t) = ∫_{t_q}^{t} r(u) du − s·(t − t_q), (a2)

where r(t) is the departure rate of commuters from home at time t and t_q is the time at which the queue begins. one can also consider a preferred arrival time window [t* − δ, t* + δ], where δ is a measure of work start time flexibility: no penalty of schedule delay is incurred if a commuter reaches the destination within the time window; otherwise, a penalty of schedule delay takes place. for example, vickrey ( ) assumed a uniform distribution of t* over an interval, and hendrickson and kocur ( ) generalized it to a general distribution. at the equilibrium, all commuters have the same travel cost c(t) regardless of their departure time. this means dc(t)/dt = 0, ∀ t ∈ (t_q, t_q′), where t_q′ is the time when the queue ends. one can thus derive the equilibrium departure rate r(t) as

r(t) = αs/(α − β) for t ∈ (t_q, t̃], and r(t) = αs/(α + γ) for t ∈ (t̃, t_q′), (a3)

where t̃ is the departure time from home at which a commuter can arrive at the workplace punctually, i.e., t̃ + T(t̃) = t*. eq. (a3) shows that the equilibrium departure rate curve is piecewise constant. in the morning peak period (t_q, t_q′), the capacity of the bottleneck is fully utilized, and thus t_q′ − t_q = n/s holds. at the equilibrium, the first and last commuters do not face a queue, their queuing delays are zero, and their schedule delay costs must thus be equal, expressed as

β·(t* − t_q) = γ·(t_q′ − t*). (a4)

from eq. (a4), t_q′ − t_q = n/s and t̃ + T(t̃) = t*, one obtains

t_q = t* − (γ/(β + γ))·(n/s), t_q′ = t* + (β/(β + γ))·(n/s). (a5)

the resultant equilibrium travel cost is c = (βγ/(β + γ))·(n/s). from the equilibrium condition c(t) = c(t_q) = c(t_q′) and eqs. (a1) and (a5), one can derive the queuing delay time as

T(t) = (β/(α − β))·(t − t_q) for t ∈ (t_q, t̃], and T(t) = (γ/(α + γ))·(t_q′ − t) for t ∈ (t̃, t_q′). (a6)

eq. (a6) shows that a queue builds up linearly from t_q to t̃ and then dissipates linearly until it disappears at t_q′. this means that the queuing delay curve is piecewise linear.
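to make the equilibrium construction in eqs. (a1)-(a6) concrete, the following minimal python sketch computes the peak period, the piecewise-constant departure rates and the common equilibrium trip cost. all parameter values are illustrative assumptions only (the review fixes none of them), chosen to respect γ > α > β.

```python
# no-toll equilibrium of the classical (vickrey) bottleneck model.
# parameter values are illustrative placeholders respecting gamma > alpha > beta.
alpha, beta, gamma = 6.4, 3.9, 15.2   # unit costs: travel time, earliness, lateness
n, s, t_star = 9000.0, 3000.0, 9.0    # commuters, capacity per hour, preferred arrival time

# the peak lasts n/s hours and is split so that the first and last commuters
# (who face no queue) have equal schedule delay costs: beta*(t*-t_q) = gamma*(t_q'-t*).
t_q = t_star - (gamma / (beta + gamma)) * (n / s)    # queue starts (eq. a5)
t_qp = t_star + (beta / (beta + gamma)) * (n / s)    # queue ends (eq. a5)

# piecewise-constant equilibrium departure rates (eq. a3)
r_early = alpha * s / (alpha - beta)   # departures on the early-arrival side
r_late = alpha * s / (alpha + gamma)   # departures on the late-arrival side

# common equilibrium trip cost
c_eq = (beta * gamma / (beta + gamma)) * (n / s)

print(f"peak period: [{t_q:.2f}, {t_qp:.2f}] around t* = {t_star}")
print(f"departure rates: {r_early:.0f}/h early side, {r_late:.0f}/h late side")
print(f"equilibrium trip cost per commuter: {c_eq:.2f}")
```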
references

testing the slope model of scheduling preferences on stated preference data
a direct redistribution model of congestion pricing
the corridor problem with discrete multiple bottlenecks
analytical equilibrium of bicriterion choices with heterogeneous user preferences: application to the morning commute problem
congestion tolling and urban spatial structure
a bathtub model of downtown traffic congestion
schedule delay and departure time decisions with heterogeneous commuters
economics of a bottleneck
departure time and route choice for the morning commute
does providing information to drivers reduce traffic congestion?
a temporal and spatial equilibrium analysis of commuter parking
route choice with heterogeneous drivers and group-specific congestion costs
a structural model of peak-period congestion: a traffic bottleneck with elastic demand
properties of dynamic traffic equilibrium involving bottlenecks, including a paradox and metering
the welfare effects of congestion tolls with heterogeneous commuters
road pricing, traffic congestion and the environment: issues of efficiency and social feasibility
information and time-of-usage decisions in the bottleneck model with stochastic capacity and demand
the corridor problem: preliminary results on the no-toll equilibrium
equilibrium traffic dynamics in a bathtub model: a special case
financing capacity in the bottleneck model
regulating dynamic congestion externalities with tradable credit schemes: does a unique equilibrium exist?
investigation of the traffic congestion during public holiday and the impact of the toll-exemption policy
dynamic model of peak period congestion
dynamic model of peak period traffic congestion with elastic arrival rates
dynamic network models and driver information systems
understanding the competing short-run objectives of peak period road pricing
valuations of travel time variability in scheduling versus mean-variance models
uniform versus peak-load pricing of a bottleneck with elastic demand
peak-load pricing of a transportation route with an unpriced substitute
partial peak-load pricing of a transportation bottleneck with homogeneous and heterogeneous values of time
revisiting the bottleneck congestion model by considering environmental costs and a modal policy
solving the step-tolled bottleneck model with general user heterogeneity
optimal multi-step toll design under general user heterogeneity
morning commute problem with queue-length-dependent bottleneck capacity
endogenous trip scheduling: the henderson approach reformulated and compared with the vickrey approach
commuter welfare under peak-period congestion tolls: who gains and who loses?
the marginal social cost of travel time variability
the uniqueness of a time-dependent equilibrium distribution of arrivals at a single bottleneck
system optimum and pricing for the day-long commute with distributed demand, autos and transit
a pareto improving strategy for the time-dependent morning commute problem
congestion pricing and capacity of large hub airports: a bottleneck model with stochastic queues
congestion pricing of canadian airports
the untolled problems with airport slot constraints
(when) do hub airlines internalize their self-imposed congestion delays
pricing the major us hub airports
comparison of three empirical models of airport congestion pricing
departure times in y-shaped traffic networks with multiple bottlenecks
bottleneck road congestion pricing with a competing railroad service
stochastic equilibrium model of peak period traffic congestion
congestion pricing on a road network: a study using the dynamic equilibrium simulator metropolis
private toll roads: competition under various ownership regimes
comparison of morning and evening commutes in the vickrey bottleneck model
private roads, competition, and incentives to adopt time-based congestion tolling
modelling and evaluation of road pricing in paris
the economics of crowding in rail transit
trip-timing decisions and congestion with household scheduling preferences
private operators and time-of-day tolling on a congested road network
real cases applications of the fully dynamic metropolis tool-box: an advocacy for large-scale mesoscopic transportation systems
metropolis: modular system for dynamic traffic simulation
morning commute in a single-entry traffic corridor with no late arrivals
on the existence of pricing strategies in the discrete time heterogeneous single bottleneck model
additive measures of travel time variability
the cost of travel time variability: three measures with properties
on the relation between the mean and variance of delay in dynamic queues with random capacity and demand
how a fast lane may replace a congestion toll
congestion in the bathtub
congestion in a city with a central bottleneck
the dynamics of urban traffic congestion and the price of parking
the value of travel time variance
commuting for meetings
valuing travel time variability: characteristics of the travel time distribution on an urban road
travel time variability and rational inattention
the value of reliability
commuting and land use in a city with bottlenecks: theory and evidence
vickrey meets alonso: commute scheduling and congestion in a monocentric city
trip-timing decisions with traffic incidents
hypercongestion in downtown metropolis
endogenous scheduling preferences and congestion
road pricing with complications
downtown parking supply, work-trip mode choice and urban spatial structure
private road supply in networks with heterogeneous users
coordinated pricing for cars and transit in cities with hypercongestion
empirical assessment of bottleneck congestion with a constant and peak toll: san francisco-oakland bay bridge
morning commute with competing modes and distributed demand: user equilibrium, system optimum, and pricing
the evening commute with cars and transit: duality results and user equilibrium for the combined morning and evening peaks
congestion pricing practices and public acceptance: a review of evidence
dynamic bottleneck congestion and residential land use in the monocentric city
day-to-day departure time choice under bounded rationality in the bottleneck model
modeling the morning commute problem in a bottleneck model based on personal perception
pareto improvements from lexus lanes: the effects of pricing a portion of the lanes on congested highways
tolling roads to improve reliability
a partial differential equation formulation of vickrey's bottleneck model, part i: methodology and theoretical analysis
a partial differential equation formulation of vickrey's bottleneck model, part ii: numerical analysis and computation
schedule delay and departure time decisions in a deterministic model
estimating exponential scheduling preferences
fares and tolls in a competitive system with transit and highway: the case with two groups of commuters
pricing and logit-based mode choice models of a transit and highway system with elastic demand
modal split and commuting pattern on a bottleneck-constrained highway
optimal utilization of a transport system with auto/transit parallel modes
the value of travel time variability with trip chains, flexible scheduling and correlated travel times
traveler delay costs and value of time with trip chains, flexible activity scheduling and information
traffic managements for household travels in congested morning commute
bottleneck model with heterogeneous information
estimating the social cost of congestion using the bottleneck model
bottleneck congestion: differentiating the coarse charge
the user costs of air travel delay variability
a new look at the two-mode problem
the commuter's time-of-use decision and optimal pricing and service in urban mass transit
equilibrium queueing patterns at a two-tandem bottleneck during the morning peak
spillovers, merging traffic and the morning commute
queueing at a bottleneck with single- and multi-step tolls
effects of the optimal step toll scheme on equilibrium commuter behaviour
economics on the optimal n-step toll scheme for a queuing port
economics on the optimal port queuing pricing to bulk ships
the optimal step toll scheme for heavily congested ports
effects of the optimal port queuing pricing on arrival decisions for container ships
effects of the optimal n-step toll scheme on bulk carriers queuing for multiple berths at a busy port
optimal non-queuing pricing for the suez canal
modeling time-dependent travel choice problems in road networks with multiple user classes and multiple parking facilities
on the use of reservation-based autonomous vehicles for demand management
the morning commute in urban areas with heterogeneous trip lengths
user equilibrium in a bottleneck under multipeak distribution of preferred arrival time
morning commute in a single-entry traffic corridor with early and late arrivals
user equilibrium of a single-entry traffic corridor with continuous scheduling preference
bottleneck model revisited: an activity-based perspective
step tolling in an activity-based bottleneck model
reliability evaluation for stochastic and time-dependent networks with multiple parking facilities
existence, uniqueness, and trip cost function properties of user equilibrium in the bottleneck model with multiple user classes
equilibrium in a dynamic model of congestion with large and small users
step tolling with bottleneck queuing congestion
handbook of transport systems and traffic control
optimal information provision at bottleneck equilibrium with risk-averse travelers
an equilibrium analysis of commuter parking in the era of autonomous vehicles
modeling the morning commute for urban networks with cruising-for-parking: an mfd approach
interactive travel choices and traffic forecast in a doubly dynamical system with user inertia and information provision
expirable parking reservations for managing morning commute with parking space constraints
efficiency of a highway use reservation system for morning commute
a novel permit scheme for managing parking competition and bottleneck congestion
effectiveness of variable speed limits considering commuters' long-term response
managing morning commute with parking space constraints in the case of a bi-modal many-to-one network
modeling and managing morning commute with both household and individual travels
pricing scheme design of ridesharing program in morning commute problem
departure time and route choices in bottleneck equilibrium under risk and ambiguity
morning commute problem considering route choice, user heterogeneity and alternative system optima
a semi-analytical approach for solving the bottleneck model with general user heterogeneity
the morning commute problem with ridesharing and dynamic parking charges
bottleneck congestion pricing and modal split: redistribution of toll revenue
public transport reliability and commuter strategy
peak-load pricing of a bottleneck with traffic jam
optimal cordon pricing in a non-monocentric city
flextime, traffic congestion and urban productivity
the morning commute for nonidentical travelers
traffic flow for the morning commute
a new tradable credit scheme for the morning commute problem
managing rush hour travel choices with tradable credit scheme
numerical solution procedures for the morning commute problem
commuter responses to travel time uncertainty under congested conditions: expected costs and the provision of information
travel-time uncertainty, departure time choice, and the cost of morning commutes
simulating travel reliability
optimal metering in the bottleneck congestion model
vickrey's model of traffic congestion discretized
equilibrium at a bottleneck when long-run and short-run scheduling preferences diverge
long-run versus short-run perspectives on consumer scheduling: evidence from a revealed-preference experiment among peak-hour road commuters
the economics of parking provision for the morning commute
managing morning commute traffic with parking
modeling multi-modal morning commute in a one-to-one corridor network
the morning commute problem with heterogeneous travellers: the case of continuously distributed parameters
linear complementarity formulation for single bottleneck model with heterogeneous commuters
a single-step-toll equilibrium for the bottleneck model with dropped capacity
give or take? rewards versus charges for a congested bottleneck
pareto-improving social optimal pricing schemes based on bottleneck permits for managing congestion at a merging section
pareto-improving ramp metering strategies for reducing congestion in the morning commute
tradable credit scheme to control bottleneck queue length
on the existence and uniqueness of equilibrium in the bottleneck model with atomic users
airlines' strategic interactions and airport pricing in a dynamic bottleneck model of congestion
punctuality-based departure time scheduling under stochastic bottleneck capacity: formulation and equilibrium
punctuality-based route and departure time choice
the scheduling of consumer activities: work trips
trip scheduling in urban transportation analysis
valuation of travel time
the bottleneck model: an assessment and interpretation
the existence of a time-dependent equilibrium distribution of arrivals at a single bottleneck
bottleneck congestion and modal split
bottleneck congestion and distribution of work start times: the economics of staggered work hours revisited
bottleneck congestion and residential location of heterogeneous commuters
a pareto-improving and revenue-neutral scheme to manage mass transit congestion with heterogeneous commuters
modeling the modal split and trip scheduling with commuters' uncertainty expectation
the morning commute problem with endogenous shared autonomous vehicle penetration and parking space constraint
tradable credit schemes for managing bottleneck congestion and modal split with heterogeneous users
equilibrium properties of the morning peak-period commuting in a many-to-one mass transit system
step-tolling with price-sensitive demand: why more steps in the toll make the consumer better off
congestion tolling in the bottleneck model with heterogeneous values of time
winning or losing from dynamic bottleneck congestion pricing? the distributional effects of road pricing with heterogeneity in values of time and schedule delay
congestion pricing in a road and rail network with heterogeneous values of time and schedule delay
autonomous cars and dynamic bottleneck congestion: the effects on capacity, value of time and preference heterogeneity
multiclass continuous-time equilibrium model for departure time choice on single-bottleneck network
regional labor markets, commuting, and the economic impact of road pricing
visualizing bibliometric networks
congestion theory and transport investment
pricing, metering, and efficiently using urban transportation facilities
a smart local moving algorithm for large-scale modularity-based community detection
airport congestion pricing and terminal investment: effects of terminal congestion, passenger types, and concessions
equilibrium trip scheduling in single bottleneck traffic flows considering multi-class travellers and uncertainty: a complementarity formulation
e-hailing ride-sourcing systems: a framework and review
dynamic ridesharing with variable-ratio charging-compensation scheme for morning commute
overcoming the downs-thomson paradox by transit subsidy policies
equilibrium and modal split in a competitive highway/transit system under different road-use pricing strategies
an ordinary differential equation formulation of the bottleneck model with user heterogeneity
the morning commute problem with coarse toll and nonidentical commuters
managing bottleneck congestion with tradable credits
the morning commute under flat toll and tactical waiting
congestion behavior and tolls in a bottleneck model with stochastic capacity
stochastic bottleneck capacity, merging traffic and morning commute
on the morning commute problem with carpooling behavior under parking space constraint
tradable permit schemes for managing morning commute with carpool under parking space constraint
the valuation of travel time reliability: does congestion matter?
on the cost of misperceived travel time variability
constrained optimization for bottleneck coarse tolling
pareto-improving policies for an idealized two-zone city served by two congestible modes
analysis of the time-varying pricing of a bottleneck with elastic demand using optimal control theory
mathematical and economic theory of road pricing
on the morning commute problem with bottleneck congestion and parking space constraints
managing network mobility with tradable credits
managing rail transit peak-hour congestion with a fare-reward scheme
congestion derivatives for a traffic bottleneck
congestion derivatives for a traffic bottleneck with heterogeneous commuters
commuter arrivals and optimal service in mass transit: does queuing behavior at transit stops matter?
carpooling with heterogeneous users in the bottleneck model
a new look at the morning commute with household shared-ride: how does school location play a role?
impact of capacity drop on commuting systems under uncertainty
integrated daily commuting patterns and optimal road tolls and parking fees in a linear city
modelling and managing the integrated morning-evening commuting and parking patterns under the fully autonomous vehicle environment
efficiency comparison of various parking charge schemes considering daily travel cost in a linear city
improving travel efficiency by parking permits distribution and trading
integrated scheduling of daily work activities and morning-evening commutes with bottleneck congestion
analysis of user equilibrium traffic patterns on bottlenecks with time-varying capacities and their applications
optimal official work start times in activity-based bottleneck models with staggered work hours
day-to-day evolution of departure time choice in stochastic capacity bottleneck models with bounded rationality and various information perceptions
road traffic congestion and public information: an experimental investigation

key: cord- - lwjr authors: kaplan, edward h. title: containing -ncov (wuhan) coronavirus date: - - journal: health care manag sci doi: . /s - - - sha: doc_id: cord_uid: lwjr

the novel coronavirus -ncov first appeared in december in wuhan, china [ ]. most of the initial cases were linked to the huanan seafood wholesale market, but person-to-person transmission was established quickly, while viral transmission prior to the appearance of symptoms remains controversial [ , ]. from the same family as the sars and mers coronaviruses ( % and % fatality rates respectively [ , ]), -ncov has also led to serious cases of pneumonia, albeit with a lower estimated fatality rate of - % at the present time [ ]. given that a vaccine cannot be developed and deployed for at least a year, preventing further transmission relies upon standard principles of containment, two of which are the isolation of known cases and the quarantine of persons believed at high risk of exposure (with the latter extended inside china to prevent travel to or from wuhan, and globally via the cancellation of air travel to and from china). what follows are some probability models for assessing the effectiveness of case isolation of infected individuals and quarantine of exposed individuals within a community during the initial phase of an outbreak, with illustrations based on early observations from wuhan. the good news is that in principle, case isolation alone is sufficient to end community outbreaks of -ncov transmission provided that cases are detected efficiently. quarantining persons identified via tracing backwards from known cases is also beneficial, but less efficient than isolation. to begin, suppose someone has just become infected.
absent intervention, assume that this infected person will transmit new infections in accord with a time-varying poisson process with intensity function λ(t) denoting the transmission rate at time t following infection. the expected total number of infections this person will transmit over all time (the reproductive number r_0) equals

r_0 = ∫_0^∞ λ(t) dt, (1)

and as is well-known, an epidemic cannot be self-sustaining unless r_0 > 1 [ , ]. it follows that a good way to assess isolation and quarantine is to examine their effect on r_0. but first, we take advantage of another epidemic principle, which is that early in an outbreak, the incidence of infection grows exponentially. so, suppose that the rate of new infections grows as k·e^{rt}, where r is the exponential growth rate, and let ι denote the initial number of infections introduced at time 0. it follows that

k·e^{rt} = ι·λ(t) + ∫_0^t k·e^{r(t−s)} λ(s) ds, (2)

which is to say that the rate of new infections at chronological time t is the cumulation of all past infections times the chronological time t transmission rate associated with those past infections. simplifying (dividing through by k·e^{rt}) and recognizing that e^{−rt}·λ(t) goes to zero (r is finite) yields the euler-lotka equation

∫_0^∞ e^{−rs} λ(s) ds = 1. (3)

in the disease outbreak context, eq. (2) can be understood as the composite of all sources of current infections. among all persons newly infected, the fraction whose infectors were infected between t and t + Δt time units ago equals e^{−rt} λ(t) Δt (using eq. (3) for the normalization);

b(t) = e^{−rt} λ(t) (4)

is thus the probability density for the duration of time an infector has been infected, as sampled from the infectors of those just infected. back to wuhan, where a detailed study of the first confirmed -ncov cases was reported in [ ]. using only case data up to january , the exponential growth rate r was directly estimated to equal . /day [ ]. contact tracing from identified index cases was able to establish links to their presumed infectors. while it was not possible to pinpoint exact dates of infection, the dates at which symptoms in both infectees and (presumed) infectors occurred were determined, and the difference in these dates was taken as a proxy for the elapsed time since infection of the infector (see [ ] for technical issues that arise from this approach). the resulting frequency distribution was then used to estimate b(t), which was fit as a gamma distribution with mean (standard deviation) of . ( . ) days [ ]. given these estimates of r and b(t), λ(t) = e^{rt}·b(t), and the implied r_0 = ∫_0^∞ e^{rt} b(t) dt is consistent with what was reported in [ ] as well as other studies employing different methods [ , ]. we can now model containment. starting with case isolation, suppose that an infected person is detected at time t_d days following infection, and is isolated for τ_i days. the effect of doing this is to erase all infections that would have been transmitted between times t_d and t_d + τ_i. following the poisson model, the expected number of transmissions blocked equals

∫_{t_d}^{t_d+τ_i} λ(t) dt. (5)

clearly, the sooner an infected person is detected (the smaller t_d) and the longer a person is isolated (the larger τ_i), the greater the number of infections that can be prevented. suppose that newly infected persons self-recognize their infection at the time when symptoms appear. this optimistic scenario equates the detection time to the incubation time for -ncov, and this incubation time distribution was reported to follow a lognormal distribution with a mean of . days and a th percentile of . days (which implies a standard deviation of . days) [ ].
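the identity λ(t) = e^{rt}·b(t) and the implied r_0 from eq. (1) are straightforward to evaluate numerically. in the python sketch below, the growth rate and the gamma parameters of b(t) are assumed placeholder values, not the estimates used in the paper.

```python
# numerically evaluate r_0 = int_0^inf exp(r*t) * b(t) dt for a gamma-distributed
# backward generation density b(t). all numbers are assumed placeholders.
import numpy as np
from scipy import stats
from scipy.integrate import quad

r = 0.10                      # assumed exponential growth rate per day
mean_b, sd_b = 7.5, 3.4       # assumed mean/sd of the gamma density b(t)
shape = (mean_b / sd_b) ** 2  # gamma shape implied by the mean and sd
scale = sd_b ** 2 / mean_b    # gamma scale implied by the mean and sd

b = stats.gamma(a=shape, scale=scale).pdf
r0, _ = quad(lambda t: np.exp(r * t) * b(t), 0, 60)  # tail beyond 60 days is negligible
print(f"implied r_0 = {r0:.2f}")
```

as a design note, this integral converges for a gamma-distributed b(t) only when r is smaller than the reciprocal of the gamma scale; for faster growth the upper integration limit must be interpreted as an explicit truncation.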
denoting the incubation time density by f_{t_d}(t), the expected number of transmissions blocked by case isolation of duration τ_i upon the appearance of symptoms, β_i, is given by

β_i = ∫_0^∞ f_{t_d}(t) ∫_t^{t+τ_i} λ(u) du dt. (6)

substituting λ(t) and f_{t_d}(t) as previously described yields the corresponding β_i values. however, assuming that the time to detection is equal to the incubation time is very optimistic. indeed, the wuhan study revealed that the average time from onset of illness to a medical visit was . days [ ], comparable to the incubation time. to obtain a more sobering view of isolation, suppose that an individual's time to detection is twice the incubation time. using the lognormal incubation density cited above, the new detection time distribution will also be lognormal but now with a mean (standard deviation) of . ( . ) days. applying eq. (6) yields β_i 's of . , . and . for isolations of , and unlimited days. even lifetime isolation fails to reduce transmission below threshold if the time to detection takes too long. given the amount of attention generated by news coverage and public service announcements, this second scenario is overly pessimistic. the real message is the importance of rapid (self-)detection. what of quarantine? screening and quarantining individuals potentially exposed elsewhere upon entry to a community (as has been the case at airports) certainly can prevent the importation of new infections and their subsequent transmission chains, though at the cost of containing uninfected persons. beyond this, quarantine (typically at home, where it is recommended that the exposed person not share immediate space, utensils, towels, etc. with others) is meant for apparently healthy individuals discovered to be at risk of exposure via contact tracing, with the idea that should they in fact have become infected, they would become ill without transmitting the virus and then report for isolation. however, quarantining uninfected contacts offers no benefit presuming the potential infector has already been identified and isolated, so the key question is whether such tracing would reach already infected but previously unidentified contacts in time to make a meaningful reduction in disease transmission. to present an optimistic view of tracing-driven quarantine, suppose that a newly infected person (referred to as the index from the standpoint of contact tracing) is immediately identified. instantaneous interview and tracing leads to the quarantine of our index's prior contacts, one of whom happens to be the infector (who is immediately isolated upon discovery). said infector, however, has already been infectious for some time before being identified via the index case. indeed, the probability density for the duration of time the infector has already been infected is given by eq. (4). suppose that the infector is placed in quarantine for τ_q days. the expected number of transmissions that would be blocked, β_q, is given by

β_q = ∫_0^∞ b(t) ∫_t^{t+τ_q} λ(u) du dt. (7)

while the equations for β_i and β_q have the same structure, there is a key difference. the elapsed time from infection until an infected person enters isolation directly depends upon the time to recognize symptoms, which is related fundamentally to the incubation time distribution. the elapsed time from infection until an infected person enters quarantine/isolation via contact tracing, however, depends upon sampling from those newly infected and looking backwards to estimate the infector's elapsed duration of infection. using the previously estimated models for b(t) and λ(t), eq. (7) yields β_q 's of . , . and . for τ_q 's of , , and unlimited days.
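a direct numerical evaluation of eqs. (6) and (7) is sketched below. the incubation and b(t) parameters are again assumed placeholders rather than the paper's estimates, and a very long quarantine/isolation duration stands in for the "unlimited" case.

```python
# expected transmissions blocked by isolation (beta_i, eq. 6) and by quarantine
# of traced infectors (beta_q, eq. 7), via nested numerical integration.
# all parameter values are assumed placeholders.
import numpy as np
from scipy import stats
from scipy.integrate import quad

r = 0.10
mean_b, sd_b = 7.5, 3.4
shape, scale = (mean_b / sd_b) ** 2, sd_b ** 2 / mean_b
b = stats.gamma(a=shape, scale=scale).pdf          # density of elapsed infection time
lam = lambda t: np.exp(r * t) * b(t)               # lambda(t) = exp(r t) * b(t)

# assumed lognormal incubation/detection time with mean 5.2 and sd 3.9 days
m, sd = 5.2, 3.9
sigma = np.sqrt(np.log(1 + (sd / m) ** 2))
mu = np.log(m) - sigma ** 2 / 2
f_td = stats.lognorm(s=sigma, scale=np.exp(mu)).pdf

def blocked(density, tau, upper=60):
    """int_0^inf density(t) * int_t^{t+tau} lambda(u) du dt, evaluated numerically."""
    inner = lambda t: quad(lam, t, t + tau)[0]
    return quad(lambda t: density(t) * inner(t), 0, upper)[0]

for tau in (7, 14, 120):   # 120 days stands in for 'unlimited'
    print(f"tau = {tau:>3}: beta_i = {blocked(f_td, tau):.2f}, "
          f"beta_q = {blocked(b, tau):.2f}")
```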
the day quarantine proposed in [ ] would reduce the effective reproductive number to . − . = . , which is just under threshold. again, this is an optimistic view of contact tracing, for identification of the infector is presumed instantaneous at the index's time of infection. taking into account the detection delay in recognizing the index case would similarly delay the identification of the infector via contact tracing, reducing the number of transmissions that could be prevented as a result. there is no either/or choice between quarantine and isolation. using both leads to an infected person being detected at the minimum of the time a person self-detects due to symptoms and the time a person would be identified via contact tracing. the expected number of infections prevented then follows from eq. (6) after substituting the probability density for the minimum of the two detection times. to illustrate, assume independence between self-identification and contact-tracing detection times, that self-identification occurs at twice the incubation time, that contact identification times follow b(t) as previously, and that quarantine/isolation is unlimited in duration. the associated β_iq, denoting the expected infections averted via isolation and quarantine, now equals . , which reduces the reproductive number from . to . , well below the epidemic threshold. the preceding analysis has focused on reducing the reproductive number below 1, yet doing so can still lead to a large total number of infections. for example, reducing the reproductive number to 0.9 would lead to ten times as many infections in total as the extant number at the start of containment, as total infections in such a "minor" outbreak scale as 1/(1 − r_0) [ ]. the modeling above is meant to be illustrative and surely could be improved in many ways. appropriate characterization of underlying statistical uncertainty, better operational modeling of how actual isolation, quarantine and contact tracing operate [ ] (including voluntary self-quarantine by untraced persons who might have been exposed), consideration of the costs of intervention as well as the public health benefits, and characterizing the appropriate level of resources to devote to this outbreak relative to other arguably more pressing public health concerns are all subjects deserving careful study. additional common-sense precautions such as regular handwashing, the use of facemasks, and other measures not considered here should help make such outbreaks even more manageable. one important suggestion is that people should receive flu shots, for in addition to protecting against influenza, vaccination would reduce the number of false positive -ncov cases reported since fewer people would have the common symptoms of both flu and coronavirus, and if a vaccinated person did get sick, it would raise the probability that the case is coronavirus as opposed to flu and make it more likely said person would seek care [ ]. there are other practical aspects to explore, including the development of a less-precise but more rapid diagnostic mechanism, determining how long one can safely delay ill patients with symptoms from coming to the hospital to help alleviate congestion, and figuring out how quickly airborne infection isolation rooms (negative pressure units) can be created by hacking the ventilation system in ordinary wards to increase isolation capacity [ ]. nonetheless, the modeling results obtained are reassuring.
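the combined isolation-plus-quarantine calculation just described can also be approximated by monte carlo: draw a self-detection time (twice an assumed incubation draw), draw a tracing time distributed as b(t), take the minimum, and average the transmissions erased from that point on. all numbers below are illustrative placeholders.

```python
# monte carlo for beta_iq: detection at min(self-detection, contact tracing),
# with unlimited isolation thereafter. all numbers are assumed placeholders.
import numpy as np
from scipy import stats
from scipy.integrate import cumulative_trapezoid

rng = np.random.default_rng(1)
r = 0.10
mean_b, sd_b = 7.5, 3.4
shape, scale = (mean_b / sd_b) ** 2, sd_b ** 2 / mean_b

grid = np.linspace(0.0, 60.0, 6001)
lam = np.exp(r * grid) * stats.gamma(a=shape, scale=scale).pdf(grid)
Lam = cumulative_trapezoid(lam, grid, initial=0.0)   # cumulative transmissions by t
r0 = Lam[-1]

# assumed lognormal incubation (mean 5.2, sd 3.9); self-detection at twice it
m, sd = 5.2, 3.9
sigma = np.sqrt(np.log(1 + (sd / m) ** 2))
mu = np.log(m) - sigma ** 2 / 2
t_self = 2.0 * rng.lognormal(mu, sigma, 200_000)     # self-detection times
t_trace = rng.gamma(shape, scale, 200_000)           # tracing times, distributed as b(t)
t_detect = np.minimum(t_self, t_trace)

# transmissions already made by detection; detection after day 60 averts ~nothing
beta_iq = r0 - np.interp(t_detect, grid, Lam).mean()
print(f"r_0 = {r0:.2f}, infections averted = {beta_iq:.2f}, residual = {r0 - beta_iq:.2f}")
```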
containment via isolation and quarantine has the capacity to control a community -ncov outbreak.

references

early transmission dynamics in wuhan, china, of novel coronavirus-infected pneumonia
transmission of -ncov infection from an asymptomatic contact in germany
study claiming new coronavirus can be transmitted by people without symptoms was flawed
middle east respiratory syndrome coronavirus transmission
a novel coronavirus outbreak of global health concern
estimation in emerging epidemics: biases and remedies
infectious diseases of humans: dynamics and control
nowcasting and forecasting the potential domestic and international spread of the -ncov outbreak originating in wuhan, china: a modelling study
report : transmissibility of -ncov. mrc centre for global infectious disease analysis, imperial college
modeling to inform infectious disease control
analyzing bioterror response logistics: the case of smallpox
act now to prevent an american epidemic: quarantines, flu vaccines and other steps to take before the wuhan virus becomes widespread
implementing a negative-pressure isolation ward for a surge in airborne infectious patients

acknowledgments: i thank ron brookmeyer, forrest crawford, gregg gonsalves, robert heimer, albert ko, barry nalebuff, david paltiel, greg zaric and an anonymous referee for comments; any errors are my own.

key: cord- -d hvukbu authors: faes, christel; abrams, steven; van beckhoven, dominique; meyfroidt, geert; vlieghe, erika; hens, niel title: time between symptom onset, hospitalisation and recovery or death: statistical analysis of belgian covid- patients date: - - journal: int j environ res public health doi: . /ijerph sha: doc_id: cord_uid: d hvukbu

there are different patterns in the covid- outbreak in the general population and amongst nursing home patients. we investigate the time from symptom onset to diagnosis and hospitalization, or the length of stay (los) in the hospital, and whether there are differences in the population. sciensano collected information on , hospitalized patients with covid- admissions from belgian hospitals between march and june . the distributions of different event times for different patient groups are estimated accounting for interval censoring and right truncation of the time intervals. the times between symptom onset and hospitalization or diagnosis are similar, with median length between symptom onset and hospitalization ranging between and . days, depending on the age of the patient (longest delay in age group - years) and whether or not the patient lives in a nursing home (additional days for patients from a nursing home). the median los in hospital varies between and . days, with the los increasing with age. the hospital los for patients that recover is shorter for patients living in a nursing home, but the time to death is longer for these patients. over the course of the first wave, the los has decreased. the world is currently faced with an ongoing coronavirus disease (covid- ) pandemic. the disease is caused by the severe acute respiratory syndrome coronavirus , a new strain of the coronavirus, which was never detected before in humans, and is a highly contagious infectious disease. the first outbreak of covid- occurred in wuhan, province hubei, china in december . since then, several outbreaks have been observed throughout the world. as from march, the first generation of infected individuals as a result of local transmission was confirmed in belgium.
there is currently little detailed knowledge on the time interval between symptom onset and hospital admission, nor on the length of stay (los) in hospital in belgium. however, information about the los in hospital is important to predict the number of required hospital beds, both beds in the general hospital and beds in the intensive care unit (icu), and to track the burden on hospitals [ ]. the time delay from illness onset to death is important for the estimation of the case fatality ratio [ ]. individual-specific characteristics, such as the gender, age and co-morbidity of the individual, could potentially explain differences in the los in hospital. therefore, we investigate the time from symptom onset to hospitalization and the time from symptom onset to diagnosis, as well as the los in hospital. we consider and compare parametric distributions for these event times, enabling us to appropriately account for truncation and interval censoring. in section , we introduce the epidemiological data and the statistical methodology used for the estimation of the parameters associated with the aforementioned delay distributions. the results are presented in section and avenues of further research are discussed in section . the hospitalized patients clinical database is an ongoing multicenter registry in belgium that collects information on hospital admissions related to covid- infection. the data are regularly updated as more information from the hospitals is sent in. the individual patients' data are collected through online questionnaires: one with data on admission and one with data on discharge. data are reported for all hospitalized patients with a confirmed covid- infection. the reporting is strongly recommended by the belgian risk management group; therefore, the reporting coverage is high (> % of all hospitalized covid- cases) [ ]. at the time of writing this manuscript, there is information about , patients, hospitalized between march and june , including age and gender. table a (appendix b) summarizes the age and living status (living in a nursing home or not) of the patients. age is categorized into age groups: the young population ( - years), the working age population ( - years), the senior population ( - years) and the elderly ( + years). it shows that a large proportion of the hospitalized + patients live in a nursing home facility (about % for patients aged - and % for patients aged +). the survey contains information on patients hospitalized during the initial phase of the outbreak (between march and march); patients in the increasing phase of the outbreak (between march and march); in the descending phase (between april and april); and individuals at the end of the first wave of the covid- epidemic (between april and june). the time trend in the number of hospitalizations is presented in figure a (appendix b). the time trend in the survey matches well with the time trend of the outbreak in the whole population, though with some under-reporting in april and may. the time variables (time of symptom onset, hospitalisation, diagnosis, and recovery or death) were checked for consistency. observations identified as inconsistent were excluded from the analyses. details of the inclusion and exclusion criteria are provided in appendix a. some descriptive analyses of the event times are provided in appendix c. different flexible parametric non-negative distributions can be used to describe the delay distributions, such as the exponential, weibull, lognormal and gamma distributions [ ].
however, as the reported event times are expressed in days, the discrete nature of the data should be accounted for. references [ , ] assume a discrete probability distribution parameterized by a continuous distribution. alternatively, reference [ ] estimates the serial interval using interval-censoring techniques from survival analysis. references [ , ] use doubly interval-censored methods for estimation of the incubation distribution. we use interval-censoring methods originating from survival analysis to deal with the discrete nature of the data, to acknowledge that the observed time is not the exact event time [ ]. let x_i be the recorded event time (e.g., the los in hospital). instead of assuming that x_i is observed exactly, it is assumed that the event time lies in the interval (l_i, r_i), with l_i = x_i − 0.5 and r_i = x_i + 0.5 for x_i ≥ 1, and l_i = 0 and r_i = 0.5 for x_i = 0. as a sensitivity analysis, we compare this assumption with the wider interval (l_i, r_i) = (x_i − 1, x_i + 1). an additional complexity is that the delay distributions are truncated, either because there is a maximal clinical delay period or because the hospitalization is close to the end of the study. first, only patients reporting a delay between symptoms and hospitalization (or diagnosis) of at most days were included in the study, because it is unclear for the other patients whether the reason for hospital admission was covid- infection. in the literature, times from onset of symptoms to hospital admission have been reported between and days (e.g., references [ ] [ ] [ ] [ ]), with no mention of observed delay times above days. second, if hospitalization is, e.g., days before the end of the study, the observed los cannot exceed days. however, it has to be noted that only patients that have left the hospital are included in the survey, and as a result it will not include patients that are hospitalized near the end of the survey and have a long length of stay. this is a clear example of right-truncation (as opposed to right-censoring, under which patients are still part of the study/data and only partial information is available on their length of stay). we therefore use a likelihood function accommodating the right-truncated and interval-censored nature of the observed data to estimate the parameters of the distributions [ ]. the likelihood function is given by

L = ∏_{i=1}^{n} ( F(r_i) − F(l_i) ) / F(T_i),

in which T_i is the (individual-specific) truncation time and F(·) is the cumulative distribution function corresponding to the density function f(·). we truncate the time from symptom onset to diagnosis and the time from symptom onset to hospitalisation at the maximal clinical delay mentioned above (T_i constant for these event times). the los in hospital is truncated at T_i = e − t_i, in which t_i is the time of hospitalization and e denotes the end of the study period ( june). in addition, to account for possible under-reporting in the survey, each likelihood contribution is weighted by the post-stratification weight w_i ≡ w_t, defined as

w_t = (N_t / n_t) · (∑_t n_t / ∑_t N_t),

where t is the day of hospitalization for patient i, N_t is the number of hospitalizations in the population on day t, and n_t is the number of reported hospitalizations in the survey on day t. this weighted likelihood is also called a pseudo-likelihood in the context of complex survey data, for which consistency and asymptotic normality have been shown [ ]. we assume weibull and lognormal distributions for the delay distributions. the two parameters of each distribution are regressed on age, gender, nursing home and time period (as well as interactions of these).
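a minimal python sketch of this weighted, interval-censored, right-truncated likelihood for a weibull delay distribution, maximized with the bfgs algorithm used in the paper, is given below; the toy data, the truncation value of 31 days and the uniform weights are hypothetical.

```python
# weighted interval-censored, right-truncated weibull fit; toy data only.
import numpy as np
from scipy import stats
from scipy.optimize import minimize

def neg_log_lik(theta, L, R, T, w):
    """-sum_i w_i * log{ [F(R_i) - F(L_i)] / F(T_i) } for a weibull cdf F."""
    shape, scale = np.exp(theta)                    # optimize on the log scale
    F = lambda t: stats.weibull_min.cdf(t, c=shape, scale=scale)
    p = (F(R) - F(L)) / F(T)
    return -np.sum(w * np.log(np.clip(p, 1e-300, None)))

x = np.array([2, 0, 5, 3, 8, 1, 4, 6, 2, 3], dtype=float)  # hypothetical delays (days)
L = np.where(x >= 1, x - 0.5, 0.0)     # interval (x-0.5, x+0.5); (0, 0.5) when x = 0
R = x + 0.5
T = np.full_like(x, 31.0)              # assumed maximal clinical delay (truncation)
w = np.ones_like(x)                    # post-stratification weights (uniform here)

fit = minimize(neg_log_lik, x0=np.log([1.0, 5.0]), args=(L, R, T, w), method="BFGS")
shape_hat, scale_hat = np.exp(fit.x)
print(f"weibull shape = {shape_hat:.2f}, scale = {scale_hat:.2f}")
```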
by assuming both parameters to be covariate-dependent, we allow both the mean and the range of the time-to-event variable to vary across population groups. the bfgs optimization algorithm is used to maximize the likelihood. convergence is reached for all considered models. the bayesian information criterion (bic) is used to select the best-fitting parametric distribution and the best regression model among the candidate distributions/models. only significant covariates are included in the final model. overall, the delay between symptom onset and hospitalization can be described by a truncated weibull distribution with shape parameter . and scale parameter . . the overall average delay is very similar to the one obtained by [ ], based on a stochastic discrete-time model relying on an erlang delay distribution. however, there are significant differences in the time between symptom onset and hospitalization amongst different gender groups, age groups, living statuses and time periods of hospitalization. as the truncated weibull distribution has a lower bic as compared to the lognormal distribution ( , and , for the weibull and lognormal distributions, respectively), results for the weibull distribution are presented. in table , the regression coefficients of the scale (λ) and shape (γ) parameters of the weibull distribution are presented. the impact on the time between symptom onset and hospitalization is visualized in figure , showing the model-based %, %, %, % and % quantiles of the delay times. table . summary of the regression of the scale (λ) and shape (γ) parameters for the reported delay time between symptom onset and hospitalization and between symptom onset and diagnosis, based on a truncated weibull distribution: parameter estimate, standard error and significance (* corresponds to p-value < . ; ** to p-value < . and *** to < . ). the reference group used is females of age > living in a nursing home that are hospitalized in the period march to march. age has a major impact on the delay between symptom onset and hospitalization, with the youngest age group having the shortest delay (median of day, but with a quarter of the patients having a delay longer than . days). the time from symptom onset to hospitalization is more than doubled in the working age ( - years) and ageing ( - years) populations as compared to this young population (median close to days and a delay of more than . days for a quarter of the patients). in contrast, the increase is % in the elderly ( + years) as compared to the youngest age group (median delay of . days, with a quarter of the patients having a delay longer than . days). after correcting for age, it is observed that the time delay is somewhat higher when patients come from a nursing home facility, with an increase of approximately days. note that in the descriptive statistics, we observed shorter delay times for patients coming from nursing homes. this stems from the fact that + year olds have shorter delay times as compared to patients of age - , but the population size in the + group is much larger as compared to the - group in nursing homes. and although statistically significant differences were found for gender and period, we observe very similar time delays between males and females and in the different time periods (see figure a). the differences occur in the tails of the distribution, with, e.g., the % longest delay times between symptom onset and hospitalization observed for males.
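given a fitted group-specific shape and scale, the model-based quantiles displayed in the figures follow directly from the weibull quantile function, as in this small sketch (the parameter values are placeholders, not the table's estimates). for a truncated fit, the same formula applies after rescaling the probability levels by the cdf value at the truncation point.

```python
# model-based quantiles of a fitted weibull delay distribution;
# shape/scale below are placeholders for one covariate group.
import numpy as np
from scipy import stats

shape, scale = 1.6, 4.0
qs = np.array([0.05, 0.25, 0.50, 0.75, 0.95])
quantiles = stats.weibull_min.ppf(qs, c=shape, scale=scale)
for q, v in zip(qs, quantiles):
    print(f"{int(q * 100):>2}% quantile: {v:.1f} days")
```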
the time between symptom onset and diagnosis is also best described by a truncated weibull distribution (shape parameter . , scale parameter . ). as the truncated weibull distribution again has a lower bic value than the lognormal distribution ( , and , for weibull and lognormal, respectively), results for the weibull distribution are presented. parameter estimates are very similar to those of the distribution for symptom onset and hospitalization (table ). the median delay between symptom onset and diagnosis is approximately one day longer than the median delay between symptom onset and hospitalization. the time from symptom onset to diagnosis in males has a much wider range than in females. this is observed in the tails of the distribution, with the % longest delay times being days longer for males than for females. especially in the increasing phase of the epidemic, the time between symptom onset and diagnosis was longer than the time between symptom onset and hospitalization (see figure a), but this delay has shortened over time. to test the impact of some of the model assumptions, a comparison is made with an analysis without truncating the time between symptom onset and hospitalisation or diagnosis and with wider time intervals (x_i − , x_i + ). results are presented in figures a and a, and are very similar to the ones presented here. it was also investigated whether or not there is a difference between neonates (with virtually no symptoms, but diagnosed at the time of birth or at the time of the mother's testing prior to labour) and other children. for all children < years of age, we found the median time from symptom onset to hospitalization and diagnosis to be and . days, respectively. if we only consider children > years of age, a small increase is found ( . ( . - . ) days for time to hospitalization and . ( . - . ) days for time to diagnosis). a summary of the estimated los in hospital and icu is presented in table and figure , based on the lognormal distribution. the lognormal distribution has a slightly smaller bic value than the weibull distribution, both for the los in hospital ( , for weibull and , for lognormal) and for the los in icu ( for weibull and for lognormal). table . summary of the regression of the log-mean (µ) and log-standard deviation (σ) parameters for the length of stay in hospital and icu, based on the lognormal distribution: parameter estimate, standard error and significance (* corresponds to p-value < . ; ** to p-value < . and *** to < . ). the reference group used is females of age > living in a nursing home who were hospitalized in the period march to march. a '/' indicates that this variable was not included in the final model. the median los in hospital is close to days in the youngest age group, but % of these patients stay longer than . ( . ) days in hospital for females (males), and % stay longer than ( ) days for females (males). the los increases with age, with a median los of around . ( . ) days for females (males) in the working age group. a quarter of the patients in age group - stay longer than days and % stay longer than days. this increases further for patients above years of age, with a median los of around . ( . ) days for female (male) patients in the senior population group and . ( . ) days for female (male) patients in the elderly group. a large proportion of the elderly patients stay much longer in hospital. a quarter of these patients stay longer than . - . days for patients in the ageing group and longer than . 
- days for the elderly. some very long hospital stays are observed in these age groups, with % of the los being longer than ( ) days for females (males) in the ageing group, and ( ) days in the elderly. no significant difference is found for patients coming from nursing homes. over the course of the first wave, the los has slightly decreased, with a decrease in median los of around days from the first period to later periods. note that this result is corrected for the possible bias of prolonged lengths of stay being less probable for more recently admitted patients. the los in icu (based on the lognormal distribution) is on average . days for the young patients, with a quarter of the patients staying longer than . days in icu. similar to the los in hospital, the los in icu also increases with age. the median los in the working age population is . days, in the senior population . days, while in the elderly it is slightly shorter ( . days). again, it is observed that a quarter of the patients in age group - stay longer than days in icu, in age group - longer than . days and in the + group longer than days. patients living in nursing homes stay approximately days longer in icu. no major difference is observed in the los in icu between males and females, though some prolonged stays are observed in males as compared to females. similar to the overall los in hospital, the los in icu has decreased over time (with a decrease of day from the first period to the later periods, and an additional days in the last period). table summarizes the los in hospital for patients that recovered or passed away. the lognormal distribution has the smallest bic value for the time from hospitalization to recovery and the weibull distribution for the time from hospitalization to death. for patients that recovered, the los in hospital increased with age (the median los is days for the young population, which increases to days in the working age population, days in the senior population and days in the elderly). in contrast to previous results, we observe that patients living in nursing homes leave hospital approximately day faster than the general population. however, the % longest stays in hospital before recovery are longer for patients living in nursing homes. while the los in hospital for patients that recover increases with age across all age groups, the survival time of hospitalized patients that died is lower for the seniors (median time of . days) and the elderly (median time of . days) than for the working age group (median time of . days). large differences are also observed between patients coming from nursing homes and those who are not, with the time between hospitalization and death being approximately days longer for patients living in a nursing home. no significant differences are found between males and females. a sensitivity analysis assuming that the time delay is interval-censored by (x_i − , x_i + ) is presented in figure a . results are almost identical to the previously presented results. it was also investigated whether the shorter duration of hospitalization for children < years of age could be due to the neonates, for whom the duration of stay is often determined by the duration of the mother's post-delivery recovery. indeed, the los in hospital for the youngest age group increases slightly if we take out the children of years, to . ( . , . ) days for males and . ( , . ) days for females. the los in hospital for recovered patients increases to . ( . , ) days for males and . ( . , . 
) days for females between and years of age, making it very similar to the - year old patients that recovered. no impact was observed on the los in icu. table . summary of the regression of the log-mean (µ) and log-standard deviation (σ) parameters for the length of stay in hospital for recovered patients and patients that died, based on the lognormal distribution and the weibull distribution, respectively: parameter estimate, standard error and significance (* corresponds to p-value < . ; ** to p-value < . and *** to < . ). the reference group used is females of age > living in a nursing home who were hospitalized in the period march to march. a '/' indicates that this variable was not included in the final model. previous studies in other countries reported a mean time from symptom onset to hospitalization of . days in singapore, . days in hong kong and . days in the uk [ ]. other studies report mean values of time to hospitalization ranging from to . days [ , , ]. in belgium, the overall mean time from symptom onset to hospitalization is . days, which is slightly longer than the reported delay in other countries, but depending on the patient population, estimates range between and . days. the time from symptom onset to hospitalization is largest in the working age population ( - years), followed by the elderly ( - years). if we compare patients within the same age group, it is observed that the time delay is somewhat higher when patients come from a nursing home facility, with an increase of approximately days. the time from symptom onset to diagnosis behaves similarly, with a slightly longer delay than the time from symptom onset to hospitalization. the diagnosis was typically made upon hospital admission to confirm covid- infection during the first wave, explaining why the time from symptom onset to hospitalization is very close to the time to diagnosis. to investigate the length of stay in hospital, we should make a distinction between patients that recover and patients that die. while the median length of stay for patients that recover varies between days (in the young population) and . days (in the elderly), the median length of stay for patients that die varies between . days (in the elderly) and . days (in the working age population). in general, it is observed that the length of stay in hospital for patients that recover increases with age, and males need a slightly longer time to recover than females. however, patients living in nursing homes leave hospital sooner than patients in the same age group from the general population. patients living in nursing homes might be more rapidly discharged from hospital to continue their convalescence in the nursing home, whereas this is probably less the case for isolated elderly patients. in contrast, the time between hospitalization and death is longest for the working age population, with shorter survival times for the seniors and the elderly. the length of stay in hospital for patients that die is longer for patients coming from nursing homes than for patients from the same age group in the general population. a similar trend is observed for the length of stay in icu. over the course of the first wave, the los has slightly decreased. this result is corrected for the possible bias of prolonged lengths of stay being less probable for more recently admitted patients. therefore, it might be related to improved clinical experience and improved treatments over the course of the epidemic. 
note, however, that varying patient profiles in terms of comorbidities or severity of disease over time can also explain this trend; it would therefore be interesting to correct for the patient's profile in a future study. the length of stay in belgian hospitals is within the range of the ones observed in other countries, though especially the length of stay in icu seems shorter in belgian hospitals. reference [ ] reports a median length of stay in hospital of days in china, and of days outside of china. the median length of stay in icu is days in china and days outside of china [ ]. reference [ ] reports an estimated length of stay in england for covid- patients not admitted to icu of . days and an icu length of stay of . days. it should however be noted that the criteria for hospital (and icu) admission and release might differ between countries. different sensitivity analyses indicated that the results are robust to some of the assumptions made in the modeling. however, alternative methods could still be investigated to improve the estimation of the delay distributions. first, alternative distributions could be used, having more than two parameters and thus more flexibility, e.g., generalized gamma distributions (for which the gamma, exponential and weibull distributions are special cases). second, a truncated doubly interval-censored method could be considered to account for the uncertainty in both time points determining the observed delays (and their intervals). third, there is possible reporting bias in the time of symptom onset, which can influence the results. finally, the impact of severity of illness and co-morbidity on the length of stay in hospital is very important. this was not investigated in this study as this information was not made available, but it is an important factor to investigate in future analyses. funding: this work is funded by the epipose project from the european union's sc -phe-coronavirus- programme, project number . the authors declare no conflict of interest. the funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results. a flow diagram of the exclusion criteria is displayed in figure a . the time of symptom onset and the time of hospitalization are available for , patients. the date of symptom onset is determined based on the patient anamnesis history taken by the clinicians. patients that were hospitalized before the start of symptoms (i.e., patients) were not included. these include patients with nosocomial infections who were admitted prior to covid- infection for other long-term pathologies, got infected at the hospital and developed covid- -related symptoms long after admission. patients reporting a delay between symptoms and hospitalization of more than days (i.e., patients) were also not included, because it is unclear for these patients whether the reason for hospital admission was covid- infection. a sensitivity analysis including patients with event times above days is conducted. patients with missing information on age (i.e., patients) or gender (i.e., patients) were not included in the statistical analysis. this resulted in a total of , patients which were used to estimate the distribution of the time between symptom onset and hospitalization. 
the time of symptom onset and the time of diagnosis are available for , patients. some of these were diagnosed prior to having symptoms ( ) or experienced symptoms more than days before diagnosis ( ), and are excluded as these might be errors in reporting dates. similarly, the delay between symptoms and detection time is truncated at days, but a sensitivity analysis including these patients is performed. in total, patients were removed because of missing information on age and/or gender, resulting in , patients used in the analysis of the time from symptom onset to diagnosis. the time between hospitalization and discharge from hospital is available for , patients, either discharged alive or dead. for patients that were hospitalized before the start of symptoms (i.e., patients), we use the time between the start of symptoms and discharge. patients with negative time intervals ( patients) are excluded from further analysis. another patients were discarded because of missing covariate information with regard to their age or gender. of these patients, we know that recovered from covid- , while died. for the hospitalized patients, there is information about the length of stay in icu for patients. note that we analyzed an anonymized subset of data from the hospital covid- clinical surveillance database of the belgian public health institute sciensano. data from sciensano were shared with the first author through a secured data transfer platform. the observed distribution of the delay from symptom onset to hospitalization and the los in hospital are presented in figure a . summary information about these distributions is presented in tables a and a . while the observed delay between symptom onset and hospitalization is between and days, % of the hospitalizations occur within days after symptom onset. this is, however, shorter in the youngest age group (< years) and in the elderly group (> years). patients coming from nursing homes also seem to be hospitalized faster than the general population. over the course of the first wave, the observed time between symptom onset and hospitalization was largest in the increasing phase of the epidemic (between march and march). the time between symptom onset and diagnosis is very similar, ranging between and days, with % of the diagnoses occurring within days after symptom onset. it should be noted that these observations are based on hospitalized patients, and non-hospitalized patients might have a quite different evolution in terms of their symptoms. 
as non-hospitalized patients were rarely tested in the initial phase of the epidemic, no conclusions can be made for this group of patients. the observed median length of stay in hospital is days, with % of the patients having values ranging between and days. % of the patients stay longer than days in the hospital. the median length of stay seems to increase with age (from days in age group < to in age group - , in age group - and days in age group > ). on the other hand, with time since the introduction of the disease in the population, the length of stay seems to decrease, though this might be biased due to incomplete reporting of los in patients who are actually still admitted at the time of writing. therefore, these observed statistics should be interpreted with care. similar results are observed for the length of stay in icu (figures a and a ).
references: hospital length of stay for covid- patients: data-driven methods for forward planning; epidemiological determinants of spread of causal agent of severe acute respiratory syndrome in hong kong; rapid establishment of a national surveillance of covid- hospitalizations in belgium; handbook of infectious diseases data analysis; robust reconstruction and analysis of outbreak data: influenza a (h n )v transmission in a school-based population; estimation of the serial interval of influenza; estimating incubation period distributions with coarse data; incubation period and other epidemiological characteristics of novel coronavirus infections with right truncation: a statistical analysis of publicly available case data; statistical analysis of interval-censored failure time data; clinical characteristics of hospitalized patients with novel coronavirus-infected pneumonia; clinical features of patients infected with novel coronavirus; clinical course and risk factors for mortality of adult inpatients with covid- in wuhan, china: a retrospective cohort study; interim clinical guidance for management of patients with confirmed coronavirus disease (covid- ); modeling the early phase of the belgian covid- epidemic using a stochastic compartmental model and studying its implied future trajectories; short doubling time and long delay to effect of interventions (arxiv); the effect of human mobility and control measures on the covid- epidemic in china; impact of non-pharmaceutical interventions (npis) to reduce covid- mortality and healthcare demand; covid- length of hospital stay: a systematic review and data synthesis; clinical course and outcomes of critically ill patients with sars-cov- pneumonia in wuhan, china: a single-centered, retrospective, observational study.
key: cord- - y e authors: magnusson, amanda; ahle, margareta; swolin‐eide, diana; elfvin, anders; andersson, roland e. title: population‐based study showed that necrotising enterocolitis occurred in space–time clusters with a decreasing secular trend in sweden date: - - journal: acta paediatr doi: . /apa. sha: doc_id: cord_uid: y e aim: this study investigated space–time clustering of neonatal necrotising enterocolitis over three decades. methods: space–time cluster analysis examines objects that are grouped by a specific place and time. the knox test and kulldorff's scan statistic were used to analyse space–time clusters in children diagnosed with necrotising enterocolitis in a national cohort of children born between and in sweden. 
the municipality the mother lived in and the delivery hospital defined closeness in space, and the time between when the cases were born – seven, and days – defined closeness in time. results: the knox test showed no indication of space–time clustering at the residential level, but clear indications at the hospital level in all the time windows: seven days (p = . ), days (p = . ) and days (p = . ). significant clustering at the hospital level was found during – , but not during – . kulldorff's scan statistic found seven significant clusters at the hospital level. conclusion: space–time clustering was found at the hospital but not the residential level, suggesting a contagious environmental effect after delivery, but not in the prenatal period. the decrease in clustering over time may reflect improved routines to minimise the risk of contagion between patients receiving neonatal care. necrotising enterocolitis (nec) is the most common gastrointestinal emergency among neonates, and it mainly affects preterm infants, with mortality rates ranging from % to %. the highest mortality rate is found among infants requiring surgery ( ) ( ) ( ) ( ) ( ). the overall incidence of nec varies between studies, from . to . per live births ( , , ). however, in extremely preterm and very low birth weight infants, the incidence is approximately % ( , ). the pathogenesis of nec is multifactorial, and some factors remain unknown ( , ). most cases of nec occur sporadically. nevertheless, reports of clusters or outbreaks suggest that an infectious element could be a causal factor in nec ( , ( ) ( ) ( ) ( ) ( ) ). this hypothesis is supported by the fact that improvements in infection-control procedures have stopped outbreaks of nec ( , ). seasonal variations in the incidence of nec have been described, which also indicates that an infectious agent may contribute to the clustering of the disease ( , , ). several microbial organisms have been proposed as possible causes of nec, for example klebsiella pneumoniae, staphylococcus aureus, escherichia coli, clostridium difficile, norovirus and rotavirus, but no specific causative organism was identified in some outbreaks ( , , , , ). it has also been suggested that overcrowding in neonatal intensive care units (nicus) has contributed to clusters of nec ( ). nevertheless, the majority of reports describing outbreaks of nec are retrospective and based on observed suspected outbreaks that could just be random. abbreviations: nec, necrotising enterocolitis; nicu, neonatal intensive care unit. this study investigated space-time clustering of necrotising enterocolitis from to using national swedish data on nearly . million births. clustering was found at the hospital level during - , but not during - , and not at the residential level. the decrease in clustering over time could be related to enhanced routines to minimise the spread of any potential necrotising enterocolitis inducing contagion between patients in the neonatal intensive care unit. furthermore, most of the described outbreaks have been on a hospital level, while clustering based on the mother's residential municipality has not been addressed ( , ( ) ( ) ( ) ). in reports on nec outbreaks, the cluster concept tends to be used subjectively without a standard definition ( ). our group previously presented a national, population-based study on nec epidemiology and trends in sweden, which described an increase in the incidence of nec between and ( ). 
the same cohort was used in the present study to investigate space-time clusters of nec on two levels of closeness in space: the mother's residential municipality and the delivery hospital. furthermore, the present study was designed to examine whether there had been any change in the occurrence of space-time clusters over time, by studying two subperiods: - and - . a cohort of newborn infants with a diagnosis of nec was identified from the following registers held by the swedish national board of health and welfare: the national patient register, the swedish medical birth register and the national cause of death register. all children born between and in sweden with a discharge diagnosis of nec according to the th or th revision of the international classification of diseases (icd- code f or icd- code p ) were identified. the nec diagnosis was introduced to icd- in and is based on the modified bell nec staging criteria ( , ). as it was not possible to identify the exact date of the nec diagnosis, the date of birth of the study subjects was used for the time comparisons in the cluster analysis. further details about the identification process were previously described ( ). an anonymised extract covering the background population of all children born in sweden during the same time period as the nec cases was also obtained from the birth register. this extract contained perinatal information and demographic data, including the municipality the mother lived in and the delivery hospital. sweden has a highly centralised care policy for very preterm and extremely preterm infants, based on the intention to transfer mothers with a high risk of preterm delivery to a regional level three hospital before they give birth. as a result, most of the infants diagnosed with nec are admitted to the nicu at the hospital in which they were born. two methods were used to analyse space-time interactions between nec cases: the knox space-time cluster analysis and kulldorff's space-time permutation scan statistic ( , ). the knox test is based on an analysis of the proximity in space and time of all possible n(n − )/ distinct pairs of cases ( ). each individual pair is classified into one of four cells in a table, with distance (close/not close) and time (close/not close) on the two axes, according to whether the two cases are close or not close to each other in terms of geographical distance and time. a pair of cases is regarded as being in close proximity if their dates of birth are close and if their geographical locations at the time of birth are close. closeness in the date of birth was divided into time windows of seven, and days apart. two geographical levels were used to define closeness in space: the mother's residential municipality and the delivery hospital. the number of pairs of cases observed in close proximity was compared with the expected number of pairs, which was obtained from the cross-products of the column and row totals. if the observed number of pairs of cases exceeded the expected number of pairs, there was evidence of space-time clustering. the magnitude of the excess, or deficit, was estimated by calculating the strength of clustering using the equation s = (o − e)/e, where s is the strength, o is the number of pairs of cases observed and e is the expected number of pairs. to study any changes over time in nec clustering, the population was divided into two cohorts according to the subjects' year of birth: - and - . 
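a compact python sketch of the knox test as described above could look as follows: classify all n(n − 1)/2 pairs by closeness in space (same hospital) and in time (birth dates within the window), compare the observed count of doubly close pairs with the expectation from the table margins, and compute the strength s = (o − e)/e, with a monte carlo p value obtained by permuting the birth dates. the toy data and the permutation scheme are illustrative, not the study's implementation.

```python
# a compact sketch of the knox test: count pairs close in both space and
# time, compare with the expectation from the 2x2 table margins, and assess
# significance by monte carlo permutation of the birth dates.
import numpy as np
from itertools import combinations

def knox(hospital, birth_day, time_window, n_perm=999, seed=1):
    rng = np.random.default_rng(seed)
    pairs = list(combinations(range(len(hospital)), 2))
    # spatial closeness is fixed under permutation of the event times
    close_space = np.array([hospital[i] == hospital[j] for i, j in pairs])

    def observed(days):
        close_time = np.array([abs(days[i] - days[j]) <= time_window
                               for i, j in pairs])
        o = np.sum(close_space & close_time)
        e = close_space.sum() * close_time.sum() / len(pairs)
        return o, e

    o, e = observed(birth_day)
    null = [observed(rng.permutation(birth_day))[0] for _ in range(n_perm)]
    p = (1 + np.sum(np.array(null) >= o)) / (n_perm + 1)
    return o, e, (o - e) / e, p

hosp = np.array([1, 1, 2, 2, 1, 3, 3, 1])        # delivery hospital per case
days = np.array([3, 5, 40, 200, 6, 90, 300, 8])  # birth day per case
print(knox(hosp, days, time_window=7))
```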
the knox test was used to compare the two time periods, and the binomial test was used to compare the change in the incidence of nec between the two time periods. in addition, kulldorff's scan statistic, based on a space-time permutation model, was used to identify the presence of space-time and purely temporal clusters of cases ( ). kulldorff's scan statistic is based on the number of observed cases among all births that have taken place within a circle of varying radius in space in one dimension and in a time window of varying duration in the other dimension. the statistic is centred at all geographical locations to look for possible clusters. thus, the circular window is flexible in location, size and time. for the analyses of clustering on the residential level, we used the geographical coordinates of the centre of the mothers' residential municipalities. for the analyses of clustering at each delivery hospital, we used a purely temporal scan statistic, with a time window of varying duration. the number of observed cases in a cluster was compared to what would have been expected if the spatial and temporal locations of all cases were independent of each other, so that there was no space-time interaction. as described by kulldorff et al., the scan statistic makes minimal assumptions about the time, geographical location or size of the cluster and can be adjusted for both purely spatial and purely temporal variations ( ). the poisson distribution was used for testing the statistical significance of the difference between the observed and expected number of pairs in the knox test. kulldorff's scan statistic was assessed by monte carlo hypothesis testing in simulations, which meant that the smallest p value we could get was . ( ). statistical significance was set at p < . . the study used stata statistical software, version (statacorp lp, college station, tx, usa) and satscan, version . . (kulldorff m. and information management services inc., ma, usa) for the statistical analyses ( ). the study was approved by the regional ethical review board of linköping (dnr / - ). the study was based on a total of births from to , and the patient characteristics are described in table . information about the delivery hospital and the mothers' residential municipality was missing for , and , children, respectively. we identified cases of nec, including pairs of twins. each twin pair with nec was counted as one instance of nec for the cluster analyses. information about the mother's residential municipality and the delivery hospital was missing for and seven of the cases, respectively. after we excluded the second twins and the births with missing information on municipality or delivery hospital, there were cases for the analyses based on municipality and cases for the analyses based on delivery hospital. due to the centralised care of preterm infants in sweden, of the nec cases ( %) occurred at a hospital that did not match the residential municipality of the mother. to be specific, % of all the nec cases among extremely preterm births, with a gestational age under weeks, and % of all the nec cases among term births, with a gestational age over weeks, occurred at a hospital that was not the closest to the mother's municipality. the cohort in the first time period, - , consisted of infants and cases of nec, resulting in an nec incidence of . per live births. during the second time period, - , the cohort consisted of infants and cases of nec, giving an nec incidence of . per live births. 
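the purely temporal scan used at the hospital level can be approximated by the following simplified python sketch, which slides time windows of varying length over the study period, compares observed with expected case counts, and evaluates the maximum observed/expected ratio by monte carlo simulation; with n_perm replicates the smallest attainable p value is 1/(n_perm + 1). this is a schematic analogue of kulldorff's statistic, not the satscan implementation, and the toy data are illustrative.

```python
# a simplified purely temporal scan in the spirit of kulldorff's statistic:
# scan windows of varying length, compare observed cases in each window with
# the expectation from all births, and test the maximum ratio by simulation.
import numpy as np

def temporal_scan(case_days, all_birth_days, max_len=28, n_perm=99, seed=1):
    rng = np.random.default_rng(seed)
    horizon = int(all_birth_days.max()) + 1
    windows = [(s, s + length) for length in range(7, max_len + 1, 7)
               for s in range(0, horizon - length)]

    def max_stat(days):
        best_ratio, best_window = 0.0, None
        for s, e in windows:
            obs = np.sum((days >= s) & (days < e))
            exp = len(days) * np.mean((all_birth_days >= s) &
                                      (all_birth_days < e))
            if exp > 0 and obs / exp > best_ratio:
                best_ratio, best_window = obs / exp, (s, e)
        return best_ratio, best_window

    stat, window = max_stat(case_days)
    # null distribution: case days drawn at random from all birth days
    null = [max_stat(rng.choice(all_birth_days, size=len(case_days),
                                replace=False))[0] for _ in range(n_perm)]
    p = (1 + np.sum(np.array(null) >= stat)) / (n_perm + 1)
    return window, stat, p

births = np.arange(0, 365)                         # toy data: one birth per day
cases = np.array([10, 12, 13, 15, 200, 210, 340])  # toy case birth days
print(temporal_scan(cases, births))
```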
there was a significant increase in the incidence of nec in the second time period compared to the first time period (p < . ) (table ). the knox test did not indicate any space-time clustering at a residential level in any of the studied time windows of seven, or days. there was significant space-time clustering at a hospital level, with the strongest clustering in the time window of seven days (s = . , p = . ) (table ). the knox test is sensitive to time-related shifts in the background population, which can give biased results. we therefore performed separate analyses for each of the two time periods. the first time period showed significant space-time clustering of nec in the time windows of seven and days, with the strongest clustering at seven days (s = . , p = . ) (table ). during the second time period, there was no significant clustering at a hospital level in any of the studied time windows. at a residential level, kulldorff's scan statistic identified only one single space-time cluster of four cases during days in january . the four cases came from four different municipalities within a radius of kilometres. at a hospital level, the purely temporal cluster analysis identified seven instances of temporal clusters at seven different hospitals (table ). in four of the seven clusters identified by kulldorff's scan statistic, the cluster consisted of only two patients. however, several of these clusters occurred in hospitals with a low number of deliveries and few expected cases of nec in the given time interval. of the seven statistically significant clusters, five occurred during november to april and only two occurred during may to october. the present study showed that nec occurred in clusters at a hospital level, as found with both the knox test and kulldorff's scan statistic. when we compared two different time periods ( - and - ) using the knox test, significant clustering was only found in the early time period, and the strongest significance was found using seven days as the time window. our results showed no signs of space-time clustering related to the mother's residential municipality with the knox test and only one single cluster with kulldorff's scan statistic. several possible explanations for clustering on a hospital level have previously been described. one explanation is that nec is associated with a nosocomial infection spread from one child to another in a nicu. hill et al. described an outbreak of nec associated with klebsiella pneumoniae in all cases at one nicu ( ). han et al. and alfa et al. described outbreaks of nec associated with clostridium species ( , ). a second possible mechanism for clustering on a hospital level could be transmission from healthcare workers to the infants, as suggested by harbarth et al., who described an outbreak of enterobacter cloacae during a period of overcrowding and understaffing in the nicu ( ). as the present study was a retrospective register study, no investigations could be carried out into whether the bacteria in the infants and among the staff contributed to the clusters. contamination of human milk fortifier or formula is a third possibility for clustering on a hospital level. van acker et al. described an outbreak of nec where the same bacteria were isolated from both the neonates with nec and the powdered milk formula ( ). 
in sweden, most infants receive human breast milk in nicus, either from their mother or from a milk bank, but this milk is frequently enriched with human milk fortifier. a fourth possible explanation for clustering on a hospital level may be an accumulation of preterm births at referral hospitals due to referrals of at-risk pregnancies. this could theoretically lead to an overestimation of the number of clusters. the results from the knox test showed significant clustering during - , but not during - , which does not support an overestimation of clusters due to centralised care, as the centralisation of neonatal intensive care in sweden has increased over the last few decades. the finding of a decrease in clustering over time could be related to improvements in the neonatal intensive care of preterm infants. in this study, it was not possible to analyse whether the decrease in clusters was related to improved control of infection in the nicu, less overcrowding, better routines in the nicu or other reasons for reduced transmission of nec between patients. even though the knox test showed no significant clustering of nec during the last decade, kulldorff's scan statistic indicated that clusters of nec do still occur. clustering on a residential level would, as described above, indicate that nec is associated with causative agents, such as infections in the community. stuart et al. described a strong association with norovirus in an outbreak of nec ( ). chany et al. showed a significant association between coronavirus infections and nec ( ). these findings could indicate that the virus had its origin in the community and was then transmitted to the infants. the findings in the present study, in which the knox test found no clustering on a residential level and kulldorff's scan statistic found only one cluster, are strong indications against the theory that there is a connection between nec and infections spread in the community. however, when studying the clusters on a hospital level with kulldorff's scan statistic, it was noticed that the majority of the clusters occurred during november to april, which is also the season when most infections in the community occur. our group has previously described this seasonal variation, with a peak in the incidence of all cases of nec in november and a decrease in may ( ). the present study showed indications of space-time clustering of nec on a hospital level in sweden, but not at the level of the mother's residential municipality, suggesting a contagious environmental effect after delivery. the decrease in clustering on a hospital level over the last few decades may indicate that improved routines in modern neonatal care are effective in minimising the transfer of agents involved in the development of nec between patients in the nicu. however, continued awareness of signs of clusters is still warranted to further minimise the risk of environmental factors for nec being transferred from one patient to another. 
references: necrotising enterocolitis hospitalisations among neonates in the united states; low birthweight, gestational age, need for surgical intervention and gram-negative bacteraemia predict intestinal failure following necrotising enterocolitis; a cluster of necrotizing enterocolitis in term infants undergoing open heart surgery; necrotising enterocolitis; necrotizing enterocolitis; epidemiology and trends of necrotizing enterocolitis in sweden; epidemiology of neonatal necrotising enterocolitis: a population-based study; variations in incidence of necrotizing enterocolitis in canadian neonatal intensive care units; cluster of necrotizing enterocolitis in a neonatal intensive care unit: new mexico; outbreak of necrotizing enterocolitis associated with enterobacter sakazakii in powdered milk formula; an outbreak of necrotizing enterocolitis associated with norovirus genotype gii; epidemic occurrence of neonatal necrotizing enterocolitis; cluster of late preterm and term neonates with necrotizing enterocolitis symptomatology: descriptive and case-control study; a decrease in the number of cases of necrotizing enterocolitis associated with the enhancement of infection prevention and control measures during a staphylococcus aureus outbreak in a neonatal intensive care unit; seasonal variation in the incidence of necrotizing enterocolitis; nosocomial necrotising enterocolitis outbreaks: epidemiology and control measures; necrotising enterocolitis: is there a relationship to specific pathogens?; epidemiology of necrotizing enterocolitis: temporal clustering in two neonatology practices; neonatal necrotizing enterocolitis: therapeutic decisions based upon clinical staging; necrotizing enterocolitis in neonates fed human milk; a space-time permutation scan statistic for disease outbreak detection; the knox method and other tests for space-time interaction; the detection of space-time interactions; modified randomization tests for nonparametric hypotheses; satscan is a trademark of martin kulldorff; the satscan software was developed under the joint auspices of martin kulldorff, the national cancer institute, and farzad; nosocomial colonization with klebsiella, type , in a neonatal intensive-care unit associated with an outbreak of sepsis, meningitis, and necrotizing enterocolitis; an outbreak of necrotizing enterocolitis associated with a novel clostridium species in a neonatal intensive care unit; an outbreak of clostridium difficile necrotizing enterocolitis: a case for oral vancomycin therapy?; outbreak of enterobacter cloacae related to understaffing, overcrowding, and poor hygiene practices; association of coronavirus infection with neonatal necrotizing enterocolitis.
we would like to thank nils-gunnar pehrsson and henrik eriksson at statistiska konsultbyrån for their statistical expertise. this study was financed by grants from the alf agreement between the swedish government and county councils to sahlgrenska university hospital. the authors have no conflict of interests to declare.
key: cord- -bk l authors: lin, chung‐ying; imani, vida; majd, nilofar rajabi; ghasemi, zahra; griffiths, mark d.; hamilton, kyra; hagger, martin s.; pakpour, amir h. title: using an integrated social cognition model to predict covid‐ preventive behaviours date: - - journal: br j health psychol doi: . /bjhp. sha: doc_id: cord_uid: bk l objectives: rates of novel coronavirus disease (covid‐ ) infections have rapidly increased worldwide and reached pandemic proportions. 
a suite of preventive behaviours has been recommended to minimize the risk of covid‐ infection in the general population. the present study utilized an integrated social cognition model to explain covid‐ preventive behaviours in a sample from the iranian general population. design: the study adopted a three‐wave prospective correlational design. methods: members of the general public (n = , , m(age) = . , sd = . , male = , female = ) agreed to participate in the study. participants completed self‐report measures of demographic characteristics, intention, attitude, subjective norm, perceived behavioural control, and action self‐efficacy at an initial data collection occasion. one week later, participants completed self‐report measures of maintenance self‐efficacy, action planning and coping planning, and, a further week later, measures of covid‐ preventive behaviours. hypothesized relationships among social cognition constructs and covid‐ preventive behaviours according to the proposed integrated model were estimated using structural equation modelling. results: the proposed model fitted the data well according to multiple goodness‐of‐fit criteria. all proposed relationships among model constructs were statistically significant. the social cognition constructs with the largest effects on covid‐ preventive behaviours were coping planning (β = . , p < . ) and action planning (β = . , p < . ). conclusions: current findings may inform the development of behavioural interventions in health care contexts by identifying intervention targets. in particular, findings suggest targeting change in coping planning and action planning may be most effective in promoting participation in covid‐ preventive behaviours. statement of contribution: what is already known on this subject? curbing covid‐ infections globally is vital to reduce severe cases and deaths in at‐risk groups. preventive behaviours like handwashing and social distancing can stem contagion of the coronavirus. identifying modifiable correlates of covid‐ preventive behaviours is needed to inform intervention. what does this study add? an integrated model identified predictors of covid‐ preventive behaviours in iranian residents. prominent predictors were intentions, planning, self‐efficacy, and perceived behavioural control. findings provide insight into potentially modifiable constructs that interventions can target. research should examine if targeting these factors leads to changes in covid‐ behaviours over time. 
identifying modifiable correlates of covid- preventive behaviours is needed to inform intervention. an integrated model identified predictors of covid- preventive behaviours in iranian residents. prominent predictors were intentions, planning, self-efficacy, and perceived behavioural control. findings provide insight into potentially modifiable constructs that interventions can target. research should examine if targeting these factors lead to changes in covid- behaviours over time. novel coronavirus disease infections, declared by the world health organization (who) as a pandemic (world health organization, a), have had unprecedented global effects on people's daily activities and way of life heymann & shindo, ; kobayashi et al., ; lin, ; pakpour, griffiths, chang, et al., ; pakpour, griffiths, & lin, a tang et al., ) . despite government actions such as enforced self-isolation, travel bans, and national lockdowns of non-essential services, schools, and universities, infection and mortality rates continue to rise (baud et al., ; heymann & shindo, ; wu & mcgoogan, ) . iran, as of june , is the tenth leading country in total reported cases of covid- and is continuing to experience a sharp rise in reported new cases of infections and deaths related to the infection: , total cases (+ , new cases) and , total deaths (+ new deaths; worldometer, ) . to date, there is no vaccine to protect against covid- infection and therefore, nonpharmacological interventions are the only currently available means to reduce the spread of infection and 'flatten the curve' of infection rates (kim, kim, peck, & jung, ) . in response, the who has proposed a global action plan aimed at reducing the spread of covid- infections (world health organization, b) . the plan highlights the importance of adopting a range of health protection behaviours including, for example, washing hands frequently, maintaining social distancing, practising respiratory hygiene, and self-isolating if feeling unwell (world health organization, b) . however, the who guidance is limited by the fact that it does not focus on understanding the mechanisms of action that underpin these preventive behaviours, or on strengthening individuals' capacity, to adopt them. application of theories of social cognition has demonstrated promise in providing an understanding of the determinants of preventive behaviours (hagger, cameron, hamilton, hankonen, & lintunen, ) . such theories help identify potentially modifiable factors that have been shown to be reliably related to behaviour. once identified, these modifiable factors can inform the content and design of behavioural interventions aimed at promoting increased adherence to preventive behaviours in health contexts (hagger, cameron, et al., ; kok et al., ) . in the current study, we aimed to identify the key social psychological factors that underpin uptake and maintenance of the covid- preventive behaviours advocated by the who (world health organization, b). we therefore focused on identifying the motivational and volitional determinants of covid- preventive behaviours among iranians based on an integrated model of behaviour that combined social psychological constructs from the theory of planned behavior (tpb; ajzen, ; ajzen & schmidt, ) and the health action process approach (hapa; schwarzer, ; schwarzer & hamilton, ) . the tpb is a prominent social cognition theory that has been frequently applied to predict multiple health behaviours (mcdermott et al., ; rich, brandes, mullan, & hagger, ) . 
intention is a focal construct of the theory and is considered the most proximal predictor of behaviour. intention is a function of three belief-based constructs: attitudes (evaluation of the positive and negative consequences of the behaviour), subjective norms (perceived expectations of important others approving the intended behaviour), and perceived behavioural control (perceived capacity to carry out the behaviour). in addition, perceived behavioural control is proposed to directly predict behaviour when it closely approximates actual control. although the extant literature applying the tpb has shown that intentions consistently predict health behaviour and mediate the effects of the social cognition constructs on behaviour (hagger, chan, protogerou, & chatzisarantis, ; hamilton, van dongen, & hagger, ; mceachan, conner, taylor, & lawton, ; rich et al., ), the intention-behaviour relationship is imperfect (orbell & sheeran, ; rhodes & de bruijn, ). therefore, dual-phase models of behaviour, such as the hapa (schwarzer, ; schwarzer & hamilton, ), propose a post-intentional volitional phase in which individuals may employ a range of self-regulatory strategies to enact their intentions. one self-regulatory strategy that may lead individuals to effectively enact their intentions is planning. according to the hapa, there are two types of planning: action planning and coping planning (schwarzer, ; schwarzer & hamilton, ; sniehotta, schwarzer, scholz, & schüz, ). action planning is a task-facilitating strategy and relates to how individuals prepare themselves for performing a behaviour. this includes making plans of when, where, and how to perform the specific behaviour. such plans connect the individual with good opportunities to act. coping planning is a strategy that relates to how individuals prepare themselves for avoiding foreseen barriers and obstacles that may arise when performing a specific behaviour, and for potentially competing behaviours that may derail the behaviour. such plans protect good intentions from anticipated obstacles and competing behaviours. another important behavioural determinant proposed by the hapa is self-efficacy. in the hapa, self-efficacy is proposed to be important at all stages (i.e., motivational and volitional) of the health behaviour change process and is considered phase-specific (schwarzer & hamilton, ; zhang, fang, zhang, hagger, & hamilton, ; zhang, zhang, schwarzer, & hagger, ). accordingly, several types of self-efficacy can be distinguished: action self-efficacy (an optimistic belief about personal agency during the pre-actional, motivational phase) and maintenance self-efficacy (an optimistic belief about personal agency during the post-actional, volitional phase). action self-efficacy reflects individuals' perceived capacity and confidence to engage in a behaviour which they have not yet adopted or initiated (schwarzer & hamilton, ; zhang et al., , ). maintenance self-efficacy refers to individuals' perceived confidence and ability in maintaining the behaviours they have already adopted and performed (schwarzer & hamilton, ; zhang et al., , ). meta-analytic research has provided support for the hapa constructs of planning and self-efficacy in predicting health behaviours. 
previous research has also shown intention, planning, and self-efficacy to predict health preventive behaviours more specifically (caudwell, keech, hamilton, mullan, & hagger, ; cheng et al., ; fung et al., ; hamilton, kirkpatrick, rebar, & hagger, ; hou, lin, wang, tseng, & shu, ; lin, scheerma, yaseri, pakpour, & webb, ; lin et al., , ; lin, updegraff, & pakpour, ; reyes fernández, knoll, hamilton, & schwarzer, ; strong et al., ; zhang et al., ). given the high rates of covid- infections worldwide, it is imperative that people engage in covid- preventive behaviours to 'flatten the curve' on rates of increase in new cases and, ultimately, reduce mortality rates from covid- infection. identifying the key theory-based determinants of key preventive behaviours (regular handwashing; respiratory hygiene practices; maintaining social distancing; self-isolating) will help to inform effective interventions to promote participation in these behaviours. the purpose of the current study was to examine the efficacy of an integrated theoretical model of behaviour that incorporated constructs representing motivational and volitional processes from the tpb and hapa in predicting engagement in covid- preventive behaviours of iranian individuals. the tpb and hapa constructs of attitudes, subjective norms, perceived behavioural control, action self-efficacy, and intention represented effects in the motivational phase of behavioural decision-making. the hapa constructs of maintenance self-efficacy, action planning, and coping planning represented effects in the volitional phase of decision-making. the study adopted a three-wave correlational design with measures of constructs from the motivational phase taken at an initial data collection occasion (time ), constructs from the volitional phase taken at a first follow-up occasion (time ), and measures of covid- preventive behaviours taken at a second follow-up occasion (time ). study hypotheses are outlined in the next section and illustrated in figure . the target behaviour selected in the current study was covid- preventive behaviours, which comprised four specific actions: regular handwashing, respiratory hygiene practices, maintaining social distancing, and self-isolating. these behaviours all have the goal of preventing infection and spread of the virus in common and, therefore, have utility in attaining that goal. the proposed behavioural outcome, therefore, represents a behavioural category servicing a common goal. this is consistent with previous research examining the determinants of target behaviours that comprise multiple actions that service a particular goal. for example, researchers frequently aim to predict physical activity, which encompasses multiple actions (e.g., walking, cycling, swimming, running, going to the gym, playing various sports; cheng et al., ; fung et al., ; rhodes & de bruijn, ). we adopted a behavioural outcome comprising multiple actions in the current study because these behaviours have a common goal and may, therefore, have common determinants. evidence for this comes from research examining the clustering of similar health behaviours, which demonstrates considerable consistency in the behaviours themselves and their determinants (e.g., kremers, de bruijn, schalmaa, & brug, ). similarly, recent research has demonstrated that specific covid- preventive behaviours, such as social distancing, cluster with other health-related behaviours such as physical activity (bourassa, sbarra, caspi, & moffitt, ). 
it is also important to note that although the determinants of the individual behaviours may differ at the level of the specific sets of beliefs that underpin the model constructs, when measuring the determinants at the global level, we expected the determinants to be consistent. finally, we also expected consistency among the selected preventive behaviours and aimed to ensure this was the case by examining whether measures of the behaviours indicated a latent behavioural variable in our analyses. in terms of specific model predictions, in the motivational phase of the proposed model, we expected that time attitudes, subjective norms, perceived behavioural control, and action self-efficacy would be associated with time intentions. in addition, time intentions and perceived behavioural control were expected to predict time behaviour. it was also expected that time action self-efficacy would predict time maintenance self-efficacy. with respect to model relationships in the volitional phase, it was expected that time intentions would predict time action planning and coping planning, and time behaviour. moreover, time maintenance self-efficacy was expected to be associated with time action planning and coping planning. finally, time maintenance self-efficacy, action planning, and coping planning were expected to predict time behaviour. a set of indirect effects consistent with theory was also specified. it was expected that time attitudes, subjective norms, and perceived behavioural control would predict time action planning and coping planning mediated by time intentions. in addition, attitudes, subjective norms, and perceived behavioural control were expected to predict time behaviour mediated by time intentions and time action planning and coping planning. we also expected time action self-efficacy to predict time action planning and coping planning mediated by time intentions and time maintenance self-efficacy. additionally, we expected that time action self-efficacy would predict time behaviour mediated by time intentions and time maintenance self-efficacy, action planning, and coping planning. finally, it was expected that time intentions and time maintenance self-efficacy would predict time behaviour mediated by time action planning and coping planning. participants and procedure. participants were iranian adults aged years and older recruited via online social media platforms. we posted the web link to the survey on three popular social media sites in iran: instagram, telegram, and whatsapp. we also posted the link to several email listservs with many subscribers nationally. to be eligible for inclusion, participants had to be aged years or older, had to provide consent to participate in the study, and had to have access to the internet. the link directed respondents to an initial page describing study aims and requirements, followed by the consent form and, finally, the survey measures. participants were prompted to provide their telephone number, email address, or social media contact details in order to receive a link to the follow-up survey by sms, email, or social media. data were collected between february and march . this period is critical to the immediacy of the current data, as the first confirmed cases of covid- infections in iran were reported on february in qom. by february, cases had been confirmed with a total death toll of four. total confirmed cases had increased to , with deaths, by march . 
media coverage of the pandemic was widely broadcast by state and private media during the period, with state broadcasters providing information on guidelines to prevent the spread of infection and social distancing rules. covid- hotlines were set up at the time to provide help and guidelines on covid- issues. the study adopted a three-wave correlational design with -week intervals between each wave. participants (n = , ; male = , female = ) completed a survey at an initial data collection occasion (time ) comprising self-report measures of action self-efficacy, attitudes, subjective norms, perceived behavioural control, and intention. the survey also included self-report measures of demographic factors including age, sex, education level, and employment status. at a second data collection occasion (time ), participants (n = , , male = , female = , attrition rate = . % from time ) completed self-report measures of maintenance self-efficacy, action planning, and coping planning. at a third data collection occasion (time ), participants (n = , , male = , female = , attrition rate = . % from time ) self-reported their participation in covid- preventive behaviours performed over the past week. we conducted a statistical power analysis based on the maccallum, browne, and sugawara ( ) model fit criterion to establish the required sample size to detect effects. the analysis suggested that a sample size of , was required for a well-fitting model with an rmsea of . against a null model with an rmsea set at . , degrees of freedom, alpha set at . , and power set at . (a computational sketch of this power analysis appears at the end of this passage). data across each of the time points were matched using a code assigned to each participant. the study was conducted in accordance with the declaration of helsinki and was approved by the research ethics committee of qazvin university of medical sciences (ir.qums.rec. . ). all participants provided informed consent to participate prior to the first data collection occasion. participants completing measures at all three data collection occasions received points valued at irr , that were exchangeable for rewards. the points could be used to purchase health-related mobile phone apps such as cognitive behavioural therapy, mindfulness, yoga, and weight management apps. only those participants who completed all three surveys were rewarded. psychological constructs were assessed on multi-item psychometric instruments developed using standardized guidelines and adapted to make reference to the target behaviour in the current study, participation in covid- preventive behaviours. we collected data on different constructs across the three time points to allay common method variance and to provide prospective prediction of key outcomes in the integrated model over time. brief details of the measures are provided below, and the full set of measures is available in table . questions were presented in persian, a language commonly used and widely spoken in iran. current measures were adapted from those used in previous studies to tap tpb (lin et al., ), phase-specific self-efficacy (zhang et al., ), and planning constructs. intention to perform the covid- preventive behaviours in the coming week was assessed using three items (e.g., 'in the coming week, i am willing to perform the covid- preventive behaviors every day'), scored = strongly disagree to = strongly agree.
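as announced above, a sketch of the rmsea-based sample-size computation is given below. it follows the maccallum-browne-sugawara logic of comparing noncentral chi-square distributions; the degrees of freedom, rmsea values, alpha, and power target shown are illustrative placeholders (the study's actual numerical settings are not recoverable from the text above), and the function names are ours.

```python
# Sketch of an RMSEA-based power analysis (MacCallum-Browne-Sugawara logic).
# All numerical settings below are illustrative placeholders.
from scipy.stats import ncx2

def rmsea_power(n, df, rmsea_null, rmsea_alt, alpha=0.05):
    """Power to reject a poorly fitting null model (rmsea_null) in favour of a
    well-fitting model (rmsea_alt < rmsea_null), given sample size n."""
    nc_null = (n - 1) * df * rmsea_null ** 2   # noncentrality under the null
    nc_alt = (n - 1) * df * rmsea_alt ** 2     # noncentrality under the alternative
    crit = ncx2.ppf(alpha, df, nc_null)        # lower-tail critical value
    return ncx2.cdf(crit, df, nc_alt)          # rejection probability under H1

def required_sample_size(df, rmsea_null=0.08, rmsea_alt=0.05,
                         alpha=0.05, power=0.80):
    n = 100
    while rmsea_power(n, df, rmsea_null, rmsea_alt, alpha) < power:
        n += 10
    return n

print(required_sample_size(df=200))  # df = 200 is a placeholder value
```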
attitude was assessed using six semantic differential items in response to a common stem: 'for me, following the recommendation of the who on engaging in covid- preventive behaviors every day in the coming week is. . .'. this was followed by a series of bipolar adjectives (e.g., extremely bad-extremely good). responses were scored on five-point scales. subjective norm was assessed using two items measuring participants' perceptions of their important others' approval of performing the target behaviour (e.g., 'most people who are important to me would want me to perform the covid- preventive behaviors every day in the coming week'), scored = strongly disagree to = strongly agree. perceived behavioural control was assessed using three items measuring participants' perceptions of their control and confidence in performing the target behaviour (e.g., 'whether or not i perform the covid- preventive behaviors every day in the coming week is completely up to me'), scored = strongly disagree to = strongly agree. action self-efficacy was assessed using three items measuring participants' perceived confidence in initiating the target behaviours immediately (e.g., 'if you have not followed the recommendation of the who on the covid- preventive behaviors every day yet, do you have the confidence to start to follow the recommendation even if you have to force yourself to do so at the current stage'), scored = totally disagree to = totally agree. maintenance self-efficacy was assessed using four items measuring participants' confidence in maintaining the target behaviour in the long term (e.g., 'if you are able to follow the recommendation of the who on the covid- preventive behaviors every day, do you have the confidence to maintain it in the long term even if you are stressed out'), scored = totally disagree to = totally agree. action planning was assessed using three items measuring the extent to which participants had made a plan in terms of how, when, and with whom to perform the target behaviour (e.g., 'i have made a detailed plan regarding where to perform the covid- preventive behaviors every day'), scored = totally disagree to = totally agree. coping planning was assessed using three items measuring how much participants planned to overcome obstacles preventing them from performing the preventive behaviours (e.g., 'i have made a detailed plan regarding what to do if something interferes with my plans'), scored = totally disagree to = totally agree. participants self-reported their age (in years), sex (coded as male = , female = ), educational level (in years), and employment status (retired, homemaker, student, employed; coded as retired and homemaker = , student and employed = ). participants' covid- preventive behaviour was assessed over the last week of the study. participants reported their frequency of participation in four preventive behaviours recommended by the who: washing hands frequently, maintaining social distancing, practising respiratory hygiene, and staying home if feeling unwell (world health organization, a, b; e.g., 'regularly and thoroughly clean your hands with an alcohol-based hand rub or wash them with soap and water'). before responding to the behavioural measure, participants were provided with a clear definition of the covid- preventive behaviours and recommendations for how and when they should be performed based on the who guidelines.
moreover, these guidelines corresponded with those provided by state media released by the iranian ministry of health. therefore, participants were fully aware of the definitions of the preventive behaviours and the guidelines. responses to each behaviour were scored on five-point scales ( = almost never to = almost always) and were used to indicate a latent covid- preventive behavioural variable in subsequent analyses. higher scores indicated greater adherence to the who recommendations on engaging in covid- preventive behaviours. hypothesized relationships among constructs of the proposed integrated social cognition model were analysed using structural equation modelling (sem). the model was estimated using the amos software v . with a maximum-likelihood estimator and a bias-corrected bootstrapped standard errors approach with , resamples. less than % of the data were missing, and data were missing completely at random based on little's ( ) mcar test (χ² = . , df = , p = . ). missing data were imputed using the full information maximum-likelihood method. psychological and behavioural constructs were latent variables indicated by their respective sets of items. hypotheses of the proposed integrated model were tested by specifying structural relationships between latent variables (see figure ), with each latent variable indicated by its set of scale items, including the behaviour factor. age, sex, educational status, and employment status were included as non-latent control variables in the model. overall model fit with the data was assessed using multiple fit indices: the goodness-of-fit chi-square test, the comparative fit index (cfi), the tucker-lewis index (tli), the standardized root-mean-square residual (srmr), and the root-mean-square error of approximation (rmsea). as the chi-square test is highly oversensitive to even minor misspecification, especially in large, complex models, values for the cfi and tli that exceeded . , and srmr and rmsea values that fell below . and . , respectively, were considered indicative of satisfactory fit of the model with the data (hu & bentler, ). reliability of the study measures (intentions, attitudes, subjective norm, perceived behavioural control, action self-efficacy, maintenance self-efficacy, action planning, coping planning, and covid- preventive behaviours) was examined using either cronbach's α or mcdonald's ω coefficients and the composite reliability (cr) coefficient. values for α and ω exceeding . , and cr values exceeding . , were considered indicative of adequate internal consistency. in addition, we also examined the average variance extracted (ave) for each latent variable to ensure that items were contributing adequately to the construct they indicated, with values in excess of . considered satisfactory (a computational sketch of these reliability indices appears at the end of this passage). the large sample size in the current study meant that most estimates of effects among study constructs in the sem were likely to exceed conventional criteria for statistical significance (ory & mokhtarian, ; wu, chang, chen, wang, & lin, ). as a consequence, assessment of the effect sizes of parameter estimates among constructs from the proposed sem was imperative. effect sizes were evaluated using standardized path coefficients, which allowed for the interpretation of absolute and relative effect sizes of the coefficients against cohen's suggested rules of thumb. effect sizes of standardized path coefficients for indirect effects were less easily interpretable, as they comprised multiplicative composites of multiple effects.
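as a concrete companion to the reliability and validity criteria just described, the following sketch computes cronbach's α, composite reliability, and ave from item scores and standardized loadings; the formulas are the standard ones, and the function names are ours rather than the study's analysis code.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha; items is an (n_observations, k_items) array for one scale."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

def composite_reliability(loadings):
    """CR from standardized factor loadings (residual variances = 1 - loading**2)."""
    lam = np.asarray(loadings, dtype=float)
    residuals = 1.0 - lam ** 2
    return lam.sum() ** 2 / (lam.sum() ** 2 + residuals.sum())

def average_variance_extracted(loadings):
    """AVE: mean squared standardized loading of a scale's items."""
    lam = np.asarray(loadings, dtype=float)
    return np.mean(lam ** 2)
```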
based on previously suggested rules of thumb, we judged standardized path coefficients for indirect effects equal to or exceeding . as non-trivial and effect sizes below this value as trivial (hagger, koch, chatzisarantis, & orbell, ; seaton, marsh, & craven, ). demographic characteristics of participants who completed measures at each time point are presented in table . attrition analyses indicated that there were no significant differences in age (f( , , ) = . ; p = . ), gender distribution (χ²( ) = . ; p = . ), educational level (f( , , ) = . ; p = . ), employment status (χ²( ) = . ; p = . ), psychological variables (wilks' λ = . , f( , , ) = . ; p = . ), and preventive behaviours (t( , ) = . ; p = . ) between participants who remained in the study at time and those who dropped out of the study at time or time . descriptive statistics, reliability coefficients, factor loadings, and average variance extracted for study measures are presented in table . cronbach's α and mcdonald's ω coefficients all exceeded . , cr values were above . , and all ave values were above . , supporting the internal consistency and reliability of the measures. consistent with the acceptable ave values, factor loadings for each item on its respective latent factor were acceptable. zero-order factor correlations among study constructs are presented in table . most of the correlations were small to medium in effect size (i.e., r range . to . ), and all were statistically significant. although each item representing a separate covid- preventive behaviour effectively indicated the latent behaviour variable, it was prudent to check the mean scores for each behaviour item to verify the consistency with which they were performed by participants. mean scores were highly consistent (m range = . to . ) with high consistency in their variability (sd range = . to . ). a one-way within-participants anova showed significant differences on each, which was unsurprising considering the large sample size. however, the small effect sizes for the differences (cohen's d range = . to . ) pointed to the consistency with which participants performed each behaviour, providing further justification for adopting a single covid- behaviour factor. mean scores, standard deviations, mean differences, and tests of difference for each preventive behaviour item are presented in supplementary table s . the integrated social cognition model proposed in the present study had good fit with the data (χ² = , . , df = ; p < . ; cfi = . , tli = . , srmr = . , rmsea = . , % ci = [ . , . ]). path coefficients for the direct effects among study constructs in the model are summarized in figure , and path coefficients for the direct, indirect, and total effects are presented in table . all proposed direct and indirect effects were statistically significant, although most effect sizes were small, with most of the standardized path coefficients less than . . perceived behavioural control had the largest effect on intention (β = . , p < . ), with much smaller effects for action self-efficacy, attitude, and subjective norms (βs < . , ps < . ). the largest direct effects on covid- preventive behaviours were for coping planning (β = . , p < . ), action planning (β = . , p < . ), and maintenance self-efficacy (β = . , p < . ), while effects for intentions and perceived behavioural control were much smaller (βs < . , ps < . ). importantly, effects of intentions were mediated by both action planning and coping planning, consistent with the hapa (total indirect effect, β = . , p < . ),
with a non-trivial effect size, although a small residual effect of intention on behaviour remained (β = . , p < . ). along with the direct effect and the mediated effects through the planning constructs, there was also a total effect of intentions (β = . , p < . ), again with a non-trivial effect size. in addition, there were indirect effects of action self-efficacy on behaviour through intentions, maintenance self-efficacy, and the coping planning and action planning constructs (β = . , p < . ). furthermore, perceived behavioural control (β = . , p < . ) and maintenance self-efficacy (β = . , p < . ) had the largest total effects on behaviour. the total effect of perceived behavioural control comprised a direct effect (β = . , p < . ) and indirect effects through intention (β = . , p < . ) and action planning and coping planning (β = . , p < . ). the total effect of maintenance self-efficacy comprised a direct effect (β = . , p < . ) and indirect effects through the planning constructs (β = . , p < . ). although the items of our covid- preventive behavioural measure effectively indicated the latent behaviour variable, for completeness we also explored whether the effects in our model differed according to the specific preventive behaviour adopted as the target behaviour. we therefore re-estimated our structural equation model with each of the four individual behaviours as the dependent variable, represented by single-indicator latent variables. results indicated high consistency in the pattern and size of the parameter estimates in each of the four models, and these were virtually unchanged from the estimates in the overall model. on the basis of these findings, our conclusions with respect to model effects remained unchanged (the analyses are summarized in tables s to s and figures s -s in the supplemental materials). the present study applied an integrated social cognition model to predict participation in covid- preventive behaviours among members of the iranian general public. findings lend support to the proposed relationships among constructs of the integrated social cognition model in identifying the determinants of covid- preventive behaviours. in particular, the research is consistent with previous studies applying the tpb and hapa to identify the determinants of health behaviours and the processes involved (hagger et al., ; mceachan et al., ; rich et al., ). the current model suggests that perceived behavioural control, intentions, forms of planning, and maintenance self-efficacy are prominent behavioural determinants, as they showed non-trivial indirect and total effects on covid- preventive behaviours. (note to table : age, sex, educational status, and occupational status were included as control variables in the structural equation model. ap = action planning; ase = action self-efficacy; b = unstandardized path coefficient; cp = coping planning; ll = lower limit of % ci; mse = maintenance self-efficacy; pbc = perceived behavioural control; sn = subjective norm; se = standard error; β = standardized path coefficient; % ci = % confidence interval of the unstandardized path coefficient; ul = upper limit of % ci. *p < . ; **p < . ; ***p < . .) current findings also support the importance of constructs representing both the motivational and volitional phases of action, again consistent with previous research and syntheses of research applying the constituent theories (mceachan et al., ).
in particular, current findings support previous research applying these constructs to predict similar behaviours in other health-related contexts, such as hand hygiene behaviours and face mask wearing (contzen & mosler, ; zomer et al., ), although that research was not conducted in the presence of a pandemic, while the current research was conducted at the peak of the ongoing covid- pandemic. while the pattern of effects among model constructs in the current study was consistent with theory and identified salient determinants of covid- preventive behaviours, the majority of effects were small in magnitude. even though the total effect of intentions on behaviour was non-trivial, substantive variance in behaviour remained unexplained. although shortfalls in the link between intention and behaviour are not uncommon in social cognition models (orbell & sheeran, ; rhodes & de bruijn, ), the link in the current study is particularly modest and suggests that individuals were not following through on their intentions to perform these preventive behaviours. this is aptly illustrated by the average levels of both variables in the current study, with the value for intentions (m = . , sd = . ) exceeding the hypothetical midpoint on the five-point scale and larger than the value for behaviour (m = . , sd = . ), which was substantially below the midpoint. while it seems that coping planning and action planning accounted for a substantive proportion of the intention-behaviour relationship in the current study, the results do not provide a sufficient explanation for the shortfall in the intention-behaviour relationship. the apparent reluctance to engage in these preventive behaviours is surprising given the high level of threat posed by the covid- outbreak in iran and the widespread media coverage of the pandemic (tuite et al., ). furthermore, a recent study identified elevated levels of fear of covid- in the general iranian population and, although we did not assess risk perceptions in the current study, theory suggests that risk perceptions may translate into increased intentions to perform preventive behaviours to minimize risk (rogers, ; schwarzer, ; schwarzer & hamilton, ). however, one possible mitigating factor is that excessively heightened fear may be counterproductive in motivating individuals to engage in preventive behaviours (lin, ). in fact, theory on illness beliefs and perceptions suggests that fear and beliefs reflecting high seriousness and consequences may motivate emotion-focused coping responses aimed at mitigating fear, such as avoidance or denial, neither of which may be focused on behaviours to manage the risk itself (leventhal, leventhal, & contrada, ). this is also consistent with research demonstrating that heightened risk perceptions may not translate into performance of preventive behaviours when self-efficacy is low (peters, ruiter, & kok, ). however, these ideas remain speculative given that we did not assess risk perceptions in the current study, and assessing risk perceptions and their interaction with self-efficacy on performing preventive behaviours may be an important avenue for future research. it is also important to consider possible contextual influences on the low covid- related behavioural response and the modest intention-behaviour relationship in the current study. the study was conducted in the run-up to the persian new year on march .
consequently, many iranians may have been reluctant to follow covid- preventive behaviours and resisted government and who recommendations. traditional new year's celebrations in iran involve large family gatherings and social events, festive behaviours that are ingrained and habitual, and form a strong part of persian culture. given the cultural significance of this celebration, it is possible that the traditional festive behaviours may have taken precedence over performing covid- preventive behaviours, particularly the social distancing aspect, as they are incompatible. modest effect sizes among model constructs notwithstanding, the current study is among the first to provide preliminary evidence of the potentially modifiable constructs that relate to preventive behaviours known to be critical in minimizing the spread of covid- infections. current findings may contribute to efforts to increase population-level participation in preventive behaviours by signposting the constructs that should be targeted in behavioural interventions. research that identifies constructs that are reliably related to behaviour forms an important part of the process by which interventionists develop behavioural interventions (hagger, moyers, mcanally, & mckinley, ; rothman, klein, & sheeran, ). this can be coupled with recent research that has linked these constructs with sets of methods or techniques purported to change them based on theory and previous evidence. interventionists can therefore identify appropriate techniques that may be effective in affecting change in the behaviour of interest by targeting change in the target constructs, a mechanism of action (connell et al., ). the current study, therefore, may provide part of the chain of evidence necessary to develop effective behaviour change interventions for covid- preventive behaviours. based on current evidence, interventionists should consider strategies that target change in perceived behavioural control, action and maintenance self-efficacy, and coping planning, as these constructs had the largest direct and indirect effects on covid- preventive behaviour. strategies known to promote self-efficacy include providing opportunities to experience success with the behaviour through, for example, demonstration, modelling, and positive feedback (warner & french, ). these strategies could be tailored to focus on uptake of the behaviour in the motivational phase (e.g., demonstrating what is an appropriate social distance when waiting in line at a grocery store; showing effective handwashing technique and prompting practice) or the maintenance phase (e.g., prompting individuals to identify an appropriate rule of thumb for keeping an appropriate social distance every time one is in a store; showing how to incorporate handwashing into a daily routine). similarly, promoting effective coping planning entails prompting individuals to identify potential barriers to the target behaviour and to identify potential actions that can be put in place to mitigate them (e.g., for the barrier of not having access to handwashing facilities, an individual could plan to make sure they have a personal supply of alcohol-based hand sanitizer available; rhodes, grant, & de bruijn, ). these strategies would form the content of communications delivered through various media (e.g., television, leaflets, posters, web-based messages) to the affected population.
the current research has a number of strengths: (1) identifying the determinants of a set of appropriate behaviours aimed at preventing the spread of covid- , an infection that poses a substantive global health threat and a priority area for behavioural intervention; (2) adoption of an appropriate integrated theoretical model that provides a set of a priori predictions on the motivational and volitional determinants of covid- preventive behaviours; (3) recruitment of a large sample of participants in a population subjected to substantive threat of infection; and (4) use of an appropriate longitudinal study design, previously validated measures, data collection techniques, and analytic methods. however, a number of limitations to the current data should be noted. first, although the prospective design provides some basis for the temporal order of relationships among constructs, the current data are correlational, so inferences of causality were drawn from theory alone and not the data. furthermore, the prospective design did not model the covariance stability or change in constructs over time. this is an important caveat to consider when making recommendations for practice. while correlations between constructs and behavioural outcomes may provide some indication of potential targets for intervention, these data do not provide a sufficient basis for concluding that affecting change in a construct will lead to change in a behavioural outcome. future research adopting panel designs that model change in constructs over time, and intervention or experimental designs that affect change in constructs and observe their effects on behavioural outcomes, is needed. it is also important to note that the study was conducted over a -week period, a relatively brief follow-up period. the short time period is appropriate given the high speed of transmission of the coronavirus, creating an imperative for immediate mass adoption of covid- preventive behaviours in the population to prevent widespread infection. however, the current study does not provide evidence on the extent to which model constructs predict covid- preventive behaviours over a longer period, and long-term follow-up would provide important data on long-term maintenance of these behaviours. moreover, it is important to note that the current study relied exclusively on self-report measures. although we adopted previously validated measures which demonstrated good reliability and construct validity, such measures have the potential to introduce error variance through recall bias and socially desirable responding. future studies may consider verification of behavioural data with non-self-report data such as data on infection rates. another important limitation is the aggregation of multiple covid- preventive behaviours into a single behavioural score representing covid- preventive behaviours, with corresponding social cognition measures that made reference to those specific behaviours rather than the general category of covid- preventive behaviours. our original rationale for this was that these behaviours all service the same goal and, therefore, we would expect them to be closely aligned and to have the same determinants and the same strength of effects within the proposed model. evidence for this comes from the high factor loadings of each behavioural measure on the latent covid- preventive behaviours variable, suggesting relative consistency in the way participants performed these behaviours.
in addition, estimation of the model with each of the behavioural items as the target behaviour demonstrated substantive consistency in model effects. taken together, these findings provide evidence that the pattern and size of model effects observed in the current study are consistent across the behaviours. nevertheless, we cannot unequivocally rule out idiosyncratic variation in the determinants of each specific preventive behaviour, or in the strength of their effects on the model constructs. this could only be done by examining the corresponding determinants of each specific behaviour separately and then testing the invariance of the model effects across behaviours. this remains an imperative for future research. in addition, the stem phrases used in the items might have presented difficulties for some participants in interpreting the item meaning. for example, the self-efficacy items were prefixed with the phrase: 'if you are doing. . .', and other items included the prefix: 'if you have not followed the recommendation of the who. . .'. participants without such experience, or who had already followed the recommendations, might have had difficulty understanding the item content. finally, some of the items aimed at assessing covid- preventive behaviours might have been difficult for people to answer. problems with interpreting these items for some participants may have introduced additional error variance to the measures and, therefore, affected the strength of model relationships involving these variables. it is also important to acknowledge that we cannot confirm that the current model was tested in a context where individuals were adopting the covid- behaviours for the first time. the current research was conducted during a period when it is likely that participants were just starting to introduce these new behaviours, given that very few cases of covid- had been detected in iran at the time. nevertheless, we cannot unequivocally rule out the possibility that a proportion of the participants were already enacting these behaviours, or that some participants already had substantive experience with these behaviours, albeit with a different goal. so, while it is likely that the current research captured individuals when they were adopting the behaviours for the first time, we cannot rule out the possibility of past experience with the behaviours. future research should consider the inclusion of past behaviour as an additional predictor in the model, consistent with previous research applying social cognition theories (e.g., brown, hagger, & hamilton, ; chatzisarantis, hagger, smith, & phoenix, ; hagger et al., ; hagger, polet, & lintunen, ). urgent action is required to stem the spread of covid- in order to 'flatten the curve' of infection rates, minimize stress on available resources and health care facilities, and, importantly, reduce mortality. the current study identified a number of important social psychological determinants of participation in covid- preventive behaviours, particularly forms of self-efficacy, perceived behavioural control, and planning. assuming these determinants are modifiable through intervention, the current research provides important formative data that may assist the development of optimally effective behavioural interventions. however, the relatively low levels of participation in these preventive behaviours endemic in the current population are a concern.
future research should consider testing the efficacy of behavioural interventions that target change in the constructs identified in the current study using appropriately matched behaviour change techniques. in addition, longitudinal studies adopting panel designs are also a priority to identify directional effects among theory constructs in this high-priority context. the funders had no role in study design, data collection, data analysis, interpretation, or writing of the report. martin s. hagger's contribution was supported by a finland distinguished professor (fidipro) award (# / / ) from business finland. the authors declare that they have no competing interests. the study was approved by the ethics committee of the qazvin university of medical sciences (ir.qums.rec. . ). participants completed an online informed consent form before beginning the survey. the following supporting information may be found in the online edition of the article:
figure s . standardized path coefficients among constructs from the integrated social cognition model with hand hygiene as the target behaviour.
figure s . standardized path coefficients among constructs from the integrated social cognition model with practising respiratory hygiene as the target behaviour.
figure s . standardized path coefficients among constructs from the integrated social cognition model with maintaining a one-metre distance as the target behaviour.
figure s . standardized path coefficients among constructs from the integrated social cognition model with staying at home if unwell as the target behaviour.
references:
associations between fear of covid- , mental health, and preventive behaviours across pregnant women and husbands: an actor-partner interdependence modelling
fear of covid- scale: development and initial validation
the theory of planned behavior
changing behaviour using the theory of planned behavior
real estimates of mortality following covid- infection. the lancet infectious diseases
social distancing as a health behavior: county-level movement in the united states during the covid- pandemic is associated with conventional health behaviors
the mediating role of constructs representing reasoned-action and automatic processes on the past behavior-future behavior relationship
reducing alcohol consumption during pre-drinking sessions: testing an integrated behaviour-change model
the influences of continuation intentions on the execution of social behaviour within the theory of planned behaviour
extended theory of planned behavior on eating and physical activity
links between behavior change techniques and mechanisms of action: an expert consensus study
identifying the psychological determinants of handwashing: results from two cross-sectional questionnaire studies in haiti and ethiopia
psychosocial variables related to weight-related self-stigma in physical activity among young adults across weight status
the handbook of behavior change
using meta-analytic path analysis to test theoretical predictions in health behavior: an illustration based on meta-analyses of the theory of planned behavior
the common-sense model of self-regulation: meta-analysis and test of a process model
known knowns and known unknowns on behavior change interventions and mechanisms of action
the reasoned action approach applied to health behavior: role of past behavior and test of some key moderators using meta-analytic structural equation modeling
child sun safety: application of an integrated behavior change model
an extended theory of planned behavior for parent-for-child health behaviors: a meta-analysis. health psychology
covid- : what is next for public health? the lancet
assessing related factors of intention to perpetrate dating violence among university students using the theory of planned behavior
cutoff criteria for fit indexes in covariance structure analysis: conventional criteria versus new alternatives
predicting covid- preventive behaviours
school opening delay effect on transmission dynamics of coronavirus disease in korea: based on mathematical modeling and simulation study
communicating the risk of death from novel coronavirus disease (covid- )
a taxonomy of behavior change methods: an intervention mapping approach
clustering of energy balance related behaviours and their intrapersonal determinants
self-regulation, health, and behavior: a perceptual-cognitive approach
social reaction toward the novel coronavirus (covid- )
can a modified theory of planned behavior explain the effects of empowerment education for people with type diabetes?
a cluster randomised controlled trial of an intervention based on the health action process approach for increasing fruit and vegetable consumption in iranian adolescents
a cluster randomized controlled trial of a theory-based sleep hygiene intervention for adolescents
the relationship between the theory of planned behavior and medication adherence in patients with epilepsy
a test of missing completely at random for multivariate data with missing values
power analysis and determination of sample size for covariance structure modeling
the theory of planned behaviour and dietary patterns: a systematic review and meta-analysis
prospective prediction of health-related behaviors with the theory of planned behavior: a meta-analysis
'inclined abstainers': a problem for predicting health-related behaviour
the impact of non-normality, sample size and estimation technique on goodness-of-fit measures in structural equation modeling: evidence from ten empirical models of travel behavior
assessing the fear of covid- among different populations: a response to ransing et al. ( )
assessing the psychological response to the covid- : a response to bitan et al. 'fear of covid- scale: psychometric characteristics, reliability and validity in the israeli population'
assessing psychological response to the covid- : the fear of covid- scale and the covid stress scales
threatening communication: a critical re-analysis and a revised meta-analytic test of fear appeal theory
social-cognitive antecedents of hand washing: action control bridges the planning-behavior gap
how big is the physical activity intention-behaviour gap? a meta-analysis using the action control framework
planning and implementation intention interventions
theory of planned behavior and adherence in chronic illness: a meta-analysis
a protection motivation theory of fear appeals and attitude change
moving from theoretical principles to intervention strategies: applying the experimental medicine approach
modeling health behavior change: how to predict and modify the adoption and maintenance of health behaviors
changing behavior using the health action process approach
big-fish-little-pond effect: generalizability and moderation - two sides of the same coin
action planning and coping planning for long-term lifestyle change: theory and assessment
sleep hygiene behaviors in iranian adolescents: an application of the theory of planned behavior
estimation of the transmission risk of the 2019-ncov and its implication for public health interventions
estimation of coronavirus disease (covid- ) burden and potential for international dissemination of infection
predicting confidence and self-efficacy interventions
q&a on coronaviruses (covid- )
covid- coronavirus pandemic
further psychometric evaluation of the self-stigma scale-short: measurement invariance across mental illness and gender
characteristics of and important lessons from the coronavirus disease (covid- ) outbreak in china: summary of a report of cases from the chinese center for disease control and prevention
health beliefs of wearing facemasks for influenza a/h1n1 prevention: a qualitative investigation of hong kong older adults
predicting hand washing and sleep hygiene behaviors among college students: test of an integrated social-cognition model
a meta-analysis of the health action process approach
sociocognitive determinants of observed and self-reported compliance to hand hygiene guidelines in child day care centers
we are most grateful to all participants. the study was funded and supported by grants from qazvin university of medical sciences; the funders were not involved in the study design, data collection, or data analysis. data availability: not applicable. all authors contributed to the study design, data interpretation, editing, and critical review of the manuscript. vi, nrm, and z gh collected data. ahp and cyl performed the data handling and data analysis and drafted the first manuscript. mdg interpreted the data and its analysis. kh and msh helped draft and revise the manuscript. all authors read and approved the final manuscript.
key: cord- -jrl fowa authors: abry, patrice; pustelnik, nelly; roux, stéphane; jensen, pablo; flandrin, patrick; gribonval, rémi; lucas, charles-gérard; guichard, Éric; borgnat, pierre; garnier, nicolas title: spatial and temporal regularization to estimate covid- reproduction number r(t): promoting piecewise smoothness via convex optimization date: - - journal: plos one doi: . /journal.pone. sha: doc_id: cord- cord_uid: jrl fowa
among the different indicators that quantify the spread of an epidemic such as the ongoing covid- pandemic, the reproduction number, which measures how many people can be contaminated by an infected person, stands first. in order to permit monitoring of the evolution of this number, a new estimation procedure is proposed here, assuming a well-accepted model for current incidence data based on past observations.
the novelty of the proposed approach is twofold: (1) the estimation of the reproduction number is achieved by convex optimization within a proximal-based inverse problem formulation, with constraints aimed at promoting piecewise smoothness; (2) the approach is developed in a multivariate setting, allowing for the simultaneous handling of multiple time series attached to different geographical regions, together with a spatial (graph-based) regularization of their evolutions in time. the effectiveness of the approach is first supported by simulations, and two main applications to real covid- data are then discussed. the first one refers to the comparative evolution of the reproduction number for a number of countries, while the second one focuses on french departments and their joint analysis, leading to dynamic maps revealing the temporal co-evolution of their reproduction numbers. the ongoing covid- pandemic has produced an unprecedented health and economic crisis, urging for the development of adapted actions aimed at monitoring the spread of the new coronavirus. no country remained untouched, thus emphasizing the need for models and tools to perform quantitative predictions, enabling effective management of patients or an optimized allocation of medical resources. for instance, the outbreak of this unprecedented pandemic was characterized by a critical lack of tools able to perform predictions related to the pressure on hospital resources (number of patients, masks, gloves, intensive care unit needs, . . .) [ , ]. as a first step toward such an ambitious goal, the present work focuses on the assessment of the pandemic time evolution. indeed, all countries experienced a propagation mechanism that is basically universal in the onset phase: each infected person happened to infect on average more than one other person, leading to an initial exponential growth. the strength of the spread is quantified by the so-called reproduction number, which measures how many people can be contaminated by an infected person. in the early phase where the growth is exponential, this is referred to as r_0; estimates of r_0 for covid- are given in [ , ]. as the pandemic develops and because more people get infected, the effective reproduction number evolves, hence becoming a function of time, hereafter labeled r(t). this can indeed end up with the extinction of the pandemic, r(t) → 0, at the expense though of the contamination of a very large percentage of the total population, and of potentially dramatic consequences. rather than letting the pandemic develop until the reproduction number would eventually decrease below unity (in which case the spread would cease by itself), an active strategy amounts to taking actions so as to limit contacts between individuals. this path has been followed by several countries which adopted effective lockdown policies, with the consequence that the reproduction number decreased significantly and rapidly, further remaining below unity as long as social distancing measures were enforced (see for example [ , ]). however, when lifting the lockdown is at stake, the situation may change with an expected increase in the number of inter-individual contacts, and monitoring in real time the evolution of the instantaneous reproduction number r(t) becomes of the utmost importance: this is the core of the present work. monitoring and estimating r(t) raises however a series of issues related to pandemic data modeling, to parameter estimation techniques, and to data availability.
concerning the mathematical modeling of infectious diseases, the most celebrated approaches refer to compartmental models such as sir ("susceptible-infectious-recovered"), with variants such as seir ("susceptible-exposed-infectious-recovered"). because such global models do not account well for spatial heterogeneity, clustering of human contact patterns, and variability in typical numbers of contacts (cf. [ ]), further refinements were proposed [ ]. in such frameworks, the effective reproduction number at time t can be inferred from a fit of the model to the data that leads to an estimated knowledge of the average number of infecting contacts per unit time, of the mean infectious period, and of the fraction of the population that is still susceptible (a minimal simulation illustrating this relation is sketched at the end of this passage). these are powerful approaches that are descriptive and potentially predictive, yet at the expense of being fully parametric and thus requiring the use of dedicated and robust estimation procedures. parameter estimation becomes all the more involved when the number of parameters grows and/or when the amount and quality of available data are low, as is the case for the real-time, emergency monitoring of the covid- pandemic. rather than resorting to fully parametric models and seeing r(t) as the by-product of their identification, a more phenomenological, semi-parametric approach can be followed [ ] [ ] [ ]. this approach has been reported as robust and potentially leading to relevant estimates of r(t), even for epidemics spreading on realistic contact networks, where it is not possible to define a steady exponential growth phase and a basic reproduction number [ ]. the underlying idea is to model incidence data z(t) at time t as resulting from a poisson distribution with a time-evolving parameter adjusted to account for the data evolution, which depends on a function f(s) standing for the distribution of the serial interval. this function models the time between the onset of symptoms in a primary case and the onset of symptoms in secondary cases, or equivalently the probability that a person confirmed infected today was actually infected s days earlier by another infected person. the serial interval function is thus an important ingredient of the model, accounting for the biological mechanisms in the epidemic evolution. assuming the distribution f to be known, the whole challenge in the actual use of the semi-parametric poisson-based model thus consists in devising estimates r̂(t) of r(t) with satisfactory statistical performance. this has been classically addressed by approaches aimed at maximizing the likelihood attached to the model. this can be achieved, e.g., within several variants of bayesian frameworks [ , , , ], with even dedicated software packages (cf. e.g., https://shiny.dide.imperial.ac.uk/epiestim/). instead, we promote here an alternative approach based on inverse problem formulations and proximal-operator based nonsmooth convex optimisation [ ] [ ] [ ] [ ] [ ]. the questions of modeling and estimation, be they fully parametric or semi-parametric, are intimately intertwined with that of data availability. this will be further discussed, but one can however remark at this point that many options are open, with the results conditioned on the choices that are made. there is first the nature of the incidence data used in the analysis (reported infected cases, hospitalizations, deaths) and the database they are extracted from.
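as announced above, here is a minimal sketch of how the effective reproduction number emerges from a compartmental model: for a basic sir model, r(t) = (beta/gamma) s(t)/n, which decays as the susceptible pool shrinks. all parameter values below are illustrative assumptions, not estimates from data.

```python
import numpy as np

def sir_effective_r(beta=0.5, gamma=0.2, n=1e6, i0=10.0, days=150, dt=0.05):
    """Euler integration of the SIR model; returns the daily effective
    reproduction number r(t) = (beta/gamma) * s(t)/n."""
    s, i = n - i0, i0
    r_eff = []
    steps_per_day = int(round(1.0 / dt))
    for _ in range(days):
        for _ in range(steps_per_day):
            new_infections = beta * s * i / n * dt
            new_recoveries = gamma * i * dt
            s -= new_infections
            i += new_infections - new_recoveries
        r_eff.append((beta / gamma) * s / n)
    return np.array(r_eff)

r_t = sir_effective_r()  # starts near beta/gamma = 2.5, decays with s(t)
```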
next, there is the granularity of the data (whole country, regions, smaller units) and the specificities that can be attached to a specific choice as well as the comparisons that can be envisioned. in this respect, it is worth remarking that most analyses reported in the literature are based on (possibly multiple) univariate time series, whereas genuinely multivariate analyses (e.g., a joint analysis of the same type of data in different countries in order to compare health policies) might prove more informative. for that category of research work motivated by contributing in emergency to the societal stake of monitoring the pandemic evolution in real time, or at least on a daily basis, there are two classes of challenges: ensuring robust and regular access to relevant data; and rapidly developing analysis/estimation tools that are theoretically sound, practically usable on the data actually available, and that may contribute to improving current monitoring strategies. in that spirit, the overarching goal of the present work is twofold: (1) proposing a new, more versatile framework for the estimation of r(t) within the semi-parametric model of [ , ], reformulating its estimation as an inverse problem whose functional is minimized by using nonsmooth proximal-based convex optimization; (2) inserting this approach in an extended multivariate framework, with applications to various complementary datasets corresponding to different geographical regions. the paper is organized as follows. it first discusses data, as collected from different databases, with heterogeneity and uneven quality calling for some preprocessing that is detailed. in the present work, incidence data (thereafter labelled z(t)) refers to the number of daily new infections, either as reported in databases, or as recomputed from other available data such as hospitalization counts. based on a semi-parametric model for r(t), it is then discussed how its estimation can be phrased within a nonsmooth proximal-based convex optimization framework, intentionally designed to enforce piecewise linearity in the estimation of r(t) via temporal regularization, as well as piecewise constancy in spatial variations of r(t) by graph-based regularization. the effectiveness of these estimation tools is first illustrated on synthetic data, constructed from different models and simulating several scenarii, before being applied to several real pandemic datasets. first, the numbers of daily new infections for many different countries across the world are analyzed independently. second, focusing on france only, the numbers of daily new infections per continental france département (départements constitute the usual entities organizing administrative life in france) are analyzed both independently and in a multivariate setting, illustrating the benefit of this latter formulation. discussions, perspectives and potential improvements are finally discussed. datasets. in the present study, three sources of data were systematically used: • source (jhu): johns hopkins university provides access to the cumulated daily reports of the numbers of infected, deceased and recovered persons, on a per-country basis, for a large number of countries worldwide, essentially since the inception of the covid- crisis (january st, ).
the data available on the different data repositories used here are strongly affected by outliers, which may stem from inaccuracy or misreporting in per country reporting procedures, or from changes in the way counts are collected, aggregated, and reported. in the present work, it has been chosen to preprocess data for outlier removal by applying to the raw time series a nonlinear filtering, consisting of a sliding-median over a -day window: outliers defined as ± . standard deviation are replaced by window median to yield the pre-processed time series z(t), from which the reproduction number r(t) is estimated. an example of raw and pre-processed time series is illustrated in fig . when countries are studied independently, the estimation procedure is applied separately to each time series z(t) of size t, the number of days available for analysis. when considering continental france départements, we are given d time series z d (t) of size t each, where � d � d = indexes the départements. these time series are collected and stacked in a matrix of size d × t, and they analyzed both independently and jointly. model. although they can be used for envisioning the impact of possible scenarii in the future development of an on-going epidemic [ ] , sir models, because they require the full estimation of numerous parameters, are often used a posteriori (e.g., long after the epidemic) with consolidated and accurate datasets. during the spread phase and in order to account for the on-line/on-the-fly need to monitor the pandemic and to offer some robustness to partial/ incomplete/noisy data, less detailed semi-parametric models focusing on the only estimation of the time-dependent reproduction number can be preferred [ , , ] . let r(t) denote the instantaneous reproduction number to be estimated and z(t) be the number of daily new infections. it has been proposed in [ , ] that {z(t), t = , . . ., t} can be modeled as a nonstationary time series consisting of a collection of random variables, each drawn from a poisson distribution p p t whose parameter p t depends on the past observations of z(t), on the current value of r(t), and on the serial interval function f(�): the serial interval function f(�) constitutes a key ingredient of the model, whose importance and role in pandemic evolution has been mentioned in introduction. it is assumed to be independent of calendar time (i.e., constant across the epidemic outbreak), and, importantly, independent of r(t), whose role is to account for the time dependencies in pandemic propagation mechanisms. for the covid- pandemic, several studies have empirically estimated the serial interval function f(�) [ , ] . for convenience, f(�) has been modeled as a gamma distribution, with shape and rate parameters . and . , respectively (corresponding to mean and standard deviations of . and . days, see [ ] and references therein). these choices and assumptions have been followed and used here, and the corresponding function is illustrated in fig . in essence, the model in eq ( ) is univariate (only one time series is modeled at a time), and based on a poisson marginal distribution. it is also nonstationary, as the poisson rate evolves along time. the key ingredient of this model consists of the poisson rate evolving as a weighted moving average of past observations, which is qualitatively based on the following rationale: whenr is above , the epidemic is growing and, conversely, when this ratio is below , it decreases and eventually vanishes. 
non-smooth convex optimisation. the whole challenge in the actual use of the semi-parametric poisson-based model described above thus consists in devising estimates r̂(t) of r(t) that have better statistical performance (more robust, reliable, and hence usable) than the direct brute-force and naive form defined in eq (2). to estimate r(t), and instead of using bayesian frameworks that are considered state-of-the-art tools for epidemic evolution analysis, we propose and promote here an alternative approach based on an inverse problem formulation. its main principle is to assume some form of temporal regularity in the evolution of r(t) (we use a piecewise linear model in the following). in the case of a joint estimation of r(t) across several continental france départements, we further assume some form of spatial regularity, i.e., that the values of r(t) for neighboring départements are similar. univariate setting. for a single country, or a single département, the observed (possibly preprocessed) data {z(t), 1 ≤ t ≤ T} are represented by a T-dimensional vector z ∈ ℝ^T. recalling that the poisson law is

$$P(Z = n \,|\, p) = \frac{p^n}{n!}\, e^{-p}$$

for each integer n ≥ 0, the negative log-likelihood of observing z given a vector p ∈ ℝ^T of poisson parameters p_t is

$$-\log P(z \,|\, p) = \sum_{t=1}^{T} \big( p_t - z_t \log p_t + \log(z_t!) \big),$$

where r ∈ ℝ^T is the (unknown) vector of values of r(t). up to an additive term independent of p, this is equal to the kl-divergence (cf. section . . in [ ]):

$$D_{\mathrm{KL}}(z \,|\, p) = \sum_{t=1}^{T} \Big( z_t \log \frac{z_t}{p_t} - z_t + p_t \Big).$$

given the vector of observed values z, the serial interval function f(·), and the number of days T, the vector p given by (1) reads p = r ⊙ Fz, with ⊙ the entrywise product and F ∈ ℝ^{T×T} the matrix with entries F_{ij} = f(i − j). maximum likelihood estimation of r (i.e., minimization of the negative log-likelihood) leads to an optimization problem min_r D_KL(z | r ⊙ Fz), which does not ensure any regularity of r(t). to ensure temporal regularity, we propose a penalized approach using

$$\hat{r} = \operatorname*{argmin}_{r} \; D_{\mathrm{KL}}(z \,|\, r \odot F z) + \Omega(r), \qquad (3)$$

where Ω denotes a penalty function. here we wish to promote a piecewise affine and continuous behavior, which may be accomplished [ , ] using Ω(r) = λ_time ‖D₂ r‖₁, where D₂ is the matrix associated with a laplacian filter (second-order discrete temporal derivatives), ‖·‖₁ denotes the ℓ₁-norm (i.e., the sum of the absolute values of all entries), and λ_time is a penalty factor to be tuned. this leads to the following optimization problem:

$$\hat{r} = \operatorname*{argmin}_{r} \; D_{\mathrm{KL}}(z \,|\, r \odot F z) + \lambda_{\mathrm{time}} \|D_2 r\|_1. \qquad (4)$$

spatially regularized setting. in the case of multiple départements, we consider multiple vectors (z_d ∈ ℝ^T, 1 ≤ d ≤ D) associated with the D time series, and multiple vectors of unknowns (r_d ∈ ℝ^T, 1 ≤ d ≤ D), which can be gathered into matrices: a data matrix Z ∈ ℝ^{T×D} whose columns are the z_d, and a matrix of unknowns R ∈ ℝ^{T×D} whose columns are the quantities to be estimated, r_d. a first possibility is to proceed to independent estimations of the (r_d, 1 ≤ d ≤ D) by addressing the separate optimization problems

$$\hat{r}_d = \operatorname*{argmin}_{r_d} \; D_{\mathrm{KL}}(z_d \,|\, r_d \odot F z_d) + \lambda_{\mathrm{time}} \|D_2 r_d\|_1, \qquad (5)$$

which can be equivalently rewritten in matrix form:

$$\hat{R} = \operatorname*{argmin}_{R} \; D_{\mathrm{KL}}(Z \,|\, R \odot F Z) + \lambda_{\mathrm{time}} \|D_2 R\|_1, \qquad (6)$$

where ‖D₂R‖₁ is the entrywise ℓ₁-norm of D₂R, i.e., the sum of the absolute values of all its entries. an alternative is to estimate jointly the (r_d, 1 ≤ d ≤ D) using a penalty function promoting spatial regularity. to account for spatial regularity, we use a spatial analogue of D₂ promoting spatially piecewise-constant solutions. the D continental france départements can be considered as the vertices of a graph, where edges are present between adjacent départements.
spatially regularized setting. in the case of multiple départements, we consider multiple vectors z_d ∈ ℝ^t, 1 ≤ d ≤ d, associated with the d time series, and multiple unknown vectors r_d ∈ ℝ^t, 1 ≤ d ≤ d, which can be gathered into matrices: a data matrix z ∈ ℝ^{t×d} whose columns are the z_d, and a matrix of unknowns r ∈ ℝ^{t×d} whose columns are the quantities r_d to be estimated. a first possibility is to proceed to independent estimations of the r_d by addressing the separate optimization problems

$$ \hat{r}_d = \operatorname*{argmin}_{r_d} \; d_{\mathrm{KL}}(z_d \mid r_d \odot \Phi z_d) + \lambda_{\mathrm{time}} \| D_2 r_d \|_1, \quad 1 \le d \le D, $$

which can be equivalently rewritten in matrix form as

$$ \hat{R} = \operatorname*{argmin}_{R} \; d_{\mathrm{KL}}(Z \mid R \odot \Phi Z) + \lambda_{\mathrm{time}} \| D_2 R \|_1, $$

where ‖d₂ r‖₁ is the entrywise ℓ₁ norm of d₂ r, i.e., the sum of the absolute values of all its entries. an alternative is to estimate the r_d jointly, using a penalty function promoting spatial regularity. to account for spatial regularity, we use a spatial analogue of d₂, promoting spatially piecewise-constant solutions. the d continental france départements can be considered as the vertices of a graph, with edges between adjacent départements. from the adjacency matrix a ∈ ℝ^{d×d} of this graph (a_ij = 1 if there is an edge e = (i, j) in the graph, a_ij = 0 otherwise), the global variation of a function on the graph can be computed as Σ_ij a_ij (r_ti − r_tj)², a quadratic form accessed through the so-called (combinatorial) laplacian of the graph [ ]. however, in order to promote smoothness over the graph while keeping some sparse discontinuities on some edges, it is preferable to regularize using a total variation on the graph, which amounts to taking the ℓ₁-norm of the gradients (r_ti − r_tj) over all existing edges. for that, let us introduce the incidence matrix b ∈ ℝ^{e×d} such that l = bᵀb, where e is the number of edges and, on each row representing an existing edge e = (i, j), we set b_{e,i} = 1 and b_{e,j} = −1. then the ℓ₁-norm ‖rbᵀ‖₁ = ‖brᵀ‖₁ equals Σ_{t=1}^{t} Σ_{(i,j): a_ij = 1} |r_ti − r_tj|; alternatively, it can be computed as ‖rbᵀ‖₁ = Σ_{t=1}^{t} ‖b r(t)‖₁, where r(t) ∈ ℝ^d is the t-th row of r, which gathers the values across all départements at a given time t. from that, we can define the regularized optimization problem

$$ \hat{R} = \operatorname*{argmin}_{R} \; d_{\mathrm{KL}}(Z \mid R \odot \Phi Z) + \lambda_{\mathrm{time}} \| D_2 R \|_1 + \lambda_{\mathrm{space}} \| R B^\top \|_1 . $$

the optimization problems above involve convex, lower semi-continuous, proper and non-negative functions, hence their sets of minimizers are non-empty and convex [ ]; we discuss below how to compute minimizers using proximal algorithms. by the known sparsity-promoting properties of ℓ₁ regularizers and their variants, the corresponding solutions are such that d₂r and/or rbᵀ are sparse matrices, in the sense that these matrices of (second-order temporal or first-order spatial) derivatives have many zero entries. the higher the penalty factors λ_time and λ_space, the more zeros in these matrices. in particular, when λ_space = 0, no spatial regularization is performed and the joint problem is equivalent to the independent ones. when λ_space is large enough, rbᵀ is exactly zero, which implies that r(t) is constant across départements at each time, since the graph of départements is connected.

optimization using a proximal algorithm. the considered optimization problems are of the form

$$ \min_{r} \; f(r) + \sum_{m} g_m(K_m r), $$

where f and the g_m are proper, lower semi-continuous, convex functions and the k_m are bounded linear operators. the classical case m = 1 is typically addressed with the chambolle-pock algorithm [ ], which has recently been adapted to multiple regularization terms [ ]. to handle the lack of smoothness (lipschitz differentiability) of the considered functions f and g_m, these approaches rely on their proximity operators. we recall that the proximity operator of a convex, lower semi-continuous function φ is defined as [ ]

$$ \mathrm{prox}_{\varphi}(y) = \operatorname*{argmin}_{x} \; \tfrac{1}{2} \| x - y \|_2^2 + \varphi(x). $$

in our case, we consider a separable data-fidelity term. as it is a separable function of the entries of its input, its associated proximity operator can be computed component by component [ ]: for the scalar function p ↦ τ(p − z log p), with τ > 0, it is given by the positive root of a quadratic,

$$ \mathrm{prox}(y) = \tfrac{1}{2} \Big( y - \tau + \sqrt{(y - \tau)^2 + 4 \tau z} \Big). $$

we further consider g_m(·) = ‖·‖₁, m = 1, 2, with k₁(r) ≔ λ_time d₂ r and k₂(r) ≔ λ_space r bᵀ. the proximity operators associated with the g_m read, component-wise,

$$ \mathrm{prox}_{\tau \| \cdot \|_1}(y)_t = \mathrm{sign}(y_t) \, ( |y_t| - \tau )_+ , $$

where (·)₊ = max(0, ·). in the algorithm below, we express explicitly the primal-dual algorithm of [ ] for our setting, using the moreau identity, which relates the proximity operator of a function to that of its conjugate (cf. [ ]). the choice of the parameters τ and σ_m impacts the convergence guarantees; in this work, we adapt a standard choice provided by [ ] to this extended framework. the adjoints of the k_m, denoted k_m^*, are given by k₁^*(u) = λ_time d₂ᵀ u and k₂^*(u) = λ_space u b, and the sequence (r^(k+1))_{k∈ℕ} produced by the algorithm converges to a minimizer of the joint problem (cf. [ ]).
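a compact python sketch of these building blocks follows, for the univariate problem (step sizes, iteration count, and λ_time are illustrative; the prox of the data term absorbs the weights q = Φz, and the dual update for the ℓ₁ term uses the moreau identity, which reduces here to a clip onto [−1, 1]):

```python
import numpy as np

def second_order_diff(T):
    # D2: second-order discrete temporal derivatives (laplacian filter)
    D2 = np.zeros((T - 2, T))
    for t in range(T - 2):
        D2[t, t:t + 3] = [1.0, -2.0, 1.0]
    return D2

def incidence_matrix(A):
    # B (E x D) with B[e, i] = 1, B[e, j] = -1 per edge, so that L = B.T @ B
    D = A.shape[0]
    edges = [(i, j) for i in range(D) for j in range(i + 1, D) if A[i, j]]
    B = np.zeros((len(edges), D))
    for e, (i, j) in enumerate(edges):
        B[e, i], B[e, j] = 1.0, -1.0
    return B

def prox_data(y, z, q, tau):
    # prox of tau * sum_t (q_t r_t - z_t log r_t): positive root of
    # r^2 + (tau * q - y) r - tau * z = 0, component-wise
    b = y - tau * q
    return 0.5 * (b + np.sqrt(b ** 2 + 4.0 * tau * z))

def estimate_r_univariate(z, Phi, lam_time=3.5, n_iter=20000):
    # minimal chambolle-pock sketch for
    # min_r D_KL(z | r * (Phi z)) + lam_time * ||D2 r||_1
    T = len(z)
    q = Phi @ z
    K = lam_time * second_order_diff(T)
    tau = sigma = 0.99 / np.linalg.norm(K, 2)   # tau * sigma * ||K||^2 < 1
    r = np.ones(T); r_bar = r.copy(); u = np.zeros(T - 2)
    for _ in range(n_iter):
        # dual step: prox of the conjugate of ||.||_1 is a clip to [-1, 1]
        u = np.clip(u + sigma * (K @ r_bar), -1.0, 1.0)
        r_new = prox_data(r - tau * (K.T @ u), z, q, tau)
        r_bar = 2.0 * r_new - r                  # over-relaxation
        r = r_new
    return r
```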
in words, the algorithm takes as input the data z and a tolerance ε > 0, with primal and dual step sizes τ and σ_m constrained through the operator norms of the k_m (normalized via √(Σ_m ‖k_m‖²)); it iterates the dual and primal proximal updates until the relative change between successive iterates falls below ε. to assess the relevance and performance of the proposed estimation procedure detailed above, it is first applied to two different synthetic time series z(t). the first is synthesized directly using the poisson model above, with the same serial interval function φ as that used for the estimation and an a priori prescribed function r(t). the second is produced by solving a compartmental (sir-type) model; for such models, r(t) can be theoretically related to the time-scale parameters entering their definition, as the ratio between the infection time scale and the quitting-infection (be it by death or recovery) time scale [ , ]. the theoretical serial function φ associated with that model and its parameters is computed analytically (cf., e.g., [ ]) and used in the estimation procedure. for both cases, the same a priori prescribed function r(t), to be estimated, is chosen as constant (r = . ) over the first days to model the epidemic outbreak, followed by a linear decrease (till below 1) over the next days to model lockdown benefits, and finally an abrupt linear increase over the last days, modeling a possible outbreak when the lockdown is lifted. additive gaussian noise is superimposed on the data produced by the models to account for outliers and misreporting. for both cases, the proposed estimation procedure (obtained with λ_time set to the same values as those used to analyze real data in the next section) outperforms the naive estimates, which turn out to be very irregular (cf. the figure). the proposed estimates notably capture well the three different phases of r(t) (stable, decreasing and increasing), with a rapid and accurate reaction to the increasing change in the last days.
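the first synthetic example can be reproduced along the following lines (the phase lengths and r levels below are placeholders for the values elided in the text; simulate_epidemic assumes phi has length at least t):

```python
import numpy as np

rng = np.random.default_rng(0)

def r_profile(T=140, r0=2.5, split=(50, 100)):
    # constant outbreak level, linear decrease below 1, then re-increase
    t1, t2 = split
    r = np.empty(T)
    r[:t1] = r0
    r[t1:t2] = np.linspace(r0, 0.7, t2 - t1)
    r[t2:] = np.linspace(0.7, 1.3, T - t2)
    return r

def simulate_epidemic(r_true, phi, z0=10.0):
    # z(t) ~ Poisson(r(t) * sum_{s>=1} phi(s) z(t-s)), seeded with z0 cases
    T = len(r_true)
    z = np.zeros(T)
    z[0] = z0
    for t in range(1, T):
        p_t = r_true[t] * np.sum(phi[:t][::-1] * z[:t])
        z[t] = rng.poisson(max(p_t, 0.0))
    return z
```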
the present section aims to apply the model and estimation tools proposed above to actual covid-19 data. first, specific methodological issues are addressed, related to tuning the hyperparameters λ_time or (λ_time, λ_space) in the univariate and multivariate settings, and to comparing the consistency between different estimates of r(t) obtained from the same incidence data downloaded from different repositories. then, the estimation tools are applied to the estimation of r(t), both independently for numerous countries and jointly for the continental france départements. estimation of r(t) is performed daily, with t thus increasing every day, and updated results are uploaded on a regular basis to a dedicated webpage (cf. http://perso.ens-lyon.fr/patrice.abry).

regularization hyperparameter tuning. a critical issue associated with the practical use of the estimates based on the optimization problems above lies in the tuning of the hyperparameters balancing the data-fidelity term and the penalization terms. while automated and data-driven procedures can be devised, following works such as [ ] and references therein, let us analyze the form of the functional to be minimized, so as to compute relevant orders of magnitude for these hyperparameters. let us start with the univariate estimate. using λ_time = 0 implies no regularization, and the achieved estimate turns out to be as noisy as the naive estimator; conversely, for large enough λ_time, the proposed estimate becomes exactly constant, missing any time evolution. tuning λ_time is thus critical, but can become tedious, especially because differences across countries (or across départements in france) are likely to require different choices of λ_time. however, a careful analysis of the functional to minimize shows that the data-fidelity term, based on a kullback-leibler divergence, scales proportionally to the input incidence data z, while the penalization term, based on the regularization of r(t), is independent of the actual values of z. therefore, the same estimate of r(t) is obtained if we replace z with α × z and λ with α × λ. because the orders of magnitude of z differ among countries (whether because of differences in population size or in pandemic impact), this critical observation leads us to apply the estimate not to the raw data z but to a normalized version z/std(z), alleviating the burden of selecting one λ_time per country: it enables selecting one and the same λ_time for all countries, and further permits comparing the estimated r(t)'s across countries for equivalent levels of regularization. considering now the graph-based, spatially regularized estimates while keeping λ_time fixed, the different r(t) are analyzed independently for each département when λ_space = 0; conversely, choosing a large enough λ_space yields exactly identical estimates across départements that are, satisfactorily, very close to what is obtained from data aggregated over france prior to estimation. further, the connectivity graph of the continental france départements leads to an adjacency matrix with non-zero off-diagonal entries (set to the value 1), associated with as many edges as exist in the graph. a careful examination of the joint functional then shows that the spatial and temporal regularizations have equivalent weights when λ_space and λ_time are chosen in a fixed proportion accounting for the relative numbers of edges and time samples entering each penalty. the use of z/std(z) and of this proportionality rule gives a relevant first-order guess for the tuning of λ_time and of (λ_time, λ_space).
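in practice, this amounts to a one-line normalization before calling the solver; reusing the sketches above (z_clean, phi, convolution_matrix and estimate_r_univariate are the hypothetical helpers defined earlier, and the λ_time value is illustrative):

```python
# the same r_hat is obtained from (z, lam) and (alpha * z, alpha * lam),
# so normalizing by std(z) lets a single lam_time serve all countries
z_norm = z_clean / z_clean.std()
Phi = convolution_matrix(phi, len(z_norm))
r_hat = estimate_r_univariate(z_norm, Phi, lam_time=3.5)
```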
estimate consistency across repository sources. when undertaking such work, dedicated to ongoing events, daily evolutions, and a real stake in forecasting future trends, solid access to reliable data is critical. as previously mentioned, three sources of data are used, each including data for france, which are thus now used to assess the impact of data sources on the estimated r(t). source 1 (jhu) and source 2 (ecdpc) provide cumulated numbers of confirmed cases counted at national levels and (in principle) including all reported cases from any source (hospital, death at home or in care homes, etc.). source 3 (spf) does not report that same number, but a collection of other figures related to hospital counts only, from which a daily number of new hospitalizations can be reconstructed and used as a proxy for daily new infections. the corresponding raw and (sliding-median) preprocessed data, illustrated in the figure, show overall comparable shapes and evolutions, yet with clearly visible discrepancies of two kinds. first, source 1 (jhu) and source 2 (ecdpc), consisting of crude reports of numbers of confirmed cases, are prone to outliers. these can result from miscounts, from pointwise incorporations of new figures, such as the progressive inclusion of cases from ehpad (care homes) in france, or from corrections of previous erroneous reports. conversely, data from source 3 (spf), based on hospital reports, suffer from far fewer outliers, yet at the cost of providing only partial figures. second, in france, as in numerous other countries worldwide, the procedure on which confirmed case counts are based changed several times during the pandemic period, possibly yielding an artificial increase in the local average number of daily new confirmed cases. this has notably been the case for france prior to the end of the lockdown period (mid-may), when the number of tests performed increased regularly for about two weeks, and again in early june, when the count procedures changed once more, likely because of the massive use of serology tests. because the estimate of r(t) essentially relies on comparing a daily number against a past moving average, these changes lead to significant biases that cannot easily be accounted for, but which vanish after a duration controlled by the typical width of the serial distribution φ (of the order of ten days). confirmed infection cases across the world. to report estimated r(t)'s for different countries, data from source 2 (ecdpc) are used, as they are of better quality than data from source 1 (jhu), and because hospital-based data (as in source 3 (spf)) are not easily available for numerous different countries. visual inspection led us to choose, uniformly for all countries, two values of the temporal regularization parameter: λ_time = to produce a strongly regularized, hence slowly varying, estimate, and λ_time = . for a milder regularization, hence a more reactive estimate. these estimates being designed by construction to favor piecewise-linear behaviors, local trends can be estimated by computing (robust) estimates of the derivatives β̂(t) of r̂(t); the slow and less slow estimates of r̂(t) thus provide slow and less slow estimates of the local trends. intuitively, these local trends can be seen as predictors of the forthcoming value of r: r̂(t + n) = r̂(t) + n β̂(t). let us start by inspecting again the data for france, further comparing estimates stemming from source 2 (ecdpc) and source 3 (spf) (cf. the figure). as discussed earlier, data from source 2 (ecdpc) show far more outliers than data from source 3 (spf), thus impacting the estimation of r and β. as expected, the strongly regularized estimates (λ_time = ) are less sensitive than the less regularized ones (λ_time = . ), yet the discrepancies between estimates are significant: data from source 2 (ecdpc) yield, for june th, estimates of r slightly above 1, while those from source 3 (spf) remain steadily around . , with no or mild local trends. again, this might be because, in late may, france started massive serology testing, mostly performed outside hospitals; this yielded an abrupt increase in the number of new confirmed cases, biasing the estimates of r(t) upward. however, the short-term local trend for june th also points downward, suggesting that the model is incorporating these irregularities and that the estimates will return to unbiased values after an estimation time controlled by the typical width of the serial distribution φ (of the order of ten days). this recent increase is not seen in the source 3 (spf)-based estimates, which remain very stable, potentially suggesting that hospital-based data are much less affected by changes in testing policies. this local analysis at the current date can be complemented by a more global view of what has happened since the lifting of the lockdown: considering the whole period starting from may th, we end up with triplets [ th percentile; median; th percentile] as given in the table. source 2 (ecdpc) provides data for several tens of countries.
the figures report r̂(t) and β̂(t) for several selected countries; more figures are available at perso.ens-lyon.fr/patrice.abry. as of june th (time of writing), for most european countries the pandemic seems to remain under control despite the lifting of the lockdown, with (slowly varying) estimates of r remaining stable below 1, ranging from . to . depending on the country, and (slowly varying) trends around 0. sweden and portugal (not shown here) display less favorable patterns, as does, to a lesser extent, the netherlands, raising the question of whether this might be a potential consequence of less stringent lockdown rules compared to neighboring european countries. while r̂ for canada has been clearly below 1 since early may, with a negative local trend, the usa are still bouncing back and forth around 1; south america is in the above-1 phase but starts to show negative local trends. iran, india and indonesia are in the critical phase with r̂(t) > 1. data for african countries are harder to analyze, and several countries such as egypt or south africa are in pandemic-growing phases. phase-space representation. to complement these views, a further figure displays a phase-space representation of the time evolution of the pandemic, constructed by plotting against each other the local averages (over a week) of the slowly varying estimated reproduction number and its local trend, (r̄(t), β̄(t)), for a period ranging from mid-april to june th. country names are written at the end (last day) of the trajectories. interestingly, european countries display a c-shaped trajectory, starting with r > 1 and negative trends (lockdown effects), thus reaching the safe zone (r < 1), but eventually performing a u-turn with a slow increase of local trends until positive. this results in a mild but clear re-increase of r, yet with most values below 1 today, except for france (see comments above) and sweden. the usa display a similar c-shape, though almost concentrated on the edge point r(t) = 1, β = 0, while canada returns to the safe zone with a specific pattern. south american countries, obviously at an earlier stage of the pandemic, show an inverted c-shaped pattern, with trajectories evolving from the bad top-right corner to the controlling phase (negative local trend, with decreasing r still above 1, though). the phase spaces of asian and african countries essentially confirm these c-shaped trajectories. envisioning these phase-space plots as pertaining to different stages of the pandemic (rather than to different countries) suggests that the covid-19 pandemic trajectory resembles a clockwise circle, starting from the bad top-right corner (r above 1 and positive trends), evolving, likely under lockdown impact, towards the bottom-right corner (r still above 1 but negative trends) and finally to the safe bottom-left corner (r below 1 and negative, then null, trend). the lifting of the lockdown may explain the continuation of the trajectory in the "still safe, but. . ." corner (r below 1 and again positive trend). as of june th, it can only be hoped that trajectories will not close the loop and reach back to the bad top-right corner, crossing the r = 1 limit.
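the phase-space construction can be sketched as follows (weekly non-overlapping averages are used here for simplicity, and β̂ is taken as a discrete derivative of the piecewise-linear r̂; both are our own simplifications):

```python
import numpy as np
import matplotlib.pyplot as plt

def phase_space(r_hat, label, week=7):
    # local trend: discrete derivative of the (piecewise-linear) estimate
    beta_hat = np.gradient(r_hat)
    k = len(r_hat) // week * week
    r_bar = r_hat[:k].reshape(-1, week).mean(axis=1)
    b_bar = beta_hat[:k].reshape(-1, week).mean(axis=1)
    plt.plot(r_bar, b_bar, marker=".")
    plt.annotate(label, (r_bar[-1], b_bar[-1]))  # name at trajectory end
    plt.axvline(1.0, ls="--"); plt.axhline(0.0, ls="--")
    plt.xlabel("reproduction number r"); plt.ylabel("local trend beta")
```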
continental france départements: regularized joint estimates. there is further interest in focusing the analysis on the potential heterogeneity of the epidemic propagation across a given territory, governed by the same sanitary rules and health care system. this can be achieved by estimating a set of local r̂(t)'s for different provinces and regions [ ]. such a study is made possible by the data from source 3 (spf), which provides hospital-based data for each of the continental france départements. the earlier figure (right plot) already reported the slowly and fast varying estimates of r and local trends computed from data aggregated over the whole of france. to further study the variability across the continental france territory, the graph-based joint spatial and temporal regularization described above is applied to the numbers of confirmed cases, organized as a matrix of size d × t, with d the number of continental france départements and t the number of available daily data (e.g., t = on june th, data being available only after march th). the choice λ_time = . , leading to fast estimates, was used for this joint study. using the weighting rule above as a guideline, empirical analyses led us to set λ_space = . , thus selecting the spatial regularization to weight one-fourth of the temporal regularization. first, the figure (top row) maps and compares, for june th (chosen arbitrarily as the day of writing), per-département estimates obtained when départements are analyzed either independently (r̂_indep, left plot) or jointly (r̂_joint, right plot). while the means of r̂_indep and r̂_joint are of the same order (≈ . and ≈ . , respectively), the standard deviation drops from ≈ . to ≈ . , indicating a significant decrease in variability across départements. this is further complemented by visual inspection of the maps, which reveals reduced discrepancies across neighboring départements, as induced by the estimation procedure. in a second step, short- and long-term trends are automatically extracted from r̂_indep and r̂_joint, and short-term trends are displayed in the bottom row of the figure (left and right, respectively). this evidences again reduced variability across neighboring départements, though much less than that observed for r̂_indep and r̂_joint themselves, likely suggesting that trends on r are per se more robust quantities to estimate than single values of r. for june th, the figure also indicates reproduction numbers that are essentially stable everywhere across france, confirming the trend estimated from data aggregated over the whole of france (cf. the right plot of the earlier figure). video animations, available at perso.ens-lyon.fr/patrice.abry/deptregul.mp4 and at barthes.enssib.fr/coronavirus/ixxi-sisyphe/, updated on a daily basis, report further comparisons between r̂_indep and r̂_joint and their evolution along time for the whole period of data availability. maps for selected days are displayed in the figure (with identical colormaps and colorbars across time). they show that until late march (the lockdown took place in france on march th), r̂_joint was uniformly above . (chosen as the upper limit of the colorbar, to permit seeing variations during the lockdown and post-lockdown periods), indicating a rapid evolution of the epidemic across the whole of france. a slowdown of the epidemic evolution is visible as early as the first days of april (with overall decreases of r̂_joint and a clear north vs. south gradient). during april, this gradient rotates slightly, aligning in a north-east vs. south-west direction, and globally decreases in amplitude. interestingly, in may, this gradient reversed direction, from south-west to north-east, though with very mild amplitude. as of today (june th), the pandemic, viewed through the hospital-based data of source 3 (spf), seems under control across the whole of continental france. estimation of the reproduction number constitutes a classical task in assessing the status of a pandemic.
classically, this is done a posteriori (after the pandemic) and from consolidated data, often relying on detailed and accurate sir-based models and on bayesian frameworks for estimation. however, on-the-fly monitoring of the time evolution of the reproduction number constitutes a critical societal stake in situations such as that of covid-19, when decisions need to be taken and actions need to be made under emergency. this calls for a triplet of constraints: i) robust access to fast-collected data; ii) semi-parametric models for such data that focus on a subset of critical parameters; iii) estimation procedures that are both elaborate enough to yield robust estimates and versatile enough to be used on a daily basis and applied to available data, often limited in quality and quantity. in that spirit, making use of a nonstationary poisson-distribution-based semi-parametric model proven robust in the literature for epidemic analysis, we developed an original estimation procedure favoring piecewise-regular estimation of the evolution of the reproduction number, both along time and across space. this was based on an inverse problem formulation balancing data fidelity against time and space regularization, and used proximity operators and nonsmooth convex optimization. the tool can be applied to time series of incidence data, reported, e.g., for a given country. whenever the data make it possible, estimation can benefit from a graph of spatial proximity between subdivisions of a given territory. the tool also provides local trends that permit forecasting short-term future values of r. the proposed tools were applied to pandemic incidence data consisting of daily counts of new infections, from several databases providing data either worldwide on an aggregated per-country basis or, for france only, based on hospital counts alone, spread across the french territory. they revealed interesting patterns on the state of the pandemic across the world, and permitted assessing variability across a single territory governed by the same health care and political rules. more importantly, these tools can easily be used every day as an on-the-fly monitoring procedure for assessing the current state of the pandemic and predicting its short-term evolution. updated estimations are published online every day at perso.ens-lyon.fr/patrice.abry and at barthes.enssib.fr/coronavirus/ixxi-sisyphe/. data were (and still are) automatically downloaded on a daily basis using routines written by ourselves. all tools have been developed in matlab™ and can be made available from the corresponding author upon motivated request. at the methodological level, the tool can be further improved in several ways. instead of using Ω(r) ≔ λ_time ‖d₂ r‖₁ + λ_space ‖r bᵀ‖₁ for the joint time and space regularization, another possible choice is to directly consider the matrix d₂ r bᵀ of joint spatio-temporal derivatives and to promote sparsity with an ℓ₁-norm, or structured sparsity with a mixed ℓ₂,₁ norm, e.g., ‖d₂ r bᵀ‖₂,₁ = Σ_t ‖(d₂ r bᵀ)(t)‖₂. as previously discussed, data collected in the course of a pandemic are prone to several causes of outliers. here, outlier preprocessing and reproduction-number estimation were conducted in two independent steps, which can turn out suboptimal.
they can be combined into a single step, at the cost of enlarging the representation space so as to split observations into true data and outliers, by adding an extra regularization term to the functional to minimize and devising the corresponding optimization procedure, which becomes nonconvex and hence far more complicated to address. moreover, when an epidemic model suggests a way to make use of several time series (such as, e.g., infected and deceased) for one same territory, the tool can straightforwardly be extended to a multivariate setting by a mild adaptation of the optimization problems, replacing the kullback-leibler divergence d_kl(z | r ⊙ Φz) by Σ_i d_kl(z_i | r ⊙ Φz_i). finally, automating a data-driven tuning of the regularization hyperparameters constitutes another important research track.

references:
- factors determining the diffusion of covid- and suggested strategy to prevent future accelerated viral infectivity similar to covid
- pooling data from individual clinical trials in the covid- era
- expected impact of lockdown in ile-de-france and possible exit strategies
- estimating the burden of sars-cov- in france
- the impact of a nation-wide lockdown on covid- transmissibility in italy
- measurability of the epidemic reproduction number in data-driven contact networks
- mathematical models in epidemiology
- a new framework and software to estimate time-varying reproduction numbers during epidemics
- the r package: a toolbox to estimate reproduction numbers for epidemic outbreaks
- improved inference of time-varying reproduction numbers during infectious disease outbreaks
- convex analysis and monotone operator theory in hilbert spaces
- image restoration: total variation, wavelet frames, and beyond
- proximal splitting methods in signal processing
- proximal algorithms. foundations and trends® in optimization
- wavelet-based image deconvolution and reconstruction
- different epidemic curves for severe acute respiratory syndrome reveal similar impacts of control measures
- epidemiological parameters of coronavirus disease : a pooled analysis of publicly reported individual data of cases from seven countries
- epidemiological characteristics of covid- cases in italy and estimates of the reproductive numbers one month into the epidemic
- nonlinear denoising for solid friction dynamics characterization
- sparsest continuous piecewise-linear representation of data
- the emerging field of signal processing on graphs: extending high-dimensional data analysis to networks and other irregular domains
- a first-order primal-dual algorithm for convex problems with applications to imaging
- proximal splitting algorithms: relax them all!
- fonctions convexes duales et points proximaux dans un espace hilbertien. comptes rendus de l'académie des sciences de paris
- a douglas-rachford splitting approach to nonsmooth convex variational signal recovery
- on the definition and the computation of the basic reproduction ratio r in models for infectious diseases in heterogeneous populations
- reproduction numbers and sub-threshold endemic equilibria for compartmental models of disease transmission

the map figures are produced using open resources from the openstreetmap foundation, whose contributors are gratefully acknowledged (map data © openstreetmap contributors). conceptualization: patrice abry, pablo jensen, patrick flandrin.
key: cord- - j p e authors: suomi, aino; schofield, timothy p.; butterworth, peter title: unemployment, employability and covid-19: how the global socioeconomic shock challenged negative perceptions toward the less fortunate in the australian context date: - - journal: front psychol doi: . /fpsyg. . sha: doc_id: cord_uid: j p e

unemployed benefit recipients are stigmatized and generally perceived negatively in terms of their personality characteristics and employability. the covid-19 economic shock led to rapid public policy responses across the globe to lessen the impact of mass unemployment, potentially shifting community perceptions of individuals who are out of work and rely on government income support. we used a repeated cross-sections design to study change in the stigma tied to unemployment and benefit receipt in a pre-existing pre-covid-19 sample (n = ) and a sample collected during the covid-19 pandemic (n = ), using a vignette-based experiment. participants rated the attributes of characters who were described as employed, working poor, unemployed, or receiving unemployment benefits. the results show that, compared to employed characters, unemployed characters were rated substantially less favorably at both time points on their employability and personality traits. the difference in perceptions of the employed and the unemployed was, however, attenuated during covid-19, with benefit recipients perceived as more employable and more conscientious than pre-pandemic. these results add to knowledge about the determinants of welfare stigma, highlighting the impact of the global economic and health crisis on perceptions of others.

the onset of the covid-19 pandemic saw unemployment climb to the highest rates since the great depression in many regions globally. over just one month, from march to april 2020, the unemployment rate in the united states increased from . % to over . %, and in australia the effective rate of unemployment increased from . to . % (australian bureau of statistics, ). in australia, a number of economic responses were rapidly introduced, including a wage subsidy scheme (jobkeeper) to enable employers to keep their employees connected to the workforce, one-off payments to many welfare recipients, and a doubling of the usual rate of unemployment benefits (jobseeker payment) through a new coronavirus supplement payment. at the time of writing, in july 2020, many countries, including australia, remain in the depths of a health and economic crisis. a rich research literature from a range of disciplines has documented the pervasive negative community views of those who are unemployed and receiving unemployment benefits, with the extent of this "welfare stigma" being particularly pronounced in countries with highly targeted benefit systems such as the united states and australia (fiske et al., ; baumberg, ; contini and richiardi, ; schofield and butterworth, ). the stigma and potential discrimination associated with unemployment and benefit receipt are known to have negative impacts on health, employability and equality (for meta-analyses, see shahidi et al., ). in addition, the receipt of unemployment benefits co-occurs with other stigmatized characteristics such as poverty and unemployment (schofield and butterworth, a). the changing context related to the covid-19 crisis provides a novel opportunity to better understand the determinants of stigmatizing perceptions of unemployment and benefit receipt.
negative community attitudes and perceptions of benefit recipients are commonly explained by the concept of "deservingness" (van oorschot and roosma, ). the unemployed are typically seen as less deserving of government support than other groups because they are more likely to be seen as responsible for their own plight, ungrateful for support, not in genuine need (petersen et al., ; van oorschot and roosma, ), and lacking reciprocity (i.e., seen as taking more than they have given, or will give, back to society; van oorschot, ; larsen, ; petersen et al., ; aarøe and petersen, ). given the economic shock associated with covid-19, unemployment and reliance on income support are less likely to be seen as outcomes within the individual's control, which may therefore amplify perceptions of deservingness. prior work has shown that experimentally manipulating perceived control over circumstances does indeed change negative stereotypes (aarøe and petersen, ). a number of experimental paradigms have been used to investigate perceptions of "welfare recipients" and the "unemployed." the stereotype content model (scm; fiske et al., ), for example, represents the stereotypes of social groups on two dimensions: warmth, relating to being friendly and well-intentioned (rather than ill-intentioned); and competence, relating to one's capacity to pursue intentions (fiske et al., ). using this model, the "unemployed" have been evaluated as low in warmth and competence across a variety of welfare regime types (fiske et al., ; bye et al., ). the structure of stereotypes has also been studied using the big five personality dimensions (schofield and butterworth, b; schofield et al., ): openness, conscientiousness, extraversion, agreeableness, and emotional stability (for background on the big five see: goldberg, ; hogan et al., ; saucier and goldberg, ; mccrae and terracciano, ; srivastava, ; chan et al., ; löckenhoff et al., ). there are parallels between the big five and the scm: warmth relates to the dimension of agreeableness, and competence to conscientiousness (digman, ; ward et al., ; cuddy et al., ; abele et al., ), and these constructs have been found to predict employability and career success (barrick et al., ; cuesta and budría, ). warmth and agreeableness have also been linked to the welfare-specific characteristic of deservingness (aarøe and petersen, ). the term "employability" has previously been defined as a set of achievements, skills and personal attributes that make a person more likely to gain employment and to succeed in their chosen career pathway (pegg et al., ; o'leary, , ). while there are few studies examining perceptions of others' employability, perceptions of one's own employability have recently been studied in university students and jobseekers (atitsogbe et al., ) and in currently employed workers (plomp et al., ; yeves et al., ), consistently showing that higher levels of perceived employability are linked to personal and job-related wellbeing as well as career success. examining others' perceptions of employability may, however, be more relevant to understanding the factors that impact actual employment outcomes; the majority of studies examining others' perceptions of employability have focused on job-specific skills (lowden et al., ; dhiman, ; saad and majid, ).
building on this previous work, our own research has focused on the effects of unemployment, drawing on the big five, scm and employability frameworks in pre-covid-19 samples (schofield and butterworth, b; schofield et al., ). our studies consistently show that unemployed individuals receiving government payments are perceived as less employable (poorer "quality" workers and less desirable for employment) and less conscientious. we found a similar but weaker pattern for agreeableness, emotional stability, and the extent to which a person is perceived as "uniquely human" (schofield et al., ). further, we found that vignette characters described as currently employed but with a history of welfare receipt were indistinguishable from those described as employed with no reference to benefit receipt (schofield et al., ). findings such as these provide experimental evidence that welfare stigma is malleable and can be challenged by information inconsistent with the negative stereotype (schofield and butterworth, b; schofield et al., ; see also petersen et al., ). the broad aim of the current study was to extend this previous work by examining the impact of covid-19 on person perceptions tied to employment and benefit recipient status. it repeats a pre-covid-19 study of an australian general population sample in the covid-19 context, drawing on the same sampling frame, materials and study design to maximize comparability. the study design recognizes that the negative perceptions of benefit recipients may reflect a combination of different sources of stigma: poverty, lack of work, and benefit receipt. therefore, the original study used four different conditions to seek to differentiate these sources: (1) employed; (2) working poor; (3) unemployed; and (4) unemployed benefit recipient. for the covid-19 sample, we added a novel fifth condition: (5) unemployment benefit recipient also receiving the "coronavirus" supplement. the study capitalizes on a major exogenous event, the covid-19 crisis, which we hypothesize altered perceptions of deservingness by fundamentally challenging social identities and perceptions of one's own vulnerability to unemployment. the study tests three hypotheses and, in doing so, makes an important empirical and theoretical contribution to understanding how deservingness influences person perception and the potential "real world" barriers experienced by people seeking employment in the covid-19 context. the pre-covid-19 assessment uses a subset of data from a pre-registered study, but this reuse of the data was not itself preregistered. first, we hypothesize that, at time 1 (the pre-covid-19 assessment), employed characters will be rated more favorably than characters described as unemployed and receiving unemployment benefits, particularly on the dimensions of conscientiousness and worker and boss suitability. moreover, we expect a gradient in perceptions across the four experimental conditions, from employed, to working poor, to unemployed, to unemployed receiving benefits, and a similar trend for the other outcome measures included in the study. second, we hypothesize that the characters in the unemployed condition(s) will be rated less negatively relative to the employed condition(s) at time 2 (covid-19), compared to time 1.
accordingly, we predict a two-way interaction between time and condition for the key measures (conscientiousness, worker and boss suitability) and a similar trend for the other outcomes. third, we expect that explicit reference to the unemployed benefit character receiving the "coronavirus supplement" payment will increase the salience of the covid-19 context and lead to more positive ratings of this character relative to the standard unemployed benefit conditions in the pre-covid-19 and covid-19 occasions. two general population samples (pre-covid-19 and covid-19) were recruited from the same source: the australian online research unit (oru) panel. the oru is an online survey platform that provides access to a cohort of members of the general public who are interested in contributing to research. the oru randomly selects potential participants who meet study eligibility criteria, and provides participants with an incentive for their participation. the sample for the time 1 (pre-covid-19) occasion was part of a larger study ( participants) collected in november 2019. from this initial dataset, we were able to use data from participants ( . % female, m age = . [ . ] years, range: - ) who were presented with the one vignette scenario that we could replicate under the social restrictions applicable in the covid-19 context (i.e., the vignette character was not described as going out and visiting friends, as these behaviors were prohibited at time 2). the sample for time 2 (covid-19) was collected in may-june 2020, at the height of the lockdown measures in australia, and included participants ( . % female, m age = . [ . ] years, range: - ). the two samples were broadly similar (see below), though the proportion of male participants at time 2 was greater than at time 1. the pre-covid-19 assessment at time 1 was restricted to those participants who completed the social-distancing-consistent vignette first, to avoid potential order/context effects. this provided, on average, respondents in each of the four experimental conditions. using the results from our previously published studies as indicators of effect size (schofield and butterworth, b; schofield et al., ), monte carlo simulation was used to identify the time 2 sample size that would provide % power to detect an interaction effect representing a % decline, at the covid-19 occasion relative to the pre-covid-19 difference, in the difference between the two employment and the two unemployment conditions on the three key measures. this sample size of per condition also provided between and % power to detect a difference of similar magnitude between the employed and unemployment benefit conditions across the two measurement occasions. given previous evidence that the differences between the employed and unemployed/welfare conditions are robust and large for conscientiousness and worker suitability (schofield and butterworth, b), the current study is also adequately powered to detect the most replicable effects of unemployment and welfare receipt on perceptions of a person's character, even in the absence of the hypothesized interaction effect.
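the simulation code is not reproduced in the text; a minimal sketch of such a monte carlo power calculation, with purely illustrative effect sizes, cell counts, and function names of our own, could look as follows:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def interaction_power(n_per_cell=80, effect=0.5, attenuation=0.5,
                      sd=1.0, n_sims=2000, alpha=0.05):
    # cell means: the employed/unemployed gap shrinks by `attenuation`
    # at time 2; all values here are illustrative, not the study's
    means = {(0, 0): 0.0, (1, 0): -effect,
             (0, 1): 0.0, (1, 1): -effect * (1.0 - attenuation)}
    hits = 0
    for _ in range(n_sims):
        y, cond, time = [], [], []
        for (c, t), m in means.items():
            y.append(rng.normal(m, sd, n_per_cell))
            cond += [c] * n_per_cell
            time += [t] * n_per_cell
        y = np.concatenate(y)
        X = np.column_stack([np.ones_like(y), cond, time,
                             np.multiply(cond, time)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        dof = len(y) - X.shape[1]
        cov = (resid @ resid / dof) * np.linalg.inv(X.T @ X)
        t_stat = beta[3] / np.sqrt(cov[3, 3])  # interaction coefficient
        hits += 2 * stats.t.sf(abs(t_stat), dof) < alpha
    return hits / n_sims
```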
the procedures were identical on both study occasions. participants read a brief vignette that described a fictional character and then rated the character on measures reflecting personality dimensions, their suitability as a worker or boss, morality, warmth, and competence, and the participant's beliefs that the character should feel guilt and shame, or feelings of anger and disgust toward the character. at time 1 (the pre-covid-19 context), participants then repeated this process with a second vignette, but we do not consider data from the second vignette here. the key experimental conditions were operationalized by a single sentence embedded within the vignette and randomly allocated across participants (employed: "s/he is currently working as a sales assistant in a large department store"; working poor: "s/he is currently working as a sales assistant, on a minimum-wage, in a large department store"; unemployed: "s/he is currently unemployed"; receipt of unemployment benefits: "s/he is currently unemployed, and is receiving government benefits due to his/her unemployment"). the four experimental conditions were identical at both time points. at time 2, an additional covid-19-specific condition was included (to maximize the salience of the covid-19 context): "s/he is currently unemployed and is receiving government benefits, including the coronavirus supplement, due to his/her unemployment." the working poor, unemployed, and benefit conditions all imply poverty or low income. in australia, few minimum-wage jobs are supplemented by tips, and so a minimum-wage job indicates a degree of relative poverty: a full-time worker in a minimum-wage job is in the bottom quartile of income earners (australian bureau of statistics, ). prior to the covid-19 crisis and the increase in payment level, a single person with no dependents receiving unemployment benefits received approximately % of the minimum wage in cash assistance; during covid-19, and at the time of the data collection, the rate of payment exceeded the minimum wage. several characteristics of the vignette character, including age and relationship status, were balanced across study participants. age was specified as either or years; relationship status was either "single" or "lives with his/her partner"; the character's gender was also varied, and names were stereotypically white. for time 1, the manipulated characteristics yielded 32 unique vignettes, comprising four key experimental conditions (employed, working poor, unemployed, and unemployment benefits) × 2 ages × 2 genders × 2 relationship statuses. for time 2, the manipulated characteristics yielded 40 unique vignettes, comprising five key experimental conditions (employed, working poor, unemployed, unemployment benefits, and unemployed + coronavirus supplement) × 2 ages × 2 genders × 2 relationship statuses; a sketch of this factorial enumeration is given below. the vignette template construction is presented in figure 1, including each component of the vignette that was randomly varied (figure 1: outline of vignette construction in parts; bullet-pointed options replace the underlined text, with gendered pronouns in each option selected to match the character name). in both studies, participants were required to affirm consent after debriefing or had their data deleted. participant comprehension of the vignettes was checked via three free-response comprehension questions about the character's age and weekend activities; participants who did not answer any of these questions correctly were unable to continue the study.
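the factorial design described above is straightforward to enumerate; a sketch (the ages shown are placeholders for the values elided in the text):

```python
from itertools import product

conditions_t2 = ["employed", "working poor", "unemployed",
                 "unemployment benefits", "benefits + coronavirus supplement"]
ages = [25, 45]            # placeholder ages (elided in the text)
genders = ["male", "female"]
relationships = ["single", "partnered"]

# 5 x 2 x 2 x 2 = 40 unique vignettes at time 2; dropping the fifth
# condition gives the 4 x 2 x 2 x 2 = 32 vignettes used at time 1
vignettes = list(product(conditions_t2, ages, genders, relationships))
```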
personality, employability (suitability as a worker or boss), communion and agency, cognitive and emotional moral judgments, and dehumanization were included as the study outcomes. while not all personality or character dimensions can be considered negative or positive, higher scores in this study indicate more "favorable" perceptions of the characters by the participants. the ten-item personality inventory was used to measure the big five (gosling et al., ), adapted to other-oriented wording (i.e., "i felt like the person in the story was. . .") (schofield et al., ). two items measured each trait via two paired attributes, one containing positive attributes and one containing negative attributes. participants indicated the extent to which "i think [name] is [attributes]" from (strongly disagree) to (strongly agree), with the order of items randomized. agreeableness (α = . ) was assessed from "sympathetic, warm" and "critical, quarrelsome" (reversed); extraversion (α = . ) from "extraverted, enthusiastic" and "reserved, quiet" (reversed); conscientiousness (α = . ) from "dependable, self-disciplined" and "disorganized, careless" (reversed); openness to experience (α = . ) from "open to new experiences, complex" and "conventional, uncreative" (reversed); and emotional stability (α = . ) from "calm, emotionally stable" and "anxious, easily upset" (reversed). two single-item measures, "i think [name] would be a good worker" (worker suitability) and "i think [name] would be a good boss" (boss suitability), were rated on the same scale as the personality measure and presented in random order; higher scores indicated better employability. communion and agency were assessed using bocian et al.'s ( ) adaptation of abele et al.'s ( ) scale, which measures the fundamental dimensions of communion and agency using two subscales for each dimension. the morality and warmth subscales are measures of communion (referred to as warmth in the scm; fiske, ), while the competence and assertiveness subscales measure agency (what fiske refers to as competence in the scm). this subscale structure has been identified in multiple samples. participants indicated the extent to which "i think [name] [attributes]" from (not at all) to (very much so). morality (α = . ) was measured with six items, e.g., "is just," "is fair"; warmth (α = . ) with six items, e.g., "is caring," "is empathetic"; competence (α = . ) with five items, e.g., "is efficient," "is capable"; and assertiveness (α = . ) with six items, e.g., "is self-confident," "stands up well under pressure." these items were presented in a random order. based on prior research, dehumanization was measured with a composite of two items drawn from bastian et al. ( ): "i think [name] is mechanical and cold, like a robot" and "i think [name] lacked self-restraint, like an animal," presented in random order. we reverse-coded the two items so that higher scores were indicative of more favorable perceptions. moral emotions were measured by four items asking about emotional responses to the character, framed as self-condemning or other-condemning (haidt, ; giner-sorolla and espinosa, ). the two other-condemning items asked participants about their own emotional response to the character in the vignette (anger: "[name]'s behavior makes me angry"; disgust: "i think [name] is someone who makes me feel disgusted"; α = . ). the two self-condemning items asked about the character's emotional response (guilt: "[name] should feel guilty about [his/her] behavior"; shame: "i think [name] should feel ashamed of [him/her]self"; α = . ). we reverse-coded the two scales for consistency with the other variables, so that higher scores were indicative of more favorable perceptions.
with the exception of the moral emotion and communion and agency scales, which are new to this study, and the previously tested openness to experience, our previous research has demonstrated differences between ratings of employed and unemployed characters on the included outcome measures (schofield and butterworth, b; schofield et al., ). we undertook the analysis in four steps, using mixed-effects multilevel models with the outcome measures nested within participants and predicted by fixed (between-person) terms representing the experimental condition, time (pre-covid-19 vs. covid-19) and their interaction, controlling for measure differences and allowing for random effects at the participant level: (i) we initially assessed the effect of condition on the pre-covid-19 occasion to establish the baseline pattern of results; (ii) we then evaluated the interaction term and, specifically, the extent to which the baseline difference observed between the employment and unemployment conditions was attenuated at time 2 (the covid-19 occasion); (iii) we tested the three-way interaction between condition, occasion and measure to assess whether this two-way interaction varied across the outcome measures; and, (iv) where it did, we repeated the modeling approach using separate linear regression models for each outcome measure. our initial model contrasts the two employed conditions (employed and working poor) with the two unemployed conditions (unemployed and benefit receipt). the second model examines the four vignette conditions separately, differentiating between the unemployed and unemployed benefit conditions. finally, we contrast the three unemployment benefit conditions: (1) unemployment benefit recipients at time 1; (2) unemployment benefit recipients at time 2; and (3) unemployment benefit recipients receiving the coronavirus payment at time 2. for all models, we consider unadjusted and adjusted results (controlling for participant demographics). to address potential bias from gender differences between the samples, post-stratification weights were calculated for the covid-19 sample to reflect the gender-by-age distribution of the pre-covid-19 sample; all models were weighted.
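the multilevel specification maps directly onto standard software; a hypothetical sketch using python's statsmodels follows (the variable and file names are ours, not the authors'; note that mixedlm does not itself handle post-stratification weights, which would require separate treatment):

```python
import pandas as pd
import statsmodels.formula.api as smf

# long format: one row per participant x outcome measure
# assumed columns: rating, condition, time, measure, participant_id
df = pd.read_csv("ratings_long.csv")

model = smf.mixedlm(
    "rating ~ C(condition) * C(time) + C(measure)",  # fixed effects
    data=df,
    groups=df["participant_id"],                     # random intercept per rater
)
result = model.fit()
print(result.summary())
```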
the two samples from time 1 (pre-covid-19) and time 2 (covid-19) were comparable on all demographic variables except gender (χ²[ , ] = . , p < . ) and employment (χ²[ , ] = . , p < . ): the gender distribution was more balanced at time 2, with . % males, compared to . % males at time 1, and there was a significant increase in unemployment, with . % of time 2 participants out of work compared to . % of time 1 participants; the latter likely reflects the near doubling of the unemployment rate in australia during the covid-19 crisis. bivariate correlations showed significant positive correlations between all outcomes (p's < . ), except for extraversion, which was only positively correlated with emotional stability, boss suitability, warmth, assertiveness, and competence (p's < . ). the results, both adjusted and unadjusted, from the initial overall multilevel model, using a binary indicator of whether vignette characters were employed (the employed or working poor conditions) or unemployed (the unemployed or welfare conditions) and testing the interaction between vignette condition and time (pre-covid-19 vs covid-19), are presented in supplementary table s . the adjusted results (holding participant age, gender, employment, and education constant) indicated that the unemployed characters were rated lower than the employed characters at time 1 (b = − . ). this difference in the ratings of employed and unemployed characters was reduced in the covid-19 assessment at time 2, declining from . to . across all the outcome measures. the addition of the three-way interaction between condition, time and outcome measure significantly improved overall model fit, χ²( ) = . , p < . , indicating that the interaction between condition and time varied over measures. a series of separate regression models considering each outcome separately (see supplementary table s ) showed a significant effect of condition (employed rated higher than unemployed) at time 1 (pre-covid-19) for all outcomes except openness and extraversion. the lower ratings of unemployed relative to employed characters were significantly moderated at time 2 for the competence, worker and boss suitability, and guilt/shame outcomes (p's < . ). the next set of analyses considered the four separate vignette conditions, differentiating between the unemployed and unemployed benefit recipient conditions. the overall mixed-effects multilevel model incorporating the four distinct vignette conditions provided evidence of significant effects of condition and condition by time in both adjusted and unadjusted models. the results for the adjusted model (table ), averaged across the various outcomes, replicated the previous finding of a difference in the ratings of employed and unemployed characters at time 1 (pre-covid-19): relative to the employed condition, there was no difference in the ratings of the working poor, but the unemployed and the unemployed benefit recipient characters were rated less favorably. there was some evidence of a gradient across the unemployed characters: the average rating of the unemployed condition was higher than that of the unemployed benefit condition, though this difference was not statistically significant. in the presence of the interaction effect, the non-significant effect of time shows that, averaged across all the outcome measures, there was no difference between the pre-covid-19 and covid-19 occasions in the ratings of the characters in the employed condition. we tested the effects of sociodemographic characteristics as covariates in the adjusted models (employment and benefit receipt status, education, age, and gender) but found no main effects of any of the covariates except gender: female participants tended to rate characters higher (b = . , % ci [ . , . ]) than male participants. testing the heterogeneity of these patterns across outcomes via the inclusion of a three-way interaction between vignette condition, occasion and measure significantly improved overall model fit, χ²( ) = . , p < . , prompting analysis of each outcome separately. the separate linear regressions for each outcome measure (supplementary table s ) showed that ratings of unemployed benefit recipients at time 1 (pre-covid-19) were significantly lower than those of the employed characters for all outcomes except openness and extraversion. statistically significant condition-by-time terms indicated that the unemployed benefit effect was moderated at time 2 (covid-19) for the three key outcome measures identified in previous research (conscientiousness, worker and boss suitability) and for the measure of guilt and shame. the figure depicts this interaction for these four outcomes, which occurred in two profiles. for conscientiousness and worker and boss suitability, covid-19 attenuated the negative perceptions of unemployed relative to employed characters, providing support for hypothesis 2.
by contrast, covid-19 induced a new difference: participants thought employed characters should feel higher levels of guilt and shame at time 2 compared to time 1, whereas the ratings of unemployed characters on this measure were consistent across the two occasions. while the "working poor" condition was not central to the covid-19 hypotheses, we note that we found no evidence that ratings of these characters on any outcome differed from those of the standard employed character, or that any such difference changed at the time 2 (covid-19) assessment. the inclusion of the fifth, covid-19-specific unemployment benefit condition did not generate more positive (or different) ratings than the standard unemployment benefit condition. overall mixed-effects multilevel models, both adjusted and unadjusted, indicated that characters in the coronavirus supplement condition (adjusted model: b = . , % ci [ . , . ]) and the general unemployed benefit recipient condition at time 2 (adjusted model: b = . , % ci [ . , . ]) were both rated more favorably than unemployed benefit recipients at time 1; there was no difference between the two time 2 benefit recipient groups (b = . , % ci [− . , . ]). these results did not support hypothesis 3. previous research has demonstrated that people who are unemployed, and particularly those receiving unemployment benefits, are perceived more negatively and as less employable than those who are employed. however, the economic shock associated with the covid-19 crisis is likely to have challenged people's sense of their own vulnerability and risk of unemployment, and altered their perceptions of those who are unemployed and receiving government support. the broad aim of the current study was to examine the potential effect of this crisis on person perceptions tied to employment and benefit recipient status. we did this by presenting brief vignettes describing fictional characters, manipulating key experimental conditions related to employment status, and asking study participants to rate the characters' personality and capability, contrasting results from two cross-sectional general population samples collected before and during the covid-19 crisis. the pre-covid-19 assessment replicated our previous findings (e.g., schofield and butterworth, b), showing that employed characters are perceived more favorably than those who are unemployed and receiving government benefits on measures of conscientiousness and suitability as a worker; these findings supported hypothesis 1. in comparison, the assessment conducted during the covid-19 crisis showed that unemployed and employed characters were viewed more similarly on these same key measures, with a significant interaction effect providing support for hypothesis 2. our third hypothesis, that reference to the coronavirus supplement (an additional form of income support introduced during the pandemic) would enhance ratings of unemployed benefit recipients at the second assessment occasion, was not supported: benefit recipients at time 2 were rated more favorably than the benefit group at time 1, irrespective of whether this covid-19-specific payment was referenced, suggesting that the broader context in which the study was conducted was responsible for the change in perceptions. we sampled participants from the same population, used identical experimental procedures, and found no difference over time in the ratings of employed characters on the key outcome measures of employability (worker and boss suitability) and conscientiousness.
the more favorable ratings of unemployed and benefit-receiving characters at time are likely to reflect how the exogenous economic shock brought about by the covid crisis challenged social identities and the stereotypes held of others (see https://pursuit.unimelb.edu.au/articles/our-changing-identities-under-covid- ). the widespread impact and uncontrollable nature of this event are inconsistent with pre-covid views that attribute ill-intent to those receiving unemployment benefits (fiske et al., ; baumberg, ; contini and richiardi, ; bye et al., ) . we suggest the changing context altered perceptions of the "deservingness" of people who are unemployed, as unemployment in the context of covid is less indicative of personal failings or a result of one's "own doing" (petersen et al., ; van oorschot and roosma, ) . it is important to recognize, however, that while the negative perceptions of unemployed benefit recipients were attenuated in the covid assessment, these characters continued to be rated less favorably than those who were employed on the key outcome measures.

in contrast to our findings on the key measures of employability and conscientiousness, the previous and current research is less conclusive for the other outcome measures. the current study showed a broadly consistent gradient in the perception of employed and unemployed characters for all outcome measures apart from openness and extraversion. findings on these other measures have been weaker and inconsistent across previous studies (schofield and butterworth, b; schofield et al., ) , and the current experiment was not designed with sufficient power to demonstrate interaction effects for these measures. there was, however, one measure that showed significant divergence from the expected profile of results. a significant interaction term suggested that study participants at the time (covid) assessment reported that the employed characters should feel greater levels of guilt and shame than those who participated in the pre-covid assessment. in contrast, there was consistency in the ratings of unemployed characters on this measure across the two assessment occasions. while not predicted, these results are also interpretable in the context of the pervasive job loss that accompanied the covid crisis. haller et al. ( ) , for example, argue that the highly distressing, morally difficult, and cumulative nature of covid-related stressors presents a perfect storm for guilt and shame responses. the context of mass job losses may leave "surviving" workers feeling increasingly guilty.

the main findings of the current study are consistent with previous experimental studies that show that the stereotypes of unemployed benefit recipients are malleable (aarøe, ; schofield et al., ) . these previous studies, however, have demonstrated malleability by providing additional information about unemployed individuals that was inconsistent with the unemployed benefit recipient stereotype (e.g., the external causes of their unemployment). in contrast, the current study did not change how the vignette characters were presented or the experimental procedures. rather, we assessed how the changing context in which study participants were living had altered their perceptions: suggesting the experience of covid altered stereotypical views held by study participants rather than presenting information about the character that would challenge the applicability of the benefit recipient stereotype in this instance.
perceptions and stereotypes of benefit recipients can be reinforced (and potentially generated) by government actions and policies. structural stigma can be used as a policy tool to stigmatize benefit receipt as a strategy to reduce dependence on income support and encourage workforce participation (moffitt, ; stuber and schlesinger, ; baumberg, ; contini and richiardi, ; garthwaite, ) . in the current instance, however, the australian government acted quickly to provide greater support to australians who lost their jobs (e.g., doubling the rate of payment, removing mandatory reporting to the welfare services), and this may have reduced the stigmatizing structural features of the income support system and contributed to the changed perceptions of benefit recipients identified in this study.

the current study took advantage of a natural experimental design and replicated a pre-covid study during the covid crisis. the study is limited by the relatively small sample size at time , which was not designed for the current purposes but was collected as part of another study. we were not able to include most of the participants from the original time study, as most of the experimental conditions described activities that were illegal or inconsistent with recommended activity at the time of the covid lockdown and social restriction measures. finally, the data collection for the current study occurred very quickly after the initial and sudden covid lockdowns and economic shock, which is both a strength and a limitation for the generalizability of the results. the pattern of results using the same sampling frame offers compelling support for our hypothesis that the shared economic shock and increase in unemployment attenuate stigmatizing community attitudes toward those who need to receive benefits. our current conclusions would be further strengthened by a subsequent replication when the public health and economic crises stabilize, to test whether pre-covid perceptions return.

the current study provides novel information about the impact of the covid health and economic crisis, and the impact of the corresponding policy responses, on community perceptions. this novel study shows how community perceptions of employment and benefit recipient status have been altered by the covid pandemic. these results add to knowledge about the determinants of welfare stigma, particularly relating to employability, highlighting societal-level contextual factors.

the raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. the studies involving human participants were reviewed and approved by melbourne university human research ethics committee. the patients/participants provided their written informed consent to participate in this study. as led the review conceptualized by ts and pb. as and pb conducted the analyses and wrote up the review. ts led the data collection, reviewed and edited the manuscript, and provided data management support. this manuscript is based on previous extensive work by ts and pb on stereotypes toward the unemployed and welfare benefit recipients. all authors contributed to the article and approved the submitted version. this study was funded by the australian research council (arc) grant # dp .
investigating frame strength: the case of episodic and thematic frames
crowding out culture: scandinavians and americans agree on social welfare in the face of deservingness cues
facets of the fundamental content dimensions: agency with competence and assertiveness-communion with warmth and morality
perceived employability and entrepreneurial intentions across university students and job seekers in togo: the effect of career adaptability and self-efficacy
employment and unemployment: international perspective. australia: labour force
personality and performance at the beginning of the new millennium: what do we know and where do we go next?
the roles of dehumanization and moral outrage in retributive justice
three ways to defend welfare in britain
the mere liking effect: attitudinal influences on attributions of moral character
stereotypes of norwegian social groups
stereotypes of age differences in personality traits: universal and accurate?
reconsidering the effect of welfare stigma on unemployment
warmth and competence as universal dimensions of social perception: the stereotype content model and the bias map
unemployment persistence: how important are non-cognitive skills?
employers' perceptions about tourism management employability skills
higher-order factors of the big five
stereotype content: warmth and competence endure
a model of (often mixed) stereotype content: competence and warmth respectively follow from perceived status and competition
fear of the brown envelope: exploring welfare reform with long-term sickness benefits recipients
social cuing of guilt by anger and of shame by disgust
the structure of phenotypic personality traits
a very brief measure of the big-five personality domains
the moral emotions
a model for treating covid- -related guilt, shame, and moral injury
personality measurement and employment decisions: questions and answers
knowledge network hubs and measures of research impact, science structure, and publication output in nanostructured solar cell research
gender stereotypes of personality: universal and accurate?
employers' perceptions of the employability skills of new graduates. london: edge foundation
universal features of personality traits from the observer's perspective: data from cultures
an economic model of welfare stigma
graduates' experiences of, and attitudes towards, the inclusion of employability-related support in undergraduate degree programmes; trends and variations by subject discipline and gender
gender and management implications from clearer signposting of employability attributes developed across graduate disciplines
pedagogy for employability
deservingness versus values in public opinion on welfare: the automaticity of the deservingness heuristic
psychological safety, job crafting, and employability: a comparison between permanent and temporary workers
employers' perceptions of important employability skills required from malaysian engineering and information and communication technology (ict) graduates
the language of personality: lexical perspectives
patterns of welfare attitudes in the australian population
are negative community attitudes toward welfare recipients associated with unemployment? evidence from an australian cross-sectional sample and longitudinal cohort
community attitudes toward people receiving unemployment benefits: does volunteering change perceptions?
the persistence of welfare stigma: does the passing of time and subsequent employment moderate the negative perceptions associated with unemployment benefit receipt?
does social policy moderate the impact of unemployment on health? a multilevel analysis of welfare states
the five-factor model describes the structure of social perceptions
sources of stigma for means-tested government programs
who should get what, and why? on deservingness criteria and the conditionality of solidarity among the public
the social legitimacy of targeted welfare
measurement of agency, communion, and emotional vulnerability with the personal attributes questionnaire
age and perceived employability as moderators of job insecurity and job satisfaction: a moderated moderation model

the supplementary material for this article can be found online at: https://www.frontiersin.org/articles/ . /fpsyg. . /full#supplementary-material

key: cord- -y bd zz authors: rishu, asgar h.; marinoff, nicole; julien, lisa; dumitrascu, mariana; marten, nicole; eggertson, shauna; willems, su; ruddell, stacy; lane, dan; light, bruce; stelfox, henry t.; jouvet, philippe; hall, richard; reynolds, steven; daneman, nick; fowler, robert a. title: time required to initiate outbreak and pandemic observational research()() date: - - journal: j crit care doi: . /j.jcrc. . . sha: doc_id: cord_uid: y bd zz

purpose: observational research focused upon emerging infectious diseases such as ebola virus, middle east respiratory syndrome, and zika virus has been challenging to quickly initiate. we aimed to determine the duration of start-up procedures and barriers encountered for an observational study focused upon such infectious outbreaks. materials and methods: at pediatric and adult intensive care units, we measured durations from protocol receipt to a variety of outbreak research milestones, including research ethics board (reb) approval, data sharing agreement (dsa) execution, and patient study screening initiation. results: the median (interquartile range) time from site receipt of the protocol to reb submission was ( - ) days; to reb approval, ( - ) days; to dsa completion, ( - ) days; and to study screening initiation, ( - ) days. the median time from reb submission to reb approval was ( - ) days. the median time for all start-up procedures was ( - ) days. conclusions: there is a lengthy start-up period required for outbreak-focused research. completing dsas was the most time-consuming step. a reactive approach to newly emerging threats such as ebola virus, middle east respiratory syndrome, and zika virus will likely not allow sufficient time to initiate research before most outbreaks are advanced.

new emerging and reemerging infections such as ebola virus, middle east respiratory syndrome (mers-cov), and zika virus are a concern for the public, clinicians, health systems, and public health agencies. outbreaks and pandemics are perceived to occur at increasing frequency; however, they remain unpredictable in their time and location of onset [ ] . outbreaks increase patient morbidity and mortality, and cause additional burden on health care workers, facilities, and health agencies [ ] [ ] [ ] . surveillance can identify cases at an early stage and lead to prevention of broader spread.
severe acute respiratory syndrome [ ] ; pandemic influenza a (h n ) - [ ] ; and, more recently, ebola virus [ ] , mers-cov [ ] , and zika virus have been characterized by challenges initiating observational research and a near inability to rapidly undertake the interventional trials necessary to inform best practice and improve care of patients [ ] [ ] [ ] . this has prompted calls from patients, clinicians, funders, and policy makers to improve preparedness, including the capacity to undertake real-time research during such events. however, conducting studies and trials involves time-consuming start-up steps such as development of a study protocol, establishing a budget and obtaining funding, research ethics board (reb) approval, organizing multisite collaboration, and data sharing agreements. the objective of this study was to determine the delay from protocol completion to study initiation and the time spent in each of the necessary steps to identify and collect data in real time for new and emerging infection-related critical illness.

this is a time-in-motion study accompanying a prospective surveillance project to assess the feasibility of screening and real-time data collection for severe acute respiratory infection (sari)- and outbreak-related critical illness. the parent prospective study aimed to screen all hospitalized critically ill patients on a daily basis for up to hours after admission to detect all cases of sari, the details of which are published elsewhere [ ] . the study included pediatric and adult intensive care units (icus) across canadian provinces. paper and electronic case report forms and daily and weekly screening log sheets were made available to all the sites to be used for data collection (appendix). the study was approved by each participating site's reb and was funded by the public health agency of canada, canadian critical care trials group, and heart and stroke foundation (ontario office). for the purpose of this study, the following data were collected: time required from protocol receipt by the site to reb submission, time required from reb submission to reb approval, time required from reb approval to data sharing agreement execution, time required from data sharing agreement execution to screening initiation, time required from protocol receipt to data sharing agreement execution, time required from protocol receipt to screening initiation, and overall time required for start-up procedures. categorical variables are presented as numbers and proportions. durations are presented as medians, interquartile ranges (iqr), and ranges. all statistical tests were -tailed, and the significance level was set at p < . .

table shows the median time required in each step along the pathway to initiate an observational study of outbreak surveillance in icus. overall start-up procedures required a median (iqr) of ( - ) days (range, - ). median (iqr) duration from protocol receipt to reb submission was ( - ) days (range, - ) and protocol receipt to reb approval was ( - ) days (range, - days). time from protocol receipt to data sharing agreement receipt was ( - ) days (range, - ), protocol receipt to signed data sharing agreement was ( - ) days (range, - ), and protocol receipt to screening initiation was ( - ) days (range, - ). time from reb submission to reb approval was ( - ) days (range, - ), reb approval to data sharing agreement completion was ( - ) days (range, - ), and reb approval to screening initiation was ( - ) days (range, - ).
time from data sharing agreement receipt to data sharing agreement completion was ( - ) days (range, - ), and data sharing agreement completion to screening initiation was ( - ) days (range, - ) (fig. ).

in this multicenter study of severe acute respiratory infections, we observed that it took nearly year to complete all necessary start-up procedures before enrolment in the study could begin at all sites. obtaining an interinstitutional legal data sharing agreement required approximately months from protocol receipt to completion-the most time-consuming process. it took sites approximately ½ months after protocol receipt to be ready to submit to their reb, yet only approximately ½ months for reb approval. our findings indicate that despite an existing in-icu infrastructure and capability for real-time data collection and reporting, observational research during an outbreak or pandemic is at risk of failing because of the time required for start-up procedures. seasonal influenza outbreaks provide a compelling annual example: if we do not initiate the study start-up process immediately after one influenza season, we will not be ready for screening at the next. the time necessary for appropriate and necessary reb vetting and approvals has been reported previously for various clinical trials [ ] [ ] [ ] [ ] [ ] [ ] . however, none of these studies identified the actual time required to initiate outbreak-related research at multiple sites. efficient research initiation during an outbreak or pandemic is critical considering the potential for outbreak expansion and greater morbidity and mortality without better understanding of risk factors for illness and transmission, clinical course, outcomes, and responses to treatment. although we studied timelines to initiate observational research, it is possible and in fact likely that start-up time for a clinical experimental trial would be even longer. this has been the experience during severe acute respiratory syndrome, pandemic influenza, mers-cov, ebola virus, and now zika virus [ ] [ ] [ ] .

there are various reasons for delays in initiating outbreak-focused observational research, both at the investigator level and at the administrative level. some of these reasons include (1) developing the study protocol and case report forms in a short span of time [ ] , (2) preparing reb applications, (3) fixed meeting dates of institutional ethics boards followed by important and necessary back-and-forth communications [ ] , (4) drafting and finalizing the data sharing agreements, (5) lack of parallel reviewing of reb applications and data sharing agreements across institutions, and (6) finalizing the budget and arranging funding. there may be several possible ways to overcome these delays and be prepared ahead of time to conduct an outbreak-related study or trial. first, there is a need to have research-ready protocols-in-waiting for periods when seasonal or outbreak-related infections increase. this can be achieved through research-ready outbreak-related observational studies and trials using national and international networks [ ] , undertaking preemptive reb review of generic outbreak-related observational study case report forms and protocols, establishing data sharing agreements where necessary ahead of time, and helping other centers similarly prepare.
although ethical approval is mandatory for research involving human subjects, there are provisions in many jurisdictions for exempt reviews for studies involving public health emergencies, typically consisting of observational studies collecting already available and anonymized data [ , ] . similarly, collecting data as "quality assurance" or "quality improvement" does not require reb approval in some provinces. if a multicenter observational study intends to collect nonidentifiable data from available information collected as a part of routine clinical care, which can be rapidly and efficiently used to generate new evidence, mechanisms are often in place to grant rapid assessments, and there exist guidelines to exempt certain studies from certain aspects of the review process [ ] .

table. median time (in days) spent from receipt of protocol, reb submission, and finalization of data sharing agreements to task completion at study sites.

another approach will be to identify the steps that take the longest duration among start-up procedures. in this study, we identified that data sharing agreements took to months to be fully executed. we found that some sites had limited research administration and regulatory staff, that some sites were busy with other ongoing research-related activities, and that starting an unplanned research project introduced substantial demand on a system with already stretched capacity. hospitals often have unique research administrative structures. some university-affiliated hospitals were required to obtain reb approval from a university authority first and then from the local hospital to proceed, whereas others required data sharing agreements to be finalized before issuing the final reb approval. improving efficiency and running administrative activities in parallel for certain types of low-risk observational studies are potential mechanisms to mitigate delays in start-up procedures. centralized ethics approvals for pandemic research at provincial, state, and national levels may also help to improve efficiency and lessen workload for individual sites [ ] . having durable ( - years) protocols and generic approvals, to include anticipated ranges of pathogens and/or outbreaks meeting prespecified criteria, may also be more appropriate for outbreak and pandemic-related research as opposed to annual reapproval. having a tiered case report form that seeks to collect either a minimal amount of core clinical information or more detailed data, depending upon the clinical research resources of individual sites, might assist in both start-up and actual study-related workload and translate to greater enthusiasm, capacity, and shorter start-up times. the world health organization-international severe acute respiratory and emerging infection consortium clinical characterization protocol provides one such example [ ] . planning and preparedness before the next outbreak or pandemic strikes, during interpandemic periods, are essential for an effective research and subsequent clinical and health system response. because of previous experiences in delays, there is a need to have a strategic plan for the surveillance of these emerging infections, if not at all times then during times of increased local or national risk, and to develop a mechanism to augment existing public health reporting with richer clinical data.
recent examples of research responses to new infectious disease events include funding and initiation of interpandemic clinical trials by groups within the platform for european preparedness against (re)emerging epidemics [ ] and coordination of funding efforts through the formation of the global research collaboration for infectious disease preparedness [ ] . models of informed consent are one other important consideration for outbreak- and pandemic-related research [ ] . obtaining truly informed consent for research involving time-sensitive interventions, during critical illness, in the midst of an outbreak or pandemic is challenging. it can sometimes be difficult to locate and fully inform substitute decision-makers of critically ill patients in a timely manner for interventions targeted at prehospital care or during the period of primary resuscitation. deferred consent may be appropriate for select emergency and time-sensitive interventions [ ] . waived consent may, occasionally, be appropriate when evaluating select interventions that fall firmly within the standard of care.

the strengths of this study include prospective data collection; use of internationally employed case definitions and eligibility criteria for, in this case, sari-related outbreak activity; a fully operational web-accessible case reporting system [ ] ; and an experienced research team with expertise in outbreak- and pandemic-specific research. this is the first study to report the actual duration of time spent in each step to initiate multisite outbreak-related research. limitations to this study include lack of qualitative data from participating site research staff to better understand their perspectives regarding delays. future studies may focus upon this complementary aspect. also, the study was limited to major hospitals already carrying out critical care research, and therefore, we may be underestimating required timelines among centers without staff already familiar with the processes necessary for study start-up. finally, although this study was focused upon surveillance of sari during a period of global concern for many outbreak-causing pathogens-influenza a (h n , h n , h n ) and mers-cov-it was initiated during an interoutbreak period in canada; start-up time may be shorter or longer during an actual outbreak, and the study has generalizable lessons for nonrespiratory outbreaks such as ebola and zika virus.

in this study, we found that there is substantial start-up time required to initiate outbreak-related observational research that may impede our ability to conduct research, generate knowledge to help care for patients, and prepare for future threats. our study stresses the need to have a nationally and internationally coordinated approach, with context-appropriate, tiered case report forms and preparatory work-protocol and case report form generation, data sharing agreements, and reb submissions-completed during the pre- and interoutbreak periods. to have the research mechanisms functional for real-time data collection and reporting when they are required, durable administrative and ethical approvals and data sharing agreements must be planned and executed before outbreaks and pandemics occur.

fig. diagrammatic representation of median time (in days) spent from receipt of protocol, reb submission, and finalization of data sharing agreements to task completion at study sites.
influenza pandemics of the th century
critical care capacity in canada: results of a national cross-sectional study
development of a triage protocol for critical care during an influenza pandemic
triaging for adult critical care in the event of overwhelming need
critically ill patients with severe acute respiratory syndrome
canadian critical care trials group h n collaborative. critically ill patients with influenza a(h n ) infection in canada
kgh lassa fever program, viral hemorrhagic fever consortium, who clinical response team. clinical illness and outcomes in patients with ebola in sierra leone
ksa mers-cov investigation team. hospital outbreak of middle east respiratory syndrome coronavirus
early observational research and registries during the - influenza a pandemic
clinical issues and research in respiratory failure from severe acute respiratory syndrome
the challenges of treating ebola virus disease with experimental therapies
influenza a (h n pdm )-related critical illness and mortality in mexico and canada
time required for institutional review board review at one veterans affairs medical center
time required to start multicentre clinical trials within the italian medicine agency programme of support for independent research
time to activate lung cancer clinical trials and patient enrollment: a representative comparison study between two academic centers across the atlantic
impact of institutional review board practice variation on observational health services research
variations among institutional review board reviews in a multisite health services research study
obtaining regulatory approval for multicentre randomised controlled trials: experiences in the stich ii trial
international severe acute respiratory and emerging infection consortium (isaric)
for debate: should observational clinical studies require ethics committee approval?
should observational clinical studies require ethics committee approval?
institutional review board consideration of chart reviews, case reports, and observational studies
central institutional review board review for an academic trial network
platform for european preparedness against (re-)emerging epidemics (prepare)
global research collaboration for infectious disease preparedness (glopid-r)
clinical research ethics for critically ill patients: a pandemic proposal
key stakeholder perceptions about consent to participate in acute illness research: a rapid, systematic review to inform epi/pandemic research preparedness

supplementary data to this article can be found online at http://dx.doi.org/ . /j.jcrc. . . .

key: cord- -lshp u w authors: radoykov, s. title: in times of crisis, anticipate mourning date: - - journal: encephale doi: . /j.encep. . . sha: doc_id: cord_uid: lshp u w nan

please cite this article in press as: radoykov s. in times of crisis, anticipate mourning. encéphale ( ), https://doi.org/ . /j.encep. . .

last year, i graduated as a young fellow in psychiatry. around the same time, i lost three close relatives. two of my colleagues also lost cherished people, and their suffering deepened my own. i didn't think at the time that going through this painful grieving process would help provide me with strength and prepare me for the coronavirus pandemic ahead. healthcare professionals are currently striving to save as many lives as possible, as we face a new global viral threat. given the improvements in medical care in the last century, some patients are indeed saved every day.
for other patients, that positive outcome is turning out to be impossible. with over , deaths worldwide [ ], many people are now grieving loved ones. grief is a process that has evolved over centuries to help humankind overcome anxiety around death and dying. its natural processes involve culture and tradition-based rituals that serve the purpose of overcoming the suffering while maintaining a healthy psychological bond and distance with the dead [ ] . unfortunately, due to confinement restrictions, there are reports of citizens neither being able to say goodbye to their loved ones nor to participate in essential mortuary rituals [ , ] . the surviving population will need their mental health in order to rebuild worldwide peace of mind and regain a sense of hope and prosperity.

current safety regulations notwithstanding, we need to remember that most people will probably overcome the covid- infection, and many of them will be mourning other people. careful planning and attention should therefore be devoted to supporting patients and families in these challenging times and arranging for some form of last human contact, either in person or via remote technology. people deserve the right to actively engage in the death process of their closest loved ones, to participate in the mortuary rituals, and to know where their loved one's body is located or buried. furthermore, and because they have no choice in the matter, caregivers will also experience grief, as a result of witnessing many passing in a brief period of time. they will need time and the possibility to recognize, validate and share their own feelings of sadness, fear and helplessness. sometimes, as a team. helping mourning families will ultimately help caregivers mourn, as well. now more than ever, special care and consideration should be given to the end of life, in a sincere and straightforward way. dignity over fear.

the author declares that he has no competing interest. other conflicts of interest: teaching psychotherapy and clinical hypnosis for several universities and teaching institutions.

rituels de deuil, travail du deuil. 3rd ed. france: la pensée sauvage

hôpital cochin, , rue d'assas, paris, france. e-mail address: dr@radoykov

key: cord- - d j n authors: hong, hyokyoung g.; li, yi title: estimation of time-varying reproduction numbers underlying epidemiological processes: a new statistical tool for the covid- pandemic date: - - journal: plos one doi: . /journal.pone. sha: doc_id: cord_uid: d j n

the coronavirus pandemic has rapidly evolved into an unprecedented crisis. the susceptible-infectious-removed (sir) model and its variants have been used for modeling the pandemic. however, time-independent parameters in the classical models may not capture the dynamic transmission and removal processes, governed by virus containment strategies taken at various phases of the epidemic. moreover, few models account for possible inaccuracies of the reported cases. we propose a poisson model with time-dependent transmission and removal rates to account for possible random errors in reporting and estimate a time-dependent disease reproduction number, which may reflect the effectiveness of virus control strategies. we apply our method to study the pandemic in several severely impacted countries, and analyze and forecast the evolving spread of the coronavirus. we have developed an interactive web application to facilitate readers' use of our method.
a recent work [ ] demonstrated that $R_0$ is likely to vary "due to the impact of the performed intervention strategies and behavioral changes in the population". the merits of our work are summarized as follows. first, unlike the deterministic ode-based sir models, our method does not require transmission and removal rates to be known, but estimates them using the data. second, we allow these rates to be time-varying. some time-varying sir approaches [ ] directly integrate into the model the information on when governments enforced, for example, quarantine, social-distancing, compulsory mask-wearing and city lockdowns. our method differs by computing a time-varying $R_0(t)$, which gauges the status of coronavirus containment and assesses the effectiveness of virus control strategies. third, our poisson model accounts for possible random errors in reporting, and quantifies the uncertainty of the predicted numbers of susceptible, infectious and removed. finally, we apply our method to analyze the data collected from the aforementioned github time-series data repository. we have created an interactive web application (https://younghhk.shinyapps.io/tvsirforcovid /) to facilitate users' application of the proposed method.

we introduce a poisson model with time-varying transmission and removal rates, denoted by β(t) and γ(t). consider a population with N individuals, and denote by S(t), I(t), R(t) the true but unknown numbers of susceptible, infectious and removed, respectively, at time t, and by s(t) = S(t)/N, i(t) = I(t)/N, r(t) = R(t)/N the fractions of these compartments. the following ordinary differential equations (ode) describe the change rates of s(t), i(t) and r(t):

$$\frac{ds(t)}{dt} = -\beta(t)\, s(t)\, i(t), \qquad (1)$$
$$\frac{di(t)}{dt} = \beta(t)\, s(t)\, i(t) - \gamma(t)\, i(t), \qquad (2)$$
$$\frac{dr(t)}{dt} = \gamma(t)\, i(t), \qquad (3)$$

with an initial condition: $i(0) = i_0$ and $r(0) = r_0$, where $i_0 > 0$ in order to let the epidemic develop [ ] . here, β(t) > 0 is the time-varying transmission rate of an infection at time t, which is the number of infectious contacts that result in infections per unit time, and γ(t) > 0 is the time-varying removal rate at t, at which infectious subjects are removed from being infectious due to death or recovery [ ] . moreover, $\gamma^{-1}(t)$ can be interpreted as the infectious duration of an infection caught at time t [ ] . from (1)-(3), we derive an important quantity, which is the time-dependent reproduction number

$$R_0(t) = \frac{\beta(t)}{\gamma(t)}.$$

to see this, dividing (2) by (3) leads to

$$\Big(\frac{di}{dr}\Big)(t) = \frac{\beta(t)\, s(t)}{\gamma(t)} - 1 = R_0(t)\, s(t) - 1, \qquad (4)$$

where (di/dr)(t) is the ratio of the change rate of i(t) to that of r(t). therefore, compared to its time-independent counterpart, $R_0(t)$ is an instantaneous reproduction number and provides a real-time picture of an outbreak. for example, at the onset of the outbreak and in the absence of any containment actions, we may see a rapid ramp-up of cases compared to those removed, leading to a large (di/dr)(t) in (4), and hence a large $R_0(t)$. with the implemented policies for disease mitigation, we will see a drastically decreasing (di/dr)(t) and, therefore, a declining $R_0(t)$ over time. the turning point is $t_0$ such that $R_0(t_0) = 1$, when the outbreak is controlled with $(di/dr)(t_0) < 0$.

under the fixed population size assumption, i.e., s(t) + i(t) + r(t) = 1, we only need to study i(t) and r(t), and re-express (1)-(3) as

$$\frac{di(t)}{dt} = \beta(t)\, i(t)\,\{1 - i(t) - r(t)\} - \gamma(t)\, i(t), \qquad \frac{dr(t)}{dt} = \gamma(t)\, i(t), \qquad (5)$$

with the same initial condition. as the numbers of cases and removed are reported on a daily basis, t is measured in days, e.g. t = 1, . . ., T.
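to make the continuous-time system concrete, the sketch below numerically integrates (1)-(3) with the deSolve package. it is only an illustration: the functional forms and values chosen for β(t), γ(t) and the initial state are arbitrary assumptions, not quantities estimated in this paper.

```r
# A minimal sketch of the time-varying SIR equations (1)-(3), solved with
# deSolve; beta_fun() and gamma_fun() are hypothetical rate functions.
library(deSolve)

beta_fun  <- function(t) 0.35 * exp(-0.01 * t)   # assumed declining transmission
gamma_fun <- function(t) 0.10                    # assumed constant removal

sir_tv <- function(t, y, parms) {
  s <- y[1]; i <- y[2]; r <- y[3]
  ds <- -beta_fun(t) * s * i                     # equation (1)
  di <-  beta_fun(t) * s * i - gamma_fun(t) * i  # equation (2)
  dr <-  gamma_fun(t) * i                        # equation (3)
  list(c(ds, di, dr))
}

y0  <- c(s = 1 - 1e-6, i = 1e-6, r = 0)          # assumed initial proportions
out <- ode(y = y0, times = 0:180, func = sir_tv, parms = NULL)

# The time-dependent reproduction number R0(t) = beta(t)/gamma(t)
R0_t <- beta_fun(0:180) / gamma_fun(0:180)
```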
replacing derivatives in (5) with finite differences, we can consider a discrete version of (5):

$$i(t+1) - i(t) = \beta(t)\, i(t)\,\{1 - i(t) - r(t)\} - \gamma(t)\, i(t),$$
$$r(t+1) - r(t) = \gamma(t)\, i(t), \qquad (6)$$

where β(t) and γ(t) are positive functions of t. we set $i(0) = i_0$ and $r(0) = r_0$, with t = 0 being the starting date. model (6) admits a recursive way to compute i(t) and r(t):

$$i(t+1) = \{1 + \beta(t) - \gamma(t)\}\, i(t) - \beta(t)\, i(t)\,\{i(t) + r(t)\}, \qquad r(t+1) = r(t) + \gamma(t)\, i(t), \qquad (7)$$

for t = 0, . . ., T − 1. the first equation of (6) implies that β(t) < γ(t), or $R_0(t) = \beta(t)\,\gamma^{-1}(t) < 1$, leads to i(t+1) < i(t), so that the number of infectious cases drops, meaning the spread of the virus is controlled; otherwise, the number of infectious cases will keep increasing.

to fit the model and estimate the time-dependent parameters, we can use nonparametric techniques, such as splines [ ] [ ] [ ] [ ] [ ] [ ] , local polynomial regression [ ] and the reproducing kernel hilbert space method [ ] . in particular, we consider a cubic b-spline approximation [ ] . denote by $B(t) = \{B_1(t), \ldots, B_q(t)\}^{\mathsf T}$ the q cubic b-spline basis functions over [0, T] associated with the knots $0 = w_0 < w_1 < \cdots < w_{q-3} < w_{q-2} = T$. for added flexibility, we allow the number of knots to differ between β(t) and γ(t) and specify

$$\log \beta(t) = \sum_{j=1}^{q_1} b_j B_j(t), \qquad \log \gamma(t) = \sum_{j=1}^{q_2} g_j B_j(t). \qquad (8)$$

when $b_1 = \cdots = b_{q_1}$ and $g_1 = \cdots = g_{q_2}$, the model reduces to a constant sir model [ ] . we use cross-validation to choose $q_1$ and $q_2$ in our numerical experiments. denote by $\boldsymbol\beta = (b_1, \ldots, b_{q_1})$ and $\boldsymbol\gamma = (g_1, \ldots, g_{q_2})$ the unknown parameters, by $Z_I(t)$ and $Z_R(t)$ the reported numbers of infectious and removed, respectively, and by $z_I(t) = Z_I(t)/N$ and $z_R(t) = Z_R(t)/N$ the reported proportions. also, denote by I(t) and R(t) the true numbers of infectious and removed, respectively, at time t. we propose a poisson model to link $Z_I(t)$ and $Z_R(t)$ to I(t) = N i(t) and R(t) = N r(t) as follows:

$$Z_I(t) \sim \mathrm{poisson}\{N\, i(t)\}, \qquad Z_R(t) \sim \mathrm{poisson}\{N\, r(t)\}. \qquad (9)$$

we also assume that, given i(t) and r(t), the observed daily numbers $\{Z_I(t), Z_R(t)\}$ are independent across t = 1, . . ., T, meaning the random reporting errors are "white" noise. we note that (9) is directly based on the "true" numbers of infectious cases and removed cases derived from the discrete sir model (6). this differs from the markov process approach, which is based on the past observations.

with (7), (8) and (9), r(t) and i(t) are functions of $\boldsymbol\beta$ and $\boldsymbol\gamma$. given the data $(Z_I(t), Z_R(t))$, t = 1, . . ., T, we obtain $(\hat{\boldsymbol\beta}, \hat{\boldsymbol\gamma})$, the estimates of $(\boldsymbol\beta, \boldsymbol\gamma)$, by maximizing the likelihood or, equivalently, maximizing the log-likelihood function

$$\ell(\boldsymbol\beta, \boldsymbol\gamma) = \sum_{t=1}^{T} \big[ Z_I(t) \log\{N i(t)\} - N i(t) + Z_R(t) \log\{N r(t)\} - N r(t) \big] + C, \qquad (10)$$

where C is a constant free of $\boldsymbol\beta$ and $\boldsymbol\gamma$. see the s appendix for additional details of optimization. we then estimate the variance-covariance matrix of $(\hat{\boldsymbol\beta}, \hat{\boldsymbol\gamma})$ by inverting the second derivative of $-\ell(\boldsymbol\beta, \boldsymbol\gamma)$ evaluated at $(\hat{\boldsymbol\beta}, \hat{\boldsymbol\gamma})$. finally, for t = 1, . . ., T, we estimate I(t) and R(t) by $\hat I(t) = N \hat i(t)$ and $\hat R(t) = N \hat r(t)$, where $\hat i(t)$ and $\hat r(t)$ are obtained from (7) with all unknown quantities replaced by their estimates; estimate β(t) and γ(t) by $\hat\beta(t)$ and $\hat\gamma(t)$, obtained by using (8) with $(\boldsymbol\beta, \boldsymbol\gamma)$ replaced by $(\hat{\boldsymbol\beta}, \hat{\boldsymbol\gamma})$; and estimate $R_0(t)$ by $\hat R_0(t) = \hat\beta(t)/\hat\gamma(t)$.

estimation: let N be the size of the population of a given country. the date when the first case was reported is set to be the starting date, with t = 0, $i_0 = Z_I(0)/N$ and $r_0 = Z_R(0)/N$. the observed data are $\{Z_I(t), Z_R(t),\; t = 1, \ldots, T\}$, obtained from the github data repository website mentioned in the introduction. we maximize (10) to obtain $\hat{\boldsymbol\beta} = (\hat b_1, \ldots, \hat b_{q_1})$ and $\hat{\boldsymbol\gamma} = (\hat g_1, \ldots, \hat g_{q_2})$. the optimal $q_1$ and $q_2$ are obtained via cross-validation.
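the pieces above — the recursion (7), the spline parameterization (8), and the poisson log-likelihood (10) — can be wired together in a few lines of r. the following is a minimal sketch, not the authors' released implementation; it assumes daily count vectors z_i and z_r, a population size N, and initial proportions i0 and r0 in the workspace, with arbitrarily fixed spline dimensions.

```r
# A minimal sketch of the estimation scheme, under the assumptions stated
# above; spline dimensions q1 = q2 = 5 are arbitrary illustrations.
library(splines)

T  <- length(z_i)
q1 <- 5; q2 <- 5
B1 <- bs(1:T, df = q1, intercept = TRUE)   # cubic B-spline basis for log beta(t)
B2 <- bs(1:T, df = q2, intercept = TRUE)   # cubic B-spline basis for log gamma(t)

sir_path <- function(b, g, i0, r0) {
  beta  <- as.vector(exp(B1 %*% b))        # log beta(t)  = sum_j b_j B_j(t)
  gamma <- as.vector(exp(B2 %*% g))        # log gamma(t) = sum_j g_j B_j(t)
  i <- r <- numeric(T); i[1] <- i0; r[1] <- r0
  for (t in 1:(T - 1)) {                   # recursion (7)
    i[t + 1] <- (1 + beta[t] - gamma[t]) * i[t] - beta[t] * i[t] * (i[t] + r[t])
    r[t + 1] <- r[t] + gamma[t] * i[t]
  }
  list(i = i, r = r, beta = beta, gamma = gamma)
}

negloglik <- function(theta) {             # negative of (10), dropping C
  path <- sir_path(theta[1:q1], theta[-(1:q1)], i0, r0)
  mu_i <- N * path$i; mu_r <- N * path$r
  if (any(!is.finite(mu_i)) || any(mu_i <= 0) || any(mu_r <= 0)) return(1e10)
  -sum(dpois(z_i, mu_i, log = TRUE) + dpois(z_r, mu_r, log = TRUE))
}

fit      <- nlm(negloglik, p = rep(-2, q1 + q2), hessian = TRUE)
vcov_hat <- solve(fit$hessian)             # inverse Hessian of -loglikelihood
path_hat <- sir_path(fit$estimate[1:q1], fit$estimate[-(1:q1)], i0, r0)
R0_hat   <- path_hat$beta / path_hat$gamma # estimated R0(t)
```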
since the first case of covid- was detected in china, it quickly spread to nearly every part of the world [ ] . covid- , conjectured to be more contagious than the previous sars and h n [ ] , has put great strain on healthcare systems worldwide, especially among the severely affected countries [ ] . we apply our method to assess the epidemiological processes of covid- in some severely impacted countries. the country-specific time-series data of confirmed, recovered, and death cases were obtained from a github data repository website (https://github.com/ulklc/covid -timeseries). this site collects information from various sources on a daily basis at gmt : , converts the data to the csv format, and conducts data normalization and harmonization if inconsistencies are found. the data sources are listed on the site; in particular, the current population size of each country, N, came from the website of worldometer. our analyses covered the periods between the date of the first reported coronavirus case in each nation and june , .

in the beginning of the outbreak, assessment of $i_0$ and $r_0$ was problematic as infectious but asymptomatic cases tended to be undetected due to lack of awareness and testing. to investigate how our method depends on the correct specification of the initial values $r_0$ and $i_0$, we conducted monte carlo simulations. as a comparison, we also studied the performance of the deterministic sir model in the same settings. fig shows that, when the initial value $i_0$ was mis-specified to be times the truth, the curves of i(t) and r(t) obtained by the deterministic sir model (5) were considerably biased. on the other hand, our proposed model (9), by accounting for the randomness of the observed data, was robust toward the mis-specification of $i_0$ and $r_0$: the estimates of r(t) and i(t) had negligible biases even with mis-specified initial values. in an omitted analysis, we mis-specified $i_0$ and $r_0$ to be only twice the truth, and obtained similar results. our numerical experiments also suggested that using the time series starting from the date when both cases and removed were reported may generate more reasonable estimates.

using the cubic b-splines (8), we estimated the time-dependent transmission rate β(t) and removal rate γ(t), based on which we further estimated $R_0(t)$, i(t) and r(t). to choose the optimal number of knots for each country when implementing the spline approach, we used -fold cross-validation by minimizing the combined mean squared error for the estimated infectious and removed cases. fig shows sharp variations in transmission rates and removal rates across different time periods, indicating the time-varying nature of these rates. the estimated i(t) and r(t) overlapped well with the observed numbers of infectious and removed cases, indicating the reasonableness of the method. the pointwise % confidence intervals (in yellow) represent the uncertainty of the estimates, which may be due to error in reporting.

fig presents the estimated time-varying reproduction number $\hat\beta(t)/\hat\gamma(t)$ for several countries. the curves capture the evolving trends of the epidemic for each country. in the us, though the first confirmed case was reported on january , , lack of immediate actions in the early stage let the epidemic spread widely. as a result, the us had seen soaring infectious cases, and $R_0(t)$ reached its peak around mid-march.
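a sketch of the knot-selection step described above is given below. it assumes a wrapper fit_sir() around the nlm-based estimation from the earlier sketch; this wrapper, the holdout scheme, and the candidate grid are all hypothetical, not the authors' procedure verbatim.

```r
# A minimal sketch of k-fold cross-validation over the spline dimensions,
# scoring the combined squared error of the fitted infectious and removed
# counts; fit_sir() is an assumed wrapper, not an existing function.
cv_error <- function(q1, q2, z_i, z_r, N, k = 10) {
  T     <- length(z_i)
  folds <- sample(rep(1:k, length.out = T))
  err   <- 0
  for (f in 1:k) {
    test <- which(folds == f)
    fit  <- fit_sir(q1, q2, z_i, z_r, N, holdout = test)  # assumed wrapper
    err  <- err + sum((z_i[test] - fit$I_hat[test])^2 +
                      (z_r[test] - fit$R_hat[test])^2)
  }
  err
}

grid    <- expand.grid(q1 = 4:8, q2 = 4:8)               # candidate dimensions
grid$cv <- mapply(function(a, b) cv_error(a, b, z_i, z_r, N), grid$q1, grid$q2)
grid[which.min(grid$cv), ]                               # chosen (q1, q2)
```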
from mid-march to early april, the us tightened the virus control policy by suspending foreign travel and closing borders, and the federal government and most states issued mandatory or advisory stay-home orders, which seemed to have substantially contained the virus. the high reproduction numbers for china, italy, and sweden at the onset of the pandemic imply that the spread of the infectious disease was not well controlled in its early phases. with the extremely stringent mitigation policies such as city lockdown and mandatory mask-wearing implemented at the end of january, china was reported to bring its epidemic under control with a quickly dropping $R_0(t)$ in february. this indicates that china might have contained the epidemic, with more people removed from infectious status than those who became infectious. sweden is among the few countries that imposed more relaxed measures to control coronavirus and advocated herd immunity. the swedish approach has initiated much debate. while some criticized that this may endanger the general population in a reckless way, others felt this might terminate the pandemic more effectively in the absence of vaccines [ ] . fig demonstrates that sweden has a large reproduction number, which however keeps decreasing. the "big v" shape of the reproduction number around may might be due to reporting errors or lags. our investigation found that the reported number of infectious cases in that period suddenly dropped and then quickly rose back, which was unusual. around february , a surge in south korea was linked to a massive cluster of more than , cases [ ] . the outbreak was clearly depicted in the time-varying $R_0(t)$ curve. since then, south korea appeared to have slowed its epidemic, likely due to expansive testing programs and extensive efforts to trace and isolate patients and their contacts [ ] .

fig. estimated i(t), r(t), β(t), γ(t), and $R_0(t)$ for the us (left) and china (right), based on the data up to june , ; the blue dots and the red dashed curves represent the observed data and the model-based predictions, respectively, with % confidence intervals.

more broadly, fig categorizes countries into two groups. one group features the countries which have contained coronavirus. countries such as china and south korea took aggressive actions after the outbreak and presented sharper downward slopes. some european countries such as italy and spain and mideastern countries such as iran, which were hit later than the east asian countries, share a similar pattern, though with much flatter slopes. on the other hand, the us, brazil, and sweden are still struggling to contain the virus, with the $R_0(t)$ curves hovering over 1. we also caution that, among the countries whose $R_0(t)$ dropped below 1, the curves of the reproduction numbers are beginning to uptick, possibly due to the resumed economic activities. we have developed a web application (https://younghhk.shinyapps.io/tvsirforcovid /) to facilitate users' application of the proposed method to compute the time-varying reproduction number, and to estimate and predict the daily numbers of active cases and removed cases for the presented countries and other countries; see fig for an illustration. our code was written in r [ ] , using the bs function in the splines package for cubic b-spline approximation, the nlm function in the stats package for nonlinear minimization, and the jacobian function in the numderiv package for computation of gradients and hessian matrices.
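the paper names the r building blocks it relies on (bs, nlm, and numderiv's derivatives). a plausible way these pieces yield pointwise confidence bands for $R_0(t)$ is a delta-method calculation on log $R_0(t)$; the sketch below continues the objects from the earlier estimation sketch and is an assumed workflow, not the authors' exact code.

```r
# A minimal sketch of delta-method confidence bands for R0(t), assuming
# B1, B2, q1, fit and vcov_hat from the earlier estimation sketch.
library(numDeriv)

log_R0 <- function(theta) {
  as.vector(B1 %*% theta[1:q1] - B2 %*% theta[-(1:q1)])  # log beta - log gamma
}

J   <- jacobian(log_R0, fit$estimate)      # T x (q1 + q2) gradient matrix
se  <- sqrt(rowSums((J %*% vcov_hat) * J)) # pointwise SE of log R0(t)
est <- log_R0(fit$estimate)

R0_hat <- exp(est)                         # point estimate of R0(t)
ci_lo  <- exp(est - 1.96 * se)             # pointwise confidence band
ci_hi  <- exp(est + 1.96 * se)
```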
graphs were made by using the ggplot package. our code can be found on the aforementioned shiny website.

the rampaging pandemic of covid- has called for developing proper computational and statistical tools to understand the trend of the spread of the disease and evaluate the efficacy of mitigation measures [ ] [ ] [ ] [ ] . we propose a poisson model with time-dependent transmission and removal rates. our model accommodates possible random errors and estimates a time-dependent disease reproduction number, $R_0(t)$, which can serve as a metric for timely evaluation of the effects of health policies. there have been substantial issues, such as biases and lags, in reporting infectious cases, recovery, and deaths, especially at the early stage of the outbreak. as opposed to the deterministic sir models that heavily rely on accurate reporting of initial infectious and removed cases, our model is more robust towards mis-specifications of such initial conditions. applications of our method to study the epidemics in selected countries illustrate the results of the virus containment policies implemented in these countries, and may serve as epidemiological benchmarks for future preventive measures.

several methodological questions need to be addressed. first, we analyzed each country separately, without considering the traffic flows among these countries. we will develop a joint model for the global epidemic, which accounts for the geographic locations of and the connectivity among the countries. second, incorporating the timing of public health interventions such as shelter-in-place orders into the model might be interesting. however, we opted not to follow this approach, as no such information exists for the majority of countries. on the other hand, the impact of the interventions or the change point can be embedded into our nonparametric time-dependent estimates. third, the validity of the results of statistical models eventually hinges on data transparency and accuracy. for example, the results of chinazzi et al. [ ] suggested that in china only one in four cases was detected and confirmed. also, asymptomatic cases might have been undetected in many countries. all of these might have led to underestimation of the actual number of cases. moreover, the collected data could be biased toward patients with severe infection and with insurance, as these patients were more likely to seek care or get tested. more in-depth research is warranted to address the issue of selection bias. finally, our present work is within the sir framework, where removed individuals include recovery and deaths, who hypothetically are unlikely to infect others. although this makes the model simpler and widely adopted, the interpretation of the γ parameter is not straightforward. our subsequent work is to develop a susceptible-infectious-recovered-deceased (sird) model, in which the number of deaths and the number of recovered are separately considered. we will report this elsewhere.

containment of covid- requires the concerted effort of health care workers, health policy makers as well as citizens. measures, e.g. self-quarantine, social distancing, and shelter in place, have been executed at various phases by each country to prevent community transmission. timely and effective assessment of these actions constitutes a critical component of the effort. sir models have been widely used to model this pandemic. however, constant transmission and removal rates may not capture the timely influences of these policies.
we propose a time-varying sir poisson model to assess the dynamic transmission patterns of covid- . with the virus containment measures taken at various time points, $R_0$ may vary substantially over time. our model provides a systematic and daily updatable tool to evaluate the immediate outcomes of these actions. it is likely that the pandemic is ending and many countries are now shifting gear to reopen the economy, while preparing to battle a second wave of virus attack [ , ] . our tool may shed light on and aid the implementation of future containment strategies.

coronaviruses: an overview of their replication and pathogenesis
bats are natural reservoirs of sars-like coronaviruses
discovery of seven novel mammalian and avian coronaviruses in the genus deltacoronavirus supports bat coronaviruses as the gene source of alphacoronavirus and betacoronavirus and avian coronaviruses as the gene source of gammacoronavirus and deltacoronavirus
human coronavirus and severe acute respiratory infection in southern brazil. pathogens and global health
evolving epidemiology and impact of non-pharmaceutical interventions on the outbreak of coronavirus disease
johns hopkins coronavirus resource center
real-time epidemic forecasting for pandemic influenza
mathematical models of infectious disease transmission
modelling transmission and control of the covid- pandemic in australia
challenges in control of covid- : short doubling time and long delay to effect of interventions
individual vaccination as nash equilibrium in a sir model with application to the - influenza a (h n ) epidemic in france
estimating epidemic parameters: application to h n pandemic data
bayesian estimation of the dynamics of pandemic (h n ) influenza transmission in queensland: a space-time sir-based model. environmental research
modeling super-spreading events for infectious diseases: case study sars
deterministic sir (susceptible-infected-removed) models applied to varicella outbreaks
an introduction to compartmental modeling for the budding infectious disease modeler
risk analysis foundations, models, and methods
a contribution to the mathematical theory of epidemics
statistics based predictions of coronavirus -ncov spreading in mainland china. medrxiv
a time delay dynamical model for outbreak of -ncov and the parameter identification
epidemic analysis of covid- in china by dynamical modeling
preliminary prediction of the basic reproduction number of the wuhan novel coronavirus -ncov
effective containment explains sub-exponential growth in confirmed cases of recent covid- outbreak in mainland china
lessons from the history of quarantine, from plague to influenza a. emerging infectious diseases
a time-dependent sir model for covid- with undetectable infected persons
sir model with time dependent infectivity parameter: approximating the epidemic attractor and the importance of the initial phase
an epidemiological forecast model and software assessing interventions on covid- epidemic in china. medrxiv
modeling count data
methods for estimating disease transmission rates: evaluating the precision of poisson regression and two novel methods
fitting outbreak models to data from many small norovirus outbreaks
multi-species sir models from a dynamical bayesian perspective
the estimation of the basic reproduction number for infectious diseases
mathematical epidemiology of infectious diseases: model building
transmission potential of smallpox: estimates based on detailed data from an outbreak
measurability of the epidemic reproduction number in data-driven contact networks
a time-dependent sir model for covid- with undetectable infected persons
notes on r
a practical guide to splines
parameter estimation for differential equations: a generalized smoothing approach
modelling transcriptional regulation using gaussian processes
linear latent force models using gaussian processes
latent force models
mechanistic hierarchical gaussian processes
empirical-bias bandwidths for local polynomial nonparametric regression and density estimation
new reproducing kernel functions. mathematical problems in engineering
a review of spline function procedures in r. bmc medical research methodology
a note on the jackknife, the bootstrap and the delta method estimators of bias and variance
covid- , chronicle of an expected pandemic
covid- : how doctors and healthcare systems are tackling coronavirus worldwide
'closing borders is ridiculous': the epidemiologist behind sweden's controversial coronavirus strategy
why a south korean church was the perfect petri dish for coronavirus
coronavirus cases have dropped sharply in south korea
r: a language and environment for statistical computing
current status of global research on novel coronavirus disease (covid- ): a bibliometric analysis and knowledge mapping. available at ssrn
investigating the cases of novel coronavirus disease (covid- ) in china using dynamic statistical techniques
the impact of social distancing and epicenter lockdown on the covid- epidemic in mainland china: a data-driven seiqr model study. medrxiv
covid- italian and europe epidemic evolution: a seir model with lockdown-dependent transmission rate based on chinese data
the effect of travel restrictions on the spread of the novel coronavirus (covid- ) outbreak
as china's virus cases reach zero, experts warn of second wave
asian nations face second wave of imported cases

key: cord- -q if li authors: simpson, ryan b.; zhou, bingjie; alarcon falconi, tania m.; naumova, elena n. title: an analecta of visualizations for foodborne illness trends and seasonality date: - - journal: sci data doi: . /s - - -x sha: doc_id: cord_uid:

abstract: disease surveillance systems worldwide face increasing pressure to maintain and distribute data in usable formats supplemented with effective visualizations to enable actionable policy and programming responses. annual reports and interactive portals provide access to surveillance data and visualizations depicting temporal trends and seasonal patterns of diseases. analyses and visuals are typically limited to reporting the annual time series and the month with the highest number of cases per year. yet, detecting potential disease outbreaks and supporting public health interventions requires detailed spatiotemporal comparisons to characterize spatiotemporal patterns of illness across diseases and locations. the centers for disease control and prevention's (cdc) foodnet fast provides population-based foodborne-disease surveillance records and visualizations for select counties across the us. we offer suggestions on how current foodnet fast data organization and visual analytics can be improved to facilitate data interpretation, decision-making, and communication of features related to trend and seasonality. the resulting compilation, or analecta, of visualizations of records and codes are openly available online.
the centers for disease control and prevention's (cdc) foodnet fast provides population-based foodborne-disease surveillance records and visualizations for select counties across the us. we offer suggestions on how current foodnet fast data organization and visual analytics can be improved to facilitate data interpretation, decision-making, and communication of features related to trend and seasonality. the resulting compilation, or analecta, of visualizations and code is openly available online. disease surveillance systems worldwide face increasing pressure to maintain and distribute data in usable formats with clearly communicated visualizations to promote actionable policy and programming responses. decade-long efforts to sustain surveillance systems improve early outbreak detection, infection containment, and mobilization of health resources [ ] [ ] [ ] [ ] and create adaptive, near-time forecasts for disease outbreaks. web-based platforms provide access to more accurate, timely, and frequent surveillance data. the world health organization's (who) flunet, for example, provides time-referenced data on worldwide influenza. publicly available downloads increase the flexibility of analyses and enable adaptive research through frequent and timely reporting. the pandemic of novel coronavirus disease (covid-19) serves as a vivid demonstration of how limited access to publicly available high-quality data can stymie research. as the quantity and diversity of data available for processing, synthesizing, and communicating increase, new visual analytics, including complex multi-panel plots, must be considered to monitor trends, investigate seasonality, and support public health planning. these visualizations, and the methodologies used to generate them, must be standardized to enable comparability across time periods, locations, at-risk populations, and pathogens. however, current surveillance systems, including foodborne disease surveillance in the united states, often compress time series records to simplistic annual trends [ ] [ ] [ ] [ ] [ ] and describe seasonality by the month(s) with the highest cases per year or the first month of outbreak onset [ ] [ ] [ ] [ ] [ ] [ ]. visualizations using these annual trends or broad assessments of seasonality fail to utilize the full complexity of surveillance data and in some cases may be misleading. more specifically, these visualizations fail to provide detailed examination of how long-term trends change over time, how seasonality estimates vary by year or across locations, or how peak timing and amplitude estimates could change over time. the cdc foodborne disease active surveillance network (foodnet) provides preprocessed population-based foodborne-disease surveillance records and visualizations via foodnet fast, a publicly available data portal. the foodnet fast platform contains rich demographic data, including age group, gender, and ethnic group, valuable for a broad spectrum of analyses. the visualizations aim to aid users in identifying trends of nine laboratory-confirmed foodborne diseases in select counties from ten us states and nationally. however, in their present form and due to substantial data compression, the available data and visualizations are limited in scope, preventing more detailed analyses. foodnet fast allows data download and visualization of these diseases for a user-specified time period.
data downloads include information on the incidence of confirmed cases, the monthly percentage of confirmed cases, the distribution of cases by pathogen, and totals of cases, hospitalizations, and deaths. for multi-year periods, the portal aggregates totals and monthly percentages into single statistics for the full time period selected rather than showing individual years. this aggregation ensures case anonymity, but monthly time units limit the refinement of trend and seasonality analyses. to calculate monthly percentages of confirmed cases for all diseases in one year and one location, we had to download each state-year combination individually, each as a separate ms excel file. to create a time series of total monthly cases by pathogen and location, we used data from two tables in each data download: annual counts of confirmed cases (long format) and monthly percentages of confirmed cases (wide format). we transposed the monthly percentages of confirmed cases from wide to long format and then multiplied them by the annual counts of confirmed cases (supplementary figure s ). since the provided monthly percentages are rounded to a single digit in the data download, calculated counts slightly under- or over-estimate annual totals. we did not round non-integer cases in our calculated time series, to best preserve the monthly distribution of cases from the original data download. a monthly time series of confirmed hospitalizations or deaths could not be reconstructed as described because no information is provided on their monthly percentages. we next calculated disease rates using confirmed monthly cases and annual population data. rates are preferred over counts since changes in counts could be a direct result of changes in the population catchment area of a surveillance system. the number of counties and states monitored in foodnet increased over the early years of the programme and has since remained constant (supplementary table s ). we downloaded county-level population estimates from the us census bureau interannual census reports, which provide annual population estimates [ ] [ ] [ ]. we then estimated the state-level foodnet population catchment area by adding the mid-year (july 1st) populations of all surveyed counties monitored in each year. next, we calculated the united states population catchment area by adding all state-level estimates for all surveyed counties for each year. finally, we developed a time series of monthly rates per 1,000,000 persons for each pathogen and location by dividing monthly counts by annual population estimates and multiplying this quotient by 1,000,000. in addition to monthly rates, we calculated yearly rates by adding all monthly counts each year, dividing by the annual population, and multiplying this quotient by 1,000,000. modeling trends and seasonality. we estimated trend and seasonality characteristics using negative binomial harmonic regression (nbhr) models, which are commonly used to analyse count-based time series records with periodic fluctuations [ ] [ ] [ ]. these models include harmonic terms representing sine and cosine functions, which allow us to fit periodic oscillations. the regression parameters for these harmonic terms serve as a base for estimating important characteristics of seasonality: when the maximum rate occurs (peak timing) and the magnitude at that peak (amplitude).
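a minimal sketch of the reconstruction and rate calculation described above follows; the column names and example figures are hypothetical, not foodnet values, and python is used purely for illustration (the authors worked in stata and r).

```python
# a sketch of the monthly time-series reconstruction described above;
# column names and example values are hypothetical, not foodnet data.
import pandas as pd

months = ["jan", "feb", "mar", "apr", "may", "jun",
          "jul", "aug", "sep", "oct", "nov", "dec"]

# monthly percentage of confirmed cases (wide format, one row per year)
pct_wide = pd.DataFrame({"year": [1996, 1997],
                         **{m: [100 / 12] * 2 for m in months}})

annual = pd.DataFrame({"year": [1996, 1997],
                       "confirmed_cases": [240, 360],
                       "catchment_pop": [14_000_000, 15_000_000]})

# wide -> long, then monthly count = annual count * monthly percentage
long = pct_wide.melt(id_vars="year", var_name="month", value_name="pct")
long = long.merge(annual, on="year")
long["monthly_cases"] = long["confirmed_cases"] * long["pct"] / 100

# rate per 1,000,000 persons, using the mid-year catchment population
long["rate"] = long["monthly_cases"] / long["catchment_pop"] * 1_000_000
print(long.head())
```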
we calculated peak timing, amplitude, and their confidence intervals from nbhr model coefficients using the δ-method, which allows us to transform the regression coefficients of the model into seasonality characteristics based on the properties of the basic trigonometric functions (supplementary table s ). to estimate annualized seasonality characteristics, we applied an nbhr model for each study year and location with the length of the time series set to 12, representing the 12 months of the year. we also estimated seasonality characteristics for the full time period. to show average trends across the entire study period, we fit an nbhr model with three trend terms (linear, quadratic, and cubic), where the length of the time series varied according to when foodnet began surveying that location. the selection of three polynomial terms was driven by the clarity of interpretation as a monthly increase and the potential for overall acceleration or deceleration, although other ways of assessing the trend, such as moving averages and spline functions, could also be explored. plot terminology. we develop multi-panel visualization techniques using the best practices of current data visualization resources and our own research. a multi-panel plot, as defined by our earlier work, "involves the strategic positioning of two or more graphs sharing at least one common axis on a single canvas." these plots can effectively illustrate multiple dimensions of information, including different time units (e.g. yearly, monthly), disease statistics (e.g. pathogens, rates, counts), seasonality characteristics (e.g. peak timing, amplitude), and locations (e.g. state-level, national). we use the following common, standardized terminology across visualizations to ensure comprehension:
• disease - each of the nine reported foodnet infections, including campylobacteriosis (camp), listeriosis (list), salmonellosis (salm), shigellosis (shig), infection due to shiga toxin-producing escherichia coli o157 and non-o157 (ecol), vibriosis (vibr), infection due to yersinia enterocolitica (yers), cryptosporidiosis (cryp) and cyclosporiasis (cycl)
• monthly rate - monthly confirmed cases per 1,000,000 persons
• yearly rate - total confirmed cases in a year divided by the mid-year population of all surveyed counties in that location (cases per 1,000,000 persons)
• frequency - the number of months reporting disease rates in the same range
• peak timing - the time of year according to the gregorian calendar at which a disease reaches its maximal rate; for monthly time series, peak timing lies in a half-open interval on the month scale, from the beginning of january to the end of december
• amplitude - the mathematical amplitude, or the midpoint of relative intensity; for nbhr models, the amplitude estimate reflects the ratio between the disease rate at the peak (maximum rate) and the disease rate at the midpoint (median rate)
• foodnet surveyed county - a county under foodnet surveillance as of the end of the study period
• non-surveyed county - all remaining counties within a surveillance state
we present our analecta of visualizations, allowing us to describe trends, examine seasonal signatures (curves depicting characteristic variations in disease incidence over the course of one year), and understand features of seasonality, such as peak timing and amplitude, across locations and diseases. we illustrate all visualizations using salmonellosis for the united states.
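the nbhr fit and the δ-method transformation just described can be sketched as follows; the simulated counts, the fixed dispersion parameter, and all names are our assumptions rather than the authors' code.

```python
# a sketch of an nbhr fit and the delta-method conversion of its harmonic
# coefficients into peak timing and amplitude with confidence intervals.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
month = np.tile(np.arange(1, 13), 10)                 # ten years of monthly data
omega = 2 * np.pi / 12
mu = np.exp(1.5 + 0.8 * np.cos(omega * (month - 7)))  # true peak in july
counts = rng.negative_binomial(5, 5 / (5 + mu))       # nb counts with mean mu

X = sm.add_constant(np.column_stack([np.sin(omega * month),
                                     np.cos(omega * month)]))
fit = sm.GLM(counts, X, family=sm.families.NegativeBinomial(alpha=0.2)).fit()
bs, bc = fit.params[1], fit.params[2]
cov = fit.cov_params()[1:, 1:]

# log mu = b0 + a*cos(omega*t - phi): peak timing is the phase in months,
# amplitude is exp(a), the ratio of the peak rate to the midpoint rate
a = np.hypot(bs, bc)
phi = np.arctan2(bs, bc)
peak, amp = (phi / omega) % 12, np.exp(a)

# delta method: se = sqrt(g' cov g) for the gradient g of each transformation
g_phi = np.array([bc, -bs]) / a**2
g_a = np.array([bs, bc]) / a
se_peak = np.sqrt(g_phi @ cov @ g_phi) / omega
se_amp = amp * np.sqrt(g_a @ cov @ g_a)
print(f"peak timing {peak:.2f} +/- {1.96 * se_peak:.2f} months")
print(f"amplitude   {amp:.2f} +/- {1.96 * se_amp:.2f}")
```

expressing the amplitude as exp(a) matches the paper's definition of a peak-to-median ratio, since the harmonic wave is fitted on the log-rate scale.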
the full analecta with time series data and code are available on our website (https://sites.tufts.edu/naumovalabs/analecta/), with data and code also available on figshare. describing trend. the interpretability of trends in a time series plot is greatly affected by the length and units of the time series. foodnet fast aggregates data annually, as shown in supplementary figure s , which provides clear, concise information on annual rates. in this example, the rate of salmonellosis remains largely unchanging over time, with distinct outbreaks seen in two individual years. as expected, by compressing data to annual rates, supplementary figure s masks within-year trends of disease rates. foodnet reports and publications similarly tend to show only inter-annual changes in disease counts or rates [ ] [ ] [ ] [ ] [ ]. without more granular within-year variations, the viewer cannot determine whether increased yearly rates are driven by erratic outbreaks in a specific month or by higher rates across all months of the year. to capture within-year trends, we propose a multi-panel plot that combines information on monthly rates, inter-annual trends, and the frequency distribution of rates by utilizing the shared axes of individual plots. the right panel provides a time series of monthly rates with an nbhr model fit with three trend terms (linear, quadratic, and cubic). the inclusion of polynomial terms allows us to capture long-term trends (linear term) and their acceleration and deceleration over time (quadratic and cubic terms). the predicted trend line is shown in blue and its 95% confidence interval in grey shades. the estimated median monthly rate is shown in red. the left panel depicts a rotated histogram of rate frequencies indicating the right-skewness of the monthly rate distribution. the histogram shares the vertical monthly-rate axis with the time series plot and is essential for connecting two concepts: the distribution of monthly counts on the basis of their frequency and the distribution of monthly counts over time. two pictograms refer to the selected pathogen and location. the figure shows the stability of seasonal oscillation in salmonellosis over the time series, with a period of increased rates followed by a gradual decrease. while preserving the within-year seasonal fluctuations, the plot provides additional information. alternating background colours help distinguish differences in the shape of seasonal curves between adjacent years. an increasingly darker hue for the monthly rate values distinguishes more recent data from more historic data. contrasting background colours mixed with a gradual intensity of line hues, saturation, brightness, and transparency allow for greater focus and attention to trends in the data [ ] [ ] [ ]. the rotated histogram in the left panel shows the distribution of monthly rates and its degree of skewness due to months with high counts. we include the red median line to provide the most appropriate measure of central tendency for the skewed distribution. the shared vertical axis helps readers track those high values to a specific month in the time series. the distribution also justifies the use of negative binomial regression models to evaluate temporal patterns. by supplementing the time series plot with the distribution of monthly rates, we show a visual rationale for using appropriate analytical tools (a negative binomial model, in this case) for calculating inter-annual trends.
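a minimal sketch of this shared-axis layout, using simulated rates; the panel proportions and styling are our choices, not the published figure's.

```python
# a sketch of the shared-axis multi-panel layout described above: a rotated
# histogram of monthly rates beside the monthly time series; data simulated.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(3)
t = np.arange(120)
rate = np.exp(1 + 0.6 * np.cos(2 * np.pi * t / 12)) + rng.gamma(1, 1, 120)

fig, (ax_hist, ax_ts) = plt.subplots(
    1, 2, sharey=True, gridspec_kw={"width_ratios": [1, 4]}, figsize=(9, 3))

# left panel: rotated histogram showing the right-skewed rate distribution
ax_hist.hist(rate, bins=20, orientation="horizontal", color="grey")
ax_hist.set_ylabel("monthly rate")
ax_hist.invert_xaxis()

# right panel: time series with the median rate as the reference line
ax_ts.plot(t, rate, color="steelblue")
ax_ts.axhline(np.median(rate), color="red", lw=1)
ax_ts.set_xlabel("month of study period")
fig.tight_layout()
plt.show()
```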
to better understand annual differences in seasonal behaviors, we propose a multi-panel plot that incorporates annual seasonal signatures, summary statistics of monthly rates, and radar plots. given varying visual perceptions of these three ways of presenting seasonal patterns, we offer side-by-side comparisons that aim to increase comprehension. the top-left panel provides an overlay of all annual seasonal signatures, a set of curves depicting characteristic variations in disease incidence over the course of one year, where line hues become increasingly darker with more recent data and a red line indicates median monthly rates, as before. the bottom-left panel provides a set of box plots for each month that aggregate information over the study period and provide essential summary statistics, including the median rate values and measures of spread. the shared horizontal axis allows the two plots to be compared across the years using identical scales. to provide visual context, background colours were used to indicate the four seasons (winter, spring, summer and autumn). the right panel overlays monthly rates using a radar plot, where time is indicated on the rotational axis and rates are indicated on the radial axis. the radar plot emphasizes the periodic nature of seasonal variations in one continuous line with graduating colours. the colour hue of the lines, the background colour, the median line colour and the axis scales are uniform across all three panels. we also repeat the pictograms to refer to the selected pathogen and location. for salmonellosis, disease rates are highest in the summertime (with peaks in july and august) and lowest during the wintertime (with a well-defined february nadir). rate increases and decreases during the equinox periods indicate bacterial growth under more and less favourable climate conditions, respectively. the box plots show the median, the spread of rates, the confidence interval (whiskers), and outliers or potentially influential observations (markers) over the study period. measures of distribution spread provide an insight into the dispersion of rates in each month: the variability of salmonellosis rates decreases in winter months closer to the february nadir but increases in the summer months of july and august closer to the seasonal peak. unusually high values are indicative of erratic behavior characterized by spikes in specific months and years. the right-hand panel further emphasizes the periodic nature and the positioning of the seasonal peaks and nadirs. radar or spider plots describe time using a rotational axis where the radial distance from the centre of the plot depicts rate magnitude [ ] [ ] [ ] [ ]. radial axes, compared to perpendicular axes, show annual fluctuations as a continuous flow. this more clearly demonstrates declines of salmonellosis rates during nadir months (november to march) without the visual discontinuity of the left-panel visuals. to capture the advantage of a multi-panel plot, we combine the monthly box plot with a calendar heatmap containing monthly rate values. in the heatmap, information for each individual year is shown as a stacked row of 12 cells (one for each month of the year), where cell colour intensity represents the magnitude of monthly rates. the heatmap again illustrates that the highest rates (shown as the darker cells) occur in july and august.
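a sketch of the calendar-heatmap idea, with simulated rates; the colour map and dimensions are arbitrary choices.

```python
# a sketch of a calendar heatmap: years as rows, months as columns, cell
# colour giving the monthly rate; values are simulated, not foodnet data.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(8)
years = np.arange(1996, 2008)
month = np.arange(1, 13)
# summer-peaking seasonal pattern plus noise, one row per year
rates = (np.exp(1 + 0.8 * np.cos(2 * np.pi * (month - 7) / 12))
         + rng.gamma(1, 0.5, size=(len(years), 12)))

fig, ax = plt.subplots(figsize=(6, 4))
mesh = ax.pcolormesh(month, years, rates, cmap="Reds", shading="nearest")
ax.set_xticks(month)
ax.set_xlabel("month")
ax.set_ylabel("year")
fig.colorbar(mesh, ax=ax, label="monthly rate")
plt.show()
```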
compared to stacked line plots, however, this figure provides an individual row for each year of the time series, allowing for greater decomposition, differentiation, and comparison of seasonal signatures across years. in this plot, seasonal changes are shown horizontally from left to right, from january to december, and the yearly trend transition can be observed vertically from bottom to top in the right panel. while the overlay plot provides the annual variability of seasonal patterns, monthly rate values for each year are difficult to ascertain; the emphasis is placed on similarities and differences of the seasonal curvature over time. in the heatmap, the attention shifts to comparing the intensity of rates per month of the year across years. here, we evaluate which months of the year are most intense across years, using the intensity of each cell's colour hue to describe the intensity of rates. this panel integrates information on both trends and seasonality along with the individual monthly values, unlike any of the previously shown visualizations. yearly rates provide a bar graph for comparing fluctuations in inter-annual rates while the adjacent heatmap indicates the month(s) driving these fluctuations. in doing so, the calendar heatmap identifies whether inter-annual changes are driven by sporadic outbreaks or increased seasonal magnitude of rates. at the same time, the shared-axis box plot provides an overview of the average seasonal signature for the entire time series, as emphasized earlier. understanding seasonal features. detailed characterization of the timing and intensity of seasonal peaks requires a standardized estimation of peak timing and amplitude. this standardization improves upon implemented techniques of comparing months with the highest cases in a given year by applying the δ-method to nbhr model parameters. average seasonality characteristics can be estimated across the full time series while annual estimates allow for more granular comparisons between years. to depict point estimates and confidence intervals of seasonality characteristics, we use forest plots, a technique commonly used in meta-analyses. we develop a multi-panel forest plot to depict annual peak timing, annual amplitude, and their joint distribution, to better understand the relationship among the seasonal features and how it changes over time. this multi-panel plot incorporates two forest plots (one each for annual peak timing and amplitude estimates) and one scatterplot (for peak timing and amplitude) to describe seasonality features. the top-left panel shows peak timing estimates (as month of the year, from the beginning of january to the end of december, on the horizontal axis) for each study year (vertical axis). the bottom-right panel shows amplitude estimates, where the horizontal axis indicates the study year and the vertical axis shows the amplitude (the ratio between the disease rate at peak and the median rate). the bottom-left corner shows the scatterplot of peak timing (horizontal axis) and amplitude (vertical axis) with markers representing each pair of annual estimates. measures of uncertainty (95% confidence intervals) are reflected in the error bars of each marker; dashed red lines show median peak timing and amplitude estimates.
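a sketch of the forest-plot panel just described, with made-up annual peak-timing estimates and interval half-widths standing in for δ-method output.

```python
# a sketch of a forest plot of annual peak-timing estimates with
# delta-method-style confidence intervals; estimates below are invented.
import numpy as np
import matplotlib.pyplot as plt

years = np.arange(1996, 2006)
peak = 7 + 0.3 * np.random.default_rng(5).standard_normal(len(years))
half_ci = np.full(len(years), 0.8)          # +/- half-widths of the 95% cis

fig, ax = plt.subplots(figsize=(5, 4))
ax.errorbar(peak, years, xerr=half_ci, fmt="o", color="black", capsize=3)
ax.axvline(np.median(peak), color="red", ls="--", lw=1)  # median peak timing
ax.set_xlim(1, 12)
ax.set_xlabel("peak timing (month of year)")
ax.set_ylabel("year")
ax.invert_yaxis()                           # earliest year at the top
plt.show()
```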
forest plots such as these provide a compact, clear, and comprehensive visual describing the stability of peak timing and amplitude, even without showing the entire seasonal signature. for example, salmonellosis peak timing and amplitude vary little each year, indicating strong, stable seasonal peaks in july and august. consistent peak timing means practitioners could time preventive strategies, increase awareness of foodborne illnesses to prevent transmission, and inform food retailers of when food safety inspections should be in higher demand within their supply chains. consistent amplitude estimates show that the intensity of salmonellosis varies little over time, suggesting that federal food safety regulations have not greatly influenced the number of salmonellosis cases annually. this type of information is likely to benefit foodnet fast users. supplementary figure s provides an example of how sporadic outbreak behavior can be depicted by forest plots of peak timing and amplitude estimates, for shigellosis in ny. the lack of seasonality for shigellosis is shown by the broad confidence intervals for peak timing, spanning the entire year and beyond. figure s provides an example bar chart showing differences in the average annual incidence of salmonellosis for the ten foodnet-surveyed states. as with other foodnet visualizations, the data have been compressed to show only average annual estimates. as before, annual rates mask within-year seasonal variations, calling into question whether differences between states are driven by single-year outbreaks. the alphabetical organization of the horizontal axis makes ranking and comparing states more difficult than if they were ordered from highest to lowest rates. to ease comparisons of a single disease across geographic locations, we generated two multi-panel plots. these plots mirror the techniques shown above but include multiple shared axes and multiple locations to draw spatial comparisons. supplementary figure s follows the same design; we replicate it for salmonellosis in all foodnet-surveyed states. we present all states in one plot, in descending order by the sum of yearly rates in each state, and display all available data so that state-level patterns can be compared. the box plot in the top panel provides an overview of the seasonal signature for the entire us. the bottom panel disaggregates the entire us by state. as shown, all states share similar peak timing in july and august for almost every surveillance year. for some states, like ga and ca, rates are densely concentrated from july to september, with a rapid decline from september to february and a gradual incline from february to july. for other states, like ny and or, seasonal peaks are much less pronounced and rate differences are smaller between months. clear indication of missing data provides additional information on differences in reporting completeness not captured by previous figures. while heatmaps provide information on seasonal signatures, yearly rate bar graphs (right panel) capture state-level trends over time. states are stacked in order of total cases, showing differences in the intensity of salmonellosis infection across states. comparisons within states between years help identify inter-annual rate changes over time. for example, while md and ca have generally declined in annual rates over the study period, ga rates first increased and then steadily declined.
in combination with heatmaps, yearly rates also allow for detailed assessment of sporadic outbreaks. for example, in ct erratic outbreaks came from two monthly spikes in april and june, while in nm a multi-month outbreak lasted from may to july. by using shared horizontal and vertical axes, this plot eases the comparison of disease rates across months, years and states. it also helps to determine hotspots and detect potential co-occurrences of infection in different states. moreover, the plot can be periodically updated by adding new information, offering a sustainable approach for making consistent comparisons between historical data and data captured in the future. to compare seasonality features across locations, we designed a multi-panel plot, similar to the forest-plot panel above, to show average peak timing and amplitude estimates over the full study period for each state. the top-left panel plots peak timing estimates ordered from the earliest (or) to the latest (ga) peak timing, while the bottom-right panel plots amplitude estimates in order of magnitude. marker and line colours are used to differentiate each seasonality feature estimate and its measure of uncertainty between states. the bottom-left panel shows the relationship between peak timing and amplitude across states. figure s provides an example foodnet fast bar chart showing differences in the total confirmed infections for each of the nine surveyed pathogens in the us. the visual shows that infections due to campylobacter and salmonella have the highest cumulative counts of infections while cyclospora has the lowest counts. while depicting these differences clearly, this visual lacks sufficient specificity for drawing more intricate comparisons between infections. how are counts or rates distributed by year? what are the within-year variations of rates by pathogen? how do seasonal signatures and their variability differ by pathogen? can axes be reordered or recalculated for easier comparisons between pathogen counts or rates? we propose two multi-panel plots that improve the comparisons of multiple diseases for a given geographic location. the first replicates the state-comparison design but emphasizes comparisons between pathogens for a single location. instead of a seasonal signature box plot, the top panel provides a scatterplot to illustrate the peak timing and amplitude of each pathogen. in combination with the heatmap in the bottom panel, these plots illustrate the strong seasonality of salmonellosis, campylobacteriosis, and stec in july and august and of cryptosporidiosis in august. these seasonal peaks are consistent across almost all years, suggesting a stable seasonal periodicity and strong alignment between infections. in contrast, infections caused by yersinia enterocolitica, vibriosis, listeriosis, and cyclosporiasis have much less pronounced seasonality and monthly rates much lower than salmonellosis or campylobacteriosis. yearly rates, shown in the right panel, indicate erratic outbreak behaviors for cyclosporiasis. given sizable differences in rates across diseases, we applied a high-order calibration colour scheme. we also provide the same multi-panel, shared-axis visualization design for comparisons across pathogens: a forest plot of peak timing by disease pathogen (top-left panel), a forest plot of amplitude by pathogen (bottom-right panel), and a scatterplot of peak timing against amplitude estimates (bottom-left panel).
as before, average peak timing and amplitude estimates are calculated using nbhr models for the entire time series. comparisons between diseases allow for understanding the alignment of seasonal processes across pathogens as well as shared relative magnitudes in a specific location. in our case, most of the pathogens peak during the summertime except cyclosporiasis. however, if the selected diseases peak during winter months, we recommend adjusting the starting and ending months to center these peaks in the figure. in this study we offered ways of thinking about how public data platforms can be improved by using visual analytics to provide a comprehensive description of trends and seasonality features in reported infectious diseases. we emphasize the utility of multi-panel graphs by showing side-by-side different methods of depicting trends over time and features of seasonality, including disease peak timing and amplitude. we provided visual tools to show trends, examine seasonal signatures and their characteristics, compare diseases across locations for trends and seasonal signatures, and draw comparisons across pathogens for trends and seasonal signatures. we also provide guides on how to explore and compare trends and seasonality between multiple diseases and geographic locations using foodnet fast data. given varying visual perceptions, we offer side-by-side comparisons of different tools aiming to increase comprehension and faster adoption of efficient graphical depictions. we developed a time series of monthly rates by reconstructing a time series of monthly counts and then dividing the counts by the sum of all foodnet-surveyed counties' mid-year populations per state, expressed per 1,000,000 persons. in this calculation, we recognize that average monthly percentages are rounded in the raw data file and do not sum to 100% annually for downloaded years. this rounding resulted in non-integer counts within our time series. to prevent modification of raw data files, we did not round counts to integers before or after calculating rates. no information is provided on the foodnet fast website for the definition of confirmed cases, and data downloads provide no metadata for distinguishing cases from hospitalizations and deaths. although the case definition is provided on the cdc website as "laboratory-confirmed cases (defined as isolation for bacteria or identification for parasites of an organism from a clinical specimen) and cases diagnosed using culture-independent methods", it forces the user to assume that a confirmed case is any person with laboratory-confirmed cultures of a specific pathogen who may or may not have been hospitalized or died from infection. foodnet also collects information on hospitalizations and deaths, but does not provide information on the monthly percentage of hospitalizations or deaths, so users are unable to reconstruct a monthly time series for deaths or hospitalizations. the foodnet fast platform labels all confirmed diseases as "incidence" calculations. technical documentation on the foodnet website shows that the term incidence reflects cases per 100,000 persons (used interchangeably with a disease rate), with no distinction of whether these are newly introduced within the population (i.e. incidence) or the total persons diagnosed with a disease (i.e. prevalence).
we found that monthly rates can similarly be calculated by multiplying annual incidence rates by the monthly percentage of confirmed cases for each disease-state pair. differences between our calculations and this alternative method are minimal. we suspect that rounding of average monthly percentages and differential population catchment areas for rate calculations cause these differences. as shown in supplementary table s , the population of the surveillance catchment area changes over time. oftentimes, publicly available surveillance datasets, including foodnet fast, do not include location- and year-specific population catchment area estimates, which are needed for calculating rates from disease counts. as foodnet does not provide population catchment areas for calculating rates, it forces the user to assume that foodnet surveillance reaches the total population of a surveyed county (likely an overestimate), yet such an oversight is easy to fix. three collaborators confirmed our monthly rate calculations for quality control. we applied negative binomial harmonic regression (nbhr) models, commonly used in the time series analysis of counts and cases. while the use of nbhr models, specifically the inclusion of trigonometric harmonic oscillations, is similar to existing works on foodborne illnesses, these studies often incorporate harmonic oscillators only to adjust for or remove seasonal oscillations [ ] [ ] [ ] [ ] [ ] [ ] [ ] [ ] [ ]. we have extended the use of harmonic terms and developed the tools to estimate peak timing and amplitude. the developed δ-method provides a systematic calculation of confidence intervals for peak timing and amplitude estimates based on the results of harmonic regression models. in the proposed approach, we present the amplitude as the ratio of the seasonal peak to the seasonal median, which offers robust estimation even for rare or highly sporadic infections. these features are not available when traditional models, like autoregressive integrated moving average (arima) models, are applied. measures of uncertainty enable formal testing and comparisons across diseases in the same location or across locations for the same disease. in our previous works, we have demonstrated the broad utility of the δ-method and applications of peak timing and amplitude estimation in the context of epidemiological studies. we evaluate each state's cases individually as well as all national cases as the sum of all states' cases. our analysis evaluated all cases reported to foodnet fast irrespective of demographic factors such as age group, sex, or ethnic group. future analyses can consider using the demographic factors available on the foodnet fast platform, such as age group, sex (male and female), and ethnic group (american indian and alaskan native, asian and pacific islander, black, multiple, white). to incorporate this information, our methodology for data extraction would need to be repeated for each subcategory or combination of categories desired (e.g. separate downloads for males and for females). future analyses can also consider differences in pathogen strain, which can only be obtained by extracting data for each pathogen-location-year combination (e.g. one file per disease, location, and year).
foodnet fast, like many global disease surveillance databases, has no metadata describing missing data. foodnet fast reports missing counts using "n/a" for years when pathogens or locations were not under surveillance. however, there are also years when foodnet surveillance was live in a state, but a pathogen is missing from the data download. we believe that this missing data arises when, for a given year, a pathogen has zero total cases. however, we cannot specify whether absences of surveillance reporting came from a breakdown in reporting or from true zero annual counts. without specification, we have set any year with "n/a" as missing due to no reported case information. when calculating peak timing and amplitude using the δ-method, we applied nbhr models adjusted for harmonic seasonal oscillators and three trend terms (linear, quadratic, and cubic). we selected the polynomial terms as an example, yet researchers can consider alternative techniques for measuring seasonality such as splines, nonparametric regression, arima models, or their extensions. additionally, the cdc recommends using a mixed-effects model when conducting time series analyses on foodnet fast data to account for differential population catchment areas and changes in laboratory culture confirmation techniques over time. we focus on the analysis of individual states and diseases and adjust for population catchment variations by calculating monthly rates using county-level population estimates. future analyses could include detailed assessments of peak timing and amplitude across diseases, locations, and time periods. such analyses will help determine whether a synchronization of outbreak peaks occurs or whether social, economic, or environmental factors influence peak timing and amplitude. future applications. this analecta of visualizations intends to communicate detailed information on foodborne outbreak trends and seasonality suitable for a general audience, public health professionals, stakeholders, and policymakers. future applications would involve the development of an interactive web-based platform allowing users to select the outcome, timeframe, and location of interest for educational training and research purposes. for example, public health researchers and practitioners could use this tool to generate insights related to long-term trends, changes in disease dynamics, or changes in populations at risk. information on when and where outbreaks are most common enables producers, distributors, and retailers to improve food safety practices to prevent these outbreaks. finally, this platform could aid policymakers in shaping public understanding of outbreak dynamics and using scientific evidence to refine public health policies. the analecta of our time series of monthly rates, data visualizations, and code used for all calculations and visualizations are available on our website (https://sites.tufts.edu/naumovalabs/analecta/). data and code can be directly downloaded from the website, while visualizations are linked on the website to an external visualization repository. time series data and code are also available on figshare. visualizations on our website are provided in the same order as presented here: describing trends; examining seasonal signatures with the three standard techniques (line graphs, boxplots, and radar plots) and heatmaps; characterizing features of seasonality; drawing comparisons across locations for trends and seasonal signatures; and drawing comparisons across pathogens for trends and seasonal signatures.
file downloads are available for trend, seasonal signature, and annual time series visualizations. for images examining a single disease in a single location, downloads are formatted so that the prefix abbreviates the location and the suffix abbreviates the pathogen (see supplementary table s ). for visualizations comparing multiple locations or diseases, the prefix "loc" indicates comparisons across locations while the prefix "dis" indicates comparisons across pathogens (see supplementary tables s , s ). all statistical analyses were conducted using stata (se) software. all visualizations were created using r and tableau professional software. all software code is open access on our website (https://sites.tufts.edu/naumovalabs/analecta/) and figshare, and is available for public reuse with proper citation of this manuscript.

references:
• web-based infectious disease surveillance systems and public health perspectives: a systematic review
• detecting influenza epidemics using search engine query data
• innovation in observation: a vision for early outbreak detection
• use of unstructured event-based reports for global infectious disease surveillance
• algorithms for rapid outbreak detection: a research synthesis
• influenza seasonality: underlying causes and modeling theories
• flunet. global influenza surveillance and response systems (gisrs)
• visual analytics for epidemiologists: understanding the interactions between age, time, and disease with multi-panel graphs
• incidence and trends of disease with pathogens transmitted commonly through food - foodborne diseases active surveillance network
• preliminary incidence and trends of disease with pathogens transmitted commonly through food - foodborne diseases active surveillance network, us sites
• incidence and trends of diseases with pathogens transmitted commonly through food and the effect of increasing use of culture-independent diagnostic tests on surveillance - foodborne diseases active surveillance network
• disease with pathogens transmitted commonly through food and the effect of increasing use of culture-independent diagnostic tests on surveillance - foodborne diseases active surveillance network
• centers for disease control and prevention (cdc). foodborne diseases active surveillance network: foodnet surveillance report. national center for emerging and zoonotic diseases
• produce-associated foodborne disease outbreaks
• comparing characteristics of sporadic and outbreak-associated foodborne illnesses
• increasing campylobacter infections, outbreaks, and antimicrobial resistance in the united states
• climate change, extreme events and increased risk of salmonellosis in maryland, usa: evidence for coastal vulnerability
• systematic review and meta-analysis of the proportion of campylobacter cases that develop chronic sequelae
• the campylobacteriosis conundrum - examining the incidence of infection with campylobacter sp.
• foodnet fast: pathogen surveillance tool
• bacterial enteric diseases among older adults in the united states: foodborne diseases active surveillance network
• common source outbreaks of campylobacter disease in the usa
• temporal patterns of campylobacter contamination on chicken and their relationship to campylobacteriosis cases in the united states
• united states department of commerce
• united states department of commerce. annual estimates of the resident population
• incorporating calendar effects to predict influenza seasonality
• the shift in seasonality of legionellosis in the usa
• a negative binomial model for time series of counts
• visualization analysis and design
• information dashboard design: the effective visual communication of data
• dynamic maps: a visual-analytic methodology for exploring spatio-temporal disease patterns
• effects of data aggregation on time series analysis of seasonal infections
• an analecta of visualizations for foodborne illness trends and seasonality
• foodborne diseases active surveillance network - decades of achievements
• activities, achievements, and lessons learned during the first years of the foodborne diseases active surveillance network
• the spatial structure of epidemic emergence: geographical aspects of poliomyelitis in northeastern usa
• prime mover or fellow traveller: 25-hydroxy vitamin d's seasonal variation, cardiovascular disease and death in the scottish heart health extended cohort (shhec)
• season and outdoor temperature in relation to detection and control of hypertension in a large rural chinese population
• climate change impact assessment of food- and waterborne diseases
• a systematic review and meta-analysis of the campylobacter spp. prevalence and concentration in household pets and petting zoo animals for use in exposure assessments
• global prevalence of asymptomatic norovirus disease: a meta-analysis
• foodnet fast: pathogen surveillance tool faq
• assessing the impact of environmental exposures and cryptosporidium disease in cattle on human incidence of cryptosporidiosis in southwestern ontario
• do contamination of and exposure to chicken meat and water drive the temporal dynamics of campylobacter cases?
• review of epidemiological studies of drinking-water turbidity in relation to acute gastrointestinal illness
• seasonality and the effects of weather on campylobacter diseases
• increase in reported cholera cases in haiti following hurricane matthew: an interrupted time series model
• complex temporal climate signals drive the emergence of human water-borne disease
• association between community socioeconomic factors, animal feeding operations, and campylobacteriosis incidence rates: foodborne diseases active surveillance network (foodnet)
• climate, human behaviour or environment: individual-based modelling of campylobacter seasonality and strategies to reduce disease burden
• temperature-driven campylobacter seasonality in england and wales
• rotavirus seasonality: an application of singular spectrum analysis and polyharmonic modeling
• mystery of seasonality: getting the rhythm of nature
• seasonal synchronization of influenza in the united states older adult population
• geographic variations and temporal trends of salmonella-associated hospitalization in the u.s. elderly: a time series analysis of the impact of haccp regulation
• hospitalization of the elderly in the united states for nonspecific gastrointestinal diseases: a search for etiological clues
• assessing seasonality variation with harmonic regression: accommodations for sharp peaks

acknowledgements: this work was supported in part by the intelligence advanced research projects activity (iarpa). the views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of odni, iarpa, or the u.s. government. the u.s. government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein. the research was in part supported by the national science foundation (nsf) innovations in graduate education (ige) program and by the united states department of agriculture (usda) national institute of food and agriculture (nifa) cooperative state research, education, and extension service fellowship. the authors thank meghan hartwick for editorial and technical assistance. r.s. contributed to data extraction, formal analysis, and writing. b.z. contributed to data validation, conceptualization of visual aids, and visualization creation. t.m.a.f. contributed to data validation, review, and editing. e.n.n. contributed to methodology development, review and editing, supervision, project administration and funding acquisition. the authors declare no competing interests. supplementary information is available for this paper at https://doi.org/ . /s - - -x. correspondence and requests for materials should be addressed to e.n.n.
key: cord- -z aa wk authors: farewell, v. t.; herzberg, a. m.; james, k. w.; ho, l. m.; leung, g. m. title: sars incubation and quarantine times: when is an exposed individual known to be disease free? date: - - journal: stat med doi: . /sim. sha: doc_id: cord_uid: z aa wk

the setting of a quarantine time for an emerging infectious disease will depend on current knowledge concerning incubation times. methods for the analysis of information on incubation times are investigated, with a particular focus on inference regarding a possible maximum incubation time, after which an exposed individual would be known to be disease free. data from the hong kong sars epidemic are used for illustration. the incorporation of interval-censored data is considered and comparison is made with percentile estimation. results suggest that a wide class of models for incubation times should be considered because the apparent informativeness of a likelihood depends on the choice and generalizability of a model. there will usually remain a probability of releasing from quarantine some infected individuals, and the impact of early release will depend on the size of the epidemic.

control of infectious diseases is a major public health concern. after an individual's exposure to infection, opposing biological processes take place both in the infecting organism and in the host, and these result either in that individual's development of clinical evidence of the disease or in an imperceptible host victory. during this variable period of time, the individual may in turn become infectious to others and thus play a part in generating or perpetuating an epidemic. historically, attempts have been made to prevent and control epidemics by isolating, for an arbitrary period of time after which the biological struggle could be assumed complete, any individuals who might be incubating the disease. the word 'quarantine', derived from the latin word quaresma, denoting a forty-day period, reflects the origin of the practice in the 40-day compulsory isolation of ships arriving in venice. as more has been learned about the different infections, quarantine periods have varied, but when a hitherto unknown disease appears it is extremely difficult to decide what arbitrary period should be applied. and yet this is especially important if there should be no effective treatment for the disease or its infectious state. controlling or preventing an epidemic then depends solely on releasing no infectious individuals into the general community. but, as was noted earlier, the period of unperceived changes in the individual is variable. quarantine was one of the key aspects of infection control introduced during the recent severe acute respiratory syndrome (sars) epidemic. individuals who may have been exposed to the sars virus were quarantined for a fixed period of time, most commonly 10 days. the premise was that those who may have been exposed, but who showed no signs of illness after this period, were unlikely to come down with the disease. since sars was previously unknown, a quarantine policy offered the only control. an important paper on epidemiological aspects of sars was that of donnelly et al.
[ ], which made use of data from the hong kong experience with sars. the estimation of the incubation period in this paper was based on only a small number of patients with a single exposure to sars over a limited time scale with recorded start and end dates. donnelly et al. [ ] assumed a gamma distribution for the incubation times, implicitly therefore assuming the possibility of very long incubation periods. the work reported here arose from a question related to the confidence a community should have that an individual who has passed through the sars quarantine period is disease free, and how long the quarantine period should be to make the probability of this very high. the concept of a maximum incubation time could be relevant to these considerations. there are many issues to be considered in setting a quarantine time, for example the extent of disruption to individuals' lives. also, and quite sensibly, it can be argued that there is unlikely to be a 'true' maximum incubation time. however, one motivation for a quarantine policy is the assumption that there is a reasonably well-behaved distribution of incubation times and some maximum time beyond which it is biologically quite implausible that symptoms may arise. this time could be the basis for setting a quarantine time. whether it is helpful to think about quarantine in this way is debatable. to inform this debate, we investigated what might reasonably be inferred about such a maximum incubation time based on the moderately sized samples that would typically be available in the early course of an epidemic. for comparison, brief consideration is also given to the estimation of tail behaviour in untruncated distributions. our general premise is that careful specification of the available knowledge concerning the incubation distribution must be central to public health decisions to control epidemics. the work reported here should be viewed primarily as an exploration of statistical methodology that might be useful for this purpose, not as a critique of other approaches or specific estimates, such as those for sars. to illustrate the general principles involved, we follow donnelly et al. [ ] and consider a gamma distribution for incubation times. thus if t is the random variable representing an incubation time, with an observed value of t = t, then a gamma distribution for t is specified by the probability density function g(t) = t^{a-1} e^{-t/s} / (\Gamma(a) s^a), where t > 0, a > 0 and s > 0. the expectation of this distribution is as and the variance is as^2. however, we now introduce the assumption that this distribution is truncated at some time m, so that 0 < t < m, and the density function for t becomes f(t) = g(t)/G(m), where G(m) = \int_0^m g(u) du. assume that data are available on n incubation times t_1, t_2, ..., t_n. maximum likelihood estimation of the parameters a, s and m can then be based on the likelihood function L(a, s, m) = \prod_{i=1}^{n} g(t_i)/G(m). standard asymptotic distributional results for mles will not be applicable for the parameter m. in the consideration of inferential statements concerning m, there are parallels with jeffreys' 'bus problem' or, more accurately, 'tramcar problem', raised in a letter to fisher [ , p. ]. a brief summary is that in a town it is known that tramcars are numbered consecutively and that a new arrival in the town observes a tramcar numbered 100. can the new arrival infer anything about the number of tramcars, say n, in the town? the problem can be extended by allowing the observation of more than one tramcar.
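the tramcar parallel can be made concrete with a few lines of simulation, showing why the maximum-based mle is biased; the sample sizes and the bias correction used below are our illustrative choices.

```python
# a small simulation of the tramcar problem: with numbers drawn uniformly
# from 1..n, the maximum observed number is the mle of n but is biased
# downward; the simulation illustrates the size of the bias.
import numpy as np

rng = np.random.default_rng(2)
n_true, sample_size, reps = 100, 5, 20_000

draws = rng.integers(1, n_true + 1, size=(reps, sample_size))
mle = draws.max(axis=1)                       # mle: the largest observed number

# e[max] is roughly n * k/(k+1) for k draws, so the mle underestimates n;
# (k+1)/k * max - 1 is the usual (nearly) unbiased correction
corrected = (sample_size + 1) / sample_size * mle - 1
print("mean mle:      ", mle.mean())          # well below 100
print("mean corrected:", corrected.mean())    # close to 100
```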
jeffreys considered the use of a prior proportional to 1/n, after showing that a constant prior leads to no useful inferential statements. a very similar problem is the estimation of n in binomial (n, p) models. in both situations, the choice of the prior can be shown to be highly influential inferentially. in the tramcar problem, the maximum observed number is the mle for n and is sufficient for its estimation if a uniform distribution is assumed for the observed numbers. it is, however, a biased estimate. a unique unbiased estimate can be derived, but the question of optimal interval estimation remains. for the binomial problem, it has been shown that no unbiased estimator of n exists [ ]. for the purpose of this paper, we simply define the mle of m and make no claims for its optimality in any sense. for public health purposes, the upper end-point of some interval of plausible values is more likely to be useful for decision making than a point estimate of the parameter. we consider the likelihood function simply as representing the information available from the data for inference concerning the unknown parameters. comparison of the shape of the likelihoods is sufficient for the issues considered here, and the likelihood function, particularly through providing ratios of likelihoods, is simply regarded as giving the relative plausibilities of parameter values [ , p. ]. since, by definition, the likelihood is zero for m below the largest observation, the profile likelihood l_p(m), obtained by maximizing the likelihood over a and s for each fixed m, can be defined for t_(n) <= m < infinity and will thus provide some indication of the values of m which are plausible given the observed data. it is frequently convenient to standardize this function so that the maximum value is one, by dividing by the value of the likelihood function at the mles. this standardized function can then be defined as l*_p(m) = l_p(m)/l_p(m-hat), where m-hat is the mle of m. while the mle of m will be the same irrespective of the distributional assumption made concerning t, the shape of the profile likelihood for m, and therefore the range of plausible values for m, will depend on the assumption and, in particular, on assumptions about the tails of the truncated distribution. while the gamma model is well known in epidemic theory, motivated by regarding the incubation period as a fixed number of independent and successive stages of infection, each exponentially distributed, alternatives to the gamma distribution should be considered, from a model-fitting perspective at least. for illustration, we consider the log-normal distribution. a log-normal regression model can be written as a location-scale model y = log(t) = \mu + \sigma e, where e follows a standard normal distribution f(e) = (2\pi)^{-1/2} \exp(-e^2/2), and where \mu \in R and \sigma > 0. the development of a truncated log-normal model follows the development for the truncated gamma given above, as does the likelihood development, with (\mu, \sigma, m) replacing (a, s, m) as the set of model parameters. the use of this model is also illustrated below. more general distributions than the truncated gamma and the truncated log-normal can also be considered. a convenient choice is the so-called log-gamma distribution of farewell and prentice [ ], which represents a reparameterization and extension of a generalized gamma distribution. with \mu, q \in R and \sigma > 0, the log-gamma model can be written as the location-scale model y = log(t) = \mu + \sigma w, where the density f(w; q) for w is f(w; q) = |q| (q^{-2})^{q^{-2}} \exp\{q^{-2}(qw - e^{qw})\} / \Gamma(q^{-2}) if q \neq 0 and, when q = 0, is the standard normal density.
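a sketch of how the standardized profile likelihood l*_p(m) can be computed under the truncated gamma model; the simulated data, grid, and optimizer settings are our assumptions, not the paper's analysis.

```python
# a sketch of the standardized profile likelihood for the truncation point m
# under a truncated gamma model; data are simulated, not the hong kong series.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import gamma as gamma_dist

rng = np.random.default_rng(4)
t = gamma_dist.rvs(2.0, scale=2.0, size=200, random_state=rng)
t = t[t < 14.0][:60]                       # a crude sample truncated at 14 days

def neg_loglik(params, m):
    a, s = np.exp(params)                  # optimize on the log scale: a, s > 0
    # truncated gamma density: f(t) = g(t) / G(m) on (0, m)
    return -(gamma_dist.logpdf(t, a, scale=s).sum()
             - len(t) * gamma_dist.logcdf(m, a, scale=s))

def profile_loglik(m):
    fit = minimize(neg_loglik, x0=[0.0, 0.0], args=(m,), method="Nelder-Mead")
    return -fit.fun

# the mle of m is the largest observation; profile over a grid above it
m_grid = np.arange(np.ceil(t.max()), 41.0, 2.0)
lp = np.array([profile_loglik(m) for m in m_grid])
for m, rel in zip(m_grid, np.exp(lp - lp.max())):
    print(f"m = {m:4.1f}: relative profile likelihood {rel:.3f}")
```

how slowly the printed ratio decays with m is exactly the question at issue in the text: a flat profile means the data say little about a maximum incubation time.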
the cumulative distribution function can be written in terms of the incomplete gamma integral: with $k = q^{-2}$ and $q > 0$, $F(w; q) = I(k e^{qw}; k)$, where $I(\cdot; k)$ denotes the regularized incomplete gamma function, with the complementary form applying for $q < 0$. the log-gamma distribution includes the weibull ($q = 1$) and exponential ($q = \sigma = 1$) distributions as special cases, as well as the gamma ($q = \sigma$) and log-normal ($q = 0$). the distribution of w is negatively skewed for $q > 0$ and positively skewed for $q < 0$. another alternative truncated distribution for incubation times is, therefore, the truncated log-gamma. the development of a profile likelihood for $m$ will follow as in sections and , with maximization over $\mu$, $\sigma$ and $q$. the use of this more general distribution is also illustrated in section . we consider data from sars cases, a subset of cases in a hong kong hospital authority database, for which some information was available on time of infection. the data consist of the date of the appearance of the symptoms of sars and an earliest and latest possible date of exposure. initially, we restrict attention to cases whose interval of possible exposure times is less than days and also exclude cases recorded as having first symptoms on the date of exposure. these may represent questionable records or cases related to an unusually high level of exposure, possibly hospital acquired, not of general relevance for setting quarantine times for controlling community outbreaks. relatively short intervals of exposure times are used to provide some reasonably precise information concerning incubation, as is done in aids seroconverter cohorts [ ]. table i provides some comparison of the cases with infection intervals less than days with the cases with longer intervals. the variables examined were age, sex, health care worker status, vital status on hospital discharge and lactate dehydrogenase (ldh) level, where higher values of ldh reflect more severe disease. it can be seen that while the cases are similar in age, sex and worker status, there is a higher death rate and some evidence of more severe disease in the cases with the longer possible infection intervals. this may reflect the fact that more severe cases arriving at a hospital might well have had a longer period with the disease and be less able to characterize precisely their possible time of infection. the impact of extending the allowed interval size is examined later. there remains, of course, the implicit assumption that the cases with some information on infection time are a random sample of the entire distribution of cases. however, the possibility of biases in reporting, heterogeneity in routes of transmission or varying infectious doses of the sars coronavirus remains. table ii presents the longest and shortest possible incubation times for these patients as well as the average of these two times, rounded to the nearest day since that is how the data would normally be recorded. we consider first the averaged times. for the data set of averaged times, figure presents the profile likelihoods, $l_p^{*}(m)$, based on the gamma, log-normal and log-gamma models discussed in section . figure allows the comparison of the apparent information in the data set under the different modelling assumptions. for the truncated gamma model, the profile likelihood never drops below per cent, suggesting that any value of $m$ greater than the maximum time observed, , is plausible. thus it appears that the data are uninformative with respect to the maximum incubation time if a gamma distribution is assumed. however, the situation is different for the truncated log-normal model. while the mle for $m$ is again days for this model, any value for $m$ greater than .
days makes the data more than times less plausible than does the mle of days. for public health purposes, it could therefore be argued, based only on such data and an assumed log-normal model, that a quarantine time of days might be necessary to ensure that sars cases were not released 'too early'. recall that the focus here is on the upper limit of an interval of plausible values rather than any specific estimator for the maximum incubation time. a possible reason for the widely different behaviour of the profile likelihood for the two models is a difference in model fit. if we consider the more general log-gamma model that includes both of the other models as special cases, the profile likelihood for $m$ is more informative than that based on a gamma model, but it never falls below a value of per cent. thus the apparent ability to rule out larger values of $m$ under the log-normal model is not present if a less restrictive model assumption is made. this is true even though the maximum likelihood estimate of q is − . , a value close to the value $q = 0$ corresponding to the log-normal model. the hypothesis of a truncated gamma distribution would not be supported within this class of models. the maximum likelihood estimates of the various models are given in figure along with a histogram of the data. the estimated log-normal and log-gamma distributions are quite similar, while the truncated gamma does not appear to fit the data very well. all the models fail to some extent in reflecting the preponderance of short incubation times. since the use of the log-gamma model suggests there is little information for the estimation of a maximum incubation time, this may raise doubts about the assumption of a truncated distribution. to illustrate the different behaviour of the profile likelihoods for the log-normal and log-gamma models, figure plots the estimated log-gamma and log-normal models fit when the truncation time is taken to be $m =$ days. this shows that the lack of fit is much more pronounced for the log-normal distribution than for the log-gamma, thus reducing the plausibility of this $m$ under the log-normal model. in general, as for the sars cases in hong kong, it will be difficult to specify precisely when the exposure leading to a case occurred. as with many other diseases, therefore, the usual data on incubation times will derive from cases in which the exposure is known to be within a small window of time. this will generate interval-censored incubation time information for each case. assume that such information leads to a set of data $\{(t_{li}, t_{ui}),\ i = 1, \ldots, n\}$ where, for individual i, the incubation time is known to lie between a lower limit, $t_{li}$, and an upper limit, $t_{ui}$. if we assume that $f(t) = g(t)/G(m)$ is the probability density function for the actual incubation time, as in section but where g(t) is not restricted to be a gamma distribution, then, with only minor modifications, the development given there can be followed for interval-censored data. the assumption of a maximum possible incubation time $m$ creates some complication because it will limit the intervals of possible incubation times. it is also convenient to assume that all incubation times are interval-censored and that $m$ is only allowed to take values greater than $\max(t_{li})$. this avoids any possibility of a case contributing to the likelihood via its probability density function for the smallest value of $m$ and via a probability value otherwise.
in principle, other cases could be taken to have a known incubation time if such times were below any plausible values for $m$, but in practice such accuracy does not exist in any event. this type of consideration arises in other non-standard likelihood inference problems [ ]. the likelihood function for the estimation of the parameters of g(t) and $m$ can then be written $L = \prod_{i=1}^{n} \{G(\min(t_{ui}, m)) - G(t_{li})\}\, /\, G(m)$, and a profile likelihood for $m$ can be defined in the usual manner. however, it is not possible to determine immediately the mle, $\hat{m}$, which will lie somewhere between the lowest allowed value, $\max(t_{li}) + $ , and $\max(t_{ui})$. to illustrate the effect of interval-censoring, we consider the data in table ii, which show the lower and upper limits of the incubation times for the sars cases for which average rounded times were used in section . we have subtracted . from the lowest time in days and added . to the highest to give appropriate intervals in continuous time and to make all observations interval-censored. as outlined earlier, it is convenient mathematically to make all observations interval-censored. observations with a single day of presumed exposure are given an interval of width day in our analysis but, in principle, a much narrower interval could be used if the precision could be justified. a brief exploration suggests that this would have little impact on the likelihood. figure presents profile likelihoods for $m$ based on the gamma, log-normal and log-gamma models of section . these plots are based on calculations of the likelihoods for values of $m$ at intervals of . , beginning at $\max(t_{li}) + . = $ . for convenience, the mle of $m$ has been taken to be the value among these which gives the largest likelihood. further precision could be achieved but is not likely to be important. it can be seen that while the general pattern of the likelihoods is similar to that in figure , with interval-censoring not even the log-normal likelihood drops to less than the per cent level. this is, of course, reasonable in the sense that much less precise information is being assumed about the incubation times, and this must impact the precision of inferences. in spite of this slight, but perhaps important, change in the likelihoods, the fitted distributions are not much altered by the interval-censoring. for example, with the log-gamma model, the parameter estimates are ( . , . , − . ) for the averaged times in table ii and ( . , . , − . ) for the interval-censored data in table ii. finally, to show the effect of more extreme interval-censoring, we consider extending the set of data in table ii by including additional sars cases from hong kong whose period of possible exposure, which defines the width of the interval within which their incubation time lies, is thought to be less than days rather than days. this produces a set of data of cases, and figure presents the relative likelihoods for the three models based on these data. the profile likelihoods are seen to be substantially less informative, with the gamma likelihood being virtually flat for $m$ values greater than . note that one of the additional cases has an interval of ( . , . ) for their incubation time in days. the use of censoring intervals of width days is quite large in the context of sars and could not be recommended in practice. consideration of models for incubation times which incorporate truncation may provide valuable information for public health purposes. nevertheless, as is illustrated in the earlier sections, there might often be insufficient evidence to be very confident about a maximum incubation time, even within the context of a particular model.
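a sketch of the interval-censored, truncated likelihood used in the preceding analysis, continuing the earlier sketches' imports; dist_factory is a hypothetical helper that builds the untruncated scipy distribution G from a parameter vector, so the same code serves the gamma, log-normal or any other choice of g(t):

```python
def trunc_interval_loglik(params, t_lo, t_hi, m, dist_factory):
    """log-likelihood for interval-censored times (t_lo_i, t_hi_i) under a
    distribution G truncated at m: each case contributes
    [G(min(t_hi, m)) - G(t_lo)] / G(m)."""
    if m <= t_lo.max():
        return -np.inf            # m must exceed max(t_lo)
    dist = dist_factory(params)
    p = dist.cdf(np.minimum(t_hi, m)) - dist.cdf(t_lo)
    if np.any(p <= 0):
        return -np.inf
    return np.sum(np.log(p)) - t_lo.size * dist.logcdf(m)

# e.g. a truncated gamma, with hypothetical interval data in days
gamma_factory = lambda p: stats.gamma(p[0], scale=p[1])
t_lo = np.array([1.5, 2.5, 3.5, 4.5])
t_hi = np.array([3.5, 4.5, 5.5, 9.5])
ll = trunc_interval_loglik(np.array([2.0, 2.0]),
                           t_lo, t_hi, 20.0, gamma_factory)
```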
in this situation of limited evidence concerning a maximum incubation time, an alternative approach is to set a quarantine time on the basis of percentile estimation, i.e. a quarantine time might be set as the time below which per cent of cases are expected to develop. for comparison with the analyses presented earlier, the use of parametric models for this purpose is considered here. model choice will be important, since the behaviour of a distribution in the tail is very model dependent. thus, the log-gamma model, which incorporates a significant component of model choice through the parameter q, might be recommended. a more ad hoc approach to model choice could be adopted, although the uncertainty involved in the choice might be more difficult to incorporate into inferences. figure illustrates the best-fitting log-gamma and log-normal distributions, not involving truncation, to the average incubation time data in table ii. the slightly better fit of the log-gamma at shorter times can be seen, and there is some difference in the tails. for the log-gamma, the probability of an incubation time greater than days is . while, for the log-normal, it is . . the mle for the log-gamma, in contrast to the case with truncated distributions, is further from the log-normal model, with $\hat{q}$ estimated at . . essentially this reflects the need for the distribution to drop more quickly at larger values of t. the estimated th percentiles for the log-gamma and log-normal distributions are . and . . confidence intervals for these values can be derived by simulating from the estimated asymptotic distribution of the mles to produce an interval within which per cent, say, of the corresponding simulated percentiles lie. this methodology has been compared with a delta method and a non-parametric bootstrap and performed well for the estimation of a complicated function of mles [ ]. based on a simulation of values, the corresponding per cent intervals are ( . , . ) and ( . , . ). interestingly, these values suggest the commonly adopted quarantine time for sars of days is associated with the possibility of 'releasing' approximately per cent of patients 'too early'. in fact, to ensure that this is the maximum fraction released, consideration should be given to longer quarantine times reflecting the upper endpoint of the estimation intervals. note that if the interval-censored data in table ii are used to fit the log-gamma model, then the estimated th percentile is . , with a confidence interval of ( . , . ), an interval per cent longer than that for the average data. the present paper explores methodology to characterize the available knowledge on incubation times early in an infectious epidemic. issues such as different routes of infection or different subsets of infectious individuals have not been discussed. in principle, the models used could be extended to incorporate explanatory variables defined by such factors. preliminary investigations of possible explanatory variables in the hong kong data did not reveal any strong relationships. we have made pragmatic decisions as to which data to include for model fitting. these might warrant revisiting in a more comprehensive analysis. also, since infection events cannot be observed, some data on incubation times will inevitably be 'guesses'. many aspects of the comparison of methodologies will not be altered by this, but such data will naturally give rise to interval-censoring, which the methodologies discussed here do allow. a further extension is to consider individuals with more than one period of possible exposure prior to the development of symptoms.
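to make the simulation interval for a percentile concrete, a minimal sketch using the untruncated log-normal fit for simplicity (the paper applies the same idea to the log-gamma); the mles and their asymptotic covariance matrix below are hypothetical placeholders for the values that would come out of the fitting step:

```python
rng = np.random.default_rng(0)

mu_hat, sigma_hat = 1.5, 0.6                  # hypothetical mles
cov_hat = np.array([[0.010, 0.000],           # hypothetical asymptotic
                    [0.000, 0.005]])          # covariance of (mu, sigma)

# draw parameter values from the estimated asymptotic distribution of the mles
draws = rng.multivariate_normal([mu_hat, sigma_hat], cov_hat, size=10_000)

# 95th percentile of a log-normal: exp(mu + z_0.95 * sigma)
z95 = stats.norm.ppf(0.95)
pct95 = np.exp(draws[:, 0] + z95 * draws[:, 1])

# interval within which 95 per cent of the simulated percentiles lie
lo, hi = np.percentile(pct95, [2.5, 97.5])
```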
meltzer [ ] considers a simple simulation approach to this multiple-exposure extension. definitive conclusions about the choice of statistical methodology are not warranted based on the investigations reported here. in the early days of an epidemic this will usually be the case. thus, the range of inferences based on different methodologies will often be the basis of decisions. nevertheless, some comments can be made. inference concerning a truncation parameter is apparently more informative the stronger the assumptions made about the form of the incubation distribution. in the absence of independent reasons to make such an assumption, however, the use of a general model, such as the log-gamma, for inference should be considered, at least as part of a sensitivity analysis. the key aspect of such inferences will be the shape of the tails encompassed in the model for the incubation times. in the absence of precise information on a truncation time, estimation of percentiles provides a natural way to fix quarantine times. it can also be argued that this approach is less risky, and more realistic, than making the assumption of a truncated distribution. because of its flexibility in the tails, the log-gamma can also be recommended for percentile estimation. investigation of other methods is warranted. possibilities would include the use of sample quantiles to define non-parametric confidence intervals for population quantiles [ , chapter xi, section . ] or the asymptotic distribution of sample quantiles [ , appendix a. . ]. whatever method is adopted, the uncertainty involved in any estimation of percentiles should be incorporated into public health decisions. in the setting of quarantine times, other factors must also be considered. meltzer [ ] presents evidence for some sars incubation times greater than days. it appears, based on the data presented here, that a quarantine time of days for sars might release one infectious patient in twenty. therefore, for a quarantined population of , this would correspond to individuals, but the larger the quarantined population, the larger the number of released infectious individuals. thus the length of a quarantine period might well be set in light of the expected number of quarantined individuals. also, consideration of the psychological and economic impact of quarantine on individuals and the population as a whole must be balanced against the risks associated with early release of infected individuals. finally, note that the implicit assumption in setting a quarantine time is that quarantine is isolation of x days from the supposed day of contact, whereas it is often implemented as isolation of x days from the first day on which an individual is identified as having been exposed to the disease. this may build in an additional margin of safety from the public health perspective.
references:
epidemiological determinants of spread of causal agent of severe acute respiratory syndrome in hong kong.
statistical inference and analysis: selected correspondence of r. a. fisher.
on maximum likelihood estimation of the binomial parameter n.
theoretical statistics.
a study of distributional shape in life testing.
extending public health surveillance of hiv disease.
on a singularity in the likelihood for a change-point hazard rate model.
a markov model for hiv disease progression including the effect of hiv diagnosis and treatment: application to aids prediction in england and wales.
multiple contact dates and sars incubation periods.
mcgraw-hill kogakusha: tokyo.
we thank the referees for their comments, which led to an improved presentation. this work was supported by the medical research council (u.k.), the national science and engineering research council (canada) and the research fund for the control of infectious diseases of the health, welfare and food bureau of the hong kong sar government. key: cord- -ai wq authors: sheridan, gerard a.; boran, sinead; taylor, colm; o'loughlin, padhraig f.; harty, james a. title: pandemic adaptive measures in a major trauma center: coping with covid- date: - - journal: j patient saf doi: . /pts. sha: doc_id: cord_uid: ai wq nan in light of the current global crisis due to covid- , communication among the scientific community is both time sensitive and imperative to curtail the projected strain that is predicted to overwhelm our global healthcare services. "social distancing" is now considered a vital measure in controlling this pandemic spread. we also know that healthcare workers with increased exposure times to the virus are more likely to contract the infection. we therefore describe some pragmatic "pandemic adaptive measures" (pams) that have been implemented by the orthopedic department in our level trauma center to reduce viral exposure times for patients and doctors. the most significant doctor-patient contact time occurs during the daily fracture clinic. typically, in our institution, patients are reviewed by doctors for a -hour period. with the announcement of the covid- pandemic, we immediately implemented virtual fracture clinics (vfcs). we already know that greater than two-thirds of patients may be managed virtually without ever needing to attend the fracture clinic in person. this reduces patient-doctor and doctor-doctor interaction times, significantly reduces the financial burden, and is met with satisfaction in up to % of patients. since the introduction of the vfc, the total number of patients attending on average has dramatically reduced from to , and we expect it to reduce further as the service develops. each doctor is now seeing on average patients instead of per clinic, and the average patient-doctor interaction time has reduced from to minutes in a single clinic. to reduce the time spent by patients in the emergency department, another pam introduced is the online clinical communication platform for the on-call trauma team. this mobile device application is general data protection regulation compliant and involves emergency department staff, house officers, residents, and the attending surgeon on call that day. immediate decision making allows for the accelerated discharge of patients not requiring emergency surgical intervention. this in turn reduces both patient-patient and doctor-patient contact times in the emergency department of our level trauma center. reducing intradepartmental contact time is possibly the most important behavioral change that we could instigate at an institutional level at this time. we know that healthcare workers are particularly vulnerable to viral contraction. if one member of staff becomes infectious, the risk of transmission within the department, leading to significant numbers of surgical staff in self-isolation, can quickly become overwhelming, leading to the decimation of trauma service provision in the level center. in this respect, we reduced the number of staff attending the daily post-take trauma round to essential staff only (attending on call, resident on call, trauma coordinator).
this significantly reduced the number of staff in close confines every morning from to . this shift has reduced the weekly doctor-doctor interaction time by minutes per week. before the pandemic outbreak, a weekly mdt review meeting was held where all postoperative cases would be discussed and critiqued by the entire orthopedic department with ancillary staff, including physiotherapists, nurse specialists, and radiographers. this meeting is essential in maintaining a high standard of care, with up to staff members in attendance at any one time. by transferring this meeting to an online multiuser platform, all staff members can log in and review the cases presented from a remote location. this eliminates minutes of unnecessary intradepartmental exposure time for each staff member. introduction of the vfc also has ramifications for intradepartmental exposure. for example, each resident would typically staff clinic sessions per week ( minutes total). since the implementation of the vfc, this weekly exposure time has dropped to minutes and is likely to decrease further as the system evolves. to quantify the impact that these pams have had in our institution, consider the histogram demonstrating a significant reduction in both doctor-patient interaction time (in clinic) and doctor-doctor interaction time (in general) for a standard orthopedic resident on a weekly basis in our level trauma center (fig. ). in summary, these pragmatic pams may be implemented by any surgical specialty facing the challenges of the covid- pandemic to reduce patient and doctor exposure times while simultaneously maintaining a high standard of trauma care at this challenging time.
references:
covid- : uk starts social distancing after new model points to potential deaths.
clinical characteristics of medical workers infected with new coronavirus pneumonia.
trauma assessment clinic: virtually a safe and smarter way of managing trauma care in ireland.
cost comparison of orthopaedic fracture pathways using discrete event simulation in a glasgow hospital.
correspondence: cork, ireland; sheridga@tcd.ie. the authors disclose no conflict of interest.
key: cord- -muemte p authors: lai, francisco tsz tsun title: association between time from sars-cov- onset to case confirmation and time to recovery across sociodemographic strata in singapore date: - - journal: j epidemiol community health doi: . /jech- - sha: doc_id: cord_uid: muemte p nan amid the coronavirus disease- (covid- ) pandemic, one of the most important indices of healthcare systems' performance in addressing the drastically increased burden is the average time to recovery of patients, the minimization of which indicates a strong capacity to handle the crisis and avoid a total collapse of the systems. previous research has suggested the importance of early detection amid epidemic outbreaks to facilitate better management of the disease. nevertheless, seldom has any research examined the relationship between time from the onset of severe acute respiratory syndrome coronavirus (sars-cov- ) to case confirmation and time to recovery, or how this relationship varies across sociodemographic strata. from the singaporean official website on covid- , i extracted the records of recovered patients with symptomatic presentation in singapore, where the mortality rate from sars-cov- was estimated at only . % as of may . although a large fraction of the patient data was pending further update, the currently available data will suffice for preliminary purposes.
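as a concrete illustration of the analysis described next, a poisson regression of time to recovery on time from onset to confirmation, with age, sex and nationality as potential moderators, here is a minimal sketch using statsmodels; the file name and column names are hypothetical, not the letter's actual data:

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# one row per recovered patient; hypothetical columns
df = pd.read_csv("singapore_recovered_cases.csv")

# interaction terms let the onset-to-confirmation effect vary by stratum
model = smf.glm(
    "days_to_recovery ~ onset_to_confirm_weeks * (age + sex + nationality)",
    data=df,
    family=sm.families.Poisson(),
).fit()

print(model.summary())
# 100 * (exp(coef) - 1) gives the percentage change in expected
# time to recovery per unit change in a covariate
```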
using these data, a poisson regression analysis of the kind sketched above was implemented to examine the aforementioned relationship, with age, sex and nationality specified as potential moderators, each interacting with time from onset to case confirmation in relation to time to recovery. as only secondary analysis of publicly available data was involved, no ethics approval was required. results showed that being -year older was associated with % more time to recovery and that one additional week from onset to case confirmation was associated with . % less time to recovery among singaporean females. this inverse association was % weaker among males, % weaker per -year increase in age and % weaker among other south east asian nationalities. full numeric results are tabulated in table . the observed inverse relationship between time from onset to case confirmation and time to recovery is possibly due to a lower severity of the condition among patients with only mild symptoms, which took longer to prompt medical attention but eventually required less time to treat. the increased complexities among male and older patients suggested in previous research may explain the observed weaker negative association, because these patients may be more likely to develop severe symptoms regardless of the time from onset to case confirmation. lastly, the weaker association among south east asian patients was possibly because of the systematic testing of foreign workers living in dormitories where notable outbreaks took place, such that time from onset to case confirmation no longer depended mainly on symptomatic presentation. funding: the author declares no specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors. competing interests: none declared. patient consent for publication: not required. provenance and peer review: not commissioned; internally peer reviewed.
references:
intensive care of patients with severe influenza during the epidemic in .
covid- singapore dashboard.
as virus deaths grow, two rich nations keep fatality below . %. bloomberg.
risk factors for severity and mortality in adult covid- inpatients in wuhan.
singapore spike reveals scale of migrant worker infections. bbc news.
key: cord- -o e na authors: nan title: scientific session of the th world congress of endoscopic surgery, jointly hosted by society of american gastrointestinal and endoscopic surgeons (sages) & canadian association of general surgeons (cags), seattle, washington, usa, – april : poster abstracts date: - - journal: surg endosc doi: . /s - - - sha: doc_id: cord_uid: o e na nan purpose: to evaluate the efficacy of single-incision laparoscopic surgery for totally extraperitoneal repair (sils-tep) of incarcerated inguinal hernia. patients and methods: clinical setting: a retrospective analysis of patients undergoing sils-tep for incarcerated hernia from may to august at kinki central hospital was performed. exclusion criteria: sils-tep was contraindicated for the following conditions in our hospital: a history of radical prostatectomy; a small indirect inguinal hernia in a young patient; and unsuitability for general anesthesia.
surgical procedure: laparoscopic abdominal exploration through a single, . -cm intraumbilical incision was performed. the incarcerated hernia content was gently retracted from the hernia sac into the abdominal cavity. in some cases, simultaneous manual compression of the incarcerated hernia from the body surface was required. if no bowel resection was needed, a standard sils-tep using mesh was performed following laparoscopic abdominal exploration and incarcerated hernia reduction. if bowel resection was required, inguinal hernia repair using mesh was not performed, to avoid postoperative mesh infection, and a two-stage sils-tep was performed - months after the bowel resection. results: fourteen patients ( men, women) with irreducible inguinal hernias, including with unilateral hernias and with bilateral hernias, underwent surgery. the patients' median age was years (range, - years), and median bmi was . kg/m² (range, . - . kg/m²). of the patients, had acute incarceration, and had a chronic irreducible hernia. seven patients with acute incarcerated hernias underwent emergency surgery, and two of the seven needed single-incision laparoscopic partial resection of the ileum, followed by two-stage sils-tep. twelve patients, excluding the two who required single-incision laparoscopic partial resection of the ileum, underwent laparoscopic exploration with hernia reduction followed by sils-tep. one case of chronic incarceration among the twelve patients who underwent sils-tep after hernia reduction required conversion to kugel patch repair. the median operative times were min (range - min) for unilateral hernias and min (range - min) for bilateral hernias. the median blood loss was minimal (range - ml). the median postoperative hospital stay was day (range - days). the median follow-up period was months (range - months). a seroma developed in % ( / ) of patients and was managed conservatively. no other major complications or hernia recurrences were noted during the follow-up period. conclusions: sils-tep, which offers good cosmetic results, could be safely performed for incarcerated inguinal hernia. objective: the introduction of mis in the pediatric age group has proved feasible and safe. mis pediatric inguinal hernia repair has evolved considerably with the introduction of a number of innovations. high ligation of the sac is the basic premise of surgical repair in pediatric inguinal hernias. mis techniques are broadly grouped into intracorporeal repairs and intracorporeal repairs with an extracorporeal component, namely the suturing. each technique has its own complications. the main objective of our study was to focus on different anatomical pointers which can lead to inadvertent complications, mainly bleeding and recurrence. methods and procedures: a prospective review of hernias ( male and female) ( months- years) repaired laparoscopically between september and june . under laparoscopic guidance, the internal ring was encircled extraperitoneally using a - non-absorbable suture and knotted extraperitoneally. data analyzed included operating time, ease of procedure, occult patent processus vaginalis (ppv), contralateral inguinal hernia, complications, cosmesis and recurrence. results: sixteen right ( %), left ( %) and bilateral ( %) hernias were repaired. five unilateral hernias ( . %), all left, had a contralateral ppv that was repaired (p = . ). mean operative times for unilateral and bilateral repair were . ( - ) and . min ( - min), respectively. one hernia repair still recurred ( .
%) even with all precautions, and another had a postoperative hydrocele ( . %). one case ( . %) needed an additional port placement due to inability to reduce the contents of the hernia completely. with our technique, we did not encounter any inadvertent peroperative bleeding. there were no stitch abscesses/granulomas, obvious spermatic cord injuries, testicular atrophy, or nerve injuries. conclusion: the results confirm the safety, efficacy and cost effectiveness of laparoscopic inguinal hernia repair. during our peroperative analysis, we focused on anatomical landmarks to minimize future recurrence and peroperative surgical complications. we identified and named a point, the j point, at the tip of the triangle of "doom"; this is the most important point to address peroperatively. there is a high chance of recurrence if that point is not encircled well, or is inadequately encircled because of fear of iliac vessel injury. we also concluded that the 'water dissection' technique is effective in inexperienced hands and in the early stages of laparoscopic hernia repair to prevent inadvertent iliac vessel injury. medstar georgetown university hospital, georgetown university school of medicine. introduction: incisional hernias following abdominal surgery can be associated with significant morbidity, leading to decreased quality of life, increased health care spending and the need for repeat operations. patients undergoing gastrointestinal and hepatobiliary surgery for malignant disease may be at higher risk for developing incisional hernias. identifying the risk factors for incisional hernia development can help decrease occurrence. this is the largest multi-institutional study looking at symptomatic hernia rates for major abdominal operations including colectomy, hepatectomy, pancreatectomy, and gastrectomy. methods and procedures: an irb-approved retrospective study within the medstar hospital database was conducted, incorporating all isolated colectomy, hepatectomy, pancreatectomy, and gastrectomy procedures performed across hospitals between the years to . all patients were identified using icd- and icd- codes for relevant procedures and then subdivided into those having benign or malignant disease. exclusion criteria comprised patients who had concomitant organ resection or those undergoing organ transplant. data validation was performed to verify the accuracy of the data set. symptomatic incisional hernia rates (ihrs) were determined for each cohort based on subsequent hernia procedural codes identified and repairs performed. descriptive statistics and chi-squared tests were used to report ihrs in each group. results: during this -year span, a total of , major abdominal operations were performed at all institutions, comprising , colectomies, , hepatectomies, , pancreatectomies, and gastrectomies. malignancy was the indication for surgery in , ( . %) colectomies, ( . %) hepatectomies, ( . %) pancreatectomies, and ( . %) gastrectomies. ihrs in each cohort for benign vs malignant etiologies, respectively, are as follows: ( . %) vs ( . %) in colectomy (p = . ), ( . %) vs ( . %) in hepatectomy (p = . ), ( . %) vs ( . %) in pancreatectomy (p = . ), and ( . %) vs ( . %) in gastrectomy (p = . ) patients. conclusion: symptomatic incisional hernia rates following major gastrointestinal and hepatobiliary surgery range from . to . %. there was no significant increase in hernia rates in patients undergoing surgery for malignancy.
patients undergoing colectomy for benign disease had a high incidence of symptomatic ihrs. introduction: prosthetic infections, although relatively uncommon, are a major source of cost and morbidity. the study aimed to evaluate the influence of mesh structure, including polymer type and mean pore size, on bacterial adherence in a mouse model. methods: three commercially available hernia meshes were included in the study. for each mesh type, a -cm square was surgically placed intraabdominally in mice. one mouse served as a control, while an enterotomy was made in the subsequent mice to introduce a bacterial load onto the mesh. after hours the meshes were harvested. the inoculated meshes were then plated on agar plates and bacterial colonies were counted after hours. the bacterial counts were compared between the various mesh types. results: the mean bacterial adherence for the large-pore mesh was colonies, for the small-pore mesh colonies, and for the biologic mesh colonies. conclusions: through the use of a mouse model, the influence of mesh type and pore size on bacterial adherence was evaluated. meshes with larger pores and a lower prosthetic load, and interestingly the biologic mesh, had lower early bacterial colonization hours after an enterotomy. further evaluation with a longer incubation time could be helpful to determine the effect of bacterial colonization of mesh. hrishikesh salgaonkar, raquel maia, lynette loo, wee boon tan, sujith wijerathne, davide lomanto; national university hospital, singapore. laparoscopic repair of groin hernias is a widely accepted approach over open repair owing to less pain, faster recovery, better cosmesis and decreased morbidity. however, there is still debate on its use in large inguino-scrotal hernias, recurrent hernias and patients with a history of lower abdominal surgery, given anticipated adhesions and difficulty in dissecting an extensive hernia sac. a retrospective analysis of prospectively collected data was performed for patients undergoing laparoscopic repair of large inguino-scrotal hernias, incarcerated groin hernias, recurrent cases after open or laparoscopic repair, and cases with a history of lower abdominal surgery. between january and july , patients with large inguino-scrotal hernias, recurrent hernia, a history of lower abdominal surgery, or incarcerated femoral hernia underwent laparoscopic inguinal hernia repair. patient characteristics, operating time, surgical technique, conversion rate, complications and recurrence up to months were recorded. patients had large inguino-scrotal hernia, recurrent hernia ( previous open, previous lap), history of lower abdominal surgery ( lscs, appendectomy, prostatectomy, midline laparotomy), incarcerated femoral hernia, and meshoma removal. patients underwent total extraperitoneal (tep) repair, underwent transabdominal pre-peritoneal (tapp) repair, and needed conversion to open. mean operation time was min for unilateral and min for bilateral hernia. seroma formation was seen in patients; minor wound infections were treated conservatively. we conclude that the laparoscopic approach can be safely employed for the treatment of complex groin hernias; surgical experience in laparoscopic hernia repair is mandatory, with a tailored technique, in order to minimize morbidity and achieve good clinical outcomes with acceptable recurrence rates. mesh fixation in ventral incisional hernia is a topic of ongoing debate. permanent and absorbable tacks are acceptable and widely used methods for mesh fixation.
the purpose of this study was to compare outcomes of permanent versus absorbable tack fixation when used alone or with suture fixation in laparoscopic incisional hernia repairs. a retrospective review of all patients undergoing laparoscopic ventral hernia repair using tack fixation (absorbable/permanent) alone or in conjunction with suture fixation was queried from the ahsqc database. outcome measures included hernia recurrence rate, pain, quality of life, wound-related issues, and hospital length of stay. propensity score matching was performed to compare patients undergoing tack-only fixation versus tack-and-suture fixation, with a p-value of . considered significant. a total of patients were identified after propensity score matching, with who underwent repair with permanent tacks alone or with sutures and who underwent repair with absorbable tacks alone or with sutures. following matching, there were no differences in bmi, age, hernia width/length, or baseline pain/quality of life. there were no significant differences found in outcome measures including recurrence rates, pain and quality of life outcomes at days, months, and year, surgical site infection (ssi), and postoperative length of stay (p > . ). there was a significant increase in any postoperative complication in the permanent tack fixation group compared to the absorbable tack fixation group ( % vs %, p . ), which is likely due to the increase in surgical site occurrences noted in the permanent tack fixation group ( % vs. %, p . ). based on this large data set, there are no significant differences in postoperative outcomes between permanent and absorbable fixation in laparoscopic hernia repair except in surgical site occurrences. further study is needed, but at the present time there is no convincing evidence that one type of fixation is superior to another in laparoscopic ventral hernia repair. introduction: inguinal hernia repair is the most common procedure in general and visceral surgery worldwide. laparoscopic transabdominal preperitoneal mesh hernioplasty (tapp) has also been a popular surgical method in japan. single-incision laparoscopic surgery is one of the newest branches of advanced laparoscopy, and its indications have spread beyond simple procedures such as cholecystectomy to complex surgery. we report our experience with single-incision laparoscopic tapp (s-tapp) for japanese patients with inguinal hernia. case description: a consecutive series of patients ( male, female) underwent s-tapp from june to september in a single institution. twenty-eight of the patients had bilateral inguinal hernias. the mean follow-up was days. the average age of the patients was . ± . years. establishment of the ports: a -mm vertical intra-umbilical incision is made for port access. one -mm optical port and two -mm ports are placed side-by-side through the umbilical scar. surgical procedure: the procedure was carried out in the conventional fashion, with a wide incision in the peritoneum to achieve broad and clear access to the preperitoneal space and appropriate placement of polypropylene mesh (3dmax™ light, bard) with fixation using a tacking device (absorbatack®, covidien). the hernia sac is usually reduced by blunt dissection, or is ligated and transected with an ultrasound-activated device. the peritoneal flap is closed with one - pds suture and - tacks using absorbatack®.
discussion: in one patient, we encountered a large sliding hernia on the right side with sigmoid colon as the content of the sac, which required conversion to the conventional laparoscopic procedure. nine cases were recurrences after previous laparoscopic or anterior-approach surgery, and two cases followed prostatectomy. there were no intraoperative complications. the mean operative time was . ± . min, and blood loss was minimal in all cases. the average postoperative stay was . days. there was one recurrence ( . %) months after surgery. there were no severe complications after surgery, but there were seromas ( . %) and one hematoma ( . %). two patients ( . %) had blunted tactile sensation in the area of the lateral femoral cutaneous nerve, which improved in two months. conclusion: our results suggest that s-tapp is a safe and feasible method without additional risk. moreover, the cosmetic benefit is clear. however, further evaluation of postoperative pain and long-term complications compared with standard laparoscopic tapp mesh hernioplasty is required. manuel garcia, md, daniel srikureja, md, marcos j michelotti, md, facs; loma linda university health. introduction: prosthetic mesh use has become standard practice during ventral hernia repair to reduce the risk of recurrence. the ideal mesh is macro-porous, which favors rapid cellular ingrowth and tissue integration; has limited tissue reactivity and low profile and weight; and has high tensile strength to add resilience to the repair. additionally, the material is expected to have good handling characteristics. currently, there is a wide variety of options for mesh. biosynthetic material (polyglycolic acid/trimethylene carbonate, pga/tmc) has been shown to behave well in terms of early vascularization and ingrowth as well as adequate long-term tissue generation. gore® synecor® biomaterial is a composite mesh including two layers of absorbable biosynthetic material (pga/tmc) with one tridimensional non-absorbable macro-porous knit of dense ptfe mesh. it has shown good vascularization and ingrowth at days in animal examination. however, there is still no evidence of the long-term behavior of this mesh in human tissue. we present the first histologic analysis of this mesh year after placement in a human. objective: to perform a histologic analysis of the gore® synecor® biomaterial one year after placement in the human body. methods: after incidentally finding incorporated gore® synecor® mesh in a patient with a ventral hernia repair year earlier, during open bilateral inguinal hernia repair, a sample of the mesh was taken and sent to the pathology lab for analysis. tissue healing, vascularization, and ingrowth of the composite mesh were analyzed. results: histologic findings were significant for a knitted ptfe biomaterial surrounded by mature fibrovascular tissue and foreign-body inflammation, consistent with the expected healing response for this time frame. there was no evidence of any other biomaterial (pga/tmc) or of infection. conclusion: gore® synecor® biomaterial was shown to be well integrated into appropriately healed tissue, with pronounced vascularization and ingrowth. the pga/tmc layers were seen to be completely absorbed and replaced by collagen. these findings, in a human sample at months, replicate what had been shown in animal specimens. method: from to , patients came to our hospital with renal paratransplant hernias; they were evaluated for this study. the following data were collected from their records: age, gender, weight, age at graft rejection, surgical complications, treatment method and the results of treatment with composite ptfe mesh.
the following data were collected from their records: age, gender, weight, age at graft rejection, surgical complications, treatment method and the treatment results with composite ptfe mesh. results: for laparoscopic repair of incisional hernia after renal transplant, the median interval between kidney transplantation and developing of incisional hernia was (range to ) days. predisposing factors were obesity, age over fifty years, and female gender. in six patients, hernia was large, and the repair was performed with using composite ptfe mesh. one patient had developed serous collection in surgical site, which was managed successfully with multiple punctures. hernia recurrence or infection was not noted in these patients during to months follow-up periods. conclusion: incisional hernia is not a rare entity after kidney transplantation. predisposing factors, such as obesity, age over years, and female gender have a role in its development. repeated surgeries in kidney recipients can increase the risk of incisional hernia. managing this complication by laparoscopic approach is a safe and effective method. sujith wijerathne, raquel maia, hrishikesh salgaonkar, wee boon tan, lynette loo, davide lomanto; national university hospital, singapore introduction: a femoral hernia is a less common type of hernia. it is estimated to account for less than % of all abdominal wall hernias. only about in every groin hernias are femoral hernias. they are found more commonly in females due to wider shape of pelvis. laparoscopy by offering magnification and better vision provides us the opportunity for clear visualization of the myopectineal orifice. laparoscopy seems to be a safe and feasible approach for femoral hernia repair in an asian population. case description: between and , consecutive patients with femoral hernia who underwent laparoscopic hernia repair were prospectively studied. patient demographics, hernia characteristics, operating time, conversion rate, intraoperative, postoperative complications and recurrence were measured. discussion: total of femoral hernias were repaired, on right and on left groin. this included patients with bilateral and unilateral hernia. concomitant obturator hernia were found. there were male and female patient. no conversion was reported. one patient had injury to bowel at the mm port entry site, without contamination, identified and managed immediately. patients developed seroma, all were managed conservatively except one who needed aspiration. peri-port bruising was noticed in patients and patients had hematoma. one patient with hematoma underwent excision of the organised hematoma. of the hematoma patient was on aspirin pre-operatively. no wound infection, chronic groin pain or recurrence was documented during follow up till date. conclusion: laparoscopic repair offers accurate diagnosis and simultaneous treatment of both inguinal and femoral hernia with minimum morbidity and good clinical outcomes. better visualisation and magnification gives us an opportunity to identify occult hernias which can be repaired during the same setting, thereby reducing the chance of recurrence and possible need for second surgery. laparoscopic repair has become the procedure of choice for the treatment of the majority of groin hernia at our institution. introduction: totally extraperitoneal (tep) repair that does not require peritoneal incisions is a good procedure that involves minimal visceral damage. 
however, balloon- or camera-assisted blunt dissections performed in a haphazard manner do not follow precise dissection of the fascia layer. furthermore, they have the disadvantage of being difficult to understand anatomically. we therefore developed a novel preperitoneal approach to resolve this issue. methods: a -mm trocar is inserted into the rectus abdominis sheath cavity after a small incision is made below the umbilicus, and the posterior rectus sheath is exposed. a -mm trocar is inserted cm towards the pubic bone from the umbilicus. using forceps from this position, narrow branches that enter the posterior rectus sheath from the inferior epigastric vessels are dissected, thereby broadly exposing the anterior surface of the posterior rectus sheath. the third -mm trocar is inserted near the lateral margin of the rectus abdominis. on the outside, local anesthetic is injected beneath the posterior rectus sheath and the preperitoneal cavity is separated by the fluid so that the peritoneum is not injured during the posterior rectus sheath incision. a small incision is made in the posterior rectus sheath, or attenuated posterior rectus sheath, one finger-width above the expected upper margin of the prosthetic mesh. due to the effects of the local injection, a sharp incision of the fascia can be made with an electric scalpel. utilizing this mechanism, the posterior rectus sheath aponeurosis, the lining transverse fascia and the superficial preperitoneal layer are individually identified. once the preperitoneal cavity is reached, the peritoneal margin is determined in the lateral direction, and the peritoneum, which is pulled by the pneumoperitoneum, is separated from the preperitoneal fascia on the outside, from the cranial side towards the deep inguinal ring. on the inside, the pneumoperitoneum pressure pushes the peritoneum inferiorly, leading to enlargement and increased visibility of the posterior rectus sheath deep fascia, which is dissected one layer at a time from the outside. the umbilical prevesical fascia is dropped inferiorly, and the dissection of the preperitoneal cavity necessary for mesh deployment is performed. results: by dissecting each fascia individually, using the emphysema produced by pneumoperitoneum and the enlargement produced by local injection, the preperitoneal cavity could be reached by following the dissection of the fascia layers without proceeding blindly, thereby eliminating intraoperative bleeding and postoperative hematoma. introduction: in the field of abdominal wall reconstruction, the utility of drain placement is of debatable value. we present outcomes evaluating drain placement versus no drain placement at the time of the robotic transversus abdominis release (rtar) technique with placement of mesh in the retromuscular position, a currently understudied subject. methods: a retrospective review of a prospectively maintained hernia patient database was conducted, identifying individuals who received either drain placement or no drain placement during abdominal wall reconstruction via the rtar technique from august to june at a single high-volume hernia center. perioperative data and postoperative outcomes between the two groups are presented with statistical analysis for comparison, and quality of life (qol) measures were assessed using the carolinas comfort scale.
results: thirty-five patients were identified for this study, of which had drains placed intraoperatively in the retromuscular position at the conclusion of rtar (drn) and underwent rtar without the placement of draining devices (nd). the drn cohort had a mean bmi, defect area, mesh area, and operative time of . , cm², cm² and minutes, respectively, compared to . , cm², cm², and minutes in the nd group. all cases utilized medium-weight macroporous polypropylene synthetic implantable mesh materials in both the drn and nd subgroups. there were no reported postoperative complications, including no development of hematoma, seroma, or surgical site infection, in either group. hernia recurrence was not identified in either the drn or nd cohort through a mean follow-up of days ( . months). there were no statistically significant differences in postoperative qol outcomes. conclusion: our series review suggests that the use of intraoperative drains may not afford any benefits with the rtar technique when mesh is placed in the retromuscular position. the additional postoperative management associated with drain care may be unnecessary. background: appendectomy is one of the most common operations performed during emergency surgery. although laparoscopic appendectomy (la) has become the treatment of choice, there is still debate regarding the use of la for treating complicated appendicitis. in this retrospective analysis, we aimed to clinically compare la and open appendectomy (oa) for treating complicated appendicitis. methods: we retrospectively identified patients who underwent an operation for complicated appendicitis at our hospital; these patients were operated on between and july . in total, patients underwent conventional appendectomy and patients were laparoscopically treated. outcomes included operation time, blood loss, length of hospital stay, and postoperative complications. logistic regression analysis was performed to analyze the concurrent effects of various factors on the rate of postoperative complications. objective: small bowel perforation has conventionally been dealt with by open exploration, which frequently leads to many wound-related complications. wound infection is the major reason for increased morbidity in these patients and delays recovery. laparoscopic surgery has various benefits over open surgery, such as smaller wounds, less pain and faster recovery. the aim of this study was to extend the advantages of minimally invasive surgery (mis) to patients with small bowel perforation, to decrease postoperative wound complications and the duration of hospital stay. methods: this is a retrospective study including patients with small bowel perforation from to . of these, had a traumatic etiology, had typhoid-related perforation and the remaining had a duodenal perforation. of them were male, and the average age was . years. only patients who presented within hours of perforation were included in the study. laparoscopic exploration was done by introducing the camera through a -mm infraumbilical port after intraperitoneal carbon dioxide insufflation. the two remaining -mm working ports were then introduced depending on the site of perforation, once identified. the perforations were repaired using intracorporeal single-layer suturing with a - polydioxanone suture. the peritoneal cavity was given a thorough lavage and an abdominal drain was placed in the pouch of douglas. fecal contamination was found in all the patients.
a total of patients underwent conversion to open surgery due to inability to find the site of perforation laparoscopically. of the operated patients, developed port-site infections, and there were no major postoperative complications in the -week follow-up period. conclusion: we conclude from our study that laparoscopic intervention in early small bowel perforation is a safe approach with favorable outcomes, especially with regard to wound complications, which are a major factor in postoperative morbidity in such patients. the laparoscopic approach leads to earlier discharge and recovery. with the emerging era of laparoscopic surgery and its increasing accessibility, more patients presenting emergently with intestinal perforation can benefit from this technique. introduction: pneumatosis intestinalis (pi), or gas in the bowel wall, can be seen on various imaging modalities. the pathophysiology behind pi is unclear. one theory proposes a mechanical cause (e.g. small bowel obstruction) while another proposes a bacterial etiology. management of pi in adults is difficult, as often there is a benign clinical course. however, when paired with specific clinical features such as hepatic portal venous gas (hpvg) on imaging, the course of management changes as the suspicion of bowel ischemia increases. hpvg alone has been associated with a high mortality rate and a poor prognosis. management in this case becomes surgical. case presentation: we present the case of a -year-old latino male who presented to the emergency room with abdominal pain and altered mental status. focused physical examination revealed a non-rigid abdomen, no rebound tenderness, no guarding, and diffuse tenderness only to deep palpation. ct scan of the abdomen and pelvis demonstrated moderate portal venous gas in the right and left hepatic lobes, an upper midline dilated small bowel loop with pneumatosis intestinalis, and a moderately distended stomach with gas and fluid. laboratory studies revealed metabolic acidosis and a lactic acid level of . mmol/l. due to these findings, bowel ischemia was suspected, and the patient was taken to the operating room for a diagnostic laparoscopy. the laparoscopy was converted to an exploratory laparotomy due to extensive adhesions. intraoperatively, there was no small bowel compromise and no identifiable transition point. extensive lysis of adhesions and repair of an iatrogenic enterotomy were performed. the patient tolerated the procedure well, clinically improved, and was discharged from the hospital. discussion: this case illustrates the difficulty in the management of a patient with pneumatosis intestinalis and, specifically, hepatic portal vein gas seen on ct imaging. hpvg has traditionally been a harbinger of morbidity and mortality, but exploratory laparotomy revealed only diffuse abdominal adhesions and the absence of bowel ischemia despite high clinical suspicion. background: ventral hernia repair is one of the most common surgical procedures facing the general surgeon. there is little consensus as to the best surgical technique for complex scenarios. often these patients have complicating factors such as prior radiation therapy, which has an inevitable effect on the abdominal wall structures and can lead to non-traditional repairs. case report: we present the case of a -year-old female who underwent a tah/bso and right hemicolectomy complicated by wound dehiscence.
she underwent primary repair and adjuvant whole-pelvis radiation for her squamous cell carcinoma. subsequently, the patient developed acute obstructive symptoms due to a stricture within her small bowel, along with a large ventral hernia measuring cm with non-reducible abdominal contents below the level of the fascia, more prominent in the suprapubic area. the patient's bmi was . . various considerations are important in planning surgical repair in a previously irradiated field with loss of domain, including minimal dissection and the use of an atraumatic surgical technique with either external oblique release or transversus abdominis muscle release (tar). we chose a tar, as it provides wider myofascial release and dissection below the arcuate line toward the spaces of retzius and bogros, allowing for larger sublay mesh placement. it also avoids the need for skin flaps, reducing the risk of wound complications in under-perfused tissue. the tar was performed successfully, and there were no intraoperative or postoperative complications. her follow-up at months revealed no wound complications or hernia recurrence. conclusion: for patients with compromised tissue and loss of domain, a tar technique may be useful when reconstructing complex abdominal wall hernias. it provides the core principles of hernia repair, such as primary fascial closure and wide mesh overlap, and it offers a reliable approach for under-perfused tissue without the need for skin and soft tissue flap creation.

outcomes in the management of cholecystectomy patients in the setting of a new acute care surgery service model: impact on hospital course. larsa al-omaishi, bs, william s richardson, md; ochsner medical clinic foundation. introduction: the acute care surgery (acs) model, defined as a dedicated team of surgeons who address all emergency department, inpatient, and transfer consultations, is quickly evolving within hospitals across the united states due to demonstrated improvements in patient outcomes in the non-trauma setting. the traditional model of call scheduling consisted of one senior attending and one senior resident on call per -hour shift; attendings were responsible for consults, previously scheduled operations, and clinic time. multiple recent studies have shown statistically significant improvements in several parameters of patient care using acs, including but not limited to: time from emergency department to surgical evaluation, time from surgical evaluation to operating room, operative time, percent laparoscopic, length of hospital stay, intraoperative complications (blood loss, perforation rates), postoperative complications (fever, infection, redo), and cost. one study demonstrated statistically significant cost savings for the acute care surgery model with respect to appendectomies, but not cholecystectomies. study design: a retrospective analysis of patients who underwent cholecystectomy in the setting of non-traumatic emergent cholecystitis was performed to compare data from two cohorts, the traditional model and the acs model, between january , and december , at ochsner medical center, a -bed acute care center in new orleans. parameters gathered included time from emergency department to surgical evaluation, time from surgical evaluation to operating room, operative time, percent laparoscopic, length of hospital stay, intraoperative complications (blood loss, perforation rates, conversion to open), and postoperative complications (fever, infection, redo); a sketch of this kind of two-cohort comparison follows below.
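a minimal, hypothetical sketch of comparing such parameters between two cohorts. the column and group labels are assumptions, and the abstract does not name its tests: skewed time metrics are compared here with the mann-whitney u test and proportions with chi-square, a common choice for data of this shape.

    # hypothetical two-cohort comparison; data layout is assumed for illustration
    import pandas as pd
    from scipy import stats

    df = pd.read_csv("cholecystectomy_cohorts.csv")  # assumed file
    acs = df[df["model"] == "ACS"]
    trad = df[df["model"] == "traditional"]

    # skewed time-to-event metrics: report medians, compare with Mann-Whitney U
    for col in ["ed_to_eval_hr", "eval_to_or_hr", "operative_time_min", "los_days"]:
        u, p = stats.mannwhitneyu(acs[col], trad[col], alternative="two-sided")
        print(f"{col}: median {acs[col].median():.1f} vs {trad[col].median():.1f}, p={p:.3f}")

    # proportions (e.g., percent laparoscopic): chi-square on a 2x2 table
    table = pd.crosstab(df["model"], df["laparoscopic"])
    chi2, p, dof, _ = stats.chi2_contingency(table)
    print(f"percent laparoscopic: chi2={chi2:.2f}, p={p:.3f}")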
demographics were also collected, including age, weight, height, ethnicity, asa class, etc. inclusion criteria included age greater than and cholecystectomy performed between jan , and december , . exclusion criteria included choledocholithiasis, gallstone pancreatitis, ascending cholangitis, gangrenous cholecystitis, septic complications precipitating further procedures and delays, or researcher discretion. results: patients were initially identified as having undergone cholecystectomy within the allotted time period; were excluded for one of the reasons above. median patient age was years, and the average patient encounter was . days. conclusion: the acs model is better suited to manage emergent non-traumatic cholecystectomies than the traditional call service at our institution, as evidenced by several of these parameters.

he nailed it. background: nail guns are powerful tools and are widely used; injuries from these devices may be devastating due to the significant force they can deploy. patients and methods: we herein report the first case of a self-inflicted abdominal injury with a nail gun. results: a -year-old male with a history of coronary artery disease, type dm, and early signs of dementia attempted to refill a nail gun. he lodged the device against his right abdomen while the air hose was still attached and then accidentally fired nails into his abdomen. after unsuccessfully trying to pull the nails out, he drove himself minutes to our emergency room. he was hemodynamically stable on arrival; pain control was achieved, antibiotics were given, and he received tetanus immunization. ct scan showed the two foreign bodies penetrating from the ruq, with one reaching the transverse colon. on emergency laparoscopy, the nails were found to have penetrated the thick omentum, and the puncture site of one nail into the colon was identified. the omentum was resected off the colon, and the right colon was completely mobilized. no additional injuries were found. the entrance area of the nails was then used to create a loop colostomy. the postoperative course was initially uneventful, but the patient developed a severe posttraumatic inflammatory reaction of the fat tissue in the right upper quadrant and had to be readmitted for pain control; antibiotics were again administered. he recovered and was discharged with a plan for laparoscopically assisted colostomy closure after weeks. discussion: to the best of our knowledge, this is the first reported isolated colonic injury from a nail gun. given the tremendous force of the device, with unknown collateral damage to the surrounding tissue, it was decided to manage the injury with a laparoscopically assisted colostomy using the entrance point of the nails for fecal diversion.

introduction: it is difficult to diagnose obturator hernias by routine physical examination. obturator hernias are frequently complicated by ileus, and the diagnosis is often first made from abdominal ct. obturator hernias are difficult to reduce and often necessitate emergency surgery. they are common in elderly people, who often have a poor general condition, so the mortality rate has been high. at our hospital, we first attempt to reduce the hernia from the body surface under ultrasonographic guidance. after relieving the strangulation, we perform radical operation electively in patients who are fit for surgery under general anesthesia. we perform laparoscopic repair for obturator hernias.
obturator hernias are often complicated by other types of hernia; in these cases, we perform total repair. herein, we present a review of the patients who underwent surgery for obturator hernia at our hospital. methods: we reviewed the data of cases of obturator hernia encountered by us from february to december . we performed total repair in three of the cases. however, it is difficult to procure a mesh that would be adequate for all the defects (internal inguinal ring, femoral ring, obturator): no single mesh can fit, because the inguinal and pelvic curves present opposing curvatures near the obturator. therefore, we placed two pieces of mesh available at our hospital ( d max [bard] and the onlay sheet of a kugel patch [bard]) together in the patients. we could successfully cover all the defects using these two pieces of mesh and could fit the mesh to the pelvic shape by devising an appropriate connection between the meshes. results: we reviewed a total of operated cases of obturator hernia. the hernia was bilateral in cases and complicated by other hernias in cases. we first determined the appropriate approach for the repair. we performed total repair in cases. there were no complications and no cases of recurrence. conclusion: our approach to the repair of obturator hernias was very useful. with this method, we can tailor the exact area and shape of the mesh needed in individual patients, and we show how to shape the mesh to fit the pelvic form.

demin aleksandr, do, ajit singh, do, noman khan, do; flushing hospital. introduction: internal hernias are well-documented complications involving petersen's defect. in bariatric patients post gastric bypass, there is a high index of suspicion for internal hernias as well as a low threshold to operate. there has been some debate around closure of the potential petersen's space, with several studies advocating closure and others showing no difference in the rate of symptomatic internal hernias. we present an unusual cause of small bowel obstruction due to an internal hernia caused by a cecal volvulus. it was an atypical presentation; however, the patient was triaged and brought to the or within hours of admission. although rare, there have been reports of internal hernias caused by other structures, such as congenital bands, or through natural potential spaces, including unusual presentations of the cecum herniating through the foramen of winslow. the anatomical rearrangements after bypass create potential areas where an internal hernia can occur. in this case, a bowel resection was undertaken because of the anatomical variation of the cecal bascule and cecal volvulus, given the high rate of recurrence of this cecal pathology. the majority of internal hernias do not require bowel resection, especially when detected early and prompt surgical exploration is undertaken. mortality as a direct consequence of internal hernia is extremely rare; however, late diagnosis can lead to catastrophic gut loss and may require lifelong tpn and/or visceral transplantation or autologous reconstruction. conclusion: a careful history and physical of the bariatric patient can elicit the signs and symptoms of internal hernias and prevent the morbidity and mortality that can accompany complications of this condition. unusual presentations and causes are reasons for prompt diagnosis and complete exploration.
shingo ishida, naotsugu yamashiro, satoshi taga, koichi yano; shinkomonji hospital, shinmizumaki hospital. symptomatic cholelithiasis is a common disease treated with laparoscopic cholecystectomy (lc), but surgeons hesitate to operate if the patient is pregnant in the third trimester. pregnant patients undergoing laparoscopic surgery have been reported increasingly; however, most case reports are confined to patients in the first and second trimesters. we report a patient who underwent lc in the third trimester and review the relevant literature. a -year-old woman in the third trimester ( w d) of pregnancy was seen in the emergency department of our hospital with a history of upper abdominal pain. there had been no problem in the course of the pregnancy. examination showed the episode to be an attack of gallstone colic. she was hospitalized the same day and underwent lc the next day. the fundus of the gravid uterus was cm above the navel, so we needed to adapt the surgical approach, for example by inserting the first trocar in the left hypochondrium. operative duration was minutes. she complained of abdominal distension on postoperative days (pod) and , but there was no abnormality in the fetus. she was discharged on pod , and she later gave birth to a healthy baby. lc in the third trimester of pregnancy was safely performed with obstetric backup.

weekday or weekend hospital discharge: does it matter for acute care surgery? ibrahim albabtain, roaa alsuhaibani, sami almalki, hassan arishi, hatim alsulaim; kamc. background: hospitals usually reduce staffing levels over the weekend. this raises the question of whether patients discharged over a weekend may be inadequately prepared and possibly at higher risk for adverse events post-discharge. the aim of this study was to assess the outcomes of common acute care surgery procedures for patients discharged over the weekend, and to identify the key predictors of early readmission. methods: this retrospective cohort study was conducted at a tertiary care hospital between january and december . surgical procedures included were cholecystectomy, appendectomy, and hernia repairs. patients' demographics, comorbidities, complications, readmissions, and follow-up details were collected from the electronic medical records. predictors and postoperative outcomes associated with weekend discharge were identified using univariable and multivariable logistic regression models controlling for potential confounders. results: a total of patients were included. overall median age was years (iqr: , ). the majority of patients were female (n= , . %). patients ( . %) underwent a cholecystectomy, ( . %) an appendectomy, and ( . %) hernia repairs. weekend discharges were . % vs. . % weekday discharges. patients discharged during the weekend were younger (mean . vs. , p-value . ). post-discharge -day follow-up visits were significantly lower in the weekend discharge subgroup ( . % vs. . %, p-value . ). overall, the -day readmission rate was . % (n= ) and did not differ between weekend and weekday discharge (or= . , % ci . - . ; a sketch of this odds-ratio calculation follows this abstract). conclusions: patients discharged on weekends tended to be younger and less likely to have chronic diseases. patients discharged over the weekend were less likely to follow up compared to weekday discharge patients; however, the readmission rate did not differ between the two groups.
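as a brief aside to the weekend-discharge study above: a minimal sketch of computing an odds ratio with a woolf (log-based) 95% confidence interval from a 2x2 readmission table. the cell counts below are placeholders, not the study's data.

    # hypothetical 2x2 table: rows = weekend/weekday discharge,
    # columns = readmitted / not readmitted (counts are placeholders)
    import math

    a, b = 12, 188   # weekend: readmitted, not readmitted
    c, d = 30, 470   # weekday: readmitted, not readmitted

    odds_ratio = (a * d) / (b * c)
    # Woolf method: standard error of log(OR)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(odds_ratio) - 1.96 * se)
    hi = math.exp(math.log(odds_ratio) + 1.96 * se)
    print(f"OR={odds_ratio:.2f}, 95% CI {lo:.2f}-{hi:.2f}")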
intrauterine device (iud) migration out of the uterine cavity is a serious complication; its incidence in the us has been reported to be about . % annually. a previously published systematic review supports the use of laparoscopic surgery for elective removal of migrated iuds from the peritoneal cavity. we present the safety and efficacy of the laparoscopic approach to this complication in the acute care setting. depicted is an otherwise healthy -year-old female with no previous surgical history who presented to the ed with worsening abdominal pain for one week and no associated symptoms. on physical exam, the patient was nontoxic; the abdomen was moderately distended, with guarding and rebound tenderness to palpation but no rigidity. the patient had been seen shortly before ed admission by her obgyn, and recent workup with abdominal/pelvic x-ray and ultrasound had revealed a misplaced iud in the transverse position (sideways). the pregnancy test was negative. based on the patient's clinical presentation and the recent radiologic findings, we decided to proceed with diagnostic laparoscopy. after systematic review of the cavity, the foreign body was found to be incorporated within the greater omentum. we proceeded laparoscopically with omentectomy and foreign body removal. there were no perioperative complications, and the patient was discharged the following day. the use of laparoscopy for elective iud retrieval within the abdominal cavity has been considered the standard of care in surgical management to date; this poster demonstrates it as an effective approach for safe removal of intra-abdominal foreign bodies in the acute setting as well.

symptomatic inguinal and umbilical hernias in the emergency department: opportunity lost? andrew t bates, md, jie yang, phd, maria altieri, chencan zhu, bs, salvatore docimo, jr., do, konstantinos spaniolas, md, aurora pryor, md; stony brook university hospital. introduction: patients with symptomatic inguinal and umbilical hernias often present to the emergency department (ed) when their symptoms change or increase, usually not requiring emergent surgery. however, little is known about how often these patients present prior to eventual repair and whether they undergo surgery at the initial presenting institution. the aim of this study was to assess the clinical flow of patients presenting to the ed with inguinal and umbilical hernias. methods: all patients presenting to eds in new york state from to with symptomatic inguinal and umbilical hernias were identified using the new york state longitudinal hospital claims database (sparcs). patients were followed for records of hernia repair and subsequent inpatient and outpatient visits up to . results: , patients presenting to the ed for symptomatic inguinal hernia were identified. . % ( , ) of ed presentations resulted in inpatient admission. , ( . %) had later repair, and their average time from ed presentation to inguinal hernia repair was (± ) days. . % of patients who did not have subsequent surgery had only one ed visit. of those who underwent interval repair, . % had only one ed visit prior to surgery. for those patients with only one ed visit before repair, . % had repair at a different hospital, as opposed to . % if multiple ed visits were made. , umbilical hernia patients presenting to the ed were identified. . % ( , ) resulted in inpatient admission. , ( . %) had interval repair, with the average time from ed presentation to umbilical hernia repair being (± . ) days. % of patients with no record of later repair presented to the ed once. of those patients who underwent repair, . % did so after one ed visit. for those patients with only one ed visit before repair, . % had repair at a different hospital, as opposed to . % if multiple ed visits were made. conclusion: a majority of patients with symptomatic inguinal and umbilical hernias who present to the ed do so once, with no subsequent follow-up or repair. a significant proportion of patients who undergo interval repair after a previous ed visit will opt for definitive surgery at another hospital. this represents a missed opportunity for continuity of care for providers and healthcare systems.

nikhil gupta, dr, himanshu agrawal, dr, arun k gupta, dr, dipankar naskar, dr, c k durga, dr; pgimer dr rml hospital, delhi. introduction: peritonitis is inflammation of the serous membrane that lines the abdominal cavity and the organs contained therein; it is one of the most common infections and an important problem that a surgeon has to face. reproducible scoring systems that allow a surgeon to determine the severity of intra-abdominal infection are essential for prognostication. this study was done to compare the apache ii and mpi scores for assessing prognosis in perforation peritonitis. methods: all patients admitted with hollow viscus perforation from st november till st march were included in this cross-sectional observational study. apache ii and mannheim peritonitis index (mpi) scores were calculated for all patients in order to assess their individual risk of morbidity and mortality (a sketch of the mpi calculation follows this abstract). the outcome variables studied postoperatively were wound infection, wound dehiscence, anastomotic leak, respiratory complications, duration of hospital stay, need for ventilator support, and mortality. inferences were drawn with the use of appropriate tests of significance. results: the study comprised patients. the mean apache ii score of subjects included in the study was . ± . with a range of to , and the mean mpi score was . ± . with a range of to . apache ii was able to predict postoperative respiratory complications, postoperative need for ventilatory support, duration of hospital stay, and mortality, while mpi was able to predict postoperative wound dehiscence, postoperative respiratory complications, postoperative need for ventilatory support, and mortality. neither apache ii nor mpi could predict postoperative anastomotic leak or postoperative wound infection. conclusion: the mannheim peritonitis index is a useful and simple method to determine outcome in patients with peritonitis. mpi is comparable to apache ii in assessing prognosis in perforation peritonitis and can well be used in the emergency setting in place of apache ii when time is a definite constraint.
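a minimal sketch of the mannheim peritonitis index referenced in the study above, using the commonly published mpi item weights. the helper function is illustrative, not the authors' implementation.

    # commonly published Mannheim Peritonitis Index (MPI) weights; illustrative only
    def mpi_score(age_over_50, female, organ_failure, malignancy,
                  peritonitis_over_24h, origin_not_colonic,
                  diffuse_peritonitis, exudate):
        score = 0
        score += 5 if age_over_50 else 0
        score += 5 if female else 0
        score += 7 if organ_failure else 0
        score += 4 if malignancy else 0
        score += 4 if peritonitis_over_24h else 0
        score += 4 if origin_not_colonic else 0
        score += 6 if diffuse_peritonitis else 0
        # exudate: "clear" = 0, "cloudy" (purulent) = 6, "fecal" = 12
        score += {"clear": 0, "cloudy": 6, "fecal": 12}[exudate]
        return score

    # example: older male, organ failure, >24 h of peritonitis, non-colonic
    # origin, diffuse peritonitis, fecal exudate
    print(mpi_score(True, False, True, False, True, True, True, "fecal"))  # 38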
microrna- and the prognosis of human carcinomas: a systematic review and meta-analysis. chengzhi huang, mengya yu; guangdong general hospital (guangdong academy of medical science).

muhammad nadeem, julian ambrus, md, steven schwaitzberg, md, john butsch, md; university at buffalo. introduction: mitochondria are small energy-producing structures of the cell. mitochondrial myopathy (mm) is a clinically mixed disorder that can affect various systems besides skeletal muscle and typically begins with muscle weakness or exercise intolerance. mm patients have decreased skeletal muscle mitochondrial function compared with healthy persons, because of weakened intrinsic mitochondrial function and decreased mitochondrial volume density. the role of mm in gerd and constipation has not been studied so far. this study aimed to assess the effects of mm on the gastrointestinal system, specifically gastroesophageal reflux disease (gerd), gallbladder issues, and constipation. methods: patients diagnosed with mm at buffalo general hospital between may and june were included in this retrospective study. we assessed the demeester score for gerd and wexner's constipation questionnaire for constipation; a demeester score above and a constipation score above were the cut points for gerd and constipation, respectively (a sketch of this threshold classification follows this abstract). data were analyzed using spss version . mitochondrial enzymes were assessed from muscle biopsy reports. results: out of ( . % female, . % male) mitochondrial myopathy patients, . % and . % were suffering from gerd and constipation, respectively. . %, . %, and . % of patients had gallbladder issues, obstructive sleep apnea (osa), and fatigue, respectively. mm gerd patients ( . % female, . % male) had a mean demeester score . (sd: . ) above normal, although . % of patients were on gerd medications; . % of patients with mm-associated gerd had abnormal nadh cytochrome c reductase, cytochrome c oxidase, and citrate synthase, while . % of mm patients had an abnormal cytochrome c oxidase enzyme only. mm patients with constipation had a mean wexner constipation score . (sd: . ) above normal, although . % were using enemas, medications, or digital assistance; in % of these patients, cytochrome c oxidase and nadh cytochrome c reductase were abnormal. . % of patients with mm-associated gallbladder issues had abnormal cytochrome c oxidase, and . % of patients with mm-associated gerd and constipation had gallbladder issues. conclusion: in the present study, we found that mm affects the gastrointestinal system, causing gerd, constipation, and gallbladder issues. gerd, constipation, and gallbladder problems are common in mm patients even when patients are taking medications for gerd and constipation. cytochrome c oxidase, citrate synthase, and nadh cytochrome c reductase are the most commonly impaired mitochondrial enzymes in mm patients and in patients with mm-associated gerd, constipation, and gallbladder issues.
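a brief aside to the mitochondrial myopathy study above: a minimal sketch of flagging gerd and constipation by score thresholds. the cutoff values are elided in the abstract, so the thresholds below (14.72 for the demeester score, a commonly cited upper limit of normal, and 15 for the wexner score) are assumptions for illustration only.

    # threshold classification sketch; cutoffs are assumed, not taken from the abstract
    DEMEESTER_CUTOFF = 14.72   # commonly cited upper limit of normal (assumption)
    WEXNER_CUTOFF = 15         # illustrative cutoff for constipation (assumption)

    def classify_patient(demeester: float, wexner: int) -> dict:
        """Flag GERD and constipation from pH-monitoring/questionnaire scores."""
        return {
            "gerd": demeester > DEMEESTER_CUTOFF,
            "constipation": wexner > WEXNER_CUTOFF,
        }

    print(classify_patient(demeester=32.4, wexner=18))
    # {'gerd': True, 'constipation': True}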
objectives: gulf war illness (gwi) is a chronic, multisymptom illness marked by cognitive and mood dysfunction and disrupted neuroendocrine-immune homeostasis, affecting % of gw veterans. after + years, useful treatments are lacking and its cause is poorly understood, although exposures to pyridostigmine bromide and pesticides are consistently identified among the strongest risk factors. previous work in our laboratory using an established rat model of gwi identified persistent elevation of microrna- (mir- ) levels in the hippocampus, whose gene targets are involved in cognition-associated pathways and neuroendocrine function, suggesting that mir- inhibition is a promising therapeutic approach to improve the complex symptoms exhibited in gwi. the purpose of this study was to identify broad effects of mir- inhibition in the brain by profiling the expression of genes known to play a critical role in synaptic plasticity, glucocorticoid signaling, and neurogenesis in gwi rats administered a mir- antisense oligonucleotide (mir- inhibitor). methods and procedures: nine months after completion of a -day exposure regimen involving gw-relevant chemicals and stress, rats underwent intracerebroventricular infusion of mir- inhibitor (n= ) or scrambled negative control oligonucleotide (n= ) and were implanted with -day osmotic pumps delivering . nmol/day. intranasal delivery of oligonucleotides was performed in additional rats (n= per group; daily for days) to determine whether mir- inhibition is achievable using a noninvasive procedure. hippocampi were harvested, and quantitative pcr arrays were used to profile the expression of focused panels of genes important for 1) synaptic alterations during learning and memory, 2) signaling initiated by the glucocorticoid receptor (a known mir- target), and 3) neurogenesis. hippocampi were also analyzed by quantitative pcr to examine expression levels of endogenous mir- (a sketch of the fold-change computation used in such qpcr analyses follows this abstract). results: upregulation (> . -fold change, p < . ) of synaptic plasticity genes, glucocorticoid signaling genes, and neurogenesis genes was observed in the hippocampus of gwi rats infused with mir- inhibitor compared to scrambled control, consistent with a significant reduction (p < . ) in mir- levels detected in rats receiving the mir- inhibitor. altered gene expression and a reduction in mir- levels were not observed in rats after intranasal delivery. conclusion: mir- antagonism in the hippocampus upregulates the expression of several downstream targets involved in synaptic plasticity, glucocorticoid signaling, and neurogenesis and is a promising therapeutic approach to improve cognition, emotion regulation, and neuroendocrine dysfunction in gwi. further testing is being pursued to discover the optimal dose for intranasal administration to test the viability of this option for ill gw veterans.
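a minimal sketch of the livak (delta-delta ct) fold-change computation that underlies qpcr results like those reported above. the ct values are placeholders, and the reference-gene choice is an assumption.

    # Livak (delta-delta Ct) method: fold change = 2^(-ddCt); placeholder Ct values
    def fold_change(ct_target_treated, ct_ref_treated,
                    ct_target_control, ct_ref_control):
        d_ct_treated = ct_target_treated - ct_ref_treated   # normalize to reference gene
        d_ct_control = ct_target_control - ct_ref_control
        dd_ct = d_ct_treated - d_ct_control
        return 2 ** (-dd_ct)

    # example: target gene in inhibitor-treated vs. control hippocampus,
    # normalized to a housekeeping gene (e.g., gapdh -- an assumption)
    print(round(fold_change(24.1, 18.0, 26.6, 18.2), 2))  # ~4.92-fold up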
nikhil gupta, dr, ananya deori, dr, arun k gupta, dr, dipankar naskar, dr, c k durga, dr; pgimer dr rml hospital, delhi. background: the ultrasonic dissector, commonly known as the harmonic scalpel, has been in use for achieving haemostasis in surgery for almost years. its advantages in breast surgery, especially in dissection of the axilla, have been a matter of debate, as previous studies have shown inconsistent results. this study compares the outcomes of the ultrasonic dissector in axillary dissection with those of conventional electrocautery. methods: patients undergoing mrm and bcs with axillary dissection from november till march were included in the study. patients were randomized into two groups: group a underwent axillary dissection with the ultrasonic dissector and group b with electrocautery. operative time, intraoperative bleeding, postoperative pain, postoperative drain volume, hospital stay, and any other complications were noted in the two groups. results: the number of patients in both groups was each. group a had a significantly shorter operative time, both for axillary dissection ( . min vs. . min, p . ) and in total duration ( . vs. . min, p= . ). blood loss was significantly less in group a, as measured by the mop count. there was a significant reduction in total postoperative drainage volume, which resulted in fewer days with the drain in situ and fewer total days in hospital. there was no significant change in postoperative complications such as haematoma, seroma, flap necrosis, and oedema. conclusion: with the use of the ultrasonic dissector, operative time, blood loss, and axillary drainage were significantly reduced; the reduced axillary drainage, in turn, shortened the hospital stay. there was no significant difference in complications such as haematoma formation, seroma formation, skin flap necrosis, or oedema.

for the statistical analysis, chi-square (χ²) or fisher's exact tests were used to compare proportions, and the nonparametric mann-whitney u test was used for analysis of values with abnormal distribution (a sketch of these tests follows below). discussion: the study included patients. all preoperative laboratory indicators were elevated, but the laboratory tests did not demonstrate any statistically significant difference between the two groups. the group of patients without stones in the cbd diagnosed by ioc was further divided into patients with cbd diameters below . mm and those with diameters ≥ . mm; in these two groups as well, the statistical analysis of the laboratory tests did not demonstrate a significant difference. all patients underwent ioc. ioc showed stones in / patients ( . %). a comparison of patients with and without stones at ioc showed similar mean times from hospitalization to surgery ( . ).
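a minimal sketch of the tests named in the passage above, on placeholder data, using scipy's standard implementations.

    # hypothetical sketch: comparing a proportion and a skewed continuous variable
    import numpy as np
    from scipy import stats

    # proportion (e.g., stones at IOC) as a 2x2 contingency table of counts
    table = np.array([[18, 82],    # group 1: event, no event (placeholder counts)
                      [9, 91]])    # group 2
    chi2, p_chi, dof, expected = stats.chi2_contingency(table)
    odds, p_fisher = stats.fisher_exact(table)  # preferred for small expected counts
    print(f"chi-square p={p_chi:.3f}, fisher p={p_fisher:.3f}")

    # non-normally distributed values (e.g., time from hospitalization to surgery)
    group1 = np.array([1.2, 2.5, 3.1, 0.9, 4.4])
    group2 = np.array([1.0, 1.8, 2.2, 0.7, 2.9])
    u, p_mwu = stats.mannwhitneyu(group1, group2, alternative="two-sided")
    print(f"mann-whitney u p={p_mwu:.3f}")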
background: housed in a high-volume tertiary referral center, our division receives a large number of transfers and referrals from outside institutions for patients who require completion cholecystectomy. in this study, "completion cholecystectomy" refers to patients who meet one of three criteria: previous subtotal cholecystectomy, previously aborted cholecystectomy, or previous cholecystectomy with an incidental finding of cancer on pathology. traditionally, exploration of a reoperative field in the right upper quadrant mandates an open approach due to dense adhesions and inflammation. over the past few years, we have found that robotic-assisted surgery has allowed us to perform these completion cholecystectomies in a minimally invasive fashion. methods: case logs and operating room billing logs from to were reviewed to identify all robotic-assisted cholecystectomies performed at our institution; review of all reports identified completion cholecystectomies. all additional variables, including demographics, operative variables, and postoperative outcomes, were determined from manual chart review of all consultation notes, operative reports, anesthesia records, progress notes, discharge summaries, and postoperative office visits. results: of the identified robotic-assisted completion cholecystectomies, patients had a previous subtotal cholecystectomy, patients had an aborted cholecystectomy, and patients had an incidental finding of t gallbladder carcinoma on pathology. fifteen patients ( %) underwent preoperative ercp either for choledocholithiasis or to delineate biliary anatomy. average time from the original procedure was months, with . % of previous procedures performed with an open approach. average or time was . minutes, average ebl was . cc, and average length of stay was . days. one patient ( . %) was readmitted within days for nausea that resolved with antiemetics. three patients ( . %) had minor postoperative complications (clavien-dindo grade or ) that resolved with pharmacologic therapy. no patient suffered a -day mortality. all cases were completed in minimally invasive fashion without conversion to an open procedure. conclusions: although rare, completion cholecystectomies present a challenging surgical scenario. although traditionally performed with an open approach, we have had success in recent years at our institution with a robotic-assisted approach to completion cholecystectomy. we feel that the robotic approach offers certain advantages in a hostile, reoperative field, allowing us to perform these procedures in a minimally invasive fashion with no conversions to an open procedure to date. previously limited to case reports, this report of procedures represents, to our knowledge, the largest case series of robot-assisted completion cholecystectomies.

background: the percutaneous cholecystostomy tube (pct) has been used as a bridge treatment for grade ii-iii moderate to severe acute cholecystitis (ac) to "cool down" the gallbladder over several weeks and allow the inflammation to resolve prior to performing interval cholecystectomy (ic), often laparoscopic, with removal of the pct. the aim of this study was to assess the impact of the timing of ic after pct on operative success and outcomes. methods: a retrospective review of the electronic medical records of patients who were treated for ac with a pct and subsequently underwent ic at our institution between january and december was performed. the patients were divided into three groups (n= each) based on the duration of the pct prior to ic, and these groups were comparatively analyzed. a comparative sub-analysis of clinical outcomes between patients who underwent surgery within the first week vs. the third week or later after pct was also performed. results: a total of patients met the study criteria; each group had patients. there were no statistically significant differences between the groups with regard to age, gender, bmi, imaging findings, or indications for cholecystostomy tube placement. overall, there was no statistically significant difference in outcomes between performing ic within the first weeks, at - weeks, or more than weeks after pct placement. length of stay, overall morbidity, clavien-dindo grade of complications, and mortality were similar between the time intervals. however, a sub-analysis showed that patients who underwent ic within the first week of pct placement had a statistically significantly higher mortality rate (p= . ) compared to those who underwent ic more than weeks after pct placement; the two patients who died in our sample had ic within a week after pct placement. even though there was a statistically significantly higher morbidity rate in those who had ic more than weeks after pct, the clavien-dindo grade of these complications was lower. conclusion: delaying ic to more than weeks after pct placement for ac is not associated with any improvement in patient morbidity, length of stay, or rate of conversion from laparoscopic to open cholecystectomy. cholecystectomy within the first week of pct placement is associated with a higher mortality rate than after weeks, likely due to associated sepsis.

introduction: the effect of intraoperative bile spillage during laparoscopic cholecystectomy (lc) on operative time (or time), length of stay (los), postoperative complication rates, and -day readmission rates was analyzed. laparoscopic cholecystectomy is the gold-standard operation for gallbladder disease in the united states, and a number of studies have shown that same-day discharge after elective laparoscopic cholecystectomy is feasible and safe. bile spillage during this procedure can be a common occurrence in teaching institutions; however, data on its effect on operative outcomes are lacking. methods: this is a retrospective study analyzing all laparoscopic cholecystectomies performed at the brooklyn hospital center (tbhc), both emergent and elective, from to .
patient data were collected on demographics, comorbidities, bile spillage, operative findings, complications, los, and -day readmission rates. statistical analysis was performed using ibm spss statistics v. . analysis of covariance (ancova) was performed on continuous variables, and significance levels were calculated; pearson's chi-square significance level was calculated for all binomial variables (an ancova sketch follows this abstract). results: of the patients who underwent lc during this time period, intraoperative bile spillage was encountered in patients. interestingly, bile spillage was significantly more likely to be seen in elective cases than in acute cases ( . % vs . %, p . ). there was a statistically significant increase in or time in cases where intraoperative bile spillage was encountered vs. cases where none occurred ( vs. min, p= . ). there was a significant increase in the rate of conversion to an open procedure when bile spillage was encountered ( . % vs. . %, p . ), and drain placement rates increased, not surprisingly, when bile spillage was encountered ( . % vs. . %, p . ). there was no statistically significant difference in los between cases with and without bile spillage ( . days vs. . days), and there was no significant increase in complication rates or -day readmission rates. conclusions: intraoperative bile spillage significantly increases or time, conversion to an open procedure, and drain placement. however, no significant effect of intraoperative bile spillage was observed on length of stay, complications, or -day readmission rates. thus, intraoperative bile spillage appears to have little clinical significance for surgical outcomes, although it may have an impact on overall healthcare costs. larger prospective studies evaluating the effect of intraoperative bile spillage on los, or time, complication rates, and -day readmission rates are needed to analyze these effects further.
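a minimal, hypothetical sketch of the ancova described above, fitted as an ordinary least squares model with the group factor plus covariates. the file and column names are assumptions.

    # hypothetical ANCOVA: OR time vs. bile spillage, adjusting for covariates
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("lap_chole_cases.csv")  # assumed file and columns
    # C(...) marks categorical factors; age and bmi enter as continuous covariates
    model = smf.ols("or_time_min ~ C(bile_spillage) + C(emergent) + age + bmi",
                    data=df).fit()
    print(model.summary())
    # the p-value on the bile_spillage term is the covariate-adjusted group effect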
tariq nawaz, md; rawalpindi medical university. study design: prospective and observational study. place and duration: from january to july , surgical unit ii, holy family hospital, rawalpindi. patients and methods: thousand patients with a diagnosis of cholelithiasis were included; exclusion criteria were patients younger than year or older than years. calot's triangle dissection was done meticulously. cystic artery and hepatic artery anomalies and variations were observed and analyzed in spss . results: age varied from to years. on the basis of distributional variation, the cystic artery was single in % of cases, branched in % of cases, and absent in % of cases. on positional variation, the cystic artery was superomedial to the cystic duct in % of cases, anterior in % of cases, posterior in % of cases, and low-lying in % of cases. on the basis of length variation, ( %) cases had a normal cystic artery, a short cystic artery was found in ( %) cases, and a long cystic artery was present in ( %) cases. other arterial variations involved the hepatic artery, i.e., moynihan's hump ( %) and a right hepatic artery present in calot's triangle in %. conclusions: for the safety of laparoscopic cholecystectomy, one should be well aware of the anatomical variations of the cystic and hepatic arteries. keywords: cholelithiasis, cholecystitis, laparoscopic cholecystectomy.

as small as it gets: micro-invasive laparoscopic cholecystectomy using only two mm trocars and a needle grasper. background: the majority of surgeons use four ports for laparoscopic cholecystectomy (lc), and multiple efforts have been made to reduce the number and size of ports. patients and methods: of lcs performed from / to / , ( %) were done using three instruments, including cases in which trocars and the teleflex needle grasper were used. in cases, only two mm trocars were used (left upper quadrant (luq) and umbilicus), with the minigrasper placed between the two. the gallbladder (gb) serosa was incised on both sides, and a window was created behind the gb midportion and widened toward the fundus and infundibulum. the cystic artery (ca) and cystic duct (cd) were dissected out, obtaining the critical view, and after the last fundus adhesion was cut, the ca and cd were secured with clips or an endoloop. results: the median age of the women and men was . (range . - . ) years. lc was done for acute cholecystitis (n= ), chronic cholecystitis (n= ), biliary dyskinesia (n= ), and choledocholithiasis (n= ). three patients had an ercp with bile duct clearance prior to lc. in one case, a keith needle was used to suspend the gb fundus for better exposure. twelve patients had additional procedures together with their lc: wedge liver biopsy ( ), lysis of adhesions ( ), umbilical hernia repair ( ), and mesenteric/lymph node biopsies ( ). median or time was (range - ) minutes. the specimen was removed through the luq port site in patients. there were no vascular or bile duct injuries in this series. % of cases were done as outpatient procedures, % of patients required -hour observation, and only three patients were hospitalized for medical reasons. conclusion: in selected cases with either small stones or biliary dyskinesia, lc with only two mm ports and a needle grasper is possible. the teleflex minigrasper can completely replace a port-based grasper.

introduction: the standard treatment for lithiasic acute cholecystitis remains laparoscopic cholecystectomy, although the timing of surgery is still controversial. the aim of this prospective study was to evaluate the advantages and limitations of early laparoscopic cholecystectomy in a district hospital. methods and procedures: all patients undergoing laparoscopic cholecystectomy at the surgical department of "carlo urbani" hospital in jesi (italy) from may to september were consecutively enrolled. clinical data such as gender, age, bmi, comorbidity, previous abdominal surgery, and previous acute cholecystitis were collected. the patients were then arranged in two groups according to the timing of intervention (early versus elective surgery). for each group, we compared data concerning surgery, such as operative time, intraoperative and postoperative complications, length of hospital stay, and cost analysis. results: this study is part of ongoing research. so far, we have collected laparoscopic cholecystectomies. ten ( %) of the patients were admitted with acute cholecystitis and were operated on during the hospital stay (group a); group b included patients scheduled for elective surgery (n= ; %). the two groups were comparable with respect to clinical data. conversion to an open approach was performed in cases, all of them in group b. mean surgical time was . ± . minutes in group a and . ± . minutes in group b (p= . ). no significant differences in intraoperative or postoperative complication rates were seen between the two groups, with just a few in each. mean overall length of hospitalization was . ± . days in group a and ± . days in group b (p= . ), whereas the difference in length of postoperative hospitalization was not statistically significant.
due to the extended hospitalization in group a, the cost increase compared to group b was statistically significant as well. conclusions: early laparoscopy is comparable to delayed laparoscopy in terms of postoperative hospitalization and complications in the management of acute cholecystitis. the longer hospital stay among patients scheduled for immediate surgery may be associated with a more time-consuming diagnostic work-up before surgery. in future research, however, we expect to enhance our cost analysis with more data regarding the costs incurred during the first hospitalization for nonoperative treatment of group b inpatients with acute cholecystitis.

introduction: with improvements in healthcare access and technology, admissions of the octogenarian population with acute cholangitis (ac) are increasing, and octogenarians are vulnerable to inferior outcomes. there is no study evaluating factors that predict outcomes of ac in octogenarians. the aim of our study was to identify factors predicting outcomes and to evaluate the quick sequential organ failure assessment (qsofa) score and tokyo guidelines (tg ) severity grading for octogenarian patients with ac (a qsofa sketch follows this abstract). methods: a retrospective review of octogenarian patients admitted with ac from january to december was performed. demographic profile, clinical presentation, and discharge outcomes were studied. systemic inflammatory response syndrome (sirs), qsofa, and tg severity grading scores were calculated. mortality was defined as death within days of admission or in-hospital mortality. statistical analysis was performed using spss version . results: there were a total of patients admitted for ac, of whom ( %) were octogenarians. the majority (n= , %) were female, with a mean age of (range - ) years. most cases were secondary to gallstones (n= , %), and ( %) were due to malignancies. ( %) and ( %) patients fulfilled the sirs and qsofa criteria of severity, respectively. ( %) and ( %) of patients had a tg severity grading of moderate and severe, respectively. nine ( %) patients required inotropic support in the emergency department (ed), and ( %) patients were admitted to the critical care unit (ccu). ( %) patients underwent endoscopic retrograde cholangiopancreatography (ercp) and ( %) underwent percutaneous transhepatic biliary drainage (ptbd) for biliary decompression. patients underwent index cholecystectomy. length of stay was . (range - ) days, with a -day mortality of %. multivariate analysis showed that an abnormal glasgow coma score (p= . ) and malignancy (p . ) predicted -day mortality. the use of ed inotropic support predicted ccu admission (p= ). a positive blood culture (p= . ), presence of malignancy (p . ), use of ed inotropes (p= . ), and index cholecystectomy (p= . ) predicted a longer length of stay. qsofa (p . ) and tg severity grading (p= . ) were predictive of -day mortality; the sirs criteria were not. conclusion: reduced consciousness and malignancy predicted -day mortality in octogenarian patients with ac. the qsofa and tg severity grading systems are superior to the sirs criteria in predicting mortality of octogenarians with ac.
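a minimal sketch of the qsofa score referenced above, using its standard criteria (respiratory rate ≥ 22/min, systolic blood pressure ≤ 100 mmhg, and altered mentation, i.e., gcs < 15; a score ≥ 2 flags high risk). the helper function is illustrative, not the authors' code.

    # qSOFA: one point each for RR >= 22/min, SBP <= 100 mmHg, GCS < 15
    def qsofa(respiratory_rate: int, systolic_bp: int, gcs: int) -> int:
        score = 0
        score += 1 if respiratory_rate >= 22 else 0
        score += 1 if systolic_bp <= 100 else 0
        score += 1 if gcs < 15 else 0
        return score

    patient = {"respiratory_rate": 24, "systolic_bp": 92, "gcs": 14}
    score = qsofa(**patient)
    print(score, "high risk" if score >= 2 else "lower risk")  # 3 high risk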
our group has performed needlescopic grasper-assisted silc (nsilc) to overcome these problems. we evaluated the technical feasibility, safety, and benefit of nsilc versus three-port laparoscopic cholecystectomy (tplc). methods and procedures: this prospective randomized controlled study was conducted to compare the advantages, if any, between nsilc and tplc. one hundred and forty-eight patients were randomized into two groups: one group underwent nsilc ( patients) and a control group underwent tplc ( patients). basic information about the patients and diagnoses was collected. the surgical outcomes comprised critical view of safety (cvs) time, major procedure time, and total operation time, and postoperative complications were compared. results: the nsilc group consisted of male ( . %) and female ( . %) patients, and the tplc group consisted of male ( . %) and female ( . %) patients (p= . ). the average age of the nsilc group was . ± . years, and that of the tplc group was . ± . years (p= . ). cvs time in the tplc group was shorter than in the nsilc group (nsilc: . ± . min, tplc: . ± . min, p= . ), and major procedure time (skin incision to gb removal from the liver bed) in the tplc group was shorter than in the nsilc group (nsilc: . ± . min, tplc: . ± . min, p= . ). however, there was no significant difference in postoperative complications (nsilc: , tplc: , p= . ). conclusion: although the cvs time, major procedure time, and operation time of nsilc were longer than those of tplc, overall clinical results were similar. nsilc is a feasible and safe surgical procedure in patients with benign gallbladder disease.

introduction: management of malignant biliary obstruction (mbo) not amenable to surgery is usually by means of ercp or pthc; however, on occasion these routes are not accessible, and the alternative decompressive technique of percutaneous cholecystostomy (pc) has to be adopted. the aim of this study was to evaluate the efficacy and outcomes of pc in a highly selected series at a tertiary referral center. methods: we retrospectively reviewed all patients who had undergone pc from to . data collected included baseline demographics, comorbidities, details of pc placement and management, etiology of mbo, and post-procedure outcomes. the charlson comorbidity index (cci) was calculated for all patients at the time of pc. results: four hundred and eight patients underwent pc placement, of whom patients, including ( %) males and ( %) females, had malignant biliary obstruction. the mean age at the time of pc placement was . ± . years, and the mean cci was . ± . for all patients. the etiology of mbo was pancreatic malignancy (n= ), cholangiocarcinoma (n= ), primary hepatic malignancy (n= ), secondary hepatic tumors (n= ), and ampullary carcinoma (n= ). pc tube complications were reported in ( %) patients. the mean number of tube exchanges was . ± . , and the mean duration from pc tube placement to death was ± . days. total deaths were recorded. conclusion: pc placement appears to be a viable option for mbo in elderly and frail patients; in this cohort, pc may be a potential definitive management to improve quality of life.

melanie boyle, daivyd palencia, philip leggett; houston northwest medical center. background: there are very few studies assessing the relationship between gastroesophageal reflux and biliary disease. this is surprising, as they share presenting symptoms as well as risk factors, particularly obesity. our group previously produced a review of patients in our practice who had undergone some type of reflux procedure; its conclusions showed that the prevalence of gallbladder disease in our severe reflux population is much higher than that found in the general population.
the goal of this study is to expand on those data with a larger sample size, to investigate the incidence of biliary disease in our reflux population, and to decide whether this should influence our preoperative algorithm for anti-reflux surgery patients. methods: we expanded our previously performed retrospective review of patients who underwent laparoscopic fundoplication for reflux disease. we previously reviewed data from to ; we are now looking at data from to . our expected sample size will include approximately patients, of whom have currently been reviewed (our previous study included only ). the surgery performed was either a toupet or nissen fundoplication, and one patient underwent a dor. demographic data, imaging studies, and pathology results were reviewed. results: we looked at whether each patient who underwent antireflux surgery had a prior cholecystectomy (either remote or recent), underwent concomitant cholecystectomy, or had no biliary disease in their workup. the groups were of similar age and were predominantly women. we once again demonstrated that the prevalence of gallbladder disease in our severe reflux population is much higher than in the general population. when approaching a patient with gastroesophageal reflux disease, attention should therefore be paid to gallbladder symptomatology as well. we recommend that it may be beneficial to include gallbladder ultrasound in the preoperative workup for antireflux surgery so that concomitant cholecystectomy can be performed if indicated.

steven schulberg, do, jonathan gumer, do, matt goldstein, vadim meytes, do, george ferzli, md; nyu langone hospital - brooklyn. introduction: acute cholecystitis is a common surgical disease, with roughly , cholecystectomies performed in the us annually. the current dogma revolves around the " -hour rule," advocating early cholecystectomy if within the window and, if beyond hours, conservative treatment and interval operation. in patients beyond the -hour window, as well as those with multiple comorbidities, advanced age, and other complicating factors, cholecystostomy has become an acceptable treatment as a bridge to interval cholecystectomy. while it has become an appropriate treatment modality, it does not come without its own set of complications. we aimed to evaluate the rate of complications at our institution. methods: this is a retrospective review of all patients at our institution who underwent cholecystostomy placement between and . we evaluated the comorbidities, readmission rate, overall rate of complications associated with cholecystostomy tubes, and eventual definitive cholecystectomy. results: our cohort includes patients, % of whom were male, with a mean age of . we had an overall complication rate of . %, including tube dislodgements, leaking tubes, and misplaced tubes. the all-cause readmission rate was %, and only % of patients who had cholecystostomy drains underwent interval cholecystectomy. conclusion: there has been much interest in the treatment of acute cholecystitis in patients with multiple comorbidities. in reviewing our data, a surprisingly large number of patients had mechanical complications involving the cholecystostomy drain. in an era focused on decreasing readmission rates and their associated costs, drains carry a high risk of malfunction, which will in turn increase both of these metrics.
while there is more work to be done in the evaluation of early cholecystectomy versus cholecystostomy in this subgroup of patients, we suspect that early cholecystectomy in the medically optimized patient will lead to reduced length of stay and hospital costs as well as increased patient satisfaction.

does selective use of hepatobiliary scintigraphy (hida) scan for diagnosis of acute cholecystitis, following equivocal nondiagnostic gallbladder ultrasonography, affect outcomes? fahad ali, ba, amir aryaie, md, eneko larumbe, phd, mark williams, md, edwin onkendi, md; texas tech university health sciences center. introduction: acute cholecystitis (ac) is diagnosed by characteristic gallbladder ultrasonographic findings (high specificity, low sensitivity). hepatobiliary scintigraphy (hida) may be needed to confirm ac (higher sensitivity and specificity). the aim of this study was to assess the impact of the current selective use of hida scanning for sonographically equivocal cases of ac on outcomes. methods: a retrospective chart review of patients treated for ac at our institution ( / to / ) was performed. patients were divided into groups: an ultrasound-only group (us-only) and an ultrasound-plus-hida group (us-hida). the timing of us and hida and of intervention for ac from presentation to the emergency room (er), and their impact on outcomes, were analyzed. ac severity was graded per the tg tokyo guidelines. results: a total of patients were analyzed. the groups were statistically similar with regard to age, body mass index, asa class ii, iii, and iv, extent of leukocytosis at presentation, and liver function test levels at presentation. in the us-only group, the diagnostic ultrasound was obtained sooner [median (interquartile range, iqr . - . ) hours] from presentation to the er compared to the us-hida group [median (iqr ) hours], p= . (a sketch of this median/iqr reporting follows this abstract). hida was obtained after a median delay of . (iqr . - ) hours from a nondiagnostic ultrasound. the majority of patients ( %) in the us-only group had mild (tg grade i) to moderate (tg grade ii) ac, while % of the us-hida group had moderate (tg grade ii) to severe (tg grade iii) ac (p= . ). despite this, more patients in the us-hida group ( %) had a "normal" non-diagnostic ultrasound compared to the us-only group ( . %), p . . seven patients in the us-hida group had no intervention due to a normal hida scan ( ), ac misdiagnosis due to liver cirrhosis ( ), or severe medical comorbidities ( ). more patients ( %) in the us-only group underwent laparoscopic cholecystectomy, compared to % in the us-hida group (p= . ). between the two groups, there were no significant differences in -day morbidity, mortality, or reoperations. however, length of stay was longer by a median of . days in the us-hida group (p= . ). conclusion: patients with moderate to severe ac are more likely to need a hida scan due to a "normal" non-diagnostic ultrasound, to have a delay in diagnosis, to have no intervention for ac due to severe medical comorbidities, and to have a lower chance of laparoscopic cholecystectomy. the length of hospital stay is significantly longer for these patients, by a median of . days.
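a small sketch of the median and interquartile-range reporting used above, on placeholder times, with numpy percentiles.

    # median and IQR for a skewed time-to-test variable (placeholder data, hours)
    import numpy as np

    hours_to_us = np.array([1.4, 2.0, 2.7, 3.1, 4.8, 6.0, 9.5])
    median = np.median(hours_to_us)
    q1, q3 = np.percentile(hours_to_us, [25, 75])
    print(f"median {median:.1f} h (IQR {q1:.1f}-{q3:.1f})")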
the purpose of our study was to assess the efficacy for routine histopathology of gallbladder specimens after cholecystectomy (cly) for all gallbladder disease. methods and procedures: after obtaining approval from our institutional review board, a retrospective review was conducted on all patients who underwent cly from june of to may were included in the study. the data obtained include gender, age, american society of anesthesiologist score (asa), body mass index (bmi), comorbidities, length of stay (los), radiological imaging and pathology results. independent t and chi-square tests were performed using ibm® spss® software. results: there were cly performed at our institution, of which ( %) were lc. females composed of ( %) patients and the median age was . ( %) gallbladder specimens were found to be cancerous. ( %) gallbladder specimens were benign. majority ( %) were chronic cholecystitis, ( %) were acute cholecystitis and ( %) were gangrenous cholecystitis. ( %) were found to be acalculus cholecystitis and ( %) were cholelithiasis. ( %) were found to be adenomyositis, and other. conclusion: in our institution, less than % ( ) of all gallbladder specimens were found to be cancerous. it would decrease cost and work load if gallbladder specimens are selectively sent to pathology. emanuel a shapera, md we sought to determine clinical factors associated with recurrent cholangitis in two las vegas community hospitals to aid providers in management of this disease. methods and procedures: retrospective, multi-center study. over ercps were analyzed between and . patients were identified as having multiple ( ) admissions for cholangitis per tokyo criteria. univariate and multivariate analysis was conducted. results: patients with a significantly (p. ) higher albumin level on admission ( . ) were discharged home more often than patients discharged to a facility or hospice ( . ). on multivariate analysis, non-home discharge was associated with lower albumin level at admission (p= . ) and greater maximum temperature prior to decompression (p= . ). increased hospital stay was associated with lower albumin level at admission (p= . ). a majority ( / ) of recurrent episodes involved stent placement, exchange or removal. patients ( %) had either biliary malignancy, gallbladder or both. blood cultures were drawn in % of all episodes and positive in %, e coli being the most common pathogen isolated. all patients had low hdl levels ( - , mean ) . conclusions: high fevers and poor nutritional status was associated with increased length of hospital stay and fewer home discharges. tumors, gallbladders and malfunctioning stents contribute substantially to morbidity. close follow up for indicated gallbladder removal, stent management and nutritional optimization is critical to reduce the burden of this disease. we compared the surgical method in neonate choledochal cyst between oec and lec. the perioperative and surgical outcomes that were reviewed included age, operative time, postoperative hospital stay, time to diet, and surgical complications. the patients were followed up for months (range, - months) . results: there was no difference in range of bile duct excision and manner of roux-en-y hepaticojejunostomy between oec and lec groups. there was no intraoperative complication in both groups and no open conversion in the lec group except one case which was ruptured choledochal cyst. the median age of oec and lec groups were days (range, - ) and . 
we compared the surgical methods for neonatal choledochal cyst: open excision (oec) versus laparoscopic excision (lec). the perioperative and surgical outcomes reviewed included age, operative time, postoperative hospital stay, time to diet, and surgical complications. the patients were followed up for months (range, - months). results: there was no difference in the range of bile duct excision or the manner of roux-en-y hepaticojejunostomy between the oec and lec groups. there were no intraoperative complications in either group, and no open conversion in the lec group except for one case of ruptured choledochal cyst. the median ages of the oec and lec groups were days (range, - ) and days (range, - ), and the median body weights at the time of operation were . kg (range, . - . ) and . kg (range, . - . ), respectively. the median operative time was minutes (range, - ) in the oec group and . minutes (range, - ) in the lec group, with no significant difference between groups (p= . ). intraoperative bleeding was minimal in both groups. postoperative hospital stay, time to start diet, and time to return to full feeding did not differ significantly between groups. after discharge, of ( %) oec patients required readmission for cholangitis or ileus, while there were none in the lec group. conclusions: this study showed that lec had a better prognosis than oec and provided an excellent cosmetic result, so we suggest that lec could be the treatment of choice for neonatal choledochal cyst. this is a small series; future studies will have to include a larger number of patients and evaluate long-term follow-up. keywords: choledochal cyst, laparoscopy, neonate. laparoscopic narrow band imaging for intraoperative diagnosis of tumor invasiveness in gallbladder carcinoma: a preliminary study yukio iwashita, hiroki uchida, teijiro hirashita, yuichi endo, kazuhiro tada, kunihiro saga, hiroomi takayama, masayuki ohta, masafumi inomata; oita university faculty of medicine introduction: determining tumor invasiveness before operation is one of the most important unsolved issues in the management of gallbladder cancer. we hypothesized that the assessment of irregular vessels on the gallbladder wall may be useful for detecting subserosal infiltration. we present an initial report on the clinical usefulness of laparoscopic narrow band imaging (nbi) for the intraoperative diagnosis of tumor invasiveness in gallbladder carcinoma. methods: thirteen patients with gallbladder cancer were included in this study. patients with tumors located in the liver bed and those with definitive invasion on computed tomography were excluded. gallbladders were observed using nbi and the microvasculature was evaluated. according to previous reports of endoscopic nbi, we defined four findings as positive: vessel dilatation, tortuousness, interruption, and heterogeneity. the nbi findings were compared with postoperative pathological findings. the study protocol was approved by the institutional review board of oita university. results: the serosal surface of the tumor site and its microvasculature were successfully observed in all patients. laparoscopic nbi detected at least one abnormal finding in seven patients, in whom postoperative pathology showed subserosal infiltration accompanied by vessel invasion. in contrast, the six patients with no positive nbi findings showed mild or no subserosal infiltration and no vessel invasion. conclusions: our study indicates that laparoscopic nbi may be useful for diagnosing subserosal infiltration accompanied by vessel invasion.
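with thirteen patients, an association between nbi positivity and subserosal infiltration is best checked with an exact test. a minimal sketch using the counts stated in the abstract (seven nbi-positive patients with infiltration, six nbi-negative without); the 2x2 layout itself is an assumption:

```python
# minimal sketch: fisher's exact test on the small 2x2 table implied above.
# row layout (positive/negative nbi vs. infiltration present/absent) is assumed.
from scipy.stats import fisher_exact

table = [[7, 0],   # nbi positive: infiltration present / absent
         [0, 6]]   # nbi negative: infiltration present / absent
odds_ratio, p_value = fisher_exact(table)
print(f"p = {p_value:.4f}")  # exact p-value, valid despite the tiny sample
```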
shuichi iwahashi, mitsuo shimada, satoru imura, yuji morine, tetsuya ikemoto, yu saito, hiroki teraoku; department of surgery, tokushima university introduction: laparoscopic cholecystectomy (lap-c) is the standard operation for benign gallbladder disease. we have previously reported that reduced-port lap-c (rpl-c) is a safe method comparable to sils-c and conventional lap-c (sages ). here we examined the utility of rpl-c, including postoperative adverse events. procedures: the indication was benign disease, including cholecystolithiasis; cases of advanced obesity and residual inflammation were excluded. a skin incision was made at the umbilicus, the abdomen was opened, and a camera port was inserted. we used a -mm flexible scope. -mm forceps for grasping the gallbladder fundus, held in the operator's left hand, were inserted directly without a port. methods: rpl-c has been introduced in this department since july . we performed cases of lap-c, including sils-c and american-style conventional lap-c, and rpl-c has already been performed in cases. we compared patient background and operative factors between rpl-c, sils-c and conventional lap-c. the operators were young surgeons who were not specialists in gastroenterological or endoscopic surgery. results: there were no differences in age, gender, physique or disease, and no differences in postoperative hospital stay (rpl-c : sils-c : conventional lap-c = . ± . days : . ± . days : . ± . days), blood loss ( . ± . ml : . ± . ml : . ± . ml) or operation time ( ± min : ± min : ± min). the surgical wound after rpl-c was cosmetically acceptable. regarding postoperative adverse events, there were no cases of bile duct injury. conclusion: in patients undergoing reduced-port lap-c, there were no bile duct injuries as a postoperative adverse event. reduced-port lap-c is safe for young surgeons and comparable to the other methods.
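three-group comparisons of continuous outcomes like those above (hospital stay, blood loss, operation time across rpl-c, sils-c and conventional lap-c) are commonly tested with one-way anova. a minimal sketch, with entirely synthetic group sizes and values rather than the abstract's data:

```python
# minimal sketch: one-way anova across three surgical groups.
# all numbers are synthetic placeholders, not the abstract's data.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)
rpl_c = rng.normal(80, 20, 30)         # reduced-port group, operation time in minutes
sils_c = rng.normal(85, 20, 25)        # single-incision group
conventional = rng.normal(78, 20, 40)  # conventional four-port group

f_stat, p_value = f_oneway(rpl_c, sils_c, conventional)
print(f"f = {f_stat:.2f}, p = {p_value:.3f}")  # large p -> no detected difference
```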
introduction: acute cholangitis is an ascending infection of the biliary tree secondary to obstruction and can be severe if proper intervention and treatment are not performed in a timely fashion. the most common management of cholangitis with ductal obstruction due to choledocholithiasis is intravenous hydration, empiric antibiotic therapy, and endoscopic retrograde cholangiopancreatography (ercp) with sphincterotomy and stone extraction, with or without stent placement, followed by delayed laparoscopic cholecystectomy. we present the case of a patient with blood clot obstruction of a common bile duct (cbd) stent after ercp with sphincterotomy and stone extraction. case presentation: a -year-old male presented to the emergency department with jaundice, right upper quadrant abdominal pain, truncal pruritus, nausea, vomiting, and fever. biochemical analyses and liver profile demonstrated an elevated white blood cell count, hyperbilirubinemia, and elevated liver enzymes consistent with cholestasis. biliary ultrasound demonstrated multiple gallstones and dilation of the cbd with a distal obstructing calculus. he proceeded to ercp, where biliary cannulation was achieved, sphincterotomy performed, and a large amount of sludge and pus drained. a -mm stone was removed from the cbd by balloon sweep, with completion cholangiogram demonstrating no filling defects. a stent was then placed in the cbd with adequate flow. following the procedure, the patient continued to have increasing hyperbilirubinemia. a repeat ercp revealed a large blood clot and continued bleeding at the previous sphincterotomy, which resolved with epinephrine injection. the former stent was visualized in the proper position, removed with a snare, and found to be fully occluded with blood clots. after retrieval of additional clots, a new stent was placed with adequate return of bile. the patient recovered, with resolution of his symptoms and hyperbilirubinemia, and underwent laparoscopic cholecystectomy. discussion: cholangitis is characterized by charcot's triad of right upper quadrant abdominal pain, fever, and jaundice, due to an ascending bacterial infection of the biliary tree coinciding with obstruction of biliary flow, most commonly from gallstones. cholangiography via ercp with associated sphincterotomy, stone extraction, and stenting is both diagnostic and therapeutic. while debated by endoscopists, stent placement has been shown to reduce recurrent biliary complications, decrease length of hospital stay, and lessen morbidity. although pancreatitis is the most common cause of hyperbilirubinemia post-ercp, stent occlusion secondary to stones or blood clots should be considered in order to treat patients effectively. proper hemostasis is important in any procedure, and close patient follow-up should be performed to prevent further complications. sarrath sutthipong, md, panot yimcharoen, md, poschong suesat, md; bhumibol adulyadej hospital background: choledochal cyst (cc) is a rare disease characterized by dilatations of the extra- and/or intrahepatic bile ducts. ccs occur most frequently in asian and female populations. cc is associated with biliary lithiasis and is considered at risk of malignant transformation. todani's classification, dividing cc into types, is the most useful in clinical practice. the current standard treatment is complete cyst excision with roux-en-y hepaticojejunostomy and cholecystectomy for extrahepatic disease (todani types i and iv). in this report we present our experience using a totally laparoscopic technique to treat adult patients with cc over a -year period. methods: a retrospective review was carried out of the records of patients over years of age who underwent laparoscopic cyst excision and roux-en-y hepaticojejunostomy in our hospital between january and may . the data included clinical presentation, investigations, perioperative details and complications. the type of cc was classified according to todani's classification. results: seven cases of cc were reviewed, females and male, with a mean age of years (range - years). these included cases of todani type ib and cases of type a. the predominant symptoms were chronic abdominal pain and jaundice. a case with both pancreatitis and cholangitis was also seen. investigations included ultrasound with mrcp in cases and ercp in case. the mean operative time was hours minutes (range hours minutes to hours), with mean intraoperative blood loss of ml (range - ml). all resected specimens showed chronic inflammation; malignancy was not seen in any patient. early postoperative complications included bile leakage with intra-abdominal collection in patients, which was managed conservatively (guided by clinical status and imaging); re-operation was not required. the median duration of hospital stay was days (range - days). there was no perioperative mortality. all patients were followed up at , , and months postoperatively; late complications were not detected at any visit. conclusion: in our opinion, laparoscopic cyst excision and hepaticojejunostomy offers a feasible and safe treatment for cc in adult patients, with potentially less postoperative morbidity, a shorter length of stay and lower blood loss compared with the open approach. however, a larger sample of patients is needed to establish the efficacy and safety of the laparoscopic approach.
endoscopic trans-papillary gallbladder drainage (etgbd) in acute cholecystitis: a single center experience arun kritsanasakul, chotirot angkurawaranon, jerasak wannapraset, thawee rattanachu-ek, kannikar laohavichitra; rajavithi hospital background: surgery is the mainstay of treatment for cholecystitis; however, it may not be safe or feasible in some circumstances, such as severe cholecystitis or cholecystitis in extremely high-risk patients. gallbladder drainage may be an appropriate alternative or a bridging option prior to cholecystectomy. endoscopic trans-papillary gallbladder drainage (etgbd) has been proposed as a feasible and effective modality in cholecystitis. objective: the primary outcome of this study is to evaluate the effectiveness of etgbd; the secondary outcomes are its safety, early-experience outcomes, and complications. methods: retrospective review of medical records between january and december from a single tertiary referral center, rajavithi hospital, bangkok, thailand, covering patients diagnosed with cholecystitis who underwent etgbd. the procedure was performed in the endoscopy suite under light sedation via total intravenous anesthesia. patient demographic data and procedural details were collected. technical success of etgbd was defined as decompression of the gallbladder by successful cystic duct stent placement; clinical success was defined as resolution of symptoms and/or improved laboratory or ultrasonographic findings. results: a total of patients underwent etgbd. among these, were at high risk for surgery due to age or comorbidity, had concomitant jaundice, and had failed medical treatment. both technical and clinical success was achieved in of cases ( %). of the two patients without technical success, one was a failure to cannulate the guidewire through the cystic duct, and the other had a trans-cystic guidewire perforation that required surgical intervention. there were two intra-operative complications ( %): the trans-cystic guidewire perforation, and one anesthesia-related complication (hypoventilation requiring endotracheal intubation). there was no -day mortality. conclusion: endoscopic trans-papillary gallbladder drainage is an alternative treatment modality for patients with cholecystitis who are at high risk for surgery and/or unsuitable for percutaneous gallbladder drainage. the technique is feasible; however, careful case selection and a high level of endoscopic skill are needed.
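a success rate reported as "achieved in k of n cases" is more informative with a confidence interval, especially in a small series. a minimal sketch of the wilson interval, with no external dependencies; since the abstract's own counts are elided here, k and n below are purely hypothetical placeholders:

```python
# minimal sketch: wilson 95% confidence interval for a success proportion.
# k and n are hypothetical placeholders, not the abstract's counts.
from math import sqrt

def wilson_ci(k, n, z=1.96):
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

lo, hi = wilson_ci(k=18, n=20)
print(f"success 18/20 = 90%, 95% ci {lo:.2f}-{hi:.2f}")
```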
julia f kohn, bs, alexander trenk, md, woody denham, md, john linn, md, stephen haggerty, md, ray joehl, md, michael ujiki, md; university of illinois at chicago; northshore university healthsystem introduction: subtotal cholecystectomy, in which the infundibulum of the gallbladder is transected to avoid dissecting within a heavily inflamed triangle of calot, has been suggested as a method to conclude laparoscopic cholecystectomy while avoiding common bile duct injury. however, some case reports have suggested the possibility of recurrent symptoms from the remnant gallbladder. this retrospective case series reports a minimum of two years of follow-up on patients who underwent subtotal cholecystectomy within one four-hospital system. methods: a retrospective chart review database containing randomly selected cholecystectomies, all performed between and , was reviewed to identify all instances of subtotal cholecystectomy. charts for these patients were reviewed through / , including any documentation from other providers, including primary care. results: six patients who underwent subtotal cholecystectomy with an infundibular remnant left following surgery were identified. the surgical approach and the choice to perform subtotal cholecystectomy were at the discretion of the attending surgeon; all decisions were made intraoperatively. there was an average of months of follow-up for these patients within our institution. discussion: this case series adds six cases to the literature on long-term outcomes after subtotal cholecystectomy. although one patient was lost to follow-up, no patient had recurrent biliary colic or other complications arising from the remnant gallbladder. this may be encouraging to surgeons who feel that subtotal cholecystectomy with an infundibular remnant is the safest way to proceed in patients with severe inflammation. objective: this study aims to evaluate the utility and efficiency of indocyanine green (icg) as an alternative to routine intraoperative cholangiogram in patients undergoing cholecystectomy. introduction: common bile duct injury is an uncommon but serious complication of laparoscopic cholecystectomy. current guidelines state that, when used routinely, intraoperative cholangiogram (ioc) can decrease biliary injury; however, it is not routinely used, owing to increased operative time and inaccessibility of equipment. icg has been found to be effective for identification of biliary anatomy during cholecystectomy but has not yet been widely adopted. we aimed to assess whether icg can overcome the obstacles of ioc while still effectively delineating biliary anatomy. methods: we performed a retrospective analysis of laparoscopic cholecystectomies performed in a single institution from january to september . elective and emergent cases were included; patients who had concomitant procedures were excluded. we stratified patients into icg and non-icg groups and analyzed patient demographics, bmi, asa classification and comorbidities in both groups. our primary outcomes were operation time (skin to skin) and laparotomy conversion rate; secondary outcomes were the effectiveness of icg in visualizing biliary anatomy, and cost. results: patients were included in our study, in the non-icg arm and in the icg arm. the groups had similar backgrounds, with no statistical differences in demographics, asa classification, bmi, or comorbidities. there was no statistical difference in operation time ( . vs . minutes; p= . ) or conversion rate ( . vs %; p= . ). icg delineated biliary anatomy in % of patients. the cost of a mg/vial kit of icg is approximately $ . conclusion: the use of icg does not increase operating time during laparoscopic cholecystectomy. icg is an inexpensive and effective tool for delineating biliary anatomy without the inherent burden and limitations of ioc.
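the primary-outcome comparison above (mean skin-to-skin operation time between icg and non-icg groups) is a classic two-sample setting. a minimal sketch using welch's t-test, which does not assume equal variances; the group sizes and times are synthetic placeholders, not the study's data:

```python
# minimal sketch: welch's t-test for operation time in two retrospective groups.
# all values are synthetic placeholders.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(2)
icg = rng.normal(65, 18, 80)      # icg group, minutes (synthetic)
no_icg = rng.normal(66, 20, 150)  # non-icg group, minutes (synthetic)

t_stat, p_value = ttest_ind(icg, no_icg, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```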
benefsha mohammad, md, michele richard, md, steve brandwein, md, keith zuccala, md; danbury hospital; danbury hospital department of gastroenterology introduction: obesity is a prevalent issue in today's society, which has increased the number of gastric weight-loss surgeries. this presents an anatomical challenge when biliary disease requires endoscopic retrograde cholangiopancreatography (ercp). in gastric bypass patients, traditional ercp via the mouth is technically more challenging, requiring a longer endoscope, with a reported success rate of less than %. a solution is laparoscopic-assisted ercp (la-ercp) via gastrostomy. this minimally invasive technique has become increasingly prevalent and safe. we present our experience with la-ercp in a large cohort of patients at our community teaching hospital. methods and procedures: a retrospective chart review was performed of all patients with a history of laparoscopic gastric bypass surgery who underwent la-ercp from april to april . the procedure was performed by two general surgeons and one gastroenterologist. a pursestring suture and transfascial stay sutures were used to bring the gastric remnant to the abdominal wall. a gastrostomy was then created and accessed by the duodenoscope to perform the ercp. biliary sphincterotomy, papillary or biliary dilation, lithotripsy, stent placement, and/or stone removal were performed as indicated. we recorded postoperative outcomes, including acute pancreatitis, reoperation, post-procedure infection, pain control, hospital re-admission and bile leak. results: thirty-two patients met inclusion criteria. six patients were male and twenty-six were female, with mean ages of (sd ) and (sd ) years, respectively. indications for la-ercp included suspected choledocholithiasis ( / ), cholangitis with choledocholithiasis ( / ), acute pancreatitis ( / ), abdominal pain with abnormal lfts ( / ), cholangitis with cholecystitis ( / ), and bile leak ( / ). la-ercp was successfully performed in all thirty-two patients. biliary cannulation, sphincterotomy and stone extraction were performed in / patients, and one patient underwent sphincterotomy and stent placement for bile leak after recent laparoscopic cholecystectomy. one patient developed acute pancreatitis with elevated pancreatic enzymes, which resolved with conservative treatment. one patient required a second la-ercp for stent replacement due to a persistent bile leak. the median length of stay was days (range - days). conclusions: la-ercp is a safe and feasible alternative to open surgery and can be implemented at community hospitals with adequately trained providers. obesity is a growing burden on society, increasing the incidence of weight-loss surgery. our series shows that, in this minimally invasive era, la-ercp offers gastric bypass patients a safe alternative with less pain and increased satisfaction. ahmed elgeidie, elsayed adel; gastrointestinal surgery center background: endoscopic sphincterotomy (es) is an effective therapeutic procedure for common bile duct (cbd) stone clearance, but it carries a substantial long-term risk of recurrent stones. aim of the study: to evaluate the rate of cbd stone recurrence after primary complete endoscopic clearance, and to identify risk factors for recurrence. methods: between january and december , patients with cbd stones who underwent successful es and complete stone clearance were studied retrospectively. recurrent cbd stones were defined as confirmation of a cbd stone at least months after previous complete cbd stone clearance by es. risk factors for recurrent cbd stones and the mean interval between initial es and stone recurrence were analyzed.
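an illustrative aside before the results that follow: recurrence intervals like these, where most patients never recur during follow-up, are naturally summarized with censoring-aware kaplan-meier estimates rather than simple means. a minimal sketch, assuming the third-party lifelines package and entirely synthetic follow-up data:

```python
# minimal sketch: kaplan-meier estimate of recurrence-free time after stone
# clearance. durations and event flags are synthetic placeholders.
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(3)
months = rng.exponential(60, 200)      # follow-up or time-to-recurrence, months
recurred = rng.binomial(1, 0.12, 200)  # 1 = recurrent cbd stone observed

kmf = KaplanMeierFitter()
kmf.fit(durations=months, event_observed=recurred)
print(kmf.median_survival_time_)        # median recurrence-free time, if reached
print(kmf.survival_function_.tail())    # estimated recurrence-free fraction
```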
results: in total, patients were included. the median follow-up period was months. recurrent cbd stones appeared in / ( . %) patients after a median interval of ( - ) months following es. stone recurrence was observed on multiple occasions in patients ( . %). on univariate analysis, the significant risk factors for recurrent cbd stones were male sex (p= . ), previous history of cholecystectomy (p= . ), multiple cbd stones (p= . ), large cbd stone (p= . ), the presence of a periampullary diverticulum (p= . ) and stone crushing using mechanical lithotripsy (p= . ). conclusion: recurrence of cbd stones is an identified long-term risk after es and stone clearance. background: laparoscopic cholecystectomy during advanced pregnancy is challenging due to the limited intraabdominal space, and patients may be at increased risk for developing trocar-site hernia. case report: a -year-old hispanic female in her th week of pregnancy came to the er with acute right upper quadrant pain. due to lack of access to care, she had had poor prenatal care. she had mildly elevated amylase but normal lfts, and ultrasound showed gallbladder wall thickening suggestive of acute cholecystitis without a dilated biliary duct. fetal ultrasound was normal. she was admitted to the hospital and started on antibiotics, and obstetrics was consulted. her amylase peaked at u/l but then normalized, and the indication for laparoscopic cholecystectomy was made. mrcp and ercp were not performed, as it was assumed that the patient had passed a stone. five-mm trocars were placed in the luq and the umbilicus, with a teleflex minigrasper between the two. the uterus was found at the umbilical level. the gallbladder was retracted, the serosa was incised on both sides, and a window was created behind the gallbladder midportion and widened towards the infundibulum and fundus. there was gallbladder wall thickening and edema. the critical view was obtained and the cystic artery and duct were clipped and divided. the common bile duct appeared normal and no ioc was done. the specimen was retrieved through the luq port site using a -mm endobag after dilatation of the site to . cm, owing to the presence of two large stones. the port-site fascia was closed using a suture passer. the postoperative course was uneventful and both mother and baby were well at the two-week follow-up. discussion: in case of biliary pancreatitis during pregnancy, lc should be performed; if ultrasound shows a normal biliary system and amylase/lipase normalize, mrcp/ercp and ioc may be avoidable to protect the baby. lc with two ports is feasible during pregnancy. removal of the specimen through a lateral abdominal-wall site may help prevent an umbilical port-site hernia in this patient population. introduction: splenic abscess is a rare, potentially lethal condition, with autopsy studies showing incidence rates between . and . %. mortality rates ranging from to % make early diagnosis and prompt intervention vital. several case reports have documented post-surgical splenic abscess, most notably after laparoscopic sleeve gastrectomy. to the best of our knowledge, there have been no reported cases of splenic abscess arising after laparoscopic cholecystectomy. it is important to remember this disease process for expeditious, targeted treatment in future cases. case presentation: a -year-old female with a past medical history significant for cholelithiasis, hypertension, and hyperlipidemia presented to the emergency department (ed) with a chief complaint of abdominal pain for two days.
labs and imaging confirmed the diagnosis of choledocholithiasis and pancreatitis. ercp was performed, showing a . -cm stone causing obstruction, with several other smaller filling defects; the stones were removed after sphincterotomy. the patient then underwent an uncomplicated laparoscopic cholecystectomy on hospital day (hd) # . postoperatively, the patient had persistent leukocytosis, peaking at . thousand on postoperative day (pod) # . a ct scan showed a rim-enhancing splenic collection measuring . x . cm, suggestive of an abscess. interventional radiology was consulted and aspirated ml of purulent fluid. cultures grew klebsiella pneumoniae and enterobacter cloacae complex, and the patient was discharged home on zosyn. discussion: laparoscopic cholecystectomy has become the cornerstone of treatment for symptomatic biliary colic and acute cholecystitis. of its many recognized complications, splenic abscess has not previously been reported in the literature. the nonspecific signs and symptoms of splenic abscess make clinical diagnosis difficult; the classic triad of fever, palpable spleen and left upper quadrant pain is seen in only about two-thirds of patients. ct has been shown to be the most sensitive imaging modality for diagnosis of splenic abscess. current treatment options fall into two subsets: percutaneous and surgical intervention. percutaneous treatment includes image-guided aspiration with or without placement of a drainage catheter; surgical intervention can be laparoscopic or open and includes drainage of the abscess with splenectomy or splenic conservation. the best treatment option remains unclear, and prospective data demonstrating which modality is superior are lacking. introduction: laparoscopic subtotal cholecystectomy is widely accepted as a safe alternative to conventional laparoscopic cholecystectomy in acute cholecystitis with a frozen calot's triangle. the remnant stump of the gallbladder may be either sutured or looped; however, there are limited studies comparing the outcomes of the two techniques. the present study aimed to compare loop and suture closure of the gallbladder stump. methods: a retrospective analysis of our prospectively maintained database showed that, between january and december , patients underwent laparoscopic subtotal cholecystectomy for acute cholecystitis, chronic cholecystitis or empyema of the gallbladder with a frozen calot's triangle. the decision to use an endoloop or sutures for stump closure was made intra-operatively after dividing the gallbladder through the infundibulum. a no. drain was placed in all cases. patients were discharged with the drain in situ and reviewed on postoperative day , when an ultrasound was done and the drain removed if progress was satisfactory. intra-operative and postoperative data for the two groups were recorded and analyzed. results: endoloop closure was performed in patients and suture closure using - ethibond in patients. three patients in the sutured group had postoperative bile leak, among whom one underwent endobiliary stenting; the others were managed conservatively, although the drain had to be retained for weeks. two patients in the endoloop group were found to have a retained stone in the remnant gallbladder cuff, of whom one had recurrent cholecystitis requiring laparoscopic completion cholecystectomy.
none of the patients had bile duct injury or surgical-site infection. mean postoperative stay ( . ± . days) did not vary significantly between the groups. suturing needed more surgical expertise and had a longer operative time than the endoloop ( ± min versus ± min, p= . ). conclusion: suture and loop closure of the remnant gallbladder after subtotal cholecystectomy are equally effective. suturing the stump may be associated with a higher incidence of biliary leak, while the endoloop may have a higher incidence of retained gallstones. the choice between the two may be made intra-operatively based on the surgeon's expertise and preference. background and aim: in recent years, with the spread of laparoscopic cholecystectomy, bile duct injury has been reported as a complication at a certain frequency. current surgical treatments include ( ) laparoscopic suture closure of the injured part during surgery, ( ) conversion to laparotomy and suture closure, ( ) insertion of a tube such as a t-tube at laparotomy, and ( ) bilioenteric anastomosis at laparotomy. none of these is a definitively ideal treatment. we have developed a bioabsorbable material (a caprolactone:lactic acid ( : ) polymer reinforced with polyglycolic acid fiber, designed to be absorbed in about weeks). here we describe the current state of, and remaining problems in, the development of minimally invasive therapy for bile duct injury using this bioabsorbable material. method: to overcome the problems of current bile duct injury repair, we have been developing (a) a method of closing the perforation endoscopically from the luminal side of the bile duct (a covered stent using bioabsorbable material at the injured site), and (b) a method of closing the bile duct injury laparoscopically from outside the bile duct (adhering a bioabsorbable sheet to the perforation using a biocompatible adhesive). results: in open-surgery experiments in which the bioabsorbable material was sutured into the bile duct, the bile duct regenerated without stenosis at the injured site. however, although various adhesives were tried for bonding the bioabsorbable sheet to the native bile duct under laparoscopy, there is at present no glue that allows the sheet to be attached readily and reliably in the presence of moisture. a tool for delivering the sheet to the injured site from within the bile duct is under development, with good results so far. conclusion: it is possible to regenerate the bile duct without stricture using a bioabsorbable material. laparoscopic adhesion of the sheet to the injured bile duct remains difficult, but we hope that the development of further adhesives will make this possible in the near future. patients were divided into four groups according to body mass index: less than kg/m (a), - kg/m (b), - kg/m (c) and more than kg/m (d). we made a . -cm longitudinal skin incision within the umbilicus. a wound retractor and a surgical glove were applied at the incision. we used the three -mm-port technique. after retracting the gallbladder upward, the cystic duct and artery were identified and divided using pre-bending forceps through the flexible port and laparoscopic coagulating shears (lcs). the cystic artery was dissected using the lcs and the cystic duct was dissected after clipping. the gallbladder was freed from the liver bed using the lcs, and the specimen was retrieved from the umbilical wound.
results: there were conversions to open laparotomy in cases ( . %) and additional ports were required in ( . %). the mean age (years), operation time (min), blood loss (ml) and postoperative hospital stay (days) in groups a, b, c and d were . , . , . and . (p= . ); . , . , . and . (p= . ); . , . , . and . (p= . ); and . , . , . and . (p= . ), respectively. there was a significant difference in age only. the complications were bile duct injury in one case ( . %) and pneumothorax in two ( . %). conclusion: obesity had no influence on the surgical outcomes of silc. introduction: recent studies have reported mixed outcomes when comparing surgeon case volume and laparoscopic cholecystectomy (lc) outcomes. formal minimally invasive surgical training (mist) has been shown to be associated with a shorter postoperative length of stay (los), but no difference in major adverse events such as bile leak, bile duct injury, intra-abdominal abscess formation, and death. we aimed to determine -day rates of major adverse events after lc in a university hospital setting, to identify significant associated risk factors, and to determine whether mist or surgeon volume is associated with differences in los and major adverse events. methods: we conducted a single-center retrospective review of , cholecystectomies performed over a seven-year period ( - ). characteristics and outcomes were compared using chi-squared or rank-sum tests. multivariable regression modeling was used to determine independent associations with the two main outcomes, major adverse events and los. results: we identified , adults who underwent lc during the study period, with a median age of and % women. about % (n= ) of patients had a los > day, and . % (n= ) were re-admitted within days after surgery for any reason. within days of lc, . % (n= ) of patients suffered one or more major adverse events, including . % (n= ) with bile duct injury, . % (n= ) with bile leak, . % (n= ) with intra-abdominal abscess, and . % (n= ) who died for reasons related to their procedure or postoperative recovery. the table shows the characteristics of the patients and procedures, comparing patients with an adverse event versus those without. in univariate analysis, high annual surgical volume ( + cases/year) and procedure urgency were significant predictors of adverse events and los; mist was not. in multivariable analysis, controlling for significant univariate predictors, urgent or emergent cases were associated with a -fold increase in the odds of an adverse event (or = . ). introduction: laparoscopic cholecystectomy is an extremely common procedure in the united states, with over , cases performed annually. despite the procedure's overall safety, there is some evidence that tobacco use is associated with increased risk of wound infection after lc. this retrospective chart review examined whether tobacco use is associated with increased complications following laparoscopic cholecystectomy within a high-volume healthcare system. methods: after irb approval, of approximately , cholecystectomies performed within one four-hospital system between and were randomly selected, and patient charts were retrospectively reviewed. pre-, intra-, and postoperative data were collected, including all complications within days.
tobacco-use cohorts were defined as follows: never, former (any historical tobacco use), and current (active tobacco use within year of surgery), per the acs nsqip surgical risk guidelines. following preliminary data analysis, multivariable logistic regression models were generated to identify whether tobacco use was predictive of the outcomes of interest. of the cases analyzed, patients ( . %) were never-smokers, . % were former smokers, and . % were current tobacco users or had quit less than months prior to surgery. there were surgical-site infections, one wound dehiscence, one port-site hernia, three common bile duct injuries, and medical complications requiring prolonged hospitalization or readmission within days. current tobacco users were significantly more likely to undergo urgent surgery (following emergency admission or direct admission to the hospital) than former or never-smokers. however, there was no difference between cohorts in prolonged duration of surgery, conversion to an open procedure, surgical-site infection, wound dehiscence or hernia, common bile duct injury, or other medical complication, and no significant difference when all postoperative complications were pooled. conclusions: there does not appear to be a significant difference in -day surgical outcomes or complications between active tobacco users and former or never-users. although studies in other surgical settings have indicated a possible reduction in complications if patients abstain from smoking prior to surgery, this may not be beneficial for laparoscopic cholecystectomy. moreover, as current tobacco use appears to be associated with higher rates of urgent surgery, these patients may not be able to stop smoking prior to an elective procedure. prospective studies to clarify whether there is any benefit to tobacco cessation prior to lc may be valuable. serum markers were elevated (reference - ); cyfra levels were . , . and . , respectively (reference - . ); afp and cea were negative. this patient is at high risk of hepatobiliary system diseases. introduction: thymoma is a rare tumor entity, benign or malignant, arising from the epithelial cells of the thymus gland, and is frequently associated with the neuromuscular disorder myasthenia gravis. we present this rare case of thymoma with myasthenia gravis from our institute. methods: we operated on a single patient with thymoma in a case of myasthenia gravis using a video-assisted thoracoscopic approach. results: operative time was min, intraoperative blood loss ml, and the postoperative analgesia requirement was nsaids for days; no ventilatory support was required postoperatively. at follow-up there was a reduction in achr antibody from nmol/l to nmol/l and a reduction in symptoms in the form of reduced ptosis. conclusion: thoracoscopic thymectomy is feasible and safe in terms of less operative time, less postoperative pain and analgesia requirement, and no postoperative ventilatory support requirement. carter c lebares, md, stanley j rogers, md; ucsf background: duodenal fistulas are uncommon but morbid complications of acute necrotizing pancreatitis. if percutaneous drainage fails, surgical correction via roux-en-y diversion or pancreaticoduodenectomy can be required. while self-expanding metal stents have been tried, complications like migration and perforation have limited such use. endoscopic transmural stents have successfully treated fistulas of the stomach, particularly post-sleeve gastrectomy.
here we present a case of endoscopic transmural stents used to treat a non-resolving duodenal fistula following acute necrotizing pancreatitis. methods: under general anesthesia, using a standard adult gastroscope, the fistula was identified in the second portion of the duodenum (fig. ). a flexible-tipped guidewire was used to identify the fistula tract, and two fr, cm double-pigtail biliary stents were deployed (fig. ), with positioning verified under fluoroscopy. two weeks later these were removed and a single stent was deployed into the visibly smaller tract (fig. ). two weeks after that, the single stent was removed and contrast medium was injected under fluoroscopic visualization, demonstrating resolution of the fistula (fig. ). case: this patient is a -year-old woman with hypertension and congenital hearing loss who underwent cholecystectomy for biliary colic and subsequent ercp with sphincterotomy for a retained stone. this was complicated by acute pancreatitis, which progressed to severe necrotizing pancreatitis with infected retroperitoneal necrosis. percutaneous drainage yielded initial improvement, but a persistent moderate collection ( cc per day) led to the identification of a fistula in the second part of the duodenum. repositioning and exchange of percutaneous drains over weeks did not hasten resolution. endoscopic transmural pigtail stents were tried after visualization of a large ( - mm diameter) fistula tract. stents were used as described in the methods, with a total of three endoscopic interventions at -week intervals, resulting in resolution of the fistula, as evidenced by contrast injection into the duodenum under fluoroscopy and subsequent ct scan with oral contrast. the patient's symptoms resolved, she was tolerating a normal diet, and she remained so at -month follow-up. conclusion: this case demonstrates the benefit of endoscopic transmural stents for the resolution of duodenal fistulas, expanding the utility of this technique to leaks and fistulas of the upper gastrointestinal tract. further study is warranted to clarify the timing and adjuncts that optimize this promising approach. totally laparoscopic alpps combined with microwave ablation for a patient with a huge hcc hua zhang; department of hepatopancreatobiliary surgery, west china hospital, sichuan university introduction: associating liver partition and portal vein ligation for staged hepatectomy (alpps) is a novel technique for resecting hepatic tumors previously considered unresectable because an insufficient future liver remnant (flr) could result in postoperative liver failure (plf). the procedure has been accepted and modified in many medical centers worldwide, but reports of laparoscopic alpps are rare. this study aimed to report a totally laparoscopic alpps combined with microwave ablation for a patient with a huge hcc and to confirm the feasibility of laparoscopic alpps. methods: a -year-old man complained of a -year history of right upper abdominal pain, which had worsened in the preceding month. abdominal enhanced computed tomography (ct) revealed a -cm solid mass in the right lobe of the liver, with non-uniform enhancement and an unclear boundary, invading the right posterior branch of the portal vein. in addition, a small lesion was found simultaneously in the left lateral lobe of the liver. the tumor was evaluated as unresectable because the flr was only ml ( %). we decided to perform a laparoscopic alpps procedure.
the first stage comprised microwave ablation of the lesion in the left lobe, cholecystectomy, ligation of the portal vein and transection of the liver parenchyma. the second stage was done days later and consisted of laparoscopic right hemihepatectomy. results: both stages were completed laparoscopically. operative time was and minutes, respectively, with estimated blood loss of and ml. the stay in the intensive care unit was and days, and there was no need for transfusion in either stage. the patient was discharged days after the second stage, and total hospitalization was days. recovery was uneventful apart from an incisional infection after the second stage, which resolved with conservative management, and the patient showed no signs of liver failure. the ct scan before the second stage showed enlargement of the left lobe, with an flr of ml ( . %). there were no signs of residual liver disease on the ct scan days after the operation, and no signs of recurrence or liver failure during six months of follow-up. conclusion: totally laparoscopic alpps combined with microwave ablation is safe and feasible for multiple hcc that would otherwise be unresectable. hypertrophy of the remaining liver was fast and achieved an adequate volume in a short time.
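the staging decision above hinges on simple future-liver-remnant arithmetic: the flr as a percentage of total liver volume before stage 1, and the degree of hypertrophy before stage 2. a minimal sketch; since the report's own volumes are elided here, the millilitre values below are hypothetical placeholders:

```python
# minimal sketch: flr percentage and interstage hypertrophy, as used when
# deciding whether stage 2 of alpps can proceed. volumes are hypothetical.
def flr_percent(flr_ml, total_liver_ml):
    return 100.0 * flr_ml / total_liver_ml

def hypertrophy_percent(flr_before_ml, flr_after_ml):
    return 100.0 * (flr_after_ml - flr_before_ml) / flr_before_ml

total = 1500.0            # total liver volume, ml (placeholder)
before, after = 300.0, 600.0
print(f"flr before stage 1: {flr_percent(before, total):.0f}% of total volume")
print(f"flr before stage 2: {flr_percent(after, total):.0f}%")
print(f"interstage hypertrophy: {hypertrophy_percent(before, after):.0f}%")
```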
introduction: chronic pancreatitis is a benign, irreversible inflammatory disorder characterized by conversion of the pancreatic parenchyma into fibrous tissue. initial management should be conservative; surgery is applied in case of failure of medical treatment. the development of minimally invasive techniques has made it possible to perform these highly technical procedures laparoscopically. materials and method: we present two patients, aged and years, with chronic pancreatitis and difficult-to-manage pancreatic lithiasis with intractable pain, in whom surgical management was decided. the procedure was performed under general anesthesia, with an epidural analgesia catheter placed. pneumoperitoneum was established at mmhg, with an -mm umbilical port, working ports of mm and one of mm. the pancreas was exposed by dividing the gastrocolic ligament with a -mm ultrasonic scalpel, with cephalic retraction of the stomach, opening the lesser sac and approaching the pancreas through the omental cavity. the ventral surface of the pancreas was exposed from the neck. an incision was made in the body of the pancreas with a monopolar hook, the main pancreatic duct lumen was identified, and the incision was extended longitudinally from the neck to the tail of the pancreas ( cm). a roux-en-y loop was prepared cm from the ligament of treitz, with division of the jejunum using a -mm stapler; the roux loop was passed retrocolic through the transverse mesocolon, and the mesocolic defect was closed with monocryl. a -cm jejuno-jejunal anastomosis was performed with an endo-gia stapler, and the enterotomy was closed with - polypropylene intracorporeal suture. the isoperistaltic jejunal (roux) loop was placed longitudinally along the opened main pancreatic duct, and an enterotomy was made with monopolar cautery on its antimesenteric border. the intracorporeal pancreaticojejunostomy was performed in a lower and an upper plane, with interrupted full-thickness stitches of ethibond - . closed drains were placed toward each anastomosis. this procedure was performed in the patients reported. results: operative time was - min, with no complications and minimal bleeding; hospital stay was - days, and drains were removed in both cases at days. at -year follow-up, both patients had improved pain. conclusions: minimally invasive surgery is a fundamental tool for the approach and management of patients with biliopancreatic pathologies. the establishment of multidisciplinary groups offers an excellent alternative for the integral management of these patients. gallbladder anatomy is highly variable, and surgeons must be prepared to identify anomalies of form, number, and position. variants include gallbladder agenesis, diverticulum, duplication, bilobed, multiseptate, phrygian cap, ectopic, and hourglass gallbladder. the hourglass gallbladder has been described since the earliest days of cholecystectomy: morton described a congenital case in , and else thoroughly described the acquired and congenital strictures leading to the hourglass deformity in . we describe a case of an hourglass gallbladder found during one-step endoscopic retrograde cholangiopancreatography (ercp) and laparoscopic cholecystectomy. this -year-old male presented to an outside hospital with one day of nausea and constant, severe epigastric pain that radiated to his back. he endorsed a history of similar pain several times in the past. his abdomen was soft and nontender, without murphy's sign. laboratory evaluation revealed total bilirubin . mg/dl, alkaline phosphatase u/l, ast u/l, alt u/l, and no leukocytosis. ct of the abdomen and pelvis revealed cholelithiasis, distal choledocholithiasis, intra- and extra-hepatic ductal dilation, and a . -centimeter left liver hemangioma. he was transferred for management of choledocholithiasis, and abdominal ultrasound revealed cholelithiasis without gallbladder wall thickening or pericholecystic fluid, and a . -millimeter common bile duct without choledocholithiasis. he was taken to the operating room for one-step ercp and laparoscopic cholecystectomy. at laparoscopy, dense adhesions to the gallbladder were found. after initially attempting to obtain the critical view of safety, we embarked on a retrograde "top-down" dissection. this isolated a spherical structure measuring . x . centimeters. two very thin tubular structures were identified, clipped, and transected after we found they were too small to accept a cholangiocatheter. the common bile duct appeared to be pulled anteriorly by surrounding inflammation, though this was later found to be the proximal segment of the gallbladder. the intra-operative ercp identified a remnant gallbladder with cholelithiasis and no extravasation of contrast. given the unusual anatomy, we completed the operation, ordered a postoperative ct liver and mrcp, and consulted a hepatopancreatobiliary surgeon. a small remnant gallbladder was identified on ct liver, though not on mrcp. completion laparoscopic cholecystectomy with intraoperative cholangiogram and ultrasound was performed on hospital day . this hourglass gallbladder variant likely occurred secondary to chronic fibrosis from cholecystitis, leading to a proximal and a distal gallbladder lumen. in anatomic uncertainty, the "top-down" dissection, intraoperative cholangiography, ct liver, and expert consultation are safe methods to avoid iatrogenic injury. introduction: endoscopic entero-enteral bypass could change our approach to small bowel obstruction in patients with prohibitively high operative risk. magnetic compression anastomoses have been well vetted in animal studies but remain infrequent in humans.
isolated cases of successful use in humans include treatment of biliary strictures and esophageal atresia. while endoscopic gastro-enteric magnetic anastomoses have been described, the associated multicenter cohort study was terminated due to serious adverse events. since then, the technology has evolved, and recently our own institution reported results of the first in-human trial of magnetic compression anastomosis (magnamosis), deployed through an open approach. here we present the first case of endoscopic delivery of the magnamosis device and the successful creation of an entero-enteral anastomosis for chronic small bowel obstruction in a patient with prohibitively high operative risk. methods: the magnamosis device has previously been approved by the food and drug administration (fda) for use in a clinical trial. our institutional review board approved emergency compassionate endoscopic use of the device in this patient, given his non-resolving small bowel obstruction and prohibitively high operative risk. case: this is a -year-old man with advanced liver disease, chronic obstructive pulmonary disease, and a history of emergent right colectomy with end ileostomy for cecal perforation. he presented with multiple acute-on-chronic episodes of small bowel obstruction with a stable transition point in the distal ileum, radiographically estimated at centimeters proximal to the ileostomy. endoscopic evaluation through the ileostomy revealed a traversable obstruction with proximally dilated small bowel. the magnets were delivered via endoscopic snare under fluoroscopic guidance and positioned in adjacent loops of bowel on either side of the obstruction (image ). by days post-procedure, healthy villi were visible through the central portion of the mated magnetic rings (image ). by days, the magnetic rings were mobile and the anastomosis was widely patent, allowing easy passage of the gastroscope (image ), and the patient's symptoms had completely resolved. the rings passed through the ileostomy days post-procedure. at -month follow-up, the anastomosis was unchanged (image ). conclusion: this case demonstrates the benefit of an endoscopically created magnetic compression anastomosis in a patient with small bowel obstruction and high operative risk. further studies are indicated to evaluate the use of this technique in similar patients and in those with malignant obstruction. desiree raygor, md, ruchir puri, md; university of florida health jacksonville cholecystectomy is one of the commonest operations in general surgery [ ]. occasionally, chronic cholecystitis can lead to a small contracted gallbladder. this diagnosis can be misleading, as it may represent congenital agenesis of the gallbladder [ ]. a -year-old female with a past history of pancreatitis presented with a three-day history of right upper quadrant pain associated with nausea and vomiting. on exam she exhibited tenderness in the right upper quadrant. her leukocyte count and liver function tests were within normal limits. ultrasound revealed a poorly visualized, contracted gallbladder without stones and a dilated common bile duct (cbd). cholescintigraphy revealed non-visualization of the gallbladder after two hours, suggestive of acute cholecystitis. the decision was made to proceed with laparoscopic cholecystectomy. the abdomen was entered by an open hasson technique and standard trocar placement for cholecystectomy was performed. on initial inspection, the gallbladder was not readily visible.
a structure appearing to be the cbd was present and was mobilized circumferentially (fig. ). a -gauge butterfly cannula was utilized and multiple cholangiographic images were obtained (fig. ). no cystic duct or gallbladder was identified, consistent with congenital agenesis of the gallbladder. the patient did well postoperatively and was discharged home on postoperative day two. her symptoms resolved and she remains pain-free one month postoperatively. congenital agenesis of the gallbladder is a rare disorder, and a high index of suspicion is required, especially in the setting of a small contracted gallbladder. if preoperative imaging is inconclusive, diagnostic laparoscopy should be the next step. cholangiography should be performed routinely to confirm the diagnosis and to rule out an ectopic gallbladder. conversion to open does not offer any distinct advantage, and laparotomy should be avoided if possible, given its associated morbidity. there are many reports of major upper abdominal arterial aneurysms; however, an aneurysm of the left inferior phrenic artery has never been reported. a -year-old woman with liver cirrhosis associated with hepatitis b viral infection was referred to the department of surgery for treatment of an aneurysm of the left inferior phrenic artery. she had previously undergone trans-arterial chemoembolization (tace) three times for hepatocellular carcinoma. months after the last tace, a -mm highly enhancing nodular lesion at the gastric fundus was found on follow-up abdomen-pelvis computed tomography (a-p ct). one year later, the lesion had increased to mm, and an aneurysm was diagnosed. she underwent angiography with attempted embolization of the left inferior phrenic artery aneurysm, but access failed. we performed laparoscopic vessel ligation. she recovered with no complications and was discharged on the th postoperative day. yousef almuhanna, vatsal trivedi, fady balaa; university of ottawa a -year-old female, g , weeks pregnant, was brought to the hospital by ems after being found on her bathroom floor surrounded by vomitus and urine. her mother-in-law, who happened to be in the house at the time, had heard severe retching followed by a loud bang. firefighters found no pulse and therefore started cpr. return of spontaneous circulation was achieved, yet unfortunately she arrested again minutes prior to arrival in the er. pocus assessment showed a large rvot, and tpa was therefore started on the assumption of pulmonary embolism. blood work showed that her hemoglobin had dropped from to . fast was repeated, showing a moderate-to-severe amount of free fluid in morrison's pouch and the pelvis. she was then taken to the operating theatre and underwent laparotomy, which showed a liver segment ii injury. pringle's maneuver and aortic clamping did not control the bleeding, so finger fracture and venous clips were used to temporarily minimize the bleeding, and she was taken to the interventional radiology suite. despite multiple attempts to control the bleeding, and massive transfusion, her vital signs could not be maintained and she subsequently arrested. sarrath sutthipong, md, chumpunut chuthanan, md, chinnavat sutthivana, md, petch kasetsuwan, md; bhumibol adulyadej hospital, bangkok, thailand background: mesenteric panniculitis (mp) is a rare, benign and chronic fibrosing inflammatory disease that affects the adipose tissue of the mesentery of the small bowel and colon.
the specific etiology is unknown, and there is no clear information about its incidence. the diagnosis is suggested by ct and usually confirmed by surgical biopsy. treatment is based on selected drugs; surgical resection is sometimes attempted for definitive therapy, although the surgical approach is often limited. we report a case of mp diagnosed with ct and surgical biopsy by a laparoscopic approach. case report: a -year-old woman presented with a -month history of chronic abdominal pain, mainly localized to the sub-epigastrium, intermittent and mild. she had anorexia but no weight loss or change in bowel habits. she had no history of medical illness or surgery. physical examination was unremarkable except for palpation of an ill-defined mass of about cm in the mid-abdomen, firm, with a smooth surface and mild tenderness. the laboratory profile and tumor markers were normal. ct of the abdomen showed focal heterogeneous enhancement of the mesenteric fat with stranding ( . x . cm) and multiple internal subcentimeter lymph nodes in the supraumbilical area, probably inflammatory in origin and suggestive of mp. f-fdg pet/ct showed faint fdg uptake in multiple mesenteric lymph nodes. the patient subsequently underwent diagnostic laparoscopy with biopsy. the intra-operative finding was a yellowish, fat-like mass at the mesentery of a jejunal segment; incisional biopsy was performed laparoscopically. histology showed adipose tissue with areas of fat necrosis, fibrosis, foamy macrophage infiltration and predominantly chronic inflammation, with no evidence of malignancy. ihc studies (including cd , s- , cd and cd ) were compatible with a reactive process. treatment was started with mg prednisone once daily, with planned follow-up by repeat ct. discussion: mp involves the small bowel mesentery in over % of cases. the diagnosis is made on the pathologic findings of fibrosis, chronic inflammation and fatty infiltration. the differential diagnosis is broad and includes malignancies such as lymphoma, well-differentiated liposarcoma and melanoma; the imaging appearance varies depending on the predominant tissue component. definitive diagnosis requires biopsy, but open biopsy is not always necessary; laparoscopic biopsy, as in our case, has rarely been reported previously. treatment is reserved for symptomatic cases, with a variety of drugs. our patient was started on oral corticosteroids and awaits response evaluation. background: laparoscopic appendectomy is the gold standard for treatment of acute appendicitis. stapled closure of the appendiceal stump is often performed and has been shown to have several advantages. few prior cases have been reported of complications from free staples left within the abdominal cavity after the laparoscopic stapler has been fired. case report: a previously healthy -year-old female initially underwent laparoscopic appendectomy for acute uncomplicated appendicitis, during which the appendix and mesoappendix were divided using laparoscopic gastrointestinal anastomosis (gia) staplers. her initial postoperative recovery was uncomplicated and she was discharged home the same day. the patient returned to the emergency department on postoperative day with one day of sharp mid-abdominal pain, obstipation, and emesis. her abdomen was distended and mildly tender but not peritoneal. she was afebrile but was found to have a leukocytosis of . .
ct demonstrated twisted loops of dilated small bowel in the right lower quadrant with two transition points, suggestive of internal hernia with closed loop bowel obstruction. diagnostic laparoscopy was performed through the three prior appendectomy incisions. an adhesion was noted between the veil of treves and the mesentery of a more proximal loop of ileum, caused by a solitary free closed staple remote from the staple lines, resulting in an internal hernia containing several loops of ileum (fig. ). the hernia was reduced, and the small bowel was noted to have early ischemic discoloration. the adhesion was lysed by removing the staple from both structures to prevent recurrence. during the remainder of the procedure, the compromised loops of bowel began to peristalse and their color normalized. the procedure was concluded without resection. the patient recovered on a surgical floor and was discharged home on postoperative day one. conclusion: gastrointestinal staplers are commonly used owing to their ease of use and low complication rate. it is not uncommon to leave free staples in the abdomen during laparoscopy, as retrieval can often be more difficult and time consuming. our case is only the second in the literature reporting an internal hernia with closed loop bowel obstruction as a complication of a retained staple. choosing the most appropriate staple load size to reduce the number of extra staples after firing, and removing as many free staples as possible, can prevent potentially devastating complications. video-assisted thoracoscopic pulmonary wedge resection in a patient with hemoptysis and intralobar sequestration: a case report. mary k lindemuth, md, subrato j deb, md; the university of oklahoma health science center. case report: a -year-old male with a history of noonan's syndrome, bronchitis, and asthma presented with acute hemoptysis. while chest x-ray was unremarkable, a computed tomography angiogram of his chest was significant for intralobar pulmonary sequestration in the right lower lobe. the aberrant pulmonary artery originated from the abdominal aorta, immediately proximal to the celiac axis, and coursed through the hiatus in the retroperitoneum. flexible fiberoptic bronchoscopy revealed blood within the right lower lobe bronchus with no appreciable source. a right video-assisted thoracoscopic approach was taken for wedge resection of the sequestration. a two-portal technique was utilized with the patient on single-lung ventilation. the sequestration was easily identified; the anomalous pulmonary artery coursed directly to a large, focal area of hemorrhage noted within the lower lobe pulmonary parenchyma, as seen in the image [rectangle marking the aberrant artery and oval marking the sequestration]. pathologically, the specimen was noted to be benign lung parenchyma with bronchiectasis and abundant acute hemorrhage. discussion: pulmonary sequestration (ps) is a rare, congenital bronchopulmonary foregut malformation. the literature describes the incidence of ps to be only . - . % of all pulmonary malformations. as ps is most frequently diagnosed during childhood, the occurrence of diagnosis during adulthood is estimated to be less than per , adults. two types (intra- and extralobar) are described, with intralobar sequestration the more common and contained within the normal visceral pleura. both types have aberrant systemic arterial blood supply, most frequently from the thoracic aorta.
likewise, both types are nonfunctioning lung tissue, as there is no direct communication with the bronchopulmonary tree. the most common presentation is pneumonia, and often patients will have had recurrent symptoms before diagnosis. it is rare to present with hemoptysis, which is understood to be secondary to elevated capillary pressure within the sequestration and subsequent communication through the pores of kohn. while endovascular embolization of the aberrant pulmonary artery has been described as a safe alternative to surgical intervention, the subjects of these studies have primarily been children and long-term outcomes are unknown. the definitive treatment of ps continues to be surgical intervention. the surgeon should strive to leave as much normal lung parenchyma as possible. video-assisted thoracoscopic resection is well tolerated by patients when compared to thoracotomy. however, it is vital for the surgeon to be aware of the potential risk of life-threatening hemorrhage secondary to the sequestration's systemic blood supply, which must be controlled and ligated. case report: a -year-old female patient presented with a history of an enlarging mass and weight loss of kilograms in months, associated with vomiting and nausea for eight months. abdominal ultrasound showed an irregular cyst, without solid projections and without signs of flow on doppler, measuring cm. investigation continued with a ct scan that showed a large homogeneous cystic lesion with no septum in the abdominopelvic region, possibly mesenteric, measuring . . cm. a laparoscopic approach for resection of the cyst was then undertaken. the surgery was performed with the patient in dorsal decubitus, using three trocars: one in the umbilical region ( -mm) for the camera, where the pneumoperitoneum was created by the hasson open technique under direct vision, and another two located in the epigastrium ( -mm) and in the right upper quadrant ( -mm). in addition to the mesenteric cyst, a simple cyst in the right ovary and a solid nodule of lipomatous appearance, approximately cm, in the abdominal cavity were visualized. total resection of the mesenteric cyst with peripancreatic fibrous tissue was performed. the cyst was punctured and its contents fully aspirated. resection of the right ovarian cyst was also performed. at the end of the procedure the mesenteric and ovarian cysts, the nodule, part of the omentum, and the peripancreatic tissue were removed through the -mm trocar at the umbilicus. the patient had no further complications and was discharged four days after the procedure. histopathology showed a serous cyst in the right ovary and a serous cyst in the peripancreatic mesentery with a chronic inflammatory process and signs of calcification; no signs of malignancy were observed in any specimen. we aim to present a successful therapeutic approach utilizing laparoscopy for safely removing a gastrointestinal stromal tumor. depicted is a -year-old jehovah's witness female who presented to the emergency department for evaluation of bitemporal headache and dizziness and was found to have profound anemia, with a hemoglobin of . and hematocrit of . upon arrival to the ed. the patient refused blood transfusion, as her religious beliefs preclude her from receiving blood products. as part of her workup, endoscopy was performed and revealed a large, approximately cm, prolapsed, ulcerated, nodular lesion with active bleeding in the cardia of the stomach.
this was temporized, but the friable tissue, with no single identifiable lesion for clip placement, left the patient at high risk for re-bleeding. she was taken to the operating room, and laparoscopic partial gastrectomy with intraoperative esophagogastroduodenoscopy was successfully performed, with minimal blood loss and no intraoperative complications. the patient was discharged on postoperative day . we present the case of a -year-old male with a history of morbid obesity and an initial bmi of . , who underwent an elective laparoscopic single anastomosis duodenal-ileal bypass with sleeve gastrectomy (sadi-s). postoperatively he developed an anastomotic leak at the duodeno-ileal anastomosis that would not resolve despite reoperation. he was then converted to a roux-en-y gastric bypass (rygb). postoperative imaging failed to reveal any signs of anastomotic leak and the patient was discharged tolerating an oral diet. he returned to the emergency department days later with a cm sub-hepatic collection arising from the duodenal stump from the surgical conversion. interventional radiology percutaneously drained the collection and found a connection between the cavity and the duodenum. using this connection, a percutaneous decompressive duodenostomy drain was successfully inserted into the duodenum using a guidewire through the abscess cavity, along with an extra-enteric drain placed within the cavity. the collection was obliterated and the duodenal leak was controlled successfully with percutaneous drainage, bowel rest with parenteral nutrition, and broad-spectrum intravenous (iv) antibiotics. the patient was reintroduced to a bariatric clear diet after a week of bowel rest, and the abscess drain was discontinued during the same hospital admission. the patient was discharged with the percutaneous duodenostomy tube, which was removed in clinic days later, after the patient tolerated capping trials and imaging failed to reveal any further collections, oral contrast extravasation, or distal obstruction. in this article we analyze notable imaging from the case and review the current literature on the different management options for a duodenal stump blowout. we also discuss the basics of the sadi-s procedure and conversion of a sadi-s procedure to a rygb. keywords: anastomotic leak, duodenal stump blowout, sadi-s, duodenostomy tube. pancreatic heterotopia is often an incidental finding on autopsy, but in some cases can lead to abdominal pain, obstruction, or intussusception. we present a case of pancreatic heterotopia mimicking an internal hernia on radiologic imaging. a -year-old female with a seven-month history of chronic abdominal pain had been treated for low back pain and recurrent urinary tract infections. she was found to have a computed tomography (ct) scan concerning for internal hernia and labs consistent with acidosis. at laparotomy, she was found to have not an internal hernia but an exophytic mass in the proximal jejunum. the mass was resected and a stapled side-to-side jejunojejunostomy was created. on pathologic review, the specimen was found to be pancreatic heterotopia. her postoperative course was complicated by an ileus, but she was discharged on postoperative day three. at her two-week follow-up she had minimal incisional pain, and at one-year follow-up she had resolution of her left upper quadrant abdominal pain. prior to this report, pancreatic heterotopia has never been described as presenting on ct scan as an internal hernia.
although uncommon, it should remain in the differential when evaluating a patient presenting with abdominal pain and radiologic evidence of obstruction or internal hernia. case report: a -year-old male patient, diagnosed with high blood pressure at years old, presented with tetraparesis and intense asthenia for six months. blood tests showed hypokalemia, hypernatremia, and suppressed renin activity. ultrasound of the urinary tract was normal. ct scan of the abdomen showed a hypodense nodule with regular margins in the left adrenal gland, measuring . . cm, with a density of hu in the non-contrast phase and heterogeneous uptake after injection of contrast. thus, the diagnosis of hyperaldosteronism secondary to the left adrenal nodule was confirmed, and surgical resection was indicated. the procedure was performed with the patient in the right lateral decubitus. two -mm and one -mm trocars were used on the left flank, as well as a -mm port for the camera in the lower right quadrant under direct vision. the pneumoperitoneum was created by the hasson open technique through a transumbilical incision. the procedure consisted of dissection, isolation and electrocautery of the left renal capsule and the left adrenal region with an ultrasonic device, together with the periadrenal vessels, adjacent lymph nodes, and periadrenal and adrenal fat tissue. the surgery was uneventful and the patient had no further complications, being discharged the next day. histopathology showed a completely excised adrenocortical adenoma. conclusions: the hybrid minimally invasive approach proved to be safe and effective for this procedure, and the known advantages of minilaparoscopy, such as less trauma, better visualization, better dexterity, better aesthetics, and reduced hospital stay, were observed. background: coccidioidomycosis is a fungal infection endemic to the southwestern united states, central america and south america. coccidioides is ubiquitous in many of these endemic regions, with near % seroconversion in some communities. two-thirds of these mycotic infections may be asymptomatic. the most common presentation of coccidioidomycosis consists of "flu-like" symptoms or pneumonia. less than five percent of symptomatic cases progress to disseminated coccidioidomycosis, which may involve any organ system. very rarely, infection may involve the peritoneum. we report a case of coccidioidomycosis with peritoneal involvement in an immunocompetent individual. case: a -year-old male presented to the emergency department with progressive abdominal pain. he had been seen and treated for pneumonia in the emergency department one week prior. the patient worked outdoors in arizona and was otherwise healthy, with a family history of malignancy and blood disorders. fever, leukocytosis and ascites on computed tomography scan prompted a diagnostic laparoscopy, which revealed peritoneal granulomas positive for coccidioides. the patient was treated as an outpatient with fluconazole. discussion: this is, to our knowledge, the th reported case of peritoneal coccidioidomycosis. the patient described in this case report was an otherwise healthy -year-old male; this is incongruent with many of the previously recorded cases, which involved disseminated disease in immunocompromised patients. the patient's family history of malignancy and blood disorders suggests a potential underlying genetic predisposition that could account for this abdominal presentation.
possible mutations include genes coding for the interleukin- β receptor and the signal transducer and activator of transcription, which have been implicated in increased coccidioidomycosis susceptibility. peritoneal infection presents a unique challenge in diagnosis. in these cases, coccidioidomycosis may not be suspected due to nonspecific symptoms and imaging, the infrequency of this extra-pulmonary manifestation, and clinical characteristics that mimic the presentation of tuberculosis and malignancy. abdominal infections have been misdiagnosed as appendicular abscesses, iliopsoas abscesses, adnexal abscesses and pancreatic masses. consequently, the diagnosis of peritoneal coccidioidomycosis is often made after laparoscopic exploration of the abdomen and histopathology, as it was in this case report. conclusions: coccidioidomycosis incidence is on the rise in endemic areas, and it often falls on the surgeon to make the diagnosis in extra-pulmonary cases. the peritoneal subset of coccidioidomycosis should be considered in endemic areas when a young, otherwise healthy patient presents with abdominal pain. failure to recognize the possibility of coccidioidomycosis may lead to unnecessary treatments and procedures. indocyanine green cholangiography to detect anomalous biliary anatomy. steven d schwaitzberg, md, gabrielle yee, ms; university at buffalo jacobs school of medicine. introduction: common bile duct injury is the most feared complication of cholecystectomy. imaging with indocyanine green (icg) is a safe and effective technique to detect biliary anatomy in open, laparoscopic and robotic surgery. several studies report detecting aberrant biliary anatomy with the use of icg in laparoscopic cholecystectomy with high success rates. by identifying the cystic duct-common hepatic duct confluence before dissecting calot's triangle, icg allows surgeons to perform "virtual" cholangiography at the start of procedures to identify either normal anatomy or possible anatomic variants. it is clear that icg use is an effective tool to achieve the critical view of safety. however, no reports have suggested icg cholangiography as the last operative step in cholecystectomy to identify hidden biliary anomalies and avoid postoperative bile leak complications. case report: we report a novel use of icg cholangiography in visualizing anomalous biliary anatomy prior to closing, thus avoiding potential bile duct leakage. in our case, icg cholangiography was used to fluoresce the common hepatic duct, common bile duct and cystic duct. the cystic duct was transected, and the gallbladder was removed using electrosurgery. at the completion of the gallbladder removal, the liver was elevated to inspect the clips on the cystic duct and artery. at this point, near-infrared imaging was reinitiated, and a small mm structure was noted to fluoresce next to the cystic artery. this structure was identified using white light and subsequently clipped. discussion: the use of icg in this context, after the completion of the cholecystectomy, facilitated the identification of a small hepatocystic or aberrant duct, which would likely have leaked bile sometime in the postoperative period. based on our experience, we recommend one additional routine near-infrared viewing to identify small structures or potential leaks at the completion of cholecystectomy. improved visualization of the extrahepatic biliary anatomy by icg has the potential to translate into improved clinical outcomes.
solitary fibrous tumors (sft) are uncommon fibroblastic mesenchymal neoplasms that display a wide range of histologic behaviors. these tumors, which are estimated to account for % of all soft tissue neoplasms, typically follow a benign clinical course. however, it is estimated that - % of sfts are malignant and demonstrate aggressive behavior, with local recurrence and metastasis up to several years after surgical resection. we report a case of sft arising from the stomach, which is an exceptionally rare finding and has been reported only six times in the literature. additionally, this tumor was associated with dedifferentiation into undifferentiated pleomorphic sarcoma. to our knowledge, there are no documented cases of a malignant sft arising from the stomach demonstrating dedifferentiation into an undifferentiated pleomorphic sarcoma. a -year-old male presented to the emergency department with vague complaints of right-sided flank pain. the patient had a history of nephrolithiasis and underwent a ct of the abdomen. this scan revealed a large heterogeneous mass in the left upper quadrant. the patient underwent endoscopic ultrasonography with fine needle aspiration of the mass, which stained strongly for cd . gastrointestinal stromal tumor (gist) was the favored diagnosis, as it is by far the most common mesenchymal neoplasm of the stomach, especially for a cd -positive spindle cell neoplasm. accordingly, the patient began treatment with imatinib; however, after four weeks of therapy, there was no significant radiologic regression. a second biopsy was performed and the specimen was sent for stat immunohistochemistry, which revealed diffuse strong nuclear positivity. a diagnosis of solitary fibrous tumor was made. surgical resection of the tumor, which measured . cm, was performed. the patient was to undergo surveillance imaging every to months post-operatively. a surveillance scan showed solitary metastatic disease in the left lateral segment of the liver. he underwent left lateral segmentectomy with an uneventful recovery. our case was complicated by a diagnostic dilemma with gist, highlighting the challenges of diagnosing and characterizing sfts. dedifferentiation, or the abrupt transition from a classic sft into a high-grade sarcoma, is a particularly concerning finding in our case, as it is associated with a worse prognosis than classic malignant sft. the stat marker by immunohistochemistry is very specific for sft and might have aided the diagnosis earlier. therefore, it is imperative to keep solitary fibrous tumor, albeit exceedingly rare, in the differential diagnosis of mesenchymal neoplasms of the stomach. appendiceal diverticulitis is an uncommon pathology that can clinically mimic acute appendicitis. some radiographic distinctions have been reported, but final pathologic examination of the surgical specimen is required to confirm the diagnosis. symptoms are often milder, which can lead to a delayed diagnosis and increases the risk of severe complications such as perforation. a -year-old female presented with a three-day history of right lower quadrant pain. she described the pain as constant and radiating to the left lower quadrant. associated symptoms included nausea, vomiting, and decreased appetite; she denied fevers or diarrhea. the patient had no significant past medical history, and her surgical history was significant for a nephrectomy for a living-donor kidney transplant to her mother.
on physical exam she was tender in the right lower quadrant, with rebound and a positive rovsing's sign. all laboratory results were unremarkable, and she was hemodynamically stable. a ct scan was performed and demonstrated a dilated, fluid-filled appendix with surrounding inflammatory change, without abscess or free intra-peritoneal air. she was subsequently admitted to the hospital, made npo, started on iv antibiotics, and taken to the operating room, where she underwent an uncomplicated laparoscopic appendectomy. post-operatively, her hospital course was unremarkable. pathology revealed acute suppurative appendicitis secondary to an acutely inflamed appendiceal diverticulum, consistent with a final diagnosis of acute appendiceal diverticulitis. appendiceal diverticulitis should be considered in patients presenting with acute right lower quadrant abdominal pain. although some consider appendiceal diverticulitis a variant of acute appendicitis, it is important to distinguish between the two diagnoses. appendiceal diverticulitis has a higher rate of complications, including perforation, and is associated with a higher risk of neoplasm, particularly mucinous adenomas and carcinoid tumors. appendectomy should be performed in all cases in order to obtain appropriate pathological examination and rule out coexistent neoplasms. laparoscopic appendectomy is a safe and appropriate approach to treatment of appendiceal diverticulitis. upper gi endoscopy and biopsy showed a gastrointestinal stromal tumor (gist) in the stomach. a videolaparoscopic partial gastrectomy was then proposed. the surgery was performed with the patient in the right lateral decubitus. two -mm minilaparoscopic trocars, a -mm conventional trocar for an ultrasonic instrument, and a -mm trocar in the umbilical region for the camera were used. pneumoperitoneum was created using the hasson open technique under direct vision. intraoperative endoscopy was performed to identify the tumor easily. initially, the greater omentum was released with the ultrasonic device, and the tumor was then resected from the body of the stomach. the gastric wall was manually sutured with a - vicryl, and the tumor was removed in an endobag through the -mm incision at the umbilicus. the surgery was uneventful, with a total time of minutes. the patient had no further complications and was discharged two days after the procedure in good clinical condition. histopathology showed a gist with free margins. conclusion: the minimally invasive approach proved to be safe and effective for this procedure. the known advantages of video-surgery, such as less trauma, better visualization, increased dexterity, better esthetics, and a shorter postoperative recovery time, were confirmed. the upper gi endoscopy contributed to improving the safety and efficacy of the procedure, allowing a more precise resection of the gist as well as intragastric review of the suture line at the end of the surgery. background: portal vein thrombosis (pvt) is a rare post-operative complication which has been associated with a wide range of precipitating factors. the most commonly described associated conditions include cirrhosis, bacteremia, myeloproliferative disorders and hypercoagulable states. pvt most frequently occurs as a complication after hepatobiliary surgery, and although possible, very few cases have been documented after laparoscopic surgery of the gastrointestinal tract.
herein, we describe a case of pvt in a patient who underwent elective laparoscopic right hemicolectomy and was treated successfully at our center. case: a -year-old female with a past medical history of depression, migraines and endometriosis underwent an uncomplicated laparoscopic right hemicolectomy at our facility for recurrent right-sided diverticulitis. she had suffered previous episodes of diverticulitis and desired definitive surgical treatment. her hospital course was uneventful and she was discharged home on postoperative day . on post-operative day , she presented to the emergency department complaining of severe abdominal pain, back pain and nausea. computed tomography of the abdomen and pelvis revealed pvt. she was initiated on therapeutic anticoagulation with heparin. hematology was consulted for a hypercoagulable workup. further investigation revealed a family history of a brother who had had a lower-extremity deep venous thrombosis with a negative hypercoagulable workup. she had also previously been taking leuprolide and conjugated estrogen and medroxyprogesterone for her endometriosis. she was ultimately found to have a heterozygous prothrombin g a gene mutation. her anticoagulation was bridged to coumadin and she was discharged home. she has recovered as expected, without any further complications. discussion: although more common in patients with cirrhosis after hepatobiliary surgery, pvt is a rare complication that can occur after virtually all types of laparoscopic surgery, including elective right hemicolectomy. patients may be completely asymptomatic or present with a broad spectrum of symptoms, including severe abdominal pain, fever, diarrhea, or gastrointestinal bleeding. physicians should be aware of this possible complication, since early diagnosis and treatment are imperative to prevent life-threatening complications such as intestinal ischemia and perforation. a detailed medical and family history is essential, and all patients with post-operative pvt should undergo a complete hypercoagulability workup. this is a case of a -year-old male with a history of redo hiatal hernia repair years prior who presented with two episodes of upper gastrointestinal bleeding with no identifiable source on either endoscopy or angiography. during his second admission, initial hemoglobin was . g/dl, and endoscopy showed a massive amount of blood in the stomach. continuous oozing was seen originating in the fundus area, but no clear source could be identified. epinephrine was empirically injected into the area but failed to achieve hemostasis. angiography was also negative. repeat endoscopy showed no active bleeding; however, distention of the wrap into the gastric cavity was observed. the patient re-bled and was taken to the operating room emergently after a failed attempt at endoscopic control. the patient underwent proximal gastrectomy after intra-operative gastrostomy and exploration were unable to identify a bleeding source. the patient was left with an open abdomen and in discontinuity while resuscitation was performed in the surgical intensive care unit. he subsequently underwent a roux-en-y reconstruction and gastrostomy tube placement via the distal gastric remnant. an upper gastrointestinal series demonstrated the absence of a leak, and the patient was started on a liquid diet supplemented with tube feeding. his recovery was uneventful and he was discharged home in stable condition.
pathology revealed gastric ischemia at the base of the wrap, which had made it impossible to visualize through endoscopy. on review of the literature, gastric ulcers and ischemia have been previously described, with an incidence of up to % and an onset of presentation ranging from the early post-operative period up to years. most were located on the lesser curvature. the exact pathophysiology of their occurrence is not completely understood. hypothesized factors include technical aspects of the fundoplication causing inappropriate tension, vessel disruption and ischemia, and injury to the vagus nerve affecting gastric emptying, which was thought to increase gastrin secretion. treatment includes medical management with proton pump inhibitors; however, a few cases describe antrectomy with inclusion of the bleeding ulcer. our case represents failed medical and endoscopic management. we recommend takedown of the fundoplication in hemodynamically stable patients to completely evaluate the gastric mucosa and to identify and address the source of bleeding. otherwise, emergent cases will require staged gastrectomy including the wrap, followed by roux-en-y reconstruction. acalculous cholecystitis associated with a large periampullary duodenal diverticulum: a case report. peng yu, md, phd, austin iovoli, aaron hoffman, md; department of surgery, suny buffalo, kaleida health system, buffalo, ny. introduction: a periampullary diverticulum (pad) can compress the common bile duct (cbd) and consequently cause obstructive jaundice and cholangitis, as a few publications have documented. here we report the first acalculous cholecystitis associated with a pad-related cbd obstruction. case: the patient was a -year-old female with a past surgical history of laparoscopic sleeve gastrectomy who presented to the emergency room with upper abdominal pain and vomiting for one day, associated with leukocytosis and a left shift. serum total bilirubin rose to . mg/dl on hospital day (hd) . ct, ultrasound, and mrcp images confirmed a distended, wall-thickened gallbladder with pericholecystic fluid and a significantly dilated cbd of . cm diameter (fig. ), without cholelithiasis or choledocholithiasis. ercp could not be completed due to the post-gastrectomy anatomy and failure of cannulation of the ampulla, which was embedded in a large food-impacted pad (fig. ). on hd , the patient underwent a diagnostic laparoscopy and an intra-operative cholangiogram, which confirmed a mildly inflamed edematous gallbladder and a . . cm large pad with a narrow neck that was distorting the distal cbd (fig. ). since the patient's bilirubin level had been improving, we decided to perform only a laparoscopic cholecystectomy. intraoperatively, an anatomic variation of the cystic artery encircling the cystic duct (fig. ) was also identified. postoperatively the patient recovered well during the remaining inpatient course and at the -week outpatient follow-up. the pathology of the excised gallbladder confirmed cholecystitis without cholelithiasis. discussion: lemmel's syndrome is defined, in the absence of cholelithiasis or other detectable obstacle, by obstructive jaundice due to a pad. since lemmel described this duodenal-diverticulum-related obstructive jaundice in , there have been very few cases reported or investigated. to date there is no report describing the association of acalculous cholecystitis with lemmel's syndrome. this patient's mild acalculous cholecystitis was probably attributable to the biliary obstruction and consequent gallbladder hydrops.
her symptoms could be from either acalculous cholecystitis or intermittently worsening biliary obstruction. in this case, the contribution of the anatomic variation of the cystic artery is unclear. should this patient's symptoms recur in the future, the treatment options will be sphincterotomy, removal of the impacted food in the pad, or diverticulectomy. accidental fish bone ingestion masquerading as acute abdomen. aim: to report a case of fish bone ingestion masquerading as acute abdomen. case report: a -year-old female patient presented with complaints of severe abdominal pain of days' duration. there was no history of associated nausea or vomiting, fever, or altered bowel habits. on examination, the patient had tenderness and guarding localized to the right iliac fossa. blood investigations revealed raised inflammatory markers. whole-abdomen ultrasound and contrast-enhanced computed tomography (cect) were normal. the patient was managed conservatively, but in view of the persistence of symptoms, a triple-puncture diagnostic laparoscopy was performed on day of admission. omental inflammation with a "soapy" appendix was found, and appendicectomy was performed. on further assessment a foreign body was also found in the ileum, which was removed and identified as a fish bone. the patient had a satisfactory post-operative recovery and was discharged in stable condition. discussion: acute abdomen due to fish bone ingestion is not a very common occurrence. unfortunately, the history is often non-specific, and these patients can be misdiagnosed with acute appendicitis and other pathologies. ct scans can be useful to aid diagnosis; however, they are not fully sensitive in detecting complications arising from fish bone ingestion. conclusion: any patient with an acute abdomen, a non-specific history and normal imaging may still benefit from a diagnostic laparoscopy. discussion: this patient presented with a bowel obstruction, partial cecal necrosis and neuroendocrine carcinoma. the literature suggests that cecal necrosis in the majority of cases is caused by a vascular event, occlusive or non-occlusive. the patient had atherosclerosis and an underlying malignancy, which can be associated with prothrombotic states and contribute to an overall risk of thrombosis. the cecum can sustain ischemic injury in the presence of severe or prolonged hypotension, the most frequent causes being decompensated heart failure, hemorrhage, arrhythmia or severe dehydration, only of which was present in this patient. the midgut neuroendocrine tumor is generally located in the terminal ileum as a fibrotic submucosal tumor of cm or less. mesenteric metastases are often larger than the primary tumor and associated with fibrosis, which may entrap loops of the small intestine and cause bowel obstruction. this may eventually encase the mesenteric vessels, with resulting venous stasis and ischemia in segments of the intestine, as seen in this patient. conclusion: cecal necrosis is a rare entity, but its incidence increases with age. isolated cecal necrosis may manifest as a ct-negative appendicitis or a small bowel obstruction in the absence of past surgical history. laparoscopic transection of the falciform and triangular ligament successfully released the entrapped loop, with successful reperfusion by the end of the surgery. in the absence of any prothrombotic comorbidity, the patients were discharged asymptomatic without further anticoagulation.
to date only a few similar cases have been reported, most of them in neonates and pediatric patients. to our knowledge, these are the first such cases reported in the elderly. in these patients the laparoscopic approach was both diagnostic and therapeutic, with transection of the ligament. roberto javier rueda esteban, andres mauricio garcia sierra, felipe perdomo; universidad de los andes, fundacion santa fe. this is a rare case of spontaneous splenic rupture associated with chronic myeloid leukemia, an uncommon complication. the case report and a review of the relevant literature on symptomatology and clinical management are presented. emphasis is placed on the importance of including splenic rupture in the differential diagnosis of acute abdominal pain, especially in a patient with a neoplastic hematopathology, since early treatment improves patient survival and prognosis. esophagectomy is a complex operation associated with serious immediate complications and long-term chronic complications. gastric ulcers are a common chronic complication after esophagectomy with gastric conduit reconstruction; these are rarely complicated by significant bleeding or perforation. we report a case of delayed diagnosis of a fistula forming between a gastric conduit and the right bronchial tree years after esophagectomy. this was successfully treated using multiple therapeutic approaches, including endoscopic localization and resection through a right thoracotomy. to the best of our knowledge, our patient is the only survivor of a chronic gastric conduit-bronchial fistula. a -year-old male with type diabetes mellitus, dyslipidemia, asthma and a smoking history presented years after an ivor lewis esophagectomy for a gastrointestinal stromal tumor (gist) with a chronic cough starting years after his esophagectomy, followed by multiple episodes of hemoptysis over the next years. the patient was known to have ulcers in his gastric conduit, with a massive bleed year after his esophagectomy. repeat endoscopy revealed two large chronic ulcers that had increased in size, based on comparison of pictures from endoscopies to years after his esophagectomy, despite maximal medical management. the patient presented to numerous specialists at tertiary care centers in canada and the united states. ultimately, in a clinic, the patient was observed to cough immediately after the ingestion of water but not solids, leading to a provisional diagnosis of a gastrobronchial fistula. a barium swallow failed to show a fistula (fig. ). however, at endoscopy, instillation of saline directed at an ulcer immediately induced a cough, which was not reproduced when the saline was directed away from the ulcer. the fistula was ultimately demonstrated by placing a wire through the ulcer and visualizing it bronchoscopically in the right superior segmental bronchus. in an effort to pursue a minimally invasive approach, two attempts were made to close the fistula with over-the-scope clips (otsc). unfortunately, the patient's symptoms persisted. a wire was placed through the fistula and delivered through the patient's mouth and endotracheal tube. a right thoracotomy allowed access to the conduit, which was opened, and the fistula was localized using the wire. the fistula was resected and the bronchus closed. at twelve-month follow-up the patient had no recurrent cough or hemoptysis and was tolerating a full diet.
introduction: roux-en-y gastric bypass (rygb) is one of the earliest and most studied weight reduction procedures and remains the gold standard for comparison of clinical outcomes in bariatric surgery. although rygb is an effective procedure for weight loss, it has become less popular over the last several years because of increased morbidity compared to the more utilized vertical sleeve gastrectomy (vsg). early complications of rygb include bleeding, perforation, or leakage. late complications include internal hernias, small bowel obstruction, anastomotic stenosis, marginal ulcers, and gastrogastric fistulas. case report: a -year-old female with a past medical history of morbid obesity, type diabetes mellitus, hypertension, gerd, peptic ulcer disease, cholelithiasis, liver dysfunction with ascites, and asthma, and a past surgical history of rygb ( years ago), presented to our institution with acute-on-chronic abdominal pain associated with nausea, vomiting, dysphagia, inability to eat and maintain hydration, and an additional weight loss of about lbs. over the last year. in addition, the patient was a chronic opioid and nsaid user, had an extensive smoking history, and had not followed up with her surgeon for years. at the time of presentation, the patient weighed lbs (bmi: . ), had normal vital signs, and appeared cachectic. an upper gastrointestinal study followed by an upper endoscopic examination demonstrated complete obliteration of the gastrojejunal anastomosis and revealed a -cm-long gastrogastric fistula extending from the distal end of the gastric pouch to the lesser curvature of the excluded stomach. after conservative measures were initiated to hydrate and metabolically stabilize the patient, the decision was made to proceed with diagnostic laparoscopy and surgical placement of a gastrostomy tube in the gastric remnant. the patient was discharged after tolerating a full liquid diet and gastrostomy tube feedings, with a plan for future revision of the gastrojejunostomy once optimal nutritional status is achieved. conclusions: late complications of rygb occur at a rate of - %. major risk factors for anastomotic complications include non-compliance, smoking, and opiate and nsaid abuse. though abdominal pain, anastomotic stenosis, marginal ulcers, and fistulas are relatively common late complications of rygb, complete obliteration of the gastrojejunal anastomosis has not been well described in the literature. this case demonstrates the importance of long-term follow-up after rygb for early diagnosis of late complications and brings attention to this rare but possible sequela that can arise in patients after rygb. contrast radiograms and upper endoscopic photographs will be presented. introduction: retroperitoneal sarcoma represents approximately - % of all sarcomas and less than . % of all neoplasms. radiotherapy and chemotherapy still do not represent valid therapeutic alternatives; therefore, complete surgical resection is the only potentially curative treatment modality for retroperitoneal sarcomas. the completeness of resection of a retroperitoneal sarcoma, together with tumor grade, remains the most important predictor of local recurrence and disease-specific survival. in a patient with a large fibrosarcoma and associated hypoglycemia, assays for insulin-like activity (ila) were found to be high in the extract of tumor tissue, while insulin was not detected in significant concentration in either the same extract or his serum.
laparoscopic surgery represents a minimally invasive alternative to traditional surgery for radical resection of such tumors; only a few cases have been reported in the literature. introduction: roux-en-y gastric bypass (rygb) is a frequently performed bariatric procedure, of which internal hernia (ih) is a known complication. we discuss a rare finding of occult gastric remnant perforation as a result of an obstructed ih in a post-bypass patient. methods: we present a case report from a single bariatric surgeon's experience at a tertiary care hospital. a literature review of pubmed confirms the unique presentation and operative findings in our patient, as few similar cases have been published. a -year-old male s/p rygb years ago presented to the ed with right upper quadrant pain, nausea, vomiting, and a leukocytosis of , . bmi was . ; weight was lbs. workup included an abdominal ultrasound showing gallbladder distention without signs of cholecystitis. liver function tests were normal. further imaging included a ct scan, remarkable for a paraesophageal hernia (peh) containing the gastric pouch, and an elevated left hemidiaphragm. the scan showed no evidence of ih or bowel obstruction. an upper gi series was additionally obtained, which was also negative for small bowel obstruction. because the etiology of this patient's symptoms and the source of the leukocytosis were unclear, diagnostic laparoscopy was planned. results: intraoperative findings were significant for an ih containing dilated small bowel, with twisted and incarcerated omentum through the jejunojejunostomy site, as well as a distended gallbladder without acute inflammation. the ih was reduced and closed without bowel resection. cholecystectomy was completed. subsequent inspection of the diaphragmatic hiatus revealed uncomplicated herniation of the gastric pouch. while attempting to dissect the left diaphragmatic crus, a large pocket of purulent material was encountered below the left diaphragm in the region of the remnant stomach fundus. a methylene blue test and intraoperative endoscopy did not demonstrate any connection to the gastric pouch. the purulence was attributed to an occult remnant stomach perforation related to the distal obstructed ih. a drain was left in the abscess, and the peh was not surgically addressed. the patient was discharged on postoperative day . he has not suffered any further complications or recurrent complaints. conclusion: gastric perforation following rygb is an uncommon complication resulting from ih. this diagnosis was missed by preoperative imaging and was only found after thorough laparoscopic investigation. surgeons should maintain a high clinical suspicion of ih in post-rygb patients with otherwise unexplained abdominal symptoms, fever, and leukocytosis, even in the absence of confirmatory diagnostic testing. the threshold for operative exploration in this clinical setting should remain low. alejandro garza, md, robert alleyn, md, jose almeda, md, ricardo martinez, md; utrgv. obesity is an epidemic condition worldwide, carrying significant morbidity and mortality. surgical therapy is the only proven effective method to sustain weight loss. among the different surgical procedures, gastric bypass is the most effective. during this surgery, most of the stomach is excluded from the upper gastrointestinal tract, which makes future evaluation of it very challenging. this could potentially lead to delay in the diagnosis of any pathology in the bypassed stomach.
gastric cancer is the th most common cause of cancer and a cause of cancer death in the united states. we present a case report of a patient who underwent a roux-en-y gastric bypass and went on to develop adenocarcinoma in the gastric remnant year after her surgery. she underwent an exploratory laparotomy, extended antrectomy, subtotal gastrectomy including the gastro-colic ligament, and incidental appendectomy. pathology showed grade undifferentiated adenocarcinoma that penetrated the visceral peritoneum, with clear margins. there was angiolymphatic and perineural invasion, along with metastatic carcinoma in out of lymph nodes. introduction: polyarteritis nodosa (pan) is a systemic transmural inflammatory vasculitis that affects medium-sized arteries. inflammation of the vessel wall and intimal proliferation create luminal narrowing, which can lead to stenosis and insufficiency. the same inflammatory process causes disruption of the elastic lamina, leading to aneurysm formation and possible spontaneous rupture with life-threatening bleeding. multifocal segments of stenosis and aneurysm formation are characteristically identified as a "rosary sign" or "beads on a string". unlike other vasculitides, pan does not involve small arteries or veins and is not associated with anti-neutrophil cytoplasmic antibodies. we present the case of a -year-old female with a significant intra-abdominal bleed that was explored and repaired primarily. she was subsequently found, on angiogram and postmortem pathology, to have findings consistent with pan. case presentation: a -year-old female presented to the emergency department with abdominal pain followed by hemorrhagic shock and was found to have a ruptured left hepatic artery aneurysm during exploratory laparotomy. this aneurysm was suture ligated with a successful outcome. a mesenteric arteriogram was performed the following day and demonstrated lesions consistent with pan, including aneurysms of the left gastric branches and the right and left hepatic arteries, and a beaded appearance of the iliac artery. however, days after hospital discharge she developed a massive pulmonary embolism from which she did not recover. postmortem examination confirmed rupture of the left hepatic artery aneurysm, in addition to gross anatomical and histological findings consistent with pan. discussion: polyarteritis nodosa is a systemic inflammatory vasculitis that causes intimal proliferation and elastic lamina disruption. this multifocal disruption of the vessel results in aneurysm formation alternating with stenosis, creating a characteristic "rosary sign" on imaging. spontaneous rupture of these aneurysms is rare and almost always fatal due to life-threatening hemorrhage. with acutely ruptured aneurysms, prompt diagnosis, aggressive resuscitation, and hemostasis through transarterial embolization or surgery are paramount for patient survival. while acute rupture of an aneurysm as the result of pan is exceedingly rare, it must be considered as a differential diagnosis in the setting of acute abdominal pain and hemodynamic instability. in a patient with a known medical history of pan and aneurysm formation, routine monitoring for disease progression should be performed. introduction: , surgeries are done annually in the us for small bowel obstruction, which is most commonly caused by intraabdominal adhesions, malignancy, and hernias. . to . % of small bowel obstructions are due to paraduodenal hernias.
paraduodenal hernias carry a % lifetime risk of incarceration, with a mortality of to %. case report: the patient is a -year-old male who presented with severe upper abdominal pain for one day. he was passing flatus and had had a bowel movement the previous day. on examination, the patient was tender over the upper abdomen. a computed tomography (ct) scan with iv contrast showed a mesenteric swirl sign. the decision was made to perform diagnostic laparoscopy with possible small bowel resection. intraoperatively, a mesenteric defect was noted posterior and to the right of the duodenum, through which bowel was herniating. the herniated bowel and its mesentery were edematous. the defect was sutured closed, taking seromuscular and mesenteric bites through the stomach, jejunum, and mesentery. the patient had an uneventful recovery postoperatively and was discharged on postoperative day . he returned on postoperative day with periumbilical pain, which resolved with conservative management. he was followed up weeks postoperatively and was doing well. discussion: paraduodenal hernias are the most common internal hernias. they are seen more often in males. they are caused by failure of the counterclockwise rotation of the prearterial segment of the embryonic midgut in weeks to of embryonic development. paraduodenal hernias usually present with chronic intermittent abdominal pain, weight loss, nausea, and vomiting. they may present acutely with symptoms of bowel obstruction. peritoneal signs are often not appreciated due to the retroperitoneal position of the hernia. ct scan of the abdomen often shows clustering of bowel loops, which cannot be displaced on repositioning the patient. if imaging is equivocal, diagnostic laparoscopy may be undertaken. surgical correction consists of reducing the bowel, resecting nonviable segments, and either closing the defect or opening the sac laterally into the general peritoneal cavity. in summary, paraduodenal hernias are a rare cause of bowel obstruction and as such present a challenge in diagnosis and early intervention. diverticulosis of the appendix is a rare disease found in . - . % of appendectomies, first described in . the clinical presentation may be acute inflammation, with or without appendicitis, or it may be an incidental finding in an uninflamed appendix. the congenital type is rare and has all the bowel wall layers; the condition most frequently presents as a pseudodiverticulum, which lacks the muscularis layer. the pathogenesis of appendiceal diverticula has not been completely elucidated. its symptoms are similar to, and often misdiagnosed as, those of early acute or chronic appendicitis. while appendectomy is curative for both entities, it is important to distinguish diverticulum of the appendix from appendicitis, as it is four times more likely to perforate and may be a sign of an underlying neoplasm. we report a very rare giant pseudodiverticulum of the appendix in a -year-old male presenting with chronic abdominal discomfort for months. abdominal x-ray showed an abnormal gas pattern. physical exam was significant for a soft rubbery mass in the periumbilical region. blood work revealed a slight elevation of c-reactive protein. preoperative ct and mri showed a -centimeter thin-walled cavity located at the tip of the appendix, with periappendiceal fat stranding. given concern for impending obstructive symptoms and the chronic abdominal pain, we decided to perform the resection laparoscopically. the soft mass arose from the tip of the appendix.
there were dense adhesions between the appendix, mesentery, and sigmoid colon. after adhesiolysis, laparoscopic appendectomy was performed with an endo-gia stapler. the specimen was extracted through a small incision without spillage. the hospital course was uneventful and the patient was discharged on post-operative day . the pathological findings were consistent with a pseudodiverticulum of the appendix, which lacked a muscularis layer; the inner wall of the cavity was lined with a scattered cuboidal epithelial layer in continuity with the appendiceal mucosa. here we report a successful laparoscopic resection of an extremely rare giant chronic pseudodiverticulum of the appendix. yvette farran, ms, jorge a miranda, ms, benjamin clapp, md, elizabeth de la rosa, md; texas tech university health sciences center. introduction: sigmoid colon intussusception is rarely encountered, and given its vague symptomatology, diagnosis and management can be difficult. the treatment of intussusception in adults is different from that in children. lipomas are the causative etiology of intussusception in up to . % of cases, and up to - % of patients require surgical resection for treatment. methods: this is a case report of a -year-old male who presented with two weeks of worsening abdominal pain and distention. physical exam was only pertinent for abdominal pain on light palpation, guarding, and moderate distress. ct scan of the abdomen and pelvis demonstrated a lipomatous mass causing complete obstruction of the sigmoid colon with intussusception. this was managed with laparoscopic sigmoidectomy. the patient had an uncomplicated post-operative period and was discharged on post-operative day . pathology of the lipomatous mass confirmed a benign lipoma. discussion: intussusception is rarely encountered in clinical practice in adults and constitutes % of all cases. lipoma-induced sigmoid intussusception with complete obstruction is rare. symptoms can be non-specific, as in this case. this case report highlights the importance of timely diagnosis and treatment of intussusception in adult patients. ct scan is the gold standard for diagnosis and often shows a "target sign". other imaging techniques like ultrasound have shown adequate results but remain less effective than ct scan. the treatment in adults is not reduction by enema, as in pediatrics, but rather resection of the lead point. this can be appropriately done with a laparoscopic technique in most cases. conclusion: colonic intussusception is rare. surgery is the only treatment for intussusception in adults, since the lead point needs to be removed, and can be attempted safely with a laparoscopic approach. joshua smith, md, kern brittany, md, amie hop, md, amy banks-venegoni, md; spectrum health. case report: a -year-old female with no significant past medical history presented with a -year history of nocturnal cough that had worsened over the past months, with associated regurgitation. she underwent esophagogastroduodenoscopy (egd) that showed a tortuous esophagus and a tight lower esophageal sphincter that required dilation. she received an upper gastrointestinal (ugi) contrast study that showed a dilated, tortuous esophagus with "bird's beak" tapering, consistent with achalasia, as well as a large epiphrenic diverticulum measuring cm. esophageal manometry confirmed "pan-esophageal pressurization" consistent with type ii achalasia.
given her symptoms in the presence of these findings, she elected to proceed with surgery. she underwent laparoscopic trans-hiatal epiphrenic diverticulectomy, heller myotomy and dor fundoplication. extensive dissection allowed for approximately cm of retraction down from the chest, and we were able to divide the diverticulum with a single blue load of a -mm linear cutting stapler. post-operatively, she tolerated the procedure well, with immediate improvement in her symptoms. her ugi on post-operative day showed no evidence of leak; she tolerated a soft diet and was discharged home. she was seen at -week and -year follow-up appointments with complete resolution of symptoms. discussion: epiphrenic diverticula in the presence of achalasia have an occurrence rate of %. large diverticula (> cm) are even more rare, with only a handful of case reports in the literature. historically, thoracotomy or, more recently, thoracoscopic approaches have been required for resection. however, thoracic approaches are associated with a % increase in morbidity, namely due to staple line leak and the resulting pulmonary complications. on our review of the literature, only a single case report demonstrates successful trans-hiatal laparoscopic resection of a diverticulum of this size without post-operative complications. the shortest documented postoperative hospital stay for similar cases is days, while the average is - days, or longer for those with complications. our patient was able to go home on post-operative day after a normal ugi and was tolerating a soft diet. not only does this case show that a large epiphrenic diverticulum can be successfully resected via the trans-abdominal laparoscopic approach; it also makes the argument that patients undergoing any minimally invasive epiphrenic diverticulectomy and myotomy, with or without fundoplication, may be successfully managed with early post-operative contrast studies and dietary advancement, thus decreasing their length of hospitalization and overall cost of treatment. kazuma sato, shunji kinuta, koichi takiguchi, naoyuki hanari, naoki koshiishi; takeda general hospital. background: situs inversus totalis (sit) is a rare congenital condition in which the abdominal and thoracic organs are located opposite to their normal positions. few cases of laparoscopic surgery for gastric cancer with sit have been reported. we report a case of laparoscopic distal gastrectomy with d lymph node dissection performed for gastric cancer in a patient with sit. case description: an -year-old woman was admitted to our hospital for treatment of gastric cancer that was diagnosed by esophagogastroduodenoscopy (egd) at a local clinic after she experienced anemia and nausea. egd identified an irregularly shaped gastric ulcer located on the anterior side of the lesser curvature of the antrum. a biopsy revealed a moderately differentiated adenocarcinoma. she was then diagnosed with sit by chest radiography and abdominal computed tomography (ct). the abdominal ct showed that all organs were inversely positioned and that the wall of the antrum had thickened; it also showed lymph nodes along the lesser curvature of the stomach, without distant metastasis or an abnormal course of vascularity. the patient was clinically diagnosed with t n m stage iiia gastric cancer according to the japanese classification of gastric carcinoma.
a laparoscopic distal gastrectomy with d lymph node dissection, in accordance with the japanese gastric cancer treatment guidelines, as well as a roux-en-y anastomosis due to an esophageal hiatal hernia, were performed. the surgery was safely and successfully performed, although it required more time than usual because the inverted anatomic structures were repeatedly examined during the surgery. the postoperative course was favorable, and the patient was discharged on postoperative day without any complications. the final stage of this case was pt bn m stage ia. currently, the patient is doing well without recurrent gastric cancer. conclusion: gastric cancer with sit is an extremely rare occurrence. we experienced a case of laparoscopic distal gastrectomy with d lymph node dissection performed for gastric cancer in a patient with sit. we simulated the operation for sit by viewing left-right-reversed ordinary surgical videos. abdominal ct angiography with three-dimensional reconstruction helped reveal anatomic variations and confirmed the structures and locations of the vessels before surgery. the operation could be performed safely following the standardized surgical technique by reversing the surgeon's standing position and the trocar positions. sternum or chest wall resection is performed for a variety of conditions, such as primary and secondary tumors of the chest wall or the sternum. sternal reconstruction has been a complex problem in the past due to intraoperative technical difficulties, surgical complications, and respiratory failure caused by chest wall instability and paradoxical respiratory movements. advances in the fields of surgery and anesthesia have resulted in more aggressive resections. nowadays neither the size nor the position of the chest wall defect limits surgical management, because resection and reconstruction are performed in a single operation that provides immediate chest wall stability. chest wall resection involves resection of the ribs, sternum, costal cartilages and the accompanying soft tissues, and the reconstruction strategy depends on the site and extent of the resected chest wall defect. here i present the youngest case reported to date, a -year-old girl with rhabdomyosarcoma involving the sternum, together with the management challenges and the reconstruction options. introduction: neuroendocrine malignancies constitute . % of all cancers. the gastrointestinal tract is the commonest site, followed by the lung. the last decade has seen a steady increase in their incidence. this is a case series of twenty-five such tumours and their clinicopathological characteristics. materials and methods: twenty-five patients with neuroendocrine tumours of the gastrointestinal tract were studied with reference to their demographic and clinicopathological characteristics. apart from routine pathological examination, these tumours were also checked for e-cadherin expression as an independent marker of aggressive disease. results: the age of our patients ranged from to years. we had female and male patients, contradicting the female preponderance reported in the literature. the vast majority of the tumours we encountered were from the stomach and duodenum, with and patients, respectively. two tumours were at the gastroduodenal junction; two each were from the appendix, small intestine and pancreas; and one each from the rectum and gall bladder.
this is in contrast to the literature, which shows that neuroendocrine tumours of the git most commonly arise from the appendix and small bowel, followed by the rectum, stomach and duodenum. two of these tumours were functional. the diagnosis was confirmed by immunohistochemical staining for chromogranin a and synaptophysin. grading was done using who criteria, which take into account the mitotic count, ki-67 index and necrosis. of our cases were grade i. further, immunohistochemistry for e-cadherin showed that absence of expression correlated with more aggressive clinical behavior. of the twenty-five patients, were operable at presentation; standard resections depending on the organ of origin were performed, with adjuvant therapies given as required. the remaining patients could only be given palliative care. the functional tumours were treated with radiolabelled somatostatin analogues following uptake studies. conclusion: as neuroendocrine tumours are relatively rare, information about them is not as abundant as with other malignancies. absence of e-cadherin expression is associated with more aggressive disease. more studies are required that document the pathological characteristics and clinical behavior in order to offer well-rounded treatment protocols that treat not only the primary but also the generalized effects of the secretions these tumours produce. targeted chemotherapy is gaining prominence, but more specific drugs directed at the plethora of receptors these tumours express could potentially revolutionize treatment. unfortunately, there are no publications from denmark. we would like to present the first reported case, to our knowledge, of double gallbladder in denmark. double gallbladder is a rare anomaly with a prevalence of : in autopsy studies, described first by boyden in ( ). there are several classifications of double gallbladder, based on the relation between the gallbladder, cystic duct and common bile duct ( , ). non-specific symptoms and inadequate imaging are possible causes of the lack of awareness of the condition. removal of all gallbladders, preferably laparoscopically, with special attention to the biliary anatomy, is recommended ( ). method: case report with review of the literature. a -year-old female patient of polish origin was hospitalized due to right upper quadrant pain. on admission, clinical manifestations and paraclinical abnormalities of pancreatitis were present. ultrasound scanning of the abdomen showed gallstones, ultrasonographic signs of acute cholecystitis and normal intra- and extrahepatic bile ducts. because of elevated liver enzymes, mrcp was performed and showed a double gallbladder, a double cystic duct and signs of pancreas anulare. scheduled ercp confirmed stones in the cbd and a double gallbladder with a double cystic duct, h-type according to the harlaftis classification ( ). because of a minor retroperitoneal perforation, a second ercp was needed to remove all stones. the patient was then scheduled for laparoscopic cholecystectomy with perioperative cholangiography. conclusion: anatomical variations of the gallbladder such as double gallbladder are rare and often remain unnoticed. they are most often identified because of clinical manifestations, on diverse imaging studies, or during surgery or autopsy. as most of them are not expected, they can contribute to complications during surgery. careful preoperative imaging is very important to prevent accidental bile duct injury. looking at the number of case reports, double gallbladder seems to be slightly more common than expected.
the interesting question is whether a gallbladder discovered during an unrelated radiological investigation in a patient who previously underwent cholecystectomy can represent an undetected case of double gallbladder. we would like to present a review of the literature as well as images from mrcp, ercp and laparoscopy. michael jaroncyzk, md, courtney e collins, md, ms, vladimir p daoud, md, ms, ibrahim daoud, md; st. francis hospital; hartford ct. introduction: several decades ago, surgical training was saturated with procedures to treat peptic ulcer disease. since the introduction of histamine- blockers and proton pump inhibitors, these procedures have dwindled significantly. however, there are still instances where patients require surgical intervention for peptic ulcer disease. perforation is one of the indications for surgery. the surgical options to treat a perforated peptic ulcer are numerous. one of the most common options is a graham patch. we present a case of a patient with a perforated ulcer who did not have omentum available for the repair. methods and procedures: recently, a -year-old female with a past history of an open total abdominal hysterectomy and bilateral salpingo-oophorectomy presented as an outpatient with chronic lower abdominal pain. she underwent a work-up and imaging that did not reveal any pathology. at diagnostic laparoscopy, she had diffuse lower abdominal adhesions, which were lysed. she was discharged on the same day, but presented to the emergency department two days later with severe abdominal pain and fevers. the work-up revealed tachycardia, diffuse abdominal tenderness with peritoneal signs, leukocytosis and a large amount of free air on imaging. she was emergently brought to the operating room for a diagnostic laparoscopy. during laparoscopic exploration, the lower abdominal cavity appeared as expected after a recent lysis of adhesions. attention was turned to the upper abdominal cavity to find the pathology. bile-stained free fluid and peri-gastric exudates were identified, but no perforation was visualized. intra-operative endoscopy revealed the site of perforation in the antrum on the lesser curvature. a biopsy was performed and the decision was made to perform a graham patch. however, the omentum was already densely involved with the lower abdominal cavity from the enterolysis. due to the close proximity of the falciform ligament, it was mobilized laparoscopically and the pedicle was used as a graham patch. the patient recovered without any additional issues. the biopsy was reported as a chronic gastric ulcer. conclusion: surgical history has given us many options to treat peptic ulcer disease that are not nearly as common as they were decades ago. perforated ulcers can be managed laparoscopically, and graham patches are a common choice for repair. however, the lack of omentum for a proper pedicle flap can pose a problem in some patients. we have shown in this patient that a falciform pedicle flap can be successfully used as a substitute. laparoscopic management of boerhaave's syndrome after a late presentation: a case report and literature review. tahir yunus, hager aref, obadah alhallaq; imc. background: boerhaave's syndrome involves an abrupt elevation in the intraluminal pressure of the oesophagus, causing a transmural perforation. it is associated with high morbidity and mortality. its nonspecific presentation may contribute to a delay in diagnosis and result in poor outcomes.
treatment is challenging, yet early surgical intervention is the most important prognostic factor. case presentation: we present a case of a thirty-two-year-old male with a long medical history of dysphagia due to a benign oesophageal stricture. he presented with acute onset of epigastric pain after severe emesis. based on a computed tomography scan, he was diagnosed with boerhaave's syndrome. as he presented with signs of shock, immediate surgical exploration was mandated, and he was taken for laparoscopic primary repair, with an uneventful postoperative recovery. the golden period of the first hours after the insult still applies in cases of oesophageal perforation. the rarity of these cases makes a comparison between the various treatment methods difficult. our data support the use of laparoscopic operative intervention with primary repair as the mainstay of treatment for the management of oesophageal perforation. lipomas of the gastrointestinal tract are rare benign soft tissue tumors that are often discovered incidentally. these lesions are often asymptomatic, but have occasionally been reported to have clinical significance, as described in this case report. a year-old male initially presented to his primary care physician's office with a three-week history of vague intermittent abdominal pain. his pain was located in the mid epigastrium and was associated with mild nausea. past medical history was significant for hyperlipidemia and a right-sided goiter, and he denied any previous surgeries. outpatient work-up revealed a microcytic anemia, intermittent melena and hemoccult-positive stools. the patient was referred to hematology and gastroenterology. endoscopies revealed gastritis and small internal and external hemorrhoids. he underwent an outpatient ct scan, which demonstrated a . x . cm mass within the lumen of the jejunum causing long-segment non-obstructing intussusception. subsequently, the patient was referred to surgery and underwent a diagnostic laparoscopy. at the time of surgery, an approximately twelve-centimeter segment of proximal jejunum was identified intussuscepting into a distal limb. laparoscopic reduction of this segment was attempted; however, there was significant mesentery within the intussusceptum and the segment could not be safely reduced. therefore, the section of bowel was delivered through a small periumbilical incision. the intussusceptum was then able to be manually reduced. at this point a large mass was palpated inside the lumen of the jejunum. a side-to-side, functional end-to-end small bowel resection and anastomosis was performed. the bowel was returned to the abdomen and the abdomen was re-insufflated. the remainder of the small bowel was run and no additional lesions were identified. final pathology revealed a . x . x . cm submucosal, partially obstructing lipoma with ulceration at the tip. the patient recovered uneventfully and was discharged home on the second post-operative day. this case report describes a submucosal jejunal lipoma that was acting as a lead point for intermittent non-obstructing small bowel intussusception, while simultaneously causing a microcytic anemia due to ulceration at the tip of the lipoma. laparoscopic-assisted reduction and small bowel resection is a safe and effective treatment for gastrointestinal tract lipomas that cannot be removed endoscopically.
percutaneous endoscopic gastrostomy (peg) is an alternative to laparotomy for open gastrostomy tube placement to provide enteral nutrition for those who are unable to take nutrition orally. despite being less invasive, the procedure is not without complications, one of which is the formation of a gastrocolocutaneous fistula. this case describes a year-old female with a peg placed months prior who presented with leakage of tube feeds from the gastrostomy site. as there was concern for possible ileus or obstruction, an upper gi series was completed, which seemed to indicate dislodgement of the g-tube. the g-tube was replaced and a follow-up gastrografin study was repeated, which now indicated that the g-tube was within the lumen of the colon. soon thereafter, fecal matter was noted to be draining around the g-tube site; however, the patient was without clinical signs of peritonitis. the patient was managed non-surgically, as she was a poor surgical candidate with multiple prohibitive co-morbidities. the g-tube was removed at the bedside by cutting it flush at the skin level, with the anticipation that the remainder of the tube would be excreted with bowel movements. the decision was then made to attempt closure of the gastric fistula endoscopically, which was accomplished with hemoclips. a follow-up upper gi study hours later showed no extravasation of contrast through the gastric fistula. the colocutaneous fistula also resolved spontaneously over the next couple of days. placement of a peg tube through the transverse colon can present with varying ill effects, including diarrhea, pneumoperitoneum, peritonitis, gram-negative pulmonary infection or feculent vomiting with the formation of a gastrocolocutaneous fistula. treatment for a gastrocolocutaneous fistula has historically been exploration and excision of the fistula tract with resection of the involved colonic segment. however, there is currently no gold standard for management, which ranges from conservative to surgical and depends on the presenting symptoms. if the peg becomes dislodged, with resultant spillage from the colon and peritonitis, surgical exploration is needed, with removal of the g-tube and repair of the stomach and colon. on the other hand, non-surgical management has been suggested for a well-established fistula. fistula closure may be spontaneous; however, it can be inhibited by delayed gastric emptying or leakage of gastric secretions through the fistula. endoscopic clipping of the fistula tract employing hemoclips is a treatment option. median arcuate ligament syndrome (mals) is a rare etiology of abdominal pain caused by narrowing of the celiac artery at its origin by the median arcuate ligament, with relative hypoperfusion downstream. patients suffer from post-prandial abdominal pain, abdominal pain associated with exercise, nausea, and unintentional weight loss. diagnosis is historically made by demonstrating elevated celiac artery velocities and respiratory variation on dynamic vascular studies. the standard of care for mals patients is laparoscopic celiac artery dissection with release of the median arcuate ligament. at our institution, we have encountered fourteen patients (eleven female, three male) diagnosed by elevated peak velocity in the celiac artery on duplex ultrasound in conjunction with ct angiogram, mr angiogram, arteriogram, or multiple modalities.
all but one patient had multiple diagnostic imaging modalities, with the most common being ct angiogram; eight patients had invasive imaging. the mean age at presentation was . years in men and . years in women. on average, male patients presented with a longer duration of symptoms, . years (range - years), as compared to women, . years (range - years). symptoms were fairly consistent between genders and included nausea, emesis, abnormal bowel habits, early satiety, post-prandial pain, and weight loss. all male patients reported at least two symptoms, most commonly nausea and post-prandial pain. among female patients, % reported having three or more symptoms. notably, post-prandial pain was universal among men and women, while weight loss was exclusive to female patients, reported by %. pre-operative peak velocities were recorded in all but one patient, with mean values higher in female patients than in male patients, cm/s versus cm/s. post-operative duplexes were obtained in seven patients; pooled data show a mean decrease of cm/s, for an average of cm/s after decompression. in all cases, the celiac artery trifurcation was visualized and noted to have a distinct change in artery caliber after division of the ligament. in total, % of patients reported significant improvement, with return to a normal diet and healthy weight gain post-operatively. of the three without complete resolution, two were diagnosed with motility disorders and one was lost to follow-up. our experience demonstrates that laparoscopic release of the median arcuate ligament in patients with significant flow limitation of the celiac artery on dynamic and anatomic imaging can be a successful treatment option for patients with recalcitrant pain and gastrointestinal dysfunction with no alternative diagnosis. matthew a goldstein, ma, kirill zakharov, do, sharique nazir, md; nyu langone brooklyn. adhesions are fibrotic bands that form between and among abdominal organs. the most common cause of abdominal adhesions is previous surgery in the area, as well as radiation and infection; they also frequently occur with unknown etiology. these bands occur among abdominal organs, commonly the small bowel, and can lead to obstruction or remain asymptomatic, as in the patient discussed here. congenital abdominal adhesions are rare and have received little attention in research. the patient described in this case is a -year-old female with a past medical history of morbid obesity (bmi of ), hypertension and no past abdominal surgical procedures. the patient presented in august for bariatric surgical consultation and was ultimately taken for an attempted laparoscopic sleeve gastrectomy. upon entering the abdomen, significant adhesions were encountered and an additional attending was called to assist in identifying the stomach. the splenic flexure was found to be plastered to the diaphragm, and the descending and transverse colon were adhered to the anterior surface of the stomach. additionally, small bowel adhesions encased the area between the right and left hepatic lobes as well as the caudate lobe. after extensive enterolysis, the pylorus remained the only identifiable portion of the stomach. the patient also demonstrated significant hepatomegaly, and a wedge resection was performed. the amount of adhesions and matting of the small and large bowel obscured the view of the stomach, and the procedure was deemed too dangerous and terminated.
this case represents the uncommon scenario in which an abdomen with no prior surgical history presents with extensive, obscuring adhesions. one recent study describes the influence of cytokines and proinflammatory states as contributors to obstruction and malrotation in children, but this patient demonstrated no significant history. further investigation is needed to determine potential etiologies of symptomatic and non-symptomatic congenital adhesions among bariatric patients who fail conservative treatment. today the patient is doing well, and the surgical team will attempt to complete the procedure in the coming months. laparoscopic splenulectomy: an interesting case report. riva das, md, daniel a ringold, md, thai q vu, md; orlando health, abington jefferson health. introduction: splenules, or accessory spleens, are a rare disease entity. most often, they are asymptomatic and found incidentally during radiographic workup for an unrelated problem. torsion can cause a splenule not only to become symptomatic, but also to confound the results of the usual diagnostic studies. case description: a -year-old female patient with a history of uncomplicated hypertension, hyperlipidemia, hysterectomy, cholecystectomy, spinal surgery, and partial left nephrectomy presented to the hospital with a two-week history of intermittent left upper quadrant abdominal pain. she denied any similar episodes in the past or any associated symptoms. further investigation with a ct scan of the abdomen and pelvis showed an acute inflammatory process in the left upper quadrant in the same location as some colonic diverticulosis, as well as a . cm soft tissue mass. this indeterminate soft tissue mass was described as having decreased attenuation compared with the spleen. the differential diagnosis for this mass included malignancy, an atypical splenule, or an infectious/inflammatory mass. an mri was recommended for further evaluation, but did not reveal any additional significant findings. nuclear medicine liver/spleen scintigraphy was performed, which showed no focal activity associated with the indeterminate left upper quadrant mass, therefore making it unlikely to reflect a splenule, and making malignancy the diagnosis of exclusion. following a period of observation with analgesia, intravenous antibiotics, and bowel rest, her abdominal pain did not resolve, and the decision was made to proceed with operative exploration. diagnostic laparoscopy revealed an approximately cm spherical mass in the left upper quadrant, located just below the inferior aspect of the spleen. the superior aspect of the mass gave rise to a vascular pedicle, which, upon tracing, seemed to originate from the splenic hilum. this pedicle was easily ligated, and the mass removed. pathology revealed an extensively infarcted hemorrhagic nodule with organizing thrombus and an attached thrombosed artery, consistent with an infarcted splenule due to torsion around its own axis. the patient had an uncomplicated postoperative course. discussion: this case report demonstrates the unusual presentation and workup of a patient who was ultimately diagnosed with an infarcted splenule, despite imaging findings that did not correlate, and may even have confused her diagnosis. scintigraphy, which is normally the gold standard for diagnosing and localizing accessory splenic tissue, was in this case unrevealing, due to the inability of the tracer to traverse the torsed vascular pedicle. operative exploration was both diagnostic and therapeutic.
patients were treated with antibiotics guided by culture and sensitivity reports and with local wound care. one patient died due to sepsis at presentation. conclusion: chikungunya virus was found circulating in rodents in pakistan as early as . duodenal ulcer perforation, a common surgical emergency in our part of the world, usually presents as a pinpoint perforation in the anterior wall of the first part of the duodenum, unlike in already diagnosed cases of chikungunya disease, where a slit-like duodenal perforation is noted in the anterior wall of the first part of the duodenum. the literature and consensus relate this perforation to the excessive use of nsaids for the arthritis that usually accompanies chikungunya disease, but the unusual presentation remains unexplained. introduction: bouveret's syndrome is a rare form of gallstone ileus in which impaction of a gallstone in the duodenum results in a gastric outlet obstruction. gallstone ileus accounts for approximately - % of all cases of small bowel obstruction. the terminal ileum is the most common location for a calculus to cause obstruction, followed by the proximal ileum, jejunum and duodenum/stomach, respectively. open and laparoscopic surgery have previously been the mainstay of treatment for bouveret's syndrome; however, with the advent of new endoscopic techniques and instruments, there has been increasing success in endoscopic management. this case report describes a patient with a gastric outlet obstruction from a gallstone and discusses the current literature regarding diagnosis and management. case: a year-old male presented with a several-day history of epigastric abdominal pain and multiple episodes of nonbloody, nonbilious emesis. he had previously been diagnosed with cholelithiasis, but had refused surgery at that time. on admission the patient was found to have a leukocytosis of . . an ultrasound was performed, in which the images were limited due to pneumobilia. a subsequent ct scan revealed pneumobilia and a large cm gallstone impacted in the first portion of the duodenum, causing a gastric outlet obstruction. the patient underwent failed endoscopic attempts at removal and ultimately required a laparotomy and enterotomy with stone extraction. discussion: bouveret's syndrome is a rare variant of gallstone ileus. with newer endoscopic techniques and electrohydraulic lithotripsy, there has been increasing success with endoscopic retrieval of the impacted gallstones. there is some controversy with regard to the need for definitive operative management. stone extraction without cholecystectomy and fistula repair has been shown to have fewer postoperative complications as well as lower mortality rates than when a cholecystectomy and fistula repair are performed. total mesorectal excision (tme) with neoadjuvant chemoradiotherapy (nacrt) is the standard treatment for rectal cancer and has resulted in a decrease in local recurrence. however, nacrt has shown no significant overall survival benefit and has some adverse effects, mainly caused by radiation therapy. recently, the usefulness of neoadjuvant chemotherapy (nac) has been reported. we retrospectively assessed the efficacy and safety of neoadjuvant mfolfoxiri compared with nacrt, each followed by laparoscopic surgery. a total of patients undergoing laparoscopic surgery for lower rectal cancer (clinical stage ii or iii) from july to february in our department were retrospectively evaluated. patients underwent nac, and patients underwent nacrt.
the following data were collected: pathological complete response (pcr), histological grade, downstaging, radial margin (rm) and postoperative complications. histological grade was defined as follows: tumor cell necrosis or degeneration present in less than one-third of the tumor area (grade a), between one-third and two-thirds (grade b), more than two-thirds but with viable cells remaining (grade ), and complete response (grade ). the two groups were demographically comparable. downstaging did not differ between the two groups. histological grade (≥ grade b) and pcr were significantly higher in the nacrt group than in the nac group (p< . ). rm did not differ significantly between the groups, but there was a tendency toward securing a negative rm in the nac group ( % vs. . %, p= . ). aims: increasing evidence suggests that cme may improve overall and disease-free survival in colon cancer. our aims were to investigate the safety and efficacy of single-incision laparoscopic cme colectomy (silcc) compared to multiport cme laparoscopic colectomy (mpclc), providing the first meta-analytical evidence. methods: pubmed, scopus and the cochrane library were searched. studies comparing silcc to mpclc in adults with colon adenocarcinoma were included. the studies were critically appraised using the newcastle-ottawa scale. statistical heterogeneity was assessed with the χ² test and the i² statistic. the symmetry of funnel plots was examined for publication bias. results: one randomized and four case-control trials were included ( silcc vs sl). introduction: obesity has been associated with increased morbidity following total proctocolectomy with ileal pouch-anal anastomosis (tpc-ipaa). however, the incremental added risk of increasing obesity class is not known. the aim of this study was to evaluate the additional morbidity of increasing obesity class for tpc-ipaa. methods: after ethics board approval, the acs-nsqip database ( - ) was accessed to identify patients who underwent elective tpc-ipaa. body mass index (bmi, kg/m ) was classified as normal ( . - . ), overweight ( . - . ), obesity class i ( - . ), obesity class ii ( - . ) and obesity class iii (≥ ). primary outcomes were overall surgical site infection (ssi) and organ-space infection (osi). secondary outcomes were -day major morbidity and length of hospital stay (los). aim: in curatively intended resection of sigmoid and rectal cancer, many surgeons prefer to perform ligation at the root of the inferior mesenteric artery (ima), the high tie, for oncological reasons. however, ligation of the ima has been known to decrease blood flow to the anastomosis. there are few reports of patients undergoing reduced-port laparoscopic surgery (rps), including the single-incision laparoscopic approach (sils), even among those undergoing laparoscopic lymph node dissection around the ima with preservation of the left colic artery (lca). our objective was to evaluate the quality of this procedure regarding the application of rps for the treatment of sigmoid and rectal cancer. methods: the feasibility of this procedure was evaluated in consecutive cases of rps for sigmoid and rectal cancer. a lap protector (lp) was inserted through a . cm transumbilical incision, an ez-access was mounted to the lp, and three -mm ports were placed. almost all procedures were performed with standard laparoscopic instruments using a flexible scope (sils). a mm port was inserted in the right lower quadrant, mainly in rectal cancer surgery (sils+ ).
our method involves peeling the vascular sheath off the ima and dissecting the lymph nodes around the ima together with the sheath. results: lymph nodes around the ima were dissected with preservation of the lca in cases (group a). the ima was ligated at its root in cases (high tie, group b). in group a, patients were treated with sils and patients were treated with sils+ . in group b, patients were treated with sils and patients were treated with sils+ . median operative time was . and . min for groups a and b, respectively. the operative time was significantly longer in group a. estimated blood loss was . and . g, and the mean numbers of harvested lymph nodes were . and . , respectively. none of the other operative results differed statistically between groups a and b. in this series, there was only one anastomotic leakage, in group b. conclusion: our method allows laparoscopic lymph node dissection equivalent to the high-tie technique. the operative time tends to be longer; however, this procedure may reduce anastomotic leakage. introduction: the routine mobilization of the left colonic flexure in colorectal surgery is still a matter of debate. we present our surgical approach with data. this technique may increase surgical expertise/confidence when the maneuver is necessary. up to % of all splenectomies are for surgery-related injuries; % of those splenic injuries are treated by splenectomy. the iatrogenic splenic injury rate during colorectal surgery is . %. iatrogenic splenic injuries lead to increased risk of mortality/morbidity, extended operative time and in-hospital stay, and increased healthcare costs. risk factors for iatrogenic splenic injury are advanced age, adhesions and underlying pathology. obesity is not a risk factor. it is debated whether mobilization of the left colonic flexure is a risk factor for splenic injury. over-traction on the ligaments is the most frequent mechanism of damage. the most dangerous surgical maneuver is dissection of the splenocolic ligament. moreover, laparoscopy decreases the splenic injury risk by almost times. some surgeons are reluctant to routinely take down the splenic flexure. materials and procedures: robotic left colonic/rectal cases with a routine splenic flexure mobilization technique have been performed: left colectomy (n= ), rectal surgery (n= ), transverse colectomy (n= ) and pancolectomy (n= ). conversion rate was . %, ebl < ml, postoperative leak rate . %, and % iatrogenic splenic injuries. results: in our approach, there are pathways that need to be mastered for splenic flexure mobilization: a) medial-to-lateral dissection (underneath the inferior mesenteric vein); b) lateral-to-medial dissection (from the lateral peritoneal reflection); c) access to the lesser sac with omental detachment from the transverse colon; d) access to the lesser sac through the gastrocolic opening, following the inferior border of the pancreas. the dissection should be closer to the colon than to the spleen. in our experience, the routine mobilization of the splenic flexure may have some advantages: a) better (tension-free) distal anastomosis formation; b) better perfusion of the proximal stump; c) wider oncological dissection; d) no need to go back to the flexure when the proximal stump is too short; e) mastering a surgical maneuver useful in other procedures (e.g. distal pancreatectomy).
the theoretical drawbacks of routine splenic flexure mobilization can be: a) longer operative time, which is on average increased by minutes; b) risk of splenic injury; in our experience, however, no splenic injuries have been registered. conclusions: technical accuracy with cautious dissection/visualization can reduce the rate of iatrogenic splenic damage. laparoscopy decreases the splenic injury rate. robotic surgery may have the potential to further reduce this complication. our data suggest that the routine mobilization of the splenic flexure has more advantages than drawbacks and can reduce the iatrogenic splenic injury rate. more trials are needed in order to confirm our findings. introduction: the robotic stapler with endowrist™ technology (intuitive surgical, inc.) offers a larger range of motion and articulation compared to the laparoscopic device, and may provide some benefits in difficult areas like the pelvis. to date, few studies have been published on the application of robotic endowristed stapling. we present our preliminary experience using the robotic stapler in low anterior rectal resection (larr) with total mesorectal excision (tme) for rectal cancer. methods and procedures: between march and september , patients underwent elective robotic larr with tme and primary colorectal anastomosis within the eras program. patient demographics, intra-operative data and post-operative outcomes were compared between the endowrist™ robotic stapler group (rs group) and the laparoscopic stapler group (ls group). results: the two groups were homogeneous in terms of demographic and clinical characteristics. thirteen patients ( males) and patients ( males) were included in the rs and ls groups, respectively. seven patients received preoperative chemoradiation in the rs group, in the ls group. there was no difference in intra-operative blood loss or total operative time. the median number of stapler fires was (range - ) in the rs group and (range - ) in the ls group. a loop ileostomy was fashioned in patients in the rs group ( . %) and patients in the ls group ( . %). the -day mortality was nil. two anastomotic leaks were detected in the rs group ( . %), and cases ( . %) occurred in the ls group, all treated conservatively. the mean length of postoperative stay was . ± . days in the rs group and . ± . days in the ls group. conclusions: in our preliminary experience, the application of the robotic stapler during larr with tme has been shown to be safe and feasible, with acceptable morbidity. although our case series is small, fewer stapler fires were required in the rs group than in the ls group. we believe that the robotic stapler might allow more precise firing during pelvic surgery, which may explain the trend toward a decreased number of fires; a lower number of fires has been well documented in the literature to be related to a lower risk of anastomotic leak. further high-quality studies are required to confirm these findings. background and objectives: the present study aimed to investigate the safety and feasibility of laparoscopic ultra-low anterior resection (l-ular) with total mesorectal excision (tme) and transanal specimen extraction for rectal cancer located in the lower one-third of the rectum, and specifically to understand the oncological outcome of the operation. patients and method: a prospectively designed database of a consecutive series of patients undergoing laparoscopic ultra-low anterior resection for rectal malignancy with various tumor-node-metastasis (tnm) classifications from to at the texas endosurgery institute was analyzed.
in this study, ultra-low anterior resection is defined as low anterior resection for a malignant lesion in the distal / of the rectum. results: ultra-low anterior resections were completed laparoscopically with tme and transanal specimen extraction. the operating time was . ± . minutes, and estimated blood loss during the procedure was . ± . ml. the distance of the lesion from the anal verge, measured with intraoperative colonoscopy, ranged from . cm to . cm, and the shortest distance of the colorectal anastomosis from the anal verge was cm. since a diverting ileostomy was routinely fashioned after l-ular, no patient was found to have anastomotic leakage; however, patients developed anal stenosis within the -month follow-up. the overall rate of postoperative complications was therefore . %. moreover, patients were reported to have local recurrence at -year follow-up, a rate of . %. conclusions: l-ular is a safe and effective procedure for rectal cancer of the distal / of the rectum, with comparable local recurrence and postoperative complication rates, suggesting that l-ular can be considered a procedure of choice for rectal cancer at a very low location in the rectum. for rectal cancer, however, local full-thickness excisions are fraught with high local recurrence rates, even if limited to early and well-selected lesions. this corroborated observation is likely caused by a combination of missed nodal disease and direct implantation of tumor cells into the mesorectum, which upstages even early t lesions to at least a t lesion. the treatment of choice for invasive adenocarcinoma consists of an oncological total mesorectal resection, possibly with other modalities. rectal tumors of uncertain behavior can present a treatment dilemma between over-treatment and under-treatment. concept: if the nature of a lesion is not certain or if contradictory results have been obtained, we propose a superficial local excision as a mucosal excisional biopsy to establish the diagnosis while avoiding interference with subsequent definitive treatment modalities by preserving the integrity of the external rectal wall and mesorectum. a benign final pathology concludes the treatment, whereas detection of invasive cancer will be managed with a subsequent oncological resection. methods: this is a case report of a -year-old woman found to have a . cm villous lesion in the mid to distal rectum, without invasive cancer either proven or disproven. a tems-guided mucosal resection of the rectal mass at cm above the anal verge was performed, whereby the lesion was dissected off the underlying muscularis. results: despite discrepant preoperative erus and mri staging (ut - vs ct lesion), a technically successful mucosal resection of the large rectal mass was carried out. pathology revealed a tubulovillous adenoma without high-grade dysplasia or malignancy, and a complete resection. conclusion: tems mucosal excisional biopsy of rectal tumors of uncertain behavior allows for a less invasive diagnostic approach that may (a) be definitive treatment if the lesion is proven benign, or (b) confirm the need for more aggressive treatment without having burned any treatment bridges or upstaged an early tumor by violating the mesorectal plane. an oncologic resection with appropriate (neo-)adjuvant chemotherapy can then be carried out while preventing the potential for tumor seeding at the initial operation. background: adequate visualization of the entire lumen of the large bowel is essential for detecting pathology and establishing diagnoses during colonoscopy.
patients are provided dietary instructions and medications in order to achieve adequate bowel preparation. given the extensive amount of preparation required, some patients may be unable to adhere to the prescribed routine, resulting in rescheduling or repeat procedures and misallocation of limited resources. a number of previous quality-improvement efforts have been implemented to ensure adequate preparation prior to colonoscopy. objective: the objective of this study was to develop and assess the feasibility of a novel smartphone application for the delivery of bowel preparation instructions. methods: a novel smartphone application was developed to deliver bowel preparation instructions to patients undergoing colonoscopy for the first time. patients were included in the pilot phase of this project if they were undergoing colonoscopy for the first time, had access to a smartphone, and had not previously had a bowel preparation for any reason. we excluded patients with a previous diagnosis of inflammatory bowel disease or colorectal cancer. patient surveys were administered at the time of colonoscopy. patients were questioned regarding the completeness of bowel preparation and adherence to bowel preparation instructions. patient questionnaires were completed to ascertain the ease of use of the smartphone application and any concerns that arose. quality of bowel preparation was assessed by the colonoscopist using the validated ottawa bowel preparation score. these are the pilot results for the "coloprep" trial (nct ). results: a total of patients were enrolled in the pilot phase of this study. patient satisfaction, adherence to instructions and ease of use of the smartphone application were ascertained. bowel preparation, as assessed by the colonoscopist, was reported. conclusions: this study assessed the feasibility of using a novel smartphone application for the delivery of bowel preparation instructions. this pilot study is the initial phase of a randomized controlled trial comparing the smartphone application vs. written instructions for the delivery of bowel preparation instructions. the median follow-up was months. there were no statistically significant differences in clinical features and laboratory findings between the two groups. no statistically significant difference was found in the overall success rates or the complication rates between the conservative and surgical arms (success rates: . % and . % (p= . ); complication rates: . % and . % (p= . ), respectively). however, surgical treatment was better than conservative treatment in preventing recurrent diverticulitis (recurrence rates: % and . % (p= . ), respectively). conclusion: conservative management with bowel rest and antibiotics is a safe and effective treatment for right-sided uncomplicated colonic diverticulitis and may be considered as the initial option. on the other hand, laparoscopic diverticulectomy is also safe, effective and adequate. surgery is advocated to decrease the recurrence rate. introduction: it has been hypothesized that the structural and functional changes that develop in the defunctioned segment of bowel may contribute to the development of postoperative ileus (poi) after loop ileostomy closure (lic). as such, a longer interval between ileostomy creation and lic may increase poi. methods and procedures: after institutional review board approval, all patients who underwent lic at a single institution between and were identified.
the primary endpoint, primary poi, was defined as either a) being kept nil-per-os on or after postoperative day for symptoms of nausea/vomiting, distension, and/or obstipation, or b) having a nasogastric tube (ngt) inserted, without postoperative obstruction or sepsis. secondary endpoints included length of hospital stay (los) and non-poi-related morbidity. patients who left the operating room with an ngt, had a planned laparotomy with a concomitant procedure at the time of lic, had a total proctocolectomy as their index operation, or had secondary poi were excluded. patients were then divided into two groups based on the timing from the index operation to lic (< months vs. ≥ months). objective: fecal incontinence can be a debilitating problem, significantly diminishing productivity and quality of life. sacral neuromodulation has emerged as a first-line surgical treatment option in patients with fecal incontinence. though its efficacy has been rigorously evaluated in adult populations, there are scant data available on its use in pediatric patients with fecal incontinence. this case study discusses the management of fecal incontinence in a pediatric patient with a history of hirschsprung's disease utilizing sacral nerve stimulation. methods: our patient is a -year-old female with a history of hirschsprung's disease diagnosed in infancy and treated surgically with coloanal pull-through at the age of , who presented with complaints of fecal incontinence. the patient was wearing pads daily, noting frequent uncontrolled bowel movements, and frequently missed days of school due to these symptoms. despite maximal medical management and pelvic floor physical therapy, the patient continued to have - episodes of fecal incontinence daily. a ct scan with rectal contrast was used to establish her postoperative anatomy. anal manometry showed low rest/squeeze pressures, an absent rectoanal inhibitory reflex, and abnormal sensation. furthermore, during balloon expulsion testing the patient failed to pass the device. the patient was deemed a candidate for stage testing with sacral nerve neuromodulation. during follow-up, the patient was noted to have resolution of her episodes of fecal incontinence, and the second stage was completed. the patient continues to note % continence and dramatic improvement in her quality of life. conclusion: in this patient with a history of severe fecal incontinence due to hirschsprung's disease, sacral neuromodulation has had a significant impact on her quality of life. post-operatively she continues to have marked improvement in her symptoms, with - bowel movements a day and no recurrence of fecal incontinence. sacral neuromodulation is a promising treatment for fecal incontinence in the pediatric population. future research investigating the long-term efficacy of this treatment modality in the pediatric population is needed. cases of bowel obstruction caused by colorectal cancer recurrence and progression were excluded. surgical cases ( . %) were considered to be early bowel obstruction and ( . %) were classified as late bowel obstruction. left hemicolectomy (n= , . %) was a significantly more frequent procedure in early bowel obstruction, and abdominoperineal resection (n= , . %) was significantly more common in late bowel obstruction (p< . ). both early and late bowel obstruction included adhesive small bowel obstruction (n= ), internal hernia (n= ), and strangulation obstruction (n= ).
internal hernia (n= ) and strangulation obstruction (n= ) occurred after left hemicolectomy and abdominoperineal resection, respectively. there was no apparent relationship between surgical procedures and adhesion regions (abdominal wall, intestinal tract, and pelvic cavity). the incidence rate of postoperative small bowel obstruction remained low, and laparoscopic colectomy was safely performed. however, countermeasures are needed because of the high frequency of early and late bowel obstruction occurring after left hemicolectomy and abdominoperineal resection, respectively. improved utilization of resources as an improvement. introduction: nowadays, treatment decisions about patients with rectal cancer are increasingly made within the context of a multi-disciplinary team (mdt) meeting. the outcomes of rectal cancer patients before and after the era of the multi-disciplinary team were analyzed and compared in this paper. the purpose of the present study is to evaluate the value of discussing rectal cancer patients in a multi-disciplinary team. methods and procedures: in our health institute, weekly mdt conferences were initiated in january . meetings were attended by surgeons, radiologists, radiation and medical oncologists and key nursing personnel. all rectal cancer patients diagnosed and treated in - in the general surgery division of the "carlo urbani" hospital in jesi (an, italy) were included. data from rectal cancer patients were then evaluated for , before the adoption of the mdt, and for , after the adoption of the meetings. datasets regarding demographics, tumor stage, treatment, and outcomes based on pathology after operation were obtained. during an mdt discussion, patient history, clinical and psychological condition, co-morbidity, modes of work-up, clinical staging, and optimal treatment strategies were discussed. a database was created to include each patient's work-up, treatments to date and recommendations by each specialty. ''demographic variables'' consisted of age at diagnosis, sex, body mass index, comorbidities, american society of anesthesiologists physical status classification, clinical stage and pathological stage. other analyzed variables included baseline carcinoembryonic antigen (cea), the type of imaging, use of neoadjuvant chemo-radiation, restaging following neoadjuvant therapy, distance from the anal verge, operation type and use of adjuvant chemo-radiation. ''outcome variables'' consisted of a comparison, for each group, between clinical and pathological stage. results: sixty-five patients were included in this study: thirty patients in (pre-mdt) and thirty-five patients in . demographic variables did not differ significantly between groups. preoperative clinical stages with baseline preoperative cea and postoperative pathological stage were also analysed. thanks to the mdt and the increased use of neoadjuvant therapy, a statistically significant reduction between clinical and pathological stage was verified in the patients of the mdt group. conclusions: the vast majority of rectal mdt decisions were implemented, and when decisions changed, it mostly related to patient factors that had not been taken into account prior to the adoption of the multi-disciplinary team. analysis of the implementation of team decisions is an informative process for monitoring the quality of mdt decision-making. purpose: in japan, lateral pelvic node dissection (lpnd) is the standard treatment for locally advanced lower rectal cancer.
there are few reports of patients undergoing single-incision plus one-port laparoscopic (sils+ ) lpnd, even among those undergoing laparoscopic lpnd. the aim of this study is to describe our initial experience and assess the feasibility and safety of sils+ lpnd for patients with advanced lower rectal cancer. methods: a lap protector (lp) was inserted through a . cm transumbilical incision, an ez-access was mounted to the lp, and three -mm ports were placed. a mm port was inserted in the right lower quadrant. a single-institution experience of sils+ lpnd for rectal cancer is presented. the inclusion criteria (indications for lpnd) were lower rectal cancer with t - , or t - rectal cancer with lateral lymph node metastasis, as described by the japanese society for cancer of the colon and rectum (jsccr) guidelines for the treatment of colorectal cancer. perioperative outcomes, including operative time, operative blood loss, length of stay, postoperative complications, and histopathological data, were collected prospectively. introduction: endoscopic stenting with a self-expandable metallic stent (sems) is a widely accepted procedure for malignant colorectal obstruction. we assessed the safety and efficacy of insertion of a sems followed by elective surgery as a 'bridge to surgery' (bts) in our institute. methods: this was a retrospective study in our institute. the data were collected from medical charts from january to june . results: a total of consecutive patients underwent radical surgery for colorectal malignancy during this period. in this series, patients ( . %) were diagnosed with malignant colorectal obstruction and intended for a bts. the stent was successfully placed in patients, and all of these patients were planned to undergo radical surgery. the patients with failed stenting underwent stoma creation ( patients) or hartmann's procedure. the technical success rate was % and the clinical success rate was %. the median time from sems to surgery was days ( - days). open and laparoscopic surgery were performed in and patients, respectively; one patient refused radical surgery because of advanced age. the tumor could be resected in patients (bts patients) with primary anastomosis. however, diverting stoma creation was needed in patients, and a decompression rectal tube was placed in patient. none of the laparoscopic cases required conversion to open surgery. there was no anastomotic leakage in the bts patients. the median duration of postoperative hospital stay was days ( - days). the overall postoperative complication rate was % ( / ), including bowel obstruction and anastomotic stricture. the median follow-up period was days. during the follow-up period, patients relapsed with peritoneal dissemination, ovarian metastasis, and liver and pulmonary metastases, respectively. the former patients had been diagnosed as stage iva at the time of primary surgery. one patient died suddenly. conclusions: our data suggest that the routine use of sems insertion is a safe and effective procedure for malignant colorectal obstruction as a bts. moreover, the laparoscopic procedure was useful in bts patients. the short- and long-term surgical outcomes were also acceptable. introduction: serpin e , also known as plasminogen activator inhibitor- (pai- ), is an inhibitor of urokinase-type plasminogen activator (upa) and tissue-type plasminogen activator (tpa). pai- plays a role in the regulation of angiogenesis, wound healing, and tumor cell invasion; overexpression has been noted in breast, esophageal, and colorectal cancer (crc).
pai- is also a potent regulator of endothelial cell (ec) proliferation and migration in vitro and of angiogenesis and tumor growth in vivo. the plasminogen/plasmin system plays a key role in cancer progression by mediating extracellular matrix degradation and tumor cell migration. surgery's impact on plasma pai- levels is unknown. this study's purpose was to measure plasma pai- levels before and during the first month after minimally invasive colorectal resection (micr) for crc. objectives: retroflexion in the rectum at the end of a colonoscopy is a requirement for a complete endoscopic evaluation. retroflexion helps to visualize and detect polyps that would otherwise be missed. currently, new endoscopes are available that can perform retroflexion in the cecum. aim: our study aims to compare the polyp detection rate in the cecum and ascending colon with and without retroflexion in the cecum. methods: this is a single-center, single-operator, retrospective study. a total of two hundred patients were involved. a single-center irb waiver was obtained. patients were divided into two groups based on the presence/absence of retroflexion in the cecum during their colonoscopy. the data were obtained from records. group a (n= ) had colonoscopy without retroflexion in the cecum; group b (n= ) had colonoscopy with retroflexion in the cecum. inclusion criteria: patients undergoing screening colonoscopy between the ages of and . results: group a: a total of patients were screened and a total of polyps were detected. the number of cecal polyps was ( . % of the total polyp count). the number of ascending colon polyps was ( % of the total). on analyzing the pathology, % of the cecal polyps were tubular adenomas, % hyperplastic polyps and % lymphoid aggregates. of the ascending colon polyps, % were tubular adenomas, % tubular adenomas and % tubulovillous adenomas. group b: a total of patients were screened and a total of polyps were detected. the number of cecal polyps was ( . % of the total polyp count). the number of ascending colon polyps was ( %). on analyzing the pathology, % of cecal polyps were tubular adenomas and % were sessile serrated. of the ascending colon polyps, % were tubular adenomas, % sessile serrated, % tubulovillous and % hyperplastic polyps. side events: two mass lesions were noted in both groups a and b. there were incomplete colonoscopies in groups a and b. conclusion: this retrospective analysis reveals a small increase in polyp detection in the cecum with retroflexion, especially in detecting sessile polyps, which have greater malignant potential. however, a large multicenter analysis will be required to validate this observation. background: while uncommon, rectal prolapse is a disabling condition affecting older females. in a small subset of patients, concomitant organ prolapse with or without incarceration can lead to significant morbidity. as the field of laparoscopy has evolved, minimally invasive surgical options for rectal prolapse have led to improved quality and reduced morbidity for patients suffering from this debilitating disease. methods: the - acs-nsqip databases were queried for patients undergoing a traditional or minimally invasive rectopexy based on cpt codes ( , , , and ). emergent cases and patients with preoperative infections or inflammatory states were excluded. the primary outcome of interest was a -day postoperative composite morbidity score. statistical analysis incorporated multivariate analysis and binomial logistic regression, with p< . holding significance.
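as a hedged illustration of the kind of analysis just described, a minimal python sketch of a binomial logistic regression of -day composite morbidity on surgical approach, adjusted for cohort differences, follows. it is not the authors' actual analysis: the column names, covariates and data are invented for illustration only.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # hypothetical data standing in for the nsqip cohort described above
    rng = np.random.default_rng(0)
    n = 500
    df = pd.DataFrame({
        "morbidity": rng.integers(0, 2, n),           # 30-day composite morbidity (0/1)
        "minimally_invasive": rng.integers(0, 2, n),  # 1 = minimally invasive rectopexy
        "age": rng.normal(68, 10, n),
        "bmi": rng.normal(27, 5, n),
        "diabetes": rng.integers(0, 2, n),
    })

    # binomial logistic regression; the covariates adjust for cohort differences
    model = smf.logit("morbidity ~ minimally_invasive + age + bmi + diabetes",
                      data=df).fit(disp=False)

    # odds ratios with 95% confidence intervals; an odds ratio below 1 for
    # minimally_invasive would indicate a protective effect on morbidity
    print(np.exp(model.params))
    print(np.exp(model.conf_int()))

on real data, the fitted odds ratio for the approach variable corresponds directly to the protective effect reported in the results below.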
results: these inclusion and exclusion criteria identified patients undergoing traditional ( ) and minimally invasive ( ) rectopexy for prolapse between and . patients undergoing traditional rectopexy were older (p. ), had a higher body mass index (p= . ), had more comorbid conditions (diabetes, copd, hypertension), and had less functional independence (p= . ). patients undergoing a traditional rectopexy had a higher composite morbidity incidence of . % vs. % for minimally invasive rectopexy (p. ). specifically, minimally invasive rectopexy patients had a . % reduction in wound complications (p= . ) and a shorter hospital stay ( . days vs. . days, p. ) compared with a traditional rectopexy. readmission rates were also . % lower in the minimally invasive group (p= . ). after controlling for the differences in the cohorts, a minimally invasive approach was a significant protective factor against the incidence of -day postoperative morbidity (or . , p. ). conclusion: minimally invasive rectopexy has improved -day postoperative morbidity compared with traditional rectopexy and should be strongly considered for the treatment of rectal prolapse. objectives: the short-term safety and efficacy of self-expandable metallic stent (sems) placement followed by elective surgery, 'bridge to surgery (bts)', for malignant large-bowel obstruction (mlbo) have been well described. the aim of this study was to investigate the risk factors for postoperative complications and the optimal interval between sems placement and surgery in patients with mlbo. methods: retrospective examination of patient records revealed that the bts strategy was attempted in patients with mlbo from january to march in our institution. two of these patients were excluded because they had undergone emergency surgery for sems migration; thus, patients with mlbo who had undergone sems placement followed by elective surgery were included. of these patients, eight had developed postoperative complications (clavien-dindo grade ≥ii) (postoperative complication: poc group), whereas patients had no such complications (no poc group). results: univariate analyses showed that asa score, number of lymph nodes resected, interval between sems and surgery, and preoperative albumin concentration were associated with postoperative complications. multivariate analysis identified only the interval between sems and surgery as an independent risk factor. furthermore, a cut-off value of days for the interval between sems and surgery was identified by roc curve analysis. conclusions: the interval from sems placement to surgery is an independent predictive factor for postoperative complications in patients undergoing elective surgery in a bts setting; an interval of over days is recommended for minimizing postoperative complications. haseeb kothar, ronan cahill; mater misericordiae university hospital current clinical advances in operative near-infrared visualisation of cells, tissues and structures are predicated on the use of commercially available near-infrared cameras to excite and visualise emission energy from non-selective, approved compounds (predominantly indocyanine green (icg)). it is expected that new-generation compounds wholly selective for specific cellular components are now needed for further advances, and a variety of molecular targets have been proposed and are being developed primarily for oncological imaging purposes.
recent publications have, however, suggested that icg itself is retained within malignant tissue differently from surrounding non-malignant tissue, which is important for two reasons. firstly, it exploits and makes visible the increased vascular permeability and disordered clearance associated with carcinogenesis, which is a common endpoint of a variety of mediators including but not limited to vegf. this raises the useful option of targeting the downstream effects of cancer compounds on a metabolic basis, as opposed to tagging individual cell or antigen components. this means that a single agent could be used to target a variety of cancers rather than needing a specific one for each subtype, as well as obviating the issue of cancer cell heterogeneity even within a single cancer deposit. second, it is very likely that some or all of the 'localisation' effect of proposed selective compounds may well be due to a similar phenomenon rather than cell-specific binding, which may make distinction from other areas of similar metabolic behaviour (i.e. inflammatory regions) difficult. the crucial step-advance for such agent development may therefore relate to the timing of compound delivery and the 'visualisation window' at the region of interest, rather than highly selective oncocellular targeting. to illustrate this in more detail, we have been examining the tissue-specific effects and actions of near-infrared excitation in patients (n= ) with localised malignant colorectal primaries receiving an aliquot of icg before such examination at the time of resection. icg can be selectively apparent in the colorectal primary minutes after its systemic administration, likely due to altered vascular dynamics. additional dose-related work has shown that early administration ( - minutes before examination) does not give useful information related to tumour fluorescence. interestingly, none of these patients had fluorescence seen within their regional lymphatics, but none had malignant lymph nodes associated with their large primaries on pathological examination either. however, this procedure is not usually performed in laparoscopic apr because of its technical difficulty, which may lead to increased rates of complications (fig. ). here, we compared the feasibility and perioperative outcomes of laparoscopic apr with and without pelvic peritoneum closure (ppc) for lower rectal cancer. introduction: there are reports of increased operative duration, blood loss and postoperative morbidity, caused by difficulties in obtaining good visualization and in controlling bleeding, when laparoscopic resection is performed in obese patients with colon cancer. purpose: the aim of this study was to investigate the impact of obesity on perioperative outcomes after laparoscopic colorectal resection performed by various operative methods in our department. patients and methods: we conducted a retrospective analysis of patients with colorectal cancer who underwent laparoscopic surgery between january and december . right colectomy was performed in patients, sigmoidectomy in patients, and low anterior resection in patients. the surgical outcomes were compared between non-obese (body mass index [bmi] < kg/m²) and obese (bmi ≥ kg/m²) patients. results: right colectomy cases: the amount of blood loss was significantly increased in the obese group compared with the non-obese group, but operation time did not differ significantly between the groups.
there were no significant differences between the two groups in the rate of postoperative complications and the duration of postoperative hospitalization. sigmoidectomy cases: there were no significant differences between the two groups in operation time and amount of blood loss. even though the preoperative asa score and the rate of postoperative complications were higher in the obese group, the mean postoperative hospital stay did not differ significantly between the two groups. low anterior resection cases: there were no significant differences between the obese and non-obese groups in operation time, amount of blood loss, rate of postoperative complications, or duration of postoperative hospitalization. discussion: although there are some reports of increased operative times in obese patients, the operative procedure was not prolonged in any of the present study patients. the amount of blood loss was significantly increased in the obese group compared with the non-obese group when right colectomy was performed. among the patients undergoing sigmoidectomy, the postoperative rate of complications was higher in the obese group; however, the preoperative asa status was also higher in the obese group than in the non-obese group, indicating that factors other than obesity may be involved. conclusion: we conclude that laparoscopic colorectal resection appears to be safe and feasible in both obese and non-obese patients. however, bmi may not accurately reflect the amount of visceral fat present. background: for complete rectal prolapse (basically longer than cm), we thought sling rectopexy was the most reasonable way to hang up and fix the rectum, which drooped down and prolapsed due to relaxation of the supporting tissue. we considered that the ripstein method provided sufficient fixation of the rectum to the sacrum; however, complications of rectal stenosis, constipation, mesh infection and mesh penetration have been reported. therefore, we modified the ripstein method to overcome such complications. aim: a prospective study beyond the randomized control trial (rct) between our modified ( introduction: the results of the japan clinical oncology group (jcog) study suggested that total mesorectal excision (tme) and lateral lymph node dissection (llnd) could become the standard treatment for lower rectal carcinoma. however, llnd must also be performed laparoscopically if surgery for lower rectal carcinoma is to be carried out as a completely laparoscopic procedure. transanal tme (tatme) is expected to provide better results than conventional tme, both oncologically and in terms of pelvic function, and its use has recently been spreading in japan. we started performing laparoscopic tatme+llnd in our department in july and here report the short-term outcomes. subjects and methods: we used laparoscopic tatme+llnd to treat men and women with ct or deeper rectal carcinoma in whom the inferior margin of the tumor was on the anal side of the peritoneal reflection. this was a retrospective study of short-term postoperative outcomes. surgical procedure: laparoscopic surgery was started simultaneously by two teams, one working transabdominally and the other working transanally. the transabdominal team performed the standard proximal llnd and mobilization of the splenic flexure via five ports. they then dissected the bilateral lateral lymph nodes, mainly in the obturator (# ) and internal iliac (# ) groups. during this time, the transanal team performed laparoscopic tatme.
finally, both dissection layers were connected and the cancer was excised. results: six patients had clinical stage ii and two had clinical stage iii lower rectal carcinoma. all the patients underwent preoperative chemotherapy with s- +l-ohp. five underwent sphincter-preserving surgery, and three underwent rectal amputation. the mean operating time was minutes (range, - minutes), and the mean amount of hemorrhage was g ( - g). the mean number of lymph nodes dissected was , and r resection was performed in all cases. the mean length of hospital stay was days, and a postoperative complication of clavien-dindo grade iii or higher occurred in one patient (anastomotic failure). conclusions: laparoscopic tatme+llnd performed by two teams simultaneously is an extremely useful procedure that not only reduces operating time but is also less invasive than conventional laparoscopic surgery. it may also be effective for improving curability, nerve preservation, and anal function. objective: in laparoscopic appendectomy, the base of the appendix is usually secured by applying a roeder's knot. the aim of this study was to compare the advantages of using staplers and hem-o-loks for securing the base of the appendix. method: the study included patients between the ages of and years with acute appendicitis, randomly divided into two groups. in the first group, the base of the appendix was secured using a roeder's knot. in the second group, the mesoappendix was not dissected and was included in the endostapler jaws. the primary outcome was overall morbidity. secondary outcomes were total duration of surgery, total length of stay, and ease in difficult cases. result: no morbidity was recorded in either group. the duration of the operative procedure was significantly longer in the cases with a roeder's knot than in the stapler group (p. ), as the mesoappendix was not dissected in the latter. cases with an unhealthy base progressed to laparoscopic quadricolectomy. apart from the ease of applying a stapler, cases in the second group with a gangrenous base were easily tackled using the endostapler, avoiding the need for a hemicolectomy. conclusion: all forms of closure of the appendix base are acceptable, but the endostapler technique, apart from providing a secure base, reduces operative time and is an essential tool in cases of a gangrenous base. introduction: accurate staging is essential to estimate the prognosis of patients with colorectal cancer (crc), and lymph node evaluation is key to determining it. in non-metastatic crc, the number of harvested lymph nodes is the strongest prognostic factor for outcome and survival. additionally, it is thought that a higher lymph node yield may be representative of a higher quality of surgical care. given the importance of the association between lymph node evaluation and outcome in crc, it is necessary to evaluate factors which may affect lymph node harvest. introduction: hartmann's procedure is commonly performed for complicated diverticulitis, neglected rectal trauma with sepsis, and sometimes malignancy. the traditional techniques to restore intestinal continuity after hartmann's procedure were for many years the standard of care, but in fact they carry considerable morbidity, and even mortality and failure. laparoscopic techniques not only carry the advantages of minimally invasive surgery but also offer better visualization and magnification. the aim is to evaluate the outcome of using the laparoscope in reversal of hartmann's procedure as regards feasibility and safety.
patients and method: forty patients underwent laparoscopic reversal of hartmann's procedure in tanta university hospital. their ages ranged from to years, and the time elapsed after the original operation ranged from months to years; advanced malignancy was excluded. conversion occurred in cases due to extensive adhesions and bleeding. results: there was no mortality or major morbidity in our study, and only a single leak, which was treated by a covering ileostomy. conclusion: laparoscopic reversal of hartmann's procedure is a feasible, promising technique with minimal morbidity. background: minimally invasive surgery is well established in elective colorectal surgery and has been proven to give better clinical outcomes compared with open surgery. in the emergent setting, laparoscopy is used mostly for cholecystectomy and appendectomy, but laparoscopic emergent colorectal surgery remains limited because of its complexity and difficulty. the aim of this study was to evaluate the feasibility of laparoscopic emergent colorectal surgery. methods: this is a prospectively collected, observational, single-center study of patients undergoing laparoscopic emergent colorectal surgery from to . patient demographics, surgical indications and details, complications, clinical outcomes and hospital stay were collected and analyzed. results: there were emergent colorectal operations in total, and patients were managed with a minimally invasive method. among these laparoscopic emergent surgeries, there were male and female patients. the mean age of the patients was . years (range - years). the main indications for operation were: perforation . % ( / ), leakage after elective colorectal surgery . % ( / ), obstruction . % ( / ), ischemic colitis . % ( / ), and bleeding . % ( / ). there were cases in asa , cases in asa , and cases in asa . the qsofa score for sepsis was in cases, in cases, in cases, and in case. there were cases of laparoscopic lavage with diverting stoma, cases of hartmann's procedure, cases of anterior resection, cases of right hemicolectomy, cases of perforation repair, and cases of redo anastomosis. there were conversions to the open method, including cases due to bowel adhesion, cases due to bowel distension, and case due to severe shock status. the mean operative time was . minutes. the overall mortality rate was . %, and the major complication rate (clavien-dindo grade above ) was . %. the re-operation rate was . %. the mean hospital stay was . days. conclusions: this study presents evidence of initial clinical outcomes in emergent laparoscopic colorectal surgery. in the absence of large case series, the benefits of a laparoscopic approach should accrue to at least a minority of these patients. confocal laser endomicroscopy (cle) can provide real-time observation of cell structure and tissue morphology. in our study, we aimed to assess anastomotic perfusion using cle. method: the experimental rabbits were separated into two groups: group a (good anastomotic perfusion, n= ) and group b (poor anastomotic perfusion, n= ). partial colectomy and anastomosis were performed in groups a and b, and cle assessment of anastomotic perfusion was carried out after surgery. during continuous scanning, we counted the number of blood cells crossing a fixed point of the anastomotic stoma over the same period. results: with the aid of fluorescein sodium, the blood vessels were highlighted, and there was a significant difference in imaging between group a and group b. the average number of blood cells was .
/min in group a and . /min in group b (p. ), a significant difference. conclusion: cle allows real-time observation of the blood flow of the anastomotic stoma in vivo; therefore, it is feasible to assess anastomotic perfusion using cle in colorectal surgery. cigdem benlice, ahmet rencuzogullari, james church, gokhan ozuner, david liska, scott steele, emre gorgun; cleveland clinic background: intraoperative colonoscopy (ioc) is an adjunct in colorectal surgery (crs), especially in patients with malignancies, to detect the location of the primary or synchronous lesions as well as to assess anastomotic integrity. however, the effect of intraoperative colonoscopy on short-term outcomes during crs is a concern. this study aims to evaluate the safety, feasibility and postoperative outcomes of intraoperative colonoscopy in patients undergoing left-sided colectomy for colorectal cancer, using a nationwide database. patients and methods: patients undergoing elective left-sided colectomy with low pelvic anastomosis without any proximal diversion for colorectal cancer were reviewed from the american college of surgeons national surgical quality improvement program (acs-nsqip) procedure-targeted database according to their primary procedure current procedural terminology (cpt) code. subsequently, patients who underwent intraoperative colonoscopy were identified from concurrent cpt codes, and patients were divided into two groups based on simultaneous intraoperative colonoscopy. demographics, comorbidities and -day postoperative complications were evaluated and compared between the groups. multivariate logistic regression was conducted adjusting for significant factors between the groups. results: a total of patients were identified, and ioc was performed for ( . %) patients. objective: laparoscopic ileostomy is commonly performed for patients with colorectal obstruction due to cancer, peritonitis with perforation of the colon, or other reasons. reduced port surgery is a novel technique that may be performed when minimally invasive surgery and a cosmetic benefit are desired. the aim of this study was to evaluate the safety and feasibility of reduced port laparoscopic ileostomy for patients with advanced colorectal cancer before chemotherapy. methods: between july and august , patients who underwent reduced port laparoscopic ileostomy were included ( male and female; age: years). the outcomes were evaluated in terms of operation time, intraoperative blood loss and perioperative complications. surgical procedures: the patients were placed in the supine position and the operator stood on the left side. an access device with a wound protector (ez access, hakko, nagano, japan) was inserted at the future ileostomy site in the right lower abdomen, two -mm trocars were inserted, and pneumoperitoneum was maintained at mmhg with carbon dioxide. a -mm trocar was inserted in the left lower abdomen. a -mm flexible laparoscope was inserted through the access device port. after exploring the abdominal cavity, the end of the ileum was identified. a dye mark was then placed on the ileum cm proximal to the ileal end. the ileum marked by dye was grasped and extracted through the access device, and a brooke ileostomy was created. results: reduced port laparoscopic ileostomy was performed for patients with colorectal obstruction due to cancer before chemotherapy. the mean operative time was minutes, and the mean blood loss was . ml. three patients received one additional port. there were no intraoperative complications.
five patients ( . %) experienced postoperative complications (two deep surgical site infections, one pneumonia, one outlet obstruction and one renal dysfunction). there were no other intraoperative or postoperative complications. conclusion: reduced port laparoscopic ileostomy is a safe and feasible procedure for patients with advanced colorectal cancer before chemotherapy. methods: we performed elective lcr on patients for primary colorectal cancers between june and june . seventy-two patients were excluded from this study for the following reasons: patients underwent multiple-organ resection, and colorectal cancer was diagnosed as stage iv in patients. accordingly, patients were eligible for comparative analysis, with in group po (post operation) and in group c (control). in group po, past operative procedures were as follows: appendectomy ( %), digestive tract ( %), hepato-biliary-pancreatic ( %), gynecologic ( %), urologic surgery ( %), and others ( %). results: there were no significant differences between the two groups in asa (grade ≤ : vs. %, p= . ), bmi ( introduction: the treatment of rectal cancer requires highly skilled practice by the entire multidisciplinary team. important aims of treatment are to reduce the risk of residual disease in the pelvis, with lower morbidity, and to preserve good sphincter function. the tata procedure is transanal transabdominal radical proctosigmoidectomy with coloanal anastomosis. this technique was first developed in by dr. gerald marks to avoid a permanent colostomy for low-lying rectal cancer. this study reports the long-term results of the tata procedure for low rectal cancer. methods and procedures: a prospective study was conducted on patients with low rectal cancer between april and july in a tertiary referral university-affiliated center specializing in laparoscopic surgery. all resections were carried out by a dedicated colorectal surgery team, and a standard protocol was used for all pre- and postoperative care. all the patients underwent total mesorectal excision. results: consecutive patients ( male, female, mean age ) underwent the tata procedure, of them ( . %) after neoadjuvant radiochemotherapy. the mean operation time was min (range - ) and the mean estimated blood loss was ml (range - ). the overall incidence of morbidity was . % ( / ) and the mean hospital stay was . days. the mean follow-up period was . (range, - ) months, with a recurrence rate of . % ( / ), an overall estimated -year survival of . %, and a disease-free survival rate of . %. conclusion: laparoscopic total mesorectal excision with the tata procedure is safe, with excellent local recurrence and disease-free survival rates. jacek piatkowski, md, phd, marek jackowski, prof; clinic of general, gastroenterological and oncological surgery introduction: more than years ago, the laparoscopic technique came to be considered a fully accepted surgical method for the treatment of rectal cancer. the years since have seen a further search for a new surgical method that reduces invasiveness and improves treatment outcomes. it seems that such a method is transanal total mesorectal excision. the aim of this study was to evaluate this new method of rectal cancer surgery (tatme) after years of its use. methods: radicality of treatment (r resection, local recurrence), outcome of surgical treatment and quality of life of patients after surgery were evaluated. results: in the period from . . . to . . . , patients ( men, women) were operated on in the clinic.
in cases the indication for surgery was lower or middle rectal cancer and in cases high-grade dysplasia. all patients underwent laparoscopic rectal proctectomy with transanal access (tatme). in all cases, complete oncological radicality (r resection) was obtained. the average operation time was minutes. we used a two-team approach (cecil approach) with two laparoscopic sets, abdominal and perineal, starting at the same time. in the postoperative course, patients had signs of anastomotic leak ( of them required reoperation). the follow-up period is - months. none of the patients has had any recurrence of cancer. conclusions: . transanal tme for rectal cancer surgery is an alternative method to conventional laparoscopic surgery. . in a large proportion of patients with lower and middle rectal tumors, abdominoperineal resection with permanent colostomy can be avoided. background: the double stapling technique (dst) has become widespread for colorectal anastomosis, especially after low anterior resection. in colorectal cancer treatment, heald reported total mesorectal excision (tme) in , and it has been accepted as the standard technique for rectal resection due to the decreased local recurrence rate and improved functional results. with the advent of dst, it has become possible to preserve the anus even in cases with a lesion in the lower rectum. laparoscopic surgery for colon cancer was introduced in the s and has had promising results, including long-term outcomes. with the spread of laparoscopic surgery, it has been applied to rectal resection, despite technical difficulty. one of the reasons for the difficulty is the high rate of anastomotic leakage, a critical adverse effect of low anterior resection (lar). thus, risk factors for anastomotic leakage have been widely discussed, including technical factors such as pre-compression and number of firings. the decisive difference between conventional lar and laparoscopic lar with dst is the stapler used for transection of the rectum. the laparoscopic staplers which are currently available are thought to be less than ideal, and there is little evidence on the specific specifications of staplers for laparoscopic surgery. materials and methods: all methods described in this study were approved by the institutional ethical review committee. we reviewed colon and rectal wall thickness by histological examination using h&e staining of the distal margin of the resected specimens of the patients who conclusions: rstc for severe acute uc is at least as safe as the laparoscopic approach. although the robotic cohort had more comorbidities, major postoperative complications, readmissions, and reoperation rates were lower when compared with lstc. rstc was also associated with an earlier return of bowel function and a shorter length of stay. a prospective study with larger numbers is needed to see if the superiority of robotic versus laparoscopic approaches is reproducible. introduction: complete mesocolic excision (cme) has been advocated based on oncologic superiority but is not commonly performed in north america. furthermore, many data are limited to case series, with few comparative studies. therefore, the objective was to systematically review studies comparing the short- and long-term outcomes between cme and non-cme colectomy for colon cancer. methods: a systematic review was performed according to prisma guidelines of medline, embase, healthstar, web of science, and cochrane library.
studies were only included if they compared conventional (non-cme) resection to cme for colon cancer. quality was assessed using the methodological index for non-randomized studies (minors). the main outcome measures were short-term morbidity and oncologic outcomes. study eligibility, data extraction and quality assessment were performed by two independent reviewers, and disagreements were resolved by consensus. weighted pooled means and proportions with %ci were calculated using a random-effects model when appropriate. results: out of citations, studies underwent full-text review and met the inclusion criteria, of which were unique series. the mean minors score was . (range - ). the mean sample size in the cme group was (range - ) and (range - ) in the non-cme group. of the unique studies, included only right-sided resections, and . % ( % ci . - . ) of the remaining were right-sided colectomies. of the studies that reported surgical approach, . % ( %ci . - . ) of cme procedures were performed laparoscopically. there were papers reporting plane of dissection, with the cme plane achieved in . % ( . - . ). mean operative time in the cme group was minutes (range - ) and in the non-cme group minutes (range - ). perioperative morbidity was reported in studies, with pooled overall complications of . % ( %ci . - . ) for cme and . % ( %ci . - . ) for non-cme resections. anastomotic leak occurred in . % ( %ci . - . ) of cme versus . % ( %ci . - . ) of non-cme colectomies. cme surgery consistently resulted in more lymph nodes retrieved, a longer distance to the high tie, and greater specimen length. there were studies that compared - or -year overall or disease-free survival, or local recurrence. only studies reported statistically significantly higher disease-free or overall survival in favour of cme. local recurrence was lower after cme in of the reported studies. conclusions: the quality of the current evidence is limited and does not consistently support the superiority of cme. more rigorous data are needed before cme can be recommended as the standard of care for colon cancer resections. gilberto lozano dubernard, md, facs, ramon gil-ortiz, md, gustavo cruz-santiago, md, bernardo rueda-torres, md, javier lopez-gutierrez, md, facs; hospital angeles del pedregal introduction: to assess the feasibility of single-stage colorectal laparoscopic re-intervention without ostomy. colonic laparoscopic interventions in patients who previously underwent a minimally invasive procedure constitute the current frontier in the management of acute colorectal pathology. this includes patients with fecal peritonitis due to diverting procedures already treated surgically. the outcome of our patients could significantly improve if the surgical procedure is performed in one stage, with no stoma. method and procedures: from september to june , one hundred thirty-two patients underwent colorectal laparoscopic surgery. five of these patients developed complications: three perforations due to colonoscopy and two due to dehiscence of the anastomosis. these five patients underwent a second laparoscopic procedure that included resection and anastomosis; no stoma was required. results: all five patients underwent a second laparoscopic procedure due to an anastomotic leak. no stoma was required. the procedure consisted of resection of the previous anastomosis, re-anastomosis, abdominal lavage, aspiration and drain placement. all patients were supported with parenteral nutrition. there were no surgical complications. only one patient developed pneumonic symptoms, which resolved.
conclusion: the reported results, with no conversions and no mortality in our series of patients, suggest that single-stage laparoscopic re-intervention is feasible despite fecal peritonitis. introduction: total mesorectal excision is known to be the gold standard surgical procedure for rectal cancer. subsequently, complete mesocolic excision (cme) has been recognized as an essential surgical procedure for colon cancer. the transverse colon is a relatively uncommon location for colon cancer. the variety of vessels, the mobilization of the splenic flexure and the dissection close to the pancreas make operations for transverse colon cancer complicated. laparoscopic transverse mesocolic excision in our hospital is presented. method: laparoscopic surgery is conducted with five trocars in the lithotomy position. the inferior mesenteric vein is divided after dissection of the descending colon with a medial approach. the lower edge of the pancreas is exposed near the inferior mesenteric vein and dissected along toward the tail of the pancreas. the splenic flexure is mobilized with a lateral approach, and the dissection between the transverse mesocolon and the lower edge of the pancreas is continued in the direction of the pancreatic head. on exposure of the superior mesenteric artery and vein, the origins of the middle colic artery and vein are divided. the transverse mesocolon is separated from the pancreatic head and the duodenum, preserving the gastrocolic trunk of henle and the right gastroepiploic vein. the hepatic flexure is mobilized and cme for the transverse colon is completed. this method, which we call the 'tail to head of pancreas' approach, has been performed since september . it is carried out within one continuous surgical view and seems to be a simple procedure for cme with central vascular ligation for transverse colon cancer. there were no intraoperative complications, and there was one postoperative pancreatitis of clavien-dindo grade . conclusion: our method, the 'tail to head of pancreas' approach, for transverse mesocolic excision is simple, safe and feasible. introduction: anastomotic complications after stapled anastomosis in colorectal cancer surgery are a considerable problem. there are various types of anastomotic complication, and they differ in severity. this study aimed to evaluate the impact of intraoperative colonoscopy on the detection of anastomotic complications, and its effectiveness in their intraoperative treatment, after anterior resection (ar) and low anterior resection (lar) for colorectal cancer. methods: from dec. to jul. , a total of patients who underwent anastomosis between the sigmoid colon and rectum after colorectal resection were reviewed retrospectively. intraoperative colonoscopy has been performed routinely in our hospital since december after anterior resection and low anterior resection. to identify the effectiveness of intraoperative colonoscopy, we compared postoperative complications with those of the non-intraoperative colonoscopy group from the preceding months. intraoperative colonoscopy was performed after anastomosis to visualize the anastomosis line and to perform an air leakage test. if an anastomotic defect or moderate bleeding was found on intraoperative colonoscopy, it was managed by means of reinforcement suture or transanal suture repair. we used logistic regression to analyze anastomotic complications between the two groups with or without intraoperative colonoscopy.
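as a hedged aside, the relative risk and % confidence interval reported in the results below can be computed from a simple two-by-two table with the standard log-based method; the sketch below shows this calculation. the counts are invented placeholders, since the study's numbers are not reproduced here.

    # minimal sketch: relative risk with 95% ci for anastomotic complications,
    # intraoperative colonoscopy (ioc) group vs. no-ioc group.
    # all counts are hypothetical placeholders, not the study's data.
    import math

    events_ioc, total_ioc = 4, 85   # hypothetical complications / patients, ioc group
    events_no, total_no = 7, 90     # hypothetical complications / patients, no-ioc group

    rr = (events_ioc / total_ioc) / (events_no / total_no)

    # standard error of log(rr) for two independent binomial samples
    se = math.sqrt(1 / events_ioc - 1 / total_ioc + 1 / events_no - 1 / total_no)
    low = math.exp(math.log(rr) - 1.96 * se)
    high = math.exp(math.log(rr) + 1.96 * se)

    print(f"rr = {rr:.2f} (95% ci {low:.2f}-{high:.2f})")

a confidence interval spanning 1.0 would be consistent with the non-significant difference the abstract reports.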
results: of the patients who underwent intraoperative colonoscopy after ar (n= ) and lar (n= ), abnormal findings, including bleeding and air leak, were found in patients ( . %). among those, cases were observed without any procedure, and additional procedures were performed in patients ( . %; transanal suture ( ), lembert suture ( )). postoperative complications developed in patients: patients had anastomotic bleeding ( . %), patients had ileus ( . %), patient had pneumonia ( . %), and patients had minor complications ( . %; acute urinary retention, chylous drainage, laparoscopic port-site bleeding). among the patients who had anastomotic bleeding, patients were treated by endoscopic clipping and patients were cured by conservative treatment. there was no postoperative anastomotic leakage. the numbers of ar and lar cases were and in the non-intraoperative colonoscopy group; there was no significant difference between the two groups (p= . ). the proportion of laparoscopic surgery was . % and . % in the intraoperative colonoscopy and non-intraoperative colonoscopy groups, respectively; this difference was statistically significant (p= . ). however, there was no significant difference in the anastomotic complication rate between the two groups (rr= . , % ci . - . ). conclusions: although there was no significant difference in the postoperative anastomotic complication rate between the two groups, intraoperative colonoscopy may be a valuable method for decreasing postoperative complications by visualizing the anastomosis line and enabling additional procedures. conclusion: it was suggested that lymph node dissection of both the middle and left colic regions is necessary for splenic flexure colon cancer, because lymph node metastasis was recognized in both regions. aims: laparoscopic right hemicolectomy has become the standard of care for treating cecal, ascending and proximal transverse colon cancer in many centers. most centers use laparoscopic colectomy with extracorporeal resection and anastomosis (lc). single-incision laparoscopic colectomy with intracorporeal resection and extracorporeal anastomosis (sc) remains controversial. the aim of the present study is to compare these two techniques using propensity score matching analysis. methods: we analysed the data of patients who underwent laparoscopic right hemicolectomy with lc or sc between december and december . the propensity score was calculated from age, gender, body mass index, american society of anesthesiologists score, previous abdominal surgery and d lymph node dissection. short-term outcomes were recorded. postoperative pain was evaluated using a visual analogue scale (vas), with postoperative analgesic use as an outcome measure. results: the length of the skin incision in the sc group was significantly shorter than in the lc group: median (range) ( . - ) cm versus ( - ) cm (p= . ). the vas score on day and day after surgery was significantly lower in the sc group than in the lc group: median (range) ( - ) versus ( - ) on day (p= . ) and median (range) ( - ) versus ( - ) on day (p= . ). the number of analgesic requirements on day and day after surgery was significantly lower in the sc group: median (range) ( - ) times versus ( - ) times on day (p= . ) and ( - ) times versus ( - ) times on day (p= . ). there were no significant differences in operative time, intraoperative blood loss, the number of lymph nodes removed or postoperative course between the groups. conclusions: sc for right colon cancer is safe and technically feasible.
sc reduces the length of the skin incision and postoperative pain compared with conventional lc. patients were divided into the following groups: cephalo-medial-to-lateral approach group (cml group, n= ) and medial-to-lateral approach group (ml group, n= introduction: laparoscopic techniques have been widely used in the treatment of colorectal cancer, offering minimally invasive advantages while also achieving good radical oncologic results. however, laparoscopic surgery is not recommended for t colorectal cancer. methods: we retrospectively collected pt colorectal cancer data from to in guangdong general hospital; all cases underwent radical surgery. results: a total of cases were enrolled in the pt group, including cases in the laparoscopic group and cases in the open group; the conversion rate was . %. there was no difference in baseline data (age, sex, bmi, asa, etc.) (p. ). there was a significant difference between the two groups (p. ) in blood loss, postoperative complications and postoperative recovery index. for pathologic t a/b, combined-organ resection and postoperative recurrence, the laparotomy group had more cases, and there was a statistically significant difference between the two groups (p< . ). the - and -year overall survival rates were . % and . % for the lap group and . % and . % for the open group (p= . ); meanwhile, the - and -year disease-free survival rates were similar (p= . ). iiic stage, lymph node status, ca - and adjuvant chemotherapy were independent prognostic factors affecting overall survival. age, pt a/b, iiic stage, ca - and adjuvant chemotherapy were independent influencing factors for disease-free survival. conclusions: laparoscopic surgery for pt colorectal cancer not only retains its minimally invasive advantages but also achieves similar long-term outcomes. however, more multicenter, prospective, large-sample clinical studies are needed to validate our findings. introduction: lymph node (ln) retrieval after surgery is important. in the present study we evaluated the efficacy of the fat dissolution technique, using fluid containing collagenase and lipase, to avoid stage migration after laparoscopic colorectal surgery. methods: seventeen patients who underwent laparoscopic ln dissection for colorectal cancer were evaluated. first, unfixed lns within the resected mesentery were explored by visual inspection and palpation immediately after the operation by the surgeon, which is the most common practice in japan. subsequently, the fat dissolution technique was used on the remnant fat tissue, and the lns were evaluated again. the primary endpoint was whether the second assessment increased the number of lns evaluated. results: the median number of lns identified at the first and second assessments was and , respectively, resulting in a significant increase in the total number of lns evaluated ( vs. , p< . , paired t-test). one positive node was identified among all the additional lns identified ( . %; / ). although staging was not altered in any patient, the second assessment resulted in an increase in the originally insufficient number of lns evaluated (< for stage ii) in three patients, whose treatment might have been altered. tumor cells detected after the fat dissolution technique stained for carcinoembryonic antigen and cytokeratin- . conclusion: using the fat dissolution liquid on the remnant fat tissue of the mesentery of the colon and rectum enabled identification of additional lns.
this method should be considered when the number of lns identified after conventional ln retrieval is insufficient, and it may avoid stage migration. aim: the aim of this study is to evaluate the pathological resection margins after laparoscopic intersphincteric resection for low rectal cancer. method: from to , there were eight cases of laparoscopic intersphincteric resection for low rectal cancer. we evaluated the clinicopathological findings and the positivity of the pathological resection margins. results: the median distance from the anal verge to the tumor was mm (range, - ), and the median diameter of the tumor was mm (range, - ). no case received neoadjuvant therapy. the estimated tumor depth was ct in cases ( . %) and ct in cases ( . %), and the actual tumor depth was ptis in cases ( . %), pt in cases ( . %) and pt in cases ( . %). the median distal resection margin was mm (range, - ). the pathological resection margins (proximal, distal and circumferential) were negative in all cases ( %). there was no mortality, but morbidity occurred in two cases (one case of anastomotic leakage and one case of small bowel obstruction). no recurrence or distant metastasis was observed in the follow-up period. conclusion: there was no case of a positive resection margin in this series. our patient selection, indications and technique were considered precise and appropriate. introduction: fistulas of the intestine to the vagina or the bladder constitute a highly morbid entity, with severe functional limitation and loss of quality of life. their diagnosis is complex, and even more so their treatment, which includes a wide range of possibilities, from simple diverting colostomy in search of spontaneous closure of the fistula to complete correction of the pathology with resections, anastomoses and minimally invasive reconstructions. we present our experience in the minimally invasive laparoscopic treatment of enterovaginal and enterovesical fistulas over the last years. results: a total of patients were operated on in this period, women and men, all laparoscopically, with intestinal resection: in cases the large intestine, in one the small intestine, and in another case both were involved. all had resection and intestinal anastomosis, and none required a colostomy. primary closure of the fistula was required in patients, there was conversion to open surgery in one case, and there was no recurrence. patients had prolonged hospitalization for localized infections, and required reintervention for revision. one patient suffered an umbilical incisional hernia at the extraction site, which was corrected one year after the laparoscopy. conclusion: minimally invasive surgery in patients with this type of pathology is an excellent strategy for the integral management of these patients. teamwork guarantees good results. robbie sparks, dr, ronan cahill; mater misericordiae university hospital background: precise preoperative localisation of colonic cancer is a prerequisite for correct oncological resection. effective endoscopic lesional tattoo is vital for small, radiologically unseen tumors planned for laparoscopic resection, but its practice may be imperfect.
methods: retrospective review of consecutive patients with preoperative endoscopic lesional tattoo who underwent laparoscopic colonic resection, identified from our prospectively maintained cancer database, with supplementary clinical chart and radiological, histological, endoscopic and theatre database/logbook interrogation. results: patients ( males, mean age years, median bmi . kg/m², left-sided lesions, screen-detected, benign polyps, % conversion rate). in operations ( %) tattoo visibility was documented, with tattoo absence noted in ( . %), although the tattoo was identifiable in the pathological specimen in four. in those with "missing tattoos", six of the lesions were radiologically occult, and in three the tumor was found in a different colonic segment than had been judged at colonoscopy. four patients had on-table colonoscopy and five were converted to laparotomy ( % conversion rate, p. ). the mean postoperative length of stay was . (range - ) days. one patient's segmental resection contained only benign pathology, requiring a second operation to remove the cancer. on univariate analysis, time between endoscopy and surgery (but not patient age, gender, bmi, endoscopist or surgeon seniority, tumor size or location) was significantly associated with absence of tattoo intraoperatively (p= . ). conclusion: recording related to tattooing is variable, but definite lack of gross tattoo visualisation significantly impacts the procedure. the mechanism of tattoo absence is multifactorial, needing careful consideration, but solvable. the aim of the present study was to perform a systematic review of the literature to determine the role of antibiotics in the management of acute uncomplicated diverticulitis (aud). diverticular disease is the most common disease of the large bowel and poses a significant burden on healthcare resources. in the united states alone, the cost of diverticular disease has been estimated at over $ billion, making it the fifth most important gastrointestinal disease economically. the use of antibiotics in the management of aud, however, is primarily based on expert opinion, as current high-quality evidence is lacking. recent studies have questioned not only the optimal type and duration of antibiotic regimens but also whether antibiotics provide any benefit in the treatment of aud. conclusions: antibiotic use in patients with acute uncomplicated diverticulitis is not associated with a reduction in major complications, readmissions, treatment failure, progression to complicated diverticulitis, or need for elective and emergent surgery. however, it increases the length of hospital stay. given the risk of selection bias in the included studies, further randomized trials are needed to clarify the need for antibiotics in uncomplicated diverticulitis. laparoscopic para-aortic lymph node resection for colorectal cancer aim: we want to highlight the feasibility of a totally laparoscopic sigmoidectomy with transanal extraction of the specimen. methods: the patient is a -year-old obese woman (bmi = kg/m²) with a history of laparoscopic cholecystectomy and chronic constipation. she was treated three months previously for sigmoid diverticulitis complicated by a pelvic abscess; the evolution was favorable under antibiotic therapy and percutaneous drainage of the abscess. colonoscopy showed multiple diverticula located between and cm from the anal verge. prophylactic sigmoidectomy was performed laparoscopically using trocars ( mm supraumbilical, mm in the right iliac fossa and mm in the right flank).
the specimen was extracted transanally, thus avoiding a suprapubic incision. the steps of the intervention were: mobilisation of the left colon; closure of the distal left colon stump; rectal stump lavage; opening of the rectum; transanal introduction of the anvil; transanal extraction of the specimen; closure of the rectal stump; positioning of the anvil in the colon; colorectal anastomosis. results: the intervention took minutes, with no perioperative incidents. a liquid diet was authorized on the night of the intervention. the postoperative course was favorable, with discharge on postoperative day . histopathological examination of the surgical specimen confirmed the presence of sigmoid diverticula. conclusion: laparoscopic sigmoidectomy with transanal extraction of the specimen for benign disease is an attractive technique with satisfactory results. it avoids a suprapubic incision and its parietal and aesthetic complications. chengzhi huang; guangdong general hospital (guangdong academy of medical science) background: colorectal cancer (crc) is one of the most common malignant diseases in the world. among the causes of death from crc, metastasis to the liver or lung is the major factor. however, there is still a lack of a precise tumor biomarker that can accurately predict the clinical outcome of crc. salt-inducible kinase (sik ) encodes a serine kinase of the amp-activated protein kinase (ampk) family, which may play critical roles in tumorigenesis and tumor progression. this study aimed to investigate the expression and clinical significance of sik in crc patients. methods: the expression of sik protein was measured by western blot and immunohistochemical analysis. sik mrna expression in cancerous tissue was measured by rt-pcr. results: the expression level of sik was correlated with the following factors: tumor invasion (t stage), lymph node metastasis, clinical stage (tnm) and tumor location. down-regulated sik implied poor clinical outcome on kaplan-meier analysis (p-value. ) and may act as an independent risk factor in crc patients. background: surgical specimens for resected colon cancer vary in quality, and there remains no universally accepted technique to guide resection margins. a minimum of lymph nodes provides some quality assurance; however, this remains a crude marker of optimal oncological surgery. a tool to precisely identify lymphatic drainage within the mesentery could improve the oncologic quality of resection and better guide adjuvant treatment through more optimal mesenteric lymphadenectomy. while fluorescence imaging (fi) has been described to identify nodal disease in several other cancers, feasibility and best practices have not been established in colon cancer. we describe a novel technique of fi using indocyanine green (icg) to identify lymphatic spread and potentially guide optimal mesenteric lymphadenectomy in colon cancer. methods: three consecutive patients with colon cancer undergoing laparoscopic resection had peritumoral subserosal injection of icg for fi after extracorporealization of the mobilized specimen. three concentrations of icg were injected: mg/ ml, mg/ ml, and mg/ ml. a total of ml was given for each patient. using a modified laparoscopic camera, the icg was excited by light in the near-infrared (nir) spectrum for real-time visualization of the lymphatic drainage. the main outcome measure was identification of lymphatic drainage. results: three patients with right-sided primary colon cancer were evaluated.
all three patients had successful identification of the lymphatic drainage pattern along the mesentery. the most successful protocol was a ml (concentration mg/ ml) subserosal injection at points in close proximity ( cm) to the tumor with a -gauge needle, then waiting minutes for complete mapping. no intraoperative or injection-related adverse effects occurred with -day follow-up. the median lymph node yield was . all specimens had tumor-free margins. conclusion: from this small series, fluorescence imaging with icg is a potentially safe and feasible technique for identifying mesocolic lymphatic drainage patterns. this proof of concept and protocol will lead to future studies examining the utility of fluorescence imaging to guide more precise surgery in colon cancer. introduction: anastomotic leakage in colorectal surgery is a dangerous event with an occurrence rate ranging from to %. the associated mortality rate is between and %. white-light intraoperative subjective surgical assessment (the most frequently used approach) underestimates the actual anastomotic leakage rate. intraoperative tissue perfusion assessment by indocyanine green (icg)-enhanced fluorescence has been reported in multiple clinical scenarios in laparoscopic/robotic surgery, as well as for bowel perfusion assessment. this technology can detect microvascular impairment, potentially preventing anastomotic leakage. we reviewed the literature and present our data to evaluate the feasibility and usefulness of icg-enhanced fluorescence in the intraoperative assessment of vascular peri-anastomotic tissue perfusion in colorectal surgery. methods and procedures: a pubmed narrative literature review was performed. moreover, out of a total of robotic colorectal cases, we retrospectively analyzed icg-enhanced fluorescence robotic colorectal resections ( left colectomies, rectal resections, right, transverse, pancolectomy). results: with icg technology, the biggest (n> ) case series showed a rate of . - % of cases in which the level of resection was changed based on icg. icg technology may variably reduce the anastomotic leak rate from to %. however, the threshold values that define actual sub-optimal perfusion are still under investigation. in our experience, out of icg cases performed, the conversion, intraoperative complication, dye allergic reaction and mortality rates were all %. postoperative surgical complications were: case of leak ( . %) and small bowel obstruction from an incarcerated hernia ( . %). in cases with normal white-light assessment, the level of the anastomosis was changed after icg showed ischemic tissue. despite the application of icg, anastomotic leak was registered. conclusions: icg-enhanced fluorescence may intraoperatively change the white-light-assessed resection/anastomotic level, potentially decreasing the anastomotic leakage rate. our data show that this technology is safe and feasible and may prevent anastomotic leakage. however, the decision making is still too subjective and not data-driven. at this stage, icg, besides being a promising technique, does not have a high level of evidence (most of the reports are retrospective). randomized prospective trials with adequate statistical power are needed, and precise standardization of injection dose and timing is required. the main challenge is to develop a method to objectively obtain a real-time intensity assessment. this may provide objective metric thresholds for intraoperative evidence/data-based surgical decision making.
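the objective real-time intensity assessment called for above is essentially a signal-processing task. the sketch below shows one hypothetical way to reduce a fluorescence time-intensity curve (mean intensity of a region of interest per video frame) to candidate perfusion metrics such as baseline, peak, time-to-peak and maximum ingress slope. it is an illustration under stated assumptions, using a synthetic curve, and not a validated or published algorithm.

    # minimal sketch: candidate objective metrics from an icg time-intensity
    # curve; the curve here is synthetic, standing in for per-frame mean
    # intensity of a region of interest from the nir camera stream.
    import numpy as np

    fps = 25.0                                  # assumed frame rate
    t = np.arange(0.0, 60.0, 1.0 / fps)         # 60 s observation window
    # synthetic inflow: flat baseline, then exponential rise after ~10 s
    signal = 20.0 + 80.0 * (1.0 - np.exp(-np.clip(t - 10.0, 0.0, None) / 8.0))

    baseline = signal[t < 10.0].mean()          # pre-inflow intensity
    peak_idx = int(np.argmax(signal))
    peak, time_to_peak = signal[peak_idx], t[peak_idx]

    # maximum ingress slope, one commonly proposed surrogate of perfusion quality
    max_slope = float(np.max(np.gradient(signal, 1.0 / fps)))

    print(f"baseline={baseline:.1f}  peak={peak:.1f}  "
          f"time-to-peak={time_to_peak:.1f}s  max-slope={max_slope:.2f}/s")

metrics of this kind, computed live on the camera feed and compared against validated cut-offs, are one plausible route to the evidence-based thresholds the abstract calls for.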
introduction: according to the world health organization, colorectal cancer is the rd most commonly diagnosed cancer in the world. one of the main risk factors for the development of colorectal cancer is obesity, which increases the risk of colorectal cancer by % in women and % in men per kg/m² of bmi. bariatric surgery is one of the treatments considered able to achieve and sustain a significant amount of intentional weight loss. considering that bariatric surgery decreases obesity, this intentional weight loss would seem to provide a favorable outcome in terms of the diagnosis and prognosis of colorectal cancer. a systematic review of the literature was conducted via pubmed to identify relevant studies from january through may . the main outcome for this study was to assess whether patients who underwent bariatric surgery (restrictive and malabsorptive procedures) had an increased or decreased risk of colorectal cancer. all studies included in this meta-analysis are retrospective cohort studies. results were expressed as the standard difference in means with standard error. statistical analysis was done using fixed-effects meta-analysis to compare the mean values of the two groups, bariatric surgery and non-surgery, in patients with colorectal cancer (comprehensive meta-analysis version . . software; biostat inc., englewood, nj). results: four out of studies were quantitatively assessed and included in the meta-analysis. among the four studies, , patients underwent bariatric surgery and , did not. there was a significant decrease ( . ± . ; p= . ) in the risk of developing colorectal cancer in patients who underwent bariatric surgery compared with those who did not have surgery. conclusion: bariatric surgery patients appear to have a decreased risk of colorectal cancer compared with patients who did not have bariatric surgery. guh jung seo, hyung-suk cho; department of colorectal surgery, dae han surgical clinic, gwangju, south korea introduction: the incidence of rectal carcinoid tumors is increasing due to the widespread use of screening colonoscopy. endoscopic mucosal resection (emr) is a useful method for small rectal carcinoid tumors (≤ mm) because of its simplicity, quickness and low complication rate. we aimed to describe our experience and evaluate the outcomes of emr for rectal carcinoid tumors. the patients enrolled in this study were patients with small rectal carcinoid tumors who underwent emr using a submucosal injection technique with an epinephrine-saline mixture between august and october . all medical records, including characteristics of the patients and tumors and complications, were retrospectively reviewed. results: the patients were men and women, with a mean age of . years (range, - years). en bloc resection was performed by emr in all cases. the mean endoscopic size of the tumors was . mm (range, - mm). the pathologically measured mean size of the resected specimens was . mm (range, - mm). the mean size of the resected carcinoid tumors was . mm (range, . - mm). the tumor appeared as a submucosal tumor in cases and as a polyp in . histological examination revealed that cases had a positive tumor resection margin and case had an undetermined resection margin. of these patients, patients underwent endoscopic treatment and patients underwent transanal excision; no residual tumor was found in the additionally removed tissue. there were cases with emr-related complications: early post-procedural bleeding and post-polypectomy syndrome.
there was no significant bleeding requiring blood transfusion, and no perforations. conclusion: endoscopic mucosal resection is a relatively safe and useful method for the treatment of small rectal carcinoids in selected patients.

background: disturbance of sexual function often occurs after operations for rectal cancer. the relationship between autonomic nerves and arteries in the pelvis was examined. methods: clinical studies of male patients with resected rectal cancer were performed using the snap gauge method, penile-brachial index and evoked bulbocavernosus reflex. in canine experiments, pelvic splanchnic nerve (psn) electrical stimulation, arterial flow measurement, corpus cavernosum pressure measurement and muscle strip studies using drugs were evaluated. results: in the clinical studies of male patients, transection of the hypogastric nerve (hgn) and the sympathetic trunk did not affect erectile function in the postoperative course. in animal experiments, transection of these nerves did not affect the increase in inner pressure of the penile cavernosum. in postoperative cases in which only one side of the lower-grade branches of the psn (s ) was preserved, erectile function was preserved. in animal experiments in which the psn of one side was disturbed, the ipa flow of the same side decreased, while the flow of the other side increased. we evaluated the role of adrenergic components in the psn on erectile function in the dog. the effect of norepinephrine hydrochloride on canine vascular smooth muscle was examined in vitro. vascular smooth muscle strips from the ipa relaxed longitudinally. electrical stimulation of the psn increased blood flow in the ipa and also elevated the cavernous pressure. these increases were blocked in part by phentolamine, but not by propranolol or atropine. the effects of cholinergic and adrenergic agonists and antagonists on mechanical responses were also examined in muscle strips obtained from various arteries in the intra-pelvic region, including the ipa. norepinephrine induced contraction in the iliac artery and relaxation in the ipa, and both responses were blocked by phentolamine but not by propranolol. these findings suggest that in the dog, α-adrenergic components projected through the psn may contribute to penile erection. conclusion: blood flow in the ipa was controlled mainly by the ipsilateral psn, with compensatory control by the contralateral psn. it is also conceivable that erectile function through the psn is controlled by the sympathetic, not the parasympathetic, nerve.

introduction: currently, neoadjuvant chemoradiotherapy (ncrt) followed by low anterior resection or abdominoperineal resection is the standard treatment for locally advanced rectal cancer. ncrt can improve resectability, achieve better sphincter preservation and reduce local recurrence. although total mesorectal excision is the standard treatment for advanced rectal cancer, recent trends toward minimally invasive treatment have led to an increase in local excision or "watch and wait" in patients with an excellent response to ncrt. the purpose of this study, part of ongoing research, is to critically evaluate the feasibility of "non-operative treatment" for rectal cancer in a district hospital.
methods and procedures: a total of patients with rectal cancer, treated with ncrt from january to august at the "carlo urbani" district hospital in jesi (italy), were retrospectively reviewed. all patients had histologically confirmed primary adenocarcinoma of the rectum located within cm of the anal verge. the included patients completed ncrt and had no recurrent disease, distant metastases or synchronous malignancies. they were classified according to mandard's tumor regression grade (trg) into two clusters: group a (trg - ) and group b. results: the average age was . years, and patients were male. five patients underwent abdominoperineal resection, and % fell within group a. six patients had involved lymph nodes. four patients suffered relevant complications, such as wound complication, anastomotic leak, operative reintervention and death. univariate analysis showed that the main predictors of tumor regression were absence of lymph-node involvement on initial imaging (p < . ), normal initial carcinoembryonic antigen level (p < . ) and tumor downstaging on imaging (p < . ). in addition, the most relevant complications occurred in elderly patients, although a good clinical response was observed. furthermore, % of patients were found to be complete pathologic responders on examination of the surgical specimen. conclusions: interest in non-operative management of patients with a complete clinical response after ncrt has been growing, but some studies have questioned its oncologic safety. patients with a complete clinical response can expect good survival, but they may still harbor residual disease. no consensus on a "watch and wait" policy in rectal cancer has yet been reached. our data did not entirely support this policy, although it might be the best strategy, based on the predictors of tumor regression, to avoid the complications associated with surgery in elderly patients with significant medical comorbidities and fear of a permanent stoma.

introduction: conventional multi-incision laparoscopic surgery for rectal cancer is now widely accepted as a successful alternative to laparotomy, bestowing specific advantages without detriment to oncological outcome. evolving from this, single-incision laparoscopic surgery (sils) has been successfully utilized for the removal of colonic tumors, but the literature lacks sufficient data on the suitability of sils for rectal cancer, especially for total mesorectal excision (tme), particularly regarding oncological outcome. we report the short-term clinical and oncological outcomes from a large retrospective observational analysis of sils tme for rectal cancer. methods: rectal cancer patients who underwent transumbilical single-incision laparoscopic tme were recruited into the current study. short-term perioperative clinical parameters and oncological outcomes were observed, all patients were followed up after surgery, and the preliminary results are summarized here. results: operations were accomplished successfully with single-incision laparoscopy; patients were converted to a multiport approach and to laparotomy; no diverting ileostomy was performed. the average operative time was ( . ± . ) min, with an average blood loss of ( . ± . ) ml; the median postoperative hospital stay was ( . ± . ) days.
all patients received an r resection and the surgical margins were confirmed negative in all cases; the median number of harvested lymph nodes was . ± . , and the specimens met the requirements of tme. there were postoperative complications; no operation-related mortality or postoperative anastomotic leakage was observed. no patient developed recurrence over a median follow-up of months. conclusions: total mesorectal excision for rectal cancer can be safely performed using a transumbilical single-incision laparoscopic technique, with acceptable short-term clinical and oncological outcomes.

background: any surgical trauma induces an inflammatory response, which is considered a negative factor in the general immune response, especially in malignant disease. c-reactive protein (crp) is an acute-phase protein often used as a marker of surgical trauma. stent treatment has been used for colonic obstruction in palliative cases for many years, and also as a bridge to surgery in selected cases. in a pilot study we compared the inflammatory response after acute stent treatment versus surgery for malignant colonic obstruction. method: we compared two consecutive series of treatment of acute malignant colonic obstruction, stent treatment or emergency surgery, during - . all patients were admitted with acute colonic obstruction due to colorectal cancer. the choice of treatment was based on the attending senior colorectal surgeon's preference; patient comorbidities and disseminated disease were considered. patient age, crp, time to first defecation and length of stay were recorded. results: a total of patients were identified in a retrospective analysis. patients had acute stent treatment and had acute surgical treatment for colonic obstruction, all due to colorectal cancer. median age was y ( - ), with no difference between the groups. there was no difference in metastatic disease between the groups. median time to first defecation after treatment was significantly shorter for stented patients ( h ( - )) compared with those operated ( h ( - )) (p < . ). median hospital stay was also shorter in the stent group, days ( - ), versus days ( - ) in the surgical group (p = . ). crp did not differ between the groups before treatment. both treatments resulted in increased crp levels at postoperative days and , but crp levels were significantly higher in the surgical group than in the stent group at both time points (pod p = . , pod p < . ). conclusion: acute stent treatment of malignant colonic obstruction seems to induce a less pronounced inflammatory response than surgery, as shown by a significantly smaller increase in postoperative crp, together with a shorter time to first defecation and a shorter hospital stay.

introduction: meckel's diverticulum is the most common congenital abnormality in newborns, present in about - % of them. diagnosis of meckel's diverticulum requires a high index of suspicion, and even with modern imaging technologies it is often made intraoperatively. what to do when an asymptomatic diverticulum is found incidentally during surgery for other causes remains a matter of discussion. objective: the aim of this article is to report symptomatic and asymptomatic incidentally found cases seen in a fourth-level hospital in colombia. the reports of histopathologic examinations carried out in the hospital in the last years were reviewed, searching for those containing meckel's diverticulum in their diagnosis.
patients were divided into asymptomatic and symptomatic groups. the asymptomatic group was defined as patients operated on for a different indication in whom a meckel's diverticulum was found incidentally. morbidity was divided into early and late complications after the initial surgery. results: from january to june , a total of pathology reports included the diagnosis of meckel's diverticulum. a total of adult patients were retrieved. of the patients with meckel's diverticulum, were symptomatic, with small bowel obstruction (sbo) the most common complication requiring surgical removal. conclusion: the correct approach to patients with diverticular pathology allows early identification and appropriate management of the surgical complications that may present.

robert j czuprynski, md, grace montenegro, md; saint louis university hospital. presacral masses are a rare entity, with an incidence of . %, and can be classified into several categories, including inflammatory, neurogenic, congenital, osseous and miscellaneous. in this case, a neuroendocrine tumor was identified with concern for iliac-chain lymphatic and gluteal metastasis. the patient underwent abdominoperineal resection, excision of the presacral mass, lymph node biopsy and omental flap. final pathology returned as a grade ii neuroendocrine tumor arising from a tailgut cyst. a -year-old female with a ten-year history of recurrent perianal, ischiorectal and deep postanal abscesses presented with a presacral mass, biopsy-proven to be a well-differentiated neuroendocrine tumor. octreotide scan demonstrated avidity of the presacral mass as well as a left intergluteal lymph node and two internal iliac lymph nodes. chromogranin a, neuron-specific enolase and serotonin markers were all negative. the patient was taken to the operating room and underwent abdominoperineal resection and resection of the presacral mass and internal iliac nodes with an omental flap. neuroendocrine tumors arising from tailgut cysts of the presacral space are rare. in a retrospective study from great britain, four of thirty-one tailgut cysts had malignant transformation, so resection of these cysts is generally recommended. in this case, the patient's tumor was moderately differentiated, grade ii, with extensive lymphovascular and perineural invasion. there are no prospective studies of neoadjuvant therapies in neuroendocrine tumors of the presacral space. according to nccn guidelines, as the patient is currently asymptomatic with a low tumor burden, the recommended treatment at this time is observation with surveillance tumor markers every - months, or octreotide.

anastomotic leakage is commonly regarded as one of the most challenging postoperative complications in laparoscopic mid/low rectal cancer surgery, attenuating its short-term clinical benefits. the left colic artery (lca) is routinely centrally ligated during dissection to guarantee oncological clearance, which may contribute to postoperative ischemia-induced anastomotic leakage in patients with left-colic vessel variation, e.g. bypass or absence of the riolan arch. however, no specific study has focused on the surgical benefits of lca preservation compared with the conventional technique. herein, we conducted a single-center randomized controlled trial demonstrating that the lca-preserving technique significantly reduces the rates of postoperative leakage and overall complications compared with the traditional central-ligation group.
no difference in short-term survival or recurrence was found between the two groups. the lca-preserving strategy proved safe and feasible, potentially reducing the risk of anastomotic leakage with comparable short-term outcomes. further investigation is required into both the oncological safety and the long-term prognosis of this innovative technique.

background: three-photon imaging (tpi), based on nonlinear optics and femtosecond lasers, has been shown to provide three-dimensional (3d) morphological features of living tissues without the administration of exogenous contrast agents. the purpose of this study was to investigate whether tpi could provide a real-time histological 3d diagnosis of colorectal cancer compared with the gold standard, hematoxylin-eosin (h-e). methods: this study was conducted between january and august . a total of patients diagnosed with colon or rectal carcinoma on preoperative colonoscopy were included. all patients received radical surgery. fresh, unfixed and unstained full-thickness cancerous specimens and the corresponding normal specimens from the same patient were imaged with tpi immediately after surgery. for 3d visualization, the z-stacks were reconstructed. all tissue then went through routine histological processing. tpi images were compared with h-e by the same attending pathologist. results: the schematic diagram of tpi is shown in fig. a. peak tpi signal intensity excited at nm was detected in living tissues. the field of view (fov) was µm and the imaging depth was µm in each specimen. in normal specimens, glands were regularly arranged with a typical foveolar pattern, comparable to h-e images (fig. b and d). in cancerous specimens, irregular tissue architecture and shape were identified by tpi and validated by the corresponding h-e images (fig. c and e). tpi images could be acquired for 3d visualization. based on rates of correlation with the pathological diagnosis, the accuracy, sensitivity, specificity, positive predictive value and negative predictive value were %, %, %, % and . %, respectively (a sketch of how such metrics derive from a confusion matrix appears below). conclusions: it is feasible to use tpi to make a real-time 3d optical diagnosis of colorectal cancer. with the miniaturization and integration of colonoscopy, tpi has the potential to provide a real-time histological 3d diagnosis of colorectal cancer in the future, especially in low rectal cancer.

erica pettke, abhinit shah, vesna cekic, daniel feingold, tracey arnell, nipa gandhi, carl winkler, md, richard whelan; mount sinai west, columbia university. introduction: alvimopan (alvim) is a peripherally acting µ-opioid receptor antagonist used to accelerate postoperative (postop) gastrointestinal functional recovery after bowel resection. the purpose of this retrospective study was to compare the time to first flatus and bowel movement (bm) as well as the length of stay (los) following elective minimally invasive colorectal resection (crr) in a group of patients (pts) who received alvimopan perioperatively (periop) versus a group that did not receive this agent. methods: a data review from to of irb-approved databases was carried out. operative, hospital and office charts were reviewed. routine use of alvim for elective crr cases was started in . besides gi data, preoperative comorbidities and -day postop complication rates were assessed. the results with periop alvim were compared to a no-alvim group. student's t and chi-square tests were used. results: a total of pts underwent elective crr.
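stepping back to the three-photon imaging abstract above: accuracy, sensitivity, specificity, ppv and npv all derive from a single confusion matrix. a minimal sketch with hypothetical counts (the study's own counts are elided in this text):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """standard test-performance measures from confusion-matrix counts."""
    return {
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),   # true-positive rate
        "specificity": tn / (tn + fp),   # true-negative rate
        "ppv":         tp / (tp + fp),   # positive predictive value
        "npv":         tn / (tn + fn),   # negative predictive value
    }

# hypothetical counts for illustration only
print(diagnostic_metrics(tp=48, fp=2, tn=45, fn=5))
```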
alvim was administered periop to pts ( %). the breakdown of indications between groups was similar. alvim pts were younger ( . vs. . years old, p = . ) and, as regards comorbidities, less likely to have heart disease (cad . % vs . %, other heart disease . % vs . %), but the groups were otherwise similar. the rates of laparoscopic-assisted (alvim, . %; no alvim, %) and hand-assisted or hybrid operations (alvim, . %; no alvim, %) were similar. alvim pts had significantly earlier return of flatus ( . vs . days) and first bm ( . vs . days, p < . for both) than the no-alvim group. there was also a trend toward a shorter los ( . vs . days, p = . ) for the alvim group. overall complication rates were similar; however, alvim pts had lower rates of postoperative ileus ( . % vs . %, p < . ), sssis ( . % vs %, p = . ) and blood transfusion ( . % vs . %, p = . ) than the no-alvim group. conclusion: the two groups compared were largely similar (most comorbidities, indications, crr type), with the noted differences in age and cardiac issues. the impact of the higher rates of sssis, blood transfusion and mi in the no-alvim group on gi function is unclear. pts who received alvim periop had an accelerated return of bowel function, decreased postoperative ileus and a shorter length of stay. these results suggest that alvim is effective in reducing postoperative ileus, but further study is warranted. (a minimal sketch of the t and chi-square testing used in such comparisons appears below.)

background: laparoscopic total proctocolectomy (tpc) is selected for minimally invasive surgical treatment of familial adenomatous polyposis (fap) and ulcerative colitis (uc). our policy for tpc is no diverting ileostomy in fap, and creation of an ileostomy in ibd, because most of these patients have received steroid therapy. objective: we examined the outcome of laparoscopic tpc according to disease: fap versus ibd (uc and crohn's disease). methods: twenty-three consecutive patients who underwent laparoscopic tpc between april and march were examined. the patients were divided into a fap group and an ibd group. results: seven patients with fap and patients with ibd (uc , crohn's disease ) underwent laparoscopic tpc or total colectomy. among them, patients (fap , ibd ) were cancer-associated cases. the procedures in the fap group were tpc with iaca in patients and hals total colectomy with ira in patient. the procedures in the ibd group were tpc with iaca in patients, tpc with iaa in patients, and total colectomy with ira in patients, of which were hals cases. the mean operative time and blood loss were minutes and . g in the fap group and minutes and . g in the ibd group, respectively. diverting ileostomy was constructed only in patients in the uc group. early complications in the fap group were observed in cases (postoperative ileus , anastomotic leak with conservative treatment ), and in the ibd group in cases (ileus , anastomotic leak with conservative treatment , abdominal abscess , wound infection ). the median postoperative hospital stay was days in the fap group and days in the ibd group. complications requiring reoperation occurred in cases (fap : intestinal obstruction; ibd : inflammation of the stoma-closure site). no cancer recurrence or mortality was observed. one fap case underwent additional transanal mucosal resection due to a new adenoma. conclusions: laparoscopic total proctocolectomy for fap and ibd was performed safely, with fewer complications in fap patients despite the absence of a diverting ileostomy. in addition, follow-up of the remaining mucosa is important in iaca and ira patients.
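several of the retrospective comparisons above (e.g., the alvimopan series) rely on student's t test for continuous outcomes and the chi-square test for proportions. a minimal scipy sketch with invented numbers, purely to show the mechanics of both tests:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# hypothetical days-to-first-flatus in two groups (continuous outcome -> t test)
alvim = rng.normal(2.1, 0.6, 80)
control = rng.normal(2.9, 0.7, 75)
t, p_t = stats.ttest_ind(alvim, control)

# hypothetical ileus counts (categorical outcome -> chi-square on a 2x2 table)
table = np.array([[6, 74],    # alvim: ileus / no ileus
                  [15, 60]])  # control: ileus / no ileus
chi2, p_chi, dof, _ = stats.chi2_contingency(table)

print(f"t test p = {p_t:.4f}; chi-square p = {p_chi:.4f}")
```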
treatment of complex anal fistula has always been a nightmare for surgeons when approached by conventional means. even the lowest and simplest-looking fistula can at times turn out to be complex, with a recurrence incidence above %. most of the available diagnostics, including mri, are not conclusive, and the surgeon often remains uncertain about what will be found at the operating table. conventional treatment modalities also usually leave the patient with a wound needing almost to weeks to heal, with a risk of sphincter damage and a high risk of recurrence. we present the technical details and results of our series of cases of complex anal fistula treated by video-assisted endoscopic therapy.

jun higashijima, phd, mitsuo shimada, professor, kozo yoshikawa, phd, takuya tokunaga, phd, masaaki nishi, phd, hideya kashihara, phd, chie takasu, phd, daichi ishikawa, phd; department of surgery, the university of tokushima. background: one of the important causes of anastomotic leakage (al) in anterior resection is insufficient blood flow to the stump. the hems (hyper eye medical system) and spies (laparoscopic icg system) can assess organ blood flow intraoperatively after injection of indocyanine green (icg), and thermography can also evaluate blood flow less invasively. the aim of this study was to evaluate the usefulness of icg systems and thermography in laparoscopic anterior resection. patients and methods: this study retrospectively included patients who underwent laparoscopic anterior resection for colon cancer with a double-stapling anastomosis. blood flow in the oral stump was evaluated by measuring fluorescence time (ft) with hems and spies, and also by thermography. results: evaluation by icg system: overall, the al rate was . % ( / cases). in cases with ft over s, the al rate was %, higher than in cases under s, and these patients needed additional management (covering stoma or additional resection). in borderline cases (ft around s), the al rate was . %, again higher than in cases under s. in these borderline cases, if a covering stoma was created in patients with more than three well-known risk factors, the al rate fell to . %, with a false-positive rate of . %. cases under s needed no additional management. evaluation by thermography: the temperature of the residual intestine was significantly higher than that of the resected intestine ( . vs . °c, p < . ), and the temperature in cases with ft under s was significantly higher than in cases with ft over s ( . vs . °c). temperature and ft tended to be inversely correlated (r = . ). conclusion: both icg systems and thermography may be useful to avoid anastomotic leakage.

introduction: some patients who undergo neoadjuvant chemoradiation therapy (crt) for rectal cancer achieve a pathologic complete response (pcr), in which no tumor cells are discovered during pathologic analysis of the resection specimen. achievement of pcr is correlated with improved prognosis relative to non-pcr counterparts. such correlations are not well established in the context of a community-based hospital. this study sought to examine response rates, recurrence and survival in locally advanced rectal cancer patients and to compare patient outcomes to those achieved at major academic institutions. methods and procedures: a single-center retrospective chart review was performed at a local, community-based hospital.
the study population consisted of patients with locally advanced rectal cancer treated with neoadjuvant crt followed by surgical resection. patients with a history of metastasis, inflammatory bowel disease (ibd), hereditary cancer syndromes, concurrent or prior malignancy, or emergent surgery were excluded. results: patients ( . %) achieved pcr in the test population. across both groups, mean age (p = . ), gender (p = . ) and ethnicity (p = . ) were comparable. mean interval between crt and the or (p = . ), pre-op stage (p = . ), number of nodes (p = . ), radiation dose (p = . ), tumor location (p = . ) and days of follow-up (p = . ) showed no statistically significant differences between groups. at years, non-pcr patients ( . %) had a recurrence, versus zero recurrences in the pcr group. -year mortality was . % in non-pcr patients compared with . % in pcr patients. conclusion: a multidisciplinary approach to rectal cancer, consisting of standardized preoperative treatment and surgical resection, can achieve patient outcomes and survival similar to those of larger academic institutions, even in the context of a community-based hospital.

objective: the aim of this study was to assess the safety and feasibility of total mesorectal excision (tme) within the embryologically defined holy plane for rectal cancer. methods: prospectively collected data on consecutive patients with rectal cancer who underwent tatme from november to august were analyzed. surgical outcomes including tme completeness, operative time to tme completion, blood loss, complications, pathological findings and length of hospital stay were assessed. surgical procedure: after rectal lavage, a self-retaining anal retractor was set, and anal dilators were used for atraumatic introduction of the transanal access device (gelpoint path). three -mm trocars and one -mm trocar were inserted through the gelpoint path in a quadrant configuration, and the gelpoint path was introduced through the anus into the rectum. after the rectosigmoid colon was temporarily clamped using an atraumatic endo bulldog clip, pneumoperitoneum was maintained at mmhg with carbon dioxide via an airseal platform. a purse-string suture of polypropylene with a -mm round needle was placed clockwise to tightly occlude the rectum with a cm margin distal to the tumor. after irrigation with saline and marking of the dissection line by tattooing the rectal mucosa distal to the mucosal folds, mucosal transection of the rectum was initiated, followed by circumferential full-thickness rectal transection. after dissection of the rectococcygeal muscle at o'clock and the rectourethral muscle in the anterior wall, circumferential sharp dissection within the holy plane was performed. dissection proceeded between the endopelvic fascia and the prehypogastric nerve fascia in the posterior plane, between denonvilliers' fascia and the anterior mesorectum in the anterior plane, and between the pelvic nerve and the mesorectum, with recognition of the neurovascular bundle, in the lateral plane. the dissection was then connected to the abdominal plane, working together with the laparoscopic team, until tme was completed. results: tme was completed in ( . %) patients. thirty-five ( . %) patients had a negative circumferential resection margin. mean tme completion time and blood loss were min and g, respectively. one ( . %) patient had an intraoperative complication and ( . %) patients had postoperative complications; no other complications occurred. the length of hospital stay was days.
conclusions: tatme within the embryologically based holy plane is a safe and feasible procedure for rectal cancer.

abstract: acromegaly is a debilitating condition marked by excessive production of growth hormone, leading to disfigurement, cardiopulmonary complications and an increased risk of cancer. with up to a two-fold increased risk of developing colon cancer and a worse prognosis for diagnosed patients, earlier and more frequent screening has been recommended. we present the case of a -year-old hispanic male with acromegaly who presented to our hospital with hematochezia and weight loss. a near-obstructing rectal adenocarcinoma with metastasis to the liver was discovered. after completing neoadjuvant chemoradiotherapy, he underwent laparoscopic low anterior resection and simultaneous open hepatic trisegmentectomy. in this case report, we review the literature and current guidelines on screening this high-risk group of patients.

introduction: in this study, we found that cme for laparoscopic right hemicolectomy starting at the ileocolic vessels and proceeding along the superior mesenteric artery (sma) achieved a better oncologic outcome than the conventional approach proceeding along the superior mesenteric vein (smv). methods and procedures: patients admitted to a shanghai minimally invasive surgical center from september to january were included and randomly divided into two groups: a study group (n = ) and a conventional group (n = ). operation time, intraoperative blood loss, time to liquid intake, postoperative hospital stay, postoperative complications within days after surgery, specimen length, number of lymph nodes harvested and the positive lymph node rate were recorded and analyzed. results: there was no statistical difference between the two groups except for the number of lymph nodes dissected and the positive lymph node rate in stage iii colon cancer. the study group had more lymph nodes retrieved and a higher positive rate than the conventional group. the mean number of lymph nodes retrieved was . ± . in the study group versus . ± . in the conventional group (p < . ), and the positive lymph node rate was . % in the study group versus . % in the conventional group. conclusion: when performing laparoscopic right hemicolectomy, dissecting the lymph nodes along the left side of the sma is achievable, with no differences in surgical outcomes compared with the conventional approach, while yielding a higher number of dissected lymph nodes and a higher positive rate, probably leading to a better oncologic outcome.

aims: we describe laparoscopic surgery for rectal cancer using needlescopic instruments performed at our department. methods: from to , cases of rectal cancer underwent surgery using needlescopic instruments: at the rectosigmoid colon, at the upper rectum and at the lower rectum. an umbilical camera port ( -mm) and two needlescopic instruments (endorelief™) punctured directly at the assistant surgical sites were used. we started with port sites. in low rectal cancer cases, we maintained good pelvic visualization by lifting the peritoneum of the bladder ventrally using lone star retractor stays™. results: the median age was years ( - years), with males and females, and body mass index was . kg/m² ( - kg/m²).
anterior resection was performed in cases, low anterior resection in cases, intersphincteric resection in cases, abdominoperineal resection in cases, hartmann's procedure in cases and lateral lymph node dissection in case. in addition, one case of t b (bladder invasion) was converted from laparoscopic to open surgery. however, there were no cases in which needlescopic instruments had to be replaced with conventional forceps, and no intraoperative complications related to the forceps were observed. conclusions: in rectal cancer surgery, needlescopic instruments leave a small postoperative wound; healing is rapid and the cosmetic result is excellent. surgical safety is comparable to that with conventional forceps, and the rigidity of needlescopic instruments poses no problem. however, where the shaft is curved, operative control requires attention to mobility and directionality. in low rectal surgery, the use of needlescopic instruments is limited by the curvature of the shaft during dissection of the anterior rectal wall, but a good field of view can be maintained using auxiliary equipment. therefore, more cases could be considered for needlescopic surgery with the help of auxiliary equipment.

introduction: anastomotic leaks are devastating complications of colorectal operations that lead to significant morbidity and potential mortality. inadequate tissue perfusion is considered a key contributor to anastomotic failure following colorectal operations. currently, clinical judgment is the most commonly used method for evaluating the adequacy of the blood supply to an anastomosis. more recently, intraoperative laser angiography using indocyanine green (icg) has been utilized to assess tissue viability, particularly in reconstructive plastic surgery. this technology provides real-time evaluation of tissue perfusion and is a helpful tool for intraoperative decisions, particularly the decision to revise an intended colorectal anastomosis. our study aimed to determine whether there is a statistically significant difference in colorectal anastomotic leak or abscess rates using icg compared with common clinical practice. methods and procedures: patients undergoing left-sided colorectal operations between march and february were retrospectively reviewed. patients' colorectal anastomoses were evaluated using icg angiography (icga) to qualitatively assess tissue perfusion (icg group). perioperative and postoperative outcomes, including anastomotic leak and abscess rates, were compared with patients who had colorectal operations without icga (control group). the primary outcomes, intra-abdominal leak rate and intra-abdominal abscess rate, were compared using exact chi-square tests. the secondary outcomes of -day return to the operating room, mortality and readmission rate were compared using chi-square tests. all statistical analyses were performed using sas software. results: the two leading indications for surgery were malignancy (n = ) and diverticulitis (n = ). the majority of patients had either a low anterior resection (n = ) or a sigmoidectomy (n = ). all operations were primarily minimally invasive. no statistically significant difference was seen between the two groups with regard to patient demographics, rate of proximal diversion (p = . ) or splenic flexure mobilization (p = . ). patients in the icga group were more likely to have high ima ligation than the control group ( . % vs. . %, p < . ).
in the icga group, patients underwent additional colonic resection while did not. there was no statistically significant difference in primary or secondary outcomes between the two groups. conclusion: icg angiography has become a helpful adjunct in determining adequate perfusion of an intended colorectal anastomosis. these data are unable to demonstrate any difference in patient outcome with this technology over the surgeon's visual and clinical assessment. our results may contribute to larger studies to determine whether there is a true difference in anastomotic leak or abscess rates using this technology.

objective: to investigate the feasibility and surgical strategy of complete mesocolic excision (cme) with completely medial access by a "page-turning" approach (cmapa) for laparoscopic right hemicolectomy. the cmapa is a modified medial approach to cme that focuses on exploration of the surgical plane rather than recognition of vessels. surgical procedures: ( ) starting point: the anatomical projection of the ileocolic vessels; ( ) expose the whole trunk of the smv to the level of the inferior edge of the pancreas before ligating any branches, to allow high ligation and verification of their location; ( ) enter the intermesenteric space (ims) and right retrocolic space (rrcs) with cranial and rightward extension through the transverse retrocolic space (trcs); ( ) completely mobilize the mesocolon and remove the tumor en bloc (see figure). clinical outcome: from september to march , patients underwent cmapa at shanghai ruijin hospital. the average operation time was . ± . minutes, average blood loss was . ± . ml, the number of lymph nodes was . ± . , average specimen length was . ± . cm, time to flatus was . ± . days, time to fluid intake was . ± . days and average hospital stay was . ± . days. the overall complication rate was . % ( / ). compared with the traditional medial approach to cme performed in our center, blood loss, operation time and hospital stay were significantly reduced with cmapa for laparoscopic right hemicolectomy. conclusion: the advantages of cmapa are: ( ) avoiding the laparoscopic "leverage effect" and "tunnel effect"; ( ) making the branches of the superior mesenteric vessels more easily recognized; ( ) offering surgeons an alternative route into the trcs, ims and rrcs; ( ) avoiding repetitive flipping of the colon, complying with the "no touch" principle, and lowering the demands on assistants. figure: anatomy and surgical planes concerning cmapa.

aim: we have previously reported the possibility of "one-stop shop" simulation for liver surgery by mri using gadolinium-ethoxybenzyl-diethylenetriamine pentaacetic acid (eob-mri) (emerging technology, sages ), which is characterized by ( ) a one-time examination, ( ) no radiation exposure, ( ) demonstration of the liver vasculature including the biliary tract, ( ) diagnosis of tumors, ( ) volumetry and ( ) estimation of liver functional reserve in each segment. the aim of this study was to investigate the usefulness of "one-stop shop" simulation for liver surgery using eob-mri. methods: accuracy of liver vasculature: 3d reconstruction of dynamic eob-mri imaging was done with synapse vincent software (fujifilm medical co., ltd., japan), using a manual tracing method. visualization of hepatic vessels on eob-mri was compared with that on dynamic ct in patients.
assessment of liver functional reserve: the standardized signal intensity (si) of each segment was calculated as the si of the segment divided by the si of the right erector spinae muscle. the standardized total liver functional volume (tlfv) was calculated as the sum over segments k of (standardized si of segment k × volume of segment k), divided by body surface area. a formula for the resection limit was established using normal liver cases ( % of the liver is resectable) and unresectable cirrhotic patients such as recipients of liver transplantation ( % of the liver is resectable): estimated resection limit (%) = % × (the patient's standardized tlfv - ) / . this formula was validated in other patients who underwent hepatectomy. results: accuracy of liver vasculature: the liver simulation by eob-mri succeeded in demonstrating the hepatic vasculature including the biliary tract, diagnosing hepatic tumors, and performing volumetry without any radiation exposure. regarding vessel anatomy at the hilar area, the biliary tract was more clearly visualized on eob-mri. regarding the hepatic artery, the right and left hepatic arteries were well visualized in all cases; however, a small-sized middle hepatic artery was visualized in only one out of patients. assessment of liver functional reserve: in the validation cohort, one patient whose resection volume exceeded the resection limit died of liver failure, whereas the other cases within their resection limits did not suffer liver failure. conclusion: "one-stop shop" liver surgery simulation could contribute to the safety of liver surgery such as laparoscopic hepatectomy, because it involves no radiation exposure, accurately assesses anatomical variations (especially of the biliary tract), and helps decision-making on resection volume. (a schematic calculation following this tlfv structure is sketched below.)

showing key steps of the procedure to be viewed. the in-studio program was hosted by an education specialist from the science center and a surgical resident from our institution, with laparoscopic instruments available for manipulation by participants. participants then viewed a video highlighting the roles of all healthcare providers involved in the featured specialty, including nurses, physicians, dietitians, psychologists, technologists, etc. live questions and answers between students and surgeons were encouraged during the surgery broadcast. the program also expanded from high schools to vocational-technical colleges and nursing schools. results: during the - academic year there were sessions presented to schools, with student participants. by the - year this had increased to sessions presented to schools, with participants. in sum, throughout the first years of the program, schools attended, with a total of , participants. of polled high school participants, % of responders acknowledged considering a career in healthcare after this experience. conclusion: over years, our program has grown steadily in popularity, such that schools from several counties attend and regularly return, and we have been asked to expand the program to create a surgical summer camp for students interested in science and technology. live broadcast surgery in an elective, minimally invasive format provides unique visibility and access to surgical procedures for student audiences and promotes future interest in healthcare careers.
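referring back to the eob-mri abstract above: the tlfv formula lends itself to a direct computation. the constants of the published resection-limit formula are elided in this text, so the sketch below uses clearly hypothetical placeholder values; only the structure (signal-intensity-weighted segment volumes summed and scaled by body surface area) follows the abstract.

```python
# hypothetical per-segment (standardized signal intensity, volume in ml) pairs
segments = [(1.8, 220.0), (1.7, 310.0), (1.9, 180.0), (1.6, 250.0)]
bsa = 1.7  # body surface area in m^2

# standardized tlfv = sum(si_k * volume_k) / bsa, per the abstract
tlfv = sum(si * vol for si, vol in segments) / bsa

# resection limit (%) = a * (tlfv - b) / c; a, b and c stand in for the
# study's elided constants and are chosen here only to make the sketch run
a, b, c = 100.0, 200.0, 800.0
resection_limit = a * (tlfv - b) / c
print(f"tlfv = {tlfv:.1f}, estimated resection limit = {resection_limit:.1f}%")
```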
improving trainees' self-assessment through gaze guidance. introduction: effective learning to become competent in surgery depends on a trainee's ability to accurately recognize their strengths and weaknesses. however, surgical trainees' self-assessments are poorly correlated with expert assessment. this study aimed to improve self-assessment through the visual gaze guidance provided by telestration in laparoscopic training. we hypothesized that visually conveying where to look or perform actions on the laparoscopic video enhances trainees' awareness of the gaps in their skills and knowledge. methods and procedures: a lab-developed telestration system that enables the trainer to point or draw a freehand sketch over a laparoscopic video was used in the study (fig. ). seven surgical trainees ( surgical fellow, research fellow, pgy- and pgy- ) participated in a counterbalanced, within-subjects controlled experiment comparing standard guidance with telestration-supplemented guidance. the trainees performed four laparoscopic cholecystectomy tasks (mobilizing the cystic duct and artery, clipping the duct, clipping the artery, and cutting the duct and artery) on a laparoscopic simulator. a performance assessment, adapted from the global rating scale (grs) instrument, was completed by the trainers and trainees at the end of each task. the mean self-assessment scores were compared with the trainers' scores using a linear mixed model, with the trainees' performance as indicated by the trainers' scores as the control. assessment alignment was evaluated by spearman's rho. results: the trainers' scores were significantly lower than the self-assessment scores under standard guidance, while under telestration guidance the scores of trainers and trainees were much more similar (fig. ). the correlation between the trainers' and trainees' assessments under telestration guidance was high (r = . , p < . ), compared with standard guidance (r = . , p = . ). the correlation comparison for each grs criterion showed a significant increase (p = . ) in assessment alignment for depth perception under telestration guidance (r = . , p < . ) compared with standard guidance (r = . , p = . ) (fig. ). visual gaze guidance improved the alignment of assessment between trainer and trainees, especially for depth perception. for visual gaze guidance to become an integrated part of training, further work is needed to understand how gaze guidance changes the nature of the training process. (a minimal sketch of the rank-correlation computation appears below.)

applying to surgical residency: what makes the best candidates? yann beaulieu, beng, louis guertin, md, frcsc, ariane p smith, md, margeret henri, md, frcsc, facs; university of montreal. objective: while quotas for canadian surgical residency programs are at their lowest point in ten years, the number of graduating canadian medical students is at an apogee. this year, only spots in surgical residency programs were available to students applying to carms. undergraduate medical students individually collect anecdotal information regarding what influences admission to their surgical subspecialties of interest, as scarce literature covers the topic. we therefore surveyed surgeons and residents to analyze the relative importance of modifiable factors and innate attributes in the selection of new surgical residents. methods: an electronic survey was sent to all surgeons and surgical residents affiliated with the university of montreal.
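referring back to the telestration study above: spearman's rho quantifies how well trainee self-assessment ranks track trainer ranks. a minimal scipy sketch with invented paired scores:

```python
from scipy import stats

# hypothetical paired grs-style scores (trainer vs. trainee self-assessment)
trainer = [3.0, 3.5, 2.5, 4.0, 3.0, 4.5, 2.0]
trainee = [3.5, 3.5, 3.0, 4.0, 3.5, 4.5, 2.5]

rho, p = stats.spearmanr(trainer, trainee)  # rank correlation of the two raters
print(f"spearman rho = {rho:.2f}, p = {p:.3f}")
```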
participants were asked to specify their surgical subspecialty, their status, their level of experience and whether they were an active member of a residency selection committee. the subjective importance of predefined application elements and candidate qualities was assessed using -point likert-type items. results: of the surgeons and residents to whom the survey was sent, ( . %) and ( . %) completed the survey. evaluations of elective rotations and evaluations of core rotations were considered very important by . % and . % of responders, respectively. regarding letters of recommendation, the content was rated very important ( . %) more often than the renown of the author ( . %). networking with key surgeons was considered the least important element to prioritize, with % negative assessments. with regard to the fundamental qualities of surgical candidates, the extremes were "clinical judgement" with . % and "innate technical ability" with . % of responders rating them very important. no significant differences in responses were observed between staff and residents, between members and non-members of selection committees, between different levels of surgical experience, or between surgical subspecialties. conclusion: clinical judgement and performance in core and elective rotations, along with strong personalized letters of recommendation, should be prioritized by medical students aiming for a surgical career.

kazuhiko shinohara, phd, md; school of health science, tokyo university of technology. background and objective: many types of training devices have been proposed since the early days of endoscopic surgery. however, they are too expensive for the daily training of novices. we developed a simple and economical training device made of frozen fruit and agar. material and methods: to make this device, g of agar powder was added to ml of boiling water and boiled for min. the solution was then poured into a stainless steel tray containing frozen blueberries and lychees and refrigerated for h. basic maneuvers required during endoscopic dissection and resection of a tumor with laparoscopic forceps and electrosurgical devices were then performed on this agar model in a conventional laparoscopic training box. results: using this model, endoscopic dissection and enucleation of a tumor with an electrosurgical device could be practiced repeatedly with minimal expense and preparation.

background: situs inversus totalis (sit) is a rare congenital anatomy and a challenging condition for laparoscopic surgery, because no standardized strategy exists to overcome the anatomical difficulties. mirror-reversed video images of laparoscopic surgeries in patients with normal anatomy could help in developing surgical strategies for patients with sit. we had the chance to evaluate this idea in the treatment of a patient with early gastric cancer, and describe the surgical results of the case. patient and methods: a seventy-two-year-old woman with sit was referred to our department for the treatment of early gastric cancer, and laparoscopic distal gastrectomy with d + lymphadenectomy was scheduled. a video recording of the same operation previously performed in a patient with similar physical attributes was retrieved and edited on a computer into a full-length, completely mirror-reversed video of the surgery (a minimal sketch of such mirror-reversal is given below). the designated operator and assistant rehearsed the operation using the video several times before surgery.
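mirror-reversing a recorded operation, as described in the methods just above, amounts to a single horizontal flip per frame. a minimal opencv sketch; the file names are placeholders, and this is not the authors' actual editing pipeline.

```python
import cv2

src = cv2.VideoCapture("normal_anatomy_case.mp4")  # placeholder input path
fps = src.get(cv2.CAP_PROP_FPS)
w = int(src.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(src.get(cv2.CAP_PROP_FRAME_HEIGHT))
dst = cv2.VideoWriter("mirror_reversed.mp4",
                      cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

while True:
    ok, frame = src.read()
    if not ok:
        break
    dst.write(cv2.flip(frame, 1))  # flipCode=1 mirrors left-right

src.release()
dst.release()
```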
results: laparoscopic distal gastrectomy with d + lymphadenectomy was performed with the operator on the left side of the patient and the assistant on the opposite side, the reverse of the usual positions. laparoscopic b- reconstruction followed, using the "delta anastomosis" technique reported by kanaya et al. the totally laparoscopic procedure was completed with an operation time of minutes and blood loss below measurable limits. no appreciable complications were observed after surgery and the patient was discharged on postoperative day . no recurrence of the disease has been detected up to years after surgery. conclusion: although further validation is unlikely because of the rarity of this anatomy, the same technique can be recommended as part of the preoperative preparation for similar cases.

background: surgical simulation is thought to provide a basis for improving resident surgical skill training in the safety of a simulation setting. it is unclear whether surgical skills learned in a simulation curriculum actually contribute to the improvement of surgical skills when transferred to the or. methods: a ten-question online survey was sent to attending surgeons and residents. the questionnaire focused on domains: confidence, independence, transferable skills, improvement of skills/knowledge and time spent on the simulation curriculum. evaluation data were collected and analyzed anonymously.

background: minimally invasive surgery poses a unique learning curve due to the requirement for non-intuitive psychomotor skills. programmes such as the fundamentals of laparoscopic surgery (fls) provide mandatory training and certification for many residents. however, predictors of fls performance and retention remain to be described. this single-centre observational study aimed to assess factors predicting the acquisition and retention of fls performance in a surgically naïve cohort. methods: laparoscopically naïve individuals were recruited consecutively from the preclinical years of a medical university. participants completed five visuospatial and psychomotor tests followed by a questionnaire surveying demographics, extracurricular experiences and personality traits. individuals completed a baseline assessment of the five fls tasks evaluated by fls standards. subsequently, participants attended a -minute training course over weeks one and two on inanimate box trainers. a post-training assessment was performed in week three to evaluate skill acquisition. participants were then withdrawn from laparoscopic exposure and retested at four one-month intervals to assess skill retention.

introduction: bipolar energy can cause thermal injury to adjacent organs when used improperly. the sages fuse curriculum provides didactic knowledge on principles and best practices for safety, but there is no hands-on component in which to practice these skills. the objective of this study was to evaluate the effectiveness of the vest™ bipolar training module in addition to the fuse curriculum. methods and procedures: the study was a mixed design with two groups, control and simulation. after a pre-test that assessed their baseline knowledge, the subjects were randomized to the two groups. both groups were given a -min presentation, reading materials from the fuse manual and an online didactic module on bipolar energy.
the simulation group also practiced on the simulator for one session consisting of five trials on the effect of activation time on thermal damage and the importance of providing a margin of safety when sealing the short gastric vessels. after one week, the performance of both groups was assessed using a post-questionnaire. one week after the post-test, both groups performed vessel sealing on an explanted porcine mesentery with the vessels perfused. their performance was videotaped and their activation times were recorded. a total safety score was calculated by two independent raters assessing the proximity of the activation site to the intestine. wilcoxon signed-rank and mann-whitney u tests were used to assess differences within and between groups. results: a total of residents ( in each group) participated in this irb-approved study. median test scores increased in both groups (simulation, p = . ; control, p = . ), indicating learning; no difference was found between the two groups in pre-test (p = . ) or post-test (p = . ) scores. the median total activation time for the control group was higher ( . s) than for the simulation group ( . s), but the difference was not statistically significant (p = . ). there was moderate agreement between the two raters for margin of safety (kappa = . , p < . ). total safety scores showed no difference between the two groups (p = . ). conclusions: subjects with simulation training had a lower activation time than controls. training for margin of safety requires further refinement of the simulation. the small sample size and variations in the explanted models contributed to variability in the data, but even so, simulation training along with the fuse curriculum trended toward being more beneficial than the fuse curriculum alone.

the general, which aims to build educational infrastructure and standardize training and education in laparoscopy throughout mexico. ilap participants engage in didactic and hands-on modules in educational theory, laparoscopic techniques and simulation-based education (sbe), and then develop and implement a -day sbe course for local trainees. the purposes of this study were to understand the existing educational environment at a single institution in mexico and to measure the changes in perceptions, attitudes and engagement in surgical education after an intensive training course. methods and procedures: all faculty and general surgery resident participants completed a survey containing items designed to assess the existing educational environment at a large public hospital in mexico. using a -point likert scale, residents self-rated the quality of faculty feedback and the learning environment within their institution ( = strongly disagree, = neutral, = strongly agree). faculty rated their perceptions of the same educational themes. upon completion of a faculty-led simulation course, residents rated the educational environment during the course. faculty provided additional qualitative feedback. descriptive analyses were performed. irb exemption was obtained through lurie children's hospital. results: discordance existed in perceptions of the existing educational environment. the greatest disparities between resident and faculty perceptions included "faculty provide sufficient feedback in the operating room" ( % vs. %), "faculty promote an active learning environment" ( % vs. %), and "residents may ask questions without fear of negative evaluation" ( % vs. %).
faculty and residents agreed with "residents are sometimes afraid to speak up in the operating room for fear of retaliation" ( % each). post-course evaluations (n = ) revealed universal improvement in all educational themes during the simulation course. qualitative feedback revealed that most faculty plan to incorporate open communication and safe learning into their practice. residents were equally positive, with % optimistic that they will see changes within the educational environment. conclusions: significant discordance exists between resident and faculty perceptions of the educational environment at a large teaching hospital in guadalajara, mexico. after participation in the ilap course, residents noted demonstrable change in the faculty approach to education and feedback, and both faculty and residents expressed optimism for increased engagement in education. the immediate successes of the ilap initiative should be followed over time, as the ultimate measure of success is sustainability and scalability throughout mexico.

background: laparoscopic anterior resection is technically challenging and the learning curve is long. well-designed formative assessments can provide trainees with effective and constructive feedback, an important element of efficient learning. previously reported assessments for laparoscopic colorectal procedures were developed for summative assessment. we aimed to develop a formative assessment tool to evaluate competence and provide trainees with effective feedback in laparoscopic anterior resection. methods: the assessment tool was developed by an expert panel from mcgill university affiliated hospitals. the procedure was deconstructed into a series of sequential steps covering general domains, surgical principles, injury prevention and technical skills specific to laparoscopic anterior resection. the tool contains discrete items with global rating scales for each step of the operation; each domain was scored on a -point likert scale, with anchors for scores of , and . each operation was assessed through direct observation in the operating room by the attending surgeon, a trained observer, and the trainees themselves. intraclass correlation coefficients (iccs) were calculated to estimate interrater reliability for ( ) attending surgeon and trained observer, ( ) attending surgeon and self-assessment, and ( ) trained observer and self-assessment. internal consistency was measured using cronbach's alpha. comparison between training levels was made using the mann-whitney u test. the global operative assessment of laparoscopic skills (goals) was also used to assess trainees' general laparoscopic skills. spearman's correlation was used to determine the association between goals and this procedure-specific tool. the overall usefulness of the tool was evaluated on a -cm visual analog scale. results: in this pilot study, fourteen operations, performed by experienced surgeons and trainees, were assessed. the icc between ( ) attending surgeon and observer was . ( % ci . to . ), ( ) observer and self-assessment was . ( % ci . to . ), and ( ) attending surgeon and self-assessment was . ( % ci - . to . ). the internal consistency of the items was excellent (cronbach's α = . ). there was a significant difference in median total score between experienced surgeons and trainees ( . ± . vs. . ± . ; p = . ). there was a strong correlation (r = . ) between goals and this procedure-specific score. the overall usefulness of this assessment tool was rated . ± . . all assessments were completed in about minutes. (a schematic computation of cronbach's alpha is sketched below.)
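referring to the reliability analysis above: cronbach's alpha compares the sum of per-item variances with the variance of the total score. a minimal numpy sketch on an invented ratings matrix (rows = operations, columns = scale items):

```python
import numpy as np

def cronbach_alpha(ratings):
    """ratings: 2-d array, rows = observations, columns = scale items."""
    ratings = np.asarray(ratings, dtype=float)
    k = ratings.shape[1]                         # number of items
    item_vars = ratings.var(axis=0, ddof=1)      # variance of each item
    total_var = ratings.sum(axis=1).var(ddof=1)  # variance of the summed score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# hypothetical 5-point likert ratings for 6 operations on 4 items
scores = [[4, 4, 3, 4], [3, 3, 3, 2], [5, 4, 4, 5],
          [2, 2, 3, 2], [4, 5, 4, 4], [3, 3, 2, 3]]
print(f"alpha = {cronbach_alpha(scores):.2f}")
```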
conclusions: we present a new procedure-specific formative assessment tool for laparoscopic anterior resection and provide preliminary evidence of its reliability and validity. this formative assessment tool could be used for constructive feedback and tracking performance in competency-based surgical training. cullen introduction: one of the key challenges to the proliferation of endoscopic submucosal dissection (esd) in the west has been a lack of training platforms. therefore, the virtual endoluminal surgery simulator (vess) is being developed as a training tool for esd. the aim of our study is to inform the design of vess using cognitive task analysis (cta), a human factors engineering framework that describes practitioners' mental models and cognitive processes so that these insights can be incorporated into the simulator's design. methods and procedures: cta-based interview questions were developed to probe the cognitive challenges and strategies employed at each stage of the esd procedure. six esd practitioners were interviewed for varying lengths of time. two of these interviews were conducted simultaneously during observation of a training workshop where the cta participants were instructors (total observation time was five hours, and interview time was approximately minutes). another interview was conducted during observation of esd procedures (total observation time was hours, and interview time was approximately minutes). participants had varying levels of experience in esd, with of them being 'super-experts' (exclusively esd exponents), an 'expert', and a fellow. a cta of the data is currently being conducted to systematically inform the design of functionalities in the simulator. results: analysis of our data highlights a few prominent themes at each stage of esd: goals; challenges (e.g., avoiding perforation of the muscularis); points of decision-making (e.g., partial or full incision for boundary demarcation); skills involved (e.g., dissection); and ambiguity (e.g., unclear lesion boundaries). participants also described the risks associated with each stage of esd and strategies to prevent or overcome them. conclusions: qualitative data for a cta were collected through observations and interviews of esd practitioners. preliminary analysis has indicated prominent themes to consider in the design of the training simulator. the next step in the study is to conduct a full-scale cta of esd based on the current data. the ultimate benefit of the cta would be to incorporate the results into the design of vess in a way that is compatible with the mental models of esd trainees, thus enhancing the fidelity and effectiveness of the simulator. background: colonoscopy is an important diagnostic and therapeutic procedure in the management of colonic disease; achieving competence during residency is an integral part of performing high-quality colonoscopy in practice, regardless of specialty. there is debate and controversy, however, regarding what, if any, number of procedures achieves such proficiency. furthermore, there is significant heterogeneity in the current guidelines and in studies published to date on the definition of competence in colonoscopy. objective: to determine individualized learning curves as an alternative to 'number of procedures' for assessing colonoscopy competence. methods and procedures: this is a multi-institutional prospective cohort study involving eleven surgical trainees (novice endoscopists). 
the main outcome, colonoscopy competence, was assessed by determining the independent colonoscopy completion rate (iccr), the number of procedures required to reach % independent colonoscopy completion, and the polyp detection rate. individual and overall iccrs were calculated using moving average analysis. conclusions: while a benchmark for a minimum number of procedures may be necessary to allow supervisors to adequately assess performance, it is difficult to determine what number is optimal. there appears to be significant heterogeneity in the overall number of colonoscopies completed by each resident, as well as in the mean iccr and the number of procedures required to reach the current benchmark for competency. the use of learning curves allows real-time tracking of progress and training tailored to the individual as we move forward in the era of competency-based medical education. background: with the growing popularity of robotic-assisted surgery, new methods for evaluating technical skill are necessary to determine when a surgeon is qualified to perform an operation independently. current evaluation methods are limited to -point likert scales, which require a degree of subjective scoring. surgeons in training need an objective method of evaluation to view progress and target areas for improvement. one method of objectively evaluating surgical performance is the cumulative sum control chart (cusum). by plotting consecutive operative outcomes on a cusum chart, surgeons can view their learning curve for a given task. another method of objective evaluation is the dv logger®, or "black box," which records objective measurements directly from the da vinci® system. methods: we followed two hpb fellows during dry-lab simulation of robotic-assisted hepaticojejunostomy reconstructions using biotissues to model a portion of a whipple procedure. we simultaneously recorded objective measurements of dexterity from the da vinci® system and performed cusum analyses for each procedural step. we modeled each variable using machine learning (a self-correcting and autoregressive modeling tool) to reflect the fellows' learning curves for each task. statistically significant objective variables were then combined into a single formula to create an operative robotic index (ori). results: variables that significantly improved over the course of the simulation included completion time (p= . ), economy of motion in arm (p= . ), number of times the head was removed from the console (p= . ), total time the left master manipulator was active (p= . ), total time the right master manipulator was active (p< . ), and total time that any arm was active (p< . ). the inflection points of our cusum charts and the plots of objective variables both showed improvement in technical performance beginning between trials and [ fig. and fig. ]. the operative robotic index showed a strong fit to our observed data and improved with additional trials (r = . ) [ figure ]. conclusions: in this study we identified objective variables recorded by the da vinci® system which correlated with the technical dexterity of fellows during a robotics dry lab. we broke a complex procedure down in stepwise fashion with cusum analyses to determine targets for improvement. using variables which correlated with the improved performance of the fellows, we effectively modeled the learning curve with the creation of an operative robotic index (ori). this study successfully models the learning curve of novice robotic surgeons using a novel combination of objective measures. 
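both abstracts above rest on simple running statistics: a moving average of case-by-case completion (the iccr) and a cusum chart of consecutive outcomes. the following sketch, with synthetic data and hypothetical names, shows one plausible way to compute them; it is not the authors' code, and the acceptable failure rate used by the cusum is an illustrative assumption.

```python
import numpy as np

def moving_average(outcomes, window=20):
    """Running completion rate over a sliding window of consecutive cases."""
    outcomes = np.asarray(outcomes, dtype=float)
    return np.convolve(outcomes, np.ones(window) / window, mode="valid")

def cusum(failures, acceptable_rate=0.10):
    """Cumulative observed-minus-expected failures.
    The curve climbs while the trainee's failure rate exceeds the
    acceptable rate and flattens or falls once performance improves,
    so the inflection point marks the learning-curve transition."""
    failures = np.asarray(failures, dtype=float)
    return np.cumsum(failures - acceptable_rate)

# toy trainee: early cases often incomplete, later cases mostly complete
rng = np.random.default_rng(0)
completed = rng.random(100) < np.linspace(0.5, 0.95, 100)
print(moving_average(completed)[-1])   # recent completion rate
print(cusum(~completed)[-1])           # final cusum value
```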
georg wiese, md, paula veldhuis, steve eubanks, md, facs, scott w bloom, md, frcsc, facs; florida hospital institute for surgical advancement. introduction: robotic surgery is a specialized skill which requires time and resources to master. in a general surgery residency program that seeks to train surgeons competent in open, laparoscopic, and endoscopic techniques, it is difficult to see where adding robotic training will be of benefit and at what cost to the remaining surgical skills. we therefore sought to ascertain robotic surgery's current role in the training of new general surgeons by soliciting the opinions of current general surgery program directors on the role of robotic surgery at their respective institutions. methods: an irb-approved survey was created and sent to general surgery program directors across the country to assess how robotic surgery training is being integrated into current surgical training. the survey was sent via email to the publicly available email addresses of program directors listed on the acgme website. it was voluntary in nature and consisted of questions regarding the current status of robotic training in residency as well as future goals. results: the overall response rate for our pd survey was % of the surgical programs with addresses available via the acgme, though responses were still being submitted at the time of this abstract. approximately % of all respondents are from independent, university-based programs. % felt that robotics was an emerging skillset important for residents to master, versus % who felt it was more appropriate for fellowship. all respondents noted that robotic surgeons were present at their institution, % within the core faculty, and % indicated that they were actively recruiting robotically trained surgeons. additionally, % of programs indicated that residents were exposed to robotic surgery, % of these on core general surgery rotations. % of respondents indicated that they had a formal robotic training curriculum, with % of programs taking measures to integrate robotics into the future curriculum, though % lacked specific milestones for such training. finally, opinion was evenly divided among respondents as to whether one could sign off on residents to perform robotic-assisted cases upon completion of the pgy year, with % agreeing with that statement and the remainder indicating some additional training would be necessary. conclusions: our study highlights the emerging field of robotic-assisted mis and its increasing role in residency training. it is evident from the data that robotic surgery is a growing part of the residency experience. importantly, however, milestones were significantly lacking for determining resident progress in robotic training. introduction: in chile, medical students have the opportunity to undertake a month-long medicine elective (me) in a community hospital, primary care center, or emergency department within the country at the end of their first clinical year. due to the lack of opportunities to practice suturing in the first years, students usually do not perform optimally in this type of medical procedure during the me. simulation training programs in suturing improve technical skills, self-confidence, and patient safety in the medical internship. the objective of this study is to evaluate the impact of implementing a simulated suture training program earlier in the medical curriculum, before the me. 
methods: we conducted a prospective, randomized controlled trial with medical students at the end of their first clinical year. they were randomized into two equal groups. the intervention group received an intensive suture training program consisting of one theory class, four practical sessions, and effective feedback from an expert surgeon. the control group did not receive training, remaining with the classic opportunistic learning approach during the me. after the me, all students undertook an electronic survey. statistical analysis was performed on the answers of both groups. per-protocol analysis was applied. results: there were no statistical differences between groups in terms of age and sex. four students did not complete the training program. one student in the control group did not reply to the survey. higher self-confidence with regard to suturing was reported in the intervention group in comparison with the control group [ / ( %) vs / ( %), p< . ]. also, a greater student desire to carry out suture-related procedures was reported in the intervention group than in the control group [ / ( %) vs / ( %), p< . ]. in addition, a lower rate of overseeing-physician intervention was reported in the intervention group [ / ( %) vs / ( %), p< . ] (table ). a greater number of patients requiring sutures were treated by the intervention group than by the control group, with a median of patients ( - ) against ( - ). the intervention group also performed a higher number of sutures, with a median of ( - ) vs ( - ), a statistically significant difference (p< . ) in both cases (fig. ). conclusion: a simulated suture training program prior to the me generates a positive impact on medical students by improving self-confidence and the desire to attend to patients who require sutures. this leads to a higher rate of both exposure to suture techniques and suture execution. introduction: measuring performance in the operating room (or) is challenging. performance is a multifaceted construct: a complex interaction of many behaviors and actions that reflect an individual's knowledge and skill. no assessment tool to date provides an expertise-based, comprehensive evaluation of the various aptitudes necessary to excel in the or, especially with respect to advanced cognitive skills. using qualitative methodologies, we previously defined behavioral themes that guide surgeons' behaviors, decisions, and actions within a universal framework of domains that reflect intra-operative performance. the purpose of this pilot study was to use this framework to derive a comprehensive assessment tool and to obtain evidence for its validity as a measure of intra-operative performance. methods: an assessment tool was developed by a panel of surgeons and surgical trainees based on the five-domain model of intra-operative performance: 1) psychomotor skills; 2) declarative knowledge; 3) interpersonal skills (two items); 4) personal resourcefulness; and 5) advanced cognitive skills (ten items). all items were rated on an ordinal scale of (inadequate) to (expert) and equally weighted. surgical residents and surgeons from a single academic center were evaluated on their performance during standard general surgery operations, for example open inguinal hernia repair and laparoscopic cholecystectomy. for residents, there were two evaluators: the attending surgeon and an observing surgeon. attending surgeons evaluated their own performances and were also assessed by observing surgeons. 
internal consistency, inter-rater reliability, and the correlation of total scores with training level (junior residents, senior residents, staff surgeons) were calculated. likert-scale questionnaires were administered to evaluate the tool's usability, feasibility, and educational value. results: fifteen subjects ( junior residents, senior residents, surgeons) participated. the total score on the assessment demonstrated significant differences between training levels (figure). inter-rater reliability was high (intraclass correlation coefficient= . ), as were internal consistency between each domain score (cronbach's alpha= . ), internal consistency amongst items in the advanced cognitive skills domain (cronbach's alpha= . ), and internal consistency amongst items in the interpersonal skills domain (cronbach's alpha= . ). all assessments required less than five minutes to complete. overall, evaluators agreed that the assessment tool was easy to use, was comprehensive, and should be used routinely throughout training to track performance and provide formative feedback. conclusion: in this pilot study, we developed a comprehensive assessment tool for intra-operative performance and provide preliminary validity evidence for the score. introduction: the purpose of this study was to evaluate the validity of our developed system for assessing suturing skills in laparoscopic surgery (fig. ). we have updated the number of participants and the comparison method relative to last year's report. methods and procedures: fig. shows our computerized system for objective assessment of suturing skills using a laparoscopic intestinal suturing model, e-lap. the system includes a new artificial intestinal model that mimics living tissue, together with pressure-measuring and image-processing devices. each examinee performs a specific skill using the artificial model, which is linked to a suture simulator instruction evaluation unit. the model uses internal air pressure measurements and image processing to evaluate suturing skills. five criteria, scored on a five-grade scale, were used to evaluate participants' skills (fig. ). the volume of air pressure leak was determined by the volume of air inside the sutured artificial intestine. for example, for the criterion "air pressure leakage", the approximate midpoint of the acceptable range was grade ; values lower than the minimum acceptable value received lower grades, and those above the midpoint of the acceptable range received higher grades. we enrolled surgeons who participated in a simulator competition event at the th annual meeting of the japan society for endoscopic surgery (jses). houston methodist hospital, baylor college of medicine. introduction: the sages flexible endoscopy course for minimally-invasive surgery (mis) fellows has been shown to improve confidence and skills in performing gi endoscopy. this study evaluated the long-term retention of these confidence levels and investigated how fellows have changed practices within their fellowships as a result of the course. methods: participating mis fellows completed surveys six months after the course. respondents rated their confidence to independently perform sixteen endoscopic procedures ( =not at all; =very). while the pre- and post-course surveys identified anticipated endoscopy uses and barriers to use, the -month follow-up survey evaluated actual usage and barriers to use in each fellow's practice. 
respondents also noted participation in additional skills courses and the status of fundamentals of endoscopic surgery (fes) certification. responses from the immediate post-course survey were compared with those from the -month follow-up survey. mcnemar and paired t-tests were used for analyses. results: twenty-three of ( %) course participants returned the -month survey. % had passed the fes skills examination and % had attended another flexible endoscopy course. no major barriers to endoscopy use were identified. in fact, fellows reported less competition with gi providers as a barrier to practice compared to their original post-course expectations ( % versus %, p< . ). in addition, confidence was maintained in performing the majority of the endoscopic procedures, although fellows reported significant decreases in confidence in independently performing snare polypectomy (− %; p< . ), control of variceal bleeding (− %; p< . ), colonic stenting (− %; p< . ), barrx (− %; p< . ), and tif (− %; p< . ). fewer fellows used the gi suite to manage surgical problems than was anticipated post-course ( % versus %, p< . ). fellows without fes certification reported a loss in confidence to independently perform barrx (− %; p< . ) and colonic stenting (− %; p< . ), and also a % decrease in the use of the gi suite to manage surgical problems (p< . ). fellows who passed fes noted no significant loss of independence, changes in use, or barriers to use. % of fellows made additional partnerships with industry after the course. % stated flexible endoscopy has influenced their post-fellowship job choice. % would recommend the course to other fellows. the sages flexible endoscopy course for mis fellows results in long-term practice changes, with participating fellows maintaining confidence to perform the majority of taught endoscopic procedures six months later, and over % reporting that flexible endoscopy influenced their career choice. additionally, fellows experienced no major barriers to implementing endoscopy into practice. materials and methods: at our center, we formulated a laparoscopic mentorship program in which a senior consultant was paired with a particular trainee resident for a period of weeks. consultants & residents were part of the study. the or schedules were rearranged to accommodate these pairs. an evaluation of the residents' views was performed prior to the study and once at its completion, using a simple questionnaire with each parameter scored between & . results and discussion: continuous, consistent evaluation by a consultant over an extended period of time allowed them to assess their assigned resident's laparoscopic skill set. all pairs observed an increased frequency of errors being noticed & improved upon. the consultants stressed shedding undesirable operative habits. there was a significant improvement in residents' scores at the end of the short study. conclusion: we found that the short-term mentorship program was easy to incorporate within our or schedule and was well received by the participants. continuous short rotations under senior consultants appear to allow residents not only to fully observe and imbibe correct operative techniques, but also to shed unfavorable habits. we are currently amid the second cycle of our study & looking forward to the results at the end of this academic year. introduction: colorectal cancer is one of the most common cancers in the united states. 
endoscopic submucosal dissection (esd) is an emerging minimally invasive technique that allows complete en-bloc resection and a much lower recurrence rate at long-term follow-up. however, performing colorectal esd is technically demanding, since the colorectal wall is thin and constantly moving, with potentially higher rates of complications (e.g., bleeding and perforation). hence, adequate training for colorectal esd is needed to acquire basic proficiency with minimal complications. objectives: a virtual reality (vr)-based simulator with visual and haptic feedback for training in colorectal esd is being developed, with the aim of allowing trainees to attain competence in a controlled environment with no risk to patients. in this work, we present a newly developed application of the virtual simulator that allows endoscopists to perform and assess technical skills in esd. training tasks are built on physics-based computational models of human anatomy with tumors. methods: the main modules of the vr-based simulator for colorectal esd are: (1) rendering; (2) haptic interface; (3) physics-based simulation; and (4) performance recording and assessment metrics. the rendering engine allows surgical tasks to be performed in the three-dimensional virtual environment. haptic feedback mechanisms allow users to physically feel the interaction forces. physics-based simulation technologies are employed to enable the complicated simulation of virtual surgical tool-tissue interactions. the simulator can also collect learners' performance data to offer feedback based on the built-in metrics. results: four training tasks, involving marking, solution injection, circumferential cutting, and submucosal dissection, are designed to practice skills with different surgical tools. the marking task aims to identify the lesion. the solution injection task minimizes the risk of bleeding and perforation to protect the muscularis. in the circumferential cutting task, the objective is the initial incision of the lesion with the surgical tools. the objective of the dissection task is to remove the tumor from the connective tissue of the submucosa under the lesion. conclusions: the vr-based simulator enables realistic esd tasks, offering a platform for developing, validating, and objectively evaluating performance metrics in colorectal esd training, and an opportunity to climb the learning curve before application to patients. background: the virtual translumenal endoscopic surgery trainer (vtest) simulator is a virtual reality system that was designed to train the hybrid-notes technique. transfer of skill acquired while training on the vtest was measured in a near-real cholecystectomy procedure staged in the easie-r model. methods: sixteen medical students were divided randomly and evenly into two groups: control and training. all subjects performed the cholecystectomy procedure on the vtest simulator to establish a baseline (pre-test). the training group received training sessions over a period of consecutive weeks, consisting of trials per session or as many trials as could be accomplished in one hour, whichever came first. at the end of the training period, all subjects performed one trial on the vtest simulator (post-test), and again to weeks later (retention test). two months after that, subjects performed the hybrid-notes cholecystectomy procedure on an easie-r model. 
performance with the easie-r simulator was video-recorded, and three tasks within the cholecystectomy procedure were isolated for evaluation: clipping, cutting, and dissecting the gallbladder. objective performance measures, such as time and error, were extracted from the videos by two independent reviewers, while subjective performance was scored by four expert surgeons who were blinded to the training conditions. expert reviewers used a modified version of the operative performance rating system of the american board of surgery and the objective structured assessment of technical skills (osats) tool. results: there was no difference in task completion time between the control and training groups (t( )= . , p=. ) in the cutting and clipping tasks. however, there was a significant difference in the number of errors, t( )=- . , p=. . there was no difference in subjective performance between the groups for the clipping and cutting tasks. in the gallbladder dissection task, however, there was a statistically significant difference in "instrument handling" based on one of the surgeons' ratings (t( )= . , p=. ), and a statistically significant difference in "time and motion" based on another surgeon's rating (t( )= . , p=. ). conclusions: results indicate that weeks of training on the vtest simulator did not allow the subjects to transfer their learned skills equally to the near-real environment, even though they retained the skills when tested for retention. this new insight suggests that modification of the training method for different types of surgical skills may be warranted to optimize their transfer to the real environment. examining conclusions: this study provides evidence to suggest that for bariatric surgeons, experience and skills acquired in performing non-bariatric surgery may not translate to improved outcomes in bariatric surgery. as seen in this study, improvement in bariatric surgical outcomes is likely more dependent on experience specifically performing bariatric procedures. as there may be no benefit acquired from performing surrogate procedures, this may have implications for the design of subspecialty training programs and for accreditation purposes. a universally adjustable cellphone holder was used so that smartphones could be placed inside the fls box to capture the task from a similar angle as the onboard camera. residents were able to use their own smartphones to record their performance on each of the five fls tasks in high-definition (hd) quality. after each practice session, they would upload their videos to a designated folder on a password-protected computer in the simulation lab. this folder was linked to a cloud-based storage system to which the fls instructor had exclusive access. the faculty was able to review each video within the next hours and provide immediate feedback to the residents via email, over the phone, or in person. the video library of performance also allowed the instructor to track the progress of the residents and whether they had reached proficiency level in all five tasks to take the fls examination. this program was offered to all surgical trainees. results: utilization of the simulation lab to practice fls tasks increased significantly across all postgraduate years after implementation of this model. six residents took the fls examination. the passing rate of the residents remained the same ( % before and after), but their scores in fls manual skills improved significantly compared to the group prior to implementation. 
the residents evaluated this change positively and reported that the use of videos and immediate feedback by faculty was a valuable intervention in their learning experience. conclusions: smartphone cameras are readily available and can be used for telementoring. incorporation of telementoring into standard proficiency-based fls training can promote self-directed learning and improve access to experts for immediate feedback, a crucial element of effective training in the acquisition of laparoscopic skills. background: it is important both to verbalize individual procedures and to evaluate laparoscopic training objectively and qualitatively. recently, task training and sham operations using virtual simulators have been carried out for medical students as basic laparoscopic maneuver training, but there are few reports of objective qualitative evaluation of such training. in this study, we investigated rubric evaluation as a qualitative evaluation for laparoscopic training. materials and methods: one hundred and six students in the th year at tokushima university participated. basic laparoscopic task training (rubber band ligation, beads transfer, delivery of beads, gauze excision) with a training box, and sham laparoscopic cholecystectomy with a virtual simulator, were performed. task execution time and a rubric evaluation, which verbalizes the evaluation standard for each maneuver, were recorded before and after the basic task training and the sham operation. students whose execution time exceeded the limit in more than two tasks before practice were classified as the group poor at laparoscopic maneuvers. the relationship between this group and the group whose self-evaluation was higher in the rubric evaluation was investigated. results: in basic task training, the average task execution time across all students was shorter after practice than before, but at the individual level some students still exceeded the time limit in more than two tasks. rubric evaluation in basic task training showed no difference between self-evaluation and evaluation by the tutor before and after practice. in the sham laparoscopic cholecystectomy, both students and tutor gave higher rubric scores after practice than before. some students scored themselves higher than the tutor did, especially for extension of the operative field by elevation of the gallbladder, exposure of calot's triangle, and exposure of the cystic duct. students who rated themselves highly in many maneuvers of the sham laparoscopic cholecystectomy also exceeded the time limit in more than two basic tasks. conclusions: because rubric evaluation explicitly verbalizes the key points of each maneuver, it was useful for objective qualitative evaluation of laparoscopic training. pre introduction: bariatric surgery candidates have the opportunity to research bariatric surgeons and hospitals prior to scheduling their elective surgery. pre-operative information sessions are important tools for bariatric surgeons to provide patient education while increasing their patient population. online education is becoming increasingly popular, but its utility over in-person education is uncertain. our objective was to compare patients attending the two most commonly used educational formats, online (webinars) and in-person (seminars), and determine which were more likely to undergo bariatric surgery. 
methods: we conducted a retrospective cohort study of , patients who attended pre-operative information sessions from january to december by reviewing data maintained in the obesity, prevention, policy and management (oppm) database at our institution. the patients were divided into two groups: those who attended an in-person session (n= ) and those who attended an online session (n= , ). the proportion of patients who went on to have bariatric surgery was compared between the two groups. to characterize the study sample, patient demographics, the surgeon providing the information session, and the procedure performed were compared between groups. a multivariate logistic regression model was applied to compare the effectiveness of the in-person and online sessions. results: of the , patients analyzed, % attended online information sessions ( % female, mean age ). the remaining % attended in-person information sessions ( % female, mean age ). analysis found that . % of patients who attended online information sessions went on to have a bariatric surgical procedure, while . % of patients who attended in-person sessions went on to have a bariatric surgical procedure. after controlling for differences in age and gender, the results of the multivariate logistic regression analysis indicate that patients who attended in-person sessions were % more likely to have a bariatric surgical procedure than patients who attended an online session. introduction: knot security is the ability of knots to resist slippage as force is applied, and knowing the optimal number of throws to ensure a secure knot improves efficiency and outcome. the literature on the accepted number of throws per type of suture material has been largely anecdotal, often referring to throws for silk, for polyglactin (vicryl), five for polydioxanone (pds), and six for polypropylene (prolene). we report a pilot knot-tying study of four suture types to determine the optimal numbers of throws. materials and methods: four senior general surgery residents (pgy- and above) and four attending surgeons participated. participants viewed a standardized instructional video and a one-handed knot-tying tutorial. they were instructed to tie one-handed knots, beginning each knot with two throws in the same direction, and to square the third and subsequent throws in the opposite direction. each surgeon tied knots, using different types of - suture material: silk, polyglactin, polydioxanone, and polypropylene. suture types were evaluated using , , , or throws. the participants were randomized to both suture type and order of throw numbers. the knots were then tested on the f.a.s.t. knot tester (sawbones, vashon island, wa) for slippage (insecure knot) or breakage (secure knot). generalized estimating equation (gee) analysis was used to determine the optimal throw number. results: knots were individually tested on the knot tester for slippage and recorded as % slipped (see table). the percentage of slipped knots varied by participant and ranged from to %. generalized estimating equation analysis suggested that the only significant variable when determining knot security was the number of throws (p= . ), not suture type or participant training level. the optimal number of throws for - silk, polydioxanone, and polypropylene was five, whereas six throws were optimal for polyglactin. conclusion: knot security is dependent on the number of throws placed, and the optimal numbers in our study were higher than the commonly accepted numbers of throws. 
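gee analysis of clustered binary outcomes like these (repeated knots tied by the same surgeon) can be set up with standard statistical software. the sketch below, using synthetic data and hypothetical column names, shows one plausible setup with statsmodels; it is not the authors' analysis, and the simulated effect sizes are arbitrary.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 320

# hypothetical long-format data: one row per tested knot
df = pd.DataFrame({
    "surgeon": rng.integers(0, 8, n),              # 8 participants
    "throws": rng.choice([3, 4, 5, 6], n),
    "material": rng.choice(["silk", "polyglactin", "pds", "prolene"], n),
    "level": rng.choice(["resident", "attending"], n),
})
# toy outcome: slippage becomes less likely as throws increase
p_slip = 1.0 / (1.0 + np.exp(df["throws"] - 4.5))
df["slipped"] = rng.binomial(1, p_slip)

# an exchangeable working correlation accounts for the clustering
# of repeated knots within each surgeon
model = sm.GEE.from_formula(
    "slipped ~ C(throws) + C(material) + C(level)",
    groups="surgeon",
    data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
print(model.fit().summary())
```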
evaluation of take-home laparoscopic simulation. introduction: laparoscopic skills can be learned using portable simulators, and these skills are transferable to the operating room. several training regions within the uk have therefore developed and delivered home-based laparoscopic training programmes for junior surgical trainees. although performance improved in some, overall engagement has been poor. similar results have been observed in north america. the aim of our study was to uncover the reasons for poor engagement with home-based simulation with a view to developing a future, more successful programme. methods: this was a qualitative study utilising focus groups. interviews were undertaken with key stakeholders involved in various laparoscopic home-based simulation programmes throughout the uk. training equipment comprised the eosim portable simulator paired with online training tasks. the tasks were similar to those used in the fundamentals of laparoscopic surgery programme (fls). basic metric feedback was provided (e.g., time to complete task). a total of individuals were interviewed, including surgical trainees, consultant trainers, training directors, and programme faculty. this generated approximately hours of data, which were coded using nvivo software. a basic thematic analysis was performed. results: trainees cited multiple competing professional commitments as a barrier to engaging with home-based simulation. they tended to focus on scoring 'points' which contributed toward career progression rather than on tasks which were interesting or associated with personal development. this approach is perpetuated by the surgical training system, which rewards trainees with points for publications and exams, but not for operative skill. this leads to conflict between trainers and trainees, the former expecting trainees to focus instead upon developing their technical abilities. trainees were dissatisfied with metric feedback and wanted individual feedback from consultant trainers (attending equivalent). trainees generally perceived consultants as lacking interest in the programmes and in training in general. however, some consultants were in fact unaware of the programmes being delivered, and others lacked confidence in delivering the necessary training to trainees. conclusions: our findings are widely generalizable and have implications for any institution delivering a similar programme. as a means of improving engagement, the inception of scheduled simulation study days, providing trainees with the opportunity for personalised feedback from consultants, has been suggested. equipping trainers with the necessary competencies to deliver training can be achieved by ensuring attendance at appropriate professional development courses. tackling the 'box-ticking' culture is more challenging and may involve a move toward restructuring the current surgical training scheme. introduction: to provide evidence for the face and content validity of a hybrid active-shooter team training simulation and the impact of a hybrid curricular model on learners' engagement and performance. the following study was conducted because hospitals are increasingly threatened by active-shooter incidents, and no active-shooter training is currently available to train hospital staff members. methods: thirty-five volunteers (medical students, residents, and other allied health providers) from the university of minnesota affiliated medical centers were randomly selected and divided into control and experimental groups. 
the control group (n= ) was given a traditional lecture-style presentation. the experimental group (n= ) participated in the hybrid curriculum, which included augmented reality, kinesthetic simulation, and debriefing components. following both curriculum styles, nasa task load index (tlx) surveys were completed by each group member. a final active-shooter simulation experience was then presented and evaluated by active-shooter-trained raters using a checklist of critical actions from the department of defense. a post-simulation nasa tlx survey and post-test were provided. to assess the face and content validity of a hybrid team-training simulation exercise to prepare healthcare personnel for a hospital-related active-shooter crisis, a -point likert-scale survey determined the realism, utility, and applicability of this type of training, while engagement and performance during the simulation were measured using a nasa tlx survey and contrasted with the raters' evaluations. our study provided evidence to support the face and content validity of an active-shooter simulation team training curriculum as a useful adjunct to health care institutional safety planning. we demonstrated that this type of training requires an optimal level of cognitive activation to increase learners' engagement and performance. we concluded that the hybrid design of our curriculum was successful in delivering these optimal levels of cognitive stimuli, producing an engaging team training simulation experience capable of motivating our learners to acquire the tactical skills and life-preserving behaviors consistent with better survival opportunities during a hospital-related active-shooter crisis. introduction: the virtual electrosurgical skill trainer (vest) provides surgeons and trainees with a hands-on approach to learning best practices in electrosurgery. it comprises five modules covering tissue effects, stray currents, bipolar tools, monopolar tools, and or fire safety. the module in this study teaches the origins of stray currents and shows the learner how they can cause damage to non-target tissues via direct and capacitive coupling. the aim of this study was to assess learning using the vest system. methods: this irb-approved study followed a single-group pretest-posttest design and was conducted at the sages learning center. thirty-eight subjects participated; of these, % were attending surgeons, while the rest were medical students, residents, and fellows. % of subjects had prior fuse exposure, while the remainder had none. subjects were asked to complete a five-question multiple-choice questionnaire before and after using the simulator. it assessed their knowledge of topics such as direct coupling, capacitive coupling, and insulation failure. participants then used the simulator to complete three tasks. first, the subject used direct coupling to seal a vessel and observed the desired effects and potential pitfalls. in the second task, the subject was immersed inside the peritoneal cavity and was directed to use the active electrode to observe how the activation of energy can cause capacitive coupling. in the third task, the subject practiced evaluating the insulation of electrosurgical tools for defects. the wilcoxon signed-rank test was used to compare pre- and post-test scores, and the mann-whitney u test was used to compare groups of subjects as a function of fuse experience. 
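a minimal sketch of the two nonparametric tests named above, using scipy and synthetic scores; the variable names and data are hypothetical, not the study's.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# hypothetical 5-question quiz: 38 subjects, scores in % correct
pre = rng.choice([20, 40, 60, 80], size=38)
post = np.clip(pre + rng.choice([0, 20, 40], size=38), 0, 100)
fuse = rng.random(38) < 0.5          # prior FUSE exposure flag

# paired pre/post comparison within subjects
w = stats.wilcoxon(pre, post)
# unpaired pre-test comparison between FUSE and non-FUSE subjects
u = stats.mannwhitneyu(pre[fuse], pre[~fuse], alternative="two-sided")

print(f"wilcoxon signed-rank: statistic={w.statistic:.1f}, p={w.pvalue:.4f}")
print(f"mann-whitney u: statistic={u.statistic:.1f}, p={u.pvalue:.4f}")
```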
results: the median score on the pre-simulator assessment was % and the post-simulator median score was % (p= . ). there was no statistically significant difference in pre-assessment scores between attending surgeons and the others (p= . ). subjects with prior fuse exposure scored significantly higher on the pre-module assessment compared to those with no prior fuse exposure ( % vs %, p= . ). in the post-assessment, their median scores were % and %, respectively (p= . ). conclusions: the vest simulator module successfully increased participants' overall knowledge of coupling in electrosurgery, regardless of level of surgical experience. participants with prior exposure to the fuse curriculum had greater knowledge of this topic at baseline compared to participants without any fuse exposure. introduction: the objective of this study was to assess the reliability of a modified notechs rating scale for the evaluation of medical students' non-technical (nt) skills. the importance of physicians' nt skills for the safe care of patients is receiving increasing attention in the literature. tools to assess nt skills, such as notechs, which addresses communication, situation awareness, cooperation, leadership, and decision-making, have been shown to be valid and reliable. despite its importance, the assessment of the nt skills of medical students, our future physicians, has received little attention. methods and procedures: twenty-seven medical students participated in acute care simulated scenarios, each approximately minutes long. video recordings of student performance were reviewed and assessed using a modified notechs rating tool adapted for these scenarios with input from a team of clinicians, nurses, and human factors specialists. the rating scale ranged from to , with the lowest score representing very problematic behavior (e.g., not vocalizing concerns or the decision process) and the highest representing model behavior (e.g., identifies future problems and remains calm in the face of unexpected events). two reviewers rated all videos independently on the notechs domains and specific subscales. student scores in each nt skill domain and interrater reliability were assessed. results: a summary of the scores for each notechs domain is shown in table . the highest overall average score of a participant was . , while the lowest was . . the intra-class correlation (icc; two-way random model) was . , and the cronbach's α coefficient was > . . the lowest icc agreement was in the situation awareness domain ( . ), while the highest agreement was in leadership ( . ). conclusion: medical students' nt skills during acute care simulated scenarios varied significantly on this modified notechs assessment. this newly developed tool provides a framework for educators to evaluate medical students' nt skills during simulation training. it further identified domains where students scored lower, such as situation awareness, that could be targeted for education. the moderate icc, in the . - . range, shows that further refinement of the tool is needed to reliably assess the constructs. future steps to obtain validity evidence include adding raters and applying the tool in non-simulated settings. introduction: a general misperception of the real concept of robotic surgery is apparent in our clinical practice. despite its introduction almost years ago, robotic surgery is still surrounded by many myths and beliefs. before designing a trial to see whether this false awareness could impact outcomes, we measured the misperception with a survey. 
moreover, we tested whether medical school today gives future doctors the necessary knowledge about robotic surgery. with the same survey we explored feelings about the introduction of artificial intelligence in medicine and perceptions of the consequences of a wider use of technology in medicine. methods and procedures: a multiple-choice survey was designed and anonymously administered via the surveymonkey platform (http://www.surveymonkey.com). a total of questions were selected by the research team and included in the survey. the questionnaire was divided into three parts: the first collected information on the participant population; the second asked specific questions about robotic surgery; the third focused on technology use in medical education. results: we received and analyzed questionnaires, of which were completely filled in. many undergraduates (ug) consider robotic surgery "experimental", would prefer open surgery on themselves, and see a risk of robotic surgery damaging the patient-surgeon relationship. the situation is better among medical students (ms), but great diffidence was still encountered. % of ug consider robotic surgery "experimental" vs only . % of ms (q ). most thought robotic surgery had been in use for only years or less (q ); . % of ug and . % of ms gave the right answer (p=. ). almost % of ug see robotic surgery as a risk of damaging the patient-surgeon relationship; this is not seen among ms (q ) (p=. ). % of ug are fearful of robots being used to operate on them; this fear is significantly reduced among medical students (p=. ). ug were less familiar with the indications and uses of robotics; ms gave a correct response more frequently (q , . % vs . %, p= . ). conclusions: our results indicate that robotic surgery is still subject to many misperceptions and a generally low level of information. this general picture is partially mitigated during medical school, but the level of knowledge remains low. a major effort to clarify every technical aspect seems mandatory, and an ethical debate about robotics, technology, and ai as part of the medical curriculum is advisable. background: learning theory states that a certain level of physiological stress or cognitive activation is required to achieve optimal task engagement and performance by learners. our study sought to determine whether a hybrid team training curriculum, inclusive of a task-oriented interactive virtual environment, could help achieve the optimal level of cognitive activation required for higher task engagement and performance. methods: a total of thirty-five medical professionals from the university of minnesota participated in several team training simulations. participants were randomly assigned to experimental and control groups. the experimental group (n= ) was exposed to a hybrid team training module consisting of a task-oriented augmented reality phase followed by second and third phases consisting of a kinesthetic simulation scenario and debriefing, respectively. the augmented reality phase presented trainees with an interactive -degree image of the same clinical room where the simulation would take place, allowing 'situated learning' to occur. during the learning phase, trainees were encouraged to interact and communicate with each other while completing the tasks, allowing 'social learning' to take effect. 
the control group's (n= ) educational component consisted of a traditional audiovisual lecture-style introductory presentation, a simulation, and debriefing. after completing their respective educational components, each group completed a nasa task load index survey to assess the cognitive load experienced with each educational model. subjects were then exposed to a final simulation (test simulation) similar in content and structure to the initial simulation, followed by a second nasa tlx survey. raters evaluated both groups' levels of engagement and performance using a validated checklist of critical actions. results: the experimental group showed higher weighted overall nasa cognitive load index scores than the control group (p= . ) prior to the test simulation. the weighted nasa score remained elevated in the experimental group following the test simulation, whereas in the control group the post-simulation nasa assessment revealed a decrease in cognitive load (p= . ). expert raters using a validated checklist determined that . ± . % of the experimental (hybrid curriculum) group and . ± . % of the control group appeared more engaged and performed better during the simulation. conclusions: pre-simulation task-oriented augmented reality learning environments designed to incorporate situated- and social-learning virtual experiences can provide the optimal cognitive boost, resulting in higher participant engagement and performance during team training simulation scenarios. introduction: despite the great importance of laparoscopy, medical students in brazil have only brief contact with this surgical specialty during medical school. usually, they encounter this specialty during the surgery clerkship in the last years of medical school. therefore, few students perform clinical research or develop an interest in this area during their studies. objective: to awaken medical students' interest in laparoscopy early in medical school, fostering the development of clinical research projects and preparing new generations of minimally invasive surgeons. discussion: the academic league of videolaparoscopy was created in under the guidance of dr. gustavo carvalho of the university of pernambuco, brazil. an academic league is a group of medical students who are guided by a tutor to develop three areas: research, teaching, and clinical practice. every year new students join the league after being selected through a multiple-question test and an analysis of their curriculum vitae. the students are encouraged to participate in laparoscopic procedures as observers, learning about the techniques and instruments. moreover, there are minimally invasive surgery lectures and courses during the year. general surgery residents can also be part of the program as tutors. they are encouraged to present lectures and to assist with research projects. medical students participated in this program over years. % pursued a surgical specialty after graduation. % did minimally invasive surgery as a fellowship. conclusions: the students who participate in the league's activities show an increased interest in pursuing the path to becoming laparoscopic surgeons. background: surgical education is an active and adaptive process of developing knowledge and technical and non-technical skills. 
the rise of social media has created a paradigm shift in surgical education, with online learning platforms offering exposure to real-time content, expert instruction, and global collaboration. while these disruptive technologies evolve, their influence on surgical education has not been investigated. our goal was to evaluate the growth and impact of an online surgical education model, the advances in surgery (ais) channel. our hypothesis was that utilization of and engagement with the platform continue to grow, providing novel methods of measuring successful education. methods: assessment of the platform's membership demographics, user activity, and engagement was performed from its inception in to quarter . the ais channel uniquely provides free, high-quality, innovative content from elite surgeons in scheduled and continuously available formats across colorectal, bariatric, and endocrine surgery service lines. users log in to access content, with demographics, time spent, and content accessed recorded as measures of active account utilization and engagement. the main outcome measures were overall membership trends and utilization patterns by region, content type, and surgical specialty for the platform. results: users were predominantly male ( . %) and surgeons ( . %), and ranged in age from to years ( . %). the main surgical subspecialty represented was colorectal ( . %). active account usage/weekly recurrence was . % ( % industry benchmark), with users engaged for a mean of minutes/session (excluding live events). since inception, steady exponential growth has been seen across several dimensions. registered users and unique ip addresses increased from over , and , in to over , and . million in , respectively. the number of countries represented increased to across continents. at present, over live surgeries and live congresses have been broadcast from countries, with over , surgical videos available on demand to facilitate surgical education. the greatest engagement is seen with live surgical broadcasts. conclusion: our analysis demonstrated proof of concept for a unique online model to provide effective surgical education. success was validated through the increase in overall users, sustained active account usage, and global penetration. a user preference for live surgical broadcasts was evident. knowing the utilization and preference patterns, the platform can continue to evolve and enhance the learner experience. with this growth and penetration, there is the potential to globally improve patient outcomes and the quality of care provided. background: a realistic simulator for transabdominal preperitoneal (tapp) inguinal hernia repair would enhance surgeons' training experience before they enter the operating theater. the purpose of this study was to evaluate the efficacy of a d-printed tapp simulator in evaluating preoperative skill before entering the operating theater. methods: surgeons in our institution were enrolled in this study. they performed a simulated tapp, and their performance score was measured using a tapp checklist. the tapp simulator allows for the performance of all procedures required in tapp. the correlations between postgraduate years (pgys), age, number of laparoscopic operations experienced (more than , less than ), number of tapp procedures experienced, and the performance score were evaluated. results: a strong correlation between the number of tapp inguinal hernia repairs experienced and the performance score was observed in this study (r= . ). 
however, the correlations between pgy, age, and score were weak. introduction: as the field of laparoscopic surgery grows, the need for standard measures of complex laparoscopic surgical skills is apparent. fundamentals of laparoscopic surgery (fls) testing is required to complete general surgery residency, but there is no standard metric to convey expertise in advanced laparoscopic procedures. in an effort to develop a standardized assessment of laparoscopic suturing expertise, a group of experts was surveyed using delphi methodology to reach consensus on observed laparoscopic suturing skills reflective of performing at an expert level. methods: expert laparoscopic surgeons participated in serial surveys via redcap (research electronic data capture). experts included surgeons who perform > /year laparoscopic procedures that involve intra-corporeal suturing, recruited from the authors' personal and professional networks. using a -point likert scale, participants were asked to agree/disagree that different observed laparoscopic suturing skills indicate performing at an expert level. these skills were chosen from prior assessment instruments in the literature and the authors' previously published work. skills were considered to meet the criteria for consensus, and were eliminated from the next round of the survey, after reaching % consensus as "strongly agree." results of the previous round of surveys were shared with participants at the start of the next round. the predefined endpoint for the delphi was set as a maximum of rounds, reaching % consensus on each skill, or > % of initial respondents failing to return for subsequent surveys. results: after the first round of the delphi survey, respondents met the inclusion criteria. preliminary data demonstrated skills that reached consensus (> % of respondents chose "strongly agree"): forehand suturing, avoiding tissue trauma, having a technically acceptable final product (i.e., tight closure), and tying a secure knot at the end of suturing. items did not approach consensus (< % of respondents chose "strongly agree" or "agree"): alternating hands for each throw while tying, never missing a target when grabbing the needle/suture, alternating the direction of throws when tying, and backhand suturing. data from all four rounds of surveys, as well as the final draft of the assessment instrument, will be available at the time of presentation. conclusion: the preliminary data of this delphi study allowed us to reach consensus amongst a group of expert laparoscopic surgeons on the characteristics of expert laparoscopic suturing, which will allow the creation of a comprehensive assessment tool for this domain. validation of such a tool will help advance the surgical field toward true competency-based credentialing and promotion. the study was designed to assess knowledge of the sages safe cholecystectomy program (scp) among european surgeons (specialists and residents). additionally, surgeons' opinions on the usefulness of each of the rules of the scp were gathered. the data were analyzed in terms of differences between residents and specialists. this is to set the ground for an educational program and to increase the safety of elective laparoscopic cholecystectomy by minimizing the occurrence of common bile duct injury (cbdi). methods: data on knowledge of the scp and opinions on the usefulness of its rules were gathered in the form of an anonymous questionnaire distributed among participants at several surgical conferences in poland. the questionnaire then asked about the surgeon's experience in terms of cholecystectomies performed and the number of complications in the form of cbdi. 
it then listed the scp rules and asked the surgeon for their opinion on the usefulness of each rule on a -point scale. the gathered data were subject to statistical analysis, and a comparison between specialists and residents was performed (a sketch of such a comparison appears below). the study has been registered at clinicaltrials.gov (nct ). results: awareness of the scp among respondents was low. significant differences in the mean usefulness score between residents and specialists were observed for two rules: one rule was found more useful by residents (mean . vs. . , p= . ), whereas another was found more useful by specialists (mean . vs. . , p= . ). the awareness of the sages safe cholecystectomy program in poland is still low and needs to be promoted. both surgical residents and specialists consider the rules of the scp to be useful during surgery, although there are slight differences in the usefulness scores between the groups. an educational program to promote and further implement the scp should be established.

introduction: transanal total mesorectal excision (tatme) has attracted substantial interest amongst colorectal surgeons throughout the world. technical challenges of the technique, however, have been acknowledged by early adopters, and this may underpin the early reports of visceral injuries which occurred during the perineal phase. evidence from previous surgical training programs suggests that a structured proctorship programme can shorten the learning curve, reduce operative time and, most importantly, reduce major complications. the aim of this study was to report on the first national pilot training initiative, which was developed in the uk to ensure safe introduction of this technique. methods: a pilot training programme for the uk has been established in partnership with the healthcare industry and supported by the association of coloproctology of great britain and ireland. the programme consists of three phases: (i) development of a consensus process on the optimum training curriculum for tatme from all relevant stakeholders, including experts, early adopters, and potential learners, to guide the training of this technique; (ii) piloting of this training curriculum; and (iii) assessment and quality assurance mechanisms to monitor training and measure outcomes. results: a cohesive multi-modal training curriculum has been developed providing clear guidance on case selection and supporting multi-disciplinary and multimodal training, including online modules, dry-lab and purse-string simulators, cadaveric training and a formal clinical proctoring programme. the uk pilot programme opened for applications in may and, after a rigorous selection process, the initiative was launched in september with trainers mentoring consultant colorectal surgeons from five centres. the selection of learners was based on suitable case volume and prior experience in laparoscopic rectal surgery. objective assessment tools were applied to an unedited video of a laparoscopic rectal surgery case for each applicant. for the selected centres, access to the ilapp tatme app was provided to access educational content, including operative video footage, prior to attending a bespoke cadaveric workshop. each learner will then benefit from a structured, centrally organised and funded proctorship programme at their own institution. a global assessment score form has been specifically designed to monitor training, and a formal accreditation process will be used to sign off each learner using a competency assessment tool.
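the scp survey above compares ordinal likert usefulness ratings between residents and specialists; the abstract does not name its statistical test, so the mann-whitney u test below is an illustrative choice for ordinal data, with invented scores.

```python
# illustrative only: the abstract's actual test is unspecified and the
# ratings below are made up.
from scipy.stats import mannwhitneyu

residents_scores   = [5, 4, 5, 5, 3, 4, 5]   # hypothetical ratings of one scp rule
specialists_scores = [3, 4, 3, 2, 4, 3, 3]

stat, p = mannwhitneyu(residents_scores, specialists_scores,
                       alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.3f}")  # small p -> groups rate the rule differently
```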
data on the cadaveric workshop and initial outcomes of the clinical mentorship will be presented at the conference. conclusion: a competency-based pilot training programme for transanal total mesorectal excision has been launched in the uk to support safe introduction of this technique.

practicing on an fls trainer box is effective but requires a large amount of consumables and is scored subjectively. the purpose of this study is to evaluate the face validity of the intracorporeal suturing task on a virtual fundamentals of laparoscopic surgery simulator (virtual fls). we hypothesize that the virtual fls will demonstrate face validity. methods and procedures: after a video demonstration and a practice period, twenty-three medical students and residents completed an evaluation of the simulator. the participants were asked to perform the standard intracorporeal suturing task on each of the virtual fls and the traditional fls box trainer. the presentation order of the devices was balanced. the performance scores on each device were calculated based on time (seconds), deviations from the black dots (mm), and incision gap (mm). the participants were then asked to complete a -question questionnaire regarding the face validity of the simulator. participants answered questions with ratings from (not realistic/useful) to (very realistic/useful). a wilcoxon signed-rank test was performed to identify differences in performance on the virtual fls compared to the traditional fls box trainer (a sketch of this paired comparison appears below). results: responses to of the questions ( . %) averaged above . out of . the questions that rated the highest were the degree of realism of the target objects in the virtual fls compared to the fls ( . ).

presently, most training methods for thoracoscopic esophagectomy use live porcine models; this presents several problems, including cost, long preparation times, and ethical issues. these problems further prevent frequent training. currently, no alternative models for thoracoscopic esophagectomy training exist. we report, for the first time, the development and use of a non-biomaterial training model for thoracoscopic esophagectomy. methods: we collaborated with sunarrow co., ltd. (tokyo, japan) to develop the training model. we created organ models for the esophagus, trachea, bronchus, aorta, vagus nerve, recurrent nerve, bronchial artery, lymph nodes, vertebrae, azygos vein, and thoracic duct, and filled the models with a polyvinyl alcohol hydrogel. the gaps between organs were filled with a filler material mimicking connective tissue. we chose a synthetic resin that closely mimics the characteristics (rigidity or elasticity) of each organ. after each organ was fixed, the model was covered with a filler to create a pleural membrane to allow training in peeling operations. in addition, because a patient plate was attached to the rear of the training model, excision with an energy device was possible and more closely simulated surgical conditions. results: using the training model resulted in a highly satisfactory level of experience in three trainees. the trainees were able to learn anatomical positions and the sequence of surgical procedures, including endoscope handling.

centre for rural health, aberdeen university. introduction: as doctors become expert in a complex procedure, they develop automatic nuances of performance that are difficult to explain to a peer or a trainee (so-called 'unconscious competence').
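the virtual fls study above compares each participant's performance on two devices, a paired design suited to the wilcoxon signed-rank test it reports. a minimal sketch with invented scores:

```python
# illustrative only: per-subject score pairs are fabricated.
from scipy.stats import wilcoxon

virtual_fls_scores = [412, 388, 455, 301, 367, 420]  # hypothetical task scores
box_trainer_scores = [405, 399, 431, 322, 371, 408]  # same subjects, box trainer

stat, p = wilcoxon(virtual_fls_scores, box_trainer_scores)
print(f"W = {stat:.1f}, p = {p:.3f}")  # non-significant p -> no detected device effect
```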
traditional methods which aim to allow sharing of expertise have limitations: concurrent reporting alters the flow of the task at hand, while retrospective reporting is subject to bias and often incomplete. iview expert is a technique validated in the aerospace domain which externalises an expert's cognitive processes without disrupting the task at hand. the aim of this project is to assess the feasibility of adapting the technique to medical training. methods: this was an observational case study in which an expert endoscopist wore a head-mounted camera to capture a complex procedure (colonoscopy). captured video was reviewed during a facilitated debrief which externalised the expert's cognitive processes. the debrief was recorded and formed an audio commentary. the video and accompanying audio commentary formed a learning package which was watched by a specialty trainee. the technique differs from standard procedural videos in that it provides a more detailed insight into the cognitive processes of the expert. this is achieved through the debrief, which encourages reflection upon kinaesthetic (head movement) as well as auditory and visual cues, resulting in a higher level of experiential immersion. questionnaires examined the acceptability and educational value of the technique using likert scales and free-text answers. quantitative data were presented using basic descriptions in terms of agreement with statements. qualitative data from free-text responses were coded in order to identify key themes. results: the expert agreed that wearing the camera was acceptable and did not interfere with the procedure, nor with usual decision-making processes. qualitative analysis revealed the debrief process to be associated with a high level of experiential immersion: "as if they were there". both the expert and the trainee strongly agreed that the process was educationally valuable and that they learned something new. qualitative analysis demonstrated that the technique revealed useful and unique nuances of the procedure. the intervention could represent a powerful adjunct to existing training methods, especially amongst more experienced practitioners. we are currently undertaking a larger study involving a greater range of procedures with more learners.

introduction: endoscopy is an important skill for general surgeons to possess. however, there is a lack of training within surgery residency programs. we implemented a one-day endoscopic surgery course with the aim of improving the confidence of surgical residents in performing endoscopic procedures. we also aimed to examine the effect of exposure to this course on self-reported confidence in performing endoscopic procedures. methods and procedures: the fundamentals of endoscopic surgery course at texas tech university health science center is a one-day course consisting of both didactic training and lab training. the didactic part of the course is taught by attending physicians and focuses on the basics of endoscopy, management of upper and lower gastrointestinal (gi) bleeds, and techniques to perform a variety of gi endoscopic procedures on swine esophagus and stomach explants. the lab portion of the course allows residents to perform different endoscopic surgical procedures with the attending physicians providing guidance. residents from pgy- to pgy- participated in the course. a -item questionnaire that measured self-reported confidence in performing several endoscopic procedures on a - likert scale was administered before and after the course.
results: twenty-two participants successfully completed the training and the questionnaires. a significant improvement was observed in overall confidence in performing a variety of endoscopic procedures ( . ± . , p< . ). the improvements remained significant even after controlling for the number of years of postgraduate surgical training (p< . ; one way to perform such an adjustment is sketched below). conclusion: the one-day fundamentals of endoscopic surgery course enabled residents to be more confident with endoscopic procedures. overall, the residents felt that the course was helpful and would like to attend more than one session per year. this course should be held at least annually to allow general surgery residents to become even more confident with this important skill. by being more confident in their surgical endoscopy skills, they will ultimately be able to provide better care for patients.

introduction: the effectiveness of a course for improving the laparoscopic skills of surgical residents using swine models was evaluated through a self-report questionnaire administered before and after course completion. the purpose of the training is to provide surgical residents opportunities to practice and advance their laparoscopic proficiency. methods and procedures: participating residents at all post-graduate year levels (pgy through pgy , n= ) were provided anesthetized pigs with which to perform a variety of simple to complex laparoscopic cases. prior to training, residents were given a questionnaire composed of eleven questions requiring the subjects to rate their confidence in performing various laparoscopic procedures on a - likert scale. after completion of the course, an identical questionnaire was distributed with two additional questions relating to the overall impact of the course. all statistical analyses were conducted using r statistical software (version . ). conclusion: overall, one-day hands-on training using swine models improved residents' skills, confidence, and understanding of laparoscopic surgery. the information acquired through the questionnaire emphasized the importance of providing a laparoscopic training course as a standard requirement at all medical institutions. allowing opportunities for surgical residents to practice their laparoscopic skillset will not only help their individual academic advancement, it will allow them to provide optimum care for their patients.

background: learning laparoscopy is difficult, and many educational tools, including simulation training, are required. feedback plays a crucial role in motor skill training but requires expert tutors and is time-consuming. e-learning increases knowledge acquisition through a more interactive multimedia experience and reduces the costs of learning. in the last decade, multiple applications (apps) have been developed for mobile medical training. a new ios app was developed using specially designed educational videos that explain the main technical aspects of advanced laparoscopy through simulation training. the aim of this study is to present the first results of its incorporation in a surgical simulation lab as a complement to effective feedback. methods: twenty-five consecutive residents were trained in our simulation lab through a -session validated training program for the acquisition of the advanced laparoscopic skills needed to perform a laparoscopic hand-sewn jejuno-jejunostomy. every session had written instructions and a basic tutorial video.
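the endoscopy course study above reports that the confidence gain remained significant after controlling for postgraduate year. the abstract does not specify its model; one plausible approach, sketched here with invented data, is to regress each resident's pre-to-post change score on pgy level and test the intercept.

```python
# a minimal sketch under assumed data; not the study's actual model.
import numpy as np
import statsmodels.api as sm

pgy    = np.array([1, 1, 2, 2, 3, 3, 4, 4, 5, 5])   # hypothetical training years
change = np.array([1.8, 2.1, 1.5, 1.7, 1.2, 1.4,
                   0.9, 1.1, 0.8, 1.0])              # post - pre confidence (invented)

X = sm.add_constant(pgy)          # intercept + pgy term
res = sm.OLS(change, X).fit()
print(res.summary())  # intercept p-value: is the mean gain nonzero after adjusting for pgy?
```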
the app consists of two main sections: the first explains the essential techniques needed for intracorporeal suturing, and the second is a complete walkthrough of the validated training program. the trainees were divided into two groups: the first was trained without using the app (napp) and the second group was trained using the app (yapp). both groups of trainees could ask for feedback anytime they needed. trainees were assessed before and after the training program using validated rating scales, and the number of tutor-feedback sessions needed was registered. finally, the yapp group answered a survey about the strengths and weaknesses of the app for learning advanced laparoscopic skills. results: twenty-five residents completed the training program ( yapp and napp). both groups finalized their training with no statistically significant differences in their scores (p= . ). the number of tutor-feedback sessions needed to complete the training in the yapp vs. napp groups was ( - ) vs. ( - ) (p< . ), respectively. in the questionnaire, all participants considered that the app was effective for learning advanced laparoscopy. over downloads have been registered since the app was published in the apple app store in . we present a novel smartphone app that guides laparoscopic training using simulation-based educational videos, with very good results. the use of app-guided learning reduces the need for expert tutor feedback, reducing the costs of simulated training.

jemin choi, young-il choi; kosin university gospel hospital. purpose: laparoscopic appendectomy (la) has been widely performed for acute appendicitis. in addition, minimally invasive surgery such as la is a common surgical technique for surgical residents. however, single-incision laparoscopic surgery (sils) is a challenge for inexperienced surgical residents. we describe our initial experience in teaching the sils procedure for appendectomy in our medical center. methods: twenty-nine cases of single-incision laparoscopic appendectomy (sila) were performed by a single surgical resident, and cases of la were performed by surgical residents and board-certified surgeons. the cases were reviewed retrospectively.

( ) clinical stressors (i.e., vitals of a patient coding). we developed a stress simulator testbed by integrating an fls box trainer with a linux computer running custom c++ code. the code generated various stressor conditions while recording sensor data from the trainer and human operator. we tested groups of participants in an irb-approved trial, including: novices (non-medical students), intermediates (medical students), and experts (pgy residents and fellows). the study consisted of subjects performing the peg transfer and the pattern cut six times (baseline, four randomized stressors, post-test). after each task, the nasa-tlx survey was administered to determine the overall workload of that stressor condition. an analysis of variance was conducted to identify significant trends in terms of stressor type (a sketch of this repeated-measures analysis appears below). results: when compared to baseline nasa-tlx scores, the intermediate group had greater changes in overall workload than novices and experts (p= . ). additionally, the change between baseline and post-test workload was significantly lower than for the environmental, negative evaluative, and clinical stressors (p= . ). for pattern cutting, subjects reported a significantly lower perception of failure (p= . ) in both the positive evaluative (mean= . ) and post-test conditions (mean= . ); yet, though not statistically significant (p= . ),
the measured accuracy in the task during the positive evaluative condition was actually worse ( . %), second only to the pre-test accuracy ( . %). the best accuracy for pattern cutting across all expertise levels was % for the post-test, followed by . % in the negative evaluative condition. these results are interesting as they show that, despite perceived improvements in performance with a positive feedback condition, performance actually degrades, and performance is better in the negative feedback condition, which is perceived to be more difficult. these results were not found in the peg transfer task, which is arguably an easier task. conclusion: from the evidence gathered in the study, it is clear that there is a correlation between distractors and performance. further analysis is needed to identify the relationship between the type of stressor and the inherent difficulty of the tasks, in terms of which type of stressor best improves learning and outcomes.

each received credentials to perform diagnostic and therapeutic ercp from their respective hospitals in nevada, minnesota, and idaho. one continues to teach ercp to general surgery residents, and another taught the skill to fellows in an advanced endoscopy fellowship. all three continue to use ercp in their practice ( to times per month), as they each specialized in a field that utilizes ercp routinely. choledocholithiasis is the most frequent indication, though ercp is also performed for iatrogenic biliary duct leaks, traumatic biliary or pancreatic duct leaks, chronic pancreatitis, and malignancy. conclusions: training in esophagogastroduodenoscopy and colonoscopy is required for general surgery residents, but the addition of ercp to select residents' training enables them to completely manage their patients' surgical disease. the training of select general surgery residents in this skill has been successful, evidenced by the continued use of ercp in the practices of three residents who completed this training program at our institution. the decision to train residents in this skill should be left to individual program directors and department chairs. we recommend that residents selected for this additional training should plan to practice in specialties where ercp can be implemented.

conclusion: same-day discharge after nissen fundoplication and hiatal hernia repair is feasible for select patients. one major challenge for same-day discharge is the current insurance provisions required for hospital reimbursement. within the parameters of this study, bmi and asa score did not differ between discharged and admitted patients, while older age and increased procedure duration were associated with need for admission.

premkumar anandan, ms, facs; bangalore medical college and research institute. introduction: minimal access surgery is an imperative element of enhanced recovery programs and has significantly improved outcomes. enhanced recovery programs (erp), synonymous with "fast-track" surgery, were first conceived by dr. henrik kehlet. largely described for colorectal surgery, they have been reported to be feasible and useful for maintaining physiological function and smoothing the progress of recovery. most patients who present as surgical emergencies are not adequately prepared, and many are not in a normal physiological state. the feasibility of enhanced recovery program protocols in such emergency minimal access surgery remains unclear.
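the stress-simulator study above administered the nasa-tlx to every subject under each stressor condition, a within-subjects design suited to the repeated-measures anova it reports. a minimal sketch with fabricated scores:

```python
# illustrative only: condition names follow the abstract; the scores are invented.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
conditions = ["baseline", "environmental", "positive_eval",
              "negative_eval", "clinical", "posttest"]
rows = []
for subj in range(1, 7):                         # hypothetical subjects
    for i, cond in enumerate(conditions):
        rows.append({"subject": subj,
                     "condition": cond,
                     "tlx": 40 + 5 * i + rng.normal(0, 3)})  # fabricated workload
df = pd.DataFrame(rows)

# one nasa-tlx score per subject per condition -> repeated-measures anova
print(AnovaRM(df, depvar="tlx", subject="subject", within=["condition"]).fit())
```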
this study was designed to validate an enhanced recovery program in patients who undergo emergency minimal access surgery.

introduction: pathways for enhanced recovery after surgery (eras) have been shown to improve length of stay and postoperative complication rates across various surgical fields; however, there is a relative lack of evidence-based studies in bariatric surgery. the objective of the current study was to determine whether starting a bariatric full liquid diet on postoperative day (pod) zero was associated with shorter length of stay (los) for patients who underwent laparoscopic sleeve gastrectomy (lsg) or roux-en-y gastric bypass (rygb). methods: a retrospective review of a prospectively collected dataset was conducted at a single institution before and after implementation of a new diet protocol for lsg and rygb. postoperative diet orders were changed from a full liquid diet on pod to pod . length of stay and -day readmissions were reviewed from june to august . independent-samples t-tests were used to compare continuous variables and chi-squared tests for categorical variables before and after the diet change was implemented (a sketch of these comparisons appears below). patients were excluded if they were undergoing revision surgery, were discharged directly from the pacu, or had significant intraoperative complications or required reoperation within the same admission.

introduction: data suggest value in using tap (transversus abdominis plane) neural blockade in abdominal surgical procedures. we deploy tap blockade using liposomal bupivacaine via ultrasound (us) as part of a narcotic-sparing pain management pathway for patients undergoing abdominal surgery in our rural community setting. our goal was to evaluate the adequacy of postoperative pain control and the success in avoiding narcotic usage. methods and procedures: records of patients undergoing abdominal surgical procedures performed by one surgeon over an -month period were reviewed under irb approval. patients taking narcotics prior to the procedure (except for discomfort due to the condition being surgically treated) were excluded from analysis, as were those admitted to the hospital for postoperative treatment. us-guided lateral tap blocks were performed by the surgeon using mg of liposomal bupivacaine and mg of bupivacaine in the or prior to the incision. a unilateral block was performed for unilateral procedures (e.g., inguinal hernia) and bilateral for laparoscopic or midline procedures. incisional sites were treated with a field block of mg of bupivacaine. prescriptions for medications included mg of acetaminophen qid and mg of naproxen sodium tid for days. a prescription for tramadol ( to mg prn up to times daily; tablets with no refill) was given. patients were seen in follow-up two weeks postoperatively. data (following standard scales/metrics) for patient-reported outcomes, e.g., pain, nausea/vomiting, and fatigue, will be analyzed with the above data, and the analysis and conclusions will be presented and discussed.

federico sertic, md, ashwin gojanur, dr, ahmed hammad, md; guy's and st thomas' hospital. introduction: the aim of this project is to assess the quality of post-operative pain relief in colorectal surgery and identify patients in whom pain management has not been effective, in order to improve the quality of post-operative care. effective management of post-operative pain has long been recognised as important in improving the post-operative experience, reducing complications and promoting early discharge from hospital.
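the bariatric diet study above compares length of stay and readmissions before and after the protocol change with independent-samples t-tests and chi-squared tests. a minimal sketch with invented numbers (the readmission window and group sizes are elided in the source):

```python
# illustrative only: all values are fabricated.
import numpy as np
from scipy.stats import ttest_ind, chi2_contingency

los_pod_later = np.array([2.1, 2.4, 1.9, 2.2, 2.6])   # hypothetical los, old diet start
los_pod_zero  = np.array([1.6, 1.8, 1.5, 2.0, 1.7])   # hypothetical los, pod-0 diet

t, p = ttest_ind(los_pod_later, los_pod_zero)
print(f"los: t = {t:.2f}, p = {p:.3f}")

# rows: diet protocol; columns: readmitted vs. not (hypothetical counts)
table = np.array([[4, 96],
                  [5, 95]])
chi2, p, dof, _ = chi2_contingency(table)
print(f"readmission: chi2 = {chi2:.2f}, p = {p:.3f}")
```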
standards: all patients should be pain-free at rest, % of elective patients should be told what analgesia they will have post-operatively, % of patients should be satisfied with their pain management, and % of patients should feel staff did everything they could to control their pain. methods and procedures: questionnaires were given to patients on the day prior to discharge. questions about the pre-operative and post-operative pain experience were asked. data regarding post-operative analgesia were collected from medication charts and medical notes. data were collected over a period of two months (august/september ). range of procedures: elective laparoscopic abdomino-perineal excision of rectum with igap flaps, elective laparoscopic right hemicolectomy, laparotomy + bowel resection/stoma formation ( elective, emergency), elective repair of parastomal hernia, appendicectomy ( laparoscopic elective, laparoscopic emergency, laparotomy emergency) and elective reversal of ileostomy. pain scores ( - ): immediate post-operative pain, day post-operative pain, post-operative pain after day , and pain on moving/coughing/straining. results: mean immediate post-operative pain score was . ( % of patients with score +), mean day post-operative pain score was . , mean post-operative pain score after day was . , mean pain score on moving was . ( % of patients with score +), and mean pain score on coughing/straining was . ( % of patients with score +). % of patients were satisfied with their post-operative pain management and felt that the staff had done everything they could to manage their pain. % of patients were not aware of their post-operative analgesia regimen and % did not know how regularly they could request analgesia. conclusions: effective management of post-operative pain is a key part of post-operative care and an important component of enhanced recovery programmes. patient satisfaction with pain management has been found to correlate with received pre-operative information. increasing ward nurses' and acute pain teams' knowledge is important in improving patients' pain experiences. interestingly, those patients who had a background of long-term opioid requirements reported that they were satisfied with their pain management.

methods and procedures: a patient undergoing a standard ultrasound-guided ql block by an anesthesiologist established the baseline anticipated response and procedure time. the procedure, performed under sedation preoperatively, required over minutes. for this study, patients undergoing laparoscopic colorectal surgery were administered a lateral ql block (modified ql ) under ultrasound guidance by the operating surgeon. ml of a mixture ( ml injectable liposomal bupivacaine suspension, ml . % bupivacaine hydrochloride and ml normal saline) was injected bilaterally, after induction, skin preparation and draping, and prior to the operation. postoperative narcotic use and pain vas scores were documented. results: six patients were administered a bilateral ql block intraoperatively. procedures were: laparoscopic sigmoid colectomies, one end-ileostomy reversal, a laparoscopic completion proctectomy with ileal pouch-anal anastomosis, and a laparoscopic descending colectomy. of the narcotic-naïve patients, mean pain vas on post-op days , and were . , . and . respectively, within a multimodality pain management/enhanced recovery program where standing orders prompt narcotic administration by nursing staff at a pain vas threshold. all were discharged on pod or without narcotic prescriptions.
two of the patients were chronic narcotic users, and they were discharged on their baseline narcotics, i.e., without additional narcotics. all intraoperative blocks were performed in less than minutes. conclusion: a novel, surgeon-administered lateral ql block under ultrasound guidance is feasible and provides post-operative pain control. patients are discharged home on no or baseline narcotics. a randomized controlled trial is being constructed based on these striking findings.

keywords: lc-laparoscopic cholecystectomy, ga-general anaesthesia, sa-spinal anaesthesia. nikhil gupta, rachan kathpal, dr, arun k gupta, dr, dipankar naskar, dr, c k durga, dr; pgimer dr rml hospital, delhi. introduction: cholecystectomy has shown some advantages when done under spinal anaesthesia (sa), being associated with less intra-operative and post-operative morbidity and mortality. reports of laparoscopic cholecystectomy (lc) under regional anaesthesia alone have included patients with coexisting pulmonary disease who are deemed high-risk for ga. the aim of the present study is to assess the efficacy and safety of laparoscopic cholecystectomy under sa. materials: this prospective, interventional study was conducted on patients with chronic calculous cholecystitis attending the general surgery out-patient department of our institution. results: in our study, the intraoperative complications recorded were hypotension, bradycardia, intra-op shoulder-tip pain, bleeding from the liver bed, bile spillage, post-op pain and vomiting. % of patients had intraoperative pain, % had shoulder-tip pain, . % had bradycardia, . % had hypotension, . % had bile spillage and . % had bleeding. laparoscopic cholecystectomy under spinal anaesthesia should be promoted more, even in developing countries, but we need to establish well-evaluated safety guidelines that can be followed faithfully to minimize the risk of complications.

background: the "opioid crisis" has taken over headlines, with increasing public attention brought to the drastically increasing rates of addiction to prescription narcotics. in , the american society of addiction medicine reported million americans with an addiction to prescription pain relievers and a four-fold increase in overdose-related deaths. in a medical setting, increased opiate use is associated with increased rates of delirium, ileus, urinary retention, and respiratory depression. these risks are increased in the obese/bariatric population. transversus abdominis plane (tap) block is a safe and effective approach to achieve optimum pain control. it reduces the use of opiates in patients undergoing major abdominal surgery. however, there are currently no data in the literature examining its use in the bariatric population. our study examines the use of liposomal bupivacaine for tap block in patients undergoing laparoscopic sleeve gastrectomy (lsg). methods: sixteen patients undergoing lsg with tap block were compared with a historical cohort of sixteen patients undergoing lsg without tap block (standard group). the primary outcome measured was post-operative in-hospital opiate use (morphine equivalents; a sketch of this conversion appears below). statistical analysis was performed using student's t-test for continuous variables and fisher's exact test for categorical variables. results: both groups were well matched in regards to bmi, age, and asa class. there was a significant decrease in the post-operative use of opiates with the use of the tap block ( . mg in the tap block group vs. mg in the standard group; p< . ).
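the lsg tap-block study above measures opiate use in morphine equivalents, i.e., each postoperative opioid dose converted to a common morphine scale and summed per patient. the sketch below illustrates the bookkeeping only; the conversion factors are assumptions for illustration (published equianalgesic tables vary) and are not dosing guidance.

```python
# illustrative conversion table (assumed values, not clinical guidance):
# mg of iv morphine per mg of drug.
IV_MORPHINE_EQUIV = {
    "morphine_iv": 1.0,
    "hydromorphone_iv": 6.7,
    "oxycodone_po": 0.3,
    "tramadol_po": 0.03,
}

def total_morphine_equivalents(doses):
    """doses: list of (drug, mg) tuples recorded during the hospital stay."""
    return sum(mg * IV_MORPHINE_EQUIV[drug] for drug, mg in doses)

stay = [("morphine_iv", 4), ("hydromorphone_iv", 1), ("oxycodone_po", 10)]
print(total_morphine_equivalents(stay))  # -> 13.7 mg iv morphine equivalents
```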
there was no difference in the mean length of stay between the two groups. there was an increase in the mean operative time with use of the tap block ( minutes in the tap block group vs. minutes in the standard group; p< . ). conclusions: the use of liposomal bupivacaine for tap block provides substantial analgesia, allowing for a significant reduction in post-operative opiate use in our bariatric patients. this can be an important adjunct in pain control for the bariatric population and aid in post-operative complication risk reduction.

introduction: the objective of this study was to identify variation in weight and demographics in the distribution of pre-operative clinical characteristics between super-obese females and males who were about to undergo bpd/ds surgery. as the american obesity epidemic grows, morbidly obese patients have become integral to every surgical practice; they are no longer limited to bariatric surgeons. every clinical insight helps the surgeon to optimize outcomes when operating on and managing these medically fragile individuals. in this context, however, clinically and statistically significant differences in demographics, body mass, and the distribution of weight-related medical problems between super-obese women and men are unknown.

introduction: a transversus abdominis plane (tap) block is an ultrasound-guided injection of local anesthetic in the plane between the internal oblique and transversus abdominis muscles to interrupt innervation to the abdominal skin, muscles, and parietal peritoneum. currently there are incongruent findings on the benefit of this regional anesthetic to surgical patients, particularly the obese population. we hypothesized the addition of a tap block to an enhanced recovery pathway (eras) for bariatric patients would decrease opioid use and shorten hospital length of stay. methods: a retrospective review of all patients who underwent bariatric surgery at a single institution from january to december was performed. patients were identified as: no tap block (no tap), or tap blocks performed after induction either pre-surgery (pre-tap) or post-surgery (post-tap). the primary outcomes were time to first opioid (min) and total morphine equivalents (mg) in the pacu.

objective: prolonged postoperative ileus increases hospital length of stay and therefore impacts healthcare costs. although many surgeons recommend ambulation in the postoperative period to hasten return of bowel function, little evidence exists to support this practice. our hypothesis is that early ambulation reduces the time to return of bowel function after intestinal surgery. methods: a subset of patients undergoing intestinal surgery from an ongoing, prospective trial evaluating perioperative physical activity was analyzed. preoperatively, patients wore an activity tracker for a minimum of three days to establish a baseline activity level, measured by daily steps. postoperatively, steps were recorded for days. patients were included in this study if they underwent an operation on the small bowel, colon, or rectum. resolution of postoperative ileus was defined as the postoperative day when patients were noted to meet all of the following criteria on review of nursing documentation: passing flatus, stooling or having ostomy output, and tolerating a regular diet without intravenous fluids. "early" postoperative activity was defined as the average number of daily steps during the first two postoperative days (both derived variables are sketched below).
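the ambulation study above derives two variables from nursing documentation and tracker data: the postoperative day on which ileus resolves (the first day all three criteria are met) and "early" activity (mean daily steps over the first two postoperative days). a minimal sketch; the field names and data are assumptions.

```python
# illustrative only: record layout and values are invented.
def ileus_resolution_day(days):
    """days: list of per-pod dicts from nursing documentation review."""
    for d in days:
        if d["flatus"] and d["stool_or_ostomy_output"] and d["tolerating_diet_no_iv"]:
            return d["pod"]
    return None  # ileus not resolved within the recorded window

def early_activity(steps_by_pod):
    """mean daily steps over the first two postoperative days."""
    return (steps_by_pod[1] + steps_by_pod[2]) / 2

course = [
    {"pod": 1, "flatus": False, "stool_or_ostomy_output": False, "tolerating_diet_no_iv": False},
    {"pod": 2, "flatus": True,  "stool_or_ostomy_output": False, "tolerating_diet_no_iv": False},
    {"pod": 3, "flatus": True,  "stool_or_ostomy_output": True,  "tolerating_diet_no_iv": True},
]
print(ileus_resolution_day(course))          # -> 3
print(early_activity({1: 850, 2: 1420}))     # -> 1135.0
```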
discussion: these results suggest that patients who received an intraoperative block laparoscopically spent less time in the post-anesthesia care unit and were more likely to be discharged home the same day. based on these results, additional process improvement ideas will be implemented in an attempt to improve outcomes.

riley d stewart, md, msc, frcsc, james ellsmere, md, msc, frcsc; dalhousie university division of general surgery. introduction: oropharyngeal and gastrointestinal (gi) perforations from bbq brush bristles are being reported in the literature with increasing frequency. media attention to this problem has increased awareness by the public. most commonly, bbq bristles lodged in the gi tract can be removed endoscopically or pass without complication. rarely, surgical intervention is required for removal of the bristle or drainage of an associated abscess. we report a case of gastric perforation by a bbq bristle leading to a pancreatic abscess. case report: a -year-old male presented to a regional center with epigastric pain and malaise. his medical history included hypertension, dyslipidemia, gerd, and smoking. his surgical history included a tonsillectomy, excision of a branchial cleft cyst, and an umbilical hernia repair. on presentation, his laboratory investigations were unremarkable aside from an elevated white blood cell count. investigations including abdominal x-rays and an abdominal ultrasound were unremarkable. he was initially treated with a proton pump inhibitor for presumed peptic ulcer disease. he returned to the local emergency room, no better than before. a ct scan was arranged which demonstrated a foreign body at the pylorus consistent with a bbq bristle and a peripancreatic fluid collection (figs. & ). a gastroscopy failed to identify the bristle. he was admitted, placed on iv antibiotics and referred to our center. despite several days of antibiotics prior to arrival, the collection size on repeat ct scan had increased and the patient had ongoing pain. we repeated the endoscopy with a side-viewing endoscope. the perforation was identified posteriorly at the pylorus. the bristle had migrated into the peripancreatic space. the perforation was cannulated with a jagtome, and fluoroscopy was used to confirm the position of a wire in the fluid collection (figs. & ). pus was drained from the collection into the stomach by placement of a french pigtail catheter (fig. ). the patient was discharged pain-free the following day and was asymptomatic at weeks' follow-up. a repeat ct scan showed resolution of the abscess and safe migration of the bristle and stent out of the gi tract (fig. ). conclusion: to our knowledge, this is the first reported transgastric endoscopic drainage of a peripancreatic abscess caused by a bbq bristle gastric perforation. this case is a demonstration of the ever-expanding role of therapeutic endoscopy in a surgical practice.

andrew w white, md, carl westcott, md; wake forest baptist medical center. introduction: endoscopic balloon dilation of the gastroesophageal junction (gej) is generally limited to mm in diameter. in many stenotic or spastic disorders of the gej, mm is simply not big enough. larger balloon sizes are available ( and mm), although these are deployed under fluoroscopy without endoscopy. thus, these larger dilations are often not feasible at the time of the diagnostic endoscopy because different facilities and/or equipment are needed.
also, fluoroscopic mm balloon dilations are associated with a percent perforation rate. to address these shortcomings, we present an experience with a retroflexed "against the scope" balloon dilation of the gej. in detail, the gej is visualized while retroflexed, and a balloon is then placed through the scope. the gej is cannulated next to the scope and the balloon deployed. please see the attached image for an example. methods and procedures: a retrospective chart review was performed for a single surgeon during the past five years. we identified those who had retrograde dilations and evaluated the indications, repeat dilations, complications and symptomatic response. results: a total of retrograde dilations were performed on patients with gej-related dysphagia. the average age was . years. of the dilations were with a mm balloon, while the other dilations used balloons as small as mm. dilations were performed for persistent dysphagia after cardiomyotomy between and days after surgery. other indications for dilation were dysphagia after fundoplication ( / ), dysphagia after paraesophageal hernia repair ( / ) and achalasia during pregnancy ( / ). patients required a total of repeat retrograde dilations at an average time of days after the previous dilation. there were instances reported where the dilation did not improve symptoms. mucosal breakdown was noted in instances, although there were no perforations. bleeding was noted in instances, although this was always minimal and self-resolving. conclusions: retrograde endoscopic dilation is safe and effective in this small series. the mm balloon against a mm scope gives a mm diameter, but a different shape and a decreased total circumference. there is a possible added safety advantage given that the balloon is inflated under visualization; it can be inflated in steps or stopped if it appears too aggressive. in addition, these larger dilations were provided at the time of the initial diagnostic egd without extra equipment. more studies are needed to compare retrograde endoscopic dilation to other methods of management of gej stenosis.

introduction: robot-assisted surgery allows surgeons to perform many types of complex laparoscopic surgical procedures, and more and more patients are treated with these sophisticated systems. however, all the instruments used in currently available surgical robot systems are rigid, which limits the extent of reach into deeper surgical fields. in order to overcome this difficulty, we are developing a novel flexible endoscopic surgery system (fess) which has a flexible single-port platform of cm in diameter, an independently controlled endoscope and instruments, an open architecture that is compatible with existing flexible devices, and a magnified d hd camera with both rgb and infrared sensors. furthermore, the system is smaller and would be more cost-effective than existing robotic surgical systems. a preliminary experiment was performed in surgical procedures using a porcine model to evaluate the effectiveness and feasibility of fess. methods and procedures: experimental protocols were approved by the animal research committees of our institution. we used a female swine of kg. an assistant forceps lifted the fundus of the gallbladder to create good visualization of the surgical field. the cystic duct was ligated with a laparoscopic clip device from the assistant port. blunt dissection was performed by pushing with the forceps, and sharp dissection by monopolar electrocoagulation.
results: the fess accomplished the dissection of the gallbladder from the liver bed successfully. the two mm forceps had enough grasping and dissecting force and dexterity. the gallbladder was removed from the single port site easily. conclusions: this experiment showed that it is feasible to intuitively perform single-site cholecystectomy with fess. in order to realize a pure fess procedure, an additional novel device to create good visualization of the surgical field is necessary for the fess platform. a prototype has already been developed for evaluation in securing the surgical field. the optimal working range, or "sweet spot", of fess is relatively small. in addressing this issue, the ease of setup is being improved to enable more efficient positioning and shifting of the sweet spot over the surgical field. this mechanism could enhance the expansion of procedures suitable for fess. the target procedures of fess are those specifically suitable for single-port surgery, such as transanal surgeries and transcervical mediastinoscopic surgeries. intraluminal procedures and natural orifice translumenal endoscopic surgery (notes), which are not considered suitable for rigid surgical robots, are also good applications of fess.

regression of anal and scrotal squamous cell carcinoma (hpv-related) with imiquimod. the index patient is a -year-old hiv-positive homosexual man with anal-scrotal condylomas (ain) initially resected in , then treated with radiation in for recurrence. the lesions recurred in with changes severe enough to "…consider diagnosis of invasive squamous cell carcinoma…". the patient elected a trial of imiquimod % cream three times per week to defer the recommendation of abdominoperineal resection. imiquimod has no antiviral effect but stimulates interferon and cytokines to suppress hpv subtypes and , among other immune effects. no data exist as to the systemic effects of imiquimod. after three months of therapy, the lesions had largely regressed, with only one specimen showing "…concern for squamous cell carcinoma in situ…". the patient has elected to continue treatment pending further biopsy. this report is typical of a number of other reports of small numbers of cases of neoplasia regression with imiquimod % cream, including melanoma-in-situ, basal cell cancer of the skin and other cutaneous malignancies, as well as vin. a second patient, a -year-old hiv-positive female with hpv lesions (ain ) including urethral lesions, is being treated with vulvar application of imiquimod to determine if the urethral lesions will regress. there is no fda-approved indication for mucosal application of imiquimod. biopsies are pending at completion of the six-month trial of imiquimod.

introduction: training in flexible endoscopy remains a critical skill for surgeons, as therapeutic endoscopy procedures continue to evolve and to supplant standard surgical operations. the role of endoscopy across surgical subspecialties is shifting, as endolumenal procedures (like per-oral endoscopic myotomy and endolumenal bariatric interventions) have become commonplace. while surgical residency minimum case volumes are mandated, little is known about the volume of endoscopic procedures surgical fellows participate in. we aimed to characterize the volume of flexible endoscopy cases logged by surgical subspecialty fellows as a measure of endoscopic platform use by surgeons.
methods: operative case logs for fellows enrolled in post-graduate training programs participating in the fellowship council were de-identified (no patient- or program-specific information) and provided for analysis. the case log is an online, mandatory, self-reported collection of all surgeries, procedures and endoscopies performed during the fellowship year. all cases listed within the category of "gi endoscopy" in which the fellow designated their role as "primary" surgeon for the procedure were further sorted by subcategory and linked to the year of fellowship graduation. rigid endoscopy, trans-anal endoscopic procedures, and those in which the fellow's role was "first assistant" were excluded.

introduction: complex pancreatic and duodenal injuries due to trauma continue to present a formidable challenge to the trauma surgeon, with a described mortality of - % and morbidity of - %. duodenal fistula formation subsequent to failure of attempted primary repair is associated with significant morbidity and mortality. we present the first reported series of four patients with complex trauma-related duodenal injuries in whom failed primary repair was managed with duodenal stenting. we compared outcomes to a matched case-control cohort of patients with trauma-related duodenal injuries. the aim of this study is to document our experience with enteral stents in patients with complex duodenopancreatic traumatic injuries. methods: a retrospective review at a level i trauma center identified patients who underwent endoscopic placement of indwelling covered metal stents after failure of primary duodenal repair manifesting as high-output duodenal fistulas. a matched case-control cohort was identified, including patients with duodenal fistulas who were not treated with stents. drainage volumes were collected and classified according to source and phase of intervention (i.e., admission to fistula diagnosis, to stent insertion, after removal, and until discharge). results: there was a decrease in the mean combined drain output of ml/day (p= . ) after stent placement. when comparing the sum of all output sources, there was a statistically significant difference across phases (p= . ), and "after removal" was significantly less when compared to the reference phase (p= . ). there was also a change in the directionality of the slope for the sum of all drain outputs, with an increase of ml/day prior to stent placement compared to a decrease of ml/day (p= . ) after stent placement (a sketch of this slope comparison appears below). the stenting group demonstrated a decrease in mean drain output ( ml/day vs. ml/day, p= . ) and an increase in distal gastrointestinal output ( ml/day vs. ml/day, p= . ). one patient in the stent group required later operative repair. all other patients in the stenting and control groups had resolution of their fistulas over time. there were late mortalities in the control group. the stent-treated patients demonstrated diversion of approximately ml/day of enteral contents distally. while all patients eventually healed their fistulas, the stent-treated patients demonstrated an accelerated abatement of drain outputs when compared to the control cohort, although this did not reach statistical significance. indwelling covered enteral stents appear to be an effective rescue method for an otherwise inaccessible duodenal fistula after failure of primary repair.
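the duodenal-stent series above reports a reversal in the slope of total daily drain output after stent placement. the study's exact model is not specified; a simple illustration is to fit separate least-squares trends to the pre-stent and post-stent days and compare the slopes' signs, as sketched below with invented outputs.

```python
# illustrative only: daily outputs are fabricated.
import numpy as np

def daily_slope(days, outputs):
    """least-squares slope of drain output (ml/day per day)."""
    return np.polyfit(days, outputs, 1)[0]

pre_days,  pre_out  = np.arange(1, 8),  np.array([400, 450, 480, 560, 600, 640, 700.])
post_days, post_out = np.arange(8, 15), np.array([650, 520, 430, 300, 210, 150, 90.])

print(f"pre-stent slope:  {daily_slope(pre_days, pre_out):+.1f} ml/day")
print(f"post-stent slope: {daily_slope(post_days, post_out):+.1f} ml/day")
# a positive pre-stent and negative post-stent slope reproduces the reported
# reversal in the directionality of drain output.
```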
kevin l chow, md, hassan mashbari, md, mohannad hemdi, md, eduardo smith-singares, md; university of illinois at chicago. introduction: esophageal trauma represents an uncommon but potentially catastrophic injury with a reported overall mortality of up to %. the management of iatrogenic and spontaneous perforations has been previously described, with well-established guidelines which have been mirrored in the trauma setting. esophageal leaks are the most feared complication after primary surgical management and present a challenge to salvage. there have been increasing reports in the literature supporting the use of removable covered metal stents to treat esophageal perforations and leaks in the non-trauma setting. we present the first reported case series of four patients presenting with external penetrating trauma-induced esophageal injuries, complicated by failure of initial primary surgical repair and leak development, successfully managed with the use of esophageal stents. materials and methods: a retrospective review was performed at a level i trauma center identifying four patients who underwent endoscopic placement of removable covered metal stents, either by a surgical endoscopist or an interventional gastroenterologist, after failure of primary surgical repair of esophageal traumatic injuries. demographic information, hospital stay, additional interventions, complications, imaging studies, iss scores, and outcomes were collected. results: our cohort consisted of patients with penetrating injuries to the chest and neck with esophageal injuries ( thoracic and cervical esophageal injuries) managed with esophageal stenting after leaks were diagnosed following primary surgical repair. their initial esophageal injuries included grades , and . leaks were diagnosed on average on post-operative day . two patients underwent an additional attempted surgical repair with subsequent leak development. esophageal stents were placed under endoscopic and fluoroscopic guidance within days of leak diagnosis. there was resolution of the esophageal fistulas, with all patients resuming oral intake (averaging days after stent placement). three patients ( %) required further endoscopic interventions to adjust the stent due to migration or for dilations due to strictures. mortality was %; all patients survived to be discharged from the hospital, with an average icu length of stay of days. conclusion: the use of esophageal stenting has progressed over the last few years, with successful management of both post-operative upper gastrointestinal leaks as well as benign, spontaneous, or iatrogenic esophageal perforations. while the mainstay of management of external penetrating traumatic esophageal injuries remains surgical exploration, debridement, and repair with perivisceral drainage, our case series illustrates that the use of esophageal stents is an attractive adjunct that can be effective in the management of post-operative leaks in the trauma patient.

results of the ovesco-over-overstitch technique for managing bariatric surgical complications.

introduction: since , the preferred method of enteral access has been the percutaneous endoscopic gastrostomy (peg) tube. accidental removal is a common complication associated with excessive cost and possible significant morbidity. removal prior to days is considered "early removal." early removal carries more significant risk and can necessitate emergent operation to prevent peritonitis and sepsis.
some patients who do not exhibit signs of peritonitis may be simply observed. for these patients, peg replacement would typically be delayed - days to ensure closure. this delay results in prolonged npo status and worsened nutritional status. presented below is a case of early accidental removal followed by endoscopic clip closure and immediate peg replacement. case report: a -year-old male presented after a large left middle cerebral artery infarct. a peg placement was completed without complication. eleven hours after the procedure, the patient had pulled the peg tube out of the abdominal wall. at this time the patient appeared to have no abdominal pain and no signs of peritonitis. twelve hours following the accidental removal of his peg tube, the patient was taken back to the endoscopy suite and an egd was performed. the previous peg site was identified and appeared closed and ulcerated. the mucosal defect was closed with two endoscopic metallic clips. a peg tube was then placed at an adjacent site. the following day, the patient was restarted on trickle feeds and advanced to regular tube feeding over a period of hours. since that time, his peg has been functioning well. discussion: we propose that in the case of early accidental peg removal, the patient should be examined first for evidence of peritonitis. if the initial physical exam and radiographic investigation do not reveal peritonitis or significant pneumoperitoneum, the patient should undergo urgent repeat endoscopy. at this time, the gastrotomy can be closed endoscopically with metallic clips and a peg can be replaced immediately. tube feeds can be initiated after a - hour period of dependent drainage with serial abdominal exams.

introduction: since its inception in , poem has become a viable procedure for the treatment of achalasia and esophageal dysmotility disorders. however, many institutions are in the beginning stages of implementing the procedure into their programs. in view of training, we report the successful ability to dissect and identify common landmarks during poem procedures performed by trainees under supervision in a high-volume poem center. methods: posterior poem procedures performed by trainees with experienced proctor guidance between february and july were evaluated for the frequency of identifying the perforating vessels, the presence of sling fibers, and the position on the lesser curvature of the stomach (evaluated by a double-scope method) during creation of the tunnel and myotomy. results: all poem procedures were successfully completed by trainees (gi and surgery fellows). the average length of procedure was minutes. indications for the procedure included patients with type achalasia ( %), type achalasia ( %) and des ( %). the average length of myotomy for all procedures was . cm. during these procedures, perforator vessels were identified in ( %) of patients and the sling muscle was identified in ( %) of patients. the myotomy extended to the anterior lesser curvature of the stomach on double-scope exam in % of patients. no patient had a serious complication requiring intervention. conclusion: trainees performing a posterior poem procedure were able to correctly dissect and identify the sling muscle and/or perforating vessels in approximately % and % of procedures, respectively. however, the myotomy was correctly positioned in all procedures.
this indicates that while ideally the sling fibers and perforating vessels should be identified, a correctly positioned myotomy can still be successfully performed by trainees without identification of these landmarks.

introduction: gastroparesis is a rapidly increasing problem with sometimes devastating patient consequences. surgical treatments, particularly laparoscopic pyloroplasty, have recently gained popularity but require general anesthesia and advanced skills, and create a risk of leaks. peroral pyloromyotomy (pop) is a less invasive alternative but is technically demanding and not widely available. we propose a hybrid laparo-endoscopic collaborative approach using a novel gastric access device to allow an endoluminal stapled pyloroplasty as an alternative treatment option for functional gastric outlet obstruction. methods and procedures: under general anesthesia, six female pigs (mean weight kg) had endoscopic placement of or mm intragastric ports (taggs, kansas, usa) using a technique similar to percutaneous endoscopic gastrostomy. a mm laparoscope was used for visualization. endoflip (crospon, inc., galway, ireland) was used to measure the cross-sectional area (csa) and compliance of the pylorus before intervention, immediately after, and at week survival. pyloroplasty was performed using a mm articulating laparoscopic stapler (dextera microcutter). after removing the taggs ports, the gastrotomies were closed by either endoscopic clip, endoscopic suture or suture under laparoscopic vision. the animals were survived for week. after - days, a second laparo-endoscopic procedure was performed to verify healing of the pyloroplasty as well as intraluminal dimensions. at the end of the protocol, the animals were euthanized. results: six endoluminal linear stapled pyloroplasties were performed. the mean operative time was min. in all cases, this technique was effective in achieving optimal pyloric dilatation. median pyloric diameter (d) and median cross-sectional area (csa) pre-pyloroplasty were mm ( . - . mm) and . mm² ( - mm²). after the procedure, these values increased to . mm ( . - . mm) and . mm² ( - mm²) respectively (p= . ).

the quality of an endoscopic examination depends on the quality of the endoscopic equipment, the experience of the endoscopist and the preparation of the patient. contemporary electronic endoscopes make it feasible to transfer the image directly to an external device which is linked to a computer network and can be transferred further. a dynamic image viewed in real time is more accurately interpreted by a physician than a static one, and the possibility of simultaneous voice contact makes teleconsultation fully effective. the aim of this study was to present our own experience regarding endoscopic teleconsultations. materials and methods: the analysis enrolled examinations performed in endoscopic centers located in the lesser poland district and in denmark. consultations took place in real time; the consulting physicians had more than years of experience in endoscopic procedures, with over colonoscopies and therapeutic procedures performed. teleconsultations took place via a standard mb/s internet connection. the endoscopic centers were equipped with olympus and series endoscopes linked to video cards. each card had its own ip address, and the image was accessible through an internet login from anywhere. consulting physicians used computers connected to the internet to follow the image synchronously and give advice. results: teleconsultations were undertaken in . % of all endoscopic procedures.
teleconsultations concerned difficulties in endoscopic image interpretation in cases and decisions regarding further treatment in cases. the consulting physician solved all problems concerning proper endoscopic image interpretation. in cases the elective procedure was rejected; the elective treatment was continued in the remaining cases. patients had a complication of polypectomy that was treated endoscopically. conclusions: the opinion of an independent consulting physician in difficult clinical cases regarding endoscopic procedures helps in understanding the endoscopic image in real time and contributes to a decrease in complications after endoscopic procedures. michelle ganyo, md, robert lawson, md; naval medical center san diego. introduction: a presacral phlegmon is a contained collection of infected fluid and inflammation within the bony pelvis, posterior to the rectum and anterior to the sacrum, that usually arises as a complication of surgery, malignancy, inflammatory bowel disease, ischemic colitis or a perforated viscus. symptoms include low-back pain, pelvic pain and fevers. antibiotics and supportive therapy are the mainstay of treatment. however, if an abscess develops, drainage is required, usually by trans-gluteal percutaneous and/or surgical methods, both of which are associated with significant morbidity and mortality. endoscopic ultrasound (eus)-guided drainage of perirectal and presacral abscesses is a well-described, minimally invasive approach that permits clear definition of anatomy, real-time access to the abscess and creation of an internalized fistula through placement of one or more transluminal stents. however, to date there is no published report describing endoscopic treatment of the more complicated, clinically challenging presacral phlegmon. here we present a case of a symptomatic presacral phlegmon recalcitrant to medical management that was successfully treated with an endoscopically placed retrievable, transmural, lumen-apposing metal stent. case report: this is a case report of a -year-old, post-partum female who presented with fevers and recurrent lower back pain radiating to her rectum and vagina. her spontaneous vaginal delivery was notable for a second-degree laceration that was primarily repaired at the time of delivery, months prior to presentation. her past medical history was otherwise unremarkable. radiographic imaging revealed several perirectal and presacral abscesses that were considered too small for percutaneous drainage. iv antibiotics were started and the largest abscess was targeted for eus-guided aspiration. unfortunately, her pain became constant and progressed in severity. a follow-up mri a week later revealed a -cm presacral phlegmon. results: colonoscopy revealed a luminal bulge in the rectum but was otherwise normal. to permit drainage and multiple sessions of endoscopic necrosectomy, a mm lumen-apposing metal stent (lams) was placed transrectally under eus guidance into the presacral phlegmon. endoscopic debridement with forceps and copious irrigation was performed. over the following weeks the patient reported purulent rectal drainage and resolution of her fevers and pain. repeat endoscopy revealed a normal rectum and no sign of the stent. a follow-up mri showed a -cm area of heterogeneous tissue in the presacral area. conclusions: although not previously described for management of a presacral phlegmon, lams appears to be a safe and effective, minimally invasive treatment option. 
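the porcine studies in this section report endoflip-derived cross-sectional area, distensibility and compliance. purely as orientation for those units (mm and mm /mmhg), here is a minimal sketch of the conventional distensibility-index calculation; the formula is the device's commonly cited metric and the numbers are invented placeholders, not values or methods taken from these abstracts.

```python
# illustrative only: the distensibility index (DI) is conventionally computed
# as the narrowest cross-sectional area divided by the intra-balloon
# distensive pressure; the values below are invented placeholders.

def distensibility_index(narrowest_csa_mm2: float, pressure_mmhg: float) -> float:
    """Return DI in mm^2/mmHg."""
    if pressure_mmhg <= 0:
        raise ValueError("pressure must be positive")
    return narrowest_csa_mm2 / pressure_mmhg

# e.g. a 45 mm^2 lumen at 30 mmHg gives DI = 1.5 mm^2/mmHg
print(distensibility_index(45.0, 30.0))
```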
introduction: flexible endoscopy has evolved to include multiple endoluminal procedures such as anti-reflux procedures, pyloromyotomy, and mucosal and submucosal tumor resections. however, these remain technically demanding procedures, as they are hindered by the state of flexible technology, which offers difficult imaging, limited energy devices, no staplers, and cumbersome suturing abilities. an alternative approach is transgastric laparoscopy, which for almost decades has been shown to be a good procedure for pancreatic pseudocyst drainage and full-thickness and mucosal resection of various lesions. we propose to expand the indications of transgastric laparoscopy by using novel endoscopically placed transgastric laparoscopy ports (taggs, kansas, usa) to replicate endoscopic procedures such as endoluminal antireflux surgery. methods and procedures: under general anesthesia, female pigs (mean weight . kg) had endoscopic placement of mm intragastric ports (taggs, kansas, usa) using a technique similar to percutaneous endoscopic gastrostomy. a mm laparoscope was used for visualization. endoflip (crospon, inc., galway, ireland) was used to measure cross-sectional area (csa) and compliance of the gastroesophageal junction (gej) before and after intervention. laparoendoscopic-assisted suture plication of the gej was performed using - sutures (polysorb®). once the taggs ports were removed, the gastrotomies were closed using endoscopic clips. at the end of the protocol, animals were euthanized. results: five laparoendoscopic-assisted suture plications were performed. the mean operative time was . min (endoscopic evaluation: . min, taggs insertion: min, endoflip evaluation + gej plication: . min, gastric wall closure: min). in all cases, the technique was effective in achieving adequate gej plication. median gej diameter (d) and median cross-sectional area (csa) pre-plication were . mm ( . - . mm) and . mm ( - mm ). after the procedure, these values decreased to . mm ( . - . mm) and . mm ( - mm ), respectively (p = . ). median distensibility (d) and median compliance (c) pre-plication were . mm /mmhg ( . - . mm /mmhg) and . mm /mmhg ( . - . mm /mmhg). after the procedure, these values decreased to . mm /mmhg ( . - . mm /mmhg) and . mm /mmhg ( . - . mm /mmhg), respectively (p = . ). no intraoperative events were observed. conclusion: a hybrid laparoendoscopic approach is a feasible alternative for performing intragastric procedures with the assistance of conventional laparoscopic instruments, especially in cases where the location of the intervention limits the access of standard endoscopy or where endoscopic technology is inadequate. further evaluation is planned in survival models and clinical trials. introduction: due to previous manipulation or submucosal invasion, colonic lesions referred for endoscopic mucosal resection (emr) frequently have flat areas of visible tissue that cannot be snared. current methods for treating residual tissue may lead to incomplete eradication or may not allow complete tissue sampling for histologic evaluation. our aim is to describe dissection-enabled scaffold-assisted resection (descar): a new technique combining circumferential esd with emr for removal of superficial non-lifting or residual "islands" with suspected submucosal involvement/fibrosis. methods: from to , lesions referred for emr were retrospectively reviewed. cases were identified where lifting and/or snaring of the lesion was incomplete and the descar technique was undertaken. 
cases were reviewed for location, prior manipulation, rates of successful hybrid resection, and adverse events. results: lesions underwent descar due to non-lifting or residual "islands" of tissue. patients were % male and % female, with an average age of (sd ± . ) yrs. lesions were located in the cecum (n = ), right colon (n = ), left colon (n = ) and rectum (n = ). average size was mm (sd ± . mm). previous manipulation had occurred in / cases ( % biopsy, % resection attempt, % tattoo). the technical success rate for resection of non-lifting lesions was %. there was one delayed bleeding episode but no other adverse events. approximately % of patients have been followed up endoscopically to date, with no evidence of residual adenoma. conclusions: descar is a feasible and safe alternative to argon plasma coagulation and avulsion for the endoscopic management of non-lifting or residual colonic lesions, providing en-bloc resection of tissue for histologic review. further studies are needed to demonstrate long-term eradication and for comparison with other methods. results: patients underwent fully covered stent placement procedures. indications for stent placement were leak in patients ( sleeve; bypass) and stricture in patients ( bypass, sleeve). five patients had stent migration: three required surgical removal, one underwent endoscopic repositioning, and one passed the stent per rectum. all eight patients with enteric leak successfully underwent stent placement in conjunction with diagnostic laparoscopy and drainage. all but one of these patients developed the enteric leak in the perioperative period of the index procedure. the average duration of stent treatment in these patients was days ( - days). of the patients treated for a stricture, patients ( sleeve, bypass) failed treatment and required subsequent definitive operative revision. the average length of stent treatment in these patients was days (range, - days), and five had severe intolerance. conclusions: endoscopic stent placement for leak may require multiple procedures and carries a risk of migration; however, this therapy appears to be an effective treatment. failure rates are higher with strictures, which are also less well tolerated by patients. background: colonoscopy is the most commonly performed endoscopic examination worldwide and is considered the gold standard for colorectal cancer screening. the quality of examination and endoscopic treatment is affected by a number of factors that are verified by recognized parameters such as cecal intubation rate and time (cir, cit), withdrawal time, adenoma detection rate (adr) and polyp detection rate (pdr). advanced endoscopic imaging improves accurate recognition of the nature and variety of pathologic lesions, while endoscope tips, the third-eye retroscope and wide-angle endoscopy allow detection of lesions located on the proximal side of the intestinal folds. the aim of the study was to assess the suitability of wide-angle colonoscopy for the detection of colorectal lesions and to analyze the functionality of a special endoscope series regarding cir, cit and withdrawal time. introduction: leak is an uncommon but serious complication of gastrointestinal surgery. when identified post-operatively, percutaneous drains are used to manage abscesses and prevent further peritoneal contamination. if drain position is suboptimal, however, the consequences of persistent leak may necessitate a formal surgical intervention in a hostile abdomen. 
in select situations, we have utilized natural orifice transluminal endoscopic surgery (notes) methods to enter the abdominal cavity and place or reposition drains under direct endoscopic visualization as part of our comprehensive endoscopic management algorithm for leaks. methods and procedures: a prospectively collected database was queried for patients who had undergone transluminal endoscopic drain repositioning (tedr) as part of multimodal endoluminal therapy for leak (including interventions such as defect closure, enteral feeding access, or endoluminal stent placement). inadequate drainage was identified pre-procedurally by undrained fluid collections in conjunction with clinical signs of sepsis. transluminal access was obtained via the leak site, and carbon dioxide insufflation was used in all cases. the peritoneal cavity was inspected and cleared of gross debris by irrigation and suction. intraabdominal drains were located endoscopically and fluoroscopically, grasped with an endoscopic snare or grasper, and repositioned adjacent to the leak site to ensure better drainage. results: four patients ( female), average age (range - ), average body mass index (range - ), were managed with tedr as a component of endoscopic treatment of full-thickness gastrointestinal leak. two patients developed leak following revisional bariatric surgery. one patient had an acutely dislodged gastrostomy tube with intraperitoneal leak after multiple laparotomies recently closed with a granulating vicryl mesh. one patient developed a leak at an esophagojejunostomy following total gastrectomy. three patients had adequate drainage after the initial tedr, while one patient required tedr on two occasions. all patients had improved drainage, demonstrated by resolution of clinical signs of sepsis and resolution of fluid collections. drains were removed as clinically indicated. conclusion: intraabdominal drains are an essential element in the management of full-thickness gastrointestinal leaks but cannot always be adequately positioned percutaneously. transluminal endoscopic drain repositioning via a gastrointestinal defect is a viable option to avoid surgical intervention in an otherwise hostile field and is a novel, practical notes application. background: epiphrenic diverticula (ed) arise from increased intraluminal pressures, often secondary to achalasia or another underlying esophageal motility disorder that causes "pulsion" physiology. ed are traditionally thought to contribute to patients' symptoms of regurgitation and dysphagia, and are frequently resected at the time of heller myotomy and fundoplication done for treatment of the primary motility disorder. ed excision carries significant risks (staple-line leak, pulmonary complications, mortality), and little is known regarding patients with ed and an esophageal motility disorder who undergo surgical myotomy without ed resection. the goal of this study was to compare outcomes of patients with ed and an esophageal motility disorder who did and did not undergo diverticulectomy at the time of myotomy and fundoplication. methods: a retrospective analysis of a prospectively collected database from to was performed. patients with a diagnosis of ed undergoing surgical treatment of a symptomatic esophageal motility disorder were included. all patients underwent laparoscopic heller myotomy with toupet fundoplication by a single surgeon at a tertiary referral hospital. patients were stratified according to whether the ed was excised or not excised at the time of primary surgery. 
patient-reported symptoms were obtained from pre/post-operative clinic evaluations and mailed surveys during the follow-up period. independent samples t-test and fisher's exact test were used to compare continuous and categorical variables, respectively. results: ed was identified in patients prior to surgery. primary diagnoses included achalasia (n = ), nutcracker esophagus (n = ), and diffuse esophageal spasm (n = ). ed was excised in five patients ( . %) and not excised in ten patients ( . %), with no significant difference in the frequency of preoperative dysphagia ( % vs. %, p = . ) or regurgitation ( % vs. %, p = . ) between groups. reasons for non-resection included an ed that was too proximal (n = ), patient/surgeon preference (n = ), and small ed size (n = ). the resection group did not experience any leaks, and there were no mortalities in either cohort during the follow-up period. at a mean clinic follow-up of days, there was no difference in the frequency of residual dysphagia between patients who did or did not undergo ed resection ( % vs. %, p = . ), and neither cohort reported residual regurgitation symptoms. conclusions: this study suggests that leaving the ed in place during surgical treatment of an esophageal motility disorder may achieve similar rates of postoperative symptom control. while ed excision in this study did not cause significant excess morbidity, ed resection introduces a risk of leak and requires more extensive surgery that may not provide significant benefit to patients. introduction: median arcuate ligament syndrome (mals) has been described in the literature as presenting with a constellation of symptoms including nausea, vomiting, weight loss, and post-prandial epigastric pain. while many of these symptoms are consistent with foregut pathology, a cohort of patients with mals presenting with delayed gastric emptying has not been described in the literature. in this study we report on the possible association of mals with delayed gastric emptying. methods: cases of mal release were collected between and . eight patients were identified who presented with mals and underwent subsequent mal release. all patients underwent laparoscopic or robotic surgery. patients were compiled into a retrospective database, and their demographic, symptomatic, imaging, and outcomes data were analyzed. background: laparoscopic fundoplication (lf) is often performed to treat paraesophageal hernia and/or gerd. care is taken to select the right patients for the operation. some patients may not improve, and others experience dysphagia or bloating after surgery. factors associated with patient satisfaction after fundoplication would be helpful during the patient selection process. methods: a retrospective review of a prospectively collected database was performed. queried patients underwent lf from to . non-elective operations and fundoplications after heller myotomy were excluded. of this cohort, patients were included only if they responded to a two-year postoperative quality-of-life survey. surveys were distributed preoperatively, at three weeks, at one year, and at two years. the surveys included the reflux severity index, gerd-hrql, and dysphagia score. the gerd-hrql asks about patient satisfaction with their current state ( = dissatisfied, = somewhat satisfied, = very satisfied). the cohort was divided according to their answer to this question at two years. demographics and preoperative factors were compared between the groups with kruskal-wallis and fisher's exact tests. 
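as a reading aid for the tests named just above, here is a minimal, hedged sketch of how such group comparisons are typically run in python with scipy; the data are invented and do not reproduce the study's.

```python
# synthetic data, not the study's: Kruskal-Wallis across the three
# satisfaction groups for a continuous variable, and Fisher's exact test
# for a 2x2 categorical comparison.
from scipy import stats

dissatisfied = [61, 58, 70, 66]
somewhat = [55, 62, 59, 64, 57]
very_satisfied = [48, 52, 60, 49, 54, 51]
h, p_kw = stats.kruskal(dissatisfied, somewhat, very_satisfied)

# hypothetical table: paraesophageal hernia (rows) vs satisfied (columns)
table = [[18, 4],
         [9, 11]]
odds_ratio, p_fisher = stats.fisher_exact(table)
print(f"Kruskal-Wallis p={p_kw:.3f}; Fisher exact p={p_fisher:.3f}")
```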
univariable and multivariable ordinal logistic regression were performed to identify preoperative symptoms associated with satisfaction at two years. scores on the surveys over time were also analyzed. results: a total of patients were included in the analysis (dissatisfied = , somewhat satisfied = , very satisfied = ). the only significant demographic or preoperative difference was a higher number of paraesophageal hernias in the 'very satisfied' cohort (p = . ). on univariable regression, younger age and paraesophageal hernia predicted satisfaction. several variables negatively predicted satisfaction, with an or < . multivariable regression, controlled for age and hernia type, identified throat clearing, post-nasal drip, and globus sensation as preoperative symptoms less likely to result in patient satisfaction (p = . , . , and . , respectively). subgroup analysis of patients with paraesophageal hernias revealed that patients with bloating preoperatively are less likely to be satisfied at two years. survey scores over time showed all groups improving over three weeks, but while satisfied patients continued to improve, dissatisfied patients symptomatically worsened over time. conclusion: this study confirms previous reports stating that atypical symptoms of gerd are less likely to improve after lf. it also shows that individuals with paraesophageal hernia tend to do quite well, unless they report bloating preoperatively. patient-centered analysis such as this can be useful when discussing postoperative expectations with patients, and may reveal opportunities to individualize the operative approach. objective: the study was performed to assess whether sutured crural closure or mesh reinforcement for hiatal closure yields better results with regard to symptom resolution and recurrence post-operatively. material and methods: a prospective randomized controlled trial was carried out at grant medical college and sir j. j. group of hospitals, mumbai, india. patients were randomized to receive either sutured repair or mesh reinforcement of the hiatal closure. outcomes of interest were symptom resolution, quality-of-life scores and recurrence in the postoperative period. results: patients were recruited for the trial ( sutured repair, mesh reinforcement). the two groups were comparable in terms of demographic profiles, symptom severity and findings at esophagogastroscopy and manometry in the pre-operative period, as well as the size of the hiatal defect measured intra-operatively. post-operatively, the mesh repair group had significantly better symptom resolution in terms of early satiety, chest pain and regurgitation (p < . ), while with respect to heartburn, dysphagia and post-prandial pain there was no significant difference between the improvements demonstrated. improvement in quality-of-life scores after either procedure was not significantly different. recurrence was higher in the suture repair group ( vs , p = . ). recurrence led to poorer symptom severity scores as well as quality-of-life scores, and one patient underwent re-operation. the change in the symptom severity score from baseline was assessed at months in the subgroup population. conclusion: mesh reinforcement results in a reduced rate of recurrence and offers excellent symptom control in the short term, without a rise in complications, when compared to sutured repair for the closure of hiatal defects in laparoscopic hiatal hernia repairs. 
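the satisfaction study above models a three-level ordered outcome with ordinal logistic regression. a sketch of that kind of model, on fabricated data with assumed predictor names (not the study's variables), might look as follows; statsmodels' OrderedModel is one common implementation.

```python
# sketch only: synthetic data and invented predictor names; the outcome is
# ordinal with 0 = dissatisfied, 1 = somewhat satisfied, 2 = very satisfied.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 200
X = pd.DataFrame({
    "age": rng.normal(60, 10, n),
    "paraesophageal_hernia": rng.integers(0, 2, n),
    "throat_clearing": rng.integers(0, 2, n),
})
# synthetic latent score loosely driven by the predictors
latent = (-0.03 * X["age"] + 0.8 * X["paraesophageal_hernia"]
          - 0.6 * X["throat_clearing"] + rng.logistic(size=n))
satisfaction = pd.cut(latent, bins=[-np.inf, -2.5, -1.0, np.inf], labels=[0, 1, 2])
satisfaction = satisfaction.astype(pd.CategoricalDtype(categories=[0, 1, 2], ordered=True))

res = OrderedModel(satisfaction, X, distr="logit").fit(method="bfgs", disp=False)
print(res.summary())  # positive coefficients favor higher satisfaction levels
```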
material and methods: in the period from to , patients underwent laparoscopic resection ( gastric resection, duodenal resection), using different techniques. all patients were investigated with upper gi endoscopy, eus and abdominal contrast ct, which allowed complete evaluation of the tumor, including size, location, type of growth and the gi layer involved. based on these findings, the decision on the type of resection was made. the majority of resections were wedge or partial resections, performed using endoscopic staplers or ultrasound scissors followed by double suturing of the gastro-/duodenotomy. in cases of tumor location on the posterior gastric wall, we mobilized the greater curvature to obtain a direct approach to tumors with extraluminal growth. in cases with intraluminal growth, we used a transgastric approach with a small . cm incision on the anterior gastric wall for the endoscopic stapler. technically, the most complex procedures involved tumors located close to anatomically narrow places and muscular sphincters (gastroesophageal junction, pylorus, duodenal bulb, duodenal flexure), with a high risk of stenosis and dysfunction of the anatomical sphincters. in such cases we used a «lifting technique», in which we dissect the serous and muscle layers circumferentially around the tumor, performing partial enucleation of the lesion followed by total resection, preserving almost all normal tissue with minimal suturing and deformity at the site of surgery. ( : ); the mean age was . years (sd ± . ), and patients ( %) had mis. the type of reconstruction was predominantly a "pull-up" technique (n = , . %), followed by the kirschner-akiyama procedure (n = , . %); stapled gastroplasty was performed in patients. all anastomoses were performed at the level of the neck, and only one patient had a stapled anastomosis. mean operative time was min (sd ± min), including resection of the specimen. primary neoplasms were predominantly of the hypopharynx (n = , . %), distal esophagus (n = , %), cervical esophagus (n = , . %) and thoracic esophagus (n = , . %). histologic types were mainly squamous cell carcinoma (n = , . %) and adenocarcinoma (n = , . %). mean hospitalization was . days (sd ± . ). no complications were observed in patients, and major complications (dindo-clavien ≥ iiib) were found in patients. anastomotic leak was present in patients ( . %), and perioperative mortality ( days) was . %. a progressive shift to laparoscopic surgery was evidenced through the years ( - : . %, - : . % and - : . %; p = . ), and a reduction in major complications (p = . ) was observed. anastomotic leaks (p = . ) and perioperative mortality (p = . ) did not show significant differences in the present study. conclusions: results in our center show that major complications decreased over time after the adoption of minimally invasive surgery, with no differences in anastomotic leaks and mortality. these data have led us to abandon open total esophagectomy as a first-choice procedure. introduction: minimally invasive three-field esophagectomy is the surgical standard for oncological procedures and benign diseases. cervical dissection carries a to % risk in some series of injury or paralysis of the rln, but the standard in the mckeown approach is %. a high level of suspicion is needed because this type of lesion has an impact on postoperative evolution and hospital stay. aim: to describe three cases of rln paralysis after minimally invasive three-field esophagectomy. 
methods: in a period of years, january to june , esophagectomies for benign disease were performed. three patients ( males, female) with diagnoses of end-stage achalasia and stenosis secondary to caustic ingestion were seen at the minimally invasive surgery service of fundación valle del lili. they were scheduled for minimally invasive three-field esophagectomy. one patient had no complications and an early discharge ( postoperative day) but occasional dysphagia; the other two required early reintubation after the surgery with ards: one patient required tracheostomy, and the second could be extubated after days but had occasional dysphagia. all three had mild hoarseness after surgery. the patient who required tracheostomy was decannulated at days without complication. results: all three patients underwent endoscopy, which showed no stenosis of the cervical anastomosis or disorder of gastric tube emptying; a swallowing study was without alteration, and laryngoscopy showed paralysis of the left vocal cord. these patients underwent speech therapy with total recovery of the paralysis at months, corroborated by laryngoscopy, without dysphagia or hoarseness. conclusion: the rln innervates the larynx and upper esophageal sphincter; therefore, injury or paresis causes symptoms such as hoarseness, dysphagia, difficulty swallowing, aspiration, difficulty coughing, pneumonia and ards. injury is a predisposing factor for pulmonary complications and prolongation of the hospital stay. % of these patients may require a surgical procedure to restore the function of the rln. noninvasive monitoring of the laryngeal nerve decreases the risk of injury. case report: multiple esophageal diverticula associated with achalasia. introduction: achalasia is a well-defined disorder of increased lower esophageal sphincter tone ( ). epiphrenic esophageal diverticula are a rare disorder believed to result from increased intraesophageal pressure, often in conjunction with a motility disorder causing functional outflow obstruction. they are pulsion-type pseudo-diverticula, with mucosal bulging most frequently from the right posterior esophageal wall ( ). we present a very rare case of achalasia associated with multiple esophageal diverticula successfully treated with laparoscopic heller myotomy with dor fundoplication. case presentation: a -year-old woman presented with years of dysphagia, chest discomfort, regurgitation, and weight loss. esophagoscopy showed a patulous esophagus with multiple esophageal diverticula (figure ). a barium esophagram demonstrated esophageal diverticula in the distal esophagus and delayed clearance of esophageal contrast (figure ). high-resolution manometry revealed a hypertensive mean les, an aperistaltic body in of wet swallows, and panesophageal pressurization in of wet swallows, consistent with type ii achalasia by the chicago classification ( ). we performed a laparoscopic heller myotomy with dor fundoplication. the myotomy was extended cm above the gastroesophageal junction and cm onto the gastric cardia. an anterior diaphragmatic defect with a moderate type hiatal hernia was repaired with two sutures, taking care not to impinge on the esophagus (figure ). at weeks post-operatively the patient reports excellent results. her dysphagia and chest discomfort have entirely resolved. her eckardt score improved from seven preoperatively to one post-operatively. discussion: type ii achalasia is successfully treated in the majority of cases with laparoscopic heller myotomy and partial fundoplication ( ). 
however, esophageal diverticula typically require both myotomy and diverticulectomy for successful treatment ( ). there is little experience with the surgical management of multiple esophageal diverticula. we propose a two-stage surgical approach for these patients. we reason that the risk of esophageal leak or stenosis in the case of multiple esophageal diverticulectomies outweighs the proposed benefit. indeed, epidemiologic studies indicate that the majority of esophageal diverticula are asymptomatic ( ). in the event the patient remains symptomatic after myotomy, a second-stage operation with diverticulectomies would be possible. this single experience suggests that diverticulectomy may not be necessary in the case of multiple diverticula associated with achalasia. instead, treatment may be directed at relieving the functional obstruction responsible for the symptoms by performing laparoscopic heller myotomy with dor fundoplication. takahiro kinoshita, md, facs, masanori tokunaga, md, akio kaito, md, masahiro watanabe, md, shizuki sugita, md; national cancer center hospital east, japan. objective: the optimal surgical approach for siewert type ii cancer is still controversial due to the anatomical complexity of the region. the potential advantages of a laparoscopic transhiatal approach have not been fully investigated. methods and procedures: we retrospectively analyzed consecutive patients with siewert type ii cancer who underwent laparoscopic transhiatal resection. the indication for surgery was siewert type ii cancer with less than cm of esophageal invasion. regarding the extent of resection, proximal gastrectomy with lower esophageal resection was generally selected, aiming at preservation of gastric reservoir function. for reconstruction after proximal gastrectomy, the double-tract method was used. intraoperative peroral endoscopy was routinely employed to determine the appropriate resection level of the stomach. esophagojejunostomy was performed by the overlap method using a mm linear stapler. in order to obtain a wider operative field in the lower mediastinum, the diaphragmatic crus was dissected to widen the esophageal hiatus. results: in patients ( males and females), median operation time was minutes, and estimated blood loss was g. the rate of surgical morbidity was %, and that of anastomotic leakage was %. there was no mortality. the mean length of the proximal margin was mm, and no positive margin was recorded. the - and -year overall survival rates were . % and %, respectively. conclusions: laparoscopic transhiatal resection for siewert type ii cancer is technically challenging but appears feasible and safe when performed by an experienced surgical team. a large-scale prospective study is necessary for a final conclusion. introduction: mesh use for reinforcement of primary crural closure is controversial. synthetic mesh poses a risk of erosion, but there is no evidence that non-synthetic mesh is useful to minimize the risk of hernia recurrence. we evaluated a fully bioresorbable mesh made from poly- -hydroxybutyrate (p hb) for crural reinforcement after para-esophageal hernia (peh) repair. the aim of this study was to evaluate the safety and efficacy of p hb mesh at the hiatus in patients undergoing peh repair. this was a review of prospectively collected data on consecutive patients who had repair of a peh with reinforcement of the crural closure with p hb mesh. to be considered a peh, at least % of the stomach had to be herniated into the chest. 
a collis gastroplasty or crural relaxing incision was added for short esophagus or crural tension when necessary. routine follow-up consisted of esophagogastroduodenoscopy (egd) at months for patients who had a collis gastroplasty, and a barium upper gi study (ugi), high-resolution manometry (hrm) and ph test in all patients at months. a hernia of any size identified during objective follow-up testing was considered a recurrence. overall, there was a significant difference in mean measured tension between the three subjective suture ratings by the surgeons. however, there was substantial variability and overlap amongst the surgeons' ratings (figure). the tension necessary to approximate the crura during peh repair can be objectively measured and, as expected, increases progressively with anterior movement up the hiatus. while there was some correlation between a surgeon's subjective assessment of the tension necessary to bring the crura together and the actual measured tension, there was wide variability and imprecision from one stitch to another. objective tension measurement may provide a more reliable assessment of when excessive force is being used to re-approximate the crura and could potentially improve peh recurrence rates. introduction: paraesophageal hernia repairs are increasing in prevalence and unfortunately carry a high recurrence rate. consequently, reoperation is expected to increase in frequency. published data on the outcomes of recurrent paraesophageal hernia (rpeh) repair are very limited. because of the technical difficulties of revisional surgery, we hypothesized that laparoscopic revisional paraesophageal hernia repairs are associated with high perioperative morbidity and poor patient outcomes. methods: all rpeh repairs performed by the foregut surgical service at our institution from to were reviewed. patients were included if their index operation was a true pehr (initial type hiatal hernia repairs were excluded, as well as multiply recurrent hernias). demographics, medical and surgical history, and operative notes from the index surgery were reviewed. details from a standardized pre-operative symptom assessment, objective testing and operative details for the revisional surgery were collected. patients were routinely offered a -month post-operative upper gastrointestinal contrast evaluation. postoperative outcomes included a standardized symptom assessment and the results of objective testing at any time after surgery. results: twenty-six patients were identified who underwent repair of rpeh. demographic, operative and perioperative data were available for all patients (table ). twenty-four patients underwent follow-up symptom evaluation (two were lost to follow-up after the initial hospitalization). sixteen patients underwent follow-up objective testing by radiographic evaluation with contrast, endoscopy or both. these subgroups were used to calculate symptomatic and objective outcomes (table ). conclusion: reoperative laparoscopic surgery for recurrent paraesophageal hernias is technically challenging, as evidenced by long operative times. despite this, perioperative outcomes at a high-volume center are good, with low morbidity and no mortality. importantly, symptomatic outcomes for this difficult problem are excellent. introduction: hypotension of the lower esophageal sphincter (hles) and the presence of a hiatal hernia (hh) have both been associated with gastroesophageal reflux disease (gerd). 
the exact likelihood with which hles or a hiatal hernia predicts gerd continues to be defined. we hypothesized a synergistic interaction between hles and hh in predicting gerd, as defined by a positive ph study. methods and procedures: between and , consecutive patients presenting to a surgical practice with symptoms most concerning for gerd and without prior antireflux surgery were evaluated by high-resolution manometry (hrm), esophagogastroduodenoscopy (egd), videoesophagography (veg) and an ambulatory ph study. hles was defined as a residual les pressure of < mmhg; hh was defined as having been noted and measured by the radiologist, and these were further categorized into any hh, - cm, > - cm. background: while clinical outcomes have been reported for antireflux surgery, there are limited data on postoperative outpatient encounters and their associated costs. the aim of this study is to evaluate the utilization of healthcare and its associated costs during the -day postoperative period following antireflux surgery. methods: we analyzed data from the truven health marketscan® research databases. patients ≥ years with an icd- procedure code or cpt code for antireflux surgery and a primary diagnosis of gerd during - were selected. only patients with continuous enrollment six months prior to the date of surgery and days after surgery were analyzed. patients with a diagnosis of esophageal cancer or achalasia during the six-month period prior to antireflux surgery, a length of stay > days following the index procedure, a capitated plan, or patients who underwent emergency surgery were excluded. outpatient endoscopy was defined using icd- and cpt codes, and related readmission was defined by clinical classification software. introduction: the development of postsurgical gastroparesis following nissen fundoplication is poorly understood. in this study, we analyze the development of gastroparesis requiring intervention and other subsequent procedures following fundoplication and paraesophageal hernia (peh) repair procedures in the state of new york. methods: using a comprehensive state-wide administrative database (sparcs), we examined all inpatient and outpatient records for adult patients who underwent fundoplication or peh repair as a primary procedure for the treatment of gerd between the years - . patients with an initial gastroparesis diagnosis were excluded from the analysis. through the use of a unique identifier, each patient was followed until for a subsequent diagnosis of gastroparesis or reoperation. surgical procedures for the treatment of gastroparesis included pyloroplasty, pyloromyotomy, or gastroenterostomy. multivariable logistic regression models were used to identify independent predictors of subsequent reoperation. results: a total of , patients were analyzed. this included , fundoplication patients ( . %) and , ( . %) with peh repair. in the fundoplication group, ( . %) patients had a follow-up diagnosis of gastroparesis or a secondary procedure. ( . %) of the patients who underwent a primary peh repair had a follow-up procedure or gastroparesis diagnosis (table ). mean time to follow-up procedure or diagnosis was . years for the fundoplication group and . years for the peh repair group. the majority of the follow-up procedures in the fundoplication group were revisional procedures (fundoplication or peh repair) (n = , . %), while ( . %) patients were newly diagnosed with gastroparesis and/or underwent a secondary procedure for its treatment. 
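the sparcs analysis above uses multivariable logistic regression to isolate independent predictors of subsequent reoperation. a minimal sketch of that kind of model, on fabricated data with assumed covariate names (not the study's variables), might look as follows.

```python
# sketch only: fabricated cohort and invented covariates; not the SPARCS data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1000
age = rng.normal(55, 12, n)
peh_repair = rng.integers(0, 2, n)   # 1 = index PEH repair, 0 = fundoplication
female = rng.integers(0, 2, n)

# synthetic outcome: subsequent reoperation / gastroparesis diagnosis
logit = -3.0 + 0.02 * (age - 55) - 0.3 * peh_repair + 0.2 * female
reop = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X = sm.add_constant(np.column_stack([age, peh_repair, female]))
fit = sm.Logit(reop, X).fit(disp=False)
print(np.exp(fit.params[1:]))  # odds ratios for age, PEH repair, female sex
```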
conclusion: fundoplication and peh repair have a relatively low post-operative incidence of gastroparesis following the initial procedure for treatment of gerd. in both groups, secondary fundoplication or peh repair was performed more commonly than any of the surgical procedures for gastroparesis. further analysis of the association with subsequent procedures is needed. during this procedure, gastro-esophageal reflux was evaluated and assigned to a severe, moderate or slight category. if the reflux was observed slightly up to the cervical esophagus, the case was assigned to the moderate category. if the reflux was observed intensely up to the cervical esophagus, the patient was returned to a head-high position for safety and the case was assigned to the severe category. anti-reflux surgery was considered in the moderate and severe categories. results: we have performed the laparoscopic nissen procedure in cases. the mean operation time was min. the outcome was assessed by a reflux test performed on the - postoperative day, and the results showed that the reflux had disappeared in every case. the median follow-up period of this study was months ( - months). in cases ( . %) ppi was restarted before months after the anti-reflux surgery. in cases ( . %) ppi was restarted after the anti-reflux surgery during the whole follow-up period of this study. the bmi of the patients had no relationship to the need to restart ppi. to evaluate the degree of esophagitis objectively before and after the anti-reflux surgery, we designed "the esophagitis score". in this scoring method, a number from - is assigned according to the degree of esophagitis in line with the la classification. the results of the study showed that reflux esophagitis improved markedly after the anti-reflux surgery, even in the ppi-restarted group (p = . ). discussion: the number of gerd patients who need anti-reflux surgery appears to be high, so identifying the patients who need it most is important. anti-reflux surgery is most effective for patients who truly have obvious reflux. the reflux test is feasible because of its convenience and its visual demonstration for the patients. the results of the laparoscopic nissen fundoplication were good, and patients were mostly satisfied. introduction: fundoplication at the time of giant paraesophageal hernia repair is controversial. the proposed advantages are better reflux control and lower recurrence. the disadvantages include fundoplication-specific complications, the possibility that it is unnecessary, and that it may not decrease recurrence. we retrospectively reviewed giant paraesophageal hernia (peh) repairs with two-point gastropexy in the fundus and body and no antireflux procedure. data collected included postoperative gerd symptoms, postoperative proton pump inhibitor (ppi) therapy and recurrence. methods: a retrospective review of patients who underwent repair of giant peh from to december was performed. giant was defined as a hernia with % or more of the stomach above the diaphragm. follow-up consisted of an upper gi (ugi) study one year postoperatively and a reflux symptom questionnaire. patients were followed every months in the surgery clinic, and a ppi wean was initiated at the second postoperative visit. the primary outcome evaluated was discontinuation of ppis. in addition, we utilized a standardized reflux scale and collected recurrence rates. chi-squared was used for statistical analysis. 
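the gastropexy series above names chi-squared as its statistical test. purely as an illustration, with invented counts rather than the study's data, a 2x2 chi-squared comparison of ppi discontinuation by symptom status could be run like this.

```python
# invented 2x2 table, for illustration only: rows = reflux questionnaire
# positive / negative; columns = off PPI / still on PPI.
from scipy.stats import chi2_contingency

table = [[12, 30],
         [48, 25]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.4f}")
```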
background: gastroesophageal reflux disease (gerd) is a highly prevalent disorder with a multitude of treatment options ranging from lifestyle modifications and medical management to surgery. despite the numerous treatments available, there is still debate over which approach is most appropriate and effective for patients. this study aims to examine the effect of robotic hiatal hernia repair (rhhr) with the novel addition of esophagopexy in patients with gerd. methods: a single-institution, single-surgeon, prospectively maintained database was used to identify patients who underwent rhhr with a partial fundoplication and concomitant esophagopexy for gerd from november to july . patient characteristics, operative details and postoperative outcomes were analyzed. the primary endpoint was resolution of subjective gerd symptoms and discontinuation of proton pump inhibitor (ppi) therapy. recurrence of hiatal hernia was a secondary endpoint. results: eleven patients were identified meeting the inclusion criteria (rhhr + esophagopexy), with a mean follow-up of . ± . weeks. with regard to the rhhr, % underwent a partial fundoplication and the remaining % underwent a redo wrap. this patient cohort was . % female with a mean age of . ± . years. preoperative esophagogastroduodenoscopy (egd) was performed in % of patients, showing a hiatal hernia in . %, gastritis in . % and esophagitis in . % of patients. manometry was performed in . % of the patients, showing esophageal dysmotility in % of them. esophagograms and ph studies were performed preoperatively in . % and . % of patients, respectively. preoperatively, % of patients had a documented diagnosis of gerd and were taking a ppi and/or h blocker. after rhhr with esophagopexy, . % of patients had resolution of their gerd symptoms while . % (n = ) remained symptomatic. however, one of the two patients reported a subjective decrease in symptom severity following the procedure. despite resolution of symptoms, . % remained on ppis; another % switched to h blockers, and one patient discontinued all antisecretory therapy. none of the patients experienced recurrence of their hiatal hernia. conclusion: based on our data, rhhr with esophagopexy results in resolution of gerd symptoms in over % of symptomatic patients. in patients with hiatal hernias and gerd, rhhr with esophagopexy does lead to resolution of symptoms; however, the majority of patients remained on ppis. long-term follow-up is needed to investigate whether these patients are able to discontinue ppis and remain symptom-free. chaya shwaartz, nadav zilka, mustapha siddiq, yuri goldes, md; sheba medical center, israel. background: d gastrectomy for gastric carcinoma is a well-established procedure in patients undergoing surgery for gastric cancer and is the standard of care in our institution. reduced pain, early ambulation, and better cosmesis are some of the benefits of minimally invasive surgery for early gastric cancer. we aimed to describe our experience with laparoscopic d gastrectomies undertaken by a single surgeon in our institution. methods: this is a single-center retrospective review of prospectively collected d gastrectomies performed by a single surgeon. between november and february , laparoscopic subtotal/total gastrectomies were performed at sheba medical center, a tertiary center for foregut cancer. clinicopathological characteristics of the patients, surgical performance, postoperative outcomes and pathological data were collected. 
results: forty-five patients underwent laparoscopic gastrectomy. of these, had subtotal gastrectomy and had total gastrectomy. the median age in our series was ( - ). most of the patients in our series had early gastric cancer (t - ) ( %). the mean number of dissected lymph nodes was ± . the mean operative time was ± . postoperative complications were classified using the clavien-dindo classification; the rate of severe complications (≥ cd iiia) was %. conclusions: laparoscopic d gastrectomy for invasive gastric cancer is safe and feasible when carried out in high-volume centers by an experienced surgeon as part of a multidisciplinary team, with careful case selection and appropriate high-quality postoperative support. minimally invasive management of diaphragmatic hernias after esophagectomy: a case report. introduction: esophagectomy is a common treatment for both benign and malignant pathologies of the foregut. hiatal paraconduit hernias are rare complications following esophagectomy. in this study, we review our experience with these rare diaphragmatic hernias. methods: a retrospective analysis of all patients presenting with hiatal hernia after esophageal resection at the university of oklahoma health science center between and was performed. data were abstracted from the medical record for evaluation and included demographics, symptoms, repair techniques and outcomes. no patients were excluded. results: a total of ten patients were identified with paraconduit hernias. during this time interval, there were a total of esophageal resections performed. all patients had esophagectomy for malignant disease. seven of the patients underwent surgery; two patients are asymptomatic and are being followed at their request, and one patient is pending elective correction. of the seven patients who underwent surgery, the median age was , with males and two females. six of the seven patients had undergone minimally invasive ivor lewis esophagectomy and one had an open mckeown procedure. the median time from esophagectomy to hernia repair was months, with a range from month to months. the most common presenting complaints were abdominal pain and nausea. one patient was noted to have a paraconduit hernia on postoperative day and was taken to surgery for repair during the hospitalization. there was one death, in a patient who presented with necrosis of the small bowel. the remaining patients all had a laparoscopic approach. one patient required a hand port to reduce incarcerated colon, and one patient was noted to have a cecal perforation during port closure requiring repair. all patients had herniated colon, with small intestine or pancreas herniation noted in three. repair was performed by reducing the viscera, making a left phrenic relaxing incision, closing the hiatus around the conduit and then closing the diaphragmatic defect with mesh. at a median follow-up of months, there are no recurrences. conclusion: hiatal paraconduit hernias are becoming a frequent finding among survivors of esophageal cancer surgery. our study demonstrates a propensity for patients who undergo minimally invasive esophagectomy to develop these hernias. the vast majority of patients can undergo laparoscopic repair. our recommendation is to perform a diaphragmatic relaxing incision and make liberal use of mesh. early results appear favorable regarding recurrence. aim: there have been several reports illustrating the safety and efficacy of various surgical techniques for performing laparoscopic esophagojejunostomy (ej). 
this study aims to compare two established methods of ej anastomosis, circular stapling with a purse-string suture ("lap-jack") and the linear stapling technique, in laparoscopic total gastrectomy. methods: patients diagnosed with gastric cancer underwent intracorporeal ej anastomosis in laparoscopic total gastrectomy from january to october . cases used the circular stapler with the purse-string "lap-jack" method, and patients used the linear stapling method for ej anastomosis. patients were matched using propensity scores, and retrospective data for patient characteristics, surgical outcome, and post-operative complications were reviewed. the two groups showed no significant difference in age, bmi, or other clinicopathological characteristics, and there was no conversion to an open procedure. after propensity score matching analysis, the linear group had a significantly shorter operating time ( . ± . vs . ± . , p ≤ . ) and a longer proximal margin ( . ± . vs . ± . , p = . ). no significant difference was found in estimated blood loss, retrieved lymph nodes, hospital stay, or time to first flatus. there was no postoperative mortality. early postoperative complications occurred in ( . %) patients in the circular group and ( . %, p = . ) in the linear group. ej leakage occurred in ( . %) cases in each group, with ( %) case from each group needing radiologic or surgical intervention. no other significant difference in early complications was found. late complications were observed in ( . %) cases (circular = , linear = , p = . ), with ej anastomotic stricture in the linear group, but this was not statistically significant. conclusion: both circular and linear stapling techniques are feasible and safe for performing intracorporeal ej anastomosis during laparoscopic total gastrectomy. the linear stapling technique provided a longer proximal margin and a shorter operating time. there was no significant difference in anastomosis-related complications between the two groups. masahiro watanabe, masanori tokunaga, akio kaito, shizuki sugita, takahiro kinoshita; national cancer center hospital east, gastric surgery division. background: although the current standard treatment for advanced gastric cancer (agc) is open gastrectomy, laparoscopic gastrectomy (lg) is increasingly performed, especially in the east. however, it is a technically demanding procedure, and its feasibility remains unclear. the aim of the present study was to clarify the feasibility of lg for agc. patients and methods: the present study included patients who underwent lg for agc between and . the indication for lg has gradually expanded in our institute and currently comprises any stage of gastric cancer except for gastric cancer obviously invading adjacent organs or gastric stump carcinoma. we retrospectively reviewed short- and long-term surgical outcomes of the patients. results: the male/female ratio was : , and the median age (range) was ( - ) years. distal gastrectomy was most frequently performed ( %), followed by total gastrectomy ( %). median operation time and intraoperative blood loss were ( - ) minutes and ( - ) g, respectively. the clavien-dindo grade iii or higher complication rate was . %. with a median follow-up period of months, the -year recurrence-free survival rates of pstage ii and iii patients were % and %, respectively. conclusion: the outcomes of lg for agc are satisfactory, provided that an experienced team performs the surgery. 
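the ej anastomosis comparison above rests on propensity score matching. as a hedged sketch of the usual mechanics (a propensity model followed by 1:1 nearest-neighbour matching), with fabricated covariates rather than the study's variables:

```python
# fabricated data; covariates and group labels are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 300
X = np.column_stack([rng.normal(62, 11, n),   # age
                     rng.normal(23, 3, n)])   # BMI
treated = rng.integers(0, 2, n)               # 1 = linear stapling, 0 = circular

# propensity score: modelled probability of receiving the linear technique
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# greedy 1:1 nearest-neighbour matching without replacement on the score
controls = list(np.where(treated == 0)[0])
pairs = []
for i in np.where(treated == 1)[0]:
    if not controls:
        break
    j = min(controls, key=lambda c: abs(ps[c] - ps[i]))
    pairs.append((i, j))
    controls.remove(j)
print(f"{len(pairs)} matched pairs")
```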
introduction: the present study aims to evaluate the predictive value of indocyanine green (icg) for the detection and prevention of anastomotic leak following esophagectomy. anastomotic leak is a highly morbid and potentially fatal complication of esophagectomy. ensuring adequate perfusion of the gastric conduit can minimize the risk of postoperative leak. intraoperative evaluation with fluorescence angiography using icg offers a dynamic assessment of gastric conduit perfusion and can guide anastomotic site selection. methods: a search of the electronic databases medline, embase, scopus, web of science and the cochrane library using the search terms "indocyanine/fluorescence" and esophagectomy was completed to include all english articles published between and august . articles were selected by two independent reviewers based on the following major inclusion criteria: ( ) esophagectomy with gastric conduit reconstruction; ( ) use of fluorescence angiography with indocyanine green to assess perfusion; ( ) age ≥ years; ( ) sufficient outcome data for the calculation of leak rates; and ( ) sample size ≥ . the quality of included studies was assessed using the quality assessment of diagnostic accuracy studies- . results: our literature search yielded potential studies, of which were included for meta-analysis after screening and exclusions. there were eleven prospective and three retrospective studies. the pooled anastomotic leak rate when icg was used was %. pooled sensitivity and specificity for leak detection were . ( . - . ) and . ( . - . ), respectively. when studies involving intraoperative modifications were removed, pooled sensitivity and specificity were only marginally changed, at . ( . - . ) and . ( . - . ), respectively. the diagnostic odds ratio was . ( . - . ) across all studies and . ( . - . ) when intraoperative interventions were excluded. only three trials included a control group, giving a sample size of . in studies with a comparator group, icg was associated with an % reduction in the risk of anastomotic leak [or: . ( . - . )]. conclusions: in non-randomized trials, the use of icg as an intraoperative tool for visualizing vascular perfusion and selecting the conduit anastomotic site is promising. however, poor data quality and heterogeneity in reported variables limit cross-study comparisons and the generalizability of findings. randomized, multi-center trials are needed to account for independent risk factors for leak and to better elucidate the impact of icg in predicting and preventing anastomotic leaks. objective: robotic assistance for bariatric surgery represents a novel application of a rapidly emerging technology. its safety and efficacy remain primarily characterized by smaller, single-institution studies. in this investigation, the influence of robotic assistance on short-term perioperative outcomes is contrasted with the more established primary multi-port laparoscopic approach for patients undergoing roux-en-y gastric bypass (rygb), using data from a national bariatric database. methods: a retrospective analysis of , robotic-assisted and , laparoscopic rygb patients from the metabolic and bariatric surgery accreditation and quality improvement program national database was performed to examine differences in patient characteristics and short-term outcomes. on bivariate analysis, variables associated with the primary outcomes of -day reoperation, readmission and reintervention were entered into multivariable analyses to determine independent significance. 
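stepping back to the icg meta-analysis above: a diagnostic odds ratio can be recovered directly from pooled sensitivity and specificity, as in this small sketch. the values shown are placeholders, since the abstract's own pooled figures are elided in this reproduction.

```python
# placeholder values only; the abstract's pooled estimates are not reproduced.
def diagnostic_odds_ratio(sens: float, spec: float) -> float:
    """DOR = (sens / (1 - sens)) * (spec / (1 - spec))."""
    return (sens / (1.0 - sens)) * (spec / (1.0 - spec))

# e.g. sensitivity 0.85 and specificity 0.90 give a DOR of 51.0
print(diagnostic_odds_ratio(0.85, 0.90))
```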
results: robotic-assisted bypass patients were older (p < . ), had a higher prevalence of comorbidities, and more frequently had concomitant operations performed during surgery (p < . ). on bivariate analysis, robotic-assisted patients had a higher rate of readmission than laparoscopic patients ( . % vs. . %; p = . ), but no differences in -day reoperation ( ). conclusion: robotic assistance does not confer an increased rate of morbidity and mortality after rygb, and represents a feasible surgical modality for the surgeon willing to adopt the technology and accept its limitations. alicia m bonanno, md, brandon tieu, md, farah husain, md; oregon health and science university. introduction: marginal ulcer is a common complication following roux-en-y gastric bypass, with incidence rates between and %. most marginal ulcers resolve with medical management and lifestyle changes, but in the rare case of a non-healing marginal ulcer there are few treatment options. revision of the gastrojejunal (gj) anastomosis carries significant morbidity and mortality, with complication rates ranging from to %. thoracoscopic truncal vagotomy (ttv) may be a safer alternative with decreased operative times. the purpose of this study was to evaluate the safety and effectiveness of ttv in comparison to gj revision for the treatment of recalcitrant marginal ulcers. methods and procedures: a retrospective chart review of patients who required surgical intervention for non-healing marginal ulcers was performed from st september to st september . all patients underwent medical therapy along with lifestyle changes prior to intervention and had a preoperative egd that demonstrated a recalcitrant marginal ulcer. revision of the gj anastomosis or ttv was performed. data collected included operative time, ulcer recurrence, morbidity rate, and mortality rate. statistical analysis was performed using the t-test and fisher's exact test. results: a total of fifteen patients were identified who underwent either gj revision (n = ) or ttv (n = ). there were no -day mortalities in either group. mean operative time was significantly lower in the ttv group in comparison to gj revision ( . ± vs. . ± minutes, respectively, p = . ). recurrence of the ulcer did not differ significantly between groups and occurred following gj revisions and ttv. the overall complication rate was not significantly different, at % in the gj revision group and % in the ttv group. complications included anastomotic leak ( gj), anastomotic stricture ( gj), aspiration ( ttv), dysphagia ( gj and ttv), and dumping syndrome ( gj). conclusions: our results demonstrate that thoracoscopic vagotomy may be a better alternative, with decreased operative times and similar effectiveness. however, further prospective observational studies with a larger patient population would be beneficial to evaluate complication rates and ulcer recurrence rates between groups. we present a case of a -year-old female with a history of thyroid cancer who initially presented to an outside hospital complaining of reflux, abdominal pain, early satiety, and a -pound unintentional weight loss. endoscopy demonstrated a cm pre-pyloric mass, with initial biopsies of the mass demonstrating only gastric mucosa. endoscopic ultrasound and fna of the lesion also failed to elucidate its pathology. due to the pyloric location of the mass and the inability to rule out invasive malignancy, we recommended a robotic-assisted transgastric submucosal resection with possible distal gastrectomy. 
intraoperatively we found a -degree circumferential pre-pyloric exophytic sessile tumor. frozen sections suggested a benign papillary tumor, so we proceeded with submucosal resection. the resulting mucosal defect and gastrotomy were closed primarily with absorbable suture. final pathology showed the tumor to be a tubulovillous adenoma with high-grade dysplasia arising against a background of intestinal metaplasia. the resection margins were negative for dysplasia. the postoperative course was complicated by a minor leak, which did not require operative intervention, and subsequent gastric outlet narrowing, which required endoscopic dilation and feeding tube placement. however, the patient has recovered well and has advanced to diet as tolerated. gastric adenoma has a prevalence of . - . % in the western hemisphere. the risk of carcinomatous transformation in gastric adenomas is related to size, degree of dysplasia, and villosity. gastric adenomas are considered precancerous lesions. pre-operative pathologic diagnosis of dysplasia is often elusive, as biopsies will often miss or under-grade the lesion. guidelines advocate complete resection with either endoscopic submucosal dissection or surgical resection, depending on surgeon preference and local expertise. endoscopic resection has been shown to be safe and efficacious in the removal of adenomas, with good long-term outcomes. in this case the pathology of the lesion was unclear after multiple unsuccessful biopsies and required a surgical diagnosis to rule out invasive malignancy. management of gastric adenomas, while rare, may require a multidisciplinary approach between surgical endoscopy, minimally invasive surgery, and surgical oncology to achieve local control in an oncologically sound manner. we show that transgastric submucosal resection can be achieved in a minimally invasive fashion using robotic assistance. objective: parahiatal hernia is a rare type of diaphragmatic hernia with an incidence of . - . %. parahiatal hernias arise lateral to the left crural musculature, adjacent to but separate from the oesophageal diaphragmatic hiatus. in view of its rare occurrence and little clinical suspicion, it is almost never diagnosed clinically. the current case report is intended to depict the clinical profile of an intraoperatively diagnosed parahiatal hernia and the feasibility of laparoscopic repair of parahiatal hernias. method: laparoscopic fundoplication is frequently performed at grant medical college and sir j. j. group of hospitals, india. during one such case, a parahiatal hernia was diagnosed intraoperatively. discussion: primary or true parahiatal hernias occur as a result of a congenital weakness, while secondary defects follow hiatal surgery. the primary treatment of parahiatal hernia is mesh-plasty, coupled with fundoplication in cases of large hernia and in those symptomatic for gastroesophageal reflux disease. laparoscopic repair of these uncommon hernias is safe, effective and provides all of the benefits of minimally invasive surgery. conclusion: given its rare occurrence, knowledge of this condition among laparoscopic surgeons is important to avoid a diagnostic dilemma, and knowledge of its management helps intraoperatively to avoid performing an incomplete procedure. introduction: extended indications of endoscopic resection for early gastric cancer (egc) have been widely accepted.
according to current japanese guidelines, additional gastrectomy with lymph node dissection (lnd) is recommended for patients proven to have potential risks of lymph node metastasis (lnm) on histopathological findings. on the other hand, the frequency of lnm in these patients is extremely low. the aim of this study was to elucidate the accurate risk of lnm based on the number of risk factors (rf) for possible lnm, and to compare the stratified risk of lnm with the predicted risk of additional radical resection. methods and procedures: we enrolled egc patients who did not meet the absolute or extended indications for endoscopic resection, and investigated the risk stratification of lnm according to the total number of the following lnm rfs: ( ) sm ; ( ) lymphatic vessel invasion; ( ) undifferentiated adenocarcinoma > mm in diameter; and ( ) > mm in diameter with ulcer formation. we compared the stratified risk to the surgical risk calculated with the japanese national clinical database (ncd) risk calculator in patients with additional gastrectomy after esd. results: the total number of lnm rfs and the frequency of lnm were significantly correlated ( / rf: . %; rfs: . %; rfs: . %; rfs: . %; p < . , fisher's exact test). the estimated frequency of lnm was lower than the ncd-predicted in-hospital mortality rate in . % of / rf patients who underwent additional gastrectomy with lnd after esd. the present study suggests that some patients may be over-indicated for additional gastrectomy with lnd, and that no additional surgical treatment, or less invasive surgery such as local lnd (sentinel node navigation surgery or lymphatic basin resection), might be indicated for some patients with a low number ( / rf) of lnm risk factors after esd. aims: laparoscopic proximal gastrectomy has been applied for early gastric cancer in the upper third of the stomach. we previously reported outcomes of laparoscopic total gastrectomy in managing this condition. in this study, we applied this modified technique for upper-third early gastric cancer with double tract reconstruction. it is expected that our technique could be useful for treating these cases. methods: from april of to june of , consecutive patients with upper-third early gastric cancer were assigned to undergo surgical treatment with proximal gastrectomy at our hospital. we had cases of total gastrectomy for upper-third early gastric cancer in the same study period. background: laparoscopic total gastrectomy for remnant gastric cancer is much more difficult than common laparoscopic total gastrectomy due to severe adhesions to adjacent organs and displacement of anatomical structures. purpose: the aim was to analyze cases of laparoscopic total gastrectomy for remnant gastric cancer at the department of surgery of juntendo university urayasu hospital between november and april . method: we analyzed the outcomes and feasibility of laparoscopic total gastrectomy for remnant gastric cancer, and compared laparoscopic total remnant gastrectomy ( cases) with laparoscopic total gastrectomy ( cases) in our hospital. results: among the previous operations, we had performed laparoscopic distal gastrectomy in cases, laparoscopic proximal gastrectomy in cases, and open distal gastrectomy in cases. all cases underwent laparoscopic total gastrectomy with r-y reconstruction. case was converted to open surgery due to severe adhesions. the mean operative time was min and the mean blood loss was ml.
there were no intraoperative complications; postoperative complications comprised a pancreatic fistula and a bowel obstruction, with no complications greater than grade according to the clavien-dindo classification. the mean postoperative hospital stay was . days. all cases were without recurrence. thus, there were no significant differences in operative time, bleeding volume, intra- and postoperative complications or hospital stay compared with laparoscopic total gastrectomy. conclusions: laparoscopic total remnant gastrectomy can be performed with short-term outcomes similar to laparoscopic total gastrectomy, may be a feasible and safe procedure, and can become an option in the therapeutic strategy. although this study was not powered to show lower recurrence rates with synthetic absorbable mesh as compared to biologic mesh, the . % recurrence rate is consistent with other series utilizing this mesh. it is interesting to note the difference in time to recurrence. these results suggest that while synthetic absorbable mesh may result in lower recurrence rates, recurrence seems to occur earlier. the results also suggest that deconditioning (lower bmi) and difficult cases and/or recovery may predispose to recurrence. these findings can help inform lf mesh selection and predict which patients are at higher risk of recurrence. introduction: little discussion of gastroparesis (gp) following laparoscopic paraesophageal hernia repair (lphr) has been reported in the literature. we wished to examine the incidence at our institution and identify potential risk factors for the development of gastroparesis following lphr. methods and procedures: a single-institution retrospective chart review was performed using cpt codes corresponding to paraesophageal hernia repair and fundoplication to identify patients undergoing laparoscopic paraesophageal hernia repair over a five-year period ( / / - / / ) by three surgeons. emergency procedures and reoperations were excluded. in total, patients undergoing non-emergent first-time lphrs were identified. the size of the hiatal defect was identified when possible, via either measurement between the diaphragmatic crura on ct or medical record documentation. data obtained included sex, age, hernia type, mesh usage, and the existence of specific comorbidities associated with gastroparesis. the presence of gastroparesis was identified either by documentation of diagnosis via clinical judgment, or by the results of gastric emptying nuclear medicine studies, with timing no longer than months from the date of surgery. the independent student's t-test and fisher's exact test were used to determine statistical differences between the groups. results: patients undergoing non-emergent first-time lphrs were identified. of these, we were able to obtain the size of the hiatal defect in patients. patients overall were diagnosed with gastroparesis, an overall incidence of . %. when comparing all patients who developed gastroparesis to those who did not, only females comprised the group which developed gastroparesis ( males/ females with gp, males/ females without gp, p= . ). age was also found to be greater in the group which developed gastroparesis. for patients in whom the size of the hernia defect was identified, the average age was years older in the group diagnosed with gastroparesis. step : under laparoscopic view, the left part of the lesser omentum was cut, preserving the hepatic branch of the vagus nerve.
the right crus of the diaphragm was dissected free from the soft tissue around the stomach and abdominal esophagus. in this step the fascia of the right crus should be preserved and the soft tissue should not be damaged, to avoid bleeding. after cutting the peritoneum just inside the right crus, the soft tissue was dissected bluntly to the left side. the inside margin of the left crus of the diaphragm was then identified from the right side. in this part of the procedure, the laparoscope uses trocar (a), the assistant uses trocar (b) to pull the stomach to the left lower side, and the operator's right hand uses trocar (c). step : the branches of the left gastroepiploic vessels and the short gastric vessels were divided with an ultrasonic coagulation and dissection device. the left crus of the diaphragm was exposed and the window at the posterior side of the abdominal esophagus was widely opened. in this part of the procedure, the laparoscope uses trocar (a) at the beginning of dividing the left gastroepiploic vessels, and trocar (b) when dividing the short gastric vessels. step : the right and left crura are sutured with interrupted stitches to reduce the hiatus. from the right side, the fundus of the stomach is grasped through the widely opened window behind the abdominal esophagus. the fundus of the stomach is then pulled through to obtain a -degree "stomach wrap" around the abdominal esophagus (fundoplication). using - non-absorbable braided suture, stitches are placed between both gastric flaps. purpose: laparoscopic gastrectomy has been widely adopted as the treatment of choice by many countries and institutions. internal hernia is a well-known complication after roux-en-y gastric bypass in the field of bariatric surgery. however, there are only a few reports of internal hernia after gastrectomy in gastric cancer patients. the purpose of this study was to analyze the incidence and clinical features of internal hernia after gastric cancer surgery in a high-volume center. method: , gastric cancer patients who underwent curative gastrectomy at seoul national university bundang hospital between january and december were retrospectively reviewed in this study. internal hernia was classified into two types: mesenteric hernia and petersen's hernia. result: patients who underwent distal gastrectomy (dg) with reconstruction by billroth ii, roux-en-y gastrojejunostomy or uncut roux-en-y gastrojejunostomy, total gastrectomy (tg) with esophagojejunostomy, or proximal gastrectomy with double tract reconstruction (pg dtr) with esophagojejunostomy and gastrojejunostomy had potential space for internal hernia. among these patients, ( . %) were found to have internal hernia on computed tomography, and patients ( . %) underwent surgical treatment of internal herniation. two patients were conservatively managed. all patients suffered from abdominal pain, and / ( %) patients showed nausea and vomiting. the median interval between the initial gastrectomy and surgery for internal hernia was days. mesenteric hernia was observed in cases and petersen's hernia in cases. since we started closing the mesenteric and petersen's defects in may of , there have been only cases ( %) observed, compared with cases ( %) before closure of the defects. conclusion: internal hernia after gastrectomy is likely underreported. although we analyzed patients with internal hernia, there might be more patients with mild symptoms who were managed conservatively on their own.
a high index of suspicion for internal hernia should be maintained in patients presenting with symptoms such as nausea, vomiting and abdominal pain after gastrectomy with potential space for internal hernia. in our experience, closure of the mesenteric and petersen's defects is helpful in reducing internal hernia. however, due to the low incidence, a multicenter retrospective study is necessary. introduction: the increased incidence of anemia in patients with a hiatal hernia (hh) has been clearly demonstrated, as has resolution of anemia after hh repair in these patients. despite this, the implications of preoperative anemia for postoperative outcomes have not been well described. in this study, we aimed to identify the incidence of preoperative anemia in patients undergoing hh repair at our institution and sought to determine whether preoperative anemia had an impact on postoperative outcomes. methods and procedures: using our irb-approved institutional hh database, we retrospectively identified patients undergoing hh repair between january and april at our institution. we identified all patients with anemia, defined as serum hemoglobin levels less than g/dl in men and g/dl in women, measured within two weeks prior to surgery, and compared this cohort to those who had normal hemoglobin values preoperatively. specific perioperative outcomes analyzed included: estimated blood loss (ebl), operative time, need for blood transfusion, failure to extubate postoperatively, intensive care unit (icu) admission, postoperative complications, length of stay (los), and -day readmission. results: we identified patients undergoing hh repair, of whom had preoperative bloodwork available for review. the average age was years and the majority of patients were female ( %, n= ). most were treated electively ( %, n= ) and with a minimally invasive approach ( %, n= ). patients ( . %) had preoperative anemia. compared to patients without anemia, patients with anemia had increased rates of failed extubation postoperatively ( . % vs. . %, p= . ), increased icu admissions ( . % vs. . %, p= . ), increased need for perioperative blood transfusions ( . % vs. %, p= . ), and increased rates of postoperative complications ( . % vs. . %, p < . ). although mean los ( . days vs. . days, p = . ), mean operating time ( mins vs. mins, p= . ), and ebl ( ml vs. ml, p= . ) were greater in the anemic group, they did not reach statistical significance, and there was no significant difference in -day readmission rate ( . % vs. . %, p= . ). conclusions: anemia diagnosed on preoperative bloodwork appears to be associated with increased failure to extubate postoperatively, need for icu admission, need for perioperative blood transfusion, and increased overall complication rate after hh repair. however, we found no significant difference in los or -day readmissions between anemic and non-anemic patients. since the majority of patients in this analysis underwent elective repairs, these results support the preoperative treatment of anemia in patients undergoing hh repair. few studies have compared the procedures' long-term effectiveness, with none looking beyond years. this study sought to characterize the efficacy of laparoscopic toupet versus nissen fundoplication for types iii and iv hiatal hernia using a telephone survey. methods and procedures: with irb approval, a review of all laparoscopic hiatal hernia repairs with mesh reinforcement performed over seven years at a single center by one surgeon was conducted.
patient demographics and perioperative characteristics were recorded. hiatal hernia was classified per published sages guidelines as type iii or iv using operative reports and preoperative imaging. patients with type i or ii or recurrent hiatal hernia and patients receiving concomitant procedures were excluded. the gerd-health related quality of life survey was administered by telephone no earlier than months postoperatively. patients responded to items concerning symptom severity using a -point scale ( = no symptoms to = symptoms are incapacitating to do daily activities). symptoms surveyed included heartburn ( items), difficulty swallowing ( item) and regurgitation ( items). introduction: as thoracic esophageal carcinoma has a high rate of metastasis to the upper mediastinal lymph nodes, especially along the recurrent laryngeal nerve (rln), it is crucial to perform complete lymph node dissection along the rln without complications. although intraoperative neural monitoring (ionm) during thyroid and parathyroid surgery has gained widespread acceptance as a useful tool for visual nerve identification, the utilization of ionm during esophageal surgery has not become common. here, we describe our procedures for lymphadenectomy along the rln utilizing ionm. methods and procedures: we first dissect the ventral and dorsal sides of the esophagus, preserving the membranous structure (meso-esophagus), which contains the tracheoesophageal artery, the rln and lymph nodes. we next identify the location of the rln, which runs in the meso-esophagus, using ionm before visual contact. after that, we perform lymphadenectomy around the rln, preserving the nerve. this technique was evaluated in consecutive cases of esophagectomy in the prone position (neural monitoring group; nm) and compared with our historical cases (conventional method group; cm). background: laparoscopic hiatal hernia repair, particularly of large type and type hernias, is associated with high recurrence rates. various overlay mesh reinforcements have been described in an attempt to improve outcomes. unfortunately, overlay use of biologic mesh continues to result in high recurrence rates, and more effective repairs employing permanent mesh raise serious erosion concerns and are therefore rarely used. we theorize that employing an interlay technique with permanent mesh (positioned between both crura) will help enhance crural closure and reduce hiatal hernia recurrences with minimal risk of erosion. methods: we reviewed all patients who underwent laparoscopic hiatal hernia repair from april to august by a single surgeon, from a prospectively maintained database at a tertiary care referral center (n= ). patients who underwent surgery for achalasia with concurrent hiatal repair were excluded. during this time frame, a new interlay technique with polypropylene mesh was employed upon suture closure of the crura. outcomes of repair were retrospectively reviewed. recurrence of hernia was identified by positive workup of patients' symptoms (new-onset dysphagia, gerd, pain). results: a total of consecutive laparoscopic hiatal hernia repairs were performed over a period of months. interlay polypropylene mesh was utilized in all repairs. the majority of patients were female ( . %), with a median age of and a mean bmi of . . eleven ( . %) patients were redo repairs. the majority of patients received a nissen fundoplication (n= , . %), followed by a toupet fundoplication (n= , . %). median length of stay after surgery was day.
median follow-up was days (range: - days). there were no reported recurrences. conclusion: laparoscopic hiatal hernia repair with interlay polypropylene mesh appears in the short term to be a safe and durable technique to reduce the incidence of hiatal hernia recurrence. further studies are needed to assess the longer-term outcomes of this novel technique. zia kanani, melissa helm, max schumm, jon c gould, md; introduction: laparoscopic fundoplication remains the current gold standard surgical intervention for medically refractory gastroesophageal reflux disease. studies suggest that on average - % of patients undergo reoperative surgery due to recurrent, persistent, or new symptoms. the primary objective of this study was to characterize the long-term symptomatic outcomes of primary and reoperative fundoplications in a clinical series of patients who have undergone one or more fundoplications. methods: patients who underwent laparoscopic primary or reoperative fundoplication between and by a single surgeon were retrospectively identified using a prospectively maintained database. patients undergoing takedown of a failed fundoplication and conversion to roux-en-y gastric bypass (for morbid obesity, severe gastroparesis, or or more prior failed attempts) were excluded from the current analysis. all procedures were performed laparoscopically. patients were asked to complete the validated gerd-health related quality of life (gerd-hrql) survey prior to surgery and postoperatively at standard intervals to assess long-term symptomatic outcomes and quality of life. gerd-hrql composite scores range from (highest disease-related quality of life) to (lowest disease-related quality of life, most severe symptoms). conclusions: patients who need to undergo reoperative fundoplication have more severe gerd-related symptoms at years post-op compared to patients undergoing primary fundoplication. however, good outcomes and morbidity rates of laparoscopic reoperation that approximate those of a primary fundoplication are possible in the hands of an experienced surgeon. adenocarcinoma of the duodenum: surgical or endoscopic treatment? introduction: it is well known that adenocarcinoma of the duodenum (adc) is a quite rare lesion; in fact, it represents % of cancers of the small bowel, and % of these are localized in the periampullary area: % affect the sub-papillary tract and only % the supra-papillary segment of the duodenum. adc may arise from duodenal polyps (familial polyposis or gardner's syndrome) or be associated with coeliac disease. until now the treatment has been pancreatoduodenectomy (for anatomo-surgical reasons and for the possibility of regional lymph node resection); in fact, in my series of such procedures, were performed for duodenal cancer. in the last years, patients with adc of the supra-papillary segment of the duodenum underwent endoscopic submucosal dissection (esd). the purpose of this study was to assess the feasibility of esd in treating such cases. in our experience this kind of endoscopic operation was feasible but with a high complication rate: perforation in cases ( . %) and bleeding in case ( . %). all the complications were successfully treated endoscopically and the long-term outcomes were favorable. considering the high rate of complications, the difficulty and length of the procedure, patient compliance, and the need for general anesthesia, a very skilled endoscopist is needed.
conclusions: esd represents a new endoscopic approach established in clinical practice: it is performed along the intraluminal path (the rd space) which, unlike the others, remains virtual and has to be created by dissecting and expanding the tissue layer between the mucosa and the muscularis propria, allowing the endoscope to gain access. the benefit of esd for treating adc of the supra-papillary segment of the duodenum must, according to our experience, be validated in the future; a pre-operative pet-ct examination must be performed in order to demonstrate the duodenal lesion, any lymphatic involvement, and the absence of infiltration of the head of the pancreas. yoontaek lee, md, sa-hong min, md, young suk park, md, sang-hoon ahn, md, do joong park, md, phd; seoul national university bundang hospital purpose: this study summarizes a single-institution experience of laparoscopic gastrectomy for advanced gastric cancer and evaluates postoperative morbidity and long-term oncologic outcomes. methods: a total of , laparoscopic gastrectomies for advanced gastric cancer were performed at seoul national university bundang hospital between may and may . the characteristics of patients, surgical techniques, postoperative morbidity, and long-term oncologic outcomes were retrospectively reviewed using electronic medical records. results: patients required conversion to open surgery. the reasons for conversion were advanced stage (n= ), intraoperative bleeding (n= ), adhesions due to previous abdominal operation (n= ), small abdominal cavity (n= ), associated disease (n= ), and intraoperative pleural injury (n= ). the mean hospital stay was . days for distal gastrectomy, . days for total gastrectomy, . days for proximal gastrectomy, and . days for pylorus-preserving gastrectomy. the mean number of collected lymph nodes was . for distal gastrectomy, . for total gastrectomy, . for proximal gastrectomy, and . for pylorus-preserving gastrectomy. the rate of postoperative complications of grade ii or more was . %. there was one case of postoperative mortality due to delayed bleeding after discharge. old age was the only independent predictor of surgical morbidity. background: intrathoracic gastric volvulus is a life-threatening complication of paraesophageal hernia. treatment is challenging because acute volvulus may lead to gastric strangulation and necrosis. most patients are elderly, with significant associated medical illness that increases the morbidity and mortality of major surgery. we present a case showing that laparoscopic surgery is safe for paraesophageal hernia with acute intrathoracic gastric volvulus in a high-risk patient. case presentation: an -year-old woman with underlying diabetes mellitus and hypertension was transferred from an outlying hospital with anemia, dysphagia, urinary tract infection and aspiration pneumonia. she had severe recurrent emesis after admission. ct scan of the chest and abdomen revealed a large esophageal hiatal hernia, with most of the stomach in the inferior mediastinum with organoaxial gastric volvulus. endoscopy revealed a flat pigmented-spot gastric ulcer compatible with a cameron lesion and twisting of the gastric folds without evidence of ischemia. endoscopic reduction was unsuccessful. laparoscopic surgery was performed and the herniated stomach was successfully reduced. the hernial sac was excised. the crura were approximated and reinforced with composite mesh.
nissen fundoplication was performed along with gastropexy of the greater curve of the stomach to the abdominal wall. there were no perioperative complications. she tolerated an enteral diet on postoperative day . she had an uneventful recovery and was discharged weeks later, after treatment of her associated medical illnesses. she had no relapse of previous symptoms at her six-month follow-up assessment. discussion: endoscopic reduction of acute gastric volvulus may be the first option in a patient with severe comorbidities. however, if there is evidence of ischemia or failure of endoscopic reduction, surgical treatment should be considered. laparoscopic reduction and gastropexy may be a less invasive and viable alternative to more aggressive surgical procedures, but definitive surgery with hiatal hernia repair can be done in selected patients. conclusion: minimally invasive treatment of acute gastric volvulus with paraesophageal hernia, either endoscopic or laparoscopic, offers the option of reducing morbidity and mortality in the elderly with significant comorbidities. definitive laparoscopic surgery can be accomplished successfully and safely when it is performed with meticulous attention to surgical technique and perioperative care. reid fletcher, md, mph, emily ramirez, rn, alfonso torquati, md, philip omotosho, md; rush university medical center introduction: the objective of this study was to evaluate the impact of an enhanced recovery after surgery (eras) program on post-operative length of stay following laparoscopic sleeve gastrectomy. eras programs have been demonstrated to improve outcomes and decrease length of stay in multiple surgical disciplines; however, relatively little has been published regarding the impact of eras programs in bariatric surgery. methods: an eras program for all patients undergoing bariatric surgery was implemented in february at a single institution. we retrospectively reviewed all patients undergoing laparoscopic sleeve gastrectomy between february and august . as a pre-eras historical control, we also reviewed all patients undergoing laparoscopic sleeve gastrectomy between january and december . baseline patient characteristics, additional concomitant operative procedures, and -day readmission and complication rates were reviewed. logistic regression analysis was used in univariate and multivariate models to identify factors that predicted early post-operative discharge. data analysis was completed using stata se software (statacorp lp; college station, tx). results: eighty-five patients underwent laparoscopic sleeve gastrectomy after implementation of the eras program, while patients were included in the pre-eras control group. there were no statistically significant differences in baseline characteristics between the two groups, and there were no differences in the rate of concomitant procedures performed. there was a statistically significant decrease in post-operative length of stay following implementation of the eras program (from . ). it has been reported that laparoscopic redo surgery is effective for recurrent gerd and/or hiatal hernia after surgery. however, there have been very few reports from japan. we report an initial experience of laparoscopic surgery for japanese patients with recurrent gerd and/or hiatal hernia. among patients who had undergone laparoscopic fundoplication in our hospital from to , patients with recurrent gerd/hiatal hernia underwent redo surgery.
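categorical outcome comparisons like those above (for example, readmission or complication rates before versus after an eras program) are commonly tested with fisher's exact test; a minimal sketch with made-up counts, for illustration only.

from scipy.stats import fisher_exact

# 2x2 table: rows = pre-eras / post-eras cohorts,
# columns = readmitted / not readmitted (counts are invented)
table = [[8, 92],
         [5, 95]]
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")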
preoperative work-up included upper gi series, endoscopy, ct, h ph-impedance monitoring and manometry. the patients consisted of women and men with a mean age of . years. the interval from the initial surgery was . months ( days- months). the types of initial fundoplication were nissen: , toupet: , anterior: . the types of recurrence were sliding hernia: and paraesophageal hernia: . one patient with recurrent sliding hernia had poor gastric motility. laparoscopic redo surgery was performed on patients. redo surgery included crural repair with mesh reinforcement: , refundoplication: (nissen-nissen: , nissen-toupet: , toupet-toupet: , toupet-lateral: ) and reduction of the incarcerated paraesophageal hernia: . additional procedures included mesh reinforcement: and pyloroplasty: . open partial gastrectomy was performed for one patient with an incarcerated and strangulated hernia. operation time was min. patients were converted to open surgery. oral intake was started on the st pod and postoperative stay was . days. two patients had recurrence after redo surgery, one of whom underwent re-redo surgery; during that surgery the ivc was injured, and this was managed by conversion to open surgery. eleven patients had a good outcome, and patients required ppi after redo surgery. our morphological fundoplication score significantly improved after redo surgery. symptom scores and acid exposure time were also significantly improved after redo surgery. laparoscopic redo surgery for recurrent gerd and/or hiatal hernia is safe and effective, although attention should be paid during surgery to avoid injury to adjacent organs. introduction: cameron ulcers (cu) are linear erosions or ulcerations in the gastric mucosa at the level of the diaphragmatic hiatus in patients with a hiatal hernia (hh) and are frequently associated with anemia. perioperative outcomes of patients with cu undergoing hh repair are not well described. we sought to identify the incidence of cu in patients undergoing hh repair at our institution and to determine whether the presence of cu impacted postoperative outcomes. methods and procedures: using our irb-approved institutional hh database, we retrospectively identified patients undergoing repair between january and april . we identified all patients with cu found on preoperative esophagogastroduodenoscopy (egd). we compared patients with and without cu to determine if they differed in terms of preoperative anemia (defined as hemoglobin levels less than g/dl in men and g/dl in women). lastly, we compared outcomes between the cu group and the non-cu group, focusing on need for perioperative blood transfusion, failure to extubate postoperatively, intensive care unit (icu) admission, postoperative complications, length of stay (los), and -day readmission. conclusions: the presence of cu on preoperative egd is associated with increased rates of preoperative anemia, increased los, and increased icu admission after hh repair. although the cause of anemia in patients with hh is commonly attributed to cu, only % of cu patients were anemic, indicating that differences in outcomes may not be attributable solely to a higher incidence of anemia in cu patients. the implications of cu in patients undergoing hh repair need to be further elucidated. laparoscopic heller myotomy as treatment for achalasia. objective: the aim of this study was to review our experience with laparoscopic heller myotomy plus dor fundoplication. dysphagia constitutes the main symptom. diagnosis is made by means of esophageal manometry.
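paired pre- versus post-operative symptom scores, like those reported above for redo surgery, are typically compared with a non-parametric paired test; a minimal sketch with invented scores, not study data.

from scipy.stats import wilcoxon

# pre- and post-operative symptom scores for the same patients (invented)
pre = [5, 4, 5, 3, 4, 5, 4, 3, 5, 4]
post = [1, 0, 2, 1, 0, 1, 1, 0, 2, 1]
stat, p = wilcoxon(pre, post)
print(f"wilcoxon statistic = {stat}, p = {p:.4f}")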
materials and method: over a period of years, patients were treated laparoscopically with heller myotomy plus dor fundoplication. all patients had lost weight, there was a predominance of females, and the average age was . twenty-five patients had chagas disease. they were all assessed with serial x-rays, endoscopy and esophageal manometry, and their symptoms were assessed with a - score, being the most severe. results: there were no conversions and no mortality. in patients the mucosa was perforated during myotomy; the mucosa was sutured without altering the result of the treatment. average hospital stay was hours. one patient had to be reoperated on because of esophageal perforation with peritonitis. sixty patients were followed up with manometric control and ph-probe testing, and only % of those had pathologic reflux. conclusions: laparoscopic treatment of achalasia is feasible and reproducible, reducing the morbidity of laparotomy while relieving patients' symptoms. introduction: stent treatment in the gastrointestinal tract is emerging as a standard therapy for overcoming strictures and sealing perforations. we have started to treat patients with perforated duodenal ulcers using a partially covered stent and external drainage, achieving good clinical results. stent migration is a serious complication that may require surgery. pyloric physiology during stent treatment has not been studied, and the mechanisms of migration are unknown. the aims of this study were to investigate the pyloric response to distention mimicking stent treatment, using the endoflip, and to investigate changes in motility patterns due to distention at baseline, after a pro-kinetic drug, and after food ingestion. methods: a non-survival study in five pigs was carried out, followed by a pilot study in one human volunteer. a gastroscopy was performed in the anaesthetized pigs and the endoflip was placed through the scope straddling the pylorus. baseline distensibility readings were performed at stepwise balloon distention to ml, ml, ml and ml, measuring pyloric cross-sectional area and pyloric pressure. measurements were repeated after administration of a pro-kinetic drug (neostigmine) and after instillation of a liquid meal. in the human study, readings were performed under conscious sedation at baseline and after stimulation with metoclopramide. results: during baseline readings the pylorus was shown to open more with increasing distention, together with higher-amplitude motility waves. at the maximum distention volume ( ml), pyloric pressure increased significantly (p= . ) and motility waves disappeared. after prokinetic stimulation, pyloric pressure decreased and motility waves increased in frequency and amplitude at , and ml distentions. after food stimulation, pyloric pressure stayed low and motility waves showed an increase in amplitude at distentions of , and ml. during both tests the pylorus showed higher pressure and a lack of motility waves at the maximum probe distention of ml. similar results were found in the human study. the pylorus seems to act as a sphincter at low distention but, when further dilated, starts acting as a peristaltic pump. when fully distended, pyloric motility waves almost disappeared and the pressure remained high, leaving the pylorus open and inactive. stent placement in the pylorus results in pyloric distention, possibly changing motility.
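endoflip readings of this kind are commonly summarized as a distensibility index, the narrowest cross-sectional area divided by intraballoon pressure at each distension volume; a minimal sketch with hypothetical values, since the measured numbers did not survive extraction.

# distensibility index = cross-sectional area / intraballoon pressure
# all values below are hypothetical, for illustration only
readings = {
    # volume_ml: (csa_mm2, pressure_mmhg)
    20: (30.0, 10.0),
    30: (45.0, 14.0),
    40: (80.0, 22.0),
    50: (95.0, 35.0),
}
for volume, (csa, pressure) in sorted(readings.items()):
    di = csa / pressure  # mm^2 per mmhg
    print(f"{volume} ml: distensibility index = {di:.1f}")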
this study indicates that a duodenal stent placed over the pylorus should have a high radial force in its pyloric part in order to dilate the pylorus and diminish the contraction waves; this might reduce stent migration. introduction: cutting-edge technology in the field of minimally invasive surgery allows the application of single-incision laparoscopic surgery to gastric cancer. however, single-incision distal gastrectomy (sidg) is still technically difficult due to the limited range of motion and an unstable field of view, even in the hands of an experienced scopist. solo surgery using a passive scope holder may be the key to making sidg safer and more efficient. we report our initial experience of consecutive cases of solo sidg. methods: a prospectively collected database of patients clinically diagnosed with early gastric cancer who underwent solo sidg from october until july was analyzed. all the operations were performed by a single surgeon and a scrub nurse. a passive laparoscopic scope holder was controlled by the surgeon to fix the field of view. results: the mean operation time (sd) was . (± . ) min, and the average estimated blood loss was . ± . ml. average body mass index was . ± . kg/m . the median hospital stay was days (range, - ), and the mean number of retrieved lymph nodes was . ± . . there was no conversion to multiport or open surgery. early postoperative complications occurred in %, comprising three cases of delayed gastric emptying, two of postoperative pneumonia, one of pancreatitis, and one wound complication. conclusion: solo sidg using a passive scope holder becomes more feasible with the stable field of view it provides. there were no peri-operative deaths in either group. in the elective group, age was not an independent risk factor for complications (or . , % ci . - . ). conclusions: the incidence of major complications and mortality in this series was much lower than previously reported for elective lpehr, while morbidity after emergency repair remains high. the paradigm of watchful waiting for elderly and/or minimally symptomatic patients with giant peh should be revisited. the impact of vagal nerve integrity testing in surgical planning. kamthorn yolsuriyanwong, md, eric marcotte, md, mukund venu, bipan chand, md; loyola university chicago, stritch school of medicine background: thoracic and gastric operations can cause vagal nerve injury, either accidental or intended. the most common procedures that can lead to such an injury include fundoplication, lung or heart transplantation, and esophageal or gastric surgery. patients may present with minimal symptoms or some degree of gastroparesis. gastroparetic symptoms include nausea, vomiting, early satiety, bloating and abdominal pain. if these symptoms occur and persist, the clinician should have a high suspicion of a possible vagal injury. investigative studies include endoscopy, esophageal motility, contrast imaging and often nuclear medicine gastric emptying studies (ges). however, ges in the post-surgical patient has limited sensitivity and specificity. if a vagal nerve injury is encountered, subsequent secondary operations must be planned accordingly. methods: from january to august , patients who had a previous surgical history of a foregut operation, with the potential risk of a vagal nerve injury, had their vagal nerve integrity (vni) test results reviewed. vni was measured indirectly by the response of plasma pancreatic polypeptide to sham feeding.
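the vni test described above is usually interpreted as the percent rise of plasma pancreatic polypeptide over baseline after sham feeding; a minimal sketch, with a threshold that is an assumption rather than the authors' stated cut-off.

# percent rise of plasma pancreatic polypeptide (pp) after sham feeding;
# the 50% threshold is an assumed convention, not taken from the abstract
def vagal_nerve_intact(baseline_pp, peak_pp, threshold_pct=50.0):
    rise_pct = (peak_pp - baseline_pp) / baseline_pp * 100.0
    return rise_pct >= threshold_pct

print(vagal_nerve_intact(baseline_pp=40.0, peak_pp=75.0))  # True (87.5% rise)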
the data collected and analyzed included age, gender, previous surgical procedures, clinical presentation, results of vni testing, and the secondary procedure planned or performed. vni testing was compared to other testing modalities to determine if outcomes would have changed. results: eight patients ( females) were included. age ranged from to years. two patients had prior lung transplantation and six patients had prior hiatal hernia repair with fundoplication. seven patients presented with reflux and delayed gastric emptying symptoms. one lung transplantation patient had no symptoms, but his lung biopsy pathology showed chronic micro-aspiration with rejection. the vni testing results were compatible with vagal nerve injury in patients. based on these abnormal results, the plans for nissen fundoplication in patients were modified with an additional pyloroplasty, and the plans for redo nissen fundoplication in patients were changed to redo nissen fundoplication plus pyloroplasty in patient and partial gastrectomy with roux-en-y reconstruction in patients. the operative plans in patients with a normal vni test were not altered. all patients who had secondary surgery had improvement in symptoms and/or improvement in objective tests (i.e., signs of rejection). conclusion: the addition of vni testing in patients with previous potential risk of vagal nerve injury may help the surgeon select the appropriate secondary procedure. we present a single-center experience with a "myotomy first" approach for all patients, regardless of diverticular size. the hypothesis is that cardiomyotomy alone will provide satisfactory symptom abatement in some patients; because mis cardiomyotomy causes minimal scarring, a staged mis diverticulectomy remains feasible at a later date if diverticular retention/stasis continues. to discuss this treatment algorithm, we present our experience with cardiomyotomy alone for patients with epiphrenic diverticula. methods: the electronic medical record was queried for patients with esophageal diverticula who were managed with cardiomyotomy and dor fundoplication alone. pre- and post-operative reflux/dysphagia questionnaires were gathered; imaging studies, operative data, complications and follow-up were reviewed. results: from march of until the present, patients with esophageal diverticula were treated using the "myotomy first" approach. intraoperative esophagoscopy was done to internally visualize the elimination of the inciting spastic esophageal muscle. preoperatively, all patients complained of regurgitation, followed by dysphagia in ( %) and weight loss in ( %). postoperatively, dysphagia and weight loss resolved in all subjects. regurgitation symptoms resolved in ( %) patients. the average size of the diverticula was . cm (range, - cm). postoperative esophagrams showed persistent diverticula; however, most had decreased in size. there were no perioperative complications, average length of stay was . days, and there were no icu admissions or returns to the or. the average length of follow-up was days; all patients reported being satisfied with their results, and none has yet elected to pursue diverticulectomy. discussion: a "myotomy first" approach resulted in excellent short-term symptomatic control. none of the patients have retained or re-experienced symptoms of diverticular retention warranting surgical intervention. in the age of laparoscopic surgery, esophageal epiphrenic diverticulectomy should be staged.
this stepwise approach seeks to confirm surgical necessity before undertaking a morbid procedure. background: the two-stage oesophagectomy (ivor-lewis procedure) remains the mainstay of curative surgery for oesophageal cancers in the uk. gastro-oesophageal anastomotic leak is a potentially devastating complication of this procedure, affecting perioperative morbidity and mortality. although leak rates have improved over the years, they remain widely variable. intraoperative reinforcement of the gastro-oesophageal anastomosis with an 'omental wrap' has been proposed as a measure to reduce anastomotic leak rates, and there are some data to suggest that this additional technique reduces anastomotic leak. we reviewed our single-institution data to assess whether the omental wrap indeed had a 'cocoon' effect in maturing the anastomosis and reducing leak rates. methods: data for all cancer oesophagectomies (ilog) performed in our institute since april - were retrospectively analysed from a prospectively maintained database. the patients were categorised into two groups. masafumi ohira; department of gastroenterological surgery, hokkaido university graduate school of medicine background: in laparoscopic surgery, both surgical technique and adequate support and traction by an assistant are highly important. this study assessed the impact of the first assistant on short-term outcomes of laparoscopic distal gastrectomy (ldg) and laparoscope-assisted distal gastrectomy (ladg). methods: patients who underwent ldg or ladg for gastric cancer at our hospital between november and august were included. ldg and ladg cases with billroth i reconstruction, performed by a single surgeon accredited in endoscopic procedures, were analyzed. the cases were categorized into the following groups according to the first assistant's postgraduate years (pgy) of experience: group a, - years; group b, - years; group c, - years; and group d, > years. short-term outcomes were compared between the groups. results: we examined cases. operative time was significantly longer in group a than in group b (p= . ). no significant differences in operative time were found between groups b, c, and d. the cases were recategorized into groups as follows: group a, the young assistant group (group y, n= ), and groups b, c, and d, the senior assistant group (group s, n= ). significant differences in operative time and method of anastomosis (circular stapler or delta anastomosis) were observed between the groups (p= . and p= . , respectively), but no significant differences in complication rates were found (p= . ). unadjusted analysis revealed that group, method of anastomosis, and body mass index (bmi) were significant factors associated with longer operative time. multivariate linear regression analysis with stepwise model selection using akaike's information criterion (aic) revealed that bmi and group were significant factors associated with longer operative time (p= . and p= . , respectively). multivariate analysis using these variables and the method of anastomosis confirmed the significance of bmi and group for longer operative time, but no significance was found for the method of anastomosis (p= . , p= . , and p= . , respectively). conclusions: our study showed that operative time tended to be longer when the first assistant had less than pgy of experience, but morbidity did not increase. as with the operator, the first assistant needs adequate training to ensure a smooth operation.
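a minimal sketch of aic-guided model selection for operative time, as described above; for a small candidate set an exhaustive search is equivalent to stepwise selection, and the file and variable names here are hypothetical.

import itertools
import pandas as pd
import statsmodels.formula.api as smf

# hypothetical dataset: operative time in minutes plus candidate predictors
df = pd.read_csv("ldg_cases.csv")  # assumed file
candidates = ["bmi", "assistant_group", "anastomosis_method"]

# search predictor subsets and keep the model with the lowest aic
best_aic, best_fit = float("inf"), None
for k in range(1, len(candidates) + 1):
    for subset in itertools.combinations(candidates, k):
        formula = "operative_time ~ " + " + ".join(subset)
        fit = smf.ols(formula, data=df).fit()
        if fit.aic < best_aic:
            best_aic, best_fit = fit.aic, fit
print(best_fit.model.formula, round(best_aic, 1))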
steven g leeds, md, marc ward, md, brittany buckmaster, pa, estrellita ontiveros, ms; baylor university medical center at dallas background: gastric contents can reach beyond the esophagus into the larynx and pharynx, causing an increasingly prevalent disease called laryngopharyngeal reflux (lpr). magnetic sphincter augmentation (msa) has been used as an alternative treatment for gerd with good success, but there are no data to support its use in lpr. methods: forty-five patients with msa implanted for symptomatic relief of both gerd and lpr were examined. all patients experienced at least one typical gerd symptom as well as at least one extra-esophageal symptom. this was assessed using the gerd-hrql, comprising questions each graded - , and the reflux symptom index (rsi), comprising questions each graded - . patients filled out questionnaires preoperatively, one month postoperatively (early follow-up), and at months to year postoperatively (late follow-up). the responses on the gerd-hrql were clustered into questions inquiring about heartburn ( ), dysphagia ( ), and regurgitation ( ). like all surgical fields, there is a push towards standardization of the post-operative course while maintaining safe practices. other surgical fields have streamlined recovery processes in an effort to standardize care and minimize costs. laparoscopic hiatal hernia repair is a complex procedure, but with experience and a team approach this operation can become a streamlined process. methods: a retrospective review was done of over laparoscopic hiatal hernia repairs at a single institution. aspects of post-operative care such as hospital floor, nursing ratio utilized, pain medication, diet advancement, use of foley catheters and length of hospital stay were tracked. statistical analysis was done to compare utilization of resources over the years, along with complications and readmissions. results: a total of hiatal hernia repairs were performed between and . improvements were noted in nearly every field over time, including faster foley removal, decreased length of hospital stay, decreased use of patient-controlled analgesics (pcas) and faster advancement of diet. furthermore, these patients are now treated on a surgical floor rather than the intensive care unit or a step-down unit with a higher nurse-to-patient ratio, decreasing hospital cost. there were no changes in complications, reoperations or readmissions over the course of the study. conclusions: cost, length of stay and so-called "advanced recovery pathways" are all the rage in the surgical literature. any time a procedure and its post-operative course can become less of a "major undertaking" and more routine, the more streamlined it becomes. this comes from making a standard protocol that de-escalates treatment based on what is actually needed. nearly every aspect of post-operative care was simplified; length of stay and cost to the hospital were decreased, while no additional complications or readmissions were accrued. the foundation of a formalized advanced recovery pathway will be built on the factors studied here. background: the obesity epidemic continues to worsen. bariatric surgery remains the most effective way to achieve weight loss and resolution of comorbidities. laparoscopic sleeve gastrectomy has become the most common bariatric operation due to excellent efficacy and low morbidity and mortality.
the most common complication of sleeve gastrectomy is gastroesophageal reflux disease (gerd), which can adversely impact quality of life and lead to additional esophageal complications. recently, esophageal magnetic sphincter augmentation (linx®) has become an acceptable alternative to fundoplication for certain patients with gerd. the use of linx® in patients who previously underwent laparoscopic sleeve gastrectomy was described in a case series in . the known complications of these devices include dysphagia, need for endoscopic dilation, and device erosion. the complication profile of linx® in the setting of sleeve gastrectomy has not been reported heretofore. methods: we present a case of a patient with prior sleeve gastrectomy who received a linx® device one year after her bariatric operation due to severe gerd refractory to medical management. initial evaluation demonstrated a hypotensive lower esophageal sphincter and a hiatal hernia, but no evidence of stricture or twisting. soon after linx® implantation, the patient developed progressive dysphagia and worsened reflux. repeat evaluation showed esophagitis, a moderate stricture with angulation at the incisura, and a large amount of retained food. discussion: the patient was recommended conversion to roux-en-y gastric bypass, but was deemed a poor candidate due to heavy smoking. thus, laparoscopic removal of the linx® device was performed with hiatal hernia repair and gastric stricturoplasty. post-operative fluoroscopic evaluation revealed improvement in the stricture but persistent gastroesophageal reflux. the patient experienced a significant improvement in her symptoms of dysphagia, nausea, and vomiting. however, once smoking cessation is achieved, she may still need conversion to roux-en-y gastric bypass in order to address persistent gerd. conclusion: conversion to roux-en-y gastric bypass remains the standard approach to treatment of gerd after sleeve gastrectomy. new approaches to this problem, including placement of linx®, are promising but have not been evaluated for long-term safety and efficacy in the setting of prior bariatric surgery. careful diagnostic evaluation prior to placement of a magnetic sphincter augmentation device should be routinely undertaken. postoperatively, close long-term follow-up is imperative, particularly in patients with prior sleeve gastrectomy. the presence of linx® in a patient with prior bariatric surgery may lead to worsening symptoms if complications of the initial operation are present. kazuto tsuboi, md, nobuo omura, md, fumiaki yano, md, masato hoshino, md, se-ryung yamamoto, shunsuke akimoto, md, takahiro masuda, hideyuki kashiwagi, md, norio mitsumori, md, katsuhiko yanaga, md; fuji city general hospital, shizuoka, japan; nishisaitama-chuo national hospital, saitama, japan; the jikei university school of medicine, tokyo, japan background: esophageal achalasia is one of the primary esophageal motility disorders, and patients suffer from dysphagia, vomiting and chest pain. timed barium esophagogram (tbe) is a convenient method to assess esophageal clearance, which we usually perform before and after surgery. meanwhile, the laparoscopic heller-dor operation (lhd) is considered worldwide to be the gold standard for the surgical management of esophageal achalasia. the aim of this study was to examine the effect of the preoperative clearance rate at the lower part of the esophagus on the surgical outcomes of patients with esophageal achalasia.
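group assignment by preoperative tbe clearance rate reduces to a simple threshold rule; a minimal sketch in which both cut-offs are placeholders, because the abstract's percentages did not survive extraction.

# assign tbe clearance groups; both cut-offs are hypothetical placeholders
LOW_CUTOFF = 25.0   # assumed, not from the abstract
HIGH_CUTOFF = 50.0  # assumed, not from the abstract

def clearance_group(clearance_rate_pct):
    if clearance_rate_pct < LOW_CUTOFF:
        return "a"
    if clearance_rate_pct < HIGH_CUTOFF:
        return "b"
    return "c"

print([clearance_group(r) for r in (10.0, 30.0, 80.0)])  # ['a', 'b', 'c']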
patients and method: between august and april , patients who underwent lhd at our institution were extracted from the database. of these, patients met our inclusion criteria: lhd as the initial operation, with complete evaluation of preoperative esophageal clearance by tbe. these patients were divided into three groups by the degree of esophageal clearance (group a: clearance rate < %; group b: % ≤ clearance rate < %; group c: clearance rate ≥ %). patients' backgrounds, pre- and post-operative symptom scores, and surgical results were compared. before and after surgery, a standardized questionnaire was used to assess the frequency and severity of symptoms (dysphagia, vomiting, chest pain and heartburn). satisfaction with the operation was also evaluated using the standardized questionnaire. statistical analysis was performed using the kruskal-wallis test or chi-square test, and a p-value less than . was considered statistically significant. results: the mean age was . years and of the patients were male ( . %). one hundred and sixty-eight patients ( . %) were in group a, ( . %) in group b, and ( . %) in group c. the maximum width of the esophagus in group c was smaller than that in the other groups (p= . ). as to the pre-operative symptom scores, the frequency score of dysphagia was significantly lower in group c (p= . ), whereas the severity score of chest pain was significantly higher in group c (p= . ). surgical outcomes, including the incidence of mucosal injury, did not differ among the groups. moreover, patient satisfaction with lhd was excellent regardless of preoperative esophageal clearance. conclusion: the preoperative clearance rate at the lower part of the esophagus in patients with esophageal achalasia did not affect the surgical outcomes of lhd, but the preoperative symptom profile of patients with poor esophageal clearance was characterized by low dysphagia and high chest pain scores. a seromuscular double flap ( . cm × . cm) was made by dissecting between the submucosal and muscular layers at the anterior remnant gastric wall. after creation of the double flap, the posterior esophageal wall ( cm from the edge) and the anterior gastric wall (the superior edge of the mucosal window) were sutured for fixation; the mucosal window was opened . cm from its inferior edge, and the wall of the esophageal edge and the opening of the remnant gastric mucosa were sutured continuously. the anastomosis was fully covered by the seromuscular flaps with suturing. in latg, roux-en-y reconstruction was performed through a small incision using a circular stapler. introduction: the purpose of this study was to clarify the long-term and short-term outcomes of consecutive patients who underwent thoracoscopic esophagectomy in the prone position using a preceding anterior approach for the resection of esophageal cancer at a single institution. this method was established to make esophagectomy easier to perform and to achieve better outcomes in terms of safety and curability. methods and procedures: we retrospectively reviewed a database of patients with thoracic esophageal cancer who had undergone thoracoscopic esophagectomy (te, patients) or esophagectomy through thoracotomy (oe, patients) between january and august . to compare the long-term outcomes of te and oe, we used propensity score matching and kaplan-meier survival analysis.
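a minimal sketch of the matching-plus-survival workflow named above, with statsmodels for the propensity model and lifelines for the kaplan-meier estimate; all column names are hypothetical, and the greedy nearest-neighbour match (with replacement) is a simplification.

import pandas as pd
import statsmodels.api as sm
from lifelines import KaplanMeierFitter

df = pd.read_csv("esophagectomy_cohort.csv")  # assumed file
covariates = ["age", "sex", "stage"]  # assumed numeric codings

# 1) propensity score: probability of undergoing te given baseline covariates
X = sm.add_constant(df[covariates])
df["ps"] = sm.Logit(df["te"], X).fit(disp=0).predict(X)

# 2) 1:1 nearest-neighbour matching on the propensity score (with replacement)
treated = df[df["te"] == 1].sort_values("ps")
controls = df[df["te"] == 0].sort_values("ps")
pairs = pd.merge_asof(treated, controls, on="ps",
                      suffixes=("_te", "_oe"), direction="nearest")

# 3) kaplan-meier survival in the matched te arm
km = KaplanMeierFitter()
km.fit(pairs["survival_months_te"], event_observed=pairs["death_te"], label="te")
print(km.median_survival_time_)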
to analyze the short-term outcomes of te, patients were chronologically divided into three groups: a first period group ( patients), a second period group ( patients), and a third period group ( patients). as for the thoracoscopic procedure, the esophagus was mobilized from the anterior structures during the first step and from the posterior structures during the second step. the lymph nodes around the esophagus were also dissected anteriorly and posteriorly. the intraoperative factors, the number of dissected lymph nodes, and the incidence of adverse events were compared among the three period groups using a one-way anova or chi-square test. results: one hundred and twenty-three patients from each group, for a total of patients, were selected and paired.

background: it is difficult to anastomose using a circular stapler in the narrow neck field; to overcome this problem, we modified the circular stapling technique for anastomosis. gastric juice reflux is frequently observed at the esophagogastric anastomosis, so we developed, and report here, a trapezoidal tunnel method to reduce the incidence of reflux. ( ) patients: one hundred thirteen patients with esophageal carcinomas ( in the left lateral and in the prone position) underwent vats-e. esophago-gastric anastomosis was performed in cases by modified circular stapling and in cases by the trapezoidal tunnel method. ( ) methods: the patients are first fixed in a semi-prone position, and esophagectomy is performed in the prone position, which can be set by rotation; ports are used at the intercostal spaces (ics). esophagectomy and the lymph node dissection are performed with pneumothorax maintained by co2 insufflation. esophago-gastric anastomosis is performed as follows. i) trapezoidal tunnel method: the sero-muscular layer of the anterior wall near the top of the gastric conduit is peeled from the submucosal layer after a parallel horizontal incision of the sero-muscular layer, creating a trapezoidal sero-muscular tunnel. the edge of the proximal esophagus is drawn into the tunnel and an esophago-gastric submucosal anastomosis is performed; to wrap the anastomosis, the distal side of the parallel incision line is closed. ii) modified circular stapling: the circular stapler is first introduced into the gastric conduit, joined to the anvil, and partially closed; the joined anvil is then placed into the proximal esophagus and secured by means of a purse-string suture. the gastric conduit opening is closed with a linear stapler.

purpose: mesh utilization and its impact on postoperative hernia recurrence following paraesophageal hernia repair remain a polarizing topic. this analysis evaluates recent trends in laparoscopic paraesophageal hernia repairs and analyzes the impact of operative time on postoperative morbidity. methods: the - acs-nsqip database was queried for the primary cpt code for laparoscopic paraesophageal hernia repair with and without mesh ( / ). only elective cases performed by a general surgeon were included. operative time was grouped into quartiles ( - , - , - , - min) and statistical analysis was performed using univariate anova with post-hoc testing and multivariate regression modeling controlling for age, diabetes, renal disease and weight loss. the analysis was powered to detect a greater than % difference in outcomes based on mesh utilization. the outcomes of interest were composite morbidity scores and readmission rates within days of surgery. results: the database identified a cohort of , laparoscopic paraesophageal hernia repairs performed between and .
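the quartile grouping and covariate-adjusted regression described in the nsqip methods above can be sketched as follows; the dataframe columns are hypothetical stand-ins for the acs-nsqip fields, and statsmodels is an assumption, since the abstract does not name its software.

    # sketch: operative-time quartiles plus a multivariable logistic
    # model for composite morbidity, adjusting for the covariates
    # named in the abstract. column names are illustrative only.
    import pandas as pd
    import statsmodels.formula.api as smf

    def op_time_quartile_model(df):
        # split operative time into quartiles (q1 = shortest cases)
        df["op_quartile"] = pd.qcut(df["op_time_min"], q=4,
                                    labels=["q1", "q2", "q3", "q4"])
        # composite morbidity as a function of operative-time quartile,
        # controlling for age, diabetes, renal disease and weight loss
        fit = smf.logit("morbidity ~ C(op_quartile) + age + diabetes"
                        " + renal_disease + weight_loss", data=df).fit()
        return fit  # odds ratios are exp(fit.params)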
average patient age was years and average patient body mass index was . mesh was utilized in % of cases per year and did not change over the study period (p= . ); however, mesh utilization was %, %, %, and % within operative time quartiles - , respectively (p < . ). postoperative morbidity and readmission rates for each operative time quartile were . %, . %, . %, and . % (p < . ) and . %, %, . %, and . % (p= . ), respectively. post-hoc testing indicated statistically significant differences in postoperative morbidity and readmission rates between quartiles and / . multivariate regression analysis documented operative time as a risk factor for postoperative morbidity and readmission, even after controlling for covariates. mesh utilization was significant only for a reduction in the rate of venous thromboembolic complications (or . , p= . ) and did not impact other morbidities or readmission rates. conclusion: this analysis suggests that patients with longer operative times have increased postoperative morbidity and readmission, while mesh utilization does not impact postoperative outcomes once the longer operative time of a paraesophageal hernia repair with mesh is accounted for.

introduction: gastroparesis is a chronic gastric motility disorder defined by delayed gastric emptying and symptoms such as nausea, vomiting, bloating and abdominal pain. surgical options for refractory gastroparesis include pyloroplasty, gastric stimulator insertion, and gastrectomy. the palliation from a pyloroplasty and a gastric stimulator may be synergistic; however, concerns remain regarding the possibility of stimulator infection when performing both procedures simultaneously. we present our initial experience of combined laparoscopic pyloroplasty and insertion of a gastric stimulator. methods: gastroparesis patients diagnosed by solid gastric scintigraphy or endoscopic evidence of retained food after prolonged npo status who underwent combined laparoscopic heineke-mikulicz pyloroplasty and gastric stimulator insertion between july and july were reviewed. patient demographics, pre- and post-operative symptom scores and outcomes were collected. results were analyzed using statistical tests as appropriate; p values < . were considered significant. results: seven patients underwent simultaneous pyloroplasty and gastric stimulator insertion. six patients ( %) were idiopathic and one patient ( %) was diabetic. one patient was male and six patients were female.

charleen yeo, enming yong, danson yeo, kaushal sanghvi, aaryan koura, jaideepraj rao, myint oo aung; tan tock seng hospital

introduction: gastric cancer is one of the most common cancers in the asian population, with recent literature supporting the laparoscopic approach in early disease. however, the minimally invasive approach in advanced disease is still controversial, and the outcomes of laparoscopic gastrectomy in the elderly have not been extensively studied. we aim to evaluate our institution's short-term outcomes of laparoscopic versus open gastrectomy for gastric cancer, with particular focus on advanced disease and elderly patients. methodology: we prospectively collected the data of all patients who underwent gastrectomies for stomach cancer from to . all patients underwent a partial or total gastrectomy with d lymphadenectomy. the choice of open or laparoscopic approach was made jointly by surgeon and patient. we excluded patients who underwent palliative resection. all patients were followed up for at least one year post-operatively.
introduction: it was an eye-opener when the lancet brought attention to global surgery. it is estimated that deaths due to lack of access to surgery far outnumber deaths due to malaria, tuberculosis and hiv/aids put together, and there is a particular need to stress this in developing countries. medical schools have a responsibility to enlighten students about this necessity and arouse interest in the concept of global surgery. today's students and surgical residents are a great resource for solving this major problem, and the first step would be to educate surgical residents. we therefore need to assess the existing awareness of global surgery among surgical residents so that a program to train the next generation of surgeons can be planned. methods and procedure: all the surgical residents in our institution (victoria hospital, bangalore, india), a total of residents, were enrolled in this study. a multiple-choice questionnaire regarding global surgery was designed, and the completed questionnaires were analyzed to assess the depth of knowledge about global surgery. there were multiple-choice questions (mcq), and an option was provided at the end for feedback and suggestions on improving global surgery in our country. each question carried one mark. a score of more than was the cutoff for a pass, and those students were termed 'informed'. results: ( . %) students cleared the cutoff score of and were termed 'informed'; among this group, ( %) residents scored marks. ( . %) students did not cross the cutoff and were termed 'non-informed'; among these, ( . %) students scored marks and knew nothing about the topic. students provided relevant suggestions and opinions on improving global surgery. conclusion: there is a great lacuna in knowledge about global surgery among surgical residents. we need to plan a program integrating global surgery into the syllabus of surgical training. awareness among residents would arouse interest and participation in the future.

introduction: minimally invasive surgical techniques (mists) could have tremendous applications and benefits in resource-poor environments. these include, but are not limited to, short hospital stay, reduced cost of care, and reduced morbidity, especially related to post-operative infections. there is growing interest in mists in most low- and middle-income countries (lmics), but their adoption has remained limited, largely due to the high cost of the initial set-up, lack of technological backup and limited access to training, among other factors. one of the most limiting factors is the maintenance of the vision system; an affordable laparoscopic set-up, for example, would therefore go a long way in improving access to mists. methods and procedures: a common zero-degree mm scope is attached to the camera of a low-price smartphone (samsung galaxy j , samsung®, seoul, south korea). two elastic bands are used to fix the scope right in front of the main camera of the smartphone. the device is covered with sterile transparent drapes (tegaderm®, 3m, st. paul, mn, usa). a light source is connected with a fiber-optic cable for endoscopic use. the image can be seen in real time on a common tv screen through an hdmi connection to the smartphone, with a sterile drape. holding the vision system by the scope keeps the camera in place without issues. to operate in full screen, the image was digitally zoomed to . without losing quality (which depends more on the intensity of the light).
as a collateral project, we built a low-cost simulator training box with the same camera to train surgeons, obtaining a high-fidelity and affordable simulation setting. results: we were able to perform the tasks of the fundamentals of laparoscopic surgery curriculum using our vision system with proficiency. in a pig model, we performed a tubal ligation to simulate an appendectomy, and we were able to perform basic laparoscopic suturing. no major issues were encountered, and only small adjustments were required to obtain an acceptable, stable and clear view. conclusion: there is growing interest in minimally invasive surgery among surgeons in lmics, but its adoption has remained limited for reasons such as the high cost of the initial set-up, lack of technological backup and limited access to training, among others. an affordable laparoscopic camera system will therefore go a long way in improving access to mis in such settings.

open. there were no deaths or bile duct injuries in our series. two patients undergoing the laparoscopic approach were converted to open ( . %). complications, los, and gender were similar between the two groups. the laparoscopic group was significantly younger and had a significantly longer operative duration (table). long-term outcomes were not available for analysis. laparoscopic and open cholecystectomy appear safe in the setting of short-term surgical missions. neither group suffered major complications, and both had similar immediate outcomes. los for both groups was surprisingly similar and shorter than in larger series, which may possibly be due to patient selection. given similar immediate outcomes and the large burden of disease, the open approach should be considered. however, this may come at the cost of greater pain or longer recovery time for patients, which may outweigh the benefits. further data are needed to study pain, long-term outcomes, and return to work.

introduction: minimally invasive surgery relies on optimal camera control for the successful execution of operations. one disadvantage of laparoscopic surgery is that camera control depends on a surgical assistant's interpretation of visual cues and ability to predict the next field of focus, in addition to verbal commands from the operating physician, to provide the optimal view. robot-assisted minimally invasive surgery gives the operating surgeon the advantage of dictating their own field of view. this study uses a video processing algorithm to determine the incidence of an improperly centered field of view in laparoscopic vs. robot-assisted surgery. methods: recordings of minimally invasive resections of rectal cancer ( laparoscopic and robot-assisted surgery) were evaluated. recordings were processed in matlab® to generate single frames at each -second interval. a single reviewer indicated the pixel where the camera should ideally have been centered, based on the positioning of instruments, the current action (dissection/hemostasis/traction) depicted in the frame, and previous review of the recordings. pixel locations were recorded for subsequent analysis. centered views were defined as those in which the identified center-position pixel lay within the center quadrant when frames were split into a uniform grid. in addition, the distance of each point to the absolute center of the frame was calculated from the pixel's x and y positions.
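the centering metric just described can be sketched in a few lines; a 3x3 grid (whose middle cell plays the role of the "center quadrant") is an assumption, since the abstract says only "uniform grid".

    # sketch: percent of reviewer-chosen pixels falling in the middle
    # cell of a uniform 3x3 grid, plus mean euclidean distance from
    # each chosen pixel to the absolute frame center.
    import numpy as np

    def centering_metrics(points, width, height):
        pts = np.asarray(points, dtype=float)          # shape (n, 2): x, y
        in_x = (pts[:, 0] >= width / 3) & (pts[:, 0] < 2 * width / 3)
        in_y = (pts[:, 1] >= height / 3) & (pts[:, 1] < 2 * height / 3)
        centered_pct = 100.0 * np.mean(in_x & in_y)
        center = np.array([width / 2, height / 2])
        dist = np.linalg.norm(pts - center, axis=1)
        return centered_pct, dist.mean()

    # example with two annotated frames of a 1920x1080 recording
    pct, mean_d = centering_metrics([(960, 540), (1700, 900)], 1920, 1080)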
results: individual operation data were analyzed for the percentage of centered pixel locations and the pixel distance from the center pixel of the frame. robot-assisted surgery demonstrated a higher percentage of centered views than laparoscopic surgery ( . ± . vs. . ± . ; p < . ). robot-assisted surgery also demonstrated shorter distances to the frame center than laparoscopic surgery ( . ± . vs. . ± . ; p < . ). conclusion: robot-assisted surgery aims to resolve the conflicts of cooperation that occur between surgeon and assistant in laparoscopic surgery by enabling manual visual control of the operative field by the operating surgeon. this study demonstrates that by eliminating such conflicts, an optimal surgical view is more frequently obtained.

background/objective: valveless laparoscopic insufflator systems are marketed for their ability to prevent abdominal collapse and desufflation during laparoscopy. however, community surgeons have raised concern about possible entrainment of room air, including oxygen (o2), with these systems. this study seeks to quantify o2 and non-medical air entrainment by a laparoscopic valveless cannula system to understand the risk of intraoperative air embolism. a community-university collaborative was created to design a model and test this hypothesis. methods: an artificial abdomen was developed and calibrated to the equivalent compliance and intraoperative volume of an average adult abdomen. it was connected to a flow meter, an oxygen concentration sensor, and a commercially available laparoscopic valveless cannula system.

background: further advance of near-infrared (nir) imaging capability into greater clinical usefulness will be helped by the development of new targetable agents. to avoid issues related to dose timing and contamination, compounds that become fluorescent only at the site being targeted would be a significant advance. here we build on earlier laboratory work to show the step-wise advance of such an agent towards clinical trialling. methods: a novel agent (nir-aza) was tested in ex vivo colorectal specimens using two commercially available systems to determine its characteristics in biological tissue. it was then trialled in a large animal cohort (n= ) to determine its performance for both intestinal perfusion assessment and lymph node mapping (both stomach and colon), again using a commercially available optical imaging system and including a direct comparison with indocyanine green. results: the novel agent was easily detectable in biological tissue at the near-infrared wavelengths relevant to commercial instrumentation, both as a local depot tattoo and as a lymphatic tracing agent. porcine model trialling again showed excellent detection and tracking characteristics both in the circulation and in gastrointestinal tissue, with clear tracking to the relevant lymph nodes within minutes in the latter. while these studies were non-survival, there was no evidence of local tissue or systemic toxicity in any case. direct qualitative and quantitative comparison between in situ nir-aza and icg at both intestinal and lymph basin regions showed similar levels of fluorescence. conclusion: the trial compound underwent successful testing, indicating proof of its earlier projected potential. this is encouraging for further work to advance to first-in-human testing.
introduction: enhanced imaging systems (ies) have been developed to alter laparoscopic camera output to facilitate visualization during laparoscopic surgery using several novel imaging modes: clara mode reduces overexposure and reflections while brightening darker areas of the image; chroma mode intensifies color contrast to more clearly delineate blood vessels; and a combined chroma-clara mode is also available. the ies also allows the surgeon to change imaging modes throughout the procedure as needed to facilitate different portions of the operation. we hypothesized that this technology would enhance visualization of critical structures during laparoscopic cholecystectomy (lc) compared to standard laparoscopic imaging. methods: videos and still images from an ies (karl storz endoscopy) were assessed in patients undergoing lc using the four imaging modalities. three time points were assessed: ) after adhesions were taken down but before any other dissection; ) after partial dissection of the hepatocystic triangle; and ) after establishment of the critical view of safety (cvs). seven surgeons blinded to the imaging modalities ranked each modality from (best) to (worst) for each of the time points ( dissection points for cases). structures identified on achievement of the cvs were also analyzed. all statistics were performed using spss. rank data were analyzed with the friedman and wilcoxon signed-rank tests. results: the median ranks of the chroma and chroma-clara imaging modalities (median [iqr] ( - ) vs ( - ), p= . ) were not significantly different from each other, but both ranked significantly higher than the clara and standard modalities (median rank [iqr] ( - ) and ( - ), respectively, p < . ). individual surgeon preferences varied; four surgeons preferred chroma-clara, two preferred chroma, one preferred clara, and none preferred the standard mode. in addition, the cystic artery and cystic duct were visible in all cases after achieving the cvs, but the common bile duct was visible in only % of cases. conclusion: enhanced imaging system technology provides modalities that were significantly preferred over standard laparoscopic imaging on retrospective review of still and video images during lc. enhanced imaging modalities should be evaluated further to assess their impact on outcomes of lc and other laparoscopic procedures.

introduction: cholangiocarcinoma is often diagnosed at an unresectable stage. endoscopic stent placement is generally performed to relieve the tumor-induced biliary obstruction. however, stent misplacement and migration, tumor tissue ingrowth and cholangitis are relatively frequent complications. energy-based techniques (radiofrequency ablation and photodynamic therapy) have been proposed as alternatives or adjuncts to stent placement, with controversial results. the use of laser sources for ablation of the biliary wall has not been investigated so far. this study aims to evaluate the optimal power and exposure time to achieve a controlled circumferential intraluminal laser ablation of the common bile duct (cbd). methods: through a laparotomy access, the cbd of pigs was exposed and a small choledochotomy was made. confocal endomicroscopy (ce) scanning (cellvizio) was performed through the choledochotomy after injection of ml of sodium fluorescein. the . mm diameter circumferentially-emitting diode laser probe ( nm wavelength) was introduced into the cbd. laser ablation was performed at w for s (n= ) or s (n= ).
the power setting was predetermined in preliminary ex-vivo tests on porcine liver specimens. local temperature was monitored through a fiber bragg grating embedded in the laser probe. ce scanning was then repeated. the extent of the ablation was measured on hematoxylin-eosin and nadh-stained slides. results: the diameter of the probe was too small to enable a single-shot circumferential ablation. there were no full-thickness perforations. s after turning the laser on, the temperature at the application site reached a plateau with minimal oscillations, and remained at mean values of . ± . °c during both and min. histology revealed that the mucosal ablation at the contact areas induced consistent cellular necrosis (nadh-negative). ce scanning provided real-time images with a specific appearance of the post-ablation mucosa, including an alteration of the normal glandular structure and a general lack of enhancement. in this experimental trial, the local application of a circumferential laser source induced precise and safe mucosal ablation with a long-standing temperature increase in the cbd. however, an adapted probe better fitting the diameter of the cbd is needed to enable a single-shot circumferential treatment.

goutaro katsuno, md, phd, yasuhiko nakata, md, phd, nobuyuki kubota, md, phd, teruo kaiga, md, phd, takao mamiya, md, masahiro yan, md, naoaki shimamoto, md, shuichi sakamoto, md, phd; department of gastrointestinal and minimally invasive surgery, mitsuwadai general hospital

introduction: recently, major developments in video imaging have been achieved for performing complete mesocolic excisions (cme) and total mesorectal excisions (tme). indocyanine green (icg) fluorescence imaging is already contributing greatly to intraoperative decisions: keeping an intact visceral fascial layer, making suitable mesentery division lines and identifying anastomotic perfusion. the aim of this study is to present our experience with laparoscopic procedures for colo-rectal cancers using icg fluorescence imaging (lap icg-fi). patients and methods: we use near-infrared (nir) laparoscopy (stryker corporation, michigan, usa) for lap icg-fi. [indocyanine green fluorescent imaging] visualization of lymph flow: icg ( . mg/ . ml) was injected into the submucosal layer around the tumor at points with a -gauge localized injection before the lymph node dissection. visualization of blood flow: after complete colorectal mobilization, the mesocolon was completely divided at the planned proximal or distal transection line. indocyanine green was injected intravenously, and the transection location(s) and/or distal rectal stump, if applicable, were re-assessed in fluorescent imaging mode. results: we performed lap icg-fi in cases with colo-rectal cancer. the tumor was located in the rectum in of them, the sigmoid colon in , the transverse colon in , the descending colon in , the ascending colon in , and the cecum in . tnm stage was -i in patients, ii in , iii in , and iv in . the median (range) age of the patients was ( - ) years, with a median (range) bmi of . ( - . ) kg/m2. lymph flow was visualized intraoperatively in patients ( %); however, a high-quality intraoperative icg lymphangiogram was achieved in patients ( %).
in high-quality lymphangiograms, the lymphatic ducts and lymph nodes were clearly visualized in real time, and this proved useful in keeping an intact visceral fascial layer as well as in making a suitable mesentery division line, even in high-bmi patients. a high-quality intraoperative icg angiogram was achieved in all patients, and anastomotic perfusion was satisfactory in all cases. in patients ( . %), the use of nir+icg resulted in revision of the proximal colonic transection point before formation of the anastomosis. there were no postoperative anastomotic leakages, and no injection-related adverse effects were reported. conclusion: lap icg-fi is a simple, safe and useful tool to help us complete lap cme or tme and to check real-time anastomotic tissue perfusion.

introduction: recently, the spread of laparoscopic surgery as a standard treatment and the development of information and communication technology have yielded abundant video data of laparoscopic procedures. these data have accumulated, and we can access them anytime, anywhere; however, how best to use these abundant video data is still unclear. conventionally, surgical procedures have been performed based on the surgeon's subjective decisions and skills, so-called "tacit knowledge". for objective analysis of laparoscopic procedures in video data, automatic recognition of surgical tools and understanding of the surgical workflow must be the first critical step. we used a convolutional neural network (cnn), the current trend in machine learning and computer vision tasks. methods: using a video database of laparoscopic sigmoid colectomy at our institute, we annotated tools and phases in every frame of the operative videos. for tool detection, we annotated bounding boxes for both left and right tools in the videos. phase annotation was performed by watching the videos in consultation with laparoscopic surgeons. the laparoscopic sigmoid colectomy operation passes through phases: placement of ports and preparation; dissection of the retrorectal space; medial approach to the ima; isolation and division of the ima; medial-to-lateral retromesenteric dissection; lateral mobilization of the left colon; rectosigmoid mobilization; division of the mesorectum; rectosigmoid resection and anastomosis; and finishing. we used a cnn architecture to perform surgical tool detection and workflow recognition (an illustrative sketch of a per-frame phase classifier is given below). results: in total, we labeled tools used in the procedures of laparoscopic sigmoid colectomy and successfully developed a cnn-based tool detection system. as for surgical workflow, the average times of phases - were . , . , . , . , . , . , . , . , . , . min, respectively. a cnn-based workflow recognition system was also successfully developed, although we needed to extract pure operating scenes in advance for efficient recognition. we have developed tool detection and phase recognition systems using cnns; we need more datasets to improve their detection ability for future clinical use.

introduction: surgical environments require special aseptic conditions for direct interaction with preoperative images and surgical equipment, which hampers the use of traditional input devices. we presented the feasibility of using a natural user interface (nui) for gesture control combined with voice control to interact directly, in a more intuitive and sterile manner, with preoperative images and integrated operating room (or) functionalities during laparoscopic surgery.
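returning to the cnn-based workflow recognition study above: a minimal, purely illustrative per-frame phase classifier could look like the sketch below. the abstract names a cnn but not its architecture, so this small network, the input size and the use of pytorch are all assumptions.

    # sketch: a tiny cnn mapping one video frame to logits over the
    # 10 operative phases listed in the abstract. illustrative only.
    import torch
    import torch.nn as nn

    class PhaseCNN(nn.Module):
        def __init__(self, n_phases=10):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.AdaptiveAvgPool2d(1),
            )
            self.classifier = nn.Linear(32, n_phases)

        def forward(self, x):              # x: (batch, 3, h, w) frames
            f = self.features(x).flatten(1)
            return self.classifier(f)      # per-phase logits

    model = PhaseCNN()
    logits = model(torch.randn(1, 3, 224, 224))
    phase = logits.argmax(dim=1)           # predicted phase index 0-9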
in this study, we assessed the efficiency and face validity of using this nui for medical image navigation and remote control during the performance of a set of basic tasks in the or. methods and procedures: twenty experienced laparoscopic surgeons participated in this study. they performed basic tasks in the or focused on interaction with a medical image viewer (osirix; pixmeo) and with the functionalities of the integrated or (or ; karl storz). these tasks were carried out by means of traditional manual interaction, using a computer keyboard, mouse and touch screen, and using a gesture control sensor (myo armband) in combination with voice commands. this nui is controlled by the tedcube system (tedcas medical systems). the time required to complete the tasks using each interaction method was recorded. at the end of the tasks, participants completed a questionnaire for face validation and usability assessment. results: use of the nui required significantly less time than conventional manual control to display preoperative studies and information for surgical support. however, interaction with the medical image viewer was significantly faster using the traditional input devices. participants evaluated the nui as an intuitive, simple and versatile tool that improves sterility during surgical activity. seventy-five percent of the participants would choose the gesture control system as a method of interacting with the patient's preoperative information during surgery. conclusions: the presented gesture control system allows surgeons to interact directly with preoperative imaging studies and the functionalities of an integrated or during surgery while maintaining aseptic conditions. for traditional manual interaction, the possible reaction and displacement times of the technician executing the surgeon's requests must be taken into account. a more personalized medical image viewer, with tighter integration with the capabilities of the presented gesture control system, is required.

emma k gibson, bs, jacqueline j blank, md, timothy j ridolfi

introduction: following a generous left hemicolectomy, an anastomosis between the transverse colon and rectum may be required. extensive mobilization and retroileal routing are sometimes necessary to create a tension-free anastomosis. retroileal routing is a technique in which a window is created in the ileocolic mesentery; the colon is routed through this window, beneath the ileum, prior to entering the pelvis. retroileal routing is uncommon, and there are no data on this technique when performed using a hand-assisted laparoscopic approach. the aim of this study was to review our experience with hand-assisted laparoscopic left-sided colon resections, including retroileal routing of the proximal colon to the rectum. methods and procedures: we performed a retrospective review of a single surgeon's experience with hand-assisted laparoscopic left-sided resections over a seven-year period from - . indication for operation, basic demographics, bmi, procedure time, short- and long-term morbidity, and mortality were recorded. results: a total of patients underwent a hand-assisted laparoscopic left-sided resection with a colorectal or coloanal anastomosis. of these, underwent hand-assisted laparoscopic procedures with retroileal routing of the proximal colon. in each case, operations included a midline hand-port incision and two mm ports in the lower abdomen.
the indications for operation were diverticular disease and neoplasm in nine and four patients, respectively. procedures took an average of . ( - ) minutes to complete. postoperative morbidity included intubation for co2 retention in one patient and an rll effusion in another. there were no anastomotic leaks, and there were no -day or -day mortalities. conclusion: retroileal routing of the colon following left hemicolectomy occurs infrequently. a hand-assisted laparoscopic approach appears to be safe and efficient in these technically challenging cases.

objective: approximation of the diaphragmatic crus pillars is a key step in hiatal hernia repair. the dogma of successful hernia repair requires tension-free approximation of tissue, yet no techniques have been described to measure tension across the crus closure. the aim of this study is to describe a novel technique for measuring the tension exerted on crural sutures and to report initial findings. methods: data were collected at institutions by the same surgeon. after hiatal dissection was complete, the crus defect was measured in both the antero-posterior and transverse dimensions. the crus closure sutures were placed posterior and then lateral to the esophagus: the initial suture is placed posteriorly in figure-of-eight fashion (# ), with each subsequent stitch placed anteriorly (# and # ) or laterally (l , l ) until adequate hiatal closure is achieved. we measured tension on each suture placed as follows.

conclusions: the autolap system provides improved image stability, staff interactions, and enhanced ergonomic comfort for the surgical team. it also offers cost savings from decreased staffing requirements for hospitals that routinely use staff camera holders. the system set-up time of - min became less variable after cases, representing the learning curve. in addition, our approach identified problems with the system that require improvement by the manufacturer. notably, we identified significant ergonomic problems for human camera holders, which have been previously described and can be addressed by this device.

background: gastric leaks continue to be a troubling predicament for physicians and patients alike, and are especially concerning after bariatric surgery. electrolyte abnormalities and dehydration continually pose a life-threatening problem in these patients. methods: this is an irb-approved retrospective review of our experience with a biologic tissue mesh plug closure of gastric leaks. our interventional radiology colleagues percutaneously accessed the perigastric collection with a wire, and a straight catheter was guided through the gastric wall defect and advanced over the wire until it was intraluminal. the surgeon then passed an endoscope down to the level of the gastric defect, and the wire was retrieved with the endoscope, achieving percutaneo-oral wire access. the biologic tissue matrix was then measured, cut to a square, and inverted into a cone-like structure with a flat straight piece at the open end. the cone patch was secured to the wire with a braided polyglactin suture loop. the wire was then withdrawn back through the gastric defect, pulling the plug and patch into position, and placement was confirmed by endoscopy. results: we attempted closure of a gastric leak arising after bariatric surgery in six patients. five underwent successful deployment, while in one the plug disconnected prematurely from the wire and could not be deployed.
the five who had successful deployment had immediate success, resuming enteral liquid intake within days, with resolution of the leak. two of the six patients additionally underwent covered stent placement to stent a stenotic area at the incisura angularis, from the esophagus to the antrum; this stent was typically removed - weeks later. there were no complications related to the procedure or the plug. only one patient has undergone repeat endoscopy to evaluate the status of the plug: in that patient, an ulcer at the plug site was visualized one month after the procedure, and three months later endoscopy showed the clean ulcer had shrunk to half its original size. conclusion: this novel minimally invasive technique, utilizing ir and endoscopic placement of a biologic mesh plug into gastric leaks after bariatric surgery, has been highly successful in treating chronic and subacute gastric leaks. we recommend that these endoscopic techniques be used to close gastric defects prior to operative intervention.

introduction: laparoscopic surgery has spread worldwide and become a standard procedure in many abdominal surgical fields. the incidence of postoperative adhesion, a typical postoperative complication, is considered low compared with that after laparotomy, but once complications such as adhesion-induced intestinal obstruction or chronic abdominal pain develop, the benefit of laparoscopic surgery's low invasiveness may decrease markedly. while we have previously used a sheet-type absorbable barrier to prevent adhesion, in many cases it requires technical skill to apply in the abdominal cavity. in this study, we used a spray-type absorbable barrier, which is considered simple to apply, as an adhesion-preventing absorbable barrier following laparoscopic surgery. subjects and methods: a spray-type absorbable barrier for prevention of adhesion (ad spray type l®) was applied to the dissected surface, port region, and beneath the small incised wound in patients who underwent laparoscopic surgery of the large intestine after february . the nozzle is long ( mm in length) and the angle of the tip is adjustable to some extent, so the spray could be applied easily to the target region, even in areas in which it would be difficult to secure a working space, by rotating the shaft and finely adjusting the angle of the tip. in order for the barrier to remain in the target region, the preparation must remain viscous after application. discussion: approaches for the insertion and affixing of a conventional sheet-type absorbable barrier for the prevention of adhesion have been reported previously by various researchers. the adhesion-preventing absorbable barrier used in this study was a spray type with a long nozzle, which may have been useful because it made laparoscopic application easy. however, its application requires some experience and preparation time compared with the sheet type, which could be disadvantageous. further accumulation of cases, including evaluation of adhesion prevention after use of this barrier, may be necessary.

christopher g yheulon, md, priya rajdev, md, s. scott davis, md

introduction: evidence has demonstrated that biosynthetic glue for laparoscopic inguinal hernia repair results in decreased pain. however, the two glue subtypes (biologic: fibrin-based; synthetic: cyanoacrylate-based) have never been compared. this study aims to assess the outcomes of those subtypes.
method and procedures: a systematic review of the medline database was undertaken. randomized trials assessing the outcomes of laparoscopic inguinal hernia repair with penetrating and glue fixation methods were considered for inclusion and data analysis. thirteen trials involving patients were identified, with eight trials utilizing fibrin and five utilizing cyanoacrylate. results: there were no differences in recurrence or wound infection between the glue subtypes when compared individually to penetrating fixation alone or indirectly to each other. there was a significant reduction in urinary retention with fibrin glue when compared to penetrating fixation (or . , % ci . - . ). no studies utilizing cyanoacrylate analyzed urinary retention as an outcome. there were non-significant trends toward a reduction of hematoma and seroma for both glue subtypes when compared to penetrating fixation (or . , % ci . - . ). conclusions: glue fixation in laparoscopic inguinal hernia repair reduces the incidence of urinary retention and may reduce the rate of hematoma or seroma formation. as there are no differences in outcomes when comparing fibrin and cyanoacrylate glue, surgeons should choose the glue that is available at the lowest cost at their respective institution.

however, improvement of the optical system is necessary to further utilize this advantage. we are developing an optical lens system covering the range from macroscopic to microscopic. methods: we developed a handheld prototype combining the objective lens system of an optical microscope and a telescope lens, and conducted a feasibility study in a porcine model. macroscopic observation was done at a distance, followed by microscopic observation in contact with tissue. first, we observed the operative field macroscopically. we then observed the serosa of the small intestine microscopically, and the effects of blood flow occlusion were studied. results (fig. and fig. ): the same visual field as ordinary laparoscopy was achieved during macroscopic observation, while microscopic observation made it possible to observe the complex peristaltic movements of the intestine. the minute blood vessels of the visceral peritoneum and larger, deeper blood vessels were also observed. when the mesenteric vessels were occluded, changes in peristaltic movement were seen directly, and congestion in blood vessels in the deep layers of the serosa was observed. improvement in peristalsis and congestion was confirmed on restoring blood flow. this system enables direct visual observations not possible with conventional optics, and it can be utilized in both laparoscopic and open surgery. the microscopic visual information obtained by this system may help with intra-operative decision making and serve to facilitate safe and precise surgery.

introduction: accurate, real-time visualization is critical for efficient, effective and safe surgery. although optical imaging using near-infrared (nir) fluorescence has been used for visualization of anatomic structures and physiologic functions in open and minimally invasive surgeries, its efficacy and adoption remain suboptimal due to a lack of specificity and sensitivity. herein, we report a novel class of compounds that are exclusively metabolized in the liver or kidney, rapidly excreted into the biliary or urinary systems, and emit two different nir fluorescence spectra.
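as an aside on the glue meta-analysis above: its effect measure is the odds ratio with a % confidence interval. a minimal sketch of that computation from a single trial's 2x2 table is shown below (woolf/log method); the counts are invented placeholders, since the actual digits are not recoverable from the text.

    # sketch: odds ratio and 95% ci from one trial's 2x2 table.
    # a, b = events / non-events with glue fixation;
    # c, d = events / non-events with penetrating fixation.
    import math

    def odds_ratio_ci(a, b, c, d, z=1.96):
        or_ = (a * d) / (b * c)
        se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # se of log(or)
        lo = math.exp(math.log(or_) - z * se)
        hi = math.exp(math.log(or_) + z * se)
        return or_, lo, hi

    # e.g. hypothetical urinary-retention counts
    print(odds_ratio_ci(4, 96, 12, 88))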
methods: novel, water-soluble heptamethine cyanines, compound x (biliary) and compound y (urinary), unreactive towards glutathione and the cellular proteome, were synthesized and visualized using a real-time, dual-color nir imaging device. sprague-dawley rats (n= ) and yorkshire pigs (n= ) were used to demonstrate and validate their usefulness, distributed into a control group (icg; rat n= , irdye cw rat n= ), a biliary group (compound x; rat n= , pig n= ), a urinary group (compound y; rat n= , pig n= ), and a dual-labeling group (compounds x&y; rat n= , pig n= ). each rat and pig received one or two of the compounds intravenously at an optimized dose of . mg/kg; fluorescence signals and bio-distributions were monitored and recorded over time. the target-to-background ratio (tbr) was calculated in each target system and compared to assess sensitivity and specificity. results: compound x was rapidly cleared from the liver within min after intravenous injection, while the fluorescence signals in the biliary system lasted up to h in both rats and pigs. compound y showed significant renal excretion up to h, and the urinary signals remained up to h. both were highly specific to their target organs, with tbr values of . (biliary), . (urinary) and . (cf. icg) at peak signal. these new compounds have approximately - times higher quantum yields than icg and . - . times higher specificity to kidney and liver than irdye cw. one-way anova showed significant differences between the control, biliary, and urinary groups (p < . ). dual-labeling results also showed a complete separation of these two metabolic systems (p= . ), and a real-time display of the two systems was clearly visualized with pseudo-colored labeling inside the animal body. conclusion: we report a new generation of organ-specific, real-time fluorescent markers for intraoperative visualization, navigation and potential geo-fencing. these new compounds have significantly higher quantum yields and higher specificity for visualizing the kidney and/or liver than any currently available reagents.

background: porcine models have been widely accepted for gastrointestinal surgery studies due to their similarities to human anatomy, histology and physiology. devices such as laparoscopic staplers have been widely used in bariatrics and are currently the cornerstone of bariatric surgery. there are currently few published articles regarding surgical stapler testing in porcine models by means of a survival design. the purpose of this study is to present a new model for stapler testing in porcine models. we present the following study, in which we assess a novel stapler's feasibility and safety, and its compatibility with currently used stapler reloads. this novel stapler, the aeon™ endoscopic linear stapler (lexington medical inc., billerica, ma; pending fda approval), has been previously tested in-vitro and in-vivo by the lexington medical engineering department for mechanical function, staple line bursting pressure, staple formation and hemostasis. duffy et al. used this instrument for small bowel anastomoses in a two-week survival study in porcine models. methods and procedures: a four-animal porcine model was used under an iacuc protocol for a -day survival study held at the fiu (doral, fl, usa) research facility. all animals underwent sleeve gastrectomy using the novel stapler handle, combined with endo gia™ (medtronic, mansfield, ma) mm staple reloads in two of the animals and aeon™ mm staple reloads in the remaining two. no reinforcement or oversewing of the staple line was done.
these procedures were performed by two bariatric surgeons. animals were monitored perioperatively by the facility staff as per protocol and were euthanized at day . post-mortem assessments were done blindly: gross evaluation and comparison of the gastric tubes and their staple lines was performed, assessing patency, strictures, and staple line integrity. results: stapler function was equivalent with both reload brands, and no technical issues were encountered. - firings were used per animal. no intraoperative complications related to stapler function ensued, and no postoperative complications were encountered. all animals survived the full length of the study ( days). all sleeves were patent; no strictures or bowel obstructions were present. conclusions: in an animal survival study, a follow-up period of weeks appears to be a good benchmark for stapler testing. the use of the novel stapler for gastric resections appears feasible and safe. further studies, such as microscopic examination of the staple lines, might help confirm the equivalence, safety and feasibility of these products for the sleeve gastrectomy procedure.

jason m samuels, md, peter einersen, md, krzysztof j wikiel, md, heather carmichael, douglas m overby, john t moore, carlton c barnett, thomas n robinson, md, teresa s jones, edward l jones, md; university of colorado denver, denver va medical center

introduction: the purpose of our study was to evaluate the impact of smoke evacuation devices on operating room fires caused by surgical skin preps. surgical fires are rare but preventable events that cause devastating injuries. alcohol-based surgical skin prep serves as the fuel for a fire ignited by electrosurgical instruments. we hypothesized that increasing air exchanges near the tip of the active electrode would reduce the concentration of alcohol, thus reducing the incidence of surgical fires. methods: a standardized, ex vivo model was created with a cm section of clipped porcine skin. the surgical skin preparations tested were % isopropyl alcohol with % chlorhexidine gluconate (chg-ipa) and % isopropyl alcohol with . % iodine povacrylex (iodine-ipa). based upon previous studies, a high-risk situation was replicated with immediate energy activation in the presence of pooled alcohol-based prep. the site was draped to simulate a small surgical procedure with approximately square cm exposed (figure ). a standard and a smoke-evacuating electrosurgical pencil were activated for s on w coagulation mode in the presence of % oxygen. standard wall suction was also tested, with the tip held cm from the tip of the electrosurgical pencil. a chi-square test was used to compare differences between groups (a minimal sketch of this comparison follows below). results: surgical fires were created in % ( / ) of the tests with chg-ipa and % ( / ; p= . ) of the tests with iodine-ipa. continuous wall suction did not change the incidence of fire. the smoke evacuation electrosurgical pencil significantly decreased the incidence of fire when compared to the standard pencil and continuous wall suction for both preparations (table ). with chg-ipa, the smoke evacuation electrosurgical pencil decreased the frequency of fire by % (figure , p < . ). similarly, when using iodine-ipa, the electrosurgical pencil with integrated smoke evacuation demonstrated a % decrease in fires (figure , p < . ). conclusion: alcohol-based skin preps fuel surgical fires. the use of a smoke evacuator electrosurgical pencil reduces the occurrence of surgical fires.
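a minimal sketch of the chi-square comparison named in the methods above, using scipy; the counts are invented placeholders, since the digits are not recoverable from the text.

    # sketch: chi-square test on fire incidence, standard vs
    # smoke-evacuating pencil. placeholder counts for illustration.
    from scipy.stats import chi2_contingency

    # rows: standard pencil, smoke-evacuation pencil
    # cols: fire, no fire
    table = [[27, 3],
             [6, 24]]
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.2f}, p = {p:.4f}")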
elimination of alcohol-based preps and the use of smoke evacuation devices decrease the risk of operating room fires.

brian bassiri-tehrani, md, netanel alper, md, jeffrey s aronoff, md, yaniv larish, md; lenox hill hospital

ureteral stents have historically been used in pelvic surgery when anatomical or clinical considerations warrant urological expertise to aid in identifying the ureters. in the colorectal and gynecologic surgery literature, prophylactic ureteral stents appear to increase the ability to detect ureteral injuries while not being shown to prevent such injuries. with the increasingly widespread use of laparoscopy and the robotic platform in complex colorectal and pelvic surgery, the utility of stents remains unclear. one of the limiting factors regarding the use of ureteral stents in minimally invasive surgery is the lack of tactile feedback: the surgeon cannot directly palpate the stents. one proposed method to overcome this deficiency has been the use of lighted ureteral stents; increased operating time, increased cost, and the need for specialized equipment are potential drawbacks of lighted stents. an alternative to lighted stents in minimally invasive surgery is to inject indocyanine green (icg) directly into the ureters after cystoscopy-guided placement of ureteral stents. intraoperative visualization of the ureters is achieved using either the pinpoint endoscopic fluorescence imaging system in laparoscopy or firefly integrated with the robotic platform. it is hoped that the risk of inadvertent ureteral injuries during colorectal and pelvic operations will be minimized using this technique, due to improved visualization of the ureters throughout the procedure. in this case presentation, we describe a novel use of icg in a patient undergoing laparoscopic surgery for resection of a . x . x . cm pelvic mass abutting the bladder, sigmoid colon and left ureter. preoperatively, there was concern based on ct scan findings that the mass would be intimately adherent to, or even invading, both ureters. after ureteral injection of icg, visualization of both ureters was easily achieved at the time of operation, and the procedure proceeded with careful and safe dissection of the mass with visualization of the ureters at all times. though there is a paucity of studies evaluating the use of icg in the laparoscopic modality, this technique was safe, easy to employ, inexpensive and very useful for visualizing the ureters intraoperatively. indeed, larger studies with appropriate sample sizes would help to further validate this novel use of icg.

university of colorado - denver, va eastern colorado healthcare system

introduction: operating room fires are "never events" that expose the patient to the risk of devastating complications. our group has previously demonstrated that alcohol-based surgical skin preparations fuel operating room fires. manufacturer guidelines recommend a three-minute delay after application of alcohol-based preps to decrease the risk of prep pooling and surgical fires. the purpose of this study was to evaluate the efficacy of the three-minute dry time in reducing the incidence of surgical fires. methods and procedures: a standardized, ex vivo model was used with a cm section of clipped porcine skin. the alcohol-based surgical skin preparations tested were % isopropyl alcohol (ipa) with % chlorhexidine gluconate (chg) and % ipa with . % iodine povacrylex (iodine-ipa).
the nonalcohol-based solutions were % chlorhexidine gluconate and % povidone-iodine "paint". an electrosurgical "bovie" pencil was activated for seconds on watts coagulation mode in % oxygen, both immediately and minutes after skin preparation application, with and without solution pooling. results: no fires occurred with immediate testing of the nonalcohol-based preparations ( / ). alcohol-based preps created flames on immediate testing in % ( / ) of cases when pooling was present; without pooling, flames occurred in % ( / ) of cases on immediate testing. after a -minute delay, there was no difference in the incidence of fire when pooling was present ( / vs. / , p > . ). similarly, there was no difference when pooling was not present ( / vs. / , p= ) (table ). conclusions: alcohol-based surgical skin preparations fuel surgical fires. waiting minutes for the surgical skin prep to dry did not change the incidence of surgical fire, regardless of whether the prep solution pooled. the use of nonalcohol-based skin preps eliminated the risk of fire.

introduction: laparoscopic port sites are associated with a significant incidence of long-term hernia formation. in addition, closure with closed-loop suture may lead to increased post-operative pain, thereby limiting patient mobility. the development of novel trocar closure systems could offer a pathway towards quality improvement and warrants investigation. we performed a randomized controlled trial comparing a novel anchor-based system (neoclose®) versus standard suture closure. methods: a prospective randomized controlled trial of patients undergoing port-site closure following robotic-assisted laparoscopic sleeve gastrectomy or gastric bypass was completed ( with the neoclose® device and with standard laparoscopic suture closure). each patient had both the camera port and the stapling port closed ( port sites in each group). primary outcome measures included the incidence of hernia ( -week ultrasound), time for port-site closure, and depth of needle penetration. secondary outcome measures were analog pain scores at post-op day , week and week . results: physical exam as well as ultrasound evaluation showed no hernias in either group at weeks. when compared to suture closure, the neoclose® device was associated with shorter closure times ( . ± . versus . ± . s, p < . ) and less needle depth penetration ( . ± . versus . ± . cm, p < . ). the neoclose® device was associated with decreased pain at week after the operation (analog pain score . ± . versus . ± . , p < . ). no difference in pain scores was observed on post-operative day or at week . conclusions: trocar site closure with the neoclose® device is associated with decreased closure times and needle depth penetration. no difference in the incidence of hernias was identified very early after operation. the neoclose® device led to decreased pain at week after trocar closure, potentially secondary to decreased tension when compared to closure with closed-loop suture. long-term hernia data ( year) are pending, with patients scheduled for follow-up physical exams and ultrasounds.

federico gheza, md, mario a masrur, md, simone crivellaro, md; uic

introduction: robotic instruments provide better ergonomics during suturing compared to standard laparoscopy. minimally invasive procedures requiring only a few sutures may benefit from an economically affordable device able to overcome some limitations of laparoscopic suturing.
flexdex surgical recently obtained fda approval for human use of its articulated laparoscopic needle driver. the official training provided by the company (available at https://flexdex.com/register-for-training) is a h basic dry lab; the training curriculum, like the accreditation process, is not well structured, and no literature is available today on this matter. our goal was to build a dedicated training module to allow safe and predictable early use in humans. methods and procedures: the training module was designed and implemented in our minimally invasive laboratory. in the preliminary phase, together with a small group of residents and research specialists, we defined a short list of mandatory concepts to cover when demonstrating the instrument. a simple suturing task was then performed by the same group with the new device, laparoscopically, and with the robot available in our lab for training only. a more complex task, based on a dedicated, self-designed, high-fidelity model of urethral anastomosis, was then proposed, exploring different options (one flexdex only vs two flexdex, surgeon vs assistant holding the camera). lastly, we applied the new device in animals to evaluate the usefulness of including simple tasks or entire procedures in the training curriculum. results: we were able to define a multilevel, adaptable training module including a basic information session, a dry lab with inanimate low- and high-fidelity models, and a pig lab. subjects with different levels of expertise (medical student, resident, fellow, expert and very expert surgeon) were involved to obtain extensive feedback. however, our main focus was to design a training module for laparoscopic and robotic surgeons, to introduce the flexdex into their practice safely. the only outcome for this preliminary work was collected through a "post-exposure" survey. the expert surgeon who completed the entire training was also able to give feedback after his first application of the device in humans. conclusions: flexdex is a promising device, available in the united states in approved facilities only. a minimally invasive lab with extensive laparoscopic and robotic training experience is the ideal setting in which to build a curriculum. a first adaptable, multilevel, original, high-fidelity training module is proposed; it should be validated in further studies and could be implemented for accreditation purposes.

augmenting spatial awareness in laparoscopic surgery by immersive holographic mixed reality navigation using hololens

objectives: endoscopic minimally invasive surgery provides a limited field of view, thus requiring a high degree of spatial awareness and orientation. because of the 2d endoscopic field of view, a surgeon's spatial awareness is diminished. this study aims to evaluate the efficacy of our novel immersive holographic mixed reality (mr) surgical navigation system, using the head-mounted smart-glass display hololens, in enhancing spatial awareness of the operating field in laparoscopic surgery. the authors describe a method of registering and overlaying the preoperative mdct localization of tumors, vessels, and organs onto the real world in the operating theatre through holographic smartglasses in augmented reality (ar). methods: we included laparoscopic gi, hpb, urology, and gynecologic surgeries using this system. we developed a ct-based, patient-specific holographic mr surgical navigation application for hololens, a head-mounted display with a pair of built-in see-through monitors.
the hololens features an inertial measurement unit (accelerometer, gyroscope, and magnetometer), environment-understanding sensors, an energy-efficient depth camera, a photographic video camera, and an ambient light sensor. results: the accurate surgical anatomy of size, position, and depth of the tumors, surrounding organs, and vessels could be measured during surgery using the built-in dual infrared light sensors. the exact relationship between surgical devices and the patient's anatomy could be traced on the mr smart-glasses by satellite tracking. gesture-controlled manipulation by the surgeon's gloved hands was useful for intraoperative anatomical reference to tumor and vascular position in the sterile environment. it allowed the user to manipulate the spatial attributes of the virtual and real anatomies. this system reduced the length of the operation and discussion time. it could support complex procedures with the help of pre- and intraoperative imaging, with better visualization of the surgical anatomy and spatial awareness through visualization of surgical instruments in relation to anatomical landmarks. conclusions: the immersive holographic mr system provides a real-time 3d interactive perspective of the inside of the patient, accurately guiding the surgeon. this supports the surgeon's spatial awareness in the operating field and has illustrative benefits in surgical planning, simulation, education, and navigation. enhancing scene visualization is a feasible strategy for augmenting spatial awareness in laparoscopic surgery.

francisco miguel sánchez margallo, phd, juan a. sánchez-margallo, phd, andreas skiadopoulos, phd, konstantinos gianikellis, phd; minimally invasive surgery centre, cáceres, spain; university of nebraska at omaha; university of extremadura, spain. introduction: new handheld devices have been developed to address the technical limitations and ergonomic issues present in laparoscopic surgery. the aim of this study is to analyze the surgeon's performance and ergonomics using the radius r drive instruments (tubingen scientific medical, germany) during the execution of laparoscopic cutting and suturing tasks. methods and procedures: three experienced laparoscopic surgeons performed both an intracorporeal suturing task and a cutting task on a box trainer. both tasks were repeated three times. a maryland dissector and a pair of scissors were used for the cutting task; for the suturing task, a maryland dissector and a needle holder were used. conventional laparoscopic instruments and their equivalent r drive instruments were used, and the order of instrument type was randomized. execution time and the surgeon's ergonomics were assessed; for the latter, surface electromyography (trapezius, deltoid and paravertebral muscles) and the nasa-tlx index were analyzed. for the cutting task, the percentage of the area of deviation from the cutting pattern (% error) was assessed. suturing performance was assessed by means of a task-specific validated checklist. results: surgeons required more time to perform both laparoscopic tasks using the r drive instruments. both types of instrument showed a similar percentage of deviation from the exterior part of the cutting pattern.
however, the deviation from the inner part was significantly higher using the r drive instruments (conv: . ± . % vs r drive: . ± . %; p< . ). needle driving was scored lower using the r drive instruments, but the quality of knot tying was similar to conventional instruments. the use of the r drive increased the muscle activity of the trapezius muscles bilaterally for both laparoscopic tasks; this muscle activity also increased for the left deltoid muscle during the cutting task. surgeons stated that the use of r drive instruments leads to a higher mental and physical workload when compared to traditional laparoscopic instruments. conclusions: despite the novel and ergonomic design of the r drive laparoscopic instruments, the results of this study suggest that an improvement in surgical performance and physical workload is required prior to their use in an actual surgical setting. further studies should analyze the use of these instruments during other laparoscopic tasks and procedures. we believe that surgeons need a longer, comprehensive training period with these instruments to reach their full potential in laparoscopic practice.

background/objectives: 3d printing has been shown to be a useful tool for preoperative planning in various surgical disciplines. however, there are only a few single-case reports in the field of liver surgery. this is because of problematic visualization of anatomy, difficulties in methodology and, most importantly, high costs limiting implementation of 3d printing. the goal of this study is to evaluate the utility of personalized 3d-printed liver models as routinely used tools in the planning and guidance of laparoscopic liver resections. materials and methods: contrast-enhanced computed tomography images of consecutive patients who underwent laparoscopic liver resections in a single centre were acquired and processed. appropriate segmentation algorithms were used to obtain virtual models of anatomical structures, including vessels, tumor, gallbladder and liver parenchyma, in stl (stereolithography) format. after file processing, models were printed in parts with a desktop ultimaker + (ultimaker, netherlands) 3d printer, using polylactic acid filaments as printing material. all parts were matched together to create a mold, which was later cast with transparent silicone. models were delivered to the surgical teams prior to surgery and were also used in patient education. results: up to now, six full-sized, transparent, personalized liver models have been created before laparoscopic liver resections and used as tools for preoperative planning and intraoperative guidance. the usefulness of these models has been evaluated qualitatively with surgeons. operative data were obtained for each patient and will be used for quantitative analysis in further study phases. the cost of one model varied between $ and $ , and the whole development process took approximately days in every case. conclusions: 3d-printed models allow precise planning in complex cases of minimally invasive liver surgery by providing high-quality visualization of patient-specific anatomy. implementation of this technology might potentially lead to clinical benefits, such as reduction of operative time or improvement of short-term outcomes. having said that, more data are needed to decisively prove these hypotheses.
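the segmentation-to-stl step described in the methods can be sketched in a few lines. the numpy-stl library is assumed here as one common choice, not necessarily what the authors used, and the mesh files are hypothetical placeholders.

```python
# minimal sketch, assuming numpy-stl: write a triangle mesh (e.g., the
# verts/faces output of marching cubes) to an stl file for 3d printing.
import numpy as np
from stl import mesh

verts = np.load("liver_verts.npy")  # hypothetical vertex array, shape (n, 3)
faces = np.load("liver_faces.npy")  # hypothetical triangle indices, shape (m, 3)

liver = mesh.Mesh(np.zeros(faces.shape[0], dtype=mesh.Mesh.dtype))
for i, tri in enumerate(faces):
    liver.vectors[i] = verts[tri]   # three xyz corners per triangle
liver.save("liver_model.stl")       # ready for slicing and printing
```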
introduction: modern laparoscopic graspers may risk inadvertent injury to tissues and have been shown to produce crush and puncture injuries. in addition, the force transmitted to the tissues by grasper handles can be highly variable, depending on the orientation and amount of tissue engaged by the grasper. we have developed a novel vacuum-based laparoscopic grasper designed to reduce tissue injury from grasping. the aim of this study is to compare the incidence and severity of tissue trauma caused by vacuum-based graspers versus standard compressive graspers while manipulating tissue. we performed an in vivo porcine study to assess gross and histologic tissue injury after grasping trials. grasping trials were divided equally between two adult porcine models; samples of small bowel were grasped with a standard atraumatic laparoscopic grasper (aesculap double-action atraumatic wave grasper) and with our novel vacuum grasper with varying vacuum head designs ( for head a, each for heads b and c). following grasping, the porcine model was allowed to dwell for hours prior to harvest. gross injury was graded as follows: ) no injury, ) ecchymosis only, ) serosal injury, ) seromuscular injury, and ) perforation. histologic injury was graded as follows: ) serositis, ) partial-thickness injury to the muscularis propria (mp), ) full-thickness mp injury, and ) full-thickness mp and mucosal injury. the mann-whitney u test was used to compare both gross and histologic injury scores between the groups. results: on gross assessment, no samples were noted to have injury more severe than ecchymoses following grasping. the vacuum grasper was found to cause more ecchymosis (median= ) than the compressive laparoscopic grasper (med.= , u= , p . ). on histologic assessment, the compressive grasper caused significantly more severe injury (med.= ) compared to the vacuum grasper (med.= , u= , p= . ). subgroup analysis showed that heads a (med.= , u= . , p= . ) and b (med.= , u= , p= . ) caused significantly less injury compared to the compressive grasper. head c (med.= , u= . , p= . ) also showed less injury but did not reach statistical significance. conclusion: this study demonstrates that our novel laparoscopic vacuum grasper produces less tissue trauma than standard compressive graspers. vacuum-based grasping is a viable alternative for reducing inadvertent tissue injury in laparoscopy.
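the ordinal injury grades above are compared with the mann-whitney u test. a minimal sketch follows; the two score vectors are invented placeholders, not the study's data.

```python
# minimal sketch of the mann-whitney u comparison of ordinal injury grades;
# both score vectors below are invented placeholders, not study data.
from scipy.stats import mannwhitneyu

vacuum_grades = [1, 1, 2, 1, 2, 1, 2, 1]       # hypothetical histologic grades
compressive_grades = [2, 3, 2, 4, 3, 3, 2, 4]  # hypothetical histologic grades

u_stat, p_value = mannwhitneyu(vacuum_grades, compressive_grades,
                               alternative="two-sided")
print(f"u = {u_stat}, p = {p_value:.4f}")
```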
minimally invasive surgery centre, cáceres, spain; university of nebraska at omaha; university of extremadura, spain. introduction: the aim of this study is to analyze the surgeon's performance, workload and ergonomics using an ergonomically designed handheld robotic needle holder during laparoscopic urethrovesical anastomosis in an animal model, and to compare it with a conventional laparoscopic needle holder. methods and procedures: six experienced surgeons performed a urethrovesical anastomosis in a porcine model using a handheld robotic needle holder and a conventional axial-handled laparoscopic needle holder (karl storz gmbh). the robotic instrument (dex®, dextérité surgical) has an ergonomic handle and a flexible tip with unlimited rotation, providing seven degrees of freedom. the order of instrument use was randomized. for each procedure, an expert surgeon evaluated surgical performance in a blinded fashion using the global operative assessment of laparoscopic skills rating scale. in addition, the quality of the intracorporeal suture was assessed by a validated suturing-specific checklist. the surgeon's posture was recorded and analyzed using the xsens mvn biomech system, based on inertial measurement units. the surgeon's workload was evaluated by means of the nasa task load index, a subjective, multidimensional assessment tool. the patency of each anastomosis was assessed using methylene blue. results: all urethrovesical anastomoses were completed without complications. only one anastomosis, with the robotic device, failed the patency test. surgeons showed similar surgical skills with both instruments, although they presented greater autonomy with the conventional instrument (p= . ). for suturing performance, the robotic device led to an increased number of movements during needle driving and a lower tendency to follow the needle's curvature during the withdrawal maneuver (p= . ). the level of workload increased with the robotic device; however, the surgeon's satisfaction with the surgical outcome did not differ between instruments. the robotic instrument led to similar posture of the shoulder and wrist and better posture of the right elbow (p= . ) compared to the conventional instrument. conclusions: the robotized needle holder obtained similar results for surgical performance and surgical outcome of the urethrovesical anastomosis when compared to the conventional instrument. we consider that aspects such as the surgeon's autonomy, dexterity in driving the needle, and workload could be improved with comprehensive training with the new device. inertial sensors can be an alternative for real, crowded surgical environments. surgeons acquired a better body posture using the novel robotic needle holder.

introduction: the temporal and spatial tissue temperature profiles of electrosurgical devices, such as ultrasonic scissors and a bipolar vessel sealing system, were experimentally measured, and the incidence of postoperative complications after thoracoscopic esophagectomy was assessed according to the electrosurgical devices used. methods and procedures: thermal spread experiment: sonicision (sonic) was used for the ultrasonic scissors and ligasure (ls) for the bipolar vessel sealing system. each device was activated to cut porcine muscle at room temperature. the temperatures of both the device blade and the porcine tissue beside the device were measured using a temperature probe. each experiment was performed at least three times; room temperature was degrees. clinical analysis: patients who underwent thoracoscopic esophagectomy with -field lymph node dissection in the prone position were selected for the study. the incidence of postoperative complications after thoracoscopic esophagectomy was compared according to the electrosurgical devices used. bronchoscopy was used for the diagnosis of recurrent laryngeal nerve paralysis (rlnp). sonic and ls were employed in and patients, respectively.

material: we compared consecutive cases of 3d laparoscopic surgery versus cases of conventional 2d laparoscopic surgery from january to june . all procedures were performed by experienced laparoscopic surgeons using a 3d system (einsteinvision) and a conventional hd laparoscopic optic. 3d laparoscopic surgery offers the depth perception of the surgical field that is lost with conventional (2d) laparoscopic surgery, and in many series it is reported to be better in terms of surgical performance. outcome measures were operation time, surgical performance, blood loss, complications, and surgeon satisfaction with the procedure.
results: cholecystectomy was the most frequent surgery performed, with cases ( %); hernia surgery, cases ( %); fundoplication, cases ( %); appendectomy, cases ( %); left colon excision with colorectal anastomosis, cases ( %); and other, cases ( %), which included ovarian cyst excision, liver biopsy, prostatectomy, and pediatric surgery. we compared each 3d procedure with a standard laparoscopy case performed by the same surgeon during the study period. outcome measures for 3d vs 2d procedures are shown in table . we found better results in operation time and surgical performance, and less blood loss, in favor of three-dimensional laparoscopy ( . ). conclusion: 3d laparoscopy reduces operation time, owing to better performance during the procedure. depth perception facilitates dissection, intracorporeal knotting, mesh placement, and colorectal anastomosis. surgeons reported better surgical performance and comfort during 3d laparoscopy; there were no reported side effects such as headache or dizziness.

background: social media (some) uniquely allows international collaboration, with immediacy and ease of access and communication. in areas where surgical management is contentious, this could be a valuable tool to frame the current state, propose best practices, and possibly guide management on a rapid, cost-effective, global scale. our goal was to determine the ability to use twitter, a some platform, as an alternative surgical research tool. methods: twitter was used to host an online poll on a pre-selected controversial topic with no current consensus guidelines: pathological complete response in rectal cancer. an influential colorectal surgeon published the survey "t n rectal cancer undergoes a complete response" on two separate occasions. both polls were open for a duration of three days. two methodologies were tested to increase exposure and direct the poll towards relevant participants: first, tagging several worldwide experts; then, using the well-established hashtag #colorectalsurgery and publishing during an international surgical conference. the main outcome measures were the feasibility, validity, and reproducibility of a twitter survey, and methods to increase participation. results: the tweet polls were posted three weeks apart. there was no cost, and the time required for the process was three minutes, demonstrating feasibility. providing three closed options to select from facilitated validity. the poll's anonymity limited knowledge of participants' qualifications, but public comments and "retweets" came from surgeons with experience ranging from trainee to department chair. a robust volume of respondents was observed. the 1st post received votes, "likes", "retweets", and comments from a diverse international group ( countries); all tagged members participated in the forum. the 2nd received votes, "likes", "retweets", and comments. the results were reproducible, with the majority favoring option on both occasions ( % and %, respectively; p= . ). treatment recommendations, their rationale, and open questions were identified in the thread. conclusions: some can be used as a research tool, with valid, reproducible, and representative survey results. while exposure was comparable across the two methods, tagging specific members guided experts to provide more opinions than using conference and specialty hashtags. this could expand awareness and education, and possibly affect management, in a transparent, cost-effective manner.
the anonymous nature of respondents limited the ability to draw conclusions, but interest and opinion leaders for further study can easily be identified. this demonstrates the potential for some to facilitate international collaborative research.

background: despite the technological advancement of the minimally invasive approach to pylorus-preserving pancreaticoduodenectomy (pppd), morbidity is still high. among the many complications, postoperative pancreatic fistula (popf) is reported at a high incidence rate, which varies from researcher to researcher, and a fistula risk score (frs) has been developed to predict popf. the aim of this study is to validate the fistula risk score in the minimally invasive approach to pppd and to find other meaningful parameters for the prediction of popf. method and materials: from january to august , laparoscopy-attempted right-sided pancreas resection was performed on patients, including robotic reconstruction, in the division of hepatobiliary and pancreatic surgery at yonsei university health system. among them, patients were excluded due to total pancreatectomy (n= ), open conversion (n= ), pancreaticogastrostomy and hybrid manual anastomosis (n= ), or non-measurable drain and missing data (n= ). conclusions: the fistula risk score is a significant prediction factor for popf, including biochemical leaks. in addition to the previously known frs variables, our data showed that bmi is an important predictor of popf, with clinical relevance, in the minimally invasive approach to pppd.

laparoscopic hemi-hepatectomy for liver tumor. satoru imura, hiroki teraoku, yuji saito, shuichi iwahashi, tetsuya ikemoto, yuji morine, mitsuo shimada; tokushima university. introduction: with progress in surgical techniques and devices, laparoscopic liver resection has become a realizable option for patients with liver tumors. major liver resection such as anatomical left or right hemi-hepatectomy has also been introduced in many centers. herein, we evaluate the surgical results of laparoscopic hemi-hepatectomy for liver tumor. patients and methods: until march , consecutive patients who underwent laparoscopic or laparoscope-assisted hemi-hepatectomy (left: , right: ) were reviewed, and surgical data such as operation time, blood loss, and postoperative complications were analyzed retrospectively. results: of the patients who underwent left hemi-hepatectomy, cases were primary liver cancer, cases were metastatic tumor, and cases were benign tumor. pure laparoscopic surgery was performed in cases. the mean blood loss was ( - ) ml, mean operating time was ( - ) minutes, and mean postoperative hospital stay was ( - ) days. the rate of postoperative complications was . % (wound infection; n= ). all right hemi-hepatectomies were performed by the laparoscope-assisted method. of the patients who underwent right hemi-hepatectomy, cases were primary liver cancer, cases were metastatic tumor, and cases were benign tumor. the mean blood loss was ( - ) ml, mean operating time was ( - ) minutes, and mean postoperative hospital stay was ( - ) days. the rate of postoperative complications was . % (biliary stenosis; n= ). the patients with hepatocellular carcinoma were followed up for a median of ( - ) months; recurrence occurred in cases, and none of them had died at the time of follow-up. conclusion: laparoscopic hemi-hepatectomy is a safe and effective procedure for the treatment of benign and malignant liver tumors.
ibrahim a salama, professor; department of hepatobiliary surgery, national liver institute, menoufia university. abstract background: iatrogenic biliary injuries are among the most serious complications of cholecystectomy. better outcomes for such injuries have been shown in cases managed in a specialized center. objective: evaluation of the management of biliary injuries in a major referral hepatobiliary center. patients and methods: four hundred seventy-two consecutive patients with post-cholecystectomy biliary injuries were managed by a multidisciplinary team (hepatobiliary surgeon, gastroenterologist and radiologist) at a major hepatobiliary center in egypt over a -year period, using endoscopy in patients, percutaneous techniques in patients, and surgery in patients. results: endoscopy was a very successful initial treatment in patients ( %) with mild/moderate biliary leakage ( %) and biliary stricture ( %), with increased success by the addition of percutaneous (rendezvous) techniques in patients ( . %). however, surgery was needed in ( %) for major duct transection, ligation, major leakage, and massive stricture. surgery was urgent in patients and elective in patients. hepaticojejunostomy was done in most cases, with transanastomotic stents. there was one death after surgery, due to biliary sepsis; postoperative stricture occurred in cases ( . %) and was treated with percutaneous dilation and stenting. conclusion: management of biliary injuries was much better with a multidisciplinary care team, proceeding from initial minimally invasive techniques to major surgery for major complex injuries; this encourages early referral to a highly specialized hepatobiliary center.

introduction: a simple liver cyst is a solitary non-parasitic cystic lesion of the liver. treatment of symptomatic liver cysts varies from simple aspiration to hepatic resection, and each treatment has its own merits and associated complications. laparoscopic unroofing (fenestration) offers the best balance between efficacy and safety. the results of this method in polycystic liver disease (pcld) are less clear because of a high failure rate; liver resection, though more effective, carries higher risks. treatment of hydatid disease is controversial. materials and method: simple cysts may be asymptomatic, picked up as incidental findings on ultrasound examination for other abdominal complaints. a few cysts cause symptoms of mass effect or complications due to haemorrhage, rupture, or infection. on examination, the liver is palpable; compression of the bile duct gives rise to jaundice. the commonest symptoms are pain, early satiety, nausea, and vomiting. simple cysts are more common in females after years of age. cysts located anteriorly, inferiorly, and laterally are the ideal cases. ultrasonography is important: it helps to characterize the cyst and to differentiate a simple cyst from polycystic liver disease and neoplastic liver lesions. in areas where hydatid liver disease is endemic, serological testing is mandatory. ct scanning is important for detailed information: localising the cyst, identifying the liver tissue around the cyst, its relationship to nearby vital structures, the number of cysts, and calcification or carcinomatous changes in the wall. aspiration of cyst fluid, with biochemical and cytological examination, rules out infection, biliary communication, and malignancy. recently, ca 19-9 estimation has been helpful in differentiating a simple cyst from cystadenoma or carcinoma.
in jaundiced patients, ercp is important to locate an intraductal polyp causing biliary obstruction or a cyst compressing the biliary tree; for bleeding into a cyst, mri is helpful. carcinoma of the epithelial lining may occur. result: laparoscopic de-roofing (fenestration), a less radical procedure, ensures adequate drainage of the cyst contents into the peritoneal cavity. the cyst wall can be removed using a harmonic scalpel, so that smoke production and fogging of the lens are minimized. the interior surface is inspected with care to exclude neoplastic growth and biliary communication. the whole operative procedure, the duration of postoperative recovery, and the hospital stay are much shorter with this approach, and a large chevron incision is avoided. no recurrence was seen over a two-year follow-up period. liver resection and total cystectomy theoretically minimize the recurrence risk but carry a real risk of postoperative complications and death. conclusion: careful case selection and meticulous surgical skill are the two major determinants of outcome.

in the llr group, the first port was placed with an alexis® wound retractor (applied medical, usa) and free access® (top corporation, japan) at the abdominal defect made by the previous sc. an additional or trocars were placed as needed. results: all patients in the llr group were treated using the laparoscopic approach. there were no other significant differences in patient background and characteristics. operative duration was similar between the groups. blood loss, complication rate, and hospital stay in the llr group were significantly decreased compared with the olr group. conclusion: in concurrent liver resection and sc, the open approach may require multiple large incisions, but the laparoscopic approach can complete the procedures with a stoma wound and a few port wounds. additionally, use of a platform on the wound for sc enhances the safety and efficacy of dissection of intraabdominal adhesions and provides a clear operative view.

primary hepatic lymphoma: the importance of liver biopsy. diego t enjuto, carlos ortiz, laura casanova, jose luis castro, pablo sánchez, jaime vázquez, norberto herrera, benjamín tallon, carmen jimenez; hospital severo ochoa, hospital san rafael, hospital henares. primary hepatic lymphoma (phl) is a very uncommon lymphoproliferative malignancy. it accounts for only . % of all extranodal non-hodgkin lymphomas and . % of all cases of non-hodgkin disease. the diagnosis is made when there is only liver involvement or minimal non-liver disease; bone marrow, splenic, or hematologic involvement should be excluded to confirm the diagnosis. we present our experience with two phls that were correctly diagnosed thanks to laparoscopic liver biopsy. a -year-old male was admitted because of a -month history of right upper quadrant pain and unmeasured weight loss. liver function tests and cholestatic enzymes showed normal values. serologic tests were negative for both hbv (hepatitis b virus) and hcv (hepatitis c virus). ct (computed tomography) scan showed three intrahepatic lesions in segments v, vi, and vii. ct-guided fine-needle biopsy did not reach the diagnosis, so a laparoscopic hepatic biopsy was performed. the final diagnosis was burkitt-like lymphoma. chemotherapy with r-chop (rituximab, cyclophosphamide, adriamycin, vincristine, and prednisone) was started and completed after cycles. it is now years since the patient was diagnosed, and there are no clinical or radiological signs of recurrence.
a -year-old male complained of diarrhoea and abdominal pain. chronic hbv infection with no viral load was detected. ultrasound showed heterogeneity of the whole left hepatic lobe, and an mri was performed. a ten by seven centimeter lesion occupying the left hepatic lobe, enhancing in the arterial phase, was seen, suggesting adenoma. a laparoscopic hepatic biopsy was completed to reach a definitive diagnosis: follicular-type non-hodgkin lymphoma has just been confirmed by histology and immunohistochemistry. chemotherapy with r-chop should be started in the following weeks. phl is hard to diagnose. fine-needle biopsies are frequently negative because of the large areas of necrosis, and surgical biopsies are sometimes indispensable to obtain enough tissue to reach the diagnosis. phls are sometimes misdiagnosed as hepatocellular carcinoma because of their relation to hcv, leading to major hepatic resection. that is why we consider that all diagnostic measures should be undertaken to rule out a different type of tumor. surgical resection is normally not needed in phl, as these are chemosensitive lesions; surgical options usually add unnecessary morbidity and mortality for these patients. standard chemotherapy for phl consists of the r-chop combination.

pancreatic neoplasm enucleation: when is it safe? case report and review of the literature. elaine jayne buckley, k molik, j mellinger; siu-som, hshs pediatric surgery. introduction: solid pseudopapillary tumors are rare neoplasms accounting for - % of pancreatic malignancies, with a low risk of recurrence and metastasis. pancreatic malignancies are less common in pediatric populations, though small case series have identified that pseudopapillary tumors comprise between and % of pediatric pancreatic neoplasms. as these tumors have a low risk of metastasis, the mainstay of treatment has remained surgical excision. several surgical approaches have been described, from extensive resections such as pancreaticoduodenectomy to local enucleation. we present a case of enucleation of a large pseudopapillary tumor from the pancreatic head, complicated by pancreatic fistula. given the rarity of this tumor, a literature review was performed to review surgical approaches, to compare complications and long-term outcomes, and to identify specific strategies to decrease the risk of pancreatic fistula. case description: a -year-old female presented with months of abdominal pain. computed tomography identified a right upper quadrant mass felt to be consistent with a lipoma. follow-up ct at months suggested the mass was more likely a gastrointestinal stromal tumor (gist), and surgical resection was recommended. enucleation of the mass was chosen in view of its well-circumscribed appearance, clear operative tissue planes, and concern for the long-term morbidity of a more extensive resection given the patient's young age. pathology demonstrated an . cm pseudopapillary tumor with negative margins. her postoperative course was complicated by a grade b pancreatic fistula, managed with nutritional support, external drain maintenance, and endoscopic stenting. the patient achieved healing of the pancreatic fistula after four months. results: our literature review demonstrates no difference in recurrence, mortality, or morbidity between types of surgery. pancreatic fistula contributed the majority of postoperative morbidity in all cases. recommendations for enucleation include small ( - cm) tumors with a margin of between and mm from the main pancreatic duct.
techniques identified to minimize postoperative pancreatic fistula include preoperative imaging of the duct anatomy, preoperative pancreatic stent placement, and intraoperative ultrasound to identify the pancreatic duct. some literature supports preservation of pancreatic parenchyma, particularly in younger patients, to reduce endocrine and exocrine dysfunction, given the low rates of recurrence and metastasis with this rare neoplasm. conclusion: our case demonstrates complications of enucleation of a large pseudopapillary tumor with successful multidisciplinary postoperative management. with the risk-reduction strategies identified, we suggest that enucleation may be considered for pseudopapillary tumors in younger patients to preserve pancreatic parenchyma and long-term pancreatic function.

introduction: recent advancements in minimally invasive techniques have led to increased effort and interest in laparoscopic pancreatic surgery. laparoscopic distal pancreatectomy is a widely accepted procedure for left-sided pancreatic lesions; in other cases, the adoption of laparoscopic pancreaticoduodenectomy has been hindered by the technical complexity of laparoscopic reconstruction. hybrid laparoscopy-assisted pancreaticoduodenectomy (hlapd), in which pancreaticoduodenal resection is performed laparoscopically while reconstruction is completed via a small upper-midline minilaparotomy, combines the efficacy of the open approach with the benefits of the laparoscopic approach. the purpose of this study is to report our experience with hlapd and to define its learning curve. methods: patients with benign and malignant periampullary lesions who underwent hlapd by a single surgeon between july and may were retrospectively reviewed. clinicopathologic variables were prospectively collected and analyzed. the learning curve for hlapd was assessed using cumulative sum (cusum) and risk-adjusted cusum (ra-cusum) methods. results: the most common histopathology was pancreatic ductal adenocarcinoma (n= , . %), followed by intraductal papillary mucinous neoplasms (n= , . %), ampulla of vater cancer (n= , . %), and common bile duct cancer (n= , . %). the median operation time was min (range, - min), and the median estimated blood loss was ml. based on the cusum and ra-cusum analyses, the learning curve for hlapd was grouped into four phases: phase i was the initial learning period (cases - ), phase ii the technical stabilizing period (cases - ), phase iii the second learning period (cases - ), and phase iv the second stabilizing period (cases - ). there was a statistical difference in surgical indication between phases ii and iii (p= . ). conclusions: hlapd is a technically feasible and safe procedure in selected patients. it has the benefits of both open and minimally invasive procedures and could be a stepping-stone for the transition from open to purely minimally invasive pancreaticoduodenectomy.
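for readers unfamiliar with cusum learning-curve analysis, a minimal unadjusted sketch follows. this is not the authors' risk-adjusted implementation, and the operative times are invented placeholders.

```python
# minimal sketch of an (unadjusted) cusum learning curve for operative time;
# the case series below is an invented placeholder, not study data.
import numpy as np
import matplotlib.pyplot as plt

op_times = np.array([420, 410, 395, 400, 380, 360, 365, 340, 330, 325])

# each case contributes (observed - overall mean); a sustained downward
# slope marks where performance stabilizes below the series mean
cusum = np.cumsum(op_times - op_times.mean())

plt.plot(np.arange(1, len(cusum) + 1), cusum, marker="o")
plt.xlabel("case number")
plt.ylabel("cusum of operative time (min)")
plt.show()
```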
in silico investigation of the atp b gene. background: wilson's disease is a rare autosomal recessive genetic disorder of copper metabolism, characterized by hepatic and neurological disease. the gene atp b (on chromosome ), which when mutated leads to wilson's disease, is highly expressed in the liver, kidney, and placenta and encodes a transmembrane atpase (atp b) that functions as a copper-dependent p-type atpase. methods: the rare codons of the atp b gene and their location in the structure of the atp b protein were studied with the rare codon calculator (racc) (http://nihserver.mbi.ucla.edu/racc/), atgme (http://atgme.org/), latcom (http://structure.biol.ucy.ac.cy/latcom.html) and the sherlocc program (http://bcb.med.usherbrooke.ca/sherlocc.php). the racc server identified arg, leu, ile, and pro codons as rare codons. results: results showed that the cyp a gene has a single rare arg codon; additionally, racc detected two rare leu codons, a single rare ile codon, and a rare pro codon. analysis of the atp b gene with the minmax and sliding_window algorithms identified and rare codon clusters, respectively, which shows the differing features of these algorithms in detecting rccs. analysis of the 3d model of the atp b protein shows that the arg residue forms hydrogen bonds with glu and glu ; with mutation of this residue to ser, these hydrogen bonds are disrupted, which may interfere with proper folding of the protein. moreover, the side chain of arg does not form any bond with other residues, but with mutation to thr it forms a new hydrogen bond with the side chain of arg . such additions and deletions of hydrogen bonds affect the folding mechanism of the atp b protein and interfere with its proper function. his forms a hydrogen bond with his ; this bond appears to bring two regions of the protein together and seems to play a critical role in the final folding of the atp b protein. conclusions: computational study of diseases such as wilson's disease and the genes involved (atp b) helps us understand the disease's physiopathology and find new approaches for detection and treatment.
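the rare-codon flagging performed by servers such as racc can be sketched in a few lines of python. both the rare-codon set and the sequence below are illustrative stand-ins, not the atp b inputs.

```python
# minimal sketch of flagging rare codons along a coding sequence, in the
# spirit of the racc analysis above; the rare-codon set and the cds are
# illustrative placeholders, not the atp b data.
RARE_CODONS = {"CGA", "CGG", "CTA", "ATA", "CCG"}  # assumed rare set

cds = "ATGCGACTAATACCGGGTCTGTAA"  # hypothetical in-frame coding sequence

codons = [cds[i:i + 3] for i in range(0, len(cds) - 2, 3)]
for pos, codon in enumerate(codons, start=1):
    if codon in RARE_CODONS:
        print(f"rare codon {codon} at position {pos}")
```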
pancreatic stump leak and fistula formation are significant causes of morbidity in patients undergoing distal pancreatectomy (dp), with an incidence of % to as high as % in a large systematic review. we present the case of a -year-old female, four months status post distal pancreatectomy and splenectomy for a pseudopapillary neoplasm of the pancreatic tail. the patient presented to our institution with a -day history of left upper quadrant pain and general malaise. the differential diagnosis on admission was abdominal wall abscess vs incarcerated incisional hernia. physical exam was notable for severe tenderness to palpation over a x cm non-reducible mass in the left upper quadrant with surrounding skin erythema. the patient underwent diagnostic laparoscopy; intraoperative findings revealed extensive adhesions to the anterior abdominal wall, and a loop of small bowel was found adhered to the previous incision site in the left upper quadrant. upon further dissection, we entered a large cm cavity containing saponified caseous material. the saponified material and thick tan fluid were evacuated into an endocatch bag, and two large-bore jackson-pratt drains were left within the cavity. further examination showed that the small intestine was normal, with no signs of obstruction or ischemia. fluid studies and cultures showed yeast-like organisms and were negative for acid-fast bacilli. we report an unusual presentation of a distal pancreatectomy stump leak, presenting as an intra-abdominal saponified fluid collection four months after the primary procedure. given the high incidence of pancreatic stump leak and fistula formation after distal pancreatectomy, much effort has been made to identify factors associated with a higher incidence of leaks and their usual and unusual presentations, which are reviewed in this report.

initial concerns regarding healthy donors' safety and graft integrity, and the need to acquire surgical expertise in both laparoscopic liver surgery and living donor liver transplantation (ldlt), have delayed the development of laparoscopic donor hepatectomy in adult-to-adult ldlt. however, decreased blood loss, less postoperative pain, shorter hospital stay, and excellent cosmetic outcome have been well validated as advantages of laparoscopic hepatectomy. hence, the safety and feasibility of laparoscopic donor hepatectomy should be further investigated. we present our initial experience with, and the safety of, totally laparoscopic living donor right hepatectomy. a totally laparoscopic approach was applied in cases of elective living donor right hepatectomy for adult-to-adult ldlt from may to august . anatomical variation of the portal vein was not considered an exclusion criterion, although all donors had type i portal vein anatomy. bile duct anomalies were evaluated preoperatively with magnetic resonance cholangiopancreatography (mrcp) and were never grounds for exclusion from the totally laparoscopic approach. a conventional rigid 3d º laparoscopic system was used in cases, and the remaining cases used a flexible 3d laparoscopic system. in about %, hepatic duct anomalies (type , a, b) were identified. the operation time ranged from hours to hours, and the time for graft removal was within minutes. hepatic duct transection was performed under operative cholangiography via the cystic duct, and the patency of the left hepatic duct was also confirmed by operative cholangiography. during the postoperative period, bile leakage was identified in only case and resolved after biliary stent insertion by ercp. no transfusion was required during the operations, and inflow control such as the pringle maneuver was not used at all. v or v was reconstructed in cases, and a large right inferior hepatic vein was prepared for anastomosis in cases. all grafts were removed through a suprapubic transverse incision. most donors were discharged days after hepatectomy. during short-term follow-up, no complications were identified in the donors apart from the case above. in conclusion, totally laparoscopic donor right hepatectomy in elective adult-to-adult ldlt can be attempted after sufficient experience with laparoscopic hepatectomy and ldlt; however, its true benefits should be fully assessed through further multi-institutional experience.

background: the role of neoadjuvant chemotherapy in the treatment of pancreatic cancer remains widely controversial. studies have evaluated its effect on resectability and survival; however, few have studied the consequences of neoadjuvant therapy for surgical outcomes and complications. methods and procedures: a retrospective analysis was performed utilizing the targeted pancreas module of the national surgical quality improvement project (nsqip) for patients undergoing pancreaticoduodenectomy. neoadjuvant therapy was defined as chemotherapy and/or radiation in the days before surgery. patient demographics, operative characteristics, and -day outcomes were compared among patients undergoing neoadjuvant chemotherapy, radiation, chemoradiation, and no neoadjuvant therapy. both univariable and multivariable analyses were completed. results: pancreaticoduodenectomy was completed in , patients; , patients had no neoadjuvant therapy, underwent both chemotherapy and radiation, underwent chemotherapy alone, and underwent radiation alone.
there were no differences in demographics or comorbidities. no difference in -day mortality was found; however, pancreatic fistula formation was affected by neoadjuvant therapy. neoadjuvant radiation increased fistula formation (or: . , % ci: . - . ), while neoadjuvant chemotherapy (or: . , % ci: . - . ) was protective. conclusion: neoadjuvant therapy significantly impacts surgical outcomes following pancreaticoduodenectomy. given that pancreatic fistula formation can delay postoperative chemotherapy, it may be reasonable to refrain from neoadjuvant radiation therapy for patients with resectable and borderline-resectable disease.

the influence of the thickest stapler. background: the use of stapling devices for distal pancreatectomy remains controversial, due to concerns about the development of postoperative pancreatic fistula (popf). pancreas thickness might be associated with popf, but the suitable stapler thickness for reducing popf also remains inconclusive. methods: we have routinely used the thickest endo gia™ reloads with tri-staple™ (covidien, north haven, ct) for pancreas closure during laparoscopic left-sided pancreatectomy (lp) since . we compared the short-term surgical results of ten consecutive patients who underwent lp using the new stapler (ns) with patients who underwent lp using other types of stapler (os), focusing on popf. results: no patient in the ns group developed clinically relevant (cr) popf, whereas two patients ( . %) in the os group experienced cr-popf; however, this difference was not significant. pancreas thickness at the stapling point did not differ between the two groups ( . mm vs . mm, p= . ). in the ns group, patients ( . %) developed a popf, whereas in the os group, patients ( . %) developed a popf; there was also no difference in popf between the groups. conclusion: the gia™ reloads with the thickest tri-staple™ allow effective prevention of cr-popf after distal pancreatectomy; however, there was no advantage over thinner staplers for lp.

introduction: single-incision laparoscopic hepatectomy (silh) has been shown to be feasible and safe in experienced hands for selected patients with benign or malignant liver diseases. only small series have been reported, and most of the procedures were minor liver resections. we herein present our experience of silh over a period of months. methods and procedures: consecutive patients underwent silh performed by two experienced laparoscopic surgeons with straight instruments. patient characteristics and surgical outcomes were analyzed by reviewing the medical charts. results: the patient age was . ± . ( - ) years, with male predominance ( patients, . %). six patients ( . %) had liver cirrhosis proven by pathologic examination. nine procedures ( . %) were indicated for malignancy. four major hepatectomies (over two segments) and nine minor ones were performed, including seven anatomical resections. the abdominal incisions were para- or trans-umbilical, except one placed along an old operative scar in the lower midline, and most (n= , . %) were within cm in length. inflow control was carried out by either individual hilar dissection or an extraglissonian approach instead of the pringle maneuver. the operations were all accomplished successfully without additional ports or open conversion. the operative time was . ± . ( - ) min, and the estimated blood loss was . ± . ( - ) ml. five patients ( . %) encountered complications, four of which were classified as clavien-dindo grade i. the postoperative length of hospital stay was . ± . ( - ) days.
there was no mortality. conclusion: silh can be performed safely and efficaciously in selected patients with benign and malignant liver diseases, including cirrhosis. not only minor but also major liver resections are feasible. this innovative procedure provides low postoperative pain and fast recovery. before adopting this demanding technique, surgeons should be familiar with both single-incision laparoscopic surgery and laparoscopic hepatectomy; better outcomes can be anticipated after the learning curve.

background: laparoscopic distal pancreatectomy (ldp) has been replacing the open procedure for benign and malignant diseases of the pancreas. however, it is often difficult to apply ldp to pancreatic ductal adenocarcinoma (pdac) because of its aggressive invasion of adjacent organs or major vessels. objectives: the objective of this study was to report our experience with laparoscopic extended pancreatectomy with en-bloc resection of adjacent organs or major vessels for left-sided pdac. methods: we reviewed data for all consecutive patients undergoing ldp for left-sided pdac at asan medical center (seoul, south korea) between april and december . patients who underwent laparoscopic extended pancreatectomy with en-bloc resection of adjacent organs or major vessels were included in the analyses. results: of the total patients, underwent laparoscopic extended pancreatectomy. there were male and female patients, with a median age of . years. resected adjacent organs or vessels were as follows: stomach in , duodenum in , colon in , kidney in , superior mesenteric vein in , and celiac axis in . median operative duration was minutes, and median length of hospital stay was days. pathological reports revealed the following: a median tumor size of . cm; tumor differentiation (well differentiated in , moderately differentiated in , and poorly differentiated in ); t stages (t in , t in , and t in ); and n stages (n in and n in ). r resection was achieved in patients, and most r resections involved tangential retroperitoneal margins. postoperatively, clinically relevant postoperative pancreatic fistula occurred in patients, and there was no -day mortality. median overall survival was . months, and the -year survival rate was . %. conclusions: although laparoscopic surgery has limitations in treating extensive disease, laparoscopic extended pancreatectomy can be applied in selected patients with acceptable complication and survival rates.

… who underwent hepatic resection were included. these patients were divided into llr and olr groups. demographics, tumor characteristics, recurrence rates, and overall survival were compared between the groups. results: patients were included and grouped into llr (n= ) and olr (n= ). the average tumor number was ± for both groups, while the mean tumor size was . cm and . cm for the llr and olr groups, respectively. when compared with olr, llr had lower postoperative complication rates ( . % vs . %, p= . ) and a shorter hospital stay ( vs days, p= . ), although the differences were not statistically significant. overall, recurrence-free and disease-free survival were comparable between llr and olr.

introduction: single-port surgery has been described since , with cholecystectomy, colectomy, gastrectomy, and others. nevertheless, few cases have been reported in the field of hbp surgery. herein, we report single-port pancreatic surgery developed from our previous experience.
we started single-port surgery in ; since then, we have done more than cases of single-port surgery using a surgical glove port, including cholecystectomy, appendectomy, and colectomy. because we considered that this experience should extend to pancreatic surgery, cases of single-port staging laparoscopy for potentially resectable and borderline resectable pancreatic cancer and cases of single-port plus one-port distal pancreatectomy (spop-dp) have been done at our institution. single-port staging laparoscopy for pancreatic cancer: resectability was proven in ( %) of patients, while patients had unresectable factors, such as small liver and peritoneal metastases, that could not be detected preoperatively. the length of hospital stay was . ± . days, and the time to chemotherapy was . ± . days. single-port plus one-port distal pancreatectomy (spop-dp): spop-dp starts with a . cm skin incision at the umbilicus. subsequently, a wound retractor is installed in the umbilical wound. then, a non-powdered surgical glove ( . inches) is put on the wound retractor, through which three -mm slim trocars and one -mm trocar are inserted via the fingertips. a semi-flexible laparoscopic camera is inserted via the middle-finger port. the -mm port is used when laparoscopic us, a mechanical stapler, an endo intestinal clip, or a retrieval bag is needed. an additional -mm port is inserted at the left subcostal region, mainly used for the surgeon's right-hand instrument. the posterior gastric wall is fixed to the abdominal wall by suture instead of manual retraction. pre-compression of the pancreas before transection was done using an endo intestinal clip before firing. discussion: as we have seen over these two decades, surgery has been dramatically changed by laparoscopic and robotic surgery. nevertheless, because of technical difficulty and a relatively high postoperative complication rate, the introduction of reduced-port surgery to hbp surgery has only just started. spop-dp using the endo intestinal clip, glove port, and gastric wall hanging method is feasible, but its advantage is not yet clear; a multicenter rct is highly desirable to clarify the benefit of reduced-port surgery for the pancreas.

introduction: scoring systems (ss) are an essential pillar of care in acute pancreatitis (ap) management. we compared six ss (acute physiology and chronic health evaluation (apache-ii), bedside index for severity in ap (bisap), glasgow score, harmless ap score (haps), ranson's score, and sequential organ failure assessment (sofa) score) for their utility in predicting severity, intensive care unit (icu) admission, and mortality. methods: ap patients treated between july and september were studied retrospectively. demographic profile, clinical presentation, and discharge outcomes were recorded. the predictive accuracy of the six ss was assessed using areas under the receiver-operating characteristic curve (auc) with pairwise comparisons. results: patients were treated for ap; twenty-two ( . %) patients were excluded for insufficient data. / ( . %) were male, and the mean age was . ( - ) years. the most common aetiology was gallstones ( . %). the mean length of stay was . ( - ) days. ( . %) patients had severe ap, ( . %) required icu admission, and ( . %) died. the table below shows the positive predictive value (ppv), negative predictive value (npv), and auc of the six ss in predicting outcomes. pairwise comparisons revealed that ranson's (p . ) and sofa (p . ) scores were superior to the other ss in predicting all three outcomes. the auc of sofa was greater than that of ranson's score in predicting severity (p . ), but similar in predicting icu admission (p= . ) and mortality (p= . ). conclusion: the sofa score is superior to classical ss in predicting severity, icu admission, and mortality in ap.
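the auc comparison underlying these results can be sketched as follows; the outcome labels and score values are invented placeholders, not the study's patients.

```python
# minimal sketch of comparing two severity scores by auc; the labels and
# score values below are invented placeholders, not patient data.
from sklearn.metrics import roc_auc_score

severe = [0, 0, 1, 0, 1, 1, 0, 1]  # hypothetical severe-ap outcome (1 = yes)
ranson = [1, 2, 4, 1, 3, 5, 2, 4]  # hypothetical ranson scores
sofa = [2, 1, 6, 2, 5, 8, 1, 7]    # hypothetical sofa scores

print("ranson auc:", roc_auc_score(severe, ranson))
print("sofa auc:", roc_auc_score(severe, sofa))
```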
introduction: necrotizing pancreatitis is often a devastating sequela of acute pancreatitis. historically, several approaches have been described, with variable outcomes. open necrosectomy is associated with high morbidity ( %) and mortality ( %). endoscopic necrosectomy is often well tolerated but is associated with stent migration and multiple procedures. video-assisted retroperitoneal debridement is well tolerated but can lead to severe bleeding, and severe complications, if adjacent blood vessels are injured during the procedure. methods: in our series, we perform a step-up approach involving a multidisciplinary group of general surgeons, gastroenterologists, infectious disease physicians, critical care internists, interventional radiologists, and nutritional services to formulate a management plan. the necrotic pancreas is initially drained with an ir-guided drain; fluid cultures are sent for microbiology, with treatment with appropriate antibiotics if deemed necessary. the drain is gradually upsized to a fr drain to form a well-defined tract for surgical debridement, and a preoperative ct scan of the abdomen with iv contrast is obtained to assess the location and proximity of the vasculature around the necrotic pancreas. the interventional radiologist is consulted to discuss possible ir embolization of the splenic artery prior to surgical debridement. the patient then undergoes video-assisted retroperitoneal pancreatic necrosectomy, with a sump drain left in situ in the pancreatic fossa. postoperative management in the surgical icu is led by the critical care internist. results: three patients were managed by this multidisciplinary approach, with excellent outcomes. one patient underwent preoperative ir embolization followed by surgical debridement; a second patient underwent embolization immediately following debridement; one patient did not require any embolization but had ir on standby to intervene if needed. postoperatively, all three patients recovered well; all were tolerating oral intake well and were discharged to rehabilitation facilities. conclusion: our preliminary experience demonstrates that an early multidisciplinary plan involving various subspecialties can result in a pragmatic and successful approach to this potentially catastrophic condition.

introduction: liver resection with preservation of as much liver parenchyma as possible is called parenchymal-sparing hepatectomy (psh). psh has been shown to improve overall survival by increasing the re-resection rate in patients with colorectal liver metastases (crlm) and recurrence. the caudal-cranial perspective in laparoscopy makes the cranial segments ( , a, , and ) more difficult to access. the objective of this systematic review is to analyze the feasibility, safety, morbidity, and oncologic outcomes of laparoscopic psh. methods: a systematic review of the literature was performed; medline/pubmed, scopus, and cochrane databases were searched, and the search strategy was registered with prospero. all reported cases were categorized by area of resection, and a quantitative meta-analysis of operative time, blood loss, length of hospital stay, complications, and r resection was performed. results: of the studies screened for relevance, were selected.
studies in which interventions or endpoints were noncontributory, or reporting was incomplete, were excluded, so only publications remained, reporting data from patients who underwent laparoscopic psh. the highest oxford evidence level was b, and selective reporting bias was common due to single-center, noncontrolled reports. among them, ( . %) resections were in the cranial segments: ( . %), a ( . %), ( %), and ( . %), which previously would have required laparoscopic hemi-hepatectomies or sectorectomies. the most common tumor type was crlm ( %), and the second most common was hepatocellular carcinoma ( %). the feasibility of laparoscopic psh was %, the conversion rate was %, and complications were seen in % of cases. no perioperative mortality was reported. no standardized reporting format for complications was used across studies. meta-analysis revealed a weighted average operating time of minutes, estimated blood loss of cc, and length of stay of days. r resections were achieved in % of cases. conclusion: laparoscopic psh of difficult-to-reach liver tumors is feasible with acceptable conversion and complication rates, but relatively long operating times and relatively high blood loss. in future studies, data on long-term survival and tumor-type-specific recurrence should be reported and bias reduced.

yangseok koh, eun-kyu park, hee-joon kim, young-hoe hur, chol-kyoon cho; chonnam national university hwasun hospital, chonnam national university hospital. purpose: laparoscopic surgery has become the mainstream surgical approach owing to its stability and feasibility. even for liver surgery, the laparoscopic approach has become an integral procedure. according to the recent international consensus meeting on laparoscopic liver surgery, laparoscopic left lateral sectionectomy … conclusion: this study showed that laparoscopic lls is safe and feasible, as it involves less blood loss and a shorter hospital stay. for left lateral lesions, laparoscopic lls might be the first option to be considered. keywords: laparoscopy, left lateral sectionectomy.

outcome analysis of pure laparoscopic hepatectomy for hcc and cirrhosis by icg immunofluorescence imaging: a propensity score analysis. introduction: in laparoscopic hepatectomy, the surgeon cannot use the hand to palpate the liver lesion and estimate the resection margin. the icg immunofluorescence technique can show the liver tumour and has the potential to facilitate a thorough assessment during the operation. method: between and , patients underwent pure laparoscopic liver resection for hcc in our hospital. patients underwent surgery by the conventional laparoscopic approach; patients had laparoscopic hepatectomy with the additional icg immunofluorescence-augmented technique. surgical outcomes were compared using propensity score analysis with : matching. result: patients had icg immunofluorescence-assisted laparoscopic hepatectomy (group ); propensity-matched patients undergoing conventional laparoscopic liver resection were selected for comparison (group ). the median operation time was minutes vs minutes (p= . ), and the median blood loss was ml vs ml (p= . ). additional tumours were identified by the icg technique. patients had suspicious lesions picked up by the icg technique that proved to be benign on frozen-section examination. the sensitivity of tumour detection in group was %. r resection was achieved in % in group and group , respectively. hospital stay was days vs days (p= . ), and postoperative complications were ( %) vs ( . %) (p= . ); none of the patients developed icg-related complications. conclusion: in the current study, the new technique showed equally good short-term outcomes compared with conventional laparoscopic hepatectomy. icg immunofluorescence augmented reality is a promising technique that might facilitate easier identification of tumours during laparoscopic hepatectomy.
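a minimal sketch of 1:1 nearest-neighbour propensity-score matching, in the spirit of the comparison above, follows; the file and column names are hypothetical, not the study's dataset.

```python
# minimal sketch of 1:1 nearest-neighbour propensity-score matching;
# the file and column names below are hypothetical placeholders.
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.read_csv("hcc_resections.csv")             # hypothetical dataset
covariates = ["age", "tumour_size", "child_pugh"]  # assumed columns

# propensity = modelled probability of being in the icg group
ps_model = LogisticRegression().fit(df[covariates], df["icg_group"])
df["ps"] = ps_model.predict_proba(df[covariates])[:, 1]

treated = df[df["icg_group"] == 1]
controls = df[df["icg_group"] == 0].copy()
matched_idx = []
for _, row in treated.iterrows():
    j = (controls["ps"] - row["ps"]).abs().idxmin()  # closest control
    matched_idx.append(j)
    controls = controls.drop(j)                      # match without replacement
matched_controls = df.loc[matched_idx]
```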
) none of the patient developed icg related complication. conclusion: in the current study, the new technique showed equally good short-term outcome when compared with conventional laparoscopic hepatectomy. icg immunofluorescence augmented reality is a promising technique that might facilitate easier identification tumour during laparoscopic hepatectomy. surg endosc ( ) :s -s taking the training wheels off: transitioning from robotic assisted to total laparoscopic whipple introduction: there is a substantial learning curve to performing minimally invasive pancreatoduodenectomy (mis-pd) for surgeons who are trained in open pd. the learning curve to transition from robotic assisted pd (rapd) to total laparoscopic pd (tlpd) is not well established. methods: mis-pds performed between january and june performed by sc as a surgeon or co-surgeon were included for analysis. mis-pds were performed using a robotic assisted technique prior to august , and tlpds were performed subsequently. rapds performed prior to were excluded to limit the comparison to rapds after the initial learning curve. demographics, clinical and pathologic outcomes, operative and post-operative outcomes were compared. results: a total of rapds and tlpds were scheduled during the study period. there was no statistically significant difference in age, body mass index, or prior abdominal surgery. median time from initial clinic consultation to surgery was days for the rapd group versus days in the tlpd group (p= . ). conversion to laparotomy was required in of patients ( there were no operative complications or mortality. the mean hospital stay was ± . hours. there was no postoperative jaundice, bile leak, intra-abdominal collections or mortality. conclusion: when surgery is indicated for difficult acute calculous cholecystitis, laparoscopic subtotal cholecystectomy with control of the cystic duct is safe with excellent outcomes. however, if the critical view of safety can't be achieved due to obscured anatomy at calot's triangle, conversion to open surgery or cholecystostomy must be performed to prevent bile duct injury. scott revell, md , joshua parreco , rishi rattan, md , alvaro castillo, md ; u. miami -jfk gme consortium, university of miami, miller school of medicine introduction: over the last two decades the increasing incidence of benign liver tumors has led to the expanded need for clinicians to make therapeutic decisions regarding the utilization of open, minimally invasive and ablative techniques. the purpose of this study was to compare outcomes of the management of benign liver disease based on operative approach and pathology. methods: patients aged years or older who underwent liver surgery for benign liver tumors from to were identified in the nationwide readmissions database. patients were compared based on liver pathology, resection versus ablation, and an open versus laparoscopic/robotic approach. the outcomes of interest were in-hospital mortality, prolonged length of stay (los) [ days, and readmission within -days. univariable analysis was performed for these outcomes and multivariable logistic regression was performed using the variables with a p-value . on univariable analysis. results were weighted for national estimates. results: there were , patients undergoing surgery for benign hepatic tumors in the us during the study period. the most common pathology was benign neoplasm ( . %) followed by hemangioma ( . %), and congenital cystic disease ( . %). resection alone was performed in . 
%, ablation alone in . %, and resection with ablation in . %. a laparoscopic/robotic approach was used in . % of cases. the overall mortality rate was . %, a prolonged los was found in . %, and readmission within days occurred in . %. an increased risk for mortality was found with hemangioma (or . , p= . ) and congenital cystic disease (or . , p= . ). resection with ablation was associated with an increased risk of prolonged los (or . , p. ), while a laparoscopic/robotic approach was a protective factor for prolonged los (or . , p. ). patients treated with ablation alone were at decreased risk for readmission (or . , p. ). omar m ghanem, md , desmond huynh, md , tomasz rogula, md ; mosaic life care, cedars sinai, introduction: laparoscopic sleeve gastrectomy is the most commonly weight loss procedures performed worldwide. as such, there is great diversity in the techniques utilized. this study aims to identify and categorize the differences in techniques and assess the need for guidelines in this field. case description: surgeons were surveyed on the techniques they employ on biweekly basis using the international bariatric club facebook group. the survey included sleeve staple line reinforcement, preoperative work up, intraoperative hiatal dissection, bougie size, distance from pylorus to distal staple line, and intraoperative leak testing. surveys were conducted between may and july . each survey was active for weeks after which data was collected. participants were required to select a single answer per question. discussion: when surveyed on staple line reinforcement (n= ), surgeons used no reinforcement, over-sewed, buttressed, clipped as necessary, over-sewed as necessary. for preoperative work up (n= ), utilized routine endoscopy, routinely obtained upper gi series, routinely obtained both endoscopy and upper gi, and employed endoscopy or upper gi series only in patients who were symptomatic. for hiatal dissection (n= ), surgeons dissected the hiatus routinely, dissected only when obvious hernias intraoperatively, dissected only if the hernia was detected on preoperative work up, and dissected in the setting of gerd symptoms. for sleeve caliber sizing (n= ), bougie \ f was used by surgeon, bougie size f, f, f were utilized by , bougie size f and f were utilized by , bougie[ f were used by , and gastroscopes ( f) were used by . with regards to distance from pylorus to where the sleeve staple line was initiated (n= ), participants started \ cm away from pylorus, between and cm, and started [ cm from pylorus. finally, for preferred intraoperative leak test during sleeve (n= ), methylene blue was used by surgeons, air leak test by , used both, and opted for none. conclusion: this study characterizes the wide varieties in the techniques used during sleeve gastrectomy. a great number of variations exist in every parameter surveyed; however, there is little evidence comparing the effectiveness and safety of these variations. in this setting, further randomized controlled trials are necessary and should be used to construct guidelines to best optimize outcomes in this extremely common and necessary operation. yen-yi juo, md, mph, yas sanaiha, md, yijun chen, md, erik dutson, md; ucla introduction: bariatric surgeries are commonly performed in accredited centers of excellence, but no consensus exists regarding the optimal readmission destination when complications occurred. 
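Several of the database analyses in this section, including the benign-liver-tumor study above and the readmission study that follows, report adjusted odds ratios from multivariable logistic regression. A minimal sketch of that workflow on synthetic data, with all variable names hypothetical rather than taken from any of these studies, might look like this:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
# Synthetic cohort: outcome and covariates are invented for illustration only.
df = pd.DataFrame({
    "non_index_readmit": rng.integers(0, 2, n),   # 1 = readmitted to a non-index hospital
    "complication":      rng.integers(0, 2, n),   # complication during the index stay
    "age":               rng.normal(45, 12, n),
    "bmi":               rng.normal(45, 8, n),
})

# Multivariable logistic regression; exponentiated coefficients are adjusted ORs.
model = smf.logit("non_index_readmit ~ complication + age + bmi", data=df).fit(disp=False)
odds_ratios = np.exp(model.params)
conf_int = np.exp(model.conf_int())   # 95% CIs for the ORs
print(pd.concat([odds_ratios.rename("OR"), conf_int], axis=1))
```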
Our study aims to examine the impact of care fragmentation on postoperative outcomes and to evaluate its causes and consequences among patients undergoing -day readmission after bariatric surgery. Methods: the Metabolic and Bariatric Surgery Accreditation and Quality Improvement Program (MBSAQIP) database was used to identify patients who experienced -day unplanned readmission following bariatric surgery. Non-index readmission was defined as any readmission occurring at a hospital other than the one where the initial surgery was performed. The primary outcome was -day mortality after surgery. Logistic regressions were used to identify risk factors for non-index readmission and to adjust for confounders in the association between non-index readmission and -day mortality. Results: a total of , patients were identified as experiencing -day unplanned readmission following bariatric surgery, among whom ( . %) were non-index readmissions. Occurrence of a postoperative complication during the initial hospitalization was the most significant risk factor for non-index readmission (OR . , % CI . - . , p= . ) in our multivariate logistic regression. The three most common reasons for readmission were similar in the two comparison groups: nausea/vomiting, abdominal pain, and anastomotic leak. A similar proportion of patients underwent reoperation in the two groups ( . vs . %, p= . ). Even after adjusting for the occurrence of complications, readmission to a non-index facility was still associated with a . -fold odds of -day mortality ( % CI . - . , p < . ). Conclusion: non-index readmission significantly increases the risk of -day mortality following bariatric surgery. Patients were more likely to present to a non-index facility if complications occurred during their initial hospitalization. Further patient education is required to reinforce the importance of continuity of care during the management of bariatric complications and to guide patients' decision-making in choosing a readmission destination.

Introduction: sleeve gastrectomy has become the most performed bariatric surgery. Removing part of the stomach causes weight loss by restricting food intake and by regulating the production of gut hormones, particularly ghrelin. However, prognostic factors for weight loss after sleeve gastrectomy have been difficult to find. The goal of this study was to assess the correlation between the volume of the resected stomach and weight loss. Methods and procedures: the volume of the resected stomach was measured in patients undergoing sleeve gastrectomy. A standard laparoscopic technique was used: calibration was performed tightly around a Fr bougie, and stapling started - cm from the pylorus. The standardized measurement technique involved insufflating the specimen with saline through a G catheter to a pressure of cmH2O immediately after removal. Resected stomach volume, gender, age, BMI, height, and percent total weight loss (%TWL) at months and year were prospectively recorded. Correlations between variables were analyzed with Pearson's test and linear regression models. Conclusion: the removed stomach was larger in men than in women, and its size correlated weakly with height. However, the volume of the resected stomach did not appear to influence short-term weight loss. Gastric size should not be considered a prognostic factor for weight loss in patients undergoing sleeve gastrectomy.
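The correlation analysis just described, Pearson's test plus a linear regression of weight loss on resected volume, can be sketched as follows; the data and variable names below are synthetic placeholders, not the study's measurements:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical measurements: resected stomach volume (ml) and %TWL at 1 year.
volume_ml = rng.normal(900, 150, 60)
pct_twl = 25 + 0.001 * volume_ml + rng.normal(0, 5, 60)

r, p = stats.pearsonr(volume_ml, pct_twl)          # correlation coefficient and p-value
slope, intercept, r_lin, p_lin, se = stats.linregress(volume_ml, pct_twl)

print(f"Pearson r = {r:.2f} (p = {p:.3f})")
print(f"Linear model: %TWL = {intercept:.1f} + {slope:.4f} * volume (p = {p_lin:.3f})")
```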
Revisional bariatric surgery after initial laparoscopic sleeve gastrectomy: what to choose. Salman Alsabah, Eliana Al Haddad, Ahmad Almulla, Khaled Alenezi, Shehab Akrouf, Waleed Buhamid, Mohannad Alhaddad, Saud Al-Subaie; Amiri Hospital. Introduction: bariatric surgery has been shown to produce the most predictable weight-loss results, with laparoscopic sleeve gastrectomy (LSG) being the most performed procedure as of . However, inadequate weight loss may create the need for a revisional procedure. The aim of this study is to compare the efficacy of laparoscopic re-sleeve gastrectomy (LRSG), laparoscopic Roux-en-Y gastric bypass (LRYGB), and mini gastric bypass (MGBP) in attaining successful weight loss following initial LSG. Methods: a retrospective analysis was performed on all patients who underwent LSG at Amiri and Royale Hayat Hospital, Kuwait, from to . A list was obtained of those who underwent revisional bariatric surgery after initial LSG, and their demographics were analyzed.

Introduction: the aim of this study is to identify potential risk factors for, and early indicators of, perioperative hemorrhage in the bariatric population, specifically in relation to perioperative blood pressure. Laparoscopic bariatric surgery in the United States has been steadily increasing over the past several years; between and , the annual number of cases increased by %. Although rare, hemorrhagic complications (HC) occur at a rate of - % and can lead to significant morbidity and mortality. By identifying factors that may place a patient at higher risk of HC, surgeons can potentially mitigate those risks; such modifications could reduce morbidity and limit the need for transfusion or reoperation. Methods and procedures: a retrospective case-control series was performed including all patients who underwent either laparoscopic sleeve gastrectomy (SG) or laparoscopic Roux-en-Y gastric bypass (GB) in at a single bariatric center of excellence. A total of patients were identified with perioperative HC. Each patient was matched : for procedure, body mass index, and medical comorbidities. Peak systolic, diastolic, and mean arterial pressures were compared between groups at the time of admission, intraoperatively, and during the remainder of the initial hospital stay. Welch's t-tests were used for comparisons between groups. Results: a total of procedures were performed, with de novo SG and de novo GB. Revisional bariatric cases were excluded from the study. HC occurred in ( . %) patients in total, SG and GB. Four patients required operative treatment for HC: were treated laparoscopically and required laparotomy. The mean diastolic pressure at the time of arrival on the day of surgery was higher in patients who developed HC (p= . ), and the mean peak intraoperative diastolic pressure was lower in patients who developed HC (p= . ). There was no statistical difference in peak systolic or mean arterial pressures throughout the hospital stay. Conclusions: bariatric surgical patients with elevated preoperative diastolic blood pressure are at increased risk of postoperative HC. Additionally, a decreased peak intraoperative diastolic blood pressure may be an early indicator of HC in bariatric patients.

Introduction: bariatric surgery in the adult population is recognized as one of the most effective treatments for obesity and its comorbidities. Nonetheless, the safety, efficacy, and long-term outcomes of bariatric surgery in young adults are still not well documented.
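The matched comparison in the hemorrhage study above used Welch's t-tests, which do not assume equal variances or equal group sizes. A hedged sketch with invented blood-pressure values, not the study's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Hypothetical peak diastolic pressures (mmHg) for HC cases and matched controls.
dbp_hc = rng.normal(88, 12, 25)
dbp_ctrl = rng.normal(80, 9, 75)

# equal_var=False selects Welch's t-test (robust to unequal variances).
t, p = stats.ttest_ind(dbp_hc, dbp_ctrl, equal_var=False)
print(f"Welch's t = {t:.2f}, p = {p:.4f}")
```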
The aim of our study is to evaluate the safety and efficacy of laparoscopic sleeve gastrectomy (LSG) in young adults (< years old) versus older adults (≥ years old). Methods: we retrospectively reviewed all patients who underwent bariatric surgery at our institution from to . Propensity score matching was used to balance covariates, matching for common demographics and comorbidities between the younger patient population (< years old) and the control group (≥ years old). All tests were two-tailed and performed at a significance level of . . Statistical software R, version . . , was used for all analyses. Results: of patients, . % (n= ) met our inclusion criteria after matching. We found . % (n= ) patients under years old and . % (n= ) patients greater than or equal to years old (control group). Our younger population was predominantly Caucasian and female, . % (n= ) and . % (n= ) respectively. The mean age was . ± . years with a preoperative body mass index (BMI) of . ± . kg/m² in the younger group, compared with . ± . years and a BMI of . ± . kg/m² in the control group. Diabetes and hypertension were present in . % (n= ) and . % (n= ) of the younger group, respectively. No statistically significant difference was found in the percentage of excess BMI loss (%EBMIL) at and months of follow-up, as shown in table . When comparing %EBMIL at months of follow-up, the younger group had . % more EBMIL than the control group (p= . ). When assessing postoperative complications, we observed no statistically significant difference. Conclusions: bariatric surgery is equally effective and safe in the young adult population, with significantly better %EBMIL at months after surgery. Further prospective studies are needed to elucidate the resolution and behavior of comorbidities in the younger bariatric population.

Minimally invasive conversion of sleeve gastrectomy to Roux-en-Y gastric bypass for intractable gastroesophageal reflux disease: short-term outcome. Background: surgical management recommendations for intractable gastroesophageal reflux disease (GERD) after sleeve gastrectomy (SG) remain controversial. This case series demonstrates our experience with treatment of postoperative intractable GERD using minimally invasive conversion of SG to Roux-en-Y gastric bypass (RYGB). Patients and methods: this is a retrospective review of a prospective data registry (MBSAQIP) from Jan through Sept . Eleven patients, female and male, were evaluated. Of the surgeries, were laparoscopic, assisted with the Xi da Vinci robot, and assisted with the Si da Vinci robot. All patients presented with intractable reflux on high-dose PPI; three had a history of aspiration pneumonia. ... ± . %, respectively; one was omitted due to pending results. Conclusion: several options exist for operative management of intractable GERD after SG, including redo sleeve gastrectomy, combined gastrectomy with fundoplication, conversion to gastric bypass, and anti-reflux procedures such as LINX. Reported series remain small and require further study to evaluate the consistency of results. We found minimally invasive conversion of SG to RYGB to be a highly effective and safe option for the treatment of intractable GERD.

Setthasiri Pantanakul, Chotirot Angkurawaranon, Ratchamon Pinyoteppratarn, Poochong Timrattana; Rajavithi Hospital. Background: obesity is an important health problem affecting more than million people worldwide.
Esophageal dysmotility is a gastrointestinal pathology associated with obesity; however, its prevalence and characteristics remain unclear. Esophageal dysmotility has a high prevalence among obese patients regardless of gastrointestinal symptoms. Objective: to identify the prevalence of esophageal motility disorders in asymptomatic obese patients. Materials and methods: a prospective study was performed between June and March . A total of morbidly obese patients who visited the bariatric and metabolic clinic at Rajavithi Hospital (Bangkok, Thailand) underwent preoperative evaluation with high-resolution esophageal manometry using the ManoScan ESO (Smith Medical). Tracings were retrospectively analyzed and reviewed according to the Chicago Classification criteria for esophageal motility disorders. Results: among the asymptomatic obese participants, twenty-five were female. The mean age was . ( - ) years. Most participants were classified as class III obesity or higher; the mean BMI was . kg/m². No hiatal hernia was found, and the anatomy of the esophagus was normal in all patients. The mean IRP was . mmHg. Twenty-one patients ( . %) demonstrated an IRP above the normal limit (> mmHg). Four patients demonstrated premature contraction (DL < . seconds). Hypercontractile esophagus was identified in patients, and ineffective motility disorder was found in patients. Two patients were diagnosed with distal esophageal spasm (DES), two were compatible with type achalasia, and patients ( . %) had esophageal outflow obstruction. None of the patients demonstrated incomplete bolus clearance, even with a high IRP or abnormal motility. Conclusion: this study reveals a high prevalence of esophageal dysmotility in asymptomatic Thai obese patients. The most common abnormalities were esophageal outflow obstruction and ineffective motility. The Chicago Classification of esophageal motility disorders may not be suitable for the obese population.

Sitembile Lee, MS, Chike Okolocha, Aliu Sanni, MD FACS; Philadelphia College of Osteopathic Medicine GA Campus, Eastside Bariatric and General Surgery. Introduction: Roux-en-Y gastric bypass (RYGB) is the most popular bariatric procedure performed worldwide, accounting for % of all bariatric procedures. However, in patients with a body mass index (BMI) ≥ kg/m² (super-super obese), the RYGB procedure can be technically challenging. This has led to the adoption of single-stage treatments such as the one-anastomosis (mini) gastric bypass (OAGB/MGB) in super-super obese patients. Proponents of the OAGB/MGB claim clinical outcomes comparable to those of RYGB. The aim of this study is to compare the outcomes of the two procedures by examining the literature. Methods: a systematic review was conducted through PubMed to identify relevant studies from to with comparative data on RYGB versus OAGB/MGB in super-super obese populations. The primary outcome was percentage excess weight loss (%EWL); other outcomes included operative time, complication rates, and length of hospital stay. Results were expressed as standardized differences in means with standard error. Statistical analysis was done using random-effects meta-analysis to compare the mean values of the two groups (Comprehensive Meta-Analysis version . . software; Biostat Inc., Englewood, NJ).

Introduction: obesity is becoming more prevalent in patients with inflammatory bowel disease (IBD). The obese body habitus increases the complexity of the surgeries that are often needed to treat IBD.
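The meta-analysis above pools standardized mean differences under a random-effects model (the authors used Comprehensive Meta-Analysis software). A generic DerSimonian-Laird pooling step, on invented effect sizes rather than the review's data, looks like this:

```python
import numpy as np

# Hypothetical per-study standardized mean differences and their standard errors.
smd = np.array([0.35, 0.10, 0.52, 0.28])
se = np.array([0.15, 0.20, 0.25, 0.18])

w_fixed = 1.0 / se**2                                  # inverse-variance weights
mu_fixed = np.sum(w_fixed * smd) / np.sum(w_fixed)

# DerSimonian-Laird estimate of the between-study variance tau^2.
q = np.sum(w_fixed * (smd - mu_fixed) ** 2)
df = len(smd) - 1
c = np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)
tau2 = max(0.0, (q - df) / c)

w_rand = 1.0 / (se**2 + tau2)                          # random-effects weights
mu_rand = np.sum(w_rand * smd) / np.sum(w_rand)
se_rand = np.sqrt(1.0 / np.sum(w_rand))
print(f"pooled SMD = {mu_rand:.2f} +/- {1.96 * se_rand:.2f} (95% CI), tau^2 = {tau2:.3f}")
```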
Some surgeons may delay definitive surgical treatment because of obesity. Little data exist on bariatric surgery in obese patients with IBD. Methods: we retrospectively identified patients with a known diagnosis of IBD who underwent bariatric surgery from to . Demographics and postoperative outcomes were assessed. Results: patients were identified: with ulcerative colitis (UC) and with Crohn's disease (CD). Of the UC patients, none had undergone surgery for UC and only one was on a biologic. Of the UC patients, had an adjustable gastric band (AGB), had gastric bypass, and had sleeve gastrectomy. One patient with an AGB had it replaced for a slip and subsequently removed for dysphagia. The average preoperative BMI in UC patients was . ; postoperative BMI was . , with excess weight loss (EWL) of %. Average follow-up was months. Of the CD patients, had ileocolic resections and one had total proctocolectomy with end ileostomy; one was on Remicade and one on MP. Of the CD patients, had AGB, had gastric bypass, and had sleeve gastrectomy. One AGB patient had conversion to gastric bypass because of dysphagia and poor weight loss; a second AGB patient had band removal because of dysphagia. The average preoperative BMI in CD patients was . ; postoperative BMI was . , with average EWL of %. Average follow-up was months. Overall, AGB patients had % EWL, sleeves %, and gastric bypass %. Two UC patients had postoperative flares, one immediately postoperatively and one month postoperatively. Four of the band patients had dysphagia, with one replacement, two removals, and one conversion to bypass. There were no leaks, intra-abdominal infections, fistulas, or wound infections. Conclusions: UC patients appear to have higher excess weight loss compared with Crohn's patients (EWL % vs %), although this difference was not statistically significant. AGB had poor results in both UC and CD patients. Sleeve gastrectomy and gastric bypass result in effective weight loss for obese patients with IBD. Gastric bypass in IBD patients is controversial but may be appropriate in the right clinical setting.

Introduction: previous studies suggest that modest preoperative weight loss is associated with improved weight loss following bariatric surgery. However, there remains a need to investigate factors that may successfully predict preoperative weight loss among bariatric patients. Methods and procedures: this analysis included patients who underwent laparoscopic Roux-en-Y gastric bypass (RYGB), sleeve gastrectomy, or gastric banding at an academic medical center in California. Data were measured at patients' consult and preoperative clinical visits. Preoperative weight-loss outcomes were categorized as follows: no weight loss, lost weight, or gained weight. Associations between categorical sociodemographic and surgical characteristics and preoperative weight-loss outcomes were assessed using the chi-square test of association; associations between continuous measures and preoperative weight-loss outcomes were assessed using ANOVA. A subgroup analysis was completed among participants who lost weight prior to bariatric surgery, in which Wilcoxon rank-sum and Kruskal-Wallis tests were used to evaluate associations between patient characteristics and the number of pounds lost. Results: patients (n= , ) were predominantly ages - ( %), female ( %), white ( %), and privately insured ( %). Patient race was significantly associated with weight-loss outcomes (p= . ): whereas % of white patients lost weight prior to surgery, only % of black patients lost preoperative weight.
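A chi-square test of association like the one just reported operates on a contingency table of counts. A minimal sketch, with counts invented for illustration rather than taken from the study:

```python
import numpy as np
from scipy import stats

# Hypothetical counts: rows = race group, columns = (lost weight, no change, gained).
table = np.array([
    [520, 210, 90],   # e.g., white patients
    [110,  80, 40],   # e.g., black patients
])
chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```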
Among privately insured patients, % lost weight; in contrast, % of patients insured by Medi-Cal/Medicaid lost weight (p= . ). On average, lower baseline excess body weight was associated with no weight loss. Patients who lost preoperative weight (n= , ) were included in the subgroup analysis. Male sex (p < . ), black race (p < . ), undergoing laparoscopic RYGB (p= . ), no previous abdominal surgeries (p= . ), upper-tertile baseline weight (p < . ), waist circumference (p < . ), percent body fat (p < . ), BMI (p < . ), excess body weight (p < . ), and systolic blood pressure (p= . ) were associated with more pounds lost. Conclusions: this study demonstrates various associations between sociodemographic and clinical patient characteristics and preoperative weight loss. Given previous literature indicating a positive relationship between preoperative and postoperative weight loss following bariatric surgery, these results suggest an opportunity to improve preoperative weight loss in specific groups.

Yen-Yi Juo, MD, MPH, Usah Khrucharoen, MD, Yijun Chen, MD, Yas Sanaiha, MD, Peyman Benharash, MD, Erik Dutson, MD. Background: besides the rate and extent of weight loss, little is known regarding factors predicting interval cholecystectomy (IC) following bariatric surgery, which are important considerations when deciding whether to perform prophylactic cholecystectomy. In addition, no previous studies have quantified the incremental costs associated with IC. We aim to identify risk factors predicting IC following bariatric surgery and to quantify its costs. Methods: a retrospective cohort study was performed using the Nationwide Readmissions Database - . Cox proportional-hazards analyses were used to identify risk factors for IC. Linear regression models were constructed to examine associations between cholecystectomy timing and cumulative hospitalization costs.

Background: patient-reported outcomes after bariatric surgery are important in understanding the longitudinal effects of surgery. The impact of hospital practices and surgical outcomes on follow-up rates remains unexplored. Objective: to assess the effect of hospital-level practices and -day complication rates on -year follow-up rates for a standardized patient-reported outcomes survey. Methods: bariatric surgery program coordinators in a statewide quality improvement collaborative were surveyed in June about their practices for obtaining patient-reported outcomes data one year after surgery. Hospitals were ranked based on their follow-up rates between and (accounting for overall performance and improvement). Univariate analysis was used to identify hospital practices associated with higher follow-up rates. Multivariable regression was used to identify independent associations between -day outcomes and follow-up rates after adjusting for patient factors. Results: overall, follow-up rates improved from ( . % ± . ) to ( . % ± . ), though there was wide variability between hospitals ( . % vs . % in ). The coordinator survey response rate was %. Sixty-one percent of surveyed coordinators perceived that surgeons prioritize high follow-up rates. When asked how long their patients were followed, % of coordinators noted that their programs provided lifelong follow-up. Patient reminders about the -year survey were used by % of programs, mostly during clinic visits ( %).
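The interval-cholecystectomy study above screens risk factors with Cox proportional-hazards models on time-to-event data. A minimal lifelines sketch on synthetic follow-up data, with all column names hypothetical:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
n = 500
df = pd.DataFrame({
    "months_followed": rng.exponential(18, n),      # time from bariatric surgery
    "interval_chole":  rng.integers(0, 2, n),       # event indicator (1 = IC occurred)
    "female":          rng.integers(0, 2, n),
    "rapid_wt_loss":   rng.integers(0, 2, n),
})

# Fit a Cox model; remaining columns are treated as covariates.
cph = CoxPHFitter()
cph.fit(df, duration_col="months_followed", event_col="interval_chole")
cph.print_summary()   # hazard ratios with 95% CIs
```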
Most programs ( %) had implemented strategies to improve follow-up rates, such as handing out the survey ( %) during clinic visits. Follow-up providers included surgeons ( %), nurse practitioners ( %), and/or registered dietitians ( %). Patient disinterest ( %), loss to follow-up ( %), survey length ( %), and lack of staff/resources ( %) were the factors most commonly perceived as barriers to high follow-up rates. Compared with programs in the bottom quartile of follow-up rates, those in the top quartile were more likely to hand the survey to patients during clinic visits ( % vs . %; p= . ) and had lower rates of risk-adjusted severe complications ( . % vs . %; p= . ), readmissions ( . % vs . %; p= . ), and reoperations ( . % vs . %; p= . ). Conclusions: hospitals vary considerably in their -year follow-up rates when seeking patient-reported outcomes data after bariatric surgery, and there are significant differences in program-specific practices for obtaining these data. Hospitals with higher -year follow-up rates were more likely to physically hand surveys to patients during a clinic visit and had lower -day severe complication, readmission, and reoperation rates. Improved -year patient-reported outcomes follow-up after bariatric surgery may be a proxy for higher-quality perioperative care.

David Merkle, Kazim Mohommed, Danielle R Rioux, Dilendra Weerasinghe, MD, FACS; Nova Southeastern University, Herbert Wertheim College of Medicine. Bariatric surgery is gaining popularity not only for its weight-loss benefits but also for its metabolic effects. We present a -year-old female patient with symptoms of neuroglycopenia occurring years after Roux-en-Y gastric bypass surgery. During one of her syncopal episodes, her blood sugar was noted to be mg/dL. Continuous glucose monitoring demonstrated postprandial hypoglycemia, averaging episodes per day, with a maximum of episodes in one day. Upon further evaluation, HbA1c, chromogranin A, somatostatin, and urinary sulfonylurea levels were all normal, with the C-peptide level within the upper limit of normal. CT of the abdomen and pelvis did not show any obvious pancreatic mass, and since the chromogranin A level was normal, this led to an empiric diagnosis of nesidioblastosis by exclusion. The patient was initially placed on medical management, which included a carbohydrate-restricted diet of g per meal, eating - small meals per day, and taking mg of acarbose three times per day. Overall, her symptoms have improved, and she now has - episodes per month, compared with about episodes per day. We will also present data regarding invasive treatment options available when medical treatment has failed, such as gastric bypass reversal versus distal pancreatectomy.

Vertical banded gastroplasties (VBGs) were a common bariatric procedure in the s but have largely fallen out of favor due to unsatisfactory weight loss and a relatively high incidence of long-term complications such as dysphagia and severe gastroesophageal reflux disease (GERD). One way to address these undesirable effects is conversion to a Roux-en-Y gastric bypass (RYGB). The aim of this study was to assess the safety and efficacy of VBG-to-RYGB conversion. Outcomes of VBG revisions performed at an academic center between and were reviewed. Of the VBG revisions, gastrogastrostomies were created in two patients, two underwent a planned -stage conversion, and VBGs were converted to RYGBs.
Patients were operated on an average of years after their initial VBG. Presenting symptoms were weight regain (n= , . %), dysphagia (n= , . %), or severe GERD (n= , . %). Fourteen patients ( %) had a gastric staple-line dehiscence. Of the VBG-to-RYGB conversions, were laparoscopic, were converted to open, were open, and were robotic-assisted. Average operative time and length of hospital stay were . minutes and . days, respectively. Within the first months postoperatively, twelve ( %) patients required readmission directly related to surgery, while eight ( %) visited the emergency department. Eight patients ( %) required at least one unplanned operation for complications during the entire follow-up: small bowel obstruction (n= , at week, months, and months), necrosis/leak of the remnant stomach requiring remnant gastrectomy (n= ), tracheostomy for prolonged respiratory failure (n= ), bleeding (n= ), anastomotic leak (n= ), and hemothorax requiring VATS (n= ). Four patients ( %) had a contained perforation that was medically managed, and five ( %) developed a gastrojejunal anastomotic stricture requiring endoscopic intervention. One patient ( . %) developed a pulmonary embolism. There was no mortality directly related to surgery. Complete resolution or improvement of GERD/dysphagia was seen in all patients at short-term follow-up. Patients who presented with weight regain had a mean BMI loss of . ± . points over a median follow-up of . months, up to a year after conversion to RYGB. In summary, reoperative bariatric surgery after VBG is complex, requiring longer operative times and length of stay. Our study found a % risk of severe complications requiring reoperation, compared with the previously cited % rate of short- and long-term complications. Conversion of VBG to RYGB provides excellent relief of severe GERD and dysphagia and is a viable option for significant weight reduction.

Introduction: bariatric surgery is a safe and effective treatment for severe obesity and its comorbidities. However, concomitant splenectomy is sometimes required due to uncontrolled bleeding during surgery. Limited literature exists regarding the effects of concurrent splenectomy on outcomes of bariatric surgery; this study aimed to determine these outcomes. Methods: adult patients with obesity who underwent primary, elective laparoscopic Roux-en-Y gastric bypass (LRYGB) or laparoscopic sleeve gastrectomy (LSG) with concomitant splenectomy were identified from the Metabolic and Bariatric Surgery Accreditation and Quality Improvement Program (MBSAQIP, ) and National Surgical Quality Improvement Program (NSQIP, - ) datasets. Using propensity scores (based on baseline variables), patients who underwent primary bariatric surgery were matched : to a control group (primary LRYGB/LSG without concomitant splenectomy), and thirty-day postoperative outcomes were compared. Continuous and categorical variables were reported as medians with interquartile range (IQR) and counts with percentages, respectively.

Background: several previous studies have suggested a correlation between age and weight loss after bariatric surgery. Objective: the aim of our study is to further address age as a preoperative factor determining the amount of weight loss after bariatric surgery.
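Abstracts throughout this section, including the age analysis that follows, report weight loss as %TBWL, %EWL, or %EBMIL. As a quick reference, the standard definitions are easy to encode; the BMI-25 ideal-weight convention below is one common choice, not necessarily the one used by any of these authors:

```python
def pct_tbwl(preop_kg: float, current_kg: float) -> float:
    """Percent total body weight loss."""
    return 100.0 * (preop_kg - current_kg) / preop_kg

def pct_ewl(preop_kg: float, current_kg: float, height_m: float) -> float:
    """Percent excess weight loss; ideal weight taken at BMI 25 (one common convention)."""
    ideal_kg = 25.0 * height_m ** 2
    return 100.0 * (preop_kg - current_kg) / (preop_kg - ideal_kg)

# Worked example: 130 kg preop, 95 kg at follow-up, height 1.70 m.
print(f"%TBWL = {pct_tbwl(130, 95):.1f}, %EWL = {pct_ewl(130, 95, 1.70):.1f}")
```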
Materials and methods: we performed a retrospective analysis of outcomes from a prospectively maintained database of , obese patients who underwent either sleeve gastrectomy (SG) or Roux-en-Y gastric bypass (RYGB) at our hospital between and . We analyzed the -month, -month, and -year postoperative percent total body weight loss (%TBWL) according to preoperative age. Results: the average age of patients included in the study was years, with a range of - years. An inverse relationship between preoperative age and postoperative weight loss was observed: younger patients achieved a higher %TBWL than older patients at the -month, -month, and -year postoperative follow-up. The average %TBWL for all patients at the -month, -month, and -year follow-up was . %, . %, and . %, respectively. At the -year follow-up, for every decade increase in age (above the average age of ), patients lost % less TBWL. Conclusion: in our study, younger patients tended to lose a greater %TBWL than older patients after bariatric surgery.

Results: patients participated in the survey. The median age was years (IQR: - ) and . % were female. The following responses were encountered when asked about the importance of surgery-related factors: ... The study population indicated the following responses regarding expectations of magnetic surgery compared with conventional laparoscopy: ... There was no significant evidence of different responses across demographic groups. Additionally, . % of the population indicated that a surgeon performing magnetic surgery should be more skillful than a surgeon performing conventional laparoscopy. Conclusion: this study represents the first report of bariatric patients' perceptions regarding surgery-related factors. Notably, nearly % of the cohort indicated that cosmesis after surgery is an important factor, whereas the responses regarding the remaining factors were as expected. The bariatric population included in this study had a positive perception of magnetic surgery. Furthermore, the population perceived that this technique is associated with better outcomes, better cosmetic results, and higher surgeon dexterity.

Introduction: although much is known regarding the medical outcomes of metabolic surgery, less is known regarding quality-of-life outcomes. We hypothesized that the collection of patient-reported outcomes (PROs) could help us understand quality of life in this patient population. We chose to primarily use Patient-Reported Outcomes Measurement Information System (PROMIS) instruments because of their broad applicability, low cost, and support for computer-adaptive testing. Methods: we implemented routine collection of PROs as part of clinical care in December . Patients were offered tablets in clinic and were asked to complete the surveys at most of their visits. We used computer-adaptive testing to decrease the time needed to survey. We collected the following PROMIS instruments: depression, pain interference, physical function, and satisfaction with social roles. We also collected the GERD-HRQL, a general health question, and a current-health visual analog scale (VAS). We retrospectively reviewed our results from December through September . Results: our response rate was % over the last year of collection. In total, assessments were completed by patients.
The mean scores in our total patient population were as follows: VAS , GERD-HRQL , general health , depression , pain , physical function , and social roles . For PROMIS instruments, the national population mean is , with a standard deviation of . For the depression and pain scores a higher score is worse, while for social roles and physical function a higher score indicates better quality of life. Conclusions: routine collection of patient-reported outcomes can be implemented in a metabolic surgery clinic. Health-related quality of life appears to be decreased in this patient population compared with the general public. Further work is ongoing to learn about postoperative trends, as well as differential effects of metabolic procedures.

The effect of perioperative antibiotic drug class on the resolution rate of hypertension after Roux-en-Y gastric bypass and sleeve gastrectomy. Results: in total, RYGB and SG were included in our analysis. No significant differences were found between cefazolin and clindamycin regarding hypertension resolution rates after SG. There was a significant difference in the resolution of hypertension after RYGB with the use of prophylactic clindamycin versus cefazolin: as shown in figure , patients who underwent RYGB and received clindamycin had a significantly higher rate of hypertension resolution than those who received cefazolin. This effect started at weeks postoperatively ( . % vs . %, respectively, p= . ) and persisted up to year ( . % vs . %, respectively, p= . ). We found no significant differences in patient age, sex, number of preoperative antihypertensive medications, preoperative BMI, or %BMI change after year that would account for the effect of antibiotic choice on hypertension resolution. Conclusion: this study represents the first clinical report to suggest an impact of the type of antibiotic administered at the time of RYGB on comorbidity resolution, specifically hypertension. Future studies will be needed to confirm that the mechanism underlying this novel finding is differing modification of the gastrointestinal microflora by the specific perioperative antibiotic administered.

Introduction: laparoscopic adjustable gastric band with plication (LAGBP) is a novel bariatric procedure that combines the adjustability of the laparoscopic adjustable gastric band (LAGB) with the restrictive nature of the vertical sleeve gastrectomy (VSG). The addition of gastric plication to LAGB should provide better appetite control, more effective weight loss, and greater weight-loss potential. Objective: the purpose of the study was to analyze the outcomes of LAGBP at months. Setting: this is a retrospective analysis from one surgeon at a single private institution. Methods: data from all patients who underwent a primary laparoscopic LAGBP procedure from December to June were retrospectively analyzed. Data collected from each patient included age, gender, weight, body mass index (BMI), and excess weight loss (EWL). Results: sixty-six patients underwent LAGBP. The mean age and BMI were . ± . years and . ± . kg/m², respectively. All patients were beyond the -month postoperative mark, and no patient was lost to follow-up. Patients achieved an average EWL of % and . % at months ( . % follow-up) and months ( . % follow-up), respectively, and lost a mean BMI of . kg/m² and . kg/m² at months and months, respectively. The total number of fills during the study period was , and the mean fill volume was . ± cc. Dysphagia was the most common long-term complication. The mortality rate was %. Conclusions: LAGBP is a relatively safe and effective bariatric procedure. In light of recent studies demonstrating poor outcomes following LAGB, LAGBP may prove to be the future for patients desiring a bariatric procedure without resection of the stomach.

The median interval between LRYGB and reoperation was months in group A and months in group B. The median percentage of excess weight loss (%EWL) was % vs %, respectively (p= . ). Patients ( %, in group A) were admitted emergently with acute abdominal pain. CT was performed in patients ( %) and showed signs of obstruction in all cases. The most common symptoms were abdominal pain and vomiting. Surgery was performed by laparoscopy in patients ( %) and by laparotomy or conversion in patients ( %). In all cases the internal hernia was reduced and all defects were closed. In only one patient (group A) was small bowel resected at the jejunojejunostomy. There was no mortality; one patient had pneumonia with acute respiratory distress, which was treated medically. Conclusions: closure of mesenteric defects at LRYGB with running non-absorbable sutures is recommended because it is associated with a significant reduction in the incidence of internal hernia.

Introduction: laparoscopic Roux-en-Y gastric bypass (RYGB) is a common and effective form of bariatric weight-loss surgery. However, a subset of patients fail to achieve the expected total body weight loss (TBWL) of greater than % after months or experience significant weight regain despite dietary, psychiatric, and behavioral counseling. Although alternative procedural interventions exist for operative revision after suboptimal RYGB weight loss, laparoscopic adjustable gastric banding (LAGB) provides an option with short operative time, low morbidity, and effective results. We have previously demonstrated that short-term ( -month) and mid-term ( -month) weight loss is achievable with LAGB for failed RYGB. The objective of this study is to report the long-term ( -year) outcomes of LAGB after RYGB failure. Methods and procedures: a retrospective review of prospectively collected data before and after RYGB (when available), and before and after revision with LAGB, was performed.

Background: saline-filled intragastric balloons have become a common outpatient treatment for obesity. Acute dilation, ischemia, and necrosis of the stomach have been described in the medical literature. Gastric necrosis from acute gastric dilation is a rare but life-threatening condition that requires timely diagnosis and management. We present a case of partial gastric ischemia with necrosis hours following placement of a saline-filled intragastric balloon. Bloating, nausea, and vomiting are common complaints following placement of saline-filled intragastric balloons and can lead to a delay in diagnosis; early diagnosis and management are essential to avoid this life-threatening complication. Case report: a -year-old woman (BMI , with comorbid diabetes mellitus) underwent uncomplicated placement of a saline-filled intragastric balloon for the treatment of obesity. Hours after placement the patient complained of cramping and bloating. Hours after placement she developed vomiting and presented to an emergency room for evaluation. She was found to have a blood glucose exceeding and a severely dilated stomach with pneumatosis on CT evaluation.
NG-tube decompression and ICU management of the severe hyperglycemia were initiated. Removal of the intragastric balloon was delayed - hours until an appropriate endoscopic retrieval kit could be obtained. Endoscopic retrieval was performed without incident, and near-complete necrosis of the gastric mucosa was noted; the antrum was the only area spared. Hours after retrieval, laparoscopic evaluation of the stomach revealed full-thickness necrosis of the entire fundus and greater curvature. Indocyanine green (ICG) fluorescent dye was used to assess the vascular integrity of the remaining stomach and to define the lines of resection. Resection of the greater curvature was performed using ICG fluorescence to ensure that the angle of His was viable and well perfused. The patient made a full recovery, and subtotal gastrectomy was avoided. Conclusions: spontaneous gastric distension exacerbated by gastric outlet obstruction can occur following placement of a saline-filled intragastric balloon. Unrecognized, this condition can lead to ischemia, necrosis, and perforation of the stomach. Appropriate evaluation of patients following placement of intragastric balloons is essential; recognition can be delayed because cramping, bloating, and vomiting are typical complaints after balloon placement. Untreated, gastric ischemia and necrosis can lead to early perforation, which is associated with a high mortality rate.

Introduction: morbid obesity has become a growing health risk in the United States, with up to % of Americans suffering from obesity. Bariatric surgery remains the best treatment for morbid obesity. The recent use of laparoscopic sleeve gastrectomy (LSG) as a single-stage procedure has met with great success because of its quick learning curve and minimal postoperative complication rates; however, there are concerns about whether LSG is an effective procedure for long-term weight loss. Although criticized at first, mini-gastric bypass (MGB) has become a strong option for morbidly obese patients because of its weight loss with minimal postoperative complications. The aim of this review is to assess the outcomes of LSG compared with MGB for the management of morbid obesity.

Introduction: we hypothesize that jejunoileal anastomosis and partial diversion using Magnamosis, a novel magnetic compression device, is technically feasible and will improve insulin resistance and metabolic syndrome similarly to bariatric surgery. Metabolic surgery has demonstrated improvements in various parameters including insulin resistance, triglyceride levels, and cholesterol. It may be technically feasible to perform a less-invasive operation through partial diversion, thereby stimulating an increase in incretins from the L-cells of the ileum to glean these benefits. Methods and procedures: we performed laparotomy and jejunoileal partial diversion using Magnamosis in five rhesus macaques with insulin resistance induced through dietary modification. After surgery, weight was monitored and a metabolic laboratory evaluation was performed weekly. Timed tests for triglyceride levels, GLP- , insulin, glucose, and bile acids were performed at baseline and again at and weeks postoperatively. The primates were followed for weeks prior to euthanasia. Results are presented as mean ± SEM, and all p-values were calculated using a two-sample Student's t-test.
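The primate study above summarizes results as mean ± SEM and compares timepoints with two-sample Student's t-tests. A sketch on invented GLP-1 values, not the study's measurements:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# Hypothetical GLP-1 values at baseline and 6 weeks post-diversion (n = 5 animals).
baseline = rng.normal(100, 15, 5)
post_op = rng.normal(140, 20, 5)

def sem(x):
    """Standard error of the mean (sample SD / sqrt(n))."""
    return x.std(ddof=1) / np.sqrt(len(x))

print(f"baseline = {baseline.mean():.1f} +/- {sem(baseline):.1f} (mean +/- SEM)")
print(f"post-op  = {post_op.mean():.1f} +/- {sem(post_op):.1f}")

t, p = stats.ttest_ind(baseline, post_op)   # classic two-sample Student's t-test
print(f"t = {t:.2f}, p = {p:.4f}")
```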
Introduction: many studies of individuals seeking bariatric surgery indicate a higher prevalence of psychiatric disorders in this population, both before and after surgery; however, results are not conclusive. The aim of this study was to investigate changes in psychiatric health after gastric bypass surgery. Methods: patients within the catchment area of the department of psychiatry of South Älvsborg Hospital who were operated on with gastric bypass during - were identified through the Scandinavian Obesity Surgery Registry (SOReg). Patients' files were examined, and psychiatric diagnoses and alcohol/drug abuse were recorded preoperatively and over a follow-up of years. Results: a total of operated patients were identified, of whom had been in contact with the psychiatric department before or after surgery. Patients, all women, had attempted suicide preoperatively, but made no attempts postoperatively. Patients ( men, woman) attempted suicide postoperatively without a previous history of suicide attempts. Four patients with a preoperative history of alcohol abuse were identified, all women; these individuals did not appear to abuse alcohol or drugs postoperatively. Postoperatively, patients with alcohol/drug abuse were identified ( men, women), none of whom had a prior history of abuse. Of the patients attempting suicide postoperatively ( men, woman), had postoperatively emerging alcohol/drug abuse. Conclusion: preoperatively known alcohol/drug abuse or suicide attempts do not seem to predispose to postoperative abuse problems or suicidal behavior. Preoperative identification of individuals prone to postoperative alcohol/drug abuse or suicide attempts seems difficult.

Introduction: in the past, our group has popularized predictive models for gastric bypass, sleeve gastrectomy, and gastric imbrication. There are currently no models to predict weight loss following single-anastomosis duodenal switch, so surgeons who offer this procedure are left to guess, based on their limited experience, how their patients will do after surgery. We have developed a simple office-based algorithm to predict weight loss following this procedure. Method: patients met the criteria for this study; these patients underwent surgery at a single institution from June to December . Non-linear regression analysis was performed to interpolate weight loss at one year. A multiple linear regression was run to determine the significant variables, and a model was then constructed to predict weight loss after single-anastomosis duodenal switch. Results: BMI, HTN, gender, and the interaction between HTN and DM were found to affect weight loss. The model achieved an R value of . , and the average prediction error of the model was . %EWL. Conclusion: today, too many surgical practices offer procedures tailored to the surgeon instead of to the needs of the patient. With our models, predicting postoperative weight loss can be a straightforward process using easily gathered data. Any surgeon could apply this in practice, allowing patients to choose targeted interventions based on their personal goals.

Introduction: there is a long-standing practice of testing anastomoses in both upper and lower GI surgery. Postoperative leaks in bariatric surgery are an uncommon but serious complication, increasing morbidity and the risk of mortality. The present study examines the practice of performing an intraoperative leak test during Roux-en-Y gastric bypass (RYGB) and sleeve gastrectomy (SG).
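The prediction-model abstract above fits a multiple linear regression that includes an HTN-by-DM interaction to predict %EWL. A hedged statsmodels sketch on synthetic data (these are not the authors' coefficients or variables):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 300
df = pd.DataFrame({
    "bmi":    rng.normal(50, 8, n),
    "htn":    rng.integers(0, 2, n),
    "dm":     rng.integers(0, 2, n),
    "female": rng.integers(0, 2, n),
})
# Synthetic outcome with a built-in HTN:DM interaction, for illustration only.
df["pct_ewl"] = (90 - 0.6 * df.bmi - 4 * df.htn - 3 * df.dm
                 - 5 * df.htn * df.dm + 3 * df.female + rng.normal(0, 6, n))

# "htn * dm" expands to htn + dm + htn:dm (main effects plus interaction).
model = smf.ols("pct_ewl ~ bmi + female + htn * dm", data=df).fit()
print(model.rsquared, model.params)

# Office-based prediction for a new patient (hypothetical values).
new_pt = pd.DataFrame({"bmi": [52], "htn": [1], "dm": [0], "female": [1]})
print("predicted %EWL:", float(model.predict(new_pt).iloc[0]))
```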
Methods and procedures: the study comprised two independent phases of six months and months. Data were collected from all patients undergoing SG, RYGB, or revisional RYGB within those two periods. To confirm the integrity of the staple line, all patients underwent a methylene blue and air test intraoperatively, followed by a Gastrografin swallow on the morning after the procedure. Results: the total number of patients in the study was . There were four positive intraoperative tests: one patient was a primary RYGB and three were revisional RYGBs. All were reinforced, and subsequent recovery and Gastrografin swallow showed no leak. One revisional RYGB had an undetected small bowel injury distal to the jejunojejunostomy that was not identified intraoperatively or on next-day imaging.

We used multivariate statistical analysis to study our population sample and classified the impact of each factor, or combination of factors, using principal component analysis. We used systematic clustering to identify subpopulations with significantly different statistical distributions. Result: the main determinant of total operative time was the surgeon and the level of the assistant. Prior surgeries, BMI, and smoking history had a statistically significant impact on laparoscopic time (p < . ). After removing the impact of individual surgeons, we detected four clusters of patients based on more than patient characteristics. Total OR time showed two distinct clusters: one with a standard deviation of - min, the other over min. Conclusion: this study may have practical implications for improving scheduling. The differing comorbidities of these bariatric patients helped stratify them into these main cluster groups. Better predictability of procedure length can lead to more efficient use of OR time and staff, ultimately producing savings for the hospital. In addition, we used automated noninvasive tracking methods to identify the phases of bariatric procedures, allowing more accurate estimates of OR time for efficient case scheduling. The smart OR, equipped with multiple noninvasive sensors, allows error-free tracking and monitoring without human interference.

Objectives: successful outcomes after bariatric surgery (BS) require a comprehensive educational program (CEP) focused on post-surgical dietary and lifestyle changes. At our institution, patients must comply with a -week life-after-surgery program prior to surgery. Since many patients are not able to participate in person, an online CEP was created to improve accessibility. To evaluate comprehension, a -question test is administered at the last preoperative visit to participants in both classes. The primary objective of this study is to evaluate the effectiveness of the online versus in-person CEP in terms of comprehension and postoperative weight loss. Methods: patients who underwent BS from August to May were retrospectively reviewed at a single institution. All patients who underwent the in-person or online CEP, completed the -question test, and had postoperative follow-up of at least months were included. Baseline demographic, operative, and weight data were obtained from the electronic medical record.

Background: body weight loss after bariatric surgery is affected by several factors. Diabetes status and preoperative body mass index (BMI) may affect body weight loss after surgery, and age and sex may also be predictors.
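The OR-time analysis above combines principal component analysis with clustering to find patient subgroups with different operative-time distributions. A minimal scikit-learn sketch of that pattern, with features invented for illustration:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(6)
# Hypothetical patient features: BMI, prior surgeries, smoking history, age, etc.
X = rng.normal(size=(400, 10))

X_std = StandardScaler().fit_transform(X)       # PCA is scale-sensitive
pca = PCA(n_components=3)
scores = pca.fit_transform(X_std)               # project onto leading components
print("explained variance:", pca.explained_variance_ratio_.round(2))

# Cluster in the reduced space; OR-time distributions can then be compared per label.
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scores)
print("cluster sizes:", np.bincount(labels))
```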
Furthermore, malabsorptive procedures are considered more effective for body weight loss than restrictive procedures alone. We investigated the contribution of preoperative background data and procedure type to body weight loss after surgery. Methods: this was a multicenter, retrospective study to validate the efficacy of bariatric surgery for morbidly obese patients in Japan. Patients who underwent sleeve gastrectomy (LSG) or LSG with duodenojejunal bypass (LSG/DJB) at each institution from January to December , and whose BMI was kg/m² or more at the first visit, were included in this study. We investigated the percent excess weight loss (%EWL) at months after surgery. Univariate and multivariate analyses were performed to evaluate the predictive factors of body weight loss. We defined a %EWL of more than % as a good weight-loss response (WR).

Background: despite its known safety and efficacy, bariatric surgery is an underutilized treatment for morbid obesity in the United States. Objective: our goal was to identify factors associated with failing to proceed to surgery despite being considered an eligible candidate by a bariatric surgery program. Methods: this is a retrospective study including all patients (n= ) who attended a bariatric surgery informational session (BIS) at a single-center academic institution in . Eligible candidates were identified after clinical evaluation and multidisciplinary candidacy review (MCR). We compared patients who underwent surgery with those who did not (i.e., dropped out) by evaluating patient-specific, insurance-specific, and program-specific variables. Univariate analysis and multivariable regression were performed to identify risk factors associated with failing to undergo surgery among eligible candidates.

Introduction: the elderly are a special subset of the population due to their limited physiological reserve. Revisional bariatric surgery is becoming more common with the increase in primary bariatric procedures, but data on its safety, weight loss, and metabolic effects in the elderly are limited. The aim of this study was to assess the safety and efficacy of revisional bariatric surgery in the elderly. Methods: clinical data of all elderly patients ( years and above) who underwent elective revisional bariatric surgery at an academic institution between and were reviewed. Demographic data, perioperative variables, and postoperative outcomes were studied. Results: a total of patients were identified, with a female predominance ( : ). Mean age was ± . years, and mean BMI at the time of revisional surgery was . ± . kg/m². The primary indications for revisional surgery were management of postoperative adverse events (n= , . %) and weight recidivism (n= , . %). In patients with postoperative complications, the most common indications for revisional surgery were dysphagia (n= , . %), marginal ulcer (n= , . %), gastric outlet obstruction (n= , . %), and fistula formation (n= , . %). The most common types of revision were conversion of vertical banded gastroplasty to Roux-en-Y gastric bypass (RYGB, n= ), revision of RYGB (n= ), conversion of adjustable gastric banding to sleeve gastrectomy (SG, n= ), and SG to RYGB (n= ). Two of seven ( . %) patients with -day postoperative readmissions had serious complications requiring reoperation: one underwent small bowel resection for ischemia, and the other underwent thoracotomy for evacuation of a hemothorax secondary to a gastropleural fistula.
background: despite its known safety and efficacy, bariatric surgery is an underutilized treatment for morbid obesity in the united states. objective: our goal was to identify factors associated with failing to proceed with surgery despite being considered an eligible candidate by a bariatric surgery program. methods: this is a retrospective study that includes all patients (n= ) who attended a bariatric surgery informational session (bis) at a single-center academic institution in . eligible candidates were identified after clinical evaluation and multidisciplinary candidacy review (mcr). we compared patients who underwent surgery to those who did not (i.e., dropped out) by evaluating patient-specific, insurance-specific, and bariatric surgery program-specific variables. univariate analysis and multivariable regression were performed to identify risk factors associated with failing to undergo surgery among eligible candidates. introduction: the elderly are a special subset of the population due to their limited physiological reserve with aging. revisional bariatric surgery is becoming more common with the increase in primary bariatric procedures. data on the safety, weight loss, and metabolic effects of revisional bariatric surgery in the elderly are limited. the aim of this study was to assess the safety and efficacy of revisional bariatric surgery in the elderly. methods: clinical data of all elderly patients ( years and above) who underwent elective revisional bariatric surgery at an academic institute between and were reviewed. demographic data, perioperative variables, and postoperative outcomes were studied. results: a total of patients were identified, with a female predominance ( : ). mean age was ± . years. mean bmi at the time of revisional surgery was . ± . kg/m . the primary indications for revisional surgery were management of postoperative adverse events (n= , . %) and weight recidivism (n= , . %). in patients with postoperative complications, the most common indications for revisional surgery were dysphagia (n= , . %), marginal ulcer (n= , . %), gastric outlet obstruction (n= , . %), and fistula formation (n= , . %). the most common types of revision were conversion of vertical banded gastroplasty to roux-en-y gastric bypass (rygb, n= ), revision of rygb (n= ), conversion of adjustable gastric banding to sleeve gastrectomy (sg, n= ), and sg to rygb (n= ). two out of seven ( . %) patients with -day postoperative readmissions had serious complications that required reoperation. one of them underwent small bowel resection for ischemia and the other had a thoracotomy for evacuation of a hemothorax that developed secondary to a gastropleural fistula. while there was no mortality over the first days postoperatively, two patients died months after surgery due to infectious complications. over a median follow-up of (interquartile range, - ) months, mean weight and bmi changes of − . kg and − . kg/m were observed. twenty-three ( . %) patients had diabetes at the time of revisional surgery. a mean reduction of . mg/dl in fasting blood glucose and . % in glycated hemoglobin was noted between baseline and last follow-up. conclusion: revisional bariatric surgery in the elderly is associated with high complication rates. our data indicate that revisional bariatric surgery can potentially alleviate symptoms and resolve complications of primary bariatric surgery. elderly patients should have their risk stratified and weighed against the benefits of surgery. anne-marie carpenter, bs, alexander l ayzengart, md, mph; university of florida introduction: bariatric surgery is the most effective treatment for morbid obesity. of all available procedures, laparoscopic sleeve gastrectomy (lsg) is now the most popular worldwide. common complications of lsg include gastroesophageal reflux, stricture, and staple-line leak. although rare, portomesenteric venous thrombosis (pmvt) and liver retractor-induced injuries are increasingly reported. we present a case of isolated left portal vein thrombus after routine lsg that was likely caused by prolonged compression of the left liver lobe by the nathanson retractor. case presentation: a -year-old female with a bmi of and biliary colic due to cholelithiasis underwent lsg with hiatal hernia repair and cholecystectomy. she tolerated the procedure without complication and was discharged home the following day. on postoperative day , she presented to the emergency department with fever and epigastric pain. contrast ct revealed an isolated filling defect within the proximal left portal vein; abdominal doppler demonstrated an acute thrombus occluding the left portal vein with normal flow in the main and right portal veins. the patient was treated with a -month course of therapeutic anticoagulation with lovenox. a complete hematologic workup did not uncover any hypercoagulable conditions. the patient recovered well and remained asymptomatic at her follow-up visit weeks after the operation. discussion: pmvt is a rare surgical complication with multifactorial etiology. in bariatric surgery, evidence suggests lsg is associated with more frequent pmvt than roux-en-y gastric bypass. a systematic review cited the incidence rate of pmvt as . - % after lsg. the mechanisms are thought to include pneumoperitoneum, the procoagulant obese state, manipulation of the portomesenteric venous system during division of the gastrocolic ligament, and postoperative dehydration. liver retraction is paramount during laparoscopic bariatric surgery to provide adequate visualization of the upper stomach and diaphragmatic hiatus. most methods of liver retraction produce significant pressure on the liver parenchyma by compressing it against the diaphragm. three types of liver injury have been documented in the literature: minor congestion, traumatic parenchymal rupture, and delayed liver necrosis. uniquely, we propose an additional type of injury: left portal vein thrombosis due to compression of the left liver lobe by the nathanson retractor. conclusion: the case described herein represents the first documented report of isolated left portal vein thrombosis after lsg.
this is a unique presentation of retraction-related liver injury causing pmvt by mechanical compression of liver parenchyma. as surgical procedures increase in duration, intermittent release of liver retraction should be performed at regular intervals. introduction: up to % of patients experience internal hernia (ih) after laparoscopic roux-en-y gastric bypass (rygb). studies have shown that antecolic roux limb orientation and closure of the mesenteric defect reduce, but do not eliminate, the incidence of ih. we hypothesize that, despite operative differences, ih occurs more frequently in patients who experience significant weight loss. this study aims to determine whether patients who present with ih following rygb experience greater than % excess body weight loss (ebwl). methods: a retrospective chart review of all patients who underwent ih repair following rygb at our institution between sept and sept was performed. all applicable cpt codes encompassing ih repair were reviewed (n= ). patients with ih repair after rygb were identified. results: of the patients, were female. the mean pre-rygb weight was lbs (sd ± . ), bmi . kg/m (sd ± . ). all procedures but one were performed in an antecolic configuration; the other was retrocolic-antegastric. fifteen cases were laparoscopic and two were open; nine had the jejunal mesenteric defect closed, eight did not. the average weight loss from the time of rygb to ih presentation was . lbs (sd ± . ) and %ebwl from rygb to the nadir weight was % (sd ± ). when evaluated by t-test, there was no statistical difference in bmi at the time of program initiation, rygb, or ih presentation, nor in the number of pounds lost, %ebwl, or time to ih presentation, when comparing patients for whom the mesenteric defect was closed or not. average time from rygb to ih presentation was . years (range: - days). conclusion: in our limited cohort of patients who presented with internal hernia after rygb, there was an average of % ebwl. this is greater than the average expected %ebwl at our institution and others, suggesting that ih may occur at a higher frequency in patients with greater weight loss. mesenteric defect closure did not appear to have any influence in this limited cohort, suggesting that weight loss is a stronger factor in ih development. we plan a more extensive evaluation in a larger cohort of patients to determine whether greater %ebwl is a predictor of ih formation in patients undergoing rygb. introduction: the introduction of enhanced recovery after surgery (eras) pathways has led to early recovery and shorter hospital stay after laparoscopic roux-en-y gastric bypass (lrygb) and laparoscopic sleeve gastrectomy (lsg). this study aims to assess the feasibility and outcomes of postoperative day (pod) discharge after lrygb and lsg from a national database. methods: patients who underwent elective primary lrygb or lsg and were discharged on pod or pod were extracted from the metabolic and bariatric surgery accreditation and quality improvement program (mbsaqip) dataset. a : propensity score matching was performed between cases with pod vs pod discharge, and the -day outcomes of the propensity-matched cohorts were compared. high-risk patients were excluded from the analysis.
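the : propensity score matching step could look roughly like the sketch below: a logistic regression estimates each case's probability of early discharge, and each early-discharge case is greedily matched, without replacement, to the control with the nearest propensity score. the covariates and data are hypothetical, not actual mbsaqip fields.

# hypothetical sketch of 1:1 propensity score matching between two
# discharge cohorts; covariate names are illustrative only.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "age": rng.normal(45, 12, n),
    "bmi": rng.normal(46, 8, n),
    "asa3plus": rng.integers(0, 2, n),
    "early_discharge": rng.integers(0, 2, n),  # 1 = earlier pod discharge
})

covs = ["age", "bmi", "asa3plus"]
ps = LogisticRegression(max_iter=1000).fit(df[covs], df["early_discharge"])
df["pscore"] = ps.predict_proba(df[covs])[:, 1]

treated = df[df["early_discharge"] == 1]
controls = df[df["early_discharge"] == 0].copy()
pairs = []
for idx, row in treated.iterrows():
    if controls.empty:
        break
    # greedy nearest-neighbor match on the propensity score
    j = (controls["pscore"] - row["pscore"]).abs().idxmin()
    pairs.append((idx, j))
    controls = controls.drop(j)  # match without replacement

matched = df.loc[[i for pair in pairs for i in pair]]
print(len(pairs), "matched pairs")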
purpose: the aim of this study was to evaluate a large-volume, multi-surgeon bariatric surgery center, producing the largest sample size to date demonstrating the efficacy (% weight loss) and safety of sleeve gastrectomy following band removal as a one- or two-step procedure. methods: all patients undergoing conversion of lagb to lrygb ( ) or lsg ( ), regardless of one-step vs two-step conversion, from january to january were included. a retrospective analysis of our prospectively maintained database was performed to compare outcomes in patients undergoing conversion to lrygb vs lsg after lagb. introduction: the purpose of this study was to describe the use of intraoperative indocyanine green (icg) fluorescence angiography to identify the blood supply patterns of the stomach and gastroesophageal junction (gej). we hypothesized that identifying these vascular patterns may help modify the surgical technique to prevent ischemia-related postoperative leaks. methods: patients underwent laparoscopic sg and were examined intraoperatively with icg fluorescence angiography at an academic center from january to september . prior to construction of the sg, ml of icg was injected intravenously and pinpoint® technology was used to identify the blood supply of the stomach. afterwards, the sg was created with attention to preserving the identified blood supply to the gej and gastric tube. finally, ml of icg was injected and pinpoint® technology was used again to ensure that all pertinent blood vessels had been preserved. results: patients successfully underwent the procedure with no complications. the following blood supply patterns to the gej were found: accessory blood supply in addition to the right-side dominant pattern was more common than expected. in about half of the cases where an accessory vessel was found in the gastrohepatic ligament, the blood flow was toward the stomach (and not the liver). furthermore, accessory blood supply from the left side was found in % of cases. % of patients had both the left-side accessory and accessory gastric artery patterns. in these particular patients, if a concurrent hiatal hernia repair is performed, these accessory blood supplies are at risk of injury if care is not taken to preserve them, rendering the gej relatively ischemic. conclusion: icg fluorescence angiography allows the major blood supply to the proximal stomach to be determined prior to any dissection during sleeve gastrectomy, so that an effort can be made to avoid unnecessary injury to these vessels. background: morbid obesity, a common medical concern with significant health risks, has a prevalence of . % among u.s. adults. bariatric surgery provides effective weight loss for morbidly obese patients, with improvement in their comorbid conditions. traditionally, routine intraoperative drain placement (idp) and a postoperative esophagram (ugis) were thought to identify early postoperative complications. recently, these interventions have been scrutinized for their effectiveness. we hypothesized that idp and postoperative ugis do not alter outcomes in bariatric surgery and only increase hospital length of stay (los). methods: two cohorts, each consisting of patients from either or , were analyzed from our institution. in the cohort, all patients had idp and a ugis on postoperative day , prior to starting a clear liquid diet. in the cohort, no patients had idp or ugis; instead, they were started on a clear liquid diet on postoperative day in the absence of vomiting. all patients in each cohort underwent either a laparoscopic sleeve gastrectomy or a roux-en-y gastric bypass.
a retrospective study was performed to analyze whether there was a significant difference in postoperative complications, length of stay, and operating room time between these two cohorts. those who experienced t2dm remission were less likely to be vdd at all time points. the rates of vdd appeared slightly higher with rygb at each time point. the rates of macrocytic anemia, microcytic anemia and hypoalbuminemia were low and varied depending on the surgical procedure, with no relevant increase following surgery (see figure). conclusions: vitamin d deficiency is prevalent among diabetic patients with obesity presenting for bariatric surgery. postoperative management was successful in addressing vdd following surgery; those who experienced t2dm remission after surgery were less likely to be vdd. further prospective studies are needed to explore this relationship. introduction: it is well known that morbid obesity is strongly associated with high blood pressure. cardiovascular risk reduction is a well-studied and well-described result of bariatric surgery. the objective of this study is to quantify hypertension resolution in patients who underwent bariatric surgery at our institution. methods: we retrospectively reviewed all patients who underwent either laparoscopic sleeve gastrectomy (lsg) or laparoscopic roux-en-y gastric bypass (lrygb) at our institution between and . we selected those patients who were on antihypertensive medical treatment and had a -month follow-up. hypertension resolution was defined as the interruption of all blood pressure medications within the follow-up period. we compared the patients who had resolution of hypertension (group ) with patients who did not (group ) based on demographics, comorbidities, and outcomes. chi-square and student's t-tests were used for categorical and continuous variables, respectively. results: out of patients, ( . %) met the inclusion criteria, of whom ( . %) had complete resolution of hypertension within months. the patient population in group was predominantly female (n= , . %) and diabetic (n= , %), with a mean bmi of . ± . kg/m , a mean age of . ± . years, and a mean preoperative systolic blood pressure of ± . mmhg. the most common procedure performed was lsg (n= , %). comparison between group and group based on age, gender, bmi, and diabetes showed no statistically significant difference. estimated bmi loss % at months, type of procedure and %ebmil showed no statistically significant differences between the groups. conclusions: rapid weight loss is associated with a drastic reduction in blood pressure. besides weight loss, we did not identify a clear correlation among risk factors when we compared patients who had resolution of hypertension with patients without resolution. further prospective studies should be done to better understand these findings.
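the group comparison described here, student's t-test for continuous variables and chi-square for categorical ones, can be illustrated as follows; the variables and data are invented for the example.

# illustrative comparison of resolved vs unresolved hypertension groups
# with the tests named in the abstract; column names are made up.
import numpy as np
import pandas as pd
from scipy.stats import ttest_ind, chi2_contingency

rng = np.random.default_rng(2)
n = 300
df = pd.DataFrame({
    "resolved": rng.integers(0, 2, n),  # 1 = off all bp medications
    "age": rng.normal(48, 11, n),
    "bmi": rng.normal(45, 7, n),
    "diabetic": rng.integers(0, 2, n),
})

g1, g2 = df[df["resolved"] == 1], df[df["resolved"] == 0]

# student's t-test for continuous variables
for var in ("age", "bmi"):
    t, p = ttest_ind(g1[var], g2[var])
    print(f"{var}: t={t:.2f}, p={p:.3f}")

# chi-square test for categorical variables
table = pd.crosstab(df["resolved"], df["diabetic"])
chi2, p, dof, _ = chi2_contingency(table)
print(f"diabetes: chi2={chi2:.2f}, p={p:.3f}")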
the mount sinai hospital, university of chicago introduction: for many patients, hiv has transformed from a life-threatening illness into a manageable chronic disease. reflecting trends in the general population, obesity is increasingly prevalent among hiv-positive patients. surgical intervention has shown the greatest effectiveness in treating obesity. it is unknown, however, whether physician attitudes reflect the changing trends in obesity care for hiv-positive patients. methods and procedures: medical students from the first, second, and fourth years of training were invited to participate in an irb-approved survey, handed out during didactic sessions, designed to assess their knowledge and attitudes regarding bariatric surgery in hiv-positive patients. self-reported demographic information of respondents was also collected. the outcome of interest was the proportion of correct responses. univariate and multivariate regression analyses were performed. results: surveys were completed by medical students. demographic covariates included age, sex, race, bmi, and year of training. age, sex, race, and bmi were not statistically significant in the multivariate model. however, in both univariate and multivariate models, each additional year of training was associated with a significant increase in the proportion of correct responses (multivariate model beta coefficient = . , p . ). conclusions: obese and hiv-positive patients suffer from well-documented stigma in health care. these findings suggest that medical training corrects common misperceptions of obese and hiv-positive patients, and may lead to a better understanding of the appropriateness of bariatric surgery for hiv patients. whether these attitudes are predictive of referral practices remains to be seen. introduction: obesity is a common problem worldwide with numerous associated comorbidities and is associated with an increased risk of developing some cancers. despite bariatric surgery being associated with a risk reduction for cancer development, some patients will develop cancer after surgery, and little is known about complications that might arise during multimodality cancer treatment. here we report the case of a -year-old female who developed an unusual giant marginal ulcer (mu) after laparoscopic roux-en-y gastric bypass (lrygb) while receiving systemic chemotherapy for an early-stage breast cancer. case report: in summary, a -year-old female with a preoperative bmi of kg/m had an uncomplicated lrygb one year prior to her presentation. she was a non-smoker, abstinent from alcohol, and did not use nsaids, steroids or other ulcerogenic medications. eight months post procedure, with a bmi of . kg/m , she was diagnosed with and treated by bcs plus slnb for a pt n m er/pr +ve her2 −ve breast cancer. one week following her third cycle of docetaxel and cyclophosphamide, she presented with two days of melena, small-volume hematemesis and abdominal discomfort. the patient was resuscitated with prbc, started on a ppi infusion, and had free air ruled out on a cxr. upper endoscopy was completed, showing a giant mu at the gastrojejunal anastomosis; biopsies ruled out malignancy and h. pylori. subsequent ct abdomen/pelvis identified contrast extravasation from the anastomosis, confirming a free perforation. broad-spectrum antibiotics were started and a diagnostic laparoscopy completed. a graham patch repair utilizing omentum and an abdominal washout were completed, with placement of surgical drains. the patient was supported with parenteral nutrition while npo. diet was advanced after an upper gi series on postoperative day showed no ongoing leak. the patient was discharged on postoperative day and recovered; although further chemotherapy was discontinued, she completed whole-breast radiotherapy. conclusion: leaks and hemorrhage are early postoperative complications that, in our experience, are not seen intraoperatively. furthermore, endoscopy significantly increases mean operative time.
routine use should be left to the discretion of the surgeon but should not be considered an essential step of the sleeve gastrectomy. the objective of the study: surgical site infection (ssi) following bariatric surgery contributes to patient morbidity and additional use of health care resources. we investigated whether an ssi quality control initiative in the form of a refined preoperative antimicrobial protocol affected the rate of ssi following laparoscopic roux-en-y gastric bypass (lrygb). we reviewed all lrygb procedures performed between june and december at a single bariatric surgery centre of excellence. two preoperative antimicrobial protocols were compared. patients undergoing surgery prior to february received g of cefazolin, whereas patients undergoing surgery after february received a new antimicrobial protocol consisting of g cefazolin, mg metronidazole and ml oral chlorhexidine rinse. the primary outcome was -day ssi, including superficial ssi, deep incisional ssi and organ/space infection as defined by the centers for disease control. clinic charts and provincial electronic medical records were reviewed for emergency department visits, microbiology investigations and physician dictations diagnosing ssi. outcomes were assessed using student's t-test. results: two hundred seventy-six patients underwent lrygb, of whom received the refined antimicrobial protocol and received cefazolin alone. the refined antimicrobial protocol significantly decreased the rate of deep incisional ssi compared to cefazolin (n= , . % vs n= , . %; p < . ). the refined antimicrobial protocol resulted in a nonsignificant overall reduction in the rates of superficial ssi (n= , . % vs n= , . %; p > . ) and organ/space infection (n= , . % vs n= , . %; p > . ), respectively. conclusions: a preoperative antimicrobial protocol using cefazolin, metronidazole and chlorhexidine oral rinse appears to reduce the rate of ssi following lrygb. this protocol may be most effective in preventing deep incisional ssi. additional patient cases or an alternative study design, including a randomized controlled trial, is required to better understand the efficacy of this protocol. background: for many years, the roux-en-y gastric bypass (rygb) was considered a good balance of complications and weight loss. according to several short-term studies, single-anastomosis duodenal switch, or stomach intestinal pylorus sparing surgery (sips), offers weight loss similar to rygb with fewer complications and better diabetes resolution. however, no one has substantiated complication and nutritional differences between these two procedures over the mid-term. this paper seeks to substantiate previous studies and compare complication and nutritional outcomes between rygb and sips. methods: a retrospective analysis of patients who had either sips or rygb from to . complications were gathered for each patient. nutritional outcomes were measured for each group at , , and years. regression analysis was applied to interpolate each patient's weight at , , , , , , and months. these were then compared with t-tests, fisher's exact tests, and chi-squared tests.
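the interpolation step, fitting a regression to each patient's follow-up visits and reading off weights at fixed months, might be sketched as below. the abstract does not specify the regression model, so a simple per-patient quadratic fit is assumed here; estimates outside the observed visit range are extrapolations and should be treated cautiously.

# sketch of interpolating one patient's weight at fixed timepoints from
# irregular follow-up visits; the quadratic model is an assumption.
import numpy as np

def interpolate_weights(visit_months, visit_weights,
                        targets=(3, 6, 9, 12, 18, 24, 36)):
    """fit one patient's follow-up curve, then evaluate at fixed months."""
    coeffs = np.polyfit(visit_months, visit_weights, deg=2)
    return {m: float(np.polyval(coeffs, m)) for m in targets}

# example: one patient's sparse, irregular visits (months, kg)
months = [1, 4, 7, 13, 25]
weights = [128, 112, 104, 95, 93]
print(interpolate_weights(months, weights))

once every patient has weights at the same timepoints, the group comparisons with t-tests, fisher's exact tests, and chi-squared tests can proceed on aligned data.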
results: rygb and sips had statistically similar weight loss at , , , , and months. they differed statistically at and months. at months, there was a trend toward a weight loss difference. the only statistically significant differences in nutritional outcomes between the two procedures were in calcium at and years and in vitamin d at year. there were statistically significantly more long-term major complications, minor complications, reoperations, ulcers, small bowel obstructions, nausea, and vomiting with rygb than with sips. conclusion: with comparable weight loss and nutritional outcomes, sips has fewer short- and long-term complications than rygb and better type 2 diabetes resolution rates. introduction: the purpose of this study is to determine the risk factors that contributed to the increased postoperative complications noted in prior studies within the publicly funded insurance population undergoing bariatric surgery. methods and procedures: data were collected via a retrospective review of the medical records of patients who underwent laparoscopic roux-en-y gastric bypass or laparoscopic sleeve gastrectomy from to at a single institution. for each patient, data were collected in the following categories: baseline demographics, insurance status, medical comorbidities, immediate complications, readmissions and associated complications, and follow-up out to years. results: a total of patient charts were reviewed; patients were categorized as private insurance and patients as public insurance. there was no statistically significant difference in mean patient age (private . years vs public years), sex (male:female %: % for both groups), or bmi ( vs ). there were statistically significant differences in relationship status in the categories of single ( % vs %), married ( % vs %) and living with a partner ( % vs %), as well as in employment status ( % vs %). when comparing comorbid conditions preoperatively, there was no difference except for diabetes, which was less common in the private insurance group ( % vs %). readmission rates for complications also differed significantly ( % vs %), with public insurance patients having increased complication rates and readmissions. there was no difference in follow-up percentages at each time point for the two groups. interestingly, postoperative bmi was significantly different between the two groups until year out ( vs ), when the difference disappeared. conclusions: our current data set confirms prior research documenting higher complication rates in public insurance patient populations without differences in long-term weight loss results. it also shows that the public insurance group is possibly at higher risk for complications and readmissions postoperatively due to a lack of social support at home, given that a much higher percentage of them are single or divorced and lack employment. it is likely that this lack of support at home prompts more frequent readmissions and associated complications. introduction: gastric bypass has been an accepted treatment for the morbidly obese patient, with proven efficacy for weight loss and remission of comorbidities, especially type 2 diabetes (t2dm). laparoscopic sleeve gastrectomy (lsg) is gaining momentum as an alternative procedure for the morbidly obese patient. the aim of this study is to assess the resolution of t2dm by examining hba1c, bmi, fat %, and % excess weight loss in t2dm patients undergoing lsg at our institution. methods: we performed a retrospective chart review of t2dm patients before and after lsg, analyzing hba1c, bmi, % weight loss, fat %, and diabetic medications. data were analyzed using spss version . a paired t-test was applied to assess the significance of changes in bmi, weight, fat % and hba1c before and after the procedure.
introduction: gastroesophageal reflux disease (gerd) is a known risk following laparoscopic sleeve gastrectomy (lsg), with up to % of patients affected by the disease postoperatively. of these patients, an unknown number progress to medically refractory gerd. due to their postsurgical anatomy, these patients have limited options for intervention. while endoluminal therapies are available, surgical revision to roux-en-y gastric bypass (lrygb) has become an accepted revisional treatment. despite this therapeutic option, many payors deny coverage for this treatment. in this study, we report outcomes of revision of lsg to lrygb and difficulties in obtaining insurance approval for the operation. methods: we conducted a retrospective review of all patients who underwent a revisional bariatric operation at a single institution between january and august . we analyzed all patients who underwent conversion of lsg to lrygb. we collected data on -day mortality and morbidities, pre- and postoperative antacid use, and the insurance approval process. results: within the study period, we identified patients undergoing revisional bariatric surgery. seventeen patients had undergone conversion of lsg to lrygb. all of these patients underwent revision due to gerd refractory to maximal medical therapy. the average body mass index was kg/m , and the average operative time was minutes. one patient required laparoscopic cholecystectomy within days due to acute cholecystitis, and another patient required reoperation for control of staple-line bleeding. there were otherwise no -day morbidities or readmissions. fifty-nine percent of patients stopped all antacid medication by six months, and % stopped by months. of the % of patients still on proton pump inhibitor therapy, none complained of reflux symptoms. of non-medicare patients, % were initially denied insurance coverage for revision. only one plan accounted for all initial approvals. twenty-five percent of denied patients eventually paid out of pocket, and the remaining % ultimately secured coverage after an appeal process. there were no significant differences in mortality or hospital stay. significantly shorter operative times were observed in the adolescent group ( . ± vs . ± , p . ). in univariate analysis, blood transfusion and vte rates were significantly lower in the adolescent group, but there was no difference after risk-adjusted logistic regression analysis. analysis of readmission data showed lower rates in adolescents compared to young adults ( . % vs . %, p = . ). however, adolescents were more frequently readmitted secondary to gallstone disease ( . % vs . %, p . ). the most common reason for readmission in both groups was nausea and vomiting with fluid/electrolyte depletion, followed by abdominal pain. conclusion: adolescent bariatric surgery is feasible and safe, with outcomes similar to those of young adults. lsg is currently the most common bariatric procedure performed in adolescents, which is reasonable given the relative lack of comorbid conditions within this group. nausea and vomiting are the most common reasons for readmission in both groups, but gallstone disease is significantly more common in adolescents, suggesting that this population should be carefully screened for gallbladder disease preoperatively. further studies are needed to elucidate long-term outcomes, such as the durability of comorbidity resolution in adolescent patients.
introduction: revision bariatric surgery is widely considered to be associated with higher complication rates. there is currently controversy in the literature regarding one-stage and two-stage revisions. methods: the present study is an ongoing longitudinal prospective analysis of revision surgery data in a single unit. revision surgery was offered after an initially failed or complicated gastric band, sleeve gastrectomy or roux-en-y gastric bypass (rygb). results: forty-two individuals had revision bariatric surgery. the age of the cohort ranged from twenty-six to seventy-five years. thirty-three were female and nine male. all patients who were hypertensive or diabetic at the time of their initial bariatric operation had a relapse of their comorbidity prior to their revision surgery. the two-stage revision patients had their band removed at another facility, had a complication from the band itself, or did not initially wish for revision surgery. of the two failed bypasses, one had a large pouch and very short limbs; the other had a gastro-gastric fistula and ultra-short limbs. there were no deaths in this study. one patient who underwent one-stage revision of a gastric band to bypass had an iatrogenic small bowel injury that required a second operation. amelioration of diabetes and hypertension was seen in all who had relapsed. weight loss was good in all patients except those undergoing revision from a short-limb to a long-limb bypass. conclusion: there is sufficient evidence that revision surgery is feasible and can ameliorate metabolic comorbidities after a failed band or sleeve. two-stage surgery is not necessarily safer than one-stage revision. in the present study an inadvertent iatrogenic injury occurred in the one-stage revision group, but this is not a true reflection of increased complication rates. the association between preoperative endoscopic esophagitis and postoperative gerd in sleeve gastrectomy patients samer elkassem, md; medicine hat regional hospital introduction: gerd is a common complication after sleeve gastrectomy (sg). the purpose of this study is to assess the relationship between preoperative findings of endoscopic esophagitis and postoperative gerd in sg patients. the hypothesis of this study is that patients with preoperative esophagitis are more likely to have gerd postoperatively than patients with no esophagitis preoperatively. methods: a retrospective review of sg patients who had preoperative endoscopy and were followed prospectively for at least one year was performed. patients were divided into two groups based on preoperative endoscopic findings: those with no findings of esophagitis (ne), and those with endoscopic esophagitis, including barrett's (ee). patients were followed for at least one year and assessed for proton pump inhibitor (ppi) usage. the two groups were compared using both student's t-test and the chi-square test. results: a total of patients did not have any findings of esophagitis on preoperative endoscopy (ne group), and patients had findings of endoscopic esophagitis (ee). there was no difference in preoperative demographics or post-op weight loss at one year (table i). follow-up ranged from one to years post-op. the dependency on ppi usage and de novo reflux are shown in table ii. introduction: patients with "super-super obesity", defined as a bmi ≥ , are at higher risk of weight-related health problems and might benefit more than others from metabolic and bariatric surgery.
however, these benefits need to be weighed against the potential for increased operative and perioperative risks. accurate data regarding these patients are critical to guide procedure choice and informed, shared decision-making. the metabolic and bariatric surgery accreditation and quality improvement program (mbsaqip) is a national accreditation and quality improvement program which captures clinically rich, specialty-specific data for the majority of all bariatric operations in the united states. this is the first analysis of the mbsaqip participant use file (puf) focusing on this at-risk subpopulation. introduction: sleeve gastrectomy is one of the most common surgical procedures used in bariatric surgery. the most feared complication following laparoscopic sleeve gastrectomy is a leak at the staple line. one method to reduce the risk of leak is the use of reinforcement material at the suture line. in this study, the efficacy of sutures and fibrin glue in the prevention of staple-line leak was compared retrospectively. materials and methods: a total of patients undergoing lsg between october and august at the medical faculty of firat university were retrospectively assessed using the hospital database system records. results: there were males ( %) and ( %) females, with a mean age of years (range: - y) and a mean body mass index of kg/m . no reinforcement material was used at the suture line in patients ( %), while reinforcement sutures or fibrin glue were used in ( %) and ( %) patients, respectively. postoperative leak occurred in patients ( . %): ( . %) of these had no reinforcement material for leak prevention, while additional sutures or fibrin glue had been used in patients, one in each group ( . %). one patient died due to leak and the consequent development of sepsis ( . %). discussion: lsg is used increasingly frequently in bariatric surgery practice; however, the rate of complications is also increasing. a discrepancy exists in the published literature regarding the effect of reinforcing the suture line on the risk of leak. in our patient series, patients without additional material at the staple line had a significantly increased risk of leak. conclusion: despite some controversy, strong evidence exists for the effectiveness of fibrin glue in the prevention of leaks in patients undergoing laparoscopic sleeve gastrectomy. background: laparoscopic bariatric surgery has been performed safely since . in a persistent search for fewer and smaller scars, single-port and acuscopic surgery, and even notes, have been implemented. the goal of this study is to analyze the safety and feasibility of a low-cost incisionless liver retraction technique compared to a standard laparoscopic retractor for sleeve gastrectomy. methods and procedures: candidates for sleeve gastrectomy who fulfilled nih criteria for bariatric surgery were selected. those younger than and/or with prior upper-left-quadrant surgery were excluded. all patients signed written consent. patients were randomized : to either a standard port technique with a fan-type liver retractor through a mm port (group a), or a port technique with the liver retracted by a polypropylene suture passed through the right crus and retrieved at the epigastrium with the use of a fascia closure needle (group b). all surgeries were performed by the same surgeon. surgery length, from insertion of the first port to withdrawal of the last, was the primary endpoint.
anthropometric data, % pre-surgical total weight loss (%ptwl), visualization of the surgical field, complications inherent to liver retraction, and postoperative morbidity were recorded. background: comprehensive web- and hospital-based preoperative patient education allows morbidly obese patients to understand weight loss surgery, its benefits, the necessity of follow-up, and the risk of weight regain. while in-house seminars provide face-to-face interaction with the bariatric program staff, online seminars are easily accessible and more cost-effective. the primary objective of this study is to compare demographics and weight loss surgery outcomes between patients who participated in the online vs in-house preoperative seminars. methods: after obtaining institutional review board approval, a retrospective chart review was performed involving patients who underwent bariatric surgery between january and december at a tertiary care center. the patients were divided into two groups based on their choice of educational seminar, online or in-house, prior to their initial consult with a surgeon. data were collected on age, type of insurance, length of stay (los), longest follow-up, and change in bmi to assess weight loss. results: one hundred and eighteen patients were included in this study. eighty patients attended the in-house seminar while completed the online seminar. the various types of surgery (laparoscopic gastric bypass, sleeve gastrectomy, and band) were similarly represented between the two groups. there was no difference in the type of insurance policy between the groups. patients who elected to take the in-house seminar were on average years older than those who chose the online course, which was statistically significant (p . ). there were no differences in los, longest follow-up after surgery, or weight loss at months between the groups. conclusions: based on mbsaqip registry data, patients aged or over did not have higher odds of -day readmission compared to younger patients after lsg or lrygb. rates of -day readmission, reoperation, and death were similar, but rates of complications (e.g. pneumonias, unplanned intubations) were higher in the older group. bariatric surgery in the elderly should therefore be performed only after a careful and patient-centered selection process. introduction: revisional bariatric surgery has become more common in recent years. it addresses short- and long-term complications of primary bariatric surgery as well as weight regain. the aim of this study was to retrospectively analyze the indications for reoperation and short-term outcomes at our institution. methods and procedures: between and , patients who underwent bariatric surgery at our center were included in a prospectively collected database. demographic data, primary and revisional bariatric procedures, reasons for revision, and outcomes were recorded and reviewed retrospectively. results: a total of patients underwent bariatric surgery at our institution and % of these (n= ) were revisional bariatric procedures. we identified groups of patients according to their primary procedures: adjustable gastric band (agb), roux-en-y gastric bypass (rygbp), vertical banded gastroplasty (vbg), and sleeve gastrectomy (sg). of the patients, ( %) had agb as the primary procedure. of those, % had their band removed due to food intolerance and severe dysphagia, and % had a conversion to either rygbp or sleeve gastrectomy (sg) due to weight recidivism.
in the rygbp group (n= ), % of the patients presented with late complications. of these, % had an acute presentation (small bowel obstruction, internal hernia, or perforated marginal ulcer) requiring emergency surgery. only % of patients needed gastric bypass takedown due to severe hypoglycemia. weight recidivism was noted in % of the patients and necessitated either revision of the anastomosis, trimming of the gastric pouch, or takedown of a gastro-gastric fistula. in the vbg group (n= ), % of the patients experienced weight recidivism that required conversion to rygb, and % of the patients required the vbg to be taken down due to obstructive symptoms. in the sg group (n= ), % of the patients experienced early complications needing a second procedure. weight recidivism was the most common reason for conversion ( %) to rygbp. twenty-nine percent of the patients in this group underwent conversion to rygbp due to severe de novo gerd. introduction: our aim was to systematically review the literature to compare the weight loss outcomes and safety of secondary surgery after sleeve gastrectomy (sg), particularly between roux-en-y gastric bypass (rygb) and biliopancreatic diversion with duodenal switch (bpd-ds). sg was originally developed as the first part of a two-stage procedure for bpd-ds; however, it is now the most common standalone bariatric operation performed in the united states. the majority of sg are done as the sole bariatric operation, but in % a second operation is necessary due to insufficient weight loss, weight regain or reflux. the most common second-stage operations are rygb at % and bpd-ds at %. there are a few small case series comparing rygb to bpd-ds as a secondary surgery after sg. these studies suggest that after failed sg, bpd-ds results in greater weight loss but higher early complication rates than rygb. we had one mortality, related in part to supratherapeutic anticoagulation perioperatively. one patient underwent successful heart transplantation and additional patients were reactivated on the transplant list. conclusion: laparoscopic sleeve gastrectomy is effective in advanced heart failure patients for meaningful weight loss, reactivation on the transplant wait list, and ultimately cardiac transplantation. however, this complex population carries a high perioperative risk and close multidisciplinary collaboration is required. more data are needed to optimize perioperative management of these patients. introduction: bariatric surgery is a highly effective treatment for severe obesity. while its effect on improvement of the metabolic syndrome is well described, its effect on intrinsic bone fragility and fracture propagation is unclear. therefore, the aims of this systematic review of the literature were to examine ( ) the incidence of fracture following bariatric surgery and ( ) the association of fracture with the specific bariatric surgical procedure. conclusion: it appears that the overall risk of sustaining a fracture of any type after undergoing bariatric surgery is approximately percent after an average follow-up of . years. the greatest risk of fractures is associated with the bpd, with the rygb being the most favorable. fractures following bariatric surgery tend to follow osteoporotic and fragility patterns. postoperative supplementation of vitamin d and calcium and weight-bearing exercise need to be optimized, and long-term follow-up studies will be needed to confirm that these interventions will indeed reduce fracture risk following bariatric surgery.
background: the effect of sleeve gastrectomy on gastroesophageal reflux disease (gerd) remains controversial. it is currently common practice to perform a hiatal hernia repair (hhr) at the time of sleeve gastrectomy; however, there are few data on the outcomes of gerd symptoms in these patients. the aim of this study was to evaluate the effect of performing an esophagopexy hiatal hernia repair on gerd symptoms in morbidly obese patients undergoing robotic sleeve gastrectomy (rsg). methods: a single-institution, single-surgeon, prospectively maintained database was used to identify patients who underwent rsg and concomitant esophagopexy for hiatal hernia repair from november to july . patient characteristics, operative details and postoperative outcomes were analyzed. the primary endpoints were subjective gerd symptoms and recurrence of hiatal hernia. results: thirty-seven patients were identified meeting the inclusion criteria (rsg + hhr + esophagopexy), with a mean follow-up of . over the past years there have been several bariatric surgeries cancelled secondary to abnormal preoperative test results within eastern health. these surgeries are often cancelled the day before the scheduled date, which does not provide sufficient time to book other patients. the end result is that the or is underutilized and the bariatric surgery waitlist grows. prior to any major surgery, patients are often subjected to a routine screening process which includes a history and physical along with diagnostic screening tests and screening blood work. a preliminary analysis was done of the first patients through the bariatric surgery program at eastern health, assessing coagulation study results and outcomes. analysis showed that, of the first patients, % were found to have a history of bleeding, % were using anticoagulants preoperatively, and another % were noted to have a family history of bleeding. on preoperative blood work, % were found to have an elevated ptt/inr, for which hematology was consulted in % of the patients. overall this did not change the preoperative management of these patients and they went on to have their surgery. intraoperatively, one patient was noted to have excessive bleeding, and this was found not to be associated with any preoperative elevation of coagulation studies or family history of bleeding disorders. postoperatively there was bleeding in one patient which required transfusion; however, this too was found not to be associated with any preoperative elevation of coagulation studies or family history of bleeding disorders. overall, this initial analysis showed no difference in operative management or delay in surgery secondary to abnormal preoperative assessment findings. further analysis of a larger population of bariatric surgery program patients is needed to determine whether any changes should be made to the preoperative assessment protocol. introduction: patients undergoing bariatric surgery frequently present with obesity-related psychiatric comorbidities, including depression. furthermore, previous literature has demonstrated a positive association between depression and cardiovascular disease, and obesity serves as an independent risk factor for cardiovascular disease. however, the relationship between preoperative depression and cardio-metabolic risk factors following bariatric surgery remains unknown.
methods and procedures: this retrospective analysis utilized data obtained from patients (n= , ) who underwent bariatric surgery at a single academic medical center in california. patients underwent either laparoscopic roux-en-y gastric bypass or sleeve gastrectomy. using medical record data, patients were preoperatively categorized as follows: not depressed, history of depression but not currently on antidepressant medication, and history of depression and presently taking antidepressant medication. patient demographic characteristics were obtained preoperatively. clinical and biochemical risk factors for cardiovascular disease were evaluated preoperatively and at and months following bariatric surgery. anova, kruskal-wallis, and chi-square tests were applied where appropriate. results: in this sample, % of patients were not depressed, % had a history of depression but were not taking antidepressant medication preoperatively, and % had a history of depression and were taking antidepressant medication preoperatively. at baseline, depressive history was positively associated with female sex (p < . ), older age (p < . ), white race (p < . ), medicare insurance (p < . ), previous abdominal surgery (p < . ), length of stay (p < . ), requiring an inferior vena cava filter (p = . ), total cholesterol (p < . ), and triglycerides (p = . ). on average, patients with a history of depression taking antidepressant medication weighed less than patients with a history of depression not on medication and patients without depression preoperatively (p = . ) and at (p = . ) and (p = . ) months after surgery. after six months of follow-up, preoperative depressive history was positively associated with total cholesterol (p = . ), triglycerides (p < . ), hba1c (p = . ), and fasting serum concentrations of insulin (p = . ). after months of follow-up, preoperative depressive history was positively associated with higher levels of total cholesterol (p = . ), ldl cholesterol (p = . ), and triglycerides (p = . ). conclusion: a history of depression prior to surgery was associated with higher levels of total cholesterol and triglycerides at baseline and at and months postoperatively. after months, preoperative depressive history was also associated with higher levels of ldl cholesterol. this study suggests that, on average, bariatric patients with comorbid depression have worse lipid profiles prior to, and up to one year after, bariatric surgery relative to counterparts without depression. yen-yi juo, md, mph, yas sanaiha, md, erik dutson, md, yijun chen, md; ucla introduction: anastomotic leak is one of the most morbid complications of roux-en-y gastric bypass (rygb), yet its risk factors are ill-defined due to the rarity of the complication. we aim to identify both patient- and operative-level risk factors for anastomotic leak after rygb using a national clinical database. methods: a retrospective cohort study was performed using the metabolic and bariatric surgery accreditation and quality improvement program (mbsaqip) database. all adult patients who underwent laparoscopic or open rygb were included. multivariate logistic regression models were used to identify patient- and operative-level variables associated with the development of anastomotic leakage. clinically relevant anastomotic leakage was defined as leakage that required readmission, intervention, or reoperation.
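a multivariable logistic regression of this kind, reported as odds ratios, can be sketched as follows. the predictors and the simulated outcome below are hypothetical placeholders for the mbsaqip variables actually used in the study.

# hedged sketch of a multivariable logistic regression for a rare
# binary outcome (anastomotic leak); all variables are invented.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 5000
df = pd.DataFrame({
    "open_approach": rng.integers(0, 2, n),
    "operative_minutes": rng.normal(120, 35, n),
    "albumin": rng.normal(4.0, 0.5, n),
    "smoker": rng.integers(0, 2, n),
})
# simulated rare outcome loosely driven by the predictors
logit = (-6 + 0.8 * df["open_approach"]
         + 0.01 * df["operative_minutes"] - 0.5 * df["albumin"])
df["leak"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

x = sm.add_constant(df[["open_approach", "operative_minutes",
                        "albumin", "smoker"]])
model = sm.Logit(df["leak"], x).fit(disp=0)
print(np.exp(model.params))  # exponentiated coefficients = odds ratios
print(model.pvalues)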
introduction: hyperammonemia secondary to ornithine transcarbamylase (otc) deficiency is a rare and potentially lethal disorder. the prevalence of otc deficiency is reported to be : , to : , in the general population. otc deficiency has been reported in patients presenting with neurological symptoms after roux-en-y gastric bypass (rygb), and fewer than cases have been reported in the literature. the aims of this study are to examine the apparent incidence of this uncommon disorder in patients after bariatric surgery and to examine potential predictors of mortality. methods and procedures: this is a single-center, retrospective study, in a large urban teaching hospital, of post-bariatric surgery patients who developed hyperammonemia from january to august . elevated plasma ammonia with an elevated urinary orotic acid level was accepted as consistent with a diagnosis of otc deficiency. all patients in our program are instructed on a postoperative diet containing grams/day of protein. descriptive and correlative statistics were calculated for all variables. results: between january and august , bariatric surgical procedures were performed at this single medical center. seven women with neurological symptoms had plasma ammonia levels above the upper limit of the normal range. their average bmi was kg/m . two patients underwent vertical sleeve gastrectomy (vsg), underwent vsg with duodenal switch, and underwent rygb. all patients were hospitalized. the mean peak plasma ammonia level was umol/l (range: - ). the mean urinary orotic acid level was . mmol/mol creatinine (range: . - . ). orotic acid levels were not checked in patients, secondary to demise. no patient had clinical features or findings of progressive hepatic failure. there were four mortalities ( . %). serum folate and peak lactic acid levels were predictors of mortality, with p-values of . and . respectively. the apparent incidence of otc deficiency was : in postoperative patients. conclusions: in our postoperative population, hyperammonemia resulted in high mortality. its apparent incidence, secondary to otc deficiency, among bariatric surgery patients is higher than that reported in the general population. since otc deficiency was identified after multiple types of bariatric surgical procedures, further investigation will be important to examine potential mechanisms for its development, which may include a genetic predisposition (possibly triggered by nutritional deficiencies), upper gut bacterial overgrowth (supported by elevated serum folate levels), or preexisting subclinical hepatic dysfunction. introduction: the use of closed suction drains is associated with poor outcomes in many anastomotic operations and routine use is not recommended. in this context, intraoperative drain placement for primary bariatric surgery remains controversial. recent studies demonstrate that drains confer no benefit to patients; however, data are limited to descriptive single-center experiences with small sample sizes. in order to characterize this practice gap and implement evidence-based recommendations, we sought to evaluate the use of closed suction drains and outcomes following primary bariatric cases using the mbsaqip registry. methods: we used data from the metabolic and bariatric surgery accreditation and quality improvement program (mbsaqip) public use file for patients who underwent a non-revisional laparoscopic roux-en-y gastric bypass (rygb), laparoscopic sleeve gastrectomy (lsg), or laparoscopic adjustable gastric banding (lagb). we excluded patients with asa status greater than or conversion to an open procedure.
we analyzed demographics, preoperative comorbidities, and procedure type for patients who did and did not undergo drain placement. adjusted rates of postoperative complications and mortality were then compared based on drain placement. results: of the , included patients who underwent laparoscopic bariatric surgery, , ( . %) underwent intraoperative drain placement. drains were more often placed in patients who underwent lrygb, were older, had higher preoperative bmi, had higher preoperative asa status, and had more comorbid conditions. after patient-level risk adjustment, there was no difference in rates of leaks requiring intervention ( . % versus . %, p = . ) or mortality ( . % versus . %, p = . ) between patients with and without drains. in patients who underwent drain placement, there were higher rates of transfusion ( . % versus . %, p . ), reoperation for bleeding ( . % versus . %, p = . ), all reoperations ( . % versus . %, p . ), and surgical site infection (ssi) ( . % versus . %, p . ). conclusion: our analysis demonstrates that nearly one quarter of all laparoscopic bariatric surgery patients undergo drain placement. we found that drain placement is more common in preoperatively higher-risk patients and following higher-complexity procedures, as suggested by the associated increased rates of transfusion and reoperation for bleeding. we found no benefit of drain placement in terms of interventions for clinically significant leaks or mortality. finally, patients who underwent drain placement were more likely to develop ssi, suggesting routine placement is not without risk. although further prospective studies are warranted, our analysis demonstrates that drains have the potential for harm with minimal protective benefit for patients after primary bariatric surgery. sleeve gastrectomy ( %, n= ) and laparoscopic roux-en-y gastric bypass ( %, n= ) were the two types of surgery performed in our population. the risk of developing atrial fibrillation was calculated preoperatively, and a -fold higher risk was found in females and a -fold greater risk in males when compared with the ideal risk for each category. at months' follow-up the preoperative risk was . ± . %, with an absolute risk reduction of . % corresponding to a relative risk reduction of . %, with males having a more significant change at months' follow-up. these findings and the electrocardiographic changes at months' follow-up are better described in . background: the sleeve gastrectomy (lsg) is the most popular procedure worldwide to treat obesity. among those who are obese, gerd has a prevalence of . percent. many surgeons do not perform lsg in these patients because only . percent of symptomatic patients showed resolution of gerd-like symptoms after concomitant sleeve gastrectomy with hiatal hernia repair. many surgeons perform the gastric bypass on gerd patients with hiatal hernias because they believe it is superior for the resolution of gerd. in doing so, they overlook the many long-term complications associated with gastric bypass. also, many patients do not want the gastric bypass under any circumstances. surgeons need to be open to finding better ways to reduce the high recurrence rate of gerd after lsg. materials and methods: this is a single-institution, multi-surgeon, retrospective study involving morbidly obese patients in a prospectively kept database from january of through july of . these patients all had gerd with preoperatively identified hiatal hernias on egd.
all patients were dependent on anti-reflux medications. there were ( . %) males and ( . %) females. bmi ranged from to . hiatal hernias measured from cm to cm. all lsg patients received a primary crural closure, with or without gore bio-a mesh placement, at least weeks prior to the sleeve gastrectomy. postoperatively, patients were interviewed for gerd symptomatology and anti-reflux medication dependency. results: of the patients, ( . %) had resolution of gerd-like symptoms and were off all anti-reflux medications after the staged hiatal hernia repair and sleeve gastrectomy. patients ( . %) had improvement of gerd but remained dependent on anti-reflux medication. patients ( . %) had no resolution or improvement of gerd. there was one postoperative complication of laryngospasm with pulmonary edema after extubation. there were no mortalities in the series. conclusions: in this study, staged hiatal hernia repair at least weeks prior to sleeve gastrectomy doubled the published rate of gerd resolution from % to %. % showed improvement in symptoms at one year. this rate is comparable to gerd resolution after gastric bypass. this may be an alternative approach to hiatal hernias in morbidly obese patients with gastroesophageal reflux disease who do not want a gastric bypass. background: bariatric surgery is a common procedure in general surgery. gastric bypass has been performed laparoscopically for over two decades and multiple techniques have been described. the circular stapled anastomosis, one of the earliest methods for the gastrojejunostomy, is performed in two ways: a transoral method to introduce the anvil, and a transabdominal approach developed later. the former technique requires passing the anvil of the circular stapler through the mouth, down the esophagus, and into the gastric pouch. in the latter method, a gastrotomy is made, the anvil is introduced, and the gastrotomy is stapled off, creating the gastric pouch. this study aims to objectively compare the two methods of circular stapled gastrojejunostomy in terms of surgical site infection (ssi) rate. methods: a retrospective chart review of patients undergoing laparoscopic roux-en-y gastric bypass with one of two surgeons at a bariatric center of excellence in an academic hospital from january . introduction: laparoscopic sleeve gastrectomy (lsg) has become the most commonly performed procedure in the treatment of morbid obesity, but there is significant variability in its performance. in national database analyses, more restrictive sleeve construction, based on smaller bougie size, has not correlated with greater weight loss. we hypothesize that bougie size is not reflective of actual restriction, or that sleeve restriction does not correlate with weight loss. we performed qualitative and volumetric analysis of immediate post-sleeve contrast studies to determine the association of sleeve restriction with postoperative weight loss and complications. methods: between and , patients underwent immediate post-sleeve contrast studies. based on standardized vertebral body height assessment on the preoperative chest radiograph, sleeve diameter at intervals (including the narrowest point) was measured in mm, and the volume above the narrowest point of the sleeve was calculated. sleeve shape was assumed to be a dual-tiered or simple truncated cone based on morphology. sleeve restriction, morphology and volumetric analysis were associated with clinical outcomes including complications, postoperative symptoms, and weight loss at months.
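treating the sleeve above its narrowest point as a simple truncated cone, the volume follows the standard frustum formula v = πh/3 · (r1² + r1·r2 + r2²). a minimal sketch, with made-up dimensions:

# volumetric estimate for a sleeve modeled as a truncated cone (frustum);
# the example dimensions are invented, not values from the study.
import math

def frustum_volume_ml(d_top_mm: float, d_narrow_mm: float,
                      height_mm: float) -> float:
    """volume of a truncated cone in ml (1 ml = 1000 mm^3)."""
    r1, r2 = d_top_mm / 2, d_narrow_mm / 2
    v_mm3 = math.pi * height_mm / 3 * (r1**2 + r1 * r2 + r2**2)
    return v_mm3 / 1000.0

# example: 40 mm diameter proximally, 15 mm at the narrowest point, 120 mm tall
print(f"{frustum_volume_ml(40, 15, 120):.0f} ml")

a dual-tiered sleeve would presumably be handled analogously by summing the volumes of two stacked frustums.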
background: variability in surgical technique resulting in narrowing at the incisura angularis, twisting along the staple line, and retention of the gastric fundus has been implicated in increased gastroesophageal reflux disease (gerd) following laparoscopic sleeve gastrectomy (lsg). standardizing creation of the sleeve based on anatomic landmarks may help produce more consistent sleeve anatomy and improve outcomes. methods: a retrospective review of all patients undergoing lsg from january to november at a single institution specializing in bariatric surgery was performed (n= ). patients underwent either traditional lsg with use of a f suction bougie to guide creation of the sleeve (n= ) or anatomy-based sleeve gastrectomy (abs, n= ). abs was performed using a gastric clamp to maintain predetermined distances from key landmarks ( cm from the gastroesophageal junction, cm from the incisura angularis, cm from the pylorus) during stapling. patient demographics, perioperative characteristics, and post-operative outcomes were compared using chi-square and student's t-tests as required (a minimal sketch of this type of comparison appears below). helicobacter pylori (hp) is prevalent in up to % of the population worldwide, with increased rates observed in the bariatric population. bariatric surgery has seen a rapid expansion over the last years with the growing rates of severe obesity. higher hp rates are thought to be associated with increased rates of postoperative complications, including increased marginal ulceration and leak rates. accordingly, some bariatric centers have adopted routine pre-operative screening and hp eradication programs. yet, while the correlation of hp with gastritis and malignancy has now been well defined, its impact on patients undergoing bariatric surgery remains unclear. background: the risk of developing a hiatal hernia in the obese population is . -fold compared to patients with a bmi < . most hiatal hernias after bariatric surgery are asymptomatic, and when symptoms are present they may be difficult to differentiate from overeating or maladaptive eating habits. the aim of this study was to define the risk and symptoms associated with a hiatal hernia in the post-bariatric surgery cohort. methods: a retrospective review of prospectively collected data for patients who underwent laparoscopic hiatal hernia repair and had previously had a primary roux-en-y gastric bypass (rygb) or sleeve gastrectomy (sg). data collection spanned a five-year interval ( / - / ). preoperative and follow-up data were collected from medical records and questionnaires in the clinic or by telephone. variables obtained include age, gender, psychiatric history, pre-index procedure bmi, pre-hiatal hernia repair bmi, post-hernia repair bmi, pre- and post-operative symptoms, and associated morbidity. all hiatal hernia repairs were done laparoscopically, with posterior cruroplasty after circumferential hiatal dissection. results: we identified patients with a symptomatic hiatal hernia who had previously (range: - years) undergone bariatric surgery. fourteen rygb patients presented at a mean of . years, compared to sg patients who presented at a mean of . years after the index procedure. diagnosis was by a combination of ugi ( %), ct scan ( %) and egd ( %). mean follow-up was . months (range: - months). laparoscopic hiatal hernia repair was successfully performed in all patients with % mortality. dysphagia and regurgitative symptoms markedly improved in > % of patients; however, nausea, vomiting and abdominal pain were unchanged in - % of patients (figure).
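the chi-square and t-test comparisons referenced above can be sketched minimally as follows; the event counts and operative times are invented toy data, not results from either study.

```python
from scipy import stats

# hypothetical counts of a binary outcome (e.g., post-op gerd) in the
# bougie-guided and anatomy-based sleeve groups
contingency = [
    [18, 182],  # traditional lsg: events, non-events
    [11, 189],  # anatomy-based sleeve: events, non-events
]
chi2, p_categorical, dof, expected = stats.chi2_contingency(contingency)

# hypothetical continuous variable (e.g., operative time in minutes)
traditional = [62, 71, 58, 66, 70, 64]
anatomy_based = [55, 60, 52, 58, 61, 57]
t_stat, p_continuous = stats.ttest_ind(traditional, anatomy_based)

print(f"chi-square p = {p_categorical:.3f}; t-test p = {p_continuous:.3f}")
```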
conclusion: hiatal hernia following bariatric surgery is a rare but important cause of bloating manifested as nausea and vomiting, abdominal pain, regurgitation or reflux, and food intolerance or dysphagia (barf), and should be further evaluated with imaging or endoscopy when present. laparoscopic repair of hiatal hernia is warranted and results in resolution of symptoms in the majority of symptomatic patients. mid-term outcomes of sleeve introduction: obese patients suffer from multiple organ comorbidities which contribute to a shortened lifespan. one of the effects of obesity is thought to be pseudotumor cerebri, which is secondary to an increase in intracranial pressure (icp) in the absence of an obstruction. over the past two years, we have measured icp after insufflating with a laparoscopy device. we found that icp increases dramatically and correlates with the amount of insufflation in the abdomen. over the years, there have been studies of obese patients and intra-abdominal pressure. these studies have shown that some obese patients have an intra-abdominal pressure of - mmhg. increasing intra-abdominal pressure is thought to escalate intracranial pressure (icp). the objective of this pilot study was to observe the change in icp after raising intra-abdominal pressure. method: in this retrospective chart review preliminary study, icp in each of the patients with either normal-pressure hydrocephalus or high-pressure hydrocephalus receiving a ventricular shunt was measured by manometer. once the shunt was placed into the ventricle, we attached a manometer to measure the opening pressure. after we accessed the abdominal cavity using the standard optiview technique, we created a pneumoperitoneum. after achieving an intra-abdominal pressure of mmhg, we measured the icp using the manometer. spss software version was used for data analysis. a paired t-test was applied to icp before and after the procedure (a minimal sketch of this comparison appears below). introduction: postoperative bleeding represents an infrequent, yet serious complication after bariatric surgery. differences in the rate of postoperative bleeding reported for the two most common weight loss procedures-laparoscopic roux-en-y gastric bypass (lrygb) and laparoscopic sleeve gastrectomy (lsg)-are ostensibly confounded by patient- and surgeon-specific preoperative, intraoperative and postoperative factors, in particular by the utilization of staple line reinforcement or oversewing. with this understanding, we aim to use a large national database to definitively characterize differences in bleeding rates between lsg and lrygb. conclusions: after appropriate risk-matching, lsg patients have a reduced likelihood of a postoperative bleeding event compared to those undergoing lrygb. this difference is likely more pronounced with intraoperative securing of the staple line via oversew, buttress or an alternative method. these findings from a large national database represent an important consideration for surgeons and patients alike when evaluating the appropriate bariatric operation. background: bariatric surgery has been shown to be the most effective treatment, with documented improvement in obesity-related comorbidities. the type of health insurance coverage plays an important role in access to bariatric surgery, but might also affect postoperative outcomes. the objective of this study is to determine whether there is a difference in outcomes based on the type of insurance months after bariatric surgery.
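the paired t-test referenced in the icp pilot study above compares each patient's pre- and post-insufflation pressures; a minimal sketch with invented manometer readings (not the study's data) follows.

```python
from scipy import stats

# hypothetical icp readings (cm h2o) from the ventricular-shunt manometer,
# before and after insufflation to the target intra-abdominal pressure
icp_before = [12, 15, 10, 14, 11, 13, 16, 12]
icp_after = [19, 24, 17, 22, 18, 21, 25, 20]

t_stat, p_value = stats.ttest_rel(icp_before, icp_after)
print(f"paired t-test: t = {t_stat:.2f}, p = {p_value:.4f}")
```

because each patient serves as their own control, the paired test isolates the within-patient change rather than the between-patient variability.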
methods: we retrospectively reviewed all the patients who underwent bariatric surgery at our institution from to . we divided the patients into two groups based on the type of insurance: private (group one) and public (group two). we compared demographics and months outcomes between the groups, using the t-test for continuous variables and chi-square for categorical variables. we also compared months estimated bmi loss between different private insurances using anova. introduction: bariatric surgeons are now performing primary and revisional procedures on the extremes of age. there is controversy surrounding the safety and effectiveness of bariatric surgery among older age groups compared to younger age groups. to address this knowledge gap, we designed a study assessing short-term bariatric surgery outcomes among various age groupings across a large national database. methods and procedures: de-identified patient data from the mbsaqip registry were used. age groupings were organized into young, middle-aged, and older adults (in years) as follows: < , - , and > , respectively. the following -day outcomes were evaluated between all possible pairwise age groupings: mortality, surgical site infection (ssi), and readmission; logistic regression was used to compare outcomes between age groupings controlling for primary vs. revisional index operation, patient factors, and procedure factors (a minimal sketch of such a model appears below). a p value of . was deemed statistically significant. results: a total of , patients were identified (age range: to > ); % (n= , ) underwent primary bariatric operations while % (n= , ) underwent revisional cases. older adults had significantly worse outcomes than middle-aged and younger adults, respectively, for over comparisons across all outcomes; in contrast, younger adults had significantly worse outcomes than middle-aged adults for only comparisons across ssi and readmission. for primary bariatric cases, older adults had significantly higher mortality rates than middle-aged and younger adults, respectively, in the following categories: asa , laparoscopic sleeve gastrectomy (lsg), or laparoscopic roux-en-y gastric bypass (lrygb). for revisional cases, older adults had significantly higher mortality rates than middle-aged and younger adults, respectively, in the setting of female gender, caucasian race, or asa . regarding ssi, older adults undergoing primary lrygb had significantly higher organ space infections compared to younger adults. in addition, older adults who had revisional lrygb had significantly higher deep surgical site infections compared to middle-aged adults. following primary bariatric cases, older adults had significantly higher readmission rates compared to younger adults in the presence of male gender, caucasian race, asa , copd, or after lsg. following revisional cases, older adults had significantly higher readmission rates than middle-aged and younger adults, respectively, in the setting of pre-operative chronic steroid use. conclusions: overall, older adults had worse short-term outcomes compared to their younger counterparts following primary and revisional cases. further research is required to investigate these findings with the goal of targeting interventions to improve outcomes among bariatric surgical patients. background: the obesity epidemic in the united states has been accompanied by a surge in bariatric surgery. nearly , bariatric procedures were performed in the us in , % of which involved roux-en-y gastric bypass (rnygb).
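the logistic regression referenced in the mbsaqip age-group study above can be sketched as below; the column names, covariates, and synthetic data are assumptions for illustration and do not reflect the actual registry fields or effect sizes.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "age_group": rng.choice(["young", "middle", "older"], size=n),
    "revisional": rng.integers(0, 2, size=n),  # primary vs. revisional index operation
    "asa3plus": rng.integers(0, 2, size=n),    # stand-in for a patient factor
})
# synthetic 30-day readmission risk, higher for older adults and revisional cases
linear = -2.0 + 0.8 * (df["age_group"] == "older") + 0.5 * df["revisional"] + 0.3 * df["asa3plus"]
df["readmit30"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-linear)))

# odds of readmission by age group, controlling for operation type and asa class,
# with middle-aged adults as the reference level
model = smf.logit(
    "readmit30 ~ C(age_group, Treatment(reference='middle')) + revisional + asa3plus", data=df
).fit(disp=False)
print(model.params)
```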
while rnygb has proven an effective tool in combating obesity, it also alters a patient's anatomy in a way that makes traditional ercp a difficult, if not impossible, option for interrogating the common bile duct. one way to approach the post-rnygb patient with obstructive jaundice is to access the peritoneal cavity via a laparoscopic/robotic approach followed by direct cannulation of the gastric remnant with a laparoscopic port, allowing passage of an endoscope. the aim of this study was to evaluate our single-center experience with minimally invasive transgastric ercp (tg-ercp) from to . methods: we compiled a list of all patients who underwent laparoscopically or robotically assisted tg-ercp at our institution from - . we then examined patient demographics, procedural details, postoperative outcomes, and success rate, with success defined as cannulation of the ampulla, clearance of obstruction if present (stones/sludge/stenotic ampulla), and completion imaging of the biliary and pancreatic ducts. results: patients were included in the study. cases were performed robotically ( %) and laparoscopically ( %). ercp was successful in cases ( %). all unsuccessful attempts were aborted when the endoscopist was unable to pass the scope through a tight pylorus. median time of operation was minutes ( minutes if concomitant cholecystectomy was performed, minutes if not). median length of stay after operation was days (range - days). median estimated blood loss (ebl) was ml. post-ercp pancreatitis occurred in patients ( . %) and was mild and self-limited in all cases. patients had postoperative bleeding requiring transfusion; both of these had concomitant cholecystectomy. discussion: in patients with biliary obstruction and anatomy not suitable for traditional ercp, tg-ercp is a viable option. it can be performed in a minimally invasive fashion (either laparoscopically or robotically) with a high success rate and low morbidity. as the population of patients who have undergone rnygb continues to grow, so does the likelihood of encountering one with obstructive jaundice. tg-ercp, therefore, should be thought of as an essential tool in the armamentarium of the general surgeon. introduction: primary palmar hyperhidrosis (ph) is a pathological condition of over-perspiration in which the body produces an excessive amount of sweat. this disorder decreases the quality of life of patients. thoracoscopic sympathectomy is a minimally invasive and effective procedure to treat hyperhidrosis. the optimal level of sympathectomy for the best outcomes has been debated. many researchers have studied short-term outcomes, but no empirical research has evaluated long-term outcomes of thoracoscopic sympathectomy in thailand. this study aimed to evaluate and compare the long-term clinical outcomes between patients who underwent t and t thoracoscopic sympathectomy for ph, with particular attention to patient satisfaction and quality of life. methods and procedures: sixty patients with ph underwent thoracoscopic sympathectomy. patients were divided into two groups by the level of thoracoscopic sympathectomy, as the t group and the t group. they were assessed for improvement of sweating, compensatory sweating, satisfaction and quality of life. the long-term investigation was designed to examine clinical outcomes before surgery, six months after surgery, year after surgery, years after surgery, and at last follow-up; outcomes were compared within groups and between the t and t groups.
they were subjected to telephone interview using multiple questionnaires to investigate surgery outcomes, degree of satisfaction, and quality of life improvement. results: sixty patients responded to the telephone interview. patients' demographic data and the recurrence rate of ph between the t and t groups were not significantly different (p= . ). both groups showed improvement in severity of sweating without any statistically significant difference, but t thoracoscopic sympathectomy led to a significantly lower incidence of compensatory hyperhidrosis compared with the t group at back and trunk sites. the t group had higher overall satisfaction than the t group, which was not significantly different. long-term results were followed after years. conclusions: there was no difference in the decrease in severity of sweating between the t and t levels of thoracoscopic sympathectomy. both groups equally achieved patient satisfaction, but the t level of thoracoscopic sympathectomy had significantly lower severity of compensatory hyperhidrosis (ch) and better quality of life in the long term. introduction: acute pancreatitis due to trauma is the commonest cause of pseudocyst in the pediatric age group. due to the limited literature available and underdiagnosis by pediatricians, the true incidence of pseudocyst in the - age group is not known. material and methods: a retrospective analysis of pediatric age ( - years) patients who underwent laparoscopic cystogastrostomy at a district teaching hospital was done. patients' data, presentation, investigations, operation performed and postoperative course were studied. result: a total of patients ( males & females) had a mean age of . years and a mean weight of kg. etiologies included blunt abdominal trauma ( ), idiopathic ( ), and gallstones ( ). average cyst diameter was . cm. laparoscopic cystogastrostomy by the transgastric approach was successfully possible in cases with no conversion. cystogastrostomy was performed using sutures in patients and an ultrasonic energy device in patients. gastrotomy was closed with sutures in all cases. mean operative time was minutes. postoperative imaging at months revealed no persistence or recurrence of the cyst. conclusion: the minimally invasive laparoscopic approach for chronic pancreatic pseudocyst in the pediatric age group is a safe and effective strategy and should be adopted as the primary modality of treatment. introduction: videoscopic neck surgery is developing despite the fact that only potential spaces exist in the neck. gagner first described endoscopic subtotal parathyroidectomy with constant co gas insufflation for hyperparathyroidism in . the cervical approach utilizes small incisions in the neck, making it cosmetically unacceptable, and cannot be used for lesions greater than cm. the axillary approach makes it difficult to visualize the opposite lobe. the anterior chest wall approach utilizes port access at various positions on the anterior chest wall depending on the surgeon; this technique also allows bilateral neck exploration. hence we have been able to perform total thyroidectomies with central compartment clearance for papillary carcinoma and near-total thyroidectomies for large multinodular goiters. materials and methods: three incisions; pneumoinsufflation with carbon dioxide (co ); ports creating a subplatysmal plane; dissection beginning at the inferior pole; posterior dissection; clipping of the superior thyroid vessels; specimen freed up. thyroid lobectomy was performed in the twenty cases. the average blood loss was ml; mean operative time was min; there were no complications and no cases were converted to open.
there were no cases of recurrent laryngeal nerve injury or postoperative tetany. no subcutaneous emphysema, ecchymosis or hypercarbia was observed in any patient. all patients were discharged on the second postoperative day except the first, on the fifth day. in conclusion, this approach seems to be safe for unilateral lobectomy, but it is too early to say it is superior to conventional thyroidectomy, especially for total thyroidectomy. introduction: laparoscopic sleeve gastrectomy (lsg) is one of the most commonly performed weight loss surgeries. prolonged hospital admissions are associated with both increased morbidity and mortality and increased strain on the health care system; studies are now investigating the safety and feasibility of outpatient lsg. this study examined a single surgeon's postoperative admission trends for patients who underwent lsg. the patients were divided into two cohorts based on the date of surgery, and we hypothesized that institutional experience has a significant impact on postoperative stay and hospital readmission rate. methods: this is a retrospective study of lsgs performed by a single surgeon in a tertiary center from - . inclusion criteria: patients > years old, bmi > with comorbidities or bmi > , and patient approval by the bariatric surgical program in victoria, british columbia. patients with prior weight-loss surgery were excluded. patients were discharged home on a care plan involving nurse and surgeon telephone follow-ups within one week post-surgery. patients were divided into two cohorts: cohort a (procedures between - inclusive) and cohort b (procedures between - inclusive). results: patients were included in this study: females ( . %) and males ( . %). the mean preoperative age was . ± . years, and the mean preoperative bmi was . ± . kg/m². the average postoperative discharge day for the population was day . ± . and the average or time was . ± . minutes. one patient in cohort b was re-admitted on pod with a diagnosis of postoperative edema managed conservatively and is included in the analysis as pod . a second patient in cohort b returned to hospital (pod ) for abdominal pain and was managed conservatively as an outpatient. conclusion: there was a significant difference in the average postoperative discharge day between patients in cohort a and cohort b who underwent lsg, with patients in cohort b requiring a shorter average admission time. this study suggests that with increasing institutional experience and a postoperative discharge plan, patients undergoing lsg may be discharged on postoperative day one safely. introduction: minimally invasive techniques have revolutionized the art of surgical practice. the laparoscopic approach to cholecystectomy has become the gold standard and is the most common laparoscopic general surgery procedure worldwide. in an effort to further enhance the advantages of laparoscopic surgery, even less invasive methods have been attempted, including smaller and fewer incisions. the objective of this study was to describe our results of years of needlescopic cholecystectomy. methods: since march , all patients who underwent needlescopic cholecystectomy, a micro-laparoscopic procedure with mm instruments, were included in this study in a prospective database, and the information was analyzed. results: between march and september , needlescopic cholecystectomies were done at the texas endosurgery institute in san antonio, texas by a single surgeon. % of the patients were female. the average age was .
(range of - years old). average operating time was . minutes (range of - minutes); the -minute operation required laparoscopic cbd exploration, accounting for the extended time. average estimated blood loss (ebl) was cc (range of - cc). % of cases required conversion to standard mm cholecystectomy, and these were completed without incident. all patients were followed up at weeks, weeks, and months after the procedure. only patient presented with a hernia at the umbilical site; otherwise no wound, bile duct, bile leak, bleeding or thermal injury complications were identified. conclusions: needlescopic cholecystectomy with mm instruments is safe and feasible, and is a cosmetically appealing alternative to standard laparoscopic cholecystectomy. there are still few reports about thyroid cancer cases treated with toetva. this study reviews all cases of thyroid cancer in which surgery was performed. there were cases of toetva in thyroid cancer and cases of open thyroidectomy. objective: to review and report surgical outcomes, complications, post-surgical treatment and recurrence in all cases of thyroid cancer surgery, especially with the toetva technique. material and methods: from march to july in police general hospital, a total of patients underwent toetva, with cases of toetva in thyroid cancer and cases of open thyroid surgery for thyroid cancer. multiple parameters were recorded for all patients. results: this study had a total of thyroid cancer cases, of which cases ( %) were male and cases ( %) were female, with an average age of . the most common clinical presentation was a thyroid mass or nodule, in cases ( . %); case ( . %) was non-toxic goiter and case ( . %) was graves' disease. the mean duration of clinical presentation was . years ( weeks- years). there were cases ( . %) with a mass in the right lobe, cases ( . %) with a mass in the left lobe, and cases ( . %) with masses in both lobes. the size of the thyroid mass was . ± . centimeters ( - centimeters). cases ( . %) were euthyroid, case ( . %) had subclinical hyperthyroidism, cases ( . %) had subclinical hypothyroidism, and cases ( . %) had hyperthyroidism. for type of surgery, there were cases ( . %) of toetva surgery and cases ( . %) of open total thyroidectomy. most patients, cases ( . %), did not have any post-operative complication; there were cases ( . %) of hypothyroidism, cases ( . %) of asymptomatic transient hypocalcemia, and cases ( . %) of transient hoarseness. after toetva was performed, cases ( . %) underwent redo completion thyroidectomy, cases ( . %) underwent transaxillary completion thyroidectomy, cases ( . %) underwent redo toetva, and case ( . %) declined reoperation. cases ( %) did not have any complication after redo surgery; cases ( . %) had hypothyroidism, cases ( . %) had hypocalcemia and hypoparathyroidism, and case ( . %) had transient hoarseness. after thyroidectomy, neck ultrasound showed that cases had no residual or recurrent thyroid mass and cases had residual thyroid tissue. all cases received radioactive iodine ablation. radionuclide total body scan showed no evidence of distant functioning metastasis. conclusion: three-year short-term follow-up of toetva in thyroid cancer has shown few complications and no cancer recurrence. objective of the study: sentinel node navigation surgery (snns) in gastric cancer has been investigated for almost two decades in an effort to reduce operative morbidity.
indocyanine green (icg) with enhanced infrared visualization is one technique with increasing evidence for clinical use. we are the first to systematically review and perform meta-analysis to assess the diagnostic utility of icg and infrared electronic endoscopy (iree) or near-infrared fluorescent imaging (nifi) for snns exclusively in gastric cancer. methods and procedures: a search of the electronic databases medline, embase, scopus, web of science and the cochrane library using the search terms "gastric/stomach" and "tumor/carcinoma/cancer/neoplasm/adenocarcinoma/malignancy" and "indocyanine green" was completed in may . all human, english-language randomized control trials, non-randomized studies, and case series were evaluated. articles were selected by two independent reviewers based on the following major inclusion criteria: ( ) diagnostic accuracy study design; ( ) indocyanine green was injected at the tumor site; ( ) iree or nifi was used for intraoperative visualization. the primary outcomes of interest were identification rate, sensitivity and specificity. titles or abstracts were screened after removing duplicates. the quality of all included studies was assessed using the quality assessment of diagnostic accuracy studies- . results: ten full-text studies were selected for meta-analysis. a total of patients were identified, with the majority of patients possessing t tumors ( . %). pooled identification rate, diagnostic odds ratio, sensitivity and specificity were . ( . - . ), . ( . - ), . ( . - . ) and . ( . - . ), respectively. the summary receiver operator characteristic for icg+iree/nifi demonstrated a test accuracy of . %. subgroup analysis found improved test performance for studies with low-risk quadas- scores, studies published after , and submucosal icg injection. iree had improved diagnostic odds ratio, sensitivity and identification rate compared to nifi. heterogeneity among studies ranged from low (i² < %) to high (i² > %). conclusions: the idea of snns in gastric cancer is intriguing because of the potential to limit operative morbidity. we found encouraging results regarding the accuracy, diagnostic odds ratio and specificity of the test. the sensitivity was not optimal but may be improved by a carefully planned and strict protocol to augment the technique. given the limited number and heterogeneity of studies, our results must be viewed with caution. objective: to evaluate the feasibility, cost-effectiveness and safety of single-incision laparoscopic surgery using routine laparoscopy instruments. method: cases of acute appendicitis and cases of symptomatic gallstone disease were included in the study; cases were enrolled and a prospective observational study was performed. ruptured appendicitis/abscess formation was excluded from the study; similarly, gallbladder empyema/gallbladder perforation was also excluded. results: total cases were included: cases of appendicitis and cases of symptomatic cholelithiasis. the mean age of the appendectomy group was . ± . years and the mean age of the cholecystectomy group was . ± . years. in our study, the mean operative time for sil appendectomy was . ± . min. post-operative fever was noted in cases ( . %). mean post-operative pain per vas score, taken after hours on pod , was . . average post-op hospital stay was . days; port-site infection occurred in one case ( . %). the patient satisfaction score, obtained on a scale of - at one-month follow-up, was . , while the scar cosmesis score was . . in our study, cases underwent sil cholecystectomy, of which were male ( .
%) and were females ( . %), and the mean age of patients was . years. the mean operative time in our study was . min, the mean post-operative pain score taken on pod per vas was . , and the mean post-operative hospital stay was . days; port-site infections occurred in cases. post-op fever was noted in cases; the post-operative patient satisfaction score obtained at -month follow-up was . and the scar score was . on a scale of - . no case required drain placement or conversion. conclusion: sils can be performed using conventional laparoscopic instruments, especially in a government setup where the per capita economic burden on the patient will be less. though it has a longer operative time, it has a comparably shorter post-operative hospital stay, causes less pain, and has significantly more patient satisfaction regarding the post-operative scar and cosmesis. since sils has more patient acceptance and satisfaction, it can be offered to all patients undergoing laparoscopic surgery. it is very useful in a government setup, where patients of lower economic class will also benefit irrespective of the unavailability of special instruments and financial constraints, as it can be performed using routine laparoscopic instruments. in the year we started to practice the pericardial window by laparoscopy for the diagnosis of hidden cardiac injury in precordial trauma; although, fortunately for our society, this type of injury has decreased considerably, we have accumulated an important number of patients, and in the last year we have performed the procedure for other types of pathologies and also diversified the approach route according to the case. objective: to share the experience accumulated over years in the practice of the pericardial window by laparoscopy or thoracoscopy. material and methods: description of cases. results: during this period, we have performed cases of laparoscopic pericardial window with two unique ports for the diagnosis of cardiac injury in precordial trauma; additionally, windows were performed for penetrating trauma, of which led to treatment of cardiac injury by this route, without an open approach. in another scenario, we have performed treatment of pericardial effusions of different causes by a minimally invasive route. no complication or mortality associated with the procedure has been presented. conclusions: the pericardial window performed by minimally invasive surgery is an effective, replicable strategy for the diagnosis and the medical and traumatic treatment of this pathology. patient selection is key, and working in multidisciplinary groups guarantees good results. introduction: for transabdominal preperitoneal repair (tapp) of groin hernia, single-port surgery (sps) has been reported to reduce abdominal wall damage. to reduce the length of the umbilical scar and to keep the view of triangulation, we use one needle forceps plus sps. patients and methods: from may to july , consecutive tapp patients were retrospectively investigated. there were male and female. we use two mm ports ( for the scope and for the operator's right-hand forceps) through an umbilical multi-channel port, and an additional mm needle instrument is pierced above the pubic bone. a mm flexible scope allowed us to keep the triangular formation easily. we studied the safety and usefulness of this method from the viewpoints of operation time and complications. results: the median operation time for unilateral hernia ( cases) was min ( - ) and for bilateral cases ( cases) was min ( - ).
five cases needed one or two additional mm ports, and one case with severe preperitoneal adhesion due to previous prostate cancer surgery was converted to an open method because of venous bleeding. other complications were a spermatic cord injury and a postoperative seroma that required percutaneous puncture. the umbilical scars and the pierced needle instrument scars gradually became invisible within or months. there were no incisional hernias or wound infections in our series. these data were comparable to conventional laparoscopic hernia repairs. conclusions: the operation scars of this method had better cosmesis than conventional tapp or sps-tapp, and there were no differences between our sps-tapp with one needle forceps and the conventional method in operation time or complication rate. our method was demonstrated to be a less invasive approach for laparoscopic groin hernia repair. clinical application: the fj clip is a stainless steel clip that can be used to hold organs in the abdominal cavity. it is available in two sizes: mm and mm. the device is short, it has a strong grasp, and it causes no or only negligible organ damage. we have used the fj clip in the performance of local gastric excision (n= ), colectomy (n= ), and cholecystectomy (n= ) with no resulting difficulty. the f loop plus is a g stainless steel loop-like device into which we can insert φ . mm nt alloy thread, which we draw out extracorporeally via simple puncture. laparoscopic total and proximal gastrectomy: we made a small incision at the umbilicus and inserted a -mm camera port and -mm metal cannula. we placed two (left and right) epigastric ports. retraction of the left hepatic lobe was easy with use of the -mm fj clip and a -mm penrose drain. for # lymph node dissection, we used the fj clip to grasp the upper part of the stomach and inserted the f loop plus from the upper right abdomen. for # dissection, we grasped the pyloric vestibule and pulled it leftward. for dissection of the upper edge of the pancreas, we grasped the left gastric arteriovenous pedicle and pulled it toward the abdomen. the fj clip's grasp and the traction exerted on the stomach wall were strong and effective, and there was little organ damage. reconstruction (roux-y) or double tract was performed within the abdominal cavity by hand-sewn purse-string suture of the esophageal stump, insertion of an anvil, and use of an automated anastomosis device. we have performed total and proximal gastrectomy cases to date, but there have been no complications, and both intraoperative bleeding and operation time were within normal limits. conclusion: we believe the fj clip and f loop plus will replace conventional forceps for various tasks in reduced-port gastrectomy. introduction: pulmonary anatomical resection is considered the standard treatment for early-stage lung cancer. uniportal video-assisted thoracoscopic surgery (uvats) has recently shown favorable surgical outcomes but remains technically demanding, especially in a complex procedure such as anatomic segmentectomy. needlescopic instruments facilitate complex laparoscopic surgeries with nearly painless and scarless postoperative outcomes; however, their utilization in thoracoscopic surgery has mostly been for minor procedures such as bullectomy and sympathectomy. we present our initial experience of lung cancer surgery performed by uniportal vats with additional needlescopic instruments, and we also compare the operative results with conventional uniportal vats.
methods: from december to august , consecutive patients with lung cancer undergoing anatomical lung resections, including lobectomies and segmentectomies, were reviewed retrospectively. of these patients, patients received conventional uniportal vats (uvats) and patients received needlescopic-assisted uniportal vats (na-uvats). we compared the peri- and post-operative outcomes in these groups. results: there was no significant difference in demographic, anesthetic, or operative characteristics between the two groups except for age. the mean operation time was significantly shorter in the na-uvats group ( . ± . min vs . ± . min, p= . ). the intraoperative blood loss was significantly less in the na-uvats group ( . ± . ml vs . ± . ml, p= . ). there were two major pulmonary arterial bleeding events and one conversion to thoracotomy in the uvats group. the hospital stay, duration of chest tube drainage and post-operative pain scale were comparable between the two groups. conclusion: with the assistance of additional needlescopic instruments, uniportal vats can be performed more efficiently and safely without compromising its benefits of less postoperative pain and early recovery. purpose: we have applied the v-loc to abdominal wall closure in single-incision laparoscopic appendectomy (sila) since . the aim of our study is to present our experience of an abdominal wall wound closure technique using barbed suture in sila and a comparison of perioperative outcomes with the conventional method of layer-by-layer abdominal wall closure after sila. methods: from august to june , sila was performed on patients with acute appendicitis at the department of surgery, hallym sacred heart hospital. under approval of the institutional review board, data concerning demographic characteristics, operative outcomes, and postoperative complications were compared between the v-loc closure group and conventional layer-by-layer closure procedures. in the v-loc closure group, after removing the appendix, the divided linea alba was closed using a unidirectional absorbable barbed suture (v-loc - ) in a continuous running fashion, beginning at the end of the incision and coming back with a reinforced running suture. subcutaneous closure was also done using the same thread, and the subcuticular suture along the incision line was performed with the remaining portion of the v-loc. results: the demographic data of the patients' characteristics were similar between the two groups. the use of barbed suture significantly reduced the suturing time for abdominal wall closure (p= . ) compared with conventional suture. the postoperative incision length was significantly shorter in the v-loc group than in the conventional group (p= . ). the rates of surgical site infection were similar in both groups. no incisional hernias were noted in either group, with a median follow-up period of . months. the total costs of the procedure were comparable in both groups under the korean drg system. the use of barbed suture for abdominal wall closure in single-port laparoscopic appendectomy is a safe and feasible method that reduces the suturing time, thereby decreasing the total operation time, and shortens the incision length with a cosmetic benefit. angela m kao, md, michael r arnold, md, julia e marx, paul d colavita, md, b todd heniford; carolinas medical center. introduction: morgagni hernia is an anteromedial congenital diaphragmatic hernia seen in approximately in live births and rarely identified in adulthood. patients may be asymptomatic, have intermittent symptoms, or present acutely with incarceration/obstruction.
given this, surgical repair is recommended, but a standardized technique has not yet been described. methods: a prospectively collected hernia-specific database was queried for all adult morgagni hernia repairs performed at a tertiary hernia center. demographics and peri-operative data were compared. the most common ( . %) method of repair included suturing mesh to the diaphragmatic portion of the defect and securing the anterior-inferior edge to the anterior abdominal wall with transfascial sutures and/or tacks. four patients ( . %) underwent primary repair. average defect and mesh size were . cm and . cm², respectively. three patients ( %) underwent a concomitant paraesophageal hernia repair. mean ebl and length of stay were ml (range - ml) and . days (range - days). postoperative morbidity included transient postoperative hypoxemia ( patients) and pleural effusion ( ). there was no mortality, and there were no mesh complications or recurrences with a mean follow-up of months. conclusions: morgagni hernia patients were more often older, obese, and women. these hernias remained unrepaired in % of patients despite their having had previous abdominal surgery. a laparoscopic or robotic approach offers an effective hernia repair with minimal complications, short hospital stay, and excellent long-term results for both elective and acute operations. mesh repair, sutured to the diaphragm and sutured/tacked to the abdominal wall, appears to be a very successful means of repairing larger defects. introduction: hydatidosis is a zoonotic disease caused by echinococcus granulosus. it is endemic in the mediterranean, south america and the middle east. it is a systemic disease wherein the lungs are the second most common organ involved, after the liver. radio-imaging plays an important role in diagnosing and determining the extent of the disease. surgical enucleation of the cyst has been the classical treatment for this disease. bilateral lung involvement has traditionally been treated by median sternotomy or a bilateral thoracotomy. video-assisted thoracoscopic surgery (vats) is an effective surgical approach in such settings. materials and methods: at our center, we have operated on cases of pulmonary hydatidosis thoracoscopically over the past years. in all cases, the area around the cyst was cordoned off with . % cetrimide-soaked gauze pieces. a pericystotomy is performed with ultrasonic shears & the germinal membrane is delivered en masse into an endo-bag. an air leak test after saline instillation into the cavity is a standard part of the procedure. for those cases with cysto-bronchiolar communications, the defect was sealed by either suturing or glue application. traditionally, bilateral cases & cysts larger than cm in size were tackled by an open approach; but, in our experience, cyst size, bilaterality & presence of complications are not contraindications for vats. all cases are administered perioperative albendazole ( mg twice a day, administered for three cycles of days each, with a gap of days in between), which helps in preventing recurrence and also takes care of any inadvertent intra-operative spillage. introduction: minimally invasive surgery (mis) is the standard approach for most of the surgical procedures performed by general surgeons. traditionally, the majority of operations for trauma are performed open due to the complexity of the cases; however, trauma surgeons are expanding their armamentarium to include mis in a variety of acute procedures.
we report our experience with the application of laparoscopy in a variety of trauma cases. methods: a retrospective review of trauma cases performed between / - / . during that time, laparoscopic cases were performed after traumatic injury. patient demographics, injury severity score (iss), injury mechanisms, types of procedures and outcomes are described. means and standard deviations were calculated and t-tests were performed. a p value of . was considered statistically significant. results: demographics-a total of trauma cases were performed laparoscopically during the study period. the majority were male (n= ) and the mean age was (sd ). obesity was documented in %, hypertension or cad in %, and substance abuse in %. blunt trauma accounted for % and penetrating for %. the mean iss was (sd ). surgical procedures-the majority, %, of the procedures were completed laparoscopically. non-therapeutic laparoscopy was performed in %. repair of diaphragmatic or traumatic abdominal wall hernias accounted for %. hematoma evacuation and control of bleeding accounted for %. control of solid organ bleeding and repair was performed in %. intestinal repair occurred in %. for the cases that required open conversion, the iss was (sd ) vs. (sd ) for laparoscopic cases, p= . . outcomes: the overall length of stay was days (sd ). there was n= late death, in a poly-trauma patient who required open conversion for complex solid organ and intestinal injuries. there was n= case of community-acquired pneumonia and n= case of recurrent pneumothorax. conclusions: a descriptive series of trauma operations approached with mis techniques is described. this cohort had high injury severity and a predominance of comorbid conditions. laparoscopy was successfully applied in the majority of cases for a variety of therapeutic procedures, and mortality and morbidity were low. mis is safe and is gaining momentum for application in traumatic injury. objectives: laparoscopic distal gastrectomy for early gastric cancer is a standard treatment in japan described in guidelines. the surgical procedure has been shifting from laparoscopy-assisted to completely laparoscopic surgery. in this study, we evaluated the outcomes and safety of laparoscopy-assisted distal gastrectomy. methods: to mark the oral-side transection line, clips were placed at the oral side of the cancer lesion by gastro-endoscopy before surgery. the lymph node dissection (d +/d ) is performed laparoscopically. for dissection of the suprapancreatic region, the assistant holds the left gastric artery and keeps a good view by retracting the pancreas. the common hepatic artery and the proximal side of the splenic artery are exposed. both sides of the left gastric artery and vein are exposed. the left gastric vein and left gastric artery are cut after clipping and sealing. lymph node dissection of the hepato-duodenal ligament is done and the right gastric artery is cut after clipping and sealing. the lesser curvature of the upper gastric wall is exposed (no. , dissection). billroth i reconstruction with the circular stapler (cdh) is performed. through an upper median incision of cm, the operator pulls out the stomach and transects the oral side of the stomach with a linear stapler after palpating the clips. the duodenum is transected after a purse-string suture. gastroduodenal anastomosis is performed with the cdh. results: two hundred cases were analyzed. the operation time, blood loss and conversion-to-open-surgery rate were minutes, ml, and . %, respectively.
as postoperative complications, anastomotic failure, pancreatic fistula and postoperative bleeding occurred in %, . % and %, respectively. the reoperation rate was %. there was one surgical death, due to cerebral infarction. there were no patients with a positive pathological proximal margin (ppm) or an excessive pm distance. the frequencies of abdominal wall incisional hernia and ileus were % and %, respectively. conclusion: although there is the disadvantage that a small laparotomy must be made in the upper abdomen, laparoscopy-assisted distal gastrectomy with billroth i reconstruction by our procedure is satisfactory from the viewpoints of the precision of the proximal margin and the incidence of serious complications. introduction: minilaparoscopy (mini) is a modality of minimally invasive surgery that attempts to produce less surgical trauma to the abdominal wall by reducing the diameter of surgical instruments to mm. searching for better outcomes in inguinal hernia repair, surgeons have looked for new and less invasive alternatives such as single-incision surgery, single-port surgery and mini. minilaparoscopic transabdominal preperitoneal hernia repair (mini-tapp) demonstrates some of the known advantages of mini general surgery procedures, such as enhanced visualization, improved dexterity and a great cosmetic outcome. it is safe and reproducible, since it does not differ from standard laparoscopy. introduction: the celiac plexus is a retroperitoneal structure at the level of the lumbar vertebra, lying in the prevertebral region and carrying sympathetic fibers. in patients with advanced gastrointestinal cancer and associated pain, one of the management strategies is pain control. neurolysis of the celiac plexus by laparoscopy was first reported in humans in , in patients with advanced pancreatic adenocarcinoma, with excellent results. our experience with simplification of the technique for the procedure is presented. method: neurolysis of the celiac plexus was performed in patients with advanced gastrointestinal cancer (stomach %, pancreas %, liver %, other %), with no complications associated with the procedure; pain improvement was achieved in % of patients after the procedure. the standardization of the laparoscopic technique and its simplification have made this a replicable and safe procedure. description of the technique: the patient is placed in the french position and a -trocar technique is used: a mm umbilical trocar and mm paraumbilical trocars. staging laparoscopy is performed, with sampling if necessary. in the region of the lesser curvature of the stomach, the celiac trunk and the emergence of the left gastric artery are identified, and cc of % alcohol, diluted by half, are instilled into the lateral fatty tissue through a pericranial needle under direct vision, verifying non-arterial instillation of the alcohol. there were no complications related to the procedure. results: we report the experience of one group that performed celiac plexus neurolysis in patients with advanced gastrointestinal cancer: gastric cancer %, pancreatic cancer %, liver cancer % and other %. the most frequent pathology report was adenocarcinoma. % of the patients had pain control at hours, with sustained effects up to months of follow-up and a significant decrease in pain medication. only patient required repeat laparoscopic neurolysis because of difficult-to-manage pain. the operative time of this procedure was minutes.
the standardization of the technique and the use of low-cost supplies make this type of procedure easily replicable, with good results in pain management in cancer patients. conclusions: mis is offered as one of the fundamental tools for the management of palliative procedures in gastrointestinal cancer. neurolysis of the celiac plexus, with standardization of the technique, the use of low-cost elements, and the surgeon's skills, makes this procedure an option for the management and control of pain in patients with advanced gastrointestinal cancer; it is easily replicable, economical and safe. background: the non-absorbable polymer clip offers a solution to the disadvantages of the traditional metallic clip, which, due to its metallic properties, is not only expensive but also causes artifacts on imaging studies and often migrates into the cbd. this study compares the traditional standard metallic clip with the hem-o-lock clip used in laparoscopic cholecystectomy (lc) with regard to safety and efficacy. material and methods: this study includes patients who underwent lc with metallic clips (mc) and patients with hem-o-lock clips (hc). both clips were applied to the cystic duct and artery, then the gallbladder was dissected from the liver bed by diathermy. the intraoperative and postoperative parameters were collected, including duration of the operation and complications. results: the median operative time was not statistically different between the mc and the hc group ( . vs . minutes, respectively; p= . ), with no significant difference in the incidence of bile spillage ( vs. , p= . ). no statistically significant difference was found in the incidence of postoperative complications between the two groups ( vs. , p= . ). no postoperative bile leakage was encountered in either group. conclusion: the hem-o-lock clip provides complete hemobiliary stasis and secure control of the cystic duct and artery. its cost-effectiveness is also attractive, while it provides efficacy equivalent to that of the standard metallic clip. introduction: most blunt thoracoabdominal injury patients have multiple organ injuries. the plan of definitive treatment depends on the preoperative diagnosis. in isolated traumatic diaphragmatic injury without other organ injury, a laparoscopic approach is helpful, decreasing the length of hospital stay as well as wound complications. the authors describe the laparoscopic treatment of a patient who had rupture of the diaphragm from blunt trauma in an emergency setting. methods and procedures: a -year-old man presented after a motor vehicle accident; the mechanism of injury was blunt thoracoabdominal injury. he complained of chest tightness and had tachycardia. complete evaluation and a ct scan were performed. the stomach was herniated into the left chest and a diaphragmatic rupture was found, with neither great vessel nor solid organ injury. a laparoscopic approach was chosen and the left diaphragm was repaired with non-absorbable sutures without intraoperative complication. results: the patient was discharged days post-operatively with full recovery. chest x-rays taken before discharge, and in the out-patient department at weeks and months after discharge, showed no diaphragmatic herniation. conclusion(s): a laparoscopic approach in patients with isolated traumatic diaphragmatic rupture is safe and should be considered.
short-term outcome of laparoscopy-assisted distal gastrectomy with roux-en-y reconstruction through mini-laparotomy for gastric cancer. since , we have introduced laparoscopy-assisted distal gastrectomy (ladg) with b-i reconstruction through mini-laparotomy. regarding reconstruction, roux-en-y is also one of the choices in ladg; however, its technical feasibility has not been well documented so far. the purpose of this study was to compare the short-term outcome of ladg with roux-en-y reconstruction through mini-laparotomy with that of ladg with b-i anastomosis. between and , patients who underwent ladg for gastric cancer at oita university were enrolled in this retrospective study. since , the roux-en-y reconstruction has been performed as a standard method in our department. these patients were divided into two groups based on anastomosis: the roux-en-y (r-y) group (n= ) and the billroth i (b-i) group (n= ). baseline characteristics, operative results (including complications) and pathological results were evaluated. there was a considerably greater number of patients with advanced clinical stage and ≥t invasion in the r-y group. estimated blood loss was lower in r-y than in b-i (p< . ) and operative time was longer in r-y than in b-i (p< . ). there were no significant differences in all-grade intra-operative complications (p= . ). in addition, there were no significant differences in all-grade post-operative complications between the two groups except for internal hernia. hospital mortality was % in each group. ladg with r-y reconstruction through mini-laparotomy was as technically feasible as ladg with b-i anastomosis. utilization of laparoscopy associated with blunt abdominal trauma: the nationwide inpatient sample - . kenneth w bueltmann, marek rudnicki; advocate illinois masonic medical center, chicago, il, university of illinois. introduction: the incidence of trauma and its heavy burden upon the healthcare system remain high. paradigm shifts in the management of these cases have, however, improved mortality in such cases. it can be expected that improvements in management, when combined with the benefits of laparoscopy, will demonstrate positive impacts upon treatment outcomes. methods: the nationwide inpatient sample was referenced for inpatient stays for the years to . abdominal trauma cases were selected and identified as hollow (ho) or solid organ (so) type, and as blunt or penetrating. the trauma subset was then scanned for the presence of discrete laparoscopic procedures, laparotomy, and converted cases, and flagged accordingly (a minimal sketch of this kind of flagging appears below). conclusion: utilization of laparoscopy in the treatment of intra-abdominal solid and hollow organ injury has increased over time. although the current analysis, based on available hcup nis data, includes any procedures done during the post-traumatic hospitalization, its results support the conclusion that minimally invasive techniques are being utilized in an increasing fashion. introduction: single-incision laparoscopic (sil) surgery is a laparoscopic procedure which leaves a single small incision in the navel, and has been reported to be less invasive than, and as safe and efficient as, conventional multiport laparoscopic (mpl) surgery. the long-term rate of incisional hernia after sils colectomy is unknown, and the risk factors for incisional hernia formation are not fully elucidated. methods and procedures: this is a retrospective study from a prospectively collected database.
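the case-flagging step described in the nationwide inpatient sample abstract above can be illustrated with a minimal pandas sketch; the icd-9-cm code sets and column names here are rough, illustrative assumptions, not the authors' actual extraction logic.

```python
import pandas as pd

# toy stand-in for an nis extract: one row per inpatient stay
stays = pd.DataFrame({
    "dx_codes": [["863.20"], ["864.05"], ["865.04", "863.30"], ["959.9"]],
    "proc_codes": [["54.21"], ["54.19"], ["54.21", "54.19"], []],
})

HOLLOW_DX = {"863.20", "863.30"}  # e.g., small-bowel injury codes (illustrative)
SOLID_DX = {"864.05", "865.04"}   # e.g., liver/spleen injury codes (illustrative)
LAPAROSCOPY, LAPAROTOMY = "54.21", "54.19"

stays["hollow_organ"] = stays["dx_codes"].map(lambda dx: any(c in HOLLOW_DX for c in dx))
stays["solid_organ"] = stays["dx_codes"].map(lambda dx: any(c in SOLID_DX for c in dx))
stays["laparoscopy"] = stays["proc_codes"].map(lambda pc: LAPAROSCOPY in pc)
stays["laparotomy"] = stays["proc_codes"].map(lambda pc: LAPAROTOMY in pc)
# a stay carrying both procedure codes is flagged as a converted case
stays["converted"] = stays["laparoscopy"] & stays["laparotomy"]

print(stays[["hollow_organ", "solid_organ", "laparoscopy", "laparotomy", "converted"]])
```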
the investigation took place in a high-volume multidisciplinary tertiary private hospital in japan. introduction: the laparoscopic approach in the acute surgical care setting continues to be underutilized. we aim to report the successful diagnostic and therapeutic use of laparoscopy in the management of a nontoxic patient presenting with an acute abdomen, and to highlight the benefits of a minimally invasive approach without added morbidity. case report: presented is a -year-old male with a history of cad s/p cabg two years prior and no abdominal surgical history, who presented to the ed with sudden-onset severe, diffuse abdominal pain of six hours' duration with n/v. there was no trauma to the abdomen. he had mild-moderate hypertension but was otherwise hemodynamically stable. on examination, the patient was in severe distress and writhing in pain. a fast exam was unable to be performed secondary to pain. cta of the abdomen revealed mesenteric abnormalities with associated small bowel edema in the rlq, suspicious for small bowel ischemia. he was taken to the or for diagnostic laparoscopy. he was found to have an omental adhesive band to the abdominal wall with herniation of the small bowel through the small opening. approximately cm of ischemic, nonviable small bowel was resected and anastomosed intracorporeally. he tolerated the procedure well and was discharged home on post-operative day . discussion: primary omentum-related internal herniation of the small bowel is exceedingly rare. there have been only a few cases reported in the literature ( , , , ). two were diagnosed on exploratory laparotomy, one on diagnostic laparoscopy and one at autopsy. the one who underwent diagnostic laparoscopy did not require bowel resection. in presenting this case, we hope to illustrate the role of laparoscopy in the management of acute abdominal pain due to bowel compromise. introduction: morgagni hernias are a rare finding in the adult population, and represent - % of all congenital diaphragmatic hernias. multiple approaches to these rare hernias have been described in the literature. here we present a novel technique of laparoscopic trans-abdominal repair using a combination of the endo-close device (medtronic, minneapolis, mn) and the ti-knot (lsi solutions, victor, ny). methods: in a patient with a large left anterior diaphragmatic defect, we performed trans-abdominal suturing utilizing the endo-close to achieve primary closure of the defect, using the ti-knot to secure the pledgeted sutures along the anterior fascia. due to the size of the defect ( cm), this primary repair was buttressed with polyester mesh. in a second patient with a smaller ( cm) classic right-sided anterior diaphragmatic defect, we similarly performed laparoscopic trans-abdominal suturing using the endo-close to traverse both the anterior and posterior fascia and the ti-knot to secure the sutures, in order to perform a primary repair of the hernia. both patients had an uneventful postoperative course and no indication of recurrence at months. conclusions: morgagni hernias present unique technical challenges. in our experience, the combined use of trans-abdominal sutures with a laparoscopic knot replacement device allowed for completion of both cases laparoscopically with minimal tension on the repairs.
feasibility of concomitant laparoscopic splenectomy and cholecystectomy in situs inversus totalis: first case report worldwide ibrahim a salama, md, phd; department of hepatobiliary surgery, national liver institute, menoufia university introduction: situs inversus totalis is a rare anomaly characterized by transposition of organs to the opposite side of the body. combined laparoscopic splenectomy and cholecystectomy in these patients is technically more demanding and needs reorientation of visual-motor skills. presentation of case: herein, we report a -year-old girl who presented with yellowish discoloration and left hypochondrial and epigastric pain, diagnosed as hereditary spherocytosis (hs). the patient had not been diagnosed with situs inversus totalis before. the patient exhibited a left-sided "murphy's sign" and a spleen palpable in the right hypochondrium. the diagnosis of situs inversus totalis was confirmed with ultrasound, computerized tomography (ct) and magnetic resonance imaging (mri), with an enlarged right-sided spleen and multiple gallbladder stones with no intra- or extrahepatic bile duct dilatation. the patient underwent combined laparoscopic splenectomy and cholecystectomy as treatment of hereditary spherocytosis (hs). discussion: the feasibility and technical difficulty of diagnosis and treatment in such a case pose a challenging problem due to the contralateral disposition of the viscera. difficulty is encountered in the laparoscopic technique in skeletonizing the structures in calot's triangle, which consumes more time than with a normally located gallbladder, with the surgeon standing on the right side and changing to the left side during the splenectomy. in a review of the up-to-date medical literature, this is the first case reported worldwide. conclusion: provided that the technique is performed by an experienced surgical team, concomitant laparoscopic splenectomy and cholecystectomy in situs inversus totalis is a safe and feasible procedure and may be considered for coexisting spleen and gallbladder disease, as in hereditary spherocytosis (hs); changes in the anatomical disposition of organs not only influence the localization of symptoms and signs arising from a diseased organ but also impose special demands on the diagnostic and surgical skills of the surgeon. objective: to identify the preference among medical students of the following surgical approaches: open surgery, conventional laparoscopy, minilaparoscopy (mini), single incision laparoscopic surgery (sils), natural orifice transluminal endoscopic surgery (notes), and robotic surgery. methods: an online google questionnaire was completed by medical students of different years in medical school. before answering the questionnaire, they watched an online video showing the different techniques and their advantages and disadvantages. the questionnaire consisted of questions about a hypothetical situation in which the participants would undergo an elective cholecystectomy and could decide which technique they would prefer. all statistical analysis was performed using the r software program, version . . . the chi-squared test was performed for categorical variables where appropriate (a minimal sketch of this kind of comparison appears below). a p value . was considered statistically significant. results: one hundred and eleven medical students answered the survey. ( . %) were female and were men. most of the students were between and years old ( . %). they were in the first four years of medical school. 
when asked if they would consider notes or single-incision surgery even knowing that these are new procedures without completely established safety standards, . % ( ) answered that they would not consider them, with no difference between genders (p= . ). when asked, if only conventional laparoscopy, robotics or mini were offered, which one they would choose, % of women and . % of men chose mini first (p= . ). regarding the factors they would consider most important when choosing the surgical technique, they answered safety first ( . %), followed by the surgeon's experience with the procedure ( . %), with no statistically significant difference between genders (p= . ). when asked if they would consider an open technique even with the other techniques available, and compared according to their year in medical school, students closer to finishing medical school would not consider it, a statistically significant result (p= . ). regarding the most important factors they would consider, compared by year in medical school, safety and the surgeon's experience ranked highest, a statistically significant result (p . ). conclusion: among the available surgical approaches, minilaparoscopy tends to be the preference among women medical students, who considered safety the most important aspect. the closer students get to the end of medical school, the less they consider the open technique. background: extension of the single incision for the purpose of specimen removal in single-incision plus one additional port laparoscopic surgery (sils+ ) can undermine the merits of sils+ , either by increasing wound-related morbidity or by compromising cosmesis. methods: we retrospectively analyzed the clinical outcomes of patients who underwent elective sils+ anterior resection, either with transanal specimen extraction (tase, n= ) or transumbilical specimen extraction (tuse, n= ), for colorectal cancer from january to june . this study included patients with a tumor diameter of less than cm, measured by preoperative computed tomography. results: both groups were similar in patients' baseline information and oncologic condition. most surgical data and postoperative clinical variables were comparable between the tase and tuse groups, except for a longer operative time in tase ( . ± . vs. ± . min, p= . ) and fewer wound complications in tase ( % vs. . %, p= . ). the dosage requirement of narcotic analgesics in the tase group was not inferior to that in the tuse group. no significant differences were observed in conversion rate, perioperative morbidity or overall morbidity between the two groups. conclusion: although sils+ with tase prolonged operative time compared to tuse, implementation of tase is expected to provide the benefit of reduced wound-related morbidity in patients with a tumor diameter of less than cm. medhat ibrahim, md; al-azhar university, naser city, cairo, egypt purpose: morgagni hernia (mh) is a rare condition, accounting for less than % of surgically treated diaphragmatic hernias in infants. there is no specific symptom for morgagni hernia. open surgical repair was the gold standard before the introduction of laparoscopic surgery in children and infants. many different laparoscopic techniques for mh repair have been reported. i report laparoscopic repair of mh in five infants using primary suture closure with intracorporeal knot tying and the ethicon secure strap device. this study is an evaluation of the safety and efficacy of this new laparoscopic technique of mh repair in infants, with its short-term outcomes at follow-up. 
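for the medical-student survey above, which compared categorical answers between genders with the chi-squared test in r, the following is a minimal sketch of the same kind of comparison in python; the counts are invented, since the survey's actual numbers are elided.

```python
from scipy.stats import chi2_contingency

# invented counts standing in for the survey's elided numbers:
# rows = gender (women, men); columns = first-choice technique
# (mini, conventional laparoscopy, robotic).
table = [[28, 17, 11],
         [19, 22, 14]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.3f}")
```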
patients and methods: five infants with mhs underwent laparoscopic repair by hernia sac excision, then two primary sutures of non-absorbable prolene through the full thickness of the anterior abdominal wall and the posterior rim of the defect with intracorporeal knot tying; the ethicon secure strap device was used to complete the closure of the defect. no chest tube or drain was inserted. results: five infants with mh were operated upon. there were males and female. all cases were left-sided mh; the male-female ratio was : . intraoperative and postoperative analgesia requirements were minimal (paracetamol mg/kg rectal suppository every hours for the first hours). ceftriaxone mg/kg was given as a single dose at anesthesia induction. all operations were completed laparoscopically. all infants started and tolerated regular oral feeding within hours of surgery. none of the patients developed intraoperative or postoperative complications. the maximum follow-up was months (mean, months). all patients are in good health without recurrence or port-site complication. conclusion: this easy, safe technique of mh repair reduces operative time and postoperative hospital stay. it minimizes the need for postoperative analgesia and antibiotics. early oral feeding is also a benefit. introduction: transumbilical single port laparoscopic appendectomy (tspla) is the most popularized single port surgery in the world. it provides more cosmetic benefit than conventional laparoscopic surgery. however, single port appendectomy requires a longer operation time and advanced surgical skills. we aimed to investigate the learning curve for tspla. material and methods: data were collected from patients who underwent tspla by a single surgeon between march and february . the learning curve was analyzed using a cumulative sum control chart (cusum) for operation time and complications (a minimal sketch of the cusum construction appears below). results: a total of patients were included in this study. mean operation time was . ± . minutes. there was no open or multi-port conversion. based on the cusum for operation time, the learning curve was cases. conclusions: tspla is a safe and effective alternative procedure. the learning curve could be overcome safely without major complications. our results suggest that cases are sufficient to achieve surgical proficiency in tspla. introduction: anastomotic leakage (al) is a life-threatening complication after totally minimally invasive ivor lewis esophagectomy (tmie ile) and has diverse treatment strategies such as conservative treatment, endoscopic treatment and surgery. however, there is no consensus on which treatment strategy is best. the aim of this study was to analyse various therapeutic strategies for al and their outcomes. methods and procedures: this retrospective multicentre study was performed in three high-volume hospitals. all patients who developed al after tmie ile in the period january -july were included. the different endoscopic (stenting, clipping and suction-drainage) and surgical treatments and their success rates were described; success was defined as clinical improvement after primary treatment. the primary endpoint was the time until oral feeding was resumed. secondary endpoints were hospital stay and the total number of surgical, endoscopic and radiologic interventions. results: in total, patients who developed al were identified; four patients received antibiotics only. in the remaining patients, endoscopic treatment was performed as primary treatment in %; % received primary surgical treatment. 
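for the tspla learning-curve study above, a minimal sketch of a cusum chart for operation time follows; the operation times are synthetic, and reading the curve's peak as the end of the learning phase is a common convention rather than the authors' stated rule.

```python
import numpy as np

# synthetic consecutive operation times (minutes) for one surgeon
times = np.array([82, 75, 78, 70, 66, 68, 60, 58, 61, 55], dtype=float)

# cusum_i = sum over the first i cases of (time_j - overall mean);
# the curve climbs while cases run slower than average and turns down
# once the surgeon is consistently faster, so its peak is often read
# as the end of the learning phase.
cusum = np.cumsum(times - times.mean())
peak_case = int(np.argmax(cusum)) + 1
print(f"cusum peak at case {peak_case}")
```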
basic variables were similar in these groups. the median postoperative day of diagnosis of al was day in the endoscopic group and day in the surgical group (p= . ). admission to the icu as a result of the leakage was necessary in % in the endoscopic group versus % in the surgical group (p . ). however, median icu stay was significantly shorter in the endoscopic group ( days versus days, p= . ). the success rate of the primary treatment was similar: % and %, respectively (p= . ). primary and secondary endpoints were comparable for both the endoscopic and surgical groups; median time until oral feeding was resumed was days and days, respectively (p= . ), median total hospital stay was days and days, respectively (p= . ), and the median number of interventions was in both groups (p= . ). conclusion: endoscopic treatment appears to be a safe and efficient therapy for al after tmie ile. a patient-tailored approach based on the condition of the patient and the morphology of the leak can be adopted to avoid surgery in selected patients. this may prevent surgical reoperations and reduce icu admissions. background: lymph node (ln) dissection around the recurrent laryngeal nerve (rln) is one of the most important and difficult procedures in esophageal cancer surgery because of the high rate of ln metastasis and the risk of rln palsy. especially around the left rln, the surgical field via the thoracic approach is deep and narrow, which tends to result in insufficient ln dissection. therefore, we tried to remove these lns by visualizing the lymphatic chain, in order to achieve sufficient ln dissection. surgical procedure: we perform thoracoscopic esophagectomy in the semi-prone position using - mmhg thoracic air pressure. after dissection of the right rln lns and the middle and lower esophagus, the esophagus is encircled at the level of the bronchial bifurcation and pulled toward the right side with a tape to dissect the dorsal and left sides of the upper esophagus. the tissue including the left rln lns is dissected from the trachea by pulling the esophagus up to the dorsal side, and this tissue is moved toward the dorsal side of the left rln so that the rln ln tissue can be recognized as the "lymphatic chain". to increase the mobility of the esophagus, the esophagus is cut at the level of the aortic arch and the upper esophagus is pulled further up to the dorsal side. the esophageal branches of the rln are cut and the lymphatic chain is separated from the rln. at the end of the thoracic procedure, the lymphatic chain remains attached to the upper esophagus. after the upper esophagus has been pulled out from the cervical site, the lymphatic chain can easily be recognized on the esophageal wall. result: we performed this lymphatic chain procedure in cases. to evaluate this procedure, cases of the conventional method with the same prone-positioned esophagectomy were used as controls. there was no statistical difference between these two groups in blood loss (lymphatic chain vs. conventional: ml vs. ml, p= . ) or rate of rln palsy ( . % vs. . %, p= . ). although the thoracic operation time was extended to some degree ( min vs. min, p= . ), the number of dissected lns was increased ( . vs. . , p= . ) and recurrence along the left rln has been relatively less frequent with this method ( . % vs. . %, p= . ). conclusion: ln dissection around the left rln is made easier and more complete by visualizing the lymphatic chain. further refinement is needed to secure this procedure and further evaluation should be done to support these data. introduction: to evaluate the role of robotic-assisted surgery as part of an appropriate patient work-up and treatment of ipmn, and its consistency in terms of perioperative and long-term results. 
few reports have described individual minimally invasive procedures for ipmn. this study aims to describe a comprehensive, oncologically adequate treatment of ipmn in a minimally invasive unit with an extremely high robotic penetrance. methods and procedures: we retrospectively analyzed our database of resected ipmn between and . this case series includes consecutive, unselected patients: all candidates with a preoperative diagnosis of ipmn were approached robotically. results: among robot-assisted pancreatic resections, we identified patients with ipmn. one was excluded for having less than months of follow-up, so patients were included and analyzed. they underwent duodenopancreatectomy in cases, distal pancreatectomy in cases and central pancreatectomy in . all but one indication followed the most up-to-date available guidelines (sendai from to and fukuoka from to ; american gastroenterological association guidelines were used for comparison only). one patient was operated on even though the guidelines suggested follow-up, because of a strong familial cancer history. the final pathology for this patient was high-grade dysplasia. in another patient we were inside fukuoka's recommendations, but outside aga guidelines, and the final pathology was adenoma in chronic pancreatitis. postoperative morbidity was . ( low-grade complications; one grade a pancreatic fistula, now considered a biochemical leak only) and mortality was zero. only one conversion to open surgery occurred: a dp in a jehovah's witness with a bulky mass behind the portal vein. the mean follow-up was months (range: - ), with only one loss to follow-up after months, in a case of high-grade dysplasia. conclusion: in minimally invasive hepato-pancreato-biliary centers, the treatment of ipmn can be delivered following the same principles as in major cancer centers, with comparable results. large unbiased studies are needed to evaluate whether a minimally invasive approach could modify the ratio between operated and surveilled patients. reducing the use of catheters, tubes and imaging after hiatal hernia surgery significantly reduces length of hospital stay sophia s oswald, candice l wilshire, md, brian e louie, md, ralph w aye, md, alexander s farivar, md; swedish medical center introduction: historically, standard post-operative management of patients undergoing laparoscopic hiatal hernia surgery has been placement of a foley catheter and nasogastric tube (ngt) at the time of surgery, with removal early on postoperative day (pod) one, at which time an upper-gastrointestinal series study (ugi) would be performed. we initiated a quality improvement project, seeking to assess whether we could safely forgo placement of the foley and ngt along with the ugi, unless clinically indicated. our aim was to determine if this decreased overall length of stay (los), and how often and in which demographic of patients placement of a foley or ngt was needed postoperatively. methods and procedures: we reviewed patients who had undergone laparoscopic hiatal hernia surgery between and under a single thoracic surgeon. patients were excluded for poor esophageal motility (peristalsis below %), previous esophageal surgery, and presence of a paraesophageal hernia (peh) with over % of the stomach contained in the chest. eligible patients were further stratified into two groups: fast track and non-fast track. fast track was defined as patients who left the operating room (or) with no foley or ngt, and did not receive a routine ugi on pod one. 
non-fast track was defined as patients who left the or with a foley and ngt and received a routine ugi on pod one. los was measured in hours from the start of surgery to the time of discharge. results: of the patients included, were categorized as fast track and as non-fast track. the two groups were similar in terms of age, gender, bmi and asa; however, the fast track group had fewer paraesophageal hernias and shorter surgery times [table]. the hospital los, however, was significantly shorter in the fast track group, even though more postoperative urinary catheters were utilized. no patient in the fast track group needed an ngt placed or a ugi ordered during the initial stay. conclusion: in more straightforward laparoscopic hiatal hernia surgery, surgeons can safely forgo ngt and foley placement, as well as ugi evaluation the following morning. these initiatives may translate to a quicker discharge from the ward, and may allow safe transition to performing these cases in an -hour ambulatory outpatient setting. further evaluation of additional interventions and patient education to decrease los is underway. conclusion: laparoscopic surgery seems to be a safe and feasible option, with long-term benefit, for primary tumor resection in metastatic colorectal cancer, but the optimal treatment has yet to be defined. the canadian association of gastroenterology (cag) has implemented the colonoscopy skills improvement (csi) program across canada with a goal of improving colonoscopy quality. the program's efficacy has not yet been formally assessed. this retrospective cohort study was performed on fourteen endoscopists practicing in a tertiary referral center who underwent csi training between october and december . procedural data were collected before and after csi training. data were extracted from the electronic medical record (emr) and entered into spss version . for analysis. student's t-test was used to compare groups for continuous data; chi-squared tests were used for categorical data. data were collected for a total of procedures; were done before csi training and procedures since csi training. our sample size provided % power to detect a mean difference in adr improvement of % (a sketch of this kind of power calculation appears below). the most common indication for colonoscopy was family history of colorectal cancer, in ( . %) patients. age ( . yrs vs. . yrs, p . ) and gender ( . % male vs. . % male, p= . ) were clinically similar but statistically different between groups. groups were comparable in terms of indication and completion rate ( . % vs. . %). adr improved significantly after completing the course ( . % vs. %, p . ). an improvement was also noted in both polyp detection ( . % vs. . %, p . ) and polyp removal ( . % vs. . %, p . ). we have seen a significant increase in adr at our institution since implementing the csi program. gastric cancer is a major cause of cancer-related death globally, has a higher incidence in men, and is notable for its heterogeneity. many studies have explored the molecular basis of this cancer, including its pathogenesis, invasion and metastasis. the invention of new technologies has helped to bring out several novel biomarkers that have diagnostic and prognostic value. therefore, this review centers on biomarkers for the early diagnosis, treatment and prognosis of gastric cancer, and elaborates the clinical importance of serum tumor markers in patients with this cancer, as well as examining growth and prognosis together with epigenetic changes and genetic polymorphisms. 
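for the csi colonoscopy study above, which reports % power to detect a given improvement in adr, the following is a hedged sketch of an a-priori power calculation for comparing two proportions; the rates and sample size are invented, as the abstract's numbers are elided.

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# invented rates and per-group sample size for a before/after adr comparison
baseline_adr, target_adr = 0.25, 0.35
effect = proportion_effectsize(target_adr, baseline_adr)  # cohen's h
power = NormalIndPower().power(effect_size=effect, nobs1=400,
                               alpha=0.05, ratio=1.0)
print(f"power = {power:.2f}")
```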
a deep and rigorous search was carried out in pubmed/medline using the specific terms "gastric cancer" and "tumor marker". our search yielded important reports on related topics from books and articles published before the end of september . in conclusion, scientists are devoting time and resources to tackling this disease of global burden. classical and novel biomarkers are important for treatment as well as for pre- and post-diagnosis of gc. major risk factors for this disease are cigarette smoking, infection by helicobacter pylori, atrophic gastritis, male sex, and high salt intake. early diagnosis is important to management; after pathological diagnosis, treatment is guided by stage, prognosis and metastatic setting and, although outcomes have proved not so good, includes chemotherapy with agents such as oxaliplatin, capecitabine, cisplatin and -fluorouracil ( -fu). introduction: emergent appendectomy is the standard of care in the usa, based on tradition rooted in the theory that delaying surgery allows progression of disease and poorer outcomes. antibiotic treatment alone has been shown to be feasible in the treatment of uncomplicated appendicitis. in clinical practice, surgical treatment can be delayed due to a multitude of medical and logistical reasons. this study evaluates the relation between timing of surgery and outcomes. methods and procedures: consecutive adult patients undergoing appendectomy in a teaching community hospital were risk-stratified using the acs risk calculator. time from imaging to incision defined the early and delayed groups. statistical analysis was used to determine the association between risk level, timing of surgery and outcomes. results: % of patients in this study were considered high risk. average time to incision was . hours. shorter time to incision was associated with a statistically significantly lower length of stay (p . ). for every hours of surgical delay, one day was added to the length of stay. no statistical difference was found between time to incision and the other outcome variables of clinical complications, conversion to open appendectomy or frequency of complicated appendicitis. length of stay was longer than predicted by the acs risk calculator in both high- and low-risk groups. a multidisciplinary, obesity-focused approach improves diagnosis of obesity-related illnesses: a new paradigm for the care of patients with obesity roderick olivas, aaron brown, md, racquel s bueno, md, cedric s lorenzo, md; university of hawaii -department of surgery introduction: patients suffering from the burden of obesity are at significant risk for medical problems that lead to premature death and disability. we hypothesize that a multidisciplinary bariatric team will be better equipped to recognize and diagnose these conditions. this study hopes to quantify that a patient-focused approach leads to increased recognition of obesity-associated comorbidities, thus improving quality of care and surgical outcomes. methods and procedure: a retrospective medical chart review of patients who underwent bariatric surgery from / / to / / was performed, comparing patient problem lists obtained from their primary care providers upon entry into the bariatric program with the final problem list generated after evaluation by the program's multidisciplinary team. the total number of comorbidities and the specific comorbidities identified before and after multidisciplinary team evaluation were analyzed with a paired t-test and manova, respectively (a minimal sketch of the paired comparison appears below). 
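a minimal sketch of the paired t-test named in the methods just above, with invented comorbidity counts per patient before and after multidisciplinary review:

```python
from scipy.stats import ttest_rel

# invented comorbidity counts per patient, before and after team review
before = [2, 3, 1, 4, 2, 3, 2, 1]
after = [4, 5, 3, 6, 3, 5, 4, 2]
t, p = ttest_rel(after, before)  # paired: same patients in both lists
print(f"t={t:.2f}, p={p:.4f}")
```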
comparison of the number of comorbidities identified against specific patient demographics was conducted using a paired t-test. results: a total of patient charts were selected and met the inclusion criteria. the sample consisted of % women and % men; the mean age was . ; the mean bmi was . ; % were morbidly obese (bmi ) and % were obese. the total number of comorbidities identified after evaluation by a multidisciplinary team was significantly greater (p=. ), with the average number of comorbidities diagnosed before and after being . and . , respectively. a significant increase (p . ) in the identification of comorbidities before and after evaluation was noted for all demographics, and no disparities regarding gender, age, marital status, employment status, bmi, or ethnicity were identified. conclusion: patients with obesity unknowingly suffer from many obesity-associated comorbidities simply because their health care providers have failed to recognize the existence of these conditions. surprisingly, this includes diseases that are highly associated with obesity, such as osa and t dm, for which obese patients should be screened. although the root of this dereliction is yet to be determined, insufficient obesity-focused education and inherent weight bias among providers must be considered. assessment by a multidisciplinary bariatric team resulted in the identification and treatment of an increased number of comorbidities in this patient population. increased recognition of obesity-related comorbidities improves quality of care, which can translate into improved surgical outcomes. introduction: it is known that surgical residents suffer from sleep deprivation, yet no recent study has evaluated the type and number of calls received at night. lately, burnout, depression and suicide have been subjects of interest in studies and the media because of their higher rates among residents compared to the general population. the objective of our study was to evaluate junior residents' level of fatigue and the quantity and quality of calls received during on-call nights in general surgery at the chus. methods and procedure: this was a cross-sectional study conducted on junior residents who were on call in general surgery at the chus between april and august , . the participants detailed all the calls received between pm and am in a database created in the handbase application and completed a daily calendar of their on-call night, noting all the tasks they did every half hour (surgery/consultation/sleep). the level of fatigue was evaluated at the end of the night, at am, with a visual analogue sleep scale scored out of points. results: a level of fatigue of / (tired) or / (exhausted) was reached in close to % of the on-call nights. the median number of calls per night was and the median duration of sleep was only . hours. the median length of uninterrupted sleep was . hours per night. among the total nights and calls analyzed, % were "not pertinent" and % were "reportable in the morning". more than % of the nights had at least one call "not pertinent" or "reportable in the morning" that interrupted the junior resident's sleep. the level of fatigue was significantly correlated with the number of calls received during the night (spearman's rho=+ . , p . ) and with the number of uninterrupted hours of sleep (spearman's rho=− . , p . ) (a minimal sketch of this correlation analysis appears below). conclusion: the level of fatigue is very high among junior residents in general surgery. many of the calls received during the night are not pertinent or could have been delayed to the morning. 
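a minimal sketch of the spearman correlations reported in the resident-fatigue study above; all values are invented, chosen only to mirror the direction of the reported associations (positive with calls, negative with uninterrupted sleep):

```python
from scipy.stats import spearmanr

# invented per-night values
fatigue = [8, 6, 9, 5, 7, 9, 4, 6]      # visual analogue fatigue score
calls = [14, 9, 16, 7, 12, 18, 5, 10]   # calls received that night
sleep = [2.0, 3.5, 1.5, 4.0, 2.5, 1.0, 5.0, 3.0]  # uninterrupted hours

print(spearmanr(fatigue, calls))  # expect a positive rho
print(spearmanr(fatigue, sleep))  # expect a negative rho
```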
our results lead us to the conclusion that interventions and recommendations should be made to raise nurses' and residents' awareness of the situation, to reduce unnecessary calls and the residents' level of fatigue. we hope that on-call residents' sleep will be better preserved and that this will result in fewer health issues for them (burnout, depression, suicide). without interruptions: does twitter level the playing field? heather j logghe, md, laurel milam, ma, natalie tully, bs, arghavan salles, md, phd; thomas jefferson university, washington university, introduction: frequent interruption of women in conversation has long been noted anecdotally, and studies confirm that women are interrupted more often than men. such interruptions can diminish perceptions of authority and compromise women's self-confidence. on twitter, users cannot be interrupted in the same way they can be in live conversation. thus the platform may provide a means for women to overcome this obstacle. to determine the degree to which women surgeon leaders utilize twitter compared to their male colleagues, we examined the twitter accounts and activity of the leaders of three national surgical societies. methods and procedures: lists of surgeons holding leadership positions in three surgical societies (the american college of surgeons, the association for academic surgery, and the society of american gastrointestinal and endoscopic surgeons) were obtained, and duplicate names were deleted. table details the organizations and leadership positions included. the twitter accounts of these leaders were then identified and confirmed by reviewing the accounts for surgical content. account duration was calculated from the join date. the number of tweets, accounts following, followers, and likes was recorded for each account. outliers were defined as values more than two standard deviations from the mean. results: one hundred sixty-eight men and women surgeon leaders were identified. forty-nine percent of the men and % of the women were found to have twitter accounts. mean account durations for men and women were similar, . years and . years, respectively. outliers for total tweets ( men, women), accounts following ( men), followers ( men), and likes ( men) were excluded from analyses. almost all positive outliers were men; there were no negative outliers. overall, excluding the outliers, there were no significant differences between men and women in any metric. conclusion: among leaders in the surgical organizations analyzed, a higher percentage of women than men have twitter accounts. those with the greatest number of tweets, accounts following, followers, and likes, however, are overwhelmingly male. thus, although the women in this sample were more likely than the men to have twitter accounts, men were more likely to gain influence through their accounts. increasing women's influence in this public forum may position them as much-needed role models for the current and next generations. surgical societies may help reduce the disparity in women's representation in surgical fields through education of their members on how to use social media. introduction: the aim of this study was to report the perioperative morbidity and short-term outcomes of a case series of robotic-assisted laparoscopic transabdominal preperitoneal (tapp) inguinal hernia repairs. 
methods and procedures: a retrospective review (january through december ) of patients who underwent either unilateral or bilateral robotic-assisted laparoscopic tapp inguinal herniorrhaphy by two attending surgeons was performed. patient demographics, perioperative morbidity, operative time, and follow-up data were analyzed. results: patient demographics are summarized in table . mean operative times for unilateral and bilateral inguinal herniorrhaphy were . ± . and . ± . minutes, respectively. mean robot console times for unilateral and bilateral inguinal herniorrhaphy were . ± . and . ± . minutes, respectively. postoperative complications included urinary retention ( . %), conversion to open repair ( %), and delayed reoperation ( . %). no major bleeding, surgical site infection (ssi), or mortality was observed. at the first follow-up visit ( ± days), symptoms/signs included groin/scrotal swelling ( %), seroma ( %), groin pain ( %), burning ( %), numbness ( %), and persistent urinary retention ( %). % of patients required a second follow-up visit. two patients underwent reoperation for suspected recurrence, but instead a cord lipoma was found without a hernia defect. conclusions: robotic-assisted tapp inguinal herniorrhaphy can be performed with operative times and short-term outcomes similar to those published for the open technique. robotic-assisted tapp inguinal herniorrhaphy is a safe and efficient minimally invasive surgical option with lower ssi risk and better cosmetic results. gunnar nelson, nathan lau, phd; virginia polytechnic institute & state university introduction: the fundamentals of robotic surgery (frs) and fundamental skills of robotic surgery (fsrs) are universal curricula covering a range of topics to assure a high level of surgical skill for optimal patient outcomes. this assurance of skills should include management of and response to adverse events. thus, we reviewed frs and fsrs to identify any gaps in educational content pertaining to how surgical teams are trained to handle adverse events in robotic surgery. methods and procedures: we conducted a literature search through google scholar, journal of robotic surgery, and plos one on frs and fsrs from to . we reviewed articles on preparing medical professionals to handle adverse events during robotic surgeries. besides the two curricula, we also surveyed the literature on the characteristics of adverse events and the responses of the medical team. this literature survey provided a basis for recommending additional educational content for frs and fsrs. results: in our review, the frs contains modules consisting of an introduction to robotic surgery, with cognitive, psychomotor, and team training/communication skills. meanwhile, the fsrs contains different tasks, half of which concern human-machine interaction and the other half operative interaction. both curricula appear to lack content on managing adverse events in robotic surgery. according to fda data, , adverse events were reported per , surgeries, of which (i) % related to broken pieces of surgical instruments falling into patients, (ii) . % pertained to burning holes in tissue from electric arcing, and (iii) . % related to unexpected operation of the instrument, such as power outages and issues with electrosurgical units. thus, medical professionals should be trained to manage common adverse events in robotic surgery. 
for frs, augmenting the five current scenarios in the communication section with common adverse events (e.g., broken pieces falling into patients) would minimize complications under abnormal circumstances. for fsrs, the most logical method would be augmenting the operative interaction tasks with adverse events to train medical professionals. conclusion: we discovered that universal curricula on robotic surgery lack educational content for training medical professionals to manage adverse events; out of the , procedures, ( . %) pertained to device malfunction. to protect patients' health, universal curricula must incorporate content preparing medical professionals to respond to adverse events, particularly device malfunctions, during robotic surgeries. introduction: this retrospective study was performed to evaluate the safety and feasibility of the new senhance robotic system (transenterix) for laparoscopic cholecystectomies. we report the first single-institution experience utilizing this new robotic platform. methods: approximately robotic cholecystectomies were performed using the senhance robotic system. the senhance surgical system is a new robotic platform that consists of a cockpit, manipulator arms and a connection node (figure ). this new system provides robotic surgery with numerous advantages, including an eye-tracking camera control system, haptic feedback, reusable endoscopic instruments, and high configuration versatility due to the total independence of the manipulator arms. patients were between and years of age, were eligible for a laparoscopic procedure with general anesthesia, had no life-threatening disease with a life expectancy of less than month, and had a bmi below . a retrospective review of a variety of prospectively collected pre-, peri- and postoperative data, including but not limited to patient demographics and intraoperative as well as postoperative complications, was performed. cholecystectomies were performed by expert-level laparoscopic surgeons. results: the standard laparoscopic technique and setup were easily applicable to the senhance robotic system for this particular surgery. operative time and perioperative complications were comparable to reports of standard laparoscopic cholecystectomies. no significant learning curve was detected in our case series. conclusion: we report the first experience with laparoscopic cholecystectomies using the new senhance robotic system. there were no major perioperative complications, and operative time was comparable to that of standard laparoscopic cholecystectomies well reported in the literature. this case series suggests that the senhance robotic system can be safely and easily used for laparoscopic cholecystectomies by experienced laparoscopic surgeons. background: the ergonomic benefits of robotic surgery for the health of the surgeon are widely touted, though concern remains over a perceived increased risk of injury to patients, particularly with the novice robotic surgeon. injury to the bedside surgeon and assistants due to robotic movement can also occur, though it has not previously been reported. we describe a finger fracture sustained by the bedside surgeon due to entrapment between robotic arms and discuss potential risks to the surgeon in robotic procedures. procedure: a distal pancreatectomy and splenectomy was performed utilizing the da vinci si system (intuitive surgical, inc., sunnyvale, ca). 
during the operation, hemorrhage was encountered which required an instrument exchange that was delayed by self-testing failures. after the instrument was validated and advanced into the field by the bedside surgeon, the operator abruptly took control of the device to reposition it. the external portion of the active arm was then rapidly and forcefully propelled laterally toward a stationary retracting arm. the bedside surgeon's hand was still engaged on the instrument being inserted and became trapped between the two arms, leading to a right middle finger crush injury. results: the bedside surgeon sustained a fracture of the distal phalanx at the insertion of the flexor tendon, with significant hyperextension of the joint. there was temporary paresthesia of the fingertip. while flexor tendon function was preserved and surgery was not required, the surgeon was required to maintain continuous splinting and was unable to return to full duty for a total of weeks. the surgeon has mild residual hyperextension. conclusions: while complications to the patient have previously been attributed to the robotic platform, this case demonstrates that there are other inherent hazards to members of the operative team. as is natural with all indirect visual surgical techniques, the operator becomes intensely focused on the internal view and the instruments in the field. this spatial separation is accentuated on the robotic platform, as the isolated console provides complete visual field immersion, no tactile feedback, and a disconnect from the rapid, sizeable outward arm motions needed to produce small internal movements. given the need for maximum dexterity internally, the device does not have external proximity sensors to prevent arm-arm or arm-operator collisions. while many bedside operators report anecdotes of collisions with the device, this case reveals that the forces involved at the human-machine interface can lead to more significant injuries. robotic approach to non-midline abdominal wall hernias: a single-institution experience from a high-volume center emily benzer, do, j. stephen scott, md, facs; university of missouri introduction: the objective of our study was to evaluate our experience with robotically repaired non-midline abdominal wall hernias at a high-volume robotic surgery program. we also discuss the technical advantages of the use of robotic technology in the repair of these unusual hernias, which have typically had higher recurrence rates than midline hernias. a laparoscopic approach for lateral ventral abdominal wall hernia (spigelian) and lumbar hernia has been described; however, the success of robotic-assisted repair for these hernias has yet to be determined. methods: a retrospective case analysis of all robotic abdominal hernia cases between june and june at an academic institution with a single high-volume robotic surgeon was performed. the operative details of robotic repair of non-midline abdominal hernias, patient demographics, length of stay and smoking status were recorded and analyzed. the technical advantages of the use of robotic technology, for example circumferential fixation of the mesh, ease of intracorporeal suturing, and the use of wristed instruments to gain better angles for posterior fascial release, were evaluated. results: a total of cases were identified. the average age of the patients was . years (range - years) and patients were predominantly female ( %). spigelian hernias represented % (n= ) and lumbar hernias % (n= ). 
all patients had primary closure of their defect and patients ( %) had a posterior myofascial release performed. mesh types placed included uncoated polypropylene (n= ), coated polypropylene (n= ), and biologic (n= ). those with uncoated polypropylene mesh placed had the peritoneum closed over the mesh. the average length of stay was . days (range - days). there were no recurrences identified over a mean follow-up period of . months (range . - . months). conclusion: robotic-assisted repair of non-midline abdominal wall hernias is a viable option in the elective setting, with no recurrences noted in this case series. the technical advantages of using robotic technology were identified and discussed in detail. these advantages theoretically improve outcomes in these patients; however, long-term outcomes and costs will have to be determined in future studies. inguinal hernia repair has seen several critical improvements in recent times due to the implementation of new techniques, including laparoscopic as well as robotic repair. with over , inguinal hernia repairs performed annually, it is important to identify the safest and most patient-friendly method. for surgeons, robotic-assisted laparoscopic surgery is gaining popularity for its dexterity and d visualization, but despite the growing interest in robotic hernia repairs, there is a scarcity of literature to support its superiority over open inguinal hernia repair. this study hypothesizes that patients who undergo robot-assisted laparoscopic inguinal hernia repair will have decreased immediate post-operative pain, shorter recovery room stays, decreased narcotic requirements, and overall decreased pain at follow-up compared to open inguinal hernia repair. in this study, we performed a retrospective analysis of patients who underwent either an open or a robotic-assisted laparoscopic inguinal hernia repair at stamford hospital from july to july . the following characteristics were analyzed for both subsets of patients: gender, bmi, type of repair, operative time, recovery room time, immediate post-operative pain, and post-operative pain at follow-up. our study demonstrated a longer average operative time for patients undergoing robotic hernia repair compared to open repair, which was statistically significant (p=. ). patients who underwent robotic inguinal hernia repair spent less time in the recovery room compared to patients who underwent open repair. in addition, patients in the robotic hernia group required fewer narcotics in the recovery room compared to patients who underwent open repair (p=. ). there was no statistically significant difference in length of hospital stay between the two groups. this study highlights several possible advantages of robotic inguinal hernia repair, including lower post-operative pain scores, less narcotic usage required in the post-operative period, and shorter recovery room time. the results from this study should increase interest in investigating the superiority of robotic inguinal hernia repair. future plans for study involve comparing robotic to laparoscopic repair. in addition, we plan to continue to follow the study patients to look at additional qualitative metrics, including time to return to work and time to return to daily activities. introduction: buccal mucosal grafts (bmg) are traditionally used in urethral reconstruction. 
there may be insufficient bmg for applications requiring large amounts of graft, such as urethral stricture after gender-affirming phalloplasty. rectal mucosa is an alternative with less post-operative pain, no impairment of eating and speaking, and larger graft dimensions. laparoscopic transanal minimally invasive surgery (tamis) has been described by our group. due to the technical challenges of harvesting a sizable graft within a confined space, we adopted a new approach using the intuitive da vinci xi® system. we demonstrate the feasibility and safety of a novel technique of robotic tamis (r-tamis) for the harvest of rectal mucosa for the purpose of onlay graft urethroplasty. methods and procedures: irb approval was obtained. three female-to-male transgender adults (age range: - years) presenting with post-phalloplasty urethral strictures underwent robotic rectal mucosal harvest. the procedure was first rehearsed on an inanimate model using bovine colon. the surgery was performed under general anesthesia with the patient in the lithotomy position. the gelpoint path transanal access platform was used. the rectal mucosa was harvested with the robotic instruments after submucosal hydrodissection. the specimen size harvested correlated with the clinical surface area needed for urethral reconstruction. following specimen retrieval, flexible sigmoidoscopy was used to ensure hemostasis. the rectal mucosa graft was placed as an onlay for urethroplasty. results: there were no intraoperative or postoperative complications. average graft size was cm (range: - cm). every case had excellent graft take for reconstruction. all patients recovered without morbidity or mortality. they reported minimal postoperative pain and all regained bowel function on the first postoperative day. all reported significantly less postoperative pain and greater quality of life in comparison to prior bmg harvests. the procedure has been refined to increase efficiency and decrease operative time by maintaining adequate insufflation, retraction of the mucosal graft, and graft integrity. conclusions: to our knowledge, this is the first use of r-tamis for harvest of a rectal mucosal graft. our preliminary series indicates the robotic approach is feasible and safe. it constitutes a promising minimally invasive technique to employ in urethral reconstruction. the demonstrated feasibility and the avoidance of the challenging recovery associated with bmg harvest warrant further application and long-term evaluation of this procedure. prospective studies evaluating graft success, donor-site morbidity and long-term outcomes are needed. introduction: the proportion of robotic minimally invasive procedures performed annually is growing rapidly, specifically in the field of general surgery. a robotic approach to minimally invasive procedures potentially confers a number of benefits, ranging from a magnified viewing field to greater attenuation and translation of hand movements, leading to improved stability and maneuverability. it is paramount that a robust curriculum is designed for training surgical residents in robotic techniques. the aim of this project is to assess the current state of robotic surgery training at the ohio state university, with specific regard to whether it is currently temporally effective, in addition to establishing a baseline against which the robotic surgery curriculum can be compared. 
methods and procedures: data were obtained for cases performed at the ohio state university hospital east between january and september of . case time, date, type, and attending surgeon were recorded and tracked for review. of the cases, were cholecystectomies, were unilateral inguinal hernia repairs, and were bilateral inguinal hernia repairs, for a total of procedures included in the analysis. chief residents were trained in two-month blocks, beginning in january of . mean console operative times for the first and second months were compared for cholecystectomies as well as unilateral and bilateral inguinal hernia repairs. results: mean console time decreased for cholecystectomies (− . %; n= ), bilateral (− . %; n= ) and unilateral (− . %; n= ) inguinal hernia repairs from month one to month two. there was a large amount of variance across training blocks, but there was a systematic improvement in operative time across the training period. average operation length was shortest for cholecystectomies (m= . min), followed by unilateral inguinal hernia repairs (m= . min), and finally bilateral inguinal hernia repairs (m= . min). discussion: these preliminary data suggest that residents are able to decrease their robotic operation time over the course of the two-month rotation. although sample sizes were relatively small for each block, the consistency of the trend supports this conclusion. further data collection will allow for more precise estimates in the future, and stronger conclusions to be drawn. these results show that rapid improvement is possible and provide motivation to establish robotic surgery curricula for general surgery residents nationally. robotic pancreas-sparing treatment of pancreatic neuroendocrine tumors: three case reports and review of the literature alessandra marano, giorgio giraudo, stefano giaccardi, desiree cianflocca, diego sasia, felice borghi; santa croce e carle hospital introduction: pancreas-sparing resections would be the ideal procedure for small pancreatic neuroendocrine tumors (p-nets), reducing the risk of exocrine and endocrine insufficiency. compared to standard resection, this type of surgery is safe and feasible without increasing the risk of postoperative complications, except for the overall rate of clinical pancreatic fistula (pf), which did not result in higher mortality or overall morbidity. robotic surgery for p-net enucleation has rarely been described, but initial experiences have shown that this approach is associated with favorable outcomes. the aim of this study is to describe three cases of dv®si™ pancreatic enucleation for p-nets located in the uncinate process, in the body and in the posterior aspect of the tail of the pancreas, respectively. a brief review of the literature regarding the application of robotics for p-net enucleation is also included. methods and procedures: this study includes patients undergoing dv®si™ enucleation for p-nets with a maximum diameter of no more than cm and a distance between the tumour and the main pancreatic duct (mpd) of greater than mm. at surgery, exposure of the pancreas was achieved by separation and traction of the gastrocolic and gastropancreatic ligaments. the pancreas was explored, and intraoperative ultrasound was used to ensure negative margins and leave the mpd intact. a cross-stitch through the tumour was made routinely in order to retract the tumour. enucleoresection was carried out with monopolar scissors and bipolar forceps. 
the tumour was placed into a specimen bag and removed through the trocar port. a drain was always left in place. results: median total operative time was min. no conversion or intraoperative complication occurred. median length of stay was . days. two patients presented with a grade a pf (isgpf classification), while a grade b pf occurred in the case of the pancreatic tail net enucleation. final pathology revealed two insulinomas and one non-functioning net of the pancreatic body. at a median follow-up of months, no pancreatic insufficiency, reoperation or tumour recurrence was observed in any case. the robotic approach for the treatment of p-nets is safe and feasible and, in selected cases, may extend the indications for minimally invasive pancreas-sparing surgery. in particular, the robotic approach provides a more precise dissection and may ensure negative margins while keeping the mpd intact. these preliminary results are consistent with literature data on over robotic pancreatic enucleations for p-nets, which show favourable surgical outcomes, especially when compared with those of open surgery. introduction: rectal cancer continues to be a surgical challenge. new technologies must be incorporated into practice and, at the same time, oncologic surgery and overall outcomes must be improved. the use of da vinci robotic surgery systems has spread rapidly in the field of rectal cancer treatment, showing several technical advantages and favorable outcomes compared to laparoscopy. since the introduction of the robotic platform in our institution in , we have adopted a single-docking robotic technique for rectal resection. the aim of this study is to present our standardized technique and to analyse the clinical outcomes of the first robotic rectal procedures. methods and procedures: prospectively collected data were reviewed from consecutive patients who underwent single-docking totally robotic (da vinci® si™) dissection for rectal cancer resection between june and august under an eras program. robotic rectal surgery was performed without changing the position of the robotic cart; only the robotic arms were repositioned between two phases: ) vascular ligation and sigmoid colon to splenic flexure mobilization; and ) pelvic tme. results: there were men ( %) and the median age was years (range - ). thirty-five patients had neo-adjuvant chemoradiotherapy, whilst patients had a bmi above . procedures performed included anterior resection (n= ) and abdominoperineal resection (n= ). protective ileostomy was performed in patients. the median operating time was min (range - ). there was one conversion and two intra-operative complications (one bladder lesion and one ureteral lesion). median length of stay was . days (range - ), and the readmission rate was %. thirty-day mortality was zero. the anastomotic leak rate was %, and all patients except one were managed conservatively. the mean number of lymph nodes harvested was (sd ± . ). the radial margin was negative in all patients. at a median follow-up of months, there were no local recurrences. the single-docking robotic technique is a safe and feasible approach for rectal surgery: in our study it demonstrated favourable clinical outcomes, and the adoption of a standardized stepwise approach was useful, especially during the initial learning phase. to the best of our knowledge, this is the largest series from italy to report this standardized approach and its short-term clinical and oncological outcomes. 
in complex laparoscopic surgical procedures, the laparoscope and the surgical instruments can interfere with each other because multiple instruments are concentrated in one place. this problem appears most markedly in laparoendoscopic single-site surgery. we therefore proposed a multi-degrees-of-freedom (dof) manipulator with a mantle tube for assisting laparoendoscopic surgery; the manipulator has two flexion mechanisms and one telescopic mechanism, actuated by wires. any thin surgical instrument, such as an endoscope, can be inserted into the mantle tube of the multi-dof manipulator, which lets those surgical instruments access the operative field from a different axis than the other instruments. the use of this manipulator has two advantages: one is avoidance of fighting between the instruments and the laparoscope; the other is that it becomes possible to ensure a satisfactory field of vision in the operative field. in this report, we assumed that this multi-dof manipulator is used as a laparoendoscope. in order to evaluate the performance of this manipulator, the operation time of a test in an abdominal cavity simulator (fasotec inc.) was measured. the test is a multiple-target contact test, in which a forceps is brought into contact with multiple targets in the abdominal cavity simulator according to a defined pattern. as a general comparison and evaluation target for this measurement, it was compared with the case using the same access method as a conventional rigid endoscope. in this test, the number of contacts between the forceps and the laparoendoscope was recorded using an electrical device. the subjects (n= ) were adult men trained in peg transfer in the above simulator. the total operating times of the test and the field of vision obtained with each device were compared. from these results, with the proposed manipulator rather than a rigid laparoscope, a satisfactory field of vision was obtained, the operating time was shortened by approximately seconds, and the number of contacts was significantly smaller; the effectiveness of the proposed manipulator was thus shown. for this reason, use of this device is expected to facilitate complex surgical operations. additionally, an ablative operation on swine liver tissue was performed in the abdominal cavity simulator as a step preceding clinical testing. the operative field in this test was surveyed, and refinements of this manipulator to improve its performance are described in this report. yoshiyuki usui, md, phd, ichiro akiyama, md, phd, hironori kunisue, md, phd, hideaki mori, md, phd, tetsuya ota, md, phd; okayama medical center background and methods: we have performed approximately cases of gasless endoscopic thyroid surgery since , over years. this surgery was performed through a small subclavian incision, using wire traction and an inserted endoscope. we have modified and improved our surgical techniques by inventing various surgical instruments. here we introduce four newly invented surgical instruments, chronologically. results: we made the u-retractor ( ), the u-trocar ( ), the u-kelly forceps ( ), and the u-suction tube retractor ( ). all were modified from conventional surgical instruments. the u-retractor is a piercing retractor, with a sharp tip at one end and a retractor at the other. this retractor was inserted through the -cm working port from outside the body and retracted the muscles effectively. 
The U-trocar is set in reverse, from inside to outside, to make the working space wider. The U-Kelly forceps, which have a special ratchet, were made to dissect the loose connective tissue around the thyroid gland while avoiding injury to the recurrent laryngeal nerve. The U-suction tube retractor facilitates a wider working port and effectively eliminates the mist created by the ultrasonically activated scalpel. Recent data showed no difference in operative time, hoarseness, blood loss or hospital stay between conventional thyroid lobectomy and gasless endoscopic lobectomy. Conclusion: Gasless endoscopic thyroid surgery has been improved over the last years. The procedure has made possible the excision not only of benign thyroid tumors but also of small thyroid carcinomas. The operation is also cost-effective, because almost all the surgical instruments are reusable, and it is a satisfactory experience for both patients and surgeons.

Objective: To put forward the importance of complete (R ) resection for the treatment of retroperitoneal tumors in increasing overall survival. Methods: In this study, patients with a diagnosis of retroperitoneal tumors of different histopathological subtypes who were hospitalized in the emergency surgery department of Istanbul Medical Faculty between and were evaluated retrospectively. The database of the department was analyzed. Operative details, histopathological results, radiological evaluations, and assessments of relapses and overall survival were obtained from the medical archive. Results: The average follow-up time was . years. All patients included in the study underwent surgery. The average hospital stay was days. Of the patients, were found to have positive surgical margins on histopathological evaluation. The overall mortality rate of the study was % ( / ). We observed a direct correlation between complete (R ) resection and disease-free survival. Patients with relapses had a worse prognosis in terms of overall survival ( % mortality rate). On statistical evaluation, surgery was found to be the main determining factor for overall survival. Conclusion: Referral to an experienced, multidisciplinary surgical center after early diagnosis is of utmost importance in the treatment of retroperitoneal tumors. Surgical resection constitutes the main element of management, and overall survival is directly correlated with complete (R ) resection.

Novel fluorescent dyes for real-time, intraoperative, organ-specific visualization of biliary and urinary systems using dual-color near-infrared imaging; Children's National Health System, NIH/NCI. Multidisciplinary approach for management of necrotizing pancreatitis: a case series. Prabhu Senthil-Kumar; University of Alberta, Centre for the Advancement of Minimally Invasive Surgery. Introduction: The objective of this study was to systematically review the bariatric surgery literature to understand how weight loss is reported. The incidence of obesity has increased globally; according to the World Health Organization, more than million people were obese in . In the last decade, bariatric surgery has been increasingly utilized as an effective treatment option for severely obese patients, and bariatric procedures are now among the most commonly performed operations. The primary outcome of such procedures is weight loss, which has been shown to vary according to the type of surgery.
However, different methods are used to report weight loss, which makes it difficult to directly compare outcomes between studies. A previous review by Dixon et al. in revealed wide heterogeneity in weight loss reporting, and there have been no recent reviews of weight loss reporting in bariatric surgery. Methods: A search of the MEDLINE electronic database was performed for studies published in using the search terms gastric bypass/sleeve gastrectomy, weight, human, and English. Articles were selected by two independent reviewers based on the following inclusion criteria: ( ) adult participants ≥ years.

Predictive factors for excess body weight loss after bariatric surgery in Japanese obese patients. Takeshi Naitoh. Hypertension resolution after rapid weight loss: a single institution experience. Cristian Milla Matute. Reoperative bariatric surgery: analysis of indications and outcomes: a single center experience. Iman Ghaderi.

Objective: To observe the effects of duodenal-jejunal transit on glucose tolerance and diabetes remission in a gastric bypass rat model. Method: To verify the effect of duodenal-jejunal transit on glucose tolerance and diabetes remission after gastric bypass, twenty-two type- diabetic Sprague-Dawley rats, with the model established through a high-fat diet and low-dose streptozotocin (STZ) administered intraperitoneally, were assigned to one of three groups: gastric bypass with duodenal-jejunal transit (GB-DJT, n= ), gastric bypass without duodenal-jejunal transit (RYGB, n= ), and sham (n= ). Body weight, food intake, blood glucose, as well as meal-stimulated insulin and incretin hormone responses, were assessed to ascertain the effect of surgery in all groups. Oral glucose tolerance tests (OGTT) and insulin tolerance tests (ITT) were conducted three and seven weeks after surgery. Results: Comparing our GB-DJT to the RYGB group, we saw no differences in the mean decline in body weight, food intake, or blood glucose weeks after surgery. The GB-DJT group exhibited immediate and sustained glucose control throughout the study; outcomes with sham operation did not differ from preoperative levels. Conclusion: Preserving duodenal-jejunal transit does not impede glucose tolerance or diabetes remission after gastric bypass in a type- diabetic Sprague-Dawley rat model.

Is bariatric surgery effective for comorbidity resolution in super-obese patients? Methods: A retrospective analysis of a prospectively maintained database was performed for obese patients with a diagnosis of at least one of the following comorbidities (T DM, HTN, OSA, or HLD) at the time of initial visit, who had undergone either sleeve gastrectomy (SG) or Roux-en-Y gastric bypass (RYGB) at our hospital between and . Patients were stratified based on their preoperative body mass index (BMI) class: BMI .

Methods: We retrospectively reviewed all patients who underwent laparoscopic sleeve gastrectomy (LSG) at our institution from to . Common demographics and comorbidities were collected, as well as creatinine preoperatively and up to hours after surgery. Renal function was calculated using the CKD-EPI formula, derived and validated by Levey et al. Acute kidney injury was defined as an increase in serum creatinine by ≥ . mg/dl within hours after surgery. All tests were two-tailed and performed at a significance level of . . Statistical software R, version . . ( - - ), was used for all analyses.
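The two renal calculations named above, the CKD-EPI estimate of GFR and a creatinine-delta definition of acute kidney injury, are easy to make concrete. A minimal Python sketch follows; since the abstract's numeric thresholds are elided, the 2009 CKD-EPI creatinine coefficients (race term omitted) and a KDIGO-style 0.3 mg/dl rise are assumptions here, and `has_aki` is an invented helper name.

```python
# Sketch of the renal-function calculations described above.
# Assumptions: 2009 CKD-EPI creatinine equation (race coefficient omitted);
# AKI threshold of 0.3 mg/dl, since the abstract's cut-off is elided.

def ckd_epi_egfr(scr_mg_dl: float, age_years: float, female: bool) -> float:
    """Estimated GFR in ml/min/1.73 m^2 from serum creatinine, age and sex."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = (141.0
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.209
            * 0.993 ** age_years)
    return egfr * 1.018 if female else egfr

def has_aki(scr_pre: float, scr_post: float, delta: float = 0.3) -> bool:
    """Flag AKI as a creatinine rise of at least `delta` mg/dl
    within the postoperative observation window."""
    return (scr_post - scr_pre) >= delta

print(round(ckd_epi_egfr(1.1, 52, female=True), 1))  # eGFR for Scr 1.1, age 52
print(has_aki(0.9, 1.3))                             # True: rise of 0.4 mg/dl
```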
Results: Of the patients reviewed… Conclusion: The impact of laparoscopic sleeve gastrectomy on renal function is evident within the first hours after surgery. Patients undergoing LSG, especially those with baseline chronic kidney disease stage ≥ , are at increased risk of developing acute kidney injury in the perioperative setting.

The body mass index (BMI), fasting plasma glucose (FPG), glycosylated hemoglobin (HbA c), serum triglyceride, serum cholesterol and blood pressure of all patients were measured before and at months after surgery, and the results were collected and analyzed. Results: patients suffering from metabolic disease underwent LSG successfully (mean age years); were male and were female. All patients suffered from obesity, and their mean BMI was . ± . kg/m before surgery. Among them, patients had type diabetes mellitus (T DM), patients had hypertriglyceridemia (HTG), patients had hypercholesterolemia (HC) and patients had hypertension. The mean BMI at months after surgery was . ± . kg/m and had decreased significantly (p < . ). The mean excess weight loss (%EWL) was . % ± . % ( %- %) at months after surgery. The average FPG and HbA c levels of T DM patients at months after surgery were . ± . mmol/l and . % ± . %.

Methods: We retrospectively reviewed all patients who underwent bariatric surgery from to . We assessed kidney function using the Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equation and cardiovascular risk using the Framingham risk score (FRS) equation, preoperatively and at and months of follow-up. Our population was divided into two groups: patients with CKD stage ≥ (GFR < ml/min) and patients with normal GFR. Results: Of the , patients reviewed, . % (n= ) met the criteria for CKD-EPI glomerular filtration rate (GFR) and Framingham risk score (FRS) calculations. After matching, patients ( . %) were left to analyze, % (n= ) of whom had undergone laparoscopic sleeve gastrectomy. Eighty-six patients ( %) had impaired kidney function (CKD ≥ ) (group ) and patients ( %) had a normal GFR (group ). Common demographics and comorbidities after matching are described in Table . The mean creatinine in group was . ± . mg/dl versus . ± . mg/dl in group (p ). The glomerular filtration rate was . ± . ml/min in group and ± . ml/min in group . Furthermore, when the FRS was calculated at months of follow-up, patients with impaired kidney function had an absolute risk reduction of . %, corresponding to a relative risk reduction (RRR) of % relative to group (a worked example of these two measures is sketched below). The percentage of estimated BMI loss was similar in both groups ( . ± . and . ± . , respectively; p= . ). Conclusions: Bariatric surgery, especially LSG, has a positive impact on kidney function, particularly in patients with chronic kidney disease stage or greater. Despite these patients having a higher preoperative cardiovascular risk, they showed a risk reduction similar to that of patients with normal kidney function at months of follow-up.

The impact of socioeconomic factors and indigenous status. Jerry T Dang.

Only ( . %) patients underwent urgent conversion for management of complications after SG. Three patients had intraoperative complications necessitating blood transfusion. Fourteen ( . %) patients required readmission within days postoperatively. Six patients ( . %) required surgical interventions, including for gastrointestinal leak, for hemodynamic instability, for a cecal perforation, and for a small bowel obstruction. There were no mortalities within the first year after revisional surgery.
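The absolute and relative risk reductions quoted in the Framingham comparison above are simple arithmetic on the two groups' event rates. A worked sketch with invented rates, since the abstract's actual values are elided:

```python
def risk_reduction(rate_control: float, rate_treated: float):
    """Absolute risk reduction (ARR) is the plain difference in event
    rates; relative risk reduction (RRR) expresses that difference as a
    fraction of the control-group rate."""
    arr = rate_control - rate_treated
    rrr = arr / rate_control
    return arr, rrr

# Invented Framingham-style 10-year event rates, for illustration only.
arr, rrr = risk_reduction(rate_control=0.20, rate_treated=0.12)
print(f"ARR = {arr:.1%}, RRR = {rrr:.1%}")  # ARR = 8.0%, RRR = 40.0%
```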
In patients with BMI > kg/m at the time of revisional surgery, at a median postoperative follow-up of (interquartile range - ) months, a median (interquartile range - ) kg/m reduction in BMI was observed. Overall, ( . %) patients had persistent type diabetes at the time of revisional surgery. Improvement of diabetes was observed in patients ( . %) after conversion of SG to RYGB. Among patients with GERD symptoms, subjective symptomatic relief was reported at the last follow-up. Conclusion: Weight recidivism is the most common indication for revision of SG.

Objective: To evaluate laparoscopic mini-gastric bypass in the treatment of morbid obesity. Method: Three hundred patients with a mean BMI of . kg/m underwent laparoscopic mini-gastric bypass between and . A laparoscopic approach with five trocar incisions was used to create a long, narrow gastric tube; this was then anastomosed antecolically to a loop of jejunum cm distal to the ligament of Treitz. Peri-operative and short-term follow-up results are reported.

Does age or preoperative BMI influence weight loss after bariatric surgery? One-way ANOVA or the Kruskal-Wallis test was used to compare continuous data across all groups; subsequent analysis of categorical data was achieved by chi-square or Fisher's exact test (a sketch of this test selection appears below). Statistical significance was accepted as p < . . Results: A total of patients ( % male) were analyzed. Average age and preoperative BMI were . ( . ) years and . ( . ) kg/m , respectively. Preoperative comorbidities included diabetes ( . %), hypertension ( . %), hyperlipidemia ( . %), previous myocardial infarction ( . %), obstructive sleep apnea ( . %), chronic obstructive pulmonary disease ( . %), gastroesophageal reflux ( . %), and tobacco use ( . %). The ASA classes of patients undergoing SG were II ( . %), III ( . %), and IV ( . %). The follow-up rate at , and months was . %, . %, and . %, respectively. The -day mortality and readmission rates were % and . %, respectively. %EWL was not different among age groups at , or months for the total, male, or female cohorts. Among preoperative BMI groups, %EWL was not different in any cohort at or months, but differed at months for the total cohort (p < . ) and the female cohort (p < . ), and trended toward significance in the male cohort (p= . ). The highest %EWL was found in patients with a preoperative BMI of - . There was no difference in -day mortality or readmissions among groups.

A CRP ≥ mg/dl had a sensitivity for a complication of % and a specificity of %. Primary bariatric surgery patients with a post-operative complication had higher CRP levels than those who did not ( . ± . mg/dl vs . ± . mg/dl; p= . ). There was no difference in CRP levels for patients with a -day reoperation or readmission. There were no mortalities. Conclusions: Bariatric surgery patients with elevated post-operative CRP levels are at increased risk for -day complications. The low sensitivity of a CRP ≥ mg/dl suggests that a normal CRP…

Methods and procedures: The patients, who formed the previously published cohort, were contacted and their charts were reviewed. Follow-up visits, symptom severity scores, and any subsequent medical or surgical interventions were collected. Symptoms were assessed using the symptom severity score (SSS) and the gastroparesis cardinal symptom index (GCSI) questionnaires. Success was defined as an SSS of or less.
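The analysis plan in the age/BMI abstract above (one-way ANOVA or Kruskal-Wallis for continuous data, chi-square or Fisher's exact for categorical data) maps directly onto standard SciPy calls. A minimal sketch with invented data; the group values and the 2x2 table are placeholders, not study data.

```python
from scipy import stats

# Continuous outcome (e.g. %EWL) across three groups -- invented values.
g1, g2, g3 = [22.1, 25.3, 27.8], [20.4, 24.9, 26.1], [18.7, 21.2, 23.5]
f_stat, p_anova = stats.f_oneway(g1, g2, g3)   # parametric comparison
h_stat, p_kw = stats.kruskal(g1, g2, g3)       # non-parametric alternative

# Categorical outcome (e.g. readmission by group) -- invented 2x2 table.
table = [[5, 45], [9, 41]]
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)
# Fisher's exact test is the usual fallback when expected counts are small.
odds_ratio, p_fisher = stats.fisher_exact(table)

print(p_anova, p_kw, p_chi2, p_fisher)
```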
Results: Of the original patients, patients ( males, females) were available for follow-up ( patients declined participation, were lost to follow-up, patient was deceased, and was excluded after undergoing esophagectomy for an unrelated indication).

MBBS; Grant Government Medical College and Sir JJ Government Hospitals. Methods and procedures: Twenty-six NH patients with DM were prospectively randomized to undergo either LRYGB or LSG. Patients were followed for years, with primary end points consisting of total weight loss (TWL), percent excess body weight loss (%EBW), and impact on DM as measured by fasting blood glucose (FBS) and HbA c. In addition, levels of glucagon-like peptide- (GLP- ), peptide YY (PYY), leptin, and ghrelin were collected at baseline, at week, and at , , , , and months post-operatively. Results: A total of / patients completed follow-up. The %EBW at year for LRYGB and LSG was % and %, respectively. Resolution of DM occurred in / patients; the remaining three subjects were in the LSG arm. Pre-operative FBS in the LRYGB and LSG groups was and , respectively. Pre-operative HbA c in the LRYGB and LSG groups was . and . , respectively. FBS at year for LRYGB and LSG was and , while HbA c for LRYGB and LSG was . and . , respectively. A consistent post-operative decrease in FBS was seen only with LRYGB. LRYGB ghrelin percentages increased at , , and months, while levels decreased with LSG. Leptin percentages decreased in both groups. PYY levels remained relatively unchanged in both groups. LRYGB GLP- levels increased at week and at , , and months. LSG GLP- trends were similar except at months, where GLP- levels decreased. Conclusion: LRYGB and LSG resulted in equivalent post-surgical weight loss and resolution of DM in the NH population.

Video-assisted thoracoscopic thymectomy (VATS) has emerged as a minimally invasive alternative to the standard transsternal approach. We present herewith the surgical and neurological outcomes after VATS. Operative time, blood loss, conversion rate, and post-operative parameters such as intensive care unit (ICU) stay, intercostal drainage (ICD) indwelling time, and hospital stay were recorded. Neurological outcomes were assessed based on the Myasthenia Gravis Foundation of America (MGFA) post-intervention status classification. Statistical analysis was done using Stata software. Results: Ninety patients underwent thoracoscopic thymectomy during the study period. VATS was done through a right approach in ( . %), a left approach in ( %), a bilateral approach in patients ( %), and a subxiphoid approach in ( . %). There was conversion to an open approach in ( . %) patients due to dense adhesions.

Patients treated at West China Hospital of Sichuan University were included. All operations were performed by a single skilled surgeon. We divided our patients into two groups based on whether ISAO was used: patients received ISAO for LPS and patients received LPS without ISAO. Surgical technique and safety were evaluated. Results: There were no significant differences in the preoperative characteristics of the two groups. Significantly less intraoperative blood loss ( . ± . ml vs . ± . ml; t=- . , p= . ) was observed in the ISAO group. Conclusions: ISAO is a technically feasible and safe surgical technique for patients receiving LPS, and it represents an effective method to decrease intraoperative blood loss.
P Modular laser-based endoluminal ablation of early cancers: in-vivo dose-effect evaluation and predictive numerical modelling. Giuseppe. Endoscopic submucosal dissection enables en-bloc removal of early gastrointestinal neoplasms; however, it is technically demanding and time-consuming. Laser-based ablation (LA) techniques are limited by the lack of depth-penetration control and of thermal damage (TD) prediction. Our aim was to evaluate predictive numerical modelling (PNM) of the TD to preoperatively select the optimal power and exposure time, enabling controlled ablation down to the submucosa (SM). Additionally, the ability of confocal endomicroscopy (CE) to provide information on the TD was assessed. At histology, damage depth increased with higher-energy (J) applications. The R value at . J was . ± . , significantly lower than at energies from J (R= . ± . ; p < . ) up to J ( . ± . ; p < . ). Safe mucosal and SM ablations were achieved by applying lower power settings ( . and W) at different exposure times, leading to muscularis propria impairment in only and % of cases, respectively. CE provided relevant images of the TD, consisting of architectural distortion and disappearance of the gland contours. The predicted damage depth…

We also analyzed early gastric cancer patients who received LPG-IP with a -cm jejunal interposition. The anastomosis procedure was the overlap method for esophagojejunostomy and gastrojejunostomy, and FEEA for jejunojejunostomy. Results: The comparison between OTG and OPG-IP shows no significant difference in perioperative complications or QOL scores, and significantly smaller body weight loss in the OPG-IP group. The LPG-IP group also shows good short-term outcomes. Consideration: As the comparison in open surgery implies the superiority of jejunal interposition, we have introduced LPG-IP. Esophagogastrostomy after proximal gastrectomy is simple but carries a risk of severe GERD symptoms, and no optimal procedure for reconstruction after proximal gastrectomy has yet been established. Although laparoscopic jejunal interposition is a relatively complicated procedure, it can be performed safely in combination with common anastomosis techniques. Conclusion: Body weight loss in the OTG-IP group is smaller compared with the OTG group.

Consecutive patients with early gastric cancer underwent solo SPDG (n= ) or MLDG (n= ) performed by the same surgical team. Solo SPDG can be defined as a practice in which a surgeon operates alone using a camera holder; MLDG usually requires two or three surgical assistants. The inclusion criteria in this study were (i) pathologically proven stage I-II gastric cancer, (ii) no other malignancy, (iii) more than D lymph node dissection, and (iv) R surgery. One-to-two propensity score matching was performed to compensate for the differences between the two groups (a matching sketch follows below). Results: After propensity score matching, solo SPDG (n= ) and MLDG (n= ) patients were selected. Mean operation time ( ± . vs ± . min, p= . ) and estimated blood loss (EBL) ( . ± . vs . ± . ml, p= . ) were significantly lower in the solo SPDG group than in the MLDG group. Hospital stay and the use of pain control were similar between the two groups.
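One-to-two propensity score matching, as used in the solo SPDG versus MLDG comparison above, is commonly implemented as a logistic model of treatment assignment followed by nearest-neighbour selection of two controls per treated case. A minimal sketch with synthetic data, assuming scikit-learn; this greedy variant matches with replacement, which real analyses often tighten with calipers and matching without replacement.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))        # synthetic covariates (age, BMI, ...)
t = rng.integers(0, 2, size=200)     # 1 = solo SPDG, 0 = MLDG (placeholders)

# Propensity score: modelled probability of treatment given covariates.
ps = LogisticRegression(max_iter=1000).fit(X, t).predict_proba(X)[:, 1]

treated = np.flatnonzero(t == 1)
control = np.flatnonzero(t == 0)

# For each treated case, take the two controls closest in propensity score.
nn = NearestNeighbors(n_neighbors=2).fit(ps[control].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treated].reshape(-1, 1))
matched_controls = control[idx]      # shape (n_treated, 2)

print(matched_controls[:3])
```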
Although the initiation of a semi-fluid diet was similar, the time to first flatus was earlier in the solo SPDG group.

Adhesional omental hernia: a case report. An unexpected cause of small intestinal obstruction in Crohn's disease. Strangulation inguinal hernia due to an omental band adhesion within the hernia sac: a case report. Omental adhesion, intestinal herniation, and unexpected death in the elderly. Small bowel obstruction secondary to greater omental encircling band: an unusual case report.

The median operative time was min. The median postoperative hospital stay was . days. Histological examination of the tumors revealed carcinomas, adenomas, and carcinoid. Complications occurred in ( %) patients, viz. SSI (two patients), pancreatic fistula (two patients), bleeding (two patients), passing failure (one patient), and cholangitis (one patient). However, no severe postoperative complications (Clavien-Dindo classification grade or higher) were reported in these cases. Conclusion: Our cases showed that duodenal tumor resection using LECS enables curative, minimally invasive treatment.

This study aimed to compare the outcomes of TLTG with those of LATG using a meta-analysis. Methods: We searched PubMed, Embase, and the Cochrane Library in May to locate prospective or retrospective studies on the surgical outcomes of TLTG versus LATG. The outcome measures were postoperative complications, such as anastomotic leakage and anastomotic stenosis, operation time, blood loss, time to flatus, time to first oral intake, and postoperative hospital stay.

Endoscopic thyroid lobectomy: our early experience at tertiary care hospitals of Lahore.

Univariate analysis was performed, followed by logistic regression to identify independent predictors of the primary outcome. Results: Forty-six of ( %) patients referred for GP required JT insertion to treat malnutrition. The etiology of GP included % idiopathic, % diabetic, and % post-surgical. Thirty-six patients ( %) reported severe daily symptoms. Twenty-five patients ( %) had a successful return to oral intake, while ( %) required prolonged feeding access, reinsertion of a JT, or TPN initiation. On multivariate analysis, pyloroplasty (p= . , OR . ) and being married (p= . , OR . ) were found to be independent predictors of successful discontinuation of tube feedings. On subgroup analysis, -hour gastric emptying time normalized after pyloroplasty (p= . ) in the patients who had a successful re-initiation of oral intake, while gastric emptying that remained delayed despite pyloroplasty was associated with failure. The group of patients who underwent pyloroplasty did not differ in demographics, marital status (p= . ), or preoperative gastric emptying (p= . ) from those who did not. GP etiology (p= . ), psychiatric conditions (p= . ), and substance abuse…

Laparoscopic transabdominal repair of Morgagni hernia. Rebekah Macfie.

Average procedure length was . minutes. Average hospital length of stay was . days, with all patients tolerating a regular diet prior to discharge. Our -day readmission rate was / ( . %). / ( . %) patients required repeat EGD evaluation for either recurrence of symptoms or an impacted food bolus. At -week follow-up, / patients ( %) complained of dysphagia and / patients ( %) had eliminated PPIs from their daily medication regimen. At -month follow-up, / patients ( %) complained of dysphagia and / patients ( %) had eliminated PPIs. At -year follow-up, / patients ( %) complained of dysphagia and / patients ( %) had eliminated PPIs.
Conclusion: As LINX is a recently introduced surgical option, no long-term data exist detailing the procedure's ultimate success rate and complication profile.

Mini-laparoscopic vs traditional laparoscopic cholecystectomy: preliminary report. Deniz Atasoy. Since the introduction of minilaparoscopic cholecystectomy (MLC) in , it has gained little interest, which could be attributed to the decreased durability of the reduced-size instruments, poorer optical resolution, and the smaller jaws of the instrument tips. Our aim was to compare the outcomes of MLC with traditional laparoscopic cholecystectomy (TLC). One patient developed choledocholithiasis on postoperative day one, and after ERCP the course was uneventful. The other patient developed choledocholithiasis and acute pancreatitis on the sixth postoperative day and was treated conservatively; the stone in the ampulla had passed by itself, without need for ERCP.

Single-incision plus one additional port laparoscopic surgery for colorectal cancer with transanal specimen extraction: a comparative study.

Two patients had had a previous attempt at hernia repair, one with mesh. One patient did not have any immunosuppression due to HIV infection, whereas the others were on cyclosporine, tacrolimus, and/or mycophenolate mofetil. There were two laparoscopic and two open cases; mean operative time was . minutes ( - ), and mean blood loss was ml ( - ). The meshes used were biological porcine dermis in one case and polypropylene with an absorbable hydrogel barrier in three cases. Mean mesh length and width were cm ( - ) and . cm ( - ), respectively. One patient underwent component separation, though none of the patients had the fascial defect closed. There were no intra-operative complications. Three patients were readmitted, for hyperkalemia, abdominal pain, and seroma, respectively. Neither recurrences nor reoperations were reported. Mean follow-up was . days ( - ). Conclusion: Post-liver-transplant incisional hernia repair is feasible either laparoscopically or in an open fashion. Because of the size and location of the defect, fascial closure is unlikely to be achievable. The use of standard techniques and materials gives results similar to those in the non-transplant population.

P Technique of esophagojejunostomy using OrVil after laparoscopy-assisted total gastrectomy for gastric cancer. Shinichi Sakuramoto.

There was a significant difference in mortality between the two time periods: / patients died during - and / died during - (p= . ). Those who died were significantly older ( years ( - )) than the survivors ( years ( - )) (p= . ). Five of the patients who died in the earlier period died without any intervention. / of those who had an acute open necrosectomy died, and surgical necrosectomy correlated significantly with mortality (p= . ). The only patient who died in the recent period died without any intervention; none of the patients receiving minimally invasive drainage in this period have died to date.

Until now, only cases in adults and fewer than cases in children have been reported in the world literature, with surgical management being the only option. An innovative, minimally invasive laparoscopic excision of the abdominal sac was performed, and the scrotal component was managed by Jaboulay's procedure. This is probably the first case report in the world literature describing laparoscopic management of hydrocele-en-bissac. Case report: A -year-old male presented with complaints of bilateral hydrocele and a swelling in the right lower abdomen of one year's duration.
Computed tomography of the abdomen revealed an encysted hypodense lesion with enhancing walls along the right side of the pelvis, anterior to the psoas muscle and extending through the internal ring into the right inguinal region up to the scrotal sac, measuring . cm x . cm, suggestive of an encysted hydrocele of the cord associated with hydrocele of both scrotal sacs.

Excessive gastric resection may result in postoperative deformity of the stomach, with consequent gastric stasis of food. To minimize the resection of stomach tissue, especially for lesions close to the esophagogastric junction or pyloric ring, we have developed laparoscopic wedge resection (LWR) with the serosal and muscular layers incision technique (SAMIT) for gastric gastrointestinal stromal tumors. SAMIT is simple and does not require special devices. Purpose: The purpose of this study was to clarify whether LWR with SAMIT for gastric GISTs is technically feasible in terms of short-term outcomes.

Methods: All patients who underwent LSG in our department between / and / were evaluated for bleeding complications after implementation of an anti-bleeding policy: blood pressure was controlled to mmHg during stomach resection, and the staple line was reinforced throughout its length with a running - absorbable V-Loc suture. Drains were used selectively. Results: Of the patients who went through the procedure, ( . %) suffered hemorrhagic complications: patients had a hemoglobin drop > gr%; patients received - red blood cell packs. No patients were re-operated for bleeding. patients were readmitted for infected hematoma and had CT-guided drainage. One patient ( . %) suffered a leak. Conclusion: Implementation of an anti-bleeding policy in LSG is very effective, and there is no need to use expensive buttress material to achieve these results. Drains can be used selectively. The impact of this policy on leak rate needs to be evaluated.

Fifty procedures immediately prior to, immediately after, and eight months after completion of training were included for each endoscopist. Data were extracted from the electronic medical record and entered into SPSS for analysis. Student's t-test was used to compare groups for continuous data, and chi-squared tests were used for categorical data. Data were collected for procedures. Patient groups pre, post, and eight months after CSI training were comparable in terms of age ( . yrs, . yrs, and . yrs) and sex.

It's in the bag: can stoma output predict acute kidney injury in new ostomates? Robert Fearn. Colostomy output stabilised rapidly, whilst ileostomy output increased progressively throughout the first postoperative days, as can be seen in Chart . Twelve patients ( %) developed AKI during the index admission. Length of stay was significantly greater in the AKI group, at ( % CI - ) days vs ( - ) days. The highest daily stoma output was non-significantly higher in the AKI group, ml ( % CI - , ml) vs , ( - , ml), as was the mean daily stoma output, at ml ( - , ml) vs ml ( - ml) (Chart ). Seventeen patients ( %) were readmitted for any reason, ( %) specifically for AKI. In total, patients ( %) developed AKI within three months of their stoma surgery, only of whom had developed AKI during their index admission. All patients who developed AKI after their index admission were ileostomy patients.
Conclusion: Acute kidney injury in new stoma patients is associated with prolonged hospital stay and with readmissions, with the associated morbidity and healthcare costs.

Consecutive laparoscopic bariatric operations were performed, including primary Roux-en-Y gastric bypasses (LRYGB), primary adjustable gastric bands (LAGB), primary sleeve gastrectomies (LSG), and secondary bariatric surgeries and revisions. All bariatric procedures were approached laparoscopically ( procedures were stapled and were non-stapled). The mean patient age was years ( - ), females represented %, and mean BMI was . kg/m ( - ). There were no perioperative mortalities, no conversions to open surgery, and no intraoperative blood transfusions. There were two major intraoperative complications (hypopharyngeal perforation, ; malignant hyperthermia, ). Mean hospital stay was . days ( - days). Eleven patients ( . %; in the gastric bypass group and one in the LSG group) required -day reoperations for postoperative complications (staple-line gastrointestinal bleeding, ; anastomotic leak, ; strangulated port-site hernia, ; unexplained severe abdominal pain, ; intestinal obstruction, ; and intraabdominal abscess, ). There were no long-term ( -year) mortalities among patients who required reoperation. There was one transfer to another institution. The dynamics of further improving safety were such that there were no complications in the most recent consecutive stapled procedures, for which the mean hospital stay was . days ( - days). Detailed subgroup analyses will be provided. Conclusions: With well-controlled and structured pre-, intra-, and post-operative care, laparoscopic bariatric surgery can be performed with minimal reoperations and zero mortality in a teaching institution.

Does concomitant placement of a feeding jejunostomy tube during esophagectomy affect quality outcomes? MD, FACS; Icahn School of Medicine at Mount Sinai. Background: Placement of a feeding jejunostomy tube (FJ) is often performed during esophagectomy. Few studies, however, have sought to determine whether concomitant placement affects the postoperative outcomes of esophagectomy.

Of these, LDG was performed in patients and ODG in . We compared elderly patients (aged years or more) with younger patients for each operative procedure (LDG: elderly , younger ; ODG: elderly , younger ). Preoperative comorbidity and surgical results were analyzed, and multivariate analysis was performed to detect predictive factors for postoperative complications. Results: In both the LDG and ODG groups, operative time and blood loss did not differ, while comorbidity was more common in elderly patients than in the non-elderly, and fewer lymph nodes were retrieved in elderly patients. The incidence of all postoperative complications did not differ between the groups for either procedure, and there were no significant differences in time to first flatus or postoperative hospital stay. However, among specific postoperative complications, respiratory complications were observed significantly more frequently in the elderly group with ODG (p= . ), but not with LDG. In multivariable analysis, age was not an independent predictor of postoperative complications. Conclusion: ODG in elderly patients requires particular attention to postoperative respiratory complications. LDG is a safe and less invasive treatment for gastric cancer in elderly patients, who have greater comorbidity.
P Examining the role of preoperative ineffective esophageal motility in laparoscopic fundoplication outcomes. Tyler Hall. There were no significant differences in complications or recurrence rates. Preoperative quality-of-life measures did not vary between the cohorts, nor did postoperative scores at three weeks or six months. Patients with % ineffective clearance exhibited worse GERD-HRQL scores one and two years postoperatively. Conclusion: Preoperative ineffective esophageal motility was shown to result in comparable short-term quality of life following ARS. However, GERD-HRQL scores at one and two years showed worse outcomes in patients with preoperative IEM.

Robotic surgery as part of oncologically adequate IPMN treatment: indications, short- and long-term results. Federico Gheza. Eligible patients who had minimally invasive surgery were stratified into multiport laparoscopic and robotic cohorts, and included if they had POI/SBO after surgery. Comparative analysis assessed demographic, perioperative, and postoperative outcomes. The main outcome measures were the incidence rate, associated variables, and time to ileus/SBO across the MIS platforms. Results: During the study period, total patients were reviewed: laparoscopic and robotic. Postoperatively, ( . %) laparoscopic and ( . %) robotic patients suffered from POI/SBO. Laparoscopic SBO occurred significantly later after the index procedure than robotic SBO. Conclusions: The rate of POI/SBO is considerable and comparable across laparoscopic and robotic approaches. However, there are distinct differences in severity, time to occurrence, and impact on quality measures, such as LOS and readmissions, between the laparoscopic and robotic approaches. This information could be an important factor in which approach the surgeon chooses.

The laparoscopic surgical procedure was standard, using a laparoscopic linear stapler. Responses to surgery were evaluated a month after the operation based upon the American Society of Hematology evidence-based practice guidelines for ITP. Results: There were no open conversions in this study. The mean operation time and blood loss were min and g, respectively. No case required blood transfusion during or after the operation. With regard to complications, one patient ( %) had a postoperative pancreatic fistula that did not require percutaneous drainage. Positive responses, including complete and partial remissions, were achieved in % ( / ). The mean follow-up duration was months, and the -, -, and -year relapse-free survival rates were % at all three time points. Conclusions: The present study demonstrated that LS for ITP can provide good long-term outcomes.

Two cases of conversion from SP-C to open surgery were excluded. All procedures were followed postoperatively for a minimum of months, and wound complications such as bleeding, fat lysis, infection, or hernia were recorded. Patients were classified as having a wound complication or not. Results: Pure transumbilical SP-C was completed in . %, additional trocars were used in . %, and the rate of conversion to open surgery was . %. After a median follow-up of . (range - ) months…

Few cases were performed with hand assist, NOTES, or single incision. Utilization of robotics was highest for BPD/DS ( of , cases, . %). The greatest numbers of robotic-assisted cases were sleeve gastrectomy ( , of , ; . %) and gastric bypass ( , of , cases; . %). Relatively few operations were converted to a different approach (see Table).
Operative time was longer with the robotic approach for both sleeve ( . vs . minutes, p < . ) and bypass ( . vs . , p < . ). Postoperative LOS was no shorter with robotic assistance (see Table). Unadjusted -day outcomes revealed slightly higher rates of readmission for both operations when using robotic assistance (see Table), and slightly higher rates of complications after robotic sleeve gastrectomy.

P Comparison of perioperative and survival outcomes of laparoscopic versus open gastrectomy after preoperative chemotherapy: a propensity score-matched analysis. Adjustment for potential selection bias in the surgical approach was made with propensity score-matched (PSM) analysis. Perioperative and survival outcomes were compared between the LAG and OG groups. Results: In total, patients were identified from the database. After PSM analysis, patients who underwent OG were matched one-to-one to patients who underwent LAG in the setting of NACT. These two groups had similar outcomes in terms of intra- and postoperative complications and -year overall survival. However, the LAG group had a longer operation time (p= . ) and lower estimated blood loss (p= . ). Moreover, compared with patients in the OG group, those in the LAG group had fewer days until first ambulation. Conclusion: The present study indicates that LAG performed by well-qualified surgeons for the treatment of locally advanced gastric cancer after preoperative chemotherapy is as acceptable as OG in terms of oncological outcomes.

P Outcomes of laparoscopic antireflux surgery for gastroesophageal reflux disease: effectiveness and economic benefits. Kyung Won Seo, PhD; Kosin University College of Medicine. Purpose: Laparoscopic antireflux surgery (ARS) is an alternative treatment option for gastroesophageal reflux disease (GERD) worldwide. However, the effectiveness and economic feasibility of ARS versus medical treatment are unknown. This study was performed to evaluate the effectiveness and economic benefits of ARS. Methods: Nine patients with GERD were treated using laparoscopic ARS between and . Surgical results and the total cost of surgery were reviewed. Results: Seven men and women were enrolled. Preoperatively, typical symptoms were present in patients, while atypical symptoms were present in patients. One patient underwent partial fundoplication due to absent peristalsis, and the others underwent Nissen fundoplication. Postoperatively, typical symptoms were controlled in of patients, while atypical symptoms were controlled in of patients. Overall, at months after surgery, reported partial resolution of GERD symptoms, with achieving complete control. The average cost of ARS for the nine patients was USD. Conclusion: Laparoscopic ARS is effective for controlling typical and atypical GERD symptoms, and its cost may be more economical over the long term than medical treatment.

Since laparoscopic surgery is reported to affect respiration and circulation, the indication for LAG in elderly patients should be considered carefully; it remains controversial. The aim of this study is to assess the safety and validity of LAG for elderly patients. Method: Medical records were retrospectively reviewed for patients who underwent LAG for gastric cancer between and . In this study, patients over years of age were defined as elderly. Patients were divided into two groups according to age: group A (age ≥ , n= ) and group B (age < , n= ).
Preoperative characteristics and postoperative outcomes were analyzed. Two-tailed Student's t-test and/or Pearson's chi-square test were used for statistical analysis. Results: There were no significant differences in male/female ratio or body mass index between the two groups. The number of patients whose ASA physical status was ≥ and/or whose performance status was ≥ did not differ, nor did the rates of total gastrectomy ( . vs . %, p= . ) and proximal gastrectomy ( vs . %, p= . ). Intra-operative blood loss, operating time, and the number of harvested lymph nodes did not differ between the two groups. As for postoperative complications, such as intra-abdominal abscess ( . vs . %, p= . ) and anastomotic leakage ( vs . %, p= . ), no significant difference was observed between the two groups. In addition, no respiratory or cardiovascular complications were observed in elderly patients. The incidence of Clavien-Dindo classification ≥ grade ( . vs . %, p= . ) and postoperative hospital stay ( . vs . days, p= . ) did not differ. Conclusion: Short-term outcomes of LAG in elderly patients were not different from those in young patients.

The essential role of the transcystic duct tube (C-tube) during laparoscopic common bile duct exploration (LCBDE). Towakai Hospital. Introduction: Laparoscopic common bile duct exploration (LCBDE) is a standard surgical procedure for the treatment of common bile duct stones (CBDS). However, there are some problems associated with CBD drainage after these operations, even when performed with primary closure. Therefore, we developed a new drainage tube, the C-tube, which contributes to shorter drainage periods and reduces perioperative complications. Method: The C-tube is a type of bile drainage tube that is fixed to the cystic duct with an elastic band. Closing the duct with the elastic band as soon as the C-tube is removed prevents bile leakage from the stump of the cystic duct. The essential roles of this tube include: ( ) assisting suturing during operations, ( ) use during intra- and post-operative cholangiography, and ( ) assisting post-operative endoscopic sphincterotomy when necessary.

We included patients from the years prior to our intervention and compared them with patients who had follow-up after implementation. We excluded patients having revisions or gastric banding, and patients whose primary surgeon had left during the data collection period. We analyzed demographics and follow-up rates at , , , , and months. The chi-square test was used to evaluate significance, and results were corrected for multiple comparisons. Results: patients met the inclusion criteria in the pre-intervention group, and in the post-intervention group. Of those, were analyzed for the -year follow-up visit. The pre-intervention group had males and females, with an average age of . Approximately / of the surgeries performed were SG and / were RYGB. The post-intervention group had males and females, with an average age of . Approximately half of the post-intervention cases were SG, while the rest were RYGB. Conclusion: Bariatric surgery is a useful tool for aiding weight loss and improving comorbidities. It is essential that patients receive long-term follow-up and monitoring to achieve these goals. Our program now uses a system of phone-call reminders for scheduled visits, as well as calls and letters for annual visits.

Surgeon's evaluation of an intraoperative microbreaks web-app. Workload questions were modified NASA Task Load Index items (physical demand, mental demand, and complexity) and procedural difficulty, on - ( = maximum impact) scales.
Primary outcomes were the impact of microbreaks on surgeons' physical performance, mental focus, pain/discomfort, and fatigue, with checkboxes for improved, no change, and diminished. Secondary outcomes were the impact of microbreaks on distraction level and workflow disruption, using a - ( = maximum impact) scale. Descriptive statistics were calculated as medians and interquartile ranges (IQR) of these responses. Results: Seven surgeons ( male, female), with a median (IQR) surgical experience of ( . , ) years, completed ten surgical days with a median (IQR) operative duration of ( , ) minutes per surgical day. The median number of microbreaks per surgical day was . The median (IQR) values for mental demand, physical demand, surgical complexity, and difficulty are shown in Table . Following each surgical day, surgeons reported improved physical performance in / .

Situs inversus totalis (SIT) is inherited in an autosomal-recessive fashion, with complete abnormal transposition of the thoracic and abdominal viscera. Its incidence varies from in to in live births. For those undergoing surgery, a laparoscopic approach is preferred, as it avoids inappropriate incisions. However, due to mirroring of the viscera, the surgeon faces constant visuo-spatial disorientation during laparoscopy.

P ''How to be a surgeon and not die trying'': control of basic physiological parameters in the perioperative phase. Second main variable: blood pressure (BP), measured with a manual cuff. Preoperative and immediate postoperative BP were measured; we were not able to measure intraoperative BP because the surgeons involved did not consent to the use of devices other than the heart-rate band. Secondary variables: years since graduation, years of practice, age, body mass index (BMI), number of medical comorbidities, number of jobs, and hours of sleep the night before. We took measurements from surgeons during laparoscopic cholecystectomy. Results: The mean preoperative heart rate was . bpm. The mean minimum intraoperative heart rate was bpm. The mean maximum intraoperative heart rate was . bpm ( % with tachycardia during surgery). The mean immediate postoperative heart rate was . bpm. The mean heart rate minutes after the postoperative phase was . bpm. In the immediate preoperative phase, % of surgeons (usually normotensive) had an elevated BP level.

Articles were randomly selected, and the gender of the first and last authors was determined. Results: Of the bariatric surgery publications reviewed, only % of first authors and . % of last authors were female surgeons. Even though the proportion of female authors has increased over time, this is not proportional to the increase in the number of female surgeons or surgery residents (Figure ). Discussion: Female surgeons are under-represented in bariatric surgery research, even though the number of female surgeons and residents has trended continuously upward over the last few decades.

Our survey also included the validated QuickDASH (Disabilities of the Arm, Shoulder, and Hand) questionnaire for upper-limb symptoms and the ability to perform certain physical activities. The QuickDASH is scored in two components: the disability/symptom score and the optional work module, which represent the impact of disability on daily activities and on work responsibilities, respectively. Both scores range from to , with a higher score indicating greater disability (a scoring sketch follows below). Surgeons were grouped according to surgical focus (open, lap, or RA), and comparisons were made between groups. Surveys with more than % of responses missing were excluded.
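The QuickDASH transformation described above (items averaged, then rescaled so 0 is no disability and 100 is maximal disability) is a one-line formula. A minimal sketch, assuming the standard published rule of 11 disability items scored 1-5 with at most one missing answer; the abstract's exact missing-data threshold is elided, so that cut-off is an assumption.

```python
def quickdash_disability(responses):
    """QuickDASH disability/symptom score on a 0-100 scale.

    `responses`: the 11 item answers, each 1 (no difficulty) to 5
    (unable); None marks a missing item. Returns None when more than
    one item is missing, per the standard scoring rule.
    """
    answered = [r for r in responses if r is not None]
    if len(answered) < 10:
        return None
    mean_item = sum(answered) / len(answered)   # between 1.0 and 5.0
    return (mean_item - 1.0) * 25.0             # rescaled to 0-100

# Example: mostly mild difficulty, one unanswered item.
print(quickdash_disability([2, 1, 3, 2, 2, 1, 1, 2, None, 2, 1]))  # 17.5
```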
Statistical analyses were done using SPSS , with α = . . Results: completed surveys were evaluated (open: n= , lap: n= , RA: n= ). The survey response rate was %; . % of respondents were general surgeons, and the mean age was ± . years. Surgeons reported an average of ± . cases performed per month. There were no differences between groups in the prevalence of upper-limb pain (RA: . %, p= . ), and likewise no differences in mean disability scores. Similarly, there was a positive correlation between mean work scores and reported upper-limb pain for lap and RA, both p < . . Conclusions: This nationwide survey revealed a similar prevalence of upper-limb pain among surgeons performing open, laparoscopic, and robotic-assisted procedures. Likewise, similar disability scores were reported across the three surgical groups. Older surgeons performing laparoscopic and robotic-assisted approaches, unlike open surgeons, reported a higher impact of upper-limb problems interfering with their daily activities. Among all surgeons who reported upper-limb pain, laparoscopic and robotic surgeons were more likely to report that this pain interferes with their work activities.

An analysis of subjective and objective fatigue between laparoscopic and robotic surgical skills practice.

P D laparoscopic versus robotic gastrectomy for gastric cancer: comparison of short-term surgical outcomes. Lin Chen, Xin Guo. Patients who underwent D-LAG (n= ) or RAG (n= ) for gastric cancer were enrolled. Clinicopathological factors and short-term surgical outcomes were compared retrospectively. Results: The clinicopathological factors of the two groups were well matched. Postoperative recovery factors, including days to first flatus, days to eating a liquid diet, and hospital stay, were similar. The rates of postoperative complications in the two groups showed no statistical differences. In the subgroup of patients with total gastrectomy, D-LAG had less blood loss and a shorter operative time than RAG (p= . and p < . ), while for distal gastrectomy, blood loss and operative time showed no statistical differences. Conclusions: This study suggests that D-LAG is a novel and acceptable surgical technology in terms of surgical and oncological outcomes, and a promising approach for gastric cancer therapy.

Methods: Patients who underwent robotic surgery in Turkey between the beginning of and the first half of were included. Data were obtained from a prospectively maintained database. Patient, surgeon, and hospital identifiers were encrypted. Parameters were operation type, operation year, robotic system used (S, Si, Xi), hospital volume, and surgeon volume. A high-volume robotic colorectal hospital or surgeon was defined as a caseload within the fourth quartile ( th- th percentile) based on the median value. Results: There were colorectal procedures. surgeons performed robotic colorectal surgery at hospitals. ( . %) and ( . %) procedures were performed with the S-Si and Xi platforms, respectively. hospitals have both the Si and Xi platforms; hospitals currently have the Si and hospitals the Xi. The number of robotic colorectal operations increased gradually by year (Figure ). The median numbers of colorectal procedures were (range - ) per hospital and (range - ) per surgeon. Among the high-volume robotic colorectal surgeons, the numbers of Si and Xi users were and , respectively. The surgeons who performed more than procedures continued to use the robot in their practice, except one surgeon who stopped at .
Only left colectomies, and no right colonic resections, were performed before the introduction of the Xi platform.

First robotic cases and implementation of a robotics curriculum in a general surgery residency. Domenech Asbun. Statistical analysis was performed with software from Armonk, NY, using Student's t-test and chi-square; we also performed a linear regression analysis to determine the effect of OR time, robotic surgery, and diagnosis on operating-room costs and postoperative length of stay. Results: laparoscopic and robotic cholecystectomies were performed. Demographic parameters (age, gender, medical comorbidities, preoperative albumin and BMI, surgical history, and smoking) were comparable. Primary diagnosis was significantly different (chi-square . ), driven by more acute cholecystitis in the laparoscopic group. / robotic cases and / ( . %, p= . ) laparoscopic cases were converted to open ( for adhesions, for failure to progress, and for visualization of anatomy). After adjusting for OR time and diagnosis, robotic surgery was associated with a $ increase in costs. Robotic surgery is independently associated with increased OR cost, but individual hospital systems must decide whether this additional cost is outweighed by increased robot utilization and training benefits for physicians and staff.

Robotic abdominal wall hernia repairs: technical considerations and lessons learned. Inguinal hernia repairs (IHRs) comprised the majority ( . %) of cases ( . % male, mean age . , mean BMI . ). There were unilateral IHRs, with an average operative time of . ± . min and an average EBL of . ml. There were bilateral IHRs, with an average operative time of . ± . min and an average EBL of . ml. Thirteen IHRs were combined with umbilical hernia repairs and two with incisional hernia repairs; average operative time for combined procedures was . min and average EBL was . ml. Fifty-five incisional hernias were repaired robotically ( . % male, mean age . , mean BMI . ), four of which were retrorectus, and two of those required transversus abdominis release. Median hernia size was cm ( - cm). Mean operative time was . ± . min and average EBL was . ml. Twenty-three ventral/umbilical hernias were repaired robotically ( . % male, mean age . , mean BMI . , median size . cm ( - cm), mean operative time . ± . min, average EBL . ml). One Spigelian hernia (operative time min, EBL ml) and one parastomal hernia (operative time min, EBL ml) were repaired robotically. There were no major complications and only groin seroma requiring percutaneous aspiration. Nine patients required… Conclusion: This study demonstrates improved outcomes of robotic inguinal hernia repair compared with an open or laparoscopic approach. Robotic hernia repair showed overall lower -day complication and readmission rates and a shortened LOS, while the open approach had the highest rate of opiate use.

We retrospectively investigated consecutive overweight gastric cancer patients (BMI ≥ ) who underwent distal gastrectomy with D lymphadenectomy ( RAG and LAG), performed by two surgeons. Clinicopathological and surgical features were compared between the groups. The cutoff point between the initial phase (phase I) and the stable phase (phase II) was determined from the cumulative sum (CUSUM) curve of operation time (a sketch of this calculation follows below). Results: Overall, surgical outcomes in the overweight patients, including postoperative complication rate, duration of postoperative hospital stay, and lymph node harvest, were comparable between the RAG and LAG groups. The cutoffs determining phases I and II according to the CUSUM figure for the RAG group were and cases for surgeons A and B, respectively.
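The CUSUM learning-curve analysis used above for the phase I/phase II cutoff is typically the running sum of each consecutive case's deviation from the mean operation time: the curve climbs while cases run longer than average and turns down once they run shorter, so its peak marks the transition to the stable phase. A minimal sketch with invented operation times:

```python
import numpy as np

def cusum_learning_curve(op_times_min):
    """CUSUM_i = sum over the first i cases of (time_j - overall mean)."""
    x = np.asarray(op_times_min, dtype=float)
    return np.cumsum(x - x.mean())

# Invented consecutive operation times (minutes) for one surgeon.
times = [240, 230, 235, 220, 210, 180, 170, 165, 160, 155, 150, 150]
curve = cusum_learning_curve(times)
cutoff_case = int(np.argmax(curve)) + 1  # 1-based case index at the peak
print(cutoff_case)                       # 5: phase I spans the first 5 cases
```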
Comparative analysis showed that the operation time in phase II of RAG was significantly shorter.

Robotic-assisted transabdominal preperitoneal inguinal herniorrhaphy: a single-center experience including perioperative morbidity and short-term outcomes. Patient factors, treatment factors, and outcome measures were collected in an attempt to gain insight and to generate ideas to potentially improve outcomes. Results: There were no operative complications. Six patients ( %) had failed gastric pacemaker placement prior to intervention. Nine patients ( %) reported improvement in their symptoms and overall quality of life. Four patients ( %) reported no improvement in symptoms and required additional intervention for symptom control and supportive care (one underwent Roux-en-Y gastric bypass; three underwent laparoscopic jejunostomy feeding-tube placement to maintain nutrition). Conclusion: Robotic-assisted pyloroplasty is a safe option that improved symptoms and quality of life in % of our patients.

Patients were matched into cohorts by procedure type. Outcomes were analyzed using the unpaired t-test and Fisher's exact test. Results: Cost data were available for patients undergoing RAS or LA procedures. Significant increases in equipment, labor, and overhead costs resulted with RAS vs LA, while variable-labor and variable-overhead costs were significantly higher in LA procedures. Higher supply costs and longer procedure times were seen with RAS in all cohorts; however, total -day costs were not significantly different in any group. Conclusion: RAS led to significant increases in fixed costs.

Clinical, operative, and pathologic factors were reviewed and analyzed. Results: Seventy patients underwent robotic surgery for rectal cancer during the study period. The tumor locations were upper rectum and lower rectum . The procedures were as follows: high anterior resection in , low anterior resection in , ISR in , and APR in patients. Eight patients underwent bilateral lateral lymph node dissection (LLND). The procedures were performed successfully in all cases. Mean age was . years, % of the patients were men, and the mean body mass index was . (range . - . ) kg/m . Median operative duration was ( - ) minutes. Median blood loss was ( - ) ml. Median postoperative stay was ( - ) days. The mean number of harvested lymph nodes was . ( - ). Surgical margins were negative in all cases. There was one conversion, due to bleeding during the LLND, and anastomotic leakage occurred in two patients. Morbidity was %. There was no postoperative mortality in this series. Conclusion: In this early series of selected patients, the technique appears to be feasible and safe when performed by surgeons skilled in laparoscopic colorectal surgery.

The inactive electrode was placed touching small bowel to simulate accidental thermal injury. The bowel tissue at the site of temperature change was immediately resected and examined histologically for tissue injury. Student's t-tests were used for all comparisons, with a p-value less than . considered statistically significant. Results: Comparisons of the laparoscopic and robotic techniques are displayed in Table . Energy transfer was quantified as energy leak (per mA), which in these tests averaged a . degree Celsius change ( % CI . - . ) at the inactive electrode. Surface temperature rose to a maximum of . degrees Celsius, more in the robotic system than in laparoscopy but still clinically negligible. Pathology results from in-vivo testing showed only thermal injury to the serosa, without deeper mural injury.
conclusions: stray energy transfer occurs in both laparoscopic and robotic surgery in amounts that are measurable but without clinical relevance. the average change in tissue temperature is less than degrees celsius laparoscopically and less than degrees robotically. while the robotic system appears to transfer more stray energy, no significant bowel injuries were caused in either group. p robot assistance can improve the performance of laparoscopic extensive concomitant adhesiolysis: results from a large observational study. federico gheza. outcomes compared were operative time, conversion rate, overall complications, gastrointestinal (gi) related complications (wound infection, abdominal abscess, anastomotic leak, ileus and small bowel obstruction), hospital length of stay, and -day re-admission rate. a two-sample t-test was used, and p < . was considered statistically significant. results: fifty-five robotic colectomies were matched with laparoscopic counterparts based on type of operation: right colectomy (n= ), sigmoidectomy (n= ), low anterior resection (n= ), proctocolectomy (n= ), transverse colectomy (n= ), abdominoperineal resection (n= ), and total abdominal colectomy (n= ). we assessed whether technical obstacles to laparoscopic suturing were decreased and whether laparoscopic skills overall were improved. surgical outcomes were compared relative to our historic values; we assessed procedure time and operating room efficiency, including set-up and turn-over times. results: overall, the d/flexdex system permitted a greater improvement in working speed, superior optical visualization, and better suture handling compared to standard laparoscopy. all surgeries were completed without any complications. historically, we considered laparoscopic suturing to be complicated and inefficient. we relied on tacking devices for mesh fixation, and suturing was previously completed with large, cumbersome straight laparoscopic devices. however, with flexdex and endoeye flex d, tacking devices have been eliminated and suturing technique improved. the mean total procedure times remained comparable for inguinal and hiatal hernia surgeries, and slightly longer for ventral hernias. operating room efficiency, including mean set-up and turn-over times, also remained unchanged. the acquisition cost covered both the olympus endoeye flex d laparoscopic imaging system and the flexdex device. we performed a cost analysis which showed an average total cost of $ , for laparoscopic sleeve gastrectomy and an average of $ , for robotic assisted. the total reimbursements were $ , for laparoscopic sleeve gastrectomy and $ , for robot assisted. this translated to an average contribution margin of $ , for laparoscopic vs $ , for robot assisted. we analyzed these differences for bypasses as well. bypasses averaged minutes laparoscopically vs robotically. we found an average cost of $ , laparoscopic vs $ , robot assisted, with a contribution margin of $ , laparoscopic vs $ , robot assisted. conclusions: in our study we noted increased operative times with robot-assisted operations, especially bypasses, which could be explained by increased use of the robotic system for difficult cases such as revisional bypasses. the impact of cost is especially important in this financial climate, and judicious use of resources becomes important when determining surgical approach. average or time for rih was minutes compared to lih, which was minutes. average intraoperative cost for rih was $ , compared to lih, which was $ .
of note, one lih was converted to open, whereas none of the rih required conversion. average los was . hours for rih compared to . hours for lih. postoperative pain at one-week follow-up was the same between both groups. two postoperative surgical site occurrences (ssos) occurred in the lih group ( groin seromas), whereas no ssos occurred in the rih group. eleven ventral hernia repairs were examined: were robotic (rvh) and were laparoscopic (lvh). average or time for rvh was minutes compared to minutes for lvh. average intraoperative cost for rvh was $ , compared to lvh, which was $ , . no procedure from either group required conversion to open. average los was . hours for rvh and . hours for lvh. again, postoperative pain was the same at one-week follow-up for both groups. there were no postoperative complications noted in either cohort. conclusion: operative times for rvh and rih repairs were longer, and procedural costs higher, when compared to their laparoscopic counterparts. however, with increased operative experience using the robotic platform, surgical time did show a decreasing trend. does the robotic system have advantages over the laparoscopic system for distal pancreatectomy? results: a total of consecutive patients underwent minimally invasive distal pancreatectomy (ldp n= ; ra-ldp n= ). the most common pathologic finding was pancreatic ductal adenocarcinoma ( cases). there was no in-hospital mortality and no case of conversion to open surgery in this study. a spleen-preserving approach was performed more often in the ra-ldp ( %) than in the ldp ( . %) group (p= . ). both groups showed no significant differences in the total number of lymph nodes, number of positive lymph nodes, tumor differentiation, tumor stage, and resection margins. conclusions: ra-ldp is a safe and feasible approach that has the advantage of permitting spleen-preserving distal pancreatectomy, with perioperative and short-term oncologic outcomes comparable to those of ldp. p robot-assisted alpps technique. mike fruscione. right portal vein embolization was not feasible secondary to the proximity and size of the right hemi-liver tumor burden relative to the right portal vein. the pre-operative planned procedure was a right trisectionectomy and microwave ablation of the segment lesion. results: using the da vinci xi surgical system (intuitive surgical, inc.), the right portal vein was dissected, doubly ligated, and divided. the liver parenchyma was split from the inferior edge to the dome, mm medial to the falciform ligament and down to the middle hepatic vein, which was preserved to maintain adequate venous outflow. the patient was discharged home on post-operative day two. on post-operative day six, ct volumetrics demonstrated a flr of %. on post-operative day seven, a second-stage alpps procedure was performed in which the right hepatic artery, middle and right hepatic veins and right hepatic duct were ligated and divided. segments a/b, , , and were removed. the patient was discharged home on post-operative day five. they were asked to answer demographic questions and rate their comfort level ( = not comfortable, = very comfortable) with aspects of robotic surgery. paired t-tests and wilcoxon tests were used to assess whether there were changes in comfort level before and after labs, and chi-square goodness-of-fit tests were used to assess whether dry lab (using inanimate objects), wet lab (using a porcine model), or simulator modules were thought to be most helpful in obtaining specific robotic skills.
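a minimal sketch of the paired pre/post comparison just described (paired t-test with a wilcoxon signed-rank test as the non-parametric counterpart); the comfort ratings below are invented purely for illustration.

```python
# illustrative pre/post comfort-rating comparison (invented ratings)
from scipy import stats

before = [2, 1, 3, 2, 2, 1, 3, 2]  # comfort level before simulation labs
after  = [4, 3, 4, 3, 5, 4, 4, 3]  # comfort level after simulation labs

t_stat, p_paired = stats.ttest_rel(before, after)   # paired t-test
w_stat, p_wilcox = stats.wilcoxon(before, after)    # non-parametric alternative

print(f"paired t-test p={p_paired:.4f}, wilcoxon p={p_wilcox:.4f}")
```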
results: the survey response rate was % (n= ). ninety-one percent of residents felt that robotic surgery is not intuitive. prior to simulation, % of residents felt inadequately prepared to safely operate on the robotic console. following simulation, % felt better prepared and more confident to participate in robotic surgery. for the first patients whom we treated (the first-stage group), we invited a visiting expert from a high-volume center to perform the procedure jointly with our hospital's surgeons by using a dual console. for the subsequent patients (the second-stage group), the procedure was performed by our hospital staff alone. in this report, we describe our experience of the introduction of robot-assisted colectomy and discuss issues for the future. patients and methods: the operative procedures were sigmoid colectomy, low anterior resection, and intersphincteric resection. the median number of lymph nodes dissected was . . the mean operating time was minutes for the first-stage group and minutes for the second-stage group. the median console time was minutes for the first-stage group and minutes for the second-stage group, with no significant differences between the two groups. the mean operating time other than console time was minutes for the first-stage group and minutes for the second-stage group, significantly longer in the latter group. the mean amount of hemorrhage was . g in the first-stage group and g in the second-stage group. no significant differences were found between the two groups in the mean length of postoperative hospital stay. none of the patients in either group developed a complication of clavien-dindo grade iii or higher. conclusions: the use of a dual console system was particularly useful for the introduction of robot-assisted surgery in our hospital. for the patients whom we treated, we found almost no difference in console time between the first- and second-stage groups. the high-quality instruction received via the dual console was considered to have had a beneficial effect on the operators' learning curve. however, the operative steps other than console time, such as roll-in and docking, took significantly longer in the second-stage group, when the proctor was not present. select specimens from each trial were immediately resected and evaluated for histologic thermal injury. experiments were repeated times, powered to detect an expected difference of five degrees. student t-tests were used for all comparisons, with significance set at . . results: stray energy transfer was higher in the single-incision setup compared to the traditional setup (figure ). stray energy in the assistant grasper caused . ± . °c of temperature change in the standard configuration, and . ± . °c in the single-incision configuration (p= . ). doubling energy output to w amplified the same finding. robotic single-site cholecystectomy of cases: surgical outcomes and comparison with the laparoscopic single-site procedure. jae hoon lee. incisional hernia occurred in one case in each group. rssc is a safe and feasible procedure. with accumulating experience, rssc had a shorter operative time than sslc. compared to sslc, rssc is relatively well suited to acute gallbladder disease and high bmi, and requires a minimal learning curve to transition from traditional multiport to single-port robotic cholecystectomy.
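the stray-energy experiments above were repeated enough times to detect an expected difference of five degrees; the repetition count for such a design can be estimated with a standard power calculation. a hedged sketch using statsmodels follows; the assumed standard deviation is invented, and alpha and power are conventional defaults rather than values from the abstract.

```python
# sample-size sketch for detecting a 5 degree-celsius mean difference
# (assumed standard deviation is invented; alpha/power are conventional defaults)
from statsmodels.stats.power import TTestIndPower

expected_difference = 5.0   # degrees celsius
assumed_sd = 4.0            # hypothetical pooled standard deviation
effect_size = expected_difference / assumed_sd  # cohen's d

n_per_group = TTestIndPower().solve_power(
    effect_size=effect_size, alpha=0.05, power=0.80, alternative="two-sided"
)
print(f"approximately {n_per_group:.0f} trials per arm")
```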
p initial experience using the da vinci xi robot in colorectal surgery. anna r spivak, do, john marks, md; lankenau medical center. introduction: the xi robot has been developed to facilitate multiquadrant abdominal surgery. this report presents initial experience to evaluate the feasibility and safety of the xi robot in colorectal surgery. methods: all cases performed on the xi robot were prospectively entered into a robotic database that was queried for colorectal cases performed from . intraoperative complications were encountered in cases ( . %), requiring conversion to laparoscopy. none were converted to open. mean length of the largest incision was . cm. median ebl was ml. there was no mortality. there were ( . %) immediate postoperative morbidities: postoperative abscess, bowel perforation, two postoperative bleeds, two hernias, two hematomas, smv thrombosis, and small bowel obstruction. perioperative blood transfusions were required in . % of cases. there was one anastomotic leak. median time from surgery to low-residue diet and discharge was days. conclusion: initial experience shows robotic colorectal resection with the da vinci xi to be feasible and safe. learning curve for robotic sleeve gastrectomy and roux-en-y gastric bypass: achieving equivalence to laparoscopy. residents and fellows participated in an analogous fashion in both arms of the study, and patients undergoing re-operative bariatric surgery were excluded. results: a total of patients undergoing rsg (n= ) or rrygb (n= ) were included. for the overall robotic cohort, median age was (range - ), % were american society of anesthesiologists (asa) score , % were asa score , and mean body mass index (bmi) was ± , with no differences between procedures. there were no conversions to open. there was one patient with portal vein thrombosis after rsg, which occurred in the th rsg, and one patient who underwent re-operation in the immediate post-operative period for hemorrhage at the gastro-jejunal anastomosis in the rrygb group; this occurred in the th rrygb. there were no leaks, strictures, or mortalities in either group. mean length of stay was days ± for rsg, with no difference based on number of procedures performed. in the rrygb group, los decreased after the first five procedures from days ± to days ± (p= . ). for both procedures, operative time decreased with the number of procedures performed (figure). equivalence to lsg in operative time ( minutes ± ) was reached after eight robotic procedures were included. the da vinci xi® was used for the operations. age, gender, body mass index (bmi), asa score, indication for surgery, urgency of procedure, type of procedure, docking number, operation time, estimated blood loss, complications, and short-term (≤ days) and long-term (> days) complications were evaluated. results: patients ( females) were included. median age was . median bmi was , and median asa score was . total and completion rrp-ipaa were performed for and patients, respectively. the indications were as follows: medically refractory uc (n= ), cancer/dysplasia (n= ), fulminant colitis (n= ), toxic megacolon (n= ), medical treatment resulting in growth retardation (n= ), and bleeding refractory to medical treatment (n= ). the patient with toxic megacolon had an emergent operation. the median docking number was and for completion and total rrp-ipaa, respectively. median operative time was minutes. median blood loss was ml. all patients had a stapled ileal j-pouch anal anastomosis. all patients had a diverting loop ileostomy at the time of ipaa creation. no intraoperative complications were observed.
no conversion to open surgery was needed. the median time to flatus was day. the median time to oral intake was day. patient had a laparotomy on postoperative day due to intra-abdominal bleeding. patient had bleeding from the ileostomy, which was treated endoscopically. superficial surgical site infection was observed in patients. patient had pouchitis managed with oral antibiotics. patient had an ileus that responded to conservative treatment. patient had per-anal bleeding that stopped spontaneously. patient had a urinary tract infection that responded to antibiotics. in long-term follow-up, patients had pouchitis, patient had a perianal fistula requiring a loop ileostomy, and a parastomal hernia developed in another patient. were significantly different between the two groups. pairs undergoing primary and pairs undergoing revisional procedures were successfully matched. robotic gastric bypass was associated with a significantly longer operation length than laparoscopic gastric bypass for both primary (median difference minutes, p < . ) and revisional (median difference minutes, p < . ) procedures. overall, there were no significant differences in anastomotic/staple-line leak, -day readmission, reoperation, re-intervention, total event, and mortality rates between matched cohorts. conclusion: when controlling for patient characteristics, those undergoing primary and revisional lrygb and rrygb had no difference in early morbidity. despite the prolonged operative duration, the robotic approach was not associated with any clinical benefit or increased complications for primary or revisional gastric bypass surgery. preoperative risk factors were collected. we focused on perioperative outcomes and the in-hospital complication rate. results: thirty-three patients underwent robot-assisted giant hiatal hernia repair at our institution. patients ( %) were years and older, and patients ( %) had a bmi higher than . there were no significant differences in patient characteristics between the groups. no patient underwent conversion to open or standard laparoscopy. no mortality was observed and no transfusions were needed. four patients ( %) had a complication; two of them were older than years old. three of the four patients ( %) who had a complication were obese. there were no statistical differences in mortality. % and . % of the cases were performed with the s/si and xi platforms, respectively. the median numbers of procedures were (range - ) and (range - ) cases per hospital and per general surgeon, respectively. the high-volume surgeons (higher than the th percentile) performed ( %) of the cases. the xi platform has been the main tool for colorectal surgery only (figure ). conclusions: while the xi platform significantly increased caseload in general surgery by facilitating performance of colorectal surgery, its preference in other general surgical fields is not superior to the si. laparoscopic inguinal hernia repair (tapp) - first experience with the new senhance robotic system. robin schmitz; intuitive surgical inc, loma linda university medical center. introduction: crohn's disease is an incurable inflammatory disorder that can affect the entire gastrointestinal tract. while medical management is considered first-line treatment, approximately % of patients with crohn's disease require surgery within years of their initial diagnosis. traditionally, surgery has been performed via an open approach with poor adoption of minimally invasive techniques.
the aim of this study is to demonstrate the feasibility of a robotic-assisted approach as a minimally invasive option for the surgical management of crohn's disease and to compare perioperative outcomes with traditional laparotomy. methods: patients who underwent elective resection of the intestine for crohn's disease by a robotic-assisted or laparotomy approach from to q were identified using icd- codes from the premier healthcare database. all procedures were performed by either general surgeons or colorectal surgeons. since hospital characteristics were comparable between the two cohorts before propensity-score matching, : matching was performed using patient characteristics such as age, gender, race, charlson index score and year of the surgery to create comparable cohorts. sample selection and creation of analytic variables were performed using the instant health data (ihd) platform (bhe). methods: we conducted a retrospective analysis of , mis inguinal hernia repairs ( , robotic, , laparoscopic) from through , with data collected in the premier hospital database. patient, surgeon, and hospital demographics of robotic and laparoscopic inguinal hernia repairs were compared. the adjusted odds ratio of receiving a robotic procedure was calculated for each of the demographic factors using a multivariable logistic regression model. statistical significance was defined as p < . . sas software version . was used for statistical analysis. results: the odds of a procedure being robotic increased from . inguinal hernia repair is one of the most common general surgery procedures, with over , performed annually in the united states. when compared to traditional open inguinal hernia repair (oihr), laparoscopic inguinal hernia repair (lihr) has been associated with faster postoperative recovery and lower postoperative pain. with advances in the robotic platform, robotic inguinal hernia repair (rihr) is an available technique that is currently being explored. this study examines lihr and rihr as described in the literature to see if one is superior to the other. study design: search terms: ''inguinal hernia repair''. surgical complications including hematomas ( . %), seromas ( . %), and trocar site infection ( . %) resolved with antibiotics, with a . % postoperative complication rate. conclusion: rihr is a safe alternative to lihr, with fewer postoperative complications and a faster recovery time. however, operative time as well as or room time is significantly longer, which may increase overall cost. laparoscopic or robotic approach was chosen on a schedule-availability basis. data were collected prospectively and involved anthropometric data, presence of type diabetes mellitus (t dm), percentage of preoperative total weight loss (%ptwl), surgical time, postoperative length of stay, -day complications, and need for readmission or reoperation. comparison between groups was carried out with the t-test for continuous data and the chi-square test for dichotomous variables. a p lower than . was considered significant. results: overall, sagb were performed, laparoscopic and robotic. a long and thin gastric pouch was created, calibrated by a fr bougie, and a . cm antecolic, antegastric gastrojejunal (gj) anastomosis was performed. the groups (laparoscopic vs robotic) were comparable regarding age ( vs . years, p= . ), bmi ( . vs kg/m , p= . ), %ptwl ( . vs . %, p= . ) and percentage with t dm ( vs ). there were fewer men in the laparoscopic group ( . vs %).
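as a brief aside, the : propensity-score matching used in the database analyses above follows a standard recipe: fit a logistic model for treatment assignment, then greedily pair each treated patient with the nearest-score control. a minimal sketch with hypothetical column names follows; it is not the ihd-platform implementation, and the covariates are assumed to be numerically encoded.

```python
# minimal 1:1 nearest-neighbor propensity-score matching sketch
# (hypothetical dataframe columns; not the ihd-platform implementation)
import pandas as pd
from sklearn.linear_model import LogisticRegression

def match_one_to_one(df, treat_col, covariates):
    # propensity model: probability of receiving the robotic approach
    model = LogisticRegression(max_iter=1000)
    model.fit(df[covariates], df[treat_col])
    df = df.assign(pscore=model.predict_proba(df[covariates])[:, 1])

    treated = df[df[treat_col] == 1]
    controls = df[df[treat_col] == 0].copy()
    pairs = []
    for idx, row in treated.iterrows():
        if controls.empty:
            break
        j = (controls["pscore"] - row["pscore"]).abs().idxmin()
        pairs.append((idx, j))
        controls = controls.drop(j)  # match without replacement
    return pairs

# usage sketch (column names are assumptions):
# pairs = match_one_to_one(cohort, "robotic",
#                          ["age", "gender", "race", "charlson", "year"])
```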
there were ( . %) major complications in the laparoscopic group: bleedings from the gj anastomosis, one of which required reoperation, severe dumping syndrome, gerd requiring revision, and a gj stricture that underwent relaparoscopy. the only complication ( %) in the robotic group was acute pancreatitis. readmission rate was % in both groups, and reoperation rate was % for laparoscopic and % for robotic surgeries. conclusions: totally robotic sagb with manual gastrojejunal anastomosis was safe and feasible in this early experience compared to the laparoscopic approach. multi-degrees-of-freedom manipulator with mantle tube for assisting endoscopic and laparoscopic surgical operations. masataka nakabayashi, phd, yuta hoshito, masters student. p step-by-step anatomic mapping during laparoscopic transabdominal adrenalectomy, lateral flank approach. ranbir singh. steps analyzed were: right adrenalectomy: step ) mobilize liver; ) medial dissection; ) adrenal vein isolation; ) inferior dissection; ) adrenal off kidney; ) detachment. left adrenalectomy: step ) division of splenorenal ligament; ) develop plane between pancreas and kidney; ) mobilization of medial/lateral borders of adrenal; ) adrenal vein isolation; ) dissection of adrenal off kidney; ) detachment. structures were identified as yes/no, and results were expressed as a percentage of the total n of cases seen at each step. results: structures identified at each step are shown (table). incisions were made at the oral vestibule under the inferior lip. a -mm trocar was inserted through the center of the oral vestibule, with two -mm trocars above the incisors. the subplatysmal space was created down to the sternal notch, and carbon dioxide was insufflated at a pressure of mmhg to maintain the working space. parathyroidectomy was performed using laparoscopic instruments. intraoperative parathormone levels were measured minutes after excision of the gland. primary end-points were the success rate in achieving cure of the hyperparathyroid state and the hypocalcemia rate. secondary end-points were operating time, scar length, pain intensity assessed by the visual analogue scale, analgesia request rate, analgesic consumption, quality of life within postoperative days (sf- ), cosmetic satisfaction, duration of postoperative hospitalization, and cost-effectiveness analysis. result: one patient experienced a transient recurrent laryngeal nerve palsy which spontaneously resolved within month. no permanent recurrent laryngeal nerve injury was found. no mental nerve injury or infection was found. conclusion: with highly sensitive localising sestamibi and ct scans, focussed exploration is the current standard of treatment. among all minimally invasive surgeries, toepva is a feasible, safe, and almost pain-free surgical option when combined with intraoperative parathormone monitoring for patients with hyperparathyroidism. indocyanine green is a water-soluble, nontoxic compound exhibiting near-infrared fluorescence, with relevance to renal function and long-term survival. indocyanine fluorescence helps in assessing vascular flow, tissue perfusion and aberrant anatomy, and thereby leads to lower conversion rates in partial nephrectomy. we aim to present our experience in patients who underwent partial nephrectomy over years. materials and methods: of the partial nephrectomies performed at our institution, were done by a laparoscopic approach alone and the rest with robotic assistance. of the patients who underwent llr for hepatoma in our facility, underwent llr for a solitary hepatoma and were divided into "before standardization" (bs; n= ) and "after standardization" (as; n= ) groups.
patient background, characteristics, and perioperative outcomes were compared between these groups. procedure: we chose the devices according to the phase of liver transection. a soft-coagulation monopolar device was used for marking the surface. an ultrasonically activated device was used for transection of the liver surface to within a -cm depth. crush and sealing with biclamp were indicated for deep-phase transection. the cavitron ultrasonic surgical aspirator was used if the lesion was close to the major glisson's sheath or a major hepatic vein. results: no significant differences in the patients' backgrounds were found between the two groups. the operative durations were min ( - min) and min ( - min) in the as and bs groups, respectively, with a significant difference (p < . ). the blood loss volumes were cc ( - cc) and cc ( - cc), respectively (p= . ). the lengths of hospital stay after llr were days (range, - days) and days ( - days), respectively, with a significant difference. iwao kitazono, phd, kentaro gejima, hizuru kumemura, akira hiwatashi, yuichiro nasu, fumisato sasaki, akio ido, yutaka imoto; cardiovascular and gastroenterological surgery, kagoshima university graduate school of medical and dental science, digestive and lifestyle disease, kagoshima university graduate school of medical and dental science. introduction: in locally treatable gastrointestinal tumors, laparoscopic endoscopic cooperative surgery (lecs) is a minimally invasive technique that can avoid excessive resection of the gastrointestinal tract. objective: to share our therapeutic guidelines and surgical technique of lecs for gastroduodenal tumors. subjects: nineteen patients who underwent lecs for gastroduodenal tumors ( patients with gastric tumor and patients with duodenal tumor). [results] ) gastric tumors ( gist, glomus): site of lesion was u ( patients), m ( ), or l ( ); the operative procedure was acquired in a stepwise manner from classical lecs ( patients) to inverted lecs ( ) to non-exposed endoscopic wall-inversion surgery (news) ( ); operative outcome revealed no postoperative complications. ) duodenal tumors ( adenoma, m cancer, ectopic pancreas): site of lesion was the bulbus duodeni ( patient), superior part ( ), or descending part ( ); the operative procedure was esd followed by laparoscopic continuous suture in a single seromuscular layer for patients with preoperatively confirmed or suspected cancer, or full-thickness resection followed by albert-lembert suture along the short axis for patients unable to undergo esd. in all cases, a c-tube was placed to prevent bleeding and perforation at the site of resection due to exposure to bile. operative outcomes included successful endoscopic hemostasis upon bleeding from an exposed vessel on postoperative day in patient and anastomotic leak in patient. the anastomotic leak resolved after days of bile drainage through the c-tube and conservative therapy. compared with patients who underwent esd alone, those who underwent lecs had significantly larger diameters of resected specimens and tumors (p < . ) but no significant difference in the incidence of postoperative bleeding and delayed perforation. conclusion: for gastroduodenal tumors, lecs is a minimally invasive and safe therapeutic option as it combines the advantages of both laparoscopy and endoscopy. in particular, c-tube placement for bile drainage was effective in reducing exposure of the suture site to bile as well as supporting drainage after an anastomotic leak.
introduction: in japan, transurethral balloon catheters (tuc) are currently inserted in most surgical patients to maintain a urine outflow route and to measure urine output both intraoperatively and postoperatively. however, tuc insertion not only causes postoperative pain but can also lead to urinary tract infections. temporary suprapubic catheters (spc) are used in the field of obstetrics and gynecology as a method of postoperative management that avoids transurethral procedures. in the field of surgery, especially laparoscopic surgery, an spc may likewise be a useful way to reduce patient suffering. here we report our prospective study on whether an spc can be safely inserted as a substitute for a tuc during laparoscopic-assisted colectomy. subjects and methods: the subjects in this study were patients who underwent laparoscopic surgery for primary colorectal cancer from to , and who would normally have had their urinary balloon catheter removed early after surgery. during surgery, an angiomed cystostomy set was installed, as an alternative to a urinary balloon catheter, for patients who gave their consent to participate in this study. we prospectively collected patient information including sex and age, in addition to other perioperative data such as time required for cystostomy, complications accompanying cystostomy, sense of discomfort or pain associated with the vesical fistula after surgery, the time of removal of the vesical fistula, the frequency of releasing the vesical fistula, and postoperative complications. results: our subjects included cases who gave their informed consent to have an spc inserted. an spc was inserted in the remaining case. the mean surgical duration was min, and spc insertion was performed at a mean of min after the start of surgery. insertion required a mean duration of . s. the bladder of one case ( . %) was perforated, and hematuria was observed at the time of insertion in two cases ( . %), but surgery was completed without any incident. six out of cases ( . %) demonstrated neither urinary urgency nor independent urination on the day the catheter was clamped. however, the clamp was released two to four times, and drainage of an average of ml of urine, urinary urgency, and independent urination were confirmed - days later. conclusion: spc is a procedure that avoids crossing the urethra and its associated disadvantages. here we were able to demonstrate that the procedure can be safely used in laparoscopic surgery patients. our objective is to devise methods for proper port placement to overcome ergonomic challenges. procedure: patients with situs inversus totalis (sit) were operated on laparoscopically in our hospital in the period of may to november : males suffering from cholelithiasis without cholecystitis and a female with acute appendicitis. after thorough review of the literature and proper planning, the patients were posted for surgery. for laparoscopic appendectomy, a thorough initial diagnostic survey is performed by introducing a scope through the umbilical port and confirming the exact location of the appendix. the two working ports are introduced accordingly, usually as a mirror image of the standard port sites. the appendix was visualised in the left iliac fossa and, after meticulous dissection, the appendix and mesoappendix were divided using an endostapler.
the operative time was minutes and there were no intraoperative or postoperative complications. the port placement for laparoscopic cholecystectomy in such a case is trickier, as the anatomical variation and the contralateral disposition of the biliary tree demand accurate dissection and exposure of the biliary structures to avoid iatrogenic injuries. it is important to conform to the principles of triangulation during port placement. the mirror image of the -port placement is convenient for left-handed surgeons, whereas, to make the procedure comfortable for right-handed surgeons, the working ports need to be shifted caudally, with the surgeon standing between the patient's legs. the mean operative time was minutes and there were no minor or major intraoperative or postoperative complications. conclusion: ergonomic comfort is vital to a smooth procedure. while mirroring ports suffices for appendectomy, all other procedures require forethought in port placement. it should be noted that ambidexterity is a desirable skill in the operating room for a laparoscopic surgeon. priscila r armijo, md, chun-kai huang, phd, gurteshwar rana, md, dmitry oleynikov, md, ka-chun siu, phd; university of nebraska medical center. introduction: the aim of this study was to determine how objectively measured and self-reported fatigue of the upper limb differ between laparoscopic and robotic surgical training environments. methods: surgeons at the sages conference learning center and at our institution were enrolled. two surgical skills practice environments were utilized: ) a laparoscopic training-box environment (fls) and ) the mimic® dv-trainer (mimic). two standardized surgical tasks were chosen for both environments: peg transfer and needle passing. each task was performed twice. objective fatigue was evaluated by muscle activation and muscle fatigue, and comparisons were made between fls and mimic for each surgical task. muscle activation of the upper trapezius, anterior deltoid, flexor carpi radialis, and extensor digitorum was recorded during practice using surface electromyography (emg; trigno™, delsys, inc., boston, ma). the maximal voluntary contraction (mvc) was obtained to normalize muscle effort as %mvc. the median frequency (mdf) was calculated to assess muscle fatigue. subjective fatigue was self-reported by completing the validated piper fatigue scale- (pfh- ) before and after practice. statistical analysis was done using spss v . , with α = . . results: this abstract represents the performance of trainees (fls: n= , mimic: n= ) as part of a larger study cohort. for peg transfer, emg analysis revealed that mimic had a significant increase in mean muscle activation for the upper trapezius and anterior deltoid, both p < . . conversely, practice with fls led to significantly more muscle fatigue than mimic for the same muscle groups (upper trapezius: p= . , anterior deltoid: p= . ), represented by a significantly lower mdf. similarly, for needle passing, mimic had a significant increase in mean muscle activation for the upper trapezius (p= . ) and anterior deltoid (p= . ), but practice with fls induced significantly more muscle fatigue for the anterior deltoid (p= . ). survey analysis revealed a significant decrease in self-reported fatigue after performing fls tasks (before: . ± . , after: . ± . , p= . ), but no difference after mimic tasks (before: . ± . , after: . ± . , p= . ).
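as an illustrative aside before the conclusions: %mvc normalization and median frequency are standard emg quantities, and mdf is conventionally computed from the emg power spectrum. a minimal sketch follows; the sampling rate and signals are hypothetical, and this is not the delsys/trigno processing pipeline.

```python
# %mvc normalization and median-frequency (mdf) sketch for one emg channel
# (hypothetical signals and sampling rate; not the delsys/trigno pipeline)
import numpy as np
from scipy.signal import welch

def percent_mvc(rectified_emg, mvc_amplitude):
    """normalize mean rectified amplitude to the maximal voluntary contraction."""
    return 100.0 * np.mean(np.abs(rectified_emg)) / mvc_amplitude

def median_frequency(emg, fs):
    """frequency below which half of the emg spectral power lies;
    a downward shift in mdf over time indicates muscle fatigue."""
    freqs, power = welch(emg, fs=fs, nperseg=min(1024, len(emg)))
    cumulative = np.cumsum(power)
    return freqs[np.searchsorted(cumulative, cumulative[-1] / 2.0)]

fs = 2000  # hz, assumed sampling rate
rng = np.random.default_rng(0)
emg = rng.standard_normal(fs * 10)  # 10 s of surrogate signal
print(percent_mvc(emg, mvc_amplitude=5.0), median_frequency(emg, fs))
```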
conclusions: although different muscle groups are preferentially required in the performance of fls and mimic, our analysis of both surgical tasks showed that practice with mimic required more activation of shoulder muscles, whereas practice with fls could lead to more muscle fatigue in the same muscle groups. interestingly, surgeons reported improved or unchanged perceived fatigue after the tasks, despite having an increase in muscular activation and effort. subjective self-reported fatigue might not truly reflect the level of fatigue when trainees practice surgical tasks using fls or mimic. objective: to investigate the prevalence of musculoskeletal (msk) injuries in bariatric surgeons around the world. background: as the popularity of bariatric surgery increases, efforts at improving its patient safety and decreasing its invasiveness have also been on the rise. however, with this shift toward minimal invasiveness, surgeon ergonomic constraints have been imposed, with a recent report showing a - % prevalence of physical complaints in surgeons performing laparoscopic surgeries. methods: a web-based survey was designed and sent out to bariatric surgeons around the world. participants were queried about professional background, primary practice setting, and various issues related to bariatric surgeries and msk injuries. results: there were responses returned from surgeons from countries around the world. . % of the surgeons had more than years of experience in laparoscopic surgery, . % in open and . % in robotic surgery. % of participants reported that they have experienced some level of discomfort/pain attributed to surgical reasons, causing the case load to decrease for . % of the surgeons. the back was the most affected area in those performing open surgery, while the shoulders and back were equally affected in those performing laparoscopic surgery, and the neck in those performing robotic surgery, with . % of the surgeons reporting that this pain has affected their task accuracy/surgical performance. a higher percentage of females than males reported pain in the neck, back and shoulder area when performing laparoscopic procedures. supine positioning of patients evoked more discomfort in the wrists, while the french position caused more discomfort in the back region. only . % sought medical treatment for their msk problem, of whom . % had to undergo surgery for their issue, and . % of those felt that the treatment resolved their problem. conclusion: msk injuries and pain are a common occurrence among bariatric surgeons and can hinder performance at work. therefore, it is important to investigate ways to improve ergonomics for these surgeons so as to improve quality of life. introduction: the use of robotic technology is rapidly increasing among general surgeons but is not being routinely taught in general surgery residency. we aimed to evaluate our first robotic cases, during which time we developed a robotic surgery curriculum incorporating residents. methods: the first robotic cases performed at our institution from - by two surgeons were analyzed. a residency curriculum was developed and instituted after the first months. it consisted of online modules offered by intuitive surgical resulting in certification, simulator training, and hands-on workshops for cannula placement, docking, instrument exchange, camera clutching and other introductory tasks.
patient demographics, type of procedure, resident involvement, total operative and console times, comorbid conditions and complications were evaluated. unpaired t-tests were performed for statistical analysis. results: females and males comprised this series, with an average age of years ± . the majority of patients, %, had comorbidities, with a predominance of hypertension ( %) and diabetes ( %). the bariatric patients had an average bmi of ± . a variety of procedures were performed, including hernia, foregut and bariatric cases. residents participated in % of cases. there were no differences in total operative and console times in cases with residents, except for bariatric procedures. there were complications in this series: postoperative ileus, gallbladder fossa hematoma and an enterotomy. there was one early conversion to open in a complex foregut case and no deaths in this series. conclusions: we report our initial experience of robotics in a variety of general surgery and complex foregut cases. the implementation of a robotic surgery program and residency curriculum was safe, with similar outcomes in operative times and complications. as mis expands with the application of robotics in general surgery, residency curriculums will need to be revised. further data are needed to determine residency learning curves between robotics and laparoscopy. background: robotic surgery has made a large impact in the fields of urology and gynecology. its use is significantly increasing in the fields of general and bariatric surgery. evidence remains unclear as to the clinical impact on outcomes, and significant questions remain as to the impact on cost. our goal was to evaluate the economic impact of robotic surgeries in general and bariatric surgery at our institution. methods: this study is a retrospective analysis of minimally invasive general and bariatric procedures done at a single institution from january through june . we performed a cost and reimbursement analysis of robotic versus conventional laparoscopic surgery. the cost evaluation included operative time, operating room costs, length of stay and overall hospital expenses. in addition, we looked at reimbursement and the contribution margin per cpt code. results: our study included a total of patients who underwent laparoscopic and robot-assisted general and bariatric surgeries. the average duration of laparoscopic surgeries was minutes vs minutes for robot-assisted. we performed a cost analysis which showed an average total cost of $ , for laparoscopic and an average of $ , for robot-assisted. the total reimbursements were $ , for laparoscopic and $ , for robot-assisted. this translated to an average contribution margin of $ , for laparoscopic vs $ , for robot-assisted. for general surgery we found an average cost of $ , laparoscopic vs $ , robot-assisted, with a contribution margin of $ , laparoscopic vs $ , robot-assisted. for bariatric surgeries we found an average contribution margin of $ , for laparoscopic vs $ , for robot-assisted. conclusions: robotic surgery has been associated with higher costs and longer operative times. in this economic climate of increased cost awareness, with institutions under increasing financial pressure, judicious use of resources becomes important when determining surgical approach. although the cost of robot-assisted surgery may decrease with time, other quality factors may be important in patient selection.
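as an aside, the contribution-margin comparison above reduces to simple arithmetic: margin equals reimbursement minus cost, aggregated per approach (and, in the study above, per cpt code). a toy sketch follows; the dollar figures and the "cpt-x" label are invented, since the original values did not survive extraction.

```python
# toy contribution-margin calculation per approach (all figures invented)
cases = [
    {"cpt": "cpt-x", "approach": "laparoscopic", "cost": 9500,  "reimbursement": 14200},
    {"cpt": "cpt-x", "approach": "robotic",      "cost": 12800, "reimbursement": 14100},
]

margins = {}
for c in cases:
    # contribution margin = reimbursement - cost, pooled by surgical approach
    margins.setdefault(c["approach"], []).append(c["reimbursement"] - c["cost"])

for approach, values in margins.items():
    print(approach, "mean contribution margin:", sum(values) / len(values))
```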
although there is no clear evidence that institutions lose money with robot-assisted surgery, in our experience the contribution margin is lower with robot-assisted surgery than with conventional laparoscopy. introduction: this retrospective study was performed to evaluate the safety and feasibility of the new senhance robotic system (transenterix) for inguinal hernia repairs using the transabdominal preperitoneal approach. our series is the first experience in the field of general surgery utilizing this new robotic platform. methods: from march to september , inguinal hernia repairs in patients were performed using the senhance robotic system. the senhance surgical system is a new robotic platform that consists of a cockpit, manipulator arms and a connection node (figure ). this new system provides robotic surgery with numerous advantages, including an eye-tracking camera control system, haptic feedback, reusable endoscopic instruments, and high configuration versatility due to total independence of the manipulator arms. patients were between and years of age, eligible for a laparoscopic procedure with general anesthesia, had no life-threatening disease with a life expectancy of less than month, and had a bmi < . a retrospective chart review was performed for a variety of pre-, peri- and postoperative data, including but not limited to patient demographics, hernia characteristics, and intraoperative and postoperative complications. results: male and female patients were included in the study. median age was . years (range - years), and median bmi was . (range . - . kg/m ). median docking time was minutes (range - minutes), and median operative time was minutes (range - minutes). two cases were converted to standard laparoscopic surgery, due to robot malfunction and intraoperative bleeding respectively. one patient developed a postoperative seroma that did not require any further intervention. conclusion: we report the first series of laparoscopic inguinal hernia repairs using the new senhance robotic system. compared to previously published conventional laparoscopic or robotic tapp hernia repairs, these data suggest similar outcomes in operative time and perioperative complications. additionally, no significant learning curve was detected, owing to the system's intuitive applicability. therefore, the senhance robotic system can be safely and easily used for tapp hernia repairs by experienced laparoscopic surgeons. this is a video presentation of a years old female who presented to the gynecology office with suprapubic pain and a mass. she has a history of robotic hysterectomy and a bladder sling operation years ago. this was complicated by peritonitis and a long icu stay due to what she was told was a ''bowel injury,'' treated only conservatively with antibiotics and subsequent abscess drainages at that time. she has an occasionally appearing nodule and pain at the left suprapubic region. a ct ordered by gynecology was read as an abdominal wall hernia with a long sigmoid diverticulum in the hernia. there was also a small amount of subcutaneous air at the tip of the herniated diverticulum. after antibiotic treatment and improvement, colonoscopy showed that the ''diverticulum'' was actually the limb of the sling going through the sigmoid and anchored in the subcutaneous fat of the abdominal wall, representing a colocutaneous fistula when infected. a clip was placed on the sling, and repeat imaging confirmed that the location of this sling corresponded to the location of the so-called ''hernia.'' the sling limb was resected robotically and the colon was repaired with side stapling of the colonic wall.
the abdominal wall defect was repaired with long-term absorbable suture. as far as we have found, the presentation and treatment of this complication are unique, and we could not find a similar case to guide our planning. background: robot-assisted surgery using the da vinci surgical system (dvss) is thought to have many advantages over conventional laparoscopic surgery. it was reported that the use of the surgical robot might reduce surgery-related complications, so a multi-institutional, historically controlled prospective cohort study on the feasibility, safety, effectiveness and economic efficiency of robotic gastrectomy (rg) for resectable gastric cancer was conducted in japan. this study evaluated the safety of rg using the dvss xi. methods: this single-center, prospective phase ii study included patients with resectable gastric cancer (umin ). the primary endpoint was the incidence of post-operative complications greater than grade iii according to the clavien-dindo classification during one month after surgery. the secondary endpoints included all adverse events and the completion rate of robotic surgery. results: from oct to jan , patients were enrolled in this study. the incidence of post-operative complications greater than grade iii was %. the overall incidence of adverse events was . % (grade i: . %, grade ii: . %). no patient required conversion to laparoscopic or open surgery; thus, the rg completion rate was %. conclusion: this study suggests that the introduction of rg using the dvss xi for gastric cancer is safe and feasible. priscila r armijo, md, dmitry oleynikov, md, sages robotic task force*; university of nebraska medical center, sages robotic task force. introduction: while robotic companies continue to aggressively market and promote the use of robots in general surgery, little is known about how this technology is employed by general surgeons and what is expected of this technology by both novices and experts in the field. the aim of this study is to evaluate the needs of general surgeons who are new to robotic surgery and the needs of established robotic surgeons. methods: the sages robotic task force survey, a one-page survey, was designed and sent electronically to all sages members. questions regarding fellowship training, area of expertise, robotic simulation and clinical case use, services offered in the current hospital, mentorship, likelihood of switching to a different approach, and expectations for the robot were included in the survey. two groups were created based on previous use of the davinci® system in a clinical scenario or not. statistical analysis was conducted using ibm spss v. . . , using fisher's exact and pearson's chi-squared tests where appropriate. results: sages members answered the survey. surprisingly, respondents ( %) had used the davinci® in a clinical setting. among these, ( %) had additional fellowship training, compared to ( %) in the non-clinical-use group, p= . . of all surgeons with additional fellowship training, the great majority ( %) had specialized in advanced gi, mis and bariatric surgery, followed by colorectal ( %). most surgeons are performing fewer than cases per month using the robotic system, with the majority of cases performed on the platform being hernia repairs ( %), followed by foregut-related procedures ( %). interestingly, of all the surgeons who replied to the survey, only . % are planning to switch from open procedures to their robotic counterparts, whereas
. % are planning to adopt robotic-assisted procedures rather than laparoscopy. conclusions: the majority of sages members who responded to the survey have used the davinci® in a clinical setting in the past. surgeons who stated they perform mainly laparoscopic procedures were likely to continue to adopt robotic techniques, whereas those who perform open hernia repair, for example, were not very likely to switch to a robotic approach. while the robot may be enabling surgeons who used to perform mostly open procedures in the urology or gynecology fields, laparoscopic skills predict robotic utilization in general surgery. hernia and foregut appear to be the most common procedures being performed on the platform. aim: while conventional multiport laparoscopic splenectomy has become the gold standard for some hematological or splenic diseases, reduced-port laparoscopic splenectomy (rpls), including single-incision laparoscopic splenectomy (sils), is regarded as highly challenging. herein, we describe technical refinements for safe rpls, especially for patients with splenomegaly. methods: in all cases, access was achieved via a . -cm mini-laparotomy at the umbilicus, into which a sils™ port or e-z access® with three -mm trocars was placed. a -mm flexible scope, an articulating grasper, and straight instruments were used. our rpls is characterized by the following: a) early ligation of the splenic artery to shrink the spleen; b) application of our original "tug exposure technique," which provides good exposure of the splenic hilum by retracting (tugging) the spleen with a cloth tape; and c) safe introduction of the stapler into the splenic hilum under guidance with a flat drain. results: rpls patients ( men and women, ± years old) comprised hematological disorder (n= ), splenic disease (n= ), and liver cirrhosis (n= ). in patients ( %), rpls was successfully completed: sils in and sils plus one additional port in patients. conversion to open surgery was necessary in patients, including liver cirrhosis with remarkable collateral varicose veins around the spleen. operation time and blood loss were ± min and ± g, respectively. the weight of the extracted spleen was heavier than normal, ± g (maximum g). no intra- or postoperative complications occurred. the postoperative scar was nearly invisible. conclusions: rpls can be safely performed even for splenomegaly (up to , g). however, care should be taken in cirrhotic patients with collateral veins. rpls can be the procedure of choice even in patients with splenomegaly and those who are concerned about postoperative cosmesis. the aim of this feasibility study was to evaluate laparoscopic sn biopsy for laparoscopic sentinel node navigation surgery (snns) in early gastric cancer patients. subjects and methods: this study included patients with ct n m (primary tumor < cm) gastric cancer who underwent laparoscopic sn biopsy in conjunction with radioisotope and dye methods between jan. and jul. . first, we looked for green-dyed sns after injection of indocyanine green (icg) without a near-infrared light system, and then tried to detect the radioactivity of sns using a hand-held gamma probe inserted through a small incision at the umbilical port. after the areas where sns were distributed were resected, a gastrectomy with prophylactic lymphadenectomy was performed according to the gastric cancer treatment guidelines of the japanese gastric cancer association. we looked for undetected sns in the resected specimen at the back table.
results: among the cases, there were ( %) in which sns were not detected in the resected specimen. there were cases in which sns were detected in the resected specimen. in both cases, the primary tumors were located in the middle stomach, on the greater curvature. in case , laparoscopic sn biopsy identified the left ( sb) and right ( d) greater-curvature lymph nodes (lns) as sns; however, lesser-curvature ( ) and infrapyloric ( ) lns remained as sns in the resected specimen. in case , the left ( sb) and right ( d) greater-curvature lns were identified as sns intraoperatively, while the lesser-curvature ( ) ln remained as an sn in the resected specimen. the sns overlooked with the laparoscopic sn biopsy method were detected by radioisotope only. no cases had ln metastasis, and the -year relapse-free survival rate of these patients was %. conclusions: our feasibility study of laparoscopic sentinel node biopsy for early gastric cancer showed that we should search carefully for sns of the lesser curvature even if the primary lesion is located at the greater curvature. key: cord- -pnw xiun authors: bodecka, marta; nowakowska, iwona; zajenkowska, anna; rajchert, joanna; kaźmierczak, izabela; jelonkiewicz, irena title: gender as a moderator between present-hedonistic time perspective and depressive symptoms or stress during covid- lock-down date: - - journal: pers individ dif doi: . /j.paid. . sha: doc_id: cord_uid: pnw xiun. although numerous studies have addressed the impact of the covid- lock-downs on psychological distress, scarce data are available on the role of the present-hedonistic (ph) time perspective and gender differences in the development of depressive symptoms and stress during the period of strict social distancing. we hypothesized that gender would moderate the relationship between ph and depressiveness or stress levels, such that ph would negatively correlate with psychological distress in women but correlate positively in men. the present study was online and questionnaire-based. n = participants aged - from the general population took part in the study. the results of the moderation analysis allowed for full acceptance of the hypothesis for depression as a factor, but for stress the hypothesis was only partially confirmed, since the relationship between ph time perspective and stress was not significant for men (although it was positive, as expected). the findings are pioneering in terms of including ph time perspective in predicting psychological distress during the covid- lock-down and have potentially significant implications for practicing clinicians, who could include the development of more adaptive time perspectives and balance them in their therapeutic work with people experiencing lock-down-related distress. in january , the world health organization announced that covid- constituted a global pandemic (mahase, ). the virus then proliferated worldwide, and government actions to mitigate spread have significantly affected various areas of life, such as healthcare, transportation, freedom of movement and daily activity (simpson & katsanis, ; zajenkowski, jonason, leniarska, & kozakiewicz, ). in poland, public health safety measures were initiated in january , followed by declaration of a state of epidemic emergency and imposition of lock-down measures on march th and the declaration of a state of epidemic from march th (pinkas et al., ).
lock-down and social isolation, although quite effective in slowing the pace of the epidemic, have been shown to impact emotional and mental health (de quervain et al., ; li et al., ; shigemura & kurosawa, ). according to these reports, one of the most significant adverse consequences of the changes in everyday life due to the epidemic is an elevation of stress and depressive symptoms in the population. for instance, initial results of the swiss corona stress study (de quervain et al., ) suggested that there was a % increase in stress levels during the lock-down compared to the period preceding it. changes in stress levels were strongly associated with changes in depressive symptoms, as % of participants reported an increase in depressive symptoms, which is not unexpected considering the strong link between stressful life events and depression (hammen, ). interestingly, approximately % of the participants reported lower stress levels during lock-down than before. the authors of the report suggest that, in this group, the decrease might have been due to a reduction of stressors or having more time for recovery from stress during lock-down than under non-lock-down circumstances. accordingly, the level of stress experienced during lock-down, and the impact it may have on mental health, may be an individual matter. one promising avenue for investigation is found in gender differences and their potential associations with perceived stress and depressive symptoms during lock-down. it is worth noting that a greater number of depression diagnoses are observed in women than in men (essau, lewinsohn, seeley, & sasagawa, ; van de velde, huijts, bracke, & bambra, ). women have higher incidence rates of clinical diagnoses of dysthymia, recurrent brief depression and minor depression (for a review see angst et al., ), as well as major depressive disorder and its chronic course (essau et al., ). women were also found to report twice as many depressive symptoms as men (girgus & yang, ), were more likely to admit being under stress, and were more likely to develop depressive symptoms after a stressful event (sherrill et al., ). ruminative tendencies, chronic strain and low mastery were also found to be more common in women and to mediate the gender difference in depressive symptoms (nolen-hoeksema, larson, & grayson, ). this gender difference might also stem from hormonal fluctuations (for a review of psychosocial factors in depression across genders see leach, christensen, mackinnon, windsor, & butterworth, ). additionally, social roles, among other determinants, have been acknowledged as potential risk factors for developing depression in both genders (piccinelli & wilkinson, ). gender schemas (martin & halverson jr, ) may be connected to how women and men attribute the causes of their depression onset. for instance, physical illnesses or problems were the most important precipitants of depression for both genders, but especially for men (angst et al., ). for women, problems in relationships and illness or death in the family were identified as other significant causes, whereas, for men, additional causes included problems at work and unemployment. furthermore, the question of whether elevated levels of depressive symptoms in women might be a consequence of gender inequality has been a topic of wide discussion (salk et al., ).
such an idea is supported by the association of female social roles with lower role overload and lack of choice (szpitalak & prochwicz, ; van de velde et al., ) and the well-established linkage between feelings of powerlessness, lack of control in one's own life and depression (mirowsky & ross, ). despite a climate of social change in gender roles (eagly, nater, miller, kaufmann, & sczesny, ), a number of cross-cultural similarities in the gender division of labor have been observed in advanced industrial societies (pérez & tavits, ). women were found to typically invest more time in raising children, preparing food and caring for the home. in contrast, men were found to typically invest more time in extra-domestic tasks. the context of lock-down creates the situation of needing to remain at home, the constant presence of all family members at the home, an increased importance of female gender schema-related activities, and either a shifting of extra-domestic activities to the home space or a reduction of these activities. therefore, typically, women during lock-down might be encouraged to play more gender schema-congruent roles in the course of everyday lock-down life, in contrast to men. the remote work lifestyle, as well as fear of job loss due to the economic crisis resulting from the epidemic, might be especially gender schema-threatening for men and contribute to depressive symptoms. additionally, a lock-down situation shifts attention to everyday activities and the uncertainty of the present moment (versluis, van asselt, & kim, ). as no one could predict the duration of lock-down and the covid- epidemic, time perspective (at an individual consideration) may be a particularly noteworthy factor in explaining adaptations to the adverse situation. time perspective is generally defined as an "often unconscious process whereby the continual flows of personal and social experiences are assigned to temporal categories or time frames that help to give order, coherence and meaning to those events" (zimbardo & boyd, , p. ). a habitual bias to process time in a certain manner might become a relatively stable individual difference, formed through learning processes and cultural influences (jochemczyk, pietrzak, buczkowski, stolarski, & markiewicz, ). zimbardo and boyd ( , ) in their seminal works distinguished five time perspectives: past-negative, past-positive, present-hedonistic (ph), present-fatalistic and future. a tendency to focus on particular time perspectives, especially past-negative and present-fatalistic, might be predictive of a higher level of depressive symptoms, whereas past-positive (anagnostopoulos & griva, ; zimbardo & boyd, ) appeared to protect individuals from elevated levels of depressive symptoms. in general, people rating high on past-positive and ph time perspectives also exhibit increased well-being and life satisfaction (stolarski, bitner, & zimbardo, ; zhang & howell, ). additionally, they are happier, in contrast with those scoring higher in the past-negative time perspective, who experienced less happiness (drake, duncan, sutherland, abernethy, & henry, ). however, compared to other time perspectives, ph time perspective was the most robust predictor of current emotional states (stolarski, matthews, postek, zimbardo, & bitner, ).
hedonism, from which the name for the ph time perspective is taken, is defined as openness to pleasurable experience (veenhoven, ), and is associated with lower levels of depressive symptoms (disabato, kashdan, short, & jarden, ), as well as with mania in bipolar disorder (gruber, cunningham, kirkland, & hay, ). therefore, the ph time perspective is especially interesting for investigating depressive and stress symptoms during covid- lockdown. the main aim of the current study is to contribute to the knowledge about potential gender differences in the linkages between ph time perspective and depressive symptoms or perceived stress during covid- lock-down. personal characteristics, including time perspectives, are related to how people experience social events. phs are habitually oriented to pleasures of the present and excitement, with little consideration of future consequences (zimbardo & boyd, ). strong social situations "providing salient cues to guide behavior and having a high degree of structure and definition" (snyder & ickes, ; p. ) can be more important in predicting certain behaviors or experiences than personality traits (sherman, nave, & funder, ). an epidemic, considered to be a strong social situation, can increase psychological distress, especially depressiveness and stress levels. it is possible that, due to strict social distancing, the impossibility of realizing most needs outside of the home and, hence, the blockage of pleasant stimuli could predict depressiveness and the experience of stress. moreover, lock-down compels the discounting of immediate rewards for the sake of one's own health and that of others, which might be difficult for ph-oriented people in general (jochemczyk et al., ; stolarski et al., ). therefore, one might suppose that people who tend to fulfill their hedonistic needs outside of their homes might experience greater lock-down distress than people who tend to take pleasure from home- and family-oriented activities. considering the gender schema theories, it is possible that the lockdown situation could prove more depressing for men. according to such theories, men might be inclined toward valuing hedonistic extra-domestic activities (compared to typical domestic activities), which were significantly limited due to the lock-down. moreover, although in general women tend to present higher levels of depression, the factors leading to this discrepancy are distinct for women and men. for instance, men more frequently attributed the onset of their depression to current life events, such as unemployment or problems at work, than females did (angst et al., ). the lock-down was not only linked to shifting work life to homes but sometimes caused employment uncertainty and financial insecurity. based on the above-mentioned theoretical assumptions, our hypothesis is that gender would moderate the relationship between ph and depressiveness or stress levels, such that ph would be negatively related with psychological distress in women but positively correlated with psychological distress in men. we recruited participants ( women, men) online. power analysis conducted in g*power . (faul, erdfelder, buchner, & lang, ; faul, erdfelder, lang, & buchner, ) indicated that this sample size would allow for the detection of a small effect of partial r increase of . (alpha = . ) with a power of . . the participants were not reimbursed. all participants were between the ages of and years (m = . , sd = . ).
only participants had not graduated high school, participants ( . %) declared secondary education, participants ( . %) were students and individuals reported higher education ( . %). the majority of participants lived in cities with either less than , inhabitants (n = , . %) or more than , (n = , . %) while the other participants lived in the countryside (n = , . %). participants were married (n = , . %), in a partnership (n = , %), single (n = , . %), divorced (n = ) or widowed (n = ). participants mostly lived with other people (n = , . %), including with family (children, spouse, parents and other family members), with romantic partners or with friends. individuals ( %) declared that they were currently in psychotherapy. participants were recruited through social media, primarily facebook, through paid advertisement, a post about the study on the lab profile and on private profiles using the snowball method. the study conformed to the declaration of helsinki (world medical association, ), and all participants provided informed consent to take part in the study. the respondents were informed that the purpose of the study was to examine "how people deal with the current situation, how they feel, what they think", that the survey was fully anonymous and that they could discontinue at any time. the average time for survey completion was approximately min. depressive symptoms. a -item patient health questionnaire (phq- ) was used to assess severity of depressive symptoms. its items correspond to criteria for diagnosis of dsm-iv and dsm-v depression symptoms (kroenke & spitzer, ; mitchell, frayne, wyatt, goller, & mccord, ) and enabled grading of depressive symptom severity. it contains questions about psychological well-being within the last two weeks (e.g., how often have you been bothered by little interest or pleasure in doing things?), including a question related to hurting oneself (i.e., how often have you been bothered by thoughts that you would be better off dead or of hurting yourself in some way?). in the current study, the phq- provided a severity measure with scores ranging from to ; each of the nine items can be scored from ("not at all") to ("nearly every day"). depression severity was defined by the scale's authors as: - none, - mild, - moderate, - moderately severe and - severe. the phq- was found to be a reliable measure in our study (α = . ). perceived stress. the perceived stress scale (pss) was applied to measure levels of stress (cohen et al., ); it measures the degree to which situations in one's life are considered stressful. the scale consists of items (four positively stated, e.g., in the last month, how often have you felt that things were going your way? and six negatively stated, e.g., in the last month, how often have you been upset because of something that happened unexpectedly?) that can be scored from ("never") to ("very often"). the pss scores are obtained by reversing the responses to the positively stated items and then summing across all the scale items. the cronbach's alpha coefficient in the present research was α = . . time perspectives. the zimbardo time perspective inventory (ztpi) was used to measure the ph time perspective (zimbardo & boyd, ); the inventory also covers the remaining time perspectives, including future ( items, e.g., when i want to achieve something, i set goals and consider specific means for reaching those goals).
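to make the scoring rules just described concrete, the following is a minimal python sketch of phq- and pss scoring and of cronbach's alpha. it is illustrative only: the responses are simulated, the severity cut-offs follow the published phq- convention (the specific numbers are elided in the extracted text above), and the positions of the reverse-coded pss items are stated as assumptions in the comments.

import numpy as np

rng = np.random.default_rng(0)

# phq-9: nine items, each scored 0 ("not at all") to 3 ("nearly every day");
# the total (0-27) is the severity measure described above.
phq9 = rng.integers(0, 4, size=(200, 9))
phq9_total = phq9.sum(axis=1)

def phq9_severity(total):
    # published phq-9 bands; the cut-off numbers are elided in the text above
    bands = [(4, "none"), (9, "mild"), (14, "moderate"),
             (19, "moderately severe"), (27, "severe")]
    return next(label for cutoff, label in bands if total <= cutoff)

# pss-10: ten items scored 0 ("never") to 4 ("very often"); the four
# positively stated items are reverse-coded before summing (the indices
# below follow the published pss-10 key and are an assumption here).
pss = rng.integers(0, 5, size=(200, 10))
positive_items = [3, 4, 6, 7]
pss_scored = pss.copy()
pss_scored[:, positive_items] = 4 - pss_scored[:, positive_items]
pss_total = pss_scored.sum(axis=1)

def cronbach_alpha(items):
    # alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                          / items.sum(axis=1).var(ddof=1))

print(phq9_severity(int(phq9_total[0])), round(cronbach_alpha(phq9), 2))

on simulated uniform responses the alpha will be near zero, which is expected; the point of the sketch is only the mechanics of the scoring and the reliability formula.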
the participants were asked to score on a five-point likert scale the degree to which each statement referred to him/her ( = very untrue, = very true), and some items were reverse coded. the level of a specific time perspective was obtained by summing the item results for each scale. in the current study, the cronbach alpha for the ztpi ph subscale was α = . . reliability of the ztpi subscales which were not of interest to the current study is presented in appendix table a . . all analyses were conducted using ibm spss . . . for windows. our main hypotheses were tested employing regression analysis with bootstrapping methods using andrew f. hayes' process . . macro (hayes, ). frequency analysis of the results from the phq- indicated that ( . %) participants had no depressive symptoms, whereas the rest of the sample displayed mild (n = , . %); moderate (n = , . %); moderately severe (n = , . %) or severe (n = , . %) depressive symptoms. next, we investigated descriptive statistics, performed correlation analysis and tested for gender differences in time perspectives, stress and depression scores. the results of these analyses for the variables of interest of the current study are presented in table . correlations and descriptive statistics for all study variables including ztpi subscales other than ph are presented in the appendix table a. . it should also be noted that depression scores were not significantly associated with ph. stress was negatively, although weakly, correlated with ph. women and men did not differ in ph. results also indicated that women declared higher perceived stress and more intense depressive symptoms. mean depression scores for women fell into the interval for a moderate level of depressive symptoms, while the mean for men fell within the mild depressive symptoms level. next, we tested our main hypothesis using regression models with a bootstrapping method for depressive symptoms and stress as dependent variables in two separate models. ph was included as the predictor and gender was included as a moderator in both models. coefficients with % ci for both models are presented in table . data from table suggest that both models predicted a significant amount of variance in the dependent variables. the results also showed that ph was negatively related to depression and to stress. women were coded and men were coded ; thus, a negative relationship indicated that women were higher on stress and depression scores. in both models, the interactions were also significant. the interpretation of the ph and gender interaction with simple slopes showed that the relationship between ph perspective and depression scores was also significant and negative for women, whereas for men it was significant and positive. the relationship between ph and stress was significant and negative in women, while this relationship was not significant in men. the results allow us to accept the hypothesis in the case of depression but, in the case of stress, the hypothesis was only partially confirmed, since the relationship between ph and stress was not significant for men (although it was positive, as expected). the relationship between ph and depression scores in men and women is presented in fig. and the relationship between ph and stress in men and women is presented in fig. .
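the analysis itself is a standard moderated regression with simple slopes. for readers outside spss, here is an illustrative python sketch of the model and a process-style bootstrap; the data are simulated and the coefficients are arbitrary, chosen only to mirror the reported pattern, so nothing here reproduces the study's actual estimates.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
gender = rng.integers(0, 2, n)   # 0 = women, 1 = men, as coded in the paper
ph = rng.normal(0, 1, n)         # present-hedonistic score (standardised)
# build in the reported pattern: negative ph slope for women, positive for men
depression = 10 - 1.5 * gender - 1.0 * ph + 2.0 * ph * gender + rng.normal(0, 2, n)
df = pd.DataFrame({"depression": depression, "ph": ph, "gender": gender})

# 'ph * gender' expands to both main effects plus the ph x gender interaction
model = smf.ols("depression ~ ph * gender", data=df).fit()
b = model.params

# simple slopes: the conditional effect of ph at each level of the moderator
for g, label in [(0, "women"), (1, "men")]:
    print(f"simple slope of ph for {label}: {b['ph'] + b['ph:gender'] * g:.2f}")

# bootstrap ci for the interaction term (process-style resampling)
boot = [smf.ols("depression ~ ph * gender", data=df.sample(n, replace=True))
        .fit().params["ph:gender"] for _ in range(1000)]
print("95% bootstrap ci for ph x gender:", np.percentile(boot, [2.5, 97.5]))

the same fitted coefficients also give the moderation plot: predicted depression against ph, drawn separately for each gender, crosses over exactly where the simple slopes change sign.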
the aim of the study was to explore gender differences in the relationships between ph and depressive symptoms or perceived stress in the specific context of lock-down due to the covid- epidemic. we tested two independent models predicting depression and stress. both of these variables were strongly related, which is in line with previous studies (see hammen, ). the majority of participants displayed at least mild depressive symptoms ( . %). the study was performed during the strict social distancing period in poland, and the risk of distress connected to being apart from other people might have been heightened. a study conducted on a representative sample during covid lockdown suggested that depressive symptoms were twice as high as before the measure was introduced (gambin et al., ). in our study, gender moderated the relationship between ph and depressiveness, such that women who scored higher for ph presented with fewer depressive symptoms than women scoring lower on this time perspective. for men, the relationship was inverse: men scoring higher for ph displayed more depressive symptoms than men with lower ph scores. interestingly, this was observed even though men and women did not differ in their levels of ph. although a hedonistic view of the present was found to be related to a high positive affect (desmyter & de raedt, ), other research suggests a significant positive association between ph and depression and anxiety (davies & filippopoulos, ). based on these inconsistencies we can assume that, under stress, ph might lead to the development of both adaptive and maladaptive forms of coping, especially emotion-focused forms (blomgren, svahn, Åström, & rönnlund, ). our results suggest that the way in which men and women actualize this time perspective may be different. as a consequence, ph-oriented women were able to succeed in the lock-down circumstances, while ph-oriented men were not (blomgren et al., ). vandello and cohen ( ) conducted five studies to show that masculinity, as opposed to femininity, is a much more uncertain and vulnerable state, dependent on constant external stimulation and social acknowledgement in interactions with others. it is possible that ph-oriented men are likely to meet their hedonistic needs in contact with other people outside of their homes, and lockdown might have been a circumstance that restricted opportunities to maintain such contacts. ph was also found to be negatively related to stress levels only in women. in men, the relationship between these variables was not significant. one of the crucial stressors during lock-down might have been a fear of viral infection. women, although generally found to be more concerned about their health than men (thompson et al., ), when high on ph, might have been concentrated on the present and oriented toward pleasure so that they found pathways for reducing their stress levels. it should be noted that negative life events, such as an epidemic, may not always result in a decrease in well-being or deterioration in mental health but can lead to effective coping with the adversities and to sustained health (luhmann & eid, ). despite inconsistent findings (see eisenbarth, ), some data have shown that men use avoidance (e.g., sigmon, stanton, & snyder, ), and drugs or alcohol to cope (e.g., kieffer et al., ) more often than women. women are more likely than men to seek emotional support across a range of stressors (tamres, janicki, & helgeson, ).
it is possible that men high on ph are particularly willing to distract themselves from thinking about the danger of viral infection, in contrast to women, who might seek more social contact and support from close others. however, these are just speculations and further studies are needed to investigate coping strategies during the recent pandemic in both men and women with high ph. these findings have potentially significant implications for practicing clinicians. given that time perspectives are based on learning processes, clinicians can utilize them to enhance the development of adaptive time perspectives and balance them, in order to enhance well-being and reduce lock-down-related distress. several limitations necessitate a degree of care when interpreting these findings. the sample consisted mainly of caucasian participants from a developed country. it is possible that in more embedded cultures, where people live with several generations of relatives in the same household, our result would not be valid. it should also be noted that the forms that depressive symptoms take can differ between genders (martin et al., ). including a wider variety of measures of distress might result in a more accurate estimation of depression prevalence, especially in men. the study was cross-sectional, which makes it impossible to form causality statements about the linkages between variables. furthermore, it is advisable to continue searching for other indicators of depression and stress during lock-down, such as feelings of loneliness or perceived social support. the involvement of mb and az in preparation of the manuscript was supported by national science centre of poland, grant no. umo- / /d/hs / awarded to az. references:
exploring time perspective in greek young adults: validation of the zimbardo time perspective inventory and relationships with mental health indicators
gender differences in depression. epidemiological findings from the european depres i and ii studies
coping strategies in late adolescence: relationships to parental attachment and time perspective
a global measure of perceived stress
changes in psychological time perspective during residential addiction treatment: a mixed-methods study
the swiss corona stress study
the relationship between time perspective and subjective well-being of older adults
what predicts positive life events that influence the course of depression? a longitudinal examination of gratitude and meaning in life
time perspective and correlates of wellbeing
gender stereotypes have changed: a cross-temporal meta-analysis of u.s. public opinion polls from
coping with stress: gender differences among college students
gender differences in the developmental course of depression
statistical power analyses using g*power . : tests for correlation and regression analyses
g*power : a flexible statistical power analysis program for the social, behavioral, and biomedical sciences
generalized anxiety and depressive symptoms in various age groups during the covid- lockdown. specific predictors and differences in symptoms severity
gender and depression
feeling stuck in the present? mania proneness and history associated with present-oriented time perspective
stress and depression
introduction to mediation, moderation, and conditional process analysis: a regression-based approach
you only live once: present-hedonistic time perspective predicts risk propensity
test and study worry and emotionality in the prediction of college students' reasons for drinking: an exploratory investigation
the phq- : a new depression diagnostic and severity measure
gender differences in depression and anxiety across the adult lifespan: the role of psychosocial mediators
vicarious traumatization in the general public, members, and non-members of medical teams aiding in covid- control
does it really feel the same? changes in life satisfaction following repeated life events
covid- : who declares pandemic because of "alarming levels" of spread, severity, and inaction
a schematic processing model of sex typing and stereotyping in children
the experience of symptoms of depression in men vs women: analysis of the national comorbidity survey replication
social causes of psychological distress
comparing the phq- to the multidimensional behavioral health screen in predicting depression-related symptomatology in a primary medical care sample
explaining the gender difference in depressive symptoms
language influences public attitudes toward gender equality
gender differences in depression: critical review
public health interventions to mitigate early spread of sars-cov- in poland
gender differences in depression in representative national samples: meta-analyses of diagnoses and symptoms
properties of persons and situations related to overall and distinctive personality-behavior congruence
is life stress more likely to provoke depressive episodes in women than in men?
mental health impact of the covid- pandemic in japan
gender differences in coping: a further test of socialization and role constraint theories
the immunological case for staying active during the covid- pandemic
personality and social behavior
time perspective, emotional intelligence and discounting of delayed rewards
how we feel is a matter of time: relationships between time perspective and mood
psychological gender in clinical depression. preliminary study
sex differences in coping behavior: a meta-analytic review and an examination of relative coping
the influence of gender and other patient characteristics on health care-seeking behaviour: a qualicopc study
macro-level gender equality and depression in men and women in europe
culture, gender, and men's intimate partner violence. social and personality psychology compass
hedonism and happiness
the multilevel regulation of complex policy problems: uncertainty and the swine flu pandemic
world medical association declaration of helsinki. ethical principles for medical research involving human subjects
who complies with the restrictions to reduce the spread of covid- ?: personality and perceptions of the covid- situation
do time perspectives predict unique variance in life satisfaction beyond personality traits?
putting time in perspective: a valid reliable individual differences metric
the time paradox
key: cord- -ag j obh authors: higgins, g.c.; robertson, e.; horsely, c.; mclean, n.; douglas, j. title: ffp reusable respirators for covid- ; adequate and suitable in the healthcare setting date: - - journal: j plast reconstr aesthet surg doi: . /j.bjps. . .
sha: doc_id: cord_uid: nan "please doctor, could you tell him that i love him?": letter from plastic surgeons at the covid- warfront dear sir, how many times have we heard these words in this time? too many. the covid- pandemic has completely disrupted our normal surgical and clinical routine. in these days, many colleagues of whatever specialty are regularly employed by their hospitals to face the covid- emergency in italy, europe and worldwide. we are not plastic surgeons anymore. many of us feel lost, unprepared and inadequate for such an emergency. here in bergamo, the centre of the italian epidemic, we felt small and incompetent at the beginning. however, we must remember that first of all we are doctors, then plastic surgeons. in these weeks we are putting our willingness at the service of our patients and colleagues. the numbers of the covid- pandemic in bergamo are impressive: positive patients and over official deaths in about one month. at the same time, the reaction of our hospital, papa giovanni xxiii, has been impressive too: over doctors and over nurses entirely dedicated to covid- positive patients; intensive (one of the largest intensive care units in europe) and over non-intensive care beds are set aside for those patients. this huge wave of covid- positive patients forced the hospital management to progressively and rapidly recruit, train and put on the wards over physicians of every discipline and nurses from march th. several training programs about covid- infection and management have been scheduled in order to prepare the entire staff. two plastic surgeons of our team (of a total of six) have been fully dedicated to shifts in covid medical areas coordinated by a pulmonologist and an intensivist. main activities focus on clinical examination, adjustment of oxygen therapy, regulation of cpap systems, blood gas analysis, monitoring of blood and radiological exams with consequent therapy modulation, and the paperwork of admissions, discharges and deaths. although these clinical fields are new for a plastic surgeon, we are learning how isolation of patients, for public health reasons, is the most devastating aspect of the covid- pandemic. , every single day we phone and update the relatives of those who, because of the worsening of their respiratory condition, are unable to speak and call home. we are sometimes those who communicate the death of a loved one, but also those who bring words of hope, words of love: "please doctor, could you tell him that i love him so much?". some of these patients die without the hug of their families. a plastic surgeon is not usually accustomed to facing death, because in our surgery it is not so frequent. we would say that the death of a lonely patient also takes a part of us away. it acquires a different hue, touching some inner chord; it makes you feel impotent and lost. as plastic surgeons we often take care of the psychological side of patients and, except for some tumours and traumas, the pathologies we treat, as in breast reconstruction, are not fatal diseases. if we compare the contribution of the plastic surgery department in terms of numbers, we are like a drop in the ocean. but as ovid wrote in epistulae ex ponto, "gutta cavat lapidem", i.e. "the drop hollows the stone". thanks to our support, a clinical physician is able to evaluate a larger number of patients, focusing on the most critical ones. this is why we keep going on.
we want to play our part, working with commitment, dedication and professionalism, and assisting all our patients to the best of our continuously updating knowledge. we are proud to help the bergamo community face the covid- emergency and to try to make a difference in our wounded city. we hope this letter will help other colleagues not to consider themselves unprepared or unready. the contribution of everyone is crucial to defeat this ongoing pandemic, which has not only upset our clinical routine but has also woken us up from our everyday life. before covid- everything was scheduled; now there are no plans and we are not sure about our priorities. only if we behave, as long as necessary, with the awareness of being able to make a difference, will we win this terrible fight against sars-cov- . only together will we go back to hugging, kissing and loving each other. when the critical phase of this emergency is over, it will be necessary to think deeply about socioeconomic development strategies to discover new horizons and new opportunities for a better future. we will never give up!…and what about you? are you ready to play your part? none. this research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. dear sir, covid- is a novel coronavirus with increasing outbreaks occurring around the world. , during the past weeks, the emergence of new cases has gradually decreased in china with the help of massive efforts from society and the government. in addition to those directly working in the respiratory, infectious disease, cardiology, nephrology, psychology and icu departments and with covid- patients, all members of the general population may encounter the new coronavirus. medical staff in plastic, reconstructive and other departments also have a responsibility to prevent the disease spreading in our community. in order to protect both patients and medical staff, elective operations and cosmetic treatments were reduced or postponed in the plastic surgery hospital, beijing, china. gloves and medical masks were saved and donated to the doctors and nurses in wuhan as the demand for protective equipment increased significantly. in addition, a standard operating procedure for covid- was proposed in local hospitals. our hospital recommended online consultations to replace face-to-face interactions. hospital websites and official social media accounts provided updated practical disease prevention information instead of plastic surgery information. other colleagues also conducted publicity campaigns on disease prevention online via their own social media accounts for relatives and friends, especially for older persons, who appeared more likely to develop a serious illness. at the early stages of the covid- outbreak in certain areas, the public may not care much about the new disease. as more information about covid- becomes available, people without a medical background may be anxious to seek diagnosis, which may result in potential risks of cross infection in crowded fever clinics. thus, proper information and guidance can help reduce their panic and anxiety. moreover, if individuals were exhibiting relevant symptoms with an epidemiologic history, they were advised to seek medical care following the directions of the local health authority. in general, plastic surgeons are particularly good at introducing novel surgical methods to the public and keeping in touch with a great number of patients.
as a result, they may be able to present local health authority advice in the form of straightforward images and accessible videos, as well as promote practical information via personal social media or clinic websites. in addition to local doctors and nurses from other departments helping in fever clinics and isolation wards, , (as of march , ) members of medical staff from other provinces rushed to help their colleagues in hubei province. plastic surgeons who had completed icu training in beijing and other cities supported wuhan on their own initiative as well. we suggest that measures should be taken by medical staff from all departments to help slow further spread and to protect health systems from becoming overwhelmed. dear sir, as covid- spreads quickly from asia via europe to the rest of the world, hospitals are evolving into hot zones for treatment and transmission of this disease. with the increasing acceptance that operating theatres are high-risk areas for transmission of respiratory infections for both patients and surgeons, and with our health care systems being generally well-designed to deal only with occasional high-risk cases, there is an obvious need to evolve our practice. although social media campaigns via the british association of plastic, reconstructive and aesthetic surgeons (#staysafestayhome) and the british society for surgery of the hand (#playsafestaysafe) are attempting to raise awareness and reduce preventable injuries, we are still seeing a steady stream of patients present to our plastic surgery trauma service. we have had to act immediately so our systems can support essential surgical care while protecting patients and staff and conserving valuable resources. as a department we have developed a set of standard operating procedures which cover the full scope of plastic surgery, from the facilitation of emergent life- and limb-saving surgery and rationalised oncological management to the management of minor soft tissue and bony injuries. we have been cognisant of the need to reduce footfall to the hospital and the stratification into "dirty" and "clean" areas, with attempted segregation of non-, suspected and confirmed covid cases within inpatient clinical areas. this has resulted in displacement of assessment and procedure rooms within the unit. the ward itself has been earmarked as an extended intensive care unit due to its layout and facilities. standards of practice have changed, with an emphasis on "see and treat", as operating theatre availability has been reduced owing to the reduced availability of nurses and theatre staff and the conversion of theatres into intensive care areas for ventilated patients. there is also an emerging assumption that all patients are covid- positive until proven otherwise. the combination of unfamiliar environments, lack of accessible equipment, the requirement to reduce time spent with patients and adherence to social distancing has resulted in the need to provide a more mobile and flexible service. in order to support our mobile service, we have found that, as in other disaster situations where specialised bags have been deployed, using a simple bag containing essential equipment and consumables has revolutionised our ability to work at the point of referral and avoid unnecessary trips to theatre. despite their simplicity, bags have been fundamental for the development of human civilization, with the word originating from the norse word baggi and comparable to the welsh baich (load, bundle)!
our portable "pandemic pack" is now being carried by the first on-call in our department. this pack contains a l ultra dry adventurer tm polymer dry bag measuring cm (w) × cm (l), as shown in figure . the contents are shown in figure . we have found this adequate for managing the most common plastic surgery trauma and emergency scenarios. the bag is easily cleaned with ppm available chlorine (in accordance with public health england guidance) after each patient exposure. we have found it useful to make up two packs in advance so that one is available at handover whilst the other is replenished by the outgoing team. we are sure that this concept has been used elsewhere, but if it is not common practice in your unit, we would advocate implementing such a toolkit to facilitate management of trauma patients and reduce the amount of time frontline staff need to be in a potential "dirty" environment during the covid- pandemic. teleconsultation-mediated nasoalveolar molding therapy for babies with cleft lip/palate during the covid- outbreak: implementing change at pandemic speed dear sir, cleft lip/palate is among the most common congenital anomalies, requiring multidisciplinary care from birth to adulthood. nasoalveolar molding (nam) revolutionized the care provided to babies with a complete cleft, with proven benefits to patients, parents, clinicians, and society. this therapeutic modality requires parents' engagement with nam care at home and continuous clinician-patient/parent encounters, commencing at the second week of life and finishing just before the lip repair. the rapidly expanding covid- pandemic has challenged clinicians who are dealing with nam therapy to fully stop it, or adjust it to protect both the patient/parent and the healthcare team. based on the current who recommendation to maintain social distancing, and the national regulation for the use of telemedicine, , the nam-related clinician-patient/parent relationship has been adjusted in a timely manner by implementing a non-face-to-face care model. babies with clefts are consulted individually by clinicians, proactively establishing the initial and subsequent telemedicine consultations, also providing an open communication channel for parents. based on a shared decision-making process, all parents have the option to completely stop nam therapy or use only lip taping. given that each patient is at a particular stage within the continuum of nam care, numerous patient- and parent-derived issues are being addressed by video-mediated consultations. overall, this has helped explain the current covid- -related public health recommendations and precautions to parents, while addressing patients' needs and parents' feelings, fears and expectations, and answering parents' questions. moreover, clinical support is provided to patients and parents by visual inspection (looking for potential nam-derived facial irritation) and by checking parents' hand-hold maneuvers, such as feeding and placement of the lip taping and nam device, with immediate feedback for corrections. thus, the use of an audiovisual communication tool has considerably reduced the number of in-person consultations. when an issue could not be resolved using the telemedicine triage and a face-to-face consultation was needed, an additional video-based conversation was implemented, focusing on the key steps established for patient/parent visits to the facility (i.e., frequent hand-cleaning, mask usage, and keeping a m social distance) and on covid- -focused screening.
symptom-and exposure-screened negative parents/babies have been consulted in a time-specific scheduling with minimum waiting time to avoid crowded waiting rooms, by a clinician wearing personal protective equipment (cap, face shield, n mask, goggles, gloves, and gowns), and working in an environment with constant surface/object decontamination. parents, who screened positive for symptoms (e.g., fever, cough, sore throat), were indicated to follow to the appropriate self-care or triage mechanism, stipulated by the who guidelines and local authorities. [ ] [ ] [ ] [ ] in the covid- era, the care provision should be aligned with the latest clinical evidence. in response to the constantly changing needs, clinicians across the globe could adapt the telemedicine-based possibilities to their own environment of national/hospital regulatory bodies, technology accessibility, and the parents' level of technological literacy. as most of the issues addressed in the video conversations were recurrent reasons for consultations prior to the covid- outbreak, future investigations could assist in truly defining the key aspects of telemedicinebased clinician-patient/parent relationship in delivering nam therapy, and its impact on nam-related proxy-reported and clinician-derived outcome measures. there are no conflicts of interest to disclose. virtual clinics: need of the hour, a way forward in the future. adapting practice during a healthcare crisis the whole world is gripped by the novel coronavirus pandemic, with huge pressures on the health services globally. within the coming days, this is only going to increase the pressure on the health care services and needs robust planning and preparedness for this unprecedented situation, lest the whole system may cripple and we may see unimaginable mortalities and suffering. the whole concept of social distancing and keeping people in self isolation has reduced footfall to the hospitals but this is affecting delivery of routine care to patients for other illnesses in the hospital and telehealth is an upcoming way to reduce the risk of cross contamination as well as reduce close contact without affecting the quality of health care delivered. at the bedford hospital nhs trust, for the past one year we have been running a virtual clinic for our skin cancer suspect patients, where in after a particular biopsy if the clinical suspicion of a malignancy was low, these patients were not given a follow up clinic appointment and instead they were informed of the biopsy result through post, sent both to their gp and themselves. most patients encouraged this model to not have to come back to an appointment and this took significant pressure off our clinics. in the event we needed to see a patient, they were informed via a telephonic conversation to attend a particular clinic appointment. from an administration standpoint, this resulted in less unnecessary follow up appointments in our skin cancer follow up clinics, which could then be offered to our regular skin cancer follow up patients as per the recommended guidelines, without having to struggle with appointments. virtual clinics have previously shown to be safe and cost effective alternatives to the out patient visits in surgical departments like urology and orthopedics. they improved performance as well as improved economic output. , we have increased the use of these virtual clinics, with the onset of the novel coronavirus pandemic, in order to reduce the patient footfall to our clinics. 
most patients voluntarily chose not to attend and, with the risk being highest amongst the elderly, it was logical to keep them away from hospitals as far as possible. in order to achieve this, we have started virtual clinics for nearly all patients, in order to triage those who can do without having to come to the hospital for now. the world of telemedicine is the way forward in nearly all aspects of medical practice, and this pandemic situation might just be the right time to establish such methods. we propose setting up more such clinics in as many subspecialties of plastic surgery as possible, which will not only help in the current crisis but will also be useful in the future to take pressure off our health care services. none declared not required funding none webinars in plastic and reconstructive surgery training -a review of the current landscape during the covid- pandemic dear sir, the covid- pandemic has resulted in cancellation of postgraduate courses and the vast majority of elective surgery. plastic surgery trainees and their trainers have therefore needed to pursue alternative means of training. in the face of cross-speciality cover and redeployment there is an additional demand for covid- specific education. the joint committee on surgical training (jcst) quality indicators for higher surgical training (hst) in plastic surgery state that trainees should have at least h of facilitated formal teaching each week. social distancing requirements have meant that innovative ways of delivering this teaching have needed to be found. a seminar is a form of academic instruction based on the socratic dialogue of asking and answering questions, with the word originating from the latin word seminarium, meaning "seed plot". fast and reliable internet and the ubiquitous nature of webcams have led to the evolution of the seminar into the webinar. whilst webinars have been commonplace for a number of years, they represent an innovative and indispensable tool for remote learning during the covid- pandemic, where trainees can interact and ask questions to facilitate deep and meaningful learning. speciality and trainee associations have traditionally used their websites and email lists to publicise training opportunities. however, the covid- pandemic has seen a shift to social media, with people seeking constant updates and information from public figures, brands and organisations alike. surgical education has mirrored this trend, and we have increasingly observed that webinars are being launched through speciality and trainee association social channels to keep up with the fast-paced demand for accessible online content. the aim of this study was to audit the cumulative compliance of active, publicly accessible postgraduate plastic surgery training webinar frequency and duration against the jcst quality indicators. we used the social listening tool brand tm ( https:// brand .com ). this tool monitors social media platforms for selected 'keywords' and provides analysis of search results. we used the search terms "plastic surgery webinar", "reconstructive surgery webinar", "royal college of surgeons", "bapras", "bssh", "british burns association", "plasta" and "bssh". there were mentions of these terms from th may to th may and of these were after rd march , the date that lockdown began in the united kingdom (uk). this represents an increase of , % post-lockdown. we supplemented this search strategy by searching google tm and youtube tm with "plastic and reconstructive surgery webinar".
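the post-lockdown increase reported here is simple arithmetic on the mention counts exported from the social listening tool; the sketch below illustrates the calculation with hypothetical dates and volumes, since the real counts are elided in the extracted text.

from datetime import date

# hypothetical (mention_date, count) pairs, standing in for the tool's export
mentions = [
    (date(2020, 2, 10), 3),
    (date(2020, 3, 1), 5),
    (date(2020, 3, 30), 60),
    (date(2020, 4, 20), 110),
]

lockdown = date(2020, 3, 23)  # uk lockdown start per public record
before = sum(count for day, count in mentions if day < lockdown)
after = sum(count for day, count in mentions if day >= lockdown)
print(f"before: {before}, after: {after}, "
      f"increase: {(after - before) / before:.0%}")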
google tm and youtube tm rank results in order of relevance using a relevancy algorithm; we therefore reviewed the first results only. additional webinars were identified through a snowballing technique, where the host webinar webpage was searched for advertised webinars at other institutions. we included any educational webinar series aimed at trainees that was free to access, mirroring weekly plastic surgery hst teaching. free webinars which required membership registration were also included. we excluded webinars aimed at patient or parent education, webinars with less than one video, any historic webinar that did not have an accessible link and webinars behind a paywall or requiring paid membership. we systematically reviewed the search results from brand tm, google tm and youtube tm and identified webinar series currently in progress ( table ) and historic webinar series ( table ). seven active webinar series and two historic webinar series were identified, respectively. all were delivered by consultants or equivalent. of the active webinar series, ( %) related to covid- , ( %) related to aesthetic surgery, ( %) related to pan-plastic surgery and ( %) related to hand surgery. the weekly total running time for active webinars amounted to h min, with h and min plastic surgery specific. this was a surplus of h min over the jcst quality indicators. limitations of this study include the fact that we only identified webinars advertised publicly. we are aware of training programmes in the uk running in-house webinar series to supplement training, and therefore the total available for training is likely to be higher than we have identified. we have also not reviewed the quality of the educational content. we acknowledge there are good-quality webinar series that require paid membership, such as those provided by the british association of aesthetic plastic surgeons and the american society of plastic surgeons, but it was not the aim of the study to present them here. innovation flourishes during times of crisis. the education of surgical trainees is of paramount importance and should be maintained, even during the difficult times we currently face. while operative skills will be difficult to develop, the use of technology can allow for the remote delivery of expert teaching to a large number of trainees at once. in this study we identify a number of freely available webinar series that provide a greater number of teaching hours than is recommended by the jcst. the training exists; it is up to trainees to make the most of it. none. none. dear sir, salisbury district hospital (sdh) is based in southwest england and provides a plastic surgery trauma service across the south coast, serving six local hospitals and the designated major trauma centre (mtc). prior to the covid- pandemic, all patients referred to the trauma service, apart from open lower limb trauma, were reviewed in person within the trauma clinic. if surgery was required, it was usual for patients to return on a separate day for their operation, and in most instances this was carried out under general anaesthetic in the main operating theatres. after discharge, patients were referred to the hand therapy and plastics dressing services and returned in person for all follow-up visits, including dressing changes and therapy. patients with lower limb injuries from the mtc were transferred from southampton general hospital as inpatients to sdh for all complex reconstruction, including free tissue transfer.
at the start of the covid- crisis, it quickly became apparent that reducing patient footfall within our department was necessary to protect both patients and staff from the disease. this included reducing inpatient stays in hospital. we responded to this challenge in the following ways and hope that our experience will be of assistance to other trauma services over the course of the global pandemic. firstly, all patient protocols underwent significant redesign, following which changes to the layout of our plastic surgery outpatient facility were made and patient flow through the department was altered and reduced. now, when patients are referred to our hand trauma service from peripheral hospitals, the initial patient consultations are carried out remotely using the 'attend anywhere' video platform. we are following the bssh covid- hand trauma guidelines for patient management. all patient decisions are discussed with the trauma consultant of the day. we are managing a greater number of patients conservatively and, to aid this, we have designed comprehensive patient information leaflets that enable our patients to increase their understanding of their own management. patients who need to be seen in person at our department are screened for symptoms of covid- and their temperature taken at the department entrance. level ppe is worn by staff at all times. for hand trauma patients requiring surgery, this is provided on the same day to maximise efficiency and reduce the need for multiple visits. we have transformed our minor operating theatres, located adjacent to our clinic, into fully functional theatres equipped with a mini c-arm and all instruments for trauma operating. this reduces the need for our patients to be taken into the main hospital theatre suite. operations are carried out under local anaesthetic, walant or regional block, depending on complexity. all theatre staff wear level ppe and staffing is kept to a minimum. all wounds are closed with dissolvable sutures. immediately post-operation, our on-site hand therapists review patients. splints are made on the same day and patients are educated about their post-operative management at this time. all follow-up is subsequently carried out virtually by the hand therapy team using 'attend anywhere'. with our hub and spoke service set up for lower limb trauma patients, we have ensured that there is an on-site consultant at the mtc every day. wound coverage is being undertaken for all patients at the mtc. two plastic surgery consultants, in conjunction with the orthopaedic team, carry out operating for these patients. all inter-hospital transfers for this group of patients have been stopped. choice of wound coverage for these patients is being designed to minimise inpatient stay and reduce operative time. the changes that we have made to our service in a short period of time have already been beneficial for patients, streamlining their care and reducing time spent in hospital. figure shows the drop in numbers of trauma patients that we have seen during the first four weeks of the uk lockdown (n = in january to n = over the first weeks into lockdown). this is in line with reports from other uk units. this has given us time to refine our protocols for an expected upsurge of patients as the lockdown is lifted. furthermore, during this period, where we have had extra capacity, our registrars have been trained to carry out new techniques.
they now undertake insertion of both mid-lines and picc lines for medical inpatients under ultrasound guidance, to support and reduce the burden placed on our anaesthetic and critical care colleagues who previously would have placed these. it is our expectation that many of the changes we have implemented to our service will be continued in the long term. we will continue to learn and adapt our protocols as this phase of work continues. whilst many of the outcomes of the covid- pandemic will be negative, it has also been the catalyst for significant positive change within the uk nhs. dear sir, the covid- pandemic has caused unprecedented disruptions in patient care globally, including management of breast and other cancers. however, cancer care should not be compromised unnecessarily by constraints caused by the outbreak. clinic availability and operating lists have been drastically reduced, with many hospital staff members reassigned to the "frontline". furthermore, all surgical specialties have been advised to undertake emergency surgery or unavoidable procedures only, with the shortest possible operating times, minimal numbers of staff and leaving ventilators available for covid- patients. in consequence, much elective surgery including immediate breast reconstruction (ibr) has been deferred in accordance with guidance issued by professional organisations such as the association of breast surgery (uk) and the american society of plastic surgeons. , this will inevitably lead to backlogs of women requiring delayed reconstructions, and it is therefore imperative that reconstructive surgeons consider ways to mitigate this and adapt local practice in accordance with national guidelines and operative capacity. in the context of the current "crisis" or the subsequent "recovery period", time-consuming and complex autologous tissue reconstruction (free or pedicled flap) should not be performed. approaches to breast reconstruction might include the following options:
1. a blanket ban on immediate reconstruction, and all forms of risk-reducing, contralateral balancing and revisional/tertiary procedures. where reconstructive delay is neither feasible nor desirable, opting for simple and expedient surgery should be considered, e.g.:
a) expanded use of therapeutic mammaplasty: as a unilateral procedure in selected cases instead of mastectomy and ibr.
b) exploring less technically demanding (albeit "controversial") implant-based forms of ibr:
i. epipectoral breast reconstruction (fixed-volume implants): this adds about minutes to the ablative surgery, as the pre-prepared implant-adm complex is easily secured with minimal sutures.
ii. "babysitter" tissue expander/implant: this acts as a scaffold to preserve the breast skin envelope for subsequent definitive reconstruction.
2. during the restrictive and early recovery phase, either a solo oncological breast surgeon or a joint ablative and reconstructive team (breast and plastic surgeon) performs surgery without the assistance of trainees or surgical practitioners. for joint procedures, the plastic surgeon acts as assistant during cancer ablation and as primary operator for the reconstruction.
despite relatively high rates of complications for implant-based ibr (risking re-admission, prolonged hospital stays or repeat clinic visits), avoiding all ibr will lead to long waiting lists and have a negative psychological impact, particularly among younger patients. this will also impair aesthetic outcomes due to more extensive scars and inevitable loss of nipples.
whilst appreciating the restrictions imposed by covid- , there is an opportunity to offer some reconstructive options depending on local circumstances, operating capacity and the pandemic phase. we suggest that these proposals, involving greater use of therapeutic mammaplasty as well as epipectoral and "babysitter" prostheses, be considered in efforts to offset some of the disadvantages imposed by covid- on breast cancer patients, whilst ensuring that their safety and that of healthcare providers comes first. dear sir, the covid- pandemic has shifted clinical priorities and resources away from elective and trauma hand surgery under general anaesthesia (ga) toward treating the growing number of covid patients. at the time of this correspondence, the pandemic has affected over million people, resulting in deaths worldwide, including uk deaths, with numbers still climbing. this has particularly affected our hand trauma service, which serves north london, a population of more than million. we receive referrals from a network of hospitals, in addition to the emergency departments of the royal free group of hospitals and numerous gp practices and urgent care centres. in the first week following the british government lockdown, which commenced march rd, we experienced a % drop in referrals, from to a day. subsequently, numbers have been steadily rising to - a day by th of april. the british association of plastic, reconstructive and aesthetic surgeons, the british society for surgery of the hand and the royal college of surgeons of england have all issued guidance: encouraging patients to avoid risky pursuits that could result in accidental injuries, and advising members how to prioritise and optimise services for trauma and urgent cancer work. we have adapted our hand trauma service to a 'one stop hand trauma and therapy' clinic, where patients are assessed, definitive surgery is performed and immediate post-operative hand therapy is offered, with therapists making splints and giving specialist advice on wound care and rehabilitation, including an illustrated hand therapy guide. patients are categorised based on the bssh hand injury triage app. we already have a specific 'closed fracture' hand therapy-led clinic to manage the majority of our closed injuries. we combined this clinic with the plastic surgeon-led hand trauma clinic, and improved its efficiency further by utilising the mini c-arm fluoroscope within the clinic setting. this enabled us to immediately assess fractures and perform fracture manipulation under simple local anaesthesia. we have successfully been able to perform % of our operations for hand trauma under wide-awake local anaesthesia no tourniquet (walant). prior to the pandemic, we used walant for selected elective and trauma hand surgical cases. in infected cases, where local anaesthesia is known to be less effective, we have used peripheral nerve blocks. previous data showed % of our trauma cases were conducted under ga, % under la, and % under brachial or peripheral nerve blocks. we have specifically modified our wound care information leaflets to minimise patient hospital attendance. afterwards, patients receive further therapy phone consultations and encouragement to use the hand therapy exercise app developed by the chelsea and westminster hand therapists. the patient is given details of a designated plastic surgery nhs trust email address for direct contact with the plastic surgery team: for concerns, questions and transfers of images.
we have to date received emails, of which have been from patients directly, with the remainder from referring healthcare providers. the majority of inquiries are followed up via a telephone consultation, and only complex cases or complications attend face-to-face follow-up. this model has successfully combined assessment, treatment and post-operative therapy into a one-stop session, which has greatly limited patient exposure to other parts of the hospital, such as the radiology and therapy departments. another benefit of such a clinic is improved outcomes through combined decision-making. there is also a cost-saving benefit compared to our traditional model of patient care. we have so far treated patients suitable for remote monitoring based on this model. on average, we have saved plastics dressing clinic (pdc) visits for wound checks per patient, as a very minimum. we have previously calculated the cost of pdc at our centre at £ per visit, and for our patients this translates to an approximate saving of £ per month on pdc costs alone. if patients each month could be identified for remote monitoring, this could potentially lead to an annual saving of more than £ , . in addition, converting the mode of anaesthesia from ga to walant has been shown to produce a % reduction in estimated costs. the concept of a one-stop clinic has already been successfully implemented in the treatment of head & neck tumours following the introduction of nice guidelines, and the covid-19 pandemic has made us redesign a busy metropolitan service for hand injuries along the same lines. we believe this model is a good strategy, and that combining it with more widespread use of the walant technique, technology such as apps and telemedicine, and encouragement of greater patient responsibility for post-operative care and rehabilitation is the way forward. we hope sharing this experience will result in improved patient care at this time of crisis.

'this is a saint patrick's day like no other' declared the irish prime minister on march 17th, whilst announcing sweeping social restrictions in response to the worsening covid-19 pandemic. this nationwide lockdown involved major restrictions on work, travel and public gatherings, and signified the government's shift from the suppression to the mitigation phase of the outbreak. the national covid-19 task force produced a policy specifying the redeployment of health care workers to essential services such as the emergency department and intensive care. with the introduction of virtual outpatient clinics and the curtailment of elective operating lists, the apparent clinical commitments of a plastic surgeon during this pandemic have lessened. trauma is a continual and major component of our practice; however, a decline in emergency department presentations has fuelled anecdotal reports of a reduction in the trauma workload. with diminishing resources, the risk of staff redeployment and the consequences of poor patient outcomes, we aimed to assess the effect of the current covid-19 lockdown on the plastic surgery trauma caseload. we performed a retrospective review of a prospectively maintained trauma database at a tertiary referral hospital. during the first days of the lockdown, patients attended the plastic surgery trauma clinic, of whom ( . %) underwent a surgical procedure. as seen in figure , these numbers are comparable over the same time frame for the two previous years. upper limb trauma accounted for the near majority of referrals.
frequency and type of surgery performed during the lockdown were similar to the previous two years, as seen in table . the percentage of patients requiring general anaesthesia was . % ( / ) in , . % ( / ) in , and slightly higher in at . % ( / ). we have refuted the anecdotal evidence proposing a decline in the plastic trauma caseload during the covid-19 nationwide lockdown. compared with the same period in previous years, the lockdown has produced an equivalent trauma volume. despite the widespread and necessary restriction of routine elective work, the pattern and volume of trauma has, somewhat surprisingly, remained similar to preceding years. with people confined to their households, 'diy at home' injuries contribute to this trend, as does the exemption of certain industries, such as agriculture and the food preparation chain, from the regulations. whilst not every trauma risk may be mitigated, the potential for these diy injuries to overwhelm the healthcare service has resulted in the british society for surgery of the hand (bssh) cautioning the general public on the safety of domestic machinery. as healthcare systems are stretched further than ever before, we must all recognise the need for adaptation and structural reorganisation to treat those of our patients most in need during this pandemic. staff redeployment is a necessary tool to maintain frontline services; nonetheless, we wish to highlight the outcomes of this study to the clinical directors with the challenging job of allocating resources. our trauma presentations have not reduced during the first days of this pandemic, and resources (staff and theatre) should still be accessible for the plastic surgery trauma team, with observance of all the appropriate risk-reduction strategies documented by the british association of plastic, reconstructive and aesthetic surgeons. none. none.

in light of the ongoing covid-19 pandemic, the american society of plastic surgeons (asps) has released a statement urging the suspension of elective, non-essential procedures. this necessary and rational suspension will result in detrimental financial effects on the plastic surgery community. given the simultaneous economic downturn inflicted by public health social-distancing protocols, there will be a bear market for elective surgery lasting well beyond the lifting of the bans on elective surgery. this effect will largely be due to the elimination of discretionary spending as individuals attempt to recover from weeks to months of lost earnings. as demonstrated during the - recession, economic decline was associated with a decrease in both elective and non-elective surgical volume. private practice settings performing mostly cosmetic procedures were particularly vulnerable to these fluctuations and demonstrated a significant positive correlation with gdp. the surgery community must prepare for the economic impact that this pandemic will have on current and future clinical volumes. these effects are likely to be more severe than in the previous recession, as surgeons are currently unable to perform elective surgery for an indefinite period, coupled with the immense strain on hospital resources at this time. given this burden, elective surgery cases may be some of the last to be added back to the hospital schedule once adequate resources are restored. while surgeons are temporarily unable to operate, they do have the potential to use telehealth to arrange preoperative consults and postoperative follow-up appointments.
this could be accomplished in private practice settings with the use of telehealth services such as teladoc health, american well, or zoom, which allow for live consultation with patients without unnecessary exposure of patients or providers to potential infection. the main limitation of these types of appointments is the lack of an in-person physical exam, so providers have found that billing based on time spent with the patient is more effective with this tool. this could generate revenue and facilitate future surgical cases after the suspension of in-person elective patient care has been lifted. several strategies should be considered by the elective surgery community to minimize financial losses. many financial entities have changed their policies in order to support small businesses. examples include the small business administration offering expanded disaster impact loans, and the deferment of federal income tax payments by three months, to july 15. another option employers may leverage is temporarily laying off employees so that they can apply for and collect the expanded unemployment package offered by federal and state governments, thereby reducing the payroll burden on stagnant practices with no cash flow while providing employees with a steady source of income during the pandemic. the employer's incentive to do this may be reduced by the potential suspension of the payroll tax on employers and loan forgiveness for employers who continue to pay employees' wages. once elective procedures are again permitted, plastic surgeons who have retained a reconstructive practice should make a strategic business decision to increase reconstructive surgery and emergent hand surgery bookings, as historically these procedures fluctuate less with the economy. other options to maintain aesthetic case volume include price reductions or temporary promotions. however, it is important that these be adopted universally in order to minimize price wars between providers. as physicians, it is a core principle that surgeons practice nonmaleficence and minimize non-essential patient contact for the time being. however, this time of financial standstill should be used constructively to prepare for the financial uncertainty in the months to come. none.

dear sir, guidance during the covid-19 pandemic advises certain groups to stringently follow social distancing measures. inevitably, some health care workers fall into these categories, and working in a hospital places them at high risk of exposure to the virus. studies have shown human-to-human transmission from covid-19-positive patients to health care workers, demonstrating that this threat is real, and, as with other infectious diseases, the risk is worse in certain situations such as aerosol-generating and airway procedures. there is therefore a part of our workforce that has been out of action, reducing the available workforce at a time of great need. in our hospital, a group of vulnerable surgical trainees ranging from ct to st , and also consultants, have been able to keep working while socially isolating within their usual workplace. in light of covid-19, our hospital, a regional trauma centre for burns, plastic surgery and oral and maxillofacial surgery, was reorganized to increase capacity for both trauma and cancer work. as part of this, a virtual hand trauma service has been set up. the primary aim of the new virtual hand trauma clinic was to allow patients to be triaged in a timely manner while adhering to social distancing guidelines, by remotely accessing the clinic from home.
further aims were to reduce time spent in hospital and the time between referral and treatment. in brief, patients referred to our virtual hand trauma clinic from across the region receive a video or telephone consultation using attend anywhere software, supported by nhs digital. following the virtual consultation, patients are then triaged to theatre, further clinic review, or discharge. our group of isolating doctors, plus a pharmacist and a trauma coordinator, have been redeployed away from their usual face-to-face roles and are now working solely in the virtual trauma clinic. they provide this service from an isolated part of the hospital named the 'virtual nest'. the nest is not accessible in a 'face to face' manner by non-isolating staff or patients. this allows a safe, 'clean' environment to be maintained. the virtual team is able to participate in morning handover with other areas of the hospital via video conferencing using webex software. the nest workspace is large enough to allow social distancing between clinicians, and by being on site they benefit from the availability of dedicated workspaces with suitable it equipment and bandwidth. it is widely recognised that reconfiguration of hospitals and redeployment of staff have meant that training is effectively 'on hold' for many trainees. we have found that a benefit of the new virtual hand trauma clinic is that trainees can continue to engage with the intercollegiate surgical curriculum programme through work-based assessments in a surgical field. while direct observation of procedural skills and procedure-based assessments are not feasible, case-based discussions and clinical evaluation exercises have been easily achievable, as trainees manage patients with the involvement of supervising senior colleagues in decision-making. this, plus the varied case mix seen, has enhanced development of knowledge, decision-making, leadership and communication skills. as trainees are unable to attend theatre, practical skills may suffer, depending on how long clinicians are non-patient-facing. this has been acknowledged by the gmc in the skill fade review; skills have been shown to decline over - months. although it can only be postulated at the current time, colleagues who are patient-facing but redeployed may face a similar skill decline. the structure of the team is akin to the firm structure of days gone by, with the benefits that brings in terms of support and mentorship. patients benefit from having access to a group of knowledgeable trainees, supported by consultants, and a service accessible from their own home. this minimizes footfall within our hospital and exposure to, and spread of, covid-19. local assessment of our practice is ongoing, but we have found that this model has enabled a cohort of vulnerable plastic surgery trainees to continue to work successfully whilst reducing the risk of exposure to covid-19 and providing gold-standard care for patients. none. nothing to disclose.

dear sir, a scottish sarcoma network (glasgow centre) special study day was held on th march at the school of simulation and visualisation, glasgow school of art, with representatives from sarcoma uk, beatson cancer charity and the bbc. traditional patient information leaflets inadequately convey medical information due to poor literacy levels: - % of the uk population have the lowest adult literacy level and % the lowest "health literacy" level (the ability to obtain, understand, act on, and communicate health information).
it was hypothesised that an entirely visual approach, such as ar, may obviate literacy problems by facilitating comprehension of the complex three-dimensional concepts integral to reconstructive surgery. we report the first use of augmented reality (ar) in patient information leaflets in plastic surgery. to our knowledge, we are among the first in the world to develop, implement, and evaluate an ar patient information leaflet in any speciality. developed for sarcoma surgery, the ar patient leaflet centred around a prototypical leg sarcoma. a storyboard takes patients through tumour resection, reconstruction, and the potential post-operative outcomes. input from specialist nurses, sarcoma patients, and clinicians during a scottish sarcoma network special study day in march informed the final content (figure ). when viewed through a smartphone camera (hp reveal studio, hp, palo alto, california, usa), photos in the ar leaflet automatically trigger the display of additional content, such as sequential tumour resection, without need for qr codes or internet connectivity. a 3d alt flap model was developed using bodyparts3d (research organization of information and systems database centre for life science, japan) and custom anatomical data. leaflet evaluation by consecutive lower limb sarcoma patients was exempted from ethics approval by the greater glasgow and clyde nhs research office as part of service evaluation. ar leaflets were compared with pooled data from traditional information sources (sarcoma uk website patient leaflets ( ), self-directed internet searches ( ), generic sarcoma patient leaflets ( ); some patients used more than one source). the mental effort rating scale evaluated perceived difficulty of comprehension (or extrinsic cognitive load) as a key outcome measure in comparison to traditional information sources. patient satisfaction was assessed by likert scale (from 'very, very satisfied' to 'very, very dissatisfied'). statistical analysis was performed with social science statistics. ar leaflets were rated as . (very, very low mental effort) and traditional information sources as . (high mental effort) [unpaired t-test, p < . ]. likert-scale satisfaction was . , indicating very, very high satisfaction. when asked "do you think the ar leaflet would make you less anxious about surgery?", / ( %) patients responded 'yes'. when asked "do you think other patients would like to have a similar ar leaflet before surgery?" and "would you like to see further ar leaflets developed in the future?", % responded "yes". no correlation was found between age or educational level and mental effort rating scale scores for the ar patient leaflet (data not shown). subjective feedback analysis found that self-directed internet searches had too much unfocussed information: "(i) didn't want to google as may end up with all sorts" and "(there is) good and bad stuff on the internet, don't know what you're looking at". all patients felt the visual content in the ar leaflets helped their understanding: "incredible…that would have made a flap easier to understand", "tremendous… good way of explaining things to my family", "so much better seeing the pictures, gives an idea in your head", and "helpful for others with dyslexia". traditional patient leaflets were often difficult to comprehend: "(i) didn't fully understand the sarcoma leaflets", "couldn't take information in from leaflets".
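for readers wishing to reproduce this kind of comparison, the analysis described above is a standard two-sample (unpaired) t-test, which can be run with common statistical tooling. the following python sketch illustrates the approach; the score values are invented placeholders, since the actual ratings are not recoverable from this text.

```python
# minimal sketch of an unpaired t-test on mental effort rating scale
# scores; the values below are illustrative placeholders, not study data.
from scipy import stats

ar_leaflet_scores = [1, 2, 1, 1, 2, 1, 2, 1]    # hypothetical ar leaflet ratings (low effort)
traditional_scores = [6, 7, 5, 6, 7, 6, 5, 7]   # hypothetical traditional source ratings

# two-sample (unpaired) t-test, as reported in the letter
t_stat, p_value = stats.ttest_ind(ar_leaflet_scores, traditional_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")   # p < 0.05 indicates a significant difference
```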
feedback recommended adding simple instructions to the leaflet; however, the ar leaflet is intended for use by the clinician in clinic, and to be so simple that no instructions are required once the software is downloaded to the patient's smartphone (i.e., point and shoot, without technical expertise, menus, or website addresses). all patients desired an actual paper leaflet for reassurance, preferring something physical to show their family rather than direction to a website or video. this study demonstrates a significant reduction in extraneous cognitive load (the mental effort required to understand a topic) with ar patient leaflets compared to traditional information sources (p < . ). ar visualisation may make inherently difficult topics (high intrinsic cognitive load), such as reconstructive surgery, easier to understand and process. significant learning advantages exist over traditional leaflets or web-based videos, including facilitating patient control, interactivity, and game-based learning. all contribute to increased motivation, comprehension, and enthusiasm in the learning process. ar leaflets reduced anxiety ( % of patients) and scored very highly for patient satisfaction with information, which is notable given increasing evidence that satisfaction with information is a strong independent determinant of overall health outcomes. this study provided the impetus for investment in the concurrent development of other ar leaflets across the breadth of plastic surgery and non-plastic-surgery specialties. chief scientist office (cso, scotland) funding was secured to aid development of improved, free, fully interactive 3d ar patient information leaflets and a downloadable app. ethical approval is in place for a randomised controlled trial to quantify the perceived benefits of ar in patient education. our belief is that ar leaflets will transform and redefine the future plastic surgery patient information landscape, empowering patients and bridging the health literacy gap. none.

dear sir, we investigated whether age has an influence on wound healing. wound healing can result in hypertrophic scars or keloids. from previous studies we know that age has an influence on the different stages of wound healing. a general assumption seems to be that adults make better scars than children. knowledge of the influence of age on healing and scarring can provide opportunities to intervene in the wound healing process to minimize scarring. it could guide patients in deciding when to revise a scar. it could also guide patients and physicians in the timing of surgery, if the kind of surgery allows this. this study is a retrospective cohort study at the department of plastic, reconstructive, and hand surgery of the amsterdam university medical center. all patients underwent cardiothoracic surgery through a median sternotomy incision. all patients had to be at least one year post-surgery at the time of investigation. hypertrophic scars were defined as raised mm above skin level while remaining within the borders of the original lesion. keloid scars were defined as raised mm above skin level and extending beyond the borders of the original lesion. the scars were scored with the patient and observer scar assessment scale (posas) as the primary outcome measure. as secondary outcome measures we looked at wound healing problems and scar measurements.
in order to ensure that the results of this study were influenced as little as possible by the known risk and protective factors for hypertrophic scarring, the patients were questioned about co-existing diseases, scar treatment, allergies, medication, height, weight, cup size (females) and smoking. their skin type was classified with the fitzpatrick scale (i to vi). all calculations were performed using spss, and the level of significance was set at p ≤ . . patients were enrolled in this study; one group contained the children and the other the adults. there was a significant difference between the two groups for the amount of pain in the scar scored by the patient, with this item given higher scores by adults than by children (p = . ). there was no significant difference between the two groups for the other patient-scored posas items (itchiness, color, stiffness, thickness, and irregularity), the total score of the scar, or the patient's overall opinion of the scar (table ). there was a significant difference between the two groups in the pliability of the scar scored by the observer: the posas item pliability was scored higher, thus stiffer, for the scars of children than of adults (p = . ). there was no significant difference between the two groups for the other observer-scored posas items (vascularization, pigmentation, thickness, relief, and surface), the total score of the scar, or the observer's overall opinion of the scar (table ). there was no significant difference between children and adults in the occurrence of wound problems post-surgery, nor in scar measurements. in children we found three hypertrophic scars and two keloid scars; in adults we found seven hypertrophic scars and three keloid scars. for both groups together, that is a percentage of . hypertrophic and keloid scars (table ). patients with fitzpatrick skin types i and iv-vi scored significantly higher, thus worse, in their overall opinion of the scar (p = . ) than patients with skin types ii and iii. observer and patient assessed the overall opinion of the scar as significantly higher (worse) in people who had experienced wound problems (p = . and p = . , respectively) than in those who had not. we found no significant differences in the primary outcome measure between men and women, cup size a-c and d-g, smokers and non-smokers, bmi < and bmi > , allergies and no allergies, or scar treatment and no scar treatment. age at creation of a sternotomy wound does not seem to influence the scar outcome. this is contrary to what is often the fear of the parents of a child who needs surgery early in life. comparing scars remains difficult because of the many factors that can influence scar formation. we found that scars have a tendency to change, even years after they are made. a limitation of the study is its retrospective design; the long follow-up period after surgery is a strength. to the best of our knowledge, this is the first study that compares the scars of children and adults to look specifically at the clinical impact of age on scar tissue. in order to detect more reliable and possibly significant differences between children and adults, more patients should be enrolled in future prospective studies. for now we can conclude that there is no significant difference in the actual scar outcome between children and adults for the sternotomy scar. if we extend these results to other scars, the timing of surgery should not depend on the age of the patient. none. none. metc
reference number: w _ # . .

we published a systematic review of randomized controlled trials (rcts) on early laser intervention to reduce scar formation in wound healing by primary intention. while comparing our results with two other systematic reviews on the same topic, we identified various overt methodological inconsistencies in those other systematic reviews. issue 1: inclusion of duplicate data (table ). karmisholt et al. included two rcts which both reported identical data on the same five people. the inclusion of duplicate data can bias the results of a systematic review and should be prevented in the quantitative as well as the qualitative synthesis of evidence. (table legend. abbreviations: id, identity; n.l.t., no laser treatment; pcs, prospective cohort study; pmid, pubmed identifier; rct, randomized controlled trial. the table lists the rcts included by at least one of the three identified systematic reviews, ordered by search date, where "search date" refers to the authors' searching of bibliographic databases and "publication date" to the publication history status according to the medline®/pubmed® data element (field) descriptions. "n.l.t." marks rcts comparing laser treatment with no treatment or a treatment without laser; "other laser" marks rcts comparing various types of laser treatment; "pcs" means the review authors used this term to label the corresponding rct. "-" indicates an rct published after the search date and therefore not identifiable; "missing study" indicates an rct published before the search date that could have been identified; "excluded" means the authors of the present review excluded the corresponding rct based on the exclusion criteria provided; "not analyzed" means an rct was reported within an article but the corresponding data were not included in the meta-analysis.) issue 2: inappropriate labelling of study designs. karmisholt et al. attached the label "prospective cohort" to almost all considered studies, including rcts and seven nonrandomized studies. in rcts, subjects are allocated to different interventions by the investigator based on a random allocation mechanism; in cohort studies, subjects are not allocated by the investigator but rather in the course of usual treatment decisions or people's choices, based on a nonrandom allocation mechanism. we believe that 'cohort study' is certainly not an appropriate label for rcts. furthermore, it has long been known that the shorthand labeling of a study using the words 'prospective' and 'retrospective' may create confusion, because these words carry contradictory and overlapping meanings. issue 3: mixing data from various study designs. karmisholt et al. did not clearly separate randomized from nonrandomized studies. results from different combinations of study design features should be expected to differ systematically, and different design features should be analyzed separately. issue 4: unclear definition of outcomes and measures of treatment effect. kent et al. reported, quote: "the primary outcome of the meta-analysis is the summed measure of overall efficacy provided by the pooling of overall treatment outcomes measured within individual studies." we think that the so-called "summed measure" is not defined and not understandable.
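for contrast, a defined and reproducible pooled effect measure looks like the following: a standardized mean difference (cohen's d) computed per study and combined with fixed-effect inverse-variance weights. this python sketch uses the standard formulas; the study values are invented placeholders, not data from the reviews under discussion.

```python
import math

# per-study summary statistics (invented placeholder values):
# (mean_laser, sd_laser, n_laser, mean_control, sd_control, n_control)
studies = [
    (2.1, 0.8, 15, 3.0, 0.9, 15),
    (4.5, 1.2, 20, 5.6, 1.1, 22),
    (1.3, 0.5, 12, 1.9, 0.6, 12),
]

def smd_and_variance(m1, s1, n1, m2, s2, n2):
    """standardized mean difference (cohen's d) and its approximate variance."""
    s_pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / s_pooled
    var = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))
    return d, var

# fixed-effect inverse-variance pooling: weight each study by 1/variance
effects = [smd_and_variance(*s) for s in studies]
weights = [1 / var for _, var in effects]
pooled = sum(w * d for w, (d, _) in zip(weights, effects)) / sum(weights)
se = math.sqrt(1 / sum(weights))
print(f"pooled smd = {pooled:.2f} "
      f"(95% ci {pooled - 1.96 * se:.2f} to {pooled + 1.96 * se:.2f})")
```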
the meta-analysis reported in that article included mean and standard deviation values from four rcts. these rcts applied endpoints and time periods for assessment which differed considerably among the included studies. it remains obscure to us which data were transformed, and in what way, to finally arrive in the meta-analysis. we believe that traceability and reproducibility of data analyses are mainstays of systematic reviews. issue 5: missing an understandable risk of bias assessment. kent et al. reported, quote: "the risk of bias assessment tool provided by revman indicated that all studies had - categories of bias assessed as high risk." the term "revman" is shorthand for the software "review manager", provided by cochrane for preparing its reviews. the cochrane risk-of-bias tool for randomized trials is structured into a fixed set of bias domains, including bias arising from the randomization process, due to deviations from intended interventions, due to missing outcome data, in measurement of the outcome, and in selection of the reported result. we believe that the risk of bias assessment reported by kent et al. is not readily understandable and presumably does not match standard requirements. systematic reviews of healthcare interventions aim to evaluate the quality of clinical studies, but they might have quality issues in their own right. the identification of various inconsistencies in two systematic reviews on platelet-rich plasma therapy for pattern hair loss should prompt future authors to consult the cochrane handbook (https://training.cochrane.org/handbook) and the equator network (http://www.equator-network.org/). the latter provides information on various reporting standards, such as prisma for systematic reviews, consort for rcts, and strobe for observational studies. the authors declare no conflict of interest.

dear sir, journal clubs have contributed to medical education since the 19th century. along the way, different models and refinements have been proposed. recently, there has been a shift towards "virtual" journal clubs, often using social media platforms. our team has refined the face-to-face journal club model and successfully deployed it at two independent uk national health service (nhs) trusts. we believe there are reproducible advantages to this model. over months at one nhs trust, journal club events were held, with iterative changes made to increase engagement and buy-in from the surgical team. overall, tangible outputs included submissions of letters to editors, of which have been accepted. following this, the refined model was deployed at a second nhs trust, where expanded academic support increased its impact. over months, journal club events were held, with submissions of letters to editors, of which have been accepted. thus, in months, the two sequential journal clubs generated submissions for publication, with different authors. these tangible outputs are matched by other intangible benefits, such as improving critical appraisal skills. this is assessed at uk surgical training entry selection and is also a key skill for evidence-based professional practice. therefore, we feel this helps our team members' career progression and clinical effectiveness. key aspects of the model include the following. face-to-face meetings continue to have multiple intangible benefits: there is a trend towards social media and online journal clubs.
while such initiatives have considerable benefits, maintaining face-to-face contact in a department allows for efficient discussion and enhances team building. instead of replacing face-to-face meetings with virtual ones, we use social media platforms, such as whatsapp, to support our events. this includes communications to arrange the event in advance and to maintain momentum on post-event activities, such as authoring letters to journals arising from the discussion. while some articles describing journal club models highlight the benefit of expert input in article selection, we also view selection as a learning opportunity. a surgical trainee is allocated to present each journal club, with one of our three academically appointed consultant surgeons chairing and overseeing. trainees are encouraged to screen the literature and identify articles beforehand, and to make a shared decision with the consultant. the article must be topical and have the potential to impact clinical practice. doing this prior to the session allows the article to be circulated to attendees with adequate time to read it. we routinely use both reporting guidelines (e.g., prisma for systematic reviews) and methodological quality guidance (e.g., amstar-2 for systematic reviews) to guide trainees and structure the journal club presentation. in addition to three consultants with university appointments guiding critical appraisal, a locally based information scientist also joins our meetings. during journal club discussion, emphasis is placed on relating the article to the clinical experience of team members. this provides context and aids clinical learning for trainees. while undertaking critical appraisal may be a noble endeavour, in busy schedules it is important that it adds value for everyone involved. reviewing contemporary topics can inform clinical practice for all levels of surgeon in the team, presenting the article improves trainees' presentation skills, and publishing the appraisal generates outputs that help trainees to progress. publishing summaries of journal club appraisals can also have an impact on multiple levels: journal club does not only contribute to our trainees' development and departmental clinical practice; it benefits our own research strategy and quality, and open discussion of the literature in plastic surgery contributes to a global culture of improving evidence. scheduling events on a regular basis increases familiarity with reporting and quality guidance and allows for the study of complementary article types (e.g., systematic review, randomised trial, cohort study). our iterations suggest that the following structure is most effective: joint article selection one week before the event; dissemination to the audience; a set time and location during departmental teaching; chairing by an academic consultant, with an information scientist and senior surgeons present; presentation led by a surgical trainee; open-floor discussion of the article and its implications for our own practice; summary; and drafting of a letter to the editor if appropriate. as we have used variations of this model successfully at two independent nhs trusts, we believe that these tactics can be readily adapted and deployed by others as well. nil.

dear sir, surgical ablation of advanced scalp malignancies requires wide local excision of the lesion, including segmental craniectomies.
the free latissimus dorsi (ld) flap is a popular choice for scalp reconstruction due to its potential for resurfacing a large surface area, its ability to conform to the natural convexity of the scalp, its reliable vascularity and its reasonable pedicle length. one of the disadvantages of ld free flap use is the perceived need for harvest in a lateral position. this necessitates a change in the patient's position intraoperatively for flap raise and can add to the overall operative time. the current literature on microvascular procedures in the elderly demonstrates that a longer operative time is the only predictive factor associated with an increased frequency of post-operative medical and surgical morbidity. as most patients undergoing scalp malignancy resection are elderly, it is important to reduce surgical time in this cohort of patients. we present our experience of reconstruction of composite cranial defects with ld flaps using synchronous tumour resection and flap harvest with a supine approach, to reduce operative times and potential morbidity. all patients undergoing segmental craniectomies with prosthetic replacement and ld reconstruction under the care of the senior surgeons were included in the study. patients are positioned supine with a head ring to support the neck; a sandbag is placed between the scapulae, and the arm on the chosen side of flap raise is free draped. a curvilinear incision is made posterior to the midaxillary line (figure ). the lateral border of the ld muscle is identified, and dissection continued in a subcutaneous plane inferiorly, superiorly and medially until the midline is approached. the muscle is divided at the inferior and medial borders, and the flap lifted towards the pedicle. once the pedicle is identified, the assistant can manipulate the position of the free-draped arm to aid access into the axilla; the pedicle is clipped once adequate length has been obtained. the flap is delivered through the wound and detached (figure ). donor site closure is carried out conventionally. the flap inset is performed using a "vest over pants" technique, placing scalp over muscle by undermining the remaining scalp edges. a non-meshed skin graft is used to enhance the aesthetic outcome. a total of patients underwent free ld muscle flaps. all were muscle flaps combined with split-thickness skin grafts. the study population included ten male patients and one female. the age range was - years, with a mean age of . years. the defect area ranged from cm² to cm². a titanium mesh was utilised for dural cover in all patients, fixed with self-drilling × . mm cortical screws. the primary recipient vessels used were the superficial temporal artery and vein; however, in cases where a simultaneous neck dissection and parotidectomy were necessary for regional disease, the facial artery and vein (n = in this series) or the contralateral superficial temporal vessels were used. the ischaemia time ranged from - min, with a mean of . min. there were no take-backs for flap re-exploration. the overall flap success rate was %. marginal flap necrosis with secondary infection occurred in one patient with a massive defect (at one week post-op). the area was debrided and a second ld flap was used to cover the resultant defect ( %). a further posterior transposition flap was used to cover a minor area of exposed mesh. the scalp healed completely. the total operating time ranged between - min, with a mean of min. all patients were followed up at and then four weeks for wound checks.
the ld flap remains a popular choice due to its superior size and ability to conform to the natural convexity of the scalp compared with other flap choices. also, unlike composite flaps, which often require postoperative debulking procedures, the ld muscle flap atrophies and contours favourably to the skull. however, the traditional means of access to this flap requires lateral decubitus positioning of the patient, which can hinder simultaneous oncological resection. the supine position facilitates access for neck dissection, especially if bilateral access is required. our approach ensures that tumour ablation and reconstruction are carried out in a time-efficient manner, in an attempt to reduce postoperative medical and surgical complications. synchronous ablation and reconstruction are key to reducing overall operative time and complication risk and are practised preferentially at our institute. it is important to maintain a degree of flexibility to achieve this: there may be situations where supine positioning overall is more favourable; likewise, there are situations relating to flap topography where a lateral approach to tumour removal and reconstruction is preferred. the resecting surgeon or reconstructive surgeon may have to compromise to achieve synchronous operating, but this is worthwhile to reduce the overall total operative time. none. not required.

once established, lymphorrhea typically persists and can present as an external lymphatic fistula. lymphorrhea occurs in limbs with severe lymphedema, as a complication after lymphatic damage, and in obese patients. some cases are refractory to conservative treatment and require surgical intervention to reconstruct a lymphatic drainage route. (table : three patients had primary lymphedema, had age-related lymphedema, had obesity-related lymphedema, and had iatrogenic lymphorrhea. in the cases of iatrogenic lymphorrhea, the lesions were located in the groin, and the others in the lower leg. abbreviations: bmi, body mass index; f, female; m, male.) three patients had primary lymphedema, four had age-related lymphedema (aging of the lymphatic system and function is thought to be the cause of age-related lymphedema), three had obesity-related lymphedema, and two had iatrogenic lymphorrhea (table ). one of the cases of lymphorrhea in the inguinal region was caused by lymph node biopsy and the other by revascularization after resection of a malignant soft tissue sarcoma. compression therapy had been performed preoperatively in cases (using cotton elastic bandages in cases). four patients wore a jobst® compression garment. compression therapy was difficult to apply in patients. the duration of lymphorrhea ranged from to months. the severity of lymphedema ranged from campisi stage to (table ). the clinical diagnosis of lymphorrhea was confirmed by observation of fluorescent discharge from the wound on lymphography. no signs of venous insufficiency or hypertension were observed in the subcutaneous vein intraoperatively. all anastomoses were performed between distal lymphatics and proximal veins. postoperatively, lymph was observed to be flowing from the lymphatic vessels to the veins. two to lvas were performed in the region distal to the lymphorrhea and - in the region proximal to the lymphorrhea in patients with lower limb involvement. six lvas were performed in the patients with lymphorrhea in the inguinal region (table ). all patients were successfully treated with lvas, without perioperative complications.
the volume of lymphorrhea decreased within days following the lva surgery in all cases and had resolved by weeks postoperatively. the compression therapy used preoperatively was continued postoperatively. there has been no recurrence of lymphorrhea or cellulitis since the lvas were performed. an -year-old woman had gradually developed edema in her lower limbs over a period of - years. she had also developed erosions on both lower legs (figure ). compression with cotton bandages failed to stop the percutaneous discharge; about ml of lymphatic discharge through the erosions was noted each day. ultrasonography did not suggest a venous ulcer resulting from venous thrombosis, varix, or reflux. four lvas were performed in each leg ( distal and proximal to the leak). the lymphorrhea had mostly resolved by days postoperatively. the erosions healed within weeks of the surgery. no recurrence of lymphorrhea was noted during months of follow-up. iatrogenic lymphorrhea occurs after surgical intervention involving the lymphatic system. it is also known to occur in patients with severe lymphedema. obesity and advancing age are also risk factors for lymphedema. most patients with lymphorrhea respond to conservative measures, but some require surgical treatment. patients with lymphorrhea are at increased risk of lymphedema. lymphorrhea that occurs after surgery or trauma is caused by damage to lymphatic vessels large enough to produce a persistent leak. lymphorrhea that occurs in association with lipedema or age-related lymphedema indicates an accumulation of lymph that has progressed to leakage. it is possible to treat lymphorrhea by other methods, including macroscopic ligation, compression, or negative pressure wound therapy; however, it is impossible to reconstruct a lymphatic drainage route using these procedures. we hypothesized that lymphorrhea can be managed by using lva to treat the lymphedema. lva is a microsurgical technique whereby an operating microscope is used to perform microscopic anastomoses between lymphatic vessels and veins to re-establish a lymph drainage route. the primary benefits of lva are that it is minimally invasive and can be performed under local anesthesia through incisions measuring - cm. one anastomosis is adequate to treat lymphorrhea and serves to divert the flow of the lymphorrhea-causing lymph to the venous circulation. if operative circumstances allow, or more anastomoses are recommended for the treatment of lymphorrhea complicated by lymphedema. lymphedema is a cause of delayed wound healing, and lva procedures are considered to improve wound healing in lymphedema via pathophysiologic and immunologic mechanisms. lva is a promising treatment for lymphorrhea because it can treat both lymphorrhea and lymphedema simultaneously. the focus when treating lymphedema has now shifted to risk reduction and prevention, so it is important to consider the risk of lymphedema when treating lymphorrhea. none.

over-meshing a meshed skin graft: we were curious to learn whether it is feasible to mesh already-meshed skin grafts. we run our skin bank at the department of plastic surgery and used allograft skin that had tested microbiologically positive and was thus not suitable for patient use. grafts were cut into cm x . cm pieces, meshed to : using mesh carriers, and over-meshed with : . . we used two kinds of mesh carriers for the : . meshes. the meshed grafts were maximally expanded and measured again. the results were expressed as ratios (figure ). we found that over-meshing results in a . -fold increase in graft area regardless of the mesh carrier used. figure illustrates a close-up picture of the over-meshed graft, in which the small : incisions are still visible. in those undesirable "oh no, the graft is too small" or "the graft is too large" situations, this technique has its advantages. we have used an over-meshed graft on a skin graft harvest site (supplemental figure) with an acceptable outcome. it seems that the tiny extra incisions in the over-meshed skin graft do not detract from the aesthetic outcome of the : . mesh. what the clinical value of the tiny incisions is, we do not know, but we estimate it to be minimal, if anything. to the best of our knowledge, only one previous publication has addressed the over-meshing of skin grafts. henderson et al. showed, in porcine split-thickness skin grafts, that over-meshing resulted in an increase ratio of . , a bit larger than our results. taken together, the results point in the direction that meshing an already-meshed graft is feasible and does not destroy the architecture of the original or succeeding mesh. each author declares no financial conflicts of interest with regard to the data presented in this manuscript. supplementary material associated with this article can be found, in the online version, at doi: . /j.bjps. . . .

numerous autologous techniques for gluteal augmentation flaps have been described. in the well-known, currently employed technique for gluteal augmentation, it is noticeable that the added volume is unevenly distributed in the buttock. in fact, after a morphological analysis, it becomes clear that volume is added to the upper buttock at the expense of the lower buttock. according to wong's ideal buttock criteria, the most prominent posterior portion is fixed at the midpoint on the side view. additionally, mendieta et al. suggest that the ideal buttock needs equal volume in the four quadrants, with its point of maximum projection at the level of the pubic bone. we describe a technique of autologous gluteal augmentation using a para-sacral artery perforator propeller flap (psap). this new technique can fill all the quadrants vertically with a voluminous flap shaped like an anatomic gluteal implant. gluteal examination is done in the standing and prone positions. patients must have a body mass index of less than kg/m², an indication for body lift contouring surgery, gluteal ptosis with platypygia, and substantial steatomery on the lower back. a pinch test greater than cm is defined as substantial steatomery.

preoperative markings: the ten steps
a. standing position
- limits of the trunk: the median limit (mlt) and the vertical lateral limit (llt) of the trunk are marked.
- limits of the buttock: the inferior gluteal fold (igf) is drawn. the vertical lateral limit of the buttock (llb) is defined at the outer third between the mlt and the llt.
- lateral key points: points c and c' are located on the vertical lateral limits. point c is to cm below the iliac crest, depending on the type of underwear. point c' is determined by a strong inferior-tension pinch test performed from point c.
- perforator identification: perforators are located with a doppler probe ( mhz). this diagnostic tool is easy to access, non-invasive and, above all, reliable in the identification of perforating arteries, with a sensitivity and a positive predictive value of almost %. usually, one to three perforators are identified on each side and marked.
- design of the gluteal pocket: the shape is oval, with dimensions similar to those of the flaps.
the base is truncated and suspended from the lower resection line. the width of the pocket is one to two centimetres from the mlt laterally and two centimetres from the llt medially. the inferior border of the pocket is no more than two fingers' breadth above the igf; therefore, the pocket lies medially in the gluteal region.
- design of the flap: the flap is shaped like a "butterfly wing", with the long axis following a horizontal line. after a ° medial rotation, the flap has a shape similar to an anatomical gluteal prosthesis. the medial boundary is two fingers' breadth from the median limit of the buttock, and the width is defined by the two resection limits.
the patient is placed in a prone position, arm in abduction. the flap is harvested in a lateral-to-medial direction, first in a supra-fascial plane, then sub-fascially when approaching the llb. the dissection is complete when the rotation arc of the flap is free of restriction ( °- °); viewing or dissection of the perforators is usually not required. to create the pocket, custom undermining is done in the sub-fascial plane according to the markings. the flap is then rotated and positioned into the pocket. the superficial fascial system is closed with vicryl (ethicon), and the deep and superficial dermis are closed with a buried intradermal suture and a running subcutaneous suture with . monocryl (ethicon). a compressive garment (medical z lipo-panty elegance coolmax h model, ec/ -h) was worn postoperatively for one month (figure ).

rhinoplasty is one of the most common procedures in plastic surgery, and - % of patients undergo revision. dorsal asymmetry is the leading ( %) nasal flaw in secondary patients. careful management of the dorsum to achieve a smooth transition from radix to tip is necessary. camouflage techniques are well-known maneuvers for correcting dorsal irregularities. cartilage, fascia, cranial bone, and acellular dermal matrix have previously been used for this aim. bone dust is an orthotopic option, which is easily moldable into a paste. it is especially useful in closed rhinoplasty, where visual acuity on the dorsum is reduced. we introduce a new tool, a minimally invasive bone collector, as an effective and safe device for harvesting bone dust from the nasal bony pyramid to obtain camouflage on the dorsum while performing ostectomy simultaneously. patients were operated on for nasal deformity by the senior author (o.b.) with closed rhinoplasty between february and november . included patients were primary cases with standardized photos, complete medical records, and -year follow-up. written informed consent for the operation and for publishing their photographs was obtained, and the study was performed in accordance with the standards of the declaration of helsinki. the authors have no financial disclosure or conflict of interest to declare. patient data were obtained from rhinoplasty data sheets, and photographs were used for the analysis of nasal dorsum height, symmetry, and contour. physical examinations were carried out to detect irregularities. micross (geistlich pharma north america inc., princeton, new jersey) is a bone collector which allows easy harvest, especially in narrow areas. micross comes in a package containing a sterile disposable scraper. it is externally mm in diameter and has a cutting blade tip. a collection chamber allows harvesting a maximum of . cc of graft at once.
a sharp harvesting technique improves graft viability. the incisions for lateral osteotomies were used to introduce micross when the planned ostectomy site was the nasomaxillary buttress. an infracartilaginous incision was used when the desired ostectomy site was the dorsal cap or radix. bone dust was collected into the chamber with a rasping movement. the graft is mixed with blood during the harvest, which produces an easily moldable bone paste (the surgical technique is described in the video). after completion of the osteotomies and cartilaginous vault closure, the bone paste was placed on the bony dorsum at sites likely to show irregularities postoperatively. a nasal splint was used to maintain contour. the bone graft was not wrapped in any other graft. eighteen patients underwent primary closed rhinoplasty with -year follow-up. seventeen of the patients were female and one was male. harvesting sites were the nasomaxillary buttress in patients, the radix in patients, and the dorsal cap in patients. the total graft volume was between . and . cc per patient. the nasal dorsum height, symmetry, contour, and dorsal esthetic lines were evaluated using standardized preoperative and postoperative photographs. dorsal asymmetry, overcorrection of the dorsal height, and residual hump were not observed in of the patients (figures - ). only patient had a visible irregularity of the dorsum. physical examination revealed palpable irregularities in patients. none of the patients required surgical revision for residual or iatrogenic dorsum deformity. asymmetries and irregularities of the upper one-third of the nose lead to poor esthetic outcomes and secondary revision surgeries. to treat an open roof after hump resection, lateral osteotomies, spreader grafts, flaps and camouflage grafts are commonly used. warping, resorption and migration, visibility, limited volume, donor site morbidity, and the risk of infection are the main disadvantages of grafts. öreroğlu et al. have presented a technique using diced cartilage combined with bone dust and blood. taş has reported results with harvesting bone dust with a rasp and using this for dorsal camouflage. the disadvantages of harvesting with a rasp were the difficulty of collecting dust from the teeth of the rasp and the loss of a certain amount of graft material during the harvest. with micross, the harvested graft is collected in the chamber, so the risk of losing graft material is eliminated. the concept of replacing "like with like" tissue is important; the reconstruction of a bone gap can therefore be achieved successfully with bone grafts. to limit donor site morbidity, we prefer to harvest bone from the dorsal cap, which was preoperatively planned to be resected. the preference for the lateral osteotomy lines as the donor site facilitates the osteotomies by thinning the bone. the device allows us to harvest bone effectively under reduced surgical exposure. simultaneous harvest and ostectomy contribute to a reduced operative time. the operative cost is relatively low in comparison with alloplastic materials. in this series, we did not experience resorption, migration, visibility problems, or infection with the bone grafts. a new practical, safe, and efficient tool for rhinoplasty was introduced. the graft material was successfully used for smoothing the bony dorsum without any significant complications. none. not required. the authors have no financial disclosure or conflict of interest to declare in relation to the content of this article. no funding was received for this article.
the work is attributed to ozan bitik, m.d. (private practice of plastic, reconstructive and aesthetic surgery in ankara, turkey).

dear sir, early diagnosis of wound infection is crucial, as infections have been shown to increase patient morbidity and mortality. hence, it is important that such infections are detected early to guide decision-making and management. currently, the most common methods of identifying wound infection are clinical assessment and semi-quantitative analysis using wound swabs. bedside assessment is subjective, and it has been shown that bacterial infection can often occur without any clinical features. on the other hand, swabs have the disadvantages of missing relevant bacterial infection at the periphery of the wound, due to the sampling technique, as well as delaying diagnostic confirmation, which may lead to a change in the bioburden of the wound. although tissue biopsy is the gold-standard diagnostic tool, it is seldom used, as it is invasive, has higher technical requirements and is also more expensive. a hand-held, portable point-of-care fluorescence imaging device (moleculight i:x imaging device, moleculight, toronto, canada) was introduced to address the limitations of the other diagnostic methods. this device takes advantage of the fluorescent properties of certain by-products of bacterial metabolism, such as porphyrins and pyoverdine. when excited by violet light (wavelength nm), porphyrins emit a red fluorescence, whereas pyoverdine has a cyan/blue fluorescence. the types of bacteria that produce porphyrins include s. aureus, e. coli, coagulase-negative staphylococci, beta-hemolytic streptococci and others, whereas pyoverdine, which emits cyan fluorescence, is specific to pseudomonas aeruginosa. this allows users to localise areas of bacterial colonisation at loads ≥ amongst healthy tissue, which instead emits green fluorescence. the benefits of this device are that it is portable, non-contact (minimising cross-contamination) and non-invasive, and that it provides real-time localization of bacterial infection. all these features make it a useful tool to aid diagnosis and guide further investigation and management. many previous studies have examined the efficacy of autofluorescence imaging in diagnosing infections in chronic wounds. however, equally important is identifying infections in acute wounds, which will help guide antimicrobial management as well as surgical debridement. often, broad-spectrum antibiotics are given where clinical assessment remains inconclusive; this, however, may lead to an increase in antimicrobial resistance. therefore, the use of the moleculight i:x to identify infections in acute open wounds in hand trauma was evaluated. we collected data from patients who attended the hand trauma unit over a -week period, prior to irrigation and/or debridement. wounds were inspected for clinical signs of infection, and autofluorescence images were taken using the moleculight i:x device. wound swabs were taken, and the results of these were interpreted according to the report by the microbiologist. autofluorescence images were interpreted by a clinician blinded to the microbiology results. patients were included, and data were collected from wounds. wounds ( . %) showed positive clinical signs of infection, ( . %) were positive on autofluorescence imaging, and ( . %) of wound swab samples were positive for significant infection. autofluorescence imaging correlated with clinical signs and wound swab results for wounds ( . %).
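the agreement figures above reduce to a simple 2x2 comparison of imaging against the swab reference standard. a minimal python sketch of that calculation follows; the cell counts are placeholders, as the actual counts are not recoverable from this text.

```python
# minimal sketch of the agreement statistics implied above: a 2x2
# comparison of autofluorescence imaging against wound swab results.
# the counts below are illustrative placeholders, not study data.
true_positive = 2    # imaging positive, swab positive
false_positive = 1   # imaging positive, swab negative
false_negative = 0   # imaging negative, swab positive
true_negative = 17   # imaging negative, swab negative

total = true_positive + false_positive + false_negative + true_negative

sensitivity = true_positive / (true_positive + false_negative)
specificity = true_negative / (true_negative + false_positive)
agreement = (true_positive + true_negative) / total

print(f"sensitivity = {sensitivity:.0%}, specificity = {specificity:.0%}, "
      f"overall agreement = {agreement:.0%}")
```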
in one case, the clinical assessment and autofluorescence imaging showed positive signs of infection but the wound swabs were negative. to the best of our knowledge, this is the first time the use of autofluorescence imaging in an acute scenario has been investigated. in this study, of the wound swab samples that were positive, autofluorescence imaging correctly identified both ( %) ( fig. ). one of the autofluorescence images, which showed red fluorescence on the wound and which was clinically identified as infected, showed growth of usual regional flora on microbiological studies. the reason behind this could be the method of sampling from the centre of the wound: on the autofluorescence image, the areas of significant bacterial growth were on the edges of the wound ( fig. ). this example illustrates the potential of using autofluorescence imaging to guide more accurate wound sampling. this has also been shown in a non-randomised clinical trial performed by ottolino-perry et al. from a surgeon's perspective, autofluorescence imaging can guide surgical debridement by providing real-time information on the infected areas of the wound. furthermore, because of its portability, this device can also be used in intra-operative scenarios to provide evidence of sufficient debridement. although easy to use, the requirement for a dark environment causes a logistical problem. the manufacturers have recognised that this is a limitation of the device and have created a single-use black polyethene drape called "darkdrape", which connects to the moleculight i:x using an adapter to provide optimal conditions for fluorescence imaging. while autofluorescence imaging can help clinicians to decide whether or not to start antibiotics, it does not provide any information on the sensitivities of the bacteria. another limitation of autofluorescence imaging we encountered in our study is the difficulty of imaging acutely bleeding wounds, where blood shows up as black on fluorescence and may therefore mask any underlying infection. in conclusion, autofluorescence imaging in acute open wounds may be useful to provide real-time confirmation of wound infection and therefore guide management. none declared. none received. supplementary material associated with this article can be found, in the online version, at doi: . /j.bjps. . . .

when compared with the two previously published studies, publication rates have improved from and have not continued to decline. interestingly, the number of publications in jpras has fallen. this may be explained by a rise in the impact factor of the journal, increasing competitiveness for publication, as well as an expansion in the number of surgical journals. we observed that the journal impact factor for free paper publications was significantly greater, which likely reflects the stringency of the bapras abstract vetting process. comparison with other specialties is inherently difficult, primarily due to differences in study design and inclusion criteria. exclusion of posters, inclusion of abstracts published prior to presentation, and studies not referenced in pubmed affect the reported publication rates. a large meta-analysis assessing publication of abstracts reported rates of %. rates from other specialties are shown in figure . although our figures of close to % may seemingly rank low versus other specialties, including abstracts published prior to presentation would increase the publication rate to %, therefore making it more comparable.
however, this would not be a direct comparison with the two previous bapras studies. one may argue that the academic value of a meeting should be judged on its abstract publication ratio. however, the definition of a publication is itself clouded, with an increasing number of journals not referenced in the previous 'gold standard' of pubmed, including a number of open access journals. most would still argue the importance of stringent peer review as the hallmark of a valuable publication, and perhaps this, along with citability, should remain the benchmark. in an age where publications are key components of national selection, and indeed of lifelong progression in many specialties, we must ensure that some element of quality control remains so as not to dilute the production of meaningful data. we have been able to reassess the publication rates for the primary meeting of uk plastic surgery. the bapras meeting remains a high-quality conference providing a platform to access the latest advances in the field. significant differences in the methodology of the available literature make comparisons with other specialties challenging; however, when these are accounted for, publication rates are similar. within a wider context, with the increase in open access journals, it has become ever more difficult to define a 'publication'. if publication rate is to be used as a surrogate for meeting quality, then only abstracts published after the date of the meeting should be included. in order to continually assess the quality of papers presented at bapras meetings, the conversion to publication should be regularly re-audited. none.

dear sir, global environmental impact and sustainability have been heated topics in recent years. plastics and single-use items are widely, and perhaps unnecessarily, used in the healthcare sector. various recent articles discuss the negative impacts of this in the surgical world, but can we look at nhs sustainability as a bigger picture? whilst it is a positive step to be considering how we can reduce the environmental impact of modern operating practice, this focus risks falling into the trap of being overly narrow and not taking a holistic view of how the health service as a whole can become more environmentally focused and reduce costs. in fact, the operating theatre is one of the more difficult places to make change. single-use medical devices seem an obvious item to replace with a more environmentally friendly re-usable alternative, but what about patient safety? such a change would require the implementation of new workflows and supervision structures to make sure patient safety is maintained. these take time to create, will meet resistance in their design and implementation, and may not ultimately be adopted. in order to overcome these challenges, we must take a holistic view of the hospital environment; doing this reveals numerous opportunities for improvement with minimal impact on patient safety. the nhs incurs significant waste through using energy unnecessarily. some examples are readily visible after working in a hospital for just a few weeks: computers are left on standby through the night and at weekends; lights are left on throughout the night; and empty rooms are heated or cooled when left unoccupied. other sources of energy waste are less visible, but it is likely that some machinery (particularly air conditioning units) would show a rapid return on investment through energy savings if replaced on a more regular basis.
in the past, saving energy would have required a sustained campaign to educate staff, and would still have been subject to the vagaries of human management (forgetting to switch the heating off on a friday night could lead to more than two days of wasted energy if not revisited until monday). today, solutions based on internet of things (iot) technology can use sensors to monitor the environment and take action to reduce consumption. with the use of ai and machine learning, these systems are becoming so advanced that they can even monitor and anticipate energy usage, allowing rooms to be heated or cooled at times which mean that, when staff arrive, the relevant room is at the ideal temperature. the nhs is starting to use such technology, with wigan hospital the first example to install intelligent lighting. adoption should not be limited to lighting, however, and the nhs needs to adopt best practice from the commercial sector. for example, sensorflow, based in singapore, provides an intelligent system that optimises cooling/heating costs for hotels around south east asia, saving the operators up to % in energy costs. without doubt, these systems can also be applied to hospital infrastructure and can help the nhs further reduce energy consumption. in addition to reducing energy consumption, the reduction of single-use plastics has become a key focus in recent years, and the nhs has started to address this issue. at least million single-use plastic items were purchased by the nhs last year. the target to phase out plastic items used by retailers in the next months is laudable; however, there is also a significant amount of disposable plastic used in staff coffee rooms and hospital canteens. getting rid of such items completely and encouraging staff to use reusable coffee cups and metal cutlery can potentially compound the cost-saving and environmental benefits. the nhs has established an early leadership position in tackling environmental challenges - the first european intelligent lighting installation and ambitious targets to cut disposable plastic items - but more needs to be done. to maximise impact, the nhs needs to be seen as a whole (not by department), with the most senior executives in the health service driving national level change.

we read with interest the recent article 'healthcare sustainability - the bigger picture'. the wider picture of the nhs environmental impact and sustainability clearly needs to be addressed. however, large-scale improvement projects to hospital buildings, such as intelligent lighting and heating systems, are likely to require huge investment in infrastructure and modernisation that the nhs in its current form is unfortunately unlikely to be able to make. we believe that the field of medical academia should similarly be contributing to environmental sustainability. firstly, the shelves of hospital libraries and offices internationally are lined with print copies of journals. we reviewed the surgical journals with the highest impact factors and found that all were still offering the option of a subscription to print copies, with of these printing monthly issues. consumers are able to access all journals electronically through institutional subscriptions or via the nhs openathens platform, which in our view is a more time-efficient way to search for articles, read them and reference them. as such, we commend jpras for their recent move to online-only publication.
additionally, the increasing use of social media to discuss research, and the creation of visual abstracts for articles to encourage readership, are likely to encourage this shift further. secondly, the environmental impact of the current academic conferencing culture must be addressed. by the end of training, a uk surgical trainee spends an average of £ attending academic conferences; but beyond this personal expenditure, what is the environmental cost? for each conference we attend, the printing of poster presentations, conference programmes and certificates detrimentally impacts our environment. furthermore, consider the conference sponsor bags we receive, filled with further printed material, plastic keyrings, stress-balls and disposable pens, all contributing to the build-up of plastic in our oceans. conferences such as the british association of plastic and reconstructive surgeons scientific meeting have now started using electronic poster submissions, with presentations being shown consecutively on large television screens - but further measures are possible. a well-designed conference smartphone app forgoes the need for printed programmes and leaflet advertising from sponsors, and could include measures to reduce the carbon footprint, such as promotion of ride-share options for venue travel. the concept of virtual conferences has also been explored. organisers of an international biology meeting recently asked psychologists to assess the success of a parallel virtual meeting, with satellite groups organising local social events afterwards. more than % of the delegates joined online and there was an overall % increase in those attending the conference; a full analysis of the success of this approach to conferences is awaited. virtual conferences may enable delegates to sign in from multiple time zones and minimise travel, disruption of clinical commitments and time away from family. this option is being pursued by the reconstructive surgery trials network (rstn) in the uk, whereby the annual scientific meeting will be delivered using teleconferencing technology at four research-active hubs across the uk, substantially reducing delegate travel and, in turn, the conference's carbon footprint. there is a clear but unmeasurable benefit of networking face-to-face for the formation of personal connections, the exchange of knowledge and opportunities for collaboration. the use of social media, instant messaging applications and modern teleconferencing technology is vital to retain this valuable aspect of academic conferencing. equally, perhaps there is a balance to be found, with societies currently holding biannual meetings moving to include one virtual meeting, or running a parallel virtual event for those travelling long distances. the academic community must play a role in environmental sustainability by reducing the carbon footprint of our journals and conferences. jcrw is funded by the national institute for health research (nihr) as an academic clinical fellow. none for completion of submission. none.

we read with interest the study by sacher et al., who compared body mass index (bmi) and abdominal wall thickness (awt) with the diameter of the respective diea perforator and siea. they found that there was a significant ( p < . ) positive correlation between these variables, concluding that this association may mitigate the increased perioperative risk seen in patients with high bmi. their findings disagree with a previous smaller study by scott et al.
reconstruction in the high-bmi patient group can be challenging and is associated with higher complication rates. despite this, satisfaction with autologous reconstruction appears similar across bmi categories. as the authors discuss, perfusion, as a function of perforator diameter, is of key relevance to the safety of performing autologous breast reconstruction in patients with higher bmi. larger perforator sizes relative to total flap weight have been suggested to reduce the risk of post-operative flap skin or fat necrosis. while this is likely an oversimplification, as flap survival will also depend on multiple factors including the perforator row compared with the abdominal zones harvested, it does suggest that if the high-bmi patient group has reliably larger perforators then their risk profile may be reduced. however, we suggest caution regarding reliance on the correlation they found between bmi or awt and perforator size when planning free tissue transfer. while they demonstrate p-values suggesting correlation between bmi or awt and perforator diameter, the r (correlation coefficient) values that they determined through pearson correlation analysis are low, ranging from . to . . the resulting r² (coefficient of determination) values, obtained by squaring r, are therefore in the range . - . , suggesting that only . - % of the variation in perforator diameter can be related to bmi or awt. it is therefore likely that other variables, such as height and historical abdominal wall thickness, that were not accounted for in the correlation analysis also play roles in determining perforator size, in addition to anatomical variation. in addition, their analysis and results depend on a linear relationship between the variables, which may not be the case. therefore, although the authors demonstrate a correlation between abdominal wall thickness and perforator size, there is substantial variation between individual patients, and so this relationship cannot be relied upon when planning autologous reconstruction.

we read with interest pescarini et al.'s article entitled 'the diagnostic effectiveness of dermoscopy performed by plastic surgery registrars trained in melanoma diagnosis'. the article is of great interest in highlighting the potential of plastic surgery registrar training in domains such as dermoscopy, especially for those trainees looking to specialise in skin cancer. training in these experiential skill domains is essential to building a diagnostic framework, and the diagnostic accuracy comparable to that of dermatologists reflects this. it would be of great benefit to understand further how diagnostic accuracy evolves along the inevitable learning curve experienced using the dermoscope. pescarini et al. comment briefly on the method of training, but we believe the timeline is key, as are mentorship and regular appraisal. terushkin et al. found that, for the first year of dermoscopy training, benign-to-malignant ratios in fact increased in trainee dermatologists before going on to decrease, potentially secondary to picking up more anomalies without yet having the skill set to determine whether these are benign or not. there is no reason to suggest that plastic surgery trainees' learning curves should differ significantly. this of course would skew the data presented in terms of accuracy at the end of the three-year study period. more helpful would be a demonstration of how accuracy changes with time and experience, as one would expect, and of course how these rates compare to those of dermatologists.
this would have implications for training programmes where specific numbers of skin lesions, or defined timeframes for skin exposure during training, are set as benchmarks for qualification. this is particularly pertinent for uk trainees; the nice guidelines for melanoma state that dermoscopy should be undertaken for pigmented lesions by 'healthcare professionals trained in this technique'. understanding the number of lesions that trainee plastic surgeons have to assess with a dermoscope before their diagnostic accuracy improves - or the time needed to achieve that accuracy - might be a key factor for placement duration and the numbers required for trainees to become consciously competent dermoscopic practitioners. reproducible training programmes in this regard are therefore vital. it must be pointed out that the role of the dermoscope for plastic surgeons is likely to be narrower than for our dermatological colleagues. within the uk, the role of the plastic surgeon is primarily reconstructive, with some subspecialty involvement in the diagnosis of melanomas and a range of non-melanomatous skin cancers and skin lesions. for plastic surgeons, the dermoscope is primarily a weapon in the diagnosis of in-situ or early melanoma where diagnostic certainty is unclear following a referral for consideration of surgical removal. where doubt remains over a naevus, surgical excision is still the normal safe default. dermatologists use dermoscopes for a broad range of diagnostic purposes on a wide variety of skin conditions. the familiarity and expertise with this instrument that they garner is therefore not surprising. we must be clear, in resource-limited healthcare systems, about what our specific roles are as plastic surgeons and how the burden of patient assessment is shared, in order to appropriately deploy our skills within the context of a broader multidisciplinary framework. accuracy with the dermoscope is essential to safely treating patients in a binary fashion - should the lesion be removed or monitored? comparison with dermatological expertise is helpful as a guide, and dermoscopy has an important diagnostic role for plastic surgeons, but we should not strive to be equivalent in skills to dermatologists with dermoscopes at the expense of the development of vital surgical reconstructive skills and excellence throughout plastic surgery training.

response to the comment made on the article "the diagnostic effectiveness of dermoscopy performed by plastic surgery registrars trained in melanoma diagnosis": we strongly agree on the benefit of understanding the learning curve experienced by plastic surgery registrars using the dermoscope. as stated in our article, the limitation of our study is its retrospective nature. moreover, the training and the level of competence differed between the three registrars. at the beginning of the data collection, two of them were in their third year of specialist training and had been using the dermoscope for at least one year, while the other was in his first year. all the registrars attended specific but different dermoscopy courses, and all of them completed a h on-site training with a competent consultant. for this reason, the expertise partially differed among the three registrars. nevertheless, we believe a -year period should be long enough to homogeneously estimate their accuracy in the diagnosis of melanoma. in fact, townley et al.
demonstrated that attendance at the first international dermoscopy course for plastic surgeons, oxford, improved the accuracy of diagnosing malignant skin lesions by dermoscopy compared with naked-eye examination. we believe a well-planned prospective study would be of great benefit in terms of planning a reproducible, plastic surgery-oriented dermoscopy training programme. this could help to estimate when a clinician can be considered a competent dermoscopic practitioner. it should be underlined that learning how to use the dermoscope is not something that can be done from time to time; it needs effort and self-study. we believe it is important to properly plan formal training in dermoscopy for all the plastic surgery registrars who will use this tool in their practice. vahedi et al. stated, as per their survey, that only one of the % of plastic surgery trainees who used the dermoscope in their practice had formal training. as all trainees perform outpatient appointments dealing with skin lesions, and especially for trainees looking to specialise in skin cancer, we believe the expertise gained through specific courses and training is not at the expense of the development of surgical reconstructive skills; instead, it can lead to improvement in performing outpatient appointments. proper use of the dermoscope will make the skin cancer-specialised plastic surgeon more confident and trustworthy, if not in detecting melanoma, at least in leaving evidently benign lesions alone. always keeping in mind a multidisciplinary approach, close cooperation between dermatologists and plastic surgeons is of paramount importance in skin cancer treatment. there is no conflict of interest for any of the authors.

dear sir, as the author mentioned in this publication, the correction of the infra-orbital groove by microfat injection did increase postoperative satisfaction with lower blepharoplasty surgery. in this study, we want to explore whether this procedure can replace the previous fat pad transposition. months after microfat injection, we have observed that fat continues to be present but its volume gradually decreases and, in some patients, totally vanishes. with fat pad transposition, the fat volume does not decrease; it seems that both have their advantages and disadvantages, because the volume of transplanted fat after lower blepharoplasty might disappear gradually with time. survival of fat transposed through fat pad transposition is the best, creating a more natural look at the tear trough; however, the volume of augmentation might not be enough. it would be exceptional if we could combine both advantages, that is, to administer microfat injection after fat transposition. but prior to that, we would like to share the experience of the author. the fat pad is usually transposed to the periosteum in two ways: one is the transposition of the medial fat pad to the inner groove, and the other is the transposition of the central fat pad to the center of the infra-orbital groove. as mentioned by the author, we fill the superficial layer (under the skin) and the periosteum layer (deep layer). injection into the deeper layer is performed not after lower blepharoplasty but before the musculocutaneous flap is closed. after fat pad transposition is completed, we first cover the musculocutaneous flap before asking the patient to sit up. then, the surgeon assesses whether further filling of the groove with fat is needed.
if necessary, the musculocutaneous flap is opened and more fat is injected into the groove between the fat pads, but definitely not into the fat pads themselves. the reason why we do the injection before the flap is closed is to perform the insertion accurately and to avoid entering the intra-orbital fat pad, which may worsen the presence of eye bags. we inject the superficial fat only after the flap wound is closed. this procedure modifies the groove under the eye more accurately. we share our surgical methods with you in the hope that fat utilisation and fat pad transposition will greatly improve surgical satisfaction.

dear sir, eiben and gilbert are thanked for their comments. they may be correct about the original description of the respective flaps, but the five-flap z-plasty in our experience has always been known colloquially as the jumping man flap. indeed, extra caution is required in secondary burns reconstruction. the skin of these patients is typically thin, often scarred and unforgiving. flaps should never be undermined unless in an area of completely virgin tissue. the modification we presented does result in an apparently thinner base for the 'arm limb' flaps, but traditionally wider-based flaps would have been transferred and then trimmed with the same outcome. the tiny sizes involved in paediatric eyelid surgery would not be the best forum for experimentation, and certainly mustardé's original design would seem safest in that setting. we had uniquely sought also to measure precisely the geometric gain in length, and felt that the result was impressive. none.

letter to the editor: evaluating the effectiveness of plastic surgery simulation training for undergraduate medical students. we read with interest the recent correspondence regarding the effectiveness of plastic surgery simulation for training undergraduate medical students. we are in wholehearted agreement with the statement regarding medical school curricula lacking exposure to plastic surgery, and commend the authors for their efforts to pique the interest of medical students in our specialty. we wish, however, to point out some vagueness that, unless clarified, could be misleading to your readership. the correspondence states: "the decrease in competition ratios for plastic surgery". we believe that current data support the opposite view. taking into account published data from health education england over the last years, there has in fact been a % rise in the competition ratios from to ( fig. ), suggesting an increasing interest in the specialty. highlighting this increase in demand supports the authors' desire for more undergraduate exposure to plastic surgery. this increased input into the uk curriculum would also help all medical students become aware of the support plastic surgeons can provide to other specialties, as this is a particular feature of the specialty. in an increasingly specialised medical world, we feel it is important that all doctors are equipped with the knowledge to best serve their patients. no funding has been received for this work and the authors have no competing interests.

dear sir/madam, in response to critical personal protective equipment (ppe) shortages during the covid-19 pandemic, medsupplydriveuk was established by ent trainee ms. jasmine ho, and medsupplydriveuk scotland by two plastic surgery trainees (ms. gillian higgins and mrs. eleanor robertson). we applied the principles of creative problem solving and multidisciplinary collaboration instilled by our specialty.
since march , we have recruited over volunteers to mobilise over , pieces of high-quality ppe, donated by industry, to the nhs and social care. we have partnered with academics and leaders of industry to manufacture surgical gowns, scrubs and visors using techniques including laser cutting, injection moulding and 3d printing. we have engaged with nhs boards and trusts, and with politicians at local, regional and national level, to advocate for healthcare worker protection in accordance with health and safety executive and coshh legislation, including engineering controls and ppe that is adequate for the hazard and suitable for the task, user and environment. public health england (phe) currently advise an ffp level of protection only in the context of a list of aerosol generating procedures. a surgical mask confers x ( %) protection, an ffp2/n95 respirator x ( - %), and an ffp3 respirator - , x ( > %) protection ( figure ). as sars-cov-2 is a novel pathogen, the evidence is naïve and evolving, and since transmission occurs via aerosols, droplets and fomites from the aerodigestive tract, all uk surgical associations have issued guidance to use higher levels of ppe for procedures that are not included in the phe list. cbs, entuk and baoms have issued statements supporting the use of reusable respirators and powered air-purifying respirators, and their use is approved by phe, health protection scotland, the public health agency, public health wales, the nhs and the academy of medical royal colleges. the first author has experienced the need to quote bapras guidance in defence of their use of ppe. medsupplydrive (uk and scotland) hope to empower all healthcare workers to demand provision of adequate (i.e. protective against sars-cov-2) and suitable (for the task, user and environment) ppe by engaging with their employers directly or through unions, royal colleges and associations. as a nation we must learn from other countries who successfully protected their workforce. data suggest that staff deaths are avoidable with the use of occupational health measures and ffp -grade ppe, despite which at least uk healthcare workers have died of covid-19. the strain placed on systems by sars-cov-2, with reduced access to operating theatres, beds, equipment and staff, has the potential for serious detrimental consequences for surgical training. ppe shortages, and the subsequent necessity for rationing, are causing additional harm. due to global demand and supply chain failures, ffp disposable masks for people with small faces are in particularly short supply. the majority of these individuals are female, and they are currently provided with no solution apart from avoiding "high risk" operating if/when this resource runs out, further depriving them of training opportunities. reusable respirators provide superior respiratory protection over disposable ffp masks due to their design characteristics. they are more likely to provide a reliable fit due to an increased seal surface area (half face mm, full face mm). as they are designed to be decontaminated between patients and after each shift, they are both economically and ecologically advantageous, whilst also reducing the fit-testing burden and negating reliance upon precarious supply chains. there are factories in the uk which already make reusable respirators, and medsupplydrive have been contacted by uk manufacturers looking to retool to meet this demand.
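as a rough guide to how the bracketed percentages relate to the protection factors quoted above (our own arithmetic, not a figure from the letter): a respirator with protection factor $k$ reduces inhaled contamination to $1/k$ of the ambient level, so the filtered fraction is

$$\left(1 - \frac{1}{k}\right) \times 100\%$$

for example, a protection factor of 10 corresponds to 90% filtration, and a factor of 20 to 95%.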
although some nhs trusts remain reluctant to use reusable respirators, others have already adopted them routinely, using manufacturer decontamination and filter change advice. one nhs trust has supplied every member of its workforce with a reusable respirator as a sustainable plan for ongoing pandemic waves. it is apparent that healthcare workers are unable to access sufficient quantities of high-quality respiratory protection. reusable respirators provide adequate protection from sars-cov-2 as well as being eminently suitable for a wide range of users, tasks and environments. we call on those reviewing decontamination and filter policy for reusable respirators to appreciate the urgency of the situation and expedite the process, to enable all health and social care workers to access the respiratory protection that they need.

references

• at the epicenter of the covid-19 pandemic and humanitarian crises in italy: changing perspectives on preparation and mitigation
• love in the time of corona
• world health organization. who director-general's opening remarks at the mission briefing on covid-19
• plastic and reconstructive medical staffs in front line
• national health commission of the people's republic of china. press conference of the joint prevention and control mechanism of the state council
• nam therapy: evidence-based results
• covid-19: how doctors and healthcare systems are tackling coronavirus worldwide
• governmental public health powers during the covid-19 pandemic: stay-at-home orders, business closures, and travel restrictions
• a plastic surgery service response to covid-19 in one of the largest teaching hospitals in europe
• transmission routes of 2019-ncov and controls in dental practice
• who declares covid-19 a pandemic
• covid-19: uk starts social distancing after new model points to potential deaths
• telehealth for global emergencies: implications for coronavirus disease (covid-19)
• prospective evaluation of a virtual urology outpatient clinic
• virtual fracture clinic delivers british orthopaedic association compliance
• quality indicators for plastic surgery training
• available at url: https://en.wikipedia.org/wiki/seminar (accessed )
• internet resource: the telegraph. the inflexibility of our lumbering nhs is why the country has had to shut down
• internet resource: the british society for surgery of the hand. covid-19 resources for members
• caring for patients with cancer in the covid-19 era
• maxillofacial trauma management during covid-19: multidisciplinary recommendations
• asps statement on breast reconstruction in the face of the covid-19 pandemic
• statement from the association of breast surgery, th march : confidential advice for health professionals
• blazeby jm; breast reconstruction research collaborative. short-term safety outcomes of mastectomy and immediate implant-based breast reconstruction with and without mesh (ibra): a multicentre, prospective cohort study
• how the wide awake tourniquet-free approach is changing hand surgery in most countries of the world. hand clin
• hand trauma service: efficiency and quality improvement at the royal free nhs foundation trust
• "one-stop" clinics in the investigation and diagnosis of head and neck lumps
• the implications of cosmetic tourism on tertiary plastic surgery services
• the need for a national reporting database
• policy on the redeployment of staff
• trauma management within uk plastic surgery units
• president of the british society for surgery of the hand, ( ) th march
• highlights for surgeons from phe covid-19 ipc guidance
• american society of plastic surgeons website. asps guidance regarding elective and non-essential patient care
• the effect of economic downturn on the volume of surgical procedures: a systematic review
• an analysis of leading, lagging, and coincident economic indicators in the united states and its relationship to the volume of plastic surgery procedures performed
• telemedicine in the era of the covid-19 pandemic: implications in facial plastic surgery
• united states chamber of commerce website. resources to help your small business survive the coronavirus
• transmission of covid-19 to health care personnel during exposures to a hospitalized patient
• early transmission dynamics in wuhan, china, of novel coronavirus-infected pneumonia
• otorhinolaryngologists and coronavirus disease (covid-19)
• quantifying the risk of respiratory infection in healthcare workers performing high-risk procedures
• skills fade: a review of the evidence that clinical and professional skills fade during time out of practice, and of how skills fade may be measured or remediated
• ad hoc committee on health literacy for the council on scientific affairs
• training strategies for attaining transfer of problem-solving skill in statistics: a cognitive-load approach
• use of a virtual 3d anterolateral thigh model in medical education: augmentation and not replacement of traditional teaching?
• augmenting the learning experience in primary and secondary school education: a systematic review of recent trends in augmented reality game-based learning
• aging and wound healing
• tissue engineering and regenerative repair in wound healing
• duration of surgery and patient age affect wound healing in children
• investigating histological aspects of scars in children
• formation of hypertrophic scars: evolution and susceptibility
• early laser intervention to reduce scar formation in wound healing by primary intention: a systematic review
• early laser intervention to reduce scar formation: a systematic review
• effectiveness of early laser treatment in surgical scar minimization: a systematic review and meta-analysis
• cochrane handbook for systematic reviews of interventions, version
• prospective or retrospective: what's in a name?
• how to run an effective journal club: a systematic review
• the evolution of the journal club: from osler to twitter
• free flap options for reconstruction of complicated scalp and calvarial defects: report of a series of cases and literature review
• the effect of age on microsurgical free flap outcomes: an analysis of , cases
• factors affecting outcome in free-tissue transfer in the elderly
• reconstruction of postinfected scalp defects using latissimus dorsi perforator and myocutaneous free flaps
• long-term superiority of composite versus muscle-only free flaps for skull coverage
• indocyanine green lymphography findings in older patients with lower limb lymphedema
• microsurgical technique for lymphedema treatment: derivative lymphatic-venous microsurgery
• lower-extremity lymphedema and elevated body-mass index
• lymphorrhea responds to negative pressure wound therapy
• lymphovenous anastomosis aids wound healing in lymphedema: relationship between lymphedema and delayed wound healing from a view of immune mechanisms
• evolving practice of the helsinki skin bank
• skin graft meshing, overmeshing and cross-meshing
• gluteal implants versus autologous flaps in patients with postbariatric surgery weight loss: a prospective comparative study of -dimensional gluteal projection after lower body lift
• redefining the ideal buttocks: a population analysis
• classification system for gluteal evaluation
• blondeel and others. doppler flowmetry in the planning of perforator flaps
• frequency of the preoperative flaws and commonly required maneuvers to correct them: a guide to reducing the revision rhinoplasty rate
• temporalis fascia grafts in open secondary rhinoplasty
• the turkish delight: a pliable graft for rhinoplasty
• bone dust and diced cartilage combined with blood glue: a practical technique for dorsum enhancement
• the use of bone dust to correct the open roof deformity in rhinoplasty
• wound microbiology and associated approaches to wound management
• moleculight i:x user manual (english)
• the use of the moleculight i:x in managing burns: a pilot study
• improved detection of clinically relevant wound bacteria using autofluorescence image-guided sampling in diabetic foot ulcers
• efficacy of an imaging device at identifying the presence of bacteria in wounds at a plastic surgery outpatients clinic
• publication rates for abstracts presented at the british association of plastic surgeons meetings: how do we compare with other specialties?
• are we still publishing our presented abstracts from the british association of plastic and reconstructive surgery (bapras)?
• full publication of results initially presented in abstracts
• the true cost of science publishing
• science for sale: the rise of predatory journals
• plastics in healthcare: time for a re-evaluation
• green theatre
• wigan's hospital organisation is first health trust in europe to install intelligent lighting
• sensorflow provides smart energy management for hotels in malaysia
• nhs bids to cut up to million plastic straws, cups and cutlery from hospitals
• healthcare sustainability - the bigger picture
• on behalf of the council of the association of surgeons in training: cross-sectional study of the financial cost of training to the surgical trainee in the uk and ireland
• plastic waste inputs from land into the ocean
• low-carbon, virtual science conference tries to recreate social buzz
• body mass index and abdominal wall thickness correlate with perforator caliber in free abdominal tissue transfer for breast reconstruction
• patient body mass index and perforator quality in abdomen-based free-tissue transfer for breast reconstruction
• increasing body mass index increases complications but not failure rates in microvascular breast reconstruction: a retrospective cohort study
• are overweight and obese patients who receive autologous free-flap breast reconstruction satisfied with their postoperative outcome? a single-centre study
• predicting results of diep flap reconstruction: the flap viability index
• the diagnostic effectiveness of dermoscopy performed by plastic surgery registrars trained in melanoma diagnosis
• analysis of the benign to malignant ratio of lesions biopsied by a general dermatologist before and after the adoption of dermoscopy
• assessing suspected or diagnosed melanoma
• dermoscopy: time for plastic surgeons to embrace a new diagnostic tool?
• the use of dermatoscopy amongst plastic surgery trainees in the united kingdom
• modification of jumping man flap: combined double z-plasty and v-y advancement for thumb web contracture
• plastic surgery in infancy
• evaluating the effectiveness of plastic surgery simulation training for undergraduate medical students
• recommended ppe for healthcare workers by secondary care inpatient clinical setting, nhs and independent sector
• personal protective equipment (ppe) for surgeons during the covid-19 pandemic: a systematic review of availability, usage, and rationing
• covid-19: protecting worker health. annals of work exposures and health
• memorial of health & social care workers taken by covid-19. nursing notes

none. the authors have no financial interests to declare in relation to the content of this article and have received no external support related to this article. no funding was received for this work. the authors would like to thank catriona graham, sarcoma specialist nurse, who helped in the evaluation of this study. the authors kindly thank the beatson cancer charity, uk (grant application number - - ), the jean brown bequest fund, uk, and the canniesburn research trust, uk, for funding this study. the sponsors had no influence on the design, collection, analysis, write-up or submission of the research. supplementary material associated with this article can be found, in the online version, at doi: . /j.bjps. . . . none. the authors declare no funding. jeremy rodrigues provided data from the two nhs trust journal clubs and invaluable advice. nil.
all authors declare that there were no funding sources for this study and that they approved the final article. supplementary material associated with this article can be found, in the online version, at doi: . /j.bjps. . . . all authors disclose any commercial associations or financial disclosures. none. none. none. none. all authors agree that there are no conflicts of interest to declare. no funding was provided for this letter. the authors have no financial or personal relationships with other people or organizations which could inappropriately influence the work in this study. the authors have no financial disclosure or conflict of interest to declare in relation to the content of this article. no funding was received for this article. supplementary material associated with this article can be found, in the online version, at doi: . /j.bjps. . . .

dear sir, long has the term 'publish or perish' been considered medical doctrine, and this has historically been a prerequisite for progression in research-driven specialties such as plastic surgery. national, or indeed international, presentation is pivotal to disseminating information, but it also provides a stepping-stone to future publications. in the uk, bapras meetings have always represented the ideal platform for this. of significant interest is the conversion of accepted abstracts into peer-reviewed publications. previous studies have assessed abstract publication for bapras meetings and have shown a declining conversion rate. we re-assessed this in order to establish whether the reported downtrend is continuing and how plastic surgery compares with other specialties. all abstracts from bapras meetings between winter and summer were analysed; later meetings were excluded to allow adequate lag time for publication. abstracts were identified retrospectively from conference programmes accessible via the bapras website (www.bapras.org.uk). the pubmed (https://www.ncbi.nlm.nih.gov/pubmed/) and google scholar (https://scholar.google.com/) databases were used to search for full publications. cross-referencing of published papers with abstracts for content was completed to ensure matched studies. abstracts published prior to the conference date were excluded. two-tailed t-testing was used to assess for statistical significance between variables. none. none.

dear sir, diver and lewis described a modification of the "jumping man flap". in fact, what they have described is a modification of the five-flap z-plasty, which was described by hirschowitz et al. it is not a jumping man, as it has no body. the true jumping man flap was described by mustardé for the correction of epicanthal folds and telecanthus. we have used the five-flap z-plasty particularly for the release of first web space contractures following burns, for the modification of raised curved scars of the trunk and limbs following burns, and for the correction of epicanthal folds in small children. using the diver and lewis modification in burn cases results in thin and less vascular flaps. when correcting epicanthal folds in children, the flaps are so small that reducing their size in any way would make it near impossible to suture the flaps correctly. no conflicts of interest.

key: cord- -cg bewqb title: a snap shot of space and time dynamics of covid-19 risk in malawi: an application of a spatial temporal model date: - - journal: nan doi: . / . . .
sha: doc_id: cord_uid: cg bewqb background: covid-19 has been the greatest challenge the world has faced since the second world war. the aim of this study was to investigate the distribution of covid-19 in both space and time in malawi. methods: the study used publicly available data on covid-19 cases for the period from th june to th august, . semiparametric spatial temporal models were fitted to the number of weekly confirmed cases as the outcome, with time and location as independent variables. results: the study found significant main effects of location and time, with the two interacting. the spatial distribution of covid-19 showed major cities being at greater risk than rural areas. over time, the covid-19 risk was increasing and then decreasing in most districts, with the rural districts being consistently at lower risk. conclusion: future or present strategies to avert the spread of covid-19 should target major cities by limiting international exposure. in addition, the focus should be on time points that have shown high risk.

covid-19 is a coronavirus disease which was first reported in wuhan, china in . it is characterized by severe acute respiratory syndrome (sars), and the causative virus is hence known as sars-cov-2 (who, ). since its onset, covid-19 has been one of the greatest disease pandemics of all time. from its discovery in december in china, people the world over have been confirmed with the disease and over people have died as of august (who, ). the spatial trend worldwide has shown the americas ( ), europe ( ) and south east asia ( ) being the hardest hit as of th august, . africa had a slow progression of the disease at the beginning, in early , but the continent had rising cases in the middle of the year, with confirmed cases and deaths as of th august, . malawi at this time of the year had recorded confirmed cases and deaths (unicef malawi, b). the first three cases of covid-19 were recorded on nd april, (unicef malawi, a). understanding disease space and time dynamics is important for epidemiologists because, with the space distribution, hot spot areas are marked for intervention. in addition, possible drivers of the epidemic in those hot spots are suggested for further scientific investigation. regarding the temporal distribution, times with high disease risk are also identified, which gives a clue to possible causes including, in particular, seasonal changes. a number of studies on the spatial temporal distribution of covid-19 have been conducted (chen et al., ; ye and hu, ). the majority of these, though, have used geographical information system (gis) technology as opposed to statistical modelling using spatial temporal models. a few studies that have used the statistical approach to spatial temporal analysis, to our knowledge, are gayawan et al ( ), who used the poisson hurdle model to take into account excess zero counts of covid-19 cases; briz-redon and serrano-aroca ( ), who used the separable random effects model with structured and unstructured area and time effects; and chen et al ( ), who used the inseparable spatial temporal model. in addition, in africa, spatial temporal analysis of covid-19 cases has been limited (gayawan et al, ; arashi et al, ; adekunle et al, ) as of th august. in malawi, at the time of this study, no study on the spatial temporal distribution of covid-19 cases had been spotted. only one study, which focused on prediction of covid-19 cases using mathematical models, was seen (kuunika, ).
the aim of this study was to determine the spatial temporal trends of confirmed covid-19 cases in malawi using spatial temporal statistical models. the objectives of the study were:

• to establish the estimated or predicted risk trend by geographical location
• to estimate the temporal risk trend of covid-19 by geographical location.

the article has been organized as follows. first, the study methods are described in terms of data collection and statistical analysis. thereafter, results and a discussion of the results are presented. finally, conclusions and implications regarding the findings of the study are made.

the study used the publicly available districts' confirmed daily covid-19 case data for malawi, which were extracted from the malawi data portal website (https://malawi.opendataforafrica.org) after registering with the portal. the total population, population density, and percentage of people with running water for each district were also extracted from the same data portal. the population size for each district was used as the expected number of people to be infected in each district. population density and percentage of people with running water in each district were taken as covariates. though the cases started to be recorded on nd april, , the extracted covid-19 case data used for spatial temporal modelling in this study only covered the period from th june to th august. this was the case considering that daily covid-19 cases for the districts were only available from th june on the portal. the study period was divided into six weeks.

descriptive analysis involved the time series plot of the cumulative confirmed cases and deaths from covid-19 for the whole country, from the beginning of the epidemic to the time this study was conducted, that is, th august, . it also involved bivariate correlation of daily confirmed covid-19 cases with their potential covariates, that is, population size, population density and percentage of people with running water in each district. a potential covariate would be selected for further multiple variable modelling if its p-value was less than . .

after the descriptive analysis, multiple variable spatial temporal models were fitted in r using the bayesian approach with integrated nested laplace approximations (inla). letting $y_{it}$ denote the number of confirmed cases in district $i$ and week $t$, and $e_i$ the expected number of cases, the counts were modelled as $y_{it} \sim \text{poisson}(e_i r_{it})$, where $r_{it}$ is the relative risk with $\log(r_{it}) = \eta_{it}$, and the predictor $\eta_{it}$ is specified as

$$\eta_{it} = \mu + u_i + v_i + (\varphi + \varphi_i)\, t \quad (1)$$

the model with predictor (1) will be denoted by a. in (1), $\mu$ is the overall disease relative risk, $u_i$ is the area level unstructured random effect, $v_i$ is the area level structured random effect, $\varphi$ is the overall time trend and $\varphi_i$ is the area specific time trend. the unstructured area level effects were modelled by the independent normal distribution with zero mean, that is, $u_i \sim n(0, \sigma_u^2)$, and the structured random effects were assigned the intrinsic conditional autoregressive (icar) prior according to besag ( ), that is, $v_i \mid v_{j \ne i} \sim n(\bar{v}_{\delta_i}, \sigma_v^2 / n_{\delta_i})$, where $\bar{v}_{\delta_i}$ is the mean of the effects of the neighbours of area $i$ and $n_{\delta_i}$ is the number of neighbours. the weakness of model a is the linearity assumption on the effect of time on the relative risk of the disease.
to take a more flexible approach to the effect of time, nonlinear spatial temporal models were also explored. the predictor $\eta_{it}$ for the nonlinear spatial temporal model for the time effect is specified as

$$\eta_{it} = \mu + u_i + v_i + \gamma_t + \beta_t + \delta_{it} \quad (2)$$

and this model is denoted by b. the $u_i$ and $v_i$ in the model are the area level unstructured and structured effects respectively, as defined in (1), and $\gamma_t$ and $\beta_t$ are the unstructured and structured temporal effects. the unstructured time effects were modelled by the independent normal distribution with zero mean, that is, $\gamma_t \sim n(0, \sigma_\gamma^2)$, and the structured temporal effects were assigned the first order random walk (rw1) prior distribution defined as

$$\beta_t \mid \beta_{t-1} \sim n(\beta_{t-1}, \sigma_\beta^2).$$

a second order random walk was also explored in case the data would show a more pronounced linear trend. the second order random walk (rw2) is defined as

$$\beta_t \mid \beta_{t-1}, \beta_{t-2} \sim n(2\beta_{t-1} - \beta_{t-2}, \sigma_\beta^2).$$

the last term in (2), $\delta_{it}$, represents the interaction between area and time. four forms of interaction between space and time are possible according to knorr-held ( ). the first form assumes interaction between the unstructured region effect ($u_i$) and the unstructured temporal effect ($\gamma_t$) (denote it by model b1); in this case the interaction effect is assigned the independent normal distribution, that is, $\delta_{it} \sim n(0, \sigma_\delta^2)$. the second type of interaction is that between the structured area effect ($v_i$) and the unstructured temporal effect ($\gamma_t$) (denote it by model b2). this form of interaction assumes a conditional intrinsic autoregressive (car) distribution over the areas for each time, independently of all the other times. the third is the interaction between the unstructured area effect ($u_i$) and the structured temporal effect ($\beta_t$) (denote this by model b3); the prior distribution for each area is assumed to be a second order random walk across time, independently of the other areas. the last possible space time interaction is that between the structured area effect ($v_i$) and the structured time effect ($\beta_t$) (call it model b4); in this case, a second order random walk prior that depends on neighbouring areas was assigned for each area.
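to make the specification above concrete, the following is a minimal sketch of fitting model b with a type i (iid) space-time interaction in r-inla; the data frame columns, the adjacency file malawi.adj and all object names are hypothetical illustrations rather than the authors' actual code, and hyperpriors are left at package defaults rather than the gamma priors described in the next paragraph.

    library(INLA)  # r-inla, available from www.r-inla.org (not on cran)

    # hypothetical data frame d, one row per district-week:
    #   d$district - district index (1..number of districts)
    #   d$week     - week index (1..6)
    #   d$cases    - weekly confirmed cases, y_it
    #   d$expected - expected cases e_i for each district
    d$district.iid <- d$district        # copy of index: u_i (iid) vs v_i (besag)
    d$week.iid     <- d$week            # copy of index: gamma_t (iid) vs beta_t (rw2)
    d$inter        <- seq_len(nrow(d))  # one level per district-week: delta_it (type i)

    formula <- cases ~ 1 +
      f(district,     model = "besag", graph = "malawi.adj") +  # structured v_i
      f(district.iid, model = "iid") +                          # unstructured u_i
      f(week,         model = "rw2") +                          # structured beta_t
      f(week.iid,     model = "iid") +                          # unstructured gamma_t
      f(inter,        model = "iid")                            # interaction delta_it

    fit <- inla(formula, family = "poisson", data = d, E = expected,
                control.compute = list(dic = TRUE))

    fit$dic$dic                      # deviance information criterion for model choice
    head(fit$summary.fitted.values)  # posterior summaries of the fitted relative risk

swapping model = "rw2" for "rw1", or changing the interaction term, gives the other model variants compared in the results section.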
table shows the dic of the fitted spatial temporal models defined in the methods section. models b and b had smaller dic than the rest of the models, and the dic difference between the two was not significant as it was not greater than (spiegelhalter et al, ). the results of the model with the second order random walk (rw2) are therefore presented and discussed. table presents the variance parameters of the random effects. all model terms, including the interaction effect of location and time, were significant predictors of the risk of contracting covid-19, as the estimated variances were significantly greater than zero (their credible intervals excluded zero). area level spatial effects and the effects of time modelled by the second order random walk prior (rw2) were highly significant, as evidenced by their larger variances. the spatial temporal distribution of overall fitted risk (figure , figure ) shows that, by space, mzimba, mzuzu and nkhata bay in the north; lilongwe, lilongwe city and mchinji in the center; and blantyre, mwanza, zomba and mangochi in the south were at increased risk of confirmed covid-19. over time, from week to week , the risk in mzimba and mzuzu had been decreasing, while the risk in blantyre was consistently high. most of the districts in the rural areas were consistently at low risk of contracting covid-19 over time. figure presents the spatial risk of contracting covid-19. spatial risk represents the residual risk due to unobserved or unmeasured factors of covid-19. in general, the spatial risk looks randomly distributed in weeks , and , and non-random in weeks and . more precisely, in the first week the risk was higher in the south than in the center and north. in the second week, the spatial risk was higher in the north, and somewhat in the center close to the western border, than in the south. in the third week, the spatial risk was high in dowa (center) and south of the lake, that is, machinga and mangochi. in the fourth week, the spatial risk was high in areas surrounding the major cities in all three regions. in the fifth week, the spatial risk shifted to the south and to one district in the northern region. most areas in the last week had reduced spatial risk of covid-19, with the exception of two central districts. the study looked at a snapshot of the spatial temporal distribution of covid-19 in malawi by focusing on the period th june to th august, using an inseparable statistical spatial temporal model. the use of an inseparable model allowed the investigation of the joint or interaction effect of time and location on covid-19 cases. the use of a non-parametric model for the time effect (rw2) also enabled the capturing of subtle influences of time on the risk of contracting covid-19.
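the dic comparison above follows the definition $\mathrm{DIC} = \bar{D} + p_D$; a minimal sketch of that computation from hypothetical posterior draws (the counts, the poisson model, and the posterior samples here are all stand-ins) is:

```python
# a minimal sketch of the dic computation: dic = Dbar + pD, with
# pD = Dbar - D(theta_bar), estimated from posterior samples.
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(1)
y = rng.poisson(5.0, size=40)                        # stand-in observed counts
lam_samples = rng.gamma(5.0, 1.0, size=(1000, 40))   # hypothetical posterior draws

def deviance(lam):
    # -2 log-likelihood, up to the saturated-model constant
    return -2.0 * poisson.logpmf(y, lam).sum(axis=-1)

D_bar = deviance(lam_samples).mean()             # posterior mean deviance
D_at_mean = deviance(lam_samples.mean(axis=0))   # deviance at posterior mean
p_D = D_bar - D_at_mean                          # effective number of parameters
print("DIC =", D_bar + p_D)
```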
the spatial distribution of covid-19 risk in malawi in the given time period shows the cities and their surrounding areas being at increased risk. the explanation for the observed spatial gradient is a matter of conjecture. one possible factor driving the observed spatial pattern is population size and population density. the cities have higher population density than the rural areas, and covid-19 is therefore more likely to spread fast through the movement of and frequent contact between people. case comparison investigations have found positive correlation between population density and covid-19 (penerliev and petkov, ); for example in italy, lombardy's population density is three times higher than piedmont's, and its incidence rate was also over three times higher. evidence of high population density as a risk factor for disease transmission has been seen in india, where influenza transmission rates have been found to increase above a population density of people per square kilometer (african centre of strategic studies, ). the other possible contributor to the observed rural-city spatial gradient of covid-19 risk in malawi is international exposure. cities have higher international exposure than rural areas, through international flights among other channels, which would mean more imported cases. evidence of international exposure as a risk factor for covid-19 transmission has been observed in africa as a whole, where countries with high international exposure like south africa, nigeria, morocco, egypt and algeria have had higher covid-19 case counts than their counterparts. international exposure as a fuel of covid-19 transmission has also been documented in brazil, where cases were found to increase with the number of international flights arriving in the country (pequeno et al, ). regarding the temporal distribution, the disease risk increased gently from week to week , increased sharply from week to week , and then started to decline in most regions. the relatively sharp increase in risk in weeks and may be attributed to the effects of the presidential general election held on rd june; the unrestricted political rallies before the election might have caused a spike in covid-19 risk thereafter. in addition, the rise in covid-19 cases during this time could be attributed to the decreasing temperatures at this time of year, as it falls in the cold season. the decline in risk from week to the last week may be due to increasing temperatures, as this time marks the beginning of the hot season. negative correlation between covid-19 cases and temperature has been documented (pequeno et al, ). the study was not without weaknesses. the first was that, due to the absence of population sizes for each area at each time point, the base population at risk for each area was assumed to be constant across time, which is not strictly valid. the other weakness was that the study did not look at predictions of covid-19 risk beyond the specified study period to give an idea of how the disease would progress thereafter. such predictions would have important implications, particularly for planning activities that had been brought to a halt by covid-19, like education and football games.
nonetheless, the study gave an overview of the disease dynamics in both space and time within the specified time frame, so as to identify hot spots in both space and time for further epidemiological investigations or interventions. the study found a significant effect of both location and time on covid-19 risk, and the effect of either of the two depended on the other, that is, interaction. the risk of covid-19 for major cities was high compared to the rural districts, and over time the risk for rural areas remained relatively lower than in cities. the risk of getting covid-19 in almost all districts started to decline in the last week, which was in august. the implications of the study are that future interventions to halt disease transmission, should the disease resurge, should target the major cities like blantyre, zomba, mangochi, lilongwe and mzuzu, and that over time attention should be paid to the months of june and july, when it is very cold.
references:
who. coronavirus disease (covid-19): situation report
distribution of the covid-19 epidemic and correlation with population emigration from wuhan
spatio-temporal patterns of the 2019-ncov epidemic at the county level in hubei province
the spatio-temporal epidemic dynamics of covid-19 outbreak in africa. medrxiv preprint
changes in the spatial distribution of covid-19 incidence in italy using gis-based maps
modelling spatial variations of coronavirus disease (covid-19) in africa
spatial analysis and prediction of covid-19 spread in south africa after lockdown
spatial distribution and impact assessment of covid-19 on human health using geospatial technologies in india
a spatio-temporal analysis for exploring the effect of temperature on covid-19 early evolution in spain
pougué biyong jn. the relatively young and rural population may limit the spread and severity of covid-19 in africa: a modelling study
population flow drives spatio-temporal distribution of covid-19 in china
the spatial and temporal pattern of covid-19 and its effect on human development in china
the impacts of covariates on spatial distribution of corona virus (covid-19): what do the data show through ancova and mancova
application of geospatial technologies in the covid-19 fight of ghana
spatial distribution and geographic mapping of covid-19 in northern african countries; a preliminary study
spatiotemporal distribution and trend of covid-19 in the yangtze river delta region of the people's republic of china
quantifying the potential burden of novel coronavirus
bayesian analysis of space-time variation in disease risk
bayesian image restoration with two applications in spatial statistics
bayesian modelling of inseparable space-time variation in disease risk
bayesian measures of model complexity and fit (with discussion)
geodemographic aspects of covid-19
mapping risk factors for the spread of covid-19 in africa
key: cord- -j r veou authors: sipetas, charalampos; keklikoglou, andronikos; gonzales, eric j. title: estimation of left behind subway passengers through archived data and video image processing date: - - journal: transp res part c emerg technol doi: . /j.trc. . sha: doc_id: cord_uid: j r veou crowding is one of the most common problems for public transportation systems worldwide, and extreme crowding can lead to passengers being left behind when they are unable to board the first arriving bus or train. this paper combines existing data sources with an emerging technology for object detection to estimate the number of passengers that are left behind on subway platforms. the methodology proposed in this study has been developed and applied to the subway in boston, massachusetts. trains are not currently equipped with automated passenger counters, and farecard data is only collected on entry to the system. an analysis of crowding from inferred origin-destination data was used to identify stations with a high likelihood of passengers being left behind during peak hours. results from north station during afternoon peak hours are presented here. image processing and object detection software was used to count the number of passengers that were left behind on station platforms from surveillance video feeds. automatically counted passengers and train operations data were used to develop logistic regression models that were calibrated to manual counts of left behind passengers on a typical weekday with normal operating conditions. the models were validated against manual counts of left behind passengers on a separate day with normal operations. the results show that by fusing passenger counts from video with train operations data, the number of passengers left behind during a day's rush period can be estimated within [formula: see text] of their actual number. public transportation serves an important role in moving large numbers of commuters, especially in large cities. transit performance is an important determinant of ridership, and transit services that offer short and reliable waiting times for commuters offer a competitive alternative to driving, which contributes to reduced congestion and improved quality of life. crowding is a major challenge for public transit systems all over the world, because it increases waiting times and travel times and decreases operating speeds, reliability, and passenger comfort (tirachini et al., ). studies show that crowding in public transit increases anxiety, stress, and feelings of invasion of privacy for passengers (lundberg, ). the covid-19 pandemic has also highlighted the public health risks associated with passenger crowding in transit vehicles. although transit ridership dropped precipitously during the pandemic in cities around the world, concerns about crowding on transit continue as economies re-open, commuters return to work, and agencies plan for the future. when overcrowded, commuters may not be able to board the first train or bus that arrives. these commuters are left behind by the vehicle they wished to board, and their number is directly related to various basic performance measures of public transportation.
sha: doc_id: cord_uid: j r veou crowding is one of the most common problems for public transportation systems worldwide, and extreme crowding can lead to passengers being left behind when they are unable to board the first arriving bus or train. this paper combines existing data sources with an emerging technology for object detection to estimate the number of passengers that are left behind on subway platforms. the methodology proposed in this study has been developed and applied to the subway in boston, massachusetts. trains are not currently equipped with automated passenger counters, and farecard data is only collected on entry to the system. an analysis of crowding from inferred origin–destination data was used to identify stations with high likelihood of passengers being left behind during peak hours. results from north station during afternoon peak hours are presented here. image processing and object detection software was used to count the number of passengers that were left behind on station platforms from surveillance video feeds. automatically counted passengers and train operations data were used to develop logistic regression models that were calibrated to manual counts of left behind passengers on a typical weekday with normal operating conditions. the models were validated against manual counts of left behind passengers on a separate day with normal operations. the results show that by fusing passenger counts from video with train operations data, the number of passengers left behind during a day’s rush period can be estimated within [formula: see text] of their actual number. public transportation serves an important role in moving large numbers of commuters, especially in large cities. transit performance is an important determinant of ridership, and transit services that offer short and reliable waiting times for commuters offer a competitive alternative to driving, which contributes to reduced congestion and improved quality of life. crowding is a major challenge for public transit systems all over the world, because it increases waiting times and travel times and decreases operating speeds, reliability, and passenger comfort (tirachini et al., ) . studies show that crowding in public transit increases anxiety, stress, and feelings of invasion of privacy for passengers (lundberg, ) . the covid- pandemic has also highlighted the public health risks associated with passenger crowding in transit vehicles. although transit ridership dropped precipitously during the pandemic in cities around the world, concerns about crowding on transit continue as economies re-open, commuters return to work, and agencies plan for the future. when overcrowded, commuters may not be able to board on the first train or bus that arrives. these commuters are left behind the vehicle that wished to board, and their number is directly related to various basic performance measures of public transportation there are a number of technologies that can be used to observe, count, and track pedestrians and pedestrian movements in an area. digital image processing for object detection is an appealing approach for transit systems because surveillance videos are already being recorded in transit stations for safety and security purposes. the video feed records passenger positions and movements in the same way that a person would observe them, as opposed to infrared or wireless signal detectors that merely detect the movement of a person past a point or their proximity to a detector. 
the detection of objects in surveillance videos is an invaluable tool for passenger counting and has numerous applications. for example, object detection can be used for passenger counting or tracking, recognizing crowding, and hazardous object recognition. in a relevant application, velastin et al. ( ) use image processing techniques to detect potentially dangerous situations in railway systems. computer vision is the electronic counterpart of human vision, aiming to perceive, understand and store information extracted from one or more images (sonka et al., ). there are various techniques for using computers to process an image for object detection by extracting useful information. recent methods use feature-based techniques rather than the segmentation of a moving foreground from a static background that was used in the past. the detected features are extracted and classified, typically using either boosted classifiers or support vector machine (svm) methods (viola, ; cheng et al., ). svm is one of the most popular methods used in object detection algorithms, and especially in passenger counting, because it offers a way to estimate a hyperplane that separates feature vectors extracted from pedestrians from other samples (cheng et al., ), differentiating pedestrians from other unwanted features. boosting uses a sequence of algorithms to weight weak classifiers and combine them to form a strong hypothesis when training the algorithm to attain accurate detection (zhou, ). many methods for object detection take a classifier for an object and evaluate it at several locations and scales in a test image, which is time-consuming and creates numerous computational instabilities at large scales (deng et al., ). the most recent methods, such as the region based convolutional neural network (r-cnn), use a different approach to decrease the region over which the classifier runs and include the svm. first, category-independent regions are proposed to generate potential bounding boxes. second, the classifier runs and extracts a fixed-length feature vector for each of the proposed regions. finally, the bounding boxes are refined by the elimination of duplicate detections and rescoring of the boxes based on other objects in the scene using svms (girshick et al., ). the bounding box is a rectangular box located around an object to represent its detection (coniglio et al., ; lézoray and grady, ). the resulting object detection datasets are images with tags used to classify different categories (deng et al., ; everingham et al., ). an open-source software tool called you only look once (yolo) uses a different method than the above-mentioned techniques for object detection. it casts detection as a single regression problem, estimating bounding box coordinates and class probabilities simultaneously with a single convolutional network that predicts multiple bounding boxes and class probabilities for those boxes (redmon, ; redmon et al., ). another advantage of yolo is that, unlike other techniques such as svms, it sees the entire image globally instead of sections of the image. this feature enables yolo to implicitly encode contextual information about classes and their appearance, and at the same time makes yolo more accurate, making fewer than half the number of errors compared to fast r-cnn. yolo uses parameters for object detection that are acquired from a training dataset.
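to make the counting idea concrete, the sketch below shows one common way to run a darknet-style yolo model through opencv's dnn module and count "person" detections in a single frame. this is an assumed illustration, not the authors' implementation: the configuration and weight file names are placeholders, and the input size and suppression thresholds are conventional defaults rather than values from the paper.

```python
# a minimal sketch (assumed) of counting "person" detections in one frame.
import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")  # placeholder files
layer_names = net.getUnconnectedOutLayersNames()

def count_people(frame: np.ndarray, conf_threshold: float = 0.25) -> int:
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    boxes, scores = [], []
    h, w = frame.shape[:2]
    for output in net.forward(layer_names):
        for det in output:                    # det = [cx, cy, w, h, obj, class scores...]
            class_scores = det[5:]
            if np.argmax(class_scores) == 0:  # class 0 is "person" in the coco label set
                conf = float(det[4] * class_scores[0])
                if conf >= conf_threshold:
                    cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
                    boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
                    scores.append(conf)
    # non-maximum suppression removes duplicate boxes for the same person
    keep = cv2.dnn.NMSBoxes(boxes, scores, conf_threshold, nms_threshold=0.45)
    return len(keep)
```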
yolo can learn and detect generalizable representations of objects, outperforming other detection methods, including r-cnn. the ability to train yolo on images has the potential to directly optimize detection performance and improve the bounding box probabilities. the calibration of parameters for object detection using an algorithm like yolo requires training datasets with a large number of tagged images. although a custom training set that is specific to the context of application (e.g., mbta transit stations) would be desirable for achieving the most accurate object detection outcomes, it is very costly to create a large tagged training set from scratch. the common objects in context (coco) dataset is a large-scale object detection, segmentation, and captioning dataset that is freely available to provide default parameter values for yolo. the coco dataset is not specific to passengers or transit stations; it is a general dataset that includes , images, . million tagged objects and object types, including "person" (lin et al., ). nevertheless, the tool is effective for identifying individual people in camera feeds, and the use of general training data allows the same tool to be applied in other contexts without requiring additional training data. the proposed methodology aims to estimate the number of left behind passengers at a transit station when trains are too crowded to board. fig. presents a flowchart of the data and methods used in this study to provide a roadmap for the analysis described in this paper. the methods rely heavily on two data sources that are automatically collected and recorded (shown in blue): train tracking records that indicate train locations over time, and surveillance video feeds. additional archived data on inferred travel patterns from farecard records is used only to identify the most crowded parts of the system (shown in purple), and manual counts are used to estimate and validate models (shown in red). for model implementation, the proposed models require only the automatically collected input data. the first step of the analysis presented in this paper is to identify the stations and times of day when crowding is most likely to cause passengers to be left behind on the platform. this analysis is used only for determining where to collect data to demonstrate the implementation of the proposed model. this step could be skipped for cases in which the locations for implementation are already known.
the identification of study sites involves a crowding analysis that makes use of two data sources: train tracking records, which denote the locations of trains over time, and origin-destination-transfer (odx) passenger flows, which are inferred from passenger farecard data. peaks in train occupancy and numbers of boarding passengers show where and when passengers are most likely to be left behind, as described in section . . then, section . describes an analysis of surveillance camera views to determine which stations have unobstructed platform views and station geometry that allows the automated video analysis techniques to be used to count passengers. train tracking data, which includes the time each train enters a track circuit, is automatically recorded into the mbta research database. by comparing this data against manual observations of the times that train doors open and close in the station, a linear regression model is estimated to predict dwell time from the train tracking records, as described in section . . this model is used to obtain automated dwell time estimates as inputs to the model of left behind passengers. automated counts of the number of passengers on each station platform are obtained using yolo, an automated image detection algorithm. the parameters of the algorithm are associated with the freely available coco training dataset, as described in section . the threshold for object identification is calibrated, as described in section . , by applying the algorithm to the surveillance video feed and comparing with manual counts of the passengers remaining on the platform after the doors have closed (section . ) and the passengers entering and exiting the platform (section . ). with the parameter values and calibrated threshold, yolo produces estimates of the number of passengers on the platform as a time series. the number of passengers that remain on the platform after the doors close is a raw automated passenger count, as shown in section . . these raw counts are not very accurate as a direct measure (section . ), but they provide a useful input for modeling the number of left behind passengers. a logistic regression is used to predict the probability that a passenger is left behind on the station platform based on automated dwell time estimates and/or automated passenger counts from video. the model parameters are estimated using the manually observed counts of passengers left behind on the station platforms as the observed outcome. in this study, data collected on november , were used for model estimation. the diagnostics, parameters, and fit statistics are presented for three models in section . . the quality of the proposed models is evaluated through validation against manually collected counts on a different day. in this study, the estimated models are used to predict the number of left behind passengers using automated dwell time estimates and automated passenger counts on january , . the accuracy of the model predictions is then calculated relative to manually observed passenger counts on the same day, as shown in section . . implementation of the model to make ongoing estimates of the numbers of passengers left behind each departing train requires only train tracking data and surveillance video feeds as model inputs. the manual observations of door opening/closing times and the number of passengers on the platforms are used only for estimating model parameters. the models then produce predictions of the number of passengers left behind each departing train based only on data that is automatically collected. therefore, the numbers of left behind passengers and the associated impact on the distribution of wait times experienced by passengers can be tracked as a performance measure over time. if data feeds were processed as they are recorded, it would also be possible to implement the models to make real-time predictions of left behind passengers.
for this study, stations were selected based on a crowding analysis and evaluation of station geometry and camera view characteristics. the goal was to identify stations with the greatest likelihood of passengers being left behind during a typical morning or afternoon rush and where object detection techniques would be most successful. the analysis focused on the orange line, which is miles long with stations. oak grove and forest hills are the northern and southern end stations, respectively. there are two main reasons for choosing this specific line. first and most important, it has no branch lines, so all travelers can reach their destination by boarding the next available train. this simplifies the identification of left-behind passengers. second, it passes through several transfer stations in the center of boston, which highlights its significance for passengers' daily commuting. a crowding analysis is a necessary step to identify the times and stations where crowding is observed and left behinds have the highest probability of occurring. the data used in this part of the analysis have been extracted from the rail flow database in the mbta research and analytics platform. the rail flow dataset includes aggregated boarding and alighting counts by time of day with -min temporal resolution averaged across all days in a calendar quarter. an example is given in fig. for : - : pm in winter . these data are derived from the origin-destination-transfer (odx) model, which makes use of afc and avl systems to infer the flow of passengers within the subway (sánchez-martínez, ). the odx model identifies records from afc that can be linked in order to infer transfers or return trip patterns. for example, a passenger using a charlie card (mbta's farecard) to enter a rail station and later board a bus near a different rail station can be assumed to have used the rail system and then transferred to the bus. another passenger who enters one rail station in the morning and enters a different rail station in the afternoon may be completing a round-trip commute, so the destination of the morning and afternoon trips can be inferred by linking the two trips. some trip origins and/or destinations cannot be inferred, for example if the fare is paid with cash or the trip has only one farecard transaction. for more details about the odx model, the reader is referred to sánchez-martínez ( ) , where the model's application inferred the origins of % and the destinations of % of the total number of fare transactions. for the crowding analysis in this paper, cumulative counts of passengers boarding and alighting at each station have been created along the direction of train travel using the aggregated railflow data. for a -min time period, b n t ( , ) is the cumulative count of all passengers that board trains in the direction of interest at stations preceding and including station n during time interval t. similarly, a n t ( , ), is the cumulative count of passengers that are assumed to have exited trains traveling in the direction of interest at stations preceding and including station n during time interval t. it should always be true that a n t b n t ( , ) ( , ), because passengers can only alight a train after boarding it. the difference between the cumulative boardings, b n t ( , ), and alightings, a n t ( , ), is the estimated passenger flow, q n t ( , ), between station n and + n during each -min time period. 
this calculation is approximate, because cumulative counts are calculated for a single -min time period, and real trains take more than min to traverse the length of a line. to calculate the number of passengers per train, the passenger flow per time period must be converted to passenger occupancy, o n t ( , ) (passengers/train), which is calculated by multiplying the passenger flow by the scheduled headway of trains, h t ( ) (minutes), at time t. c. sipetas, et al. transportation research part c ( ) the headway is divided by min to account for the fact that the passenger flow is per -min time period. this measure is an approximation of the number of passengers onboard each train that is based on the assumptions that headways are uniform and passengers are always able to board the next arriving train. in reality, variations in headways may lead to increased crowding after longer headways, increasing the likelihood that some passengers will be left behind. the mbta service delivery policy (sdp) (mbta, ) provides guidelines for reliability and vehicle loads. in the mbta sdp (mbta, ), the maximum vehicle load was explicitly defined as % of seating capacity in the peak hours (start of service to : am; : pm - : pm) and % of the seating capacity in other hours. the sdp notes that accurately monitoring the passenger occupancy of heavy rail transit is not yet feasible on the mbta system. nevertheless, the guidelines from table b in the sdp are used to identify general crowding levels, recognizing that each orange line train is six cars long and has a total of seats. a visualization of average train occupancy for the winter rail flow data is shown in the color plot in fig. a . the color for each station and -min time interval corresponds to the value of o n t ( , ) . since the trains have seats, red parts of the plot indicate large numbers of standing passengers, with dark red indicating crowding near vehicle capacity. this figure shows that in the northbound direction, the most severe crowding occurs between downtown crossing and north station shortly before : pm. note that the crowding appears to decrease before rebounding again at : pm. this is due to the change in scheduled headway at : pm from min to min, which increases occupancy, as calculated in eq. ( ). c. sipetas, et al. transportation research part c ( ) a more detailed visualization combines transit vehicle location records and inferred origin-destination trip flows from a specific date. as mentioned already, the odx trip flows are constructed with simplifying assumptions about passenger movements; for example, all passengers entering a station are assumed to board the first arriving train. despite such assumptions, however, the model is valuable for many applications. the trajectories in fig. b are associated with the recorded arrival and departure times of train at each station. the colors are associated with the estimated train occupancy based on the inferred boardings and alightings, assuming that no passengers are left behind. the trajectory plot shows that the headways between trains can vary substantially, especially for c. sipetas, et al. transportation research part c ( ) the stations north of downtown crossing. longer headways are followed by more crowded trains, because more passengers have arrived to board since the previous train. the occurrence of left-behind passengers would make actual train occupancies slightly lower for the trains following long headways. 
those left-behind passengers would then be waiting to board the next train, thereby increasing the occupancy on one or more subsequent trains. tracking the average number of passengers onboard trains provides an indicator for the likelihood of passengers being left behind, because full trains leave little room for additional passengers to board. during the most crowded times of the day, it is also useful to look at the numbers of passengers boarding and alighting trains at each station. passengers are most likely to be left behind at stations where trains arrive with high occupancy, few passengers alight, and many more passengers wait to board. by this measure, north station in the afternoon peak appears to be an ideal candidate for observing left behind passengers. using the same method for the southbound direction, sullivan square station was identified as an ideal candidate location for data collection in the morning peak. other candidate stations include back bay, chinatown and wellington stations. in addition to identifying stations with the greatest likelihood of passengers getting left behind crowded trains, the stations that are selected for detailed analysis should also have characteristics that are amenable to successful testing of video surveillance counting methods. there are a variety of station layouts and architectures that contribute complicating factors to the analysis of left behind passengers, and the goal of this study is to identify the potential for the adopted detection method under the best possible conditions. ideal conditions for the proposed analysis are: • dedicated platform for line and direction of interest -in this case, all passengers on a platform are waiting for the same train, so any passenger that does not board can be counted as being left behind. in the case of an island platform, observed passengers may be waiting for trains arriving on either track. in the mbta system, more than half of the station platforms for heavy rail rapid transit in the city center (the most crowded part of the system) meet this criterion. • high quality camera views -surveillance cameras vary in age, quality, and placement throughout the mbta system. newer cameras have higher definition video feeds. the quality of the view is also affected by lighting conditions, especially at aboveground station where sunlight and shadows can affect the clarity of the images. • platform coverage of camera views -the surveillance systems are designed to provide views of the entire platform area for security purposes. in some stations, the locations of columns obfuscate the views, requiring more cameras to provide this coverage. surveillance camera views were considered from five stations on the orange line (back bay, chinatown, north station, sullivan square, and wellington) that were identified through crowding analysis as candidate stations. ultimately, north station was selected as the study site for the northbound direction afternoon peak period because the station exhibits consistent crowding and the geometry provided good camera views. samples of the camera views from this station are shown in fig. . manual observations on the platform needed to be collected to establish a ground truth against which to compare alternative methods for measuring and estimating the number of passengers left behind crowded trains. 
detailed data collection at north station was conducted during afternoon peak hours ( : - : pm) on midweek days during non-holiday weeks (wednesday, november , and wednesday, january , ). three observers worked simultaneously on the station platform to record observations. although train-tracking records (ttr) report the times that each train enters the track circuit associated with a station, there is no automated record of the precise times that doors open and close. since passengers can only board and alight trains while the doors are open, recording these times manually is important for identifying when passengers board trains, when they are left behind, the precise dwell time in the station, and the precise headway between trains. each of the three observers recorded the times of doors opening and closing; the average of these observations is considered the true value. a simple linear regression model shows that observed dwell times (time from doors opening to doors closing) can be accurately estimated from automatic records of ttr arrival and departure times. fig. shows the data and regression results combining manual counts for november , and january , . there is no systematic difference between records from different days, and the r² is greater than . , indicating a good fit. (as a note to the dedicated-platform criterion above: all stations from tufts medical center through haymarket and the northbound platform at north station on the orange line ( platforms), three out of four blue line stations in downtown boston ( platforms), and all northbound platforms for the red line from south station to porter ( platforms) meet this criterion.)
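a minimal sketch of the dwell-time regression follows; the paired observations are hypothetical, and the point is only to show how observed door-open-to-door-close times can be regressed on the ttr departure-minus-arrival difference so that dwell time can later be estimated from ttr alone:

```python
# a minimal sketch (assumed) of the dwell-time linear regression.
import numpy as np

# hypothetical paired observations, in seconds
ttr_diff = np.array([45.0, 60.0, 75.0, 90.0, 120.0, 150.0])      # ttr depart - arrive
observed_dwell = np.array([30.0, 42.0, 55.0, 66.0, 95.0, 120.0]) # manual stopwatch

slope, intercept = np.polyfit(ttr_diff, observed_dwell, deg=1)
predicted = intercept + slope * ttr_diff

# r^2 of the fit
ss_res = np.sum((observed_dwell - predicted) ** 2)
ss_tot = np.sum((observed_dwell - observed_dwell.mean()) ** 2)
print(f"dwell ~ {intercept:.1f} + {slope:.2f} * ttr_diff, r^2 = {1 - ss_res/ss_tot:.3f}")
```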
since the platforms of interest serve only one train line in one direction, all entering passengers are assumed to wait to board the next train, and all exiting passengers are assumed to have alighted the previous train. combining these counts with the direct observations of the number of passengers left behind each time the doors close provides an accurate estimate of the number of passengers that were successfully able to board each train. fig. illustrates the cumulative numbers of passengers entering the platform (blue curve) and boarding the trains (orange curve). the steps in the orange curve correspond to the times that the train doors close. if passengers are assumed to arrive onto the platform and board trains in first-in-first-out (fifo) order, the red arrow represents the waiting time that is experienced by the respective passenger, which is estimated as the difference between the arrival and the boarding time. a timeseries of the actual number of passengers waiting on the platform is constructed by counting the cumulative arrivals of passengers to the platform over time and assuming that all passengers board departing trains except those that are observed to be left behind. this ground truth for data collected on november , , is shown in blue in fig. . the sawtooth pattern shows the growing number of passengers on the platform as time elapses from the previous train. the drops correspond to the times when doors close. at these times, the platform count usually drops to zero. when passengers are left behind, the timeseries drops to the number of left behind passengers. one such case is illustrated with the red arrow just before : in fig. . fig. . selected camera views from north station, orange line, northbound direction. c. sipetas, et al. transportation research part c ( ) . automated detection of passengers on platforms in video feeds the yolo algorithm uses pattern recognition to identify objects in an image. the coco training dataset was used to define the object detection parameters in yolo, as described in section . a threshold for certainty can also be calibrated to adjust the number of identified objects in a specific frame. if the threshold is set too high, the algorithm will fail to recognize some objects that do not adequately match the training dataset. if the threshold is set too low, the algorithm will falsely identify objects that are not really present. in order to identify the optimal threshold, frames from camera views were analyzed. each frame was analyzed separately for threshold values ranging from % to % to determine the optimal threshold value in relation to a manual count of passengers visible in the frame. the optimal threshold across all camera views is %, which minimizes the mean squared error between yolo and manual counts as shown in table . fig. shows the identified objects at each threshold level for the same frame from a camera installed in north station. the input for yolo is a set of frames, each of which are analyzed independently to detect objects. the algorithm runs quickly enough to analyze each frame in less than one second, so the surveillance video feeds are sampled at one frame per second to allow yolo to run faster than real time. although the analysis for this paper was conducted off line, it would be possible to implement the algorithm in real time. the output from yolo is a text file that lists the objects detected for each frame and the bounding box for the object within the image. 
a time series count of passengers on the platform is simply the number of "person" objects identified in the corresponding c. sipetas, et al. transportation research part c ( ) frames from each sample video feed. fig. a shows the raw passenger counts on the platform at north station for the time period from : pm - : pm on november , . although there are noisy fluctuations, there is a clear pattern of increasing passenger counts until door opening times (green). a surge of passenger counts while doors are open (between green and red) represents the passengers alighting the train and exiting the platform. passenger counts drop off dramatically following the door closing time (red), except in cases that passengers are left behind. for example, the third train in fig. a arrives after a long headway and shows roughly nine passengers left behind. to facilitate analysis of the automatic passenger counts from the surveillance videos, it is useful to work with a smoothed time series of passenger counts. using a smoothing window of ± seconds, the smoothed series is shown in fig. b . this smoothed time series is more suitable for a local search to identify the minimum passenger count following each door closing time. this represents the count of left-behind passengers identified through the automated object detection process. the smoothed video counts from the three surveillance camera feeds used to monitor the northbound orange line platform at north station are shown as the green curve in fig. . the automated passenger counting algorithm clearly undercounts the total number of passengers on the platform. the reason for this large discrepancy is that the algorithm can only identify people in the foreground of the images, where each person is large. therefore, the available camera views do not actually provide complete coverage of the platform for automated counting purposes. furthermore, when conditions get very crowded, it becomes more difficult to identify separate bodies within the large mass of people. the problem of undercounting aside, it is clear that the automated counts generate a pattern that is representative of the total number of passengers on the platform. using regression, the smoothed timeseries can be linearly transformed into a scaled timeseries (the orange curve in fig. ) , which minimizes the squared error compared with the manually counted timeseries. using this scaling method, the data from november , , were used to compare estimated counts of left-behind passengers in the peak periods with the directly observed values. this provides a measure of the accuracy of automated video counts. the total number of left-behind passengers estimated by this method is presented in table , where the root mean squared error (rmse) is calculated by comparing the number of passengers left-behind each time the train doors close. the scaling process, which makes the blue and orange curves in fig. match as closely as possible, results in substantially overcounted left behinds, because the scaling factor tends to over-inflate the counts when there are few passengers on the platform. as a direct measurement method, automated video counting is not satisfactory, at least as implemented with yolo. however, fig. c. sipetas, et al. 
transportation research part c ( ) shows a clear relationship between the video counts and passengers being left behind on station platforms, so there is potential to use the video feed as an explanatory variable in a model to estimate the likelihood of passengers being unable to board a train. in order to improve the accuracy of estimates of the number of passengers left behind on subway platforms, a logistic regression model is formulated to estimate the probability that each passenger is left behind based on explanatory variables that can be collected automatically. a logistic regression is used to estimate the number of passengers left behind by way of estimating the probability that each waiting passenger is left behind, because the logistic function has properties that are more amenable to this application. since passengers are only left behind when platforms and trains are very crowded, a linear regression has a tendency to provide many negative estimates of left behind passengers, which are physically impossible. the binary logit model, by contrast is intended for estimating the probability that one of two possible outcomes is realized (e.g., a passenger is either left behind or not left behind). the estimated probability from a logit model is always between and , so the resulting estimate of the number of left-behind passengers is always non-negative and cannot exceed the total number of waiting passengers. for estimation of the logistic regression, each passenger is represented as a separate observation, and all passengers waiting for the same departing train are associated with the same set of explanatory variables. over the course of a -h rush period, there are typically about trains serving north station, serving , to , passengers per period, and leaving behind well over c. sipetas, et al. transportation research part c ( ) passengers. logistic regression models are generally expected to give stable estimates when the data set for fitting includes at least observations for each outcome, so there is sufficient data to estimate parameters for a model that is structured this way. the logistic function defines the probability that a passenger is left behind by where x is a vector of explanatory variables, is a vector of estimated coefficients for the explanatory variables, and is an estimated alternative-specific constant. the estimation of the model can be thought of as identifying the values of and that best fit the observed outcomes y corresponds to a passenger being left behind, and = y corresponds to a passenger successfully boarding. the underlying assumption in this formulation is that the likelihood of being left behind can be expressed in terms of a linear combination of explanatory variables and a random error term, , which is logistically distributed. the explanatory variables that are considered in this study are as follows: . dwell time (time from door opening to door closing) or difference of ttr arrival and departure times . video count of passengers on platform following doors closing these explanatory variables can all be monitored automatically, without manual observations. video counts of passengers on the platform following doors closing are obtained from the object detection process described above. although dwell time is an appropriate explanatory variable because doors stay open longer when trains are crowded, the dwell time is not directly reported in archived databases. as demonstrated in fig. 
, observed dwell times can be accurately estimated from automatic records of ttr arrival and departure times. this leads to using ttr reported values of difference between train arrival and departure instead of dwell times for the model development. since these are essentially the same explanatory variable, we call this difference "dwell time" for the remainder of the paper. initially, three models were estimated, making use of only ttr data (model ), only video counts (model ), and then fused ttr c. sipetas, et al. transportation research part c ( ) and video counts (model ). the data from november , , were used to develop these models. the number of passengers waiting on the platform (as described in section . ) are used to determine the number of observations for estimating the parameters of the logit model. in total, passengers boarded arriving trains at north station during the rush period and of them were left behind. this leads to a sample size of passengers for the logistic models. models and are simple logistic regressions, each with only one independent variable. neither model has influential values (i.e., values that, if removed, would improve the fit of the model). model uses both ttr data and video counts, so it is important to diagnose the model's fit, especially with respect to the assumptions of the logistics regression. first, multicollinearity of explanatory variables should be low. the correlation between dwell time and video count is . and the variance inflation factor is . , both indicating that the magnitude of multicollinearity is not too high. second, no influential values were identified. third, the logistic regression is based on the assumption that there is a linear relationship between each explanatory variable and the logit of the response, p p log( /( )), where p represents the probabilities of the response. fig. shows that dwell time is approximately linear with the logit response, while there is somewhat more variability with respect to the video counts. neither plot suggests that there is a systematic mis-specification of the model. a summary of the estimated model coefficients and fit statistics is presented in table . the log likelihood is a measure of how well the estimated probability of a passenger being left behind matches the observations. the null log likelihood is associated with no model at all (every passenger is assigned a % chance of being left behind), and values closer to zero indicate a better fit. the value is a related measure of model fit, with values closer to indicating a better model. for all three models, the estimated coefficients have the expected signs and magnitudes. the positive coefficients for dwell time and video counts indicate a positive relationship with the probability of having left-behind passengers, which is intuitive. in order to compare models, the likelihood ratio statistic is used to determine whether the improvement of one model is statistically significant compared to another. the likelihood ratio test statistic is calculated by comparing the log likelihood of the restricted model (with fewer explanatory variables) to the unrestricted model (with more explanatory variables): comparing model (restricted) to model (unrestricted), one additional variable in model , indicates one degree of freedom, which sipetas, et al. transportation research part c ( ) requires > d . to reject the null hypothesis at the . significance level. comparison between models and gives = d . 
, indicating that model provides a significant improvement over model by adding video counts. comparison between models and gives = d . , which is also a significant improvement. the akaike information criterion (aic) is an additional model fit statistic that weighs the log likelihood against the complexity of the model. although model has more parameters, the aic is greater than for model or model , indicating that the improved log likelihood justifies the inclusion of both ttr and video count data. the logistic regression provides an estimate of the probability that passengers are left behind each time the train doors close. in order to translate this probability into a passenger count, the estimated number of passengers waiting on the platform from the scaled video count is used as an estimate of the number of passengers waiting to board. table shows the validation results when the models were applied to data collected on january , , for north station. the scaling factor used for the number of passengers waiting on the platform is estimated from november , data. considering the estimated number of left behind passengers for each train separately, it is observed that these models achieve higher accuracy when there are a few passengers left behind. overall, model exhibits error of only . % since it estimates that passengers are left behind in total when passengers were observed to be left behind. model gives a lower estimate of passengers being left behind, which leads to an error of approximately %. as shown in table and table , direct video counts (unscaled and scaled) do not provide accurate estimates of the total numbers of passengers left behind without some additional modeling. the unscaled video counts underestimate the total, while the scaled video counts overestimate the total. the logistic regression provides much better results. although there are some discrepancies for specific train departures, the estimated numbers of passengers left behind are not significantly biased and the total number of passengers left behind during the three-hour rush period is similar to the manually counted total. the logistic regressions estimate the probability of a passenger being left behind using only the explanatory variables listed in table . however, the estimated number of left behind passengers is calculated by multiplying the probability by the scaled video count of passengers on the platform at the time the doors opened, as estimated from the ttr data. therefore, the estimated number of c. sipetas, et al. transportation research part c ( ) passengers left behind with model and model rely only on ttr data that is currently being logged and supplemented by automated counts of passengers in existing surveillance video feeds. the models therefore utilize explanatory variables that are monitored automatically, and they can be deployed for continuous tracking of left behind passengers without needing additional manual counts. the logistic models could actually perform even better if there were a way to obtain a more accurate count of the number of passengers waiting for a train. during the morning peak period, the count of farecards entering outlying stations can provide a good estimate for the number of passengers waiting to board each inbound train. this is more challenging at a transfer station, like north station, in which many passengers are transferring from other lines. in some cases, strategically placed passenger counters could provide useful data. 
nevertheless, table presents the performance of the developed logistic regression models if their estimated probabilities are multiplied by the actual number of passengers on the platform instead of the estimated number as in table . this reveals the value of more accurate data, because model decreases its error compared to table . model in table estimates passengers being left behind in the afternoon rush on the observed date when the previous estimate was , which is a reduction of error from % to % for this model compared to the observed left behind passengers. another way to evaluate the performance of the developed models is to consider whether or not trains that leave behind passengers can be distinguished from trains that allow all passengers to board. through the course of data collection and analysis, the number of passengers being left behind because of overcrowding can only be reliably observed within approximately ± passengers. the reason for this is that sometimes people choose not to board a train for reasons other than crowding, and one or two passengers left on the platform did not appear to be consistent with problematic crowding conditions. if a train is defined to be leaving behind passengers when more than passengers are left behind, the results presented in table can be reinterpreted to evaluate each method by four measures: the number of trains in a time period that leave behind passengers due to overcrowding. . correct identification rate: the percent of trains that are correctly classified as leaving behind passengers or not leaving behind passengers, as compared to the manual count. this value should be as close to as possible. . detection rate: the percent of departing trains that were manually observed to leave behind passengers that are also flagged as such by the estimation method. this value should be as close to as possible. . false detection rate: the percent of departing trains that are estimated to leave behind passengers but have not, according to manual observations. this value should be as close to as possible. there is an important distinction to make here, because there are two ways that the model to identify trains leaving behind passengers can be used: (a) to estimate the number of trains that leave behind passengers, in which case we only care about measure ; or (b) to identify which specific trains are leaving behind passengers, in which case measures through are important. depending on how the data will be used, application (a) or (b) may be more relevant. for example, application (a) provides an aggregate measure of the number of trains leaving behind passengers. application (b), on the other hand, is what would be needed to get toward a real-time system for identifying (even predicting) left-behind passengers. a comparison of the four measures is presented in table for the trains that departed north station between : pm and : pm on january , . unscaled video counts provide a good estimate of the number of trains that leave behind passengers sipetas, et al. transportation research part c ( ) (measure ), but suffer from a low detection rate and high false detection rate. scaled video counts are poor estimators for the occurrence of left-behind passengers because they are high enough to trigger too many false detections. the modeled estimates both perform well in approaching the actual number of trains leaving behind passengers. model has the best performance for measures through . 
it never falsely identifies a train as leaving behind passengers, and it correctly detects most occurrences of passengers being left behind. like the count estimates above, both model and model rely on the scaled video counts to estimate the number of passengers waiting on the platform when the train doors open, so a fusion of ttr records and automated video counts provides the most reliable measures. another application of the model is to consider the distribution of waiting times implied by the estimated probabilities that passengers are left behind by each departing train. from the direct manual counts, a cumulative count of passengers arriving on the platform and of passengers boarding trains provides a time-series count of the number of passengers on the platform. if passengers are assumed to board trains in the same order that they enter the platform, the system follows a first-in-first-out (fifo) queue discipline. although it is certainly not true that passengers follow fifo order in all cases, this assumption allows the cumulative count curves to be converted into estimated waiting times for each individual passenger. the fifo assumption yields the minimum possible waiting time that each passenger could experience, and the waiting time for each passenger can be represented graphically by the horizontal distance between the cumulative number of passengers entering the platform and the cumulative number boarding trains (see fig. for data from november ). the yellow curve in fig. a represents the cumulative distribution of waiting times implied by the observed numbers of passengers entering the platform if all passengers on the platform are assumed to be able to board the next departing train; we call this the expected waiting time. the blue curve in fig. a is the cumulative distribution of waiting times when the numbers of left-behind passengers are accounted for on trains that are too crowded to board; we call this the observed waiting time, because it reflects direct observation of passengers waiting on the platform using manual counts. the distribution indicates the percentage of passengers that wait less than the published headway for a train departure, which is the reliability metric used by the mbta. for the orange line during peak hours, the published headway is min ( s). currently, the mbta is only able to track the expected wait time as a performance metric, and the difference between the yellow and blue curves indicates that failing to account for left-behind passengers leads to overestimation of the reliability of the system. the models developed in this study provide the estimated probability that a passenger is left behind each time the train doors close. in the absence of additional passenger count data, a constant arrival rate is assumed over the course of the rush period; the door-closing times from ttr and the probability of passengers being left behind from model can then be used to estimate the cumulative passenger boardings onto trains over time. under the same fifo assumptions described above, the distribution of experienced waiting times can be estimated based on train-tracking and video counts. the cumulative distribution of waiting times estimated by this process using probabilities from model is shown as a red curve in fig. b , which we call the uniform arrivals modeled wait time. table includes the values of experienced waiting times for the observed, the expected, and the modeled distributions.
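the fifo conversion from cumulative curves to per-passenger waits can be sketched as follows. this is our own minimal illustration of the technique described above, with hypothetical names and a safety cap we added: under the expected-wait assumption, boardings_per_train would simply equal the number of passengers waiting at each departure, while the observed and modeled variants reduce boardings on crowded trains so that left-behind passengers roll over to later departures.

```python
import numpy as np

def fifo_waiting_times(arrival_times, departure_times, boardings_per_train):
    """Per-passenger waits implied by cumulative arrival/boarding curves.

    arrival_times: sorted platform-entry times, one per passenger.
    departure_times: door-closing time of each successive train.
    boardings_per_train: passengers able to board each train (reduced
    on crowded trains, so left-behind passengers wait for a later one).
    """
    waits = []
    queue_start = 0  # index of the first passenger still on the platform
    for dep, boarded in zip(departure_times, boardings_per_train):
        boarded = min(boarded, len(arrival_times) - queue_start)  # safety cap
        # Under FIFO, the `boarded` longest-waiting passengers get on.
        for i in range(queue_start, queue_start + boarded):
            waits.append(dep - arrival_times[i])
        queue_start += boarded
    return np.array(waits)
```

each wait is the horizontal distance between the two cumulative curves at that passenger's position, which is exactly the graphical reading described in the text.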
this table also shows how the accuracy of the estimated waiting times can be improved if we consider the actual arrival rate under the same assumptions used to develop the uniform arrivals modeled wait time; we call this distribution the actual arrivals modeled wait time. the earth mover's distance (emd) is used to measure the difference between the observed distribution and the expected, uniform arrivals and actual arrivals modeled distributions (rubner et al.). as shown in table , the emd for the expected case is much higher than the emd for the modeled cases, which indicates that the proposed model reduces errors. the modeled distributions of waiting times closely approximate the observed distribution, which suggests that the estimated probabilities of passengers being left behind by each departing train are consistent with the overall passenger experience. the percentage of passengers experiencing waiting times at or below the min published headway is % for both the observed and uniform arrivals model curves, and % for the actual arrivals model curve. the automated count of left-behind passengers thus provides a close approximation of the actual service reliability when applied to the independent data collected in january . the expected distribution, which does not account for left-behind passengers, produces an estimate of % of passengers waiting less than min; it overestimates the reliability of the system by failing to account for the waiting time that left-behind passengers experience. this paper presents a method for measuring the number of passengers left behind by overcrowded trains in transit stations without records of exiting passengers. a study performed by miller et al. also addresses this challenging case, using manual video counts to calibrate the developed models. the methodology proposed in this paper uses archived data with automatic video counts as inputs to estimate the total number of left-behind passengers during peak demand periods; the automatic video counts are obtained through the implementation of image processing tools. this paper also presents an investigation of the effects of accounting for left-behind passengers on the estimation of the current reliability metric used by the mbta, the experienced waiting times. following a preliminary study of crowding conditions on the mbta's orange line, data collection and analysis focused specifically on northbound trains at north station during the afternoon peak hours. data were collected on two typical weekdays and confirmed that overcrowding is a common problem, even on days without disruptions to service. this is an indication that the system is operating very near capacity, where even small fluctuations in headways lead to overcrowded trains and left-behind passengers. this study specifically investigated the potential for measuring the number of left-behind passengers using existing data sources and automated passenger counts derived from existing surveillance video feeds. the analysis of automated passenger counts was based on the implementation of a fast, open-source algorithm called you only look once (yolo), using existing training sets that identify people as well as other objects. the performance is fast enough that frames from surveillance video feeds could potentially be analyzed in real time.
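returning to the emd comparison above: in one dimension the emd coincides with the first wasserstein distance, so the gap between waiting-time distributions, and the mbta-style reliability share, could be computed along these lines. this is a sketch under our assumptions: the function names are ours, and the 360 s headway is an illustrative stand-in for the published peak-hour headway, whose exact value is not reproduced in this copy.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def compare_wait_distributions(observed_waits, modeled_waits, headway_s=360):
    """EMD between two empirical waiting-time samples, plus the share of
    modeled waits within the published headway (the reliability metric).
    headway_s = 360 s (6 min) is an illustrative assumption."""
    emd = wasserstein_distance(observed_waits, modeled_waits)  # 1-D EMD
    reliability = float(np.mean(np.asarray(modeled_waits) <= headway_s))
    return emd, reliability
```

a smaller emd against the observed distribution indicates a modeled wait-time curve closer to the passenger experience, which is how the expected, uniform arrivals and actual arrivals cases are ranked in the table.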
although video counts were not accurate in isolation, the development of models that use automated video counts together with automated train-tracking records (model ) demonstrated good results for different applications. in predicting the number of trains leaving behind passengers, the developed models can correctly identify whether or not passengers were left behind for % of the trains. the number of passengers left behind during the afternoon rush period can be estimated to within % of the actual number using only automated video counts and automatically collected train-tracking records. with actual counts of the number of passengers on the station platform at each train arrival, the model can predict the number of left-behind passengers to within % of the actual number. furthermore, the modeled distribution of experienced waiting times reduced the total emd error by more than % compared to the error of the operator's expected distribution, in which left-behind passengers are not considered. this highlights the need to account for left-behind passengers when tracking the system's reliability metrics. there are a number of ways that this study could be extended. one approach would be to implement and evaluate the developed models over more days. in terms of passenger flow data, the odx model has some known drawbacks given existing limitations, such as the lack of tap-out farecard data or passenger counters on trains; in systems without these limitations, the developed models could achieve higher accuracy. the methodology presented here could also be combined with the previous study by miller et al. in order to improve the overall process for estimating left-behind passengers in subway systems without tap-out. comparing the two studies, miller et al. achieve higher accuracy in very crowded conditions, whereas our method performs better when there are few passengers left behind. the automated object detection presented in our study could also be combined with the model proposed by miller et al. as part of its real-time implementation for special events where real-time afc data are not available. in the area of image processing, a number of steps could be taken to improve the accuracy of video counts and to extend the feasibility to more challenging station environments. suggested approaches include comparing the algorithm with other fast and accurate video detection algorithms and training the algorithm to detect heads rather than whole bodies. although there are limitations to any single data source, the potential for improving performance metrics through data fusion and modeling continues to grow.
references:
valuing crowding in public transport: implications for cost-benefit analysis
simple player
uncovering the influence of commuters' perception on the reliability ratio
training mixture of weighted svm for object detection using em algorithm
people silhouette extraction from people detection bounding boxes in images
what does classifying more than 10,000 image categories tell us?
imagenet: a large-scale hierarchical image database
estimating the cost to passengers of station crowding
waiting time perceptions at transit stops and stations: effects of basic amenities, gender, and security
rich feature hierarchies for accurate object detection and semantic segmentation
the distribution of crowding costs in public transport: new evidence from paris
crowding in public transport: who cares and why?
crowding cost estimation with large scale smart card and vehicle location data
does crowding affect the path choice of metro passengers?
transit service and quality of service manual
discomfort externalities and marginal cost transit fares
image processing and analysis with graphs: theory and practice
crowding and public transport: a review of willingness to pay evidence and its relevance in project appraisal
microsoft coco: common objects in context
urban commuting: crowdedness and catecholamine excretion
mining smart card data for transit riders' travel patterns
estimation of denied boarding in urban rail systems: alternative formulations and comparative analysis
massachusetts bay transportation authority
estimation of passengers left behind by trains in high-frequency transit service operating near capacity
smart card data use in public transit: a literature review
a behavioural comparison of route choice on metro networks: time, transfers, crowding, topology and sociodemographics
darknet: open source neural networks in c
you only look once: unified, real-time object detection
the earth mover's distance as a metric for image retrieval
inference of public transportation trip destinations by using fare transaction and vehicle location data: dynamic programming approach
image processing, analysis, and machine vision
crowding in public transport systems: effects on users, operation and implications for the estimation of demand
a motion-based image processing system for detecting potentially dangerous situations in underground railway stations
feature-based recognition of objects
ensemble methods: foundations and algorithms
inferring left behind passengers in congested metro systems from automated data
acknowledgements: this study was undertaken as part of the massachusetts department of transportation research program, which is funded with federal highway administration (fhwa) and state planning and research (spr) funds. through this program, applied research is conducted on topics of importance to the commonwealth of massachusetts transportation agencies.

key: cord- -ozm f dy authors: naqvi, zainab batul; russell, yvette title: a wench's guide to surviving a 'global' pandemic crisis: feminist publishing in a time of covid-19 date: - - journal: fem leg stud doi: . /s - - - sha: doc_id: cord_uid: ozm f dy

it has been quite a year so far(!) and, as the wenches we are, we have been taking our time to collect our thoughts and reflections before sharing them at the start of this issue of the journal. in this editorial we think through the covid-19 pandemic and its devastating effects on the world, on our lives and on our editorial processes. we renew our commitment to improving our operations as a journal, and its health along with our own, as we deploy wench tactics to restore, sustain and slow down in order to negotiate this new reality, this new world. we conclude with an introduction to the fascinating contents of this issue, along with a collaborative statement of values on open access written as part of a collective of intersectional feminist and social justice editors. through all of the pain and suffering we focus our gaze on hope: hope that we can come through this global crisis together, engaging in critical conversations about how we can be better and do better as editors, academics and individuals, for ourselves, our colleagues and our journal.

in these strange times…in these uncertain times…in these unprecedented times.
how quickly our conversations and communications have become prefixed with a constant reminder of our current situation. our concern, our sympathies and our connectedness have all increased as we 'check in' with those we interact with regularly and, crucially, those we don't. as pandemic-related lockdowns in the uk and many parts of the world continue, causing many to experience restrictions on their movement and routines that they have never encountered before, along with the enforced closure of businesses and places of work, we have been reflecting on the spaces and positions we inhabit as feminist individuals, academics and editors. in this editorial we think through some of the consequences of the covid-19 pandemic and state responses to the spread of the virus in the context of our ongoing efforts to employ decolonising techniques and deploy wench tactics (fletcher et al.; naqvi et al.). in doing so, we seek to make sense of our new lived realities, although in many ways just this attempt to make sense of the effects on our existence is both bewildering and revealing. one way this lack of sense is most starkly manifest is in the way it plays with and disrupts time. we experience time as both exponentially sped up and painfully slowed down. over the three months during which we have tried to draft this editorial there have been political, social and economic changes and events too numerous to detail; literally thousands of people have died. but somehow this flux is accompanied by a nagging sense of stasis: we are mulling the same issues, many of us are 'stuck' in our homes with or away from family, and we are still beholden to the virus. this discombobulating confrontation with the contingency of linear time leads us to feminist work that contemplates time and timeliness, and to a necessary reflection on the nature of scholarly work and publishing and the temporal imperatives driving them. in mulling our work and the time in and according to which it occurs, we are also led inevitably to a rumination on health: what is 'health', and who possesses it? the oversimplified answer is that human health refers to our state of physical, mental and emotional wellbeing, and that surely everyone has health, which makes it a global concern. unfortunately, we have seen that it really isn't that simple or tidy. fassin warns us that concern for global health is not something we can take for granted. both 'global' and 'health' are contested concepts (bonhomme). global health is neither universal nor worldwide; it is not free from the politics of life and the value-giving processes that lead to lives being weighed against each other. the term 'global' is not only a geographical signifier but a "political work in progress that calls on us to remain ever mindful of the imperial durabilities of our time" (biehl, referring to stoler). in what follows we reflect further on the lessons of the pandemic and how we view health: as a global public good that we should all be working together to improve and maintain for everyone? or as a privilege that, in this patriarchal capitalist society, is only available to those who can afford it? we return to the question of time and the timely and try to think collectively with our feminist colleagues about publishing, 'slow scholarship' and wench tactics. we consider what feminist leadership looks like in a time of covid-19 and renew our calls for a firm commitment to decolonising academic publishing and the university.
for us, this has recently manifested in a collective statement on publishing and open access, which we have jointly produced and signed with several other intersectional feminist and social justice journal editorial boards. the editorial concludes with an acknowledgement of recently retired editorial board members and an introduction to the copy included in this issue of fls.

they are dirty, they are unsuited for life, they are unable, they are incapable, they are disposable, they are non-believers, they are unworthy, they are made to benefit us, they hate our freedom, they are undocumented, they are queer, they are black, they are indigenous, they are less than, they are against us, until finally, they are no more. (indigenous action)

a pandemic is the worldwide spread of a new disease (who)

in her discussion of affliction, disease and poverty, das discusses the way in which definitions of global health centre on the control of communicable diseases. controlling the spread of infectious disease between us, then, is how we measure the success of global health. along the same vein, the world health organization (who) has defined a pandemic as "the worldwide spread of a disease". these two statements may seem innocuous but contain layers of historical and contemporary oppression trapped within their meaning. it would be naive to claim that the covid-19 crisis is the first phenomenon to lay bare the structural injustices and inequalities which already plague us. and that is the key takeaway for us: we are already plagued, and have always been plagued, with communicable diseases including war, poverty, racism, colonialism and sexism (bonhomme; siyada). all around the world, lives are lost unnecessarily because of these diseases, and now that the privileged among us are personally at risk we urgently realise that the status quo is a problem, that we are all fragile (msimang). these existing diseases have spread worldwide and mutate into new forms all the time: we have been living through multiple, simultaneous pandemics our entire lives, and for many of us this is only now being thrown into stark relief. if we broaden this out further, the editors in us start to inquire into health as a broader concept. health is not only relevant to the human condition but can be applied to systems, processes and institutions such as the academy and the academic publishing industry. the crisis has highlighted the urgent need for us to reflect on the health of our academic lives and spaces, along with the ways in which social plagues have infected our ways of writing, editing, working and being. in doing so, we plan and strategise the best ways to be 'wenches in the works' (franklin), taking advantage of this period to deploy wench tactics and rest, restore and sustain. we aim, then, to take an all-encompassing approach to health: to interrogate the sicknesses and weaknesses that afflict our spaces and worlds and then try to act for change. whilst the who states that a pandemic involves a new disease spreading throughout the world, the current situation shows us that it is not just the spread of this viral pathogen that is causing the pandemic; it is the combination of intersecting oppressions with the spread of the covid-19 viral molecules that makes up the pandemic. we always benefit from employing decolonising techniques, looking back to the past to better understand the present.
the history of global health is a neo-colonial project (biehl; magenya) mired in imperialist and eurocentric attitudes towards disease control. disease control has repeatedly been used "to bolster the moral case for colonialism" (flint and hewitt), with colonialist administrators considering their ability to control the spread of infectious diseases in the colonies an important skill. this speaks to their civilising missionary attitude: that in purportedly tackling the spread of infectious disease they were benefitting the natives, and their presence was therefore positive (flint and hewitt). this is further reflected in the international regulations for containing the spread of infectious disease, which have historically emphasised controls to protect european and north american interests. the us's endorsement of the who prior to its inception was predicated on concerns for trading relationships, which were central to us economic growth (white). the us was only dedicated to "wiping out disease everywhere" when there was a risk that disease would enter its borders and affect its economy (white). these imperialist attitudes, which prioritise the health and economies of majority-white countries in europe, north america and australia, display a lack of regard for the deaths of those outside these territories. the spread of cholera in haiti, the ebola outbreaks (one of which is still ongoing in drc), avian flu, swine flu and bse have all been deemed epidemics that are not seen as serious or widespread enough to count as pandemics, despite the alarming numbers of sufferers and fatalities. these disease outbreaks all have one thing in common: they affected racialised people in "exotic, far-away and (made-to-be) poor lands". these are lands that have deficient healthcare systems and resources because of imperial exploitation. this perception of communicable disease outbreaks as afflicting unhealthy and dirty others in far-away places underpins global health approaches and policy, along with responses to the outbreaks in the west. even the labelling of the covid-19 crisis as a 'global pandemic' is loaded with meaning: it represents the fact that the disease has spread to, and is also killing, white people in the west on a mass scale. if it were not affecting this subset of the population in a meaningful way, it would 'just' be an epidemic. this is clearly demonstrated by the uk government's woeful response to the disease and the increasing mortality rate we are experiencing. aaltola states that "diseases exist, flourish and die wider than physical environments where they adapt to local memories, practices and cultures". by thinking that the uk is invincible because of imperial arrogance, the government has wilfully ignored the memories, practices and cultures of other countries with experience of managing such crises. instead of rushing to save lives, there was a rush to save the economy, telling us that there has been little progress in mindset since then. who suffers the most as a result? the marginalised, the poor and the underpaid key workers, who are disproportionately not white, because diseases are "embedded in and violently react with the fabric of political power", making them "signifiers of the underlying patterns of power" (aaltola; asia pacific forum on women, law and development). the virus may not discriminate, but the systems and structures by which the imperial state dictates who lives and survives do (anumo; rutazibwa).
this discrimination has spilled onto our streets and into our living rooms with the racist and xenophobic discourses that have led to the unjust treatment of vulnerable minorities in society. from the us president deliberately calling covid-19 the 'chinese virus' to the abuse shouted at east asian people on the streets, reactions to the virus have been rooted in intolerance and ignorance. these unsurprising (and imperialist) responses are further manifested in policy responses which disadvantage the most vulnerable of us, including minorities with greater representation in lower socioeconomic groups; victims of abuse who are now told to stay locked up in the house with their abuser; elderly people in care homes or congregate living arrangements; those with 'pre-existing' conditions or disabilities; and immigrants, who are being subjected to yet more nationalist rhetoric around border control and surveillance (see also step up migrant women uk). amidst these waves of pain and suffering we reel as we witness the continual devaluing of life with the deaths of breonna taylor in march; belly mujinga in april; george floyd in may; bibaa henry and nicole smallman in june; dominique rem'mie fells in june; and riah milton in june. these are just a few of the black women and men whose lives have been cruelly and callously taken this year. the list is endless: bleeding and weeping like an open wound. the black lives matter movement was given the attention it deserves by the national and international media as people took to the streets in solidarity, but the wheels of justice move slowly, grinding regularly to a halt and leaving us feeling helpless and hopeless at times. we wanted to express our support and decided to share, in a twitter thread, links to free downloads of our articles written by black authors and of papers that adopt critical race approaches. we have now provided a period of open access to some key contributions made by black scholars and activists to fls over the years. what might always have been an inadequate response/intervention quickly revealed some undeniable shortcomings in the journal, its processes and the academic publishing context we negotiate every day. the lack of contributions published in the journal by black scholars is undeniable and something we intend to reflect further on as a board. we know that this needs to be better, and we need to have more critical conversations around realising this. we are not looking for an immediate fix or cure; much like with the coronavirus, none is forthcoming yet. but health, for humans and for journals, is in a state of constant flux. it is an ongoing journey, and we can only keep trying to take steps in the right direction to be the best we can: to affirm the value of black scholarship and black lives and to counter the appalling racism of institutions and operations that has heightened in visibility throughout this pandemic. so, yes, we are afflicted, and have been since before this pandemic spread, but amongst the fear and the trauma this situation has revealed hope in humanity and offered an opportunity to step back and re-evaluate. if what they say is true, things will never be the same again, and that's exactly what we need: for things to change for the better; to remind us that our health is our wealth and that we need a (genuinely) global effort towards achieving this, and not just for certain parts of the world.
as feminist academics, writers, dreamers and above all wenches, we have been thinking about how best to deploy our tactics, our resources and our energies to change the 'global' health of academic publishing for the better: an approach to health which is not geared towards saving the economy of the industry first, but the people, their ideas and creativity, their knowledges from all over this suffering world. this leads us to try and make sense of the gendered and racialised impacts of the virus and this 'new reality' on the workforce and on our work as part of an industry: the academic industry. as is to be expected, the recent crisis has laid bare and exacerbated existing socioeconomic inequalities. in the united kingdom, for example, british black africans and british pakistanis are over two and a half times more likely to die in hospital of covid-19 than the white population (platt and warwick). among the reasons researchers suggest for the higher death rates among black, asian and minority ethnic (bame) populations in the uk is the fact that a third of all working-age black africans are employed in key worker roles, % more than the share of the white british population. pakistani, indian and black african men are respectively %, % and % more likely to work in healthcare, where they are particularly at risk, than white british men (siddique). underlying health conditions which render people more vulnerable to risk from infection are also overrepresented in older british bangladeshi men and in older people of a pakistani or black caribbean background. over % of those national health service workers who have died due to covid-19 thus far were from bame backgrounds (cook et al.). in addition to these stark ethnic and racial mortality disparities, research continues to emerge attesting to the disproportionate health, social protection and security, care, and economic burdens shouldered by women as the pandemic progresses (united nations). while men appear to carry a higher risk of mortality from the virus, the differential impacts of covid-19 on men and women remain largely ignored by governments and global health institutions, perpetuating gender and health inequities (wenham et al.). in terms of labour politics, a noticeable shift in working patterns during the pandemic has, perhaps unsurprisingly, disproportionately impacted women. as joanne conaghan observes, the effects of the pandemic are compounded for women due variously to "…their weak labour market position in low paid, highly precarious, and socially unprotected sectors of employment, their greater propensity to be living in poverty, along with the practical constraints which a significant increase in unpaid care work is likely to place on women's ability to pursue paid work". conaghan points out that the gender division of labour manifests itself in different ways throughout history, affecting the social and economic status of women. but what the covid-19 crisis reveals in this historical period is the extent to which labouring practices for many have been 'feminised', "not just in the sense that the proportion of women participating in paid work has exponentially increased but also because the working conditions traditionally associated with women's work-low-paid, precarious, and service-based rather than manufacturing-have become the norm" (conaghan).
thus women workers, but also vulnerable young people, migrants and low-paid precarious workers, are further exposed by the "perfect storm of poverty, destitution, sickness and death" generated by covid-19 (conaghan). how, then, are these labour realities relevant to us in the academic publishing sector? some editors are reporting a noticeable downturn in submissions by women authors and, in some cases, an upturn in submissions by men (fazackerley), which would be consistent with conaghan's thesis. the current paradigm, however, provides us with another opportunity to look at the mode of production operating in journal publishing, one that we at fls are implicated in and have long been critical of (fletcher et al.). our insistence that academic publishing, and feminist publishing in particular, be seen as a political endeavour drives a lot of our editorial policies, including an emphasis on the importance of global south scholarship, the employment of decolonising techniques in our editorial practice, our involvement in the recent global south writing workshops (naqvi et al.) and our continuing support for early career researchers (ecrs), particularly those from marginalised or minoritised communities. (zainab naqvi and kay lalor have recently secured a grant from the feminist review trust to run a workshop for 'global south' feminist ecrs based in the uk who work in the social sciences and humanities: see https://www.feminist-review-trust.com/awards/.) we remain troubled, however, by the insidious ambivalence of the neoliberal university as it lumbers on, undeterred and uninterested in the new lives we are all trying to adjust to. it was of serious concern to us, for example, that the ref publication deadline remained unaltered well into the onset of the pandemic, with associated impacts on journal editors and boards, reviewers and authors; we joined many colleagues in signing an open letter to demand the immediate cancellation of the publication deadline (https://femrev.wordpress.com/ / / /call-for-the-immediate-cancellation-of-the-ref- -publication-period/). in another appalling example of how structural disadvantages for black researchers are embedded in the academy, we are currently watching the unfolding saga of none of the £ . million worth of funding allocated by ukri and nihr to investigate the disproportionate impacts of covid-19 on 'bame' communities being awarded to black academics. this is compounded by the revelation that, of the grants awarded, had a member of the awards assessment panel as a named co-investigator. many of those in our feminist community have come together over the last four months in various fora to share ideas and to support one another as we both adjust to this new paradigm and resist the continued imposition of the old one (see, for example, graham et al.). we took part in a collective discussion in july with colleagues on the editorial boards of feminist theory, feminist review, european journal of cultural studies, european journal of women's studies and the sociological review about academic publishing in the context of covid-19. that discussion enabled us to share resources and build morale with a view to envisioning a future for feminist and critical academic publishing: a future in which, as a starting point, we challenge existing models of open access in publishing. the issues with open access are manifold, and we have reflected on these previously (fletcher et al.).
we aim to build and strengthen the links between the board and our fellow social justice and feminist journals to address this, along with the other problems that we have identified, experienced and maybe even fed into as editors. as a first step, we have written a collaborative statement setting out our joint reflections concerning open access, with a non-exhaustive list of some of the values we wish to embody as journals and to imbue our editorial work and processes with going forward. you can read the collective statement below at the end of this editorial, and we encourage other journals to join and sign it. reflecting on what has changed in this time of covid-19 and what has stayed the same has led us back in many ways to where we started with wench tactics (fletcher et al.). how do we engender our own time and space when what little time and space there is isn't really for us? returning to a conscious consideration of timeliness, and to the promise of the decolonial public university, might be a way to carve out time and space anew, or to resist the pull back to 'normality'. one way of undoing time in the institutional contexts in which we find ourselves is through attempting to articulate and practise an ethics of slowness. mountz and colleagues deploy a feminist ethics of care in trying to reimagine working conditions that challenge the imperatives of the neoliberal university. the authors emphasise the need to prioritise "slow-moving conversation[s] on ways to slow down and claim time for slow scholarship and collective action informed by feminist politics" (mountz et al.). on the ukri funding saga noted above, we support the open letter produced by ten black women colleagues calling on ukri for transparency and accountability: https://knowledgeispower.live/about/. the letter points out that, according to hesa data, only . % of full-time research positions in the uk are awarded to black and mixed heritage women, exposing the seriousness of a marginalisation in which black researchers cannot even get grants to do research with their own communities. this understanding of slow scholarship is predicated, of course, on a thoroughgoing critique of neoliberal governance and its drivers, which have fundamentally transformed the university in the uk (and elsewhere) over recent decades. karin van marle points out how neoliberal epistemologies crowd out other ways of knowing and being such that they become common sense. this has a chilling effect on the university, which "instead of being a space where multiple views and knowledges are celebrated… becomes a very specific place of exclusion and limitation". van marle insists that we try and think of the university by reference to a different set of aesthetics: "at least it should be one that acknowledges bodily-presence, sensory experiences, complexity and the need to slow down, to step aside from counting, competitiveness and suffocation". amid covid-19 we are on the precipice of an economic catastrophe for higher education, in which many of our colleagues will lose their jobs and the futures of early career researchers and those without permanent jobs look more precarious than ever. we are also concerned about vulnerable and disabled colleagues, pregnant people and others who cannot acclimatise to the changes that are going to be demanded of us.
we need to combine our ethos of slow scholarship with a sustainable collective labour politics that prioritises the most vulnerable among us, one that is particularly attentive to the disadvantages that devolve in line with the socio-economic/class, race and gender disparities discussed above. that the sector has long been poised on a knife-edge is something that many of our colleagues and unions have been warning about, and in the uk this has become more and more acute as austerity politics ravage the state sector, of which the university used to be a part. the notion of the public university often feels like a concept that is fast fading from our collective consciousness, but publishing, teaching and living in a time of covid-19 makes it prescient once again. as corey robin puts it: "public spending, for public universities, is a bequest of permanence from one generation to the next. it is a promise to the future that it will enjoy the learning of the present and the literature of the past. it is what we need, more than ever, today". that imagining how we want our world(s) and universities to be is also a profoundly decolonial imperative is something that we must reckon with and take responsibility for (see also otto and grear). as many institutions of higher learning in the global north have been forced to confront their complicity in the global slave trade and in other forms of imperialism in the wake of #blacklivesmatter, we have to insist on meaningful accountability and not, as foluke adebisi warns, pr stunts or marketing sops to 'diversity' politics. adebisi makes clear the importance of locating ourselves as researchers and teachers as a continuing part of the university's legacy, and the need to acknowledge racism and colonialism as ongoing processes: "my constant fear is that in the process of universities 'coming to terms', our proposals can turn out to be non-contextualised recommendations that do not take into account the embedded and extended nature of slavery and the slave trade". what if we showed our students, asks adebisi, "in very concrete terms, exactly how the past bleeds into the present, how we walk side by side with history's ghosts, how we breathe coloniality every day, how our collective history is literally present in every single thing we do?". in other words, how can we effectively distinguish, asks olivia rutazibwa, between teaching and learning that foregrounds the will to power and teaching and learning that foregrounds the will to live? by this she means that in our attempts at decolonising we "go beyond the merely representational" by engaging with and understanding the very materiality of being and the systems that determine and produce our lives (and deaths) (rutazibwa). that such pedagogical and activist praxis necessarily requires time, space and slow conversations is immediately clear. trying to think through slowness in the context of feminist decolonial editorial praxis is also a key aspect of wench tactics (fletcher et al.; fletcher). being a wench in the works entails deploying tactics to influence how our journal is used, accessed and circulated; we add to this by now utilising wench tactics to influence how our journal is produced. intrinsic to this is the timeline around production, use, access and circulation. as we work from home, experiencing lockdowns, shielding and distancing, time simultaneously runs away from us and stretches out before us.
to ground ourselves, then, we take a step back: we step out of the rat race that life has become and prioritise health; for ourselves, for others and for the journal. we first set out to rest and restore. we break out of the increasingly frantic rhythms and deadlines that are being fired at us by our institutions and do something else: we aim to remind ourselves of who we are and what we do. in practical terms, this has reminded us that the production and success of our journal are not dependent on us alone but on others, including our amazing authors, reviewers, copyediting team and, of course, our readers. in recognition of the hard work, commitment and engagement of all these people, we have given extended periods for the different steps in the issue production process, from reviews to revisions and even the writing of this editorial. as we do this, we remain defiant and difficult in the face of a publishing industry environment that requires constant, enthusiastic engagement. this is mirrored in higher education more broadly as we are inundated with email after email about all the changes we must make to our teaching, research and general working practices in the upcoming year. we need to rest and take restoration measures; we need time and resources to return to ourselves, and using wench tactics is an important way to achieve this. to support our rest and restoration, we have also been guided by slow scholarship principles, which sideline the measures of productivity, competition and finances underpinning the current institutional and structural approaches to this crisis. instead we emphasise slower conversations and work to sustain ourselves, our health and the journal. we withdraw from institutional priorities which value automation-like speeds so that we can sustain critical engagement with ourselves, our editorial practices and one another. we place worth and value, then, on ways of being and working which sustain us, nourish us and keep us grounded, reminding ourselves and one another that it is completely understandable that things will take time, need more time, and deserve more time than the industry wants us to believe. again, this requires us to be difficult and defiant: a decolonial feminist technique which reconfigures what is seen as valuable and worthy. here, we critically question what is currently being positioned as valuable and worthy in industrial terms and then re-order the list to move our rest, restoration and sustenance to the top. getting things done is valuable and worthy, but ensuring that we are rested and restored, so that we can sustain ourselves, our engagement with the work we do, and our health, is more so. fls is a community, and we have been taking time in our meetings to make space to check in with one another, hear how each member is doing and practise building care and solidarity with and for one another. this is not limited to our meetings but extends to the spaces and platforms outside our 'formal', scheduled interactions. we aim to be there for each other on social media and in collective and individual ways. in doing so, we seek to model best practices of feminist leadership. inspired by leila billing's writings, we first make the invisible visible and cultivate cultures of mutual care. we make ourselves visible to one another, and to others: we want to be accessible to all of you and remind our colleagues and readers that, like you, we are human beings struggling with our lives, health and commitments during this crisis.
we are there for each other in an ongoing state of mutual care. in response to the terrible impacts this crisis and the already toxic aspects of the academy are having on minoritised ecrs, and to make ourselves more visible and model these mutual care principles, zainab and kay have secured funding to run a writing and mentoring workshop for 'global south' feminist ecrs in the social sciences and humanities based in the uk. more information will be released on our social media channels, so please look out for it and apply if you are eligible and interested. finally, we want to model feminist leadership by imagining and celebrating alternatives. this is exhibited in our recent work, with our colleagues from other feminist and social justice journals, to imagine what a life after existing models of open access could and should look like (see below). the dreamers inside us envisage alternative ways to share our research and celebrate forms and productions of knowledges that are not given enough attention by us or the academy. in her work around complaint, sara ahmed advances the formation of the 'complaint collective'. when we complain, we object to something that should not be happening, but we also complain because we are hopeful about how things could be different (ahmed). as we speak out against existing publishing models, we are optimistic about how things can change, and we become connected with others who have the same complaints and the same hope. this leads us to form a complaint collective with our fellow editors and those who are also concerned about the status quo, giving us the necessary space, time and opportunity to collaboratively imagine, celebrate and speak out in hope for an alternative model of publishing that is healthier, more equitable and representative. change and movement are inevitable, and as we face the challenges of the present and dream about how we can make things better for the future, we now celebrate several of our cherished colleagues who are moving on to new and exciting things. before we introduce the papers that make up this issue of the journal, we want to acknowledge the work of our colleagues who have recently retired from their roles on the editorial board of fls. julie mccandless, nicola barker and diamond ashiagbor are irreplaceable members of our collective, and we already miss their sage wisdom, warmth and dedication to fls. all three joined the fls editorial board when the journal became independent of the university of kent and were instrumental in guiding the journal as it has grown over the last six years. julie mccandless is a powerhouse whose commitment to and influence on fls cannot be overstated. she was a co-ordinating editor for the journal for most of her tenure, and authors will remember her thoroughness, care and generosity as an editor. nicola barker was a book reviews editor during her time on the board, and her invaluable contributions to our lively discussions and decision-making processes filled our time together with warmth and laughter. finally, diamond ashiagbor, as well as serving as a book review editor for a significant period of her tenure, brought such vast experience and rigour to her role on the editorial board that we will dearly miss her wise counsel. we send our love and solidarity to these wonderful colleagues (and fellow wenches) and wish them well as they continue to blaze a feminist trail for us. and so, we are on the lookout for some more wonderful colleagues to join our editorial board.
we have released a call for members, aiming to recruit colleagues from the uk and ireland through an application process. if you are interested in joining the board, please do apply. we want the board to be as representative as possible and especially encourage feminist colleagues at any career stage from minoritised groups to apply. if you have any questions about applying, please do get in touch with us; we would be delighted to tell you just how much fun it is to be a wench in the works. this issue of the journal includes some remarkable feminist legal scholarship, notable for its breadth, both scholarly and geographic. caroline dick's article, 'sex, sexism, and judicial misconduct: how the canadian judicial council perpetuates sexism in the legal realm', is a fascinating and sobering look at bias in decisions of the canadian judicial council. dick considers two separate judicial misconduct complaints adjudicated by the council, one in which a male judge exhibited bias against women while adjudicating a sexual assault trial and a second in which graphic, sexual pictures of a female judge were posted on the internet without her knowledge or consent. dick concludes that the decisions of the council indicate that it is itself perpetuating gendered stereotypes informed by the notional ideal victim, further perpetuating sexism both in canadian courtrooms and among the judiciary. in our second article of this issue, maame efua addadzi-koom carefully examines the history and effectiveness of the maputo protocol, a uniquely african instrument on women's rights that was established with the promise of addressing the regional peculiarities of african women. analysing what little case law there is invoking the protocol and concerning gender-based violence against women, addadzi-koom takes stock of the influence of the protocol and the burgeoning due diligence principle on the women's rights jurisprudence of the ecowas community court of justice (eccj). addadzi-koom concludes her discussion with some recommendations, arguing that the protocol and the due diligence principle should be more widely applied by the eccj to centre women's rights in the sub-region and beyond. in '"is this a time of beautiful chaos?": reflecting on international feminist legal methods', faye bird delves deep into feminist jurisprudence with an intriguing interrogation of margaret radin's work and, in particular, her distinction between the 'ideal' and the 'non-ideal' as a means of evaluating different methodologies for critiquing international law and institutions. bird asserts that (re)viewing radin's framework in this context presages a new and more fruitful feminist pluralism through which we might better navigate institutional strategising. having featured heavily in faye bird's foregoing article, dianne otto reflects artfully, in our next paper, on the latest iteration of the feminist judgments project in her review essay 'feminist judging in action: reflecting on the feminist judgments in international law project'. otto observes aspects of the feminist judgments that were transformative, before turning to the contributors' 'reflections', which highlight some of the obstructions encountered and compromises made in the processes of judging.
otto concludes that the new collection makes a useful and compelling contribution to concretising feminist methods and highlighting the role of international jurisprudence as a feminist endeavour, while contributing to the insights of the feminist judgments project more broadly by exposing the scope and limits of the justice delivered by the legal form of judging. the issue is completed by book reviews of three exciting new titles, all of which speak to issues of immediate concern to feminist legal scholars: eva nanopoulos reviews honor brabazon's wonderful edited collection neoliberal legality: understanding the role of law in the neoliberal project; lynsey mitchell considers the research handbook on feminist engagement with international law, edited by susan harris rimmer and kate ogg; and felicity adams reviews emma k. russell's queer histories and the politics of policing. we are, as always, eternally grateful for the generosity and collegiality of our reviewers, without whom the journal could not function. we conclude this editorial with the recently written feminist and social justice editors' collaborative statement of intent on the values and principles we wish to adopt and embody in our work and in our efforts to survive, thrive and maybe even dismantle parts of the academic publishing machine. our journey and vital conversations around and towards health continue as we try to become better editors, academics and women: taking the time and resources to be our best (and healthiest) wench selves. we are a collective of intersectional feminist and social justice journal editors. we reject the narrow values of efficiency, transparency and compliance that inform current developments and policies in open access and platform publishing. together, we seek further collaboration in the development of alternative publishing processes, practices and infrastructures imbued with the values of social and environmental justice. the dominant model of open access is driven by commercial values. commercial licences, such as cc-by, are mandated or preferred by governments, funders and policy makers who are effectively seeking more public subsidy for the private sector's use of university research, with no reciprocal financial arrangement (berry). open access platforms such as academia.edu are extractive and exploitative: they defer the costs of publishing to publishers, universities and independent scholars, while selling the data derived from the uses of publicly funded research. as such, they represent the next stage in the capitalisation of knowledge. commercial platforms are emphatically not open source and tend towards monopoly ownership. presenting themselves as mere intermediaries between users, they obtain privileged access to surveil and record user activity and benefit from network effects. a major irony of open access policy is that it aims to break up the giants of commercial journal publishing but facilitates existing or emerging platform monopolies. the tech industry, which now dominates publishing and seeks to dominate the academy through publishing, having offered open access as a solution to the ills of scholarly publishing, is currently offering solutions to the problems caused by open access, including discoverability, distribution, digital preservation and the development and networking of institutional repositories that stand little to no chance of competing with academia.edu.
platforms are not only extractive but have material effects on research, helping to effect a movement upstream in the research cycle whereby knowledge is redesigned, automatically pre-fitted for an economy based on efficiency, competition, performance, engagement or impact. alongside the transformative agreements currently being made between (mainly) commercial journal publishers and consortia led by powerful universities, publishers such as elsevier are gaining greater access to the research cycle and to the data currently owned by universities. open access benefits commercial interests. the current model also serves to sideline research and scholarship produced outside universities altogether, creating financial barriers to publishing for scholars outside the global north/west and for independent scholars, as well as for early career researchers and others whose institutional affiliation is, like their employment status, highly precarious and contingent, and for authors who do not have the support of well-funded institutions and/or whose research is not funded by research councils. moreover, stem fields and preprint platforms are determining the development of open access publishing cultures. these are forms of content management that offer cost reduction and other efficiencies by erasing the publisher and minimising the editorial function. they raise questions of quality assurance and further the technologisation, standardisation and systematisation of scholarly research, such as the automation of peer review and the disaggregation of journals and books into article or article-based units that can be easily monitored and tracked. the underlying values of widening participation, public knowledge and the fair sharing of resources therefore need to be reclaimed. platforms should be refitted for ahss scholarship (where speed, for example, is not an indicator of the importance of research) and integrated with more conventional modes of dissemination and distribution better suited to the field and its preference for print monographs. platform development should be distributed and institutionally owned and, instead of replacing the publisher-as-problem, it should recognise and represent a more diverse set of publishing interests, stemming from scholar-led and university press publishers that are mission-driven and not-for-profit. it should enable and sustain the innovation generated through intellectual kinship across diaspora spaces. open access reaches into, and disrupts, the academy through policy mandates that are, at present, unfunded or underfunded and that defer more of the costs of publishing onto a sector that could not support them even before the covid-19 pandemic and its catastrophic effect on institutional finances as well as individual lives and wellbeing. as a collective of feminist and social justice journal editors, we believe that journal publishing during and after the pandemic should seek to end the exploitation of scholarly labour and foreground a new ecological economics of scholarly publishing based on cooperation and collaboration instead of competition; responsible householding, or careful management of the environment rather than extraction; and the fair sharing of finite resources (such as time and materials).
rather than extracting more resources (including free labour) from an already depleted and uneven sector, thereby further entrenching inequalities within and between universities globally, and sidelining scholarship produced outside of universities altogether, journal publishing after open access should be responsive and responsible toward the wellbeing, values and ambitions of diverse scholars and institutions across ahss and stem in the global south and the global north. we will learn from, and engage with, other collaborative ventures such as amelica in latin america, coko in the us and copim in the uk. building on these initiatives, which are primarily concerned with implementing open science or open humanities agendas, we are inaugurating a more radical project of re-evaluating and reorganising journal publishing:
• replacing the values of efficiency, transparency and compliance with those of equality, diversity, solidarity, care and inclusion
• providing a more sustainable and equitable ecological economics of scholarly publishing in tune with social and environmental justice
• working collectively and collaboratively rather than competitively
• thinking and acting internationally, rather than through parochial national or regional policies
• working across publishing and the academy with a view to responsible householding and accountability in both sectors
• seeking to work across funding and institutional barriers, including between stem and ahss scholars
• seeking further collaborations and partnerships in order to build new structures (disciplines, ethics, processes and practices of scholarship including peer review, citation, impact, engagement and metrics) and infrastructures to support a more healthy and diverse publishing ecology
• challenging the technologisation and systematisation of research by working to increase our visibility as editors and academics, making us and our publications more accessible and approachable for those who are minoritised in academic publishing
publishing after open access does not have a resolution (let alone a technological solution) or endpoint, but rather is a continual process of discussion, controversy-making and opening up to possibilities. we do not know what journal publishing after open access is, but we do know that we must work together in order to create a just alternative to the existing extractive and predatory model, an alternative that operates according to a different set of values and priorities than those that dominate scholarly publishing at the moment. these values and priorities need to inform or constitute new publishing systems committed to the public ownership rather than the continued privatisation of knowledge. we recognise that the choice we face is not between open and closed access, since these are coterminous, but between publishing practices that either threaten or promote justice. we fully recognise the scale of the challenge in promoting justice against the global trend of entrenched populism, nationalism and neoliberalism. collective action and intervention is a starting point, and we take inspiration from the recent statement issued by the black writers' guild. our open exploration of the future of journal publishing will be informed by the history of radical and social justice publishing and by intersectional feminist knowledge and communication practices that are non-binary, non-hierarchical, situated, embodied and affective.
against the instrumentalisation and operationalisation of knowledge, we will foreground both validation and experimentation, authority and ethics. we will ask, against a narrow implementation of impact and metrics, what really counts as scholarship, who gets to decide, who gets counted within its remit, and what it can still do in the world. we believe that knowledge operates in, rather than on, the world, co-constituting it, rather than serving as a form of mastery and control. the re-evaluation of knowledge and its dissemination is, therefore, we believe, a necessary and urgent form of re-worlding. we are open to other journals joining this collective. if you are interested please get in touch with any of the signatories below:
european journal of cultural studies
european journal of women's studies
feminist legal studies
feminist theory
the sociological review

understanding the politics of pandemic scares: an introduction to global politosomatics
complaint as diversity work
why complain? feministkilljoys
unchecked corporate power paved the way for covid- and globally, women are at the frontlines. cambridge core
covid- highlights the failure of neoliberal capitalism: we need feminist global solidarity
the uses of open access. stunlaw, philosophy and critique for a digital age
theorizing global health
what does feminist leadership look like in a pandemic? medium
epidemics and global history: the power of medicine in the middle east
coronavirus, colonization, and capitalism. common dreams
exclusive: deaths of nhs staff from covid- analysed
covid- and inequalities at work: a gender lens. futures of work
affliction: disease, health and poverty (forms of living)
that obscure object of global health
women's research plummets during lockdown - but articles from men increase. the guardian
on being uncomfortable
wench tactics? openings in conditions of closure
playing with the slow university? thinking about rhythm, routine and rest in decelerating life. presentation at qmul
sexism as a means of reproduction: some reflections on the politics of academic practice
colonial tropes and hiv/aids in africa: sex, disease and race
dialogue on the impact of coronavirus on research and publishing
indigenous action. . rethinking the apocalypse: an indigenous anti-futurist manifesto
making a feminist internet in africa: why the internet needs african feminists and feminisms
for slow scholarship: a feminist politics of resistance through collective action in the neoliberal university
homesick: notes on a lockdown
back at the kitchen table: reflections on decolonising and internationalising with the global south sociolegal writing workshops
international law, social change and resistance: a conversation between professor anna grear (cardiff) and professorial fellow dianne otto (melbourne)
are some ethnic groups more vulnerable to covid- than others? the institute for fiscal studies
the pandemic is the time to resurrect the public university. the new yorker
on babies and bathwater: decolonizing international development studies
the corona pandemic blows the lid off the idea of western superiority. https://oliviarutazibwa.wordpress.com/ / / /the-corona-pandemic-blows-the-lid-off-the-idea-of-western-superiorit
what they did yesterday afternoon
british bame covid- death rate 'more than twice that of whites'. the guardian
coronavirus pandemic in the shadow of capitalist exploitation and imperialist domination of people and nature: statement by the regional secretariat for the north african network for food sovereignty. committee for the abolition of illegitimate debt
migrant women: failed by the state, locked in abuse
duress: imperial durabilities in our times. durham: duke up
united nations. . policy brief: the impact of covid- on women. april
"life is not simply fact": aesthetics, atmosphere & the neoliberal university
covid- : the gendered impacts of the outbreak
the art of medicine - historical linkages: epidemic threat, economic risk, and xenophobia

key: cord- -mwitcseq authors: bu, f.; steptoe, a.; mak, h. w.; fancourt, d. title: time-use and mental health during the covid- pandemic: a panel analysis of , adults followed across weeks of lockdown in the uk date: - - journal: nan doi: . / . . . sha: doc_id: cord_uid: mwitcseq there is currently major concern about the impact of the global covid outbreak on mental health. but it remains unclear how individual behaviors could exacerbate or protect against adverse changes in mental health. this study aimed to examine the associations between specific activities (or time use) and mental health and wellbeing amongst people during the covid pandemic. data were from the ucl covid social study, a panel study collecting data weekly during the covid pandemic. the analytical sample consisted of , adults living in the uk who were followed up for the strict week lockdown period from st march to st may . data were analyzed using fixed effects and arellano-bond models. we found that changes in time spent on a range of activities were associated with changes in mental health and wellbeing. after controlling for bidirectionality, behaviors involving outdoor activities including gardening and exercising predicted subsequent improvements in mental health and wellbeing, while increased time spent on following news about covid predicted declines in mental health and wellbeing. these results are relevant to the formulation of guidance for people obliged to spend extended periods in isolation during health emergencies, and may help the public to maintain wellbeing during future pandemics. a number of studies have demonstrated the negative psychological effects of quarantine, lockdowns and stay-at-home orders during epidemics including sars, h n influenza, ebola, and covid- [ - ] . these effects include increases in stress, anxiety, insomnia, irritability, confusion, fear and guilt [ ] [ ] [ ] . to date, much of the research on the mental health impact of enforced isolation during the pandemic has focused on the mass behavior of "staying at home" as the catalyst for these negative psychological effects. but there has been little exploration into how specific behaviors within the home might have differentially affected mental health, either exacerbating or protecting against adverse psychological experiences. re-allocation of time use has been shown during other social shocks where people suddenly are forced to spend a significant amount of time at home, with individuals quickly having to adapt behaviorally to new circumstances and develop new routines. for example, during the - recession, adults in the us who lost their jobs reallocated % of their usual working time to "non-market work", such as home production activities (e.g.
cleaning, washing), childcare, diy, shopping, and care of others, and spent % of the time on leisure activities, including socializing, watching television, reading, sleeping, and going out . similarly, during the covid- pandemic, research suggests that while many individuals were able to continue working from home, others experienced furloughs or loss of employment, and many had to take on increased childcare responsibilities . further, individuals globally experienced a sharp curtailing of leisure activities, with shopping, day trips, going to entertainment venues, face-to-face social interactions, and most activities in public spaces prohibited. analyses of google trends have suggested negative effects of these limitations on behaviors, showing a rise in search intensity for boredom and loneliness alongside searches for worry and sadness during the early weeks of lockdown in europe and the us . but it is not yet clear what effect these changes in behaviors had on mental health. there is a substantial literature on the relationship between the ways people spend their time and mental health. certain behaviors have been proposed to exert protective effects on mental health. for instance, studies on leisure-time use show that taking up a hobby can have beneficial effects on alleviating depressive symptoms , engaging in physical activity can reduce levels of depression and anxiety and enhance quality of life [ ] [ ] [ ] [ ] , and broader leisure activities such as reading, listening to music, and volunteering can reduce depression and anxiety, increase personal empowerment and optimism, foster social connectedness, and improve life satisfaction [ ] [ ] [ ] [ ] [ ] . however, other behaviors may have a negative influence on mental health. engaging in productive activities (e.g. work, housework, caregiving) has been found in certain circumstances to be associated with higher levels of depression , and sedentary screen time can increase the risk of depression , especially when watching news or browsing the internet relating to stressful events. this relationship between time use and mental health is bidirectional, as mental ill health has been shown to predict lower physical activity , lower motivation to engage in leisure activities , and increased engagement in screen time . however, there are few data on the association between daily activities and mental health amongst people staying at home during the covid- pandemic. further, it is unclear if activities that are usually beneficial for mental health had similar psychological benefits during the pandemic. this topic is pivotal, as understanding time use will help in formulating healthcare guidelines for individuals continuing to stay at home due to quarantine, shielding, or virus resurgences during the current global crisis and in potential future pandemics. therefore, this study involved analyses of longitudinal data from over , adults captured during the first two months of 'lockdown' due to the covid- pandemic in the uk. it explored the time-varying relationship between a wide range of activities and mental health, including productive activities, exercising, gardening, reading for pleasure, hobbies, communicating with others, following news on covid- , and sedentary screen time. specifically, given research showing the inter-relationship yet conceptual distinction between different aspects of mental health, we focused on three different outcomes.
anxiety combines negative mood states with physiological hyperarousal, while depression also combines negative mood states with anhedonia (loss of pleasure), and life satisfaction is an assessment of how favorable one feels towards one's attitude to life [ , ] . crucially, symptoms of anxiety and depression can coexist with positive feelings of subjective wellbeing such as life satisfaction, and even in the absence of any specific symptoms of mental illness, individuals can experience low levels of wellbeing . so this study sought to disentangle differential associations between time use and multiple aspects of mental health. as these relationships can be complex and are likely bidirectional, this study explored (a) concurrent changes in behaviors and mental health to identify associations over time, and (b) whether changes in behaviors temporally predicted changes in mental health, accounting for the possibility of reverse causality by using dynamic panel methods. participants: data were drawn from the ucl covid- social study, a large panel study of the psychological and social experiences of over , adults (aged +) in the uk during the covid- pandemic. the study commenced on st march , involving online weekly data collection from participants for the duration of the covid- pandemic in the uk. whilst not random, the study has a well-stratified sample that was recruited using three primary approaches. first, snowballing was used, including promoting the study through existing networks and mailing lists (including large databases of adults who had previously consented to be involved in health research across the uk), print and digital media coverage, and social media. second, more targeted recruitment was undertaken focusing on (i) individuals from a low-income background, (ii) individuals with no or few educational qualifications, and (iii) individuals who were unemployed. third, the study was promoted via partnerships with third sector organisations to vulnerable groups, including adults with pre-existing mental illness, older adults, and carers. the study was approved by the ucl research ethics committee ( / ) and all participants gave informed consent. the full study protocol, including details on recruitment, retention, and weighting, is available at www.covidsocialstudy.org. in this study, we focused on participants who had at least two repeated measures between st march and st may , when the uk went into strict lockdown on the rd march and remained largely in that situation until st june (although the lockdown measures started to be eased earlier in different uk nations). this provided us with data from , participants (total observations , ; mean observations per person . ; range to ). depression during the past week was measured using the patient health questionnaire (phq- ), a standard instrument for diagnosing depression in primary care . the questionnaire involves nine items, with responses ranging from "not at all" to "nearly every day". higher overall scores indicate more depressive symptoms. anxiety during the past week was measured using the generalized anxiety disorder assessment (gad- ), a well-validated tool used to screen and diagnose generalised anxiety disorder in clinical practice and research . there are items with -point responses ranging from "not at all" to "nearly every day", with higher overall scores indicating more symptoms of anxiety.
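to make the scoring concrete, the sketch below shows how phq- and gad- style sum scores are typically computed from item responses coded 0-3 ("not at all" to "nearly every day"). this is an illustration in python, not the study's actual pipeline; the file and column names are hypothetical.

```python
# illustrative scoring sketch; file and column names are hypothetical
import pandas as pd

df = pd.read_csv("survey_wave1.csv")  # assumed survey export, one row per response

phq_items = [f"phq_{i}" for i in range(1, 10)]  # nine depression items, each 0-3
gad_items = [f"gad_{i}" for i in range(1, 8)]   # seven anxiety items, each 0-3

# higher totals indicate more depressive / anxiety symptoms
df["depression"] = df[phq_items].sum(axis=1)  # possible range 0-27
df["anxiety"] = df[gad_items].sum(axis=1)     # possible range 0-21
print(df[["depression", "anxiety"]].describe())
```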
life satisfaction was measured by a single question on a scale of to : "overall, in the past week, how satisfied have you been with your life?" thirteen measures of time-use/activities were considered. these included (i) working (remotely or outside of the house), (ii) volunteering, (iii) household chores (e.g. cooking, cleaning, tidying, ironing, online shopping etc.) or caring for others including friends, relatives or children, (iv) looking after children (e.g. bathing, feeding, doing homework or playing with children), (v) gardening, (vi) exercising outside (including going out for a walk or other gentle physical activity, going out for moderate or high intensity activity such as running, cycling or swimming), or inside the home or garden (e.g. doing yoga, weights or indoor exercise), (vii) reading for pleasure, (viii) engaging in home-based arts or crafts activities (e.g. painting, creative writing, sewing, playing music etc.), engaging in digital arts activities (e.g. streaming a concert, virtual tour of a museum etc.), or doing diy, woodwork, metal work, model making or similar, (ix) communicating with family or friends (including phoning, video talking, or communicating via email, whatsapp, text or other messaging service), (x) following up information on covid- (e.g. watching, listening, or reading news, or tweeting, blogging or posting about covid- ), (xi) watching tv, films, netflix etc. (not for information on covid- ), (xii) listening to the radio or music, and (xiii) browsing the internet, tweeting, blogging or posting content (not for information on covid- ). each measure was coded as rarely (< mins), low ( mins- hrs) and high (> hrs), except for low-intensity activities such as volunteering, gardening, exercising, reading, and arts/crafts. these were coded as none, low (< mins) and high (> mins). we used a 'stylized questions' approach where participants were asked to focus on a single day and consider how much time they spent on each activity on the list. however, given concerns about the cognitive burden of focusing on a 'typical' day (which involves aggregating information from multiple days and averaging), we asked participants to focus just on the last weekday (either the day before or the last day prior to the weekend if participants answered on a saturday or sunday). this approach follows aspects of the 'time diary' approach, but we chose a weekday to remove variation in responses due to whether participants took part on weekends . data analysis started with standard fixed-effects (fe) models. fe analysis has the advantage of controlling for unobserved individual heterogeneity and therefore eliminating potential biases in the estimates of time-variant variables in panel data. it uses only within-individual variation, which can be used to examine how the change in time use is related to the change in mental health within individuals over time. as individuals are compared with themselves over time, all time-invariant factors (such as gender, age, income, education, area of living etc.) are accounted for automatically, even if unobserved.
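as an illustration of the within-individual logic described above, the sketch below demeans each variable by person before regressing, which is numerically equivalent to the fe (within) estimator. this is a minimal python sketch under assumed variable names (id, depression, gardening), not the authors' stata code.

```python
# minimal fixed-effects (within) sketch; file and column names are assumed
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("panel.csv")  # assumed long format: one row per person-week

# demeaning within person removes all stable characteristics, observed or not
for col in ["depression", "gardening"]:
    df[col + "_w"] = df[col] - df.groupby("id")[col].transform("mean")

# only within-person change over time identifies the association
fe = smf.ols("depression_w ~ gardening_w", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["id"]}  # cluster SEs by person
)
print(fe.summary())
```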
compared with standard regression methods, it allows for causal inference to be made under weaker assumptions in observational studies. however, fe analysis does not address the direction of causality. given this limitation, we further employed the arellano-bond (ab) approach , which uses lags of the outcome variable (and regressors) as instruments in a first-difference model (eq. ). the ab model uses $y_{i,t-2}$ and further lags as instruments for $y_{i,t-1} - y_{i,t-2}$. the rationale is that the lagged outcomes are unrelated to the error term in first differences, $\varepsilon_{i,t} - \varepsilon_{i,t-1}$, under a testable assumption that the $\varepsilon_{i,t}$ are serially uncorrelated. further, we treated the regressors, $x_{i,t}$, as endogenous ($E(x_{i,t}\varepsilon_{i,s}) \neq 0$ for $s \leq t$; $E(x_{i,t}\varepsilon_{i,s}) = 0$ for $s > t$). therefore, $x_{i,t}$ should be instrumented by $x_{i,t-1}$, $x_{i,t-2}$ and potentially further lags. the ab models were estimated using optimal generalized method of moments (gmm). to account for the non-random nature of the sample, all data were weighted to the proportions of gender, age, ethnicity, education and country of living obtained from the office for national statistics . to address multiple testing, we provided adjusted p values (q values) controlling for the positive false discovery rate. these were generated by using the 'qqvalue' package . all analyses were carried out using stata v and the ab models were fitted using the user-written command, xtabond . demographic characteristics of participants are shown in table s in the supplement. as shown in table , the within variation accounted for about % of the overall variation for depression, and % for anxiety. anxiety explained % of the variance in depression (r= . , p<. ) and % of the variance in life satisfaction (r=- . , p<. ), while depression explained % of the variance in life satisfaction (r=- . , p<. ). there were also substantial changes in the time-use/activity variables ( figure ). over % of participants changed status in all activities, except for volunteering ( %) and childcare ( %). increases in time spent working, doing housework, gardening, exercising, reading, engaging in hobbies, and listening to the radio/music were all associated with decreases in depressive symptoms ( table , model i-i). the largest decrease in depression was seen for participants who increased their exercise levels to more than minutes per day, who increased their time gardening to more than minutes per day, or who increased their work to more than hours per day. on the contrary, increasing time spent following covid- news or doing other screen-based activities (either watching tv or internet use/social media) was associated with an increase in depressive symptoms. when examining the direction of the relationship (table , model i-ii), increases in gardening, exercising, reading, and listening to the radio/music predicted subsequent decreases in depressive symptoms. however, increases in time spent following news on covid- predicted increases in depressive symptoms, as did increases in time spent looking after children or moderate increases in communicating via videos, calling or messaging with others.
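the multiple-testing adjustment mentioned in the methods above can be reproduced in spirit with the benjamini-hochberg procedure, a close relative of the positive false discovery rate approach implemented by stata's 'qqvalue' package (not the identical method). the p values below are made-up placeholders.

```python
# FDR adjustment sketch; the p values are illustrative placeholders
from statsmodels.stats.multitest import multipletests

pvals = [0.001, 0.004, 0.03, 0.20, 0.41]  # e.g., one per time-use predictor
reject, qvals, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
for p, q, r in zip(pvals, qvals, reject):
    print(f"p = {p:.3f} -> q = {q:.3f}{' (significant)' if r else ''}")
```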
increases in time spent gardening, exercising, reading and other hobbies were all associated with decreases in anxiety, while increasing time spent following covid- news and communicating remotely with family/friends was associated with increases in anxiety ( table , model ii-i). the largest decrease in anxiety was seen for participants who increased their time on gardening, exercising or reading to minutes or more per day. when looking at the direction of the relationship (table , model ii-ii), increases in gardening predicted a subsequent decrease in symptoms of anxiety. but increasing time spent following news on covid- predicted an increase in anxiety. life satisfaction: increases in time spent working, volunteering, doing housework, gardening, exercising, reading, engaging in hobbies, communicating remotely with family/friends, and listening to the radio/music were all associated with an increase in life satisfaction, while increasing time spent following covid- news was associated with a decrease in life satisfaction ( table , model iii-i). when looking at the direction of the relationship (table , model iii-ii), increases in volunteering, gardening and exercising predicted a subsequent increase in life satisfaction. but increasing time spent following news on covid- , working, and looking after children predicted a decrease in life satisfaction. we carried out sensitivity analyses excluding keyworkers who might not have been isolated at home in the same way and therefore might have had different patterns of behaviors during lockdown. the results were materially consistent with the main analysis (see the supplementary material). this is the first study to examine the impact of time use on mental health amongst people during the covid- pandemic. time spent on work, housework, gardening, exercising, reading, hobbies, communicating with friends/family, and listening to music were all associated with improvements in mental health and wellbeing, while following the news on covid- (even for only half an hour a day) and watching television excessively were associated with declines in mental health and wellbeing. whilst the relationship between time use and mental health is bidirectional, when exploring the direction of the relationship using lagged models, behaviors involving outdoor activities including gardening and exercising predicted subsequent improvements in mental health and wellbeing, while time spent watching the news about covid- predicted declines in mental health and wellbeing. our findings of negative associations between following the news on covid- and mental health echo a cross-sectional study from china showing that social media exposure during the pandemic is associated with depression and anxiety . the fact that exposure to covid- news is largely screen-based, and the fact that watching high levels of television or high social media engagement unrelated to covid- was also found to be associated with depression, could suggest that this finding is more about the screens than the news specifically . however, the association with following the news on covid- was independent of these other screen behaviors and was found for even relatively low levels of exposure ( mins- hours).
further, there have been wider discussions of the negative impact of news during the pandemic, including concerns about the proliferation of misinformation and sensationalised stories on social media , and information overload, whereby the amount of information exceeds people's ability to process it . it is notable that these associations were found for all measures of mental ill-health and wellbeing, and even in lagged models that attempted to remove the effects of reverse causality, suggesting the strength of its relationship with mental health. however, other activities were shown to have protective associations with mental health. in particular, outdoor activities such as gardening and exercise were associated with better levels of mental health and wellbeing across all measures, with many of these results maintained in lagged models. these results echo many previous studies into the benefits of outdoor activities [ ] [ ] [ ] [ ] . exercise (including gentle activities such as gardening) can affect mental health via physiological mechanisms (such as reducing blood pressure), neuroendocrine mechanisms (such as reducing levels of cortisol involved in stress response), neuroimmune mechanisms (including reducing levels of inflammation associated with depressive symptoms and increasing the synthesis and release of neurotransmitters and neurotrophic factors associated with neurogenesis and neuroplasticity), and psychological mechanisms (including improving self-esteem, autonomy and mood) . particularly during lockdown, such activities (which provided opportunities to leave the home) may have helped in providing physical and mental separation from fatiguing or stressful situations at home, offering a change of scenery, and providing a feeling of being connected to something larger . hobbies such as listening to music, reading, and engaging in arts and other projects were also associated with better mental health across all measures. this builds on substantial literature showing the benefits of such activities in reducing depression and anxiety, building a sense of self-worth and self-esteem, fostering self-empowerment, and supporting resilience . the associations presented here show that these activities have remained beneficial to mental health during lockdown. however, these associations were not retained as consistently across lagged models. this suggests that they may be linked more bidirectionally with mental health, with changes in mental health also driving individuals' motivations to engage with these activities. there are several other noteworthy findings from these analyses. first, volunteering was associated with higher levels of life satisfaction, including across lagged models that explored the direction of association, but not with other aspects of mental health. previous studies have suggested psychological benefits of volunteering, but our findings suggest that it plays a specific role in supporting evaluative wellbeing during the pandemic . second, both work and housework had some protective associations when looking at parallel changes with mental health over time.
however, when looking at lagged models, housework does not appear to have been a precursor to changes in mental health, whilst frequent working was associated with lower life satisfaction, independent of other types of predictors. this echoes research highlighting working from home as a cause of stress for many people during the covid- pandemic . similarly, looking after children was not associated with changes in mental health in our main models, but increases to high volumes of childcare were associated with higher levels of depression and lower life satisfaction over time. this could reflect strain from spending substantial amounts of time on childcare or, as such increases may reflect changes in other aspects of home life such as a partner having to reduce childcare to go back to work, it could also reflect other stressors that may have in fact been driving changes in mental health. finally, communicating with family/friends had mixed effects in our main models, but when exploring the direction of association, it was in fact associated with higher levels of depression. this could be explained by data from previous studies showing that while face-to-face interactions can decrease loneliness (which is associated with mental health including depression), communication over the telephone (or other digital means) can in certain circumstances increase loneliness, perhaps as it is perceived as a less emotionally rewarding experience . this study has a number of strengths, including its large sample size, repeated weekly follow-up over the weeks of uk lockdown, and robust statistical approaches being applied. however, the ucl covid- social study did not use a random sample. nevertheless, the study does have a large sample size with wide heterogeneity, including good stratification across all major socio-demographic groups, and analyses were weighted on the basis of population estimates of core demographics, with the weighted data showing good alignment with national population statistics and another large scale nationally representative social survey. but we cannot rule out the possibility that the study inadvertently attracted individuals experiencing more extreme psychological experiences, with subsequent weighting for demographic factors failing to fully compensate for these differences. this study looked at adults in the uk in general, but it is likely that "lock-down" or "stay at home" orders had a different impact on time use for people with different sociodemographic characteristics, for example age and gender. while our analyses statistically took account of all stable participant characteristics (even if unobserved) by comparing participants against themselves, future studies could examine how the relationship between time use and mental health differs by individuals' characteristics and backgrounds. we also lack data to see how behaviors during lockdown compared to behaviors prior to covid- , so it remains unknown whether changes such as increasing time spent on childcare or leisure activities were unusual for participants and therefore not part of their usual coping strategies for their mental health. finally, we asked individuals to focus on the last available weekday in answering the questions on time use.
whilst this has been shown to improve the quality and accuracy of recollection, it does mean that variations in time use across the entire week are not captured. additionally, whilst we standardised our questions to the last weekday and used the same response with all participants consistently across lockdown (which is well recognised as an approach in tracking time use, as discussed in the methods section), it is nevertheless possible that behaviors across weekends may also have been influencing mental health independent of weekday behaviors. overall, our analyses provide the first comprehensive exploration of the relationship between time use and mental health during lockdowns due to the covid- pandemic. many behaviors commonly identified as important for good mental health, such as hobbies, listening to music, and reading for pleasure, were found to be associated with lower symptoms of mental illness and higher wellbeing. these results were seen when exploring parallel changes in time use and behaviors, attesting to the importance of both encouraging health-promoting behaviors to support mental health, and understanding mental health when setting guidelines on healthy behaviors during a pandemic. we also explored the direction of the relationship, finding that changes in outdoor activities including exercise and gardening were strongly associated with subsequent changes in mental health. however, increasing exposure to news on covid- was strongly associated with declines in mental health. these results are important in formulating guidance for people likely to experience enforced isolation for months to come (either due to quarantine, self-isolation or shielding) and are also key in preparing for future pandemics so that more targeted advice can be given to individuals to help them stay well at home.
the impact of covid- epidemic declaration on psychological consequences: a study on active weibo users
the depressive state of denmark during the covid- pandemic
anxiety and depression among general population in china at the peak of the covid- epidemic
the experience of quarantine for individuals affected by sars in toronto
survey of stress reactions among health care workers involved with the sars outbreak
the psychological impact of quarantine and how to reduce it: rapid review of the evidence
time use during the great recession
lockdown in the uk: why women and especially single mothers are disadvantaged
assessing the impact of the coronavirus lockdown on unhappiness, loneliness, and boredom using google trends
fixed-effects analyses of time-varying associations between hobbies and depression in a longitudinal cohort study: support for social prescribing?
gardening is beneficial for health: a meta-analysis
physical activity for cognitive and mental health in youth: a systematic review of mechanisms
growing minds: evaluating the effect of gardening on quality of life and physical activity level of older adults
self-esteem, self-efficacy, and social connectedness as mediators of the relationship between volunteering and well-being
who health evidence synthesis report. cultural contexts of health: the role of the arts in improving health and wellbeing in the who european region
volunteering and well-being: do self-esteem, optimism, and perceived control mediate the relationship?
why fiction may be twice as true as fact: fiction as cognitive and emotional simulation

key: cord- -ppu idl authors: russo, daniel; hanel, paul h. p.; altnickel, seraphina; berkel, niels van title: predictors of well-being and productivity among software professionals during the covid- pandemic -- a longitudinal study date: - - journal: nan doi: nan sha: doc_id: cord_uid: ppu idl the covid- pandemic has forced governments worldwide to impose movement restrictions on their citizens. although critical to reducing the virus' reproduction rate, these restrictions come with far-reaching social and economic consequences. in this paper, we investigate the impact of these restrictions on an individual level among software engineers currently working from home. although software professionals are accustomed to working with digital tools in their day-to-day work, not all of them are used to working remotely, and the abrupt and enforced work-from-home context has resulted in an unprecedented scenario for the software engineering community. in a two-wave longitudinal study (n = ), we covered over psychological, social, situational, and physiological factors that have previously been associated with well-being or productivity. examples include anxiety, distractions, psychological and physical needs, office set-up, stress, and work motivation. this design allowed us to identify those variables that explain unique variance in well-being and productivity. results include ( ) the quality of social contacts predicted an individual's well-being positively, and stress predicted it negatively, when controlling for other variables consistently across both waves; ( ) boredom and distractions predicted productivity negatively; ( ) productivity was less strongly associated with all predictor variables at time two compared to time one, suggesting that software engineers adapted to the lockdown situation over time; and ( ) the longitudinal study did not provide evidence that any predictor variable causally explained variance in well-being and productivity.
our study can assess the effectiveness of current work-from-home and general well-being and productivity support guidelines and provide tailored insights for software professionals. the mobility restrictions imposed on billions of people during the covid- pandemic in the first half of successfully decreased the reproduction rate of the virus [ , ] . however, quarantine and isolation also come with tremendous costs on people's well-being [ ] and productivity [ ] . for example, the psychosocial consequences of covid- mitigation strategies have resulted in an estimated average loss of . years of life [ ] . while prior research [ ] has identified numerous factors either positively or negatively associated with people's well-being during disastrous events, most of this research was cross-sectional and included a limited set of predictors. further, whether productivity is affected by disastrous events and, if so, why precisely, has not yet been investigated in a peer-reviewed article to the best of our knowledge. this is especially relevant since many companies, including tech companies, have instructed their employees to work from home [ ] at an unprecedented scope. thus, it is unclear whether previous research on remote work [ ] still holds during a global pandemic while schools are closed and professionals often have to work in non-work dedicated areas of their homes. it is particularly interesting to study the effect of quarantine on software engineers as they are often already experienced in working remotely, which might help mitigate the adverse effects of the lockdown on their well-being and productivity. therefore, there is a compelling need for longitudinal applied research that draws on theories and findings from various scientific fields to identify variables that uniquely predict the well-being and productivity of software professionals during the quarantine, for both the current and potential future lockdowns. the software engineering community has never before faced such a wide-scale lockdown and quarantine scenario during the global spread of the covid- virus. as a result, we cannot build on pre-existing literature to provide tailored recommendations for software professionals. accordingly, in the present research, we integrate theories from the organizational [ ] and psychological [ , ] literature, as well as findings from research on remote work [ , , ] and recommendations by health [ , ] and work [ ] authorities targeted at the general population. this longitudinal investigation provides the following contributions:
- first, by including a range of variables relevant to well-being and productivity, we are able to identify those variables that are uniquely associated with these two dependent variables for software professionals and thus help improve guidelines and tailor recommendations.
- second, a longitudinal design allows us to explore which variables predict (rather than are predicted by) well-being and productivity of software professionals.
- third, the current mobility restrictions imposed on billions of people provide a unique opportunity to study the effects of working remotely on people's well-being and productivity.
our results are relevant to the software community because the number of knowledge workers who are at least partly working remotely is increasing [ ] , yet the impact of working remotely on people's health and productivity is not well understood yet [ ] .
we focus on well-being and productivity as dependent variables because both are crucial for our way of living. well-being is a fundamental human right, according to the universal declaration of human rights, and productivity allows us to maintain a certain standard of living and thus also affects our overall well-being. thus, our research question is: what are relevant predictors of well-being and productivity for software engineers who are working remotely during a pandemic? in the remainder of this paper, we describe the related work about well-being in quarantine and productivity in remote work in section , followed by a discussion of the research design of this longitudinal study in section . the analysis is described in section , and results are discussed in section . implications and recommendations for software engineers, companies, and any remote-work interested parties are then outlined in section . finally, we conclude this study by outlining future research directions in section . to slow down the spread of pandemics, it is often necessary to quarantine a large number of people [ , ] and enforce social distancing to limit the spread of the infection [ ] . this typically implies that only people working in essential professions such as healthcare, police, pharmacies, or food chains, such as supermarkets, are allowed to leave their homes for work. if possible, people are asked to work remotely from home. however, such measures are perceived as drastic and can have severe consequences on people's well-being [ , ] . previous research has found that being quarantined can lead to anger, depression, emotional exhaustion, fear of infecting others or getting infected, insomnia, irritability, loneliness, low mood, post-traumatic stress disorders, and stress [ , , , , , ] . the fear of getting infected and infecting others, in turn, can become a substantial psychological burden [ , ] . also, a lack of necessary supplies such as food or water [ ] and insufficient information from public health authorities adds to increased stress levels [ ] . the severity of the symptoms correlated positively with the duration of being quarantined, and symptoms can still appear years after quarantine has ended [ ] . this makes it essential to understand what differentiates those whose mental health is more negatively affected by being quarantined from those who are less strongly affected. however, a recent review found that no demographic variable was conclusive in predicting whether someone would develop psychological issues while being quarantined [ ] . moreover, prior studies investigating such predictors focused solely on demographic factors (e.g., age or number of children [ , ] ). this suggests that additional research is needed to identify psychological and demographic predictors of well-being. for example, prior research suggested that a lack of autonomy, which is an innate psychological need [ ] , negatively affects people's well-being and motivation [ ] , yet evidence to support this claim in the context of a quarantine is missing. to ease the intense pressure on people while being quarantined or in isolation, research and guidelines from health authorities provide a range of solutions on how an individual's well-being can be improved. some of these factors lie outside the control of individuals, such as the duration of the quarantine or the information provided by public authorities [ ] .
in this study, we therefore focus on those factors that are within the control of individuals. however, investigating such factors independently might make little sense since they are interlinked. for example, studying the relations between anxiety and stress with well-being in isolation is less informative, as both anxiety and stress are negatively associated with well-being [ , ] . however, knowing which of the two has a more substantial impact on people's well-being above and beyond the other is crucial, as it allows inter alia policymakers, employers, and mental health support organizations to provide more targeted information, create programs that are aimed to reduce people's anxiety or stress levels, and improve people's well-being, since anxiety and stress are conceptually independent constructs. thus, it is essential to study these variables together rather than separately. the containment measures not only come at a cost for people's well-being but also negatively impact their productivity. for example, the international monetary fund (imf) estimated in june that the world gdp would drop by . % as a result of the containment measures taken to reduce the spread of covid- , with countries particularly hit by the virus, such as italy, expected to experience a drop of over % [ ] . this expected drop in gdp would be significantly larger if many people were unable to work remotely from home. however, previous research on the impact of quarantine typically focused on people's mental and physiological health, thus providing little evidence on the effect on productivity of those who are still working. luckily, the literature on remote work, also known as telework, allows us to get a broad understanding of the factors that improve and hinder people's productivity during quarantine. the number of people working remotely had been growing in most countries already before the covid- pandemic [ , ] . of those working remotely, % do so for all of their working time. the vast majority of remote workers ( %) would recommend others to do the same [ ] , suggesting that the advantages of remote work outweigh the disadvantages. the majority of people who work remotely do so from the location of their home [ ] . working remotely has been associated with a better work-life balance, increased creativity, positive affect, higher productivity, reduced stress, and fewer carbon emissions because remote workers commute less [ , , , , , , ] . however, working remotely also comes with its challenges. for example, challenges faced by remote workers include collaboration and communication (named by % of , surveyed remote workers), loneliness ( %), not being able to unplug after work ( %), distractions at home ( %), and staying motivated ( %) [ ] . while these findings are informative, it is unclear whether they can be generalized. for instance, if mainly those with a long commute or those who feel comfortable working from home prefer to work remotely, it would not be possible to generalize to the general working population. a pandemic such as the one caused by covid- in forces many people to work remotely from home. being in a frameless and previously unknown work situation without preparation intensifies common difficulties in remote work. adapting to the new environment itself and dealing with additional challenges adds to the difficulties already previously identified and experienced by remote workers, and could intensify an individual's stress and anxiety and negatively affect their working ability.
the advantages of remote work might, therefore, be reduced or even eliminated. substantial research is needed to understand further what enables people to work effectively from home while being quarantined [ ] . the current situation shows how important research in this field already is. forecasts indicate that remote work will grow on an even larger scale than it did over the past years [ , ] ; therefore, research results on predictors of productivity while working remotely will increase in importance. some guidelines have been developed to improve people's productivity, such as the guidelines proposed by the chartered institute of personnel and development, an association of human resource management experts [ ] . examples include designating a specific work area, wearing working clothes, asking for support when needed, and taking breaks. however, while potentially intuitive, empirical support for those particular recommendations is still missing. adding to the complexity, the measurement of productivity, especially in software engineering, is a debated issue, with some authors suggesting not to consider it at all [ ] . nevertheless, individual developers' productivity has a long investigation tradition [ ] . prior work on developer productivity primarily focused on developing software tools to improve professionals' productivity [ ] or on identifying the most relevant predictors, such as task-specific measurements and years of experience [ ] . similarly, understanding which developer skill sets are relevant for productivity has also been a typical line of research [ ] . eventually, as la toza et al. pointed out, measuring productivity in software engineering is not just about using tools; instead, it is about how they are used and what is measured [ ] . in the present research, we build on the literature discussed above to identify predictors of well-being and productivity. additionally, we also include variables that were identified as relevant by other lines of research. furthermore, we chose a different setting, sampling strategy, and research design than most of the prior literature. this is important for several reasons. first, many previous studies included only one or a few variables, thus masking whether other variables primarily drive the identified effects. for example, while boredom is negatively associated with well-being [ ] , it might be that this effect is mainly driven by loneliness, as lonely people report higher levels of boredom [ ] , or vice versa. only by including a range of relevant variables is it possible to identify the primary variables, which can subsequently be used to write or update guidelines to maintain one's well-being and productivity while working from home. second, this approach simultaneously allows us to test whether models developed in an organizational context, such as the two-factor theory [ ] , can also predict people's well-being in general, and whether variables that were associated with well-being for people being quarantined also explain productivity. third, while previous research on the (psychological) impact of being quarantined [ ] is relevant, it is unclear whether this research is generalizable and applicable to the covid- pandemic. in contrast to previous pandemics, during which only some people were quarantined or isolated, the covid- pandemic strongly impacted billions globally.
for example, previous research found that people who were quarantined were stigmatized, shunned, and rejected [ ] ; this is unlikely to repeat as the majority of people are now quarantined. fourth, research suggests [ ] that pandemics become increasingly likely due to a range of factors (e.g., climate change, human population growth) which make it more likely that pathogens such as viruses are transmitted to humans. this implies that it would be beneficial to prepare ourselves for future pandemics that involve lockdowns. fifth, the trend to remote work has been accelerated through the covid- pandemic [ ] , which makes it timely to investigate which factors predict well-being and productivity while working from home. the possibility of studying this under extreme conditions (i.e., during quarantine) is especially interesting as it allows us to include more potential stressors and distractors of productivity. this is critical. as outlined above, previous research on the advantages and challenges of remote work can presumably not be generalized to the population because mainly people from certain professions and with specific living and working conditions might have chosen to work remotely. sixth and finally, a longitudinal design allowed us to test for causal inferences. specifically, in wave , we identified variables that explain unique variance in well-being and productivity, which we measured again in wave . this is important because it is possible that, for example, the amount of physical activity predicts well-being or that well-being predicts physical activity. additionally, we are able to test whether well-being predicts productivity or vice versa; previous research found that they are interrelated [ , ] . the variables we are planning to measure in the present longitudinal study are displayed in figure . to facilitate its interpretation, we categorized the variables into four broad sets of predictors, which are partly overlapping. we include all variables related to people's well-being and productivity that we discussed above and measured on an individual level. to summarize, while the initial selection of predictors is theory-driven, based on previous research, or recent guidelines, the selection of predictors included in the second wave is data-driven.
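the lagged logic described above (a wave-1 predictor explaining later outcomes beyond the outcome's own stability, and vice versa) can be sketched as a pair of regressions. this is an illustrative python sketch, not the authors' analysis code; the file and column names are hypothetical.

```python
# cross-lagged sketch; file and column names are hypothetical
import pandas as pd
import statsmodels.formula.api as smf

wide = pd.read_csv("waves_wide.csv")  # assumed: one row per participant, both waves

# does the wave-1 predictor explain wave-2 well-being beyond its own stability?
m1 = smf.ols("wellbeing_t2 ~ wellbeing_t1 + exercise_t1", data=wide).fit()
# reverse direction: does wave-1 well-being drive the later behavior instead?
m2 = smf.ols("exercise_t2 ~ exercise_t1 + wellbeing_t1", data=wide).fit()

print(m1.params["exercise_t1"], m2.params["wellbeing_t1"])
```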
we further propose that three needs are relevant to people's well-being and productivity [ , ] . specifically, we propose that the needs for autonomy and competence are deprived for many people who are quarantined, which negatively affects well-being and motivation [ ] . the need for competence is deprived especially for those who cannot maintain their productivity level, which might particularly be the case for those living with their families. in contrast, the need for relatedness might be oversatisfied for those living with their family. another important factor associated with one's well-being is the quality of one's social relationships [ ] . because people have fewer opportunities to engage with others they know less well, such as colleagues in the office or their sports teammates, the quality of existing relationships becomes more important: having good friends facilitates social interactions either in person (e.g., with a partner in the same household) or online (e.g., video chats with friends). moreover, we expect extraversion to be linked to well-being and productivity. for example, extraverted people prefer more sensory input than introverted people [ ] , which is why they might struggle more with being quarantined. extraversion also correlated negatively with support for social distancing measures [ ] , which is a proxy of stimulation (e.g., being closer to other people will more likely result in sensory stimulation). finally, research on predictors of productivity while working from home can be theoretically grounded in models of job satisfaction and productivity, such as herzberg's two-factor theory [ ] . this theory states that the causes of job satisfaction can be clustered into motivators and hygiene factors. motivators are intrinsic and include advancement, recognition, the work itself, growth, and responsibilities. hygiene factors are extrinsic and include the relationship with peers and supervisor, supervision, policy and administration, salary, working conditions, status, personal life, and job security. both factors are positively associated with productivity [ ] . as there are few differences between remote and on-site workers in terms of motivators and hygiene factors [ ] , the two-factor theory provides a good theoretical basis for predicting the productivity of people working remotely. in our two-wave study, we cover an extensive set of predictors, as identified above. based on the literature mentioned earlier, we expected the strength of the association between the predictors and the two outcomes, well-being and productivity, to range from medium to large. therefore, we assumed for our power analysis a medium-to-large effect size of f = . and a power of . . a power analysis with g*power . . . [ ] revealed that we would need a sample size of participants. to ensure data quality and consistency, and to account for potential dropout between the two waves, we invited almost participants, identified as software engineers in a previous study [ ] , to take part in a screening study in april . to collect our responses, we used prolific, a data collection platform commonly used in computer science (see, e.g., [ ] ). we opted for this solution because of the high reliability, replicability, and data quality of dedicated platforms, especially compared with the use of mailing lists [ , ] . to administer the surveys, we used qualtrics and shared them on the prolific platform.
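as an illustration of the sampling step, the following r sketch reproduces an a priori power analysis for multiple regression with the pwr package (the paper used g*power). because the exact inputs are elided in this copy, the effect size (f2 = 0.20, medium-to-large by cohen's conventions), the power (0.90), and the number of predictors (16) are illustrative assumptions only.

```r
# a priori power analysis for multiple regression; all inputs below are
# illustrative assumptions, since the exact values are elided in this copy
library(pwr)

res <- pwr.f2.test(u = 16,           # numerator df: number of predictors
                   f2 = 0.20,        # assumed medium-to-large effect size
                   sig.level = 0.05,
                   power = 0.90)

# required sample size: denominator df + predictors + 1
ceiling(res$v) + 16 + 1
```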
the screening study was tailored to the covid-19 pandemic and was completed by professionals. here, we aimed to select only participants from countries where lockdown measures were put in place. countries with unclear or mixed policies, or with early reopening (e.g., denmark, germany, sweden), were excluded. similarly, participants had to be actively working from home during the lockdown for more than h per week. in the first wave of data collection, which took place in the week of april - , participants completed the first survey. participation in the second wave (may - ) was high ( %), with completed surveys. participants were uniquely identified through their prolific id, which was essential for running the longitudinal analysis while allowing them to remain anonymous. in each survey, we included three test items (e.g., "please select response option 'slightly disagree'"). moreover, we checked whether participants were still working from home in the reference week and whether lockdown measures were still in place in their respective countries. as none of our participants failed two or more of the three test items, all reported working remotely, and all answered the survey within an appropriate time frame, we did not exclude anyone. the mean age of the participants was . years (sd = . , range = ; women, men). participants were compensated in line with the current us minimum wage (average completion time seconds, sd = . ). we employed a longitudinal design, with two waves set two weeks apart towards the end of the lockdown, which allowed us to test for internal replication. also, running this study towards the end of the lockdowns in the vast majority of countries allowed participants to provide a more reliable interpretation of lockdown conditions. we chose a period of two weeks because we wanted to balance change in our variables over time against the end of a stricter lockdown, which was being discussed across many countries when we ran wave . many of our variables are thought to be stable over time; that is, a person's score on x at time is strongly predictive of that person's score on x at time (indeed, the test-retest reliabilities we found support this assumption, see table ). the closer the temporal distance between waves and , the higher the stability of a variable. in other words, if we had measured the same variables again after only one or two days, there would not have been much variance left to be explained by any other variable, because x measured at time would already explain almost all the variance of x measured at time . at the same time, we aimed to collect the data for wave while people were still quarantined: if people had still been in lockdown at time but the lockdown had been eased by time , this would have introduced a major confound. thus, to balance these two conflicting design requirements, we opted for a two-week break between the waves. we describe the measures of the two dependent (or outcome) variables in subsection . ; predictors (or independent variables) are explained in subsections . , . , . , and . . wherever possible, we relied on validated scales. where this was not possible (e.g., covid-19-specific conspiracy beliefs), we created a scale. all items are listed in the supplemental materials. additionally, we explore whether there are any mean changes in the variables measured at both time points (e.g., has people's well-being changed?).
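a minimal r sketch of the exclusion rules just described; the data frame raw and its column names (test1..test3, works_from_home, lockdown_active) are hypothetical placeholders, and only the first instructed response is quoted in the text, so the other two are placeholders as well.

```r
library(dplyr)

clean <- raw %>%
  mutate(fails = (test1 != "slightly disagree") +  # instructed response (quoted above)
                 (test2 != "strongly agree") +     # placeholder instructed response
                 (test3 != "not at all")) %>%      # placeholder instructed response
  filter(fails < 2,                  # exclude anyone failing two or more checks
         works_from_home == "yes",   # still working from home in the reference week
         lockdown_active == "yes")   # lockdown still in place
```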
well-being was measured with an adapted version of the -item satisfaction with life scale [ ] . we adapted the items to measure satisfaction with life in the past week. example items include "the conditions of my life in the past week were excellent" and "i was satisfied with my life in the past week". responses were given on a -point likert scale ranging from (strongly disagree) to (strongly agree) (α time = . , α time = . ). productivity was measured relative to expected productivity: we contrasted productivity in the past week with the participant's expected productivity, i.e., their productivity level without the lockdown. as we recruited participants working in different positions, including freelancers, we could use neither objective measures of productivity nor supervisor assessments, and therefore relied on self-reports. we expected limited effects of socially desirable responding, as the survey was anonymous and there is a general understanding and widespread belief that many people could not be as productive as usual during the lockdown (e.g., due to stress or caring responsibilities). we operationalized productivity as a function of time spent working and efficiency per hour, compared to a normal week. specifically, we asked participants: "how many hours have you been working approximately in the past week?" (item p ), "how many hours were you expecting to work over the past week assuming there would be no global pandemic and lockdown?" (item p ), and, to measure perceived efficiency, "if you rate your productivity (i.e., outcome) per hour, has it been more or less over the past week compared to a normal week?" (item p ). responses to the last item were given on a bipolar slider ranging from ' % less productive' to ' %: as productive as normal' to '≥ % more productive' (coded as - , , and ). to compute an overall productivity score for each participant, we used the following formula: productivity = (p /p ) × ((p + )/ ). values between and would reflect that people were less productive than normal, and values above would indicate that they were more productive than usual. for example, if a person worked only % of their normal time in the past week but was twice as efficient, their total productivity was considered the same as in a normal week. we preferred this approach over other self-report instruments, such as the who's health at work performance questionnaire [ ] , because we were interested in the change of productivity while being quarantined compared to 'normal' conditions. the who's questionnaire, for example, also assesses productivity in comparison to other workers; we deemed this unfit for our purpose, as it is unclear to what extent software engineers who work remotely are aware of other workers' productivity. also, our measure consists of only three items and showed good test-retest reliability (table ) . test-retest reliability is the agreement or stability of a measure across two or more time points. a coefficient of would indicate that responses at time are not linearly associated with those at time , which is typically undesired; higher coefficients are an additional indicator of the reliability of a measure, although they can be influenced by a range of factors, such as the internal consistency of the measure itself and external factors.
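the productivity score described above can be made concrete with the following r sketch; because the digits are elided in this copy, the constants (an efficiency slider coded from -100 to 100 and a normal-week baseline of 1) are inferred from the surrounding text and the worked example, not taken from the original.

```r
# hedged reconstruction of the productivity score described above
productivity_score <- function(hours_worked,    # item p1: hours worked last week
                               hours_expected,  # item p2: hours expected without lockdown
                               efficiency) {    # item p3: slider, assumed coded -100..100
  (hours_worked / hours_expected) * ((efficiency + 100) / 100)
}

# the worked example from the text: half the usual hours, twice as efficient
productivity_score(20, 40, 100)  # returns 1, i.e., as productive as a normal week
```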
for example, the test-retest reliability for productivity is r = . , lower than for most other variables, such as needs or well-being; this is because the latter constructs are operationalized as stable over time, whereas productivity can vary more extensively due to external factors such as the number of projects or the reliability of one's internet connection. self-discipline was measured with items of the brief self-control scale [ ] . example items include "i am good at resisting temptation" and "i wish i had more self-discipline" (recoded). responses were registered on a -point scale ranging from (not at all) to (very; α = . ). coping strategies were measured using the -item brief cope scale, which covers coping dimensions [ ] . example items include "i've been trying to come up with a strategy about what to do" (planning) and "i've been making fun of the situation" (humor). responses were given on a -point scale ranging from (i have not been doing this at all) to (i have been doing this a lot). the internal consistencies were satisfactory to very good for two-item scales: self-distraction (α = . ), active coping (α = . ), denial (α = . ), substance use (α = . ), use of emotional support (α = . ), use of instrumental support (α = . ), behavioral disengagement (α = . , α = . ), venting (α = . ), positive reframing (α = . ), planning (α = . ), humor (α = . ), acceptance (α = . ), religion (α = . ), and self-blame (α = . , α = . ). loneliness was measured using the -item version of the de jong gierveld loneliness scale [ ] . the items are equally distributed across two factors: emotional (α = . , α = . ; e.g., "i often feel rejected") and social (α = . , α = . ; e.g., "there are plenty of people i can rely on when i have problems"). participants indicated how lonely they felt during the past week. responses were given on a -point scale ranging from (not at all) to (every day). compliance with official recommendations was measured using three items of a compliance scale [ ] . the items are 'washing hands thoroughly with soap', 'staying at home (except for groceries and x exercise per day)', and 'keeping a m ( feet) distance to others when outside'. responses were given on a -point scale ranging from (never complying with this guideline) to (always complying with this guideline; α = . ). anxiety was measured using an adapted version of the -item generalized anxiety disorder scale [ ] . participants indicated how often they had experienced anxiety over the past week in different situations. example questions are "feeling nervous, anxious, or on edge" and "not being able to stop or control worrying". responses were given on a -point scale ranging from (not at all) to (every day; α = . , α = . ). additionally, we measured specific covid-19- and future-pandemic-related concerns with two items: "how concerned do you feel about covid-19?" and "how concerned are you about future pandemics?". responses were given on a -point scale ranging from (not at all concerned) to (extremely concerned; α = . ) [ ] . stress was measured using a four-item version of the perceived stress scale [ ] . participants indicated how often they experienced stressful situations in the past week. example items include "in the last month how often have you felt you were unable to control the important things in your life?" and "in the last month how often have you felt confident about your ability to handle your personal problems?". responses were registered on a -point scale ranging from (never) to (very often; α = . , α = . ).
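the internal consistencies reported throughout this section can be computed as in the following r sketch with the psych package; the item data frame is a made-up example, and reverse-keyed items (e.g., "i wish i had more self-discipline") are assumed to be recoded beforehand.

```r
library(psych)

# made-up responses to a three-item scale (rows: participants, columns: items)
items <- data.frame(i1 = c(5, 4, 4, 2, 5),
                    i2 = c(4, 4, 5, 1, 5),
                    i3 = c(5, 3, 4, 2, 4))

psych::alpha(items)$total$raw_alpha  # cronbach's alpha for the scale
```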
boredom was measured using the -item version [ ] of the boredom proneness scale [ ] . example items include "it is easy for me to concentrate on my activities" and "many things i have to do are repetitive and monotonous". responses were given on a -point likert scale ranging from (strongly disagree) to (strongly agree; α = . , α = . ). daily routines were measured with five items: "i am planning a daily schedule and follow it", "i follow certain tasks regularly (such as meditating, going for walks, working in timeslots, etc.)", "i am getting up and going to bed roughly at the same time every day during the past week", "i am exercising roughly at the same time (e.g., going for a walk every day at noon)", and "i am eating roughly at the same time every day". responses were given on a -point likert scale ranging from (does not apply at all) to (fully applies; α = . , α = . ). conspiracy beliefs were measured with a -item scale designed by us for this study. the first two items were adapted from the flexible inventory of conspiracy suspicions [ ] , whereas the latter three are based on more specific conspiracy beliefs: "the real truth about coronavirus is being kept from the public.", "the facts about coronavirus simply do not match what we have been told by 'experts' and the mainstream media", "coronavirus is a bioweapon designed by the chinese government because they are benefiting from the pandemic most", "coronavirus is a bioweapon designed by environmental activists because the environment is benefiting from the virus most", and "coronavirus is just like a normal flu". responses were collected on a -point likert scale ranging from (totally disagree) to (totally agree; α = . ). extraversion was measured using the -item extraversion subscale of the brief hexaco inventory [ ] . responses were given on a -point likert scale ranging from (strongly disagree) to (strongly agree; α = . , α = . ). low scores on extraversion indicate introversion. since we found at wave that extraversion and well-being were positively correlated, contrary to our hypothesis (see below) and, in our view, contrary to widespread expectations, we decided to measure in wave what participants' views were regarding the association between extraversion and well-being. we measured expectations with one item: "who do you think struggles more with the current pandemic, introverts or extraverts?" response options were 'introverts', 'both around the same', and 'extraverts'. the autonomy, competence, and relatedness needs of self-determination theory [ ] were measured using the -item balanced measure of psychological needs scale [ ] . example items include "i was free to do things my own way" (need for autonomy; α = . , α = . ), "i did well even at the hard things" (competence; α = . , α = . ), and "i felt unappreciated by one or more important people" (recoded; relatedness; α = . , α = . ). participants were asked to report how true each statement was for them in the past week. responses were given on a -point scale ranging from (no agreement) to (much agreement). extrinsic and intrinsic work motivation were measured with the extrinsic regulation and intrinsic motivation subscales of the multidimensional work motivation scale [ ] . the extrinsic regulation subscale measures social and material regulation. specifically, participants were asked to answer questions about why they put effort into their current job.
example items include "to get others' approval (e.g., supervisor, colleagues, family, clients ...)" (social extrinsic regulation; α = . ), "because others will reward me financially only if i put enough effort in my job (e.g., employer, supervisor...)" (material extrinsic regulation; α = . ), and "because i have fun doing my job" (intrinsic motivation; α = . ). responses were given on a -point scale ranging from (not at all) to (completely). mental exercise was measured with two items: "i did a lot to keep my brain active" and "i performed mental exercises (e.g., sudokus, riddles, crosswords)". participants indicated the extent to which the items were true for them in the past week on a -point scale ranging from (not at all) to (very; α = . ). technical skills were measured with one item: "how well do your technological skills equip you for working remotely from home?" responses were given on a -point scale ranging from (far too little) to (perfectly). diet was measured with two items [ ] : "how often do you eat fruit, excluding drinking juice?" and "how often do you eat vegetables or salad, excluding potatoes?". responses were given on a -point scale ranging from (never) to (three times or more a day; α = . ). quality of sleep was measured with one item: "how has the quality of your sleep overall been in the past week?" responses were given on a -point scale ranging from (very low) to (perfectly). physical activity was measured with an adapted version of the -item leisure time exercise questionnaire [ ] . participants were asked to report how many hours in the past week they had been exercising mildly, moderately, and strenuously. the overall score was computed as follows [ ] : × mild + × moderate + × strenuous. missing responses for one or more of the exercise types were treated as . quality and quantity of social contacts outside of work were measured with three items. we adapted two items from the social relationship quality scale [ ] and added one item to measure quantity: "i feel that the people with whom i have been in contact over the past week support me", "i feel that the people with whom i have been in contact over the past week believe in me", and "i am happy with the amount of social contact i had in the past week." responses were given on a -point likert scale ranging from (strongly disagree) to (strongly agree; α = . , α = . ). volunteering was measured with three items covering people's behavior over the past week: "i have been volunteering in my community (e.g., supported elderly or other people in high-risk groups)", "i have been supporting my family (e.g., homeschooling my children)", and "i have been supporting friends and family members (e.g., listened to the worries of my friends)". responses were given on a -point scale ranging from (not at all) to (very often; α = . ). quality and quantity of communication with colleagues and line managers were measured with three items: "i feel that my colleagues and line manager have been supporting me over the past week", "i feel that my colleagues and line manager believed in me over the past week", and "overall, i am happy with the interactions with my colleagues and line managers over the past week." responses were given on a -point likert scale ranging from (strongly disagree) to (strongly agree; α = . , α = . ).
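for the physical activity score above, the weights are elided in this copy; the r sketch below assumes the standard godin leisure-time exercise weights (3 × mild, 5 × moderate, 9 × strenuous), which the cited questionnaire conventionally uses.

```r
godin_score <- function(mild, moderate, strenuous) {
  # missing responses for an exercise type are treated as 0, as described above
  mild      <- ifelse(is.na(mild), 0, mild)
  moderate  <- ifelse(is.na(moderate), 0, moderate)
  strenuous <- ifelse(is.na(strenuous), 0, strenuous)
  3 * mild + 5 * moderate + 9 * strenuous  # assumed standard godin weights
}

godin_score(mild = 2, moderate = 1, strenuous = NA)  # 3*2 + 5*1 + 9*0 = 11
```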
distractions at home were measured with two items: "i am often distracted from my work (e.g., noisy neighbors, children who need my attention)" and "i am able to focus on my work for longer time periods" (recoded). responses were given on a -point scale ranging from (not at all) to (very often; α = . , α = . ). participants' living situation was reported in the following categories: living with babies/infants, toddlers, children, teenagers, and adults; additionally, participants reported how many people they were currently living with. financial security was measured with two items reflecting the current as well as the expected financial situation [ ] : "using a scale from to where means 'the worst possible financial situation' and means 'the best possible financial situation', how would you rate your financial situation these days?" and "looking ahead six months into the future, what do you expect your financial situation will be like at that time?". responses were given on a -point scale ranging from (the worst possible financial situation) to (the best possible financial situation; α = . ). office set-up was measured with three items: "in my home office, i do have the technical equipment to do the work i need to do (e.g., appropriate pc, printer, stable and fast internet connection)", "on the computer or laptop i use while working from home i do have the software and access rights i need", and "my office chair and desk are comfortable and designed to prevent back pain or other related issues". responses were given on a -point likert scale ranging from (strongly disagree) to (strongly agree; α = . ). demographic information was assessed with the following items: "what is your gender?", "how old are you?", "what type of organization do you work in?" (public, private, unsure, other), "what is your yearly gross income?" (answered in us-dollar brackets, converted to the participant's local currency), "in which country are you based?", "have you been working from home or remotely in general before february ?" (yes, no, unsure), "what percentage of your time have you been working remotely (i.e., not physically in your office) over the past months?", "in which region/state and country are you living?", and "is there still a lockdown where you are living?". the data analysis consists of two parts. first, we used the data from time to identify the variables that explain variance in participants' well-being and productivity beyond the other variables. second, we used the pearson product-moment correlation coefficient (r) to identify which variables correlated at r ≥ . with well-being and productivity, in order to test whether they predict our two outcomes over time. r is an effect size that expresses the strength of the linear relation between two variables. we used . as a threshold because we were interested in identifying variables correlated with at least a medium-sized magnitude [ ] with one or both of our outcome variables. also, a correlation of ≥ . indicates that the effect is among the top % in individual-difference research [ ] . finally, selecting an effect size of this magnitude provides effective type-i error control, as we performed correlation tests at time alone ( independent variables correlated with the two dependent variables, which were also correlated with each other). given a sample size of , this effectively changes our alpha level to . , which is conservative.
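the screening step can be sketched in r as follows; the threshold of 0.30 (a medium effect) is an assumption, as the digit is elided in this copy, and the data frame and variable names are placeholders.

```r
r_threshold <- 0.30  # assumed medium-effect cutoff

# correlate every predictor with an outcome and keep those reaching the cutoff
screen <- function(data, predictors, outcome) {
  r <- sapply(predictors, function(p)
    cor(data[[p]], data[[outcome]], use = "pairwise.complete.obs"))
  names(r)[abs(r) >= r_threshold]
}

# hypothetical usage: keep predictors passing the cutoff for either outcome
# keep <- union(screen(w1, preds, "wellbeing"),
#               screen(w1, preds, "productivity"))
```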
such a conservative alpha level means that it is very unlikely that we erroneously find an effect in our sample even though there is no effect in the population (i.e., that we commit a type-i or false-positive error). we did not transform the data for any analysis. unless otherwise indicated above, scales were formed by averaging the items. the collected dataset is publicly available to support other researchers in understanding the impact of enforced work-from-home policies. to test which of the variables listed in figure explain unique variance in well-being and productivity, we performed two multiple regression analyses with all variables that correlated with the two outcome variables at r ≥ . : in the first analysis, well-being was the dependent variable; in the second, productivity. this allowed us to identify the variables that explain unique variance in the two dependent variables. however, one potential issue of including many partly correlated predictors is multicollinearity, which can lead to skewed results. if the variance inflation factor (vif) is larger than , multicollinearity is an issue [ ] ; therefore, we tested whether the vif exceeded this value before performing any multiple regression analysis. to analyze the data from both time points, we performed a series of structural equation modeling analyses with one predictor variable and one outcome variable, using the r package lavaan [ ] . unlike many other types of analyses, structural equation modeling adjusts for reliability [ ] . specifically, models were designed with one predictor (e.g., stress) and one outcome (e.g., well-being), both as measured at time and at time . we allowed autocorrelations (e.g., between well-being at time and at time ) and cross-paths (e.g., between stress at time and well-being at time ). autocorrelations are essential because, without them, we might erroneously conclude that, for example, stress at time predicts well-being at time , although it is the part of stress that overlaps with well-being which predicts well-being at time . to put it simply, we can only conclude that x predicts y if we control for y . no items or errors were allowed to correlate. allowing them to correlate is usually done to improve model fit but has been criticized as atheoretical: determining which items and errors should be allowed to correlate can only be done after the initial model is computed, and is thus a data-driven approach that puts too much emphasis on model fit [ ] . the regression (or path) coefficients and associated p-values were not affected by the type of estimator: we compared the standard maximum likelihood (ml) estimator, the robust maximum likelihood (mlr) estimator, and the mlm estimator (maximum likelihood with satorra-bentler corrections). the pattern of correlations was overall consistent with the literature. at time , variables correlated with well-being at r ≥ . (table ) . stress, r = −. , quality of social contacts, r = . , and need for autonomy, r = . , were most strongly associated with well-being (all p < . ). the pattern of results for the coping strategies was also in line with the literature [ ] : self-blame, r = − . , p < . , behavioral disengagement, r = − . , p < . , and venting, r = − . , p < . , were negatively correlated with well-being. interestingly, generalized anxiety was more strongly associated with well-being than covid-19-related anxiety (r = − . vs. −. ), which might suggest that specific worries have a less negative impact on well-being.
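for concreteness, one of the cross-lagged models described above (stress and well-being, both waves, with autocorrelations and cross-paths and no correlated item errors) can be written in lavaan as in the sketch below; the item names, wave labels w1/w2, and the data frame survey are hypothetical placeholders.

```r
library(lavaan)

model <- '
  # measurement part: latent variables defined by their items
  stress1 =~ st1_w1 + st2_w1 + st3_w1 + st4_w1
  stress2 =~ st1_w2 + st2_w2 + st3_w2 + st4_w2
  wellb1  =~ wb1_w1 + wb2_w1 + wb3_w1 + wb4_w1 + wb5_w1
  wellb2  =~ wb1_w2 + wb2_w2 + wb3_w2 + wb4_w2 + wb5_w2

  # autoregressive (autocorrelation) and cross-lagged paths
  wellb2  ~ wellb1 + stress1
  stress2 ~ stress1 + wellb1

  # wave-specific covariances; no item errors are allowed to correlate
  stress1 ~~ wellb1
  stress2 ~~ wellb2
'

fit <- sem(model, data = survey, estimator = "MLR")  # also compared: ML, MLM
summary(fit, standardized = TRUE)
```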
a multiple regression with generalized anxiety and covid-19-related anxiety supports the interpretation that specific worries matter less: only generalized anxiety (b = − . , se = . , p < . ), but not covid-19-related anxiety, predicted well-being. (the pearson correlation coefficient (r) represents the strength of a linear association between two variables and ranges from - , a perfect negative linear association, through , no linear association, to , a perfect positive linear association. the regression coefficient b indicates how much the outcome changes if the predictor increases by one unit; for example, a b of -. for stress predicting well-being indicates that a person whose stress level is one unit higher has a well-being level that is . units lower.) contrary to our expectations, extraversion was positively correlated with well-being at both waves and , and the pattern of the associations was similar at time . a reason why participants misjudged how strongly introverts struggle with working from home could be that introverts usually try to avoid unwanted social interactions, whereas, being quarantined, they now have to put effort into actively having social interactions. the added challenge of investing more energy than usual into not becoming too lonely, and of changing their usual behavioral patterns, demands much more from introverts than from extraverts. at time , four variables correlated with productivity at r ≥ . (table ) : need for competence, r = −. , distractions, r = −. , boredom, r = −. , and communication with colleagues and line managers, r = . . surprisingly, work motivations were uncorrelated with well-being at α = . . at time , only distraction was still correlated with productivity, r = − . , p < . . the strength of association of most variables with productivity dropped between times and , which means that variables associated with productivity at wave were no longer, or less strongly, associated with productivity at wave . the strengths of the correlations remained the same when we computed spearman's rank correlation coefficients rather than pearson's correlations (spearman's coefficient is a non-parametric version of pearson's r and also ranges between - and ). at time , we added further questions to better understand the counterintuitive finding that well-being and extraversion are positively correlated. interestingly, this finding runs contrary to the expectations of most participants: when asked whether introverts or extraverts struggle more with the covid-19 pandemic, only participants correctly predicted introverts, whereas stated extraverts, with participants believing that both groups struggle equally. this highlights the value of our research, because people's intuition can be blatantly wrong. through an analysis of the participants' statements about their choice, the explanation became more articulated. we now report selected quotes from participants (labeled i-n below), including their level of extraversion in wave . some informants reported direct experience supporting the view that extraverts struggle more than introverts: "i'm introverted, and i don't feel the pandemic has affected me at all. rules aren't hard to follow and haven't feel bad. i feel for extraverts; they would struggle a bit with the rules." nonetheless, a minority of participants provided alternative interpretations.
according to those, both introverts and extraverts have difficulties in reaching out to people, although in different ways; the motivation for such answers is that the two personality types struggle with different challenges. "both types need company, just that each needs company on their own terms. introverts prefer deeper contact with fewer people and extraverts less deep contact with a greater number of people." [i- , extraversion score = . ] "extraverts miss human contact; introverts find it even harder to mark their presence online (e.g., in meetings)." [i- , extraversion score = . ] interestingly, one informant provided an insightful interpretation that aligns with our results: "introverts usually have more difficulty communicating with others, and confinement worsens the situation because they will not try to talk to others through video conferences." [i- , extraversion score = . ] the lack of a structured work setting, in which introverts are routinely involved, causes further isolation. being 'forced' to work remotely markedly increases the difficulty of engaging in social contact. this means that introverts have to put much more effort into interacting with others, instead of following their typical behavior of reduced interaction in office-based environments. whereas extraverts find it easier to maintain their social contacts in some way, introverts might struggle more. thus, the lockdown had a more negative impact on the well-being of introverts than of extraverts, as shown in table . to test which of the predictors had a unique influence on well-being and productivity, we included all variables that correlated with either outcome at r ≥ . at time . this is a conservative test, because many predictors are correlated among each other and thus take variance from each other. it also allowed us to repeat the same analysis at time , because all predictors that correlated with either well-being or productivity at time with r ≥ . were included at time . in a first step, we tested whether multicollinearity was an issue. this was not the case, with vif < . for all four regression models, clearly below the often-used threshold of [ ] . sixteen variables correlated with well-being at r ≥ . (table ) . together, they explained a substantial amount of variance in well-being at time , r = . , adj. r = . , f ( , ) = . , p < . , and at time , r = . , adj. r = . , f ( , ) = . , p < . (table note: r: correlations; b: unstandardized regression estimates; r it : test-retest correlation). at time , stress (negatively), social contacts, and daily routines uniquely predicted well-being at α = . (see table , column , and table ) . at time , the needs for competence and autonomy, stress, quality of social contacts, and quality of sleep uniquely predicted well-being at α = . (see table , column , and table ) . stress and quality of social contacts significantly predicted well-being at both time points. four variables correlated with productivity at r ≥ . (table ) . together, they explained % of the variance in productivity at time , r = . , adj. r = . , f ( , ) = . , p < . , and % at time , r = . , adj. r = . , f ( , ) = . , p = . . at both time points, none of the four variables explained variance in productivity beyond the other three, suggesting that all of them are associated with productivity but that we lack the statistical power to disentangle the effects (tables and ) .
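a minimal r sketch of this regression step, using the car package for the multicollinearity check; the predictor names are placeholders for the variables that passed the screening, and the cutoff of 10 is the commonly used value, as the digit is elided in this copy.

```r
library(car)

fit_wb <- lm(wellbeing ~ stress + social_contacts + daily_routines +
               autonomy + competence + sleep_quality,
             data = w1)  # placeholder names for the screened predictors

car::vif(fit_wb)  # multicollinearity would be flagged above the cutoff (here: 10)
summary(fit_wb)   # coefficients indicate unique predictors of well-being
```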
there is an ostensible discrepancy between some correlations and the estimates of the regression analyses that requires further explanation. an especially large discrepancy appeared for the variable need for competence, which correlated positively with well-being at times and , r = . with p < . and r = . with p < . , but was negatively associated with well-being when controlling for other variables in both regression analyses, b = -. with p = . and b = -. with p = . . this suggests that including a range of other variables, which serve as control variables, impacts the results. indeed, exploratory analyses revealed that need for competence was no longer associated with well-being once need for autonomy was included: when we performed a multiple regression with the needs for autonomy and competence as the only predictors, need for competence became non-significant. need for competence also includes an autonomy component, which might explain this; it is easier to fulfill one's need for competence while being at least somewhat autonomous [ ] . further, including generalized anxiety and boredom reversed the sign of the association: need for competence became negatively associated with well-being. including those two variables removes the variance associated with enthusiasm (boredom reversed) and courage (generalized anxiety reversed), which might explain the shift to a negative association with well-being. together, controlling for need for autonomy, generalized anxiety, and boredom takes away the positive aspects of need for competence, leaving a potentially cold side that might be closely related to materialism, which is negatively associated with well-being [ ] .
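the sign flip discussed above is a classic suppression pattern; the following r simulation (made-up data, not the study's) illustrates how a predictor with a positive simple association can turn negative once a correlated control variable enters the model.

```r
set.seed(1)
n <- 1000
autonomy   <- rnorm(n)
competence <- 0.8 * autonomy + rnorm(n, sd = 0.6)   # two correlated needs
wellbeing  <- autonomy - 0.2 * competence + rnorm(n)

coef(lm(wellbeing ~ competence))              # positive simple association
coef(lm(wellbeing ~ competence + autonomy))   # negative once autonomy is controlled
```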
test-retest reliabilities were good for all variables, supporting the quality of our data (table , last column). in total, we performed structural equation modeling (sem) analyses to test whether well-being and productivity are predicted by, or predict, any of the independent variables: models for the independent variables of well-being, one model in which we tested whether well-being predicts productivity or vice versa, and four models for productivity. since the probability of a false positive is very high with this number of models, we used a conservative error rate of . ; we used a different threshold for the longitudinal analyses than for the correlation analyses because the number of tests differed. one example of our sem analyses is presented in figure , where we examined the predictive-causal relationship between stress and well-being across waves and . the boxes represent the items and the circles the variables (e.g., stress). the arrows between the items and the variables represent the loadings, that is, how strongly each item contributes to the overall variable score (e.g., item of the stress scale contributes least and item most to the overall score at both time points). the circular arrows represent errors. the bidirectional arrows between the variables represent covariances, which are comparable to correlations. the one-headed arrows show causal impacts over time; the arrows between the same variables (e.g., well-being and well-being ) show how strongly they impact each other and are comparable to the test-retest correlations. the most critical arrows are those between well-being and stress, as well as between stress and well-being : they show whether one variable causally predicts the other. the most relevant values in figure are presented in table . columns - show that stress and well-being were significantly associated at time , b = - . , se = . , p < . . this association was mirrored at time , b = - . , se = . , p = . (columns - ). columns - show that stress at time did not significantly predict well-being at time , b = - . , se = . , p = . . columns - of the second part of table show that well-being at time also did not predict stress at time , b = . , se = . , p = . . columns - of the second part show the autocorrelation of well-being, that is, how strongly well-being at time predicts well-being at time , b = . , se = . , p < . . autocorrelations can broadly be understood as the unstandardized version of the test-retest correlations (reliability) reported in table . finally, columns - of the second part show the autocorrelation of stress, which is also significant, b = . , se = . , p < . . we conclude that no model revealed any significant cross-lagged association at α = . ; thus, no variable at time (e.g., stress) explains a significant amount of variance in another variable (e.g., well-being) at time . we only found a negative tendency for distraction → productivity, b = -. , p = . . furthermore, table shows which variable is more likely to have the stronger impact on the other over time: for example, the reverse path productivity → distraction, with b = . , p = . , suggests that it is much more likely that distraction negatively influences productivity than that productivity influences the level of distraction. additionally, we explored whether there were any mean changes between times and , separately for all variables; for example, has well-being increased over time? this would suggest that people adapted further, within a relatively short period of two weeks, to the threat of covid-19. table shows that the arithmetic mean (m) of well-being has indeed slightly increased between times and , m = . vs. m = . . a closer look revealed that participants reported higher well-being at time compared to time , reported the same level of well-being, and reported a lower level of well-being. further, on average, people's behavioral disengagement and quality of social contacts increased, whereas emotional loneliness and the quality of communication with line managers and coworkers decreased (note to table : t: t-value of a paired-sample t-test; higher: number of people who scored higher on a variable at time than at time ; lower: number of people who scored lower at time ; equal: number of people whose score did not change over time).
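the mean-change comparison can be sketched as follows in r, assuming matched data frames w1 and w2 holding the same participants' scores at the two waves.

```r
# paired t-test for wave-to-wave change in well-being
t.test(w2$wellbeing, w1$wellbeing, paired = TRUE)

# counts of participants scoring higher (1), the same (0), or lower (-1) at wave 2
table(sign(w2$wellbeing - w1$wellbeing))
```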
our finding that office set-up is not significantly related to well-being and productivity seems to contradict a recent cross-sectional study by ralph et al. [ ] , which investigated how the fear of bioevents, disaster preparedness, and home-office ergonomics predict well-being and productivity among software developers. in that study, ergonomics was positively related to both well-being and productivity. to measure ergonomics, the authors created six items concerning distractions, noise, lighting, temperature, chair comfort, and overall ergonomics. the first two items are closely related to our measure of distraction, which was negatively associated with well-being in wave of our sample, r = -. , and productivity, r = -. . in contrast, the remaining four items are more closely associated with office set-up in our survey, which was positively but not significantly associated with well-being, r = . , and productivity, r = . . to better understand this inconsistency with our results, we ran a replication analysis using ralph et al.'s data. to test whether the effect of ergonomics is mainly driven by distraction and noise, we combined the first two items into the variable ergonomics-distractions (recoded, so that higher scores indicate less distraction) and the other four items into ergonomics-others. indeed, ergonomics-distractions was more strongly correlated with well-being, r = . , and productivity, r = . , than ergonomics-others, rs = . and . , respectively. this suggests that our findings replicate those of ralph et al. and emphasizes the importance of distinguishing between distraction and office set-up. the covid-19 pandemic and the subsequent lockdown have had a definite impact on software professionals, who were mostly forced to work from home. the first significant outcome of this research is that many variables are associated with well-being and productivity. although we could not determine any causal relationship, the effect sizes for both waves are medium to large for several variables, and the results mainly showed high stability over time. also, well-being and productivity were positively associated; in other words, neglecting well-being will likely also negatively impact productivity. therefore, we agree with ralph et al.'s [ ] recommendation that pressuring employees to keep the average productivity level without taking care of their well-being will lower productivity. however, we would also like to present an alternative interpretation: having productive employees will strengthen their sense of achievement and thereby improve their well-being. in the following, we focus on practical recommendations based on the most reliable predictors of well-being and productivity identified through our regression analysis: need for autonomy, stress, daily routines, social contacts, need for competence, extraversion, and quality of sleep as predictors of well-being (table ) ; distractions and boredom, related to productivity, are discussed in table . persistently high stress levels are related to adverse outcomes in the workplace [ ] and to people's well-being. to reduce stress, bazarko et al. [ ] recommend mindfulness-based stress reduction training and practices that can be performed at home; participating in such a program can lead to lower levels of stress and a lower risk of work burnout. grossman et al. [ ] recommended further stress reduction methods. moreover, naik et al. [ ] found that mindfulness meditation practices, slow breathing exercises, mindful awareness during yoga postures, and mindfulness during stressful situations and social interactions can reduce stress levels. together, the results of these studies suggest that mindfulness practices, even when performed at home, can reduce stress, which could also improve software engineers' well-being while being quarantined. the quality of social contacts, as part of the overall quality of life, has a significant impact on people's well-being, as discovered in this study. therefore, employers should be interested in enabling their employees to spend time with people they value and encourage them to build strong, meaningful relationships within their work environment.
creating a virtual office (e.g., using an online working environment such as 'wurkr') allows people to work with the impression of sharing a physical workspace, to communicate more comfortably, and to work together from anywhere. for example, in order to simplify conversations, the slack plugin 'donut' [ ] randomly connects employees for coffee breaks, with the purpose of getting to know each other better by spending some time chatting virtually. besides, our finding that the quality of social contact, but not living alone, is associated with well-being is in line with the literature: the quality of contact with one's partner and family independently and negatively predicted depression, whereas the frequency of these contacts did not [ ] . together, this suggests that findings from the literature can overall be generalized to people being quarantined. organizing the day at home in a structured way appears to be beneficial for software professionals' well-being. people tend to overwork when working remotely [ ] , and this could be further magnified during quarantine, where usual daily routines are disrupted and working might thus become the only meaningful activity. it is therefore essential to develop new daily routines in order not to be completely absorbed by work and to prevent burnout [ ] . scheduling meetings and designating time specifically for hobbies or for spending time with family and friends is helpful while working from home and helps to satisfy employees' need for social contact. to fulfill people's need for autonomy, it is necessary to allow employees to act on their values and interests [ ] , even though coordinating collaborative workflows and managing projects remotely comes with its own challenges [ ] . for remote workers, it is crucial to have flexibility in how they structure, organize, and perform their tasks [ ] . it is therefore helpful to delegate work packages instead of individual tasks; this makes it easier for individuals to work self-directedly and thus to fulfill their need for autonomy. to fulfill employees' need for competence, it is necessary to provide them with the opportunity to grow personally and advance their skill set [ ] . two of the most highly demanded skills in remote work environments are communication skills and the ability to use virtual tools, such as presentation tools or collaborative project-planning tools [ ] . raising awareness of the unique requirements of virtual communication is crucial for a smooth working process: working remotely requires specific communication skills, such as mindful listening [ ] and asynchronous communication, which allows people to work more efficiently [ ] . collaborative tools such as github, trello, jira, google docs, klaxoon, mural, or slack can simplify work processes and enable interactive workflows. besides the training and development of employees' specific virtual skill set, it is also recommended to invest in employees' personal development within the company. taking action and offering employees the opportunity to grow will not only evolve their roles but also strengthen their loyalty towards the employer and, therefore, employee retention [ ] . introverted software professionals seem to be more affected by the lockdown than their more extraverted peers. this finding is counter-intuitive, since extraverted people prefer more direct contact than introverted people [ ] .
our interpretation of these results is that introverts face a much higher burden in reaching out to colleagues than extraverts do. also, being introverted does not mean that there is no need for social contact at all. while in the office introverts had chances to be involved with colleagues in both structured and unstructured ways, at home this is much more difficult, as they have to be more proactive and reach out to colleagues in more formalized settings, such as an online collaboration platform (e.g., ms teams). therefore, software organizations should regularly organize both formal and informal online meeting occasions at which introverted software engineers feel a lower entry barrier to participation. quality of sleep is also a relevant predictor of well-being. although it might sound obvious, there is a robust association between sleep, well-being, and mindfulness [ ] ; in particular, howell et al. found that mindfulness predicts quality of sleep, and that quality of sleep and mindfulness predict well-being. distractions at home are a challenging obstacle to overcome while working remotely. designating a specific work area in the home and communicating non-disturbing times to other household members are easy and quick first steps to minimize distractions in the workplace at home. another obstacle that frequently distracts remote workers is cyberslacking, understood as spending time on the internet for non-work-related reasons during working hours [ ] . cyberslacking and its contribution to distractions at home for remote workers were not included in this study but would be worth exploring in future research. when people experience boredom, it makes them feel "...unchallenged while they think that the situation and their actions are meaningless" [ , p. ]. especially people who thrive in a social setting at work are in danger of quickly becoming bored while working in isolation from their homes. the recommendations enumerated above, such as assigning interesting, personally tailored, and challenging work packages, using collaborative tools to hold oneself accountable, and having social interactions while working remotely, also help reduce boredom at work. ideally, employees are intrinsically motivated and feel fulfilled by what they do. if this is not the case over a more extended period, and the experienced boredom is not a negative side effect of being overwhelmed while quarantined, it might be reasonable to discuss a new field of action and area of responsibility with the employee. to conclude, working from home certainly comes with its challenges, several of which we have addressed in this study. however, software engineers, at least, appear to adapt to the lockdown over time, as people's well-being increased and the perceived quality of their social contacts improved. similar results were also reported in a survey study of new zealand remote workers [ ] : walton et al. found that productivity was similar to or higher than pre-lockdown levels and that % of professionals would like to continue to work from home, at least one day per month. that study also revealed that the most critical challenges were switching off, collaborating with colleagues, and setting up a home office. on the other hand, working from home led to a drastic saving of time otherwise allocated to daily commuting, a higher degree of flexibility, and increased savings. limitations are discussed using gren's five-facets framework [ ] . reliability.
this study used a two-wave longitudinal design, in which over % of the initial participants, identified through a multi-stage selection process, also took part in the second wave. further, the test-retest reliabilities were high, and the internal consistencies (cronbach's α) ranged from satisfactory to very good. construct validity. we identified variables drawn from the literature, and each was measured with a suitable instrument. where possible, we used validated instruments; otherwise, we developed and reported the instruments used. to support construct validity, we also reported the cronbach's alpha of all variables across both waves. however, we note that, despite the large number of variables in our study, we might still have missed one or more relevant variables that would have turned out significant in our analysis. (from the recommendations tables: for boredom, organizations should redesign employees' goals by letting them choose tasks as much as possible and diversify activities; for distractions, a negative predictor in both waves (b w = -. , b w = -. ), organizations should support software engineers in setting up a dedicated home office, and routines and agreements with family members about working times also help to stay focused.) conclusion validity. to draw our conclusions, we used multiple statistical analyses, such as correlations, paired t-tests, multiple linear regressions, and structural equation modeling. to ensure reliable conclusions, we used conservative thresholds to reduce the risk of false-positive results; the threshold depended on the number of comparisons for each test. additionally, we did not include covariates, did not stop the data collection based on the results, and did not perform any other practice associated with an increased likelihood of finding a positive result and an increased probability of false positives [ ] . however, we could not draw any causal-predictive conclusions, since all sem analyses provided non-significant results under a significance threshold chosen to reduce the risk of false-positive findings. finally, we made both the raw data and the r analysis code openly available on zenodo. internal validity. this study did not lead to any causal-predictive conclusion, which was the main aim of the present study; we cannot say whether the analyzed variables influence well-being or productivity, or vice versa. we are also aware that this study relies on self-reported values, limiting the study's validity. further, we adjusted some measures (i.e., productivity): participants were not asked to report their perceived productivity directly but to make a comparison, from which the score was computed independently afterward in our analysis. we also applied an extensive screening process, selecting software engineers from the initial pool of suitable subjects. typical problems of longitudinal studies (e.g., attrition of subjects over a long-term period) do not apply; the dropout rate between the two waves was low (under %). we ran this study towards the end of the covid-19 lockdown in spring ; in this way, participants were able to report well-grounded judgments of their conditions. the waves were set two weeks apart, which ensured that lockdowns had not yet been lifted during the data collection of wave , while still leaving enough time for sufficient variability to arise in each of the variables between the two time points. since this was a pandemic, lockdown conditions in the surveyed countries were similar (due to the who's standardized recommendations).
however, we did not consider region-specific conditions (e.g., the severity of the virus spread) and recommendations, and lockdown timing differed among countries. to control for these potential differences, we asked participants at each of the two waves whether lockdown measures were still in place and whether they were still working from home; since all our participants reported positively on both conditions, we did not exclude anyone from the study. external validity. our sample size was determined by an a priori power analysis and is manageable for longitudinal analyses. however, this study was designed to maximize internal validity, focusing on finding significant effects rather than on working with a representative sample of the software engineering population (as russo and stol [ ] did with n ≈ , where the research goal focused on the generalizability of results). the covid-19 pandemic disrupted software engineers in several ways. abruptly, lockdown and quarantine measures changed the way of working and of relating to other people. software engineers, in line with most knowledge workers, started to work from home, facing unprecedented challenges. most notably, our research shows that high stress levels, the absence of daily routines, and social contacts are among the variables most related to well-being; similarly, low productivity is related to boredom and distractions at home. we base our results on a longitudinal study involving software professionals. after identifying from the literature relevant variables related to well-being or productivity during a quarantine, we ran a correlation study based on the results gathered in our first wave. for the second wave, we selected only the variables that correlated with well-being or productivity with at least a medium effect size. afterward, we ran structural equation modeling analyses, testing for causal-predictive relations. we could not find any significant relation, and thus conclude that we do not know whether the dependent variables are caused by the independent ones or vice versa. accordingly, we ran several multiple regression analyses to identify unique predictors of well-being and productivity, where we found several significant results. this paper confirms that, on average, software engineers' well-being increased during the pandemic, and that there is a correlation between well-being and productivity. out of the factors examined, nine were reliably associated with well-being and productivity. correspondingly, based on our findings, we proposed some actionable recommendations that might be useful for dealing with potential future pandemics. software organizations might start to ascertain experimentally whether adopting these recommendations increases professionals' productivity and well-being. our research findings indicate that granting a higher degree of autonomy to employees might, on average, be beneficial. however, while extended autonomy might be perceived positively by those with a high need for autonomy, it might be perceived as stressful by those who prefer structure. since it is unlikely that any intervention will have the same effect on all people (as there is substantial variation for most variables), it is essential to keep individual differences in mind when exploring the effects of any intervention. thus, adopting incremental interventions based on our findings, through which organizations can get feedback from their employees, is the recommended strategy. future work will explore several directions.
future work will explore several directions. cross-sectional studies with representative samples will be able to test whether our findings are generalizable and to provide a better understanding of the underlying mechanisms relating the variables. we will also investigate specific software tools and their effects on the well-being and productivity of software engineering professionals, with particular regard to the relevant variables. the full survey, raw data, and r analysis code are openly available on zenodo (doi: https://doi.org/ . /zenodo. ).
references.
the impact of telework on emotional experience: when, and for whom, does telework improve daily affective well-being?
how will country-based mitigation measures influence the course of the covid-19 epidemic?
survey of stress reactions among health care workers involved with the sars outbreak
teleworking: benefits and pitfalls as perceived by professionals and managers
does herzberg's motivation theory have staying power
the impact of an innovative mindfulness-based stress reduction program on the health and well-being of nurses employed in a corporate setting
relationship quality profiles and well-being among married adults
does working from home work? evidence from a chinese experiment
the psychological impact of quarantine and how to reduce it: rapid review of the evidence
acts of kindness and acts of novelty affect life satisfaction
the need for cognition
the factors affecting household transmission dynamics and community compliance with ebola control measures: a mixed-methods study in a rural village in sierra leone
health surveillance during covid-19 pandemic
improving employee well-being and effectiveness: systematic review and meta-analysis of web-based psychological interventions delivered in the workplace
personality differences and covid-19: are extroversion and conscientiousness personality traits associated with engagement with containment measures?
you want to measure coping but your protocol's too long: consider the brief cope
assessing coping strategies: a theoretically based approach
managing a virtual workplace
cipd: getting the most from remote working
help now nyc
a power primer
perceived stress in a probability sample of the united states
danish health authority: questions and answers on novel coronavirus (coronavirus/spoergsmaal-og-svar/questions-and-answers)
emotion beliefs in social anxiety disorder: associations with stress, anxiety, and well-being
getting the most from remote working
the satisfaction with life scale
empirical evaluation of the effects of experience on code quality and programmer productivity: an exploratory study
the relationship between materialism and personal well-being: a meta-analysis
disrupted work: home-based teleworking (hbtw) in the aftermath of a natural disaster
big tech firms ramp up remote working orders to prevent coronavirus spread
european social survey: ess round data
boredom proneness: the development and correlates of a new scale
statistical power analyses using g*power: tests for correlation and regression analyses
the concomitants of conspiracy concerns
the multidimensional work motivation scale: validation evidence in seven languages and nine countries
structural equation modeling with lavaan
a 6-item scale for overall, emotional, and social loneliness: confirmatory tests on survey data
effect size guidelines for individual differences researchers
a growing socioeconomic divide: effects of the great recession on perceived economic distress in the united states
a simple method to assess exercise behavior in the community
exploring the needs of teleworkers using herzberg's two factor theory
standards of validity and the validity of standards in behavioral software engineering research: the perspective of psychological test theory
mindfulness-based stress reduction and health benefits: a meta-analysis
sars control and psychological effects of quarantine, toronto, canada
motivation to work
crowdsourcing personalized weight loss diets
relations among mindfulness, well-being, and sleep
web-based cases in teaching and learning: the quality of discussions and a stage of perspective taking in asynchronous communication
ecology of zoonoses: natural and unnatural histories
using task context to improve programmer productivity
the world health organization health and work performance questionnaire (hpq)
public risk perceptions and preventive behaviors during the h1n1 influenza pandemic
why we should not measure productivity
study on determining factors of employee retention
psychological impacts of the new ways of working (nww): a systematic review
employee wellbeing, productivity, and firm performance
monotasking or multitasking: designing for crowdworkers' preferences
explicit programming strategies
the experience of sars-related stigma at amoy gardens
why do high school students lack motivation in the classroom? toward an understanding of academic amotivation and the role of social support
what makes a great software engineer? in: proceedings of the ieee/acm international conference on software engineering
defining the epidemiology of covid-19: studies needed
depression after exposure to stressful events: lessons learned from the severe acute respiratory syndrome epidemic
extraversion and preferred level of sensory stimulation
using behavioral science to help fight the coronavirus
the psychological impact of teleworking: stress, emotions and health
the relevance of psychosocial variables and working conditions in predicting nurses' coping strategies during the sars crisis: an online questionnaire survey
a meta-analysis of interventions to reduce loneliness
transparency, communication and mindfulness
the impact of the coronavirus on hr and the new normal of work
years of life lost due to the psychosocial consequences of covid-19 mitigation strategies based on swiss data
effect of modified slow breathing exercise on perceived stress and basal cardiovascular parameters
psychological and epidemiological predictors of covid-19 concern and health-related behaviors
nhs: mental wellbeing while staying at home
nhs: your nhs needs you - nhs call for volunteer army
prolific.ac: a subject pool for online experiments
beyond the turk: alternative platforms for crowdsourcing behavioral research
a social-cognitive model of pandemic influenza h1n1 risk perception and recommended behaviors in italy
pandemic programming: how covid-19 affects software developers and how their organizations can help
understanding, compliance and psychological impact of the sars quarantine experience
covid-19 outbreak on the diamond princess cruise ship: estimating the epidemic potential and effectiveness of public health countermeasures
a critique of cross-lagged correlation
lavaan: an r package for structural equation modeling and more (beta version)
gender differences in personality traits of software engineers
self-determination theory and the facilitation of intrinsic motivation, social development, and well-being
exploratory experimental studies comparing online and offline programming performance
the balanced measure of psychological needs (bmpn) scale: an alternative domain general measure of need satisfaction
false-positive psychology: undisclosed flexibility in data collection and analysis allows presenting anything as significant
a brief measure for assessing generalized anxiety disorder: the gad-7
posttraumatic stress disorder in parents and youth after health-related disasters
a short boredom proneness scale: development and psychometric properties
high self-control predicts good adjustment, less pathology, better grades, and interpersonal success
factors influencing psychological distress during a disease epidemic: data from australia's first outbreak of equine influenza
social relationships and depression: ten-year follow-up from a nationally representative study
on boredom: lack of challenge and meaning as distinct boredom experiences
a within-person examination of the effects of telework
the brief hexaco inventory (bhi)
new zealanders' attitudes towards working from home
building autonomous learners: perspectives from research and practice using self-determination theory
dealing with feeling: a meta-analysis of the effectiveness of strategies derived from the process model of emotion regulation
statistically controlling for confounding constructs is harder than you think
knowledge, attitudes, and practices among members of households actively monitored or quarantined to prevent transmission of ebola virus disease, margibi county
the importance of shared values in societal crises: compliance with covid-19 guidelines
conspiracy suspicions as a proxy for beliefs in conspiracy theories: implications for theory and measurement
world health organization: considerations for quarantine of individuals in the context of containment for coronavirus disease (covid-19): interim guidance
acknowledgments. we thank the editors-in-chief for fast-tracking our manuscript. the authors would also like to thank gabriel lins de holanda coelho for initial feedback on this project.
key: cord- -defbarkz authors: keane, martin g.; wiegers, susan e. title: time (f)or competency date: - - journal: j am soc echocardiogr doi: . /j.echo. . . sha: doc_id: cord_uid: defbarkz
martin g. keane, md, fase, and susan e. wiegers, md, fase, philadelphia, pennsylvania
seventy years ago, susan's -year-old father's first teaching job was in a one-room schoolhouse in fayetteville, maine. he recently came across the "register" that he was required to keep of his students' daily attendance. he explained that attending an adequate proportion of school days was the sole determinant of whether or not a child was promoted to the next grade. "apparently, actual accomplishment was not considered," he laughed. indeed, time spent in training is an essential component of the development of skill and expertise: time in rank, time on service, and time devoted to learning and performing the skill in question. linked with time spent in training, appropriately robust experience to develop expertise requires repeated exposure to and performance of tasks essential to the skill over that time: numbers of consults/evaluations, accumulated procedures, numbers of echocardiograms. it is important to recognize, however, that competency in the skill is the outcome of interest.
time and numbers are merely surrogate markers. the core cardiovascular training statement (cocats) task force outlined the expected behaviors and work product for echocardiographers. levels of training from the most basic echocardiographic knowledge (level i) to the most advanced knowledge suitable for an echocardiography lab director (level iii) are clearly defined by duration of echo-specific training as well as specified numbers of procedures (transthoracic, transesophageal, and stress echocardiography) performed by the trainee. the task force clearly recognized that competency-based evaluations and assessments of the echocardiographic knowledge base are essential elements in the certification of skill. however, as lab/program directors responsible for providing certification letters over many years, it has been our experience that the "focus" of the fellowship trainees (and sometimes of their mentors as well) is frequently geared toward meticulous documentation of "time served" and "procedures performed," as evidence for the proverbial notches in their belts. suitable evaluation of the individual candidate's competency is potentially at risk of being overlooked. necessity is the mother of invention, however, and the ongoing covid-19 crisis may prove instrumental in shifting the focus of echocardiography training evaluation from time and numbers to consideration of alternative measures of skill. dr. jose madrazo rightly and extensively illustrates this important point in his letter in this issue of the journal of the american society of echocardiography. as with the johns hopkins program, training programs across the country are faced with a significant decline in the volume of all forms of echocardiographic evaluation as clinical focus shifts toward the care of an overwhelming number of patients with covid-19. dr. madrazo notes that social distancing measures, so crucial to thwarting the spread of sars-cov-2, additionally hamper opportunities for hands-on training as well as face-to-face mentoring and supervision between expert echocardiographer and trainee. he makes numerous worthwhile recommendations for alternative experiential and evaluative tactics. these must be considered for implementation in the certification process, especially during a pandemic that is here to stay for the foreseeable future. we applaud his recommendations and reiterate a need for a shift toward competency-based assessment. the recent american college of cardiology/american heart association/american society of echocardiography advanced training statement focused on select competencies and echo procedure volumes for level iii advanced training. the document is unique in its greater focus on delineating strategies for the evaluation of competency, in addition to recommended numbers of advanced echo techniques and procedures performed. it recognized that the endorsed volumes for specific advanced echo techniques and procedural guidance to achieve level iii have been developed by expert committee consensus, in consultation with echocardiography training authorities across the country. in all instances, these procedure volumes are noted to be recommendations only. they serve as recognition that diverse trainees develop competency at different levels of experience: some quickly, others requiring more procedural practice. perhaps it is an appropriate time for a similar shift toward competency-based assessment when certifying level i and level ii training as well.
to that end, the advanced training document delineates several evaluation tools that can be utilized for robust competency documentation:
1. examination
2. direct observation
3. procedure logbooks
4. simulation
5. conference presentation
6. multisource evaluation
7. echo lab quality improvement and quality assurance projects
to supplement the procedural logbooks and offset the decrease in face-to-face direct observation of skills, alternative evaluation methods are readily available. we admit that, faced with weaker trainees, it can be easier to recommend "more studies" rather than giving uncomfortable and negative feedback regarding their current level of accomplishment. as noted, "distant" overreading of fellow-interpreted studies can be as valuable, or even more so, than direct observation. more conscious effort must be expended on the part of the expert mentor to virtually review all aspects of each study overread in order to provide the trainee with as comprehensive an assessment and education as occurs in side-by-side reading. whenever distancing norms permit, every effort should be expended to maintain direct supervision, with at least one or two trainees in direct contact with the mentor using proper personal protective equipment. in addition to evaluation of interpretive skills, it remains possible to evaluate the breadth and depth of trainee knowledge through participation in and presentation of didactic conferences. today's sophisticated video conferencing mechanisms allow for a remarkable level of interaction under difficult circumstances. we endorse additional novel video conference applications, including interactive case reviews and case series presented to the fellowship group. finally, while the formal national board of echocardiography examination remains a final tool to assess the knowledge base, allowing trainees more time and access to board review courses, board review questions, and seminars in both print and online format will only serve to enhance the comprehension and critical thinking of echocardiography-focused fellows. in terms of the performance of echocardiography techniques (transthoracic, transesophageal, and stress modalities), it is undeniable that frequent access to individual scanning of patients with a multitude of pathologies is essential. in the face of a relative dearth of clinical subjects, as well as concerns regarding prolonged interpersonal exposure and possible coronavirus transmission, programs (and certification authorities) must adapt, with utilization of simulators and other techniques focused on recognition of technical adequacy and pitfalls in acquisition of previously acquired study images. several simulation systems are available for purchase. these systems can often analyze probe position and angling in three-dimensional space far more effectively than a human mentor can. although the views and the simulator "patient" are idealized, fellows benefit tremendously from exposure to repeated simulator scanning to perfect their technique in all echocardiographic windows. pathologic cases can also be programmed, with an appropriate clinical scenario and the opportunity for the operator to evaluate diverse pathologies in multiple views. both transthoracic and transesophageal performance are included on most simulators, and the trainee may be evaluated directly by a mentor or more remotely using extensive recording of probe motion and images obtained by the simulator.
stress echo cases, using either practice sets or a simulator, can be virtually "performed" and/or reviewed in similar fashion. ongoing participation in echo laboratory quality assurance projects, even when done on a remote basis, increases the sophistication of understanding of proper performance and application of echocardiographic techniques, essential for both level ii and level iii training. remote evaluation of clinical requests for echo examinations and application of appropriate use criteria principles further broaden a trainee's knowledge base. recognition of the appropriate application and mentored interpretation of an increased number of point-of-care ultrasound studies is an additional and unique skill that has been enhanced in the covid-19 pandemic. furthermore, exposure to the echocardiographic findings of covid-19 patients and the unique clinical scenarios (such as elevation in biomarkers) that mandate at least a limited echocardiographic evaluation of the covid-19 patient will be an essential part of overall competency in the future. the covid-19 pandemic and the clinical exigencies that accompany it have merely magnified the difficulties with time-based and numbers-/volume-based documentation of echocardiographic skill. the pandemic has conversely provided extensive opportunities for innovation and expansion of traditional educational and assessment strategies. most importantly, the desired outcome of true echocardiographic competence at all levels of training can be achieved despite the change in training paradigms.
references.
cocats task force: training in echocardiography
new challenges and opportunities for echocardiographic education during the covid-19 pandemic: a call to focus on competency and pathology
ase advanced training statement on echocardiography (revision of the acc/aha clinical competence statement on echocardiography)
key: cord- - y yisfk authors: chan, justin; foster, dean; gollakota, shyam; horvitz, eric; jaeger, joseph; kakade, sham; kohno, tadayoshi; langford, john; larson, jonathan; sharma, puneet; singanamalla, sudheesh; sunshine, jacob; tessaro, stefano title: pact: privacy sensitive protocols and mechanisms for mobile contact tracing date: - - journal: nan doi: nan sha: doc_id: cord_uid: y yisfk
the global health threat from covid-19 has been controlled in a number of instances by large-scale testing and contact tracing efforts. we created this document to suggest three functionalities for how we might best harness computing technologies to support the goals of public health organizations in minimizing morbidity and mortality associated with the spread of covid-19, while protecting the civil liberties of individuals. in particular, this work advocates for a third-party-free approach to assisted mobile contact tracing, because such an approach mitigates the security and privacy risks of requiring a trusted third party. we also explicitly consider the inferential risks involved in any contact tracing system, where any alert to a user could itself give rise to de-anonymizing information. more generally, we hope to participate in bringing together colleagues in industry, academia, and civil society to discuss and converge on ideas around a critical issue arising with attempts to mitigate the covid-19 pandemic. several communities and nations seeking to minimize death tolls from covid-19 are resorting to mobile-based contact tracing technologies as a key tool in mitigating the pandemic.
harnessing mobile computing technologies is an obvious means to dramatically scale up conventional epidemic response strategies to do tracking at population scale. however, straightforward and well-intentioned contact-tracing applications can invade personal privacy and provide governments with justification for data collection and mass surveillance that are inconsistent with the civil liberties that citizens will and should expect, and demand. to be effective, acceptable, and consistent with the need to observe commitments to privacy, we must leverage designs and computing advances in privacy and security. in cases where it is valuable for individuals to share data with others, systems must provide voluntary mechanisms in accordance with ethical principles of personal decision making, including disclosure and consent. we refer to efforts to identify, study, and field such privacy-sensitive technologies, architectures, and protocols in support of mobile tracing as pact (privacy sensitive protocols and mechanisms for mobile contact tracing). the objective of pact is to set forth transparent privacy and anonymity standards, which permit adoption of mobile contact tracing efforts while upholding civil liberties.
(figure: the basic idea is that users broadcast signals ("pseudonyms"), while also recording the signals they receive. notably, this colocation approach avoids the need to collect and share absolute location information. credit: m eifler.)
this work specifies a third-party-free set of protocols and mechanisms in order to achieve these objectives. while approaches which rely on trusted third parties can be straightforward, many naturally oppose the aggregation of information and power that it represents, the potential for misuse by a central authority, and the precedent that such an approach would set. it is first helpful to review the conventional contact tracing strategies executed by public health organizations, which operate as follows: positively tested citizens are asked to reveal (voluntarily, or enforced via public health policy or by law, depending on region) their contact history to public health officers. the public health officers then inform other citizens who have been at risk to the infectious agent based on co-location, via some definition of co-location, supported by look-up or inference about locations. the citizens deemed to be at risk are then asked to take appropriate action (often to either seek tests or to quarantine themselves and to be vigilant about symptoms). it is important to emphasize that the current approach already makes a tradeoff between the privacy of a positively tested individual and the benefits to society. we describe mobile contact-tracing functionalities that seek to augment the services provided by public health officers, by enabling the following capabilities via computing and communications technology: • mobile-assisted contact tracing interviews: a citizen who becomes ill can use this functionality to improve the efficiency and completeness of manual contact tracing interviews. in many situations, the citizen can speed up the interview process by filling in much of a contact interview form before the contact interview process even starts, reducing the burden on public health authorities. the privacy-sensitivity here is ensured since all the data remains on the user's device, except for what they voluntarily decide to reveal to health authorities in order to enable contact tracing.
in advance of their making a decision to share, they are informed about how their data may be used and the potential risks of sharing. • narrowcast messages: public health authorities can make available custom-tailored messages to specific, relevant subsets of citizens. for example, the following message might be issued: "if you visited the x eldercare center between march th and th, please email yy@hhhealth.org" or "please refrain from entering playground z until april th because it needs to undergo decontamination." a mobile app can download all of these messages and display those relevant to a citizen based on the app's sensory log or potential future movements. this capability allows public health officials to quickly warn people when new hotspots arise, or canvass for general information. it enables a citizen to be well informed about extremely local pandemic-relevant events. • privacy-sensitive, mobile tracing: proximity-based signals seem to provide the best available contact sensor from one phone to another; see the figure above for the basic approach. proximity-based sensing can be done in a privacy-sensitive manner. with this approach, no absolute location information is collected or shared. variants of proximity-based analyses have been employed in the past for privacy-sensitive analyses in healthcare [ ] . taking advantage of proximity-based signals can speed the process of contact discovery and enable contact tracing of otherwise undiscoverable people, like the fellow commuter on the train. this can also be done with a third-party-free approach providing similar privacy tradeoffs as manual contact tracing. this functionality can enable someone who has become ill with symptoms consistent with covid-19, or who has received confirmation of infection with a positive test for covid-19, to voluntarily, and under a pseudonym, share information that may be relevant to the wellness of others. in particular, a system can manage, in a privacy-sensitive manner, data about individuals who came in close proximity to them over a period of time (e.g., the last two weeks), even if there is no personal connection between these individuals. individuals who share information do so with disclosure and consent around the potential risks of private information being shared. we further discuss disclosure, security concerns, and re-identification risks below. importantly, these protocols, by default, keep all personal data on citizens' phones (aside from pseudonymous identifiers broadcast to other local devices), while enabling these key capabilities; information is shared via voluntary disclosure actions, with the understandings relayed via careful disclosure. for example, if someone never tests positive for covid-19, or tests positive but decides not to use the system, then *no* data is ever sent from their phone to any remote servers; such individuals would be contacted by standard contact tracing mechanisms arising from reportable disease rules. the data on the phone can be encrypted and can be set up to automatically time out based on end-user-controlled policies. this would prevent the dataset from being accessed or requested via legal subpoena or other governmental programs and policies. we specify protocols for all three separate functionalities above, and each app designer can decide which ones to use.
these protocols notably have different value adoption curves: narrowcast and mobile-assisted contact tracing have a value which is linear in the average adoption rate, while privacy-sensitive mobile tracing has value quadratic in the average adoption rate, due to requiring both ends of the connection to be working. (for example, at an adoption rate of p, only roughly a p^2 fraction of contact pairs can be detected: 25% adoption covers only about 6% of pairs.) this quadratic dependence implies low initial value, so we expect narrowcast and mobile-assisted contact tracing to provide initial value in adoption, while privacy-sensitive mobile tracing provides substantial additional value once adoption rates are high. we note that there are an increasing number of concurrent contact tracing protocols being developed; see in particular the comparison section below for a discussion of solutions based on proximity-based tracing (as in the figure below). in particular, there are multiple concurrent approaches using proximity-based signaling; our approach has certain advantageous properties, as it is particularly simple and requires very little data transfer. one point to emphasize is that, with this large number of emerging solutions, it is often difficult for the user to interpret what "privacy preserving" means in many of these protocols. one additional goal in providing the concrete protocols herein is to have a broader discussion of both privacy-sensitivity and security, along with a transparent discussion of the associated re-identification risks: the act itself of alerting a user to being at risk provides de-anonymizing information, as we discuss shortly. from a civil liberties standpoint, the privacy guarantees these protocols ensure are designed to be consistent with the disclosures already extant in contact tracing methods done by public health services (where some information from a positively tested citizen is revealed to other at-risk citizens). in short, we seek to empower public health services, while maintaining civil liberties. we also note that these contact tracing solutions are not meant to replace conventional contact tracing strategies employed by public health organizations; not everyone has a phone, and not everyone that has a phone will use this app. therefore, it is still critical to leverage conventional approaches, along with the approaches outlined in this paper.
(figure: pact tracing protocol. first, a user generates a random seed, which they treat as private information. then all users broadcast random-looking signals to users in their proximity via bluetooth and, concurrently, all users also record all the signals they hear being broadcast by other users in their proximity. each person's broadcasts (their "pseudonyms") are a function of their private seed, and they change these broadcasted pseudonyms periodically (e.g. every minute). whenever a user tests positive, the positive user can voluntarily publish, on a public server, information which enables the reconstruction of all the signals they have broadcasted to others during the infection window (precisely, they publish their private seed, and, using the seed, any other user can figure out what pseudonyms the positive user has previously broadcasted). now, any other user can determine whether they are at risk by checking whether the signals they heard are published on the server. note that the "public lists" can be either lists from hospitals, which have confirmed seeds from positive users, or they can be self-reports (see the reporting discussion below). credit: m eifler.)
in fact, two of our protocols are designed for assisting public health organizations (and are designed with input from public health organizations). throughout, we refer to an at-risk individual as one who has been in contact with an individual who has tested positive for covid-19 (under criteria as defined by public health programs, e.g., "within feet for over minutes"). before we start this discussion, it is helpful to consider one principle which the proposed protocols respect: "if you do not report as being positive, then no information of yours will leave your phone." from a more technical standpoint, the statement that is consistent with our protocols is: if you do not report as being positive, then only random ("pseudonymized") signals are permitted to be broadcast from your phone. these random broadcasts are what allow proximity-based tracing; see the figure above for a description of the mobile tracing protocol. it is worthwhile to note that this principle is consistent, in spirit, with conventional contact tracing approaches, where only positively tested individuals reveal information to the public health authorities. with the above principle, the discussion at hand largely focuses on what can be inferred when a positive disclosure occurs, along with how a malicious party can impact the system. we focus the discussion on the "mobile tracing" protocol for the following reasons: "narrowcasting" allows people to listen for events in their region, so it can be viewed as a one-way messaging system, and for "mobile-assisted interviews," all the data remains on the user's device, except for what they voluntarily reveal to public health authorities in order to enable contact tracing. all the claims are consequences of basic security properties that can formally be proved about the protocol, and in particular, about the cryptographic mechanism generating these random-looking signals. we start first with what private information is protected and what is shared voluntarily, following disclosure and consent. the inferential risk is due to the fact that the alert itself is correlated with other information, from which a user could deduce de-anonymizing information.
1. if i tested positive and i voluntarily disclose this information, what does the protocol reveal to others? any other citizen who uses a mobile application following this protocol and who has been at risk is notified. in some versions, the time(s) at which the exposure(s) occurred may be shared. in the basic mobile tracing system that we envision, beyond exposure to specific individuals, no information is revealed to any other citizens or entities (authorities, insurance companies, etc.). it is also worth noting that, if you are negative, then the protocol does not directly transmit any of your private information to any public database or any other third party; the protocol does transmit random ("pseudonymized") signals that your phone broadcasts.
2. re-identification and inferential risks. can a positive citizen's identity, should they choose to report being positive, be inferred by others? identification is possible and is a risk to volunteers who would prefer to remain de-identified. preventing proximity-based identification of this sort is not possible in any protocol, even in manual contact tracing as done by public health services, simply because the exposure alert may contain information that is correlated with identifying information.
for example, an individual who had been in close proximity to only one person over the last two weeks can infer the identity of this positively tested individual. however, the positive's identity will never be explicitly broadcast. in fact, identities are not even stored in the dataset: it is only the positive person's random broadcasts that are stored.
3. mitigating re-identification. can the app be designed so as to mitigate re-identification risks to average users? while the protocol itself allows a sophisticated user, who is at risk, to learn the time at which the exposure occurred, the app itself can be designed to mitigate the risk. for example, in the app design, the re-identification risk could be mitigated by only informing the user that they are at risk, or the app could only provide the rough time of day at which the exposure occurred. this is a mild form of mitigation, which a malicious or sophisticated user could try to circumvent.
we now directly address questions about the potential for malicious hackers, governments, or organizations to compromise the system. in some cases, cryptographically secure procedures can prevent certain attacks, and, in other cases, malicious disclosure of information is prevented because the protocol stores no data outside of your device by default. only cryptographically secure data from positively confirmed individuals is stored outside of devices.
1. integrity attacks. if you are negative, can a malicious citizen listen to your phone's broadcasts and then report positive pretending to be you? no, this is not possible, provided you keep your initial seed private (see the figure above). furthermore, even if the malicious party records all bluetooth signals going into and out of your phone, this is not possible. this attack is important to avoid: suppose a malicious entity observes all bluetooth signals sent from your phone; you would not want this entity to be able to report you as positive. this attack is not possible, as the seed uniquely identifies your broadcasts and remains unknown to the attacker, unless the attacker is able to successfully break the underlying cryptographic mechanism, which is unlikely to be possible.
2. inferential attacks. can the location of a positive citizen who chooses to report being positive be inferred by others? it is possible for a malicious party to simultaneously record broadcasts at multiple different locations, including those that the positive citizen visited. using these recordings, the malicious party could infer where the positive citizen was. the times at which the citizen visited these locations can also be inferred.
3. replay and reliability attacks. if a citizen is alerted to be at risk, is it possible the citizen was not in the proximity of a positive individual? there are a few unlikely attacks that can trigger a false alert. one is a replay attack. for example, suppose a malicious group of multiple individuals colludes to try to pretend to be a single individual; precisely, suppose they all use the same private seed (see the figure above). then, if only one of these malicious individuals makes a positive report, multiple people can be alerted, even if those people were not in the proximity of the person who made the positive report. the protocol incorporates several measures to make such attacks as difficult as possible.
4. physical attacks. what information is leaked if a citizen's device is compromised by a hacker, stolen, or physically seized by an authority?
generally, existing mechanisms protect access to the storage of a phone. should these mechanisms fail, the device only stores enough information to reconstruct the signals broadcast over a period of time prior to the compromise which amounts to the length of the infection window (i.e., two weeks), in addition to collected signals. this enables some additional inference attacks. it is not possible to learn whether the user has ever reported positive. given that we would like the protocol to be of use to different states and countries, we seek an approach which allows for both security in reporting and for flexibility from the app designer in regions where it may make sense to consider reports which are self-confirmed positive tests or self-confirmed symptoms.
reporting. does the protocol support both medically confirmed positive tests and self-confirmed positive tests? yes, it supports both. the uploaded files contain signatures from the uploading party (i.e., from a hospital lab or from any app following the protocol). this permits an app designer the freedom to use information from health systems and information from individuals in possibly different manners. in less developed nations, it may be helpful to permit the app designer to allow for reports based on less reliable signatures.
reliability. how will the protocol handle issues of false positives and false negatives with regard to alerting? what about cases where users don't have (or use) their mobile phones? the protocol does not explicitly address this, but a deployment requires both thoughtful app design and responsible communication with the public. with regard to the former, the false positive and false negative rates have to be taken into account when determining how to make at-risk reports. more generally, estimates of the probabilities can be helpful to a user (or an otherwise interpretable report); such reports can be particularly relevant for those in high-risk categories (such as the elderly and immuno-compromised individuals). furthermore, not everyone has a smartphone, and not everyone with a smartphone will use this app. thus, users of this app, if they have not received any notification of exposure to covid-19-positive cases, should not assume that they have not been around such positive cases. this means, for example, that they should still be cautious and follow all appropriate current public health guidelines, even if the app has not alerted them to possible covid-19 exposure. this is particularly important until there is sufficient penetration of the app in any local population.
we now list threats that are outside of the scope of the protocol, yet important to consider. care should be taken to address these concerns:
• trusted communication. communication between users and servers must be protected using standard mechanisms (i.e., the tls protocol [ ]).
• spurious entries. self-reporting allows a malicious user to report themselves positive when they are not, and generally may allow several fake reports (i.e., a flooding attack). mitigation techniques should be introduced to reduce the risk of such attacks.
• invalid authentication. positive reports should be validated using digital signatures, e.g., by healthcare providers. this requires appropriate public-key infrastructure to be in place. additional vulnerabilities related to misuse or misconfiguration of this infrastructure can affect the reliability of positive reports.
• implementation issues.
implementation aspects may weaken some of our claims, and need to be addressed. for example, signals we send over bluetooth as part of our protocol may be correlated with other signals which de-anonymize the user. we now provide an overview of the three functionalities of pact. this section describes and discusses a privacy-sensitive mobile tracing protocol. our protocol follows a pattern wherein users exchange ids via bluetooth communication. if a user is both infected (we refer to such users as positive, and otherwise as negative) and willing to warn others who may have been at risk via proximity to the user, then de-identified information is uploaded to a server to warn other users of potential exposure. the approach has been followed by a number of similar protocols; we describe the differences with some of them in the comparison section below. in appendix b, we discuss an alternative approach which may offer some efficiency and privacy advantages, at the cost of relying on signatures as opposed to hash functions. low-level technical details are omitted, e.g., how values are broadcast. further, it is assumed the communication between users and the server is protected using the transport layer security (tls) protocol. we first describe a variant of the protocol without entry validation, and discuss how to easily extend it to validate entries below. (a minimal code sketch of the id generation and matching steps follows this description.)
• parameters. we fix an understood time unit dt and define ∆ such that ∆ · dt equals the infection window. (typically, this would be two weeks.) we also fix the bit length n of the identifiers. (typically, n = 128.) we also use a function G : {0,1}^n → {0,1}^{2n} which is assumed to be a secure cryptographic pseudorandom generator (prg). if n = 128, we can use G(x) = sha-256(x).
• pseudorandom id generation. every user broadcasts a sequence of ids id_1, id_2, . . .. to generate these ids, the user initially samples a random n-bit seed s_0, and then computes (s_i, id_i) ← G(s_{i-1}) for i = 1, 2, . . .. after i time units, the user only stores s* ← s_{max{i−∆,0}}, the time t* at which s* was generated, the current s_i, and the time t_i at which s_i was generated. note that if the device was powered off or the application disabled, we need to advance to the appropriate s_i.
• pseudorandom id collection. for every id broadcast by a device in its proximity at time t, a user stores a pair (id, t) in its local storage s.
• reporting. to report a positive test, the user uploads (s*, t_start = t*, t_end = t_i) to the server, which appends it to a public list l. the server checks that t_start and t_end are reasonable before accepting the entry. once reported, the user erases its memory and restarts the pseudorandom id generation procedure.
• checking exposure. a user downloads l from the server (or the latest portion of it). for every entry (s*, t_start, t_end) in l, it generates the sequence of ids id*_1, . . . , id*_∆ starting from s*, as well as estimates t*_i of the time at which each id*_i was initially broadcast. if s contains (id*_i, t) for some i ∈ {1, . . . , ∆} such that t and t*_i are sufficiently close, the user is alerted of potential exposure.
setting delays. to prevent replay attacks, an entry (s*, t_start, t_end) should be published with a slight delay. this is to prevent an id*_∆ generated from s* being recognized as a potential exposure by any user if immediately rebroadcast by a malicious party.
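the following python sketch illustrates the seed/id chain and the exposure check described above. the constants (the time unit dt, the window length, and the matching tolerance) are illustrative assumptions, not values fixed by the protocol; only the chaining rule (s_i, id_i) ← G(s_{i-1}) with G(x) = sha-256(x) follows the text directly.

```python
# Minimal sketch of the PACT id chain and exposure check (not a full client).
import hashlib

N_BYTES = 16                 # n = 128 bits per seed/id
DT = 60                      # assumed time unit dt, in seconds
DELTA = 14 * 24 * 60         # time units in a two-week infection window
TOLERANCE = 2 * DT           # assumed slack when comparing timestamps

def G(s: bytes):
    # G: {0,1}^n -> {0,1}^{2n}; G(x) = SHA-256(x), split into
    # (next seed s_i, broadcast id id_i).
    out = hashlib.sha256(s).digest()
    return out[:N_BYTES], out[N_BYTES:]

def regenerate_ids(seed: bytes, count: int = DELTA):
    # Recompute id_1 .. id_count from a reported seed s*.
    ids, s = [], seed
    for _ in range(count):
        s, id_i = G(s)
        ids.append(id_i)
    return ids

def check_exposure(entry, heard):
    # entry: (s_star, t_start, t_end) as uploaded to the public list L.
    # heard: local storage S of (id, t) pairs collected over Bluetooth.
    s_star, t_start, t_end = entry
    ids = regenerate_ids(s_star)
    # Estimated broadcast time of id*_i is roughly t_start + i * dt.
    est = {id_i: t_start + i * DT for i, id_i in enumerate(ids, start=1)}
    return any(id_ in est and abs(t - est[id_]) <= TOLERANCE
               for id_, t in heard)
```

a device following the protocol would broadcast the current id each time unit and retain only (s*, t*) and the current (s_i, t_i), so the sketch's full-chain regeneration is only needed when checking downloaded entries.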
entry validation. entries can (and should) be validated by attaching a signature σ on (s*, t_start, t_end) when reporting, as well as (optionally) a certificate to validate this signature. an entry thus has the form (s*, t_start, t_end, σ, cert). entries can be validated by multiple entities, by simply re-uploading them with a new signature. a range of designs and policies are supported by this approach. upon an initial upload, a (weakly secure) signature with an app-specific key could be attached for self-reporting. this signature does not provide any real security (as we cannot guarantee that an app-specific signing key remains secret), but can be helpful to offer improved functionality. third parties (like healthcare providers) can re-upload an entry with their signature after validation. an app can adopt different policies on how to display a potential exposure depending on how it is validated. we also do not specify here the infrastructure required to establish the validity of certificates, or how a user interacts with a validating party, as this is outside the scope of this description.
fixed-length sequences of ids. as stated, during the first ∆ − 1 time units a user will have generated a sequence of fewer than ∆ ids. during this time, the number of ids the user has generated from its current s* is determined by how long ago the user started the current pseudorandom id generation procedure (either when they initially started using the protocol or when they last submitted a report). this may be undesirable information to reveal to a party that gains access to the sequence of ids (e.g., if the user submits a report or if the party gains physical access to the user's device). so, to avoid revealing this information, a user may optionally iterate to s_∆ and use id_∆ as the first id they broadcast when starting or restarting the pseudorandom id generation procedure.
synchronized updates. suppose a user updates their seed every dt amount of time after whenever they happened to originally start the id generation process. then it may be possible to correlate two ids of a user by noticing that the times at which the ids were initially broadcast were separated in time by a multiple of dt. to mitigate this, it would be beneficial to have an agreed schedule of when all users update their seeds; for example, it might be agreed that everyone updates their seed at midnight utc and then at fixed multiples of dt thereafter. (a small sketch of such a schedule follows.)
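one simple way to realize a synchronized schedule is to derive the current period index from utc epoch time, so every device updates in lockstep; the dt value below is an illustrative assumption, and G refers to the prg in the sketch above.

```python
# Sketch: lockstep seed updates via a shared, epoch-based period index.
import time

DT = 15 * 60  # assumed agreed dt of 15 minutes; any common value works

def current_period(now=None):
    # Identical on all devices at a given moment (up to clock skew).
    now = time.time() if now is None else now
    return int(now // DT)

def maybe_update(state):
    # state: {"period": int, "seed": bytes}; advances the chain with G
    # from the earlier sketch, catching up if the device was powered off.
    p = current_period()
    while state["period"] < p:
        state["seed"], _ = G(state["seed"])
        state["period"] += 1
```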
privacy and integrity properties of the protocol follow from the following two propositions. (their proofs are omitted and follow from standard techniques.) in the following discussion, it is convenient to refer to an id value id_i output by a user as unreported if it is not within the ∆ ids generated by a seed the user has reported to the server.
proposition 1 (pseudorandomness). all unreported ids are pseudorandom, i.e., no observer (other than the user) can distinguish them from random-looking strings (independent from the state of the user) without compromising the security of G.
proposition 2 (one-wayness). no attacker can produce a seed s which generates a sequence of ∆ ids that include an unreported id generated by an honest user (not controlled by the adversary) without compromising the security of G.
to discuss the consequences of these properties on privacy and integrity, let us refer to users as either "positive" or "negative" depending on whether they decided to report as positive, by uploading their seed to the server, or not.
• privacy for negative users. by the pseudorandomness property, a negative user u only broadcasts pseudorandom ids. these ids cannot be linked without knowledge of the internal state of u. this privacy guarantee improves with the frequency of updating the seed s_i: ideally, if a different id_i is broadcast each time, no linking is possible. this however results in less efficient checking for exposure by negative users.
• privacy for positive users. upon reporting positive, the last ∆ ids generated by the positive user can be linked. (we discuss what this means below, and possible mitigation approaches.) however, by pseudorandomness, this is only true for the ids generated within the infection window. older ids and newer ids cannot be linked with those in the infection window, or with each other. therefore, a positive user has the same guarantees as a negative user outside of the reported infection window.
• integrity guarantees. it is infeasible for an attacker to upload to the server a value s* which generates an unreported id that equals one generated by another user. this prevents the attacker from misreporting ids of otherwise negative users and erroneously alerting their contacts.
timing information and replay attacks. the timestamping is necessary to prevent replay attacks. in particular, we are concerned by adversaries rebroadcasting ids of legitimate users (to be tested positive) outside the range of their devices. this may create a high number of false exposures to be reported. an attack we cannot prevent is the following relay attack: an attacker captures an id of an honest user at location a, sends it over the internet to location b, where it is re-broadcast. however, as soon as there is sufficient delay, the attack is prevented by maintaining sufficiently accurate timing information. (one can envision several accuracy compromises in the implementation, which we do not discuss here.)
strong integrity. our integrity property does not prevent a malicious user from reporting a seed generating an id which has already been reported. given an entry with seed s*, the attacker just chooses (for example) their own seed as the first half of G(s*). the threat of such attacks does not appear significant. however, they could be prevented with a less lightweight protocol, as we explain next. we refer to the resulting security guarantee as strong integrity. each user generates a signing/verification key pair (sk, vk) along with the initial seed. then, we include vk in the id generation process, in particular letting (s_i, id_i) ← G(s_{i-1}, vk). an entry now consists of (s*, t_start, t_end, vk, σ), where σ is a signature (with signing key sk) on (s*, t_start, t_end, vk). entries with invalid signatures are ignored. (this imposes slightly stronger assumptions on G: pseudorandomness under related seeds sharing part of the input, and binding of vk to s_i.) the cen protocol, discussed below, is the only one that targets strong integrity, though their initial implementation failed to fully achieve it. (the issue has been fixed after our report.) (a sketch of this signed-report variant follows.)
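the sketch below shows one plausible instantiation of the strong-integrity variant, using ed25519 signatures from the third-party pyca/cryptography package. the construction G(s, vk) = sha-256(s || vk) is an assumption: the text only requires that vk be bound into the id generation, not this particular choice.

```python
# Sketch: strong-integrity reports with vk bound into the id chain.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey)
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

N_BYTES = 16

def G(s: bytes, vk: bytes):
    # Assumed instantiation: fold the verification key into each step.
    out = hashlib.sha256(s + vk).digest()
    return out[:N_BYTES], out[N_BYTES:]

def _payload(s_star, t_start, t_end, vk):
    return s_star + t_start.to_bytes(8, "big") + t_end.to_bytes(8, "big") + vk

def make_report(sk: Ed25519PrivateKey, s_star: bytes, t_start: int, t_end: int):
    vk = sk.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
    sig = sk.sign(_payload(s_star, t_start, t_end, vk))
    return (s_star, t_start, t_end, vk, sig)

def verify_report(entry) -> bool:
    s_star, t_start, t_end, vk, sig = entry
    try:
        Ed25519PublicKey.from_public_bytes(vk).verify(
            sig, _payload(s_star, t_start, t_end, vk))
        return True
    except Exception:
        return False  # entries with invalid signatures are ignored
```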
the most problematic aspect is the linking of this individual with the fact that they are positive. a natural approach to avoid linking, as in [ ] , is for the the server to only expose the ids, rather than a seed from which they are computed. however, this does not make them unlinkable. imagine, at an extreme, that the storage on the server is append only (which is a realistic assumption). then, the ids belonging to the same user are stored sequentially. one can obfuscate this leakage of information in several ways, for example by having the server buffer a certain amount of new ids, and shuffle them before release. nonetheless, the actual privacy improvement is hard to assess without a good statistical model of upload frequency. this also increases the latency of the system which directly harms its public health value. a user could also learn at which time the exposure took place, and hence infer the identity of the positive user from other available information. we stress that the application can and should refuse to display the time of potential exposure -thus preventing a "casual attacker" from learning timing information. however, a malicious app can always remember at which time an id has been seen. contact tracing interviews are laborious and often miss important events due to the limitations of human memory. our plan to assist here is to provide information to the end user that can (with consent) be shared with a public health organization charged with performing contact tracing interviews. this is not an exposure of the entire observational log, but rather an extract of the information which is requested in a standard contact tracing interview. we have been working with healthcare teams from boston and the university of washington on formats and content of information that are traditionally sought by public health agencies. ideally, such extraction can be done working with the user before a contact tracing interview even occurs to speed the process. healthcare authorities from nyc have informed us that they would love to have the ability to make public service announcements which are highly tailored to a location or to a subset of people who may have been in a certain region during specific periods of time. this capability can be enabled with a public server supporting (area x time,message) pairs. here "area" is a location, a radius (minimum meters), a beginning time and an ending time. only announcements from recognized public health authorities are allowed. anyone can manually query the public server to determine if there are messages potentially relevant to them per their locations and dwells at the locations over a period of time. however, simple automation can be extremely helpful as phones can listen in and alert based on filters that are dynamically set up based on privately-held locations and activities. upon downloading (area x time, message) pairs a phone app (for example) can automatically check whether the message is relevant to the user. if it is relevant, a message is relayed to the device owner. querying the public server provides no information to the server through the protocol itself, because only a simple copy is required. we discuss some alternative approaches to mobile tracing. some of these are expected to be adopted in existing and future contact-tracing proposals, and we discuss them here. hart et al. [ ] provides a useful high-level understanding of the issues involved in contact tracing. 
we discuss some alternative approaches to mobile tracing. some of these are expected to be adopted in existing and future contact-tracing proposals, and we discuss them here. hart et al. [ ] provides a useful high-level understanding of the issues involved in contact tracing. they discuss, among other topics, the value of using digital technology to scale contact tracing and the trade-offs between different classes of solutions.
pact users upload their locally generated ids upon a positive report. an alternative is to upload the collected ids of potentially at-risk users. this approach (which we refer to as the dual approach) has at least one clear security disadvantage and one mild privacy advantage over pact. (the latter is only true if the system is carefully implemented, as we explain below.)
disadvantages: reliability and integrity attacks. in the dual approach, a malicious user cannot be prevented from very easily reporting a very large number of ids which were not generated by users in physical proximity. these ids could have been collected by colluding parties elsewhere, at any time before the report. such attacks can seriously hurt the reliability of the system. in pact, to achieve a similar effect, the attacker needs to (almost) simultaneously broadcast the same id in direct proximity of all individuals who should be falsely alerted to be potentially at risk. pact ensures integrity of positive reporting by exhibiting a seed generating these ids, known only to the reporter. a user u cannot frame another negative user u' as a positive user by including an id generated by u'. in the dual approach, user u' could be framed, for example, by uploading ids that have been broadcast in their surroundings.
advantage: improved temporal ambiguity. both in the dual approach and in pact-like designs, a user at risk can de-anonymize a positive user from the time at which the matching id was generated/collected, and other contextual information (e.g., a surveillance video). the dual approach offers a mitigation to this using re-randomization of ids. we explain one approach [ ]. let g be a prime-order cyclic group with generator g (instantiated via a suitable elliptic curve); a code sketch follows this list.
1. each user u chooses a secret key s_u as a random element in z_p.
2. each broadcast id takes the form id_i = (g^{r_i}, g^{r_i · s_u}), where r_1, r_2, . . . are random elements of z_p.
3. to upload an id of the form id = (x, y) with a report, a positive user uploads instead a re-randomized version id' = (x^r, y^r), where r is a fresh random value from z_p.
4. to determine whether they are at risk, user u checks whether an id of the form id = (x, y) such that y = x^{s_u} is stored on the server.
under a standard cryptographic assumption (the so-called decisional diffie-hellman (ddh) assumption) the ids are pseudorandom. further, a negative user who learns they are at risk cannot tell which one of the ids they broadcast has been reported, as long as the reporting user re-randomized them and all ids have been generated using the same s_u. note that incorrect randomization only hurts the positive user. crucially, however, the privacy benefit inherently relies on each user u re-using the same s_u, and we cannot force a malicious user to comply. for example, to track the movements of positive users, a surveillance entity can generate ids at different locations of the form (x, y) where y = x^{s_l} and s_l depends on the location l. identifiers on the server of the form (x, x^{s_l}) can then be traced back to location l. a functionally equivalent attack is in fact more expensive against pact, as this would require storing all ids of users broadcast at location l.
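the sketch below exercises the algebra of the re-randomizable ids. for readability it works in a multiplicative group modulo a prime rather than an elliptic-curve group, and the parameters are toy placeholders with no security value; in particular this group does not have prime order, so only the mechanics are illustrated.

```python
# Toy sketch of DDH-style re-randomizable ids (illustration only; a real
# deployment would use a prime-order elliptic-curve group).
import secrets

P = 2 ** 127 - 1   # placeholder prime modulus; NOT a secure choice
G_GEN = 3          # placeholder generator

def new_id(s_u: int):
    r = secrets.randbelow(P - 2) + 1
    x = pow(G_GEN, r, P)
    return x, pow(x, s_u, P)           # (g^r, g^(r*s_u))

def rerandomize(id_pair):
    x, y = id_pair
    r = secrets.randbelow(P - 2) + 1
    return pow(x, r, P), pow(y, r, P)  # same relation, unlinkable look

def matches(id_pair, s_u: int) -> bool:
    x, y = id_pair
    return y == pow(x, s_u, P)         # holds iff derived from key s_u

# usage: a re-randomized id still matches its owner's key:
# s = secrets.randbelow(P - 2) + 1
# assert matches(rerandomize(new_id(s)), s)
```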
we discuss an alternative centralized approach here, which relies on a trusted third party (ttp), typically an agency of a government. such a solution requires an initial registration phase with the ttp, where each user subscribes to the service. moreover, the protocol operates as follows: 1. users broadcast random-looking ids and gather ids collected in their proximity. 2. upon a positive test, a user reports to the ttp all of the ids collected in their proximity during the relevant infection window. the ttp then alerts the users who generated these ids, who are now at risk. in order for the ttp to alert potentially at-risk users, it needs to be able to identify the owners of these identifiers. there are a few technical solutions to this problem. • one option is to have the ttp generate all ids which are used by the users; this requires either storing them or (in case only seeds generating them are stored) a very expensive check to identify at-risk users. • a more efficient alternative for the ttp (but with larger identifiers) goes as follows. the trusted third party generates a public-key/secret-key pair (sk, pk), making pk public. it also gives a unique token τ_u to each user u upon registration, which it remembers. then, the i-th id of user u is id_i = enc(pk, τ_u). (note that encryption is randomized here, so every id_i appears independent from prior ones.) the ttp can then efficiently identify the user who generated id_i by decrypting it. privacy considerations. such a centralized solution offers better privacy against attackers who do not collude with the ttp; in particular, only pseudorandom identifiers are broadcast at all times. moreover, at-risk individuals only learn that one of the ids they collected belongs to a positive individual. at-risk users can still collude, learning some information from the time of being reported at risk, and correlate identifiers belonging to the same positive user, but this is harder. the biggest drawback of this solution, however, is the high degree of trust placed in the ttp. for example: • the ttp learns the identities of all at-risk users who have been in proximity of the positive subject. • the ttp can, at any time and independently of any actual report, learn the identity of the user u who broadcasts a particular id, or at least link them to their token τ_u. this could be easily exploited for surveillance of users adopting the service. security consideration. as in the dual approaches described above, it is trivial for a malicious party posing as an honest user to report valid identifiers of other users (which may have been collected in a distributed fashion) to erroneously alert them as being at risk. replay attacks can be mitigated by encrypting extra metadata along with τ_u (e.g., a timestamp), but this would make ids even longer. if the ttp is malicious, it can target specific users to falsely claim they are at risk or refrain from informing them when they actually are at risk.
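a sketch of the second, encryption-based option. the text leaves the encryption scheme abstract; we pick rsa-oaep (which is randomized) purely for concreteness, so the key size, token format, and function names are illustrative assumptions:

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    OAEP = padding.OAEP(
        mgf=padding.MGF1(algorithm=hashes.SHA256()),
        algorithm=hashes.SHA256(),
        label=None,
    )

    # ttp setup: a keypair, with pk distributed to all registered users
    ttp_sk = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    ttp_pk = ttp_sk.public_key()

    def fresh_id(token: bytes) -> bytes:
        # oaep is randomized, so successive ids for the same token are unlinkable
        # to anyone without the ttp's secret key; note each id is as long as the
        # rsa modulus (256 bytes here), matching the "larger identifiers" caveat
        return ttp_pk.encrypt(token, OAEP)

    def identify(reported_id: bytes) -> bytes:
        # only the ttp can map a reported id back to the registered token
        return ttp_sk.decrypt(reported_id, OAEP)

    tau_u = b"token-of-user-u"
    id1, id2 = fresh_id(tau_u), fresh_id(tau_u)
    assert id1 != id2 and identify(id1) == tau_u == identify(id2)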
it is also possible to design protocols based on the sensing of absolute locations (gps, and gps extended with dead reckoning, wifi, and other signals per current localization methods) consistent with "if you do not report as being positive, then no information of yours will leave your phone" (see section ). for example, a system could upload location traces of positives (cryptographically, in a secure manner), and then negative users, whose traces are stored on their phones, could intersect their traces with the positive traces to check for exposure. this could potentially be done with stronger cryptographic methods to limit the exposure of information about these traces to negative users; one could think of this as a more general version of private-set intersection (psi) [ , , ] . however, such solutions would still reveal traces of positives to a server. there are two reasons why we do not focus on the details of such an approach here: • current localization technologies are not as accurate as the use of bluetooth-based proximity detection, and may not be accurate enough to be consistent with medically suggested definitions for exposure. • approaches employing the sensing and collection of absolute location information would need to rely more heavily on cryptographic protocols to keep the positive users' traces secure. however, this is an approach worth keeping in mind as an alternative, subject to assessments of achievable accuracies and of their relevance for public health applications. there are an increasing number of contact tracing applications being created with different protocols. we will briefly discuss a few of these and how their mobile tracing protocols compare with the approaches described in section . and . the privacy-sensitive mobile tracing protocols proposed by coepi [ ] , covidwatch [ ] , as well as dp-3t [ ] , have a similar structure to our proposed protocol. we briefly describe the technical differences between all of these protocols and discuss the implications of these differences. similar to our proposed protocol, these are based on producing pseudorandom ids by iteratively applying a prg g to a seed. coepi and covidwatch use the contact event numbers (cen) protocol, in which the initial seed is derived from a digital signature signing key rak and g is constructed from two hash functions (which during each iteration incorporate an encoding of the number of iterations done so far and the verification key rvk which matches rak). another proposal is the dp-3t [ ] protocol, in which g is constructed from a hash function, a prf, and another prg. the latter prg is used so that a single iteration of g produces all the ids needed for a day. these ids are used in a random order throughout the day. both of these (under appropriate cryptographic assumptions) achieve the same sort of pseudorandomness and one-wayness properties as our protocol. the incorporation of rvk into g with cen is intended to provide strong integrity and allow a reporting user to include a memo with their report that is cryptographically bound to the report. two ideas for what such a memo might include are a summary of the user's self-reported symptoms (coepi) or an attestation from a third party verifying that the user tested positive (covidwatch). because a counter of how many times the seed has been updated is incorporated into g, a report must specify the corresponding counters. this leaks how long ago the user generated the initial seed, which could potentially be correlated with identifying information about the user (e.g., when they initially downloaded the app). an earlier version of cen incorrectly bound the digital signature key to the identifiers in a report. suppose an honest user has submitted a report for id_{j1} through id_{j2} (for j1 < j2) with a user-chosen memo. given this report, an attacker could create their own report that verifies as valid, but includes the honest user's id_i for some i between j1 and j2, together with a memo of the attacker's choosing. a fix was proposed after we contacted the team behind the cen protocol.
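the common seed-iteration structure of these protocols can be sketched as follows; domain-separated sha-256 plays the role of the prg g here, and the 16-byte id length is a simplification of ours rather than any specific protocol's choice:

    import hashlib

    ID_LEN = 16  # bytes broadcast over bluetooth; a simplification

    def next_seed(seed: bytes) -> bytes:
        # one iteration of the prg g; the prefixes give domain separation
        # between the seed chain and the ids derived from it
        return hashlib.sha256(b"chain" + seed).digest()

    def id_from_seed(seed: bytes) -> bytes:
        return hashlib.sha256(b"id" + seed).digest()[:ID_LEN]

    def ids_from_report(seed: bytes, n: int) -> list:
        # anyone holding a reported seed can regenerate the n ids derived from
        # it, while one-wayness of the hash protects ids from before the report
        out = []
        for _ in range(n):
            out.append(id_from_seed(seed))
            seed = next_seed(seed)
        return out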
the random order of a user's ids for a day chosen by dp-3t is intended to make it difficult for an at-risk individual to identify specifically when they were at risk (and thus potentially, by whom they were exposed). a protocol cannot hope to hide this sort of timing information from an attacker that chooses to record the time when they received every id they see; this serves instead as a mitigation against a casual attacker using an app that does not store this sort of timing information. in our protocol and cen, information about the exposure time is not intended to be hidden at the protocol level. in our protocol, the time an id was used is even included as part of a report and used to prevent replay attacks, as discussed earlier. cen does not use timing information to prevent replay attacks, but considers that an app may choose to give users precise information about where they were exposed (so the user can reason about how likely this potential exposure was to be an actual exposure). a similar protocol idea was presented in [ ] . it differs from the aforementioned proposals in that individual ids are uploaded to the server, rather than a seed generating them (leading to increased bandwidth and storage). alternatives using bloom filters to reduce storage are discussed, but these inherently decrease the reliability of the system. dp-3t also recently included a similar protocol as an additional option, using cuckoo filters in place of bloom filters. the tracetogether [ ] app is currently deployed in singapore. it uses the bluetrace protocol designed by a team at the government technology agency of singapore. this protocol is closely related to the encryption-based technique discussed in section . . the private kit: safe paths app [ , ] intends to use an absolute-location-centric approach to mobile tracing. they intend to mitigate some of the downsides discussed in section . by allowing reported location traces of positive users to be partially redacted. it is unclear what methodology they intend to use for deciding how to redact traces. there is a trade-off in this redaction process between how easily a positive user can be identified from their trace and how much information must be removed from it (decreasing its usefulness). they intend to use cryptographic protocols (likely based on [ ] ) to minimize the amount of information revealed about positive users' traces. a group of scientists at the big data institute of oxford university have proposed the use of a mobile contact-tracing app [ , ] based on their analysis in [ ] . the nexttrace [ ] project aims to coordinate with covid-19 testing labs and users, providing software to enable contact tracing. the details of these proposals and the privacy protections they intend to provide are not publicly available. the projects we refer to are only a small selection of the mobile contact-tracing efforts currently underway; a more extensive listing of these projects is being maintained at [ ] , along with other information of interest to contact tracing. discussion and further considerations. most protocols like ours store a seed on a server, which is then used to deterministically generate a sequence of identifiers. details differ in how exactly these sequences are generated (including the adopted cryptographic algorithms). however, it appears relatively straightforward for apps to be modified to support all of these different sequence formats, as sketched below.
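one way such multi-format support could look, reusing ids_from_report from the sketch above; the registry, the protocol tags, and the report layout are invented for this example:

    # hypothetical registry mapping a protocol tag to its seed-expansion routine;
    # other formats (with their own iteration rules) would register theirs here
    EXPANDERS = {"pact-like": ids_from_report}

    def check_exposure(local_ids: set, reports) -> bool:
        # reports: (protocol_tag, seed, n_ids) entries downloaded from a server
        for tag, seed, n in reports:
            expand = EXPANDERS.get(tag)
            if expand is None:
                continue  # unknown sequence format: skip rather than fail
            if any(i in local_ids for i in expand(seed, n)):
                return True
        return False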
a potential challenge is that data from different protocols may provide different levels of protection (e.g., the lack of timing information may reduce the effectiveness against replay attacks). this difference in reliability may be surfaced via the user interface. in order to support multiple apps accessing servers for different services, it is important to adopt an interoperable format for entries to be stored on a server and, possibly, to develop a common api. we acknowledge that ethical questions arise with contact tracing and in the development and adoption of any new technology. the question of how to balance what is revealed for the good of public health vs. individual freedoms is one that is central to public health law. we reiterate that privacy is already impacted by tracing practices. in some nations, positively tested citizens are required, either by public health policy or by law, to disclose aspects of their history. such actions and laws frame multiple concerns about privacy and freedom, and bring up important questions. the purpose of this document is to lay out some of the technological capabilities, which supports broader discussion and debate about civil liberties and the risks that contact tracing can pose to civil liberties. another concern is accessibility of the service: not everyone has a phone (or will have the service installed). one consequence of this is that the quality of contact tracing in a certain population inherently depends on factors orthogonal to the technological aspects, which in turn raises important questions about fairness. tracing is one part of a conventional epidemic response strategy, based on tests, tracing, and timeouts (ttt). programs involving all three components are as follows: • test heavily for the virus. south korea ran over tests per person found with the virus. • trace the recent physical contacts for anyone who tests positive. south korea conducted mobile contact tracing using telecom information. • timeout the virus by quarantining contacts until their immune system purges the virus, rendering them non-infectious. the mobile tracing approach allows this strategy to be applied at a dramatically larger scale than only relying on human contact tracers. this chain is only as strong as its weakest link. widespread testing is required, and wide-scale adoption must occur. furthermore, strategies must also be employed so that citizens take steps to self-quarantine or seek testing (as indicated) when they are exposed. we cannot assume universal usage of the application and concomitant enlistment in ttt programs. studies are needed of the sensitivity of the effectiveness of the approach to different levels of subscription in a population. • bluetooth message: a bluetooth message consists of a fixed-length string of bytes. it is used with the bluetooth sensory log to discover if there is a match, which results in a warning that the user may have been in contact with an infected person. • message: a message is a cryptographically signed string of bytes which is interpreted by the phone app. this is used for either a public health message (announced to the user if the sensory log matches) or a bluetooth message. with the above defined, there are two common queries that the server supports, as well as an announcement mechanism.
• getmessages(region, time) returns all of the (area, message) pairs that the server has added since time for the region. the app can then check locally whether the area intersects with the recorded sensory log of (location, time) pairs on the phone, and alert the user with the message if so. • howbig(region, time) returns the (approximate) number of bytes worth of messages that would be downloaded on a getmessages call with the same arguments. howbig allows the phone app to control how much information it reveals to the server about locations/times of interest according to a bandwidth/privacy tradeoff. for example, the phone could start with a very coarse region, specifying higher-precision regions until the bandwidth required is acceptable, then invoke getmessages. (this functionality is designed to support controlled anonymity across widely varying population densities.) • announce(area, message) uploads an (area, message) pair for general distribution. to prevent spamming, the signature of the message is checked against a whitelist defined with the server. we propose an alternative to the protocol in section . . one main difference is that the server cannot generate the ids broadcast by a positive user, and only stores a short verification key used to identify ids broadcast by the positive user. while this does not prevent many of the inference scenarios we discussed above, this appears to be a desirable property. as we explain below, this protocol offers a different cost for checking exposure, which may be advantageous in some deployment scenarios. this alternative approach inherently introduces risks of replay attacks which cannot be prevented by storing timestamps, because the server obtains no information about the times at which ids have been broadcast. to overcome this, we build on top of a very recent approach of pietrzak [ ] for replay-attack protection. (along similar lines, this can also be extended to relay-attack protection by including gps coordinates, but we do not describe this variant here.) • setup and parameters. we fix an understood time unit dt. we make use of a digital signature scheme specifying algorithms for key generation, signing, and verification, denoted kg, sign, and vrfy, respectively. we also use a hash function h. • pseudorandom id generation. at the start of each day d (with starting time t_d), the user generates a fresh key pair (sk_d, vk_d) using kg. for the i-th id of the day, they also determine the current time t_i = t_d + dt · (i − 1). finally, the user samples n-bit random strings r_i and r_i′ and computes the identifier as id_i = (σ_i, r_i, h_i), where σ_i = sign(sk_d, r_i || h_i) and h_i = h(r_i′, t_i). they broadcast (id_i, r_i′, t_i). when day d ends, the user deletes their signing key sk_d. (the verification key vk_d is not deleted until an amount of time equal to the infection window has elapsed.) • pseudorandom id collection. for every (id_i, r_i′, t_i) = ((σ_i, r_i, h_i), r_i′, t_i) broadcast by a device in their proximity, a user first checks if t_i is sufficiently close to their current time and if h_i = h(r_i′, t_i). if so, they store id_i in their local storage s. • reporting. to report a positive test, the user uploads each of their recent vk_d to the server, which appends them to a public list l. once reported, the user erases their memory and restarts the pseudorandom id generation procedure. • checking exposure. a user downloads l from the server (or the latest portion of it). for every entry vk in l and every entry (σ, r, h) in s, they run vrfy(vk, σ, r||h). if this returns true, the user is alerted of potential exposure.
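the following runnable sketch instantiates this protocol with ed25519 signatures and sha-256, which are our choices for illustration (the text leaves the schemes abstract until the assumptions discussed below); the 16-byte random strings and the 300-second freshness tolerance are likewise illustrative parameters:

    import os, time, hashlib
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey, Ed25519PublicKey,
    )

    def h(r2: bytes, t: int) -> bytes:
        return hashlib.sha256(r2 + t.to_bytes(8, "big")).digest()

    def make_broadcast(sk_d: Ed25519PrivateKey, t_i: int):
        r1, r2 = os.urandom(16), os.urandom(16)   # the strings r_i and r_i' above
        h_i = h(r2, t_i)
        sigma = sk_d.sign(r1 + h_i)
        return (sigma, r1, h_i), r2, t_i          # (id_i, r_i', t_i)

    def collect(store: list, id_i, r2: bytes, t_i: int, now: int, tol: int = 300):
        sigma, r1, h_i = id_i
        if abs(now - t_i) <= tol and h(r2, t_i) == h_i:  # freshness + consistency
            store.append((sigma, r1, h_i))

    def exposed(store: list, reported_vks) -> bool:
        for vk_bytes in reported_vks:
            vk = Ed25519PublicKey.from_public_bytes(vk_bytes)
            for sigma, r1, h_i in store:
                try:
                    vk.verify(sigma, r1 + h_i)   # vrfy(vk, sigma, r || h)
                    return True
                except InvalidSignature:
                    pass
        return False

    sk_d = Ed25519PrivateKey.generate()
    vk_bytes = sk_d.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw
    )
    now = int(time.time())
    identifier, r2, t_i = make_broadcast(sk_d, now)
    store = []
    collect(store, identifier, r2, t_i, now)
    assert exposed(store, [vk_bytes])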
efficiency comparisons. let ∆ be the number of ids broadcast over the infection window. let s = |s| be the size of the local storage. let l be the number of new verification keys a user downloads. to check exposure, the protocol from section . roughly runs in time l · ∆ · (t_g + log s), where t_g is the time needed to evaluate g. in contrast, for the protocol in this section, the time is l · s · t_vrfy, where t_vrfy is the time to verify a signature. one should note that t_vrfy is generally larger than t_g, but can still be quite fast. (for example, ed25519 enables fast batch signature verification.) therefore, the usage of this scheme makes particular sense if a user does not collect many ids, i.e., s is small relative to ∆ · log(s). assumptions. we require the following two standard properties of the hash function h: • pseudorandomness: for any x and a randomly chosen r ∈ {0, 1}^n, the output h(r, x) looks random to anyone that doesn't know r. • collision resistance: it is hard to find distinct inputs to h that produce the same output. of our digital signature scheme we require the following three properties. the first is a standard property of digital signature schemes. the latter two are not commonly required of a digital signature scheme, so one needs to be careful when choosing a signature scheme to implement this protocol. we have verified that these properties are achieved by ed25519 under reasonable cryptographic assumptions. • unforgeability: given vk and examples of σ = sign(sk, m) for attacker-chosen m, an attacker cannot produce a new pair (σ′, m′) for which vrfy(vk, σ′, m′) returns true. • one-wayness: given examples of σ = sign(sk, m) for attacker-chosen m (but not given vk), an attacker cannot find a key vk′ for which vrfy(vk′, σ, m) returns true for any of the example pairs (σ, m). • pseudorandomness: the output of sign(sk, ·) looks random to an attacker that does not know vk or sk. privacy and security properties. we discuss the privacy and integrity properties this protocol has in common with the earlier protocol, as well as some newer properties not achieved by the earlier protocol. • privacy for negative users. by the pseudorandomness property, the signatures broadcast by a user u look pseudorandom. beyond that, u broadcasts two random strings and their view of the current time t_i, which is already known by any device hearing the broadcast. thus these broadcasts cannot be linked without knowledge of the internal state of u. as before, this privacy guarantee improves with the frequency of generating new ids. • privacy for positive users. upon reporting positive, the ids broadcast by a user within a single day can be linked to each other. ids broadcast on different days can be linked if the server does not hide which vk's were reported together. older ids from days before the infection window and newer ids from after the report cannot be linked with those in the infection window or with each other. therefore, a positive user has the same guarantees as a negative user outside of the reported infection window. • integrity guarantees. it is infeasible for an attacker to upload to the server a value vk which verifies an unreported id that was broadcast by another user. this prevents the attacker from misreporting ids of otherwise negative users and erroneously alerting their contacts. • replay protection. the incorporation of t_i in each id prevents an attacker from performing a replay attack where they gather ids of legitimate users (who later test positive) and re-broadcast the ids at a later time to cause false beliefs of exposure.
a vk reported to the server cannot be used to broadcast further ids that will be recognized by other users as matching that report. • non-sensitive storage. because h(r_i′, t_i) looks random, the information intentionally stored by the app together with an id does not reveal when the corresponding interaction occurred. (of course, it may be possible to infer information about t_i through close examination of how the id was stored, e.g., where it was written in memory as compared to other ids.)
references:
• information sharing across private databases
• assessing disease exposure risk with location histories and protecting privacy: a cryptographic approach in response to a global pandemic
• high-speed high-security signatures
• anonymous collocation discovery: taming the coronavirus while preserving privacy
• coepi: community epidemiology in action
• quantifying sars-cov-2 transmission suggests epidemic control with digital contact tracing
• efficient private matching and set intersection
• outpacing the virus: digital response to containing the spread of covid-19 while mitigating privacy risks
• rfc : edwards-curve digital signature algorithm (eddsa)
• delayed authentication: replay and relay attacks on dp-3t
• phasing: private set intersection using permutation-based hashing
• apps gone rogue: maintaining personal privacy in an epidemic
• rfc : the transport layer security (tls) protocol version . , internet engineering task force (ietf)
• private kit: safe paths; privacy-by-design contact tracing
• decentralized privacy-preserving proximity tracing
• sustainable containment of covid-19 using smartphones in china: scientific and ethical underpinnings for implementation of similar approaches in other settings
• unified research on privacy-preserving contact tracing and exposure notification for covid-19
• from web search to healthcare utilization: privacy-sensitive studies from mobile data
acknowledgments. we gratefully acknowledge dean foster for contributions that are central in designing the current protocol, along with contributions throughout the current document. the authors thank yael kalai for numerous helpful discussions, along with suggesting the protocol outlined in section . . we thank edward jezierski, nicolas di tada, vi hart, ivan evtimov, and nirvan tyagi for numerous helpful discussions. we also graciously thank m eifler for designing all the figures. sham kakade acknowledges funding from the washington research foundation for innovation in data-intensive discovery, the onr award n - - - , and nsf grants #ccf- and #ccf- . stefano tessaro acknowledges support from a sloan research fellowship and from the nsf under grants cns- and cns- . jacob sunshine acknowledges funding from nih (k da ) and nsf ( , ). a number of practical issues and details may arise with implementation. 1. with regards to anonymity, if the protocol is implemented over the internet, then geoip lookups can be used to localize the query-maker to a varying extent. people who really care about this could potentially query through an anonymization service. 2. the narrowcast messages in particular may be best expressed through existing software map technology. for example, we could imagine a map querying the server on behalf of users and displaying public health messages on the map. 3. the bandwidth and compute usage of a phone querying the full database may be too high. to avoid this, it is reasonably easy to augment the protocol to allow users to query within a (still large) region. we mention one such approach below. 4. disjoint authorities. across the world, there may be many testing authorities which do not agree on a common infrastructure but which do want to use the protocol.
this can be accommodated by enabling the phone app to connect to multiple servers. 5. the mobile proximity tracing does not directly inform public authorities who may be a contact. however, it does provide some bulk information, simply due to the number of posted messages. there are several ways to implement the server. a simple approach, which works fine for not too many messages, just uses a public github repository. a more complex approach supporting regional queries is defined next. anyone can ask for a set of messages relevant to some region r, where r is defined by a latitude/longitude range, with messages after some timestamp. more specific subscriptions can be constructed on the fly based on policies that consider a region r and privately observed periods of time that an individual has spent in a region. such scoped queries and messaging services that relay content based on location, or on location and periods of time, are a convenience to make computation and communication tractable. the reference implementation uses regions greater in size than typical geoip tables. to be specific, let's first define some concepts. • region: a region consists of a latitude prefix, a longitude prefix, and the precision of each. for example, new york, which is at . n, - . e, can be coarsened to n, - e with two digits of precision (the actual implementation would use bits). • time: a timestamp is specified as the number of seconds (as a -bit integer) since january , . • location: a location consists of a full-precision latitude and longitude. • area: an area consists of a location, a radius, a beginning time, and an ending time. key: cord- -q qpec t authors: nijhuis, r. h. t.; guerendiain, d.; claas, e. c. j.; templeton, k. e. title: comparison of eplex respiratory pathogen panel with laboratory-developed real-time pcr assays for detection of respiratory pathogens date: - - journal: j clin microbiol doi: . /jcm. - sha: doc_id: cord_uid: q qpec t infections of the respiratory tract can be caused by a diversity of pathogens, both viral and bacterial. rapid microbiological diagnosis ensures appropriate antimicrobial therapy as well as effective implementation of isolation precautions. the eplex respiratory pathogen panel (rp panel) is a novel molecular biology-based assay, developed by genmark diagnostics, inc. (carlsbad, ca), to be performed within a single cartridge for the diagnosis of respiratory pathogens (viral and bacterial). the objective of this study was to compare the performance of the rp panel with those of laboratory-developed real-time pcr assays, using a variety of previously collected clinical respiratory specimens. a total of clinical specimens as well as external quality assessment (eqa) specimens and different middle east respiratory syndrome coronavirus isolates have been assessed in this study. the rp panel showed an agreement of . % with the real-time pcr assay regarding pathogens found in the clinical specimens. all pathogens present in clinical samples and eqa samples with a threshold cycle (c_t) value of < were detected correctly using the rp panel. the rp panel detected additional pathogens, of which could be confirmed by discrepant testing. in conclusion, this study shows excellent performance of the rp panel in comparison to real-time pcr assays for the detection of respiratory pathogens.
the eplex system provided a large amount of useful diagnostic data within a short time frame, with minimal hands-on time, and can therefore potentially be used for rapid diagnostic sample-to-answer testing, in either a laboratory or a decentralized setting. infections of the upper and lower respiratory tract can be caused by a diversity of pathogens, both viral and bacterial. community-acquired respiratory tract infections are a leading cause of hospitalization and responsible for substantial morbidity and mortality, especially in infants, the elderly, and immunocompromised patients. the etiological agent in such infections differs greatly according to season and age of patient, with the highest prevalences being those of respiratory syncytial virus (rsv) in children and influenza virus in adults. rapid microbiological diagnosis of a respiratory infection is important to ensure appropriate antimicrobial therapy and for the effective implementation of isolation precautions ( ) . in the last decade, many conventional diagnostic methods such as culture and antigen detection assays have been replaced by molecular assays for diagnosing respiratory tract infections. multiplex real-time pcr assays have been developed and implemented for routine diagnostic application, detecting a wide variety of pathogens ( - ) . these assays have shown high sensitivity and specificity, but the limited number of fluorophores that can be used per reaction resulted in the need to run several real-time pcr assays to cover a broad range of relevant pathogens. commercial assays using multiplex ligation-dependent probe amplification (mlpa), a dual-priming oligonucleotide system (dpo), or a microarray technology were developed to overcome this problem and are able to detect up to viruses simultaneously ( , ) . all applications mentioned require nucleic acid extraction prior to amplification. for routine diagnostics, these methods are most suited for batch-wise testing, with a turnaround time of ≈ to h. to decrease the time to result and enable random-access testing, syndromic diagnostic assays have been developed. these assays combine nucleic acid extraction, amplification, and detection in a single cartridge per sample and are suitable for decentralized or even point-of-care testing (poct) with a time to result of < h. a novel rapid diagnostic, cartridge-based assay for the detection of respiratory tract pathogens using the eplex system (fig. ) was developed by genmark diagnostics, inc. (carlsbad, ca). the eplex respiratory pathogen panel (rp panel) is based on electrowetting technology, a digital microfluidic technology by which droplets of sample and reagents can be moved efficiently within a network of contiguous electrodes in the eplex cartridge, enabling rapid thermal cycling for a short time to result. following nucleic acid extraction and amplification, detection and identification are performed using the esensor detection technology (fig. ) , as previously applied in the xt- system ( ) . in the current study, the performance of the syndromic rp panel was compared to that of laboratory-developed real-time pcr assays, using clinical specimens previously submitted for diagnosis of respiratory pathogens. the positive clinical specimens contained a total of respiratory pathogens as detected by laboratory-developed real-time pcr assays (table ) . as shown in table , the non-nasopharyngeal (non-nps) specimens comprised of the total respiratory pathogens.
testing all samples with the rp panel resulted in an overall agreement for ( . %) targets from specimens, prior to discrepant analysis. of the specimens containing a single pathogen, the detected targets were concordant in / specimens. for samples with coinfection, the same pathogens could be identified in / , / , and / in the case of , , and pathogens present, respectively. eight of discordant targets (pcr+/rp−) had a positive result with threshold cycle (c_t) values of > (fig. ) . retesting with a third assay confirmed of real-time pcr-positive targets, being human bocavirus (hbov; n = ), rhinovirus (rv; n = ), parainfluenza virus type (piv ; n = ), human coronavirus (hcov) oc (n = ), hcov e (n = ), hcov hku (n = ), and human metapneumovirus (hmpv; n = ). the two unresolved pcr+/rp− results consisted of two hmpv-positive samples (c_t values of . and . ). the rp panel yielded a positive result in specimens where the laboratory-developed test (ldt) remained negative (pcr−/rp+), including additional pathogens previously undetected by ldt in the positive specimens and one influenza a h n virus that was detected as influenza a virus by ldt ( table ) . seven of these additional targets could be confirmed, including three of rv/enterovirus (ev) (all confirmed as rv), two of piv , and one each of hbov and hcov nl . one of the selected negative samples tested positive for human adenovirus (hadv) in the rp panel but could not be confirmed by discrepant testing. all other negative specimens tested negative in the rp panel as well. both middle east respiratory syndrome coronavirus (mers-cov) isolates could be detected by the rp panel. by testing a -fold dilution series of both isolates, it was shown that mers-cov with a c_t value of < in the laboratory-developed real-time pcr assay could be detected using the rp panel, while detection with a c_t value of > was achievable but was not reproducible in every instance. of the specimens from the quality control for molecular diagnostics (qcmd) respiratory ii pilot external quality assessment (eqa) study panel, were detected in full agreement with the content as reported by qcmd ( table ). the false-negative tested specimens both contained hcov nl , of which one was a coinfection in an hmpv-positive sample. both specimens had been tested with the laboratory-developed real-time pcr assay as well and were found positive for hcov nl , both with c_t values of . . the qnostics evaluation panel consisted of samples, including different respiratory pathogens and one negative sample ( table ). the rp panel detected of the specimens in agreement with the content, whereas hadv type and chlamydophila pneumoniae were not detected. real-time pcr detection of these specimens was performed to confirm the presence of the respective pathogen in the specimen and was found positive for both hadv (c_t value of . ) and c. pneumoniae (c_t value of . ). (figure caption: the hybridized molecule is then exposed to another sequence-specific probe that is bound to a solid phase, which is a gold electrode (a). upon binding of the two molecules, the ferrocene comes into close proximity to the gold electrode, where an electron transfer, measurable using genmark's esensor technology on the eplex system, can occur (b).) the performance of the eplex rp panel was assessed by retrospective testing of clinical respiratory specimens (obtained in to ) comprising five different types of specimens.
although the rp panel had been ce in vitro diagnostic (ce-ivd) cleared for detection of respiratory pathogens from nps swabs only, we included a range of alternate sample types that can be obtained and tested for respiratory pathogens in the diagnostic setting. by including a total of respiratory non-nps specimens with different pathogens (table ) , it was shown that the rp panel was able to accurately detect the pathogen(s) in the different types of specimens, as the assay showed % concordance with ldt. for sputum samples, preprocessing with sputasol was introduced after the initial tested specimens, since a false-negative result was found, which was resolved on retesting with sputasol pretreatment. further studies need to determine the frequency of preprocessing of sputum samples before efficiently running the rp panel. specimens for inclusion in this study were previously tested at two different sites, using both their own systems and validated assays. although the initial setups of the ldt assays were the same ( , ) , minor adjustments of the assays and the use of different pcr platforms may affect the performance of the ldts and therefore were a limitation of this study. comparison of the results from the rp panel with the results from the routine multiplex real-time pcr showed an agreement of . % in pathogens tested. even for targets with a c_t value of > , the rp panel showed good detection rates with regard to lower viral or bacterial loads as well (fig. ) . although the performance of the rp panel appeared to be excellent using the tested specimens in this study, for piv (n = ) and c. pneumoniae (n = ) the number of clinical specimens that could be analyzed was too low for a proper assessment of the assay, which was a limitation of this study. in different specimens, the rp panel identified pathogens that had not been detected by routine testing (pcr−/rp+). in addition, one influenza a virus detected by ldt could be detected as influenza a h n virus by the rp panel. one of the selected negative samples was shown to contain an hadv, while all other pcr−/rp+ targets were detected as copathogens to other positive targets in the samples. all the pcr−/rp+ targets were found in samples obtained from one institute. a small number of ldt-negative specimens (n = ) was included in this study, since the main objective of this study was to determine the performance of the rp panel in detecting respiratory pathogens. although this is a limitation of the current study, we believe that this issue will be addressed extensively in upcoming prospective clinical studies. owing to the lack of clinical specimens containing mers-cov, dilutions of two different culture isolates were tested in this study, of which dilutions with c_t values of < as shown by the laboratory-developed real-time pcr assay could be detected consistently. it should be noted that the real-time pcr assay has been developed for research use and has not yet been validated for clinical use. assessment of the rp panel using eqa samples from qcmd and qnostics showed results that are in line with the results obtained from clinical specimens. a total of targets included in the eqa samples could not be detected using the rp panel, showing c_t values of > (n = ) and . (n = ) when tested by real-time pcr. the rp panel on the eplex system enables rapid testing and can be used as a diagnostic system in either a laboratory or a decentralized setting that is closer to the patient.
the assay turned out to be rapid and straightforward to perform. compared to routine testing, hands-on time of the rp panel was very low (< min), whereas the hands-on time of the routine testing was about to min, depending on the nature and number of samples tested. the overall run time of the platforms was also in favor of the eplex system, as it takes approximately min for nucleic acid extraction, amplification, hybridization, and detection, whereas routine testing takes up to h and min using different systems and multiple real-time pcr assays in multiplex. an important advantage of the eplex system is the possibility of random-access testing, compared to batch-wise testing in the current diagnostic real-time pcr approach. with a relatively short turnaround time and the potential to randomly load and run up to specimens, the eplex system is very suitable for testing stat samples, which require immediate testing. in contrast to ldts, where c_t values represent a quantitative indicator, the eplex system generates qualitative results only. the c_t value is dependent on many different factors, such as sample type and course of infection, and can therefore differ greatly, even within a single patient. hence, a qualitative result, e.g., identification of the pathogen, is the major factor for patient management. the costs of reagents per sample are relatively high for eplex compared to ldt. however, when taking into account the hands-on time of technicians and the clinical benefit of more rapid results, the assay will most likely be more cost-effective. studies evaluating a rapid diagnostic assay for respiratory pathogens, such as the filmarray respiratory panel (biofire diagnostics, salt lake city, ut), have already shown the impact of rapid diagnostics for respiratory pathogens, since it decreased the duration of antibiotic use, the length of hospitalization, and the time of isolation, delivering financial savings ( , ) . although the rp panel on the eplex system has the same potential, clinical studies remain to be conducted to fulfill this potential. in conclusion, this study shows excellent performance of the genmark eplex rp panel in comparison to laboratory-developed real-time pcr assays for the detection of respiratory pathogens from multiple types of clinical specimens and eqa samples. the system provides a large amount of useful diagnostic data within a short time frame, with minimal hands-on time, helping to reduce laboratory costs for labor and deliver a faster result to the clinician in order to aid in appropriate antimicrobial therapy. therefore, this syndrome-based diagnostic assay could be used for rapid diagnostic testing in many different settings. clinical specimens selected for this study have previously been submitted and tested prospectively for diagnosis of respiratory infections at either the specialist virology center at the royal infirmary of edinburgh (rie) or the medical microbiology laboratory at the leiden university medical center (lumc). specimens were selected using the laboratory information management system of the corresponding institute, without prior selection based on c_t value. ethical approval for this study was granted by the medical ethical committee provided that anonymized samples were used. diagnostic testing by lab-developed tests.
in short, the routine testing method consisted of total nucleic acid extraction by the nuclisens easymag system (≈ min; biomérieux, basingstoke, united kingdom) or the magna pure lc system (≈ min to h min depending on the number of samples; roche diagnostics, almere, the netherlands), at the rie and the lumc, respectively. an input volume of l per specimen and an elution volume of l were used for all specimen types. amplification and detection were performed by real-time pcr using the abi fast thermocycler ( h; applied biosystems, paisley, united kingdom) or the bio-rad cfx thermocycler (≈ h min; bio-rad, veenendaal, the netherlands), at the rie and the lumc, respectively. real-time pcr assays were tested with updated versions (where needed) of primers and probes as described previously ( , ) . rp panel. original clinical specimens were retrieved from storage at − °c and thawed at room temperature. after vortexing, l of the specimen was pipetted into the sample delivery device with a buffer provided by the manufacturer. for out of sputum samples, preprocessing was done using sputasol (oxoid, basingstoke, united kingdom) according to the manufacturer's procedures (with the exception of washing the sputum) and incubation at °c for min on a shaker at rpm. after gentle mixing of the specimen and buffer in the sample delivery device, the mixture was dispensed into the cartridge using the sample delivery port, which was subsequently closed by sealing with a cap. after scanning of the barcode of the eplex rp panel cartridge and the barcode of the corresponding sample, the cartridge was inserted into an available bay of the eplex system. the test then started automatically and ran for approximately min. a single cartridge of the rp panel is able to detect respiratory pathogens, including differentiation of subtypes of influenza a virus, parainfluenza virus, and respiratory syncytial virus (rsv) ( table ) . internal controls for extraction, bead delivery, and movement within the cartridge are present, as well as those for amplification, digestion, and hybridization of dna and rna targets. for every specimen tested, a sample detection report was created, comprising the results for all targets and internal controls. results of the targets are reported as positive or not detected. if an internal control fails, this will be noted on the detection report, and samples should be retested with a new cartridge. discrepant testing. in the case of discrepant results, the discordant sample was retested either with a new eplex cartridge if the real-time pcr was positive and the rp panel was negative (pcr+/rp−) or with the laboratory-developed real-time pcr assay in the case of pcr−/rp+ results. for unresolved discrepancies, additional testing with a third pcr assay (different primers and probe) was performed for final resolution.
references:
• laboratory diagnosis of pneumonia in the molecular age
• epidemiology and clinical presentations of the four human coronaviruses 229e, hku1, nl63, and oc43 detected over years using a novel multiplex real-time pcr method
• diagnosis of human metapneumovirus and rhinovirus in patients with respiratory tract infections by an internally controlled multiplex real-time rna pcr
• rapid and sensitive method using multiplex real-time pcr for diagnosis of infections by influenza a and influenza b viruses, respiratory syncytial virus, and parainfluenza viruses 1, 2, 3, and 4
• comparison and evaluation of real-time pcr, real-time nucleic acid sequence-based amplification, conventional pcr, and serology for diagnosis of mycoplasma pneumoniae
• development and clinical evaluation of an internally controlled, single-tube multiplex real-time pcr assay for detection of legionella pneumophila and other legionella species
• improved diagnosis of the etiology of community-acquired pneumonia with real-time polymerase chain reaction
• comparison of the luminex respiratory virus panel fast assay with in-house real-time pcr for respiratory viral infection diagnosis
• comparison of two commercial molecular assays for simultaneous detection of respiratory viruses in clinical samples using two automatic electrophoresis detection systems
• comparison of the genmark diagnostics esensor respiratory viral panel to real-time pcr for detection of respiratory viruses in children
• performance of different mono- and multiplex nucleic acid amplification tests on a multipathogen external quality assessment panel
• evaluation of real-time pcr for detection of and discrimination between bordetella pertussis, bordetella parapertussis, and bordetella holmesii for clinical diagnosis
• point-of-impact testing in the emergency department: rapid diagnostics for respiratory viral infections
• impact of a rapid respiratory panel test on patient outcomes
acknowledgments. we thank yvette van aarle, mario bussel, wilfred rijnsburger, and tom vreeswijk of the lumc and laura mackenzie of the edinburgh rie for performing the assays, including (discrepant) diagnostics. also, we thank colleagues from the global emerging infections surveillance and response system (geis) of the u.s. naval medical research unit (namru- ; cairo, egypt) for providing us the jordan/n isolate and l. enjuanes from the centro nacional de biotecnologia (cnb-csic; madrid, spain) for providing the recombinant middle east respiratory syndrome coronavirus isolate emc/ . none of the authors have conflicts of interest to declare. genmark diagnostics, inc., provided kits and reagents to perform this study and was responsible for study design. genmark diagnostics, inc., did not have any influence on the content of the submitted manuscript. key: cord- -loi vs y title: using random testing in a feedback-control loop to manage a safe exit from the covid-19 lockdown date: - - journal: nan doi: . / . . . sha: doc_id: cord_uid: loi vs y we argue that frequent sampling of the fraction of infected people (either by random testing or by analysis of sewage water) is central to managing the covid-19 pandemic, because it both measures in real time the key variable controlled by restrictive measures and anticipates the load on the healthcare system due to progression of the disease.
knowledge of random testing outcomes will (i) significantly improve the predictability of the pandemic, (ii) allow informed and optimized decisions on how to modify restrictive measures, with much shorter delay times than the present ones, and (iii) enable the real-time assessment of the efficiency of new means to reduce transmission rates. here we suggest, irrespective of the size of a suitably homogeneous population, a conservative estimate of for the number of randomly tested people per day which will suffice to obtain reliable data about the current fraction of infections and its evolution in time, thus enabling close to real-time assessment of the quantitative effect of restrictive measures. still higher testing capacity permits detection of geographical differences in spreading rates. furthermore, and most importantly, with daily sampling in place, a reboot could be attempted while the fraction of infected people is still an order of magnitude higher than the level required for a relaxation of restrictions with testing focused on symptomatic individuals. this is demonstrated by considering a feedback and control model of mitigation where the feedback is derived from noisy sampling data. the covid-19 pandemic has led to a worldwide shutdown of a major part of our economic and social activities. this political measure was strongly suggested by epidemiologic studies assessing the cost in human lives depending on different possible strategies (doing nothing, mitigation, suppression) [ ] [ ] [ ] . mitigation can be achieved by different strategies, such as physical distancing, contact tracing, restricting public gatherings, and the closing of schools, but also the testing for infections. the quantitative impact of very frequent testing of the entire population for infectiousness has been studied in a recent unpublished work by jenny et al. in ref. [ ] . we will estimate in sec. iii that to fully suppress the covid-19 pandemic by widespread testing for infections, one needs a capacity to test millions of people per day in switzerland. this should be compared to the present number of ' tests per day across switzerland. however, we show that tracking and control of this pandemic is possible by testing a much smaller number of randomly selected people per day. in addition, we will argue that even with currently available testing rates, extremely valuable information on the rates of transmission depending on geographic regions of switzerland can be obtained. figure summarizes the key concept of the paper, namely a feedback and control model for the pandemic. the key output from random testing is the growth rate of the number of infected people, which itself is regulated by measures such as those enforcing physical distances between persons (physical distancing), and whose tolerable values are fixed by the capacity of the healthcare system. a feedback and control approach [ ] , familiar from everyday implementations such as thermostats regulating heaters and air conditioners, should allow policy makers to damp out oscillations in disease incidence which could lead to peaks in stress on the healthcare system as well as the wider economy. a further important benefit of this feedback and control scheme is that it allows a much faster and safer reboot of the economy than with the current feedback through confirmed infection numbers, for the latter is heavily delayed and reflects the state of the pandemic only incompletely. the resulting difference in the ability to control the disease is illustrated in fig. . without feedback and control informed by a key parameter, analogous to the temperature provided by the thermometer in the thermostat example, measurable in (near) real time, there is a huge delay between policy changes and the observable changes in terms of positively tested people. to release restrictions safely, the fraction of infected people must decrease to a level i** such that a subsequent undetected growth during - days will not move it above the critical fraction i_c manageable by the healthcare system.
the resulting difference in the ability to control the disease is illustrated in fig. . without feedback and control informed by a key parameter, analogous to the temperature provided by the thermometer in the thermostat example, measurable in (near) real time, there is a huge delay between policy changes and the observable changes in terms of positively tested people. to release restrictions safely, the fraction of infected people must decrease to a level i * * such that a subsequent undetected growth during - days will not move it above the critical fraction i c manageable by the healthcare system. the current situation where we are mainly looking at lagging indicators, namely infection rates among symptomatic individuals or even deaths, is comparable to driving a car from the back seat and with knowledge only of the damage caused by previous collisions. to minimize harm to the occupants of the vehicle, driving very slowly is essential, and oscillations from a straight course are likely to be large. daily random testing reduces the delay between changes in policy and the observation of their effects very significantly. moreover, it directly measures the key quantity of interest, namely the fraction of infected people and its growth rate, information that is very valuable to gauge further interventions. such information is much harder to infer from data about positively tested patients only, by fitting it to specific epidemiological models with their inherent uncertainties. the shortened time delay due to feedback and control allows a reboot to be attempted at much higher levels of infections, i * > i * * , which implies a much shorter time in lockdown. the paper is organized as follows. we summarize and explain the key findings of the paper in simple terms in sec. ii. in sec. iii, we discuss the use of massive testing as a direct means to contain the pandemics, showing that it requires a -fold increase of the current testing frequency. in sec. iv, we define the main challenge to be addressed: to measure the quantitative effect of restrictive measures on the transmission rate. section v introduces the idea of randomized testing. section vi constitutes the central part of this paper. it is shown how data from sparse sampling tests can be used to infer essentially instantaneous growth rates, and their regional dependence. we define a model of testing feed-back driven intervention strategy and analyze it theoretically. this model is also analyzed numerically in sec. vii. section viii generalizes the modelling to a regionally refined analysis of the epidemic growth pattern which becomes the preferred choice if higher testing rates become available. we conclude with sec. ix by summarizing our results and their implication for a safe reboot after the current lockdown. in the appendix we address the use of contact tracing and argue that it can complement, but not substitute for random testing. the key quantity measured by random testing is the growth rate k of infection numbers. if k exceeds a tolerable upper threshold κ + , restrictions are imposed. for k below a lower threshold κ − , and if infection numbers are below critical, restrictions are released. in the absence of a substantial influx of infected people from outside the country, and provided infection numbers are below a critical value, the optimal target of the growth rate is k = , corresponding to a marginally stable state, where infections neither grow nor decrease exponentially with time. 
if higher testing rates are available, the measured observables and control strategies can be geographically refined. we argue that the moderate number of ' random tests per day yields valuable information on the dynamics of the disease. assuming that at a given time a fraction of about i ≈ . % of the population is infected, on the order of infected people will be detected every day. can such a small number of detected infections be useful at all, given that these numbers fluctuate significantly from day to day? the answer is yes. we show that after a few days the acquired signal becomes stronger than the noise level. it is then possible to establish whether the infection number is growing or decreasing and, moreover, to obtain a quantitative estimate of the instantaneous growth rate k(t). one of our central results is eq. ( c) for the time after which the signal becomes clear, which we rewrite in the simplified form ∆t ≈ [c/(k² r)]^(1/3), where k is the current growth rate of infections to be detected and r is the number of tests per day. the numerical constant c depends on the required signal-to-noise ratio; a typical value when detecting large values of k is c ≈ − . this result shows that the higher the number of tests r per day, the shorter the time to detect a growth or a decrease of the infected population; the smaller the current growth rate k, the longer the time to detect it above the noise inherent to the finite sampling. (figure caption: the dynamics of the pandemic with and without a feedback and control scheme in place, as measured by the fraction i of infected people (logarithmic scale). after the limit of the health system, i_c, has been reached, a lockdown brings i down again. the exponential rate of decrease is expected to be very slow, unless extreme measures are imposed. the release of measures upon a reboot is likely to re-induce exponential growth, but with a rate difficult to predict. three possible outcomes are shown as blue curves in the scenario without testing feedback, where the effect of the new measures becomes visible only after a delay of - days. in the worst case, i grows by a multiplicative factor of order before the growth is detected. a reboot can thus be risked only once i ≤ i** ≡ i_c/ , implying a very long time in lockdown after the initial peak. due to the long delay until policy changes show observable effects, the fluctuations of i will be large. random testing (the red curve) has a major advantage: it measures i instantaneously and detects its growth rate within a few days, whereby the higher the testing rate, the faster the detection. policy adjustments can thus be made much faster, with smaller oscillations of i. a safe reboot is then possible much earlier, at the level of i ≤ i* ≈ i_c/ .) how long would it take to detect that a release of restrictive measures has resulted in a nearly unmitigated growth rate of the order of k = . (which corresponds to doubling every days)? even with a moderate number of r = per day, we find that within only ∆t ≈ − days such a strong growth will emerge above the noise level, such that countermeasures can be taken (see fig. ). during this short time, the damage remains limited. the infection numbers will have risen by a multiplicative factor between and . this degree of control must be compared to a situation where no information on the current growth rate is available, and where the first effects of a new policy are seen in the increased number of symptomatic, sick people only - days later. over this time span, with a growth rate of k = . , the infection numbers will have grown by a factor of - before one realizes eventually that an intervention must be made.
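this detection-time scaling can be illustrated with a small simulation. the simple cumulative-excess detector below, and all parameter values, are our own illustrative choices rather than the estimator behind eq. ( c):

    import math, random

    def detection_time(k=0.23, r=15000, i0=2e-4, z=3.0, max_days=60, seed=1):
        # days of daily random testing (r tests/day) until growth at rate k makes
        # the cumulative excess of positives over a constant-prevalence baseline
        # exceed z standard deviations; k, r, and i0 are hypothetical values
        rng = random.Random(seed)
        base = r * i0                     # expected daily positives without growth
        excess = expected = 0.0
        for t in range(1, max_days + 1):
            prob = min(i0 * math.exp(k * t), 1.0)
            n_t = sum(rng.random() < prob for _ in range(r))   # binomial draw
            excess += n_t - base
            expected += base
            if excess > z * math.sqrt(expected):
                return t
        return None

    print(detection_time())   # a few days for these parameters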
over this time span, with such a growth rate, the infection numbers will have grown by a factor of - before one eventually realizes that an intervention must be made. random testing decreases both the time scale until informed policy adjustments can be taken and the temporal fluctuations of the infection numbers. as in any feedback and control loop, the more frequent the testing, the shorter the delay times, and thus the smaller the fluctuations. the various benefits of increasing the testing frequency are shown in fig. , which we obtained by simulating a specific mitigation strategy in which we built in the uncertainty about the efficacy of political interventions. the shorter delay times and the reduced fluctuations result in decreased strain on the health system, lower economic costs, and a lower number of required interventions. in addition to these benefits, a higher testing rate r also opens the opportunity to analyze geographic differences and refine the mitigation strategy accordingly, as we discuss in sec. viii. if the massive frequency of . million tests per day becomes available in switzerland, it will be possible to test every swiss resident every to days. if the infected people that have been detected are kept in strict quarantine (such that with high probability they will not infect anybody anymore), such massive testing could be sufficient to prevent an exponential growth in the number of cumulated infections without the need for draconian physical distancing measures. we now explain qualitatively our approach to reach this conclusion. a refined analysis has been given in ref. [ ] . the required testing rate can be estimated as follows. let ∆t denote the average time until an infected person infects somebody else. the reproduction number r, i.e., the number of new infections transmitted on average by an infected person, falls below 1 (and thus below the threshold for exponential growth) if non-diagnosed people are tested at time intervals of no more than ∆t. thus, the required number of tests over the time ∆t, the full testing rate τ −1 full , is τ −1 full = n/∆t, where n is the number of inhabitants of switzerland. without social restrictions, it is estimated that [ ] ∆t ≈ days, such that about . million tests per day would be required to control the pandemic by testing alone. if additional restrictions such as physical distancing etc. are imposed, ∆t increases by a modest factor and one can get by with inversely proportionally fewer tests per day. nevertheless, on the order of million tests per day is a minimal requirement for massive testing to contain the pandemic without further measures. however, even while the swiss capabilities are still far from reaching million tests per day, testing for infections offers two important benefits in addition to identifying people who need to be quarantined. first, properly randomized testing allows one to monitor and study the efficiency of measures that keep the reproduction number r below 1. this ensures that the growth rate k of case numbers and new infections is negative, k < 0. second, frequent testing, even if applied to randomly selected people, helps suppress the reproduction number r and thus allows policy to be less restrictive in terms of other measures, such as physical distancing. to quantify the latter benefit, observe that the effect of massive testing on the growth rate k is proportional to the testing rate [ ].
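a minimal sketch of the two estimates above: the full testing rate n/∆t, and the proportional reduction of the growth rate when testing below that rate (the argument developed in the next paragraph). the population size, the transmission interval, and the linear-suppression model are assumptions of this illustration.

```python
# full testing rate: enough tests/day to test everyone every dT days
N = 8.6e6          # inhabitants of switzerland (assumed value)
dT = 3.0           # average days until an infected person infects another (assumed)
tau_full = N / dT
print(f"{tau_full/1e6:.1f} million tests/day")   # ~2.9 million/day under these assumptions

def growth_with_testing(k0, rate, full_rate=tau_full):
    """assumed linear model: testing at a fraction of the full rate removes a
    proportional share of transmissions, k = k0*(1 - rate/full_rate); the
    remaining reduction to k = 0 must come from other measures."""
    return k0 * (1.0 - rate / full_rate)
```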
let us assume that without testing or social measures one has a growth rate k 0 . then, if the full testing rate τ −1 full is sufficient to completely suppress the exponential growth in the absence of other measures, a smaller testing rate τ −1 decreases the growth rate only down to (1 − τ −1 /τ −1 full ) × k 0 . the remaining reduction of k to zero must then be achieved by a combination of restrictive social measures and contact tracing. it is possible to refine the argument above to take account of the possibility of a spectrum of tests with particular cost/performance trade-offs, i.e., a cheaper test with more false positives and negatives could be used for random testing, whereas those displaying symptoms would be subjected to a "gold standard" (pcr) assay of viral genetic material. a central challenge for establishing reliable predictions for the time evolution of a pandemic is the quantification of the effect of social restrictions on the transmission rate [ ] . policymakers and epidemiologists urgently need to know by how much specific restrictive measures reduce the growth rate k. without that knowledge, it is essentially impossible to take an informed decision on how to optimally combine such measures to achieve a (marginally) stable situation, defined by the condition of a vanishing growth rate, k = 0. indeed, marginal stability is optimal for two reasons. first, it is sustainable in the sense that the burden on the health system does not grow with time. second, it is the least economically and socially restrictive state compatible with the stability requirement. in secs. v and vi, we show how marginal stability can be achieved, while simultaneously measuring the effects of a particular set of restrictions. we claim that statistically randomized testing can be used in a smart way, so as to keep the dynamics of the pandemic under control as per the feedback loop of fig. . we emphasize that this is possible without the current time delays of up to days. the latter arise since we only observe confirmed infections stemming from a highly biased test group that eventually shows symptoms long after the initial infection has occurred. the idea of smart testing is the following: one regularly tests randomized people for infectiousness. we stress that randomized testing is essential to obtain information on the current number of infections and its evolution with time. it serves an additional and entirely different purpose from testing people with symptoms, medical staff, or people close to somebody who has been infected, all of whom constitute highly biased groups of people. the first goal of random testing is to obtain a firm test/confirmation of whether the current restrictive measures are sufficient to mitigate or suppress the exponential growth of the covid- pandemic, and whether the effectiveness differs from region to region. in case the measures should still be insufficient, one can measure the current growth rates and monitor the effect of additional restrictive measures. it is important that the set of randomly selected people changes constantly, so that it should happen extremely rarely that a given person is tested twice. here, we solely focus on a person being infectious, but not on whether the person has developed antibodies. the latter test indicates that the person has been infected at any time in the past. testing for antibodies and (potential) immunity has its own virtues, but aims at different goals from the random testing for infections that we advocate here.
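the requirement that the random sample change constantly is easy to satisfy in practice. a small python sketch, with purely illustrative population and rate numbers:

```python
import random

def daily_samples(population_size, tests_per_day, days, seed=0):
    """draw a fresh simple random sample each day; repeat tests of the same
    person stay rare as long as tests_per_day*days << population_size."""
    rng = random.Random(seed)
    people = range(population_size)
    return [rng.sample(people, tests_per_day) for _ in range(days)]

# illustrative: 15000 tests/day for 30 days samples ~5% of 8.6 million
# residents, so almost nobody is drawn twice
samples = daily_samples(8_600_000, 15_000, 30)
```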
by following the fraction of infections as a function of time, we can determine nearly instantaneously the growth rate of infections, k(t), and thus assess and quantify the effectiveness of socio-economic restrictions through the observed changes in k following a change in policy. this monitoring can even be carried out in a regionally resolved way, such that subsequently, restrictive or relaxing measures can be adapted to different regions (urban/rural etc.). a suppression of the covid- pandemic is achieved if, for a sufficiently long time, the number of infections decays exponentially with time. mitigation aims to reduce the exponential rate of growth in the number of infections. stability is achieved when that number tends to a constant. once stability is reached, one may start relaxing the restrictions step by step and monitor the effect on the growth rate k as a function of geographic region. we first analyze random testing for the case where we treat the country as a homogeneous entity with a population n. this will allow us to understand how the testing frequency affects key characteristics of policy strategies. we consider the following model. let u be the actual undetected number of infected people. (we assume that detected people do not spread the disease.) the spreading of infections is assumed to be governed by the inhomogeneous, linear growth equation du(t)/dt = k(t) u(t) + Φ(t), where k(t) is the instantaneous growth rate and Φ(t) accounts for infections arising from people crossing the national border. we will later set that influx to zero. an equation of the form ( ) is usually part of a more refined epidemiological model [ ] [ ] [ ] that accounts explicitly for the recovery or death of infected persons. for our purpose, the effect of these has been lumped into an overall time dependence of the rate k(t). for example, it evolves as the number of immune people grows, restrictive measures change, mobility is affected, new tracking systems are implemented, hospitals reach their capacity, testing is increased, etc. nevertheless, over a short period of time where such conditions remain constant, and the fraction of immune people does not change significantly, we can assume the effective growth rate k(t) to be piecewise constant in time. we will exploit this below. for t < 0, we assume stability, with k(t) = k 0 < 0. such a stable state needs to be reached before a reboot of the economy can be considered. at t = 0 restrictive measures are first relaxed, resulting in an increase of the growth rate k from k 0 to k 1 , which we assume positive; hence, compensating countermeasures are required at later times in order to avoid another exponential growth of the pandemic. we now want to monitor the performance of policy strategies that relax or re-impose restrictions, step by step. (replacing the function k(t), assumed to be differentiable, by a piecewise constant function is a good approximation provided k(t) changes slowly, where the relevant comparison involves the time derivative of k(t) and the interval ∆t(k) given by eq. ( a) with the replacement k → k(t).) the goal for an optimal policy strategy is to reach a marginally stable state ( ) (i.e., with k = 0) as smoothly, safely, and rapidly as possible. in other words, marginal stability is to be reached with the least possible damage to health, economy, and society.
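a minimal sketch of this growth law with piecewise-constant k(t) and the influx set to zero; the segment durations and rates below are illustrative assumptions.

```python
import numpy as np

def simulate_u(k_schedule, u0):
    """integrate du/dt = k(t)*u with k(t) piecewise constant (influx zero).
    k_schedule is a list of (duration_days, k) segments; returns daily u."""
    u, out = u0, []
    for days, k in k_schedule:
        for _ in range(days):
            u *= np.exp(k)        # exact one-day update for constant k
            out.append(u)
    return np.array(out)

# assumed example: stable before reboot (k0 < 0), reboot to k1 > 0, correction
trajectory = simulate_u([(30, -0.05), (14, 0.15), (30, -0.02)], u0=1000.0)
```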
this expected outcome is to be optimized while controlling the risk of rare fluctuations. to model the performance of policy strategies we neglect the contributions to the time evolution of k(t) due to increasing immunity or the evolution of the age distribution of infected people. we also neglect periodic temporal fluctuations of k(t) (e.g., due to the alternation between workdays and weekends), which can be addressed in further refinements. instead, we assume that k(t) changes only in response to policy measures which are taken at specific times when certain criteria are met, as defined by a policy strategy. an intervention is made when the sampled testing data indicate that, with high likelihood, k(t) exceeds some upper threshold κ + . likewise, a different intervention is made should k(t) be detected to fall below some negative threshold κ − . note that if there is a substantial infection influx Φ(t) across the national borders, one may want to choose the threshold κ + to be negative, to avoid a too large response to the influx. from now on we neglect the influx of infections, and consider a homogeneous growth equation. to reach decisions on policy measures, data are acquired by daily testing of random sets of people for infections. we assume that the tests are carried out at a limited rate r (a finite number of tests divided by a non-vanishing unit of time). let i(t, ∆t) be the fraction of positive infections detected among the r ∆t tests carried out in the time interval [t, t + ∆t]. by the law of large numbers, it is a gaussian random variable with mean i(t) and standard deviation [i(t)/(r ∆t)]^(1/2) (to leading order for small i). the current value of k(t) is estimated as k fit (t) by fitting these test data to an exponential, where only data since the last policy change should be used. the fitting also yields the statistical uncertainty (standard deviation), which we call δk(t). it will take at least - days to make a fit that is reasonably trustworthy. if the instability threshold is surpassed by a certain level, i.e., if k fit (t) − α δk(t) > κ + , a new restrictive intervention is taken. if instead k fit (t) + α δk(t) < κ − , a new relaxing intervention is taken. here, the parameter α is a key parameter defining the policy strategy. it determines the confidence level that policymakers require before deciding to declare that a stability threshold has indeed been crossed. this strategy will result in a series of intervention times, starting with the initial step to reboot at t = 0. in the time window [t ι , t ι+1 ], the growth rate k(t) is constant and takes the value k ι = k ι−1 − ∆k (ι) , where the policy choice ∆k (ι) > 0, corresponding to a restrictive measure, is made to bring k(t) back below the upper threshold κ + , while the policy choice ∆k (ι) < 0 is made to bring k(t) back above the lower threshold κ − . the difficulty for policymakers is due to the fact that so far the quantitative effect of an intervention is not known. we model this uncertainty by assuming ∆k (ι) to be random to a certain degree.
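a sketch of this estimation-and-decision step in python (not the authors' code): the growth rate is estimated from positives counted in the two halves of the window since the last policy change, with poisson error propagation; α and the thresholds are passed in as assumed values.

```python
import numpy as np

def estimate_k(n1, n2, dt):
    """two-window growth-rate estimate: n1/n2 are positives found in the
    first/second half of a window of dt days; poisson errors propagated."""
    if n1 == 0 or n2 == 0:
        return None, None                       # not enough signal yet
    k_fit = 2.0 / dt * np.log(n2 / n1)
    dk = 2.0 / dt * np.sqrt(1.0 / n1 + 1.0 / n2)
    return k_fit, dk

def decide(k_fit, dk, alpha=3.0, kappa_plus=0.0, kappa_minus=0.0):
    """threshold test with confidence parameter alpha (value assumed)."""
    if k_fit is None:
        return None
    if k_fit - alpha * dk > kappa_plus:
        return "restrict"
    if k_fit + alpha * dk < kappa_minus:
        return "relax"
    return None
```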
if at time t, k fit (t) crosses the upper threshold κ + with confidence level p, we set t ι = t and a restrictive measure is taken, i.e., ∆k (ι) is chosen positive. we take the associated decrement ∆k (ι) to be uniformly distributed on the interval [0, 2∆k (ι) opt,+ ]. this describes that, while the policymakers aim to reset the growth factor k to κ + , the result of the measure taken may range from having no effect at all (when ∆k (ι) = 0) to overshooting by a factor of 2 (when ∆k (ι) = 2∆k (ι) opt,+ ), with ∆k (ι) opt,+ being the optimum. if instead k fit (t) crosses the lower threshold κ − with confidence level p at time t, we set t ι = t and a releasing measure is taken, i.e., ∆k (ι) is chosen negative, again uniformly distributed around the optimum choice ∆k (ι) opt,− . the process described above is stochastic for two reasons. first, the sampling comes with the usual uncertainties of the law of large numbers. second, the effect of policy measures is not known beforehand (even though it may be learnt in the course of time, which we do not include here). it should be clear that the faster the testing, the more rapidly one can respond to a super-critical situation. a significant simplification of the model occurs when the two thresholds are chosen to vanish, κ + = κ − = 0, in which case k ι = k ι−1 − ∆k (ι) with |∆k (ι) | uniformly distributed on the interval [0, 2|k ι−1 |]. in this case the system will usually tend to a critical steady state with k(t → ∞) → 0, as we will show explicitly below. in this case the policy strategy can simply be rephrased as follows: as soon as one has sufficient confidence that k has a definite sign, one intervenes, trying to bring k back to zero. the only parameter defining the strategy is α. let us now detail the fitting procedure and analyze the typical time scales involved between subsequent policy interventions when choosing the thresholds ( ). after a policy change at time t ι , data are acquired over a time window ∆t. we then proceed with the following steps to estimate the time t ι+1 at which the next policy change must be implemented. step 1: measurement. we split the time window of length ∆t after the policy change into a first time interval [t ι , t ι + ∆t/2) and a second time interval [t ι + ∆t/2, t ι + ∆t]. testing delivers the number of infected people found in the time interval ( b), n ι,1 , and in the time interval ( c), n ι,2 , where we recall that r denotes the number of people tested per unit time. given those two measurements over the time windows of length ∆t/2, we obtain the estimate k fit ι (∆t) = (2/∆t) ln(n ι,2 /n ι,1 ), with the standard deviation δk(∆t) = (2/∆t) (1/n ι,1 + 1/n ι,2 )^(1/2), as follows from the statistical uncertainty [n ι,γ (∆t)]^(1/2) of the sampled numbers n ι,γ (∆t) and standard error propagation. step 2: condition for a new policy intervention. a new policy intervention is taken once the magnitude of k fit ι (∆t), given by eq. ( f), exceeds α δk(∆t), with δk(∆t) given by eq. ( g). here, α controls the accuracy to which the actual k has been estimated at the time of the next intervention. the condition for a new policy intervention thus becomes |k fit ι (∆t)| > α δk(∆t). ( b) step 3: comparison with modeling. we call i(t) the actual fraction of infections (in the entire population) as a function of time, which we assume to follow a simple exponential evolution between two successive policy interventions, i.e., the normalized solution to the growth equation ( ) on the interval t ι < t < t ι+1 .
the expected number of newly detected infected people in the time interval ( b) follows by integrating the detection rate r i(t) over that interval; similarly for the predicted number of infected people in the time interval ( c). step 4: estimated time for a new policy intervention. we now approximate n ι,1 and n ι,2 by replacing them with their expectation values, eqs. ( a) and ( b), respectively, anticipating the limit of sufficiently many detections. we further anticipate that for safe strategies the fraction of infected people i(t) does not vary strongly over time. more precisely, it hovers around the value i * defined in fig. . we thus insert i(t) ≈ i * into eq. ( b) and solve for ∆t. the solution is the time ∆t 1 until the next intervention, from which we deduce the relative increase of the fraction of infected people over the time window. this relative increase is close to 1 if the argument of the exponential on the right-hand side is small. we will show below that the characteristics of the first time interval [t 0 , t 1 ] set the relevant scales for the entire process. from eqs. ( c) and ( d), we infer the following important result: the higher the testing frequency r, the smaller the typical variations in the fraction of infected people, and thus in the case numbers. the bandwidth of fluctuations decreases as r −1/3 with the testing rate. note that, as one should expect, it is always the average rate of detecting an infected person, r i * , which enters the expressions ( c) and ( d). the higher the fraction i * , the more reliable the sampling, the shorter the time to converge toward the marginal state ( ), and the smaller the fluctuations of the fraction of infected people. if the fraction i * is too low, the statistical fluctuations become too large and little statistically meaningful information can be obtained. on the other hand, if the fraction of infections drops to much lower values, then policy can be considered to have been successful and can be maintained until further tests show otherwise. we seek an upper bound for a manageable i * . we assume that a fraction p ch icu of infected people in switzerland needs to be in intensive care. more precisely, p ch icu is the expected time (in switzerland) for an infected person to spend in an intensive care unit (icu) divided by the expected time to be sick. here, we will use the value p ch icu = . . let ρ icu be the number of icu beds per inhabitant that shall be allocated to covid- patients; the swiss national average is about [ ] ρ ch icu ≈ . for the pandemic not to overwhelm the health system, one thus needs to maintain the infected fraction safely below i c ≡ ρ icu /p icu , together with similar constraints related to the capacity for hospitalizations, medical care personnel, and equipment for specialized treatments. we take the constraint from intensive care units to obtain an order of magnitude for the upper limit admissible for the infected fraction of people, i c . the objective is to mitigate the pandemic so that values of the order of i c or below are achieved. before that level is reached, restrictions cannot be relaxed. it may prove difficult to push the fraction of infected people significantly below i c , since the recent experience in most european countries shows that it is very hard to ensure that growth rates k fall well below 0.
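a sketch of this capacity bound; both input numbers below are assumptions standing in for the elided values, not figures from the paper.

```python
p_icu = 0.05           # assumed fraction of infected time spent in an icu
rho_icu = 6e-5         # assumed covid-dedicated icu beds per inhabitant

i_c = rho_icu / p_icu  # manageable infected fraction, i_c = rho_icu / p_icu
print(f"i_c ~ {i_c:.2%} of the population")   # 0.12% under these assumptions
```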
the main aim would then be to reach at least a stabilization of the number of infected people (k = 0). for the following we thus assume that the fraction of infections i will stagnate around a value i * of the order of i c . we will discuss below what ratio i * /i c can be considered safe. we seek the testing rate that is needed to obtain a strategy with a satisfactory outcome. we assume that after the reboot at t = 0, the initial growth rate may turn out to be fairly high, say of the order of the unmitigated growth rate. in many european countries a doubling of cases was observed every three days before restrictive measures were introduced. this corresponds to a growth rate of k = ln 2/(3 d) ≈ 0.23 day −1 . we assume a comparably large initial growth rate k 1 just after the reboot. we choose a reasonably stable confidence parameter α; in sec. vii we will find that this choice strikes a good balance between several performance criteria. we further assume that the rate of infections initially stagnates at a level i * . the level i * should, however, be measured by random testing before a reboot is attempted. we should then ensure that the first relative increase, exp(k 1 ∆t 1 ), does not exceed a modest factor. from eq. ( b), we thus obtain the requirement for the testing rate r; this yields an estimate of the order of magnitude required. in the next section we simulate a full mitigation strategy and confirm that with additional capacity for just about ' random infection tests per day a nation-wide, safe reboot can be envisioned for switzerland. we close with two observations. first, this minimal testing frequency is just twice the testing frequency currently available for suspected infections and medical staff in switzerland. second, while the latter tests require a high sensitivity with as few false positives and negatives as possible, random testing can very well be carried out with tests of much lower quality. indeed, a lower sensitivity acts as a systematic error in the estimate of the infection rate, which, however, drops out in the determination of its growth rate k. after the reboot at time t = 0 further interventions will be necessary, as we assume that the reboot will have resulted in a positive growth rate k 1 . in subsequent interventions, the policymakers try to take measures that aim at reducing the growth rate to zero. even if they had perfect knowledge of the current growth rate k(t), they would not succeed immediately, since they do not know the precise quantitative effect of the measures they will take. nevertheless, had they perfect knowledge of k(t), our model assumes that they would at least be able to estimate the effects to an extent such that they would not need to intervene more strongly than twice what would be necessary to reduce k(t) to 0 over time. this assumption implies that, if α is large, so that k(t) is known with relatively high precision at the time of intervention, the growth rate k 2 is smaller than k 1 in magnitude with high probability (tending rapidly to 1 as α → ∞). the smaller α, however, the more likely it becomes that k(t) is overestimated, and a too big corrective measure is taken, which may destabilize the system. in this context, we observe that the ratio ρ ι ≡ k ι+1 /k ι is a random variable with a distribution that is independent of ι in our model.
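the requirement on r can be made concrete by inverting the detection-time estimate sketched in sec. ii. the prefactor, α, the growth rate, the infected fraction, and the tolerated increase below are all assumptions of this sketch, chosen only to illustrate the order of magnitude.

```python
import numpy as np

def required_rate(k1, i_star, alpha=3.0, max_increase=4.0):
    """tests/day needed so the infected fraction grows by at most max_increase
    before the first correction: solve exp(k1*dt) <= max_increase together
    with dt = (16*alpha**2/(k1**2*r*i_star))**(1/3) for r."""
    dt_max = np.log(max_increase) / k1         # longest tolerable first interval
    return 16.0 * alpha**2 / (k1**2 * i_star * dt_max**3)

# illustrative: near-unmitigated reboot, i* ~ 0.1%, at most a 4-fold increase
print(f"{required_rate(k1=0.23, i_star=1e-3):,.0f} tests/day")   # ~12,000 here
```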
to proceed, we assume that α is sufficiently large, i.e., such that the probability for ρ ι < 1 to be true is indeed high. the second policy intervention occurs after a time that can be predicted along the same lines that lead to eq. ( c). one finds ∆t 2 ≈ ρ 1 −2/3 ∆t 1 , where ∆t 1 is given by eq. ( a). since the growth rate k 2 is likely to be smaller than k 1 in magnitude, the third intervention takes place after yet a longer time span, etc. if we neglect that the fitted value k fit ι (t) differs slightly from k ι (a difference that is negligible when α ≫ 1), our model ensures that ρ ι is uniformly distributed in [0, 1]. after the ι-th intervention the growth rate is down in magnitude to |k ι+1 | = k 1 ρ 1 ρ 2 · · · ρ ι . to reach a low final growth rate k final , a typical number n int (k final ) ≈ ln(k 1 /k final )/⟨− ln ρ⟩ of interventions is required after the reboot, where the last factor tends to 1 in the limit of large enough α (for ρ uniform in [0, 1]). the time to reach this low rate is dominated by the last and first time intervals. thus, the system converges to the critical state where k = 0, but never quite reaches it. at late times t, the residual growth rate behaves as k final ∼ t −3/2 ; one uses eq. ( ) to reach this conclusion. the parameter α encodes the confidence policymakers need about the present state before they take a decision. here we discuss various measures that allow choosing an optimal value for α. as α decreases starting from large values, the time between interventions decreases, being proportional to α 2/3 according to eq. ( a). likewise, the fluctuations of infection numbers will initially decrease. however, the logarithmic average ⟨− ln ρ ι ⟩ in the denominator of eq. ( ) will also decrease from 1, and thus the necessary number of interventions increases. moreover, when α becomes too small, interventions become more and more ill-informed and erratic. it is not even obvious anymore that the marginally stable state is still approached asymptotically. from these two limiting considerations, we expect an intermediate value to be an optimal choice for α. let us now discuss a few quantitative measures of the performance of various strategies, which will allow policymakers to make an optimal choice of the confidence parameter defining a mitigation strategy. the time to reach a certain level of quiescence (low growth rates, infrequent interventions) is given by the time ( ), and thus by the expectation value of ∆t 1 . as a measure for the political cost, c p , we may take the number of interventions that have to be taken to reach quiescence. as we saw in eq. ( ), it scales inversely with the logarithmic average of the ratios of growth rates, ρ. if restrictions are over-relaxed, the infection numbers will grow with time. the maximal fraction of infected people must never be allowed to rise above the manageable threshold of i c . this means that continuous (random) monitoring of the fraction of infected people is needed, so that, given the knowledge from the time before the reboot about the conditions under which the system can be stabilized, lockdown conditions can always be imposed at a time that is sufficient to prevent reaching the level of i c .
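a quick monte carlo check of the n int estimate above, under the idealized large-α assumption that each intervention multiplies the rate by an independent ρ ~ uniform(0, 1); all numbers are illustrative.

```python
import math, random

def interventions_needed(k1, k_final, trials=10_000, seed=42):
    """average number of interventions until |k| < k_final when each step
    multiplies k by an independent rho ~ uniform(0, 1)."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        k = k1
        while k > k_final:
            k *= rng.random()
            total += 1
    return total / trials

print(interventions_needed(0.23, 0.01))   # ~4.1 steps
print(math.log(0.23 / 0.01))              # ~3.1: the log estimate, up to ~one step
```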
beyond this consideration, one may want to keep the expected maximal increase of infection numbers low, which we take as a measure of the health cost, c h ≡ max t i(t)/i * . note that, as defined, c h is a stochastic number. its mean and tail distribution (for large r) will be of particular importance. imposing restrictions such that k < 0 implies restrictions beyond what is absolutely necessary to maintain stability. if we assume that the economic cost c e is proportional to the excess negative growth rate, −k (and a potential gain proportional to k), a measure for the economic cost is the summation over time of −k(t), which converges, since k(t) decays as a sufficiently fast power law. here too, c e is a stochastic variable that depends on the testing history and the policy measures taken. however, its mean and standard deviation give a good idea of the performance in terms of economic considerations. we introduced in sec. vi a feedback and control strategy to tune to a marginal state with vanishing growth rate k = 0 after an initial reboot. interventions were only taken based on the measurement of the growth rate. however, in practice, a more refined strategy will be needed. in case the infection rate drops significantly below i * , one can safely afford to have a positive growth rate k. we thus assume that if i(t)/i * falls below some threshold i low = . , we intervene by relaxing some measures, which we assume to increase k by an amount uniformly distributed in [0, k 1 ], but without letting k exceed the maximal value k high = . . likewise, one should intervene when the fraction i(t) grows too large. we do so when i(t)/i * exceeds i high = ; in such a situation we impose restrictions resulting in a decrease of k by a quantity uniformly drawn from [k high /2, k high ]. the precise algorithm is given in appendix b. figure shows how our algorithm implements policy releases and restrictions in response to test data. the initial infected fraction and growth rate are i(0) = i c / = . and k 1 = . , respectively, with a sampling interval of one day. to more easily demonstrate the feedback protocol, we employ a high value of α = and a number of r = tests per day, resulting in a higher confidence in the estimated growth rate and a longer time (> days) until intervention. (in the figure, the unmitigated exponential growth with the initial growth rate k 1 is also plotted as a black line.) figure a displays the infection fraction, u(t)/n, as a function of time, derived using our simple exponential growth model, which is characterized by a single growth rate that changes stochastically at interventions [eq. ( ) without the source term]. in the absence of intervention, the infected population would grow rapidly, representing the uncontrolled runaway of a second epidemic. at each time step (day) the infected fraction of the population is sampled. the result is normally distributed, with mean and standard deviation given by eqs. ( e) and ( f), to obtain i(t). the former are represented by small circles, the latter by vertical error bars in fig. . if i/i * lies outside the range [i low , i high ], we intervene as described above. otherwise, on each day, k fit (t) and its standard deviation are estimated using the data since the last intervention. with this, at each time step, eqs. ( m) to ( o) decide whether or not to intervene. in fig. , each red circle represents an intervention and therefore either a decrease or increase of the growth rate constant of our model.
fig. shows the evolution of the fraction of infected people: after an initial growth with rate k 1 , subsequent interventions reduce the growth rate down to low levels within a few weeks. at the same time, the fraction of infected people stabilizes at a scale similar to i * ; for the given parameter set this is a general trend, independent of the realization. figure b displays the instantaneous value of the model rate constant, and also the estimated value together with its standard deviation. the estimate follows the model value reasonably well. one sees that the interventions occur when the uncertainty in k is sufficiently small (given the large choice of α = ). we now assume that we have the capacity for r = tests per day, and assess the performance of our strategy as a function of the confidence parameter α in fig. . values of α ≤ lead to rapid, but at the same time erratic, interventions, as is reflected by a rapidly growing number of interventions. for larger values of α, the time scale to reach a steady state increases, while the economic and health costs remain more or less stable. a reasonable compromise between minimizing the number of interventions and shortening the time to reach a steady state suggests a choice of α ≈ − . it is intuitive that the higher the number r of tests per day, the better the mitigation strategy will perform. the characteristic time to reach a final steady state decreases as r −1/3 , see eq. ( a). other measures of performance improve monotonically upon increasing r. this is confirmed and quantified in fig. , where we show how the political, health, and economic costs decrease with an increasing test rate. after a reboot it is likely that the growth rate k jumps back to positive values, as we have always assumed so far. the time it takes until one can distinguish a genuine growth from intrinsic fluctuations due to the finite number of sampled people depends on the growth rate k 1 , see eq. ( a). in the worst case, where the reboot brings back the unmitigated value k 1 , one will know within - days with reasonable confidence that the growth rate is well above zero. this is shown in fig. . in such a catastrophic situation, an early intervention can be taken, while the number of infections has at most tripled at worst. this reaction time is - times faster than without random testing! [fig. caption: time after which a significant positive growth rate is confirmed in the worst-case scenario, for which the growth rate k jumps to k = . after reboot. an intervention will be triggered in - days. results are shown for a confidence level α = and r = tests a day. the circles are the mean values; the vertical lines indicate the standard deviations of the first intervention time.] we have shown that the minimal testing rate r min ( ) is sufficient to obtain statistical information on the growth rate k as applied to switzerland as a whole. this tacitly assumes that the simple growth equation ( ) describes the dynamics of infections in the whole country well. that this is not necessarily a good description can be conjectured from recent data on the current rates at which the numbers of confirmed infections in the various cantons grow.
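to make the simulated protocol concrete, here is a compact, self-contained python sketch of the whole feedback loop (thresholds at zero, daily sampling, stochastic intervention efficacy). it mirrors the structure described above and in appendix b, but omits the i low /i high overrides for brevity, and every numeric default is an illustrative assumption rather than a value from the paper.

```python
import math, random

def simulate(r=15_000, alpha=3.0, i_star=1e-3, k1=0.15, days=365, seed=0):
    """one realization of the feedback strategy: sample daily positives,
    re-estimate k from the data since the last intervention, and intervene
    once the sign of k is established with confidence alpha; intervention
    efficacy is unknown, drawn uniformly between no effect and overshoot."""
    rng = random.Random(seed)
    i, k = i_star, k1
    counts = []                       # daily positives since last intervention
    n_int, c_h, c_e = 1, 1.0, 0.0     # political, health, economic cost tallies
    for _ in range(days):
        i *= math.exp(k)                                       # du/dt = k u
        n = max(0, round(rng.gauss(i * r, math.sqrt(i * r))))  # sampled positives
        counts.append(n)
        c_h = max(c_h, i / i_star)
        c_e += max(0.0, -k)                                    # over-restriction cost
        m = len(counts)
        if m >= 4:
            h = m // 2
            n1, n2 = sum(counts[:h]), sum(counts[m - h:])      # two half-windows
            if n1 > 0 and n2 > 0:
                k_fit = math.log(n2 / n1) / h
                dk = math.sqrt(1 / n1 + 1 / n2) / h
                if abs(k_fit) > alpha * dk:                    # confident sign: act
                    k -= rng.uniform(0.0, 2.0) * k_fit         # aim k -> 0
                    counts, n_int = [], n_int + 1
    return n_int, c_h, c_e

print(simulate())   # e.g. a handful of interventions, modest excursion of i
```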
these data indeed show a very significant spread, by nearly a factor of four, suggesting that a spatially resolved approach is preferable, if possible. similar heterogeneity of the time evolution of infection numbers can even be seen within a single big city, such as london. if the testing capacity is limited to rates of order r min , the approach can still be used, but caution should be taken to account for spatial fluctuations corresponding to hot spots. one should preferentially test in areas that are likely to show the largest local growth rates, so as not to miss locally super-critical growth rates by averaging over the entire country. if, however, higher testing frequencies become available, new and better options come into play. valuable information can be gained by analyzing the test data not only for switzerland as a whole, but by distinguishing different regions. it might even prove useful not to lift restrictions homogeneously throughout the country, but instead to vary the set of restrictions to be released, or to adapt their rigor. by way of example, consider that after the spring vacation, school starts in different weeks in different cantons. this regional difference could be exploited to probe the relative effect of re-opening schools on the local growth rates k. however, it might obviously prove politically difficult to go beyond such "naturally" occurring differences, as it is no doubt a complex matter to decide which region releases which measures first. a further issue is that the effects might be unclear at the borders between regions with different restrictions. there may also be complications with commuters who cross regional borders. finally, there may be undesired behavioral effects if regionally varying measures are declared an "experiment". such issues demand careful consideration if regionally varying policies are applied. even if policy measures should eventually not be taken in a region-specific manner, it is very useful to study a regionally refined model of epidemic dynamics. indeed, a host of literature exists that studies epidemiological models on lattices and analyzes the spatial heterogeneities [ , ] . in certain circumstances those have been argued to become even extremely strong [ ] . in the present paper, we content ourselves with a few general remarks concerning such refinements. we reserve a more thorough study of regionally refined testing and mitigation strategies for a later publication. let us thus group the population of switzerland into g sets. the most natural clustering is according to the place where people live, cities or counties. the more we partition the country, the more spatially refined the acquired data will be, and the better tailored mitigation strategies could potentially become. however, this comes at a price: for a limited national testing rate r tot , an increased partitioning means that the statistical uncertainty in the measured local growth rates of each region will increase. the minimal test rate r min that we estimated on the right-hand side of eq. ( ) still holds, but now for each region, which can only test at a rate r = r tot /g. to refine switzerland into g regions, we thus have the constraint that the total testing capacity exceeds r tot ≥ g r min . on the other hand, if testing at a high daily rate r tot becomes available, nothing should stop one from refining the statistical analysis to g ≈ r tot /r min regions to make the best use of the available data.
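in code, the constraint is a one-liner; the value of r min below is the assumed minimal per-region rate, not a number fixed by the paper.

```python
def regions_supported(r_tot, r_min=15_000):
    """largest number of regions G that can be monitored independently,
    from the constraint r_tot >= G * r_min (r_min assumed)."""
    return r_tot // r_min

print(regions_supported(100_000))   # 6 regions at these assumed rates
```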
one might also consider other distinguishing characteristics of groups (age or commuting habits, etc.), but we will not do so here, since it is not clear whether the increased complexity of the model can be exploited to reach an improved data analysis. in fact, we expect that the number of fitting parameters would very quickly become too large upon making such further distinctions. b. spatially resolved growth model. each of the population groups m ∈ {1, · · · , g} is assumed to have roughly the same size, containing n/g people, u m of whom are infected but as yet undetected. the spreading of infections is again assumed to follow a linear growth equation (where we neglect influx from across the borders from the outset), du m (t)/dt = ∑ n k mn (t) u n (t). ( ) here, the growth kernel k(t) is a g × g matrix with matrix elements k mn (t). the matrix k(t) has g (complex-valued) eigenvalues λ n , n = 1, · · · , g. the largest growth rate is given by κ(t) ≡ max n re λ n (t). for the sake of stability criteria, κ(t) now essentially takes the role of k(t) in the model with a single region, g = 1. we note that the number of infections grows exponentially if κ(t) > 0, and decreases if κ(t) < 0. as in the case of a single region, we assume k(t) to be piecewise constant in time, and to change only upon taking policy interventions. in the simplest approximation, one assumes no contact between geographically distinct groups, that is, the off-diagonal matrix elements are set to zero [k m≠n (t) = 0] and the eigenvalues become equal to the elements of the diagonal: k m (t) ≡ k mm (t). as current cantonal data suggest, the local growth rate k m (t) depends on the region, and thus k m (t) ≠ k n (t). it is natural to expect that k m (t) correlates with the population density, the fraction of the population that commutes, the age distribution, etc. if on top of the heterogeneity of growth rates one adds finite but weak inter-regional couplings k m≠n (t) > 0 (mostly between nearest-neighbor regions), one may still expect the eigenvectors of k(t) to be rather localized (a phenomenon well known as anderson localization [ ] in the context of waves propagating in strongly disordered media). by this, one means that the eigenvectors have a lot of weight on a few regions only, and little weight everywhere else. that such a phenomenon might occur in the growth pattern of real epidemics is suggested by the significant regional differences in growth rates that we have mentioned above. in such a situation it would seem preferable to adapt restrictive measures to localized regions with strong overlap on unstable eigenvectors of k(t), while minimizing their socio-economic impact in other regions with lower k m (t).
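a small numerical sketch of this spectral picture: heterogeneous local rates on the diagonal, weak nearest-neighbour coupling off it, and the leading eigenvalue/eigenvector extracted. all numbers are illustrative assumptions.

```python
import numpy as np

G = 6
rng = np.random.default_rng(0)
K = np.diag(rng.uniform(-0.05, 0.15, size=G))   # local rates k_m on the diagonal
for m in range(G - 1):                          # weak nearest-neighbour coupling
    K[m, m + 1] = K[m + 1, m] = 0.01

lam, vec = np.linalg.eig(K)
lead = int(np.argmax(lam.real))
kappa = lam.real[lead]                          # overall stability: kappa = max Re(lambda)
weights = np.abs(vec[:, lead]) ** 2             # localization of the unstable mode
print(kappa, weights.round(3))                  # much weight on few regions -> local measures
```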
c. mitigation strategies with regionally refined analysis. as mentioned above, in the case with several distinct regions, g > 1, an intervention becomes necessary when the largest eigenvalue κ(t) of k(t) crosses an upper or a lower threshold (with a level of confidence α, again to be specified). if the associated eigenvector is delocalized over all regions, one will most likely respond with a global policy measure. however, it may as well happen that the eigenvector corresponding to κ(t) is well localized. in this case one can distinguish two strategies for intervention. (a) global strategy: one always applies a single policy change to the whole country. this is politically simple to implement, but might incur unnecessary economic cost in regions that are not currently unstable. (b) local strategy: one applies a policy change only in regions which have significant weight on the unstable eigenvectors. this means that one only adjusts the corresponding diagonal matrix elements of k(t) and those off-diagonals that share an index with the unstable region. likewise, regions that have i m < i * and have negligible overlap with eigenvectors whose eigenvalues lie above κ − could relax some restrictions before others do. fitting test data to a regionally refined model will allow us to estimate the off-diagonal terms k mn (t), which are so far poorly characterized parameters. however, the k mn (t) contain valuable information. for instance, if a hot spot emerges [that is, a region overlapping strongly with a localized eigenvector with positive re λ n (t)], this part of the matrix will inform us which connections are the most likely to infect neighboring regions. they can then be addressed by appropriate policy measures and will be monitored subsequently, with the aim to contain the hot spot and keep it well localized. this model allows us to calculate again the economic, political, and health impacts of various strategies. it is important to assess how the global and the local strategy perform in comparison. obviously this will depend on the variability between the local growth rates k m (t), which is currently not well known, but will become a measurable quantity in the future. at that point one will be able to decide whether to select the politically simpler route (a) or the heterogeneous route (b), which is likely to be economically favorable. we are currently engaged in developing an analysis tool to quickly process test data for multi-region modelling. we are developing and assessing intervention strategies with the perspective of running the tool daily with the best available current data and knowledge. we will report on these activities in subsequent memoranda. we have analyzed a feedback and control model for managing a pandemic such as that caused by covid- . the crucial output parameters are the infection growth rates in the general population and in spatially localized sub-populations. when planning for an upcoming reboot of the economy, it is essential to assess and mitigate the risks of relaxing some of the restrictions that have brought the covid- epidemic under control. in particular, the policy strategy chosen must suppress a potential second exponential wave when the economy is rebooted, and so avoid a perpetual stop-and-go oscillation between relaxation and lockdown. feedback and control models are designed with precisely this goal in mind. having random testing in place, the risk of a second wave can be kept to a minimum.
an additional testing capacity of r min tests per day (on top of the current tests for medical purposes), carried out with randomly selected people, would allow us to follow the course of the pandemic almost in real time, without huge time delays, and without the danger of increasing the number of infected people by more than a factor of two, if our intervention strategy is followed. if testing rates r significantly higher than r min become available, a regionally refined analysis of the growth dynamics can be carried out, with g ≈ r/r min regions that can be distinguished. in the worst-case scenario, where releasing certain measures immediately makes the country jump back to the unmitigated growth rate of k = . day −1 , random testing would detect this within - days of the change coming into effect. this is in stark contrast to the nearly days of delay required for symptomatic individuals to emerge in statistically significant numbers. after such a time delay, a huge increase (a factor of order ) of infection numbers may already have occurred, which would be catastrophic. daily random testing safely prevents this; the significant reduction of the time delay is absolutely crucial. note that without daily polling of infection numbers and without knowledge about the quantitative effect of restriction measures, a reboot of the economy could not be risked before the number of infections has been suppressed by at least a factor of - below the current level. given the limits of the suppression rates that can be achieved without the most draconian lockdown measures, this would require a very long time and thus translates into an enormous economic cost. in contrast, daily polling will allow us to carefully reboot the economy and adjust restrictive measures, while closely monitoring their effect. since the reaction times are so much shorter, one can safely start an attempted reboot already at infection numbers corresponding roughly to the status quo. at some point one might consider the option of starting to release different sets of restrictions in different regions, with the aim to learn faster about their respective effects and thus to optimize response strategies in subsequent steps. we are grateful to emma slack, giulia brunelli, and thomas van boeckel for helpful discussions, and the erc hero project for supporting ga. appendix a: assessment of contact tracing as a means to control the pandemic. let us briefly discuss the strategy of so-called contact tracing as a means to contain the pandemic, as has been discussed in the literature [ ] . we argue that contact tracing is a helpful tool to suppress transmission rates, but is susceptible to fail when no other method of control is used. contact tracing means that once an infected person is detected, people in their environment (i.e., known personal contacts, and those identified using mobile-phone-based apps, etc.) are notified and tested, and quarantined if detected positive. as a complementary measure to push down the transmission rate it is definitely useful, and it represents a relatively low-cost and targeted measure, since the probability of detecting infected people is high. however, as a sole measure to contain a pandemic, contact tracing is impractical (especially at the current high numbers of infected people) and even hazardous. the reason is as follows. it is believed that a considerable fraction f asym of infected people show only weak or no symptoms, so that they would not get tested under the present testing regime.
the value of f asym is not well known, but it might be rather high ( % or even much higher). such asymptomatic people will go undetected if they have not been in contact with a person displaying symptoms. if on average they infect r people while being infectious, and if r f asym > 1, there will be an exponential avalanche of undetected cases. they will produce an exponentially growing number of detectable and medically serious cases. the contact tracing of those (upward in the infection tree) is tedious, and cannot fully eliminate the danger of such an avalanche. contact tracing as a main strategy thus only becomes viable once the value of f asym is well established, and one is certain to be able to control the value of r such that r f asym < 1.
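the avalanche condition is a simple product test. a one-line sketch, with both inputs being illustrative guesses rather than established values:

```python
def tracing_alone_can_work(R, f_asym):
    """contact tracing as the sole measure fails whenever undetected spread
    is supercritical, i.e. R * f_asym > 1 (both inputs are poorly known)."""
    return R * f_asym < 1.0

print(tracing_alone_can_work(R=2.5, f_asym=0.5))   # False: avalanche regime here
```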
appendix b: the precise algorithm of the simulated intervention strategy. definitions:
• t = 0, 1, · · · : time in days (integer).
• n int : number of interventions (including the reboot at t = 0).
• t int (j): first day on which the j'th rate k j applies; on day t int (1) ≡ 0 the initial reboot step is taken.
• ∆t(j) = t int (j + 1) − t int (j): time span between interventions j and j + 1.
• t first : first day on which the current rate k = k(t) applied.
• i(t): fraction of infected people on day t.
• k(t): growth rate on day t.
• r: number of tests per day.
• c h : health cost. • c e : economic cost.
• k min = . : minimal growth rate targeted.
• i low = . : lower threshold for i/i * ; if i/i * < i low , no intervention is made even if k is above α δk.
• i high = : upper threshold for i/i * ; if i/i * > i high , an intervention is made even if k is still smaller than α δk.
• k low = − . : minimal possible decreasing rate considered.
• k high = . : maximal possible increasing rate considered.
• t min = : minimal time to wait since the last intervention, for interventions based on the level of i(t).
• b = . : parameter defining the possible range of changes ∆k due to measures taken after estimating k: |∆k/k est | ∈ [b, 2].
• α: confidence parameter.
• n (t): cardinality of the random sample of infected people on day t. the number n (t) is obtained by sampling from a gaussian distribution of mean i(t) r and standard deviation [i(t) r]^(1/2) and rounding the obtained real number to the next non-negative integer.
initialization: t first = t int (1) = 0; n int = 1; c h = 0; c e = 0; k(0) = k 1 (the initial growth rate); i(0) = i * , with the common choice i * = i c / ; draw n (0); k(1) = k(0) (no intervention at the end of day 0); set t = 1.
daily routine: define i(t) = i(t − 1) e^(k(t−1)) and c h = max{c h , i(t)/i * }, then determine k(t + 1) by assessing whether or not to intervene. if t = t first , then k(t + 1) = k(t) (no intervention). else distinguish three cases:
1. if i(t)/i * < i low and t − t first ≥ t min , then k(t + 1) = min{k(t) + x k 1 , k high } with x = unif[0, 1].
2. if i(t)/i * > i high and t − t first ≥ t min , then k(t + 1) = max{k(t) − (1 + x)/2 · k high , k low } with x = unif[0, 1].
3. if i low < i(t)/i * < i high , then set ∆t ≡ t − t first + 1 and compute k est (t first , ∆t) and δk est (t first , ∆t) as described below. if |k est | > k min and [k est > α δk est or k est < −α δk est ], set k(t + 1) = k(t) − x k est with x = unif[b, 2]; if k(t + 1) > k high , put k(t + 1) = k high ; if k(t + 1) < k low , put k(t + 1) = k low . else k(t + 1) = k(t). then t = t + 1.
if an intervention was taken above: put n int = n int + 1, define t int (n int ) = t + 1, define ∆t(n int − 1) = t int (n int ) − t int (n int − 1), and set t first = t + 1. if |k est | < k min and k(t) < k min and t − t first exceeds a preset waiting time, exit; else return to the daily routine for the next day.
computing k est (t first , ∆t) and δk est (t first , ∆t): split the window since t first into two halves (if ∆t is even, the second half starts at t first + ∆t/2; if ∆t is odd, at t first + (∆t + 1)/2) and sum the daily counts n (·) over each half; if either sum vanishes, return k est = 0 with an undefined (infinite) δk est ; otherwise k est and δk est follow from the two-window fit and error propagation of sec. vi.
outputs: time to first intervention, ∆t(1); health cost, c h ; political cost, n int ; economic cost, c e .
references
[ ] strategies for mitigating an influenza pandemic.
[ ] covid- reports from the mrc centre for global infectious disease analysis.
[ ] impact of non-pharmaceutical interventions (npis) to reduce covid- mortality and healthcare demand.
[ ] cybernetics: or control and communication in the animal and the machine.
[ ] the reproductive number of covid- is higher compared to sars coronavirus.
[ ] infectious diseases of humans: dynamics and control.
[ ] epidemic modeling: an introduction.
[ ] stochastic epidemic models: a survey.
[ ] the number of icu beds in switzerland was taken from the neue zürcher zeitung from march .
[ ] on the critical behavior of the general epidemic process and dynamical percolation.
[ ] epidemic models and percolation.
[ ] infinite-randomness critical point in the two-dimensional disordered contact process.
[ ] absence of diffusion in certain random lattices.
[ ] the efficacy of contact tracing for the containment of the novel coronavirus (covid- ).
[ ] coronavirus: policy design for stable population recovery: using feedback to maximize population recovery rate while respecting healthcare capacity.
key: cord- -m ikl da authors: goh, hoe-han; bourne, philip e. title: ten simple rules for researchers while in isolation from a pandemic date: - - journal: plos comput biol doi: . /journal.pcbi. sha: doc_id: cord_uid: m ikl da nan the scale and intensity of the coronavirus disease (covid- ) worldwide pandemic is unprecedented in all our lifetimes. it has changed our lifestyles and our workstyles, in a manner and to a degree that is likely to persist for some time. here we offer some guidance, in the familiar ten simple rules format, for how to navigate a stressful situation, considering it realistically as both a curse and an opportunity. this is written for all of us involved in scientific research-graduate student, postdoc, academic, staff scientist, in academia, government, or industry. each such person has so much to contribute in a time of need but is simultaneously also a member of a worldwide population under threat. what can one do in a time like this? this is so fundamental that we have taken the liberty of calling it rule . as one reviewer put it, "put on your own oxygen mask before taking care of others." what use are you to your loved ones and our society at large if your mental and physical state is less than optimum, or, worse still, if you do not survive? you may be a researcher who studies the basic biochemistry of infectious diseases or who performs pandemic modeling-we need all your dedication and expertise as we collectively work through this threat. more likely, your scientific training is in a variety of other fields. use that training.
review the literature and the public data being produced; understand, and explain to others, the need for seemingly onerous measures such as social distancing; you can use past pandemics as your guide [ ] . take every opportunity to use your scientific understanding to explain the importance of protective clothing, isolation, and appropriate hygiene. accurate information can be read anywhere, but so can false or misleading claims. use your scientific knowledge to influence others as best you can with facts and solid arguments, never shying away from clearly indicating what we do not know (that's just as much a part of science as what we do know). science-seriously undervalued in many parts of the world in recent years-has almost literally overnight become what the general population craves to understand, so as to grasp their predicament. use your training and knowledge wisely and effectively, allowing all to benefit. do so via social media, such as facebook, twitter, instagram, and linkedin, to disseminate useful information to the general public and reinforce #stayathome. you do not need to be a leader to show leadership. we can all lead in our own way. whether it is as a group leader providing strength, comfort, compassion, flexibility, and direction to your team, or as a graduate student providing the same to fellow graduate students or undergraduates, it all counts. it counts as compassion shown to those less fortunate. researchers are one of the least likely groups to be seriously disadvantaged. remember and act on that. leadership comes in many forms, but one that might help here is the setting of new goals in times of unsettled circumstances. labs and individuals have goals. working towards those goals in uncertain times provides a sense of purpose and accomplishment, which would seem so important. goals can be set on various time spans, from daily to the duration of the pandemic and beyond (months, years). goal setting is particularly difficult for experimental work when the lab is closed, and equally difficult for young researchers when a thesis or paper needs to be completed. try to be creative with goals that are achievable and that offset the sense of loss. another form of leadership is to appropriately compliment those who support your research, in whatever way. this is important at the best of times, but particularly so at the worst of times. show leadership in understanding. as researchers, for all that is lost, it is mostly trite relative to what many have lost by way of family members, jobs, and a way of life. look to lead efforts that address this imbalance, either through university or other programs that reach out to the community. you have skills that can help others-employ them. if you can't find ways, then at least do more for your profession-tutor others remotely, review more, etc. by institution, we mean everything from the government (federal, state, and local) to your workplace to your individual laboratory. enormous effort has likely gone into contingency planning, albeit not exactly for the scenario that we now face. follow those plans, even if they have been made late relative to the spread of the virus. those plans include how best to work remotely and knowing your classification tier (your level of "essentiality" within your organization-are you central or not to the operational capacity and wellbeing of the organization?).
essentiality can relate to the care and maintenance of laboratory animals, equipment, or materials for which safety is paramount (e.g., radioisotopes), your students, etc. act according to that essentiality. each of us occupies a unique niche. if a continuity of operations plan (coop), guideline, or something similar exists, study it and follow directives from your organization, as your safety and wellbeing are their top priority. (if you're unsure whether one exists, ask your immediate supervisor.) be aware of support groups and other mechanisms to help you through these difficult times. finally, provide feedback on the guidance you are receiving, good or bad; providing such guidance is part of an agile process. this is particularly important when the plan is not clear or does not help you as a stakeholder. slack channels are a good option for you to try to get involved in contingency planning. how science is conducted has changed, almost overnight. some of what you did before will likely not be possible, but as you will see in the rules that follow, new opportunities arise. likely, most of your computational work can continue; essential staff (rule ) will keep servers, high-performance computing facilities, networks, etc. running, albeit under greater strain than usual. compensate for the loss of other forms of work and try to recreate yet others as best you can while being remote. set up your home office to be comfortable and functional if it is not already. you will have likely gained time that was previously spent on commuting; use that time wisely. embrace the new opportunities that exist but also consider the physical and mental wellbeing that comes from keeping a regular schedule and associated calendar, diet, exercise regime, etc. we are social animals in both personal and professional lives. most of us have the technology to create some level of virtual normality in our otherwise physical communications. use all the synchronous and asynchronous tools you have at your disposal on both computer and cellular networks to recreate as much of the normal as possible. you can still have your regular group meetings virtually via zoom, google meet, microsoft teams, etc. do turn on your camera during video calls, as seeing familiar faces helps to alleviate the sense of loneliness. it also forces you to dress accordingly and to keep an organized workspace, as a reminder that you are still working, even in your cozy home. keep the team spirit going with regular communications, checking in with your colleagues to see whether they are coping well or need assistance. slack channels are good for this; physical presence is certainly irreplaceable, but a silver lining is that we will be in a better place when the pandemic subsides because of the extreme stress-testing of technology. to illustrate this point from our own experience, we have put classes online that would have taken years to achieve otherwise. our faculty have quickly embraced technologies they would otherwise ignore; we have records of lab meetings, lectures, workshops, etc. that we would not otherwise have. in short, our scientific digital footprint has expanded significantly out of necessity; the most advantageous and beneficial of our new habits will persist in our post-pandemic world. recognize this and prepare for it; do not be shy or reticent about the new formats and opportunities to communicate your science. it is amazing and extremely heartening to see scientists come together in the face of adversity.
we all have a part to play in the face of this pandemic: to return us to a sense of normality and to make sure that, as a society, we learn from these tragic events. although open science (sharing knowledge, data, software, etc.) should always be the norm, it has become much more imperative now. use this opportunity to both give and take. make your data, software, and papers immediately available through public resources and preprint servers. donate anything from your lab that is not being used when it may benefit others. you can seize this opportunity to develop online courses [ ] or workshops [ ] based on the various open learning platforms available nowadays. this is not only useful to share your knowledge and skills but also helps build your scientific reputation, even in times of adversity. this is especially true for data scientists in teaching various bioinformatics and programming skills. it is important to keep up to date with the latest developments of the pandemic situation. what can you contribute both to society and to science, as a researcher helping combat the pandemic? it could demand more than "thinking outside of the box"; throw away the box in helping those at the front line! remember, researchers are disadvantaged in different ways. the more experimental your work, the more you are likely disadvantaged. do what you can for the most disadvantaged. continue to support scientific publications by accepting invitations to review manuscripts (an example of a type of activity that is pandemic-compatible). perhaps it is a good time for you to widen your public outreach and write for public media? again (see rule ), fully utilize the potential of social media to disseminate useful information to the general public and reinforce #stayathome. you can also get involved with webinars [ ] or online conferences [ ], either as a participant or organizer. for bioinformaticians, you will be familiar with the online sharing of computational analyses [ ] or software tools [ ]; for others, this will be a key time to develop skills in multisite collaborations [ ]. you can even seek help from online scientific communities [ ]. but also keep in mind rule : all you contribute can be undone by overstating your case at a time when exactly what we know and don't know is made abundantly clear. as a researcher, there are likely some things you simply cannot do right now. on the other hand, there may be time for either scholarly or professional pursuits that you've always had on your to-do list but for which you never had time. there is always that review or paper to be written that is constantly on the back burner, the exploration of a new area of research with never the time to read the background literature, that software that needs to be written or rewritten, a grant you have always meant to write, and so on. it is never too late for experimental biologists to sharpen programming skills [ ]. refer to rule in thinking about these opportunities in terms of what you can contribute scientifically in this time of need. an example would be online materials for a broad audience that impart your knowledge and skill for the common good. for us, this article embraces rules and . personal adversity changes everything, including family relationships. when the adversity is population-wide, as in the case of a pandemic, the potential for positive collective impacts at the societal level is simply unparalleled, almost by definition. conducting research can be an all-consuming challenge to work-life balance.
confinement provides time to reflect and begin to act if you feel your balance may be off. don't just use time saved to work, but use it to make a difference to others in nonscientific ways. for many, this will come out of necessity when the daycare center is closed or aging parents need help. make a priority of those who need you in ways they did not before. evaluate priorities (see rule ) and share the responsibilities fairly and equitably. reassessment of work-family balance is especially hard with both parents working and your children at home with no childcare. likewise, if you are on a tenure clock, there is added pressure to perform in impossible circumstances. look to what your institution offers in help, and discuss the situation with those to whom you report. exploring interests not related to your research is important at the best of times and especially important at the worst of times. it is all too easy to wander over to the computer and just keep working. embrace distractions. think about your physical and mental wellbeing. other interests are critical here. is there a way those interests can benefit others less fortunate in this time of need? perhaps do something that works on emotions in need of bolstering right now. take humor, for example (https://slydor.com/comics-explained-working-from-home-vs-office/). perhaps tap into your creativity to create your own comics [ ]. do what works for you that explores your artistic (or really any nonscientific) side. this is also a unique opportunity for researchers with busy schedules that are often fragmented with meetings and such to improve our sedentary lifestyles by getting outside to exercise, hike in nature, and so on (minding the -meter interpersonal distance criteria, of course). suddenly, what used to feel so important is less so in the face of a crisis. think about what is indeed important now, and consider what you will do differently when it is over. consider also that (like a new year's resolution) you may not keep to what you commit to. writing down what is important during this stressful time and referring back to it in both the near and distant future is, at least, a start. doing so has nothing to do with doing the research; but, as a researcher, part of that evaluation should be considering the value of the various aspects of your research. perhaps in the light of recent events, other avenues of inquiry that use your skillset are more or less important than you would have imagined before the pandemic? inspiration for this article came from hhg, who provided a first draft, and peb while (virtually) staring into the eyes of the graduate students that make up his school of data science. what can we do next to help? what can you do? tell us by commenting on this article, using social media, or writing to us directly.
soper ga. the lessons of the pandemic.
ten simple rules for developing a mooc.
ten simple rules for developing a short bioinformatics training course.
ten simple rules for organizing a webinar series.
ten simple rules for organizing a virtual conference, anywhere.
ten simple rules for writing and sharing computational analyses in jupyter notebooks.
ten simple rules for taking advantage of git and github.
ten simple rules to enable multi-site collaborations through data sharing.
ten simple rules for getting help from online scientific communities.
ten simple rules for biologists learning to program.
ten simple rules for drawing scientific comics.
we never imagined writing such an article.
this article is dedicated to all those on the front line of the pandemic. thanks to cameron mura, claudia scholz, and mark borodovsky for their valuable input into this article. an earlier version of this article was made public on march , as a google document (major preprint servers would not take it as it is not a research article) with authors from central virginia, usa and selangor, malaysia. neither had yet been hit hard by covid- at that time.
key: cord- -a b hyyr authors: nan title: th annual meeting of the gth (gesellschaft für thrombose- und hämostaseforschung) date: journal: ann hematol doi: . /bf sha: doc_id: cord_uid: a b hyyr nan
the variable molecular weight (mw) of vwf is due to differences in the number of subunits comprising the protein. it is assumed that endothelial cells secrete large polymeric forms of vwf and that smaller species arise from proteolytic cleavage. vwf has two main properties: it stabilizes factor viii, protecting it from inactivation by activated protein c or factor xa, and it mediates platelet adhesion to the subendothelium of the damaged blood vessel wall. each vwf subunit contains binding sites for collagen and for the platelet glycoproteins gp ib and gp iib/iiia. multiple interactions of the multivalent vwf lead to extremely strong binding of platelets to the subendothelial surface that is capable of resisting the high wall shear rate in the circulating blood. only the largest multimers are hemostatically active. lack of the largest vwf multimers was observed in patients with von willebrand disease type a. unusually large molecular forms of vwf were found in patients with thrombotic thrombocytopenic purpura. proteolytic enzyme(s) may be involved in the physiologic regulation of the polymeric size of vwf and thus play an important role in the pathogenesis of vwf abnormalities in some patients with congenital or acquired disorders of hemostasis. we have purified a vwf-degrading protease (~ , -fold) from human plasma using affinity chromatography and gel filtration. the proteolytic activity was associated with a high mw protein (mr - kd). vwf was resistant against the protease in a physiologic buffer but became degraded at low salt concentration or in the presence of m urea. proteolytic activity had a ph optimum at - and was not inhibited by serine protease inhibitors or sulfhydryl reagents. inhibition by chelating agents was best reversed by barium ions. the observed properties of the vwf-degrading enzyme differ from those of all hitherto described proteases. analysis of cleaved vwf showed that the peptide bond tyr-met had been cleaved, the same bond that has been proposed to be cleaved in vivo. the endothelium releases the vasodilator nitric oxide (no) and the vasoconstrictor endothelin (et)- . no is formed from l-arginine via the activity of constitutive nitric oxide synthase (cnos or enos). an inducible form of nos (inos) is activated by cytokines. no activates guanylyl cyclase in vascular smooth muscle and platelets, leading to the formation of cgmp which induces relaxation or platelet inhibition, respectively. in vessels, no is responsible for endothelium-dependent relaxations; in vivo it exerts a vasodilator tone which can be enhanced by shear forces and receptor-operated agonists such as acetylcholine, bradykinin, thrombin, atp and adp. infusion of no-inhibitors in vivo leads to vasoconstriction and increases in blood pressure, and oral administration leads to hypertension in the rat.
within the endothelium, no inhibits et gene expression and release of the peptide via cgmp. hence, hypertension induced by no-inhibition is associated with increased plasma et levels. et, a -amino acid peptide, has potent vasoconstrictor properties via eta- and in part etb-receptors on vascular smooth muscle. in endothelial cells, et activates etb-receptors linked to no and prostacyclin formation. under basal conditions, little et is formed, but its formation is increased by thrombin, angiotensin ii, arginine vasopressin, cytokines and ox-ldl. et antagonists have been developed and allow the study of the effects of et in vivo. et and no most likely play an important role in disease states such as hypertension, atherosclerosis, coronary artery disease, heart failure, pulmonary hypertension and subarachnoid hemorrhage. clinical trials to further define their role in these disease states are now under way. in summary, the endothelium is an important regulator of vascular tone and structure in vitro and in vivo. in disease states, their interaction is imbalanced, leading to enhanced vasoconstriction, thrombus formation and structural changes of the blood vessel wall. pharmacological tools aiming to inhibit those changes are now being developed. j.m. harlan, r.k. winn, s. sharar, and n. vedder, university of washington, seattle, washington. ischemia-reperfusion injury has been implicated in the pathogenesis of a wide variety of clinical disorders. in preclinical models, tissue damage clearly occurs during ischemia but, paradoxically, may be exacerbated during reperfusion. this reperfusion injury appears to involve activation of the inflammatory cascade with generation of complement components, lipoxygenase products, and chemokines as proximal mediators and neutrophils as final effectors of vascular and tissue damage. we have examined the role of leukocyte adhesion in reperfusion injury in two models: the rabbit ear as a model of isolated organ injury, and hemorrhagic shock and resuscitation in the rabbit and primate as a model of traumatic shock and multiple organ failure. data regarding the efficacy, timing, and safety of leukocyte adhesion blockade using selectin- or integrin-directed reagents in these models will be presented. the current status of anti-adhesion therapy in other preclinical models and early clinical trials will be reviewed. an amidolytic assay for the determination of activated protein c (apc)-resistant factor va (fva) has been developed. this assay measures the cofactor activity of fva in diluted plasma samples via the rate of thrombin formation. the apc response is calculated from two fv determinations: one performed in the presence (apc-fv) and one in the absence of recombinant apc. the apc-fv activity is expressed as a percentage of the initial fv activity and indicates the sensitivity of fva to apc. normal ranges were established by analysing plasma samples of healthy individuals, and an apc-fv activity above % was found to be indicative of apc resistance (apc-r). in a control group of patients the apc-r assay gave abnormal results in patients. dna analysis confirmed heterozygous fv r q mutation in all patients and confirmed the non-carrier status in all of the patients yielding normal results. an aptt-based apc-r assay performed on the same group of patients showed abnormal results in two of the non-carrier patients. one of these patients was diagnosed as positive for lupus anticoagulant, whereas the reason for the false positive result in the second patient remains unclear.
eleven patients were analyzed before the start of oral anticoagulation and during oral anticoagulant treatment. comparison of the assay results demonstrates a correlation of %, indicating that the assay is independent of the activities of vitamin k-dependent clotting factors. the apc-r amidolytic assay allows specific and sensitive detection of fva resistant to apc. the assay can be performed in plasma samples of all persons in whom the diagnosis of apc-r may be indicated. in patients treated with oral anticoagulants or showing other clotting abnormalities affecting the aptt, the apc-r amidolytic assay is helpful to establish the diagnosis of apc-r.
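for readers unfamiliar with the arithmetic behind such an apc response, here is a minimal sketch of the calculation described above. the cut-off value and the example activities are invented placeholders (the abstract's actual digits are not preserved in this text), and the function names are ours, not the authors'.

```python
# minimal sketch: APC response from two factor V determinations.
# in the real assay both activities are derived from thrombin-formation
# rates in diluted plasma; here they are illustrative numbers.

APC_FV_CUTOFF_PERCENT = 30.0  # hypothetical cut-off; the published digit is lost

def apc_fv_percent(fv_with_apc: float, fv_without_apc: float) -> float:
    """residual FVa cofactor activity in the presence of APC,
    expressed as a percentage of the initial FV activity."""
    return 100.0 * fv_with_apc / fv_without_apc

def is_apc_resistant(fv_with_apc: float, fv_without_apc: float) -> bool:
    # high residual activity = FVa poorly inactivated by APC = APC resistance
    return apc_fv_percent(fv_with_apc, fv_without_apc) > APC_FV_CUTOFF_PERCENT

print(apc_fv_percent(0.12, 1.0))    # normal responder: ~12 % residual activity
print(is_apc_resistant(0.55, 1.0))  # True: suggestive of FV R506Q
```

because the result is a ratio of two measurements on the same sample, it is insensitive to the absolute levels of the vitamin k-dependent factors, which is consistent with the abstract's observation in anticoagulated patients.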
dept. of pediatrics, university hospitals kiel and münster, germany. resistance to activated protein c (apcr), in the majority of cases associated with the arg gln point mutation in the factor v gene, is present in more than % of patients < years of age with unexplained thrombophilia. to determine to what extent this relatively common gene mutation affects the risk of thromboembolic events in infants and children, its occurrence was investigated in a population of children with unexplained venous or arterial thromboembolism: thrombosis of the central nervous system (cns, n= ), vena portae (n= ), deep vein thrombosis (n= ), vena caval occlusion (n= ), neonatal renal venous thrombosis (rvt; n= ), neonatal stroke (n= ), stroke (n= ), arteria femoralis occlusion (n= ). four out of these patients showed a positive history of unexplained familial thrombophilia. apcr was measured in an activated partial thromboplastin time (aptt) according to dahlbäck. the results were expressed as apc ratios: clotting time obtained using the apc/cacl solution divided by clotting time obtained with cacl . concerning the special properties of the childhood hemostatic system, infants and children with apcr < were considered to be apc-resistant only when the results were confirmed in a : dilution with factor v-deficient plasma (instrumentation laboratory, munich, germany). plasma of healthy children served as controls. the arg gln mutation of the factor v gene was assayed by amplification of the dna samples by pcr followed by digestion of the amplified products with the restriction enzyme mnl i. results were confirmed by sscp analysis or by direct sequencing of dna from patients with apcr. consistent with the aptt-based method, out of children with venous (v) thrombosis and eight out of patients with arterial (a) vascular insults showed the common factor v mutation. additional coagulation defects (antithrombin, protein c type i, enhanced antiphospholipid igg, enhanced lipoprotein (a)) were found in % (v) and % (a). furthermore, we diagnosed exogenous causes (septicemia, postpartal asphyxia, fetopathia diabetica, central line and steroid/asparaginase administration) in six out of (v) and three out of (a) children with thrombosis and apcr. all four patients with a positive family history of thrombophilia (mothers only!) showed the common factor v mutation arg gln. in the control group the prevalence of apcr was . %. the high incidence of additional exogenous factors in children with apcr confirms literature data on previously described inherited coagulation disorders during infancy and childhood: an acquired risk of thromboembolic disorders masks the coagulation deficiency in the majority of patients with an inherited prethrombotic state. furthermore, the incidence of % apc-resistant children with arterial insults in this study challenges the view that apcr is associated with venous but not with arterial thrombosis. activated protein c resistance and plasminogen deficiency in a family with thrombophilia. m. züger, f. demarmels biasiutti, ch. mannhalter, m. furlan, b. lämmle; hämatologisches zentrallabor der universität, inselspital, ch- bern; klinisches institut für medizinische und chemische labordiagnostik, universität wien, a- wien. several hereditary defects of the proteins regulating blood coagulation have been associated with familial thrombophilia. since the recent discovery of activated protein c (apc) resistance due to the factor v r q mutation as a highly prevalent hereditary risk factor for venous thromboembolism (te), evidence is accumulating that familial thrombophilia may be due to a combination of genetic defects. thus, protein c- or protein s-deficient patients having suffered from te seem to be more likely to carry the factor v r q mutation than expected from its allelic frequency in the population. we report a family (see figure) in which plasminogen deficiency ( . u/ml) had been found in the propositus, who had twice suffered postoperative deep vein thrombosis (dvt) at ages and yrs, respectively, as well as in family members ( . - . u/ml). out of these plasminogen-deficient individuals, only the propositus' daughter had suffered from recurrent dvt at age < yrs. reinvestigation of this family in showed the factor v r q mutation in the propositus, his daughter, an asymptomatic sister and a brother with postoperative pulmonary embolism (pe). his father had had postoperative pe; he is deceased and could not be examined. [pedigree figure; legend symbols denote: plasminogen deficiency, factor v r q mutation, propositus, history of dvt and/or pe, superficial phlebitis, not investigated, deceased.] even though this family is small for establishing an unequivocal association of te with known defects, the two most severely affected individuals with recurrent te at ages < yrs had combined plasminogen deficiency and apc resistance, whereas those with isolated plasminogen deficiency were asymptomatic. these data support the concept of multigenic interactions leading to familial thrombophilia. resistance to activated protein c (apc resistance) is the most common risk factor for venous thrombosis (vt). in most cases apc resistance is caused by a single point mutation at position arg in the factor v gene (factor v leiden). while ample data in heterozygous patients have been published, reports in homozygous patients are limited. we studied patients ( males [m], females [f]) in whom a homozygous mutation had been verified by dna analysis. the median age at the time of the study was . years (y) (range - y). twenty-five patients had experienced vt ( m, f). four patients were discovered during family studies and were asymptomatic; three were children (between and y) and one patient was a y old man. in males the first thrombosis occurred at a median age of y (range - y), in females this was at a significantly younger median age of y (range - y).
the sites of thrombosis were dvt in males and females, dvt and pulmonary embolism (pe) in females and male, dvt and caval vein thrombosis in female and superficial thrombophlebitis in males and female. eight females had at least one pregnancy, in total children and abortions. two had thrombotic events during pregnancy and after delivery. all homozygous patients showed apc ratios between . and . (mean . ± . ). conclusion: patients with homozygous fv leiden have similar clinical symptoms as patients with deficiencies of antithrombin, protein c or protein s. however, in contrast to these defects, a very high risk during oral contraceptive medication leading to an earlier manifestation in females can be observed. several synthetic (efegatran, argatroban, inogatran and napsagatran) and recombinant (hirudin, peg-hirudin and hirulog) antithrombin agents are in different stages of clinical development for cardiovascular and thrombotic indications. while the specificity of these agents for thrombin is a concern, little has been done to study the effects of these agents on other serine proteases involved in coagulation and fibrinolytic processes. fibrinolytic compromise by site-directed thrombin inhibitors has been reported recently (thromb res ( ): - , ). while these agents have been shown to inhibit plasmin and related enzymes, little or no information on their effects on the generation and functionality of apc is available. since apc plays an increasingly important role both as an anticoagulant enzyme, by inhibiting factors v and viii, and as a profibrinolytic enzyme, by stimulating the release of t-pa from endothelial sites, an inhibition of apc may result in both a procoagulant state and a fibrinolytic deficit. representative thrombin inhibitors (dup , a prototype boronic acid peptide derivative, efegatran, argatroban, hirulog, hirudin and peg-hirudin) have been compared for their ability to inhibit apc (american red cross). these biochemically defined studies, in which the remaining activity of apc after incubation with a thrombin inhibitor was determined spectrophotometrically with a chromogenic substrate (s- , pharmacia, franklin, oh), demonstrated that dup and efegatran inhibit apc in a concentration-dependent manner (ic = . and µm, respectively), hirulog inhibits apc weakly ( µm produced only % inhibition), while argatroban and hirudin have no anti-apc activities. while hirulog, hirudin and argatroban produced no direct anti-apc activities, it is conceivable that they may inhibit thrombomodulin-bound thrombin and thus prevent activation of protein c, resulting in a functional apc deficit and failure to improve clinical outcomes despite higher dosage. while initially it was thought that sole targeting of thrombin would provide monospecific anticoagulant agents devoid of some of the adverse effects observed with heparin, the recent clinical trials clearly suggest that thrombin is not the only determinant of thrombogenesis. furthermore, potent antithrombin agents such as hirudin, hirulog and peptides indirectly inhibit the generation of apc by compromising thrombomodulin-bound thrombin, and such agents as efegatran and dup also produce direct apc inhibition. endogenous inhibition of formed apc by thrombin inhibitors may therefore compromise the feedback regulatory functions of apc and may lead to thrombotic amplification in fully anticoagulated patients. these studies warrant preclinical assessment of thrombin inhibitors to evaluate their relative inhibitory effects on apc.
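for orientation, here is a minimal sketch of how an ic value falls out of such a chromogenic residual-activity assay: activities are normalized to an uninhibited control and the %-inhibition point is interpolated between the two bracketing inhibitor concentrations. the log-linear interpolation is our choice, and all concentrations and readings are invented placeholders, not the study's data.

```python
# minimal sketch: IC50 interpolation from residual APC activity.
# "rates" are chromogenic-substrate cleavage rates (e.g. dA405/min),
# expressed as fractions of an uninhibited control; numbers are invented.

import math

concentrations_um = [0.0, 0.1, 0.3, 1.0, 3.0, 10.0]   # inhibitor, micromolar
rates = [1.00, 0.92, 0.78, 0.55, 0.31, 0.12]          # fraction of control

def ic50(conc, act):
    """log-linear interpolation of the concentration giving 50 % inhibition."""
    pairs = list(zip(conc, act))
    for (c0, a0), (c1, a1) in zip(pairs, pairs[1:]):
        if a0 >= 0.5 >= a1 and c0 > 0:
            f = (a0 - 0.5) / (a0 - a1)  # position of 0.5 between the two points
            return math.exp(math.log(c0) + f * (math.log(c1) - math.log(c0)))
    return None  # 50 % inhibition not reached in the tested range

print(f"estimated IC50 = {ic50(concentrations_um, rates):.2f} uM")
```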
poor anticoagulant response to activated protein c (apc resistance) causes a significant portion of deep vein thrombosis (dvt), whereas its association with coronary artery disease (cad) and myocardial infarction (mi) is still controversial. therefore, we investigated recently hospitalised patients suffering from cad with or without previous mi. the cad was proven by coronary angiography. apc resistance was analysed by using the aptt-based assay coatest apc resistance (chromogenix). eleven patients showed an apc sensitivity index below . , viewed as apc resistance. using pcr technology, the factor v mutation causing apc resistance (g → a) has been shown in nine of these eleven patients. this represents . % ( / ), compared to . % found in healthy blood donors ( / ). one homozygous carrier (male, age ) was identified (apc sensitivity index . ) who suffered from dvt at age . recent angiography demonstrated diffuse cad; no thrombotic events were reported in his family. in contrast, multiple thrombotic manifestations (dvt, mi, stroke) occurred in the relatives of four heterozygous patients. we conclude that the prevalence of apc resistance is rather low in patients with cad. nevertheless, the natural history of coronary manifestation of apc resistance seems to vary, probably depending on the presence and severity of cardiovascular risk factors. resistance to activated protein c (apc resistance) is the most common hereditary cause of thrombophilia and is significantly linked to factor v leiden. pcr-based methods are used to identify the crucial point mutation in the factor v gene. we designed primers in order to identify factor v leiden by allele-specific pcr amplification. amplification specificity for factor v was ensured by the primer fv , located at the intron/intron border of the gene. one sense and two antisense primers were used in two separate primer mixes specific for factor v arg (wild-type) or factor v gln (factor v leiden), yielding bp products each. in each pcr reaction a pair of primers amplifying a fragment of the human growth hormone gene was included, functioning as an internal positive amplification control ( bp pcr fragment). after an initial denaturation step, µl samples ( ng genomic dna) were subjected to two-temperature cycles followed by three-temperature cycles. for visualisation, µl of the amplification product were run on a % agarose gel prestained with ethidium bromide. the presence or absence of specific pcr amplification allowed definite allele assignment without the need for any postamplification specificity step. the internal positive control primers indicate a successful pcr amplification, allowing the assignment of homozygosity. in a prospective study, patients with thromboembolic events were analysed using this technique and compared with pcr-rflp according to bertina et al. the concordance between these techniques was %. in patients a heterozygous factor v gln mutation was detected, whereas one patient with recurrent thromboembolism was homozygous. no false-positive or false-negative results were observed in the homozygous as well as heterozygous samples. in addition, in samples identified to carry the point mutation by allele-specific pcr amplification, automatic sequencing confirmed the heterozygous or homozygous point mutation. due to its time- and cost-saving features, allele-specific amplification should be considered for screening of factor v leiden.
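the genotype call from such a two-reaction, allele-specific setup reduces to boolean logic on the observed bands. the sketch below assumes band presence has already been scored from the agarose gel; the function and variable names are ours, not the authors'.

```python
# minimal sketch: calling factor V Leiden genotypes from two allele-specific
# PCR reactions (wild-type mix, Leiden mix), each carrying an internal
# positive control band (here: the human growth hormone fragment).

def call_genotype(wt_band: bool, wt_ctrl: bool,
                  leiden_band: bool, leiden_ctrl: bool) -> str:
    # without both control bands, an absent allele band is uninterpretable
    if not (wt_ctrl and leiden_ctrl):
        return "PCR failure - repeat"
    if wt_band and leiden_band:
        return "heterozygous FV Leiden"
    if leiden_band:   # Leiden product only, wild-type reaction worked but is empty
        return "homozygous FV Leiden"
    if wt_band:       # wild-type product only
        return "wild-type"
    return "no allele product despite controls - repeat"

print(call_genotype(True, True, True, True))    # heterozygous FV Leiden
print(call_genotype(False, True, True, True))   # homozygous FV Leiden
```

this is why the internal control matters: homozygosity is only assignable when the empty reaction is proven to have amplified.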
background: an initial intravenous course of unfractionated heparin adjusted on the basis of the activated partial thromboplastin time is the current standard treatment for most patients with venous thrombosis. low-molecular-weight heparin preparations can be administered subcutaneously, once or twice daily, without laboratory monitoring. we compared the relative efficacy and safety of low-molecular-weight heparin versus unfractionated heparin for the initial treatment of deep venous thrombosis. methods: english-language reports of randomized trials were identified through a medline search ( through ) and a complementary extensive manual search. reasons for exclusion from the analysis were no heparin dosage adjustments, the lack of use of objective tests for deep venous thrombosis, dose-ranging studies that used higher doses of low-molecular-weight heparin than are currently in use, and the failure to provide blind endpoint assessment. we assessed the incidence of symptomatic recurrent venous thromboembolic disease, the incidence of clinically important bleeding, and mortality. results: twelve of the identified trials satisfied the predetermined criteria. the relative risk reductions for symptomatic thromboembolic complications, clinically important bleeding, and mortality varied from . % and were all statistically significant in favor of low-molecular-weight heparin. conclusions: low-molecular-weight heparins administered subcutaneously in fixed doses adjusted for body weight and without laboratory monitoring are more effective and safer than adjusted-dose standard heparin. since low-molecular-weight heparins vary in composition and pharmacological profile, the benefits of each preparation should be established separately.
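as a side note on the arithmetic behind the reported relative risk reductions: they derive from the event rates in the two treatment arms. the sketch below shows the computation with invented counts, not data from the meta-analysis.

```python
# minimal sketch: relative risk (RR) and relative risk reduction (RRR)
# from event counts in two arms; all counts are invented placeholders.

def relative_risk(events_t: int, n_t: int, events_c: int, n_c: int) -> float:
    """risk in the treatment arm divided by risk in the control arm."""
    return (events_t / n_t) / (events_c / n_c)

events_lmwh, n_lmwh = 30, 1500   # recurrent VTE under LMWH (hypothetical)
events_uh, n_uh = 50, 1500       # recurrent VTE under UH (hypothetical)

rr = relative_risk(events_lmwh, n_lmwh, events_uh, n_uh)
rrr = 1.0 - rr
print(f"RR = {rr:.2f}, RRR = {100.0 * rrr:.0f} %")   # RR = 0.60, RRR = 40 %
```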
unfractionated heparin (uh) and low-molecular-weight heparin (lmwh) are widely used for the prevention and treatment of thrombotic disorders. uh and lmwh induce platelet aggregation in vitro. rgd peptides compete with fibrinogen for the binding to the glycoprotein receptor (gp iib-iiia) of platelets and inhibit platelet aggregation. to inhibit the heparin-induced platelet aggregation and prolong the half-life in blood of rgd peptides, we linked ac-rgdv-ssggs-ahx-yk covalently to lmwh in a ratio : . the peptide is composed of three regions: a. rgd gives the specificity for the receptor gp iib-iiia; b. ssggs-ahx is the spacer between carrier and ligand, which should facilitate the interaction between the conjugate and the gp iib-iiia receptor; c. yk are functional amino acids for iodination (y) and for covalent attachment (k) to the carrier lmwh. the aggregation achieved with different concentrations of lmwh, lmwh conjugate and lmwh/rgd-peptide mixture in a ratio : was measured after min; maximum aggregation after platelet activation with µm adp was set equal to %. platelet aggregation in normal human platelet-rich citrated plasma (prp; /µl) was induced by lmwh in a dose-dependent manner. heparin can induce antibodies which interact with platelets and endothelial cells. this causes thrombocytopenia and thromboembolic complications. hit patients do need effective parenteral anticoagulation. we treated patients ( m, f), median age years ( - ), with laboratory-proven hit (hipa test) with recombinant (r-)hirudin. as these patients had been preselected by their immunological response during heparin treatment, and the treatment duration of the study was longer than in any other study using r-hirudin, all patient samples were investigated for anti-r-hirudin antibodies. hirudin antibodies were screened by a sandwich elisa using r-hirudin fixed to the solid phase as antigen. all plasma samples were screened for anti-hirudin antibodies of the igg class, but so far only a subset of samples for ige anti-r-hirudin antibodies. of patients ( . %) developed anti-hirudin antibodies of the igg class. anti-hirudin antibodies were not detectable before days of r-hirudin administration. so far no ige anti-hirudin antibodies were found. none of the patients developed thrombocytopenia or allergic symptoms. however, in a subset of patients the anti-hirudin antibodies enhanced the anticoagulatory effect of r-hirudin. in patients the hirudin dosage had to be decreased by - fold to maintain a stable aptt level; in patients, despite a stable r-hirudin maintenance dose, the aptt increased to values > sec. during the study, patients with anti-hirudin antibodies had to be reexposed to a second course of r-hirudin for parenteral anticoagulation; none of these patients developed any allergic reaction. in conclusion, we found a high proportion of anti-hirudin antibodies in hit patients treated with r-hirudin for more than days. these antibodies seem to have minor clinical relevance in regard to allergic reactions. however, one has to consider that these antibodies may influence the pharmacokinetics of r-hirudin and thereby enhance its anticoagulatory potency. therefore, aptt must be monitored closely in patients receiving r-hirudin for more than days. a major concern in the use of hirudin, the most potent and specific thrombin inhibitor, is the risk of bleeding associated with the potential effect of this drug on hemostasis, particularly when the antithrombotic therapy is combined with invasive procedures, fibrinolytic treatment, or a patient's predisposition to abnormal bleeding. thus, availability of an antagonist to hirudin would be essential for instant neutralization of the antithrombotic action. however, such a hirudin antagonist is unknown in nature. to prepare an antagonist to hirudin, a mutant derivative of human prothrombin, in which the active-site aspartate at position is replaced by an asparagine, has been designed, expressed in recombinant chinese hamster ovary cells, and purified to homogeneity. d n-prothrombin was converted to the related molecules d n-meizothrombin and d n-thrombin by limited proteolysis by e. carinatus venom and o. scutellatus venom, respectively. both d n-thrombin and d n-meizothrombin exhibited no thrombin activity, and titration resulted in no detection of the active site. however, binding to solid-phase immobilized hirudin and fluorescence studies confirmed that the binding to the most specific thrombin inhibitor, hirudin, was conserved in both proteins. in vitro examinations showed that d n-thrombin and d n-meizothrombin bind to immobilized hirudin, neutralize hirudin both in the purified system and in human blood plasma, and re-activate the thrombin-hirudin complex. animal model studies confirmed that d n-thrombin and d n-meizothrombin act as hirudin antagonists in blood circulation without detectable effects on the coagulation system. while i.v.
injections of hirudin in mice resulted in an increase in partial thromboplastin time, thrombin time and antithrombin potential, additional injections of d n-thrombin and d n-meizothrombin resulted in a normalization of these coagulation parameters. elevation of plasma homocysteine is a hereditary disorder of methionine metabolism associated with a high risk of arterial vascular disease. however, as yet relatively little attention has been directed towards the association between hyperhomocysteinemia and juvenile venous thromboembolism (vte). consequently, the aim of our study was to evaluate the prevalence of hyperhomocysteinemia (hyper-hcys) in juvenile vte. patients: patients ( men, median age ys; women, median age ys) who had at least one verified episode of vte before the age of ys were investigated in regard to their total plasma hcys levels. none of the patients had renal or liver dysfunction or evidence of any autoimmune or neoplastic disease. methods: plasma total homocysteine levels were determined by hplc with fluorescence detection. hyperhomocysteinemia was defined as hcys levels exceeding the upper limit of the normal range obtained in our laboratory from healthy control subjects ( males, median age ys, hcys % ci: . - . µmol/l; females, median age . ys, hcys % ci: . - . µmol/l). results: out of patients had hyper-hcys, giving a prevalence of . %. of these patients, were male and female, indicating that the relation between elevated plasma hcys levels and vte may not be as strong in women as in men. discussion: in accordance with previous reports, our study shows that there is a high prevalence of hyper-hcys in patients with juvenile vte. however, the mechanisms by which hyper-hcys can provoke vte, and whether hcys is an exclusive risk factor or contributes to other existing predispositions, possibly working as a trigger factor, are as yet unknown. some authors suggest a hcys-induced effect on factor v activation or an inhibition of thrombomodulin-dependent protein c activation. in addition, an influence on thrombocyte aggregation has been postulated. conclusion: measurement of hcys levels may be useful in the evaluation of patients with a history of juvenile venous thromboembolism and could be clinically important, as hyper-hcys is easily corrected by vitamin supplementation. detailed determination of the pathogenesis of vte in patients with hyper-hcys should be the aim of further investigations.
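since the definition of hyper-hcys above rests on a reference limit derived from healthy controls, here is a minimal sketch of that derivation and of the resulting prevalence count. the control values, the parametric 95 % bound, and the patient values are all invented placeholders; the study's actual limits are given as confidence intervals whose digits are not preserved in this text.

```python
# minimal sketch: upper reference limit from controls, then flagging
# hyperhomocysteinemia in patients; values are invented (umol/L).

import statistics

controls = [6.1, 7.4, 8.0, 8.8, 9.3, 9.9, 10.4, 11.0, 11.8, 12.9]  # healthy
mean = statistics.mean(controls)
sd = statistics.stdev(controls)
upper_limit = mean + 1.96 * sd   # parametric bound; assumes roughly normal data

patients = [9.5, 14.2, 22.7, 8.3, 17.9]
hyper = [h for h in patients if h > upper_limit]
print(f"upper limit = {upper_limit:.1f} umol/L, "
      f"prevalence = {100.0 * len(hyper) / len(patients):.0f} %")
```

in practice, sex-specific limits (as in the abstract) or non-parametric percentiles would be used when the control distribution is skewed.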
a deficiency of one of the coagulation inhibitors antithrombin (at), protein c (pc) or protein s (ps) and resistance to activated protein c (apc resistance) are established risk factors for venous thromboembolism (vte). in the majority of patients with apc resistance, the arg gln mutation (factor v leiden) is present. whereas deficiencies of one of the coagulation inhibitors are rare in the normal population, the allele frequency of factor v leiden is - % in western europe. heterozygous individuals have a - -fold, homozygous individuals an -fold increased risk for vte. the typical clinical features of all abnormalities are deep vein thrombosis, pulmonary embolism, superficial vein thrombosis and thrombosis at unusual sites, like mesenteric vein thrombosis or cerebral vein thrombosis. the thrombotic risk is low during childhood but increases considerably after the th year of age. a retrospective study in adult patients out of families with a symptomatic deficiency of at, pc or ps revealed that around % of surgical interventions and traumas of the lower extremities were complicated by vte. therefore, these patients should receive thrombosis prophylaxis after surgery and trauma if their age is higher than years. pregnancy is associated with a very high risk for vte in individuals with at deficiency, and prophylaxis should be initiated already in the first trimester. after delivery, thrombosis prophylaxis is advised for all females known to have an abnormality. oc increase the risk, especially in at-deficient and in homozygous factor v leiden females, and are therefore contraindicated in these individuals. oc do also increase the risk for vte in patients heterozygous for factor v leiden, and females known to have this abnormality should be discouraged from taking oc or should at least be informed of their increased risk. university hospital, jerusalem, israel; hospital beaujon, paris, france. increased frequency of thromboembolic events has been observed in patients with β-thalassemia. our findings of shortened platelet survival and enhanced urinary excretion of thromboxane a metabolites (blood : ) suggested an increased platelet activation in these patients. we also found that isolated thalassemic rbc enhance prothrombin activation, suggesting an increased membrane exposure of procoagulant phospholipids, i.e., phosphatidylserine (am j. hematol. : , ). we now show that annexin v, which has a high specificity and affinity for anionic phospholipids, inhibits prothrombin activation by factor xa by binding to thalassemic rbc (ic = . nm). kerckhoff-klinik, bad nauheim; medizinische poliklinik bonn; institut für immunologie und transfusionsmedizin, universität greifswald. antibody-mediated intravascular platelet activation is believed to be the basis for both arterial and venous thrombosis in patients with hat. while the development of arterial thrombosis can be explained sufficiently by intravascular platelet activation, it is a matter of discussion whether additional risk factors are involved in the pathogenesis of hat-related venous thrombosis. since resistance to activated protein c (apc) is the most common inherited risk factor for venous thrombosis described, the frequency of apc resistance among a population of hat patients has been studied. hat was diagnosed using the heparin-induced platelet aggregation assay and confirmed by the c-serotonin-release test. the diagnosis of apc resistance was established by two functional assays and genetic analysis. at the time of diagnosis of hat, patients showed venous thromboembolic complications. among these, were found positive for apc resistance. pulmonary embolism was diagnosed in hat patients, of them apc-resistance positive. none of the hat patients who showed exclusively thrombocytopenia were apc-resistance positive. early oral anticoagulation (oa) was initiated in patients after the diagnosis of hat had been established. six of these patients developed serious thrombotic complications including skin necrosis. these results demonstrate that apc resistance is an additional and common risk factor for the development of hat-related venous thrombosis. early initiation of oa during an acute episode of hat dramatically increases the risk of thrombosis. therefore, oa in hat patients should be initiated only after platelet counts have returned to baseline levels and effective parenteral anticoagulation is achieved.
bertel~ consorzio mario negri sud, santa maria imbaro, italy this presentation will review the antithrombotic treatments to prevent ischemic stroke that have been evaluated in controlled clinical trials. in two studies of aspirin therapy for pdmary prevention in male physicians there was no reduction in the incidence of stroke, while that of first myocardial infarction was significantly lowered. similar results were obtained in a prospective study in a large cohort of women taking aspirin daily. the incidence of vascular death was not modified by aspirin in any of these trials. this is possibly due .to an excess of strokes associated to aspirin treatment: indeed the four vascular events avoided in us physicians under aspirin prevention for five years would result from five myocardial infarction and one vascular death avoided and two additional strokes occurred. oral anticoagulant therapy decreases the relative risk of stroke in patients with non valvular atrial fibrillation. warfarin appears to be superior to aspirin, but the latter drug is a useful alternative when long-term anticoagulant therapy cannot be administered. a metanalysis of about trials and over , patients with different vascular diseases treated with aspirin (at different doses) and/or other platelet inhibitors showed % overall reduction of vascular events including stroke. the optimal dose of aspirin for secondary stroke prevention could not be established. in patients with previous minor strokes or tia there was % reduction of vascular events and % of non fatal strokes. the avoidance of nine strokes of any cause among the expected in patients at risk would result from the sum of ischemic events avoided and a haemorrhagic one occurred in excess. ticlopidine was reported to reduce the risk of stroke in two large tdals (one in patients with major stroke), but there is no evidence that it is better or safer than aspirin. we compared the effect of the direct specific thrombin inhibitors, napsagatran (na) and rec. hirudin (rh) with unfractionated heparin (uh) on the further growth of preformed thrombi. as a model of thrombogenesls, an annular perfusion chamber exposing rabbit aortic subendothelium was perfused with native rabbit blood at an arterial wall shear rate ( /s). fibrin and platelet thrombi were allowed to form during a min perfusion period after which the test agents were given iv as a bolus and a continuous infusion ( and pg/kg/min, n= ) and the perfusion continued for min. the control groups were perfused for or rain (n= ). fibrin deposited and platelet thrombi formed on subendothelium were evaluated by microscopic morphometry. the % surface coverage with fibrin was not reduced in the drug-treated groups since fibrin deposition was similar in the and min control groups ( + % and : %, respectively, mean:l:sem). platelet thrombus area (ta) in the control groups increased from + pm /pm after min to + pm /lim after rain perfusion. na at g/kg/min reduced ta by % to values ( +_ ptm / ~m) lower than those of the min control group whereas rh at this dose reduced ta by % ( -j: .tm /i.tm). uh at both doses was ineffective. these findings show that in contrast to uh the direct thrombin inhibitors na and rh inhibit the growth of preexisting thrombi. these results could be explained by the higher potency of na and rh as compared to uh for inhibiting clotbound thrombin (gast et al., blood coagul fibrinol , .' 
- ) and suggest that thrombus-bound thrombin is an important modulator of platelet thrombus growth and/or stability in this thrombosis model. platelet adhesion, the initial event of thrombosis, is believed to be completely prevented by intact endothelium. we challenged this theory by superfusing intact human umbilical vein endothelial monolayers with activated human platelet-rich plasma utilizing the stagnation point flow adhesio-aggregometer (spaa). the spaa provides flow-mediated contact of platelets with the superfused surface. heparinized ( . - . u/ml) platelet-rich plasma (prp) was obtained from healthy volunteers and activated by addition of adenosine diphosphate (adp, - m). platelet deposition was recorded on-line by video as well as by measuring scattered light. fixed samples were examined by phase contrast and electron microscopy. inhibition experiments were performed with either the tetrapeptide rgds, the non-peptide gpiib/iiia inhibitor ro- - , or a monoclonal antibody directed against the gpiib/iiia complex. stimulation with adp prompted platelets to adhere to intact endothelium singly or as microaggregates of a diameter of up to micrometer. adhesion was dependent upon convective transport resulting in platelet collision with the endothelial monolayer. infusion of rgds or ro- - into the flowing, adp-stimulated prp completely prevented platelet adhesion to the endothelium as well as subsequent aggregation. when the inhibitor inflow was stopped while adp stimulation persisted, adhesion and aggregation occurred immediately. re-establishing the inflow of the inhibitors, with still continued adp stimulation, led to disintegration of the adhering aggregates. when prp preincubated with the monoclonal antibody against gpiib/iiia was superfused, platelet adhesion to the endothelium and aggregation were irreversibly blocked. our results suggest that convective transport and stimulation of platelets are prerequisites to overcome endothelial thromboresistance and that subsequent platelet adhesion to the endothelium is mediated via the platelet gpiib/iiia receptor complex. prevent thrombus formation after ptca. i.p. stepanova, g.v. bashkov, l.p. kapralova, s.p. domogatsky; cardiology research center and national haematology scientific center, russian academy of medical sciences, moscow, russia. percutaneous transluminal coronary angioplasty (ptca) results in atherosclerotic plaque rupture, vascular wall damage and thrombogenic collagen exposure. subendothelial collagen type i-iii is a very strong agonist of platelet-dependent thrombus formation in arteries. the antithrombotic action of rabbit polyclonal antibodies to rat collagen type i-iii and their chemically synthesized conjugate with monoclonals to human recombinant two-chain/one-chain urokinase-type plasminogen activator (rtcu-pa/rscu-pa), cross-reacting with rat tcu-pa/scu-pa, was studied both in vitro and in vivo. anticollagen antibodies and the bispecific conjugate inhibited human platelet adhesion, aggregation and formation of thrombi-like structures induced by rat collagen immobilized on the polystyrol surface under conditions mimicking the high shear rate in the large elastic-type arteries. the short-term treatment of the collagen-soaked silk thread by the collagen antibodies suppressed the platelet-dependent thrombus formation in the arterio-venous shunt in rats by ± % (p< . ), as well as by the bispecific conjugate ( ± %, p< . ).
the treatment of the collagen-adsorbed conjugate by rtcu-pa did not increase the antithrombotic effect of the bifunctional antibodies. the present data suggest that the local administration of anticollagen antibodies at the site of atherosclerotic plaque rupture may be an efficient tool for prophylaxis of platelet-dependent thrombus formation in arteries after ptca. increased levels of certain hemostatic factors have been shown to be related to an increased risk of cardiovascular events. hypercoagulability is suggested to predispose to arterial thrombosis and thereby to participate in atherogenesis. we therefore assessed fibrinogen, prothrombin fragment 1+2 (f1+2) and von willebrand factor (vwf) antigen in consecutive patients (aged ± years) with known coronary artery disease (cad) who all underwent coronary angiography. the extent of coronary artery disease was quantified according to modified criteria of the american heart association (total, proximal and distal "score"). furthermore, the intima-media thickness (imt) was determined in the carotid and femoral arteries by standardized ultrasonographic measurement. vwf antigen was found to correlate positively with the total and proximal coronary score (r= . , p< . and r= . , p< . ). while f1+2 showed no correlation with the coronary scores, it was significantly correlated with the imt values in the carotid arteries (r= . , p< . ). after differentiating tertiles of the parameters, patients belonging to the upper tertile of f1+2 concentrations had significantly higher imt values of the carotid and femoral arteries ( . ± . mm vs. . ± . mm in the lower tertile, p< . ; . ± . mm vs. . ± . mm, p= . ), whereas in patients belonging to the upper tertile of vwf antigen concentrations the proximal coronary artery score was significantly higher ( . ± . vs. . ± . in the lower tertile, p< . ). no correlation of fibrinogen concentrations and extent of cad or imt values of the carotid and femoral arteries could be demonstrated. in conclusion, procoagulatory mechanisms, as indicated by elevated concentrations of von willebrand factor antigen and f1+2, may be contributing factors in atherogenesis. we have previously shown that pge1 is a potent inhibitor of pdgf-induced proliferation of vascular smooth muscle cells (vsmc) and inhibits dna replication by a camp-related mechanism (großer et al., ). the present study investigates whether or not this antimitogenic activity of pge1 can be amplified by trapidil, a compound that has recently been shown to reduce the incidence of restenosis of human coronary arteries subsequent to ptca (maresta et al., ). vsmc were prepared from coronary arteries of adult bovine hearts, passaged and kept under standard tissue culture conditions. cells of passage - were incubated in serum-free medium for h in the presence of indomethacin ( µm). addition of pdgf-bb ( ng/ml) under these conditions stimulated dna replication, as assessed from ³h-thymidine incorporation, by - -fold above control level. trapidil at µm caused a minor reduction of pdgf-induced mitogenesis, whereas µm of the compound resulted in a marked reduction of dna replication by % (p < . , n = ). pge1 at . nm diminished the incorporation rate by % while the simultaneous administration of both pge1 and trapidil ( µm) caused a significantly stronger response, as seen from a reduction of the ³h-thymidine incorporation rate by % (p < . , n = ). as a possible mechanism of action, trapidil might have inhibited phosphodiesterases.
to establish this, we measured the camp-dependent protein kinase (pk) a activity in cell homogenates. trapidil increased the basal pka activity from % to % of the maximum response, while the response to pge1 ( nm) amounted to %. coincubation of pge1 with trapidil caused a % stimulation of pka activity, suggesting a small though detectable inhibition of vsmc phosphodiesterases by trapidil at antimitogenic concentrations. essentially similar results were obtained when thrombin was used as the mitogenic agent. the data demonstrate a significant antimitogenic effect of trapidil at µmolar concentrations that are in the range of plasma levels after therapeutic administration of the compound in vivo. at these concentrations, pge1-induced inhibition of mitogenesis is markedly enhanced by trapidil. vienna, and central hematology laboratory, university hospital of bern. fibrinogen (fg), von willebrand factor antigen (vwf) and tissue-type plasminogen activator antigen (t-pa) have recently been shown to be independent risk factors for subsequent coronary events in patients with angina pectoris (nejm ; : ). although pai-1 antigen has also been proposed as a risk factor, conclusive data showing its predictive value are still lacking. furthermore, we have recently shown in a study investigating survivors of myocardial infarction that not only are fg, t-pa and pai-1 significantly increased in these patients when compared to a healthy control group, but pci activity is also elevated (thromb. haemost. ; : abst.). in order to obtain cut-off points for the individual parameters, frequency histogram plots were transformed into straight-line cumulative frequency (probit) plots (thromb. haemost. ; : ). the cut-off values for the four parameters were determined as follows: fg at . g/l, t-pa at . ng/ml, pai-1 at ng/ml and pci at % of a normal pooled plasma. utilising these cut-off points it was then possible to determine the cumulative discriminatory effectiveness of the parameters. when fg was employed alone as the discriminatory factor, it was observed that % ( ) of the coronary heart disease (chd) group either had the cut-off value or were below it and % ( ) of the normal group were above the cut-off value, thus resulting in % false negatives and % false positives. when a second additional risk factor, t-pa, was introduced, the number of false negatives dropped to % [i.e. % ( / ) had two risk factors elevated] and the number of false positives to %. to investigate whether a third parameter could discriminate further, pai-1 antigen was used to analyse the remaining false positives and negatives. an additional % could be detected, resulting in % of the chd group having three risk factors elevated. similarly, the number of normal subjects with three parameters elevated dropped by % to %. furthermore, when a fourth parameter was introduced, namely pci, it was found to discriminate a further % in the chd group, thereby increasing the discrimination to %. the number of false positives dropped to %. additionally, determination of pci increased the discrimination of patients having had multiple infarctions from % when three parameters were measured to %. from these results it can be concluded that determination of fibrinogen levels alone is not sufficient to separate patients from controls, as t-pa adds significant discrimination. pai-1 antigen, which correlated strongly with t-pa, did not significantly increase the discriminatory potential of both fg and t-pa. however, by employing pci as a fourth parameter, virtually complete separation between the chd and normal groups as well as further recognition of patients having had multiple infarctions could be obtained.
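the stepwise discrimination described above is essentially a cascade of threshold tests: a subject counts as flagged at level k when at least k parameters exceed their cut-offs. the sketch below shows that counting logic; the parameter names follow the abstract, but the cut-offs and subject values are invented placeholders.

```python
# minimal sketch: cumulative discrimination by stacking biomarker cut-offs.
# all cut-offs and subject values are invented placeholders.

cutoffs = {"fg": 3.0, "t-pa": 10.0, "pai-1": 40.0, "pci": 110.0}

def n_elevated(subject: dict) -> int:
    """number of parameters above their cut-off for one subject."""
    return sum(subject[p] > c for p, c in cutoffs.items())

chd = [{"fg": 3.5, "t-pa": 12.0, "pai-1": 55.0, "pci": 130.0},
       {"fg": 2.8, "t-pa": 11.0, "pai-1": 38.0, "pci": 120.0}]
normals = [{"fg": 2.6, "t-pa": 8.0, "pai-1": 30.0, "pci": 95.0}]

for k in range(1, len(cutoffs) + 1):
    tp = sum(n_elevated(s) >= k for s in chd)       # flagged patients
    fp = sum(n_elevated(s) >= k for s in normals)   # flagged controls
    print(f">= {k} elevated: {tp}/{len(chd)} chd, {fp}/{len(normals)} normals")
```

raising k trades false positives for false negatives, which is exactly the movement the abstract reports as parameters are added one by one.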
However, by employing PCI as a fourth parameter, virtually complete separation between the CHD and normal groups, as well as further recognition of patients having had multiple infarctions, could be obtained.

To test the hypothesis that oral contraceptives (OC) enhance exercise-induced activation of blood coagulation, we examined women ( ± (SD) years, BMI . ± . kg/m², VO2max ± ml/kg/min) without OC between day and of the menstrual cycle, and women ( ± (SD) years, BMI . ± . kg/m², VO2max ± ml/kg/min) taking OC ( mg desogestrel and mg ethinylestradiol) between day and of drug intake. Prothrombin fragment 1+2 (PTF1+2) and fibrinopeptide A (FPA) were measured before and after running for one hour on a treadmill at a speed corresponding to the anaerobic threshold. Mean heart rate ( ± vs. ± /min) and mean plasma lactate ( . ± . vs. . ± . mmol/l) were comparable during exercise between the control and OC groups, respectively. Results for markers of thrombin and fibrin formation were:

                    PTF1+2 (nmol/l)   FPA (ng/ml)
    Control before      . ± .            . ± .
    Control after       . ± .            . ± .
    OC      before      . ± .            . ± .
    OC      after       . ± . *+          . ± . *+

* p < . vs. baseline; + p < . between groups. We conclude that oral contraception with mg desogestrel and mg ethinylestradiol enhances exercise-induced thrombin and fibrin formation. Our data suggest that exercise testing might be useful for evaluating the risk of thrombosis associated with different compositions of OC.

A. Haushofer+, W.M. Halbmayer+, J. Radek+, M. Dittel*, R. Spiel*, H. Prachar*, J. Mlczoch*, M. Fischer+; +Zentrallaboratorium mit Thrombose- und Gerinnungsambulanz, Krankenhaus Lainz; *Medizinische Abteilung mit Kardiologie, Krankenhaus Lainz und Ludwig-Boltzmann-Institut für Herzinfarktforschung, Wien. Fifty-one patients (age . ± . a; m/ ) implanted with coronary stents (Palmaz-Schatz, Gianturco-Roubin, Micro stents) received a new antithrombotic treatment using a combination of ticlopidine (TIC) × mg/d for days and acetylsalicylic acid (ASA) mg/d for long-term treatment. Patients (pat) only received standard heparin as an i.v. bolus immediately before stent implantation (day 1). Side effects and changes in hematological (day 1 to , and without TIC), liver and kidney parameters (day , , , ) were monitored. Thirty-eight pat ( %) came for the controls to our department and were additionally monitored by thromboelastography (TEG) and bleeding time (BT) (day and ). The other pat were monitored externally; side effects were reported. Thrombin generation after stenting was monitored from day 1 to by prothrombin fragment 1+2 (F1+2) and thrombin–antithrombin III complex (TAT). The "k" of the TEG decreased (day vs ; p< . ). BT prolongation was negatively correlated with the body surface area (TIC+ASA: p< . , ASA: p< . ) and showed a reduction after withdrawal of TIC ( sec, / sec [median, quartiles] vs. sec, / sec; p< . ). F1+2 and TAT of day 1 (blood collection , , , h after intervention; F1+2: . nmol/l, . / . nmol/l; TAT: . µg/l, . / . µg/l) were lower compared to days to (F1+2: . nmol/l, . / . nmol/l; TAT: . µg/l, . / . µg/l; p< . ). TIC seems not to be a strong thrombin generation inhibitor. During stenting, one pat ( . %) sustained a non-penetrating MI and one ( . %) an ischaemic stroke. TIC+ASA were very effective; in only one pat ( . %) did (acute) stent thrombosis occur. Side effects: / . % gastrointestinal (one led to hospitalization), / . % hematomas at the needle site in the groin (one surgical intervention), / . % leucopenias (one agranulocytosis with hospitalization), / . % allergic skin reactions and / . % increased liver enzymes (GOT, GPT, γGT, alkaline phosphatase; > × the upper normal limit). In one pat with gastrointestinal disturbances and skin reactions, TIC had to be withdrawn and treatment was changed to oral anticoagulation + ASA. One pat showed a combination of skin reactions, gastrointestinal disturbances and, on day , a heavy reaction of the liver enzymes (normalisation after weeks). A decrease of the white blood count (day : . G/l, . / . G/l; day : . G/l, . / . G/l; p< . ) could be observed. The safety of the therapy with TIC+ASA should be elucidated and extensively discussed.

The serpins C1 esterase inhibitor (C1Inh), antithrombin III (ATIII), α1-antitrypsin (α1AT), and α2-antiplasmin (α2AP) are known inhibitors of coagulation factor XIa (FXIa). Although initial studies suggested α1AT to be the main inhibitor of FXIa, we recently demonstrated C1Inh to be a predominant inhibitor of FXIa in vitro in human plasma (Wuillemin et al., Blood ; : ). The present study was performed to investigate the plasma elimination kinetics of human FXIa–FXIa inhibitor complexes injected in rats. The amounts of complexes remaining in circulation were measured using ELISAs. The plasma t1/2 of clearance was min for FXIa–α1AT complexes, whereas it was , , and min for FXIa–C1Inh, FXIa–α2AP, and FXIa–ATIII complexes, respectively. Thus, due to this different plasma t1/2, preferentially FXIa–α1AT complexes may be detected in clinical samples. This was indeed shown in plasma samples from thirteen children with meningococcal septic shock (MSS), a clinical syndrome which is complicated by activation of the coagulation, fibrinolytic, and complement systems. FXIa–FXIa inhibitor complexes were assessed upon admittance to the intensive care unit. FXIa–α1AT complexes were elevated in all patients, FXIa–C1Inh complexes in nine, FXIa–ATIII complexes in one patient, and no elevated FXIa–α2AP complexes were found. We conclude from this study that (1) although C1Inh is the predominant FXIa inhibitor, FXIa–α1AT complexes may be the best parameter to assess activation of FXI in clinical samples, (2) measuring FXIa–FXIa inhibitor complexes in patient samples may not help to clarify the relative contribution of the individual serpins to inactivation of FXIa in vivo, and (3) FXI is activated in patients with meningococcal septic shock.

During the coagulation of plasma about % of the α2AP present is covalently crosslinked to fibrin by factor XIIIa (Aoki and Sakata, Thromb. Res. : – ). We investigated the binding of α2AP by factor XIIIa to soluble fibrin (desAABB-fibrin) whose polymerization was inhibited by an isolated fibrin D-domain (Haverkate and Tiemann, Thromb. Res. : – ). This D-domain is known to have an intact fibrin polymerization site and is able to block the prolongation of the fibrin protofibrils at an early stage, depending on its concentration. Lateral association to fibrin fibers does not take place, since the inhibited protofibrils formed under the conditions used here do not reach a sufficient length (Williams et al., Biochem. J. : – ; Hantgan et al., Ann. N. Y. Acad. Sci. : – ). Material and methods: soluble desAABB-fibrin was prepared by incubation of radiolabelled fibrinogen ( . mg/ml), the D-domain ( . mg/ml; molar ratio of D-domain to fibrin : ) and . U/ml thrombin for min. Then radiolabelled α2AP ( µg/ml), factor XIII ( U/ml) and Ca²⁺ ( mmol/l) were added.
The crosslinking reaction was stopped at different times of factor XIIIa incubation by adding urea/EDTA solution. The suspension was analysed by ultracentrifugation on gradients containing saccharose, urea and SDS. Results: the elution profiles of the ultracentrifugation gradients show the formation of crosslinked fibrin oligomers of increasing size depending on the time of factor XIIIa action. The crosslinked fibrin polymers contained about % of the fibrin initially added. Although factor XIIIa acted well, crosslinking of α2AP in the fibrin oligomers could not be observed. Conclusion: as we have already demonstrated (Kelach et al., Ann. Hematol. (Suppl.) : A ), the crosslinking of α2AP to fibrin clots depends on the structure of the fibrin network, especially on the degree of lateral association of the fibrin protofibrils. In desAABB-fibrin no lateral association of fibrin protofibrils takes place under the conditions chosen here. Thus it is consistent with our theory that we did not observe any binding of α2AP to the fibrin oligomers of desAABB-fibrin.

Human PCI is a non-specific serpin that inhibits several proteases of the coagulation and fibrinolytic systems as well as tissue kallikrein and the sperm protease acrosin. It is synthesized in many organs including liver, pancreas, and testis. The physiological role of PCI has not been defined yet. Recently, we have cloned and sequenced the mouse PCI gene (Zechmeister-Machhart et al., manuscript in prep.). This enabled us to study PCI gene expression in murine tissues using mouse PCI cDNA and cRNA probes. By Northern blot analysis, mouse PCI mRNA was exclusively found in the reproductive tract (testis, seminal vesicle, ovary); all other organs analyzed, including the liver, were negative for PCI mRNA, indicating that in the mouse PCI is not a plasma protein. To determine which cells of the reproductive tract synthesize PCI, cellular localization was assessed by in situ hybridization of mouse testis and ovary sections. In testis, PCI mRNA was present in the spermatogonia layer and in Leydig cells, while Sertoli cells and peritubular myoid cells were negative. These results are consistent with the immunohistological localization of human PCI (Laurell et al.). In the mouse ovary, stroma cells of the medulla and around the follicles were positive for PCI mRNA. No PCI expression was detected in theca or granulosa cells. We also studied the regulation of mouse PCI gene expression by steroid hormones in vivo. In mature male mice, castration caused an increase in PCI mRNA in seminal vesicles, which was reversible upon the administration of testosterone. In tissues of intact adult male and female mice, PCI mRNA levels decreased after injection of human chorionic gonadotropin (hCG), while in castrated male mice hCG had no effect on seminal vesicle PCI mRNA. Progesterone and β-estradiol decreased ovarian PCI mRNA levels in immature female mice. These data suggest direct down-regulation of mouse PCI by sex steroids. The different tissue-specific PCI gene expression in humans and mice furthermore indicates a different biological role of this serpin in the two species.

Ctr. Transgene Technology, Leuven. Tissue factor (TF) is a kDa glycoprotein mainly known as the primary cellular initiator of blood coagulation. Whether TF expression may also play a role in development is unknown, but the lack of spontaneous viable mutations of the TF gene in vivo leads to the speculation that its absence may not be compatible with normal embryonic development.
To determine the significance of TF in ontogenesis, the pattern of TF expression in mouse development was examined and compared to the TF distribution in human postimplantation embryos and fetuses of corresponding gestational age. At the early embryonic period of both murine ( . and . p.c.) and human (stage ) development there is strong TF expression in both ectodermal and entodermal cells. TF decoration was seen during ontogenetic development in tissues such as epidermis, myocardium, bronchial epithelium, and hepatocytes, which express TF in the adult organism. Surprisingly, during renal development and in the adult organism, TF expression differs between humans and mice. In humans, maturing stage glomeruli were TF positive, whereas in mice glomeruli were negative and instead epithelia of tubular segments were TF positive. In neuroepithelial cells there was a striking TF expression, indicating a possible role of TF in neurulation. Moreover, there was robust TF expression in tissues such as skeletal muscle and pancreas, which do not express TF in the adult. In contrast to TF, its physiologic ligand factor VII was not expressed in early stages of human embryogenesis, but was detectable in fetal liver. The temporal and spatial pattern of TF expression during murine and human development supports the hypothesis that TF serves as an important morphogenic factor during embryogenesis.

To serve as an anticoagulant, protein C (PC) must be activated by a complex formed between the enzyme thrombin (T) and its cofactor thrombomodulin (TM). Therefore, downregulation of endothelial cell surface expressed TM, for example triggered by an inflammatory stimulus, could become a critical factor in effective PC activation. In order to develop a recombinant (r) PC mutant which can be activated independently of the T/TM complex, a peptide sequence including P – in the activation peptide of PC was modified to be identical to the factor Xa (FXa) cleavage site in prothrombin. The mutant was expressed in Hu cells, purified, and its anticoagulant properties characterized. Using purified FXa, the mutant showed activation rates between . and . nM/min at PC concentrations between and nM, while the rPC wild type was insensitive to FXa activation. The activation reaction is calcium-dependent, reaching maximal activation rates at a calcium concentration of mM, and was enhanced up to . -fold by the addition of anionic phospholipids (PL). In contrast to the wild-type PC, the rPC mutant was insensitive to activation by the T/TM complex. Addition of the mutant to normal human plasma induces a prolongation of tissue factor- and PTT-based clotting assays. Using normal human plasma as a source of FXa, the activation rates of the mutant were found to be -fold higher than in the purified system if tissue factor was used to generate FXa. In conclusion, our data demonstrate that the rPC mutant is effectively activated by FXa in a purified as well as in a plasma system. Interestingly, the activation rates are enhanced in the presence of PL and normal human plasma. Further studies should clarify the potential use of this mutant as a novel anticoagulant.

Thrombin plays a pivotal role in thrombotic events. The time course of thrombin concentration in blood or plasma after activation is of special interest to answer a variety of questions. With a chromogenic assay developed by Hemker et al. [Thromb. Haemostas. , , ] it became possible to measure the generation of thrombin in activated plasma continuously.
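As a sketch of this principle: the absorbance of the released pNA tracks cumulative substrate conversion, so a thrombin concentration curve can be estimated from its first derivative. A minimal Python illustration, assuming a hypothetical calibration factor and synthetic data (neither is taken from the assay described above):

    import numpy as np

    def thrombin_curve(t, a405, cal_nm_per_au):
        # The pNA absorbance is proportional to cumulative substrate conversion,
        # so its time derivative tracks the momentary thrombin activity.
        # cal_nm_per_au is a hypothetical calibration factor (nM per AU/s).
        return np.gradient(a405, t) * cal_nm_per_au

    # Synthetic example: a sigmoidal absorbance trace mimicking a thrombin burst.
    t = np.linspace(0.0, 600.0, 121)                  # time (s)
    a405 = 1.0 / (1.0 + np.exp(-(t - 300.0) / 60.0))  # hypothetical absorbance (AU)
    thrombin = thrombin_curve(t, a405, cal_nm_per_au=24000.0)
    print(f"peak ~{thrombin.max():.0f} nM at t = {t[np.argmax(thrombin)]:.0f} s")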
Inhibitors of clotting enzymes which are to be developed as anticoagulants should be able to inhibit thrombin generation or to immediately block generated thrombin. We have used a test based on Hemker's thrombin generation assay to elucidate which potency and specificity an inhibitor of factor Xa needs in order to efficiently block thrombin generation in human plasma. Thrombin generation after extrinsic (tissue factor) or intrinsic (ellagic acid) activation was followed using the chromogenic substrate H-β-Ala-Gly-Arg-pNA (Pentapharm Ltd.). A series of synthetic low molecular weight inhibitors as well as naturally occurring inhibitors of factor Xa with different potency were investigated. Because of the inhibition of activated factor X, the generation of thrombin in plasma is delayed and the amount of generated thrombin is reduced. The concentrations which cause a % inhibition of thrombin generation (IC50) correlate with the Ki values of the inhibitors. Low molecular weight inhibitors with Ki values of about nmol/l inhibit the generation of thrombin after extrinsic activation with IC50 in the micromolar range. After activation of the intrinsic pathway, tenfold lower concentrations are effective. The strongest inhibitory activity after extrinsic as well as intrinsic activation is shown by recombinant tick anticoagulant peptide (r-TAP), with IC50 of . µmol/l (extrinsic) and . µmol/l (intrinsic). In a comparison of synthetic low molecular weight inhibitors of thrombin and factor Xa which have similar Ki values for the inhibition of the respective enzyme (lowest Ki nmol/l), factor Xa inhibitors are less effective in the thrombin generation assay. In contrast, the highly potent Xa inhibitor r-TAP shows a stronger inhibition of thrombin generation than the tight-binding thrombin inhibitor hirudin.

Background: resistance to degradation of coagulation factor V by activated protein C is associated with a point mutation in which adenine is substituted for guanine at nucleotide in the gene coding for factor V. To date this specific mutation appears to be the most common inherited abnormality which predisposes patients to venous thrombosis. For this reason a reliable, fast and automatable system for the diagnosis of the described point mutation is required. The conventional methods used to identify the mutation are based on allele-specific restriction enzyme site analysis or direct sequencing. These methods have disadvantages for large-scale DNA diagnosis, which include the need for electrophoresis or high cost and time consumption. Methods: an alternative strategy of DNA diagnosis, the allele-specific oligonucleotide ligation assay, was adapted for the diagnosis of the point mutation of factor V. Following PCR amplification of the target DNA, the procedure was performed completely automatically on a robotic workstation with an integrated ELISA reader using a -well microtiter plate. Allele-specific restriction enzyme site analysis was performed to confirm the genotypes. Results: in patients with the mutation and in individuals without the mutation, the genotypes determined with the conventional allele-specific restriction enzyme site analysis were in % concordance with the ELISA-based oligonucleotide ligation assay. Discussion: the PCR-oligonucleotide ligation assay, applied as an automated detection system for the identification of the coagulation factor V point mutation, allows the rapid, reliable, and large-scale analysis of patients at risk for thrombosis.
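To illustrate the kind of automated read-out logic such an ELISA-based ligation assay implies (the signal names and cut-off below are hypothetical, not taken from the study): each sample yields one optical density for the wild-type-specific ligation product and one for the mutant-specific product, and the genotype call follows from which signals exceed the cut-off. A minimal Python sketch:

    def call_genotype(od_wildtype, od_mutant, cutoff=0.5):
        # Hypothetical decision rule: a ligation product is scored positive
        # when its ELISA optical density reaches the cut-off.
        wt, mut = od_wildtype >= cutoff, od_mutant >= cutoff
        if wt and mut:
            return "heterozygous"
        if mut:
            return "homozygous for the mutation"
        if wt:
            return "homozygous wild type"
        return "indeterminate - repeat assay"

    print(call_genotype(1.1, 0.1))  # -> homozygous wild type
    print(call_genotype(0.9, 0.8))  # -> heterozygous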
Resistance to the anticoagulant activity of activated protein C (APC resistance) has emerged as the most common inherited thrombophilic state. Patients heterozygous for factor V Leiden are more likely to suffer from thromboembolic events than controls. This risk is even more pronounced in homozygotes. Due to the low sensitivity and specificity of most coagulation tests, some investigators suggest examining patients for the presence of the factor V Leiden mutation by PCR-based methods. Recently we presented an APTT-based functional test (accelerin inactivation test, AIT): diluted plasma ( µl) is mixed with factor V deficient plasma ( µl) and APTT reagent ( µl), incubated at °C, and coagulation is then induced by CaCl2 and APC ( µl). Using a standard curve, the clotting time (sec) is transformed into per cent accelerin inactivation (%AI). Using this test, the widely used APC ratio, as well as PCR-based factor V Leiden detection (confirmed by direct sequencing), we prospectively studied consecutive patients with thromboembolic events. Patients without the factor V mutation consistently showed more than % AI; with the exception of one patient with severe factor deficiencies (including factor V) due to hepatic failure, heterozygous for factor V Leiden and resulting in % AI, there was complete concordance between the PCR-based method and the dysaccelerinemia detected by AIT. From these results, a specificity and sensitivity of AIT above % was calculated. Furthermore, a clear discrimination could be observed between heterozygotes ( %, to < years; > to < years) and a normal population of children. The mutation G→A was found with an unexpectedly high prevalence of % in our normal controls. However, the prevalence was significantly higher in the age groups to < years ( %) and > to < years ( %). In patients between > and < years, the overall prevalence was similar to the control ( %); however, in patients of this age with spontaneous thrombosis, APCR was also a significant risk factor ( %). Our results emphasize the impact of APCR on thrombogenesis in children; however, the significance is age-dependent and possibly reflects the different physiology of haemostasis in our three age groups.

Activated protein C (APC) resistance is a newly recognised risk factor for thrombosis. In at least % of cases it is caused by a single point mutation in the factor V gene (G→A at nucleotide ), which predicts replacement of arginine with glutamine. One of the APC cleavage sites in factor Va is located C-terminal of arginine , and mutated factor Va (factor V Leiden) is resistant to APC-mediated inactivation. From epidemiologic studies it is known that this abnormality can be found in about one third of patients with thrombosis. APC resistance is a major basis for venous thromboembolism and is prevalent in about . % of the general Caucasian population. Recurrent spontaneous abortion (RSA) affects – % of couples and represents a major concern for reproductive medicine. In spite of extensive endocrine, genetic, serologic and anatomic evaluation, some – % of RSA cases remain unexplained. A frequent morphologic finding in placentae of aborted pregnancies is an increase of fibrin deposition within the intervillous space. Because of these findings we studied the prevalence of APC resistance in women with RSA (more than miscarriages) of unknown origin. In of cases we found a pathologic APC resistance; both patients had a history of recurrent thrombosis and were heterozygous for factor V Leiden.
The prevalence of APC resistance is thus , % and equals the prevalence in the general population. Our data do not support the hypothesis that APC resistance is a risk factor for recurrent spontaneous abortion.

Hämatologisches Zentrallabor der Universität, Inselspital, Bern. Resistance to activated protein C (APC) due to the mutation Arg→Gln of factor V (factor V Leiden mutation) is the most frequent hereditary thrombophilic defect known today, with a prevalence of – % in patients with idiopathic venous thromboembolism and of about – % in the general population. With an allele frequency of %, the expected number of homozygous individuals is about in . Homozygous and heterozygous individuals differ considerably with respect to the relative risk of thrombosis ( -fold versus -fold) as well as to the age of the first thrombotic event ( versus years). Deficiency of the vitamin K dependent protein S (PS), an important cofactor of APC, is another hereditary thrombophilia which is, however, much rarer than APC resistance, with a prevalence of to % in patients with venous thromboembolism. The factor V Leiden mutation as well as PS deficiency are associated with impaired anticoagulatory activity of APC, which is most pronounced in the case of a combination of the two defects. The combination of PS deficiency (with an assumed prevalence similar to that of PC deficiency) with heterozygous or homozygous APC resistance can be expected with a probability of 1 : ~ or 1 : ~ , respectively. It is well known that PS levels decrease towards the low normal or even subnormal range during pregnancy. Moreover, there is increasing evidence that the sensitivity of plasma to the anticoagulatory effect of APC decreases during pregnancy, resulting in an acquired APC resistance. These pregnancy-associated effects are obviously much more relevant in the case of preexisting PS deficiency or APC resistance and should contribute to the elevated thrombotic risk during pregnancy in a subject with either of the two defects, and even more so for a woman who suffers from both defects. We describe a young woman with a combination of homozygous APC resistance (APC ratio . , normal range . – . ), pronounced PS deficiency (free PS . U/l, total PS . U/l; normal ranges . – . U/l and . – . U/l, respectively) and, moreover, impaired fibrinolysis (no change of euglobulin lysis time after min of venous occlusion), who developed deep vein thrombosis after cesarean section in her first pregnancy. Examination of her family showed heterozygous APC resistance in her asymptomatic father (APC ratio . ), a combination of heterozygous APC resistance (APC ratio . ) and PS deficiency (free PS . U/l, total PS . U/l) in her asymptomatic mother, and no defect in her sister. Considering the fact that the mother was still thrombosis-free at the age of , one may assume that the thrombosis risk in the proposita was mainly influenced by the homozygosity for APC resistance.

S. Ehrenforth, M. Adam, B. Zwinge, I. Scharrer; University Hospital, Dept. of Angiology, Frankfurt a.M., Germany. Introduction: APC resistance has been shown to be the most commonly inherited defect constituting a risk factor for venous thrombosis (VT). However, most of the present epidemiological studies concerning APC-R prevalence in thrombophilia were derived from results of tests conducted on plasma collected under various conditions. This may contribute to the great differences reported in the prevalence of APC-R among these patients.
For example, it has been shown that freezing of plasma specimens prior to analysis of APC-R causes a significant decrease in the assay results. The aim of our study was to evaluate the influence of centrifugation conditions on the results obtained with the chromogenic APC-R assay. Patients and methods: blood was collected from patients ( women, men; FV genotype: R/R , R/Q , Q/Q ) by venipuncture into trisodium citrate ( : ). Platelet-rich and platelet-poor plasma was obtained by immediate centrifugation at °C for , , , , , min at , , , , and rpm. In addition, PNP obtained from healthy individuals ( male, female, without hormonal treatment) was prepared in the same way. The APC response was determined within one hour after centrifugation using the Coatest APC Resistance kit from Chromogenix. Results: for both PNP and single plasma samples, we observed continuously higher APC ratios with increasing centrifugation intensity. For example, an increase from to rpm resulted in an increase of the APC ratio from . to . ( min), and from . to . ( min), respectively. Even though less distinctive, similar results were observed concerning the duration of centrifugation: when the duration was increased from to minutes, we observed a continuous increase in the APC ratio, for example from . to . when using rpm and from . to . when using rpm. The decrease of the ratio after low centrifugation is the consequence of the shortening of the APTT in the presence of APC, without a significant influence on the basal APTT without APC. Conclusion: our results demonstrate that centrifugation conditions are important to consider for the interpretation of APC-R results. Supporting our observations, recent studies by Sidelmann et al. have shown that an increase in plasma platelet concentration, i.e. low centrifugation, causes a significant decrease in the APC response. However, so far the mechanism responsible for the significant effect of both on APC-R assay results is unknown. Although technically simple, the biochemical complexity inherent in the chromogenic APC-R assay necessitates a standardized plasma handling procedure to secure a reproducible determination of APC-R.

Comparison of different assays for determination of APC resistance with genotyping of factor V (Arg→Gln). G. Siegert*, S. Gehrisch*, E. Runge**, R. Naumann**, R. Knöfler***; *Institute of Clinical Chemistry, **Clinic of Internal Medicine, ***Childrens Hospital. Resistance to APC, diagnosed on the basis of a prolonged clotting time in the APTT assay, is now considered a major cause of thrombophilia. In the majority of cases APC resistance is associated with a point mutation in the factor V molecule (Arg→Gln), but the two are not synonymous. A prolonged baseline APTT is a limitation of the assay; consequently, the determination is not possible in certain risk groups of patients (factor XII deficiency, lupus anticoagulant, and patients under anticoagulant therapy). In these cases a dilution of plasma in factor V deficient plasma is recommended. The Immunochrom assay is based on the inactivation of factor VIIIa by APC. The aim of the study was to compare different functional APC response assays with the result of the DNA analysis. The APC response was tested in healthy probands, thrombosis patients and family members using the Immunochrom assay, the Coatest (Chromogenix) and the Coatest with a + dilution of the plasma in native factor V deficient plasma (Immuno). The DNA analysis was performed as described by Bertina.
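For orientation, the APC ratio referred to in the functional assays above is, in its usual form, the clotting time measured in the presence of APC divided by the clotting time measured without APC: normal plasma is markedly prolonged by APC (high ratio), APC-resistant plasma is not. A minimal sketch with hypothetical clotting times:

    def apc_ratio(ct_with_apc, ct_without_apc):
        # Classical APTT-based APC sensitivity ratio: the stronger the
        # anticoagulant response to APC, the higher the ratio.
        return ct_with_apc / ct_without_apc

    print(round(apc_ratio(95.0, 38.0), 2))  # 2.5: a clearly APC-responsive sample
    print(round(apc_ratio(48.0, 36.0), 2))  # 1.33: suggestive of APC resistance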
One patient was homozygous for the factor V mutation; a heterozygous result was obtained in members of the control group, in patients and in family members. In all cases with the factor V mutation, the ratio of the Immunochrom assay was lower than the laboratory's own reference value, independent of anticoagulant therapy. Pathological ratios in this assay were also obtained in one member of a family with high thrombotic incidence (DNA Arg/Arg) and in patients under anticoagulant therapy (two of these patients are identical twins). In the Coatest, a reduced APC response was diagnosed in all cases with the factor V mutation without anticoagulant therapy and in % of heterozygous patients under anticoagulant therapy. Results of the test using the dilution in factor V deficient plasma showed good agreement with the results of the DNA analysis, but the method is obviously only sensitive for the factor V mutation. The reason for pathological ratios in the Immunochrom assay in wild-type patients is unclear. The majority of these patients are treated with anticoagulants; a comparison with the Coatest is not possible. Interestingly, in one patient under heparin with a low ratio in the Immunochrom assay, the ratio of the Coatest was also low after reduction of the heparin. It seems necessary to investigate at which interval from the thrombotic event APC resistance should be tested. The following causes of pathological ratios in functional APC assays must be discussed: high levels of factor VIII and/or von Willebrand antigen (acute phase reaction), and other mutations in factors V and VIII. The factor V dilution assay should be replaced by DNA analysis.

Due to their differing compositions, the "sensitivities" of various APTT reagents differ not only with respect to factor depletions, heparin and fibrin/fibrinogen degradation products, but also with regard to pathological inhibitors. For lupus anticoagulants this means that "lupus-sensitive" reagents can be delineated from "lupus-insensitive" reagents. With a "lupus-insensitive" APTT reagent there is no or only slight prolongation of the APTT in the plasma under investigation, whereas with a "lupus-sensitive" reagent marked prolongation is observed. For the meaningful use of APTT reagents it is necessary to know the extent to which they are influenced by lupus anticoagulants. The following APTT reagents were tested:
• PTT-Reagenz, PTTa, PTTa liquid, PTT-LA, PTT-LT (Boehringer/Stago)
• Pathromtin, Pathromtin SL, Neothromtin (Behring)
• Platelin S, Platelin Excel LS (Organon Teknika)
• Actin-FS, Actin-FSL (Dade)
• APTT Silica lyo, APTT Silica liquid (Instrumentation Laboratory)
The material for investigation consisted of plasmas from patients with lupus anticoagulants. A confirmatory test (lupus anticoagulant test, Immuno) was positive for all of the patients. Measurements were made using the STA coagulation analyser (Boehringer/Stago). It can be seen from the results that in some instances very different prolongations were obtained in identical plasmas by using differing APTT reagents. Low susceptibility to lupus anticoagulants was shown by Actin FS (Dade), PTT-Reagenz (Boehringer) and Neothromtin (Behring). High susceptibility was shown by Platelin Excel LS (Organon Teknika), PTT-LA and PTT-LT (Boehringer/Stago). Lupus anticoagulant screening with the APTT reaction is promising when two APTT reagents differing as greatly as possible in their lupus anticoagulant sensitivity are used.

Resistance to the anticoagulant response of activated protein C (APC) is a major cause of venous thrombosis.
APC resistance is due to a single mutation in the factor V gene, which predicts replacement of Arg in the APC cleavage site with Gln (factor V Leiden mutation). In contrast to other known genetic risk factors for thrombosis, this factor V G→A mutation has a high prevalence in the general population of western Europe (average – %). We have determined the prevalence of the factor V G→A mutation in a population of probands from the north-eastern part of Germany. The mutation was found in %. (Heterozygotes were found in subjects; person was homozygous.) The results are compared with our studies of populations from Argentina and Poland. We analysed the factor V G→A mutation in patients with thrombosis from Germany and Hungary. This mutation has been found in about % of these patients. In contrast, the frequency of this mutation was strongly reduced in a group of patients with thrombosis and pulmonary embolism from Argentina ( heterozygotes in patients; %). The results from these different populations will be described and discussed.

Past medical history: venous thromboembolic events (TE) at , and years; intermittent oral anticoagulation (OAC) without TEs. Diagnosis of an autoimmune disorder with elevated antinuclear antibody titers and a positive lupus anticoagulant test. No other relevant illnesses; family history uneventful. Two weeks prior to the referral to us: acute febrile illness with nausea, diarrhea, abdominal pain; hospitalisation, treatment with i.v. antibiotics and anticoagulation with fractionated heparin; development of extensive deep vein thrombosis (DVT) of the right leg; initiation of full-dose unfractionated heparin; decline of platelet count from to a nadir of G/l; referral to our department. On admission, an extensive coagulation screen yielded the following results (n = normal, ↑ = elevated, ↓ = reduced, + = positive, − = negative): PT ↑, APTT ↑, TT n, factors II, V, VIII n, factors VII, IX, XI, XII ↓, fibrinogen ↑, ATIII n, protein C, S ↓, activated protein C sensitivity ratio . (↓), FV Leiden mutation PCR −, fibrinolytic system n, TAT ↑, F1+2 ↑, lupus anticoagulant +, heparin-induced platelet antibodies +; no diagnosis of a specific autoimmune disorder could be made. Immunosuppressive therapy with corticosteroids and anticoagulation with recombinant hirudin were initiated; no progression of the DVT occurred and normalisation of the platelet count was observed. During follow-up under OAC and low-dose corticosteroids, the patient was well; the pathologic coagulation results, including lupus anticoagulant and activated protein C resistance, have returned to normal, and no further TEs have been observed. In summary, we present a case of a complex coagulation disorder as part of an autoimmune process, resulting in a clinically manifest prothrombotic dysbalance including lupus anticoagulant, acquired resistance against activated protein C and heparin-induced thrombocytopenia (type II), entering complete remission under combined immunosuppressive and anticoagulant therapy.

In recent years, a vast number of simplified analytical procedures have been developed for the diagnosis of haemostatic disorders. Today the detection methods have evolved from the mechanical hook method or ball coagulometry to optical systems, which additionally can utilise chromogenic substrates or immunological methods. In these systems the clotting time is derived from algorithms (e.g. a threshold or the maximum of the first or second derivative of the signal).
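To make this algorithmic step concrete, here is a sketch of the two detection rules on a synthetic optical trace (the normalisation, threshold level and data are illustrative assumptions, not the algorithms of any particular analyser):

    import numpy as np

    def clotting_time(t, signal, method="threshold", level=0.5):
        # Normalise the optical trace to 0..1, then either take the first
        # crossing of a fixed level or the time of the steepest increase.
        s = (signal - signal.min()) / (signal.max() - signal.min())
        if method == "threshold":
            return t[np.argmax(s >= level)]          # first index where s >= level
        if method == "derivative":
            return t[np.argmax(np.gradient(s, t))]   # maximum of the 1st derivative
        raise ValueError("unknown method")

    t = np.linspace(0.0, 120.0, 601)                  # time (s)
    trace = 1.0 / (1.0 + np.exp(-(t - 35.0) / 2.0))   # synthetic turbidity curve
    print(clotting_time(t, trace, "threshold"),
          clotting_time(t, trace, "derivative"))      # both close to 35 s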
We studied healthy subjects aged to years and patients aged to years using a new APTT reagent (Pathromtin SL). The results were compared with those obtained with a routinely used reagent (Pathromtin). The reference range and the factor, heparin and lupus anticoagulant sensitivity were determined. Analysis was performed using the Behring Fibrintimer A (BFA) with optomechanical clot detection and the Behring Coagulation Timer (BCT) with optical clot detection by threshold, together with the DVV test and DVV confirm for lupus anticoagulant diagnostics. Our results showed that the new Pathromtin SL reagent met the demands for a higher factor and lupus anticoagulant sensitivity. It is highly suitable for monitoring heparin therapy and gave comparable results with the optical and the optomechanical analyser systems; hence the reagent can be used for both systems.

Restenosis following percutaneous transluminal angioplasty (PTA) continues to be a major clinical problem. Neointimal hyperplasia, its major underlying cause, cannot be sufficiently prevented. Various plasmatic coagulation and fibrinolytic factors have been associated with arterial restenosis. Anticardiolipin antibodies (ACL) have been established as risk factors for venous or arterial thrombosis. Methods: in a cohort of patients ( men and women, age ± years) undergoing PTA of a peripheral artery, we prospectively evaluated whether ACL could influence the -month restenosis rate. Patients were clinically examined before, and months after PTA. Noninvasive grading of arterial stenosis was done by duplex scanning of jet peak velocities. Restenosis was arbitrarily defined as more than % occlusion of the lumen at the site of dilatation months after successful intervention. Laboratory investigation at the same time included ACL and other known atherosclerosis risk markers, such as fibrinogen (Fbg), von Willebrand factor (vWF), homocysteine (Hcy) and C-reactive protein (CRP). Thrombin generation markers, such as thrombin–antithrombin III complexes and prothrombin fragment 1+2, as well as thrombomodulin (TM) as an endothelial activation marker, were also measured. Results: / ( . %) patients were considered to have developed restenosis after months. / ( %) patients were found to have positive IgG ACL ( – GPL) and/or IgM ACL ( – MPL) at all three measurements. / was negative before but seroconverted (IgM) months after PTA. / ( %) ACL-positive and ( . %) ACL-negative patients developed restenosis at months (chi-square p-value = . ). All the above-mentioned coagulation parameters did not differ between ACL-positive and ACL-negative patients, measured before or months after PTA. Some of them are shown below (values before PTA).

Basilar artery stenosis is a rare event in young children. Risk factors are head or neck trauma with consecutive dissection of the vertebral artery, cardiac diseases or hypercoagulability. Elevated lipoprotein(a) (Lp(a)) serum levels in adults can mediate atherosclerosis. In addition, Lp(a) might interfere with fibrinolysis. Here we report on a -year-old boy who presented with acute brain stem symptoms. History revealed neither trauma nor infectious disease. Conventional and MR angiography showed stenosis of the basilar artery without ischemic lesions. Laboratory findings were normal in routine blood and CSF tests. Global coagulation parameters as well as procoagulant and anticoagulant factors were normal. Cardiac and autoimmune disease could be ruled out. Lp(a) serum levels were significantly elevated to mg/dl (normal range < mg/dl).
Analysis of other family members revealed a hereditary hyperlipoproteinemia(a), which might explain the family history of an increased incidence of myocardial infarction and CVE in elderly family members. Clinically, the patient recovered completely from the brain stem symptoms after heparinization and subsequent oral anticoagulation with phenprocoumon. However, radiological signs of basilar artery stenosis progressed. In a recently developed specific test, an elevated anti-phosphatidylserine antibody titer was detected one year after the primary diagnosis. In conclusion, this is the first report of a child with stenosis of the basilar artery and elevated levels of Lp(a). It is unclear whether APA contributed to the onset of basilar artery stenosis or developed secondarily due to endothelial defects after thrombosis and anticoagulation. APA, however, might increase the risk of further thrombotic events in this patient.

In patients with thrombotic events and patients with systemic lupus erythematosus, anticardiolipin antibodies (ACA) and lupus anticoagulant (LA) were measured. For ACA detection we use the assays from Elias for IgG and IgM antibodies. As sensitive methods for detecting LA in our laboratory, we use the test kits from Diagnostica Stago (Staclot LA with hexagonal array of phospholipids; PTT-LA, a very sensitive PTT method; and Staclot PNP, a platelet neutralization procedure) and the PTT from Organon Teknika (Platelin Excel LS with two incubation times, and minutes). The results of these tests were compared with three new ones on the German market: Specktin APCT (activated plasma clotting time), Specktin APTT (APTT with purified soy extract) and Specktin LA (phospholipid preparation in concentrations between and µg/ml); all WAK Chemie.

Traditional APTT reagents were developed for the sensitive detection of factors VIII and IX as a cause of hemorrhage. High sensitivity against lupus anticoagulants, which also prolong the APTT, was not required for this purpose. With increasing recognition of the importance of antiphospholipid antibodies as a risk factor for thromboembolism, more sensitive reagents were designed, which now reliably detect this condition. Using such reagents as a screening test in a general hospital makes it necessary to distinguish both conditions quickly. We here report an algorithm by which we use an inhibitor (lupus anticoagulant) sensitive reagent (STA APTT, Boehringer) and an inhibitor-insensitive reagent (Actin FS, Dade) to distinguish anticoagulants and factor deficiencies as causes of a prolonged APTT. Citrate plasma from patients with various diseases showed an unexpectedly abnormal inhibitor-sensitive APTT (> s). Plasmas with factor deficiencies remained abnormal with the insensitive APTT reagent. A regular correction of their defect occurred on mixing with normal plasma. By measurement of single coagulation factors, five patients with contact factor XII deficiency were found. This condition is associated with thrombosis and very rarely with bleeding. Three patients with factor XI deficiency and two patients with factor IX deficiency were also identified.

Antiplatelet agents of any kind permit a secondary prevention of myocardial ischemic lesions. There is no general consensus regarding secondary prevention of cerebral ischemic lesions. Aspirin remains the most common substance; ticlopidine also brings about prevention, but with important secondary effects.
The European Stroke Prevention Study 1 demonstrated that the combination of antiplatelet agents, in particular aspirin/dipyridamole (Persantin), is also very active. To collect more information, ESPS was organized, and patients receiving either placebo, mg aspirin, mg of a sustained-release form of dipyridamole (Persantin(R)), or the combination aspirin/dipyridamole were recruited. It ended March 1st with the following conclusions: 1. Aspirin, mg a day, brings about a significant secondary reduction of stroke ( . %) after a two-year follow-up; notwithstanding the low dose of aspirin, haemorrhages remain important. 2. Dipyridamole, at mg a day, brings about a significant reduction of stroke ( . %), similar to that of aspirin; one could thus substitute mg aspirin by mg dipyridamole. 3. The combination of mg aspirin and mg dipyridamole brings about a significantly greater reduction of stroke ( . %). ESPS revealed that a low dosage of aspirin is active, that dipyridamole alone is also active, but that the combination of both gives far better results. The study of the primary end-points, the study of the survival curves, the factorial statistical analysis and the pairwise comparison analysis led to these conclusions. The conclusions drawn from ESPS underline that the combination aspirin/dipyridamole is a privileged choice for cerebral ischemia.

The state of activation of circulating platelets in acute cerebral ischemia is controversial. Activation of platelets at the single-cell level can be assessed by determining the shape change or the expression of antigens such as P-selectin (CD ). Shape change is an early and rapidly reversible event in platelet activation, whereas P-selectin is irreversibly expressed on the platelet surface upon stimulation. Methods: we investigated untreated patients within one day after cerebral ischemia, patients months after stroke treated with warfarin, and age- and sex-matched control subjects without vascular risk factors. Venous blood was collected into a fixation solution blocking the metabolic processes in platelets within milliseconds. We determined the fraction of resting discoid platelets by phase contrast microscopy. The expression of P-selectin was measured by flow cytometry. Results: the fraction of platelets expressing P-selectin was higher in patients with acute cerebral ischemia ( . ± . %) than in control subjects ( . ± . %; p< . , U-test). Patients with stroke (n= , . ± . %) and patients with transient ischemic attack (TIA; n= , . ± . %) had similar values. Patients months after stroke still had higher values ( . ± . %, p< . ) than control subjects. The rate of discoid platelets was not different between patients with acute ischemia (n= , . ± . %), patients months after stroke (n= , . ± . %) and control subjects (n= , . ± . %). Platelet count was not significantly different between groups. Conclusion: the elevated proportion of platelets expressing P-selectin indicates strong platelet activation in acute cerebral ischemia and in a majority of patients months after stroke. Assessment of P-selectin revealed a higher sensitivity for platelet activation after stroke or TIA than analysing the reversible shape change. Further studies have to clarify whether monitoring of platelet activation by flow cytometry is helpful as a prognostic tool and for evaluating therapeutic strategies after stroke.

Vascular smooth muscle cell (SMC) proliferation and migration into the neointima are hallmarks of atherogenesis.
The complexity of these processes and the concerted action and interaction of the molecules involved are yet to be fully elucidated. One crucial molecule seems to be the urokinase-type plasminogen activator receptor (uPAR), recently also assigned as CD antigen. uPAR serves a dual function: (1) it directs uPA proteolytic activity to a special location on the cell surface and (2) it induces cellular signals leading to various phenotypic changes. We have investigated the signal-transducing capacity of uPAR in human SMCs and provide here a molecular explanation for uPAR-related cellular events. Activation of these cells with uPA (even with an inactivated catalytic center) results in the induction of tyrosine phosphorylation, suggesting modulation of uPAR-associated protein tyrosine kinases (PTKs) upon ligand binding. We obtained patterns of tyrosine-phosphorylated proteins with molecular masses of – and – kDa. Using antibodies against different types of PTKs as well as immunoprecipitation and immunoblotting techniques, the PTKs involved in the uPAR signalling complex were identified as members of the Src PTK family. The colocalization of uPAR and PTKs at the cell surface of SMCs was further confirmed by confocal microscopy studies. We conclude that the uPAR–PTK complex is most likely involved in a signal transduction pathway that provides the coordinated action of extracellular proteolysis, adhesion, and cell activation, which is required for cell migration. This mechanism may be crucial for the progression of atherosclerotic plaques.

Activation markers of haemostasis have been found elevated in relation to diabetic vascular lesions. Simultaneous pancreas and kidney transplantation (PKT) in type I diabetes has been shown to improve diabetic complications and long-term survival. We measured haemostatic vascular risk factors and activation markers in the plasma of patients after successful PKT, patients after PKT and rejection of the pancreas graft, and patients after PKT and rejection of the renal graft. Blood samples were taken during routine ambulatory visits; patients were free of any ongoing acute disorder or transplant rejection and under continuous immunosuppressive medication. Despite individually adjusted insulin therapy, HbA1 plasma levels increased after pancreas rejection ( . vs . , p< . ). Platelet counts and plasma levels of fibrinogen, F1+2 fragment, TAT complex, APP complex and fibrin monomer were found significantly elevated as compared to diabetic controls, but not significantly different with respect to completely or partially successful PKT. One major reason for the increased activation state of haemostasis may be the cyclosporin treatment given to all patients. t-PA and PAI-1 plasma levels were within the normal range and significantly correlated with plasma triglycerides (r= . ; p< . ). D-dimer plasma levels were significantly lower after pancreas rejection ( ( ) vs ( ) ng/ml; mean (SEM), p< . ), which might reflect impaired fibrin degradation related to increased glycosylation of fibrinolytic factors. In conclusion, despite the marked improvement of glucose and lipid metabolism, plasma markers of activation of coagulation and fibrinolysis are not decreased to normal after simultaneous pancreas and kidney transplantation.

According to the investigations of Fowler et al. and Pepe et al., the probability of an ARDS occurring with one risk factor is – %, and in the presence of several risk factors %. Goris et al. and Johnson et al. determined the level of severity with the aid of a fixed scale: the Injury Severity Score. All these investigations are, however, not to be interpreted as typical following coronary surgery. These investigations demonstrated that the kallikrein and factor XII systems are of great importance as intraoperative risk factors. Here the factor XII system plays a major role, with direct or indirect activation of the kallikrein-kinin system with the splitting products alpha-factor XIIa and beta-factor XIIa, respectively. All ARDS scores take the PMN elastase into account. If the PMN elastase values ( µg/l) are constantly high postoperatively, then lung complications are to be expected. Patients developing an ARDS displayed significantly lower alpha2-macroglobulin values. Patients who developed a highly significantly raised kallikrein-like activity (> U/l) after the beginning of bypass and showed constantly high values during ECC are difficult to keep under control owing to the blood pressure behaviour. The platelet PAI also shows a significant rise and runs intraoperatively analogous to platelet factor , only antiparallel, since it attacks the endothelium. We were able to show that PAI-1 is suitable as an indirect marker for a possibly developing restenosis. % of the patients investigated with lowered PAI-1 values in the postoperative phase did not develop a restenosis. However, among patients showing significantly rising PAI-1 values from the 1st to 3rd postoperative day, % of all cases had a restenosis. A further risk factor in this respect is a significantly raised fibrinogen level, lying over % at the end of surgery. If these fibrinogen values do not fall from the 1st postoperative day onwards, a raised risk of thrombosis must be reckoned with in the absence of therapeutic intervention. The following parameters represented haemostaseological risk parameters with significant behaviour within the framework of this study: 1) with regard to the blood pressure behaviour, the kallikrein-like activity (> U/l); 2) with regard to the lung complications, alpha2-macroglobulin and PMN elastase (> µg/l); 3) and finally, as a possible marker for a developing restenosis, PAI-1 and fibrinogen (> %).

Resulting from numerous clinical studies, homocysteinemia is found to be an almost independent risk factor for atherosclerosis, including thrombotic complications, as well as for venous thromboembolism. Experimental investigations of the underlying mechanisms suggest endothelial cell damage accompanied by the development of an atherogenic and thrombogenic potential, increased platelet reactivity, oxidative modification of LDL, and enhanced affinity of Lp(a) for fibrin. To our knowledge, no results have been published on the influence of homocysteine on leukocytes, although these cells are deeply involved in pathological events within the vasculature. Therefore, as a first approach, different functional parameters of human polymorphonuclear leukocytes (PMNL) were followed during incubation with , , and µM (final concentration) DL-homocysteine (HC) in isolated fractions or whole blood, respectively: 1) Spontaneous mobility of PMNL, measured as migration distance into micropore filters in a modified Boyden chamber, is found to be significantly enhanced by the two smaller HC concentrations. 2) Chemotaxis induced by . µM formylmethionylleucylphenylalanine (fMLP) shows no significant differences. 3) Monitoring of chemiluminescence signals (Autolumat LB , Berthold) is complicated, as HC influences the luminol-mediated indicator reaction.
Adjusting appropriate conditions, the following results are obtained: spontaneous chemiluminescence and that induced by zymosan, fMLP, and the Ca²⁺ ionophore A are enhanced by the two higher HC concentrations. There are, however, differences between the blood donors, as a minority does not respond to HC in repeated measurements. With phorbol myristate acetate, the signal is diminished by HC in all cases and at all concentrations. 4) Phagocytosis induced by zymosan (microscopic evaluation) as well as by opsonized E. coli (cytoflowmetric evaluation) is significantly increased by the two higher HC concentrations. Conclusion: the activation of human PMNL is enhanced, with respect to the majority of the investigated stimuli, by HC in concentrations reached under pathophysiological conditions.

The effect of physical exercise on hemostatic parameters was studied in patients (male, mean age [range – ] yrs) with angiographically documented coronary artery disease (CAD) and in controls (male, [ – ] yrs), both participating in an hour-long group exercise session for cardiac rehabilitation. In each group, relevant arteriosclerotic lesions in the carotid, abdominal and leg arteries were excluded by Doppler ultrasound examinations. Patients were all under beta-blocking agents and aspirin. Plasma levels of prothrombin fragment 1+2 (PTF1+2) and fibrinopeptide A (FPA), reflecting formation of thrombin and fibrin, respectively, were measured at rest and immediately after one hour of exercise consisting of jogging, light gymnastics and ball games. Training intensity in both groups was comparable, as indicated by the mean heart rate during exercise, corresponding in patients to ± % (mean ± SD) and in controls to ± % of the maximal heart rate previously determined on a bicycle ergometer. Baseline values for PTF1+2 were significantly lower in patients ( . ± . nmol/l; mean ± SD) than in controls ( . ± . ; p< . ). After exercise we found an increase of PTF1+2 in controls to . ± . nmol/l (p< . ), while in patients PTF1+2 remained unchanged ( . ± . after). Accordingly, the exercise-induced rise of FPA was more pronounced in controls (from . ± . to . ± . ng/ml; p< . ) than in patients (from . ± . to . ± . ng/ml; p< . ). We conclude that, in terms of thrombin and fibrin generation, exercise training does not exert detrimental effects on hemostasis in patients with CAD. Lower baseline values and the lack of an exercise-induced increase of PTF1+2 in patients with CAD might be attributed to medication with aspirin and/or beta-blocking agents.

Periodontitis marginalis (PM) is an inflammatory oral disease that is caused by gram-negative bacteria and has a high incidence in the second half of life. Clinical signs of PM are gingival bleeding, periodontal pockets, alveolar bone destruction and loss of teeth. Recent epidemiological studies have provided some evidence for an association between PM and atherosclerosis. In the present paper we summarise some of the results that we have obtained in studies on patients with PM as well as on patients with hypercholesterolaemia (HC) and atherosclerosis. PM was frequently found to be associated with HC ( % in rapidly progressive PM) and increased reactivity of peripheral blood neutrophils and platelets (e.g. generation of oxygen radicals and PAF-induced aggregation). Patients with HC and atherosclerosis had a higher frequency of severe PM when compared with data on community periodontal health. The severity of PM was higher in patients with plasma cholesterol levels ≥ . mM when compared to those with plasma cholesterol < . mM. In patients with coronary atherosclerosis, the severity of PM was significantly correlated with plasma cholesterol level, systolic blood pressure and the number of diseased coronary arteries. These results provide further evidence for an association between PM, HC and atherosclerosis. It can be speculated that HC is not only a risk factor for atherosclerosis but also a risk factor for PM, acting by increasing the reactivity of neutrophils and platelets. On the other hand, PM as a mild chronic inflammation could promote the development of atherosclerosis through effects of endotoxins on the vessel wall, blood cells and haemostatic factors. It has also been speculated that phagocytosing leukocytes in the inflamed periodontal tissues could contribute to oxidative modification of LDL. So far, there is no evidence that atherosclerosis may contribute to the pathogenesis of PM.

Protein Z (PZ) is a vitamin K dependent plasma protein synthesized in the liver. It promotes the association of thrombin with phospholipid surfaces. Recently it has been shown that a deficiency of PZ may lead to a bleeding tendency. In patients undergoing chronic hemodialysis, disorders of hemostasis are common. To examine whether plasma levels of PZ are altered in patients with end-stage renal disease, we determined PZ in the plasma of patients at the beginning of hemodialysis treatment. The results were compared with a group of healthy controls. The difference in PZ levels between patients with end-stage renal disease and the control group was not significant: the control group was ± ng/ml and the patient group ± ng/ml. In one patient with a marked bleeding tendency after hemodialysis, PZ was ng/ml. We conclude that in patients with bleeding disorders PZ determination may be helpful.

The normal range of Actin FS was reinvestigated in a multicentric approach. A protocol was developed which requests each center to assess the APTT with one common and one variable lot of Actin FS in samples from suspected normals. Inclusion and exclusion criteria based upon the results of clotting assays, liver enzymes and clinical data were defined. Results: a total of results was obtained. The majority of centers in this study used the Electra or C (MLA). Results for the Electra group (n = ) showed a precision for the common lot of Actin FS with a common lot of a three-level control from . % (level ) to . % (level ), with excellent accuracy between the centers. Clotting times with the variable lots of Actin FS were very similar. The results from normals, however, showed a somewhat higher dispersion using the common lot of Actin FS. Of the centers, had almost identical mean values (range . to . sec) whereas one reported shorter and one longer clotting times ( . and . sec). Results with the variable lots gave almost identical results as the common one. A total of results of all lots gave a normal range of . to . sec ( – % percentiles) on the Electra. Mean values on the ACL (n = ) were . sec, on the BCT . sec, on the Amga coagulometric . sec, and on the Amga turbidimetric . sec (n = each). All centers used Sarstedt Monovetten with . sodium citrate. Discussion: the results of this study demonstrate the lot-to-lot consistency of all lots of reagents included in this study, since the common and variable lots showed very consistent results. Interestingly, in the large group of Electra users the normal ranges showed some differences, though the controls in all centers were almost identical.
This confirms the recommendation that a normal range as stated by the manufacturer should be used for orientation only, and that each laboratory should assess its own range.

Direct-acting anticoagulant agents such as hirudin (r-H), argatroban (Arg), efegatran (Efe) and PEG-hirudin (PH) represent specific and potent inhibitors of thrombin. Blood samples collected in r-H ( µg/ml), Arg ( µg/ml), Efe ( µg/ml) and PH ( µg/ml) do not clot for extended periods (> hours), thus allowing the collection of plasma for analytical purposes. Unlike heparin, these agents do not require any plasma cofactor for their anticoagulant effect. In contrast to citrate, oxalate, EDTA and heparin, these antithrombin agents do not alter the electrolyte or protein composition of blood. Thus, blood collected in these agents may provide a physiologically intact (native) sample for clinical laboratory profiling. We have used all of these agents to prepare whole blood and plasma samples for various clinical laboratory measurements. Plasma samples collected with these agents are obviously not suitable for global clotting tests (PT, APTT, thrombin time, fibrinogen); however, they are optimal anticoagulants for the collection of samples for molecular markers of hemostatic activation, such as fibrinogen/fibrin-related degradation products, prothrombin fragment, protease cleavage products, TFPI, TNF and other protein mediators. Electrolytes, blood gases, enzymes and protein profiles can also be satisfactorily measured in blood samples collected with these agents. Antithrombin-anticoagulated blood used for hematologic analysis showed blood count and differential results equivalent to those obtained with EDTA blood. Unlike other anticoagulants, these agents do not interfere with the cell staining process. Washed blood cells can also be prepared using buffers supplemented with antithrombin agents for morphologic and functional studies. Thrombin inhibitors such as hirudin have also been used for flow cytometry and image analysis of blood cells and tissue exudates. Our observations suggest that these agents can be used as suitable anticoagulants for clinical laboratory blood sampling. They can also be used as a flush anticoagulant for most automated instruments, as they exhibit anticoagulant properties superior to heparin. Furthermore, the hematologic parameters obtained in antithrombin-anticoagulated blood may be physiologically more relevant than those determined in blood collected in EDTA, citrate or heparin.

Antithrombin III determination is one of the most popular methods for the in vitro diagnosis of a number of different disorders. Human thrombin, affinity-purified on heparin-modified silica-based sorbents, was used for determination of antithrombin III levels by the Abildgaard method in the blood of patients with pregnancy pathology, acute leukemia, thrombocytopenia and anemia. It was found that the antithrombin level is decreased to – % of normal values in pregnancy pathology, to < % in acute leukemia and thrombocytopenia, and to ≤ % in anemia. The results show a strong relationship between the named disorders and the patient's antithrombin III level. Therefore antithrombin III estimation may be used as a simple and quick method for the preliminary diagnosis of the above-named disorders.

BM Coasys is a completely automated analyzer system for coagulation tests. It is well suited for routine coagulation testing in random access in a medium-throughput laboratory environment.
Analytical performance and practicability were tested in a common evaluation program in five hospital laboratories. Within-run and day-to-day CVs were below % in different samples (controls, patients). Comparison in different therapeutic ranges confirmed the declared ISI value for calculating INR values (see the formula below). Normal values for coagulation tests with results in primary units were checked in samples and confirmed. Due to the optical measuring principle of the BM Coasys, there was a slight tendency toward shorter times with the thrombin reagent. In conclusion, the performance of coagulation tests with the BM Coasys was rated as good as or better than that of the existing systems in the laboratories, with advantages due to the short familiarization time and easy handling. The flexibility and stability of the system permit optimal integration into the workflow of the routine laboratory.

Purified thrombin and antithrombin III (AT III) are of great interest in clinical diagnostic and treatment practice, so their isolation methods are very important. The molecules of these proteins have fragments responsible for interaction with the native glycosaminoglycan heparin. This interaction is used for the isolation and purification of thrombin and AT III from native materials, blood plasma or its fractionation products. We performed a comparative study of the purification of these proteins on heparin sorbents containing heparin immobilized on silica gel modified by glycidoxypropyl, gamma-aminopropyl or tosyl chloride groups, or on cellulose: heparin-epoxy-silica ( ), heparin-gammapropyl-silica ( ), heparin-tosyl-silica ( ), and heparin-cellulose ( ). We found that thrombin binds to all sorbents, while AT III does not bind to sorbents and . There was no difference between silica and cellulose sorbents in thrombin desorption by M NaCl. AT III binds more strongly to heparin-cellulose than to the silica sorbents, but the specific activity and degree of purity were approximately the same on both kinds of sorbent. Thrombin specific activity and degree of purity were approximately twice as high on sorbents and as on sorbents and ( – NIH units/mg versus – NIH units/mg). Therefore, sorbents and can be used for the isolation and purification of thrombin, and sorbents and for the isolation and purification of AT III. We have used these sorbents for large-scale purification of the named proteins. Purified thrombin was used for the production of diagnostic kits for antithrombin III, fibrinogen, fibrin/fibrinogen degradation products and thrombin time determination.

Various alterations of the hemostatic system have been described after aerobic or anaerobic physical exercise. Numerous investigations of the hemostatic system exist for running and bicycle ergometer exercise, but not for swimming. Young volunteers (n = ; median age years) were investigated: an aerobic exercise group (achieved heart rate /min, lactate < mmol/l; n = ) and an anaerobic exercise group (achieved heart rate /min, lactate > mmol/l; n = ). In both groups there was a significant shortening of the PTT. Under anaerobic conditions, hematocrit and Quick value increased significantly. Factor VIII activity rose significantly in both groups. Indicating plasmatic clotting activation, there was a significant increase in the molecular markers TAT and F1+2 only under anaerobic conditions (TAT from , to , µg/l; F1+2 from , to , nmol/l).
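For orientation, the ISI-based INR conversion referred to in the BM Coasys evaluation above is the standard WHO relation, not something specific to this analyzer:

$$ \mathrm{INR} = \left(\frac{\mathrm{PT}_{\mathrm{patient}}}{\mathrm{PT}_{\mathrm{normal}}}\right)^{\mathrm{ISI}} $$

where PT_normal is the mean normal prothrombin time and ISI is the International Sensitivity Index declared for the reagent/instrument combination.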
Indicating activation of fibrinolysis, t-PA activity increased significantly in the anaerobic group (from , to , IU/ml) but not in the other group. These findings indicate a balance in the hemostatic system, with activation of both the clotting and the fibrinolytic systems, in young volunteers during swimming exercise, depending on the degree of exercise load.

Membranes as well as compact, porous disks are successfully used for fast analytical separations of biopolymers. As far as capacity, speed and performance of separation are concerned, these supports are as effective as other recently developed fast media for the separation of biopolymers. So far, technical difficulties have prevented proper scaling-up of the processes and the use of membranes and compact disks for preparative separations. In this report, the use of a compact tube made of poly(glycidyl methacrylate) for fast preparative separations of proteins is shown as a possible solution to these problems. The units have yielded excellent results regarding performance and speed of separation as well as capacity. The application of compact tubes made of poly(glycidyl methacrylate) to the preparative isolation of the coagulation factors VIII and IX from human plasma shows that this method can be used even for the separation of very sensitive biopolymers. In terms of yield and purity of the isolated proteins, this method was comparable to preparative column chromatography. The time required for separation was five times shorter than with corresponding column chromatographic methods.

Our measurements showed an excellent correlation of the two systems (r = , ). The maximum amplitudes on the ROTEG were on average . % higher than on the hTEG, corresponding to a slightly lower reverse momentum of the measuring system in comparison to the hTEG.

We report first results of the evaluation of the STA Compact (Boehringer Mannheim/Diagnostica Stago). The STA Compact is designed for automated analysis of routine and special coagulation tests (chronometric, photometric [ nm] and turbidimetric [ nm]). In addition, it measures "derived" fibrinogen. The following tests were evaluated: prothrombin time (PT), partial thromboplastin time (APTT), fibrinogen (Clauss method), thrombin time, AT III (chromogenic), Hepato Quick, as well as factors II, V, VII, X and VIII. Results: within-run CVs of the clotting tests were below % (calculated on the basis of seconds) in most cases, and day-to-day CVs below % (not yet measured for factors). AT III yielded within-run CVs below % in the decision range. Measuring ranges: AT III – %; fibrinogen . – . g/l (plasma dilution / ); after rerun with other dilutions, from . g/l (dilution / ) to g/l (dilution / ). Method comparisons, using the STA as reference, yielded slopes close to 1 and negligible intercepts. Throughput: with routine clotting tests, about tests/h in a sample-selective access mode. We conclude that the STA Compact allows precise measurement of routine and special coagulation tests. It is also a reliable system for photometric tests and is well suited for intermediate workloads as well as stat analyses.

We evaluated PTT LT, a new liquid, silica-based PTT reagent. Special attention was given to the reference interval and heparin sensitivity. The new reagent is well suited for the measurement of intrinsic clotting factors and is reported to have high sensitivity for lupus anticoagulants (higher than STA APTT [Boehringer Mannheim = BM]).
It is stable for days in the cooled compartment of the STA analyzer. Methods: all experiments were performed on the STA. For comparison, we used three other PTT reagents (a laboratory routine, silica-based APTT, as well as STA APTT and STA PTT Kaolin from BM). In addition, thrombin time ( U/ml thrombin, STA thrombin reagent) and heparin (chromogenic anti-Xa test, Rotachrom Heparin) were measured. Results: within-run imprecision (n = ) was below . % CV in the normal range and in two controls (mean values s and s), and . % in a heparin plasma (mean s). Between-day imprecision (d = ) was below % in two controls (mean values s and s). The upper limit of the reference range is s ( . th percentile; median s; patients with normal coagulation status [routine APTT, fibrinogen, PT], median age years); almost identical reference ranges were obtained with STA APTT and the routine PTT reagent, while STA PTT Kaolin showed significantly lower values ( . th percentile s, median s). Method comparison study: good agreement using plasmas from patients without heparin (y = a + . x; n = ; range of x from to s; r = . ; x = STA APTT). The median values from patients under high-dose heparin were: routine PTT s, STA APTT s, PTT LT s, STA PTT Kaolin s, thrombin time s, and heparin . IU/ml. In conclusion, the results of the new reagent compare well to our routine PTT and to the STA APTT system reagent. It allows sensitive monitoring of high-dose heparin therapy and is well suited for detecting abnormalities of the intrinsic clotting factor pathway.

Thrombelastography (TEG) has been a standard technique for many years. The interpretation of thrombelastograms has been widely based on phenomenological observations, while exact information is lacking concerning the coagulation mechanisms leading to the TEG amplitude (ATEG). The ATEG is a measure of the mechanical stiffness of the clot and depends on:

a) Fibrin formation and adequate polymerisation of a three-dimensional network: measurements with non-recalcified citrated blood activated with ADP or epinephrine (both n = ) did not show any clot formation in the TEG. This reflects the need for mechanical coupling between the TEG pin and cup over a distance of mm, which is accomplished by the fibrin network. Therefore, TEG can only be performed under thrombin formation, and thus under thrombin activation of the platelets in the sample. Factors which inhibit platelet aggregation but do not limit thrombin activation of platelets cannot be monitored by TEG.

b) The attachment of the clot to the surfaces of the TEG pin and cup: according to recent literature, we suggest that the attachment of the clot in the TEG relies exclusively on fibrinogen/fibrin adsorption to the surfaces of the pin and cup. Interruption of this attachment can result in lower amplitudes or the so-called "stairway" phenomenon. We could show a complete interruption of clot attachment by dipping the pin for one second in % albumin solution (n = ).

c) The fibrinogen concentration (FG) and platelet count (PC) of the sample: in volunteers we found only a poor correlation of the maximum amplitude (MA) with FG alone (r = . ) or PC alone (r = . ), while there was a very good nonlinear correlation with the product of FG and PC (see the fitting sketch after this abstract). We suggest that the fibrin network forms the main structure of the clot, while the thrombocytes enhance its stiffness in a concentration-dependent manner. This effect of the platelets can be completely reversed by GPIIb/IIIa antagonists.
d) Adequate coagulation activation: in non-activated TEG, even small amounts of inhibitors can lead to a significant reduction of the ATEG.

Conclusion: alterations in TEG measurements can be judged more properly when the underlying mechanisms are understood. Consideration of the limitations of the method allows a more specific interpretation of the results.

In response to a customer request, we investigated the stability of blood samples for the APTT. The study was set up to simulate the conditions of a large private laboratory in which samples arrive several hours after blood collection. Blood was drawn from donors into . % sodium citrate and mixed well before being divided into several aliquots, which were kept at room temperature. The aliquots were centrifuged , , , and h after venipuncture, and the plasma was analyzed immediately with different reagents on the Electra. Results: there was a clear difference between these reagents in the change of the APTT over time. Factor VIII (determined with a chromogenic assay with complete and standardized activation) also changed considerably. Reagent A: ellagic acid, plant phospholipid; reagent B: sulfatide/kaolin, phospholipids; reagent C: ellagic acid, plant and rabbit brain phospholipids. The increase of the APTT was apparently not a function of the decrease of FVIII, because the in vitro FVIII sensitivity of reagent B was inferior to that of reagent A, although reagent B showed more prolongation of the APTT than reagent A. Reagent C, however, showed only minor changes in the APTT. Discussion: these data show that the sample stability of the APTT is reagent-dependent and is not simply a function of FVIII sensitivity. Other factors, such as the buffer system but also the sensitivity towards factors other than FVIII, seem to contribute.

A comparison of the technical principle of the ROTEG coagulation analyser and conventional thrombelastographic systems. An. Calatzis, P. Fritzsche, Al. Calatzis, M. Kling, R. Hipp, A. Stemberger. Institute for Experimental Surgery and Institute of Anesthesiology, Technische Universität München. Thrombelastography (TEG) was introduced by Hartert in as a method for continuous registration of the coagulation process. In we presented the ROTEG coagulation analyser, which uses a newly developed technical method. In TEG systems according to Hartert, the sample (blood or plasma) is placed in a cup which is alternately rotated to the right and left by , °. A cylindrical pin, suspended freely on a torsion wire, is lowered into the blood. When coagulation starts, the clot begins to transfer the rotation of the cup to the pin against the reverse momentum of the torsion wire. The angle of the pin is detected electromagnetically, transformed to the TEG amplitude and continuously recorded. In the ROTEG, the pin is attached to a short axis guided by a ball bearing; all possible movement is thus limited to rotation (ROTEG). The cup is stationary, and the pin is rotated alternately by ° to the right and left by a spring system. When a clot is formed, it attaches to the surfaces of the pin and cup and starts preventing their relative movement against the reverse momentum of the spring. Here the reduction of the rotation of the pin, which is detected optically, is transformed to the TEG amplitude. As can be shown by theoretical analysis and by control measurements, the ROTEG provides the same measuring capabilities as conventional TEG systems.
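To illustrate the nonlinear product correlation reported under item c) of the TEG-mechanisms abstract above, a minimal fitting sketch. The paired values, the saturating model and the starting parameters are all invented for illustration; the abstract reports only that MA correlates nonlinearly with FG × PC:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical paired observations: fibrinogen FG (g/l), platelet count PC (10^9/l),
# TEG maximum amplitude MA (mm).
fg = np.array([1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5])
pc = np.array([120, 180, 220, 250, 300, 320, 400])
ma = np.array([38, 48, 54, 58, 62, 63, 66])

x = fg * pc  # the FG x PC product that MA is correlated against

def saturating(x, ma_max, k):
    """Simple saturating model: MA rises with the product and plateaus."""
    return ma_max * x / (k + x)

(ma_max, k), _ = curve_fit(saturating, x, ma, p0=(70.0, 300.0))
r = np.corrcoef(saturating(x, ma_max, k), ma)[0, 1]
print(f"MA_max = {ma_max:.1f} mm, k = {k:.0f}, r = {r:.2f}")
```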
The main advantage is the solid guidance of the measuring system, which makes the ROTEG easily transportable and less susceptible to shock or vibration during measurement.

Thrombelastography (TEG) is a standard monitoring procedure for the evaluation of coagulation. Usually only non-activated native blood TEG measurements (NATEG) are performed, which leads to a) a long time interval until coagulation and fibrinolysis parameters are available, b) very high susceptibility of the measurement to inhibitors like heparin, which disturbs the assessment of other components of coagulation, and c) unspecific results. Our aim was to develop a coagulation monitoring system based on TEG providing fast and specific information on the different components of coagulation. Methods: the following measurements are performed in parallel using disposable pins/cups (Haemoscope): a) extrinsic activated TEG (EXTEG): µl whole blood (WB) + µl Innovin (recombinant thromboplastin reagent, Dade); b) intrinsic activated TEG (INTEG): µl WB + µl kaolin (suspension g/l, Behring); c) aprotinin TEG (APTEG): EXTEG + KIE aprotinin (Trasylol, Bayer); d) heparinase TEG (HEPTEG), as described in ( ). Results: EXTEG and INTEG provide information on the extrinsic/intrinsic system within – min and information on the platelet/fibrinogen status within – min. Because of the addition of potent activators, INTEG and EXTEG can be performed when inhibitors like heparin are present in the circulation. Fibrinolysis effects can be seen on EXTEG and INTEG and by comparison of EXTEG and APTEG (APTEG: in vitro fibrinolysis inhibition by aprotinin). If fibrinolysis is detected by EXTEG or INTEG and aprotinin susceptibility is verified by APTEG, aprotinin therapy will be initiated. Heparin effects are revealed by HEPTEG. Discussion: by comparison of parallel, differently activated TEG measurements, specific and fast information on the different aspects of the clinical coagulation status is provided. The presented tests can easily be performed at the bedside, and only a small specimen of whole blood is needed ( , – , ml).

Introduction: a severely prolonged APTT ( s; normal s) was observed during preoperative screening for a planned splenectomy in a -year-old man with a -year history of osteomyelofibrosis. Following near-normalization ( s) of the APTT after min preincubation in a kaolin-based APTT assay, prekallikrein (PK) deficiency was suspected, and studies were performed to further investigate the nature of the PK deficiency as well as the mechanism underlying the normalization of the prolonged APTT with increasing preincubation time. Methods: the APTT assay was performed using kaolin/Inosithin. High-molecular-weight kininogen clotting activity (HK:C), FXII:C and PK:C were measured by an APTT-based assay using Neothromtin® (Behring) and min (PK:C) or min (HK:C, FXII:C) preincubation. PK amidolytic activity (PK:Am) was assayed with a chromogenic substrate assay (Chromogenix), and PK antigen (PK:Ag) by quantitative immunoblotting. FXII and HK proteolysis during activation of plasma by kaolin ( mg/ml at °C) or dextran sulfate (DS; . µg/ml at °C) was demonstrated by immunoblotting assays of FXII and HK following SDS-PAGE. Results: the propositus had PK:C < %, PK:Am = % and PK:Ag < . % as compared to normal pooled human plasma (NHP).
His son and two daughters had PK:C of – % and normal APTT values. Incubation of the propositus' plasma with DS did not result in FXII or HK cleavage within min, whereas in NHP detectable FXII and HK proteolysis occurred after min and complete proteolysis was observed after – min. In contrast, kaolin activation of the propositus' plasma led to slow activation of FXII after min, presumably by autoactivation, and to FXIIa-induced HK proteolysis. Near-normalization of the propositus' APTT by prolongation of the preincubation time paralleled FXII autoactivation, as evidenced by immunoblotting. We describe a propositus with a severely prolonged APTT due to hereditary, CRM-negative PK deficiency, suffering from osteomyelofibrosis. Activation with a particulate suspension of kaolin led to slow FXII autoactivation and HK proteolysis, whereas DS in solution did not induce FXII or HK cleavage. FXII autoactivation seems to be responsible for the normalization of the prolonged APTT in PK deficiency after prolonged preincubation times.

In our study we compared a conventional bag with silicone tubing (A) for blood donation with new ones (B from Biotrans and C from Baxter) with a newly developed Y-shaped adapter. This adapter is integrated into the tubing and therefore offers the advantage of drawing blood samples in a closed system. The systems were identical in amount and content of anticoagulant, i.e. ml of CPD per bag, resulting in approximately % of the final whole blood volume. The purpose of the study was to determine whether the different tubings can influence the quality of plasma products with respect to the blood coagulation system. In plasma samples we measured several factors of the procoagulatory and fibrinolytic systems. Intraindividual control citrated ( . M) blood samples were initially drawn from the contralateral cubital vein of the same male donor ( in each group). In all bag samples we found small but significantly higher levels of the global test parameters APTT and TT compared to controls, indicating a higher amount of anticoagulant. PT, however, revealed no differences, suggesting that factor activities were not altered (statistics according to Mann-Whitney). Procoagulatory activation measured as TAT complexes showed elevated levels in bags A and C, whereas prothrombin fragment F1+2 decreased only in A. Concerning the fibrinolytic system, plasminogen activator and PAI-1 values were diminished in all three systems (B < A < C) compared to controls. D-dimers were lowest in A, followed by slightly higher values in C, controls and B. Fibrin monomers did not reveal any significant differences: A < C < controls < B. In summary, the quality of the different blood sampling devices was comparable to the intraindividual controls with regard to factor activities measured by global tests. The activation of the procoagulatory and fibrinolytic systems was slightly, but in most cases significantly, higher in the two new devices than in the conventional one. All values obtained from the plasma samples, however, remained within the normal range of healthy blood donors. We therefore conclude that the two new closed blood drawing systems are favorable for blood donation procedures.

In patients with acute myocardial infarction (AMI) receiving thrombolytic therapy ( patients with rt-PA, patients with streptokinase and one with heparin), the patients were divided into two groups on the basis of CK, myoglobin and ECG criteria (reperfusion/no reflow two hours after starting thrombolytic therapy).
Blood samples were taken before and min, h, h, h, h and h after lysis, and then every day until day . Because of the central role of factor XII in the activation of coagulation, fibrinolysis, the kallikrein-kinin system and the complement cascade, we investigated the role of factor XIIa initiated by AMI and the relation of factor XIIa to the thrombolytic agent and the reocclusion rate. For the investigations we used kits from Shield Diagnostics (XIIa), Behring Diagnostica (C1-inactivator, plasminogen, α2-antiplasmin, PAP), Chromogenix AB (prekallikrein) and Diagnostica Stago (VIIa). Results: there is an increase of factor XIIa immediately after the start of fibrinolysis (maximum min after starting); the increase is independent of the thrombolytic agent. In parallel with factor XIIa, factor VIIa rises, without significant changes of C1-inactivator and prekallikrein. That means: activation of XIIa and the fibrinolytic pathway leads to relatively mild changes in the kallikrein system, but to significant activation of the extrinsic system via VIIa-tissue factor. In some patients there is an additional rise in the XIIa-VIIa system when the fibrinolytic system is already back in the normal range. Further investigations are needed to define the risk of reocclusion as a result of activation of factor VIIa by factor XIIa.

Autoimmune thrombocytopenic purpura (AITP) is a frequent complication of chronic lymphocytic leukemia (CLL), which develops at different stages of the disease and requires special treatment measures. The mechanism of autoimmune disorders in CLL remains unclear. We investigated the immunologic phenotype of blood lymphoid cells in patients suffering from CLL with AITP. In these patients we did not observe disorders in the expression of B-lineage markers as compared with CLL patients without immune complications ( patients), but in the first group a greater number of B-cells expressed activation markers. According to Ig heavy chain expression, the lymphocytes in most cases of CLL complicated by AITP had a more mature phenotype. In all patients with the κ phenotype of CLL lymphocytes we found immune disorders. The development of AITP was accompanied by a lowered level of T cells and a changed distribution of their immunoregulatory subsets: a diminished number of CD4+ cells and an increased number of CD8+ lymphocytes. The results of our investigations indirectly suggest that malignant B-cells in CLL are involved in the production of autoantibodies against blood cells. An imbalance in the T-cell system with functional disturbances of immunoregulation is significant in the development of autoimmune complications in CLL.

In women with severe FVII deficiency (< %), hypermenorrhagia may cause life-threatening blood loss. Therefore, hysterectomy at a young age is reported frequently in the literature. A -year-old girl without a history of bleeding disorder was transferred with hypermenorrhagia. The initial laboratory data revealed an abnormal Quick test of % due to an FVII of , %, a normal platelet count and a hemoglobin level of , g/dl. Antifibrinolytic therapy (tranexamic acid × mg/kg BW/d) and lynestrenol substitution were started to reduce the hemorrhage. Despite treatment, the daily blood loss increased to a maximum of ml. Therefore, substitution therapy with recombinant FVIIa (rFVIIa) (Novo Nordisk) was started at a dose of µg/kg BW q h. Subsequently blood loss decreased to ml/d, but even with an increasing dose of rFVIIa up to µg/kg BW q h (FVII activity max. % min after injection) and additional hormonal support with an LH-FSH antagonist, some hemorrhage remained.
A short course of Methergin was stopped due to severe pain. Ultrasound of the uterus revealed a hypertrophic endometrium causing the persistent bleeding. It decreased slowly over several weeks, and hemorrhage stopped completely after d. The total rFVIIa dose administered was mg. No side effects were observed, and no transfusions of blood products were necessary. Currently, the menstrual cycle is suppressed by estriol succinate. Conclusion: thanks to close cooperation with a specialised gynecologist, the hypermenorrhagia was controlled and hysterectomy was avoided in this woman with severe FVII deficiency.

In three male members, aged between and years, of a family suffering from inherited bleeding disorders, the diagnosis of protein Z deficiency was established. Plasma protein Z evaluated by ELISA (Asserachrom Protein Z, Diagnostica Stago, France) ranged between and ng/ml. The patients mostly suffered from moderate bleeding complications such as prolonged bleeding secondary to trauma or invasive measures, and also spontaneous hematuria. Previous laboratory investigations revealed variable platelet function deficiencies and a transitory borderline decrease of von Willebrand factor. Spontaneous bleeding was rarely recognized; however, it occurred more frequently when analgesics were taken. Bleeding complications showed a good response to hemostatic measures and antifibrinolytic therapy. The use of PCC containing a high level of protein Z in these patients is restricted to severe bleeding episodes or major surgery.

Defibrotide is a mammalian polydeoxyribonucleotide-derived anti-ischemic and antithrombotic drug (Crinos S.p.A., Villa Guardia, Italy). While the drug is known to produce polytherapeutic effects owing to its multicomponent nature, the exact mechanisms of its anti-ischemic effects remain unknown at this time. Since defibrotide is found to be effective in ischemic disorders such as PAOD, VOD-related occlusive disorders and related microangiopathic conditions, we studied the effect of this drug on the contraction of dog and pig arterial strips/rings obtained from various sites. In vitro supplementation of defibrotide to the organ bath containing control dog and pig arterial rings did not modulate the serotonin- and thromboxane-(generated) contraction; however, tissues obtained from dogs treated with mg/kg defibrotide i.v. exhibited a profound desensitization to the agonist-induced contractile process. The time course of these effects was found to be much longer than the plasma half-life of defibrotide. This presentation will provide additional data on the effect of defibrotide on the contraction of vascular smooth muscle as a possible explanation for its anti-ischemic effects.

A. Wehmeier, A. Popescu, W. Schneider. Klinik für Hämatologie, Onkologie und Klinische Immunologie der Heinrich-Heine-Universität Düsseldorf. In chronic myeloid leukemia (CML), evolution to blast crisis is the limiting factor for survival. However, as in other chronic myeloproliferative disorders, bleeding and thrombotic complications are a major source of morbidity, but their incidence has rarely been analysed in larger patient groups. We retrospectively evaluated patients with CML during chronic phase ( cases), accelerated disease ( cases) and blast crisis ( cases), and determined the incidence of thrombohemorrhagic complications in relation to the stage of the disease. In chronic phase, patients had bleeding complications ( . %/patient-year) and patients thrombotic episodes ( %/patient-year).
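The %/patient-year figures used here and below are exposure-adjusted incidence rates. A minimal sketch of the computation, with invented counts rather than the study's data:

```python
def rate_per_100_patient_years(events: int, patient_years: float) -> float:
    """Exposure-adjusted complication rate: events per 100 patient-years."""
    return 100.0 * events / patient_years

# Hypothetical example: 12 bleeding episodes over 480 patient-years of follow-up.
print(rate_per_100_patient_years(12, 480.0))  # -> 2.5 (%/patient-year)
```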
The incidence of bleeding increased significantly in accelerated disease ( patients, . %/patient-year) and blast crisis ( patients, %/patient-year), and many patients had repeated complications. Contrary to our expectations, the incidence of thrombotic complications also increased, to . %/patient-year in accelerated phase and . %/patient-year in blast crisis. In chronic phase, patients died because of bleeding events. In accelerated phase, patients died due to bleeding and patient due to thrombotic complications. In blast crisis, bleeding was associated with deaths, and pulmonary embolism with deaths. Analysis of the causes of thrombohemorrhagic complications revealed that in chronic phase bleeding was often associated with uncontrolled busulfan therapy, whereas in blast crisis severe bleeding occurred mainly when platelet counts were low and peripheral blasts increased. However, there was no obvious explanation for the thrombotic complications. We conclude that bleeding and thrombotic complications are a major source of morbidity and mortality in CML as well, and that the incidence of such complications increases in advanced stages of the disease.

Klinik für Innere Medizin, Klinikum Schwerin. Patients suffering from primary or secondary amyloidosis may occasionally acquire a coagulation disorder characterised by isolated factor X deficiency. We report on a -year-old man who presented with lower gastrointestinal bleeding and a prolonged prothrombin time (Quick %). Amyloidosis was suspected and proven by biopsy of the rectum and histological analysis. In addition, a monoclonal gammopathy of undetermined significance was diagnosed by immunofixation (light chain, type λ). Detailed investigation of the prolonged prothrombin time led to the discovery of a pronounced factor X deficiency (residual activity %). Inhibitors of coagulation factors could not be demonstrated. The treatment of the patient consisted of red blood cell transfusion and infusion of prothrombin complex concentrates. Due to the extremely rapid clearance of infused factor X, no increase of its activity was observed. Chemotherapy of the monoclonal gammopathy was initiated (melphalan/prednisone). Over the following six months the frequency of major bleeding episodes gradually decreased; however, subclinical occult bleeding continued. The factor X activity was repeatedly found between and %. We support the suggestion from literature data that clinically relevant bleeding episodes are likely to occur in patients with amyloidosis-associated factor X deficiency if the residual activity is below %.

Sepsis and septic shock constitute a disease entity characterized by inflammatory reactions (SIRS), coagulation abnormalities (DIC), organ failure (MOF) and severe hemodynamic alterations, frequently leading to death in shock. The aim of our studies was to investigate the efficacy of antithrombin III (Kybernin®) on the outcome of septic shock in a pig endotoxemia model. In this model, pigs respond to LPS with elevated TNF levels, decreased leukocyte and platelet counts, increased TAT and fibrin monomer levels, hypotension and an increase of the pulmonary arterial pressure (PAP), indicating impaired lung function. A total of male castrated juvenile domestic pigs ( – kg) were anaesthetized, mechanically ventilated and infused with Salmonella abortus equi lipopolysaccharide (S. equi LPS) over three hours ( . µg/kg·h). A Swan-Ganz catheter was inserted into the pulmonary artery to measure the PAP.
Animals were allocated to two groups. The treatment group (n = ) received antithrombin III (AT III) according to the following regimen: U/kg (t = – , i.v. infusion), U/kg (i.v. bolus, t = ) and U/kg (t = – min, i.v. infusion). The placebo group (n = ) received the appropriate amount of human serum albumin: – – mg/kg (same schedule as for AT III). The main endpoint was defined as the mortality rate at six hours after S. equi LPS infusion. Whereas in the placebo group out of animals died (mortality rate %), all AT III-treated pigs survived the observation period of hours (p < . , χ² test). The AT III group had a lower PAP than the control group; in particular, the second peak of pulmonary hypertension was abolished by AT III. We therefore conclude that AT III may be a useful tool for the treatment of severe sepsis and septic shock.

In a nationwide monthly survey, all children's hospitals in Germany (ESPED) were asked for clinical and therapeutic information about children suffering from PMI. From July until June , children were registered. Of these, had either ecchymoses and/or necroses, related to increased morbidity and mortality ( %), whereas showed no bleeding signs except petechiae; of these children, one died. The therapeutic interventions concerning hemostasis are listed according to the two defined risk groups. Of the patients with ecchymoses or necroses, / received combination therapy of AT III, heparin and/or plasma (compared to / with petechiae or no bleeding signs). Only one child received protein C concentrate. The data show that children at low risk in part received higher doses of heparin and/or AT III concentrate than high-risk patients, whereas plasma therapy was adjusted to the severity of coagulopathy. Furthermore, the wide range of administered therapeutics allows no conclusion about the different medications. Therefore, controlled studies of the different therapeutic interventions in children with high-risk PMI are desirable.

A fully automated procedure for the reptilase time assay. Y. Schmitt (1) and H.J. Kolde (2). (1) Institute for Laboratory Medicine, Städtisches Klinikum, Darmstadt, FRG; (2) Dade Diagnostics, Unterschleißheim. The reptilase time assay is a relatively simple technique for the detection of fibrinogen degradation products and fibrinogen deficiency or abnormality. The procedure is performed with citrated plasma and batroxobin reagent, a snake venom enzyme from Bothrops atrox. This enzyme cleaves fibrinogen by releasing fibrinopeptide A only, but not fibrinopeptide B. In contrast to the physiological enzyme thrombin, which is readily neutralized by antithrombin III and heparin, batroxobin is not inactivated by physiological inhibitors. At present this assay is mainly performed manually or on mechanical instruments. We have adapted the assay to the Electra fully automated coagulation analyzer (Medical Laboratory Automation, Pleasantville, N.Y.), using the thrombin clotting time procedure in the instrument software with batroxobin reagent (Dade Diagnostics). Clot formation is registered turbidimetrically and the clotting time is printed. The within-run precision (n = ) of this procedure was tested with two plasmas from the daily routine and was between . and . %. In normal samples we found clotting times from . to . sec.
In samples with liver disease (confirmed by pseudocholinesterase < U/ml) or on thrombolytic therapy with streptokinase or urokinase, the fully automated assay on the Electra was compared to the semiautomated method using a KC coagulometer (Amelung, Lemgo, Germany) based on a rolling metal ball principle with magnetic endpoint detection. The two assays agreed very well, with a correlation coefficient of r = , and a regression line according to Passing and Bablok of y = . x + . . These data show that the reptilase time can be performed on the Electra with good precision and with good correlation to the manual technique on mechanical instruments.

Disseminated intravascular coagulation (DIC), due to a massive activation of the coagulation system, is frequently observed in intensive care patients suffering from severe underlying diseases. Laboratory diagnosis of DIC is based on different coagulation tests, but unfortunately the routine haemostaseological parameters react with latency in the course of acute DIC. Objective: in four cases from a cohort of patients with severe sepsis and DIC, we analysed special haemostaseological parameters (TAT, F1+2, D-dimers, human leucocyte elastase (HLE), cathepsin G and heparin cofactor II (HC II)) and correlated them with a MOF score in order to test their predictive value for the prognosis of these patients. Results: all patients were substituted with AT III concentrate. In the investigated patients, the median duration of treatment with AT III concentrate was ( – ) days and the median duration of DIC was ( – ) days. None of the presented patients died during the observation period. All analysed parameters except D-dimers showed a sufficient correlation with the evaluated MOF score (TAT: r = , ; F1+2: r = , ; HLE: r = , ; cathepsin G: r = − , ; HC II: r = − , ). The D-dimers did not correlate with the MOF score, which is probably due to the delayed reactive hyperfibrinolysis in the course of DIC. Furthermore, the decrease of TAT complexes, F1+2, HLE and cathepsin G levels was followed by an increase of AT III and HC II activity. Conclusion: in general, the analysed activation markers and coagulation parameters are sufficient to describe the ongoing process of DIC. The hyperfibrinolytic activity of DIC is adequately represented by the D-dimer test, but with delayed reactivity in the course of DIC. Unfortunately, these parameters are not established in the routine monitoring of DIC in intensive care units, and further studies are therefore needed to investigate their practicability and reliability in daily routine monitoring.

We have previously reported that notoginsenoside R1 (NG-R1) counteracts lipopolysaccharide (LPS)-induced upregulation of plasminogen activator inhibitor-1 and tissue factor expression in cultured human umbilical vein endothelial cells in vitro and in mice in vivo [Fibrinolysis ; (Suppl ): ]. In this study we investigated the effect of NG-R1 on the prevention of LPS-induced lethal toxicity in mice. Because mice are relatively resistant to LPS applied as a single agent, we sensitized them by simultaneous treatment with D-galactosamine. The % lethality induced by LPS ( . mg/mouse) plus D-galactosamine ( mg/mouse) in C H mice was reduced to % by simultaneous administration of NG-R1 ( . mg/mouse) with LPS/galactosamine (p < . by χ² test; see the sketch below). NG-R1 also significantly delayed LPS/galactosamine-induced lethal toxicity from hours to hours, with all animals surviving beyond hours.
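The lethality comparisons in the two preceding abstracts reduce to 2×2 tables (treated vs. control, died vs. survived). The abstracts cite χ² tests; with group sizes this small, a Fisher exact test is the usual alternative. A minimal sketch with invented counts, not the studies' data:

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 table: rows = treated / control, columns = died / survived.
table = [[1, 9],
         [9, 1]]
odds_ratio, p = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.3f}, p = {p:.4f}")
```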
Because lethality induced by LPS involves the synergistic effect of multiple effector molecules such as tumor necrosis factor (TNF)-α, interleukin (IL)-1 and interferon-γ, we also investigated the effect of NG-R1 on LPS-induced TNF-α production from leukocytes in cultured human whole blood cells (HWBCs) ex vivo. The production of TNF-α induced by LPS ( ng/ml for hours) in the supernatant of HWBCs was inhibited by % and %, respectively, when the cells were incubated with ng/ml or ng/ml LPS together with µg/ml NG-R1 (TNF-α concentration: ng/ml LPS-treated cells ± pg/ml, ng/ml LPS plus µg/ml NG-R1-treated cells ± pg/ml, p < . ; ng/ml LPS-treated cells ± pg/ml, ng/ml LPS plus µg/ml NG-R1-treated cells ± pg/ml, p = . ). The present results suggest that NG-R1 can prevent the onset of LPS toxicity as well as the LPS induction of cytokines. NG-R1 may therefore be effective in preventing the effects of septic shock in gram-negative infections.

To elucidate the mechanisms by which coagulation is initiated in septic patients in vivo, coagulation measurements were prospectively evaluated in patients with severe chemotherapy-induced neutropenia. This group of patients was chosen because of their high risk of developing severe septic complications, thus allowing serial prospective coagulation testing prior to and during evolving sepsis or septic shock. Patients with febrile infectious events were accrued to the study. Of these, patients progressed to severe sepsis and an additional patients to septic shock. At onset of fever, factor (F) VIIa activity, F VII antigen and antithrombin III (AT III) activity decreased from normal baseline levels and were significantly lower in the group of patients who progressed to septic shock than in those who developed severe sepsis (medians: . versus . ng/ml, versus U/dl, and versus %; p < . ). The decrease of these variables in septic shock was accompanied by an increase in a marker of thrombin generation, prothrombin fragment 1+2 (medians: . versus . nM; p = . ). These differences were sustained throughout the septic episode (p < . ). F VIIa and AT III levels of < . ng/ml and < %, respectively, at onset of fever predicted a lethal outcome with a sensitivity of and %, and a specificity of and %, respectively (the computation of such indices is sketched below). In contrast, FXIIa-alpha antigen levels did not differ between the groups at onset of fever and were only marginally higher later in the course of septic shock (p = . ). Thus, septic shock in neutropenia is associated with significant coagulation activation, presumably driven by the tissue factor pathway rather than the contact system. Furthermore, in septicemia both F VIIa and AT III measurements are sensitive markers of an unfavourable prognosis.

Hemostatic parameters in sepsis patients treated with anti-TNFα monoclonal antibodies. C. Salat, P. Boekstegers, E. Holler, B. Reinhardt, R. Pihusch, K. Werdan, M. Kaul, T. Beinert, E. Hiller. Med. Klinik III und I, Klinikum Großhadern der Ludwig-Maximilians-Universität München; Hämatologikum der GSF; Knoll AG, Ludwigshafen. Tumor necrosis factor α (TNFα) is a central mediator in the pathogenesis of sepsis and septic shock. As administration of anti-TNFα monoclonal antibodies was able to protect animals from an otherwise lethal endotoxin challenge, clinical studies were initiated in patients with sepsis. TNFα exerts a procoagulant effect, e.g. by enhancing PAI-1 and activating thrombin, as indicated by an increase in TAT and PF1/2 levels.
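The prognostic cut-offs quoted in the neutropenia study above are standard 2×2 diagnostic indices. A minimal sketch of how such values are derived; the counts are invented, not the study's:

```python
def diagnostic_indices(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Sensitivity, specificity and predictive values from a 2x2 table
    (test positive = marker below the cut-off, outcome = lethal)."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Hypothetical counts:
print(diagnostic_indices(tp=8, fp=3, fn=2, tn=27))
```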
TNFα may therefore be involved in disseminated intravascular coagulation in sepsis. We determined TAT, PF1/2, D-dimer, t-PA, u-PA, PAI-1 and vWF levels in patients with sepsis or septic shock. Patients received the anti-TNFα monoclonal antibody MAK 195F (Knoll AG, Ludwigshafen), whereas patients served as controls. We found a significantly lower level of u-PA in anti-TNFα-treated patients. Since this difference existed before the onset of treatment, it cannot be attributed to TNFα antagonisation. All other parameters investigated did not differ significantly between the two groups throughout the study period. The failure to detect modulation of hemostasis by anti-TNFα might be explained by delayed initiation of treatment in clinical sepsis. In animal experiments it has been observed that the antibody prevented lethal endotoxin effects when given prophylactically or minutes after endotoxin challenge, but not when it was administered . hours later. In addition, beneficial clinical and hemostatic effects of TNFα antagonisation might be observed only in subgroups of patients with hyperinflammatory sepsis. Larger studies addressing this point are under way.

Protease receptors for thrombin and trypsin have been described for different cell lines. We investigated the ability of trypsin to activate human umbilical vein endothelial cells (HUVEC). Cell activation was measured by the increase of intracellular free Ca2+ ([Ca2+]i) with the help of microscope fluorometry (fura-2) and by von Willebrand factor release measured with a sandwich ELISA. Incubation of HUVEC with thrombin ( U/ml) or trypsin ( nM) produced a – fold increase of [Ca2+]i. A subsequent homologous stimulation after s led to a – fold lower [Ca2+]i than the first stimulation; the cells had therefore been desensitised by the first stimulation. After inhibition of the proteolytic activity of trypsin by soybean trypsin inhibitor, trypsin failed to induce an increase of [Ca2+]i. In cross-stimulation experiments with thrombin and trypsin we could demonstrate that cells first stimulated with thrombin showed a second maximal response upon subsequent stimulation with trypsin; the same effect was measured with trypsin as the first and thrombin as the second stimulus. Trypsin and thrombin induced a release of von Willebrand factor ( – fold in comparison to unstimulated cells). We found a vWF release dependent on the concentration of trypsin, similar to thrombin. An electrophoretic analysis of the released von Willebrand factor showed a different multimeric composition of vWF between trypsin and thrombin stimulation. These results indicate that there might be a protease receptor for trypsin on HUVEC distinct from the thrombin receptor.

Clinical and laboratory findings of coagulopathy were investigated in an -year survey of children's hospitals. A total of meningococcal infections were evaluable. Severe disease (characterized by the need for mechanical ventilation, dialysis and/or catecholamines) was seen in of these children; of those, survived and died. Clinical signs of severe coagulopathy were seen in children: ecchymoses (n = ) and skin necrosis (n = ) were associated with increased mortality ( % and %, respectively, compared to . % overall mortality). Five of the surviving children with skin necroses required surgical interventions (skin transplantation and/or amputations). Petechiae were frequent (n = ) and, as an isolated finding, not related to severe disease or fatal outcome ( % mortality).
Platelet counts at admission were lower in non-survivors ( th– th percentile – /µl, median /µl) than in survivors ( th– th percentile – /µl, median /µl). AT III values showed no difference between survivors and non-survivors. Protein C was available in only a few patients (n = ): in this subgroup, protein C was lowered in patients with limited disease ( th– th percentile – %, median %) as well as severe disease ( th– th percentile – %, median %). In conclusion, the findings "ecchymoses" and "skin necroses" were related to fatal outcome and were therefore included in a prognostic score for the severity of meningococcal disease.

The influence of irradiation on PAI-1 and vWF levels in human umbilical vein endothelial cell cultures. K. Fragiadaki, C. Salat, R. Pihusch, B. Reinhardt, M. Penovici, E. Hiller. Med. Klinik III, Klinikum Großhadern der Ludwig-Maximilians-Universität München. An elevation of PAI-1 in bone marrow transplant recipients developing veno-occlusive disease (VOD) of the liver has been described earlier. Endothelial cell damage due to the preparative myeloablative radiochemotherapy is supposed to be an important step in the pathogenesis of the disease, which is characterized by an obstruction of small intrahepatic venules. In order to investigate a possible role of irradiation, we studied the influence of several doses ( , , , Gy) on PAI-1 and vWF levels in the supernatant of human umbilical vein endothelial cell cultures (HUVEC). PAI-1 antigen and vWF were determined by enzyme immunoassays. Whereas PAI-1 and vWF levels remained unchanged after irradiation with Gy and in control cultures, a rise was observed one day after irradiation with Gy (mean day − → day + ) in PAI-1 ( , % → , %) and vWF ( % → , %) levels. The increase was more pronounced and reached statistical significance after a dose of Gy (PAI-1 % → , % and vWF % → %). Both PAI-1 and vWF levels decreased on day after irradiation with and Gy. Our results indicate that irradiation induces an increase of PAI-1 and vWF in endothelial cells. Nevertheless, this effect was observed only at doses above those used during conditioning, when patients receive × Gy. Additional factors therefore seem to be of significance. Cytokines like TNFα enhance PAI-1 and vWF in endothelial cell cultures and are known to be elevated in BMT-associated complications. It can be speculated that irradiation in concert with these factors may contribute to the development of veno-occlusive disease.

Disseminated intravascular coagulation is characterized by high consumption of coagulation factors, systemic elevation of fibrinolysis by t-PA and concomitant elevation of PAI-1 secreted from inflamed endothelial cells. In an attempt to investigate the contribution of inflammatory cytokines, endothelial cell lines of microvascular origin were stimulated in vitro and PAI-1 antigen was measured h, h and h after stimulation. In contrast to published results from experiments performed with macrovascular human umbilical vein endothelial cells (HUVEC), our results obtained with different microvascular endothelia isolated from skin, solid tumor tissue and bone marrow revealed that inflammatory cytokines reduced PAI-1 antigen levels. In addition to TNF-α ( ng/ml) and LPS ( pg/ml), we found that IL- ( U/ml) and GM-CSF ( U/ml) also reduced PAI-1 levels within the first h of incubation (from ng/ml to – ng/ml), and the effect was even more pronounced after h and h (from ng/ml to ng/ml).
IL- ( U/ml) and LPS ( pg/l) also reduced constitutive levels of PAI-1, but the effect occurred later than h after addition of the stimulator. The strongest synergistic effect was demonstrated with GM-CSF plus IL- , resulting in PAI-1 suppression of % after h and % after h. In contrast, G-CSF ( U/ml) induced an immediate upregulation of PAI-1 antigen ( to ng/ml after h and to ng/ml after h). Stimulation of PAI-1 levels was also observed with TGF-β ( pg/ml), although not earlier than h of incubation. Interestingly, both stimulatory cytokines, i.e. G-CSF and TGF-β, were alone able to counteract the decrease of PAI-1 antigen by TNF-α, but only a combination of G-CSF plus TGF-β neutralized the effect of IL- . These results indicate that inflammatory cytokines regulate PAI-1-dependent fibrinolysis in a synergistic and antagonistic fashion.

We established the culture of human brain microvascular endothelial cells (HBMEC) in order to investigate the pathophysiology of human cerebral malaria, which is still associated with a high mortality rate. It is widely accepted that, among the reasons for the fatal outcome of cerebral malaria, the interaction of endothelial cells with cytokines and parasites, with subsequent changes in haemostaseological parameters, is involved. The human microvascular endothelium may therefore play a decisive role in the pathophysiology of cerebral malaria. Erythrocytes containing later stages of P. falciparum specifically bind to capillary EC in vivo (sequestration). TNF-α, IL- and IL- are considerably elevated in severe malaria. Coagulation factors such as tissue factor and von Willebrand factor are affected by malaria, suggesting involvement of the HBMEC in cerebral malaria. So far, research on this involvement has mostly been performed on EC cultured from human umbilical veins (HUVEC). The relevance of this model may be questioned on the grounds that the capillary endothelium probably plays a greater role than the endothelium of the large vessels. Besides, some properties of the endothelium seem to vary with the organ of origin. For these reasons, our laboratory has established the HBMEC as a model to study the pathophysiology of human cerebral malaria. To demonstrate the relevance of this model in the context of malaria, HBMEC were challenged with sera from different patients with severe P. falciparum malaria and with serum from a healthy donor. We can demonstrate that in cells challenged with malaria patient sera, ICAM-1 and substance P were upregulated, whereas cells challenged with serum from a healthy donor expressed neither ICAM-1 nor substance P. These results strongly suggest the relevance of this model for vessel involvement in malaria.

Both histamine and serotonin have been described as potent stimulators of von Willebrand factor (vWF) release from human umbilical vein endothelial cells (HUVEC). We performed experiments to differentiate the receptors for histamine- and serotonin-induced vWF release. Quite unexpectedly, we did not find any significant vWF release after the addition of serotonin to HUVEC or human artery endothelial cells (HUAEC) in concentrations from . µM to µM. In the case of histamine ( . µM – µM) we measured a vWF release – fold that of unstimulated cells. This release was of the same order of magnitude as the release induced with U thrombin. To verify these results, we measured the effect of histamine and serotonin on the intracellular Ca2+ concentration ([Ca2+]i) in HUVEC and HUAEC by fura-2 ratio fluorometry (the standard conversion to [Ca2+]i is given below).
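For orientation, fura-2 ratio measurements of the kind used here are conventionally converted to calcium concentrations via the Grynkiewicz relation; this is the textbook formula, not a detail reported by the authors:

$$ [\mathrm{Ca}^{2+}]_i = K_d \cdot \frac{R - R_{\min}}{R_{\max} - R} \cdot \frac{S_{f2}}{S_{b2}} $$

where R is the measured 340/380 nm fluorescence ratio, R_min and R_max are the ratios at zero and at saturating calcium, S_f2/S_b2 is the ratio of the 380 nm signals of the free and the Ca2+-bound dye, and K_d is the effective dissociation constant of the fura-2/Ca2+ complex.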
Cells were labelled with fura-2, and the change in fluorescence after agonist addition was measured with a microscope fluorometer. Using the same agonist concentrations as above, we found an – fold increase of [Ca2+]i with histamine or thrombin, but no effect of serotonin. These results indicate a similar activation of human endothelial cells by histamine and thrombin, and that serotonin stimulates neither endothelial vWF release nor an increase of [Ca2+]i.

Activation and/or dysfunction of the endothelium can be triggered by cytokines (e.g. interleukin-1, tumor necrosis factor-alpha) or bacterial substances (e.g. endotoxins) and may contribute to shock and multi-organ failure. PAI-1 and TM (thrombomodulin) were assessed as parameters of activated endothelium following BSCT at three- to four-day intervals from the start of conditioning therapy through day + . Data were compared to the occurrence of sepsis, veno-occlusive disease (VOD), capillary leakage syndrome (CLS) and graft-versus-host disease (GVHD). Patients with none of these complications served as controls. PAI-1 and TM were increased in all patients with sepsis, CLS, VOD and/or GVHD. PAI-1 peaked at days to after stem cell transplantation, and the increase was highest in sepsis and lowest in CLS. The increase in TM values was somewhat delayed (day + ) and was highest in VOD and CLS and lowest in GVHD. PAI-1 and TM are sensitive markers of endothelial activation in sepsis, VOD, CLS and/or GVHD, but they do not allow differentiation between these complications.

Endothelin (ET) is the most potent vasoconstrictor known. It is known that the ET plasma concentration is correlated with a poor prognosis in patients with non-ischemic cardiomyopathy (CM). The contribution of the heart to the production of ET is still unknown. To investigate the pathogenetic mechanism in patients without coronary artery disease (CAD), we examined patients with hypertension. Pulmonary capillary wedge pressure (PCWP) was measured in all patients. ET and its precursor big endothelin (big-ET) were determined at rest and after pharmacological stimulation with dipyridamole ( . mg/kg body weight), which increases coronary blood flow – fold via a non-endothelial pathway. Transcardiac ET and big-ET concentrations were determined from arterial blood samples obtained from the aorta and, simultaneously, from the coronary sinus (venous blood). Blood samples were collected into ice-chilled Vacutainer tubes and stored after centrifugation at − °C. ET and big-ET were analysed, after extraction on a Sep-Pak C cartridge, by radioimmunoassay (Immundiagnostik). It is concluded that ET is increased with elevated filling pressures of the heart in patients with CM. It is not produced in considerable quantity by the heart, neither at rest nor at increased blood flow. Therefore the lung has to be considered the major organ of production of ET and big-ET in patients without CAD.

To characterize the incompatibility of blood with foreign surfaces, valid in vitro methods, especially for testing platelet function, are necessary. It seems effective to use test systems which can also be helpful later on in the clinic, when foreign surfaces (e.g. venous catheters) are used and evaluated in so-called phase studies.
we studied the influence of reference polymers, under standardized and controlled flow conditions, on platelets in citrated blood specimens of healthy blood donors. the following tests were performed pre and post platelet-polymer contact: decrease of platelet count, platelet aggregation (wu-grotemeyer index), and analysis of platelet spreading capacity on standardized plastic surfaces using a visual microscopic evaluation according to breddin and bürck ( ) and an interactive computer-aided system (ibas, kontron gmbh, münchen, frg), digitalizing the morphological picture of the platelet slides and performing area detection with a resolution of x pixels. results: platelet counts showed significant differences pre and post polymer contact; the wu-grotemeyer index demonstrated platelet activation only on blood contact with large volumes of polymeric material, whereas both visual and computer-assisted evaluation of platelet spreading ability revealed a marked shift in the different classes of platelets: platelet activation results in a decrease of large structural elements and an increase of elements with spider threads (pre contact (n= ): ± large forms of platelets, ± small forms and ± spider forms; post contact (n= ): ± large forms, ± small forms and ± platelets with spider threads). in some series there were significant differences between visual and computer-aided evaluation in the detection of small and spider forms. however, the relative increase of these non-spread spider forms could be demonstrated with both methods (wilcoxon test). we therefore conclude that platelet morphometry with both methods is a sensitive and reliable ex vivo method to evaluate platelet interactions with artificial surfaces and can also be used later on in phase- -studies in patients. however, the ibas system requires further improvement in hard- and software to reduce the high expenditure of this method. despite largely standardised methods such as hypothermia and cardioplegia, the perioperative myocardial infarction rate is still high at approx. %. in cardiovascular surgery it is well known that various cardioplegic solutions are employed for myocardial protection during the ischemic phase. in order to evaluate the possible influence of these solutions we selected two of the most commonly used cardioplegic solutions for investigation in a randomised double-blind study: htk (group ) and st. thomas (group ). after randomisation each group consisted of patients who had to undergo aortocoronary bypass surgery. the aim of the investigation was to establish possible varying cellular changes during the reperfusion phase or in the early operative phase, in order to be better able to apply reinforcing clinical measures. in the context of this study the classical enzyme-diagnostic methods ck, ck-mb and ldh proved useful, however not convincing. still, we have in the meanwhile been able to show that cardiac muscle troponin t is a particularly sensitive parameter as regards differentiated ischemic damage to the myocardium. this we were able to confirm in extensive preliminary trials. cardiac troponin t was measured with a one-step immunoassay using two highly specific monoclonal antibodies directed against two different epitopes of cardiac troponin t. simultaneously the corresponding pre- and postoperative ecg was registered. further, within this context we investigated parameters that indicate cellular damage, such as platelet factor 4 (pf4), t-pa, interleukin- and pmn-elastase.
in the reperfusion phase there is a significant rise in troponin t in group , while in group these values remain practically unchanged up to the 1st postoperative day. of special importance is interleukin- , since according to most recent studies the release of this substance leads to platelet activation via the arachidonic acid metabolism. this pathway must, further, be regarded within the context of free radical formation. on the 1st postoperative day the values in group are significantly higher. the effects of membrane damage, observed via pf4 and pmn-elastase, also differ between the two groups. on the basis of this study we arrive at the conclusion that htk cardioplegia is essentially less damaging than the st. thomas solution. r. hetzer; department of hematology and oncology, virchow klinikum, humboldt university, berlin, germany. we investigated the influence of two different vad systems on these hemostatic changes. vads were implanted in patients [ bi-vad (berlin heart), left vad (novacor n )] with end-stage heart disease who were awaiting heart transplantation. the following hemostatic parameters were measured during the first days of bridging or until heart transplantation: thrombin-antithrombin iii (tat) complexes, prekallikrein, factor (f) xii, plasminogen, α2-antiplasmin, and β-thromboglobulin. results: during the first week of bridging, significantly higher tat levels were observed in novacor patients compared to berlin heart patients. prekallikrein activity levels were significantly lower in the berlin heart patients in the early bridging period. all other parameters were comparable in both groups throughout the entire observation period. differences in hemostatic parameters became apparent only in the early bridging period, with more enhanced prothrombin activation in the novacor group and more prominent contact activation in the berlin heart group. avoidance of the transmission of viral infections and savings in the use of blood products have encouraged the use of apparative intraoperative autotransfusion techniques. patients and methods: after randomization, apparative intraoperative autotransfusion was performed in patients during elective hip surgery using the haemonetics cell saver iii, haemonetics cell saver v, electromedics elmd, haemolite and fresenius continuous autotransfusion system (cats). at defined times we determined a lab panel (clinical chemistry, lipids, proteolytic capacity, hemolysis, coagulation panel) at determination points in the reservoir, in the retransfused blood and in the patient. results: no significant differences concerning proteolytic capacity, prothrombin time, platelets, lipids, electrolytes. increased hemolysis (p< . ) in the hcs iii group vs. the other groups ( min after application of the retransfused blood). low heparin concentrations in retransfused blood in the hcs iii group ( . ± . u/ml) vs. high concentrations in the cats group ( . ± . ; p= . ). parameters of thrombin generation were elevated in the hcs iii group vs. the other groups (p= . ). conclusions: the use of different apparative autotransfusion systems during elective hip surgery results in disturbances of hemocompatibility. the activation of the coagulation system during collection and filtering is partly influenced by the elimination kinetics and the dose regime of heparin. intraoperative autotransfusion must therefore be managed very carefully, and possible adverse effects of perioperative heparin peak levels have to be considered.
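a minimal sketch of the kind of nonparametric two-group comparison reported above (e.g. hemolysis of retransfused blood, one autotransfusion system vs. another); the sample values are hypothetical and merely stand in for the measured hemolysis indices:

from scipy.stats import mannwhitneyu

# hypothetical hemolysis values per device group
hcs_iii = [1.2, 1.5, 1.1, 1.8, 1.4]
cats = [0.6, 0.8, 0.7, 0.9, 0.5]

stat, p = mannwhitneyu(hcs_iii, cats, alternative="two-sided")
print(f"mann-whitney u = {stat}, p = {p:.3f}")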
little information is available on the management of patients with factor viii deficiency who require cardiac surgery. we report the case of a -year-old man with factor viii deficiency and combined severe aortic stenosis and incompetence and mitral incompetence, who underwent a double valve replacement at our institution. he had a history of several bleeding episodes following minor surgery. previous factor viii levels were between and %. using standard cardiopulmonary bypass, a double valve replacement with a and a mm bileaflet prosthesis in the aortic and mitral position, respectively, was performed. a high-dose aprotinin regime was used ( . x iu). three doses of factor viii concentrate were given in the perioperative period, totalling u until the 1st postoperative day. repeated measurements of the factor viii level were performed. the postoperative chest tube drainage was ml. until the th postoperative day an additional dose of iu of factor viii was given to maintain a level of at least %. the obligatory anticoagulation was achieved initially with heparin i.v. in therapeutic dosage. due to a persistent 3rd-degree av block a permanent pacemaker was inserted, with an additional iu of factor viii. on the th postoperative day warfarin was commenced, aiming for an inr of . - . . the patient was discharged home thereafter. he was trained to monitor his inr with a coaguchek device. no bleeding episode occurred during the first months of follow-up. open heart surgery can be performed safely in patients with factor viii deficiency with the use of factor viii concentrates and monitoring of factor viii levels. coating of biomaterials was developed using synthetic polymers with incorporated anticoagulants. stents were coated with a thin layer consisting of a polylactide polymer containing peg-hirudin and a stable prostacyclin analogue. these materials were tested in a "human shunt model" using non-anticoagulated blood of healthy volunteers. within minutes, uncoated stents were covered by fibrin and aggregated platelets, which could be seen macroscopically and by scanning electron microscopy; coated stents were free from coagulation plugs. these observations were supported by analysis of coagulation activation markers. unlike coated stents, uncoated stents revealed high levels (> detection limit) of tat complexes and prothrombin fragments (f1+2). in a further series of experiments, stents were tested in sheep. in sheep, stents (coated/uncoated palmaz-schatz stents) were placed by conventional techniques in the left anterior descending artery. anticoagulant therapy consisted of a heparin bolus and intravenously given aspirin before stent implantation. no anticoagulation was given thereafter. existing data show hyperplasia in the area of uncoated stents, which was reduced around coated stents (this study will be finished in january ). this coating technique with incorporated anticoagulants reduces thrombogenicity during the early and late phase of biomaterial implantation. studies concerning catheters, vascular prostheses and oxygenators are in progress. mechanical circulatory support (mcs) is a therapy for patients (pts) with end-stage cardiac insufficiency. during mcs, thromboembolic events due to the surface thrombogenicity of the implanted device are feared complications. activated blood platelets play a major role in this context. therefore, the patients' platelet morphology was investigated.
during the period of mcs, using the novacor left ventricular assist system n , blood samples of pts were examined by means of scanning electron microscopy (sem). blood was collected preoperatively and, after implantation, daily during the first week as well as weekly for the first months. samples were drawn via an -gauge cannula into cacodylic-acid-buffered glutaraldehyde, and platelets were prepared for morphological investigation. platelet alterations were classified as non-activated, activated and aggregated, based on "shape change" morphology. additionally, the common blood coagulation parameters were evaluated. preoperatively, . ± . % activated platelets were found. within the first postoperative week, the mean level of activated platelets rose to . ± . % (p< . ). comparing short- (< days) vs. long-term (> days) mcs, a significant difference in activated platelets (overall mean values) could be seen ( . ± . % vs. . ± . %, p= . ). during mcs, a correlation between hemolysis and platelet aggregates, as well as between the values of activated clotting time and activated platelets, was observed. also, specific platelet deformations and damage appeared during mcs which could not be found preoperatively. all pts with mcs showed alterations of their platelet morphology induced by the activation of the implanted synthetic material. with regard to the postoperative antithrombotic therapy, these observations should be taken into consideration. during extracorporeal circulation (ecc) the blood and its components are exposed to artificial surfaces and inflammatory responses are activated, especially the complement, coagulation, fibrinolytic and kallikrein systems. furthermore, leukocyte activation occurs and platelet function is impaired. these humoral and cellular systemic responses are known as the "postperfusion syndrome", with clinical symptoms like leukocytosis, increased capillary permeability, accumulation of interstitial fluid and organ dysfunction. the importance, and even perhaps the existence, of the damaging effects of cpb have been widely debated in the literature over the past years. many efforts have been made to reduce traumatizing factors, e.g. the use of membrane instead of bubble oxygenators. recently, heparin-coated equipment and tubings have been proposed to avoid excessive contact activation during cpb. the study presented here was designed to assess changes in coagulation and fibrinolytic activity in patients undergoing cpb. in this regard we investigated coagulation parameters like fibrinogen, antithrombin, prothrombin fragments f1+2, thrombin-antithrombin complex, tissue factor and fibrin monomers, and parameters of the fibrinolytic system like tissue plasminogen activator, plasmin-antiplasmin complex, d-dimers and plasminogen activator inhibitor before, during and after cpb. the activation of the complement cascade was followed by measuring the concentrations of c a, c and c c. the results demonstrate distinct alterations in the above-mentioned parameters. in spite of high-dose heparinization (act > s) combined with antifibrinolytic treatment, an activation of the coagulation system was observed immediately after the onset of cpb, followed by an activation of the fibrinolytic system. therefore, further efforts should be made to develop new anticoagulatory regimens and to improve the biocompatibility of the materials used for cpb. during cardiopulmonary bypass, blood is exposed to nonphysiologic conditions.
the contact with artificial surfaces and mechanical stress results in a perioperative response which includes activation of the complement, coagulation, fibrinolytic and kallikrein systems, activation of neutrophils with degranulation and protease enzyme release, oxygen radical production and the synthesis of various proinflammatory cytokines. this so-called "post-pump inflammatory response" has been linked to respiratory distress syndrome, renal failure and neurologic injury. our goal was to investigate the time course of cytokine levels and the activation of leukocytes and platelets and to quantitate leukocyte subpopulations in patients undergoing cpb. at different time points before, during and after cpb we determined the levels of interleukin (il)- , il- , il- , il- , il- , il- , tumor necrosis factor α (tnf-α) and interferon γ (ifn-γ) using elisa techniques. lymphocyte subpopulations were characterized by flow cytometry with specific monoclonal antibodies against cd3 (pan t-cell marker), cd4 (surface antigen on t-helper cells) and cd (surface antigen on b cells); monocytes were determined by cd14, and platelets by cd (activated gpiib/iiia) and cd42b (gpib). single-cell activation was analyzed using markers against cd25 (il-2 receptor), cd (il- receptor), hla-dr (mhc class ii), cd71 (transferrin receptor) and cd69 (activation inducer molecule); platelet activation was monitored with an antibody against cd62p (gmp-140). preliminary results revealed distinct increases in il- , il- and il-10 following cpb, whereas tnf-α and ifn-γ levels were not significantly influenced. furthermore, activation of particular cell populations was observed. finally, our investigations should contribute to a better understanding of the complex humoral and cellular responses induced by cpb and thus might help to develop new strategies to circumvent the negative impacts of cpb. optimal adjustment of anticoagulation in machine plasmapheresis is important for the quality of the prepared fresh frozen plasma (ffp) as well as for the safety of the donation. in the present study the suitability of prothrombin fragment f1+2 for the assessment of anticoagulation during plasmapheresis was investigated. material and methods: plasmapheresis procedures were performed on donors ( female, male) using different plasmapheresis machines (a , baxter; mcs p, haemonetics; pph , electromedics/medtronic). acid citrate dextrose formula a (acd-a) in a ratio to whole blood of : was used for anticoagulation. the concentration of f1+2 in the donors' blood was measured before and after plasmapheresis and in the prepared ffp. the actual acd-a volume used was also registered. results: there was a significant rise of the f1+2 concentration in the donors' blood after plasmapheresis with each of the three machines: a : . vs . , p < . ; mcs p: . vs . , p < . ; pph : . vs . , p < . . the ffp prepared with each machine showed the following f1+2 concentrations: . ± . , . ± . and . ± . , respectively. the difference between the groups was not significant. the elevation of the f1+2 concentration in the donor's blood showed a negative correlation with the volume of acd-a used. during of the procedures, technical problems occurred (inadequate venous access, occlusion of the citrate tube, reduced whole blood flow). after these procedures there was a marked elevation of f1+2 in the donors' blood ( . ± . ), accompanied by an elevated f1+2 concentration in the prepared ffp.
conclusion: these data show that f1+2 is a suitable parameter for the assessment of anticoagulation during plasmapheresis. several epidemiologic studies have demonstrated that fibrinogen is an independent cardiovascular risk factor and should be considered for screening programs. prothrombin-time-derived fibrinogen (df) measurement combines the advantages of an established, highly reproducible automated method with no additional reagents, except for calibration. several studies showed that df values correspond well with the clauss method, except in cases such as thrombolytic therapy, in which the df results are higher. however, no results exist on whether the df values are also comparable to the established clauss method in patients with coronary heart disease with fibrinogen as a risk factor. the aim of our study was to compare df values to clauss method results in cardiac patients, especially in patients before and after coronary bypass grafting (cabg). measurements of df were performed on an acl (il) using the pt-fibrinogen-hs reagent. the fibrinogen clauss method was done on the acl using fibrinogen c reagent (il) and on a kc (amelung) with fibrinogen a reagent (boehringer mannheim). for calibration we used the calibration plasma half volume (il) with the fibrinogen concentration proposed by the manufacturer. plasma samples were obtained from patients at admission before cabg and postoperatively up to week, and from healthy persons (staff). within-assay imprecisions using normal and abnormal controls (il) were comparable with both methods, showing cvs between . and . %. in normal healthy persons the medians of the df and the clauss method run on the acl were very similar ( vs mg/dl), whereas kc values were about % lower ( mg/dl). in cabg patients at admission we found the same differences as in normals with the clauss method (acl: vs kc : mg/dl); however, the df values were significantly higher (median mg/dl). if we took a cutoff value of mg/dl, as suggested by the results of the northwick park heart study, we would categorize out of patients into the high-risk group using the df method, with the clauss-acl method and with the clauss-kc method, i.e. nearly % more patients were classified into the high-risk group using the df method. postoperative samples showed the expected increases due to the acute phase response, with the same magnitude of differences. because of its rapidity and reproducibility the df method is well suited for routine measurements; however, standardization remains an urgent task in order to avoid misinterpretation of results. for fibrinogen measurements in clinical laboratories, the two most widely used methods are the clotting time method according to clauss (cfib) and the so-called "derived" fibrinogen method (dfib) implemented in optical coagulometers, with the fibrinogen concentration being derived from the optical density of the fibrin clot in a standard prothrombin time (pt) assay. it is well known that under certain circumstances, e.g. in the presence of fibrin(ogen) degradation products (fdp), there is a discrepancy between the two methods, with higher values for dfib than for cfib. yet the opposite discrepancy, i.e. fibrinogen values derived from the optical density of the clot grossly lower than values from clotting time assays, seems to be very rare and is poorly understood so far. the patient (male, years) had ingested the esterase inhibitor parathion (e605) in a suicide attempt and was treated with high doses of atropine.
he had no clinical signs, history or family history of bleeding or thrombotic disorders. except for a very low pseudocholinesterase activity, all laboratory results were normal, including pt, aptt, thrombin time and factor xiii. pt and aptt did not differ between an optical coagulometer (electra c, mla) and a mechanical one (kc , amelung). there was no evidence of disorders known to interfere with hemostasis, like paraproteinemia or dyslipidemia. however, in all blood samples received for clotting tests during a period of days, the macroscopic appearance of the fibrin clot was quite unusual (only slightly turbid/almost transparent) and there was a striking discrepancy between a very low or low dfib on the electra (pt reagent: thromboplastin is, dade) and a normal or high cfib (kc ; thrombin reagent, dade). on admission, values were mg/dl (derived) vs. mg/dl (clauss). cfib rose to mg/dl with dfib at mg/dl in the last sample on day . in all samples dfib was about % ( - ) of cfib. when the patient's plasma was added to normal pooled plasma it caused, in a dose-dependent manner, values lower than predicted for dfib and values slightly higher than predicted for cfib. in the absence of data from additional (e.g. immunologic) methods, the following principal possibilities (and combinations) have to be considered: ) normal fibrinogen concentration and clot formation rate, but abnormal optical properties of the clot (cfib correct, dfib falsely low); ) normal optical properties of the clot, but accelerated clot formation and very low fibrinogen concentration (dfib correct, cfib falsely high). in either case, the molecular basis could be: a) a genetic or acquired molecular abnormality of fibrin/fibrinogen; b) an interfering substance. direct effects of the toxic agent parathion and/or the antidote atropine are not likely to be the cause, since other patients, often with more severe parathion intoxication requiring higher doses of atropine, showed normal optical density of the clot. we hope to perform a more in-depth investigation of this abnormality in the future, including various methods, reagents and instruments for fibrinogen measurement, a survey of the patient's family, and studies of the molecular nature of the phenomenon. increased fibrinogen is known to be an independent predictor of subsequent acute coronary syndromes. however, a multitude of methods for fibrinogen determination is available, and there is a lack of standardisation among fibrinogen assays. in a family cohort study (patients with combined hyperlipidaemia and/or hyperuricaemia), fibrinogen was determined in plasma samples from family members using a functional and an immunochemical assay. the functional assay according to clauss was performed on the ca analyser using the fibrinogen a test from boehringer. the immunonephelometric assay was performed on the behring nephelometer system using the reagent and standard from behring. a good similarity between both assays was obtained at low and high fibrinogen levels, as well as in samples with increased c-reactive protein (crp). values obtained by both assays correlated similarly with total cholesterol, ldl-cholesterol and apolipoprotein b. the ratio functional fibrinogen / immunochemical fibrinogen showed no dependence on cholesterol, t-pa, von willebrand factor or crp. release of the two fibrinopeptides a from fibrinogen generates desaa-fibrin monomer, which rapidly aggregates, forming fibrin complexes.
fibrin monomers can be detected in plasma samples, after chemical desaggregation of fibrin complexes using thiocyanate, by monoclonal antibody binding to the alpha-chain neo-n-termini generated by fibrinopeptide release. although postulated, an intermediate of fibrin formation carrying one fibrinopeptide a and one fibrin alpha-chain neo-n-terminus has so far escaped analytical procedures. we have employed a monoclonal antibody specific for the fibrin alpha-chain neo-n-terminus, mab b , attached to magnetic microparticles, for the isolation of fibrin-related material from plasma samples of patients with elevated soluble fibrin. the material was desorbed by sds-urea buffer and subjected to sds-page and immunoblotting. immunostaining with panspecific anti-fibrinogen and anti-fdp-e antisera showed a range of bands corresponding to fibrin monomers and fibrin derivatives containing the fibrin e-domain. immunostaining with monoclonal anti-fibrinopeptide a antibody resulted in a doublet band corresponding in size to fibrin monomer. similar results were obtained with polyclonal antisera against fibrinopeptide a. for a more quantitative approach, desa-fibrin monomer was detected by an elisa procedure using mab b as capture and monoclonal anti-fibrinopeptide a antibody as tag. a sample with an extremely high level of desaa-fibrin monomer, determined by elisa (enzymun®-test fm), was used for calibration, since reference material is not available. a correlation of r=0.9 was found between desaa-fibrin monomer and relative desa-fibrin monomer levels. detection of desa-fibrin monomer required sample pretreatment with thiocyanate for desaggregation of fibrin complexes. from these preliminary data it appears that desa-fibrin monomer accounts for a fairly constant proportion of soluble fibrin and is a polymerizing species. fibrinogen has been shown to be a major cardiovascular risk factor. especially for epidemiological studies, exact quantitation of fibrinogen in clinical plasma samples is of great importance. fibrinogen levels are generally measured by the clotting assay according to clauss, or by determination of derived fibrinogen values upon photometric measurement of the prothrombin time (derfbg). the clotting assay has been shown to be influenced by high levels of soluble fibrin derivatives. the pt-derived fibrinogen levels appear rather convenient in clinical routine, since no additional reagents are needed. we have compared the clauss assay and derfbg with a turbidimetric fibrinogen assay using snake venom protease for fibrinopeptide release, performed on photometric autoanalyzers. d-dimer antigen was measured in parallel using tina-quant d-dimer lpia. results were correlated with total fibrinopeptide a release by thrombin, measured by elisa. a total of samples were included, of which samples ( %) were recorded as above the measuring range by derfbg. these samples encompassed a range of . - . g/l and . - . g/l in the clauss and turbidimetric assay, respectively. the range of values measured by the derfbg assay was . - . g/l, corresponding to . - . g/l and . - . g/l in the clauss and turbidimetric assay, respectively. the correlation of derfbg with the clauss assay was r= . ; the correlation with the turbidimetric assay was r= . for the values actually detected. the correlation between clauss and turbidimetric assay was r= . for all values. there was no dependency of test results or inter-test variation upon d-dimer.
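a minimal sketch of the method-comparison statistic quoted above (pearson correlation between paired fibrinogen results from two assays on the same samples); the paired values are hypothetical, not the study's data:

import numpy as np

# hypothetical paired fibrinogen results (g/l) on the same samples
clauss = np.array([2.1, 3.4, 4.0, 5.2, 6.8, 1.9])
turbidimetric = np.array([2.0, 3.5, 4.2, 5.0, 7.1, 1.8])

r = np.corrcoef(clauss, turbidimetric)[0, 1]
print(f"r = {r:.2f}")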
correlation graphs displayed a decreased test response of the clauss assay in the high concentration range, resulting in an underestimation of the fibrinogen concentration. the derfbg assay, in contrast, showed normal-range values in samples from patients under fibrinolytic treatment which had low fibrinogen levels in the other assays. the correlation with fibrinopeptide a release was r= . for the clauss assay, r= . for the turbidimetric assay, and r= . for derfbg. for clinical routine, derfbg appears to be applicable for all samples between . and . g/l, with the exclusion of samples from patients with fibrinolytic treatment or endogenous hyperfibrinolysis. other samples may be analyzed by clotting assay or turbidimetric assay, although the latter appears to be better suited for measurement of high-range samples. the ki for inhibition of pk is . pmol/l. the antifibrinolytic activity of the inhibitors was determined by measuring the lysis of radiolabelled human plasma clots. the compounds which inhibit plasmin and pk influence streptokinase-induced clot lysis remarkably, but not lysis induced by uk and tpa. surprisingly, inhibitors of uk and tpa do not influence clot lysis induced by uk or tpa. the structure-activity relationships for the inhibition of plasmin, uk, tpa and pk could help in the design of more potent inhibitors of fibrinolytic enzymes. uk inhibitors are of interest for the development of anti-invasiveness drugs, while plasmin/pk inhibitors could be prototypes of a "synthetic aprotinin". in the ecat angina pectoris study, t-pa antigen was an independent risk factor for subsequent acute coronary syndromes. pai-1 indicates the risk but depends on other known risk factors. it should be tested, in members of a family cohort study (patients with combined hyperlipidaemia and/or hyperuricaemia), whether the active pai-1 antigen or the whole pai-1 antigen shows a stronger relation to t-pa and metabolic variables. the active pai-1 antigen was determined using the elisa actibind pai-1 (technoclone / immuno); the whole pai-1 antigen was measured using the elisa pai-1 (technoclone / immuno). t-pa activity was determined with the coaset t-pa from chromogenix; the tintelize tpa from biopool was the test used for determination of t-pa antigen. the active pai-1 antigen showed a stronger correlation to t-pa activity and t-pa antigen than the whole pai-1 antigen. circulating t-pa activity was influenced predominantly by the active pai-1 antigen. both pai-1 antigens were correlated in a similar manner with metabolic variables, lipoproteins and f vii.
table: correlations of active and whole pai-1 antigen (** p < , )
                        active pai-1 antigen    whole pai-1 antigen
active pai-1 antigen          ,                       , **
whole pai-1 antigen           , **                    ,
t-pa activity                - , **                  - , **
t-pa antigen                  , **                    , **
body mass index               , **                    , **
triglycerides                 , **                    , **
total cholesterol             , **                    , **
ldl-cholesterol               , **                    , **
hdl-cholesterol              - , **                  - , **
apolipoprotein b              , **                    , **
apolipoprotein a-i           - , **                  - , **
the lower relationship of the whole pai-1 antigen to t-pa is obviously caused by patient samples with high levels of whole pai-1 antigen in contrast to normal values of active pai-1 as well as of t-pa. possibly, a high ratio of whole pai-1 antigen / active pai-1 antigen is caused by a rise of latent pai-1, the main form of pai-1 in platelets. the clinical importance of an increased ratio of whole pai-1 antigen / active pai-1 antigen remains under investigation. the cyclic polypeptide antibiotics bacitracin a and bacilliquin from bacillus licheniformis and gramicidin s from bacillus brevis var. g.-b. were used for investigation.
we studied their influence on fibrinolytic and coagulation activity in vitro. methods: to a solution of human plasmin (thrombin) containing . mg of protein ( nih unit)/ml, the antibiotic solution ( . - . mg) was added. we then determined the fibrinolytic activity of the mixtures using azofibrin lysis, and thrombin activity was determined from the speed of fibrin clot formation from a fibrinogen solution. results: the following table summarizes the results obtained in our laboratory (we also give results for the influence of the antibiotics on urokinase activity): ki, mm — the inhibition constant; n.d. — within the studied limits no inhibitory activity was observed; i. — inhibitory activity was observed but ki was not determined; +, ++, +++ — effect of inhibition (in relative indexes). conclusion: the results obtained testify to the necessity of a cautious approach to the use of polypeptide antibiotics in various sorts of therapy, in view of their possible influence on the fibrinolytic and coagulation activity of the organism. these results were used for the preparation in our laboratory of biospecific sorbents containing bacitracin a, bacilliquin and gramicidin s as ligands; they can reversibly bind thrombin, plasmin (plasminogen) and urokinase directly from crude extracts. the enzymes are selectively eluted without substantial losses of specific activity in a yield of - %. there is a great body of rather contradictory information dealing with fibrinolysis in liver cirrhosis, which can be accelerated, normal or reduced, depending on the type of cirrhosis and the investigation techniques (clot lysis, fibrinolytic component measurements). our previous finding was that in vitro plasma clot lysis, induced by exogenously added tpa or streptokinase, proved to be reduced, and this correlated well with the severity of the disease and the elevation of plasmatic von willebrand factor levels. in vitro clot lysis tests induced by tpa were performed in patients with alcoholic liver cirrhosis, utilising a microplate light-scattering assessment method. the tests were repeated, using the same plasma samples for each patient, with a microplate covered by a cultured endothelial cell monolayer (umbilical vein, huvec). clot lysis speed proved to be . - times slower in the huvec milieu in the control group, while in the cirrhotic patients this inhibition was stronger and resulted in a -fold reduction of lysis speed. our results suggest that cirrhotic plasma is able to accelerate the release of fibrinolytic inhibitors from cultured endothelial cells, a phenomenon which may also contribute to the complex alterations of in vivo fibrinolysis in cirrhotic patients. deep vein thrombosis (dvt) is a systemic disease with prolonged clinical manifestation. anticoagulation therapy in dvt is not completely effective. thrombolytic therapy may give rise to a systemic lytic state, and the fibrin-specific agents (scu-pa and t-pa) have short half-lives in the circulation. we investigated the potency of the acylated plasminogen-streptokinase activator complex (gbpg-sk) for deep vein clot dissolution, as compared to the well-known sk and apsac, both in vitro and in vivo in a model of venous thrombosis in an arterio-venous shunt in rats. an in vitro study showed that the fibrinolytic activity of plasminogen activators mainly depends on their stability in plasma.
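the stability comparison that follows rests on first-order deacylation kinetics, where the half-life is t1/2 = ln 2 / k, so a smaller deacylation rate constant means a more stable acyl-enzyme; a minimal sketch with hypothetical rate constants (not the study's values):

import math

def half_life_min(k_per_s):
    # first-order kinetics: t1/2 = ln(2) / k, converted from seconds to minutes
    return math.log(2.0) / k_per_s / 60.0

for name, k in [("slowly deacylated complex", 1.0e-4), ("rapidly deacylated complex", 4.0e-4)]:
    print(f"{name}: t1/2 = {half_life_min(k):.0f} min")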
stability studies were carried out by incubating sk and pg-sk activator complexes in plasma, with euglobulin precipitation. total fibrinolytic activity was measured by the fibrin plate method. gbpg-sk possessed greater stability in human plasma than apsac or sk because of its prolonged inactivation period (the deacylation half-life for gbpg-sk was ± min in contrast with ± min for apsac). the degree of stability of the two acylated thrombolytics (gbpg-sk and apsac) was in inverse proportion to their first-order deacylation rate constants ( . x 10^- and . x 10^- sec^-1, respectively). the fibrinolytic potency of sk, apsac and gbpg-sk was measured by radiolabeled fibrin clot lysis in plasma, and in vivo by lysis of a preliminarily formed radiolabeled fibrin clot inserted into the jugular vein. the fibrinolytic activity of the acylated plasminogen activators gradually increased with time. under sk administration the clot lysis came to an end by hours, while apsac and gbpg-sk had not lost their activity after - hours. gbpg-sk possessed significantly more prolonged fibrinolytic activity than apsac. the acyl-enzymes did not significantly influence plasminogen, α2-antiplasmin and fibrinogen levels in plasma, in accordance with their activity being specific for fibrin-bound plasminogen. in contrast, sk produced a significant depletion of plasminogen, α2-antiplasmin and fibrinogen levels in plasma. it seems, on the basis of in vitro and animal experimentation, that apsac, with its moderately fast deacylation rate, is more suitable for a rapid thrombolytic effect, whereas gbpg-sk, with its slow deacylation rate, is suitable for deep vein thrombosis, where rapid thrombolysis is less critical. it is well known that complete lysis of thrombi is usually not observed in thrombolytic therapy. in the present study we have attempted to quantify a possible mechanism of fibrinolysis inhibition during thrombolysis. i-labelled, partially cross-linked fibrin clots of different volumes ( . - . ml) were immersed in tris-hcl buffer ( ml) containing plasmin ( - nm) at °c. the lysis rate was detected by counting of soluble fibrin degradation products (fdp). in all cases lysis slowed down and stopped within h, although the clots had dissolved to only - %. no irreversible inhibition of plasmin caused by denaturation occurred, as judged by the measurement of fibrinolytic activity in the diluted samples. however, the increase of fdp concentration in the surrounding buffer led to a reversible inhibition of the fibrinolytic activity of plasmin, up to % of baseline. sds-page analysis under non-reduced conditions showed the accumulation of high-molecular-weight fdp in the surrounding buffer. the inhibition phenomenon could be connected with the specific binding of plasmin to soluble fdp having exposed lysine residues and the subsequent removal of the enzyme from the fibrin surface. unexpectedly, given the heterogeneous character of the reactions involved, the change of the clot surface area during lysis did not affect the fibrinolysis kinetics in any of the concentration intervals. to estimate the kinetic parameters, the kinetic curves were linearized in the coordinates [p]/t versus (1/t)·ln([s]0/([s]0-[p])). the obtained parameters were the following: kcat = . min^-1, km = . µm, kp = . µm. clinical trials have shown that fdp concentrations during thrombolytic therapy of deep venous thrombosis and acute myocardial infarction are usually approximately in the range . - . µm. therefore the described phenomenon of fibrinolysis inhibition by the fdp formed may take place during thrombolytic therapy.
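a minimal sketch of the linearization used above: the integrated michaelis-menten equation gives [p]/t = vmax - km·(1/t)·ln([s]0/([s]0-[p])), so plotting [p]/t against (1/t)·ln([s]0/([s]0-[p])) yields a line with slope -km and intercept vmax (= kcat·[e]0); the progress-curve data below are hypothetical:

import numpy as np

s0, e0 = 2.0, 0.05                              # initial substrate and enzyme (um), hypothetical
t = np.array([9.3, 19.1, 33.9, 52.1, 71.9])     # time (min)
p = np.array([0.30, 0.60, 1.00, 1.40, 1.70])    # product formed (um)

x = np.log(s0 / (s0 - p)) / t
y = p / t
slope, intercept = np.polyfit(x, y, 1)          # linear fit in the stated coordinates
km, vmax = -slope, intercept
print(f"km = {km:.2f} um, kcat = {vmax / e0:.2f} 1/min")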
al. calatzis, an. calatzis, +m. klmg, +l. mielke, +r. hipp, a. stemberger; institute for experimental surgery and +institute of anesthesiology, technische universität münchen. thrombelastography (teg) is an established method for the detection of fibrinolysis. fibrinolysis is usually diagnosed when the teg amplitude decreases by more than % after the maximum amplitude is reached. this takes a considerable amount of time (more than minutes). our approach is based on the understanding of fibrinolysis as a process which runs in parallel to coagulation and is not exclusively subsidiary to it. the effect of fibrinolysis on the growing clot in the teg is shown by the comparison of two teg measurements performed in parallel: exteg, a teg measurement with standardised activation of the extrinsic system, and apteg, an exteg with in-vitro fibrinolysis inhibition via aprotinin. exteg reagent (ex): : dilution of innovin (recombinant thromboplastin reagent, dade) with aqua dest. apteg reagent (ap): parts innovin, parts trasylol (aprotinin, bayer, kie/ml), parts aqua dest. test procedure: µl ex or ap + µl citrated blood (cb) + µl cacl2 solution , m. the only difference between the two reagents is the addition of kie aprotinin in the apteg, leading to an in-vitro fibrinolysis inhibition. the use of disposable pins and cups (haemoscope, illinois, usa / e.m.s., vienna) is recommended to ensure standardised conditions for both measurements. results and discussion: when there is better clot formation in the apteg (corresponding to a lower so-called k-value) than in the exteg, fibrinolysis can be suspected. this technique requires only commercially available reagents and is easy to perform on conventional teg systems. owing to the standardised coagulation activation with a thromboplastin reagent, fibrinolysis can be detected even when inhibitors like heparin are present in the circulation. according to our experience using this technique during liver transplantation, clinically relevant fibrinolysis can be detected as described in less than minutes. many thromboembolic (massive pulmonary embolism, proximal deep-vein thrombosis, etc.) and coronary diseases (infarction, acute phase, etc.) require fibrinolytic therapy for early recanalization. the application of the well-known or new thrombolytic agents needs specific, simple and reproducible methods for the determination of fibrinolytic activity. we suggest new methods for measuring the blood plasma concentrations of plasmin, plasminogen and antiplasmins, and urinary urokinase activity. these methods involve the employment of the chromogenic substrate azofibrin (human fibrin covalently labeled with p-diazobenzenesulfonic acid). methods: . ml of the studied solution was added to . ml of azofibrin suspension in a certain buffer ( - mg/ml) and the mixture was incubated at °c for - min. after the end of the incubation the mixture was filtered, the volume of the solution was brought up to ml with . m naoh, and the optical density was determined at nm. results: azofibrin can be used for the quantitative determination of proteinase activity in the search for new fibrinolytic agents. for comparison, the results of our studies of the fibrinolytic activity of some proteinases with the use of azofibrin are presented. with an increase of pai-1 and ldl- and a decrease of hdl-cholesterol concentrations, it is concluded that the increased cardiovascular risk in diabetes mellitus was partly caused by a down-regulation of the fibrinolytic system and an increase of erythrocyte aggregation and plasma viscosity.
besides disturbances of lipid metabolism, an abnormal whr seems to be an additional atherogenic factor in dm. plasma concentrations of thrombin-antithrombin iii (tat) and alpha-2-antiplasmin-plasmin (app) complexes and d-dimer were investigated in patients treated with thrombolytic therapy for acute myocardial infarction (ami) with either streptokinase (n= ), urokinase (n= ) or recombinant t-pa (rt-pa, n= ). all patients received an intravenous heparin bolus of , iu on admission, which was followed at once by an infusion of , iu/hr for the next three days, titrated to maintain the partial thromboplastin time at twice the control value. tat, pap and d-dimer were measured by enzyme immunoassay on admission, at , , , , , , hours, and on days and after admission. the groups did not differ significantly in regard to age, sex, delay and infarct location. on admission, no marker differed significantly between groups. thereafter, tat levels increased significantly exclusively in the rt-pa treated group. from to hours after admission, tat was significantly higher in rt-pa treated patients than in the streptokinase and urokinase treated groups (p< . ). however, during continuous heparin infusion, which was started immediately after the end of thrombolytic therapy, tat concentrations decreased below admission values in each group. app was significantly higher only hour after admission in the rt-pa group (p= . ). d-dimer did not differ significantly between groups. our results demonstrate that rt-pa induces a hypercoagulable state, which may contribute to reocclusion after successful reopening of the infarct-related coronary artery. the significant tat decrease during continuous heparin infusion supports the concomitant use of thrombin inhibitors as adjunctive therapy with thrombolytic treatment for ami. thus, in acute myocardial infarction patients, thrombin generation is markedly influenced by the thrombolytic agent used and concomitant heparin therapy. endothelium-derived relaxing factor-no (edrf-no) plays a major role in the regulation of vascular tone and also exerts platelet-inhibitory actions. however, due to the chemical nature of edrf-no, little is known about its production and activity as a general index or marker of vascular function in human diseases. one way to achieve this can be the measurement of nitrate/nitrite excretion in the urine, which seems to reflect vascular edrf-no production. in this report a self-developed elisa method is described, which was used for this purpose. nitrate/nitrite urinary excretion proved to be significantly decreased in insulin-dependent as well as in non-insulin-dependent diabetes mellitus. after a comparison of the excretion values with other markers of angiopathy (von willebrand factor, soluble thrombomodulin, beta-thromboglobulin), it seems acceptable that urinary nitrate/nitrite excretion can be a useful indicator of diabetic vascular disorders. two major concerns still accompany the application of prothrombin complex concentrates (pcc). viral safety has to be guaranteed, and therefore several measures for virus inactivation or elimination are taken during the manufacturing process. the inherent risk of thromboembolic side effects has to be considered. to minimize these risks and to achieve good clinical efficiency, the quality criteria for pccs are under pending discussion. it is generally accepted that a modern pcc preparation should contain all four coagulation factors in well-balanced proportion and that it should also contain protein c and protein s.
additionally, the concentration of activated coagulation factors should be kept at a minimum. a present pcc production process mainly consists of a qae-sephadex extraction of cryo-poor plasma followed by a solvent/detergent virus inactivation step. further purification is achieved by subsequent chromatography on deae-sepharose. the aim of this study was to improve product quality by avoiding f vii activation without implementing major changes to the production process. at the same time, a second virus-eliminating step was added to the production process. it could be shown that speeding up the chromatographic process, by switching the deae-sepharose chromatography from a classical axial column to radial chromatography, resulted in a significant reduction of f viia generation. it is mainly the reduction of contact time, resulting from the highest possible flow rates, that leads to the desired effect. the ratio of f vii to f viia was : or more. in order to investigate the feasibility of virus filtration, the eluate of the deae-sepharose column was filtered through a virus-removing ultipor vf filter. analysis of the solution before and after filtration showed that the filtration had no influence on coagulation factor activity, protein content, proteolytic activity, etc. preliminary studies showed significant virus reduction values. in the past few years, the question of the expediency of treatment aimed at developing immunological tolerance in hemophilia patients, by way of complete removal of the inhibitor with high doses of factor viii, has been discussed in the literature. we observed patients with hemophilia. inhibitors to factor viii:c were revealed in . % of patients with hemophilia a, and to factor ix in . % of patients with hemophilia b. the level of inhibitor was not higher than bethesda u/ml; that is, those patients were not regarded as "high responders". a high incidence of inhibitors in young patients (from to years of age, . %) compared with older patients (from to years of age, . %) testifies to the probability of inhibitor development during treatment with modern concentrated preparations of factors viii and ix. inhibitor development in patients ( . %) in the course of antihemophilic concentrate transfusions is evidence of alloimmunization of patients with proteins. the investigations show that in the course of transfusion therapy patients develop secondary immunodeficiency due to chronic antigenic stimulation of the immune system with high doses of allogenic proteins. against the background of immunodeficiency, patients with hemophilia develop complications of an immune character: infectious complications — . %, autoimmune processes — . %, secondary tumours — . %. plasmapheresis is the most rational method of removing the inhibitor in patients with a low level of inhibitor ("low responders", < bu/ml) and in patients with a mean response. thus it should be noted that treatment aimed at developing immunological tolerance is not only expensive and economically unprofitable but also not indifferent to the organism. in a recent multicenter study, previously untreated patients (pups) with severe hemophilia a were treated with a recombinant factor viii concentrate (rfviii, recombinate®). during fviii treatment, ( %) developed inhibitors: high titer (> bethesda units (bu)/ml), low titer (< bu/ml) and transient inhibitors. plasma samples from before treatment, and during treatment but before inhibitor occurrence, were available in inhibitor patients.
these plasma samples were analyzed by a highly sensitive immunoprecipitation (ip) assay for the presence of anti-fviii antibodies. in ( %), a significant increase of anti-fviii antibodies was seen, indicating the development of a clinically relevant inhibitor titer. this immune response occurred after to (median ) exposure days (ed). in the same period, only out of inhibitor patients showed a decreased in vivo recovery. in pups who developed no inhibitors, plasma samples from the entire treatment period were available. an immune response to rfviii treatment was seen in pups after to ed (median ed). this immune response was later and less pronounced in comparison to inhibitor pups before inhibitor occurrence. with the ip method the detection of an early immune response is possible, which might be predictive of later inhibitor development. the inclusion of the ip method should be considered for future multicenter pup studies. in the past, anaphylactic reactions to plasma and plasma components have been a common complication of replacement therapy in patients with hemophilia a and b. we report on severe bleeding episodes in patients with hemophilia a and b, respectively. both patients had a history of life-threatening anaphylactic reactions after exposure to different plasma-derived clotting factor concentrates, including intermediate-purity factor viii and factor ix concentrates, respectively. high-purity factor concentrates were tolerated well, without any allergic side effects. a -year-old patient with a moderate form of hemophilia a (f viii %) had a history of severe immediate reactions, with skin manifestations and bronchospasm, after exposure to fresh frozen plasma, cryoprecipitate and different plasma-derived factor viii concentrates of intermediate purity. in all episodes, pretreatment with corticosteroids and antihistamines was unsuccessful in avoiding severe bronchospasm. replacement therapy with two different recombinant factor viii concentrates was tolerated well, without any side effects. a -year-old haemophilia b patient developed hypersensitivity reactions to prophylactic factor ix substitution, which could be overcome by using a factor ix concentrate of improved purity. a recent recurrence of hypersensitivity under this treatment was finally overcome by the use of a highly purified (monoclonal antibody) factor ix concentrate. we conclude from these findings that high purity of factor concentrates, possibly due to the absence of soluble hla antigens, is advantageous in patients disposed to allergic reactions. introduction: antibody formation against factor (f) viii remains one of the most severe complications of repeatedly transfused patients with haemophilia a. as reported previously in our study on the incidence of fviii inhibitors, we have observed a high incidence of fviii inhibitors among our haemophilia a patients. it is still not clear why certain haemophiliacs develop antibodies and others do not. a number of previous studies suggest that there is a genetic predisposition for fviii inhibitor development. thus, the purpose of our study was to examine whether there is a correlation between fviii antibody formation and genetically determined histocompatibility antigen (hla) patterns in our haemophiliacs.
patients and methods: hla class i (a, b, c) and hla class ii (dr, dq) typing was carried out for multi-transfused paediatric haemophilia a patients (fviii:c activity < %), including who had developed an antibody to fviii: were high responders (> bu), were low responders (< bu). hla typing was performed by a standard two-stage microlymphocytotoxicity procedure (drk frankfurt) using antisera with defined hla specificity (biotest diagnostica). results: we found an under-representation of hla-a in fviii inhibitor patients when compared with the subgroup without inhibitor. in regard to the hla-b and hla-c antigen frequencies there were no apparent differences between the groups. among the class ii antigens there were higher frequencies of dr , drw and dqw1 in the non-inhibitor group. however, the reduction in hla-a , hla-cw , hla-dqw and hla-dr frequency for inhibitor patients reported previously could not be confirmed in our study. conclusion: so far it remains unclear whether there is a significant association of certain hla alleles with the development of fviii antibodies. recombinant factor viii sq (r-viii sq, pharmacia) is a b-domain-deleted recombinant factor viii. it is formulated without albumin (hsa). the product has been shown to have in vitro and in vivo biochemical characteristics similar to a plasma-derived full-length protein (p-viii). the international clinical trial programme was initiated in march . pharmacokinetic studies have shown that the b-domain-deleted r-viii sq should be given according to the same dosage principles as a full-length p-viii. at present, the product is being tested in previously treated patients (ptps) and untreated patients (pups) with severe haemophilia a (viii:c < %), both during long-term treatment (on-demand therapy or prophylaxis) and during surgery. the long-term study in previously treated patients in germany was started in january . thirteen patients have been included in centers. all patients are still on treatment with r-viii sq, most of them receiving prophylactic treatment. global treatment efficacy has in general been considered excellent or good. no serious clinical adverse events related to the study product have been reported, nor have any inhibiting antibodies to factor viii or antibodies to mouse igg or cho-cell components developed in the patients. further results, such as data on efficacy, half-life, recovery and safety, will be presented in detail at the meeting. nowadays it is not sufficient to regard hemophilia only as a hemorrhagic diathesis of coagulation genesis, caused by deficiency or molecular anomalies of a coagulation factor, without taking the state of immunity into account. on examination of patients (pts) (hemophilia a — pts, hemophilia b — pts, von willebrand's disease — pts), the development of immune complications was revealed in . %. chronic persistent hepatitis ( . %), chronic active hepatitis ( . %), herpes simplex ( . %), chlamydiosis ( . %) and bacterial infection ( . %) were regarded as infectious complications. bacterial infections have a routine course due to the preserved phagocytic function of neutrophils, whereas viral infections, resistance to which is connected with the t-cell link of immunity, take on a chronic persistent course. the mechanism of the development of autoimmune processes (autoimmune thrombocytopenic purpura — . % of pts, immune complex disease — . % of pts, the appearance of immune inhibitors — .
% of pts) is connected with the impairment of immunological surveillance over b-cell autoimmune clones as a result of a dysbalance in the system of t-lymphocyte immunoregulatory subpopulations. lymphadenopathy and splenomegaly ( . %) develop due to benign proliferation of lymphoid tissue as a result of impairment of the regulatory function of the t-lymphocyte system, or they may be evidence of virus infection. we observed one episode of acute leukemia. immune complications in hemophilia patients develop against the background of secondary immunodeficiency caused by chronic antigenic stimulation of the patients' immune system with the high doses of allogenic proteins which plasma preparations contain. with immune complications, hemophilia patients develop hemorrhages whose pathogenesis is quite different from that caused by the coagulation factor deficiency, and this should be taken into account in the course of treatment. control of hemophilia therapy was classically based on four parameters: life span expectancy of patients, orthopedic status (normal zero), pettersson score and social integration. often, however, these parameters describe an irreversible status with permanent damage, particularly of the joints, especially when patients are grown up. in order to establish risk-adapted therapy protocols to prevent hemophilic osteoarthropathies, quality control programs have to be set up that allow for early adjustment of dosage and substitution frequency. here, bleeding frequency is one of the main parameters, being a clear hint of the possible development of a target joint. since we have established a computer database (haemopat) that contains data on all patients treated in our center. tables and graphs allow for early detection of an increased bleeding tendency in a given joint, and accordingly for adjustment of therapy. the results of years of measuring the causes of joint damage, rather than merely documenting the orthopathies as such, will be demonstrated. in parallel, a new program (haemopat win . ) will be introduced, allowing easier handling of the data and their evaluation. this program will be used as of december . in combination with a substitution calendar to be filled in by all patients, in which factor concentrates, lot numbers, dosage and date of administration will be constantly recorded, this program will extend our existing database in order to follow closely the clinical and orthopedic parameters of each patient, and consequently acts as a strict control of therapy quality. additionally, it provides sufficient data to fulfil any documentation needs requested by medical authorities. the program will be available free of charge to all those interested. ( ) kinderklinik der westf. wilhelms-univ. münster; ( ) biotest pharma gmbh, dreieich. haemoctin® sdh, the fviii sdh (sdh = solvent detergent and dry heat, °c, min) concentrate from biotest pharma, is a high-purity (specific activity ~ ) fviii concentrate manufactured from large human plasma pools. virus validation studies have shown virus inactivation/reduction (log ) during the manufacturing process for lipid-coated viruses such as hiv- > . ; psr > . ; vsv > . ; bvdv > . ; hcv > . *, and non-enveloped viruses such as parvo** = . ; reo > . *** and hav > . . more than hemophilia a patients (ptps = previously treated patients), baseline fviii activity < %, were included in an international drug monitoring study to follow their fviii inhibitor status.
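inhibitor status in such monitoring is quantified with the bethesda assay (noted below for this study); under the standard log-linear assumption, 50 % residual fviii after incubation corresponds to 1 bu/ml, giving the conversion sketched here with hypothetical readings:

import math

def bethesda_units(residual_pct, dilution):
    # log-linear bethesda conversion: 50 % residual activity = 1 bu/ml,
    # scaled by the dilution of patient plasma that was tested
    return (2.0 - math.log10(residual_pct)) / math.log10(2.0) * dilution

# use the dilution whose residual activity falls nearest 50 % (25-75 % window)
print(f"{bethesda_units(40.0, 4):.1f} bu/ml")   # hypothetical: 40 % residual at a 1:4 dilution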
of Haematology, Budapest, and the Regional Blood Transfusion Center, Debrecen) and four centers from Germany (two from Berlin, one Frankfurt/Main and one Münster). Patients were enrolled in the drug monitoring beginning Aug. At entry, none of the patients had a detectable inhibitor. At the end of Sept. there had been no side effects or adverse events in connection with the use of Haemoctin®. Before the Haemoctin drug monitoring study, the patients were treated with cryoprecipitate or purified FVIII products. Inhibitor testing was done on patients' plasma samples using the Bethesda method. Repeated FVIII recovery determinations (between and hrs after Haemoctin® application) demonstrated the expected recovery and a normal half-life. None of the hemophilia A patients treated with Haemoctin® SDH developed a clinically relevant inhibitor. At the beginning of the study, the clinical efficacy of Haemoctin® was studied in hemophilia A patients and shown to give an in vivo recovery of ± % by one-stage assay and ± % by a chromogenic assay; t½ values were ± and ± hrs, respectively. The study of the clinical efficacy of Haemoctin® SDH was repeated in a group of patients approximately two years later.
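The Bethesda titre referred to above is derived from the residual FVIII activity left after incubating patient plasma with normal plasma. As a minimal illustration of the classical calculation (a sketch, not the study's laboratory protocol; the function name and example values are ours), in Python:

    import math

    def bethesda_units(residual_activity_pct: float, dilution: int = 1) -> float:
        """Classical Bethesda convention: 1 BU/ml neutralizes 50% of the
        FVIII activity of normal plasma after 2 h at 37 C; the titre is
        read from the dilution whose residual activity is closest to 50%."""
        if not 0 < residual_activity_pct < 100:
            raise ValueError("residual activity must lie between 0 and 100 %")
        # log2(100 / residual) converts residual activity to BU for the
        # tested dilution; multiply by the dilution factor to get BU/ml.
        return math.log2(100.0 / residual_activity_pct) * dilution

    # Example: 25% residual activity at a 1:4 dilution -> 2 BU x 4 = 8 BU/ml
    print(round(bethesda_units(25.0, dilution=4), 2))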
Although CD4 lymphocyte counts are known to be reasonable predictors of prognosis in HIV infection, the CD4 count is not in all cases an infallible indicator. Therefore several serological markers are used to predict disease outcome, including beta-2-microglobulin (β2M), immunoglobulin A (IgA), lymphocyte counts and others. In this study we followed a cohort of haemophiliacs ( with haemophilia A, with haemophilia B) and patients with severe von Willebrand's disease over a period of months (mean, range). Testing for β2M, IgG, IgA, IgM, CD4 and CD8 cell counts (absolute and relative), CD4/CD8 ratio, and absolute resp. relative leucocyte and lymphocyte counts was performed at least times a year. At the same time, clinical examinations and a review of history were undertaken. Means of the laboratory tests for every quarter of a year and significant changes during the time of observation were calculated and correlated with clinical data. [Table: quarterly means ± SD for CD4 and CD8 counts (cells/µl), β2M (mg/l) and lymphocyte counts; numeric values not recoverable.] During the time of observation we found significant changes of CD4 (absolute and relative), absolute CD8 counts, CD4/CD8 ratio, β2M, leucocytes and lymphocytes. The absolute CD4 and CD8 counts correlated clearly with lymphocyte and leucocyte counts but not with β2M. The prognostic value of the tested parameters is discussed by calculating correlations with clinical data, antiretroviral treatment and treatment of haemophilia.

The availability of high-purity factor concentrates has recently encouraged clinicians to use perioperative continuous infusion of FVIII or FIX to prevent or reduce bleeding in patients with haemophilia. In contrast to repeated high-dose bolus injections, the continuous infusion regimen maintains constant coagulation factor activity at a level necessary for hemostasis, reducing the total cost of treatment by about % and preventing possible side effects of bolus doses. The new application mode, however, requires stable products which tolerate slow passage through an infusion device. Our objective was to test in vitro the FVIII concentrate IMMUNATE (STIM plus) and the FIX concentrate IMMUNINE (STIM plus) at room temperature, under conditions of long-term contact with polypropylene tubing in an infusion pump. Infusion rates were chosen to mimic the clinical situation. The control samples were not infused through the pump but were otherwise treated identically. Test samples were drawn before and at , , , and hours after the onset of each infusion run. FVIII (one-stage, two-stage and chromogenic assay) and FIX (one-stage) activities were measured using Immuno reagents. The presence of activated factors was measured by NAPTT, while FIIa, FXa, plasmin and prekallikrein activator were detected with specific chromogenic substrates. The data showed equivalent results between test and control samples, with no loss of FVIII or FIX activity. The potencies of both IMMUNATE (STIM plus) and IMMUNINE (STIM plus) remained within ± % of labelled values within hours after the onset of infusion. In conclusion, IMMUNATE (STIM plus) and IMMUNINE (STIM plus) are suitable for continuous infusion using an automatic infusion device within the applied test criteria.

In humans, the circulating half-lives of asparaginase enzymes from E. coli and Erwinia chrysanthemi vary within a wide range. Moreover, half-lives differ not only among different E. coli strains but also among commercial E. coli preparations. To investigate the possible influence of two different sources of E. coli asparaginase (ASN) preparations on the fibrinolytic system of leukemic children, a prospective randomized study was performed correlating ASN pharmacokinetics (ASN activity, asparagine depletion) with fibrinolytic parameters: plasminogen (Plas), α2-antiplasmin (α2AP), tissue-type plasminogen activator (t-PA), plasminogen activator inhibitor (PAI-1) and D-dimer (D-D). Together with prednisone, vincristine and an anthracycline, children received IU/m² ASN medac® (originally purchased: Kyowa Hakko Kogyo, Japan) and children IU/m² Crasnitin® (Bayer, Leverkusen, Germany). Blood samples for pharmacokinetic and coagulation analysis were drawn before the first ASN administration and every third day whilst on medication. The results are shown in the table. ASN activity showed a negative correlation (Spearman rho/p) with Plas and α2AP. A positive correlation was found between ASN activity and D-dimer formation. t-PA and PAI-1 showed no relationship to ASN activity. All children showed complete asparagine depletion at a detection limit of µM during the course of ASN administration. Two thrombotic events occurred in the Kyowa group. One of the distinctions between the two E. coli ASN preparations administered in this study is the absence of cystine in the Kyowa ASN, which also has a lower isoelectric point and a longer half-life than the Bayer type A ASN. These observations suggest that the longer half-life may lead to longer inhibition of protein synthesis, which in turn may cause a higher rate of side effects. Along with studies on ASN pharmacokinetics, dose recommendations need to be tailored to the specific ASN preparation employed, to ensure optimal antineoplastic efficacy while minimizing the hazard of complications.
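The ASN study above reports Spearman rank correlations between ASN activity and the fibrinolytic parameters. A minimal sketch of how such a correlation is computed, assuming SciPy is available; the paired values below are invented for illustration only:

    from scipy.stats import spearmanr

    # Hypothetical paired measurements: ASN activity (U/l) and
    # plasminogen (% of normal) in the same children
    asn_activity = [120, 240, 310, 450, 600, 720]
    plasminogen  = [95, 80, 72, 60, 48, 41]

    rho, p_value = spearmanr(asn_activity, plasminogen)
    print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")  # expect a strong negative rho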
Different types of coagulopathy in hepatic veno-occlusive disease (VOD) and capillary leakage syndrome (CLS) after bone marrow transplantation. W. Nürnberger, S. Eckhof-Donovan, St. Burdach and U. Göbel, Department for Pediatric Hematology and Oncology, Heinrich Heine University Medical Center, Düsseldorf, Germany. It is generally accepted that CLS, coagulation activation and refractoriness to platelet transfusions are part of the syndrome of hepatic VOD. We assessed patients with either VOD or CLS or both VOD and CLS, in order to analyze the influence of either syndrome on different aspects of hemostasis. VOD was diagnosed according to Jones et al. [Transplantation]. The diagnosis of CLS was a ≥ % increase of body weight in the past hours and non-responsiveness to furosemide [Nürnberger et al., Ann Hematol]. Patients with VOD, CLS or both were compared to control patients without either diagnosis. Eight patients suffered from both VOD and CLS, patients only from VOD, and only from CLS. Patients with neither syndrome served as the control population. Activation of the coagulation system was assessed by an increase of TAT complexes and/or increased consumption of AT III. The hemostasis patterns were as follows: [table not recoverable].

Introduction: Lung cancer goes along with coagulation activation and an increased thromboembolic risk. The acute-phase reaction in cancer patients leads to elevated levels of C4b-binding protein (C4b-BP), followed by a shift from free to C4b-BP-bound protein S. We tried to find out whether there is a correlation between alterations of C4b-BP, the protein C/protein S system and interleukin-6 (IL-6), which is one of the most potent inducers of the hepatic acute-phase reaction. Patients: 1. patients with lung cancer; 2. control group: patients in complete remission after lung cancer. Methods: clotting methods: protein C and S activity; ELISA tests: protein C antigen, TAT complexes, prothrombin fragment F1+2, IL-6; electroimmunodiffusion (Laurell): free and total protein S, C4b-BP. Results: TAT complexes and F1+2 were elevated in cancer patients. C4b-BP levels were slightly increased ( ± % of normal); protein S activity was ± % of normal (control group: ± % of normal). IL-6 in lung cancer patients was pg/l (control: ± pg/l). Conclusion: One source of the hypercoagulable state in lung cancer patients is decreased protein S activity due to elevated C4b-BP levels. This is probably caused by the hepatic acute-phase reaction, which is triggered by increased IL-6 levels.

These plasma levels correlate with levels of the tumor marker CA 125 and with the stage of the disease, but correlations with patient outcome (disease recurrence and overall survival) have not previously been shown. Plasma levels of D-dimer and CA 125 (determined by sandwich ELISA assays) were measured prior to treatment in women with FIGO stage I to III ovarian cancer and correlated with tumor stage, relapse and overall survival over a mean follow-up period of months (range to months). Levels in healthy women and patients with benign ovarian disease served as controls. The occurrence of deep vein thrombosis in the cancer patients was also determined by impedance plethysmography which, when positive, was confirmed by contrast venography. Preoperative D-dimer and CA 125 levels in ovarian cancer patients were statistically significantly higher than in controls. Preoperative cut-off values were calculated for the prediction of cancer relapse and survival for both measurements. D-dimer levels above a cut-off level of ng/ml were statistically significantly associated with the rate of relapse, but CA 125 levels were not.
Deep venous thrombosis occurred in % of cases, but there was no difference between the preoperative D-dimer levels of patients who subsequently did versus did not develop deep vein thrombosis. High levels of D-dimer are associated with more advanced disease and with poor prognosis in patients with ovarian cancer. The high levels of D-dimer are a biologic feature of the malignancy itself that may be attributable, at least in part, to increased conversion of fibrinogen to fibrin in the tumor bed with subsequent degradation of fibrin by the fibrinolytic mechanism. Thus D-dimer levels may serve as a marker for overall tumor burden as well as "disease activity". A high incidence of deep vein thrombosis exists in the course of the disease in ovarian cancer patients, but preoperative levels of D-dimer are not predictive of this occurrence.

von Tempelhoff Georg-Friedrich, Michael Dietrich, Dirk Schneider, Lothar Heilmann, Dept. Obstet. Gynecol., City Hospital of Ruesselsheim, Germany. An increase of plasminogen activator inhibitor activity (PAI act.) in the plasma of cancer patients has recently been described. We have longitudinally investigated PAI act. in patients with primary breast cancer and compared the results with the outcome of the malignancy. Patients with untreated primary breast cancer and without proof of metastasis (T N M ) were eligible for this study. In all patients, coagulation tests including fibrinogen (method according to Clauss), D-dimer (ELISA) and PAI act. (uPA-dependent inhibition test) were performed prior to primary operation, months thereafter and at the time of cancer relapse. Seventy-two healthy women and patients with benign breast disease served as controls. During a mean follow-up of ± months, patients ( %) developed cancer recurrence and ( %) patients died. In all cancer patients, preoperative levels of fibrinogen and PAI act. were significantly higher compared to healthy women and to patients with benign breast disease. Preoperatively, only PAI act. was significantly higher in patients with vs. without cancer recurrence ( ± U/ml vs. ± U/ml; p = ). In patients with later recurrence, PAI act. dropped significantly months after the operation (p = ) and was again significantly increased at the time of cancer recurrence ( ± ; p = ). A preoperative cut-off value of PAI act. (calculated via Cox model) above U/ml was significantly associated with the rate of relapse (log rank: p = ), and in % of the patients who died of cancer the preoperative PAI act. was also above this cut-off. Impaired fibrinolysis in patients with breast cancer is significantly associated with the outcome of the cancer.
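The breast-cancer abstract dichotomizes patients at a preoperative PAI cut-off and compares relapse-free survival by log-rank test. A hedged sketch of that comparison, assuming the third-party lifelines package is installed; all durations and event indicators below are synthetic, not study data:

    import numpy as np
    from lifelines.statistics import logrank_test

    # Hypothetical relapse-free follow-up times (months), split by cut-off
    rng = np.random.default_rng(0)
    t_high = rng.exponential(24, 40)   # PAI act. above cut-off
    t_low  = rng.exponential(48, 40)   # PAI act. below cut-off
    e_high = np.ones_like(t_high)      # 1 = relapse observed, 0 = censored
    e_low  = np.ones_like(t_low)

    res = logrank_test(t_high, t_low,
                       event_observed_A=e_high, event_observed_B=e_low)
    print(f"log-rank p = {res.p_value:.4f}")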
A monoclonal heparin antibody (mAb) has been raised against native heparin using a heparin-bovine serum albumin conjugate prepared by reductive amination. For further analyses, tyramine, which was covalently bound to low-molecular-mass heparin by endpoint attachment (Malsch R et al.: Anal Biochem), was labeled with radioiodine at the aryl residue. The tracer-antibody complex was immunoprecipitated by goat anti-mouse immunoglobulin IgG. The mAb specifically recognized intact heparin and heparin fractions. The lower detection limit for heparin preparations was ng/ml. No cross-reactivity of the mAb occurred with other glycosaminoglycans such as heparan sulfate, dermatan sulfate, or chondroitin sulfate A and C. Oversulfated heparin showed lower affinity to the antibody than O-desulfated heparins. The method established for the purification of the mAb was ammonium sulfate precipitation followed by dialysis. SDS-PAGE and high-pressure capillary electrophoresis proved the high purity of the obtained antibody. The biological activity of the mAb was tested by chromogenic assay and remained stable through purification. In conclusion, this work describes a purified IgG monoclonal antibody directed against heparin and heparin fractions which can be used for biological measurements.

The concentration of heparin and dermatan sulfate in biological fluids is usually measured using radiolabeling. For this purpose, aromatic compounds are usually used to insert radioactive iodine at the saccharide backbone of the glycosaminoglycan. We developed methods for the specific labeling of heparin and dermatan sulfate at the terminal residue. Tyramine was bound by reductive amination to the anhydromannitol end of heparin, produced by nitrous acid degradation and confirmed by 13C-NMR spectroscopy (Anal Biochem). This method was also used to produce a low-molecular-mass dermatan sulfate (LMMD) derivative after partial deacetylation.

In order to choose the proper method for evaluating the specific anticoagulant activity in a series of chitosan polysulphate (CP) samples with different degrees of polymerization and sulphation, we applied the pharmacopoeia method (A1), assessing the ability of direct anticoagulants to depress the coagulability of recalcified sheep blood (using the International Heparin Standard), and also measured the activity according to a pharmacokinetic model (A2). The model assumes that the kinetics of CP elimination is linear after intravenous injection into rabbits, as is observed for heparin: C_t = C_0 · exp(−k_e · t), where C_t is the CP concentration at time t, C_0 is the CP concentration at the moment of injection, and k_e is the elimination constant. In addition, a linear dependence of the anticoagulant effect on the dose is assumed, which makes it possible to calculate the specific activity A2 from T = k_T · C_t + T_in, where T is the clot formation time at different time intervals after CP injection and T_in is the clot formation time prior to CP injection. T was assessed in two tests: blood coagulation time (BCT) and activated partial thromboplastin time (aPTT). No correlation was observed between A1 and A2. At the same time, the values of k_e and the half-elimination period (t½) obtained with the original method (quantitative determination of CP in rabbit blood taken at different time intervals after injection) showed a close correlation (p < ) with the same parameters obtained from the pharmacokinetic model in the BCT test. Thus it was proved experimentally that the assumptions of linear elimination and a linear effect-dose dependence hold, which is necessary for the A2 calculation. We recommend intravenous injection of the samples into animals, with subsequent assessment of the results according to the pharmacokinetic model, to calculate the specific anticoagulant activity in a series of chemically related potential direct anticoagulants.
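The pharmacokinetic model above reduces to two linear fits: a log-linear fit of concentration against time for the elimination constant, and a linear fit of clotting time against concentration for the effect slope. A minimal NumPy sketch under those assumptions; all numbers are invented:

    import numpy as np

    # Hypothetical data after an i.v. bolus of CP in a rabbit
    t = np.array([5.0, 15.0, 30.0, 60.0, 120.0])           # minutes after injection
    c = np.array([9.1, 7.4, 5.3, 2.9, 0.85])               # plasma concentration, ug/ml
    clot_t = np.array([210.0, 182.0, 148.0, 108.0, 74.0])  # BCT, seconds

    # C(t) = C0 * exp(-ke * t)  ->  ln C(t) = ln C0 - ke * t  (log-linear fit)
    slope, ln_c0 = np.polyfit(t, np.log(c), 1)
    ke = -slope
    t_half = np.log(2) / ke
    print(f"ke = {ke:.4f} 1/min, t1/2 = {t_half:.1f} min")

    # T = kT * C + T_in  (linear effect-dose link; the intercept estimates T_in)
    kT, T_in = np.polyfit(c, clot_t, 1)
    print(f"kT = {kT:.1f} s per ug/ml, T_in = {T_in:.1f} s")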
In this investigation we compared the biological activity of a low-molecular-weight heparin (LMW heparin, Mono-Embolex®) after intravenous, subcutaneous and oral application in rats. Sprague-Dawley rats were anaesthetized with ketamine/diazepam and blood samples were taken from the retro-orbital sinus. Anti-Xa U/kg body weight of the LMW heparin was injected intravenously and subcutaneously into rats each. Between minutes and hours after injection, serial blood samples were taken. mg/kg ( anti-Xa U/kg) body weight of the LMW heparin was applied orally using a stomach tube; blood samples were taken between and hours after oral application. The anti-factor Xa and antithrombin activities of the plasma samples were measured using chromogenic assays and the substrates S and S (Kabi Vitrum). After i.v. injection the maximum aXa and aIIa activities were aXa U/ml and aIIa U/ml, respectively. After s.c. application the anti-factor Xa activity of the LMW heparin showed a maximum of aXa U/ml after minutes; the antithrombin activity exhibited an earlier maximum of aIIa U/ml minutes after injection. After oral application, no increase of aXa or aIIa activity was measured. The LMW heparin thus has high anti-factor Xa and antithrombin activity after i.v. and s.c. injection, while after oral application no activity was measurable. These results indicate that fractionated heparin is either not absorbed after oral application or is inactivated in the gastrointestinal tract. To improve the activity after oral application, modified heparins have to be synthesized.

In an in vitro study, the effect of various heparin derivatives (Calciparin, Fraxiparin, CY, CY, Astenose, hexasaccharide, SSH) on thrombin- and ADP-induced platelet aggregation, as well as on ADP-mediated platelet activation in whole blood, was investigated. All heparin derivatives caused a concentration-dependent inhibition of thrombin-induced aggregation of washed platelets. Calciparin and Astenose were found to be the most effective compounds, with IC50 values of and µmol/l, respectively; ( times) higher concentrations were required for the other compounds. Furthermore, the heparin derivatives were studied with regard to their potentiating effect on ADP-induced platelet aggregation. In a concentration range from to U/ml, Calciparin, Fraxiparin, CY and Astenose led to a potentiation of the ADP-induced aggregation, whereas CY, hexasaccharide and SSH did not show this effect. The increase in aggregation was associated with an increase in thromboxane A2 formation. In addition, the effect of Calciparin, Fraxiparin, CY and Astenose on ADP-induced platelet activation in whole blood was investigated by flow cytometric analysis using monoclonal antibodies to the platelet surface receptors GPIIIa (CD61) and P-selectin (CD62). At concentrations that caused a maximum potentiation of ADP-induced platelet aggregation, these substances led to a strong increase of ADP-mediated activation of platelets in whole blood. The effect was most pronounced when the blood was anticoagulated with Calciparin and Astenose, respectively. In conclusion, the results suggest that the aggregation-promoting effect of the heparin derivatives included in this study depends on the molecular weight and the degree of sulfation and is in part due to the generation of thromboxane.
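IC50 values such as those reported for Calciparin and Astenose are typically read off a concentration-inhibition curve. A minimal sketch using log-scale interpolation, assuming monotonically increasing inhibition; concentrations and inhibition values are illustrative only:

    import numpy as np

    def ic50(conc, inhibition_pct):
        """Interpolate the concentration giving 50% inhibition on a log
        concentration scale; assumes inhibition rises with concentration."""
        conc = np.asarray(conc, dtype=float)
        inh = np.asarray(inhibition_pct, dtype=float)
        return float(10 ** np.interp(50.0, inh, np.log10(conc)))

    # Hypothetical thrombin-induced aggregation data for one derivative
    concentrations = [0.01, 0.03, 0.1, 0.3, 1.0, 3.0]   # umol/l
    inhibition =     [5,    18,   39,  62,  85,  96]    # % inhibition

    print(f"IC50 ~ {ic50(concentrations, inhibition):.2f} umol/l")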
Heparins are negatively charged polysaccharides and bind protamine, forming a stable complex. Here we report on the properties of microbeads ( µm) coated with protamine. Protamine chloride ( µM) was covalently bound to mg of paramagnetic tosyl-activated microbeads M (Dynal). The covalent binding of protamine was from to mg/g beads. Protamine-Dynabeads were produced in phosphate buffer at different pH values. The protamine-Dynabeads produced at pH showed the best properties for flow cytometry analysis. In saline solution they bound LMM-heparin-tyramine-FITC (LMMH-Tyr-FITC) dose-dependently from to U/ml, whereas in plasma and blood they bound LMMH-Tyr-FITC from to U/ml. Depending on the binding protocol, the microbeads also bind proteins unspecifically, i.e. bovine serum albumin and, to a lower extent, protamine; the adsorbed proteins, however, do not bind LMMH-Tyr-FITC dose-dependently. The saturation of the proteins on the beads was determined as their relative fluorescence intensity (RFI). In saline solution the saturation was measured at RFI, in human plasma at RFI and in whole blood at RFI. Using flow cytometry, erythrocytes, lymphocytes, monocytes and granulocytes were shown not to bind to protamine-Dynabeads. These data demonstrate that protamine-Dynabeads can be used to measure the concentration of LMMH-Tyr-FITC in saline solution, plasma and blood, because they do not bind to human blood cells.

The present study was designed to investigate the anticoagulant action of inhaled low-molecular-weight (LMW) heparin in healthy volunteers. IU (group 1), IU (group 2), IU (group 3) or IU (group 4) of LMW heparin were given to healthy volunteers each at weekly intervals. In group 1, tissue factor pathway inhibitor (TFPI) antigen and activity, chromogenic factor Xa assay, Heptest, aPTT and thrombin clotting time (TCT) remained unchanged during the days of observation. In group 2, TFPI antigen and activity, aPTT, TCT and the chromogenic substrate method remained unaffected; Heptest coagulation times were ± sec before and up to ± sec at hrs after inhalation. In group 3, TFPI antigen increased from ± to ± ng/ml hrs after inhalation, while TFPI activity remained unchanged; the chromogenic substrate method increased from to ± IU/ml hrs after inhalation; Heptest coagulation values were prolonged up to ± sec after hrs and returned to normal within hrs after inhalation; aPTT and TCT remained unchanged. After inhalation of IU LMW heparin (group 4), the following changes were observed: TFPI antigen increased to ± ng/ml and normalized within hrs; TFPI activity increased to ± U hrs after inhalation and was normal after hrs; anti-factor Xa activity, as measured by the chromogenic substrate method, increased to ± U/ml after hrs and was normal after hrs; Heptest coagulation values increased to ± sec hrs after inhalation and normalized after hrs; aPTT and TCT did not change throughout the observation period. The data demonstrate resorption of LMW heparin by the intrapulmonary route in man. No side effects were observed.

Recently we developed a tritium-labelled arachidonic acid ([3H]AA) release test with high sensitivity to membrane-toxic agents. The assay, performed in U cells, is intended to evaluate chemicals, drugs and biomaterials with regard to their cytomembrane toxicity [Kloeking et al., Toxicology in Vitro]. Local irritation reactions have been described in patients receiving therapeutic dosages of LMW heparin. This prompted us to examine the following LMW heparins and heparinoids for their membrane toxicity in U cells: reviparin-sodium, enoxaparine-sodium, mucopolysaccharide polysulphate (MPS), pentosan polysulfate sodium (PPS), and the polysulfated bis-lactobionic acid amide derivatives LW (aprosulate) and LW. For this purpose, [3H]AA-labelled U cells were incubated with different concentrations of LMW heparins and heparinoids at °C for one hour.
Compared with untreated cells, the [3H]AA release of cells treated with mg of the drugs was two times higher with reviparin-sodium, three times higher with the bis-lactobionic acid amide LW, five times higher with pentosan polysulfate and times higher with enoxaparine-sodium, but it was equal to the control with mucopolysaccharide polysulphate. The rate of arachidonic acid release in response to a test chemical may therefore be used to assess the membrane-toxic effect of the substance and to predict its inflammatory potential in the skin.

Semi-synthetic glycosaminoglycans (GAGs) with antithrombotic properties can be prepared from the E. coli K polysaccharide by coupled chemical and enzymatic methods. The molecular weight of these semi-synthetic GAGs can be adjusted to obtain products mimicking the molecular profile of a low-molecular-weight heparin. In order to compare the biochemical and pharmacologic properties of a semi-synthetic GAG (SR A, Sanofi/Choay) with a commercially available low-molecular-weight heparin, Fraxiparine (Sanofi, Paris, France), validated biochemical and pharmacologic methods were used. The molecular profile of this agent as determined by HPLC exhibited a distribution (Mr ≈ kDa) comparable to Fraxiparine (Mr ≈ kDa). The anticoagulant properties of SR A were comparable to Fraxiparine in the aPTT and Heptest; in the USP assay, however, this agent showed slightly weaker activity. SR A also exhibited comparable affinity to ATIII and HCII. In comparison to Fraxiparine, it produced a much weaker response in the HIT screening system. In vivo, SR A produced strong dose-dependent antithrombotic actions in both the i.v. and s.c. studies in the rabbit jugular vein stasis thrombosis model (ED50 = µg/kg). Additionally, it produced antithrombotic actions in a rat jugular vein clamping model. The hemorrhagic effects of this agent were comparable to those of Fraxiparine as measured in a rabbit ear blood loss model. Intravenous administration of SR A also revealed a pharmacokinetic behavior comparable to Fraxiparine. No abnormalities of the clinical chemistry (change in liver enzymes) or the hematology profile (thrombocytopenia, leucocytosis, etc.) were noted in primates. At dosages of and mg/kg i.v., this agent also caused a release of functional TFPI comparable to the observed responses of other low-molecular-weight heparins. These studies suggest that SR A is capable of producing pharmacologic effects similar to other low-molecular-weight heparins; however, additional optimization studies are required to demonstrate product equivalence.

Limited information on the comparative pharmacokinetics of low-molecular-weight heparins (LMWH) is available from data obtained with aPTT, Heptest, anti-Xa and anti-IIa assays. Since these drugs are currently used for therapeutic indications at relatively high dosages and by intravenous administration, the aPTT, Heptest and anti-IIa tests may be valuable in the assessment of their effects. In order to investigate the relative pharmacokinetics of LMWH using aPTT, Heptest, anti-Xa and anti-IIa methods, certoparin (Sandoz, Basel, Switzerland) was administered to individual groups of healthy male volunteers ( kg) via intravenous ( mg) and subcutaneous ( mg) routes in a crossover study. Blood samples were drawn at , , , , , , , , and minutes.
Using a baseline pool plasma obtained from the same volunteers, calibration curves for each of the individual tests were constructed to extrapolate circulating levels of certoparin. A non-compartmental model using the trapezoidal technique was used to obtain pharmacokinetic parameters such as t½, Vd and CLsys. In the intravenous studies, the t½ was found to be dose-dependent for aPTT, Heptest, anti-Xa and anti-IIa. The AUC, however, was significantly different for each test and was dose-dependent, following the order anti-Xa > Heptest > aPTT > anti-IIa. The CLsys of the anti-IIa activity was much faster in comparison to the other tests. The CLsys of the aPTT and Heptest was independent of dose; the anti-Xa CLsys by this route, however, was lower than for the other tests. The apparent Vd followed the order aPTT > anti-IIa > Heptest > anti-Xa. The bioavailability of certoparin as measured by the various tests ranged from to %. These studies suggest that, besides providing pharmacokinetic data, aPTT, Heptest and anti-IIa assays may provide useful data on safety and efficacy at high dosages.
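The certoparin study derives t½, Vd and CLsys from a non-compartmental model with the trapezoidal rule. A sketch of that standard calculation, assuming an i.v. bolus and a log-linear terminal phase; the activity-time profile and the dose below are invented:

    import numpy as np

    def nca_parameters(t, c, dose):
        """Non-compartmental PK from a concentration-time profile after an
        i.v. bolus: AUC by the linear trapezoidal rule, terminal rate
        constant from a log-linear fit of the last points."""
        t, c = np.asarray(t, float), np.asarray(c, float)
        auc = np.trapz(c, t)                              # AUC(0-tlast)
        slope = np.polyfit(t[-3:], np.log(c[-3:]), 1)[0]  # terminal slope
        ke = -slope
        auc_inf = auc + c[-1] / ke                        # extrapolate to infinity
        cl = dose / auc_inf                               # systemic clearance
        return {"t_half": np.log(2) / ke,
                "CLsys": cl,
                "Vd": cl / ke,                            # apparent volume
                "AUC_inf": auc_inf}

    # Hypothetical anti-Xa activity (IU/ml) after a 3000 IU i.v. dose
    t = [5, 15, 30, 60, 120, 240, 360]                    # minutes
    c = [1.10, 0.95, 0.80, 0.58, 0.31, 0.09, 0.03]
    print(nca_parameters(t, c, dose=3000))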
The immunological type of heparin-associated thrombocytopenia (HAT type II) is a severe complication of heparin treatment and is associated with arterial and venous thrombosis. Only patients with absolute thrombocytopenia have prompted suspicion of HAT in clinical practice. We report on a year-old male who developed thromboembolic episodes after coronary angiography, including reinfarction and thrombotic episodes of the A. brachialis. Fibrinolytic therapy combined with i.v. unfractionated heparin was the therapy of choice and was followed by further severe thromboembolic adverse effects. Besides an impaired fibrinolytic response and elevated antiphospholipid antibodies, we diagnosed HAT type II by HIPA and ELISA (Stago-Boehringer, Mannheim). This patient had platelet counts within the normal range when developing the thromboembolic episodes. It appears that the normal platelet count during the thromboembolic episodes reflects a relative thrombocytopenia. From a clinical point of view, we recommend the use of a laboratory panel to exclude HAT type II in patients with thromboembolic episodes under therapy with fractionated or unfractionated heparin. Platelet counts within the normal range are no absolute exclusion criterion for HAT II.

Low-molecular-weight heparins (LMWHs) are now commonly used for the prophylaxis of post-surgical thromboembolic complications. In this indication, LMWHs are administered as a single or twice-a-day subcutaneous regimen. Usually these agents are administered at a total dose of mg, which is equal to anti-Xa (aXa) IU. Newer methods, such as chromogenic-substrate-based aXa methods and the Heptest clotting time, can be used to determine the effects of LMWHs during the initial phases of prophylactic therapy. This may be useful in elderly and weight-compromised patients, in whom a fixed dosage may not be optimal and may produce bleeding effects; similarly, in overweight patients a fixed dose may not be efficacious. Thus, monitoring of LMWHs in these patients may be useful in the optimization of their therapy. LMWHs are also used in the treatment of deep vein thrombosis using both intravenous and subcutaneous protocols. High dosages of up to mg s.c./day and infusions of up to aXa IU/kg/hr have been administered. Under these conditions, monitoring of circulating LMWH levels may be useful in optimizing the dosage. We have modified the ACA heparin assay (DuPont Merck, Wilmington, DE) to measure LMWH levels in the plasma of patients treated with both prophylactic and therapeutic dosages. Owing to the short turnaround time, simple operation and reliable results, this method was found to be of value in the monitoring of these agents. This presentation provides an overview of the clinical application of various LMWHs, with particular reference to the need to monitor their effects to optimize the clinical outcome.

A double-blind, multicentric, controlled trial was performed to compare the antithrombotic efficacy and safety of single daily doses of IU anti-Xa of the low-molecular-weight heparin (LMWH) Sandoz (certoparin) with IU unfractionated heparin (UFH) t.i.d. in patients undergoing elective total hip replacement. Blood samples were drawn before the first subcutaneous injection of LMWH or UFH respectively, two hours after administration on the first and th postoperative day, and on the last day of prophylaxis. Anti-Xa activity was measured by chromogenic substrate assay, Heptest and aPTT by clotting assays, and tissue factor pathway inhibitor (TFPI) and heparin-PF4 antibodies by ELISA techniques. As expected, the anti-Xa activity and Heptest values were significantly higher in the LMWH group at all time points after administration of the drugs; the mean Heptest values were sec in the UFH and sec in the LMWH group, respectively; the aPTT did not differ between the groups. At the end of prophylaxis, positive antibodies to heparin-PF4 complexes were detected in both groups; this, however, was not correlated with clinical thrombocytopenia. A detailed correlation between patients with deep vein thrombosis (DVT) and positive antibodies has still to be done (all patients were screened for asymptomatic DVT between days and by bilateral phlebography). TFPI was markedly increased in the LMWH and only slightly elevated in the UFH group; the differences are statistically significant. Summarizing, it can be concluded that antibodies to heparin-PF4 complexes may occur without clinical symptoms of heparin-induced thrombocytopenia type II, and that TFPI may play a significant role in the antithrombotic efficacy of UFH and LMWH.

Unfractionated heparin represents one of the most severe and frequent causes of drug-induced thrombocytopenia. Heparin-induced thrombocytopenia (HIT) occurring early in therapy is often mild and self-limited, appearing to be caused by a direct aggregating effect of heparin on platelets (HIT type I). HIT type II, however, is immune-related and may result in absolute thrombocytopenia (platelet count ).

Hemophiliacs with high inhibitor titers (> BU) usually have serious clinical problems. They are resistant to regular replacement therapy; the main goals of treatment are to control severe acute bleedings, to eradicate the inhibitor permanently and to induce tolerance. In the treatment of acute bleedings in patients with inhibitors, factor VIII inhibitor bypassing agents such as activated prothrombin complex concentrates (FEIBA) or prothrombin complex concentrates (PCC) are mostly used. The mechanism of action of these concentrates is not fully investigated; their effect is usually related to the high content of activated clotting factors and phospholipids. For some years, activated recombinant factor VII (FVIIa) has been used successfully to treat patients with inhibitors in several clinical situations, including surgery.
In addition, porcine factor VIII is widely used, in particular in the UK, for the treatment of factor VIII inhibitor patients, and has shown good clinical results. In case of life-threatening bleedings, a temporary reduction of inhibitors can be achieved by extensive plasma exchange (protein A adsorption) and immune suppression with cyclophosphamide (Malmö protocol). Following the first description by H. Brackmann, several modifications of the induction of immune tolerance in hemophilia A patients have been proposed. These schedules can be divided into high-, intermediate- and low-dosage regimens, differing in the dosage of factor VIII infused. Success rates of about to % can be obtained with intermediate- and high-dose regimens, but it has to be considered that these expensive treatment regimens have a great physical and psychosocial impact on the hemophiliacs and their families. The different immune tolerance regimens are predominantly used in high-responder inhibitor patients. Most of the patients with low concentrations of inhibitors can be managed with factor VIII in increased dosage. This is in agreement with the consensus recommendations for the treatment of hemophiliacs in Germany.

Before vitamin K (VK) prophylaxis was generally accepted in Japan, the incidence of infantile VK deficiency was high, both of the idiopathic and the secondary type. Since then, nationwide surveys have been conducted. The current incidence rate is now about one-tenth of that in the early period. However, in a small number of cases, VK deficiency occurred despite prophylactic administration during the neonatal period. In order to clarify the absorption, excretion and transplacental transport of VK in the perinatal period, the following studies were carried out. 1) The Hepaplastin test (Normotest) was performed on women in the last stage of pregnancy and each coagulation factor was estimated as well. 2) Correlations were made between mothers' and babies' Hepaplastin test values. 3) Transplacental transport of VK was studied. The general activity of VK-dependent factors in pregnant women was much higher than in non-pregnant women. As far as the correlation between mothers' venous blood during delivery and cord venous blood is concerned, in the group of mothers with a Hepaplastin test value of less than % of the normal adult value, the Hepaplastin test value in the cord venous blood was also less than % of the normal adult value. We also demonstrated that VK passes through the placenta, but only in small quantities.

HIV-negative patients (median age yrs, range), formerly treated with non-virus-inactivated coagulation products, underwent hepatologic examination, including AFP screening and sonography. They suffered from severe, moderate or mild haemophilia A or B, or from other severe coagulation factor deficiencies. Some had been treated with products of the Swiss Red Cross (SRC) only (including small-pool cryoprecipitate), some with foreign products only, and some with both SRC and foreign products. Treatment intensity was variable, ranging from > IU/yr to < treatment episode/yr, with a total of only treatment courses in some patients. Afibrinogenemic patients had prophylactic replacement therapy. HCV serology was positive in of patients ( %), with detectable HCV RNA in ( %). The persons who escaped HCV infection, with normal ALT levels and without sonographic alterations, had had low-intensity treatment with small-pool SRC preparations only. ALT levels were elevated in of the anti-HCV-positive patients ( %), and of had abnormal sonographic findings ( %).
There was a clear correlation between elevated ALT levels and abnormal sonographies: of the patients with elevated ALT, had abnormal sonography; of those with normal ALT, had abnormal sonography. patients had liver cirrhosis ( with clinically overt hepatopathy), of them ( %) with hepatocellular carcinoma (HCC) and elevated AFP levels. Of these, patients had intraarterial embolization with lipiodol-epirubicin; in patients the HCC diagnosis was made at a late stage. One patient with advanced liver cirrhosis underwent successful liver transplantation. Of the patients with hepatopathy, had severe haemophilia with temporary high alcohol intake, and had a mild coagulation disorder with few treatment episodes. Possible precipitating factors were coinfection with HBV, high alcohol consumption and first exposure to HCV-contaminated blood products at an advanced age, but not intensive replacement therapy. Very similar results were obtained for FVIII and vWF.

Since the factor VIII level is kept steadily above the level at which there is an increased risk of haemorrhage, continuous infusion is haemostatically safer and more efficacious than bolus injections. Another advantage is a progressive decrease of clearance during the first days after surgery, which leads to a substantial reduction of factor concentrate consumption by avoiding the unnecessary peaks of bolus injections. Children with a severe form of haemophilia A undergoing elective surgery received continuous infusions with different plasma-derived and recombinant FVIII concentrates. Before surgery, patients received bolus injections to raise the factor VIII levels to more than %. During continuous infusion, factor VIII levels were measured two to three times a day, and the infusion rate of to IU/kg/h could be reduced on the second or third day to IU/kg/h. The clinical efficacy was excellent, with no bleeding events. In children with vWD also undergoing elective surgery, continuous infusions with Humate-P were performed in the same way; no bleeding events were observed in these patients. None of the patients developed postoperative wound infections. The overall doses of FVIII concentrate were about to % lower than those required during replacement therapy with bolus doses.

Factor X Frankfurt I: molecular and functional characterisation of a hereditary factor X defect (Gla → Lys). Huhmann I., Holler B., Krinninger B., Turecek P.L., Richter G., Scharrer I., Forberg E., Watzke H. Univ. Klinik für Innere Med. I, Abteilung für Hämatologie und Hämostaseologie, Wien; Immuno AG, Wien; Klinikum der J.W. Goethe-Univ. Frankfurt am Main, Abt. f. Angiologie. Factor X (FX) is a vitamin K-dependent plasma protein which is activated either by FVIIa/tissue factor or by FIXa/FVIIIa. FXa is the main enzyme for the conversion of prothrombin to thrombin. The congenital FX deficiency (Stuart-Prower defect), inherited as an autosomal recessive trait, leads to a bleeding diathesis of varying severity. Our propositus is a year-old patient presenting a mild bleeding tendency. His PTT ( sec) is within the normal range; the PT ( % of normal) is slightly reduced. The factor X antigen level is reduced to % of normal. Molecular characterisation of the genetic defect was performed by amplification of the eight exons and exon-intron junctions by PCR and subsequent direct sequencing of the products. In comparison to the normal sequence, we could determine a single mismatch within exon II resulting in the substitution of Gla (GAA) by Lys (AAA).
The mutation abolishes a naturally occurring MboII site in the DNA sequence of exon II. The status of the FX-encoding alleles was determined in the propositus, his mother and one of his brothers by amplification of exon II and restriction digest with MboII. These family members were heterozygous with respect to the mutation in exon II. FX was isolated from the plasma of the propositus by MonoQ ion-exchange chromatography. Performing clotting assays with purified FX Frankfurt I, we determined an activity of % of normal FX upon activation with RVV, % upon intrinsic activation (aPTT) and % upon extrinsic activation (PT). This compares well with the results obtained from the patient plasma (PT %, PTT % and RVV % of normal) when the reduced FX antigen level of the plasma ( %) is taken into account. We therefore conclude that the substitution of Gla to Lys results in an FX molecule which is severely defective in both the intrinsic and the extrinsic pathway of blood coagulation.

Bleeding after cardiothoracic surgery is still a frequent, important and sometimes life-threatening complication. Thus, the aim of this study was to examine routine parameters of hemostasis and their predictive values for severe bleeding. This prospective study included patients undergoing cardiopulmonary bypass surgery. Blood samples were drawn preoperatively as well as , and hours and , , , , and days after surgery. Blood loss from drains, transfusion of blood products and other important clinical data were monitored, along with platelet count, hematocrit, thrombin time, thromboplastin time, aPTT and levels of fibrinogen, ATIII and C-reactive protein; soluble fibrin (SF) was measured via protamine sulfate aggregability, and total fibrin(ogen) degradation products (FTDP) by an ELISA from Organon Teknika. patients were examined (age ± y). They lost ± ml of blood (mean ± SD) into the drains within the first hours after the end of surgery. A severe bleeding was defined to exist if the blood loss exceeded this range (> ml within h). Fibrin(ogen) split products proved to be a useful parameter in predicting the risk of severe bleeding: FTDP levels exceeding mg/l at the end of surgery had a negative predictive value of %, a positive predictive value of %, a specificity of % and a diagnostic efficacy of %. In contrast, soluble fibrin, which correlated well with fibrinopeptide A (r > ), correlated neither with degradation products nor with bleeding complications. This observation does not match the correspondence of SF with organ dysfunction during DIC: SF reached a negative predictive value near % and a diagnostic efficacy of > % (patients without antifibrinolytic drugs), which complies with the findings of Bredbacka. Other parameters were less predictive than FTDP and SF. Therefore, further examinations are necessary to determine the value of soluble fibrin for risk prediction of bleeding complications or DIC. A differentiation of split products deriving from either fibrinogen, fibrin or XL-fibrin will provide further insights into fibrin(ogen) metabolism.
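The predictive values quoted for FTDP follow directly from a 2x2 table of test result against outcome. A minimal sketch of the arithmetic; the counts are hypothetical, chosen only to show the calculation:

    def diagnostic_metrics(tp, fp, fn, tn):
        """PPV, NPV, sensitivity, specificity and diagnostic efficacy
        (overall accuracy) from a 2x2 table of test result (FTDP above
        cut-off yes/no) vs. outcome (severe bleeding yes/no)."""
        total = tp + fp + fn + tn
        return {
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp),
            "npv": tn / (tn + fn),
            "efficacy": (tp + tn) / total,
        }

    # Hypothetical counts for illustration only
    print(diagnostic_metrics(tp=9, fp=6, fn=3, tn=82))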
Heparin-induced thrombocytopenia represents a multicomponent syndrome associated with the use of heparin and related drugs, resulting not only in thrombocytopenia but also in arterial thrombosis of varying magnitude. The initial diagnosis of this syndrome is usually made by clinical observation and a drop in platelet count. Conventional diagnostic methods include platelet aggregation responses to patient's serum, 14C-serotonin release in response to patient's serum, aggregation/agglutination of patient's platelets in response to heparins, and the detection of the patient's anti-heparin-platelet factor 4 antibodies (HPF4-Ab) by ELISA methodology. Several other individualized methods are also used to demonstrate platelet activation. To test the diagnostic validity of platelet aggregation (PA), 14C-serotonin release (SR) and the relevance of HPF4-Ab, serum samples collected from patients with clinically confirmed cases of HIT syndrome were compared in parallel in various assay systems. The diagnostic efficacy of these tests varied from to %, with the PA test providing better results than the others. When the PA test was compared with serotonin release, a poor correlation was noted; in contrast, the correlation between PA and HPF4-Ab was somewhat better. In another study, blood samples collected from patients treated with high-dose low-molecular-weight heparin for two weeks ( mg o.d.) were tested. Of these, patients showed a high titre of HPF4-Ab without any decrease of platelet count; none of these patients was found to be positive in the 14C-serotonin release assay. A third study included blood samples from DVT patients administered i.v. heparin infusion, high-dose s.c. LMW heparin (certoparin) or i.v. LMW heparin for the management of DVT. None of these patient groups exhibited any HIT responses; however, the incidence of a high HPF4-Ab titre was found to be % in the heparin group, % in the i.v. LMW heparin group and % in the s.c. LMW heparin group. PA and SR studies revealed % and % false positives, respectively. These studies clearly suggest that the currently available tests for the laboratory diagnosis of HIT syndrome are of limited value, and caution should be exercised in the interpretation of the results obtained with them.

Heparin-induced thrombocytopenia (HIT) is one of the major severe side effects of treatment with heparin. In postoperative medicine, clinical studies demonstrated the prevalence of HIT with unfractionated over fractionated heparins. Few data are available from non-operative medicine and from patients without thromboembolism before heparinization. In a controlled prospective randomized study, the safety and efficacy of low-dose heparin was compared with a low-molecular-weight (LMW) heparin over days in bedridden medical inpatients (Haemostasis, in press). Patients were randomized and controlled for the development of thrombocytopenia, defined as a platelet count below /µl at day. patients developed thrombocytopenia in the heparin group and no patient in the LMW heparin group (p < ). None of the patients with thrombocytopenia developed a thromboembolic complication. In a second prospective case-control study, patients with side effects on anticoagulants were treated with LMW heparin once daily subcutaneously for a period of month(s) to years. Platelet counts were performed every to months. None of these patients developed thrombocytopenia during heparinization with LMW heparin. It is concluded that HIT is a very rare complication in non-operated bedridden medical patients. A decrease of platelet count may occur in about % of patients receiving low-dose heparin. The incidence of HIT with thrombosis during low-dose heparin, and of HIT during LMW heparin in non-operated patients, is manyfold lower and remains to be determined.
Terminology: instead of the term "hemorrhagic disease of the newborn (HDN)", the term VKDB should be used, since neonatal bleeding is often not due to VK deficiency, and VKDB may occur after the neonatal period (i.e. after weeks). Definition: VKDB is a bleeding disorder caused by reduced activity of VK-dependent coagulation factors which responds to VK. Diagnosis: in a bleeding infant, a prolonged PT (INR > ) together with normal fibrinogen and platelet count is almost diagnostic of VKDB. The diagnosis is proven if VK shortens the PT (after only minutes) and/or stops the bleeding. Classification: classification by age of onset into early (< h), classic (day to ) and late form (> week, < months), and by etiology into idiopathic and secondary. In secondary VKDB, in addition to breast feeding, other factors can be demonstrated, such as poor intake or absorption of VK and increased consumption of VK. VK prophylaxis: benefits: oral and intramuscular (i.m.) VK (one dose of mg) prevent the classic form of VKDB equally well. I.m. VK appears to be more effective in preventing the late form. The protection achieved by a single oral prophylaxis is improved by triple oral VK. Risks: because of potential risks associated with extremely high levels of VK and the possibility of injection injury, i.m. VK has been questioned as the prophylaxis of choice for normal neonates. Since VK is involved not only in coagulation but also in carboxylation, with multiple effects, excessive deviations from the low physiologic concentrations which prevail in the fully breast-fed healthy mature infant should be avoided. Proposal: repeated (daily or weekly) small oral doses of VK are closer to physiologic conditions than single i.m. bolus doses, which expose neonates to excessively high VK levels. The incidence of intracranial VKDB can be reduced if the grave significance of warning signs is recognized (i.e. icterus, failure to thrive, feeding problems, minor bleeding, disease with cholestasis). Whether or not the more reliable absorption of the new mixed micellar (MM) preparation of VK can reduce the protective oral dose of VK prophylaxis has to be evaluated.
The point mutation G to A at nt in exon V of the factor X gene (Gln to Lys) has previously been found in two independent kindreds with FX deficiency. It occurred in both families in a heterozygous state and was associated with two other genetic defects in the FX gene. We have identified another family in which this mutation occurs in a homozygous state. In this family the mutation is associated with the previously reported mutation Gla to Lys, which also occurs in a homozygous state. The PT and PTT of the proposita and her sister are markedly prolonged. The FX activity is reduced to < % in the extrinsic system, to % in the intrinsic system and to % after activation with RVV. The FX antigen is reduced to %. The coagulation profile of this family is thus identical with that of FX Vorarlberg, despite the fact that the FX Vorarlberg kindred is only heterozygous for the mutation Gla to Lys. Haplotype analysis could not rule out consanguinity with the FX Vorarlberg kindred. These data suggest that the mutation at nt, which leads to a fairly dramatic amino acid change from Glu to Lys, may indeed represent a polymorphism. To further address this question, we cloned the FX gene into an expression vector (pCEP) for transient expression in the human embryonic kidney cell line and introduced the mutation at nt by site-directed mutagenesis.

Hereditary deficiency of factor IXa, a key enzyme in blood coagulation, causes hemophilia B, a severe X-chromosome-linked bleeding disorder; clinical studies have identified nearly deleterious variants. The X-ray structure of porcine factor IXa shows the atomic origins of the disease, while the spatial distribution of mutation sites suggests a structural model for FX activation by phospholipid-bound FIXa and cofactor VIIIa. The Å-resolution diffraction data clearly show the structures of the serine proteinase module and the two preceding epidermal growth factor (EGF)-like modules; the N-terminal Gla module is partially disordered. The catalytic module, with the covalent inhibitor D-Phe-Pro-Arg chloromethyl ketone, most closely resembles FXa but differs significantly at several positions. Particularly noteworthy is the strained conformation of Glu, a residue strictly conserved in known FIXa sequences but conserved as Gly among other trypsin-like serine proteinases. Flexibility apparent in the electron density, together with modelling studies, suggests that this may cause incomplete active-site formation even after zymogen activation, and hence the low catalytic activity of FIXa. Most hemophilic mutation sites of surface FIX residues occur on the concave surface of the bent molecule and suggest a plausible model for the membrane-bound ternary FIXa-FVIIIa-FX complex structure: the stabilizing FVIIIa interactions force the catalytic modules together, completing FIXa active-site formation and catalytic enhancement.
therefore, further examinations are necessary to determine the value of soluble fibrin for risk prediction of bleeding complications or dic. a differentiation of split products deriving from either fibrinogen, fibrin or xl-fibrin will provide further insights into fibrin(ogen) metabolism. this study was conducted as a randomized parallel-group clinical trial comparing the safety and efficacy of a low molecular weight heparin (lmwh; monoembolex, sandoz) and unfractionated standard heparin (ufh) for the perioperative prevention of venous thromboembolic disease (dvt) following major surgery in patients with gynecologic malignancy. three hundred and twenty women (six drop-outs) were randomized and received either times daily s.c. ufh (sandoz, nuremberg, germany) (n = ) or once a day units s.c. monoembolex (n = ) plus two placebo injections. heparin therapy was started the morning before the operation and continued until the th postoperative day. up to the th postop. day the incidence of dvt was . % (n = ; incl. pulmonary embolisms, pe) in the lmwh group and . % (n = ; incl. pe) in the ufh group. the overall incidence of clinically hemorrhagic wound complications was significantly decreased in the lmwh group, . % (n = ), compared to the ufh group, . % (n = ; p < . ). the incidence of major hemorrhagic episodes was . % (n = ) in the lmwh group and . % (n = ) in the ufh group. this difference was not statistically significant. one case of fatal pe was observed in the lmwh-treated group. five deaths were observed in the lmwh group during the study and in the ufh group. this study demonstrates that perioperative treatment with low molecular weight heparin is safer than standard heparin in gynecologic-oncologic patients undergoing major surgery. however, the incidence of thromboembolic complications is similar in both treatment regimens. to explore the effect of targeting an antithrombin to the surface of a thrombus, recombinant hirudin (hir) was covalently linked to the fab' fragment of the fibrin-specific monoclonal antibody d (fab), resulting in a stable conjugate (hir-fab). in vitro, hir-fab was times more efficient than hir alone in inhibiting fibrin deposition on experimental clot surfaces in human or baboon plasma (p < . ). to validate these results in vivo, hir-fab was compared to hir in a baboon model. the deposition of in-labeled platelets onto a segment of dacron vascular graft present in an extracorporeal arteriovenous shunt was measured. blood flow rate was ml/min. one hour local infusions of atu of either hir-fab or hir resulted in deposition of . x and . x platelets, respectively. equieffective dosages were atu hir-fab and atu hir, resulting in deposition of . x and . x platelets, respectively. based on full dose-response curves (n = ), hir-fab was found to be > . -fold more potent (based on activity) than hir. because of the small total amounts of antithrombins used and the short duration of these experiments, no significant systemic effects were observed. thus, fibrin-targeted recombinant hirudin prevents platelet deposition and thrombus formation more effectively than uncoupled hirudin in vitro and in an in vivo primate model. triabin, a kda protein from the saliva of the assassin bug triatoma pallidipennis, is a new specific thrombin inhibitor ( ). it does not block the catalytic center but interferes with the anion-binding exosite of thrombin.
the recombinant protein was produced with the baculovirus/insect cell system and used to study the inhibitory effect of triabin on thrombin-induced responses of human blood platelets and blood vessels. aggregation of platelets in tyrode's solution was measured turbidimetrically at °c. for the studies on blood vessels, rings ( - mm) from small porcine pulmonary arteries were placed in organ baths for isometric tension recording. the integrity of the endothelium was assessed by the relaxant response to bradykinin. like hirudin, triabin inhibited the thrombin ( . u/ml)-induced aggregation of washed human platelets at nanomolar concentrations (ec = . nmol/l), whereas adp- and collagen-induced aggregation were not suppressed. in pgf α-precontracted porcine pulmonary arteries, the thrombin ( . u/ml)-induced endothelium-dependent relaxation was inhibited by triabin in the same concentration range as found for inhibition of platelet aggregation. higher concentrations of triabin were required to affect the contractile response of endothelium-denuded porcine pulmonary arteries to thrombin ( u/ml). in all these assays, the inhibitory potency of triabin was dependent on the thrombin concentration used. these studies suggest that the new anion-binding exosite thrombin inhibitor triabin is one of the most potent inhibitors of thrombin-mediated cellular effects. dept. of medicine, university hospital benjamin franklin, free university of berlin; dept. of medicine and dept. of surgery, heinrich-heine-university düsseldorf. after standardized training in home prothrombin estimation using the coaguchek system, consecutive patients (p) who had st. jude medical aortic or mitral valve implantation were allocated to two random arms; p were asked to control the inr themselves every third day. in the remaining p, anticoagulation was managed by the home physician without recommending an interval for these controls. all p were monitored during the education period to a target therapeutic range of inr . - . . p were asked to contact their home physician immediately if the inr was measured . below or above the target range (inr corridor . - . ). all p had out-patient re-examinations every three months. thrombotic, thromboembolic and hemorrhagic complications were documented by the p using special documentation cards. the following findings were documented during the follow-up period: the results of this randomized study demonstrate a significant improvement in the management of oral anticoagulation by home prothrombin estimation. significantly (p < . ) more inr measurements were found inside the target therapeutic range. moreover, bleeding and thromboembolic complications could be reduced (p = . ) in the study group with home prothrombin estimation. life-threatening thromboembolic and hemorrhagic complications were not observed in p who were on home prothrombin estimation, while three such events ( . %/year) were documented in group a. local vascular injury following ptca exposes circulating platelets to prothrombogenic stimuli. by binding to platelet gp iib/iiia, fibrinogen crosslinks platelets, which represents the final common pathway of platelet aggregation. fradafiban (bibu zw) is a non-peptide compound with effective, reversible inhibitory effects on fibrinogen binding to gp iib/iiia on human platelets.
in the first double-blinded, prospective phase ii study, three escalating doses of bibu zw as a continuous h i.v. infusion were tested in comparison to placebo in patients with stable angina pectoris undergoing elective ptca. the mean receptor occupancy with mg, mg and mg per hour was . %, . % and . % at hours, respectively. as compared to placebo, bleeding time was significantly prolonged ( vs min) during fradafiban infusion, with a weak dose-dependency. platelet aggregation in platelet-rich plasma ex vivo with collagen ( . and . µg/ml), adp ( . and . µmol/ml) or ca-ionophore a ( . and . µg/ml) was significantly and dose-dependently inhibited as compared to placebo. using the two upper doses of fradafiban, we observed major bleeding complications in patients requiring blood transfusions or vascular surgical repair. in these patients, too, maximal antiplatelet effects could be documented. these data suggest that bibu zw is an effective fibrinogen receptor antagonist in patients. the requirement of ad hoc receptor occupancy determination or platelet function monitoring for safe and effective clinical use should be evaluated. in a placebo-controlled interaction study, healthy volunteers were randomized to receive either a hour infusion of peg-hirudin ( . mg/kg/h) after an i.v. bolus of . mg/kg plus placebo, or mg/day acetylsalicylic acid (asa) for three days followed by a placebo infusion, or the peg-hirudin infusion + asa. each volunteer received all three treatments. there was a washout period of at least days between the infusions. at short intervals aptt, activated clotting time (act), ecarin clotting time (ect), anti-iia activity using a chromogenic substrate, collagen-induced aggregation, platelet adhesion and platelet-induced thrombin generation time (pitt) were measured; bleeding time (simplate) was studied before drug administration, on day three before the infusion and hours after the start of the infusion. the infusion of peg-hirudin led after and hours to a mean hirudin plasma level of . µg/ml. asa markedly inhibited collagen-induced aggregation, as expected. the mean bleeding time was prolonged under the influence of peg-hirudin from . to . min, after asa from . to . min and after the combination of peg-hirudin + asa from . to . min. in each volunteer the bleeding time was longer under the combination than after asa alone. in two volunteers receiving peg-hirudin + asa the bleeding time measurement was stopped after min. none of the coagulation parameters or platelet function tests correlated with the prolongation of the bleeding time. however, the bleeding time was excessively prolonged in those volunteers who had a marked prolongation under asa alone. the combination of hirudin at a higher dosage with asa is probably associated with a relatively high risk of bleeding. either the hirudin dosage should be reduced if the combination seems feasible, or asa should be given after the end of hirudin treatment. fibrinogen with the sta/stago and the mla/dade systems correlated well, but neither system correlated well with the acl/il system. at iii, protein c, protein s, and anti-xa heparin assays using stago reagents performed as expected for normals and low abnormals on the sta. factor levels on the sta/stago system were less sensitive than factor levels obtained with the dade reagents on the mla or fibrometer. using the sta/stago system, thrombin time results correlated well with the aptt and heparin levels.
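statements like "system a correlated well with system b" rest on simple paired-comparison arithmetic; a minimal sketch, with invented fibrinogen pairs standing in for the analyzers' (elided) data:

```python
import statistics

def compare_methods(a, b):
    """Pearson correlation and mean bias for paired results from two analyzers."""
    r = statistics.correlation(a, b)                      # Python 3.10+
    bias = statistics.mean(x - y for x, y in zip(a, b))   # mean difference, A - B
    return r, bias

sta = [2.1, 3.4, 1.8, 4.0, 2.9]   # hypothetical fibrinogen results, g/l, system A
mla = [2.0, 3.6, 1.7, 4.1, 3.0]   # hypothetical fibrinogen results, g/l, system B
r, bias = compare_methods(sta, mla)
print(f"r = {r:.3f}, mean bias = {bias:+.2f} g/l")
```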
the thrombin time was not associated with additional manipulation for assay preparation, nor with any cross-contamination of reagent or sample, since on the sta, reagents do not come into contact with tubing. the sta was not sensitive to hemolytic, icteric or lipemic samples for clotting assays and showed the same sensitivity as the mla for chromogenic assays. the overall data comparisons, high throughput, minimal operator intervention for reagent/assay changes and ease of operation warrant further evaluation of the sta hemostasis analyzer. a. wehmeier, d. söhngen, c. rieth, klinik für hämatologie, onkologie und klinische immunologie der heinrich-heine-universität düsseldorf. hirudin selectively inhibits thrombin by direct interaction. because the effect of hirudin is independent of antithrombin iii and other factors, it seems an attractive alternative to current anticoagulants. however, it is uncertain whether hirudin influences platelet-associated thrombotic disorders and how it compares with conventional and lmw heparin. we investigated the effect of recombinant hirudin preparations (rhein biotech, düsseldorf) on platelet function tests: in vitro bleeding time, adhesion to glass beads, and aggregation in platelet-rich plasma and whole blood. hirudin was used in concentrations of . - µg/ml and was compared to trisodium citrate ( . %), conventional heparin ( iu/ml) or lmw heparin (fraxiparin, iu/ml). both recombinant hirudins showed normal activity in thrombin neutralization tests, and prolongation of thrombin time and aptt. however, in vitro bleeding time was not prolonged by hirudin, but was more than doubled by the addition of conventional and lmw heparins. platelet retention in glass bead columns was reduced by hirudin in a dose-dependent manner to about %, but was more effectively reduced by both heparin preparations and citrate. hirudin had an inhibitory effect on platelet aggregation in prp induced by thrombin, collagen, and predominantly epinephrine, but not adp and ristocetin. in whole blood, a small effect could only be observed with hirudin concentrations of > µg/ml as compared to citrate-anticoagulated blood. in summary, thrombin inhibition by recombinant hirudin has little effect on in vitro platelet function tests in comparison to heparins and calcium depletion. the role of endothelin (et), prostaglandins and the coagulation system in the pathogenesis of acute renal failure is still to be defined. in anaesthetized pigs the effects of i.v. infusion of et ( µg/kg) alone (group , n = ) and after pretreatment with the potent thrombin inhibitor hirudin ( . mg/kg) (group , n = ) on haemodynamics, coagulation parameters (factor viii, antithrombin iii, prekallikrein, fibrin monomers, aptt) and prostaglandins were investigated. plasma renin activity (pra), creatinine clearance and urine volume measurements and blood gas analysis were performed hourly. et infusion caused an initial bp reduction and marked hr reduction, followed by a transient bp elevation and hr reduction. activation of platelets can be directly measured by flow cytometry using monoclonal antibodies. in an in vitro study, the effect of the thrombin inhibitors argatroban, efegatran, dup, recombinant hirudin and peg-hirudin on platelet activation induced by various agonists was studied in whole blood. blood was drawn from normal human volunteers using the double-syringe technique without use of a tourniquet to avoid autoaggregation of platelets.
for anticoagulation of blood the thrombin inhibitors mentioned above were used at a final concentration of µg/ml each. blood samples were then incubated at °c either with saline, r-tissue factor (rtf), arachidonic acid (aa), adenosine diphosphate (adp) or collagen. at defined times ( , . , , min) aliquots were taken and, after several steps of a fixative procedure, the percentage of platelet activation was measured by means of fluorescent monoclonal antibodies to the platelet surface receptors gpiiia (cd- ) and p-selectin (cd- ). the agonists used induced a platelet activation of . ± . % (rtf), . ± . % (aa), . ± . % (adp) and . ± . % (collagen). flow cytometric analysis showed that all thrombin inhibitors studied caused a nearly complete inhibition of r-tissue factor-mediated platelet activation. in contrast, after induction of platelet activation with the other agonists, an increased percent cd- expression was found, showing strong platelet activation with a maximum at the same times as in non-anticoagulated blood. in conclusion, the results show that in whole blood thrombin inhibitors are effective in preventing platelet activation induced by r-tissue factor. the formation of active serine proteases, including thrombin, may be effectively inhibited by these agents. the observations further suggest that, while thrombin inhibitors may control serine proteases, these agents do not inhibit the activation of platelets mediated by other agonists. this work was supported by the grant bmft nbl. animal experimental studies on the pharmacokinetics of peg-hirudin. e. bucha, a. kossmehl, g. nowak, max-planck-gesellschaft e.v., arbeitsgruppe "pharmakologische hämostaseologie", jena. hirudin, when complexed with polyethylene glycol (peg), increases its molecular weight from to kda, thereby preventing extravasation of this drug. peg-hirudin is distributed almost exclusively in the intravascular blood space. in addition, its increased molecular weight retards renal elimination. the elimination half-life of hirudin in rats ( ± min, as determined) is increased five-fold ( ± min). with the same hirudin dose applied, the blood level of hirudin is increased -fold, measured in the β-elimination phase. in the urine of rats, - % of the hirudin activity was recovered following hirudin administration, but % could be detected after peg-hirudin had been applied. after subcutaneous administration of peg-hirudin, the tmax value is reached at min (r-hirudin: min); the cmax value is increased -fold compared to that of r-hirudin ( . µg/ml). hours later, still one fifth of the maximum concentration (cmax) is present in the blood, and the renal elimination is still retarded. in the urine of rats, % of the hirudin activity applied was recovered in the -h urine sample. with intact renal function, following subcutaneous administration, peg-hirudin is able to produce a constant blood level of hirudin over a long period. thrombin inhibitors such as r-hirudin (rh), argatroban (a), efegatran (e), and peg-hirudin (ph) are currently undergoing extensive clinical trials in such cardiovascular indications as ptca, ami, and treatment of unstable angina. a rapid assessment of the anticoagulant actions of these agents is, therefore, crucial to assure their efficacy and safety. currently, act and aptt are used to measure the anticoagulant effect of these agents.
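an aside on the pharmacokinetic comparisons in the peg-hirudin abstract above: half-life, blood level and residual concentration are linked by first-order elimination. a minimal sketch under a one-compartment assumption, with placeholder values (the study's own numbers are elided in this text):

```python
import math

def concentration(c0, t_half, t):
    """One-compartment, first-order elimination: C(t) = C0*exp(-ke*t), ke = ln2/t_half."""
    ke = math.log(2) / t_half
    return c0 * math.exp(-ke * t)

# A five-fold longer half-life leaves far more drug at the same time point.
for t_half in (60, 300):  # minutes, hypothetical values
    frac = concentration(1.0, t_half, 240)
    print(f"t1/2 = {t_half:3d} min -> fraction remaining after 240 min: {frac:.3f}")
```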
we have utilized a dry reagent technology based on the motion of paramagnetic iron oxide particles (piop) to measure the antithrombin effects of various thrombin inhibitors (cv diagnostics, raleigh, nc). the heparin monitoring card has been modified to measure antithrombin agents in various anticoagulant ranges for (a), (e), (rh), and (ph). blood samples drawn from patients treated with (a) and (rh) have been evaluated and concentrations of these agents have been calculated using an external calibration curve. in the in vitro setting, citrated whole blood or citrated frozen plasma can be used to evaluate the anticoagulant effects of these agents. the results obtained are comparable to the act, which is conventionally used for the monitoring of these agents, for both (rh) and (…). we would like to present a case of heparin-induced thrombocytopenia (hit) in a year-old woman who underwent open-heart surgery. she suffered from a combined aortic valve disease with leading stenosis. laboratory analysis showed constantly low platelet counts ( /nl) without heparin application, so that an idiopathic thrombocytopenic purpura was suspected. but platelets also decreased after heparin application. heparin antibodies were found in the heparin-induced platelet activation assay (hipaa). treatment with corticosteroids and immunoglobulins, respectively, showed no improvement, but the patient unfortunately developed a pneumonia with legionella pneumophila. therefore, the only suitable anticoagulant for the necessary aortic valve replacement was hirudin: a bolus injection of r-hirudin of . mg/kg b.w. was administered min before the start of the extracorporeal circulation (ecc), the heart-lung machine (hlm) was primed with mg r-hirudin and another bolus of mg of r-hirudin was administered. additionally, mg of r-hirudin was applied to the cell-saver reservoir. during the period of ecc, ecarin clotting time and aptt values were taken every ten minutes for monitoring the r-hirudin concentration. the postoperative anticoagulation was performed with a constant infusion of r-hirudin, starting eight hours after the end of ecc and monitored by aptt. due to the mechanical aortic valve, further anticoagulation was performed with phenprocoumon, starting days postop. the therapy with hirudin showed no side effects. hirudin therefore seems to be a suitable anticoagulant in patients with a high risk for bleeding complications like this. doses from - mg/kg gave similar post-op blood loss measurements without a dose-response ( - cc/kg) (less blood oozing than a historical heparin control but equivalent post-op blood loss; - cc/kg). doses > mg/kg showed more intra-op blood loss than the lower doses, but equal post-op blood loss. the bleeding time test was less elevated than for heparin. platelet counts and hematocrit did not vary except for hemodilution on pump. liver enzymes did not vary significantly pre-op to post-op. act values showed arg was eliminated (dose-dependently) by hour post-op. dogs were hemodynamically stable during the peri-operative period, and overall gave predictable responses to arg (as opposed to variable responses to heparin). in a substudy it was demonstrated that hypothermia did not affect the activity of arg, nor did various formulations. this dose-finding study strongly suggests that arg may be a safe and effective alternative to heparin for patients undergoing cpb.
this is particularly important for the growing population of patients with hit who require cardiac surgery, for which no anticoagulant alternative is presently available. three recent clinical trials with r-hirudin (timi, gusto and hit) have shown that the risk of severe haemorrhagic side effects was strongly associated with high aptt levels. the large interindividual variability of the aptt and the lack of a linear dose-effect ratio, however, limit its value for reliable monitoring of the anticoagulant effect of hirudin, since even severe overdosage due to impaired renal elimination may not be detected with this assay. we have therefore evaluated the ecarin clotting time (ect), as described by nowak and bucha (thromb. haemost. ; : ), under conditions which allow conclusions on its reliability in the clinical situation. for this, citrated venous blood obtained from healthy volunteers, patients with unstable angina pectoris, and patients treated with marcumar was supplemented with different concentrations of peg-hirudin. measurements of aptt and ect were made in duplicate. in contrast to the aptt, the ect showed a close, linear relationship with peg-hirudin plasma concentrations in the range of and ng/ml. the linearity of this relationship was not affected by the presence of unfractionated or low molecular weight heparins in concentrations of up to µg/ml. the ect was not affected by fibrinogen concentrations % below normal. a somewhat higher slope but no change in linearity was found in plasma from marcumar patients with quick values between and %. no significant differences were found between values measured in citrated blood or plasma or using different coagulation timers. the most potent thrombin inhibitor containing a benzamidine moiety is napap (ki = nmol/l). unfortunately, its pharmacokinetic properties (fast elimination by hepatic uptake and biliary excretion, poor enteral absorption) are unsuitable for the use of napap as an oral anticoagulant. the application of choice for a synthetic thrombin inhibitor would be the oral one; therefore, we looked for other lead structures. with the nα-arylsulfonylated piperazides of amidinophenylalanine we found a new group of derivatives which inhibit thrombin with ki values in the nanomolar range. the piperazides exert anticoagulant activities with high selectivity, leaving activated protein c and components of the fibrinolytic system unaffected. in rats, the piperazides are rapidly eliminated from the circulation (t / ≈ min) upon i.v. administration, too. after oral administration, the systemic bioavailability is low. upon intraduodenal administration of high doses, widely varying blood levels were seen, depending on the mode of administration. to clarify the importance of a possible hepatic first-pass effect we studied in more detail the pharmacokinetics of the nα-(naphthylsulfonyl)-amidinophenylalanine n'-acetylpiperazide in rats using hplc analysis. like other benzamidines, the piperazide is excreted via the bile to a high extent. enteral absorption rates of about % are found after blocking the hepatic uptake and biliary excretion. hence, a hepatic first-pass effect appears to be the main reason for the low systemic bioavailability after oral/enteral administration. at the same time, fast elimination from the circulation by hepatic uptake is the main problem for maintaining effective blood levels with benzamidines.
therefore, the elucidation of the structural elements influencing the absorption and elimination processes of these types of inhibitors is necessary. the piperazides of amidinophenylalanine offer the possibility of easily introducing a wide variety of substituents on the second nitrogen of the piperazine moiety. a -year-old female patient with diabetic nephropathy increasingly developed signs of allergic reaction, combined with dyspnea, erythema, pruritus, and circulatory insufficiency, two months after the start of heparin-anticoagulated haemodialysis and the initial surgical application of a double-lumen venous catheter. in addition, growing thrombocytopenia was observed, involving a drop in platelets by % compared to the initial values. the haemodialytic efficiency was reduced by massive thrombosis of the dialyzer and subsequent repeated interruption of treatment. at the end of may, heparin antibodies were detected and the hat diagnosis was confirmed. immediately afterwards, haemodialysis treatment was continued, applying hirudin as anticoagulant. using steam-sterilised haemophan dialyzers and . mg/kg r-hirudin (iketon, italy), the minimum therapeutic blood level of hirudin ( . µg/ml whole blood) was reached. this provided therapeutically relevant blood level conditions during a . h haemodialysis. more than regular haemodialyses were run without problems. in all hirudin-anticoagulated haemodialysis treatments the ecarin clotting time was used as the method of choice for bedside blood level and dosage control. after the th haemodialysis, the frequency was reduced from ( ) to haemodialyses a week. accordingly, the hirudin dose was increased to . mg/kg. the creatinine clearance increased continuously from initially . to . ml/min after the th week of hirudin-anticoagulated haemodialysis. platelet count and haemodialytic efficiency normalized. we could demonstrate that the regular use of hirudin as anticoagulant, along with dialyzers impermeable to hirudin, enables very good results in haemodialysis treatment in heparin-associated thrombocytopenia. hirudin is suited for use as anticoagulant in problem patients with heparin-induced allergy when combined with a drug monitoring method fit for bedside use. capillary electrophoresis methods provide a fast measurement of proteins. thus we developed capillary electrophoresis methods for pharmacokinetic measurements of r-hirudin and peg-hirudin. for the measurement of r-hirudin we used a fused-silica capillary and a borate buffer. this buffer was used to detect r-hirudin, but could not be used to measure peg-hirudin. for simultaneous measurement we used a neutral capillary to prevent protein adsorption to the capillary wall. the buffer was a mm tricine buffer (ph . ; field strength v/cm). it resolved r-hirudin from peg-hirudin at nm using reverse polarity. a linear correlation between peak area and concentration was found between µg/ml and mg/ml for hirudin (r = . ) and between . and mg/ml for peg-hirudin (r = . ). by co-spiking human plasma and urine with r-hirudin and peg-hirudin, the two proteins were completely resolved, and again a linear correlation between peak area and concentration was found. the method separates r-hirudin from peg-hirudin and may be applied to biological systems to measure the concentration of r-hirudin. triabin is a thrombin inhibitor from the saliva of t. pallidipennis, structurally unrelated to any known protease inhibitor, which probably functions through an interaction with the anion-binding exosite of thrombin.
we used sf insect cells infected with recombinant baculovirus to produce sufficient triabin for a detailed biochemical characterization. the activity of the protein purified from cell lysates was assessed in a fibrinogen clotting assay and was found to be similar to that of the natural protein. a -fold prolongation of thrombin clotting time and aptt was achieved with nm and nm triabin, respectively. a kinetic analysis of the thrombin-catalyzed fibrinopeptide a release from fibrinogen showed that triabin is a tight-binding inhibitor. using the graphical method of dixon, the ki was determined to be pm. introduction: thrombocytopenia is a common adverse effect of heparin therapy; in type ii hit the platelet decrease induces severe complications. we here present two special cases of type ii hit. case report i: a year old male patient with dvt of the left leg was treated with therapeutic doses of heparin. from the first to the th day of therapy, the platelet count decreased from to /µl. hit was confirmed by the hipa test, heparin therapy was stopped and treatment with the heparinoid orgaran was started. during the following days, arterial thromboses in the right a. femoralis occurred. several thrombectomies were not successful, and although orgaran was stopped because of suspected cross-reactivity, amputation of the right leg could not be avoided. during the following days under hirudin treatment the platelet count normalized and no further complications occurred. case report : a year old female patient suffering from hip fracture was treated by surgery with a tep operation and received prophylactic heparin treatment. after days, the platelet count decreased from initially to /µl and dvt of the right leg was diagnosed. on the same day, severe bleeding into the left leg was observed and the hemoglobin concentration was diminished to . g% (before surgery . g%). hit was confirmed by the hipa test, heparin was stopped and treatment with orgaran started. the thrombocyte count normalized and no further complications occurred. conclusion: hit type ii can cause severe bleeding as well as thromboembolic complications. because of possible cross-reactivity between heparin and orgaran, hirudin should be given in hit patients. currently thrombin time (tt), aptt, activated clotting time (act) or anti-iia activity (aiia), measured by a chromogenic substrate test, are used to monitor hirudin treatment or prophylaxis. the tt responds very sensitively to hirudin plasma levels and thus requires variable thrombin concentrations. the aptt appears to be more adequate; however, it shows large interindividual variations and does not respond sensitively enough to higher hirudin concentrations. the act is a simple whole blood clotting assay, but it is strongly influenced by the blood collection technique. the ecarin clotting time (ect) is a new clotting assay, recently described by nowak and bucha (thromb. haemost.). it measures the clotting time of citrated blood or plasma after prothrombin activation by ecarin, a snake venom enzyme from echis carinatus. the ect shows a linear dependence on different hirudin concentrations over a wide concentration range (e.g. . - µg/ml). in a clinical interaction study healthy volunteers were administered hirudin, asa or both. male volunteers received an i.v. infusion of peg-hirudin ( . mg/kg/h) for hours after an initial i.v. bolus of . mg/kg to compare the sensitivity and reliability of the ect with aptt, tt and act.
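the linear ect–hirudin dependence just described is what makes the assay usable as a calibration curve: fit clotting time against known concentrations, then invert the line to estimate an unknown plasma level. the sketch below does this with invented calibrator points; the study's own concentrations and clotting times are elided in this text.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

conc = [250, 500, 1000, 2000, 4000]   # hirudin, ng/ml -- hypothetical calibrators
ect  = [45, 60, 92, 155, 280]         # clotting time, s -- hypothetical responses
a, b = fit_line(conc, ect)
measured_ect = 120.0                  # seconds, a hypothetical patient sample
print(f"estimated hirudin level: {(measured_ect - b) / a:.0f} ng/ml")
```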
the act was measured on the hemochron (usa), the ect on a fibrin timer, the aptt using the lyophilized silica aptt reagent by il, and the aiia activity on an acl (il, milan) with a chromogenic substrate. all tests were performed in duplicate. the ect was more sensitive to different hirudin concentrations than the aptt or act. the ect results correlated better with the aiia activity than did the aptt and act. the lower detection limit of the ect is . µg/ml hirudin. the ect is a very sensitive, simple and reliable test for the monitoring of hirudin treatment and prophylaxis. recombinant and synthetic inhibitors of thrombin such as hirudin, efegatran and argatroban are currently in various phases of clinical trials in several surgical and medical indications. the therapeutic effects of these agents are usually monitored by aptt, whereas in cardiovascular indications the celite act and hemotec act are used. the reliability of both the aptt and the act tests in predicting the safety of various thrombin inhibitors has been heavily debated. furthermore, some of these inhibitors are administered simultaneously to heparinized or coumadinized patients, and the obtained aptt and act results do not truly reflect the effects of these agents. ecarin is a snake venom enzyme derived from echis carinatus which converts prothrombin into meizothrombin, targeting the arg-ile bond between the a and b chains of prothrombin. while thrombin inhibitors are capable of inhibiting meizothrombin, the atiii/heparin complex does not have any effect. using purified ecarin, nowak and bucha (thromb. haemost. : ) proposed to assay hirudin. since thrombin inhibitors exhibit similar mechanisms of thrombin inhibition, the ecarin clotting time (ect) was evaluated to test its diagnostic efficacy in various experimental and clinical settings. lyophilized ecarin was obtained from knoll ag (ludwigshafen, germany). concentration-dependent clotting times for hirudin, efegatran and argatroban were obtained in a range of - µg/ml. all of the antithrombin agents produced a concentration-dependent prolongation of the ect and showed varying potencies in the order efegatran > argatroban > hirudin on a gravimetric basis. on a molar basis, the anticoagulant order of potency was found to be hirudin > efegatran > argatroban. utilizing the ect, the effect of these inhibitors on patients undergoing bolus or infusion therapy, resulting in concentration levels of ~ µg/ml, has been measured. unlike such global tests as the pt and aptt, patients receiving simultaneous heparin or oral anticoagulants can be monitored for antithrombin-specific prolongation of the ect. plasma samples from heparinized (aptt - sec) or coumadinized (pt - sec) patients, supplemented with argatroban or hirudin, did not show any differences in the ect. a modified ecarin act comparable to the celite act has also been developed. initial results demonstrate that this test is not affected by aprotinin, heparin or reduction of the prothrombin complexes in the inr range of . - . . these results indicate that ecarin-based clotting times provide specific estimates of circulating levels of thrombin inhibitors, which can provide reliable information to optimize their safety and efficacy. r-hirudin is a highly potent and selective inhibitor of the serine proteinase thrombin. after intravenous administration, r-hirudin is eliminated exclusively with the urine. its plasma half-life is very short, - h. peg-hirudin is a derivative produced by coupling polyethylene glycol (peg) to a specially designed recombinant hirudin mutein.
peg-coupling results in a considerable prolongation of the plasma half-life of peg-hirudin compared to r-hirudin. after intravenous administration of r-hirudin into rats, a very small amount of "hirudin-like" activity ( - % of the applied activity) was recovered in the urine. in contrast, after peg-hirudin had been administered, more than % of the applied activity could be recovered in rat urine. these results suggest differences in the renal metabolism of peg-hirudin and r-hirudin. within the scope of pharmacokinetic studies in rats we investigated the appearance of biologically active metabolites of peg-hirudin in urine after kidney passage. affinity chromatography on immobilised thrombin was used as a quick and gentle method in searching for biologically active hirudin metabolites in rat urine, but it had to be complemented by anion-exchange and/or reversed-phase chromatography to ensure that all active metabolites were detected. the isolated biologically active metabolites were purified by reversed-phase hplc and were biochemically characterized. in previously reported studies we found a hirudin derivative consisting of the amino acids - as the main metabolite in rat urine following intravenous administration of r-hirudin. this metabolite was not detected in the urine after administration of peg-hirudin, confirming the suggestion of a different renal metabolism. carrageenans are high molecular weight sulfated polygalactans of plant origin (derived from red algae) with anticoagulant properties. in previous studies we investigated the anticoagulant activity of lambda-carrageenan, a highly sulfated type of carrageenan. unlike heparin, lambda-carrageenan exerts its anticoagulant activity primarily through direct inhibition of the serine proteinase thrombin. only a part of its antithrombin activity is indirectly mediated through antithrombin iii. to investigate relations between molecular weight and biological activities, lambda-carrageenan has been hydrolysed and fractionated. the molecular weight has been determined with the aid of size exclusion hplc using dextrans as molecular weight standards. the degree of sulfation has been determined by anion-exchange hplc. we have obtained low molecular weight lambda-carrageenans ranging from , dalton to , dalton with degrees of sulfation of - % and - %. the anticoagulant and antithrombin activities of the low molecular weight carrageenans have been determined using coagulation assays and purified systems, and we have compared their activities with those of heparin and other sulfated polysaccharides. further, we have investigated the ability of lambda-carrageenan and its low molecular weight derivatives to inhibit the activity of human blood phagocytes. the activity has been determined by measuring the cellular chemiluminescence in a microplate luminometer using a luminol-dependent assay and zymosan as the phagocytosis-activating agent. we have used an assay in human whole blood and assays with isolated human mononuclear and polymorphonuclear cells. the anticoagulant activity, and also the ability of carrageenans to inhibit the activity of human macrophages, decrease with decreasing molecular weight and decreasing degree of sulfation. the naturally occurring yellow pigment curcumin is the major component of turmeric and is commonly used as a spice and food-coloring agent.
since curcumin has been reported to have anti-tumor-promoting, antithrombotic and anti-inflammatory properties, we studied whether curcumin acts on the transcription factors ap-1 (jun/fos) and nf-κb in cultured endothelial cells (ec). when ec were cultured in the presence of curcumin, electrophoretic mobility shift assays (emsa) demonstrated that binding of endogenous ap-1 to its dna recognition motif was suppressed. inhibition was due to direct interactions of curcumin with the dna-binding motif for ap-1. enhanced ap-1 binding, induced after tnfα stimulation of ec, was decreased in cells pretreated with curcumin. this resulted in reduced transcription and expression of tissue factor, known to be controlled by ap-1 and nf-κb. nuclear run-on assays proved that curcumin directly reduced the tnfα-mediated transcription of genes regulated by ap-1, such as tf, endothelin- and c-jun. thus, curcumin did not only suppress ap-1 (jun/fos) binding, but also inhibited tnfα-induced jun transcription. transient transfections with tissue factor promoter plasmids confirmed that inhibition by curcumin was dependent on intact ap-1 sites. beside its effect on ap-1 binding, curcumin reduced the radical-dependent activation of nf-κb due to its antioxidant properties; however, this inhibition was indirect and less prominent. the relevance of the in vitro data was confirmed in vivo in mice bearing meth-a sarcoma. when mice received curcumin before tnfα was injected, tumors showed reduced ap-1 activation. simultaneously, fibrin/fibrinogen deposition decreased, most probably due to reduced tissue factor expression. thus, curcumin inhibits ap-1 activation and the expression of endothelial genes controlled by ap-1 in vitro and in vivo. (jung, ). additionally, haemorheological parameters (plasma viscosity, erythrocyte aggregation) were measured. in all patients aptt, bleeding time, platelet adhesiveness, von willebrand factor and factor viii concentration and activity were determined. the patients with von willebrand disease showed characteristic morphological changes of capillary geometry. the tortuosity of nailfold capillaries was markedly increased, as was the diameter of capillaries on the arterial and venous side. plasma viscosity was significantly low. multiple parameter analysis according to galen and gambino ( ), using the parameters "plasma viscosity below . mpas", "tortuosity index higher than " and "erythrocyte column diameter bigger than . µm", showed a positive predictive value of %. capillary diameter and capillary tortuosity have a positive predictive value of . %. additionally, a reduction of the vasomotoric reserve and/or a decreased erythrocyte velocity in the capillaries below the reference range was found in most of the von willebrand patients. it was quite remarkable that of the von willebrand patients showed significant capillary bleedings. these findings confirm some former observations (e.g. o'brien ) and preliminary reports of our group (koscielny ). polymerase chain reaction (pcr)-based quantitation of mrna transcripts is an important tool in the investigation of the underlying molecular defects in inherited platelet disorders, such as the bernard-soulier syndrome. however, for the exact quantitation of mrna a number of methodological requirements has to be met. first, a standard (s) mrna must be synthesized which is able to undergo the same processing as the target wild-type (wt) mrna.
secondly, the quantitation step following the pcr must differentially recognize standard and target dna, and thirdly, the assay must be precise with respect to both inter- and intraassay variability. in order to satisfy these requirements we constructed an s-gpib mrna which is identical to the wt-gpib mrna except for a bp-long primer recognition site at its ' end, allowing differentiation between the pcr-amplified wt- or s-gpib cdna through incorporation of a fluorescein- or biotin-labelled ' primer. both standard and wt gpib mrna showed identical amplification kinetics in the pcr reaction. the amplified dna was quantified using a dna binding assay. in this assay, binding of the amplified dna to gcn fusion protein-coated microtiter plates is measured. since the gcn binding motif is incorporated into the wt- and s-gpib cdna through an identical ' primer, competition between s- and wt-cdna during amplification has been analyzed. at a given concentration of nm of the gcn primer, no competition between the s-dna and wt-dna for the primer was observed during pcr cycles. the sensitivity limit of the assay performed in this way was amol wt-gpib dna, and the intraassay variability ranged from . % to . %, calculated for fmol and fmol dna, respectively. to sum up, the combination of rt-pcr with the amplified-dna binding assay and the usage of an internal standard mrna allows sensitive and accurate quantitation of gpibα mrna in human platelets. since upa and thrombin are main contributors to the proliferation and migration of vascular smooth muscle cells (vsmc), which is part of the pathogenesis of atherosclerosis, we are currently assessing the role of spatial expression of upa and the thrombin receptor (tr) on cells within human carotid artery plaques (n = ). we have used a double immunolabeling approach, combining anti-upa and anti-tr antibodies. to identify the different cell types, we used the following antibodies: anti-α-smooth muscle actin (α-sma) for smooth muscle cells, ulex europaeus agglutinin i (uea i) for endothelial cells, an inflammation cell cocktail (cd + cd ) for monocytes/macrophages and lymphocytes, and an anti-proliferating cell nuclear antigen (pcna) antibody to stain proliferating cells. in the carotid atherosclerotic plaques, upa immunostaining was distributed focally, preferentially in the fibrous cap and some cells of the foam cell-rich region (fcrr). it was present in a distinct pattern: cytoplasmic staining. tr staining was distributed similarly to upa staining. with double staining combining anti-upa antibodies with anti-tr antibodies, cellular co-localisation of both upa and tr was demonstrated. these cells were identified as smooth muscle cells by α-sma. inflammatory cells were mainly localized within the fcrr; they stained only for upa. in conclusion: our data demonstrate that upa and tr are coexpressed in vsmcs in human carotid artery atherosclerotic plaque tissue. we therefore conclude that the mitogenic activity of upa is associated with the thrombin signalling pathway. in the proficiency test of the "deutsche gesellschaft für klinische chemie" (dgkc), lyophilised plasma samples (immuno ag) were sent to the participants: a normal plasma and plasmas from persons under oral anticoagulation (oac plasmas, inr . to . ). the participants (n = ) returned the pt times obtained and in most cases (n = ) also the isi value for the thromboplastin used (isi of pack insert). the inr was calculated using the pt of normal plasma and the isi of the pack insert (method i).
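for orientation, a sketch of the calculation routes compared here: method i applies the pack-insert isi directly, while methods ii and iii (described next) derive a laboratory-specific isi from a log-log calibration line through the certified oac plasmas, the isi being 1/slope. all pt values and isis below are hypothetical.

```python
import math

def inr_pack_insert(pt_patient, pt_normal, isi):
    """Method I: INR = (PT_patient / PT_normal) ** ISI, with the pack-insert ISI."""
    return (pt_patient / pt_normal) ** isi

def laboratory_isi(certified_inr, measured_pt, pt_normal):
    """Calibrated-plasma idea: regress log(PT ratio) on log(certified INR);
    the laboratory-specific ISI is 1/slope of that calibration line."""
    xs = [math.log(i) for i in certified_inr]
    ys = [math.log(pt / pt_normal) for pt in measured_pt]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return 1 / slope

pt_normal = 12.0                                   # s, hypothetical
print(inr_pack_insert(30.0, pt_normal, isi=1.1))   # method I
print(laboratory_isi([1.5, 2.5, 3.5, 4.5],         # certified INRs of OAC plasmas
                     [17.0, 26.0, 34.0, 42.0],     # PTs measured locally, s
                     pt_normal))
```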
two additional methods for inr calculation were compared with method i. according to the concept of calibrated plasmas (houbouyan et al.), a calibration curve was constructed using the normal plasma and the oac plasmas. the inr was calculated using the pt of normal plasma and the laboratory-specific isi value, given as /slope of the calibration curve (method ii), or was read off the calibration curve directly (method iii). for the inr values calculated by the methods from the participants' data (n = ), outlier elimination ( sd, iterative) was performed. the inr mean values for all calculation models remain in a narrow range. using calibrated plasmas (methods ii and iii), fewer outliers were eliminated and the cvs obtained were smaller than using the conventional procedure (method i). obviously, the inr-inherent problems, such as the accuracy of the isi value, the pt value of normal plasma and instrument/laboratory influences on the isi, can be reduced using calibrated oac plasmas. practical approach and educational considerations of home prothrombin time estimation. a. bernardo, a. bernardo, c. halhuber, herz-kreislauf-klinik, bad berleburg, germany. specific training is necessary for the patient to achieve reliable and reproducible results in prothrombin time measurement. the training scheme is based in many respects on experience with similar training courses for home control and management of diabetes and asthma. the education program is divided into a theoretical and a practical part. the theory part has group sessions of twenty patients at a time. the practical course is reduced to a maximum of five patients. the sessions are conducted by a medical doctor and by specialized medical/technical assistants. on average, eight hours of theoretical education and two hours of practical training are sufficient. the contents of the theoretical lessons are: • need for anticoagulation after heart valve replacement, • potential interactions between anticoagulants and other medication, • accurate recording of the measured prothrombin time results, • techniques of prospective determination of the necessary amount of anticoagulant, • calculation of the individual doses, • potential pitfalls and mistakes, • corrections in case of over- and under-dosage, • early recognition of thromboembolic and/or bleeding complications. an alternative is a full-day intensive course which can be held during the weekend. our recently reported ( ) observation that oral anticoagulant treatment causes an increase of heparin cofactor ii (hc ii) activity in plasma is now confirmed by a more extensive study. in thrombophilic patients who were on vitamin k antagonist therapy (marcumar) we found a median hc ii level of % as compared to % for thrombophilic patients without any therapy (p < . ) and % for healthy controls (p < . ). moreover, we observed that the increase of the hc ii level was significantly correlated with increasing inr values (r = . , p < . ). follow-up observations on some patients showed, however, clear differences in the levels of hc ii activity after the onset of vitamin k antagonist therapy. thus, some patients responded rapidly with a significant increase in activity ("strong responders") while others showed only slight changes ("weak responders"). in conclusion, the determination of hc ii activity may result in an improved estimation of the risk of bleeding, especially in high-intensity treated patients (inr > . ). after intracoronary stent implantation, an aggressive oral anticoagulation (oac) therapy is mandatory.
to find out whether coagulation activation occurs after coronary stent implantation during high dose oac therapy, markers of plasmatic coagulation and d-dimer were measured. patients: male patients (average age years) were examined. blood samples were taken before and right after stent implantation and during the following week. patients received mg phenprocoumon during the first three days, and additionally heparin and acetylsalicylic acid (asa) were given. methods: ptz, aptt, tz, protein c, tat complexes, f1+2 and d-dimer were measured. results: d-dimer levels increased steadily between day and day . tat complexes showed a slight increase from day ( . µg/l) to day ( . µg/l). on day , tat levels were down again ( . µg/l). f1+2 (day : . ng/ml) also showed a slight increase on day ( . ng/ml). protein c decreased steadily from day ( %) to day ( %). conclusion: during the initial phase of oac therapy a coagulation activation is reported, but no significant elevation of tat or f1+2 was found. this result shows that the additional heparin and asa therapy was sufficient to avoid systemic coagulation activation. the increase of d-dimer should be interpreted as a sign of a local fibrinolytic reaction due to stent implantation. three methods for the determination of the prothrombin time from capillary blood in patients under oral anticoagulation have been investigated. two methods were run on coaguchek® monitors (boehringer mannheim) from capillary whole blood. after finger puncture, the first drop of blood was applied to the well of a coaguchek® test strip directly from the fingertip, whereas the second drop was sucked into a non-anticoagulated plastic capillary (hirschmann) and immediately applied to the test strip — and vice versa, to eliminate any influence of the first and second drop of blood. the third method was hepato quick (boehringer mannheim), which was determined from citrated capillary blood from an earlobe puncture. specimens from patients under oral anticoagulation were investigated. the method comparisons between each of the coaguchek® methods and the laboratory method show good results, and the correlation between the coaguchek® methods is excellent. mean differences to the lab method are - . inr in both cases. no mean deviation was detectable between the coaguchek® methods. scattering of coaguchek® versus hepato quick was ± . inr in the range of to inr, except for three outliers and one patient with fluctuating results in the lab method which could not be resolved. introduction: haemorrhagic coumarin skin necrosis is a severe complication during the initial phase of oral anticoagulant therapy. histological examination shows thrombotic occlusion of small vessels, but little is known concerning the pathophysiologic background of the bleeding component. recently, we described protein z deficiency in patients with bleeding complications of otherwise unknown origin. thus, we were prompted to measure protein z in patients with coumarin skin necrosis. patients: patients ( man, women; age ± years) suffering from haemorrhagic coumarin skin necrosis were examined. all patients had normal liver protein synthesis function; none was under oral anticoagulant treatment during this study. method: protein z antigen test, diagnostica stago, france. results: of the patients examined had diminished protein z levels ( , , , µg/l) in comparison to normals ( µg/l). in one of our patients, protein z was normal ( µg/l). conclusion: low protein z levels are an additional risk factor for haemorrhagic coumarin skin necrosis.
oral anticoagulant therapy is the treatment of choice in patients with a need for long-term anticoagulation. since oral anticoagulants interfere with the function of vitamin k, it is not clear whether stable oral anticoagulation can be achieved in patients needing continuous substitution of fat-soluble vitamins, including vitamin k. we report on a -year-old man who had experienced progressive hypertrophic obstructive cardiomyopathy over the preceding years. atrial fibrillation was first diagnosed years ago. later on, recurrent ischemic attacks and embolism of the right arteria iliaca occurred. in , the patient received extirpation of the ileum and subtotal amputation of the jejunum because of mesenteric infarction. the resulting short bowel syndrome requires continuous substitution of fat-soluble vitamins. since vitamin k-free preparations of fat-soluble vitamins for parenteral use are not available, prophylaxis of thrombosis had been performed with unfractionated heparin. as a consequence of the long-term treatment with heparin, the patient developed severe osteoporosis. therefore, the decision to discontinue heparin therapy and initiate oral anticoagulation was made. because of its shorter half-life, warfarin (coumadin) was used instead of dicoumarol. over a weeks-long induction phase, inr values were controlled daily. a dosage regimen starting with mg warfarin on the day of vitamin application (day ), followed by . mg on day and . mg on days , , and , respectively, was found to be optimal to maintain inr values within the target range (inr . - . ). in order to minimize the risk of hemorrhage, the vitamin administration was changed to the subcutaneous route. during an observation period of months, neither bleeding or thrombotic complications nor a vitamin deficiency occurred. these data indicate that stable oral anticoagulation can be achieved despite extreme variation of vitamin k plasma levels. portable monitors for home monitoring of the inr are well established for adults on oral anticoagulants. patients' compliance is improved, as well as the long-term outcome. experience concerning the accuracy of the procedure in children is limited. inr determinations were performed in parallel from venous and capillary blood samples of an infant on phenprocoumon, starting at the age of months. the coaguchek® monitor from boehringer mannheim was used. choosing an arbitrary range of agreement of ∆inr . for both determinations, % of the measurements were within the defined range. / outliers were due to low inr values resulting from difficulties in capillary blood sampling. the degree of agreement increased when the procedure was performed at least once a week. in conclusion: inr determination with a portable monitor may be helpful in home monitoring of oral anticoagulant therapy in young children. a dose adjustment should be done only on the basis of an inr determination from venous blood — if it is considered the gold standard — to avoid over-anticoagulation. a stable anticoagulation is one of the most difficult tasks in attending patients with heart valve prostheses. if prothrombin times are out of the therapeutic range, the risk of bleeding or thromboembolism increases disproportionately. for this reason, any improvement in anticoagulant control and/or management can have far-reaching consequences in decreasing complications, in extending longevity and in improving quality of life.
for the first time, a clinical trial was started in and continues until today at the cardiac rehabilitation center bad berleburg, germany, with patients mainly after heart valve replacement. the patients were trained to measure their own prothrombin time and to adjust their own dosage of the oral anticoagulant. within six years, patients were trained; patients could be followed up with regard to their self-determined prothrombin times. the results were within the therapeutic range in . % of the measurements (n = . ) taken by the patients themselves. on average, the patients who determined their prothrombin time themselves did so at a weekly interval. neither major bleeding nor thromboembolic complications could be observed in the patient-years of home prothrombin estimation. it is to be hoped that the usual rate of complications can be reduced when patients determine their prothrombin time themselves at a close interval, resulting in more constant values in the therapeutic range and slight corrections of the anticoagulant dose. home prothrombin estimation promises better quality of life and has a considerable potential to achieve this goal. circulating plasma thrombomodulin (tm) is a novel endothelial cell marker which may reflect endothelial injury. tm acts as a thrombin receptor which neutralises the fibrin-forming effect of thrombin and also accelerates the formation of the anticoagulant protein c/s pathway. tm therefore belongs to the anticoagulant defence system against thrombosis. increased tm levels have been described in various diseases such as ards, thromboembolic diseases, ttp, diabetes, le and cml, reflecting alterations of the vascular system at the endothelial level. to find out to what extent cardiac catheterisation irritates the vascular endothelium, tm concentrations (stago, asnières, france: x iu/ml) were investigated prospectively in infants and children (three days to years). blood samples were drawn before the intervention, immediately at the end and h later, snap frozen (- °c) and investigated serially in duplicate six weeks to months later. the results (median and range values) are shown in the table. the enhanced tm concentrations immediately after the operative intervention, followed by normalisation within h, indicate that cardiac catheterisation in pediatric patients leads to a short-lasting irritation of the vascular endothelium rather than to severe irreversible endothelial damage. recently, in an aptt-based method, dahlbäck et al. described in vitro resistance to the anticoagulant effect of activated protein c (apc) in thrombophilic adult patients. apcr is in the majority of cases associated with the arg gln point mutation in the factor v gene. concerning the special properties of the neonatal hemostatic system (low vitamin k-dependent coagulation factors, physiological prolongation of the pt and aptt), we adjusted this aptt-based method (chromogenix, mölndal, sweden) to neonatal requirements: apcr was measured in healthy infants according to dahlbäck. the results were expressed as apc ratios: the clotting time obtained in a : , : and : dilution with factor v-deficient plasma (instrumentation laboratory, munich, germany) using the apc/cacl2 solution, divided by the clotting time obtained with cacl2 in the same : , : and : dilutions. in addition, plasma samples of neonates with septicaemia were investigated, and data of infants aged birth to three months with arg gln +/- are shown.
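a sketch of the ratio arithmetic just defined: the clotting time measured with apc/cacl2 divided by the clotting time measured with cacl2 alone, each in a dilution with factor v-deficient plasma. the clotting times and the decision cut-off below are hypothetical, since the study's numeric values are elided in this text.

```python
CUTOFF = 2.0  # hypothetical cut-off; the study's value is elided above

def apc_ratio(clot_time_with_apc, clot_time_without_apc):
    """APC ratio: clotting time with APC/CaCl2 over clotting time with CaCl2 alone."""
    return clot_time_with_apc / clot_time_without_apc

ratio = apc_ratio(clot_time_with_apc=65.0, clot_time_without_apc=38.0)  # s, invented
verdict = "APC resistant" if ratio <= CUTOFF else "normal"
print(f"APC ratio = {ratio:.2f} -> {verdict}")
```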
case report: we report on an -year-old boy with severe hemophilia b and frequent screaming at night. eeg showed spike-wave activity, starting from the temporal lobe but generalizing within seconds. complex partial seizures were diagnosed and therapy with carbamazepine was initiated. as no improvement was seen, nmr was performed. this revealed lesions within the right frontal cortex. higher doses of carbamazepine were not successful, nor was therapy with phenytoin or primidone, respectively. the patient is now treated with carbamazepine and valproate. he still suffers from one short seizure per day. because of his seizures, we started prophylactic replacement therapy with iu factor ix twice per week. discussion: in , wilson et al. first detected brain abnormalities in of children and adolescents with hemophilia a or b who were negative for human immunodeficiency virus ( ). the most common findings ( / patients) were small, focal, nonhemorrhagic white matter lesions of high signal intensity on t -weighted images. similar lesions have been reported in children with sickle cell cerebral infarction ( ). only three of these patients had seizures, all of those having a documented history of intracranial hemorrhage. our patient has lesions similar to those described by wilson et al., but no history of intracranial hemorrhage is documented. even if tuberous sclerosis might be a differential diagnosis, we think that the abnormalities are related to hemophilia or its treatment, because the patient has no further signs of this disorder. conclusions: . in patients with hemophilia and seizures, nmr might be useful as a highly sensitive method for the detection of gray and white matter changes. . further studies should be initiated to determine the prevalence of pathological conditions in the brain of hemophiliac patients.
disseminated intravascular coagulation (dic) is a rare but foudroyant condition occurring in gram-negative sepsis such as meningococcal septicemia. despite the availability of potent antibiotics, mortality in meningococcal disease remains high (about %), rising to % in patients presenting with severe shock and consecutive dic. as the clinical course and the severity of manifestations of systemic meningococcal infections vary, there is a need for early diagnosis of the infection and of the stage of coagulopathy in order to reduce the high mortality rate. few and rapidly available parameters are needed to classify the wide spectrum of clinical and laboratory findings in patients with dic. the parameters include partial thromboplastin time, prothrombin time, plasma levels of fibrinogen, fibrin monomers and dimers, fibrin degradation products and the thrombocyte count.
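as a rough illustration of how such a small parameter panel can be turned into a stage-oriented screen, the following python sketch counts abnormal findings; every cutoff is a placeholder invented for the example, not one of the (elided) values from this abstract, and nothing here is a clinical rule:

def dic_stage(aptt_sec, pt_percent, fibrinogen_g_l, platelets_per_nl,
              fdp_or_monomers_elevated):
    # count abnormal findings among the listed parameters;
    # every cutoff below is an invented placeholder
    findings = 0
    findings += aptt_sec > 45            # prolonged aptt
    findings += pt_percent < 60          # reduced prothrombin time (%)
    findings += fibrinogen_g_l < 1.5     # fibrinogen consumption
    findings += platelets_per_nl < 100   # falling thrombocyte count
    findings += bool(fdp_or_monomers_elevated)
    if findings >= 4:
        return "advanced dic"
    if findings >= 2:
        return "pre-dic"
    return "no laboratory evidence of dic"

print(dic_stage(aptt_sec=52, pt_percent=48, fibrinogen_g_l=1.2,
                platelets_per_nl=85, fdp_or_monomers_elevated=True))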
monitoring the course of hemostaseological findings in pediatric patients with systemic meningococcal infections, we observed a change of coagulation parameters as early as in the first stages of the infection: a prolongation of the partial thromboplastin time to an average of . sec (range - sec, normal - sec), a decrease of the prothrombin time to . % (range - %, normal - %) and of antithrombin iii to an average level of . u/ml (normal - u/ml) was found to (- ) hours after admission. the consecutive development of the hemostaseological parameters mentioned above made it possible to define the stage of coagulopathy and thus to induce a stage-related therapy. primary treatment consisted of control of shock by fluid substitution, compensation of metabolic acidosis, correction of clotting disorders (at iii and heparin in the stage of pre-dic; at iii and fresh frozen plasma in case of advanced dic) and treatment with β-lactam antibiotics (e.g. cefotaxime or ceftriaxone). an early assessment of the coagulation disorders in meningococcal disease can be based on a few coagulation parameters, so that appropriate treatment may be arranged to prevent a fatal outcome of meningococcal septicemia and to protect the patient from the development of a waterhouse-friderichsen syndrome.
this study was designed to prospectively evaluate coagulation and fibrinolytic activation in children (neonate - years) during cardiac catheterisation with low-dose flush heparin ( iu/ml saline). aptt (instrumentation laboratory: sec), anti-xa activity (xa; chromogenix: iu/ml), prothrombin fragment f . (f . ; behring werke marburg: nmol/l) and d-dimer formation (d-d; behring werke marburg: ug/l) were investigated before (t ), at the end (t ) and h after cardiac catheterisation (t ). in addition, to evaluate the influence of inherited thrombophilia, resistance to activated protein c (apcr), protein c, protein s and antithrombin were investigated in all patients. during catheterisation, heparin was administered in a median (range) total dose of ( - ) iu/kg bw. in addition, infants < months of age (arterial catheterisation only) or patients with known thrombophilia received - iu/kg heparin for a further hours. the results (median and range) are shown in the table. f . was significantly elevated above the pediatric boundary immediately after the intervention and nearly reached baseline values h later. in contrast, no clinically relevant fibrinolytic activation was seen: d-dimer formation increased within the pediatric boundary immediately after the catheter and returned to baseline levels h later. three children showed resistance to apc. in one child, stroke had occurred before. as the result of apcr was not yet known in the remaining two patients, only one neonate received further prophylactic heparin. the other neonate, without heparin prophylaxis, suffered from venous occlusion within two days after the intervention. in addition, no protein c, protein s or antithrombin deficiencies were found. although administration of low-dose flush heparinisation during cardiac catheterisation could not prevent short-term coagulation activation, no thrombotic events occurred in children without inherited thrombophilia. whether further prophylactic heparinisation in children with apcr or protein c, protein s or antithrombin deficiencies may prevent vascular occlusion requires a more intensive study.
a. sandvoss, w. eberl, m. b rchert. introduction: capillary leakage, edema and hypovolemia are common complications in preterm infants, especially if birth weight is below . g.
septicemia, asphyxia and immaturity seem to be the most important risk factors. to determine the influence of c esterase inhibitor (c -ina) in preventing contact phase and complement activation, we investigated c -ina concentrations in normal and symptomatic preterm infants. methods: activity of c -ina was measured by a chromogenic substrate method (behringwerke), c -ina concentration with radial immunodiffusion (behringwerke, germany). results: c -ina activity in asymptomatic preterm infants (n= ) was +/- % of normal at birth. healthy newborns showed activities of +/- %. c -ina reached normal adult values - days after birth. preterm infants with respiratory distress syndrome (n= ) showed lower activity on days - ; patients with additional septicemia (n= ) had decreasing c -ina activities in the first three days of life. the individual course of c -ina activity and thrombocyte count correlated in the group with irds with and without septicemia. in children with capillary leakage, the onset of diuresis paralleled rising c -ina activity. markers of contact phase (f xiia) and complement activation (c a) were investigated in single cases and evidence for involvement of both systems was found. conclusion: contact activation and the complement system play an important role in capillary leakage in preterm infants. c -ina regulates both systems. the activity of c -ina correlates with the clinical course; substitution therapy is possible and may improve the outcome of these critically ill patients.
antiphospholipid antibodies (apa) interfere with hemostasis, probably by inhibition of protein c or the prothrombinase complex. thereby, apa might lead to thrombosis or increased bleeding. however, the incidence and clinical importance of apa have not yet been investigated in children. therefore, we assayed plasma samples of children, aged . to years (mean years), by elisa detecting igg- and igm-antibodies directed against cardiolipin, phosphatidyl serine and phosphatidic acid. in patients with increased bleeding, thrombophilia or prolonged clotting tests, a detailed coagulation analysis was performed. according to their diagnosis, children were divided into groups: i. autoimmune diseases, ii. infections, iii. metabolic diseases, iv. other diseases, v. healthy children. results: apa were found in / patients. in the respective groups we demonstrated apa in the following proportions: . igg-isotype: .
activity of c esterase inhibitor (c -ina) is reduced in preterm infants, especially if birth weight is below . g and respiratory distress syndrome and/or septicemia is present. capillary leakage with generalized edema, hypovolemia and hypotension results in an imbalance between inhibition and activation of the contact phase and complement system. in four patients we investigated seven courses of substitution with a commercial c esterase inhibitor preparation (berinert®, behringwerke); case reports are given. all patients had clinical symptoms of capillary leakage; all had septicemia accompanied by either respiratory distress, disseminated intravascular coagulation or multiple organ failure. the efficacy of substitution therapy is dose-related; supranormal activities of c -ina are necessary, reflecting raised consumption of the inhibitor in ongoing disease. clinical effects on diuresis, catecholamine need and especially on thrombocyte counts are demonstrated.
... or arterial thromboembolic event in children. e. lenz, c. heller, w. schröter*, w. kreuz. johann w. goethe-universitätskinderklinik, frankfurt a.
main, germany; *georg-august-universitätskinderklinik, göttingen, germany. venous thrombosis as well as arterial thrombo-occlusive events are rarely observed in childhood, but can lead to life-threatening situations and long-term sequelae in these patients. after the initial stage of treatment (thrombolysis or thrombectomy), the pediatrician has to decide how to efficiently prevent re-thrombosis in the individual patient. anticoagulation after venous thrombosis is generally recommended for months after the event; if an underlying thrombophilic condition has been detected in the patient, anticoagulation has to be considered lifelong. when evaluating antithrombotic therapies for children, it is important to consider whether the anticoagulatory effect is mainly needed in the venous or the arterial vessel system. the hemorrhagic risk and side effects of the different anticoagulatory preparations have to be taken into account, especially when treating small children. only limited experience exists concerning the suitability of the preparations for long-term anticoagulation in children, and general recommendations on the ideal dosage in pediatric patients are still missing. we want to discuss different types of anticoagulants (such as coumarins, unfractionated heparin, low molecular weight heparin (lmwh) and inhibitors of platelet aggregation), their mode of action, their suitability for pediatric patients, their side effects and the relevance of these side effects especially in children. from the experience with our own pediatric patients, we would like to report on the indications for administering these different preparations, the dosage regimens we recommend and the laboratory tests to monitor safe and efficient re-occlusion prophylaxis in our patients. in this context we would like to present our data on patients with either thrombosis or arterial infarction due to a thrombophilic condition, who all had contraindications to oral anticoagulation by coumarins. because prophylaxis against re-thrombosis was mandatory in these patients, lmwh was given for long-term anticoagulation in a daily subcutaneous dosage of - anti-xa u/kg bw. monitoring was done by anti-xa test ( . - . anti-xa u/ml). under this regimen none of the patients developed re-thrombosis or bleeding complications. alopecia was seen as a side effect.
this study was designed to prospectively evaluate coagulation and fibrinolytic activation after cardiopulmonary bypass with aprotinin ( x u/kg bw) in infants and children aged . - years, and to correlate these findings with the clinical outcome. prothrombin fragment f . (f . ; behring werke marburg: nmol/l), antithrombin-serine esterase complex (atm; stago: ng/ml), d-dimer formation (d-d; behring werke marburg: ug/l), tissue-type plasminogen activator antigen (t-pa; chromogenix: ng/ml), plasminogen activator inhibitor antigen (pai; chromogenix: ng/ml) and c -inhibitor (c ; behring werke marburg: x - g/l) were investigated before the operation (t ), at the end of the operation (t ), and on postoperative days (t ), - (t ) and - (t ), respectively. the results are shown in the table (median and median absolute deviation).
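the anti-xa-guided lmwh monitoring described above amounts to comparing a measured level against a target range. a hedged python sketch with a stand-in range follows, since the abstract's values are elided:

def check_anti_xa(level_u_ml, low=0.2, high=0.4):
    # the target range used here is a stand-in, not the elided
    # range from the abstract
    if level_u_ml < low:
        return "below target range"
    if level_u_ml > high:
        return "above target range"
    return "within target range"

for level in (0.15, 0.30, 0.55):  # invented anti-xa activities in u/ml
    print(level, "->", check_anti_xa(level))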
the platelet (pl) function defect induced by thrombolytic agents has been attributed either to the degradation of pl surface receptors or to the anti-aggregatory effect of fgdps. in contrast to other plasminogen activators, scu-pa is intimately linked with pl: they can rapidly incorporate exogenous scu-pa, release it upon stimulation and bind the proenzyme. recently we reported that exposure of prp to recombinant scu-pa ( . - µm) over timed intervals of - min resulted in dose-dependent inhibition of pl aggregation. time-course changes of the process followed biexponential kinetics: a rapid initial inhibition during the first - min with moderate suppression of pl aggregation in the min period. when tcu-pa ( - nm) was exposed to prp under the same conditions, dose- and time-dependent inhibition of pl aggregation was also observed, although the effect was obtained no earlier than min after exposure of tcu-pa to prp, and the threshold dose was higher. comparable inhibition of pl aggregation was obtained with nm of scu-pa versus nm of tcu-pa, and the fibrinogen depletion by the end of the min period was % and %, respectively. it is likely that tcu-pa and its precursor have different mechanisms of action on pl aggregatory function.
in a recent study we showed that recombinant rscu-pa inhibits platelet (pl) aggregation in prp. to exclude the possible influence of rscu-pa/plasma interference on this process, the aggregation of washed pls was investigated. pls were washed according to a modified mustard's method, suspended in buffer and adjusted to , /µl. the resuspended pls were exposed to - nm of rscu-pa for min at °c. at time points , , and min, the aggregation with . iu/ml of thrombin was measured. it was found that exposure of pls to rscu-pa ( - nm) for min resulted in marked inhibition of their aggregation. after - min of incubation with - nm of rscu-pa, the inhibitory effect on pl aggregation became less pronounced or even disappeared. when nm of rscu-pa was used, the inhibition of pl aggregation became significant only by min of the exposure period and did not change for min of investigation. the observed results may be connected with uptake of rscu-pa by pls from the surrounding buffer as well as with individual variations of pl response to the same concentration of rscu-pa.
loss of glycosylation may result in reduced platelet (p) survival and perhaps altered function. we analyzed the structural and functional effect of specific deglycosylation (combinations of n/o-glycosidase and neuraminidase treatment) of p and isolated p gpib. washed and formaldehyde-fixed p were digested as follows: ) with neuraminidase ( . u/ml) + o-glycosidase ( . mu/ml) + n-glycosidase ( . u/ml), ) with neuraminidase alone ( . u/ml), ) with n-glycosidase ( u/ml) and ) with neuraminidase ( . u/ml) + o-glycosidase ( mu/ml). all reactions were performed in the presence of protease inhibitors (pmsf, leupeptin, sbti). after washing x, the p and identically treated controls were analyzed by flow cytometry with the antibodies di (mab: anti-gpib), h (mab: anti-gpiiia), and the lectins wheat germ agglutinin (wga, for neunac) and peanut agglutinin (pna, for gal( - )-galnac), which confirmed effective and specific deglycosylation by the respective enzymes (but gave only minor differences with di and h ). the botrocetin (b)- and ristocetin (r)-induced agglutinations showed after treatment ) (all enzymes) a full inhibition of r-induced agglutination but only a mildly reduced b-induced agglutination ( % of normal).
treatments ) and ) (neuraminidase alone, and n-glycosidase alone) affected both agglutinations only mildly ( - % of normal). treatment ) (o-deglycosylation), however, showed a major inhibition of r-agglutination down to %, while b-agglutination interestingly was almost fully retained. the results of rotary shadowing electron microscopy of purified gpib suggested a collapse of the normally stretched, glycosylated gpib, not only after the treatment with all three glycosidases but also after o-deglycosylation alone. we conclude that o-glycosylation is most important for the ristocetin-induced platelet-von willebrand factor interaction and responsible for the typical stretched shape.
the phenomenon of in vitro platelet aggregation and consequent pseudothrombocytopenia (ptcp) in the presence of calcium chelation by na-edta and sodium citrate was studied in blood samples of a patient. initial platelet counts measured electronically were /µl in blood anticoagulated with na-edta and sodium citrate. normal platelet counts were found in heparin-anticoagulated blood and in capillary blood. immunoglobulins of the igg and igm class were identified in the patient's plasma. on incubation of the patient's serum with platelets of healthy individuals, platelet clumping occurred in the presence of na-edta and sodium citrate but not in the presence of heparin. the platelet membrane glycoproteins (gp) iib/iiia, ix and iiia/vnr β-chain were involved in the antigen-antibody reaction, as demonstrated by specific antibodies and flow cytometry. on the platelet surface, permanent calcium exchange and replacement depend on the external calcium concentration. calcium depletion induced by calcium chelators such as na-edta and sodium citrate might conformationally change platelet surfaces and induce the formation of neoantigens. the decrease of the gp iib/iiia platelet surface antigen to % (normal > %) indicated the important role of the gp iib/iiia receptor in ptcp.
the saliva of triatoma pallidipennis, a triatomine bug, was found to contain a protein called "pallidipin" that specifically inhibits collagen-induced platelet aggregation but not adhesion or shape change. to investigate the mechanism of action of recombinant pallidipin, its influence on platelet fibrinogen binding after activation by collagen type i in different concentrations was measured by flow cytometry. the same concentrations of pallidipin that inhibited collagen-induced platelet aggregation completely did not cause any inhibitory effect on fibrinogen binding in prp from the same donor measured at the same time. collagen type i-induced platelet aggregation of cd -deficient platelets from two different unrelated blood donors was inhibited by the same concentration of pallidipin that inhibited aggregation of control platelets. there was also no inhibition of collagen-induced fibrinogen binding in the cd -deficient platelets. pallidipin did not cause inhibition of collagen-induced membrane expression of cd and cd on control and cd -deficient platelets as measured by flow cytometry. however, earlier studies had shown an inhibition of collagen-induced atp and βtg secretion by pallidipin. therefore we compared the effect of pallidipin in unstirred and stirred prp samples. while pallidipin had no effect in unstirred samples, it showed strong inhibition of βtg secretion in stirred samples. we therefore conclude that pallidipin does not act on collagen-induced aggregation through cd and that the inhibition is a post-fibrinogen-binding event.
pallidipin does not influence the first steps in secretion, which are independent of the cytoskeleton and platelet-platelet contact, but inhibits the following steps.
-hydroxy-wortmannin does not inhibit the transport of nm-gold-labelled fibrinogen in resting platelets. e. morgenstern, b. kehrel and k.j. clemetson. medical biology, saarland univ., homburg, germany; haemostasis research, univ. muenster, germany; and theodor-kocher-institut, univ. bern, switzerland. wortmannin, an inhibitor of phosphoinositide -kinase and of myosin light chain kinase, blocks reactions of the activated platelet. to obtain information about the role of the contractile cytoskeleton in receptor-mediated transport in resting platelets, the effect of -hydroxy-wortmannin (hw) on the endocytosis of fibrinogen from the surface of resting platelets was studied. gel-filtered platelets (gfp) were incubated for min at °c with hw ( x - m) or with iloprost. controls and gfp preincubated with hw or iloprost were incubated with . nm-gold-labelled fibrinogen molecules (fg-au; final concentration µg/ml) at °c. the experiments were stopped after or min by rapid freezing. after freeze substitution in acetone with % osmium tetroxide, serial sections were prepared. the sections were examined after incubation with ascorbic acid ( % in h o) for min at °c (to reduce metallic osmium) and silver enhancement using danscher's ( ) method (to visualize the fg-au). examination of adp-stimulated platelets in the presence of µg/ml fg-au shows that the ligand is able to mediate aggregation. the examination revealed that fg-au was present at low density on the platelet surface, at higher density in the surface-connected system (scs), in coated pits and vesicles and separate smooth vesicles (representing endosomes?) as well as in the matrix of alpha-granules. after min, the number of labeled granules increased. labels on the surface and on the mentioned cytoplasmic membranes were observed during the whole period of incubation. hw and iloprost did not alter the resting gfp, and the qualitative ultrastructural findings in both preparations did not differ from the controls. we conclude from the results with hw that the regular contractile function of the cytoskeleton is not necessary to transport fg-au in resting platelets.
methods: edta-anticoagulated whole blood was incubated with thiazole orange and analyzed with a flow cytometer. young platelets were defined by having a high fluorescence from thiazole orange (normalized to platelet size). platelets were also incubated with fluorescent antibodies to gpib, gp iib/iiia and gmp- (two-colour method). results: surface expression of gpib was the same in young and older platelets. results for gp iib/iiia and gmp- (in resting and activated platelets) will be presented. conclusion: young platelets can easily be detected using thiazole orange and flow cytometry. there is no differential expression of gpib. further results will be presented.
the influence of erythrocyte and thrombocyte content on the release of atp by different agents in whole blood specimens was tested. the measurements were performed in the lumi-aggregometer using the principle of the luciferin-luciferase reaction. altogether, blood samples were diluted gradually before induction of the release reaction by arachidonic acid ( . mmol/l final concentration), adp ( µmol/l) and collagen ( . and . µg/ml). the peak of the obtained curves was transformed into percent values of the maximal deflection of the undiluted sample (= relative peak) and into atp concentrations (= absolute peak) after testing the atp standard in parallel for each dilution step separately.
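both peak transformations just described are simple normalizations; the following python sketch uses invented deflection values:

def relative_peak(peak, undiluted_peak):
    # peak as percent of the maximal deflection of the undiluted sample
    return 100.0 * peak / undiluted_peak

def absolute_peak_atp(peak, standard_peak, standard_atp_umol_l):
    # convert a peak to an atp concentration via the atp standard
    # tested in parallel for the same dilution step
    return peak * standard_atp_umol_l / standard_peak

peak = 32.0  # invented deflection of a diluted sample
print(relative_peak(peak, undiluted_peak=80.0))         # 40.0 (%)
print(absolute_peak_atp(peak, standard_peak=50.0,
                        standard_atp_umol_l=2.0))       # 1.28 (umol/l)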
the relative peak increased with increasing dilution for all inducers. it was identical with the atp standard and with collagen, somewhat lower with arachidonic acid and much higher with adp. a luminescence-optical effect may influence all these results. the absolute peak decreased with dilution under arachidonic acid and collagen, as expected from the decreasing thrombocyte content of the samples. under induction by adp, no decrease of the absolute peaks with increasing dilution of the samples was observed. this can be explained only by liberation of atp from the erythrocytes. the atp standard is essential for the quantification of the release reaction; adp is not suitable for it. collagen at a final concentration of µg/ml proved to be the best inducer.
platelet aggregation induced by several agents was photometrically investigated in disc-shaped rotating cuvettes coated with vessel wall tissues obtained from human umbilical cord: either endothelium, smooth muscle cells, extracellular matrix or combinations of them. in addition, the effects of endothelium incubated with several cytokines on platelet aggregation were studied. endothelial cells strongly inhibited aggregation, depending on their cell count and the concentration of the inducer. smooth muscle cells showed the same effect, but much less marked. in the presence of extracellular matrix, spontaneous aggregation occurred. endothelium could inhibit this spontaneous aggregation when present in the same cuvette; smooth muscle cells could not. incubation of endothelium with several cytokines increased its anti-thrombotic properties. for example, at a platelet count of x /µl in the prp, - m adp led to maximal aggregation in uncoated cuvettes; in the presence of . x endothelial cells aggregation was completely abolished, and in the presence of . x cells aggregation was decreased to %. smooth muscle cells diminished the aggregation effect of . nih thrombin to % when only one side of the cuvette was coated and to % when both sides were coated. endothelium could not inhibit aggregation induced by . x - m adp, but endothelium incubated with u/ml tnf-α or u/ml interleukin-1β or 1 mm l-nitro-arginine for h did completely inhibit aggregation.
platelets become sticky and adhere to surfaces or to one another without contracting and secreting. during the maturation of megakaryocytes, platelets finally lose their genomic nuclear message. only the mitochondrial dna of platelets can be identified. we focused our attention on the impact of mitochondrial dna and mitochondrial transcriptive mechanisms during platelet activation in normals. materials and methods: leucocyte-free (nageotte chamber, flow cytometric analysis) platelet-rich plasma or platelet concentrates after hemapheresis were filtered through pall leucocyte filters. the influence of different anticoagulants (commercially available sarstedt tubes containing citrate, heparin, edta and atu/ml hirudin, wacker) was examined. activation resulted from the hemapheresis procedure ( - fold increase of cd , cd ) and from ex vivo stimulation with u/ml thrombin, . m cacl2 or combinations. the guanidinium method for total rna preparation was used according to t. brown: current protocols in molecular biology . - . . , . different primers of the mitochondrial genome (e.g.
cytochrome b and atpase) were prepared using pcr, and mitochondrial transcription was examined using the northern blot technique. results: ( ) there is less activation of mitochondria using hirudin anticoagulation, but a -fold increase of mitochondrial rna content in heparinized samples. ( ) stimulation with thrombin leads to an increase to . e- µg rna/platelet, compared to . - . e- µg rna/platelet under unstimulated conditions. conclusion: there is evidence for the importance of platelet mitochondrial dna and mitochondrial transcription in the regulation of the cytoskeleton and platelet activation.
thrombospondin- (tsp- ) is a large homotrimeric glycoprotein originally identified as a platelet alpha-granule component. the investigation of its putative role in a variety of pathophysiologies like haemostatic disturbance, malignancy and wound healing requires specific laboratory reagents. monoclonal antibodies are one of the most powerful of these reagents. therefore, we purified human tsp- from thrombin-stimulated platelets using affinity chromatography to generate monoclonal antibodies in mice. a subclass igg monoclonal antibody designated . was purified from ascitic fluid and further characterised. western blot experiments demonstrated that this antibody reacted only with the unreduced molecule, whereas the tsp- subunit chain was not recognised. no cross-reactivities with human fibrinogen, fibronectin, vitronectin or von willebrand factor were found. preliminary results indicate that the monoclonal antibody . can be used to investigate tsp- function in several assays including immunocytochemistry and cell adhesion, as has been demonstrated for hl- cells. in addition, a sandwich enzyme immunoassay was developed using goat anti-human tsp- igg and derivatised monoclonal antibody . (peroxidase, biotin) as a sensitive method for the detection of tsp- in human body fluids.
in the following study, the expression of the platelet antigen cd p and the leukocyte antigen cd11b was measured in whole blood, in addition to platelet-leukocyte adhesion (rosette formation), by means of multicolour fluorescent labelling (cd , cd , cd a). the measurements were carried out both in freshly drawn whole blood which had been anticoagulated with different agents, and in stirred samples of whole blood under controlled conditions ( °c, rpm, different stirring times). the results are presented as the percent positive events in each gate (platelets; leukocytes - pmnl, monocytes, lymphocytes; and rosettes - platelet-positive events in the pmnl, monocyte and lymphocyte gates), whose mean fluorescence is given in addition to an index comprising the product of the percent positive events and their mean fluorescence. stirring (max min) induced an increase of cd p on the platelet surface of ca. %, without any change in the mean fluorescence. under these conditions, increased cd11b on pmnl and monocytes could be detected. an increase in rosette formation could also be measured (greater index), in that the percentage of monocytes which were platelet-positive increased with no change in the mean fluorescence of the positive events, whereas pmnl showed an increased mean fluorescence, but not an increased number, of platelet-positive events. the time-dependent changes in rosette formation on stirring could be further increased by the addition of adp. these results show that it is possible to measure rosette formation, and also the influence of effector agents (inhibitors or activators of platelets or leukocytes) on rosette formation, in whole blood using flow cytometry.
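the index used above is simply the product of the percent positive events and their mean fluorescence; a short python illustration with invented gate readings:

def fluorescence_index(percent_positive, mean_fluorescence):
    # product of percent positive events and their mean fluorescence
    return percent_positive * mean_fluorescence

# invented monocyte-gate readings before and after stirring: the
# percentage rises while the mean fluorescence stays constant
print(fluorescence_index(12.0, 45.0))  # 540.0 before stirring
print(fluorescence_index(19.0, 45.0))  # 855.0 after stirring -> greater index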
itp patients undergoing splenectomy were observed - years after the operation and divided into groups. the first group consisted of patients with normal platelet counts and absence of haemorrhagic syndrome. the second group was formed of itp patients with episodes of thrombocytopenia recurring a certain time after splenectomy. to study cellular immunity, immunophenotypical investigations of blood samples were carried out using an immunofluorescence method with monoclonal antibodies. an increase of b-cells expressing cd , cd and hla-dr antigen was revealed in the second group. the quantity of srfc, cd + and cd + cells in the blood of recovered patients was lower than in patients of the first group. this group was also characterized by a statistically significantly increased level of cd + cells, while the cd /cd ratio was equal to . ± . % ( . ± . % in patients of the second group, respectively; p > . ). a relatively high expression of activating antigens in patients with thrombocytopenia recurrence after splenectomy was also noted. among the infectious complications in all patients observed, various types of throat infection predominated, mainly with unsatisfactory treatment possibilities. we observed the opsi syndrome in patients, featuring marked tiredness, breathlessness, intolerance of hard physical work and a diminished ability to maintain physical activity.
extracellular matrix (ecm) produced by human endothelial cells closely resembles the vascular subendothelial basal lamina in its organization and chemical composition. thus it contains collagens, fibronectin, von willebrand factor, thrombospondin, fibrinogen, vitronectin, laminin and heparan sulphate. platelets carry different receptors on their membrane surface with specific binding capacities for one or more of these extracellular matrix proteins, such as glycoprotein (gp) iib/iiia, gp ib/ix and gp iiib. incubation of platelets with ecm results in platelet adhesion, degranulation, prostaglandin synthesis and aggregation. we studied patients whose platelets showed either a receptor defect in gp iib/iiia or gp iiib, or a storage pool disease. adhesion experiments were performed using siliconised glass, collagen-coated surfaces, immobilized fibrinogen as well as human subendothelial matrix. platelet adhesion in patients with glanzmann thrombasthenia (receptor defect of gp iib/iiia) showed a total lack of binding to siliconised glass and immobilized fibrinogen. adhesion to collagen was almost normal, in spite of the fact that only single platelets stuck to the surface and no microaggregates were observed. the adhesion to ecm was diminished and, again, no aggregates were detected. patients with a receptor defect in gp iiib showed normal platelet adhesion to siliconised glass and immobilized fibrinogen, but binding to collagen and ecm was markedly reduced, while platelets with a storage pool defect stuck to siliconised glass but failed to adhere to ecm.
by centrifugation of citrated blood ( x g, min), erythrocytes and leucocytes go to the bottom, whereas plasma and thrombocytes remain in the upper part of the sample. thus the thrombocyte count doubles in the platelet-rich plasma in contrast to the platelet count in the whole blood. if the thrombocytes are more or less activated, they adhere to erythrocytes or leucocytes, or aggregate, and are not able to stream upwards. the quotient between the thrombocyte counts in prp and whole blood is therefore a measure of thrombocyte activation.
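a minimal python sketch of this activation quotient, together with the coefficient of variation used below to characterize the method, with invented counts:

def activation_quotient(prp_count, whole_blood_count):
    # close to 2 when platelets enrich normally in the prp;
    # lower when activated platelets adhere or aggregate
    return prp_count / whole_blood_count

def coefficient_of_variation(values):
    mean = sum(values) / len(values)
    sd = (sum((v - mean) ** 2 for v in values) / (len(values) - 1)) ** 0.5
    return sd / mean

print(activation_quotient(prp_count=480, whole_blood_count=250))  # 1.92
quotients = [1.9, 2.1, 2.0, 1.8]  # invented repeat measurements
print(f"cv = {coefficient_of_variation(quotients):.1%}")          # 6.6%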
we checked the value of this screening in different groups: patients with arterial occlusive disease (aod), chronic venous disease (cvd) or diabetes mellitus (dm), and healthy control persons (control). the coefficient of variation of the method is . (prp) and . (tc), respectively (coulter counter). differences from the control group are significant. changes in the patient groups over dispensary follow-up years are also significant.
nicardipin-induced immune thrombocytopenia. p. eichler, c. hinrichs, g. greinacher. institut für immunologie und transfusionsmedizin, ernst-moritz-arndt-universität greifswald; deister-süntel-klinik, bad münder. drug-dependent immune thrombocytopenias are a rare but clinically important variant of immune thrombocytopenias. patients are at risk of severe bleeding complications. especially in patients receiving multiple drugs, the diagnosis of drug-dependent immune thrombocytopenia is often difficult. we report the case of a -year-old male patient who received allopurinol, captopril, digitoxin, furosemide and nicardipin. the patient presented with hematomas (platelet count < g/l) and later developed bone marrow dysplasia. in an elisa using whole platelets and patient serum, a weak reactivity in the presence of furosemide, but a stronger reactivity in the presence of nicardipin (antagonil, ciba-geigy), could be demonstrated. the reaction pattern is given in the table.
the enzyme-immunological determination of soluble fibrin (sf) has proved to be highly sensitive and specific. this sf-elisa detects fibrin lacking fibrinopeptide a (fpa) via the monoclonal antibody t , specific for the neoepitope generated on the aα-chain after the cleavage of fpa. lill et al. recently introduced a new assay modification which utilizes the same antibody as the old one but takes advantage of a pretreatment of plasma specimens with kscn. this strong chaotropic ion is used to dissociate the various fibrin complexes possibly hiding fibrin epitopes. it was therefore the aim of this study to compare the two sf-elisa modifications (with and without kscn pretreatment of specimens). in order to examine the dynamics of thrombin-induced fibrin(ogen) metabolism, we made course observations in patients with a certain form of septicemia. both assay modifications detected fibrin(ogen) derivatives which differed considerably in kinetics (n= samples from courses). the former sf-elisa (no kscn) correlated well with prothrombin fragments, thrombin-antithrombin iii complexes and the release of fibrinopeptide a (r > . , n= ). results of the new sf-elisa with kscn pretreatment of patients' plasma, however, correlated conspicuously well with d-dimer levels (r > . ) but distinctly less with the markers of thrombin generation (- . < r < . ). this good correlation with d-dimer levels was difficult to account for, since the d-dimer maximum occurred significantly later than the peak of the markers of thrombin generation (p < . ). therefore, kscn pretreatment of fibrin specimens seems to lead to a change in the specificity of the fibrin assay despite use of the same capture antibody. the different half-lives of differently composed fibrin complexes should be considered in trying to explain the findings.
nevertheless, the results of the former assay without kscn treatment correlated much better with the well-known dynamics of thrombin-induced fibrin generation during hemostasis activation than the data from the new assay modification. consequently, further examinations are necessary to specify the effect of kscn on soluble fibrin complexes and the resulting assay specificity.
a rapid assay for the determination of the primary hemostasis potential (php) of whole blood has been developed (kundu et al., ) from the original method of kratzer and born. the new system employs a disposable test cartridge which holds the sample (citrated whole blood) and all components for the tests at the same time. the test procedure is very simple. the cartridge is loaded with - µl citrated whole blood and is inserted into the platelet function analyzer (pfa- ). the test starts automatically after a preincubation phase of . min. the reaction starts with the contact of the whole blood and the capillary, which is connected with a collagen/epinephrine-coated membrane with a small aperture inside the test cartridge. under constant negative pressure the sample is aspirated, and through the contact of platelets and vwf with collagen, adherence and aggregation begin. the adhesion and aggregation process leads to the formation of a platelet plug which obstructs the flow through the aperture. the result of the php is reported as the closure time (ct). additional parameters such as bleeding volumes are possible as well. first results show good reproducibility, normal values in the range of up to sec and good discrimination of healthy donors from patients with congenital or acquired platelet dysfunctions. the system detects aspirin-induced thrombocyte function defects and von willebrand disease. in case of an abnormal result in the collagen/epinephrine system, a second type of cartridge with a collagen/adp coating can be employed. in the majority of cases, aspirin-induced dysfunctions are normalized with this cartridge, which can thus point to aspirin use. the proposed system may be a valuable tool for routine assessment of the primary hemostasis potential from a routine citrated blood sample.
mental stress was induced in young healthy male volunteers, aged to years, with no previous history of thrombophilia or a hemorrhagic diathesis, by a first-time parachute descent from an altitude of meters. the purpose of this investigation was to find out whether there are any changes in the corpuscular and plasmatic fractions of peripheral blood. we were especially interested in elucidating changes in the procoagulatory and/or fibrinolytic systems. venous blood samples were obtained directly before and directly after the jump. the flight time from the departure of the airplane to the landing of the parachutists was approximately minutes. the maximum time that elapsed between the two blood withdrawals was minutes. in a preliminary study with different volunteers, certain fluid imbalances had been observed. absolute numbers of leukocytes ( . vs. . /nl), erythrocytes ( . vs. . /pl) and platelets ( vs. /nl) increased significantly (p < . ), as did the hemoglobin concentration, from to g/l (p < . ). even though fluid imbalances before and after the jump had practically been excluded by measuring nearly identical hematocrit values ( . vs. . ), we noticed a marked drop in aptt ( vs. sec) and a significant increase in factor viii activity.
as a direct stress response, we found a rise in the fibrinogen concentration ( . vs. . g/l), which is one of the shortest-acting acute phase proteins. concerning reactive fibrinolysis, d-dimers showed an increase in concentration from µg/l to still normal values of µg/l, which was not significant due to the low number of values (p = . ). we observed similar changes in fibrin monomers and prothrombin fragments f + . from other investigations on the kinetics of the activation of the procoagulatory system we know that maximum activity is not reached until hours after the initiation of activation. those investigations studied perioperative changes in different kinds of operations, which served as a control concerning the degree of tissue damage and the resulting coagulation disturbances. to better understand these phenomena, we plan to induce mental stress in a laboratory environment to further exclude unknown influences on the mechanisms which can activate the procoagulatory and fibrinolytic systems.
triodena (t; / / µg ee, / / µg gestodene) was tested for its effect on hemostatic parameters. three groups (n= ) of healthy female volunteers were treated for months with one of these oc. blood was taken before treatment (day - of the pretreatment cycle) and on days - of the (i) and (ii) treatment cycles. indications of an activation of blood coagulation and fibrinolysis were detected, as the plasma levels of prothrombin fragment f + , of the fibrin split product d-dimer and of plasmin-antiplasmin complexes were found elevated during treatment. the following main regulatory components of blood coagulation, activators and inhibitors, were investigated: factor vii antigen (fvii ag), fvii clotting activity (fvii c), circulating activated factor vii (cfviia), antithrombin (at) activity, total protein s antigen (tps-ag), free protein s antigen (fps-ag), protein s activity (ps act) and circulating thrombomodulin (ctm). fvii ag, fvii c and cfviia significantly increased during treatment (cfviia: . mu/ml).
a prethrombotic condition characterized by elevated levels of circulating soluble fibrin has been claimed to be a predisposing factor for the accumulation of coronary thrombotic material in acute myocardial infarction. the present study includes patients with clinical suspicion of myocardial infarction. blood samples were drawn by the primary care physician, upon arrival in the hospital, and after , , , and hours of hospital stay. patients with myocardial infarction were identified by a typical course in the -lead ecg and by sequential determination of troponin t, myoglobin, ck and ck-mb. patients with primary cpr were excluded from evaluation. soluble fibrin was measured by the enzymun®-test fm (boehringer mannheim). patients with acute myocardial infarction display soluble fibrin levels within the normal range (< µg/ml) during the initial two hours after the onset of symptoms. there was no significant difference between patients with myocardial infarction and patients with coronary heart disease without myocardial infarction. slightly elevated levels were found in patients with atrial fibrillation, reflecting intracardiac fibrin formation. in patients without fibrinolytic treatment, a slight increase of soluble fibrin levels with a maximum after approximately hours is observed. most patients with fibrinolytic treatment display a considerable increase in soluble fibrin, with maximum levels immediately after infusion of the fibrinolytic agent. four patients with pulmonary embolism showed soluble fibrin levels in the range of - µg/ml, which remained in the same range during the entire observation period.
in conclusion, circulating soluble fibrin is not increased in patients with acute myocardial infarction and does not appear to be a predictor of acute coronary events. high levels of soluble fibrin in patients with fibrinolytic therapy may reflect the release of fibrin from thrombotic material, but also de novo generation of fibrin due to the release of active thrombin from thrombi not necessarily located in the coronary vessels. detection of elevated levels of soluble fibrin in patients with acute chest pain should prompt careful examination for signs of pulmonary embolism or aortic aneurysm.
the possibility of determining activated coagulation factors raises the question of whether such data provide evidence of activated coagulation or fibrinolysis and whether this has prospective value. we investigated patients with confirmed thrombosis, with postsurgical septicaemia and after liver transplantation. in all patients factor viia, xii and xiia as well as the fibrinolytic parameters t-pa, pai- , pap, plasminogen and α -ap were determined. in addition, f + and apc resistance were determined in patients with heterozygous factor v leiden mutation and confirmed thrombosis. we found increased factor viia, partly accompanied by an increased f + . patients with other pathological results, such as a reduced t-pa and/or increased pai- , showed a low incidence of elevations in factor vii or f + . the activation of factor xii seems to be of minor importance in patients with thrombosis. a different picture is found in septic and transplanted patients: here, factor xii activation is obviously of major importance. a deterioration of the clinical symptoms correlates with an increased factor xiia, which is paralleled by a decrease of factor xii activity. the investigation of fibrinolysis parameters such as pai- and pap demonstrates a disturbance of the fibrinolytic balance. the differences are statistically significant in septicaemic patients, both in the surgical and in the internal medicine group, in contrast to polytrauma patients. in patients with liver transplantation, significant changes are apparently related to rejection of the transplanted organ together with a deterioration of the clinical picture. the possibility of detecting activated coagulation factors may be a tool to detect changes in the hemostasis system at an early stage and to use this for improved therapy.
control of long-term oral anticoagulation is usually performed by serial determinations of the prothrombin time. however, the assessment of effective anticoagulation versus the potential risk of bleeding complications is difficult to achieve. molecular markers of blood coagulation activation might add valuable information in individual cases. we investigated patients with thromboembolic manifestations (deep vein thrombosis n= , pulmonary embolism n= , myocardial infarction n= ) for one year, beginning with admission to the hospital. tat, prothrombin fragments f + , d-dimer and fibrin monomer concentrations were analysed. all markers were significantly increased at the time of initiation of anticoagulant therapy, thus reflecting a prethrombotic situation. patients suffering from venous thromboembolism demonstrated higher concentrations of tat and f + in comparison to myocardial infarction ( . vs . µg/l, p= . ; . vs . nmol/l, p= . ). f + , tat and d-dimer concentrations decreased gradually over the first days of anticoagulant therapy, reaching values within the established normal ranges in all cases.
f + and tat concentrations reflect the activity of the coagulation system during long-term anticoagulation, whereas the analysis of fibrin monomer yielded partly controversial results. we conclude that f + and tat appear to be superior to fibrin monomer for the individual control of oral anticoagulant therapy.
the influence of thyroid failure on haemostasis is controversial; mainly hypocoagulable states have been described in clinically overt hypothyroidism. since hypothyroidism has been associated with an increased risk of atherosclerosis, we studied a wide range of haemostatic factors in untreated female patients with subclinical (b, n= , age ± ) or overt (c, n= , age ± ) hypothyroidism, as well as in hypothyroid women under t treatment (d, n= , age ± ) and euthyroid controls (a, n= , age ± ). simple screening tests (prothrombin time, activated partial thromboplastin time, fibrinogen), procoagulant factors (fvii, fviii, von willebrand factor), coagulation inhibitors (antithrombin iii, heparin cofactor ii, protein c, protein s) and fibrinolytic factors (plasminogen, antiplasmin, plasminogen activator inhibitor, tissue plasminogen activator) were measured. results: factor vii activity (vii:c), factor vii antigen (vii:ag) and their ratio were found increased in hypothyroid patients. factor viii activity showed the same tendency, whereas von willebrand factor remained unchanged, as did all other parameters with the exception of free protein s, which declined in overt hypothyroidism and in t -treated subjects. these differences tended to diminish after exclusion of women with estrogen replacement therapy for menopause, but the ratio vii:c/vii:ag, as well as fvii:c, still remained significantly higher in hypothyroid patients. conclusions: subclinical and overt hypothyroidism are associated with significantly higher levels of factor vii:c and vii:ag. the disproportionate increase in vii:c compared to vii:ag, as shown by their ratio, might reflect the presence of activated factor vii (viia), which in turn indicates a hypercoagulable state. this pattern becomes more pronounced with concomitant estrogen replacement after menopause.
exocytosis following platelet activation leads to the translocation of cd p (p-selectin), cd and thrombospondin from cytoplasmic granules to the cell surface membrane, where these molecules, serving as activation markers, can be detected by flow cytometry. we here report the detectability of these molecules preformed - prior to platelet activation - inside the cytoplasm of resting platelets. two different methods are compared, i.e. using either methanol or the fix&perm kit (an der grub) for cell membrane permeabilization. in addition, interleukin (il)-ice is shown to be present in the platelet cytoplasm after methanol treatment, but not after permeabilization using fix&perm. whenever cell surface positivity for a specific marker coincides with intracellular presence, blocking of the surface membrane sites prior to membrane permeabilization is required in order to obtain fluorescence intensity attributable to cytoplasmic staining. our data demonstrate the feasibility of the methods presented for the detection of intracellular platelet molecules. this technique should also provide a means of estimating the relative quantity of intracellular platelet antigens, provided the permeabilization procedure does not lead to antigen leakage or destruction.
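the blocking logic described above can be summarized in a small python sketch; the mfi values are invented, and the simple background subtraction is only one possible way to express the cytoplasmic contribution:

def cytoplasmic_mfi(blocked_permeabilized_mfi, isotype_control_mfi):
    # specific signal after blocking surface sites and permeabilizing,
    # corrected for nonspecific background; invented numbers only
    return max(0.0, blocked_permeabilized_mfi - isotype_control_mfi)

print(cytoplasmic_mfi(blocked_permeabilized_mfi=38.0,
                      isotype_control_mfi=5.0))  # 33.0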
physical exercise activates the clotting as well as the fibrinolytic system, as indicated in numerous investigations of exercise by running and by bicycle ergometer, but not by swimming. the positive effect of endurance training in coronary sports groups is also induced by influences on the hemostatic system. these influences are a suppression of clotting activation by the acute exercise and an increased fibrinolytic response. different hemostatic parameters were therefore analyzed before and after swimming in male coronary patients (n= ; median age years, achieved heart rate /min). indicating plasmatic clotting activation, there was a significant increase in the molecular markers tat and f + among the coronary patients (tat from . to . µg/l; f + from . to . nmol/l). the degree of clotting activation among the coronary patients was less than that observed in a group of young volunteers in a former investigation. this may be explained by the existence of the coronary heart disease or by the higher age of the patient group. indicating an activation of fibrinolysis, t-pa activity increased significantly in the coronary patients (from . to . iu/ml), resulting in an unchanged balance between coagulation and fibrinolysis. from these findings on the hemostatic system, no increased risk for coronary patients from swimming can be derived. prerequisites, however, are precautions such as avoiding exercise in the anaerobic range and excluding major heart failure and cardiac arrhythmias before beginning the swim training.
the principle of the fontan operation consists in anastomosing the right atrium to the pulmonary artery, thus bypassing the right ventricle and using the only functional single ventricle as a pump for the systemic circulation. there are only few data about the influence of the changes in hemodynamics on coagulation and fibrinolysis. we investigated the coagulation system in children and young adults aged to years in a general examination to months after the fontan procedure. besides other abnormalities of the coagulation system, there were significantly increased values of the thrombin-antithrombin iii complex (tat) in patients ( %). as a marker of an activation of the fibrinolytic system, we found elevated plasmin-alpha -antiplasmin (pap) levels in patients ( %). less frequently, the concentrations of the prothrombin fragments and (f and ) ( patients, %) or the d-dimer ( patients, %) were increased. we did not find significant differences in a clot lysis assay between fontan-operated patients and an age-matched control group. there was no significant correlation between activation of coagulation and the clinical situation or the diameter of the pulmonary artery. whether the present data can help to estimate the risk of a thromboembolic complication following the fontan procedure still has to be investigated. the results of the clot lysis assay suggest that for lysis of thrombi the same dose of rt-pa should be used as for other patients.
a nd generation functional protein s assay. p. van dreden* and e. adema**. *serbio, gennevilliers, france; **boehringer mannheim, tutzing, germany. a second-generation protein s test was developed with improved sensitivity to protein s and better reagent stability. the test result was found to be unaffected by apc resistance ( patients heterozygous for the mutation, with an aptt + apc ratio between . and . ), by heparin up to iu/ml and by f viii activity between and %. in the test, the diluted sample is mixed with protein s-deficient plasma, activated factor v, activated protein c, phospholipids and an intrinsic pathway activator. this mixture is incubated for minutes. during this time, the activated protein c inactivates part of the f va; the extent of f va inactivation depends on the protein s concentration. after minutes cacl2 is added and the time until clot formation is measured. the clotting time is a linear function of the protein s concentration between and % protein s.
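reading a protein s concentration off such a linear calibration can be sketched in a few lines of python; the calibrator points below are invented and merely mimic a clotting time that rises linearly with protein s:

def fit_line(points):
    # least-squares fit of ct = a + b * ps over calibrator points
    # given as (protein_s_percent, clotting_time_sec)
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

def protein_s_percent(ct_sec, a, b):
    return (ct_sec - a) / b

calibrators = [(0, 40.0), (50, 65.0), (100, 90.0)]  # invented points
a, b = fit_line(calibrators)
print(f"{protein_s_percent(77.5, a, b):.0f}% protein s")  # -> 75% protein s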
for the three preproduction lots, the difference in clotting time between and % protein s was - seconds. this compares to - seconds typically obtained with the old test. within-run precision (n= on sta) is cv = - % on the basis of protein s. day-to-day precision (n= on sta) was found to be cv = - %, again calculated on the basis of the protein s concentration. the cv of % was obtained for an avk plasma with % protein s; it corresponds to a standard deviation of only . % in protein s. the insensitivity to interferences, in particular apc resistance, and the better precision and stability are expected to improve the quality and reliability of protein s determination.
in this study we evaluated the influence of hormonal contraception on the parameters protein c, protein s and pai. samples from women with and without hormonal contraception and in menopause were assayed by coagulometric (protein s clotting test; behringwerke, marburg, frg) or chromogenic methods (protein c activity test and pai reagent from behringwerke, marburg, frg) in double determination and were compared with the reference ranges. in addition, thromboplastin time (thromborel s reagent) and fibrinogen (multifibrin), from behringwerke, marburg, frg, and aptt (actin fs reagent from dade corp., unterschleißheim, frg) were determined. in women using hormonal contraceptives (p < . ) and in menopause (p < . ), protein s activity was significantly reduced compared to other women (< years), while protein c activity did not change. in menopausal women, a higher susceptibility to thrombosis was supported by an increase of aptt (p < . ) and fibrinogen (p < . ). while there was no change for pai, plasminogen was significantly lower in women using hormonal contraceptives and in menopause (p < . ). we could not observe a higher turnover of the coagulation and fibrinolytic systems with hormonal contraception. noteworthy was the occurrence of low (< mg/dl) and borderline fibrinogen (max. mg/dl) in . % and . % of women, respectively (together with borderline aptt), who had an individual risk for arterial disease. table: protein s, protein c, fibrinogen, aptt and plasminogen (mean ± sd) in women without hormonal contraception, with hormonal contraception and in menopause; hcc = hormonal contraception.
hemostatic parameters in a patient undergoing bone marrow and subsequent liver transplantation due to veno-occlusive disease. c. salat, e. holler, h.j. kolb, b. reinhardt, r. pihusch, p. göhring, s. poley, e. hiller. med. klinik iii and institut für klin. chemie, klinikum grosshadern der ludwig-maximilians-universität münchen; hämatologikum der gsf. a -year-old patient suffering from all received allogeneic bone marrow transplantation (bmt). after an uncomplicated early posttransplant period, the patient was discharged after weeks. a bilirubin rise with subsequent liver failure was observed during the following weeks.
in this study we evaluated the influence of hormonal contraception on the parameters protein c, protein s and pai. samples from women with and without hormonal contraception and in menopause were assayed by coagulometric (protein s clotting test, behringwerke, marburg, frg) or chromogenic methods (protein c activity test and pai reagent from behringwerke, marburg, frg) in double determination and were compared with the reference ranges. in addition, thromboplastin time (thromborel s reagent) and fibrinogen (multifibrin) from behringwerke, marburg, frg, and aptt (actin fs reagent from dade corp., unterschleißheim, frg) were determined. in women using hormonal contraceptives (p< , ) and in menopause (p< , ), protein s activity was significantly reduced compared with other women (< years), while protein c activity did not change. in menopausal women a higher susceptibility to thrombosis was supported by an increase of aptt (p< , ) and fibrinogen (p< , ). while there was no change for pai, plasminogen was significantly lower in women using hormonal contraceptives and in menopause (p< , ). we could not observe a higher turnover of the coagulation and fibrinolytic systems with hormonal contraception. noteworthy was the occurrence of low (< mg/dl) and borderline fibrinogen (max. mg/dl) in , % and , % of women, respectively (together with borderline aptt), who had an individual risk for arterial disease. (table: protein s, protein c, fibrinogen, aptt and plasminogen, mean ± sd, in women without hcc, with hcc and in menopause; hcc = hormonal contraception.)

hemostatic parameters in a patient undergoing bone marrow and subsequent liver transplantation due to veno-occlusive disease. c. salat, e. holler, h. j. kolb, b. reinhardt, r. pihusch, p. göhring, s. poley, e. hiller; med. klinik iii and institut für klin. chemie, klinikum grosshadern der ludwig-maximilians-universität münchen, and hämatologikum der gsf. a year old patient suffering from all received allogeneic bone marrow transplantation (bmt). after an uncomplicated early posttransplant period the patient was discharged after weeks. a bilirubin rise with subsequent liver failure was observed during the following weeks. because of biopsy-proven hepatic veno-occlusive disease (vod), liver transplantation was performed on day . unfortunately the patient died on day due to aspergillosis. we monitored levels of protein c (pc) and protein s (ps) as well as pai-1 during the pre- and posttransplant period. the pai-1 level was normal (< ng/ml) during the first weeks after bmt but increased with the manifestation of vod ( . ng/ml on day ). it reached its peak immediately before liver transplantation ( . ng/ml) and returned to normal levels within the next few days. pc levels, which were normal before bmt, decreased prior to the clinical diagnosis of vod and were normal after liver transplantation. ps levels lay within the normal range at all timepoints. vwf was elevated before bmt ( %) and remained relatively stable during the whole investigational period, ranging from to %. it is assumed that vod is initiated by an endothelial cell injury (possibly due to radiochemotherapy) and subsequent hypercoagulability. our results indicate that the 'endothelial cell marker' vwf is not helpful in predicting vod. the kinetics of the investigated parameters underline the significance of pc and pai-1, as described by others and our group earlier, whereas ps does not seem to play a role in the pathogenesis of vod.

the budd-chiari syndrome (bcs) is characterized by hepatic venous outflow obstruction that may be caused by the precipitation of a thrombus. it frequently cosegregates with other major diseases like myeloproliferative diseases or defects in the haemostatic system (e.g. protein c and protein s deficiencies). only recently, the factor v leiden mutation (fvlm) has also been associated with bcs. we hypothesized that defects in the thrombomodulin-associated anticoagulant pathways (tmaap) are a major risk factor for the precipitation of bcs. we screened our cohort of patients (pts) with bcs for the presence of defects in the tmaap and identified pts with protein s deficiency (psd). these pts were screened for the three point mutations in exon (codon ; ins t), exon (codon ; a→t) and in intron (g→a + ) of the protein s alpha gene that have been demonstrated by bertina et al to cosegregate with psd. restriction enzyme analysis and conformation-sensitive gel electrophoresis for the detection of single-base differences in double-stranded pcr products were employed. all living family members of the index pts were also screened for these three point mutations as described. no abnormality in these genes was found, despite the presence of psd in those family members. in addition, pts and family members were also screened for the fvlm. one pt and two of his family members carried the fvlm in addition to psd. the other two pts and their family members did not carry the fvlm; in contrast to the first family, despite psd, those two pts suffered from morbus crohn and acute myeloid leukaemia as risk factors for bcs. we conclude: psd is one major risk factor for the precipitation of bcs; to precipitate this disease, one additional risk factor is required. psd may be caused by genomic defects in the protein s gene other than those described by bertina.

only a few publications describe a thromboembolic disease due to dramatically reduced protein s levels associated with viral or bacterial infections; autoimmune mechanisms are suspected, but the aetiopathogenesis is still under discussion. we report on a year old boy who developed purpura fulminans of the left leg during varicella infection.
on the fourth day of infection the disease started with pain and haemorrhagic efflorescence localized at the left calf. on admission the boy suffered from a purpura fulminans with central necrosis measuring x cm. suspecting a hereditary thrombophilic disease, we started therapy with protein c concentrate and recombinant tissue-type plasminogen activator. the following coagulation investigation showed a severe deficiency of protein s (total protein s antigen < u/ml, free antigen not measurable) in combination with the factor v leiden mutation. other thrombophilia and coagulation parameters did not deviate from the normal range. after weeks we saw a slight improvement of the total protein s antigen up to u/ml; the free protein s antigen was still undetectable. during the following weeks the patient recovered slowly, and the protein s activity and antigen normalized. because of the skin necrosis, thromboembolic prophylaxis was initiated with low molecular weight heparin (fragmin®, iu/kg bw/day) and continued for months. under this therapy there were no further thromboembolic events. these results suggested an autoimmune protein s deficiency in a patient suffering from chickenpox. an analysis of autoantibodies at the time of diagnosis showed a slight increase of the anticardiolipin antibodies (igg , iu/ml; igm , iu/ml), which normalized during hospitalisation. we suspect an antibody to protein s, probably caused by similarly presented viral antigens. we suppose that autoimmune mechanisms during different infections, in combination with a heterozygous apc resistance, may be a potential risk factor for developing thrombotic disease.

in the central nervous system, mrna encoding prothrombin and the thrombin receptor is present, and astroglial cells in culture process and secrete thrombin. moreover, effects of thrombin on brain cells, including changes of neurite outgrowth and astrocyte shape, are described, but the molecular mechanisms are unclear. we investigated the effects of human

( g/l). when compared with conventional elisa techniques (asserachrom ddi), the assay demonstrated a correlation coefficient of . on samples from normal individuals and hospitalised patients with elevated d-dimer concentrations; the slope was . and the intercept - . . this new assay offers full flexibility for individual testing, as the calibration curve is stable for at least one week on the instrument. it is thus well adapted to all applications of d-dimer measurement in coagulation laboratories.
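the comparison statistics quoted above for the d-dimer assay (correlation coefficient, slope, intercept against the reference elisa) are ordinary regression outputs; a minimal python sketch with invented paired values:

    # Illustrative sketch of method-comparison statistics; the paired
    # D-dimer values (µg/l, new latex assay vs. reference ELISA) are invented.
    from scipy.stats import linregress

    elisa = [220.0, 480.0, 650.0, 900.0, 1500.0, 2300.0, 3100.0]
    latex = [250.0, 460.0, 700.0, 870.0, 1450.0, 2400.0, 3000.0]

    fit = linregress(elisa, latex)
    print(f"r = {fit.rvalue:.3f}")                   # correlation coefficient
    print(f"slope = {fit.slope:.3f}")                # proportional bias
    print(f"intercept = {fit.intercept:.1f} ug/l")   # constant bias

for a formal method comparison, deming or passing-bablok regression is usually preferred over ordinary least squares, which treats the reference method as error-free.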
children between an age of days and months (median weeks) with thrombotic or embolic occlusion of major vessels were treated with rt-pa for thrombolysis. the affected vessels were both renal veins, or one renal vein and the v. cava inf., in cases, the v. cava superior in , the v. cava inf. plus renal veins plus aorta in , the left ventricle in , the aorta in , the a. femoralis in , and the v. portae in case. out of occlusions were associated with an indwelling catheter. underlying diseases were sepsis ( ), prematurity ( ), vitium ( ), asphyxia ( ), short bowel syndrome ( ), hus ( ), diabetes ( ), cmv ( ), exsiccosis ( ) and m. hirschsprung ( ). thrombolysis was performed with a bolus of rt-pa ( . - . mg/kg) followed by continuous infusion ( . - . (- ) mg/kg/h, median . mg/kg/h). low-dose heparin ( iu/kg/h) was given during, and full-dose heparin (aptt , - times normal) after, the thrombolysis. in pts. rt-pa was administered locally through the catheter and in cases systemically. in patients the vessels could be recanalised completely, in partially; in patient the therapy had to be discontinued. in vessels a reocclusion occurred. bleedings were noted in three patients, all from recent venous puncture sites. the results encouraged us to start a multi-center trial, which has been approved by the ethical committee and is open for recruitment. the aim is to compare the efficacy and safety of rt-pa with urokinase, the only recommended standard in the management of critical major vessel obstruction in newborns and infants. the design is a randomised, non-blinded trial with a cross-over option after three days in cases without success. study end points are recanalisations, major bleedings and the number of cross-overs. inclusion criteria are age under year, life-threatening vessel obstruction, age of thrombus up to days, and no preceding fibrinolytic therapy. exclusion criteria are cerebral hemorrhage, periventricular leukomalacia, surgery during the last days and cns injuries during the last months.

although our knowledge of inherited thrombotic coagulation disorders has greatly expanded within the last years, there are still many patients with recurrent venous thrombosis in whom no obvious predisposition can be identified. thus we decided to include so-called rare defects associated with thrombosis, such as fxii deficiency, in our routine thrombophilia screening programme. fxii is an important element in the intrinsic pathway of fibrinolysis, and there is evidence for insufficient fibrinolytic activity in fxii deficient pts. to date, only few and controversial data exist about the frequency of fxii deficiency in pts. with thrombophilia. consequently, the aim of our study was to evaluate the association between fxii deficiency and juvenile venous thrombosis in a large population. patients and methods: pts. ( female, male, aged to ys, median age . ys) with venous thromboembolism before the age of ys were studied. a one-stage clotting activity assay of fxii (fxii:c) was performed on acl using fxii deficient plasma from instrumentation laboratory. fxii antigen concentration (fxii:ag) was measured by electroimmunodiffusion using reagents from behringwerke and enzyme research, respectively. the normal ranges are the routine reference values obtained in our laboratory from healthy subjects ( males, females, median age . ys); % range: fxii:c - %, fxii:ag - %. results: / pts. were classified as fxii deficient (f , m ), giving a prevalence of . %. severe fxii deficiencies with fxii:c below % were observed in pts.; pts. proved to have moderate fxii deficiency, with fxii:c ranging from to % and fxii:ag from to %. in none of them could inherited deficiencies of other well-established thrombophilia risk factors be detected. none of the fxii deficient pts. had positive lupus anticoagulant tests. familial fxii deficiency was found in cases. discussion and conclusion: the prevalence of fxii deficiency among pts. with venous thromboembolism was previously described to be . - %. supporting these data, we have shown a prevalence of fxii deficiency of . %. in comparison with the frequency of other well-established thrombophilia risk factors, we consequently observed a relatively high prevalence of fxii deficiency in our study group. these data, from the largest such study reported, strongly indicate that fxii deficiency may not be a rare deficiency and may be more frequently associated with thrombosis than currently suspected.
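a prevalence like the one reported above for fxii deficiency is a binomial proportion; the python sketch below, with purely hypothetical counts, also attaches a wilson score interval to convey the uncertainty of such an estimate:

    # Illustrative sketch: prevalence as cases/total with a Wilson score
    # confidence interval. The counts are invented placeholders.
    from math import sqrt

    def prevalence_with_wilson_ci(cases, total, z=1.96):
        """Return (prevalence, low, high) for a binomial proportion."""
        p = cases / total
        denom = 1.0 + z * z / total
        centre = (p + z * z / (2 * total)) / denom
        half = (z / denom) * sqrt(p * (1 - p) / total + z * z / (4 * total * total))
        return p, centre - half, centre + half

    p, lo, hi = prevalence_with_wilson_ci(cases=18, total=300)  # hypothetical
    print(f"prevalence {100*p:.1f} % (95 % CI {100*lo:.1f}-{100*hi:.1f} %)")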
we describe a family with an exceptionally rare deficiency, i.e. plasminogen deficiency, combined with subnormal activities of coagulation factor xii (hageman factor). the first thromboembolic event, pulmonary embolism in the proposita, was diagnosed at age . since that time, 'spontaneous' venous thromboembolic events, verified by phlebography and perfusion/ventilation lung scan, recurred once every year despite oral coumarin therapy, whose intensity varied over an exceptionally wide range despite tight control. the patient was repeatedly given successful thrombolytic therapy with streptokinase or recombinant tissue plasminogen activator. her plasma plasminogen chromogenic activity was - % compared with a normal plasma pool (reference range - %); plasminogen antigen was diminished to the same extent. the patient's factor xii exhibited only - % activity in a factor-deficient plasma assay as compared with a normal plasma pool. other known risk factors for recurrent venous thromboembolism were not present: no evidence of malignancy, no obvious precipitating events, normal values of antithrombin iii, protein c, protein s, fibrinogen, thrombin time and platelets, no lupus-like anticoagulant, and normal aptt prolongation after addition of activated protein c. the proposita's mother had died at age from pulmonary embolism; no coagulation studies are available. the proposita's sister was first diagnosed with deep leg vein thrombosis at age ; since that time, recurrent episodes of venous thromboembolism have been diagnosed, also in another hospital. this sister's plasminogen activity was %, but factor xii activity was reduced to %. three brothers of the proposita were examined too, all in their 3rd decade of life. none of them recalled symptoms of or treatment for thromboembolic disease. in one brother, factor xii activity was normal ( - %), but plasminogen only about %. in the 2nd brother, factor xii was very variable ( , and %) and plasminogen was in the lower normal range; in the 3rd brother, factor xii was about % (repeatedly) and plasminogen was normal. current knowledge about the risk of thromboembolism with both enzymes is limited, and the optimal management remains controversial.

margit serban, maria cucuruz, dan madras, carmen petrescu, natalie rosiu, rodica costa; iiird paediatric clinic, university of medicine. the unsatisfactory efficiency of anti-hepatitis b vaccination in our haemophiliacs suggested a control of the immune status in hiv-negative patients, by establishing the lymphocyte subsets through flow cytometry with monoclonal antibodies (cd , cd , cd8, cd4/cd8 ratio and cd ) and by serum immunoglobulin levels; the immunological parameters were correlated with the serological markers of hepatitis infections (hav, hbv, hcv and hdv) as well as with the treatment (blood, plasma, cryoprecipitate, factor viii/ix concentrate) and the quantity of its consumption (iu/kg weight/year). the interpretation of the results pointed out a significantly lower level of cd , cd4 (p< , ).

in patients with iddm of < years (group ) or > years (group ) duration, anticoagulated whole blood was incubated with fluorescent antibodies to gpib and gmp-140 (two colour method) and analyzed with a flow cytometer. thrombomodulin, f1+2, protein s and β-thromboglobulin were measured according to standard procedures. results: surface expression of gmp-140 was not different in groups to ; however, there was a tendency to higher activation in group (< years iddm). results for thrombomodulin, f1+2, protein s and β-thromboglobulin will also be presented. conclusion: though it did not reach statistical significance, platelet activation seems to be more important during early diabetes.
this will be correlated with endothelial and plasmatic activation markers.

in our clinic four patients with hiv-related thrombocytopenia were treated with a lot of gammagard ( f abllf), which later turned out to be hcv contaminated. before infusion all patients were negative for hcv antibodies and hcv rna. to months after infusion, / patients, who suffered from arc at the time of hcv infection with cd4 counts > /µl, seroconverted, whereas in the two other patients, who suffered from aids with cd4 counts below /µl, there was no seroconversion. in all cases hcv rna was found. genotyping with inno-lipa (innogenetics) showed hcv genotype 1(b) in all patients. liver enzymes and hcv rna copies were measured repeatedly over a period of one year after infection. the patients with arc showed a strong increase of the hcv rna titre during the first to months after infection, followed by a rapid decrease within the next months. in the patients with aids, hcv rna copies increased moderately within the first to months, followed by a slow decrease. the elevation of liver enzymes was mild in the aids patients and seems to be independent of the hcv rna titre. in the arc patients liver enzymes changed parallel to hcv rna titres, with a delay of to months. the course of hiv infection was only slightly influenced by the acute hepatitis c, as measured by cd4 counts, β2-microglobulin and hiv rna copies.

introduction: mechanisms underlying ischemia/reperfusion injury have been thoroughly investigated in experimental models. leucocytes appear to play a main role through production of cytokines and overexpression of adhesion molecules. in experimental animals, administration of monoclonal antibodies (mab) recognizing cd18 can reduce organ injury following ischemia/reperfusion. no data, however, have been reported concerning clinical ischemia situations. patients and methods: we investigated the expression of cd18, cd11a, cd11b and cd11c on granulocytes, monocytes and lymphocytes from peripheral blood of five patients undergoing elective hand surgery. the tourniquet was applied on the upper arm, and heparinized samples from cubital veins were obtained before and at the end of ischemia. control samples were drawn from the nonischemic contralateral arm with the same timing. the duration of ischemia ranged between sixty and one hundred minutes. whole blood samples were incubated with specific, fluorochrome-labelled antibodies and analyzed by fluorocytometry (facscan, becton dickinson, san jose, ca). the mean fluorescence intensity (mfi), quantitatively reflecting surface expression of the indicated markers, was evaluated for the individual cell populations. data were compared by the paired student's t-test; p < , was considered significant. results: mfi for all markers was comparable in all cell populations in samples obtained before ischemia from both arms. in contrast, expression of cd18 was significantly enhanced in granulocytes ( ± vs. ± ), monocytes ( ± vs. ± ) and lymphocytes ( ± vs. ± ) from samples derived from the ischemic arm, as compared with the nonischemic arm, as measured at the end of ischemia. at the same time, an increase of cd11b on granulocytes ( ± vs. ± ) and monocytes ( ± vs. ± ), but not on lymphocytes, was found; no modifications of cd11a and cd11c expression could be observed. there was no correlation between the duration of ischemia and the quantitative expression of these markers. conclusions: our data indicate that relatively short ischemia periods induce an increased expression of β2-integrin adhesion molecules on leucocytes. these results suggest, in close similarity with findings from experimental models, that overexpression of adhesion molecules might play an important role in the induction of ischemia/reperfusion injury in humans.
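the paired design described above (ischemic vs. non-ischemic arm in the same patient) maps directly onto a paired t-test; a minimal python sketch with invented mfi values:

    # Illustrative sketch of the paired comparison: MFI of CD18 on
    # granulocytes, ischemic vs. non-ischemic arm, per patient (invented data).
    from scipy.stats import ttest_rel

    mfi_nonischemic = [41.0, 38.5, 45.2, 40.1, 43.7]  # hypothetical MFI values
    mfi_ischemic    = [55.3, 49.8, 60.1, 52.4, 58.9]

    t_stat, p_value = ttest_rel(mfi_ischemic, mfi_nonischemic)
    print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")
    # With the conventional threshold used in the abstract, p < 0.05 would be
    # read as a significant increase of CD18 expression at the end of ischemia.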
in patients suffering from chronic inflammatory bowel diseases, such as morbus crohn and colitis ulcerosa, we observe massive, sometimes barely staunchable bleedings. hereby, the deficiency of coagulation factors, especially of factor xiii, in plasma is established. however, the influence of factor xiii on the pathomechanism of the underlying disease is still under discussion. therefore we studied the f xiii content of the intestinal mucosa. an immunohistochemical method was developed using commercially available antibodies against f xiii subunit a; the detection of mucosal factor xiii depends on the amount of chromogen bound to the antibody-horseradish-peroxidase complex. with this method it is possible to locate, but not to quantify, f xiii in the intestinal tissue. therefore we developed an elisa method for homogenized intestinal tissue, using commercially available antibodies. its precision was validated using a standard curve with commercially available factor xiii preparations (fibrogammin®). the detection limit of this method is > . i.u. f xiii/ml of tissue solution. freeze-dried intestinal tissue ( mg) was homogenized in ml buffer using a potter homogenizer. specimens of the large bowel revealed f xiii values of , ± , i.u. (mean ± sd) per tissue solution. with this method it is possible to quantify tissue-bound factor xiii. studies are in progress to elucidate the content of f xiii in the intestine of patients suffering from inflammatory bowel diseases, in order to contribute data to the pathomechanisms of f xiii deficiency.

in a previous double-blind, controlled trial we were able to show that aprotinin administration significantly contributed to reducing peri- and postoperative bleeding complications without increasing the risk of thromboembolic complications. the question arises whether this beneficial effect may be associated with its effects on intraoperative fibrinolysis. therefore, patients were treated with or without aprotinin ( million kiu loading dose over minutes followed by , kiu per hour), and citrated blood samples were obtained at the following time points: before operation, after induction of anesthesia, at the beginning of operation, intraoperatively when the femur shaft was implanted, and hours postoperatively. the determinations of plasmin/antiplasmin complexes, d-dimers, thrombin/antithrombin iii complexes and prothrombin fragments f1+2 were performed by means of test kits from behring, germany (enzygnost pap micro, enzygnost d-dimer, enzygnost tat micro and enzygnost f1+2, respectively). all markers of activated fibrinolysis and blood coagulation were significantly increased in the groups with and without aprotinin treatment, with the highest activities seen when the femur shaft was implanted. however, the values of pap and d-dimers in the aprotinin group were below the values of the control group until the end of operation. the markers of activated coagulation showed the opposite effect; however, the differences between the two groups were not significant.
as expected, the aptt was significantly prolonged in the aprotinin group. the aprotinin treatment was also associated with a significantly lower blood loss in these patients. in conclusion, it is not clear whether the blood-saving effect of aprotinin can be exclusively attributed to its antiplasmin activity, since the differences in the fibrinolysis parameters were not statistically significant. further blood samples should be analysed between the implantation of the femur shaft and the end of operation.

in our laboratory large amounts of human prothrombin are required ( - mg/week). as we try to produce meizothrombin and meizothrombin-des-fragment-1 from human prothrombin and to apply it as an antidote for hirudin, the classical adsorption to barium sulphate or aluminum hydroxide from human plasma cannot be used. commercially available human prothrombin is expensive and of an unacceptable quality for our applications: in most of these batches we found small amounts of factor x and prothrombin activation products. we now developed a procedure to isolate prothrombin from 'prothrombin complex concentrates' (ppsb- -bulk, drk-blutspendedienst nds.). the concentrate also contains factor vii, factor ix and factor x, from which the prothrombin had to be separated. the concentrate we used contained amounts of other proteins and activation products of prothrombin (e.g. prethrombin-1) as well. for the preparation of prothrombin from ppsb we used anion exchange chromatography (resource-q®) on an fplc®. we applied dissolved ppsb, directly or after buffer exchange on sephadex g- , onto the column at room temperature. the prothrombin was eluted with an nacl gradient in trisodium citrate buffer, ph . ; the buffer conditions are similar to those used in the preparation of ppsb. the quality of the prothrombin so obtained was sufficient for most of our experiments. a second purification step on the ion exchanger resulted in a % pure product devoid of contaminating factor activities and activation intermediates, as examined by coomassie- and silver-stained sds-page electrophoresis and assays for factor x. this prothrombin retained full enzymatic activity, and its activation by specific snake venom prothrombin activators showed the known activation products. we are now able to isolate the amounts of pure prothrombin required for preclinical investigations.

most of the commercially available lmwhs, such as enoxaparin, fraxiparin and fragmin, are prepared by chemical methods which can result in desulfation and other chemical modifications of the internal structure, leading to differences in the pharmacologic effects. on the other hand, fractionated lmwhs retain their native characteristics and are structurally similar to heparin; in addition, the oligosaccharide sequence responsible for atiii binding is not modified. physical methods such as gamma irradiation (60co) have been used to fragment sulfated glycosaminoglycans, yielding fragments without chemical modifications (de ambrosi et al., in: biomedical and biotechnological advances in industrial polysaccharides, pp. - ). utilizing this technique, depolymerized heparins exhibiting different molecular weights can be obtained. this communication reports on the biochemical and pharmacologic effects of several such depolymerized heparins, to demonstrate the molecular weight dependence of biologic activity.
fragments exhibiting molecular weights of , , and kda were prepared by exposing concentrated heparin solutions to a rectilinear gamma ray beam at intermittent doses of . to mrad under controlled temperatures. unlike the chemically depolymerized heparins, these fractions did not exhibit any decrease in charge density or atiii affinity. in routine assays for heparin, a clear-cut molecular weight dependence of the anticoagulant and antiprotease actions was observed. on a gravimetric basis, these agents produce superior antithrombotic actions in comparison with chemically depolymerized derivatives. these studies suggest that gamma irradiation can be used to prepare lmwhs which retain their molecular integrity and may therefore prove to exhibit a biologic profile more comparable to heparin. furthermore, lmwhs produced by gamma irradiation lack the usual double bond formation, which requires the use of additives that can alter the product profile.

university hospital, dept. of angiology, frankfurt a.m., germany. introduction: thromboembolic disease constitutes a major clinical problem, and among other causes a defective fibrinolytic system has been suggested as a predisposing factor for the development of thrombosis. the plasma fibrinolytic system can be impaired by inherited deficiencies of plasminogen, by defective release of tissue plasminogen activator (t-pa) from the vessel wall, or by high plasma levels of regulatory proteins, such as plasminogen activator inhibitors (pai). the aim of the present study was to estimate the prevalence of decreased fibrinolytic activity in young pts. with thrombophilia. patients: a large population of pts. (female , male ; age - ys, median . ys) with venous thromboembolism before the age of years was investigated in regard to their plasma fibrinolytic system. in none of them could well-established thrombophilia risk factors be identified previously. methods: plasminogen (behringwerke), pai-1 activity (chromogenic assay, biopool), pai-1 antigen concentration (elisa, biopool), and t-pa activity (chromogenic assay, biopool) and antigen concentration (elisa, biopool) were measured before and after venous occlusion (vo). vo was performed month after the last thromboembolic episode. healthy subjects (median age . ys) served as controls. results: pts. ( . %) were classified as plasminogen deficient (activity and antigen). pts. ( %) had significantly elevated levels of pai-1 activity (up to u/ml) and pai-1 antigen (up to ng/ml). none of the pts. with high pai-1 levels had laboratory signs of an acute phase reaction. low t-pa activity could be demonstrated and confirmed in pts., corresponding to a prevalence of . % (range: - . u/ml; reference limits: . - . u/ml). there was a significant negative correlation between t-pa activity and pai-1 values. in pts. ( . %) the low t-pa activity was associated with increased pai-1 levels, whereas the t-pa antigen concentration was normal. a parallel reduction of t-pa activity and t-pa antigen (range: . - . ng/ml; reference limits: . - . ng/ml) was determined repeatedly in pts. (f , m , median age ys). thus, the prevalence of a defective t-pa release was . % in our study group. conclusion: in comparison with the frequency of inherited deficiencies of other well-established thrombophilia risk factors, we observed a relatively high prevalence of diminished t-pa activity and of elevated pai-1, respectively, in our study group. our data strongly indicate that, besides t-pa and pai-1 activity, the antigen concentrations of both parameters should be determined in pts. with thrombophilia.
the antithrombotic and anticoagulant effect of the supersulfated low molecular weight heparin ssh was studied after i.v. and s.c. administration in rats. thrombus formation in the jugular vein was induced by i.v. injection of activated human serum and subsequent stasis for min, and was assessed by a thrombus score ranging from (no thrombus formation) to (complete thrombus formation). ssh injected either min (i.v.) or min (s.c.) before thrombus induction caused a dose-dependent antithrombotic effect in a range from . to mg/kg i.v. and to mg/kg s.c. there were clear differences in the antithrombotic effectiveness between female and male animals, i.e. in female rats the antithrombotically effective doses were lower than in male rats (ed50 after i.v. injection: . mg/kg in females, . mg/kg in males). the sex differences were confirmed in studies on the time course of the antithrombotic effect: after injection of fully effective doses ( mg/kg i.v. and mg/kg s.c., respectively) the antithrombotic effect disappeared after h in female and after h in male rats. for studies on the anticoagulant action, blood was drawn from the femoral artery and, after centrifugation, global clotting assays were performed in plasma. similar to its antithrombotic action, ssh also caused dose- and sex-dependent anticoagulant effects. the most sensitive assays were the aptt and the heptest; thrombin time and prothrombin time were less or not influenced by ssh. in conclusion, ssh was found to be an effective anticoagulant and antithrombotic agent in experimental studies in rats. at present there is no explanation for the clear sex differences found in this species.

venous thromboembolic disease is the most frequent complication in patients undergoing total knee replacement. patients and methods: after informed consent, patients were included in an open randomized clinical study, and the incidence of venous thromboembolism was examined using different regimens of heparin prophylaxis ( patients received fraxiparin mg once daily, patients clexane once daily and patients u calciparin twice daily). there were no differences between the groups concerning age, sex, body weight, risk factors, surgeons, decrease in hemoglobin, and requirements for blood products. phlebograms were performed pre-surgery and on days - , and tat, d-dimers and f1+2 prothrombin fragments were also examined. results: dvt occurred in patients ( . %): in / patients under calciparin prophylaxis, / patients under fraxiparin and / patients under clexane treatment. there was a low specificity ( . %) of d-dimers and of tat ( %) for detecting a dvt in these patients undergoing knee replacement, and elevated f1+2 fragments in the dvt group at t and t vs. the patients without dvt (t dvt: . ± . vs. . ± . ; p = . ). only / patients ( %) with dvt had clinical signs of thrombosis. conclusions: there is an increase of thrombin generation, measured by tat and d-dimers, after knee replacement. further studies with more patients are necessary to confirm that f1+2 prothrombin fragments can discriminate between patients with and without dvt from a clinician's point of view. phlebographically confirmed dvt in almost % of our patients demonstrates the high thromboembolic risk in these patients.
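the sensitivity and specificity figures cited above for d-dimers and tat against the phlebographic reference derive from a standard 2x2 table; a small python sketch with purely hypothetical counts:

    # Illustrative sketch: diagnostic accuracy of a marker (e.g. D-dimer)
    # against phlebography as reference. All counts are invented.

    def diagnostic_metrics(tp, fp, fn, tn):
        """Standard 2x2 diagnostic accuracy measures."""
        return {
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp),
            "npv": tn / (tn + fn),
        }

    # Hypothetical: 20 phlebographically confirmed DVT, 40 without DVT.
    metrics = diagnostic_metrics(tp=18, fp=22, fn=2, tn=18)
    for name, value in metrics.items():
        print(f"{name}: {100 * value:.0f} %")

a pattern like the one sketched here (high sensitivity, low specificity) is what makes such markers poor rule-in tests after knee replacement, where surgery itself raises tat and d-dimer levels.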
von willebrand's disease (vwd) type is characterized by the absence of high molecular weight multimers. qualitative changes in the structure of the molecule might be associated with enhanced binding of von willebrand factor (vwf) to platelet glycoprotein ib. therefore, in some patients vwd type is associated with severe thrombocytopenia. here we report on a year old boy who presented with severe purpura and platelet counts of about /µl at the age of years. the thrombocytopenia did not respond to corticosteroids. a normalized platelet count of short duration was observed after high-dose immunoglobulins; in addition, an increase of platelets was seen after anti-d treatment. thus, although platelet-associated antibodies were not detected, the thrombocytopenia seemed to be caused by an autoimmune mechanism. despite platelet counts above /µl, the patient experienced severe bleedings with a significant decrease of hemoglobin levels; therefore he needed several transfusions. coagulation analysis revealed vwd. application of ddavp led to a normalization of the partial thromboplastin time (ptt) and an increase of factor viii, with subsequent cessation of bleeding symptoms. recently, the vwd was typed by lack of high molecular weight multimers. in conclusion, we report a case of vwd type responding to ddavp. however, it is unclear whether the thrombocytopenia is part of the vwd type or of autoimmune origin. since autoimmune antibodies have not been detected, the effect of immunoglobulin treatment might be explained by blockade of the enhanced binding of vwf to glycoprotein ib.

von willebrand disease (vwd), with a prevalence of , % (ruggeri, rodeghiero), seems to be the most frequent inherited hemostatic disorder. the diagnostic criteria for vwd are the clinical picture, family history and laboratory findings: bleeding time, partial thromboplastin time (ptt), levels of factor viii:c, vwf and vwf:ag, ristocetin-induced platelet aggregation (ripa) and multimer analysis. the diagnosis of vwd is occasionally difficult, especially in early childhood, because the laboratory data may vary with the time of investigation, and abnormalities may not be present in all subtypes. the aim of this study was the evaluation of the diagnostic approach to vwd in childhood and of the diagnostic reliability of all available laboratory tests. all previously mentioned laboratory tests were done on our own material ( children who satisfied all criteria for vwd, boys and girls, - years old), except multimer analysis, which was unavailable in some cases. the majority of the laboratory tests proved to be highly specific and necessary for diagnosis. however, the diagnostic reliability of fviii:c and of platelet adhesion is much lower in mild cases in comparison with the total sample, while ptt is not a valid test. the most specific screening test for vwd is vwf, whose diagnostic reliability is almost , . the optimal strategy to establish the general diagnosis of mild forms of vwd is the use of vwf and vwf:ag, plus ripa if necessary, and multimer analysis to classify variant types.

we report on a new multimeric structural defect of vwf detected in a german family (two sisters and their three children): all members of the family who presented to our outpatient clinic had an increased spontaneous bleeding tendency (moderate or strong hematoma, epistaxis, menorrhagia). prolonged bleeding could be observed after surgical procedures (adenotomy, tooth extraction) and after trauma (laceration); wound healing was impaired in two cases. clotting assays showed a slightly prolonged aptt and a mild decrease of f viii:c, vwf:ag and vwf:rcof levels. collagen binding activity was within normal ranges. bleeding time (simplate i) was slightly prolonged.
the analysis of the multimeric structure in plasma showed quantitative and qualitative abnormalities: all multimers were detectable, and the structure of vwf was reproducibly abnormal in all family members, so the defect must be genetically caused. the thrombocytic vwf showed neither qualitative nor quantitative alterations. minirin® (ddavp) was administered as a test dose of , µg/kg bw in ml , % nacl solution i.v. to evaluate efficacy and tolerance: clotting assays showed normalization of aptt, f viii:c, vwf:ag and vwf:rcof in plasma and a shortening of the bleeding time in three cases. an insufficient rise of vwf:ag and vwf:rcof levels was observed in one case; one patient had no rise of f viii:c but a corrected bleeding time. multimeric analysis showed no structural change. the administration of ddavp was well tolerated in all cases. the existence of all multimers in plasma and the normal collagen binding activity suggest that the structural abnormalities of vwf in this family do not cause functional defects, so that the defect could be classified as a type i vwd. the response to ddavp was only partially effective.

mild von willebrand disease (vwd) is by far the most frequent congenital bleeding tendency. its diagnosis is very helpful in the pre-operative check-up in order to avoid bleeding complications during surgery; following the post-operative period and monitoring the management of haemorrhagic episodes in vwd patients are also strongly recommended. current methods involve complex technologies, are time consuming and require large series; these assays lack the flexibility expected for rapid individual testing. a new and flexible assay which works on the fully automatic walk-away coagulation instrument sta has been developed for these applications (liatest vwf). the technology is an immuno-turbidimetric method using microlatex particles coated with rabbit polyclonal antibodies specific for vwf. the assay has a dynamic range from to % von willebrand factor (vwf) concentration, it works with a -fold dilution of the tested plasma, and it offers a calibration established with the nibsc international standard. the total assay time is less than minutes and the detection threshold is %. there is no prozone effect up to concentrations higher than , % vwf. intra-assay reproducibility is < % and inter-assay reproducibility < %. in dilution studies a mean recovery of % was obtained. in a study on plasma samples from normal individuals, patients with high vwf concentrations, and vwd patients, comparison with the elisa technique demonstrated a correlation coefficient of . , with a slope of . and an intercept of . . in the low assay range, too, a good agreement was obtained with the elisa. we conclude that liatest vwf is a reliable, flexible, sensitive and rapid automated assay which fits well the vwf assay applications of coagulation laboratories.
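the recovery and reproducibility figures quoted for such a latex assay reduce to simple ratios; a python sketch with placeholder values:

    # Illustrative sketch of two performance figures: mean recovery in a
    # dilution study and intra-assay reproducibility (CV). Values invented.
    import statistics

    # Dilution study: expected vs. measured vWF (%) after serial dilution.
    expected = [100.0, 50.0, 25.0, 12.5]
    measured = [98.0, 51.5, 24.1, 12.9]
    recoveries = [100.0 * m / e for m, e in zip(measured, expected)]
    print(f"mean recovery: {statistics.mean(recoveries):.1f} %")

    # Intra-assay reproducibility: CV of replicates of one plasma.
    replicates = [82.0, 84.5, 81.3, 83.8, 82.9]  # % vWF
    cv = 100.0 * statistics.stdev(replicates) / statistics.mean(replicates)
    print(f"intra-assay CV: {cv:.1f} %")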
fibrinolysis, the process during which the active enzyme plasmin is generated in a regulated and localised way, is, in a classical understanding, responsible for the dissolution of blood clots formed in a vessel. for this activity, t-pa is generally assumed to be the most important plasminogen activator, and its activity is regulated by enzyme kinetic mechanisms dependent on the presence of fibrin. with this background, t-pa is used for thrombolytic therapy with great success. however, data from t-pa knockout mice indicate that t-pa might not be responsible for inhibiting the spontaneous development of intravascular thrombi, but only for the dissolution of fibrin formed upon a coagulation challenge. in contrast, u-pa, generally assumed to be important for extravascular proteolytic activity on activated or tumour cells, seems to prevent spontaneous fibrin formation, which develops in a mouse knockout model. on the other hand, the major plasminogen activator inhibitor pai-1 seems not only to regulate intravascular fibrinolysis but also to be important for the progression of vascular diseases (neointima formation is e.g. increased in a pai-1 knockout model, while increased levels of pai-1 seem to predict reocclusion after angioplasty). in addition to their functioning as enzymes and inhibitors, components of the fibrinolytic system also seem to be involved in signalling processes in tumour and other cells. the u-pa/u-pa-receptor system could be shown to function as a chemotactic system and to elicit a migratory and mitogenic response in monocytes and tumour cells as well as in vascular cells. for such a response, activation of tyrosine kinases of the src family might be responsible in some cell lines, but other signal transduction pathways, e.g. involving caveolae and the stat proteins, cannot be excluded. there seems to be a further important role of components of the fibrinolytic system which involves serine protease inhibitors (serpins): serpins have homologies to hormone binding proteins, and cleavage of serpins by their target enzymes leads not only to inactivation of the enzyme but also to a possible release of bound hormones from the serpins. from these data, the relevance of any regulation of the fibrinolytic system clearly depends on the specific function of the system under consideration. in addition to 'fibrin binding', 'receptor mediated' and 'genetic control' (e.g. 4g vs. 5g in the pai-1 promoter), 'signal transduction' and 'hormone delivery' are also distinct functions of the system with specific regulation.

for both healthy persons and patients with angina pectoris, it could be shown that increased values of plasma fibrinogen, factor viic and vwf:ag are significantly associated with the risk of suffering an acute myocardial infarction or sudden cardiac death. the same holds for tpa:ag. however, a group analysis in quintiles reveals that particularly low tpa:ag values are connected with a particularly low coronary risk. unexpectedly, the acute phase protein crp is also positively associated with increased coronary risk. for clinical purposes these factors have already been included in coronary risk scores in order to improve individual risk prediction in combination with lipids and other risk factors. the assessment of the pathophysiological significance of these observations remains in dispute. four pathways are discussed: 1. the assumption that increased plasma values of those factors indicate increased coagulation activity could so far not be established in prospective studies. 2. both vwf:ag and tpa:ag are produced in endothelial cells; an increase of their plasma level could therefore indicate increased endothelial cell function, which accompanies progressive atheromatosis. the risk association of the two acute phase proteins crp and fibrinogen could be interpreted analogously.
3. first prospective studies favour the assumption of a genetic determination of an increased production of coagulation proteins in persons at particular coronary risk. it could also be shown that there is a certain dependence of the gene polymorphisms for the fibrinogen chains on the coronary risk. 4. even slightly elevated concentrations of fibrinogen and/or vwf:ag may influence the quality of a coronary thrombus, both by increased physical stability and by reduced susceptibility to fibrinolytic lysis. this could mean that an early coronary clot under these conditions could more readily develop into a stable, occlusive thrombus.

a newborn with pronounced bleeding tendency had a prothrombin (prth) deficiency below . % in a clotting assay. both parents had activities of % and %, respectively. however, the immunological determination of prth by elisa revealed normal concentrations in all family members ( %- %). furthermore, thrombin generation as investigated by a chromogenic assay using ecarin for the activation of prth was normal as well. activation of prth by fxa was investigated by recalcification of the plasma samples, which were further analyzed for prth and the derivatives produced. although the clotting times were still different, finally normal levels of f1+2 and tat were generated, as determined by elisa. western blot analysis, using polyclonal (rabbit) antibodies to prth and a monoclonal antibody specific to human thrombin, revealed different patterns of prth degradation products. tat was only weakly visible in the serum of the mother and nearly absent in the child. the mobility of prothrombin and thrombin was different compared with normals, indicating a lower molecular weight. after reduction of the disulfide bridges, a higher molecular weight of thrombin was observed compared with normals, indicating an insufficient cleavage of prth and formation of prethrombin 2. these observations suggest that prothrombin marburg is a deletion mutant lacking the cleavage region arg -ile . upon cleavage by factor xa, only prethrombin 2 is formed, with liberation of f1+2. this prethrombin 2 is able to cleave chromogenic substrates in the ecarin assay. probably, prethrombin 2 forms a complex with atiii which is detected by elisa but is unstable under denaturing conditions, as in the western blot.

as a major complication of haemophilia a treatment, up to % of severely affected patients develop antibodies to substituted factor viii. investigating patients and considering the data of further patients from the haemophilia database, we could show that the risk of inhibitor development depends on the patient's mutation type. patients with more severe gene defects, like intron inversions, stop mutations or large deletions, had a risk of about % for inhibitor development, which was about times higher than for missense mutations or small deletions. besides the influence of mutation type, we investigated other parameters, e.g. immune response genes (hla genotype) and clinical aspects (treatment onset and frequency, type of concentrate), that might also affect inhibitor formation. to exclude any effect of mutation type, we focussed on patients with an intron inversion. hla typing showed that some hla alleles (dqb , b ) occurred more often and others (dqa , dqb , dr , c ) less frequently in inhibitor patients. treatment onset, frequency and type of concentrate apparently do not affect inhibitor incidence. the results presented here prove that inhibitor development is considerably influenced by the mutation type.
this supports the hypothesis that patients with severe molecular defects have no endogenous factor viii protein and that substituted factor viii represents a foreign protein leading to an immune response, e.g. the production of alloantibodies. in addition, the immune response seems to be modified by the hla genotype. however, our findings (in terms of genotype and treatment parameters) can only explain part of the inhibitor pathogenesis. it is still unsolved why substituted factor viii does not lead to a recognizable immune response in / of the patients with severe molecular factor viii gene defects. consequently, other factors, probably concerning the antenatal phase, must be involved.

rfviia in the treatment of patients with inhibitors against factor viii or ix: a german/swiss/austrian multicenter trial. d. ellbrück*, i. scharrer**, j. dethling***, and the rfviia study group; *section haemostaseology, university of ulm, **dept. of angiology, jwg-university hospital frankfurt a.m., ***novo nordisk, mainz. administration of activated recombinant factor vii (rfviia) can bypass the fviii/fix pathway and offers an alternative treatment for patients with antibodies (inhibitors) against these factors. from november to october , a total of bleeding episodes and surgical interventions in patients were treated with rfviia in a phase iiib multicenter trial. the diagnosis was hemophilia a (n= ) or b (n= ) with inhibitor, or an acquired inhibitor against factor viii (n= ). various serious bleeds, from complicated joint and gingival bleeds to life-threatening psoas bleeds, have been treated. operations comprised tooth extractions, radiosynovectomy, implantation and explantation of port-a-caths, and one adenotomy. the dose regimen was - µg/kg bw every two to three hours until clinical improvement, with subsequent dose reduction. results: for bleeding episodes, the response to rfviia after hours was effective in %, partially effective in %, ineffective in %, and not evaluable in ( %) of the patients. two of the three treatment failures were associated with very long dosage intervals of rfviia. the third patient was in a critical situation, with artificial high-pressure respiration and polytransfusion because of a haematothorax, and suffered a terminal intracerebral bleed. the efficacy of rfviia for surgery was very good. the response to treatment was independent of the antibody titre. no signs of dic or activation of coagulation were noted. conclusion: in our experience, rfviia is an efficient and safe treatment for inhibitor patients with acute bleeding episodes. it should be investigated whether rfviia can also be an alternative for the home-treatment situation.

successful immune tolerance therapy (itt) of f viii inhibitor in children after changing from a high to an intermediate purity f viii concentrate. w. kreuz, j. joseph-steiner, d. mentzer, g. auerswald*, t. beeg, s. becker; zentrum der kinderheilkunde, j. w. goethe-universität frankfurt am main, *professor hess kinderklinik bremen. introduction: an inhibitor to f viii is the most severe complication in the treatment of patients with haemophilia a. the incidence of f viii inhibitors is estimated to range between and %. several authors have reported that immune tolerance to f viii inhibitors can be induced with high-dose f viii concentrate. objective: this presentation shows data on four children with haemophilia a and f viii inhibitor (high responders) who had an unsuccessful itt with a high-dose, high-purity f viii concentrate in the first step.
the f viii concentrate was changed to an intermediate purity product (haemate hs®) in the subsequent course of itt. all patients received bleeding prophylaxis with an activated prothrombin complex concentrate (feiba®). results: the median age was ( - ) months when the inhibitor was first detected. in all four patients the f viii inhibitor titre increased under immune tolerance treatment with the high-purity f viii concentrate in the first step of therapy. after changing the f viii concentrate (intermediate purity), the inhibitor titres, following a rebooster effect, decreased continuously within months. the median duration of f viii inhibitor elimination (until the first testing of be) was ( - ) months. in all patients the f viii inhibitor was successfully eliminated. until now all patients are under prophylactic treatment with f viii concentrate and have had no positive inhibitor testing since. the median observation time since the first testing of be is ( - ) months. conclusion: different studies concerning immune tolerance treatment have been successful with f viii concentrates of different purity. according to our experience in these four patients, we assume that it is probably not the purity of the f viii concentrate that is important for the induction of immune tolerance, but rather the type of f viii presentation in the concentrate used. the preparation used (haemate hs®) is an f viii concentrate with a high concentration of vwf, which is known to be important for the protection of f viii against degradation by proteases. this may be a mechanism for prolonged antigen presentation to the immune system and thus may have a positive impact on the outcome of itt. large-scale trials are needed to prove the above assumptions.

thrombasthenia glanzmann is a disease affecting platelet function because of a partial or total lack of glycoprotein (gp) iib/iiia expression or a modification of this complex. since the receptor dysfunction goes along with reduced or absent platelet aggregation and adhesion, it causes bleeding complications in case of injury. here we report on a year old woman who has suffered from a severe bleeding disorder since early childhood. life-threatening bleeding complications occurred after tooth extraction and after abdominal surgery. analysis of the patient's platelets revealed normal values for the platelet count, whereas their volume was increased ( fl). clot retraction was diminished to %. platelet adhesion to siliconised glass and human subendothelial matrix was reduced, as was the spreading of the platelets. adp ( µm)-induced platelet aggregation was inhibited, while collagen-, ristocetin- and thrombin-induced aggregation was normal. crossed immunoelectrophoresis showed an atypical peak of gp iib/iiia with reduced electrophoretic mobility. in the electroimmunoassay according to laurell, % of gp iib/iiia was detected. moreover, we observed a markedly diminished fibrinogen binding. sequence analysis of the gpiib and gpiiia cdna after pcr amplification revealed a g→a transition in gpiib, substituting gly→glu. the structure/function relationship of this mutation has still to be investigated.

we report two new abnormal fibrinogen variants, denoted bern iv and milano xi, both having an exchange of arginine for histidine at position of the aα-chain. routine coagulation studies revealed prolonged thrombin and reptilase clotting times and low plasma fibrinogen concentrations determined by a functional assay, but normal fibrinogen levels measured by an immunological assay.
the onset of the turbidity increase following addition of thrombin to purified fibrinogen was markedly delayed in both variants. the release of fibrinopeptide b by thrombin, measured by reversed-phase hplc, was normal, whereas only half the normal amount of fibrinopeptide a was released. in addition to normal fibrinopeptide a, an abnormal fibrinopeptide a* was cleaved from both dysfunctional fibrinogens. the structural defect was determined by asymmetric pcr and direct sequencing of a gene fragment coding for the nh2-terminus of the aα-chain. both variants were found to be heterozygous for the transition g to a at nucleotide position , leading to the substitution aα arg→his and resulting in delayed fibrin polymerization. the simple assay permits detection of the most common amino acid substitutions occurring in the nh2-terminus of the aα-chain of the functionally abnormal fibrinogen variants.

protein c inhibitor (pci), a member of the serpin family, is also known as plasminogen activator inhibitor-3 (pai-3). pci was first described as a component of human plasma regulating the activity of activated protein c and other serine proteases of the human coagulation and fibrinolysis systems. since then, pci has been found to be present in extra-plasmatic systems as well. high concentrations of pci were detected in human seminal plasma, suggesting a role for pci in human fertility. significant concentrations of pci mrna and antigen were located in lysosomes of proximal tubular kidney cells, suggesting an intracellular function for pci in this environment. in this study we present evidence that pci is also present in the human pancreas. rna from human pancreas was reverse transcribed and pcr amplified; the resulting pci cdna was identical to the pci cdna from human liver. 32p-labeled antisense rna probes used in in situ hybridization experiments with human pancreas tissue sections showed that pci rna was located in the acinar cells. pancreatic fluid was analyzed by sds-page and immunoblotting: using monospecific antibodies directed against human plasma pci, a protein band was observed which comigrated with purified human plasma pci. our results show that pancreas cells contain a significant concentration of pci mrna, and that this message is localized in the secretory acinar cells. we therefore conclude that the pci antigen found in pancreatic fluid is likely to originate in the pancreas. the role of pancreatic pci is unknown at present. however, thrombosis and systemic hypercoagulable states are known complications of pancreatic diseases, and our results, together with in vitro experiments by others showing that pci can inhibit pancreatic enzymes such as chymotrypsin and trypsin, indicate that pci may be part of the inhibitor potential which protects pancreatic tissue from autodegradation. these inhibitors normally prevent the release of active pancreatic proteases into the vasculature or microcirculation, where destabilization of the coagulation balance and subsequent thrombus formation could occur.
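the genotyping approach described above for the fibrinogen variants (direct sequencing of a pcr fragment and location of a single g→a transition) can be sketched in a few lines of python; the sequences and the toy codon table are invented for illustration:

    # Illustrative sketch (hypothetical sequences): locating a single-base
    # G->A transition in a sequenced fragment and translating the affected
    # codon, as in the genotyping of the Aalpha arg->his variants above.
    CODONS = {"CGT": "arg", "CAT": "his", "GGT": "gly", "GAA": "glu"}

    def find_substitution(reference, variant):
        """Return (codon_index, ref_aa, var_aa) for one substituted base."""
        diffs = [i for i, (r, v) in enumerate(zip(reference, variant)) if r != v]
        assert len(diffs) == 1, "expected exactly one substituted base"
        start = (diffs[0] // 3) * 3            # start of the affected codon
        ref_codon = reference[start:start + 3]
        var_codon = variant[start:start + 3]
        return start // 3, CODONS[ref_codon], CODONS[var_codon]

    # Invented 9-bp fragments; the middle codon carries the G->A transition.
    ref = "GGTCGTGAA"   # ... gly - arg - glu ...
    var = "GGTCATGAA"   # ... gly - his - glu ...
    codon, before, after = find_substitution(ref, var)
    print(f"codon {codon + 1}: {before} -> {after}")   # codon 2: arg -> his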
institute for clinical chemistry and laboratory diagnostics and *clinic for cardiology, university of duesseldorf. p-selectin (cd62p, the former granule membrane protein gmp-140) is an integral membrane protein of platelets and endothelial cells. under resting conditions it is stored in the alpha granules of platelets and in the weibel-palade bodies of endothelial cells. endothelial cells covering atherosclerotic plaques show an increased expression of p-selectin. β-thromboglobulin (β-tg), which is also released from the alpha granules of platelets during adhesion or aggregation, is regarded as a marker of platelet activation in vivo. coronary thrombosis plays a central role in the pathogenesis of the acute coronary syndromes. we therefore analysed cd62p and β-tg in acute coronary syndromes: healthy subjects (hs, n= ), patients with stable angina pectoris (sap, n= ), unstable angina pectoris (uap, n= ) and acute myocardial infarction (ami, n= ). plasma samples were obtained using ctad vacutainer tubes ( . m na-citrate, theophylline, adenosine, dipyridamole). patients with cad showed significantly increased plasma concentrations of cd62p (hs: ± versus sap: ± ng/ml, p< . ; versus uap: ± ng/ml, p< . ; versus ami: ± ng/ml, p< . ), independent of the severity of clinical symptoms. in comparison, only patients with ami showed significantly higher β-tg concentrations compared with hs (hs: ± versus ami: ± ng/ml, p< . ). although the cd62p plasma concentrations showed no relationship to the severity of clinical symptoms, there was a positive correlation between cd62p (r= . ; p< . ; n= ) and the severity of cad classified as -, - or -vessel disease. it is concluded that elevated cd62p concentrations are correlated with the severity of cardiovascular disease. cd62p is not suitable for the differential diagnosis of acute coronary syndromes, because it is elevated independently of the clinical status of the patients. the involvement of platelets in the pathogenesis of acute myocardial infarction may be indicated by the increased β-tg concentrations.

klinik für herz-, thorax- und herznahe gefäßchirurgie und institut für klinische chemie und laboratoriumsmedizin der universität regensburg. an increased blood loss following surgery with extracorporeal circulation (ecc) contributes to morbidity and mortality. postoperative haemorrhage following ecc has been related to a platelet function defect and to activation of the blood clotting and fibrinolytic systems. we investigated platelet surface antigen expression and parameters indicating activation of the clotting and fibrinolytic cascades, to assess the predictive potential of these variables for increased blood loss after ecc. patients referred for coronary bypass grafting, with no history of a bleeding disorder and normal routine clotting tests, were included. blood samples were drawn on the day prior to surgery and immediately upon arrival on the intensive care unit. the surface expression of glycoprotein (gp) iib-iiia, gp ib and p-selectin was measured with and without in vitro stimulation with adenosine diphosphate (adp), using whole blood flow cytometry. platelet counts and platelet factor 4 (pf4), as well as routine clotting tests, were performed. activation of the clotting and fibrinolytic systems was judged from the thrombin-antithrombin-iii complex (tat), fibrinogen (fg), d-dimers (dd), α2-antiplasmin (α2a), prothrombin fragment f1+2, and tissue plasminogen activator (t-pa). blood loss from chest tubes was measured hourly until removal of the drains. following ecc, the levels of pf4, tat, dd, α2a and f1+2 were significantly increased (p< . ) compared with baseline values. gp iib-iiia, gp ib, p-selectin, platelet count and fg were significantly reduced (p< . ). analysis of variance (anova) revealed that postoperative values of gp ib (p< . ), dd (p