key: cord-0041419-jn20f6xf authors: nan title: Oral Abstracts date: 2018-05-04 journal: Am J Transplant DOI: 10.1111/ajt.14917 sha: 021fb6e79ff9c699f5f7e1b59f722297252870c9 doc_id: 41419 cord_uid: jn20f6xf

Therapeutic principles established in non-human transplant models have value for clinical application despite cost-related limitations on sample size. We have tested new therapies in marginally sized (3,000-5,000 IEQ/kg) allogeneic islet transplants in NHPs. We transplanted allogeneic islets into recipients treated with anti-CD40mAb (2C10R4; days 0, 14, 28, 60, 90, 120) and RPM (days 0-120), with or without alpha-1-antitrypsin (AAT; days 1, 3, 7, 10). In the second set of experiments we cultured islets in AAT overnight and transplanted marginally sized islets (2,000-5,000 IEQ/kg) into MHC-mismatched recipients treated with anti-CD40mAb+RPM+IL-2 (at day 160 post-Tx). Using state-of-the-art techniques for CyTOF and SOMAscan we are now able to individualize treatment for each animal. Results: AAT given as an adjunct to anti-CD40mAb+RPM is effective and safe in NHP recipients of islet transplants. Results in terms of transplant function are far superior to those obtained with anti-CD40mAb+RPM alone. In 5 NHPs transplanted without AAT, graft survival was 169-217 days despite cessation of treatment at day 120 post-transplant. In 5 NHP recipients of marginally sized islet transplants receiving AAT as an adjunct, the graft functioned without need for insulin treatment 193-310 days post-transplant. In the second set of experiments, 3 NHPs were transplanted with exceptionally small islet allografts. The islets were cultured with AAT and transplanted into recipients treated with anti-CD40mAb+RPM+IL-2. In the absence of insulin treatment, the recipients quickly became and remained normoglycemic for 200-360 days post-transplant. Using CyTOF we were able to individualize the IL-2 doses needed for Treg expansion without expanding other immune cells such as Teff and NK cells. Conclusions: These results demonstrate that adjunctive AAT given with an anti-CD40mAb based regimen is efficacious in the NHP islet allograft model: (1) by blocking inflammation with a brief course of AAT, marginally sized islets functioned well for almost a year without the need for insulin despite cessation of treatment at day 120; (2) substitution of in vivo AAT-treated islets promoted immediate and long-term function of tiny islet allografts; and (3) transplantation of only 2,000 IEQ/kg can be used to achieve immunosuppressive drug-free insulin independence in NHPs for a prolonged period.

NFkB Inhibitor Improves Glycemic Control after Syngeneic Islet Transplantation by Suppression of Proinflammatory and Apoptotic Gene Expression in a Cerulein-Induced Pancreatitis Model. G. Yoshimatsu, 1 C. Darden, 2 K. Kumano, 2 C. Chang, 2 P. Saravanan, 2 M. Lawrence, 2 B. Naziruddin. 3 1 Fukuoka University, Fukuoka, Japan; 2 Baylor Scott and White Research Institute, Dallas; 3 Baylor Simmons Transplant Institute, Dallas. Total pancreatectomy with islet autotransplantation (TPIAT) is an effective treatment for refractory chronic pancreatitis. Repeated or continuous pancreatitis attacks lead to islet damage beyond the inflamed exocrine tissue. A preventive treatment approach during pancreatitis is important to improve the success of islet autotransplantation. NFkB is a major transcription factor that regulates proinflammatory cytokine production. We studied the effect of Withaferin A, a plant-derived NFkB inhibitor, on the suppression of cytokine production in islets using a cerulein-induced pancreatitis mouse model and the outcome of syngeneic transplantation.
Cerulein (CE) was administered intraperitoneally into C57BL/6J mice by seven hourly injections on one day per week for 4 weeks to establish chronic pancreatitis. In the treatment group, Withaferin A (WA) was administered intraperitoneally on the same day as the CE injections. Islets were isolated from the pancreas after 1 month of CE injections with or without WA, and they were transplanted into STZ-induced diabetic syngeneic mice. Islets isolated from the CE group showed a significant increase in mRNAs of major proinflammatory cytokines/chemokines (IL-6, TNFa, HMGB1, IFNγ, and IP-10), and WA treatment suppressed them. Moreover, apoptosis-related genes Bak, Bax, and XBP1 were upregulated in the islets of the CE-treated mice, while WA reduced their expression. Islets isolated from WA-treated pancreatitis mice were more efficient in restoring normoglycemia when compared to CE-treated mice. The NFkB inhibitor WA suppressed cytokine and apoptotic gene expression in islets of pancreatitis mice and improved islet autotransplantation function. The results of this study suggest that pretreatment of TPIAT patients with anti-inflammatory drugs may improve transplant outcome. CITATION INFORMATION: Yoshimatsu G., Darden C., Kumano K., Chang C., Saravanan P., Lawrence M., Naziruddin B. NFkB Inhibitor Improves Glycemic Control after Syngeneic Islet Transplantation by Suppression of Proinflammatory and Apoptotic Gene Expression in a Cerulein-Induced Pancreatitis Model Am J Transplant. 2018; 18 (suppl 4).

Human Pancreatic ECM Scaffolds for Islet Culture and Transplantation. D. Tremmel, S. Sackett, A. Feeney, R. Maguire, J. Odorico. Univ of Wisconsin, Madison. Extracellular matrix (ECM) plays an important role through structural and biochemical interaction with cells. Tissue-specific ECM, attained through decellularization (decel), has been proposed in many regenerative strategies for tissue and organ replacement. Decel of animal pancreata has been reported, but similar methods applied to human pancreas are less effective due to high lipid content. ECM-derived hydrogel can be made from many tissues, but human pancreas-derived hydrogel has not been reported. Our objective is to produce an acellular biological scaffold from human pancreas ECM (hP-ECM) for use in tissue culture and transplantation platforms. Human pancreata (Fig. 1a) were decelled with spin and homogenization techniques, using physical and chemical treatments to isolate hP-ECM (1b). hP-ECM was pepsin digested (1c), neutralized and warmed to 37°C to form hP-ECM hydrogel (hP-HG) (1d). hP-ECM and hP-HG were examined for lipid and DNA removal, and retention of ECM protein and glycosaminoglycan (GAG). Cytocompatibility of hP-HG was tested in vitro (1e) and immune response to hP-HG was assessed in vivo (1f). Lipid content is significantly reduced following homogenize-decel compared to spin-decel; lipid removal significantly enhanced gelation of hP-HG. DNA content is significantly reduced in the hP-ECM (4.2%) and hP-HG (0.44%) compared to the native pancreas (100%), while GAG content is moderately retained in the hP-ECM (20.8%) and hP-HG (4.0%). hP-ECM and hP-HG stain positively for Col1, Col4 and Laminin, but negatively for immunogenic proteins such as HLA Class I and II. hP-HG is cytocompatible with the beta cell line hINS-1, which grows on hP-HG with equal proficiency as on Col1 or uncoated surfaces.
When hPSC-derived pancreatic progenitor cells were embedded in hP-HG, the cells retained their Pdx1+ fate, were proliferative (Ki67) and had a low apoptosis phenotype (Casp3). hP-HG transplanted into humanized mice had minimal cellular infiltrate, whereas antigenic tissue was acutely infiltrated by human T cells (CD3) and B cells (CD20). Conclusion: We developed a novel protocol for the decel of human pancreas and the production of hP-ECM and hP-HG scaffolds suitable for cell culture and transplantation applications.

Identification of Exosomal Isletokines Released by Pancreatic Islets in Response to Inflammatory Stress. P. Saravanan, 1 G. Yoshimatsu, 1 R. Bhattacharjee, 1 C. Chang, 1 C. Darden, 1 B. Naziruddin, 2 M. Lawrence. 1 1 Islet Cell Lab, Baylor Scott & White Research Institute, Dallas, TX; 2 Transplantation, Baylor Simmons Transplant Institute, Dallas, TX. Purpose: To identify isletokines released by exosomes from human islets that have been exposed to hypoxic or inflammatory conditions and to define stress signaling pathways that can be targeted to suppress their expression and improve islet graft survival during transplantation. Methods: Human islets were subjected to hypoxic and inflammatory conditions. Exosomes were recovered and isolated from the medium and characterized by TEM and surface marker expression. Luminex multiplex assays were performed to identify isletokines recovered from islet exosomes. Western blot analysis was performed to identify activation of signaling components in islets exposed to hypoxia and cytokines. The phytochemical Withaferin A (WA) was used to block isletokine induction in islets, and effects of WA on stress-signaling components in islets were defined. Results: Human islets exposed to hypoxia and proinflammatory cytokines showed activation of mTOR/AKT/GSK3β and NF-κB stress signaling pathways, resulting in release of exosomes containing IL-6, IL-8, MCP-1, and CXCL10. Exosomal isletokines were suppressed by pretreatment with WA. Suppression of isletokines by WA correlated with inhibition of AKT and mTOR signaling, but was independent of GSK3β activation. WA also blocked induction of canonical NF-κB RelA/p65 signaling, indicating its requirement for exosomal isletokine expression. Conclusion: The study identifies exosomal isletokines produced by stressed islets that are known to induce islet inflammation upon transplantation. These findings indicate that AKT-mTOR and NF-κB stress-induced signaling can be targeted to suppress exosomal expression of isletokines to prevent early innate immune destruction of islets during transplantation and improve transplant outcomes.

Introduction: Ischemia reperfusion injury (IRI) contributes to delayed graft function (DGF). IRI/DGF in HLA-sensitized (HS) patients likely activates innate and memory injury pathways that may contribute to poor outcomes. C1INH is a serpin (serine protease inhibitor) that inhibits the classical and mannose binding lectin/mannose associated serine protease (MBL/MASP) pathways. Here we report outcomes of a Phase I/II placebo-controlled study investigating the safety and efficacy of C1INH to reduce DGF in a cohort of HS patients receiving DD kidney transplants after desensitization. Conclusions: C1INH appeared to offer a benefit in HS patients in protecting against early ABMR-associated graft loss. Though not significant, the need for dialysis at 3-4 weeks as well as renal function at 12 and 24 months post-transplant were numerically superior in the C1INH-treated group.
Therefore, prevention strategies with emphasis on therapeutic targets could improve long-term outcomes in patients with IRI/DGF.

Donor-derived cell-free DNA (dd-cfDNA) fractions were elevated in AMR and in dnDSA(+) recipients compared with controls (AMR 3.04%±2.12% vs Control 0.63%±0.18%, P<0.05; dnDSA 1.72%±1.41% vs Control 0.63%±0.18%, P<0.05; ANOVA P<0.01). The receiver operating characteristic area under the curve (AUC) was 0.97 (95% confidence interval [95%CI], 0.96 to 1.00) and 0.84 (95%CI, 0.65 to 1.00), respectively, for the discrimination of AMR or dnDSA from the control group. Positive and negative predictive values for AMR at a cutoff of 1.0% dd-cfDNA were 94.1% and 84.6%, respectively. Positive and negative likelihood ratios for AMR at a cutoff of 1.0% dd-cfDNA were 10.7 and 0.12, respectively. Conclusions: Donor-derived cfDNA may be used to assess allograft injury in AMR and in recipients with dnDSA but no histological lesions. Donor-derived cfDNA levels <1% reflect the absence of AMR, and levels >1% indicate a probability of injury in dnDSA(+) recipients.

KDRI is an estimate of post-transplant kidney graft failure. It is based on ten donor factors that independently affect transplant outcomes, including HCV status from serologic or NAT testing. With NAT now reported, further evaluation of these donors is now possible. Methods: We conducted a retrospective matched case-control analysis of adult deceased donor kidney transplants performed between 12/5/2014 and 12/31/2016, obtained from the UNOS database. We identified kidney transplants that had both a KDRI score and HCV Ab and NAT testing status. We compared 205 aviremic Hep C Ab+/NAT- kidney transplants to KDRI-matched control kidneys that were Hep C Ab-/NAT-. Results: The aviremic HCV kidneys were recovered from donors that were significantly younger (34.1±8.6 vs 42.6±10.4 years old), more likely to be white (183 (89.3%) vs 1554 (68.9%)), and less likely to have hypertension and diabetes. They were less likely to be DCD (13 vs 518, P<0.0001). The majority of the recipients of the aviremic HCV kidneys, when compared to matched controls, were hepatitis C positive (90.2% vs 4.3%). The recipients were significantly older (59.5±7.4 vs 53.2±13.1 years old, P<0.0001) and more likely to be male (162 (79%) vs 1378 (61.1%), P<0.0001). They were less likely to be white. They were more likely to be diabetic (105 (51.2%) vs 882 (39.1%), P=0.001). They were on dialysis for a shorter period of time (1175±886 days vs 1949±1311 days, P<0.0001) and were transplanted sooner (432±556 vs 938.6±884 days, P<0.0001). The graft survival of aviremic HCV kidneys when compared to non-Hep C kidneys trended toward being superior (Figure 1). If the HCV status of the aviremic kidneys were assumed to be negative, 122 more kidneys could have been allocated to patients with EPTS <20. Seven kidneys would no longer have KDPI >85%. Conclusion: Further policies might consider these findings to appropriately allocate these kidneys to potentially extend graft survival and limit the organ discard rate. Figure 1: HCV Ab+/NAT- kidney allograft survival compared with non-hepatitis C (Ab-/NAT-) kidneys.

Hepatitis C virus (HCV) infection in kidney transplant (KTx) donors or recipients has correlated with adverse outcomes, partly due to limited treatment options. Development of direct-acting antiviral agents (DAAs) revolutionized management of HCV, offering safe and effective therapies that may be used after KTx. We integrated SRTR data with records from a national pharmaceutical claims warehouse (2008-2016) to identify HCV treatments after KTx.
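A note on the dd-cfDNA diagnostic metrics reported above: the predictive values and likelihood ratios at the 1.0% cutoff follow directly from the test's sensitivity and specificity (and, for PPV/NPV, the prevalence of AMR among the tested recipients). The Python sketch below only illustrates that arithmetic; the sensitivity, specificity, and prevalence values are assumptions chosen for demonstration, not figures reported in the abstract.

# Illustrative sketch of how likelihood ratios and predictive values relate to
# sensitivity, specificity, and prevalence at a fixed dd-cfDNA cutoff.
# The numeric inputs below are assumed for demonstration only.

def likelihood_ratios(sensitivity, specificity):
    lr_pos = sensitivity / (1 - specificity)   # LR+ = sens / (1 - spec)
    lr_neg = (1 - sensitivity) / specificity   # LR- = (1 - sens) / spec
    return lr_pos, lr_neg

def predictive_values(sensitivity, specificity, prevalence):
    # Bayes' rule applied to a dichotomized test result.
    ppv = (sensitivity * prevalence) / (
        sensitivity * prevalence + (1 - specificity) * (1 - prevalence))
    npv = (specificity * (1 - prevalence)) / (
        specificity * (1 - prevalence) + (1 - sensitivity) * prevalence)
    return ppv, npv

if __name__ == "__main__":
    sens, spec, prev = 0.89, 0.92, 0.50   # assumed values, not from the abstract
    print(likelihood_ratios(sens, spec))
    print(predictive_values(sens, spec, prev))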
Pharmacy fills for HCV treatments were assessed before and after 1/2014, representing pre-DAA (N=100,497) and post-DAA (N=57,367) eras. Treatment patterns were stratified by donor (D) and recipient (R) HCV serostatus. Pharmacy fills for HCV treatments after KTx increased sharply in the post-DAA era, driven by sofosbuvir regimens, along with newer agents approved in 2016 (Fig 1). Among HCV D+/R+ recipients, the cumulative incidence of HCV treatment by 1 yr post-KTx increased >10-fold in the post-DAA era, from 0.6% to 10.8% (Fig 2). HCV treatment also increased 10-fold in HCV D-/R+ recipients, from 0.3% to 3.0% at 1 yr post-KTx. HCV D+/R- KTx was uncommon (0.2% and 0.5% of KTx in the pre- and post-DAA eras, respectively), but HCV treatments also increased in this group in the post-DAA era (from 0.3% to 5.3% by 1 yr). In parallel with expanded HCV treatment options, HCV D+ KTx rose from 1.6% to 2.4% after 1/2014. There were no significant differences in the nature of relationships between

The Altruistic Living Kidney Donor Phenotype - From Inquiry to Donation. V. Kumar, P. MacLennan, M. Bonventre, M. Hanaway, R. Reed, J. Locke. University of Alabama at Birmingham, Birmingham. Background: We distinguish an altruistic donor from the traditional living donor when a living person's offer to donate an organ is to "anyone" and not attached to a specific individual recipient. Altruistic kidney donation from living donors is an uncommon but growing practice, with a paucity of data in the recent literature regarding trends in altruistic donation. We herein review our program's experience with the altruistic donor from inquiry to the final outcome of living kidney donation and transplantation. We examined demographic characteristics of all altruistic donor inquiries from July 2013 to November 2017 and followed them longitudinally through our living kidney donor evaluation process, recording endpoints of screen failure, loss to follow-up, completed evaluation, not approved, withdrawal, approval to donate, and finally donating a kidney. Results: There were 245 altruistic donor inquiries during this 4-year period, with a 1.5-fold increase from 2014 to 2015 (Figure). Mean age at inquiry was 38 years, 64% were women, and 81% self-identified as white race/ethnicity (Table). Thirty-nine percent failed the initial screening process (n=95), while another 45% (n=111) were lost to follow-up after passing the initial screen. Those lost to follow-up had the highest BMI (35.6 kg/m2). Thirty-two of the original 245 (13%) completed the full donor evaluation, and at last follow-up 66% of these were approved and went on to donate. Altruistic donors who completed evaluation and donated had a mean age of 42 years, and were more likely to be white (81%) and female (52%) (Table). Conclusions: Evaluation of the potential altruistic donor is labor intensive, with only 9% of donor inquiries resulting in living kidney donation. Obesity appears to be a significant reason for loss to follow-up. However, donors who pass the initial screen and remain highly motivated to undergo a full evaluation have a high conversion rate for approval and donation at 66%. It will be important to compare the conversion rate to non-altruistic donors to determine whether specific policies for evaluating altruistic donors and identification of dedicated personnel to follow this potential living organ donor pool are warranted.

Little is known about whether a kidney from a hypertensive donor performs as well as a kidney from a non-hypertensive donor.
The present study examines the effect of kidney transplantation from hypertensive donors on blood pressure, kidney function, and histologic changes in recipients. Retrospective single-center analysis of 189 living donor kidney recipients (age >18, transplantation date 2008-2015). Recipients were followed for one year after transplantation. Hypertension in donors was defined as being on one or more antihypertensive drugs or having a blood pressure >135/85 on ABPM. GFR was estimated using the CKD-EPI equation. Implantation biopsy data were available in 168 patients and one-year protocol biopsy data were available in 161 patients. Biopsies were regarded as representative if they included four or more glomeruli. All biopsies were reviewed by the same experienced nephropathologist, blinded to donor hypertension status. One-year follow-up was complete in 183 participants. Coronary heart disease, donor age and donor BMI were significantly higher in the recipients from hypertensive donors. Mean donor systolic and diastolic blood pressure was significantly higher in the group with kidneys from hypertensive donors. Recipient blood pressure did not show a significant difference between groups at the time of transplant or after one year. Recipients from hypertensive donors needed significantly more antihypertensive medication to reach the same blood pressure levels compared to recipients from normotensive donors. At the time of transplantation and after one year, TRCS was significantly greater in the group with kidneys from hypertensive donors. In both groups TRCS progressed significantly from the time of transplantation to one-year follow-up. Logistic regression showed a significant association between hypertension, donor age and histologic abnormalities at the time of transplantation. After adjusting for multiple confounders, donor hypertension was not associated with eGFR at one year. Recipient age, donor age, acute rejection, and choice of calcineurin inhibitor were independently associated with eGFR after one year of follow-up. TRCS was significantly higher in kidneys from hypertensive donors at the time of transplantation and after one year of follow-up. There was no difference in renal function after 1 year. Donor hypertension was not associated with renal function in recipients after one year. CITATION INFORMATION: Dienemann T., Schellenberg J., Amann K., Heller K., Weidemann A. Renal Function, Blood Pressure, and Histologic Changes in Living Kidney Transplant Recipients from Hypertensive Donors Am J Transplant. 2018; 18 (suppl 4).

Introduction: To overcome the shortage of organs, the influence of age matching in living donor kidney transplantation should be studied. We analyzed the impact of age matching on outcome after LDKT. Method: The data of 2621 recipients who underwent LDKT were reviewed. Patients were divided into four groups according to cutoff values of donor age 45 years and recipient age 25 years: CtrlD-CtrlR, CtrlD-RiskR, RiskD-CtrlR, and RiskD-RiskR. Renal function, total graft survival and death-censored graft survival were compared among the groups. Result: At 1, 3, 5, 15, and 20 years after transplant, the eGFR of recipients from control donors was significantly higher than that of recipients from risk donors regardless of recipient age group (control recipients, P<0.001; risk recipients, P=0.005). Comparing death-censored graft survival rates over 20 years, CtrlD-CtrlR showed the highest survival rate and RiskD-RiskR showed the worst survival rate (P<0.001).
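The hypertensive-donor study above estimated GFR with the CKD-EPI equation. As a point of reference, the sketch below implements the 2009 CKD-EPI creatinine equation; this is an assumption, since the abstract does not state which CKD-EPI version or covariate handling was used.

# Minimal sketch of the 2009 CKD-EPI creatinine equation (mL/min/1.73 m2).
# Assumes serum creatinine in mg/dL; whether a race coefficient was applied
# in the study is not stated in the abstract.

def ckd_epi_2009(scr_mg_dl, age_years, female, black=False):
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = (141
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.209
            * 0.993 ** age_years)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr

# Example: a hypothetical 45-year-old non-black female donor, creatinine 0.8 mg/dL.
print(round(ckd_epi_2009(0.8, 45, female=True), 1))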
In multivariate analysis, pretransplant dialysis, acute rejection within 1 year, and age group were independent risk factors for death-censored graft survival.

Introduction: Renal function of the living donor usually recovers to 60-70 percent of baseline function by a compensatory hypertrophy mechanism. However, the degree of this compensatory hypertrophy varies from donor to donor, and the factors related to the degree of this mechanism are little known. Materials and Methods: We retrospectively analyzed 103 living donors who completed one-year follow-up after laparoscopic donor nephrectomy from 2011 to 2016 at our institution. Of these, 39 cases were defined as the suboptimal compensatory hypertrophy group. The definition of this group was as follows: 1) 1-year eGFR was less than 60% of baseline eGFR, and 2) 1-year eGFR was less than the value predicted by the correlation coefficient method. The rest of the donors were defined as controls (compensatory hypertrophy group, n=64). We investigated the factors related to suboptimal recovery of renal function after living donor nephrectomy. Result: Although baseline eGFRs were the same in the two groups (control: 82.4±13.5 mL/min/1.73m2 vs suboptimal compensatory hypertrophy group: 82.9±14.4 mL/min/1.73m2, p=0.88), donor age (control: 55.8±10.4 years old vs suboptimal compensatory hypertrophy group: 61.1±8.5 years old, p=0.009), hemoglobin A1c (control: 5.3±0.38% vs suboptimal compensatory hypertrophy group: 5.5±0.42%, p=0.002) and uric acid (control: 4.8±1.1 mg/dl vs suboptimal compensatory hypertrophy group: 5.5±1.3 mg/dl, p=0.005) were significantly higher in the suboptimal compensatory hypertrophy group compared to the control. Pathological chronicity findings on one-hour biopsy (ah≥1 + ct+ci≥1) were much more frequent in the suboptimal compensatory hypertrophy group than in the control (control: 6.5% vs suboptimal compensatory hypertrophy group: 26.3%, p=0.008). After multivariate logistic regression analysis, pathological chronicity findings (odds ratio 4.3, 95% confidence interval 1.1-15.9, p=0.031) and higher hemoglobin A1c level (per 0.1%: odds ratio 1.2, 95% confidence interval 1.0-1.3, p=0.014) were found to be independent risk factors for suboptimal compensatory hypertrophy. Conclusion: Pathological chronicity findings on one-hour biopsy and higher hemoglobin A1c level were associated with suboptimal recovery of one-year renal function after living donor nephrectomy. CITATION INFORMATION: Nishida S., Kinoshida K., Tanaka K., Hidaka Y., Hamanoue S., Kawabata C., Toyoda M., Inadome A., Uekihara S., Yamanaga S. Factors Related to Suboptimal Recovery of Renal Function after Living Donor Nephrectomy Am J Transplant. 2018; 18 (suppl 4).

Background: De novo donor-specific antibody (dnDSA) development is arguably the first step in the natural history of allograft injury secondary to humoral alloimmunity. This project aims to investigate the association between eplet mismatches (MM) and the development of dnDSA in a pediatric population. Methods: Retrospective cohort analysis of pediatric renal transplant recipients from 2008 to 2014 who underwent surveillance dnDSA testing with at least three years of follow-up. We used the National Marrow Donor Program's HaploStats platform to impute high-resolution HLA alleles (A, B, C, DR, and DQβ) from recipient and donor low-resolution HLA alleles, selecting the most commonly matched high-resolution typing.
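The donor nephrectomy study above classifies donors as having suboptimal compensatory hypertrophy only when both conditions hold: 1-year eGFR below 60% of baseline and below the predicted value. A minimal sketch of that two-condition rule follows; the predicted 1-year eGFR is taken as an external input, since the correlation coefficient method itself is not detailed in the abstract, and the example values are hypothetical.

# Sketch of the suboptimal compensatory hypertrophy definition described above.
# The predicted 1-year eGFR must be supplied, because the correlation
# coefficient prediction method is not specified in the abstract.

def is_suboptimal_hypertrophy(baseline_egfr, one_year_egfr, predicted_one_year_egfr):
    below_60_percent = one_year_egfr < 0.60 * baseline_egfr
    below_predicted = one_year_egfr < predicted_one_year_egfr
    return below_60_percent and below_predicted

print(is_suboptimal_hypertrophy(82.4, 45.0, 52.0))  # hypothetical donor -> True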
We then generated eplet mismatches using HLAMatchmaker version 2.0. We used multivariable Cox modeling to estimate associations between dichotomized MM counts (Class I & Class II total MM >10 vs. ≤10, DR & DQβ >5 vs. ≤5) and dnDSA development. Covariates included age at transplant, gender and donor type. Results: A total of 126 subjects (median age 13 years, 64% male) met inclusion criteria. Of the 126 subjects, 50% developed any dnDSA (surveillance or for cause) during follow-up, with a median time from transplant of 19.3 months. In models including both dichotomized Class I and Class II variables, Class II MM count >10 was independently associated with increased risk of dnDSA development (HR 3.39, p<0.01), while the association with Class I MM count >10 failed to reach statistical significance (HR 1.70, p=0.14). DR >5 MM alone was associated with increased risk of dnDSA development (HR 1.70, 95% CI: 1.00-2.88, p=0.048), while DQβ >5 alone was not (HR 1.45, p=0.17). Conclusions: Class II eplet mismatches are a strong independent predictor of dnDSA development in pediatric renal transplant recipients.

Under the current kidney allocation system (KAS), pediatric (PED) renal transplant (RTx) candidates have priority for kidneys from KDPI <35% donors. However, that priority is subject to exceptions, including very highly sensitized (HS, CPRA 98-100%) adult RTx candidates. We hypothesized that graft survival (GS) and patient survival (PS) in PED RTx recipients are superior to those of adult HS RTx recipients. We analyzed OPTN data for RTx from 2005 to 2016. We analyzed 3-year PS and GS for PED RTx compared to HS adult RTx and adult RTx recipients with different degrees of sensitization using log-rank tests. We found that for KDPI <35% donors, PED RTx recipients had better 3-year PS compared to both first-time and repeat adult HS RTx recipients (p<0.0001), while 3-year GS was only superior when compared to HS adult RTx recipients with a prior Tx (p=0.0038, multiple comparisons adjusted 0.0784). For donors with KDPI 35-85%, again 3-year PS was superior in PED RTx recipients compared to HS adult RTx recipients (p<0.0001), but 3-year GS was not different. The lack of GS benefit in PED RTx recipients is an interesting finding that warrants further study. We speculate that potential causes include non-adherence in adolescent patients, recurrent disease, and technical factors in the youngest patients. Many in the pediatric RTx community have considered prioritizing HS adult candidates over PED candidates unfair. Based on these data showing poorer outcomes using similar donors, and other data showing a decline in the percentage of kidneys going to PED recipients after KAS, prioritizing HS adult RTx candidates with a prior RTx over PED candidates should be reconsidered.

Background: De novo donor-specific antibody (DSA) is one of the strongest independent predictors of graft loss, with HLA mismatching being a major risk factor for development of de novo DSA. Recent conceptual and technical advances permit examination of HLA mismatching at the epitope/eplet level. Given the observation that certain HLA allele mismatches appear to be more antigenic than others, this retrospective study followed the development of DSA in pediatric transplant recipients as a consequence of donor-recipient epitope/eplet mismatches. Method: Fifty-two pediatric renal transplant recipients (31 deceased donors, 21 living donors) transplanted between 2011-2015 were reviewed for their development of DSA.
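The eplet mismatch analysis above relies on a multivariable Cox proportional hazards model with dichotomized mismatch counts as predictors. A hedged sketch of that type of model using the lifelines library is shown below; the column names and data are hypothetical (the actual dataset and covariate coding are not available from the abstract), and a small ridge penalty is used only to keep the toy example numerically stable.

# Sketch of a multivariable Cox model for time to dnDSA with dichotomized
# eplet mismatch counts. Column names and data are hypothetical.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "months_to_dnDSA_or_censor": [19, 36, 12, 30, 24, 8, 36, 15],
    "dnDSA_event":               [1,  0,  1,  1,  0,  1, 0,  1],
    "classII_MM_gt10":           [1,  0,  1,  0,  1,  1, 0,  1],  # Class II eplet MM >10
    "classI_MM_gt10":            [0,  1,  1,  0,  0,  1, 1,  0],  # Class I eplet MM >10
    "age_at_tx":                 [13, 9,  16, 5,  12, 17, 7, 14],
})

cph = CoxPHFitter(penalizer=0.1)  # small penalty for this tiny illustrative dataset
cph.fit(df, duration_col="months_to_dnDSA_or_censor", event_col="dnDSA_event")
cph.print_summary()  # hazard ratios are exp(coef)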
HLAMatchmaker software was used to identify and enumerate HLA epitope/eplet mismatches. Result: Twenty-six of the 52 patients developed DSA, the majority (21/26, or 81%) being Class II DSA. Within the DSA-positive group there were more DQ epitope/eplet mismatches than DR epitope/eplet mismatches (p=0.006). There were significantly more DQ epitope mismatches in the DSA-positive patients than in the DSA-negative patients (p=0.008). Specific epitopes/eplets within the HLA-DQ locus of the donor (55PP, 70RT, 52PL and 140T) were more frequently mismatched in the DSA-positive group compared to the DSA-negative group (p<0.0001), suggesting that these may be the immunodominant epitopes/eplets associated with development of de novo DSA.

Here we report our experience with TCZ (anti-IL6R) in pediatric renal transplant recipients with biopsy-proven cABMR refractory to treatment with IVIg and rituximab. Methods: From Jan 2013 to June 2016, we identified 6 pts who developed strong (MFI >10,000) de novo DSAs on annual DSA monitoring, had ABMR on biopsy, and despite treatment with IVIg and rituximab continued to have cABMR. These patients were treated with TCZ, 4-8 mg/kg monthly for 4-12 doses. Pts were monitored for iDSA (immunodominant DSA, i.e., the DSA with the highest MFI), renal function, patient and graft survival, and adverse effects of TCZ. Results: Mean age at TCZ: 14.2 years (6.3-17.9 yrs). Mean time to ABMR from transplant: 2177 days (1159-4036 days). Mean time to TCZ from diagnosis of ABMR: 457 days (185-814 days). At diagnosis of cABMR, the iDSA was class 2 in all pts (5 with DQ, 1 with DR) and all iDSAs were >10,000 MFI, with C1q binding. At the time of TCZ use, the iDSA were unchanged from the time of diagnosis. 4-12 doses of TCZ were used. TCZ was well tolerated in 4/6 pts; 2 pts developed fatigue and therefore discontinued TCZ infusion after the 4th dose. At a median follow-up of 15 months (7-48 months) post TCZ, there was no change/decline in renal function (mean delta change in serum creatinine: 0.01 mg/dl) and no change in the iDSA. WBC counts and liver function tests were stable. 4 pts underwent a renal biopsy 3 months post TCZ, which showed mild to moderate reduction in C4d staining with no worsening of cABMR. Patient and graft survival was 100%. Viral PCRs were negative at and after TCZ. Conclusion: Development of strong de novo DSA post renal transplantation is associated with cABMR. TCZ was fairly well tolerated in all pts. The administration of TCZ in severe ABMR refractory to B cell immunotherapy stabilized the progression of ABMR, resulting in no graft loss and no decline in renal function. iDSA was not affected. The utility of TCZ in the treatment of cABMR should be further explored.

Antibody-mediated rejection (AMR) is a major concern for allograft failure, which could be mitigated by allocation of Class II matched donors. Since children have priority, we used national data to model access to zero-mismatched donors for pediatric kidney recipients. Deceased donors registered in the Canadian Transplant Registry (CTR) since 2008 with high-resolution HLA typing (n=1661) and data from the Canadian Organ Transplant Register were used for modelling of ABO:DRB1:DQA:DQB (0MM) allele combination frequencies in the population. A prospective national pediatric cohort (n=43) and the CTR were used to model wait time for an ideally matched donor (age 12-35, 0MM), not accounting for sensitization. 460 distinct ABO:DRB1:DQA:DQB allelic combinations were identified and 214 (12.9%) were unique to a single donor.
The most common combinations (1st quintile) had a median frequency of 1.02% (IQR 0.90-1.26%) and the least common (5th quintile) 0.06% (IQR 0.06-0.06%). Using each donor type to simulate potential recipients, common ABO:HLA types (1st quintile) had 0MM to 3.9% of the donor pool, and access decreased by quintile (1.9, 1.3, 0.9, 0.2%).

Background: We have recently shown that the non-HLA antibody AT1R-Ab is prevalent and associated with poor outcomes in pediatric kidney transplant recipients (KTRs). The prevalence and significance of other G-protein coupled receptor non-HLA antibodies in pediatric KTRs remain unclear. We aimed to determine the clinical impact of endothelin-1 type A receptor antibody (ETAR-Ab) in pediatric KTRs. Methods: 65 pediatric patients were monitored for 2 years after transplantation from August 2005 to November 2014. ETAR-Ab (ELISA), AT1R-Ab (ELISA), HLA DSA (Luminex), and TNF-α, IL-1β, and IL-8 were measured at 6 months (m), 12m, and 24m post-transplant and during episodes of rejection. Based on a receiver operating curve analysis, >10 and >17 units/ml were considered positive for ETAR-Ab and AT1R-Ab, respectively, and >1000 MFI was considered positive for HLA DSA. Biopsies were performed at 6m, 12m, and 24m post-transplant per protocol and for clinical suspicion of rejection, and were evaluated by 2013 Banff criteria. Clinical outcomes and renal function were serially assessed (MDRD for >18 and updated Schwartz equation for <18 years old). Results: The prevalence of patients positive for ETAR-Ab at any time point was 32% (21/65). ETAR-Ab was associated with AT1R-Ab (p<0.001), but not HLA DSA. AT1R-Ab was present in all patients who had ETAR-Ab. ETAR-Ab was associated with a decrease in eGFR by 50% (p=0.045) and arteritis on biopsy (p=0.011), but not allograft loss, rejection, or hypertension (Figure 1a). Furthermore, patients positive for ETAR-Ab had higher median levels of IL-8 (p=0.003, Figure 1b). Conclusions: In pediatric KTRs, ETAR-Ab in combination with AT1R-Ab may promote vascular inflammation, leading to allograft dysfunction. Treatment with dual blockade may mitigate vascular injury and improve clinical outcomes.

Background/purpose: ABO blood type-incompatible (ABOi) liver transplantation is one of the strategies employed to overcome the shortage of donors. With rituximab and plasma exchange, antibody-mediated rejection caused by preexisting serum anti-A or anti-B antibodies directed against the donor's blood group A or B antigens on the graft can be successfully prevented. However, the outcome of long-term B cell immunity against donor blood group antigens in recipients who undergo ABOi living-donor liver transplantation (LDLT) is unknown. There were 36 blood type O recipients who received livers from 22 blood type A and 14 blood type B donors. We stimulated, ex vivo, these ABOi LDLT recipients' B cells, enriched from peripheral blood mononuclear cells (PBMC), with A, B, or O blood type red blood cells (RBC) to investigate B cell phenotype, activity, and antibody production. B cells from healthy blood type O volunteers served as controls. Donor-specific antibody titers in serum remained low (≤1:32) in all recipients. However, antibodies against non-donor blood group antigens were continuously higher (>1:128) in recipients with blood type O. After stimulation with non-donor blood type RBC, B cells from blood type O recipients showed a more activated phenotype (CD45R+ or CD86+), but not when stimulated with donor blood type RBC.
B cells from blood type O recipients stimulated with non-donor blood type RBC showed more proliferative activity on eFluor670 assay, but less proliferation when stimulated with donor blood type RBC. Antibody production against donor blood group antigens in cell culture supernatant, investigated by flow cytometry, was lower for cells from ABOi LDLT patients than in the control groups.

Previous studies have suggested that with the use of direct-acting antiviral (DAA) agents to treat post-transplant hepatitis C virus (HCV) infection, there has been an increased incidence of liver graft rejection. This increase may be due to an improvement in liver function resulting in better metabolism of immunosuppressive drugs and a decrease in blood levels, or to the immunomodulating effects of the HCV virus. We examined the effects of treating HCV post liver transplant with DAA agents on immunosuppressive drug levels. Patients treated with interferon were excluded from the study. Through a retrospective chart review, patients' demographics, date of transplant, date of HCV recurrence, and pre- and post-treatment viral loads were recorded. Immunosuppressive regimens and treatment regimens were noted. Tacrolimus (TAC), cyclosporine (CYA), or sirolimus (SIR) levels were measured pretreatment, 6 weeks into treatment, and post-treatment. Post-treatment viral load as well as biopsy-proven rejections were recorded as well. Following data collection, statistical analysis was done using a paired t-test. Results: 54 patients were treated post-transplant for HCV with DAA regimens. All 54 patients achieved SVR. The majority of patients were treated with TAC (41) as the main immunosuppression, while 9 received CYA and 4 SIR. CYA and SIR levels dropped, but the changes were not statistically significant. There was a significant decrease in TAC levels at the end of treatment (6.7 vs 4.8, p<0.02). Four patients had their dose increased. Four of 54 patients had biopsy-proven rejection (7.4%), but only one was associated with a significant decrease in drug levels. The decrease in TAC level may be explained by improved hepatic function leading to enhanced drug metabolism, possibly due to the eradication of virus. While our incidence of rejection (7.4%) was low, immunosuppressive dosages and TAC levels should be monitored closely to prevent this occurrence. Thus, our study suggests that treatment of hepatitis C may increase metabolism of TAC but does not lead to an increase in liver rejection.

units vs. 5.2 units, p=0.04) and greater use of platelets in the non-ATG group (22.5 units vs. 4.8 units, p=0.001), but no differences in red blood cell transfusion. One-year rejection rates trended toward lower rates in the rATG group (26.5% vs 44%, p=0.056). Bacterial infection rates were higher in the non-ATG group at 90 days and 1 year (non-rATG 54.5% vs. rATG 24.1%, and non-ATG 61.9% vs rATG 33.9%, p=0.004, respectively). Fungal infections were higher in the non-ATG group at 90 days and 1 year (32.8% vs 14.5% and 47.6% vs 14.5%, p=0.0006). There were no differences in 1-year CMV infection rates between the groups (non-ATG 2.6% vs. rATG 7.9%, p=0.3). Conclusion: Based on this intention-to-treat analysis, rATG use in DCD LTx recipients appears safe and is not associated with increased infectious or hematologic complications. The significant reduction in infections in the rATG group may be related to differences in overall immunosuppression or an era effect.
Additional studies are needed to better elucidate the etiology of reduced infection rates with rATG use in DCD LTx.

and history of diabetes (NASH-associated characteristics). The hazard of graft failure between the HCV and NASH groups was compared using multivariable Cox regression models.

Our study aims to determine predictive factors leading to post-liver transplant (LT) opioid dependence. Methods: A retrospective chart review was conducted including LT recipients from July 2013-June 2016. Patients were excluded if they had a history of other major surgery or trauma. A linear logistic regression was performed to predict opioid use in terms of oral morphine equivalents, fitting separate models for time from LT. The change over time in opiate use was also tested using McNemar's test. The results of the multivariate analysis revealed that opioid use pre-LT was associated with increased risk of opiate use post-LT, with ORs ranging from 2.08-3.57 between 3 months and 2 years post-LT. The concurrent use of psychotropic medications (for anxiety, depression or bipolar disorder) increased the likelihood of opiate use at 1 and 2 years post-LT. Patients were at lower risk of developing post-LT opiate dependence if they were female or had a strong support system (Table 1). McNemar's test revealed that the prevalence of ongoing opiate use did decrease, from 85.6% at 1 month post-LT to 13.3% at 2 years post-LT. This study reveals factors associated with increased post-LT opiate dependence, including pre-LT opiate use, concurrent use of psychotropic medications and a history of prior alcohol dependence. Importantly, a strong support system is associated with lower risk for post-LT opiate dependence. Ongoing efforts to recognize patients at high risk for opiate dependence allow the transplant community to help curb the reliance on opioid analgesics and to seek out alternative methods of pain management.

The American Consortium of Early Liver Transplantation for Alcoholic Hepatitis (ACCELERATE-AH) is a multicenter cohort from 8 UNOS regions studying early liver transplant (LT) for alcoholic hepatitis (AH). We compared early and late post-LT mortality in AH vs. alcoholic cirrhosis (ALC). All alcohol-related LTs since the first LT for AH at 12 ACCELERATE-AH sites were included. AH was defined as clinically diagnosed severe AH, no prior diagnosis of liver disease or AH, and LT before 6 months of abstinence. ALC was defined as a UNOS listing diagnosis of alcoholic cirrhosis and not within the AH group. HCV, HCC, other liver diseases, and re-LT were excluded. Site-specific and UNOS data were used. Graft failure and death were primary outcomes. We included 822 LT recipients from 2006-2016: 123 AH and 699 ALC. Median follow-up was 2.0 years in AH and ALC. Only 28% of AH patients had the correct listing diagnosis of AH in UNOS. AH patients vs. ALC were younger (42 vs. 54, p<0.001), more often college educated (50% vs. 41%, p=0.005), and had higher MELD (38 vs. 30, p<0.001). Cumulative unadjusted 3-yr survival (85% vs. 86%, p=0.47) was similar in AH vs. ALC. In MV analysis, AH as LT indication (HR 1.42, p=0.02), mechanical ventilation at LT (HR 2.00, p=0.02) and DRI (HR 1.96, p=0.002) were associated with increased risk of death. In those aged ≥50, AH vs. ALC was associated with increased risk of death ≤90 days post-LT (HR 4.07, p=0.003). Misclassification of AH in UNOS is frequent, highlighting the challenge of using UNOS data alone to study LT outcomes in AH.
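The opioid-dependence analysis above tests the change in paired (within-patient) opiate-use status over time with McNemar's test. A minimal sketch using statsmodels follows; the 2x2 table of within-patient transitions is hypothetical, chosen only to be consistent with the reported marginal prevalences (85.6% and 13.3%) under an assumed cohort of about 90 patients.

# Sketch of McNemar's test on paired opiate-use status (1 month vs 2 years
# post-LT). The transition table below is hypothetical.
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

#                 2 yr: use   2 yr: no use
table = np.array([[10,         67],    # 1 mo: use
                  [ 2,         11]])   # 1 mo: no use

result = mcnemar(table, exact=True)    # exact binomial test on discordant pairs
print(result.statistic, result.pvalue)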
After adjusting for covariates influencing survival, AH as indication for LT had a 40% higher risk of post-LT death vs. ALC, which was most pronounced in those aged ≥50 and ≤90 days post-LT. Mortality ≤90 days post-LT is unlikely to be related to alcohol use post-LT; further studies are essential to elucidate factors contributing to early post-LT mortality in AH.

The characteristics of liver transplant (LT) recipients have evolved over time as centers adapt to allocation policy changes and the shifting demographics of the US population. We aim to analyze the temporal trends of LT recipient demographic and medical characteristics. This study analyzed demographic and medical characteristics for all LT recipients from 2002-2015 using a UNOS STAR file. Continuous variables were analyzed by Student's t-test while categorical variables were analyzed by chi-squared test. From 2002 to 2015, 79,762 patients underwent LT, with a trend towards increasing annual LT volume (4,733 to 6,450). Median age at LT increased from 52 to 58 (p<0.001). While the proportion of white recipients decreased, African American, Hispanic, and Asian recipients significantly increased (p<0.001). Privately insured patients significantly decreased from 84.1% to 56.7% (p<0.001), while publicly insured recipients more than tripled (12.8% to 42.7%, p<0.001). Additionally, LT recipients were increasingly likely to have attained greater than a high school education (31.4% to 49.2%, p<0.001). LT recipients were more commonly diabetic, with higher median BMI, and less independently functional, as assessed by Karnofsky scores ≥60, at listing (74.8% to 63.7%, p<0.001) and LT (65.2% to 45.8%, p<0.001). The final calculated MELD significantly increased from 17 to 20 (p<0.001). Patients more often underwent LT while hospitalized (15.1% to 20.2%) and in the intensive care unit (12.9% to 16.3%), both p<0.001. Hepatitis C (33.4% to 17.3%) and hepatitis B (6.3% to 1.8%) decreased as indications for LT, while NASH (7.2% to 14.2%) and hepatocellular carcinoma (7.4% to 25.7%) increased, all p<0.001. Over the study interval, both 1- and 3-year post-LT survival improved, from 81.6% to 86.9% and from 70.7% to 75.7%, respectively, both p<0.001. Transplantation of a more racially diverse and publicly insured group of patients suggests more equitable access to LT across socioeconomic groups in the US. Despite LT recipients being older and sicker compared to 2002, post-LT survival has significantly improved. The impact of transplanting sicker, diabetic, obese, but less functional patients will need to be carefully examined beyond the 1- and 3-year outcomes scrutinized by the SRTR to truly assess our current listing practices.

Background: While rare, combined thoracic/liver transplants are employed with increasing frequency. Consistent data regarding the technical conduct, patient selection and outcomes in this highly complex group are lacking. Methods: Patients with end-stage cardiac/pulmonary and liver disease who underwent heart-liver (HL) or lung-liver (LL) transplantation were evaluated. Transplants were performed sequentially (thoracic-first approach). Preoperative liver pathology was reviewed (12 consecutive patients, 3 LL and 9 HL tx, 2010-2017, retrospective).

We modeled waitlist death by performing multivariate competing risk analyses of 121,198 candidates listed in the United Network for Organ Sharing database. We incorporated the factors into an index to identify disenfranchised waitlist candidates, the transplant-death index (TDI).
Significant predictors of 90-day waitlist activity (either liver transplantation or death) for listed candidates included: age, lab MELD score at listing, blood type, BMI, diabetes mellitus, re-transplantation, hemodialysis, low albumin levels, life support, hepatitis C status, spontaneous bacterial peritonitis, and hospital admission. Approximately half of the patients on the waitlist had low probabilities of both death and transplantation within 90 days (TDI-inactive). The remaining 50% of patients were stratified into 10 groups within a predictive index, the TDI. As the TDI increases, the likelihood of transplantation increases and the likelihood of death decreases. The low-TDI groups (TDI 10, 20) represent candidates with low probabilities of transplantation and high probabilities of death within 90 days. It is this disenfranchised subgroup that would benefit most from considering a marginal allograft.

Purpose: On 2/19/2015, major revisions to the Lung Allocation Score (LAS) calculation were made to more appropriately reflect candidates' waiting list urgency and post-transplant survival. Many of the revisions focused on reassessing the scores for lung candidates with pulmonary vascular disease (PVD). Pre-/post-implementation analyses were constructed to evaluate the revisions. Methods: OPTN data on adult lung-alone candidates and recipients were analyzed for pre- and post-implementation cohorts between 2/19/14-2/18/16, split at the date of implementation. Waiting list data were used to determine the changes across diagnosis groups pre- vs. post-implementation in LAS using a two-way ANOVA, and in relative ranking on the waiting list relative to other diagnosis groups using a Wilcoxon rank-sum test. Transplant data were used to analyze changes in the number of transplants within each diagnosis group as well as Kaplan-Meier 1-yr recipient survival estimates overall and by diagnosis group. Results: For adult lung-alone candidates in the pre-era (N=2,559) and the post-era (N=2,541), there were significant increases in the match LAS for PVD (p<0.001) and significant decreases for restrictive lung disease candidates (p<0.001). Similarly, a significant change was found in the relative ranking of candidates pre- vs. post-implementation for those with PVD (p<0.001), restrictive lung disease (p=0.012), and obstructive lung disease (p<0.001). There was a significant improvement in the 1-yr recipient survival estimate for the pre- (85.96%) vs. post- (89.41%) era (log-rank test p<0.001). Recipients with a diagnosis of cystic fibrosis and immunodeficiency disorder and obstructive lung disease maintained the highest 1-yr survival estimates within both eras. Overall, the revisions to the LAS are benefiting the targeted PVD candidates. Improvements were seen in the 1-yr recipient survival estimates for all patients. The OPTN Thoracic Organ Transplantation Committee will continue to monitor the revisions and assess whether any further modifications should be made to the LAS calculation.

Purpose: Histologic assessment of lung transplant transbronchial biopsies (TBB) is not reproducible and cannot assess antibody-mediated rejection (ABMR). Molecular diagnostics (MMDx) successfully identified rejection in heart and kidney transplants, and we adapted MMDx to TBB using 453 rejection-associated transcripts (RAT) derived in kidneys and validated in hearts. Methods: 242 single TBB bites from 211 recipients at 7 centers were processed using microarrays.
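The LAS-revision analysis above compares eras using Kaplan-Meier 1-year survival estimates and a log-rank test, as do several other abstracts in this section (e.g., the ACCELERATE-AH comparison). A hedged lifelines sketch of that workflow on synthetic data:

# Sketch of Kaplan-Meier estimation and a log-rank comparison between two
# eras/groups. Data below are synthetic, not from any abstract.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
t_pre  = rng.exponential(scale=30, size=200).clip(max=12)  # months, administratively censored at 1 yr
t_post = rng.exponential(scale=45, size=200).clip(max=12)
e_pre  = (t_pre  < 12).astype(int)   # event (death) observed before 12 months
e_post = (t_post < 12).astype(int)

km = KaplanMeierFitter()
km.fit(t_pre, e_pre, label="pre-era")
print(km.predict(12))                # 1-yr survival estimate for the pre-era

result = logrank_test(t_pre, t_post, event_observed_A=e_pre, event_observed_B=e_post)
print(result.p_value)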
Of those, 152 samples with high surfactant expression (reflecting alveoli) were analyzed. Molecular rejection scores (non-rejection, TCMR, ABMR) were assigned to each TBB using 3-phenotype archetypal analysis (AA) trained on RAT expression (JHLT 36:1192). To explore possible injury-related phenotypes in TBB we performed an independent 4-phenotype AA using RAT expression. Results: 3-phenotype AA of TBB identified ABMR, TCMR, and non-rejection phenotypes (Figure 1A). Non-rejection scores correlated with absence of rejection transcripts (ρs=-0.94), especially effector T cell and NK cell transcripts (NKG7, KLRD1, PRF1). TCMR scores correlated with TCMR transcripts (ANKRD22, CTLA4, ADAMDEC1; ρs=0.91). ABMR scores correlated with ABMR transcripts (ρs=0.80), particularly endothelial transcripts (CAV1, TEK, ROBO4). 4-phenotype AA revealed a new injury phenotype correlated with macrophage transcripts (ρs=0.71). Scores >0.4 in this fourth dimension were described as injury (Figure 1B) and were reported independently of the rejection phenotypes from the 3-phenotype AA. The injury score correlated weakly with all-rejection transcripts (ρs=0.2) and represents macrophage-rich inflammation that may arise either from rejection or from other disease processes. Conclusion: MMDx describes TBB as a composite of rejection (TCMR and ABMR) and a macrophage-rich injury state. We believe that clinical phenotyping with two TBB bites to mitigate heterogeneity in alveolar content (compared to up to 10 for histology) will provide a new, safer approach that will change care by offering the first accurate assessments of TCMR and ABMR. ClinicalTrials.gov: NCT02812290.

Purpose: The OPTN/UNOS added fields to the deceased donor registration form on 3/31/2015 to capture the intent to utilize ex vivo lung perfusion (EVLP) on a donor lung prior to transplantation (Tx). The aim is to examine the geographic utilization of EVLP in the US, and to compare early transplant outcomes between lungs that were perfused (or intended to be perfused) and lungs where perfusion was not considered. Methods: All adult deceased donors with at least one lung recovered for Tx from 4/1/15-6/30/17 were analyzed. The prevalence of the intended use of EVLP was examined by OPO. Post-transplant primary graft dysfunction (PGD) and 1-year recipient survival (Tx between 4/1/15-7/31/16) were compared between donors with lungs intended for EVLP vs. not intended for EVLP. Results: For adult deceased donors (N=5,038), 9,514 lungs were recovered for the purpose of Tx in the US. EVLP was intended in 137 donors for a total of 254 lungs (20 donors of a single lung, 117 donors of double lungs), and 139 (54.7%) of EVLP donor lungs were Tx. The discard rate in the EVLP group was higher than in the non-EVLP group (45.3% vs 4.2%, p<0.001). The use of EVLP varied across OPOs, with 20 OPOs not performing EVLP, 32 with EVLP intent for 1-10 lungs, 2 with EVLP intent for 12 lungs, and 4 with EVLP intent for >20 lungs (Figure 1). There was no significant difference in the odds of a lung with EVLP intended developing PGD grade greater than 1 compared to a lung without EVLP intended (OR 1.01, 95% CI 0.56-1.84). 1-yr patient survival between groups (EVLP, n=47; non-EVLP, n=2,801) was not significantly different (p=0.686). The number of donor lungs intended for EVLP prior to Tx in the US encompassed <3% of the total number of recovered donor lungs. Geographically, the use of EVLP varied greatly across OPOs.
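The EVLP analysis above reports an odds ratio for PGD with a 95% confidence interval. A minimal sketch of the standard odds ratio and Wald confidence interval calculation from a 2x2 table follows; the cell counts are hypothetical, not the study's data.

# Sketch: odds ratio and Wald 95% CI for PGD (grade >1) in EVLP-intended vs
# non-EVLP lungs from a 2x2 table. Counts below are hypothetical.
import math

a, b = 12, 35     # EVLP-intended:  PGD, no PGD   (assumed counts)
c, d = 700, 2101  # non-EVLP:       PGD, no PGD   (assumed counts)

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)          # SE of ln(OR)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(round(odds_ratio, 2), (round(lo, 2), round(hi, 2)))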
Early recipient outcomes did not differ statistically between EVLP-intended donor lungs and conventional donation. To further understand EVLP, data collected on the transplant recipient registry form will be added in 2018 to enhance the monitoring of EVLP utilization. CITATION INFORMATION: Lehman R., Carrico R., Uccellini K., Chan K., Whelan T. EVLP on Donor Lungs: National Trends and Outcomes Am J Transplant. 2018; 18 (suppl 4).

BACKGROUND: Allospecific CD154+ T-cytotoxic memory cells (CD154+TcM) are used clinically to predict acute cellular rejection (ACR) after intestine transplantation (ITx) in children <21 years old within the 60-day post-sampling period. PURPOSE: To determine whether increased pre-transplant rejection-risk measured by CD154+TcM predicts increased immunosuppression with tacrolimus and steroids over 60-month follow-up in children with ITx who were enrolled in pivotal studies of this test system (National Clinical Trial 1163578). METHODS: The clinical course of 22 children, in whom rejection-risk was measured with the immunoreactivity index (IR) of CD154+TcM before ITx, was evaluated for 60 months after ITx. An immunoreactivity index of 1.23 or greater implies increased rejection-risk, as described previously. Tacrolimus whole blood concentrations and prednisone doses were recorded at 12-monthly intervals until month 60 and compared between children at increased and decreased rejection-risk. Immunosuppression management was based on clinical protocols. RESULTS: Median age was 2 years (range 0.7-15). The distribution of male:female gender was 13:9, and of Caucasian:non-Caucasian race was 11:11. Ten children demonstrated increased rejection-risk and 12 children demonstrated decreased rejection-risk before ITx. In between-group comparisons over the 60-month follow-up period, children with increased rejection-risk demonstrated 1) significantly shorter median time to the first biopsy-proven acute cellular rejection event (22 vs 281 days, p=0.033, K-M test), 2) significantly lower 5-year rejection-free survival (8% vs 40%, p=0.033, K-M test) and 3) higher mean (±SEM) tacrolimus whole blood concentrations, which achieved significance at 12 (9.3±0.9 vs 4.5±0.7 ng/ml, p=0.001), 48 (6.5±0.9 vs 3.3±0.4 ng/ml, p=0.01) and 60 months (6.0±0.8 vs 3.2±0.3 ng/ml, p=0.02) after ITx. No differences were seen in steroid use or steroid doses between the two groups. CONCLUSIONS: Increased rejection-risk measured with allospecific CD154+ T-cytotoxic memory cells before intestine transplantation predicts delayed minimization of tacrolimus in children. Therefore, pre-transplant rejection-risk assessment may be a useful adjunct to immunosuppression management after pediatric ITx.

Objective: Graft rejection remains the crux of intestinal transplantation. Although liver-inclusive grafts appear to be protective, the mechanism remains unknown. We hypothesize that T-cell macrochimerism more commonly develops following multivisceral transplant (MVTx) and correlates with decreased graft rejection. Methods: T-cell chimerism was assessed from peripheral blood obtained at set intervals with multicolor flow cytometry. Macrochimerism was defined as donor T-cells representing >4% of T-cells in the recipient blood. We analyzed the correlation of macrochimerism with graft rejection and loss. Results: Seventeen patients underwent MVTx (n=8) or isolated intestinal transplant (iTx) (n=9) from 2011-2017, with a median follow-up of 26 months.
T-cell macrochimerism developed in 10 patients and peaked within 28 days, lasting as long as 378 days (average 94 days). Patients with macrochimerism had significantly higher moderate-to-severe rejection-free survival (p=0.008) and graft survival (p=0.050). Development of macrochimerism was more common in MVTx patients (6 of 8); thus MVTx was also associated with decreased rejection (p=0.020). Lack of macrochimerism was significantly associated with the development of Class I donor-specific antibodies (p=0.001). Only one patient developed self-limited GVHD. Furthermore, the balance of HLA disparities in the graft-versus-host and host-versus-graft directions did not correlate with blood macrochimerism (R2=0.277). Conclusions: T-cell macrochimerism strongly correlates with clinical outcome in intestinal and multivisceral transplantation, providing a potential mechanism of tolerance and a powerful tool for guiding personalized immunosuppression.

The number of HV centers has increased throughout the years, but not in the West. There was a 373% increase in the number of IT from period 1 to 2. This decreased in period 3. During periods 1 and 2, UNOS regions 2, 3, and 8 accounted for 69.7% of all IT. During period 3, regions 2, 8, and 10 accounted for 65.3%. The number of centers doing at least 1 IT in any given year remained consistent during the 3 periods. Transplant patterns differ by region, with fewer HV centers in the West. The number of IT grew in adults between periods 1 and 2 and then remained consistent in period 3, while the number of pediatric cases initially grew but then fell dramatically between periods 2 and 3. This likely reflects widespread implementation and improvement of intestinal rehabilitation. These numbers show there has been an evolution in IT from being experimental before 1990 to being the standard of care for patients with specific indications, nowadays performed in centers with significant expertise.

A. As the pathophysiologic role of tissue-resident memory T cells (TRM) in intestinal transplantation (ITx) is unknown, this study aims to test the hypothesis that TRM play a critical effector role in graft-versus-host disease (GvHD) in ITx patients. B. A cohort of 16 recent ITx patients was prospectively immunomonitored via serial blood and allograft polychromatic flow cytometry (PFC) starting one day before ITx. Five patients with clinically and pathologically confirmed GvHD were identified, with onsets between 40 and 59 (mean 46) days after ITx. CD3+ TRM cells were phenotypically defined as CD62L- CD45RO+/- CD69+ CD103+/-, and effector memory T cells (TEM) as CD45RO+ CD62L- or CCR7- CD45RA-. C. Importantly, immunomonitoring results revealed significantly higher frequencies of allograft CD8+ TRM (68.2%) with an effector memory phenotype in the GvHD group versus the stable ITx cohort (23.9%) (p=0.0065) at the time of GvHD diagnosis. This was accompanied by a trend towards lower mean frequencies of allograft CD4+ TRM in GvHD versus stable ITx patients (12.7% vs. 31.0%, p>0.05). Moreover, immunophenotyping of pre-reperfusion backtable samples confirmed the predominance of donor effector memory type CD8+ TRM (81.3%) in the allografts of GvHD patients prior to ITx. Thus, allograft-derived CD8+ TRM may be critical effector cells which, after engraftment and recirculation, initiate the process of GvHD.
This hypothesis was corroborated by serial blood immunomonitoring, which demonstrated an increase in CD8+ TEM immediately prior to presentation of GvHD (56%) versus the pre-GvHD baseline (26%). D. Prospective immunomonitoring indicates that donor allograft-derived effector memory type CD8+ TRM may re-circulate after ITx and operate as effectors of GvHD, which may have important diagnostic and therapeutic implications.

In intestinal transplantation, the intestinal allograft (IA) presents a formidable challenge for control of alloimmune responses due to constitutive MHC class II expression by intestinal epithelial cells (IE), abundant mobile hematopoietic cells (mHC), and continuous exposure to gut microbiota. Mechanisms and regulation of donor-specific T cell-mediated allograft rejection responses (DSTR) within the IA are unclear due to the lack of a clinically relevant model. We developed a novel ex vivo assay system for DSTR using 3D intestinal organoid (IO) culture. Donor-specific CD4 and CD8 T cell lines (DST) were derived by co-culturing recipient peripheral blood mononuclear cells with irradiated donor-derived B cell lines (BL), and specificity was confirmed by cytolysis of donor but not recipient BL. Donor IOs from pre-transplant IA expressed both HLA class I and II and were co-cultured with DST. Numbers of viable IOs were counted; a "viable organoid" was defined as a translucent organoid with a sharp border and without DST infiltration. The caspase-3/7 activity of IOs was also measured. When donor IOs were co-cultured with DST without extracellular matrix (ECM) for 2-16 hours, caspase-3/7 activation increased as a function of E:T ratio after 4 and 16 hours. When the IOs were co-cultured with DST on ECM for 5 days with or without a cytokine cocktail (CC; IL-2, IL-7, and IL-15), IO numbers in medium with CC decreased in an E:T-dependent manner (R2=0.8449, p<0.0001), but not those without CC (R2=0.04196, p=0.5231) (Fig. 1). Addition of myeloid-derived suppressor cells (MDSC) regulated DST inhibition of donor IO formation. These results suggest that: i) DST directly induces caspase-3/7-dependent apoptosis of IE; ii) T-cell growth factors are needed to sustain DSTR to IOs embedded in the matrix; and iii) MDSCs suppress the DSTR. Our novel ex vivo DSTR assay system will be a useful tool for analysis of immunoregulation among IE, mHC, and DST within the IA.

Introduction: CMV infection remains an important cause of morbidity, allograft enteritis, and allograft loss in small intestinal (SIT) and multivisceral transplant (MVT) recipients. As the highest risk for CMV infection is associated with donor-positive (D+) and recipient-negative (R-) serostatus, we have attempted to limit D+/R- SIT and MVT at our center. Methods: We performed a retrospective analysis to determine whether more restrictive donor selection criteria (i.e., limited D+/R- transplants) would affect the incidence of CMV infection in SIT and MVT recipients at our institution.

Membrane cofactor protein (CD46) is known to attenuate the complement cascade by facilitating cleavage of C3b and C4b. In solid organ xenotransplantation, organs expressing CD46 have been shown to resist hyperacute rejection; however, the value of CD46 expression for islet xenotransplantation remains poorly defined. Here we attempt to delineate the role of CD46 in early neonatal porcine islet engraftment by comparing Gal-knockout (GKO) and hCD46-knock-in (hCD46/GKO) islets in a dual transplant model.
Neonatal GKO and hCD46/GKO piglets were obtained from Revivicor Inc., the pancreata were recovered, and neonatal porcine islets (NPIs) were isolated using a modified Korbutt technique. Rhesus macaques were used as recipients. An islet preparation of each genotype was made for each recipient and infused into separate hemilivers. Seven animals underwent dual transplant without immunosuppression and were sacrificed at 1 hour (n=4) or 24 hours (n=3). Both hemilivers were recovered and fixed for immunohistochemistry (CD46, insulin, neutrophil elastase, platelet, IgM, IgG, C3d, C4d, CD68, caspase 3). Quantitative immunohistochemical analysis was performed using the Aperio ImageScope. The weight of pancreas at recovery (p=0.41) and NPI yield (p=0.69) were comparable between the two genotypes (3.23±0.09 g/piglet and 11829±2164 IEQs/g for GKO; 3.02±0.23 g/piglet and 12976±1866 IEQs/g for hCD46/GKO). Within 1 hour of intraportal infusion of xenografts, no differences were observed between the two types of islets in terms of platelet, antibody, or complement deposition. Cellular infiltration and islet apoptotic activity were also similar at 1 hour. At 24 hours, hCD46/GKO islets demonstrated significantly less platelet deposition (p=0.0129) and neutrophil infiltration (p=0.0136) compared to GKO islets. In contrast, C3d (p=0.38) and C4d (p=0.45) deposition was equal between the two genotypes. In summary, hCD46/GKO islets experienced less platelet deposition and neutrophil infiltration compared to GKO islets, even though complement deposition on islets was similar between the two types of NPIs. Our findings suggest that expression of hCD46 on NPIs potentially provides a survival advantage in vivo by reducing early thrombo-inflammatory events associated with instant blood-mediated inflammatory reaction (IBMIR) following intraportal islet infusion. CITATION INFORMATION: Gao Q., Samy K., Davis R., Song M., MacDonald A., Leopardi F., How T., Williams K., Devi G., Collins B., Kirk A. The Role of CD46 in Early Islet Engraftment in a Dual Transplant Model Am J Transplant. 2018; 18 (suppl 4).

Platelet sequestration and inappropriate coagulation cascade activation are associated with poor outcomes in multiple liver xenotransplant models. Here we evaluate a cassette of additional genetic modifications, primarily targeting coagulation and complement pathway regulation, and donor pharmacologic treatments on these phenomena. Methods: Livers from α1,3-galactosyltransferase knockout (GalTKO) and human membrane cofactor (hCD46) pigs (Group 1, n=3), and GalTKO.hCD46 pigs also transgenic for human endothelial protein C receptor (hEPCR), thrombomodulin (hTBM), integrin-associated protein (hCD47), and heme oxygenase 1 (HO-1) treated with DDAVP and clodronate liposomes (Group 2, n=4), were perfused ex vivo with whole human blood. Heparin was titrated to maintain an ACT >400 seconds. Complete blood counts were measured at timed intervals by hemocytometer. Macrophage and platelet activation was evaluated by thromboxane (TXB2) levels, and thrombin production, as measured by prothrombin fragment F1+2 release, was assayed via ELISA. Perfusions were terminated when vascular flows declined due to high resistance or there were uncorrectable metabolic derangements. Mean survival in the GalTKO.hCD46 group (Group 1) was 305 minutes (SEM 148 min, range 115-597 min) and 856 minutes (SEM 61 min, range 720-960 min) in Group 2 (p=0.012). Average heparin required was 8837 U/hr in Group 1 and 1354 U/hr in Group 2 (p=0.048).
Platelet counts tended to remain higher in Group 2 over the first hour (p=0.158). TXB2 levels were lower in Group 2 over the first hour (p=0.044). Similarly, F1+2 levels tended to be higher in Group 1, though decreased survival times limit analysis (p=0.058). AST

The field of porcine islet xenotransplantation has made significant progress in non-human primate studies in recent years. However, one major barrier preventing clinical translation of islet xenotransplantation is the lack of a clinically available immunosuppressive regimen. Here, we tested two costimulation blockade-based regimens in a non-human primate islet xenotransplantation model. Rhesus macaques (3-7 kg) were used as xenograft recipients. Diabetes was induced with streptozotocin. Neonatal porcine islets were isolated from GKO or hCD46/GKO transgenic piglets (Revivicor, Inc.) by a modified Korbutt technique. All recipients underwent laparotomy, and islet cell suspensions (50,000 IEQ/kg) were subsequently infused into the portal vein. Recipients were induced with either basiliximab at 0.3 mg/kg on days 0 and 2 (n=6) or rhesus anti-thymocyte globulin (rhATG, NHP Reagent Resource) at 4 mg/kg/day on days 0-4 (n=3). All recipients were maintained on belatacept (20 mg/kg on days -2, 0, 3, 7, 14, 21 and biweekly afterwards), tacrolimus (0.1 mg/kg/d) from days -2 to 22, sirolimus (0.05 mg/kg/d) from day 21 onwards, and mycophenolate mofetil daily (25 mg/kg/d). All recipients received daily CMV prophylaxis with ganciclovir. Following transplantation, graft function was monitored with daily morning fasting glucose and biweekly intravenous glucose tolerance testing. Nine rhesus macaques successfully underwent NPI xenotransplantation. Of the six receiving basiliximab induction, none achieved insulin independence following transplant. The median graft survival, as defined by the last day of detectable porcine c-peptide in serum, was 14 days, and 2 recipients did not have detectable porcine c-peptide at day 14, when c-peptide was first measured post-transplant. Of the three receiving rhATG induction, all achieved sustained lymphocyte depletion following induction. Although none reached insulin independence, graft survival was 14, 44 and 83 days, respectively (median survival=44 days). Graft rejection was confirmed by immunohistochemistry, demonstrating islets with a dense lymphocytic infiltrate and no insulin positivity. In summary, we demonstrated partial islet engraftment using two clinically available immunosuppression regimens. In addition, depletional induction followed by costimulation blockade may potentially prolong graft survival.

Powerful gene editing technology is a promising approach to provide genetically modified pigs as donors for clinical cell, tissue and organ xenotransplantation. Prior to starting wide-scale clinical application of xenotransplantation, humoral immune rejection remains an important obstacle to be overcome. In the current study, we assessed the human preformed antibody-dependent xenoimmune response in vitro to porcine erythrocytes from genetically modified pigs. Erythrocytes were isolated from wild-type and genetically modified pigs, including GGTA1/β4GalNT2 knockout (DKO) and GGTA1/CMAH/β4GalNT2 knockout (TKO), as well as from healthy humans. Commercial healthy human serum was purchased. The levels of human antibody binding (IgG/IgM) to the cells were analyzed using agglutination assays and flow cytometry.
Antibody-dependent cell phagocytosis was assessed by ImageStreamX Mark II imaging flow cytometry and confocal imaging.

Introduction: Patients with anti-HLA-A and -B antibodies are difficult to match and represent a possible population for inclusion in pig-to-human xenotransplantation clinical trials. Donor-specific antibody negatively influences allograft outcomes and, similarly, contributes to antibody-mediated rejection in xenotransplantation. Clinically significant antibodies, binding blood group antigens and HLA, can provide insight into understanding antibody subsets in xenotransplantation that bind species-specific glycosylation and swine leukocyte antigen (SLA). It is therefore critical to accurately test for antibody to guide patient selection and future porcine genetic modifications. Methods: Highly sensitized patient sera, with known single HLA antigen reactivity, were selected and grouped as anti-HLA-A only, -B only, or both (n=20, 24, and 12, respectively). Flow cytometric crossmatch was completed on GGTA1/CMAH/B4GalNT2 peripheral blood mononuclear cells (PBMC) and red blood cells to assess IgG and IgM antibody binding. Cross-reactive group patterns in human sera were compared to antigens found on the tested SLA. Results: Among anti-HLA-B sera, 60% of samples showed minimal IgG antibody binding (MFI<5,000), compared with only 16% of those with HLA-A antibodies only. Anti-HLA-A antibodies were found to react to porcine SLA in predicted epitope patterns. Red blood cell reactivity identified some patients with mixed-isotype IgG and IgM anti-porcine antibodies. Conclusion: Flow cytometric crossmatch of PBMC is a useful tool for screening and identifying anti-SLA antibodies in the sensitized. Patients with HLA-B antibodies have a lower frequency of binding porcine cells, with many having a negative crossmatch. Antibody binding to PBMC and red cells identifies the need for careful patient screening and porcine genetic engineering.

Introduction: Metabolic plasticity and mitochondrial bioenergetic homeostasis are altered by organ ischemia. However, the impact of the alloimmune response on the allograft is not known. The goal of our studies is to establish whether bioenergetic dysfunction is linked to metabolic reprogramming of the kidney during rejection. Methods: Kidneys from C57BL/6 mice were transplanted into MHC-incompatible Balb/c mice (allograft). Kidneys from C57BL/6 mice were transplanted into their litter-mates as isograft controls. No immunosuppression was given, as these grafts have prolonged survival and ongoing cellular rejection. Major mitochondrial electron transport chain (ETC) complexes and metabolic regulators, including AMP-activated protein kinase (AMPK), the mechanistic target of rapamycin (mTOR) and hypoxia-inducible factor 1-alpha (HIF-1α), were determined in whole kidney lysates. Western blot analysis also included regulatory components of mitochondrial biogenesis and tissue remodeling. Results: Significant loss of mitochondrial ETC complexes was observed in kidney allografts compared to isografts in the first week post-transplant. This loss of mitochondrial complexes was accompanied by activation of several metabolic switches involved in reprogramming toward glycolytic metabolism, as evidenced by increased activity of mTOR and HIF-1α in allografts versus isografts. Notably, AMPK activation in allografts suggests that a subset of cells experienced significant and prolonged bioenergetic stress.
Importantly, while reduced amounts of mitochondrial ETC complexes persisted in allografts for several weeks, isografts showed a remarkable recovery of ETC complexes, similar to the levels observed in naïve kidneys. Our studies indicate that loss of bioenergetic homeostasis is related to mitochondrial dysfunction and accompanied by metabolic reprogramming in rejecting allografts. Importantly, isografts show time-dependent recovery of mitochondrial components, dissipation of mTOR and HIF-1α, and normalization of the bioenergetic sensor and metabolic regulator AMPK. Targeting this pathway may ameliorate the kidney injury associated with rejection.

Purpose: To study the impact of cytomegalovirus (CMV) recipient-positive (R+) serostatus upon renal allograft injury, murine CMV (MCMV) D+/R+ kidney transplants using recipients infected with varying MCMV doses and strains were assessed for immune mechanisms of graft injury. Methods: BALB/c to C57BL/6 renal transplants were performed with cyclosporine immunosuppression. Twelve weeks pre-transplant, donors were infected with MCMV ∆m157 at 10^4 plaque-forming units (pfu); recipients with MCMV ∆m157 at low dose (10^2 pfu) or high dose (10^4 pfu), or with MCMV wildtype (MCMVwt) at 10^4 pfu. Allografts were analyzed by cytokine flow cytometry (CFC), cytokine array (Legendplex), and histopathology (24-point scale). Allograft-derived T cells were stimulated with peptides for either inflationary (M38) or non-inflationary (M45) epitopes for CFC. Groups were compared using the Student's t-test and ANOVA, accepting significance at p<0.05. Results: Low-dose recipients had greater interferon-gamma+ (IFN-ɣ+) NK cells compared to high-dose recipients (2174±609 cells/g vs. 484±132 cells/g, p=0.03), greater intragraft CD4+ Th17 cells (18569±2263 cells/g vs. 6569±2555 cells/g, p<0.01), and higher Th17-associated cytokines including IL-1β, IL-6, IL-23, IL-17A, and GM-CSF. High-dose recipient allografts had greater TNF-α+ NK cells than low-dose recipients (37656±10501 vs. 15647±2529, p<0.05). Different-strain D+/R+ allografts had greater histopathologic injury than same-strain transplants (damage scores 14.3±1.8 vs. 11.9±1.4, p=0.0318), and more granzyme B+ (GzB+) NK cells (8086±2453 cells/g vs. 1063±442 cells/g, p=0.02). All D+/R+ groups had similar numbers of intragraft virus-specific CD8+ CTLs, and no differences in responses to inflationary and non-inflationary epitopes. Conclusions: In this model, recipient virus dose and strain influenced characteristics of immune-mediated renal allograft injury, with significant differences in Th17 and NK cells, but no differences in antiviral CD8+ CTL responses. These results suggest that mechanisms of R+ CMV-induced allograft injury may differ according to conditions of the recipient's primary infection.

Hepatocyte transplantation (HcTx) is a potential therapeutic modality for patients with liver disease. Despite examples of antigen acceptance in the liver, allogeneic HcTx into patients exhibits significant, but short-lived, benefits, with immunogenicity remaining a barrier to allograft survival. In our well-characterized animal model, Hc allografts are highly immunogenic and stimulate multiple rejection mechanisms including antibody-mediated rejection, CD4-dependent CD8-mediated rejection, and CD4-independent CD8-mediated rejection. Our published reports and new data suggest that type I NKT cells (iNKTs) are critical to the development of each rejection pathway.
We first reported that alloantibody titer following HcTx is critically inhibited (~10-100 fold) in the absence of iNKTs (Ja18 KO recipients). To further investigate the effects of iNKTs on humoral as well as cellular alloimmunity, we utilized immunohistochemistry (IHC), flow cytometry, in vivo cytotoxicity assays, and analysis of allograft survival following HcTx. Ja18 KO recipients exhibited significantly reduced germinal center size (GL-7 staining by IHC; day 14 post-Tx) and number of splenic IL-4+ and IL-21+ CD4+ TFH cells [day 7; wildtype (WT): IL-4=100±4x10^3, IL-21=145±5x10^3 vs. Ja18 KO: IL-4=25±3x10^3, IL-21=75±14x10^3 cells/spleen; p<0.04] by flow cytometry compared to WT recipients. Additionally, CD4-dependent CD8+ T cell-mediated cytotoxicity to allogeneic targets (day 7; 89.8±2.6%) was reduced by ~70% in Ja18 KO recipients (26.9±3.3%, p<0.001). When iNKT cells were adoptively transferred (AT) into Ja18 KO recipients (same day as Tx), CD4-dependent CD8+ T cell-mediated cytotoxicity was significantly enhanced (35.3±4.6%; p=0.01). Interestingly, CD4-independent CD8+ T cell-mediated cytotoxicity (26.9±5.1%) was abrogated in CD4-depleted Ja18 KO recipients (2.0±0.9%; p=0.009), and allogeneic HcTx exhibited prolonged survival in CD4-depleted Ja18 KO recipients (MST>day 30 vs. day 14 rejection in CD4-depleted WT recipients; p=0.03). When iNKT cells were AT into CD4-depleted Ja18 KO recipients, CD4-independent CD8+ T cell-mediated cytotoxicity was significantly enhanced (6.0±0.8; p=0.001) and kinetics of allograft rejection were accelerated (MST=day 14; p=0.03). These data support the hypothesis that iNKTs are central to the humoral- and cellular-mediated responses following HcTx, and targeting iNKTs for immunosuppression may lead to enhanced graft survival.

Introduction: Solid organ transplantation has seen an increase in the utilization of older organs. Here, we investigate how aging-associated kinetics of damage-associated molecular patterns (DAMPs), including mtDNA, drive the augmented susceptibility of older organs to IRI through dendritic cells (DCs), with subsequent inferior graft survival. Methods: Old and young mice underwent bilateral clamping of the renal pedicles (22 min ischemic time). mtDNA, cytokine and senescence marker levels were tested by qPCR; DC and T cell activation were characterized by FACS. Old and young DCs were adoptively transferred into young recipients that subsequently received young or old cardiac allografts. Results: DCs of old naïve mice showed higher frequencies and levels of maturation in parallel to increased baseline levels of mtDNA. Importantly, renal IRI induced a prominent release of mtDNA into the circulation of old animals (after 48h; p=0.019) and increased IFN-ɣ expression in splenic CD8+ T cells (p=0.0001). Isolated DCs showed a dose-dependent up-regulation of CD40 with augmented amounts of IL-6 in the presence of mtDNA; the addition of a TLR9 antagonist attenuated this pro-inflammatory response of old DCs. In addition, old DCs promoted IFN-ɣ and IL-17 responses of allogeneic T cells in vitro. Of particular relevance, adoptive transfer of old but not young DCs prior to transplantation shortened cardiac allograft survival (p<0.0001). Treatment with the senolytic agents Dasatinib (5 mg/kg) and Quercetin (50 mg/kg) not only reduced local expression of the senescence markers p16 and p21 in kidneys, but also reduced local and systemic mtDNA levels in old mice.
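Several of the survival comparisons above (for example, cardiac allograft survival after adoptive transfer of old versus young DCs) rest on Kaplan-Meier estimation compared with the log-rank test. The following is a minimal illustrative sketch of that analysis in Python using the lifelines package; the group names and toy numbers are hypothetical and are not data from any of these studies.

# Illustrative Kaplan-Meier / log-rank comparison (hypothetical data, not study results)
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Graft survival times (days) and event indicators (1 = graft loss, 0 = censored at end of follow-up)
days_young_dc = [100, 100, 100, 100, 100]   # e.g., grafts still surviving at day 100
event_young_dc = [0, 0, 0, 0, 0]
days_old_dc = [12, 15, 18, 22, 30]          # e.g., early graft loss
event_old_dc = [1, 1, 1, 1, 1]

kmf = KaplanMeierFitter()
kmf.fit(days_young_dc, event_observed=event_young_dc, label="young DC transfer")
print(kmf.survival_function_)

kmf.fit(days_old_dc, event_observed=event_old_dc, label="old DC transfer")
print(kmf.median_survival_time_)

# Log-rank test for a difference between the two survival curves
result = logrank_test(days_young_dc, days_old_dc,
                      event_observed_A=event_young_dc,
                      event_observed_B=event_old_dc)
print(result.p_value)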
Memory T cells with donor reactivity pose a major barrier to successful allograft transplantation and tolerance induction. Longer cold ischemic storage (CIS) prior to transplantation promotes endogenous donor-reactive memory CD8 T cell infiltration into cardiac allografts and their activation, which directly mediates CTLA-4Ig-resistant rejection of the highly ischemic allografts between days 15-22 post-transplant vs. >60 days for allografts subjected to minimal CIS. Here, we investigated mechanisms provoking the increased endogenous memory CD8 T cell numbers and activation within allografts subjected to 0.5 vs. 8 hr CIS. BrdU labeling showed that graft-infiltrating memory CD4 and CD8 T cells significantly proliferated and accumulated within allografts subjected to 8 vs. 0.5 hr CIS. In contrast to the resistance following CTLA-4Ig treatment, peri-transplant recipient treatment with anti-CD40L mAb inhibited early graft-infiltrating memory CD4 and CD8 T cell proliferation and markedly prolonged survival of allografts subjected to 8 hr CIS beyond day 40 post-transplant. The increased proliferation of endogenous memory CD8 T cells within the higher-risk allografts required recipient CD4 T cells, graft dendritic cells, and graft expression of class II MHC, CD40 and IL-12p40, but not p35. qPCR analysis showed that expression of IL-12p40, but not IL-12p35 or IL-23p19, mRNA was elevated at 48 hr post-transplant in allografts subjected to 8 hr vs. 0.5 hr CIS, and the increase in p40 mRNA was reduced to the levels seen in 0.5 hr CIS allografts by depletion of recipient CD4 T cells, but was restored by treating CD4 T cell-depleted recipients with agonist anti-CD40 mAb. Consistent with this, p40 homodimers, but not p70 heterodimers, were increased in allografts subjected to 8 hr CIS as well as in recipient serum, and these increases were dependent on recipient CD4 T cells. Peri-transplant anti-p40 mAb reversed CTLA-4Ig-resistant rejection of higher-risk allografts, and peri-transplant p40 homodimers induced the proliferation of endogenous memory CD8 T cells within allografts subjected to 0.5 hr CIS to the levels observed in allografts subjected to 8 hr CIS. These data indicate that the activation of endogenous memory CD8 T cells within higher-risk cardiac allografts requires CD4 T cell help via CD40-CD154 interactions with graft dendritic cells to induce their production of p40 homodimers.

Endogenous memory CD8 T cells infiltrate MHC-mismatched cardiac allografts within 24 hrs after graft reperfusion in mice. Our recent studies indicate that prolonged cold ischemic graft storage provokes intense inflammation within hours after allograft reperfusion and promotes endogenous memory CD8 T cell rejection of the allograft. The current studies tested anti-VLA-4 mAb inhibition of this early CD8 T cell infiltration and activation in such higher-risk allografts. Syngeneic or A/J (H-2a) hearts were subjected to 8 hrs of cold ischemic storage in University of Wisconsin solution and then transplanted to C57BL/6 (H-2b) mice. Anti-VLA-4 mAb (250 µg) was given on days -1 and 0. Grafts were harvested on day 2, and T cell, macrophage, and neutrophil infiltration into allografts was assessed by flow cytometry and immunohistochemistry. mRNA encoding inflammatory cytokines was measured by qRT-PCR from total graft homogenates.
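The qRT-PCR measurements of intragraft cytokine mRNA just described are typically reported as relative expression. The abstract does not state its normalization scheme, so the sketch below shows only a generic 2^-ΔΔCt calculation with entirely hypothetical Ct values and gene names; it is an illustration of the arithmetic, not the authors' pipeline.

# Generic 2^-DeltaDeltaCt relative expression calculation (hypothetical Ct values, not study data)
def relative_expression(ct_target, ct_reference, ct_target_ctrl, ct_reference_ctrl):
    """Fold change of a target gene vs. a control group, normalized to a reference (housekeeping) gene."""
    delta_ct_sample = ct_target - ct_reference              # normalize sample to the reference gene
    delta_ct_control = ct_target_ctrl - ct_reference_ctrl   # normalize control to the reference gene
    delta_delta_ct = delta_ct_sample - delta_ct_control
    return 2.0 ** (-delta_delta_ct)

# Example: a target cytokine in an 8 hr CIS allograft vs. a 0.5 hr CIS allograft,
# normalized to a housekeeping gene such as GAPDH (all Ct values hypothetical).
fold = relative_expression(ct_target=24.1, ct_reference=18.0,
                           ct_target_ctrl=27.3, ct_reference_ctrl=18.2)
print(f"Relative expression: {fold:.1f}-fold")   # ~8-fold with these toy values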
Peri-transplant anti-VLA-4 mAb markedly decreased day 2 post-transplant infiltration of CD4 and CD8 T cells and macrophages, but not neutrophils, into the higher-risk allografts to the levels observed in isografts, and was accompanied by significant decreases in expression of genes encoding macrophage and T cell chemoattractant cytokines, but had little/no effect on expression of neutrophil chemokines. Anti-VLA-4 mAb also significantly reduced intragraft expression levels of TNF-α, CXCL9, CXCL10, and IFN-ɣ on day 2 post-transplant.

Background: In 2016, 800 kidneys from HCV+ deceased donors were discarded. Short-term (6-month) data from the first 10 subjects in the THINKER-1 trial (NCT02743897; funder: Merck) suggested that kidneys from HCV-infected donors could be safely transplanted into HCV-negative patients. We report 12-month data on the 10 THINKER-1 recipients and 6-month data on 10 recipients from an expanded cohort (THINKER-2). Gentile C., Smith J., Kaminski M., Sicilia A., Hasz R., Suplee L., Reddy R., Bloom R., Reese P. Transplanting Kidneys from HCV-Infected Donors into HCV-

Background: RCC is the most common renal tumor reported to the OPTN and reviewed by DTAC for assessment of potential donor transmission. Organs with concern for RCC are frequently discarded. Methods: All reported cases of suspected RCC between 1/1/08 and 12/31/16 were evaluated by DTAC for organ and recipient outcomes. In addition to recipient outcome data provided through the report, standard OPTN Post-Transplant Malignancy forms were reviewed to identify subsequent recipient malignancies. An event was defined as all potential malignancy transmissions from a donor to ≥1 recipient(s). Results: 179 reports of suspected donor RCC transmissions were submitted to the DTAC; 170 were deceased donors (DD) and 9 were live donors (LD). From the 170 DD, 321 kidneys were recovered. 160 of those kidneys were transplanted into 154 different recipients; 161 kidneys were recovered and then discarded. From the 9 live donors, all kidneys were transplanted. 147 donors had a suspicious lesion noted at procurement. In 65 DD with suspected RCC, both kidneys were discarded; when RCC was confirmed (N=60) it was unilateral in all but 1 (5 cases were benign tumors). 7 kidneys were lost from unrelated causes or explanted when a biopsy revealed RCC. 11 recipients were diagnosed with RCC from 10 donors at 2 to 18 months post-transplant (median = 7 months). RCC was not suspected at procurement in any of these cases. Finally, 9 patients were identified with donor-derived RCC. The interval between transplant and RCC diagnosis ranged from 34 to 160 months. These tumors were felt not to have been identifiable at transplant and were deemed donor-derived but not transmitted. Conclusions: These data suggest that excision of small, well-differentiated renal cell carcinomas prior to transplant provides outcomes similar to those in patients who receive kidneys from donors without RCC. Additionally, the use of contralateral kidneys and non-renal organs from these donors is not associated with an increased risk of RCC development.

The kidney donor profile index (KDPI) helps discern kidney function. While recipients must consent for KDPI >85% kidneys, clinicians often decide whether to accept KDPI 71-85% offers, specifically for young, healthy patients. Older patients with prolonged wait times have been shown to benefit from these kidneys; yet other patients may face equipoise or harm.
We sought to analyze the effect of accepting a kidney with KDPI 71-85% versus waiting for a better offer among different age, diabetic and wait time cohorts. The SRTR database was used to analyze first-time recipients of solitary, kidney-only cadaveric renal transplants from 2006-2016. Using time-dependent regression, an exposed cohort of patients who received KDPI 71-85% transplants (higher KDPI) was compared to a cohort who either remained on the waitlist or received KDPI <71% transplants. Patients were followed until death, graft failure, or the end of the study period. Subgroup analysis was performed among different age, wait time and diabetic cohorts. A cumulative ratio analysis was used to assess the hazard associated with higher KDPI transplant in specific scenarios. Results: 237,541 patients were included in the study. There was a significantly decreased risk of death or graft failure over the study period in the higher KDPI cohort (HR 0.394, p<0.001). This benefit remained true over all age, diabetic, and wait time subgroups. Cumulative ratio analysis failed to demonstrate a scenario where a conservative approach showed benefit over higher KDPI transplantation. Among all analyzed scenarios, recipients benefit from higher KDPI transplantation. Practitioners should consider transplanting these kidneys rather than waiting for better offers.

The adjusted odds of DGF in DCD recipients were higher for all AKI stages. There was a higher risk of graft failure or death in DCD graft recipients from donors with stage 2 AKI (HR 1.38, p<0.001). This was not noted in other DCD recipients.

We studied the relative immunogenicity of HLA-A, -B, -C, -DR and -DQ MM (serological) in a cohort of 291 kidney transplant patients from the BENEFIT and BENEFIT-EXT trials who received a calcineurin inhibitor (cyclosporine) as first-line immunosuppression. A total of 1129 MM were studied. From these MM, 60 de novo DSA (dnDSA) appeared. DQ MM were significantly more immunogenic than MM from other loci (Figure 1A), accounting for 35% of all dnDSA produced. Among the highly immunogenic MM (≥10% immunogenicity and MM frequency ≥7), DQ5 and DQ7 were the most immunogenic, in that 45% of MM led to the formation of dnDSA, followed by DQ2 (29%), DQ8 (22%), DQ6 (14%), DQ4 (14%), A29 (11%), A28 (11%), A1 (11%), A24 (10%), DR3 (10%), and A11 (10%) (Figure 1B). The less immunogenic MM (<10% immunogenicity and MM frequency <7) included: A2, A23, A26, A32, B8, B27, B45, B50, B51, B57, Cw5, Cw6, and Cw7 (for HLA class I); and DR3, DR4, DR13, DR14, DR16, DR17, and DQ3 (for HLA class II). Lastly, 95% of MM did not lead to the formation of dnDSA (Figure 1C). With the use of a calcineurin inhibitor as first-line immunosuppression, MM at DQ loci are the most immunogenic compared to MM from other HLA loci. To prevent development of dnDSA and antibody-mediated graft injury, highly immunogenic MM (upper-left quadrant in Figure 1B) should be considered in donor/recipient matching. Many MM did not induce dnDSA (listed below the plot in Figure 1), and may not be immunogenic.
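As a purely illustrative sketch of the per-antigen immunogenicity calculation described in the mismatch study above (dnDSA events per mismatched antigen, with "highly immunogenic" defined as ≥10% immunogenicity and MM frequency ≥7), the following Python snippet uses hypothetical counts rather than the BENEFIT/BENEFIT-EXT data.

# Per-antigen mismatch (MM) immunogenicity: dnDSA events / number of times the antigen was mismatched
import pandas as pd

# Hypothetical counts for a few antigens (not the study data)
mm = pd.DataFrame({
    "antigen":    ["DQ7", "DQ2", "A24", "B8", "Cw7"],
    "n_mismatch": [11,     14,    20,    9,    15],   # how often the antigen appeared as a MM
    "n_dnDSA":    [5,       4,     2,    0,     0],   # how often that MM led to de novo DSA
})

mm["immunogenicity_pct"] = 100 * mm["n_dnDSA"] / mm["n_mismatch"]
mm["highly_immunogenic"] = (mm["immunogenicity_pct"] >= 10) & (mm["n_mismatch"] >= 7)

print(mm.sort_values("immunogenicity_pct", ascending=False))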
CONCLUSIONS: 1-year results of this 2-year study show that a CNI-free and steroid-free BELA-based protocol is effective and safe: 1) similar primary endpoint rates; 2) acute rejection rates are higher and of increased histologic severity in BELA pts, but without adverse effects on graft function and survival (data not shown); 3) similar infections and hematologic toxicity; 4) NODAT, GI toxicity, and neurotoxicity lower in BELA-treated pts compared to TAC.

Introduction: Efficacy of belatacept when converted from calcineurin inhibitors (CNI) in high immunologic risk patients has not been established. We examined the impact of HLA sensitization on outcomes in patients converted from CNI to belatacept at our center. Methods: One hundred and eight adult kidney transplant recipients converted from CNI to belatacept between 7/1/12 and 9/30/17 were included. The outcomes of acute rejection, graft and patient survival, and eGFR slope post-conversion were compared between the highly sensitized (HS) (cPRA ≥30% or re-transplant) and non-HS (cPRA <30%) patients. There was a higher incidence of acute rejection in the HS group, but no difference in graft or patient survival post-conversion. All rejections were cell-mediated and most occurred in patients converted early (<6 months) after transplant (HS 4/4; non-HS 5/6). On average, eGFR slope improved post-conversion in non-HS recipients but exhibited a slight decline in HS recipients. Conclusion: There was a higher incidence of acute rejection after belatacept conversion in HS vs non-HS patients, primarily in patients converted early after transplant. Conversion from CNI to belatacept may allow for better renal function but should be done cautiously in high immunologic risk patients. Optimal timing of CNI withdrawal in these patients needs further investigation.

Belatacept is increasingly used as maintenance immunosuppression to improve long-term outcomes in kidney transplant. Since acute rejection in patients on belatacept has been noted to be more frequent and histologically severe, identifying, through precision medicine, patients who will benefit from belatacept is important. We investigated pre-transplant recipient immune profiles to determine which lymphocyte population would be the best predictor for identifying those who will be at lowest risk for costimulation blockade-resistant rejection. We prospectively enrolled 20 kidney transplant recipients (8 DDRT; 12 living) at our center to receive de novo belatacept from May 2016 to March 2017. PBMCs were collected prior to transplant and at the time of biopsies. All patients received thymoglobulin 3 mg/kg for induction and were maintained on belatacept. Patients were initially on MMF but were converted to mTORi after 1 month, and were maintained on prednisone 5 mg daily. Protocol biopsies were performed at 6 months. On for-cause biopsies, 2 patients had ACR 1a (at 4 wks; 6 wks), 1 had ACR 2b (at 2 mo), and 1 had AMR (at 4 mo). Six patients had borderline rejection on protocol biopsy; 9 patients did not have any inflammation. Three of 4 rejections occurred in those who were on MMF. Eighteen remained on belatacept and 2 were converted to tacrolimus. No correlation was found between rejection and CD4+CD57+ T cell % in pre-transplant PBMC. Patients with rejection or borderline changes had significantly higher % of CD8+CD28- T cells in pre-transplant PBMC compared to those who had normal biopsies.
Those with greater than 50% CD8+CD28- T cells pre-transplant were more likely to experience rejection (OR 18.7, p=0.02).

Recent studies have identified co-prescription of benzodiazepines and opioids as a risk factor for adverse outcomes in the general population. We previously described associations of opioid use before kidney transplantation (KTx) with mortality after KTx, but the implications of benzodiazepine use in the KTx population have not been described. We examined a novel linkage of SRTR registry data with records from a pharmaceutical claims warehouse (2008-2015) to characterize benzodiazepine and opioid use in the year before KTx and associations (adjusted hazard ratios with 95% confidence limits) with death over 1 yr post-KTx. Among 75,430 KTx recipients with available medication data in the year prior to transplant, 7.3% and 43.1% filled a prescription for a benzodiazepine or an opioid, respectively, in the year prior to transplant. Use of both medications was more common among recipients who were white, unemployed, and previous KTx recipients. Benzodiazepine use rose with higher opioid use, from 3.2% among opioid non-users to 10.2% among those with the highest level of opioid use (Fig 1). Compared to non-users, high-level pre-transplant benzodiazepine use was associated with a 51% increased risk of death in the year following transplant (aHR 1.51; 95% CI 1.14-1.99). Opioid use bore a strong graded relationship with post-KTx survival, and the prognostic impact of high pre-KTx benzodiazepine use was preserved after adjustment for opioids (aHR 1.37; 95% CI 1.04-1.83), although an interaction was not present (Fig 2). Benzodiazepine use is correlated with opioid use before KTx, and use of these agents has additive associations with post-KTx mortality. Future research is needed to define mechanisms of these associations and the impact of reducing co-prescription on improving outcomes after KTx.

The aim of this study is to assess the associations between traditional proxies of medication non-adherence and graft outcomes in contemporary kidney transplant recipients (KTRs). Methods: Retrospective longitudinal cohort study of KTRs between 1/2014 and 7/2017. We assessed 7 potential predictors of non-adherence: percentage of days covered (PDC) of medications (refill adherence), high tacrolimus concentration variability (FK CV), age ≤30 years, race, insurance, pharmacist's assessment of the patient's understanding of the medication regimen at the first clinic visit, and medication-related problems resulting in readmission during the first year. PDC <80% was defined as low adherence and FK CV >40% was defined as high variability. These were validated using the outcomes of death-censored graft survival and rejection-free survival. Results: 513 KTRs were included. Four of the 7 potential predictors of non-adherence demonstrated association with outcomes: high tacrolimus trough variability, younger age, Medicaid insurance, and readmission for medication-related problems. Immunosuppression refill adherence (PDC), AA race, and pharmacist's assessment of the patient's understanding of the medication regimen were not associated with outcomes (Table 1). Based on this, patients were categorized as high risk if they had 2 or more of the 4 predictors of non-adherence. These patients were at a substantially higher risk of both acute rejection (Figure 1, aHR 3.63 (1.69-7.81); p=0.001) and death-censored graft loss (Figure 1, aHR 2.23 (0.88-5.61); p=0.088).
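To make the risk-stratification step just described concrete, here is a minimal sketch of how a composite "high-risk" flag (2 or more of the 4 predictive factors) could be built and related to rejection-free survival with a Cox proportional hazards model; the column names and toy data are hypothetical placeholders, and this is not the study's actual code.

# Composite non-adherence risk flag and Cox proportional hazards model (hypothetical data)
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    # binary predictors of non-adherence
    "high_tac_cv":     [1, 0, 1, 0, 1, 0, 0, 1],   # tacrolimus trough CV > 40%
    "age_under_30":    [0, 0, 1, 0, 1, 0, 0, 0],
    "medicaid":        [1, 0, 0, 0, 1, 1, 0, 0],
    "med_readmission": [0, 0, 1, 0, 0, 0, 0, 1],
    # outcome: months to acute rejection (or censoring) and event indicator
    "months":          [6, 10, 4, 40, 24, 30, 42, 12],
    "rejection":       [1, 1, 1, 0, 0, 0, 0, 1],
})

predictors = ["high_tac_cv", "age_under_30", "medicaid", "med_readmission"]
df["high_risk"] = (df[predictors].sum(axis=1) >= 2).astype(int)   # flag: >= 2 of 4 factors present

cph = CoxPHFitter()
cph.fit(df[["high_risk", "months", "rejection"]],
        duration_col="months", event_col="rejection")
cph.print_summary()   # hazard ratio for the composite high-risk flag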
Conclusion: Traditional risk factors for non-adherence, including immunosuppression refill adherence and AA race, may not be predictive of graft outcomes in contemporary KTRs. However, tacrolimus variability, Medicaid insurance, younger age and medication-related readmissions continue to be strongly associated with acute rejection and graft loss.

Viral co-detection occurred in 15 (6.4%) cases. The most common co-pathogens were HRV, CoV and PIV. Prolonged viral detection (>4 weeks) was seen with hMPV, CoV and HRV. The incidence of individual CRVs varied year-to-year, but seasonal patterns were similar between KTRs and the general population, with a trend towards higher rates of HRV and CoV, and lower rates of influenza B, in KTRs (Figure 1). CRVs had temporal patterns in KTRs, with seasonal peaks of RSV, CoV and influenza A (Figure 2). Conclusions: To our knowledge, this is the first study examining CRV epidemiology in KTRs. CRV trends were similar, but not identical, to the general population. HRV and CoV were more frequent in KTRs. The impact of CRVs on allograft function and patient outcomes will be explored. These findings can be used to inform recommendations for PCR-based respiratory testing in KTRs.

Preventability was assessed using modified AHRQ criteria. Rates were estimated per 100 patient-years (100 pt-yrs), and multivariable generalized linear modeling was used to assess adjusted trends in rates over time; Cox regression was used to assess the adjusted impact of readmission etiology on graft survival. Results: 1,840 KTX were included, encompassing 12 years of transplants and 1,485 readmissions during the 13 years of follow-up (80.7 per 100 pt-yrs). Infections (23.0 per 100 pt-yrs), surgical complications (10.8 per 100 pt-yrs), GI (6.6 per 100 pt-yrs) and rejection (5.5 per 100 pt-yrs) were the most common etiologies. Over time, rates for infections, surgical complications, diabetes, electrolytes and preventable readmissions all significantly increased (p<0.05), while rejection readmission rates substantially decreased (p<0.01, Figure 1). There was considerable heterogeneity in readmission etiology and its impact on graft loss; this also changed over time (Figure 2).

This was driven mainly by a lower risk of death (HR=0.41, CI 0.24-0.71, p=0.002) (Fig. 2) in the older kidney transplant recipients. ACEI/ARB use was also associated with a lower risk of AKI events after 1 year (HR 0.70, CI 0.52-0.95, p=0.02) (Fig. 3). Moreover, ACEI/ARB was not associated with increased risk of acute rejection or hospitalization. Statin or ASA use did not significantly impact graft outcomes.

The treatment strategy "not accepting a DCD" was favoured 70% of the time for a MELD score of 15-20, whereas it was only favoured 29% of the time for MELD >30. In 2-way sensitivity analysis, accepting a DCD-SLKT was the preferred treatment strategy irrespective of waitlist mortality and transplant rate for MELD >30 (Figure 1). In conclusion, the benefit of waiting for DND-SLKT decreases with higher MELD scores; however, the IV of waiting for DND-SLKT is small even at lower MELD scores. Thus, the use of DCD-SLKT should be encouraged in patients with advanced liver-kidney disease to improve overall QALYs (patient perspective) and organ supply (societal perspective).

Transplant professionals have struggled to formulate defensible standards for combined heart-kidney and liver-kidney transplantation (HKTs and LKTs). In these recipients, recovery of native renal function can occur.
They require high-quality organ donors, but kidney survival may be shorter than in kidney-alone recipients, who have no chance of recovery of native renal function, more waiting time, and a clear survival benefit with transplantation. It has been left to specialists at individual centers to determine the necessity for the combined procedures. We profiled selection practices using UNOS data from 2012 to 2016. Among the 20 programs doing 40 or more heart-alone transplants per year, 8 did <3% and 4 did >10% of them as HKTs. Among the 19 liver programs doing over 100 liver-alone transplants/year, 4 centers did <5% and 5 did >15% of them as LKTs. In summary, [1] in 2016, HKTs and LKTs utilized almost 18% of all available kidneys with KDPI <35; these combined transplants respectively used kidneys with KDPI >35 about 30% and 45% of the time. [2] Many centers appear to allocate kidneys to relatively few borderline heart or liver transplant candidates, while many others are more aggressive. [3] The newly reformulated LKT selection guidelines may also be variably interpreted, and [4] may actually increase kidney use by conservative centers, as every liver recipient with a low post-transplant GFR at 3 months will be prioritized. However, because donor requirements will not be as stringent, fewer low-KDPI kidneys may be used. [5] Combined transplants will likely continue to utilize many desirable low-KDPI kidneys.

With the ongoing organ shortage, there has been an increased interest in using moderately steatotic liver allografts. The impact of such allografts on recipients' post-liver transplant (LT) kidney function remains poorly studied. To investigate this in detail, we examined the correlation between moderate allograft steatosis and post-transplant acute kidney injury (AKI) in a large cohort of LT recipients with protocol biopsies. Methods: 786 LT recipients from January 2009 to December 2016 were identified and their data collected. All allografts were biopsied before implantation, and the degree of steatosis (macro-, micro-, or both) was assessed by designated liver pathologists at our institution. Moderate steatosis was defined as macrosteatosis of 30-60%. Acute kidney injury (AKI) was defined per the KDIGO criteria. LT recipients with pre-LT AKI were excluded. Chi-square and unequal-variance t-tests were used to compare variables, and Kaplan-Meier plots for survival.

Renal dysfunction is a common complication after liver transplantation. The increased use of DCD livers has been associated with postoperative acute kidney injury (AKI). However, the relation between DCD grafts and development of chronic kidney disease (CKD) is less well defined. Our aim was therefore to assess risk factors, including graft type and AKI, with impact on development of CKD after liver transplantation. We included all patients who underwent primary liver-only transplantation between 2007-2015 for end-stage liver disease. Patients with a survival of less than 3 months after transplantation were excluded. eGFR was calculated using the MDRD-4 formula, and renal function was divided into 3 groups: no CKD (eGFR ≥60), mild CKD (eGFR 30-59) and severe CKD (eGFR <30). Postoperative AKI was defined according to the KDIGO criteria. A total of 961 patients were included (72% DBD and 28% DCD grafts). During the study period, 43% of the patients developed CKD. Importantly, severe CKD and end-stage renal disease occurred in only 3% and 1%, respectively.
DCD recipients had a more pronounced drop in kidney function in the first postoperative week, but they recovered quickly, and long-term kidney function was comparable with those receiving a DBD graft (Figure 1). Only recipients requiring renal replacement therapy (RRT) in the immediate post-transplant period had significantly impaired long-term kidney function (Figure 2). This was confirmed in a multivariable Cox regression analysis for development of CKD (AKI requiring RRT: HR 1.6, 95% CI 1.2-2.1, P=0.002). Recipients with severe postoperative AKI requiring RRT are less likely to experience a full recovery of long-term kidney function. Interestingly, CKD did not occur more frequently after implantation of DCD grafts. Overall, it is essential to identify risk factors and treat recipients at risk for severe postoperative AKI to preserve long-term kidney function.

according to LDRI. We observed similar patient, liver and kidney graft survival for KDPI <20, 20-85, and >85% for LDRI <1.6 and ≥1.6, suggesting lack of a synergistic interaction between LDRI and KDRI in KAL recipients. Conclusion: Patient and death-censored liver graft survival is significantly lower in patients who do not have access to kidney transplantation, and this effect is exacerbated for those who received a lower-quality LDRI. There is a direct correlation between the quality of the kidney and expected patient survival for recipients of a KAL. The quality of the kidney is also an important factor to predict kidney and liver graft survival. The confirmation of our results with prospective data will allow us to create a better allocation algorithm toward maximizing utility of limited resources in the future.

There were no differences in mortality risk between overdose-death or trauma-death hearts and lungs (p-values>0.05). Overdose-death kidneys, livers, hearts, and lungs were associated with 26%-12% lower mortality risk than medical-death organs (all p-values<0.05). However, overdose-death kidneys, livers, and hearts were 13-50% more likely to be discarded than trauma-death organs (p-values<0.02, Table 2). Overdose-death livers, hearts, and lungs were similarly likely to be discarded as medical-death organs (p-value>0.05), and overdose-death kidneys were 10% less likely to be discarded than medical-death kidneys (p-value<0.001). CONCLUSIONS: Overdose-death donor transplants were associated with similar or better patient survival, yet overdose-death organs were disproportionately discarded. The transplant community should minimize organ discard from overdose-death donors.

KAS (2012, 2014, 2016) to assess actual vs. predicted center registration volume. We used a linear regression model, based primarily on historical trends, to predict billable registrations, and compared actual registrations to predicted utilizing a Bayesian analysis of the observed-to-expected ratio, similar to MPSC assessment of center-specific post-transplant outcomes from the SRTR. To better understand listing practices, we also developed a metric of center community need, defined as the ratio of candidates ever waiting over USRDS-calculated county-level ESRD prevalence, using counties with at least two candidates waiting at the center or with home zip codes within 100 miles of the center zip code. There were 192 centers that performed at least 10 KI/KP transplants in all three years with a computable community need metric.
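The observed-to-expected comparison described above can be illustrated with a simple conjugate gamma-Poisson model, one common way to obtain a Bayesian credible interval for an O:E ratio. The prior parameters and counts below are hypothetical choices for illustration and are not the OPTN/SRTR or MPSC methodology verbatim.

# Bayesian observed-to-expected (O:E) ratio for center registrations (illustrative only)
from scipy.stats import gamma

observed = 130.0   # actual billable registrations at a hypothetical center
expected = 100.0   # registrations predicted by the historical-trend regression model

# Poisson likelihood for the observed count with a weakly informative Gamma(shape=2, rate=2) prior
# on the O:E ratio gives a Gamma posterior by conjugacy.
prior_shape, prior_rate = 2.0, 2.0
post_shape = prior_shape + observed
post_rate = prior_rate + expected

posterior_mean = post_shape / post_rate
ci_low, ci_high = gamma.ppf([0.025, 0.975], a=post_shape, scale=1.0 / post_rate)

print(f"Posterior mean O:E = {posterior_mean:.2f} (95% credible interval {ci_low:.2f}-{ci_high:.2f})")
print("Higher than expected" if ci_low > 1.0 else "Within the expected range")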
The majority of centers were considered overall low scorers (N=119, 62%), with only 26 (14%) being high scorers and 34 (18%) being low scorers in each of the three years. Centers are well distributed across quadrants, suggesting potential inconsistency in listing behaviors regardless of community burden. If a center lists more patients than expected but does not meet the burden of the community it is serving, this information can provide center awareness and allow for potential changes in listing behaviors/patient outreach, while centers listing more than expected and experiencing low community need may be able to provide insight into their successful outreach strategies. Future analysis includes seeking engagement from centers in each of the four quadrants to identify awareness, successful outreach strategies, barriers to success, etc., and disseminating best practices with educational initiatives.

Geographic disparity in access to deceased-donor liver transplantation is well described. The available liver supply in a DSA may affect patient and provider decisions on organ offers. METHODS: Using SRTR offer data 2010-2014, we identified 12,529 deceased-donor livers and estimated the association between DSA-level offer acceptance and DSA supply-demand ratio using modified Poisson regression adjusting for donor factors. Supply-demand ratio (recovered livers to incident waitlist candidates) was stratified into quartiles, with the highest quartile indicating the greatest supply of livers per candidate. The models were further stratified into national, regional, and local offers. RESULTS: DSAs within the lowest supply quartile accepted 88.3% of local offers, while DSAs within each increasing quartile accepted 86.9%, 84.5%, and 75.5% of local offers, respectively. This trend persisted among regional and national offers (Figure). The association between supply and acceptance varied by offer type (p-value for interaction <0.001). Compared to DSAs with the lowest liver supply, DSAs with the highest liver supply were 15% (aRR 0.85; 95% CI 0.82-0.88), 39% (aRR 0.61; 95% CI 0.54-0.69), and 75% (aRR 0.25; 95% CI 0.13-0.47) less likely to accept local, regional, and national offers, respectively (Table). DSAs within the two uppermost quartiles of supply had similarly low acceptance of regional and national offers; however, they differed in acceptance of local offers. CONCLUSIONS: Higher availability of livers per candidate within a DSA was strongly associated with lower liver offer acceptance within that DSA. Allocation schemes that provide greater equity in deceased-donor availability might minimize geographic disparity and also reduce liver discard driven by high refusal rates.

Transplant evaluation requires an assessment of the socioeconomic challenges patients face that may derail them from obtaining a kidney transplant (KT). We developed and validated a new, single-score Kidney Transplant Derailers Index (KTDI). Data were obtained from dialysis patients in Missouri (n=561) and patients presenting for KT evaluation in California (n=724). These patients were White (45%), Black (32%), and Hispanic (22%). Eight potential KT derailers were measured: financial insecurity, educational attainment, type of employment, type of health insurance, perception of neighborhood safety, access to a vehicle, having a washer/dryer at home, and social support.
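Returning to the DSA offer-acceptance analysis above: modified Poisson regression is a Poisson GLM with a robust (sandwich) covariance applied to a binary outcome, which yields adjusted relative risks rather than odds ratios. The following is a minimal sketch of that approach under stated assumptions; the data frame, column names, and covariates are hypothetical placeholders, not the SRTR dataset or the authors' model.

# Modified Poisson regression (Poisson GLM + robust SEs) for a binary acceptance outcome
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "accepted": rng.integers(0, 2, n),            # 1 = offer accepted (hypothetical)
    "supply_quartile": rng.integers(1, 5, n),     # DSA supply-demand quartile (1 = lowest supply)
    "donor_age": rng.normal(45, 15, n),           # example donor covariate
})

model = smf.glm(
    "accepted ~ C(supply_quartile) + donor_age",
    data=df,
    family=sm.families.Poisson(),
).fit(cov_type="HC1")                             # robust variance gives valid SEs for the risk ratios

print(np.exp(model.params))                       # adjusted relative risks (aRR)
print(model.summary())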
Item response theory models were used to estimate KTDI scores (T-score, mean = 50 and SD = 10) and determine which derailers best discriminated between underlying levels of socioeconomic status. Construct validity was examined with associations between the KTDI and zip-code-level socioeconomic indicators from the US census, including % in poverty and % without health insurance. For patients who presented for KT (n=724), we also determined whether the KTDI score was predictive of time to transplant access. Greater financial insecurity, low educational attainment, not having full-time employment, having Medicaid, not having private insurance, and no access to a vehicle were the most discriminating indicators. KTDI scores ranged from 34.2 to 63.1 (mean 50; higher scores indicate more derailers). Black patients and those with worse health had higher scores. There were positive associations between KTDI scores and % in poverty in the patient's zip code (ɣ = 0.26, SE = 0.03, p<0.001) and % uninsured in the patient's zip code (ɣ = 0.11, SE = 0.04, p=0.01). Time to transplant access was significantly lower for patients with higher KTDI scores (Figure 1).

Background: The ongoing organ shortage and regional differences in organ availability have encouraged patients in need of liver transplantation (LT) to travel in order to increase the probability of LT. The aim of this study was to identify patterns and predictors of patient migration for LT in the US. Methods: Data for all adult primary deceased-donor LT recipients between January 2004 and June 2017 were obtained from the Organ Procurement and Transplantation Network database. We extracted ZIP codes for the patient's residence at registration and for the hospital in which LT took place. These ZIP code data were used to calculate the distance of travel. Patients were considered to have migrated for LT if they traveled greater than 500 miles to undergo LT outside their home region. Multivariable logistic regression analysis was performed to identify predictive factors for patient migration. Results: There were 73,006 eligible LT with valid zip code data during the study period. 3,918 (5.4%) recipients traveled ≥500 miles from the time of listing to LT, the majority to a different state (97.4%) and region (83.9%). Patients who migrated to a distant region for LT were older and more likely to be male, white, and educated, compared to those who did not migrate (p<0.01 for all). They had lower median allocation MELD at LT and were more likely to live in a zip code in a higher income quartile, have private insurance, and be multiply listed.

Background: Induction of T cell immune response cDNA 7 (TIRC7) is restricted to activated lymphocytes. Modulation of TIRC7 signalling prevents immune activation specifically at the site of inflammation while preserving immune competency of peripheral lymphocytes. The efficacy of a novel high-affinity, chimeric antibody against human TIRC7 was investigated to prevent allograft rejection after transplantation, in combination with and in comparison to a calcineurin inhibitor, FK506. Method: Rat splenocytes were stimulated with either mitogen or IL-2 in the presence or absence of anti-TIRC7 antibody (anti-TIRC7 mAb) or control mAb to analyze cross-reactivity. Synergistic effects of anti-TIRC7 mAb with tacrolimus (Tac) were tested in a strongly histoincompatible donor/recipient combination (DA to Lewis).
Kidney transplant recipients were treated either with Tac 0.2 mg/kg or 0.5 mg/kg, or with anti-TIRC7 mAb 2 mg/day in combination with Tac (0.2 mg/kg, days 0-7). Low-dose FK506 was combined with the anti-TIRC7 mAb to prevent immunogenicity. Serum and spleens were procured at day 7 for FACS analysis of lymphocyte subsets. Free anti-TIRC7 mAb concentration and allograft reaction were assessed in serum collected at days 1, 4 and 7 after transplantation. Anti-TIRC7 mAb kinetics were analyzed after single-dose administration to rats in serum collected on days 1, 2, 6, 9, 14 and 30. Results: Anti-TIRC7 antibody in combination with low-dose FK506 prolonged graft survival up to 80 days (mean 46 d). Graft survival was inferior with Tac treatment only (max. 55 d; mean 24 d). Immunogenicity was absent after antibody application. Free antibody concentrations were 200 and 50 µg/ml on days 1 and 30, respectively. No clinical side effects were observed in anti-TIRC7-treated animals, with unchanged lymphocyte counts in the peripheral blood. Elevated intragraft TIRC7 expression was observed in biopsies of allografts undergoing rejection. Conclusion: The results suggest a pharmacodynamic synergism between the anti-TIRC7 mAb and FK506. No anti-drug antibody reaction was observed, indicating that no immunogenicity-relevant epitopes are presented to induce immune activation. Targeting of TIRC7 may provide a highly specific, novel approach for modulation of the immune response restricted solely to tissue-infiltrating lymphocytes in transplantation.

HLA sensitization is a significant immunologic barrier to successful kidney transplantation (KT). We have shown that proteasome inhibition (PI) in isolation reduces plasma cells (PCs) but does not impact donor-specific antibodies (DSA) due to upstream germinal center (GC) compensation. We hypothesized that PC depletion with carfilzomib (CFZ), a 2nd-generation PI, and arrest of differentiation of upstream naive B cells with tabalumab (TAB), an anti-BAFF monoclonal antibody (mAb), would permit successful renal allotransplantation in highly sensitized nonhuman primates. Maximally MHC-mismatched rhesus pairs were sensitized with two sequential skin transplants and then desensitized with CFZ (20 mg/m2 IV) and TAB (100 mg IV) weekly for one month. Peripheral blood, lymph node (LN), and bone marrow (BM) cells were analyzed pre- and post-desensitization. Anti-CD4 and anti-CD8 depletion was given and swap KT was performed. Animals were maintained on conventional triple immunosuppression post-transplant and closely monitored.

Although they represent a rapidly growing population, elderly organ transplant recipients are underrepresented in clinical trials. Age-specific aspects of established immunosuppressants are therefore poorly understood. Here, we assessed the impact of immunosuppressive treatment with CTLA4-Ig, a fusion protein blocking costimulatory signaling between APCs and T cells through CD28, on alloimmune responses in old and young recipients (2-3 months vs. 16 months) in fully MHC-mismatched murine transplantation models. While treatment with CTLA4-Ig prolonged skin graft survival in young recipients, the same treatment was unable to prolong graft survival in old recipients. Conversely, cardiac allografts in young mice treated with CTLA4-Ig survived indefinitely, while 80% of old recipients treated with CTLA4-Ig had lost their graft after 100 days (log-rank test, p<0.001; n=5/group).
CTLA4-Ig reduced the in vivo proliferation of CD4+ and CD8+ T cells (as assessed by BrdU incorporation) uniformly in both young and old recipients; in contrast, CTLA4-Ig reduced CD4+ central-memory and effector-memory T cells only in young but not old recipients. Moreover, systemic frequencies of CD4+IFN-ɣ+ T cells and systemic levels of IFN-ɣ cytokine production were reduced in young recipients but remained unchanged in old recipients. CTLA4-Ig caused significant perturbation of the Treg compartment, with reduced frequencies and compromised proliferation of CD4+CD25+Foxp3+ cells. These differences correlated with a significant reduction in the expression of CD28 on T cells in old mice, while levels of CTLA4 remained stable. Costimulatory blockade with CTLA4-Ig thus showed distinct age-dependent immunosuppressive effects in both skin and cardiac transplantation. Reduced expression of CD28 with aging may represent an escape mechanism for old alloreactive T cells, with unique clinical consequences for immunosuppression in the growing population of elderly transplant recipients. The backbone of current immunosuppression regimens remains calcineurin inhibitors (CNI) in association with steroids, mycophenolate mofetil or mTOR inhibitors. However, their long-term administration is associated with significant side effects. Costimulation blockade-based therapy with CTLA4Ig has been shown to be an effective alternative to current immunosuppressive protocols with a minimal side-effect profile. However, many studies have shown a higher incidence of acute rejection, especially during the early phase post-transplantation. Recently, it has become clear that metabolic reprogramming is an integral aspect of T cell activation, differentiation and function. Previous studies have demonstrated that inhibiting T-cell metabolism at the time of T-cell activation inhibits both proliferation and cytokine production, and this approach has been shown to effectively control T-cell responses in preclinical models of autoimmunity and transplantation. In this study, we examined the effect of combining CTLA4Ig with the metabolic inhibitors (MI) 2-deoxyglucose, metformin and 6-diazo-5-oxo-L-norleucine on allograft survival using murine models of skin, heart and hind-limb transplantation (BALB/c to C57BL/6). Simultaneous inhibition of T-cell glycolysis, mitochondrial oxidative phosphorylation and glutamine metabolism with MI enhanced the efficacy of CTLA4Ig. Skin allograft survival was significantly prolonged in the double-treated animals compared to animals that received only CTLA4Ig or MI. Moreover, combining CTLA4Ig with only short-term MI (30 days) achieved long-term cardiac and hind-limb allograft survival. Addition of MI to CTLA4Ig further suppressed the initial alloreactive immune response and allowed limited immunomodulation to achieve allograft tolerance. Our results indicate that targeting T cell metabolism provides a novel approach for enhancing CNI-sparing regimens with CTLA4Ig. Objectives. Chronic rejection of transplanted organs is a major obstacle in human organ transplantation. Although the mechanisms of chronic rejection are poorly understood, it is known that chronic rejection is caused by immune cells, such as T cells and macrophages, accumulating within the graft. Immune cell movement is regulated by the actin cytoskeleton and the small GTPase RhoA pathway.
We showed recently that blockade of the early T cell response in conjunction with genetic or pharmacologic interference with the small GTPase RhoA pathway, by RhoA deletion or Y27632 inhibition of RhoA/ROCK kinase, inhibits macrophage movement into the graft and abrogates chronic rejection of cardiac allografts in a rodent model. Besides Y27632, other RhoA/ROCK inhibitors are available commercially, but their efficacy in inhibiting chronic rejection remains unknown. Methods. Heart allografts from BALB/c (H-2d) donors were heterotopically transplanted into C57BL/6 (H-2b) recipients (each experimental group consisted of 3-5 animals), which received CTLA4-Ig (0.25 mg i.p. on day 2 and day 4 post-transplantation) alone or in conjunction with RhoA/ROCK inhibitors. We screened six RhoA/ROCK inhibitors: Azaindole-1, Fasudil (HA-1077) HCl, GSK 269962A, SAR-407899, SB 772077B dihydrochloride and SLX-2119 for their ability to inhibit chronic rejection of heterotopically transplanted mouse cardiac allografts. We also tested macrophage infiltration using macrophage markers and immunostaining. Results. We found that, of the six tested compounds, Fasudil and Azaindole-1 inhibited macrophage infiltration, vessel occlusion and tissue fibrosis and abrogated chronic rejection of mouse cardiac allografts. The remaining inhibitors decreased only tissue fibrosis and were ineffective in inhibiting chronic rejection. Introduction: There are no published data on the risks and benefits of induction immunosuppression in simultaneous heart/kidney transplant (SHKT) patients. Methods: We analyzed all SHKT performed between 1987 and 2015 using the national OPTN registry. Patients were stratified into three groups based upon their induction regimen: anti-thymocyte globulin (ATG), IL2 receptor blockers (IL2RB) and no induction (NI). Inverse probability weighted propensity (IPW) scoring was utilized to minimize maintenance immunosuppression choice, selection, regional and era (time of transplant) biases. Results: A total of 1107 patients were identified, of which 312 (28%) received NI, 332 (30%) received IL2RB, and 463 (42%) received ATG. In general, patients were demographically well matched across the three groups. There were regional differences in the use of induction regimens, with Region 2 being more likely to use NI and Region 5 more likely to use ATG. Compared with the NI group (17.6%), both the IL2RB (7.4%) and ATG (10.9%) groups had a lower risk of acute kidney transplant rejection (p=0.007). The log-rank tests for Kaplan-Meier curves showed that one- and five-year patient survival, and death-censored kidney graft and heart graft survival, were superior in the ATG group compared with the IL2RB and NI groups. IPW analysis showed that 1- and 5-year kidney, heart and patient survival in the ATG group was similar to the IL2RB group but superior to the NI group. Antibody-mediated rejection (AMR) occurs in 10%-20% of cardiac transplant patients and is associated with increased mortality. The endomyocardial biopsy, used to identify microvascular injury with intravascular macrophages, activated endothelial cells, immunohistochemical (IHC) evidence of complement deposition, and/or >10% intravascular macrophages within capillaries, remains the primary diagnostic tool for AMR.
However, as recently addressed at the XIIIth Banff Allograft Pathology Conference, identifying the intravascular location of macrophages by routine histology can present diagnostic challenges. This prompted us to perform a screen of cardiac transplant cases to determine if double labeling with an endothelial and a histiocytic marker could improve diagnostic accuracy. Twenty-two cardiac transplant endomyocardial biopsies previously diagnosed at a high-volume transplant center as pAMR-2, based on histologic evidence of endothelial activation, endothelial deposition of C3d or C4d, or >10% intravascular macrophages, were screened using a CD68/CD31 IHC double stain. To determine whether the diagnosis of pAMR-2 would be altered using the double stain, CD68-positive intravascular macrophage percentages were calculated and retrospectively compared in the same cases previously diagnosed using CD68 IHC alone. The CD68/CD31 double stain showed that 13 of 22 (57%) cases which had previously been diagnosed as >10% using a CD68 IHC stain alone had <10% CD68-positive intravascular macrophages. Use of the double stain altered the diagnosis of pAMR-2 in 33% of cases. The double stain showed overcalling of intravascular macrophages by >30%, >20% and >10% in 26%, 12% and 23% of cases, respectively. The mean C4d positivity by immunofluorescence was 75% (N=12), 37% for C3d by IHC (N=8), and 30% for >10% CD68 by IHC (N=22). The patients (mean age 51 years, 27% female) had 45 months of post-transplant follow-up, and a third previously had left ventricular assist devices. Based on our experience at a high-volume cardiac transplant center, over a third of patients were over-diagnosed as pAMR-2 using CD68 by IHC alone. We demonstrate here the value of using a CD68/CD31 double stain to accurately determine the percentage of intravascular macrophages to diagnose the "I" component of pAMR-2. to-wounding (JCI 120:1862) and antibody-mediated rejection (ABMR), but T cell-mediated rejection (TCMR) has little impact (AJT 9:2520). We studied heart transplants to determine the relationship between injury, TCMR, ABMR, and graft failure after endomyocardial biopsy (EMB). Methods: 889 single EMB bites from 462 recipients were analyzed using Affymetrix microarrays. We assigned molecular rejection (non-rejection, TCMR, and ABMR) scores by archetypal analysis trained on all-rejection, TCMR- and ABMR-associated transcripts (JHLT 36:1192). In addition, we assessed EMB injury phenotypes by expression of transcripts defined in acute kidney injury and expressed with high variance in hearts. Survival analyses used one random biopsy per patient. Results: To assess heart injury, we performed principal component analysis (PCA) on EMB using expression of injury transcripts (originally derived in kidney), and compared the distribution by injury PCA to their molecular rejection phenotypes (colors) (Figure 1A). PC1 scores most directly reflected expression of injury transcripts and correlated strongly with molecular rejection; PC2 scores distinguished an element of injury independent of rejection (Figure 1B). In univariate analyses of survival 3 years post-biopsy (Figure 1C), features of molecular injury and molecular TCMR predicted failure whereas histologic TCMR or ABMR did not. Surprisingly, molecular TCMR posed a greater hazard than ABMR. In multivariate analysis (Figure 1D), injury PC2 and molecular TCMR scores were the only significant hazards to graft loss.
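The molecular injury abstract above derives PC1 and PC2 scores from a biopsies x injury-transcripts expression matrix. As an illustrative sketch only, the Python snippet below shows the generic PCA step on a stand-in random matrix; the data, library choice and preprocessing are assumptions and do not reproduce the authors' microarray pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# `expr` is a hypothetical biopsies x injury-transcripts matrix (e.g., log expression);
# random values here stand in for microarray output.
rng = np.random.default_rng(0)
expr = rng.normal(size=(100, 50))

# Standardize each transcript, then project biopsies onto the first two components.
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(expr))
pc1, pc2 = scores[:, 0], scores[:, 1]   # per-biopsy injury PC1/PC2 scores
print(pc1[:5], pc2[:5])
```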
In heart transplants, molecular injury is strongly associated with rejection, and, as in kidney transplants, molecular injury predicts future failure. Unlike in kidneys, graft loss within 3 years post-biopsy is associated with TCMR but not ABMR. These results raise the possibility that heart TCMR elicits a more aggressive, deleterious injury response than kidney TCMR, or that ABMR is better tolerated in hearts than in kidneys. ClinicalTrials.gov #NCT02670408. Aim: Development of de novo donor-specific antibodies (DSA) after heart transplant is associated with an increased risk of allograft failure. The aim of this study was to determine the frequency of HLA serological antigen mismatch by locus and its relation to the development of de novo DSA in heart transplant. The study cohort included adult heart transplant recipients at our institution between 2011 and 2015 (n=548). HLA serological antigen mismatch between the recipient and donor at HLA-A, -B, -C, -DRB1, -DRB3/4/5 and -DQ was assessed. HLA antibodies were tested quarterly for the first year and yearly thereafter using a single antigen bead-based assay. Results: More HLA antigen mismatch was present at the HLA-B locus as compared to the other loci (p<0.05, Fig 1A). We identified 76 recipients who developed 95 de novo DSA. 69 (73%) of these de novo DSA were against HLA-DQ. In contrast, only 4 DSA were against HLA-A, 4 against HLA-B, 3 against HLA-C, 3 against HLA-DRB1, and 12 against DR51/52/53. The ratio of HLA-DQ de novo DSA to HLA-DQ mismatches was 10%, which was much higher than for other HLA loci (Fig 1B). Conclusion: Our data demonstrate that the high frequency of HLA-DQ de novo DSA is not dependent on the number of mismatches. HLA-DQ mismatches have higher immunogenicity than the other HLA antigens. Via fluorescently labeled secondary antibodies, we evaluated augmented (positive both pre- and post-transplantation) or de novo (negative pre-transplantation) antibody responses in these patients. Results: 6 out of 7 patients displayed an elevated general IgG immune response towards the array proteins during a rejection episode compared to the pre-transplant state. 14 antibodies (8 skin-specific and 6 nuclear antibodies) were up-regulated during a rejection episode in at least 50% of the patients. The skin-specific antigens were mainly attributable to hair follicles. In subsequent rejection episodes from the same patient, the antibody response appeared to be more pronounced with time. In three patients, we could detect de novo production of MICA antibodies. Conclusions: This is, to the best of our knowledge, the first report indicating that skin-specific antibodies correlate with acute rejection episodes in clinical VCA. Further research should determine whether sequential assessment of non-HLA antibodies could serve as a non-invasive marker of rejection in VCA patients. Vascularized composite allotransplantation (VCA) would likely be more widely performed if maintenance immunosuppression were not essential for graft acceptance. Accordingly, we used murine models of heterotopic and orthotopic limb transplantation (BALB/c->C57BL/6) to assess the potential for peritransplant therapy to promote long-term VCA survival. Injection of CD40L mAb (MR-1) and donor splenocyte transfusion (5x10^6 cells), plus 30 days of rapamycin (RPM, 2 mg/kg/d, Alzet pumps), induced long-term VCA survival (>100 days, p<0.01).
Likewise, CTLA4Ig (500 μg, i.p., on days 0, 2 and 4) plus RPM also induced long-term orthotopic VCA survival (>100 days, p<0.01). The success of either protocol required the presence of a bone-associated, radiation-sensitive component, since removal of the long bone or pre-transplant donor irradiation (800 cGy) prevented long-term allograft survival. Efficacy also required a T or B cell component, since allograft rejection occurred when using Rag1-/- donors, and long-term allograft acceptance could not be restored by peripheral infusion of donor bone marrow cells at the time of engraftment (p<0.05). Analysis of donor bone marrow showed that ~40% of CD4+ T cells were Foxp3+ Treg cells, constituting the largest population of Tregs within the immune system. Use of a CXCR4 inhibitor (AMD3100) to mobilize donor bone marrow cells pre-transplant abrogated the efficacy of either protocol (p<0.01), as did use of limbs from mice with conditional deletion of CXCR4 within Foxp3+ Tregs. Lastly, donor Foxp3+ T-regulatory (Treg) depletion, by diphtheria toxin administration to DEREG donor mice whose Foxp3+ Treg cells expressed the diphtheria toxin receptor, restored rejection with either protocol, whereas without Treg depletion long-term survival was associated with an active tri-lineage bone marrow. Long-term VCA survival is possible across a full MHC disparity using costimulation blockade-based approaches. Surprisingly, the efficacy of costimulation blockade in these models depends on the presence of a population of radiation-sensitive, CXCR4+ Foxp3+ Tregs resident within donor bone marrow. The mechanisms by which these cells promote VCA survival post-transplant, including migration of donor Tregs to recipient lymphoid tissues and the interactions of recipient lymphoid cells with donor bone marrow cells resident within long bones, are under investigation. Background: Sensitized recipients have lower rates of transplantation and poorer outcomes. We and others have reported that delayed administration of CTLA4-Ig can disrupt fully established B cell germinal center (GC) responses and prevent the production of post-GC plasma cells, while Bortezomib has been reported to deplete plasma cells. When used individually, these drugs often lead to an incomplete reversal of established alloantibody responses, especially in sensitized recipients with high titers of circulating donor-specific antibodies (DSA). Here we tested the ability of Bortezomib in combination with CTLA4-Ig to reverse established alloantibody responses in skin allograft-sensitized recipients and its impact on bone marrow-resident plasma cells. Materials & Methods: C57BL/6 female mice were sensitized with BALB/c donor splenocytes or skin grafts and challenged with BALB/c skin grafts 10 weeks later. Mice were treated with 2 doses of Bortezomib and CTLA4-Ig (2x/week). Bone marrow was harvested from both naïve and sensitized mice to analyze antibody-secreting cell subsets. Submandibular blood collection was performed at 1-2 week intervals to measure DSA titers. Results: Treatment with 2 doses of Bortezomib and CTLA4-Ig started 14 days after allogeneic splenocyte immunization or skin transplantation rapidly reduced DSA in naïve (p=.006) and presensitized (p<.001) recipients.
Furthermore, repeated Bortezomib (2 consecutive daily doses given every 2-4 weeks) in combination with CTLA4-Ig sequentially reduced DSA in sensitized recipients with persistent, high-titer preformed DSA (even when treatment was started at 5 weeks post-skin transplant). Finally, Bortezomib significantly depleted early (p=.014) and long-lived (p=.005) bone marrow-resident plasma cells. The ability of Bortezomib in combination with CTLA4-Ig to desensitize recipients with persisting high-titer preformed DSA is due to the preferential depletion of long-lived bone marrow plasma cells by Bortezomib and the prevention of new plasma cell generation by CTLA4-Ig. These observations may explain our clinical observations of the efficacy of Bortezomib and Belatacept in controlling acute ABMR. Background: The diversity of the B cell immune repertoire may drive humoral injury and rejection in organ transplantation (tx). Methods: We utilized advances in next-generation sequencing of B cells and custom computational pipelines to study functionally correlated variations in the VDJ and CDR3 regions, examining 124 DNA and RNA samples from the same tx patients over serial time points before tx and at 6 and 24 mo post-tx. Each blood sample was paired with a kidney tx bx scored by progressive changes in the chronic allograft damage index (CADI) over time. Patients sorted into three groups: non-progressors (NP), progressors without rejection (PNR) and progressors with rejection (PR). The immune repertoire was studied by (1) analyzing diversity measures (richness and entropy) longitudinally using a linear mixed-effects model and (2) performing network analysis to characterize clonal expansion. We defined a network for each sample in which each vertex represents a B-cell sequence, with vertex size defined by the number of identical sequences. Edges are calculated using the clone definition (same V and J segments, same CDR3 length and 90% nucleotide identity between CDR3s), and each cluster represents a clone in the repertoire. The Gini index is calculated for the vertex-size and cluster-size distributions. Results: Pre-tx immune repertoire diversity was significantly greater in patients who went on to reject their tx (PR vs NP, p = 0.008). Post-tx, there was a strong interaction between clinical outcome (CADI score, AR) and time, with a significant reduction in clonal diversity (p = 0.05) in PR and an increase in NP, combined with relevant changes in the Gini index applied to vertex size and cluster size from the network analysis at 24 mo post-tx (PR vs. NP, p=0.003 and p=0.001). Dominant clones persisted or further expanded in PR patients over time. Conclusions: The diversity of the recipient B cell immune repertoire plays a major role in driving AR and progressive chronic tissue injury, and this effect can be observed even prior to engraftment of the tx organ. Selected dominant clones expand in patients who reject allografts, and the preservation of these dominant clones results in a reduction of their overall repertoire diversity. Pre-tx prediction of recipient risk of rejection, irrespective of the donor organ, provides a powerful approach for precision medicine in organ tx. Despite the use of immunosuppressive drugs, allograft rejection remains a major hurdle for the long-term success of organ transplants. Burgeoning evidence implicates memory B cells (MBCs) in allograft rejection.
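The B cell repertoire abstract above summarizes clonal expansion with a Gini index over clone-size (vertex and cluster) distributions. The short Python sketch below shows a standard Gini index calculation on a made-up size vector; the clone grouping itself (same V/J, same CDR3 length, 90% CDR3 identity) is not implemented here, and the function name and toy values are assumptions.

```python
import numpy as np

def gini(sizes):
    """Gini index of a clone-size distribution (0 = evenly distributed,
    approaching 1 = repertoire dominated by a few expanded clones)."""
    x = np.sort(np.asarray(sizes, dtype=float))      # ascending clone sizes
    n = x.size
    total = x.sum()
    # Standard formula: G = 2*sum(i*x_i)/(n*sum(x)) - (n+1)/n, with 1-based ranks i
    return 2.0 * np.sum(np.arange(1, n + 1) * x) / (n * total) - (n + 1.0) / n

# Toy usage: an even repertoire vs. a clonally expanded one
print(gini([10, 10, 10, 10, 10]))   # ~0.0
print(gini([1, 1, 1, 1, 100]))      # high (clonal dominance)
```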
MBCs are long-lived, quiescent lymphocytes that form during an initial antigen encounter and persist to mount a more rapid, potent response upon rechallenge with the same antigen. MBCs form the basis of the long-term protective immunity induced by vaccination but can mediate allograft rejection in the context of transplantation. In order to target MBCs responsible for transplant rejection, it is important to understand the mechanisms of MBC survival. Autophagy, a process for lysosomal degradation of cytoplasmic cargo, has been shown to be vital for MBC survival. We hypothesize that autophagy maintains MBC survival by regulating mitochondrial homeostasis. The Bcl-2 family proteins Bnip3 and Nix mediate mitophagy, the selective clearance of mitochondria by autophagy. To study the role of mitophagy in MBC survival, we generated mice with B cell-specific deletion of Bnip3 and Nix (B/Nix-/-Bnip3-/-). We immunized WT and B/Nix-/-Bnip3-/- mice with the model antigen NP-KLH to induce MBC formation. By ELISA and flow cytometry, we find that B/Nix-/-Bnip3-/- mice have normal primary antibody responses and form MBCs normally. However, B/Nix-/-Bnip3-/- mice lose MBCs over time (Fig. 1a). Microscopy shows that Nix-/-Bnip3-/- MBCs have impaired mitophagy and increased mitochondrial mass (Fig. 1b-c). Moreover, transcriptional and extracellular flux analyses indicate that Nix-/-Bnip3-/- MBCs have altered mitochondrial metabolism, which may inhibit their survival. Overall, our results reveal a key role for mitophagy in MBC maintenance. Mitophagy may thus present a therapeutic target to attenuate MBC-mediated allograft rejection. CMV viremia was reduced from 16% to 8% (P=0.0024) and CMV syndrome decreased from 6.9% to 1.8% (P=0.0022). Recipients treated for CMV: 29 (11%) vs 7 (2.1%); lymphocyte-depleting induction immunosuppression: 118 (45%) vs 104 (32%). After the allocation change, median waiting times for all recipients in the OPO did not increase and appeared to be consistent with national and regional trends. The chance that a seronegative recipient received a high-risk seroprofile decreased from 51% to 12%. CMV seromatching significantly reduces the rate of CMV infection, optimizes high- and low-risk CMV profiles, reduces toxicities and costs associated with prophylaxis and monitoring, and does not appear to disadvantage wait times for recipients. Rituximab is an antibody that binds to the CD20 antigen on B cells, resulting in rapid and prolonged depletion. At Northwestern Memorial Hospital, a single dose of rituximab is given to renal transplant recipients with documented historical donor-specific antibodies (DSA). There is concern that the administration of rituximab with alemtuzumab induction may lead to an increased risk of infectious complications. This study aims to determine if the administration of rituximab for historical DSA in renal transplant recipients who received alemtuzumab induction leads to an increased rate of polyomavirus-BK and cytomegalovirus (CMV) infectious complications. This was a single-center, retrospective cohort study of renal transplant recipients who received a single dose of rituximab for historical DSA with alemtuzumab induction. Patients ≥18 years old who were transplanted between January 1st, 2007 and October 31st, 2016 were included. These patients were matched 1:1 based on donor/recipient CMV serostatus with patients who did not receive rituximab. The incidences of polyomavirus-BK viruria, viremia and nephropathy and of CMV viremia within one year post-transplant were evaluated.
A total of 326 patients (163 patients in each group) were included in the study. Patient demographics are summarized in Table 1. The incidences of polyomavirus-BK viruria (p<0.01), viremia (p<0.01) and nephropathy (p=0.05) were significantly greater in the rituximab group compared to the control group (Figure 1). There was no difference in CMV viremia between the rituximab and control groups (p=0.31). Rituximab administration for historical DSA in renal transplant recipients also receiving alemtuzumab induction was associated with higher rates of polyomavirus-BK infectious complications within one year post-transplant. Background: T-cell-mediated immunity (CMI) is crucial for the control of CMV infection post-transplant. We evaluated a novel EliSpot assay to determine whether CMI results could predict subsequent CMV events. Adult kidney transplant recipients were enrolled at 43 centers in the U.S., Canada, and U.K. An EliSpot assay (T.Spot.CMV, Oxford Diagnostics) was used to enumerate IFNɣ spot-forming units (sfu) after stimulation of PBMCs with an overlapping peptide pool of CMV pp65 and IE-1 proteins. Testing was done at the end of antiviral prophylaxis (EOP) and at 1, 2, 3, 4 and 6 months post-EOP. The primary outcome was a CMV event in the first post-transplant year, defined as site-determined viremia requiring antiviral therapy. We enrolled 571 KTRs (277 D+/R- and 294 R+) with a mean age of 52 ± 15.5 yrs; the majority of patients were male (62%). Of these, 223 (39.1%) and 348 (60.9%) received 3 and 6 months of prophylaxis, respectively. In patients who were eligible for analysis (n=372), CMV events occurred in 44 (11.8%) (37/168 (21.9%) D+/R- and 7/204 (3.4%) R+) after EOP at a median of 64 days (range 1-240 days) post-prophylaxis. Using ROC curve analysis, we derived a cutoff of >50 sfu/2x10^5 PBMC for either IE-1 or pp65 as a threshold for positivity with good sensitivity and specificity for prediction of CMV events. At EOP, 23/168 (13.7%) D+/R- and 147/204 (72.1%) R+ patients were positive. In the total cohort (n=372), CMV events were significantly lower in CMI-positive vs. -negative patients (2.9% vs. 17.6%, p<0.0001). This was primarily driven by the R+ group, where a positive CMI reduced the risk of a CMV event (OR 0.13, 95%CI: 0.049-0.33). Time to CMV event post-EOP was significantly greater in those with positive CMI (log-rank p<0.0001). Similar predictive values for subsequent CMV events were found for CMI measurements at 1 and 2 months post-prophylaxis (p<0.0001 and p=0.062, respectively). In the largest cohort of KTRs to date, we show that a standardized assessment of CMV-specific CMI using a novel EliSpot assay has predictive value for clinically significant CMV infection. Kidney transplant recipients (KT) with R-/D+ CMV serostatus receiving T-cell-depleting induction are considered the highest-risk population for CMV infection, and long-lasting prophylaxis is recommended. However, rATG has not yet been compared with anti-IL2RA in KT stratified by CMV-specific serology and cell-mediated immunity (CMI). Methods: We evaluated, in 1,215 KT from 3 different centers (BDX, BCN, LSN), the impact of anti-IL2RA or rATG induction on the incidence of infection according to baseline serostatus. CMV-specific CMI at baseline and at different time points post-transplantation was also compared in KT receiving rATG or anti-IL2RA.
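The EliSpot abstract above derives its >50 sfu/2x10^5 PBMC positivity threshold from ROC curve analysis. The Python sketch below illustrates one common way such a cutoff can be chosen (maximizing Youden's J); the data are invented, the protective ("no event") class coding is an assumption for illustration, and this is not the authors' actual derivation.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Hypothetical data: sfu per 2x10^5 PBMC at end of prophylaxis, and whether a
# CMV event later occurred (event = 1). All values are made up.
sfu   = np.array([5, 12, 40, 55, 80, 150, 300, 20, 60, 10, 400, 35])
event = np.array([1, 1,  1,  0,  0,  0,   0,   1,  0,  1,  0,   1])

# Treat "no event" as the positive class, so higher sfu should predict protection.
fpr, tpr, thresholds = roc_curve(1 - event, sfu)
youden = tpr - fpr                         # Youden's J at each candidate threshold
best_cutoff = thresholds[np.argmax(youden)]
print("AUC:", roc_auc_score(1 - event, sfu), "suggested cutoff:", best_cutoff)
```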
Results: In a first cohort of KT (BDX, n=679), infection risk was significantly higher after rATG than after anti-IL2RA induction only among R+ KT but not among D+R- (log-rank p=0.001 vs p=0.6, respectively). These data were validated in 2 different cohorts (BCN, n=373, log-rank p=0.041 and LSN, n=163, log-rank p<0.001). These results were more evident in KT under preemptive therapy. Analyses of CMV-specific CMI confirmed that only pre-transplant CMI+ rATG-treated patients, but not CMI- patients, showed significantly higher cumulative incidences of infection than anti-IL2RA-treated KT (log-rank p=0.020; HR=1.77, 95%CI 1.08-2.89, p=0.023). Post-transplant CMV-specific CMI kinetics revealed that only rATG-treated KT with preformed CMI displayed a generalized depletion of CMV-specific CMI after transplantation as compared to anti-IL2RA KT. Background: The impact of CMV serostatus on kidney transplant outcomes in an era when CMV prophylactic and preemptive strategies are used routinely is not clearly established. We aimed to investigate this association through a donor-paired kidney analysis. Methods: Using UNOS/OPTN data, adult patients with a first deceased donor kidney transplant between 2010 and 2015 were stratified into 4 groups in the main cohort: D-/R-, D+/R-, D+/R+, and D-/R+. In a paired kidney cohort, we identified 2,899 pairs of D- kidney transplants with discordant recipient serostatus (D-/R- vs D-/R+) and 4,567 pairs of D+ kidney transplants with discordant recipient serostatus (D+/R- vs D+/R+). Death-censored graft survival, all-cause mortality and mortality due to infection were examined in both cohorts. In the multivariable analysis of the main cohort, which included 52,394 recipients, D+/R- was associated with an increased risk of graft failure (HR=1.15, P=0.015), all-cause mortality (HR=1.18, P=0.001) and mortality due to infection (HR=1.46, P=0.007) when compared to D-/R-. There was also an increased risk of graft failure in D+/R+ when compared to D-/R- (HR=1.06, P=0.042). In the paired kidney multivariable analysis, D+/R- was an independent risk factor for all-cause mortality (HR=1.15, P=0.011) and infection-related mortality (HR=1.31, P=0.046) when compared to D+/R+. There was no difference in graft loss between D+/R- and D+/R+ in the paired kidney cohort. Conclusions: CMV mismatch (D+/R-) is still an independent risk factor for graft loss and patient mortality in the era of effective prophylactic and preemptive strategies. The negative impact of D+/R- serostatus on all-cause mortality and infection-related mortality persists after fully matching for donor factors. Background: Towards the goal of utilizing more livers for transplantation, transplant centers are looking to increase the use of organs from marginal donors. Livers from these donors have been shown to be more susceptible to preservation and reperfusion injury. CD47 regulates NO signaling. At the time of reperfusion, a paradoxical up-regulation of its ligand thrombospondin-1 has been found to inhibit CD47 signaling, leading to decreased NO signaling and tissue perfusion. Methods: Using a porcine model of donation after cardiac death (DCD), we studied the use of antibody-mediated CD47 blockade to improve the function of liver grafts undergoing normothermic machine perfusion. Livers from 20 pigs (5 per group) were subjected to either 30 or 60 minutes of warm ischemia time (WIT), followed by administration of treatment (anti-CD47) or control (IgG) and 6 hours of normothermic liver perfusion (NELP).
Results: ALT, AST, and bile production were determined at the end of NELP (6 hours) to assess liver function in all groups (A). The 60WIT control group had the highest ALT and AST levels after 6 hours of NELP compared to the other groups (p=0.0014 and p=0.0220, respectively). Bile production at the end of NELP was significantly higher in both treatment groups compared to their controls (p=0.0079 for 30WIT and p=0.0308 for 60WIT). The expression of Caspase-3 after 6 hours of NELP was studied by immunofluorescence staining in 60WIT livers as a marker of reperfusion injury (B). The control group showed higher expression of Caspase-3 after 6 hours of NELP (p=0.0428). pERK levels were analyzed by Western blot in 60WIT livers as a marker of liver regeneration. Significantly higher expression of pERK was found at the end of NELP in the treatment group compared to control livers (p=0.003). In the setting of normothermic preservation, CD47 blockade can improve liver graft function by reducing both preservation and reperfusion injury. Garcia-Aroz S., Xu M., Wang X., Hollingshead J., Khan A., Banan B., Kang L., Zhang Z., Upadhya G., Lin Y., Manning P., Chapman W. Prolonged cold ischemia storage (CIS) of donor organs is a major risk factor for acute and chronic graft injury. The goal of this study was to test how increased CIS time influences humoral alloimmunity using a mouse model of renal transplantation. B6 (H-2b) mice received BALB/c (H-2d) renal allografts subjected to 0.5 h or 6 h CIS in University of Wisconsin solution (0.5 h CIS and 6 h CIS groups). The second recipient kidney was removed at the time of transplantation. Compared to 0.5 h CIS, recipients from the 6 h CIS group had increased titers of donor MHC class II- but not class I-reactive IgG DSA, elevated frequencies of both class I- and class II-reactive antibody-secreting cells, and higher frequencies of donor-reactive IFNɣ-secreting T cells on d. 14 posttransplant. Renal allografts from this group had a significantly higher proportion of glomeruli infiltrated with macrophages compared to 0.5 h CIS controls. B cell depletion with anti-mouse CD20 mAb inhibited DSA generation and reduced intragraft C4d deposition and macrophage glomerular infiltration on d. 14. In addition, B cell depletion resulted in decreased priming of donor-reactive IFNɣ-secreting T cells and reduced T cell infiltration into the graft. Regardless of CIS duration, serum creatinine levels were <0.6 mg/dl in all recipients at d. 60 posttransplant (<0.4 mg/dl in non-transplanted mice). By d. 60, the grafts from the 0.5 h CIS group had moderate segmental sclerosis in a limited number of glomeruli. In contrast, 6 h CIS grafts had extensive glomerular injury including thickened capillary loops, segmental or global sclerosis and thrombotic microangiopathy with red cell congestion, intracapillary fibrin thrombi and mesangiolysis, as well as elevated intragraft levels of the acute kidney injury markers NGAL and osteopontin. In addition, 6 h CIS recipients had increased serum levels of anti-class II but not anti-class I IgG DSA and increased numbers of class I- and class II-reactive ASCs compared to the 0.5 h CIS group. The frequencies of IFNɣ-producing donor-reactive T cells were low and comparable in both groups. Our findings suggest that augmented humoral rather than cellular immune responses account for transplant glomerulopathy after prolonged CIS.
Posttransplant inflammation not only facilitates DSA development but also spreads the specificity of DSA responses from class I to class II and may thus influence renal allograft pathology. Background: Circulating exosomes isolated from lung transplant recipients (LTxRs) with acute or chronic rejection contain lung self-antigens (SAgs), K-alpha-1 tubulin (Kα1T) and collagen V (Col-V), and have been proposed to play a role in inducing immune responses leading to rejection. The goal of this study is to determine whether primary graft dysfunction (PGD), a known risk factor for rejection, leads to exosome induction from the transplanted lungs, and to determine the role of exosome persistence in the clinical course. Methods: Ten LTxRs with grade 3 PGD and 5 without PGD (ISHLT criteria) were included in this study. Sera were collected 48 hrs following LTx and at 30 days; exosomes were isolated using ultracentrifugation. Purity of the exosomes was assessed by western blot for CD9 and by electron microscopy. Exosomes isolated from sera at 48 hrs were characterized by western blot for SAgs (Kα1T, Col-V), MHC class II and CIITA, transcription factors (NF-kB, HIF-1α), adhesion molecules (ICAM, VCAM), co-stimulatory molecules (CD80, CD86), and the alpha subunit of the 20S proteasome. We also isolated exosomes from all PGD 3 LTxRs on day 30. Results: Exosomes isolated from sera of PGD 3 LTxRs demonstrated increased levels of SAgs in comparison to PGD grade 0 LTxRs (1.4-fold increase in Col-V (p=0.06) and 1.7-fold increase in Kα1T (p=0.04)). The exosomes also contained increased levels of MHC class II (6.6-fold increase, p<0.001) and CIITA (4.3-fold increase, p<0.001), along with adhesion molecules (6.9- and 4-fold increases in ICAM and VCAM, p<0.001), transcription factors (0.8- and 6.9-fold increases in HIF-1α and NF-kB, p<0.001), co-stimulatory molecules (6.6-fold increase in CD80 and 7-fold increase in CD86, p<0.001) and the 20S proteasome. Although pre-clinical studies have reported the efficacy of Sirtuin 1 (SIRT1) in protecting livers from ischemia-reperfusion injury, its clinical relevance in liver transplantation (LT) has not been defined. In addition, cell-specific SIRT1 function and the underlying mechanisms by which it regulates innate immune responses remain to be elucidated. Ikaros, a tumor suppressor of the lymphoid lineage, is expressed in all hematopoietic cells, while its role in macrophages is unknown. We aimed to determine the clinical relevance of SIRT1 as well as to identify the significance of a novel macrophage SIRT1-Ikaros axis in LT. Liver biopsies were collected at 2 h post-reperfusion from 60 human adult primary LT recipients recruited under an IRB protocol. SIRT1 levels were analyzed by Western blots, and patients were divided into low-SIRT1 (n=30) and high-SIRT1 (n=30) groups. The high-SIRT1 group had depressed cleaved caspase-3 (0.54±0.06 vs. 1.00±0.14, p<0.05); lower sALT (238±28 vs. 488±65 IU/L, p<0.05) and sAST (281±41 vs. 766±325 IU/L) at POD 1; and superior post-LT survival (2-year, 94.9% vs. 83.7%) with a median follow-up of 940 days.
Next, to determine cell-specific SIRT1 function, groups of myeloid-specific SIRT1 (mSIRT1) knockout and WT mice (n=7-9/group) were subjected to partial hepatic warm ischemia (90 min) followed by reperfusion (6 h). Many preclinical studies on the protective functions of heme oxygenase-1 (HO-1; hsp32) in orthotopic liver transplantation (OLT) have encouraged application of HO-1-inducing regimens in clinical OLT and have increased the need for criteria to identify putative clinical responders. Human HO-1 gene expression is modulated by polymorphism genotype, and previous clinical studies showed donor HO-1 genotype to correlate with clinical outcomes. However, the significance of recipient HO-1 inducibility remains unknown. We aimed to determine whether recipient HO-1 status may affect graft HO-1 levels and outcomes. Fifty-one liver transplant patients were recruited under an IRB protocol. Liver biopsies sampled pre-transplant (prior to put-in) and 2 h post-reperfusion (prior to abdominal closure) were analyzed by Western blots. Hepatic ischemia-reperfusion injury (IRI) represents a major risk factor for early graft dysfunction and acute/chronic rejection, as well as a key obstacle to expanding the donor pool in orthotopic liver transplantation (OLT). Although glucocorticoid receptor (GR) signaling may enhance graft cytoprotective programs, clinical use of glucocorticoids is limited by their adverse effects, and the clinical relevance of GR-facilitated cytoprotection in OLT recipients remains unknown. We aimed to evaluate the significance of hepatic GR in clinical OLT and verify the impact of recombinant human relaxin (rhRLX), which may function as a GR agonist in a tissue/disease-specific manner. Fifty-one liver transplant patients were recruited under an IRB protocol. Liver biopsies were collected after cold storage (prior to put-in) and 2 h post-reperfusion (prior to abdominal closure), followed by Western blot-assisted analyses. Forty-three percent of OLTs failed to increase GR peri-operatively under the surgical stress. BACKGROUND & AIMS: Ischemia-reperfusion injury (IRI) is an inevitable event leading to early and late graft failure. We performed the first case of ischemia-free liver transplantation (IFLT) on July 23rd, 2017. Herein we present the first series of IFLT. In this series, 6 liver grafts were procured, preserved and implanted under continuous normothermic machine perfusion (NMP) so that ischemic injury to the grafts was completely avoided. Graft viability was assessed with reference to biochemical changes in the perfusate and bile production. The IFLT cases were matched 1:3 to conventional liver transplantation (CLT) cases. IRI and early transplant outcomes were compared between the two groups. RESULTS: Stable perfusion flow, low lactate and liver enzyme levels, and continuous bile production were observed during NMP in IFLT. The 30-day graft survival was comparable (100% in IFLT versus 94.4% in CLT, p=0.750). The incidence of early allograft dysfunction (EAD) was significantly lower in the IFLT than in the CLT group (0% versus 55.6%, p=0.022). The peak AST, ALT and total bilirubin within the first 7 days were significantly lower in the IFLT group. The histological study revealed minimal hepatocyte, biliary epithelium and vascular endothelium injury during preservation and post-transplantation in IFLT. The inflammatory cytokine (IL-1β, IL-6 and TNF-α) levels were much lower in IFLT.
The key pathways involved in IRI were not activated after graft revascularization. Transcriptomic profiling by microarray revealed a striking increase in proliferation-associated, anti-inflammatory and regenerative genes in IFLT when compared to CLT. The results of this first series demonstrate the safety, feasibility, and superiority of IFLT. This innovation provides a potential strategy to maximize organ utilization and optimize transplant outcomes. CITATION INFORMATION: He X., Guo Z., Huang S., Zhao Q., Tang Y., Zhang Z., Zhu Z., Ju W., Yang L., Chen G. Demographics were similar between groups, with donor age 57±9 years and terminal sCr 2.9±1.1 mg/dl (QPI-1002) and 54±5 years and 2.1±0.6 mg/dl (Placebo). The difference between the QPI-1002 and Placebo groups in DGF incidence (dialysis in the first week post-transplant) and DGF severity (defined as the number of dialysis sessions in the first 30 days for subjects with DGF) was statistically significant (p=0.0077 and p=0.0006, respectively), with a mean DGF severity of 0.38±0.7 days (QPI-1002) and 4.5±4.6 (Placebo). The distribution of dialysis days is graphically displayed below. NMP is a novel preservation method for liver grafts. We transplanted 15 human livers after NMP to test the safety and feasibility of NMP on a device developed in our institution. Livers included 10 from donors after brain death (DBD) and 5 after circulatory death (DCD). Cold ischemia time before NMP was 1 hr 32 min to 3 hr 59 min; NMP time was 3 hr 20 min to 7 hr 52 min. Livers were perfused through the portal vein and hepatic artery at physiologic flows and pressures with a perfusate based on human red blood cells and fresh frozen plasma. During NMP, bile production was 3-13 ml/hr in DBD livers and 1-6 ml/hr in DCD livers. All livers displayed lactate clearance. After transplantation, the early allograft dysfunction (EAD) rate was 26.7% in the NMP group and 43.3% (p=0.43) in controls preserved by cold storage and matched (1:4) by age, donor risk index, MELD score, and total preservation time. The NMP group had lower peak alanine aminotransferase (p=0.001) and total bilirubin at post-operative day (POD) 7 (p=0.03). The length of stay (LOS) in the ICU was 2.3±1.7 days in the NMP group and 4.6±7.7 days in the controls (p=0.03). The LOS in hospital was 12.3±9.4 and 15.9±14.5 days, respectively (p=0.25). Perfusate aminotransferases at the end of NMP correlated significantly (p=0.001) with their peak values in the first 7 PODs. All NMP patients and grafts have survived to date (7-18 months post-operatively). Three cases (2 DCD, 1 DBD) required ERCP and stenting to treat biliary stricture, comparable to the controls. These results indicate the safety and feasibility of our NMP preservation and its potential benefits in protecting and predicting liver viability. Introduction: Steatosis is the leading reason for discarding donor livers and contributes significantly to the worsening shortage of organs for transplantation. Steatotic livers commonly fail to recover function during ex vivo normothermic machine perfusion (NMP). Whether extracorporeal delivery of therapeutic agents could rescue more steatotic organs is not yet known. We aimed to assess the effect of a defatting cocktail, tested in vitro using primary human hepatocytes, on lipid metabolism and viability recovery of steatotic donor livers during NMP. Methods: Six human livers designated histologically macrosteatotic were perfused using NMP for up to 24 hours after cold storage.
They were randomly allocated to an intervention group, in which the perfusate was supplemented with the drug combination, or a control group that received only the vehicle (dimethyl sulfoxide [DMSO] <0.1%). Viability based on lactate clearance criteria was assessed at 4 hours. Tissue biopsies, perfusate and bile were sampled and measurements compared using non-parametric tests. Results: Baseline donor demographics were comparable between the groups. The median cold ischemia time was 11:51 hours (interquartile range 10:10-12:53) and was similar in both groups (p>0.99). Treatment with the cocktail resulted in a 3-fold increase in total cholesterol in the perfusate (p=0.043) and a 2.5-fold increase in triglycerides (p=0.044) compared with control livers over 12 hours. Histology showed a reduction of 18% in large-droplet macrovesicular steatosis (p=0.020) over 12 hours in the treated group. Bile production was also improved in the treated group at 12 hours. Abstract# 201 Introduction: Physiologic flows and pressures need to be provided by the ex vivo liver perfusion device. We hypothesized that a 1-pump machine design would provide the same hemodynamic output as a 2-pump perfusion machine design. Methods: 11 human livers that were declined for transplantation were ex vivo-perfused for 8 h at 37°C with an oxygenated solution based on red blood cells and plasma, using either a two-pump design (n=6) or a one-pump design (n=5) to provide pressure-controlled continuous perfusion to the hepatic artery and portal vein. Both groups were tested during perfusion for hemodynamic feasibility and safety, including markers of hemolysis. Results: During perfusion, the hepatic artery and portal vein hemodynamic parameters were within physiologic values in both groups, with no significant difference between the two groups. We observed no significant drop in hemoglobin or hematocrit in either group during the entire perfusion. Similarly, no significant increase in potassium levels was observed in either group to suggest hemolysis. Background: Ischemia-reperfusion injury (IRI) is a multifactorial process that impacts liver graft function. Donor quality plays a critical role in IRI and early graft outcomes. Extracellular vesicles (EVs) provide a novel source of valuable biomarkers. We propose that EV-miRNAs leak from the donor liver into the perfusate and that these markers reflect organ quality and relate to allograft function post-transplant. Samples and Methods: We evaluated perfusate (10 cc) from 31 liver grafts obtained pre-implantation at the end of cold ischemia time (CIT). Our studies included determination of EV size (Tunable Resistive Pulse Sensing (TRPS) and Cryo-Electron Microscopy (CEM)), quantification (TRPS and Imaging Flow Cytometry (IFC)) and phenotyping (IFC). Deceased donor (DD) livers included standard and extended criteria donor grafts as well as partial (living donor (LD)) grafts. EV-miRNAs were evaluated using the miScript miRNA PCR Array Human Liver miFinder 384H. For identification of differentially expressed EV-miRNAs, the ∆∆Ct method was used. CIT, donor age, and transaminase levels at days 1 and 2 post-transplant were included as part of the analyses. Results: Using a volume of 200 cc of perfusate, we were able to quantify ~1.5E+08 EVs. Comparison of perfusate EV-miRNA profiles from DD (CIT > 4 hours) and LD organs (CIT < 1 hour) showed an increase of miR-5701 and a decrease of miR-122-5p, miR-260a,
miR-19a-3p, miR-3183, miR-4301, and miR-92a-3p. Older donor livers (>65 years old) showed a significant up-regulation of miR-122-3p, miR-122-5p, miR-1260 and miR-192-5p in their perfusates compared to young donor livers. Furthermore, a significant increase in recipient AST/ALT at 24 hrs post-transplantation related to increased expression of miR-101-3p, miR-106b-5p, miR-142-3p, miR-199b-3p, miR-22-3p, miR-29a-3p, miR-29b-3p, and miR-29c-3p. Conclusions: Perfusate EV-miRNAs are excellent candidate markers of donor quality and tissue injury; they provide information about the affected parent liver cells, and their profiles relate to early graft outcomes. Further evaluation of their cellular origin and affected pathways may lead to the discovery of targets for therapeutic intervention. While end-ischemic dual hypothermic oxygenated machine perfusion (DHOPE) resuscitates mitochondria and reduces ischemia-reperfusion injury, normothermic machine perfusion (NMP) allows for ex situ functional testing of donor livers. We established a combined protocol of one hour of DHOPE, followed by controlled oxygenated rewarming (COR) and NMP, for resuscitation and functional testing of high-risk donor livers that had been declined for transplantation nationwide, with the purpose of expanding the donor pool. From July until October 2017, three livers were included. To facilitate machine perfusion at different temperatures, an acellular perfusion fluid containing a hemoglobin-based oxygen carrier (HBOC) was developed. Livers were deemed transplantable if bile production was ≥10 g, biliary pH was >7.45, and perfusate pH and lactate levels normalized within the first 150 min of NMP. Median cumulative bile production was 57 g at 150 min of NMP. Liver 1 reached normal perfusate pH and lactate levels, as well as a biliary pH of 7.55, within 150 min of NMP. Perfusate peak ALT was 540 IU/L. This liver was transplanted, with the recipient in excellent condition at 3 months of follow-up. Liver 2 reached normal perfusate pH and lactate levels within 150 min of NMP, but biliary pH was 7.39. Perfusate peak ALT was 4215 IU/L. Liver 3 did not reach normal perfusate pH and lactate levels within 150 min of NMP. Moreover, biliary pH was 7.32 and perfusate peak ALT was 8460 IU/L. Livers 2 and 3 did not meet the viability criteria and were therefore discarded. Sequential DHOPE, COR and NMP, using an acellular fluid containing an HBOC, is feasible and enables both resuscitation and viability testing of high-risk donor livers prior to transplantation. This protocol provides a tool to expand the donor pool by selecting initially declined donor livers that can be transplanted. INTRO: CAAMR after kidney transplantation (KTx) is associated with rates of allograft failure approaching 50% at 2 years, arising from injury involving T cells, B cells, and donor-specific antibodies (DSA). Tocilizumab (TOC) is a monoclonal antibody that inhibits IL-6, a regulator of T- and B-cell activation, and has been used to desensitize waitlisted patients. We report our experience using TOC in patients (pts) with CAAMR. METHODS/RESULTS: Since 2015, 20 pts with >3 months of follow-up received TOC 8 mg/kg added to tacrolimus/mycophenolate/prednisone for CAAMR refractory to treatment. Mean age at KTx was 37±11.5 years; most were female (16) and received live-donor KTx (14). All patients had prior AMR with DSA (Class I=4, Class II=9, Class I&II=7) that persisted despite plasmapheresis (9), IVIG at 2 gm/kg (11), and rituximab (5). 16 patients also had prior ACR (borderline=9, 1A=3, 1B=3, 2A=1).
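The EV-miRNA perfusate abstract above identifies differentially expressed miRNAs with the ∆∆Ct method. The short Python sketch below shows the generic fold-change arithmetic behind that method (relative expression = 2^-∆∆Ct); the function name and Ct values are invented for illustration and are not the study's measurements.

```python
def fold_change(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the delta-delta-Ct method: 2 ** -(ddCt).

    ct_target / ct_ref: Ct values of the miRNA of interest and a reference
    (normalizer) in the comparison group; *_ctrl: the same values in the
    control group. All inputs here are hypothetical.
    """
    d_ct = ct_target - ct_ref              # delta Ct, comparison group
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl  # delta Ct, control group
    return 2.0 ** -(d_ct - d_ct_ctrl)

# e.g. a miRNA detected ~2 cycles earlier (relative to the normalizer) in one
# group than in the control group corresponds to a 4-fold up-regulation:
print(fold_change(24.0, 20.0, 26.0, 20.0))   # -> 4.0
```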
TOC was started an average of 1648±1420 days after transplant, with a starting creatinine of 2.35±0.95, and was given for an average of 323±281 days. In the 3 months prior to initiation of TOC, eGFR declined by 3.9 cc/min each month, compared to 0.05 cc/min each month on TOC (p=0.008). Proteinuria also stabilized on TOC, with an initial urine protein:creatinine ratio (UPC) of 1.01 (±1.1) vs 0.80 (±1.1) at follow-up. Stabilization was not dependent on the level or type of DSA, nor did DSA change significantly during follow-up. There were 3 ACRs (borderline=2, 1B=1) and one patient with recurrent AMR after stopping TOC that responded to re-initiation. There were 2 cases of BK viremia (0 nephropathy), 1 EBV viremia, and 1 hospitalization for pneumonia. Only one patient stopped due to infusion-related reactions. There was 1 patient with primary nonfunction due to unresolving ATN+AMR and 2 graft failures (1 non-compliance and 1 progressive rejection). CONCLUSION: CAAMR is difficult to treat and associated with high rates of graft failure. In a group of patients with CAAMR refractory to other treatments, addition of TOC to a regimen of tac/mycophenolate/prednisone stabilized eGFR in the majority despite persistent DSA, with few infectious complications. KTR with multi-organ transplants, primary graft non-function or without allele-level HLA types were excluded. All eplets and antibody-verified eplets (www.Epregistry.com.br) were ascertained from allele-level donor-recipient HLA types imputed from serologic types by the National Marrow Donor Program algorithm. We fit multivariable Cox proportional hazards models to determine the independent association between class I (HLA-A and -B) and class II (-DR) eplet mismatches and death-censored graft loss. All-cause graft loss was a secondary endpoint. Results: A total of 118,382 KTR were included. Hazard ratios (HR) for death-censored graft loss were higher for antibody-verified compared with all eplet mismatches. HR associated with each additional 1, 5, and 10 HLA eplet mismatches for Class I and II are shown in Table 1. Inflammation in fibrotic renal parenchyma (i-IFTA) is associated with decreased graft survival but by Banff rules is not diagnostic for rejection. Some researchers believe that i-IFTA represents chronic active T cell-mediated rejection (TCMR). The alternative view is that i-IFTA represents a response to wounding caused by continuing nephron injury. We analyzed our indication biopsies that had been scored for i-IFTA to determine the frequency of histologic or molecular TCMR and its role in i-IFTA. One pathologist scored i-IFTA in 234 biopsies, blinded to molecular results. Using our Molecular Microscope® diagnostic system (MMDx), based on gene expression, we conducted archetypal analysis (a meta-classifier based on seven molecular classifiers) and assigned biopsies to three molecular diagnoses: antibody-mediated rejection (ABMR), TCMR and no rejection. We also assigned molecular scores of kidney injury (AKI) and T cell burden (QCAT). We compared biopsies with i-IFTA to those with no i-IFTA. The i-IFTA biopsies had a higher frequency of ABMR by histology (31%) and by MMDx (37%) but not of TCMR (histology 8%; MMDx 6%). Many i-IFTA biopsies did not have rejection by histology (53%) or MMDx (54%). As expected, graft failures were also higher in i-IFTA biopsies (46% vs. 17%, Figure 1), mainly due to ABMR (by MMDx, n=24) rather than TCMR (by MMDx, n=5). i-IFTA biopsies had increased molecular injury (AKI) and T cell burden (QCAT) scores, the latter compatible with the inflammation.
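The eplet mismatch abstract above reports hazard ratios for death-censored graft loss per additional 1, 5 and 10 eplet mismatches from multivariable Cox models. The Python sketch below illustrates that kind of fit and the relationship HR(per 5) = HR(per 1)^5 under a linear Cox model; it assumes the lifelines library, and the simulated data and variable names are placeholders, not the registry analysis.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical recipient-level data: follow-up time (years), death-censored
# graft loss indicator, and eplet mismatch counts (all simulated).
rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "time":       (rng.exponential(8, n) + 0.01).round(2),
    "graft_loss": rng.integers(0, 2, n),
    "classI_mm":  rng.integers(0, 30, n),
    "classII_mm": rng.integers(0, 30, n),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="graft_loss")   # multivariable Cox model
hr_per_1 = np.exp(cph.params_["classII_mm"])               # HR per 1 additional mismatch
print("HR per 1:", hr_per_1, "per 5:", hr_per_1 ** 5, "per 10:", hr_per_1 ** 10)
```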
i-IFTA biopsies also had higher molecular classifier scores for ABMR but not TCMR. TCMR classifier scores in i-IFTA>0 biopsies were below the 0.10 cutoff for positivity. We conclude that i-IFTA is a reflection of inflammation triggered by the response to wounding and is seldom associated with TCMR (confirming the Banff criteria). The injury triggering i-IFTA should be identified if possible, e.g. ongoing ABMR, recurrent disease, or recent successfully treated TCMR. Prognosis is worse for kidneys with i-IFTA because they are experiencing some process inducing parenchymal injury, and the inflammation in scarred areas reflects the response to wounding and not active TCMR. Introduction: Basiliximab, alemtuzumab, and thymoglobulin are the most commonly used induction agents in the US. While many centers use depleting antibody induction therapy in patients with DSA, the optimal induction for patients without pre-transplant DSA is not completely understood. The goal of this study is to compare the incidence of de novo DSA (dnDSA) and outcomes between induction therapies in patients with no pre-transplant DSA. Methods: 1,147 adult patients undergoing kidney transplantation at a single high-volume institution between January 2013 and May 2017 were identified. Patients receiving multiple or no induction agents were excluded. 782 patients were identified as having a negative virtual cross match (VXM) (absence of DSA) and were included in this study. Kaplan-Meier analysis was used to assess the incidence of dnDSA and allograft survival between induction therapies in this group. DnDSA is defined as the development of new post-transplant DSA at any MFI level. Results: Of the 782 included patients, the majority (66.8%) received basiliximab; 11.1% received alemtuzumab and 22.1% received thymoglobulin. Patients receiving alemtuzumab were less sensitized, more likely to be white, and younger (p<0.01). The overall incidence of dnDSA at 1 year in patients with a negative VXM was 7.2%. Patients who received alemtuzumab had the highest rate of dnDSA at 14.5%, compared to 5.4% and 8.9% in the basiliximab and thymoglobulin groups, respectively (p=0.009). Importantly, there was no association between induction agent and overall actuarial graft survival (alemtuzumab 100%, basiliximab 98.2%, thymoglobulin 98.8%). There was no difference in graft survival for patients who developed dnDSA within 1 year after transplant compared to those who did not develop dnDSA. Conclusion: Alemtuzumab was associated with significantly higher rates of dnDSA in patients with no pre-transplant DSA when compared to basiliximab and thymoglobulin but had no impact on kidney allograft outcomes. Additional controlled analyses and long-term follow-up are needed to more completely understand this finding. The feasibility, complications and outcomes of solid organ transplantation in the HIV-positive population are well documented; outcomes have been excellent, with a slightly increased risk of rejection but no significant increase in the risk of opportunistic infections compared to the non-HIV population. Basiliximab has typically been used for induction therapy in HIV-positive organ recipients, but there is a paucity of data regarding the use of lymphocyte-depleting agents, in particular Alemtuzumab, and their effect on rates of CD4 count recovery, rejection and opportunistic infection. We present single-center data from 14 HIV-positive renal transplant recipients transplanted from 2012 to 2017.
Induction therapy was chosen based upon perceived immunologic risk of rejection using the same determinants (level of DSA, age, cancer history) as for the non-HIV population at our institution. Six patients received Alemtuzumab (average risk), 3 Basiliximab (low risk) and 4 Thymoglobulin (high risk). Conclusions: Based on this small cohort, we conclude that lymphocyte-depleting agents, including Alemtuzumab, are valid options for induction in HIV positive recipients undergoing renal transplantation. Regarding opportunistic infections, a case of disseminated nocardia occurred in the Alemtuzumab group only following treatment of rejection with ATG, with a downstream effect on CD4 count recovery by 12 months post transplantation. The case of shingles at 4 months, when the CD4 count was <100, prompted us to consider basing the duration of anti-viral prophylaxis on CD4 count rather than the standard duration applied to the non-HIV transplant population. Overall, rates of recovery in CD4 counts were acceptable. Apart from a case of TMA that resulted in allograft loss, rejections were mild and overall graft function was acceptable.

Anti-thymocyte globulin (ATG) is currently the preferential induction treatment in kidney recipients at high risk for allograft rejection. However, no study has evaluated the benefits of this induction strategy in terms of patient survival. We conducted a multicentric prospective study including unselected kidney recipients from 4 referral centers (2004-2014). We assessed the type of induction therapy (IL2R inhibitors or ATG) and the dose of ATG (mg/kg).

Historically, renal transplant recipients (RTRs) with chronic hepatitis C (HCV) have worse long-term outcomes. Induction with alemtuzumab (ALM) in HCV positive RTRs has been postulated to contribute to HCV disease progression, but the impact on histopathological findings has yet to be explored. We assessed the effect of ALM on graft outcomes using biopsy data from the first year post transplant in patients with HCV. A retrospective cohort study of HCV positive RTRs from 1/2000 to 3/2017 was conducted. Maintenance immunosuppression included tacrolimus, MMF, and steroid taper per center protocol. The primary outcome was acute cellular rejection (AR); the secondary outcome was graft survival. A total of 113 RTRs with HCV were identified; 91 patients received 179 biopsies within 1 year post transplant. Lymphocyte-depleting induction with ALM or antithymocyte globulin (LD group) was used in 72% of RTRs, while 28% received non-lymphocyte-depleting induction, including basiliximab or steroids alone (non-LD group). Baseline characteristics are summarized in Table 1. There was no difference in 1-year AR-free survival or graft survival between the LD and non-LD groups (p=0.56, p=0.6). AR-free survival was similar in patients receiving HCV NAT positive versus HCV NAT negative donor organs (p=0.42). Graft survival was worse in the viremic donor group (p=0.045). In patients who received NAT positive donor organs, 19 grafts failed, of which 3 had been successfully treated for HCV. ALM induction and donor HCV NAT status do not negatively impact one-year AR risk; however, HCV NAT positive donor status does portend worse long-term graft survival. As the KDPI is equally impacted by HCV Ab and HCV NAT positivity, a modification of this algorithm may better predict graft survival. Larger studies are needed to confirm this finding.
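The induction comparison above rests on Kaplan-Meier estimates of AR-free and graft survival with a log-rank test between groups. A minimal sketch of that kind of analysis is given below; the file name, column names and time points are hypothetical stand-ins, not the study's data.

# Hedged sketch: Kaplan-Meier AR-free survival and a log-rank test comparing
# lymphocyte-depleting (LD) vs non-LD induction. All names are hypothetical.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical columns: time_to_ar_days, ar_event (1 = rejection, 0 = censored),
# induction ("LD" or "non-LD")
df = pd.read_csv("induction_cohort.csv")
ld = df[df["induction"] == "LD"]
non_ld = df[df["induction"] == "non-LD"]

kmf = KaplanMeierFitter()
for label, group in [("LD", ld), ("non-LD", non_ld)]:
    kmf.fit(group["time_to_ar_days"], group["ar_event"], label=label)
    print(label, "1-year AR-free survival:", kmf.predict(365))

result = logrank_test(ld["time_to_ar_days"], non_ld["time_to_ar_days"],
                      event_observed_A=ld["ar_event"],
                      event_observed_B=non_ld["ar_event"])
print("log-rank p-value:", result.p_value)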
Purpose: Delayed graft function (DGF) is associated with increased allograft immunogenicity and decreased long-term survival. We hypothesized that high levels of donor urinary biomarkers for injury and inflammation amplify inflammation after reperfusion in recipients and that Thymoglobulin (Thymo) reduces reperfusion injury and the risk of DGF. Methods: In a multicenter prospective cohort, we collected urine samples from deceased donors at organ procurement for injury and inflammation biomarker measurements (IL-18, KIM-1, MCP-1, NGAL, C3a, C5a). We then recorded induction therapy and post-transplant outcomes for the 572 recipients of kidneys from these donors. Results: Urinary biomarkers revealed that kidney injury and complement activation were common among deceased donors. 90% of the recipients received Thymo at a median dose of 5.8 mg/kg. When analyzed above or below the median Thymo dose, higher Thymo dose was associated with DCD donor status, longer cold ischemia time, and high cPRA. Thymo dose was not associated with a) recipient DGF, b) post-transplant renal function at day 7 and 12 months, or c) graft failure rate at 1 year. Thymo dose was also not associated with a reduced DGF rate in organs with deceased donor acute injury (defined by AKIN). We noted a significant interaction between the 2nd tertile of urinary C5a (and a trend for the 3rd tertile) and Thymo dose on the outcome of DGF. Compared to kidneys with lower C5a levels (1st tertile), the odds of DGF were higher with elevated C5a and lower Thymo doses.

Opioid exposure is a concern after live donation (LD) for transplant. We theorized that a protocol using pregabalin pre-operatively to desensitize nerves, followed by ketorolac after surgery, can control pain and thus require fewer perioperative narcotics. The aim was to determine whether a non-opioid analgesic protocol for LD nephrectomies could decrease the use of narcotics without an increase in complications compared to standard of care (SOC).

There was a bimodal pattern to recipient age, with the biggest peak at age 65 and a second lower peak at age 37. There was also a bimodal pattern to the difference in ages between donor and recipient, with the biggest peak at a difference of 1 year and a second smaller peak at a difference of 25. Mean HLA mismatch level was 3.55±.

Purpose: Living kidney donor outcomes are tremendously important, especially given expanding donor eligibility. Relative to other predictors, the relationship between remaining kidney volume and post-donation renal function is poorly understood. Methods: We determined associations between remaining kidney volume (via pre-donation CT angiography) indexed to body surface area (BSA) and 1-year post-donation renal function in 152 living kidney donors at an academic center, using multivariable linear and logistic regression for continuous eGFR and chronic kidney disease (CKD, eGFR <60 ml/min/1.73m²), respectively. Results: Mean±SD age, body mass index and baseline eGFR were 38±11 y, 26±4 kg/m² and 97±16 ml/min/1.73m². 50% were male and 94% were white. Median (range) remaining kidney volume/BSA was 79.6 (47.1-114.2) ml/m². Mean baseline (pre-donation) eGFR was greater with increasing tertiles (T1-T3) of remaining kidney volume/BSA (92±14, 97±16 and 107±16 ml/min/1.73m² for T1, T2 and T3, respectively; p<0.001). There were no other significant differences in baseline characteristics by remaining kidney volume/BSA tertiles.
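As a rough illustration of the modelling approach named in the Methods above (multivariable linear regression for continuous 1-year eGFR and logistic regression for CKD, with remaining kidney volume indexed to BSA as the main exposure), a hedged sketch follows; the data file, column names and adjustment set are illustrative assumptions, not the study's variables.

# Hedged sketch of the regression approach described above; not the study's code.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical columns: egfr_1yr, ckd_1yr (0/1), remaining_vol_bsa,
# age, sex, bmi, baseline_egfr
donors = pd.read_csv("living_donors.csv")

# Linear model: adjusted association of remaining volume/BSA with 1-year eGFR
linear = smf.ols("egfr_1yr ~ remaining_vol_bsa + age + sex + bmi + baseline_egfr",
                 data=donors).fit()
print(linear.params["remaining_vol_bsa"],
      linear.conf_int().loc["remaining_vol_bsa"].tolist())

# Logistic model: adjusted odds of CKD (eGFR < 60) at 1 year
logistic = smf.logit("ckd_1yr ~ remaining_vol_bsa + age + sex + bmi + baseline_egfr",
                     data=donors).fit()
print(logistic.summary())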
As shown in the Figure (panel A) , post-donation renal function declined and then remained separated throughout the fi rst year for T1-T3. Panel B shows that the probability for CKD at 1-year increased abruptly for lower remaining kidney volume/BSA in donors with baseline eGFR <90 ml/min/1.73m 2 , remained stable at around 30% regardless of remaining kidney volume/BSA in donors with baseline eGFR 90-100 ml/min/1.73m 2 , and did not increase above 30% unless remaining kidney volume/BSA was <55 ml/m 2 in donors with baseline eGFR >100 ml/min/1.73m 2 . As shown in panel C, each SD increase in remaining kidney volume/BSA was associated with 2.4 (95% CI: 0.7-4.1) ml/min/1.73m 2 higher adjusted 1-year eGFR. Conclusions: In this living donor cohort, remaining kidney volume/BSA was a major determinant for CKD 1 year after nephrectomy in those with baseline eGFR <90 ml/min/1.73m 2 . More studies are needed to evaluate the predictive utility of this clinical measure and its potential role in donor evaluations and informed consent. Cohorts were compared before (cohort 1, n = 55) and after (cohort 2, n = 23) the introduction of a multimodal pain control protocol. The protocol consisted of a preoperative dose of gabapentin, intra-operative local injection of lidocaine, bupivacaine, and epinephrine, intra-operative dose of IV acetaminophen and ketorolac post nephrectomy then scheduled post-operative acetaminophen, and ketorolac. Opioids and benzodiazepines were given on an as needed basis. Patients were excluded if they had history of narcotic use or had other procedures during nephrectomy. The primary endpoint was LOS. Secondary endpoints included opioid consumption, average daily pain scores, and rates of opioid-and NSAID-related adverse effects Results : Patient demographics were comparable between cohorts. Cohort 1 had signifi cantly higher rates of PCA use (p = < 0.001) and trended towards higher rates of epidural use (p = 0.1). Cohort 2 consumed signifi cantly more acetaminophen on POD2 (p = <0.001) and ketorolac (<0.0001). In living kidney transplantation, it is reasonable to believe a donated kidney's function would be associated with the function of the donor's remaining kidney. Indeed, recipients of kidney transplants from living donors who themselves develop ESRD have higher mortality and graft loss. We hypothesized that there would be an association along the full range of follow-up donor kidney function with recipient outcomes. Methods: We studied SRTR data from 2008-2016 to determine the association between early donor follow-up eGFR and recipient graft loss and mortality, using Kaplan-Meier curves and Cox proportional hazards models. Results: We found that donors with a six month follow-up eGFR<60 were older, more likely male, more likely to be white, and had lower pre-donation eGFR than those with higher follow-up eGFR (Table 1) . Their recipients were also older and more likely to be white (Table 1 ). There was an association between six month post-donation eGFR and recipient graft loss and mortality ( Figure 1 ). After controlling for multiple donor and recipient factors including donor preoperative eGFR, higher post-donation follow-up eGFR was associated with lower deathcensored graft loss (per 10mL/min, adjusted hazard ratio 0.96, p=0.02, Figure 1 ). Conclusion: Recipients of kidneys donors who develop post-donation kidney dysfunction have higher risk of graft loss after controlling for multiple donor and recipient factors. 
A better understanding of donor response to nephrectomy can inform recipient management.

Limited health literacy (LHL) is known to adversely affect health outcomes in patients managing chronic illness. We aimed to examine the association between LHL, listing, and waitlist mortality among KT candidates, as well as between LHL and postoperative length of stay among KT recipients. Methods: In a prospective cohort study of 1,544 adult KT candidates and 379 KT recipients, health literacy was assessed at the time of KT evaluation and at admission for KT, respectively (5/2014-8/2017). LHL was identified using the Brief Health Literacy Screen (BHLS; scores range from 0-12, with higher scores indicating worse health literacy). Based on the distribution of scores in each cohort, a BHLS score >7 in KT candidates and a BHLS score >5 in KT recipients were used to define LHL. Using Cox proportional hazards models adjusted for demographic and clinical characteristics, we quantified the association between LHL and time to listing and waitlist mortality among KT candidates, and between LHL and length of KT hospital stay among KT recipients.

The aim of this study is to determine if transplant quiz scores given prior to discharge were predictive of known patient risk factors and if the scores were associated with clinical outcomes. Methods: Retrospective cohort study of 198 adult kidney recipients transplanted between 2014-16. All patients who completed a discharge quiz were included. The quiz included questions related to medications, lab draws, rejection, and infection. Pediatric and multi-organ transplant recipients were excluded. Univariate and multivariable statistics were utilized for analyses. Results: 198 patients were included. Quiz scores of ≤75% were more common in African-Americans (AAs), Medicaid-insured patients, deceased donor (DD) recipients and those deemed cognitively impaired during the social worker assessment (Tables 1 and 2), validating that the quiz identifies traditionally high-risk patients. Readmissions and overall healthcare utilization were higher in those with quiz scores >75% (Table 3). Conclusion: These results demonstrate that low quiz scores adequately identified traditionally high-risk kidney transplant recipients (AAs, Medicaid, DD and cognitively impaired). However, low quiz scores were associated with lower readmissions and healthcare utilization, which may reflect focused efforts by the transplant center and clinicians to mitigate risks in these traditionally high-risk patients.

(Table 1). Each sequential model reduced the race HR. The % reduction in racial disparity from adding SES variables (Model 2) was the largest, at 19%, though the effect of modifiable LDKT decision-making variables still reduced the racial disparity by 14% even after controlling for SES and psychosocial variables. The best predictors of time to LDKT were modifiable, including increasing attendance of others at evaluation and sharing LDKT education. Although some of the racial disparity in LDKT is accounted for by non-modifiable factors like SES, interventions increasing non-white patients' support network involvement in learning and attending evaluation may reduce the racial disparity in LDKT by encouraging more living donors to come forward.
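The sequential-adjustment analysis summarized above (nested Cox models for time to LDKT, tracking how the race hazard ratio changes as covariate blocks are added) could be sketched roughly as follows; the cohort file, covariate names and model groupings are hypothetical placeholders, not the study's actual variables.

# Hedged sketch: nested Cox models for time to LDKT and the % reduction in the
# race-associated disparity as covariate blocks are added. Names are hypothetical.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical columns: time_to_ldkt, ldkt_event, nonwhite (0/1), plus SES and
# decision-making covariates
df = pd.read_csv("ldkt_cohort.csv")

model_specs = {
    "Model 1 (race only)": ["nonwhite"],
    "Model 2 (+ SES)": ["nonwhite", "income", "education", "insurance"],
    "Model 3 (+ decision-making)": ["nonwhite", "income", "education", "insurance",
                                    "others_at_evaluation", "shared_ldkt_education"],
}

base_log_hr = None
for name, covs in model_specs.items():
    cph = CoxPHFitter()
    cph.fit(df[["time_to_ldkt", "ldkt_event"] + covs],
            duration_col="time_to_ldkt", event_col="ldkt_event")
    log_hr = np.log(cph.hazard_ratios_["nonwhite"])
    if base_log_hr is None:
        base_log_hr = log_hr  # Model 1 reference
    # Percent reduction in the race coefficient relative to the unadjusted model,
    # computed on the log-hazard scale
    reduction = 100 * (base_log_hr - log_hr) / base_log_hr
    print(f"{name}: HR(nonwhite) = {np.exp(log_hr):.2f}, "
          f"disparity reduced {reduction:.0f}% vs Model 1")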
We used multivariable Fine-Gray proportional subdistribution hazards models for data with competing risks to estimate the rates of receiving a DDKT, a LDKT, or any other type of outcome (i.e., death, withdrawal from the wait-list, or still awaiting KT), and the medical and non-medical factors associated with them. Results: Outcomes and predictors can be seen in Figure 1. We only included values for the significant predictors from the multivariable models and used blank spaces to indicate predictors that were not significant. Conclusions: Even though race and several medical factors persisted in predicting transplant outcomes, we found that non-medical factors contributed as well (i.e., age, marital status, having a live donor, transplant knowledge, and learning activities). Our results also indicate that the VA National Transplant System does not exhibit the same pattern of racial disparities in receipt of KT as has been found in other non-VA transplant centers. We also identified important risk factors for receipt of KT among Veterans. VA transplant centers can use these risk factors to identify patients who may be in need of more support to ensure they receive a transplant.

The Explore Transplant @ Home (ETH) randomized controlled trial assessed the efficacy of two supplementary education strategies for kidney transplant (KT) in dialysis centers: 1) ETH, a video-based, patient-guided program with mailed and texted content (ET-PG) and 2) ET-PG plus educator-guided telephonic coaching (ET-EG). Enrollment targeted low-income (<250% of the federal poverty level) and Black patients. A total of 561 patients were randomized to ET-EG (n=189), ET-PG (n=185), or standard of care (SOC, n=187). Of these, 369 (71% Black) were included in the per-protocol analysis (n=101 ET-EG, n=108 ET-PG, n=160 SOC). Exploratory analyses examined whether the effect of ET-EG or ET-PG differed by educational background (e.g., pro-transplant attitudes and self-efficacy, previous transplant education received, health literacy and education levels) and potential transplant derailers (financial stability, social support, medical mistrust). In the primary analyses, the ETH education conditions increased knowledge, steps taken toward KT, and self-efficacy in comparison to SOC. ET-EG more effectively increased the number of steps taken toward KT compared to SOC among patients with more than a high school education (1.1 steps, 95% CI: 0.3 to 1.9), while patients with a high school education or less had a reduced number of steps taken compared to SOC (-0.9, 95% CI: -1.6 to -0.2). ET-EG also more effectively increased living donor KT (LDKT) self-efficacy for patients with insufficient social support (6.6 pts, 95% CI: 2.6 to 10.6) than for those with sufficient social support.

Unemployment after kidney transplant is common and creates a heavy reliance on public aid. While the exact reason for low employment rates is unknown, patients often experience decreased energy levels, increased pain, and increased depression after transplant. Greater levels of physical fitness have been shown to improve these issues in several populations. However, little is known about this association in kidney transplant patients and its effect on employment. The purpose of this study is to evaluate the effect of an original exercise rehabilitation program on the ability to find employment or return to school after kidney transplant.
Thirty fi ve transplant recipients that were unemployed and not in school, at the time of the transplant, were enrolled in a 12 month randomized control trial. Randomization was set at 2:1 ratio. 22 participants were randomized to a 2 day per week resistance exercise rehabilitation program (each 1 hour sessions) and 13 patients were randomized to a control group with no exercise intervention. Both groups underwent testing at baseline, 6 months, and 12 months. Two markers that were assessed at each time point were employment and school status. The exercise group had a greater increase in employment compared to the control group after 12 months (41% exercise vs 15% control). The exercise group had 4 patient begin school while the control group had zero patients begin school throughout the 12 months. In aggregate, the total percentage of patients in the exercise group who found a job or returned to school was 59%. The usefulness of physical activity as a therapeutic tool to increase ability and desire for employment and education in these patients is a promising, yet an understudied concept. These data suggest exercise rehabilitation programs should be proposed post-transplant to prepare recipients to recommence work and school life. . We compared these measures with a direct measurement of TAC ingestion, the % coeffi cient of variation (CV%) of TAC trough levels assessed over the fi rst year post Tx. We examined the relationship between these measures of MNA and immunological outcomes including biopsy proven acute rejection (BPAR) and de novo (dn) donor specifi c antibodies (DSA). RESULTS: PMBS, AMBS, and BMQ obtained in the fi rst post-Tx month did not predict subsequent BPAR or dn DSA (p =0.95, 0.44, 0.15 respectively). However, pts with a PMBS score of ≥ 2 elements at 12 months had a higher risk of a combination of either BPAR and/or dn DSA (60%), compared to pts with < 2 PMBS elements (35%) (p < 0.001). Moreover, we found that patients with a TAC CV > 41% at 12 months had a higher rate of BPAR (38%) in the fi rst year compared to patients with CV <41% (14%) (p = 0.04). There was no correlation between TAC CV% and self-reported BMQ or any of the barrier analyses. CONCLUSION: In pediatric kidney Tx pts, self-assessment measures are poor predictors of BPAR or dn DSA. Yet, parental insight using PMBS at 1 year may be predictive of poor immunological outcomes. TAC CV% with a value of >41% is a more useful indicator of potential BPAR and can be utilized on an ongoing basis to determine high-risk of BPAR. After kidney transplantation (KTx) immunosuppressive therapy causes an impaired cellular immune defense resulting in an increased risk of BKPyV-nephropathy (BKPyVAN). Prognostic markers for the outcome of BKPyVinfections are missing. BKPyV-specifi c T cells (BKPyV-Tvis)may serve as a diagnostic parameter for the risk of BKPyV-associated complications. After KTx follow-up of BKPyV-Tvis were analysed in 42 children with current or previous detection of BKPyV-DNA in blood. Leukocytes were stimulated in vitro with BKPyV-antigens (VP1; large T) and immunostained by fl uorescent antibodies. Based on specifi c cellular activation and induction of intracellular cytokine-production, BKPyV-CD4 and CD8 Tvis were identifi ed by fl ow cytometry. After accounting for donor type, we found a signifi cantly lower hazard of 5-year renal allograft failure in pre-emptively transplanted children (HR 0.742, p = 0.05 Background Liver transplant has been performed successfully regardless of a positive crossmatch. 
However, the clinical implications of a positive crossmatch in liver retransplants have not been evaluated in large studies. The aim of this study is to assess whether there are differences in patient and graft survival after liver retransplant with a positive crossmatch compared to a negative crossmatch. All liver retransplants performed between 1994 and 2014 were reviewed from the Scientific Registry of Transplant Recipients. Liver retransplants with a reported crossmatch were included whether or not the results of the crossmatch were available before transplant. Results: T-cell crossmatch (TXM) was performed in 3429 adult liver retransplants and was positive in 349 patients (10.2%); no difference was observed in graft and patient survival between positive and negative TXM. B-cell crossmatch (BXM) was recorded in 2285 adult retransplants and was positive in 498 (21.8%), with no difference in graft and patient survival between positive and negative BXM. TXM was performed in 466 pediatric liver retransplants and was positive in 46 (9.9%). In pediatric retransplants, 1- and 5-year graft survival was lower in patients with positive TXM (58.7% and 50%) compared to negative TXM (69.9% and 60.1%, p=0.02).

Despite reports that associate donor-specific antibody (DSA) with rejection after liver transplantation, grafts are still allocated according to blood group (ABO) but not human leukocyte antigen (HLA) compatibility, possibly due to the absence of an easily discernible clinical association between adverse recipient outcome and DSA. Re-transplantation provides a test environment where the presence of preformed DSA is prevalent and its effect on outcome should be apparent. Method: All patients undergoing a second liver transplantation with available pre-operative serum and donor cells were included, with the exception of ABO-incompatible or multiple organ transplants. Banked sera were tested for anti-HLA antibodies with Luminex-based solid phase assays. The threshold for positivity was a median fluorescence intensity > 1000. Anti-HLA antibodies to the second donor (D2SA) were typed for HLA-A, B, C, DRB1, DRB3/4/5, DQA1/B1 and DPA1/B1. Results: Anti-HLA antibodies were identified before the second transplantation in 40 (51%) of the 79 patients included in the study. Primary and re-transplantation characteristics were similar in both groups except first graft survival, which was significantly shorter in recipients who did not develop anti-HLA antibodies before the second transplantation. Preformed D2SA was identified in 31 (39%).

In our series, survival after liver retransplant is excellent and is similar to primary liver transplant, despite inherent increased complexity and higher MELD than primary transplants. More resources are utilized in the redo setting. CLK transplant in the redo setting achieved satisfactory outcomes in a group often considered prohibitive.

Introduction: After donation after circulatory death (DCD) liver transplantation (LT), non-anastomotic biliary strictures (NAS) and early allograft dysfunction (EAD) are more frequently observed than after donation after brain death (DBD) LT. The outcomes after retransplantation (reLT) with a DCD liver are not known. Therefore, we aimed to assess the results of patients undergoing retransplantation using a DCD liver graft. Methods: In this multicenter retrospective study, all DCD reLTs in the Netherlands between 2003 and 2017 were included. For each DCD reLT, two DBD reLTs were selected as a matched control group.
Matching was performed based on the number of successive reLTs, BAR score, early (<3 months) or late (≥3 months) reLT, and year of reLT, respectively. Baseline characteristics of both donor and recipient and outcome parameters were collected and analyzed. EAD was defined according to the Olthoff criteria, and NAS as bile duct strictures within two years after LT at any location in the biliary tree other than the anastomosis. Continuous data are shown as median (IQR).

There was a two-fold increase in reporting of fertility issues (22%) compared to the general population (11%). 41% of pregnancies were unplanned. The majority of recipients were on calcineurin inhibitor therapy for immunosuppression, 37% on cyclosporine and 59% on tacrolimus; 6% of pregnancies were exposed to a mycophenolic acid product (MPA) during the first trimester. There were a total of 580 pregnancy outcomes (including multiples), resulting in 397 live births (71%), 132 miscarriages (24%), 18 terminations (3%), 8 stillbirths (1%) and 5 (1%) ectopic pregnancies. There were 34 first-trimester exposures to MPA, resulting in 1 termination, 2 stillbirths, 21 miscarriages, and 10 live births (3 with birth defects). Comorbid conditions during pregnancy included: hypertension 23%, preeclampsia 22%, cholestasis of pregnancy 18% (general population 1%), and rejection 3%. Graft loss within 2 yrs of delivery occurred in 3%. Mean gestational age of the live births was 36.6±3.4 wks and mean birthweight was 2744±785 g; 39% were preterm (<37 wks) and 31% were low birthweight (<2500 g). The birth defect rate was 5%, similar to the general population, which ranges from 3-5%. The TPR has been following the children, who have a mean age of 9.3 yrs (age range, 0.01-30 yrs), and there are also 10 grandchildren. Conclusions: Female liver transplant recipients have reported successful pregnancies, with an increased risk of premature and low birthweight infants. Cholestasis of pregnancy is much more common in this population. The risk of miscarriage and birth defects when pregnancies are exposed to MPA in utero underscores the need for pre-conception counseling. Infertility in liver transplant recipients requires additional study.

Background: Reconstruction of biliary drainage after liver transplantation (LTx) in patients with primary sclerosing cholangitis (PSC) has been a matter of controversy. Over recent years, the traditional method of Roux-en-Y hepaticojejunostomy (RY) has been challenged by duct-to-duct (DD) biliary reconstruction. The argument against DD reconstruction has been a potential increased risk of development of cholangiocarcinoma (CCA) in the bile duct remnant. Methods: This is a retrospective review of biliary complications, graft and patient survival after LTx in PSC patients based on the type of biliary reconstruction, DD vs. RY. Results: A total of 120 LTx for PSC were performed between 2005 and 2016. Twenty-two patients were excluded because they received partial grafts. DD was done in 39 patients and 59 patients were reconstructed with RY. One-, 5-, and 10-year survival was similar between the two groups. Bile leak and biliary stricture rates were not significantly different between the 2 groups. Nine patients in the DD group (23%) and 13 patients in the RY group developed biliary strictures, all managed endoscopically or percutaneously. There was one case of anastomotic leak in each group. There was no case of cholangiocarcinoma observed in these patients over the long period of follow-up.

Results: 94 patients screened; 71 patients met inclusion criteria.
See Table 1 for baseline demographics. From start of DAA therapy to 12 weeks post therapy, the mean change in dose-normalized TAC levels was -2.5 (+ 5) ng/mL, p=0.01. (see Figure 1 ). The steepest drop in dose-normalized TAC levels occurred in the fi rst 4 weeks of treatment, after which levels stabilized. The overall mean TAC level was 4.8 ng/mL (+2.5) with a mean of 1 dose change per patient. 70 patients (99%) achieved SVR, 2 patients (3%) had ACR, 2 patients (3%) had graft loss and 2 patients (3%) died. Conclusions: From start of treatment to 12 weeks post-DAA, LT recipients on DAAs experienced a decrease in dose-normalized TAC levels. Close monitoring of TAC levels, is warranted and TAC dose increases may be indicated. Direct acting antivirals (DAAs) have been used for the treatment of hepatitis C (HCV) in liver transplant patients since early on after their approval, however data in real-life settings is limited. Treating HCV after liver transplant may precipitate an immune restoration response, freeing the T-cells that were targeting HCV to recognize and attack the graft. Previous studies have shown treatment of HCV with ledipasvir/sofosbuvir appears to increase the incidence of acute cellular rejection to up to 25% shortly after the achievement of sustained virologic response (SVR). The primary objective of this study was to determine the incidence of acute cellular rejection in liver transplant patients in a singlecenter who received a DAA for HCV treatment after transplant for at least four weeks. All adult liver transplant patients who received a DAA for treatment of HCV for at least four weeks from November 5, 2014 to March 1, 2016 were identifi ed. Patients were excluded if they had a multi-organ transplant, HIV coinfection, or previous history of rejection. Charts were reviewed to identify the incidence of acute cellular rejection, SVR at 12 weeks (SVR12), graft loss, and death. The charts of 130 patients were reviewed. Of the 99 patients eligible for inclusion, the majority were male (73%), Caucasian (75%), and HCV genotype 1 (83%). Eighty-two percent were treated with ledipasvir/sofosbuvir. Less common DAA regimens studied included ledipasvir/sofosbuvir/ribavirin, daclatasvir/ sofosbuvir, daclatasvir/sofosbuvir/ribavirin, and sofosbuvir monotherapy in one patient. Seventy-fi ve percent of patients were greater than 12 months posttransplant when they were initiated on DAA therapy. Four of the 99 patients experienced biopsy-proven rejection, which is a rate of 4% and is less than the national average of 15-25%. All four patients who experienced acute cellular rejection were HCV genotype 1a and had been on ledipasvir/sofosbuvir with the addition of ribavirin in one patient. Ninety-four percent of patients achieved SVR12, no patients experienced graft loss, and one patient died of sepsis. This study did not fi nd an increased incidence of acute cellular rejection in liver transplant patients treated with DAAs for HCV. The proportion of HTx recipients receiving both heart and kidney is increasing over time. Purpose: Patients (pts) bridged with mechanical circulatory support (MCS) may require more optimal donors for best outcome compared to non-bridged transplant recipients. MCS pts have more bleeding due to scar tissue and administration of blood products can potentially overwhelm the RV of the donor heart causing right heart failure. Physicians may prefer a pristine, potentially larger donor, to accommodate excess bleeding and other complications. 
Prior to MCS, physicians would take donor hearts even with high-risk factors for their very ill pts. In the present era, we place these very ill patients on MCS, making them stable and better candidates for HTx. However, as a result, high-risk donors are no longer being taken for these pts, and they may wait for a lower-risk donor heart. We assessed whether MCS pts did require a low-risk donor compared to non-MCS pts. Methods: We assessed 5,826 status 1A pts awaiting HTx using the UNOS STAR file. Pts were divided into those with MCS (n=3121) and those without MCS (n=2705). Donor hearts were characterized by donor age, gender and BMI in both groups. We also determined the percentage of male-to-male, male-to-female, female-to-male, and female-to-female donor/recipient pairs. Results: MCS pts awaiting HTx had similar donor age compared to non-MCS pts. However, the donors for MCS pts were significantly more often male than female. Days to HTx was significantly longer in MCS pts. Donors for the MCS pts were significantly taller than for the non-MCS pts. Post-tx outcomes were similar. Conclusion: MCS pts awaiting HTx had significantly longer time in days to transplant and appeared to require a higher percentage of (larger) male donors into male recipients. This suggests that HTx physicians are more selective in obtaining a more optimal donor for their MCS pts.

In 2016, 47 DDHs that were recovered by our OPO were transplanted, almost a 50% increase over the past 2 years. Our SRTR ratio of observed to expected DDH placement increased in parallel to well above the national average (O:E ratio, Figure 1). The O:E ratios are determined from national OPO performance data, adjusted for donor age, cause of death, and other quality measures. This increase in DDH transplantation to well above expectation was spread evenly among multiple donor hospitals and the centers that accepted the organs. Our conversion rate remained unchanged at 75-80%. What did we do differently? Because specialists may differ in their interpretations of donor echocardiograms, early in 2015 we began using a web-based telecardiology service to make these images available to transplant centers as part of the organ offer, together with an independent expert interpretation. At times this information conflicted with the in-hospital reading. Several centers commented that viewing the images increased their confidence in accepting donor hearts. The increase in O:E placement ratio that occurred in 2015 was accompanied by a trend toward a lower mean age for accepted donor hearts (44.6, 40.5, 26.5, and 33.8, respectively, from 2013-2016), suggesting that hearts from younger donors were accepted with relatively more confidence. Graft survival from 2016 is currently 96%. Use of this strategy to increase center confidence in organ quality could significantly decrease discards of transplantable organs and increase the rate of cardiac transplantation.

Foxp3+ regulatory T cells (Tregs) are critical mediators of immune tolerance and are absolutely required for allograft tolerance in animal models. In addition, clinical trials for ex vivo-expanded Tregs are underway, using either bulk, nonspecific Tregs or alloantigen-expanded Tregs. However, the fate of endogenous, naturally occurring allospecific Treg populations remains to be determined.
Here, we take advantage of a recently developed peptide:MHC tetramer (2W:I-A b )-based system to study an endogenous allospecifi c Treg population in a mouse transplantation model in which donor cells transgenically express the model antigen 2W. 2W + BALB/c x C57BL/6 F1 donor hearts were transplanted into C57BL/6 recipients either with no treatment or in combination with anti-CD154 and donor splenocytes to induce tolerance. Compared to naïve controls, tolerant animals had a signifi cantly higher percentage (~30% vs 10%) and total number of splenic donor (2W:I-A b )-specifi c Tregs at day 7 and 30 after transplantation. The enrichment of 2W:I-A b -reactive Tregs among 2W:I-A b CD4 + cells was even more dramatic in the tolerant graft and was signifi cantly increased relative to the percentage of bulk Tregs (~70% vs 25%). We determined that the increased Treg percentages was due to the inhibition of donor-specifi c Tconv expansion and a modest increase in Treg numbers. We next tested if the accumulation of endogenous donor-specifi c Tregs in the spleen and graft was due to the expansion of preexisting Tregs by quantifying the expression of the proliferation marker Ki67 at 7-30 days after transplantation. Increased percentages of 2W:I-A b -specifi c Foxp3 + Tregs expressed Ki67 in tolerant recipients, compared to naïve or acutely rejecting recipients. To test whether conversion of 2W:I-A b Tconvs into induced iTregs also contributed to increased 2W:I-A b Treg percentages, we transferred Foxp3 -CD44 -CD4 + T cells from Foxp3-GFP transgenic mice into transplant recipients at the time of tolerance induction. A small but detectable percentage of these transferred cells differentiated into Foxp3 + Tregs in the spleen and graft. Taken together, we show for the fi rst time that the increased percentages of endogenous donor-specifi c Tregs observed in tolerance is achieved through the inhibition of donor-specifi c Tconv expansion and a modest increase in Treg numbers that is dependent on proliferation of Tregs and conversion of Tconvs into iTregs. Regulatory T cells (Tregs) are a promising therapeutic tool for inducing transplantation tolerance. Tregs can be divided into those of thymic origin (tTreg) and those which arise in the periphery or induced in vitro (pTregs and iTregs, respectively). tTreg and iTreg share many key features such as their reliance on Foxp3 expression, but they differ in the repertoire of their TCRs and the epigenetic regulation in Foxp3 locus. Exciting new studies in the fi eld of immunometabolism have shown that cellular metabolism of different types of immune cells including T cells can affect their fate and function. Although growing insights in this fi eld suggest the manipulation of Treg metabolic traits as a therapeutic strategy, there is actually a surprising paucity of information regarding the metabolism of distinct Treg subsets and human Tregs. Here, we performed a detailed comparative analyses of human tTregs and iTregs subsets to understand their metabolic signatures. We activated ex vivo Tregs (tTregs) from healthy donor PBMCs in the presence of polyclonal stimulation and IL-2, and generated iTregs by naïve CD4+ T conventional cells in iTreg skewing conditions that include IL-2, TGFb and ATRA. Using seahorse analyses, we found that by day 3 post activation both human Treg subsets similarly engage glycolysis. By day 7, iTregs showed increased propensity to favor glycolytic metabolism unlike tTregs. 
This correlated with a decrease in FOXP3 expression in iTregs, a feature associated with their instability. In contrast, tTregs maintain FOXP3 expression exhibiting reduced glycolysis. These suggested that each of Treg subsets have distinct requirements for glycolysis in temporally manner. To understand this further, we inhibited glycolysis by 2-deoxy-D-glucose (2DG) at the onset as well as at day 3 post activation. Notably, addition of 2DG at the onset dramatically diminished FOXP3 expression in iTregs. In contrast, inhibiting glycolysis showed at best a modest effect on FOXP3 expression on tTregs. Furthermore, once FOXP3 expression was upregulated, 2DG treatment did not affect the maintenance of FOXP3 expression for either Treg subset. Introduction: The kidney is pro-tolerogenic by mechanisms as yet unknown. In mice, tolerance of kidney allografts can occur spontaneously in certain strain combinations, such as DBA/2 to C57/BL6. We have previously identifi ed novel lymphoid structures in all accepted kidney grafts that may be important in tolerance induction and named them Treg-rich lymphoid structures (TOLS). Depletion of Tregs results in the dissolution of these structures, resulting in renal allograft rejection. Ectopic immune cell aggregates referred to as tertiary lymphoid organs (TLO) have been found in various chronic infl ammatory conditions. Here we further investigated the time-course by which various immune cell types infi ltrate in accepted mouse kidney allografts, the lymphoid characteristics and the functional properties of TOLS and how they differ from classical TLOs, as well as the mechanism of TOLS formation.Methods: DBA/2 kidneys were transplanted into C57/BL6 recipients. Transplanted animals were sacrifi ced at week 1, 3, 6 and 32 post-transplant. Immunohistochemical, pathologic, and fl ow cytometric analyses were performed to characterize the phenotype of graft cell infi ltrates, including structural and functional properties. CCR7 knockout mice were also used as recipients to study possible underlining mechanism of TOLS formation. The folk-hero "Zorro" left his mark on by a sword-slashed "Z". In transplant recipients, rare Tregs also leave a mark on their conventional T & B LC targets. This mark is caused by Ag-specifi c CD4 Tregs that release IL35-coated exosomes, "cross-dressing" (xD) their targets with IL35. Hypotheses: 1) IL35 suppression by passenger Tregs is greatly amplifi ed by xD of host T cells; 2) monitoring CD4+ T cells displaying IL35/xD-phenotype after heterotopic heart transplantation will correlate with biD regulation(HvG and GvH), and 3) that IL-35-induced "exhaustion" is a direct result of the IL35/xD-phenotype. Approach: We tolerized B6 "donor" mice with CBA DST + 125 ug (x3) MR1(anti-CD40L). We sub-optimally tolerized recipient CBA mice with B6 DST + 62.5 ug(x3) injections of MR1. On d.35, the CBA mice received either a normal B6 heart allograft(uni-D regulation) or one from a CBA-tolerized B6 donor(biD regulation).Results: While the uni-D CBA showed prolonged allograft survival, by 4 weeks all grafts were rejected. When the heart from a tolerized B6 mouse was transplanted in the same recipient, all grafts survived long term. There was an initial slight increase in % surface Ebi3+ T-CD4 cells in the uni-D mice, then the % began to drop at 2 wks and was even lower at the time of graft rejection (4 wk). 
In contrast, the PBMC-CD4 T cell % of IL35-xD cells continued to climb at 2 and 4 wks in the biD group, after which the level stabilized, suggesting an amplified impact of IL35 exosome production from the donor side. Population microscopy depicting IL35-xD cells showed punctate surface staining of Ebi3 that was similar in CD4 T, CD8 T, and B cell subsets. A similar, but non-identical, pattern of p35 expression was seen in Ebi3-xD cells. Exhaustion was confined to the IL35-xD cells and was absent from non-IL35-xD cells. Conclusion: Tregs produce IL35-coated exosomes. These exosomes left a "mark", punctate surface IL35 expression, while also promoting exhaustion in T and B cells.

Pancreatic islet allografts are effective for restoring euglycemia in type 1 diabetics (T1D) but are vulnerable to rejection by autoreactive (islet-specific) and/or conventional alloreactive T cells. Since T1D recipients have a memory autoimmune response, autoreactive T cells are presumed to be responsible for the tolerance resistance found in the non-obese diabetic (NOD) mouse model of T1D. To test this concept, diabetic NOD mice were grafted with syngeneic (NOD.Rag-/-) islets to model autoimmune disease recurrence or MHC-mismatched (C57BL/6) islets to model islet allograft destruction. We determined whether CD4 or CD8 T cells were responsible for allograft tolerance resistance in NOD mice. Islet allograft recipients were treated with co-stimulation blockade therapy (anti-CD154 antibody) and/or LFA-1 blockade (anti-CD11a antibody), which individually promote tolerance in most non-autoimmune mouse strains. While one might expect autoreactive CD4 T cells to be resistant to tolerance, analysis of islet graft-infiltrating T cells indicated that autoreactive (autoantigen tetramer-expressing) T cells in NOD mice were markedly inhibited by CD154 blockade (but not by LFA-1 blockade). In contrast, presumptively alloreactive CD4+ T cells in NOD mice readily infiltrated islet allografts, and this response was resistant to both therapies. To determine whether CD4+ T cell responses to MHC-derived or non-MHC-derived antigens drive allograft rejection, we transplanted diabetic NOD recipients with MHC-matched (B6.H-2g7), minor antigen-matched (NOD.B10), or MHC class I- and class II-deficient (B6.MHC-'bald') islets. Results demonstrated marked resistance to tolerance for all types of islet allografts relative to control NOD isografts following anti-CD154/anti-LFA-1 therapy (p < .05 for all groups relative to NOD islets). Moreover, this tolerance resistance was CD4 T cell dependent and CD8 T cell independent based on T cell depletion studies. Taken together, the results indicate that despite the pre-existing host immunity to autoantigens, the allograft response was actually more difficult to control than the autoimmune response alone. Thus, attempts to promote islet transplant tolerance in autoimmune recipients should take into account the strength of both autoreactive and especially alloreactive CD4+ T cell responses to major and/or minor histocompatibility antigens.

Purpose: The inherent challenges of selecting an acceptable donor for each of the increasing numbers and acuity of recipients have led programs to take increased risks. The outcomes of organ transplantation using organs from DWCH remain to be clarified. We aimed to make an assessment of transplant outcomes of recipients using organs from DWCH.
Objective: HIV-infected women and men and transplant recipients are two populations at increased risk of HPV-associated anal cancer, but there are limited published data. We determined anal cancer incidence and high-grade anal cytologic abnormalities (HSIL), and assessed factors associated with HSIL following organ transplantation in HIV-infected patients. We followed 111 HIV-infected women and men who underwent liver (66%) and kidney (34%) transplantation at 6 academic centers in the U.S. During baseline (pre-transplantation) and follow-up visits post transplantation, we obtained anal cytology and demographics, and measured CD4+ T cells and HIV-1 plasma RNA. Results: At the baseline visit prior to receiving organ transplantation, 87% were male, 50% were men who had sex with men (MSM), median age was 49 (IQR 43-54), median CD4+ T cell count was 371 (IQR, 252-571), and median HIV-1 plasma RNA was <50 copies/ml. Following transplantation, the incidence of anal cancer was 170 per 100,000 person-years. Of those with no disease at baseline, 9%, 19% and 22% were found to have HSIL at 6, 12 and 24 months post transplantation. In multivariable analyses, there was evidence for an association of nadir CD4 at enrollment (HR 1.3, 95% CI 1.03-1.6) and MSM behavior (HR 7.9, 95% CI 1.0-64) with incident HSIL. Conclusions: There is a high incidence of anal cancer and AIN among HIV-infected patients following transplantation. This is especially true since anal cytology underestimates the occurrence of AIN. Further studies will determine the optimal periodicity and timing of anal cancer screening among transplant recipients.

subjects enrolled and 687 (53% male; 47% female) have received a transplant (50% liver, 22% heart, 22% kidney, 6% intestine or multivisceral). The mean age at transplant was 6.4 years (range 0-21 years); 57% were EBV+, 35% were EBV- and 8% had unknown EBV status. Of the EBV- subjects, 23% seroconverted during the study period. To date, 17 subjects (6 male, 11 female) have been diagnosed with EBV+ PTLD, for an overall incidence of 2.5% (heart n=5; kidney n=5; liver n=5; multivisceral n=2). The incidence of EBV+ PTLD by organ transplanted was 3.3% in heart, 3.3% in kidney, 1.5% in liver and 10% in multivisceral. The overall mean time post-transplant of EBV+ PTLD diagnosis was 18.9 months (range 1.7-43.9 months). The mean time post-transplant of EBV+ PTLD diagnosis by organ was 25.9 months in heart recipients, 17.8 months in kidney recipients, 19.6 months in liver recipients and 2.8 months in multivisceral recipients. Eight of the 17 patients diagnosed with EBV+ PTLD received induction medication and 10 were EBV-seronegative at transplant. The anatomic location of disease was nodal (n=8), tonsillar (n=1) and extranodal (n=8). In summary, we report on the first large prospective US-based multi-center study of EBV+ PTLD in the era of modern immunosuppression and heightened surveillance for EBV, and observe a continued incidence of EBV+ PTLD in pediatric transplant recipients.

Background: EBV infection and post-transplant lymphoproliferative disorders (PTLD) are life-threatening complications of pediatric solid organ transplantation (Tx). The risk for PTLD varies by transplanted organ, with pediatric heart chronic high EBV load (HVL) carriers displaying a higher incidence of PTLD than pediatric liver HVL carriers (45% vs 3%).
Previously we reported the presence of exhausted EBV-specifi c CD8 + T cell phenotypes, the increase in effector memory CD8 + T cells paralleled by the loss of the naïve subset in the heart HVL carriers and not in liver HVL carriers, suggestive of different cellular mechanisms of viral control in the two settings. Therefore, we pursued transcriptomics on circulating CD8 + T cells to elucidate the pathways involved in EBV viral control after pediatric heart and liver Tx. Methods: We defi ned HVL as detection of > 16,000 EBV genomic copies/ mL blood with the qPCR assay at our institution. Previously banked 20 PBMC samples from EBV-load matched heart and liver Tx recipients were FACSsorted to obtain CD8 + T cell. RNA expression was performed with Affymetrix® gene array kit and signals were processed with transcriptome analysis console. Differently expressed genes with fold change >2 and FDR p-value <0.05 were further analyzed by ingenuity pathway analysis (IPA) software, with p-value<0.01 and activation Z score>2 considered to be signifi cant. We identifi ed 625 differentially expressed genes in paired comparison between heart HVL (n=3) and Liver HVL (n=3) carriers. Upstream analysis algorithms identifi ed activated molecules such as TLR3, STAT1, IFI16 by both measurement and prediction with inhibition of STAT3 and RELA in their downstream cascades in the heart HVL compared to liver HVL carriers. Our results suggest a potential differential role for elevated types of EBV dsRNA binding to TLR3, with down-steam imbalanced signals via STAT1/STAT3. These may explain, in part, the failure of EBV protective memory development due to STAT3 impairment, leading to chronic memory phenotypic changes and subsequent CD8 + T cell exhaustion in the heart HVL carriers. Our laboratory has shown that LMP1 isolated from EBV-associated B cell lymphoma lines of PTLD patients contains gain-of-function mutations at AA212 (G-S) and AA366 (S-T) that result in sustained ERK signaling, c-Fos activation, and AP-1 activity. In this study, we asked whether these mutations, or other genetic alterations, are present in primary EBV+ PTLD tumors themselves. DNA was isolated from formalin-fi xed paraffi n-embedded tissue sections of EBV+ PTLD tumors (n=8). Nested PCR was used to amplify LMP1, and the PCR products generated were cloned and sequenced. The presence or absence of gain-of-function mutations were assessed. 7 of 8 tumors demonstrated both gain-of-function mutations. Furthermore, 6 of 8 tumors contained an extra repeat of 8 amino acids within the LMP1 signaling tail corresponding to a putative JAK3 binding motif. We have previously identifi ed this repeat in 3 of 6 EBV+ B cell lymphoma lines from PTLD patients. LMP1 was also cloned from the blood of a pediatric small bowel PTLD patient. In addition to these two mutations, this repeat was also present but in triplicate, suggesting this motif may be crucial to the oncogenic activity of LMP1. In order to assess host cell mutations in EBV+ PTLD tumors, a qBiomarker Mutation PCR array was performed for the PI3K/ AKT/mTOR pathway, known to be important in human malignancies. There were signifi cantly more mutations in the primary tumors, with 25 distinct mutations found within the PTEN, PI3K, and STK11 molecules. 3 distinct mutations were identifi ed in more than 1 tumor, and 4 distinct mutations were shared between a cell line and tumor. 
Our findings clearly demonstrate that key gain-of-function mutations in LMP1 detected in blood and cell lines are also detected in the primary tumor, suggesting a role in tumorigenesis and great potential as biomarkers of EBV+ PTLD. Moreover, host cell mutations may also contribute to dysregulation of key signal transduction pathways in EBV+ PTLD.

Elucidating transplant surgeon perspectives helps identify common goals and recommendations for optimal care in this unique population.

Obesity is considered a relative contraindication to pancreas transplantation due to increased risks of wound- and surgery-related complications. Robotic surgery has never been applied to pancreas transplantation in obese recipients, even though robotic kidney transplantation has proven its value in reducing wound-related complications in these recipients. From October 2015 to July 2016, five morbidly obese patients with diabetes underwent laparoscopic robotic-assisted pancreas transplantation at the University of Illinois at Chicago. The pancreas graft was procured and benched in the standard fashion. The operation was completed via two 12-mm ports (camera, laparoscopic bed-side assistance), two 8-mm ports for the robotic arms, and a 7-cm epigastric incision for the hand port. The portal vein and arterial Y-graft of the pancreas were anastomosed to the recipient's left external iliac vein and artery, respectively. Exocrine pancreas drainage was performed via the bladder in three cases and through the jejunum in the remaining two. Three recipients had type 1 diabetes; the remaining two had insulin-dependent type 2 diabetes. All recipients were obese, with an average BMI of 34±4.63 kg/m². Mean cold and warm ischemia times were 11±4.5 hours and 43±7 minutes, respectively. Mean estimated blood loss was 180±27.4 mL, mean operative time was 433.4±125 minutes, and mean hospitalization was 7.4±1.7 days. The post-operative period was uneventful for all patients. No surgical site infections were observed during the hospital course and subsequent follow-up. Excellent metabolic control was achieved in all cases at the time of discharge; at a mean 16±5.6 months of follow-up, patients were not on insulin treatment and the average hemoglobin A1c was 4.92±0.5%. Robotic-assisted pancreas transplantation is a successful novel approach that minimizes surgical and wound-related complications and can offer a valuable strategy for type 2 diabetic obese patients with suppressed levels of C-peptide.

May 2017 (n=95) were reviewed. They were divided based on median portal flow after reperfusion into a high (>1300 ml/min, n=47) and a low group (≤1300 ml/min, n=48). Demographics and intraoperative characteristics were analyzed with postoperative outcomes. Results: Demographic characteristics were similar in both groups. Intraoperative factors such as cardiac output, central venous pressure, pulmonary artery pressure and transfusion requirements were also similar. Postoperatively, higher cumulative rates of biliary strictures at 6 months, 1 year and 2 years were observed in the low flow group compared to the high flow group.

Although monitoring and immunosuppression of vascularized composite allografts (VCA) draw experience from solid organ transplantation, VCAs pose unique challenges. A deeper understanding of the mechanisms underlying VCA rejection is critical for development of anti-rejection therapies targeted to VCA recipients.
The availability of biopsies from the largest cohort of face transplant patients at a single center worldwide allowed us to study the molecular signature of acute cellular rejection (ACR). NanoString platform was used to quantify the expression of 730 genes in skin biopsies collected from 7 face transplant patients during ACR (Banff grade 1, n=6; grade 2, n=8; grade 3, n=11) and non-rejection (Grade 0; n=11), and compared with facial skin biopsies from non-transplanted patients with rosacea (n=3) and delayed type hypersensitivity reaction (n=4), and healthy nontransplanted facial skin (n=4). Gene expression fi ndings were validated at protein level using immunofl uorescence staining of biopsies. We found distinct gene expression profi les associated with ACR, infl ammatory dermatoses and normal skin. Comparison of grade 3 ACR vs non-rejection biopsies identifi ed 202 differentially expressed genes (adjusted p value<0.05). Functional pathway analysis showed that 3 gene sets are over-expressed in ACR: T cell activation, interferon-gamma responses, and cytotoxicity. We identifi ed distinct gene expression signatures in biopsies that correlated signifi cantly with each of the different grades of rejection (Fig1). Analysis of unique genes revealed 142 genes that are differentially expressed exclusively in face transplants during rejection, but not in infl ammatory dermatoses or normal skin. This is the most comprehensive study to date to identify the molecular signature of ACR in clinical VCAs. Our fi ndings suggest that skin biopsies from face transplants during ACR, although indistinguishable on histologic analysis from infl ammatory dermatoses, reveal extensive differences at the molecular level. Introduction -Uterus Transplantation (UTx) is the fi rst available treatment for women with Absolute Uterine Infertility (AUI). It allows the affected female to carry her own pregnancy and deliver a baby. As the fi rst center in the US we can report a delivery of a healthy baby. Materials and Methods -Under IRB protocol, 10 women born without a uterus were enrolled for UTx following successfully stored fertilized embryos. Uteri from either a living (LD) or a deceased donor (DD) were transplanted. Following UTx, immunosuppression with calcineurin inhibitor and azathioprine was given. Rejection was monitored by cervical biopsies. Embryo transfer was done after the initial healing period. Pregnancy was monitored by specialists in high-risk obstetrics. Delivery was performed as an elective caesarean section. Results -To date, 8 women have received UTx (6 LDUTx, 2 DDUTx). The uterine grafts were transplanted with the uterine arteries and variances of the uterine veins or the utero-ovarian veins to the external iliac vessels of the recipients. Regular menses occurred within 4 weeks in 3 LDUTx and 1 DDUTx. There have been no signifi cant donor or recipient complications or rejections. The fi rst pregnancy recorded occurred following a successful single embryo transfer at 6 months post UTx. Fetal growth parameters and blood fl ows of the uterine arteries and umbilical cord were normal throughout pregnancy. At an elective caesarean section a male baby with a normal birthweight for gestational age and with APGAR scores of 8/9 was born. Conclusions -As the fi rst center in the US we can report a delivery of a healthy baby. With this birth it is shown that successful UTx can be reproducible and that UTx is a promising solution for thousands of women affl icted with AUI. 
Introduction: There is limited data on CMV prevention after face transplantation. We report an individualized approach using prophylaxis followed by surveillance and preemptive therapy guided by viral load and CMV-specifi c and global CD8+ T cell markers. A 31-year-old man received a near-total facial allotransplant for gunshot-related facial deformities. He received Thymoglobulin, then tacrolimus, mycophenolate and prednisone. He was a CMV D+/R-, and received valganciclovir prophylaxis. Serial CD8+ T Cell Immune Competence (TIC) was performed, as measured by interferon-gamma production and CD107a/b degranulation in response to non-specifi c mitogen or specifi c MHC Class I alleles/CMV peptides using fl ow cytometry. CMV replication was measured by CMV PCR. No CMV-specifi c CD8 T cell immunity developed during prophylaxis (Table 1) . Valganciclovir prophylaxis was discontinued at 7 months, despite lack of CMV-specifi c immunity, but with sustained normalized global CD8+ T cell function.Asymptomatic CMV replication (CMV 549 IU/ml plasma) occurred 3 months after valganciclovir prophylaxis. At time of viremia, the total CMVspecifi c CD8+ T cell count was 12 cells/uL. CMV TIC score was 4 indicating effective immunity (Table 1) . Valganciclovir treatment resulted in immediate and sustained viral clearance. CMV IgM and IgG were detected. No CMV relapse was detected 17 months after transplant. Conclusion: Viral and immunologic monitoring allows for an individualized approach to CMV prevention after face transplantation. Even without CMVspecifi c immunity, a robust global immune function may signify the ability to develop effective immunity during CMV infection. Vascularized composite allografts consists of skin, muscle, bone, and other tissues, and is more immunogenic than solid-organ allografts. Our laboratory developed a murine model to study mechanisms of vascularized composite allotransplantation (VCA) rejection. A limb from a donor C57BL/6 (syngeneic) or BALB/c mouse (allogeneic) is placed in the cervical region of a recipient C57BL/6 mouse, and the donor's femoral artery and vein are anastomosed to the recipient's common carotid artery and jugular vein. Survival rate is >90% for syngeneic transplant recipients and 66.7% for allogeneic transplant recipients (n = 12). The skin is monitored daily for hair loss, discoloration, and dryness. Syngeneic grafts were monitored for >100 days and showed no evidence of rejection. In the allogeneic group, rejection was evident on day 3 post-transplant and severe rejection occurred by day 5. Histology showed leukocyte infi ltration on day 3 (n=2) and total loss of tissue architecture on day 5 (n=3). We utilized the high dimensional capacity of Mass Cytometry (Cytometry by Time of Flight; CyTOF) for unsupervised analysis of VCA immunity and designed a panel of surface and intracellular markers for the identifi cation and characterization of leukocyte populations in the spleen and lymph nodes. We observed a higher proportion of TNFα + CD44 lo CD62L -CD8 + T-cells and CD11c hi CD11b + NK1.1 NK cells in the allogeneic group as compared to control. To develop strategies to promote VCA survival, C57BL/6 mice were treated with donor BALB/c plasmacytoid dendritic cells (pDCs) intravenously 7 days prior to VCA. pDCs are a subset of dendritic cells that we and others have shown to prolong allograft survival. pDC-treated mice exhibited a delayed-rejection phenotype, showing little to no hair-loss and discoloration of the graft on day 5 post-transplant. 
Histology showed less leukocyte infi ltration and tissue damage than the VCA untreated groups. CyTOF analysis identifi ed an immune profi le that is uniquely distinguishable from control and untreated VCA mice on the basis of principal component analysis. In addition, pDC-treated mice had a higher proportion of Gr1 + CD11b hi myeloid-derived suppressor cells (MDSCs and such a positive effect was maintained adjusting the analysis for the contemporary global rate of CRE HAI (SOT and non-SOT patients) and community acquired infection rate p<0.01 Infl uenza infection in transplant patients can be severe but factors predictive of outcome in this setting are unclear. The aim of the study was to determine whether cytokines and specifi cally the TH 1 vs TH 2 profi les predict clinical outcomes. Solid organ (SOT) and hematopoietic stem cell transplant (HSCT) patients with acute infl uenza infection were prospectively enrolled in a multicenter study. Serum specimens were taken at disease onset and one month later. (9), intestine (6), pancreas (5), autologous HCT (5), kidney (3), lung (3), heart (1), thymus (1) Acute vascular rejection of renal allografts is associated with increased endothelial apoptosis which contributes to the development of transplant vasculopathy. We previously showed that apoptotic endothelial cells (EC) release exosome-like vesicles (ApoExo) that enhance vascular rejection in murine models of transplantation. However, the impact of ApoExo on vascular and endothelial functions remains to be characterized. Primary human EC were exposed to a pro-apoptotic stimulus in presence or not of ApoExo purifi ed by sequential centrifugation. ApoExo uptake was measured by fl ow cytometry and confocal microscopy. A gene expression analysis was defi ned by RNA sequencing. Wound closure and angiogenic activity were monitored by scratch and tube formation assays respectively. The expression of endothelial markers was measured by fl ow cytometry and by immunohistochemistry in a murine model of aortic transplantation between MHC-incompatible strains. EC displayed a rapid uptake of ApoExo (97% positive cells after 1 hr) which was effi ciently attenuated by inhibitors of a non-classical endocytosis (Dynasore: 35%, p<0.01 and MβCD: 26%, p<0.01). RNA sequencing identifi ed 139 genes differentially regulated in ApoExo-treated EC. These genes are involved in cell death (16), cell growth (15), infl ammation (7) and cell movement (4) . To follow up on the functional importance of these gene patterns we evaluated apoptosis, wound closure and angiogenesis in ApoExo-treated EC. ApoExo inhibited apoptosis (17% vs 24%, p<0.01), improved endothelial wound closure (20.4% vs 7.9%, p<0.0001), but inhibited angiogenic activity (Segments: 26 vs 75, p<0.01). Expression of CD31 was reduced in EC exposed to ApoExo (71%, p<0.01) and in aortic allograft sections from mice injected with ApoExo (MFI: 6.7 vs 8.6, p<0.001). ApoExo also increased phosphorylation ( Primary human EC from aorta (HAEC), coronary artery (HCAEC), cardiac microvessel (HCMVEC), pulmonary artery (HPAEC), lung microvasculature (HLMVEC), liver sinusoids (HSEC), skin microvasculature (HDMVEC) and kidney glomerulus (HRGEC) were compared for expression of adhesion molecules and chemokines by Nanostring Human Immunology, multiparameter fl ow cytometry and 38-plex Luminex assay (n=2 donors, ≥3 replicates). Of 373 immune genes expressed by at least one EC type, >150 were differentially expressed (fold >1.5, p<0.05). 
Large and microvascular EC were most closely related based on vascular bed rather than tissue of origin. Microvascular EC had higher basal levels of many signaling molecules, and other genes showed restricted expression to one cell type, which were confi rmed by fl ow cytometry, and Luminex cytokine assay. Strikingly, expression patterns of TNFα and IL-1β (10ng/mL)-inducible genes predominantly clustered by EC type before stimulus. TNFα and IL-1β increased ICAM-1, VCAM-1 and E-selectin mRNA and protein, but there were differences in the magnitude and kinetics across cells and between stimuli. For example, HAEC exhibit the largest increase in VCAM-1 expression. HCAEC maintained E-selectin expression longest. HSEC fail to upregulate E-selectin and had modest upregulation of ICAM-1 and VCAM-1 compared with other EC. Inducible chemokine patterns were divergent. Cardiac EC produced more fractalkine than other EC. HAEC and HPAEC produced the most IL-6 and G-CSF compared with microvessel EC from the same organs. HSEC did not increase fractalkine or GM-CSF unlike other EC. Signifi cant RANTES transcript and protein secretion was induced by TNFα in all but aortic and coronary artery EC. In summary, endothelial cells from different tissues and vascular beds exhibit variation that likely contributes to differential regulation of leukocyte traffi cking. The restrained response of HSEC to TNFα and IL-1β may in part explain the relative resistance of the liver to alloimmune injury. Analysis of transcription factors expressed by this cell type may identify potential therapeutic targets to dampen infl ammatory responses of endothelial cells in other organs. Background: Ischemia Reperfusion injury (IRI) mediated cellular senescence in kidney allografts has been linked to a loss in regenerative potential and delayed graft function. Injured renal vasculature plays a critical role in the pathophysiology of IRI where upregulation of p16 INK4a induces endothelial dysfunction in the vascular endothelium limiting its capacity to form new vessels. This loss of microvascular reserve may exacerbate renal hypoxia further augmenting tubular injury and interstitial fi brosis. Further, the potential cross talk between senescent endothelium and renal epithelium in long term repair, tubular regeneration and development of chronic kidney disease (CKD) is not well investigated and constituted our main aim. Methods: Mice with Tamoxifen inducible INK4a gene deletion in vascular endothelium (p16 endo∆fl ox/∆fl ox ) were generated to examine the effect of endothelial p16 INK4a on renal regeneration post 30 days IRI. Kidneys were analysed for capillary rarefaction, tubular atrophy, fi brosis and immune infi ltration by histology, immunostaining and RT-PCR. Results: The p16 endo∆fl ox/∆fl ox kidneys displayed signifi cantly reduced expression of pro-senescent cell cycle regulators p16 INK4a and p19 ARF . These kidneys also presented with signifi cantly reduced chronic tubular damage, reduced expression of renal injury markers and diminished immune infi ltration (Fig. 1) . Although, p16 endo∆fl ox/∆fl ox kidneys did not exhibit improved long-term microvascular density they did manifest improved endothelial function. Our current results show that p16 INK4a expression in endothelial cells contributes to maladaptive tubular epithelial repair and pro-infl ammatory changes, while the selective ablation of vascular p16 INK4a delays the onset of senescence and promotes renal regeneration. 
These observations are pivotal not only in better understanding of molecular mechanisms underlying acute kidney injury but designing targeted therapies to prevent its progression towards CKD. The immunogenicity of graft microvascular endothelial cells (ECs) has immense biological signifi cance in the modulation of the local intragraft microenvironment. EC may either support Teffector activity and rejection or local immunoregulation. Studies have shown that endothelial Tie2 and its ligands, Angpts (1/2), are crucial in the maintenance of vascular stability and endothelial quiescence. The treatment of graft recipients with agonistic ligand mimetics (e.g. Angpt1) or inhibitors of pathological Angpt2 improves graft survival. However, the mechanisms whereby the Tie2-receptor prolongs survival is not known. We hypothesize that Tie2 expression on intragraft EC modulates the local environment to support immunoregulation. We screened an FDA drug library of 650 drugs, and identifi ed simvastatin (Simva) as a potent regulator of Angpt2, and we observed that it induces Tie2 expression. The Simva-mediated (0.1-30micM) increase in Tie2 expression was validated at the mRNA and protein level in human ECs (2.4 fold, n=5) and in primary mouse heart (MHEC) and kidney (MKEC) ECs (5.8 fold, n=6, p<0.01). We employed a proteomic screen using SWATH-MS, and identifi ed a Tie2-regulated protein-protein network that promotes endothelial cell as well as lymphocyte activation. Furthermore, we observed a dramatic increase in the expression of the co-inhibitory molecules PD-L1 (2.6 fold; n=6, p<0.01) and LGALS9 (~2.8 fold; n=6, p<0.001) in Simva treated EC. To test the interplay among Tie2 signals and PD-L1 expression, by using si-RNA (25nM) we silenced Tie2 (~75%) in murine ECs and fi nd that the silencing of cells is associated with a signifi cant (n=6, p<0.01) reduction in PD-L1 mRNA and protein expression. Furthermore, Tie2 siRNA transfected MHECs failed to respond to Simva with the induction of PD-L1 and LGALS9 (p<0.001). To understand the functional consequences of Simva treatment of ECs, we performed transcostimulation assays where human EC (+/-10 micM Simva) were co-cultured with human CD4+ T cells and observed Simva treatment of EC reduced ~50% (n=3, p<0.001) proliferation as compared to control ECs. Collectively, these data suggest that intragraft endothelial Tie2 may function to regulate the expression of immunoregulatory molecules on EC, which in turn supports a pro-tolerogenic microenvironment. Our fi ndings also suggest that simvastatin may be therapeutic to augment local intragraft immunoregulation and pro-tolerogenic immunosuppressive regimens to prolong graft survival. Objectives: Thrombomodulin (TM) extensively expresses on the endothelial cells in the steady state and is known to prevent hypercoagulation via combining with thrombin and inactivating its procoagulant activity. TM also has antiinfl ammatory effects. In infl ammatory situation including ischemia reperfusion injury TM is known to come off the endothelial cells. Rapamycin prevents costimulation blockade (CoB)-resistant rejection in mice and primates, and early human data suggests a similar salutary effect. We have hypothesized that one mechanism by which rapamycin synergizes with CoB may be through its direct effects on allograft endothelial cells (ECs) in addition to its known effect on allospecifi c T cells. 
To study this, naïve or rapamycinconditioned human ECs stimulated with PMA were analyzed using fl ow cytometry (FACS) to detect the phosphorylation level of mTOR complex-1 (mTORC1) and the downstream molecule S6. The phenotype of ECs including adhesion, costimulation, inhibitory and HLA molecules was also studied by FACS, and EC production of IL-8 was analyzed by intracellular cytokine staining (ICCS). EC-stimulated allospecifi c T cell responses were evaluated by ICCS and VPD-450 based T cell proliferation with or without a PD1 blocking antibody. Cytokine-treated ECs upregulated adhesion, costimulatory, inhibitory, and HLA class I/II molecules. Rapamycin treated ECs demonstrated reduction of phosphorylation of mTORC1 and S6 without changes in IL-8 production after stimulation. Rapamycin/IFNɣ treated ECs showed signifi cant reduction in CD40, HLA-DR/DQ expression (p<0.05) when compared with IFNɣ treated-ECs. Rapamycin/TNFa treated ECs showed higher OX40L, CD58, and HLA-ABC expression when compared to TNFa treated-ECs (p<0.05). Rapamycin treated ECs showed higher levels of PD-1L expression when compared with untreated ECs (p<0.05), while CD4 + and CD8 + cells undergoing proliferation following EC stimulation expressed PD-1. Interestingly, although rapamycin treated ECs signifi cantly inhibited T cell proliferation when compared with untreated ECs (p<0.05), PD-1 blockade did not restore the T cell proliferation to rapamycin-treated ECs, suggesting that the effect was independent of PD-1/ PD-1L interactions. Though naïve T cell responses were decreased in rapamycintreated ECs, the activation of polyfunctional allospecifi c memory CD8 + cells was not suppressed by rapamycin EC treatment. In summary, rapamycin inhibits phosphorylation of mTORC1 and S6 in ECs, and alters the EC surface phenotype. Rapamycin treatment reduces the capacity of ECs to stimulate naïve allospecifi c T cell proliferation, but does not impair memory CD8 cell activation, and these effects are not altered by PD-1 blockade. Our fi ndings suggest that some of rapamycin's salutary effects may be attributable to direct effects on EC alloimmunogenicity. Objective: Macrophage is involved in the pathogenesis of I/R injury and allograft rejection. However, the mechanisms underlying this is vastly unknown. This study aims to identify the role of macrophage derived endoplasmic reticulum (ER) stress sensor IRE1α in renal allograft rejection. Methods: BALB/c kidneys were transplanted into bi-nephrectomized macrophage-specifi c IRE1α knockout (KO) B6 mice or their wildtype (WT) littermate. Graft survival and renal function were monitored until death or POD120. Rejection is confi rmed by histology. Flow cytometry was performed to phenotype graft infi ltrating cells. RNAseq and qPCR were used to analyze gene profi le. Results: WT renal allografts recruited host monocyte-macrophages as early as at POD2, with an increased gene expression of the ER stress inducer BiP and its downstream XBP1s, as well as macrophage activation markers, MIP-1ɣ and CD68. The WT recipients succumbed to renal failure within 60 days due to severe rejection. In contrast, majority of the KO allografts survived >120 days, with minimal cellular infi ltration, better renal function and intact renal structure. To determine the effect of macrophage depletion of IRE1α in early phase of transplantation, phenotypical analysis were performed on the graft infi ltrating cells at POD2-14. IRE1α depletion did not impair the early recruitment and activation of macrophage. 
Similar numbers and activation phenotypes (CCR2+CD86hiMHCIIhi) of infiltrating macrophages were observed at POD2 and 6 in both WT and KO allografts. Interestingly, macrophage expression of Mer tyrosine kinase receptor (MerTK) and M2 Introduction: Donor brain death (BD) triggers a complement-mediated inflammatory response linked to renal injury, which has been associated with delayed graft function (DGF) and the development of antibody-mediated rejection (ABMR) post-transplant. We hypothesized that BD donor pretreatment with recombinant human C1 inhibitor (C1-INH) would prevent progression to DGF and ABMR in a translatable non-human primate (NHP) model of renal allotransplantation. Methods: BD was induced and maintained for 20 hours and recovered kidneys were cold-stored for a 44-hour period. These were then transplanted into ABO-compatible, MHC fully mismatched recipients. Donor animals were divided into three groups: G1) vehicle (n=3 donors, 6 recipients), G2) C1-INH 500 U/kg/dose + heparin (n=3 donors, 6 recipients), and G3) heparin only (n=2 donors, 3 recipients). Animals were followed for a 90-day period and underwent interval biopsies. The main endpoints of the study were: 1) incidence of DGF and 2) development of ABMR within 90 days of transplant. Results: Donor pretreatment with C1-INH prevented progression to DGF in 6/6 (100%) recipients of G2 donor kidneys. In contrast, 4/6 (66.6%) animals receiving kidneys from G1 donors and 3/3 (100%) from G3 donors developed DGF (p=0.008). In addition, recipients of G2 kidneys showed a statistically significant decrease in creatinine levels in the first post-operative week (p<0.01), along with a significant reduction in the activation of the donor classical pathway of complement and effectors of the contact system. Furthermore, recipients of G2 kidneys expressed markedly reduced urinary NGAL levels. In tissue, macrophage and neutrophil infiltration was limited in renal grafts from G2 C1-INH-treated donors. All recipients of G1 (4/4) and G2 (6/6) donor grafts surviving beyond one week post-transplant progressed to ABMR, as evidenced by deteriorating renal function, positive C4d staining, peritubular capillaritis and glomerulitis, and sustained DSA levels. Conclusion: Our data support the use of C1-INH as an innovative approach for the prevention of DGF in grafts from BD donors with prolonged cold-ischemia time. We did not identify a role for donor complement blockade in the progression to ABMR in this study. Introduction: MERTK is one of the TAM receptor tyrosine kinases (RTKs) that mediate homeostatic phagocytosis of apoptotic cells and transmit regulatory signals that modulate the immune response. Donor splenocytes (SP) treated with 1-ethyl-3-(3-dimethylaminopropyl) carbodiimide (ECDI) are a potent negative vaccine for induction of donor-specific transplantation tolerance. Previous studies have shown that donor ECDI-SP require the apoptotic cell receptor MERTK to mediate their tolerogenic effects. However, the underlying mechanism of this requirement is unclear. Methods: In this study, we address the mechanism of MERTK-mediated transplantation tolerance by donor ECDI-SP in BALB/c to C57BL6 heart as well as islet transplantation models. Results: MERTK-/- recipients exhibited markedly impaired transplant tolerance induction by donor ECDI-SP treatment in both allogeneic heart and islet transplant models. Mechanistically, MERTK-/- macrophages produced a markedly elevated level of IFN-a upon co-culturing with allogeneic ECDI-SPs.
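The C1-INH donor-pretreatment study above compares DGF incidence across very small groups (0/6 with C1-INH versus 4/6 vehicle and 3/3 heparin-only) and reports p=0.008 without naming the test. For counts this small a Fisher's exact test is the natural choice; the sketch below assumes that test and pools the two non-C1-INH donor groups as controls, both of which are assumptions rather than details taken from the abstract.

```python
# Fisher's exact test on DGF incidence; the choice of test and the pooling of
# the vehicle and heparin-only groups are assumptions, since the abstract
# reports only the p-value.
from scipy.stats import fisher_exact

#                 DGF   no DGF
c1inh_group   = [  0,     6  ]   # G2: C1-INH + heparin pretreated donors
control_group = [  7,     2  ]   # G1 vehicle (4/6) + G3 heparin only (3/3)

odds_ratio, p_value = fisher_exact([c1inh_group, control_group])
print(f"two-sided Fisher exact p = {p_value:.3f}")   # close to the reported 0.008
```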
The high level of IFN-a subsequently impaired the differentiation and function of myeloid derived suppressor cells (MDSCs). Specifi cally, IFN-a compromised MDSC suppression of antigen presenting cell-stimulated T cell proliferation. IFN-a receptor (IFN-aR) was found to be expressed on monocytic MDSCs, suggesting IFN-a may be able to directly affect MDSC function via IFN-aR signaling. Supporting this possibility, monocytic MDSCs treated with IFN-a were found to readily acquire robust antigen presenting capacity, while losing their immunosuppressive function. Furthermore, IFN-a led to a complete reversal of MDSCs-induced expansion of Tregs. In vivo, treatment of MERTK -/recipients with anti-IFNaR resulted in restoration of tolerance effi cacy by donor ECDI-SP in both heart and islet transplantation models. Reciprocally, treatment of MERTK +/+ recipients with IFN-a resulted in tolerance abrogation by donor ECDI-SP similar to that observed in MERTK -/recipients. These results underscore the critical role of recipient MERTK signaling in the suppression of IFN-a production upon donor ECDI-SP and the subsequent induction of a graft protective environment. C5aR2 (C5L2/gp77) is a transmembrane receptor that binds C5a but lacks motifs needed to induce G-protein coupled signal transduction. C5aR2 modulates various infl ammatory diseases in mice and has been shown to facilitate in vitro regulatory T cell (iT REG ) induction. We tested the hypothesis that C5aR2 modulates in vivo T REG generation and T cell-dependent allograft rejection. Flow cytometric image analyses showed that murine T cells express and upregulate C5aR2 during iT REG generation. After adoptive transfer of naïve WT-Foxp3RFP or C5aR2 -/-Foxp3RFP CD4 + T cells into rag1 -/recipients we observed fewer Foxp3 + iT REG in the C5aR2 -/cells (1.9 v 4 per 1x10 6 splenocytes. C5aR2 -/ vs WT, p<0.05, n=4-5/grp). Using newly generated C5aR2 transgenic mice (C5aR2-tg) we conversely fi nd that overexpression of C5aR2 in CD4 + T cells augmented in vivo iT REG generation after analogous transfer of naïve CD4 + T cells into rag1 -/recipients (204.6 v 32.16 per 1x10 6 splenocytes. C5aR2-tg vs WT, p<0.05, n=4-5/grp). In a transplant model, MR1-treated B6 C5aR2 -/mice rejected MHCdisparate BALB/c hearts faster than MR1-treated WT recipients (MST 30d v 50d, p<0.05, n=7-9/grp). At 20d posttransplant this was associated with a higher frequency of donor reactive T cells (90.1 v 6.1 IFNg producers/1x10 6 splenocytes, C5aR2 -/vs WT, p<0.05) and diminished splenic T REG /T EFF ratios (0.15x10 5 v 2.2x10 5 C5aR2 -/vs WT, p<0.05). Conversely, MR1-treated C5aR2-tg recipients rejected BALB/c hearts with delayed kinetics (MST >100 days, p<0.05 vs WT) and at 28d posttransplant had fewer donor-reactive T cells (309 v 128 per 1x10 6 splenocytes. WT vs C5aR2-tg, p<0.05, n=4-5/grp) and higher Treg/Teff ratios (273.2 v 126.9 C5aR2 -/ vs WT, p<0.05). Mechanistic studies showed that Tcellexpressed C5aR2 limits C5aR1-initiated signals (pAKT and p-pS6) known to inhibit T REG induction. Our fi ndings add to the increasingly recognized function of the complement system as a modulator of adaptive T cell immunity and provide the foundation for designing therapeutics that upregulate T cell C5aR2 to increase T REG generation and suppress pathogenic T cell immunity, including those induced by transplantation. Innate allorecognition is a process by which myeloid cells, such as monocytes, distinguish syngeneic from allogeneic cells. 
Monocytes that recognize donor cells as non-self differentiate into monocyte-derived dendritic cells (mo-DCs) and contribute to allograft rejection. Initiation of this recognition process is mediated through binding of signal regulatory protein alpha (SIRPa) on graft cells to its receptor, CD47, on recipient monocytes. SIRPa is an immunoglobulin superfamily (IgSF) membrane protein with a highly polymorphic IgV domain that binds monomorphic CD47. SIRPa polymorphism modulates the SIRPa-CD47 binding affi nity such that SIRPa mismatch between the donor and recipient is suffi cient to trigger allorecognition. We have determined the degree of amino acid variability within each domain of mouse SIRPa derived from the sequences of 19 mouse strains available through Mouse Genome Informatics (MGI). Furthermore, we developed a phylogram representation of the polymorphism in SIRPa IgV domain, which shows a correlation between IgV polymorphism and the degree of innate alloresponse. These fi ndings raised the possibility that SIRPa polymorphism infl uence transplant outcomes in humans. As a prelude to testing this hypothesis, we have used the 1000 Genomes Project to identify polymorphism in human SIRPa from the compilation of multiple data sets. From this data, we have derived 13 haplotypes that occur with >1% frequency across fi ve different ethnic backgrounds, with 7 common haplotypes accounting for 88-90% of tested individuals depending on ethnicity, 2 of which account for 58-74% of individuals. We are currently using 3-D rendering to overlay each haplotype on the crystal structure of the SIRPa IgV domain bound to CD47 (from PDB) to predict if and which haplotypes could have effects on CD47 binding by altering polarity, hydrophobicity, or causing steric hindrance. Future studies will investigate associations between donor and recipient SIRPa polymorphism and graft outcomes. Abstract# 309 Where present, v was more strongly correlated with the markers of acute cellular rejection (ACR) t and i (Kendall correlations 0.23-0.25) than with the markers of microvascular infl ammation (MVI) g, cg, and ptc (correlations 0.01 to 0.10). v>0 was strongly correlated with acute cellular rejection (i>1 or t>1) (Chisq 53.7; p < 0.0001). In multivariable Cox analysis including g, ptc, cg, v, and cohort, the presence of v>0 was the single strongest predictor of DCGF (coef 0.745, p = 0.003), showing independence of its effect from those of g, cg, and ptc. In patients with ACR the impact of v>0 on DCGF was only modest (coef 0.29, p=0.073), suggesting an effect mostly overlapping with t and i. There were only 3 biopsies with "isolated v lesions" (v>0 with no MVI or ACR). Conclusions: The presence of v lesions is a strong predictor of DCGF. v>0 is most strongly correlated with ACR, but v had a modest independent effect on graft outcome in patients with ACR consistent with injury from AMR. In this cohort, the presence of "isolated v" lesions was rare. Thrombotic microangiopathy (TMA), whether de novo or recurrent, is a serious complication after renal transplantation, but prognostic factors and the role of complement inhibitors in the therapeutic strategy regarding de novo TMA are yet to be determined. We retrospectively included all the patients with histopathologic lesions of TMA on for cause and/or screening kidney allograft biopsies performed between January 2004 and March 2016 in our center. 
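The Banff v-lesion analysis above (Abstract# 309) fits a multivariable Cox model for death-censored graft failure with g, ptc, cg and v as covariates. A minimal sketch of that kind of model using the lifelines package is shown below; the data frame, effect sizes and sample are synthetic placeholders generated only so the example runs, not the study data.

```python
# Multivariable Cox proportional-hazards model for death-censored graft
# failure (DCGF) with Banff lesion scores as covariates, as described above.
# All data below are fabricated for illustration only.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "v":   rng.integers(0, 3, n),    # intimal arteritis score
    "g":   rng.integers(0, 3, n),    # glomerulitis
    "ptc": rng.integers(0, 3, n),    # peritubular capillaritis
    "cg":  rng.integers(0, 3, n),    # transplant glomerulopathy
})
# simulate follow-up so that higher v shortens graft survival (illustrative only)
baseline = rng.exponential(8.0, n)
df["years_to_event"] = baseline / (1.0 + 0.5 * df["v"])
df["dcgf"] = (rng.uniform(size=n) < 0.6).astype(int)   # 1 = death-censored graft failure

cph = CoxPHFitter()
cph.fit(df, duration_col="years_to_event", event_col="dcgf")
cph.print_summary()   # per-lesion coefficients, hazard ratios and p-values
```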
The aims of our study were the description of clinicopathologic features and the identifi cation of prognostic factors of de novo TMA. Results 98 patients experienced at least one episode of histological TMA, among which 90 % were de novo, (4,8% of the total kidney transplant population in our center). The median time of occurrence was 198±920 days. The majority (83,7%) of cases were TMA localized in the graft. The etiological factors were multiple in more than one-third of cases, antibody-mediated rejection being the most frequent cause (46,9%), whereas underlying abnormalities of the alternative complement pathway were proved or suspected in 10,2 % of de novo TMA, even in localized forms. Graft survival was worst compared to kidney transplant recipients without TMA (84.7% vs 91.3% after 5 years respectively ; p<0.0001). The rate of graft loss after one year among patients with de novo TMA and recurrent TMA was 8 % and 10% respectively. The 2 main factors associated with graft loss were score of intimal arteritis and renal function at the time of diagnosis. The histological pattern of TMA was not discriminative of any etiology, even if arteriolar pattern and glomerular pattern were more often associated with calcineurin-inhibitor toxicity and antibody-mediated rejection respectively. These results confi rm the theory of the " multiple hit " with often multiple causal factors and the implication of alternative complement pathway dysregulation in the pathogenesis of some cases of de novo TMA. It also underlines the prognostic value of histology but its current limits to provide the clinician with etiologic clues. Understanding the specifi c causes of kidney allograft loss is mandatory to improve the longevity of kidney allografts. However, the current literature is limited by the low level of phenotyping, small series with selected populations or data from registries that lack precision diagnoses. We conducted a multicentric prospective study including unselected kidney transplant recipients from 4 referral centers transplanted between 2004 and 2014. Donor and recipient transplant parameters, clinical and biological post transplant parameters as well as circulating anti-HLA DSA and allograft biopsies performed post transplant (protocol and for causes) were included. The main outcome was long-term kidney allograft survival and specifi c causes of allograft loss. Main analyzes comprised 4,921 kidney recipients. A total of 10,293 kidney allograft biopsies were analyzed (4778 protocol biopsies and 5515 for cause). During a median follow-up post transplant of 6.52 years, 929 graft losses occurred. After inclusion of biopsy, clinical, biological, and anti HLA DSA data, a primary cause was identifi ed in 96% of cases. The causes of loss were: immune related (32% of antibody-mediated rejection, 4.5% of T-cell mediated rejection), surgical related (21% of thrombosis, 2.5% of urinary disease), medical related (tumoral, infectious or cardiac intercurrent disease (20%), recurrence of primary disease (7.5%), virus related (BK or CMV associated nephropathy, 5.5%) and calcineurin inhibitor nephrotoxicity (1.5%). After exclusion of primary non-function and early vascular and urological complications, antibody-mediated rejection was the leading diagnosis for allograft failures (40%). Antibodymediated rejection was associated with the worse allograft survival (55% at 9 years). Conditional probability plot of allograft diagnoses give the dynamic range of allograft injuries with time ( Figure 1 ). 
In this multicentric, extensively phenotyped population of kidney transplant recipients, we identify the contemporary picture of specific causes of kidney graft loss. Such effort highlights the current priorities to detect such complications to improve long-term allograft outcomes. Although the current gold standard of monitoring kidney allograft patients relies primarily on GFR assessment, little is known about long-term eGFR trajectory prototypes and their determinants at a population level. An international, population-based cohort involving 10 transplant referral centers (7 in Europe and 3 in the US) was assembled with kidney transplants occurring from 2001 to 2016. Patients underwent assessment of clinical, histological, immunological and functional parameters, including repeated eGFR measurements (MDRD). Latent class mixed models were fit to determine the prototypes of each patient's individual GFR trajectory. Multinomial regression models and PCA were used to assess transplant parameters that are associated with the distinct eGFR trajectories. A total of 5,035 KTR were included. The median follow-up time was 6 years (IQR 4-9). A total of 53,024 eGFR measures were analyzed. Overall, we identified 8 distinct latent class prototypes of eGFR trajectories (Figure 1). Multinomial regression and PCA analysis determined that recipient gender, expanded criteria donor, allograft inflammation (Banff i&t score), eGFR and proteinuria level at 1 year post-transplant, and circulating anti-HLA DSA were the main determinants that discriminated between the latent eGFR classes (Figure 2). We confirmed that the eGFR trajectories, and their main clinical associations, were consistent in both the European and US cohorts. In this international, extensively phenotyped population of KTR, we present novel insights into prototypes of individual eGFR trajectories and their main determinants. Our results provide the basis for using eGFR trajectory-based categorization of patients to improve patient risk stratification. These results also suggest the potential for a) early identification of adverse patient trajectories and the value of tailored therapeutic intervention, and b) the use of trajectory phenotypes in future clinical trials. Drug development in transplantation has diminished due to the lack of suitable endpoints for predicting graft failure. We sought to validate the performance of an integrative scoring system to predict long-term kidney allograft loss in the setting of therapeutic interventions. We used an integrative risk-scoring system, previously derived and validated in 5,125 KTR and based on readily accessible parameters measured after transplant. We included patients who underwent standardized protocols for antibody-mediated rejection (ABMR), T-cell mediated rejection (TCMR) and CNI toxicity. ABMR patients received SOC treatment with PP, IVIg and anti-CD20 (n=224). Patients with TCMR received steroid pulses (n=143). The last group included patients with CNI toxicity converted to belatacept (n=117). All patients underwent risk score measurement at the time of treatment and 3 months after. The outcome measure was the performance of the score to predict allograft failure as compared with the actual events observed. 484 patients were included. The mean follow-up time was 4.0±3.1 years. The risk score was significantly modified by the therapeutic interventions (mean risk score of 3.0±0.8 at the time of treatment vs 2.7±0.8 after treatment, p<0.001).
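The trajectory study above builds its eGFR series with the MDRD equation. For reference, the sketch below implements the 4-variable IDMS-traceable MDRD formula; the exact variant used by the study is not stated in the abstract, so these are the standard published coefficients rather than the study's own.

```python
# Four-variable MDRD study equation (IDMS-traceable form); the study's exact
# variant is not specified in the abstract, so this is the standard formula.
def egfr_mdrd(scr_mg_dl: float, age_years: float, female: bool, black: bool) -> float:
    """Estimated GFR in mL/min/1.73 m^2."""
    egfr = 175.0 * (scr_mg_dl ** -1.154) * (age_years ** -0.203)
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

# e.g., a 50-year-old non-Black woman with serum creatinine 1.3 mg/dL
print(round(egfr_mdrd(1.3, 50, female=True, black=False), 1))
```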
2 groups of patients were identifi ed by the risk score: i) Group 1 showing decreasing predicted probability of graft loss after treatment (blue lines, responders, 68%); ii) Group 2 showing stable or increased predicted probability of graft loss after treatment (red lines, non-responders, 32%). The risk score prediction capability of individual patient long-term allograft loss was highly accurate (C-index 0.84). The calibration plot showed an optimal agreement between the risk score prediction model after therapeutic intervention and the actual allograft loss. This integrative risk scoring system showed high performance in the setting of therapeutic interventions after kidney transplant, correlated well with the true clinical outcome and captured the net effect of treatment on the clinical outcome. The risk score could be used as a valid surrogate endpoint for next-generation trials and in the approval of drugs. After renal graft failure, retransplantation(ReTx) becomes more diffi cult due to sensitization to alloantigens. In 2013, our center opened a specialty clinic to maintain immunosuppression(IS) after graft failure in an attempt to minimize sensitization in ReTx candidates. We present our 1-year post graft failure cohort data. Methods: We retrospectively examined patients at a single center with renal graft failure 7/2011-10/2016 who were referred to our specialty clinic. Maintenance IS usually consisted of FK506(trough 3-6ng/mL), mycophenolate 250-500mg BID, prednisone 5mg/day. Outcomes were CPRA at graft failure and 1-year later. Also, we compared our cohort with a historical cohort of relisted patients prior to the advent of our specialty clinic. Results: Our cohort had 56 ReTx candidates. 5 patients were excluded due to noncompliance/lost to followup. 11 patients had IS withdrawn<365 days after graft failure due to complications(median 165 days, IQR 118-211). At 1 year, mean CPRA 46% was signifi cantly higher than at graft failure, but not signifi cantly lower than the historical cohort (Fig 1) . However, a subgroup analysis of patients on IS ≥365 days after graft failure had a mean CPRA 42% that was signifi cantly lower than the historical cohort. Patients with IS <365 days had a mean CPRA 60% which was similar to the historical cohort (Fig 2) . 14 patients on dual agent IS had a signifi cant rise in mean CPRA 38% to 57%, p=0.038, which was not seen with triple IS. Post graft failure infection (n=17) Purpose: Acute kidney injury (AKI) occurs frequently in deceased donors and is associated with organ discard. Prior studies indicate an association between donor AKI and delayed graft function (DGF), but suffi cient data on longer-term allograft survival are lacking. We performed a multicenter study to determine associations between donor AKI severity (AKI Network stages based on admission to terminal serum creatinine) and death-censored graft failure (GF). Key covariates for adjustment included the kidney donor profi le index (KDPI, relative to all U.S. deceased donors in 2010); cold ischemia time (CIT); antigen mismatches; and recipient age, race, gender, body mass index, diabetes, panel reactive antibody, prior transplant and pre-transplant dialysis. We also analyzed several donor and recipient risk factors as potential effect modifi ers on the relationship between donor AKI and allograft survival. Results: Out of 2430 transplanted kidneys from 1298 donors, 585 (24%) were from donors with AKI. 
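The risk-score validation earlier in this passage reports a C-index of 0.84 for predicting individual long-term allograft loss. Harrell's concordance index is the fraction of usable patient pairs in which the patient with the higher risk score fails earlier; a minimal, dependency-free sketch follows, with hypothetical inputs.

```python
# Harrell's concordance index for a risk score against censored survival data.
# A "usable" pair is one where we can tell who failed first; score ties count
# as half-concordant. Inputs below are illustrative only.
def harrell_c_index(times, events, scores):
    concordant, usable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # usable pair: subject i failed, and did so before subject j's
            # last observed time (j may be censored later or fail later)
            if events[i] == 1 and times[i] < times[j]:
                usable += 1
                if scores[i] > scores[j]:
                    concordant += 1.0      # higher score failed earlier
                elif scores[i] == scores[j]:
                    concordant += 0.5
    return concordant / usable

times  = [2.0, 5.0, 3.5, 8.0, 1.0]   # years to graft loss or censoring
events = [1,   0,   1,   0,   1  ]   # 1 = graft loss observed
scores = [3.1, 1.2, 2.8, 0.9, 3.9]   # risk score measured after treatment
print(round(harrell_c_index(times, events, scores), 2))
```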
As shown in the Figure (panel A) and with a median follow-up of 3.7 years, the rates of GF per 1000 patient-years were numerically lower for increasing stages of donor AKI. Kaplan-Meier curves, however, demonstrated no significant differences in graft survival by donor AKI stage. With no AKI as the reference, the Figure We predicted kidney transplant graft survival after indication biopsy in 884 patients. Analysis was performed using random forests, a multivariate machine learning method, to assess whether data from the biopsy (histology/molecules) add predictive value to clinical data. Survival predictions using either histology or clinical information were similar but were improved by combining both. Molecules were always better predictors than clinical or histology data, alone or combined (Fig. 1&2). Using clinical variables alone, eGFR, time of biopsy post-transplant, and proteinuria were the most important predictors; DSA status was not important in multivariate analysis. When clinical and histology variables were combined, the best predictors were clinical features (eGFR, proteinuria), followed by fibrosis (ci) and glomerular double contours (cg). Inflammatory lesions (i, t, ptc, g), DSA, and histologic diagnosis were not important multivariate predictors. In models including molecules, the best independent predictors were molecular scores for injury (ci, AKI) or late-stage ABMR (cg, late-stage ABMR score) and molecules representing proteinuria or eGFR, while clinical and histologic features were less important. Thus biopsies add predictive accuracy to clinical data, and significant improvement is gained by using molecular data compared to histology. Survival predictions can be made with molecular data alone and are improved only slightly by the addition of clinical and histologic data. Many variables that are significant in univariate analysis (e.g. DSA, diagnosis, rejection lesions i, t, ptc, g) drop out in multivariate analyses when more relevant molecular predictors related to injury and ABMR stage are included. IgG-degrading enzyme of Streptococcus pyogenes (IdeS) cleaves human IgG in two steps, first cleaving one of the heavy chains, resulting in a single-cleaved IgG (scIgG), and then cleaving the second heavy chain into a F(ab')2 and an Fc fragment. The objective of this study was to address how scIgG present in serum at different time points post IdeS treatment affects standard transplantation assays such as the HLA- and C1q-single antigen bead (SAB) assays and the CDC cross-match test. Pre- and post-IdeS serum samples were collected from a patient with chronic kidney disease (CKD) dosed with IdeS within a clinical phase II study (Clinicaltrials.gov identifier NCT02224820). Different amounts of IdeS were also added to serum in vitro to generate samples containing varying amounts of IgG, scIgG, F(ab')2 and Fc fragments. The samples were analyzed by HLA-SAB (LABScreen), C1q-SAB (C1qScreen) and CDC-XM against a mini-panel of potential donors (n=4). Mean fluorescence intensity (MFI) remained at a high level in the HLA-SAB assay until essentially all IgG was converted to F(ab')2 fragments. In contrast, there was a dramatic drop in MFI in the C1q-SAB assay already upon conversion to scIgG. In serum from the uremic patient, MFI decreased in the HLA-SAB after dosing with IdeS, and the MFI continuously decreased during several hours post dosing, correlating with the conversion of scIgG into F(ab')2.
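The biopsy study above compares survival predictions from clinical, histologic and molecular feature sets using random forests. The sketch below shows that kind of feature-set comparison with scikit-survival's RandomSurvivalForest; the package choice is an assumption (the abstract does not name its software), and the feature matrices and outcomes are synthetic stand-ins for the 884-biopsy data set.

```python
# Comparing feature sets (clinical vs clinical + histology) for graft-survival
# prediction with a random survival forest. All data are synthetic placeholders.
import numpy as np
import pandas as pd
from sksurv.ensemble import RandomSurvivalForest
from sksurv.util import Surv

rng = np.random.default_rng(1)
n = 400
clinical = pd.DataFrame({
    "egfr":          rng.normal(45, 15, n),
    "proteinuria":   rng.exponential(0.8, n),
    "years_post_tx": rng.uniform(0.2, 15, n),
})
histology = pd.DataFrame({"ci": rng.integers(0, 4, n), "cg": rng.integers(0, 4, n)})

# synthetic outcome loosely tied to eGFR so the forest has signal to learn
time_to_failure = rng.exponential(3 + clinical["egfr"] / 20)
event = rng.uniform(size=n) < 0.7
y = Surv.from_arrays(event=event, time=time_to_failure)

for name, X in [("clinical", clinical),
                ("clinical + histology", pd.concat([clinical, histology], axis=1))]:
    rsf = RandomSurvivalForest(n_estimators=100, random_state=0).fit(X, y)
    print(name, "concordance:", round(rsf.score(X, y), 2))   # in-sample, for illustration only
```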
In the C1q-SAB the drop in MFI occurred instantly and C1q fi xation was prevented within one hour post IdeS. In the CDC-XM assay, intact IgG from the CKD patient reacted strongly against the mini-panel of potential donors, and scIgG generated positive cross-matches in this assay, however with a dramatic drop in titer levels compared to intact IgG. BACKGROUND: Elimination of preexisting IgG antibodies against donor antigen is an important role before antibody-incompatible (i.e. ABO-incompatible and donor-specifi c antibody positive) kidney transplantation. Plasma exchange (PEx) has risks of hyperacute allergic reactions and microorganism transmission caused by using fresh frozen plasma. Another apheresis, Double fi ltration plasmapheresis (DFPP) using albumin solution removes antibodies effectively, however, fi brinogen (Fib) is massively removed resulting in hemostasis failure. Here, we created a modifi ed combination of fi ltrating membranes in DFPP (modifi ed DFPP, mDFPP) to retain more Fib while removing IgG, and assessed its effi cacy and safety in comparison with conventional DFPP (cDFPP). METHODS: Patients who underwent antibody-incompatible kidney transplantation were enrolled in this retrospective study. We used Plasmafl o® OP-08W in cDFPP and Cascadefl o® EC-50W, whose pore size is smaller than OP-08W to fi ltrate Fib back to patient's plasma, in mDFPP as a primary plasma separator, respectively. Cascadefl o® EC-20W was used as a secondary separator in both DFPP. Removal rates (RR) of IgG antibodies and Fib after each DFPP per session were compared between cDFPP and mDFPP. Incidence of adverse events during and after each DFPP was compared between the groups. RESULTS: cDFPP and mDFPP were performed in 23 and 10 cases, respectively. The patient's clinical backgrounds were similar between the groups. RR of IgG was signifi cantly lower in mDFPP group than cDFPP group (62.1±6.7% vs 74.6±9.6%, p<0.001). Loss rate of Fib was signifi cantly lower in mDFPP than in cDFPP (38.1±11.5% vs 75.5±5.3%, p<0.001). Regarding adverse events, there was no hypotension event observed during or immediately after either DFPP treatment, however, incidence of hemostasis prolongation of arteriovenous fi stula was signifi cantly higher in cDFPP group (81.8%) than in mDFPP group (12.5%). This result suggested that mDFPP preserve more Fib than cDFPP, resulting in better hemostasis. CONCLUSIONS: mDFPP can remove IgG more selectively with removal rate of 60% while preserving Fib with loss rate of 40%. mDFPP is effi cacious and safe apheresis to remove donor-reactive antibody before transplant. Background ABO blood group-incompatible (ABOi) kidney transplantation is considered a safe procedure, with non-inferior outcomes in large cohort studies. Its contribution to living kidney transplantation programs is substantial and growing. The objective of this meta-analysis was to systematically investigate outcomes in ABO-incompatible kidney transplant recipients compared to centermatched ABO blood group-compatible (ABOc) control patients. Donor-reactive T-cells have been suggested to impact allograft outcome due to a higher incidence of acute celluar rejection. These donor-reactive memory T-cells rapidly acquire effector functions and have been shown to be relatively resistant to standard immunosuppressive regimens. We analyzed 150 living-donor kidney transplant recipients (KTRs) from 2008 to 2016. KTRs were grouped into 92 ABO-compatible (ABOc) KTRs and 58 ABO-incompatible (ABOi) KTRs. 
Samples were collected at 6 timepoints: before rituximab and maintenance immunosuppression, before immunoadsorption, before transplantation, and at +1, +2, and +3 months posttransplantation, and donor-reactive T cells were measured by interferon-γ Elispot assay (Fig 1). However, no differences in graft survival by CPRA or FCMX/DSA status were seen (p=NS) (Fig 2). Conclusions Recent studies have suggested that living kidney donors (LKD) might be exposed to a higher risk of end-stage renal disease (ESRD) and death compared with healthy controls (HC), although there is a racial difference. We collected data on 1,294 LKDs who underwent nephrectomy from 1986 to 2016. We obtained renal complications, including ESRD, and mortality data by reviewing medical records, the ESRD registry supported by the Korean Society of Nephrology, and Statistics Korea, using a unique identifier. As potentially eligible donors, we included a total of 42,403 HCs who underwent health examination from 1995 to 2006. Before comparison, we excluded HCs with hypertension, diabetes, malignancies, eGFR lower than 80 ml/min/1.73 m2, and age older than 70 years. The mortality rate of both LKDs and HCs was compared with the general population using the standardized mortality ratio (SMR). Finally, we evaluated the impact of LKD status on mortality using Cox regression analysis. The relationship between predonation eGFR and long-term risk of postdonation ESRD has not been characterized. Moreover, while transplant centers are required to collect postdonation serum creatinine (SCr) in donors, the clinical utility of measuring early post-donation renal function is unknown. METHODS: Using SRTR data, we studied ESRD risk in 66,052 LKDs 1999-2015 who were ESRD-free 9 months post-donation and had at least one SCr reported to the registry between 3 and 9 months post-donation (6m-post eGFR), using Cox regression and adjusting for donor age, sex, race (black vs all other), first-degree biological relationship to recipient, and BMI. Predonation eGFR and 6m-post eGFR were calculated using the CKD-EPI equation. RESULTS: Donor eGFR declined from a median (IQR) of 98 (84-110) mL/min/1.73 m2 predonation to 63 (54-74) mL/min/1.73 m2 at 6m-post (Figure 1). A 10-unit increase in predonation eGFR was associated with a 17% decreased risk of ESRD (aHR 0.83, 95% CI 0.70-0.99, p=0.04) (Figure 2). In a separate model, a 10-unit increase in 6m-post eGFR was associated with a 40% decreased risk of ESRD (aHR 0.60, 95% CI 0.46-0.79, p<0.001) (Figure 2). In a combined model, the association between predonation eGFR and ESRD risk disappeared (aHR per 10 units 0.98, 95% CI 0.79-1.21, p=0.9) while the association between 6m-post eGFR and ESRD risk remained the same (aHR per 10u= Hypertension is common among kidney transplant (KTx) recipients, but the impact of antihypertensive medication (AHM) choice on patient and graft outcomes is not well defined. We examined a novel database linking SRTR registry data for 54,153 KTx recipients with AHM fill records from a large pharmaceutical claims warehouse (2008-2015). Mutually exclusive regimens were defined hierarchically as based on: angiotensin-converting enzyme inhibitors/angiotensin receptor blockers (ACEi/ARB), dihydropyridine calcium channel blockers (DHP-CCB), non-DHP (NDHP)-CCB, beta blockers, or vasodilators/others.
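The donor eGFR analysis above computes both predonation and 6-month postdonation estimates with the CKD-EPI equation. The sketch below implements the 2009 CKD-EPI creatinine equation, including the race coefficient that was standard in the study era; input values are illustrative, not donor data from the study.

```python
# CKD-EPI 2009 creatinine equation, as used for the pre- and post-donation
# eGFR values in the study above (2009 published coefficients).
def egfr_ckd_epi_2009(scr_mg_dl: float, age_years: float, female: bool, black: bool) -> float:
    """Estimated GFR in mL/min/1.73 m^2."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    ratio = scr_mg_dl / kappa
    egfr = (141.0 * (min(ratio, 1.0) ** alpha) * (max(ratio, 1.0) ** -1.209)
            * (0.993 ** age_years))
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr

# e.g., a 45-year-old non-Black male donor: predonation vs 6 months postdonation
print(round(egfr_ckd_epi_2009(0.9, 45, female=False, black=False), 1))
print(round(egfr_ckd_epi_2009(1.4, 45, female=False, black=False), 1))
```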
Associations (adjusted hazard ratios, aHR, with 95% confidence limits) of AHM regimen in months 7-12 post-KTx with patient and graft survival over the next 5 years were quantified by multivariate Cox regression, including adjustment for recipient, donor and transplant factors, and clustering by center. The most common AHM after transplant were DHP-CCB, followed by beta-blockers, ACEi/ARB, and diuretics, but regimen patterns varied across transplant centers (Fig 1). In bi-level hierarchical modeling, compared to DHP-CCB-based treatment, ACEi/ARB use was more common in those with diabetes, obesity, and on mTORi-based immunosuppression. Unadjusted survival varied with AHM treatment (Fig 2). Compared to the DHP-CCB regimen, adjusted mortality was higher in those managed with NDHP-CCB (aHR To develop a novel score for predicting mortality and graft failure after kidney transplant. We reviewed a cohort of 783 kidney transplant recipients and analyzed pertinent cardiovascular risk factors at the time of transplant evaluation to determine the risk of all-cause mortality and graft failure. The cardiac troponin T cut-off with the highest sensitivity and specificity for the outcomes was identified. The risk assigned to each predictor was determined by proportional hazards regression analysis with a bootstrapping method. The cohort (N=783) had a mean age of 58.7 years (SD 12.1) and median follow-up of 6.5 years (range 1 day to 18.0 years). Patients had high baseline cardiac and vascular disease (43% and 53%, respectively). The ACTV score was significantly predictive of death and graft failure. Substantial incremental risk was appreciated with each score level (0 to 2, 3 to 4, and 5 points). The ACTV score is a practical clinical tool that incorporates both clinical and biomarker data for prediction of mortality and graft failure after kidney transplant. Future studies are needed for validation in larger and more diverse populations. SPPB scores correlated well with physician assessments, as most patients removed from the waitlist had an SPPB of ≤6. Of those with SPPB <7, 32% died or were removed from the waitlist vs. 7% of those with SPPB ≥7 (p=0.019). Diabetes was more prevalent among patients with SPPB <7 (68%) vs patients with SPPB ≥7 (43%), but this finding did not reach statistical significance (p=0.060). In the older group the highest SPPB scores were observed in patients with a BMI of 26-30, followed by BMI >30, and the lowest scores in those with a BMI of 19-25, in contrast to the younger group, where the lowest scores were in the BMI >30 group. Conclusion: There was an excellent correlation between SPPB scores and death and/or removal from the transplant waitlist, which suggests frailty assessments can be used as an objective measure for candidate selection, risk stratification and prediction of outcomes. We will further expand on this study and attempt to validate our results using a larger number of patients with a broader age range, 55 years and older. Here, we briefly describe and compare the technical aspects and outcomes. Methods: From Jan 2013 to Aug 2017, we compared a total of 94 OKTs and 27 RAKTs and analyzed their outcomes. The technique of RAKT included Trendelenburg positioning in a modified lithotomy position at 10 degrees, placement of a midline GelPoint port, 2 other robotic ports and 1 assistant port. This was followed by intraperitoneal kidney implantation. Of the 27 recipients, 2 patients were converted to open surgery because of technical complications.
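The ACTV score above assigns risk points to each predictor from bootstrapped proportional-hazards coefficients and stratifies patients into 0-2, 3-4 and 5-point levels. The abstract does not publish the weights, so the sketch below uses invented point values, and predictor names that merely mirror the described risk factors, purely to show the mechanics of a points-based score.

```python
# Assembling a points-based clinical risk score of the kind described above.
# Every point value here is a made-up placeholder, not the published ACTV score.
HYPOTHETICAL_POINTS = {
    "age_over_60": 1,
    "cardiac_disease": 2,
    "troponin_t_above_cutoff": 1,
    "vascular_disease": 1,
}

def risk_score(patient: dict) -> int:
    """Sum the points for every risk factor the patient has."""
    return sum(pts for factor, pts in HYPOTHETICAL_POINTS.items() if patient.get(factor))

patient = {"age_over_60": True, "cardiac_disease": True, "troponin_t_above_cutoff": False}
print(risk_score(patient))   # falls in the intermediate (3-4 point) stratum
```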
RAKT offered lower blood loss and lower cold ischemia time, while warm ischemia time was higher. There was no increased incidence of delayed graft function, with similar post-operative pain and length of stay. There was lower incidence of wound related complications in RAKT. Patient and graft survival was 100% for both. We divided our fi rst 13 and our next 14 RAKTs for comparison. With experience, our mean blood loss and cold ischemia times have decreased moderately, while warm ischemia times decreased signifi cantly (p=0.029). The mean pain scores and days to ambulation were also much lower in the latter period. Conclusions: RAKT offers minimally invasive option with equivalent outcomes to open kidney transplants. In obese patients, it is technically easier to perform. It has a short learning curve with a trend towards lower pain and warm ischemia times. RAKT can provide a minimally invasive option with equal or better short term patient outcomes while maintaining excellent renal outcomes. [Results] VUR was diagnosed in 55 cases (15.5%), at a median post-KT duration of 50.0 months (range: 0 to 172). Among VUR cases, 20 were observed, 17 underwent transurethral collagen injection, and 28 received UCS. Collagen injection failed in 13 of 17, leading to need for UCS. A Cox proportional hazards model was used to evaluate risk factors for VUR. Age under 60 years, hemodialysis period >10 years, deceased donor, and atrophic bladder (capacity ≤50 ml) were signifi cantly associated with a higher risk of VUR on univariate analysis (risk ratio 3.24, 4.72, 3.30, 7.43; p=0.049, p<0.001, p=0.004, p<0.001, respectively) . Atrophic bladder was the only signifi cant factor associated with VUR on multivariate analysis (risk ratio 4.16; p=0.0117) . KT recipients were divided into 2 groups, according to presence of VUR, and graft survival was compared. Five-and 10-year graft survival in recipients with VUR was 87.5% and 70.0%, and 92.0% and 82.8% in those without VUR, respectively. Graft survival rate in those with VUR was signifi cantly lower than in those without VUR (p=0.0196). [Conclusion] Atrophic bladder causes post-KT VUR, and the presence of VUR is signifi cantly associated with a lower long-term graft survival rate. (4.5, 4.2, 3.9, 3.3 vs. 5.2, 4.9, 4.5, 4.3, p<0.003) . Cumulative opioid consumption in morphine milligram equivalents (MME) were signifi cantly lower in TAP patients at 24, 48, 72 hrs and 7 days (21. 1, 48.1, 73.2, 111.9 vs. 66.7, 108.0, 139.8, 190.2, p<0.004 Background: Absence of suitable implant locations for donor renal vessels may be a major problem in patients with signifi cant central and peripheral vascular disease. We describe a series of 10 patients transplanted successfully utilizing alternative vascular infl ow including prosthetic interposition conduits and donor arterial grafts arising from proximal aortic branches. Methods: This retrospective case study includes patients (n=10) transplanted between 2008-2017 whose iliac arteries were not suitable for implantation of the renal allograft artery, and were turned down for technical concerns in at least one transplant center. All patients underwent CT-angiography for operative planning. Most patients received a prosthetic conduit (aortobiiliac or aortobifemoral) at the time of (n=5) or prior to transplant (n=2). One patient underwent patch angioplasty and iliac artery stenting at transplant. 
Two patients were transplanted with donor arterial conduits arising from the common hepatic artery (n=1) and the superior mesenteric artery (n=1 Only ~50% of liver candidates undergo deceased donor transplant; 17% die on the waiting list and 20% are removed from the list as too sick for transplant within 3 years of listing. Despite high rates of waitlist mortality and dropout, 30% of recovered donation after circulatory death (DCD) livers were discarded in 2015. DCD livers provide an opportunity for donor pool expansion but are associated with lower posttransplant survival. Thus, they may not confer a survival benefi t to all candidates. Using SRTR data, we developed a liver offer acceptance decision tool and evaluated the potential survival benefi t of DCD livers across the spectrum of candidate allocation priority, i.e., model for end-stage liver disease (MELD) scores. Probability of patient survival (PS) for declining an offer was estimated by considering waitlist experience of similar candidates who declined offers and the probability that such experiences would lead to patient survival 1 year after the declined offer. We used offers from deceased donors recovered January 1-December 31, 2013. Posttransplant survival models used recipients who underwent transplant May 1, 2007 -June 30, 2016 . The difference in patient survival for accepting versus declining a DCD donor offer was estimated with 5000 randomly selected declined offers, 407 of which were from DCD donors. Acceptance of DCD offers was predicted to confer a 3% survival benefi t after 1 year compared with declining, although the benefi t strongly depended on MELD score (Figure 1 ). Declining a DCD donor offer and restricting the donor pool to DBD donors did not improve the probability of patient survival. Thus, DCD donors are an opportunity to expand the donor pool while conferring a survival benefi t for candidates. Background: Short wait time to LT for HCC patients may result in transplanting aggressive tumors at high risk for HCC recurrence who otherwise would have experienced dropout with longer wait time. We aimed to create a dropout risk score among HCC patients listed in long wait regions (LWR) and to apply this score in short wait regions (SWR) to evaluate post-LT outcomes. We hypothesized that receiving LT in SWR with a dropout risk score beyond a certain threshold would result in poor post-LT outcome. Methods: Using the UNOS database, we identifi ed 2,092 adult HCC patients in LWR ( Regions 1, 5, 9) and 1,735 in SWR (3, 10, 11) who received priority listing from 2010-2014. The dropout risk score was created in LWR patients using the multivariable competing risk regression coeffi cients of listing variables that predicted waitlist dropout. Results: Cumulative incidence of dropout in LWR within 1 year of priority listing was 18.6% (95% CI 16.9-20.3). On multivariable analysis of LWR patients, 1 tumor 3.1-5 cm or 2-3 tumors, AFP >20 ng/ml and increasing Child-Pugh and MELD-Na scores signifi cantly predicted waitlist dropout. These 4 variables were used in the dropout risk score (C-statistic 0.74). The median dropout risk score was 14 (IQR 7-23) in LWR and 16 in SWR (p<0.001). In LWR, the dropout risk score stratifi ed 1-year cumulative incidence of dropout ranging from 7.1% with a score <7 (lowest dropout risk quartile) to 40% with a score >23 (highest risk quartile). Cumulative incidence of LT within 1 year of priority listing in SWR was 89% at a median of 2.2 months from listing. 
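The HCC dropout risk score above is built from competing-risk regression coefficients for four listing variables (tumor burden, AFP > 20 ng/ml, Child-Pugh score and MELD-Na). The published coefficients are not given in the abstract, so the sketch below uses invented weights only to show how such a score is assembled and how patients are stratified with the quartile cut-points reported above (score < 7 lowest risk, > 23 highest risk).

```python
# Assembling a listing-time dropout risk score from regression coefficients,
# as described above. Every weight is a fabricated placeholder; only the four
# variables and the quartile cut-points (7 and 23) come from the abstract.
HYPOTHETICAL_WEIGHTS = {
    "tumor_burden_high": 6.0,   # 1 tumor 3.1-5 cm or 2-3 tumors
    "afp_over_20":       5.0,   # AFP > 20 ng/ml
    "child_pugh":        1.5,   # per Child-Pugh point
    "meld_na":           0.5,   # per MELD-Na point
}

def dropout_risk_score(patient: dict) -> float:
    return sum(HYPOTHETICAL_WEIGHTS[k] * patient[k] for k in HYPOTHETICAL_WEIGHTS)

p = {"tumor_burden_high": 1, "afp_over_20": 0, "child_pugh": 6, "meld_na": 12}
score = dropout_risk_score(p)
stratum = "highest risk" if score > 23 else ("lowest risk" if score < 7 else "intermediate")
print(round(score, 1), stratum)
```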
When applying the dropout risk score to patients receiving LT in SWR, 5-year Kaplan-Meier post-LT survival with a dropout risk score <30 (n=1259) was 72%, compared with 60% for a score >30 (n=191). Results: There were 477,595 ESLD deaths and 31,487 waitlist deaths. Higher CHS, higher rates of poverty and uninsurance, rural residence, and lower median income were all correlated with increased ESLD mortality and lower waitlist access (p < 0.001 for each trend). Donor Service Areas (DSAs) with higher proportions of candidates living remotely or in high-CHS counties had lower waitlist rates (Figure 1). Aggregate modeled change to DSA-level transplant volume across all publicly available UNOS proposals in the last two years showed a stronger association with waitlist access than with donor productivity (Figure 2; dotted line indicates national mean for access). Conclusions: Adverse local health and economic conditions are associated with reduced access to the transplant waitlist. Sociodemographic disparities impacting candidate waitlist mortality are concentrated in the same DSAs with the lowest levels of waitlist access. Proposals for broader liver sharing that use MELD disparity as an equity measure predominantly shift organs away from areas with more vulnerable patients and reduced waitlist access. This study assessed (1) whether MELD and MELDNa perform well in a contemporary cohort and (2) whether incorporation of eGFR instead of creatinine improves performance of predictive models. Methods: Using data submitted to the SRTR, we examined 2 cohorts (2000-2004, n=23,852 and 2011-2012, n=13,912) of adult candidates awaiting LT. The primary outcome was mortality within 90 days of listing. Cox proportional hazards models were used to analyze the association between MELD, MELDNa, GRAIL-MELD (br, inr, grail), GRAIL-MELDNa (br, inr, na, grail) and WL mortality. Discrimination (Harrell's concordance statistic) and calibration (difference in observed vs. predicted mortality) were assessed. Results: MELD (c=0.82) and MELDNa (c=0.79) were excellent predictors of WL mortality in 2000-2004. However, performance was worse in 2011-2012 (c-statistic=0.72). Discrimination across strata of MELD score was poor (c<0.6) and was 0.5 in the highest MELD groups (Table). Incorporation of estimated GFR (GRAIL) yielded better performance in both the historical (c=0.83) and contemporary (c=0.72) datasets. Performance was higher for GRAIL-MELDNa across all strata, especially at higher MELD scores (Table). Calibration: GRAIL-MELDNa had a lower percent difference between observed and predicted mortality across higher deciles of the score compared with MELDNa. Performance of MELD and MELDNa is worse in the recent era, especially across higher scores, yet these scores remain the basis for organ allocation. Incorporation of eGFR in GRAIL-MELDNa improves discrimination and calibration and likely captures true GFR better than serum creatinine or Na alone. Abstract# 360 Background: Organ shortage is a barrier in liver transplantation (LT). Split liver transplantation (SLT) increases organ utilization: a single donor provides segmental grafts to 2 recipients. Based on OPTN data supplied by UNOS, we sought to ascertain the number of whole-organ LT (WLT) between 2007-2017 that could have been safely split for 2 recipients, and the pediatric waitlist consequences.
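The discrimination reported above (Harrell's concordance statistic) can be computed directly from candidates' scores and their observed waitlist outcomes. A minimal sketch follows, using the concordance_index helper from the Python lifelines package on a handful of hypothetical candidates; the sign convention (higher score = higher risk, hence the negation) is the only modelling choice made here.

# Minimal sketch: Harrell's c-statistic for a listing score vs. waitlist survival.
# All values below are hypothetical.
from lifelines.utils import concordance_index

days_to_death_or_censor = [12, 90, 45, 90, 30, 90, 7, 60]
died                    = [1,  0,  1,  0,  1,  0,  1, 1]
meld_at_listing         = [32, 14, 28, 10, 25, 18, 38, 22]

# concordance_index expects higher predicted values to indicate longer survival,
# so a risk score such as MELD is passed with a negative sign.
c = concordance_index(days_to_death_or_censor,
                      [-m for m in meld_at_listing],
                      died)
print(f"Harrell's c = {c:.2f}")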
Methods: Deceased donor (DD) suitability to split: ≥12y, <40y, donation after brain death, admitted for ≤5d, last serum sodium (Na) ≤150 mmol/L, maximum ALT/AST ≤120 IU/L, BMI ≤28, macrovesicular steatosis <15%, low dopamine/dobutamine rate, no nor/epinephrine use, no extracranial cancer, donor-recipient distance <1000 miles. Recipients who received organs from the above DDs as a WLT were narrowed according to: liver-only, ≥12y, segmental graft acceptable at listing, MELD ≤30 and not status 1A, BMI <35, and, in order to be fair to adult recipients, extended right lobe graft volume calculated by graft-recipient weight ratio >1%. From the match runs of suitable DD/WLT recipients, the next potential SLT pediatric recipient (PedR) was identified: <5y, OR ≥5y and <12y with graft-recipient weight ratio >0.5%, ABO match. Results: Of 194,695 DD, 1818 DD were suitable to split, and 588 WLT recipients could have accepted SLT; 513 had a suitable PedR who could have accepted a segmental graft. A total of 274 PedR who were next on the match run but did not undergo SLT due to lack of sharing had a longer average wait time to transplant compared to 1644 pediatric SLT recipients during this time frame (mean 615d vs 115d, P<0.001). These children waited an additional 199d on average following the first DD match run that was transplanted as a WLT instead of SLT. As a result, 17 (6%) died after an average of 283d on the waitlist; 169 (62%) had LT. Conclusions: Given the current organ shortage, sharing organs suitable for splitting will increase the number of LT in children, thus saving more lives. With increased widespread proficiency in SLT, adults will also benefit from use of extended right lobes. Geographic Disparity in Access to Deceased-Donor Liver Transplantation for Pediatric Candidates. M. Bowring, 1 D. Mogul, 1 D. Segev, 1 S. Gentry. 1,2 1 JHU, Baltimore; 2 US Naval Academy, Annapolis. Geographic disparity in access to deceased-donor liver transplantation (DDLT) is largely uncharacterized for pediatric candidates. We investigated geographic disparity in DDLT rates across Donation Service Areas (DSAs) for pediatric liver candidates using a robust metric of disparity before and after Share35. METHODS: Using SRTR data 6/2010-6/2016, we identified 4,690 pediatric (<18yrs) liver transplant candidates and estimated PELD/MELD-adjusted DDLT rates using multilevel Poisson regression. PELD/MELD was categorized empirically. From the regression model, we derived the incidence rate ratio (IRR) of DDLT rates between each pair of DSAs and the median of the IRRs (MIRR). MIRR is derived from the variation in DDLT rates across DSAs and represents an overall measure of geographic disparity; a larger MIRR indicates greater disparity. MIRR was estimated separately for the three years pre- and post-Share35. Research has demonstrated increased annual medical costs in patients who abuse opioids, estimated at up to $18,000 per patient and $29 billion overall in the US. Our aim was to evaluate the financial impact of any pre-transplant opioid use on readmission costs in kidney transplants. This was a retrospective, single-center analysis evaluating kidney transplants between 1/2010 and 12/2016. Opioid use was defined as an opioid reported during medication reconciliation at the time of transplant or an opioid prescription within 3 months in a national pharmacy claims database. Costs were adjusted to 2016 values utilizing a 3% average inflation rate per year. Of 1,129 kidney transplants, 271 (24%) were opioid experienced (OE) and 858 (76%) were opioid naïve (ON).
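The MIRR metric described in the pediatric DDLT disparity abstract above, the median of pairwise incidence rate ratios between DSAs, can be illustrated in a few lines. The adjusted rates below are invented, and orienting each pairwise ratio to be ≥1 before taking the median is an assumed convention that the abstract does not spell out.

# Illustrative MIRR calculation from made-up PELD/MELD-adjusted DDLT rates
# (transplants per person-year) for five hypothetical DSAs.
from itertools import combinations
from statistics import median

adjusted_ddlt_rate = {"DSA-A": 0.8, "DSA-B": 1.2, "DSA-C": 0.5, "DSA-D": 1.6, "DSA-E": 0.9}

# Pairwise incidence rate ratios, each oriented so that it is >= 1,
# then the median across all DSA pairs.
irrs = [max(a, b) / min(a, b)
        for a, b in combinations(adjusted_ddlt_rate.values(), 2)]
mirr = median(irrs)
print(f"MIRR = {mirr:.2f}")  # larger values indicate greater geographic disparity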
Patients who were OE were more likely to be white, more highly sensitized, and had more baseline benzodiazepine use (Table 1). There were no differences between cohorts in LOS, cost of transplant hospitalization, or rejection rates. The number of readmissions for patients in each cohort can be visualized in Figure 1. OE patients demonstrated significantly more readmissions and a significantly higher mean total cost of readmissions at 30, 90, and 365 days after transplant (Table 2). Pre-transplant opioid use is significantly associated with increased readmission costs in the year after kidney transplantation. Causes and preventability of readmissions should be further explored. Background: The development of direct-acting antiviral (DAA) therapy for hepatitis C virus (HCV) could allow for safe and effective treatment following renal transplantation (RT). Early pilot studies have shown that transplanting HCV D+ organs into HCV- recipients (R-), thus reducing waitlist and time on dialysis (HD), then treating HCV with DAA results in a high sustained virologic response. However, the cost-effectiveness of such a strategy is unknown. Methods: A decision tree model was developed to analyze costs and effectiveness over a 5-year time frame between two choices: RT using a D+/R- strategy compared to continuing HD and waiting for an HCV- donor (D-/R-). We assessed the payer's perspective using data from the United States Renal Data System (USRDS) 2016 Annual Report. Effectiveness was measured by expected years of life (YOL). Costs included direct expenditures for RT and immediate care post-RT, immunosuppressive therapy, HD and costs while awaiting RT, and HCV treatment with 12 weeks of DAA for D+/R- patients. The Medicaid national average drug acquisition cost was used to estimate HCV treatment costs. A total of 13 chance paths were modeled with an endpoint of two possible outcomes: alive or dead. Patients on dialysis waiting for RT were examined at years 1, 2, 3, and 4 during the 5-year time frame. Results: The strategy of accepting an HCV+ organ and then treating HCV (D+/R-) resulted in an expected 4.6 YOL with an expected cost of $154k, compared to an expected 3.6 YOL with a total cost of $257k for the D-/R- strategy. The D+/R- strategy remained dominant after one-way sensitivity analyses, including adjustment for RT survival probability, DAA therapy cure rate (90-95%), waiting time on dialysis in the D+/R- strategy (0-12 months), cost of DAA treatment ($37,538-$73,618), probability of dialysis patients getting an RT in the D-/R- strategy by year (25% @ Y1, 50% @ Y2, 75% @ Y3, and 100% @ Y4 vs. base case of 0% @ Y1, 25% @ Y2, 50% @ Y3, 75% @ Y4, and 100% @ Y5), and HD survival probability while awaiting RT in the D-/R- strategy (±20% of base value). Utilization of HCV D+ kidneys in HCV- RT recipients, then treating the HCV with DAA, is less costly and more effective, with increased expected years of life compared to the D-/R- strategy. This D+/R- strategy should result in fewer discarded organs, reducing time on dialysis and increasing the number of patients undergoing RT. Background: There are no population-based estimates of the prevalence of end-stage liver disease in the United States, creating challenges in measuring disease burden to inform allocation policy. Proxy measures often used include the transplant waitlist and population mortality.
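The expected years of life and expected costs in the decision-tree comparison above are probability-weighted sums over the model's chance paths. The toy sketch below shows that roll-up for a two-path version of each strategy; the probabilities, survival times, and costs are placeholders, not the 13-path USRDS-derived inputs used in the study.

# Toy expected-value roll-up for a two-choice decision tree; all inputs are hypothetical.
def expected_value(paths):
    """paths: list of (probability, years_of_life, cost) tuples whose probabilities sum to 1."""
    e_yol = sum(p * yol for p, yol, _ in paths)
    e_cost = sum(p * cost for p, _, cost in paths)
    return e_yol, e_cost

# Strategy 1: accept an HCV D+ kidney now and treat HCV with DAA after transplant.
d_pos_paths = [
    (0.85, 5.0, 180_000),  # transplanted and alive at the 5-year horizon
    (0.15, 2.0, 120_000),  # transplanted but dies before the horizon
]

# Strategy 2: remain on dialysis and wait for an HCV- kidney.
d_neg_paths = [
    (0.40, 4.5, 260_000),  # eventually transplanted with an HCV- kidney
    (0.60, 3.0, 250_000),  # remains on dialysis (or dies) over the horizon
]

for name, paths in (("D+/R-", d_pos_paths), ("D-/R-", d_neg_paths)):
    yol, cost = expected_value(paths)
    print(f"{name}: expected YOL = {yol:.2f}, expected cost = ${cost:,.0f}")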
We calculated the proportion of liver disease deaths in the population that were captured by the transplant waitlist, by demographic and geographic factors. Methods: Data on the liver transplant waitlist are from the Scientific Registry of Transplant Recipients; population mortality is from CDC WONDER. The inclusion criterion was death from liver disease between 2011 and 2015. Deaths after age 75 were excluded. The proportion of deaths captured by the waitlist was calculated as the number of waitlisted candidates who died over the number of deaths in the population. Demographics included age, race, gender, and state; differences between groups were assessed using Chi-square tests. Results: From 2011-2015, there were 202,938 deaths from liver disease in the U.S.; 13,415 were captured by the transplant waitlist (6.6%). There were significant differences in the proportion of deaths captured by the waitlist by race, gender, age, and state (p < 0.001 for all). Asians were most likely to have their death captured by the waitlist (19.5%); American Indians were least likely (2.5%). Lower proportions were captured in the South and Northwest and higher proportions were captured in the Northeast. The proportion of liver disease deaths captured by the waitlist is low and varies by demographics and geography. Using the waitlist to measure liver disease prevalence may differentially underestimate population disease burden. Although not all patients are transplant candidates, the low proportion of deaths captured on the waitlist among those who ultimately died of liver disease indicates potential missed opportunities for referral to transplant. Transplants were divided into 3 eras (era 1: 2001-2005; era 2: 2006-2010; era 3: 2011-2016). Multivariate analysis was performed to assess factors that impacted outcomes and the potential need for a subsequent kidney transplant. Results: Over time, recipient age increased, but donor age and preservation time decreased significantly. Most recipients received induction therapy and maintenance immunosuppression with tacrolimus and MMF. These changes resulted in significant improvement in patient and pancreas graft survival. Three-year patient survival increased from 92% in era 1 to 96% in era 3. Pancreas graft survival 3 years post-transplant improved from 60% in era 1 to 71% in era 3 (p<0.0001). While early technical failure rates did not significantly change over time and remained stable (6-7%), the immunological graft loss rate in technically successful transplants dropped significantly at 3 years, from 23% in era 1 to 14% in era 3 (p<0.001). By multivariate analysis, the most influential factors for this decrease were older recipient age and better immunosuppression. Donor factors (age, preservation time, HLA matching) did not show any impact, which may be due to an excellent donor selection process. The different duct management and venous drainage techniques did not impact outcome. Due to better donor and recipient selection, refinements in immunosuppression, and improvement in graft outcome, the rate of a subsequent kidney transplant declined significantly. This was primarily contingent on native kidney function at the time of transplant: if the GFR was <40 ml/min, 38% of patients received a kidney graft by 4 years after the pancreas transplant; if the GFR was >60 ml/min, only 2% of patients required a kidney graft. Conclusion: Our study shows that the results of PTA have significantly improved, and PTA should be considered in brittle diabetic patients before the development of advanced nephropathy.
Although the risk of immunological failure has also significantly decreased, results are best in recipients >30 years of age. Abstract# 375 Introduction: The number of whole-organ pancreas transplants declined significantly over the past 15 years. The largest decrease was seen in pancreas after kidney (PAK) transplants. Between 2001-05 (era 1) and 2011-15 (era 2), the rate of primary PAKs declined by 74%; primary PTA, 33%; and primary SPK, 15%. Despite the decline in numbers, a significant improvement in patient and pancreas graft survival was noted. The aim of this study was to identify factors for improved outcome in era 2. Methods: In 2001-2005 (era 1), 1,321 primary deceased donor PAKs were performed, and in 2011-15 (era 2), 354. The UNOS/IPTR database was closed for analysis on 6/15/2017. Extensive univariate and multivariate analyses of recipient and donor characteristics, immunosuppressive regimens, and operative techniques were used to describe differences between eras and the impact of factors on recipient outcome. Patient survival improved from 90% at 3 years in era 1 to 93% in era 2 (p=0.08). Table 1 shows the significant improvements in pancreas graft function by era. While the incidence of early acute rejection episodes did not change, pancreas graft function improved significantly in era 2 (p=0.0001) due to lower technical and immunological graft failure rates. The number of PAK transplant centers decreased from 124 to 88 in era 2. The proportion of living donor kidneys increased by 10%, to 79%, in era 2 (p=0.003). Over time, the recipients became significantly older, the donors younger, and the preservation time shorter. Significantly more patients in era 2 received induction therapy and combined tacrolimus/MMF maintenance therapy. The time between kidney and pancreas transplant also changed: in era 1, it was significantly shorter, especially after an LD kidney transplant. Conclusion: Outcome after PAK has significantly improved due to refinements in immunosuppressive protocols and better donor and recipient selection. If a living kidney donor is available, a PAK is an alternative to an SPK due to similar outcomes. Introduction: While kidney transplantation in the elderly diabetic patient is an accepted treatment of the secondary complication, pancreas transplants are controversial. Many centers have an age limit of 60 years or lower for this kind of transplantation. The aim of this study was to analyze outcomes and risk factors for patients 60 years of age or older. Methods: Between 2000 and 2015, 251 primary deceased donor pancreas transplants in patients 60 years of age or older were reported to IPTR/UNOS for the USA. The majority were SPK (61%), followed by PTA (22%) and PAK (71%). Uni- and multivariate statistical methods were used to describe outcomes and risk factors for this special group of patients. The oldest patient in this cohort was a 73-year-old type 1 diabetic male recipient. Significantly more males (60%) received a pancreas transplant, but no difference between the 3 transplant categories could be found. 72% of recipients received depleting and 18% non-depleting antibody therapy for induction. In 91% of all patients, tacrolimus in combination with MMF was used for maintenance immunosuppression. Overall 1- and 5-year patient survival was 85% and 68%. Patient survival at 1 year (and 5 years) was 86% (72%) for SPK, 93% (71.2%) for PAK, and 74% (56.3%) for PTA.
The oldest recipient survived after an SPK for over 11 years and died with functioning grafts after a cardiac event. Especially critical was the first month post-transplant: cardio- and cerebrovascular complications and infections were the major causes of death. In all 3 categories, patient age and diabetes type were not risk factors for patient survival. Only a failed kidney graft in SPK could be clearly identified as a risk factor for patient survival. Over the analyzed time period, patient survival increased in all 3 categories. One- and 5-year pancreas graft function was 85.7% and 70.9% for SPK, 93.0% and 71.7% for PAK, and 75.9% and 56.3% for PTA. No difference in outcome between SPK and PAK recipients was noticed. The major reason for pancreas graft loss was 'dying with a functioning graft', in 54% of all deaths, followed by pancreas graft thrombosis in 17% of all cases. All recipients received relatively good-quality donor organs, so no statistically significant risk factors could be determined. Conclusion: After a careful work-up, pancreas transplants in elderly patients can be performed successfully and will not only improve the patient's quality of life but can also be life-extending. Results: At the time of analysis, 6,728 recipients with >10-year pancreas graft function were identified. The majority were SPK (82%), followed by PAK (13%) and PTA (5%). Most were primary transplants (96%), but second (n=221), third (n=14), and one fourth transplant also functioned long-term. Of note, the registry currently follows 644 pancreas transplant recipients with pancreas graft function over 20 years. Figure 1 shows the increasing number of transplants with long-term graft function (>10 years) over time. After the 10-year mark was reached, median patient survival for these recipients was 10.7 years; the additional median graft function was 9.7 years. The age of recipients with long-term graft function increased over time to a median of 41 years at the time of transplant; the donor age decreased to a median of 22 years over time. The median preservation time was 12 hours. Of the recipients with >10-year graft function, 56% died with a functioning graft and 26% lost their graft for immunological reasons over the following 5 years. The most common causes of death with a functioning graft were cardiovascular complications (25%), infections (12%), and malignancies (11%). Conclusions: >10-year pancreas graft function can be achieved, specifically in older recipients and young donors with short preservation times. Improvements in immunosuppression over time have also increased the number of grafts with long-term function. The effect of improved immunosuppressive protocols will become even more apparent in future years. Cognitive impairment in the liver transplant (LT) population could negatively affect one's ability to self-manage. This study examined cognition and its relationship with self-management in the LT population. We hypothesized that global cognition and specific cognitive domains would be related to self-management. This cross-sectional study included adult LT recipients who had had a functioning transplant for at least six months. The 30-item Montreal Cognitive Assessment (MoCA) assessed cognition. Scores range from 0 to 30; higher scores indicate higher levels of global cognition. The MoCA test was then differentiated into domains (Visuospatial/Executive functioning, Memory, Attention, and Language), based on work by Vogel et al. (2015).
The 40-item Health Education Impact Questionnaire and the Basel Assessment of Adherence with Immunosuppressive Medication Scale were used to assess self-management. Self-management was classified as low, medium, or high using latent profile analysis. Logistic analysis was used to examine the relationships between cognition and the three levels of self-management. A total of 113 LT recipients (mean age 61.2 ± 11.1 years, 7.8 ± 6.4 years post-transplant, 62% male, and 94% White) participated in this study. MoCA scores were 24.4 ± 3.1, indicating mild cognitive impairment. Mean scores of the four cognitive domains are presented in Figure 1. Global cognition was not related to self-management. Among the four cognitive domains, memory was significantly related to a high level of self-management (OR 2.32, 95% CI 1.52-3.53, p<.001). This study found that LT recipients have mild cognitive impairment and that those who demonstrate better memory function are more likely to have a high level of self-management. Clinicians should regularly assess recipients' cognition, and caregivers' participation in self-management should be considered for those with impaired cognition, especially memory deficits. Future studies should explore potential mechanisms of cognitive impairment to design interventions to ameliorate or compensate for the cognitive impairment. Limitations of this study include the small sample recruited at a single center and the use of a single measure of cognition. Opioid use is associated with increased readmissions and mortality in the liver transplant (OLTx) population. The aim of our study was to identify risk factors for chronic opioid use in OLTx. METHODS: Single-center, retrospective cohort study of OLTx between 1/10-12/16. Data were collected via manual chart abstraction and use of a pharmaceutical claims database. Patients who filled an opioid prescription in more than 3 months of the year after transplant (excluding the month of transplant) were identified as chronic opioid users (COU). Multivariable logistic regression with backward elimination was performed to identify risk factors for COU in the overall OLTx population, as well as in the opioid-naïve and opioid-experienced subpopulations. RESULTS: 446 patients were included in the analysis, 101 (23%) of whom were COU after transplant. Significantly more opioid-experienced patients were COU after OLTx compared to opioid-naïve patients (69 [37.3%] vs 32 [12.3%], p<0.001) (Figure 1). In the overall OLTx cohort, risk factors for COU included lower functional status and opioid use prior to transplant (Table 1). In the opioid-naïve cohort, the only risk factor for COU was muscle relaxant use (Table 2). In the opioid-experienced cohort, risk factors for COU included lower functional status and benzodiazepine use at the time of transplant (Table 3). CONCLUSIONS: The strongest risk factor for COU after OLTx is pre-transplant opioid use. Within the opioid-experienced subpopulation, lower functional status and concomitant benzodiazepine use independently impact the risk of COU. Muscle relaxant use was the only independent risk factor in opioid-naïve patients. Background & Objectives: Medication non-adherence remains problematic in pediatric liver transplant (LT); estimates suggest that up to 53% of adolescents are non-adherent. We leveraged data from subjects who were enrolled in the MALT clinical trial (NCT01154075) to learn whether community deprivation predicted non-adherence in pediatric LT recipients. Methods: This was a single-center pilot study.
The primary outcome was the Medication Level Variability Index (MLVI), an objective, validated biomarker of non-adherence. The primary exposure, derived from participant addresses, was a validated index of community deprivation using data from the US Census Bureau. The index has a range of [0, 1]. Conclusion: The results indicate that adherent patients live across the spectrum of deprivation, suggesting that living in a deprived community does not necessarily equate to non-adherence. However, among the non-adherent patients, the data suggest that patients from more deprived communities have a more persistent and severe form of non-adherence. Further work is needed to better characterize this association. Purpose: The impact of non-adherence among transplant recipients has been well documented as a significant contributor to allograft failure. The objective of this study was to compare medication adherence among transplant recipients who fill their tacrolimus prescriptions through the internet or a mobile app versus those who fill through traditional methods over a one-year period. Methods: This retrospective, observational cohort study used administrative claims data from a large U.S. pharmacy chain. Medication adherence was measured by proportion of days covered (PDC). The study sample included transplant recipients aged 18+ with at least two pharmacy claims for tacrolimus at least 150 days apart between 2013 and 2016; each patient was followed for 12 months from their first fill in the study period. The intervention group was matched 1:1 to the control group based on the following baseline characteristics: age group, gender, payor, number of comorbid conditions, annual copay, use of brand medications, and use of a transplant specialized pharmacy (TSP). Logistic regression was used to estimate the difference in medication adherence between the intervention group (patients who used digital methods for refills) and the control group. The independent variable, digital use, was defined as use of digital methods for over 50% of fills. The dependent variable, medication adherence, was defined as PDC ≥80% over a 12-month period. Results: Propensity score matching resulted in 1,166 patients in each group. After adjusting for covariables, transplant recipients who refilled their medications through the internet or by scan were 1.70 times more likely to be adherent than those who filled their prescriptions through other methods (95% CI [1.40, 2.00]; p<.001). A sensitivity analysis using logistic regression and adjusting for covariables without matching showed that those refilling through the internet or by scan were 1.45 times more likely to be adherent (95% CI [1.40, 2.00]; p<.001). Conclusion: The results showed that transplant recipients who refilled their prescriptions through digitally enabled platforms, compared to traditional methods, were more adherent over a one-year period. Limitations of the study include its observational design and a reliance on administrative data from one pharmacy chain. Digital engagement tools provide patients more convenience and flexibility, which may help improve patient adherence to treatment plans. Background: Inadequate knowledge hinders informed decision-making in kidney transplant candidates. We conducted qualitative interviews and focus groups with patients and providers to inform development of an online shared decision aid that provides education about options and individualized estimates of likely outcomes on the kidney transplant waitlist.
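The proportion-of-days-covered measure used in the pharmacy-claims adherence study above is the fraction of days in a fixed window on which the patient had medication on hand, derived from fill dates and days' supply. The sketch below is a simple version of that calculation; the fill records and the exact handling of overlapping fills are assumptions for illustration.

# Minimal proportion-of-days-covered (PDC) calculation over a 12-month window;
# fill dates and days' supply below are made up.
from datetime import date, timedelta

def pdc(fills, window_start, window_days=365):
    """fills: list of (fill_date, days_supply). Returns the fraction of window days covered."""
    covered = set()
    window_end = window_start + timedelta(days=window_days)
    for fill_date, days_supply in fills:
        for offset in range(days_supply):
            day = fill_date + timedelta(days=offset)
            if window_start <= day < window_end:
                covered.add(day)
    return len(covered) / window_days

tacrolimus_fills = [
    (date(2016, 1, 5), 30), (date(2016, 2, 6), 30), (date(2016, 3, 20), 30),
    (date(2016, 4, 22), 90), (date(2016, 8, 1), 90), (date(2016, 11, 10), 60),
]
value = pdc(tacrolimus_fills, window_start=date(2016, 1, 5))
print(f"PDC = {value:.2f} (classified adherent if >= 0.80)")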
Methods: In-depth interviews, focus groups, and one-on-one usability testing were conducted with 37 adult kidney transplant candidates in Minneapolis, in addition to 2 national focus groups of 13 transplant physicians and 4 national focus groups of 19 recipients, between March 2016 and November 2017. Systematic inductive and deductive analyses of the data were performed. Results: Sixteen (43.2%) candidates were female and 14 (37.8%) were Black. Twenty-four (64.9%) had at least some college education, while 3 (8.1%) did not graduate from high school. Analyses of candidate and recipient interviews revealed a desire for individualized information and honest discussions with physicians about outcomes. Patients had limited understanding of options and likely outcomes, demonstrating knowledge gaps about mortality on the waitlist, likelihood of transplant, differences between deceased and living donor organs, and deceased donor organ quality. Physicians represented 7 of the 11 OPTN regions. Seven (53.8%) reported only sometimes discussing waitlist mortality with patients. Physicians described extensive transplant center approaches to educate patients, but still felt that patients do not understand outcomes on the waitlist. Discussion: Data from stakeholders resulted in the iterative development of an online shared decision aid to guide provider-patient discussions about transplant vs. dialysis, living vs. deceased donation, kidney quality, increased-infectious-risk organs, and individualized predictions of likely outcomes on the waiting list. Additional research is needed on whether the tool changes decisions and on how to implement decision aids in clinical practice. Although incompatibility is usually viewed as affecting potential recipients of live donors, it is also encountered in instances of directed donation from deceased donors. In these cases, the family or guardian of a deceased donor requests that an organ be assigned to a recipient on the deceased donor waitlist. It has been reported (2) that "at least 100 deceased donor transplants each year have occurred through directed donation." Although most chains involve only live donors, current proposals (1) consider the initiation of chains with deceased donors. Such an approach could maximize the potential of chains, with the eventual return of a live donor kidney to the deceased donor waitlist. Ethical and organizational concerns have been raised with respect to the initiation of chains with deceased donors. Furthermore, selecting blood type O donors to initiate chains could disadvantage blood type O waitlist recipients. We believe it would be particularly interesting to develop exchanges (as diagrammed below) initiated with deceased donor kidneys from incompatible directed donations. Rather than denying the donor family and the designated recipient the possibility of a transplant, the organ would be allocated to initiate an exchange. As part of the exchange, the previously incompatible recipient would receive a compatible live donor kidney. A. Imminent Death Donation (IDD) is a term describing organ procurement from a live donor before withdrawal of ventilator support and/or from a patient with a terminal illness who has capacity for medical decision-making and who wishes to donate and not accept further life support.
This study aims to serve as a literature review of the ethical issues involved in imminent death donation, specifically as it pertains to the key points brought forth by the OPTN/UNOS Public Comment Proposal "Ethical Considerations of Imminent Death Donation". B. A review of published papers was performed based on a Medline search using the term "Imminent Death Donation", and 7 articles were identified broaching the topic of the ethics of IDD. These were reviewed and the arguments for and against IDD were juxtaposed, highlighting the salient obstacles to implementing a procurement protocol that could allow for more organ recoveries with better allograft outcomes. C. Consistently, the benefits of IDD are summarized as follows: 1) honoring the preferences of the donor/donor family, allowing the goal of organ donation to be achieved more often; 2) reliable organ recovery, as opposed to failure to progress to asystole; 3) less ischemic damage to organs; 4) more compassionate end-of-life care; and 5) arguably, increasing the number of organs for transplantation and/or the pool of eligible donors. However, as best summarized in the OPTN/UNOS Public Comment Proposal, the concerns over IDD that have thus far made it impractical to implement include: 1) the lack of definitions and boundaries; 2) no definitive data to prove that IDD would increase the number of organs and/or donors; 3) the potential to decrease the number of organs recovered; 4) importantly, the potential to compromise public trust and opinion on organ donation while increasing confusion about appropriate end-of-life care; and 5) a lack of stakeholder buy-in or assignment of roles and responsibilities. D. Despite agreement amongst multiple ethics committees, including the UNOS/OPTN Ethics Committee, that IDD is a viable and ethical option for organ procurement, the Dead Donor Rule (DDR) remains a bedrock in the public and transplant psyche; that is, organ donation cannot lead to the death of the donor. In understanding the dueling arguments and the biases of convention, perhaps it is time to address the need to expand our definitions of organ donors and the value, if any, of the DDR in our ever-evolving field. OPTN/UNOS began requiring 1-year living donor follow-up (LDF) for living liver donors (LLDs) in 2014. Compliance with a similar policy for living kidney donors was low. Using national registry data, we evaluated the changes in LDF associated with the 2014 policy and identified risk factors for non-compliance. METHODS: Using SRTR data on 1624 LLDs who donated 09/2010-09/2016, we compared complete/timely LDF form submissions before and after policy implementation. Hierarchical logistic regression was used to assess region-, center-, and donor-level characteristics associated with complete/timely form submissions within a difference-in-differences framework. Compliance was defined as both timely and complete if clinical and laboratory data were submitted within 60 days before or after the expected visit date as defined by OPTN/UNOS. RESULTS: LDF form compliance increased between 9/2010 and 9/2016 (Figure 1). However, of 43 transplant centers performing living liver donation 9/14-9/15, we found that only 13 (30%) centers had provided complete/timely data for 6- and 12-month LDF (Figure 2). Among non-compliant centers 9/14-9/15 (N=26), the median (range) number of visits that were missed but necessary to meet compliance thresholds was 4 (1-19) LDF visits.
Policy implementation was not associated with a change in 6-month LDF (p=0.3) or 12-month LDF (p=0.2). Characteristics associated with non-compliant 1-year LDF included younger donor age (p=0.03), lack of health insurance (p=0.03), and international donor status (p<0.001). CONCLUSION: Though compliance is improving over time, fewer than half of transplant centers performing living liver donation are compliant with OPTN/UNOS requirements. Novel practices to engage living liver donors may be needed to improve LDF compliance. Background: Psychosocial criteria constitute absolute and relative contraindications to listing, which can bar patients from access to organ transplantation at a program. Refusing to list patients for psychosocial factors is ethically permissible as long as these factors distinguish those who will benefit most from receiving an organ. Empiric evidence must demonstrate poorer outcomes in patients meeting these criteria before they can be considered contraindications. We conducted a survey study to identify variation in listing practices among transplant programs nationwide. Methods: We distributed an online Qualtrics survey to 650 active adult and pediatric transplant programs in heart, kidney, liver, and lung from August 2016 to March 2017. The full survey listed 38 psychosocial characteristics, and participants were asked 1) whether each characteristic functioned as an absolute, relative, or irrelevant contraindication to listing, 2) whether their program retained formal, informal, or no guidelines regarding the characteristic, and 3) whether their program had encountered the characteristic. This study examines a subset of the 38 characteristics, including legal status, criminal status, and substance use. Results: A majority of programs completed the survey (response rate = 52.8%). Regarding undocumented status, 15-41% of programs considered undocumented status to be irrelevant (Irr), in contrast to 23-38% that considered it an absolute contraindication (AC). Eight to 26% considered a history of violent crime an AC, while 3-12% had formal guidelines for the characteristic. More programs considered recreational marijuana an AC (24-77%) than medical marijuana (12-59%). Compared to heart and lung programs, kidney and liver programs were less likely to consider current substance use an AC (p ranging from <2.2e-16 to 0.006). Although more adult than pediatric programs encountered patients with a criminal history (p << 0.001), 19-47% of pediatric programs have still encountered these patients. Discussion: Transplant programs vary in how they consider psychosocial criteria in making listing decisions. Variation in practice violates the principle of equality and invites prejudice into the listing process. Further research is needed on post-transplant outcomes of patients with psychosocial criteria so that programs can make evidence-based judgments. Patients' routine labs, including immunosuppression drug levels, are drawn with each clinic visit starting at 6 am. Because the drug levels are not reported by the time the patients leave clinic mid-morning, the levels must be reviewed later in the day. Once review by the physician is completed, transplant coordinators commence calling each patient with plan-of-care changes in the late afternoon and early evening. We identified substantial delays in lab result turnaround times; thus, the plan of care could not be relayed to the patient until the end of the day, and staff time was used inefficiently.
This process was not in line with standard practices in California transplant centers. The goal was to achieve completed lab results in less than 2 hours on post-kidney-transplant clinic days. Specific electronic medical record-generated reports were created by our Quality Data Team. The data collected were lab result turnaround times. The process measured spanned from the time of blood draw to the time of result in a post kidney/pancreas transplant clinic. These data were gathered and analyzed for each step of the process between 2015 and 2017. The analysis was completed monthly to identify trends in average result times. We met with leadership to review the findings and seek support for improving lab result times. The analysis revealed significant time delays associated with the pretreatment and processing of the immunosuppressive levels in the laboratory. Thus, the laboratory implemented several changes to expedite the process, including decreased transport times, internal tracking systems, laboratory staff education/competency, and standardized workflows. Weekly reports were reviewed to monitor progress. Following the implementation of the changes made by the laboratory, lab result turnaround times significantly improved. The two-year data analysis revealed a 51% reduction in mean turnaround time, from 3 hours 33 minutes to 1 hour 45 minutes, bringing results within the two-hour goal. This change resulted in efficiencies including an increase in the number of patients discharged from clinic with a complete plan of care, a decrease in the number of results requiring physician review after clinic, and a decrease in the number of calls to patients to relay the plan of care. This efficiency also improved staff satisfaction and achieved turnaround times in line with the community standard. Abstract# 391 Background: Large, multi-organ transplant centers struggle with implementation of a comprehensive strategy for development and execution of a transplant-specific education plan. Regulatory requirements for staff education continue to grow, and concrete data are needed to justify a transplant-specific curriculum, as education is frequently requested during regulatory site visits. Aim: In anticipation of the new CMS interpretive guidelines, we describe a method to identify areas of high transplant patient volume and structure center-wide education. Method: Transplant readmissions in our center from July 2014 to July 2016 were retrospectively analyzed. Admitting location was manually abstracted and collated into a heat map. Units receiving at least 10 patients over the course of the two-year study period were considered transplant units, along with units caring for patients during their initial transplant admission. An education plan was developed based on an extrapolation of the Comprehensive Stroke Program plan, which had clear regulatory requirements for education. How transplant rejection is regulated at the transcriptional level remains unclear. We found that T cell-specific conditional knockout (cKO) of the transcription factor IRF4 in B6 mice induced long-term Balb/c heart allograft survival (all >100 days; n=6) without any immunosuppressive therapies. IRF4 repressed PD-1, Helios, and other molecules associated with T cell dysfunction; in its absence, chromatin accessibility and binding of Helios at PD-1 cis-regulatory elements were markedly increased, resulting in persistent PD-1 expression and T cell dysfunction.
To further define the dysfunctional state of Irf4-deficient T cells, Irf4 cKO mice were transplanted with Balb/c hearts and treated with checkpoint blockade (anti-PD-L1 plus anti-CTLA-4) on days 0, 3, and 5. All heart allografts were rejected on days 7-8 due to checkpoint blockade-mediated reinvigoration of Irf4-deficient T cells. At 30 days after heart grafting, recipients were transplanted again with Balb/c skins. Strikingly, those Irf4 cKO recipients with rejected Balb/c heart allografts still permanently accepted the subsequent Balb/c skins (all >100 days; n=6). These results indicate that checkpoint blockade-reinvigorated alloreactive Irf4-deficient T cells become re-dysfunctional within 30 days. We used microarray analysis to compare the gene expression profiles of adoptively transferred CD45.2+ WT and Irf4-/- alloreactive TCR-transgenic TEa cells in CD45.1+ mice following heart transplantation and checkpoint blockade. On day 6 post-heart grafting, checkpoint blockade restored the expression levels of the majority of WT TEa cell-expressed genes in Irf4-/- TEa cells, which explains why checkpoint blockade robustly reverses the initial dysfunction of Irf4-deficient T cells. However, the remaining un-restored genes following checkpoint blockade (Wnt10a, snn, Lmo4), though minimal in number, may be responsible for the reinvigorated Irf4-deficient T cells becoming re-dysfunctional. Taken together, IRF4 deletion drives intrinsic T cell dysfunction, and targeting IRF4 represents a potential therapeutic strategy for achieving transplant tolerance. Donor-specific transplantation tolerance has long been an aspiration of clinical transplantation. Recently, we reported that multiple mechanisms are required to reinforce profound hyporesponsiveness of alloreactive T cells and to mediate robust tolerance. In contrast, little is known about the fate of alloreactive B cells. Clinical observations suggest that donor-specific antibodies (DSA) are a major cause of graft rejection despite ongoing immunosuppression, leading us to hypothesize that stable tolerance will require the donor-specific B cell response to also be profoundly suppressed. We used a mouse model of tolerance in which C57BL/6 recipients treated with anti-CD154 (days 0, 7, 14) and DST (day 0) accepted BALB/c hearts for >60 days and minimal DSA is produced. First, we showed that the lack of an alloantibody response was not due to B cell deletion. Donor MHC class I- and class II-reactive B cells were identified using H-2Kd and I-Ed tetramers, and their total numbers in tolerant recipients were comparable to those of naïve controls. Second, there was no enrichment for donor-specific B cells with Breg phenotype or function, with comparable numbers of CD93+ T1, T2, and T3 B cells and follicular zone B cells, and comparable percentages of donor-specific B cells expressing CD5, CD1d, TIM-1, and IL-10. Third, B cell unresponsiveness is not due to the absence of T cell help and is B cell intrinsic. Adoptive transfer of B cells from tolerant recipients into naïve MD4 hosts did not produce DSA upon challenge with BALB/c splenocytes, even when additional alloreactive T cells were included in the transfer. In contrast, recipients of naïve B cells showed strong DSA responses. Finally, we show that the donor-specific, B cell-intrinsic hyporesponsive state is not reversed even when challenged with DST in the presence of CpG and agonistic anti-CD40.
Taken together, we demonstrate the acquisition of a robust, cell-intrinsic state of B cell hyporesponsiveness during the maintenance phase of transplant tolerance that is maintained even in the absence of T cell regulation and in the presence of T cell help and CpG plus anti-CD40. The mechanistic basis for this profound B cell hyporesponsive state is under investigation, but preliminary studies indicate defective proliferation and differentiation into germinal center B cells. Introduction: Clinical data suggest allogeneic pregnancy is a sensitizing event; yet recent data from mouse models indicate that it induces fetal-specific regulatory T cell (Treg) expansion that mediates systemic, fetal-specific immune regulation post-partum (PP). In this study, we aimed to define the mechanism for acquired resistance to allograft tolerance after allogeneic pregnancy. Methods: Virgin female wild-type (WT) B/6, μKO, or anti-HEL BCR-tg (MD4) mice were mated with male B/c-2W-OVA transgenic mice. Fetus-specific IgG were monitored by flow cytometry, while donor-specific cellular responses were tracked using Kd, IEd, and Ld tetramers for B cells, and 2W:I-Ab and OVA:Kb tetramers for CD4+ and CD8+ T cells, respectively. At PP day 30-45, F1 (B/6 x B/c)-2W-OVA heart transplants (HTx) were performed, and anti-CD154 (POD 0, 7, 14) and donor splenocytes (DST, POD 0) were used to induce tolerance. Results: Fetus-specific IgG increased during and after pregnancy to a 9-fold peak by PP day 21. Prior donor-matched pregnancy, but not syngeneic pregnancy, prevented the induction of tolerance in 60% of PP-WT B/6 recipients, despite a significant increase in the percentage of fetal-specific FOXP3+ cells and a modest reduction in the number of fetal-specific conventional CD4+ T cells (Tconvs) examined at POD 21-189, compared to PP-WT B/6 untransplanted controls. In contrast, ongoing donor-specific germinal center B cell responses were detected, suggesting a B cell-mediated resistance to transplant tolerance. Strikingly, anti-CD154/DST successfully achieved long-term graft acceptance in PP-μKO B/6 and PP-MD4 (normal B cell numbers but ≤5% donor-specific B cells compared to WT) recipients. Introduction: The gut microbiota induce and train the host immune system. Whether microbiota from the contrasting environments of colitic or pregnant mice are pro- or anti-inflammatory is unknown. We hypothesized that gut microbiotas from different sources could influence the survival of murine cardiac allografts. Methods: C57BL/6 mice received BALB/c vascularized cardiac allografts and fecal microbiota transfer (FMT) by oral gavage. FMT samples were from normal or pregnant C57BL/6 mice, spontaneously colitic mice, or cultured Bifidobacterium pseudolongum (Bifido), a dominant member of the pregnant gut microbiota. Tacrolimus was administered daily. Fecal pellets were collected weekly post-transplant and analyzed by 16S rRNA gene sequencing. Grafts were harvested at days 40-60 or at rejection and assessed for inflammation by H&E and fibrosis by Masson's trichrome. Macrophage (MF) and dendritic cell (DC) lines were stimulated with purified Bifido cells and cytokine responses were measured. Results: Pregnant FMT enhanced cardiac allograft survival, resulting in reduced inflammation and fibrosis. Normal or colitic FMT resulted in inferior survival, along with increased fibrosis and inflammation. 16S rRNA analyses demonstrated abundant Bifido in the pregnant and normal but not the colitic FMT samples.
Transfer of Bifido alone also resulted in enhanced allograft survival and reduced inflammation and fibrosis. Serial 16S rRNA analyses of gut microbiota from the normal, colitic, pregnant FMT, and Bifido groups revealed significant differences in bacterial community structure (alpha- and beta-diversity). Bifido remained abundant in pregnant FMT recipient samples for at least 40 days. In contrast, Desulfovibrio and Deferribacteraceae were more abundant in colitic samples. Microbiota community structures of the various groups became less different over time but remained distinct. DC and MF lines stimulated with Bifido expressed higher levels of the anti-inflammatory cytokine IL-10 and the homeostatic chemokine CCL19, but lower levels of the pro-inflammatory cytokines TNFα and IL-6, compared to LPS-stimulated cells. The gut microbiota has profound effects on host immune responses, with consequences for cardiac allograft outcomes. Distinct bacterial species or genera may both alter immunity and predict graft outcomes, with Bifidobacterium species inducing anti-inflammatory effects through stimulation of DC and MF. Therapeutic targeting of the microbiota or its functions could be beneficial in reducing inflammation and subsequent graft rejection. A. Dangi, 1 J.-J. Wang, 2 M. Burnette, 1 Z. Zhang, 2 X. Luo. Background: Immune responses to cytomegalovirus (CMV) infection may precipitate acute/chronic graft rejection in immunocompromised transplant recipients. CMV infection can be primary or can result from reactivation in the latently infected graft. Achieving tolerance could eliminate both the transplant-induced inflammation and the need for immunosuppression that are thought to promote CMV reactivation. Currently, studies regarding CMV reactivation in tolerized recipients are lacking. We thus investigated the impact of transplantation tolerance on CMV reactivation and graft function. Methods: Balb/c mice were infected with the murine CMV strain ∆m157 to develop latency over 3 months. The kidneys from latently infected mice (D+) were transplanted into naïve C57BL/6J recipients (R-). The transplant groups included: 1) a control group (CT; untreated); 2) an immunosuppression group (IS; received antilymphocyte serum on days -1, 1, 3, and 5, plus tacrolimus and dexamethasone on days 0-7, then thrice weekly until day 28); and 3) a tolerized group (tolerance was induced by infusing donor apoptotic splenocytes pre-treated with the chemical cross-linker ethyl carbodiimide (ECDI-SP) on days -7 and +1). Results: The kidney allografts from the CT group showed distorted architecture with an aggressive infiltration of myeloid and T cells by histology and FACS analysis on day 28 post-transplantation, which correlated with a high level of blood urea nitrogen (BUN). Conversely, kidney allografts from both the IS and ECDI-SP groups showed much less aggressive intragraft cellular infiltration and better graft function (low BUN). However, despite preserved graft function in both groups, only the IS group showed detectable levels of MCMV DNA in kidney allografts and in other organs (lung, liver, and spleen), suggesting viral dissemination. We further observed that viral dissemination in the IS group correlated with a higher number of intragraft Ly6Clow F4/80+ myeloid cells at an early time point (day 2). This population has previously been shown to be associated with MCMV dissemination. Conclusions: Tolerance induction appears to eliminate MCMV reactivation while preserving kidney allograft function.
While this may be mediated by inhibition of intragraft Ly6Clow F4/80+ cells, ongoing studies will further elucidate the mechanisms by which MCMV reactivation is prevented. Background: Successful induction of mixed chimerism (MC) and renal allograft tolerance has been achieved in both nonhuman primates (NHPs) and humans following conditioning that included 3.0 Gy total body irradiation (TBI). The significant myelosuppression resulting from this treatment has led to the search for alternative strategies suitable for wide clinical application. We have therefore evaluated the novel approach of B cell lymphoma-2 (Bcl-2) inhibition for specific enhancement of lymphocyte apoptosis. Methods: In Group 1 (G1), four NHP recipients received combined kidney and bone marrow transplantation (CKBMT) following reduced-dose (1.5 Gy) TBI, thymic irradiation (TI, 7.0 Gy), ATG, and the Bcl-2 inhibitor ABT-199 (10 mg/kg x 11 on days -4 to 6). After CKBMT, anti-CD154 mAb (20 mg/kg x 4) and cyclosporine for 28 days were administered. Two recipients received the same regimen without TBI (G2) and three without TI (G3). The results were compared with recipients treated with the standard 3.0 Gy TBI (G4: n=8) or with 1.5 Gy TBI (G5: n=2) but without ABT-199. Results: 4/4 G1 recipients developed markedly higher and more prolonged MC (Fig. 1A) with excellent lymphocyte depletion but without neutropenia (Fig. 1B), in contrast to G4 recipients (3.0 Gy TBI without ABT-199). Recipients in G2 (no TBI) failed to develop MC, indicating that minimal TBI is necessary even with ABT-199 for induction of MC. G1 recipients achieved long-term allograft survival (313, >177, >100, and >65 days) without rejection, while all recipients in G2 (no TBI), G3 (no TI), or G5 (1.5 Gy without ABT-199) developed rejection. Conclusion: The FDA-approved Bcl-2 inhibitor ABT-199, combined with low-dose TBI and essential TI, induced robust MC and long-term renal allograft acceptance without myelosuppression in NHP CKBMT. Enhancement of intrinsic apoptosis with Bcl-2 inhibition is a promising strategy to achieve robust MC and renal allograft tolerance without causing myelosuppressive complications. Robust Hematopoietic Mixed Chimerism and Donor-Specific Skin Allograft Tolerance after Non-Genotoxic Anti-CD117 Immunotoxin Conditioning and Donor Bone Marrow Allotransplantation. Z. Li, A. Czechowicz, A. Scheck, D. Rossi. Establishing hematopoietic chimerism enables donor-specific allograft tolerance; however, chimerism protocols typically require either genotoxic conditioning or mega-doses of donor bone marrow cells (BMC), impeding clinical translation. Previously, we developed anti-CD117-saporin for non-genotoxic conditioning in syngeneic BM transplantation (BMT), acting through recipient HSC depletion (Czechowicz, Science 2007 and ASH Abstract, 2016). Here we tested the safety and efficacy of this immunotoxin in fully MHC-mismatched sequential BM and skin transplantation. C57Bl/6 mice received one dose of anti-CD117-saporin 6 days before BALB/c BMT, plus 3 doses of an anti-T cell antibody cocktail and 2 doses of rapamycin. Mice then received dual BALB/c and CBA/Ca skin grafts twice, ~150 and 240 days after BMT. Donor chimerism in peripheral blood ranged from 0.44 to 2.15% in mice receiving 5×10⁷ BMC without immunotoxin, and from 0 to 0.24% in mice receiving 2×10⁷ BMC with a nonspecific immunotoxin. In contrast, durable donor chimerism up to 45.6% for up to 624 days after BMT was observed in mice receiving 2×10⁷ BMC after anti-CD117 immunotoxin.
Multi-lineage chimerism was also observed in lymphoid organs, ranging from 13.8 to 28.0%; donor B cell chimerism was the most prominent. In addition, chimeric mice accepted BALB/c skin grafts without further immunosuppression but rejected third-party CBA/Ca skin allografts. Our results provide proof of principle for a safe and effective method of establishing persistent, high-level mixed hematopoietic chimerism and donor-specific organ allograft tolerance that obviates both genotoxic conditioning and long-term immunosuppression by selectively depleting host HSCs in BM with an anti-CD117 immunotoxin. Since anti-CD117 antibodies are currently in development and in clinical trials, anti-CD117 immunotoxin may be rapidly translatable as a general method for allo-transplantation. Purpose: We engineered activity nanosensors that sense anti-allograft T cell activity in vivo to noninvasively detect the onset of acute cellular rejection from urine. Methods: Granzyme B (GzmB) substrate peptides labelled with FITC were synthesized (TUCF) and conjugated to iron oxide nanoparticles (IONPs). Donor BALB/c skin was grafted onto recipient B6 mice. On day 7, splenocytes and lymphocytes were harvested and stained for GzmB for flow cytometry analysis. Surface-labelled nanoparticles were administered to skin-grafted mice and imaged with the IVIS Imaging System. Nanosensors were administered prior to, and 7 days after, transplant, and urine samples were collected and analyzed by fluorimetry. All animal work was approved by the IACUC. Results: To develop an activity nanosensor to sense T cell killing, we conjugated the FITC-labelled GzmB substrate (IEFDSG) to the surface of IONPs. In a T cell killing assay, we observed activation of our nanosensors when transgenic OT1 T cells were coincubated with ovalbumin-expressing EG7-OVA but not EL4 target cells (Fig 1a,b). To test our nanosensors in the BALB/c donor to B6 recipient transplantation model, we first verified that GzmB is upregulated in activated splenocytes and lymphocytes at the onset of rejection (Fig 1c,d). At day 7, we intravenously administered nanosensors and detected preferential accumulation in skin allografts, and significant increases in urine signal in allograft mice but not in isograft mice or CD8-depleted mice (Fig 1e,f). Collectively, these experiments showed that GzmB activity nanosensors produce detection signals in urine during acute T cell rejection. The "gold standard" for monitoring graft health remains the core biopsy, despite its invasiveness and limited predictive power. We developed activity nanosensors that detect the onset of rejection noninvasively from urine, and this platform may be broadly useful for monitoring T cell activity in cancer immunotherapy and autoimmune disease. Purpose: To assess the potential of CFZ533 as a primary immunosuppressant in a calcineurin inhibitor (CNI)-free regimen in de novo kidney transplant (KTx) patients (pts). Method: CFZ533 is a new, fully human, Fc-silenced, non-depleting IgG1 mAb preventing CD40 pathway signaling and activation of CD40+ cell types. NCT02217410 is a 12-month multicenter RCT evaluating the efficacy, safety, tolerability, and pharmacokinetics of CFZ533 (CFZ) in combination with mycophenolate mofetil (MMF) and corticosteroids (CS), compared with tacrolimus (TAC), MMF, and CS, in de novo KTx recipients. All patients received basiliximab induction and corticosteroids as per center practice; a central, blinded pathologist reviewed all allograft biopsies.
Results: N=51 patients were transplanted and randomized (2:1) to either CFZ (N=33) or TAC (N=18). Twenty-five of 51 pts (49%) received a living donor allograft. After CD40 target saturation, CFZ was dosed IV every 4 weeks. CFZ was well tolerated with no infusion-related or thromboembolic events, and prospective Month 6 interim results demonstrated comparable efficacy on the composite endpoint of treated biopsy-proven acute rejection, graft loss, or death (21.2 vs. 22.2%), better renal function (55.8 vs. 45.5 mL/min) [Fig 1], fewer serious adverse events (SAE) (47.1 vs. 61.1%) and fewer infection complications (50.0 vs. 77.8%) with no increase in opportunistic infections (viral overall: 26.5 vs. 50.0%; SAE CMV: 2.9 vs. 11.1%; BKV: 15.2 vs. 22.2%), and a lower rate of new-onset diabetes mellitus (14.7 vs. 38.9%) with CFZ vs. TAC, respectively. Conclusion: CFZ533 may have the potential to become an effective CNI-free alternative treatment, improving transplant outcomes by preventing graft rejection without the nephrotoxic (and other) adverse effects of CNIs. 12-month final study data will become available in Q1/2018 and will be presented at the 2018 ATC. Figure 1: Evolution of renal function measured as eGFR (mL/min). Superior Outcomes Using Normothermic Regional Perfusion in cDCD Liver Transplantation. While there is increasing interest in its use, definitive evidence demonstrating superiority of normothermic regional perfusion (NRP) in controlled donation after circulatory death (cDCD) liver transplant has not been presented. Unlike the rest of the Western world, where use of NRP has been anecdotal, 25% of all cDCD donations performed in Spain since 2012 have included post-mortem NRP. AIM: Analyze the first years of the Spanish experience with cDCD liver transplantation, in particular the impact post-mortem NRP has had on organ utilization rates and transplant outcomes. METHODS: Data were collected regarding potential cDCD liver donors and the transplants that resulted between 2012 and 2016. All transplants had at least 6 mos of follow-up. Each donor hospital determined the process by which organs were recovered: NRP with pre-mortem cannulation, NRP with post-mortem cannulation, or super rapid recovery (SRR). RESULTS: From 2012 to 2016, 370 potential cDCD liver donors were evaluated: 152 with NRP and 218 with SRR. Ultimately, rates of liver transplantation were 64% NRP and 57% SRR (P=0.102). Among livers that were transplanted, median donor age was 57 (IQR 46-65). While there were no differences in relevant donor or recipient characteristics when analyzed according to recovery method, functional warm ischemia time (fWIT) was shorter when NRP was applied (12 (10-16) NRP vs. 15 (11-20) SRR), given that in most cases femoral cannulae were placed prior to withdrawal of care. While rates of early allograft dysfunction (22% NRP vs. 29% SRR) and PNF (2% NRP vs. 4% SRR) did not vary, rates of overall biliary complications (9% NRP vs. 24% SRR, P=0.006) and ITBL (2% NRP vs. 12% SRR, P=0.01) were significantly improved among recipients of livers recovered with NRP. One-year graft survival was 87% NRP vs. 78% SRR (P=0.110). On multivariate analysis of risk factors for ITBL (including fWIT), the only significant factor was the organ recovery method used. CONCLUSIONS: This is the first large series describing the application of NRP in cDCD liver transplantation. While results with SRR were acceptable, results using NRP were superior and comparable to those achieved using standard-quality livers, even in spite of advanced donor age.
CITATION INFORMATION: Hessheimer A., Coll E., Valdivieso A., Gómez M., Santoyo J., Ramírez P., Gómez-Bravo M., López-Andujar R., Villar J., Jiménez C., Lluís F., Lladó L., Casanova D., Barrera M., Charco R., López-Baena J., Briceño J., Pardo F., Blanco G., Pacheco D., Domínguez-Gil B., Sánchez-Turrión V., Fondevila C. Superior Outcomes Using Normothermic Regional Perfusion in cDCD Liver Transplantation. Am J Transplant. 2018;18 (suppl 4). Background: T cell exhaustion is a dysfunctional state that arises during chronic infection and cancer in response to the persistence of a high antigenic load. Whether chronic exposure to graft alloantigens leads to the formation of exhausted T cells (Tex) and, if so, whether Tex is related to graft outcome in human kidney transplant recipients is unknown. Methods: We used mass cytometry (CyTOF) to analyze exhaustion phenotypes of serially collected PBMC from 26 kidney transplant recipients enrolled in the prospective, observational CTOT-01 cohort (samples were obtained at months 0, 3, and 6 after transplant). Using a panel of 36 markers and unbiased clustering algorithms (Phenograph), we extracted the frequencies of 5 CD4+ and 2 CD8+ T cell subsets expressing different combinations of exhaustion markers (Figure 1A). Results: We observed significant increases in the percentages of circulating CD4+ Tex and CD8+ Tex from month 0 to month 6 in patients given induction therapy with anti-thymocyte globulin (ATG) compared to those given basiliximab or no induction (Figure 1B-C). Within the Tex cells, the CD4+TIGIT+TIM3-2B4- and CD8+TIGIT+2B4+41BB-TIM3- subsets were the most differentially increased between the two induction strategies. Percentages of CD8+TIGIT+2B4+41BB-TIM3- cells at 6 months correlated inversely with increasing interstitial fibrosis between the 0- and 6-month surveillance graft biopsies (r=-0.54; p=0.05). Discussion: Our findings identify previously unrecognized relationships among peripheral blood T cell exhaustion phenotypes, progressive kidney allograft injury and ATG induction, and raise the possibility that these observations are mechanistically linked. and iii) characterized using RT/PCR with primers for 16 functional markers (e.g., Tbet, GATA3, BCL-6, IFNγ, TGFβ). In a simultaneous reaction, the individual TCRs are sequenced. Data are analyzed using t-distributed stochastic neighbor embedding (t-SNE). A total of 5 cases are being examined. Results: We have already phenotyped and sequenced 204 T-cells (123 PBMC and 81 circumflex coronary; LCx) from one case. Using t-SNE analysis, the PBMC and LCx T-cells clustered discretely (Figure 1). Coronary T-cells mainly expressed T-bet and IFNγ, consistent with a Th1 phenotype. In contrast to the CD45RA+ predominance in peripheral blood, most coronary T-cells were CD45RO+. FOXP3+ Tregs and RORγt+ Th17 cells were virtually absent. Conclusions: Using a unique strategy combining functional phenotype with TCR sequencing and t-SNE analysis, we identified a discrete Th1 memory T-cell population amongst graft-infiltrating T-cells in CAV. These results are being combined with next-generation sequencing to characterize expanded T-cell clones in different locations (coronary and endomyocardium) and over time (from archived biopsies). We will expand this analysis to a larger cohort of specimens obtained during re-transplant for end-stage CAV. Introduction: Memory T cell responses play a critical role in the outcome of allotransplantation.
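The single-cell workflow described just above (16 functional markers per indexed T cell, embedded with t-SNE to compare blood versus coronary compartments) can be illustrated with a minimal sketch. The matrix shape, marker count, and compartment labels below mirror the numbers reported in that abstract, but the expression values are simulated and the variable names are hypothetical, not taken from the study's actual pipeline.

```python
import numpy as np
from sklearn.manifold import TSNE

# Simulated stand-in for RT/PCR-derived expression of 16 functional markers
# (e.g. Tbet, GATA3, BCL-6, IFNG, TGFB) across 204 indexed single T cells.
rng = np.random.default_rng(0)
expression = rng.normal(size=(204, 16))

# Compartment labels matching the reported split: 123 blood (PBMC) and 81
# circumflex coronary (LCx) cells.
compartment = np.array(["PBMC"] * 123 + ["LCx"] * 81)

# Embed the 16-dimensional phenotypes into 2D; perplexity must be < n_samples.
embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(expression)

# Plotting embedding[:, 0] vs embedding[:, 1], colored by compartment, is how
# one would ask whether PBMC and LCx cells form discrete clusters.
for label in ("PBMC", "LCx"):
    centroid = embedding[compartment == label].mean(axis=0)
    print(f"{label} centroid in t-SNE space: {centroid}")
```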
While the role of the T-box transcription factor Eomesodermin (Eomes) in the maintenance of antigen-specific Tmem is well studied, little is known about Eomes+CD8+ T cell responses after transplantation. We evaluated allo-reactive Eomes+CD8+ T cells in healthy volunteers and kidney transplant patients and their relation to transplant (tx) outcome. Methods: Three groups of patients were analyzed: patients with no rejection within the first year post-tx (n=11), patients with subclinical rejection (n=5), and patients with acute cellular rejection (n=5). All patients received Thymoglobulin induction and immunosuppression based on tacrolimus, mycophenolate mofetil and prednisone. Cryopreserved peripheral blood mononuclear cells (PBMC) from kidney recipients were stained with CFSE and co-cultured with irradiated donor PBMC. Proliferation was determined by CFSE dilution. Eomes, T-bet and cytokines (IFNγ, TNFα) were assessed by flow cytometry. High Eomes expression by steady-state CD8+ T cells correlated with an effector and memory phenotype. Following allo-stimulation, expression of both the T-box proteins Eomes and T-bet by proliferating cells increased significantly, and Eomes and T-bet co-expression correlated with a high incidence of IFNγ+TNFα+ CD8+ T cells. In patients exhibiting no rejection, Eomes but not T-bet expression by donor-stimulated CD8+ T cells increased significantly after transplantation. This was associated with a significant increase in the Eomes^hi T-bet^lo and a significant decrease in the Eomes^lo T-bet^hi CD8+ T cell subsets. Interestingly, before transplantation, there were significantly higher incidences of donor-stimulated Eomes^hi T-bet^lo CD8+ T cells in patients without subsequent rejection, and significantly higher incidences of donor-stimulated Eomes^hi T-bet^hi CD8+ T cells in those who subsequently exhibited cellular rejection. These findings indicate that Eomes but not T-bet is upregulated by donor-reactive CD8+ T cells early after transplant in patients without rejection. Also, concomitant high Eomes and T-bet expression by donor allo-stimulated CD8+ T cells is associated with enhanced effector function and may correlate with an increased incidence of rejection. Background: Human CD4+ T cell allo-immunity plays pivotal roles during cellular and humoral allograft rejection. Our previous analysis showed that allo-reactive precursors were present in both naïve and memory CD4+ T cells. Their distribution within the T follicular (TFH: CD45RO+CXCR5+) and T conventional (TCONV: CD45RO+CXCR5-) subsets is unclear. To address this question, we measured their allo-specific polarization, proliferation, and TCR clonal repertoire. Methods: FACS-sorted TCONV or TFH cells from 6 healthy controls (HC) were used to set up CFSE-MLR assays with autologous monocytes pulsed with allogeneic PBMC lysates. Proliferation and cytokine production (IFN-γ, IL-10, IL-17a, IL-21) were assessed by multi-parameter flow cytometry. In addition, DNA from selected in vitro co-cultures was isolated and the TCR-CDR3 regions were profiled by PCR amplification followed by high-throughput sequencing using the immunoSEQ platform (Adaptive Biotechnologies).
Results: Indirect allo-stimulation induced low proliferation, low effector cytokines (IFN-γ/IL-17a/IL-21) and significantly higher IL-10 responses within TFH and TCONV cells (considered a naïve response) in all subjects but one, who displayed low IL-10 secretion paralleled by high IFN-γ/IL-21 in TFH and high IFN-γ/IL-17a in TCONV cells, indicating a memory allogeneic response. ImmunoSEQ was performed on one subject with a naïve response and one who displayed functional memory. Although resting TFH cells from both subjects displayed a more diverse (polyclonal) TCR repertoire compared to TCONV, allo-stimulation triggered significant TCR clonotype expansion in both TFH and TCONV only in the HC who showed allo-reactive memory responses. Moreover, resting TCONV cells contained most of the alloreactive TCR clonotypes, which correlated positively with the alloreactive TCR clonotypes expanded after allo-stimulation. Two out of 38 expanded TCR clones were shared between TFH and TCONV after in vitro allo-specific stimulation, suggesting potential intraclonal diversity. Conclusion: These comprehensive analyses of human circulating TFH and TCONV cells reflect the heterogeneity of allo-reactivity among humans. In addition, monitoring of circulating allo-reactive TFH and TCONV cells may be important for identifying patients with pre-formed donor-reactive memory responses (effector cytokines and high TCR clonotype expansion) that may be deleterious post-transplant. The clinical significance of minimal tubulointerstitial inflammation (MTI: 't'+'i' scores 0.5-1.5) and Banff Borderline changes (BBC: 't'+'i' scores 2-3) in early renal allograft biopsies is unclear. The rate and significance of progression of these lesions to late acute rejection (AR) also remain unknown. Our center performs 2 protocol Bxs (3 & 12 mo) along with for-cause Bxs. This allows us to assess the clinical impact of early (0-4 mo) MTI and BBC on graft outcomes. 208/372 patients had either no inflammation (NI, 36%), MTI (34%) or BBC (30%) on early Bxs (0-4 mo). Patients with NI (17%), MTI (24%) and BBC (34%) at 3 mo exhibited increasing rates (in parentheses) of progression to AR (≥Banff 1A) by 12 mo. Further, patients with MTI or BBC at 3 mo had increased graft loss or impending graft loss (eGFR <30 ml/min plus >30% fall from baseline) by 50 mo when compared to those with NI (Fig 1A). While graft outcome in the NI group was not affected by progression to late AR (p=0.85), patients with early MTI (Fig 1B) or BBC (Fig 1C) had significantly worse outcomes if they developed late AR. Thus, early allograft inflammation (MTI or BBC) was not only associated with increased progression to late AR, but those who progressed had worse outcomes. Patients with MTI or BBC, particularly those who will progress, therefore constitute a cohort with poor outcomes. However, clinical factors, including 3 mo histology, could not predict progression. Based on previous results, we asked whether peripheral blood B cell cytokines could predict progression to AR. 72 patients with either MTI (n=26) or BBC (n=46) had their B cells analyzed at 3 mos. The IL10:TNFα expression ratio within T1 transitional B cells was significantly lower in both the MTI (6.3-fold lower) and BBC (4.6-fold lower) patients who progressed vs. those who remained stable (p<0.0004). Finally, a low T1B cytokine ratio strongly predicted late progression to AR (ROC AUC 0.94, p<0.0001, sensitivity 92%, specificity 88%) in patients with early allograft inflammation (MTI or BBC). Thus, patients with early minimal allograft inflammation that progresses to AR at 1 yr represent a high-risk cohort for graft dysfunction.
This group could be identified by 2-4 mo using the T1B IL10:TNFα ratio, allowing early intervention. Background: Deceased-donor (DD) kidneys are at higher risk for ischemia/reperfusion injury, leading to increased inflammatory mediators and an innate immune response. We aimed to investigate the effects of prolonged cold ischemia on the intragraft gene expression profiles of DD kidneys compared to living-donor (LD) pre-implantation biopsies, and to investigate the molecular features of DD biopsies that develop delayed graft function (DGF). Methods: There were 48 pre-implantation kidney biopsy samples (29 LD and 19 DD). The cold ischemia time (CIT) was <16 hrs in 10 and >16 hrs in 9 DD kidneys. 9 DD patients developed DGF after transplantation. Gene expression profiles were studied by Affymetrix HuGene 1.0 ST expression arrays. Results: Ingenuity Pathways Analysis demonstrated that DD pre-implantation biopsies showed increased expression of pathways related to the acute inflammatory response, lymphocyte-mediated immunity, innate and humoral immune responses, complement activation and IL-6 activation. 3 main canonical pathways were activated in DD kidneys (acute phase response signaling, the complement system and LPS/IL-1 mediated inhibition). Although most differentially expressed genes were shared between biopsies with CIT >16 hrs and those with CIT <16 hrs, there were unique genes that were differentially expressed (Figure). The top canonical pathways involved in DDs were exacerbated with increased CIT. A few pathways were activated only in the CIT >16 hrs group (iNOS signaling and actin cytoskeleton signaling). There were no differentially expressed probe sets when DD biopsies from patients who developed DGF were compared to biopsies from those without DGF. Conclusions: Pre-implantation DD biopsies showed increased expression of transcripts associated with inflammatory mediators, cytokines, macrophages, and the innate immune response compared to LD kidneys. However, increased CIT had only a slight effect on intragraft gene expression profiles. The molecular profile of pre-implantation biopsies was similar whether or not DGF later developed. Respiratory viral infection (RVI) following human lung transplantation (LTx) increases the risk for chronic rejection. We demonstrated that lung transplant recipients (LTxR) with acute and chronic rejection induce exosomes containing HLA and the self-antigens (SAg) Kα1 tubulin (Kα1T) and collagen V (Col-V). We determined whether RVI can induce exosomes containing SAg, leading to an immune response resulting in rejection, and assessed their immunogenicity in mice. Sera from RVI (n=35) and stable LTxR (n=32) were used for exosome isolation and detection of Abs to SAg. Exosomes were isolated by ultracentrifugation, validated by sucrose cushion and tested for CD9. SAg and viral antigens for respiratory syncytial virus (RSV), coronavirus (CV) and rhinovirus (RV) were detected by immunoblot. Exosomes from RVI and stable LTxR were injected subcutaneously (10 μg/100 μl) into C57BL/6 mice. Sera were analyzed for Abs to SAg by ELISA. Since studies have shown that a primary insult to the lung is needed for Abs to SAg to induce obliterative airway disease (OAD), we administered 0.1% HCl into the native lungs on day 1. Lungs from immunized mice were analyzed for histopathology with H&E and trichrome. Background: Obliterative bronchiolitis is the major obstacle limiting long-term allograft survival after lung transplantation. Airway epithelium is the primary target in obliterative airway diseases.
Th17 cytokines and signals play critical roles in mediating inflammatory responses and are involved in transplant graft rejection. In this study, we performed murine orthotopic trachea transplantations and examined the roles of Th17 development signals and IL-17 in airway epithelial injury after transplantation. Methods: Murine orthotopic allogeneic trachea transplants were performed in wild-type and RORγt-/- (C57BL/6 background) mice utilizing BALB/c donors. Recipients received anti-IL-6 mAbs or C188-9, an inhibitor of STAT3 activation, to suppress Th17 development signals, and anti-IL-17 mAbs for IL-17 neutralization. Syngeneic transplants were also performed in wild-type C57BL/6 mice using C57BL/6 donors. Airway epithelial injury and inflammatory cell infiltration of transplanted tracheas were verified by histopathologic analysis; IL-6, IL-17 and IFNγ expression in trachea grafts was examined by real-time RT-PCR. The expression levels of IL-17 and IL-6 in allografts were markedly elevated compared to those in isografts 3 days after transplantation. At day 14, the airway structure of isografts remained normal, whereas the pseudostratified ciliated columnar epithelium of allografts changed into flat epithelium with fibroblast proliferation and inflammatory cell infiltration in the submucosal tissue. Administration of anti-IL-6 or the STAT3 inhibitor improved the airway epithelial integrity of allografts; however, fibroblast proliferation and cell infiltration were still observed. Anti-IL-17 treatment or RORγt deficiency completely restored normal epithelial morphology in trachea allografts. Conclusion: Our results demonstrate that IL-17 is crucial to post-transplant airway inflammation and epithelial damage, and further indicate that, in addition to blockade of IL-6/STAT3 Th17 development pathways, suppression of IL-17 from other sources is necessary for airway epithelial protection and allograft survival after transplantation. Purpose: Interleukin-33 (IL-33) is an "alarmin", a self-derived molecule that triggers pleiotropic immune responses after release from damaged tissue. Yet how IL-33 shapes alloresponses or outcomes after solid organ transplantation is poorly understood. Using a pre-clinical mouse model, we demonstrate that IL-33 in heart transplants is critical to restrain infiltrating myeloid cells from differentiating into pro-inflammatory cells after transplantation and thus limits the subsequent development of chronic rejection. Methods: Wild-type BALB/c, il33+/+ Bm12, and il33-/- Bm12 hearts were heterotopically transplanted into WT or il33-/- B6 mice. In some groups, IL-33 was restored locally using an extracellular matrix-based hydrogel at the time of transplantation. At days 3 to 7, or 100 to 120, post heart transplant (HTx), isolated grafts were stained with H&E or trichrome. In addition to qRT-PCR and Western blot, IHC staining was used to define differences in cytokine expression and leukocyte infiltration. Leukocytes isolated from the HTx and recipient spleen were compared at various time points by flow cytometry. Results: Surgery, acute rejection, and chronic rejection increased IL-33 in HTx fibroblasts, vascular smooth muscle cells, and endothelial cells. HTx deficient in IL-33 displayed dramatically increased chronic rejection-associated fibrosis (% fibrotic: 58±6 vs. 42±1; P=0.002) and vasculopathy (% occlusion: 98±3 vs. 87±3; P=0.02). HTx deficient in IL-33 also had increased CD3+ areas of vascular cuffing (281±35 vs. 529±25 cells/mm²; P=0.0004).
Characterization of the myeloid compartment in the HTx at day 3 post-surgery revealed that transplants lacking IL-33 had a significantly increased frequency of pro-inflammatory macrophages (CD11b+ Ly6C^hi F4/80+) and cDC (CD11b+ CD11c+ CD80^hi). Local delivery of IL-33 reduced these pro-inflammatory immune cells in the graft. Conclusions: Our findings demonstrate that IL-33 is a regulatory alarmin after HTx and functions to control the generation of local inflammatory myeloid cells. Local IL-33 was critical to limit chronic rejection. We thus establish an unappreciated, yet fundamental, role for upregulated IL-33 in heart allograft protection. Background: We explored the role of CD40 signaling in kidney proximal tubule epithelial cells on renal interstitial fibrosis (IF) in two models: hypertension and chronic renal allograft rejection. To this end, we used spontaneously hypertensive salt-sensitive Dahl (Dahl-S) and CD40-mutant (Dahl-S CD40mut) rats (both displaying hypertension of ~180 mm Hg at 6 weeks old). These Dahl-S and Dahl-S CD40mut rats were examined for IF at 64 days of age or at 90 days post renal transplantation into normotensive Brown Norway (BN) allogeneic recipients. We hypothesize that CD40 signaling regulates IF in both hypertension and chronic allograft rejection. Methods/Results: As previously reported, Dahl-S CD40mut rats showed significantly less fibrosis and improved kidney function parameters over Dahl-S rats, including urinary protein excretion, plasma creatinine, and creatinine clearance, on both low (0.3%) and high (2%) salt diets. However, there was no difference in systolic blood pressure between Dahl-S and Dahl-S CD40mut rats (181 ± 3.8 vs. 183 ± 4.8 mmHg, NS), suggesting that CD40 regulates IF in hypertension independently of blood pressure. Signaling experiments in the hypertensive model showed that proximal tubules from Dahl-S CD40mut rats had a significant reduction in phospho-Lyn kinase activity (p=0.05) and expression of PAI-1 (p=0.01) compared to tubules from Dahl-S rats (not shown). Likewise, normotensive BN recipients treated for 30 days with tacrolimus 1.5 mg/kg (to block acute rejection) displayed significantly reduced IF in kidneys from Dahl-S CD40mut rats in comparison to Dahl-S rats (Fig. 1A). Dahl-S CD40mut kidney transplants also had reduced collagen 1A1 (COL1A1) and COL3A1 (Fig. 1B) as well as MCP-1 and PAI-1 (Fig. 1C) compared to Dahl-S kidney grafts, suggesting inhibition of CD40/PAI-1/Lyn signaling in fibrosis. Purpose: Latent donor rat cytomegalovirus (RCMV) infection accelerates chronic rejection (CR) in a heterotopic rat heart transplant model. RCMV latency is associated with an influx of macrophages (macs) and T cells into cardiac tissue, providing a potential site of alloantigen presentation. We sought to determine whether depletion of macs from RCMV+ donor allografts using clodronate-laden liposomes would slow the rate of CR. Methods: F344 rats were infected with RCMV 6-9 months prior to organ procurement to establish latent infection. Donor rats were treated with PBS or clodronate-laden liposomes. Mac depletion from donor hearts was confirmed by flow cytometry and immunohistochemistry. Donor hearts were heterotopically transplanted into CMV-naïve Lewis recipients. We evaluated the time to rejection, cytokine levels, immune infiltrates and the extent of transplant vascular sclerosis (TVS), which is reflective of CR. Donor clodronate treatment resulted in depletion of tissue-resident macs, but not T or B cells, from CMV latently infected allografts.
Donor clodronate treatment significantly delayed the mean time to graft rejection from 61 to 84 days post-transplant (p=0.002; Figure 1). Beginning at POD7, clodronate-treated allografts showed decreased TVS and mac infiltration compared to untreated allografts. At POD14, levels of G-CSF, GM-CSF, IL-18, CXCL-10, fractalkine, and CCL-1 in clodronate-treated allografts were significantly lower than in untreated allografts. This study represents a novel approach to the treatment of chronic rejection. We show that pre-transplant macrophage depletion of RCMV latently infected allografts slows the time to chronic rejection to a rate comparable to that seen with uninfected allografts. Clodronate-treated allografts had decreased pro-inflammatory cytokines, immune cell infiltrates, and TVS. These phenomena form the platform for future translational strategies that could ameliorate chronic rejection in the setting of CMV-positive donors transplanted into CMV-negative recipients. The incidence of antibody-mediated kidney graft rejection continues to increase. We previously reported that dysregulated donor-specific antibody (DSA) responses act together with NK cell activation within the graft to cause acute kidney allograft injury and graft failure. Moreover, in the absence of NK cell activation within (A/J x B6)F1 grafts, high titers of DSA cannot cause acute antibody-mediated rejection but instead induce the indolent development of interstitial fibrosis and glomerular injury that will eventually lead to graft dysfunction and failure at later times after transplantation. In 2016, 1208 deceased donors died of drug overdoses (12.9% of all donor deaths), compared with 331 (4.5%) in 2010 (Figure 1). Moreover, drug overdose deaths accounted for 45% of the total increase in donors from 2010 to 2016. Aims: The aim of the study was to determine whether donor liver apoptosis is predictive of early allograft dysfunction (EAD) and graft survival after liver transplantation (LT). Methods: 159 specimens from donor livers at the end of cold ischemia were analyzed for apoptosis by TUNEL assays. Univariate and multivariate logistic regression models were used to identify prognostic factors for the occurrence of EAD. A nomogram was developed based on the independent risk factors. The apoptosis index of the donor liver in patients with EAD was significantly higher than in patients without EAD (6.2±3.2 vs. 3.3±1.9, p<0.001). Univariate and multivariate logistic regression analysis identified the donor liver apoptosis index, donor age and gender, pre-transplant donor serum total bilirubin level, MELD score and CIT as independent predictors of EAD. A nomogram built on these predictor variables showed good calibration and discriminatory ability, with a c-index of 0.847 (95% CI, 0.839-0.853). The apoptosis indexes of donor livers in patients with graft loss within 30 days after LT were significantly higher than in patients without graft loss (6.1±3.0 vs. 4.1±3.6, p=0.011). Thirty-day graft survival was significantly lower in the high apoptosis index group (apoptosis index >4.4%) than in the low apoptosis index group (apoptosis index ≤4.4%) (84.4% vs. 97.6%, p=0.004). Conclusions: Donor liver apoptosis plays a significant role in predicting EAD after LT. Furthermore, a high donor liver apoptosis index was associated with inferior short-term graft survival. CITATION INFORMATION: Guo Z., Zhu Z., Tang Y., Huang S., Zhang Z., He X., Chen G.
Donor Liver Apoptosis is Associated with Early Allograft Dysfunction and Short-Term Graft Survival after Liver Transplantation. Am J Transplant. 2018;18 (suppl 4). Unraveling the Neuroimmunological Pathways Surrounding Brain Death. Donation after brain death (BD) remains an important source of transplanted organs worldwide, and it has been shown that BD creates a hostile environment with a systemic pro-inflammatory and coagulatory response and injury to the grafts-to-be. A recent UK transplant registry analysis demonstrated that a prolonged duration of BD was not associated with worse outcomes for abdominal organ transplantation (Boffa, 2017). Indeed, there might be a benefit to longer donor management for some transplanted organs, as cytoprotective proteins were found to counteract BD-related injury. The underlying mechanisms have, however, not been studied in detail. The period of donor management thus offers opportunities for targeted therapy. This work aimed to provide a systematic study of changes in neuronal, glial and immunological molecules in serum over time. Methods: 27 donors with severe intracranial haemorrhage (ICH) leading to BD and available serum samples from admission to organ procurement were included. Samples were collected as part of the UK Quality in Organ Donation (QUOD) biobank programme. Pooled samples from four different time points (admission to intensive care, diagnosis of BD, start of procurement, and end of procurement) were compared. Serum levels of neuron-specific enolase (NSE), glial fibrillary acidic protein (GFAP) and interleukin 6 (IL-6) were measured using ELISA. Serum concentrations of NSE, GFAP and IL-6 were highest at admission to intensive care, followed by a decline by the time of diagnosis of BD. The serum concentrations of the three markers of cerebral injury did not rise again until procurement: both GFAP and IL-6 showed an increased serum concentration at the end of procurement, while levels of NSE remained unchanged. This is the first study demonstrating that the neuroimmunological impact of cerebral injury detected in the serum of BD donors following ICH is highest during the initial insult. We show that serum levels of cerebral injury markers do not further increase during the period of donor management. To complete this work we will now study serum changes of NSE, GFAP and IL-6 across different durations of brain death. Understanding neuroimmunological changes following diagnosis of BD might allow identification of novel treatment opportunities for resuscitation and repair of transplanted organs. Background: Liver discards are poorly investigated. Among these discarded livers are potentially useable grafts that could save the lives of many of the 2,500-3,000 patients who are removed annually from waiting lists (WL) due to death or medical deterioration. We aimed to investigate the patterns of liver discards in the US in the MELD era. Methods: We analyzed a UNOS STAR file of all deceased donor organs from Jan 2002 to June 2015. Due to high discard rates, DCD donors were excluded. Results: A total of 92,967 deceased donor livers were considered for transplantation; of these, 75,582 (81.3%) were transplanted, and the remaining 17,385 (18.7%) were discarded at various points in the donation process. Over the same time period, 22,845 patients died on the list, and a further 13,722 were removed due to becoming too sick. 7,678 (44.2%) livers were discarded pre-recovery (Pre-D). The most common reasons for Pre-D were pre-recovery biopsy (Bx) findings (17.2%) or poor organ function (15.7%).
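The discard-pattern analysis above is essentially a series of tabulations over a registry extract (discard stage, documented reason, donor age, region). A minimal sketch of that kind of tabulation is shown below; the dataframe, column names and values are entirely hypothetical and are not the actual UNOS STAR field names.

```python
import pandas as pd

# Hypothetical extract of discarded, non-DCD livers; columns are illustrative only.
discards = pd.DataFrame({
    "discard_stage":  ["pre_recovery", "intraoperative", "intraoperative", "pre_recovery"],
    "discard_reason": ["biopsy_findings", "biopsy_findings", "poor_function", "other"],
    "donor_age":      [38, 62, 55, 71],
    "region":         [2, 6, 8, 3],
})

# Share of discards occurring before recovery vs. intraoperatively.
stage_share = discards["discard_stage"].value_counts(normalize=True)

# Most common documented reasons within each stage.
reason_share = (discards.groupby("discard_stage")["discard_reason"]
                        .value_counts(normalize=True))

# Fraction of discards coming from donors aged 40 or younger.
young_fraction = (discards["donor_age"] <= 40).mean()

print(stage_share, reason_share, f"donors <=40: {young_fraction:.0%}", sep="\n\n")
```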
9,707 (55.8%) livers were discarded intraoperatively (Intra-D). Bx findings accounted for 39% of these discards (n=3,751). Interestingly, of the livers discarded with an available Bx, 70% had ≤30% macrosteatosis, steatosis being the only Bx data available from UNOS. Surprisingly, 30.5% of all liver discards were in donors ≤40 years of age, and 80% in donors ≤60. <10% of Pre-D and Intra-D livers were HCV/HBV positive. There was significant regional variation in discard rates, with regions 2 and 6 having the highest rates (23.2% and 23.1%, respectively) and regions 8 and 3 the lowest (15.1% and 15.5%), p<0.001. Discard rates were not associated with overall donor volume. Figure 1 demonstrates that despite a rise in WL removals due to death/deterioration, liver discard rates have been steady at 6-8% annually. Conclusion: Substantial numbers of livers are discarded. Discards occur frequently in donors who are relatively young and in livers that do not demonstrate significant macrosteatosis. Regional variation in liver discards suggests that policy improvements can be made to minimize discard rates and maximize the use of potentially useable livers to alleviate the rising WL mortality/removal rate. As the US population ages, older liver donors (OLDs) represent a potential expansion of the donor pool. Historically, grafts from OLDs have been associated with poor outcomes and higher rates of discard, but recent studies reported outcomes equivalent to those of grafts from younger donors. We sought to identify trends in demographics, discard, and outcomes of OLDs. METHODS: We identified 4127 OLDs (aged ≥70) and 3350 liver-only OLD graft recipients using SRTR data (2003-2016). Currently the United States kidney allocation system gives priority to candidates with a CPRA of 100% by giving them additional allocation points and providing national sharing of matched kidneys. CPRA is calculated based on allele frequencies in the donor population and includes HLA A, B, C, DR and DQB antigens. In reality, CPRA is not an integer value and is calculated to multiple decimal points; a CPRA of 100% includes any CPRA calculation greater than or equal to 99.5%. This study sought to understand the relationship between the actual CPRA, by decile group, in candidates whose maximum CPRA qualifies them for the 100% CPRA designation and the likelihood of receiving a deceased donor transplant. All kidney-alone candidates on the waiting list from December 4, 2014 (the date of implementation of the new allocation system) onward whose maximum CPRA qualified them for the 100% designation were selected and included in the analysis (9228 candidates). Following local decline of a recovered kidney, making regional and national offers in larger simultaneous batches might decrease discards. We simulated varying batch sizes for offers stratified by KDPI, and described the likelihood of discard and delay. METHODS: Using 2010 KPSAM-SRTR data, we simulated allocation of 3090 organs, comparing current practice to making (one-hour) expiring simultaneous offers to batches of transplant centers. Kidneys not matched after 10 rounds of offers were classified as discarded for excessive delay. We excluded locally allocated and 0-MM kidneys. We assumed each hour's delay increases discard probability by 5%. RESULTS: Discard and delay decreased with increasing batch size (Table 1).
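The batched-offer analysis above rests on a simple simulation logic: offers expire hourly, kidneys unplaced after 10 rounds are discarded, and each hour of delay adds roughly 5% to the discard probability. The toy Monte Carlo below illustrates that logic only; the per-center acceptance probability, function names, and outputs are assumptions for illustration and do not reproduce the KPSAM-based model.

```python
import random

def simulate_offer_rounds(batch_size, p_accept_per_center=0.05,
                          max_rounds=10, discard_per_hour=0.05, n_sims=10_000):
    """Toy Monte Carlo of expiring one-hour batched offers for a single kidney.

    Assumptions (illustrative, not from the abstract): each offered center
    independently accepts with probability p_accept_per_center; a kidney that
    is unplaced after max_rounds, or lost to the per-hour viability penalty,
    is counted as discarded.
    """
    random.seed(0)
    discards = placements = total_delay = 0
    for _ in range(n_sims):
        for hour in range(1, max_rounds + 1):
            if random.random() < discard_per_hour:      # hourly viability penalty
                discards += 1
                break
            # Probability at least one center in this hour's batch accepts.
            if random.random() < 1 - (1 - p_accept_per_center) ** batch_size:
                placements += 1
                total_delay += hour
                break
        else:                                            # never placed within 10 rounds
            discards += 1
    return {"discard_rate": discards / n_sims,
            "mean_hours_to_placement": total_delay / placements if placements else None}

for size in (2, 5, 10, 20):
    print(size, simulate_offer_rounds(size))
```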
Among regionally accepted offers with KDPI >85%, batch sizes of 2/5/10 centers resulted in 23%/81%/100% placed within 3 hours of delay. Among nationally accepted offers, batch sizes of 5/10/20 centers resulted in 23%/28%/56% placed within 3 hours of delay (Figure 1). For KDPI >85%, making national offers in batches of 10 centers rather than 5 decreased discard from 71% to 34%. The probability of accepting but not receiving an organ (because a higher-priority candidate received the kidney) with a regional offer batch size of 10 was 38% for KDPI <0.25 and 7% for KDPI >0.85. CONCLUSION: Making expiring simultaneous kidney offers at the regional and national levels might decrease delays and discard without over-burdening providers. For high-KDPI kidneys, increasing the batch size might decrease delay of placement. The revised Kidney Allocation System (KAS) has been operational for 3 years. The bolus effects seen early after its introduction have abated and a steady state has been achieved. Previous reports have shown an increase in regional and national sharing and no change in 1-year death-censored overall graft survival. KAS implemented 4 allocation sequences based on KDPI, a calculated risk index used as a predictor of long-term outcome. KAS prioritized the top 20% most optimal donor organs (Sequence (S) A) to candidates with the greatest estimated post-transplant (tx) survival (EPTS), and S-A plus S-B (KDPI 21-35%) organs to pediatric candidates. Since KAS, data review shows a decrease in the rate of pediatric transplants and a failure to maximally allocate the S-A kidneys to the best EPTS candidates. Current UNOS policy requires the kidney be allocated to a multi-organ (M-O) candidate before passing into the kidney-alone KAS prioritization. Of the 14,084 total kidneys available for tx in KAS Year 2, 11.7% were never allocated according to the KAS algorithm. Based on the previously reported M-O distribution in 2015, 87.2% of SPK, 53.4% of SLK and 70.6% of K+O candidates received the most optimal (and pediatric-prioritized) S-A and S-B donor organs, which were never allocated to pediatric recipients. Also bypassed for these optimal kidneys were the KAS-stratified highly sensitized, prior living donor and greatest-EPTS candidates. Additionally, with the exception of the SLK recipients, no outcomes reporting is required for the other 855 kidneys. Under current UNOS policy, M-O tx recipients receive prioritization for 11.7% of the most optimal kidneys before kidney-alone tx allocation, thus disadvantaging multiple vulnerable populations, including pediatric candidates. Efforts to optimize the intended benefits of KAS would promote the principles of longevity matching and fairness in kidney allocation. Policy correction of this growing problem is indicated. Within this group, 77% had AKI. Compared to donors without AKI, these donors were marginally older but less likely to have unfavorable comorbidities or death due to stroke. They also had lower mean initial Cr and KDPI. Net-exporting OPOs accounted for the majority of un-recovered kidneys, both with and without AKI. Conclusions: A large percentage of un-recovered kidneys with elevated terminal Cr come from donors with AKI and were located in OPOs classified as net exporters. We hypothesize this may reflect the attitudes of less aggressive local centers. Since many of these kidneys have better function than their terminal Cr indicates, recovery represents an easy way to expand our supply (Table).
After adjusting for clinical characteristics and center-level variation, ESW was associated with a lower risk of graft failure among those without DGF (aHR 0.94, 95% CI 0.88-0.99) but not among those with DGF (aHR 1.05, 95% CI 0.97-1.13; p-for-interaction=0.008; Figure). ESW was also associated with a lower risk of death among those without DGF (aHR 0.90, 95% CI 0.86-0.94) but not among those with DGF (aHR 0.96, 95% CI 0.90-1.03; p-for-interaction=0.08; Figure). CONCLUSION: While ESW appears to be safe and effective for recipients without DGF, it might not confer the same benefits on those with DGF. Background: Immunologic risk monitoring of kidney transplant (Tx) recipients using donor-specific antibodies (DSA) has become standard practice in most Tx centers in the USA. There is, however, a paucity of data on the incidence and clinical complications associated with recurrence of preformed DSA (R'DSA) or de novo DSA (dnDSA) in kidney Tx patients receiving an early steroid withdrawal immunosuppression (IS) regimen. Methods: Prospective monitoring of DSA after kidney Tx was performed in patients who consented to the study protocol between 2009 and 2011 at the Indiana University Tx Program. Patients were evaluated for DSA at the 3rd, 6th, 9th and 12th months and then yearly for a total of 3 years. Patients with a history of a positive flow cytometry crossmatch were excluded. Patients received anti-thymocyte globulin with pulse steroid induction (early five-day withdrawal), except zero-mismatch living donor recipients, who received basiliximab. Patients were placed on a steroid-free, two-drug IS regimen. DSA were checked using single antigen beads (One Lambda Inc). Introduction: Kidney transplantation in African American (AA) recipients continues to demonstrate inferior outcomes when compared with other races/ethnicities. Contributing factors encompass differences in pharmacogenomics and variances in the PK/PD of immunosuppressant therapy. The concept of genotyping patients in solid organ transplantation is gaining interest, as little is known about the impact of CYP3A5 polymorphisms on transplantation outcomes among AA kidney transplant recipients (KTRs). Methods: In a single-center retrospective cohort study, all 162 adult KTRs over a 24-month period who received oral tacrolimus (TAC) as part of maintenance immunosuppression were analyzed for the prevalence of CYP3A genomic variants, with a subanalysis of clinical outcomes in the AA cohort. Results: 85 patients expressed a CYP3A5*1 variant and 77 patients expressed non-*1 variants. The CYP3A5*1 group was predominantly AA (93%, p≤0.0005). Within the subgroup of 106 AA patients, the incidence of BPAR in CYP3A5*1 expressers compared to nonexpressers was significantly higher in the first 6 months (13% vs 0%; P = .016) but not at 24 months (13% vs 7%; P = .521). The TAC total daily dose at first therapeutic level was significantly higher in CYP3A5*1 expressers (12 mg/day) compared to nonexpressers (8 mg/day; P < .001). Compared to CYP3A5*1 nonexpressers, DGF incidence was significantly higher among CYP3A5*1 expressers (27.6% vs 6.7%; P = .006). By contrast, median GFR at 24 months was significantly higher in CYP3A5*1 expressers compared to nonexpressers (54.5 mL/min vs 50.0 mL/min; P = .003). Conclusion: We confirm that AAs are the predominant expressers of the *1 variant and suggest that AAs with CYP3A5*1 expression require 50% more TAC and have an increased incidence of DGF and acute rejection.
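The DGF-stratified hazard ratios and p-for-interaction reported for early steroid withdrawal at the start of this section come from an adjusted time-to-event model. A minimal sketch of one way to test such an interaction, using the Python lifelines package on simulated data with hypothetical variable names (this is not the authors' registry model and omits center-level adjustment), is:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Simulated recipient-level data; all names and values are illustrative only.
rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "years_to_graft_failure": rng.exponential(scale=5, size=n),
    "graft_failed": rng.integers(0, 2, size=n),
    "esw": rng.integers(0, 2, size=n),        # early steroid withdrawal (1 = yes)
    "dgf": rng.integers(0, 2, size=n),        # delayed graft function (1 = yes)
    "donor_age": rng.normal(45, 12, size=n),
})
# Explicit product term lets the ESW effect differ by DGF status.
df["esw_x_dgf"] = df["esw"] * df["dgf"]

cph = CoxPHFitter()
cph.fit(df, duration_col="years_to_graft_failure", event_col="graft_failed")

# The p-value on esw_x_dgf plays the role of a p-for-interaction; the ESW
# hazard ratio is exp(beta_esw) without DGF and exp(beta_esw + beta_interaction)
# with DGF.
print(cph.summary[["exp(coef)", "p"]])
```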
A high tacrolimus trough coefficient of variation (TAC CV) has been associated with an increased risk of de novo donor-specific antibody (dnDSA) and acute rejection (AR). We have previously shown that a lower tacrolimus time in therapeutic range (TAC TTR) also predicts dnDSA and AR. The purpose of this study was to evaluate the risk of dnDSA and AR in patients with a high TAC CV, stratified by TAC TTR. From 2007 to 2013, kidney transplant recipients who were initiated and maintained on tacrolimus in the first year of transplant were screened for dnDSA at months 1, 6, and 12 and when clinically indicated. Tacrolimus troughs from 1 week to 12 months were used, and troughs after dnDSA or acute rejection were excluded. TAC CV was calculated as (SD/mean) x 100%. The Rosendaal method was applied to calculate TAC TTR using a therapeutic range of 5-10 ng/ml. AR included clinical cellular and/or antibody-mediated rejection. INTRODUCTION. We examined the utility of cell-free DNA (cfDNA) sequencing of urine supernatants in kidney transplant recipients for diagnosing rare and common urinary tract infections (UTI), estimating bacterial growth rates, and identifying antibiotic resistance genes. METHODS. We collected 28 urine specimens from 18 kidney transplant recipients with bacterial UTI, 6 urine specimens from 2 kidney transplant recipients with parvovirus infection, and 2 urine specimens from 1 kidney transplant recipient with adenovirus infection. Single-stranded library preparation and shotgun sequencing of the 36 urine supernatants were performed using the Illumina NextSeq, 75 bp by 75 bp. Microbial identification was performed using the GRAMMy pipeline. RESULTS. In 27 of the 28 bacterial UTIs, urinary cfDNA identified the pathogen confirmed by bacterial culture. Urinary cfDNA detected H. influenzae in a specimen that was negative by conventional urine culture but positive after the clinician requested advanced culturing techniques. In the cases of parvovirus and adenovirus infection, urinary cfDNA detected the pathogens in all cases at least 1 week prior to the clinical diagnosis. Antibiotic resistomes were constructed from urinary cfDNA in the UTI-associated samples and correlated with antimicrobial susceptibility testing. Growth rates of bacterial pathogens were estimated based on uneven cfDNA coverage over the origin of replication. CONCLUSIONS. Cell-free DNA sequencing of urine supernatants provides an all-inclusive method to anticipate and diagnose common and rare infections as well as inform antibiotic resistance patterns and bacterial growth rates. Pneumocystis pneumonia (PJP) after transplant (TX) is associated with substantial morbidity and mortality. In the present era of universal and routine implementation of PJP prophylaxis (PPX) during the initial 6 to 12 months after TX, usually with trimethoprim-sulfamethoxazole (TMP-SMX), identifying risk factors for its occurrence can help identify patients who remain at increased risk, and who are therefore most likely to benefit from ongoing PPX. Background: Frailty is predictive of postoperative morbidity, including infections, in older surgical patients. It is not known if frailty is associated with postoperative infections in patients undergoing renal transplantation. This study attempts to answer this question.
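The tacrolimus variability metrics defined in the abstract above, CV as (SD/mean) x 100% and TTR by the Rosendaal interpolation method over a 5-10 ng/ml range, are straightforward to compute from a series of dated trough levels. The sketch below is a simplified illustration with made-up trough values; the interval-by-interval interpolation is approximated numerically rather than by exact crossing points.

```python
import numpy as np

def tac_cv(troughs):
    """Tacrolimus trough coefficient of variation: (SD / mean) x 100%."""
    troughs = np.asarray(troughs, dtype=float)
    return troughs.std(ddof=1) / troughs.mean() * 100

def rosendaal_ttr(days, troughs, low=5.0, high=10.0):
    """Percent time in therapeutic range, Rosendaal-style linear interpolation.

    Assumes the trough changes linearly between measurements and accumulates
    the fraction of each interval spent inside [low, high] ng/ml.
    """
    in_range = total = 0.0
    for (d0, c0), (d1, c1) in zip(zip(days, troughs), zip(days[1:], troughs[1:])):
        span = d1 - d0
        total += span
        ts = np.linspace(0.0, 1.0, 200)           # fine sampling of the interval
        conc = c0 + (c1 - c0) * ts
        in_range += span * np.mean((conc >= low) & (conc <= high))
    return 100 * in_range / total

days = [7, 30, 60, 120, 240, 365]               # post-transplant day of each trough
troughs = [12.0, 9.5, 6.8, 4.2, 7.9, 8.3]       # made-up levels, ng/ml
print(f"TAC CV  = {tac_cv(troughs):.1f}%")
print(f"TAC TTR = {rosendaal_ttr(days, troughs):.1f}%")
```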
Methods: We performed a retrospective, observational cohort study of older patients (age ≥65) who underwent primary renal transplant (RT) in a single transplant center from January 1, 2013 to April 1, 2017. The modified frailty index (mFI) was calculated based on 11 risk factors. The mFI scores were categorized as follows: 0-1 (fit), 2 (mildly frail) and ≥3 (moderately-severely frail). Outcomes measured were 30-day mortality, 30-day post-transplant infection (PTI), 30-day readmission, and 200-day PTI. The association of frailty with RT outcomes was determined using multivariate logistic regression adjusting for confounders. Results: Eighty-three RT recipients met the inclusion criteria. Of these, 60 (72%) were men, and the mean age was 70 years (±3.93). The mean time on dialysis was 4.5 years (±2.9). The distribution of mFI scores was as follows: 0-1 (37%), 2 (25%), and ≥3 (37%). In order to utilize a willing but incompatible living donor, candidates must either proceed with incompatible transplantation or find a more compatible match using kidney paired donation (KPD). A criticism of KPD is that the "easy-to-match" candidates leave the registry quickly, thus concentrating the pool with sensitized candidates and making future matches challenging. -79, 17.7% had PRA 80-97, and 22.9% had PRA 98% or higher. There were no changes in the prevalent pool of candidates awaiting KPD (Figure 1), except an initial increase in candidates with PRA ≥80% (28.0% in 2011 to 41.1% in 2012, Figure 2). Importantly, there were also no changes in the rate of deceased donor kidney transplant over this time period. Conclusion: In this large, national KPD clearinghouse, there is no evidence that candidates have become more "hard-to-match." Providers should continue to counsel candidates who are not sensitized, and their potential donors, to consider KPD. Acute cellular rejection and BK virus nephropathy both cause deteriorating graft function that can lead to graft loss. Distinguishing the two conditions rapidly and accurately is vital for immunosuppression management. Currently, repeated biopsies are required for differential diagnosis and disease monitoring. Using the Nanostring platform to measure gene expression in urine sediment RNA as a noninvasive means to assess the status of renal allografts, we investigated the hypothesis that gene expression patterns in urine sediment will distinguish acute rejection from BK virus nephropathy without the need for a biopsy. Using the Nanostring PanCancer Immunology panel, we measured expression of 795 immune function genes in RNA isolated from the urine sediment of 61 renal transplant patients: 29 control subjects with stable graft function and no clinical signs of rejection, 17 subjects with biopsy-proven acute T cell-mediated rejection, and 15 subjects with biopsy-proven BK virus nephropathy. When compared with control samples, the most significantly upregulated genes at the time of acute rejection were the same as those most upregulated during BK virus nephropathy (CXCL9, CXCL10, CXCL11, CX3CL1 and IDO1). Using the elastic net, a penalized regression method that applies both the lasso and ridge penalties, we developed a set of 24 genes that can distinguish transplant recipients experiencing acute rejection from transplant recipients with BK virus nephropathy (area under the curve 0.92; 95% confidence interval 0.78-0.99 by receiver operating characteristic curve analysis).
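The elastic net classifier described above (lasso plus ridge penalties selecting a compact gene panel, evaluated by ROC AUC) can be sketched in a few lines of Python with scikit-learn. Everything below is illustrative: the expression matrix is simulated, the class labels are random, and the AUC is computed in-sample, so it does not reproduce the 24-gene panel or the reported 0.92 AUC.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Simulated stand-in for Nanostring counts: rows = urine-sediment samples
# (1 = acute rejection, 0 = BK virus nephropathy), columns = 795 genes.
rng = np.random.default_rng(0)
X = np.log1p(rng.lognormal(mean=5, sigma=1, size=(32, 795)))
y = rng.integers(0, 2, size=32)

# Elastic net = logistic regression with a mix of L1 (sparsity, gene selection)
# and L2 (shrinkage) penalties; l1_ratio controls the mix.
model = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="elasticnet", solver="saga",
                       l1_ratio=0.5, C=0.5, max_iter=10_000),
)
model.fit(X, y)

# Genes with non-zero coefficients form the discriminative panel, analogous
# to the 24-gene set reported in the abstract.
coefs = model.named_steps["logisticregression"].coef_.ravel()
n_selected = int(np.count_nonzero(coefs))
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
print(f"{n_selected} genes selected; in-sample AUC = {auc:.2f}")
```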
These data indicate that Nanostring analysis of gene expression in the urine sediments of kidney transplant patients can provide a noninvasive means to detect the presence vs. absence of ongoing graft injury and to distinguish graft injury caused by acute rejection from that caused by BK virus nephropathy. These data support the need for prospective studies to assess whether Nanostring-guided therapy improves kidney transplant outcomes. We previously described a microarray diagnostic system (MMDx) for rejection in 1208 kidney transplant biopsies (J. Reeve et al. JCI Insight 2 (12), 2017). The system assigns molecular phenotype using binary classifiers and estimates rejection classes using archetypal clustering. The present study validated the system in a set of 537 new biopsies and studied the relationship to histology (available in 227 of the 537). Histology diagnoses in validation set 537 were similar to those in set 1208. Biopsies were distributed in principal component analysis (PCA) space by rejection molecular phenotype: PC1 = rejection; PC2 = ABMR vs. TCMR; PC3 = ABMR stage (early-stage, fully-developed, late-stage) (Fig. 1: PC2 vs PC1, left panels; PC2 vs PC3, right panels). Set 1208 is at the top (Fig. 1A-D) and set 537 at the bottom (Fig. 1E-H). Biopsies are colored by histology class (A, B and E, F) and molecular class (C, D and G, H). Relationships between molecular phenotype (position) and histology diagnoses (color) in set 537 were similar to those in set 1208, including separation of ABMR stage in PC3 (Fig. 1H). Thus MMDx performed similarly in the validation set. We used receiver operating characteristic (ROC) curves to compare the ability of archetype analysis (AA) categories and gene set scores to predict histologic diagnoses (test set results) (Fig. 2A). AUCs were calculated for ABMR (Fig. 2A) or TCMR (Fig. 2B) in set 1208, and in the 227 biopsies in set 537 with histology diagnoses (Fig. 2C,D). AA and binary classifiers performed similarly. Gene sets and single genes often performed less well than the machine learning methods (AA and binary classifiers), particularly for ABMR (Fig. 2A,C). The best AUC was 0.86 for ABMR and 0.88 for TCMR. The results showed that the MMDx system created in the 1208 biopsies performs similarly in validation set 537, and that machine learning classifiers, rather than gene sets, are necessary to optimize the molecular diagnosis of rejection. Mean Banff ti-score in both groups was similar at baseline. On follow-up, it declined (-39.5%) in the TCZ group and increased (+21.9%) in the control group. More patients in the TCZ group than in the control group had a decline in the ti-score (7/11 vs. 2/11, p=0.08). Mean circulating CD4+CD25+FoxP3+ Treg frequency (n=20) was similar at baseline (4.3% vs 5.1%, p=0.58); it increased in the TCZ group (+50%) and declined in the control group (-22.5%) over 6 months (p=0.012). PMA/ionomycin-stimulated production of IL-17 by CD4+ T cells decreased in the TCZ group and increased in the control group at 6 months. There were no cases of death or graft loss. Conclusions: TCZ treatment for 6 months was well tolerated in kidney transplant recipients. It was associated with a significant increase in circulating Tregs, a significant decrease in CD4+ T cell cytokine production and a trend towards decreased graft inflammation. IL-6 blockade is a novel treatment option to modulate the alloimmune response.
The value of urinary biomarkers has already been demonstrated for diagnosing acute rejection of the renal allograft, but confounding factors that may interfere with their interpretation have been poorly evaluated. Our goal was to evaluate the respective impact of urinary tract infection, reactivation of BK virus (BKV) and acute rejection on these biomarkers in order to optimize a noninvasive strategy for acute rejection assessment. A total of 391 urine samples collected from 330 patients at the time of graft biopsy and BKV blood nucleic acid testing were included. Urinary concentrations of the CXCL9 and CXCL10 proteins and of CD3ε, CXCL9, CXCL10, granzyme B and perforin mRNAs were quantified. Serum DSA status at the time of biopsy was also determined by Luminex® single antigen assay. Our results confirmed the impact of urinary tract infection on the urinary chemokine protein levels (1.6±2.1 vs 0.7±1.8, p<0.005 for CXCL9 and 2.1±1.7 vs 0.8±2.0, p<0.001 for CXCL10). During reactivation of BKV, while viruria had minimal impact, concentrations of urinary RNA (mainly CXCL9, CXCL10 and granzyme B) and protein biomarkers were significantly increased and remained similar in viremic patients and patients with proven BKV nephritis. After elimination of these confounding factors and multivariate analysis, a 3-parameter diagnostic model (urinary CXCL9 and CXCL10 proteins and serum DSA status) best diagnosed acute rejection, with an area under the curve of 0.81 (P=9.93E-13). Restricted to unstable patients at the time of indication biopsy (n=168), the rejection prediction was even more robust, with an AUC of 0.85 (P=5.09E-12). No data exist to evaluate the impact of hepatectomy time (HT) during donation after cardiac death (DCD) procurement on short- and long-term outcomes following liver transplantation (LT). In this study we analyse the impact of the time from aortic perfusion to the end of hepatectomy on outcomes following DCD LT across all UK transplant centers. Using data requested from NHSBT, we identified 1112 adult patients receiving a first LT in the UK between 1 January 2001 and 31 August 2015 from a DCD donor. Primary end points were PNF and all-cause graft survival. A cohort of DBD donor recipients (n=7221) in the same time period was included to allow comparison of long-term survival. Statistical methods included logistic regression and Cox proportional hazards models. The incidence of PNF was 40 (4%), and in multivariate analysis only CIT >8 hrs (p=0.023) and HT >60 mins (p=0.01) were correlated with PNF. Overall 90-day, 1-yr, 3-yr and 5-yr graft survival in DCD LT was 91.2%, 86.5%, 80.9% and 77.7%, compared to 94%, 91%, 86.6%, and 82.6%, respectively, in the DBD cohort from the same period (n=7221) (p<0.001). In multivariate analysis, the factors associated with poorer graft survival were HT >60 mins (or, more specifically, >53 mins on a continuous spectrum), donor age >45 yrs, CIT >8 hours and previous recipient abdominal surgery. This is the largest study to date to demonstrate a negative impact of prolonged HT on outcomes of DCD LT; although HT >53 mins is not a contraindication to utilisation, it should be taken into account in a multifactorial assessment alongside established prognostic donor factors such as age (>45 yrs). Results: Cold ischemia time (CIT), recipient hepatectomy and anastomosis times were significantly shorter in era II (p<0.001, 0.001 and <0.001, respectively). In era II, there were fewer biliary complications and none of the patients developed IC, compared to 18% in era I (p<0.05) (Table 1).
One-year DCD LT graft survival for recipients in era II was significantly higher than that of recipients in era I (93% vs. 79%, p=0.01) and was comparable to that of deceased donor (DBD) LT recipients (89%, p=0.22). The use of ECD DCD livers was comparable in the two eras (30% in era I vs. 23% in era II, p=0.35), but graft loss for ECD DCD livers was significantly higher in era I (53% vs. 0%, p=0.004). The number of DCD LTs performed at our center gradually increased after institution of the optimization protocol, coming to account for 12% of the center's LT volume. Conclusions: Shorter ischemic times, along with a thrombolytic donor flush, prevented IC and improved survival outcomes after DCD LT. The improvement in clinical outcomes was associated with regrowth of the DCD LT program. In Y1, overall costs varied from $82K for kidney recipients to $366K for intestine recipients. For recipients of all organs, functioning grafts were less costly than death or graft failure. For kidney recipients, DCGF was 4.3 times more costly than functioning grafts, likely due to the high cost of dialysis. Death was 4.7 times more costly than functioning grafts. For liver recipients, Medicare paid 2.5 times more for DCGF and re-transplants, and 4.8 times more for death, than for functioning grafts. For heart and lung recipients, the cost of dying in the first year was 4.5 and 3.9 times more than for functioning grafts. In Y2, overall costs varied from $26K for kidney recipients to $105K for intestine recipients. The cost difference between functioning and failed grafts was more extreme for most organs. For kidney recipients, DCGF was 9 times more costly and death nearly 5.9 times more costly than functioning grafts. For liver recipients, Medicare paid 4.9 times more for DCGF and 5.8 times more for death than for functioning grafts. For lung recipients, DCGF was more costly than death, at 6.4 vs. 4.8 times the cost of functioning grafts. Costs of death and graft failure are high after solid organ transplant. This information is essential for determining the value of interventions designed to avoid these outcomes. These data can facilitate efforts to improve outcomes across all solid organ transplant types. Current transplant center performance models focus on 1-year graft and patient survival as primary outcomes. This is despite the fact that 2.5-year cohorts are used to generate the models, and as such 2.5 years of follow-up are available for evaluation using standard statistical models. Moreover, a clear challenge with the current assessment is the very modest clinical relevance of differences between "good" and "bad" center performance, given high expected 1-year survival. We evaluated all US adult kidney transplant centers using the SRTR and retrospectively simulated performance models over four independent cohorts (2009-2016) using standard risk adjustment but incorporating 2.5-year follow-up for graft and patient survival rather than administrative censoring at 1 year. We evaluated changes in performance evaluations (using CMS criteria) and differences in observed and expected survival among performance groups. Over four periods and 822 transplant center cohorts, 39 cohorts were flagged using both 1-year and 2.5-year follow-up for graft or patient survival. However, 26 cohorts were flagged with 1-year but not 2.5-year follow-up, and 20 programs were flagged with 2.5-year but not 1-year follow-up. Overall, 40% of programs currently flagged for 1-year outcomes would not meet CMS flagging criteria with extended follow-up.
Differences in observed and expected graft and patient survival were significantly greater (p<0.01 for both) for centers underperforming on 2-year survival after LT (Figure 1a). There was wide variation in morbidity across centers (Table). Even when limited to "top performing centers" (n=30), absolute 1-year patient survival varied by 8% (88-96%), but ITT survival for MELD ≥20 varied by 40% (47-86%) (Figure 1b). Morbidity after LT was variable: median LOS 7-27 days, 4+ readmissions 0-6%, and variable CKD rates.

Transplant program performance assessment has traditionally focused on posttransplant outcomes. However, the survival benefit conferred by transplant offers a compelling reason to broaden program assessment to include pretransplant outcomes such as risk-adjusted transplant rate and waitlist mortality. Pretransplant metrics in isolation do not capture the entire range of possible patient experiences, because a high transplant rate may lead to worse posttransplant outcomes due to use of high-risk donors. Survival from listing may balance these issues by incorporating aspects of both pretransplant and posttransplant care. Using SRTR data, we estimated program-specific survival-from-listing hazard ratios for candidates listed at kidney, liver, lung, and heart programs between July 1, 2010, and June 30, 2016. The hazard ratios were estimated with a log-normal frailty term and were adjusted for candidate characteristics at the time of listing. A time-varying covariate for transplant was not included, because higher transplant rates should improve survival from listing through better access to transplant. Variability in program-specific hazard ratios for survival from listing was relatively low compared with transplant rate ratios, waitlist mortality rate ratios, and hazard ratios for 1-year graft survival (Table 1). In kidney and liver transplantation, survival from listing was strongly associated with each metric. In contrast, for heart and lung programs, survival from listing had a relatively attenuated association with the transplant rate ratio. In conclusion, survival from listing captures the effect of both pretransplant and posttransplant care and deserves further investigation as an intent-to-treat analysis of patient experience at a program (Figure).

In a time-varying Cox model adjusting for donor and recipient factors, enteric conversion did not affect the risk of graft loss (HR 1.54, P = 0.09). The most common complications after conversion were bowel obstruction and enteric leaks, most of which occurred in the first year after conversion. When necessary due to symptoms or complications, enteric conversion of bladder-drained pancreata is safe and does not affect overall graft survival; this appears to be true no matter when the conversion is performed.

(Figure 1A-C). PTG-derived CD34+ cells were as effective as whole PTG in preserving graft mass, whereas CD34- cells provided minimal benefit (Figure 1D). The islet-preserving effects were associated with marked enhancement of graft vascularization and protection from beta-cell death by PTG-secreted factors. Moreover, PTG co-transplantation led to diabetes reversal using human islets (1000 IEQ) in SQ in 1 week, SCIPC in IM in 4 weeks, and eBC in IM in 1 week (Figure 2). We show that co-transplantation of PTG with mature islets and stem cell-derived beta cells leads to significantly increased survival and reversal of diabetes in SQ and IM sites, and that the islet-protective activity of PTG resides in CD34+ progenitor cells.
Introduction: Ten-year outcomes from a cohort of five pre-uremic islet transplant (txp) recipients treated with the anti-leukocyte functional antigen-1 (LFA-1) antibody efalizumab (EFA) revealed two patients who remain insulin independent with either minimal or complete withdrawal of immunosuppression (IS), prompting investigation of the underlying immunologic mechanisms. Methods: Patients remained on EFA following islet infusion until drug discontinuation on May 1, 2009 (average treatment 570 days), with transition to CNI-sparing IS. Blood samples were monitored pre- and post-txp for CD4+, CD8+, and regulatory T (Treg) cells. Recipient alloreactivity was evaluated by mixed lymphocyte reactions (MLR) in vitro against donor-specific and third-party stimulated B cells (sBc) for proliferation and cytokine production. Results/Discussion: Insulin independence at 10 years post-txp was 40% (2/5 patients; Fig 1). Two patients, EFA-2 and EFA-4, remain insulin independent on minimal to no IS, while a third patient, EFA-1, maintained insulin independence for 3287 days. All five patients showed significant enrichment of peripheral Tregs; notably EFA-4, who has remained insulin independent off all IS for 5.4 years, had the highest peak levels of Treg predominance, reaching 67% of all circulating CD4+ T cells. Further investigation by proliferation and interferon gamma production after MLR revealed that EFA-2 and EFA-4 had an extended period of non-reactivity to donor and third-party antigen while on EFA. Although EFA-4 regained in vitro reactivity to both donor and third-party antigen after discontinuation of EFA, the patient remains operationally tolerant and insulin independent off all IS. Conclusions: In ten-year follow-up of islet txp recipients on EFA-based IS, one patient achieved operational tolerance off all IS, and another maintains insulin independence on minimal IS. Selective enrichment of Tregs and sustained early donor non-reactivity while on EFA correlate with these outcomes.

In contrast to other solid organ transplants, the characteristics of long-term surviving pancreas transplants have not been well described. Our objective was to measure graft half-lives across three transplant immunosuppression eras and to characterize the demographic factors of transplants with >10-year graft survival. A retrospective review of 2,022 pancreas transplants performed at a single institution in three standard immunosuppression eras, pre-CsA (1978-1986), CsA/Aza (1986-1993), and Tac/MMF (1997-2016), was undertaken. Anti-T cell induction was used in the CsA/Aza and Tac/MMF eras, but not in the first era. A subset of transplants with alemtuzumab induction was not included. Transplants were further divided into those failing within 10 years and those surviving more than 10 years. Standard bivariate and multivariable models were applied, and survival and graft half-lives for each category were determined from Kaplan-Meier estimates. RESULTS: Graft survival at all time intervals increased markedly from the pre-CsA era to the CsA era and then incrementally in the modern Tac era. As expected, SPK transplants had the longest graft survival, with a half-life of more than 22 years for death-censored and 1-year contingent grafts; PAK and PTA had shorter half-lives. In the later eras, many transplants survived more than 10 years.
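A minimal sketch of how era-stratified graft half-lives like those above can be read off Kaplan-Meier estimates, assuming a hypothetical per-graft table with follow-up time, a failure indicator, and immunosuppression era; the half-life is simply the median survival time of the fitted curve. This is illustrative, not the authors' code.

```python
# Hypothetical sketch: Kaplan-Meier curves for pancreas graft survival by era;
# the graft half-life is the time at which estimated survival crosses 50%.
import pandas as pd
from lifelines import KaplanMeierFitter

grafts = pd.read_csv("pancreas_grafts.csv")           # hypothetical input file
kmf = KaplanMeierFitter()
for era, grp in grafts.groupby("era"):                # e.g. pre-CsA, CsA/Aza, Tac/MMF
    kmf.fit(grp["years_followup"], event_observed=grp["graft_failed"], label=era)
    print(era, "estimated graft half-life (years):", kmf.median_survival_time_)
```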
Long-term survivors tended to have the following characteristics: SPK transplant, older recipient age, male recipient, primary transplant, bladder drainage, leaner recipient, locally recovered allograft, and lower pDRI. Pancreas graft survival continues to improve, with graft half-lives of more than 20 years in optimal circumstances. Such half-life determinations may help in weighing risks and benefits against alternative therapies, especially islet transplantation and closed-loop insulin pumps.

Allogeneic memory T cell responses preclude the induction of transplant acceptance; how this process is regulated at the molecular level remains unclear. Both mTOR and STAT3 are important "drivers" of T cell differentiation and function, but they have been shown to play opposing roles in memory T cell development: STAT3 promotes the functional maturation of memory T cells, whereas mTOR activity negatively regulates the magnitude and quality of memory T cell differentiation. Herein, to determine the roles of mTOR and STAT3 in regulating the allogeneic memory T cell response, we generated Mtor fl/fl CD4-Cre, Stat3 fl/fl CD4-Cre, and Mtor fl/fl Stat3 fl/fl CD4-Cre mice, in which mTOR, STAT3, or both mTOR and STAT3 were conditionally knocked out (cKO) in T cells. We then transplanted Balb/c ear skins onto wild type (WT) B6 and cKO mice (n = 5 or 6 per group).

Introduction: Sphingosine 1-phosphate (S1P) receptor 1 (S1PR1) drives T cell migration from the thymus into blood, and from lymph nodes (LN) into efferent lymphatics. Whether S1PR1 and S1PR4 expressed by T cells regulate migration from tissues into draining LN (dLN) is unknown. We hypothesized that T cells use different S1PRs to traffic from tissues into draining LN through afferent lymphatics. Methods: CD4 T cells were adoptively transferred into mice, and migration to afferent lymphatics and dLN was measured. Mouse and human primary lymphatic endothelial cells (LEC), blood endothelial cells (BEC), and an LEC line were used to assess migration, chemokine signals, adhesion molecules and S1PR function in vitro. Specific pharmacologic and genetic S1PR blockade was employed in vitro and in vivo. Results: LEC, but not BEC, selectively promoted human and murine CD4 T cell migration toward S1P, but not toward other chemokines. An S1P gradient was formed around the afferent lymphatics; anti-S1P treatment inhibited CD4 T cell migration in vitro and in vivo; and CD4 T cells failed to home to LN of Sphk1-/- and Sphk2-/- recipients, which lacked lymphatic S1P expression. CD4 T cell migration toward S1P, but not CCL19, was blocked by treatment of T cells with S1PR1 or S1PR4 antagonists. Similarly, S1PR1-/- or S1PR4-/- CD4 T cells failed to migrate toward S1P (but not CCL19) in vitro, and into afferent lymphatics and LN in vivo. Pharmacologic and genetic receptor blockade inhibited migration both in vitro and in vivo. In contrast, overexpression of S1PR1 in S1PR1-Tg or S5A transgenic T cells promoted CD4 T cell migration toward S1P in vitro and enhanced homing into dLN through afferent lymphatics in vivo. Compared with CCL19-driven migration, S1P-driven transendothelial migration was more dependent on transcellular rather than paracellular trafficking. S1PR1 and S1PR4 regulated CD4 T cell migration to, but did not change CD4 T cell localization within, LNs. Conclusions: S1P engages T cells and LEC to promote migration. CD4 T cells use S1PR1 and S1PR4 to regulate migration into afferent lymphatics and LNs.
These results demonstrate unique roles for S1P and S1PRs in regulating T cell migration from tissues into LN, and show that distinct receptors and mechanisms are used compared with thymic or efferent lymphatic migration. These findings suggest new and specific drug targets for regulating lymphatic migration in immunity and tolerance. CITATION INFORMATION: Xiong Y., Piao W., Brinkman C., Li L., Kulinski J., Olivera A., Cartier A., Hla T., Schwab S., Hippen K., Blazar B., Bromberg J. CD4 T Cells Require Both S1PR1 and S1PR4 for Afferent Lymphatic Migration Am J Transplant. 2018;18 (suppl 4).

Although the roles of Gsk3β in ischemia-reperfusion injury (IRI) have been well documented, the underlying mechanisms remain ambiguous. Moreover, whether Gsk3β also regulates inflammation resolution and restoration of tissue homeostasis after IRI remains unknown. Methods: A myeloid-specific Gsk3β KO strain was used to study its function in macrophages at both early and late stages of tissue inflammation in a murine liver partial warm ischemia model. Results: Compared with controls, myeloid Gsk3β KO mice were not only protected from liver IRI but also recovered more quickly. Peak hepatocellular injury after 90 min of ischemia was significantly lower (at 6 h post-reperfusion), and liver homeostasis was restored much earlier after reperfusion (7 d in WT vs. 3 d in KO at both the histological and cellular levels). In bone marrow-derived macrophages (BMMs), Gsk3β deficiency resulted in an early reduction of TNF-α but a sustained increase of IL-10 gene expression upon TLR4 stimulation. Additionally, expression of the efferocytosis receptors MerTK and TIM-4, two key molecules for macrophage function in inflammation resolution, was selectively upregulated in activated KO, but not WT, cells. Intracellular signaling pathway analysis revealed that TLR4 stimulation inhibited AMPK activity early in WT BMMs; Gsk3β deficiency reversed this inhibition, leading to earlier and higher levels of AMPK activation in stimulated KO cells. This resulted in induction of the novel innate immune negative regulator small heterodimer partner (SHP). The Gsk3β regulation of AMPK/SHP was confirmed in WT BMMs using a Gsk3 chemical inhibitor and was found to be independent of the PI3K-AKT signaling pathway.

Introduction: Sphingosine 1-phosphate (S1P) receptors 1 (S1PR1) and 2 (S1PR2) are highly expressed by lymphatic endothelial cells (LEC), while T cells express S1PR1 and S1PR4. S1P is produced by LEC and regulates T cell migration, but the mechanisms by which S1P regulates CD4 T cell migration across LEC are not fully understood. We hypothesized that S1PRs are utilized by LEC to regulate CD4 T cell trafficking from tissues through afferent lymphatics and into draining lymph nodes (LN). Methods: CD4 T cells were adoptively transferred into mice to measure T cell migration into afferent lymphatics and draining LN (dLN). Specific pharmacologic and genetic S1PR blockade was employed in vitro and in vivo. Mouse and human primary LEC, blood endothelial cells (BEC), and an LEC line were used to assess migration, chemokine signals, adhesion molecules and S1PR function in vitro. Results: S1P selectively promoted human and murine CD4 T cell migration across LEC, but not BEC. Selective inhibition of S1PR1 or S1PR2 indicated that migration was mediated by S1PR2, but not S1PR1, in LEC. S1PR2 antagonists inhibited T cell transendothelial migration in vitro and decreased T cell migration into lymphatic vessels and dLN in vivo. CD4 T cells also migrated less into the lymphatic vessels and dLN of S1PR2-/- recipients.
Treatment with neutralizing antibodies against adhesion molecules demonstrated that S1P-driven migration was dependent on VLA4-VCAM-1 interactions. S1PR2 antagonists decreased VCAM-1 expression by LEC, which resulted in decreased interaction of T cells with VCAM-1+ domains of the LEC surface membrane. The S1PR2 antagonist inhibited S1P-induced phosphorylation of ERK to control VCAM-1 expression, and it also increased cell-cell junction integrity of LEC by enhancing VE-cadherin expression, further impairing T cell movement across endothelial layers. Conclusions: While both S1PR1 and S1PR2 are expressed by LEC, only S1PR2 is used to regulate T cell migration across afferent lymphatics into draining LN. S1PR2 downstream ERK signaling regulates VCAM-1 expression, which is required for CD4 T cell transendothelial migration; S1PR2 also regulates VE-cadherin expression and LEC junctional integrity. These findings implicate S1PR2 as a novel and specific drug target for regulating the lymphatic migration of CD4 T cells in immunity and tolerance.

Background: Our prior studies demonstrated that tolerance and immunity are associated with differential expression of the lymph node (LN) stromal laminins α4β1γ1 (laminin 411) and α5β1γ1 (laminin 511), respectively, which determine how alloreactive T cells enter high endothelial venules and traffic within LN. Here we tested whether laminins directly regulate CD4+ T cell receptor stimulation and migration, thereby determining immune responses and allograft survival. Methods: CD4+ T cells isolated from C57BL/6 mice were activated with anti-CD3 +/- anti-CD28, cultured with laminins for 3 days, and evaluated for activation and proliferation. Migration of CD4 T cells on laminin-coated surfaces was measured with real-time live imaging. C57BL/6 mice and laminin α5 floxed x Pdgfrb-Cre mice, which lack laminin α5 expression in LN, received cardiac transplants from BALB/c donors. Recipients were treated with mAbs against the laminin 511 receptor α6 integrin. The proliferation and activation of anti-CD3-stimulated T cells were enhanced by laminin 511 but inhibited by laminin 411. These effects were observed over a large range of dose and kinetic variables, demonstrating robust laminin effects. Laminin signals and T cell receptor and costimulatory signals had to be presented together on the same surface for regulatory interactions to occur. Laminin 421 and laminin 521 displayed analogous effects, showing specificity for the α-chains. Laminin 411 enhanced, while laminin 511 inhibited, T cell migration. Blocking the laminin 511 receptor α6 integrin with a specific mAb abrogated the stimulatory effect of laminin 511, as well as its influence on migration, indicating that α6 integrin is the main receptor for laminin α5. Adding anti-α6 integrin to anti-CD40L mAb prolonged allograft survival (mean survival time from 53 to 167 days; P=0.0018). Laminin α5-deficient mice treated with anti-CD40L mAb also had a marked increase in allograft survival compared with wild-type controls (109 vs 87 days). Conclusions: Serving as co-inhibitory and co-stimulatory ligands, respectively, the laminin α4 and α5 chains regulate T cell receptor stimulation and migration. T cells recognize laminin α5 through α6 integrin, which acts as a co-stimulatory receptor; mAbs targeting this receptor inhibited T cell responses and prolonged allograft survival.
Laminins are thus not only structural scaffold molecules, but act as molecular switches for tolerance and immunity by directly regulating CD4+ T cell responses.

Kidney transplant rejection leads to short- and long-term adverse clinical outcomes, and serum creatinine is a suboptimal, late biomarker of rejection. Exosomes are nanometer-sized vesicles released by cells to mediate cell-to-cell communication by delivering genetic material, including mRNAs and microRNAs. In the transplanted kidney, exosomes originate from different cells, including immune cells during rejection. These cells release exosomes into the urine carrying the parent cells' surface proteins and nucleic acids, and the nucleic acid profile may carry disease signatures, making exosomes potential biomarkers of disease activity and therapeutic targets. We show that urinary exosomes, and their contents including RNA, are very stable and can be stored in urine for 2 weeks at 4°C, reinforcing exosomes as a potential source for identifying disease signatures. We isolated urinary exosomes, extracted mRNA, and performed differential gene expression analysis in 28 urine samples collected from 23 patients who underwent a kidney transplant biopsy for evaluation of acute rejection.

In renal transplantation, non-invasive biomarkers are needed to identify patients at risk of poor outcomes and to guide pre-emptive therapy. Here, we assessed B cell cytokines as a predictive biomarker. 165 patients with serial biopsies (3- and 12-month protocol plus for-cause) and blood draws served as the training set; IS was Thymo→MPA+TAC. Based on prior data, we tested the ratio of IL10:TNFα expression (CpG+CD40L stimulation, by FACS) in peripheral blood T1 transitional B cells (T1B) at 3 months post-transplant as a biomarker. A low T1B IL10:TNFα ratio at 3 months predicted acute rejection (AR) within the first year (Fig 1A). Notably, in patients with a normal biopsy at 3 months, a low T1B cytokine ratio strongly predicted late AR (6-12 months; AUC 0.9, p<0.0001). These data were validated in an independent internal cohort (n=74). Further, a low T1B cytokine ratio was associated with IFTA (12 months) and graft loss/impending graft loss (eGFR <30 ml/min and >30% fall from baseline) at 4 years (Fig 1B). The predictive value of this biomarker was not influenced by opportunistic infection or by nonadherence. Next, the T1B cytokine ratio was validated in an external UK cohort (n=100); IS was Simulect→TAC+MMF. All AR in this group was clinical (no protocol biopsies). Again, the T1B cytokine ratio at 3 months strongly predicted AR within the first year (Fig 1C) and was associated with poor graft outcomes (Fig 1D).

Chronic antibody-mediated rejection is thought to be the main cause of late kidney-allograft loss, involving donor-specific antibody-mediated activation of the complement system. Activation of filtered or locally produced complement may contribute to the progression of renal failure via tubular formation of the terminal C5b-9 complement complex. The aim of this study was to determine urinary C5b-9 excretion and to assess its association with long-term outcome in renal transplant recipients (RTRs). We measured urinary C5b-9 in a well-defined cross-sectional cohort of RTRs. Urinary specimens were taken from the morning urine portion, and terminal complement component C5b-9 was measured using an enzyme-linked immunosorbent assay (ELISA). Cox regression analyses were used to investigate prospective associations with death-censored graft failure. We included 639 RTRs (age 53±13 years; 58% male).
At a cutoff value of 1.26, the T1B cytokine ratio predicted DSA+TCMR with a PPV of 81% and an NPV of 94%. Reanalysis of the data after removing the 5/43 patients whose DSA was detected after TCMR showed that the T1B cytokine ratio strongly predicted concomitant or ensuing TCMR in patients with DSA (ROC AUC 0.94, p<0.0001). Thus, patients with DSA+TCMR represent a high-risk population for adverse graft outcomes, and this outcome can be predicted in DSA+ patients using the T1B cytokine ratio as a biomarker.

Liver ischemia-reperfusion injury (IRI) represents a major risk factor for early graft dysfunction and a key obstacle to expanding the donor pool in orthotopic liver transplantation (OLT). Although graft autophagy is essential for resistance against hepatic IRI, its significance in clinical OLT remains unknown. Despite recent data identifying heme oxygenase-1 (HO-1) as a putative autophagy inducer, its role in the OLT setting and its putative interactions with Sirtuin-1 (SIRT1), a key autophagy regulator, have not been studied. We aimed to examine HO-1-mediated autophagy induction in human OLT and in a murine OLT model with extended cold storage, and to analyze the requirement for SIRT1 in autophagy regulation by HO-1. Fifty-one human hepatic biopsies from OLT patients were collected under an IRB protocol at 2 h post-reperfusion, followed by Western blot analyses.

Clinical trials of adoptively transferred regulatory T cells (Tregs) have been hindered by the short lifespan of the transferred Tregs. Systemic injection of low-dose interleukin (IL)-2 preferentially stimulates the high-affinity trimeric IL-2 receptor, abundantly expressed on Tregs, leading to greater cellular expansion; however, cytotoxic CD8 T cells and NK cells are also significantly expanded by this strategy. Here, we describe novel engineered Tregs conjugated with IL-2 nanogel (NG) particles before adoptive transfer. The NG's gradual release of IL-2 in vivo selectively promotes Treg viability; the NG is engineered by reversibly crosslinking carrier-free protein using a bis-N-hydroxysuccinimide (NHS) crosslinker. Incorporation of monoclonal antibodies against the highly expressed cell surface phosphatase CD45 acts as an anchor to retain the NG on Tregs. Upon Treg activation, increased reduction activities lead to progressive degradation of the crosslinker and sustained IL-2 release. We show that the coupling efficiency of NG to magnetically isolated murine Tregs is above 80% (n=5 experiments; ****P<0.0001), and that NG-conjugated Tregs maintain their FoxP3 expression after coupling compared with control (CT) Tregs (99% vs. 92%, CT vs. NG; n=5 experiments; P=n.s.). To test Treg homeostasis in vivo, immunodeficient RAG1-/- mice received BALB/c skin transplants and were injected with 1:1 Tregs:CD8 T cells from mismatched C57BL/6 donors to induce alloimmunity. Seven days post adoptive transfer, a two-fold increase in Tregs was observed in the spleens of mice that received NG Tregs compared with CT Tregs (n=11/group; **P<0.01). Furthermore, in contrast to systemic IL-2 injection, we saw a more than three-fold increase in the Tregs:CD8 and Tregs:NK ratios in the NG-treated mice (n=11/group; *P<0.05). In a GvHD model, adoptive transfer of NG Tregs reduced disease progression compared with CT Tregs.
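As a worked illustration of the T1B cutoff analysis at the top of this passage, the positive and negative predictive values follow directly from dichotomizing the ratio at the cutoff and tabulating the observed DSA+TCMR outcomes. The values below are invented, not study data.

```python
# Hypothetical sketch: PPV/NPV of a low T1B IL10:TNFa ratio (< 1.26) as a
# predictor of DSA+TCMR, using invented example values.
import numpy as np

ratio = np.array([0.4, 0.9, 1.1, 0.7, 1.8, 2.3, 1.5, 3.0])   # invented ratios
tcmr  = np.array([1,   1,   1,   0,   0,   0,   0,   0])     # 1 = DSA+TCMR observed

flagged = ratio < 1.26                     # a low ratio flags high risk
ppv = tcmr[flagged].mean()                 # flagged patients who developed TCMR
npv = 1 - tcmr[~flagged].mean()            # unflagged patients who did not
print(f"PPV = {ppv:.0%}, NPV = {npv:.0%}")
```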
In summary, we show that the IL-2 NG T cell conjugation strategy can efficiently and specifically improve the in vivo expansion of transferred Tregs while sparing CD8 T cells and NK cells, thus improving the therapeutic efficacy of Tregs and IL-2.

MSCs reduced M1 macrophage iNOS yet strongly augmented arginase-1, an M2 macrophage phenotype, with increased expression of IL-10/TGF-β. Transfection of p-Cox2 KO in MSCs decreased PGE2 secretion, β-catenin and XIAP expression, and IL-10/TGF-β production. Induction of β-catenin enhanced XIAP and Akt phosphorylation, whereas knockout of β-catenin resulted in reduced XIAP and increased TNFα/IL-6. Moreover, XIAP knockout augmented JNK yet diminished Akt phosphorylation and arginase-1 expression. Unlike controls, adoptive transfer of MSCs in mice ameliorated IR-induced liver damage, as evidenced by reduced sALT levels and liver Suzuki scores, accompanied by increased Cox2, PGE2, β-catenin, XIAP, p-Akt, and arginase-1 expression, with reduced proinflammatory mediators in ischemic livers. Conclusion: This study demonstrates that MSCs mediate immune regulation through activation of PGE2/β-catenin/XIAP signaling, which in turn reprograms host macrophage differentiation towards an anti-inflammatory M2 phenotype in IR-triggered liver inflammation. This may imply a novel therapeutic potential for the management of liver IRI in transplant recipients.

[Background] Reconstructive transplantation represents a valid therapeutic option after devastating tissue loss. Routine clinical application, however, is hampered by the toxicity of long-term maintenance immunosuppression. The current study investigated a novel approach using ex vivo expanded regulatory T cells combined with a short-term immunomodulatory strategy in a murine hind limb transplantation model. [Methods] Orthotopic hind limb transplants were performed from Balb/C to C57BL/6 mice. Recipients in the experimental groups received a combination regimen consisting of CTLA4-Ig on days 0, 2, 4 and 6 post-transplant, anti-Thy1.2 mAb on POD-1, 1 mg/kg rapamycin (POD 0-9), and CD4+CD25+ Treg cells expanded for 1 week. Allograft survival, suppression assays, and mixed chimerism were evaluated. [Results] The combination of T cell depletion and CTLA4-Ig plus a short course of rapamycin increased VCA survival significantly (MST 105 days; p<0.01), while untreated controls rejected their allografts. Mixed chimerism was detected in recipients receiving this combined treatment protocol, with 5.013 ± 1.23% of CD11b+ cells being donor-derived on POD 55. Vβ TCR staining profiles in recipients after full treatment showed 1.570 ± 0.370% Vβ5+CD4+ T cells, while naïve C57BL/6 mice express 3.567 ± 0.369% Vβ5+CD4+ T cells, suggesting central deletion of developing donor-reactive T cells. To further prolong allograft survival, Tregs expanded for one week were then included in the combination therapy. The suppressive activity of the CD4+CD25+ Tregs was confirmed with in vitro suppression assays. The addition of ex vivo expanded regulatory T cells further increased VCA survival to >200 days and induced long-term stable mixed chimerism, with 16.7±1.5% of CD11b+ cells being donor-derived on POD 55 after administration of the expanded Treg cells. [Conclusion] The combination of T cell depletion, costimulation blockade, and a short course of rapamycin prevents VCA rejection and significantly prolongs graft survival without the need for myeloablative conditioning or maintenance therapy.
Moreover, regulatory T cells added in the early post-transplant period further optimize immune regulation by inducing sustained mixed chimerism.

Fumarylacetoacetate hydrolase (FAH)-deficient pigs have been proposed as surrogate hosts for large-scale in vivo expansion of normal FAH+ hepatocytes. However, immunologic rejection prevents expansion of non-autologous donor cells. To overcome this barrier, we have developed a novel method of morula complementation to produce chimeric FAH(-/-) pigs with FAH(+/+) liver grafts. FAH(-/-) recipient morulae were derived via somatic cell nuclear transfer with fibroblasts harvested from FAH knockout pigs. The blastomeres of the recipient morulae were complemented with blastomeres from normal FAH(+/+) donor embryos also transfected with an enhanced green fluorescent protein (EGFP) label. The chimeric embryos were transferred to a surrogate sow, who received no NTBC during pregnancy. A single gestation was confirmed, and the chimeric pig was born without deformity; his birth weight was 0.75 kg. Liver function tests were normal, as were tyrosine and succinylacetone levels. He received no NTBC. Peripheral blood leukocytes comprised 33.8% donor-derived (GFP+) cells as measured by flow cytometry. Liver biopsies at 1 and 2 months after birth showed diffuse staining of donor (FAH+, GFP+) cells in the liver parenchyma (Fig. 1). The pig was euthanized electively, in good health and without NTBC administration, for data collection at day 60 after birth. These proof-of-concept studies show that FAH deficiency can be successfully corrected by stem cell transplantation in early gestation through the production of chimeric embryos. Therefore, production and robust expansion of non-autologous normal hepatocytes can be achieved in FAH-deficient pigs. Our novel method avoids the need for protective drugs (i.e., NTBC), immunosuppressive drugs, or immunodeficient modifications of host pigs.

Background: Chronic rejection is the main cause of renal transplant loss after the first year and has mostly been attributed to alloimmune responses against human leukocyte antigens (HLA). Epidemiological data, however, suggest a contribution of non-HLA-related alloimmune responses. Candidate-antigen approaches relying on predefined minor histocompatibility antigens did not provide convincing evidence for the importance of non-HLA-related alloimmune responses, and the contribution of non-HLA donor-recipient mismatches in protein-coding genes has not yet been assessed on a genome-wide basis. Methods: We analyzed 479 kidney transplant recipients and the respective deceased donors from two transplant centers in Vienna and Prague, with a median follow-up of 6.5 years. Genotyping was performed using the Affymetrix Axiom Tx v1 GWAS Array (Affymetrix, Santa Clara, CA, USA) as part of the iGeneTrain consortium. Raw genotypes were phased with SHAPEIT and imputed with IMPUTE2 using GoNL and 1KG data as reference panels. The Ensembl Variant Effect Predictor was used for SNP annotation. We used a Cox proportional hazards model to analyze the association of non-synonymous single-nucleotide mismatches with graft survival. The analysis showed that the load of nsSNP mismatches (MM) in membrane and secreted proteins is associated with a statistically significant elevation in the risk of death-censored graft loss. This elevated risk (hazard ratio increase of 10% per 10 nsSNP MM) remains after adjustment for HLA incompatibility (HLA-A, -B, -DR).
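A minimal sketch of the kind of Cox proportional hazards model just described for the nsSNP mismatch analysis; the file and column names are assumptions, and the per-10-mismatch scaling mirrors how the hazard ratio is reported. It is illustrative, not the authors' pipeline.

```python
# Hypothetical sketch: death-censored graft survival modeled on the load of
# non-synonymous SNP mismatches (per 10 mismatches), adjusted for HLA mismatch.
import pandas as pd
from lifelines import CoxPHFitter

pairs = pd.read_csv("donor_recipient_pairs.csv")       # hypothetical input file
pairs["nsSNP_per10"] = pairs["nsSNP_mismatches"] / 10

cph = CoxPHFitter()
cph.fit(pairs[["years_followup", "graft_loss", "nsSNP_per10", "hla_ABDR_mismatches"]],
        duration_col="years_followup", event_col="graft_loss")
print(cph.summary[["exp(coef)", "p"]])                  # exp(coef) ~ HR per 10 mismatches
```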
Stratification of patients into quartiles according to their number of nsSNP mismatches showed that kidney transplant recipients with a high number of mismatches in particular have a lower graft survival probability (Figure 1). Genome-wide genetic incompatibility in protein-coding genes (non-HLA) thus contributes to graft loss after kidney transplantation independently of HLA mismatches.

Background. In kidney transplantation, de novo donor-specific antibodies (dnDSA) are associated with antibody-mediated rejection (ABMR) and graft failure, but after the appearance of dnDSA there is great variability in outcomes. The sialylation status of antibodies has previously been shown to modulate their effector function. The objective of our study was to evaluate the impact of DSA sialylation on allograft survival in non-sensitized kidney transplant recipients. Methods. The cohort included 296 non-sensitized patients who underwent a first kidney transplant between 2008 and 2014 in our center. For the patients developing dnDSA (44 individuals), we analyzed immunodominant DSA (iDSA) sialylation in the first DSA-detection sera and one year later, using affinity chromatography with lectin. The primary endpoint was allograft failure according to iDSA sialylation status; secondary endpoints were ABMR incidence and Banff classification scores. Results. Eleven patients (25%) had sialylated iDSA and 33 patients (75%) had non-sialylated iDSA. Patients with non-sialylated iDSA had significantly higher scores for microvascular inflammation (p = 0.02) than patients with sialylated iDSA, and there was a trend toward higher IF/TA scores in the non-sialylated group (p = 0.13). Five-year allograft survival was significantly lower in patients with non-sialylated iDSA (80.0%) compared with patients with sialylated iDSA, with non-DSA anti-HLA antibodies, and without any anti-HLA immunization (90.0%, 92.2% and 95.2%, respectively; p = 0.007). In multivariate analysis, non-sialylated iDSA was independently associated with allograft failure compared with no anti-HLA antibodies (HR 4.0; 95% CI 1.24-12.90; p = 0.02). Conclusion. DSA sialylation status appears to be associated with allograft failure in non-sensitized kidney transplant recipients, and DSA sialylation analysis may be helpful for ABMR risk assessment.

Since implementation of the current Kidney Allocation System, highly sensitized patients with 98-100% cPRA (HSPs) have been receiving ~10% of deceased donor kidneys, compared with <2% prior to its implementation. However, HSPs who possess allele-specific antibodies (aAbs), i.e. antibodies directed against polymorphisms/epitopes/eplets present in some members of an antigenic group but not others, remain a disadvantaged group of candidates. Listing as unacceptable the parent antigen when a patient only has antibodies to specific alleles would unnecessarily exclude compatible donors, further reducing an already limited pool of donors. In this study, we present our experience with expanding the donor pool for HSPs with aAbs. 38 HSPs with 98-100% cPRA were evaluated. When patients displayed aAbs, parent antigens were not listed as unacceptable in UNet. Instead, when otherwise compatible donors who possessed parent antigens were offered to candidates with aAbs, the parent antigen was resolved by high-resolution HLA typing and a virtual crossmatch (vXM) was then performed. Donor organs whose allelic specificities were not targets of the aAbs were transplanted.
27/38 patients (71%) who possessed aAbs were transplanted with kidneys from donors in which no potentially mismatched targets were involved. 11/38 (29%) recipients with aAbs were transplanted with parent antigen-positive/allele specificity-negative donor kidneys. All vXMs and their corresponding physical crossmatches were negative. The median MFI value for aAbs was 4528 (range 2314-15603). Transplant outcomes among these patients were compared with a control group of recipients who had no aAbs, and outcomes were identical between the two groups. Among highly sensitized patients with aAbs, the parent antigens to which patients have aAbs do not need to be considered unacceptable. Rather, combining high-resolution typing with vXM successfully predicts donor/recipient compatibility. This approach maximizes donor offers to a particularly disadvantaged group of highly sensitized patients without a negative impact on transplant outcomes.

With the aging donor population, accurate assessment of the tissue quality of deceased donor kidneys is required to determine intermediate- and long-term graft performance, and this is a major unmet need. We have previously shown that the expression of acute kidney injury transcripts (IRRAT) in implant biopsies is predictive of immediate/delayed graft function in the first week post-transplant. In the present study we asked whether post-transplant intermediate (3-month) and late (12-month) graft function, i.e. high serum creatinine, could be predicted by the p16/INK4a (CDKN2A) transcript, the classic somatic cell senescence marker whose expression represents the biological age/quality of an organ, compared with IRRAT expression. We analyzed 64 implantation (1-hour) biopsies from brain-dead donors. p16/INK4a expression was measured by qPCR (p16/INK4a is not assessable by microarrays) and compared with IRRAT expression measured by microarrays. Clinical risk evaluation of kidney function included the KDRI, Irish, and Schold scores. Intermediate/late graft function (3 and 12 months) was defined as poor when serum creatinine was >132 µmol/L. We compared molecular, demographic, and clinical scores in implant biopsies for their association with serum creatinine at 3 and 12 months post-transplant. Only expression of p16/INK4a was significantly associated (by p-value) with elevated serum creatinine at 3 and 12 months (AUC 0.71 and 0.70, respectively; Figure 1A). IRRAT expression, donor and recipient age, and clinical risk scores were not significantly associated with intermediate/late graft function; IRRAT expression was associated only with early (seven-day) elevated creatinine (Figure 1B).

LCPT (Envarsus XR®) is a once-daily, extended-release formulation of tacrolimus that has demonstrated greater bioavailability and a smoother PK profile compared with twice-daily tacrolimus (TacBID). This post-hoc analysis of previous conversion studies summarizes the PK profiles of LCPT and TacBID when targeted at low, medium, and high trough levels in kidney transplant recipients. Methods: Data from 3 PK studies of conversion from TacBID to LCPT (Gaber 2013, Tremblay 2016, Trofe-Clark 2017) were pooled, and patients were grouped according to trough (Cmin) concentrations of ≤5 ng/mL, >5 and ≤8 ng/mL, or >8 ng/mL. Daily dose conversion rates from TacBID to LCPT ranged from 1:0.7 to 1:0.85. PK parameters of the two formulations were compared within each trough range. Results: Fifty-three, 115, and 68 patients had trough levels in the low, medium, and high ranges, respectively.
Patients treated with LCPT consistently showed lower peaks, less fluctuation, and comparable trough levels in each concentration range (Table 1). The correlation between Cmin and AUC was significant for LCPT in all ranges, but was highest (0.83, p<0.001) in the high-trough patients. Conclusion: Pooled analysis of the 3 PK studies shows that conversion from TacBID to LCPT results in consistently reduced peak concentrations, less fluctuation, and similar trough levels at reduced daily doses, regardless of trough level range. These data demonstrate that LCPT maintains its smoother PK profile with once-daily dosing across a variety of clinically applicable trough targets in kidney transplant recipients.

Tolerance induction is a long-standing clinical goal; however, identifying tolerance remains elusive. Here we profile intra-renal allograft RNA expression in a mixed chimerism renal allograft model in cynomolgus monkeys and identify biologically significant tolerance. Methods: Samples from 76 animals (one to 11 biopsies each, zero to 5983 days post-transplantation; n = 278) included protocol and indication biopsies plus autopsy nephrectomies. 21.4% of animals lacked terminal rejection (tolerant), and 78.6% showed terminal rejection. The gene set included 67 oligonucleotides derived from human studies. JMP13 analyses included the Fit Model platform, K-means clustering, and factor analysis; P values were adjusted for false discovery. Factor analysis was chosen to reduce the many individual gene expression measurements into smaller sets of correlated variables. Results: Factor analysis identified three dominant factors (eigenvalue >6), each with a different pattern of gene expression, relating to TCMR, CAMR, or tolerance. The Tolerance factor comprises a novel set of gene expressions (Treg pathway by Gene Ontology). Clustering these three factors created nine groups. Calculated both per sample and per animal, one of the nine clustered groups, the Tolerance cluster, showed the lowest probability of terminal rejection (P < 1E-4), the longest duration of allograft survival (P < 1E-3), and the lowest relative risk of terminal rejection (P < 1E-3). The Tolerance factor could not be identified within current pathological diagnostic categories. The TCMR and CAMR factors are dominant over the Tolerance factor, causing rejection even when the Tolerance factor is present; together these three factors determine the probability of terminal rejection or tolerance. The figure shows the probability of survival for the clustered groups (A, per sample; B, per animal). This novel a posteriori approach permits identification of pathways of rejection, including tolerance.

Alemtuzumab induction combined with belatacept and rapamycin (ABR) maintenance immunotherapy effectively prevents costimulation blockade-resistant rejection (CoBRR). We longitudinally investigated the kinetics of repopulating B cell subsets and donor-specific alloreactive antibody (DSA) in patients undergoing this novel calcineurin inhibitor-free ABR regimen over 36 months post-transplantation. 40 patients received a kidney allograft from either living (n=30) or deceased (n=10) donors. All patients were DSA-free at transplant. Absolute peripheral blood B cell counts were analyzed by TruCount analysis, and the CD27/IgD and Bm1-Bm5 (IgD/CD38) classifications were used to define B cell differentiation. Serum samples were screened for DSA by a microparticle-based FACS analysis, and the specificities of DSA to HLA class I and II were determined by a Luminex-based assay.
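For the pooled tacrolimus PK analysis above, a minimal sketch of the bookkeeping involved, assuming a hypothetical per-patient table with formulation, trough (Cmin) and 24-hour AUC: patients are binned into the three trough ranges and the Cmin-AUC correlation is computed per group. This is illustrative only.

```python
# Hypothetical sketch: bin patients by trough range and compute the Pearson
# correlation between Cmin and AUC within each formulation/range group.
import pandas as pd

pk = pd.read_csv("pooled_pk.csv")                      # hypothetical input file
pk["trough_range"] = pd.cut(pk["cmin"], bins=[0, 5, 8, float("inf")],
                            labels=["low (<=5)", "medium (>5-8)", "high (>8)"])

for (rng, drug), grp in pk.groupby(["trough_range", "formulation"], observed=True):
    r = grp["cmin"].corr(grp["auc24"])                 # Pearson r between Cmin and AUC
    print(f"{rng} / {drug}: n={len(grp)}, r={r:.2f}")
```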
Profound B cell depletion following alemtuzumab induction was followed by rapid repopulation. The reconstituting B cells were predominantly naïve B cells (CD27-IgD+, Bm1/Bm2 subsets) after 6 months post-transplantation. In contrast, the frequencies of memory B cell subsets, including switched (CD27+IgD-), unswitched (CD27+IgD+), and exhausted (CD27-IgD-) memory cells, and early (Bm5-early) and late (Bm5-late) memory B cells, were significantly lower than baseline levels between 6 and 36 months post-depletional induction (p=0.0001). Additionally, regulatory B cells, defined as the CD38hi CD24hi IgMhi CD20hi subset, were significantly higher than baseline at 6 months post-transplantation (p=0.0001). DSA was not detectable in 35 patients within 36 months post-transplantation. No patient developed ABMR, though 5 patients developed DSA: 2 class II (DQ), 2 class I, and one, associated with multiple failed pregnancies, with both class I and class II. The repopulating B cells in patients undergoing this novel ABR immunosuppression demonstrate increased naïve and regulatory B cell populations, and the reconstitution of memory B cells is significantly inhibited, without detectable alloreactive antibody in most patients after transplantation. These findings suggest that a lymphocyte depletion and belatacept-based maintenance regimen achieves its anti-rejection effect by promoting naïve and regulatory B cells while suppressing memory B cells after transplantation. This regimen warrants formal, prospective, comparative study.

Abstract# 528 Kidney transplant patients treated with belatacept-based regimens without depletional induction have higher rates of costimulation blockade-resistant rejection (CoBRR). In contrast, belatacept effectively prevents rejection when used following T cell depletion. We longitudinally monitored T cell repertoires in 40 patients who received kidney allografts from living (n=30) or deceased (n=10) donors under alemtuzumab depletion followed by a belatacept and rapamycin maintenance regimen (ABR). Ten patients treated with non-depletional induction and a calcineurin inhibitor-based regimen served as comparators. Using polychromatic flow cytometry, the phenotype of reconstituting T cells was assessed every 6 months for 36 months, focusing specifically on markers of T cell proliferation, maturation, and costimulation. Alemtuzumab induction produced profound lymphopenia, evoking substantial homeostatic T cell proliferation characterized by increased intracellular Ki67 expression (p<0.05). Depleted patients were significantly enriched for naïve (CCR7+CD45RA+, CD57-PD1-, CD28+) CD4+ and CD8+ T cells compared with baseline (p<0.05). Alemtuzumab induction significantly reduced T cell subsets associated with belatacept resistance, including CD8+CD2hiCD28-, CD4+CD28+CCR7-CD45RA-, and CD4+CD57+PD1- cells, compared with baseline (p<0.05). The CD8+CD2hiCD28+ and CD4+CD2hiCD28+ subsets were also significantly decreased (p<0.05), and a reduced effector memory compartment (CCR7-CD45RA-) within the CD8+CD2hiCD28+ subset was observed post-depletion compared with baseline (p<0.05). Control patients had no significant changes in their repertoire over the follow-up period. In summary, the ABR regimen effectively prevents the repopulation of belatacept-resistant T cell subsets, and the de novo priming of reconstituted T cells results in proliferation of predominantly CD28+ naïve cells that are susceptible to belatacept.
Our results demonstrate that alemtuzumab depletion followed by belatacept and rapamycin maintenance alters the immune profile of the recipient, producing a peripheral repertoire that is naïve and CD28+, and thus conceptually more conducive to control with belatacept-based therapy. This regimen warrants formal, prospective, comparative study.

The Organ Procurement and Transplantation Network (OPTN) requires programs to submit complete, accurate and timely clinical information for 80%, and laboratory data for 70%, of living kidney donors (LKDs) at 6-, 12-, and 24-month follow-up. A program is noncompliant with policy for a time point if any required clinical/laboratory element is missing or not collected within ±60 days of the time point. We retrospectively analyzed data collected in the NIH-funded Kidney Donor Outcomes Cohort (KDOC) study to ascertain whether the policy requirements were met. Complete and timely clinical information was collected for 61% (6 months), 52% (12 months), and 48% (24 months) of LKDs; complete and timely laboratory data were collected for 44% (6 months), 45% (12 months), and 35% (24 months) of LKDs. Higher rates of data collection were observed when the ±60-day timing requirement was removed. None of the six KDOC programs met policy requirements at any follow-up period for clinical information, while two programs met requirements at both the 6- and 12-month follow-up periods for laboratory data. Despite substantial resources and effort, we were unable to meet policy requirements, and this experience is generally consistent with what others have reported and with what has been reported by the OPTN.

Purpose: There is limited research describing living donor (LD) program processes and protocols. This study aimed to describe LD program protocols, processes, and resources within U.S. transplant centers. Method: A survey was sent to the living donor coordinator (LDC) at every U.S. LD program (n=211; response rate 70%; total respondents 148). Descriptive statistics summarized survey responses. Results: 22% of centers required LDs to have personal health insurance and 28% required LDs to have a PCP (Table 1). The median time to respond to an initial LD inquiry was 1 day (IQR 1-2) (Table 1). The median number of days an LD was required to be at the center for evaluation was 2 (IQR 1.5-3), with 24% of programs completing the in-center evaluation in 1 day and 40% requiring >3 days at the center. The median number of days to complete the entire LD evaluation was 45 (IQR 30-83), with 42% of programs completing the LD evaluation in <30 days and 27% requiring >60 days. 63% of centers had KPD software or a registry. 83% of centers paid for post-donation labs, 75% paid for post-donation office visits, and 64% covered the cost of post-donation complications if insurance did not (Table 2). 94% offered the potential donor a medical opt-out. 81% of centers accepted donors with controlled HTN. All centers reported a BMI restriction: 18% had a BMI restriction of <30 and 61% had a BMI restriction of >35. 43% reported upper age limits, with 46% of those reporting an upper age limit of >70 years. Conclusions: Significant variability exists among centers with regard to program protocols and processes. This study is an important first step toward identifying best practices among programs and the program characteristics associated with higher numbers of living donor transplants.
Introduction: While post-donation follow-up in the United States has improved since implementation of minimum center compliance standards, it is still suboptimal and brief (only 2 years total are required). Despite being at greater risk of developing post-donation kidney failure, living kidney donors with obesity at the time of donation are more likely to have incomplete post-donation follow-up. We sought to examine whether center-level variation exists in follow-up at 6 months post-donation among a cohort of obese living donors. Methods: Using data from the Scientific Registry of Transplant Recipients and the Organ Procurement and Transplantation Network from January 2005 to July 2015, we included all adult kidney donors with obesity (body mass index ≥30 kg/m2) at the time of donation (n=13,831). We used a multilevel logistic regression model with a random intercept for recovery center and empirical Bayes estimates to explore the presence of center variation in the recording of 6-month post-donation serum creatinine. Results: After adjusting for various donor- and center-level characteristics, 24% of the variation in recording of 6-month serum creatinine was accounted for by center (intraclass correlation coefficient 0.237, p < 0.001). The adjusted predicted probability of a recorded 6-month creatinine ranged from 10% to 91.5% across centers, with an average of 49.5%, and 91 of 192 centers had less than a 50% predicted probability of a complete 6-month creatinine.

Conversion to open surgery was required in 5 cases (for major bleeding and malfunctioning staplers). All conversions occurred early in the experience (the last one 11 years ago). Perioperative complications occurred in 10.2% of cases and the majority were minor (Grade I-II), with the exception of 1 death, which was not related to technical issues. There were no statistical differences among the 4 groups in surgical outcomes or postoperative complications.

Purpose: The impact of transplant center program size, staffing, and organization on LKD outcomes is unknown. This study describes the labor inputs, organization, and resources available within U.S. living donor (LD) programs and their associations with LD outcomes. Method: A survey was sent to the living donor coordinator (LDC) at every U.S. LD program (n=211; response rate 70%; total respondents 148). Respondent surveys were linked to AHA and SRTR databases to describe hospital characteristics and volumes. Descriptive statistics summarized responses; multiple linear regressions assessed associations of center characteristics with outcome variables. Results: 74% of programs reported dedicated LDCs (Table 1). The median number of LDC FTEs per center was 1.0 (IQR 1-2); 37% of programs reported that the LDC devoted <40 hours per week to the LD program. Having >1 LDC FTE was associated with higher numbers of LD inquiries, evaluations, and transplants, as was having a dedicated LDC (p<0.004) (Table 2). Dedicated LD clerical staff was associated with higher numbers of evaluations and transplants (p<0.003). Centers that accept donors with controlled HTN and centers with a KPD program reported higher LD inquiries, evaluations, and transplants (p<0.021). The overall national conversion rate from initial contact to LD transplant was 8%, and from evaluation to LD transplant 32%.
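For the obese-donor follow-up analysis above, the intraclass correlation of a random-intercept logistic model is usually computed on the latent scale: ICC = var_center / (var_center + π²/3). The sketch below shows only that arithmetic; the between-center variance is an illustrative value chosen to reproduce the reported ICC of about 0.24 and is not taken from the study.

```python
# Latent-scale ICC for a random-intercept logistic model:
#   ICC = var_center / (var_center + pi^2 / 3)
import math

var_center = 1.02                          # illustrative between-center variance
icc = var_center / (var_center + math.pi ** 2 / 3)
print(f"ICC = {icc:.3f}")                  # ~0.237 with this illustrative value
```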
Conclusion: Findings suggest an association of labor, organizational and resource inputs with LD outcomes, suggesting that centers should invest in nursing and non-nursing staffing, evaluate donor management structure, review acceptance criteria and protocols, and devote specific resources to the LD program. This is the first known study in the transplant literature to describe an association between nurse labor and LD outcomes.

Results: In all sub-periods, female donors made up the majority (55-62%), except among sibling donors (45%) and child-to-parent donors (40%). No significant gender differences were seen in perceived information given before donation. For males, it was more common that the recipient took the initiative to donate. For females, the motivation for donating was more frequently to help the recipient and because others wanted them to donate; for males, it was more common to feel a moral obligation. Post-operatively, females more frequently felt sad and experienced nausea, and more frequently felt that the donation had had a positive impact on their lives. With the introduction of minimally invasive surgical techniques, donors experienced fewer problems from the operation, with no gender difference. Females donate more frequently than males, a difference that did not change over time. Only a few gender differences were seen in donor motives and the donation experience; however, these differences may be relevant for addressing the gender imbalance in kidney donation.

Living donor kidney transplantation (LDKT) is the optimal treatment for adults with renal failure. However, living kidney donation has declined in the past decade, particularly among men, younger adults, blacks, and low-income adults. There is evidence that donation-related costs may deter both transplant candidates and potential donors from considering LDKT. Lost wages are a major source of financial loss for some living donors and, unlike travel and lodging expenses, are not reimbursed by financial assistance programs. In this presentation, we describe the rationale and design of, and provide an update on, an NIH-funded, single-center study in which 350 kidney transplant candidates are randomly assigned to one of two parallel arms: (1) possible reimbursement of donor lost wages up to $1,500, or (2) possible reimbursement of donor lost wages up to $3,000, with LDKT occurrence within 12 months of initial transplant evaluation as the primary outcome. The study is testing the impact of offering reimbursement for donor lost wages on the LDKT rate in kidney transplant candidates, examining whether offering reimbursement for donor lost wages reduces racial disparity in LDKT rates, and determining whether higher reimbursement amounts lead to higher LDKT rates. Our central hypotheses are that offering reimbursement for donor lost wages will yield higher LDKT rates overall relative to historical controls and will reduce the disparity between LDKT rates in white and minority patients. In addition, enrolled patients and prospective donors are completing questionnaires to assess financial concerns about transplantation and donation, their relative importance in decision-making, donation ambivalence, and feelings of pressure to donate. We will highlight similarities and differences between our study and the donor lost-wages study recently funded by the Laura and John Arnold Foundation.
Our study is the first such study to be initiated (enrollment is underway), and it addresses the transplant community's call to reduce the financial burden of living donation and to examine its impact on LDKT rates. The findings have the potential to influence policy, clinical practice, LDKT access, and income and racial disparities in LDKT.

Inflammation is associated with poor physical performance, frailty, disability and death in community-dwelling older adults. Given the elevated inflammation levels among ESRD patients, it is likely that inflammatory markers are associated with adverse outcomes such as frailty or physical limitation among kidney transplantation (KT) candidates, although the strength of these associations is unclear. Methods: 974 participants were enrolled at the time of evaluation for KT (4/2014 to 5/2017). Inflammatory markers, the Short Physical Performance Battery (SPPB), and frailty were assessed at the time of KT evaluation. Participants with an SPPB score <10 were classified as SPPB impaired. Elevated inflammatory markers were defined as a >1 SD higher level of IL-6, sTNFR1, CRP, or an inflammation index on the log scale. Results: Dialysis groups had higher inflammation; HD patients were most likely to be SPPB impaired and frail (Table 1). Overall, the odds of being frail or SPPB impaired at the time of KT evaluation were higher with increasing IL-6, CRP, and inflammation index, while the odds of being SPPB impaired were also higher with increasing sTNFR1 (Table 2). Among HD patients, the odds of being frail at the time of KT evaluation were higher with increasing IL-6 and CRP, and the odds of being SPPB impaired were higher with increasing IL-6, CRP, and inflammation index (Table 2). Conclusion: Among KT candidates, those undergoing HD have higher odds of being frail and SPPB impaired with elevated levels of inflammatory markers. HD patients with elevated inflammatory markers, especially IL-6, CRP, and the inflammation index, may need special care, such as closer follow-up or prehabilitation, to improve lower-extremity function and reduce frailty burden while on the waitlist.

Abstract# 538 Background: Hypothermic machine perfusion (HMP) is increasingly being used in cardiac death kidney transplantation, but the value of assessing organ quality and predicting DGF through HMP parameters is still controversial. Methods: Three hundred sixty-six qualified kidneys were randomly assigned to development and validation cohorts in a 2:1 distribution. The HMP variables of the development cohort were used as candidate univariate predictors of DGF. Independent predictors of DGF were then identified by multivariate logistic regression analysis with a p value <0.05. According to the OR values, each HMP variable was given a weighted integer, and the sum of the integers was the total risk score for each kidney. Results: HMP duration, resistance and flow rate were identified as independent predictors of DGF. The HMP predictive scores ranged from 0 to 14, and there was a clear increase in the incidence of DGF moving from the low to the very-high predictive score group. Based on the frequency of DGF associated with different risk scores, we formed four increasingly serious risk categories (scores 0-3, 4-7, 8-11, and 12-14). The HMP predictive score demonstrated good discriminative power, with a c-statistic of 0.706 in the validation cohort, and significantly better predictive value for DGF than terminal flow and resistance (p<0.05).
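A minimal sketch of the score-building recipe just described (development-cohort logistic regression, odds ratios mapped to integer points, points summed into a per-kidney risk score, discrimination checked on the validation cohort). The cut points, rounding rule, file and column names are all assumptions; the abstract does not specify them.

```python
# Hypothetical sketch: OR-weighted integer risk score for DGF built from
# hypothermic machine perfusion (HMP) variables.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

dev = pd.read_csv("hmp_development.csv")                # hypothetical input files
val = pd.read_csv("hmp_validation.csv")
predictors = ["hmp_duration", "resistance", "flow_rate"]

# Dichotomize each HMP variable at the development-cohort median (illustrative).
cuts = dev[predictors].median()
dev_bin = (dev[predictors] > cuts).astype(int)
val_bin = (val[predictors] > cuts).astype(int)

model = LogisticRegression().fit(dev_bin, dev["dgf"])
points = np.rint(np.exp(model.coef_[0])).astype(int)    # odds ratio -> integer points
val_score = val_bin.mul(points).sum(axis=1)             # total risk score per kidney

print("Validation c-statistic:", roc_auc_score(val["dgf"], val_score))
```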
Conclusion: The HMP predictive score is a good noninvasive tool for assessing the quality of DCD kidneys and is potentially useful for physicians in making optimal decisions about donor organ offers.

A preliminary analysis of T-REX implementation was performed 7 months after the pilot start. Referral documentation within T-REX and referral reports pulled from transplant center electronic medical records were examined to determine usability. Results: From April 1 to November 1, 2017, 68 referrals were electronically sent and received via T-REX. The mean time from a referral sent by dialysis staff to receipt of the referral (including all required patient information to schedule an evaluation) by transplant staff was 33.8 hours. The mean time to process a referral decreased from 50.1 hours (April 1-July 15) in the first half of the pilot to 24.1 hours (July 16-November 1) in the second half. All but 8 referrals from these dialysis facilities were sent via T-REX (vs. fax). The pilot phase of T-REX showed high usability among dialysis staff. A randomized controlled trial among dialysis and transplant centers in the Southeast will be conducted in 2018 to examine the effect of T-REX vs. standard of care on transplant access. T-REX could help improve communication between healthcare providers, streamline the efficiency of the transplant process, and ultimately help improve dialysis patients' access to kidney transplantation.

Background: Rituximab is commonly prescribed in ABO- or HLA-incompatible kidney transplantation. When used before transplant, it interferes with B lymphocyte complement-dependent crossmatch (CDC CM) and FACS crossmatch (FACS CM) results, rendering their interpretation impossible. This effect has been described up to 9 months following rituximab infusion. We describe here a new method to abolish rituximab interference in both CDC CM and FACS CM. We used an anti-rituximab mouse monoclonal antibody (10C5 clone, ABNOVA®) directed against the F(ab')2 fragment of rituximab; this antibody is normally used to quantify rituximab by ELISA. We tested different antibody concentrations in different settings (rituximab infusions 0 to 270 days before CM; patients with and without class I and II HLA antibodies) and evaluated whether the interference of rituximab was abolished in FACS CM and CDC CM. Results: A concentration of 0.17 μg/μl of the 10C5 clone completely abolished false-positive results in both CDC and FACS CM. Accordingly, 1 μl of 10C5 was added to 5 μl of serum for 15 minutes at room temperature, and the CM was then performed. While these concentrations dilute the serum, they did not nullify expected positive results: CDC and FACS CM stayed positive when donor-specific HLA antibodies had a Luminex mean fluorescence intensity >8000 and >4000, respectively. Conclusion: We describe an easy, efficient, and relatively inexpensive method to eliminate rituximab interference on CDC and FACS CM results.

Objective: OPTN has recently approved MELD prioritization for HCC patients beyond Milan Criteria (MC) who are downstaged (DS) with locoregional therapy (LRT) prior to liver transplant (LT). We sought to evaluate post-LT outcomes, identify predictors of downstaging, and evaluate the impact of LRT in HCC patients presenting beyond MC.
Methods: Clinicopathologic characteristics and outcomes were compared among beyond-MC patients (n=789) who were downstaged (DS, n=465), treated and not downstaged (LRT-NoDS, n=242), or untreated and not downstaged (NoLRT-NoDS, n=82) from 20 US centers (2002-2013). Logistic regression identified predictors of downstaging. Among non-downstaged patients, multivariate Cox regression and propensity matching with inverse probability of treatment weighted analysis determined the effect of LRT on HCC recurrence. Results: Compared to NoDS, recurrence-free survival was superior and post-LT recurrence lower in DS, with further stratification of risk by tumor >5 cm. Multivariate predictors of downstaging included pre-LT AFP response and pathologic tumor number and size. Surprisingly, LRT-NoDS had higher recurrence rates compared to NoLRT-NoDS, even after controlling for clinicopathologic variables (HR 2.43 [1.47-4.00], p=0.001) and propensity matching (HR 2.60 [1.58-4.28], p<0.001).

HCC recurrence after OLT: there was no significant difference in tacrolimus concentrations between patients without and with HCC recurrence. The HCC recurrence proportion was 0% for D(+)R(+), 3.4% for D(+)R(-), 29.7% for D(-)R(+), and 8.1% for D(-)R(-). The log-rank test in KM analysis demonstrated a remarkable difference in tumor recurrence rate among the different HLA-DR4 matching statuses (p=0.008). Conclusion: These results suggest that HLA-DR4 matching status was associated with different HCC recurrence risk and that the graft immune system could play a role in the response to tumor cells after liver transplantation. Further clinical and basic studies are required to confirm these findings and explore possible mechanisms.

Background and aims: De novo malignancies are one of the major late complications and causes of death after liver transplantation (LT). Using extensive data from the French National Agence de la Biomédecine Cristal database, the present study aimed to quantify the risk of solid-organ de novo malignancies (excluding non-melanoma skin cancers) after LT. Methods: The incidence of de novo malignancies among all LT patients between 1993 and 2012 was compared to that of the French population, standardized on age, gender, and calendar period (standardized incidence ratio, SIR). Patients with a history of malignancy before LT were not included. Results: Among the 11,226 liver transplant patients included in the study, 1,200 de novo malignancies were diagnosed (11.6%) after a median delay of 5.0 years after LT.

In October 2015, OPTN implemented a new policy for liver allocation under which hepatocellular carcinoma (HCC) patients receive exception points only after a 6-month delay. This delay may lead to undetected HCC disease progression and recurrence after transplantation that may result in poor post-transplant outcomes. We sought to investigate the association of this policy change with all-cause graft failure (death or re-transplant) in HCC liver transplant recipients. We identified 1,907 adult, first-time deceased donor liver transplant (DDLT) recipients who were granted HCC exception points using SRTR data (2014-2016). We estimated the cumulative incidence of all-cause graft failure using Kaplan-Meier curves, pre-2015 (10/8/2014-10/7/2015) and post-2015 (10/8/2015-10/7/2016).
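A minimal sketch of this kind of era comparison is shown below, using the lifelines library on simulated data; the column names, sample size, and event rates are hypothetical placeholders, not the study's data or code.

```python
# Hypothetical sketch: Kaplan-Meier cumulative incidence of graft failure by policy
# era, plus a Cox model adjusting for recipient/donor factors. Data are simulated.
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "months": np.minimum(rng.exponential(60, n), 12.0),  # follow-up capped at 1 year
    "graft_failure": rng.binomial(1, 0.08, n),            # death or re-transplant
    "post_2015": rng.binomial(1, 0.5, n),                 # transplanted after policy change
    "age": rng.normal(58, 9, n),
    "dri": rng.normal(1.4, 0.3, n),                       # donor risk index
})

# Cumulative incidence (1 - Kaplan-Meier survival) by era
for era, grp in df.groupby("post_2015"):
    label = "post-2015" if era else "pre-2015"
    kmf = KaplanMeierFitter()
    kmf.fit(grp["months"], event_observed=grp["graft_failure"], label=label)
    print(label, "1-year cumulative incidence:", round(1 - kmf.predict(12.0), 3))

# Adjusted era comparison: hazard ratio for the post_2015 indicator
cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="graft_failure")
cph.print_summary()
```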
We compared 1-year graft failure using Cox regression, pre-2015 vs. post-2015, adjusting for recipient age, sex, race, and donor risk index (DRI). Results: Allocation-MELD and lab-MELD at transplantation were both higher post-2015 than pre-2015 (allocation-MELD, median (IQR) 28 (23.5-28) vs. 22 (22-28), p<0.01; lab-MELD, median (IQR) 12 (9-17) vs. 11 (8-15), p<0.001) [Table 1]. The one-year graft failure rate for HCC DDLT recipients was 7.5% pre-2015 and 8.1% post-2015 (log-rank test p=0.5) [Figure 1]. After adjustment, there was no evidence of an association between the policy change and graft failure (aHR 0.85, 95% CI 0.54-1.35; p=0.5) [Figure 1]. There is no evidence that the 6-month policy delay has affected early post-DDLT outcomes for HCC patients. Long-term survival for HCC DDLT patients should be examined as the length of follow-up after policy implementation increases.

There is considerable commentary regarding liver transplant vs. resection for the treatment of hepatocellular carcinoma (HCC). While liver transplant provides longer survival than resection, Milan criteria, organ availability, and the UNOS reduction of exception points for HCC leave many patients with resection. One aspect missing in this analysis is the financial burden on patients and payers. Indeed, the Patient Protection and Affordable Care Act (Obamacare) prohibits the Secretary of Health and Human Services and the Patient-Centered Outcomes Research Institute from including cost in treatment comparisons. To address this, we compiled billable events as a proxy for receivables, from diagnosis of HCC through resection or transplant to death or last reported encounter, for patients with HCC between 01/2011 and 12/2012. On average, patients with HCC who underwent resection had a MELD of 12, survival of 652 days, and billable encounters of $316,873, or $2,904/day. In contrast, patients who received a liver transplant had a MELD of 19, survival of 1,579 days, and billable encounters of $704,291, or $2,622/day. Surgery represents the largest category of treatment cost for patients with HCC ($88,020 or 28% for resection and $175,457 or 26% for transplant). Moreover, the cost-effectiveness of treatment was directly proportional to length of survival: the longer overall survival in patients transplanted for HCC ultimately diminishes the long-term financial burden compared with resection. This endpoint is reached after 2 years of survival. In patients who received a liver transplant and died within 2 years, the financial burden was on average $7,450/day. In contrast, patients who survived >2 years had a financial burden of $320/day. In conclusion, liver transplant offers both patients and payers a significant advantage over liver resection for eligible patients with HCC when predicted survival is greater than 2 years. In a health care climate aiming to contain costs and evaluate value-based treatment paradigms, financial burden should be included in the decision analysis. A balance needs to be drawn between survival and financial burdens.

Objective: Split/reduced-graft liver transplantation (SRLT), while widely accepted in pediatrics, remains underutilized in adults. SRLT is considered a negative factor for graft quality (HR 1.52) in the liver donor risk index reported by Feng et al. in 2006.
This study aimed to investigate whether improved donor-recipient matching and accumulated technical advances have improved SRLT outcomes. Methods: This study used 82,554 adult patients in the Scientific Registry of Transplant Recipients (SRTR) from 1998-2015, evaluating characteristics and outcomes of 1,221 (1.5%) SRLT patients according to time period (Era 1: 1998-2003; Era 2: 2004-2009; Era 3: 2010-2015). Cox proportional hazards models for overall graft survival and the Kaplan-Meier method were used to assess prognostic outcomes. P<0.05 was defined as statistically significant. Results: SRLT volume remained at roughly 60 cases/year over the study period and in each era (Era 1: 408, Era 2: 402, and Era 3: 411 cases). 102 centers performed at least one case, and more than half the cases were performed by 12 high-volume centers that each performed at least 30 cases (53.4%, 652/1,221). Over the study period, median donor age was lower in Era 2 (20, IQR 16-28) and Era 3 (21, IQR 16-29) than in Era 1 (22, IQR 17-33) (P<0.001). On the other hand, recipient age was greater in Era 2 (54.5, IQR 48-61) and Era 3 (58, IQR 51-63) than in Era 1 (51, IQR 45-57) (P<0.001). Cold ischemic time in Era 3 was shorter than in the other two eras (7.3 hours vs. 8 hours and 8 hours; P<0.001). Length of stay improved significantly over time (from 13 days to 10 days, P<0.001). One- and 5-year graft survival also improved over time: 68.6% and 58.3% in Era 1, 84.8% and 71.9% in Era 2, and 87.3% and 79.8% in Era 3 (P<0.001). The adjusted HR for SRLT relative to whole grafts indicated significantly worse graft survival in Era 1 (HR 1.39, P<0.001); however, no significant difference remained in Era 2 and Era 3 (HR 1.04 and 0.94, respectively).

Results: In total, 67 living donors responded to the SF-36 and DQLS, with a mean follow-up of 6.5±0.3 years. The mean age at donation was 44.5±1.3 years, and 53.7% of respondents were male. SF-36 mental and physical component summary scale scores were higher compared with normal US population data (p<0.05). DQLS data were grouped into four areas: physical activity, employment status, insurance status, and emotional status. 76.1% of respondents were able to resume the same level of physical activity as before surgery, while 73.1% remain physically active with strength equal to or greater than pre-operatively. 91% of respondents returned to school or work after donation, and 80.6% reported that LD had no impact on their performance. 95.5% of respondents reported that their health insurance had not been affected by LD. Only 10.4% of respondents indicated that they had applied for life insurance post-LD; however, 28.6% of these reported that LD had affected their ability to obtain coverage. Overall, 85.1% of respondents felt positive about their surgery, and 97.0% indicated that they did not regret LD. Data from the 2017 surveys are currently being analyzed and will represent a mean follow-up of 13 years post-donation. Conclusion: These data demonstrate that most patients report above-average HRQOL post-LD when compared with the general population. To our knowledge, this study will represent the longest follow-up of HRQOL in LD and provides unique data related to physical, financial, and psychosocial factors in HRQOL after LD.

In conclusion, preformed HLA antibodies present at the time of LDLT are associated with inferior graft survival compared with patients transplanted without antibodies. Following treatment of DSAs with IVIg + PP, graft survival is equivalent to that of patients without antibodies.
Additional study is required to delineate the mechanism of these findings.

Background: Interferon-β (IFNβ), a type I interferon with antiviral and proinflammatory activity, has been used as an immune-modulatory treatment for multiple sclerosis (MS). The effects of IFNβ on alloimmunity and transplant rejection are unknown. Methods: We used mathematical modeling to predict signaling effects of IFNβ in T cells, and validated the model through experimental testing of type I interferon receptor (IFNAR) expression and phosphorylated STAT activation in IFNβ-treated murine and human naïve CD4+ T cells by flow cytometry. We performed Treg induction assays with and without IFNβ and measured frequencies and function of induced CD4+CD25+Foxp3+ Treg. We also treated B6 recipients of BALB/c allografts with IFNβ, evaluating graft survival and effects on Treg. Results: As predicted by the computational model, murine and human naïve CD4+ T cells express IFNAR, and IFNβ activates STAT1, 4, and 5 (Fig. 1A). IFNβ augmented TGFβ-induced Treg generation in vitro (Fig. 1B) and in vivo following adoptive transfer of allogeneic splenocytes (vehicle vs. IFNβ: 6.8 ± 1.4 and 8.8 ± 2.0%, respectively; p<0.05). IFNβ treatment prolonged survival of BALB/c hearts transplanted into CTLA-4 Ig-treated B6 recipients (Fig. 1C). Discussion: Our translational findings identify a previously unrecognized direct effect of IFNβ on human and mouse Treg induction that has the potential to be used to improve allograft survival in humans.

The instability of the regulatory T (Treg) cell lineage limits their therapeutic potential in transplantation and autoimmune diseases. Herein, we developed a novel approach to stabilize the Foxp3 expression and suppressor function of TGF-β/IL-2-induced Treg (iTreg) cells using 3 compounds (3C): the pan-HDAC inhibitor NaB, the histone methyltransferase G9a inhibitor UC46, and vitamin C. We found that during iTreg cell polarization, NaB and UC46 synergistically enhanced chromatin accessibility and the recruitment of the DNA demethylases (TETs) at the Foxp3 CNS2 locus, while vitamin C enhanced TET enzyme activity to enforce active DNA demethylation. The 3C treatment further prevented subsequent re-methylation at the Foxp3 CNS2 locus by inhibiting IRF4- and p-STAT6-mediated DNMT recruitment. Hence, 3C-conditioned iTreg (3C-iTreg) cells persistently expressed high levels of Foxp3 and exerted potent suppressive function, associated with complete demethylation of the Foxp3 CNS2 locus. Upon in vivo adoptive transfer, virtually all 3C-iTreg cells maintained high Foxp3 expression within the 6-month study period. Next, we determined the therapeutic function of 3C-iTreg cells in three distinct models. In an experimental autoimmune encephalomyelitis (EAE) model, iTreg and 3C-iTreg cells were derived from MOG-specific 2D2 TCR transgenic CD4+ T cells. Adoptive transfer of 2.5 million 3C-iTreg, but not iTreg, cells almost completely prevented EAE development in mice immunized with the MOG 35-55 peptide. In a colitis model, iTreg and 3C-iTreg cells were derived from polyclonal CD4+ T cells. Adoptive transfer of 0.5 million 3C-iTreg, but not iTreg, cells completely abrogated weight loss and colon inflammation mediated by 1 million transferred CD4+CD45RBhigh cells in Rag1-/- mice. In a BALB/c-to-Rag1-/- B6 skin transplantation model, polyclonal alloreactive iTreg and 3C-iTreg cells were derived from B6 CD4+ T cells upon stimulation with BALB/c splenocytes.
Adoptive transfer of 0.1 million 3C-iTreg, but not iTreg, cells completely prevented skin graft rejection mediated by 0.2 million transferred CD4+Foxp3- B6 T cells. Taken together, 3C treatment stabilizes the iTreg cell lineage by inducing an epigenetically fixed state for Foxp3 expression. 3C-iTreg cells have great therapeutic potential in transplantation and autoimmune diseases. CITATION INFORMATION: Xiao X., Zhang X., Li J., Chen S., Lan P., Dou Y., Chen W., Li X. A 3C Approach Drives Epigenetic Fixation of Foxp3 Expression and Stabilizes Treg Cell Lineage and Therapeutic Function. Am J Transplant. 2018;18 (suppl 4).

… diseases. The study aimed to examine LT outcomes in NASH recipients, particularly recipients with a biological MELD score of 35 or higher, after implementation of the Share 35 policy. Methods: A retrospective analysis was performed of 4,380 adult patients from the UNOS database who received deceased donor LT between 2009 and 2017 for a primary diagnosis of NASH or cryptogenic cirrhosis with body mass index ≥30. Cox regressions were used to model the effect of the Share 35 policy on post-LT graft and patient survival, comparing the first 3 years of Share 35 to an equivalent time period before, with stratification on MELD score of 35 or higher. Results: The number of NASH patients receiving LT increased from 232 (14.1%) in 2009 to 266 (20.5%) in 2017. Compared with the pre-Share 35 era, the average MELD score was higher (26.5 vs. 24.7), with a higher proportion of recipients having MELD ≥35 (22.9% vs. 15.5%); mean waitlist time decreased from 219.3 days to 137.5 days, and average cold ischemic time decreased from 6.4 hours to 6 hours. However, no significant difference was found in the average length of hospitalization after LT. Three-year graft and patient survival were comparable both in the entire cohort and within recipients having MELD ≥35.

30% of kidney transplant recipients are readmitted in the first month post-transplant. ABO-incompatible (ABO-I) recipients constitute a unique subpopulation that might be at higher readmission risk. Drawing on a national cohort of ABO-I recipients across ten years, 230 patients with Medicare primary insurance were matched to ABO-compatible (ABO-C) transplant controls and to waitlist-only controls on age, sex, race, prior transplant, PRA, diabetes, years of renal replacement time, percent of renal replacement time with a functioning graft, and transplant/waitlisting date. Readmission risk was determined using multilevel, mixed-effects Poisson regression. In the first month, ABO-Is had the same readmission risk as ABO-Cs; thereafter, readmission risk was higher for ABO-Is. From months 1-6, 6-12, 12-24, and 24-36, ABO-Is had a 1.23-fold … These findings that ABO-Is have a higher readmission risk than ABO-C controls, but a lower readmission risk after the first year than waitlist-only controls, should be considered in regulatory/payment schemas and in planning clinical care.

The purpose of this effort is to provide transplant administrative and clinical directors with reliable predictions of the Scientific Registry of Transplant Recipients (SRTR) risk-adjusted 1-year expected number of events, to be published in forthcoming Program-Specific Reports (PSRs). Using data provided by the United Network for Organ Sharing (UNOS), statistical software is programmed to emulate, for a specific cohort of transplant patients, the applicable SRTR risk adjustment model.
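As a generic illustration of this kind of risk-model emulation (hypothetical coefficients, covariates, and data; not the SRTR model or the authors' software), the expected number of 1-year events for a cohort can be computed as the sum of each patient's model-predicted probability:

```python
# Generic sketch of computing a cohort's risk-adjusted expected number of events.
# Coefficients, covariates, and data are hypothetical placeholders, not SRTR values.
import numpy as np
import pandas as pd

# Hypothetical logistic risk-adjustment coefficients (log-odds scale)
coefs = {"intercept": -2.8, "age_per_10yr": 0.25, "diabetes": 0.40, "dri": 0.60}

cohort = pd.DataFrame({
    "age_per_10yr": [5.2, 6.8, 4.1],   # recipient age divided by 10
    "diabetes":     [1, 0, 1],
    "dri":          [1.1, 1.6, 1.3],
})

def predicted_probability(row: pd.Series) -> float:
    """Logistic model: p = 1 / (1 + exp(-(b0 + sum(b_i * x_i))))."""
    lp = coefs["intercept"] + sum(coefs[k] * row[k] for k in cohort.columns)
    return 1.0 / (1.0 + np.exp(-lp))

probs = cohort.apply(predicted_probability, axis=1)
expected_events = probs.sum()
print(f"Expected 1-year events for this cohort: {expected_events:.2f}")
# An observed/expected ratio could then be compared against the figures
# published in a Program-Specific Report.
```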
The program calculates each patient's unique set of risk adjusters and renders them into spreadsheet form. With all of the risk adjusters at hand, simple editing of the most recent SRTR Expected Survival Worksheet (ESW) allows patients to be added (e.g., a new 6 months' worth of transplants) and omitted (the oldest 6 months), thus enabling the formation of cohorts that will be reported on in future PSRs. The estimation method described was implemented fairly recently, and consequently only a few PSRs' worth of longitudinal information on its predictive effectiveness exists at this time. The data generated so far, however, indicate that a reasonably close estimate of the actual, SRTR-reported expected number of events is being obtained (see Figure 1). We have used it for heart, kidney, liver, and lung. Adding patients to the SRTR's dynamic Expected Survival Worksheet (to form future cohorts) provides estimates of outcomes and applies them to the current regulatory criteria used by the Centers for Medicare and Medicaid Services and the OPTN's Membership and Professional Standards Committee. The results are displayed with sophisticated charts and color-coded graphics. The worksheet also enables transplant center leadership to engage in hypothetical modeling, adding events likely to happen or changing high-value donor and recipient covariates that affect risk adjustment. Accuracy is also increased, as much of the data employed by the worksheet is exactly that used by the SRTR. Using this method, predicted expected events come reasonably close to those actually reported by the SRTR.

Introduction: Time from referral to appointment for kidney transplant evaluation is an important metric. Demands of dialysis schedules and transportation logistics typically lead to high no-show rates, cancelled appointments, and long appointment waiting times. We sought to meaningfully decrease our time to kidney transplant evaluation. Methods: Sequential interventions are described. Intervention 1: We scheduled all Medicare patients with next-available appointments immediately after receiving referrals; commercial patients were scheduled 2 weeks after receiving referrals to allow for financial authorizations. Simultaneously, we increased the number of available appointments on Tuesdays and Thursdays because of the large volume of candidates who dialyze M/W/F. Intervention 2: We noted that no-show rates were a barrier to available evaluations and began a pilot of reminder letters and phone calls. Intervention 3: Upon learning that patients wanted to cancel or postpone their evaluation, we backfilled their appointments with candidates who desired an earlier appointment. Results: Baseline time to evaluation was 55 days. After intervention 1 (more appointments, rapid scheduling), referral-to-evaluation time increased slightly to 58 days. Following intervention 2 (reminder letters, phone calls), referral-to-evaluation time decreased to 40 days, with a concomitant drop in the no-show rate from 22% to 16%. While these interventions were helpful, intervention 3, backfilling cancelled appointments, showed the most dramatic effect on time to evaluation, with a decrease to 19 days. Conclusions: Referral-to-evaluation time is a critical metric for candidates and transplant programs. Increasing clinic slots and reminder phone calls had a modest effect on decreasing time to evaluation. However, no-shows for evaluations decrease staff productivity and cause delays for motivated candidates.
The interventions above markedly decreased time to evaluation by scheduling motivated candidates through backfilled appointments. In effect, our improvement in time to evaluation was a conversion of "no-shows" into motivated candidates.

Racing to Activation: Using LEAN Principles to Effectively Manage Your Waitlist with Predicted Improved Outcomes. L. Curry, V. Rao, D. Cassidy, V. Rohan, D. Dubay, N. Pilch, S. Gray. Introduction: In July 2017, our waitlist was approximately 950 pts, 36% of whom were inactive. One coordinator and one program assistant managed the kidney waitlist. Review of the waitlist demonstrated that our strategies were ineffective, based on the high percentage of pts inactive for extended periods of time (>1 yr). In anticipation of the need for a robust waitlist policy, we leveraged our in-house donor call group to facilitate evaluation of our active waitlist, and our waitlist coordinator focused solely on the inactive list. Diversified efforts were used to target pts who required on-site re-evaluation, additional testing, or other multidisciplinary team resources. The aim of this study was to apply LEAN principles and aggressively manage the entire inactive waitlist over the course of 4 months. Methods: We applied the DMAIC Lean Six Sigma process-improvement strategy to our waitlist. We defined our inactive waitlist as of May 24, 2017 and designated this as our baseline cohort. Our measure was evaluation of the waitlist and conversion to active status, need for re-evaluation, or removal. The strategy to target individual pts was defined by our multidisciplinary team to identify pts at highest risk for mortality and those who would benefit most from activation based on age. Our interventions were evaluated weekly through team communication via email and collated monthly during our quality meetings. Improvement in the process was determined based on status on the inactive waitlist. Control included continual reprioritization of the inactive waitlist based on weekly data and overall assessment of the inactive list. Results: All inactive pts on the waitlist were reviewed during a 4-month period. Because transplant volume equaled waitlist additions, the distribution of the waitlist was not dramatically altered, as shown in Table 1. However, the focus on the inactive list allowed activation of pts <35 years of age and re-activation or delisting of those >65 yrs. Our total inactive waitlist was altered by 44% (149/340). We were able to improve the accuracy of the waitlist without increasing FTEs, predicting improved outcomes in graft survival and completed transplants.

Over the past two decades, the role of transplant pharmacists (Txp Rx) has expanded following revised guidance from UNOS and CMS. There are few published data detailing the expanded role of the Txp Rx in the ambulatory setting. The purpose of this analysis was to evaluate the impact of newly expanded ambulatory Txp Rx services on readmissions and patient outcomes. Methods: A single-center retrospective review of all adult kidney, liver, and pancreatic transplant recipients transplanted between April 2015 and October 2016 was performed. This group was compared with a historical control group of patients transplanted between August 2013 and December 2014. Patients in the study group were scheduled to see a pharmacist at weeks 1 and 2 as well as months 2, 3, 6, and 12 post-transplant. The primary endpoint was 90-day all-cause readmission.
Secondary endpoints included organ-specific patient and graft survival at 1 year, rejection at 1 year, and readmission rates classified by indication. Results: A total of 124 patients in the Txp Rx study group were compared with 129 patients in the historical control group. Baseline characteristics were similar. The rate of readmission at 90 days after transplant was similar between the groups (60% Txp Rx group vs. 58% historical control). Expansion of Txp Rx services into the ambulatory clinic did not appear to decrease all-cause 90-day readmission rates. Future analysis will adjust for confounders of readmission and determine whether the number of pharmacy follow-up visits has an impact. We also plan to evaluate patient satisfaction with Txp Rx services in the clinic, as well as the impact of Txp Rx services on tacrolimus variability and non-adherence.

Medication adherence post-transplantation is vital. Upon discharge, abdominal transplant patients receive a medication list from the electronic medical record (EMR) and a medication teaching tool that includes pictures of medications (the Medication Action Plan, MAP). Patients typically rely on the MAP for their medication schedule post-transplant. The MAP does not communicate with the EMR, and there are often discrepancies between these lists. Multiple providers are involved in medication education and documentation, which can create additional variation in medication lists. Medication reconciliation discrepancies on discharge may lead to medication errors and patient harm. In 2016, an average of 4.9 medication discrepancies per patient were identified between the MAP and the electronic medication list in 51 patients. A quality improvement project was undertaken to decrease the discrepancy rate. Overall, we analyzed the medication records of 103 abdominal transplant patients discharged between March 2016 and July 2017. The tests of change (TOC) included: 1. More providers given access to the MAP. 2. PharmD provided the MAP to the discharge provider. 3. Discharge RN role created. 4. Inpatient coordinator position created and an additional discharge RN position created. The outcome measured was the percentage of patients with no discrepancies between medication lists. On average, patients were taking 17 medications daily. The most common types of discrepancies found before implementing the tests of change were omission (27%), dose (23%), schedule (15%), and strength (11%). After implementation of the tests of change, the most common discrepancies were dose (50%), omission (18%), and commission (16%). Prior to the quality initiative, only 3% of medication profiles had no discrepancies between the medication lists. The tests of change increased the rate of medication lists without discrepancies: TOC #1 yielded a 3.7% rate, TOC #2 a 22% rate, TOC #3 a 27% rate, and TOC #4 a 47% rate of medication lists without discrepancies. Assigning specific roles has helped decrease medication discrepancies. The process change is sustainable, and exploration of the optimal role and process continues.

We have achieved successful induction of long-term immunosuppression (IS)-free renal allograft survival in HLA-mismatched kidney transplantation using cyclophosphamide (CP)-based conditioning and combined kidney and donor bone marrow transplantation. Transient mixed chimerism (MC) for up to 3 weeks was induced in all recipients.
IS was successfully discontinued in 7/10 recipients for more than 5 years. Four of them remain off IS and continue with normal kidney function after more than 8 to 14 years. Wider clinical application has been hampered by acute kidney injury (AKI) shortly after transplantation in 9/10 recipients. We therefore conducted a pilot study in which cyclophosphamide was replaced with low-dose (1.5 Gy × 2) TBI. The first two recipients did well without developing AKI. One recipient successfully discontinued IS at one year and remains well without ongoing IS for 3 years. Because sufficient chimerism was not detected, low-dose IS has been continued in the second recipient. A third TBI recipient received belatacept/ATG in place of anti-CD2 mAb; transient MC without AKI was observed, and he is currently doing well on low-dose MMF only, to control IgA recurrence. Conclusion: MC and long-term IS-free renal allograft survival were achieved without AKI with low-dose TBI-based regimens.

… which were then correlated with the pathological and biochemical parameters of transplanted organs. Subsequently, an 8-gene expression panel, consisting of increased expression of 6 immunoregulatory genes and decreased expression of 2 pro-inflammatory genes, was found to be predictive of tolerance. Methods: In this Phase 2A single-center study (LITMUS), we examined whether a panel of 8 target and 5 housekeeping genes measured in peripheral blood mononuclear cells (PBMC) and the liver allograft could identify operationally tolerant liver transplant recipients. We first measured the panel in PBMC from 54 adult liver transplant recipients who were a minimum of 3 months post-transplant and had no biochemical evidence of rejection. Patients with a putative tolerant gene profile in PBMC had a liver biopsy and were then weaned off immunosuppression to confirm that the gene profile identified a tolerant state. Results: Of the 54 patients studied, 16 had the putative tolerance gene profile in their PBMC. Age, gender, indication for transplant, and CNI choice did not correlate with having the tolerant profile. Twelve patients agreed to enter the withdrawal phase of the study. Prior to withdrawal, a liver biopsy was performed, and 2 patients were excluded because their biopsies showed recurrent autoimmune disease and rejection. Of the 10 remaining patients, 5 have now been weaned off immunosuppression (IS), 2 are undergoing withdrawal, and 3 developed acute cellular rejection, which was easily reversed. Five of eight patients who had the gene expression profile in both liver and PBMC were successfully weaned off IS.

Using six additional nephrectomies with mixed BKVN and TCMR, regions showing histological features of only BKVN (mixed BKVN, n=6) and only TCMR (mixed TCMR, n=3) were isolated with laser capture microdissection. Differential gene expression and diagnostic performance were assessed. Results: All five PV genes were significantly increased (FDR<0.05) in native BKVN versus pure TCMR, but no human genes were differentially expressed. PV gene expression was also significantly higher (versus pure TCMR) in tumor BK (p<0.01), mixed BKVN (p<0.001), and mixed TCMR (p<0.05, except VP1). ROC analysis revealed excellent discrimination between BK-positive (including mixed TCMR) and BK-negative cases (AUC=0.96-1.00). As a 5-gene set, the PV genes demonstrated near-perfect diagnostic performance (AUC=0.99) with improved sensitivity (0.96) over histology (0.87).
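A minimal sketch of the kind of ROC evaluation reported above, using scikit-learn on hypothetical expression values (not the study's data, gene set, or classifier):

```python
# Hypothetical illustration of ROC analysis for a small gene-expression classifier.
# Data, genes, and the simple mean-expression score are placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(1)
n_bk_pos, n_bk_neg = 30, 40

# Simulated log-expression of 5 polyomavirus (PV) genes; higher in BK-positive cases
pv_pos = rng.normal(loc=2.0, scale=1.0, size=(n_bk_pos, 5))
pv_neg = rng.normal(loc=0.0, scale=1.0, size=(n_bk_neg, 5))

X = np.vstack([pv_pos, pv_neg])
y = np.array([1] * n_bk_pos + [0] * n_bk_neg)   # 1 = BK-positive, 0 = BK-negative

# Use the mean of the 5-gene set as a single diagnostic score
score = X.mean(axis=1)
auc = roc_auc_score(y, score)

# Sensitivity at roughly 95% specificity, read off the ROC curve
fpr, tpr, thresholds = roc_curve(y, score)
sens_at_95spec = tpr[fpr <= 0.05].max()
print(f"AUC = {auc:.2f}, sensitivity at 95% specificity = {sens_at_95spec:.2f}")
```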
Conclusion: These data suggest that PV gene expression is more sensitive than histology and can more precisely discriminate BKVN and TCMR. However, at the molecular level, no significant difference in immune response was identified.

Abstract# 462 Diagnostic Value and Confounding Factors of Urinary Biomarkers in the Non-Invasive Screening of Renal Allograft Inflammation. Anglicheau et al., Renal Transplantation Unit, Necker-Enfants Malades Hospital; Division of Transplant Surgery.

After patients were classified according to posterior probabilities, variables associated with class membership were identified. Results: A total of 52 patients were included in the analysis. Two latent classes were identified (Figure): patients with stable eGFR post-DSA (n=28) and patients with rapidly declining eGFR post-DSA (n=24). Patients in the declining trajectory (compared with the stable eGFR cohort) were more likely to have had a living donor (33% vs. 7%), to have experienced an acute rejection episode (71% vs. 25%), to have IgG3+ DSA (65% vs. 9%), to have died during follow-up (38% vs. 14%), or to have experienced graft failure (58% vs. 25%). There were no substantial differences between the classes in age at transplant, race, gender, or delayed graft function. Conclusion: Our findings support the idea that patients may take distinct clinical courses after the onset of DSA. Those who develop or have a history of acute rejection and those who have IgG3-positive DSA are more likely to have a fast decline in eGFR and to proceed to allograft failure and possibly death (a generic illustration of this type of trajectory clustering appears at the end of this passage).

All DQ DSA May Not Be Equal: Comparing Outcomes between … All other DQs had only one case of graft failure each. Similarly, rates of IgG3 DSA and acute rejection after DSA were more common in the DQ7, DQ2, and DQ6 cases (Table). Conclusions: These findings suggest that all DQ DSA are not equal. Larger patient analyses and further research will be necessary to better understand the potential differences between dnDSA against different DQ mismatches. Table: Outcomes and characteristics by DQ dnDSA specificity.

Concurrent Session: Kidney Donor Selection / Management Issues - 2. Abstract# 515 Not All HLA Mismatches Are Equal: Determining the Relative De Novo DSA Induction Capacity of HLA Mismatches. The cohort is predominantly African-American patients receiving kidneys from mostly Caucasian donors, who received calcineurin inhibitors as first-line immunosuppression. Patients were screened for dnDSA using single antigen beads at 1, 3, 6, and 12 months post-transplant and biannually thereafter. A total of 977 MM were studied. DQ5 MM were the most immunogenic, with 64% of MM leading to formation of dnDSA, followed by DQ7 (62%) (HLA class II). Last, 75% of MM did not lead to the formation of dnDSA despite the high frequency of some (Figure 1C). Highly immunogenic MM (upper-left quadrant in Figure 1B) should be considered to prevent the development of dnDSA and antibody-mediated graft injury. Many MM did not induce dnDSA (listed below the plot in Figure 1) and may not be immunogenic.

Follow-Up of Patients Treated with the IgG Endopeptidase (IdeS) for Desensitization and HLA-Incompatible (HLAi) Kidney Transplantation. Pre- and post-transplant desensitization included IVIg + anti-CD20. Data summarizing outcomes, including Banff biopsy scores, DSA levels, and outcomes, are shown below. Graft and patient survival at a mean of 18.76±5.6 months post-IdeS transplant were 94%. Rebound DSA responses were rare and of low MFI values, with only 4 patients demonstrating DSAs, all with MFIs ≤3000.
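As flagged above, here is a minimal stand-in for the latent-class eGFR-trajectory analysis (the stable vs. rapidly declining classes): a two-component Gaussian mixture over per-patient eGFR slopes substitutes for a formal growth-mixture model, and the data are entirely simulated.

```python
# Hypothetical stand-in for latent-class trajectory modelling of post-DSA eGFR:
# cluster per-patient eGFR slopes with a 2-component Gaussian mixture and read
# class membership from the posterior probabilities. Data are simulated.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)

# Simulated per-patient eGFR slopes (mL/min/1.73 m^2 per year) after DSA onset:
# one "stable" group near 0 and one "rapidly declining" group around -8.
slopes = np.concatenate([
    rng.normal(-0.5, 1.5, size=28),   # stable trajectory
    rng.normal(-8.0, 2.0, size=24),   # declining trajectory
]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(slopes)
posterior = gmm.predict_proba(slopes)   # posterior class probabilities
labels = posterior.argmax(axis=1)       # assign each patient to a class

for k in range(2):
    print(f"class {k}: n={np.sum(labels == k)}, "
          f"mean slope={gmm.means_[k, 0]:.1f} mL/min per year")
```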
Biopsies were performed in 15 patients.

Abstract# 524 Efficacy and Safety of Bleselumab in Kidney Transplant Recipients: A Phase 2, Randomized, Open-Label Study. … monthly thereafter) + IR-T (0.1 mg/kg/day; target C trough 4-11 ng/mL days 0-30, then 2-5 ng/mL); bleselumab (dosing regimen as per the bleselumab + IR-T group) + MMF. All received basiliximab induction (20 mg injection pre-transplant and on day 3-5 post-transplant) and corticosteroids. The primary endpoint was the incidence of biopsy-proven acute rejection (BPAR; Banff grade ≥1) through month 6. Treatment-emergent adverse events (TEAEs) were assessed to 6 months post-transplant. Bleselumab + IR-T treatment demonstrated noninferiority to SoC at 6 months post-transplant. Both bleselumab groups had greater incidences of drug-related TEAEs and of TEAEs leading to treatment discontinuation compared with SoC. There were 4 deaths (2 per bleselumab group) during the study, none deemed related to bleselumab. Conclusions: Treatment with bleselumab + IR-T over 36 months demonstrated similar efficacy in the prevention of BPAR compared with SoC. No new safety signals were observed. Disclosure: salary, Astellas Pharma Global Development.

A Phase-I Clinical Trial of Donor-Derived MIC Cell Infusion for the Induction of Donor-Specific Hyporesponsiveness after Living Donor Kidney Transplantation (TOL-1 Study). Disclosure: ownership interest, patent for MIC cell therapy.

Abstract# 526 RNA Expression Profiling of Renal Allografts in the Cynomolgus Monkey Identifies Tolerance. R. Smith, M. Matsunami, et al., Department of Surgery. Dialysis Modality, Inflammation, and Frailty in Kidney Transplant Candidates.

Associations of center volume with all-cause graft failure (adjusted hazard ratio with 95% confidence limits) were quantified by multivariate Cox regression, including adjustment for recipient, donor, and transplant factors. Patients transplanted at high-volume centers were more likely to be older (>65 years), to have longer waiting and cold ischemia times, and to receive alemtuzumab induction. Future studies should seek to identify care processes that support optimal outcomes following kidney transplantation irrespective of center.

Abstract# 540 Cold Pulsatile Machine Perfusion versus Static Cold Storage in Kidneys from Donation after Circulatory Death: A Multi-Centre Randomised Controlled Trial. … (3%), p=0.77), primary non-function (CS 1/51 (2%) vs. MP 0/51 (0%), p=1), slow graft function (41/51 (80.3%) vs. 39/51 (76.5%), p=0.63), one-year death-censored graft survival (CS 94% vs. MP 94%, p=0.95), and one-year patient survival (CS 95.9% vs. MP 95.3%, p=0.92). Per-protocol analysis did not demonstrate a significant difference between the groups. The study was underpowered and concluded early owing to difficulty recruiting patients. Discussion: Within the limitations of the study, there is no evidence that machine perfusion improves outcomes for recipients of DCD donor kidneys.

Improving Prognostication amongst Patients Undergoing Liver Transplantation for Hepatocellular Carcinoma: An International, 16-Center Study to Validate and Recalibrate HALTHCC. Objective: Prognosticating outcomes in liver transplantation (LT) for hepatocellular carcinoma (HCC) continues to challenge the field. Whereas adoption of the binary Milan Criteria (MC) generalized the practice of LT for HCC and improved outcomes, its predictive character has degraded with increasing candidate and tumor heterogeneity.
We sought to validate and recalibrate a previously developed, preoperatively calculated risk score, the hazard associated with liver transplantation in HCC (HALTHCC), in an international cohort. Methods: This cohort from 2002-14 consisted of 4,085 patients (both within and beyond MC [25.2%]) across 16 centers in North America, Europe, and Asia. A continuous risk score using pre-LT levels of alpha-fetoprotein, the model for end-stage liver disease sodium (MELD-Na) score, and tumor burden score was recalibrated in a randomly selected cohort (n=1,016) and validated in the remainder (n=3,079). Results: This international study was used to adjust the coefficients in the HALTHCC score. Before recalibration, HALTHCC had the greatest discriminatory ability for overall survival (C-index=0.61) compared with all previously reported scores.

The laparoscopy group mostly had type 1 bile duct anatomy (95.0%), and 81% had a single bile duct in the liver graft, compared with the open group (59.5% type 1 bile duct and 59.5% single bile duct). The laparoscopy group had significantly longer operation time (378.2 ± 93.5 minutes vs. 329.1 ± 68.0 minutes, P<0.001) and warm ischemic time (median 271 minutes vs. median 151 minutes, P<0.001) than the open group. However, estimated blood loss was small in the laparoscopy group.

The literature on living donor liver transplantation (LDLT) for alcoholic liver disease (ALD) is primarily from single Asian centers. We compared recipients with ALD and non-ALD using Wilcoxon rank-sum and chi-squared tests; graft and patient survival were estimated using Kaplan-Meier methods. 1,065 recipients were included. 168 (15.8%) underwent LDLT for ALD; the majority were male (70.8%) and Caucasian (92.7%), with median (IQR) age 53.3 (47.5-59.0) years, BMI 26.2 (23.2-29.5), and MELD 15 (13-19). 94.6% of recipients received a right lobe graft, and 70.8% were biologically related to their donor. Compared with recipients who underwent LDLT for other indications, those with ALD had a greater proportion of male recipients (70.8% vs. 55.9%, p<0.001) and concomitant HCV diagnosis (44.6% vs. 36.6%, p=0.048). Between ALD and non-ALD, there was no significant difference in the median number of complications or in biliary stricture (32.5% vs. 33.3%, p=0.84). Patient survival was similar for ALD and non-ALD recipients at one (94% vs. 91%), five (83% vs. 79%), and ten (61% vs. 66%) years post-transplant (p=0.32). There was no difference in graft survival between ALD and non-ALD recipients at one (88% vs. 84%) or five … years. We found no significant difference in major complications, patient survival, or graft survival among patients transplanted for ALD versus other etiologies of liver disease.

Abstract# 556 Outcomes of Interventional Radiology Treatment for Vascular Complications Following Living Donor Liver Transplantation: The Efficacy of Stent Placement. S. Mizuno, A. Nakatsuka, A. Hayasaki, U. Iizawa, et al. Background: Vascular complications are life-threatening when they develop after living donor liver transplantation (LDLT), and interventional radiology (IVR) is a less invasive therapeutic option than surgical approaches for treating them. The aims of this study were to evaluate the short- and long-term efficacy of IVR treatment for vascular complications after LDLT.
Patients: Among the 151 patients who underwent LDLT between … hepatic arterial collaterals subsequently developed, and both patients avoided graft failure. Among the sixteen patients who developed outflow block, the median age was 54 years (range 4-69), and the median interval between LDLT and IVR treatment was 13 days (range 1-107). Stent placement was performed across the obstructed outflow veins, without complications, in all patients with venous stenosis and a pressure gradient of more than 5 mmHg. Patency of the HA, HV, and IVC was maintained in all patients after stent placement, as confirmed by Doppler ultrasonography. The longest follow-up was 105 months for HA stents, 182 months for HV stents, and 178 months for IVC stents.

Concurrent Session: New Approaches to Target Regulatory T Cells. Abstract# Interferon-β Promotes Regulatory T Cell (Treg) Induction and Prolongs Allograft Survival. M. Fribourg, M. McGinty, et al.

Our initial findings were surprising in that CD4-DEPKO mice did not develop autoimmune disease for up to 1 year. Nevertheless, we find increased phosphorylation of S6 and Akt (S473) in CD4+ T cells following mitogen activation, suggesting that mTOR is hyperactive in the absence of DEP. By RNA-seq, CD4-DEPKO cells show deficient cytokine signaling. To validate this possibility, we performed in vitro T helper cell differentiation assays and find that CD4-DEPKO cells produce more IL-4 and IL-17A than controls. We also generated Foxp3-DEPKO mice to evaluate DEP function in immunoregulation; KO mice develop patchy skin lesions with hyperkeratosis and local inflammatory infiltrates over a 9-12 month period. Phenotyping of T cell subsets from Foxp3-DEPKOs was normal at times preceding dermatitis. In contrast, over time Foxp3-DEPKOs had increased numbers of activated splenic CD4+ T cells (CD69+: 27% vs. 13%, P<0.05) and expanded memory subsets (CD44highCD62Llow: 93.1% vs. 32.4%; P<0.05). Notably, there was also a dramatic increase in the number of CD4+Foxp3+ T cells in Foxp3-DEPKO mice (27% vs. 15%; P<0.05), suggesting high turnover and/or dedifferentiation of Tregs in the absence of DEP. We also transplanted MHC class II-mismatched B6.C-H-2bm12 hearts into Foxp3-DEPKO recipients (C57BL/6) or controls. Following transplantation … Thus, DEP regulates cytokine responsiveness and T helper cell differentiation as well as Treg function and stability in vivo. Mechanisms of DEPTOR Function in CD4+ T Cell Subsets.

Abstract# 561 OX40 Costimulation Prevents Induction of Foxp3+ Tregs through Upregulation of Novel Repressors of Foxp3 Gene Expression. Immunobiology & Transplant Research Center. TORC1 Inhibition Protects Activated Human Regulatory T Cells from Self-Inflicted Damage via … Confocal microscopy with Z-stack shows GrA and GrB outside of the CD107+ granules, both in the cytosol and in the nucleus of Tregs. While GrB is inactive at the acidic pH of the granules, we show that its release into the cytosol significantly activates its protease activity, leading to cleavage of multiple substrates. We used flow cytometry (GranToxiLux assay) and immunoblotting to probe for known Gr substrates at 0.5, 1, 2, and 3 days after in vitro activation of healthy donor Tregs (3 separate donors/experiments). Using cytometry by time of flight (CyTOF), we show an increase in GrB-expressing Tregs in the peripheral blood and renal allografts of transplant recipients undergoing rejection.
These GrB-expressing Tregs showed a Th1- and Th17-like phenotype with increased expression of chemokine receptors that mediate trafficking to sites of inflammation (n=20, P<0.05). However, GrB-expressing Tregs were significantly more apoptotic than non-GrB-expressing Tregs (n=20, P<0.05). This potentially novel finding improves our understanding of Treg survival and suggests that manipulating Gr expression or activity might be useful for designing more effective Treg therapies.

Abstract# 563 Using mice with a novel inducible B cell-specific deletion of IL-10 (IL-10fl/fl × hCD20-TAM-Cre), we now confirm that IL-10 is essential for Breg activity. However, many aspects of Breg biology remain unclear, including how and where Bregs regulate immune responses within lymphoid tissues. Unfortunately, Bregs lack a specific marker, and IL-10 protein can generally only be detected after ex vivo stimulation. Thus, the actual B cell subsets that express IL-10 in vivo remain unknown. Moreover, Breg localization and the cellular interactions required for their suppressive function in vivo have never been examined. Using IL-10 eGFP reporter mice, we could detect GFP+ B cells without ex vivo stimulation. B cell IL-10 increased 2… Bregs expressed higher levels of CCR7 than control B cells, which may explain their capacity to localize more frequently to the T-B border and interact with T cells. Our results are the first comprehensive examination of the B cell subsets that constitute Bregs in vivo and of their dynamic in vivo interactions with T cells and DCs.

429 potential organ donor screening labs were reviewed. There were 2,107 referrals that were seropositive for either HBV or HCV but were not viremic by NAT. 83.8% of the total population became organ donors. Conclusions: As expected, NAT detected infections that serology did not, especially in the case of HCV. Importantly, there are many seropositive, NAT-negative donors that should be safe to use in certain recipients, especially HCV Ab+/NAT- donors. The effect of these tests on utilization will be evaluated.

Donor-Derived Transmissions in 2016-2017: Analysis of the OPTN Ad Hoc Disease Transmission Advisory Committee (DTAC). … United Network for Organ Sharing to assess the likelihood of donor transmission. Reporting policy changed significantly in 09/2016, emphasizing recipient disease and a limited set of impactful donor diseases, thereby changing awareness of transmission events. 172 reports were reviewed by DTAC, down from 212 reviewed in the previous year. A standardized algorithm was used to classify each donor and the individual recipients of reported donors. Results: 35/172 (20%) reported donors had proven or probable (p/p) transmission of infectious (I), malignant (M), or other conditions to recipients. Bacterial and viral pathogens were each reported in 9 p/p donors, leading to 13/29 and 15/28 organs becoming infected, respectively. Kidney recipients had the highest disease penetrance, with 30/49 (61%) affected, mostly viral. Although 21 reports involved HBV/HCV, only 2 involved a viral transmission. The proportion of p/p cases has risen notably, during a year when reporting requirements focused on recipient disease rather than all positive cultures.
Malignancy numbers remain significant and continue to contribute to mortality among recipients with PDDTE.

Abstract# 566 Waitlist Outcomes and Access to Liver Transplantation for HIV+ Candidates. Methods: We linked data from SRTR (2002-2012) to pharmacy fills (IMS Health; covering 65% of the LT waitlist in the study period) to identify HIV+ candidates (defined as ≥1 fill of an HIV-specific medication). We modeled risk of waitlist mortality (censored for transplant) and rate of transplant (censored for death) for HIV+ vs. HIV- LT candidates using Cox regression, adjusting for MELD score at listing, age, sex, race, ethnicity, and hepatitis C (HCV) status. HIV+ candidates had a higher risk of waitlist mortality (…63, p<0.001) and a 41% lower rate of transplant (aHR 0.59, 95% CI 0.50-0.70; p<0.001), adjusting for candidate factors. Discussion: HIV+ LT candidates face a higher risk of waitlist death and a lower likelihood of receiving a transplant compared with their HIV- counterparts. Understanding this disparity in waitlist outcomes and access to LT is critical for improving care for HIV+ candidates.

Outcomes after HIV+ Liver Transplant: A HOPE in Action Study. Combining all eclipse-window infections, all were US PHS increased-risk donors; 13/14 had either active IVDU or drug use with unknown injection practices; …/14 donors had HCV NAT drawn ≤48 hours after admission, and 5/14 within 24 hours (median 40 hours). Eclipse NAT window-period infections remain rare; however, especially in active IV drug users, the timing of NAT testing in relation to admission and cross-clamp may be important.

Antiviral Therapy for Donor-Derived Hepatitis C Virus Infection after Solid Organ Transplantation. A. Kwong, A. Wall, et al.

In comparison, 15% of centers had a change in the traditional 3-tier rating at three years. 5-tier ratings at four years had minimal association with the baseline rating. Centers had a median of 3 different 5-tier ratings over the period (Q1=2, Q3=4). Findings were consistent by center volume and baseline 5-tier rating. As Clear as Mud: Volatility of the Five-Tier Quality Assessment of Kidney Transplant Centers in the United States. Positive Relationship between Transplant Center Volume and Utilization of Hard-to-Place Liver Allografts after Late Declines.

Cost Impacts of Alternative Kidney Transplant Immunosuppression: A National Cohort Study. Costs of post-transplant care were estimated from Medicare Part A & B payments (N=62,698). Marginal costs of induction (all periods) and maintenance (year 1, year 2, year 3) immunosuppression (ISx) were estimated from multivariate regression, including adjustment for baseline recipient, donor, and transplant factors. Compared with IL2rAb, Thymoglobulin (TMG, $10,066) and other induction ($21,627) were associated with increased costs of the transplant hospitalization, while all other maintenance regimens were associated with higher costs in year 1. Patients who received mTORi-based and CsA-based ISx continued to incur higher costs in years 2 and 3 post-KTx. Induction and maintenance ISx are associated with differential costs of care during the transplant hospitalization and post-transplant periods, which may be mediated by different complication rates.
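A minimal sketch of how adjusted marginal-cost estimates of this kind can be obtained (hypothetical variable names and simulated data; an ordinary least squares model on payments with covariate adjustment, not the authors' exact specification):

```python
# Hypothetical sketch of estimating marginal costs of induction agents from payment
# data with covariate adjustment. Variables, data, and model form are placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 1000
df = pd.DataFrame({
    "induction": rng.choice(["IL2rAb", "TMG", "other"], size=n),
    "age": rng.normal(52, 13, n),
    "diabetes": rng.binomial(1, 0.3, n),
    "dialysis_years": rng.exponential(3, n),
})
base = 90_000 + 300 * df["age"] + 8_000 * df["diabetes"] + 1_500 * df["dialysis_years"]
bump = df["induction"].map({"IL2rAb": 0, "TMG": 10_000, "other": 21_000})
df["hosp_cost"] = base + bump + rng.normal(0, 15_000, n)

# OLS with IL2rAb as the reference category; the coefficients on the induction
# terms are read as adjusted marginal costs relative to IL2rAb.
model = smf.ols(
    "hosp_cost ~ C(induction, Treatment(reference='IL2rAb')) + age + diabetes + dialysis_years",
    data=df,
).fit()
print(model.params.filter(like="induction"))
```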
Initial higher costs of TMG induction appear to be followed by later cost savings.

These data suggest that "Transplant First" should be a priority rather than the current "Fistula First" guidelines.

We determined whether persistent MXC in HLA-matched and -mismatched recipients of combined kidney and hematopoietic cell transplants from living donors followed the preclinical paradigm and allowed for permanent MXC and graft acceptance in the absence of ISD. Recipients were given a 10-day post-transplant conditioning regimen of total lymphoid irradiation and ATG. Of 29 HLA-matched pts, 24 developed MXC for ≥1 yr and were withdrawn from ISD, without subsequent rejection in 23 of the 24, with up to 12 yrs of follow-up. MXC persisted after drug withdrawal in 10. However, 14 lost MXC during the 2nd yr while graft acceptance was maintained, except in 1 pt with mild rejection 3 yrs off ISD. Pts with graft tolerance had specific unresponsiveness to donor cells in MLR even in the absence of MXC. There were 2 graft losses out of 29 during the 12-yr observation period, both from disease recurrence. Biopsies obtained from mixed chimeric pts before discontinuation of ISD showed no rejection. Of 22 HLA haplotype-mismatched pts, 18 have been followed for ≥1 yr, and 9 developed MXC that persisted at least 1 yr. MMF was withdrawn from the chimeric pts, and they were maintained on tacrolimus (TAC) monotherapy at the end of the 1st yr. Biopsies showed no rejection. However, withdrawal of TAC in 6 during the 2nd yr resulted in loss of MXC, with evidence of mild rejection in 3; they returned to ISD. MXC in these HLA-mismatched pts did not follow the "classical" paradigm: it was dependent on ISD. The remaining persistent chimeras have been maintained on TAC monotherapy. There has been no graft loss or chronic rejection in the 22 pts, with up to 7 yrs of follow-up. Neither severe or chronic infection nor graft-versus-host disease has been observed in the HLA-matched and -mismatched pts. In conclusion, persistent MXC was associated with lack of rejection in both HLA-matched and -mismatched pts. Lack of rejection off ISD was observed even after loss of chimerism in HLA-matched pts, but not in HLA-mismatched pts. Persistent MXC in HLA-mismatched pts maintained on low-dose TAC monotherapy is a desirable outcome.

Abstract# 586 Long-Term Follow-Up of a Phase 2 Clinical Trial to Induce Tolerance in Living Donor Renal Transplant Recipients. A G-CSF-mobilized peripheral blood mononuclear cell product was apheresed from the donor >2 weeks pre-KTx, processed to remove graft-versus-host disease (GVHD)-producing cells yet retain CD34+ cells and FC, and cryopreserved until use on day +1 post-KTx. 36 pts have reached at least 1 year of follow-up (range 12-105 months) and are the focus of this analysis. Pts ranged in age from 18 to 65 yrs. Enrollment was agnostic to degree of HLA match; chimeric pts retained chimerism after removal of IS, remain rejection-free without DSA, and show immune competence. Transiently chimeric subjects resumed endogenous hematopoiesis and are maintained on low-dose IS with stable renal function. There have been two cases of GVHD: one grade 1-2 acute GI GVHD that responded to steroids (this pt has developed chronic GVHD); the second presented late, following development of symptoms, and manifested treatment-resistant GI GVHD with associated CMV that proved fatal at 11 months post-Tx. There have been two additional graft losses, both previously reported and related to infections.
A second subject death occurred in a heavy (>100 pack yr) smoker who developed advanced stage lung cancer 4.5 years after Tx. Overall patient survival is 94.4% and death censored graft survival 94.1%. In summary, high levels of durable chimerism and tolerance with a low (5.5%) incidence of GVHD has been achieved in recipients of FCRx + LDKTx Ownership interest PD-L1 May Serve as a Biomarker for Chimerism-Induced Tolerance in Kidney Transplant Recipients. A. Merchak, 1 A. Khalil Six transiently donor chimeric (TDC) patients are maintained on tacrolimus monotherapy. The primary objective of this study is to determine biomarkers predictive of chimerism and tolerance. mRNA from PBMC from 9 stable donor chimeric (SDC) patients and 4 TDC patients at pretransplant, 9 months, and 18 months post-transplant were analyzed by microarray. Select immunomodulatory markers were then analyzed by qPCR and fl ow cytometry. The microarray revealed 681 and 129 differentially expressed genes after transplant in SDC patients and TCD patients, respectively, indicating that SDC patients had more profound and lasting biological changes. We found 311 genes differentially expressed before transplant between the two groups. PD-L1 was one of the most differentially expressed genes before transplant in patients who became durably chimeric vs. ones that were only transiently chimeric (1.33 fold change; P<0.001). The PD-L1/CD86 ratio gives an integrated snapshot of the strength of regulatory events relative to infl ammatory events and has been used to predict outcomes in liver transplant. qPCR analysis showed that SDC patients experienced a 13.2 fold increase in PD-L1 expression after transplant while TDC patients experienced only a 0.9 fold increase. The PD-L1/CD86 ratio became higher overall in SDC patients. In addition, SDC patients showed an increase in PD-L1/CD86 ratio 18 months after transplant while TCD chimeric patients showed no change. In conclusion, the PD-L1/CD86 ratio might serve as a biomarker to predict success in clinical FCRx tolerance trials Conditioning with Low-Dose Total Body Irradiation in Place of Cyclophosphamide Induces Mixed Chimerism and Long-Term Immunosuppression Free Allograft Survival without Acute Kidney Injury in HLA Mismatched Kidney Non-Myeloablative Conditioning with Low-Dose Total Body Irradiation in Place of Cyclophosphamide Induces Mixed Chimerism and Long-Term Immunosuppression Free Allograft Survival without Acute Kidney Injury in HLA Mismatched Abstract# 589 Primary immunodefi ciency patients may develop pulmonary complications and most are ineligible for either lung transplant or BMT due to futility. We report our fi rst 2 subjects enrolled on NCT01852370 trial receiving tandem BOLT&BMT from the same deceased UNOS donor. Case 1: 14 year old female with IL-7R SCID & recurrent pneumonia underwent BOLT from a 2/6 HLA antigen (1/8 allele level) matched donor. Marrow suspension prepared from T11-L4 VB was CD3+/CD19+ depleted(CliniMACS ® ) then frozen along with ~20 fold fewer TNC & CD34+ cells from the iliac crest. At 3m post-BOLT, she started BMT conditioning. A month later, the marrow was thawed and infused (5E+06 CD34+ cells/kg and 8E+04 T cells/kg) DLI resulted in dominant donor T and B cell chimerism (Fig1A, B) unlike other subsets (Fig1C,D). 
BAL values exhibited different kinetics (Fig1) At 18m post-BMT (2m post-IST withdrawal), circulating donor T cells were unresponsive to host DC while responded to 3 rd party APC (Fig3) She had 3 episodes of lung rejection before becoming eligible for BMT (1/8 match) 14 months later. She engrafted by day+12 with 95% whole blood chimerism but T cells of host origin that was treated with DLI Case 1 is the fi rst in human to demonstrate durable engraftment, immune competence, and tolerance from deceased donor VB marrow matched only at a single class I MHC allele, providing proof of principle currently tested in adults Abstract# 590 Long Term Histologic Allograft Health is Preserved Despite Rejection during Protocolized Immunosuppression Withdrawal (ISW) in Stable Pediatric Liver Transplant Recipients 6 1 Children's Hospital Background: Long term allograft health after rejection during ISW has not been previously assessed. We compare baseline and 4yr graft histology of 20 rejectors in iWITH For-cause biopsy was triggered by clinical judgment and mandated for ALT or GGT>100 IU/L. Rejection treatment was by site standard of care; resolution was defi ned as ALT and GGT<1.5X at study entry. A central pathologist graded rejection by Banff criteria and quantifi ed fi brosis by Ishak stage (F0-F6) and Liver Allograft Fibrosis Score Of the 35 with rejection, 20 have completed the study.(Table) Rejection occurred at a median(IQR) 25rejection in median(IQR) 11(7-17)wks. At the end of study portal;1 perivenular). A single subject had Ishak ∆+2 (F0→2) and LAFSc ∆+3 (F0→F3;∆+1 portal, sinusoidal and perivenular) 95% of rejectors showed stable graft histology over 4yrs. Intense lab test monitoring and low biopsy threshold may be essential to ensure safe IS minimization or withdrawal Abstract# 591 Among the latter, 9 tolerant patients were off IS completely. All DSA were screened using Luminex Single antigen beads with a 1000 MFI cutoff for positivity. Among the patients who achieved stable IS monotherapy (Figure A), the incidence of post-transplant DSA, dnDSA and pre-transplant DSA rebound was 28%, 19% and 9%, respectively. The incidence of dnDSA was higher in nonrandomized patients (p=0.002). The majority of dnDSA was HLA-DQ DSA the incidence of posttransplant DSA was 65%, dnDSA was 60%, and pre-transplant DSA rebound was 13%. The incidence of dnDSA between the IS maintenance, tolerant and nontolerant patients was similar. The majority of dnDSA was HLA-DQ DSA. Lastly, post-transplant DSA was found to be a risk factor for graft rejection during IS withdrawal In conclusion, the development of post-transplant DSA increases after an episode of early acute rejection, after early minimization of IS, and after complete withdrawal of IS. It is yet to be explored whether early expression of DSA in the setting of low or no IS is associated with long term impact on the allograft We also cloned PIR-A3 from B6 mice (H-2b) and constructed PIR-A3/Fc fusion protein, and showed that PIR-A3/ Fc strongly labels allogeneic BALB/c cells. Moreover, in HEK293T cells that overexpress the PIR-A3 receptor, they bind the H-2Dd pentamer with much higher affi nity, thus defi nitively demonstrating the allospecifi city of PIR-A3 to allogeneic MHC class I molecules. We also showed in a reporter system consisting of PIR-A3 extracellular domain fused to CD3e intracellular domain that binding of allogeneic MHC class I molecules to PIR-A3 triggers signaling activities in reporter cells. 
In our model PIR-A3 expression on macrophages requires CD40 stimulation, primarily due to chromatin remodeling at Pira and Pirb loci. Further in vivo studies showed that treatment of Rag1 -/-γc -/-mice with PIR-A3/Fc fusion protein inhibited the induction of allospecifi c macrophages and rejection of allogeneic BALB/c cells, and in the BALB/c-to-B6 heart transplantation model, the PIR-A3/Fc fusion protein inhibited macrophage activation, and signifi cantly prolonged allograft survival. In conclusion, we identifi ed PIR-A3 as the innate allorecognition receptor that confers macrophages the ability to directly recognize allogeneic non-self in transplant settings, and this fi ndings are signifi cant in developing new antirejection therapies Abstract# 594 Role of B Cell Intrinsic NLRP3 in the Development of Chronic Rejection. J. Nie, 1 B. Ramaswami, 2 X. Guo, 3 R. Hoffman, 2 G. Chalasani. 3 1 Thoracic Surgery Methods: We made bone marrow chimeras lacking NLRP3 only in B cells. Irradiated μMTCD45.1/CD45.2 mice were transplanted with bone marrow cells from μMTCD45.1/CD45.2 and wtCD45.2 (μMT+wt) or μMTCD45.1/ CD45.2 and NLRP3-/-CD45.2 (μMT+NLRP3-/-). NLRP3 defi ciency (CD45.2+) was restricted to B cells (98±0.5%) and limited in non-B cells (15±5%). 12 weeks later, chimeras received OT1 (1 x 10 6 ) and OT2 (3 x 10 6 ) Thy1.1+ T cells followed by H2b/d-Ova heart transplants. Extent of chronic allograft vasculopathy (CAV) was assessed by morphometric quantitation of luminal occlusion of vessels at 70days. We examined endogenous (Thy1.1-) and transferred T cell (Thy1.1+) functions after restimulation with donor cells by FACS. B cell cytokines were analyzed after CpG DNA and donor cell stimulation Everolimus with Reduced Calcineurin Inhibitor Exposure in De Novo Kidney Transplant Recipients: Effi cacy and Safety Outcomes from the TRANSFORM Study Everolimus with Reduced Calcineurin Inhibitor Exposure in De Novo Kidney Transplant Recipients: Effi cacy and Safety Outcomes from the TRANSFORM Study Grant/Research Support, Novartis, Astellas and Alexion, Honoraria, Novartis, Astellas and Alexion, Travel, Novartis, Astellas and Alexion The Results of the PRISM (Prediction of Rejection In Sensitized Patient Blood SaMples) Trial with a Novel Bioassay This is the fi rst assessment of the predictive accuracy of pre and serial post-tx kSORT in a prospective clinical trial of high immunologic risk tx patients. Methods. 113 kidney tx recipients, with cPRA of >50% (median 97%) were enrolled pre-tx and followed for 6 mo post-tx in the PRISM (Prediction of Rejection In Sensitized SaMples) trial. A protocol bx was done at 6 mo and/or at graft dysfunction Customized software kSAS generated actionable immune risk scores as High-(HR) or Low-(LR) risk for AR. All patients had induction with Thymoglobulin and maintained on TAC, MMF and prednisone. Statistical analysis used R and Fisher's exact test The overall predictive accuracy of the pre-tx kSORT (assessed in 54 pts) was 90.3% for no-rejection post-tx. For pre-tx kSORT: 80% wereLR,18% were -HR and did not correlate with cause of sensitization. For post-tx blood samples paired with bx, 51 patients had no-rejection and 47 were correctly classifi ed as LR (specifi city 92.2%) serial times post-tx to monitor patients at low and high risk of rejection in highly sensitized patients. A LR kSORT score has 90% accuracy for predicting freedom from rejection, either before or after tx. 
Pre and post-tx kSORT assessment is an important adjunct measure for instituting Precision Medicine in the management of sensitized renal tx recipients and optimizing their outcomes :1) to receive either EVR (C0-h 5-10ng/mL) with reduced CNI (TAC C0-h 3-8ng/mL or CsA C0-h 50-150ng/mL) and steroids (≤0.3mg/kg) or EVR (C0-h 5-10ng/mL) with mycophenolic acid (EC-MPS max. 2880mg/day or MMF max. 3g/day) and steroids (≤0.3mg/kg). The primary objective was assessment of eGFR (MDRD) 12 months after randomization for superiority in CNI-free over CNI-reduced everolimus group. Key secondary objectives included effi cacy (composite of BPAR ISHLT 1990 grade ≥3A / ISHLT 2004 grade ≥2R, graft loss / re-transplant, death or loss to follow-up) and assessment of safety profi les including infections. Results: Primary endpoint for superior renal function in CNI-free EVR arm was met with high signifi cance with a difference of +11.2 ml Per protocol analysis showed a difference of +19.0 ml/min in favor of CNI-free EVR arm (p<0.0001) [cGFR (ml/min) from MDRD formula Data from full analysis will be available for presentation at ATC2018 meeting. Conclusion: The MANDELA study showed that improved renal function can be achieved by early conversion to an everolimus-based CNI-free regimen in HTxR without compromising safety and effi cacy Scientifi c/medical advice and active participation in advisory boards, Other, Author's institution received study honoraria, Mandela study honoraria (Novartis, research reimbursements) to institution. Garbade, J.: Other, Author's institution received study honoraria Scientifi c/medical advice, Other, Author's institution received study honoraria, Mandela study honoraria (Novartis, research reimbursements) to institution Other, Author's institution received study honoraria, Mandela study honoraria (Novartis, research reimbursements) to institution. Ding, Z.: Other, Author's institution received study honoraria Research support to author's institution Author's institution received study honoraria, Mandela study honoraria (Novartis, research reimbursements) to institution Abstract# 603 ) suggested that kidneys from offspring live donors conferred worse post-transplant outcomes than non-offspring live donors, and advocated selecting non-offspring live donors when able. However, this study did not account for young age and high HLA match of offspring kidneys We modeled death censored graft failure (DCGF) and recipient mortality using Cox regression. We adjusted for recipient characteristics only, since donor related characteristics are mediators of offspring donors and post-transplant outcomes. Results: Offspring kidney recipients were older than non-offspring (61 vs 51, p<0.001) Abstract# 604 Results of LITMUS (NCT 02541916): The Liver Immune Tolerance Bio Marker Utilization Study Background: Previously we reported a novel biomarker gene set for the identifi cation of tolerance in murine models of rapamycin-induced cardiac tolerance and spontaneous hepatic tolerance. GeXP multiplex rRT-PCR was used to amplify 22 prominent immunoregulatory genes Prophylaxis. M. Fung, 1 J. Greenland, 2,3 S. Hays, 2 J. Singer, 2 J. Golden, 2 P. Chin-Hong. 1 Temporal and Geographical Trends of Intestinal Transplantation in the USA. M. Segovia, 1 S. Jafri, 3 B. Summers, 3 T. Schiano, 2 T. Pietrowsky, 3 A. Al-Osaimi, 5 A. Mavis, 1 S. Horslen. 4 1 Duke Univ, Durham; 2 Mt Sinai Hosp, NY; 3 Henry Ford Hosp, Detroit; 4 Seattle Children's Hosp, Seattle; 5 Temple Univ, Philadelphia. 
Intestinal transplantation (IT) is uncommon compared to other organ transplants. We examined the temporal and geographical trends of centers that perform IT in the USA We obtained information about IT centers from the OPTN and analyzed it by era and age of recipients. The analysis was divided into 3 periods: 1)1990-1999, 2)2000-2009, 3)2010-September 2017. The country was divided into census (West, Midwest, Northeast, South) and UNOS regions. High volume(HV) centers were those performing an average of 10 or more IT per year. Pediatric patients were those <18 years old In 1990, 2 UNOS regions had active IT centers: 2 and 8. HV centers were: 1 during period 1 (UNOS region 3), 6 during period 2 (regions 2, 3, 8, 10), 5 during period 3 (regions 2, 3, 8, 9, 10). Pediatric patients accounted for 65.3% of 398 IT performed during period 1, 56.3% of 1485 IT in period 2 and 40.7% of 1013 IT in period 3. Centers doing at least 1 IT in any given year remained relatively constant in number during the 3 periods. The Midwest had the greater stability in number of centers doing an IT in any given year, as have UNOS regions 1, 3, 7, 9. The number of centers doing IT every year almost doubled from period 1 to 2 and since then has remained relatively stable. The location of these centers has remained stable with UNOS regions 2 and 10 currently accounting for 53.8% of BACKGROUND: Although HLA matching improves graft survival in kidney transplant recipients (KTx), eplet mismatches provide higher resolution information, and could better predict the development of donor specifi c antibodies (DSA). Currently, limited prospective data exist on the association between eplet mismatch and DSA. AIM: To assess the relationship between eplet mismatch and DSA. We calculated the number of HLA-A, B, C, DR and DQ eplet mismatches in a prospective cohort of KTx transplanted between July 2010 and August 2017. Where possible, high resolution (4-digit) donor and recipient typing was entered into HLA Matchmaker to calculate the number of eplet mismatches. Where high resolution typing was not available, the HLA Matchmaker Converter program was used to convert 2-digit typing to 4-digit typing prior to calculation of mismatches. All patients were prospectively screened for DSA pretransplant and at 3 and 12 months post-transplant. The association between eplet mismatch, DSA and clinical outcomes was assessed using multivariable analysis. RESULTS: Of 313 patients transplanted during the study period, high resolution typing or conversion was completed for 202 recipients, 253 donors and 166 donor-recipient pairs. Conversion of low to high resolution typing was limited by the absence of ethnicity (n=83) or haplotype (n=64) data in the Converter database for 147 patients (47%). Of the 166 pairs with eplet mismatch data, DSA screening was complete for 150 patients. There was a mean of 14 (SD 7.7) Class I and 17 (SD 11.8) Class II eplet mismatches per patient (Table 1) . 64 patients (43%) had pretransplant DSA (preDSA), 30 (20%) had de novo DSA (dnDSA) and 17 (11%) had both pre-and dnDSA. The number of HLA-A, B, C and DR eplet mismatches was associated with detection of preDSA (OR 1.04; 95% CI 1.01-1.07; P=0.007), however there was no association between HLA-DQ eplet mismatches and preDSA. There was no association between eplet mismatches and dnDSA. CONCLUSION: Calculated HLA-A, B, C and DR eplet mismatches were associated with pretransplant DSA but not dnDSA. 
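For readers unfamiliar with how a per-eplet odds ratio such as the OR 1.04 (95% CI 1.01-1.07) reported above is obtained, the following is a minimal sketch of a multivariable logistic model fit to simulated data; the variable names, simulated counts, and effect sizes are illustrative assumptions, not the study's analysis.

```python
# Hypothetical sketch: odds of pretransplant DSA per additional HLA-A/B/C/DR
# eplet mismatch, from a multivariable logistic regression on simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 150
df = pd.DataFrame({
    "abcdr_eplet_mm": rng.integers(0, 40, n),   # class I + DR eplet mismatches
    "dq_eplet_mm":    rng.integers(0, 25, n),   # DQ eplet mismatches
})
# Simulate DSA status with a weak positive effect of A/B/C/DR mismatches.
logit_true = -1.0 + 0.04 * df["abcdr_eplet_mm"]
df["pre_dsa"] = (1 / (1 + np.exp(-logit_true)) > rng.random(n)).astype(int)

fit = smf.logit("pre_dsa ~ abcdr_eplet_mm + dq_eplet_mm", data=df).fit(disp=0)
print(np.exp(fit.params))       # odds ratios, roughly 1.04 per A/B/C/DR mismatch
print(np.exp(fit.conf_int()))   # 95% confidence intervals on the OR scale
```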
Background: Kidney transplantation confers a significant survival benefit to its recipients. Our objective was to reassess this survival benefit in light of improving outcomes on dialysis and the stagnant long-term survival after kidney transplantation. Methods: We studied a cohort of 768,214 candidates listed in the Organ Procurement and Transplantation Network for kidney transplantation between 1987 and 2016; 372,876 of the candidates (49%) were transplanted. We divided the study period into approximately four-year cohorts. Kaplan-Meier and multivariable Cox regression analyses were used. We also used competing-risk analysis to confirm our waitlist analysis. Results: Waitlist survival for kidney transplantation has improved steadily over the study period. 5-year survival on the waiting list was 53.8% in 1988, 50.3% in 2000, and 78.3% in 2012 (p<0.001). The accumulated experience has improved SRLT outcomes, and the split/reduced liver graft is no more hazardous than whole liver grafts. Background: The long-term impact of living donation (LD) in liver transplantation on health-related quality of life (HRQOL) has not been well characterized. This study was designed to investigate HRQOL in living liver donors at several intervals up to 19 years post-donation in a single institution. Methods: Between 2004 and 2017, HRQOL was assessed at three intervals (2004, 2010, 2017) using a generic instrument to assess generic outcomes. A separate Donor Quality of Life Survey (DQLS) was designed to explore the financial, medical, and psychosocial impact of LD. Introduction: PVN is a common complication occurring around 12 mo after renal transplant (RT). We aimed to explore the differences between early- and late-onset PVN with regard to histological findings and graft survival. Methods: Indication and follow-up biopsies of 71 patients with PVN were re-evaluated and examined for development of interstitial fibrosis (IF). Interstitial plasma cells, neutrophils, CD3, CD4, CD8, HLA-DR-positive cells, and macrophages were graded. Patients were separated into 4 groups based on time of development of PVN after RT [Group A: <12mo (n=35), Group B: 12-24mo (n=13), Group C: 24-48mo (n=15), Group D: >48mo (n=8)]. They were also grouped into 2 categories: Group 1 (n=48), early PVN (≤12mo), and Group 2 (n=23), late PVN (>12mo). Results: The mean interval between the diagnosis of PVN and RT was 17±22 months. Group 1 had a higher mean hemodialysis (HD) time before RT (p<.05). CMV viremia was also found in 27 patients. The mean viral load in urine and plasma at diagnosis was higher in Group 1 (p<.01). Group 1 showed lower stages and higher degrees of polyoma viral load (Pvl) on biopsy (p<.05). Viral load in urine, plasma, and biopsy increased from Group A to D. Group 1 showed higher degrees of interstitial neutrophil, plasma cell, macrophage, lymphocyte, and DR-positive cell infiltration and a lower CD4/CD8 ratio (p<.01). All inflammatory cell types increased from Group A to D. Interstitial CD4/CD8 ratio showed a significant negative correlation with Pvl (r=-0.320, p=.01), viremia (r=-0.602, p<.001), and viruria (r=-0.748, p<.001). 43 patients (60.6%) developed IF during follow-up, and 31 (43.7%) lost their graft 18±14mo after PVN. The risk of development of IF increased from Group A to D (p<.01). Compared to Group 2 (28.7±16mo), mean time to graft loss after PVN was earlier in Group 1 (13.3±9mo). The mean time to graft loss also decreased from group A to D (p<.05).
Positive correlation was found between graft loss and CD4/CD8 ratio (r=0.391, p<.05). Recipients with late-onset PVN had a better prognosis than earlyonset cases. Host cells may infl uence the time of PVN onset. Lower CD4 and higher CD8 proportions were risk factors for early-onset PVN and poor graft survival. CITATION INFORMATION: Ozdemir B., Ayva S., Terzi A., Ok Atilgan A., Akcay E., Ayvazoglu Soy E., Ozdemir F., Haberal M. Comparison of Recipients with Early and Late Presenting Polyomavirus Nephropathy (PVN) with Regard to Histological Findings and Graft Survival: What is the Infl uence of CD4/CD8 Ratio on the Presenting Time of PVN Am J Transplant. 2018; 18 (suppl 4) . Origin and Pattern of Human Polyomaviruses Replication after Kidney Transplantation: A Prospective Study. E. Favi, 1 S. Delbue, 2 C. Colico, 1 P. Ferrante, 2 S. Villani, 2 I. Perna, 2 N. Mondoni, 2 A. Giussani, 1 L. Clementoni, 1 P. Messa, 3 M. Ferraresso. 1 1 Kidney Transplantation, Fondazione IRCCS Ca' Granda, Milan, Italy; 2 Biomedical, Surgical and Dental Sciences, University of Milan, Milan, Italy; 3 Nephrology, Fondazione IRCCS Ca' Granda, Milan, Italy. Background. Human Polyomaviruses (HPyVs) are able to establish latent infection in the kidney of the host. Immunosuppression is recognized risk factor for HPyVs reactivation. Besides the well studied BK virus (BKPyV), JC virus (JCPyV), Merkel Cell Polyomavirus (MCPyV), and HPyV9 have been also detected in kidney transplant (KTx) recipients. Origin and natural history of these HPyVs remain unclear. Methods. Urine, blood, and kidney samples from 39 donor-recipient pairs were collected immediately before and periodically (from 1 to 180 days) after KTx. Samples were tested for BKPyV, JCPyV, MCPyV, HPyV7, and HPyV9 genome using Real Time PCR and automatic sequencing to defi ne viral genotypes and rearrangements. Results. No HPyVs viremia was observed whereas HPyVs viruria was detected in 21/39 (54%) donors and 24/39 (61.5%) recipients. Overall, 14/39 (36%) donor-recipient pairs were positive for HPyVs. JCPyV DNA was detected in 13/39 (33%) donor-recipient pairs with both donors and recipients positive for identical JCPyV strains. JCPyV was consistently positive in the urine of the recipients at any time point of the study. BKPyV was detected in 2 donors and sporadically in 5 recipients (1 donor-recipient pair). MCPyV was detected in 2 donors and sporadically in 10 recipients (1 donor-recipient pair) . Median time from KTx to fi rst viruria for JCPyV, BKPyV, and MCPyV was 1 (range 1-301), 76 (range 24-180), and 14 (range 1-267) days post KTx, respectively. Two cases of concomitant JCPyV and BKPyV, and 1 case of concomitant JCPyV and MCPyV infections were observed. No relationships between HPyVs replication and KTx outcomes were identifi ed during the follow up. Conclusions. Our data confi rm that JCPyV replication is frequently observed in organ donors. They also show that JCPyV replication in KTx recipients generally occurs very early. Moreover, post transplant JCPyV infections are due to viral strains transmitted by the donor. Post KTx BKPyV and MCPyV replication is less frequent, mostly occurs at a later stage, and it is likely due to viral reactivation of recipient's strains or new infections. Extended follow up is needed to rule out clinical impact of early JCPyV infection after KTx. CITATION INFORMATION: Favi E., Delbue S., Colico C., Ferrante P., Villani S., Perna I., Mondoni N., Giussani A., Clementoni L., Messa P., Ferraresso M. 
Origin and Pattern of Human Polyomaviruses Replication after Kidney Transplantation: A Prospective Study Am J Transplant. 2018; 18 (suppl 4 and high healthcare costs, often these challenges are not refl ected in the current organ allocation system. Refractory HE can be managed by obliteration of portosystemic shunts but the safety and effects of these procedures remain to be defi ned. Methods: 10 patients with ESLD and signifi cant portosystemic shunting underwent BRTO. LOS and number of admissions in the 6 months before BRTO were noted. MELD score and components were measured immediately prior to the procedure and at 1, 3, 7, and 10 days post-procedure. Patients undergoing BRTO had been admitted to our hospital 3.9±3.0 times for a cumulative LOS of 48.8±31.5 days in the 6 months pre-procedure ( Background: To prevent ischemic cholangiopathy (IC) after liver transplantation (LT), a protocol optimizing peri-operative conditions along with thrombolytic (tissue plasminogen activator) donor fl ush during DCD procurements was introduced at our center in July 2011. Methods: 57 consecutive DCD LTs were performed using this protocol (Era II).Outcomes were compared with 61 historic controls (Era I) and 1619 donation after brain death (DBD) donor LT. Expanded criteria donor (ECD) DCD livers were defi ned as those with one of the following factors: 1) donor age > 55 years, 2) donor BMI > 35, 4) donor functional warm ischemia time (fWIT) > 30 minutes, and 5) donor liver macrosteatosis >30%.Background: Cytomegalovirus (CMV) reactivation is associated with increased morbidity and mortality in transplant recipients. CMV establishes latency for the host's lifetime and frequently reactivates. Immunosuppression, while protecting the graft from rejection, impairs immune responses to CMV. We previously found that the frequency of CD8 T cells producing cytokine in response to CMV increased over the fi rst year after transplantation in CMV seropositive (CMV+) heart and kidney recipients in the absence of detected viremia, likely as a result of subclinical reactivation. These T cells may not be protective, as CMV is known to induce expansion of dysfunctional T cells with age in non-immunosuppressed populations.Methods: Blood was obtained from healthy volunteers and from heart and kidney transplant recipients pre and up to a year post transplant. Patients received standard of care treatment, including anti-viral prophylaxis and immunosuppression (steroid, calcineurin inhibitor, and anti-metabolite). Kidney recipients also received T cell depleting induction therapy with rabbit ATG. Blood mononuclear cells were cryopreserved for analysis. Here we analyze T cell receptors (TCR) and functions in T cells that secrete IFNɣ in response to CMV peptide antigen using a novel approach coupling paired single cell TCR sequencing to gene expression analysis. Background: Deceased-donor kidney discard rates remain high. Ineffi ciencies in the allocation of hard-to-place kidneys may be contributing to discard suggesting that system-level factors represent opportunities to improve placement. Methods:We analyzed DonorNet® data of consecutive deceased-donor non-mandatory share primary kidney-only offers to adult candidates at our center and beyond between July 1, 2015 and March 31, 2016 to identify system-level risk factors of discard independent of donor quality. Discard was defi ned as non-transplantation at our or subsequent transplant centers. 
Exclusions were HCV/HBV (n=14), blood type AB (n=20), and donor age <1 year (n=25), based on low candidate waitlist size. Results: Of 456 individual kidney offers from 296 donors, 73% were discarded. Most were national (93%) offers from KDPI 35-85% (n=233) or >85% (n=208). Each system-level factor was separately entered into the multivariate model containing donor-level factors; those that remained significant independent factors associated with discard included increasing offer cold ischemia time. Objectives: Living donor liver transplantation (LDLT) is underutilized in high MELD patients. Our aim was to assess the impact of high MELD score on short-term post-transplant outcomes in LDLT recipients at our center. Methods: A total of 352 adult LDLTs were performed from 1999 to present, and outcomes were reviewed retrospectively. Patients were grouped into low (MELD <25) and high (MELD ≥25) score groups to compare short-term outcomes. Results: A total of 326 recipients were in the low MELD group with a mean MELD of 13.72 ± 4.82, and 26 were in the high MELD group with a mean MELD of 27.92 ± 3.72 (P<0.0001). There were no significant differences between the 2 groups with regard to demographics, including age, gender, etiology of liver disease, BMI, or incidence of HCC. The right lobe was the most commonly used graft in both groups, but the middle hepatic vein was included more commonly in the graft when it was utilized in high MELD patients. To study Treg-LTβR engagement, we developed and characterized an in vivo conditional, regulatable knockout of the LTβR. Methods: LTβR fl/fl mice were crossed with Prox1-Cre-ERT2 transgenic mice to generate Prox1-Cre-ERT2+/- LTβR fl/fl mice, in which LTβR is depleted in LEC by tamoxifen treatment. The effects of LTβR depletion on Treg lymphatic migration and migration-related molecules were analyzed by flow cytometry and histology. Results: In Cre-Lox mice, LTβR expression by LEC was markedly reduced by 10 days after tamoxifen treatment, while blood vessel endothelial cells and fibroblastic reticular cells (FRC) maintained expression. LTβR depletion did not affect stromal cell or leukocyte cell counts or percentages in primary or secondary lymphoid organs, suggesting normal immune system homeostasis. Histologic analysis of LN did not show significant differences in the overall architecture, cortical or medullary organization, B and T cell zone segregation, dendritic or plasmacytoid dendritic cell distribution, or the structure of the FRC network. Depletion of LTβR did lead, however, to a marked reduction in the accumulation of Foxp3+ Treg and the expression of CCL21 in the LN. CCL21 is a ligand for CCR7 and the major chemokine important for T cell migration to LV and LN. In vitro stimulation assays showed that both LTβR and Treg directly regulated CCL21 expression and secretion by LEC, confirming that Treg-LTβR-LEC interactions are specific and physiologically important. LTβR depletion also reduced LEC expression of the non-canonical NFκB kinase (NIK) and the inflammatory and chemotactic lipid sphingosine-1-phosphate (S1P). Indeed, the suppressive effects of OX40 on iTreg induction can be completely abrogated by targeting the Batf3/Batf and Akt-mTOR pathways.
We also examined whether OX40 costimulation inhibits allogeneic iTreg generation in vivo through Batf proteins and the mTOR pathway. In a donor-specific cell transfer model, we adoptively transferred CD45.2+ Wt and Batf-/-Batf3-/- naïve CD4 T cells into CD45.1+ OX40L-Tg mice; some host mice were also injected with allogeneic Balb/c splenocytes and treated with rapamycin for 4 days, and we then examined the induction of Foxp3+ T cells within the CD45.2+ fraction. The frequency of induced Foxp3+ T cells from Batf3-/-Batf-/- CD4+ T cells following rapamycin treatment in the lymph nodes and the lungs of the OX40L-Tg hosts was significantly higher than in the other groups. Thus, our data uncover new mechanistic insights into OX40-mediated suppression of allogeneic Foxp3+ iTregs, and these findings may have important clinical implications for tolerance induction. CITATION INFORMATION: Zhang X., Xiao X., Lan P., Li J., Dou Y., Chen W., Ishii N., Chen S., Taparowsky. Six (60%) patients had a prior diagnosis of HCV and had been successfully treated pre-transplant with DAA-based regimens. All recipients were non-viremic at the time of transplantation. At the time of organ procurement, 9 of 10 donors tested positive for HCV by nucleic acid testing (NAT); one donor was HCV antibody-positive but NAT-negative. All recipients acquired HCV infection from the infected donor and were treated post-transplant with DAA-based regimens, with a median time from transplant to treatment of 47 days (IQR 32-63). Five (50%) patients have completed antiviral therapy and have achieved sustained virologic response at 4 weeks (SVR-4). There have been no adverse events related to treatment. Conclusion: Transplantation of HCV-viremic organs into non-viremic recipients is well tolerated and results in acceptable short-term outcomes. All patients initiated DAA-based antiviral therapy within 3 months of transplantation, and all who have completed therapy have achieved SVR-4. In the face of ongoing organ shortage, such strategies may be used to expand the donor pool across organs. BACKGROUND: An increasing number of patients and families are utilizing online crowdfunding to support their medical expenses related to organ transplantation. The factors that influence the success of medical crowdfunding campaigns are poorly understood. We analyzed campaign variables to identify those that correlate with fundraising success. METHODS: Medical crowdfunding campaigns were abstracted from a popular crowdfunding website. Campaigns were included if they were actively accepting donations to fund medical expenses related to transplant surgeries of interest (kidney, lung, liver, heart). The primary outcome measures were receipt of any donation and total amount raised among campaigns receiving at least one donation. Bivariate and multivariate analyses were performed on various campaign characteristics. Text analysis of campaign descriptions for emotional content was also performed. In total, 850 campaigns were analyzed. Kidney transplant campaigns were the most common type (40.5%), followed by liver (33.3%), lung (12.2%), heart (11.3%), and multi-organ transplants (2.7%). Longer description length (OR 1.15 per 100 characters, p<0.0001) was significantly associated with receipt of any donation. More positive overall sentiment (+2.6% greater amount raised, p=0.0002), longer description length (+2.4% per 100 characters, p=0.0006), and higher goal amount (+0.6%, p=0.0038) were associated with greater amount raised.
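The "percent change in amount raised" estimates quoted above are consistent with a two-part approach (a logistic model for receiving any donation, and a log-linear model of the amount raised among funded campaigns). The snippet below is a hypothetical sketch of that approach on simulated data; it is not the authors' code, and the variable names, effect sizes, and sample size are invented for illustration.

```python
# Hypothetical two-part sketch: (1) odds of receiving any donation, and
# (2) percent change in amount raised among funded campaigns (log-linear model).
# Data are simulated purely for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 300
df = pd.DataFrame({
    "descr_chars": rng.integers(200, 3000, n),
    "sentiment":   rng.normal(0.3, 0.3, n),      # mean sentiment score per campaign
    "goal_usd":    rng.integers(2_000, 80_000, n),
})
# Simulate funding status and amounts with modest positive effects.
p_funded = 1 / (1 + np.exp(-(-1.0 + 0.0010 * df["descr_chars"])))
df["any_donation"] = (p_funded > rng.random(n)).astype(int)
df["raised_usd"] = np.where(
    df["any_donation"] == 1,
    np.exp(5 + 0.00024 * df["descr_chars"] + 0.3 * df["sentiment"]
           + rng.normal(0, 0.5, n)),
    0.0,
)

# Part 1: odds of any donation per 100 description characters (cf. OR 1.15).
part1 = smf.logit("any_donation ~ I(descr_chars / 100) + sentiment", data=df).fit(disp=0)
# Part 2: among funded campaigns, exp(coef) - 1 is the % change in amount raised.
funded = df[df["any_donation"] == 1]
part2 = smf.ols("np.log(raised_usd) ~ I(descr_chars / 100) + sentiment + np.log(goal_usd)",
                data=funded).fit()
print(np.exp(part1.params))                # odds ratios
print((np.exp(part2.params) - 1) * 100)    # percent change in amount raised
```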
Campaigns written in fi rst-person perspective (-56.8% lower amount raised vs. third-person, p<0.0001) were associated with lower amount raised. The words "family," "life," and "time" were among the most common across all campaign narratives. The most commonly used emotive words included "support," "care," "failure," and "love." CONCLUSIONS: This is the largest quantitative analysis of medical crowdfunding campaigns to date. Campaign success was signifi cantly associated with longer description length, higher goal amount, more positive overall sentiment, and third-person writing perspective. These fi ndings will inform patient and provider discussions around the growing practice of medical crowdfunding. Given ongoing waitlist mortality and the high rates of liver discard in the US we wanted to investigate late declines (LD), and very late declines (VLD) of liver allografts in our OPO to look for correlation with center transplant activity. We defi ned LD as a decline after the donor operation started, and VLD after donor cross clamp. We performed a retrospective review of activity of the 8 most active transplant centers in our OPO between 07/2016 and 10/2017. The OPO provided LD and VLD data. Transplant center activity was obtained from the Scientifi c Registry of Transplant Recipients. We quantifi ed 1) the total # of liver transplants per center, 2) LD per center, 3) VLD per center, and 4) the number of livers each center utilized after a LD by another center. For statistical analysis, Pearson correlations were performed. Results: The number of LD and VLD varied dramatically in our OPO, as did the number of liver allografts utilized after another center's LD (fi gure 1). Some of the centers with the highest number of VLD were not local centers, and only 2 of the VLD were by local centers. Importantly, there is a positive correlation between a center transplant volumes and willingness to utilize a liver after a LD (fi gure 2). High volume centers had a very low rate of LD or VLD per organ transplanted. LD and VLD result in challenging livers to reallocate, and risk organ discard. Centers with higher volume utilized more liver allografts after LD by another center. Many late LD and VLDs are from non-local centers. In the share 35 era we must be mindful of these challenges to ensure effective sharing and organ discard is minimized. A larger study comparing centers in over performing to centers in underperforming OPOs would be valuable. Background: Quality metrics are used to regulate transplant centers, inform patients, and improve quality of care. Currently graft and patient survival are used to regulate transplant programs. However, it is unclear that these metrics contain optimal incentives for transplant quality and volume. Methods: We surveyed members of the ASTS and AST (n=270) to characterize perceptions about transplant quality metrics. Participants rated alternative metrics on effectiveness for measuring quality of care, amenability to risk adjustment, and predicted effects on volume, 1-year transplant mortality, and waitlist mortality.Results: Respondents rated 1-year patient survival as both the most effective at measuring quality of care (mean score=7.44 on a 10-point scale) and most amenable to risk adjustment (mean score=6.26, Figure 1 ). Patient satisfaction was rated least effective at measuring quality of care (mean score=3.38) and evaluated patient mortality rate was rated least amenable to risk adjustment (mean score=4.45). 
Over half of respondents (52.5%) believed using 1-year waitlist mortality would decrease 1-year waitlist mortality; however, 24.3% of respondents also believed this metric would decrease center volume (Figure 2). A quarter of respondents (25.5%) believed using 3-year post-transplant graft survival would decrease 1-year transplant mortality, while 24.3% believed this metric would decrease center volume. About one third of respondents (32.4%) believed using organ refusal rate would increase center volume, while 21.2% of respondents believed this metric would increase 1-year transplant mortality. Discussion: The choice of transplant regulatory metrics involves tradeoffs between incentivizing waitlist and post-transplant outcomes. However, many members of the transplant community believe that alternative metrics will not impact quality or volume. A two-month blinded study with 1 hospital site was conducted. Actual referrals were still made by the on-site team while the Tele-ICU staff also practiced making referrals. Tele-ICU staff successfully identified all patients for whom actual referrals were made. The Tele-ICU team identified potential referrals that were not made by their on-site colleagues. Implementation of the new referral process at 4 hospitals has led to an increase in the number of referrals, an increase in timeliness, and an increase in donation. Interviews with hospital staff and OPO coordinators reveal markedly improved satisfaction. Additionally, bedside staff report significant time savings. These encouraging results are embraced by leaders at these hospitals and the OPO. Continued expansion of this strategy is planned. Monocytes distinguish between self and allogeneic non-self and contribute to allograft rejection by generating mature, IL-12+ DCs. The initial trigger of monocyte differentiation to DCs is the binding of SIRPa, a polymorphic marker of non-self on donor cells, to CD47 on recipient monocytes. Here, we show that monocytes primed in vivo with allogeneic cells via the SIRPa-CD47 pathway acquire allospecific memory. First, they mount an anamnestic recall response (measured by enhanced DC generation) to grafts from the same but not third-party donors. Second, removal of the MHC disparity between the donor and recipient at the time of recall abrogates the response, indicating that memory specificity is directed at allogeneic MHC. Third, the recall response can be elicited up to 4-7 weeks after initial priming, a duration that is significantly longer than the very short lifespan of a monocyte (1-3 days). Fourth, acquisition and recall of memory occur in the complete absence of lymphoid cells, indicating the innate nature of monocyte memory. We provide direct evidence that memory is preceded by monocyte proliferation (measured by EdU uptake) and is associated with transcriptional changes in monocytes (measured by RNAseq analysis). Using blocking antibodies to paired immunoglobulin receptor (PIR) molecules present on monocytes, we show that MHC specificity of memory is mediated by PIR-A receptors that preferentially bind allogeneic MHC molecules. Therefore, the innate alloresponse triggered in monocytes by non-self SIRPa rapidly evolves into an enhanced memory response specific to non-self MHC. This novel form of innate memory could have profound effects on allogeneic transplantation by continually generating mature DCs that promote rejection and prevent tolerance.
Initial recipient HCV levels varied from 25 IU/mL to 40 million IU/mL, but all patients' HCV NAT levels declined rapidly after elbasvir/grazoprevir treatment (Figure). Two patients were cured (SVR-12); 7 have on-treatment undetectable HCV but have not reached the SVR timepoint; SVR data will be available June 2018. Five of 9 recipients had ALT elevations post-transplant that resolved in follow-up. All recipients have good allograft function. Serious adverse events included 1) acute kidney injury from early poor cardiac output and pericardial tamponade; the patient requires dialysis at week 12; 2) another had antibody-mediated rejection, required ECMO, had acute kidney injury, but has since had good allograft recovery; and 3) another had cellular rejection responsive to oral steroids. None of these were related to HCV or treatment. Interim Results of the kSORT in the SAILOR Randomized Multicenter Trial. P. Lindner, 1 A. Shroeder, 2 J. Ekberg, 1 S. Hsieh, 2 P. Towfighi, 2 I. Damm, 2 T. Sigdel, 2 M. Sarwal. 2 1 Sahlgrenska University Hospital, Göteborg, Sweden; 2 UCSF, San Francisco. Background: In a randomized multicenter trial of 222 renal tx recipients treated with steroid minimization and a tacrolimus/MMF-based regimen, a novel gene expression assay, kSORT, was evaluated for its accuracy in diagnosing and predicting biopsy-confirmed AR. Methods: Blood samples were drawn at days 0 and 10, months 3, 6, and 12, and at graft dysfunction to perform the kSORT assay, a customized 17-gene assay that provides a high-risk (HR) or low-risk (LR) immune score for acute rejection (AR). Biopsies (bx) were done on all study patients by protocol at engraftment and 12 months post-transplantation, and central histology was read using Banff scores. The kSORT assay was run on 633 blood samples obtained from the first 111 enrolled patients who completed 1 yr of follow-up. 214 blood samples were matched with protocol or indicated biopsies. 31 patients had clinically suspected AR, of which 18 were BPAR and an additional 6 were borderline (BL-AR). RNA was extracted, QPCR for all 17 genes was normalized to 18S, and data were profiled using a customized algorithm (kSAS). Results: Of the 25 biopsy-confirmed AR episodes, 21 had definite kSORT scores and 4 were indeterminate; 18/21 AR had high-risk kSORT scores. 11 AR episodes had prior blood samples collected per protocol in the previous 4 months; 8/11 of the pre-AR samples had high kSORT scores. Of the 163 biopsy-matched blood samples without histological AR, 139 had definite kSORT scores and 24 had indeterminate calls. 132/139 blood samples matched with biopsies without AR had low-risk kSORT scores. A diagnostic odds ratio was calculated to examine the odds of a (+)kSORT compared to the odds of a (-)kSORT in the confirmed AR group (dOR=39.3, p=4e-15). To evaluate prediction accuracy, there were 107 patients with samples either before or on day 0 of the transplant. kSAS called 8/17 with confirmed AR high-risk. Of the 63 stable transplants, 61 were predicted to be low-risk. Conclusion: Interim results of the diagnostic accuracy of the kSORT assay in a randomized prospective multicenter trial in renal transplantation confirm that the assay has 85.7% sensitivity, 95.0% specificity, and 97.8% NPV for the noninvasive diagnosis of AR.
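These operating characteristics follow directly from the counts reported above (18/21 biopsy-confirmed AR samples called high-risk; 132/139 biopsy-matched no-rejection samples called low-risk). A short worked check, written in Python only for convenience:

```python
# Worked check of the reported kSORT operating characteristics, using only the
# counts stated in the abstract (indeterminate calls excluded, as described).
tp, fn = 18, 21 - 18      # biopsy-confirmed AR samples with definite kSORT calls
tn, fp = 132, 139 - 132   # biopsy-matched no-rejection samples with definite calls

sensitivity = tp / (tp + fn)   # 18/21   = 0.857
specificity = tn / (tn + fp)   # 132/139 = 0.950
npv = tn / (tn + fn)           # 132/135 = 0.978

print(f"sensitivity {sensitivity:.1%}, specificity {specificity:.1%}, NPV {npv:.1%}")
```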
73% of AR could have been diagnosed by the kSORT assay days-months prior their current time-line for diagnosis based on the serum creatinine alone, supporting the use of this assay for serial monitoring of rejection risk and proactive immunosuppression customization to alloimmune risk. Figure 1A ), lower CD4 effector memory cells ( Figure 1B ) and lower CD4 PD-CD57+ T cells ( Figure 1C ) were seen in patients with impaired protective immunity prior to and persisting after the infection compared to no event or alloimmune failure. We have identifi ed a CD4 T cell signature that can potentially predict/monitor patients at risk for infection, leading to medication optimization and decreased infectious related morbidity. Purpose: The MANDELA study (NCT00862979) was designed to assess the benefi t on renal function of either CNI-free or CNI-minimized EVR-based regimen after early conversion of de novo heart transplant recipients (HTxR).