title: Oral Abstract date: 2016-06-08 journal: Am J Transplant DOI: 10.1111/ajt.13897 Introduction. Elevated levels of the biomarker cardiac troponin T (cTNT) pre-KTX relate to high mortality and CV risk pre- and post-KTX. We sought to determine whether abnormal cTNT levels measured routinely post-KTX in asymptomatic recipients might uncover unrecognized cardiac pathology associated with high mortality. Methods. Included are 1114 recipients with cTNT measured pre-KTX and at regular intervals post-KTX, followed for 49.6±28.9 months (59% male, 43% preemptive, and 84% living donor). Pre-KTX, 24% had diabetes and 29% had a history of CV disease. Post-KTX there were 6079 cTNT measurements (median 5 per recipient, range 1-36). cTNT <0.01 ng/ml was considered normal. Results. Post-KTX, 64 recipients (5.7%) had clinical non-fatal cardiac events (MACE) and 172 had asymptomatic elevations in cTNT. Compared to recipients without MACE or cTNT elevations, mortality was higher after non-fatal MACE [HR=3.10 (1.65-5.85), p<0.0001] and after cTNT elevations [HR=3.50 (2.33-5.26), p<0.0001]. Risk associated with cTNT elevations increased with increasing cTNT levels (0.01-0.03 ng/ml, HR=3.32; >0.1 ng/ml, HR=6.98). One-time elevations in cTNT related to increased risk, but the risk was higher when cTNT was elevated more than once. Risk factors for MACE and cTNT elevations were similar (age, male sex, diabetes, lower GFR). The cumulative incidence of MACE/cTNT elevations also related to cTNT levels pre-KTX (Table) and time on dialysis. 115 recipients died (10.3%). By multivariate analysis, mortality related to MACE/cTNT elevations (HR=5.11, 95% CI 3.29-7.93). There are significant sociodemographic and comorbidity differences between dialysis, waitlisted and transplant patients in the UK. The comorbidity profile of listed as well as transplanted patients varies significantly between centres in the UK, suggesting different selection criteria. Several comorbid conditions affect 1-year survival on dialysis and could assist the selection process for transplantation. The purpose of this study was to describe pregnancy in female heart recipients whose indication for transplantation was a congenital heart defect. Data were collected by the National Transplantation Pregnancy Registry (NTPR) via questionnaires, phone interviews, and medical records. In total, 83 heart recipients with 141 pregnancies have been reported to the NTPR to date. Of these, 17 recipients (28 pregnancies) had an initial diagnosis of a congenital heart defect (hypoplastic left heart syndrome 7, transposition of the great vessels 3, tricuspid atresia 1, septal defect 1, fibroelastosis 1, and other 4). Mean age at first transplant was 12.4±10.7 yrs (range 0.05-29 yrs). The transplant-to-conception interval was 11.5±6.1 yrs, and 52% of the pregnancies were reported as planned. Conceptions occurred between June 1990 and August 2014. Immunosuppression during all pregnancies was calcineurin inhibitor-based, with mycophenolic acid product (MPA) exposure in 7. Outcomes included 7 miscarriages (all with MPA exposure) and 22 live births (one set of twins), with a mean gestational age of 35.4±4.1 wks and a mean birthweight of 2403±777 g.
Four children had birth defects: 1 duodenal atresia with Tetralogy of Fallot and atrioventricular canal defect (MPA exposure), 1 facial defect (MPA exposure), 1 pectus excavatum, 1 Dandy-Walker syndrome with vermian hypoplasia of the cerebellum, and a fifth child with long QT syndrome diagnosed later in childhood. Comorbid conditions during pregnancy included hypertension in 13 (46%) and preeclampsia in 6 (27%). One recipient had mild rejection during pregnancy treated with an increase in oral prednisone. One recipient was retransplanted 12 yrs post-delivery but subsequently died, another died 1.7 yrs after pregnancy, one reported reduced function, and 14 recipients reported adequate graft function. Conclusions: Female heart transplant recipients born with congenital heart defects have reported successful pregnancies, with the majority reporting adequate graft function postpartum. MPA exposure in this group confers increases in miscarriages and birth defects. The higher frequency of birth defects in the offspring not exposed to MPA in utero in this group warrants further study. All transplant centers are encouraged to refer pregnancies to the NTPR. Calcineurin inhibitors (CNI) serve as the cornerstone of immunosuppression (IS) after kidney transplant. However, chronic CNI nephrotoxicity has been implicated in allograft dysfunction. We hypothesized that low-dose CNI with low-dose everolimus (Ev) may be associated with less IFTA on allograft biopsy and less rejection. Methods: We studied allograft pathology on protocol biopsies at 12 months post-transplant in 35 kidney transplant recipients randomized to steroid-free IS with either low-dose tacrolimus (Tac) and everolimus or standard-dose Tac and mycophenolate mofetil (MMF) after alemtuzumab induction. Everolimus levels were maintained between 3 and 8 ng/ml. We also analyzed circulating T cell populations in each group. Clinical outcomes included rejection-free graft survival and estimated GFR (eGFR). Results: Baseline characteristics were statistically similar between the two groups. Mean follow-up was 14±4 and 17±5 months (p=0.02), and mean Tac levels were 4.5±1.9 and 6.4±1.5 ng/ml (p=0.03) in the Tac+Ev and Tac+MMF groups, respectively. Rejection-free graft survival was greater in the Tac+Ev group (p=0.04), possibly related to increased circulating Tregs. The 2 groups had similar eGFR and similar degrees of IFTA, glomerulosclerosis, isometric vacuolization and arteriosclerosis on biopsy. Living kidney donors (LKDs) who donated between 1994 and 2013 were studied. Obese LKDs were defined as donors with body mass index (BMI) ≥30 kg/m² at the time of donation. Donors were followed until the earliest of ESRD development, death, or administrative end of study. Maximum follow-up was 20 years; median follow-up was 7.3 years (IQR: 4.2-10.6) for obese LKDs. Main outcome measures included cumulative incidence and adjusted risk of ESRD (adjusted for age, race, blood pressure, and insurance status). Results: Compared to non-obese LKDs, obese LKDs were more likely to be male and African American and had higher blood pressure. ESRD developed in 0.14% of obese LKDs in a mean (SD) of 8.9 (2.8) years after donation; ESRD developed in 0.05% of non-obese LKDs in a mean (SD) of 8.0 (2.1) years after donation. Estimated risk of ESRD at 20 years after donation was 35 per 10,000 for obese LKDs and 14 per 10,000 in their non-obese LKD counterparts (p<0.001).
LKDs with obesity had a 2.29-fold increased risk of ESRD compared to non-obese LKDs (adjusted hazard ratio [aHR]: 2.29, 95% CI: 1.34-3.93, p=0.003). For each one-unit increase in BMI, there was an associated 8% increase in the risk of ESRD (aHR: 1.08, 95% CI: 1.03-1.13, p=0.001). Conclusions: Compared with non-obese LKDs, LKDs who were obese at the time of donation had an increased risk of ESRD within 20 years of donation. These findings may help to inform selection criteria and discussions with persons considering live kidney donation. Diabetes mellitus (DM) continues to be the most common cause of kidney disease. The incidence of DM following kidney donation is unknown. Moreover, whether its development in someone who donated a kidney in the past results in accelerated GFR change, compared to non-diabetic donors and those with 2 kidneys, has not been studied. We determined the incidence and predictors of DM development in 3956 White kidney donors who donated between 1963 and 2013 and performed a case-control study of 208 donors who developed DM and 208 without DM after matching on age, gender, BMI, eGFR and glucose at donation, and year of donation. The main outcomes of interest for the matched analysis were hypertension development, proteinuria and reduced GFR. After a mean follow-up of 16.6±11.9 yrs, 231 (6.7%) donors developed diabetes requiring treatment at a median age of 56.6 years. In the matched analysis, diabetic donors were more likely to become hypertensive (71% vs 43%) and proteinuric (26% vs 11%). Annual eGFR change after DM development was -0.48 ml/min/1.73 m² (95% CI -0.62, -0.34) in cases and -0.42 ml/min/1.73 m² (95% CI -0.66, -0.80) in controls. These rates are almost identical to the rates reported in the INDT trial, which studied microalbuminuric hypertensive type 2 diabetic patients. Recent studies have shown increased risk of end-stage renal disease (ESRD) in living kidney donors compared to healthy non-donors. Donors must understand the risk of donation in order to provide informed consent, but the risk of ESRD in individual donors is unknown. METHODS: Using SRTR data linked to outcome data from CMS, we studied the association between donor age, sex, race, relatedness to recipient, BMI, and ESRD in 120,203 living kidney donors from 1987 to 2014. We used Cox regression with multiple imputation to account for missing data and late entries to account for imperfect ESRD surveillance prior to 1994. We used the model to create a calculator for individual risk prediction. RESULTS: Male donors had a 2-fold higher risk of ESRD than female donors (Table). African-American (AA) donors had increased risk compared to non-AA donors; the magnitude of increased risk varied with donor age. Among non-AA donors, older age was associated with increased risk (46% higher risk per 10 years of age, Table), but there was no evidence of increased risk with age among AA donors. Donors with no biological relationship to the recipient had 35% lower risk than donors with a biological relationship. Higher BMI was associated with increased risk (52% higher risk per 5 kg/m²). There was no evidence of association between eGFR and ESRD risk after adjusting for other factors (p=0.9). The c-statistic of the model was 0.71. Among all donors in the study, median predicted risk at 5, 10, and 15 years post-donation was 0.01%, 0.06%, and 0.17%, respectively. Median (IQR) predicted risk of ESRD at 20y was 0.35% (0.21%-0.62%), but 1% of donors had 2.7% or higher predicted risk of ESRD (Figure).
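For illustration only, the arithmetic behind turning a fitted Cox model into an individual risk estimate, as the calculator linked below does, can be sketched as follows. The coefficients and baseline survival here are hypothetical placeholders chosen only to mirror the reported hazard ratios; they are not the published model.

```python
import numpy as np

# Hypothetical log hazard ratios loosely mirroring the reported associations;
# the real calculator uses the fitted SRTR model, not these placeholders.
coef = {
    "male": np.log(2.0),          # ~2-fold higher risk in male donors
    "age_per_10y": np.log(1.46),  # ~46% higher risk per 10 years (non-AA donors)
    "bmi_per_5": np.log(1.52),    # ~52% higher risk per 5 kg/m^2
    "unrelated": np.log(0.65),    # ~35% lower risk if biologically unrelated
}

# Hypothetical baseline survival at 20 years post-donation for a reference donor.
S0_20Y = 0.999

def esrd_risk_20y(male, age, bmi, unrelated, ref_age=40.0, ref_bmi=25.0):
    """Predicted 20-year ESRD risk: 1 - S0(t) ** exp(linear predictor)."""
    lp = (coef["male"] * male
          + coef["age_per_10y"] * (age - ref_age) / 10.0
          + coef["bmi_per_5"] * (bmi - ref_bmi) / 5.0
          + coef["unrelated"] * unrelated)
    return 1.0 - S0_20Y ** np.exp(lp)

# Example: a 50-year-old male donor, BMI 30, biologically related to the recipient.
print(f"Predicted 20-year ESRD risk: {100 * esrd_risk_20y(1, 50, 30, 0):.2f}%")
```

The key point of the sketch is that a single baseline survival curve plus a handful of donor-specific covariates is enough to generate the skewed risk distribution described above, where most donors have very low predicted risk and a small tail has much higher risk.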
Online calculator at transplantmodels.com/donesrd CONCLUSIONS: Risk of ESRD within 20 years post-donation is small for most donors, but a few donors have greatly increased risk. The online calculator is a useful tool for evaluating and counseling potential kidney donors. Two studies have reported a small increase of ESRD in LDs compared to healthy matched controls. However, the majority of ESRD has been in relatives (vs unrelated donors) (also true at our center). Relatives of those with ESRD have higher risk of ESRD (w/o/ being donors). ESRD is still a rare event after donation. We studied impact of family Hx of ESRD on the more common (and earlier postdonation) event of developing CKD. Time from donation to first eGFR, 60 (CKD stage 3a), <45 (stage 3b), and <30 (Stage 4) were determined in 2 Caucasian groups: a) first degree relatives plus others who had a 1 st degree relative with Hx of kidney disease, and b) all others. Because LURD was not common until 90s, we limited analyses to donation after 12/31/89; and thru 12/31/13. Of 1651 donors, 1038 had a 1 st degree relative with ESRD. Figure 1 shows the unadjusted cumulative probability of developing CKD stages by family history of kidney disease. However, there are significant differences in clinical factors between the two groups (Table 1) . Therefore, we modeled the impact of having a 1 st degree relative with family hx of kidney disease, while adjusting for eGFR, BMI, systolic blood pressure, hemoglobin, glucose at donation, gender, smoking status, and calendar time of transplant. Family history of kidney disease was associated with a small but NS increased risk of developing CKD Stage 3a (HR = 1.12, p=0.12); stage 3B (1.23, p=0.19) ; stage 4 (1.07, p=0.88) . We then modeled, using a pooled analysis with the same adjustment factors, the hazard of developing the next stage of CKD. Those with a 1 st degree relative had 13.2% ↑ (95% CI, -1.1, +29.6) (p=0.07). We conclude that although the majority of reported ESRD after donation has been in 1st degree relatives, there is only a small and NS ↑ in development of CKD between 1st degree relatives and other donors. The role of circulating donor-specific anti-HLA antibodies (DSA) in the development of accelerated arteriosclerosis have been recently reported in kidney transplant recipients. This study investigated the characteristics of DSA that are associated with the severity of allograft arteriosclerosis. We enrolled 744 consecutive kidney transplantation performed between January 1, 2004 and January 1, 2010 at Necker Hospital (Paris, France) , with systematic assessment of injury phenotype and arteriosclerotic lesions using the vascular fibrous intimal thickening (cv) Banff score on allograft biopsies performed at one year after transplantation. We assessed circulating DSA and their characteristics (specificity, HLA class, mean fluorescence intensity [MFI] and C1q-binding) at six months after transplantation. We identified 281 patients with cv0 score, 213 patients with cv1 score, 189 patients with cv2 score and 61 patients with cv3 score. The distribution of DSA according to cv score was the following: 47/281 (17%) in cv0 patients, 39/213 (18%) in cv1 patients, 63/189 (33%) in cv2 patients and 28/61 (46%) in cv3 patients. Immunodominant DSA (iDSA) MFI level was positively correlated with the severity of arteriosclerosis (Spearman's rho=0.23, p=0.002), with a mean MFI of 3204.0±3725.2 in cv0 patients, 3760±3598 in cv1 patients, 4892±4676 in cv2 patients and 5541±3892 in cv3 patients. 
C1q-binding DSA prevalence increased with the severity of allograft arteriosclerosis: 8/281 (3%) in cv0 patients, 6/213 (3%) in cv1 patients, 25/189 (13%) in cv2 patients and 9/61 (15%) in cv3 patients (p<0.001). Patients with C1q-binding iDSA had a higher cv score compared with patients with non-C1q-binding DSA (1.7±1.0 versus 1.3±1.1, respectively, p=0.01). The C1q-binding capacity of DSA was associated with increased microvascular inflammation (p<0.001) and C4d deposition in peritubular capillaries or arteries (p<0.001). This study shows a biological gradient between DSA MFI level and the severity of allograft arteriosclerosis. The complement-binding capacity of DSA is associated with increased severity of arteriosclerosis and complement deposition in the allograft. Background: Transplant glomerulopathy (TGP) is frequently found in the setting of chronic antibody-mediated rejection, along with microvascular inflammation (MVI) (peritubular capillaritis + glomerulitis score >1) and/or positive C4d staining. We aimed to assess the molecular profiles of TGP in the absence of microvascular inflammation and C4d staining as compared to TGP with positive MVI and/or C4d. Methods: A total of 42 for-cause renal allograft biopsies were studied using Affymetrix HuGene 1.0 ST expression arrays: 12 with normal biopsy findings (G1), 17 with a diagnosis of TGP, C4d positive and/or MVI score >1 (G2), and 13 with a diagnosis of TGP, C4d negative and MVI score ≤1 (G3). More patients in G2 and G3 had … >20% at the time of biopsy compared to those in G1 (25%), p=0.006 and p=0.05, respectively. More patients had DSA in G2 (TGP with MVI or C4d; 82.4%) as compared with G1 (8.3%, p<0.001) and G3 (TGP without MVI or C4d; 38.4%, p=0.02). Pathogenesis-based transcripts revealed increased expression of gamma interferon- and rejection-associated transcripts (GRIT) and DSA-associated transcripts (DSAST), as seen in antibody-mediated rejection, when G2 (TGP with MVI or C4d) was compared to the G1 and G3 (TGP without MVI or C4d) groups (Table 1). However, when G3 (TGP without MVI or C4d) was compared to G1, increased expression of cytotoxic T cell- (CAT), regulatory T cell- (TREG), and B cell-associated transcripts (BAT) was observed, but not of GRIT or DSAST. There was no difference in expression of natural killer cell- and endothelial cell-associated transcripts between the 3 groups. Conclusions: Gene expression profiles of TGP in the absence of microvascular inflammation and C4d involve an upregulation of T cell-mediated responses but not the transcripts seen in antibody-mediated rejection, suggesting that this subset may represent chronic T cell-mediated rejection. The aim of our study was to determine the clinical significance of pre-transplant non-HLA antibodies, mainly those targeting the angiotensin II type 1 receptor (AT1R) and major histocompatibility complex class I-related chain A (MICA). We examined the presence of pre-transplant AT1R and MICA antibodies in 362 patients who underwent kidney transplantation. Patients were divided into groups according to the presence of AT1R and MICA antibodies: MICA-positive only (n=28), AT1R-positive only (n=150), MICA−/AT1R− (n=159), and MICA+/AT1R+ (n=22). Clinical and immunological characteristics were compared among these groups. Patients who underwent post-transplant allograft biopsy (biopsy indicated for elevated creatinine, n=128; protocol biopsy, n=124) were further divided according to the presence of pre-transplant HLA-DSA, and pathologic biopsy findings and clinical outcomes were compared among groups.
There were no significant differences among the groups in baseline clinical and immunological characteristics. In biopsy findings, no significant differences were found in the development of AMR, but microvascular inflammation (glomerulitis + peritubular capillaritis) scores, representing humoral immunity, were significantly higher in the MICA or AT1R detected groups compared to groups without either antibodies (P<0.05). These findings were consistent in both indicated and protocol biopsy patients, especially in those without detectable HLA-DSA. Allograft survival was also significantly lower in patients positive for either MICA or AT1R antibodies, but negative for HLA-DSA (P<0.05). In conclusion, pre-transplant detection of MICA or AT1R antibody is significantly associated with the post-transplant development of microcirculation inflammation and poor allograft outcomes especially in those without HLA-DSA. Arteriolar hyalinosis lesions (ah) are often interpreted as a sign of calcineurin inhibitor toxicity even though they are induced by other conditions. We studied the features associated with ah-lesions to understand their relationship to diseases, transcript changes, and outcomes. In indication biopsies from 562 patients (3 days-35 years post-transplant), mean ah-scores increased with time of the biopsy post transplant (TxBx) and correlated with atrophy-fibrosis (r=0.31), but unlike atrophy-scarring, failed to increase until after 500 days (Fig. 1 ). The associations of ah-lesions varied with TxBx: early (<3 months) donor age; >3 years: progressive glomerular diseases (ABMR with cg lesions or GN) ( Table 1) . In a multivariable model the independent predictors of ah-score were TxBx (p=6 x10 -16 ); glomerular diseases (p=1x10 -11 ); and donor age (p=0.01). Transcripts had poor associations with ah-lesions but in the early period (<3m) ah-lesions correlated weakly with a blunted injury response e.g. reduced expression of AKI transcript HAVCR2, and increased expression of selected microcirculation transcripts e.g. regulator of G-protein signaling 5 (RGS5). Association of ah with graft failure was complex: risk was high in ah3 but largely due to glomerular diseases. In the 1-5y period, biopsies with ah0 were at risk, due to ABMR without cg lesions, associated with non-adherence. In a multivariable model, Ah-score is a weak predictor of graft survival (p=0.03) compared to ci-score (p=8.6E-07) and TxBx (p=7.1E-05). Hyalinosis in indication biopsies has paradoxical associations. The previous belief that ah3 indicated progressive CNI toxicity may have been due to failure to recognize glomerular disease due to ABMR. The mTOR inhibitor is an immunosuppressive drug used in kidney transplantation. Whether the mTOR inhibitors is associated with reduced risk of cancer development in Chinese population is unknown. Methods We conducted a nationwide population-based study. Patients who did not have malignancy history and received kidney transplantation between 2000-2013 were enrolled. A total of 3628 patients were included. We conducted a propensity score matching to determine 793 recipients who had mTOR inhibitors > 30 days as the study group; 2835 recipients who did not have mTOR inhibitors as the control group. We calculated the cumulative tablets of Sirolimus (Rapamune ® 1mg, Pfizer) or Everolimus (Certican ® 0.5mg, Novartis) within the first year of kidney transplantation. The primary outcome is the development of cancer after kidney transplantation. 
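As a rough illustration of the matching step described in the nationwide mTOR-inhibitor study above (not the authors' code; the covariates, cohort size, and 1:1 ratio below are assumptions made only for the sketch), a propensity score for mTOR-inhibitor exposure can be estimated with logistic regression and used to select comparable controls before the survival analysis that follows:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Toy cohort with hypothetical covariates; the registry variables actually used
# for matching are not listed in the abstract.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "mtor":     rng.binomial(1, 0.2, 1000),       # exposure: mTOR inhibitor >30 days
    "age":      rng.normal(45, 12, 1000),
    "diabetes": rng.binomial(1, 0.25, 1000),
    "tx_year":  rng.integers(2000, 2014, 1000),
})
covariates = ["age", "diabetes", "tx_year"]

# 1) Propensity score: probability of mTOR exposure given baseline covariates.
ps_model = LogisticRegression(max_iter=1000).fit(df[covariates], df["mtor"])
df["ps"] = ps_model.predict_proba(df[covariates])[:, 1]

# 2) Greedy 1:1 nearest-neighbour matching on the propensity score, without replacement.
treated = df[df["mtor"] == 1]
controls = df[df["mtor"] == 0].copy()
matched_control_idx = []
for _, row in treated.iterrows():
    j = (controls["ps"] - row["ps"]).abs().idxmin()
    matched_control_idx.append(j)
    controls = controls.drop(index=j)

matched = pd.concat([treated, df.loc[matched_control_idx]])
print(matched.groupby("mtor")[covariates].mean())  # crude check of covariate balance
```

The matched cohort, rather than the full registry, is then carried into the Cox model described next, which is what allows the exposed and unexposed groups to be compared on cancer incidence.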
A Cox proportional hazards model was applied to investigate the risk of cancer development. Results: Over a median observation period of 78 months, receipt of mTOR inhibitors was not associated with a lower risk of cancer development (high dose, >330 tablets in the first year: HR 1.13, 95% CI 0.84-1.52). (Table: The association of mTOR inhibitors with the risk of cancer development in kidney transplant recipients.) NHSBT, Bristol, United Kingdom. Background. The survival advantage of pre-emptive transplantation (Pre-empTx) in live donor kidney transplantation (LDKTx) was reported by Meier-Kriesche (Kidney International, 2000). Dialysis practices differ internationally and immunosuppressive regimens have advanced. The aim of this study was to examine the effect of dialysis time on allograft survival in UK LDKTx, including a comparison of graft survival (GS) between compatible transplants (CTx) and antibody-incompatible transplants (AiT). Methods. Data from NHSBT for LDKTx recipients in the UK between 2001 and 2013 were analysed, including 9755 patients, of whom 8970 were adults with available dialysis data. These data were analysed for both GS and composite outcomes (combined death or graft failure). No meaningful differences were found using a competing-risk regression model. Dialysis time (DiT) was categorised as pre-emptive (Pre-Tx), <1 yr, 1-2 yrs, and >2 yrs of dialysis. AiT (n=946) were grouped as HLA-incompatible (HLAi, n=473) and blood group ABO-incompatible (ABOi, n=473). Paired exchange recipients (PrEx, n=278) and CTx (n=7746) were also analysed in this study. Results. Transplant groups differed significantly with respect to donor age, recipient age, calculated reaction frequency at transplant, and HLA mismatches. The risk of graft failure (GF) of LDKTx compared to Pre-empTx increased with longer DiT, but only after 1 year of dialysis in the whole cohort (<1 yr DiT: HR 1.05, p=0.73). Acute rejection and renal function are associated with long-term renal allograft survival. We evaluated these outcomes in patients receiving reduced TAC exposure combined with EVR or standard TAC exposure combined with MPA. Methods: In this single-center prospective study, 288 low-immunological-risk kidney transplant recipients were randomized to receive (1) a single 3 mg/kg dose of antithymocyte globulin (rATG), reduced TAC exposure (<5 ng/ml), EVR (4-8 ng/mL) and prednisone (G1, n=85); (2) basiliximab (BAS), reduced TAC exposure (6 ng/ml for 3 months and <5 ng/mL for months 4-12), EVR (4-8 ng/mL) and prednisone (G2, n=102); or (3) BAS, TAC (6-8 ng/ml), MPA (1440 mg/day) and prednisone (G3, n=101). Anti-HLA donor-specific antibody (DSA) assessment and protocol biopsies were performed at 12 months. This analysis evaluated renal function at 12 months based on estimated glomerular filtration rate (eGFR) by MDRD, using the last-observation-carried-forward method, and the binary composite of first treated biopsy-confirmed acute rejection (tBCAR) and eGFR below 50 mL/min. Results: Mean age was 45, 66% were male and 52% Caucasian, with no differences among the groups. 69% were recipients of deceased donor kidney allografts, with no differences in mean kidney donor profile index (KDPI: 45±22% vs. 52±24% vs. 49±24%). Final donor creatinine was higher in G1. The presence of de novo DSA (dnDSA) is associated with antibody-mediated rejection and less optimal graft outcomes. The overall incidence of dnDSA after renal transplantation varies according to immunosuppression protocols, screening, and detection methods, and ranges from 5% to 25%.
Specifically, a higher incidence of dnDSA following alemtuzumab (AL) has been observed. We sought to characterize early dnDSA development after AL induction in renal transplant recipients. Consecutive kidney transplant recipients were screened for the presence of dnDSA from 7/1/2014 to 6/30/2015. The induction protocol consisted of alemtuzumab with rapid steroid withdrawal, and maintenance immunosuppression was tacrolimus and mycophenolate mofetil. dnDSA was detected by single-antigen bead Luminex technology at months 1, 2, 3, 6 and 12. A mean fluorescence intensity (MFI) >500 was considered positive. A total of 60 renal transplants were performed from 7/1/14 to 6/30/15 and were followed for an average of 10.3 months (range 4-16). Of these, 22 developed dnDSA (36.6%) in the first 6 months: 15 class I, 10 class II and 3 both. Class I was detected at months 1 (n=8), 2 (n=3), 3 (n=1) and 6 (n=3). Class II was detected at months 1 (n=3), 2 (n=3), 3 (n=2) and 6 (n=2). Median MFI at the time of onset was 3985 (range 544-7427) for class I and 12,539 (range 502-24,576) for class II. Of this cohort, 22/60 patients completed one year of screening; 9/23 had developed dnDSA as described above. Of these, the DSA cleared without any intervention in five patients (55%). The levels remained unchanged, with stable graft function, in 3 (33%) of the patients. One patient had persistent DSA with MFI >5000 and was treated with plasmapheresis, IVIG and rituximab; the DSA remains high but graft function is unchanged, so the patient is being monitored closely. In this cohort, dnDSA developed in 36.6% of recipients, with the majority developing in the first 2 months (77%). In 50% of the patients followed for one year, the DSA cleared without intervention and graft function remained stable. Despite their DSA-positive status, patients with early de novo DSA did not fare worse at one year than those without. Surgery, Lehigh Valley Health Network, Allentown, PA. Mycophenolic acid (MPA) products are considered teratogenic, and transplant centers report discontinuing MPA in female kidney recipients anticipating pregnancy. Data were collected by the National Transplantation Pregnancy Registry (NTPR) via questionnaires, phone interviews and medical records. 142 pregnancies were conceived on MPA, and in 302 pregnancies MPA was discontinued preconception. Recipients who discontinued MPA preconception had significantly longer transplant-to-conception intervals and lower creatinine levels before, during and after pregnancy. These pregnancies resulted in a significantly higher rate of live births and a lower incidence of birth defects. Acute rejection rates during pregnancy and postpartum were slightly higher in the MPA-exposed group. Conclusions: Compared to kidney recipients who remained on MPA in early pregnancy, those who discontinued MPA preconception had a significantly higher rate of live births, with an incidence of birth defects similar to the general population. There was no increase in acute rejections during pregnancy or postpartum in the group of recipients who discontinued MPA. The NEVERWOUND study was designed to evaluate the impact of immediate vs. delayed introduction of everolimus (EVR) on wound healing complications (WHC) and delayed graft function (DGF) in de novo kidney transplant (Tx) recipients during the first 3 months post-Tx.
NEVERWOUND (NCT01410448) , an open-label, multicenter study randomized 383 single kidney Tx recipients to: immediate EVR (0.75 mg twice daily), within 48 hours after graft reperfusion (N=193; iEVR) along with low-dose cyclosporine (CsA, 4 mg/kg/day) or delayed EVR (0.75 mg twice daily) 28±4 days after Tx (N=190; dEVR) along with low-dose CsA, with a bridge of enteric-coated mycophenolate sodium (1440 mg/day) and CsA (6-8 mg/kg/day). All patients received induction therapy and steroids as per local clinical practice. Primary endpoint is the proportion of patients without WHC related to initial transplant surgery (i.e. lymphorrea, fluid collections, wound dehiscence, wound infections, incisional hernia). Secondary endpoints are the rate of treatment failure (composite endpoint: biopsy-proven acute rejection, graft loss, death), incidence and duration of DGF, patient and graft survival rates, renal function and proteinuria. Proportion of patients without any WHC at 3 months was 70.5% and 72.1% in iEVR and dEVR groups respectively (p=0.67). Among different complication types, comparable rates were observed for the two treatment groups (p=NS). Treatment failure rate was 8.3% and 6.8% in iEVR and dEVR groups respectively (p=0.57). DGF occurrence was 23.8% and 31.6% (p=0.12) with a median duration of 8.5 and 5.5 days of dialysis in iEVR and dEVR groups respectively (p=0.21). No differences between the two groups were observed regarding patient (p=0.66) and graft survival rates (p=0.36), renal function and proteinuria. The immediate introduction of EVR post-Tx did not increase the risk of WHC and showed DGF incidence and duration comparable to delayed introduction. Renal function, efficacy, safety and tolerability were similar with both treatment groups. Historically, treatment of hepatitis C virus (HCV) infection with interferon-based regimens after kidney transplantation has been associated with risk of rejection & graft loss. Direct acting antiviral (DAA) regimens now provide opportunity to cure patients of HCV, potentially leading to improved post-transplant outcomes. Methods: We describe our transplant center's initial experience utilizing DAA regimens to treat HCV(+) patients after kidney transplantation. Results: At the time of submission, 29 deceased donor kidney transplant recipients have begun &/or completed anti-HCV therapy with a DAA regimen at a median of 29 months post-transplant (range . 89% are male with median age of 59 at transplant (range 37-83); 85% are HCV genotype 1, median HCV viral load (VL) at transplant was 1.5 million (M) (range 77K-68M), 70% were HCV treatment naive, & 70% received a HCV(+) kidney, which significantly reduced waiting time (595 vs. 1461 days (p=0.01)). Over 80% received a Thymo induction/steroid-sparing regimen at transplant; median HCV VL at therapy start was 4.5M (range 364K-37M). eGFR & fibrosis at therapy start are seen in Figure 1a & b, while DAA regimens utilized are seen in Figure 1c . To date, 13/18 (72%) cleared HCV by week 4 (Figure 1d ), 18/18 (100%) cleared HCV by end of therapy (EOT) & 6/7 (86%) achieved sustained viral response (SVR) at 12 weeks following EOT; 5/6 with SVR had received a HCV(+) donor kidney. 1 patient with HCC relapsed after being negative at EOT. Two patients self-discontinued DAA's due to side effects & HCC diagnosis. Adverse events included anemia requiring RBV dose reduction or discontinuation (n=2), headache (n=2), acute kidney injury due to tacrolimus toxicity, diarrhea, & worsening blood glucose control (n=1 each). 
One patient died 4 months after achieving SVR of an unknown cause. Conclusion: Treatment of HCV after kidney transplantation is now a reality. Cautious selection of regimen based on renal function, interacting medications, & adverse event profile is warranted. Long-term follow-up is needed to assess the impact on transplant outcomes. compared with HCV-kidneys, but better outcomes compared to remaining on the waiting list. Using HCVD+ for HCV+ patients may shorten waiting times decreasing its mortality. Limited early data of novel HCV DAA therapy in this population is promising, and may improve HCVD+ outcome. To evaluate HCVD+ kidney utilization and post-transplant HCV therapy, we retrospectively reviewed charts of kidney transplant recipients between 1/1/2010 and 8/31/2015 at our institution. Multi-organ and living donor transplants were excluded. Of the remaining 305 deceased donor transplants, only 15 (5%) were from HCVD+. HCVD+ kidney recipients' had shorter waiting time and mean KDPI of 46% (Table1 and 2). All of the HCVD+ kidney recipients have functioning allografts. Neither HCV genotype switches, nor new infections were noted among recipients. Eleven (73%) had HCV treated post transplant with DAA. Five of those (46%) achieved sustained virological response. Six remain on therapy. Four have not initiated treatment (Table2). HCVD+ kidneys underutilization remains challenging at our institution and nationwide despite high quality of organs and documented good outcomes. According to Scientific Registry of Transplant Recipients, 2008-2012 data, 6.3% of deceased donor kidney recipients and 2.3% of deceased donors kidneys transplanted were HCV+, hence less than third of HCV+ recipients received HCVD+ kidneys. With new highly effective DAA therapies, HCVD+ may represent a safe resource to expand the donor pool for HCV+ recipients. and 67-99%. Outcomes for patients in Groups S and D were then compared by KDPI. All patients received the same immunosuppression and continuous veno-venous hemodialysis during the LT, which was continued until KT. Results: Recipient and donor characteristics were comparable within Groups S and D, including recipient age, BMI, MELD, Hepatitis C status, pre-KT dialysis and its duration. The higher KDPI groups had higher donor age, more ECD kidneys, and more deaths due to stroke, as expected. Transplant outcomes were comparable within Groups S and D, liver and kidney CIT and WIT, DGF rate, transfusion and pressor requirements, and ICU stays. Low KDPI kidneys are associated with increased patient survival in both groups. Delayed approach KT resulted in no DGF in Group D and prolonged patient survival >10% in each subgroups compared to Group S. These results support that the delayed KT in CLKT; (i) should be the preferred method regardless of the KDPI of the renal graft; (ii) if combined with the use of better quality kidneys offers excellent patient survival up to 3 years; and (iii) helps expanding the donor pool with the use of more ECD and DCD kidneys with similar outcomes. The Effect of DRI and KDPI on Outcomes of High MELD Patients Undergoing Simultaneous Liver Kidney Transplant. E. Chan, 1 M. Hertl, 1 J. Perkins. 2 1 General Surgery, Rush University Medical Center, Chicago, IL; 2 Surgery, University of Washington, Seattle, WA. Background: Patients with a high Model for End Stage Liver Disease (MELD) score have an increased risk for dying without transplant. 
Renal function worsens secondary to complications such as infection, or hepatorenal syndrome necessitating simultaneous liver and kidney transplant (SLKT) in a subset of these patients. These patients derive the highest survival benefit from transplant; however, they have been shown to have a worse outcome. Indices such as the donor risk index (DRI) in liver transplantation and the kidney donor profile index (KDPI) in kidney transplantation have been developed to predict graft failure and subsequently, outcomes following transplant. We sought to determine if these indices influenced the outcomes of these patients undergoing SLKT. Methods: Our dataset included the UNOS/STAR database from 3/1/2002 to 3/31/2015 in patients with a MELD score of 33-40 who received a liver transplant alone (LTA) or SLKT. DRI of the hepatic allograft was calculated and the KDPI was taken from the STAR files. Livers were classified as low DRI (D1<1.3), moderate DRI (D2 = 1.3-1.7) or high DRI (>1.7). Kidneys were grouped into low KDPI (K1 £30%) or high KDPI (K2>30%). The survival outcomes based on SLKT with each pairing of DRI with KDPI (D1KI, D2K1, D3K1, D1K2, D2K2, D3K2) were reviewed. Results: There were 67,143 LTA, 4950 SLKT and 1760 patients receiving SLKT with MELD ≥ 33. There was no difference in survival in patients who received a LTA or a SLKT with a low DRI and any KDPI (D1K1 or D1K2). Also, there was no difference in survival in patients who received a LTA or SLKT with a moderate DRI and any KDPI (D2K1, D2K2). However, survival was statistically worse in SLKT with high DRI (p= 0.005). Conclusions: In patients with a MELD score >33 receiving SLKT, there is no difference in survival with a low DRI or moderate DRI liver with any KDPI kidney. Survival is markedly decreased with a high DRI liver and high KDPI combination. Consideration should be given to transplanting high MELD patients with a low or moderate DRI with any KDPI combination as their risk of dying on the waitlist is high. The number of simultaneous liver-kidney transplants (SLKT) for end-stage liver disease (ESLD) with renal failure is rising. The overall utility of kidneys used in this setting has not been quantified. We hypothesize: 1) Kidneys allocated as SLKT have shorter graft survival than do kidneys allocated as kidney (Ki) or kidney-pancreas (KP) transplants. 2) Each kidney, if allocated as Ki/KP, would offer a high benefit as measured by life-year-from-transplant (LYFT). Deceased donor kidney pairs from 1/1/1995 through 12/3/2014, in which one kidney is utilized in SLKT and the other in kidney (Ki) or kidney-pancreas (KP) transplantation, were identified in Scientific Registry of Transplant Recipients. Excluded were pediatric recipients, other multi-organ transplants, SLKT for metabolic disorders or amyloidosis, and SLKT with pre-transplant dialysis duration >90 days. The primary outcome was 10-year mean graft survival by transplant type, estimated from flexible parametric models restricted to 10-year follow-ups adjusted for donor and recipient characteristics. Graft survival was partitioned into graft failure and death using a competing risk framework. Expected LYFT per kidney was calculated as a weighted average based on Wolfe et al.'s projections and baseline characteristics of our matched Ki/KP cohort. 
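To make the expected-LYFT calculation described above concrete, a minimal sketch of the weighted-average step is shown below. The recipient categories, projected LYFT values, and cohort weights are invented placeholders standing in for Wolfe et al.'s projections and the baseline mix of the matched Ki/KP cohort; they are not the study's data.

```python
# Expected life-years-from-transplant (LYFT) per kidney, computed as a weighted
# average over recipient categories. All numbers below are hypothetical.
lyft_by_category = {            # projected LYFT (years) if allocated as Ki/KP
    "age<35, non-diabetic": 8.0,
    "age 35-64, non-diabetic": 5.0,
    "age 35-64, diabetic": 4.0,
    "age>=65": 2.5,
}
cohort_weights = {              # share of the matched Ki/KP cohort in each category
    "age<35, non-diabetic": 0.15,
    "age 35-64, non-diabetic": 0.45,
    "age 35-64, diabetic": 0.25,
    "age>=65": 0.15,
}

expected_lyft = sum(lyft_by_category[c] * cohort_weights[c] for c in lyft_by_category)
print(f"Expected LYFT per kidney if allocated as Ki/KP: {expected_lyft:.1f} years")
```

The same weighting logic lets the utility of a kidney allocated to SLKT be contrasted with its counterfactual use as a Ki/KP graft, which is the comparison the study is built around.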
Angiotensin II Type-1 Receptor (AT1R) antibodies have been associated with pulmonary hypertension, renal allograft loss, and fibrosis progression in liver allograft recipients especially if combined with HLA donor-specific alloantibodies (DSA). However, long-term outcome in liver allograft recipients is not only impacted by allograft fibrosis but also renal function. We therefore sought to determine if native renal function was impacted in liver allograft recipients by the presence of AT1R antibodies. Methods: Primary liver allograft recipients at Baylor University Medical Center from 1/00 to 4/09 had their prospectively collected pre-transplant (1269 patients) and year-1 post-transplant (795 patients) serum tested retrospectively for AT1R (>10) antibodies. AT1R-antibody testing was accomplished with standardized solid phase assay. Since AT1R antibodies have been associated with hypertension this factor was not controlled for in multivariable modeling. Results: Pretransplant AT1R did not change the median delta creatinine from pretransplant to 3-months post-transplant. In patients with vs. without AT1R at 1-year post-transplant a median unadjusted change in MDRD6 of -5.4. vs. -1.1mL/ min (P=0.01) was found. In multivariable analysis when controlling for diabetes (DM) and calcineurin inhibitor (CNI) use at 1-year AT1R-Ab at 1-year remained statistically significantly associated with a decline in GFR (calculated by MDRD6) from year 1 to 5 post-transplant (p=0.018, Table 1 ). This decline may have been more pronounced (p=0.06) in patients on a CNI; however, the decline was most pronounced in diabetic patients with AT1R at year-1 (p=0.004). Conclusion: AT1R antibodies post-liver transplant are associated in multivariable analysis with an increased risk of native renal function decline especially in diabetic patients. AT1R positive patients may benefit from CNI free immunosuppression. Aim of the study is to assess the incidence of acute kidney injury (AKI) and chronic kidney disease (CKD) after liver transplantation (LT) in DCD vs. DBD recipients. Methods: this is a retrospective single-centre study of 1151 patients who underwent LT from 2007 to 2014. Exclusion criteria: urgent (=66) and living donor (=7) LT. We considered: renal function pre-LT, daily within one week post-operatively, at 1, 3, 4, 6, 9 months and 1, 3, 5 years post-LT, characteristics of recipient, donor type, graft variables and indicators of initial graft function. AKI and CKD defined and classified on the basis of KDIGO Guidelines (2012). Results: we considered 1078 LT patients (830 DBD and 248 DCD). DBD recipients had a higher MELD (p=0.002) and pre-LT serum bilirubin level (p<0.001) than DCD but there were no differences in INR and serum creatinine values. DBD recipients had longer cold and recipient warm ischemia times than DCD (p<0.001 and p=0.018 respectively). The incidence of AKI was 57.9% (624/1078), of which 57.1% of DBD (474/830) vs. 60.5% of DCD (150/248). DCD recipients had a higher incidence of stage 3 AKI than DBD (20.6% vs. 12.7%, p=0.0197). Among patients with stage 3 AKI DCD had a higher cumulative incidence of CKD compared to DBD (SHR 1.6 (1.0-2.7), p=0.051). Spontaneous allogeneic kidney transplant tolerance occurs in mice, but the underlying mechanisms are unknown. Building upon recent human data showing that EPO, an erythropoietic hormone produced by the adult kidney, has immunosuppressive effects (JASN 2014), we tested the hypothesis that EPO is required for kidney allograft tolerance. 
Flow cytometric analyses showed EPO receptor expression on murine CD4+ and CD8+ T cells. Recombinant EPO inhibited T cell proliferation by 85.4±17.9%, promoted Treg induction by 148.8±39.7%, and increased Treg stability by 291.6±78.6% vs. control (p<0.05 for each). When adoptively transferred into B6 rag1−/− recipients of BALB/c (I-Ed+) kidney allografts, ~4% of naïve CD4+ Foxp3-GFPneg TEa TCR-transgenic T cells spontaneously became FoxP3+, whereas <0.5% became FoxP3+ in recipients of syngeneic B6 kidney transplant controls or in recipients of BALB/c hearts (p<0.05, n=3-5/group). Administration of recombinant EPO to B6 recipients of BALB/c hearts induced Foxp3 expression (n=3, p<0.05 vs. control), while treatment of kidney allograft recipients with acriflavine (a HIF inhibitor that blocks EPO transcription) or administration of a blocking anti-EPO mAb prevented conversion of adoptively transferred FoxP3neg TEa cells into FoxP3+ cells (n=3-4, p<0.05 for each). We next transplanted MHC-disparate A/J kidneys or syngeneic B6 kidneys into B6 hosts (removing both recipient native kidneys). While allogeneic kidneys survived for >30 days, treatment with the anti-EPO mAb (but not an isotype control) or with acriflavine precipitated acute allograft rejection (n=5-7/group, p<0.05 vs. untreated allograft recipients). Histology showed CD3+ cell infiltrates with fewer FoxP3+ cells in the acriflavine- or anti-EPO antibody-treated A/J kidneys compared to the untreated allograft controls (p<0.05). Acriflavine treatment had no effect on control isograft survival. Together, our findings indicate that kidney allograft-produced EPO is necessary and sufficient for spontaneous allospecific Treg induction, and identify EPO as the crucial kidney-specific inducer of spontaneous kidney allograft tolerance in mice. The results provide the rationale for testing EPO as a Treg-sparing, anti-rejection therapy in humans. Constitutive proteasomes (c-20S) are ubiquitously expressed cellular proteases that degrade polyubiquitinated proteins and regulate cell functions. In contrast, immunoproteasomes (IP or i-20S) are primarily expressed in immune cells. c-20S and i-20S differ only in the three proteolytic β subunits. We show here that in a murine heart transplant model, effector and memory T and B cells upregulate their expression of i-20S. We hypothesized that i-20S inhibition would target memory T and B cells and suppress alloimmunity without the toxicity of inhibiting c-20S. We designed and synthesized a specific, non-covalent, small-molecule inhibitor (DPLG3) of the i-20S β5i subunit. DPLG3 inhibited mouse i-20S with an IC50 of 9.4 nM and 1500-fold selectivity over mouse c-20S, and inhibited human i-20S with an IC50 of 4.5 nM and 7000-fold selectivity over human c-20S. C57BL/6 recipients of BALB/c hearts treated with low-dose CTLA4Ig (250 µg on day 2) and 14 days of DPLG3 injections (25 mg/kg/day) exhibited indefinite heart allograft survival compared to controls (CTLA4Ig + vehicle) (MST >100 vs. 30 days, respectively; n=6/group, p<0.05). DPLG3 significantly reduced effector CD4+ and CD8+ T cells (Teff), coupled with a marked increase in regulatory T cells (Tregs) in the spleen and draining LN. Furthermore, Teff in DPLG3-treated mice acquired an exhaustion phenotype with increased expression of PD1, TIM3 and LAG3, lower Th1 and Th17 cytokines, and lower GrB expression in CD8+ Teff (p<0.05, n=8 mice/group). In addition, RAG−/− mouse recipients of BALB/c skin received 10 × 10^6 CD3+ Foxp3-GFP-negative cells.
DPLG3 treated mice (25 mg/kg/d for 7 d) showed 5-fold increase in induced Tregs in vivo. Furthermore, CD4 and CD8 T eff exhibited more exhaustion phenotype and amenable to most transplant recipients. Thymic Epithelial Cells (TEC), a population of stromal cells residing in the thymus, exert a major contribution to central selection. However, donor TEC do not develop following BM-Tx protocols. Therefore, we propose a new immunomodulatory strategy based on generating a donor-recipient "Hybrid Thymus", through donor TEC engraftment into the recipient thymus, to re-engineer the thymic microenvironment that selects developing thymocytes and achieve dominant central tolerance. We optimized a protocol for isolating TEC via a combination of negative and positive selection. Purified BALB/c TEC (from 3-12 day old mice) were injected intrathymically into C57BL/6 with or without co-stimulation blockade (CoB: CTLA4-Ig +/-anti-CD40 mAb). TEC survival post-injection was assessed via immunofluorescent staining of thymic sections and peripheral T cells analyzed for changes in TCR Vb11 expression-a marker of negative selection. Results: Our optimized purification protocol yields an average 70% TEC purity. Injection of BALB/c TEC in otherwise unmanipulated animals resulted in minimal survival by POD21, suggesting an absence of intrinsic immunomodulatory capacity of these cells. However, CTLA4-Ig co-administration exerted a significant protection with a higher survival at POD21 and 28. In this latter treatment group, the percentage of peripheral Vb11+ T cells on d21 was significantly decreased indicating actuation of negative selection by the engrafted donor TEC. The full CoB regimen provided the highest survival rate at all time points, indicating the need for a complementary treatment to promote alloTEC engraftment. We are currently investigating the long term TEC survival and the impact on host alloreactivity. Conclusion: Our preliminary data show that the thymic engraftment, survival, and function of allogenic TEC can be promoted by CoB. These exciting results indicate that engineering a donor-recipient Hybrid Thymus is feasible and has the potential to promote a dominant regulation of alloreactivity that could be conducive to transplant tolerance induction. BACKGROUND Development of long-lived alloantibody is closely linked with chronic rejection and graft failure. We examined in a murine model of antibody-mediated rejection (AMR) if targeting T follicular helper (Tfh) cell development prevents chronic rejection by blocking germinal centre (GC) activity. METHODS T-cell deficient CB57BL/6 recipients of a BALB/c heart were reconstituted with monoclonal TCR75 CD4 T cells (with indirect specificity to donor Class I H-2Kd) or with TCR75 CD4 T cells genetically deficient for the adaptor molecule SAP. SAP signalling is essential for Tfh development but does not influence extrafollicular antibody production. RESULTS Adoptive transfer of 103 TCR75 T cells generated persistent anti-H-2Kd antibody responses, characterised by Kd-binding GC B cells within the spleen, and longlived anti-Kd-secreting plasma cells in the bone marrow. This was associated with endothelial complement deposition and activation; resulting in chronic allograft vasculopathy, and ultimately, graft rejection (median survival time (MST) = 50 d, n=10). In contrast, transfer of 103 SAP-deficient TCR75 T cells failed to initiate GC responses, with substantial reduction in anti-Kd IgG production. 
Grafts in this group survived indefinitely (n=5), without development of allograft vasculopathy. Transfer of large numbers (105) of SAP-deficient TCR75 T cells likewise did not initiate GC responses, but did provoke strong and immediate extrafollicular responses, which precipitated acute graft loss (MST=13 d, n=4) , with histological hallmarks of acute AMR. CONCLUSIONS The demonstration that GC alloantibody responses are essential for allograft vasculopathy highlights the potential for targeting the Tfh subset for improving clinical transplant outcomes. High T helper cell precursor frequency may however provoke acute graft rejection through extrafollicular antibody production. Vasculopathy. Y. Zhao, W. Chen, P. Lan, X. Xiao, W. Liu, M. Kloc, Y. Liu, R. Ghobrial, O. Gaber, X. Li. Immunobiology & Transplant Research Center, Houston Methodist Hospital and Houston Methodist Research Institute, Houston, TX. Most transplants eventually lost to chronic rejection under potent immunosuppression therapies where activation of innate immunity is suspected to play a major role in graft loss. Monocytes and macrophages are major cell types infiltrating the grafts in chronic rejection, but the molecular mechanisms involved are poorly understood. In the present study, we generated mTOR-conditional knockout mice in which deletion of mTOR in macrophages was induced by LyzM-Cre and used this model to examine the role of mTOR in regulating innate immunity in chronic allograft rejection. We demonstrated that in mTOR deleted macrophages, polarization to an M2 phenotype was impaired whereas induction of M1 phenotype was intact, and this impaired M2 induction was related to reduced STAT6 activation. In a heart transplant model in vivo, we found that treatment of wild type C57BL/6 mice with CTLA-4Ig prevented acute rejection of Balb/c heart allograft, but allowed chronic rejection to develop. In this model, there was an extensive macrophage infiltration in chronically rejected heart allografts, and the graft infiltrating macrophages preferentially expressed markers associated with M2 phenotype. Interestingly, treatment of macrophage specific mTOR deleted mice with CTLA4-Ig showed that neointima formation and M2 infiltration were marked ameliorated in the grafts, and as such, the graft survival was marked prolonged (>55 days). These findings highlight the importance of mTOR in regulation of macrophage differentiation in chronic rejection and provide new therapeutic targets in prevention of chronic allograft loss. Antibodies (Abs) to donor MHC and lung self-antigens (SAgs) administered following induction of lung allograft tolerance by co-stimulation blockade can result in breaking of tolerance and augmentation of immune responses to donor MHC and SAgs. In addition, loss of BATF can result in significant reduction in anti-MHC induced obliterative airway disease (OAD). The aim of this study is to determine the impact of anti-MHC or anti-SAgs in breaking established lung allograft tolerance in BATF deficient recipients. Orthotopic vascularized left lung transplant (LTx) was performed (BALB/c to C57BL/6 or Batf -/-, n=3/group). Tolerance was established by costimulation blockade using anti-CD154 and anti-CD40 with MR1 and CTLA4Ig. Abs to donor MHC (anti-H-2K d ), Kα 1 Tubulin (Ka1T) or Collagen V (Col-V) were administered intraperitoneally (i.p.) 30 days after LTx in the tolerant recipients. 
Histopathological analysis of transplanted lungs from tolerant C57BL/6 animals following administration of anti-MHC or anti-SAg Abs 30 days after tolerance was established demonstrated marked cellular infiltration (~9-fold), epithelial metaplasia (~9-fold), and fibrosis (~10-fold), indicating that anti-MHC or anti-SAg Abs can break established tolerance. In contrast, Batf−/− recipients demonstrated significant reductions in cellular infiltration (~7-fold, p<0.05), epithelial metaplasia (~10-fold, p<0.05), and fibrosis (~12-fold, p<0.05) around bronchioles by day 30 following Ab administration. Anti-H-2Kd administration induced cellular (Th17) and humoral (Ab) immune responses to both donor MHC and SAgs. Similarly, anti-SAg Abs led to immune responses not only to SAgs but also to donor MHC, demonstrating spreading of the immune responses that culminates in OAD. Significant reductions in SAg-specific Th9 cells (p<0.01) and Th17 cells (p<0.01) were also noted in Batf−/− recipients compared to WT B6 recipients. Significant dysregulation of a substantial proportion of immunoregulatory miRNAs, including miR-155 (~5-fold downregulation, p<0.05) and miR-126a-5p (~4-fold upregulation, p<0.05), was also seen in Batf−/− recipients. Together, these data show that lack of intrinsic BATF signaling in tolerant lung allograft recipients confers resistance to anti-MHC- and anti-SAg-mediated abrogation of established lung allograft tolerance and OAD development. Background: Obliterative bronchiolitis (OB), a major cause of morbidity and mortality following lung transplantation, is characterized by progressive fibrosis of distal airways. We hypothesized that OB results from airway stem cell depletion and destruction of their niche. We focused on the fate of K5+p63+ basal stem cells in allograft airways and on submucosal gland (SMG) structure, a key cartilaginous airway stem cell niche. Methods: Ferret left lung transplantation was performed as recently described by our laboratory as an OB model (n=5). Human OB specimens were from autopsies or surgically removed allografts (n=5). Airway histology was assessed at various times post-transplantation. Using MetaMorph software, SMGs in cartilaginous airways were quantified in human and ferret tissue, in both native lungs and allografts. Basal cells positive for markers of multipotent (K5, p63) and unipotent (K14) precursor cells were also quantified. Results: Ferret allografts revealed an intense inflammatory infiltrate with gland atrophy and a post-transplant decrease in SMGs per cartilaginous airway (g/a) over time: native lung controls = 8 g/a, early OB = 6 g/a, late OB = 2 g/a (p<0.0001). There was a progressive decline in the number of acini per gland with increasing severity of OB (p=0.0035). Evidence of SMG destruction was observed in late OB human allografts. Immunofluorescence analysis of basal stem cells in early OB ferrets revealed a progressive decline in multipotent K5+/p63+ basal cells, by 97.8% of total basal cells (p=0.004). There was a concomitant increase in unipotent K14+K5−p63− cells, by 97.8% (p=0.029). Defects were present in distal and proximal airways. Analysis of lungs from human late OB samples revealed similar changes in basal cell phenotypes. Notably, SMG destruction and basal cell phenotype changes appear to precede OB in the distal airways. Conclusions: SMGs and surface airway K5+p63+ basal cells are accepted stem cell compartments in proximal and distal airways.
Our findings suggest that these compartments undergo progressive immune destruction and exhaustion in the transplanted lung. We hypothesize that distal airways are first to exhaust their stem cells and develop fibrosis in OB allografts as they have fewer K5+p63+ basal cells and lack SMGs that house a facultative stem cell niche. Intracellular OPN Is an Essential Protective Factor for Cardiac Endothelial Cell and the Long Term Heart Graft Survival. Y. Su, 1, 2, 3 Z.-X. Zhang, 1, 2, 3 Z.-Q. Yin, 1 X.-Y. Huang, 1 A. Jevnikar. 1, 2, 3, 4 1 Matthew Mailing Centre for Translational Transplantation Studies, Lawson Health Research Institute, London Health Sciences Centre, London Health Sciences Centre, Lodon, ON, Canada; 2 Departments of Medicine, London Health Sciences Centre, Lodon, ON, Canada; 3 Departments of Pathology, London Health Sciences Centre, London, ON, Canada. [Background] Endothelial cell (EC) injury is central to cardiac allograft vasculopathy and premature graft loss. Osteopontin (OPN) is a multifunctional antiapoptotic protein that is involved in cell homeostasis, cell death and inflammation. It is expressed as secreted (sOPN) and intracellular (iOPN) forms but their role in cardiac EC survival remains undefined. In this study, we tested the capacity of OPN to regulate cardiac EC apoptosis and graft survival in a mouse model. [Methods & Results] By immunohistochemistry, we found OPN was highly expressed in B6 wildtype (wt) but not B6 OPN -/-(null) mouse heart endothelium. in vitro, sOPN and iOPN mRNA and protein were expressed by B6 wt cardiac EC. Using flow cytometry, OPN -/-EC with or without sOPN had enhanced expression of Fas ((98.4±1.2)% and (97.2±5.1)%, respectively) compared to wt EC (34.1±4.7)%, (n=3, p<0.01). OPN -/-EC had greater extrinsic apoptosis induced by anti-Fas antibody compared to wt EC (12.7±1.7 vs. 5.7±0.7 % Annexin V positivity, n=3, p<0.05) or by activated Bm12 CD4+ T-cells (26.6±4.6 vs. 10.8±2.7%, n=3, p<0.05). Additionally key inhibitors of intrinsic apoptosis were downregulated in OPN -/-EC compared to wt EC (decreased Bcl2 by 24 fold, Bcl-xl by 314 fold, realtime PCR, n=3, p<0.01), and intrinsic apoptosis by TNFα+IFNγ+IL-1β treatment was significantly higher in OPN -/-EC (n=3, p<0.05), which was not changed by manipulating sOPN in either (p= ns). In a B6 to Bm12 heterotopic cardiac allograft model, OPN -/grafts were rejected significantly faster than wt grafts (22.0±0.9 vs. 79.5±23.0 days, n=6, p<0.01) despite excessive exposure of the graft to the recipient-derived sOPN in the circulation. [Conclusion] These results demonstrate a key role for endogenous OPN in protecting cardiac EC and promoting graft survival. Importantly the capacity of OPN to inhibit apoptosis in vascular endothelium is confined to the intracellular form. While OPN plays a crucial protective role in heart transplantation, consideration of its potential therapeutic use to improve long term heart graft survival will need to consider this decisive separation of roles by cellular location. Introduction: Increased Hif-1α mRNA expression in deceased-donor organ transplant tissue prior to or soon after transplantation significantly correlated with primary graft dysfunction. HIF-1α is positioned to be a key player in the integration of TCR-and cytokine receptor-mediated signals involved in CD4 + helper lineage commitment and CD8 + effector differentiation independent of oxygen availability. 
We investigated the role of Hif-1α expression and stabilization in heart infiltrating leukocytes in ischemic conditions and in transplantation. Methods: Ischemic injury to the mouse heart was induced by ligating the LAD, 7 days later Hif-1α expression analyzed by RT-PCR in PMN, Ly6C hi and CD4 T cells extracted from the hearts, sham operated mice were used as controls. To study the role of Hif-1α in transplantation we transplanted single MHC-II mismatched bm12 hearts into B6 recipients or FucTVII KO recipients with impaired leukocyte trafficking. Results: Following myocardial ischemia, there was upregulation of Hif-1α mRNA in leukocytes infiltrating the heart [PMN (2.04 fold), Ly6C hi (6.53 fold) and CD4 (4.17 fold)] compared to sham. In the context of transplantation, Hif-1α protein expression is increased both at early (2h, 24h, Fig. 1 ) and late time-points (14 days, Fig.2 ) posttransplantation and is reduced in recipients with impaired leukocyte trafficking. Hif-1α stabilization by Dimethyloxaloglycine (DMOG) a prolyl hydroxylase inhibitor led to accelerated rejection of heart allografts with a median survival of 5 days compared to 53.5 days in controls (p=0.0008). Conclusions: Hif-1α is stabilized in the heart after ischemia prior to transplantation and both early and late after transplantation. Stabilization of Hif-1α after transplantation precipitates acute rejection. These data suggest that Hif-1α plays a critical role in alloimmune responses. Background: DGF among solitary deceased donor kidney transplants (tx) increased by 22% (p<0.0001) --from 24.6% to 30.1% --during the first 6 months of KAS. This study sought to identify the underlying causes of the increase such as the possible impact of more kidneys being shipped under KAS. Methods: We used logistic regression with OPTN data to perform effect mediation analysis to determine which donor, recipient, and logistics-related factors help to explain the apparent KAS effect on the DGF rate. Pre-KAS era: 12/4/13-12/3/14 (n=11,566 solitary deceased donor kidney transplants), Post-KAS: 12 /4/14-5/31/15 (n=5,732) . The post-KAS increase in the DGF rate -odds ratio=1.38 (95% CI: 1.28, 1.49) -was not explained at all by shifts in donor factors (e.g., age, creatinine, DCD, comorbidities), % glomerulosclerosis on biopsy findings, whether the kidney was pumped, most recipient factors, or donor-recipient matching factors. Increased shipping distance and CIT had very little mediating effect. However, adjusting for candidate duration on dialysis explained more than half of the KAS effect, reducing the residual post/pre-KAS odds ratio to 1.16 (1.06, 1.27 ). (Fig 1) Potential mediating factors of the KAS effect are those that are (a) independently and strongly associated with DGF and (b) changed significantly with KAS. Fig 2 reveals duration on dialysis prior to listing as the only such factor, with CIT only moderately associated with DGF and correlated with pre/post KAS. The early post-KAS rise in DGF appears to be largely, but not entirely, explained by more post-KAS recipients having had long dialysis durations. As transplants to these patients taper due to initial bolus effects, the national DGF rate may decrease. After accounting for traditional factors used to model DGF, a residual (unexplained) KAS effect remains. Further monitoring and analyses will be performed. Abstract# 98 Recovery. S. Bontha, 1 L. Suh, 1 P. Dormish, 1 R. Gehrau, 1 E. Rhone, 1 L. Gallon, 3 A. King, 2 C. Dumur, 2 D. Maluf, 1 V. Mas. 
1 1 UVA, Charlottesville; 2 VCU, Charlottesville; 3 Northwestern University, Chicago. Background: Delayed graft function (DGF) is mainly a consequence of ischemia and reperfusion injury (IRI) resulting in acute tubular necrosis. The degree of IRI is dependent on a complex interplay of pre-transplant injury and subsequent innate and adaptive immune responses after reperfusion known to affect long term graft survival post-kidney transplantation (KT). Here, we investigate the principal mechanisms of IRI in KT aiming to identify predictors of graft function and recovery. Methods: Gene expression using Affymetrix human genome microarrays were evaluated in a training set of 73 KT recipients. Renal biopsy samples at pre implantation (K1) and 90 min post implantation (K2) were utilized (n=146). The samples were categorized based on presence or absence of DGF. K1 vs. K2 gene expression analysis was performed using RMA algorithm. IPA was used for gene ontology and molecular pathway analysis of differentially expressed genes (fold change ³2, FDR ≤ 5% and p value ≤ 0.001). A similar analysis was done in DGF samples classified based on eGFR at 1 month post-KT (eGFR ≤40 ml/min/1.7m 2 and eGFR ≥40 ml/min/1.7m 2 ) as measure for recovery. Results: A distinct gene expression pattern was present in both DGF and non-DGF groups. Inflammation promoting genes like TNFSF6, TNFAIP, and SOCS1 were differentially expressed in DGF group and IRI ameliorating effects like SLC19A2, TGIF1, MT1M were differentially expressed in non-DGF group. Further analysis of K1 Vs K2 biopsies grouped based on recovery from DGF at 1 month posttransplantation showed unique differential gene expression in both groups. Disease and function analysis of the specific genes expressed in non-recovery group belong to development (zscore=3.18), proliferation (zscore=3.298) and growth (zscore=3.38) of connective tissue in addition to differentiation, proliferation and growth of different immune cells (zscore > 2). 8 genes were uniquely expressed in DGF recovery group. Conclusion: The exacerbated molecular immune response distinguish non-recovery group within 90 min of transplantation. The mechanistic insights gained in this study can be used for developing early intervention protocols that might lower the incidence of DGF in renal transplant patients. A. Cherukuri, R. Mehta, P. Sood, P. Randhawa, A. Tevar, S. Hariharan. Analysis of DGF and clinical outcomes in relation to progressive histological changes is not well reported. In this prospective longitudinal study, patients with DGF were compared to those with Primary Graft Function (PGF) for changes in renal function as well as acute and chronic histological lesions in the first post-Tx year through protocol (3m & 12m) and for-cause biopsies. 370 patients who were transplanted between 1/13-11/14 were prospectively studied (Thymo with MPA-TAC & rapid steroid withdrawal). 72 patients (19.5%) had DGF and were compared to those with PGF (with a further analysis of LD & DD Txs). Patient demographics including KDPI scores were comparable. 29/79(36%) patients with DGF>2wks were biopsied for renal dysfunction. Acute Inflammation: Significant tubulointerstitial inflammation (TI) was noted in the DGF group at 2wks that persisted at 3m and 12m. Patients with DGF had significantly more TI at 3m compared to those with PGF with a similar trend at 12m (Fig-1A) . These findings translated to 60% clinical/subclinical T cell rejection in the DGF group in comparison to 37% in those with PGF (P<0.05). 
Chronic damage: Chronic damage (IFTA, IF+i) was worse at 3 and 12m in the DGF group compared to PGF (Fig-1B) . Importantly 50% of those with DGF had chronic damage (mild-moderate IFTA and IF+i) as early as 3m (Fig-1B) . In patients with DGF, chronic damage was minimal at 2 weeks, however, progressed significantly at 3 and 12m. Patients who experienced a DGF and rejection had the greatest degree of chronic damage. Renal Function: Patients with DGF had significantly worse creatinine at 3, 6, 12 and 18m with a significantly higher proportion of patients showing progressive dysfunction (∆ creatinine 3-12m>0.3) (Fig-1C) . Within the DGF group, patients who experienced a rejection episode had the worst renal function (Fig-1C ). All the above results remained unchanged when the analysis was repeated in DD and LD Txs separately. To conclude, in patients with DGF, early allograft perfusion injury coupled with TI leads to early progressive chronic allograft damage and renal dysfunction. Thus events early after transplant determine allograft injury impacting graft outcomes. Associations Between Deceased-Donor Urine MCP-1 and Kidney Transplant Outcomes. S. Mansour, 1 J. Puthumana, 1 P. Reese, 2 I. Hall, 1 M. Doshi, 3 F. Weng, 4 B. Schröppel, 5 H. Thiessen-Philbrook, 1,6 M. Bimali, 1 C. Parikh. 1 1 Nephrology, Yale University School of Medicine, New Haven, CT; 2 University of Pennsylvania, Philadelphia, PA; 3 Wayne State University, Detroit, MI; 4 Saint Barnabas Medical Center, Livingston, NJ; 5 Nephrology, University Hospital, Ulm, Germany; 6 Nephrology, Western University, London, ON, Canada. Current assessments of deceased donor kidneys lack precision in predicting recipient graft function. Mouse models have demonstrated that monocyte chemoattractant protein-1 (MCP-1) has a pivotal role in inflammation and repair after acute kidney injury (AKI). Thus, revealing the role of MCP-1 in the recovery process after deceased donor kidney procurement might enable better prediction of recipient graft outcomes. We conducted a multicenter prospective cohort study involving deceased kidney donors from five organ procurement organizations (OPO). Donor characteristics were obtained from OPO donor charts. Recipient characteristics were obtained through the United Network for Organ Sharing database. We measured donor urine MCP-1 (uMCP-1) levels and determined associations with: donor AKI (at least doubling of serum creatinine), delayed graft function (DGF) and recipient 6-month eGFR. The cohort included 1301 deceased donors and 2435 recipients. AKI was present in 111 (9%) of donors. Urine MCP-1 levels were significantly higher in donors with AKI, compared to donors without AKI, with median (interquartile range) of 1.35 ng/mL (0.41-3.93 ) versus 0.32 ng/mL (0.11-0.80) . After adjusting for donor variables, each unit increase in log uMCP-1 was associated with 54% increased risk of donor AKI (RR 1.54 [95% CI 1.42-1.67]). A total of 756 (31%) recipients developed DGF. There was no association between log uMCP-1 and risk of DGF. Higher levels of log uMCP-1 were associated with higher 6-month eGFR (0.81 ml/ min per 1.73 m 2 [95% CI 0.21-1.41]) after adjusting for donor, transport, recipient variables and donor urine NGAL levels. When stratifying the cohort by DGF status, the association between 6-month eGFR and uMCP-1 concentration remained significant only in the non DGF group. 
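As a purely illustrative aside (not part of the original abstract), the adjusted, DGF-stratified association between donor urine MCP-1 and 6-month eGFR described above could be estimated along the following lines; the file, variable names, and exact adjustment set are hypothetical placeholders, not the study's actual data dictionary.

```python
# Hypothetical sketch of the adjusted, DGF-stratified analysis described above.
# Column names and the adjustment covariates are assumptions for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("donor_recipient_pairs.csv")      # hypothetical analytic file
df["log_umcp1"] = np.log(df["umcp1_ng_ml"])        # log-transform donor urine MCP-1

formula = ("egfr_6mo ~ log_umcp1 + donor_age + donor_terminal_creatinine "
           "+ cold_ischemia_hr + recipient_age + log_ungal")

# Re-fit the same adjusted linear model within each DGF stratum, mirroring the
# secondary analysis in which the association persisted only without DGF.
for dgf_status, subset in df.groupby("dgf"):
    fit = smf.ols(formula, data=subset).fit()
    lo, hi = fit.conf_int().loc["log_umcp1"]
    print(f"DGF={dgf_status}: eGFR change per unit log uMCP-1 = "
          f"{fit.params['log_umcp1']:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```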
In summary, donor uMCP-1 levels were strongly associated with donor AKI, demonstrating the presence of inflammation and repair in the graft, but were also associated with higher recipient graft function at 6 months. Hence, uMCP-1 may have a role in assessing kidney graft quality among deceased donors. Surveillance Renal Allograft Biopsies Prognosticate Graft Function. J. Thompson, D. Taber, P. Baliga, K. Chavin, T. Srinivas. MUSC, Charleston, SC. Background: Surveillance biopsy of the renal allograft provides a window into the health of the allograft and could allow prognostication of function by affording better understanding of structure-function relationships in the kidney. We examined the relationship between Banff lesion scores obtained during protocol biopsies and subsequent graft function. Methods: This was a retrospective analysis of first protocol biopsy data obtained from solitary renal transplants performed between 2005 and 2013, with last follow-up in October 2015. Protocol biopsies were performed 3 to 6 months post transplant. We calculated a "chronic Banff score" as the sum of the chronic glomerular (cg), interstitial (ci), tubular (ct), and fibrointimal thickening (cv) lesion scores. Estimated GFR was calculated by the 4-variable MDRD equation. We compared serum creatinine at multiple time points to chronic Banff scores by repeated-measures ANOVA. Covariates examined included age, gender, race, recipient hypertension, recipient diabetes and KDRI. Results: Of 1320 patients with 1720 biopsies, 279 were the patient's first protocol biopsy. Creatinine data were available for up to 2 years after transplant for 160 biopsies. The mean time to first protocol biopsy was 125.39 days. The mean eGFR at time of first protocol biopsy was 53.5 ml/min/1.73 m², with Banff lesion scores ≤2 and >2 at 54.39 and 51.8 ml/min/1.73 m², respectively. The mean serum creatinine at time of first protocol biopsy for those with chronic Banff score ≤2 was 1.393 vs 1.562 (P=0.020). Between the 6-month and 24-month time points, patients who had a chronic Banff lesion score >2 saw a creatinine rise of 10.52% vs. a 4.33% rise for those with a score ≤2. Higher cumulative chronic Banff score at first protocol biopsy predicts higher serum creatinine at subsequent time points. The slope of creatinine rise is more rapid in those with higher chronic lesion scores. Patients with higher chronic scores may need consideration for treatments that minimize nephrotoxic insults and maximize renoprotection. Further study is needed to examine relationships between biopsy scores and graft survival. (2.9%) or Campath (0.18%) induction. TCMR was detected in 3.4% (36) and AHR in 0.4% (5) of cases, though borderline changes were called in 8.7% (96). Blood BK-QPCR was positive in 22.8% (252), but BK nephritis was rare at 1.8% (18). Mild tubular atrophy was often seen (54%; 596), but substantive tubular atrophy (ct>2) was rarely seen (4.4%; 48). Mild inflammation (>i0) was seen in 13.4% (148), and some tubulitis could be noted even in non-atrophic areas (>t0) in 26.9% (297), without meeting Banff criteria for rejection. Peritubular capillaritis (ptc>0) was rare at 3.2% (35) and arteriolar hyalinosis (ah>0) was seen in 16.9% (186). Patients with inflammation had no differences in mean age, DSA, PRA or NODAT, but inflammation significantly correlated with greater decline in GFR between 1 and 24 mo (coeff: 0.21, C.I.
0.2-0.43; p=0.03). Patients with tubulitis had higher rates of NODAT and DSA and a significant correlation with worse 2-yr GFR (coeff: 0.42, C.I. 0.13-0.70; p=0.004), which persisted when corrected for age at tx, donor type and NODAT (coeff: 0.53, C.I. 0.21-0.86; p=0.001). Drug toxicity (ah) had no impact on graft function. Conclusions. Protocol biopsy data reveal that Banff-confirmed subclinical rejection is a very low frequency finding, but even very mild changes of inflammation and tubulitis, currently not factored into Banff scoring for an independent rejection diagnosis, have a substantive impact on downstream graft function. Further analysis is underway to develop a composite protocol biopsy risk score that could be more informative for patient management. Introduction. Up to 30% of patients with protocol biopsies and stable renal function have borderline lesions. Their relationship with acute rejection remains uncertain, as does the decision to treat them. The aim of this study is to compare the outcome of patients with borderline lesions who were treated versus those who were not treated. Methods. From January 2004 to January 2015, 519 transplants were performed. All patients with a previous immunological event were excluded. We included 104 patients with borderline lesions on protocol biopsy without dysfunction, performed within the first year after transplantation. Graft survival was analyzed according to whether treatment was given, for the total population and after excluding those with pretransplant DSA. Descriptive statistics were used according to the level of measurement of each variable. To analyze graft survival, Kaplan-Meier and log-rank methods were used. RESULTS. 104 patients, 50% male, were analyzed. Mean age was 34.6±12.9 y, and 57.7% were transplanted from a living donor. The main cause of CKD was unknown in 57 (54.8%). Of all patients, 48.1% (50) received methylprednisolone pulses, 5.8% (6) immunosuppression optimization, and 8.7% (9) both treatments; 37.4% (39) received no treatment. Only 15.4% (16) had pretransplant DSA. Graft survival was not different for treated patients (Figure 1), including when the type of treatment was analyzed or when patients with pretransplant DSA were excluded. The bar graph shows the histological evolution according to treatment. Of all patients, 23.1% (24) progressed to rejection. In this study, treatment of borderline lesions did not improve the outcome in kidney transplant patients, and the type of treatment did not modify graft outcome. Background: Current guidelines recommend ultrasound (US) for hepatocellular carcinoma (HCC) surveillance in cirrhotic livers. However, certain patient and disease characteristics may pose a technical challenge to US surveillance. We aimed to assess predictors of decreased sensitivity of US for detecting HCC. Methods: At a single, high-volume liver transplant (LT) center in the U.S., all patients with HCC being evaluated for LT receive an abdominal US, which allows for comparison of US sensitivity to recent MRI or CT. In consecutive patients presenting 2012-2015, previously untreated LI-RADS 4 or 5 lesions found on cross-sectional imaging within three months of US were compared with US findings. Multivariable logistic regression models compared potential US sensitivities by patient and nodule characteristics.
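To make that per-patient modelling step concrete, a minimal sketch of the kind of multivariable comparison described in the Methods is shown below; it is not the authors' code, and the field names and the crude detection definition are assumptions.

```python
# Hypothetical per-patient analysis: was the known HCC detected on US (yes/no),
# modeled against patient-level factors. Field names are illustrative assumptions.
import pandas as pd
import statsmodels.formula.api as smf

pts = pd.read_csv("lt_evaluation_cohort.csv")                 # hypothetical file
pts["us_detected"] = (pts["n_lesions_seen_on_us"] > 0).astype(int)
pts["obese"] = (pts["bmi"] >= 30).astype(int)
pts["nash"] = (pts["etiology"] == "NASH").astype(int)

# Crude per-patient sensitivity overall and by NASH status
print("Overall sensitivity:", round(pts["us_detected"].mean(), 2))
print(pts.groupby("nash")["us_detected"].mean().round(2))

# Multivariable logistic model for US detection of known HCC
fit = smf.logit("us_detected ~ nash + obese + meld + moderate_severe_ascites",
                data=pts).fit(disp=False)
print(fit.summary())
```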
Results: Of 536 patients completing a LT evaluation, 288 patients (54%) had no residual tumor after local-regional therapy (LRT) on CT or MRI and were excluded, and 39 (7%) did not meet imaging inclusion criteria. Median MELD of the study cohort (n=209) was 10.6 (IQR 8.8-14.2), median BMI was 28 (IQR 25-31), and 46% had previously received LRT. Moderate ascites was seen in 5% and 9% had severe ascites. Overall per-patient sensitivity of ultrasound compared to CT or MRI was 0.82 (95% CI 0.76-0.87). Patients with BMI ≥30 had a US sensitivity of 0.79 vs. 0.83 for BMI <30 (p=0.5). MELD and presence of moderate/severe ascites also did not affect US sensitivity. US sensitivity was decreased in patients with non-alcoholic steatohepatitis (NASH) vs. other etiologies (0.54 vs. 0.84, p=0.007). Compared to those with other etiologies and MELD < 10, patients with NASH and MELD ≥ 10 were less likely to have their HCC detected by US (sensitivity 0.44 vs. 0.85, p=0.008). Among 296 nodules in total in the cohort, the overall per-nodule sensitivity of US was 0.72 (95% CI 0.67-0.77). US was less likely to detect nodules 1-2 cm in size than nodules ≥2 cm, with a sensitivity of 0.62 vs. 0.79 for larger nodules (p=0.006). Conclusions: US performed at a high-volume LT center demonstrates suboptimal sensitivity, failing to detect known HCC in nearly 20% of patients and missing nearly 30% of all lesions. US was particularly insensitive in patients with NASH and for nodules < 2 cm. HCC surveillance guidelines should consider cross-sectional imaging for patients with cirrhosis and concurrent metabolic risk factors, to promote earlier detection of HCC. Pre-Transplant Loco-Regional Therapy for Hepatocellular Carcinoma and Post-Transplant Outcomes: A National Study. M. Nazzal, 1 J. Chen, 1 K. Lentine, 1 M. Schnitzler, 1 J. Tuttle-Newhall, 2 C. Varma, 1 A. Taha, 1 A. Said, 1 H. Xiao, 1 H. Randall. 1 1 Saint Louis Univ, Saint Louis; 2 East Carolina Univ, Greenville. Currently most transplant centers use loco-regional therapy to bridge liver transplant (LT) candidates with hepatocellular carcinoma (HCC). However, limited data are available to support the efficacy of these approaches. We examined a novel linkage of the Organ Procurement and Transplantation Network (OPTN) registry and Medicare claims data for 18,945 LT recipients (2003-2013), including those with HCC as the primary diagnosis for LT (N=6,096). Loco-regional therapy in the year before LT was identified based on International Classification of Diseases (ICD)-9 procedure and Current Procedural Terminology (CPT) codes, and categorized as: trans-arterial chemoembolization (TACE), radio-frequency ablation (RFA) only, or combination therapies including radiotherapy (RT), RFA or alcohol injection (AI). Associations of loco-regional regimen with 5-yr post-LT mortality were examined by multivariate Cox regression. A secondary analysis was performed among those with explant pathology, adjusting for tumor size, number, presence of vascular invasion, and differentiation as reported to the OPTN (N=1,674). LT recipients with HCC managed with TACE+RFA pre-transplant had qualitatively superior post-LT survival compared to patients managed with RFA alone or no loco-regional therapy (Kaplan-Meier P=0.06) (Figure). After adjustment for baseline recipient, donor and transplant factors, compared with no loco-regional therapy, most single or combination therapies were associated with trends toward lower post-transplant mortality except TACE+AI (Table).
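A hedged sketch of the adjusted survival model just described follows; the covariates and column names are placeholders rather than actual OPTN or Medicare fields, and the reference category (no loco-regional therapy) is only an assumption about how the regimen variable would be encoded.

```python
# Illustrative multivariable Cox model for 5-year post-LT mortality by
# pre-transplant loco-regional regimen. Not the authors' code; all variables
# are hypothetical stand-ins.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("hcc_lt_linked_cohort.csv")                       # hypothetical file
# One-hot encode the regimen; in practice the dropped level should be "none"
# so hazard ratios are interpreted against no loco-regional therapy.
df = pd.get_dummies(df, columns=["lrt_regimen"], drop_first=True)

covariates = [c for c in df.columns if c.startswith("lrt_regimen_")] + [
    "recipient_age", "donor_age", "lab_meld", "dcd_donor",
]
cph = CoxPHFitter()
cph.fit(df[["months_to_death_or_censor", "died_within_5yr"] + covariates],
        duration_col="months_to_death_or_censor",
        event_col="died_within_5yr")
cph.print_summary()   # hazard ratios per regimen vs. the reference category
```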
However, the favorable patterns did not reach statistical significance. Patterns were similar in the subgroup with explant pathology records. Combination loco-regional prior to transplant might improve post-transplant survival among LT candidates with HCC. In light of new US allocation policy for HCC exception scores expected to lengthen waiting time for many HCC patients, ongoing studies are vital to define the optimal management of patients with HCC before and after LT. Hepatocellular carcinoma (HCC) remains the most commonly granted MELD exception (M/E). Share 35 has disadvantaged this population. We have developed a strategy to use traditionally unacceptable organs (TUOs) including locally discarded organs, grafts with macrosteatosis, and DCD donors for these recipients. A retrospective review of the patients transplanted for HCC in a large urban academic medical center was performed from 1/2012-8/2015. Recipient and Donor data were obtained from our databases. Longitudinal follow-up has been maintained for recurrence of HCC. M/E at transplant, time to transplant, donor origin and organ type were analyzed. In 2012, 30% of patients were transplanted under an HCC M/E. They had an average waitlist time of 435 days and 85% of the grafts were from local donors. After Share 35, the HCC M/E patients made up only 20% of the transplanted population with most of livers (85%) procured locally. This decreased to 15% with concomitant increase in the time to transplant. In 2015, with the use of TUOs for the HCC population we observed an increase in the percentage of M/E (30%) patients transplanted with a decrease in transplant time (213 days v. 311days, p=0.001). Regionally procured livers increased to 50% for the HCC M/E population. There was no change in the average MELD at transplant in the time periods analyzed (p=0.2). Liver transplantation in the Share 35 era disadvantages the HCC M/E population. An aggressive approach to the use of local and regional TUOs may increase in the number of M/E patients transplanted. This may lead to a reduction in waitlist time and a reduction in early disease recurrence. In the largest series of HCC explants to be studied to date, we are the first to demonstrate a relationship between rising MELD and vascular invasion in HCC. This can possibly be due to greater inflammation associated with rising MELD scores or the inability to aggressively treat patient with loco regional therapy prior to transplant. This knowledge may aid in the selection of patients with out of criteria tumors who are considered transplant candidates. Obesity has become a worldwide epidemic. The number of patients with HCC undergoing Orthotopic liver transplant (OLT) is rising. Epidemiologic evidence suggests a link between obesity and HCC. This is believed to occur by a production of protumorogenic adipocytokines by adipose tissue. Body mass index (BMI), a composite measure of subcutaneous and visceral adiposity, has been used as a marker for obesity. However, it is now known that visceral but not subcutaneous fat is metabolically active and is alone responsible for the secretion of a protumorogenic systemic-mileu. We hypothesize that visceral adiposity measured by MRI is a prognostic indicator of oncologic outcomes after OLT for HCC and, MRI measures of visceral adiposity correlate with oncologically inferior explant pathology. From January 1999-January 2013 we identified 423 patients with HCC who underwent OLT and had MRI imaging preOLT. 
Peri-renal fat thickness, a standardized measure of visceral obesity, was calculated on all MRI scans. Demographics, BMI, HCC recurrence, days to recurrence, and survival were measured. All explant pathology was reviewed. Student's t test, chi-square and Kaplan-Meier curves were used as appropriate. Mean MELD was 22±8. No differences were noted in MELD across the spectrum of BMI or peri-renal fat thickness. The total recurrence rate was 12%. These data show that increasing visceral adiposity is associated with increased recurrence and hastened time to recurrence. Additionally, visceral adiposity is associated with oncologically inferior explant pathology. Therefore, visceral adiposity in patients with HCC should be used as a prognostic marker to determine appropriate allocation of livers in HCC recipients with similar MELD scores. Lean thinking is a bundle of concepts, methods and tools for organizing human activities in organizations. It attempts to eliminate unnecessary waste in order to deliver more benefit. The methodology of Lean Six Sigma combines these principles with a statistical approach ("six sigma") and lean management. Here we deployed, for the first time, Lean Six Sigma methodology for the evaluation and improvement of a transplant administration process. We focused on kidney waitlist management and the duration between first contact with our center and final listing on the Eurotransplant waiting list. The structure of this work follows the DMAIC improvement cycle as the core tool for Six Sigma projects. This cycle includes the five consecutive steps "Define", "Measure", "Analyze", "Improve" and "Control". First we described the current process for kidney waitlist management in our hospital. Next we defined measurements for the administration process. The primary metric was document cycle time. Then we collected historical data from patients listed for kidney transplantation in 2014. A random sample was taken according to a sample size calculation. All available data for the 64 randomized health records were analyzed using Minitab®. Based on our analysis we implemented several improvements into the process. Later we tracked metrics for ongoing listing processes in 2015 over six months. Based on the created process map of kidney transplant waitlist management we could generate a value stream map. Calculations revealed an initial total lead time (processing time in our office) of about 30.4 days. The impact of different factors on our output was evaluated by Pareto chart analysis. After implementing reasonable improvements, the lead time decreased from 30.4 to 10.1 days (a decrease of 67%) and the number of new listings in our hospital increased dramatically. Additionally, we identified further potential time savings that could be realized in future improvement projects. The described Lean Six Sigma project demonstrates that the methodology can be successfully applied to improve kidney waitlist management at our university hospital. So far the approach has delivered the essential data to track the workflow and make the process of kidney waitlist management transparent for all process owners. (p=0.15) and was most pronounced for KDPI 35-85% (10% to 16%). Discard rates for KDPI >85% remained steady (66% vs 68%) but were higher than the national rate (61.9%). Export rates rose from 5% to 38% (p=0.001) due to regional sharing of KDPI >85% kidneys and highly sensitized candidates.
One-third of exported kidneys were not used in primary recipients and an additional 13% were discarded. Of organs transplanted in non-primary recipients, 72% were originally exported for cPRA >98% through regional/national sharing policies. These patterns reflect increased regional sharing and a response to decreased low-KDPI organ availability. Centers should individually assess the impact of KAS to optimize DD organ allocation. Assessing the Impact of the Share 35 Liver Allocation Policy: Survival Outcomes Among Liver Re-Transplant Recipients. J. Ortiz, 1 N. Koizumi, 2 C. Kwon, 2 Y. Zhang, 2 C. Ortiz. 3 1 Department of Surgery, University of Toledo, Toledo, OH; 2 George Mason University, Fairfax, VA; 3 Bucknell University, Lewisburg, PA. Liver retransplantation (reLTx) is the only treatment option for those who experience graft loss after primary liver transplantation (LTx). Yet, in an era of increasing organ scarcity, the higher graft failure rate among reLTx recipients is a source of continuing controversy. Thus, this study uses UNOS liver transplant data (2011-2015) to examine the survival outcomes in reLTx recipients with a MELD score of 35 and above after the Share 35 liver allocation policy. Outcomes of interest are 1- and 2-year post-transplant graft and patient survival. Kaplan-Meier estimates for graft survival at 1, 3, 6, 12, and 36 months are 87.9%, 80.7%, 75.6%, 69.1%, and 64.3% in MELD>=35 reLTx recipients before the policy. The estimates after the policy are 90.3%, 85.8%, 82.6%, 79.6%, and 62.4%. The log-rank test for equality of survivor functions demonstrates a significant improvement in 1-year graft survival among reLTx recipients after the policy (p=0.03). There is no significant difference in 2-year graft survival before and after the policy (p=0.63). The estimates for patient survival are similar; they are 89.0%, 81.7%, 75.6%, 71.0% and 66.8% before the policy and 91.3%, 86.7%, 83.5%, 79.6% and 61.3% after the policy. The results of the log-rank tests are also similar, indicating a significant improvement in 1-year patient survival since the policy (p=0.05) but no significant improvement in 2-year patient survival (p=0.28). Figure 1 presents the corresponding graft survival curves. Cox regression for graft failure also confirms the results. The hazard ratio for the interaction term between the Share 35 era and MELD>=35 patients is 0.64 (p=0.08). Donor risk index (DRI) is another significant variable for graft and patient survival (p<0.001). In conclusion, the Share 35 policy seems to have improved 1-year graft and patient survival in reLTx recipients with a MELD score of 35 and over by about 10%. However, the policy does not seem to have improved 2-year survival outcomes in this cohort. The Finances of Broader Sharing of Livers Following Share 35. A. Harper, 1 E. Edwards, 1 R. Gilroy, 2 W. Chapman, 3 D. Mulligan, 4 G. Klintmalm. 5 1 Research, UNOS, Richmond, VA; 2 Univ. of Kansas Med Ctr, Kansas City, KS; 3 Washington Univ, St Louis, MO; 4 Yale Med Ctr, New Haven, CT; 5 Baylor Univ, Houston, TX. Background. After implementation of the "Share 35" policy in 2013, many centers and OPOs reported increased costs associated with broader sharing. The OPTN Liver and Intestinal Transplantation Committee sought to quantify these costs as it explores various concepts to further broaden sharing. Methods.
Forty liver programs were asked to provide organ charges (e.g., acquisition, surgeon fees), transport mode (e.g., flight, ground) and transport charges for up to 50 of their liver transplants performed in 2014. Other requested data included post-operative variables related to recipient costs (lab MELD at transplant) and times associated with recovery and transplantation. Results. Responses were received from 28 centers (70% of sample) for 1039 transplants (17% of all deceased donor whole liver transplants in 2014), from 9 of 11 Regions. Of these organs 65% were transported by air, with median charges for local vs regional or national air transport of $3892, $9796, and $13099. Median organ procurement charges increased from $38,350 for local to $56,967/$56,894 for regional/national. Transport and total organ charges were strongly correlated with distance (r=0.79 and 0.64, p<0.001). Median recipient charges for Lab MELD scores < 24 were $248,140 vs $449,996 for lab MELD 35+. Overall recipient charges were strongly correlated with LOS (r=0.61, p<0.001) and only weakly with MELD score (r=0.24, p<0.001). Discussion: This is the first survey to capture actual data on the costs of broader sharing for a large sample of liver transplants. While not representing all regions or programs, these data nevertheless provide insights into the potential financial impacts associated with sharing organs across greater distances. Background. Antibody mediated rejection is the leading cause of renal allograft failure. Membrane-bound angiotensin II receptor(AngII) in the renal allograft can be a target of cellular(ACR) or antibody(AMR) mediated rejection. We questioned whether AngII was released during rejection and might be a useful marker of graft injury and outcome. Methods. Serum was collected at 0,3,6, 12 months posttransplant from 53 renal transplant recipients and 20 healthy volunteers(Controls). Patients were grouped as: 27 Non-rejectors (NR), 9 ACR and 17 AMR occurring in first year. AngII was measured by elisa. Three year outcomes were compared. Results. AngII was low among Controls(13±25 pg/ml). Using 60 pg/ml as upper limit, 20% Controls were above limit. In contrast, 90%(p<0.001) renal patients had >60 pg/ml AngII. Pretransplant levels were highly elevated(369±448, p<0.001) compared to Controls but similar between NR and all rejectors (367±537 vs. 372±356, p=ns) and between ACR and AMR groups(427±166 vs. 386±12, p=ns). Among NR, AngII levels and frequency of patients with >60 pg/ml continually declined and at l yr were similar to Controls(88 pg/ml, 20% patients, p=ns). In contrast, AngII levels spiked among rejectors during rejection. Thereafter, AngII level declined among ACR and 1-yr level(96 pg/ml) and frequency(37%) were similar(p=ns) to NR. In contrast, 1-year AngII levels(155 pg/ml) and frequency(50%) among AMR group were greater(p<0.01) than NR. Three year graft survival was equivalent between NR and ACR groups(100% vs. 90%, p=ns) but poor for AMR(60%, p=0.002). One-year AngII levels were reexamined among AMR patients with graft failure(n=7) or surviving grafts (n=10) at 3-years. Among graftsurvivors, 1-yr AngII level(75±90 pg/ml) and above-normal frequency(30%) were equivalent (p=ns) to NRs. In contrast, patients with subsequently failed grafts had greater AngII(209±118, p<0.05) and a high frequency of patients with abnormal AngII levels(86%, p<0.05). Summary. Circulating AngII is uncommon among healthy individuals but prevalent in renal failure. 
Pretransplant AngII levels did not predict the potential for rejection. AngII spiked during both ACR and AMR. However, only sustained elevations in AngII after AMR were associated with poorer 3-year graft survival. This suggests systemic AngII may be a marker of tissue injury and may facilitate detection of subclinical or chronic antibody mediated rejection. Archetype analysis is a method for identifying new relationships among samples without subdividing them into classes, finding a limited number of archetypical cases and calculating the relationship of every sample to each archetype. We applied this method to molecular ABMR, using microarray and conventional data from 1208 indication biopsies from prospective multicenter studies in Europe and North America (ClinicalTrials.gov NCT1299168). Each sample had scores assigned by molecular classifiers trained on its histologic ABMR diagnosis (ABMR score) or its histologic features (ptc-, g-, or cg-lesions). The analysis indicated that 5 archetypes explained the variation in the ABMR features, and that all biopsies had relationships to all five archetypes: 1. late cg-rich with less ptc/g than full house; 2. "full house" highly active ABMR; 3. early ABMR without cg features; 4. a form of TCMR that shared some ABMR features but had unexpectedly low hyalinosis for time and probably reflected non-adherence; 5. biopsies with no ABMR-related molecular features. Biopsies were divided into clusters based on their strongest archetype relationships (Figure 1). Risk of subsequent failure was highest in the late cg-rich (#1) and full house (#2) clusters, but after a delay was also increased in the early ABMR biopsies without cg. This method indicates that many biopsies may be misclassified by histology (Figure 2), including histologic ABMR lacking molecular features, histologic TCMR misclassified as ABMR, and vice versa. The results raise the possibility of better disease classification and risk prediction when new molecular methods of assessment are combined with novel methods for discovering relationships among biopsies while retaining the uniqueness of each biopsy. The angiotensin II type 1 receptor (AT1r) plays a key role in the renin-angiotensin system. Anti-AT1R antibodies (Abs) have been associated with both acute and chronic injuries of kidney allografts and detected in 17-59% of pre-transplant samples. AT1R is widely distributed among different organs and tissues as well as in circulating leukocytes. The aim of this work is to investigate whether a positive flow cytometric crossmatch (FCXM) in the absence of HLA-DSA can be produced by AT1r Abs. We queried our laboratory database for all positive FCXM results in the absence of HLA-DSA. We tested for anti-AT1r with an ELISA assay (One Lambda, CA). Two serum samples (SS) with high titers (>40 U/ml) of AT1r Ab were incubated with a surrogate donor's cells known to produce a positive FCXM. After 30 min incubation the cells were washed and then incubated with an elution buffer to remove the Abs absorbed by the cells. After centrifugation the supernatant was collected and tested with the AT1r ELISA assay. Eleven of seventeen (64.7%) SS with a positive T or B cell FCXM were also AT1r positive using a 12 U/ml cutoff. Two of seventeen positive T or B FCXM were AT1r negative. For comparison we tested 22 SS with negative FCXM selected from our database as a control, and 4/22 samples (18.2%) were positive for AT1r Ab, two with a result of 12 U/ml and the other two with 16 and 20 U/ml.
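As a simple illustration (not part of the abstract), the comparison of AT1R-antibody positivity between FCXM-positive and FCXM-negative sera can be framed as a 2x2 contingency table; the complementary cell counts below are inferred from the stated group sizes and the choice of test is ours, not the authors'.

```python
# Illustrative 2x2 comparison of AT1R-Ab positivity (>=12 U/mL cutoff) between
# FCXM-positive sera without HLA-DSA and FCXM-negative control sera.
from scipy.stats import fisher_exact

#                  AT1R+  AT1R-
fcxm_positive = [   11,     6]   # 17 FCXM+ sera; 11 reported AT1R+ (remainder assumed AT1R-)
fcxm_negative = [    4,    18]   # 22 FCXM- controls; 4 reported AT1R+

odds_ratio, p_value = fisher_exact([fcxm_positive, fcxm_negative])
print(f"odds ratio = {odds_ratio:.1f}, Fisher exact p = {p_value:.3f}")
```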
To demonstrate that the positive FCXM was produced by AT1r IgG Abs, we tested the eluted supernatant after the incubation of two SS with surrogate donor cells, using the same AT1r ELISA assay. The OD from the eluted samples was only 6-16% lower than the original SS. These results demonstrate that IgG AT1r Abs produce the positive FCXM result. Our data show that AT1r Abs are capable of producing a positive T and B cell FCXM. A positive FCXM in the absence of HLA-DSA should lead to screening for AT1r Ab to evaluate immunologic risk and guide post-transplant monitoring. The presence of microvascular inflammation (MVI; Banff glomerulitis "g" score and Banff peritubular capillary "ptc" score) in KTxBx, often related to antibody mediated rejection (AMR), is associated with reduced graft survival and is one of three Banff criteria needed for a diagnosis of AMR. However, neither the current minimal Banff criteria for a diagnosis of MVI nor the impact of varying degrees of MVI have been validated in a large patient population. The objective of this study was to define the impact of MVI among 478 patients biopsied due to new-onset kidney graft dysfunction (25% increase in Scr or new-onset proteinuria) ≥ 3 months after transplant, enrolled in the two Long-term Deterioration of Kidney Allograft Function (DeKAF) Study cohorts (cross-sectional (CSC): 229 patients; prospective (PC): 249 patients). Biopsies showing other potential causes of graft dysfunction - recurrent glomerulonephritis, BK nephropathy, and transplant glomerulopathy - were excluded. Biopsies were read centrally by a single pathologist (JG) and graded in strict accordance with Banff criteria. Impact of MVI was assessed by the hazard ratio (HR) for death-censored graft failure. Initial analyses examined the impact of ptc ≥ 1, g ≥ 1, or both, compared with absence of both ptc and g (111, 18, 63, and 286 patients respectively). The HRs for any g or ptc score ≥ 1 compared with both = 0 were similar (HR: 2.1 and 2.7 respectively). To evaluate the impact of higher scores of either g or ptc, a composite score for MVI (MVI score) totaling the individual ptc and g scores was determined. As shown in the figure, graft loss was significantly more frequent with any MVI score > 0 than with an MVI score of 0 (all p < 0.0001), but outcomes at each score > 0 were virtually identical (all comparisons p > 0.14). The impact of MVI was not different between biopsies in the CSC and PC cohorts. We conclude that MVI should be diagnosed in the presence of any Banff ptc or g score > 0, and that the impact of MVI on death-censored kidney graft failure is independent of total MVI score > 0. Background: The mechanism leading to the appearance of anti-HLA antibodies and donor-specific antibodies (DSAs) after an allograft nephrectomy (NTx) is not fully understood. Two mechanisms have been advocated: (i) at graft loss, DSAs are not detected in the serum because they are fixed on the non-functional transplant, and are released after NTx; (ii) NTx itself is responsible for de novo anti-HLA immunization against residual donor tissue. The aims of our study were to compare anti-HLA antibodies present in the serum and graft at the time of an allograft nephrectomy, and to assess the timing and kinetics of their appearance after an allograft nephrectomy. Methods: Seventeen patients underwent NTx 4 (3-33) months after graft loss. Immunosuppression had been stopped in all patients at least three months before NTx.
Anti-HLA antibodies were assessed in the serum before NTx and 1, 5, 30, and 90 days after NTx. In addition, fragments of the removed kidney allograft were eluted to characterize intra-graft anti-HLA Abs. Anti-HLA antibodies were analyzed using the Luminex single antigen assay. Results: Of 14 patients with anti-HLA antibodies in the serum at NTx, 11 had anti-HLA antibodies fixed in the kidney allograft. No anti-HLA antibodies were detected in the graft if none were detected in the serum. Eleven of the 12 patients who had DSAs detected in their sera also had DSAs detected in the grafts. C4d staining was positive in 9 of the 11 patients. Epitopic analysis revealed that most anti-HLA antibodies detected in removed grafts were directed against the donor. All de novo anti-HLA antibodies and DSAs were detected ≥ 1 month after NTx. Interestingly, none of the de novo anti-HLA antibodies had been previously detected in the graft. Conclusion: Our data suggest that anti-HLA sensitization after NTx is related to the NTx itself rather than to a release of fixed anti-HLA antibodies from the failed kidney. Comparison of C4d Deposition in Renal Biopsies with Luminex-Based C3d Single Antigen Bead (SAB) Detection. R. Pelletier, 1 I. Balazs, 2 P. Adams, 3 P. Steller, 3 N. DiPaola, 4 L. Rankin, 3 A. Diez, 5 M. Henry. 1 1 Surgery/Transplant, The Ohio State University Wexner Medical Center, Columbus, OH; 2 Discovery Research, Immucor, Inc, Stamford, CT; 3 Tissue Typing, The Ohio State University Wexner Medical Center, Columbus, OH; 4 Methodist Hospital, Houston, TX; 5 Nephrology, The Ohio State University Wexner Medical Center, Columbus, OH. Recent interest in characterizing the complement-fixing ability of post-transplant donor-specific alloantibodies (DSA) resulted in the enhancement of Luminex solid-phase alloantibody assays to determine the complement-fixing capacity of DSA. Previous studies have reported a significant correlation of renal biopsy-detected C4d deposition with SAB C1q detection (produced via the classical pathway only). A newly available SAB method detects C3d (produced via the classical, lectin, and alternate pathways) rather than C1q. The purpose of this study was to correlate SAB DSA-associated C3d detection with biopsy evidence of C4d deposition in renal transplant recipients. A cohort of 222 recipients transplanted between 6/2002 and 11/2013 with previously identified post-transplant de novo DSA was retested by SAB C3d assay. A comparison biopsy was available for 187/222 recipients. This 187-patient cohort was 44.8 ± 11.9 (20-74) years old, 64% male (n=119), 36% African-American (n=67), 32% (n=60) sensitized pre-transplant, and 11% (n=21) re-transplant recipients. The median time to first rejection episode, experienced by 108/187 (57.8%) recipients, was 9.2 months (range 8 days to 6.5 years). The median time of sera testing was 31.4 months post-transplant (range 8 days to 10 years). Biopsy C4d deposition and C3d SAB detection were concordant in 77.5% of cases (145/187), with 81% of concordant cases positive for both (n=118) and 19% negative for both (n=27). In 22.5% of cases the results were discordant (42/187), with 71.4% of discordant cases positive for C4d deposition and negative for SAB C3d detection (n=30) and 28.6% negative for C4d deposition and positive for SAB C3d detection (n=12). The positive predictive value of SAB C3d detection for C4d deposition on biopsy is 91%, and the negative predictive value is 47%.
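The agreement statistics quoted above and immediately below follow directly from the reported 2x2 counts (118 double-positive, 27 double-negative, 30 C4d+/C3d-, 12 C4d-/C3d+); a short arithmetic check, with the code itself purely illustrative:

```python
# Arithmetic check of the reported concordance statistics for biopsy C4d vs. SAB C3d.
tp = 118   # C4d+ and C3d+
tn = 27    # C4d- and C3d-
fn = 30    # C4d+ but C3d-
fp = 12    # C4d- but C3d+
n = tp + tn + fp + fn                           # 187 recipients with a comparison biopsy

concordance = (tp + tn) / n                     # 0.775 -> 77.5%
sensitivity = tp / (tp + fn)                    # 0.797 -> 79.7%
specificity = tn / (tn + fp)                    # 0.692 -> 69.2%
ppv = tp / (tp + fp)                            # 0.908 -> ~91%
npv = tn / (tn + fn)                            # 0.474 -> ~47%
lr_positive = sensitivity / (1 - specificity)   # ~2.6
lr_negative = (1 - sensitivity) / specificity   # ~0.29
print(concordance, sensitivity, specificity, ppv, npv, lr_positive, lr_negative)
```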
The sensitivity of C3d to predict C4d is 79.7% and the specificity is 69.2%, with a positive likelihood ratio of 2.6 and a negative likelihood ratio of 0.29. We conclude that complement fixation determined by SAB C3d assay correlates well with biopsy evidence of peritubular capillary complement deposition. Urine Cell-Free Supernatant Metabolites Diagnostic of Antibody Mediated Rejection in Kidney Allografts. M. Alkadi, 1 J. Lee, 1 D. Dadhania, 1 T. Muthukumar, 1 C. Snopkowski, 1 C. Li, 1 S. Salvatore, 1 S. Seshan, 1 K. Suhre, 2 M. Suthanthiran. 1 1 Weill Cornell Medicine, New York, NY; 2 Doha, Qatar. Introduction: Urine cell-free supernatant metabolite profiles provide a unique opportunity to interrogate intra-graft processes and have distinguished acute cellular rejection from no rejection in the Clinical Trials in Organ Transplantation-04 study. Antibody mediated rejection (AMR) has emerged as a major cause of kidney allograft failure; however, the critical issue of urine metabolite profiles as diagnostic of AMR in kidney transplantation has not been investigated. We performed untargeted metabolite profiling of 141 urine cell-free supernatants matched to biopsy-proven AMR (N=20 specimens from 20 patients), acute tubular injury (ATI) (N=61 specimens from 56 patients), or a normal biopsy result (NORMAL) (N=60 specimens from 29 patients). We evaluated whether the previously discovered diagnostic biomarkers of ACR (3SL, xanthosine [X], quinolinate [QUIN], and X-16397, and the ratios 3SL/X and QUIN/X-16397) distinguish: (i) patients with AMR biopsies from patients with ATI biopsies and (ii) patients with AMR biopsies from patients with normal biopsies. Osmolarity-corrected 3SL, QUIN, and X-16397 were significantly higher in the AMR Group than in the NORMAL Group (see Table 1; P values: 0.02, <0.0001, and 0.05, respectively). Osmolarity-corrected QUIN was also significantly higher in the AMR Group than in the ATI Group (P value 0.02). The ratio of QUIN/X-16397 distinguished the AMR group from the NORMAL Group (median 2.31 vs. 1.54, P value 0.006) and the AMR group from the ATI Group (median 2.31 vs. 1.39, P value 0.003) (Table 1). With the use of a comprehensive combination of nontargeted LC-MS/MS and GC-MS based metabolomics platforms, we demonstrate that the relative abundance of quinolinate and the ratio of quinolinate to X-16397 distinguish patients with AMR biopsies from patients with normal biopsies and patients with AMR from patients with ATI biopsies. Quinolinate, a product of tryptophan metabolism, by serving as a precursor for the biogenesis of NAD+, may help meet the metabolic demands of immune cells involved in allograft rejection. 1 1 Surgery, Univ. of Minnesota, Minneapolis, MN; 2 Biostatistics, Masonic Cancer Center, Minneapolis, MN; 3 Biostatistics, Univ. of Minnesota, Minneapolis, MN. PURPOSE: Laparoscopic donor nephrectomy (LDN) requires a high degree of surgical competence and skill. UNOS recommends that fellowship-trained surgeons participate in 15 LDN to be considered proficient. However, how to evaluate proficiency among transplant surgery fellows (TSF) in the modern training paradigm has yet to be defined. METHODOLOGY: A retrospective intraoperative case analysis was performed to create a learning curve model (LCM). Between January 2000 and December 2010, 20 novice TSF (nTSF) rotated on the donor service of an ASTS-approved abdominal transplant surgery fellowship program.
The measure of surgical performance was composed of 3 variables: 1) intraoperative time, 2) estimated blood loss, and 3) an ordinal intraoperative complication scale. For comparison, performance of graduating TSF (gTSF) was analyzed and used to score nTSF performance. Case complexity was defined by metrics obtained from donors with BMI >30, prior abdominal surgeries, renal vasculature multiplicity, and in whom a right nephrectomy was performed. Scores were tabulated in a general linear model, adjusting for case complexity and prior institutional- and TSF-accrued case volume. RESULTS: We estimated nTSF performance in 643 LDN. According to our LCM, BMI >30, multiple renal arteries, and prior abdominal surgeries lowered the expected performance score (EPS), while accruing institutional and TSF experience and performing a right nephrectomy raised the EPS. nTSF performance scores are depicted in Figure 1. Based on the LCM results, nTSF peak performance was achieved at the end of the third training month. This translated into an average of 24±8 cases. CONCLUSION: Defining the learning curve of LDN during fellowship training has critical implications for surgical education, patient safety, and program credentialing. In the current era, where fundamental laparoscopy skills are mandated during residency training, TSF seem to acquire peak LDN performance in about 24 cases. 1 1 Renal & Transplantation, University Hospitals Coventry & Warwickshire NHS Trust, Coventry, United Kingdom; 2 Statistics, National Health Service, Organ Donation & Transplant, Bristol, United Kingdom. It is important to have a better understanding of the short- and long-term outcomes and risks of kidney donation. We studied the occurrence of acute operative complications and 1-, 5-, and 10-year outcomes in living donors by type of nephrectomy. From Jan 1, 2001 until Dec 31, 2013 inclusive, all live kidney donors in the U.K. were included in the study. Dec 31, 2014 was considered the study end, so that all patients had at least one year of follow-up. A total of 9750 live donor records were available. Nephrectomy type was available for 9602 donors: open 3132 (33%), pure laparoscopic (PL) 3886 (38%) and hand-assisted laparoscopic (HAL) 2802 (29%). We analysed the incidence of operative complications: splenectomy, reoperation required, organ perforation, operative haemorrhage, pneumonia, pneumothorax, pulmonary embolism, wound infection, DVT, other complications and a combined variable for any one or more complication. Statistically significant differences were noted for the incidence of reoperation. Donors were grouped by age (65-69 or >69 years) and cold ischemia time (CIT) (<8, 8-16, or >16 hours). Incidence of delayed graft function (DGF), graft failure in the first post-operative year (GF1), overall graft failure (GFtot), and median nadir creatinine value were evaluated. RESULTS: DGF occurred in 4.9% of transplants (n=69). Donor age was not associated with a significant effect on DGF (p = 0.4) (Fig 1A). CIT >16 hours was associated with a significant increase in DGF compared to CIT <16 hours (p = 0.005) (Fig 1B). GF1 occurred in 2% of transplants (n=21). Donor age did not produce a significant effect on GF1 (p = 0.99). CIT 8-16 hours was associated with an increase in GF1 compared to <8 hours and >16 hours (3.4% vs. 2.1% and 0%, p = 0.0221). GF1 did not occur in donors >69 or with CIT >16 hours.
Multivariate logistic regression analysis for GF1 did not find a significant effect produced by donor age or CIT (p = 0.7650 and p = 0.4381, respectively). GFtot occurred in 3% of transplants (n=43). Donor age did not significantly affect GFtot (Fig 1C) (p = 0.07). CIT 8-16 hours was associated with increased GFtot compared to CIT <8 hours and >16 hours (Fig 1D) (p = 0.0110). GFtot did not occur in donors >69 or with CIT >16 hours. Cox regression analysis for GFtot did not yield a significant association with donor age or CIT. Table 1 shows Cox proportional hazard modeling for graft survival between 6 months and 3 years. The hazard ratio for graft failure decreased to 0.64 (0.55-0.74, p<0.001) in 2010 compared to 2000. Figure 2 depicts the substantial improvement in patient survival during the last decade. The median eGFR at 6 months improved from 55.9 in 2000 to 60.9 ml/min/1.73 m² in 2010. Adjusting for donor age, the median eGFR improved 11% over the decade. The frequency of CKD stage 3b, 4 and 5 at 6 months decreased by at least 60%. These changes occurred in the face of changing trends in immunosuppression towards predominantly tacrolimus (TAC) and mycophenolic acid (MPA). Little is known about donation-related out-of-pocket costs incurred by non-directed donors (NDD). For NDD, we studied: resources used to cover these costs; level of stress due to costs; and concerns regarding health or life insurance. We also asked NDD about their attitudes toward compensation. Stress was ranked on a 5-point Likert scale (not stressful to extremely stressful). We asked about stress related to: 1) total financial burden, 2) medical costs and 3) nonmedical costs (travel, lost wages). Of 91 NDD, 55 (60%) responded (mean time from donation ±SD, 6.5±4 yrs). Mean age at donation was 44±11 yrs, 56% were female, and 98% were White. At donation 86% had more than a high school education and 91% were employed. Regarding financial resources used during recovery, the most common were sick leave (47%), personal savings (40%), vacation leave (35%), short-term disability (18%), and unpaid medical leave (11%). Only 4 (7%) used a donor grant. None borrowed money from family/bank or held a fundraiser. Overall financial burden was ranked "moderately" or "very" stressful by 13%. Medical costs were "not at all stressful" for the majority (96%). Nonmedical costs due to donation (i.e., travel, lost wages) were a little stressful for 13%. Concerns regarding future health insurance were expressed by 15%; life insurance by 20%. The NDD level of burden and stress was lower than our previously reported level seen in directed donors (DD). Half (53%) of NDD agreed donors should be reimbursed for donor-specific costs; 33% did not have an opinion and 15% disagreed. Most (76%) NDD did not think donors should be compensated beyond reimbursement for costs, but 15% felt they should; 9% did not have an opinion. Of the 8 affirming payment, tax credits and health insurance were viewed as appropriate vehicles. We conclude that the majority of NDD use their own financial resources to defray the cost of donation. While overall stress from financial burden was minimal for most, 13% reported greater levels. Stress may be lower in NDD than DD because they choose donation, without any possible obligation or pressure, and can opt to put off donation until financially secure. The fact that NDD (as with any living donor) incur financial burden is disconcerting. The vast majority of NDD endorsed reimbursement for direct expenses like travel, lodging, and meals, yet were willing to absorb this cost.
Our data emphasize the need for a system that minimizes economic stressors associated with donation and, at a minimum, makes living donation cost neutral. Non-invasive diagnosis and prediction of acute rejection (AR) is a critical unmet need. In a randomized multicenter trial of 222 renal transplant recipients, a novel blood transcriptional assay, kSORT, was evaluated for its accuracy in diagnosing and predicting biopsy-confirmed AR. Blood was drawn at days 0 and 10, months 3, 6, and 12, and at graft dysfunction for analysis of kSORT, a 17-gene assay (CFLAR, DUSP1, IFNGR1, ITGAX, MAPK9, NAMPT, NKTR, PSEN1, CEACAM4, EPOR, GZMK, RARA, RHEB, RXRA, SLC25A37, RNF130, RYBP) that provides high or low AR immune risk scores. The assay yields an intermediate score in ~15% of cases, for which repeat sampling is recommended. Biopsies were done on all patients by protocol at engraftment and 12 months posttransplantation and when clinically indicated. The kSORT assay was run on 338 blood samples obtained from the first 79 enrolled patients, of which 98 blood samples were matched with protocol or indicated biopsies. Twenty-two patients had clinically suspected AR, of which 18 were biopsy confirmed. RNA was extracted and QPCR for all 17 genes was normalized to 18S; data were profiled using a customized algorithm, kSAS, to produce an immune risk score for rejection. 4 Pathology, LUMC, Leiden, Netherlands. MicroRNAs in urine are suitable targets for non-invasive detection of acute rejection (AR), since they represent relatively stable analytes. Indeed, we found that PCR signals for several microRNAs in both sedimentary cells and isolated exosomes remained stable in urine collected freshly from 4 renal transplant patients at the bedside, when these urine samples had been incubated for up to 24 hours at room temperature. Next, within the transplant period 2007-2014 we set up a discovery-validation strategy to identify microRNAs that are associated with incidence of AR. First, 742 microRNAs were profiled by qPCR (Exiqon LNA miRNome PCR panels) in urinary sediment of 8 recipients with AR and in 8 recipients with stable graft function. As validation, 10 microRNAs were analyzed in urinary sediments of 140 recipients with AR: morphologically indicative of borderline (n=5), cellular (n=67), vascular (n=37) or humoral rejection (n=31). Biopsies were negative for SV40. The group was compared with urinary sediment from 64 recipients who had a surveillance biopsy taken showing no morphologic alterations indicative of rejection. RNA was extracted from the sediments, and a spike-in was added to the RNA to check the efficiency of the cDNA reaction. MicroRNA levels were corrected for 3 reference microRNAs. In the discovery set, 32 microRNAs were differentially expressed (P<0.05) between groups, of which 20 were significantly lower in the AR group. Several microRNAs matched those found in previous profiling studies of transplant biopsies and urine. In the validation set, the most pronounced differences in the AR group compared to controls (all P<0.00001) were found for miR-25-3p (1.9-fold), miR-126-3p (2.9-fold), miR-142-5p (0.40-fold), miR-155-5p (3.1-fold), and miR-615-3p (0.23-fold). The latter four together distinguished AR from controls in a multivariate logistic regression model with a sensitivity of 90.0% and a specificity of 81.0%.
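For readers who want to see the shape of that final modelling step, a hedged sketch is shown below; the data layout, column names, and 0.5 classification threshold are assumptions, and the published 90%/81% figures come from the authors' cohort, not from this code.

```python
# Illustrative multivariate logistic model combining the four urinary microRNAs
# named above to separate acute rejection (AR) from controls. Hypothetical layout:
# one row per urine sample with reference-miR-corrected values and an AR label (0/1).
import pandas as pd
import statsmodels.formula.api as smf

mirs = pd.read_csv("urinary_sediment_mirna.csv")       # hypothetical input file
fit = smf.logit("ar ~ mir126_3p + mir142_5p + mir155_5p + mir615_3p",
                data=mirs).fit(disp=False)

pred = (fit.predict(mirs) >= 0.5).astype(int)          # classify at 0.5 probability
sens = ((pred == 1) & (mirs["ar"] == 1)).sum() / (mirs["ar"] == 1).sum()
spec = ((pred == 0) & (mirs["ar"] == 0)).sum() / (mirs["ar"] == 0).sum()
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")
```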
Humoral and cellular rejections differed mostly in the hematopoietic-cell-related miRs 126-3p (P=0.032) and 155-5p (P=0.016), but the difference was of lower significance after correction for multiple comparisons. Measurement of microRNAs in urinary sediment of kidney transplant patients may help to non-invasively identify acute graft rejection.

Urinary (u) CXCL9 protein expression can noninvasively diagnose ACR, and serial measurements during CNI withdrawal indicate that elevations in uCXCL9 detect subclinical rejection. How elevated uCXCL9 values respond to therapy, as an indicator of reversing ACR, is unknown. The clinical utility of such an assay further requires rapid detection that could be applied at the point of care. We used biolayer interferometry (BLI), in which refraction of incident white light is proportional to binding of antigens to an antibody-linked probe in real time. Using a sandwich ELISA adaptation of BLI, we detected uCXCL9 over a range of 36.9 pg/mL to >5 ng/mL within 40 minutes. The assay is specific for CXCL9, without cross-reactivity to CXCL10. When compared to results of standard ELISAs (n=180), we observed a correlation coefficient of 0.78; using 200 pg/mL as the threshold for defining a positive result (derived from published data), BLI and ELISA showed 100% concordance. We then serially and prospectively collected urine from 25 subjects, beginning at the time of biopsy for acute transplant dysfunction and over the ensuing 8 weeks, measured CXCL9 by BLI, and correlated the results with pathological diagnoses, treatment and subsequent clinical course. Positive uCXCL9 (>200 pg/mL) was detected in all subjects with ACR (n=7, mean 5356 pg/mL), 5 with borderline ACR (mean 145 pg/mL), and 1 subject with BKV (204 pg/mL), but in none of the subjects with normal pathology (n=6, mean 64.3 pg/mL), IFTA (n=3, mean 76.7 pg/mL), or AMR (n=3, mean 110 pg/mL). Initial anti-rejection therapy normalized serum creatinine and resolved elevated uCXCL9 to <200 pg/mL within 7 days (n=4). Persistent elevation of uCXCL9 beyond day 7 signaled ongoing ACR and/or AMR unresponsive to initial therapy, even in the context of resolving serum creatinine elevations (n=8). Rapid detection of uCXCL9 by BLI can potentially be used as a point-of-care approach for diagnosing ACR and guiding subsequent therapeutic decision making in kidney transplant recipients.

There was a trend to higher eGFR in the first two groups compared to patients with non-treated BACR (57±19, 52±19 and 47±29 ml/min/1.73m², respectively). Conclusions: Our data suggest a higher incidence of ACR and/or persistence of subclinical BACR in subsequent Bx of patients in whom BACR is diagnosed on protocol Bx and is not treated. This finding might also be associated with a lower eGFR in those with BACR who receive no treatment. Further studies are needed to demonstrate the negative impact of subclinical BACR found in protocol Bx on graft histology and function, and the benefits of treatment with steroids.

Background: Partial grafts are especially vulnerable to the deleterious effects of donor-specific HLA antibodies (DSA). Living liver donors and recipients are often family members. Mothers are at risk of immunological sensitization to their children at the time of birth. Aim: To assess the presence of donor-specific HLA antibodies and the risk of antibody-mediated rejection in female patients receiving a living donor liver graft from one of their children.
Methods: A retrospective analysis of all living donor liver transplant (LDLT) recipients in a single academic center between 2011 and 2015 was performed. Patients were divided into two subgroups: 1) mothers receiving a living donor liver graft from one of their children and 2) all other LDLT recipients. Pre-transplant HLA antibody profile, complement-dependent cytotoxicity (CDC) and anti-human globulin (AHG)-CDC crossmatch, incidence of post-transplant antibody-mediated rejection (AMR), and transplant outcome were compared between the two groups. Results: A total of 21 consecutive LDLT cases were performed. Eight women received LDLT from either a son or daughter. Four out of eight mothers (50%) had one or more preexisting DSAs, versus 2 out of 13 (15%) in the other group. CDC and AHG-CDC crossmatches were negative in all patients except one mother receiving a liver lobe from her daughter, whose CDC B-cell crossmatch was positive. Three out of 8 mothers (38%) were diagnosed with clinically significant, biopsy-proven antibody-mediated rejection (C4d positive), as opposed to none of the other patients (P=0.04).

Background: Biomarker profiles diagnostic of acute rejection (AR) in liver transplant (LT) recipients could enhance the diagnosis and management of recipients. Our aim was to identify diagnostic protein signatures for AR in PBMC, using a novel top-down proteomics methodology. In this approach, proteins are kept intact prior to analysis by an integrated separations and mass spectrometric platform, allowing evaluation of protein primary structure and post-translational modifications (proteoforms). Methods: PBMC lysates from non-HCV+ LT recipients undergoing liver biopsy were processed by a technique similar to SDS-PAGE (GELFrEE) to create a single fraction containing all proteins from 0-30 kDa with high recovery. After SDS removal, this protein fraction was analyzed by top-down proteomics in 3 steps: 1) nanocapillary liquid chromatography coupled with an Orbitrap Elite mass spectrometer, 2) processing data using ProSight software to identify proteoforms by database searching, and 3) processing data through a linear hierarchical model matching the study design to quantify proteoform fold changes in AR vs. Tx (normal liver function) vs. ADNR (acute dysfunction without rejection).

Profiling the immunologic risk for acute rejection in liver transplantation (OLT) may allow for selection of immunosuppression (IS) target ranges appropriate to risk levels, with the goal of minimizing unnecessary IS. We aimed to identify risk factors for acute rejection in OLT with the hope of creating a schema to assist clinical judgment. Using registry data from the Organ Procurement and Transplantation Network, we identified 42,508 adult recipients who underwent OLT between 2002 and 2013. We excluded recipients with a blank entry for treated rejection (yes/no). We analyzed this all-inclusive cohort in addition to a subset of 27,493 patients with only tacrolimus maintenance IS. We performed univariate and multivariate logistic regression analyses on both cohorts and identified independent risk factors for treated acute rejection at one year. Although prior studies have suggested age as a risk factor for rejection in OLT, this is the first study of national-level data to demonstrate a robust dose-dependent relationship between age and risk for rejection at one year. This analysis shows that, regardless of IS regimen, recipient age is a dominant risk factor for acute rejection in liver transplantation.
Clinicians should place significant weight on recipient age when they profile their recipients for the immunologic risk of rejection. Acute cellular rejection following liver transplantation has decreased in incidence with the use of potent immunosuppressive agents, affecting less than 25 percent of liver transplantation recipients. HCV recurrence following liver transplantation has been shown to lead to late graft loss. We have previously shown a sustainable viral response (SVR) in post liver transplant HCV patients after treatment with Sofosbuvir/ Ribavirin for 24 wks. Recently we studied 51 recipients with detectable HCV viral load at the time of liver transplant, who were treated with either Sofosbuvir/Ribavirin or Sofosbuvir/Ledipasvir post-transplant. The incidence of acute cellular rejection in the setting of SVR between two treatment groups was compared. 51 liver transplant recipients with detectable viral load were enrolled in this study (Two patients were excluded; one lost to follow up and the other never achieved SVR). Age, sex, viral load, genotype, previous treatment, time to seroconversion and rejection episodes were reviewed. Tolerance to treatment, side effects, dose adjustments, immunosuppression changes, and sustainability of viral response were monitored. Patients treated either with Sofosbuvir 400mg and Ribavirin 600mg or Sofosbuvir 400mg and Ledipasvir 90mg were analyzed in two separate groups. Length of treatment was 24 weeks (7 patients had shorter treatment due to insurance coverage). Average time for viral clearance was 4.3 weeks after the initiation of anti-viral therapy. All patients were followed for at least three months after the completion of treatment. No incidence of acute cellular rejection was observed among 35 patients treated with Sofosbuvir/Ribavirin during the first 36 week period after the initiation of treatment. In clear contrast, four out of the remaining sixteen patients treated with Sofosbuvir/Ledipasvirsuffered from an ACR episode within a two week period after they achieved SVR (0% vs 25%, p<0.005). Sofosbuvir/Ribavirin and Sofosbuvir/Ledipasvirfor 24 wks are effective and well tolerated treatment regimens to achieve SVR among liver recipient patients. Nevertheless, effective treatment of HCV with Sofosbuvir/Ledipasvir appears to increase the incidence of acute cellular rejection shortly after the achievement of SVR. It may be warranted to increase immunosuppression shortly after clearance of virus has taken place. Here, we evaluate the approaches based on 36M results from CNI withdrawal (WD) vs standard (C) CNI regimens from these studies. Methods: H2304 study recruited patients in CNI-WD (N = 231) arm to receive EVR (C0 3-8 ng/mL; increased to C0 6-10 ng/mL by end of M4) + rTAC (C0 3-5 ng/mL; withdrawn at M4), 1M post-liver transplantation (LTx). Enrollment into CNI-WD arm was prematurely terminated due to higher acute rejection rate during CNI withdrawal; however, patients on study treatment for >4M could continue in the regimen. In PROTECT, patients in CNI-WD (N = 101) arm received EVR + CNI (TAC or CsA) 4-8 weeks post-LTx with EVR C0 target of 5-12 when combined with TAC or 8-12 ng/mL in combination with CsA. After CNI withdrawal EVR C0 was maintained at 5-12 ng/mL.CNI was completely withdrawn when patients were stable with 70% CNI reduction (for at least 2M) latest by M6 post-LTx and all patients received basiliximab induction therapy. 
Results: At M36, the incidence of tBPAR was higher in the CNI-WD arm vs the CNI-C arm in both studies (Figures 1a and 1b). However, there was no increase in graft loss in the CNI-WD vs CNI-C arm (H2304: 2.8% vs 4.0%; PROTECT: 2.2% vs 2.1%). In both studies, renal function (eGFR; MDRD4) improved significantly in the CNI-WD vs CNI-C arms, and the incidence of AEs and SAEs was similar in the CNI-WD vs CNI-C arm (Table 1). Conclusion: Although an increased risk of rejection was seen at the time of CNI withdrawal in the H2304 study, complete CNI withdrawal without risk of subsequent efficacy failure can be achieved with the introduction of induction therapy and stepwise CNI reduction, as seen in PROTECT. Despite the differences in rejection rates, the CNI-WD arm in both studies showed better preservation of renal function vs the CNI-C arm.

Immune responses to HLA and non-HLA antigens play a significant role in the development of acute rejection (AR) and chronic rejection, bronchiolitis obliterans syndrome (BOS), following human lung transplantation (LTx). The goal of this study is to determine whether exosomes are induced during lung allograft rejection and to define their antigenic composition (HLA, lung-associated self-antigens (SAgs)) and microRNAs (miRNAs). Exosomes were isolated from bronchoalveolar lavage (BAL) and sera from LTx recipients (LTxR) diagnosed with AR or BOS, or who were stable. The specificity of isolated exosomes was defined by flow cytometry and western blot using expression of CD63 and Annexin V, respectively. Expression of the SAgs Collagen V (Col-V) and Kα1 tubulin (Kα1T) was examined by transmission electron microscopy, and HLA by flow cytometry using allele-specific Abs. miRNAs were profiled by Affymetrix miRNA array, and comparative miRNA pathway analysis was performed with DIANA-mirPath. Differentially expressed miRNAs were validated in independent cohorts using qRT-PCR. Donor HLA antigens and SAgs were detected on the surface of exosomes isolated from LTxR diagnosed with AR and BOS, but not from stable LTxR, indicating that immune responses can lead to exosome generation and release into body fluids. Exosomes expressing Col-V were isolated from sera of LTxR months prior to the diagnosis of AR (3 months) and BOS (6 months).

Purpose: The main diagnostic platform for rejection in lung transplant recipients is histology of transbronchial biopsies (TBB), a test with a poor performance and safety profile. Microarray analysis of kidney transplant biopsies has been shown to produce rapid, accurate detection of rejection in small pieces of tissue. We used microarray analysis of transbronchial and mucosal biopsies (MB) in an attempt to detect the molecular changes of T-cell mediated inflammation (TCMI). Methods: We analyzed 29 TBB and 27 MB from 26 lung transplant recipients undergoing indication biopsies between 1 and 184 months post-transplant. We obtained 5-10 TBB and 3 MB per patient (1 bite for microarray analysis, remainder for histology). We assessed specimens for molecular changes proven to be associated with TCMI (pathogenesis-based transcript sets, or PBTs), based on prior work in kidney and heart transplants, and compared this expression to sets of genes expressed in normal lung tissue (LT1, LT2) via Spearman correlation. Results: High-quality RNA was assessable in 100% of samples, even when histology was not. PBTs were detectable in TBB and MB specimens. PBTs correlated negatively with LTs in almost all cases.
Expression of transcripts associated with interferon-gamma (IFNG), effector T cells, and injury all correlated with loss of LT1, indicating parenchymal de-differentiation. The same relationship could be demonstrated in both TBB and MB (Table 1). Conclusion: Single TBB and MB specimens can be read for molecular changes of TCMI, which correlate with loss of lung transcripts indicating functional deterioration. This confirms that the molecular diagnostic methods emerging in kidney and heart transplantation are practical for both TBB and MB, and will hopefully allow us to standardize microarray analysis as a diagnostic platform for lung transplants. This has the potential to quantify rejection and injury, while doing so in a safer biopsy type, with the ultimate goal of guiding improvements in patient care.

Similar to the case of pediatric kidney transplant recipients, late adolescence and early adulthood is a high-risk period for pediatric lung transplant recipients, while that time period is not characterized by an increased risk of graft loss for pediatric liver and heart transplant recipients. These disparate findings suggest that sociobehavioral mechanisms alone do not fully account for the high-risk age window and that other biologic etiologies may be involved.

The number of ACR events was not numerically different between groups (p=0.44). Each 10% increase in TTR resulted in a significantly lower burden of ACR events (OR 0.74, p<0.001), which remained significant when adjusting for lymphodepleting induction with alemtuzumab (LIA) and age (OR 0.81, p=0.045). Each 10% increase in LTR resulted in a significantly lower burden of ACR events (OR 0.72, p<0.001), which did not remain significant when adjusting for LIA and age (OR 0.81, p=0.14). TTR over 30% was associated with significantly less high-grade ACR (OR 0.1, 95% CI 0.02-0.41, p=0.001), which remained significant when adjusting for LIA and age (OR 0.1, p=0.003). LTR over 20% was associated with significantly less high-grade ACR (OR 0.28, 95% CI 0.15-0.53, p<0.001), which remained significant when adjusting for LIA and age (OR 0.15, 95% CI 0.06-0.4, p<0.001). Conclusion: Increasing FK TTR is significantly associated with decreasing incidence and severity of ACR events within the first year after LT, even when adjusting for lymphodepletion and age.

Transplant physicians evaluate patients for lung transplant (LTx) who are on ventilators and ECMO with high lung allocation scores (LAS). Six-minute walk distance (6MWD) predicts post-transplant survival. In patients with a rapid decline in function, illness severity is reflected in a high LAS, but the inability to obtain a 6MWD leaves LTx physicians with the challenge of determining which patients may benefit from LTx. We analyzed the United Network for Organ Sharing (UNOS) database to assess LTx benefit in this specific population. Methods: Adults listed for LTx from 05/2005-06/2015 in the UNOS database were included. 6MWD was categorized into 9 groups: 0 ft and 8 others in increments of 200 ft. In multivariable Cox regression, a time-varying covariate representing LTx effects was interacted with 6MWD to identify variation in LTx survival benefit, controlling for age, gender, race, ethnicity, diagnosis, LAS, Karnofsky score, and ECMO and ventilator use at the time of listing.
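The multivariable Cox model described just above, with a time-varying transplant indicator interacted with a 6MWD category, can be sketched in counting-process (start/stop) form. The sketch below uses the lifelines package on simulated records; every column name and value is a placeholder assumption rather than a UNOS field, and it illustrates only the model structure, not the study's estimates.

```python
# Sketch (not the study's code): Cox model with a time-varying "post-transplant"
# indicator interacted with a 6MWD category, in counting-process format.
import numpy as np
import pandas as pd
from lifelines import CoxTimeVaryingFitter

rng = np.random.default_rng(1)
rows = []
for pid in range(300):
    mwd_zero = int(rng.integers(0, 2))      # 1 if listed with a 6MWD of 0 ft (assumed flag)
    tx_day = int(rng.integers(30, 400))     # day of transplant, if reached before death
    death_day = int(rng.integers(60, 1500))
    if death_day <= tx_day:                 # died on the waitlist: one interval, never transplanted
        rows.append((pid, 0, death_day, 1, 0, mwd_zero))
    else:                                   # two intervals: pre-transplant, then post-transplant
        rows.append((pid, 0, tx_day, 0, 0, mwd_zero))
        rows.append((pid, tx_day, death_day, 1, 1, mwd_zero))

df = pd.DataFrame(rows, columns=["id", "start", "stop", "event", "post_tx", "mwd_zero"])
# Interaction term lets the transplant effect differ by 6MWD group.
df["post_tx_x_mwd_zero"] = df["post_tx"] * df["mwd_zero"]

ctv = CoxTimeVaryingFitter()
ctv.fit(df, id_col="id", event_col="event", start_col="start", stop_col="stop")
ctv.print_summary()  # hazard ratios for post_tx, mwd_zero, and their interaction
```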
We performed nearest-neighbor propensity score matching of 6MWD of 0 ft (cases) to nonzero 6MWD ≤600 ft (controls) using all covariates to compare waitlist and post-transplant survival within matched pairs by stratified Cox regression. Results: 19,948 patients were evaluated. 948 (5%) patients had a 6MWD of 0 ft and 5,359 (27%) had a nonzero 6MWD ≤600 ft. 545 patients with 6MWD 0 ft were transplanted. The mean LAS of the 0 ft subgroup was 64.4 (range: 60-90). LTx was protective at 6MWD 0 ft (HR=0.20; 95% CI=0.16, 0.24; p<0.001) and at low nonzero 6MWD (95% CI=0.41, 0.61; p<0.001), but not at higher 6MWD, despite optimal post-LTx survival among patients with 6MWD >1400 ft. Propensity score matching demonstrated worse waitlist survival among patients with 6MWD 0 ft relative to nonzero 6MWD ≤600 ft (HR=2.02; 95% CI=1.44, 2.83; p<0.001) but similar post-transplant survival between these two groups. Propensity-matched patients with 6MWD 0 ft benefited from LTx. Conclusions: In this propensity score-matched analysis of UNOS LTx patients, those with a high LAS had elevated waitlist mortality. However, the subgroup with 6MWD 0 ft did benefit from LTx. Carefully selected patients who have rapid decompensation and a high LAS can derive a significant survival benefit from LTx despite a 6MWD of 0 ft.

Purpose: To describe the effect of conversion to a belatacept (BELA)-based immunosuppression regimen (ISR) in lung transplant recipients (LTRs) after calcineurin inhibitor (CNI) failure on the incidence of acute cellular rejection (ACR), bronchiolitis obliterans syndrome (BOS) progression, infections, and cardiorenal function. Methods: Single-center, retrospective medical record review of adult LTRs before and after conversion to BELA from CNI-based ISRs. Patients were evaluated at fixed time points before and after BELA conversion. The primary outcome was the incidence of ACR (composite rejection standardized score (CRSS)). Secondary outcomes included incidence of infections (positive bronchoalveolar lavage or blood cultures), change in FEV1, BOS progression, death, and change in MAP and serum creatinine (SCr). Descriptive statistics and both univariate parametric and nonparametric statistical tests were used to assess characteristics and outcomes. Results: 8 LTRs underwent BELA conversion and had a mean of 287 (range 62-1064) days of follow-up. 5 (62.5%) were male; mean age at transplant was 53.8 (SD 9.4) years; 7 (87.5%) received bilateral lungs, primarily for COPD (4, 50%) or ILD (2, 25%); 8 (100%) were EBV D+R+; 6 (75%) received alemtuzumab induction; 4 (50%) were converted for TTP, 3 (37.5%) for PRES, and 1 (12.5%) for BOS. 6 (75%) received the labeled dose (10 mg/kg then 5 mg/kg); 2 (25%) received a conversion dose (5 mg/kg). ACR was not different before and after BELA conversion (CRSS, 0.55 vs. 0.35, p=0.49). Incidences of infections were not different: BAL gram-negative (0.79 vs. 0.57, p=0.44), BAL gram-positive (0.17 vs. 0, p=0.32), viral (0.04 vs. 0.13, p=0.18), and fungal (0.35 vs. 0.4, p=0.19). FEV1 prior to BELA conversion and the lowest FEV1 after BELA conversion were not different (1.51 vs. 1.44, p=0.38).

(Figure 1B). Patterns were similar at 12 months. Conclusion: Although associations may in part reflect underlying conditions, the need for midodrine before kidney transplantation is a biomarker for increased risks of complications including graft failure and death.
As the new KAS is expected to increase ESRD duration at the time of allocation for many patients, monitoring recipient comorbidity burden through novel methods including pharmacy claims, and associated impacts on transplant outcomes, are important priorities. Cognitive Impairment in Kidney Transplant Recipients. A. Gupta, 1 D. Johnson, 1 G. Chen, 1 D. Subramaniam, 1 T. Polshak, 1 T. Thomas, 1 T. Schmitt, 1 D. Ladner, 2 A. Yu, 1 J. Burns. 1 1 University of Kansas Medical Center, Kansas City, KS; 2 Northwestern University, Chicago, IL. Background: Cognitive impairment is highly prevalent in end stage renal disease. The prevalence of cognitive impairment after kidney transplantation and the factors associated with it are unknown. Aim: To systematically determine the prevalence of cognitive impairment after kidney transplantation and elucidate the patient and clinical factors associated with cognitive impairment. Methods: In this cross-sectional observational study, we screened randomly selected 216 kidney transplant recipients from the kidney transplant clinic in an academic center. We used the Montreal Cognitive Assessment (MoCA), a comprehensive cognitive assessment validated for use in outpatient clinics. We used ANOVA for continuous variables and Chi-square test for categorical variables. To examine the association of cognitive status with these factors, we used a multivariate logistic regression analysis including factors significant in the bivariate analysis and additional important factors. Odds ratios and 95% confidence intervals were calculated. Results: The mean age of the participants was 55±14 years. Of the 216 patients, 71% were male, 57% were white and 57% had a college degree. A majority of the recipients (68.5%) had cognitive impairment defined by a MoCA score of <27. Bivariate analysis found that cognitive impairment was associated with age (P=0.014) and male gender (P=0.04). These associations persisted in the multivariate analysis adjusted for age, gender, level of education, time on dialysis and estimated glomerular filtration rate with OR 0.97, CI 0.94-0.99 for age and OR 3.25, CI 1.52-6.93 for male gender. There was no association with race, level of education, body mass index, blood pressure, history of comorbidities such as smoking, diabetes, coronary artery disease, peripheral vascular disease and stroke, hemoglobin, eGFR, use of prednisone, cause of ESRD, time on dialysis prior to transplant or time since transplant. Conclusion: This is the first systematic study of cognitive impairment after kidney transplantation. We found that cognitive impairment is highly prevalent in kidney transplant recipients. This information should be factored in during patient education and monitoring of medical adherence. Further research into gender-based differences in cognitive impairment and understanding of the pathophysiology of cognitive impairment in transplant recipients is needed. Black Ethnicity as a Risk Factor for Poor Kidney Allograft Outcomes Post-Transplantation. S. Tahir, 1 F. Jackson-Spence, 1 H. Gillott, 1 F. Everson, 2 J. Nath, 2 A. Sharif. 2 1 University of Birmingham, Birmingham, United Kingdom; 2 Queen Elizabeth Hospital, Birmingham, United Kingdom. Introduction: Registry studies, largely from the United States, have consistently demonstrated inferior outcomes for black patients undergoing renal transplant. A recent Canadian publication has challenged this by reporting equivalent outcomes compared to white counterparts. 
The aim of this study was to determine outcomes for black patients in a large contemporary UK cohort. Methods: Data was extracted from hospital informatics systems for all kidney allograft recipients transplanted at our centre between 2007 and 2015. Electronic records were then manually searched to facilitate data linkage between various sources to create a comprehensive database of baseline demographics, donor details, clinical/biochemical parameters, histology and clinical events. Results: We collected data for 1,140 post transplant patients, with median follow up 4.4 years. Ethnicity breakdown of the cohort was White (72.1%), Black (5.5%), South Asian (17.6%) and other/unspecified (4.7%). Black ethnicity vs non-black ethnicity demonstrated equivalent patient survival (6.3% vs 7.1% respectively, p=0.531) but significantly worse death-censored graft survival (22.2% vs 9.5% respectively, p=0.003). Black patients had worse overall graft survival compared to non-black patients (25.4% vs 15.4% respectively, p=0.032). Black patients had borderline significance for increased risk of any rejection episode within the first year post transplantation compared to non-black patients (19.0% vs 11.2% respectively, p=0.053). There was no difference when comparing black vs non-black ethnicity to post-transplant events (cardiac events, cerebrovascular accidents or cancer) but black patients did have increased risk of post-transplant diabetes (16.7% vs 8.7% respectively, p=0.048) and peripheral vascular disease (6.3% vs 1.1% respectively, p=0.009). When adjusted for other variables in a Cox regression model, black ethnicity was not independently associated with graft failure. Discussion: Our dataset shows that Black kidney allograft recipients have worse overall graft survival but black ethnicity itself is not a risk factor for graft loss. This is likely to be due to differing demographic or transplant specific factors, and unappreciated confounders, within this cohort. Further work, including analysis of national data from NHS Blood & Transplant, should be undertaken to provide clarity on kidney allograft outcomes for black patients in the UK. In Norway, a one-year waiting time free of cancer recurrence is required before kidney transplantation. To provide an updated, large-scale assessment of this practice, the survival of such a group of patients was compared to that of a matched cohort of kidney transplant recipients (KTRs) without a history of pretransplant cancer. Methods: KTRs with pretransplant cancer were identified in the Cancer Registry. Combined organ transplantations were excluded. Basal cell carcinoma was not uniformly registered and excluded as cancer diagnosis. No waiting time was required for localized prostate cancer, but this diagnosis was also included. A control group of KTRs without pretransplant cancer was constructed by coarsened exact matching in Stata, with age, gender and transplant era (<1983, 1983-1999 , >2000) as matching variables. Follow-up was censored by November 2013, and survival compared by Kaplan-Meier plot and Cox regression. Results: From 1963 to 2010, 5622 KTRs were included, of whom 361 had a history of pretransplant cancer. The median follow-up (interquartile range) was 6.2 (0.3-11.5) years. 
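The control-group construction described above was done with coarsened exact matching in Stata. Purely as an illustration of the idea, the sketch below selects controls falling in the same coarsened stratum (age band, gender, transplant era) as at least one pretransplant-cancer case, using pandas. The variable names, bin edges, and input frames are assumptions, and the stratum weighting used by full CEM is deliberately omitted.

```python
# Rough illustration (not the study's Stata code): coarsened exact matching of
# control KTRs to pretransplant-cancer KTRs on age band, gender, and transplant era.
import pandas as pd

def coarsen(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    out["age_band"] = pd.cut(out["age"], bins=[0, 30, 45, 60, 75, 120]).astype(str)
    out["era"] = pd.cut(out["tx_year"], bins=[0, 1982, 1999, 2100],
                        labels=["<1983", "1983-1999", ">2000"]).astype(str)
    return out

def cem_match(cases: pd.DataFrame, pool: pd.DataFrame) -> pd.DataFrame:
    """Return controls whose coarsened stratum is occupied by at least one case."""
    keys = ["age_band", "era", "gender"]
    cases, pool = coarsen(cases), coarsen(pool)
    strata = cases[keys].drop_duplicates()
    return pool.merge(strata, on=keys, how="inner")

# Example usage with hypothetical frames holding "age", "gender", "tx_year" columns:
# matched_controls = cem_match(cancer_ktrs, all_other_ktrs)
```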
2980 patients died, 30/361 (8.3%) of a cancer recurrence: 6/103 with renal cancer, 5/43 prostate, 4/35 colorectal, 5/40 urothelial, 1/26 breast, 3/20 melanoma, 1/6 lung, 0/11 non-Hodgkin lymphoma, 1/3 Hodgkin lymphoma, 3/15 plasma cell neoplasia, 0/10 cervix, and 1/1 acute myeloid leukemia. Patient survival in KTRs with a history of pretransplant cancer was similar to that of controls (Figure), hazard ratio (HR) 1.06 (95% CI 0.93-1.22), p=0.39. Defining waiting time from diagnosis of cancer to transplantation, there was no increased risk after stratifying for waiting times: 0-1 year (n=34; HR 1.03 [0.69-1.54]).

Pre-SOT CRE isolates included, among other organisms, Serratia marcescens (2) and Citrobacter freundii (1). The culture sources of pre-SOT CRE were as follows: urine (22), rectal swab (21), blood (16), respiratory (9), and other (10). The date of the most recent CRE culture was a median of 54 days (range: 1-2064) prior to SOT. The transplanted organs were as follows: liver (27), heart (17), kidney (7), liver-kidney (3), lung (2), and intestine (1).

Conclusion: In a large series of VRE bacteremia after LT, the incidence of EB was low but represented only a portion of enterococcal infections. VRE bacteremia tended to affect the early transplant course and resulted in high mortality despite the use of potent antibiotics. There were no strong data to support the use of linezolid over other active antibiotics with regard to patient outcome. Although the association between VRE bacteremia and survival was not significant, VRE bacteremia resulted in worse outcomes compared to other EBs.

Colonizing organisms included, among others, A. baumannii (1). MDRO colonization occurred prior to ITx in 11 patients (7 with VRE, 3 with MRSA, and 1 with VRE and MRSA). VRE- and CR-GNB-colonized patients were more likely to develop bacteremia than non-colonized patients: 6/20 (30%) vs. 1/15 (7%), P=0.2, and 2/6 (22%) vs. 2/29 (7%), P=0.13, respectively. The median time from positive SC to MDRO bacteremia was 64 days (IQR=29-92). 21 (60%) patients were alive at 1 year post-ITx. There was no difference in survival between patients colonized with MDRO vs. those not colonized, 15 (58%) vs. 6 (67%), P=0.71. However, survival was significantly lower among MDRO-colonized patients who developed bacteremia compared to colonized patients who did not, 1/8 (13%) vs. 14/18 (82%), P=0.003. Conclusions: Most of our patients were colonized with MDRO. VRE and CR-GNB bacteremia were more common among colonized patients, although this was not statistically significant due to the small sample size. One-year survival was lower among MDRO-colonized patients who developed bacteremia.

(Tables 1 and 2). Patients with post-transplant infections were more likely to have a pre-transplant infection (52.2% vs. 21.4%, p = 0.0075) and to have longer pre-transplant (12.9 versus 5.1 days) and post-transplant (18.9 versus 9.1 days) lengths of stay. One-year mortality was higher in patients with pre-transplant (20.8% vs. 3.6%, p = 0.031) and post-transplant (26.1% vs. 1.8%, p = 0.0088) infections.

Bloodstream infections are a major concern in SOT recipients, and ineffective antibiotic therapy is strongly associated with poor outcomes. Rapid diagnostic tests (RDT) provide prompt identification of microorganisms and resistance markers, offering a unique collaborative opportunity for antimicrobial stewardship programs (ASP). We evaluated the effect of RDT coupled with an ASP communication on clinical outcomes and antibiotic optimization in bacteremic SOT recipients.
RDT was performed using the Verigene® Blood Culture System (Northbrook, IL) for gram-positive and gram-negative bacterial pathogens. Results were reported to the Infectious Diseases pharmacist, who notified the practitioner and selected an appropriate treatment regimen per protocol. A retrospective chart review was performed to compare the management of bacteremic patients pre- and post-implementation of the Verigene® System. The primary endpoints were time to bacterial identification and antibiotic switch. Sixty-eight SOT recipients with bacteremia were included in our analysis: 39 in the pre-RDT group and 29 in the post-RDT group. After initiation of RDT, there was a significant decrease in time to bacterial identification in the post-RDT group compared with the pre-RDT group (p<0.001). Patients in the post-RDT group had a reduction in ICU length of stay (IQR 3-9 days vs. 10 days, IQR 8-17; p=0.02), although no difference was observed in mortality endpoints. RDT coupled with an ASP communication resulted in faster identification of microorganisms, prompt de-escalation or escalation of antibiotics, and reduced ICU length of stay, with potential implications for improved clinical outcomes in SOT recipients with bacteremia.

3 Pathology, University of Pittsburgh, Pittsburgh, PA; 4 Immunology, University of Pittsburgh, Pittsburgh, PA. Background: The ability of regulatory T cells (Treg) to prolong allograft survival and promote transplant tolerance in lymphodepleted rodents is well established. Few studies have addressed the therapeutic potential of adoptively transferred Treg in clinically relevant large animal models. We evaluated the impact of polyclonal Treg infusion(s) on heart allograft survival in cynomolgus monkeys. Methods: Ex vivo-expanded CD4+CD25+CD127-Foxp3+ Treg were obtained from either normal monkeys (for 3rd-party infusions) or prospective recipients (for autologous infusions). Recipients (n=9) and donors were MHC-mismatched. Recipients received either no Treg infusion (n=3), a single 3rd-party infusion (n=3), a single autologous infusion (n=1), or multiple autologous Treg infusions (n=2). All recipients received anti-thymocyte globulin, short-term tacrolimus, anti-IL-6R mAb, and tapered rapamycin maintenance. Results: Treg administration in single or multiple doses (up to 1.87 billion cells) during the first month post-transplant resulted in inferior graft function. With no Treg infusion, graft palpation scores began to decline on days 28, 35 and 53. With a single Treg infusion, reductions in score were observed on days 32, 41, 53 and 53, whereas with multiple Treg infusions, reductions in score were observed on days 12 and 14. This was accompanied by significantly higher levels of the cardiac enzyme CPK-MB. In all recipients, T cells were profoundly reduced. The CD4+ Treg:CD4+ Teff ratio was skewed markedly in favor of Treg during the first month post-transplant, particularly with multiple Treg infusions. With Treg infusion, there were increased incidences of effector memory T cells, enhanced IFNγ production by CD8+ T cells, elevated pro-inflammatory cytokines, and increased anti-donor antibody levels. Conclusions: Despite marked but transient increases in Treg relative to endogenous effector T cells in lymphodepleted NHP heart graft recipients and the use of "Treg-friendly" immunosuppression, the host environment/immune effector mechanisms can perturb rather than favor the potential therapeutic efficacy of adoptively transferred Treg.
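Referring back to the rapid-diagnostics/stewardship abstract above, a skewed outcome such as ICU length of stay in the pre- versus post-RDT groups would typically be compared with a nonparametric test. The sketch below shows a Mann-Whitney U comparison; the group sizes are borrowed from the abstract, but all values are simulated placeholders, not study data.

```python
# Sketch only: nonparametric comparison of ICU length of stay between pre- and
# post-RDT groups. The data below are made up for illustration.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(7)
pre_rdt_icu_days = rng.gamma(shape=2.0, scale=5.0, size=39)   # 39 pre-RDT patients
post_rdt_icu_days = rng.gamma(shape=2.0, scale=3.0, size=29)  # 29 post-RDT patients

stat, p = mannwhitneyu(post_rdt_icu_days, pre_rdt_icu_days, alternative="two-sided")
print(f"median post-RDT = {np.median(post_rdt_icu_days):.1f} d, "
      f"median pre-RDT = {np.median(pre_rdt_icu_days):.1f} d, p = {p:.3f}")
```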
Introduction: Long-term post-liver transplant (LTx) complications, particularly impaired renal function (RF) and its consequences, remain a concern. Presented here are the results of the SIMCER Study evaluating the efficacy and safety of treatment with everolimus (EVR) combined with enteric-coated mycophenolate sodium (EC-MPS) after the stepwise withdrawal of tacrolimus (TAC) vs standard TAC + EC-MPS treatment. Methodology: This is a prospective, open-label study conducted in 15 French centers. 188 patients were randomized at 1 month (M1) post-LTx (1:1) to receive EVR (C 0 6-10 ng/ml) + EC-MPS (1440 mg/d) + TAC (stepwise withdrawal over 8 weeks on average) or TAC (C 0 6-10 ng/mL) + EC-MPS (1440 mg/d). All received basiliximab ± corticosteroids. The primary objective was to evaluate whether EVR + EC-MPS leads to better RF (eGFR, abbreviated MDRD) at 6 months (M6) vs standard treatment. Results: Patient characteristics were comparable between the EVR + EC-MPS (n = 93) and TAC + EC-MPS (n = 95) groups. The analysis of covariance of eGFR progression between randomization and M6 shows a significant difference in favor of the EVR + EC-MPS group (+14.3 ml/min/1.73m², 95% CI 7.3-21.3; p<0.0001). The incidence of treatment failures was comparable between EVR+EC-MPS and TAC+EC-MPS (10% vs 4.3%; p=0.139, of which 8 vs 2 treated BPAR, 0 vs 1 graft loss, 1 vs 1 death), and the incidence of adverse events (AEs) was: 90% and 90.4% respectively, p = 0.923. The incidence of serious AEs was significantly higher in the EVR + EC-MPS group than in the TAC + EC-MPS group (46.7% vs 29.8%; p = 0.018) without specific pattern of SAEs. The SIMCER study has evaluated for the first time the EVR + EC-MPS combination from M1 post-LTx with stepwise withdrawal of TAC compared with the TAC + EC-MPS combination. A significant benefit in terms of renal function was reported in the EVR + EC-MPS group, with efficacy of immunosuppression comparable to the standard treatment despite a higher incidence of SAEs. These results require confirmation over the longer term. Cardiac allograft vasculopathy (CAV) is a leading cause of death in long-term heart transplant survivors. Dense B cell infiltrates are common in heart transplants with CAV. To understand the role of these cells, we conducted detailed examination of graft-infiltrating B cells with regards to their phenotype, clonality, reactivity and functional interplay with other infiltrating immune cells. We collected 55 cardiac allograft explants and confirmed the presence of CAV in all of them based on intimal thickening of intramural vessels. 93% of these explants had B cell infiltration near coronary arteries. Antibody-producing plasma cells and macrophages could also be detected in 85% and 95% of CAV explants, respectively. Significant fraction of infiltrating macrophages had an M2 phenotype (CD68+CD163+). Notably, B cell infiltrates were not associated with circulating DSA, indicating that the function of these cells was unrelated to the production of anti-HLA antibodies. To study the clonality of infiltrating B cells, we analyzed the immunoglobulin heavy chain variable region (IGVH) repertoire in 6 CAV explants using next generation sequencing. We found that certain B cell clones had undergone robust proliferation at multiple locations in the graft tissue. Lastly, we isolated and immortalized B cell clones directly from 3 fresh CAV explants and analyzed the reactivity profiles of their secreted antibodies using ELISAs and flow cytometry. 
Remarkably, a majority of these clones had characteristics of innate-like B cells and secreted natural antibodies reactive to multiple auto-antigens and/or to apoptotic cells. To conclude, our study revealed a high frequency of B cell infiltrates around coronary arteries in heart transplants with CAV, independent of DSA. The unexpected enrichment of polyreactive innate-like B cells among infiltrating cells strongly supports their role in the immune reaction associated with CAV, especially in association with neighboring macrophages. Supported by NIH AI116814, ROTRF 46510926, and S10RR027050-01A1.

Methods: Patients approved for waitlist evaluation at the Nashville VAMC from 01/09-10/15 were included. Monthly data were summarized as: number of applications, median days to initial evaluation, and % of initial evaluations that occurred within 30 and 60 days. Temporal trends were analyzed using non-parametric comparisons of medians between three eras: pre-web-based submission (PWB), web-based submission (WB), and web-based submission with telehealth (TWB). Results: During the study period, 906 patients were approved. The number of applications per month did not vary between eras (p=0.571). The monthly median time to initial evaluation decreased significantly in the TWB era when compared to the WB and PWB eras (59 vs 260 and 116 days, respectively; p≤0.01). There were no differences in the monthly % of patients evaluated within 30 days between eras (p=0.521). TWB significantly increased the monthly % with appointments within 60 days compared to the WB and PWB eras (54% vs 8% and 0%, respectively; p<0.001). Conclusion: Our transplant center has been able to markedly improve the timeliness of kidney transplant waitlist evaluation for Veterans with the addition of telehealth.

Conclusion: Formative testing of ASCENT intervention materials will be conducted to ensure they are tailored to each targeted group. In the large clinical effectiveness study, we will administer the validated materials and assess their impact on improving KAS knowledge among medical directors and on reducing disparities in KTx waitlisting.

Background: Although kidney transplantation (KTx) provides a significant survival benefit over long-term dialysis, many end-stage renal disease (ESRD) patients are uncertain or conflicted about their decision to undergo KTx or remain on dialysis. We aimed to identify predictors of decisional conflict between treatment options in ESRD patients. Methods: In a randomized clinical trial of ESRD patients (n=158) measuring the effectiveness of a decision tool at a single transplant center, we assessed patient decisional conflict related to KTx through a validated 10-item questionnaire. Scores could range from 0 (none) to 100 (high). Decisional conflict was dichotomized into no decisional conflict (score = 0) and any level of decisional conflict (score > 0). We investigated potential predictors of decisional conflict using univariate and multivariable logistic regression. Results: The mean age of the cohort was 51 years, with 63% male and 67% African American patients. More than half (54%) had decisional conflict regarding their treatment. The median time on dialysis at the time of KTx evaluation was 39 months (IQR: 20-127). In univariate analyses, male sex (vs. female OR: 2.5; 95% CI: 1.3, 4.8), black race (vs. white OR: 2.4; 95% CI: 1.2, 4.9), ≥1 year on dialysis (vs. <1 year OR: 2.5; 95% CI: 1.2, 5.1), graduate education (vs. high school OR: 0.3; 95% CI: 0.1, 0.9), and low numeracy (vs.
high OR: 2.54; 95% CI: 1.0, 6.3) were independent predictors of decisional conflict. Multivariable logistic regression (Table 1) found age, sex, race, time on dialysis, and numeracy to be moderately good at predicting decisional conflict (c-statistic: 0.74; 95% CI: 0.65, 0.82). Conclusions: Understanding characteristics that predict decisional conflict in ESRD patients could help identify patients who may require intervention efforts to decrease patient uncertainty about treatment options.

Background: Adults over 70 years old are the fastest growing segment of the end-stage renal disease (ESRD) population. Due to substantial variation in co-morbidities and life expectancy in this group, we sought to develop a clinical prediction tool to identify incident elderly dialysis patients (age ≥70) with a 5-year prognosis appropriate for kidney transplantation. Methods: We used data from the United States Renal Data System. All incident dialysis patients in 2006-2009 aged ≥70 at the time of dialysis initiation were included. Patients who were transplanted, were missing a dialysis start date, or recovered renal function were excluded. Included patients were randomly divided into derivation and validation cohorts. Using the derivation cohort, candidate variables with a significant crude association with 5-year all-cause mortality were included in a multivariate model, which was used to generate a scoring system based on the beta coefficients. The scoring system was then applied to the validation cohort for performance testing. Results: The relevant variables and score assignments, derived from the derivation cohort (n=78,595), are shown below. In the validation cohort (n=78,596), the probability of mortality within 5 years of dialysis initiation was 52.6% for the lowest risk score quintile (<1 pt), representing 3.4% of the validation cohort. Conclusions: Our clinical prediction tool could be useful for physicians to identify potentially suitable candidates for kidney transplantation among elderly dialysis patients in the US.

Many factors beyond traditional clinical risk factors and demographics are associated with outcomes in the general population. These include behavioral, economic, genetic and environmental risks that are difficult to codify but are associated with access to care, mental/physical health and protocol adherence. There is also significant heterogeneity in health conditions and outcomes across the US. We used a novel database published by the Institute for Health Metrics and Evaluation, which includes estimates of life expectancy (LE) by US residence location. We merged these data by zip code with national SRTR data and tested whether LE (stratified by race and ethnicity) was associated with kidney transplant recipient outcomes independent of recipient/donor risk factors. We included adult solitary kidney transplant recipients transplanted between 2004 and 2009 with follow-up through 2014. There was significant variability in LE by recipient zip code, ranging from 65 yrs for males (similar to North Korea, Iraq, Mongolia, and Ghana) to 82 yrs (similar to Japan, which has the longest LE). Overall graft and patient survival were significantly associated with residential LE, and this association was consistent after risk adjustment. Gender- and ethnicity-specific LE estimates had stronger associations with outcomes within their respective groups. Estimated effects of LE were virtually unchanged with additional adjustment for zip code median income and percent poverty, suggesting that LE is a proxy for conditions beyond socioeconomic status.
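For the elderly-dialysis prediction tool described earlier in this section, beta coefficients from the multivariable mortality model were converted into a point-based scoring system and applied to the validation cohort. The sketch below shows one generic way such a conversion can be done; the variables, coefficients, and data frames are hypothetical and are not taken from the study.

```python
# Generic sketch: turn multivariable model beta coefficients into integer points
# and score a validation cohort. All names and numbers below are illustrative.
import pandas as pd

# Hypothetical betas from a mortality model fit on a derivation cohort.
betas = {"age_75_79": 0.25, "age_80plus": 0.55, "diabetes": 0.30,
         "chf": 0.45, "assisted_living": 0.60}

smallest = min(abs(b) for b in betas.values())
points = {k: int(round(b / smallest)) for k, b in betas.items()}  # scale betas to integer points

def score(row: pd.Series) -> int:
    """Sum the points for every risk factor present in a patient record."""
    return sum(pts for var, pts in points.items() if row.get(var, 0) == 1)

validation = pd.DataFrame({
    "age_75_79": [1, 0, 0], "age_80plus": [0, 1, 0],
    "diabetes":  [1, 1, 0], "chf": [0, 1, 0], "assisted_living": [0, 1, 0],
})
validation["risk_score"] = validation.apply(score, axis=1)
# The study grouped scores into quintiles; two groups suffice for this toy frame.
validation["risk_group"] = pd.qcut(validation["risk_score"], q=2, duplicates="drop")
print(validation)
```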
There was significant variation in LE by individual transplant center (female LE median=81 years, 10 th percentile=78, 90 th percentile =83) suggesting heterogeneity in baseline risk between centers. Findings support the notion that factors outside of the standard transplant registries are strongly associated with patient outcomes. Patients' baseline risk as measured by resident life expectancy may provide insights for risk assessment and potentially be utilized as a readily available tool for risk adjustment and center performance evaluations. Further considerations of the mechanisms of this association and potential interventions for patients from residence with lower life expectancy are warranted. Cardiac evaluation and risk stratification prior to kidney and liver transplantation can be critical to patient selection and outcomes. We assessed the impact of cardiac evaluation on waitlist entry, transplant rate, and survival among transplant candidates. Patients who were referred for kidney or liver transplant evaluation were drawn from a database of a large commercial health plan. Using multivariate proportional hazard models, we estimated time-toevent mortality and wait-list entry outcomes for patients referred between January 2010 and March 2015. Outcomes were followed up through April 30, 2015. Outcome analysis was adjusted for age, gender, patient comorbid conditions, transplant primary diagnosis, UNOS region, and socio-economic status. Results: A total of 7,097 patients evaluated for a kidney transplant and 3,071 evaluated for a liver transplant were included in the study. Stress testing was common in both populations showing lower rates of catheterization, angiography and revascularization. Stress testing was associated with a lower probability of mortality among patients who were evaluated. No other significant mortality effects were observed. Stress testing was also associated with a higher probability of waitlist entry among liver transplant candidates. Catheterization was associated with increased probability of waitlist entry in both kidney and liver transplant candidates. Having the patient undergo a revascularization was associated with lower likelihood of waitlist entry among kidney candidates. ****P < 0.0001, ***P < 0.001, **P < 0.01, *P < 0.05. Conclusions: Cardiac screening and intervention have significant impacts on patient survival and access to the waiting list in kidney and liver transplant candidates. In the future we hope to also be able to report on the effect of cardiac screening of transplant candidates on total medical cost before, during and post transplantation. In B cell-deficient mice, prolonged islet allograft survival (GS) mediated by anti-TIM1 depends on transfer of IL-10 competent B cells. However, such regulatory B cells (Bregs) might also require other cytokines or differentiation into plasma cells (PCs) to provide effective suppression. In this regard, in EAE models, B cell IL-35 and IL-10 were both shown to be essential for Bregs to ameliorate disease. Moreover, PCs express very high levels of both cytokines and PCs were also essential for "Breg" activity in EAE. To determine the role of IL-35 in anti-TIM1 mediated GS, WT vs. IL-35KO (Ebi3-/-p35-/-) B cells were transferred into B cell-deficient µMT (B6) recipients of BALB/c islets with/without anti-TIM-1 treatment. While anti-TIM1 had no effect on GS in µMT mice without B cells (MST 17.5 vs. 
21 d; p= NS), transfer of either WT or IL-35-/-B cells, restored responsiveness to anti-TIM-1 (100% vs.66.7% GS>50d p=NS) and both were significantly different than no B cell transfer (p<0.05). To examine the role of PCs (B220 low CD138 hi Ig + Blimp1 + ) in GS we first compared their IL-10 expression to (CD19 + CD138 -) B cells in alloimmunized mice. Freshly isolated splenic PCs had increased frequency of IL-10 expression (~47%) vs. B cells (<1%). Thus, despite being infrequent (0.3%~0.6% of splenic B cells), PCs comprise ~20-35% of all B-lineage IL-10. However, after in vitro stimulation to promote cytokine expression, B cell IL-10 expression increases markedly (4-7%), and PCs only comprise ~4% of B-lineage IL-10. To examine their Breg activity in vivo, WT and BLIMP1-/-B cells were transferred into µMT islet recipients. Both types of B cells were similarly effective at mediating prolonged GS by anti-TIM-1 (100% vs.75% GS>80d p=NS). Finally, we used unique mice with an inducible B cell-specific KO of IL-10 (IL-10fl/flXCD20-ERT2-Cre; B6) vs Cre-neg littermate controls as recipients of islet allografts after treatment with Tamoxifen. B cell IL-10 was essential for prolonged GS by anti-TIM-1 in these otherwise entirely normal mice. Thus, anti-TIM-1 mediated prolongation of GS is completely dependent on induction of Bregs expressing IL-10, and unlike other models, requires neither IL-35 nor PC differentiation. The conceptual underpinning of our studies is that robust tolerance requires multiple mechanisms to enforce functional quiescence of alloreactive T and B cells. While much research has focused on the mechanisms that control alloreactive T cells, much less is known about the fate of alloreactive B cells. Clinical data suggest that donor-specific antibodies are a major cause of graft rejection despite immunosuppression, leading us to hypothesize that stable tolerance requires the donor-specific B cell responses to be profoundly inhibited, and by mechanisms that are not solely dependent on the absence of T cell help. To investigate the mechanism of B cell tolerance, endogenous donor-MHC Class I and Class II reactive B cells from C57BL/6 recipients of BALB/c hearts were identified using H-2K d and I-E d tetramers. Tolerance was induced with anti-CD154 plus BALB/c splenocytes, and the majority of allografts were accepted for >200 days without donor-specific antibodies. In tolerant recipients, alloreactive B cells were not deleted, and were present in numbers and phenotypes similar to naïve controls. Also there was no enrichment for CD93 + transitional or marginal zone B cells, or for the expression of CD5, CD1d or TIM-1. Importantly there was no increased frequencies of IL-10-production within the alloreactive B cell subset in tolerant compared to naïve controls. Despite the lack of these phenotypic differences that have been described to mark regulatory B cells, B cells from tolerant mice were functionally unresponsive. Upon adoptive transfer of purified B cells into naïve MD4 recipients (~95% of their B cells express B cell receptors specific for HEL), B cells from tolerant mice (B-Tol) did not make DSA-IgG upon challenge with BALB/c splenocytes, even when additional alloreactive CD4 + T cells (TCR75) and polyclonal T cells from naïve donors were included in the adoptive transfer. In contrast, B cells from naïve mice (B-naïve) responded with strong DSA responses. 
Even more notable was the ability of B-Tol to inhibit B-naïve responses when the cells were adoptively transferred at a 1:1 ratio. To our knowledge, this is the first demonstration that B cells from tolerant mice have the capacity to suppress naïve B cell alloantibody responses.

Introduction: Type 1 Diabetes (T1D) is a significant barrier to islet transplant, as grafts are subject to both alloimmunity and recurrent autoimmunity. This clinical scenario is modeled nowhere better than in the T1D-prone non-obese diabetic (NOD) mouse, a model in which no therapy has ever induced permanent transplant tolerance to islet allografts or any other allograft, indicating the severity of the immunologic barrier. As B lymphocytes play a critical role in T1D, we hypothesized that eliminating B lymphocytes would enhance tolerance induction. Methods: Diabetic NOD mice and B cell-deficient NOD mice (NODμMT) were transplanted with C3H islets and tolerized with 100 µg anti-CD45RB on days 0, 1, 3, 5, and 7, or left untreated. Rejection was determined by two consecutive BG readings >250 mg/dL. To confirm the tolerant state, grafted islets were removed and the return to hyperglycemia was noted. Hyperglycemic recipients were then retransplanted with matched C3H islet allografts, received no further treatment, and were monitored for maintenance of euglycemia. Tolerant grafts were analyzed via IHC, and lymphocyte populations were evaluated by flow cytometry. Results: B lymphocyte-deficient NODμMT mice were susceptible to tolerance induction in almost 100% of recipients. Removal of the graft resulted in recurrent hyperglycemia (confirming graft function), and a retransplanted matched graft was accepted without further treatment. Immunohistochemical analyses of tolerant islet allografts demonstrated significant infiltration of protective CD4 regulatory T cells (CD4 Tregs), whose expansion was restrained by B lymphocytes. Furthermore, we observed a significant expansion of Vβ3+ CD4 Tregs in NODμMT mice, a clonotypic T cell population which, when expanded, confers protection from T1D. Our investigation demonstrated direct B cell-Treg interactions by imaging cytometry and suggests that thymic-resident B cells may negatively select Tregs in NOD mice. These discoveries 1) represent the first instance in which permanent transplant tolerance has been achieved in NOD mice and 2) highlight the deleterious role that B lymphocytes play in the establishment of transplant tolerance in T1D.

We are currently testing the ability of these ex vivo-expanded Bregs to prolong islet allograft survival. Methods: Splenocytes were harvested from C57BL/6 mice. B cells were purified by negative selection (CD90.2 kit, Miltenyi) and cultured with irradiated NIH 3T3-CD40L cells for 7 days. Cells were treated on days 0 and 3 with combinations of anti-TIM-1, IL-4, IL-21, and BAFF (Fig. 1A). Expansion between different treatment groups was compared using ANOVA statistics. P values <0.05 were considered statistically significant. In vitro growth of B cells with NIH 3T3-CD40L cells and concomitant treatment with anti-TIM-1, IL-4, IL-21, and BAFF resulted in nearly 30-fold expansion and an increased percentage of Bregs under optimal conditions. The difference in fold-expansion under the various conditions was statistically significant (p<0.0001; Fig. 1A). Further analyses of the expanded B cells showed a high percentage of TIM-1+ B cells expressing IL-10 and TGF-β (Fig. 1B), both essential for Breg activity.
Delivery of cells co-expressing these two inhibitory cytokines may provide an effective antirejection strategy and promote transplantation tolerance induction. The spleen regulatory B cell subset with the functional capacity to express IL-10 (B10 cells) modulates immune responses. Expansion of Bregs ex vivo will facilitate experimental studies dissecting their mechanism of action and ultimately Bregs may be an effective cell therapy for prevention or treatment of rejection and as an adjunct in gaining transplant tolerance. Clinical data suggest that donor-specific antibody (DSA)-mediated rejection is currently the leading cause of kidney allograft failure, with anti-donor Class II Abs tending to be more pathogenic compared to anti-donor Class I Abs. We have previously reported on the use of MHC Class I tetramers to study the cellular dynamics driving anti-Class I antibody production in mice; here we focus on tracing the fate of endogenous B cells recognizing donor Class II using I-E d tetramers. At 10 day after BALB/c splenocyte immunization of or heart transplant into C57BL/6 mice, a 7.9-9.8 fold increase in the total number of I-E d -binding B cells was observed, with 54-58% of these cells expressing a germinal center (Fas + GL7 + ) phenotype. Following subcutaneous immunization, the I-E d -specific B cell response was localized in draining lymph nodes whereas following heart transplantation into the abdominal cavity, the response was observed in the mediastinal lymph node and spleen. In sensitized recipients, transplantation of BALB/c hearts resulted in a recall I-E d -binding B cell response that resulted in a modest acquisition of the germinal center phenotype (<5%). In contrast the recall anti-I-E d response was associated with strong differentiation into IgG secreting cells, which was detected with an I-E d -specific ELISPOT assay. By combining this visualization approach with the lineage tracking of memory B cells in AID-cre xRosa26-StopfloxEGFP recipients, we observed that I-E d -specific memory B cells were predominantly generated between days 7-14 post-sensitization, consistent with their germinal center origin, and that CTLA-4Ig treatment starting as as late as day 7-14 post-immunization was able to significantly reduce memory B cell generation. In summary, we validated the use of MHC Class II tetramers to specifically track the in vivo fate of endogenous Class II-specific B cells, and provide insights into the timeframe whereby memory B cell generation can be prevented. Since the costimulatory inhibitor belatacept effectively blocks T-B cell interaction in animal transplant models, we studied the functional follicular T helper cells (TFH)-B cell interaction in belatacept-treated patients to determine the effects of this drug in human kidney transplantation (KT). The presence of CXCR5+PD-1+CD4+ TFH cells, CD19+CD27+CD38++ plasmablasts and CD19+CD24++CD38++ transitional B cells was assessed in belatacept-and tacrolimus-treated patients (n=40), and in vitro after donor antigen re-stimulation in the presence and absence of therapeutic concentrations of belatacept (10 µg/mL) or tacrolimus (10 ng/mL). PBMCs were obtained 3 months after KT or during an acute rejection episode (before additional therapy). The proportion of TFH-cells and their allogeneic IL-2 and IL-21 production were measured as well as the differentiation of B cells into TNFα-producing effector plasmablasts and transitional IL-10+ regulatory B cells (Bregs). 
In belatacept-treated patients, the frequency of circulating TFH cells was low before KT (0.6 cells/µL) and did not change thereafter.

B cell depletion can augment or inhibit auto- and allo-immune responses in humans and mice without altering Ig levels. This is likely due to the presence of both regulatory B cells (Bregs) and effector B cells (Beff), expressing anti- or pro-inflammatory cytokines, respectively. While TIM-1 is an inclusive marker for IL-10+ Bregs, the phenotype of Beff cells is completely unknown, hampering study of their in vivo role and therapeutic targeting. We now show that TIM-4 and TIM-1 are expressed by distinct B cell subsets. While TIM-1+ B cells are enriched for IL-10, TIM-4+ B cells are enriched for IFN-γ. Transfer of TIM-1+ B cells prolongs allograft survival (GS) in B-deficient mice, while TIM-4+ B cells accelerate rejection in an IFN-γ-dependent manner. TIM-4+ B cells promote pro-inflammatory Th differentiation in vivo (↑IFN-γ and ↓IL-4, IL-10, and Foxp3). Thus, TIM-4 identifies pro-inflammatory Beff cells. We hypothesized that targeting these cells could prolong GS. Indeed, α-TIM-4 (RMT4-53; 250 µg ip on days -1, 0, and 5) induced long-term GS in BALB/c recipients of B6 islets (MST >100 d). Moreover, α-TIM-4 increases CD4+ T cell IL-4, IL-10 and Foxp3, and decreases IFN-γ and CD4 proliferation. However, these α-TIM-4-mediated changes and long-term GS were completely B cell dependent. To address whether α-TIM-4 directly targeted TIM-4 on Beff cells, we showed that adoptive transfer of WT, but not TIM-4-/-, B cells into µMT recipients reconstitutes α-TIM-4-mediated long-term GS. Similar results were obtained in mixed BM chimeras in which only B cells lack TIM-4 (µMT + TIM-4-/- marrow at a 10:1 ratio). While α-TIM-4 treatment has no effect on TIM-4+ or TIM-1+ B cell frequency in vivo, it decreases B cell IFN-γ (by ~55%) and increases B cell IL-10 (by 60%). Next, purified TIM-4+ B cells were stimulated in vitro +/- α-TIM-4. Surprisingly, α-TIM-4 dramatically inhibited IFN-γ production (~4-fold decrease) by TIM-4+ B cells. Thus, TIM-4 ligation with α-TIM-4 can act directly on TIM-4+ B cells to suppress inflammatory cytokine production, rather than acting indirectly, for example by blocking interaction between TIM-4 and its ligands. This was surprising because TIM-4 has no known signaling motifs and was previously thought to act only in "trans". We believe decreased TIM-4+ B cell IFN-γ augments IL-10 expressed by TIM-1+ B cells. Taken together, our data reveal that targeting TIM-4 enhances IL-10-expressing Bregs and also reduces inflammatory Beff cells to strongly enhance allograft tolerance.

ITN030ST is a prospective randomized trial designed to determine the clinical benefit of early IS withdrawal in adult liver transplant (LT) recipients. Recipients did not receive induction therapy and were maintained on standard CNI IS. 275 participants were enrolled at LT across 7 US sites; 145 had non-immune, non-viral (NINV) causes and 130 had hepatitis C virus (HCV) as indication for LT. Subject disposition is shown in Figure 1: 199 (72%) achieved IS monotherapy early after LT (median 218 [IQR 124-326] days). Subjects achieving stable monotherapy, acceptable liver function, and biopsy without rejection or fibrosis (n=95) were randomized to withdrawal (n=77) or maintenance (n=18) at a mean of 17±4.5 months post-LT. Major reasons disqualifying randomization were voluntary withdrawal (n=41), HCV activity (n=40), and protocol deviation (n=19). Response to IS withdrawal was similar for HCV and NINV subjects.
52/77 (68%) achieved ≤50% of the baseline dose. 10/77 (13%) were off IS for ≥1 year: 9 remained IS-free until study completion (24 months of follow-up) and 1 until retransplantation for HCV recurrence. Of 67 who failed withdrawal, 30 had biopsy-proven rejection and 37 had abnormal liver tests; all resolved with CNI therapy with or without antimetabolites. Only 13 (19%) subjects required corticosteroids. The primary composite endpoint, evaluated 24 months after randomization, comprised death or graft loss; grade 4 malignancy or opportunistic infection; stage ≥3 fibrosis; or a 25% decrease in GFR. The endpoint occurred in 4 of 13 (31%) and 12 of 66 (18%) evaluable subjects in the maintenance and withdrawal groups, respectively. Conclusion: IS minimization starting 12-24 months after LT was tolerated by the majority of recipients. Complete IS withdrawal was achieved in 13% of those qualified for the minimization protocol.

(day 0). A G-CSF-mobilized peripheral blood mononuclear cell product was apheresed from the donor >2 weeks pre-KTx, processed to remove graft-versus-host disease (GVHD)-producing cells yet retain CD34+ cells and FC, and cryopreserved until administration on day +1 post-KTx. Subjects ranged in age from 18-65 years and ranged from 6/6 HLA-matched related to 0/6 matched unrelated. 12 subjects had unrelated and 19 had related donors. 30 of 31 subjects exhibited donor chimerism at one month post-KTx. The one subject without chimerism (NW9) was highly sensitized (PRA>50%). MMF- and tacrolimus-based immunosuppression (IS) was weaned and discontinued at 1 year if chimerism, normal renal function and a normal KTx biopsy were noted. There was a learning curve in the early phases of this study. Subjects NW1 and NW4 received a suboptimal cell dose and were only transiently chimeric (<6 months). Another subject (NW11) with a high PRA (33%, maximum historic 64%) developed transient chimerism. Transiently chimeric subjects resumed endogenous hematopoiesis and are on low-dose IS with stable renal function. 1 subject (NW27) exhibited grade 1-2 GVHD that was successfully treated with corticosteroids. A second subject (NW33) developed concomitant grade 2-3 GVHD and CMV disease and is undergoing treatment. There have been 2 KTx losses related to infections post-Tx. 1- and 5-year patient survival is 100%. Durable chimerism allowing full IS withdrawal developed in 19 subjects (time off IS ranging from 3-65 months). 16 subjects had "full" (>98% donor) chimerism, and three subjects had more mixed chimerism. All stable chimeric subjects retained chimerism after removal of IS and remain rejection-free, as demonstrated by protocol biopsy, while 3 of 5 who were transiently chimeric had subclinical rejection. In summary, high levels of durable chimerism and tolerance with a low incidence of GVHD have been achieved in mismatched related/unrelated recipients of combined FCRx and living donor KTx.

Tolerance induction is considered an ultimate goal in the field of organ transplantation. We have performed simultaneous bone marrow and kidney transplantation in MHC-mismatched patients to induce transient chimerism and tolerance. The preconditioning regimen consisted of cyclophosphamide (Cytoxan), fludarabine and thymic irradiation. Immunosuppression induction was given as rituximab 375 mg/m2 at 7 days and 2 days before transplantation and antithymocyte globulin 1.5 mg/kg/day for 3 consecutive days, beginning on the day before transplant. Maintenance immunosuppression consisted of tacrolimus and steroids.
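The dose arithmetic in the conditioning regimen just described can be sketched in Python as follows; the Mosteller BSA formula and the example height and weight are assumptions, since the abstract does not state how body surface area was calculated.

from math import sqrt

def mosteller_bsa(height_cm: float, weight_kg: float) -> float:
    """Body surface area in m^2 (Mosteller formula; an assumption)."""
    return sqrt(height_cm * weight_kg / 3600.0)

height_cm, weight_kg = 170.0, 70.0       # hypothetical recipient
bsa = mosteller_bsa(height_cm, weight_kg)

rituximab_per_dose = 375.0 * bsa         # mg, given 7 days and 2 days pre-transplant
atg_daily = 1.5 * weight_kg              # mg/day for 3 consecutive days

print(f"BSA ~ {bsa:.2f} m^2")
print(f"rituximab ~ {rituximab_per_dose:.0f} mg per dose (2 doses)")
print(f"ATG ~ {atg_daily:.0f} mg/day x 3 days = {3 * atg_daily:.0f} mg total")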
From December 2011 to May 2014, seven MHC-mismatched patients received simultaneous bone marrow and kidney transplantation. Median follow-up was 22 months after transplant. Immunosuppression tapering was usually initiated once the patient had sustained stable graft function for at least 12 months post-transplant, and immunosuppression was slowly tapered over 6-12 months. Four of the seven patients were successfully tapered off immunosuppression. One of these four patients resumed tacrolimus therapy after biopsy-proven acute cellular rejection was diagnosed 16 months off immunosuppression; the patient was managed with steroid pulse therapy and is currently undergoing a second attempt at immunosuppression withdrawal. Two patients failed immunosuppression withdrawal: one developed acute rejection during immunosuppression tapering and has been kept on tacrolimus without further attempt at withdrawal, and the second had severe BK virus nephritis, which led to graft loss. One patient is currently undergoing immunosuppression tapering.

HLA-mismatched allograft tolerance was intentionally induced via combined kidney and bone marrow transplant (CKBMT) in two ITN protocols (NKDO3 and ITN036ST), but the mechanisms of tolerance are incompletely understood. We previously developed and validated a technique for identifying alloreactive T cells pre-transplant via deep sequencing of the T cell receptor (TCR) β chain hypervariable (CDR3) region and tracking them post-transplantation. Here we describe TCR sequencing of sorted T regulatory (Treg) clones (CD3+CD4+CD25+++CD127-) pre- and post-transplant in three tolerant CKBMT patients. There were 7430, 9593, and 116,350 total Treg clones from each patient, respectively. Donor-specific Treg clones were defined by both: 1) presence in a sorted Treg sample; and 2) at least 5-fold expansion following donor stimulation in a pre-transplant CFSE mixed lymphocyte reaction, with a frequency ≥10^-5 in the sorted CFSE-low sample. For the three patients, 37, 11, and 122 unique donor-specific Treg clones were identified, respectively. At 6 months post-transplant, all three patients showed ≥5-fold expansion of donor-specific Tregs relative to pre-transplant in total unstimulated CD4 populations, and in two of the patients, ≥2-fold expansion persisted until 18 and 24 months post-transplant, respectively, the last time points examined. Furthermore, in two of the patients, donor-specific Tregs were enriched post-transplant ≥3-fold relative to pre-transplant within the CD4+ Treg compartment, suggesting that the donor-specific Treg expansion was at least partially driven by graft antigens. Finally, donor-specific induced Tregs (pre-transplant sorted non-Treg clones that were found post-transplant only in sorted Treg samples) were detected post-transplant in all three patients. These findings suggest that both expansion of pre-existing donor-specific Tregs and induction of Tregs from donor-reactive conventional CD4 cells contribute to the marked enrichment of donor-specific Tregs, which may contribute to the immunologically quiescent post-transplant environment of the tolerated allograft. Ongoing Treg TCR sequencing of a non-tolerant CKBMT recipient should add further insight into the role of Tregs in tolerance in CKBMT recipients.

Therapeutic cell transfer using regulatory T cells (Tregs) holds the promise of reducing the need for drug-based immunosuppression (IS) and improving long-term graft survival.
In late 2014 we initiated a Phase 1 trial of autologous, polyclonally expanded Tregs in living donor kidney transplant (KTx) recipients (NCT02145325, IND 15898). This is a non-randomized, dose-ranging study with 3 tiers of cell dosing (0.5, 1, and 5 x 10^9 cells infused; n=3 subjects/tier). Subjects underwent a non-mobilized leukopheresis >2 weeks prior to KTx; this leukopheresis product was cryopreserved for later isolation/manufacturing of Tregs. We have demonstrated the ability to isolate and manufacture phenotypically "pure", functional, GMP-grade Tregs from a cryopreserved pheresis product in support of our IND. Briefly, we enrich for natural Tregs through sequential negative (CD8, CD19) and positive (CD25) immunomagnetic selection (CliniMACS, Miltenyi Biotec). The enriched natural Tregs undergo a 3-week expansion using CD3/CD28 Exp-Act® beads (Miltenyi Biotec), IL-2, and sirolimus. Resultant cells must meet the following release criteria: >70% viable; >70% CD4+CD25+; <10% CD8+ and CD19+; <3000 Exp-Act® beads/10^8 cells; endotoxin <5.0 EU/kg; negative aerobic, anaerobic and fungal sterility cultures; negative mycoplasma and gram stain; and >50% suppression of T-effector proliferation in vitro. KTx recipients received alemtuzumab induction to achieve lymphodepletion (deemed important for the effectiveness of TRACT) and tacrolimus/mycophenolate-based IS. Subjects were converted from tacrolimus to sirolimus at 30 days post-KTx to provide an IS milieu conducive to the survival of infused Tregs. Tregs were infused on day +60. 9 subjects have been enrolled and all have received TRACT. We have successfully isolated and expanded Tregs from all collected pheresis products. All manufactured product has met release criteria. There have been no serious adverse events attributable to TRACT in any subject. Protocol biopsies performed after TRACT have not shown rejection. There have been no infectious complications. Immunophenotypic analysis of subjects shows a significant (9-20-fold) increase in the percentage of circulating CD4+CD127-CD25^high Foxp3+ cells in peripheral blood post-TRACT. Conclusion: TRACT using autologous polyclonally expanded Tregs appears safe. Plans for a Phase 2 trial are being finalized.

Systematic multicenter clinical evaluation of regulatory cells is being conducted in the EU-supported ONE Study trial, in which each site is administering a different regulatory population (non-donor-specific Tregs, donor-specific Tregs (dsTregs), regulatory macrophages, and tol-DCs) using a common immunosuppressive protocol. We report the first clinical application of dsTregs in kidney recipients as part of the ONE Study. dsTregs were generated (Guinan et al., STM 2009) by culturing donor and recipient PBL in MLRs with the addition of costimulatory blockade (belatacept). We documented feasibility in vitro with ESRD patient cells, finding a similar 2-3x expansion of Tregs after MLR + belatacept. dsTregs isolated after 3 days of culture under GMP conditions were purified by sequential negative (CD19, CD8) and positive (CD25) magnetic bead selection for CD4+CD25+ cells to >95% Foxp3+, and we found these cells had >95% TSDR demethylation. Antigen specificity was verified by suppression of donor but not third-party responses, and a phase I trial of dsTregs in live donor renal transplants was initiated (NCT0209123). Two patients enrolled were males aged 47 (GN) and 53 (IgA), transplanted with LURDs. Frozen donor and recipient PBMCs were processed 1 week post-op; 3 days later recipients received dsTregs at ~2x10^4/kg. No complications accompanied the infusion.
Both successfully discontinued steroids at 15 weeks. At 5 and 8 months post-tx, both have excellent graft function. Protocol biopsy at 9 months will determine eligibility to discontinue MMF, leaving only low-dose maintenance tacrolimus. Patient 2 initially experienced mild DGF prior to receiving dsTregs, and a slight rise in creatinine on day 12 prompted a biopsy, which revealed no rejection; a mild T cell infiltrate was present that was rich in Foxp3+ CD4 cells. We initiated the first human organ transplant trial of purified dsTregs and demonstrated the feasibility and preliminary safety of dsTreg therapy. While numbers and follow-up are limited, there have been no adverse events, suggesting safety. In subsequent patients, the dose of Tregs will be escalated. We expect these preliminary studies to lay the foundation for the addition of cell-based dsTregs to the armamentarium of therapies to prevent rejection and promote transplant tolerance.

The capacity of anti-HLA antibodies (Abs) to activate complement is thought to be an informative property that may facilitate outcome prediction. Two different tests, C1q- and C3d-binding assays, have recently been reported as useful predictors. However, some authors have suggested that the complement-binding ability of anti-HLA antibodies and MFI value are tightly correlated, especially when analytical artefacts interfering with MFI determination (prozone effect) are properly corrected. We explored the impact of a decrease in MFI value, achieved by plasmapheresis, on the complement-binding capacity of anti-HLA Abs. Four hundred thirty-three anti-HLA Abs (66% Class I, 34% Class II) were monitored longitudinally (Single Antigen Bead assay, Luminex) before and after plasmapheresis for their capacity to bind C1q (One Lambda) and C3d (Immucor) according to their respective MFI values. Sera were pre-treated with DTT to control the complement- and IgM-associated prozone effect.

Functional assessment of HLA antibodies (Abs) has historically been limited to complement-based assays, which have known limitations. Moreover, HLA Abs mediate injury by complement-independent mechanisms, including direct signaling via HLA antigens and via Fc receptor (FcR) engagement. We have hypothesized that comprehensive assessment of the pathogenicity of HLA Abs requires assessment of FcR binding capacity. Initial validation of an FcR binding assay is presented. Data from 48 patients (pts) with antibody-mediated rejection (AMR) were analyzed. Pathologic data included Banff component scoring and C4d staining. Serum samples obtained prior to transplantation, at the time of AMR diagnosis and following AMR treatment were analyzed by HLA single antigen bead (SAB) microarrays, C1q assay, IgG isotype-specific SAB assays (IgG1, IgG2, IgG3 and IgG4), and FcR binding assays (performed according to standard SOPs for the laboratory-developed test). RESULTS: FcR assay inter- and intra-run CVs were <10%. Correlation between SAB assay Ab strength and FcR assay was high (r=0.70, p=0.0075); correlation between FcR assay and C1q assay was lower (0.57) due to negative C1q assay results when HLA Ab strength was moderate or low. In 14 pts with AMR and low-strength DSA (<2000 MFI), C1q assays were negative in all (14/14 pts), whereas 10/14 (70%) pts were positive by FcR assay (p=0.0004). Of 10 pts with AMR and moderate-strength DSA (4000-8000 MFI), 1 of 10 (10%) had a positive C1q assay, whereas 8/10 (80%) had a positive FcR assay (p=0.007). FcR assay results positively correlated with IgG1 and IgG3 isotype-specific SAB assays.
Banff component acute glomerulitis (

Background: Microvascular inflammation (MVI), assessed as glomerulitis (g) + peritubular capillaritis (ptc), is associated with T cell-mediated (CMR) and antibody-mediated (AMR) allograft rejection. It is a key finding in the diagnosis of C4d-negative AMR, and for that reason it was incorporated into the 2013 update of the Banff classification. There is, however, high inter-observer variability. We observed that a high proportion of MVI cells in allograft rejection are activated cytotoxic T cells with aggregated perforin-positive granules detectable by immunohistochemistry (IHC). Perforin-positive cells are rare in the interstitium and tubules; in this setting the perforin stain is effectively a stain for MVI. We sought to determine whether it could substitute for the calculation of MVI. Design: Fifty clinically indicated renal allograft biopsies were selected to include various classes of rejection based on the 2013 Banff criteria. Perforin IHC was performed on each biopsy using a mouse monoclonal antibody, and the number of perforin-positive cells per 10 high-power fields (400X) was counted blindly and recorded.

ml) was carried out for 15 minutes at 37°C, followed by 3 minutes of treatment with DNase (2.75 mg/ml). CD32 monoclonal antibodies 6C4 (eBioscience) and IV.3 (Stemcell Technologies), and a recombinant, immunoglobulin-derived protein (Human BD Fc Block; BD Biosciences) were incubated with donor cells for 10 minutes before serum was added. To induce Fc receptor-mediated false positives, heat-aggregated gamma globulin (Quidel) or heat-treated serum (20 minutes at 56°C) was used. Results: Of the three Fc receptor-blocking reagents, only CD32 monoclonal antibody 6C4 was able to prevent aggregated immunoglobulin binding to untreated donor B cells, producing mean channel shift values consistent with a negative FCXM. Additionally, serum from an antiretroviral-treated HIV+ patient resulted in a false-positive T cell FCXM when tested against pronase-treated cells, but not with 6C4-incubated cells. Conclusion: The nonspecific enzyme cocktail pronase can be used to abrogate false-positive B cell FCXM results due to aggregated or antigen-bound IgG binding to B cell-expressed Fc receptors. Our results demonstrate that a more specific reagent, CD32 monoclonal antibody 6C4, can replace pronase treatment. Importantly, 6C4 incubation is faster than pronase treatment and does not introduce other false positives. Blocking the Fc receptor increases the specificity of the FCXM assay, reduces test time, and is expected to improve patient management.

Complement-binding anti-HLA DSA have been associated with higher rejection rates and worse allograft outcomes. We investigated the effect of terminal complement inhibition (Eculizumab) in a cohort of patients with C1q-binding DSA at the time of transplantation. We enrolled 2 groups of patients, all with C1q-binding DSA at the time of transplantation between 2011 and 2013: i) 12 patients (study group) who received Eculizumab for the prevention of AMR according to a phase 2 clinical trial (NCT01567085); ii) a matched control group of 12 patients who received standard of care (plasmapheresis x4 and IVIG 2 g/kg body weight). Surveillance kidney allograft biopsies were performed at 14 days and 12 months post-transplant in all patients. In both groups the allograft injury phenotype was assessed by histopathology and gene expression. DSA characteristics, GFR and proteinuria were also determined at those time points.
Baseline characteristics were similar in both groups in terms of donor, recipient, transplant and DSA characteristics. At day 14, patients receiving Eculizumab had lower histological Banff scores for peritubular capillaritis (p=0.0074), interstitial inflammation (p=0.0133) and tubulitis (p=0.0055). Molecular allograft gene expression revealed lower AMR activity, reflected by decreased endothelial DSA-selective transcripts (DSAST, 3.97 fold-change, p<0.001) and ABMR Molecular Score (1.81 fold-change, p=0.0061). This lower activity was related to lower expression of NK transcripts (NKb, 5.25 fold-change, p<0.0001), macrophage transcripts (QCMAT, 1.67 fold-change, p=0.04), IFNG production and inducing transcripts (GRIT, 2.65 fold-change, p<0.001) and acute kidney injury transcripts (IRRATS, 1.53 fold-change, p=0.01), as compared with patients receiving standard of care. At 1 year (after 9 months of treatment washout), the histologic scores (g, ptc, cg, C4d, i, t, v, IFTA and cv) and the molecular scores for ABMR, GRIT, QCMAT and IRRATS were similar in both groups, with the exception of NKb (1.8 fold-change, p=0.02) and DSAST (2 fold-change, p=0.01). At 1 year, eGFR and DSAmax MFI were similar in both groups, while there was a trend to lower proteinuria in the Eculizumab-treated group (p=0.0562). Eculizumab prophylaxis in patients with C1q-binding DSA reduces early AMR activity, with a major effect on NK burden, but has limited effect after treatment discontinuation.

We investigated the clinical, histological and immunological determinants of allograft survival in patients with active AMR receiving standard-of-care treatment in a prospective observational study. We prospectively enrolled consecutive kidney transplant recipients with biopsy-proven active AMR diagnosed between 2007 and 2013 in two Paris transplant centers. All study patients received standardized treatment including plasmaphereses (x4), high-dose intravenous immune globulins (2 g/kg) repeated every 3 weeks for 3 rounds, and rituximab (375 mg per square meter of body-surface area). Patients were systematically assessed at the time of diagnosis and at 3 months post-treatment for clinical data (eGFR and proteinuria), histological characteristics (allograft biopsy) and circulating anti-HLA DSA characteristics (specificity, HLA class, mean fluorescence intensity [MFI] and C1q-binding capacity). We included 291 patients with biopsy-proven acute or chronic active AMR who received standard-of-care treatment. The 5-year allograft survival after AMR diagnosis was 69.5% (95% CI: 62.7-75.2). Post-treatment independent determinants of allograft loss included eGFR (HR=0.97, 95% CI: 0.96-0.98, p<0.001), microvascular inflammation (g+ptc) score (HR=1.2, 95% CI: 1.0-1.3, p=0.035), allograft glomerulopathy (cg) score (HR=1.4, 95% CI: 1.1-1.8, p=0.004) and complement-binding capacity of DSA (HR=5.2, 95% CI: 3.3-8.3, p<0.001). On the basis of these predictors, we built a composite risk score for allograft loss after AMR treatment that showed good predictive capacity: c-statistic 0.77 (1000-bootstrap 95% CI: 0.71-0.84). For centers that do not use a C1q-binding assay, we replaced the C1q-binding assessment with the MFI of the highest-ranked anti-HLA DSA; the predictive capacity of the MFI-based score was 0.74 (95% CI: 0.68-0.81). A systematic clinical, histological and immunological evaluation of the response to treatment at 3 months post-AMR allows identification of patients at high risk of allograft loss.
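As a rough illustration of the modeling approach described above (not the authors' actual score or data), the following Python sketch fits a multivariable Cox model on the four post-treatment predictors and reports Harrell's c-statistic; it assumes the lifelines package is available, and every column name and value is hypothetical.

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "egfr_3mo":        rng.normal(45, 15, n),   # eGFR at 3 months post-treatment
    "mvi_score":       rng.integers(0, 7, n),   # microvascular inflammation (g + ptc)
    "cg_score":        rng.integers(0, 4, n),   # allograft glomerulopathy
    "c1q_binding_dsa": rng.integers(0, 2, n),   # 1 = C1q-binding DSA present
})
# Toy event times loosely driven by the covariates, censored at 5 years.
risk = (-0.03 * df["egfr_3mo"] + 0.2 * df["mvi_score"]
        + 0.3 * df["cg_score"] + 1.5 * df["c1q_binding_dsa"])
df["years_to_event"] = rng.exponential(5 * np.exp(-risk))
df["graft_loss"] = (df["years_to_event"] < 5).astype(int)
df.loc[df["graft_loss"] == 0, "years_to_event"] = 5.0

cph = CoxPHFitter()
cph.fit(df, duration_col="years_to_event", event_col="graft_loss")
cph.print_summary()                             # hazard ratio per predictor
print("Harrell c-statistic:", round(cph.concordance_index_, 2))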
We defined a post-AMR treatment composite score with good performance in predicting allograft loss. Further studies should validate this score in independent cohorts.

TLR stimulation has been shown by others to prevent costimulatory blockade-induced allograft tolerance. Immune cell-derived complement activation products C3a and C5a amplify inflammation, enhance effector T cell responses and suppress regulatory T cells (Tregs), analogous to previously reported effects of TLR stimulation, raising the hypothesis that the 2 systems are mechanistically linked. As published, BALB/c hearts transplanted into CpG (TLR9 agonist)-treated B6 mice given MR1 were rejected (d20, n=6) while grafts in MR1-treated controls survived >60 d. In contrast, grafts transplanted into MR1-treated recipients deficient in receptors for C3a and C5a (C3aR-/- C5aR-/-) survived >50 d despite CpG (p<0.01, n=6). MR1+CpG-treated recipients deficient in C3aR alone rejected their grafts (d25, n=6). Splenic DCs express C3aR/C5aR, and in vitro CpG stimulation of splenic DCs mixed with allo-T cells produced anaphylatoxins. We next treated WT and C3aR-/- C5aR-/- mice with CpG and isolated/analyzed splenic DCs 4 h later. This ex vivo analysis showed that DCs from the C3aR-/- C5aR-/- mice had lower surface expression of class II MHC, CD80 and CD86, and higher expression of the co-inhibitory molecule PDL1, vs DCs obtained from CpG-treated WT mice (p<0.01). Functionally, CpG-prestimulated WT DCs cocultured with WT T cells augmented allo-T cell responses, while mixed lymphocyte reactions using DCs from CpG-treated C3aR-/- C5aR-/- mice or with C3aR-/- C5aR-/- T cells resulted in a 30-60% reduction in allo-T cell proliferation/expansion (p<0.01). To assess links among TLR9 signaling, C3aR/C5aR signaling and Treg stability in vivo, we employed ERT2-dTomato Foxp3-GFP "fate mapping" mice on WT and C3aR-/- C5aR-/- backgrounds so as to identify Foxp3+ Tregs (GFP+ tomato+) and ex-Tregs (GFP-neg tomato+). While CpG treatment of MR1-treated WT recipients of allo-hearts resulted in ~30% conversion of Tregs to ex-Tregs (Treg instability), the absence of C3aR/C5aR prevented this effect (Treg conversion to ex-Tregs was no different from CpG-untreated controls, p<0.001, n=5-8). Together our data support the hypothesis that TLR9-induced resistance to allograft tolerance depends on C3a/C3aR and C5a/C5aR signaling, which activates DCs, stimulates effector T cells and induces Treg instability. These findings support the need to test the impact of targeting C3aR/C5aR in human transplant recipients.

Although glycogen synthase kinase 3β (Gsk3β) has been found to play key roles in ischemia and reperfusion injury (IRI) of different organs, the underlying mechanisms remain ambiguous, partially because its chemical inhibitors do not differentiate Gsk3β from its α isoform and have no target-cell selectivity in vivo. As global Gsk3β gene KO is embryonically lethal, we created myeloid-specific Gsk3β KO mice using the LyzM-Cre/Loxp-Gsk3β system to study its roles in regulating the macrophage response to IR in a murine liver partial warm ischemia model (90 min ischemia in cephalad lobes). Activation and differentiation of both Kupffer cells (KCs) and bone marrow-derived macrophages (BMMs) were studied in vitro. Myeloid-specific Gsk3β KO was confirmed by Western blot analysis of liver parenchymal and non-parenchymal cells, as well as BMMs.
These KO mice were protected from liver IRI (sALT and liver histological analysis), with diminished pro-inflammatory immune responses (measured by qRT-PCR of TNF-α, IL-6, IL-10 and CXCL10 gene expression and by MPO assays), as compared with WT controls. Gsk3β-deficient BMMs expressed higher levels of M2 markers (arginase and MRC1) and lower levels of the M1 marker iNOS in the resting state, and responded to TLR stimulation with comparable levels of TNF-α and IL-6, lower levels of IL-12/23, but much higher levels of IL-10, as compared with WT cells. The IL-10-high phenotype and elevated M2 marker expression in Gsk3β-deficient BMMs were sustained even under M1 polarizing conditions (IFN-γ). The inhibitory phosphorylation of AMPKα at Thr479 was diminished in LPS-stimulated Gsk3β-deficient BMMs, with a simultaneous increase in its activating phosphorylation at Thr172. More prominently, Gsk3β deficiency resulted in a significantly higher level of induction of SHP, the newly identified innate immune negative regulator, upon TLR stimulation. Inhibition of AMPK (Compound C) attenuated the IL-10-high phenotype and SHP induction in Gsk3β-deficient BMMs. Gsk3β promotes liver pro-inflammatory immune activation by limiting the induction of SHP in activated macrophages via the AMPK signaling pathway.

Objective. Chronic rejection of transplanted organs remains the main obstacle to the long-term success of organ transplantation. Chronic rejection occurs in 40-45% of recipients at 5 years following transplantation. Thus, our goal was to uncover the mechanisms of chronic rejection, which will undoubtedly revolutionize heart transplant medicine. It is known that macrophages play a critical role in the injury and repair of transplanted organs, and novel anti-rejection strategies in organ transplantation target macrophages to abrogate transplant rejection. Multiple macrophage functions depend on the actin cytoskeleton, which is under the control of the small GTPase RhoA and its downstream effector ROCK1. Methods. We generated mice with macrophage-specific deletion of RhoA. Hearts from BALB/c (H-2d) donors were transplanted into RhoAflox/flox (no Cre), heterozygous Lyz2Cre+/- RhoA+/flox and Lyz2Cre+/- RhoAflox/flox recipients treated with CTLA4-Ig to inhibit the early T cell response, and transplanted hearts were pathologically assessed for signs of chronic rejection. Macrophage responses to RhoA deletion were studied by actin staining, immunostaining, PCR, Western blotting, flow cytometry, and in vitro phagocytosis, matrix degradation and cell migration assays. Results. Macrophage-specific deletion of RhoA in conjunction with CTLA4-Ig abrogated chronic rejection of allografts and inhibited macrophage infiltration into the grafts. RhoA deletion increased ROCK1 protein level and activity and affected macrophage structure, polarity and actin-related functions. The increased ROCK1 activity was partially dependent on non-apoptotic Caspase-3 cleavage and was sensitive to Caspase-3 inhibition. Conclusions. These novel findings indicate that interference with the RhoA pathway inhibits macrophage infiltration of the graft and modifies macrophage polarity and functions. This, in turn, will help to design novel macrophage-targeted anti-rejection therapies.

BACKGROUND: Cellular infiltrates in kidney allografts may be relevant to the induction or maintenance of regulatory tolerance.
Here we examined the phenotype, anatomical distribution and function of plasmacytoid dendritic cells in kidney allografts from recipients made tolerant by mixed chimerism protocols (nonhuman primates, NHP) or by natural mechanisms (mice). METHODS: Plasmacytoid dendritic cell (pDC) populations in NHPs rendered tolerant to heart and kidney allografts using a mixed-chimerism protocol and in mice with spontaneously accepted kidney allografts were analyzed using flow cytometry and immunohistochemistry. Bone marrow, naïve and accepted kidney pDCs were compared for differences in pDC cell markers. pDCs were isolated from these tissues and cultured with naïve T cells, IL-2 and TGF-β to test for Foxp3 induction. We observed distinct Treg-rich organized lymphoid structures (TOLS) in spontaneously accepted kidneys in mice and in tolerant NHP; these had prominent Foxp3+ cells and pDCs. pDCs in the tolerant NHP kidneys were found to be BDCA2^hi (pDC marker in NHP), in contrast to bone marrow pDCs in naïve NHPs, which were BDCA2^lo. No pDCs were detected in native kidneys. Tolerant murine kidney pDCs were Siglec-H^hi (pDC marker in mice), whereas naïve kidney pDCs were Siglec-H^lo. Immunohistochemistry revealed gradual localization of pDCs around TOLS over time in both NHPs and mice. Isolated tolerant kidney pDCs cultured with naïve T cells, IL-2, and TGF-β were able to convert T cells into Foxp3+ Tregs with greater potency than pDCs isolated from native kidney (Fig. 1). We have identified a unique subset of pDCs that traffic to a distinct location in accepted kidney allografts and that are capable of promoting the expansion of Tregs. These findings suggest that these unique pDCs play an important role in inducing tolerance of kidney allografts across species.

We previously reported the molecular features of human kidney transplant biopsies with T cell-mediated rejection (TCMR) (Am J Transplant 14(11):2565-76) and antibody-mediated rejection (ABMR) (Am J Transplant 15(5):1336-48), each compared to all other disease and injury states. The present study mapped the features shared by TCMR and ABMR, and sought to understand why this sharing was so extensive. In 703 expression microarrays from prospectively collected kidney transplant indication biopsies (clinicaltrials.gov NCT01299168), we compared biopsies with rejection (TCMR, ABMR, mixed) to all others. Results in the Discovery Set (n=403) were highly conserved in the Validation Set (n=300) (Fig. 1A-B). In the Combined Set (n=703), the top rejection-associated transcripts were IFNG-inducible molecules (e.g. CXCL9, 10, 11, PLA1A, GBP5), with P values as strong as 10^-57 (Fig. 1C). Molecules shared by NK and effector T cells were also strongly associated with rejection, including the NK receptors NKG7, KLRK1 (NKG2D) and KLRD1 (CD94), CD8, and the cytotoxic molecules PRF1, GNLY, and GZMA/B. Inflammasome transcripts were strongly associated with rejection (e.g. CASP1 and AIM2). However, many TCMR-selective molecules representing macrophages (e.g. ADAMDEC1) and costimulation (e.g. CTLA4, ICOS), as well as ABMR-selective molecules representing endothelium (e.g. DARC, VWF) and NK cells (e.g. SH2D1B), had reduced strengths when their association with rejection was considered (e.g. p values of 10^-3). The top rejection molecules reflected the similarity of the cognate events in T cell TCR signaling in TCMR and NK cell CD16a signaling in ABMR - e.g.
IFNG release, reflected by IFNG-inducible molecules - albeit localized in distinct compartments: the interstitium in TCMR and the microcirculation in ABMR. Combined with previous results in TCMR vs all other biopsies (T cell and DC/macrophage triggering) and ABMR vs all other biopsies (NK cells and endothelium), the rejection-associated transcripts provide a complete picture for both rejection phenotypes that could be useful for identifying new interventions against targets that could control both processes.

Purpose: We have shown that CD57+PD1- CD4 T cells play a role in belatacept-resistant rejection, but their mechanism of action is incompletely defined. Potential mechanisms include loss of CD28 expression, adhesion molecule acquisition, and consolidation of an activated phenotype. In these studies, we explored the in vitro biology of these cells to gain insights into their development and further studied the stability of these cells in patients following transplantation. Methods: Peripheral blood mononuclear cells (PBMCs) from healthy controls were non-stimulated, or stimulated with αCD3 beads alone, αCD3 and αCD28 beads, or allogeneic antigen. PBMCs from 50 kidney transplant recipients treated with non-belatacept-based therapies were obtained at baseline, 1 month, 3 months, 6 months and 1 year post-transplant to analyze the longitudinal stability of CD57+PD1- CD4 T cells. Results: We found that in vitro stimulation increases expression of CD57, and that CD57+ CD4 T cells are proliferative and express high levels of granzyme B. CD57+ CD4 T cells express high levels of adhesion molecules previously implicated in costimulation blockade-resistant rejection, and CD57 serves as a unifying indicator of CD2, LFA1, and VLA4 upregulation. Furthermore, while antigen experience induces CD28 loss, it does not confer activation; rather, CD57 expression flags activated cells resistant to belatacept. Interestingly, we find that CD57+PD1- CD4 T cells are virtually non-existent in the healthy population, but are significantly increased in 35% of patients with renal failure (p<0.05), possibly due to chronic inflammation. CD57-high populations persist for at least 1 year following transplantation. Conclusion: CD57+PD1- CD4 T cells are highly activated and proliferative, are present in 35% of kidney transplant recipients at baseline, and persist following transplantation. These cells maintain their increased expression of adhesion molecules and remain poised to mediate rejection. The ability to identify these cells and characterize a means to target them would offer the possibility of therapy conversion to belatacept.

to the assay uncovers reactivity of CD28-negative memory CD8 T cells previously associated with increased risk of graft rejection (AJT 2014). Whether pre-transplant PRT +/- IL-15 is a biomarker of 2-year kidney graft function has not been carefully determined. Methods: We performed PRT assays +/- IL-15 (IL-15 PRT) and anti-donor ELISPOT assays in 92 consecutive living (n=21) and cadaveric (n=71) kidney transplant recipients treated with thymoglobulin or basiliximab induction followed by CNI, MMF and early steroid withdrawal as maintenance. We used multivariable repeated-measures linear mixed models to estimate differences in the change of eGFR over time between groups at 3, 6, 12, and 24 months post-transplant, after adjusting for all potential confounders. A positive PRT was defined as >40 spots/10^5 cells against ≥50% of 8 donors (>80 spots with IL-15), based on the Akaike information criterion.
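A minimal Python sketch of that positivity rule (function and variable names are ours, not the authors'):

from typing import Sequence

def prt_positive(spots_per_1e5: Sequence[float], with_il15: bool = False) -> bool:
    """PRT-positive if spot counts exceed the threshold (>40, or >80 with IL-15)
    against at least half of the stimulator donors tested."""
    threshold = 80 if with_il15 else 40
    reactive = sum(1 for s in spots_per_1e5 if s > threshold)
    return reactive >= 0.5 * len(spots_per_1e5)

# hypothetical spot counts against a panel of 8 stimulator donors
panel = [12, 55, 61, 47, 9, 88, 43, 15]
print(prt_positive(panel))                  # True: 5 of 8 donors above 40 spots
print(prt_positive(panel, with_il15=True))  # False: only 1 of 8 above 80 spots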
The threshold for a positive anti-donor ELISPOT was 25 spots/2x10^5 cells, as per published studies. Results: Absolute PRT values were highly correlated with IL-15 PRT (R=0.49; P<0.001), but there was no statistically significant correlation between PRT, or IL-15 PRT, and anti-donor ELISPOT. At baseline, 11 subjects were PRT+ (12%) [6 (7%) were IL-15 PRT+], and 61 subjects were anti-donor ELISPOT+ (66%). PRT-negative and IL-15 PRT-negative subjects showed stable eGFR from 3 to 24 months post-transplant (-0.2 mL/min, P=0.90 for PRT-negative). In contrast, PRT+ subjects had a significant eGFR decline (-12

In renal transplantation, non-invasive biomarkers that are predictive and allow for early intervention are limited. Here we report findings from a prospective longitudinal study of B cell cytokines in an unselected cohort of renal transplant recipients to predict rejection and outcome. 164/374 patients transplanted between 1/13 and 12/14 had serial biopsies (for-cause, and protocol at 3 and 12 months) and blood drawn at 0, 1, 3, 6 and 12 months. 315 biopsies and 534 PBMC samples were examined. 33% of patients had acute rejection (AR; all ACR and 3 ACR&ABMR; 19% subclinical, 14% clinical) within the first year. Of patients with AR, 37% had early AR (≤3 months) that responded to therapy, 20% had early AR with persistence/recurrence at 12 months despite treatment, and 43% had late AR (6-12 months) despite a normal early biopsy. In this regard, the utility of 3-month histology was limited in that it could predict neither the response to treatment nor late AR. Based on prior work, we assessed the ratio of B cell IL-10:TNFα expression (CD40L and CpG stimulation for 24 h) as a biomarker. Patients with AR (clinical or subclinical) had a significantly lower B cell IL-10:TNFα ratio, especially within the T1 transitional subset (T1B). Using ROC analyses, an optimal cut-off value of 1.26 for the T1B IL-10:TNFα ratio at 3 months independently predicted AR at any time, with 82% sensitivity and 86% specificity (OR 0.31 per increase in ratio by 1; p=0.0001). Moreover, the T1B IL-10:TNFα ratio at 3 months strongly predicted rejection at that time (PPV 76%, NPV 85%). Even in patients with a normal 3-month biopsy, a low T1B IL-10:TNFα ratio predicted subsequent rejection with a PPV of 82% and NPV of 85% (AUC 0.86, p<0.0001). Moreover, patients treated for early AR who subsequently had persistent/recurrent AR had a significantly lower T1B cytokine ratio at 3 months (0.58 vs. 1.52, p=0.04). Importantly, this biomarker is not confounded by "inflammation" secondary to infectious causes: patients with BK or CMV viremia had a significantly higher T1B IL-10:TNFα ratio compared to those with AR (low ratio). Finally, patients with a low T1B IL-10:TNFα ratio (<1.26) had significantly worse creatinine at 18 and 24 months post-transplant, and this was associated with interstitial fibrosis with inflammation (IF+i) on the 12-month biopsy (OR 0.31, p=0.003). To conclude, the T1B IL-10:TNFα ratio at 3 months appears to be a strong and specific biomarker for the prediction and course of renal allograft rejection, and possibly long-term outcome. If confirmed, it may help inform individualized therapeutic intervention.

Background: The innate immune response is increasingly recognized in allograft injury after kidney transplantation. We aimed to establish the relationship of innate immune gene expression at the moment of implantation or acute rejection (AR) with graft outcome.
Method: 19 genes, including TLRs, complement components and regulators, and apoptosis-related genes, were analyzed at the mRNA level in biopsies of 123 patients with AR and in paired pre-implantation biopsies (n=75). Expression levels before transplantation were tested in relation to the type of donor (deceased or living, making up 78% and 22% of the group, respectively) and the occurrence of delayed graft function (DGF). The expression levels during AR were investigated for association with steroid responsiveness and graft loss. Results: Before transplantation, expression of C2 and C3 and the Bax/Bcl2 ratio were significantly higher (P<0.01) in deceased donors as compared to living donors. Within deceased donors there were no associations between gene expression and DGF. During AR, all TLRs as well as C2 and C3 showed increased expression compared with pre-implantation conditions, whereas three complement regulators, C4 and Bcl2 showed decreased expression. Expression of none of the genes was associated with steroid response. However, relatively high TLR4 expression and Bax/Bcl2 ratio at the time of AR were related to adverse graft outcome. Patients with a high Bax/Bcl2 ratio in their deceased donor graft (higher than that in the living donor group) had significantly (P<0.05) worse death-censored graft survival (61.3%) at 6 years post-transplant compared to those with a low ratio (89.4%) and those with a living donor graft (95.8%). In Cox regression analysis, TLR4 and the Bax/Bcl2 ratio in the deceased donor group predicted outcome independently of previously identified clinical risk factors for graft loss. Conclusion: The elevated expression of C2 and C3 and the elevated Bax/Bcl2 ratio in deceased donor transplants support the notion that complement and apoptosis pathway activity is already enhanced before kidney transplantation. The relationship of a high Bax/Bcl2 ratio during AR with graft loss may point to an adverse effect of intragraft cell death, and thereby possibly enhanced immunogenic danger signals, on graft outcome.

Purpose: The microbiome is important to human health, and its alteration may lead to aberrant immunity and disease. Interstitial fibrosis and tubular atrophy (IFTA) occurs in 20% of 1-year protocol kidney transplant biopsies, and cellular rejection and injury are associated with developing IFTA. DNA sequencing has revealed a urinary tract microbiome (UMB). Kidney transplantation likely alters the UMB at multiple levels, including through the use of immunosuppressives and antibiotics. We hypothesized that: 1) transplantation substantially alters the UMB and 2) patients with IFTA have UMB alterations that contribute to immune pathogenesis. Methods: Sequencing of bacterial 16S rRNA genes was used to assess UMBs at 1 and 6-8 months post-transplant in 25 patients developing IFTA on 6-month protocol biopsies. UMBs of 20 healthy non-transplant controls (HCs) and 25 transplant patients with normal biopsies and excellent function (TX) were compared. Results: The UMB of HC males had 5 significant genera: Streptococcus (32%), Lactobacillus (23%), Prevotella (10%), Corynebacterium (9%), and Pseudomonas (4%). In contrast, HC females had: Lactobacillus (63%), Corynebacterium (11%), Gardnerella (8%), Prevotella (3%) and Bacillus (2%). These dominant genera were decreased in IFTA and TX patients at 1 month post-transplant (e.g. Streptococcus decreased in IFTA and TX males (32% to 12% and 5%); Lactobacillus decreased in IFTA and TX females (63% to 49% and 45%)). There was a parallel increase in putative pathogenic bacteria in both genders, e.g.
Enterococcus and Ureaplasma. At 6-8 months, these dominant genera stabilized or re-populated in TX subjects, but decreased further in IFTA: Streptococcus increased in TX males (6% to 36%), but decreased in IFTA males (12% to 3%); Lactobacillus stabilized at 46% in TX females, but decreased in IFTA females (49% to 34%). At 6-9 months, the number of genera per sample increased in IFTA vs. non-transplants and TX (39 vs. ~26 in both TX and non-transplants). Conclusions: TX is associated with re-population of the normal urinary microbiome, while IFTA is associated with a loss of dominant resident urinary microbes and a parallel increase in non-resident, potentially pathogenic bacteria. The alteration of the UMB may contribute to the development of IFTA by dysregulated stimulation of the host immune system resulting from replacement of dominant host genera with non-resident bacteria. A critical question now is whether UMB species are also resident in the transplanted kidney.

comorbidities, nor the rate of performing biopsies among recovered kidneys. However, the combination of (a) poorer biopsy findings (an increase from 9.5% to 12.5% having glomerulosclerosis >20%) and (b) a sharp decline (38% to 25%) in pumping of KDPI>85% kidneys by OPOs appears to fully explain the apparent KAS effect (Fig. 1). A higher number of non-local centers with CPRA 99-100% candidates on the match run was independently associated with lower odds of discard (p<0.001), presumably due to higher acceptance rates for these patients. The number of regional-to-local-center "toggles" when allocating KDPI>85% kidneys to a combined local/regional list was not significant (p=0.38). The return to pre-KAS discard rates in months 8-11 was only partially explained by lower KDPIs and improved biopsy findings among recovered kidneys. The initial post-KAS rise in discard rates may be entirely explained by poorer biopsy findings and less pumping. The decline in pumping for KDPI>85% kidneys suggests KAS may have affected behavior. However, match-run characteristics unique to KAS were not associated with increased odds of discard, and discard rates appear to have returned to pre-KAS levels. Future refinements to allocation may help address the long-standing problem of high discard rates.

In spite of improved outcomes associated with the use of extended criteria donor (ECD) kidneys, the number of kidney transplants (KT) using this donor population (DP) has declined over time. The kidney donor profile index (KDPI) is a numerical measure (0 to 100%) that combines ten dimensions of information about a donor, including clinical parameters and demographics, to express the quality of the donor kidneys relative to other donors. KDPI is now used for organ allocation, with a higher KDPI value associated with lower donor quality. A donor KDPI >85% (DK85) is thought to be equivalent to an ECD kidney. Our aim was to analyze trends in usage and outcomes of KT associated with the KDPI. All adult recipients (>18) Although kidney transplant outcomes with DK85 have improved over time, utilization of DK85 shows a downward trend. Further studies will help to assess the impact of improved organ selection (by kidney biopsy findings and machine perfusion) and increased use of lymphocyte depletion agents.

The demand for kidneys for transplant continues to exceed the supply. Efforts to expand the donor pool have led to interest in small pediatric kidney donors. We reviewed kidney recovery, use, and outcomes for kidneys from donors weighing <10 kg.
Using SRTR standard analysis files, we examined all donors weighing <10 kg from January 1, 2010, to June 30, 2015; a donor was defined as having at least 1 organ procured for the purpose of transplant. These kidneys are reported to UNOS by the OPO as single left/right or en bloc. A total of 848 donors (1.9% of 45,557 deceased donors) with 1696 kidneys were available for analysis. The number of transplants ranged from a low of 89 (2010) to 119 (2012) per year, performed in 29 to 40 centers. The disposition of kidneys varied by donor weight: as donor weight increased from 2 to 10 kg, more kidneys were offered for transplant and fewer were not recovered or discarded (Fig. 1). Of the 848 donors whose kidneys were offered for transplant, 683 (81%) were offered en bloc and 165 (19%) as single left/right kidneys. No kidneys from donors weighing less than 5 kg were transplanted as single kidneys, despite up to 35% being offered as single kidneys. Graft survival trended lower for donors weighing 2-5 kg (p=0.0687, Fig. 2), with most of the graft losses occurring in the first 6 months. Recipient survival did not differ (p=0.67). Among small pediatric kidney donors, organ recovery, use, and graft survival vary by donor weight. Potential expansion of the organ pool should focus on increasing recovery, use and graft survival, particularly with donors <5 kg. Further study is needed to establish the potential yield of organs from this population to optimize their distribution.

The introduction of interferon-free direct-acting antivirals (DAAs) in December 2013 dramatically improved outcomes for individuals with hepatitis C. The impact of these therapies on utilization and outcomes associated with HCV+ deceased-donor (DD) kidneys for HCV+ recipients has not been previously characterized. METHODS: Using 2005-2015 SRTR data, we studied 6,904 HCV+ adult KT recipients, using modified Poisson regression to estimate the relative rate (RR) of receiving an HCV+ DDKT, and Cox regression to estimate post-transplant patient and graft survival associated with receiving an HCV+ DDKT, adjusting for recipient and donor characteristics. RESULTS: The proportion of HCV+ recipients who received an HCV+ DDKT increased from 31% in early 2013 to 42% in 2015. HCV+ recipients were 30% more likely to receive an HCV+ kidney after December 2013 [aRR 1.18

Purpose: Direct-acting antivirals (DAAs) have revolutionized the treatment of hepatitis C (HCV) in both general and liver transplant populations. However, data in kidney transplant (KTx) recipients are more limited, especially in those who received an HCV+ donor kidney. Methods: We performed a retrospective review of 43 HCV+ KTx patients treated with DAAs at our institution since January 2014. Results: Patients were predominantly black (58%) and male (79%). Over half (51%) had failed prior interferon-based therapy and 25% had a prior liver transplant; 41% received an HCV+ donor kidney. Thymoglobulin induction was given in 47% of patients. DAA therapy consisted of ledipasvir-sofosbuvir (n=26) in most patients, but sofosbuvir-simeprevir was also common (n=12); the median time from transplant to treatment was 1095 days (356-2265) and the median time to viral clearance was 43 days (28-90). All patients achieved a sustained viral response at 12 weeks (SVR12). The median time to transplant for recipients of HCV-negative donor kidneys was 735 days (269-1633), while for HCV+ donor kidneys it was 377 days (146-783) (p=0.14).
Use of an HCV+ donor did not affect SVR rates or time to viral clearance (p=0.08) compared to recipients of HCV-uninfected donor kidneys. No significant adverse events were observed, but tacrolimus dose changes were required in 34% of patients while on therapy, and 33% required tacrolimus dose increases after therapy completion. Conclusions: DAAs are safe and highly effective in curing HCV infection in HCV+ recipients of kidneys from HCV+ donors. This finding strongly supports continued use of this practice, which may shorten waiting times and dialysis duration for HCV+ candidates. Tacrolimus levels should be monitored closely during and after completion of therapy.

Post-transplant (tx) IgG donor-specific antibodies (DSA), but not IgM DSA, have been identified as one of the major causes of late graft loss. Considering the impact that post-tx IgG DSA have on late graft loss, it may be important to understand why and how many pre-tx IgM convert to IgG DSA post-tx. Our aim was to learn more about IgM-to-IgG conversion post-tx and to determine whether immunosuppression affected this conversion. Methods: We retrospectively analyzed 218 consecutive kidney tx patients at a single center, transplanted between 1/06 and 12/10, with a minimum 3-year follow-up. We excluded 20 patients from the analysis due to HLA-matched tx, insufficient patient information, or lack of patient sera. Sera sampling occurred pre-tx; at 1, 3, 6, 9, and 12 months post-tx; and annually thereafter. Antibody identification was determined by LABScreen single antigen beads. Results: 14% (28/198) of patients had pre-tx IgG DSA and 26% (51/198) of patients had pre-tx IgM DSA but were negative for IgG DSA. Of the 51 patients, 22 (43%) converted pre-tx IgM to IgG DSA of the same specificity post-tx, with >50% (12/22) converting within the 1st year. Interestingly, significantly more Class II antibodies (44%), all DQ-specific, converted as compared to Class I antibodies

Controversy exists regarding the optimal choice of induction therapy in low-risk kidney transplant (KT) recipients. Current guidelines specify induction choices in KT based on recipient factors and would recommend IL2 receptor antagonists over ATG in low-risk patients. However, the impact of donor factors on this decision-making remains unclear. We aimed to compare outcomes of ATG versus IL2 receptor antagonists in low-risk KT, stratified by Kidney Donor Profile Index (KDPI). METHODS: Using OPTN data, we identified 14,172 low-risk (first-time, KT alone, non-African American, PRA<20%, HIV-negative, HCV-negative, CIT<24h) deceased-donor KT recipients between 2003-2014 who received ATG or IL2 induction in addition to tacrolimus (Tac), mycophenolate (MPA), and steroid immunosuppression. We built Cox models for all-cause graft loss (ACGL), death-censored graft loss (DCGL), and mortality, as well as logistic regression models for acute rejection at 1 year (AR). Patients were stratified into 3 KDPI categories (low: 0-34; moderate: 35-85; high: 86+).

Introduction: Alemtuzumab has been increasingly used as an induction agent in renal transplantation, although the optimal dose is unknown. We have previously reported that an alemtuzumab dose adjusted for weight (AD) resulted in a reduction of microbiologically proven infection episodes compared with a historic control group who received a standard dose of 30 mg (SD), without an apparent increase in rejection rates.
Methods: In this study we examined the rate of leucocyte repletion in 366 patients receiving AD (0.4 mg/kg) and 590 patients receiving SD alemtuzumab induction. All received tacrolimus monotherapy.

Study objective. A health economic analysis was undertaken to quantify the economic consequences of acute rejection and serious adverse events in patients receiving induction with rATG (Thymoglobulin®) vs the monoclonal antibody basiliximab (Simulect®) during the first year after kidney transplantation (tx). Methods. Health economic data to year 1 post-tx were derived from current costs at 3 German centers and applied to the database from a randomized trial of 278 patients (Brennan DC et al. N Engl J Med 2006; 355: 1967-77).

Our findings indicate that low-risk patients achieve acceptable outcomes with low-dose rATG. Initial analysis suggests that the use of low-dose rATG provides a similar

Poor adherence to immunosuppressive medications limits long-term renal allograft survival. Using data from the 3-month run-in (no intervention) of the randomized trial TAKE-IT, which tests an adherence intervention vs usual care, we aimed to characterize taking and timing adherence patterns among young kidney transplant recipients. Identification of patient characteristics associated with unfavorable patterns may allow targeted intervention. Adherence was monitored with a multidose electronic pillbox in 120 patients (62% male; 65% white; median baseline age 16.0 y [IQR 13.7-17.5]; median time since transplant 2.9 y [IQR 0.8-7.1]) followed in 8 transplant centers in Canada and the USA. We used latent class growth analysis to identify adherence trajectories shared by groups of patients. Model goodness-of-fit was assessed using the Bayesian information criterion. We identified 4 profiles for each of taking (% of prescribed doses taken) and timing (% of doses taken on time) adherence: stable near-perfect adherence, stable good adherence, high baseline adherence followed by a decline, and low baseline adherence followed by an improvement. For both taking and timing models, discrimination between classes was excellent, with a mean probability of membership in the assigned class of >0.88 for all classes after posterior classification. 94% of patients had the same pattern for taking and timing. In multinomial logistic regression including sex, race, baseline age, and time since transplant at baseline, only time since transplant was associated with class membership for taking adherence (p=0.02). Participants with stable near-perfect adherence had a longer time since transplant than those with other patterns; a similar tendency was observed for timing adherence. This may reflect survival bias, whereby only the most adherent have many years of graft survival. Future analyses will determine the stability of adherence patterns over time and whether the intervention influences the trajectory. We identified 4 distinct adherence trajectories; adherence patterns were significantly associated with time since transplant.

Results: The prevalence of AT1R-Ab was 58%. AT1R-Ab was negative in 40%, de novo in 34%, and preformed in 26% of patients. AT1R-Ab was associated with tubulitis (p=0.022), but not with C4d positivity, peritubular capillaritis, or arteritis (data not shown). AT1R-Ab was not associated with any identifiable risk factors, including the presence of HLA DSA (Figure 1a). AT1R-Ab was associated with allograft loss (p=0.036, Figure 1b), but not with hypertension or a 50% decline in eGFR (data not shown).
Conclusions: AT 1 R-Ab is highly prevalent in pediatric renal transplant recipients and is a risk factor for graft loss independent of HLA DSA. AT 1 R-Ab monitoring should be considered in pediatric renal transplant recipients. Results: We analyzed 75 subjects with a median (IQR) age at biopsy of 12 (6-17) years. The cohort included 50% black and 75% deceased donor recipients. Overall, 37% of subjects had subclinical inflammation (22% borderline changes and 15% subclinical rejection). The composite outcome occurred in 34% of all subjects. Those with subclinical inflammation had a higher incidence of the composite outcome by 5 years, but most occurred within 24 months ( Figure) . By Kaplan-Meier analysis, both early non-adherence and C4d+ (both P<0.05) were associated with higher incidence of the composite outcome. Conclusions: Early subclinical inflammation was frequently detected in pediatric kidney transplant recipients and was associated with a high incidence of rejection and graft failure within 24 months post-transplant. Early non-adherence and C4d deposition may also contribute to early allograft outcomes in children. Focusing on improved early management may reduce graft failure in pediatric recipients. Background: Kidney dysfunction is common after solid organ transplantation; however, rates of post-transplant AKI and CKD in children are unknown. Objective: Determine rates of AKI and CKD among childhood recipients of nonkidney solid organ transplants (heart, lung, liver or multiple organs). Methods: Cohort study of all children (<18 years) with a non-kidney transplant at Hospital for Sick Children, Toronto (2002 -2011 . AKI in the first year defined per international guidelines and CKD defined by creatinine-based estimated glomerular filtration rate (eGFR) <60 mL/min/m 2 for 6 months. End stage renal disease (ESRD) defined as initiation of dialysis or receipt of a kidney transplant. Those who received pre-transplant dialysis or were followed <90 days were excluded. Risk of CKD as a function of AKI and other covariates was evaluated in a Cox model. Results: A total of 304 children were transplanted at median age of 4.0 years (IQR: 0.7-11.9); 55% were male. A total of 88 developed AKI and 24 CKD. Incidence by organ and overall is shown in Table 1 . Three children developed ESRD, all within 65 days post-transplant. By Cox regression, those with 1 AKI event versus no AKI event had significantly greater risk (HR 2.7, 95%CI 1.1-6.3) for CKD after controlling for age, sex, baseline eGFR and calcineurin inhibitor use. Conclusions: AKI and CKD are common in children with non-kidney solid organ transplants. Close monitoring to prevent AKI events may impact long-term risk for kidney disease progression. Live Donor Champion (LDC) Programs promote the advocacy for living donor transplantation by patients' family members or friends. Hereby, we conducted a prospective 9 month pilot trial of this novel educational intervention that facilitates the separation of advocate from patients in need of liver transplantation (LT). Methods: Patients accepted for LT and without contraindications for living donors were invited to attend the LDC training session designed to:1-obtain collaborative approach from patients, caregivers and health care team (HCT), 2-develop an environment to promote donation, 3-test the feasibility of this pilot program. 
The LDC intervention includes: a) an initial face-to-face meeting (focus groups) with a presentation that enhances the clinic experience and knowledge of living donation; b) personalized education provided by the HCT to target all levels of health literacy; c) follow-up phone calls for ongoing education; and d) availability of focus videos and video-conference consults (via Skype/Cisco). A control group of 139 end stage liver disease (ESLD) patients was included. Metrics for evaluation of the intervention were established. Results: From a total of 167 ESLD patients evaluated in our clinic, 72 were eligible for enrollment into the UVA LDC (January to September 2015). A total of 57 (79.2%) of the 72 ESLD patients identified a caregiver/donor champion (LDC) willing to advocate for them. During this period, a total of 72 intakes (67 directed and 5 non-directed donors) were received on behalf of 38 participants (1.76 donors per ESLD patient). Data show that 56.7% of LDC participants identified at least one potential living donor, which represents a more than 4-fold increase (474%) in the number of intakes for liver donors compared with the control group. Furthermore, the number of completed donor evaluations doubled. We also observed an increase in patient satisfaction and overall understanding of the transplant process with these educational efforts. Conclusion: Our pilot study demonstrated that LDC Programs improve patient education and LT candidates' access to living donation. Tactical Use in Small for Size (SFS) Live Donor Liver Transplantation (LDLT). T. Mansour,1 J. Pisa,1 E. Przybyszewski,1 J. Guarrera,1 K. Tomoaki,1 B. Samstein,1 K. Halazun,2 A. Griesemer,1 J. Emond.1 1Surgery, Columbia University Medical Center, New York; 2Surgery, Weill Cornell Medical Center, New York. Objectives: Efforts to reduce the risk of donor hepatectomy have resulted in smaller grafts for transplantation. Grafts below 0.8% of recipient body weight (GW/RW) have been defined as SFS and have traditionally had worse outcomes. Reduction of excess portal flow (PF) may protect against graft injury in SFS LDLT, though there is no consensus on the indications for, and results of, portal flow modulation (PM). Three related issues need to be clarified: the relation of hepatic hemodynamics (HD) to graft size, the efficacy of PM in improving hepatic flows, and the relationship of HD to graft function and outcomes. In this study we sought to define the relationship between HD and graft outcomes, and to determine the effect of PM. Methods: 70 LDLT performed between Sep. 2009 and Jul. 2015 were enrolled in a single-center cohort study of HD and PM. 34 were right lobes and 36 were left lobes, with GW/BW ranging from 0.47% to 4.63%. 30 subjects (43%) had GW/BW <0.8%. PM was performed in 21 (33%). Outcome variables included function (day 7 bilirubin), ascites formation, and graft survival. After reperfusion, hepatic artery flow (HAF) was 162.7±157.6 ml/min, PF was 1179.5±719.3 ml/min and portal pressure (PP) was 14.8±5.2 mmHg. Mean PP was 13±4 mmHg in subjects without PM and 19.3±4.9 mmHg in subjects with PM (P=0.0001). After PM (n=21), there was a 30±14% reduction in PP, a 35±19% decrease in PF and a 38±24% increase in HAF. Graft function was assessed with respect to final HD after all modulations. Functional consequences of HD were observed when GW/BW was below 0.8%. Further analysis was conducted to identify cut-off values for final HD associated with significant changes in postoperative graft function.
In subjects with GW/RW <0.8%, day 7 bilirubin was significantly higher when final PP was >11 mmHg (7.4±6.4 vs 2.3±2.3, p=0.032) and when final HAF was below 0.25 ml/g (p=0.01). Graft survival was not different based on graft size: 83.3% when GW/BW <0.8% and 82.5% when GW/BW >0.8%. Our findings demonstrate the expected results of PM when performed in a single center with a consistent approach. HD at the end of the procedure correlated with postoperative graft function. In this setting, SFS grafts were used with survival comparable to that of larger grafts. A larger cohort using a defined protocol will be needed to validate these observations. Despite good results, only 10% of liver transplants in children in the US are from living donors. In addition to optimal timing and graft quality, LDLT may offer an immunologic advantage since most donors are haplo-identical parents. In this single-center study we sought to quantify the benefit of LDLT with granular outcomes over the long term. Data were collected on 250 children (n=184 DD, n=66 LD) surviving at least 1 year, with up to 18 years of follow-up. LD and DD were compared with respect to demographics, graft type, rejection, immunosuppression (IS), complications, and graft failure; data were analyzed using SPSS. Results: DD recipients were older (5.9±6.5y vs 4.1±5.5y, p=0.039) with less follow-up (6.7±4.3y vs 9.0±5.3y, p=0.001). Compared to DD, LD recipients had lower maximum BANFF scores (3.7±2.3 vs 5.1±2.0, p<0.0001), fewer steroid recycles (1.1±1.1 vs 1.7±1.5, p=0.003), were more likely to be free of acute rejection (27% vs 15%, p=0.03), less likely to have chronic rejection (4.5% vs 13.6%, p=0.046), and more likely to be on monotherapy or off IS (85% vs 55%, p<0.00001). Mean tacrolimus levels in the last year of follow-up were LD: 3.39±1.13 vs DD: 4.52±1.65, p=0.004. Overall, LD patients had a trend toward higher graft survival (p=0.057, Fig 1). To adjust for differences in patient characteristics, we performed a subgroup analysis of biliary atresia patients (n=80 DD, n=43 LD). Here, LD recipients had less chronic rejection (0% vs 12.5%, p=0.016), lower maximum BANFF scores (3.8±2.3 vs 4.9±1.9, p=0.009), and were more likely to be on monotherapy or off IS (88% vs 68%, p=0.011). Graft survival was not significantly higher (p=0.09, Fig 2). However, biliary complications were doubled in LD (56% vs 28%, p=0.002). This study demonstrates the benefit of LDLT on multiple clinical parameters over long-term follow-up. The high rate of biliary complications, especially in LDLT, mandates an aggressive approach to mitigate this treatable cause of chronic graft injury. Further study in larger populations will address the generalizability of these findings. Background and Aims: The Model for End-Stage Liver Disease (MELD) score is universally used to prioritize patients on the liver transplant (LT) waiting list. There has been conflicting evidence on the use of living donor liver transplantation (LDLT) in patients with high MELD scores. We report single-center retrospective data comparing survival between LDLT and deceased donor liver transplantation (DDLT) for patients with MELD score >25. Methods: Between 2001 and 2015, 603 LT were performed at our institution, of which 308 were LDLT and 298 were DDLT. Of those, 178 patients had MELD score >25 (116 DDLT and 62 LDLT). A retrospective analysis of patients with high MELD scores evaluated age, gender, history of diabetes, renal impairment (using estimated GFR), hospital stay, ICU stay, MELD score at the time of LT, and outcome.
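As an illustrative aside before the results (not the authors' analysis code): survival probabilities at 1, 3 and 5 years for the high-MELD LDLT and DDLT groups, as reported below, could be obtained with Kaplan-Meier estimates and a log-rank comparison. The file and column names here are hypothetical stand-ins.

# Illustrative Kaplan-Meier survival for high-MELD LDLT vs DDLT recipients.
# Column names (years_followup, died, graft_type) are hypothetical.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("high_meld_lt_cohort.csv")
ldlt = df[df["graft_type"] == "LDLT"]
ddlt = df[df["graft_type"] == "DDLT"]

for name, sub in [("LDLT", ldlt), ("DDLT", ddlt)]:
    km = KaplanMeierFitter().fit(sub["years_followup"], sub["died"], label=name)
    # survival probability at 1, 3 and 5 years
    print(name, km.survival_function_at_times([1, 3, 5]).round(2).tolist())

res = logrank_test(ldlt["years_followup"], ddlt["years_followup"],
                   ldlt["died"], ddlt["died"])
print("log-rank p =", res.p_value)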
Results: Univariate analysis using non-parametric Mann-Whitney test was performed. The joint effect on the survival rate using the potential risk factors was estimated using Cox regression. The estimated one, three, and five years survival probabilities are given in Table 2 . Background: Current evaluation of donor candidates for adult-to-adult living donor liver transplant (LDLT) includes MRI and CT without and with contrast for defining liver anatomy. Donor and recipient size matching is a critical determinant for donation. The most common reasons for rejecting potential donors are a small left lobe either inadequate in size for donation (GBWR < 0.8) or which would leave a remnant of <30% liver volume for right lobe donation. Our hypothesis was that a single sequence non-contrast-enhanced MRI (SSMRI) performed in less than a minute can be used to rapidly determine liver volumes and identify candidates who should not be evaluated further for LDLT. If successful, use of SSMRI early in donor evaluations will improve safety and efficiency by avoiding unnecessary patient exposure to ionizing radiation and iodinated contrast media and evaluation of futile donor candidates. Methods: Of 31 consecutive adults undergoing evaluation for LDLT, 30 had routine pre-and post-contrast hepatic CT and MRI, including HASTE sequence used for SSMRI volume determination. Liver volumes were determined by a radiologist using commercially available software. Data were analyzed for statistical difference between SSMRI and CT volumes for total, right and left lobes. Percent volume difference (SSMRI -CT/CT) was calculated. Results: There was 100% agreement (no statistically significant difference) between volume determinations by SSMRI and CT imaging. The average percent difference between CT and SSMRI for total liver volume was 0.9 + 5.7%, for the right lobe 2.2 + 7.1% and for the left lobe -1.5+9.9%. Of the 31 evaluated, 3 had a left lobe volume of < 30% by SSMRI (14.7, 20.6, and 21.2%). Conclusions: SS MRI accurately estimates liver volume. Adoption of donor screening whereby those with a small left lobe volume on SSMRI are excluded from further evaluation improves the efficiency, cost and safety of LDLT evaluation. More LDLT candidate evaluations will be encouraged by the possible avoidance of unnecessary radiation and contrast exposure, and improvement in the efficiency of evaluation will enable centers to perform more living donor candidate evaluations. The ISHLT working formulation defines Intravascular Activated Mononuclear Cells (IAMC) as one of the histopathologic features of antibody-mediated rejection (ABMR) in heart transplantation. However, no accurate grading of IAMC correlating with pAMR diagnosis has been proposed. The aim of this study was to develop a score to grade the extent and the pattern of the IAMC in endomyocardial biopsies (EMB) with ABMR. This case control study included heart transplant patients from five French referral centers with biopsy-proven ABMR (pAMR1-3) (n=64) and a matched control group of 44 patients without rejection (pAMR0). IAMC on EMBs was graded blind of pAMR grades by two skilled pathologists according to the percentage of the area with microvascular inflammation in capillaries and to the maximum number of IAMC in the most affected capillary on a 0 to 3 scale and a positivity defined by a grade ≥ 1. 
The score was compared to gene expression profiling of EMBs by microarray, using pathogenesis-based transcripts reflecting endothelial activation (ENDAT), DSA (DSAST), macrophage burden (QCMAT), gamma-interferon response (GRIT), and T-cell (TCB) and NK-cell (NKB) burden (http://atagc.med.ualberta.ca). 100% of control EMBs were graded as IAMC score 0. All pAMR1(I+) EMBs, and none of the pAMR1(H+) and pAMR2-3 EMBs, were graded as IAMC score 0. The highest IAMC score (3) was found mainly in pAMR2-3 (Fisher's exact p<0.001). Increasing IAMC score was associated with an increasing proportion of C4d-positive EMBs (scores ≥1), of EMBs positive for macrophage markers, and of patients who were DSA positive at the time of EMB. It was also associated with an increase in the expression of ENDAT, DSAST, GRIT, NKB, TCB and QCMAT transcripts (all Kruskal-Wallis p<0.001). The IAMC score is correlated with molecular activation in grafted myocardial tissue. The IAMC score could help pathologists in the diagnosis of ABMR, emphasizing the value of microvascular inflammation. The role of circulating DSA, in addition to traditional cardiovascular risk factors, in the development of accelerated CAV has not been demonstrated. We investigated the role of circulating DSA in the development of accelerated CAV in a large prospective observational cohort of heart transplant recipients. This was an observational prospective cohort study including 723 heart transplant patients from 2 centers between 2004 and 2011. Participants were screened for traditional cardiovascular risk factors, circulating anti-HLA antibodies, and their properties (specificity, HLA class, strength). All patients underwent prospective heart allograft biopsies and angiography with assessment of CAV lesions. We assessed the independent determinants of CAV at 3 years after transplantation. Purpose: To determine whether aspirin resistance, as measured by elevated levels of urinary 11-dehydrothromboxane B2, is related to bleeding and clotting events in patients supported with continuous-flow left ventricular assist devices (cf-LVAD). Methods: 66 patients with cf-LVAD (mean age 50±11 years, 68% male, 65% African American) currently being followed at an academic medical center had urinary 11-dehydrothromboxane B2 (11dhTxB2) levels tested at the time of enrollment to ascertain aspirin resistance. These values were dichotomized using 1500 pg/mg as the cutoff, based on criteria used to define aspirin resistance. The chi-square test was used to assess significance with regard to the incidence of clotting and bleeding. A Cox regression model was used to evaluate the association between elevated 11dhTxB2 and events after adjustment for age, gender, race, and supratherapeutic INR (>3.0). Results: 25 patients (37.9%) had at least one bleeding event during the follow-up period. Of the 32 patients (48.5%) with an elevated 11dhTxB2, 19 patients (59.4%) experienced a bleeding event (p<0.001). After adjustment for the aforementioned covariates, elevated 11dhTxB2 was an independent predictor of bleeding (hazard ratio 3.18, 95% confidence interval 1.25 to 11.62, p=0.018) with increased risk noted from the time of implant (Figure 1). No other covariates in the model were significant predictors of bleeding. Conclusion: Elevated 11dhTxB2 is an independent predictor of bleeding events after LVAD implantation. Measurement of 11dhTxB2 may improve risk stratification and anticoagulation management in LVAD patients. Further study is required to better characterize the relationship in this patient population.
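To make the analytic approach in the LVAD abstract above concrete, a minimal sketch of dichotomizing 11dhTxB2 at 1500 pg/mg, testing the crude association with bleeding by chi-square, and then adjusting in a Cox model is shown below. This is illustrative only; the file and column names are hypothetical, not the study's variables.

# Illustrative sketch: dichotomize urinary 11dhTxB2 at 1500 pg/mg, test the
# crude association with bleeding (chi-square), then adjust in a Cox model.
import pandas as pd
from scipy.stats import chi2_contingency
from lifelines import CoxPHFitter

df = pd.read_csv("lvad_cohort.csv")                      # hypothetical file
df["txb2_elevated"] = (df["urinary_11dhtxb2"] >= 1500).astype(int)

table = pd.crosstab(df["txb2_elevated"], df["bleeding_event"])
chi2, p, dof, _ = chi2_contingency(table)
print("chi-square p =", p)

cph = CoxPHFitter()
cph.fit(df[["days_to_bleed_or_censor", "bleeding_event", "txb2_elevated",
            "age", "male", "african_american", "supratherapeutic_inr"]],
        duration_col="days_to_bleed_or_censor", event_col="bleeding_event")
print(cph.hazard_ratios_["txb2_elevated"])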
Current immunosuppressive therapies are limited by non-specificity and toxicities, and new pharmacologic approaches to immunosuppressive therapies are needed. Many T cell subsets have distinct metabolic requirements that offer an opportunity for therapeutic immune modulation. Activated CD4 helper and cytotoxic CD8 T cells require glycolysis for optimal function, whereas Foxp3+ T-regulatory (Treg) cells are thought to be dependent on oxidative phosphorylation. We found that compared to wild-type (WT) Treg cells, Tregs from mice deficient in histone deacetylase-6 (HDAC6) had higher basal (196.3%) and uncoupled (194.5%) oxygen consumption rates, respectively (p<0.05, n=4), despite equal mitochondrial mass. In contrast HDAC6-KO and WT conventional T cells (Tconv) had no difference in oxygen consumption, HDAC6-KO Treg and exhibited stronger suppressive function than WT Tregs in vitro and in vivo (homeostatic proliferation, transplantation, autoimmune colitis models) and expressed more acetylated, and transcriptionally active Foxp3. Since the increase in oxygen consumption was observed only in the Tregs, but not Tconv, we hypothesized that Foxp3 might directly control oxidative phosphorylation in Tregs. We therefore retrovirally transduced murine T cells with Foxp3 or empty vector (EV), and also cultured CD4 + Foxp3 -T cells from Foxp3 YFPcre mice under polarizing conditions with TGFβ to form induced Tregs, using YFP expression to track Foxp3 expression. Our analysis of FACS-sorted YFP + iTreg and YFPnon-iTreg showed that both iTreg and Foxp3-transduced T cells had significantly higher basal oxygen consumption rates than non-iTreg or EV transduced T cell controls (44.1±49% and 76.1±76.5%, respectively, p<0.05, n=4). Using Foxp3 ChIP-seq and microarray studies, we identified multiple binding sites of Foxp3 and Foxp3-dependent transcripts among key genes involved in control of oxidative phosphorylation. In summary, Foxp3 regulates the metabolism of Treg cells, and HDAC6 targeting can further modulate these actions on metabolism, with direct translational benefits for therapeutic immune modulation. Background: IL-33 is a pleotropic cytokine that can protect against cardiac transplant rejection by expanding regulatory T cells (Treg), particularly a Treg subset expressing the IL-33 receptor, Suppression of Tumorigenicity 2 (ST2). However, if IL-33-mediated signals directly impact on the ST2-expressing Treg expansion and alloimmune regulatory functions is poorly understood. The goals of this study was to precisely characterize IL-33-mediated Treg signaling and establish the importance of direct IL-33 stimulation of Treg for control of alloimmune responses. Methods: To determine the importance of ST2 expression on Treg for regulation of alloimmune responses, we utilized a rodent model of graft-vs. host disease (GVHD) where C57BL/6 recipient mice were given lethal total body irradiation followed by a subsequent BALB/c bone marrow graft. Graft recipients also received wild-type (WT) BALB/c CD3 + effector T cells (Teff) alone, or with CD4 + CD25 + Treg from WT (st2 +/+ ) or st2 -/mice (Treg:Teff=1:2). In related experiments, fluorescence activated cell sorting was used to obtain ST2or ST2 + populations of Foxp3 + cells from the splenocytes of Foxp3 Reporter mice (FOXP3-IRES-mRFP mice). The capacity of IL-33 to activate signaling downstream of ST2 in each population was quantitated by phosphoflow cytometry. 
Results: Whereas 90% of recipients receiving st2+/+ CD4+CD25+ Treg were free of GVHD at day 100 post-alloHCT, only 10% of mice that received st2−/− Treg survived long-term (p=0.0017, st2+/+ vs. st2−/−). Associated mechanistic studies revealed that Treg frequency was reduced in recipients receiving st2−/− Treg cells compared to those receiving st2+/+ Treg. Phosphoflow established that IL-33 signaling via ST2 on Treg activates both NF-κB and p38 MAPK. Yet IL-33 stimulation of p38 MAPK, but not NF-κB, was required for ST2+ Treg expansion. Conclusions: These studies make the novel observation that IL-33 activates p38 MAPK to drive selective ST2+ Treg expansion. We also establish that IL-33-mediated ST2 signaling is critical to the capacity of adoptively transferred Treg to prevent the alloimmune responses leading to GVHD. In total, our data reveal the importance of IL-33 stimulation of Treg for effective regulation of alloimmunity. IL-2 is critical for Treg function, and low-dose IL-2 therapy can exert beneficial effects in autoimmunity and GVHD. Calcineurin inhibitors (CNIs), which constitute the mainstay immunosuppression in liver transplantation, prevent rejection by inhibiting effector T cells, but also suppress Treg activity. CNIs also reduce the overall availability of IL-2. Our aims are to investigate the effects of CNIs on the Treg compartment by studying the homeostasis of Treg subsets in human liver transplant recipients, and to elucidate whether administration of low-dose IL-2 promotes the expansion and functional restoration of Tregs in the presence of CNIs, both in humans and in animal models. We immunophenotyped PBMC samples from 40 stable liver transplant recipients on CNI therapy >6 months post-transplant and 30 age-matched healthy controls. We assessed the effects of tacrolimus on Treg survival in vitro in a human whole blood culture system, and in vivo in a skin transplant mouse model (Bm12 into B6) in which allograft survival depends on tacrolimus treatment (5 mg/kg/day). Liver transplant recipients on CNIs displayed increased cell turnover in the suppressive Treg subsets, with higher proliferation and apoptotic death than healthy controls. This was particularly striking in the activated Treg subset (CD4+CD45RA−Foxp3hi). Treg apoptosis correlated with soluble CD25 in serum. Tregs also exhibited reduced CD25 expression and lower IL-2 sensitivity. IL-2 therapy increased the number and function of Tregs in the presence of tacrolimus by upregulating the anti-apoptotic gene Bcl-2. Administration of tacrolimus to mice replicated the same phenotypic changes in Tregs. To confirm the beneficial effects of IL-2 in vivo we employed a CNI-dependent skin transplant model. Immunosuppression withdrawal was completed 4 weeks after transplantation. Allograft survival was significantly prolonged when IL-2 therapy (1 µg rmIL-2 + 9 µg anti-IL-2 mAb) was administered in combination with tacrolimus during the last week of treatment. Our study provides a rationale for administering low-dose IL-2 to liver transplant recipients under CNI treatment to expand the pool of endogenous Tregs as a means to improve immunoregulation and potentially promote tolerance. The transcription factor T-bet endows Treg with properties to suppress Th1 inflammation. However, the mechanisms by which T-bet controls alloimmune responses are not fully understood. We hypothesized that T-bet regulates Treg distribution and suppressive functions, and uncovered a novel molecular mechanism for Treg migration and stability.
Methods: BALB/c (H-2 d ) donor islets were isolated and transplanted to the renal capsule of streptozotocin-induced diabetic C57BL/6(H-2 b ) wild type (WT) recipients. Blood glucose was monitored. CD4 + CD25 + Foxp3-GFPnatural Treg (nTreg) from WT or T-bet KO C57BL/6 were isolated and adoptively transferred to allograft recipients, or used for in vitro assays. Flow cytometry and qRT-PCR were performed to analyze expression of effector, adhesion and migration molecules. Results: T-bet KO nTreg failed to prolong islet allograft survival as well as WT nTreg. T-bet KO nTreg suppressed effector T cell proliferation as well as WT in vitro, but failed to prevent antigen-specific CD4 T cell proliferation in vivo. Furthermore, T-bet KO nTreg did not prevent antigen-specific CD4 T cell infiltration into grafts and draining lymph nodes (dLN). Compared with naïve nTreg, T-bet expression was up-regulated by wild type nTreg recovered from the graft, suggesting that T-bet expressed by nTreg played an important role in suppression in the graft. T-bet KO nTreg remained within the graft compared to WT, and failed to traffic to the dLN from the graft afferent lymphatics. T-bet KO nTreg expressed more CCR4 and CD103 than WT nTreg, and in the graft the cognate ligands CCL22, CCL17 and E-cadherin were highly expressed. Imaging analysis showed that T-bet KO Treg remained associated with interstitial ligands and failed to access afferent lymph vessels. T-bet KO nTreg recovered from the graft expressed much less Foxp3 and the suppressor effector molecules IL-10, CTLA4, CD73 and CD39 than WT nTreg. More T-bet nTreg lost Foxp3 expression and became ex-Treg than WT nTreg. Conclusion: T-bet regulates nTreg distribution through adhesion and migration molecules, resulting in graft retention that ultimately affects Treg stability, which is essential for in vivo suppression to protect islet allografts. These results demonstrate a novel and unique regulation of Treg migration, genetic stability and suppression. These functional and molecular interactions are foci for therapeutic interventions in immunity and tolerance. The modulation of PI-3K/Akt-induced signaling is critical for CD4 + Treg homeostasis and stability. Moreover, activation of PI-3K/Akt/mTOR signals within T cell subsets is associated with enhanced Teff cell activity. DEPTOR is a recently discovered mTOR-binding partner and a negative regulator of the Akt/mTOR signaling pathway. However, its expression and function in T cell activation and alloimmunity is unexplored. By qPCR, Western blot analysis and immunofluorescence, we find high levels of DEPTOR expression in CD4 + T cells, and further we observed that its expression levels were reduced at early times (<3h) following mitogen activation. We harvested CD4 + T cells from a doxycycline (dox)-inducible DEPTOR transgenic mouse (rtTA +/+ DEPTOR +/+ , iDEP) to evaluate the effect of DEPTOR overexpression on Teff/Treg responses. Treatment of iDEP CD4 + T cells with dox (0.3-3 mcg/ml) resulted in a marked induction of DEPTOR, and this effect was associated with reduced expression of pS6K and induced expression of pAkt(S473). We also cultured naive CD4 + CD25 -iDEP cells in standard iTreg-inducing media (anti-CD3/ anti-CD28, TGF-β, IL-2 and retinoic acid) ± dox (3 mcg/ml). We observed that ~80% of the iDEP cells expressed Foxp3 and the addition of dox into cultures (to force DEPTOR expression) had no effect on iTreg differentiation. 
Nevertheless, in preliminary experiments, dox-treated and DEPTOR-expressing iTregs were more potent to inhibit Teff proliferation in suppression assays in vitro. Also, as expected, dox had no effect on suppressive potential of rapamycin-induced iTreg. We assessed iTreg stability over 7 days by monitoring Foxp3 expression upon restimulation with anti-CD3/anti-CD28. While only 9.9±5.8% of iTreg retained expression of Foxp3 after 7 days of restimulation, the addition of dox (3 mcg/ml to induce DEPTOR) resulted in significantly (p<0.001) higher frequencies of Foxp3 + Tregs (46.1±6.0%). In summary, our findings identify DEPTOR as a cell intrinsic regulator of the mTOR pathway in CD4 + T cells and as endogenous immunoregulatory protein that stabilizes Introduction: Lymphotoxin (LT) directs lymphoid organ structure, and LTβ receptor (LTbR) is involved in leukocyte migration into lymph nodes (LN) and thymocyte migration. We previously showed LT was required for murine Treg to prolong islet allograft survival, and for Treg migration from tissues into afferent lymphatics by regulating interactions with lymphatic endothelial cells (LEC). We hypothesized that human Treg would use LT to regulate their interaction and migration through human LEC. Methods: Murine Treg were generated from Foxp3GFP mice, and flow-sorted for CD25 and Foxp3GFP. Naïve human T cells were flow-sorted from peripheral blood, and effectors and induced Treg (iTreg) were stimulated with anti-CD3, IL-2 and antigen presenting cells (APCs). iTreg received TGFb1 and rapamycin. Thymusderived tTreg were isolated from umbilical cord blood and expanded with anti-CD3, IL-2 and APCs. T cells were used in transmigration assays across primary mouse and human skin LEC and the murine LEC line SVEC4-10. For footpad assays tTreg were labeled with CFSE, injected into footpads, and analyzed in draining popliteal LN 12h later. LTbR fusion protein (LTbRIg) was used to block Treg LTaβ. Results: Consistent with our previous results using mouse T cells and LEC, human Treg but not non-Treg CD4 T cell migration across human skin LEC was inhibited by LT blockade. We extended our mechanistic understanding of the action of LT. Murine Treg migration across SVEC4-10 and primary mouse LEC layers was specifically inhibited by blocking LTaβ-LTbR interactions, blocking VCAM-1, or inhibiting the non-canonical NFκB pathway. Migration of other T cell subsets was not inhibited by any of these agents. Inhibitory effects of these treatments were not additive, suggesting they were part of the same pathway. Inhibitory effects were further enhanced by simulating lymphatic fluid flow across the LEC. Finally, LT and VCAM-1 blockade also inhibited Treg but not non-Treg CD4+ T cell migration in vivo from footpad to draining LN. . Conclusions: Both human and murine Treg but not non-Treg rely on cell surface LTaβ to stimulate LTbR on lymphatic endothelium for lymphatic migration. This mechanism involves VCAM-1 and the non-canonical NFκB pathway and operates under both static conditions and fluid flow. As lymphatic flow is modulated during inflammation this may also result in different sensitivity to this pathway depending upon the phase of the inflammatory response. CD4 + CD25 + Foxp3 + regulatory T cells (Tregs) play a central role in the maintenance of immune tolerance and T cell homeostasis. It is known that Tregs promote transplant tolerance and prevent tissue damage through multiple mechanisms including the production of IL-10 and TGF-beta. 
Although initial clinical trials using Tregs were promising, strategies employed to isolate and expand Tregs, however have been a limiting factor. Here, we investigated in the context of transplantation, a novel immunosuppressive approach applying NAD + -and discovered novel immunosuppressive properties that were CD4 + CD25 + Foxp3 + Tregs independent. To assess NAD + immunosuppressive properties we used a fully MHC II mismatch skin transplant mouse model. DBA/2 skin was engrafted; onto C57BL/6 mice treated with either PBS or NAD + (30mg/day i.p.). Allograft survival and immune responses were assessed. In addition, CD4 -/and IL-10 -/deficient mice were used to demonstrate NAD + mechanisms of action. NAD + treatment promoted a dramatic improvement of allograft survival (18 days vs 9 p<0.01) in parallel to a dramatic decline of CD4 + CD25 + Foxp3 + T cells suggesting that allograft survival was CD4 + CD25 + Foxp3 + T cell independent. Furthermore, our data indicated a significant increase of Th1/Th17 CD4 + T cells that are known to be two pro-inflammatory subsets. In vitro experiments indicated that in presence of NAD + , CD4 + CD25 Low Foxp3 + were able to differentiate into Th17 cells with a change in their transcriptional and cytokine signature profile. The results indicated an augmented expression of IL-17A and RORgt and a Th17 cell proliferation in absence of IL-23 that was mediated via P2XR and STAT3. Of particular relevance and unraveling a novel mechanism, NAD + promoted allograft survival by skewing Th1 pro-inflammatory cells towards immunosuppressive CD 4 + IFN-gamma + IL-10 + regulatory type 1 cells. More importantly, CD4 -/and IL-10 -/mice that were treated with NAD + did not exhibit a prolonged allograft survival, suggesting that NAD + immunosuppressive properties were mediated via CD4 + T cells and IL-10 cytokine. Collectively, our study unravels a novel immunosuppressive therapy with intriguing and novel mechanisms with reduced side effects observed when compared to current immunosuppressive therapies that may benefit transplanted patients and beyond. Simple standardized measures of global immunity may help predict the risk of infections post-transplant. The Quantiferon-Monitor (QFM) test is a novel interferon-γ (IFN-γ) release assay that can provide global measure of overall innate and adaptive cell-mediated immune function using antigens for stimulation of whole blood. The objective of this prospective study was to assess whether IFN-γ levels obtained by QFM correlate with infections post-transplant. In this prospective observational study, transplant patients had QFM testing performed at 1, 3, and 6 months post-transplant. Clinical data were collected up to one-year post-transplant. The QFM assay involves stimulation of whole blood with a lyosphere containing anti-CD3 and R848 antigens followed by an ELISA for IFN-γ (IU/mL). Levels of IFN-γ were correlated with subsequent infectious complications. We enrolled 148 patients (liver=50, lung=51, kidney=47). QFM was performed in 109, 67 and 37 patients at month 1, 3 and 6 post-transplant respectively. In paired comparisons, the median IFN-γ level at month 1 (11.7 IU/mL, range 0.0 -818.9 IU/ mL) was significantly lower compared to month 3 (25.3 IU/mL, range 0.0 -1605.6 IU/mL, p<0.001) and month 6 (36.8 IU/mL, range 0.4 -1393.1 IU/mL, p<0.001). IFN-γ levels of lung transplant recipients were significantly lower compared to non-lung transplant recipients (month 1: 5.0 vs. 21.7 IU/mL, p=0.002). 
During the follow-up period (median 334 days), there were 105 episodes of infection in 40 patients (20 UTI, 21 pneumonias, 13 bacteremia, 37 viremia and 14 other). At month 1, IFN-γ levels were 15.7 vs 4.4 IU/mL in those with no subsequent infection vs. infection (p=0.023). Similarly, at month 3, IFN-γ levels were 33.3 vs. 14.5 IU/ mL respectively (p<0.001). At month 6, IFN-γ levels tended to be lower in patients with infections (55.4 vs. 18.6 IU/mL, p=0.21). Among patients that developed CMV viremia specifically, IFN-γ levels were significantly lower compared to non-viremic patients at month 1(2.5 vs 18.8 IU/mL, p=0.001), but not month 3 (7.1 vs 25.4 IU/ mL, p=0.054) or month 6 (15.3 vs 30.4 IU/mL, p=0.12). We evaluated a novel method of monitoring post-transplant global immune function including both innate and adaptive arms. This assay may help determine the degree of immunosuppression and predict infections in transplant recipients. Background: The nature of the immune response to seasonal Influenza A Virus infection has not been well characterized in transplant patients. Antibodies may be directed against conserved or variable regions of the virus. The objective of our study was to perform a detailed characterization of influenza-specific antibody responses using a high-throughput based array platform. Methods: Serum was collected at onset of Influenza A infection (day 0) and convalescence (day 28) from transplant patients with microbiologically confirmed influenza infection. Our antigen library included a diverse collection of 92 influenza antigens, including hemagglutinin (HA), neuraminidase (NA) and conserved influenza antigens of various influenza subtypes, host origins and geographical locations. Antigens were printed onto nitrocellulose-coated slides to construct an influenza protein array. These arrays were probed using patient sera and fluorescently conjugated IgG and IgM secondary antibodies. Antibody responses to each antigen were determined by measurement of median fluorescent intensity (MFI). Results: A total of 100 transplant patients with influenza A infection were assessed (80 SOT; 20 HSCT). Severe disease was defined as pneumonia at presentation (N=24) or need for mechanical ventilation (N=9). Significant heterogeneity in baseline and convalescent antibody diversity was observed between patients. However, SOT patients consistently produced significantly more anti-influenza IgM and IgG antibodies relative to HSCT patients and these responses were primarily directed toward HA and NA antigens. For a subset of antigens, evidence of differential antibody responses was observed in patients who had more severe disease manifestations. For example, in patients with pneumonia, antibodies against a small subset of H3N2 and H1N1 antigens were significantly greater at presentation, while a subset of antibodies, primarily targeting HA, were lower than the group without pneumonia. By day 28, significant increases in antibodies directed against a number of antigens were observed for many patients. However, patients who required mechanical ventilation had higher IgG antibody levels that targeted HA (especially H1N1 HA) than those without mechanical ventilation. Conclusions: Transplant patients form diverse antibody responses to influenza infection including those to conserved flu antigens. Significant heterogeneity and differences in antigen specificity are evident and may have clinical correlates. 
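As a schematic of how the per-antigen array comparisons above might be run (the abstract does not name the exact statistical tests, so the choices here are assumptions): compare MFI between groups antigen by antigen and control the false discovery rate across the 92 antigens. The data layout and file name are hypothetical.

# Illustrative per-antigen comparison of array MFI between SOT and HSCT sera,
# with Benjamini-Hochberg correction. Data layout and test choice are assumptions.
import pandas as pd
from scipy.stats import mannwhitneyu
from statsmodels.stats.multitest import multipletests

mfi = pd.read_csv("influenza_array_mfi.csv")  # rows = patients, columns = antigens + 'group'
antigens = [c for c in mfi.columns if c != "group"]
sot = mfi[mfi["group"] == "SOT"]
hsct = mfi[mfi["group"] == "HSCT"]

pvals = [mannwhitneyu(sot[a], hsct[a], alternative="two-sided").pvalue
         for a in antigens]
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print(pd.Series(p_adj, index=antigens)[reject].sort_values())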
Cytomegalovirus (CMV) and BK polyomavirus (BKPyV) infections are independent risk factors for acute rejection and graft loss. Early conversion from calcineurin inhibitors (CNIs) to everolimus (EVR) may protect the kidney allograft from CNI-related damage and reduce the risk of viral infections. Here we evaluate the incidence of viral infections at 24 months (M) post-transplantation (Tx) in the ELEVATE study. Methods: ELEVATE (NCT01114529) was a 24M, open-label, multicenter study in which de novo kidney Tx recipients (KTxR) were randomized at 10-14 weeks post-Tx to receive either EVR (n=360; trough level [C0] 6-10 ng/mL) or to continue a standard CNI regimen (n=357; C0: TAC 5-10 ng/mL or CsA 100-250 ng/mL). A key safety assessment was the incidence, severity, and quantification of CMV, BKPyV, and Epstein-Barr virus (EBV) infections. At M24, the incidence of overall viral infections was lower in the EVR arm (18.8 vs. 22.6%). CMV (4.3 vs. 7.8%) and BKPyV infections (1.4 vs. 3.3%) were lower in the EVR arm; no EBV infections were recorded in either arm. There were no severe CMV (0% vs. 1.1%) or BKPyV (0% vs. 0.3%) infections in the EVR arm (EVR vs. CNI). No CMV (0% vs. 0.6%) or BKPyV (0% vs. 0.8%) infections led to study drug discontinuation in the EVR arm (EVR vs. CNI). The median CMV viral load was lower in the EVR arm. The median BKPyV viral load was lower in the EVR arm than in the CNI arm in viremic patients and significantly lower in viruric patients (P=0.0016). The median EBV viral load was higher in the EVR arm. At 24M post-Tx, the incidence of overall viral (including CMV and BKPyV) infections was lower in the EVR arm. These findings are consistent with what was previously observed in kidney transplant studies involving EVR. Early conversion from CNI to EVR was associated with lower CMV and BKPyV loads. Recipients Compared to Healthy Persons. M. Kho,1 W. Weimar,1 M. Boer-Verschragen,1 A. van der Eijk,2 N. van Besouw. 1Erasmus Medical Center, Rotterdam, Netherlands; 2Viroscience, Erasmus Medical Center, Rotterdam, Netherlands. Introduction: Herpes zoster occurs more frequently and with more complications in solid organ transplant recipients than in age-matched healthy persons. Vaccination could prevent herpes zoster. However, patients with end stage renal disease (ESRD) are known to respond much more poorly to vaccination against hepatitis B and influenza than healthy individuals. Therefore, we studied the effect of a VZV booster vaccine on the B-cell response in patients with ESRD and healthy controls. Methods: In a prospective study, 26 patients, aged at least 50 years and awaiting renal transplantation, were vaccinated with Zostavax®. Gender- and age-matched living kidney donors were included as controls (n=27). Varicella zoster virus (VZV)-specific IgG titres were measured before and at 1, 3 and 12 months post-vaccination. Background: Primary varicella zoster virus (VZV) infection causes varicella and lifelong latent infection in neural ganglia, from which it may reactivate, leading to herpes zoster (HZ). Immunocompromised transplant recipients are at risk of developing HZ and severe clinical complications. Therefore, we investigated the incidence of HZ after the first organ transplant and analysed the severity of HZ. Methods: The records of 958 transplant recipients after the first kidney (KTx: n=420), first liver (LTx: n=224), heart (HTx: n=195) and lung (LuTx: n=119) transplantation were analysed for VZV DNA by PCR and for clinical signs of HZ.
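The incidence figures in the results below are expressed as cases per 1000 person-years (PY). A small helper for that calculation (with an exact Poisson confidence interval, which the abstract does not report and is included here only for illustration) could look like this; the worked person-year figure is back-calculated from the reported rate rather than taken from the study.

# Incidence rate per 1000 person-years with an exact (Poisson) 95% CI.
from scipy.stats import chi2

def incidence_per_1000py(cases, person_years):
    rate = 1000.0 * cases / person_years
    lo = chi2.ppf(0.025, 2 * cases) / 2 if cases > 0 else 0.0
    hi = chi2.ppf(0.975, 2 * (cases + 1)) / 2
    return rate, 1000.0 * lo / person_years, 1000.0 * hi / person_years

# e.g. 38 post-KTx cases at 14.4/1000 PY implies roughly 2640 person-years of follow-up
print(incidence_per_1000py(38, 2640))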
Results: HZ infection was clinically diagnosed and confirmed by PCR in 107 patients: 38 KTx, 16 LTx, 36 HTx and 17 LuTx recipients. 9/38 patients post-KTx had complicated HZ: 6 had disseminated HZ (≥3 dermatomes), of whom one died of encephalitis/meningitis 2 weeks later, and 3 had cranial nerve involvement. 2/16 patients post-LTx had disseminated HZ. 14/36 patients had complicated HZ post-HTx: 2 had systemic dissemination, 5 had cranial nerve involvement and 7 had post-herpetic neuralgia (PHN). 8/17 patients had complicated HZ post-LuTx: 3 had PHN, and one patient had cranial nerve involvement and died six days later. The overall incidence rate of HZ post-KTx (14.4 cases/1000 PY), LTx (24.5 cases/1000 PY), HTx (30.8 cases/1000 PY) and LuTx (38.2 cases/1000 PY) was significantly higher than in the general population of 50-70 years of age (7-8 cases/1000 PY). Conclusion: HZ is a frequent complication after kidney, liver, heart and lung transplantation. Boosting the VZV immune response by prophylactic VZV vaccination pre-transplantation may limit the incidence and severity of HZ post-transplantation. While 45% of liver candidates are hepatitis C (HCV)-antibody positive (+) and might benefit from an HCV+ donor liver, only a small proportion receive one. The introduction of direct-acting antivirals (DAAs) in 2011 and interferon-free DAAs in 2013 dramatically improved outcomes for individuals with HCV. We explored changes in utilization and outcomes associated with HCV+ donor livers for HCV+ recipients. Methods: Using SRTR data 2005-2015, we studied 23,376 adult HCV+ recipients, using modified Poisson regression to estimate the relative rate (RR) of receiving an HCV+ liver, and Cox regression to estimate post-transplant survival associated with receiving an HCV+ donor liver, adjusting for recipient and donor characteristics. Results: The proportion of HCV+ recipients who received HCV+ livers increased from 6.9% in 2010 to 17.1% in 2015. HCV+ recipients were 29% more likely to receive an HCV+ liver after 2011 (aRR versus 2010: 1.29, 95% CI 1.14-1.46). Receipt of an HCV+ liver was not associated with worse post-transplant survival (p=0.4), and this did not change over time (p=0.8). The utilization of HCV+ livers among HCV+ recipients has increased over time, particularly since the introduction of highly effective HCV therapies, yet post-transplant outcomes have not been negatively impacted. Despite this, HCV+ livers continue to be discarded at almost twice the rate of comparable HCV-negative livers. Increased utilization could expand the liver supply and improve outcomes for HCV+ transplant candidates. With each blood test, all patients were interviewed about exposure to specific HEV risk factors in the preceding 3 months. Results. Eighteen percent of the entire cohort were anti-HEV IgG positive at baseline. Anti-HEV IgG prevalence increased with age (p=0.0006), while other demographic factors (gender, ethnicity, place of birth), medical history (dialysis status, prior organ transplant, diabetes, co-infection with hepatitis B, C or HIV) and recent exposure to HEV risk factors were not significantly associated. In patients tested pre- and post-transplant (n=100), the prevalence of anti-HEV IgG increased from 19% to 25% (p=0.07); there was no significant change in IgM positivity (p=0.62) and no recipients tested positive for HEV RNA after transplant. No risk factors for HEV were reported by post-transplant seroconverters. Among the 68 donor-recipient pairs with available donor serologies, 8 HEV-seronegative recipients were transplanted from HEV-seropositive donors.
None of these 8 recipients seroconverted on follow-up. In summary, among kidney transplant candidates, the prevalence of anti-HEV IgG increases with age and is higher than reported in the general population; in recipients, anti-HEV seropositivity increased within the first 12 post-transplant months, though there was no evidence of chronic or transient viremia. Limited data suggest that the presence of anti-HEV IgG and/or anti-HEV IgM in donors, in the absence of HEV RNA, does not appear to be associated with HEV transmission to kidney recipients. Conclusion. The prevalence of HEV infection in advanced chronic kidney disease patients is higher than reported in the general population but, unlike the European experience, does not appear to cause significant morbidity before or after transplant in the northeast US. Aims: To prospectively investigate the efficacy and safety of a scheduled HBV vaccination protocol for prevention of hepatitis B recurrence in patients transplanted for HBV-related liver diseases. Methods: Patients who underwent liver transplantation for HBV-related liver diseases were recruited more than 1 year after LT. In addition to regular anti-HBV drug prophylaxis, hepatitis B vaccine (40 µg) was administered intramuscularly at months 0, 1, 2, 6 and 12 after enrolment. Liver function, HBsAb titers and other parameters were monitored during the course, and HBIG was administered intramuscularly when HBsAb titers fell below 30 IU/L. After completion of the vaccination procedure, HBIG was withdrawn in all subjects, who were then followed up for 6 months. Patients whose HBsAb titer was higher than 30 IU/L at the end of follow-up were considered effective responders. Results: 27 patients were enrolled; their HBsAb titers before vaccination were 19.87±15.38 IU/L. At the end of follow-up, the HBsAb titer in the 9 responders was 57.14±22.75 IU/L. There have been no reports of HBV reactivation or severe anaphylaxis so far. An LY/EO ratio (lymphocyte count/eosinophil granulocyte count) of more than 20 before vaccination may be considered a predictor of effective response (P=0.036). Furthermore, a longer interval between transplantation and vaccination (>600 days) may be a significant factor (P=0.059). Conclusions: Active immunization is an effective, cost-saving, and safe method for prevention of HBV reactivation in patients transplanted for HBV-related liver diseases. The LY/EO ratio and the interval from transplantation may be key points in selecting likely responders for vaccination. Background: In SOLAR-1, liver transplant recipients with HCV and either fibrosis or cirrhosis were treated at baseline with ledipasvir/sofosbuvir and ribavirin for 12 or 24 weeks and assessed for SVR after another 24 weeks. In the HepQuant substudy, the improvement of graft function was evaluated over time. Liver Transplant Recipients: 21 recipients participated in the substudy, 10 with F0-F3 fibrosis (5 at CU, 5 at BUMC) and 11 with cirrhosis (5 at CU, 6 at BUMC) (3 CTP A, 6 CTP B, 2 CTP C). All except 1 CTP C recipient achieved SVR. HepQuant®-SHUNT Test: Recipients were tested at baseline and at 4, 24, 36, and 48 weeks. Serum samples were taken prior to, and at 5, 20, 45, 60, and 90 minutes after, administration of stable isotope-labeled cholates, to yield the Portal Hepatic Filtration Rate (HFR) from PO d4-cholate, the Systemic HFR from IV 13C-cholate, SHUNT from the ratio of Systemic to Portal HFR, and a disease severity index (DSI) from these 3 test results.
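As a schematic of the quantities just defined (the proprietary HepQuant formulas and the DSI are not reproduced here; only the SHUNT ratio follows directly from the text, and the clearance calculation is an assumption): each HFR behaves like a weight-adjusted clearance estimated from the cholate concentration-time curve, and SHUNT is the Systemic-to-Portal HFR ratio. Doses, concentrations and units below are hypothetical toy values.

# Illustrative only: HFR approximated as a weight-adjusted dose/AUC clearance
# from the sampled time points, and SHUNT as the Systemic:Portal HFR ratio.
import numpy as np

times_min = np.array([5, 20, 45, 60, 90])        # sampling times after dosing

def hfr(dose, conc, weight_kg):
    """Weight-adjusted clearance: dose / AUC(0-90 min) / kg (assumed definition)."""
    auc = np.trapz(conc, times_min)               # trapezoidal AUC
    return dose / auc / weight_kg

# hypothetical concentration-time data for one recipient (arbitrary units)
portal_hfr = hfr(dose=40, conc=np.array([0.8, 1.6, 1.1, 0.9, 0.6]), weight_kg=80)
systemic_hfr = hfr(dose=20, conc=np.array([2.5, 1.8, 1.2, 1.0, 0.7]), weight_kg=80)

shunt_pct = 100.0 * systemic_hfr / portal_hfr     # SHUNT = Systemic HFR / Portal HFR
print(round(shunt_pct, 1))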
Results: At baseline, the HFRs were higher, and SHUNT and DSI were lower in fibrotic recipients than in cirrhotic recipients. The HFRs and DSI reached maximum improvement at 4 weeks in fibrotic recipients and at 24 weeks in cirrhotic recipients. SHUNT did not change significantly. The Systemic HFR in cirrhotic recipients improved to nearly that of the fibrotic recipients at 24 and 48 weeks. Both cirrhotic and fibrotic recipients had an 8% reduction in DSI at 48 weeks, while the functional improvement was mainly in the Systemic HFR in cirrhotic recipients and in Portal HFR in fibrotic recipients. Conclusions: Functional improvement in cirrhotic recipients is slower than in fibrotic recipients. The cirrhotic recipients achieved a Systemic HFR almost equivalent to fibrotic recipients. HepQuant®-SHUNT testing revealed differences between fibrotic and cirrhotic grafts in the time course and the mechanistic features of functional improvement. Background: DAA-treatment of hepatitis C virus (HCV) re-infections in orthotopic liver transplant (OLT) recipients has been demonstrated to be safe and effective in controlled studies, but real-world data are scarce. Usually, monitoring of HCV-infections relies on HCV-PCR-testing. The HCV-core-antigen-assay (HCV-core-Ag) is a cheap and efficient alternative, but its value to predict treatment response in OLT-patients is still undetermined. All OLT-patients treated with DAA-regimens at the University-Hospital-Hamburg-Eppendorf between 02/2014 and 08/2015 were studied (n=39, 42 treatment courses). HCV-core-Ag (HCV Ag, Abbott, Germany, LloD 3 fmol/L) and HCV-RNA (PCR) (COBAS AmpliPrep/Cobas TaqMan HCV quantitative test v.2, Roche, Germany, LloD 15 IU/ml) were determined at each visit. Treatment endpoint was SVR 12, i.e. loss of viremia 12 weeks after EOT. Results: 39/42 treatment courses (93%) led to SVR 12. Three patients experienced treatment failure under therapy with SOF/RBV and were successfully retreated with SOF/DAC/ RBV or SOF/LDV/RBV. All 17 (100%) patients treated with SOF/LDV achieved SVR 12 as compared to 88% (n=22) of patients treated with other DAA-regimes. HCV-core-Ag tested negative after a mean of 2.7 weeks (range 2-12 other DAAs; 2-4 SOF/LDV), while PCR-tests became negative after a mean of 5.14 weeks (range 2-12). SVR 12 was associated with a short time interval between treatment start and HCV-core-Ag-negativity (p=0.008, Mann-Whitney-test) or HCV-PCR-negativity (p=0.02). No severe side effects were observed. DAA-treatment is highly efficient and well tolerated in OLT-recipients. There was no treatment failure in patients receiving SOF/LDV. HCV-core-Ag-loss and PCRnegativity were both predictors of SVR 12. Background/Aims: Controlled trials of direct acting antiviral therapy for Hepatitis C Virus (HCV) show impressive sustained virologic response (SVR) rates, but post-transplant data of sofosbuvir plus ledipasvir (SOF/LED) are lacking. Aim: To study the characteristics and outcomes in a large single-center post-transplant cohort receiving SOF/LED for HCV. Methods: All post-transplant patients with genotype (GT) 1, 4, or 6 HCV who began SOF/LED at our center from 10/2014 to 8/2015 were identified. Charts were reviewed for demographic and clinical variables and treatment response. SVR rates at 12 weeks after therapy (SVR12) were compared between subcohorts. Predictors of SVR12 were evaluated with logistic regression. Results: 77 post-transplant patients began SOF/LED during the study period. 
The cohort was 74% male, 66% white, 29% black, 77% GT 1a, 13% GT 1b, 3% GT 4, 1% GT 6, and 7% cirrhotic. Mean (SD) age was 60.9 (6.7). Calcineurin inhibitor (CNI) therapy was used in 95% of the cohort and rapamycin without CNI in 5%. 5 patients were treated with a creatinine (Cr) greater than 2.0 (GFR range 21-35). Treatment endpoints have been reached by 59 patients, and 56 (95%) achieved SVR12. The SVR12 rate was 95% for non-cirrhotics and 100% for cirrhotics (p=1.000). Of patients with a negative week 8 viral load (VL), 98% achieved SVR12 vs 50.0% of those who had a positive week 8 VL (p=0.072). Median (IQR) Cr was 1.1 (1.0-1.3) in patients who achieved SVR12 vs 2.0 (1.3-3.0) 0 .87], p=0.002). Donor urine C5a levels remained independently associated with DGF after adjusting for known DGF risk factors. As C5a was associated with donor AKI stage, we examined the correlation between donor AKI and recipient DGF. 680 donor kidneys (72%) did not have AKI but 28% of these non-AKI kidneys developed DGF. In the absence of donor AKI, C5a levels remained strongly correlated and independently associated with recipient DGF. 216 donor kidneys (28%) had at least AKIN stage 1 and 45% of these AKI kidneys developed DGF. However, C5a was no longer associated with recipient DGF in the subgroup of kidneys from donors with AKI. Urine C3a in donors was not associated with AKI or DGF. Urine complement C3a and C5a levels were not associated with 6-month eGFR. These data demonstrate that despite the absence of clinical donor AKI, C5a is able to predict recipient DGF. This data implies the potential use of urine C5a as a marker in kidneys without clinical AKI to predict subsequent DGF in the recipients. Targeting C5a signaling using stratification based on this biomarker may have value in preventing or treating post-ischemic acute kidney injury after transplantation. PURPOSE: A new kidney allocation system (KAS) was implemented in December 2014 using the kidney donor profile index (KDPI), which prioritizes historicallydisadvantaged populations. While KAS aims to maximize the equitable sharing of kidney allografts, concerns remain that its implementation through wider geographical sharing may lead to increased cold ischemia time (CIT) and CITassociated adverse effects, namely delayed graft function (DFG), and ultimately, transplant-related costs. METHODOLOGY: Between 2006 and 2014, 1495 kidney transplants (KTx) were performed at our institution. Donor and recipient demographics, intraoperative parameters, short-and long-term outcomes and hospitalization cost data were collected. A linear regression model was constructed adjusting for recipient age, race, gender, pre-KTx hemodialysis (HD) status, KDPI, and donor type (DD, LD) to assess the impact of CIT on DGF, LOS (LOSTx) and cost (cTx) of transplant admission, and overall 1-year post-transplant in-hospital LOS (LOS1Y) and cost (c1Y). RESULTS: A total of 1200 KTx recipients had financial records available for analysis. Median age at transplant was 48.7 years (IQR 32.6, 59.7). The majority of our patients were male (62.5%), Caucasian (80.9%) and on pre-KTx HD (60%); 43% received a DD kidney. In multivariate analysis, CIT was associated with increased DGF (p<0.001), LOSTx (p<0.001), cTx (p<0.001), LOS1 (p<0.01) and c1Y (p=0.013). The incremental effect of CIT on KTx outcomes is listed in Table 1 . CONCLUSION: One of the unintended consequences of KAS is the prolongation of CIT, which has the potential to increase DGF rates. 
In turn, we have seen increased DGF rates lead to increased LOS and hospital costs. There is an increase in the number kidneys from older brain dead donors (DBD). Many are discarded based on conventional features (clinical or histology). Thus we asked whether the molecular AKI scores and histologic scores can demonstrate that some kidneys discarded by conventional features will be similar to kidneys accepted for transplantation. Test set contained 43 pre-implantation core biopsies (Bx) from discarded kidneys of DBD donors, age>50yr. Decision to discard was based on histological or clinical features. The independent reference set contained 47 Bx from accepted DBD kidneys from donors >50yr. The molecular AKI score was based on the gene set of injury-repair response transcripts that were elevated in AKI and was analyzed by microarrays. The following histologic lesions were assessed -% of glomerulosclerosis (GS), fibrous intimal thickening (cv 0-3), fibrosis/atrophy (IFTA 0-3) and arterial hyalinosis (ah 0-3). We compared the test set to the reference set by principal component analysis (PCA) using complete histo-molecular data. The 43 discarded kidneys from the test set were similarly distributed as the 47 accepted kidneys when the molecular AKI score was analyzed. We then analyzed the similarity of discarded kidneys to accepted kidneys using the histology features i.e. % of GS, cv IFTA and ah scores, and donor age for each biopsy (Figure 1 ). The PCA based on these variables showed striking similarity of discarded kidneys: 34 discarded kidneys had similar positive PC1 scores to 19 accepted kidneys with positive PC1 scores. All kidneys with positive PC1 scores had more ah, cv and GS. However, there was no difference in histologic features within the PC1 score positive group between the discarded and accepted kidneys. Thus by histology and molecular features the discarded kidneys in one center are similar to the kidneys accepted in another center. The current criteria for discarding kidneys based on conventional features, particularly histology, needs to be reassessed. Our aim was to study the impact of DGF on the progression of interstitial fibrosis during the first post-transplant year. We included all patients who received a deceased donor kidney transplant at our center between July 2003 and April 2015. We excluded combined organ transplants and recipients who lost the allograft during the first 30 days (n=16). We defined DGF as a less than a 30% drop in creatinine from day 0 to 3 or the need for dialysis within 7 days. Surveillance biopsies (Bx) are done at reperfusion and at 1, 4 and 12 months post-transplant. We used a linear mixed model to analyze differences in the slope for the progression of fibrosis (mean of Banff ci score 0-3 as continuous variable) between 0 and 12 months and a logistic regression analysis to adjust for variables associated with graft fibrosis at 12 months. Continuous data is shown as mean±1 standard deviation. Results 1054 patients were included in the study: 604 (57%) were in the DGF group and 450 in the control group. Recipient age (55.7±12.8) was not different between groups. The DGF group was more likely to be male, diabetic and on dialysis prior to transplant. Donor age was higher, more likely to be a DCD donor and had a higher KDPI score in the DGF group. 3-year death censored graft survival was 93.3% in the DGF group and 95.3% in the control group (p=0.18). The eGFR (by CKD-EPI) at 12 months was lower in the DGF group (56.8± 20.6 vs 61.2± 21.1 (p=0.004)). 
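As a schematic aside on the linear mixed model described in the methods of the DGF and fibrosis study above (illustrative only; the file layout and column names are hypothetical stand-ins, not the study data): a random-intercept model with a DGF-by-time interaction captures whether the Banff ci slope differs between the DGF and control groups.

# Illustrative mixed model for Banff ci progression over the first year:
# the DGF-by-time interaction term is the between-group difference in slope.
import pandas as pd
import statsmodels.formula.api as smf

bx = pd.read_csv("surveillance_biopsies.csv")  # hypothetical long-format file
# expected columns: patient_id, months_post_tx, ci_score (0-3), dgf (0/1)

model = smf.mixedlm("ci_score ~ months_post_tx * dgf", data=bx,
                    groups=bx["patient_id"]).fit()
print(model.summary())  # inspect the 'months_post_tx:dgf' coefficient and p-value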
The cumulative rejection rate at 12 months was 16.6% in the DGF group and 17.6% in the control group (p=0.62). The RR of Banff ci>1 for the DGF group on the 12-month Bx (adjusted for time 0 ci and the donor KDPI) was 1.23 (95% CI 0.79-1.93, p=0.35). The mean Banff ci score (0-3 as a continuous variable) for biopsies done at reperfusion and at 1, 3 and 12 months post-transplant was plotted. There was no significant difference in the slopes between the DGF and control group (p=0.549) (figure).

Transplanting kidneys from donors with AKI is one of the methods to minimize the disparity between the number of listed patients and the donor pool. Our aim is to determine the influence of the AKIN classification of donors on outcomes following DDKTx. We included all DDKTx transplanted at our center between January 2008 and February 2015. Using data from DonorNet, we classified all donors using the AKIN staging (0-3). The primary outcome was graft survival at 1, 3 and 5 years post-transplant. The secondary outcome was delayed graft function (DGF), defined as a requirement for dialysis or failure of creatinine to drop more than 30% within 3 days of transplant. Survival analysis between groups was done using the Kaplan-Meier method. Continuous data are shown as mean±SD. The AKIN stage could be determined for 804 of 815 patients who received DDKTx during the study period. The risk of DGF progressively increased with increasing AKIN stage. Graft survival at 1, 3 and 5 years stratified by the AKIN stage was not different between the groups (p=0.19).

Background. It has been reported that de novo donor-specific HLA antibody (dnDSA) production is associated with chronic antibody-mediated rejection (CAMR), which leads to poor outcomes. However, the question of what specific amounts of dnDSA are responsible for CAMR remains unresolved. Early diagnosis, before a rise in serum creatinine and/or proteinuria is observed, is undoubtedly essential for effective treatment of CAMR. The aim of our study was to identify factors related to the development of subclinical CAMR and to evaluate the efficacy of its treatment. We followed up on 899 renal transplants without preformed DSA. These patients were annually screened for HLA antibodies by LABScreen Mixed and DSA was identified by LABScreen single antigen beads. Among 95 DSA-positive (MFI>1000) patients, 43 without renal dysfunction who underwent renal biopsies were enrolled in this study. 18 patients (41.9%) were diagnosed with biopsy-proven subclinical CAMR. After biopsies were performed, the patients who were diagnosed with subclinical CAMR underwent plasmapheresis (PP) and rituximab-based therapy, while 25 had no significant findings of CAMR. The patients who were resistant to rituximab-based treatments underwent PP and bortezomib-based therapy. Results. Significant subclinical CAMR-related factors were younger recipient age, a past history of acute T cell-mediated rejection and DSA-Class II, especially DR-associated DSA. MFI values of DR-DSA were significantly higher, whereas DQ-DSA was not different between subclinical CAMR and no CAMR. Delta MFI (>50%), DSA-MFI values >3000 and positive C1q-binding DSA were also significant subclinical CAMR-related factors (P<0.05). Among 18 patients treated for subclinical CAMR, 8 patients (44.4%) obtained over 50% reduction of DSA-MFI and/or improvement or no deterioration of pathological findings. In contrast, 25 patients without subclinical CAMR did not show renal dysfunction clinically.
Moreover, all 12 patients who underwent re-biopsy after two years continued to demonstrate no ABMR. Conclusions. About 40% of patients with de novo DSA demonstrated biopsy-proven subclinical CAMR, leading to progressive graft injuries. Patients with these related factors are encouraged to undergo a graft biopsy for proper evaluation when dnDSA is detected. Further study on effective treatment for subclinical CAMR is necessary.

Objective: Renal transplantation (RT) is the optimal choice for end-stage renal disease. Despite all advances in the development of effective immunosuppressive regimens, one of the major recent problems in renal transplantation is the appearance of de novo DSA (dnDSA) [donor-specific anti-human leukocyte antigen (HLA) antibodies], one of the leading causes of allograft loss. The vitamin D receptor (VDR) is a member of the nuclear receptor family and is widely distributed in many cells, including immune cells. It acts as a transcription factor differently in different cells. Polymorphisms in the VDR gene may be associated with allograft outcome, an association that has not been studied so far. Therefore, we investigated the association between VDR gene polymorphisms and the incidence of dnDSA after RT. Method: 58 patients who underwent primary renal transplantation were enrolled in our study. VDR polymorphisms (FokI, BsmI, ApaI and TaqI) were determined by the PCR-RFLP (restriction fragment length polymorphism) method following extraction of genomic DNA from peripheral blood mononuclear cells of the recipients. These polymorphisms were analyzed with respect to dnDSA and other clinical outcomes. dnDSA was defined as HLA-A, B, Cw, DR or DQ antibodies directed against the donor HLA that were not present pre-transplant. MFI values over 1000 were determined to be positive. We found that the VDR SNP FokI [C/T] was significantly associated with the incidence of dnDSA (P=0.030): 10 of 47 C carriers (21.28%) developed dnDSA, while none of the 11 TT homozygotes (0%) did. When we stratified the genotypes into three groups, CT heterozygotes (4 of 23, 17.39%) and CC homozygotes (6 of 24, 25%) had a higher incidence of dnDSA than TT homozygotes (0 of 11, 0%) (P=0.071). There was no association of the other three VDR SNPs (BsmI, ApaI and TaqI) with the incidence of dnDSA or any other post-transplant clinical outcome. The FokI [C/T] SNP of the VDR gene is significantly associated with the incidence of dnDSA after RT. Therefore, patients who are FokI C carriers may be at higher risk of developing dnDSA after RT than TT homozygotes. This SNP could be used as a predictive marker to identify the high-risk group in advance and treat them accordingly.

Antibody-mediated rejection (ABMR) presents with either pre-existing or de novo DSA. The aim of the study was to compare the two ABMR phenotypes and their outcome. From a cohort of 965 kidney biopsies from two North American and five European centers, we selected all patients with ABMR. In 205 patients with ABMR, 101 (49%) had pre-existing DSA and 104 (51%) de novo DSA. The median time from transplantation to biopsy-proven ABMR (TxBx) was shorter with pre-existing DSA (2.8 months) compared to de novo DSA (3.9 years). There was no difference in GFR: 39.3 mL/min/1.73m2 for pre-existing DSA vs 41.3 for de novo DSA (p=0.487), but the de novo DSA group had more proteinuria (1.5 vs. 0.5 g/g creatinine) (p<0.001). Kidney biopsies with pre-existing DSA presented with more glomerulitis (mean g score 1.73 vs. 1.05), less transplant glomerulopathy (0.48 vs.
1.27) but similar peritubular capillaritis and C4d deposition. ABMR with pre-existing DSA more often had Class I DSA (40% vs. 25%, p=0.018) but lower mean MFI (5096 vs. 8587, p=0.002). Using the gene expression assessment, ABMR with pre-existing DSA exhibited more injury-repair response-associated transcripts (IRRATS) (p=0.008) but lower gamma interferon-inducible transcripts (GRIT1), NK cell transcript burden (NKB) and T cell transcript burden (TCB) (p=0.018, p=0.010 and p<0.001, respectively). The two ABMR phenotypes exhibited similar endothelial cell-associated transcript (ENDAT) expression and ABMR scores. Kidney allograft survival at 4 and 8 years after rejection was superior in the pre-existing DSA phenotype (78% and 63%) compared to the de novo DSA phenotype (54% and 35%) (p<0.001). Using a random forest, the most important variables associated with graft survival were GFR, proteinuria, TxBx, and the ABMR phenotype. Thus, early ABMR with pre-existing DSA is associated with better allograft survival compared to late ABMR with de novo DSA. Among the potential explanations for superior results with early ABMR with pre-existing DSA are the aggressive early detection and treatment protocols used for this phenotype.

The degree of HLA compatibility between donor and recipient is an important risk factor for developing de novo DSA, but it is poorly estimated by the number of HLA antigen incompatibilities. The HLAMatchmaker software compares the HLA molecules of the recipient and the donor as a succession of eplets corresponding to amino acid residues accessible to antibodies. The epitope load is the number of incompatible eplets, i.e. those expressed by the donor but not by the recipient. Methods: 86 patients were switched at three months post-renal transplantation from cyclosporine to everolimus in the randomized CERTITEM study. The most likely allelic typing was estimated from the generic typing using the HAPLOSTAT database. The epitope load was calculated by the HLAMatchmaker software. During the 2-year follow-up, 6 patients (7%) developed class I and 26 patients (30%) developed class II DSA (20 DQ, 2 DR and 4 DQ+DR). The corresponding epitope load was significantly higher in patients who developed class I DSA (22.7±7.1 vs 14.5±6.1 class I eplets, p=0.008), class II DSA (36.0±12.7 vs 20.0±16.9 class II eplets, p<0.0001) and DQ DSA (14.3±4.8 vs 7.4±6.6 DQ eplets, p<0.0001). In patients with more than 13 incompatible DQ eplets, the frequency of de novo DSA was 50% versus 2.5% (RR=20, p<0.0001). In the subgroup of DQ7-negative patients who received a DQ7-positive graft (n=22), the number of incompatible DQ7 eplets was significantly higher in patients who developed DQ7 DSA: 12.6±3.5 vs 6.7±3.3, p=0.002. In those patients with a DQ7 DSA, some incompatible eplets were expressed by donor DQ7 with a significantly higher frequency (66ER, 67VT, 70RT, 74EL, 77T, 37YA, 52PL3; p<0.05) than in patients who did not develop DQ7 DSA. Conclusion: A high epitope load appears to be a risk factor for developing DSA after conversion to everolimus. Regarding DQ DSA, which are the most frequent, a cut-off of 13 incompatible eplets clearly discriminates patients who will develop DSA from those who will not. Moreover, some eplets expressed by mismatched antigens appear to be more immunogenic than others. This epitope load should be assessed before considering minimization of immunosuppression.

Background: Recently, the importance of complement-binding de novo DSA (dnDSA) was documented. We aimed to explore the presence of IgM dnDSA according to C1q dnDSA-binding capacity.
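Returning briefly to the HLAMatchmaker analysis above: the epitope load (the number of eplets expressed by the donor but absent from the recipient) can be expressed as a simple set difference. The sketch below is illustrative only; the eplet names and the 13-eplet DQ cut-off are taken from the abstract, the donor/recipient eplet sets are hypothetical, and this is not the HLAMatchmaker software itself.

```python
# Minimal sketch of the epitope-load idea: count donor eplets not carried by the recipient.
def epitope_load(donor_eplets, recipient_eplets):
    """Number of incompatible eplets: expressed by the donor, absent from the recipient."""
    return len(set(donor_eplets) - set(recipient_eplets))

# Hypothetical eplet sets; the eplet names are examples listed in the abstract.
donor_dq = {"66ER", "67VT", "70RT", "74EL", "77T", "37YA", "52PL3"}
recipient_dq = {"67VT", "77T"}

load = epitope_load(donor_dq, recipient_dq)
# The abstract reports a cut-off of 13 incompatible DQ eplets for discriminating
# patients at high risk of developing de novo DQ DSA.
print(load, "high risk" if load > 13 else "below the 13-eplet cut-off")
```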
Material and Methods: Since 2007, we screened all patients in our outpatient department at annual intervals for the presence of DSA. We selected 59 renal transplant recipients with dnDSA, adequate serum samples available and complete follow-up after first detection of dnDSA. All samples were analyzed for the presence of C1q-binding and IgM dnDSA. Results: Of the 59 patients, 37 developed ABMR during a mean follow-up of 37±28 months and 22 patients had no clinical or biological signs of graft dysfunction during follow-up (34±17 months). The groups were comparable for baseline characteristics. At the time of first IgG dnDSA detection, 35 patients were C1q-positive (26/37 in the ABMR and 9/22 in the ABMR-free group, p=0.026) and 24 were C1q-negative. Forty-six patients (28 C1q-positive and 18 C1q-negative) were also tested for the presence of IgM dnDSA. 14/28 (50%) C1q-positive and only 4/18 (22%) C1q-negative samples were also positive for IgM dnDSA (p=0.020). 11/14 (79%) C1q-positive IgM dnDSA-positive patients developed ABMR, compared to 1/4 (25%) C1q-negative IgM dnDSA-positive patients (p=0.045). 8/14 (57%) C1q-positive IgM dnDSA-negative and 5/14 (36%) C1q-negative IgM dnDSA-negative patients developed ABMR (p=0.26). ABMR-free survival could be stratified according to C1q and IgM dnDSA status, and overall comparison of the Kaplan-Meier curves showed statistically significant differences (p<0.001). We demonstrate that C1q positivity is associated with a higher frequency of IgM dnDSA in the serum (50%). Moreover, the presence of IgM dnDSA in C1q dnDSA-positive patients is associated with ABMR in more than 75% of cases. Finally, we show that C1q and IgM dnDSA positivity at the time of first IgG dnDSA detection was associated with the highest risk of ABMR occurrence.

Purpose: The negative impact of non-adherence to the immunosuppressant medication (IM) regimen on transplant outcomes is well recognized. In an attempt to improve outcomes, the SIMpill® System is currently being utilized in kidney transplant (KT) recipients in a 3-year study in which allograft rejection is being evaluated. Methods: KT recipients (n=46) were randomized to 4 groups, 2 Intervention Groups (I1 & I2) or 2 Control Groups (C1 & C2). Subjects enrolled in I1, I2 and C1 received the SIMpill® Electronic Medication Adherence Monitoring System. C2 did not receive a device. This system is a medication-dispensing device that communicates and stores the timing of openings (doses taken) on a secure server. Intervention Group subjects received notification via text message and/or email if a scheduled IM dose was missed. In addition, in I2, if a scheduled dose was missed despite the reminder message, a study provider was notified, enabling intervention. In C1, subjects received no feedback, but their adherence data were stored on the server for analysis. Results: At one year, 46 KT recipients were enrolled with an average follow-up of 334 days (median=413.5 [range 118-559]). Seven of 9 rejections were ACR and 2 of the 9 were ABMR. Five of 7 ACR biopsies were categorized as Banff grade ≤2 while the other 2 were classified as borderline. Five of 9 rejection episodes resulted in hospital admission, totaling 34 days. One of the 9 subjects with ACR experienced graft failure and died on post-transplant day 140.

The purpose of our study was to evaluate the impact of the new kidney allocation system (KAS 2014) on the access to transplantation of HLA-sensitized patients listed at our center.
The number of deceased donor crossmatches (XMs) and the number of transplants performed from January 1st to June 30th, 2015 (post-KAS) were compared to those performed from January 1st to June 30th, 2014 (pre-KAS). The criteria for reporting unacceptable HLA antigens to UNet were the same pre- and post-KAS. All patients were transplanted based on a negative flow and/or negative CDC crossmatch. A total of 575 and 348 deceased donor XMs were performed during the post-KAS and pre-KAS intervals, respectively. The percentage of XMs performed on sensitized patients (cPRA>20%) increased from 12% pre-KAS to 26% post-KAS (p<0.0001) (Table 1). This increase was balanced by a lower rate of XMs performed for patients with cPRA lower than 20% (74% post-KAS vs 88% pre-KAS). The distribution of cPRA among patients who received deceased donor kidney transplantation pre- and post-KAS showed a similar trend. Of 56 patients transplanted post-KAS, 14 patients (25%) had a cPRA>20%, while only 5 (9%) of the patients transplanted pre-KAS (N=56) were sensitized (Table 1). Table 2: median follow-up 5 months; AMR 14%; mean serum creatinine 1.6 mg/dl; graft survival 93%; patient survival 93%. In conclusion, the number of sensitized patients who received donor offers and of those who were transplanted increased significantly at our center with the implementation of the new KAS. This increase was observed for all subgroups of sensitized patients (cPRA>20%).

Kidney transplantation remains limited by toxicities of calcineurin inhibitors (CNIs) and steroids. Belatacept is a less toxic CNI alternative delivered by monthly intravenous infusion, but its clinical use has been hampered by early, high-grade acute rejections. We have reported that a regimen including a single intraoperative dose of alemtuzumab followed by monthly belatacept and daily sirolimus can prevent rejection of live (LD) or deceased (DD) donor kidney transplants, including in allosensitized individuals and recipients of extended criteria donor (ECD) kidneys. Herein we report the follow-up on the initial 40-patient cohort. Graft and patient survival remains 100%. One patient withdrew from protocol due to pregnancy; 39 patients remain on protocol, all of whom have enjoyed stable and excellent graft function. The trial enrolled two cohorts of 20. Median follow-up in cohort 1 (all LD) is >60 months with a mean creatinine of 1.20 mg/dl. Median follow-up in cohort 2 (LD, DD, ECD) is 24 months with a mean creatinine of 1.4 mg/dl. Three patients in each cohort have been intolerant to sirolimus, requiring conversion to mycophenolate. In cohort 1, 6 of 20 patients were weaned to belatacept monotherapy with follow-up on monotherapy exceeding 3 years, and in cohort 2, 4 patients have been weaned to belatacept monotherapy with a mean follow-up on monotherapy of 6 months. Additional patients are approaching eligibility for sirolimus withdrawal. Mechanistic studies of all patients indicate that post-depletional repopulation under this regimen facilitates significant (p<0.01) enrichment for naïve CD28+ T cells, reduction of CD28− memory T cells, and augmentation of regulatory T and B cell phenotypes; all characteristics that favor belatacept-based immunosuppression. Recipients exhibit donor-specific hyporesponsiveness without impaired CMV or EBV-specific immunity.
Continued follow-up of this pilot experience indicates that belatacept and sirolimus effectively prevent kidney allograft rejection without CNIs or steroids when used following alemtuzumab induction, and provides insight into the etiology of early belatacept-resistant rejection. Prospective controlled trials of this regimen using broad inclusion criteria are warranted.

A total of 5170 first kidney transplant recipients were included in the analysis with a mean follow-up of 6.5 years. Overall, 2128 patients were withdrawn from steroids within the observation period. We found that the optimal time point for steroid withdrawal was one year after engraftment: steroid withdrawal at six months after transplantation was associated with a higher rate of graft loss (HR 1.4; 95% CI 1.1-1.9) compared to steroid maintenance, but thereafter the rate of graft loss was no longer statistically different (figure 1). All-cause mortality was not different between groups throughout follow-up. Our data suggest that steroids should not be withdrawn within the first year after transplantation. Our analysis did not reveal any survival benefit in patients who were withdrawn from steroids at any time after transplantation. Despite using contemporary advanced statistical methods, all limitations of an observational study apply. The plot shows estimated hazard ratios with confidence intervals for functional graft loss, comparing patients who were withdrawn from steroids at various time points after kidney transplantation to those who were maintained on steroids at that time. The hazard ratio for graft loss is higher in those withdrawn from steroids until 2.5 years after transplantation, but the association of steroid withdrawal with graft loss loses significance one year after transplantation.

A2309 was a 2-year, phase III randomized controlled trial evaluating the effect on graft and patient outcomes of three groups: 1.5 mg everolimus plus low-exposure cyclosporine (EVR1.5), 3.0 mg everolimus plus low-exposure cyclosporine (EVR3.0), or 1.44 g mycophenolic acid plus standard cyclosporine (MPA). 7-year data for 95 ANZ A2309 participants were extracted from the ANZDATA registry. Associations between treatment and outcomes, including rejection, eGFR, graft loss, death and cancer incidence, were examined using adjusted generalized linear regression or Cox regression analyses by ITT analysis. Adverse events and discontinuation of study drugs (up to 2 years) were compared between treatment groups using data from Novartis. Randomized to EVR1.5 or EVR3.0. These data show that, so far, the EVL arm has the same efficacy but seems to have a better safety profile in elderly recipients.

To examine the utilization and clinical outcomes of belatacept in the real clinical setting, a retrospective cohort study was conducted using SRTR data that compared 1-year clinical outcomes between belatacept- and tacrolimus-treated adult kidney transplant recipients (KTR) from June 1, 2011 to December 1, 2014. The primary outcome, defined as the incidence of biopsy-proven acute rejection (BPAR), and secondary outcomes, including mean estimated glomerular filtration rate (eGFR) and incidence of new-onset diabetes after transplantation (NODAT), were analyzed in a multivariate Cox model. Subgroup analyses included patients with or without lymphocyte-depleting (LD) induction and those meeting inclusion criteria for the BENEFIT and BENEFIT-EXT trials.
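Several of the registry analyses above (steroid withdrawal at varying time points, belatacept vs. tacrolimus) rely on Cox regression, and when the exposure changes during follow-up it is usually encoded in a long start/stop format. The sketch below is a generic illustration of that approach, assuming the Python lifelines package and an entirely hypothetical toy data set; it is not the authors' actual code and the column names are invented.

```python
# Minimal sketch of a Cox model with a time-varying exposure (e.g., steroid
# withdrawal), assuming the lifelines package; data and column names are hypothetical.
import pandas as pd
from lifelines import CoxTimeVaryingFitter

# Long format: one row per patient-interval; 'on_steroids' switches from 1 to 0
# at the time of withdrawal, and 'graft_loss' marks the terminal event.
df = pd.DataFrame({
    "id":          [1, 1, 2, 3, 3, 4, 5, 5, 6, 7, 7, 8],
    "start":       [0, 12, 0, 0, 6, 0, 0, 12, 0, 0, 9, 0],
    "stop":        [12, 60, 48, 6, 30, 60, 12, 40, 24, 9, 55, 36],
    "on_steroids": [1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1],
    "graft_loss":  [0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0],
})

ctv = CoxTimeVaryingFitter(penalizer=0.1)  # small penalty for stability on toy data
ctv.fit(df, id_col="id", event_col="graft_loss", start_col="start", stop_col="stop")
ctv.print_summary()  # hazard ratio for being on vs. off steroids during each interval
```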
Of 50,244 total patients, 417 received belatacept+tacrolimus, 458 received belatacept alone, and 49,369 received tacrolimus alone at hospital discharge. A significantly increased risk of BPAR associated with belatacept alone was identified in the overall study cohort compared to tacrolimus alone, with the highest rates in those without LD induction.

Memory T cells are critical components of protective immunity against recurrent pathogens, but memory T cells with donor reactivity pose a major barrier to successful transplantation and tolerance induction. We have previously shown that endogenous donor-reactive CD8 memory T cells infiltrate murine cardiac allografts within hours of reperfusion, amplify early post-transplant inflammation by producing IFN-γ and perforin/granzyme B, and directly mediate CTLA-4Ig-resistant rejection of allografts subjected to prolonged cold ischemic storage. Here, we have sought to define the inflammatory signals and mechanisms driving their early increased number and activation within allografts subjected to prolonged ischemia. Using BrdU pulsing of graft recipients, we found that memory CD4 and CD8 T cells infiltrating cardiac allografts subjected to prolonged vs. minimal (8 hours vs. 30 min) cold ischemic storage were stimulated to increased proliferation that peaked 48 hours after graft reperfusion and then began to subside. Recipient treatment with anti-CD40L mAb, unlike CTLA4-Ig, at the time of reperfusion significantly inhibited the early proliferation of graft-infiltrating memory CD4 and CD8 T cells and greatly extended survival of cardiac allografts subjected to prolonged cold ischemic storage. Consistent with this, proliferation of the endogenous memory CD8 T cells within highly ischemic allografts was reduced to the low levels observed in allografts subjected to minimal cold ischemic storage when recipients were depleted of CD4 T cells or when allografts were depleted of dendritic cells or were deficient in expression of class II MHC or CD40. In the absence of recipient memory CD4 T cells, expression of IL-15 and IL-18 mRNA was reduced in highly ischemic allografts to the levels observed in allografts subjected to 30 min cold ischemic storage, but was restored by treating CD4 T cell-depleted recipients with an agonist anti-CD40 mAb. Overall, these results indicate that clonal expansion of endogenous memory CD8 T cells within highly ischemic cardiac allografts requires the activation of graft-infiltrating memory CD4 T cells and graft-resident dendritic cells through CD40-CD40L interactions, similar to the immune response that occurs during de novo priming of naïve donor-reactive T cells within the secondary lymphoid organ draining the graft.

CD8 memory T cells rely on signaling through CD122, the shared IL-2 and IL-15 receptor β-chain. The addition of anti-CD122 to costimulation blockade prolongs graft survival, in part by controlling costimulation-independent memory CD8 T cell expansion and effector function. Twelve Rhesus macaques underwent bilateral nephrectomy and life-sustaining renal transplantation using a kidney from an MHC-mismatched donor. Animals were treated with belatacept monotherapy, anti-CD122 monotherapy, or anti-CD122 + belatacept. For murine studies, congenically labeled TCR transgenic OT1 cells specific for ovalbumin peptide were adoptively transferred to naïve B6 mice, which were then infected with ova-expressing Listeria monocytogenes. Thirty days post-infection, mice were given ova-expressing skin grafts.
For functional studies, mice were sacrificed at day 5 post-skin graft and OT1 cells from spleen and draining lymph node were analyzed via flow cytometry. Rhesus kidney transplant recipients receiving combined CD122 and costimulation blockade enjoyed long-term survival (MST=112 days) compared to animals treated with belatacept alone (MST=29 days). In a murine model of memory CD8 T cell-mediated rejection, combined CD122+costimulation blockade prolonged survival indefinitely (MST > 60 days) compared to costimulation blockade alone (MST = 20 days). Combined CD122 + costimulation blockade efficiently limits expansion and diminishes effector function of alloreactive CD8 memory T cells. The addition of therapies like anti-CD122, which target "costimulation-resistant" T cell subsets such as memory T cells, may allow for reduced rejection rates with belatacept-based therapy and encourage wider use in transplant recipients.

We recently demonstrated that ageing also impedes development of transplantation tolerance. Unlike their young counterparts (8-12 weeks of age), aged male recipients (greater than 12 months of age) transplanted with a full MHC-mismatched heart are resistant to tolerance mediated by anti-CD45RB antibody. Surprisingly, either chemical or surgical castration restored tolerance induction to levels observed using young recipients. Based on the strong impact of endocrine modulation on transplant tolerance, we explored the impact of ageing and castration on the immune system. METHODS: For cytokine staining, cells were stimulated in complete medium either with or without LPS for 5 hours. ELISPOT assays detected IFN-γ production after 4 hrs and 72 hrs of culture, which distinguishes memory from naïve T cells. Graft survival between experimental groups was compared using Kaplan-Meier survival curves and Wilcoxon statistics. Other differences between experimental groups were analyzed using Student's t test. P values less than 0.05 were considered statistically significant. RESULTS: Here we report a significant increase in the percentage of T cells that produce interferon-γ (IFN-γ) in aged male versus young male animals, and that the overall increase in IFN-γ production was due to an expansion of IFN-γ-producing memory T cells in aged animals. We observed no significant difference in the percentage of IFN-γ-producing naïve T cells between young and old recipients. In contrast to IFN-γ production, we did not observe differences in IL-10 expression in young versus old male mice. We hypothesized that endocrine modulation would diminish the elevated levels of IFN-γ production in aged recipients; however, we observed no significant reduction in the percentage of IFN-γ+ T cells upon castration. Furthermore, we neutralized interferon-γ by antibody and did not observe an effect on graft survival. Diminished IL-7 in aged individuals may also contribute to age-dependent resistance. In young mice, we observed that neutralization of IL-7 resulted in rapid rejection of skin grafts, similar to the rejection observed in aged mice (p<0.05). CONCLUSIONS: We conclude that while elevated levels of interferon-γ serve as a marker of tolerance resistance in aged mice, other as-yet-unidentified factors are responsible for its cause. Defining these additional factors, such as IL-7, may be relevant to the design of tolerogenic strategies for aged recipients.

Significance: Autoimmunity is a significant barrier to transplantation.
Based on previous studies, it is apparent that there are separate genetic regions that control susceptibility to autoimmunity vs. resistance to transplantation tolerance, but these have been difficult to isolate in polygenic models. We have recently established that B6.SLE123 mice, in which a lupus phenotype is conferred by 3 genetic regions, resist tolerance induction to islet transplants in the absence of any pre-existing autoimmunity. Using this congenic model, we now further trace which genetic regions contribute to tolerance resistance and their mechanisms. Methods: Single congenic B6.SLE1, B6.SLE2, and B6.SLE3 mice were made diabetic, transplanted with C3H islets, and treated with anti-CD45RB (100 µg/day, days 0, 1, 3, 5, 7). Rejection was determined by two consecutive blood glucose (BG) values > 250 mg/dL. The alloresponse was evaluated via ex vivo and in vivo mixed lymphocyte reactions (MLRs). IL-6 production was assessed via ELISA. IL-6 signaling blockade was achieved by administering 500 µg of anti-IL-6R on days -3, -1, 1, 3 relative to transplant; blockade efficacy was evaluated via phosphoflow cytometry. Results: Each single congenic strain resisted anti-CD45RB-mediated transplant tolerance induction. Rejection kinetics of B6.SLE1 mice most closely resembled those of triple-congenic B6.SLE123 mice, in which no recipients achieved long-term tolerance. Lymphocytes from B6.SLE123 and B6.SLE1 mice demonstrated enhanced alloreactivity during an ex vivo MLR and enhanced IL-6 production. In vivo administration of an anti-IL-6R blocking antibody reduced IL-6-mediated phosphorylation of STAT3 (Y705) and enhanced CD4 Treg expansion. Coadministration of anti-IL-6R with anti-CD45RB restored long-term tolerance induction to islet allografts in 40% of B6.SLE1 transplant recipients. Conclusions: Analysis of minimal genetic regions from an autoimmune background demonstrates strong effects preventing transplantation tolerance. We relate the function of one of these regions in lupus to increased IL-6. IL-6 blockade may enhance organ engraftment in autoimmunity and lupus.

Objectives: Transplant ischemia reperfusion injury (IRI)-mediated primary graft dysfunction (PGD) remains the predominant cause of short- and long-term graft loss in lung transplantation. This study was designed to determine the role of the donor-derived MyD88 signaling pathway in the development of PGD using a mouse model of lung transplantation. Methods: Lungs from wildtype (wt) or MyD88 knockout (MyD88-/-) mice on a BALB/c (H-2d) background were orthotopically transplanted into C57BL/6 (H-2b) mice. The recipients were euthanized at POD 1 and 5. Phenotypic analysis of graft-infiltrating cells was performed using flow cytometry. Additionally, graft histology was examined by H&E staining and graft function was evaluated by measuring arterial oxygenation (PaO2). Results: Compared to normal controls, impaired lung function was observed in the wt allograft on POD 1, as manifested by an approximately 50% decrease in PaO2. In contrast, abrogation of donor MyD88 restored the PaO2 level from 286.8±96.87 mmHg to 614.7±31.2 mmHg. Histology (figure 1) showed that there was marked cellular infiltration in the wt allograft on POD 1, and the infiltration was further increased on POD 5, while the alveolar structure was completely lost. In parallel with the histology, phenotypic analysis of intragraft cellular infiltrates demonstrated that neutrophils were the major infiltrating population in the wt allograft.
Interestingly, MyD88 deficiency led to a more than 50% decrease in neutrophil infiltration on POD 1, which decreased further on POD 5 (p<0.01), and to better preservation of lung integrity. Moreover, the recruitment of CD8+ T cells was markedly reduced compared to the wt allograft on POD 5 (p<0.05). Conclusions: Abrogating the donor-derived MyD88 signaling pathway restores graft function following lung transplantation. Targeting donor-derived MyD88 signaling may be a potential therapy for PGD in clinical lung transplantation.

Hypothesis: We previously presented highly significant differences in the bacterial community structures of human renal allograft recipients over time. Likewise, murine normal, colitic, and pregnant fecal samples influenced cardiac allograft survival and were markedly distinct in bacterial community structure, notably in the abundance of Bifidobacterium. We hypothesized that these differences persist after fecal microbiota transplant (FMT) into mice with cardiac allografts and correlate with allograft outcome. Methods: Mice were fed antibiotics (kanamycin, gentamicin, colistin, metronidazole, vancomycin) ad libitum in drinking water on days -6 to -1 before transplant. On d0, C57BL/6 mice received BALB/c hearts. Fecal samples from healthy C57BL/6 mice, female mice on d11 of pregnancy, colitic T cell receptor transgenic mice that spontaneously develop colitis, or cultured Bifidobacterium pseudolongum (ATCC25526) were transferred on d0 by gavage. Mice received daily immunosuppression with tacrolimus (2 mg/kg/d sc d0-40 or 3 mg/kg/d sc d0-60) starting on day 0. Fecal pellets and intestinal tissue were collected from transplanted mice and analyzed via 16S rRNA gene sequencing and RNA-Seq. Cardiac allografts were assessed for survival, harvested at d40, d60, or rejection, and stained with H&E and Masson's trichrome. Results: 16S rRNA gene analysis of transplant recipient samples revealed highly significant differences in the bacterial community structures of recipients of normal, colitic, and pregnant FMT, and cultured B. pseudolongum, as determined by bacterial composition and relative abundance, principal component analysis and hierarchical clustering. Bacteria from the genus Bifidobacterium were absent in colitic source samples, yet present in normal and pregnant source samples, and remained fairly abundant in pregnant transplanted samples for at least 40 days. In general, the microbiota of recipients of normal, colitic, and pregnant fecal samples converged over time, but remained distinct for at least 40 days. Mice receiving pregnant FMT or B. pseudolongum had the highest graft survival rates, indicating a possible anti-inflammatory effect of Bifidobacterium bacteria. Conclusion: FMT of pro- and anti-inflammatory fecal microbiota and specific components of the microbiota results in durable changes to the host microbiota. These changes correlate with alterations in systemic immunity with consequences for graft survival, rejection, and inflammation.

Background: OPTN policy exists that allows the UNOS Organ Center to facilitate the allocation of a pancreas within 1 hr. of donor surgery, permissible only to those programs that have a written agreement to receive these offers. The OPTN pancreas transplantation committee has studied the utilization of this allocation schema. Methods: OPTN potential transplant recipient (PTR) data were studied from 2008 through 2014 to find allocations where facilitated pancreas allocation (FPA) was used.
The volume of transplants performed using FPA by center and year was studied to determine the rate at which each participating center imported pancreata for transplant. The volume of any type of pancreas transplant, along with the percent imported, was reported for each center and was assessed to determine suitability regarding a more structured facilitated system. are performed with imported organs at a higher rate than simultaneous pancreas-kidney (SPK). Multiple programs that volunteered for FPA have not imported pancreas transplants, and some perform very few pancreas transplants overall. Multiple large-volume pancreas programs do not import large volumes of transplants. On average, OPOs make the first offer of a pancreas 19 hours before cross-clamp. Conclusions: The current system of FPA is ineffective due to the short notification time (1 hr before recovery) and a lack of participation and/or willingness to import pancreata by more than half of the programs that volunteered to be on the list. Developing qualifying criteria for centers to automatically receive FPA offers and allowing for a longer time window prior to recovery may increase the volume of pancreas transplants in the United States.

Background: In 2010 a new pancreas allocation system (PAS) was approved by the OPTN and implemented in October 2014, shortly preceding the new kidney allocation system (KAS). This system combined pancreas and simultaneous pancreas-kidney (SPK) candidates onto one match-run, added qualifying criteria for SPK waiting time, and set the kidney to follow the pancreas nationwide. Post-implementation, the OPTN pancreas committee studied the impact of the new PAS and KAS. Methods: Using OPTN data from 6 months post-KAS implementation (12/4/14-6/4/15) and the same time period from the prior year, the analyses assessed the impact on deceased donor pancreas and kidney utilization by pancreas and kidney donor risk indexes (PDRI and KDRI) and other factors. Odds of recovery and discard for donors were used to test utilization trends, and changes in proportions were studied for trends in volumes. Results: Pancreas utilization did not change. Kidney recovery rates have increased for KDPI<35% and decreased for KDPI>85%. Sharing of kidneys for adult transplants has increased significantly post-implementation (p<0.01), and regional adult SPK transplants have increased (p=0.06). SPK transplants are being performed on recipients further down the match run than pre-implementation (p=0.03). The proportion of kidney transplants that are SPK remained constant post-implementation compared to pre-implementation (~5.5%). Conclusions: Despite an increase in national volumes of deceased donors and transplants within the 6-month period following KAS implementation, pancreas recovery rates remain constant. The increase in regional SPK transplants has not resulted in fewer local transplants, and OPOs place SPKs further down the match-run than before. The PAS has not interfered with kidney allocation, with the same percentage of kidneys used for SPK as in the past.

Abstract# 299 Purpose: Outcomes for SPK txps are superior to those for solitary pancreas txps. The basis for this difference is unknown. We sought to identify the biologic mechanism for the difference. Our hypothesis was that either the uremic conditions in an SPK txp or the immunoprotective influence of the kidney resulted in better graft survival.
Methods: Review of 1,260 pancreas txps (355 SPK, 509 PAK, 355 PTA, and 41 SLK [simult LD kidney and DD pancreas]) performed 1998-2015. Bivariate and multivariate methods were applied to txp outcomes, accounting for a full panel of covariate risk factors. To determine whether the uremic conditions during the txp provided a protective effect, we compared the risk of AR or graft loss between SPK and SLK txps, where both patients are uremic. To assess the immunoprotective impact of the kidney, we examined HLA matching between the prior kidney donor and the current pancreas donor for PAK txps. Results: We found that SLK txps had a higher incidence of AR (p<0.001) and worse graft survival (p=0.01) than SPK. SLK txps had the same risk of AR and graft loss as PAK txps (p=NS). Furthermore, in MV models the risk of AR for either SLK txps or SPK txps was not different whether or not the patient was on HD prior to txp (HR 0.93, p=0.8, Fig. 1). Graft survival for SLK and PAK txps is nearly identical (p=NS), and together they are intermediate between SPK and PTA (Fig. 2). In MV models we found that greater numbers of HLA matches between the previous kidney and current pancreas resulted in improved graft survival (HR 0.825, p=0.029). Txps with 1-4 HLA-B and -DR matches had improved survival compared with those with 0 matches (p=0.05). Conclusion: These data show that the superiority of SPK txps is not due to the uremic conditions during the txp, but may be due to an immunoprotective influence of the matched kidney.

The Survival Benefit of Pancreas and/or Kidney Transplantation for Patients with Type 1 Diabetes. R. Gruessner, V. Whittaker, N. Ozden, A. Gruessner. SUNY Upstate Medical University, Syracuse, NY. Introduction: Deceased donor transplantation (tx) has provided an enormous survival benefit for patients with end-stage organ failure. An analysis of the survival benefit for type 1 diabetic patients through kidney and pancreas tx has not been performed. Purpose: To determine the survival benefit of pancreas and/or kidney txs for type 1 DM, we analyzed the UNOS/OPTN database between October 1987 and June 2015. Methods: In this analysis, we reviewed the records of 70,755 adult type 1 diabetic patients listed for primary pancreas and/or kidney tx: 37,491 recipients who underwent a tx and 33,264 patients who were placed on the waiting list but did not undergo a tx. We adjusted our analyses for multiple listings and incomplete classification. The primary outcome was patient death while on the waiting list or after transplant. Patient survival was computed according to Kaplan-Meier for time-to-event analysis. Results: Waiting list mortality rates at 1 and 5 years for type 1 kidney tx alone (KTA) recipients were 5% and 36%; for simultaneous pancreas/kidney (SPK) recipients, 5% and 49%; for solitary pancreas tx (PTA, PAK) recipients, 3% and 16% (p < 0.05). Patient survival at 10 years was 49% for DD KTA, 63% for LD KTA, 70% for SPK, 68% for PTA and 65% for a PAK after LD KTA. We found that 187,050 life-years were saved to date during the 27 years of pancreas and/or kidney txs in type 1 diabetics. Saved life-years for KTA were 71,735 yrs; for SPK, 97,130 yrs; for PAK and PTA, 18,185 yrs. This corresponds to 4.98 life-years saved for every type 1 diabetic recipient of a pancreas and/or kidney transplant.
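As an illustrative consistency check (not part of the abstract itself), the per-recipient figure corresponds to the total life-years saved divided by the number of transplanted recipients reported above:

```python
# Illustrative arithmetic check using the figures reported in the abstract.
total_life_years_saved = 187_050
transplanted_recipients = 37_491
print(round(total_life_years_saved / transplanted_recipients, 2))  # ~4.99, consistent with the reported 4.98
```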
The average observed number of life-years saved for KTA recipients was 3.04 yrs; for SPK, 6.24 yrs; for PAK and PTA, 4.02 yrs. The difference was highly significant (p < 0.0001). Conclusions: Our analysis demonstrates the following: (1) waiting list mortality is significantly higher for SPK vs. KTA; (2) patient survival is highest after SPK and PTA; (3) only the 3rd-highest patient survival rates were noted for PAK after LD KTA and LD KTA; (4) at 10 years, patient mortality rates after transplantation vs. waitlist mortality for SPK patients were 30% vs 80%, and for KTA patients 47% vs 53% (DD only); (5) significantly more average life-years were saved with SPK and PTA/PAK than with KTA; (6) diabetic patients with kidney dysfunction benefit significantly from a simultaneous or subsequent pancreas transplant; (7) in all, pancreas and kidney transplantation in type 1 diabetics saved a total of 187,050 years.

BACKGROUND: Our group recently demonstrated that 54% of simultaneous pancreas kidney (SPK) recipients experience early hospital readmission (EHR). It is unknown whether EHR following SPK is associated with graft loss and mortality. METHODS: We used USRDS data to study 6,394 adult Medicare primary first-time SPK recipients from December 1999 through October 2011. EHR was any hospitalization within 30 days of initial transplant discharge. Cox proportional hazards models were used to determine the association between EHR, death-censored graft loss (either kidney or pancreas), and mortality, adjusting for age, sex, race, body mass index, time on dialysis, donor cause of death, donor sex, donor race, terminal creatinine, cold ischemia time, HLA mismatch, pancreatic drainage, and delayed graft function. RESULTS: While readmitted, SPK recipients were at an 11.5-fold higher risk of graft loss (95% CI: 6.6-20.2, p<0.001) and an 18.7-fold higher risk of mortality (95% CI: 63.3, 105.62, p<0.001). Following readmission discharge, these risks dropped significantly, but not entirely, to a 1.35-fold higher risk of graft loss (95% CI: 1.16-1.57, p<0.001). Following readmission discharge, there was no statistically significant difference in mortality between readmitted and non-readmitted SPK recipients (95% CI: 0.18-1.68, p=0.3). CONCLUSIONS: Early hospital readmission following SPK is independently associated with graft loss and mortality. The risk of graft loss and mortality is highest during the readmission hospitalization. The risk of graft loss remains elevated even after the recipient is discharged. Readmission hospitalization is a high-risk window following SPK, and readmitted recipients require very careful clinical management during and following readmission.

Introduction: Type 2 diabetes (T2DM) is characterized initially by insulin resistance and later by beta cell failure. Historically, T2DM has been a relative contraindication for simultaneous pancreas and kidney transplantation (SPK), as insulin resistance may render a transplanted pancreas ineffective. UNOS reports that only 9% of patients listed for pancreas transplantation have been designated as T2DM. However, recent studies of SPK have shown comparable outcomes between T2DM and type 1 diabetes. Here we report long-term follow-up from a single-center experience of SPK in T2DM. Methods: We conducted a retrospective study of 256 patients who underwent SPK at California Pacific Medical Center from 2002 to 2015. Of these patients, 73 were categorized as T2DM based on C-peptide, age of onset, insulin usage, and BMI.
Outcome measures included patient/graft survival, hemoglobin A1C, BMI, lipid parameters, creatinine, and proteinuria. Results: Patient survival was 93.2%, and kidney and pancreas graft survival were 97.3% and 83.6%, respectively, with a mean follow-up of 6.4 years. Average hemoglobin A1C was 5.7% at one year and 5.8% at three years. Average creatinine was 1.3 mg/dL at both one and three years. BMI rose significantly from a baseline average of 27.5 kg/m2 to 30.0 kg/m2 at one year and 31.7 kg/m2 at three years. Lipid parameters showed no significant differences. Preexisting cardiac disease, defined as prior cardiovascular intervention, was present in 28.8%, and the post-transplant incidence of cardiac events requiring intervention was 6.8%. Albuminuria occurred in 21.3% and 23.0% at one and three years, respectively, but the majority were trace on dipstick. Conclusions: Our study reports outcome data for the largest cohort of type 2 diabetics receiving a simultaneous pancreas and kidney transplant in the literature. Patients maintained long-term euglycemia and stable renal function despite significant increases in BMI. These results suggest that T2DM should not be a contraindication to SPK and that a select group of T2DM patients could benefit from SPK transplantation.

Background: Organ donation after cardiac death (DCD) has the potential to expand the pool of available organs for transplantation. Because of the concern that warm ischemia time affects pancreas graft survival, the utilization of DCD pancreas transplants, either alone or as part of simultaneous pancreas-kidney (SPK) transplantation, has been limited. Methods: We performed a retrospective analysis of Organ Procurement and Transplantation Network (OPTN) records for SPK transplants in the U.S. between 2000 and 2013. SPK recipients were stratified according to donor type as DCD or donation after brain death (DBD). Associations between donor type and post-transplant graft failure and patient death were examined by Cox regression, including adjustment for recipient, donor and transplant factors. Results: During the study period, 10,540 patients with type 1 diabetes underwent SPK. Of these, 4.6% (n=230) were procured from DCD donors. DCD donors were more likely to be white (83% vs. 67%) and more likely to have died secondary to anoxia (32% vs. 12%) compared to DBD donors. Age, sex, and body mass index were similar between the two groups. At 3 months, recipients of DCD and DBD SPK transplants had similar (P>0.05) rates of pancreas survival (91% vs. 90%), kidney survival (97% vs. 96%), and patient survival (98% vs. 98%). After multivariate adjustment, DCD transplantation was not associated with increased risk of pancreas graft failure (aHR 0.78, CI 0.59-1.03), kidney graft failure (aHR 0.94, CI 0.7-1.24), or patient death (aHR 0.87, CI 0.58-1.31) at 3 months. Outcomes remained similar for recipients of DCD and DBD transplants at 1, 3, 5, and 10 years. Although this study is not powered to comment further, this may be biologically significant and will be addressed in an ongoing larger study.

This interim analysis indicates how such tests might guide personalized decisions regarding CMV management, such as length of prophylaxis, particularly in the high-risk D+/R- group.

CMV-seronegative kidney transplant recipients (KTRs) from CMV-seropositive donors are at highly increased risk of primary CMV infection with associated inferior outcomes after kidney transplantation.
Presence of CMV-specific T-cells despite seronegativity, however, may result either from an absence of detectable circulating antibodies despite CMV-specific memory B-cells, or from cross-reactivity due to allogeneic presensitization. Here, we studied all CMV-seronegative KTRs between 2008 and 2013 for the presence of CMV-specific T-cells pretransplantation. Among 87 CMV-seronegative KTRs, 49 KTRs (56%) received an allograft from a CMV-seropositive donor and 38 KTRs (44%) from a CMV-seronegative donor. Samples were collected pretransplantation and at +1, +2 and +3 months posttransplantation. CMV-specific T-cells against CMV-pp65 and -IE1, and alloreactive T-cells, were measured using an interferon-γ Elispot assay. Among 49 KTRs from CMV-seropositive donors, 11 KTRs (22%) showed detectable CMV-specific T-cells pretransplantation. Although no differences were observed for the incidence of CMV replication, KTRs with CMV-specific T-cells presented with shorter duration of CMV replication, lower initial and peak CMV loads, less CMV disease, and less need for intravenous antiviral therapy (p<0.05). KTRs with CMV replication despite CMV-specific T-cells showed a loss of CMV-specific T-cells from pre- to posttransplantation (p<0.05). KTRs with no CMV-specific T-cells pretransplantation showed inferior patient survival, allograft survival, and allograft function (p<0.05). KTRs with CMV replication showed a higher incidence of alloreactive T-cells and acute cellular rejection episodes (p<0.05). KTRs with CMV-specific T-cells pretransplantation thus showed superior outcomes compared to KTRs without CMV-specific T-cells. Higher frequencies of alloreactive T-cells may contribute to the higher incidence of acute cellular rejection in KTRs with CMV replication.

Patients treated for CMV viremia commonly relapse after discontinuation of therapy. CMV-specific CD8+ T-cell-mediated immunity (CMI) is useful to predict the risk of CMV reactivation but has not been evaluated in interventional trials. We hypothesized that CMI assessment could be used to tailor therapy for CMV viremia by guiding the need for secondary prophylaxis following initial treatment. Transplant patients were enrolled in this prospective study at the onset of CMV viremia requiring antiviral therapy (>1000 IU/mL). CMI was measured serially using the Quantiferon-CMV (QFT-CMV) assay. Patients were treated with antiviral therapy until the viral load was negative. Following that, the real-time result of the QFT-CMV assay was used to guide therapy: patients with a positive CMI assay had antivirals discontinued; patients with a negative CMI had two months of secondary antiviral prophylaxis. All patients were monitored by serial CMV PCR assays. The primary endpoint was relapse, defined as detectable viremia (>137 IU/ml) requiring antiviral therapy. We enrolled 21 patients (liver=4, lung=7, kidney=6, combined=4), of whom 17 had sufficient follow-up for analysis. Median age was 53 years (range 29-76); 10 patients had symptomatic CMV disease and 7 had asymptomatic viremia. Median viral load at onset was 10,900 IU/ml (range 1,920-1,220,000). All patients received antiviral therapy until the viral load was negative. Relapse occurred in 9 patients (52.9%) at a median of 59 days (range 7-169). At end-of-treatment, 6 patients (35.3%) had a positive CMI response (median IFN-γ response to CMV=3.40 IU/mL) and discontinued antivirals. CMI was negative in 11 patients (64.7%), who therefore received 2 months of secondary prophylaxis.
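The QFT-CMV-guided strategy described above is essentially a two-branch decision rule applied once the end-of-treatment viral load is negative. The sketch below restates it with hypothetical function and variable names for illustration only; it is not the study protocol and clinical thresholds beyond those stated in the abstract are not implied.

```python
def secondary_prophylaxis_plan(viral_load_iu_ml: float, qft_cmv_positive: bool) -> str:
    """Illustrative restatement of the CMI-guided strategy described above:
    antivirals are continued until the CMV viral load is negative; then a positive
    QFT-CMV (CMI) result stops antivirals, while a negative result triggers
    two months of secondary prophylaxis, with serial CMV PCR monitoring in both arms."""
    if viral_load_iu_ml > 0:  # treatment phase not yet complete
        return "continue antiviral treatment until viral load is negative"
    if qft_cmv_positive:
        return "discontinue antivirals; monitor with serial CMV PCR"
    return "give 2 months of secondary antiviral prophylaxis; monitor with serial CMV PCR"

print(secondary_prophylaxis_plan(0.0, qft_cmv_positive=False))
```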
During the follow-up, patients with a negative CMI were still more likely to have CMV relapse compared with those who were positive (8/11 [72.7%] vs. 1/6 [16.7%], p=0.05). Hospitalization due to CMV disease tended to be lower in CMI-positive vs. CMI-negative patients (0/6 [0%] vs. 5/11 [45.5%]; p=0.1). This study provides a first proof of concept that therapy for CMV viremia can be successfully tailored based on measurement of the CMV-specific CD8+ T-cell response. This allowed minimization of antivirals in a subset of patients without adverse sequelae.

Patterns of viremia and relapse are influenced by the host immune response, specifically CD8 T-cell responses. The use of mass cytometry (CyTOF) allows for simultaneous analysis of a large number of surface and intracellular markers that is not possible with conventional flow cytometry. We hypothesize that an in-depth analysis of CD8 T-cell responses may provide novel immunological insights into the pathogenesis of CMV infection. Methods: PBMCs were collected from 20 transplant patients at the time of CMV viremia onset. Of these, 10 were patients with positive outcomes, meaning they had spontaneous viral clearance or a rapid response to antiviral therapy with no relapse, and 10 were patients with negative outcomes in whom viremia persisted or relapsed despite antiviral therapy. We utilized mass cytometry to simultaneously characterize cell-surface expression of CCR7, CD45RA, CD45RO, CD127, CD95, CD25, CD69, CD7, HLA-DR, CD38, CD27, PD-1, CD107a and CD57 on CD8 T-cells. Functionality was assessed by measuring intracellular effector molecules (IFN-γ, IL-2, MIP-1β, TNF-α, IL-17, IL-10, perforin, granzyme B and FOXP3) in CD8 T-cells stimulated with a pool of immunodominant CMV peptides, each at 1 mg/mL. Results: As compared to patients with positive outcomes, those with negative outcomes had a greater frequency of terminally differentiated effector memory CD8 T-cells (42.2% vs 60.7%, p=0.023) and a significantly lower frequency of central memory CD8 T-cells (0.48% vs 0.15%, p=0.035). No differences were seen between the two groups in total, naïve or effector memory CD8 T-cells, nor in CD8 T-cell maturation markers (e.g., PD-1, CD107a, CD45RO). Interestingly, FOXP3+ CD25+ regulatory CD8+ T-cells were present at greater frequency in the negative outcome group (0.26% vs 0.07%; p=0.0081), but no differences in IL-10 were observed. With respect to functionality, tri-functional (IFN-γ, TNF-α, IL-2) and polyfunctional (IFN-γ, TNF-α, IL-2, MIP-1β) CD8 T-cell frequencies were consistently greater in the positive outcome group (p=0.0005 and 0.048, respectively), as were perforin- and granzyme B-expressing IFN-γ+ TNF-α+ CD8 T-cells (p=0.0021). Conclusion: We show that positive CMV outcomes are associated with higher frequencies of polyfunctional T cells but also lower frequencies of regulatory CD8 T-cells. A comprehensive assessment of CD8 T-cell immunity at the onset of viremia could be used to predict CMV outcomes.

Background: We do not currently have sufficient predictive tools to determine the risk of CMV infection after organ transplantation. Recent data suggest that assays of CMV-specific cellular immunity appear promising in predicting the risk of CMV infection, allowing for personalized prevention. Such assays may optimize our ability to prevent CMV and enhance transplant outcomes, while limiting infection, cost, toxicity, and phlebotomy.
Whether such lymphocyte-based assays are useful in the immediate post-transplant phase after the administration of anti-lymphocyte therapy has not yet been studied. The primary objective was to establish the timeframe of the return of CMV cell-mediated immunity after kidney transplant in patients given thymoglobulin, and to determine whether such lymphocyte-based assays would yield clinically useful information soon after transplant. Methods: 36 CMV-seropositive adult kidney transplant recipients who had received thymoglobulin at a single center underwent T-SPOT.CMV testing once within 6 months of transplant. Normalization dilution resulted in 250,000 cells/well plated. Results: Among study subjects, the average age was 53 ± 13 years and 47% were female. The average creatinine at the time of the assay was 1.3 ± 0.5 mg/dL, average WBC was 6.5 ± 2.0 x 10^9 per liter, and lymphocytes were 11.5% ± 6.6%. All but 3 of 36 had detectable CMV immune responses; those 3 were among 8 tested within the first 3 weeks after transplant. By week 9 after transplant, T-SPOT.CMV results were similar to normal controls. ELISpot responses (IE-1 antigen and a second CMV antigen, respectively): normal CMV-seropositive controls, 102±165 and 186±169; transplant recipients, all, 47±91 and 132±153; week 4+, 59±101 and 160±160; week 6+, 83±119 and 214±166; week 7+, 92±128 and 211±152; week 9+, 101±152 and 237±158. Conclusion: CMV-specific cellular immunity returned rapidly within the first few months after transplant in patients given thymoglobulin, suggesting that assays of cellular immunity, specifically T-SPOT.CMV, may be useful shortly after organ transplant even in the setting of lymphocyte-depleting agents.

The aim of the Deterioration of Kidney Allograft Function study (DeKAF) was to identify and assess the pathologies of troubled kidney grafts, that is, those with ↑ risk for death-censored graft loss (DC-GL). The cross-sectional cohort consisted of those tx prior to 10/1/05 and having SCr ≤2 mg/dl on 1/1/06, and subsequently having deterioration of function (25% ↑ Cr or new-onset proteinuria) leading to graft bx. Prelim studies (18 mo f/u) identified clusters of markers, and individual histologic markers (C4d, IATR [inflammation in areas of fibrosis]) plus DSA, as risks for DC-GL. In this study, we assessed whether these associations remain valid during long-term follow-up. Median f/u is now 67 mos, and the findings remain similar: 1) Fig. 1a shows the clusters by extended Banff criteria. There is a signif diff (p<.0001) between clusters in DC-GL (Fig. 1b). Cluster 1 (fibrosis w/o infl), often read by local pathologists as CNI-toxicity, had the least GL. In contrast to prelim data, Cluster 2 (inflammation w/o chronic change) has a HR for failure 1.96 times that of cluster 1 (p<.0001). 2) Fig. 2a shows the outcomes grouped by C4d and DSA. C4d+/DSA+ had signif ↑ 5-yr DC-GL vs C4d-/DSA- (p=.0001); DSA+/C4d- vs -/- (p=.03); and DSA-/C4d+ vs -/- (p=.04). 3) Univariate analysis of time to DC-GL was signif diff by IATR score (e.g., 1 vs 0; 2 vs 3) (p<.01). In addition, 2 groups (i=0, IATR=0 and i>0, IATR=0) had signif better DCGS than either i=0, IATR>0 or i>0, IATR>0 (Fig. 2b). In contrast, i=0, IATR>0 vs i>0 and IATR>0 was NS. We conclude that findings observed in the prelim analysis of the DeKAF CS-cohort (cluster analysis, DSA/C4d, and IATR) remain significant with extended f/u.
We enrolled consecutive patients who received kidney allografts at two Paris centers between 2007 and 2010. Patients were screened for the presence of circulating DSA at the time of transplantation (day-0), at one year and two years after transplantation, or during an episode of acute rejection in the first two years after transplantation. We assessed DSA characteristics, including specificity, HLA class, mean fluorescence intensity, C1q-binding capacity, and IgG subclasses at day -

Introduction: Integrity of the microvasculature is critical for long-term survival after solid organ transplantation. We hypothesize that biomarkers of endothelial injury and repair are early predictors of chronic rejection. Preliminary evidence suggests that angiogenic markers are associated with cardiac allograft vasculopathy. We investigated whether angiogenic markers measured 12 months after renal transplantation can predict late allograft dysfunction. Methods: Levels of 17 angiogenic proteins and donor-specific HLA antibodies (DSA) were measured by multianalyte profiling 12 months after renal transplantation in sera of 152 recipients. 45 patients had progressive renal dysfunction, defined as at least 20 ml/min/1.73 m2 eGFR loss between year 1 and year 5 (mean baseline eGFR 54±22 ml/min/1.73 m2), and 107 control patients had stable function (mean baseline eGFR 48±15 ml/min/1.73 m2). All patients had low to standard immunological risk and started on triple therapy with calcineurin inhibitors. Six months post-transplant, 33 patients switched to dual therapy, including 13 with mTOR inhibitors. Results: 5-year death-censored graft survival was 100% in the control group and 79.2% in the progressors. Follistatin, a promoter of tubular regeneration, was significantly increased in recipients with progressive renal dysfunction (median [IQR] = 1146 [1060] vs 826 [1341] in controls, p=0.033). The angiogenic and pro-inflammatory factors PLGF (median [IQR] = 16 [37] vs 10 [26], respectively, p=0.019) and VEGF-C (median [IQR] = 276 [372] vs 183 [274], respectively, p=0.029) were also increased in the progressors, independent of treatment regimen. The remaining biomarkers, including endothelin-1, FGF1/2 and VEGF-A, showed no association with loss of renal function. In total, 46 patients had allograft rejection. Late, but not early, rejection was associated with progressive renal decline at 5 years (OR 3.15, 95% CI 1.27-7.81, p=0.013). De novo DSAs were found in 22 patients (14.5%); however, no association with biomarkers was found.

In both primary kidney diseases and kidney transplants, the extent of fibrosis is a key indicator of stage, progression, and prognosis. Much debate focuses, however, on whether fibrosis in kidney transplants and chronic kidney disease (CKD) is a maladaptive mechanism of progression or an adaptive response to wounding. Kidney transplant biopsies offer an opportunity to understand the pathogenesis of organ fibrosis in relation to the time of biopsy post-transplant (TxBx). We studied the relationships among TxBx, histologic fibrosis, diseases, and transcript expression measured by microarrays in 681 indication biopsies taken either early (n=282, <1 year) or late post-transplant (n=399, >1 year). Fibrosis was absent at transplantation but was present in some early biopsies by four months TxBx as a self-limited response to donation-implantation injury, and was not associated with progression to failure.
The molecular phenotype of early biopsies represented the time sequence of the response-to-wounding: immediate expression of acute kidney injury transcripts, followed by fibrillar collagen transcripts after several weeks, then by the appearance of immunoglobulin and mast cell transcripts after several months as fibrosis appeared. Fibrosis in late biopsies had different associations, because it reflected new injury from progressive diseases (antibody-mediated rejection, transplant glomerulopathy, mixed rejection and glomerulonephritis) with high risk of progression to failure. Fibrosis in late biopsies correlated with injury (r=0.49), fibrillar collagen (r=0.34), immunoglobulin (r=0.33), and mast cell (r=0.43) transcripts, but these were independent of TxBx, probably because ongoing injury telescoped the response-to-wounding time sequence. Pathway analysis revealed epithelial response-to-wounding pathways such as Wnt/beta-catenin. The results indicate that fibrosis in kidney transplants reflects the response to wounding and nephron loss, but progression to failure reflects continuing injury, not autonomous fibrogenesis.

A recent editorial in AJT characterized ABOi transplants as "Twice as Expensive, Half as Good" and challenged the ABOi community to evaluate the survival benefit of ABOi transplants, i.e., survival with the ABOi transplant versus waiting for a compatible donor. The goal of this study was to address this major concern regarding the practice of ABOi transplantation. METHODS: Using SRTR data on a combined population of adult first-time kidney-only waitlist registrants and ABOi LDKT recipients, we calculated the survival benefit of ABOi LDKT versus conservative therapy (remaining on the waitlist or receiving DDKT) using the Mauger method, treating ABOi LDKT as a time-varying exposure and adjusting for baseline candidate characteristics. RESULTS: Among 308,846 patients in the study, 577 received ABOi LDKT. Compared to conservative therapy, ABOi LDKT was associated with 3-fold higher mortality risk in the first 30d post-KT, but lower mortality risk past 180d post-KT (Table). ABOi recipients reached a break-even point of mortality risk (time to equal risk) 2.7 months post-KT (Figure 1) and received a survival benefit past 8.0 months (time to equal survival). The long-term survival benefit associated with LDKT did not diminish over time, up through five years post-KT (Figure 2). CONCLUSIONS: In light of the profound survival benefit associated with ABOi transplantation, ABOi transplantation can be characterized as "Twice as Expensive, Twice as Good" as waiting for ABO-compatible DDKT.

genes; PC and B cell development genes were exclusively upregulated, whereas genes associated with mature B cells were both up-regulated and down-regulated. Upregulated genes included many associated with 1) the ubiquitin-proteasome system (UPS), 2) negative regulation of apoptosis, and 3) negative regulation of cell-cycle control. Genes encoding proteolysis in the immunoproteasome (IP) (PSMB8, PSMB9, PSMB10) were upregulated more than those in the constitutive proteasome (CP), indicating potential changes in PI sensitivity. In vitro assessment of chymotrypsin-like activity in BMRPCs after CFZ treatment revealed reduced chymotrypsin-like inhibition by CFZ, bortezomib and ixazomib, but increased inhibition with an IP-specific inhibitor. These results indicate that a shift toward IP-specific proteolysis may mediate relative CFZ resistance in surviving BMRPCs.
Conclusions: This first RNAseq expression profile analysis of untransformed human CD138+ BMRPCs revealed wide-ranging PI-induced effects on genes controlling B cell differentiation into PCs, and high connectivity and dependence on genes associated with the UPS and autophagy. Additional in vitro studies indicated changes from CP to IP proteolytic capacity post-CFZ. These studies suggest several new strategies to personalize and enhance PI-based PC depletional therapy, such as sequential therapy with CP- and IP-selective inhibitors.

Background. Desensitization results have largely focused on effects on anti-HLA antibody (HLA-Ab) and anti-ABO antibody levels. Anti-angiotensin II type 1 receptor antibody (AT1R-Ab) has recently been associated with allograft loss and antibody-mediated rejection and may therefore represent an important parameter to assess in desensitization. The aim of the present study was to evaluate: 1) the prevalence of AT1R-Ab in renal transplant (RTX) candidates with circulating HLA-Ab and 2) the AT1R-Ab response to proteasome inhibitor (PI) desensitization therapy. Methods. Thirty-nine patients receiving PI therapy had HLA-Ab levels determined by single-antigen bead array (Luminex). The mean fluorescence intensity (MFI) of the strongest IgG HLA-Ab (immunodominant antibody, iDSA) was measured before and after PI therapy. A successful response was defined as an iDSA decrease of >50% at nadir versus pre-desensitization. Serum samples were also tested by ELISA for the presence of IgG AT1R-Ab (U/mL after 1:50 dilution), before and after PI-based desensitization. A high proportion of candidates with HLA-Ab exhibited AT1R-Ab (20/39, 51%). A good response of AT1R-Ab to PI therapy was observed in 18/20 patients (17.4±12.3 pre versus 10.9±13.1 U/mL post PI therapy, p=0.0009). iDSA reduction with PI desensitization was observed in 13/20 patients (7996±2580 pre versus 4410±2695 MFI post PI therapy, p=0.0001). 12/20 patients had both iDSA and AT1R-Ab reduction with PI therapy, 6/20 had AT1R-Ab reduction only, 1/20 had iDSA but not AT1R-Ab reduction, and 1/20 did not experience iDSA or AT1R-Ab reduction. AT1R-Ab levels were more elevated in Caucasians than in African-Americans both before (20.7±14.9 versus 12.3±3.7 U/mL, p=0.1) and after PI therapy (14.7±15.9 versus 5.2±2.9 U/mL, p=0.1). More than half of HLA-sensitized candidates exhibit IgG AT1R-Ab. This initial desensitization experience with PI treatment consistently provided substantial reductions in both HLA-Ab and AT1R-Ab levels in an unselected, highly sensitized kidney transplant candidate population.

Narcotic use among liver transplant (LTX) candidates is common. However, the survival implications and intensity of use are not well understood. We integrated national SRTR data with pharmacy fill records from a nationwide prescription claims data warehouse for LTX candidates listed between 2008 and 2014. Narcotic fills in the year before listing were normalized to morphine equivalents (ME). Associations of pre-listing narcotic use (adjusted hazard ratio, aHR) and other baseline clinical factors with short-term (1-year) and long-term (2- to 5-year) patient survival after LTX were assessed with multivariate Cox regression. Among 41,907 LTX candidates, 48.6% filled ≥1 narcotic prescription in the year before listing, and 25.8% filled multiple prescriptions, equating with a total dose of ≥10 ME/day.
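To make the normalization step above concrete, the following sketch converts prescription fills to an average morphine-equivalent dose per day over the pre-listing year and flags the ≥10 ME/day exposure discussed in the abstract. It is illustrative only: the drug list, conversion factors, and column names are assumptions for the example, not the study's actual claims data or conversion table.

```python
# Illustrative sketch (not the study's code): normalize narcotic fills to
# morphine equivalents (ME) per day over the year before listing.
# Drug names, conversion factors, and column names are hypothetical.
import pandas as pd

# Hypothetical oral morphine-equivalent conversion factors (mg ME per mg drug)
ME_FACTORS = {"morphine": 1.0, "oxycodone": 1.5, "hydrocodone": 1.0, "hydromorphone": 4.0}

def me_per_day(fills: pd.DataFrame, days_observed: int = 365) -> float:
    """Average morphine-equivalent dose per day in the pre-listing year.

    `fills` has columns: drug, strength_mg (per unit), quantity (units dispensed).
    """
    total_me = (fills["strength_mg"] * fills["quantity"]
                * fills["drug"].map(ME_FACTORS)).sum()
    return total_me / days_observed

fills = pd.DataFrame({
    "drug": ["oxycodone", "hydrocodone"],
    "strength_mg": [5, 10],
    "quantity": [360, 180],
})
dose = me_per_day(fills)
print(f"{dose:.1f} ME/day; high-dose exposure (>=10 ME/day): {dose >= 10}")
```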
Factors associated with pre-listing narcotic use and dose included younger age, white non-Hispanic race/ethnicity, unemployment at listing, and disease due to hepatitis C, drug overdose, or tumors. Of listed patients, 22,114 (52.8%) underwent transplant during the study. We found no association between pre-listing narcotic use or dose and access to transplant, but significant associations between narcotic use and post-transplant survival in multivariate analysis at doses ≥10 ME/day. We also found significant effects on survival for age, sex, race, unemployment, INR, serum creatinine, and primary diagnosis. Pre-listing narcotic use at ≥10 ME/day, seen in more than one-quarter of LTX candidates, is independently associated with increased death risk after LTX. Future work should identify underlying mechanisms and management approaches to improving outcomes.

Background: Graft quality is a determinant of graft function after liver transplantation (LT). This study aimed to evaluate graft molecular markers that correlate with peripheral markers associated with graft injury severity. Patients/Methods: The study prospectively evaluated 74 LT patients (deceased donor [DDLT], n=64, and living donor [LDLT], n=10). DDLTs were grouped by donor characteristics (standard [SCD] and extended criteria [ECD]), CIT (>12 hrs), BMI (>30 kg/m2), and graft steatosis. Biopsy samples were collected at pre- (L1) and post-reperfusion (L2) times. Total RNA was isolated, labeled, and used for gene expression microarrays. Probeset summaries were obtained by the RMA algorithm. Unsupervised cluster analysis compared molecular profiles among grafts. T-tests were used for comparisons (p<0.001; FDR 1%). Pathways were analyzed by IPA. EDTA-anticoagulated blood was drawn at pre-implantation (P1), post-reperfusion (P2), and on post-operative days 1 (POD1) and 2 (POD2). Cell-free nucleic acids (cf-NAs) were isolated and their concentration estimated. cf-DNA was tested by microsatellite markers. cf-miRNAs were tested by RT-qPCR (spike-in: cel-mir-39). Pearson's correlations were fit (p<0.05). Results: Demographics and clinical characteristics were similar among groups. Transaminases at P2 were significantly lower in LDLTs (p<0.001), higher in DDLTs, and especially increased in steatotic DDLT recipients (p=0.002). Molecular profiles were similar among grafts at L1 by unsupervised clustering analysis, and no differentially expressed genes were identified among DDLTs after adjusting for CIT. A set of 283 genes involved in metabolism activation was found differentially expressed between LDLTs and DDLTs at L1. Post-reperfusion, we found differentially expressed genes in DDLT samples associated with inflammation and metabolism. In plasma samples, cf-NAs significantly increased in steatotic livers (p<0.001) at the post-reperfusion time (p<0.001) and correlated with AST activity (r=0.744; p<0.001). Furthermore, we identified that 95% of cf-DNA at the post-reperfusion time was of donor origin and decreased to 5% at POD2.

Hospitalizations for Cardiovascular Disease After Liver Transplantation in the United States: Epidemiology, Outcomes, and Costs. A. Mathur, Y.-H. Chang, D. Steidley, N. Khurmi, N. Katariya, A. Singer, W. Hewitt, K. Reddy, A. Moss. Transplant Center, Mayo Clinic, Phoenix. Background: Cardiovascular disease (CVD) is a leading cause of post-liver transplant death, and variable care patterns may affect outcomes. We aimed to describe the epidemiology, outcomes, and costs of inpatient CVD care across U.S. hospitals.
Methods: Using a merged dataset from the 2002-2011 Nationwide Inpatient Sample and the American Hospital Association Annual Survey, we evaluated liver transplant patients admitted primarily with myocardial infarction (MI), stroke (CVA), CHF, dysrhythmias, cardiac arrest (CA), or malignant hypertension. Patient-level data included demographics, APR-DRG co-morbidity burden, and CVD diagnoses. Facility-level variables included ownership status, payer mix, hospital resources, teaching status, and physician/nursing-to-bed ratios. Cost quartiles were based on facility-specific cost-to-charge ratios. We used generalized estimating equations to evaluate patient- and hospital-level factors associated with mortality and costs. Results: 4,763 hospitalizations occurred in 153 facilities (transplant hospitals, n=80). CVD hospitalizations increased overall by 115% over the decade (p<0.01). CVA and MI declined over time (both p<0.05), but CHF and dysrhythmia grew significantly (both p<0.03), as shown below. 19% of hospitalizations were for multiple CVD diagnoses. Transplant hospitals had patients with lower co-morbidity burden (p<0.001) and had greater resource intensity, including presence of a cardiac ICU, interventional radiology, operating rooms, teaching status, and nursing density (all p<0.01). CA and MI were associated with the highest mortality rates (p<0.001). Transplant and non-transplant hospitals had similar mortality (overall, 3.9%, p=0.55). High CVD burden (aOR 3.35, p<0.001), emergent admission (aOR 1.50, p=0.06), and high-expense hospitals (aOR 1.70, p=0.03) predicted mortality. High cost was associated with high CVD burden (aOR 2.11, p=0.001). Conclusions: CVD after liver transplant is evolving and is responsible for growing rates of inpatient care. High CVD burden predicted mortality and high-cost care, which should direct QI efforts.

We previously developed and validated a nomogram to predict discharge disposition post-OLT. The current study analyzes our experience using the discharge calculator. We retrospectively reviewed data on 430 adults transplanted between 01/2012 and 11/2015. Patients who died within 30 days were excluded. The discharge calculator, available online, utilizes the following: Karnofsky score, age, INR, creatinine, diabetes, dialysis, bilirubin, albumin, and BMI. The calculation performed at admission for OLT was shared with the transplant team. Patients with a predictive value of ≥60% were considered high risk for discharge to a facility. Early discharge planning and physical therapy (PT) were based on discharge probability. Continuous variables were described by means and standard deviations. Predictions from the existing nomogram were validated against the actual observed outcomes from the prospectively collected data using the C-index (ROC AUC). 132/430 (30%) of patients were discharged to a facility with a predictive score of 60±24%, and 298 were discharged home with a predictive score of 27±22% (p<0.0001). Characteristics of the population going to a facility were: older age (p=0.004), higher MELD score (p<0.001), higher BMI (p=0.001), more hospital admissions prior to OLT (p=0.002), higher creatinine (p<0.001), longer ICU length of stay (p<0.001), receipt of cadaveric vs. living donor grafts (p=0.04), hospitalization at the time of OLT (p<0.001), encephalopathy (p<0.001), arrhythmia (p<0.001), dialysis (p<0.001), and ascites and hydrothorax (p<0.001). 107 patients had a predictive value >60% for discharge to a facility; 31 (28%) of these were discharged home.
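The validation step described in this abstract, checking the nomogram's predicted probabilities against the observed discharge disposition with the C-index (equivalent to the ROC AUC for a binary outcome), can be sketched as follows. The predicted probabilities and outcomes below are hypothetical; only the ≥60% high-risk threshold is taken from the abstract.

```python
# Minimal sketch: C-index (ROC AUC) of nomogram predictions vs observed
# discharge disposition. Data are hypothetical, not the study cohort.
import numpy as np
from sklearn.metrics import roc_auc_score

# predicted probability of discharge to a facility (from the online calculator)
predicted = np.array([0.15, 0.72, 0.40, 0.85, 0.10, 0.65, 0.30, 0.90])
# observed outcome: 1 = discharged to a facility, 0 = discharged home
observed = np.array([0, 1, 0, 1, 0, 0, 1, 1])

c_index = roc_auc_score(observed, predicted)   # concordance for a binary outcome
high_risk = predicted >= 0.60                  # threshold used for early planning
print(f"C-index: {c_index:.3f}")
print(f"Flagged high risk (>=60%): {high_risk.sum()} of {len(high_risk)} patients")
```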
The only significant factor in the patients who were discharged home was less time in the ICU: 2.3±1.9 vs. 9.1±13 days (p<0.001). The concordance index for discharge disposition prediction is 0.835. For higher predictions (>80%, n=44), the model over-predicts discharge to a facility, but even at this level 75% (n=33) of patients are actually discharged to a facility. The intensive discharge planning and PT regimen may impact the discharge disposition in these patients. The discharge calculator is a vital component of early discharge planning for the transplant team. The concordance index of 0.835 is reflected in its accuracy in clinical practice. The over-prediction at higher values may reflect the response of the clinical team to the initial predictive value.

Hepatic artery thrombosis (HAT) after liver transplantation (LT) is a devastating complication associated with ischemic biliary cholangiopathy (IBC) that can occur even after successful revascularization. However, the routine performance of immediate retransplantation for all HAT cases is not indicated in an era of organ shortage. Our objective was to explore long-term outcomes following HAT in adult LT recipients who did not receive immediate retransplantation, focusing on the probability, risk factors, and resolution of biliary complications. Among 2113 consecutive adult LTs performed at our institution over the last two decades, 44 HAT (2.0%) were identified. Thirty-three patients did not receive immediate retransplantation, but received revascularization with thrombectomy/thrombolysis (n=19) or non-revascularization management (n=15). Among the 33 without immediate retransplantation, 16 patients (48%) developed IBC a median of 45 days after HAT diagnosis and required long-term biliary stent placement for a median of 530 days (150-1857 days). The affected biliary system was the extrahepatic duct alone in 7 patients, limited to the hilar area in 3, and diffuse intrahepatic ducts in 6. The 3-year biliary complication-free graft survival rate after HAT diagnosis was equivalently low in the revascularization and non-revascularization groups (33% vs 20%, p=0.97). However, biliary strictures in the revascularization group were more likely to be limited in extent and to resolve [5 of 8 (62%) vs. 0 of 8 (0%)] compared to the non-revascularization group. Salvage retransplantation after failure of non-transplant management was performed in 7 patients, with a graft survival rate equivalent to that after immediate retransplantation. IBC can develop in half of HAT cases treated without immediate retransplantation, with a milder extent and a higher probability of resolution in revascularized patients compared to those who were not revascularized. Salvage retransplantation should be considered for IBC in those who were not revascularized because of the dismal chance of resolution.

tight junction proteins (TJ), cytokeratin, e-cadherin, laminin, CD4, CD8 and CD14 and by global mRNA expression using a microarray technique. Bacterial infiltration was determined by FISH for bacterial antigens. Further, a BD damage score (BDDS) to quantify biliary epithelial injury was developed and correlated with recipient and donor data and patient outcome. Results: Patients with major BD damage after cold storage, as quantified by the newly developed BDDS, had a significantly increased risk of biliary complications (p<0.0001) and graft loss (p=0.0004).
After cold storage (p=0.0119), and even more after reperfusion (p=0.0002), epithelial damage categorized by the BDDS was markedly increased, and TJs were detected with inappropriate morphology. mRNA expression levels of adherens junction (q=0.003) and focal adhesion (q=0.04) molecules were increased in damaged BDs without biliary complications compared to damaged BDs with biliary complications, reflecting increased regenerative capacity of the biliary epithelium in the first group. Correspondingly, IH showed significantly increased cytokeratin, e-cadherin and laminin expression in this group. FISH analysis demonstrated an equal distribution of bacterial infiltration of BDs; however, mRNA analysis detected an induced antibacterial immune response (q=0.00084) and phagocytosis (q=0.04) in BDs with enhanced epithelial regenerative capacity, corroborated by a significantly increased CD4+ and CD8+ cell-mediated adaptive immune response. Conclusions: In many cases, the common BD epithelium shows considerable damage after cold ischemia, with further damage occurring after reperfusion. The extent of epithelial damage can be quantified by our newly developed BDDS and is a prognostic parameter for biliary complications and graft loss. Following BD damage during cold storage, functional regenerative capacity of the biliary epithelium and an enhanced local adaptive antibacterial immune response are able to rescue BDs and prevent biliary complications after liver transplantation.

Background: In the US, living donor liver transplantation (LDLT) is associated with increased long-term graft and patient survival compared to deceased donor liver transplantation (DDLT). However, patients undergoing LDLT have a higher risk of surgical complications leading to retransplantation (reLT) than those undergoing DDLT. To date, reLT outcomes after failed initial LDLT are not known. Methods: We identified 276 patients in the UNOS database who underwent reLT with a deceased-donor allograft after initial LDLT from 2002-2013. Multivariable logistic models were used to evaluate the association between pre-LT characteristics and 1-year mortality. Results: ReLT occurred within 14 days of LDLT in 29% of recipients, of whom 78.8% were in the intensive care unit (ICU) at the time of reLT. Overall unadjusted 1-year mortality after reLT was 32.3%. In univariable analysis, only donor age and patient location at the time of reLT were significantly associated with 1-year mortality. As a category, timing between LDLT and reLT was not significantly associated with the risk of death at 1 year (p=0.09); however, in pairwise comparisons there was an increased risk of death at 1 year if reLT occurred >365 days from initial LDLT (OR 2.39; 95% CI 1.02-5.58), using reLT ≤14 days as the reference. A multivariable logistic regression model with donor age, patient location at reLT, and time category of reLT as covariates demonstrated a 1-year predicted mortality of 31%, 45.5%, 52.7% and 50.8% for reLT recipients in the ICU at ≤14, 15-90, 91-365 and >365 days from initial LDLT, respectively. Conclusion: A significant proportion of reLTs occur soon after LDLT and in critically ill recipients, with significantly lower survival than after initial LDLT, though outcomes appear best if reLT is needed emergently.

Medication errors are associated with an increased incidence of infection, rejection, and graft loss in kidney transplant recipients.
This study aims to create a model to predict patients at highest risk of medication errors, to streamline workflow in a kidney transplant clinic caring for long-term recipients. This was a prospective, observational study in adult kidney transplant recipients who were ≥90 days post-transplant. Prior to their clinic visit, patients were administered a survey assessing medication adherence, perception of health status, and current medication regimen. Subsequently, pharmacists conducted a blinded medication history encounter and documented the medication errors discovered during the visit. A predictive model was then created using binary logistic regression with backward elimination. This study included 237 patients (Table 1). The logistic model had good predictability, with 61.5% specificity and 66.3% sensitivity for identifying patients likely to have ≥6 medication errors. This model had an AUC of 0.724 (Figure 1) and includes 12 variables, among them: lack of or unknown baseline calcineurin inhibitor (CNI), use of Social Security disability as income (SSDI), use of Medicaid for prescription insurance, poor health status rating, number of daily scheduled medications, current anti-diabetic regimen, number of antihypertensive medications, and measures of medication adherence and affordability (Table 2). A more parsimonious model, which included 9 variables, was also created for ease of use in a clinic setting. This model had a sensitivity and specificity of 62.5% and 66.7%, and an AUC of 0.72 (Figure 2). These results demonstrate that a simple 5-minute patient survey conducted in kidney transplant recipients may be capable of accurately predicting patients at high risk of medication errors, and could be used as a screening tool for transplant pharmacist interventions. These results require external validation prior to implementation.

Medication errors are associated with an increased incidence of infection, rejection, and graft loss in kidney transplant recipients. This study aims to assess the impact of a pharmacist encounter on the number of medication errors discovered at the following clinic visit. This was a prospective, blinded observational study in adult kidney transplant recipients ≥90 days post-transplant. Through a medication history encounter, pharmacists documented medication errors discovered at the clinic visit. Errors were categorized as nonadherence, over- or under-dosing, duplication of therapy, missing medication, preventable adverse drug reaction (ADR) due to allergy/intolerance or due to inaccurate medication instructions, conflicting information from providers, lack of monitoring, and receipt of an incorrect medication. The number and type of errors were compared between patients who had a pharmacist visit vs those who did not. This study included 237 patients. Significant baseline differences included hypertension (HTN) and delayed graft function (DGF) (Table 1). Univariate analysis demonstrated a significant reduction in the proportion of patients with ≥2 medication errors (21.7% vs 78.3%, p=0.041) and ≥3 errors (18% vs 82%, p=0.003), and a trend for ≥4 errors (19.6% vs 80.4%, p=0.102), when comparing those with and without a prior pharmacist encounter. The breakdown of errors within groups is displayed in Table 2. There were 693 medication errors (3.8 errors per patient) in the group without a pharmacist encounter versus 181 errors (3.1 errors per patient) in the group with a pharmacist encounter.
Significantly fewer patients in the encounter group had incorrectly listed medications (68% vs 47%, p=0.005), with a similar trend for lack of monitoring (39% vs 25%, p=0.054). This study suggests that an ambulatory pharmacist encounter can reduce the number of medication errors in kidney transplant recipients. These findings strengthen the argument for pharmacist presence in the transplant ambulatory care setting.

Kidney tx recipients carry a high burden of comorbidities, and some of the medications for these conditions have genetic predictors of efficacy and toxicity. These genes may be associated with subtherapeutic treatment, predisposition to side effects and, in rare cases, death. However, these genetic variants are not commonly used to personalize therapy in clinical tx practice. Recipients (n=3043) undergoing kidney tx between 2005-2011 were enrolled and followed prospectively in the DeKAF genomics multicenter study. DNA was genotyped using the Affymetrix Tx Array GWAS chip (exome panel with approx 782,000 variants), which included PGx variants. The Clinical Pharmacogenetics Implementation Consortium (CPIC) guidelines were reviewed, and medications with genetic recommendations that may be relevant to tx recipients were selected. We then evaluated our subjects for the presence of these risk gene variants. The mean age at tx was 49±14 yrs; 62% were male; 75% White, 18% African American, 4% Asian. We identified 8 relevant high-risk PGx genotypes on our chip (Table 1). These genotypes occurred in 5-57% of recipients. High-risk clopidogrel CYP2C19 variants were present in 29% of recipients. Clopidogrel was actively prescribed at the time of tx in 111 recipients, of whom 24% had a high-risk genotype that would confer risk for lack of efficacy. 38% of recipients had a CYP2C19 risk genotype that would place them at risk for antidepressant therapy failure. 24% carried the SLCO1B1 variant, which placed them at higher risk for development of simvastatin-related myopathy. The genotype conveying rapid clearance of tacrolimus was present in 26%. We found that a high proportion of recipients carried high-risk PGx genotypes. The CYP2C19 risk variant was present in nearly a third of our >3,000 recipients, who may be at risk for therapeutic failure should they receive clopidogrel or antidepressants. Genetic testing is increasingly accessible, and knowing this information prior to starting these medications may improve efficacy and reduce unnecessary side effects in an already complex patient population.

Background: Long-term allograft survival remains poorer in African American (AA) renal transplant recipients (RTR) compared to Caucasians (C) and may be due to racial differences in calcineurin inhibitor pharmacology. P-glycoprotein (P-gp), an ABC transporter expressed in liver, gut, kidney and peripheral blood mononuclear cells (PBMCs), modulates tacrolimus (TAC) pharmacokinetics (PK) and intracellular pharmacology. This study evaluated PBMC P-gp function and TAC PK in 33 AA and 32 C male (M) and female (F) RTR who received TAC and mycophenolic acid. Methods: A steady-state 12-hour PK study determined the TAC trough (C0), dose-adjusted area under the concentration-time curve from 0-4 hours (AUC0-4), apparent clearance (CL), and lean body weight (LBW)-normalized CL in stable RTR >6 months post-transplant. TAC dosage was adjusted using a C0 range of 4-9 ng/ml.
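The abstract does not give its exact pharmacokinetic equations, so the sketch below uses standard noncompartmental assumptions for illustration: a trapezoidal AUC0-4, dose adjustment by dividing by the administered dose, and apparent oral clearance estimated as dose divided by the AUC over the 12-hour dosing interval. The sampling times, concentrations, and dose are hypothetical.

```python
# Minimal noncompartmental PK sketch (illustrative assumptions, not the study's
# methods): trapezoidal AUC0-4, dose-adjusted AUC, and apparent clearance CL/F.
import numpy as np

def trapezoid(y, x):
    """Trapezoidal-rule area under y(x)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x) / 2.0))

times_h = np.array([0, 1, 2, 3, 4, 6, 8, 12])                       # hours post-dose
conc_ng_ml = np.array([5.1, 14.8, 12.0, 9.5, 8.2, 6.9, 6.0, 5.3])   # TAC, ng/mL
dose_mg = 3.0                                                       # hypothetical dose

auc_0_4 = trapezoid(conc_ng_ml[times_h <= 4], times_h[times_h <= 4])  # ng*h/mL
auc_0_12 = trapezoid(conc_ng_ml, times_h)
dose_adjusted_auc_0_4 = auc_0_4 / dose_mg                             # per mg of dose

# apparent oral clearance CL/F: dose (ng) / AUC over the dosing interval, in L/h
cl_f_l_per_h = (dose_mg * 1e6) / auc_0_12 / 1000
print(f"AUC0-4 = {auc_0_4:.1f} ng*h/mL; dose-adjusted = {dose_adjusted_auc_0_4:.1f}")
print(f"Apparent clearance CL/F ~ {cl_f_l_per_h:.1f} L/h")
```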
P-gp function was quantified by flow cytometric measurement of cyclosporine (CYA, 2.5 µM)-reversible efflux of the P-gp substrate DiOC2(3), expressed as the percent change in mean fluorescence intensity with CYA (%∆MFI-CYA) pre-TAC (0 hours) and at 4, 8 and 12 hours after TAC, and summarized as the AUC of %∆MFI-CYA. Results: Significant race-gender effects on TAC dose, CL, AUC and AUC %∆MFI-CYA were noted, with reduced P-gp function and more rapid CL in AA females. P-gp function was a significant covariate influencing TAC AUC0-4 (p=0.042).

Non-adherence is common among kidney transplant recipients. However, data on the impact of appointment and medication non-adherence on kidney transplant outcomes are scarce. This study aims to assess the relationship of non-adherence to laboratory assessments, clinic appointments, and medication regimens with kidney transplant outcomes. Methods: We analyzed kidney transplant recipients between 2005-2014 with a detailed review of the medical records to determine non-adherence and clinical outcomes. Baseline characteristics and outcomes were compared between two groups, patients with no documented non-adherence vs patients with documented non-adherence, using univariate and multivariate analyses. Results: 1,410 kidney transplant recipients were included in this analysis: 737 with no documented non-adherence and 673 with documented non-adherence. Patients with documented non-adherence tended to be younger, African American, and to have private insurance. Patients with no documented non-adherence tended to have a history of DM and heart disease, to receive an ECD kidney, and to have DGF (Table 1). Non-adherence was associated with an increased risk of acute rejection (OR 1.4). This association was even more pronounced with subsequent rejection episodes: OR 3 and 7.7 for second and third rejection episodes, respectively (Table 2). We also found a time-dependent association between non-adherence and graft loss (Fig. 1). Non-adherence is common following kidney transplantation, particularly to laboratory assessments and clinic appointments. It is associated with an increased risk of acute rejection and graft loss.

Results: The mean duration of follow-up (7.6 yrs) was similar in all groups. The cumulative cancer incidence was highest in the best adherent group and lowest in the worst adherent group (p=0.029). Those with the best adherence had an absolute cancer rate (per 100 pt-yrs at risk) of 6.2 cases during the first 5 yrs post-tx and 9.9 cases in yrs 5-10. For the least adherent group, the cancer rate was 4.1 cases for the first 5 yrs and 6.0 cases in yrs 5-10. If non-melanocytic skin cancers are excluded, the cancer incidence was significantly higher in the best adherent group [p=0.011]. After adjusting for age and treating death as a competing risk, adherence grouping remained a significant factor in the incidence of cancer [p=0.035], with the cancer hazard (vs the best adherence group) 35% lower in the worst adherence group and 49% lower in the midrange adherence group. Conclusion: Medication adherence, although lowering AR rates and increasing long-term graft survival, is associated with increased cancer risk. Individualizing treatment to decrease the risk of either graft loss or other adverse outcomes, including malignancy, is necessary. Trials are needed to determine how to achieve this balance.

Background: Kidney transplantation is associated with bone loss and an increased risk of fracture.
Since current therapeutic options to prevent bone loss are limited, we assessed the efficacy and safety of Receptor Activator of Nuclear Factor κB Ligand (RANKL) inhibition with denosumab to improve bone mineralization in the first year after kidney transplantation (NCT01377467). Methods: We enrolled 108 kidney transplant recipients and randomized 90 patients two weeks after surgery in a 1:1 ratio to receive denosumab (subcutaneous injections of 60 mg denosumab at baseline and after 6 months) or no treatment. The primary endpoint was the percentage change in bone mineral density (BMD) measured by DXA at the lumbar spine at 12 months. Results: After 12 months, the primary outcome of total lumbar spine BMD increased by 4.6% (95% CI 3.3-5.9%) in 46 patients in the denosumab group and decreased by 0.5% (95% CI -1.8 to 0.9%) in 44 patients in the control group (between-group difference 5.1%, 95% CI 3.1-7.0%, p<0.0001). Denosumab also significantly increased BMD at the total hip by 1.9% (95% CI 0.1 to 3.7%; p=0.035) over that in the control group at 12 months. Biomarkers of bone resorption (β-CTX, urine deoxypyridinoline) and bone formation (P1NP, BSAP) markedly decreased with denosumab (p<0.001), whereas 25(OH) and 1,25(OH)2 vitamin D3 were not changed by denosumab. Interestingly, there was a delayed decrease of PTH, a higher incidence of asymptomatic and transient hypocalcemia (≤1.9 mmol/l) (12 vs 1 episodes), but a lower incidence of hypercalcemia (>2.6 mmol/l) (37 vs 55 episodes) with denosumab. A 3-year safety follow-up showed excellent graft function (eGFR 59.7 vs 58.0 ml/min/1.73 m2) and a similar improvement of the persistent hyperparathyroidism in both groups. Conclusions: Antagonizing RANKL with denosumab effectively increased BMD in de novo kidney transplant recipients and reduced hypercalcemia, but caused more frequent episodes of asymptomatic hypocalcemia. Denosumab therefore appears to be suitable to improve bone health after kidney transplantation.

Objective: Treatment with an ACEI or ARB has been shown to have anti-inflammatory effects in animal models, but this effect has not been investigated in kidney transplant recipients. We aimed to study the effect of ACEI or ARB treatment on intragraft gene expression profiles of transplant kidney biopsies using microarrays. Methods: We identified 29 near-normal biopsies with a chronic sum allograft injury score (ct+ci+cv) ≤3 for gene expression profiling, comparing 2 groups: Group 1 (n=16), patients with no exposure to ACEI or ARB treatment, and Group 2 (n=13), patients with exposure to an ACEI or ARB for at least 6 months prior to kidney biopsy. Biopsies with a diagnosis of acute or chronic rejection, recurrent or de novo glomerular disease, or polyomavirus nephropathy were excluded. The gene expression profiles were studied by Affymetrix HuGene 1.0 ST expression arrays. Both groups had similar demographic characteristics in terms of age, race, sex, type of transplant, previous history of transplantation or acute rejection, panel reactive antibody levels, and immunosuppressive treatment. There were no differences in acute and chronic Banff allograft injury scores between the 2 groups. Intragraft gene expression profiles of ACEI- or ARB-treated Group 2 biopsies showed decreased interferon-gamma and rejection-associated transcripts (GRIT) and constitutive macrophage-associated transcripts (CMAT) compared to Group 1 biopsies.
There were no statistically significant differences in the expression of cytotoxic T cell-associated (CAT), regulatory T cell-associated (TREG), B cell-associated (BAT), natural killer cell-associated (NKAT), and endothelial cell-associated transcripts (ENDAT) between the 2 groups. Conclusions: Our data suggest that exposure to an ACEI or ARB was associated with down-regulation of GRIT and CMAT. This anti-inflammatory effect of ACEI or ARB treatment could be an additional benefit in kidney transplant recipients.

Patients. T. van den berg, G. Nieuwenhuijs-Moeke, S. Bakker, R. Pol, T. Lisman. Groningen Transplant Center, University Medical Center Groningen, Groningen, Netherlands. Introduction: It is generally assumed that dialysis patients (DP) have a more severe bleeding diathesis related to platelet dysfunction than pre-emptively transplanted patients (PEP), who are not yet on dialysis. Therefore, in many centers, PEP receive heparin with the aim of preventing graft thrombosis, whereas DP do not. However, a solid biochemical basis for this difference is lacking. Twenty-eight recipients were PEP and 28 were DP. Living kidney donors were included as a control group. PEP were given 5000 IU of heparin prior to reperfusion. Blood samples were taken at incision and two hours after skin closure. Five minutes after reperfusion, blood samples were taken from the renal vein and the systemic circulation. The following hemostatic and fibrinolytic parameters were analysed: platelet factor 4 (PF4) as a specific platelet activation marker, prothrombin fragment 1+2 (F1+2) for coagulation activation, and D-dimer for clot breakdown. Plasma hemostatic and fibrinolytic potential was studied by thrombin generation (TGA) and clot lysis time (CLT) assays. Results: At the start of surgery, both DP and PEP showed activation of platelets and coagulation, as evidenced by elevated levels of PF4, F1+2, and D-dimer compared to living donors. In addition, both DP and PEP showed a decreased plasma fibrinolytic potential on CLT assays. The pre-transplantation hemostatic status of DP and PEP was remarkably similar. Five minutes after reperfusion, there was much more pronounced platelet activation, without concomitant coagulation activation, in DP compared to PEP. This was corroborated by a substantial increase in PF4 in DP. Two hours post-surgery, activation of platelets (PF4), coagulation (F1+2, D-dimer), and plasma coagulatory potential (TGA) were substantially higher in DP compared to PEP. At that time, DP also displayed a more pronounced hypofibrinolytic state compared to PEP. Discussion: Prior to kidney transplantation, DP and PEP have a comparable hemostatic state. Post-transplantation, DP show a more pronounced activation of coagulation and inhibition of fibrinolysis, presumably related to the heparinisation of PEP. These results strongly contrast with the general belief that DP have an increased bleeding risk. Our results suggest that many centers erroneously withhold adequate peri-operative anticoagulation from DP during transplantation.

Conclusion: Patients with prior bariatric surgery have increased GL compared to patients with BMI >35. The causes could be decreased CNI/MMF absorption due to the surgery itself or increased oxalate absorption leading to crystallization in the transplanted kidneys. Our findings question the advantages of bariatric surgery in patients with high BMI, especially at centers with a BMI cut-off criterion for transplantation. Further studies involving larger numbers of such patients should be considered.
Smoking Exposure Among Kidney Allograft Recipients and Outcomes After Transplant. H. Gillott,1 S. Tahir,1 F. Jackson-Spence,1 F. Evison,2 J. Nath,3 A. Sharif.3 1Medical School, University of Birmingham, Birmingham, United Kingdom; 2Medical Informatics, Queen Elizabeth Hospital, Birmingham, United Kingdom; 3Nephrology and Transplantation, Queen Elizabeth Hospital, Birmingham, United Kingdom. The causal effects of smoking on major comorbidity and mortality in the general population are well established. However, the influence of smoking exposure on kidney transplants remains uncertain. The aim of this study was to determine patient- and graft-specific outcomes for kidney allograft recipients stratified by any smoking exposure at a large UK centre. Methods: Data were extracted by the hospital's informatics team for all kidney transplants at the Queen Elizabeth Hospital, Birmingham, between January 2007 and January 2015. Electronic patient records were manually searched to facilitate data linkage between various sources to create a comprehensive database of baseline demographics, donor details and clinical events. Results: Of 1,140 patients transplanted, 280 (24.0%) had documented smoking exposure and were classified as ever smokers for analysis. Ever smokers were more likely to be male (71.5% v 56.0%, p<0.001) and Caucasian (76.3% v 70.8%, p=0.044), but there was no association with any other baseline demographic. There was no significant association between smoking exposure and mortality (8.8% v. 6.6%, p=0.139) (median follow-up 4.4 years). Ever smokers had higher rates of post-transplant major morbidity, including cancer (10.2% v. 4.8%, p=0.002), diabetes (11.3% v. 7.2%, p=0.029) and cardiac events (11.3% v. 4.4%, p<0.001). Ever smokers had a significantly increased risk of graft failure (21.5% v. 14.2%, p=0.003), driven by death-censored graft losses (13.9% v. 9.0%, p=0.016). Of the 502 patients who underwent biopsies, there were significant histological differences between the groups, with higher rates of thrombotic microangiopathy (4.0% v. 1.7%, p=0.029), acute tubular injury (16.8% v. 12.1%, p=0.032) and chronic allograft damage (5.8% v. 2.8%, p=0.016). Renal function was also consistently inferior in ever smokers, with higher creatinine levels at 12 months (185.68 µmol/l v 155.54 µmol/l, p=0.018) and similar findings at 0, 1, 3, 6, 9, 36 and 60 months. Indeed, using a Cox regression model, smoking exposure was shown to independently increase overall graft loss. Conclusion: This large single-centre study demonstrates that smoking exposure is associated with negative post-transplant outcomes; it increases patient morbidity, increases graft failure and consistently reduces renal function.

We studied the role of the phosphatidylinositol-3-kinase (PI3K) γ and δ pathways in alloimmunity. Foxp3-GFP PI3Kγ and PI3Kδ knockout mice (C57BL/6 background) were generated and used in heart transplant and GVHD models. While PI3Kγ-/- and PI3Kδ-/- recipients of BALB/c hearts exhibited significant heart allograft survival prolongation compared to WT (MST: 14, 14, and 7 days, respectively), the administration of low-dose CTLA4Ig (250 mg on day 2) induced indefinite allograft survival in PI3Kγ-/- recipients compared to WT (MST: >100 and 41 days), with a marked increase in Tregs and a reduced percentage of CD4+ and CD8+ Teff, along with significant suppression of Th1/Th17 cytokines in the spleen and draining lymph nodes (DLN).
Surprisingly, the absence of PI3Kδ abrogated the effect of CTLA4Ig treatment, with a marked decrease in Tregs and a significant increase in CD4+ and CD8+ Teff in the spleen and DLN of PI3Kδ-/- recipients, along with significant upregulation of Th1/Th17 cytokines. Adoptive transfer of Foxp3-GFP PI3Kδ-/- and PI3Kγ-/- Tregs into a GVHD model showed that PI3Kδ-/- Tregs underwent more apoptosis than PI3Kγ-/- and WT Tregs. We also examined the effect of pharmacologic inhibition of PI3K isoforms using IPI-1828 (a PI3Kδ-selective inhibitor) and INK-055 (a PI3Kγ and δ inhibitor). BALB/c heart allografts were transplanted into C57BL/6 and PI3Kγ-/- mice, and recipients were treated with IPI-1828 or INK-055, with or without low-dose CTLA4Ig. WT and PI3Kγ-/- recipients treated with IPI-1828, and PI3Kγ-/- recipients treated with IPI-1828 plus low-dose CTLA4Ig, showed allograft survivals similar to the previous model (MST: 13, 20, and >100 days, respectively). Furthermore, the allograft survival of WT recipients treated with INK-055 plus low-dose CTLA4Ig confirmed that PI3Kδ abrogates the effect of CTLA4Ig. Using double knockout recipients (CD28-/- PI3Kγ-/- and CD28-/- PI3Kδ-/- mice), allograft histology showed less lymphocyte infiltration in CD28-/- PI3Kγ-/- recipients compared with CD28-/- PI3Kδ-/- recipients. Allograft survival data also reflected the histology of CD28-/- PI3Kγ-/- and CD28-/- PI3Kδ-/- recipients (MST: >30 and 27.5 days). Our data show a differential role of PI3Kγ and PI3Kδ in Treg homeostasis and function, with significant implications for future PI3K-based therapies in solid organ transplantation.

Here, we investigate the mechanisms underlying this resistance. Interestingly, enhanced humoral responses in KO mice were not responsible for the CTLA4-Ig-resistant rejection, as we observed that CTLA4-Ig treatment was able to inhibit graft-specific antibody formation in both WT and KO mice, indicating that a different mechanism may be responsible for the accelerated rejection seen in KO animals. Given the interplay between T and B cells during alloreactive responses and the rise in the use of monoclonal antibody therapy to control T cell alloreactivity, we sought to investigate the role that FcgRIIB may play in T cell-mediated graft rejection. While T cells have long been thought not to express Fc receptors, we observed expression of FcgRIIB on allospecific CD4 and CD8 effector T cells by day 14 following skin transplantation (61.5% and 64.4%, respectively), and this expression was maintained at a high level on memory T cells. Furthermore, we showed that most allospecific cells capable of making TNF and IFNγ in response to ex vivo restimulation also coexpressed FcgRIIB (88.3%). With this novel finding of high FcgRIIB expression on donor-reactive memory T cells and the fact that memory T cells are a significant barrier to transplantation, we sought to determine if FcgRIIB functionally impacts recall responses to a graft. We transferred OVA-specific CD4 and CD8 T cells to B6 mice and allowed them to fully reject mOVA skin grafts. At a memory timepoint, these mice were given a second mOVA graft, and recipients were either left untreated or treated with the FcgRIIB-blocking 2.4G2 antibody. We observed a greater expansion of CD8 T cells in the draining lymph node of 2.4G2-treated animals, suggesting that FcgRIIB plays a significant functional role in the control of secondary donor-reactive CD8 T cell expansion in this model.
Taken together, these data identify FcgRIIB as a novel coinhibitory pathway regulating donor-reactive memory CD8 T cell responses in the setting of transplantation.

Apoptotic bodies (ApoBodies) and apoptotic exosome-like MV (ApoExo) released by apoptotic endothelial cells in vitro were purified by sequential centrifugation and analysed by electron microscopy and comparative proteomics. C57Bl/6 mice transplanted with an aortic graft from an MHC-mismatched BALB/c donor, or non-transplanted C57Bl/6 mice, were injected intravenously with apoptotic MV for up to 3 weeks. Injection of ApoBodies failed to induce an immunogenic response. Injection of ApoExo induced a strong anti-LG3 IgG response in both non-transplanted (p<0.001) and transplanted (p<0.05) mice. Donor-specific antibodies were detected in allografted mice, but their levels were not affected by the injection of MVs. The injection of ApoExo into transplanted mice, but not of ApoBodies, heightened the severity of rejection, as shown by increased neointima formation (n=6, p<0.01), infiltration by CD3+ T cells (p<0.01) and CD20+ B cells (p<0.01), and C4d deposition (p<0.001). Proteomic analyses identified perlecan/LG3 and the 20S proteasome specifically in ApoExo. The proteolytic activity of the 20S proteasome was significantly increased in ApoExo (p<0.01) compared to ApoBodies or cell extracts. Proteasome inhibition in endothelial cells with bortezomib did not reduce the production of ApoExo but significantly impeded their proteolytic activity (p<0.001). Injection of proteasome-inhibited ApoExo led to reduced recruitment of both B (p<0.01) and T (p<0.05) cells to the allograft, in association with decreased C4d deposition (p<0.05) and reduced circulating levels of anti-LG3 IgG (p<0.05). These results identify apoptotic exosome-like vesicles as novel accelerators of rejection, and the 20S proteasome, active within these MV, as an important determinant of immune responses to the graft.

Acute allograft rejection is mediated by alloreactive CD8+ T cells that differentiate into cytotoxic lymphocytes (CTL). In most rodent models, CTL differentiate from naïve CD8+ T cells within secondary lymphoid organs in response to alloantigen presented by professional antigen-presenting cells (APC). Human allografts may be rejected by circulating effector memory T (TEM) cells that recognize non-self class I and II MHC molecules on graft endothelial cells (EC), infiltrate the graft, and mediate rejection without the involvement of professional APCs or secondary lymphoid organs. While much is known about the differentiation of rodent naïve CD8+ T cells into CTL, the requirements for differentiation of human CD8+ TEM into CTL are not well defined. Here we tested whether human CD4+ TEM, activated by EC class II MHC molecules, are required for this process. Consistent with this hypothesis, preventing recognition of class II MHC molecules expressed by EC lining human artery segments interposed into the aortae of immunodeficient mice reduces graft infiltration by adoptively transferred allogeneic human T cells, as well as CD8+ T cell differentiation within, and acute rejection of, the human artery. In vitro, siRNA knockdown or CRISPR/Cas9 ablation of class II MHC molecules on allogeneic EC prevents CD4+ TEM from producing sufficient IL-2 to help CD8+ TEM expand and differentiate into CTL in response to the same EC.
Synthetic human microvessels formed by EC within collagen gels and then implanted into SCID/bg mice may be rejected by allogeneic T cells in a process that is mitigated by removal of CD8+ T cells. Microvessels engineered from human EC that had undergone CRISPR/Cas9-mediated ablation of class II MHC molecule expression are significantly protected from CD8+ T cell-mediated vessel destruction in vivo compared to control EC microvessels. We conclude that human CD8+ TEM-mediated acute vascular rejection, which targets graft class I MHC molecules, requires help from CD4+ TEM cells activated by recognition of class II MHC molecules on allogeneic EC. These preclinical studies predict that human allografts or tissue-engineered constructs that incorporate allogeneic EC will be significantly less prone to CTL-mediated rejection.

Methods: Human islets were transplanted into diabetic, athymic mice, and recipient plasma was analyzed for transplant islet-specific exosomes (TISE) using anti-HLA antibody quantum dots on a NanoSight nanoparticle detector. Rejection was induced by infusion of syngeneic leukocytes (n=15), and the plasma TISE signal was characterized. TISE were purified using affinity antibody beads, and the exosome protein (Western blot, mass spectrometry) and RNA (RT-PCR, microarray) cargoes were profiled. Results: TISE were reliably tracked in all normoglycemic recipient plasma samples (n=20 versus n=10 naive animals) over follow-up (14 to 96 days) (p<0.001). Immune rejection caused a quantifiable decrease in the recipient plasma TISE signal (p<0.001) (Figure 1). Purified TISE cargo contained bona fide islet endocrine-specific markers insulin, glucagon, and somatostatin at the protein and mRNA levels. Microarray showed distinct enrichment of microRNAs in TISE compared to the human islet graft. Further, islet rejection led to a decreased TISE insulin signal. Proteomic profiles of TISE performed using mass spectrometry showed distinct changes in proteins associated with immune rejection. Microarray analysis showed distinct changes in the microRNA profiles of TISE with transplant rejection. Conclusions: Transplanted human islets release quantifiable islet-specific exosomes into recipient plasma, and TISE quantity decreases with immune rejection. TISE characterization shows condition-specific (immune rejection) changes in their proteomic and RNA cargoes. These findings validate the biomarker potential of transplant tissue-specific exosome characterization in transplantation diagnostics.

Purpose: Over 30% of children listed for liver transplant (LT) have exceptions filed to increase their PELD/MELD. Non-standard exception requests (NSER) are granted by regional review boards. The impact of denial of NSER has not been studied. Methods: UNOS data from 2009-2014 on pediatric waitlist (WL) candidates aged 0-18 years were analyzed, excluding standard exceptions. Competing-risks regression evaluated WL death or removal for being too sick, accounting for LT; backward stepwise selection (variables with p<0.1) was used to select multivariable (MV) models. Results: Of 2747 WL candidates, 47%, 28%, and 25% were listed at age <2, 2-11, and 12-18 years, respectively, with 54% female, 22% Hispanic, and 43% biliary atresia. Of 1216 with NSER (44%), 6.5% were never approved (n=79). WL death/removal occurred in 14% of those never approved, 5% of those approved (n=1137), and 7% of those with no NSER (n=1531).
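To illustrate the competing-risks framing used in the methods above, where transplantation competes with waitlist death/removal, the sketch below computes a nonparametric cumulative-incidence (Aalen-Johansen-type) estimate on a hypothetical mini-cohort. It shows the descriptive quantity only and is not the study's regression model.

```python
# Illustrative cumulative-incidence estimate for waitlist death/removal with
# transplantation as a competing event (hypothetical data, not SRTR/UNOS data).
import numpy as np

def cumulative_incidence(time, event, cause_of_interest):
    """time: follow-up in days; event: 0=censored, 1=death/removal, 2=transplant."""
    time, event = np.asarray(time, float), np.asarray(event, int)
    surv, cif, curve = 1.0, 0.0, []
    for t in np.unique(time[event > 0]):              # ordered event times
        at_risk = np.sum(time >= t)
        d_interest = np.sum((time == t) & (event == cause_of_interest))
        d_any = np.sum((time == t) & (event > 0))
        cif += surv * d_interest / at_risk            # increment from cause of interest
        surv *= 1.0 - d_any / at_risk                 # overall event-free survival
        curve.append((t, cif))
    return curve

days  = [30, 45, 60, 90, 120, 150, 200, 400]          # days on the waitlist
codes = [2,  1,  2,  0,  1,   2,   0,   1]            # outcome codes as above
for t, ci in cumulative_incidence(days, codes, cause_of_interest=1):
    print(f"day {int(t):>3}: cumulative incidence of death/removal = {ci:.3f}")
```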
Denial was associated with increased risk of WL death/removal in univariate (HR=2.9, 95% CI 1.5-5.5, p=0.001) and MV (HR=2.1, 95% CI 1.0-4.2, p=0.04) analyses compared to NSER approval (Table 1). In the MV model, higher MELD/PELD increased WL death/removal (HR 1.06, 95% CI 1.04-1.07, p<0.001). Gender, prior LT, region, and list year did not predict WL death/removal.

Hepatoblastoma (HB) and hepatocellular carcinoma (HCC), along with hemangioendothelioma, hemangiosarcoma, and angiosarcoma (HHA), are rare tumors in children that are treated with orthotopic liver transplantation (OLT) when partial hepatectomy is not feasible. The purpose of this study was to examine the effect of era and patient age on late outcomes after OLT in the management of pediatric malignant tumors. Pediatric (<18 years at listing) liver transplant recipients with HB, HCC and HHA were identified using Scientific Registry of Transplant Recipients data from 1987 to 2014. Graft and patient survival were analyzed using Kaplan-Meier and multivariable Cox proportional hazards regression methods. Six hundred forty-one OLTs (489 HB, 97 HCC, 55 HHA; 241 during 1987-2004, 400 during 2005-2014) were performed in 603 patients (5% with >1 OLT). Overall follow-up time averaged 66±75 months (range: <1 to 312) and did not differ between the three tumor types (p=0.371). While there were no differences in overall graft (p=0.096) or patient survival (p=0.170) between the three tumor types, graft and patient survival were better overall (both p≤0.001) in the more recent era for HB and HCC (Figure 1). After adjusting for previous abdominal surgery (p=0.147), HB recipients in the upper quartile for age at OLT (>45 months) had a 72% increased risk (HR=1.72, 95% CI: 1.03-2.86, p=0.037) of failure of the first/only graft compared to those in the lower quartile (≤18 months). Long-term outcomes after OLT for primary pediatric malignant tumors have improved during the last decade for patients with HB or HCC, but not for patients with HHA. Younger patients with HB had better initial graft survival rates after OLT. The low retransplant rate and excellent long-term graft and patient survival rates demonstrate the benefits of OLT when partial hepatectomy is not possible.

Allocation of deceased donor livers in the U.S. is prioritized by waitlist mortality, as predicted by PELD for children <12 years and MELD for individuals >12 years. Adult candidates who experience a sudden increase ("spike") in MELD have substantially higher waitlist mortality, suggesting that they might require different allocation priority. However, pediatric patients have potentially different disease trajectories, and it remains unknown if such a spike affects waitlist mortality for pediatric candidates in a similar manner. Methods: Pediatric (age <18) patients waitlisted between 2002 and 2015 were studied using SRTR data. Individuals were excluded if they received a multi-organ transplant, re-transplant, exception points, or status 1a/b priority. A spike was defined as a greater than 80% increase in PELD/MELD over the lowest value in the preceding 7 days. Multivariate logistic regression was performed to predict the odds of 7-day waitlist mortality for individuals with a spike compared to those without a spike, after adjusting for PELD/MELD; the association between a spike and 7-day mortality was also assessed at different PELD/MELD scores. Results: Of 2,129 eligible registrants, 252 experienced a PELD/MELD spike.
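The spike definition in the methods above, a PELD/MELD more than 80% higher than the lowest score recorded in the preceding 7 days, can be operationalized directly; the column names and example scores below are hypothetical.

```python
# Sketch of the spike flag: current score > 1.8 x (minimum score in the prior 7 days).
# Column names and example values are hypothetical.
import pandas as pd

def flag_spikes(scores: pd.DataFrame) -> pd.Series:
    """scores has columns 'date' (datetime) and 'meld' (current PELD/MELD)."""
    scores = scores.sort_values("date").set_index("date")
    # lowest score in the preceding 7 days, excluding the current value
    prior_min = scores["meld"].shift(1).rolling("7D", min_periods=1).min()
    return scores["meld"] > 1.8 * prior_min

df = pd.DataFrame({
    "date": pd.to_datetime(["2015-03-01", "2015-03-04", "2015-03-06", "2015-03-08"]),
    "meld": [14, 15, 16, 30],
})
print(flag_spikes(df))   # only the jump to 30 on 2015-03-08 is flagged as a spike
```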
A spike was associated with a 41% increase in 7-day waitlist mortality (OR=1.41, 95% CI 1.01-1.98, p=0.041) after adjusting for current PELD/MELD. However, the association between spike and 7-day mortality varied by PELD/MELD (interaction: p<0.001), conferring a higher risk of mortality for registrants with PELD/MELD below 30, but actually a lower risk of mortality for registrants with PELD/MELD of 40. Background: De-novo autoimmune hepatitis (DAIH) is a chronic immune-mediated graft disorder seen following liver transplantation (LT). The long-term course and outcome of DAIH are largely unknown. A retrospective multi-center study assessing these aspects is presented. Methods: Children with DAIH were followed from diagnosis until December 31, 2012, death, re-LT or transfer of care, and for a minimum of 1 year. DAIH was diagnosed based on the Banff working group recommendations (Hepatology 2006). Long-term outcome data were collected in 5 LT centers. Results: Thirty-one patients (58% females) with DAIH out of 1833 (1.7%) LT were identified. Follow-up data were available for 29 patients. Seventeen (59%) patients underwent LT for biliary atresia, 5 for acute liver failure and 7 for other etiologies at a median age of 1.5 years (range: 0.2-12.3). At least one episode of acute rejection was diagnosed in 14 (48%) patients before the diagnosis of DAIH. DAIH was diagnosed at a median of 5.3 years (range: 1.2-14.9) post LT. Median ALT at diagnosis was 108 IU/L (IQR: 40-216), GGT 72 IU/L (IQR: 23-173), IgG 16.7 g/l (IQR: 16.2-19.1) and total bilirubin 0.7 mg/dl (IQR: 0.5-0.9). Prednisone at a mean dose of 0.84±0.66 mg/kg was initiated in 21 patients and azathioprine in 14. Patients were followed for a median of 7.1 years (range: 1.6-15). Immunosuppressive therapy at the last visit was calcineurin inhibitor-based in 27 patients. Eighteen (62%) patients were maintained on prednisone at a median dose of 0.06 mg/kg and 21 (72%) on either azathioprine or mycophenolate mofetil. Exacerbation of DAIH was diagnosed in 17 patients during the follow-up period. Re-LT due to portal/hepatic vein pathologies in the presence of DAIH was required in 2 patients. One patient died from a non-DAIH related pathology. Portal hypertension (PHT) secondary to DAIH was documented in an additional 2 patients. Abnormal liver enzymes (>2X ULN) were detected in 10 (38%) patients at their last follow-up, most commonly GGT. Conclusions: DAIH is an uncommon chronic complication following pediatric LT. It requires prolonged and augmented immunosuppressive therapy, is associated with continued mild graft dysfunction, and very rarely may lead to PHT but not to graft failure. Objective: The MALT [Medication Adherence in children who had a Liver Transplant (LT), NCT-01154075, R01DK080740 (NIDDK)] multisite study prospectively evaluated a biomarker for erratic medication adherence (MLVI) as a predictor of late allograft rejection (LAR). The primary hypothesis was that a higher MLVI will predict a masked central pathology reading of LAR. Methods: Subjects were 1-17 year old LT recipients, more than one year after LT, who did not have LAR in the year prior to enrollment. MLVI, defined as the standard deviation (SD) of sequential tacrolimus blood levels, was calculated for each subject from at least three out-patient levels (see the sketch below). Decisions to do a liver biopsy were clinically determined at sites. Biopsies were read locally to drive immunosuppression strategy and centrally for the study endpoint.
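As a worked illustration of the MLVI definition just given, the sketch below computes the standard deviation of a patient's sequential outpatient tacrolimus levels and applies the >2 threshold reported for adolescents later in the Results. Whether the study used the sample or the population standard deviation is not stated in the abstract, so the use of the sample SD here is an assumption.

```python
from statistics import stdev

def mlvi(tacrolimus_levels, threshold=2.0):
    """Medication Level Variability Index: the SD of sequential outpatient
    tacrolimus trough levels (ng/mL); requires at least three levels.
    Returns the MLVI and whether it exceeds the reported threshold (>2)."""
    if len(tacrolimus_levels) < 3:
        raise ValueError("MLVI requires at least three outpatient levels")
    value = stdev(tacrolimus_levels)  # sample SD; the study's exact formula is assumed
    return value, value > threshold

# Example: erratic levels give an MLVI of about 4.3, well above the threshold.
print(mlvi([3.1, 9.8, 4.4, 12.0]))  # (4.26..., True)
```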
Locally diagnosed LAR both histologically and clinically (modification of immunosuppression) were also recorded. Pre-defined secondary analyses evaluated the MLVI-LAR relationship in adolescents. Results: 5 centers enrolled 400 patients (attrition <2%). A higher MLVI predicted later development of the primary and secondary endpoints: centrally confirmed LAR [mean pre-rejection MLVI for patients with LAR: 2.4 (3.6 SD) vs. without LAR, 1.6 (1.1); p=0.026], site-read LAR [2.6 (3.8) vs. 1.6 (1.1), p=0.007], and immunosuppression modification because of suspected LAR [2.5 (3.5) vs. 1.6 (1.1)], p=0.004]. In adolescents (>12 years old), the Area Under the Receiver-Operator-Characteristic Curve was 0.78 (0.61 for the entire cohort). 45% of adolescents who met the MLVI threshold (>2) in year 1 had LAR in year 2, as compared with LAR rate of 8% for those below the threshold. Conclusions: This prospective multisite study has shown that the MLVI, a costeffective innovative extension of existing clinical practice, is a robust predictor of LAR. As one of few known biomarkers of human behavior, the MLVI could inform behavioral interventions to improve post-transplant outcomes in pediatric liver transplantation and in other patient groups. Background: Biliary complications can result in a significant morbidity for split liver graft recipients. The segmental biliary anatomy, particularly the left biliary system is poorly defined. The anatomy of Segment 1 and 4 ductal drainage is highly variable and could be the source of bile leak for left lateral segment (LLS) split liver grafts. Use of a bench cholangiogram (BC) can be helpful to accurately identify the number and drainage pattern of segmental biliary system. This information further guides the line of parenchymal transection and identify significant biliary radicles that needs ligation to reduce biliary complications. A BC is routinely performed during a deceased split liver procedure at our institute. The aim of the study is 1) to assess the incidence of variant biliary anatomy with particular attention to Segment 1 and 4 ducts and 2) to evaluate the clinical relevance of routine bench cholangiogram in LLS grafts. Methods: 100 bench cholangiograms between January 2009 and January 2015 were analysed. The images were reviewed by two surgeons and the biliary anatomy was compared using Huang and Busuttil's classification for right and left systems respectively. The standard line of parenchymal transection is 1cm to the right of falciform ligament and any deviations based on the information from BC was noted. Results: A standard left ductal system was noted in 45 (45%) cases. The left bile duct anatomy was outside the Busuttil's classification in 22 cases (22%). BC results guided the line of parenchymal transection to obtain a single Segment 2&3 duct in 15 cases. The cut-surface of left lateral segment had a common Segment 2&3 duct in 88 cases (88%). A surgical intervention in the form of suture ligation of significant Segment 1/4 duct at bench preparation was performed in 6 cases. No bile leaks were noted in this cohort. Conclusion: BC is a useful tool to guide liver parenchymal transection and potentially reduce the incidence of biliary complications. The anatomy of left biliary system is highly variable. A new classification including segment 1 ductal anatomy is needed for split liver procedure. Introduction: One of the major advances in xenotransplantation is the ability to modify donor pig genetics. 
Using rapidly evolving techniques of pig cloning and gene editing, we are now able to produce pigs with multiple genetic modifications. The ideal genetic construct is still to be determined, but the results of sequential iterations are encouraging. Methods: In a heterotopic model, specific pathogen-free baboons were utilized as recipients due to low levels of non-Gal antibody. Starting with alpha-1,3-galactosyltransferase knockout (GTKO) as a base, seven different groups of pigs were donors with transgenic expression of one to five human genes. These genes included CD46, hDAF, CD39, EPCR, TBM, A20, CD47 and TFPI. The genetic modifications were designed to avoid complement and coagulation pathway incompatibilities as well as to help modulate the B and T cell response. All groups received our standard immunosuppression that targets inhibition of complement, elimination of B and T cells, and costimulation blockade. Results: The graft survival with different donor genetic modifications was as follows: GTKO.hCD46 (236 days), GTKO.hDAF (7 days), GTKO.hCD46 (without B cell elimination) (8 days), GTKO.hCD39 (47 days), GTKO.hDAF.hEPCR (47 days), GTKO.hCD46.hTBM (945 days), GTKO.hCD46.hTBM.hCD39.hEPCR.hDAF (>143 days) and GTKO.hCD46.hEPCR.hDAF.hTFPI.hCD47 (>68 days). The last two groups are still ongoing. In the last three groups the costimulation-blocking antibody was also changed from anti-CD154 antibody to anti-CD40 antibody to avoid consumptive coagulopathy. Conclusion. We have demonstrated significant progress in graft survival by replacing incompatible pig genes with human genes, which aided in avoiding several complications such as thrombotic microangiopathy and consumptive coagulopathy and also ameliorated the cellular inflammatory process. Further genetic modifications are in progress to produce donor organs with the ideal genetic makeup to facilitate clinical cardiac xenotransplantation. Objective A major obstacle in the success of cardiac xenotransplantation has been the development of significant consumptive thrombocytopenia, frequently limiting recipient survival. Advances in genetic engineering have allowed expression of multiple humanized genes in order to achieve greater xenograft acceptance. We now detail the incorporation of human tissue factor pathway inhibitor (hTFPI) and thrombomodulin (hTBM) transgenes to attenuate critical consumptive thrombocytopenia. Methods Heterotopic cardiac xenotransplantations were performed on baboons (7-15 kg) using grafts from genetically-modified swine (α-1,3-galactosyltransferase knockout with hCD46, endothelial protein C receptor [EPCR], and decay accelerating factor [DAF] transgenes). Either hTFPI or hTBM was also included. Subjects were maintained on conventional and targeted immunosuppression. Results All heterotopic heart transplants were completed without perioperative complications. Compared to historical controls, where a significant decrease in platelets was observed within one week of transplantation, baboons receiving hTFPI- or hTBM-expressing xenografts maintained superior platelet counts (210K vs 337K, p=0.002). Non-hTFPI/hTBM subjects were associated with a greater initial reduction of platelets postoperatively (35.92% decrease vs 7.43% decrease, p=0.01) and were found to have thrombocytopenia-related complications including DIC, GI bleeding, anemia requiring multiple transfusions, and decreased survival.
Hemorrhagic episodes remain absent in the hTFPI and hTBM cohorts; platelet counts recovered back to baseline levels within seven days following transplantation and none have experienced critical thrombocytopenia (platelet counts <100,000). No difference in platelet levels was found between hTFPI and hTBM groups at 14 days (374K vs 385K, p=0.93). Conclusion The inclusion of hTFPI or hTBM transgenes appears to mitigate the presence of severe consumptive thrombocytopenia and resultant critical hemorrhagic-associated complications. Implementation of these additional humanized genes is a significant development toward improving cardiac xenograft outcomes. Background Xenotransplantation (XTx) involves the participation of the innate immune system, with natural antibody-producing B cells, NK cells, macrophages and complement, as well as the adaptive immune system (T and B cells). Recently we have reported cardiac xenograft survival of over 2.5 years in a pig-to-baboon heterotopic transplantation model using a modified immunosuppression (IS) regimen, which includes depletion of B cells and costimulation blockade by anti-CD40 antibody. In this study we demonstrate that continuous treatment with high-dose anti-CD40 antibody can prevent xenograft rejection. Methods Heterotopic cardiac XTx was performed from GTKO.hCD46Tg (n=8) and GTKO.hCD46.hTBM (n=5) GE pigs into baboons using an IS regimen that included ATG, anti-CD20 antibody, MMF, CVF, and co-stimulation blockade (anti-CD154 or anti-CD40) antibody. Telemetry, palpation and ultrasound (U/S) were used to evaluate xenograft function. Baboon peripheral blood lymphocytes (PBLs) and serum were collected twice a week for the first two months post XTx and then monthly thereafter. Mixed lymphocyte reaction (MLR) and flow cytometry analysis were performed on PBLs to analyze the T cell response and the relative percentages of B and T cell subsets. Non-Gal IgG and IgM antibodies were measured in serum using their binding to pig aortic endothelial cells (PAECs). Antibody secretion from stimulated PBLs of recipient baboons was also measured by ELISA. The GE pig cardiac xenografts along with modified IS (either anti-CD154 or anti-CD40 antibody) survived up to 945 days. In MLR, the in vitro T and B cell immune responses were suppressed. The non-Gal IgG and IgM antibodies were inhibited in long-term survivors. Regulatory T cell numbers were also maintained. After one and a half to two years, anti-CD40 antibody was withdrawn from long-term xenograft survivors (n=2), which resulted in rejection of the GE pig hearts over an 8-10 week period. During this period serum levels of non-Gal antibodies progressively increased. These results demonstrate that costimulation blockade by anti-CD40 antibody effectively suppresses the adaptive immune response and xenograft rejection of GE pig hearts. Purpose: Successful implementation of neonatal porcine islet (NPI) xenotransplantation requires an understanding of the xenospecific immune elements influencing engraftment. To elucidate these early xenospecific processes, we applied a unique in vivo dual transplant model comparing alloislets (AI) to either wild type (WT) or 1,3-galactosyltransferase knockout (GKO) xenoislets. Methods: Equivalent NPI and AI masses were infused into randomly assigned contralateral hemilivers of rhesus recipients via the right and left portal veins. Six primates received AI and WT NPIs; 3 were studied at one hour and 3 at 24 hours post-infusion.
Assessment consisted of extensive liver sectioning with immunohistochemical staining for protein and cellular immune components with digital quantification. Three additional animals were rendered diabetic, and then administered AIs and GKO NPIs utilizing anti-LFA-1, anti-CD154, and CTLA-4Ig based immunosuppression. Livers histology from these animals was studied 7 days post-infusion. Results: Porcine specific stains confirmed separation of NPIs and AIs by liver lobe (p<0.05). Within 24 hours, there was significantly higher NK cell infiltration (p<0.01 at 1 hour and p=0.03 at 24 hours) and apoptosis (p=0.02) in WT NPIs over AIs. At 7 days post infusion, macrophage (p=0.02) and NK cell (p=.05) staining was significantly stronger in GKO NPIs compared to AIs, with a similar trend in IgM and CD3 (T cell) staining. These recipients had improved glucose homeostasis with stronger insulin staining in AIs than GKO NPIs (p=0.02). C4d complement staining was also stronger in AIs than GKO NPIs (p=0.06). The separation of islet preparations within an individual recipient creates a highly controlled environment to objectively compare two different islet phenotypes and the immunologic factors that affect their early engraftment. We demonstrate higher early apoptosis and NK cell infiltration specific to WT NPIs when compared to AIs. After a week, glucose homeostasis was dominated by AI likely due to physiologic differences. Rigorous costimulation/adhesion blockade still permits innate cellular responses towards GKO NPIs significantly greater than AIs. These data suggest a need to target elements of innate cellular immunity in order to improve xenoislet engraftment. We have recently demonstrated that allogeneic islets engineered to transiently display on their surface a novel form of FasL protein chimeric with streptavidin (SA-FasL) induce tolerance in allogeneic mouse models. In this study, we tested the efficacy of this approach in inducing tolerance to porcine islets transplanted into mice using two different transplant sites; subrenal and intraportal. Methods: Porcine islets were modified with 5 µM biotin and engineered with SA-FasL protein (~200 ng/1000 islets). SA-FasL-engineered islets were transplanted under the kidney capsule (~2000 islets/transplant) or intraportally (2,500 islets/ transplant) in streptozotocin diabetic C57BL/6 mice under a transient cover (20 days) of rapamycin. Unmodified pancreatic islets with rapamycin treatment served as controls. Results: Porcine islets were effectively engineered with SA-FasL protein without a detrimental effect on their viability and function. SA-FasL-engineered islets induced apoptosis in responding mouse T cells in in vitro co-culture experiments. All control grafts with transient rapamycin treatment were rejected within 30 days post-transplantation. In marked contrast, all SA-FasL-engineered porcine grafts showed prolongation and ~60% of intraportal (n=37) and ~80% of subrenal (n=22) grafts survived for a 300-day observation period. Intraperitoneal glucose tolerance test demonstrated normal function of long-term islets, with the subrenal model performing better than the intraportal model. There was no detectable signs of acute or chronic toxicity of the procedure. The long-term euglycemia was due to the transplanted porcine islets as the surgical removal of the grafts resulted in prompt hyperglycemia. 
Conclusion: SA-FasL as an immunomodulatory molecule is effective in inducing tolerance to porcine islet grafts transplanted subrenally as a standard site in rodents and intraportally as a clinically applicable yet immunologically more challenging site. Xenotransplantation is a potential solution for the shortage of organs for transplantation. However, potent xenograft rejection responses present a formidable barrier. Mixed xenogeneic hematopoietic chimerism has been shown in mouse models to induce tolerance among T, B and NK cells. NK cells play an important role in the rejection of xenogeneic tissues. We have shown previously in an allogeneic mixed chimerism model that specific NK cell tolerance is induced. In a rat-to-mouse transplantation model, in contrast, induction of mixed xenogeneic chimerism led to global hyporesponsiveness of recipient NK cells. In this study, we investigated whether pig/human mixed chimerism could tolerize human NK cells in a humanized mouse model. Pig/human mixed chimeric mice were generated by injection of pig bone marrow cells to irradiated pig cytokine transgenic NSG mice followed by injection of human fetal liver-derived CD34+ cells 3-7 days later. Control non-chimeric mice were injected with human CD34+ cells only. Human NK cell reconstitution was induced by hydrodynamic injection of plasmids encoding human Flt3L followed by injection of human IL-15/IL15 Fc-receptor alpha complex. Our results showed that induction of human NK cells in pig/human mixed chimeras did not promote loss of pig chimerism. NK cells from pig/human mixed chimeric mice showed either specifically decreased cytotoxicity to pig cells or global hyporesponsiveness as determined by in vitro cytotoxicity assay, indicating that mixed xenogeneic chimerism resulted in variable and partial tolerance of human NK cells to pig cells. Bone marrow NK cells from chimeric mice showed decreased cytokine responses to K562 cell stimulation in vitro than NK cells from non-chimeric mice. Mixed xenogeneic chimerism did not hamper the maturation of human NK cells, but was associated with an alteration in NK cell subset distribution in the bone marrow. In summary, our results demonstrate that mixed xenogeneic chimerism is able to induce partial human NK cell tolerance to pig cells and support the use of this approach to inducing xenogeneic tolerance in the clinical setting. However, additional approaches are required to improve the efficacy of NK cell tolerance induction. Human Leukocyte Antigen-E Expression on Porcine GalTKO.hCD46 Cells Improves Ex-Vivo Xenograft Survival and Attenuates Injury. C. Laird, 1 N. Kubicki, 1 L. Burdorf, 1 T. Zhang, 1 X. Chang, 1 G. Braileanu, 1 C. Phelps, 2 D. Ayares, 2 A. Azimzadeh, 1 R. Pierson. 1 1 University of Maryland School of Medicine, Baltimore, MD; 2 Revivicor, Inc, Blacksburg, VA. Purpose: Lung xenografts are subject to injury mechanisms associated with the sequestration of human cells, including natural killer (NK) cells. To control NK cell associated injury, human leukocyte antigen (HLA)-E was added onto the GalTKO.hCD46-pig genetic background. We evaluated the effect of this modification on lung injury patterns and performance. Methods: Transgenic pig lungs were perfused with fresh heparinized human blood until failure or elective termination at 4 hrs. Porcine aortic endothelial cells (PAECs) were cultured in monolayers on microfluidic channels through which NK cells were perfused. 
NK cell adhesion and cytotoxicity were measured using fluorescent dyes and image processing software. Results: Median survival time of GalTKO.hCD46 lungs was 162m (range 5-240) whereas all GalTKO.hCD46.HLA-E lungs survived to 4h (p=0.012). Pulmonary vascular resistance (PVR) in the HLA-E group was significantly lower during the first 3 hrs (82±13 v 156±26 mmHg-min/L at 150m, p=0.018). Platelet sequestration (36±8 v 82±10 % initial value after 240m, p=0.004), histamine elaboration (36.6±15 v 104±17, ∆ from initial value at 60m, p=0.007; p=0.068 at 240m), and platelet activation (∆BTG, 505±166 v 1350±133, ∆ from initial value at 240m, p=0.003) were also significantly lower with HLA-E expression. NK cell and neutrophil sequestration, complement activation, thrombin, and thromboxane generation were similar between groups. In vitro, HLA-E expression did not significantly alter adhesion but significantly reduced antibody-independent cytotoxicity. The addition of the HLA-E transgene is associated with improved PVR and lung survival, without significantly attenuating NK cell sequestration. These data are consistent with the hypothesis that HLA-E expression prevents NK cell activation and cell-mediated cytotoxicity. Future investigation will further evaluate the effect of HLA-E on NK cell adhesion and activation in vitro. Graft Loss, Dysfunction and Death (CTOTC-04 study). S. Webber, 1 L. Addonizio, 2 E. Blume, 3 A. Dipchand, 4 R. Shaddy, 5 B. Feingold, 6 C. Canter, 7 D. Hsu, 8 W. Mahle, 9 A. Zeevi, 6 K. Much, 10 D. Ikle, 10 H. Diop, 11 J. Odim, MD, PhD, for CTOTC-04 Investigators. 11 1 Vanderbilt, Nashville; 2 Columbia Univ., NYC; 3 Boston Children's, Boston; 4 Sick Kids, Toronto, Canada; 5 CHOP, Philadelphia; 6 CHP of UPMC, Pittsburgh; 7 Washington Univ., St. Louis; 8 Children's Hospital, Montefiore, NYC; 9 CHOA, Atlanta; 10 Rho, Chapel Hill; 11 NIAID/NIH, Bethesda. Sensitization is common in pediatric heart transplant (HT) candidates. Waitlist mortality is high if a prospective -ve CDC-crossmatch (XM) is required, but HT across a +ve XM is considered high risk for rejection & graft loss. CTOTC-04 is a multicenter prospective study assessing the impact of pre-HT sensitization on HT outcomes. Methods: We prospectively recruited consecutive candidates (<21 yr) at 8 pediatric centers. Pts were categorized as non-sensitized (cohort A) or sensitized (cohort B), defined as a pre-HT +ve Luminex screen with ≥1 anti-HLA Ab at ≥1000 MFI confirmed by single antigen beads. Cohort B XM +ve pts were identified at HT. Immunosuppression was standardized (thymoglobulin with tacrolimus/MMF maintenance). XM +ve pts also received periop Ab removal, maintenance steroids, IVIG. The primary endpoint was the 1 yr incidence rate of a composite of death, reHT, and rejection with hemodynamic compromise. Results: 317 were screened, 290 consented, and 240 were transplanted. Cohort A (n=97; 40%) had mean age 7.0±6.8 yr at HT, 45% male and 34% with congenital heart disease (CHD). Cohort B (n=143; 60%) comprised 16 XM +ve recipients, 8.0±6.8 yr, 69% male, 81% CHD, and 127 XM -ve, 7.6±6.4 yr, 58% male, 49% CHD. Incidence rates of the primary endpoint did not statistically differ; cohort A 5.2% (CI: 1.7%, 11.6%), cohort B +ve XM 12.5% (1.6%, 38.4%), cohort B -ve XM 11.8% (6.8%, 18.7%), p=0.161. Individual components of the primary endpoint did not differ, but the 1-year incidence rate of AMR was higher in the +ve XM group (37.5%) compared to cohort A (4.1%) and cohort B -ve XM (15.0%), p≤0.001.
Conclusions: In the short-term, the composite endpoint did not differ based on sensitization or XM status despite varying rates of AMR. Subsequent analyses will assess outcomes based on MFI of DSA, and long-term follow-up will assess late graft/pt outcomes. Share35 Liver Allocation Is Associated with Increased Early Graft Failure Rate. T. Wong, 1 N. Koizumi, 2 J. Ortiz. 3 1 Maricopa Medical Center, Phoenix, AZ; 2 George Mason University, Arlington, VA; 3 University of Toledo, Toledo, OH. Background: The implementation of the Share35 liver allocation policy in June 2013 facilitated an increase in the number of transplanted livers with fewer organ discards. Early analysis demonstrated decreased waitlist mortality with no compromise to early posttransplant (postTXP) outcomes, including rates of 7-day retransplantation (reTXP) and early postTXP mortality. The impact of Share35 on early graft failure (EGF; loss of allograft at 7, 30, and 90 days postTXP from any cause) has yet to be examined. Aim: To investigate the effect of the Share35 policy on the incidence of EGF. Method: A retrospective analysis was performed using the UNOS database comparing the 2 years pre- and post-Share35. Basic recipient, donor, and perioperative characteristics were compared using Student's t-tests. Rates of EGF at 7, 30, and 90 days post-transplantation were analyzed using Kaplan-Meier curves. Cox regression was used for identification of potential risk factors for EGF. Results: In the pre- and post-Share35 eras, we showed no significant difference between donor and recipient characteristics, although there was a trend toward higher average recipient MELD score, proportion on dialysis, and proportion in ICU at time of transplantation post-Share35. Perioperative analyses demonstrated similar warm and cold ischemic times pre- and post-Share35 despite increased regional organ sharing. Analyses of graft survival at 7, 30, and 90 days postTXP revealed significantly increased rates of EGF in the post-Share35 era (2.08% vs 6.79%, 3.73% vs 16.39%, and 6.39% vs 23.25%, respectively, comparing pre- and post-Share35). Importantly, the higher EGF rate was seen in both the high MELD (MELD>35; 7.07% vs 16.57%) as well as the low MELD (MELD≤35; 3.98% vs 16.57%) recipients. Additionally, the overall time to graft loss was significantly shortened post-Share35 (819.4 vs 262.46 days). Conclusion: Although previous analyses have shown an increased number of liver transplants with positive early outcomes with respect to waitlist mortality, 7-day reTXP rate, and early postTXP mortality after implementation of Share 35, the findings in our current study call for a potential need to reconsider the efficacy of this policy given the grossly increased rate of EGF and its possible long-term consequences. Further analyses are warranted to identify specific donor, recipient, and peri-operative risk factors that may be responsible for the rise in EGF rate in the post-Share35 era. The diagnosis of heart transplant rejection by histology in endomyocardial biopsies (EMB) is challenging. A molecular system (MMDx) has been developed to assess both T cell-mediated (TCMR) and antibody-mediated (ABMR) rejection in kidney transplant biopsies. The present project adapted the MMDx derived in kidney transplant biopsies to heart transplant EMBs. We collected a single bite from 331 standard-of-care EMBs from three centers and processed them on Affymetrix microarrays. EMB diagnoses were assessed by histology using ISHLT guidelines.
To develop rejection tests for heart, the genes most highly associated with Rejection, ABMR, and TCMR in kidney transplants were used to perform semi-supervised clustering on the EMBs. Two-thirds of the biopsies were used as a discovery set, and the remaining third as a validation set. The biopsies segregated into three overlapping molecular clusters in the discovery set (Fig 1A), roughly corresponding to histologic diagnoses of TCMR (A3), ABMR (A2), and No Rejection (A1) (Fig 1B). Some histology ABMR and TCMR biopsies were molecularly No Rejection. Cluster scores in the validation set were assigned using a predictive model based on the molecular distribution of biopsies in the discovery set. Diagnostic performance was similar in the discovery and validation sets. Biopsies designated as ABMR by the molecular test showed reasonably good agreement with histology, and were highly associated with HLA antibody. There was less agreement between molecular and histologic TCMR, supporting concerns that the current histologic assessment of EMBs is poorly predictive of true TCMR. We conclude that the MMDx can provide a new basis for classifying endomyocardial biopsies and a new reference point for improving the assessment by histology. (Kidney biopsies from Clinicaltrials.gov NCT#01299168). S. Prockop, 1 E. Doubrovina, 1 C. Sauter, 2 S. Suser, 1 R. O'Reilly. 1 1 MSKCC, New York, NY; 2 MSKCC, New York, NY. Morbidity and mortality from EBV-induced post-transplant lymphoproliferative disease (EBV+ PTLD) can complicate solid organ transplant (SOT), particularly in those seronegative prior to transplant and in lung/small bowel graft recipients. EBV+ PTLD ranges from early lesions that can respond to decreases in immune suppression to rapidly progressive, monoclonal, diffuse large B-cell lymphomas (DLBCLs). Combination chemotherapy can induce remissions in 40%-50% of cases but relapses are common. We report treatment results in 13 SOT patients (pts) with EBV-specific cytotoxic T lymphocytes (EBV-CTLs) derived from a third-party donor, i.e. someone other than themselves or their SOT donor. EBV-CTLs were matched with the pt for ≥2/10 HLA alleles at high resolution (HLA-A, B, C or DR), including the HLA allele required for EBV antigen recognition (the restricting HLA allele). Each cycle included 3 weekly intravenous infusions of 1-2 x 10^6 cells/kg followed by a 3-week period of observation. Pts who failed to achieve CR could receive additional cycles from the same donor (pts with SD or PR), switch to therapy with cells from a different donor (POD), or be referred for alternative therapy. The median age at time of treatment was 19.1 years, with a median time to development of EBV lymphoma of 36.4 months (180-5330 days) after SOT. The median time from diagnosis of PTLD to initial cell therapy was 13.9 months (36-4230 days), reflecting the duration of prior treatment. All pts had failed prior rituximab. Eleven (11) of 13 pts had received additional multi-agent chemotherapy (median of 2 regimens) and/or radiation therapy (n=5) prior to treatment with EBV CTLs. At the time of treatment 6 had ≥3 sites of disease. Of the seven with 1 or 2 involved sites, five had CNS lesions. There were no immediate infusion-related adverse reactions observed. No patient experienced suppression of blood counts and none showed evidence of organ rejection related to infusion of the EBV CTLs. After the first cycle of cells 3/13 pts achieved a CR or PR.
After a median of 2 cycles of cells (range 1-6 cycles) 1 CR and 6 PRs by Lugano criteria were obtained for an overall response rate of 54% (CR+PR). Taken together, our findings show that adoptive immunotherapy with banked EBV-CTLs was well tolerated and may offer clinical benefit to pts with EBV+PTLD after SOT. Background: A high proportion of patients develop chronic kidney disease (CKD) after liver transplantation (LT). However, the ability to better select patients for nephroprotective interventions early after LT, prior to the onset of CKD, is impeded by the lack of markers of subclinical renal injury. Aim: To characterize a proteomic signature of subclinical renal injury early after LT that predicts the development of late CKD. Methods: We compared multi-analyte panels (Myriad DiscoveryMAP®; 171 proteins) on sera collected at 3 months post-LT in 105 patients with preserved measured iothalamate GFR (mGFR>60 ml/min) that either progressed to CKD (cases; mGFR<45 ml/min) or maintained mGFR>60 ml/min (controls) within 5 years. We performed analyses to determine significant protein differences between cases and controls (Benjamini-Hochberg method for multiple comparisons). Results: 35 cases [age 56 (34-70); 46% male; 89% Caucasian; 23% HCV; 25% diabetes; 54% hypertension] and 70 controls [age 48 (28-64); 68% male; 70% Caucasian; 41% HCV; 25% diabetes, 61% hypertension] had sera collected at 3 months post-LT when mGFR>60. All but one control was on CNI therapy and all had normal liver tests. Unadjusted analysis revealed seven proteins different between cases and controls. After adjustment for variables known to affect renal function (age, sex, race, hypertension, diabetes, hepatitis C), levels of six proteins at 3 months (when mGFR>60) were statistically higher in cases who developed CKD vs. controls over five years (Table 1) . Conclusion: We have identified a signature of six serum proteins that early after LT can distinguish patients who will or will not eventually develop CKD. Interestingly, of the 171 proteins spanning multiple pathways, these six are well-known renal injury markers. These findings validate the notion that a novel proteomic biomarker panel can be used to reliably predict CKD in LT recipients and stratify patients into less nephrotoxic immunosuppressive regimens. In the US in 2014, 17,188 potential kidneys were available from 8594 deceased donors (2014 SRTR/OPTN Annual Data Report). Of these, 25% were not recovered for transplant (8%) or were recovered but discarded (17%), in all 4277 discarded/ not recovered (DNR) kidneys. The most common reason for non-recovery was poor organ function (40%) and for discard biopsy findings (34%). While discards may occur for sound medical reasons, a potential reason may be fear of poor outcomes resulting in regulatory review. At OPTN's request, SRTR analyzed the potential impact on program-specific outcome evaluations if DNR kidneys were transplanted. Donors January 1, 2012-June 20, 2014, were analyzed; 3090 not recovered and 6726 discarded kidneys were matched to transplanted kidneys by kidney donor risk index (KDRI). Programs that had transplanted a similar-KDRI kidney were assumed to have transplanted the DNR kidney into a similar recipient with similar outcomes. 
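The KDRI-based matching just described can be illustrated with a minimal sketch: each discarded or not-recovered (DNR) kidney is paired with the transplanted kidney whose KDRI is closest, and the matched program and recipient outcome are then attributed to the DNR kidney. The nearest-KDRI rule, the data structures, and the field names are assumptions for illustration; the exact matching procedure used in the SRTR analysis is not detailed in the abstract.

```python
def match_by_kdri(dnr_kidneys, transplanted):
    """Pair each DNR kidney (list of KDRI values) with the closest-KDRI
    transplanted kidney, given as (program_id, kdri, first_year_outcome) tuples.
    The DNR kidney is then assumed to have been transplanted at that program
    with that recipient's outcome."""
    matches = []
    for kdri in dnr_kidneys:
        program, matched_kdri, outcome = min(transplanted, key=lambda t: abs(t[1] - kdri))
        matches.append({"dnr_kdri": kdri, "program": program,
                        "matched_kdri": matched_kdri, "assumed_outcome": outcome})
    return matches

# Example: a DNR kidney with KDRI 1.62 is attributed to the program that
# transplanted the closest-KDRI kidney (program "B" here).
transplants = [("A", 1.10, "functioning"), ("B", 1.60, "functioning"), ("C", 2.05, "failed")]
print(match_by_kdri([1.62], transplants))
```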
Once all DNR kidneys were matched to programs and recipients, the program-specific hazard ratio for first-year outcomes was re-calculated using the existing risk adjustment models, which explicitly adjust for KDRI and other donor risk factors, and compared to the program's hazard ratio on actual transplants. The risk adjustment models closely estimated expected events for DNR kidneys (Figure 1). Programs would see minimal change to their evaluations; about the same number would see improvement as would see decline (Figure 2). These findings may be more extreme than would occur in practice because 1) the risk adjustment models are recalibrated during each evaluation cycle, and 2) not 100% of DNR kidneys are likely suitable for transplant. This analysis suggests that transplanting currently discarded kidneys would not systematically affect program evaluations. Based on in vitro studies and limited in vivo data from patients with mutations in FOXP3 or animals with depletion of circulating Tregs, Tregs are considered central to immune homeostasis. The practical nuts and bolts of how these cells function in vivo remain largely unexplored, i.e. we have a reasonable cellular understanding but lack biochemical insights into their regulation and activities. We have shown that the histone/protein acetyltransferase, Tip60, interacts with the N-terminal repressor domain of Foxp3. We now present data showing how Tip60 is vital for Treg survival and function. Conditional deletion of Tip60 in Foxp3+ Tregs did not affect their intrathymic development but decreased peripheral Tregs (3.7% vs 10.7% in WT mice) and almost completely erased Treg suppressive function in vitro. Accordingly, immune activation of conventional CD4 and CD8 T cells (CD69+, CD62L^low CD44^high) was markedly increased in mice with Treg depletion of Tip60, and they developed severe autoimmunity and died within 3 weeks of birth. Tregs from these mice showed markedly decreased mRNA levels of key Treg genes, including Foxp3, CTLA4 and GITR, and increased expression of IL-2, IFN-γ, IL-4, IL-6 and IL-17 mRNA (qPCR), and Western blots showed decreased Foxp3 protein expression in KO vs. WT Tregs. Biochemical studies showed that Tip60 acetylates and stabilizes Foxp3 expression, such that without Tip60, Foxp3 was ubiquitinated and subject to proteasomal degradation. Microarray data showed up-regulation of many proapoptotic genes, consistent with the poor survival of Tip60-/- vs. WT Tregs following their adoptive transfer into immunodeficient mice. Lastly, in translational studies, we found that pharmacologic inhibition of Tip60 (i) impaired Treg suppressive function in vitro, but had little effect on conventional T cell proliferation; and (ii) prevented induction of allograft tolerance (BALB/c→C57BL/6) induced by CD154 mAb plus donor splenocyte transfusion, i.e. abrogated Treg function in the most powerful tolerance strategy yet described in rodents. In conclusion, Tip60 is the most important HAT in Tregs, proving essential for acetylation, dimerization and function of Foxp3; in its absence or upon pharmacologic inhibition of Tip60, Foxp3 is subject to ubiquitination and proteasomal degradation, with consequent loss of Treg function. Background: The optimal timing and duration of DAA therapy in HCV patients undergoing liver transplant (LT) is unknown.
CRUSH-C is an open-label, Phase 2 study evaluating the safety and efficacy of short, perioperative ledipasvir (LDV)/sofosbuvir (SOF) for 4 weeks in HCV genotype (GT) 1 or 4 patients to prevent HCV recurrence post-LT. Methods: 25 patients with chronic HCV GT 1 or 4, listed for LT, and not receiving antiviral therapy were enrolled to receive a single dose of LDV/SOF the day before LT followed by LDV/SOF for an additional 28 days post-LT. Patients were required to have a baseline eGFR ≥40 mL/min at screening and on the day of LT. Receipt of a liver from an anti-HCV positive donor was an exclusion criterion. Immunosuppression management was at investigator discretion. Results: Interim data on the first 11 transplanted patients are available. Most patients were female (55%), Caucasian (82%), IL28B non-CC (64%). Four (36%) were treatment-experienced; all were GT 1. Baseline CTP class: A (27%), B (36%), and C (36%). The median MELD score was 13 (range 7-16). 4/11 patients received a liver from a living donor. One patient met CrCl stopping rules (eGFR <30 mL/min) at Day 7, and discontinued. All other patients completed the planned therapy; 9/10 (90%) have achieved SVR4, and the patient with relapse has initiated protocol-defined retreatment with LDV/SOF for 12 weeks. Treatment was safe and well tolerated. 9/11 (82%) patients had an AE, the majority being mild or moderate. One grade 2 AE (dry eyes) was considered related to study drug. Three patients had SAEs: bile duct stenosis; acute kidney injury, elevated creatinine, and post-operative peri-incisional wound cellulitis; and decreased hepatic artery flow. All were assessed by the investigator as unrelated to study medication. No subjects have died or experienced graft loss. Updated results will be presented. Conclusions: These preliminary data support the safe use of LDV/SOF in the immediate pre- and perioperative transplant period. Preemptive use of LDV/SOF administered as a single dose pre-LT and for 4 weeks following transplant may represent an effective strategy to prevent HCV recurrence. The TAM (Tyro3, Axl and Mer) receptor tyrosine kinases (RTKs) mediate homeostatic phagocytosis of apoptotic cells and transmit regulatory signals that modulate the immune response. Currently, their role in transplant tolerance is unknown. Methods: In this study, we address the role of Mer in transplant tolerance in a BALB/c → B6 heart transplant model in which tolerance is induced by recipient treatment with donor splenocytes (SP) treated with 1-ethyl-3-(3-dimethylaminopropyl) carbodiimide (ECDI). Results: While baseline expression of Mer was primarily detected only on red pulp macrophages, recipient mice treated with donor ECDI-SP significantly upregulated Mer expression on both the CD68+SIGN-R1+ marginal zone macrophages and CD68+CD169+ metallophilic macrophages. This up-regulation was most prominent in macrophages that had phagocytosed the injected donor ECDI-SP. To determine the role of Mer in mediating transplant tolerance by donor ECDI-SP, we utilized Mer-/- mice. Bone marrow-derived macrophages (BMDM) from Mer-/- and Mer+/+ mice were cultured with allogeneic ECDI-SP. While Mer+/+ BMDM exhibited marked up-regulation of M-CSF and essentially completely inhibited expression of IFN-γ and TNF-α following culture with allogeneic ECDI-SP, Mer-/- BMDM exhibited an opposite pattern, i.e. inhibited up-regulation of M-CSF but marked up-regulation of IFN-γ.
This altered pattern of expressions of M-CSF and inflammatory cytokines correlated with an absence of the expansion of myeloid derived suppressor cells ( Immunologic tolerance to islets has been achieved in various rodent models using antibodies directed at CD45RB and Tim-1. We have shown that this tolerance mechanism is B lymphocyte dependent, presumably requiring immune regulatory function. In contrast to i.e autoimmunity, the regulatory B cells (Breg) in our model do not depend on IL-10 and may therefore represent a yet to be characterized subpopulation of Bregs. In an effort to elucidate the mechanism of B cell induced tolerance we investigated the requirement of NK and NKT cells in vitro and in vivo. Murine islets (200-300 Islets/recipient), isolated from male Balb/c mice, were transplanted under the kidney capsule of STZ treated (200 mg/kg) C57bl/6 (B6), CD1d -/and Lyst bg /J (beige) mice. Tolerance was induced by intraperitoneal injection of 100 mg anti-mouse CD45RB (Bio X cell) on days 0, 1, 3, 5, and 7 following transplantation and 500 mg anti-mouse TIM-1 (Bio X cell, RMT1-10) i.p. on the day before transplant, and 300 mg on days 0 Regulatory dendritic cells (DCregs) have shown considerable potential for safe and effective prolongation of allograft survival in pre-clinical models. In humans, DCregs with purity >84% and with <0.3% contaminating T cells can be generated readily from precursor monocytes isolated from apheresis products using CD14-specific immunobeads or elutriation. In preparation for testing of DCregs in clinical organ transplantation, we undertook both small and large-scale (GMP) manufacturing of DCregs generated in rhGM-CSF and rhIL-4 plus vitamin D3 and rhIL-10 during 7-day culture. DCregs cultured from either source exhibited similar low levels of CD80 and CD86, but comparatively high levels of co-inhibitory programed death ligand-1 (PDL-1) resulting in high PD-L1:CD86 MFI ratios (immunobead DCreg: 28±11, elutriated mono DCreg: 38±32) compared to immature DC (iDC) cultured in GM-CSF+IL-4 alone (immunobead iDC: 2.8±1.4, elutriated mono DCreg: 7.1±7). The DCregs resisted phenotypic maturation but unlike control iDC, further upregulated the PD-L1:CD86 MFI ratio in response to LPS stimulation (immunobead DCreg: 40±25, elutriated mono DCreg: 56±50). Whereas control iDC secreted high levels of pro-inflammatory IL-12p70 and TNFα and no IL-10 in response to LPS, the converse was observed for LPS-stimulated DCreg. These DCregs were poor stimulators of naïve and memory allogeneic CD4 + and CD8 + T cell proliferation, IFNγ, IL-17, or perforin/granzyme B production in CFSE-MLR. However, their T cell stimulatory function was partially restored by addition of PD-1 blocking mAb. Using high-throughput TCR sequencing we determined that naïve T cells displayed highly diverse allogeneic TCR repertoires after DCreg stimulation, unlike memory T cells (with CD8>CD4) which showed less diverse repertoires and more alloreactive TCR oligoclones, that were further expanded by mature DC but to a lesser degree by DCreg. These demonstrate the feasibility of generating highly-purified human DCregs for clinical application, and reveal their potential for regulation of alloreactive T cell repertoire and function. Background: Accumulating evidences indicate that vascularized bone marrow transplantation (VBMT) plays a critical role in inducing tolerance of vascularized composite allografts (VCA). 
Recipients with long-term surviving allografts show a higher degree of cell trafficking between donor and recipient. We hypothesize that short-term existence of VBMT may promote engraftment of the donor cells and induce allograft tolerance. Methods: Osteomyocutaneous (OMC) allografts from Balb/c were transplanted onto C57BL/6 mice, and 1 mg anti-CD154 (POD 0), 0.5 mg CTLA4Ig (POD 2), and rapamycin (3 mg/kg/day for 7 days, then every other day for 3 weeks) were administered. Transplanted mice combined with a skin graft were divided into 4 groups: Group 1, OMC + skin graft (n=4); Group 2, OMC removed at POD 30 + skin graft (n=4); Group 3, OMC removed at POD 60 + skin graft (n=4); Group 4, OMC removed at POD 120 + skin graft (n=4). Results: Sixteen of 20 mice receiving OMC allografts achieved long-term graft survival (>120 days). In animals with long-term allograft survival, peripheral blood analysis showed that cellular and humoral responses (IgG and IgM) were abrogated and proinflammatory cytokines (IL-17A, IFNγ, TNFα, CD40L) were suppressed. Peripheral and central tolerance were shown by a significant deletion of Vβ5+CD4+ cells in peripheral blood and thymus in tolerant allograft animals, but not in rejecting mice (p<0.05). Interestingly, transplanted animals whose OMC allografts were removed at POD 30 were able to achieve long-term skin allograft survival (>60 days). This was consistent with data showing no significantly elevated immune response before versus after OMC removal. Using Rag2-/- mice, the OMC allograft was able to delay skin graft rejection after injection of 5x10^6 donor-specific CD4+ T cells (n=2, MST=33.5). Conclusions: The efficacy of VBMT to promote long-term allograft survival has been shown in this study. The data also suggest that intra- and extrathymic clonal deletion is one of the mechanisms contributing to maintenance of tolerance. Function. J. Budihardjo, 1 B. Kern, 1,2 S. Mermulla, 1 A. Quan, 1 C. Cadmi, 1 J. Lopez, 1 J. Park, 1 A. Hoke, 1 W. Lee, 1 S. Tuffaha, 1 G. Brandacher. 1 1 Plastic Surgery, Johns Hopkins Medical Institutions, Baltimore, MD; 2 Transplant Surgery, Medical University of Innsbruck, Innsbruck, Austria. Background: Studies related to functional return for upper extremity transplantation have been limited by the lack of a functional VCA animal model. The rat hindlimb transplant model has been used for this purpose but it does not allow for reliable assessment of behavioral functional recovery. We developed a novel forelimb transplant model to address this specific problem. Methods: Allogeneic (Brown Norway to Lewis) and syngeneic (Lewis to Lewis) orthotopic forelimb allotransplantations were performed at mid-humerus level, with anastomosis of the brachial artery and vein. Two syngeneic sub-groups were studied to determine the degree of functional return due to nerve regeneration. In the experimental group, the median, ulnar and radial nerves were approximated, but left in discontinuity in the control group (N=6/group). Functional recovery was tested by measuring grip strength using a force transducer and evaluating performance of complex hand movements during consumption of food pellets. Median nerve histomorphometry and flexor digitorum myofiber cross-sectional area analysis were performed at 12 weeks. Results: Long-term allograft survival (>120 days) was successfully achieved with cyclosporine A (10 mg/kg/day).
Animals in the control group did not regain grip strength after transplantation, while the experimental group demonstrated 52%±5.5% return of baseline grip strength (p<0.05). Forelimb function scores (0-9) showed greater functional return of the transplanted limb in the experimental group compared to the control group. In transplantation, increasing evidence supports that B cells can play a major regulatory role in addition to their critical functions in allo-immunity and graft rejection. We have described a regulatory B cell-dependent model of tolerance in a fully MHC-mismatched islet allograft setting in mice using a short course of combined anti-TIM-1 and anti-CD45RB antibody treatment. In this model, tolerance is both regulatory B cell-dependent and regulatory T cell-dependent. We found that adoptively transferred Bregs require the presence of Tregs to establish tolerance, and that adoptive transfer of Bregs increases the number of Tregs. Bregs are antigen-specific, IL-10-dependent, and are capable of transferring tolerance to untreated, transplanted animals. Interaction with Bregs in vivo induces significantly more Foxp3 expression in CD4+CD25- T cells than with naive B cells. We have interesting preliminary results indicating that regulatory B cells, through their production of TGF-β, induce regulatory T cells. Breg-mediated graft prolongation post-adoptive transfer is abrogated by neutralization of TGF-β activity. Also, graft survival induced by dual antibodies is abrogated by co-injection of anti-TGF-β neutralizing antibody. Although informative, this result does not localize the critical source of TGF-β to Bregs. To definitively assess whether Breg production of TGF-β is essential to Breg tolerance, we generated novel B cell-specific TGF-β-mutant mice by mating CD19-cre mice with floxed-TGF-β mice. In contrast to WT recipients, B cell-specific TGF-β-mutant recipients rapidly reject allogeneic islet grafts. B cells from these TGF-β-mutant animals do not express significant levels of IL-10 after in vitro stimulation. Background: In July 2015, our center successfully performed the first bilateral hand transplant in a pediatric patient in the United States. The recipient was an 8 year old male quadrimembral amputee secondary to septicemia. He underwent amputation at 2 years of age and received a kidney transplant at 4 years of age. Methods: A size- and color-matched donor was identified by UNOS. Following successful surgery, a multi-disciplinary team of experts in orthopedic, vascular, plastic and transplant surgery, nephrology and transplant medicine, occupational/rehabilitation medicine and neurology has worked together to optimize medical management and therapies. The patient has strong social support, augmented by ongoing social work and psychology services. Results: The patient received thymoglobulin induction and was initially maintained on tacrolimus, mycophenolate mofetil and prednisone. Due to tacrolimus nephrotoxicity with creatinine elevation, sirolimus was added to the immunosuppressive regimen to enable tacrolimus dose reduction. Over the first four months post-transplant, the patient has experienced multiple episodes of rejection, mostly grade 1 and responsive to topical tacrolimus and steroids. The patient continues intensive daily occupational and physical therapy. He is able to grasp, pinch and release. Active wrist flexion is progressing. He is incorporating his hands into his sense of body.
Functional motor mapping of the cerebral cortex with right and left elbow flexion correlate with demonstrated clinical functional changes and have highlighted underlying cortical deficits likely related to the initial timing of amputation. Conclusions: Short-term management of the first pediatric bilateral hand transplantation has included frequent episodes of low-grade rejection and tacrolimus toxicity, however these issues have been responsive to treatment. Neurological and functional progress requires long-term monitoring, but short-term results are promising. In future pediatric candidates, pre-transplant cortical functional mapping should be considered to inform rehabilitation expectations and help target physical therapies. Additionally, consistent social and psychological support are crucial for children and their caregivers undergoing this novel treatment which demands longterm commitment to rigorous therapies. Belatacept (CTLA4Ig) is an emerging treatment in solid organ transplantation. Effects on the development of donor specific antibodies (DSA) as well as its clinical safety in challenging immunological settings have yet to be explored. 3 hand transplanted patients have been converted to a Belatacept-based immunosuppressive regimen at 4 months, 6 years and 9 years after unilateral or bilateral hand and forearm transplantation. Patients have received 5mg/kg Belatacept every 2 weeks, the dosing interval was then extended to 4 weeks after 5 applications. All 3 patients were kept on their baseline immunosuppressive medication, consisting of a CNI (Patients A, B, C) or mTOR inhibitor (Patients A and B) plus steroids (Patients A and B) and CellCept (Patient B). No adverse effects of Belatacept have been noted so far. Patient C, who received Belatacept 4 months after transplantation, can successfully be maintained on Tacrolimus monotherapy with a low trough level of ~5ng/ml. This patient has never developed donor-specific antibodies. Patient A, who had previously developed DSA but was in a stable immunological state at the time of conversion, is now successfully tapered from baseline immunosuppression without evidence of rejection. Patient B, who had DSA at the time of conversion, showed an increase of DSA and worsening graft appearance despite stable levels of his baseline immunosuppression and despite absence of a cellular infiltrate in the skin biopsy. The addition of Belatacept to an immunosuppressive regimen can be beneficial in hand transplantation. However, our patients showed variable results depending on the immunological state at the time of conversion. Based on our clinical experience, the application of Belatacept as a "rescue" medication has to be discussed critically. Purpose: Vascularized composite allotransplantation (VCA) has demonstrated clinical success with standard immunosuppressive strategies in face, hand, and forearm transplantations. We developed a non-human primate model of thymussternum VCA to address technical feasibility, immune tolerance strategies, and presence of chimerism. Methods: Vascularized thymus-sternal allotransplantations were performed between MHC-mismatched rhesus monkeys (feasibility studies) and baboons (long-term survival studies). A 5-10 cm vascularized anterior chest wall ("sternal") segment was recovered from a male donor in continuity with associated skin, muscle, thymus, and pericardium. Bilateral internal thoracic and subclavian vessels, ascending aorta and superior vena cava (SVC) were included en bloc in the allograft. 
For survival studies, a male allograft was transplanted to a female's lower abdominal wall with end-to-side anastomoses of the donor aorta and SVC to the recipient common femoral vessels. In survival studies clinically applicable immunosuppression was given. Skin biopsies and immunological assays were completed at regular intervals and for evidence of skin graft rejection. Presence of chimerism was quantified using polymerase chain reaction specific for baboon Y chromosome. Results: Four successful transplants were performed, two in the survival model. Survival baboon #1 developed clinical and histologic evidence of acute rejection on days 6 and 77. Day 6 rejection episode resolved with antithymocyte globulin bolus and increase in steroids. Animal #2 developed graft swelling and wound dehiscence on day 8, and the allograft was eventually explanted on day 13 due to progressive wound infection. tolerance would avoid the risk of immunosuppression and increase application of VCA. The purpose of this study is to investigate strategies for tolerance induction in a large animal model. METHODS: Heterotopic osteomyocutaneous hind limb transplantation was performed in 19 MGH miniature swine across full swine leukocyte antigen mismatch. All animals received non-myeloablative conditioning with 50cGy total body and 350cGy thymic irradiation for induction. Group I was treated with high-dose tacrolimus (15-20ng/ml) maintenance therapy. Group II was treated with low-dose tacrolimus (4-6ng/ml). Group III received low-dose tacrolimus and 20 mg/kg of CTLA4-Ig administered on POD2, 7, 14, 30, 60, 90, and 120 . Group IV received transient high-dose tacrolimus until POD60. Group V received transient high-dose tacrolimus until POD60 and was switched to CTLA4-Ig administered on POD60, 85, 100, 120 and 150. Graft rejection was monitored by clinical assessment and protocoled skin biopsies. Alloreactivity against donor antigens was assessed using an optimized CFSE-based mixed lymphocyte reaction (MLR). RESULTS: Prolonged high-dose tacrolimus led to maintenance of VCA in 3/3 animals but was associated with major infectious complications. 2/3 animals in group II rejected their grafts by POD46 and 217. In group III, 2/5 animals demonstrated rejection prior to POD150, while 3/5 animals achieved long-term survival of their VCA beyond POD300. However, 3/3 animals in group IV and 5/5 animals in group V achieved indefinite graft survival (beyond POD200) despite weaning of all immunosuppression. Donor specific unresponsiveness was confirmed in long-term survivors in vitro by CFSE-MLR. CONCLUSIONS: Tolerance of VCA containing vascularized bone marrow can be achieved with a regimen of peritransplant high-dose tacrolimus without myeloablative conditioning. These findings describe a potential induction regimen to eliminate the need for long-term immunosuppression after reconstructive transplantation. Vascularized Composite Allografts. I. Rosales, 1 M. DeFazio, 2 R. Foreman, 1 D. Sachs, 2 C. Cetrulo, 2 D. Leonard, 2,3 R. Colvin. 1 1 Pathology, Massachusetts General Hospital, Boston; 2 Transplantation Biology Research Center, Plastic and Reconstructive Surgery, Massachusetts General Hospital, Boston; 3 Plastic and Reconstructive Surgery, Countess of Chester Hospital, Chester, United Kingdom. Successful application of skin-containing vascularized composite allografts (VCA) depends on accurate assessment of the graft status typically based on skin biopsies. 
A working classification (Banff 2007) proposed 4 global grades of acute rejection, but did not score the individual components. Here we report a component scoring system developed from studies of porcine skin-containing VCA. The scoring system was used on 28 coded H&E-stained sections of punch biopsies from MHC mismatched porcine skin-containing VCA scored by 2 pathologists blinded to the treatment protocols, gross features and outcome. To evaluate the prognostic value of the schema, we compared biopsies of 4 allografts that were subsequently accepted (followed a mean of 93 days) with 4 that rejected (mean survival 22 days). Preliminary studies of biopsies from allograft and autologous VCA and normal skin were analyzed to optimize the methodology. The components were: perivascular cells/dermal vessel (pc), perivascular dermal infiltrate area (pa), luminal leukocytes/ capillary or venule (c), epidermal infiltrate (ei), epidermal apoptosis/necrosis (e), endarteritis (v) and chronic allograft vasculopathy (cav). The scoring could be done easily and efficiently (~5 min/sample). The reproducibility was acceptable, with weighted kappa scores for pc (0. The table below shows the number of candidates added to the waiting list and the number of transplants during 7/3/14-11/27/15, as well as the number of candidates still waiting as of 11/27/15. Twenty candidates have been added to the VCA waiting list at 9 different transplant centers. Nine VCA transplants have been performed at 8 centers (4 upper limb, 3 craniofacial, 1 scalp, and 1 abdominal wall). All 9 recipients were male and 6 were White. Four recipients were ages 35-44 yrs, 2 were 18-34 yrs, 2 were 45+ yrs, and 1 was <18 yrs. Three candidates were removed for nontransplant reason and 8 were still waiting for upper limb at 4 centers as of 11/27/15. under tourniquet control. Following elbow disarticulation, the brachial artery was cannulated. The limb was flushed with 10,000U heparin and connected to a temperature controlled (30-33°C) ex-situ perfusion system composed of a commercially available roller pump and oxygenator. The perfusate was plasma-based with packed red blood cells added to a concentration of 4-6 g/dL. The circuit was not anticoagulated. Perfusion was performed for 24 hrs; blood gases were performed hourly, compartment pressures and nerve stimulation were performed every 4 hours. Results: There was 55 min of total ischemia time. Average arterial systolic pressure was 95±6 mmHg. Perfusion flow was 350±52 mL/hr, which was 6-8% of the estimated cardiac output based on donor height and weight. Vascular resistance was 137±50 mmHg/mL/min. Perfusate composition had an average pH of 7.40±0.9, pCO2 41±4 mmHg, pO2 339±46 mmHg, and hemoglobin 4.3±0.5 g/dL. Lactate gradually increased (max of 16.0 mmol/L), while serum potassium remained within a normal range (3.9±1.5 mmol/L). Compartment pressures were 1-5 mmHg. Nerve stimulation remained intact. Conclusions: A human limb allograft was viable after 24 hours of ex situ perfusion. This approach is a promising modality of preservation with the potential to extend the narrow time frame for revascularization and moving one step closer to hand allograft banking. INTRODUCTION Pregnancy-induced sensitization (PIS) is a barrier to the transplantation of women and contributes significantly to gender disparity in transplantation. 
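As an aside on the reproducibility analysis of the component scoring system described above: agreement between the two blinded pathologists on an ordinal component score can be summarized with a weighted kappa. The sketch below is illustrative only; the scores, the component chosen, and the use of scikit-learn are assumptions, not taken from the study.

# Hypothetical ordinal scores (0-3) for one component (e.g., "pc") assigned
# by two pathologists to the same coded biopsy sections.
from sklearn.metrics import cohen_kappa_score

pathologist_1 = [0, 1, 2, 3, 1, 0, 2, 2, 3, 1]
pathologist_2 = [0, 1, 2, 2, 1, 0, 3, 2, 3, 0]

# A linearly weighted kappa penalizes disagreements by their distance on the
# ordinal scale, which suits graded histologic scores.
kappa = cohen_kappa_score(pathologist_1, pathologist_2, weights="linear")
print(f"weighted kappa: {kappa:.2f}")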
We hypothesized that this barrier is greatest for living donor kidney transplant (LDKT) candidates given the potential for specific sensitization to children and spouses who may donate. The frequency with which women are excluded from LDKT because of PIS is currently unknown. METHODS To better quantify the barrier posed by PIS for female LDKT recipients, we performed a retrospective intention-to-treat analysis of potential living donor (LD) utilization in male and female LDKT candidates at our tertiary care center (2007-2013). We compared donor type, histocompatibility, and transplant frequency between three groups of listed LDKT candidates segregated by gender and pregnancy history without other sensitizing events [Men (M)=142; Women with a history of pregnancy (P)=72; and Women without pregnancy history (Nulligravidas) (N)=20]. RESULTS All three groups had equivalent initial LD access with respect to number of crossmatched LDs. The most frequent LD candidate type was an unrelated LD. While men and women were crossmatched with comparable frequency with offspring and spouses, histocompatibility with these donors was significantly different between the groups. Consequently, 17% of women with a history of pregnancy lost access to any living donor after histocompatibility testing [vs. 0% (N) …]. Recent studies have shown multiple risk factors for de novo DSA (dnDSA) development. One controversial risk factor is race. Studying a primarily African-American (AA) transplant population, we aimed to assess whether transplant recipients of African-American race had similar dnDSA incidence rates, dnDSA risk factors, and dnDSA-related outcomes. Methods: We performed a single-center analysis of 158 HLA-mismatched patients receiving a primary transplant between 1/06 and 12/10. All patients underwent frequent HLA IgG antibody monitoring by single antigen beads pre-transplant, post-transplant at 1, 3, 6, 9, and 12 months, and annually thereafter. All patients were DSA negative at the time of transplantation. Results: 106/158 transplant patients were AA. Of the 106 AA patients, 49% developed dnDSA by 5 years compared to 27% in the non-AA group (p=0.0072, Fig A). Among AA patients, donor type did not differentiate risk for dnDSA. Of all other potential risk factors in AA patients, HLA-DQ mismatch, non-adherence, and BK viremia (pre-dnDSA) were the most common clinical and demographic dnDSA antecedents (Fig B). In AA patients, pre-transplant hypertension and the choice of tacrolimus (as the calcineurin inhibitor) were associated with protection against dnDSA formation. In the non-AA group, we were unable to establish a clear set of factors to serve as possible predictors. Even though more AA patients developed dnDSA and subsequently experienced graft failure, the rate of post-dnDSA graft failure did not differ between AA and non-AA patients (Fig C). The actual 3-year post-dnDSA allograft failure was 30% for both groups. … 5 (19.1%) and 6 (23.0%) had the largest % increases. Regions besides 9 all increased the number of deceased donors recovered annually between these time periods, though region 9 did increase DDTs. Regions 1 (22.3%), 4 (24.4%), 5 (16.9%) and 6 (28.8%) had the largest % increases. The proportion of donor head trauma as COD has decreased (p<0.01), while anoxia as COD (p<0.01) and cardiovascular causes and drug intoxication as mechanisms of death have increased (p<0.01). DCD donor volume was 16% higher through Oct 2015 vs.

2014, a significant shift in the proportion of total donors (p<0.01). The recent increases in DDTs after a decade of stagnation has been geographically broad based with increases in most regions. Changes in donor causes and mechanisms of death also help explain the overall rise. Further analysis of these data and center practice changes will be used by the OPTN/UNOS to inform efforts to promote the #1 strategic goal of increasing transplant volume. Results: Findings support an increased rate of wound complications and the need for rigorous assessment of comorbid cardiovascular conditions prior to transplant, but were inconclusive regarding the benefit of pre-transplant weight loss. Some report lower rates of patient and graft survival when compared to those with BMI <35. Importantly, there is evidence that KTx provides a significant survival benefit when compared to patients with BMI > 35 that were maintained on dialysis, and supports KTx in some morbidly obese patients with BMI ≥ 40. Health Related Quality of Life (HRQOL) was assessed in 1 study and was not significantly affected by Background: Bile production during machine perfusion and after transplantation has been used as marker of liver allograft viability and function. This study aimed to assess bile volume, composition and gene regulation when SNMP and CS were compared. Methods: Swine liver allografts were preserved for 9 hours with either SNMP combined with a hemoglobin-based oxygen carrier (HBOC) solution (n=6) or CS (n=6) and transplanted into unmatched recipients. The common bile duct was exteriorized by a biliary drain to allow bile collection post-operatively. Perfusate and bile samples were assessed for metabolomics (600 analytes). Liver biopsies were obtained and analyzed with microarray assays to assess 25,000 hepatic genes. Results: SNMP livers produced a significantly (p<0.05) higher amount of bile postoperatively. Bile acid conjugation was significantly reduced in the CS livers, which was characterized by significantly lower levels of the cysteine-derived amino acids: taurine (p<0.001) and hypotaurine ( Background: Hibernation is a complex and multi stage process that leads to unique metabolic changes over an extended period of time. Regulatory genes and transcription factors involved in this processes are also present in non-hibernator mammals. This study compares ex-vivo metabolic and genomic pathways of porcine livers perfused at 21°C over a 9 hour period with published data from hibernating Ursus americanus (UA). Methods: Six swine livers were preserved with MP and perfused with a novel hemoglobin-based oxygen carrier (HBOC). Perfusate samples were collected every three hours and post-transplantation bile samples were collected daily for 5 days. Metabolomics studies were performed through the analysis of 600 metabolites. Liver biopsies were obtained at baseline and after preservation and 25,000 genes were assessed by microarrays. All liver allografts were subsequently transplanted and compared with a group (n=6) preserved with cold storage (CS). Results: Porcine liver oxygen consumption during MP was similar to that of hibernating UA (0.096 vs 0.083 ml/g/h) and bile production was very limited (1ml/h). Most of the signature hibernation metabolic pathways described in the UA were also observed in the porcine livers at 21°C. 1) Significant reduction in the expression of genes involved in glycolysis. 2) Gluconeogenesis was significantly upregulated. 
3) Increased carbohydrate synthesis was confirmed by remarkable increase in levels of glucose (↑7.24 folds, p<0.001) and fructose 4) β-oxidation of fatty acids and the production of ketone bodies significantly increased [3- Purpose: Organ procurement, cold storage and ischemia reperfusion injury (IRI) promote inflammation, which induces endothelial cell (EC) activation and dysfunction post transplantation. EC gap junctions (GJs) breakdown as a consequence of these injuries and play a key role in graft injury post transplantation. Here we explore the therapeutic potential of adding a novel gap junction (GJ) stabilizing peptide, ACT1, to UW preservation solution as a therapeutic agent to improve endothelial cell health. ACT1, is a small peptide Cx43 mimetic, which impairs the association of ZO-1 with Cx43 thus promotes GJ integrity. Methods: Mouse cardiac ECs (MCECs) were exposed to 6 hrs of cold storage in UW or UW/ACT1 solution followed by reperfusion to mimic clinical cold storage and reperfusion. Efficacy was determined by trans-endothelial electrical resistance (TEER), a measure of GJ function, cell viability assays, and ELISAs for proinflammatory cytokines. In-vivo, utilizing a cardiac allograft model, Balb/c donor hearts were stored in UW or UW/ACT1 for 6 hrs prior to transplantation into C57Bl/6 recipients. Grafts were harvested 48 hrs post-transplant and cardiac graft injury determined by serum cardiac troponin I and histological analyze. Graft inflammation was assessed by immunohistochemistry specific for neutrophil and macrophages. In-vitro studies demonstrate that UW/ACT1 solution significantly reduced EC injury and inflammation, as measured by TEER, cell viability and ELISA. Invivo studies similarly showed that ACT1 pretreatment of the donor organ led to a reduction in IRI, as noted by reduced serum troponin and histological analysis. Subsequent analysis of neutrophil and macrophage infiltration showed pretreatment significantly reduced graft infiltrates. Background: Recurrence of Focal Glomerulosclerosis (FSGS) after kidney transplantation is associated with poor graft survival. We present results from a concise diagnostic and management protocol that proposed treatment of post transplant FSGS with Plasma Exchange (PEX) and Rituximab (RTX). Methods: We compared the outcomes between 10 consecutive kidney transplant recipients (KTR) with post-transplant FSGS that were treated with a protocol based on histology for diagnosis and that consisted of RTX (total of 2gr over 2 infusions, 2 weeks apart) and monthly cycles of 5 PEX over 7 days for 6 months to a historic control group of 9 KTR's with post-transplant FSGS. Patients with a primary diagnosis of FSGS as well as transplant recipients with non-biopsy proven primary diagnosis that developed post transplantation proteinuria with evidence of segmental or focal glomerulosclerosis on light microscopy or diffuse effacement of podocyte foot processes on electron microscopy (EM) were included. Results: 10 consecutive patients (8 male, mean age 51 years, range 23-67) were treated with the new protocol (group A) while the historic control group consisted of 9 KTR's(6 male, mean age 54 years, range 34-71) (group B). The mean time to diagnosis was 6.8 (0.1 -34.6 ) for group A and 13.5 (1.5-40.3 ) months for group B. All group A patients received treatment with at least 2gr of RTX in total and PEX, while group B received a variety of treatments; IVIG+PEX (n=4), PEX (n=4) or no treatment (n=1). 
9 out of 10 patients in group A achieved remission after the conclusion of treatment (4 complete and 5 partial), while in Group B 5 out of 9 patients achieved remission (2 complete and 3 partial). During the follow up period, 1 patient from each group relapsed, and ended up requiring dialysis at 11 and 24 months post diagnosis, respectively. In relapse free responders there was a significant reduction in mean uPCR between diagnosis (645+/-667 mg/mmol) and Conclusion: Heterozygous CFHR3-CFHR1 deletions in combination with a second heterozygous variant in a complement gene may be pathogenic in the setting of transplantation or tacrolimus exposure, while each variant in isolation may be benign. The use of perioperative eculizumab may help minimize this risk. We studied virologic response, graft function, proteinuria, and acute rejection in 11 HCV+ve KTRs who completed DAA therapy (Rx); 7M:4F, mean age-59+/-5yrs, AA (73%), diabetes mellitus (73%), hypertension (91%), all had deceased donor KT and were on CNIs, 6 donors were HCV+ve. All patients had detectable HCV viral load (genotype 1A-n=10, genotype 2-n=1) and majority had failed pre-transplant IFN Rx. The median time from KT to initiation of DAA was 13 months (range 6-124 months). DAA regimens utilized were Ledipasvir /Sofosbuvir (73%), Sofosbuvir /Simeprevir (18%) and Ribavirin/Sofosbuvir (9%). DAA Rxs were well tolerated with the exception of dose modification of ribavirin due to anemia. Majority, 91% (10/11) had sustained virologic response at 12 weeks (SVR12) after completion of DAA Rx. There were no episodes of acute rejection associated with DAA Rx. Serum creatinine (SCr) and spot urine protein/creatinine ratio (Up/c) at 3 and 6 months pre-and post-DAA Rx were analyzed in the 10 patients who achieved SVR12. The mean SCr pre and post DAA Rx was similar (1.4mg/dl). There was a significant reduction in proteinuria, with median U p/c ratio pre-DAA Rx of 0.38mg/g (range 0.05-1.32mg/g) and median U p/c ratio post-DAA Rx of 0.18 mg/g (range 0.05-0.48 mg/g) (p=0.02). In conclusion, DAA Rxs were highly effective, safe, and well tolerated in HCV+ve KTRs. The graft function remained stable during DAA Rx without any episodes of acute rejection. The decrease in proteinuria associated with SVR could represent the amelioration of HCV-related kidney effects and potentially improve graft survival. Donor-specific HLA antibodies (DSA) are a major cause of renal allograft dysfunction and loss following transplant. The presence of DSA, with or without C4d deposition, together with evidence of microvascular inflammation (MVI) is required to diagnose antibody-mediated rejection using Banff criteria. We studied 294 patients transplanted at the 7 centers participating in the Deterioration of Kidney Allograft Function (DeKAF) Study with a biopsy for cause (25% increase in serum creatinine or new onset proteinuria) more than 3 months after transplant. Our objective was to determine the relative impacts of MVI and DSA at the time of biopsy on subsequent death-censored graft survival. Biopsies were scored centrally using extended Banff criteria without clinical or DSA information. Serum samples acquired at the time of biopsy were tested at a central laboratory to identify DSA given only the donor and recipient HLA types and using single HLA antigen bead tests. Patients whose biopsy showed recurrent disease, or BK virus nephropathy as potential causes of dysfunction were excluded. Patients were followed after biopsy for an average of 3.6 years. 
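Death-censored graft survival, as used in the analysis that follows, treats death with a functioning graft as censoring rather than as graft loss. A minimal sketch of how such curves can be estimated (using the lifelines library; the data frame and all column names are hypothetical, not from the DeKAF dataset):

import pandas as pd
from lifelines import KaplanMeierFitter

def dcgs_by_group(df: pd.DataFrame) -> None:
    # Columns assumed: years_from_biopsy, graft_loss (0/1),
    # died_with_function (0/1), group (e.g. "no MVI/no DSA", "DSA+MVI").
    event = (df["graft_loss"] == 1) & (df["died_with_function"] == 0)
    kmf = KaplanMeierFitter()
    for name, sub in df.groupby("group"):
        kmf.fit(sub["years_from_biopsy"], event_observed=event.loc[sub.index], label=name)
        print(name, "5-year DCGS ~", round(float(kmf.predict(5.0)), 2))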
Death-censored graft survival (DCGS) after biopsy is illustrated in the Figure. 5-year DCGS following the biopsy was 88% among the 141 patients whose biopsy showed no evidence of MVI (ptc score=0, g score=0) and no DSA. Biopsies with evidence of MVI and no DSA (n=50), DSA with no MVI (n=39) or DSA+MVI (n=64) had significantly poorer DCGS ranging from 47-66% at 5 years (p<0.01). More than one-third of DSA+ patients in this sample had no evidence of MVI in their biopsy and 44% of patients with MVI had no detectable DSA. We conclude that DSA testing provides important information in addition to biopsy results. Both biopsy and DSA testing should be performed to properly evaluate late renal allograft dysfunction. The number of HLA-DR eplet mismatches (eplet load) is a risk factor for the development of anti-HLA antibodies and transplant glomerulopathy (TG). In the case of TG, our group has recently demonstrated an odds ratio (OR) of 1.63 (95% confidence interval (95% CI): 1.14, 2.33, p = 0.008) for the development of TG per 10 HLA-DR eplet mismatches. Using the same study sample, the current analysis sought to ascertain whether HLA-DR eplet load limited to immunogenic HLA-DR eplets confers an even greater risk for TG and to ascertain whether a particular risk threshold of eplet load can be identified. In a nested case-control study, patients with TG (N=52) were matched with randomly selected controls from the underlying cohort who had a similar follow-up time from transplantation (N=104). HLAMatchmaker was used to ascertain HLA-DRB1/3/4/5 eplet load from HLA types obtained using molecular methods with 4-digit HLA-types assigned from the catalogue of common and well-documented alleles. Immunogenic HLA-DR eplets were defined as either antibody-verified eplets from the HLA epitope registry (http://epregistry.com.br) or as eplets corresponding to Terasaki epitopes (TerEp). The OR of developing TG per 10 additional HLA-DR immunogenic eplets was assessed by multivariable conditional logistic regression models. Restricted cubic splines were used to flexibly capture the continuous relationship between immunogenic HLA-DR eplet load and TG and to evaluate for a threshold effect. population with intermediate follow up. The purpose of this study was to evaluate toxicities associated with BTZ AMR therapy in a large kidney and/or pancreas transplant population with long term follow up. Methods: Pts were evaluated at baseline, during therapy, and following completion of BTZ therapy. Gastrointestinal and hematologic adverse effects were graded in accordance with the Common Terminology Criteria for Adverse Events (CTCAE). The Functional Assessment of Cancer Therapy-Neurotoxicity (FACT/GOG-Ntx) questionnaire was used to evaluate peripheral neuropathy. AMR was diagnosed according to the Banff criteria. Results: 103 pts received BTZ for AMR treatment. 78 pts (75.7%) completed 1 cycle of BTZ. The most common toxicity was gastrointestinal; symptoms were generally mild only requiring dose adjustment in 4 pts (3.9%). Grade 3 and 4 hematologic toxicity occurred in 61 pts (59.2%). However, resulted in dose modifications in only 29 pts (28.2%): 14 for anemia, 14 for thrombocytopenia, 1 for neutropenia. New or worsening PN was also common, occurring in 42.7%. Peripheral neuropathy (PN) resolved in all patients; median time to resolution was 13 days (range 3-461). BTZ dose adjustments for PN were required in only 1 (1%) pt. CMV viremia was observed in 7 (6.8%) pts. 
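For the eplet-load analysis above, the reported odds ratio per 10 HLA-DR eplet mismatches implies the following per-eplet log-odds coefficient (a worked conversion of the published figure, not an additional result):

\[ \mathrm{OR}_{10} = e^{10\beta} \;\Rightarrow\; \beta = \tfrac{1}{10}\ln(\mathrm{OR}_{10}) = \tfrac{1}{10}\ln(1.63) \approx 0.049 \text{ per eplet mismatch,} \]

with the 95% CI bounds (1.14, 2.33) transforming the same way. The restricted cubic splines mentioned above relax the assumption that this log-odds increase is constant across the whole eplet-load range, which is how a threshold effect would be detected.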
BK viremia in 4 (3.9%) of pts and 1 case of BK nephropathy was noted. One pt with grade 4 neutropenia expired with cryptococcal fungemia. Malignancy was not observed. Conclusion: Overall, toxicities associated with BTZ are relatively low in severity and appear to be transient. This large series with longer follow-up further defines the adverse event profile of BTZ AMR therapy in kidney and/or pancreas transplant recipients. G.: Grant/Research Support, Millennium, Research study. Cardi, M.: Grant/ Research Support, Millennium, Research study. Kremer, J.: Grant/Research Support, Millennium, Research study. Cuffy, M.: Grant/Research Support, Millennium, Research study. Paterno, F.: Grant/Research Support, Millennium. Alloway, R.: Grant/Research Support, Millennium. Woodle, E.: Grant/Research Support, Millennium. Abstract# 413 A Molecular Biopsy Test for Probability of Non-Adherence. P. Halloran, 1,3 G. Einecke, 2 J. Reeve. 1 1 ATAGC, Edmonton, AB, Canada; 2 Hannover Medical School, Hannover, Germany; 3 Department of Medicine, University of Alberta, Edmonton, AB, Canada. Afferent arteriolar hyalinosis (ah-score) in renal transplant biopsies increases with time of biopsy-post-transplant (TxBx), reflecting calcineurin inhibitor drug effects, donor aging, and progressive glomerular diseases. We hypothesized that ah-lesions and TxBx might be used to discover molecular changes related to hyalinosis ("drug effect"), and that levels lower than expected might indicate non-adherence. In renal transplant indication biopsies from 528 patients, 3 days to 35 years post transplant, we developed a classifier reflecting molecular changes related to the ah-lesions and TxBx: the "Molecular Hyalinosis-associated changes" (MH score). The MH classifier did not find molecular changes induced directly by CNI drugs: instead it identified rejection molecules. We calculated the degree to which a biopsy had less than expected MH for TxBx-the "Residual score" (Rah), as a potential estimate of under-immunosuppression. Residuals strongly correlated with histologic and molecular rejection. Biopsies with strong RahS occurred in the first five years when non-adherence was common (figure 2), mainly showing TCMR or ABMR without cg lesions ("pgABMR"). In a subset of patients with medical records available for review, non-adherence was frequent in patients with strong residuals. However, many biopsies with ABMR and cg-lesions did NOT have large negative residuals. We conclude that identification of indication biopsies with large negative residuals, using molecular equations trained on ah-lesions and TxBx, is correlated with the probability that rejection is due to under-immunosuppression/non-adherence, which cannot be estimated directly from ah lesions. The results indicate that many TCMR and pgABMR biopsies in the first years post transplant reflect non-adherence, but many ABMR with cg-lesions do not, especially after 5 years post transplant. Blockade of HLA Antibody-Triggered Classical Complement Activation by High-Affinity Humanized Monoclonal Anti-C1s Antibody TNT009 -Results of a First-in-Human Phase 1 Trial. J. Mühbacher, 1 B. Jilma, 2 M. Wahrmann, 3 L. Marinova, 3 F. Eskandary, 3 J. Gilbert, 4 S. Panicker, 4 G. Böhmig. 
3 1 Department of Surgery, Medical University of Vienna, Vienna, Austria; 2 Department of Clinical Pharmacology, Medical University of Vienna, Vienna, Austria; 3 Division of Nephrology and Dialysis, Department of Medicine III, Medical University of Vienna, Vienna, Austria; 4 True North Therapeutics, Inc., South San Francisco, CA. Study purpose: Classical complement may play a key role in ABMR. A promising therapeutic approach could be complement inhibition at the level of key component C1. In this single ascending dose part of a first-in-human phase 1 trial (NCT02502903) we evaluated the tolerability and activity of TNT009, a high-affinity humanized monoclonal antibody directed against serine protease C1s. Methods: In a double-blind, randomized, placebo-controlled phase 1 trial, 48 healthy volunteers received either a single dose of TNT009 or placebo by IV infusion (3:1 randomization; 7 cohorts: 0.3-100 mg/kg). To determine the effect of TNT009 on ex vivo HLA antibody-triggered complement activation, serum samples from dosed subjects were added as a complement source in a modified solid-phase assay. C3d deposition was detected on HLA haplotype-coated flow beads pre-incubated with pooled inactivated sera obtained from sensitized patients. Results were expressed as the mean C3d MFI determined on 16 defined bead populations. Results. Single doses of 3-100 mg/kg TNT009 led to a >85% inhibition of HLA antibody-triggered C3d bead deposition. At doses of 10, 30, 60 or 100 mg/kg this effect lasted for 2 to at least 14 days. C3d test results showed a tight dose-effect relationship without major inter-individual differences. Similarly, in vitro spiking of sera with TNT009 revealed uniform and strong complement inhibition at concentrations >30 mg/mL. Conclusion. We demonstrate that TNT009 allows for prolonged and complete classical pathway inhibition in vivo. Future studies will clarify whether TNT009 is able to block antibody-mediated injury as a strategy to prevent or treat ABMR. Recent studies have shown a correlation between CD4+CD57+PD1- cells and costimulation blockade-resistant rejection (CoBRR) in patients treated with belatacept without depletional induction. We have recently reported that alemtuzumab induction and belatacept/sirolimus-based maintenance effectively prevents CoBRR. We have therefore investigated the dynamics, phenotypes, and antigen specificity of reconstituting CD57+ cells in patients treated with this regimen. Peripheral blood mononuclear cells collected from 20 patients before and serially after transplantation were assessed by flow cytometry, focusing specifically on CD57-expressing T cells and interrogating markers of memory (CCR7/CD45RA classification) and differentiation (IL-7R, a receptor required for homeostatic repopulation). Cells collected from 9 patients were stimulated with specific donor antigens and analyzed by intracellular cytokine staining to detect dual (TNF-α/IFN-γ) cytokine producers. Early post-depletion, depletion-resistant T cells were characterized as being phenotypically memory cells (CCR7-CD45RA- effector memory and CCR7-CD45RA+ terminally differentiated effector memory cells), with CD57+PD1-, CD57+PD1+, and CD57-PD1+ subsets present. However, as homeostatic repopulation occurred, this was almost exclusively CD4+ and CD8+ CD57-PD1- T cells. This produced a repopulating repertoire enriched for CD57-PD1- T cells (p<0.05).
In contrast, patients treated with conventional immunosuppression did not show reduction of CD57 + cells posttransplantation. Interestingly, CD57but not CD57 + T cells were predominantly positive for IL7R, perhaps explaining the mechanism for CD57 -T cell selective expansion. Preoperatively, allospecific T cells (defined as dual cytokine producers following allostimulation) were predominantly memory cells characterized as being CD57+ and/or PD1+. The frequency of dual cytokine producers in these patients decreased posttransplantation commensurate with the emergence of a CD57repertoire. In summary, patients receiving alemtuzumab induction and belatacept/ sirolimus-based immunosuppression demonstrate increased repopulating CD57 -PD1cells predominantly expressing IL-7R, a change not seen in conventionally treated patients. This unique repertoire enriched for CD57 -PD1cells may be permissive for control of costimulation blockade-resistant allograft rejection. Data shows a significant improvement in long-term graft survival for the TCZ group. In addition, pts in the SOC who lost their grafts had higher death rates (p=0.002). Conclusions: CABMR & TG pts treated with TCZ continue to show long term stabilization of renal function and improved survivals. Although there is a benefit in reduction of iDSAs, there may be other beneficial mechanisms of action of anti-IL6-R therapy. More importantly, extending the functional half-life of allografts has benefits for the patient in improving length and quality of life and to the health care system. Background. CD40-CD154 pathway blockade significantly prolongs renal allograft survival in non-human primates (NHPs), and trials of the novel blocking, non-depleting anti-CD40 monoclonal antibody (mAb) CFZ533 in de novo kidney transplantation have been initiated. The exploration of the relationship between CFZ533 blood/serum levels and CD40 pathway inhibition in tissues is a key aspect in the selection of a dose and regimen that would result in complete CD40-CD154 blockade in kidney transplantation. Aim. To address this question, we made use of in vitro and in vivo data, examining outcomes of CD40 pathway inhibition in the context of receptor occupancy, CFZ533 serum concentration as well as a relevant tissue pharmacodynamic (PD) effect, namely the decreased cellularity of established germinal centers (GC). Results. Using an in vitro human whole blood assay where we could simultaneously assess CD40 receptor occupancy (RO) and pathway inhibition by CFZ533, we could clearly demonstrate that full RO was required for complete suppression of recombinant CD154 (rCD154)-mediated expression of CD69, CD23 and ICAM1. In vivo (healthy volunteers), full RO on peripheral blood B cells was observed at CFZ533 plasma concentrations ³1 mg/mL. In contrast, in vivo experiments in NHPs indicated that steady-state serum CFZ533 concentrations ³38 mg/mL were required for full suppression of germinal centers in mesenteric lymph nodes, almost 40-fold higher than observed for complete peripheral blood RO. Conclusions. Our results suggest that CFZ533 dose-selection should not rely solely on RO data in peripheral blood, but also on the relationship between serum exposure and a relevant pathway PD effect in tissue. Such an approach would help avoid under-dosing in the clinic; of particular concern in transplantation, where accelerated target-mediated clearance of anti-CD40 mAbs has been observed. 
Our results have therefore proved crucial in selecting a dose of CFZ533 that would enable us to test the hypothesis of whether full CD40 pathway blockade would prolong allograft survival patients undergoing de novo renal transplantation in the absence of CNIs. Study. C. Lefaucheur, 1 D. Viglietti, 1 C. Gosset, 1 A. Loupy, 2 A. Zeevi, 3 D. Glotz. Paris, France; 2 Necker Hospital, Paris, France; 3 University of Pittsburgh Medical Center, Pittsburgh. The use of complement inhibitors in the treatment of acute antibody-mediated rejection (AMR) has not been thoroughly evaluated. We performed a prospective, single-arm, pilot study to assess the efficacy and safety of C1-inhibitor (CI-INH) Berinert added to high-dose intravenous immunoglobulin (IVIG) for the treatment of acute AMR that is non-responsive to conventional therapy. Kidney recipients with acute AMR that was non-responsive to conventional standardized treatment (plasmaphereses [x4], high-dose IVIG [2 g/kg] repeated every 3 weeks for 3 rounds and rituximab [375 mg per square meter of body-surface area]) were prospectively enrolled between April 1, 2013, and July 1, 2014 (N=6) . They received C1-INH (20 U/kg twice weekly) and high-dose IVIG (2 g/kg bw every 3 weeks) for 6 months. C1-INH patients were compared with a historical control group treated with high-dose IVIG alone (N=21). The primary endpoint was allograft function (eGFR) changes between the groups at 6 months after inclusion. Secondary end points included allograft histology, donor-specific anti-HLA antibody (DSA) characteristics and adverse events in the C1-INH group, as evaluated at 6 months after inclusion. The C1-INH group showed a significant improvement in eGFR compared with the control group: +16.6±9.9% vs. -13.0±33.1% (p=0.01). The mean eGFR at the end of the study was of 45.2±21.3 mL/min/1.73 m 2 in the C1-INH group vs. 31.7±14.6 in the control group (p=0.08). All patients in the C1-INH group showed an improvement in eGFR between inclusion and study end from 38.7±17.9 mL/ min/1.73 m 2 to 45.2±21.3 mL/min/1.73 m 2 (p=0.03). There was no change in the histological features in patients in the C1-INH group between the biopsies at inclusion and those obtained at the end of the study, except for a significant decrease in the C4d deposition rate (83% vs 17%, respectively, p=0.04). There was a significant change in anti-HLA DSA C1q-binding status from 6/6 (100%) positive at enrolment to 1/6 (17%) positive at the end of the study (p=0.02). One deep venous thrombosis of a lower limb occurred during follow-up. C1-inhibitor added to high-dose IVIG may improve allograft function in kidney recipients with non-responsive acute AMR. Purpose: Chain transplantation in kidney paired donation (KPD) offers the ability to facilitate multiple kidney transplants between incompatible pairs using a non-directed donor (NDD) to initiate the sequence. Concerns regarding donors' ability to back out have been previously raised yet few data exist which evaluate the frequency of broken chains due to donor issues. The purpose of this study was to evaluate the rate and cause of broken chains within a large KPD program. Methods: All patients undergoing renal transplantation through the NKR from 2008 through Sept 2015 were included for analysis. Broken chains were defined as chains in which a donor did not undergo donor nephrectomy despite their intended recipient having received a kidney. 
Real-time swap failures were evaluated as a subset of broken chains and defined as chains that were broken on the day of the planned surgery. Loops (closed KPD sequences not initiated by a NDD) were also analyzed. Descriptive statistical analysis was performed. Results: A total of 281 chains and 76 loops were completed during the study period, yielding a total of 1,483 transplants. 19 broken chains (6.8%) and 1 broken loop (1.3%) were identified. The mean chain length (# of transplants) within broken chains was 4.05 compared to 4.69 for completed chains. Seven of the 19 broken chains were caused by real-time swap failures. The most common causes of a broken chain were "medical" (n=8), "no reason" (n=4), and "kidney declined by recipient surgeon" (n=3). One "aborted donor surgery" and one "recipient medical issue" occurred. A donor "renege" occurred in 1 case. Conclusions: Based on these results, broken chains are infrequent and rarely due to a donor renege. The most common cause of broken chains within this KPD program was "medical". Methods: We examined the records at our center of 9 recipients of living donor kidney transplants, their 9 compatible (also known as original or intended) living donors, and their 9 "actual" living donors, whose transplants/donations occurred from 2012 to 2015. Results: The 9 recipients received LDKTs as part of 8 different chains that included 24 total LDKTs (15 additional LDKTs). The 9 recipients were all male and 67% White, with a mean age of 55.5 (SD 15.2) years. 4 (44%) were college graduates. 8 of 9 (89%) recipients had a PRA of 5% or less. Of the 9 intended ("original") living donors, 4 (44%) were the spouse of the intended recipient, all 9 were female and blood group O, 5 (56%) were college graduates, and their mean age was 56.6 (SD 6.7) years. The primary reasons why the compatible donor-recipient pairs participated in paired exchange were the age difference between the recipient and intended donor (N=6), body or kidney size difference between the recipient and intended donor (N=2), and desire to help other recipients (N=1). Desire for better HLA matching was not the primary reason for participation for any pairs. In the 6 cases of age difference, the intended donors were a median of 3.3 years younger than the recipients (25-75% range, 11.9 years older to 4.5 years younger), while the actual donors were a median of 17.5 years younger than the recipients (25-75% range, 4.5-20.6 years younger). In the 2 cases of size difference, one recipient received a LDKT from a donor who was 60 pounds larger than the intended donor, and the other recipient received a LDKT from a donor whose calculated donor kidney volume was 78 mL greater than the volume of the intended donor's kidney. KT recipients (1999-2013) linked to Medicare claims through the USRDS were studied. EHR was defined as ≥1 hospitalization within 30 days of initial discharge after KT. We developed multilevel mixed-effects logistic models, using Empirical Bayes estimation, to explore center-level variation in the association between EHR and survival at 1 year following deceased donor and living donor kidney transplantation (DDKT, LDKT). Results: The incidence of EHR was 31.5%. 1-year graft loss was 7.18% in patients with EHR versus 3.25% in those without (P<0.001). 1-year mortality was 6.60% in patients with EHR versus 2.16% in those without (P<0.001). EHR was associated with increased odds of graft loss and mortality at 1 year following LDKT and DDKT.
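The multilevel model described above is not written out in the abstract; a standard random-intercept/random-slope formulation consistent with that description would be

\[ \operatorname{logit}\Pr(Y_{ij}=1) = \beta_0 + \beta_1\,\mathrm{EHR}_{ij} + \boldsymbol{\beta}^{\top}\mathbf{x}_{ij} + u_j + b_j\,\mathrm{EHR}_{ij}, \qquad u_j \sim \mathcal{N}(0,\sigma_u^2),\; b_j \sim \mathcal{N}(0,\sigma_b^2), \]

where Y_{ij} is 1-year graft loss (or death) for patient i at center j, x_{ij} are adjustment covariates, u_j is the center-specific intercept, and the random slope b_j (recovered by Empirical Bayes) captures the center-level variation in the EHR-outcome association that the study set out to quantify.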
For LDKT, there was no center-level variation in the association between EHR and survival (graft loss: P=0.5, mortality: P=0.1). For DDKT, there was a small variation in graft loss and patient mortality attributable to center-level effects. The impact of EHR on graft loss was consistent at all but one center. Similarly, the association between EHR and mortality was consistent in 205 of 220 centers. Conclusion: EHR is associated with inferior 1-year patient and graft survival. While the incidence of EHR varies across centers, we found no substantial center-level variation in the association between EHR and survival, suggesting EHR is a robust marker of KT recipients at increased risk of poor outcomes. We aim to report the rate and short-term outcomes of patients undergoing reoperation following kidney transplant in the United States. The NIS database was used to examine the clinical data of patients undergoing kidney transplant and reoperation during the same hospitalization from 2002 to 2012. Multivariate regression analysis was performed to compare outcomes of patients with and without reoperation. We sampled a total of 172,586 patients who underwent kidney transplant. Of these, 3802 (2.2%) had reoperation during the same hospitalization. Reoperation was associated with a significant increase in mortality (3.1% vs. 0.4%, AOR: 5.40, P<0.01), mean total hospital charges ($249,425 vs. $145,403, P<0.01), and mean length of hospitalization (18 days vs. 7 days, P<0.01). The most common day of reoperation was POD 1. Hemorrhagic complication (64.1%) was the most common reason for reoperation, followed by urinary tract complications (8.9%). Renal vein complications requiring reoperation were more than seven times more frequent than renal artery complications (2.9% vs. 0.4%, P<0.01). Preoperative coagulopathy (AOR: 3.65, P<0.01) was the strongest predictor of the need for reoperation and also predicted renal artery (AOR: 1.50, P=0.04) and renal vein (AOR: 3.62, P<0.01) complications. Diabetes (AOR: 1.26, P<0.01) and hypertension (AOR: 1.88, P<0.01) were also associated with the need for reoperation. Reoperation after kidney transplant occurs in 2.2% of cases. However, it is associated with significantly increased mortality, hospitalization length, and total hospital charges. The first postoperative day is the most common day of reoperation, and hemorrhagic complication is the most common reason. Preoperative coagulopathy was the strongest predictor of the need for reoperation and of renal artery and renal vein complications. Controlling coagulation disorders preoperatively may decrease the need for reoperation, hospitalization length, and total hospital charges. … on the risk of BK virus reactivation. METHODS: We identified 498 kidney allograft recipients (2007-11) who had serum levels of 25-hydroxyvitamin D (25[OH]D) and PTH measured within the first year of transplantation. We evaluated the relationship between the average circulating levels of 25(OH)D and PTH and BK virus reactivation in blood. RESULTS: Vitamin D insufficiency, defined as average circulating 25[OH]D levels <30 ng/mL per the 2011 Endocrine Society Clinical Practice Guideline, was observed in 377 (75%) of 498 kidney graft recipients after kidney transplant. The VitD insufficient and sufficient groups were comparable in terms of gender, age, BMI, race, and immunosuppression regimen. Vitamin D insufficiency was more frequent in recipients of cadaveric allografts (P=0.009, Fisher's exact test). 
By multivariable Cox regression analysis, vitamin D insufficiency was an independent risk factor for BKV reactivation after kidney transplantation (hazard ratio=2.4, 95% CI 1.3-4.7, P=0.002). Moreover, vitamin D insufficiency was associated with significantly earlier onset of BKV reactivation (8.8 vs. 15.6 months, respectively; P=0.01). Receiver operating characteristic analysis using logistic regression showed that 25[OH]D <24 ng/mL was predictive of BKV reactivation (OR: 82.9, 95% CI: 17.22-744.6, P<0.001, AUC 0.7). Vitamin D insufficiency was not associated with a higher risk of CMV reactivation (16.7% vs. 19%, respectively; p=0.5). The positive BKV group had significantly higher PTH compared to the negative group (196.7 vs. 148.5, respectively; P=0.01). No significant associations of 25[OH]D with clinical outcomes were observed in time-dependent or fixed-covariate Cox models. CONCLUSIONS: Vitamin D insufficiency is an independent risk factor for BKV reactivation after kidney transplantation. Prospective clinical trials are warranted. 1 Medical Microbiology, LUMC, Leiden, Netherlands; 2 Nephrology, LUMC, Leiden, Netherlands; 3 Immunohematology and Blood Transfusion, LUMC, Leiden, Netherlands; 4 Medical Statistics and Bioinformatics, LUMC, Leiden, Netherlands; 5 UMR INRA 1282 ISP, Université François Rabelais, Tours, France. Background: Kidney transplant (KTx) donor characteristics are not currently used to predict BK polyomavirus (BKV) infection in the recipient. For BKV-associated nephropathy (BKVAN), however, a donor origin of infection is likely. Since BKV-seroreactivity correlates with BKV-replication, we investigated whether baseline BKV-seroreactivity of KTx donors predicts viremia and BKVAN in recipients. Methods: In a retrospective cohort of 407 living kidney allograft donor-recipient pairs, transplanted between 2003 and 2013, pre-KTx donor and recipient sera were tested for BKV IgG levels. Baseline IgG levels were correlated with the occurrence of viremia and BKVAN during the first year after transplantation. Results: Baseline BKV seroprevalence of both donors and recipients was high, ≥95%. A strong, statistically significant association was observed between donor BKV-IgG level and the occurrence of viremia (Figure 1) and BKVAN. This resulted in a sevenfold increased hazard ratio for BKV viremia, which increased even further in the case of a low BKV-seroreactive recipient. Baseline recipient BKV-seroreactivity as such was not associated with viremia or BKVAN. Multivariate analysis showed donor BKV-seroreactivity to be the strongest baseline factor associated with BKV viremia and BKVAN. Conclusion: Donor level of BKV-IgG is a strong predictor of BKV infection in KTx recipients. The proportional relation suggests that donor BKV-seroreactivity reflects the infectious load of the kidney allograft. This finding supports the use of BKV serological testing pre-KTx in order to assess the risk of BKVAN and to personalize BKV plasma load monitoring. Furthermore, it emphasizes the relevance of strategies aimed at increasing BKV immunity in kidney allograft recipients. Background: The adaptive immune characteristics associated with BK virus (BKV) infection and control remain incompletely described. In this study we further elucidate the immune response to BKV. Methods: 29 patients undergoing renal transplantation from two institutions were followed prospectively for one year to evaluate the recipients' BKV-specific humoral and cellular immune responses.
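A minimal sketch of the multivariable Cox analysis reported above for vitamin D insufficiency and BKV reactivation (the data frame, column names, and covariate set are hypothetical; the lifelines library is assumed):

import pandas as pd
from lifelines import CoxPHFitter

def bkv_cox_model(df: pd.DataFrame) -> pd.DataFrame:
    # Columns assumed: months_to_bkv_or_censor, bkv_reactivation (0/1),
    # vitd_insufficient (0/1), plus adjustment covariates (age, donor type, ...).
    cph = CoxPHFitter()
    cph.fit(df, duration_col="months_to_bkv_or_censor", event_col="bkv_reactivation")
    # exp(coef) for vitd_insufficient is the adjusted hazard ratio
    # (reported above as 2.4, 95% CI 1.3-4.7).
    return cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]]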
ELISA assay detected IgG antibody titers against BKV strains 1 and 4 and intracellular staining for interferon gamma (IFN-γ) determined BKV-specific CD4 + and CD8 + T lymphocytes. We quantified T lymphocyte phenotypes and expression of activation and exhaustion markers to demonstrate lymphocyte functions. Results: 4 patients developed BK viruria and 3 BK viremia. There were no clinical differences between BKV positive and negative patients. Viremic patients had an increase in BKV specific, IFN-γ producing CD8 + T Cells with time (P= 0.034). Prednisone was associated with fewer BKV specific CD8 + T Cells (P=0.027). Though there was a trend for viremic patients to have more IFN-γ producing CD4 + T cells, this did not reach significance. Viremic patients had an increase in IgG titers against both BKV 1 and 4(P= 0.044). Viremic patients also had higher titers against BKV 4 at 6 (P=0.035) and 12 months (P=0.004). A similar pattern was seen against BKV 1, but this did not reach significance. Viremic patients had fewer CD4 + central memory cells expressing activation markers CD38 (P=0.042) and HLA-DR (P= 0.027) at 1 month. Viremic patients had more CD8 + effector cells at baseline (P = 0.002) and 12 months (P = 0.009). Conclusions: The response to BK viremia includes both a BKV specific humoral and cellular component, dominated by CD8 + T lymphocytes, and may be impaired by prednisone. Viremia was associated with fewer activated CD4 + central memory cells at 1 month, perhaps indicating an early memory deficit; and significantly more CD8 + effector cells at 12 months, potentially reflecting the cytotoxic response to viremia. Monitoring and modulating specific immune factors against BKV post kidney transplant could help identify risk of BKV infection and factors to harness as immunotherapy against BKV. While differences in HLA class II and BKV and LTA binding in the retrospective study and HLA multiplex beads assay, the clinical utility of these findings requires further study. Epitope analysis may elucidate whether a specific amino acid sequence rather than an allele-level difference may decrease the binding affinity of BK viral peptides and thus decrease T cell immune responsiveness in the setting of immunosuppression. This ideally will identify susceptible recipients who can be monitored more vigilantly to avoid over-immunosuppression. Conclusions: Proximity MELD/PELD points can mitigate the transport burden of liver redistricting while reducing the disparity in median MELD/PELD at transplant. Skipping Comparable Local Waitlist Candidates Under Share 35. E. Chow, 1 A. Massie, 1 X. Luo, 1 C. Wickliffe, 1 S. Gentry, 2 A. Cameron, 1 D. Segev. 1 1 Surgery, Johns Hopkins University, Baltimore, MD; 2 Mathematics, US Naval Academy, Annapolis, MD. Under Share-35, deceased donor (DD) livers are offered regionally to candidates with MELD³35 before locally to lower MELD candidates. The characteristics and outcomes of local candidates, who would have been offered these livers first but were skipped under Share-35, have not been described. Methods: We used SRTR data to study local candidates from June 18, 2013 to Feb 2, 2015 who had the highest allocation MELD (excluding Status 1A), for the longest time, and were ABO-identical or compatible to regional candidates transplanted at MELD³35. Results: 1,477 livers were regionally shared to MELD 35+ recipients with transplant MELD of median 39 (IQR 37-40). 
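The Share-35 ordering that produces the "skipping" analyzed above can be made concrete with a deliberately simplified sketch: candidates with allocation MELD >= 35 (regional or local) are offered the organ before lower-MELD local candidates. Real allocation also involves Status 1A/1B, ABO-identical versus compatible tiers, waiting time, and other factors that are omitted here; everything below is illustrative only.

from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    meld: int
    local: bool  # listed in the donor's local DSA

def share35_offer_order(candidates):
    # Tier 1: any candidate (regional or local) with allocation MELD >= 35,
    # highest MELD first.  Tier 2: remaining local candidates by MELD.
    tier1 = sorted((c for c in candidates if c.meld >= 35), key=lambda c: -c.meld)
    tier2 = sorted((c for c in candidates if c.local and c.meld < 35), key=lambda c: -c.meld)
    return tier1 + tier2

Under this ordering a local candidate at MELD 31-34 is "skipped" whenever an ABO-compatible regional candidate at MELD >= 35 is listed, which is exactly the scenario quantified above.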
1,351 (92%) skipped an ABO-identical local candidate, 121 (8%) skipped a compatible (but not identical) local candidate, and 5 (0.3%) had no active ABO-compatible local candidate at the time. There were 1035 individual candidates who were skipped at a median waitlist MELD of 31 (IQR 27-34), some of whom were skipped several times. 97 (9%) had a MELD score only 1 point less than the regional recipient. 244 (24%) had a comparable MELD (within 3 points of the regional recipient) and were listed a median of 3.4 (IQR 0.5-21) months when skipped for regional recipients listed 0.7 (IQR 0.2-5.2) months (p<0.001). Of these comparable candidates, 175 (72%) were eventually transplanted with a liver of median DRI 1.50 vs 1.37 of the skipping liver (p=0.3), 46 (19%) died or were removed for deteriorating condition, 15 (6%) remained waitlisted, and 8 (3%) were removed for other reasons. Of comparable candidates who were eventually transplanted, 88 (36%) were allocated locally, 32 (13%) regionally from the OPO they were skipped for, 54 (22%) regionally from another OPO in their region, and 1 nationally. Conclusions: While 72% of locally-skipped comparable candidates were eventually transplanted, 19% died before transplantation. Modifications to Share-35 may be warranted to reduce potentially unnecessary regional swaps between comparable candidates. Purpose: There is a significant variation in the MELD scores and subsequent morbidity among liver recipients in the US. Larger OPOs consistently serve patients with advanced disease. Previous studies have shown 2.5 times greater prevalence of transplanted patients with MELD 24 in these OPOs. CMS recent reimbursement adjustments may disproportionately affect certain programs given their increased prevalence of patients with more advanced disease. Methods: We analyzed the prevalence of transplants among patients with high UNET MELD scores and associated charges, costs, and reimbursements. We compared low, medium and high MELD score groups. Between 2014-2015, 43 liver transplants, all with >30 days survival, were analyzed. Results: Only 2 had MELD scores below 25 at transplant, both of which from live donors. 95% of patients had MELD scores above 25 and among these, 18% had MELD 40 or were Status 1. Compared to the national average, our MELD scores were: 25% 21-30 (National 21.5%, p >0.05), 70.5% 31-40 (National 25.9%, p < 0.001), and 4.5% Status 1 (National 5.9%, p >0.05). Payer mix of patients with MELD 40/Status 1 was 22% Medicaid, 22% Medicare and 56% commercial plans. For MELD scores 21-30, hospital charges averaged $645,214 and reimbursements were $150,706. For MELD scores 31-39, charges were $686,720, and reimbursement were $139,776. Reviewing MELD 40/ Status 1 patients, the average hospital charges and reimbursements were $1,136,813 and $293,776 respectively. We compared their amounts to the MELD 40 patients who had hospital charges of $625,371 and reimbursements of $142,051. This demonstrated a loss of $843,037 for the first group and $483,320 for the second. Length of stay was 32 days for MELD 40/ Status 1 and 8 days for MELD 40 (p < 0.000). Conclusion: 40 MELD patients have a huge financial impact on institutions. The difference between 25-39 and 40 MELD points is greater than half a million dollars. These data reflect and include dialysis, intubation and ICU stay but do not include rehabilitation expenses which will be the focus of another study. 
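The per-patient losses quoted above follow directly from the charge-to-reimbursement gap (strictly a charges-minus-reimbursement difference rather than an accounting loss):

\[ 1{,}136{,}813 - 293{,}776 = 843{,}037 \quad\text{(MELD 40/Status 1 group)}, \qquad 625{,}371 - 142{,}051 = 483{,}320 \quad\text{(comparison group)}. \]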
We find that our institution, which likely reflects many institutions in our OPO, serves sicker patients and therefore incurs higher costs but receives lower reimbursements, as reimbursements are based on national expected care costs for healthier patients. Broader sharing in the US may equalize costs. Payers should take into account the added financial burden of performing transplants in high-MELD patients. Background: It has been shown that EAD in LT is a useful tool to predict intermediate (5-year) mortality. The purpose of this study is to determine the utility of EAD and its individual components in the prediction of long-term patient survival. Methods: Charts from 1708 LT performed between 2001 and 2014 at Indiana University were reviewed. EAD was defined by the presence of 1 or more of the following components: (i) total bilirubin ≥10 mg/dL on post-operative day (POD) 7, (ii) INR ≥1.6 on POD 7, and (iii) ALT or AST ≥2000 IU/mL within the first 7 PODs. Results: Of 1708 LT, 480 (28%) had EAD. Patient survival with vs. without EAD was 85% vs. 90% at 1 year, 68% vs. 78% at 5 years, and 57% vs. 68% at 10 years. Multivariate Cox regression analysis demonstrated that EAD, MELD, donor age, recipient age, HCV, and DCD livers were significant independent variables associated with patient survival. The EAD rate was higher in DCD LT (37/75, 49%). However, no survival difference was observed in DCD LT with or without EAD up to 10 years. In a Cox regression patient survival analysis controlling for MELD, donor age, and CIT: (i) if EAD was defined by INR only, there was no difference in patient survival compared with those who did not have EAD; (ii) if EAD was defined by bilirubin only or AST/ALT only, there was a significant drop in patient survival at 10 years (58% vs. 68%); (iii) if EAD was defined by the combination of bilirubin and AST/ALT or INR and AST/ALT, patient survival further decreased to 43% at 10 years; (iv) the worst patient survival (<20% at 10 years) was observed when EAD was defined by the combination of INR, bilirubin, and AST/ALT. EAD is independently associated with decreased long-term patient survival in a multivariate regression model. Although EAD is more common in DCD LT, no survival disadvantage was seen compared to no-EAD DCD LT. There was incrementally worse survival at 10 years with the addition of each component of EAD. EAD defined by all 3 components is associated with far worse patient survival. Aim: Studies suggest that certain patients with liver cirrhosis and a low MELD score (<15) remain at high risk for mortality. Current data are largely limited to transplant registry data, which capture only a fraction of this patient population. Our aim was to assess the mortality rate among patients with a low MELD in a large population and to determine factors associated with this mortality. Methods: HealthLNK (HL) captures EMR data for ~2 million patients treated within 6 diverse healthcare networks in the greater Chicago area (Northwestern, Univ of Chicago, Rush Univ, Univ of IL at Chicago, Loyola Univ, Cook County) between 2006 and 2012. HL data were merged with UNOS and death registry data. Patients with cirrhosis were identified by ICD-9 codes (571.2, 571.5, 571.6) and Fib-4 scores. MELD scores were calculated and demographics were collected. The low MELD score cohort was identified as patients whose MELD was never ≥15 during the study. The low-to-high MELD cohort had an initial MELD <15 but a final MELD ≥15. The high MELD cohort consisted of patients with an initial MELD score ≥15.
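For reference, the laboratory MELD score in use during the study period (without the later sodium term) is

\[ \mathrm{MELD} = 3.78\,\ln(\text{bilirubin, mg/dL}) + 11.2\,\ln(\mathrm{INR}) + 9.57\,\ln(\text{creatinine, mg/dL}) + 6.43, \]

with each laboratory value floored at 1.0, creatinine capped at 4.0 mg/dL, and the final score capped at 40. The abstract does not state which capping conventions HealthLNK applied, so this is given only as the standard formulation.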
A multivariate Cox proportional hazards model was fitted to determine factors associated with mortality. Results: A cohort of 9880 patients with cirrhosis was identified. 6928 patients had an initial MELD <15, of which 4946 had a MELD that never reached ≥15 during the study; 2952 had an initial MELD ≥15. Both the low and high MELD groups had a mean age of 62 yrs. Comparing low vs. high MELD, 53% vs. 63% were male, 61% vs. 58% were white, and the etiology of cirrhosis was ETOH in 23% vs. 43%, NASH in 19% vs. 13%, and HCV in 37% vs. 27%. 1089 (29%) vs. 1318 (45%) died during the study period (median follow-up 38 months; range 0.6-83). Mortality in patients with MELD <15 throughout the study period was increased with malnutrition (HR 2.9; 95% CI 2.4-3.3), HRS (HR 2.7; 95% CI 1.6-4.4), ascites (HR 1.7; 95% CI 1.5-2.0), HE (HR 1.4; 95% CI 1.2-1.7), thrombocytopenia (HR 1.4; 95% CI 1.1-1.5), male sex (HR 1.3; 95% CI 1.1-1.5), and age (per yr; HR 1.04; 95% CI 1.0-1.1). Factors associated with reduced mortality were HCV (HR 0.76; 95% CI 0.7-0.9), NASH (HR 0.51; 95% CI 0.4-0.7), and variceal bleeding (HR 0.67; 95% CI 0.5-0.9). All factors p<0.001. Conclusion: Mortality among cirrhotic patients with a low MELD is higher than previously reported. ~30% of cirrhotic patients died within ~3 years despite their MELD never reaching ≥15. Patients at highest risk are those with malnutrition. Identification of low MELD patients at high risk of death is important for considering alternative treatments such as LDLT or priority in organ allocation. Implementation of MELD-based allocation (2002) transformed liver transplant waitlist mortality by prioritizing the "sickest first." We examine the extent to which removal of the "sickest" patients on the liver transplant waitlist is associated with the 2007 regulatory CMS policy, "Conditions of Participation (COP)." Study Design: We used UNOS/SRTR data to identify 90,765 US adults (≥18 years of age) on deceased donor liver transplant waiting lists at 102 transplant centers from April 2002 to December 2012. We used interrupted time series regression analysis to quantify national trends in the incidence of candidate delisting due to illness severity ("too sick to transplant" or "medically unsuitable") pre- versus post-COP implementation (June 28, 2007). Results: We observed increasing trends in delisting due to illness severity in the setting of comparable demographic and clinical characteristics pre- versus post-COP implementation. The incidence of delisting abruptly increased by 16% at the time of COP implementation, and the likelihood of being delisted continued to increase by approximately 3% per quarter thereafter (p<0.001). COP did not impact 1-year post-transplant mortality trends (p=0.38). Conclusions: CMS "Conditions of Participation" implementation critically altered candidate selection for liver transplantation in 2007. Meaningful improvement in post-transplant survival, at the societal cost of removing increasingly more of the sickest patients, was not observed. Policy-makers and clinicians should consider population-level survival in the design of performance measures and in clinical decision-making. Background: Financial disincentives remain a major barrier to increasing rates of living kidney donation. A recent study found that 96% of living kidney donors (LKD) reported a mean of $523 in direct costs during the evaluation process (Rodrigue, 2015). There is limited research on the costs LKDs incur during the peri-operative period. We conducted a longitudinal, prospective study of adults undergoing LKD evaluation at 3 centers. This abstract reports a subset of the data for participants who became LKDs, collected two weeks prior to their scheduled surgery date.
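The interrupted time series analysis above is typically specified as a segmented regression; a standard formulation consistent with the reported level and slope changes (the abstract does not give the exact model) is

\[ Y_t = \beta_0 + \beta_1 t + \beta_2\,\mathrm{post}_t + \beta_3 (t - t_0)\,\mathrm{post}_t + \varepsilon_t, \]

where Y_t is the quarterly delisting incidence, t indexes calendar quarters, t_0 is the COP implementation quarter (June 28, 2007), and post_t indicates post-implementation quarters; \beta_2 captures the abrupt level change (the reported 16% jump) and \beta_3 the change in trend (about 3% per quarter).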
LKDs completed interviews and demographic surveys. Transcripts were analyzed inductively for themes about experienced and expected financial difficulties, as well as planned coping mechanisms. The cohort included 48 LKDs (response rate 94%). Most were white (83%), female (67%), married (75%), employed (75%), and had some college education (73%); 37% had an income of >$100,000/year. Many LKDs (42%) reported experiencing financial difficulty from donor evaluation; a third of these indicated that this was a significant burden. One-fourth described financial inconveniences related to taking time off work: "I'm losing out on making money." Travel expenses were particularly burdensome (21%): "It's been pretty cost prohibitive flying back and forth." Some LKDs (

Adult potential LKDs at 3 US transplant centers completed semi-structured interviews and demographic surveys before their donor eligibility was finalized. Participants were asked what they thought should happen if a LKD was willing to accept greater risks than the center was comfortable with. Open-ended responses were analyzed iteratively using grounded theory to identify emergent themes. Logistic regression assessed correlations between demographics and preferences for donor involvement in decisions.

Introduction: Post-transplant lymphoproliferative disorder (PTLD) is a potentially fatal complication of organ transplantation commonly associated with Epstein-Barr virus (EBV) infection. EBV viral loads are typically monitored by qPCR, but EBV viremia does not always portend PTLD, and EBV+ PTLD can occur in the absence of increased viral levels. Thus, new biomarkers are critical to improving patient outcomes. Methods: miRNA microarray profiling (Taqman-TLDA cards) was used to quantitate expression levels of 639 miRNAs in RNA isolated from spontaneously arising EBV+ B lymphoma cell lines from patients with PTLD (SLCL) (n=6), B lymphoblastoid cell lines (LCL) generated in vitro with the B95.8 strain of EBV (n=4), and normal human B cells (n=4). One-way ANOVA was used to identify the miRNAs differentially modulated among the groups. Further, real-time qPCR analysis was used to validate the specific differentially expressed miRNAs identified from microarray profiling. Exosomes are nanometric (30-150 nm) membrane vesicles that are released by most cell types. Studies have shown that exosomes contain and transfer miRNAs between cells, thus allowing for local and distant intercellular communication. Exosomes were isolated from the culture supernatant of SLCL using ExoQuick-TC and centrifugation. RNA was isolated from exosomes and miRNAs were quantitated. Results: One hundred and thirty-three (133) miRNAs were significantly modulated by EBV infection in both LCL and SLCL. Sixteen miRNAs were uniquely modulated in SLCLs (13 up, 3 down). Real-time qPCR validated a panel of 5 miRNAs (miR-19a, miR-100, miR-106a, miR-422a, miR-449b) that were significantly increased in SLCLs as compared to normal EBV+ B cells. Exosomes, which were confirmed by transmission electron microscopy, were present at very high levels in SLCL supernatants. Further, SLCL exosomes expressed miRNAs including miR-19a and miR-106a. Interestingly, miR-30c, which was downregulated in SLCLs, was increased in SLCL exosomes. Conclusion: EBV+ B lymphoma cell lines from PTLD patients produce miRNA-containing exosomes, and this may constitute a mechanism of intercellular communication by miRNA transfer to other cells.
Importantly, cellular and circulating cell-free miRNAs have great potential as non-invasive biomarkers of PTLD.

Introduction: CTLA4Ig is a novel IS agent that acts through costimulatory blockade of T cells and obviates the need for CNI. Despite benefits in improving renal function, CTLA4Ig therapy is associated with an increased risk for PTLD despite limiting use to pts who are EBV sero(+). Here we examined the risk for EBV and other viral infections in pts receiving CTLA4Ig- vs. CNI-IS. Methods: The incidence of EBV, CMV and BKV viremia (by PCR), PTLD and BK-associated nephropathy (BKAN) was compared in 39 CTLA4Ig- vs. 1011 CNI-IS-treated pts. CMV-specific memory T cell levels (CMV-T) were tested by intracellular cytokine flow cytometry in 9 CTLA4Ig-treated pts. All pts were >18 years and 40% of those were HLA-sensitized. The median follow-up was 18.2 months post-transplant and the median duration of CTLA4Ig treatment was 19.1 months. Results: The results are shown in the Table. Briefly, there was no significant difference in the CMV and BKV viremia rates and peak viral levels between the two groups. In addition, no pt in the CTLA4Ig group developed BKAN. In contrast, the incidence of EBV viremia (15.4% vs. 3.3%, p<0.0001) and PTLD (7.7% vs. 0%, p<0.0001) in the CTLA4Ig group was significantly higher than in the CNI-IS group. In addition, the EBV levels at onset of PTLD were only 41.3±14.3 copies/PCR. Two of the PTLD cases were CNS-based and resulted in deaths, both occurring <1.5 years post-CTLA4Ig therapy. CMV-T was detected in all CMV sero(+) CTLA4Ig-treated pts, suggesting no effect of CTLA4Ig on CMV-T function. Data for EBV-T are not known. Conclusions: CTLA4Ig-treated pts were not at increased risk for CMV or BKV viremia or BKAN. However, there was a significant risk for EBV viremia and PTLD in this group compared to the CNI-IS group. At this point, the mechanism(s) are unclear. However, these findings suggest increased vigilance for EBV-PCR(+) is warranted. Early treatment with IVIG+ACV may be helpful.

EBV is linked to a variety of lymphoid and epithelial malignancies. In transplant recipients, EBV is associated with the development of B cell lymphomas in post-transplant lymphoproliferative disorder (PTLD). We performed an integrative, multi-cohort analysis of EBV-positive and -negative tumor samples to identify shared gene signatures associated with EBV oncogenesis. We selected three gene expression data sets (Gastric Cancer, PTLD, and Hodgkin's Lymphoma) that compared EBV-positive to EBV-negative tumors (n=170, collected from Gene Expression Omnibus). For each data set, we plotted the geometric mean of each gene probe to check for proper normalization and batch effects. The Hedges' g effect size, a measure of magnitude, was calculated for each gene in each dataset. To study the absolute change in expression across the three data sets, a meta-effect size was created by combining the gene effect sizes from each data set. Using a False Discovery Rate of <0.05 and an effect size of 0.8, we utilized two meta-analysis methods, the random-effects model and Fisher's sum-of-logs method. Leave-one-out validation was used to prevent bias from a large dataset. We identified 30 human genes that were significantly up-regulated and five human genes that were significantly down-regulated in EBV-positive tumors. Of the top 15 most significant genes, nine have proposed roles in oncogenesis; for example, PMAIP1 is up-regulated in adult T cell leukemia and SESN2 contributes to p53 signaling.
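For readers less familiar with the effect-size machinery summarized in the methods above, the sketch below illustrates one standard way to compute Hedges' g per gene and pool it across datasets with a DerSimonian-Laird random-effects model. It is a generic illustration under those assumptions, not the authors' pipeline, and the function names are hypothetical.

import numpy as np

def hedges_g(x_pos, x_neg):
    # Standardized mean difference with small-sample (Hedges) correction, plus its variance.
    n1, n2 = len(x_pos), len(x_neg)
    pooled_sd = np.sqrt(((n1 - 1) * np.var(x_pos, ddof=1) +
                         (n2 - 1) * np.var(x_neg, ddof=1)) / (n1 + n2 - 2))
    d = (np.mean(x_pos) - np.mean(x_neg)) / pooled_sd
    g = d * (1 - 3 / (4 * (n1 + n2) - 9))
    var_g = (n1 + n2) / (n1 * n2) + g ** 2 / (2 * (n1 + n2))
    return g, var_g

def random_effects_summary(gs, vs):
    # DerSimonian-Laird pooling of per-dataset effect sizes gs with variances vs.
    gs, vs = np.asarray(gs, float), np.asarray(vs, float)
    w = 1 / vs
    fixed = np.sum(w * gs) / np.sum(w)
    q = np.sum(w * (gs - fixed) ** 2)
    tau2 = max(0.0, (q - (len(gs) - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_star = 1 / (vs + tau2)
    return np.sum(w_star * gs) / np.sum(w_star)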
Only FGR was previously shown to be up-regulated in EBV-associated tumors.

Epstein-Barr virus (EBV)+ B cell lymphoma in post-transplant lymphoproliferative disorder (PTLD) is a major problem in organ transplantation. Reduction of immunosuppression sometimes results in tumor regression but also puts the graft at risk for rejection. Therefore, treatments that combine anti-tumor and anti-rejection properties would be of great importance to PTLD patients. Previous studies have shown that multiple cellular signal transduction pathways, including the PI3K/Akt/mTOR pathway, are dysregulated in EBV+ B cell lymphomas, and that inhibiting mTOR with rapamycin results in partial inhibition of proliferation. We hypothesized that targeting molecules upstream of mTOR would inhibit EBV+ B cell lymphoma proliferation. Moreover, blockade of this pathway may prolong graft survival, since this pathway is also known to be activated downstream of the T cell receptor. Phosphorylation of the PI3K/Akt/mTOR pathway was determined by protein array on cellular lysates from EBV+ B cell lymphoma lines from patients (n=5) with PTLD. The effect of small molecule inhibitors of the pathway on cellular proliferation was also determined. Finally, the effect of these inhibitors on graft survival was assessed using a fully MHC-mismatched heterotopic mouse heart transplant model (BALB/c to C57BL/6).

Kidney transplant recipients (KTR) receiving sirolimus accrue fewer skin cancers than KTR receiving calcineurin inhibitor (CNI)-based immunosuppression, potentially due to effects of sirolimus and CNI on skin-infiltrating T cells. Peripheral blood samples and skin biopsies from sun-exposed and non-sun-exposed areas of the arm were collected from three groups of patients (all n=15): 1) KTR receiving sirolimus, 2) KTR receiving CNI, and 3) non-immunosuppressed patients with chronic kidney disease (CKD). The immunophenotype of T cells in skin and blood was characterised by multicolor flow cytometry. The absolute number of CD8 memory cells (CD45RO+RA-) was greater in sun-exposed skin in sirolimus- and CNI-treated KTRs compared to CKD patients (p<0.01 and p<0.05, respectively). The proportion of CD4+ T cells with a Treg phenotype (CD127lo CD25hi) was higher in non-sun-exposed skin in sirolimus- compared to CNI-treated KTRs (p<0.05). In sirolimus-treated patients, Treg and memory CD8 T cells showed poor correlation between blood and sun-exposed skin (r=-0.22 and 0.21, respectively). Treg and memory CD8 T cells in CNI-treated patients had better correlation between blood and sun-exposed skin (r=0.8 and 0.35, respectively). Conclusion: Type of immunosuppressive treatment and sun exposure modify the frequency of Treg and memory CD8+ T cells in the skin of KTRs and influence the correlation in immune cell numbers between blood and skin.

Recipients undergoing solid organ transplant between 1991 and 2010 were identified in the Canadian Organ Replacement Register and linked to the Ontario Cancer Registry to identify recipients with PTM. Recipients with PTM were matched to recipients without any PTM using a propensity score, and overall survival (OS) was compared using the log-rank test and Cox proportional hazards models. For cause-specific mortality, organ/graft failure, and post-transplant cancer incidence, cause-specific hazards models were used and the cumulative incidence of the events was plotted and compared using Gray's test. Results: A total of 443 recipients with PTM were matched to 886 recipients without PTM.
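A minimal sketch of the kind of 1:2 propensity-score matching described above is shown below (greedy nearest-neighbour matching on the logit of the score, without replacement). The synthetic data frame, the "ptm" label, and the covariate names are hypothetical and are not the registry variables.

import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "age": rng.normal(55, 12, n),
    "male": rng.integers(0, 2, n),
    "ptm": rng.binomial(1, 0.2, n),   # 1 = pre-transplant malignancy (hypothetical flag)
})

covs = ["age", "male"]
ps = LogisticRegression(max_iter=1000).fit(df[covs], df["ptm"]).predict_proba(df[covs])[:, 1]
df["logit_ps"] = np.log(ps / (1 - ps))

controls = df[df["ptm"] == 0].copy()
matched_rows = []
for _, case in df[df["ptm"] == 1].iterrows():
    picks = (controls["logit_ps"] - case["logit_ps"]).abs().nsmallest(2).index
    matched_rows.append(pd.concat([case.to_frame().T, controls.loc[picks]]))
    controls = controls.drop(picks)          # match without replacement
matched = pd.concat(matched_rows, ignore_index=True)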
Recipients with PTM had a worse OS compared to recipients without PTM (median OS: 10.3 versus 13.4 years, p<0.001). When stratifying the patients with PTM by the time between cancer diagnosis and transplantation, only the subgroup with intervals ≥5 years was at increased risk of all-cause mortality (HR 1.61, 95% CI: 1.32-1.97). Similarly, only recipients with high-risk PTM (those that require minimum cancer remission times of 5 years before listing for transplantation) were at increased risk of all-cause mortality (HR 2.04, 95% CI: 1.58-2.64). Recipients with PTM were not only at increased risk of cancer-specific mortality (p<0.01) but also at increased risk of non-cancer death (p=0.02). Similarly, recipients with PTM were at increased risk of organ/graft failure and death with functioning graft (p=0.02 and p=0.01). Conclusions: Transplant recipients with PTM are at increased risk of all-cause mortality, but this is not driven solely by the increased risk of cancer-specific mortality. The increased risk of non-cancer mortality may be associated with increased risk of organ/graft failure.

Pro-resolution mediators such as resolvins and axonal guidance receptor-ligand interactions are increasingly recognized to regulate leukocyte trafficking into sites of inflammation. The implication of this biology in chronic allograft rejection is as yet unknown. We developed microfluidic platforms that mimic the human allograft microenvironment for the identification and screening of molecules that enhance pro-resolution responses. In these studies, we evaluated the effects of the axonal guidance cues Netrin-1 and Slit-2 on fMLP- or IL-8-induced neutrophil migration. When infused into the device, human leukocytes can migrate either towards or away from inflammatory foci through micro-channels and/or through mazes that model chemo-physical encounters in vivo. Slit-2 (1.5 µg/mL) significantly reduced the migratory response induced by fMLP from 76.9±3.9% to 25.8±6.9% of migrating cells (p<0.001). Netrin-1 (0.5 µg/mL) also reduced migration in response to IL-8 from 19.6±2.2% migrating cells to 11.1±1.7% (p<0.05). Interestingly, although both cues are reported as chemorepellents, we observed a striking pattern of chemoinhibition in response to pro-migratory chemokines. To study patrolling/surveillance mechanisms, we analyzed neutrophil migration in mazes resembling the interstitium of a graft. LTB4 potently elicited directional migratory responses towards the inflammatory focus (average ~95% of maze area traversed). In contrast, C5a induced an exploratory migratory pattern within the maze without a directional response (~40% traversed, p<0.001). Thus, classic chemoattractants induce diverse migratory phenotypes. We also evaluated the effects of chemorepellents on T-lymphocyte trafficking. High concentrations of SDF-1 (10 µg/mL) potently repelled pooled populations of T-lymphocytes (25.7±3.1% repelled, 13.3±2.0% attracted; p<0.001). In contrast, SDF-1 had no repellent effects on CD4+ Treg migration (5.7±2.0% repelled, 5.1±1.8% attracted). This suggests that local release of SDF-1 within allografts has high potential to inhibit T effector cell infiltration, but has no effect on Treg trafficking and/or local immune regulation. We conclude that pro-resolving mediators represent novel families of molecules with high potential to regulate leukocyte trafficking into allografts. Allograft-on-a-chip microfluidics supports the translation of these discoveries and the screening of novel anti-inflammatory agents.
The mechanistic target of rapamycin (mTOR) is a kinase that functions in at least two complexes: rapamycin (RAPA)-sensitive mTORC1 and RAPA-insensitive mTORC2. These complexes play critical roles in myriad essential cellular functions. In dendritic cells (DC), targeting of mTORC1 using RAPA inhibits differentiation, maturation, and allostimulatory function; absence of mTORC2 in DC leads to an enhanced inflammatory profile. However, the allostimulatory role of mTORC2 in DC is undefined. In addition, while mTOR is an important metabolic regulator in DC, the role of mTORC2 in regulation of DC metabolism is unknown. To elucidate the role of mTORC2 in DC metabolism and alloimmunity, we generated novel mice deficient in mTORC2 (mTORC2 DC-/-) specifically in conventional CD11c+ DC. We hypothesized that ablation of mTORC2 in DC might alter metabolic activity, resulting in augmented T cell responses and alloimmunity in vivo. The glycolytic capacity and mitochondrial respiratory activity of wild-type and mTORC2 DC-/- DC generated from mouse BM were analyzed using a Seahorse XF Bioanalyzer. The role of mTORC2 in DC instigation of immune responses was investigated in vivo by: 1) intratumoral injection of mTORC2 DC-/- DC in a B16 melanoma model, and 2) determining the impact of mTORC2 DC-/- on in vivo alloreactive T cell responses and survival of mTORC2 DC-/- male>female skin and heterotopic heart allografts. mTORC2 DC-/- DC exhibited increased glycolytic and mitochondrial metabolic activity and enhanced T cell allostimulatory function in vitro compared to control DC.

Background: Our recent work demonstrated that laminins α4β1γ1 and α5β1γ1, components of the lymph node (LN) stroma, are differentially regulated. Tolerance is associated with relatively higher laminin α4 expression, while immunity is associated with laminin α5 upregulation. These differences resulted in altered regulation of alloreactive T cell and antigen-presenting cell trafficking within, and extravasation through, high endothelial venules. Here, we tested the hypothesis that laminins directly impact CD4 T cell activation and polarization. Methods: CD4 T cells from C57BL/6 mice were activated with anti-CD3 with or without laminin α4 and/or α5. The laminin α5 receptor was blocked with anti-α6 integrin mAb. After 3-5 days of culture, proliferation and Foxp3, CD69, CD25, CD44 and CD62L expression were determined by flow cytometry. For in vivo experiments, TCR Tg CD4 T cells from TEa mice, which recognize donor I-Ed presented by recipient I-Ab, were transferred with or without anti-α6 integrin mAb into C57BL/6 mice that had received BALB/c donor-specific splenocytes (DST). TEa T cell proliferation and activation were then analyzed. Results: Addition of laminin α4 to anti-CD3-stimulated cultures reduced CD4 T cell proliferation and activation. In contrast, addition of laminin α5 increased proliferation and activation marker expression by up to 8-fold. Addition of anti-α6 integrin blocking mAb abrogated the stimulatory effect, demonstrating that α6 integrin is the main receptor for laminin α5. Moreover, laminin α5 also reduced the proportion of Treg cells induced by TGF-β by 2-fold. This effect was also abrogated by anti-α6 integrin mAb. The proportion of Treg cells induced by TGF-β was unaltered by the presence of laminin α4. However, addition of laminin α4 was able to partially reverse the inhibitory effect of laminin α5 on Treg induction.
In vivo, systemic blockade of α6 integrin resulted in decreased proliferation and activation of alloantigen-specific CD4 T cells in the LNs of DST-treated recipient mice. Conclusion: These results demonstrate that the laminin trimers α4β1γ1 and α5β1γ1 are co-inhibitory and co-stimulatory ligands, respectively, for CD4 T cells. Laminin α5β1γ1 is recognized by integrin α6 expressed by T cells. This confirmed the immunogenic role of this laminin-specific receptor and suggested a co-stimulatory role for laminin α5β1γ1 in vivo. These findings demonstrate that laminins are not only passive LN structural molecules, but act as molecular switches for tolerance and immunity by directly influencing T cell functions.

S1P and S1PR1 are used by T cells to migrate from the thymus across microvascular endothelium to the blood circulation, and across medullary lymphatic endothelium of the lymph node (LN) for egress to efferent lymphatics and the blood circulation. However, whether S1PRs regulate T cell migration from tissues across afferent lymphatic endothelium and into the draining LN is unknown. We hypothesized that different S1PRs are utilized by CD4 T cells and lymphatic endothelial cells (LECs) to promote CD4 T cell afferent lymphatic migration. Methods: MS-1 blood endothelial and SVEC4-10 lymphatic endothelial cell lines and human and murine primary LECs were used in transwell assays. Human and murine CD4 T cells placed in the upper chamber migrated across endothelium toward chemokines, cytokines or S1P. Results: Human and murine CD4 T cell migration toward S1P was enhanced by T cell-LEC interactions. In contrast, migration to CCL19, CCL21 and several other chemokines and cytokines was not enhanced by T cell-LEC interactions. CD4 T cell migration toward S1P was both chemokinetic and chemotactic. Pretreatment of CD4 T cells, but not LEC, with the non-specific S1PR inhibitors pertussis toxin, FTY720, or S1P inhibited CD4 T cell migration across LEC, showing that S1PRs play a critical role in CD4 T cell trans-lymphatic endothelial migration. Specific S1PR1, S1PR3, and S1PR4 inhibitors each blocked CD4 T cell function and migration, but did not affect the functional ability of LEC to promote CD4 T cell migration. In contrast, the S1PR2 inhibitor specifically blocked LEC and their ability to enhance CD4 T cell migration, but did not act on T cells to block migration. S1P upregulated VCAM-1 and VE-cadherin, but not ICAM-1, expression on LECs, and blocking anti-VCAM-1 or anti-VLA-4 mAbs inhibited CD4 T cell migration. Conclusion: CD4 T cells and LEC utilize distinct S1PRs to regulate CD4 T cell migration across afferent lymphatics in a VCAM-1- and VLA-4-dependent fashion. S1P engages active processes in both T cells and LEC to promote migration. These results demonstrate for the first time unique roles for S1PRs in regulating T cell and LEC functions in migration. These findings suggest new and specific drug targets for regulating lymphatic migration in immunity and tolerance.

Antibody-mediated lymphocyte depletion is used as induction therapy in sensitized transplant recipients. Following lymphoablation, peripheral lymphopenia triggers homeostatic T cell proliferation (HP) and enhanced thymopoiesis. However, the relative contribution of these pathways to T cell recovery and allograft rejection is unknown.
Using a mouse model of heterotopic heart transplantation and a murine analog of Thymoglobulin (mATG), we have reported that the recovery of CD8 T cells following mATG depletion is associated with allograft rejection and requires depletion-resistant memory CD4 T cells and CD40-expressing B cells. The goal of the current study was to investigate the role of the thymus during T cell reconstitution in mATG-treated recipients. Limiting CD4 T cell help by CD4 T cell depletion or CD154 blockade significantly reduced the numbers of single-positive CD8 thymocytes generated in mATG-treated mice, suggesting a potential role for residual memory CD4 T cells in thymopoiesis. To definitively test whether the thymus is required for T cell reconstitution, C57BL/6 thymectomized and euthymic mice were transplanted with BALB/c heart allografts and treated with mATG (25 mg/kg i.p.) on days 0 and 4 posttransplant. The recovery of CD4 and CD8 T cells in thymectomized recipients was markedly impaired compared to euthymic mice (10%, 13% vs.

graft survival of up to 40 days. 2) In naïve non-grafted mice and those with syngeneic grafts, NRP1 was found on only a few CD4+FoxP3- conventional T cells (Tconv) and CD8+ T cells (<5%), but on a high proportion of CD4+FoxP3+ Treg (~80%) and CD11c+ DC (50-60%). 3) In major and minor mismatch settings, WT recipients displayed no major changes in NRP1 expression within CD4+ Treg, but showed NRP1 upregulation to about 20%, 10% and 80% on CD4+ Tconv, CD8+ T cells and CD11c+ DC, respectively. NRP1-KO recipients presented complete absence of NRP1 from CD4+ Treg and CD4+ Tconv but NRP1 upregulation on CD8+ T cells and CD11c+ DC at levels similar to WT animals. 4) CD25 and CD69 were upregulated on both WT and NRP1-KO CD4+ Tconv and CD8+ T cells following transplantation, reflecting allogeneic activation. Overall, expression levels were slightly higher in the major as compared to the minor mismatch groups. Conclusion: NRP1 expression has been reported earlier to be beneficial for Treg. We show that NRP1 is upregulated on T effector cells and DC upon allogeneic stimulation, and that its deletion from CD4+ T cells prolongs graft survival in a minor mismatch skin transplant model. We suggest that in this transplant setting, NRP1 expression is not crucial for Treg suppressive function but rather necessary for CD4+ T cell activation and effector function.

The challenge of immune modulation in transplantation is to inhibit pathologic alloreactivity while minimizing off-target toxicities and preserving protective immunity. Data have emerged demonstrating the role of cellular metabolism in controlling T cell function and fate. After antigen recognition, naïve cells undergo rapid expansion, a process fueled by a switch from oxidative phosphorylation (OXPHOS) to glycolysis. Regulation of this process is in part controlled by mTOR signaling. Studies have shown that inhibition of mTOR using rapamycin promoted OXPHOS, resulting in augmented memory generation in pathogen-elicited T cell populations. However, while rapamycin augmented the CD8+ T cell response in the context of an infection, it failed to do so in the setting of a transplant. Here, we hypothesized that differences in T cell bioenergetics in pathogen- vs. graft-elicited antigen-specific CD8+ T cell populations drive the paradoxical effect seen with rapamycin. To investigate this, we used a transgenic system in which both the antigen of interest and the responding monoclonal T cell population were identical.
TCR-transgenic OT-I and OT-II T cells specific for OVA were transferred into recipient B6 mice. Mice were either infected with OVA-expressing gamma-herpesvirus (gHV-OVA) or grafted with an OVA-expressing skin graft. Secondary lymphoid organs were harvested at the peak of the response (day 10). We found that the gHV-elicited antigen-specific CD8+ T cells showed higher KLRG expression as compared to the graft-elicited cells (32.4±4.5% vs. 5.5±2.1%, p<0.001), suggesting an altered differentiation status in monoclonal TCR tg T cells stimulated by gHV as compared to a graft. Furthermore, our study identified metabolic differences in gHV- vs. graft-elicited T cells. First, gHV-elicited CD8+ T cells exhibited increased expression of the glucose transporter GLUT-1 relative to the graft-elicited T cells (MFI 2548±105 vs. 1235±117, p<0.001). Second, gHV-elicited CD8+ T cells exhibited an increase in CD36, a surface receptor for long-chain fatty acids (MFI 2548±105 vs. 2221±82, p=0.04), suggesting that gHV-elicited CD8+ T cells may derive more fatty acid fuel from the extracellular environment as compared to graft-elicited T cells, which may preferentially utilize intracellular sources. These data shed light on metabolic differences in T cell populations generated in response to transplant vs. infection. Future work is aimed at determining the impact of these differences on immunosuppression.

Antibody-mediated rejection (AMR) is one of the most important barriers to improving long-term transplant outcomes. Traditional therapies for AMR do not deplete the source of antibody production, such as mature plasma cells. Bortezomib is the only plasma cell-targeted therapy used in humans with AMR. Bortezomib inhibits the constitutive proteasomes (c-20S), cellular proteases that degrade polyubiquitinated proteins and regulate many cell functions. However, c-20S are ubiquitously expressed in tissues and their inhibition leads to apoptotic cell death, which increases the toxicity of inhibition, limiting the use of bortezomib. In contrast, immunoproteasomes (IP or i-20S) are primarily expressed in immune cells. c-20S and i-20S differ only in 3 proteolytic β subunits. We show here that memory B cells and plasma cells increase their expression of i-20S in the spleen and bone marrow in mice with a heart allograft. We hypothesized that i-20S inhibition would target B cell activation, memory B cells and plasma cells without the toxicity of inhibiting c-20S. We designed and synthesized a novel, specific, non-covalent, small molecule inhibitor (DPLG3) of the IP β5i subunit. DPLG3 inhibited mouse i-20S with an IC50 of 9.4 nM and 1500-fold selectivity over mouse c-20S, and inhibited human i-20S with an IC50 of 4.5 nM and 7000-fold selectivity over human c-20S. C57BL/6 recipients of BALB/c hearts treated with low-dose sCTLA4Ig (250 µg on day 2) and 2 weeks of DPLG3 injections (25 mg/kg/day) exhibited indefinite heart survival prolongation compared to control (CTLA4Ig + vehicle) (MST >100 and 30 days, respectively, n=6/group, p<0.05). We found a significant reduction in activated B cells, memory B cells and plasma cells in the spleen and the bone marrow of DPLG3-treated mice compared to vehicle (p<0.05, n=6-8 mice/group). DPLG3 treatment induced significant suppression of donor-specific antibodies measured in the serum of allograft recipients (p<0.05, n=6-8 mice/group). In summary, we designed a novel and safe plasma cell-targeted inhibitor that could improve the treatment of AMR in transplant recipients.
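The IC50 and fold-selectivity figures quoted above are conventionally obtained by fitting a four-parameter logistic (Hill) curve to residual proteasome activity across an inhibitor dilution series. The sketch below is a generic illustration of that fit with made-up data, not the assay pipeline used in the study.

import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    # Four-parameter logistic dose-response curve.
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

conc = np.array([0.1, 0.3, 1, 3, 10, 30, 100, 300], dtype=float)   # nM (hypothetical)
activity = np.array([99, 97, 90, 75, 48, 22, 8, 3], dtype=float)   # % residual activity

params, _ = curve_fit(four_pl, conc, activity, p0=[0, 100, 10, 1], maxfev=10000)
print(f"fitted IC50 ~ {params[2]:.1f} nM")
# Fold-selectivity is then the ratio of the IC50 fitted against c-20S to that against i-20S.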
Alloantibody-mediated injury remains a major challenge for the improvement of long-term graft survival, and there is no effective therapy to prevent alloantibody generation post-transplant. The Notch pathway is a major cell-cell signaling pathway that plays a crucial role in cell development and fate. In the immune system, Notch2 is differentially expressed on B cell subsets and is known to be critical for marginal zone B cell (MZB) differentiation and positioning in secondary lymphoid organs. However, little is known about the role of Notch2 in alloantibody production post-transplantation. Herein, using a fully MHC-mismatched cardiac and skin transplant model (BALB/c into C57BL/6J), we report that Notch2 fl/fl CD19-Cre (Notch2cKO) recipients do not develop allo-specific antibodies either short- or long-term (day 8 and >120 days after transplant, respectively). This was consistent with our previous findings that transient administration of anti-Notch2 antibody (day 0, 3, 5, 7, 9, 11)

This study examines the basic mechanisms by which high levels of circulating HLA class II donor-specific antibodies can contribute to microvasculature damage in chronic antibody-mediated rejection of solid organ allografts. In a microvascular endothelial cell model, conditions for the cell surface expression of HLA-DR alone or HLA-DR and -DQ were optimized. These cells were stimulated using both monoclonal (anti-HLA-DR or -DQ) antibodies and Luminex-analyzed polyclonal antibodies isolated from alloimmunized patients. The resulting signal transduction was assessed by immunoblotting, whilst functional outcomes were evaluated by co-culture of the differently activated endothelial cells with allogeneic peripheral blood mononuclear cells and analysis of the expansion of selective CD4+ T cell sub-populations by intracellular cytokine staining. The time-course of cell surface expression of HLA-DR differed from that of HLA-DQ in microvascular endothelial cells, and optimal expression of HLA-DQ required a combination of IFNγ and TNFα, whereas IFNγ alone was sufficient for optimal HLA-DR expression. Addition of either anti-HLA-DR or -DQ antibodies (monoclonal or polyclonal) rapidly activated Akt phosphorylation, and furthermore anti-DQ antibodies induced S6RP phosphorylation. Anti-HLA-DR antibody-mediated preactivation of endothelial cells increased Th17 and decreased FoxP3hi Treg generation (Lion et al., Am J Transplant 2015). Anti-HLA-DQ antibodies also selectively modified the expansion of Th17 and Treg subpopulations, but in contrast to anti-HLA-DR antibodies, the expansion of the Th17 subset was reduced, not increased. These data indicate that whilst both anti-HLA-DR and -DQ antibodies are associated with allograft damage, they may have differential mechanisms of damage implicating the pro-inflammatory allogeneic CD4+ T cell response. Such effects have the potential to be compounded in the presence of both antibodies.

Vβ-TCR staining was performed. Donor-specific unresponsiveness was tested by MLR and 2° skin and SOT. Results: Untreated animals rejected skin grafts, SOT and VCA acutely within 14±1 days, 9±2 days, and 8±1 days, respectively. The treatment regimen extended skin and SOT graft survival (32±8 and 65±4 days, respectively). Additional DBM augmentation led to allograft survival of >150 days in skin and SOT. However, indefinite graft survival of >250 days was observed in all animals receiving the induction regimen and a VCA ± DBM.
In groups receiving a VCA ± DBM, donor chimerism was detected at 22.51±5.96% and 30.17±8.72%, respectively. Foxp3 chimerism showed recipient-derived Tregs predominantly contributing to the Treg pool (92.6±4.2%) in the early phase after transplantation (POD 14-30), whereas at later time points (POD 60-100) a large fraction of Foxp3+ cells was donor-derived (46.2±11.3%). Vβ-T cell receptor staining indicated an additional central tolerance mechanism. All long-term survivors showed donor-specific T cell unresponsiveness in vitro (MLR) while demonstrating proliferation against 3rd-party stimulators. In vivo, tolerant animals accepted donor-matched secondary skin, while 3rd-party FVB/N skin was acutely rejected. Donor-matched hearts were accepted long-term. Conclusion: Robust tolerance and immunosuppression-free long-term allograft survival can be induced with PTCy in stringent fully MHC-mismatched murine models of skin, heart, and vascularized composite allotransplantation.

Combined transplantation of solid organs and bone marrow or hematopoietic progenitor cells has resulted in successful achievement of tolerance in both laboratory animals and humans. In the current study, the conditioning regimen of posttransplant total lymphoid irradiation (TLI) and anti-thymocyte serum (ATS) was used to prepare BALB/c mice for combined transplantation of bone marrow and heart grafts from MHC-mismatched C57BL/6 donors. When wild-type BALB/c mice were used, more than 90% of recipients became tolerant, accepted the heart grafts, and developed stable mixed chimerism. When Batf3-/- BALB/c mice that selectively lack CD8+ myeloid dendritic cells (DCs) were used, tolerance and chimerism were abrogated, indicating that the CD8+ DCs were required for tolerance induction. Further studies showed that CD8+ DCs obtained from BALB/c mice just after conditioning with TLI alone, without transplantation, developed changes in surface receptors and functions associated with tolerogenic DCs, including downregulation of co-stimulatory receptors such as CD40, suppression of the MLR, and upregulation of IDO. The changes in the CD8+ DCs after TLI and tolerance induction were dependent upon the presence of natural killer T (NKT) cells, since the DC changes and tolerance observed in wild-type BALB/c mice were abrogated in Jα18-/- BALB/c mice lacking invariant NKT cells. In addition, changes induced in the NKT cells after TLI alone, including upregulation of the activation markers PD-1 and NKG2D, were abrogated in the Batf3-/- mice. Our previous studies showed that TLI conditioning upregulated the negative co-stimulatory receptor PD-1 on conventional CD4+ and CD8+ T cells and on Treg cells in an NKT cell- and IL-4-dependent manner. Interestingly, the changes in PD-1 expression were also abrogated in the Batf3-/- mice, suggesting that these NKT cell-dependent changes were in turn dependent on CD8+ DCs. In conclusion, the experimental data indicate that TLI conditioning induces interactive changes in CD8+ DCs and NKT cells that promote tolerance.

(BACKGROUND) Cardiac tolerance in nonhuman primates (NHPs) has been induced for the first time by kidney co-transplantation in combination with donor bone marrow infusion when the conditioning protocol is started six days before transplantation. However, this strategy precludes the use of cadaveric donors. Here, we investigated whether delaying the induction of mixed chimerism until four months post heart and kidney co-transplantation would achieve the same stable state of tolerance.
(METHODS) Allogeneic hearts and kidneys from the same donor were co-transplanted into NHPs treated with tacrolimus, mycophenolate mofetil and methylprednisolone. After four months, immunosuppression was stopped and each recipient underwent bone marrow transplantation with frozen cells from the organ donor. They also received nonmyeloablative conditioning consisting of 3 Gy total body irradiation, 7 Gy local thymic irradiation, anti-thymocyte globulin, anti-CD154 mAb, anti-CD8 mAb and a 28-day course of cyclosporine. Control animals underwent the same treatment but received an isolated heart transplant. (RESULTS) Recipients of combined heart and kidney transplants (n=3) achieved long-term survival of both allografts (>547, >588, >1093 days) with no evidence of acute or chronic rejection. Remarkably, tolerance of the cardiac allograft was maintained in one recipient even after the donor kidney was removed. Delayed tolerance induction was associated with donor-specific T cell hypo-/unresponsiveness, the absence of alloantibodies, and the presence of Tregs in the accepted kidney. In contrast, recipients of isolated hearts demonstrated acute rejection at 48, 147, and 152 days. (CONCLUSION) Tolerance of MHC-mismatched hearts has been achieved in NHPs through kidney co-transplantation and the delayed induction of mixed hematopoietic chimerism. Delaying mixed chimerism conditioning until four months after heart and kidney co-transplantation permits this protocol to be applied to recipients of cadaveric heart and kidney allografts.

Purpose: HSC engraftment is essential for successful bone marrow transplantation, and successful HSC engraftment allows for induction of tolerance to solid organ allografts. Engrafted HSCs are thought to differentiate into donor dendritic cells (DCs) that direct negative selection of anti-donor T cells and/or down-regulate host anti-donor responses. Experimental evidence suggests that chimerism of hematopoietic cells might promote central T cell tolerance and allow for acceptance of an allograft from the donor without the need for immunosuppression. A logical, targeted approach to enhance HSC engraftment involves blockade of PRRs, known triggers of cellular activation and stress. Methods: HSCs from WT or NLRP3 inflammasome component-deficient (NLRP3-/-, ASC-/-) mice, all on a B6 (H-2b) background, were isolated and injected into allogeneic BALB/c (H-2d) mice after preconditioning with sublethal irradiation (300 rads) and anti-CD8/anti-CD40L. Donor cells were tracked by FACS for H-2Kb congenic markers. T cell proliferation was assessed by MLR. Results: Disruption of apoptosis-associated speck-like protein containing a CARD (ASC) significantly enhanced allogeneic HSC engraftment. Interestingly, deletion of the NOD-like receptor family, pyrin domain containing 3 (NLRP3) inflammasome protein did not improve HSC engraftment. There was also a significant decrease in the ability of ASC-/- T cells to proliferate in response to allogeneic DCs, and conversely a significant defect in the ability of DCs from ASC-/- mice to stimulate allogeneic T cells.

Background: CNI toxicity is a well-known risk factor for chronic kidney disease and ESRD in organ transplants. We report our experience in converting Pancreas Transplant Alone (PTA) recipients from tacrolimus to belatacept in order to avoid further worsening of kidney function.
Methods: Six patients with PTA (mean age 52.8±7 years; 5 female, 5 Caucasian), initially maintained on tacrolimus, sirolimus and mycophenolate, with biopsy-proven native chronic kidney atrophy and fibrosis consistent with CNI toxicity, were enrolled in a 1-year prospective, intention-to-treat, IRB-approved study. Tacrolimus was weaned off over 60 days. Patients were maintained on a steroid-free regimen of belatacept, sirolimus (level 3-6 ng/ml) and mycophenolate. Results: Median time from PTA to conversion was 5.9 years (range 2.5-9.5). Pre-conversion, the mean eGFR was 28.9±9, which modestly improved over 12 months to 32.7±11. One patient could not tolerate the oral immunosuppression regimen and another patient was non-compliant and hence did not complete the study. The remaining 4 cases completed 12 months, of which only 1 case experienced an elevation of lipase requiring steroid therapy, with subsequent successful response, and continued on the same belatacept-based regimen. Serum glucose, C-peptide and hemoglobin A1c levels remained unchanged over the study period. There was no incidence of BK, CMV, EBV, PTLD or donor-specific antibody (DSA) noted during prospective monitoring. No other new clinically significant event was noted with the use of this regimen over these 12 months. These 4 cases have now each completed a mean follow-up of 17 months without any other new significant events, with stable renal function and proteinuria. Conclusions: Chronic kidney disease progression due to CNI toxicity can likely be stabilized by converting to belatacept in PTA recipients. Larger and longer-term studies are needed to ensure the safety of this approach in PTA recipients in order to preserve kidney function.

Pancreas After Islet Transplantation - A Successful Treatment Option After a Failed Islet Transplant. R. Gruessner, N. Ozden, V. Whittaker, V. Aggarwal, A. Gruessner. Department of Surgery, SUNY Upstate Medical University, Syracuse, NY. Introduction: Pancreas after islet (PAI) transplantation (tx) is a treatment option for patients seeking insulin independence through a whole organ transplant after a previous, failed cell transplant. Using the IPTR and UNOS databases, we studied PAI transplant outcome between 1/2004 and 12/2014. Methods and Results: There were 47 (29 female) recipients of a failed islet tx who subsequently underwent either a pancreas tx alone (PTA, n=35), a simultaneous pancreas and kidney tx (SPK, n=12) or a pancreas after previous kidney tx (PAK, n=10). Median recipient age was 45 (range 26-57) yrs; BMI, 23 (14-32); retx insulin requirements, 24 (10-100) units. Enteric drainage was used in 89%; systemic venous drainage, in 83%. PRA class 1 >50% was noted in 7 (13%) recipients; PRA class 2 >50%, in 4 (7%) recipients. When compared to primary pancreas matched pairs, recipient BMI and retx insulin dosages were significantly lower in the PAI group. Graft and patient survival rates in all 3 PAI recipient categories were not statistically different from those of primary pancreas txs. Irrespective of the PAI recipient category, overall 1- and 3-year PAI patient survival rates were 98% and 90%; graft survival rates, defined as total insulin independence, were 84% and 73%. A failed previous islet tx had no negative impact on kidney graft survival in the SPK category: it was 100% at 1 and 3 years post-tx and not statistically different from kidney graft survival in primary SPKs.
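Graft- and patient-survival comparisons of the kind reported above are typically made with Kaplan-Meier estimates and a log-rank test. The sketch below shows the general pattern using the lifelines package and made-up follow-up data; it is not the registry analysis actually performed.

import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical graft-survival records: time in years, event = 1 for graft loss.
df = pd.DataFrame({
    "years": [0.5, 1.2, 2.8, 3.0, 3.0, 0.9, 1.5, 2.2, 3.0, 3.0],
    "event": [1, 0, 1, 0, 0, 1, 1, 0, 0, 0],
    "group": ["PAI"] * 5 + ["primary"] * 5,
})

km = KaplanMeierFitter()
for name, g in df.groupby("group"):
    km.fit(g["years"], g["event"], label=name)
    print(name, km.survival_function_at_times([1, 3]).round(2).tolist())   # 1- and 3-year survival

pai, prim = df[df.group == "PAI"], df[df.group == "primary"]
print(logrank_test(pai.years, prim.years, pai.event, prim.event).p_value)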
Although PRA levels were higher after a previous islet transplant than for recipients of primary pancreas transplants, no statistically significant difference in patient and graft survival was noted. However, there was a trend (p=0.1) towards more favorable graft survival rates in the PAI group when compared to a matched-pair pancreas retx group. Summary and Conclusion: This analysis shows that: (1) a PAI transplant is a safe procedure with low recipient mortality, a high pancreas graft function rate both short- and long-term, and excellent kidney graft outcome; (2) pancreas (and kidney) graft survival is similar between PAI and primary pancreas transplant recipients; (3) a trend towards higher pancreas graft survival rates was noted in PAI vs. pancreas retx recipients. Patients with a failed islet transplant should be informed about the possibility of a subsequent pancreas transplant, with excellent outcomes, in their quest for insulin independence through transplantation.

Background: Transplantation (Tx) of PeB kidney grafts has been shown to result in good long-term outcomes. However, the fate of PeB grafts that suffer loss of one of their kidneys remains unknown. The DGF rate was 53% for all grafts. 4 of 15 grafts with unilateral kidney loss (UL) failed at 0 to 61 days post-Tx, 50% due to PNF and 50% due to thrombosis of the other kidney. 3 of the 4 failed grafts were from donors <5 kg. The relaparotomy rate was 73%. Graft survival for all 15 Txs (Figure 1) and post-Tx Cr levels for the 11 functioning grafts (Figure 2) are shown. Conclusion: Despite increased surgical peri-Tx morbidity, UL in PeB grafts from donors >5 kg resulted in good long-term outcomes. Conversely, the poorer outcomes of UL in PeB grafts from donors <5 kg probably warrant early removal of the contralateral kidney. The excellent long-term outcomes of single grafts from small peds donors underscore the significant compensatory capacity of these PeB grafts and provide added rationale for pursuing PeB Tx from small peds donors into adult recipients.

Background. The evolution of, and association between, body composition and glucose homeostasis after kidney transplantation have not been elucidated. Methods. We investigated 150 consecutive patients without suspected diabetes 8-10 weeks after kidney transplantation and subsequently after one year. Oral glucose tolerance tests and dual-energy x-ray absorptiometry (DXA) scans were performed. Body composition, including visceral adipose tissue (VAT), was measured from DXA scans with novel validated software. Insulin levels in plasma were measured, and resistance indexes (HOMA-IR) and insulin release indexes (HOMA-β) were calculated. Results. Median body weight increased from 74.5 to 76.9 kg (P<0.001) and total fat mass increased from 22.4 kg to 25.0 kg (P<0.001), whereas bone mass and visceral fat mass remained unchanged (P=0.638 and P=0.13, respectively). Glucose metabolism improved after one year despite an increase in body fat and BMI. At baseline, 13% of patients had PTDM, 19% had IGT and 68% had NGT. After one year, 9% of patients had PTDM, 11% had IGT and 80% had NGT. The insulin resistance index (HOMA-IR) and insulin release index (HOMA-β) remained unchanged. We addressed demographic and other baseline data potentially associated with normal glucose tolerance after one year in a multivariate model.
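The HOMA indices used above are simple functions of fasting glucose and insulin; before the model results that follow, a minimal sketch of the standard HOMA1 formulas (assuming glucose in mmol/L and insulin in µU/mL; the example values are arbitrary) is:

def homa_ir(fasting_glucose_mmol_l, fasting_insulin_uU_ml):
    # HOMA1-IR: insulin resistance index.
    return fasting_glucose_mmol_l * fasting_insulin_uU_ml / 22.5

def homa_beta(fasting_glucose_mmol_l, fasting_insulin_uU_ml):
    # HOMA1-%B: insulin release (beta-cell) index; defined for glucose > 3.5 mmol/L.
    return 20.0 * fasting_insulin_uU_ml / (fasting_glucose_mmol_l - 3.5)

print(homa_ir(5.2, 10.0))    # ~2.3
print(homa_beta(5.2, 10.0))  # ~118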
In this model, VAT percentage of fat <5% (HR=3.1, P=0.013), prednisolone dose <10 mg/day (HR=3.9, P=0.013), age in the lower quartile (HR=3.2, P=0.047), and HOMA-IR in the lower quartile (HR=3.0, P=0.04) were significant predictors of normal glucose tolerance after one year. Body weight, BMI, sex, tacrolimus concentration, HOMA-β index, living donor transplantation and preemptive transplantation were not significant predictors. Conclusions. The body composition of non-diabetic kidney transplant recipients changes during the first year, with weight gain due to increased fat mass without a change in lean body mass. However, visceral fat content is not increased and glucose metabolism is generally improved.

While obesity is considered a risk factor for worse survival after kidney transplant (KT), a single-center study reported that obesity has no direct effect on survival: instead, obesity leads to surgical site infection (SSI), and then SSI leads to worse survival. We examined the interplay of obesity, SSI and survival using a national registry. Methods: Using USRDS, we studied 92,977 Medicare primary KT recipients in 1999-2010. SSI was ascertained from Medicare inpatient claims during the 30 days post-KT using ICD-9 codes. We examined whether SSI fully mediates the association of obesity with patient/graft survival using these Cox models: a base model (adjusted for basic confounders), the base model stratified by occurrence of SSI, and the base model adjusted for occurrence of SSI. Results: In the base model, the overweight (25-30 kg/m²), obese (30-35 kg/m²), and morbidly obese (>35 kg/m²) groups were at respectively 6%, 18%, and 25% higher hazards of death-censored graft loss than the normal weight (18.5-25 kg/m²) group (Table). The 30-day incidence of SSI was 3.7% overall: 2.3% in the normal weight, 3.3% in the overweight, 5.1% in the obese, and 8.2% in the morbidly obese group. When stratified by occurrence of SSI, obesity was associated with a higher hazard of graft loss among SSI(-) recipients (Table), suggesting a direct effect of obesity on graft loss regardless of SSI. This association was not observed among SSI(+) recipients. When adjusted for SSI, the association of obesity with graft loss did not notably differ from the base model (Table). In contrast, the hazard of death was significantly lower in the overweight and obese groups than in the normal weight group, by 8% and 7%, respectively (Table). Among the SSI(-) recipients, the overweight, obese, and morbidly obese groups were at lower hazards of death, but not among the SSI(+) recipients (Table). When adjusted for SSI, the association of obesity with death did not change notably (Table). SSI was associated with a higher hazard of graft loss and death (aHR=1.49, 95% CI 1.38-1.61, for graft loss and aHR=1.47, 95% CI 1.37-1.57, for death). Conclusion: SSI is a risk factor for graft loss and death. Obesity is an independent risk factor for graft loss regardless of SSI.

Conclusions: Obesity may impact patient risk for development of CMV as well as risk for breakthrough CMV infection while on valganciclovir prophylaxis after kidney transplantation. These results, in the absence of a difference in the proportion of under-dosed patients, call into question whether current valganciclovir dosing using creatinine clearance based on ideal body weight is sufficient in a morbidly obese patient population, although further studies are needed to confirm an association.
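The dosing question raised in the conclusion above turns on how creatinine clearance is estimated. A minimal sketch of the Cockcroft-Gault estimate applied to Devine ideal body weight follows; these are the generic published formulas, not the study's dosing algorithm, and the example numbers are arbitrary.

def ideal_body_weight_kg(height_cm, male):
    # Devine formula: 50 kg (men) or 45.5 kg (women) + 2.3 kg per inch over 5 feet.
    inches_over_5ft = max(0.0, height_cm / 2.54 - 60.0)
    return (50.0 if male else 45.5) + 2.3 * inches_over_5ft

def cockcroft_gault_crcl(age_yr, weight_kg, serum_creatinine_mg_dl, male):
    # Creatinine clearance in mL/min; multiplied by 0.85 for women.
    crcl = (140 - age_yr) * weight_kg / (72 * serum_creatinine_mg_dl)
    return crcl if male else 0.85 * crcl

ibw = ideal_body_weight_kg(170, male=True)              # ~66 kg
print(cockcroft_gault_crcl(55, ibw, 1.4, male=True))    # ~56 mL/min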
It is assumed that kidney transplant (KT) recipients have more complications, longer length of stay (LOS) and higher hospital costs associated with general surgery procedures, but this has not been studied formally. This is the first study of these outcomes in inguinal hernia repairs for KT recipients using a nationwide data set. METHODS: The Nationwide Inpatient Sample was used to study 534 adult KT recipients and 340,000 non-transplant patients who underwent open inguinal hernia repair between 2000-2011, including patients with elective, urgent or emergent repair. Outcomes of surgery at either a transplant or non-transplant center were compared. Hospitals were categorized as a transplant center if at least one transplant was performed during the study period. ICD-9 codes were used to categorize complications. Multilevel negative binomial, linear mixed and logistic models were used to compare LOS, hospital cost, and complication rates, respectively. RESULTS: Of KT recipients, 72% had procedures performed at transplant centers, while non-transplant patients had 16% of procedures at transplant centers. Median hospital charges and LOS were similar for KT recipients and non-transplant patients ($6,198 vs. $5,741, p=0.27; 2 d vs. 2 d, p=0.64). KT recipients were not at an increased risk of complications (aOR 0.58, 95% CI: 0.29-1.16), an extended LOS (IRR 0.88, 95% CI: 0.73-1.05) or higher costs (ratio 1.01, 95% CI 0.89-1.16), regardless of center type. Early postoperative outcomes following inguinal hernia repair in KT recipients are comparable to those of non-transplant patients. Although the majority of KT recipients undergo hernia repair at transplant centers, center type has no association with complication rate, LOS, or cost. This suggests that KT recipients may safely and feasibly undergo inguinal hernia repair at non-transplant centers.

Background: KAS, implemented on 12/4/14, resulted in changes in recipient, donor/recipient and tx characteristics that have the potential to affect post-tx outcomes. Data and Methods: We compared 6-month patient and graft survival and treated rejection rates for deceased donor solitary kidney txs in 6/1/13-12/3/14 vs. 12/4/14-2/28/15. Unadjusted survival rates were computed using the Kaplan-Meier method. Rates were also adjusted for diagnosis (including retx and diabetes), ethnicity, age, CPRA, time on dialysis, KDPI, DCD, 0-ABDR and cold ischemia time. Adjustments were performed using Cox proportional hazards regression for survival and logistic regression for rejection rates. Results were based on OPTN data supplemented with CMS dialysis data (Fig 1). Differences remained non-significant after adjusting: p=0.59 for patient and p=0.53 for graft survival. There were no significant differences within each CPRA or dialysis group pre- vs. post-KAS. The unadjusted treated rejection rate was significantly higher post-KAS (6.4% vs. 8.5%, p<0.01), but the difference became non-significant after adjustment (p=0.11). This suggests that the post-KAS increase in rejection rates is at least partially due to a shift in recipient characteristics. Conclusions: Unadjusted 6-month patient and graft survival were statistically no different under KAS, and rejection rates increased, although sample sizes and statistical power are limited. Some of the post-KAS changes in recipient, donor/recipient and tx characteristics have the potential to affect both short- and long-term outcomes.
Additional follow-up will be performed to determine the clinical and statistical significance of changes in post-tx outcomes under KAS.

Controlled oxygenated rewarming (COR) of cold-stored livers by machine perfusion demonstrated convincing results in the first clinical pilot series (1). Here we report on the extension of this new end-ischemic reconditioning concept to steatotic donor livers. Methods: Between 03/2015 and 06/2015, eight distantly procured livers from DBD donors, which had been rejected by other transplant centers and had at least 30% steatosis, were subjected to the COR protocol before transplantation. Steatosis was assessed by histology. Immediately after arrival in our transplant center, the hepatic artery and portal vein were cannulated. Gentle rewarming of the graft was effectuated thereafter by pressure-controlled oxygenated machine perfusion (MP) via the portal vein and hepatic artery (Organ Assist, Groningen, NL) while gradually increasing perfusate temperature up to 20°C. Custodiol-N was used as the preservation solution. Real-time biochemical analysis of perfusate parameters was carried out at constant time intervals during the controlled rewarming. Results: Donor age was 60 (50-71) years, ischemic time was 7.6 (6.7-13.5) hours and the DRI was 2.1 (1.6-2.5). 2 out of 8 organs were not transplanted due to the macroscopic appearance and the overall risk profile. Recipients (3 M/3 F) had a median age of 58 (44-65) years and a labMELD of 12 (9-21). 2 out of 6 patients experienced early allograft dysfunction. The mean peak aspartate aminotransferase (AST) level within the first 7 days was 1812.8 (535-1866) U/l. The two discarded livers demonstrated a continuous release of aminotransferases during the machine perfusion period and the highest peak values in the perfusate (8542 U/l and 5186 U/l). A significant correlation (r²=0.8; p<0.05) was observed between AST release into the perfusate and the postoperative aminotransferase peak. After a minimum follow-up of 5 months after transplantation, all recipients are alive with excellent graft function. Conclusions: This first clinical application of COR by machine perfusion for steatotic donor livers demonstrated feasibility and safety. Correlation of early available perfusion characteristics seems to be a promising new tool to decrease discard rates of steatotic organs.

Hypothermic machine perfusion (HMP) of deceased donor kidneys is associated with better outcome compared to static cold storage (SCS). It is generally assumed that HMP will only improve results for kidneys with a substantial degree of ischemic injury. Many transplant clinicians believe that renal grafts with a short cold ischemic time (CIT) will benefit little from HMP. Also, it is often presumed that kidneys are safe during HMP and that the duration of cold ischemia is less relevant while renal grafts are on the machine. The aim of our study was to investigate whether HMP also results in a lower incidence of delayed graft function (DGF) compared to SCS for kidneys that are transplanted with a maximum of 10 hrs of cold ischemia, and to test whether CIT remains an independent risk factor for DGF when kidneys are machine perfused. We analysed data that had been prospectively collected in the Machine Preservation Trial. In this international RCT, HMP was compared to SCS of deceased donor kidneys. A total of 376 consecutive kidney donors were included; from each donor, one kidney was machine perfused and the other was preserved by SCS.
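The paired-donor design described above is typically analysed with multivariable models for DGF of the kind sketched below; the synthetic data frame and column names are purely illustrative and are not the trial dataset.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
kidneys = pd.DataFrame({
    "hmp": rng.integers(0, 2, n),          # 1 = machine perfused, 0 = static cold storage
    "cit_hours": rng.uniform(4, 24, n),    # cold ischemic time
    "dcd": rng.integers(0, 2, n),          # donation after circulatory death
})
logit_p = -2.5 + 0.07 * kidneys["cit_hours"] - 0.9 * kidneys["hmp"] + 0.6 * kidneys["dcd"]
kidneys["dgf"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

model = smf.logit("dgf ~ hmp + cit_hours + dcd", data=kidneys).fit(disp=False)
print(np.exp(model.params))   # odds ratios, e.g. the per-hour OR for cold ischemic time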
A post hoc multivariable data analysis was performed to investigate the effect of HMP versus SCS on renal grafts with a short (up to 10 hrs) CIT and to quantify the influence of CIT on the risk of DGF when kidneys are machine perfused. In this cohort, the mean CIT was 15 hrs 5 min (SD 4 hrs 58 min) and the DGF incidence was 27.9%. DGF incidence in the subgroup with up to 10 hrs CIT was 6.0% in the HMP arm and 28.1% in the SCS arm (univariable P=0.002; multivariable OR 0.02, P=0.007). Three-year graft survival in the <10 hrs CIT group did not differ significantly between study arms (88.0% for HMP and 81.2% for SCS, P=0.308). CIT remained an independent risk factor for DGF for all machine-perfused kidneys recovered from DBD donors (OR=1.07, P=0.008), DCD donors (OR=1.13, P=0.006) and ECD donors (OR=1.14, P=0.001). Conclusion: HMP results in lower rates of DGF than SCS in kidney transplantation. The current analysis shows that, contrary to popular belief, this is also true for renal grafts that are transplanted after a short CIT. In addition, our data suggest that CIT remains a relevant and independent risk factor for DGF in HMP-preserved kidneys.

Considerable effort has gone into identifying risk factors for delayed graft function (DGF) in order to reduce its incidence. Recently, the secretory leukocyte peptidase inhibitor (SLPI) was proposed as a biomarker candidate for acute kidney injury (AKI), since high levels were found in the plasma and urine of patients. The aim of the present work was to determine whether the level of SLPI in the cold preservation solution could be a useful marker to predict early and late kidney function after transplantation. Twenty-three patients were enrolled. Kidneys were perfused with 1 liter of University of Wisconsin solution before transplantation. The SLPI concentration was measured in two fractions of the perfused solution, i.e. the first 50 ml (SLPI-A) and the last 50 ml (SLPI-B). Direct correlations were found between SLPI-B and DGF risk factors (donor age, p=0.013; pre-ablation plasma creatinine values, p=0.047; cold ischemia time, p=0.01). Furthermore, patients with DGF had significantly higher SLPI-B values compared with patients without DGF (Figure 1). Also, we found an inverse correlation between SLPI-B and MDRD eGFR measured at patient discharge. Interestingly, the 6 patients who suffered rejection during the first 6 months after transplantation had higher values of SLPI-A and SLPI-B than patients without rejection. Remarkably, we found a direct association between KDPI (a score that evaluates the quality of deceased-donor kidneys) and SLPI-B (r=0.55; p<0.005) (Figure 2B).

In experimental liver transplantation (LTx), glycine, a non-essential amino acid, has been shown to prevent the activation of Kupffer cells and to reduce ischemia/reperfusion injury (IRI) in the liver. Material and Methods: A randomized, controlled, double-blinded clinical trial with two parallel study arms was performed. A total of 130 patients undergoing primary whole-liver transplantation were randomized and received 250 ml of either 4.4% glycine solution (n=66) or injectable water (n=64) intravenously (i.v.) during the anhepatic phase and once a day during the first 7 consecutive postoperative days. Primary endpoints were peak levels of aspartate aminotransferase (AST)/alanine aminotransferase (ALT) as surrogates for liver-related IRI, as well as graft and patient survival.
Furthermore, the effect of glycine on cyclosporine A-induced nephrotoxicity was evaluated. Results: Neither the intention-to-treat analysis nor the per-protocol analysis showed a difference in primary or secondary endpoints between the two study arms. A post-hoc subgroup analysis compared patients with very high plasma glycine concentrations during the anhepatic phase of LTx (≥7000 ng/ml, n=29) and those with lower concentrations (<7000 ng/ml; n=68). A relative but not statistically significant reduction of ALT levels during the first 24 hrs and on the first day after LTx, as well as an improvement in overall patient survival, was associated with higher plasma glycine levels. Comparison of the post-reperfusion biopsy results showed a significant reduction in both mild and moderate IRI in patients with very high plasma glycine concentrations. Glycine improved eGFR not only in patients within the target cyclosporine trough levels, but also in patients with trough levels much higher than target. Conclusion: Although the per-protocol analysis could not verify the hypothesized effects of glycine, very high plasma concentrations of glycine achieved during the anhepatic phase proved to be safe, hepatoprotective and nephroprotective. Liver ischemia-reperfusion injury (IRI) is an inflammatory event that contributes to graft dysfunction in orthotopic liver transplantation (OLT). Matrix metalloproteinases (MMPs) are responsible for extracellular matrix (ECM) turnover in physiological and inflammatory conditions. We established that matrix metalloproteinase-9 (MMP-9) proteolytic activity facilitates inflammatory leukocyte invasion in murine models of warm and cold liver IRI. Here we characterize MMP-9 expression in biopsies of human OLT recipients. Methods: Liver biopsies were collected from 26 OLT recipients after reperfusion and prior to abdominal closure. In 5 cases, ischemic donor livers were also biopsied prior to transplant. Controls were biopsies of non-lesion liver from patients undergoing liver resection. Biopsies were fixed in formalin prior to MMP-9 detection. Results: The 15 male and 11 female recipients had a median age of 63 and a mean MELD score of 28, with mean CIT and WIT of ~7 h and ~1 h, respectively. Triple immunofluorescence confirmed CD45+ infiltrating inflammatory leukocytes as the major source of MMP-9 in human OLTs. MMP-9+ leukocytes were detected at low levels in biopsies of both non-lesion controls and cold ischemic livers, and were significantly increased (~2-fold) in liver biopsies after OLT (MMP-9; OLT vs. CI vs. non-lesion: 32±5 vs. 17±3 vs. 11±4; p<0.002). Leukocyte MMP-9 expression correlated positively with liver injury post-OLT; MMP-9+ leukocyte numbers correlated with serum AST and ALT levels at post-operative day 5 (POD5) (AST: R=0.457, p<0.04; ALT: R=0.524, p<0.02) and day 7 (POD7) (AST: R=0.407, ns; ALT: R=0.570, p<0.01). Moreover, recipients with serum transaminases elevated above the upper limit of normal (AST>48 and ALT>55 IU/L) had increased MMP-9+ leukocytes compared to recipients within the normal AST and ALT range at POD5 (MMP-9: 37±16 vs. 8±6; p<0.02) and POD7 (MMP-9: 31±9 vs. 15±13; p<0.05). MMP-9+ leukocytes were also increased in reperfusion biopsies from steatotic donor grafts (~30% mean macrosteatosis) compared to reperfusion biopsies from non-steatotic donor grafts (MMP-9: 44±26 vs. 19±12; p<0.03).
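Correlations such as the MMP-9+ leukocyte vs. serum transaminase associations reported above can be computed directly from paired measurements. A minimal sketch (the arrays are illustrative placeholders, not the study data):

```python
# Correlation of per-biopsy MMP-9+ leukocyte counts with day-7 serum ALT (IU/L).
# Values are illustrative placeholders, not the study measurements.
import numpy as np
from scipy.stats import pearsonr, spearmanr

mmp9_counts = np.array([12, 25, 31, 44, 18, 52, 9, 37])
alt_pod7 = np.array([40, 95, 120, 180, 60, 210, 35, 150])

r, p = pearsonr(mmp9_counts, alt_pod7)
rho, p_s = spearmanr(mmp9_counts, alt_pod7)
print(f"Pearson r = {r:.2f} (p = {p:.3f}); Spearman rho = {rho:.2f} (p = {p_s:.3f})")
```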
Conclusion: Our results show that MMP-9 is produced by infiltrating inflammatory leukocytes in human OLT biopsies and that its expression is associated with liver injury and graft steatosis. Overall, this study is consistent with our previous findings in murine models of liver IRI, and together they support an important role for MMP-9 expression in liver transplantation. Background: Epigenetic modifications in the graft may influence injury severity and function post liver transplantation (LT). This study aimed to interrogate the effect of different graft DNA methylation patterns on gene expression profiles that associate with graft injury after LT. Methods: The study included 22 deceased donor LT patients with severe (SI, n=11; AST>500 IU/L) and mild (MI, n=11; AST<500 IU/L) early graft injury at 1 day post-LT. Tissue biopsies were collected at pre-implantation (L1) and at post-reperfusion (L2). Genomic DNA was extracted from L1 biopsies, bisulfite-converted, and used in Infinium 450K methylation arrays. Raw data were normalized with the SWAN method and analyzed with R/Bioconductor. Beta scores were converted to M-values. An F-test was used to identify significantly differentially methylated CpG sites (q<0.05). RNA was isolated from all biopsies, labeled, and used in gene expression microarrays (L1 and L2). Probeset summaries were obtained using the RMA algorithm. An unpaired ANOVA was used to identify deregulated probesets (p<0.001, FDR <5%). Molecular pathways were evaluated with the IPA tool. Selected CpGs (MethyLight) and genes (RT-PCR) were validated. The long-term implications of delayed graft function (DGF) remain controversial. Moreover, non-proportional hazards of DGF have been reported in previous studies, which suggested that its impact might not be constant over time since the initial transplantation. We studied DGF (defined as hemodialysis in the first week posttransplant) in 127,251 adult deceased-donor kidney-only transplantation (DDKT) recipients from January 2000 to June 2014, using SRTR data and excluding recipients with primary nonfunction (graft failure within the first week). We built Cox models with time-varying coefficients for DGF to test the associations between DGF and death-censored graft failure (DCGF) and post-transplant mortality. RESULTS: DGF occurred in 24.1% of the patients. DGF was associated with a 3.8-fold higher risk of DCGF at the end of the first month post-KT (p<0.001), which attenuated to a 2.3-fold increase by the end of the sixth month (p<0.001) and became nonsignificant by the tenth year post-KT (aHR 0.96, p=0.19) (Figure A). Likewise, DGF was associated with a 2.1-fold higher risk of mortality at the end of the first month post-KT (p<0.001), which attenuated to a 1.7-fold increase by the end of the sixth month (p<0.001) and remained significant at the end of the tenth year post-KT (aHR 1.17, p<0.001) (Figure B). The attenuation of DGF-associated risk over time was statistically significant (p<0.001 for both DCGF and mortality). CONCLUSIONS: DGF was associated with increased risk of DCGF and mortality immediately after DDKT, which attenuated over time. The dynamic change in the magnitude of DGF-associated risk warrants special attention to early posttransplant management in patients with DGF, and motivates the selection of patients who might better tolerate the physiologic insult of DGF. Background: Late acute rejection (AR) is associated with inferior outcomes relative to early AR after kidney transplantation. We sought to examine the differences in risk factors for early AR (within 90 days of transplant) vs.
late AR (occurring after one year post-transplantation). Methods: We analyzed 500 consecutive solitary kidney recipients transplanted between 1/06 and 12/12. The cohort was 52% Caucasian and 43% African American (AA). All patients received tacrolimus/mycophenolate mofetil combination therapy, and corticosteroid withdrawal (CSW) was undertaken at day 5 in 320/491 (65%) of patients. Patients considered higher risk or with delayed graft function (DGF) were maintained on steroids (CSM). Death-censored allograft survival rates were examined, and risk factors for early vs. late AR were identified by multivariable analysis. Results: Early and late AR occurred in 49/492 (10%) and 32/490 (7%) patients, respectively. There was no correlation between early and late AR (r=-0.3). Survival curves for early and late AR are shown below. Early AR did not affect immediate graft survival but ultimately contributed to graft loss beyond 4 years post-transplantation. Late AR had a greater negative impact on survival, with 80% graft loss by 7 years. Risk factors associated with early and late AR by multivariable analyses are shown in the Tables. Donation after cardiac death (DCD) kidneys, DGF, HLA mismatching, and CSW were all risk factors for early AR. Older age and use of induction therapy with anti-thymocyte globulin were protective. In contrast, AA ethnicity was a risk factor for late AR, while older age and diabetic status at transplantation appeared to be protective. Conclusion: Both early and late AR negatively impact allograft survival, with a greater negative impact associated with late AR. Factors that predict early AR appear to have less bearing on late AR. In contrast, AA ethnicity emerged as a risk factor for late AR, and may help to explain the inferior outcomes identified in AA recipients. Given the high risk of allograft loss, it is critical to further assess the risk factors and causes of late AR after kidney transplantation. patient interest in donor programs (living donors, high KDPI donors, etc.). The automated web-based software employs proprietary algorithms to generate an individualized dynamic patient questionnaire. The software interacts via email. Results: 1,074 patients completed the automated data collection, representing 43.8% of patients the system has attempted to interact with thus far. Patients were not formally instructed about the new system prior to receiving the email. Data are automatically transmitted to the transplant center in real time. 7.0% of patients reported a new potential living donor, 48.5% indicated a secondary insurance provider, and 12.8% updated contact information. 45% of patients reported a recent visit to the ER, and 38% had recently been admitted to the hospital. 13% had received a blood transfusion over the last 12 months. Free-text entry also identified many other sensitizing events and changes in condition that should initiate re-evaluation of candidacy. 14% expressed a new interest in receiving a kidney from a high KDPI donor. Conclusion: A fully automated interactive web-based portal can successfully interact with patients on the waiting list. This approach improves communication, waitlist management, and acceptance of different donor programs, and could be used to initiate testing for HLA antibodies after sensitizing events and re-evaluation of transplant candidacy after changes in health. Implementation of an automated phone call alerting the patient to the upcoming email, and formal education about the automated system, are likely to increase completion of questionnaires.
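The delayed graft function analysis described earlier fits Cox models with time-varying coefficients; one common way to approximate this is to split each recipient's follow-up into periods and interact DGF with period indicators. A minimal sketch using lifelines on synthetic data (column names, cut points, and values are hypothetical, not the SRTR analysis):

```python
# Approximate a time-varying coefficient for DGF by splitting follow-up into periods
# and interacting DGF with period indicators. Columns, cut points and data are hypothetical.
import numpy as np
import pandas as pd
from lifelines import CoxTimeVaryingFitter

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "id": np.arange(n),
    "dgf": rng.integers(0, 2, n),                    # 1 = delayed graft function
    "time": rng.exponential(6.0, n).round(2) + 0.1,  # years to failure or censoring
    "event": rng.integers(0, 2, n),                  # 1 = death-censored graft failure
})

cuts = [0.0, 0.5, 5.0, 15.0]   # periods: 0-6 months, 6 months-5 years, 5+ years
rows = []
for _, r in df.iterrows():
    for start, stop in zip(cuts[:-1], cuts[1:]):
        if r["time"] <= start:
            break
        seg_stop = min(r["time"], stop)
        rows.append({
            "id": r["id"], "start": start, "stop": seg_stop,
            "event": int(bool(r["event"]) and seg_stop == r["time"]),
            "dgf_p1": r["dgf"] * (start == cuts[0]),   # DGF effect in period 1
            "dgf_p2": r["dgf"] * (start == cuts[1]),   # DGF effect in period 2
            "dgf_p3": r["dgf"] * (start == cuts[2]),   # DGF effect in period 3
        })
long_df = pd.DataFrame(rows)

ctv = CoxTimeVaryingFitter()
ctv.fit(long_df, id_col="id", event_col="event", start_col="start", stop_col="stop")
ctv.print_summary()   # exp(coef) of dgf_p1..dgf_p3 = period-specific hazard ratios for DGF
```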
Background: Prior to the new kidney allocation system (KAS) implemented 12/4/2014, significant African American (AA) vs. white racial disparities existed in access to kidney transplantation (KTx) among waitlisted patients. While preliminary results show that the proportion of transplanted patients who are AA has increased, it is unknown whether this increase has eliminated racial disparities. In addition, it is unknown whether this disparity reduction is consistent across geographic regions. Methods: We examined data on 173,639 patients waitlisted for KTx from the United Network for Organ Sharing (UNOS) standard analytic file from June 2013-June 2015, and divided the cohort into pre- and post-KAS eras. We calculated the proportion of waitlisted patients who received a deceased donor KTx by race as the number of transplants per 100 waitlisted patients; the difference in the proportion of transplants by race (AA vs. white) was mapped by UNOS region using ArcGIS. Results: Prior to Dec. 4, 2014, a smaller proportion of KTx patients were AA vs. white (31.5% vs. 42.2%) and all 11 UNOS regions had a racial disparity in KTx; following KAS, the proportion of transplanted patients who were AA increased to 37.7%. All UNOS regions had a reduction in the racial disparity in transplant rate from pre- to post-KAS, but the disparity reduction was not consistent across UNOS regions (Figure). Following KAS, regions 5 and 9 still had a racial disparity, with white patients transplanted at a higher rate than AAs. Conclusions: Following implementation of KAS, racial disparities between AA and Caucasian patients were significantly reduced, although disparity reduction varied by geographic region. Longer-term follow-up is needed to determine whether greater equality in KTx access is sustained. Candidate mortality following waitlist removal was lower at LP centers (AHR=0.90, 95% CI 0.87-0.94). Analyses limited to LP centers indicated a significant increase in WLR (+28.6 removals/1000 follow-up years, p<0.001), a decrease in transplant rates (-11.9/1000 follow-up years, p<0.001) and a decrease in mortality after removal (-67.5 deaths/1000 follow-up years, p<0.001) following LP evaluation. There is a significant association between LP evaluations and transplant center processes of care for waitlisted candidates. Lower mortality of candidates following waitlist removal at LP centers suggests relatively healthier patients are removed at these centers. Further understanding is needed to determine the impact of performance oversight on transplant center quality of care, access to care and patient outcomes. The new kidney allocation scheme recommends simultaneous local and regional offers for kidneys with KDPI >85% in order to minimize discard rates. It is unclear whether the increase in cold ischemia time (CIT) that can come with regional sharing would adversely impact graft outcomes in these marginal kidneys. Using the OPTN/UNOS database, we identified recipients of deceased donor kidneys with KDPI >85% (calculated retrospectively) from 2000 to 2013 who were discharged on calcineurin inhibitor/mycophenolic acid-based maintenance immunosuppression. From these patients, we selected 2 groups, each including pairs of patients who received mate kidneys from the same donor but with different CIT, as follows: Group 1 = CIT ≥12 but <24 hours (n=747) vs. CIT ≥24 hours (n=747); Group 2 = CIT <12 hours (n=127) vs. CIT ≥24 hours (n=127).
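Mate-kidney pairs of the kind used above can be assembled by grouping transplants on donor ID and keeping donors whose two kidneys fall into the two CIT strata of interest. A minimal pandas sketch with hypothetical column names:

```python
# Assemble mate-kidney pairs: same donor, one kidney in each CIT stratum of interest.
# Column names and values are hypothetical.
import pandas as pd

tx = pd.DataFrame({
    "donor_id": [1, 1, 2, 2, 3, 3],
    "recipient_id": [10, 11, 20, 21, 30, 31],
    "cit_hours": [10.0, 26.0, 30.0, 14.0, 8.0, 9.0],
})

def cit_stratum(hours):
    if hours < 12:
        return "short"
    if hours >= 24:
        return "long"
    return "intermediate"

tx["stratum"] = tx["cit_hours"].map(cit_stratum)

# Group 2 analogue: donors contributing one <12 h and one >=24 h kidney
pairs = tx.groupby("donor_id").filter(lambda g: set(g["stratum"]) == {"short", "long"})
print(pairs.sort_values("donor_id"))
```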
Cox regression analysis adjusting for recipient- and transplant-related factors was used to compare delayed graft function (DGF), overall graft failure, and patient death risks between mate kidneys with different CIT within each group. Recipient and transplant characteristics between mate kidney recipients in each group were similar except for the following: machine perfusion was more frequent for CIT ≥24 h kidneys than their mate kidneys in group 1 (65 vs. 55%, p<0.001) and group 2 (58 vs. 32%, p<0.01), while PRA >30% was less frequent in recipients of CIT <12 h kidneys in group 2 (19 vs. 10%, p=0.03). There were no significant differences in length of stay or acute rejection at discharge between mate kidney groups. The incidence of DGF was significantly lower, with a trend towards lower graft failure risk, for patients with CIT <12 h in group 2. Our findings strongly favor keeping CIT <12 hours in deceased donor kidneys with KDPI >85% in order to optimize outcomes of regional sharing of these organs. One-year rejection rates in HIV-infected kidney transplant recipients range from 15-40%, compared to overall rejection rates of 10% in HIV-negative patients. Protocols for immunosuppression and highly active antiretroviral therapy (HAART) regimens in this population vary substantially among transplant programs. The potential for significant drug-drug interactions, specifically between ritonavir-boosted protease inhibitors (rtv+ PI) and calcineurin inhibitors, and the choice of induction therapy may influence outcomes. Methods: This is an IRB-approved, single-center, retrospective study of adult HIV-infected patients with a kidney transplant performed between 5/2009 and 8/2014, with one year of follow-up for each patient. Results: 36 patients were identified with a median age of 52 (interquartile range [IQR] 46, 57) years. 78% were male, 53% were African American, 19% were Caucasian, and 17% were Hispanic. The most common cause of renal failure was hypertensive nephrosclerosis (51%) followed by HIV-associated nephropathy (17%), and the median duration of pre-transplant dialysis was 6.2 (3.2, 8.9) years. All patients received an IL-2 receptor antagonist (IL-2 RA): 81% basiliximab and 19% daclizumab induction. One patient also received thymoglobulin. Calcineurin inhibitor therapy included tacrolimus (75%), cyclosporine (19%), or transitions between these two (6%). 44% of patients received a rtv+ PI-based HAART regimen. Overall one-year patient and graft survival was 94% and 92%, respectively, and the mean serum creatinine was 1.35 (1.21, 1.80) mg/dL. Treated biopsy-proven rejection within one year was 33% for the overall cohort: 44% for patients on rtv+ PI and 25% for patients on other HAART regimens (p=0.29). Conclusion: Despite high rates of acute rejection, HIV+ kidney transplant recipients have excellent outcomes. Though not statistically significant (likely related to the small number of patients), higher rejection rates were observed in the rtv+ PI group. Future studies should evaluate whether thymoglobulin induction is associated with lower rejection rates compared to IL-2 RA induction. Induction therapy can improve graft survival following kidney transplantation. Organ quality, a major determinant of graft outcomes, may act as a confounder in comparative studies of transplant outcomes. In order to eliminate donor-related confounding, we compared outcomes of depleting antibody induction with Thymoglobulin vs. alemtuzumab in deceased donor kidney transplant recipients (DDKTRs) using paired kidney analysis.
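The adjusted risks reported in these paired analyses come from multivariable Cox models. A minimal sketch with lifelines on synthetic data (covariate names are hypothetical, not the registry variables):

```python
# Multivariable Cox proportional hazards model for overall graft failure.
# Covariates and data are synthetic stand-ins for the registry variables.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 400
df = pd.DataFrame({
    "years_followup": rng.exponential(5.0, n),
    "graft_failure": rng.integers(0, 2, n),     # 1 = overall graft failure
    "thymoglobulin": rng.integers(0, 2, n),     # 1 = Thymoglobulin, 0 = alemtuzumab
    "recipient_age": rng.normal(50, 13, n),
    "pra_over_30": rng.integers(0, 2, n),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="years_followup", event_col="graft_failure")
cph.print_summary()   # the exp(coef) column gives adjusted hazard ratios
```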
OPTN/UNOS data, as of June 2015, were used to identify first-time adult DDKTRs in the United States from 2003 to 2013 who were discharged on tacrolimus/mycophenolic acid maintenance. The cohort was then limited to recipients of kidneys transplanted from the same donor, one induced with Thymoglobulin and the other with alemtuzumab. A Cox regression analysis adjusting for multiple recipient- and transplant-related confounders was used to estimate rejection-free graft, overall graft, and patient survival. The cohort comprised 2,298 patients. When compared to the alemtuzumab group, the Thymoglobulin group had more African Americans, a higher prevalence of peripheral vascular disease, hepatitis B seropositivity, and panel reactive antibody titers >30%, and more steroid maintenance at transplant discharge (98 vs. 89%, p<0.001). Perfusion pump use, cold ischemia time, and delayed graft function were similar between the groups. Transplant length of stay was longer in the Thymoglobulin group (6 vs. 5 days, p<0.001). Adjusted hazard ratios with 95% confidence intervals for rejection-free graft, overall graft (fig 1), and patient survival were 0.97 (0.87-1.07), 0.97 (0.82-1.48), and 0.86 (0.69-1.05), respectively. Subgroup analysis by steroid use was limited by unequal distribution between groups, with only 20 recipients in the Thymoglobulin/no steroid group. No overall graft survival differences were observed between the alemtuzumab/steroid, alemtuzumab/no steroid, and Thymoglobulin/steroid groups. Predonation narcotic use is independently associated with increased readmission risk after living donor nephrectomy. Future work should identify underlying mechanisms and approaches to improving postdonation outcomes. Alexander, 2 T. Vrtiska, 3 H. Chakkera, 4 S. Taler, 1 A. Rule. 1 1 Nephrology, Mayo Clinic, Rochester, MN; 2 Pathology, Mayo Clinic, Rochester, MN; 3 Radiology, Mayo Clinic, Rochester, MN; 4 Nephrology, Mayo Clinic, Scottsdale, AZ. Kidney structural characteristics may be predictive of long-term living donor outcomes such as a lower estimated glomerular filtration rate (eGFR). As part of the Renal and Lung Living Donors Evaluation (RELIVE) Study, kidney donors with implantation biopsies and pre-donation CT scans were studied for long-term outcomes at least 5 years after donation using a serum creatinine-based eGFR with the CKD-EPI equation. There were 868 living donors during the period 2000-2005, of whom 391 (45%) participated with a follow-up serum creatinine. Of these 391 donors, 334 had macro-structural measures of the retained kidney at baseline by pre-donation CT scan (cortical volume and surface roughness) and 268 had micro-structural measures of the implantation biopsy of the donated kidney by morphometry, stereology, and pathologist review. Nephron number was estimated from cortical volume × glomerular density. On CT, the mean cortical volume was 105 cc and kidney surface roughness was 0.9 (range 0 to 3). On biopsy, the mean glomerular density was 14 per mm³, glomerular volume was 0.0028 mm³, mean intimal occlusion was 32%, and nephron number was 813,382; any interstitial fibrosis and tubular atrophy occurred in 24% and any glomerulosclerosis in 35%. At follow-up, a mean of 7.9 years after donation (mean age 54 years), the mean eGFR was 67 ml/min/1.73 m² (31% were 45-59 and 3.5% were <45 ml/min/1.73 m²). The association of kidney structural findings at donation with follow-up eGFR is shown in the Table. Conclusion: Cortical volume of the retained kidney may have utility for predicting long-term kidney function in donors.
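Two derived quantities in the RELIVE analysis above are straightforward to compute: a creatinine-based eGFR from the 2009 CKD-EPI equation and a nephron-number estimate from cortical volume × glomerular density. A minimal sketch (the coefficients follow the published 2009 CKD-EPI equation; the example inputs are illustrative, and any per-kidney or sclerosis corrections used in the study are not reproduced):

```python
# CKD-EPI 2009 creatinine-based eGFR (ml/min/1.73 m^2) and a nephron-number estimate.
# Example inputs are illustrative; study-specific corrections are not reproduced.
def ckd_epi_2009(scr_mg_dl, age_years, female, black=False):
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = (141
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.209
            * 0.993 ** age_years)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr

def nephron_number(cortical_volume_cc, glomeruli_per_mm3):
    # 1 cc = 1000 mm^3; nephron number ~ cortical volume x glomerular density
    return cortical_volume_cc * 1000 * glomeruli_per_mm3

print(round(ckd_epi_2009(1.0, 54, female=True)))   # eGFR for a 54-year-old woman, Scr 1.0 mg/dL
print(f"{nephron_number(60, 14):,.0f} glomeruli")  # illustrative single-kidney cortical volume
```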
In patients, allograft survival requires a cocktail of immunosuppressive drugs, while experimentally antibodies targeting the innate immune system have been shown to induce long-term tolerance, with severe side effects. We here induced long-term tolerance by nanomedicine delivery of rapamycin to myeloid cells. We developed a hybrid nanoparticle encapsulating rapamycin (R-HDL), which has inherent affinity for innate immune cells. In a mouse allogeneic heart transplantation model, we applied this nanotherapy using a regimen involving 3 intravenous tail vein injections of 5 mg/Kg rapamycin during the first week. Using a combination of radiolabeling, in vivo PET-CT imaging and flow cytometry, we evaluated allograft and cellular specificity. Subsequently, the innate immune system's response and allograft survival were extensively monitored. R-HDL nanoparticles, ~30 nm in diameter, had a high rapamycin encapsulation efficiency of ~65%. Radiolabeled R-HDL was observed to specifically accumulate in the transplanted heart and to be mainly associated with myeloid cells. We showed in the transplanted heart, a significant reduction of Ly-6C hi / Ly-6C low as well as CD25 -/ CD25 + cells. Most excitingly, our nanomedicine treatment resulted in a dramatic enhancement of allograft survival. We here presented a novel nanomedicine treatment paradigm to redirect rapamycin to innate immune cells and induce long-term allograft survival. Foxp3+ T-regulatory (Treg) cells are key regulators of immune responses that can result in unwarranted inflammation and promote allograft rejection. From a theoretical perspective, targeting histone/protein deacetylases (HDACs), of which there are eleven, to modulate the function of Tregs has considerable practical advantages in allograft recipients compared to injecting at some point post-transplant a bolus of variably pure Treg cells that may or may not survive very long after injection. However, consistent with a one size does not fit all concept, certain HDACs have proven absolutely required for optimal Treg function whereas others appear to be dispensable. Our task has been to determine which is which. The current studies assessed the role of a hitherto enigmatic HDAC, HDAC10, in murine Treg cells. We demonstrated that HDAC10 bound to Foxp3 in co-immunoprecipitation assays, and that germ-line knockout of this hitherto enigmatic HDAC had no effect on the overall health of targeted mice. Importantly, we found that HDAC10 deletion: (i) boosted Treg suppressive function in vitro compared to wild-type (WT) Tregs, (ii) increased Treg expression of Foxp3, Ctla4, Granzyme-b, Il-10 and other genes (microarray, qPCR), (iii) increased acetylation of NFkB/p65 at lysine 310 and thereby prolonged the nuclear retention of p65 and increased transcription from p65-dependent promoters; and (iv) increased oxidative phosphorylation in Tregs compared to WT Tregs (OCR and ECAR assays of mitochondrial function on the Seahorse platform). Lastly, heterotopic cardiac allograft studies (BALB/c-> C57BL/6) showed that compared to WT recipients, HDAC10-/-allograft recipients markedly prolonged allograft survival (p<0.01). Studies to further dissect the in vivo impact of HDAC10 deletion in additional experimental transplant and autoimmune models are underway, as are efforts directed at developing selective targeting of HDAC10 using pharmacologic inhibitors. 
Overall, these data show the promise for therapeutic targeting of HDAC10 as a novel approach for transplant rejection and potentially other immunologically mediated disorders. Aquaporin 4 (AQP4) belongs to the family of integral transmembrane proteins highly permeable to water. We have previously reported that blocking AQP4 with small molecule inhibitor AER270/271 during cold ischemia storage (CIS) and for 5 days posttransplant significantly prolonged survival of BALB/c heart allografts subjected to 8 h of CIS and transplanted into B6 recipients (MST of 25 d, n=24 vs. 6 d, n=5). At the time of rejection, the frequencies of donor-reactive T cells were significantly lower in AQP4 inhibitor treated than in saline treated recipients (500 vs. 2500 IFNγ spots/1x10 6 splenocytes, P<0.001).The goal of the current study was to test whether the activation and functions of alloreactive T cells may be directly affected by AER270. Real-time RT-PCR analysis showed AQP4 expression in naïve and effector/memory CD8, but not CD4 T cells from naïve B6 mice. CFSE-labeled naïve T lymphocytes were stimulated in vitro for 72 h with anti-CD3/anti-CD28 mAb. AER270 inhibited CFSE dilution in a dose dependent manner with >75% inhibition achieved at 0.25 µM. To assess how AQP4 blockade affects effector functions of previously primed T cells, spleen cells from B6 recipients of BALB/c skin allografts were tested in a 24 h recall IFNγ ELISPOT assay in response to donor BALB/c alloantigens. AER270 blocked IFNγ production by effector/memory alloreactive T cells (>80% inhibition at 0.25 µM). The decreased IFNɣ secretion was not due to direct reagent cytotoxicity, as the numbers of live cells were not reduced by AER270. To test whether the effects of AQP4 inhibition in vivo are limited to alloreactive T cell responses, we immunized B6 mice with ovalbumin (OVA) in CFA s.c. and treated with AER271, a prodrug converted into AER270 by liver phosphatases, on d 0-5 after immunization. AER271 treatment significantly reduced the activation of OVA-specific T cells in the draining lymph nodes at d. 10 post immunization compared to control group. This is the first evidence that AQP4 water channel is expressed in T cells and that blocking AQP4 inhibits activation and functions of both naïve and effector/memory T cells. Our results suggest that AQP4 can be a promising therapeutic target to diminish the detrimental effects of alloreactive T cells in transplantation. Purpose: Our goal was to determine the efficacy and immune mechanisms of costimulation blockade by CTLA-4Ig when treatment begins after an immune response is already established. Methods: C57BL/6 mice were given heart transplants from BALB/c donors, and treated with CTLA-4Ig starting 6d post-transplant. Mice were monitored for graft survival and grafts were examined histologically. Additionally, splenocytes were examined for endogenous donor MHC Class I (H-2K d )-reactive germinal center (GC) B cells, and CD4 and CD8 + T cells producing IFNγ in response to stimulus by BALB/c antigen presenting cells. Results: In C57BL/6 recipients of BALB/c hearts, a delayed start in CTLA-4Ig treatment 6d post-transplant (just 2 days before complete cessation of heart beat) inhibited alloantibody production and prevented acute rejection. 10 of 15 grafts survived to ≥ 30d post-transplant, despite significant T cell infiltration and C4d deposition in the graft at d6. 
To understand the mechanism by which CTLA-4Ig rescued heart allografts from acute rejection, we examined endogenous H-2K dreactive B cells and observed that CTLA-4Ig starting 6d post-transplant collapsed the germinal center response and reduced the number of H-2K d -reactive GC B cells to levels comparable to naïve mice. Furthermore, surviving grafts examined on d30 post-transplant had reduced C4d deposition compared to d6 allografts, when CTLA-4Ig treatment was initiated. Delayed CTLA-4Ig treatment had no significant effect on the alloreactive CD4 + IFNγ + response compared to untreated controls, but prevented the increase in the CD8 + IFNγ + response that occurred in untreated recipients between d6 and d14 post-transplant. Conclusions: Once rejection is initiated, halting the immune response becomes more difficult and effective therapies are necessary. While there is extensive evidence that CTLA-4Ig is ineffective at inducing tolerance in recipients with high frequencies of donor-specific memory T cells, we here report that CTLA-4Ig is unexpectedly effective at controlling established B cell responses and treating already established acute rejection. Taken together, the reversal of graft-specific B cell responses and the blunting of CD8 + IFNγ + T cell responses may independently contribute to the efficacy of delayed CTLA-4Ig at treating acute rejection. T cell function is tightly regulated by a fine balance of coinhibitory signals, including those transduced by 2B4 (SLAMf4, CD244), an immunoglobulinsuperfamily member constitutively expressed on NK cells and induced on some T cell subsets. Recent studies from our group suggest that 2B4 plays a functional role in the inhibition of donor-reactive CD8 + T cell responses in vivo in the setting of selective CD28 blockade. These findings raise the possibility that 2B4 itself may be a therapeutic target to attenuate allograft rejection. Thus, we hypothesized that augmenting 2B4 signaling would attenuate graft-specific CD8+ T cell responses following transplantation. To test this, we created 2B4 retrogenic (2B4rg) donorreactive CD8+ OT-I T cells which ectopically express 2B4. 2B4 retrogenic Thy1.1+ CD8+ T cells (or empty vector-transduced controls) were adoptively transferred into naïve animals. Mice then received an OVA-expressing skin graft and were sacrificed ten days later. Data show that constitutive 2B4 expression results in significantly reduced accumulation of antigen-specific CD8+ T cells in the spleen 10 days posttransplantation as compared to wild-type pMY controls (1.23x10 6 +/-4.27x10 5 vs. 6.51x10 6 +/-2.81x10 6 , respectively, p=0.0267). This was not due to differences in expression of the 2B4 ligand CD48 or the T cell activation or exhaustion markers CD44, KLRG-1, CD127, TIM-1, PD-1, and LAG-3. Further, the differences in donorreactive CD8+ T cell accumulation were not explained by increased cell death, as we observed no difference in frequencies of Annexin V+ 7-AAD+ apoptotic cells between 2B4rg donor-reactive CD8+ T cells and empty vector controls. Instead, we observed a marked reduction in the proliferation of the CD8+ Thy1.1+ 2B4rg cells when compared to controls as measured by CellTrace Violet (CTV) analysis. Specifically, the 2B4rg CD8+ Thy1.1+ population contained a higher frequency of CTV hi undivided cells as compared to non-retrogenic controls (13.62% +/-0.5476 vs. 
2.642% +/-1.167, respectively, p=0.0022), suggesting that overexpression of 2B4 on donor-reactive CD8+ T cells results in reduced recruitment into the response. Interestingly, 2B4rg cells still differentiated into cytokine-producing effectors despite their inability to divide normally. These findings suggest that engaging the 2B4 coinhibitory pathway could represent a novel therapeutic strategy to prevent the expansion of alloreactive CD8+ T cells following transplantation. A growing body of evidence shows that induction of long-term transplant survival by "costimulation blockade" (CoB) regimens is impaired by inflammatory responses. In particular, multiple studies have reported that engagement of toll-like receptors (TLR) abrogates the tolerogenic effect of CoB. Despite the identification of type-1 interferons (TI-IFN) as mediators of this effect in multiple models, the target population and the specific pathway used by TI-IFN to induce this effect are not yet known. To better understand how an inflammatory environment, and more specifically IFN-β, could interfere with the induction of transplant tolerance, we studied its impact on the immunomodulatory properties of IL-10. Mouse bulk T cells were isolated by negative selection, and Tmem and Treg subpopulations were identified by flow cytometry. IL-10R expression and phospho-STAT3 induction (a key signaling step) after IL-10 and IL-6 stimulation in Tmem and Treg cells were measured via optimized flow cytometry techniques. The gene expression profile of T cell subsets exposed to TI-IFN was assessed by microarray and quantitative PCR analysis. Protein levels were measured by Western blot. Our studies show that following 48 h of bystander incubation with IFN-β, Tmem and Treg subpopulations presented a dramatic defect in the production of phospho-STAT3 in response to IL-10, but not to IL-6. Microarray and flow cytometry data indicated that this IL-10-specific unresponsiveness was not associated with a reduction in surface IL-10R expression, an increase in SOCS (Suppressor of Cytokine Signaling) 1 and 3, or reduced STAT3 cytoplasmic availability. They did, however, suggest a possible role of STAT1 in this process. Comparing T cells before and after IFN-β exposure, there is a complete reversal of the STAT1/STAT3 ratio. Moreover, we observed a stronger increase in phospho-STAT3 levels in STAT1-KO T cells after IL-10 stimulation. Overall, these data reveal a promising new molecular mechanism whereby IFN-β interferes with IL-10 signaling in T cells by increasing levels of STAT1. Our data suggest that STAT1 cross-competes with STAT3 for IL-10R binding, preventing STAT3 phosphorylation and activity. Targeted regulation of this mechanism, which counteracts the suppressive functions of IL-10, could be a powerful tool to improve the efficacy of immunomodulatory strategies for transplant tolerance induction. After adjusting for race, sex, age, BMI, directed versus non-directed donation type, and biological relatedness between donor and recipient, low-SES PDs had 1.76-fold higher odds of any delay. CONCLUSIONS: A substantial portion of PDs were delayed during the LKD process, particularly those from low-SES zip codes. These PDs also had lower proportions of candidacy and donation. Our findings may encourage investigation into optimizing LKD evaluation and PD support programs, particularly for low-SES PDs. Post-donation depression (PDD) and regret (PDR) remain poorly characterized for live kidney donors.
Individuals who develop kidney-related medical conditions after donating may have higher odds of developing PDD or PDR. METHODS: 617 live kidney donors participated in a survey at a median (IQR) of 10 (7-14) years post-donation. Each participant completed a Patient Health Questionnaire-2 (PHQ-2) and answered a question to gauge PDR: "Given the chance, would you offer to donate your kidney again?" 172 individuals also completed a PHQ-9. We tested the predictive value of the PHQ-2 for the PHQ-9 using non-parametric ROC analysis and determined 83% sensitivity, 98% specificity, 77% PPV, and 99% NPV. We used individual logistic regression models, adjusted for race/sex/age, to investigate the odds of PDD in individuals with PDR, and the odds of PDD or PDR in those with new-onset morbidities (hypertension (HTN), diabetes, chronic kidney disease (CKD), and kidney stones). These morbidities were chosen for their biological relationship to donation or their severity in the setting of reduced renal biomass. RESULTS: 9% of donors scored ≥2 on the PHQ-2, indicating PDD. 25% of those who expressed PDR also exhibited PDD, versus 8.4% of those without PDR (aOR=3.60, 95% CI 1.51-8.53, p<0.01). Donors who developed any morbidity had 2-fold higher odds of PDD (aOR=2.03, p=0.02) and those who developed HTN had 3-fold higher odds of PDD (aOR=3.08, p<0.01). There was no evidence of an association between other morbidities and PDD (aOR=1.30, p=0.6 for CKD; aOR=0.99, p>0.9 for diabetes; aOR=0.65, p=0.7 for kidney stones), nor was there evidence of an association between any morbidity and regret (aOR=1.24, p=0.6 for all combined; p>0.1 for each individually). Purpose: To evaluate the impact of an electronic feedback system (EFS), the SIMpill Medication Adherence System®, on medication adherence in patients who have received a solid organ transplant. Methods: A total of 89 solid organ transplant recipients were randomized to 1 of 4 groups: 2 intervention groups (I1 and I2) and 2 control groups (C1 and C2). Subjects in I1 or I2 received the EFS, a medication-dispensing device that communicates the timing of openings or doses taken to a secure server. If a medication dose was missed, a text reminder message was sent to the subject (I1). In addition to patient reminders, providers were notified if consecutive doses were missed over a 72-hour period (I2), enabling intervention. C1 received an EFS device but feedback was not provided to the patient or provider. C2 did not receive an EFS device. Results: A total of 39 subjects from two subgroups, I1 (N=20) and C1 (N=19), were selected for analysis with Mann-Whitney U tests. A majority of patients used the device for greater than 6 months. Median total doses taken, days with correct dosing, and doses taken within 2 hours of target time were significantly greater in I1 compared to C1. To date, the participating peer mentors (n=9) have completed 710 patient encounters over 54 visits in 12 different dialysis centers. To improve the program, staff suggested a more facility-specific mentor program in which the mentor would represent patient demographics, live locally, and be available to meet with patients outside of the dialysis facility. To address these comments, the SEKTC subcommittee developed a Peer Mentor Program Toolkit (figure 1) designed to help facilities recruit and train local KTx recipients to serve as peer mentors.
The toolkit is a 63-page electronic booklet written at an 8th-grade level, and includes peer mentor recruitment strategies and tools, mentor training materials, and activities a mentor could complete with their mentees. Abstract# 552 A linear relationship existed between median eGFR at six months and the KDPI of the donor. Linear regression analysis showed that for every 10% increase in KDPI, the median eGFR dropped by 3.1 ml/min/1.73 m². Abstract# 570 Outcomes Associated with Carbapenem-Resistant Enterobacteriaceae Infection After Solid Organ Transplantation in a Multicenter Study. S. Huprikar, 1 L. Casner, 1 L. Camera Pierrotti, 2 A. Nellore, 3 R. Madan, 4 J. Garcia-Diaz, 5 S. Jacobs, 6 D. Lee, 7 W. Trindade Clemente, 8 G. Alangaden, 9 R. La Hoz, 10 N. Theodoropoulos, 11 M. Miceli, 12 G. Santoro-Lopes, 13 D. Banach, 14 D. Simon, 15 G. Patel. 1 1 Mount Sinai, New York; 2 Univ of Sao Paulo, Sao Paulo, Brazil; 3 UAB, Birmingham; 4 Albert Einstein, Bronx; 5 Ochsner, New Orleans; 6 Cornell, New York; 7 Drexel, Philadelphia; 8 Univ Federal de Minas Gerais, Belo Horizonte, Brazil; 9 Henry Ford, Detroit; 10 UT Southwestern, Dallas; 11 Ohio State, Columbus; 12 U of Michigan, Ann Arbor; 13 Federal Univ of Rio de Janeiro, Rio de Janeiro, Brazil; 14 Yale, New Haven; 15 Rush, Chicago. Carbapenem-resistant Enterobacteriaceae infection (CREI) is associated with poor outcomes in solid organ transplant (SOT) recipients, but most reports are single-center experiences. Patients who underwent SOT between 1/1/2007 and 7/31/2013 and later developed CREI were eligible for chart review. The primary outcome was one-year mortality in SOT recipients with CREI within one year of SOT. Our cohort consists of 164 SOT recipients from 15 sites. The median age was 56; 61% were male. The transplanted organs were as follows: kidney (72), liver (62), liver-kidney (14), other (16). There were 170 CRE isolates: Klebsiella (129), Enterobacter (26) and other (15). 196 sites of CREI were observed: urinary tract (62), bloodstream (40), abdomen (36), lung (26), surgical site (25) and other (7). Surgical complications prior to CREI occurred in 83 (51%). In the entire cohort, the median intervals from SOT to CREI and from CREI to death were 51 days and 71 days, respectively. CREI occurred within one year of SOT in 140 (85%). The one-year mortality rate was 39/140 (28%), with a median interval from CREI to death of 30 days. The median interval from SOT to CREI in the 24 patients who developed CREI after one year was 812 days. The mortality rate in this group was 10/24 (42%). The median interval from CREI to death in this group was 50 days. To our knowledge, this is the largest multicenter series of post-SOT CREI and confirms that CREI is usually an early complication. The one-year survival rate of 72% in SOT recipients with CREI in the first year is better than previously described in the literature. Analyses to identify factors associated with mortality and survival are in progress. Abstract# 571 HO-1/SIRT1/p53 Axis Regulates Macrophage Activation and Attenuates Liver Ischemia-Reperfusion Injury in Mice. The mechanism by which macrophage heme oxygenase-1 (HO-1) confers hepatoprotection remains incompletely defined. As tumor suppressor protein p53 may be critical in macrophage activation, we have analyzed how the macrophage HO-1/SIRT1/p53 axis affects hepatoprotection in liver IRI.
Methods/Results: Livers of wild type (WT) and myeloid-specific HO-1 transgenic (HO-1TG) mice (C57BL/6) were subjected to partial warm ischemia (90 min) followed by reperfusion (6 hr). The higher levels of SIRT1, p19, p53, MDM2 and PUMA, and the lower level of p-Stat1, in HO-1TG bone marrow-derived macrophage (BMM) cultures were abolished after transfection of HO-1TG macrophages with SIRT1-siRNA. We then asked how pretreatment with the SIRT1 activator resveratrol (Res) may affect the severity of liver IRI in myeloid-specific HO-1 knockout (HO-1 KO) vs. FLOX-control (Con) mice. While myeloid-specific HO-1 deletion worsened IRI, Res treatment rescued HO-1-deficient livers from IR damage. Addition of Res to HO-1-deficient BMM cultures restored SIRT1, p19, p53, MDM2, PUMA, Noxa and p21 expression while depressing p-Stat1, TNFα, MCP1, iNOS, IL-1β and IL-12. Res upregulated p19/p53/MDM2 expression in WT BMM cultures, while p19-siRNA transfection abolished the ability of Res to induce p53. Conclusion: Macrophage SIRT1 upregulates p53 via p19 signaling. A myeloid cell-specific HO-1/SIRT1/p53 axis regulates inflammation and promotes hepatoprotection in IR-stressed livers. We evaluated clinical and health care utilization differences for inpatient, outpatient, and emergency care at 90 days. We created multivariate logistic and Cox models to assess the association between discharge timing and complications, readmission, and patient and graft survival. Results: 13.7% of recipients were discharged early (n=117), of whom 73 were living donor kidney recipients (62.4% of the group). Major complications at 30 days were correlated stepwise with discharge status. Graft and patient survival were similar based on discharge status on multivariate analysis (both p=NS), as were outpatient clinic and ER utilization at 90 days. Care process improvements to reduce initial length of stay after kidney transplant are safe and do not result in excess health care utilization. While the incidence of antibody-mediated kidney graft rejection has increased, the key cellular and molecular participants underlying this graft injury remain unclear. We have previously reported that rejection of kidney allografts in CCR5-/- mice is dependent on production of donor-specific antibody. The goal of the current study was to determine if cells expressing cytotoxic function contribute to antibody-mediated kidney allograft rejection in these recipients. Wild type C57BL/6, B6.CCR5-/- and CD8-/-/CCR5-/- mice were transplanted with complete MHC-mismatched A/J kidney grafts, and intra-graft inflammatory components were followed to rejection. B6.CCR5-/- and CD8-/-/CCR5-/- recipients rejected kidney allografts by day 35, whereas 65% of allografts survived in wild type recipients. Rejecting recipients expressed high levels of intra-graft VCAM-1 and MMP7 mRNA that were associated with high serum titers of donor-specific antibody. At rejection in wild type and CD8-/-/CCR5-/- recipients, kidney allografts also expressed genes associated with NK cell (Sh2d1B1 and MYBL1) but not T cell (CXCR6) activity during inflammation. High levels of perforin and granzyme B mRNA expression in kidney allografts peaked on day 6 post-transplant in all recipients, but were absent in isografts. Depletion of NK cells in CD8-/-/CCR5-/- recipients reduced this expression to background levels and promoted long-term survival of 40% of the kidney allografts.
These results support a role for NK cells in increasing inflammation during antibody-mediated kidney allograft injury and in rejection of the grafts. Metallothionein Accelerates Anti-MHC Induced Small Airway Obliteration and Fibrosis by Negatively Regulating IL-10 and Regulatory Cells. D. Nayak, F. Zhou, N. Benshoff, T. Mohanakumar. We have identified metallothionein 1 (Mt1) as one of the early genes induced following ligation of MHC with specific Ab. Metallothioneins are known to participate in heavy metal detoxification and to suppress regulatory T cell differentiation. In this study, we examined the effector function(s) of Mt1 in anti-MHC induced murine OAD. We determined the development of OAD and immune responses to the lung-associated self-antigens (SAgs) K alpha 1 tubulin (Ka1T) and collagen V (ColV) following MHC class I ligation by anti-MHC in Mt1-deficient (Mt1-/-) mice. Mt1-/- and WT mice were administered anti-H-2Kb or isotype control Abs intrabronchially for 30 days and evaluated for small airway obstruction and fibrosis by histopathology, and for elicitation of immune responses to lung-associated antigens (Ka1T and ColV) by ELISA and ELISPOT. The frequency of regulatory T cells (CD49b+ and LAG3+) in lungs was quantitated by multicolor flow cytometry, and production of IL-10 by lung leukocytes was assessed by quantitative PCR and Luminex assays. MHC class I Ab (anti-H-2Kb) induced significantly less OAD in Mt1-/- mice: bronchiolar obstruction, epithelial hyperplasia and peribronchiolar fibrosis were similar to isotype control. Anti-H-2Kb administered WT mice, on the other hand, demonstrated bronchiolar obstruction compared to isotype control. Concomitantly, MHC class I ligation in WT mice induced higher Ka1T- and ColV-specific auto-Abs, and more IL-17 and IFN-γ secreting T cells, compared to Mt1-/- mice. Further analysis of the regulatory T cells indicated a reverse trend; higher accumulation of regulatory T cells and a greater IL-10 response were found in Mt1-/- than WT mice. Overall, loss of Mt1 abrogated anti-MHC induced OAD and resulted in significantly weaker Ab and T cell responses to lung SAgs. Using a murine model of anti-MHC induced OAD, we have identified Mt1 as an important player involved in the early inflammation and pathogenesis. Concurrent Session: Bone Marrow Transplantation and Chimerism: Animal Models. Abstract# 475 Combination of Veto Cell Transfer and iNKT Cell Therapy Establishes Complete Hematopoietic Chimerism in Non-Myeloablative BMT Recipients. Since bone marrow cells (BMCs) are endowed with "veto" activity, reduction in the number of transferred BMCs usually results in engraftment failure in nmBMT. We previously reported that BM engraftment could be promoted by (1) enhancing Treg activity via iNKT cell stimulation with a liposomal α-galactosylceramide (lipo-aGC), and (2) inducing donor-specific. On the following day, skin grafts from B6 and C3H (H2k) mice were simultaneously transplanted onto the recipient mice. When this clinically practical number of BMCs was used, all mice rejected both B6 and C3H grafts. However, when 5 × 10⁵ splenic T cells obtained from an H2b+ GFP-Tg donor were additionally administered, complete chimerism was established in the mice (fig 1) and they accepted the skin allograft permanently in an H2b-specific manner.
Only H2b+ CD8+ T cells showed this veto effect, whereas transfer of H2b+ CD4+ T cells or 3rd-party T cells did not facilitate BM engraftment. Concordantly, early expansion of GFP+ CD44hi CD62Llo effector CD8+ T cells was observed in many lymphoid organs, including the BM and thymus. BM-derived H2b+ GFP- T cells regenerated immediately thereafter and had completely replaced GFP+ T cells by day 60. Accordingly, complete chimeric mice did not show GvHD physically or pathologically. Lack of host reactivity was also confirmed in an in vitro proliferation assay. Mice treated with lipo-aGC or MR1 alone rejected BM and skin grafts despite T cell transfer (fig 2), suggesting that Treg cells, peripheral deletion, and veto T cells act in concert to establish tolerance in nmBMT recipients. Post Transplantation High-Dose Cyclophosphamide Treatment Promotes Immune Tolerance After Skin, Solid Organ, and Vascularized Composite Allotransplantation. Luznik, G. Brandacher. Department of Plastic and Reconstructive Surgery. Methods: Murine skin, heart, and VCA transplants were performed across a full MHC mismatch barrier. Recipient treatment comprised non-myeloablative TBI, T-cell depletion, and a single dose of post-transplant cyclophosphamide. Donor BM and splenocytes (DBM) were injected at the time of transplantation. Antibody-Mediated Rejection in Pancreas Transplantation. Pancreas transplants (PTx) were evaluated for graft dysfunction in the first post-PTx year. PTx were grouped by rejection grade using biopsy-proven (BP) and clinically suspected (CS) criteria. Univariate and multivariate analyses were performed to determine potential risk factors for rejection and graft failure. BP and CS rejection rates were correlated with patient and graft survival. RESULTS: Univariate analysis (Table 1) identified the following recipient risk factors for AMR: age (p=0.02), gender (p=0.04), and peak PRA (p<0.001). Having more than one prior PTx was of borderline significance (p=0.06). Multivariate analysis confirmed peak PRA to be the only significant risk factor for AMR (p<0.001). Patient survival at 1 year was similar for all groups, while graft survival was significantly reduced for patients with any type of rejection (Figure 1). The AMR/mixed groups exhibited worse outcomes than the CMR group (p=0.005). After adjusting for recipient age, gender, and number of prior PTx, rejection grouping remained a significant predictor of graft survival. CONCLUSION: AMR or mixed rejection appears to confer worse graft survival than cell-mediated rejection alone. We examined donor age, sex, race and BMI, as well as recipient age, sex, race, BMI, and history of kidney and/or pancreas rejection. None of these factors were significantly different between the PTX and non-PTX groups, and none were predictive of the need for pancreatectomy. The most common indications for late PTX were thrombosis (n=24, 43.6%), PTX at the time of re-transplant (n=11, 20%), intra-abdominal abscess (n=8, 14.5%), aortoenteric fistula (n=5, 9%), and duodenal perforation (n=4, 7%). Five (9%) grafts had features of chronic rejection. The majority of patients presented with abdominal pain, fever or nausea/vomiting. We find multiple indications for late PTX in patients on ongoing immunosuppression, the most common being allograft vascular thrombosis.
The 13.8% PTX rate is lower than the reported late nephrectomy rate of 27%. These results suggest that the SLPI concentration in the perfusion liquid could be a good predictive marker for short- and long-term clinical outcomes in kidney transplantation. Predicting the Timing of Graft Failure in De Novo DSA Positive Patients. The risk of graft failure after de novo DSA (dnDSA) is not always immediate. To make more informed therapeutic decisions in dnDSA+ patients, we sought to predict the timing of graft failure after dnDSA onset. Methods: We performed a single-center study of 99 dnDSA(+) patients who received a primary transplant between 3/99 and 12/10 and had 3 years of post-dnDSA follow-up. All patients had HLA antibody monitoring by single antigen beads pre-txp, post-txp at 1, 3, 6, 9 and 12 months, and then annually. IgG3 subclass testing was also performed. Grafts failing within the first year post-dnDSA were compared with grafts surviving beyond 1 year post-dnDSA (Fig B). The early graft failure group's profile at dnDSA onset had higher rates of IgG3 dnDSA, HLA Class I + II dnDSA, and serum creatinine >2 mg/dL. The intermediate graft failure group (n=17) was compared to the group with good graft function at least 1-3 years post-dnDSA (n=70) (Fig C). The intermediate graft failure patients had a higher rate of BK viremia (p=0.02). The development of IgG3 dnDSA by 1 year post-dnDSA occurred in twice as many intermediate graft failure patients. Additionally, acute rejection by 1 year post-dnDSA and a >25% eGFR decline at 1 year post-dnDSA (compared to eGFR at dnDSA onset; HR 6.5, 95% CI 2.4-17.5) predicted graft failure between 1 and 3 years post-dnDSA. The late graft failure group (n=15) was compared to dnDSA+ grafts functioning >3 years post-dnDSA (n=58). Background: Reducing the use of tissue biopsy by efficiently using non-invasive samples for continuous monitoring and evaluation of graft function would be ideal. Reports indicate continuous systemic and local molecular communication via cell-free miRNA (cf-miRNA) and RNA in organisms. We aimed to identify cf-miRNA biomarkers in plasma (systemic) and urine supernatant. Also, a set of biopsies for cause with a histological diagnosis of calcineurin inhibitor toxicity (CNIT) was included to test the specificity of markers. MicroRNA PCR arrays with a panel of 84 miRNAs involved in fibrosis pathways were run on the samples. Data were normalized and analyzed using the ∆∆Ct method and online PCR data analysis software from SABiosciences. Results: 18 plasma cf-miRNAs differentially expressed between P and NP were identified (p<0.05), including miR-199b-5p, miR-10a-5p, and miR-29b-3p, with reported functions in pro-fibrosis, ECM remodeling, and EMT, respectively.
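The ∆∆Ct normalization mentioned above reduces to simple arithmetic on Ct values. A minimal sketch (Ct values and the reference small RNA are placeholders, not the study data):

```python
# Relative quantification by the delta-delta Ct method.
# Ct values and the reference small RNA are placeholders, not study data.
ct_target_case,    ct_ref_case    = 26.1, 19.8   # e.g., a progressor (P) sample
ct_target_control, ct_ref_control = 28.4, 20.0   # e.g., a non-progressor (NP) sample

delta_ct_case    = ct_target_case - ct_ref_case           # normalize to the reference RNA
delta_ct_control = ct_target_control - ct_ref_control
delta_delta_ct   = delta_ct_case - delta_ct_control

fold_change = 2 ** (-delta_delta_ct)
print(f"ddCt = {delta_delta_ct:.2f}, fold change = {fold_change:.2f}")
```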
While most of these miRNAs (61%) overlap with the differentially regulated miRNAs in CNIT, upregulation of miR-328-3p (p=0.012; FC=3.74) was unique to the CNIT group. Abstract# 505 IgG Dilutions, Subclasses, C1q, and IgM: Determining Relevant Testing for De Novo DSA and Outcome Prediction. M. Everly et al. All patients were tested at the following time points: at the time of DSA onset, at 6 months post-dnDSA, and once between 12-24 months post-dnDSA. All samples were tested at an IgG 1:3 dilution, for IgG subclasses (IgG3 and IgG4), and for IgG C1q. Samples at dnDSA onset were also tested at an IgG 1:10 dilution and for IgM. Results: At the time of dnDSA onset, IgG subclass and C1q testing did not distinguish those at increased risk of graft loss (Fig 1a). A positive result at the 1:10 dilution at dnDSA onset correlated with a 4.6-fold increased risk of eventual graft loss compared to DSA IgG 1:10(-) patients. There was a trend toward IgM dnDSA at onset being a characteristic that correlates with development of acute rejection within 1 year post-dnDSA (p=0.06). Looking at testing beyond dnDSA onset, patients with C1q-positive DSA at 6 months were at very high risk for graft loss. Finally, those with IgG3 DSA at 12 months post-dnDSA were also at very high risk for graft loss. Impact of Delayed Graft Function Varies by Demographic and Clinical Characteristics in Deceased Donor Kidney Transplantation. The goal of this study was to use national data to comprehensively explore these effect modification relationships. METHODS: Using SRTR data from 127,251 adult kidney-only DDKT recipients from 2000-2014, we examined interaction terms between DGF and various factors in Cox models for death-censored graft failure (DCGF) and post-transplant mortality. We used time-varying coefficients for DGF to account for non-proportional hazards. RESULTS: Overall, DGF was associated with a 1.86-fold higher risk of DCGF, which was amplified in zero HLA mismatch recipients (any mismatch: 1.83, zero-mismatch: 2.17) (Figure A). The association was attenuated in racial/ethnic minorities (White: 2.20, African American (AA): 1.60, Hispanic: 1.90, other: 1.59). DGF was associated with a 1.56-fold higher risk of mortality, which was amplified in female recipients (male: 1.51, female: 1.66) and zero-mismatch recipients. Better understanding of the impact of DGF will help select patients who are more likely to tolerate this temporary physiologic insult without affecting longer-term outcomes. Abstract# 507 Lower Renal Graft Function and Accelerated Fibrosis in HIV-Infected Transplant Recipients with Previous HIVAN Compared to Non-HIVAN HIV-Positive Transplant Recipients. M.-N. Peraldi, H. Ayari, et al. At last follow-up, 87.5% of patients were alive and 81% had a functional graft. Patient and graft survival was identical in patients with or without HIVAN. Eleven patients (34%) had acute rejection: 3 borderline lesions, 2 acute cellular rejections, and 6 acute humoral rejections. Patients with previous HIVAN have a higher incidence rate of renal graft fibrosis and chronic vascular lesions than non-HIVAN HIV-positive kidney-transplant recipients. Recurrence of collapsing FSGS is a frequent late event in patients who had previous HIVAN. Changes in the prevalence of CPRA 99-100% patients on the KI waiting list (WL) over time were also examined. Results: The percentage of transplants into CPRA 99-100% recipients increased from 2.3% pre-KAS to 17.7% in the first month post-KAS (Fig 1).
11.0% for 99%, 20 Geographic Differences in Racial Disparity Reduction in Kidney Transplant Rates Before and After the New Kidney Allocation System. Transplantation Under KAS Increases Access 3-Fold for Blood Type B Patients, yet Very Few Candidates Are Listed as Eligible. Methods: We examined trends in SCT prior (6/1/13-12/3/14) vs. post-KAS (12/4/14-8/31/15). Solitary, deceased donor kidney transplant (tx) rates per active patient year were compared post-KAS for B candidates listed as eligible vs. not eligible for SCT, accounting for changes in candidates' A2/A2B eligibility status. Relative risk of tx was approximated as the ratio of tx rates. Results: Nationally, SCT increased from 1.9 to 9.0 transplants per month, a 5-fold post-KAS increase. B candidates listed as SCT eligible had tx rates of 0 5% (n=455) of B registrations at 44 programs were reported as eligible to receive these offers, despite published research suggesting 80% of B patients have sufficiently low anti-A titers to medically qualify and SCT outcomes are comparable to B-to-B tx. Post-KAS, SCT transplants have occurred at 27 kidney programs. Of blood type A kidney donors, OPOs reported 10% as having subtype A2. Conclusions: Under KAS, B candidates listed as SCT eligible are not only permitted to receive A2/A2B offers, but they receive priority above other local adults for these kidneys. This has led to markedly higher tx rates for eligible candidates, yet few tx centers are currently participating. Increased awareness of this benefit and dissemination of strategies for reliable and cost-effective anti-A titer screening are needed for this aspect of KAS to reach its full potential. Abstract# 518 Induction Therapy with Depleting Antibodies in Low Immunological Risk Renal Transplant Patients Treated with a Steroid Free Regimen -Comparison of Alemtuzumab vs rATG. At follow-up (6.4±2.1 yr vs. 2.6±1 yr) the chronic rejection rate was 11% vs 5.4%, respectively. In low immunological risk kidney transplant recipients on steroid-free immunosuppression, both alemtuzumab and rATG induction therapies were safe and effective. Alemtuzumab was associated with a lower clinical acute rejection rate. The statistically nonsignificant trend to a higher incidence of one-yr subclinical acute rejection, humoral rejection and BK-virus nephritis, observed after alemtuzumab induction, highlights the need for further studies. Although the policy resulted in a substantial increase in liver-intestine transplants, the death rate was essentially unchanged. This could be due to the increase in demand for liver-intestines in the post-era and perhaps changes in patient selection for transplant. The OPTN will continue to monitor the outcomes of these patients in the future. Pre-Donation Weight Loss Postpones Living Kidney Donation without Attaining the Desired Weight Maintenance in the Midterm. Pre-donation weight loss in living kidney donation has two main objectives: reduce surgical risk, and decrease long-term complications from obesity and glomerular hypertrophy secondary to kidney mass reduction.
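As a worked check on the rate comparison described in the blood type B abstract above, where the relative risk of transplant was approximated as the ratio of transplant rates, the national monthly volumes quoted there give

\text{RR} \approx \frac{9.0\ \text{SCT per month (post-KAS)}}{1.9\ \text{SCT per month (pre-KAS)}} \approx 4.7,

in line with the reported roughly 5-fold increase (the eligible-versus-not-eligible comparison itself uses rates per active patient-year rather than raw monthly counts).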
We retrospectively evaluated living kidney donors from a single center for pre-and post-donation weight changes, and compared clinical and renal outcomes 82±0.15mg/dL, GFR 92±24ml/min/1,73m2, median time to donation 3.6 months No differences between groups were found regarding age, gender, median time to donation, serum creatinine or GFR either at donation, or at 3 years follow-up On follow-up, weight gain in these patients was significantly higher since month 6 (p<.005), up to 3 years (p<.001), having recovered more than 66% of the weight by the 3 rd year (picture 1). No differences were found regarding GFR, proteinuria, or microalbuminuria at this point. In conclusion, weight loss previous to living kidney donation increases length to transplant, and is often regained as soon as 3 years post-donation. Efficient programs to prevent weigh regain in the post-donation period need to be implemented to prevent long-term complications Motivation and Stress Associated with Non-Directed Kidney Donation Background: Nondirected kidney donors (NDDs) are unique in that they have the risks of nephrectomy but often do not have the benefit of seeing the positive impact of their donation. Little is known about what motivates NDD to donate, or how they view donor-related stressors. Methods: We surveyed NDDs regarding factors motivating them to donate and about possible donor-related stressors At the time of donation 86% had > HS education and 91% were employed. Non-responders had donated longer ago and more likely to smoke. Regarding motivational factors, very or moderately influential were (Fig 1): desire to help another (96%), awareness of organ shortage (55%), moral duty (48%), and imagining self as recipient (44%) Regarding stress: 51% reported the overall donation experience as 'a little' stressful, 5% moderate and 4% very or extremely stressful (Fig 2). The most common stressors were concern related to possible physical consequences (67%); fear of recipient rejecting (53%) Not one regretted their choice to donate. Conclusion: The majority of the NDDs are driven by their desire to help, and universally agree that their decision to donate was the right one. Some had stress related to donation, but they still did not regret their decision. Understanding stressors for NDDs may help transplant centers target additional supports. CITATION INFORMATION Donation rates between eligible AA and Non-AA donors were similar, but significantly less when looking at the original pool of potential donors. The majority of AA donors were eliminated after the LDQ, suggesting initial interest in donation is not sustained. LD educational outreach, early intervention, and rapid workup tempo may increase AA kidney donation Grant/Research Support, Novartis Predonation Prescription Narcotic Use: A Novel Risk Factor for Readmission After Living Donor Nephrectomy. K ) to quantify predonation prescription narcotic use and postdonation readmission events. Narcotic fills in the yr before donation were normalized to morphine equivalents. Associations of predonation narcotic use (adjusted odds ratio, aOR) and other baseline clinical, procedural, and center factors with readmission within 1 yr postdonation were examined using multivariate logistic regression Adjusted readmission risk was significantly (P<0.05) higher for African Americans (aOR 1.44), women (aOR 1.23), exchange participants (aOR 1.44), uninsured LKD (aOR 1.35), LKD with predonation eGFR<60 (aOR 2.43), and after robotic nephrectomy (aOR 1.89). 
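A minimal sketch of the two analysis steps in the pre-donation narcotic-use abstract above, normalizing fills in the year before donation to morphine equivalents and then fitting a multivariable logistic regression for readmission within one year, is given below. The file names, column names, and conversion factors are illustrative assumptions, not the authors' pipeline or the actual registry fields:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative oral morphine-equivalent conversion factors (mg morphine per mg drug);
# the factors actually used by the authors are not given in the abstract.
MME_FACTOR = {"morphine": 1.0, "hydrocodone": 1.0, "oxycodone": 1.5, "hydromorphone": 4.0}

# One row per pre-donation pharmacy fill (hypothetical file and columns)
fills = pd.read_csv("predonation_fills.csv")
fills["mme"] = fills["quantity_mg"] * fills["drug"].map(MME_FACTOR)
mme_year = fills.groupby("donor_id")["mme"].sum().rename("mme_prior_year")

donors = pd.read_csv("donors.csv").set_index("donor_id").join(mme_year)
donors["mme_prior_year"] = donors["mme_prior_year"].fillna(0)
donors["predonation_narcotic_use"] = (donors["mme_prior_year"] > 0).astype(int)

# Multivariable logistic regression for readmission within 1 year of donation;
# exponentiated coefficients are adjusted odds ratios (aORs).
fit = smf.logit(
    "readmit_1yr ~ predonation_narcotic_use + african_american + female"
    " + exchange_participant + uninsured + egfr_lt60 + robotic_nephrectomy"
    " + high_volume_center",
    data=donors,
).fit()
print(np.exp(fit.params))      # adjusted odds ratios
print(np.exp(fit.conf_int()))  # 95% confidence intervals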
LKD at highvolume centers had lower readmission rates (aOR 0.82). LKD who used narcotics predonation were more likely to fill narcotics late Infusion of Extracorporeal Photopheresis Treated Donor Splenocytes Leads to Long-Term Liver Allograft Survival and Donor Specific Tolerance in Rats Hepatopancreatobiliary Surgery ECP-DSp mediated liver graft protection was coincided with increased numbers of CD4 + CD25 + FoxP3 + T regulatory cells (Tregs) in the blood of ECP treated recipients as compared with the untreated (p<0.05) and TAC treated recipients (p<0.05) at POD 7 and 14. Furthermore, ECP-DSp treated allografts (after POD100) accepted donor-type (ACI) skin grafts but not third-party skin grafts, suggesting that ECP could successfully induce donor-specific tolerance. In conclusion, a single pre-transplant infusion of ECP-DSp facilitated long-term liver allograft survival and led to donor specific tolerance, demonstrating a novel role of ECP, an existing immunotherapy, in induction of transplant tolerance. Promoting generation of Tregs may contribute to the mechanism of action for protection of liver allograft from rejection by ECP-DSp infusion Concurrent Session: Psychosocial and Treatment Adherence Abstract# 545 One of the barriers to live kidney donation (LKD) is successful completion of the evaluation process. While it is known that many potential donors (PDs) fail to complete this process, factors associated with delay or failure are poorly understood. We evaluated factors affecting progress through LKD evaluation through each of five LKD evaluation phases: 1, referral and questionnaire; 2, blood and tissue typing; 3, routine screening and physical exam; 4, evaluation and clearance; 5, donation. We defined phase-specific "delay" as ≥90 th percentile of time spent by PDs who completed that particular phase (respectively 37, 162, 132, 106, and 307 days) and applied these criteria to all eligible PDs. We built a zip code level SES index--as described by AHRQ--based on crowding, property values, median income, poverty, education, and unemployment. We linked the index by zip code and categorized PDs as "low-SES" (quintiles 1&2) or "high-SES" (quintiles 3-5). RESULTS:Of 1388 initial PDs Barriers to Comprehensive Transplant Education at U.S. For-Profit Dialysis Centers and Associations with Waitlist Access Less is known about differences in specific transplant education disseminated or covered within educational counseling sessions. In a national transplant education training conducted with 1,695 adult, chronic U.S. dialysis centers in 2011-2015, we surveyed staff about their use of 15 center-and staff-level education approaches (e.g., designated transplant educator; providing handouts) and 4 barriers (e.g., insufficient time to educate) While there were no differences by ownership type in the use of many education practices, for-profit centers were less likely to have detailed discussions about the risks/benefits of transplant Notably, for-profit dialysis centers had lower wait-listing rates than non-profit centers (IRR=0 For-profit dialysis centers are more likely to have a systematic transplant education program that meets CMS requirements, but their staff report more barriers to providing comprehensive risk-benefit information, which may drive lower waitlisting rates. 
Future research must examine the higher prevalence of these barriers among for-profit dialysis centers Coping with Patient Death on Pediatric Liver Transplant Teams ESRD Network 6, Raleigh; 4 University of South Carolina, Columbia. BACKGROUND The Southeastern Kidney Transplant Coalition's (SEKTC) mission is to improve the low kidney transplantation (KTx) rate in the North Carolina, South Carolina, and Georgia. Our aim was to evaluate a KTx-specific Peer Mentor Program and develop a dialysis facility-specific Peer Mentor Program Toolkit. METHODS A SEKTC subcommittee developed and implemented a Peer Mentor Program in 2014. Peer mentors were healthy examples of transplant recipients matched to a Georgia dialysis facility to complete monthly visits. Dialysis facility staff, peer mentors, and mentees (dialysis patients) were surveyed regarding the Peer Mentor Program; survey results were used to develop a Peer Mentor Program Toolkit. RESULTS Preliminary evaluation showed facility staff 'agree' or 'strongly agree' that the peer mentors provide social support (80%), improve a patient's knowledge about KTx (80%), and encourage dialysis patients to educate themselves about KTx monitored with Doppler of the stoma and angiography when necessary. Patients with hypercoagulable state received systemic anticoagulation from the time of graft reperfusion. Outcome parameters were thrombotic complications and graft survival. Results: 25/33 grafts (76%) were reperfused "centrally" (aorta and vena cava), 24 of them via vascular homograft. The other 8 grafts were reperfused via native superior mesenteric artery and vein. Overall, 28/33 (85%) patients required both arterial and venous homografts from the same donor (donor's median age 1.1 Generation of Naïve Donor-Derived Lymphocytes from Graft-Resident Lymphoid Progenitors After Human Intestinal Transplantation We recently demonstrated that mixed T cell chimerism (>3% donor cells) often appears in blood (7/12) usually without graft-versus-host disease (GVHD) following ITx, and was greatest in MVTx recipients. The donor T cells among 6 patients without GVHD were markedly enriched for the naïve recent thymic emigrant cell receptor excision circles (TRECs) compared with recipient cells and could last > 1 year. We utilized CFSE-MLR and TCRβ CDR3 deep sequencing to identify and track the GVH alloreactive repertoire in ITx recipients. Expanded GVH clones were detected early in intestinal biopsies in association with rapid myeloid antigen presenting cell replacement in the graft by the recipient. GVH-reactive clones were enriched early, but not late in recipient blood, in the absence of GVHD. GVH clones represented 35-65% of donor CD8 cells in early PBMCs (POD<50). GVH clones were later absent in circulating naïve donor-derived T cells (POD>100), consistent with their de novo generation from progenitors lymphoid-primed multipotent progenitors (LMPP), common lymphoid progenitors (CLP), and mixed myeloid progenitors (MPs) in liver and ileum of organ donors, and in perfusates from donor liver, small intestine, and transplanted multivisceral organ blocks. Our findings suggest that GVH-reactive donor T cells expand initially within the graft. These then enter the recipient circulation and may attack host hematopoietic cells, usually without causing GVHD. 
This lymphohematopoietic GVH response might allow the survival and expansion of donor hematopoietic progenitors from the graft to enter the peripheral blood and then the thymus, resulting in de novo T cell generation, and thereby promote sustained T cell chimerism. This pathway may help to reduce graft rejection after intestinal transplantation. CITATION INFORMATION Concurrent Session: The High KDPI Kidney: Outcomes and Optimal Utilization Analysis of Local versus Imported Expanded Criteria Donor (ECD) Kidneys: A Single Center Experience with 497 ECD Kidney Standardized management algorithms were implemented to preserve nephron function (including machine preservation) and pt selection was based on low immunologic risk, older age and predicted limited nephron need. Results: Over a 12.8 year period, we performed 497 ECD KTs including 247 local and 250 imported from other donor service areas. The import ECD group had more donors (16% vs 8.5%) and recipients (23% vs 16%) ≥ age 70, more zero HLA-mismatches (14% vs 2%), more pts with a PRA >20% (17% vs 9%), more KTs with a cold ischemia time >30 hours (46% vs 19%), fewer DCD/ECDs (5% vs 9%), higher pump resistance (mean 0.27 vs 0.20 mm Hg/ml/ min) and fewer kidneys managed with pump preservation (78% vs 92%, all p ≤ 0.05) compared to the local ECD group. The 2 groups were comparable in terms of other characteristics such as donor renal function and cause of death, donor and recipient gender and ethnicity, dialysis modality, waiting time, kidney laterality, as well as proportion of dual KTs (10% import vs 12% local) and retransplants (6% import vs 3% local). 271 pts (54.5%) had at least 5 years follow-up. With mean follow-up of 55 months, actual pt and graft survival rates were 71% and 57.6% in import vs 76% and 57.9% in local ECD KTs, respectively. Death-censored graft survival rates were 70% in import vs 69% in local ECD KTs. Delayed graft function occurred in 28% import vs 23% (p=NS) local ECD KTs. There were no differences in other outcomes. 1-and 2-year renal function (eGFR 42 ml/min/1.73 m 2 ) was similar in both groups. Conclusions: Mid-term outcomes are similar for import vs local ECD KTs, suggesting that broader sharing of ECD kidneys may improve utilization without compromising outcomes Survival Benefit of High-KDPI Kidneys Among Obese Transplant Candidates. M. Bowring 3%) received an 81-90 KDPI kidney and 1,421 (5.7%) received a 91-100 KDPI kidney. Post-KT time to equal risk was 4.9 months for recipients of KDPI 71-80 kidneys, 5.6 months for recipients of KDPI 81-90 kidneys, and 4.3 months for recipients of KDPI 91-100 kidneys. Time to equal survival was 8.5 months among recipients of KDPI 71-80, 14.5 months among recipients of KDPI 81-90 and 20.9 months among recipients of KDPI 91-100. Four years post-DDKT cumulative mortality was 32%, 22% and 7% lower in recipients of 71-80, 81-90 and 91-100 KDPI kidneys, respectively. CONCLUSION: Following a high risk period immediately post-DDKT, obese patients who accepted high-KDPI kidneys had decreased cumulative mortality when Survival Benefit of High-KDPI Kidneys Among Obese Transplant Candidates Avoidance of CNI and Steroids Using Belatacept -A Preliminary Report of CTOT-16 Secondary outcomes included treated episodes of acute rejection, de novo anti HLA antibodies, effector/memory and Treg subsets, and phenotypic differentiation of B cells. 
Results 163 HTX were enrolled with no significant differences between RIT and PLAC groups in the following characteristics: mean age 55 yrs, 85% male, 78% white, 45% mechanical circulatory support, 47% UNOS status 1A, 41% ischemic etiology, 23% diabetes, 20% CMV D+R-, average ischemic time 3.1 hours. 92.6% received maintenance CNI, MMF, steroids While these findings require further study, RIT should be used with caution as induction therapy in primary unsensitized HTX patients. CITATION INFORMATION: Chandraker A Scenarios. S. Gentry, 1,2 J. Pyke, 1 D. Schladt, 1 J. Zeglin, 1 W. Kim, 3 J. Lake, 4 R. Hirose, 5 D. Mulligan, 6 A. Israni. 1 Background: To address geographic disparity in liver allocation, a new organ distribution system consisting of 4 or 8 districts has been proposed (Gentry, Am J Transplant 2013). The OPTN Liver Committee is considering including proximity points for candidates within 150 or 250 miles of the donor. Purpose: We evaluated the impact of proximity preference on disparity in liver distribution. Methods: We used the liver simulated allocation model (LSAM) to project disparity (variance in median MELD/PELD at transplant by DSA) and transport burden (percentage of organs transported by flying) with bonus proximity MELD/ PELD points. Results: Broader sharing in the current 11 regions was projected to increase variance in median MELD/PELD at transplant compared with a current policy simulation. The 8-and 4-district maps decreased this variance by half (Figure 1 ). Adding proximity points did not substantially reduce this disparity benefit, but did reduce the percentage of organs transported by flying; flight percentage was lowest for the 8-district 5-point 150-mile scenario (Figure 2, 65% Association of Ultrastructural Changes in Renal Allograft Biopsies with Antibody-Mediated Rejection (AMR) and Graft Outcomes. A. Haririan, 1 M. Chaudhry, 2 J. Papadimitriou, 2 N. Costa, 1 B. Thomas, 1 M. Mavanur, 1 R. Ugarte, 1 C. Cangro, 1 C. Drachenberg. 2 1 Dept of Medicine, Unniversity of Maryland, Baltimore, MD; 2 Dept of Pathology, University of Maryland, Baltimore, MD.The role of electron microscopy (EM) in diagnosis of AMR is not well-defined. We sought to examine the EM findings in 796 biopsies from 622 adult recipients performed between July 2010 and Oct 2014. Median time to biopsy was 12.6 mo from transplantation ].Participation of ABO and HLA compatible pairs (CPs) in kidney paired donation (KPD) could significantly increase living donor (LD) transplantation (TX). CPs may be more likely to participate in KPD if there was some benefit. Potential benefits include a younger or better HLA matching donor, but only a minority of CPs will derive these benefits and benefits cannot be assured a priori. In contrast, providing the compatible recipient with priority for subsequent deceased donor TX in the event of primary transplant failure (reciprocity for KPD participation) is a tangible benefit that may encourage participation. We estimated the impact of reciprocity on overall TX. SRTR data on ABO and HLA compatible LD transplants between 2000-2011 and published information regarding the proportion of incompatible pairs and CPs that would match in a national KPD program were used to determine the overall percent increase in TX by discounting the number of CP recipients who would require wait-list priority after failure of a primary LD transplant from the projected increase in LD TX facilitated by the participation of CPs. 
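Stated as a formula (the abstract does not spell out the exact accounting, so this is only a plain-language restatement of the discounting step just described):

\%\ \text{net increase in LD TX} \approx \frac{\Delta\text{TX}_{\text{from CP participation}} - N_{\text{CP recipients later using wait-list priority}}}{N_{\text{baseline LD TX}}} \times 100.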
Because a minority of CPs will develop primary LD transplant failure and require wait-list prioritization for a subsequent deceased donor transplant (table), reciprocity strategies (3, 4) provided a greater % increase in transplantation compared to limiting KPD participation to incompatible pairs (strategy 1), or a strategy that limited CP participation to pairs that matched to a younger or better HLA matching donor but provided no wait-list priority for CPs (Strategy 2). Conclusion: Reciprocity strategies could significantly increase transplantation and will be further evaluated in the context of the Canadian kidney paired donation program. Introduction: Instead of thinking of the developing world as a place where there are desperate people who will sell their kidneys for money, we propose a new approach where the developing world can be seen as a place where there are desperate patients with kidney failure who need kidney transplants and who have willing, living kidney donors, but insufficient financial resources to pay for their transplant and subsequent immunosuppression. Methods: We propose that the cost differential between dialysis and transplantation in some countries would allow the exchange of kidneys between patient/donor pairs with immunological barriers to transplantation in a First World country with patient/donor pairs with financial barriers to transplantation in a developing world country. By extending first world quality healthcare to impoverished patients in the developing world, we reverse the practice of transplant tourism and shed light and transparency on the black market organ trade by acknowledging that a kidney has financial value, while simultaneously protecting the fact that exchanging a kidney for a kidney transplantation for a desired patient is an altruistic gift and not a commercial exchange. Results: A blood type (BT) O donor and a BT A, PRA 0% ESRD patient from the Philippines were unable to pay for dialysis or transplantation. A US non-profit paid for their evaluation and some dialysis in the Philippines. A NEAD chain was identified starting with a US BT A non-directed donor (NDD) with no match in the US KPD pool. The US NDD donated to the Filipino recipient, resulting in a BT O Filipino donor who simultaneously donated to continue the chain. To date the chain has resulted in eleven kidney transplants and an active bridge donor. Six recipients had Medicare and five recipients had Commercial insurance. The transplant cost (including NDD nephrectomy and donor complication insurance) for the Filipino recipient was paid for by a non-profit organization. An additional $50,000 was reserved for subsequent immunosuppression and donor/recipient follow-up in the Philippines. The savings from transplanting 10 U.S. patients compared with the cost of dialysis will exceed $3M over the next 5 years. and a list of unacceptable antigens. Among all registered patients, 30% were non-sensitized (PRA 0%), 29% had a PRA 1-79%, 41% were highly sensitized (PRA 80-100%) and 25% of patients had PRA 95-100% (Figure 1 ; top). Level of sensitization of patients on the active list was higher, as 47% had a PRA 80-100% and 35% had PRA over 95%. The sensitization was different among transplanted patients; 47% were mildly (PRA 1-79%) and only 24% were highly (PRA=80-100%) sensitized; 12% had a PRA over 95%. We analyzed patterns of sensitization against HLA-A, B, C, DR, DQ and DP antigens across different PRA levels (Figure 1 ; bottom). 
Perhaps most strikingly, nearly 50% of highly sensitized active match run patients are sensitized to DPB antigens and nearly 30% to DQA antigens. Finally, waiting times were calculated based on registration and transplantation dates as reported by the transplant centers. Notably, the level of sensitization affected the waiting time: non-sensitized recipients waited an average of 199 days; those with a PRA of 1-79% waited 231 days; those with a PRA of 80-100% waited 393 days; and those with a PRA of 95-100% waited 524 days. Conclusions: Although KPD registries were designed to accommodate highly sensitized patients, these patients accumulate on the waiting list. An increasing number of patients are sensitized to DPB and DQA; however, this is not reflected in their UNOS PRA and can result in false negative virtual crossmatches. Therefore, all HLA genes should be used for matching, especially in sensitized patients. As such, we plan to establish an acceptable mismatch program for highly sensitized patients to increase their chances of getting a kidney transplant. Our protocol was initiated in early 2012 but was not fully matured until early 2013. A cohort of patients from 7/1/2013 to 6/30/2014 who received the ERAS protocol (ERAS Cohort) is most representative for our study population (N=46), with at least 1 year follow-up period. Immunosuppression was thymoglobulin induction with Tacrolimus-based triple therapy. A similar cohort from 7/1/2009-6/30/2010 who did not receive the ERAS protocol (Non-ERAS Cohort) was used as a comparison group. LOS was categorized into six categories (2, 3, 4, 5, and >5 days) . Differences between cohorts were analyzed using chi-square tests. RESULTS: The median LOS for all donor types was 2 days in the ERAS Cohort compared to 5 days in the Non-ERAS Cohort (p<0.001). The median LOS for patients receiving living donor kidneys was 2 days in the ERAS Cohort compared to 4 days in the Non-ERAS Cohort (p=0.003). The median LOS for patients receiving deceased donor kidneys was 2 days in the ERAS Cohort cohort compared to 5 days in the Non-ERAS Cohort (p=0.01). Graft and patient survival was 100% at 1 year in both cohorts. CONCLUSION: Enhanced recovery after kidney transplantation and reduced LOS are feasible using a structured protocol. Additional, larger, studies are necessary to establish the role of ERAS in kidney transplantation. BACKGROUND: Early hospital readmission (EHR) following kidney transplantation (KT) is independently associated with graft loss and mortality. Prior work assumes that this association is constant over time. It is possible that the risk associated with EHR is different during the EHR hospitalization compared to the risk during post-EHR follow-up. We used USRDS data to study 56,076 adult Medicare primary first-time KT recipients from December 1999-October 2011. EHR was any hospitalization within 30 days of KT discharge. Cox proportional hazard models were used to estimate the association between death-censored graft loss, mortality, and EHR. Separate models were created for live and deceased donor recipients. All models were adjusted for age, sex, race, BMI, pre-emptive transplant, cause of ESRD, peak PRA, HCV, time on dialysis, HLA mismatch, recipient/donor weight ratio, donor race. Deceased donor models were also adjusted for pulsatile perfusion, CIT, terminal creatinine, donor hypertension and diabetes, ECD, DCD, and regional/ national sharing. 
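A minimal sketch of the Cox proportional hazards analysis described in the readmission abstract above, written with the Python lifelines package; the data file and variable names are placeholders rather than actual USRDS fields, and the covariate list is abbreviated:

import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical analytic file: one row per kidney transplant recipient
cohort = pd.read_csv("kt_recipients.csv")

covariates = [
    "ehr_30day",          # early hospital readmission within 30 days of discharge
    "age", "female", "black", "bmi", "preemptive", "peak_pra", "hcv",
    "dialysis_years", "hla_mismatch", "weight_ratio", "donor_black",
]

# Separate models for living- and deceased-donor recipients, as in the abstract;
# the deceased-donor model would add CIT, ECD, DCD, and related donor factors.
for donor_type, df in cohort.groupby("deceased_donor"):
    cph = CoxPHFitter()
    cph.fit(
        df[["years_to_graft_loss", "graft_loss"] + covariates],
        duration_col="years_to_graft_loss",
        event_col="graft_loss",   # death-censored graft loss; rerun with mortality as the event
    )
    print("deceased_donor =", donor_type)
    cph.print_summary()           # hazard ratios with 95% CIs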
To determine functional significance we then synthesized BK Dunlop strain large T antigen (LTA) peptide PYHFKYHEKHFANAI (313-327), which has previously been shown to be an immunodominant BK peptide. Peptide binding to class II HLA multiplex beads was done. The peptide bound weakly to DQ alleles but bound strongly to certain DR alleles. A total of 96 potential LKDs were interviewed (138 were invited; 19% declined, 11% were not interviewed before being told if they could donate); 39% were men and 87% whites. The mean age was 46 years, and 70% completed college or beyond. Half (47%) thought an initial attempt should be made to resolve disagreements by center-LKD discussion. Should disagreement persist, participants thought a final decision about donation should be made by: centers (35%), shared decision-making (23%), LKDs if the risks fell below a certain threshold (9%), LKDs guided by center advice (7%), and LKDs alone (21%). Demographic variables, relationship to the recipient, prior LKD evaluation, and knowing another recipient or LKD were not associated with a preference for LKD involvement in decision making. Many (29%) provided reasons both for and against a decision-making role for LKDs. Reasons supporting a center-based decision included: professional expertise (36%), emotional impartiality (21%), responsibility for donor safety (20%), and center liability (12%). Reasons supporting a donor-based decision included: donor autonomy (29%), closeness of the donor-recipient relationship (19%), the donor's ability to comprehend risks (11%), and the belief that centers were unaware of donors' values or recipients' needs (7%). BACKGROUND: Following kidney transplantation, 31% of recipients experience early hospital readmission (EHR) and readmission is independently associated with increased risk of graft loss and mortality. The incidence of readmission has not changed over time. It is unknown whether the association between EHR and adverse outcomes is changing over time. We used USRDS data to study adult Medicare primary kidney transplant recipients from December 1999 through October 2011. EHR was any hospitalization within 30 days of initial transplant discharge. Cox proportional hazard models were used to determine the association between EHR, death-censored graft loss, and mortality adjusting for age, sex, race, BMI, pre-emptive KT, cause of kidney disease, PRA, HCV, pulsatile perfusion, cold ischemia time, donor/ recipient weight ratio, HLA mismatch, donor race, terminal creatinine, donor hypertension ,donor diabetes, ECD, DCD, and regional/national sharing . The interaction between readmission and year was explored. RESULTS: Over time there has been no statistically significant change in the risk of mortality associated with readmission, during the readmission event (aHR 1.003, 95% CI: 0.96-1.05, p=0.9), but there has been a statistically significant 2% increase per year in the risk of mortality associated with readmission, following readmission discharge Transplant center report cards developed by SRTR are publicly available and utilized for regulatory oversight and insurance contracting. There are significant associations of performance oversight with changes in transplant processes of care and declines in transplant volume. 
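To clarify how the readmission-by-year interaction above translates into a "2% increase per year," the post-discharge mortality model can be written generically as

h(t \mid \text{EHR}, \text{year}, Z) = h_0(t)\,\exp\!\big\{\beta_{\text{EHR}}\,\text{EHR} + \beta_{\times}\,(\text{EHR}\times\text{year}) + \gamma^{\top}Z\big\},

so the hazard ratio associated with readmission in calendar year y is \exp\{\beta_{\text{EHR}} + \beta_{\times}\,y\}; an estimate of \exp\{\beta_{\times}\} \approx 1.02 corresponds to the reported 2% per-year increase in that hazard ratio.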
Our study aim was to quantify the incidence of low performance evaluations based on current MPSC and CMS standards We used data from six consecutive biannual SRTR Program-Specific Reports (January,2013 -July,2015 to evaluate the incidence of low performance ratings for 1-year graft or patient survival among US adult kidney transplant centers performing at least 10 transplants using publicly report SRTR outcome data. Among adult kidney programs, 32%(60/188) had ³1 low performance evaluation for one-year graft or patient survival based on current Bayesian criteria over the 3-year period. This proportion represented a 43% increase compared to current CMS standards 22%(40/188).Based on Bayesian criteria, the median differences in observed and expected graft and patient survival was -4.8% and -3.7% and median difference in observed and expected graft losses and deaths were 5.1 and 3.8 respectively for flagged PSRs. Based on current Bayesian(MPSC) criteria, one-third of adult kidney transplant centers performing ³10 transplants are identified as low performing in a three year time period in the US. Bayesian criteria have significantly increased low performing flagging relative to CMS criteria in this cohort. Given a primary goal of quality assurance, results question the value of identifying such a large proportion of centers as performance outliers with accompanying relatively small differences in observed and expected survival. Modification of performance criteria may both minimize the breadth of flagging, reduce risk adverse processes of care and potentially more efficiently allocate resources on centers with more dramatic differences in risk adjusted outcomes. Background: During allogeneic hematopoietic cell transplantation (alloHCT), nonhematopoietic cell IL-33 is augmented and released by recipient conditioning to promote Type 1 alloimmunity and lethal acute graft-versus-host disease (GVHD). Yet, IL-33 is highly pleiotropic and exhibits potent immunoregulatory properties in the absence of coincident pro-inflammatory stimuli. We tested if administration of IL-33 before alloHCT or "peri-alloHCT IL-33" could protect against development of GVHD by augmenting IL-33-associated regulatory mechanisms.Methods: For GVHD studies, recipient mice received repeated injections of recombinant IL-33 before total body irradiation (TBI) and alloHCT (bone marrow +/-T cells). Survival, clinical score, and weight were monitored. Changes in immune cell compartments were assessed by flow cytometry and the capacity of regulatory populations to suppress T cell proliferation was verified ex vivo. In related GVHD studies, recipient Treg were depleted from Foxp3-diphtheria toxin receptor (Foxp3 DTR ) mice by delivering DT concurrently with IL-33 administration.Results: IL-33 administration doubled recipient regulatory T cells (Treg) and increased suppressive myeloid cells, both of which persist following TBI. Importantly, peri-alloHCT delivery of IL-33 resulted in protection against lethal acute GVHD in the majority of recipients. ST2 expression is not exclusive to Treg and IL-33 expands innate immune cells with regulatory or reparative properties. However, selective depletion of recipient Foxp3 + cells concurrent with peri-alloHCT IL-33 administration accelerated acute GVHD lethality. 
IL-33-expanded recipient Treg were required for protection from GVHD by controlling macrophage activation and preventing accumulation of CD4+ and CD8+ effector T cells in GVHD target tissue. Conclusions: We demonstrate a protective capacity for peri-alloHCT administration of IL-33 and IL-33-responsive Treg in mouse models of acute GVHD. These findings provide strong support for the concept that the immunoregulatory relationship between IL-33 and Treg can be harnessed therapeutically to prevent GVHD after alloHCT for treatment of malignancy or induction of solid organ transplantation tolerance. Adipose tissue-derived mesenchymal stem cells (ASC) may represent a new strategy to prevent allograft rejection after kidney transplantation due to their immunomodulatory properties. In the present study, the effect of ASC on the chronic renal allograft rejection model was analyzed. The chronic rejection model was developed by performing orthotopic kidney transplantation using Fisher rats (F344) as donors and Lewis rats (LEW) as recipients, without immunosuppression. ASC were isolated from Lewis rats and expanded until the 4th passage. Rats that underwent transplantation were divided into 3 groups (n=6/group) and followed up for 6 months: Syngeneic (SYNG), untreated LEW rats receiving kidney from LEW rats; Allogeneic (ALLO), LEW rats receiving allogeneic kidney from F344; and ALLO+ASC, ALLO rats treated with ASC (3 doses of 1x10⁶, at 0, 1 and 3 months after transplantation). Blood pressure, urinary protein excretion, creatinine clearance, renal histology, immunohistochemistry for macrophages and T-cells, and qPCR for inflammatory cytokines were analyzed. Results are presented as mean±SEM; *p<0.05 vs SYNG, #p<0.05 vs ALLO. At 6 months, the ALLO group presented significantly increased levels of blood pressure and urinary protein excretion and decreased creatinine clearance. Treatment with ASC ameliorated all these parameters. ALLO animals developed significant interstitial fibrosis compared with the SYNG group, which was reversed by ASC treatment. In addition, ASC also provided amelioration of allograft inflammation, characterized by decreased inflammatory cells and cytokine expression. In the chronic allograft rejection model Fisher to Lewis, administration of ASC was effective in preserving kidney allograft function and reducing interstitial fibrosis and tissue inflammation. These findings may have important implications in clinical settings and need further investigation. Purpose: Pancreas transplants (tx) are increasingly performed in older patients with type 1 diabetes mellitus. This trend may be explained by the significantly higher immunologic graft loss rate in younger patients. The purpose of this study was to identify risk factors for pancreas txs in younger recipients and the impact of different immunosuppressive (IS) protocols on graft outcome. Methods: Using the IPTR/UNOS databases, we analyzed graft function and immunologic graft loss in 910 primary pancreas tx recipients under the age of 30 between 1/2005 and 12/2014. The majority of txs were performed in SPK (79%), followed by PTA (12%) and PAK (9%). Of note, in all 3 categories significantly more women were transplanted. Using uni- and multivariate models we assessed the impact of various induction and maintenance protocols on outcome adjusted for donor and recipient risk factors. Results: In recipients <30 yrs of age, pancreas graft function at 3 years posttx was 78% in SPK, 52% in PTA and 61% in PAK.
At 1 year posttx, the immunologic graft loss rates were 6% for SPK/pancreas (35% for SPK/kidney), 19% for PTA and 11% for PAK. When adjusted for the standard donor- and center-specific factors, the IS protocol had the highest impact on graft outcome. In SPK, the use of depleting antibodies (RR: 0.59 (0.40-0.86)) and tacrolimus in combination with MMF (RR: 0.52 (0.32-0.83)) resulted in the best outcome. Additional use of non-depleting antibodies had only a minimal impact, and steroid maintenance no impact at all. In PTA, the factor with the highest impact was the use of depleting antibodies (RR: 0.30 (0.11-0.82)) (1998-2015). In addition to AR reported in the first year, donor and recipient demographics, surgical technique, and immune factors were included as covariates. Outcomes were studied using univariate analysis; significant factors were analyzed in a multivariate model (Figure 1). Logistic regression analysis demonstrated that morbid obesity and low functional status conditionally impact risk for surgical complications with an OR of 2.8 [95% CI (1.1, 7.3)] as compared with patients with high functional status and BMI < 35 kg/m2. There was not a significant difference in occurrence of graft loss or death across the cohorts (Figure 2). While neither morbid obesity nor poor functional status alone impacts outcomes, the combined presence is associated with a significant increase in risk for surgical complications after renal transplantation. These patients may warrant more vigilance in the pre- and post-operative period to limit impact on patient morbidity and healthcare resources. Evaluation of metabolic function shows that failure to instigate aerobic respiration upon reperfusion relates to extensive mitochondrial damage (e.g., disorganized cristae and fragmentation). Pre-treatment of human kidney tissue with the mitochondrial stabilizing peptide SS-31 preserves mitochondrial function during simulated I/R (P<0.016). In conclusion, DGF is preceded by a profound post-reperfusion metabolic deficit. Strategies aimed at preventing DGF should focus on preservation of both mitochondrial integrity and optimal use of functioning anaerobic metabolic networks of the graft. A web-based software platform (BREEZE TRANSPLANT, MedSleuth, Inc.) automated ongoing monitoring of candidates on the waiting list. Information from this automated system augments routinely collected data, and frequently leads to clinical decision making. We report 6 months of automated monitoring of a portion of our waitlist. The automated questionnaire was designed to identify events that might sensitize the patient or trigger a re-evaluation of transplant candidacy, as well as update contact information, clinical parameters (height, weight, BMI), and assess Wait BACKGROUND: There is no published study comparing the significance of risk factors for graft outcomes associated with common induction regimens combined with a tacrolimus and mycophenolate maintenance regimen in kidney retransplants in US transplant centers. We retrospectively studied kidney-only retransplants in US adults (N=10,555) categorized according to induction: antithymocyte globulin (ATG) (N=7478), alemtuzumab (N=1390) and basiliximab (N=1687). Only cases maintained on tacrolimus and mycophenolate + steroids were included. We used Kaplan-Meier models to determine probabilities of patient survival and overall, death-censored and rejection-free graft survival.
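Referring back to the morbid obesity and functional status analysis above, the "conditional" effect can be made explicit with an interaction term in the logistic model (the BMI threshold is taken from the abstract's comparator of BMI < 35):

\operatorname{logit} P(\text{surgical complication}) = \beta_0 + \beta_1\,[\text{BMI}\ge 35] + \beta_2\,[\text{low functional status}] + \beta_3\,[\text{BMI}\ge 35]\times[\text{low functional status}],

so the odds ratio for patients with both characteristics versus high functional status with BMI < 35 is \exp\{\beta_1+\beta_2+\beta_3\} \approx 2.8, while neither exposure alone carried a significant effect.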
We utilized Cox models to assess risk factors for acute rejection and death-censored graft loss (DCGL Kidney transplantation is a well-established treatment option for ESRD patients with Hepatitis C virus (HCV). However, the optimal induction regimen for this population is uncertain, mainly because a stronger immunosuppression may increase HCV viral load and eventually hasten the progression to liver fibrosis. Methods: Using SRTR, we compared patient/graft survival, acute rejection in the 1 year post-transplant, and delayed graft function (DGF) in 10,617 HCV+ recipients between 1999-2014 by induction regimen. We used shared frailty Cox models and mixed-effects logistic regressions to adjust for the differences among transplant centers.Results: Factors associated with use of depleting agents were recipient age<40, recipient African American race, PRA>80, longer time on dialysis, and living donor. Regimen choice was also largely explained by center-level variations (ICC=0.41). The hazard of death was significantly lower among those who received anti-thymocyte globulin (ATG) than the no induction group (aHR= 0.80 0.88 0.97 ). Alemtuzumab (ALM) group was also at a slightly lower hazard of death, but the difference was not statistically significant (aHR= 0.68 0.83 1.02 Introduction: Donor organ shortages have led to increased interest in finding new approaches to recover organs from extended criteria donors (ECD). Normothermic ex-vivo liver perfusion (NELP) has been proposed as a superior preservation method to reduce ischemia reperfusion injury (IRI), pre-condition sub-optimal grafts and treat ECD livers so that they can be successfully used for transplantation. The goal of this study was to investigate the efficacy of modified NELP to re-establish physiologic parameters and metabolic functions in discarded human livers that were rejected for transplantation. Methods: Seven human livers that were considered not-suitable for transplantation were put on the NELP system for eight hours. Blood samples were taken at hourly intervals and checked for blood gasses, markers of hepatic injury, oxygen extraction ratio, triglyceride (TG) and LDL (in case of steatotic livers). Biopsies were taken at the beginning and the end of the experiments. Results: NELP stabilized transaminases and in all ECD livers at the end of experiments: AST (1116±1363U/L) and (ALT: 1330±720U/L). This was accompanied by significant improvement in bile (4±2ml/hour) production and a remarkable decline in lactate (<0.05 mmol/L) and INR values (3.1±1.1). pH, Na + , K + , cl -, pCO 2 , bicarbonate and glucose levels were stabilized at physiologic levels. Oxygen extraction ratio was recovered (>0.8) in all livers during first 30 minutes of NELP and maintained until the end of study. NELP also, provided physiological vascular flows and pressures. Histology demonstrated normal parenchymal architecture and minimal to complete lack of IRI in all ECD livers. Figure 1 summarize some important data. Conclusion: The modified NELP circuit preserved hepatocyte architecture, recovered synthetic functions and hepatobiliary parameters of ECD grafts without additional injury to the graft. This approach has the potential to increase the donor pool for clinical transplantation. Conclusion Outcomes of recipients of ≥70grafts can be optimized to equal outcomes of younger grafts. These grafts appear ideal for patients with low MELDs and exception points and will result in excellent outcomes. 
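For the center-level clustering quantified above (ICC = 0.41 for induction regimen choice), the intraclass correlation in a random-intercept logistic model is usually computed on the latent-response scale as

\text{ICC} = \frac{\sigma^2_{\text{center}}}{\sigma^2_{\text{center}} + \pi^2/3},

where \sigma^2_{\text{center}} is the between-center variance and \pi^2/3 \approx 3.29 is the residual variance of the standard logistic distribution; an ICC of 0.41 therefore implies \sigma^2_{\text{center}} \approx 2.3. Whether the authors used exactly this formulation is not stated; it is given here only as the conventional definition.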
Given the large number of stoke deaths in patients ≥70, the yield rate of such grafts should be maximized and dis-incentives removed in order to help quell the organ shortage crisis. Introduction. On June 18, 2013, the OPTN implemented regional sharing of adult donor livers for candidates with allocation MELD/PELD scores of 35 and greater. As part of this policy, after offers to Status 1A/1B candidates, adult donor livers are offered nationally to candidates awaiting a liver and intestine before being offered to local MELD/PELD candidates with scores of 29 and greater. We analyzed the impact of this policy on liver-intestine candidates. To assess the impact of the policy, we calculated transplant rates and mortality rates in two eras for both adult and pediatric candidates. The pre-era included listings from 6/18/2011 -6/17/2013. The post-era included listings from 6/18/2013 -6/18/2015. Each era consisted of 730 days, and the "at risk" period for each candidate was truncated at the end of each era where necessary. A competing risks analysis was performed to estimate the probability of transplant and the probability of death within 12 months of being listed simultaneously for both the liver and the intestine. Candidates removed for "too sick" were counted as deaths.Results. More candidates were simultaneously listed for a liver-intestine in the postera (310 vs 272), and there were more liver-intestine transplants in the post-era (154 vs 83). Transplants were performed at 17 centers in the pre-era, and at 16 centers in the post-era. The most common diagnosis at transplant in the post-era was short gut syndrome (62%), which occurred proportionally less often than in the pre-era (67%). More adult (49.3% vs 31.3%) and Hispanic (23.4% vs 10.8%) recipients were transplanted in the post-era. The distribution of gender was unchanged. The probability of a liver-intestine transplant within 12 months was significantly greater in the post-era (46% vs 30%, p < 0.05). The probability of death within 12 months was slightly lower in the post-era (11% vs 12%) but did not reach statistical significance. The Impact of Share 35 on Liver Allocation and Utilization: One DSA's Experience. J. Orlowski, C. Muse, J. Whaley, R. Squires. LifeShare of Oklahoma, Oklahoma City, OK. Background: In June 2013, U.S. liver allocation changed from a local-regionalnational model to "Share 35", broadening sharing to a regional level based upon recipient MELD score. While the policy was extensively modelled prior to implementation, one organ procurement organization (OPO) sought to study the impact of Share 35 on its donation service area (DSA), which is a single state, 3.81 million population, and includes 3 established liver transplant centers (2 adult, 1 peds). Method: A review of all deceased donors (DBD and DCD) was conducted for local and import donors in two cohorts, one 28.5 month cohort between 6/15/2013 and 10/31/2015 (Post-Share 35) and one 21.5 month cohort between 9/1/2011 and 6/14/2013 (Pre-Share 35). Liver recovery data from within the DSA, import/export activity, and utilization data were compared. Discussion/Conclusions: While the DSA significantly increased both total deceased donors recovered and total TXL from the DSA, a number of less favorable trends were noted. Overall, the percentage of local donors yielding a TXL decreased slightly after institution of broader sharing. Export and import activity were both dramatically increased to achieve a net decrease in livers transplanted within the DSA. 
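For the competing-risks estimates in the liver-intestine analysis above (probability of transplant versus probability of death within 12 months of listing, with removal for "too sick" counted as death), the quantity estimated for each event type k is the cumulative incidence function

\text{CIF}_k(t) = P(T \le t,\ \text{event}=k) = \int_0^t S(u^-)\,h_k(u)\,du,

where S(u^-) is the probability of remaining on the list just before time u and h_k is the cause-specific hazard of event k; evaluated at t = 12 months this yields the reported 46% (transplant) and 11% (death) in the post-era. The specific estimator (for example, Aalen-Johansen) is not stated in the abstract.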
The sum of these trends is that more transportation expense is being incurred to transplant fewer livers overall and within a DSA that has dramatically increased donation. We recommend careful study of Share 35's impact on overall utilization of TXL and costs associated with broader sharing on a DSA level before further broadening liver allocation. Results: Supply-demand ratios varied widely across the 52 donation service areas (DSAs) with active liver programs in 2013 ( Figure 1 ). The existing 11 regions had a 2.5-fold difference in the ratio of eligible deaths to waitlist candidates with M/P > 15 ( Background: Microscopic hematuria is not uncommon in potential kidney donors. After excluding urological causes, most of these donors require kidney biopsy to exclude glomerular lesions. We evaluated biopsy findings among donors with isolated microscopic hematuria and assessed short-term outcomes of their prospective recipients. Methods: kidney donors between Jan 2010 & Jan 2015 who had isolated microscopic hematuria were included in the study. All of them underwent native kidney biopsies which were examined by light microscopy,immunofluorescence and electron microscopy. Predonation characteristics were compared between donors with normal & abnormal biopsies. Short-term outcomes of donors were compared to 27age matched nonhematuric donors as a control group at 3 month post donation.Results: Out of 750 donors, 27 donors (3.6%) were found to have isolated microscopic hematuria & underwent kidney biopsy. Mean age was 32.6±8,all of these donors had positive dipstick hematuria in urine & minimum of 3 red blood cells/hpf on urine microscopy. Eight (29.6%) biopsies showed histopathological abnormalities of which, thin basement membrane disease (n=6, 22.2%) was the commonest lesion followed by IgA nephropathy (n=2, 7.4 %). Donors with abnormal biopsy findings were excluded from donation. The remaining (n=19, 70.4%) had no significant pathology and were permitted to donate. Except for 2 donors (7.4%) who were excluded due to other medical/recipient related issues. There was no significant difference in short-term outcomes between hematuric and nonhematuric donors accepted for donation at 3 month post donation. The recipients of kidneys from hematuric donors with no histopathological abnormalities had a mean SBP of 121 ± 9.5, mean DBP 74 ±8.5 and mean serum creatinine of (75.6µmol/l±21.3) at three month follow up with no rejection episodes. Conclusions: Our study showed that 29.6 % of our donors with isolated microscopic hematuria had abnormal histopathology on renal biopsy. Isolated asymptomatic microscopic hematuria justifies extensive work up including renal biopsy to identify donors who may have underlying renal pathology before accepting them for donation. The Background: The Live Donor Champion (LDC) program is a clinical program offered to kidney waitlist candidates at our high volume transplant center. The six-month program serves as educational and advocacy training for transplant candidates and friends and family members chosen to serve as advocates, or "Live Donor Champions", on the candidate's behalf. The goal of the program is to train candidates and their LDC to increase awareness of live donation and to identify potential live donors. Methods: We studied 104 adult kidney transplant candidates that have participated in the LDC program at Johns Hopkins Comprehensive Transplant Center between October 2013 and May 2015. 
We compared donor referrals for candidates that participated in the program with matched controls from our waiting list. We matched on age at listing, listing date, sex, race, and ABO blood type. We quantified the association between program participation and having at least one donor referral using logistic regression. We quantified the association between program participation and the number of donor referrals using Poisson regression. In a subset of participants (n=44) we quantified changes in knowledge of and comfort discussing live donation throughout participation in the program. Results: Among LDC participants, there were a total of 138 donor referrals, while matched controls had 82 donor referrals (p=0.002). LDC participation was associated with 2.6-fold higher odds of having at least one donor referral (95% CI 1.5-4.7, p<0.001). LDC participation was also associated with a 1.9-fold higher incidence of donor referrals (95% CI 1. Purpose: African Americans (AA) are less likely to undergo LD kidney transplantation. Reasons may include decreased health literacy, increased obesity, or hypertension. We sought to determine which stage of the donor evaluation process results in greatest loss of AA candidates. Methods: Review of data from screening donor questionnaire(LDQ), lab testing and full-day evaluation for all potential donors between 2012 and 2015. Results from AA and non-AA candidates were compared using JMP Pro 11, and a chi-square test to assess for group differences. Results: 530 potential donors were identified; 220 AA(42%), 310 non-AA(58%). Final donation rates were AA=9.1%, non-AA=19% (p=0.015). LDQ results eliminated 154 AA(70%) vs 179(57.7%) non-AA donors(p=0.004). Hypertension eliminated more AA donors (7.3%) than non-AA(1%)(p=0.0001). Lab completion rates were similar (AA=93.9%, non-AA=90.1%, NS) as were acceptable labs (AA=72.6%, Non-AA=78.9%, NS). Of those eligible for full evaluation, AA and non-AA completion rates were similar (AA=91.1%, non-AA=97.8%, NS). AA donors were more likely to be family members (80% vs. 69%,p=<0.001)and AA parents were less likely to qualify to donate to their children ( While educating dialysis patients about transplant increases its pursuit and receipt, there is variation in educational practices that affects wait-listing rates. There are also dialysis staff-and center-level barriers that reduce the effectiveness of transplant education. We examined the effectiveness of various transplant education practices on increasing dialysis center wait-listing rates in 1,695 U.S. adult, chronic dialysis centers, and determined if the absence of barriers modified the education's effectiveness.Representatives from centers in all 18 end-stage renal disease (ESRD) networks were surveyed between 2011-2015 about their use of 8 transplant education strategies (Table) and the absence or presence of 3 educational barriers: having insufficient time to educate; an administration unsupportive of providing transplant education; and having low transplant knowledge (assessed via a 12-question measure). Centers' wait-listing rates for the 12 mo after the survey were calculated using linked United States Renal Data System data. 16,644 patients who initiated dialysis within 6 mo of the survey at participating centers were included. Adjusting for multiple staff and center characteristics, negative binomial regression was used to assess the effect of each education strategy individually on wait-listing rates in a main effects model. 
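A minimal sketch of the main-effects model just described, treating each center's 12-month wait-listing count as a negative binomial outcome with the number of incident patients as the exposure, is shown below; the file and variable names are placeholders and the adjustment set is abbreviated (the abstract adjusts for multiple staff and center characteristics):

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# One row per surveyed dialysis center (hypothetical analytic file)
centers = pd.read_csv("dialysis_center_survey.csv")

# Negative binomial GLM with a log link; log(incident patients) enters as an offset,
# so exponentiated coefficients are incidence rate ratios (IRRs) for wait-listing.
nb_fit = smf.glm(
    "n_waitlisted_12mo ~ gives_tx_center_phone + living_donor_education"
    " + recipient_contact_offered + for_profit + center_size",
    data=centers,
    family=sm.families.NegativeBinomial(),
    offset=np.log(centers["n_incident_patients"]),
).fit()
print(np.exp(nb_fit.params))       # IRRs for each education practice
print(np.exp(nb_fit.conf_int()))   # 95% confidence intervals

An interaction between a given practice and the absence of a given barrier can then be added to the formula to reproduce the moderation analysis described next.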
Next, we added interaction terms between each educational strategy and the absence of each barrier to determine whether education was more effective in environments without barriers. Irrespective of center environment, centers that gave out transplant center phone numbers, education for living donors, and offered an opportunity to talk to a transplant recipient had higher wait-listing rates (Table) . Wait-listing rates were increased in centers supportive of transplant education.Multi-level interventions to develop a "pro-transplant" culture in dialysis centers, prepare staff to educate about transplant, and discuss and disseminate transplant education with dialysis patients while also referring externally, can increase the rates of wait-listing nationwide. Purpose: Few studies have been conducted on how pediatric clinicians cope with patient loss, and there is sparse literature to guide resource allocation specifically among pediatric transplant teams. The purpose of the present study is to determine a) how personnel on pediatric liver transplant teams cope with patient death and b) to offer recommendations for quality improvements. Methods: With IRB approval, a Qualtrics survey link was sent to the medical director of 25 randomly selected pediatric liver transplant centers from different regions in the US. Medical directors were requested to send the link to all physicians, nurses and support personnel on their team. The survey included questions about available resources, a needs assessment, and standardized measures of adjustment (Maslach Emotional Exhaustion Scale, EE, and the Bereavement Experiences Scale, BES). Results: Completed surveys were received from 72 respondents (32 physicians, 32 nurses, 8 social workers). The majority reported working in pediatric transplant for at least 6 years (58.9%) and experiencing 1-2 (59.7%) or 3-5 (31.9%) deaths per year. Most described having no formal training in coping with patient loss (83.3%); overwhelmingly (97.2%), respondents thought that formal debriefing procedures would be helpful, although this was routine for just 50%. Respondents frequently offered informal support to teammates (81.4%), but 29% reported that they did not receive any support. Mean scores on the EE (2.92) and the BES (1.38) were comparable to normative data, but nurses, 3.28 (SD=1.18), and social workers, 3.64 (SD=.08), reported significantly more EE than physicians, 2.39 (SD=1.02), F=7.20, p=< .01. Additionally, respondents who have formal debriefing procedures reported significantly less EE, 2.60 (SD=1.21), than those who do not have debriefing, 3.24 (SD=1.05), t=-2.40, p=.02. Conclusions: Based on a multicenter report, there are gaps in formal resources available to pediatric liver transplant team members after experiencing patient death. Although overall team members adjust well, these findings suggest that provision of routine debriefing after loss is desired and may be associated with better coping. Further research is needed to determine if relatively higher rates of emotional exhaustion among nurses and social workers is associated with loss and patient care responsibilities specific to these roles. Surgery, Transplant Center, Cleveland Clinic Foundation, Cleveland, OH. Purpose: Ischemia reperfusion injury is a major risk factor for short and long-term functional survival after intestinal transplantation. Normothermic machine perfusion is an emerging biotechnology designed to prevent the injury associated with cold storage. 
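For reference on the perfusion indices reported in the results that follow, vascular resistance in dyn·s/cm⁵ is conventionally derived from mean perfusion pressure and flow as

R = 80 \times \frac{\text{MAP} - P_{\text{venous}}}{Q},

with pressures in mmHg and flow Q in L/min (the factor 80 converts mmHg·min/L to dyn·s/cm⁵). The abstract does not state the exact formula used or whether the per-kilogram flow was first converted to absolute flow, so this is only the standard hemodynamic definition.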
This study is the first to describe an experimental model of normothermic ex-vivo perfusion of the intestine. Such an innovative approach is critical for further improvement in clinical outcomes after transplantation of this uniquely vulnerable hollow organ. Methods: The intestines of 10 cardiac-death pigs were retrieved, flushed with UW solution, and perfused with a washed pig RBC-based colloid solution (hematocrit 10-15%) at 37°C for 6-12 h. The perfusion circuit consisted of a pulsatile pump, heat exchanger, oxygenator, and organ basin. The superior mesenteric artery was cannulated, with free drainage of the vein. Mean arterial pressure was 80-100 mmHg with a flow of 0.010-0.014 L/min/kg. A peptide-based elemental formula was used for luminal nutrition. The perfusate was analyzed for pH, PO2, PCO2, and glucose. Results: Four intestines were used to establish the model (G-1) and the remaining six constituted the study group (G-2). Warm and cold ischemia times were 30 to 58 min and 90 to 360 min, respectively. G-2 showed more active peristalsis than G-1, with minimal edema and better perfusion indices. Vascular resistance was significantly (P=0.05) lower (97 ± 25 vs 203 ± 40 dyn·s/cm^5), with higher (P=0.02) pH (7.22 ± 0.04 vs 7.0 ± 0.06) and better (P=0.01) O2 consumption (330 ± 57 vs 107 ± 23 ml/min). Interestingly, enterocyte glucose production was significantly (P=0.04) unclear.

The purpose of this study was to test the hypothesis that Th17-mediated alloimmune responses play a major role in the pathogenesis of intestinal transplant (ITx) rejection in humans. B. We obtained ITx allograft biopsies from recipients with acute cellular rejection (ACR), as well as from a control ITx cohort with normal biopsies. Phenotypic analysis of cells was conducted by polychromatic flow cytometry and IHC. Graft-infiltrating cells were flow-sorted, and real-time PCR (rtPCR) was performed to investigate effector cytokine and transcription factor profiles. C. Our previous studies suggested that ITx allograft rejection may be critically mediated by CCR6-expressing Th17 cells, as indicated by markedly elevated expression of a Th17-related transcription profile in ACR versus pre-rejection control biopsy samples. Based on this, we tested the hypothesis that CCR6+ Th17 cells truly mediate ITx allograft rejection by analyzing CD4+/CCR6+ T cells in ACR versus healthy control recipients by flow cytometry. Importantly, we found a significant increase in CCR6+ Th17 cells in rejection versus control biopsies (55.6% vs 26%, p<0.05). This Th17 phenotype was specific for allograft rejection, since recipients with active allograft infections did not show such an increase in Th17 cells. Further subset analyses of flow-sorted CD4+/CCR6+ cells confirmed high expression levels of the hallmark transcription factor RORα and the signature effector cytokine IL-17, corroborating that IL-17-producing CCR6+ cells drive ITx rejection. Finally, we tested the hypothesis that proinflammatory dendritic cells induce Th17 responses in ITx by studying flow-sorted intestinal CX3CR1+ myeloid dendritic cells, which are implicated in regulating proinflammatory T-cell alloimmune responses. We found a proinflammatory Th17-related mRNA profile in the flow-sorted dendritic cells, as indicated by high expression of IL-1β, IL-6, IL-23, and TNFα, suggesting that proinflammatory CX3CR1+ dendritic cells induce Th17-mediated responses to allograft rejection.
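As an illustration of the group comparison just described (CCR6+ Th17 frequencies in rejection versus control biopsies), the sketch below compares hypothetical per-biopsy frequencies with a nonparametric Mann-Whitney U test. The abstract does not state which test the authors used, and every value below is invented for illustration only.

```python
# Minimal sketch of a per-biopsy frequency comparison, assuming hypothetical
# %CCR6+ of CD4+ T cells measured by flow cytometry in each biopsy.
from scipy.stats import mannwhitneyu

acr_biopsies = [52.1, 61.3, 48.7, 58.9, 55.0]      # hypothetical ACR biopsies
control_biopsies = [24.5, 27.8, 22.9, 29.1]        # hypothetical normal controls

stat, p_value = mannwhitneyu(acr_biopsies, control_biopsies,
                             alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, p = {p_value:.3f}")
```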
D. In conclusion, this study shows that Th17-mediated alloimmune responses play a significant role in intestinal allograft rejection in humans, which is of high clinical significance.

Aim: We reviewed conditional graft survival and short- and long-term improvements in children receiving rATG (rabbit anti-thymocyte globulin) immunosuppression for intestine transplantation at a single center over 15 years. Methods: All children undergoing primary intestine transplantation under rATG and non-rATG immunosuppression were analyzed. rATG immunosuppression consisted of 5 mg/kg rATG/Tacrolimus/Prednisolone (n=126); other immunosuppression consisted of Tacrolimus/Prednisolone (n=50), Tacrolimus/Prednisolone/Daclizumab (n=23), Tacrolimus/Prednisolone/Cyclophosphamide (n=16), Alemtuzumab (n=21), and Tacrolimus/Prednisolone/Basiliximab (n=1). Patients with intact grafts at one year were followed for subsequent long-term outcomes of patient and graft survival, retransplant outcomes, and significant morbidities. Results: 242 children underwent primary intestine transplantation between 1990 and 2015: isolated small bowel (n=94, 39%), liver and small bowel (n=109, 45%), modified multivisceral (n=7, 3%), and multivisceral-type transplantations (n=32, 13%). The major causes of intestinal failure were gastroschisis (n=62, 26%), volvulus (n=51, 21%), necrotizing enterocolitis (n=28, 12%), pseudo-obstruction (n=30, 12%), intestinal atresia (n=23, 10%), and microvillus inclusion disease (n=17, 7%). Conditional graft survival at 15 years for the rATG and non-rATG groups is illustrated in Figure 1. Subsequent outcomes for patients with intact grafts at one year, stratified by rATG and non-rATG immunosuppression, are shown in Table 1.

Intestinal biopsies were obtained after recent ITx and followed longitudinally. Phenotypic analysis of cells was conducted by polychromatic flow cytometry and IHC. Graft-infiltrating lymphocytes were flow-sorted and rtPCR was performed to investigate effector cytokine and transcription factor profiles. C. First, we calculated the incidence of gram-negative rod (GNR) bacteremia in the 6-month period following ITx. In the last 5 years, early postoperative GNR bacteremia occurred in 31% of 85 ITx recipients. Given the essential role of ILCs in antimicrobial defense, we hypothesized that the pathogenesis of GNR bacteremia and translocation after ITx is due to ILC dysregulation. To test this hypothesis, we studied serial ITx biopsies longitudinally from the day of transplant by flow cytometry and IHC. Surprisingly, we found an almost complete absence of protective ILC subsets in all recipients as early as 4 h after reperfusion, compared to healthy ITx recipients at least 6 months following ITx (0.145% vs 4.96%, p<0.001). Importantly, further subset analyses by flow cytometry and rtPCR of flow-sorted ILCs in the healthy cohort revealed that protective ILC subsets were present, as shown by expression of NKp44, CD117, CD127, and CCR6. Further mRNA transcription profile studies revealed expression of the hallmark transcription factor RORγt, as well as the signature cytokine IL-22, both of which have been shown to play key roles in antimicrobial defense. These findings suggest that dysregulation of protective ILC populations in the immediate postoperative period contributes to infectious complications secondary to attenuated antimicrobial defenses. D. This study shows that the ILC population immediately following ITx is dysregulated and regenerates over time.
The study also shows that ILC dysregulation after ITx plays a role in the pathogenesis of infectious complications, which was previously unknown and may be of high clinical relevance.

The standard approach to informed consent for high-KDPI (KDPI 86-100%) deceased donor kidneys (DDK) relies on anticipated graft half-life and may not provide candidates with an accurate assessment of potential outcomes. We evaluated 6-month outcomes among DDK recipients from different donor allocation groups: ideal (KDPI 0-20%), standard (KDPI 21-85%), and high-KDPI kidneys, and correlated these with long-term patient survival. All adult DDK recipients between January 1, 2005 and December 31, 2012 were identified in the SRTR database. KDPI was calculated using 2013 as a reference. We determined 6-month eGFR by CKD-EPI and death-censored graft loss by KDPI group. We considered as a suboptimal outcome any eGFR <30 ml/min/1.73 m² or graft failure not due to recipient death at 6 months; death with graft function was not deemed a suboptimal outcome. We analyzed 78,022 DDK recipients after excluding 383 recipients lost to follow-up and 2.9% lacking a 6-month creatinine. Graft outcome by KDPI at 6 months is shown in Table 1, and Figure 1 shows the distribution of recipient CKD stage by KDPI group. Recipients of a high-KDPI kidney who had CKD stage 4 or 5 or death-censored graft loss at 6 months had patient survival reduced by about 15% and 28%, respectively. Most high-KDPI DDK recipients did reasonably well, but 25% of them had poorer outcomes. The effect is sustained and portends decreased patient survival. Candidates considering high-KDPI organs should be better informed of the risks of accepting these organs versus the potential benefit of shortened waiting time.

Figure 1. Initial risk of graft loss was similar between KDPI groups, though early post-transplant (<6 months) the rate of decline in risk was less pronounced among KDPI >85% organs. The long-term risk (>1 year) remained higher among KDPI >85% organs, likely from increased background risk. Across eras, there was a decrease in initial and background risks, most pronounced among KDPI >85% organs (Figure 2).

CTOT-16 tested the hypothesis that belatacept used as maintenance immunosuppression (IS) in kidney transplantation would allow the long-term avoidance of CNI and corticosteroids. Methods: Primary renal transplant recipients who were EBV seropositive and without DSA or a positive CXM were randomized to 3 groups. IS regimens for each group are depicted in Table I. Endpoints included eGFR, the incidence and severity of rejection, and safety measures. Results: Due to concerns about increased rates of acute rejection (AR), enrollment in group 3 was halted in August 2014 and the entire study was stopped in April 2015, after 69 patients had been randomized. Groups were comparable with respect to demographics. The number of patients enrolled in each group and selected outcome measures are shown in Table II. As noted, rejection rates were increased in the groups receiving CNI-free, belatacept-based long-term IS regimens. No Banff grade II or III rejections occurred in groups 1 or 3, while group 2 had 4 grade IIA and 2 grade III rejections. Recurrent episodes of AR were seen in 2 patients (1 each in groups 1 and 2). Two subjects experienced AMR (one each in groups 1 and 2). Conclusion: The increased rates of AR observed in groups 2 and 3 suggest that belatacept may not provide adequate immunosuppression to allow avoidance of CNI (at least at relatively early time points) and corticosteroids.
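For reference, the 6-month eGFR in the high-KDPI analysis above was determined by CKD-EPI; the sketch below implements the standard 2009 CKD-EPI creatinine equation, the version in use during that study era. It is a textbook implementation for illustration, not the authors' code.

```python
# 2009 CKD-EPI creatinine equation (illustrative implementation).
# Inputs: serum creatinine in mg/dL, age in years, sex, and (per the 2009
# equation) Black race. Output: eGFR in mL/min/1.73 m^2.
def ckd_epi_2009(scr_mg_dl: float, age: int, female: bool, black: bool) -> float:
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = (141
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.209
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr

# Hypothetical example: a 6-month creatinine of 2.6 mg/dL in a 55-year-old
# non-Black man gives an eGFR below 30 mL/min/1.73 m^2, i.e. a "suboptimal
# outcome" under the definition used above.
print(round(ckd_epi_2009(2.6, age=55, female=False, black=False), 1))
```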
Additional analyses are ongoing to identify clinical or laboratory factors that may define high- or low-risk groups for treatment with a belatacept-based, CNI-free regimen, and to assess the long-term impact of CNI avoidance on renal function and cardiovascular and metabolic risk profiles.

Purpose: Neutrophils mediate primary lung allograft dysfunction (PGD), the predominant cause of early graft loss following lung transplant. The mechanisms of neutrophil infiltration remain incompletely characterized. We show that donor-derived, nonclassical, endothelial-bound CX3CR1+ CCR2− Ly6Clow monocytes (Ly6Clow M) mediate neutrophil infiltration and PGD. Methods: Allogeneic single lung transplant was performed using wild-type (WT) or CX3CR1 knockout (KO) mice. Intravascular (IV) clodronate liposomes (Clo-lip) were used to deplete all monocytes, while anti-CCR2 was used to selectively deplete classical CCR2+ Ly6Chi monocytes. Neutrophil infiltration was analyzed by intravital 2-photon microscopy and multichannel flow cytometry. PGD was determined by arterial blood gas and histology. Results: Intravascular Ly6Clow M represented 2-5% of lung myeloid cells in donor lungs and persisted despite standard flushing techniques. IV Clo-lip eliminated all Ly6Clow M without affecting lung-resident macrophages, dendritic cells, or interstitial classical monocytes. Allograft neutrophil infiltration was abrogated by depletion of Ly6Clow M in donors (Fig 1A). At 24 hours, there was a 2-fold reduction in BAL neutrophils (control: 3.0±0.8×10^5 vs Clo-lip: 1.4±0.3×10^5, n=8) and protection against PGD (Fig 1B&1C). Reconstitution of donor lungs with WT, but not MyD88-TRIF KO, Ly6Clow M (which lack TLR signaling) restored allograft neutrophil infiltration. CX3CR1 deletion in donors led to decreased circulating Ly6Clow M and a 2-fold decrease in neutrophils (CX3CR1 KO 0.7±0.2×10^6 vs WT 1.3±0.2×10^6, n=4). Depletion of donor classical monocytes with anti-CCR2 had no effect on neutrophil infiltration.

DEPTOR, a recently described modulator of PI-3K/mTOR signaling, is expressed by CD4+ T cells and functions to modulate CD4+ Teff cell differentiation. Upon CD4+ T cell activation, intrinsic DEPTOR is degraded, which in turn decreases its negative regulation of Teff responses. We also find that forced overexpression of DEPTOR in CD4+ cells stabilizes Foxp3 expression and increases Treg function in vitro. Here, we used a doxycycline (dox)-inducible DEPTOR transgenic mouse (rtTA+/+ DEPTOR+/+, iDEP) to evaluate its effect on alloimmune Teff/Treg responses. Fully MHC-mismatched BALB/c (H-2d) hearts were transplanted into C57BL/6 or iDEP recipients (both H-2b). Administration of dox to iDEP recipients resulted in marked overexpression of DEPTOR within CD4+ cells and was associated with a significant (p<0.001) prolongation of graft survival (MST 35 days, n=5) vs. WT recipients (MST 7 days, n=14). iDEP recipients (day 6) had a lower frequency of CD4+ TEM cells (CD44high CD62Llow) and increased numbers of CD4+Foxp3+ Tregs vs. WT recipients. CD4+ T cells from iDEP transgenic mice were also adoptively transferred into BALB/c heart-transplanted Rag2−/− IL2Rγ−/− mice (H-2b) two days post-transplant. Treatment of these recipients with dox again resulted in a significant prolongation of graft survival (MST 37 days, n=5, vs. 16 days in controls, n=6; p<0.01). These findings indicate that overexpression of DEPTOR in CD4+ cells is immunosuppressive.
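To illustrate the kind of graft-survival comparison reported above (iDEP vs. WT recipients, MST 35 vs. 7 days), the sketch below fits Kaplan-Meier curves and runs a log-rank test with the lifelines library. The survival times are invented to match the reported group sizes and medians, and the abstract does not specify which statistical test was actually used.

```python
# Hypothetical sketch of a graft-survival comparison; not the study's data.
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

idep_days = [28, 33, 35, 41, 52]                          # n=5, median 35
wt_days = [5, 6, 6, 6, 7, 7, 7, 7, 7, 8, 8, 8, 9, 10]     # n=14, median 7
idep_events = [1] * len(idep_days)                        # 1 = graft rejected
wt_events = [1] * len(wt_days)

kmf = KaplanMeierFitter()
kmf.fit(idep_days, event_observed=idep_events, label="iDEP + dox")
print(kmf.median_survival_time_)                          # median survival time

result = logrank_test(idep_days, wt_days,
                      event_observed_A=idep_events,
                      event_observed_B=wt_events)
print(f"log-rank p = {result.p_value:.4f}")
```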
We next administered anti-CD25 (PC-61, 250 µg on days −5, −2, and +2) to iDEP recipients, which resulted in >70% depletion of CD4+Foxp3+ Tregs, and evaluated graft survival. Surprisingly, the graft-prolonging effect of forced DEPTOR overexpression was completely abrogated in these Treg-depleted recipients (MST 9 days, n=5). Finally, we treated recipients of fully MHC-mismatched allografts (BALB/c into C57BL/6) with MLN4924 (60 mg/kg BID × 5 days) as a pharmacologic approach to inhibiting DEPTOR degradation and found a similar prolongation of graft survival (MST 18 days, n=5, vs. 7 days in controls, n=3; p<0.01). Collectively, these findings identify DEPTOR as a novel cell-intrinsic immunomodulatory protein in CD4+ T cells that enhances regulatory T cell function following transplantation. These findings raise the intriguing possibility that targeting DEPTOR degradation has promise as a future therapeutic.

Purpose: Despite being a "perfect match," HLA-identical kidney transplants (HLA-id KTx) are rejected without sufficient immunosuppression (IS) and hence can also benefit from tolerance induction. We tested whether DHSC infusion could achieve operational tolerance in living-donor HLA-id KTx. Methods: Twenty HLA-id KTx recipients were treated with two doses of perioperative Alemtuzumab and were then given 4 infusions of DHSC over 9 months. Initial IS with tacrolimus/mycophenolate was converted to sirolimus/mycophenolate at 3 months, before the final 3 DHSC infusions. IS was discontinued entirely by 2 years in recipients without complications. Recipients were designated tolerant (Tol) if protocol biopsies after one year off IS (3 years post-op) showed no rejection and renal function was normal. Results: Microchimerism was observed in peripheral blood (PBMC) and iliac crest marrow early after DHSC infusions but disappeared by one year. Five of the 20 originally enrolled were removed from the protocol because of unexpected original disease recurrence (n=3), noncompliance (n=1), or development of an unexpected pre-transplant B cell crossmatch (n=1), but were followed as intent-to-treat. Of the remaining 15, 7 were designated Tol; of these, 5 are still Tol at the 5-year protocol biopsy, 1 Tol recipient died (of unrelated causes) beyond the 5-year milestone, and 1 showed Banff 1A rejection at the 5-year biopsy without renal dysfunction after remaining IS-free for 3 years. Among the non-Tol, 2 had graft dysfunction and 7 had normal function but evidence of subclinical rejection at the 2- or 3-year biopsy, and IS had to be maintained or reinstated. Tol was associated with greater numbers of infused DHSC. Serial immunophenotyping for CD4+CD127−CD25+FOXP3+ Tregs in PBMC could distinguish Tol from non-Tol, with the Tol group having higher Tregs (p<0.05). Similarly, genomic studies demonstrated that Tol recipients had lower transcript levels in >300 immune response genes (p<0.001) and that these gene signatures were predictive of Tol from early on. Conclusions: This non-chimeric DHSC protocol in HLA-id KTx demonstrated immunoregulation as a mechanism of tolerance. Therefore, plans for combinatorial therapy with DHSC and expanded recipient Tregs are underway.
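As a final illustration, the genomic analysis above describes a gene-level screen (lower transcripts in >300 immune response genes in Tol recipients). The sketch below shows one generic way such a screen could be run: per-gene comparisons between Tol and non-Tol samples with false discovery rate control. All data are simulated, and the column counts, test choice, and correction method are assumptions for illustration, not the study's actual pipeline.

```python
# Simulated sketch of a per-gene Tol vs. non-Tol expression screen with
# Benjamini-Hochberg FDR control; not the study's data or pipeline.
import numpy as np
from scipy.stats import ttest_ind
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
n_genes, n_tol, n_nontol = 500, 7, 8
tol = rng.normal(loc=5.0, scale=1.0, size=(n_genes, n_tol))      # log2 expression
nontol = rng.normal(loc=5.0, scale=1.0, size=(n_genes, n_nontol))
nontol[:300] += 1.0   # simulate 300 genes expressed higher in non-Tol

p_values = np.array([
    ttest_ind(tol[g], nontol[g], equal_var=False).pvalue
    for g in range(n_genes)
])
rejected, q_values, _, _ = multipletests(p_values, alpha=0.05, method="fdr_bh")
print(f"{rejected.sum()} genes differ at FDR < 0.05")
```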