Weaning from Kidney Replacement Therapy

Acute kidney injury (AKI) is a common condition in critical care settings, affecting more than half of all patients, 10% of whom require kidney replacement therapy (KRT). The KRT modalities currently available include intermittent hemodialysis and continuous kidney replacement therapies (continuous veno-venous hemodialysis, hemofiltration, or hemodiafiltration). Although a survival benefit of continuous over intermittent KRT has not been demonstrated, continuous therapies have gained wide application in ICUs, often supplanting intermittent modalities because of the belief that they are better tolerated in hemodynamically unstable patients. Regardless of the modality used, the need for KRT considerably increases in-hospital mortality, which then ranges between 40% and 60%. More than three-fourths of patients who survive this acute episode develop chronic renal failure, and 10 to 30% of them remain dependent on KRT. In the long term, they remain exposed to worsening morbidity and mortality and to a deterioration in their quality of life.

  • acute kidney injury
  • kidney replacement therapy (KRT)
  • KRT weaning
  • urine output
  • creatinine clearance
  • urinary urea

1. Reasons for Optimal and Timely KRT Weaning

In the management of AKI, the indication for KRT depends on the severity of renal damage and metabolic disorders. Once initiated, KRT is typically continued for 5 to 10 days, after which the intensivist must temporarily interrupt treatment in order to assess native kidney function and attempt weaning [1]. These interruptions may, however, come too late or too early.
KRT exposes patients to numerous complications, such as vascular access-associated infections or thrombosis, bleeding favored by systemic anticoagulation, unintended elimination of drugs and antibiotics, electrolyte (notably phosphate) and nutrient depletion, hemodynamic instability, and pro-inflammatory effects generated by the extracorporeal circuit [2][3]. The reported risk of bacteremia with nontunneled percutaneous catheters ranges between 3% and 10% in the ICU, increasing significantly after 1 week of catheter use [4][5][6]. The daily risk of colonization and catheter-related infection was reported to be significantly higher for dialysis catheters than for central venous catheters within the first 7 days of catheter maintenance [7]. The incidence of thrombosis of a cannulated vein ranges from 20% to 70%, depending on the site and the diagnostic procedure [8]. In a prospective study, vascular access thrombosis was observed at a rate of 2.3 to 4.2 episodes per 1000 days of temporary catheter use [9]. During KRT in critically ill patients, adverse events occurred in 23% and 16% of patients in the accelerated- and standard-strategy groups, respectively, of the STARRT-AKI study (which compared an accelerated with a standard KRT strategy); hypotension and hypophosphatemia were the most frequent adverse events [10]. The occurrence of intradialytic arterial hypotension, sometimes subclinical, is thought to lead to renal ischemic lesions [11]. Indeed, renal biopsies carried out in patients who had been on hemodialysis for several days or weeks revealed recent tubular ischemic lesions, probably induced by intradialytic hypotensive episodes [12]. Similar observations were reported by Conger [13], who also demonstrated, in an experimental model of post-ischemic AKI, a loss of renal plasma flow autoregulation, explaining the vulnerability of renal tubular cells to even small drops in blood pressure: in a rat model of ischemic AKI, a reduction in renal perfusion pressure within the autoregulatory range induced a marked decrement in renal blood flow in AKI rats compared with controls. These renal ischemic micro-lesions may delay the recovery of renal function [14]. The elimination of mediators and growth factors required for tubular cell regeneration may also contribute [15]. An untimely increase in the delivered dialysis dose could also impair renal recovery: a meta-analysis found that intensification of KRT was associated with increased dependence on this therapy [16]. Prolongation of the duration of AKI is also associated with increased mortality [1]. These observations suggest that unwarranted prolongation of KRT is deleterious; it also unnecessarily increases workload and treatment cost. On the other hand, stopping KRT too early exposes the patient to fluid overload, with its consequences for ventilation, to electrolyte and acid–base disorders, and to nitrogen retention. Indeed, elevated blood urea can lead to adverse effects such as digestive hemorrhage, and metabolic acidosis may worsen when the native kidney cannot eliminate acids after KRT is discontinued. Finally, the risk of fluid overload remains, with its cardiopulmonary impact. If weaning fails, KRT must be resumed, with new vascular access and all of the complications already described.
Wu et al. [17] reported that re-institution of KRT after weaning failure significantly worsened the prognosis, without, however, being able to distinguish between the severity of the underlying disease and the impact of the weaning attempt itself. They retrospectively studied 304 postoperative patients who had undergone KRT: a third of the patients (94, 30.9%) were weaned off acute dialysis for more than 5 days, and 64 (21.1%) were successfully weaned for at least 30 days. Surgical patients with AKI who required resumption of dialysis after being temporarily weaned had a worse prognosis. Other observational studies also suggest that weaning failure is associated with increased mortality [18][19][20]. Whether failure of weaning from KRT is harmful in itself or merely a marker of disease severity remains, however, an open question.

2. Predictive Criteria for Successful Weaning from KRT

The international KDIGO recommendations, dating from 2012, suggest discontinuing KRT when it is no longer necessary, either because renal function has recovered sufficiently to meet the patient’s needs or because KRT is no longer consistent with the goals of care [21]. The generality of this statement underlines the lack of objective data that would make it possible to protocolize weaning [22]. Nevertheless, several strategies are practiced, ranging from a “late” to an “early” approach, with the risk of unjustified prolongation or unsuccessful attempts in either case.
Most studies of weaning from KRT are observational and vary in quality and in their definition of success. Successful weaning is generally defined as remaining free of KRT for a period that varies between 7 and 28 days depending on the study, during which serum creatinine levels should fall or at least stabilize.
The first criterion that could justify discontinuation of KRT is the resumption of diuresis. A survey of intensivists in England showed that an increase in diuresis was the most frequently cited reason for weaning (74%), followed by normalization of pH (70%) and achievement of adequate hydration (55%) [23]. In a case–control study including 304 patients treated with intermittent KRT, 94 (31%) were weaned for at least 5 days and 64 (21.1%) for at least 30 days; a longer duration of KRT, a higher SOFA score, a diuresis of less than 300 mL/24 h on the day of the weaning attempt, and age > 65 years were predictive of failure before day 30 [17]. An international study included 1006 patients on continuous KRT, of whom 529 survived and 313 were successfully weaned from KRT for more than 7 days; the mortality of weaned patients was significantly lower than that of the others (28.5% vs. 42.7%, p < 0.0001) [18]. The best predictor of weaning success was diuresis, with optimal sensitivity and specificity at a threshold of 436 mL/24 h without diuretics and 2330 mL/24 h with diuretics. Other studies have confirmed the strong predictive value of diuresis for weaning success [19][20][24][25][26][27]. The thresholds observed in the aforementioned study [18], derived from urine output collected over the 24 h preceding discontinuation of KRT, appear to be those adopted by the majority of authors [28].
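As a purely illustrative sketch, and not a validated decision rule, the diuretic-dependent urine-output thresholds reported in the international study cited above (436 mL/24 h without diuretics, 2330 mL/24 h with diuretics) could be encoded as follows; the function name and structure are assumptions introduced here for illustration only.

```python
# Illustrative only: thresholds from the cited study, keyed by diuretic use.
URINE_OUTPUT_THRESHOLD_ML_24H = {False: 436, True: 2330}


def urine_output_favors_weaning(urine_output_ml_24h: float, on_diuretics: bool) -> bool:
    """Return True when the 24-h urine output exceeds the threshold
    corresponding to the patient's diuretic status, i.e. when this single
    criterion would favor a weaning attempt."""
    return urine_output_ml_24h > URINE_OUTPUT_THRESHOLD_ML_24H[on_diuretics]


# Example: 600 mL/24 h without diuretics exceeds the 436 mL/24 h threshold.
print(urine_output_favors_weaning(600, on_diuretics=False))  # True
```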
However, weaning based on urine volume alone is no guarantee of success. The use of a weaning algorithm based on the presence of a diuresis of more than 500 mL/24 h enabled only one-third of eligible patients to be weaned, and KRT was continued in 69% of patients because of fluid overload [29]. The significant increase in urine output obtained with diuretics simplifies the management of fluid and sodium status, but it is difficult to interpret and to relate to renal recovery, as the scant available data are sometimes contradictory [28]. Despite controversial reports, the use of diuretics is often associated with successful weaning [18][25]. In a prospective analysis of 92 patients, the fluid and sodium balance in the 48 h following discontinuation of KRT was negative in patients successfully weaned at D7, whereas it was largely positive in those in whom weaning failed, with no significant difference in diuretic use between the two groups [30]. However, a comparison of furosemide (0.5 mg/kg/h) with placebo after continuous KRT failed to show any benefit in terms of time to renal recovery [31].
Assessment of the glomerular filtration rate (creatinine clearance) is the best marker of possible recovery of function. It must be carried out in a steady-state situation, which is difficult to obtain, particularly with intermittent techniques that induce rapid and large variations in solute concentrations. Because the assessment is reliable only during the inter-dialytic period, clearance measurements (UV/P) over relatively short collection periods, ranging from 2 to 12 h, have been proposed. In a retrospective analysis of 53/85 weaned patients, creatinine clearance measured on a two-hour urine collection outperformed diuresis in predicting weaning success [32]; a clearance greater than 23 mL/min in the 12 h preceding cessation of continuous KRT had the best sensitivity, specificity and positive predictive value for weaning up to D7. In the ATN study [33], it was recommended to stop KRT once diuresis exceeded 30 mL/h and the 6-h creatinine clearance was >20 mL/min, to continue KRT if the clearance was <12 mL/min, and to leave the decision to the intensivist between these two values. During enrollment, practice showed that the 20 mL/min threshold was too high, leading to an amendment lowering it to 12 mL/min [34]. These data can be compared with those of a prospective study showing that a creatinine clearance > 11 mL/min or a reduction in creatinine levels between D0 and D2 was associated with successful weaning at D7 [30].
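As an illustration of the UV/P measurement described above, a timed creatinine clearance can be computed from a short urine collection as in the minimal sketch below; the variable names and units (creatinine concentrations in the same units for urine and plasma, volume in mL, time in minutes) are assumptions made for this example.

```python
def timed_creatinine_clearance(urine_creatinine, plasma_creatinine,
                               urine_volume_ml, collection_minutes):
    """Timed creatinine clearance (U x V / P) in mL/min.

    Illustrative sketch: urine and plasma creatinine must be expressed in the
    same concentration units; the result is not adjusted for body surface area.
    """
    urine_flow_ml_per_min = urine_volume_ml / collection_minutes  # V
    return urine_creatinine * urine_flow_ml_per_min / plasma_creatinine  # U x V / P


# Example: a 2-h collection of 180 mL with urine creatinine 4500 umol/L and
# plasma creatinine 250 umol/L gives about 27 mL/min, above the 23 mL/min
# threshold mentioned above.
print(f"{timed_creatinine_clearance(4500, 250, 180, 120):.1f} mL/min")
```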
Biochemical analysis of simple urinary markers such as urea and creatinine could be of interest in assessing renal excretory function. A retrospective analysis of 54 patients who survived severe AKI showed that a urinary creatinine excretion ≥ 5.2 mmol/24 h on D1 of KRT discontinuation, irrespective of diuretic use, was associated with successful weaning (defined as not requiring KRT within 15 days of discontinuation) in 84% of cases [35]. In a similar study involving patients treated with intermittent KRT, a urinary urea excretion > 1.35 mmol/kg/24 h predicted weaning success with an AUC of 0.96, significantly better than a urine output > 8.5 mL/kg/h [36].
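For orientation, the weight-indexed urinary urea excretion used as a threshold above can be calculated as in the sketch below; the function and the example values are hypothetical, and the units (urea in mmol/L, 24-h urine volume in mL, weight in kg) are assumptions for this illustration.

```python
def urea_excretion_mmol_per_kg_24h(urine_urea_mmol_l, urine_volume_ml_24h, weight_kg):
    """24-h urinary urea excretion indexed to body weight (mmol/kg/24 h).

    Illustrative sketch: assumes a complete 24-h urine collection and
    consistent units (urea in mmol/L, volume in mL, weight in kg).
    """
    total_urea_mmol = urine_urea_mmol_l * urine_volume_ml_24h / 1000.0  # mmol over 24 h
    return total_urea_mmol / weight_kg


# Example: 900 mL of urine over 24 h with a urinary urea of 120 mmol/L in a
# 70 kg patient gives ~1.54 mmol/kg/24 h, above the 1.35 mmol/kg/24 h threshold.
print(f"{urea_excretion_mmol_per_kg_24h(120, 900, 70):.2f} mmol/kg/24 h")
```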
Several of the new blood and urine biomarkers developed for the early diagnosis of AKI [37][38] have been tested in the assessment of renal recovery. Urinary biomarkers that correlate with renal cell injury or function, including NGAL, HGF, KIM-1, cystatin C and [TIMP-2]×[IGFBP7], could predict renal recovery and the outcome of AKI. Indeed, studies in critically ill AKI patients treated with KRT have shown that those with lower initial levels of biomarkers of inflammation and of tissue or kidney injury, or whose levels of these biomarkers decrease over time, are more likely to recover kidney function [39][40]. Plasma NT-proBNP at the initiation of continuous KRT has also been identified as a weaning-related factor [41]. However, these studies focused more on the differentiation between transient and persistent AKI and on renal recovery than on the prediction of successful KRT cessation. Only one prospective study, including 110 AKI patients treated with continuous KRT, showed that a serum cystatin C below 1.85 mg/L was an independent predictor of successful weaning from continuous KRT for more than 14 days [20]. In contrast, in a prospective study of 54 patients, urinary NGAL did not outperform urine output in predicting successful weaning from KRT at 72 h; performance was, however, improved by combining 24-h diuresis with the NGAL level at H6 of weaning [42]. Stads et al. [30] confirmed these observations and found the performance of urinary NGAL to be inferior to that of creatinine clearance at D2. Kim et al. [20] found no significant association between plasma NGAL levels and successful weaning from KRT. In a recent study, a plasma NGAL level ≤ 403 ng/mL was predictive of successful weaning from continuous KRT in non-septic patients, whereas diuresis was more informative in septic patients [19]. Plasma cystatin C levels at initiation were a factor associated with successful weaning [43] and were associated with a better long-term renal prognosis when <2.97 mg/L at KRT discontinuation [44].
Given the heterogeneity of studies and of the proposed thresholds, it is difficult to draw conclusions about the real usefulness of plasma cystatin C [28]. In short, current data do not support the use of biomarkers to guide the discontinuation of KRT: they have been evaluated mainly for the assessment of renal recovery rather than for weaning from KRT.