Therapeutic Drug Monitoring Case Study


Two recent multidisciplinary efforts in which the Yale TDM laboratory has participated have demonstrated that some desirable behaviors may be maintained with relatively minimal ongoing input. Both efforts began as attempts to reduce the number of drug assays ordered, and both originated outside the laboratory in response to articles about unnecessary measurements. Previous efforts by the TDM laboratory to improve the use of drug measurements, through educational newsletters and direct intervention, had produced few lasting effects. This was attributed in part to the fact that the house staff more often viewed the laboratory service as a utility than as professional colleagues. The new initiatives provided the opportunity to provide guidance through faculty more likely to be viewed as role models. A major objective of the laboratory was to redirect the focus from simply reducing the number of tests to improving the use of TDM results. While reduced testing could lower costs modestly, better use of TDM results could both reduce testing and improve outcomes.


The neurosurgery intensive care unit proposed an initiative to reduce phenytoin monitoring as a quality-improvement project. Schoenenberger et al. (1) had reported that the major reason for excessive phenytoin monitoring was routine daily measurements. This practice was almost universal in the unit, an average of 0.92 results/day being reported for each patient on therapy. The initial goal of the project was to reduce phenytoin measurements by 50%.

In addition to habit, and to the fear of not having a result when the attending physician wanted one, another major impetus for daily measurements was discovered in a review of the usual dosing strategy. Patients were almost uniformly loaded with 1 g of phenytoin and maintained with 100 mg every 8 h. Adjustments in response to out-of-range results consisted of withholding a dose or giving an extra one. Almost all adjustments were reactive. This strategy could be compared to driving down a road and steering your car only when you went off the road. Moreover, you could only look once a day to see if you were on the road. Indeed, in this view, the initial goal of the proposed program (50% reduction of utilization) was to look only once every other day.

Instead, the revised goal became figuratively to encourage “defensive prescribing,” steering drug therapy proactively rather than reactively. This needed to be done without any substantial imposition on the house staff, who were “more concerned about not cutting the patient’s spinal cord or carotid artery than in what the patient’s phenytoin level is.” Simple guidelines were developed for loading and maintenance doses that were weight-adjusted in convenient steps. Additional guidelines suggesting appropriate responses to drug concentrations in various ranges (dose changes, if any, and when to obtain the next result) were distributed on cards.
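The weight-adjusted guideline cards can be sketched in code. The actual dose steps used at Yale are not given in the text; the values below assume the commonly cited ~15 mg/kg loading dose and ~5 mg/kg/day maintenance, rounded to convenient 50-mg increments.

```python
def phenytoin_doses(weight_kg):
    """Weight-adjusted phenytoin doses in convenient 50-mg steps.

    Illustrative sketch only: the text does not give the actual
    steps; ~15 mg/kg loading and ~5 mg/kg/day maintenance are
    assumed, with maintenance given in divided doses every 8 h.
    """
    loading = round(15 * weight_kg / 50) * 50   # mg, single loading dose
    daily = round(5 * weight_kg / 50) * 50      # mg/day maintenance
    return loading, daily
```

For a 70-kg patient this reproduces roughly the familiar 1-g load and a maintenance rate near 300 mg/day, while lighter or heavier patients receive regimens other than 100 mg every 8 h.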

Table 1 shows the changes in phenytoin requests in the first 3 months, compared with the same period 1 year earlier. The number of requests decreased by 26%, but the percentage of values falling within the therapeutic range increased by 22%. The latter change probably resulted from more-appropriate dosing, in that many patients began receiving regimens other than 100 mg every 8 h.

This interpretation was also supported by a follow-up study of the period 6–9 months after the program was initiated. There had been no ongoing efforts to promote compliance in the interim, and the frequency of phenytoin requests returned to baseline; however, the improvement in the percentage of results within the therapeutic range was largely maintained. Without ongoing promotion, reducing order frequency was not positively rewarded (but did receive negative reinforcement when attending physicians wanted results that had not been obtained). Accordingly, that behavior underwent extinction. On the other hand, results falling within the therapeutic range were intrinsically reinforcing. Thus, more-appropriate dosing appeared to have been maintained.

This occurred despite some unexpected negative feedback: When patients were discharged on individualized dosing regimens, their regular physicians complained. Doses involving other than an integral number of 100-mg tablets were felt to decrease compliance and were also noted to increase costs, because both 30-mg and 100-mg tablets were required. These are valid concerns, particularly with regard to compliance. However, because phenytoin has nonlinear pharmacokinetics, a sizable minority of patients could not achieve steady-state phenytoin concentrations of between 10 and 20 mg/L when only doses that were multiples of 100 mg were used. If more-complex regimens are not acceptable, there must be a willingness to accept some phenytoin concentrations outside the traditional therapeutic range, provided the clinical effects are acceptable.
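The consequence of nonlinear pharmacokinetics can be made concrete with the Michaelis-Menten steady-state relationship, Css = Km · R / (Vmax − R), where R is the dosing rate. The parameters below are illustrative population-typical values, not data from the study, but they show how 100-mg dose multiples can bracket the 10-20 mg/L range without ever landing in it.

```python
def css(rate_mg_per_day, vmax=450.0, km=4.0):
    """Steady-state phenytoin concentration (mg/L) from
    Css = Km * R / (Vmax - R).

    vmax (mg/day) and km (mg/L) are illustrative values for a
    hypothetical patient, not data from the study.
    """
    if rate_mg_per_day >= vmax:
        raise ValueError("dose rate meets or exceeds Vmax; no steady state")
    return km * rate_mg_per_day / (vmax - rate_mg_per_day)

css(300)   # 8.0 mg/L  -- below the 10-20 mg/L range
css(400)   # 32.0 mg/L -- above it
css(350)   # 14.0 mg/L -- a regimen using 30-mg tablets lands in range
```

For this hypothetical patient, no multiple of 100 mg/day between 300 and 400 exists, so only a regimen mixing 100-mg and 30-mg tablets (or acceptance of an out-of-range concentration) is possible.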

Table 1. Results from the Yale TDM laboratory: phenytoin and vancomycin results after intervention to improve the use of these drugs and their monitoring.


A vancomycin initiative was initially proposed by the Antibiotic Drug Use Subcommittee of the Pharmacy and Therapeutics Committee in response to articles citing a lack of evidence justifying therapeutic monitoring of vancomycin (7)(8). The subcommittee did not propose to halt vancomycin monitoring, but rather to discourage it in patients with normal renal function. This proposal was supported by the pharmacy and the TDM laboratory. It was also suggested that measurements of vancomycin peaks were rarely indicated, given the lack of evidence linking vancomycin peaks with toxicity (7)(8)(9), as well as practical considerations based on the pharmacokinetics of vancomycin. The recommended draw time for peaks fell during the distribution phase, when concentrations were changing rapidly and timing was critical. Past experience, however, suggested that incorrect draw times for vancomycin peaks were the rule rather than the exception. Moreover, clearance calculated from such peaks would include a component of distribution, resulting in overestimation of clearance and underestimation of half-life. When the distribution phase is not taken into account, vancomycin peak values may mislead more often than inform.
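The distortion caused by sampling during the distribution phase can be illustrated with a two-compartment profile. The biexponential parameters below are invented for demonstration (the text gives no patient data): a "peak" drawn early, while distribution is still under way, makes the mono-exponentially fitted half-life look shorter than the true terminal half-life.

```python
import math

def biexp(t, A=40.0, alpha=2.0, B=25.0, beta=0.12):
    """Illustrative two-compartment vancomycin profile (mg/L).
    All parameters are hypothetical, chosen for demonstration."""
    return A * math.exp(-alpha * t) + B * math.exp(-beta * t)

def fitted_half_life(t_peak, t_trough):
    """Half-life inferred by fitting one exponential to two samples."""
    c1, c2 = biexp(t_peak), biexp(t_trough)
    k = math.log(c1 / c2) / (t_trough - t_peak)
    return math.log(2) / k

true_t_half = math.log(2) / 0.12            # terminal half-life, ~5.8 h
early = fitted_half_life(0.5, 11.5)         # "peak" drawn in distribution
late = fitted_half_life(2.0, 11.5)          # peak drawn post-distribution
# early < late < true_t_half: the distribution-phase sample makes the
# drug appear to be cleared faster (shorter half-life) than it is.
```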

Because patient weight was the largest contributor to variability in the volume of distribution, and the peak target range was quite broad, administering a weight-adjusted vancomycin dose to a patient with an appropriate trough concentration could largely assure a peak concentration within the target range. Accordingly, it was felt that almost all patients could be monitored with only trough concentrations.
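A weight-adjusted vancomycin dose of the kind described might look like the sketch below. The 15 mg/kg figure and 250-mg rounding step are assumptions for illustration; the text does not specify the scheme actually recommended.

```python
def vanc_dose(weight_kg, mg_per_kg=15, step=250):
    """Weight-adjusted vancomycin dose rounded to a convenient step.

    15 mg/kg and 250-mg rounding are illustrative assumptions,
    not the study's actual parameters. With the volume of
    distribution scaling with weight, such a dose plus an
    in-range trough largely assures an in-range peak.
    """
    return round(mg_per_kg * weight_kg / step) * step
```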

The recommendations to forgo vancomycin monitoring in patients with normal renal function and to forgo vancomycin peaks in almost all patients were presented at the Infectious Disease Service conference. Additional discussions were held with the Infectious Disease fellows, the ones most likely to recommend use of vancomycin. The recommendations also were introduced elsewhere by pharmacy in-service presentations. Finally, a computer screen appeared whenever a vancomycin determination was ordered on-line. This screen briefly summarized the recommendations and further noted that requests for vancomycin peaks should be discussed with the laboratory medicine resident. This screen provided the only ongoing reminder of the recommendations.

The initial response was quite impressive, with a decrease in frequency of vancomycin requests of nearly 60% (Table 1). Almost no tests labeled as peak measurements were ordered, although some requests for peak values were submitted as “random” or trough measurements to circumvent the need to provide a rationale. Most of the reductions were maintained a full year later, with the only ongoing reinforcement being a computer screen that could be skipped unread in less than a second. This nearly subliminal input, coupled with a modest barrier requiring discussion before ordering a properly labeled peak, appeared to be enough to maintain the desired behaviors.

An apparent negative effect of the recommendations was the decrease in the percentage of peaks and troughs that were within the target ranges (random results had no defined target range and were not included). This probably resulted from concentrations not being obtained for patients with normal renal function, who should usually achieve concentrations within these ranges when receiving standard doses. Additionally, peak values were not being measured, and these too were more likely to be within their range. When only troughs were considered, the in-range percentage fell from 33% to 27%.

The low percentages of patients with measured values in the target range suggested substantial room for improvement in the dosing of vancomycin. The only guidance provided to improve dosing was the suggestion to use weight-based dosing, which proved to be difficult to implement. When weight-adjusted doses were ordered, they were often converted to standard doses by the pharmacy.

In these cost-conscious times, one may ask whether efforts to reduce unnecessary testing are cost-effective. The marginal costs (the costs of doing one more assay) for many laboratory tests are often quite low, making savings difficult to achieve. The sustained reduction in vancomycin requests was >2500 specimens per year, yielding paper reductions in annual costs of >$40 000. Because drug concentrations must be obtained at fairly precise times, this usually means obtaining a separate specimen, and the largest component of these paper savings was in the costs associated with obtaining the specimen. The only unequivocal savings was in the costs of the reagents, amounting to ∼$1500 per year. Although this is modest, it exceeded the costs of the intervention. Moreover, because the behavior seems to be sustained, the savings will continue to accrue in subsequent years.
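The arithmetic behind these figures can be made explicit. The per-specimen and per-assay unit costs below are back-calculated from the totals in the text, not quoted directly from it.

```python
# Figures from the text (annual, sustained reduction):
specimens_avoided = 2500     # >2500 specimens per year
paper_saving = 40_000        # "paper" reduction in annual costs, $
reagent_saving = 1_500       # unequivocal reagent savings, $

# Implied unit costs (back-calculated, not stated in the text):
per_specimen_paper = paper_saving / specimens_avoided    # ~$16/specimen,
                                                         # mostly collection
per_assay_reagent = reagent_saving / specimens_avoided   # ~$0.60/assay
```

The gap between ~$16 and ~$0.60 per test is the point: most of the "savings" are phlebotomy and handling costs that are only recovered if staffing actually changes, whereas reagent costs are avoided immediately.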

These efforts were probably successful for several reasons. First, they were evidence-based, and clinicians responded positively to evidence. Second, the entire process was considered, including drug dosing. Third, the recommendations were relatively modest and readily understandable. Finally, they were multidisciplinary, involving clinicians both inside and outside the laboratory. Others have reported that it can take surprisingly long for new evidence or guidelines to change practice (10)(11). Strategies such as those used at Yale may help narrow that interval.
