Clinical Health Updates

Tap water irrigation okay for suture repair

Clinical Question:
Is wound irrigation with tap water equally effective as sterile saline in reducing the risk of infection?

Bottom Line:
This multicenter study found no significant difference in infection rates after uncomplicated wound closure in the emergency department, whether wounds were irrigated with tap water or with sterile saline. The results agree with those of previous single-institution trials evaluating tap water as an irrigant.

Reference:
Moscati RM, Mayrose J, Reardon RF, Janicke DM, Jehle DV. A multicenter comparison of tap water versus sterile saline for wound irrigation. Acad Emerg Med 2007;14:404-409.

Study Design:
Randomized controlled trial (single-blinded)

Synopsis:
These investigators compared wound infection rates for irrigation with tap water versus sterile saline before closure of wounds in the emergency department. The study was a multicenter, prospective, randomized trial conducted at two Level 1 urban hospitals and a suburban community hospital. Subjects were a convenience sample of adults presenting with acute simple lacerations requiring sutures or staples. Subjects were randomized to irrigation in a sink with tap water or with normal saline using a sterile syringe. Wounds were closed in the standard fashion. Subjects were asked to return to the emergency department for suture removal; those who did not return were contacted by telephone. Wounds were considered infected if sutures or staples were removed early, if the wound required incision and drainage, or if the subject needed to be placed on antibiotics. Equivalence of the groups was met if there was less than a doubling of the infection rate. A total of 715 subjects were enrolled in the study, and follow-up data were obtained on 634 (88%) of them. Twelve (4%) of the 300 subjects in the tap water group had wound infections, compared with 11 (3.3%) of the 334 subjects in the saline group. The relative risk was 1.21 (95% confidence interval = 0.5 to 2.7).
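The reported relative risk and confidence interval can be reproduced from the raw counts. A minimal sketch in Python (the log-based confidence interval is an assumption for illustration; the paper does not state which method it used):

```python
import math

def relative_risk(events_a, n_a, events_b, n_b, z=1.96):
    """Relative risk of group A vs group B, with a CI via the log method."""
    rr = (events_a / n_a) / (events_b / n_b)
    # Standard error of log(RR) for two independent proportions
    se = math.sqrt(1/events_a - 1/n_a + 1/events_b - 1/n_b)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Tap water: 12 infections of 300; saline: 11 of 334
rr, lo, hi = relative_risk(12, 300, 11, 334)
print(round(rr, 2), round(lo, 2), round(hi, 2))  # 1.21 0.54 2.71
```

The interval crosses 1 and, rounded to one decimal, matches the reported 0.5 to 2.7, consistent with the equivalence conclusion.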

Rapid antigen test reduces antibiotic use in adult sore throat

Clinical Question:
What is the best strategy for diagnosing strep throat in adults?

Bottom Line:
The use of a rapid antigen test reduces antibiotic use in adults with sore throat better than usual care and better than the use of a clinical decision rule alone. A combined approach using a clinical decision rule plus a rapid antigen test when the clinical rule is equivocal may be the most efficient approach.

Reference:
Worrall G, Hutchinson J, Sherman G, Griffiths J. Diagnosing streptococcal sore throat in adults: randomized controlled trial of in-office aids. Can Fam Physician 2007;53:666-671.

Study Design:
Randomized controlled trial (nonblinded)

Synopsis:
Sore throat is among the most common problems seen in primary care practices, and it is evaluated using a variety of strategies. In this study, 37 Canadian family doctors were asked to recruit 20 successive adults with sore throat. The physicians were randomized to use 1 of 4 strategies: usual clinical practice, decision rule only, rapid antigen test only, or clinical decision rule plus rapid antigen test if the decision rule was equivocal. The clinical decision rule was based on the well-validated Centor rule, with 1 point each for fever, swollen glands, tonsillar exudate, and absence of cough. Interpretation of the rule was as follows: Antibiotics were not recommended for a patient with less than 2 points; antibiotics were recommended for a patient with 3 or 4 points; and no recommendation was made if a patient had 2 points. Between 102 and 170 patients were recruited into each arm, and 47% of all patients received a prescription for an antibiotic. The percentage of visits resulting in an antibiotic prescription was 27% for rapid antigen test alone; 38% for clinical decision rule plus rapid antigen test; 55% for clinical rule only; and 58% for usual practice. The difference between the 2 rapid antigen groups and the usual care group was statistically significant, but the difference between clinical rule plus rapid antigen test and rapid antigen test alone was not. We are not told how many patients had the rapid antigen test in the combined approach group, but presumably it was fewer than in the group in which all patients received rapid antigen testing. We are also not told clinical outcomes such as the percentage of patients cured at 2 weeks or the percentage returning because of treatment failure.
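The decision rule described above can be sketched as a small function (the function name and boolean parameters are illustrative, not taken from the study):

```python
def centor_recommendation(fever, swollen_glands, tonsillar_exudate, cough):
    """Score 1 point each for fever, swollen glands, tonsillar exudate,
    and absence of cough; map the total (0-4) to a recommendation."""
    score = sum([fever, swollen_glands, tonsillar_exudate, not cough])
    if score < 2:
        return "no antibiotics"
    if score == 2:
        return "equivocal"  # in the combined arm, do a rapid antigen test
    return "antibiotics"  # 3 or 4 points

# A patient with fever, swollen glands, and exudate scores 3 even with cough
print(centor_recommendation(fever=True, swollen_glands=True,
                            tonsillar_exudate=True, cough=True))  # antibiotics
```

The "equivocal" branch is where the combined strategy in the study fell back to rapid antigen testing.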

High-protein, low-carb diet associated with increased mortality in women

Clinical Question:
Is a diet high in protein and low in carbohydrates beneficial to women’s health?

Bottom Line:
There is an association between women's mortality risk and a diet that is low in carbohydrates, high in protein, or both. The strength of this association is reinforced by a dose-response gradient in the observations, as well as the additive effects of low carbohydrates and high protein.

Reference:
Lagiou P, Sandin S, Weiderpass E, et al. Low carbohydrate-high protein diet and mortality in a cohort of Swedish women. J Intern Med 2007;261:366-374.

Study Design:
Cohort (prospective)

Synopsis:
The long-term health consequences of diets used for weight control are not established. The authors evaluated the association of the frequently recommended low-carbohydrate diets, usually characterized by a concomitant increase in protein intake, with long-term mortality. The Women's Lifestyle and Health cohort study was initiated in Sweden during 1991-1992, with nearly complete 12-year follow-up, and was conducted in the Uppsala Health Care Region. Included were 42,237 women, 30-49 years old at baseline, volunteers from a random sample, who completed an extensive questionnaire and were traced through linkages to national registries until 2003. The authors evaluated the association of mortality with decreasing carbohydrate intake (in deciles), increasing protein intake (in deciles), and an additive combination of these variables (a low carbohydrate-high protein score ranging from 2 to 20), in Cox models controlling for energy intake, saturated fat intake, and several nondietary covariates. Decreasing carbohydrate intake or increasing protein intake by one decile was associated with an increase in total mortality of 6% (95% CI: 0-12%) and 2% (95% CI: -1 to 5%), respectively. For cardiovascular mortality among women 40-49 years old at enrolment, the corresponding increases were 13% (95% CI: -4 to 32%) and 16% (95% CI: 5-29%), respectively, with the additive score being even more predictive.
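The additive score described above can be illustrated with a small sketch: carbohydrate intake is ranked in deciles so that lower intake scores higher, protein intake so that higher intake scores higher, and the two decile ranks (1-10 each) are summed to give a score from 2 to 20. The helper names and the decile method are assumptions for illustration, not the authors' code:

```python
import bisect
from statistics import quantiles

def decile_rank(data, x):
    """Decile of x relative to the cohort: 1 (lowest tenth) .. 10 (highest)."""
    cuts = quantiles(data, n=10)  # nine cut points splitting data into tenths
    return bisect.bisect_right(cuts, x) + 1

def lchp_score(carb_data, protein_data, carb, protein):
    """Low carbohydrate-high protein score: 2 (high carb, low protein)
    up to 20 (low carb, high protein)."""
    return (11 - decile_rank(carb_data, carb)) + decile_rank(protein_data, protein)
```

With this construction, each one-decile decrease in carbohydrate intake or increase in protein intake raises the score by 1, matching the per-decile risk estimates reported above.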

Duct tape ineffective for common viral warts

Clinical Question:
Is duct tape an effective treatment for common viral warts in adults?

Bottom Line:
Occlusion with transparent duct tape is no more or less effective than occlusion with moleskin. The low success rate overall argues against any effect for occlusion. One interesting suggestion is that since hypnosis has been shown to be an effective treatment, perhaps that is the mechanism by which duct tape occlusion works, and perhaps adults are less suggestible than children. While this may not be the final word on this topic, it is discouraging news for the good folks at the American Duct Tape Council.

Reference:
Wenner R, Askari SK, Cham PM, et al. Duct tape for the treatment of common warts in adults: a double-blind randomized controlled trial. Arch Dermatol 2007;143:309-313.

Study Design:
Randomized controlled trial (double-blinded)

Synopsis:
The authors evaluated the efficacy of duct tape occlusion therapy for the treatment of common warts in adults. They did a double-blind controlled clinical intervention trial in a Veterans Affairs medical center. A total of 90 immunocompetent adult volunteers with at least 1 wart measuring 2 to 15 mm were enrolled between October 1, 2004, and July 31, 2005. Eighty patients completed the study. Patients were randomized by a computer-generated code to receive pads consisting of either moleskin with transparent duct tape (treatment group) or moleskin alone (control group). Patients were instructed to wear the pads for 7 consecutive days and leave the pad off on the seventh evening. This process was repeated for 2 months or until the wart resolved, whichever occurred first. Follow-up visits occurred at 1 and 2 months. The primary outcome was complete resolution of the target wart. Secondary outcomes included change in size of the target wart and recurrence rates at 6 months for warts with complete resolution. There were no statistically significant differences in the proportions of patients with resolution of the target wart (8 [21%] of 39 patients in the treatment group vs 9 [22%] of 41 in the control group). Of patients with complete resolution, 6 (75%) in the treatment group and 3 (33%) in the control group had recurrence of the target wart by the sixth month.

Ureteral stents not effective, increase symptoms

Clinical Question:
Following treatment of ureteral stones, is stent placement safe and effective?

Bottom Line:
Patients with stents after ureteroscopy have significantly higher morbidity in the form of irritative lower urinary tract symptoms, with no influence on stone-free rate, rate of urinary tract infection, requirement for analgesia, or long-term ureteric stricture formation. Because of the marked heterogeneity and poor quality of reporting of the included trials, the place of stenting in the management of patients after uncomplicated ureteroscopy remains unclear.

Reference:
Nabi G, Cook J, N’Dow J, McClinton S. Outcomes of stenting after uncomplicated ureteroscopy: systematic review and meta-analysis. BMJ 2007;334:572.

Study Design:
Meta-analysis (randomized controlled trials)

Synopsis:
The authors investigated the potential beneficial and adverse effects of routine ureteric stent placement after ureteroscopy in a systematic review and meta-analysis of randomised controlled trials. Data were gathered from the Cochrane controlled trials register (2006 issue 2), Embase, and Medline (1966 to 31 March 2006), without language restrictions. All randomised controlled trials that reported outcomes with or without stenting after ureteroscopy were included. Two reviewers independently extracted data and assessed quality. Meta-analyses used both fixed and random effects models, with dichotomous data reported as relative risk and continuous data as a weighted mean difference, with 95% confidence intervals. Nine randomised controlled trials (reporting 831 participants) were identified. The incidence of lower urinary tract symptoms was significantly higher in participants who had a stent inserted after ureteroscopy (relative risk 2.25, 95% confidence interval 1.14 to 4.43, for dysuria; 2.00, 1.11 to 3.62, for frequency or urgency). There was no significant difference between the two groups in postoperative requirement for analgesia, urinary tract infections, stone-free rate, or ureteric strictures. Because of marked heterogeneity, formal pooling of data was not possible for some outcomes, such as flank pain. A pooled analysis showed a reduced likelihood of unplanned medical visits or admission to hospital in the group with stents (0.53, 0.17 to 1.60), although this difference was not significant. None of the trials reported on health-related quality of life. Cost, reported in three randomised controlled trials, favoured the group without stents. The overall quality of trials was poor and reporting of outcomes inconsistent.

Sustained effects of false-positive mammogram results

Clinical Question:
What are the effects of false-positive mammogram results?

Bottom Line:
Some women with false-positive results on mammography may have differences in whether they return for mammography, occurrence of breast self-examinations, and levels of anxiety compared with women with normal results.

Reference:
Brewer NT, Salz T, Lillie SE. Systematic review: The long-term effects of false-positive mammograms. Ann Intern Med 2007;146:502-510.

Study Design:
Systematic review

Synopsis:
Although abnormal screening mammograms deleteriously affect the psychological well-being of women during the time immediately surrounding the tests, their long-term effects are poorly understood. The authors characterized the long-term effects of false-positive screening mammograms on the behavior and well-being of women 40 years of age or older. English-language studies were identified from the MEDLINE, Web of Science, EMBASE, CINAHL, PsycINFO, and ERIC databases through August 2006 that examined the effects of false-positive results of routine screening mammography on women's behavior, well-being, or beliefs. Two investigators independently coded study characteristics, quality, and effect sizes. Twenty-three eligible studies (n = 313,967) were identified. A random-effects meta-analysis showed that U.S. women who received false-positive results on screening mammography were more likely to return for routine screening than those who received normal results (risk ratio, 1.07 [95% CI, 1.02 to 1.12]). The effect was not statistically significant among European women (risk ratio, 0.97 [CI, 0.93 to 1.01]), and Canadian women were less likely to return for routine screening because of false-positive results (risk ratio, 0.63 [CI, 0.50 to 0.80]). Women who received false-positive results conducted more frequent breast self-examinations and had higher, but not apparently pathologically elevated, levels of distress and anxiety and thought more about breast cancer than did those with normal results.

Limitations:
Correlational study designs, a small number of studies, a lack of clinical validation for many measures, and possible heterogeneity.

Atkins equally or more effective for weight loss in premenopausal women

Clinical Question:
Which diet is most effective for maintaining weight loss in premenopausal women at 1 year?

Bottom Line:
In this study, premenopausal overweight and obese women on the Atkins diet lost at least as much weight as similar women following other diet plans, including the Zone, Ornish, or LEARN (Lifestyle, Exercise, Attitudes, Relationships, and Nutrition) diets. In addition, women on the Atkins diet showed comparable or greater improvement in overall metabolic measures. The difference in weight loss between the Atkins and Zone diets was statistically significant, but weight loss did not differ among the other diets.

Reference:
Gardner CD, Kiazand A, Alhassan S, et al. Comparison of the Atkins, Zone, Ornish, and LEARN diets for change in weight and weight related risk factors among overweight premenopausal women. The A to Z weight loss study: A randomized controlled trial. JAMA 2007;297:969-977.

Study Design:
Randomized controlled trial (single-blinded)

Synopsis:
Popular diets, particularly those low in carbohydrates, have challenged current recommendations advising a low-fat, high-carbohydrate diet for weight loss. Potential benefits and risks have not been tested adequately. The authors compared 4 weight-loss diets representing a spectrum of low to high carbohydrate intake for effects on weight loss and related metabolic variables. This was a 12-month randomized trial conducted in the United States from February 2003 to October 2005 among 311 free-living, overweight or obese (body mass index, 27-40), nondiabetic, premenopausal women. Participants were randomly assigned to follow the Atkins (n = 77), Zone (n = 79), LEARN (n = 79), or Ornish (n = 76) diets and received weekly instruction for 2 months, followed by an additional 10 months of follow-up. Weight loss at 12 months was the primary outcome. Secondary outcomes included lipid profile (low-density lipoprotein, high-density lipoprotein, and non-high-density lipoprotein cholesterol, and triglyceride levels), percentage of body fat, waist-hip ratio, fasting insulin and glucose levels, and blood pressure. Outcomes were assessed at months 0, 2, 6, and 12. The Tukey studentized range test was used to adjust for multiple testing. Weight loss was greater for women in the Atkins diet group compared with the other diet groups at 12 months, and mean 12-month weight loss was significantly different between the Atkins and Zone diets (P<.05). Mean 12-month weight loss was as follows: Atkins, -4.7 kg (95% confidence interval [CI], -6.3 to -3.1 kg); Zone, -1.6 kg (95% CI, -2.8 to -0.4 kg); LEARN, -2.6 kg (95% CI, -3.8 to -1.3 kg); and Ornish, -2.2 kg (95% CI, -3.6 to -0.8 kg). Weight loss was not statistically different among the Zone, LEARN, and Ornish groups. At 12 months, secondary outcomes for the Atkins group were comparable with or more favorable than those of the other diet groups.

Botulinum might be more effective than nitroglycerine in anal fissure

Clinical Question:
Is type A botulinum toxin more effective than nitroglycerine ointment in treating chronic anal fissure?

Bottom Line:
In this study, botulinum toxin (Botox, Dysport) appeared slightly more effective than topical nitroglycerine in managing chronic anal fissures.

Reference:
Brisinda G, Cadeddu F, Brandara F, Marniga G, Maria G. Randomized clinical trial comparing botulinum toxin injections with 0.2 per cent nitroglycerin ointment for chronic anal fissure. Br J Surg 2007;94:162-167.

Study Design:
Randomized controlled trial (single-blinded)

Synopsis:
In recent years, treatment of chronic anal fissure has shifted from surgical to medical. These authors compared the ability of two non-surgical treatments, botulinum toxin injections and nitroglycerin ointment, to induce healing in patients with idiopathic anal fissure. One hundred adults were assigned randomly to receive treatment with either type A botulinum toxin (30 units Botox or 90 units Dysport) injected into the internal anal sphincter or 0.2 per cent nitroglycerin ointment applied three times daily for 8 weeks. After 2 months, the fissures were healed in 46 (92 per cent) of 50 patients in the botulinum toxin group and in 35 (70 per cent) of 50 in the nitroglycerin group (P=0.009). Three patients in the botulinum toxin group and 17 in the nitroglycerin group reported adverse effects (P<0.001). Those treated with botulinum toxin had mild incontinence to flatus that lasted 3 weeks after treatment but disappeared spontaneously, whereas nitroglycerin treatment was associated with transient, moderate-to-severe headaches. Nineteen patients who did not have a response to the assigned treatment crossed over to the other therapy.

Surgery better than no surgery for spinal stenosis

Clinical Question:
In adults with spinal stenosis, is surgical treatment more effective than nonsurgical treatment?

Bottom Line:
Although patients improved over the 2-year follow-up regardless of initial treatment, those undergoing decompressive surgery reported greater improvement regarding leg pain, back pain, and overall disability. The relative benefit of initial surgical treatment diminished over time, but outcomes of surgery remained favorable at 2 years. Longer follow-up is needed to determine if these differences persist.

Reference:
Malmivaara A, Slatis P, Heliovaara M, et al, for the Finnish Lumbar Spinal Research Group. Surgical or nonoperative treatment for lumbar spinal stenosis? A randomized controlled trial. Spine 2007;32:1-8.

Study Design:
Randomized controlled trial (nonblinded)

Synopsis:
No previous randomized trial has assessed the effectiveness of surgery in comparison with conservative treatment for spinal stenosis. Authors from Finland therefore assessed the effectiveness of decompressive surgery as compared with nonoperative measures in the treatment of patients with lumbar spinal stenosis. They did a randomized trial in four university hospitals that agreed on the classification of the disease, inclusion and exclusion criteria, radiographic routines, surgical principles, nonoperative treatment options, and follow-up protocols. A total of 94 patients were randomized into a surgical or nonoperative treatment group: 50 and 44 patients, respectively. Surgery comprised undercutting laminectomy of the stenotic segments, augmented in 10 patients with transpedicular fusion. The primary outcome was based on assessment of functional disability using the Oswestry Disability Index (scale, 0-100). Data on the intensity of leg and back pain (scales, 0-10), as well as self-reported and measured walking ability, were compiled at randomization and at follow-up examinations at 6, 12, and 24 months. Both treatment groups showed improvement during follow-up. At 1 year, the mean difference in favor of surgery was 11.3 in disability (95% confidence interval [CI], 4.3-18.4), 1.7 in leg pain (95% CI, 0.4-3.0), and 2.3 in back pain (95% CI, 1.1-3.6). At the 2-year follow-up, the mean differences were slightly less: 7.8 in disability (95% CI, 0.8-14.9), 1.5 in leg pain (95% CI, 0.3-2.8), and 2.1 in back pain (95% CI, 1.0-3.3). Walking ability, either reported or measured, did not differ between the two treatment groups.

Alendronate (Fosamax) therapy unnecessary for most women after 5 years

Clinical Question:
What is the optimal duration of therapy with alendronate for women with postmenopausal osteoporosis?

Bottom Line:
Women who discontinued alendronate after 5 years showed a moderate decline in BMD and a gradual rise in biochemical markers but no higher fracture risk other than for clinical vertebral fractures compared with those who continued alendronate. These results suggest that for many women, discontinuation of alendronate for up to 5 years does not appear to significantly increase fracture risk. However, women at very high risk of clinical vertebral fractures may benefit by continuing beyond 5 years.

Reference:
Black DM, Schwartz AV, Ensrud KE, et al. Effects of continuing or stopping alendronate after 5 years of treatment. The Fracture Intervention Trial Long-term Extension (FLEX): A randomized trial. JAMA 2006;296:2927-38.

Study Design:
Randomized controlled trial (double-blinded)

Synopsis:
The optimal duration of treatment of women with postmenopausal osteoporosis is uncertain. The authors compared the effects of discontinuing alendronate treatment after 5 years with continuing for 10 years. They did a randomized, double-blind trial conducted at 10 US clinical centers that participated in the Fracture Intervention Trial (FIT). Participants were 1,099 postmenopausal women who had been randomized to alendronate in FIT, with a mean of 5 years of prior alendronate treatment. They were randomized to alendronate, 5 mg/d (n = 329) or 10 mg/d (n = 333), or to placebo (n = 437) for 5 years (1998-2003). The primary outcome measure was total hip bone mineral density (BMD); secondary measures were BMD at other sites and biochemical markers of bone remodeling. An exploratory outcome measure was fracture incidence. Compared with continuing alendronate, switching to placebo for 5 years resulted in declines in BMD at the total hip (-2.4%; 95% confidence interval [CI], -2.9% to -1.8%; P<.001) and spine (-3.7%; 95% CI, -4.5% to -3.0%; P<.001), but mean levels remained at or above pretreatment levels 10 years earlier. Similarly, those discontinuing alendronate had increased serum markers of bone turnover compared with those continuing alendronate: 55.6% (P<.001) for C-telopeptide of type 1 collagen, 59.5% (P<.001) for serum N-propeptide of type 1 collagen, and 28.1% (P<.001) for bone-specific alkaline phosphatase; but after 5 years without therapy, bone marker levels remained somewhat below pretreatment levels 10 years earlier. After 5 years, the cumulative risk of nonvertebral fractures (RR, 1.00; 95% CI, 0.76-1.32) was not significantly different between those continuing (19%) and discontinuing (18.9%) alendronate.
Among those who continued, there was a significantly lower risk of clinically recognized vertebral fractures (5.3% for placebo and 2.4% for alendronate; RR, 0.45; 95% CI, 0.24-0.85) but no significant reduction in morphometric vertebral fractures (11.3% for placebo and 9.8% for alendronate; RR, 0.86; 95% CI, 0.60-1.22). A small sample of 18 transilial bone biopsies did not show any qualitative abnormalities, with bone turnover (double labeling) seen in all specimens.