How do they measure the effectiveness of birth control?

March 17, 2017

Dear Cecil:

How is the effectiveness of contraception measured? Do they survey people? Could researchers randomize different birth control methods, even if they wanted to? As much as I'd like to, I don't think I could have nearly as much sex as it would take to make a statistically significant sample.

Cecil replies:

Given the stakes involved — higher than those associated with, say, nasal decongestant — you’d certainly hope there’s plenty of published research to confirm that birth control really does what it’s supposed to. And sure enough, there is. Though gauging contraceptives’ effectiveness isn’t quite the grueling sexual slog you apparently imagine, you’re right to guess that logistical and ethical concerns make this task somewhat trickier than figuring out how many noses got unstuffed.

Typically researchers test a birth control method about the same way they’d test any drug or medical device — via randomized controlled trial. Participants are assigned randomly to one of several groups: some use the contraceptive that’s under scrutiny; others use some previously tested treatment to establish a baseline — that’s the control group. So when pharmaceutical researchers tested a transdermal contraceptive patch in 2001, the control group got the pill; in a 1999 trial of polyurethane condoms, the controls used the latex kind.

What you won’t see in these studies, for obvious reasons, is a placebo control group: assuming your volunteers genuinely don’t want to get pregnant, you can’t just give some of them a sugar pill and tell them it’s the pill. Similarly, there usually isn’t a “no-method” group to compare to; if researchers want a baseline conception rate for young women regularly having sex without contraception, they may use an estimate based on external data. (Something like 85 percent within a year is a decent guess.)

And despite your concern, Christine, there’s no need for any one subject to shoulder the sample-size burden herself; the subjects enrolled in these studies regularly number in the thousands. FDA guidelines for condom-effectiveness studies, for instance, recommend at least 400 subject couples over a minimum of six menstrual cycles; testing may be conducted “outside of clinical care settings.” (Most participants prefer it that way, you’d figure, though undoubtedly not all.) But with the real action taking place offsite, test results depend at least in part on subjects’ self-reporting: in that 1999 condom study, participants kept “coital diaries” to record frequency of use, breakage and slippage events, etc.

To compare various contraceptives across multiple studies, you need a single apples-to-apples measurement of effectiveness. The most common is something called the Pearl Index, which professes to quantify how often a birth control method will fail per 100 woman-years of use: the lower the number, the more likely the method is to keep you fetus-free. Devised back in 1933, the Pearl Index enjoys the advantage of being simple to calculate: you just divide the number of pregnancies during a contraceptive study by the product of the number of participants using the method and the number of months the study went on, then multiply by 1,200. That’s it. Spermicide used alone might score as high as 20; the pill is somewhere between 0.1 and 3.
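For the numerically inclined, the arithmetic above can be sketched in a few lines of Python. The figures in the example are hypothetical, not from any actual trial:

```python
def pearl_index(pregnancies, participants, months):
    """Pearl Index: unintended pregnancies per 100 woman-years of use.

    pregnancies  -- unintended pregnancies observed during the study
    participants -- number of women using the method
    months       -- length of the study in months
    """
    woman_months = participants * months
    # 1,200 = 12 months per year * 100 women
    return pregnancies / woman_months * 1200

# Hypothetical trial: 7 pregnancies among 500 women over 12 months
print(pearl_index(7, 500, 12))  # 1.4 -- within the pill's reported 0.1-3 range
```

Note that the index treats all woman-months as interchangeable, which is exactly the assumption the next paragraph takes issue with.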

Simple — or too simple? A big problem with the Pearl Index is that it assumes the results of a study are consistent from month to month, and that just ain’t so. The longer a contraceptive trial continues, the rarer pregnancies become. Why? The most fertile women conceive early and drop out of the study; the women who remain may be less pregnancy-prone, or they may have grown increasingly adept at using the birth control method. Long trials, then, tend to produce lower Pearl numbers, and thus can’t be compared fairly to shorter ones. For this reason, many researchers prefer a stat format called life tables (or decrement tables), which shows results broken out by month instead.
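A life table, by contrast, tracks each month separately: the failure rate for a month is pregnancies divided by women still at risk, and the cumulative pregnancy probability compounds those monthly rates. A minimal sketch, using made-up numbers that show the declining per-month rate described above:

```python
def life_table(monthly_data):
    """Decrement table: per-month failure rates instead of one pooled number.

    monthly_data -- list of (women_at_risk, pregnancies) tuples, one per month
    Returns (monthly_rates, cumulative_pregnancy_probability).
    """
    rates = []
    surviving = 1.0  # probability of no pregnancy so far
    for at_risk, pregnancies in monthly_data:
        rate = pregnancies / at_risk
        rates.append(rate)
        surviving *= 1 - rate  # compound month by month
    return rates, 1 - surviving

# Hypothetical 3-month trial of 1,000 couples in which the monthly
# rate falls as the most fertile couples conceive and drop out
data = [(1000, 20), (980, 12), (968, 6)]
rates, cumulative = life_table(data)
print([round(r, 4) for r in rates])  # monthly rates decline: 0.02, 0.0122, 0.0062
print(round(cumulative, 3))          # 0.038 cumulative over three months
```

Reporting the monthly rates side by side is what lets a six-cycle study be compared fairly with a two-year one.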

But much of what we know about relative contraceptive effectiveness isn’t based on clinical trials at all. For decades now, Princeton population researcher James Trussell has been compiling and reviewing current data on birth control use for a series of reports called “Contraceptive Failure in the United States.” In setting out his 2011 charts of unintended pregnancy rates, Trussell leans less on test results than on women’s responses (adjusted appropriately) from the long-running National Survey of Family Growth, run by the Centers for Disease Control. Now, it’s the CDC, so the survey is conducted with the utmost rigor. But trying to correct for known distortions in the data, Trussell suggests, is complicated to say the least: study participants regularly underreport abortions, for instance, meaning a number of unintended pregnancies don’t get counted; but if you adjust for this by surveying women seeking abortions in clinics, they tend to overreport that they really were using contraception, meaning you count too many failures.

If we’re always having to take the subjects’ word for it, you may wonder, how do we reliably distinguish between contraception failure — called “perfect-use failure” in the literature — and user error? This issue isn’t lost on Trussell: “Additional empirically-based estimates of pregnancy rates during perfect use are needed,” he concludes. The march of science is being held back, it seems, because there aren’t enough folks who can roll a condom on correctly every time.
