
The NEJM and Clinical Trials: What’s Going On?

Here’s an article from the New England Journal of Medicine on randomized clinical trials. You would expect one of the best-known medical journals in the world to be in favor of clinical trials, but that doesn’t quite seem to be the case. The article is very much an on-the-one-hand, on-the-other-hand affair:

By the turn of the 21st century, RCTs had achieved the status of gold standard for therapeutic evidence — but one with well-documented limitations. Physicians continue to pursue alternative methods of knowledge production that are faster or less expensive than RCTs, or that claim to answer questions that RCTs cannot. Yet beyond medicine, RCTs are increasingly emulated, even idealized. . .

Yet despite their limitations, RCTs have revolutionized medical research and improved the quality of health care by clarifying the benefits and drawbacks of countless interventions. Clinical investigators, supported by government funding and empowered by FDA regulations, have used RCTs to advance clinical research theory and practice. Critics have become increasingly adept at ferreting out flaws in RCTs, forcing trialists to be more vigilant in their designs. From a historical perspective, the RCT is not a single or stable technique, but an approach that has evolved as physicians have revised and refined clinical research.

The idea that RCTs would be the only authoritative arbiter to resolve medical disputes has given way to more pragmatic approaches. Experimentalists continue to seek new methods of knowledge production, from meta-analyses to controlled registry studies that can easily include large numbers of diverse patients. Observational methods are seen as complementary to RCTs, and new forms of surveillance can embed RCTs into the structure of data collection within electronic health records. RCTs are now just a part — though perhaps the most critical part — of a broad arsenal of investigative tools used to adjudicate efficacy and regulate the therapeutic marketplace. This status may continue to evolve with the recent turn (back) to personalized or precision medicine. As medicine focuses on the unique pathophysiology and coexisting conditions of individual patients, the applicability of the generalized data produced by RCTs will come under intensified scrutiny.

The whole article is like that – trials have done great things, but there’s just something not right about them, somehow. I can’t quite get a handle on where the authors are coming from, and I don’t seem to be the only reader with that problem. Here’s Vinay Prasad’s take, and he seems to think this is part of a larger movement at the NEJM:

This week the NEJM published, “Assessing the Gold Standard — Lessons from the History of RCTs.”  The article rather boringly describes the history of RCTs, and makes some uncontroversial points, but at the same time it systematically denigrates the role of the RCTs and undermines their importance to justify future medical treatments. 

Most of the criticisms of RCTs made in the article are completely in error.  For this reason, the article is a subtle threat to the pursuit of evidence-based medicine and a threat to better decisions for patients.  And, precisely because it is subtle, its danger is all the greater.  It is likely to be swallowed, hook line and sinker.   Furthermore, the article comes at a time when the fundamental editorial direction of the Journal has been questioned.  This article is likely further evidence of the NEJM’s regressive thinking, and is a strategic move by the Journal to undermine evidence.

Why on earth would they want to do that, is the question. The only thing I have to offer, and I really hope that I’m wrong about this, is that some of the authors of this current paper are history-of-science and history-of-medicine types. David Wootton’s The Invention of Science (blogged about here) goes into detail on some of the battles that have gone on in those fields. There have been acrimonious disputes about what science really is, what ways we have of knowing what the facts really are, and what exactly a fact is in general. As you can see, things get pretty philosophical, and they also get pretty political, too, with all sorts of stuff being dragged in from the social science hallways about privileged ways of knowledge, etc.

I really wonder if some of that is behind this latest article, especially when I read in it passages like this one:

Such controversies attracted attention from social scientists and policy scholars. As sociologist Steven Epstein noted, RCTs had become “crucial sites for the negotiation of credibility, risk, and trust.” When they take place in fraught medical, social, and political contexts, RCTs, “rather than settling controversies, may instead reflect and propel them.”46 Historian Harry Marks argued that RCTs must be understood not merely as scientific techniques but also as social events: “even the simplest RCT is the product of a negotiated social order, replete with decisions — some contested, some not — and with unexamined assumptions.”36 Even though RCTs were developed to produce generalizable, universal biomedical knowledge, they have remained deeply entangled in local social conditions, economics, and politics.

That, to me, is the social science worldview in a nutshell – that everything, simply everything, is deeply entangled in social conditions, economics, and politics. Take an NMR spectrum? A political act. Weigh out some copper sulfate? Politics. I sometimes think that that’s my vision of Hell. A more immediate vision of it, for me, can be found in the sorts of abstracts tweeted out by RealPeerReview (a recent example of which made the pages of Science is here). Behind some of this stuff is a worldview that holds that there isn’t such a thing as “knowledge” at all, just power struggles and wishful thinking. It’s hard for me to put into words just how strongly I reject that line of thought, which at one point seemed to be going too far for even some of its practitioners. Inasmuch as I can follow what he’s talking about, Bruno Latour seems to be one of the leading writers in this area who, in more recent years, has been wondering just exactly what he’s done.

I hope that’s not what’s going on here. The NEJM is too valuable as a medical journal to head down that road. For now, have a look at the article and at Prasad’s detailed critique of it. He says the journal is on a mission to “further its agenda to replace better evidence with lesser evidence”, and I hope he’s wrong.

37 comments on “The NEJM and Clinical Trials: What’s Going On?”

  1. William Gerber says:

    I’m a layperson, so my question may not be asked elegantly: Hypothetically, if we ever have a unified database with detailed information on everyone participating in the healthcare system, including each individual’s genetics, proteome, prior illnesses/diseases, treatments, treatment outcomes, etc. etc. etc., and assuming all attributes that would be factors in randomization are available in the database, could we ever have historical controls good enough to replace the control group in an RCT? I realize this may be a pipe dream.

    1. NJBiologist says:

      The complexity and data requirements would get out of hand in a hurry. Think of the number of things you’d want to track just for hypertension, and then add in the things you’d need to monitor prospectively in case they turned out to be important; you’ll need enough data points (people) to accurately measure the impact of each.
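
      Just to put a rough number on that blow-up, here is a back-of-the-envelope sketch in Python. The factors and level counts are invented for illustration, not taken from any real trial:

          # Rough sketch: how many matching strata a "match on everything"
          # historical control would need. Factors/levels are invented.
          from math import prod

          factors = {
              "age_decade": 6,        # 30s..80s
              "sex": 2,
              "baseline_bp_tertile": 3,
              "diabetes": 2,
              "smoking": 3,           # never / former / current
              "renal_function_band": 4,
              "prior_mi": 2,
              "concomitant_meds": 8,  # coarse classes
          }

          strata = prod(factors.values())
          print(f"{strata:,} strata")  # 13,824 cells from just 8 coarse factors

          # With, say, 30 patients per cell for a stable estimate:
          print(f"~{strata * 30:,} well-characterized historical patients needed")

      And that is before adding anything you would need to monitor prospectively in case it turned out to matter.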

    2. Mike says:

      The FDA has accepted historical data (patient registries) as a control for some approvals. It’s not that unusual in orphan diseases, where there are so few patients that recruiting an adequately powered control arm just isn’t feasible.

    3. I think an additional point is the question of unknowns. While we can hope that we are measuring and assessing all relevant variables, we don’t know what we don’t know. Compounding this is the fact that the environment is never the same. Therefore historical controls are highly unlikely ever to be adequate controls.

      As a tangent, this is one of the issues in the current Sarepta/eteplirsen debate about the value of their phase II trial. Not only was the trial highly underpowered (12 boys), but it also relied on natural-history (i.e., historical control) data to support the claim of efficacy. So far the FDA and many commentators have found that comparison wanting.

    4. David says:

      Off the top of my head — I have probably overlooked some things, since I am not involved in RCTs, so take this with a grain of salt:
      I hardly think it is possible to have, for every disease of interest, historical patients matching every combination of possible factors (age? genetics? blood-test results? …) and every regimen. Even if we had this colossal amount of data, wouldn’t that mean that every study run after the historical control group was assembled gives everyone the real test treatment? Then it is no longer a double-blind study, and the scientists doing it would be biased. At some point this would become public knowledge and the placebo effect would muddle everything up even more. If this were hidden and historical data were used only sometimes (making it half obsolete), people would still expect to get the real test treatment more often, and again there is the placebo effect.
      And in the end the environmental factor comes into play. Our environment is always changing and can distort the data as well. For example, banning asbestos or leaded gasoline probably had an impact on overall health. So maybe everyone in the newer study is healthier than the people in the historical control group simply because of those environmental factors, and not because of the tested drug.

    5. Chaos Theory says:

      One of the amazing things I learned from chaos theory is that with a complex system (even a deterministic one with well-described governing equations), as you improve your measurement of the system, your ability to predict its outcome increases only with the logarithm of that improvement. So even if you measured temperature and wind speed a million times more accurately than today, your ability to predict the weather (even knowing the governing equations of weather, if that were possible) would only increase about six-fold.

      Biology is certainly complex enough to show chaotic behavior, so our ability to predict it will remain poor even in a world where the genome/proteome/metabolome/other-cool-omes are all known.
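
      For what it’s worth, the arithmetic behind that logarithm can be sketched in the standard single-Lyapunov-exponent picture (a simplification — real systems have whole spectra of exponents, and the million-fold/six-fold pairing assumes the horizon is counted per decade of precision):

          \delta(t) \approx \delta_0 \, e^{\lambda t}, \qquad
          T \approx \frac{1}{\lambda} \ln \frac{\Delta}{\delta_0}

      An initial measurement error \delta_0 grows exponentially at rate \lambda, so the prediction horizon T — the time until the error reaches a tolerance \Delta — grows only with the logarithm of the initial precision. A million-fold (10^6) improvement in \delta_0 buys you six decades’ worth of \ln 10 in T, not a million-fold longer forecast.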

    6. tangent says:

      Is your idea that with all that descriptive data, you could do an observational study of patients receiving various treatments and control for every factor other than the treatment choice? Unfortunately, that will never be nearly as good as randomizing subjects between the treatments. There are endless subtle reasons why a patient might have gotten one treatment rather than another, and history tells us that nobody knows how to control for them all. So we cut through the knot by making a coin flip the only reason in operation.

      Or did I misread, and you meant it would still be a randomized trial, just with two treatment groups rather than a placebo? Sure, you could do that; trials are often run against “standard of care” rather than against placebo.
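
      To make the coin-flip point concrete, here is a toy simulation in Python (all numbers invented): a hidden “severity” variable drives both who gets treated and how patients do, so the naive observational comparison gets the effect badly wrong while randomization recovers it:

          # Toy simulation: an unrecorded severity drives both treatment
          # choice and outcome. All numbers are invented for illustration.
          import numpy as np

          rng = np.random.default_rng(0)
          n = 100_000
          severity = rng.normal(size=n)      # the unmeasured confounder
          true_effect = 0.5                  # real benefit of treatment

          # Observational world: sicker patients are likelier to be treated.
          treated = rng.random(n) < 1 / (1 + np.exp(-2 * severity))
          outcome = true_effect * treated - severity + rng.normal(size=n)
          print(outcome[treated].mean() - outcome[~treated].mean())
          # strongly negative: the drug looks harmful, because the treated
          # group was sicker to begin with

          # Randomized world: a coin flip is the only reason for treatment.
          treated = rng.random(n) < 0.5
          outcome = true_effect * treated - severity + rng.normal(size=n)
          print(outcome[treated].mean() - outcome[~treated].mean())
          # ~0.5, the true effect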

  2. NJBiologist says:

    “When they take place in fraught medical, social, and political contexts, RCTs, “rather than settling controversies, may instead reflect and propel them.”46 Historian Harry Marks argued that RCTs must be understood not merely as scientific techniques but also as social events: “even the simplest RCT is the product of a negotiated social order, replete with decisions — some contested, some not — and with unexamined assumptions.”36 Even though RCTs were developed to produce generalizable, universal biomedical knowledge, they have remained deeply entangled in local social conditions, economics, and politics.”

    How does any of that get better if we use less rigorously controlled study designs (which is everything I’ve ever seen proposed as an alternative to RCTs)?

  3. cp says:

    “Not even wrong” comes to mind whenever I see sociological commentary on scientific methods.

    Then again, the concept of right/wrong might very well be the product of some pervasive hegemony for all I know.

  4. Hap says:

    Maybe someone should ask NEJM what they’d prefer instead of RCTs, and why their suggested alternative is likely to be better than RCTs (well, for people other than supplementeers and small pharma companies looking for more dependable payouts).

  5. a says:

    The authors are right that plenty of sociological factors mess up many RCTs. A perfect trial would be… perfect, but there’s no such thing in the real world. MDs get excited about a drug and fudge exclusion criteria, or fail to record a patient characteristic because they like the patient and know it would keep them out of a promising trial, causing a drug to fail. Nissen and others use RCT data for meta-analyses that are wrongheaded. Novartis included a crazy long run-in for LCZ696 to ensure success, then everyone noticed that the trial was oddly impractical compared with real-world use, so no one prescribes it. It’s not all cut and dried. In many ways, retrospective database studies are more powerful and more realistic.

  6. tnr says:

    I did not read the NEJM article, but perhaps what they are getting at is that fundamentally difficult issues remain even with a well-designed, well-executed clinical trial — issues such as “how generalizable are the results of this trial to the population likely to receive the intervention?” and “is the overall efficacy and safety seen in the trial a good summary for all patients, or are there subsets of patients who benefit more or less, or who are at higher risk of serious adverse events?”

    I doubt anyone is arguing for less-controlled designs, but research continues on how to design and analyze trials that better answer questions like the above.

  7. Curious Wavefunction says:

    RCTs remind me of Churchill’s quote about democracy: they are the least evil of all systems. The limitations of RCTs also bring to mind the now well-known story of Iressa (gefitinib), which was withdrawn from the US market for lack of efficacy, only to be re-introduced a few years later when it was found to work exceptionally well in cancer patients with an EGFR mutation. Based on that story, I think it does make sense for people who run RCTs to put more thought into selecting the right patient populations, perhaps based on biochemical or other preclinical evidence.

    1. CMCguy says:

      CW, perhaps you do not mean to imply that this is not being done now, but in the studies I have seen, a heck of a lot of effort goes “into selecting the right patient populations” — it is probably among the most difficult tasks to do well in drug development. Even when potential biomarkers are known up front, which frequently has not been the case, being able to test for them adequately and reliably during a trial is not usually trivial; their significance often only becomes apparent in post-hoc analysis of patient data. As for using preclinical evidence, how often is the lack of good disease models bemoaned on this site? Beyond opening the door, that information may not be useful for identifying the best treatable populations. I do agree the Churchill quote may be apt here, since even at its best advancing a new drug can be messy. However, as for the NEJM article, especially after reading Prasad’s debate points, I saw nothing proposed — other than denigration of the value RCTs do offer — that could be applied broadly as a viable alternative.

  8. Anonymous says:

    This article makes sense and is long overdue if one understands the ideological war that has been waged over who should control patient care. In 1997 Sackett et al. published an article on evidence-based medicine. They were arrogant enough about its claims to say things like “if you find that [a] study was not randomized…we’d suggest that you stop reading it and go on to the next article.” As if knowledge cannot also come from observation, or from actually figuring out how biological processes work. And never mind that the averages produced by an RCT may or may not fit individual patients, especially those with multiple comorbidities.

    A section of the health policy community jumped on the RCT bandwagon and started behaving as if individual physicians had been using Ouija boards rather than evidence in treating their patients for the last 300 years. Some were ideologically committed to centralized control of patient care. Some stood to make fortunes selling clinical guidelines, metrics, or pay for performance programs. Some wanted power, others stood to rise in stature and get more grant money because they had the ability to conduct large randomized studies. The hubris has resulted in some very large, very expensive, RCTs that produced treatment recommendations from such flawed samples that their recommendations were ignored by individual physicians even as national bodies declared that patients should be treated according to their results.

    The Women’s Health Initiative (WHI) study of Hormone Replacement Therapy (HRT) is a good example of this. It was billed as a “gold-standard” RCT. Those running it said its results applied to “healthy” women (never mind that many were on blood-pressure meds, had had prior heart attacks or a history of angina, and were on cholesterol meds). The study was stopped in 2002 due to “an increased risk of breast cancer and because the risk of breast cancer, coronary heart disease, stroke, and blood clots outweighed the benefits on hip fracture and colorectal cancer.” The breast cancer risk rose by less than a tenth of a percent per woman per year. Guidelines recommending against HRT were hastily devised by the US Preventive Services Task Force. By 2009, US HRT use had fallen by more than 70 percent.

    The problem was that the study, as one professor put it, “used a treatment we don’t use on a group of patients we don’t treat.” The benefits of HRT seem to depend on how soon after menopause it is prescribed, and the delivery mechanism seems to matter too. There was also subsequent disagreement about how deaths were classified — a discrepancy large enough to influence the results. Seventy percent of the women in the study were way past menopause, in an age group that would be expected to experience negative cardiovascular effects from HRT based on what medicine already knew. In addition, there were too few 50-to-54-year-olds in the sample to show cardioprotective effects even if they did exist. Results from other studies suggest that the heart attacks may have been a result of giving HRT to women who had been in menopause for a decade or more.

    The European Menopause and Andropause Society looked at all the studies and at the scientific discussion in the years following the WHI, and changed its recommendations in 2009: it now says that HRT may be good for women aged 50-59. Policy makers in the US, unfortunately, have been conditioned by scores of courses in schools of public health and medicine to think RCT equals gold standard. With the WHI the only RCT in the arena, US recommendations remain the same. A lot of women are probably being advised against a therapy that could help them.

    In my view, science requires weighing all of the available evidence. After decades of claims that RCT evidence is the only evidence that counts in guiding patient care, it is nice to see at least a nod towards rationality. Plus, I agree with your definition of Hell and would rather not have politics running my medical care.

    1. Hap says:

      The whole point of randomizing trials is to be reasonably sure about what you know (to make sure that you have evidence and not opinion or fiction or imagination). Doctors have lots of experience, but people filter things by what they already think, so the cases they remember may not be representative of the cases they saw. Meta-analysis depends a lot on being able to standardize outcomes across trials, which is… difficult. How do you sort outcomes and get information that doesn’t encode your (or someone else’s) biases without randomizing?

  9. Lukas Kambic says:

    I wondered when we’d see postmodernist academic anti-rational claptrap start to infiltrate formerly legitimate science literature. Now we’ll see claims that “recent findings in sociology suggest” that traditional herbal treatments fail in the clinic because the physicians administering them aren’t sufficiently familiar with cultural oppression as interpreted by post-structuralist anthropology. Sad but perhaps inevitable — it seems people can only handle so much reality.

  10. dave w says:

    I’m thinking this passage from your quote above may be the nut of the concern:

    “This status may continue to evolve with the recent turn (back) to personalized or precision medicine. As medicine focuses on the unique pathophysiology and coexisting conditions of individual patients, the applicability of the generalized data produced by RCTs will come under intensified scrutiny.”

    If such trials are an “absolute standard” for government approval, what happens to potential therapies that just don’t fit that model?

    How do you do a “large-scale randomized trial” of (for example) a cancer treatment that would be based on tailoring synthetic antibodies to the genetic details of each patient’s specific tumor…?

    1. tangent says:

      What’s the issue if you specify “here’s the antibody-tailoring process we’re going to apply to every patient in the treatment group”? You get a result saying how well that process works over the population you drew from.

      Granted, your result is only as valid as your definition of “the treatable population”. And granted, philosophically, that you will never know whether an individual patient had good odds of improving on treatment or just got lucky — but that’s life, right?

    2. Imaging guy says:

      Actually your question concerning personalized treatments has already been satisfactorily addressed by RCTs per Vinay Prasad (link in the post).

      “Along these lines, many say RCTs are not needed for precision oncology where drugs are given for specific mutations at an N of 1 level. How can we test an individualized therapy? Easy, you randomize pts to a precision oncology strategy or to usual care. And, thus far, when you do it, you get no benefit—as seen in the Lancet Oncology RCT SHIVA. So, hello people!! We are already doing RCTs on this topic. IF you oppose RCTs as impossible but some have been reported, something is wrong with your thinking.”

  11. Streptomycin says:

    Rank-and-file physicians were among the staunchest opponents of RCTs around the time of Kefauver-Harris — something along the lines of “my lifetime of anecdotal experience has to mean more than a statistically powered analysis.” Time, however, has consistently shown that sentiment to be misguided.

  12. Anonymous Researcher snaw says:

    Time after time, when something “everybody knows” is finally tested with a proper RCT, it turns out our collective wisdom is wrong. I suspect this pushback represents a reaction by the many experts who resent being shown the errors of their long-established views.

  13. Anonymous Researcher snaw says:

    There is this classic:

    http://www.bmj.com/content/327/7429/1459

  14. Peter S. Shenkin says:

    Some responders seem to hold the view that a “true” RCT would involve decomposing every enumerable contributing factor. Clearly impossible.

    But in fact, serendipitous observations can and do drive new RCTs. The Viagra story is well known: only the observation that participants in an unsuccessful trial of Viagra for cardiac conditions wanted to hold onto their meds led the curious conductors of the trial to find out why. That in turn led to new RCTs, and to the approval of Viagra as a safe and effective medication for ED.

    I don’t know the history of distinguishing estrogen-dependent breast cancer from other varieties, but whether or not it started serendipitously, it led to a subclassification of the disease that allowed more effective treatment of at least that subclass — and those treatments must have been tested (and I’m sure new ones continue to be tested) in RCTs for estrogen-dependent breast tumors.

    Off-label use of medication is sometimes driven by serendipitous observation too. For example, it has been observed that rivastigmine and similar drugs, approved for slowing cognitive decline in Alzheimer’s patients, greatly decrease the tendency of Parkinson’s patients to fall. Some Parkinson’s patients fall several times a day and can hurt themselves severely, and neurologists are now prescribing these drugs for that purpose. Here you might ask whether an RCT is really necessary, since the effect really does seem to be there — but an RCT will almost certainly turn out to be needed if insurance companies start balking at paying for off-label use of the drug.

    Finally, I suppose we owe the NEJM’s attitude, ultimately, to Thomas Kuhn. Perhaps they should be reminded that Kuhn eventually said (paraphrasing a statement attributed to Marx), “I am not a Kuhnian!”

  15. Oblarg says:

    Reading that twitter page you linked makes me wonder if I’m not actually trapped in some sort of Borges short story – a lot of those abstracts would be perfectly reasonable in Tlön, I imagine. Absolutely ghastly. One has to wonder to what extent many of those postmodern “scholars” actually believe what it is that they’re writing.

    You can really see what Sokal meant when he spoke of “protecting the Left from a trendy segment of itself.” Gender and race are important issues, and I think there probably are some legitimate questions as to what extent they have affected some fields of scientific knowledge (especially in behavioral psychology and related fields, where the “science” is quite a bit flimsier than, say, physics) – but if this sort of garbage is what comes to mind when people think of those questions, no one is going to take *any* of it seriously.

  16. thinking says:

    I have always wanted to write something on the inherent limits that RCTs impose on discovery. There are many fundamental questions about human biology (disease, nutrition, aging, etc.) that could only plausibly be answered with RCTs.

    We just can’t run that many of them; they are costly to run. That puts some pretty strict limits on the knowledge that can only come from RCTs — most of it we will never discover.

  17. steve says:

    The current system is simple unsustainable. Requiring large (and sometimes multiple) randomized, placebo-controlled Ph3 trials for every drug makes drug discovery so expensive that the entire system will eventually fail. I don’t know how to replace it but there needs to be a better way of triaging the drugs that require that level of proof vs those that don’t. There is too much competition for patients and way too much cost; in a world of diminishing health care spending there is simply no way we can continue on the present course of 10 years and $1B for each approved drug.

    1. steve says:

      Sorry – “simply unsustainable”

    2. Hugo says:

      This is not a problem with the RCT, but a case of diminishing returns in pharma research. People are looking for compounds with smaller and smaller benefits, which require larger and larger trials to prove the benefit exists at all. Of course you need a $1B trial if you want to sell a new statin: the benefit over existing ones will be very small at best, and you need thousands of patients to detect it.
      If someone finds a completely new antibiotic that works on MRSA, I doubt they would need more than a few dozen patients, and the RCT would be much, much cheaper (and faster).
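
      That scaling is easy to check with a standard power calculation — a sketch using statsmodels, with illustrative standardized effect sizes and the usual 5% significance / 80% power:

          # Patients per arm for a two-arm trial at alpha=0.05, power=0.80,
          # as the standardized effect size shrinks. Effect sizes invented.
          from statsmodels.stats.power import TTestIndPower

          solver = TTestIndPower()
          for d in (0.8, 0.5, 0.2, 0.05):
              n = solver.solve_power(effect_size=d, alpha=0.05, power=0.8,
                                     alternative="two-sided")
              print(f"effect size {d}: ~{n:.0f} patients per arm")
          # n scales roughly as 1/d**2: a 10x smaller benefit needs
          # ~100x the patients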

      Healthcare spending in the USA is getting out of hand because everybody (doctors, hospitals, insurers) wants to make way too much money, and they are able to because it’s an under-regulated market (from a European point of view) with extreme information asymmetry — not because RCTs are too expensive.

  18. JeffC says:

    When I read this, the first thing that came to mind was Ebola and the clinical trials of the vaccines. If I remember correctly, the NIH wanted a statistically perfect RCT, whereas the CDC and the Europeans were happy with a comparative study or a ring-vaccination approach that, while not as “perfect” as an RCT, would provide reasonable evidence that the vaccines worked. This also involved a huge number of social and ethical considerations; Liberia is not the Beltway. The NIH trial was essentially impossible and was never going to recruit sufficient patients — and sure enough, it never did, while the ring-vaccination study was sufficient to demonstrate that the vaccine works. The NIH trial was unethical in my view and totally ignored the social situation on the ground. So I can completely understand that there are occasions where an RCT is inappropriate on ethical and social grounds, even if the RCT design is “better” scientifically and statistically and is theoretically possible to do. It’s worth noting that the stats group at NIH (Infection) rules the roost on all trial designs, so there’s a nice bit of internal politics there too.

  19. Random Scientist says:

    I am a bit surprised at how frustrated both this NEJM paper and this discussion have made me. Apparently I have some strongly held beliefs that I’ve formed over the years, but from my admittedly limited vantage point, the majority of the points made in the paper and in this thread are losing the forest for the trees. To highlight just a few of the more bothersome ones:

    1) RCTs are a tool, and like any tool they must be used appropriately — this goes both for the effect being studied and for the trial design. You don’t need a hammer to push in a thumbtack, and you shouldn’t use a screwdriver to remove a bolt. Shining a spotlight on a few poorly designed or executed RCTs, or on situations where an RCT was not the appropriate tool for the job, is a strawman argument: it ignores all the instances where properly designed and executed RCTs did exactly what they were supposed to.

    2) Personalized medicine is still somewhat of a pipe dream at this point. Yes, genetic screening is making improvements along these lines, but even in the most carefully screened and selected patients it is very rare to see a 100% response to a given treatment. If there is a separate genetic variation that turns a certain sub-population into non-responders, that can only be determined by generating more data in some manner; if there is more than one such variant, the problem grows exponentially. Same goes for drug-drug interactions, etc.

    3) Highlighting meta-analysis as an “advance” is just plain silly. My issue with meta-analysis is that authors try to extract small effect sizes from several studies with different controls, populations, and so on. If the effect sizes were unambiguous, they would typically have been picked up in the original studies. At best, the conclusion of a meta-analysis is that more investigation into the proposed effect is warranted; it is a rare case where one is, or can be, conclusive in and of itself (a toy version of the pooling arithmetic appears after this list). Which brings me to

    4) The purpose of an RCT is to properly remove experimenter or physician bias — emphasis on the word “properly.” This is even more difficult to do in a meta-analysis, since the authors are mining the data for a desirable (i.e., publishable) outcome. Same goes for all of the anecdotal evidence and conventional wisdom, as others have mentioned. And don’t even get me started on the idea of relying on “experts”….
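
    For the curious, here is the toy pooling arithmetic promised above — a minimal fixed-effect meta-analysis sketch in Python, with invented study effects and standard errors:

        # Toy fixed-effect meta-analysis: inverse-variance pooling of
        # invented study effects (say, log odds ratios) and standard errors.
        import numpy as np

        effects = np.array([0.60, -0.20, 0.10, 0.45])  # made-up estimates
        se      = np.array([0.15,  0.20, 0.10, 0.25])  # made-up std errors

        w = 1 / se**2                       # inverse-variance weights
        pooled = np.sum(w * effects) / np.sum(w)
        pooled_se = np.sqrt(1 / np.sum(w))
        print(f"pooled effect: {pooled:.3f} +/- {1.96 * pooled_se:.3f}")

        # Cochran's Q and I^2: how much of the spread is between-study
        # heterogeneity rather than sampling noise -- the "different
        # controls / different populations" problem noted above.
        Q = np.sum(w * (effects - pooled) ** 2)
        I2 = max(0.0, (Q - (len(effects) - 1)) / Q) * 100
        print(f"Q = {Q:.2f}, I^2 = {I2:.0f}%")  # high I^2 = shaky pooling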

    I think I am frustrated because RCTs are one method of doing good science — and having good science to back up your conclusion is the REAL desired outcome, whether it comes from an RCT or not. There are plenty of ways that people have done, and continue to do, bad science. Yes, there are confounding problems of social implications and financial incentives, but nothing is perfect. I often think about the shortcomings of my knife and fork while I’m eating a meal, but I’m also willing to live with them.

    End of rant.

  20. emjeff says:

    I completely agree with the many comments above stating that RCT methods are a tool and are not appropriate in all cases. One of the trends that has driven the view that only RCTs can be believed is that experimental design has been completely ceded to statisticians, rather than being a task shared with clinicians and scientists. When you tell statisticians — who, as a rule, do not have much clinical knowledge — to design a trial, guess what they’ll do? Clinicians and others need to wrest back some control here: RCTs are a great tool, but they are not always appropriate.

  21. matt says:

    Where’s Erebus? This is right up his/her alley: damn RCTs with faint praise, then da-duh-DA! pull out observational studies as the real gold standard. That’s all we need, look at how completely they’ve sorted out dietary guidelines.

    I do worry that we bump up against the limits of our culture’s ability to pursue scientific evidence against the pull of tradition, misplaced self-confidence, and the unpalatability of losing “the magic” and “the mystery.” People claim education is the answer, and I do believe education can change thinking a bit, but social ties are the rubber bands that limit how much education does.

    Physicians like Dr. Oz (and people like Erebus) are an example. Fully aware of RCTs, but deep down they believe they can spot the truth in anecdotal data, or observational data, or historically controlled non-randomized trials, and so on. Deep down they believe there’s a lot of miracle cures being denied them by all this stuffy insistence on RCTs, and they believe those cures can be discerned by smart people such as themselves without the need for RCTs.

    1. Hap says:

      Don’t call up what you can’t send back down, please.

  22. Student1 says:

    First off: I’m a sociology student. (Note the distinction from Social Science.) An undergraduate, too. Quelle horreur! I dare try my hand at a comment nonetheless, and hope for a somewhat forgiving eye from the readers.

    I think there’s a bit that can be learned from the Poststructuralist discussion. Context matters a lot more than people think, in many cases. A lot of the facts people think exist may not /have to/ be thought of that way. The usual example here is the Sex/Gender distinction. (Problems discussed below.)

    At the same time, Poststructuralism isn’t all of Social Science — not nearly. In fact it isn’t even all that fashionable at my boring, very German alma mater, and hasn’t been fashionable for a while as far as I can tell. Though that may just be the particular institution. So part of this discussion may actually be early-90s material that is still rippling outwards through the different fields.

    I’m several kinds of terrified that this is all the Science peers(*) around here see of the Humanities. “Everything is its context in Social Science, therefore no information exists, therefore Social Science is meaningless” is how I read Derek. But that is not really how it works — or at least not how I understand the corner I try to sit and work in.

    Sociology — and here I switch to where I actually have a bit of semi-firm knowledge; I dare not speak for all Social Science everywhere — does not have a firmly established unified methodology, unlike Natural Science. Our epistemology is a pluricentric one. Epistemologies, really. Methods are developed alongside continued efforts to at least clear a path towards a unified one, and they exist in an uneasy, strained parallelism. I don’t think Science in general has a comparable degree of internal diversity at that level of the discussion, and Scientists need to recognize that very fundamental difference.
    Just as an example, on the question of Emergence:
    Weak emergence? (RC models, Institutionalism, et al.) Strong emergence? (Durkheim and a whole lot of Frenchmen.) No emergence at all? (Network Theory.) There are theories, and methods built on them, with each of these as a built-in assumption, depending on what and whom you read. And there are experiments built on them.

    Takeaway: Sociology doesn’t even have a unified way of asking questions.

    However that does not mean that the questions asked within these frameworks are necessarily meaningless — or that the results from working towards answers to those questions are necessarily meaningless.

    In the end Latour and others created tools, possible ways of framing problems — of trying to understand the world — with inherent limitations and inherent problems.

    These tools may not even be practically useful at all. That’s the philosophy side of Sociology for you. Knowledge need not be practical. A thought can be beautiful or a shining purple color and derive some worth from that. In the end these thoughts are tools. Just like an RCT is a tool.
    I think one of the problems is that creating these tools always comes with an incredible amount of self-seriousness, which manifests as attempts to use the hammer on every nail in sight. Or, as Latour himself put it: the Nail beats the Hammer. (Though I’m back-translating from memory here — the quote may not work like this in English.)

    It may also be worthwhile to note that in many ways Latour started the discussion on the historicity of Science with Laboratory Life — and expecting him to “get things right” is a bit like expecting Freud’s theories of mind to hold up to modern standards. In fact, I dare say Wootton’s examination of the development of scientific terminology would not have been possible without the linguistic turn in the Social Sciences, which re-emphasized the importance of linguistic concepts.

    (*) Not my peers.
