In the May 2 issue of JAMA, a study reveals that clinical trials registered with ClinicalTrials.gov between 2007 and 2010 were dominated by small, single-center trials.

In addition, the studies show significant heterogeneity (variation that makes them difficult to compare) in methodological approaches, including the use of data monitoring committees, randomization, and blinding.

The researchers explain:

“Clinical trials are the central means by which preventive, diagnostic, and therapeutic strategies are evaluated, but the U.S. clinical trials enterprise has been marked by debate regarding funding priorities for clinical research, the design and interpretation of studies, and protections for research participants.”

The ClinicalTrials.gov registry was created in 1997 to help individuals with serious illnesses find and participate in trials. In September 2004, the International Committee of Medical Journal Editors (ICMJE) announced a policy requiring registration of clinical research as a prerequisite for publication; the policy took effect in 2005.

The researchers said:

“Recent work highlights the inadequate evidence base of current practice, in which less than 15 percent of major guideline recommendations are based on high-quality evidence, often defined as evidence that emanates from trials with appropriate designs; sufficiently large sample sizes; and appropriate, validated outcome measures, as well as oversight by institutional review boards and data monitoring committees (DMCs) to protect participants and ensure the trial’s integrity.”

To analyze the key characteristics of interventional human trials registered in the ClinicalTrials.gov database, Robert M. Califf, M.D., of the Duke Translational Medicine Institute, Durham, N.C., and his team conducted a study focusing on the characteristics considered desirable for producing sound evidence.

After downloading data on 96,346 human trials from the registry and entering it into a relational database to evaluate aggregate data, the team identified the interventional trials. The researchers focused their examination on three clinical specialties: oncology, cardiovascular, and mental health.

In the United States, these three specialties combined encompass the largest number of disability-adjusted life-years lost.

The team examined differences in trial characteristics as a function of clinical specialty, how those characteristics have changed over time, and the factors associated with the use of DMCs, blinding, and randomization.

According to the researchers, the number of trials submitted for registration increased from 28,881 in the 2004-2007 period to 40,970 in the 2007-2010 period. Even though registration records were more complete for 2007-2010, 59.4% of those trials reported that they did not use DMCs.

In addition, the team found that 96% of these trials had under 1,000 participants, and 62% had 100 or fewer. The median number of participants per trial was 58 for completed trials and 70 for trials that had been registered but not yet completed.

Of the 40,970 clinical trials registered during the 2007-2010 period, 37,520 had data on the number of sites and funding source. The researchers found that the largest share of these trials were funded by neither industry nor the National Institutes of Health (NIH) (47%; n = 17,592), with 16,674 (44%) funded by industry, 3,254 (9%) funded by the NIH, and 757 (2%) funded by other U.S. federal agencies.
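The funding-source figures above can be cross-checked against one another. A minimal Python sketch (all counts come from the article; the category names and variable names are ours):

```python
# Cross-check of the funding-source breakdown reported in the JAMA analysis.
# All counts are taken from the article; names are illustrative.
total = 37520  # trials from 2007-2010 with site and funding-source data

counts = {
    "neither industry nor NIH": 17592,
    "industry": 16674,
    "NIH": 3254,
}

# Recompute each category's share of the total.
for source, n in counts.items():
    print(f"{source}: {n} trials ({n / total:.0%})")

# The three categories sum exactly to the total, which suggests the 757
# trials (2%) funded by other U.S. federal agencies are counted within
# the "neither industry nor NIH" group rather than forming a fourth
# mutually exclusive category.
assert sum(counts.values()) == total
```

Running this reproduces the 47%, 44%, and 9% figures reported above.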

Furthermore, the researchers found that 34% of trials were conducted at multiple sites, while the majority (66%) were conducted at a single site.

The researchers explain:

“Heterogeneity in the reported methods by clinical specialty; sponsor type; and the reported use of DMCs, randomization, and blinding was evident.

For example, reported use of DMCs was less common in industry-sponsored vs. NIH-sponsored trials, earlier-phase vs. phase 3 trials, and mental health trials vs. those in the other 2 specialties. In similar comparisons, randomization and blinding were less frequently reported in earlier-phase, oncology, and device trials.”

According to the researchers, the finding of considerable differences in the use of blinding and randomization across specialties raises “fundamental questions about the ability to draw reliable inferences from clinical research conducted in that arena.”

In addition, they state that the finding that half of the interventional trials registered from 2007 through 2010 were designed with fewer than 70 participants may have important policy implications.

“Small trials may be appropriate in many cases. …However, small trials are unlikely to be informative in many other settings, such as establishing the effectiveness of treatments with modest effects and comparing effective treatments to enable better decisions in practice.”

The researchers said:

“Our analysis raises questions about the best methods for generating evidence, as well as the capacity of the clinical trials enterprise to supply sufficient amounts of high-quality evidence needed to ensure confidence in guideline recommendations.

Given the deficit in evidence to support key decisions in clinical practice guidelines as well as concerns about insufficient numbers of volunteers for trials, the desire to provide high-quality evidence for medical decisions must include consideration of a comprehensive redesign of the clinical trial enterprise.”

In an associated report, Kay Dickersin, M.A., Ph.D., of the Johns Hopkins Bloomberg School of Public Health, Baltimore, and Drummond Rennie, M.D., of the University of California, San Francisco, and Deputy Editor of JAMA, explain:

“It appears that despite important progress, ClinicalTrials.gov is coming up short, in part because not enough information is being required and collected, and even when investigators are asked for information, it is not necessarily provided. As a consequence, users of trial registries do not know whether the information provided through ClinicalTrials.gov is valid or up-to-date.”

They continue:

“Trial registration is not some bureaucratic exercise but partial fulfillment of a promise to the patients who agree to participate in these trials on the understanding that the information learned will be made public.

Given the evidence that registration of trials at inception can benefit patients, it is difficult to understand why some investigators and sponsors take this responsibility so lightly.

Trial registries do not evolve on their own. Their content and the transparency they provide are influenced by investigators, systematic reviewers, clinicians, journal editors, sponsors, and regulators, and also by patients and the public. Only through the generosity and positive engagement of all will something emerge that is truly useful.”

Written By Grace Rattue