A large meta-analysis of 47 randomized trials totaling 275,078 patients found that, on average, treatment effect estimates for subjective outcome events were similar whether the events were assessed by onsite assessors or by adjudication committees.

The new article in the Cochrane Database of Systematic Reviews is co-authored by Lee Aymar Ndounga Diakou, Ludovic Trinquart, Asbjørn Hróbjartsson, Caroline Barnes, Amelie Yavchitz, Philippe Ravaud, and Isabelle Boutron, from Cochrane France and INSERM U1153, Paris, France.

Many clinical trials assess the benefit of new treatments on non-fatal events. Such outcomes frequently lack standard definitions, and their determination is subjective. Central adjudication is a process in which a group of independent physicians reviews clinical data in order to assess and validate events consistently.

It is widely recommended that multicentre trials have a central adjudication committee rather than rely on outcomes reported by assessors at each site, where judgements may be subjective. Central adjudication committees are commonly used, especially in large trials. However, the adjudication process can be very time- and resource-consuming, and there is very limited evidence supporting the use of adjudication committees.

This meta-analysis examined randomized trials across medical areas to evaluate the impact of central adjudication on treatment effect estimates. All selected trials reported the same subjective clinical event outcome assessed both by an onsite assessor and by an adjudication committee. The authors investigated whether using the event data from the adjudication committee produced treatment effect estimates different from those based on the onsite investigators' data.

The researchers combined the findings of 47 RCTs and found no evidence of a difference, on average, between treatment effect estimates from onsite assessors and those from adjudication committees (combined ratio of odds ratios: 1.00, 95% confidence interval 0.97 to 1.04).
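For context, the summary measure used here, the ratio of odds ratios (ROR), compares the treatment effect (odds ratio) estimated from adjudication committee data with the one estimated from onsite assessor data within each trial; a ratio of 1 means the two assessment methods yield the same estimate of the treatment effect. A minimal illustrative calculation, using hypothetical odds ratios rather than figures from the review, and assuming the ratio is oriented as adjudication over onsite:

\[
\mathrm{ROR} = \frac{\mathrm{OR}_{\text{adjudication}}}{\mathrm{OR}_{\text{onsite}}} = \frac{0.80}{0.80} = 1.00
\]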

When the researchers divided the data according to whether the onsite assessors knew the patient's allocated treatment and according to how data were submitted to the adjudication committees, they found that there might be important differences between onsite assessment and adjudication committee assessment, depending on which methods are used. The combined ratio of odds ratios was 1.00 (95% CI 0.96 to 1.04) when onsite assessors were blinded; 0.76 (95% CI 0.48 to 1.12) when the adjudication committee assessed events identified independently of unblinded onsite assessors; and 1.11 (95% CI 0.96 to 1.27) when the adjudication committee assessed events identified by unblinded onsite assessors. There was a statistically significant interaction between these subgroups (P = 0.03).

The authors conclude that their findings question the benefit of having an adjudication committee for a randomized trial and highlight the need to revise the planning and functioning of adjudication committees.