Alternate Study Designs to Answer Chronic Pain Research Questions

Often the central clinical trial question within a pain research project is a variation of:

The introduction or application of X reduces the mean overall pain in patients with Y (and improves quality of life).

To answer this and similar pain research questions, a diverse range of study designs, some distinct and some closely related, can be developed and employed to produce scientific evidence that supports reliable, clinically meaningful conclusions about whether or not an intervention reduces pain and improves quality of life in patients with a given pain condition. Contrasting designs should always be explored, as each offers advantages and disadvantages for answering these essential questions within particular pain populations and clinical settings. The choice of design must also take into account not only the developmental phase and indication of the intervention but also the intended statistical analysis and any application for commercial study funding.

In addition to experimental laboratory and animal studies, researchers and physicians need ongoing, reliable scientific evidence supporting the efficacy of new and different treatments for pain. Animal studies often serve as a proof of concept, an important one, but nonetheless only an indicator for further research. The aim, therefore, is to generate a valid basis for efficacy and to demonstrate a clinically meaningful effect, and to this end the discussion of alternate study designs that follows focuses on defining the efficacy of investigative interventions, if it truly exists.

It is reasonable to begin consideration of study designs with a case series. By sampling an adequately sized population with the outcome of interest (e.g. a chronic pain condition) and applying the specific exposure (the experimental intervention), researchers can examine and describe the outcomes of treatment. This information can influence key opinion leaders, prospective co-investigators and study sponsors regarding the value of continuing or funding further research into the intervention, and may provide evidence of reasonable quality to support ethics committee applications and justify progression to stronger study designs where efficacious treatment effects are observed (Esene et al., 2014). The data derived from this type of design can be used to generate pertinent hypotheses that lead to more focused studies of treatments and their effect on pain conditions. One advantage of a case series for this study question is that treatment application techniques and treatment protocols can be refined before the design advances (Chan & Bhandari, 2011), and with regard to funding applications, a case series is one of the more feasible options. In comparison with more complex designs (RCT, cohort), a case series is easier to conduct, requires less time and fewer resources, and ultimately less financial support (Dekkers et al., 2012).

Despite their relative simplicity and potential value, case series also have significant limitations. In a case series, the investigator controls neither treatment allocation nor potential confounders. This is the design's biggest disadvantage: control over important and potentially influential variables is essentially surrendered, meaning that any findings may only be applicable to highly comparable cases (Dekkers et al., 2012). Researchers must be conscientiously aware of the inherent biases this design presents and their impact on external validity, as any results will have limited generalisability to larger populations of interest. A case series can accomplish many of the same experimental aims as other designs, but for many chronic pain clinical trial questions it is often statistically inappropriate and insufficient to provide real value to study financiers.

Another study design that can address chronic pain clinical trial questions is a prospective, open-label, non-randomised, single-arm study. By significantly increasing the complexity of the design, the size of the sample and the level of evidence, researchers may be able to evaluate the effect of an investigational treatment more adequately. The advantages of this design again centre on feasibility: an open-label, non-randomised, non-comparative study can be a relatively small-scale investigation that still allows baseline efficacy data to be established and the feasibility, cost, time, safety, adverse events and effect size associated with a therapy to be assessed. This in turn facilitates prediction of prospective sample sizes and identification of design improvements before larger randomised clinical trials are performed, should efficacy be demonstrated. The potential intrinsic value of this design for addressing pain research questions is compelling and has real clinical relevance.

The open-label, single-arm design can be appropriate for measuring treatment effect as long as the endpoints are appropriately objective and the outcomes suitably robust (Yao, 2014). Researchers may identify treatment responder rates and mean changes in patient-reported outcomes within the sample, assisting future sample size calculations and positively informing statistical analysis planning for larger studies. We must also consider that trial designs in which potential participants know they will receive active treatment (i.e. not placebo/sham) are much more attractive for recruitment, and this advantage may be profound in a feasibility sense. Open-label designs are also less onerous for participants than, for example, a randomised, blinded study, so attrition rates are generally lower (Beyer-Westendorf & Büller, 2011). Open-label designs can establish moderate evidence of efficacy and demand less of both participants and researchers than RCTs, although they are not without their disadvantages.
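
As a rough illustration of how open-label pilot results might feed a later sample size calculation, the sketch below (Python, using statsmodels) converts a pilot mean change and standard deviation into a standardised effect size and estimates the per-arm numbers a subsequent two-arm RCT would need. All figures (mean changes, standard deviation, assumed placebo response) are hypothetical, not data from any real study.

```python
# Sketch only: hypothetical pilot figures, not data from any real study.
from statsmodels.stats.power import TTestIndPower

# Hypothetical open-label pilot results (0-10 numeric pain rating scale):
mean_change_active = 2.5      # pain reduction observed on the investigational treatment
assumed_change_control = 1.0  # assumed placebo/sham response in a future comparator arm
pooled_sd = 2.2               # variability of the change scores

# Standardised effect size (Cohen's d) for the anticipated between-group difference
effect_size = (mean_change_active - assumed_change_control) / pooled_sd

# Per-arm sample size for a two-sided test at alpha = 0.05 with 80% power
analysis = TTestIndPower()
n_per_arm = analysis.solve_power(effect_size=effect_size, alpha=0.05,
                                 power=0.80, alternative='two-sided')
print(f"Effect size d = {effect_size:.2f}, ~{n_per_arm:.0f} participants per arm")
```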

It is well known that open-label study designs carry a high threat to both the internal and external validity of a trial. A number of factors introduce progressive potential for bias, including selection bias, performance and detection bias, and attrition bias (Beyer-Westendorf & Büller, 2011). Participant selection in an open-label, single-arm study (usually via an investigator's subjective assessment of a participant's eligibility) predominantly affects external validity, meaning that trial results will be less generalisable to the “usual” patient population and hence less clinically relevant, but we may also see disturbance in participants' responses to treatment, thereby threatening reliability (Freidlin et al., 2007). In open-label trials there is a risk of reporting and detection bias for adverse events, as participants and investigators may be influenced in their clinical assessment and reporting of potential side effects (Beyer-Westendorf & Büller, 2011). There is also a known potential for biased endpoint reporting as a result of the absence of blinding. Single-arm trials obviously do not allow treatment arm comparisons, as there is no comparator group, and this limits the inferences researchers can make regarding the significance of a pain therapy. Given the significant impact that knowledge of treatment allocation can have on post-treatment decisions and outcome reporting, an open-label design may not always be appropriate.

In considering additional, scientifically appropriate design options to address the study question, we must now examine randomised clinical trial (RCT) paradigms. A randomised design yields the highest level of evidence of any single study design and is considered the ‘gold standard’ of trial designs, specifically for analysing the effectiveness of an intervention (Woodward, 2013). Implementing a randomised design affords a rigorous means of determining whether a cause-effect relationship exists between the treatment of interest and the outcome (pain reduction/improved quality of life) and of quantifying the cost-effectiveness of the intervention (Sibbald & Roland, 1998). As we intend to rigorously examine the effectiveness of the intervention on both pain and quality of life, a randomised design provides a recognised advantage.

An appropriate randomised design to address an interventional chronic pain study question would involve recruiting a suitable sample with the pain condition of interest and randomly assigning participants to one of two treatment arms. Both groups are treated identically except for the active or experimental treatment. Random allocation helps ensure there are no systematic differences between the groups with respect to factors both known and unknown, so their potential influence over the outcome is responsibly controlled (Sibbald & Roland, 1998). Participants are analysed within the treatment group to which they were randomly allocated, irrespective of whether they received the investigative treatment (intention-to-treat analysis), and the resulting data allow investigators to estimate the size of the differences in study outcomes between the groups. The benefit of not only adding a comparator arm but also randomly allocating treatment (e.g. investigative treatment versus placebo/sham/active comparator) and assessing targeted pain and quality of life outcomes over time is that direct comparisons can be made using statistical methods, and investigators can then make reliable inferences about the efficacy of the intervention within the clinical populations from which the participants were drawn. Randomised allocation of treatment is an ideal design for evaluating efficacy, as it serves to minimise bias through several mechanisms, including standardisation of the treatment (Singal et al., 2014). Importantly, an RCT design can significantly reduce or even eliminate issues of access to treatment, as the intervention is provided at no cost to the participant and the treating physician often recommends the trial, which goes a long way towards promoting participant acceptance and adherence (Singal et al., 2014).
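
To make the allocation step concrete, the sketch below shows a minimal 1:1 allocation list built from randomly permuted blocks. The function name and parameters are illustrative only; in a real trial the schedule would be generated and concealed before recruitment, typically by an independent statistician or a dedicated randomisation service.

```python
# Minimal sketch of 1:1 allocation using randomly permuted blocks.
# Hypothetical helper; real trials conceal this list before recruitment begins.
import random

def permuted_block_allocation(n_participants, block_size=4, seed=2024):
    """Return a 1:1 allocation list of 'Active' / 'Control' labels."""
    assert block_size % 2 == 0, "block size must be even for a 1:1 ratio"
    rng = random.Random(seed)
    allocations = []
    while len(allocations) < n_participants:
        block = ["Active"] * (block_size // 2) + ["Control"] * (block_size // 2)
        rng.shuffle(block)          # random order within each block
        allocations.extend(block)   # balance is preserved after every complete block
    return allocations[:n_participants]

schedule = permuted_block_allocation(20)
print(schedule)
```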

Although RCTs are powerful trial design tools, their use can be limited by both ethical and practical concerns. For example, exposing participants to an intervention known or believed to be inferior to current therapy is often considered unethical (Sibbald & Roland, 1998). This is a realistic concern within chronic pain populations, and correspondingly we must ask whether it is ethical to withhold potential pain relief from half of our sample, and whether ethics committees would be unwilling to approve a study in which researchers deprive participants of a potentially useful treatment. Another important and well-known disadvantage of an RCT design is that it is generally much more costly and time consuming than simpler study designs. Study sponsors will consider this seriously, so careful consideration needs to be given to the scope, timing and resource allocation of such a design. RCTs can at times be harder to recruit for, owing to restrictive eligibility criteria and randomisation, and the more restrictive the entry criteria, the greater the corresponding drop in the generalisability and clinical value of the study results. Lastly, an RCT design can also make it very difficult to recruit investigators and referring clinicians, who may be unwilling to “experiment” with alternative treatments.

An important clinical trial design to help address chronic pain study questions is a blinded study. Blinding refers to concealing treatment assignment for the duration of the trial in an attempt to control or reduce systematic biases. Blinding participants to the treatment they have received in an RCT is particularly crucial when the response or outcome criteria are subjective, such as alleviation of pain (Day & Altman, 2000). An often superior option is to double-blind: by keeping the study participants, the investigators involved in their ongoing management, and those collecting, analysing and reporting their clinical data unaware of the treatment assigned, none of them should be biased or influenced by that knowledge (Day & Altman, 2000). Effective blinding of investigators supports the assessment of efficacy by minimising possible bias in participant management and in the examination of outcome status (pain reduction), as well as the influence of expectation on findings, promoting more reliable results. Effective blinding of participants is useful for preventing bias due to the possible placebo effect of an investigative treatment or non-active comparator and the demand characteristics of those with the pain condition of interest.

Blinding and double-blinding are not without potential disadvantages in this clinical setting, though. The complexity of a double-blind design may limit participant selection, producing a disproportionate representation of pain patients and thereby critically affecting the external validity of the results (Büller et al., 2008). Assessment of efficacy can be compromised within the treatment groups by deviations from normal clinical procedures (e.g. if the investigative treatment has an unexpected adverse effect), which in turn may introduce systematic errors in outcome events. Double-blinding may also preclude the investigation of other significant aspects of treatment, including participant-oriented experiences such as quality of life and health economic measures such as resource utilisation (Büller et al., 2008).

The discussion so far has covered randomisation and blinding options for allocating treatment between two groups in order to answer pain-related study questions. In light of this, and of the developmental phase of the investigative treatment, we must now explore the appropriateness of an effective comparator, such as a placebo or sham, for producing robust data when randomising and blinding. The assumption underlying a placebo-controlled design is that the non-placebo effect of a treatment is the “real” or “true” effect (Vickers & de Craen, 2000). A placebo/sham comparator can facilitate blinding, control for placebo effects, minimise potential biases and support high internal validity. In addition to these advantages, a placebo design can be attractive to study sponsors because they supply study treatment to only half of the sample.

Although using a placebo/sham approach to answer such clinical questions is scientifically sound, ethical concerns may arise that outweigh the potential benefits of this design. There is a valid ethical viewpoint that research participants who have a burdensome, debilitating disease (such as moderate to severe chronic pain) should not be unduly exposed to the risks of placebo-controlled trials (Chiodo et al., 2000). Employing a placebo control arm in people who are suffering chronic pain when, for example, researchers could use an evidence-based, effective treatment presents an ethically questionable design that may represent substandard care. To justify a placebo/sham control we must therefore first establish that clinical equipoise exists for the chronic pain condition and allay any therapeutic misconceptions potential participants hold regarding perceived benefit from placebos.

Another legitimate design for establishing efficacy data in an interventional pain study is a crossover design, in which participants receive both the investigative treatment and the comparator intervention in successive treatment periods, with the order of interventions allocated at random. A crossover design is appropriate in most chronic pain populations because the condition is chronic and not expected to change significantly over a short period (e.g. the intervention period of the study). The essential feature distinguishing a crossover trial is that each participant serves as their own control (Wellek & Blettner, 2012). The design therefore avoids problems of comparability between control and experimental groups with regard to confounding variables (e.g. gender, age). A crossover design is also advantageous with respect to the power of the statistical tests performed to confirm any treatment effect (Wellek & Blettner, 2012). Both of these factors present a feasible advantage for a study seeking to determine whether an investigative treatment reduces pain and improves quality of life. A significant benefit for study sponsors is that crossover trials require smaller sample sizes than parallel-arm designs to meet the same criteria for controlling type I and type II errors, as illustrated below.
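
The sketch below illustrates that sample size advantage under hypothetical assumptions: a clinically relevant difference of 1.5 points on a 0-10 pain scale, a between-subject standard deviation of 2.5, and an assumed within-subject correlation of 0.6 across treatment periods. None of these figures comes from a real study; they simply show how the within-subject comparison shrinks the variability that drives the required numbers.

```python
# Illustrative comparison (hypothetical numbers) of the total sample size a
# two-arm parallel trial and a two-period crossover would need to detect the
# same mean difference in pain reduction.
from math import sqrt
from statsmodels.stats.power import TTestIndPower, TTestPower

delta = 1.5   # clinically relevant difference in pain score (0-10 NRS), assumed
sigma = 2.5   # between-subject standard deviation, assumed
rho = 0.6     # assumed within-subject correlation across periods

# Parallel group: independent two-sample t-test, alpha = 0.05, 80% power
n_per_arm = TTestIndPower().solve_power(effect_size=delta / sigma,
                                        alpha=0.05, power=0.80)
total_parallel = 2 * n_per_arm

# Crossover: each participant contributes a within-subject treatment difference
# whose standard deviation shrinks as the correlation between periods grows.
sd_within = sigma * sqrt(2 * (1 - rho))
total_crossover = TTestPower().solve_power(effect_size=delta / sd_within,
                                           alpha=0.05, power=0.80)

print(f"Parallel total: ~{total_parallel:.0f}; crossover total: ~{total_crossover:.0f}")
```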

The main disadvantage of a crossover design, however, is that carry-over effects may be confounded with direct treatment effects, so the two cannot truly be estimated separately. This bias from the previous treatment period needs to be accounted for, as significant carry-over effects can distort the data analysis and the interpretation of results. Wellek & Blettner (2012) suggest that “crossover trials in which the results are not analysed separately by sequence group are of limited, if any, scientific value”. For chronic pain studies such as the intended investigation, this may create an onerous data management and logistics task that complicates the design, reduces the benefits of the crossover approach and threatens the integrity of the outcome data.
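
To show what analysis by sequence group looks like in practice, the sketch below works through a standard two-period, two-sequence (AB/BA) crossover comparison using entirely hypothetical pain scores: each participant's period-1 minus period-2 difference is compared between the two sequence groups, which separates the treatment effect from the period effect, and a simple check on period totals probes for carry-over. This is a minimal teaching example, not a full analysis plan.

```python
# Sketch of a standard 2x2 crossover analysis by sequence group (hypothetical data).
import numpy as np
from scipy import stats

# Hypothetical post-treatment pain scores (0-10 NRS); columns: period 1, period 2
seq_AB = np.array([[3.0, 5.5], [4.0, 6.0], [2.5, 4.5], [3.5, 6.5]])  # Active then Control
seq_BA = np.array([[6.0, 3.5], [5.5, 2.5], [6.5, 4.0], [5.0, 3.0]])  # Control then Active

diff_AB = seq_AB[:, 0] - seq_AB[:, 1]   # period differences, AB sequence
diff_BA = seq_BA[:, 0] - seq_BA[:, 1]   # period differences, BA sequence

# Treatment effect (Active minus Control), free of the period effect
treatment_effect = (diff_AB.mean() - diff_BA.mean()) / 2
t_stat, p_value = stats.ttest_ind(diff_AB, diff_BA)

# A simple carry-over check compares each participant's period totals between groups
carry_t, carry_p = stats.ttest_ind(seq_AB.sum(axis=1), seq_BA.sum(axis=1))

print(f"Estimated treatment effect: {treatment_effect:.2f} points (p = {p_value:.3f})")
print(f"Carry-over check p-value: {carry_p:.3f}")
```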

A final design option for an interventional chronic pain study is a parallel-group trial. A parallel study provides a simple, achievable design: participants are randomly allocated to receive either the investigative treatment or a placebo/sham/comparator at a ratio of 1:1. Participants in both groups are followed prospectively, with the study-specific outcome data needed to address the study question measured at appropriate time points, allowing direct comparisons between the treatment groups that can highlight treatment effects (Richens, 2001). A parallel design supports a straightforward statistical analysis, for example a simple t-test of the between-group difference in pain reduction outcomes, as sketched below. An advantage a parallel design has over a crossover is that the duration of a parallel-arm trial is generally shorter, because only one treatment period is required (Richens, 2001).
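
The sketch below shows that between-group comparison on simulated change-from-baseline pain scores; the group means, standard deviation and sample sizes are assumptions chosen only for illustration.

```python
# Minimal sketch of the between-group comparison a parallel design supports:
# an independent-samples t-test on simulated change-from-baseline pain scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Simulated reductions in pain score (0-10 NRS); means and SDs are assumptions
active = rng.normal(loc=2.5, scale=2.0, size=60)   # investigational arm
control = rng.normal(loc=1.0, scale=2.0, size=60)  # placebo/sham arm

t_stat, p_value = stats.ttest_ind(active, control)
print(f"Mean difference: {active.mean() - control.mean():.2f} points, p = {p_value:.4f}")
```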

The downside of a parallel-group trial in some instances is that it almost always requires a multicentre, multi-investigator approach, with the inevitable logistical issues and increased resource demand that this entails. A further disadvantage is that although the duration of a parallel study is shorter, this may be significantly offset by the much larger sample size that must be recruited and the time involved in reaching those target numbers. There are both clear advantages and disadvantages to a parallel-group study that must be identified and appreciated prior to design and implementation.

The study design ultimately chosen to establish reliable, appropriate and reproducible efficacy data in the treatment of pain must be determined with consideration of the study population, the intervention and the planned outcome analysis, and any design must present an ethical, fully formed, clear and justifiable clinical trial focused squarely on the research question and the primary outcome measure.


References:

  1. Büller HR, et al. Double-blind studies are not always optimum for evaluation of a novel therapy: the case of new anticoagulants. Journal of Thrombosis and Haemostasis. 2008;6(2):227-229.
  2. Chan K, Bhandari M. Three-minute critical appraisal of a case series article. Indian Journal of Orthopaedics. 2011;45(2):103-104. doi:10.4103/0019-5413.77126.
  3. Chiodo GT, Tolle SW, Bevan L. Placebo-controlled trials: good science or medical neglect? Western Journal of Medicine. 2000;172(4):271.
  4. Day SJ, Altman DG. Blinding in clinical trials and other studies. BMJ. 2000;321(7259):504.
  5. Dekkers OM, Egger M, Altman DG, Vandenbroucke JP. Distinguishing case series from cohort studies. Annals of Internal Medicine. 2012;156(1 Pt 1):37-40. doi:10.7326/0003-4819-156-1-201201030-00006.
  6. Esene IN, Ngu J, Zoghby M, Solaroglu I, Sikod AM, Kotb A, Dechambenoit G, Husseiny H. Case series and descriptive cohort studies in neurosurgery: the confusion and solution. Child's Nervous System. 2014;30(8):1321-1332.
  7. Beyer-Westendorf J, Büller H. External and internal validity of open label or double-blind trials in oral anticoagulation: better, worse or just different? Journal of Thrombosis and Haemostasis. 2011;9(11):2153-2158. doi:10.1111/j.1538-7836.2011.04507.x.
  8. Freidlin B, Korn EL, George SL, Gray R. Randomized clinical trial design for assessing noninferiority when superiority is expected. Journal of Clinical Oncology. 2007;25(31):5019-5023.
  9. Richens A. Proof of efficacy trials: cross-over versus parallel-group. Epilepsy Research. 2001;45(1):43-47.
  10. Sibbald B, Roland M. Understanding controlled trials. Why are randomised controlled trials important? BMJ. 1998;316(7126):201.
  11. Singal AG, Higgins PDR, Waljee AK. A primer on effectiveness and efficacy trials. Clinical and Translational Gastroenterology. 2014;5(1):e45.
  12. Vickers AJ, de Craen AJM. Why use placebos in clinical trials? A narrative review of the methodological literature. Journal of Clinical Epidemiology. 2000;53(2):157-161.
  13. Wellek S, Blettner M. On the proper use of the crossover design in clinical trials. Deutsches Ärzteblatt International. 2012;109(15):276-281.
  14. Woodward M. Epidemiology: Study Design and Data Analysis. CRC Press; 2013.