
Single-Case Design, Analysis, and Quality Assessment for Intervention Research

Michele A. Lobo

1 Biomechanics & Movement Science Program, Department of Physical Therapy, University of Delaware, Newark, DE, USA

Mariola Moeyaert

2 Division of Educational Psychology & Methodology, State University of New York at Albany, Albany, NY, USA

Andrea Baraldi Cunha

Iryna Babik

Background and Purpose

The purpose of this article is to describe single-case studies, and contrast them with case studies and randomized clinical trials. We will highlight current research designs, analysis techniques, and quality appraisal tools relevant for single-case rehabilitation research.

Summary of Key Points

Single-case studies can provide a viable alternative to large group studies such as randomized clinical trials. Single-case studies involve repeated measures and manipulation of an independent variable. They can be designed to have strong internal validity for assessing causal relationships between interventions and outcomes, and external validity for generalizability of results, particularly when the study designs incorporate replication, randomization, and multiple participants. Single-case studies should not be confused with case studies/series (i.e., case reports), which are reports of the clinical management of one patient or a small series of patients.

Recommendations for Clinical Practice

When rigorously designed, single-case studies can be particularly useful experimental designs in a variety of situations: when researcher resources are limited, when the studied conditions have low incidence, or when the effects of novel or expensive interventions are being examined. Readers will be directed to examples from the published literature in which these techniques have been discussed, evaluated for quality, and implemented.

Introduction

The purpose of this article is to present current tools and techniques relevant for single-case rehabilitation research. Single-case (SC) studies have been identified by a variety of names, including “n of 1 studies” and “single-subject” studies. The term “single-case study” is preferred because the other terms suggest these studies include only one participant. In fact, as will be discussed below, for purposes of replication and improved generalizability, the strongest SC studies commonly include more than one participant.

An SC study should not be confused with a “case study/series” (also called a “case report”). In a typical case study/series, a single patient or small series of patients is involved, but there is not a purposeful manipulation of an independent variable, nor are there necessarily repeated measures. Most case studies/series are reported in a narrative way, while results of SC studies are presented numerically or graphically. 1 , 2 This article defines SC studies, contrasts them with randomized clinical trials, discusses how they can be used to scientifically test hypotheses, and highlights current research designs, analysis techniques, and quality appraisal tools that may be useful for rehabilitation researchers.

In SC studies, measurements of outcome (dependent variables) are recorded repeatedly for individual participants across time and varying levels of an intervention (independent variables). 1 – 5 These varying levels of intervention are referred to as “phases” with one phase serving as a baseline or comparison, so each participant serves as his/her own control. 2 In contrast to case studies and case series in which participants are observed across time without experimental manipulation of the independent variable, SC studies employ systematic manipulation of the independent variable to allow for hypothesis testing. 1 , 6 As a result, SC studies allow for rigorous experimental evaluation of intervention effects and provide a strong basis for establishing causal inferences. Advances in design and analysis techniques for SC studies observed in recent decades have made SC studies increasingly popular in educational and psychological research. Yet, the authors believe SC studies have been undervalued in rehabilitation research, where randomized clinical trials (RCTs) are typically recommended as the optimal research design to answer questions related to interventions. 7 In reality, there are advantages and disadvantages to both SC studies and RCTs that should be carefully considered in order to select the best design to answer individual research questions. While there are a variety of other research designs that could be utilized in rehabilitation research, only SC studies and RCTs are discussed here because SC studies are the focus of this article and RCTs are the most highly recommended design for intervention studies. 7

When designed and conducted properly, RCTs offer strong evidence that changes in outcomes may be related to provision of an intervention. However, RCTs require monetary, time, and personnel resources that many researchers, especially those in clinical settings, may not have available. 8 RCTs also require access to large numbers of consenting participants that meet strict inclusion and exclusion criteria that can limit variability of the sample and generalizability of results. 9 The requirement for large participant numbers may make RCTs difficult to perform in many settings, such as rural and suburban settings, and for many populations, such as those with diagnoses marked by lower prevalence. 8 To rely exclusively on RCTs has the potential to result in bodies of research that are skewed to address the needs of some individuals while neglecting the needs of others. RCTs aim to include a large number of participants and to use random group assignment to create study groups that are similar to one another in terms of all potential confounding variables, but it is challenging to identify all confounding variables. Finally, the results of RCTs are typically presented in terms of group means and standard deviations that may not represent true performance of any one participant. 10 This can present as a challenge for clinicians aiming to translate and implement these group findings at the level of the individual.

SC studies can provide a scientifically rigorous alternative to RCTs for experimentally determining the effectiveness of interventions. 1 , 2 SC studies can assess a variety of research questions, settings, cases, independent variables, and outcomes. 11 There are many benefits to SC studies that make them appealing for intervention research. SC studies may require fewer resources than RCTs and can be performed in settings and with populations that do not allow for large numbers of participants. 1 , 2 In SC studies, each participant serves as his/her own comparison, thus controlling for many confounding variables that can impact outcome in rehabilitation research, such as gender, age, socioeconomic level, cognition, home environment, and concurrent interventions. 2 , 11 Results can be analyzed and presented to determine whether interventions resulted in changes at the level of the individual, the level at which rehabilitation professionals intervene. 2 , 12 When properly designed and executed, SC studies can demonstrate strong internal validity to determine the likelihood of a causal relationship between the intervention and outcomes and external validity to generalize the findings to broader settings and populations. 2 , 12 , 13

Single Case Research Designs for Intervention Research

There are a variety of SC designs that can be used to study the effectiveness of interventions. Here we discuss: 1) AB designs, 2) reversal designs, 3) multiple baseline designs, and 4) alternating treatment designs, as well as ways replication and randomization techniques can be used to improve internal validity of all of these designs. 1 – 3 , 12 – 14

The simplest of these designs is the AB Design 15 ( Figure 1 ). This design involves repeated measurement of outcome variables throughout a baseline control/comparison phase (A) and then throughout an intervention phase (B). When possible, it is recommended that a stable level and/or rate of change in performance be observed within the baseline phase before transitioning into the intervention phase. 2 As with all SC designs, it is also recommended that there be a minimum of five data points in each phase. 1 , 2 There is no randomization or replication of the baseline or intervention phases in the basic AB design. 2 Therefore, AB designs have problems with internal validity and generalizability of results. 12 They are weak in establishing causality because changes in outcome variables could be related to a variety of other factors, including maturation, experience, learning, and practice effects. 2 , 12 Sample data from a single-case AB study performed to assess the impact of a Floor Time Play intervention on social interaction and communication skills for a child with autism 15 are shown in Figure 1 .

Figure 1. An example of results from a single-case AB study conducted on one participant with autism; two weeks of observation (baseline phase A) were followed by seven weeks of Floor Time Play (intervention phase B). The outcome measure, Circles of Communication (reciprocal communication with two participants responding to each other verbally or nonverbally), served as a behavioral indicator of the child’s social interaction and communication skills (higher scores indicating better performance). A statistically significant improvement in Circles of Communication was found during the intervention phase as compared to the baseline. Note that although a stable baseline is recommended for SC studies, it is not always possible to satisfy this requirement, as can be seen in Figures 1 – 4 . Data were extracted from Dionne and Martini (2011) 15 utilizing Rohatgi’s WebPlotDigitizer software. 78

If an intervention does not have carry-over effects, it is recommended to use a Reversal Design . 2 For example, a reversal A₁BA₂ design 16 ( Figure 2 ) includes alternation of the baseline and intervention phases, whereas a reversal A₁B₁A₂B₂ design 17 ( Figure 3 ) consists of alternation of two baseline (A₁, A₂) and two intervention (B₁, B₂) phases. Incorporating at least four phases in the reversal design (i.e., A₁B₁A₂B₂ or A₁B₁A₂B₂A₃B₃…) allows for a stronger determination of a causal relationship between the intervention and outcome variables, because the relationship can be demonstrated across at least three different points in time: change in outcome from A₁ to B₁, from B₁ to A₂, and from A₂ to B₂. 18 Before using this design, however, researchers must determine that it is safe and ethical to withdraw the intervention, especially in cases where the intervention is effective and necessary. 12

Figure 2. An example of results from a single-case A₁BA₂ study conducted on eight participants with stable multiple sclerosis (data on three participants were used for this example). Four weeks of observation (baseline phase A₁) were followed by eight weeks of core stability training (intervention phase B), then another four weeks of observation (baseline phase A₂). The forward functional reach test (the maximal distance the participant can reach forward or laterally beyond arm’s length while maintaining a fixed base of support in the standing position; higher scores indicating better performance) significantly improved during intervention for Participants 1 and 3, without further improvement observed following withdrawal of the intervention (during baseline phase A₂). Data were extracted from Freeman et al. (2010) 16 utilizing Rohatgi’s WebPlotDigitizer software. 78

Figure 3. An example of results from a single-case A₁B₁A₂B₂ study conducted on two participants with severe unilateral neglect after a right-hemisphere stroke. Two weeks of conventional treatment (baseline phases A₁, A₂) alternated with two weeks of visuo-spatio-motor cueing (intervention phases B₁, B₂). Performance was assessed with two tests of lateral neglect, the Bells Cancellation Test (Figure A; lower scores indicating better performance) and the Line Bisection Test (Figure B; higher scores indicating better performance). There was a statistically significant intervention-related improvement in participants’ performance on the Line Bisection Test, but not on the Bells Test. Data were extracted from Samuel et al. (2000) 17 utilizing Rohatgi’s WebPlotDigitizer software. 78

A recent study used an ABA reversal SC design to determine the effectiveness of core stability training in 8 participants with multiple sclerosis. 16 During the first four weekly data collections, the researchers ensured a stable baseline, which was followed by eight weekly intervention data points, and concluded with four weekly withdrawal data points. The intervention significantly improved participants’ walking and reaching performance ( Figure 2 ). 16 This A₁BA₂ design could have been strengthened by the addition of a second intervention phase for replication (A₁B₁A₂B₂). For instance, a single-case A₁B₁A₂B₂ withdrawal design was used to assess the efficacy of rehabilitation using visuo-spatio-motor cueing for two participants with severe unilateral neglect after a severe right-hemisphere stroke. 17 Each phase included 8 data points. Statistically significant intervention-related improvement was observed, suggesting that visuo-spatio-motor cueing might be promising for treating individuals with very severe neglect ( Figure 3 ). 17

The reversal design can also incorporate a crossover component in which each participant experiences more than one type of intervention. For instance, a B₁C₁B₂C₂ design could be used to study the effects of two different interventions (B and C) on outcome measures. Challenges with including more than one intervention involve potential carry-over effects from earlier interventions and order effects that may impact the measured effectiveness of the interventions. 2 , 12 Including multiple participants and randomizing the order of intervention phase presentations are tools to help control for these types of effects. 19

When an intervention permanently changes an individual’s ability, a return to baseline performance is not feasible and reversal designs are not appropriate. Multiple Baseline Designs (MBDs) are useful in these situations ( Figure 4 ). 20 MBDs feature staggered introduction of the intervention across time: each participant is randomly assigned to one of at least 3 experimental conditions characterized by the length of the baseline phase. 21 These studies involve more than one participant, thus functioning as SC studies with replication across participants. Staggered introduction of the intervention allows for separation of intervention effects from those of maturation, experience, learning, and practice. For example, a multiple baseline SC study was used to investigate the effect of the anti-spasticity medication baclofen on stiffness in five adult males with spinal cord injury. 20 The subjects were randomly assigned to receive 5–9 baseline data points with a placebo treatment prior to the initiation of the intervention phase with the medication. Both participants and assessors were blind to the experimental condition. The results suggested that baclofen might not be a universal treatment choice for all individuals with spasticity resulting from a traumatic spinal cord injury ( Figure 4 ). 20

Figure 4. An example of results from a single-case multiple baseline study conducted on five participants with spasticity due to traumatic spinal cord injury. Total duration of data collection was nine weeks. The first participant was switched from placebo treatment (baseline) to baclofen treatment (intervention) after five data collection sessions, whereas each consecutive participant was switched to the baclofen intervention at a subsequent session, through the ninth session. There was no statistically significant effect of baclofen on viscous stiffness at the ankle joint. Data were extracted from Hinderer et al. (1990) 20 utilizing Rohatgi’s WebPlotDigitizer software. 78

The impact of two or more interventions can also be assessed via Alternating Treatment Designs (ATDs) . In ATDs, after establishing the baseline, the experimenter exposes subjects to different intervention conditions administered in close proximity for equal intervals ( Figure 5 ). 22 ATDs are prone to “carry-over effects” when the effects of one intervention influence the observed outcomes of another intervention. 1 As a result, such designs introduce unique challenges when attempting to determine the effects of any one intervention and have been less commonly utilized in rehabilitation. An ATD was used to monitor disruptive behaviors in the school setting throughout a baseline followed by an alternating treatment phase with randomized presentation of a control condition or an exercise condition. 23 Results showed that 30 minutes of moderate to intense physical activity decreased behavioral disruptions through 90 minutes after the intervention. 23 An ATD was also used to compare the effects of commercially available and custom-made video prompts on the performance of multi-step cooking tasks in four participants with autism. 22 Results showed that participants independently performed more steps with the custom-made video prompts ( Figure 5 ). 22

Figure 5. An example of results from a single-case alternating treatment study conducted on four participants with autism (data on two participants were used for this example). After the observation phase (baseline), the effects of commercially available and custom-made video prompts on the performance of multi-step cooking tasks were identified (treatment phase), after which only the best treatment was used (best treatment phase). Custom-made video prompts were most effective for improving participants’ performance of multi-step cooking tasks. Data were extracted from Mechling et al. (2013) 22 utilizing Rohatgi’s WebPlotDigitizer software. 78

Regardless of the SC study design, replication and randomization should be incorporated when possible to improve internal and external validity. 11 The reversal design is an example of replication across study phases. The minimum number of phase replications needed to meet quality standards is three (A₁B₁A₂B₂), but having four or more replications is highly recommended (A₁B₁A₂B₂A₃…). 11 , 14 In cases when interventions aim to produce lasting changes in participants’ abilities, replication of findings may be demonstrated by replicating intervention effects across multiple participants (as in multiple-participant AB designs), or across multiple settings, tasks, or service providers. When the results of an intervention are replicated across multiple reversals, participants, and/or contexts, there is an increased likelihood a causal relationship exists between the intervention and the outcome. 2 , 12

Randomization should be incorporated in SC studies to improve internal validity and the ability to assess for causal relationships among interventions and outcomes. 11 In contrast to traditional group designs, SC studies often do not have multiple participants or units that can be randomly assigned to different intervention conditions. Instead, in randomized phase-order designs , the sequence of phases is randomized. Simple or block randomization is possible. For example, with simple randomization for an A₁B₁A₂B₂ design, the A and B conditions are treated as separate units and are randomly assigned to the pre-defined data collection points. As a result, any combination of A-B sequences is possible, without restrictions on the number of times each condition is administered or regard for repetitions of conditions (e.g., A₁B₁B₂A₂B₃B₄B₅A₃B₆A₄A₅A₆). With block randomization for an A₁B₁A₂B₂ design, the two conditions (A and B) are blocked into a single unit (AB or BA); randomizing these blocks across consecutive time periods ensures that both conditions are administered equally often and that neither condition is repeated more than two times in a row (e.g., A₁B₁B₂A₂A₃B₃A₄B₄). Note that AB and reversal designs require that the baseline (A) always precedes the first intervention (B), which should be accounted for in the randomization scheme. 2 , 11
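To make the two phase-order schemes concrete, the following minimal Python sketch generates such sequences. It is illustrative only: the function names, session counts, and the constraint that the sequence must open with a baseline session are assumptions based on the description above, not part of any published protocol.

```python
import random

def simple_phase_randomization(n_a=6, n_b=6, seed=None):
    """Randomly order n_a baseline (A) and n_b intervention (B) sessions,
    keeping only sequences that start with a baseline session."""
    rng = random.Random(seed)
    while True:
        sequence = ["A"] * n_a + ["B"] * n_b
        rng.shuffle(sequence)
        if sequence[0] == "A":  # baseline must precede the first intervention
            return sequence

def block_phase_randomization(n_blocks=4, seed=None):
    """Randomize the within-block order (AB or BA) of each block, forcing the
    first block to be 'AB' so the study still opens with a baseline session."""
    rng = random.Random(seed)
    blocks = ["AB"] + [rng.choice(["AB", "BA"]) for _ in range(n_blocks - 1)]
    return [phase for block in blocks for phase in block]

print(simple_phase_randomization(seed=1))
print(block_phase_randomization(seed=1))
```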

In randomized phase start-point designs , the lengths of the A and B phases can be randomized. 2 , 11 , 24 – 26 For example, for an AB design, researchers could specify the number of time points at which outcome data will be collected (e.g., 20), define the minimum number of data points desired in each phase (e.g., 4 for A, 3 for B), and then randomize the initiation of the intervention so that it occurs anywhere between the remaining time points (points 5 and 17 in the current example). 27 , 28 For multiple-baseline designs, a dual-randomization, or “regulated randomization,” procedure has been recommended. 29 If multiple-baseline randomization depends solely on chance, it could be the case that all units are assigned to begin intervention at points not really separated in time. 30 Such randomly selected initiation of the intervention would result in a drastic reduction of the discriminant and internal validity of the study. 29 To eliminate this issue, investigators should first specify appropriate intervals between the start points for different units, then randomly select from those intervals, and finally randomly assign each unit to a start point. 29
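A comparable sketch for randomized start points is shown below; the function names and numbers are illustrative assumptions that follow the worked AB example and the regulated-randomization idea described above.

```python
import random

def random_start_point(n_points=20, min_a=4, min_b=3, seed=None):
    """Return the session (1-based) at which the intervention begins, leaving at
    least min_a baseline and min_b intervention data points (sessions 5-17 for
    the 20-point example in the text)."""
    rng = random.Random(seed)
    return rng.randint(min_a + 1, n_points - min_b)

def regulated_mbd_start_points(intervals, seed=None):
    """'Regulated' randomization for a multiple-baseline design: draw one start
    point from each pre-specified, non-overlapping interval, then randomly
    assign the resulting start points to the participating units."""
    rng = random.Random(seed)
    starts = [rng.randint(lo, hi) for lo, hi in intervals]
    rng.shuffle(starts)  # random assignment of start points to units
    return starts

print(random_start_point(seed=2))
print(regulated_mbd_start_points([(5, 7), (9, 11), (13, 15)], seed=2))
```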

Single Case Analysis Techniques for Intervention Research

The What Works Clearinghouse (WWC) single-case design technical documentation provides an excellent overview of appropriate SC study analysis techniques to evaluate the effects of interventions. 1 , 18 First, visual analyses are recommended to determine whether there is a functional relation between the intervention and the outcome. Second, if evidence for a functional effect is present, the visual analysis is supplemented with quantitative analysis methods evaluating the magnitude of the intervention effect. Third, effect sizes are combined across cases to estimate overall average intervention effects, which contributes to evidence-based practice, theory, and future applications. 2 , 18

Visual Analysis

Traditionally, SC study data are presented graphically. When more than one participant engages in a study, a spaghetti plot showing all of their data in the same figure can be helpful for visualization. Visual analysis of graphed data has been the traditional method for evaluating treatment effects in SC research. 1 , 12 , 31 , 32 The visual analysis involves evaluating level, trend, and stability of the data within each phase (i.e., within-phase data examination) followed by examination of the immediacy of effect, consistency of data patterns, and overlap of data between baseline and intervention phases (i.e., between-phase comparisons). When the changes (and/or variability) in level are in the desired direction, are immediate, readily discernible, and maintained over time, it is concluded that the changes in behavior across phases result from the implemented treatment and are indicative of improvement. 33 Three demonstrations of an intervention effect are necessary for establishing a functional relation. 1

Within-phase examination

Level, trend, and stability of the data within each phase are evaluated. Mean and/or median can be used to report the level, and trend can be evaluated by determining whether the data points are monotonically increasing or decreasing. Within-phase stability can be evaluated by calculating the percentage of data points within 15% of the phase median (or mean). The stability criterion is satisfied if about 85% (80% – 90%) of the data in a phase fall within a 15% range of the median (or average) of all data points for that phase. 34
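As an illustration of these within-phase descriptors, the Python sketch below computes the level (median), a simple monotonic-trend check, and the 15%-band stability criterion for one phase. The 0.85 criterion and the example data are assumptions for demonstration only.

```python
import statistics

def within_phase_summary(values, band=0.15, criterion=0.85):
    """Level, monotonic trend direction, and stability for a single phase."""
    level = statistics.median(values)
    diffs = [b - a for a, b in zip(values, values[1:])]
    if all(d >= 0 for d in diffs):
        trend = "non-decreasing"
    elif all(d <= 0 for d in diffs):
        trend = "non-increasing"
    else:
        trend = "no monotonic trend"
    # Stability: proportion of points falling within +/- 15% of the phase median.
    within_band = [abs(v - level) <= band * abs(level) for v in values]
    proportion = sum(within_band) / len(values)
    return {"level": level, "trend": trend,
            "stability": proportion, "stable": proportion >= criterion}

baseline = [12, 13, 12, 14, 13]  # illustrative data, not from the cited studies
print(within_phase_summary(baseline))
```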

Between-phase examination

Immediacy of effect, consistency of data patterns, and overlap of data between baseline and intervention phases are evaluated next. For this, several nonoverlap indices have been proposed that all quantify the proportion of measurements in the intervention phase not overlapping with the baseline measurements. 35 Nonoverlap statistics are typically scaled as a percent from 0 to 100, or as a proportion from 0 to 1. Here, we briefly discuss the Nonoverlap of All Pairs (NAP), 36 the Extended Celeration Line (ECL), the Improvement Rate Difference (IRD), 37 and the TauU and its trend-adjusted version (TauU_adj), 35 as these are the most recent and complete techniques. We also examine the Percentage of Nonoverlapping Data (PND) 38 and the Two Standard Deviation Band Method, as these are frequently used techniques. In addition, we include the Percentage of Nonoverlapping Corrected Data (PNCD), an index that applies the PND after controlling for baseline trend. 39

Nonoverlap of all pairs (NAP)

Each baseline observation can be paired with each intervention phase observation to form n pairs (i.e., n = n_A × n_B). Count the number of overlapping pairs, n_o, counting all ties as 0.5, and define NAP as the percentage of pairs showing no overlap: NAP = (n − n_o) / n × 100. Alternatively, one can count the number of positive (P), negative (N), and tied (T) pairs and compute NAP = (P + 0.5 × T) / (P + N + T). 2 , 36
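A minimal sketch of this computation, assuming that an increase in the outcome represents improvement (illustrative data and function name only):

```python
def nap(baseline, intervention):
    """Nonoverlap of All Pairs: positive pairs plus half the ties, over all pairs."""
    pairs = [(a, b) for a in baseline for b in intervention]
    positive = sum(1 for a, b in pairs if b > a)  # intervention point improves on baseline
    ties = sum(1 for a, b in pairs if b == a)
    return (positive + 0.5 * ties) / len(pairs)

print(nap([3, 4, 3, 5, 4], [6, 7, 5, 8, 7]))  # illustrative data only
```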

Extended Celeration Line (ECL)

The ECL, or split-middle line, allows one to control for a positive Phase A trend. Nonoverlap is defined as the proportion of Phase B data points (n_B) that lie above the median trend line fitted to the Phase A data and extended into Phase B: ECL = (number of Phase B points above the extended Phase A median trend / n_B) × 100.

As a consequence, this method depends on a straight line and makes an assumption of linearity in the baseline. 2 , 12
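The sketch below illustrates one common way to construct such a baseline trend line: a mid-median (split-middle style) line fitted to the two halves of the baseline, without the final adjustment step of the full split-middle procedure. The data and function name are illustrative, and an increase is assumed to represent improvement.

```python
import statistics

def extended_celeration_line(baseline, intervention):
    """Percentage of intervention points above the baseline mid-median trend line."""
    half = len(baseline) // 2
    first, second = baseline[:half], baseline[-half:]
    # Mid-median points of each baseline half: (median session index, median value).
    x1, y1 = (half - 1) / 2.0, statistics.median(first)
    x2 = len(baseline) - 1 - (half - 1) / 2.0
    y2 = statistics.median(second)
    slope = (y2 - y1) / (x2 - x1)
    intercept = y1 - slope * x1
    # Extend the baseline trend line into phase B and count points above it.
    above = sum(1 for i, y in enumerate(intervention)
                if y > intercept + slope * (len(baseline) + i))
    return 100.0 * above / len(intervention)

print(extended_celeration_line([2, 3, 3, 4, 4, 5], [7, 8, 8, 9, 9, 10]))  # illustrative data
```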

Improvement rate difference (IRD)

This analysis is conceptualized as the difference in improvement rates (IR) between the baseline (IR_B) and intervention (IR_T) phases. 38 The IR for each phase is defined as the number of “improved data points” divided by the total number of data points in that phase. IRD, commonly employed in medical group research under the names “risk reduction” or “risk difference,” attempts to provide an intuitive interpretation of nonoverlap and to make use of an established, respected effect size, IR_T − IR_B, the difference between two proportions. 37

TauU and TauU adj

Each baseline observation can be paired with each intervention phase observation to form n pairs (i.e., n = n_A × n_B). Count the number of positive (P), negative (N), and tied (T) pairs, and use the following formula: TauU = (P − N) / (P + N + T).

The TauU_adj is an adjustment of TauU for monotonic trend in the baseline. Each baseline observation can be paired with each intervention phase observation to form n pairs (i.e., n = n_A × n_B), and each baseline observation can also be paired with all later baseline observations, yielding n_A × (n_A − 1) / 2 within-baseline pairs. 2 , 35 The baseline trend is S_trend = P_A − N_A, the number of positive minus the number of negative within-baseline pairs, and the adjusted statistic is TauU_adj = (P − N − S_trend) / (P + N + T).

Online calculators might assist researchers in obtaining the TauU and TauU adjusted coefficients ( http://www.singlecaseresearch.org/calculators/tau-u ).
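Under the pairwise definitions given above, TauU and its baseline-trend-adjusted version can also be computed directly, as in the following illustrative sketch; the data and function name are assumptions for demonstration, and published calculators may apply slightly different denominators or corrections.

```python
def tau_u(baseline, intervention, adjust_baseline_trend=False):
    """TauU = (P - N) / (P + N + T); the adjusted version subtracts the baseline
    trend S_trend computed from all within-baseline pairs."""
    pairs = [(a, b) for a in baseline for b in intervention]
    p = sum(1 for a, b in pairs if b > a)
    n = sum(1 for a, b in pairs if b < a)
    t = sum(1 for a, b in pairs if b == a)
    s_trend = 0
    if adjust_baseline_trend:
        base_pairs = [(baseline[i], baseline[j])
                      for i in range(len(baseline)) for j in range(i + 1, len(baseline))]
        s_trend = (sum(1 for a, b in base_pairs if b > a)
                   - sum(1 for a, b in base_pairs if b < a))
    return (p - n - s_trend) / (p + n + t)

data_a, data_b = [3, 4, 4, 5, 4], [6, 7, 5, 8, 7]  # illustrative data only
print(tau_u(data_a, data_b), tau_u(data_a, data_b, adjust_baseline_trend=True))
```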

Percentage of nonoverlapping data (PND)

If anticipating an increase in the outcome, locate the highest data point in the baseline phase and then calculate the percent of the intervention phase data points that exceed it. If anticipating a decrease in the outcome, find the lowest data point in the baseline phase and then calculate the percent of the treatment phase data points that fall below it: PND = (number of Phase B data points exceeding the extreme baseline value / n_B) × 100. A PND < 50 would mark no observed effect, PND = 50–70 signifies a questionable effect, and PND > 70 suggests the intervention was effective. 40 The Percentage of Nonoverlapping Corrected Data (PNCD) was proposed in 2009 as an extension of the PND. 39 Prior to applying the PND, a data correction procedure is applied to eliminate any pre-existing baseline trend. 38
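A minimal sketch of the PND computation (the data and function name are illustrative only):

```python
def pnd(baseline, intervention, expect_increase=True):
    """Percentage of intervention points beyond the most extreme baseline point."""
    if expect_increase:
        threshold = max(baseline)
        exceeding = sum(1 for b in intervention if b > threshold)
    else:
        threshold = min(baseline)
        exceeding = sum(1 for b in intervention if b < threshold)
    return 100.0 * exceeding / len(intervention)

print(pnd([3, 4, 3, 5, 4], [6, 7, 5, 8, 7]))  # illustrative data: 80.0
```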

Two Standard Deviation Band Method

When the stability criterion described above is met within phases, it is possible to apply the two standard deviation band method. 12 , 41 First, the mean of the data for a specific condition is calculated and represented with a solid line. Next, the standard deviation of the same data is computed, and two dashed lines are drawn: one located two standard deviations above the mean and the other two standard deviations below it. For normally distributed data, few points (less than 5%) are expected to fall outside the two standard deviation bands if there is no change in the outcome score due to the intervention. However, this method is not considered a formal statistical procedure, as the data cannot typically be assumed to be normal, continuous, or independent. 41
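As an illustration, the sketch below draws the band from the baseline data and counts intervention points falling outside it; which phase the band is computed from, and the example data, are assumptions for demonstration.

```python
import statistics

def two_sd_band(baseline, intervention):
    """Band of +/- 2 SD around the baseline mean and the intervention points outside it."""
    mean = statistics.mean(baseline)
    sd = statistics.stdev(baseline)
    lower, upper = mean - 2 * sd, mean + 2 * sd
    outside = sum(1 for y in intervention if y < lower or y > upper)
    return {"band": (lower, upper), "points_outside": outside,
            "proportion_outside": outside / len(intervention)}

print(two_sd_band([12, 13, 12, 14, 13], [16, 17, 15, 18, 17]))  # illustrative data only
```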

Statistical Analysis

If the visual analysis indicates a functional relationship (i.e., three demonstrations of the effectiveness of the intervention effect), it is recommended to proceed with the quantitative analyses, reflecting the magnitude of the intervention effect. First, effect sizes are calculated for each participant (individual-level analysis). Moreover, if the research interest lies in the generalizability of the effect size across participants, effect sizes can be combined across cases to achieve an overall average effect size estimate (across-case effect size).

Note that quantitative analysis methods are still being developed in the domain of SC research 1 and statistical challenges of producing an acceptable measure of treatment effect remain. 14 , 42 , 43 Therefore, the WWC standards strongly recommend conducting sensitivity analysis and reporting multiple effect size estimators. If consistency across different effect size estimators is identified, there is stronger evidence for the effectiveness of the treatment. 1 , 18

Individual-level effect size analysis

The most common effect sizes recommended for SC analysis are: 1) the standardized mean difference, Cohen’s d ; 2) the standardized mean difference with correction for small sample sizes, Hedges’ g ; and 3) the regression-based approach, which has the most potential and is strongly recommended by the WWC standards. 1 , 44 , 45 Cohen’s d can be calculated as d = (X̄_A − X̄_B) / s_p, with X̄_A being the baseline mean, X̄_B being the treatment mean, and s_p indicating the pooled within-case standard deviation. Hedges’ g is an extension of Cohen’s d , recommended in the context of SC studies because it corrects for small sample sizes. The piecewise regression-based approach not only reflects the immediate intervention effect, but also the intervention effect across time:

Ŷ_i = β_0 + β_1 T_i + β_2 D_i + β_3 (T_i × D_i) + e_i, with e_i = ρ e_{i−1} + u_i (Equation 1)

Here, i stands for the measurement occasion ( i = 0, 1, …, I ). The dependent variable is regressed on a time indicator, T , which is centered around the first observation of the intervention phase; D , a dummy variable for the intervention phase; and an interaction term of these variables. The equation shows that the expected score, Ŷ_i , equals β_0 + β_1 T_i in the baseline phase, and (β_0 + β_2) + (β_1 + β_3) T_i in the intervention phase. β_0 , therefore, indicates the expected baseline level at the start of the intervention phase (when T = 0), whereas β_1 marks the linear time trend in the baseline scores. The coefficient β_2 can then be interpreted as an immediate effect of the intervention on the outcome, whereas β_3 signifies the effect of the intervention across time. The e_i ’s are residuals assumed to be normally distributed around a mean of zero with a variance of σ_e². The assumption of independence of errors is usually not met in the context of SC studies because repeated measures are obtained within a person. As a consequence, it can be the case that the residuals are autocorrelated, meaning that errors closer in time are more related to each other than errors further apart in time. 46 – 48 A lag-1 autocorrelation is therefore appropriate, taking into account the correlation between two consecutive errors, e_i and e_{i−1} (for more details, see Verbeke and Molenberghs, 2000 49 ). In Equation 1 , ρ indicates the autocorrelation parameter. If ρ is positive, the errors closer in time are more similar; if ρ is negative, the errors closer in time are more different; and if ρ equals zero, there is no correlation between the errors.
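The following Python sketch illustrates these individual-level effect sizes and the piecewise regression, using ordinary least squares from statsmodels for simplicity; it ignores the lag-1 autocorrelation described above, which would require a generalized least squares or time-series model, and all data are illustrative.

```python
import numpy as np
import statsmodels.api as sm

def cohens_d(a, b):
    """Standardized mean difference using the pooled within-case standard deviation."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    pooled_sd = np.sqrt(((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1))
                        / (len(a) + len(b) - 2))
    return (a.mean() - b.mean()) / pooled_sd

def hedges_g(a, b):
    """Cohen's d with the usual small-sample correction factor."""
    df = len(a) + len(b) - 2
    return cohens_d(a, b) * (1 - 3 / (4 * df - 1))

baseline = [3, 4, 3, 5, 4]        # illustrative data only
intervention = [6, 7, 6, 8, 7]
print(cohens_d(baseline, intervention), hedges_g(baseline, intervention))

# Piecewise regression: Y = b0 + b1*T + b2*D + b3*T*D + e, with T centered at the
# first intervention occasion and D = 1 during the intervention phase.
y = np.array(baseline + intervention, float)
t = np.arange(len(y)) - len(baseline)            # T = 0 at the first intervention point
d = (np.arange(len(y)) >= len(baseline)).astype(float)
X = sm.add_constant(np.column_stack([t, d, t * d]))
print(sm.OLS(y, X).fit().params)                 # [b0, b1, b2, b3]
```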

Across-case effect sizes

Two-level modeling to estimate the intervention effects across cases can be used to evaluate across-case effect sizes. 44 , 45 , 50 Multilevel modeling is recommended by the WWC standards because it takes the hierarchical nature of SC studies into account: measurements are nested within cases and cases, in turn, are nested within studies. By conducting a multilevel analysis, important research questions can be addressed (which cannot be answered by single-level analysis of SC study data), such as: 1) What is the magnitude of the average treatment effect across cases? 2) What is the magnitude and direction of the case-specific intervention effect? 3) How much does the treatment effect vary within cases and across cases? 4) Does a case and/or study level predictor influence the treatment’s effect? The two-level model has been validated in previous research using extensive simulation studies. 45 , 46 , 51 The two-level model appears to have sufficient power (> .80) to detect large treatment effects in at least six participants with six measurements. 21
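A hedged sketch of such a two-level model is given below, using statsmodels MixedLM as one possible implementation; the simulated data, column names, and the choice of a random intercept plus a random treatment effect per case are all assumptions for demonstration, not the specific model used in the cited studies.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for case in range(6):                       # six cases, six measurements each (3 A, 3 B)
    case_effect = rng.normal(3.0, 0.5)      # case-specific treatment effect
    for session in range(6):
        treat = int(session >= 3)
        y = 10 + case_effect * treat + rng.normal(0, 1)
        rows.append({"case": case, "session": session, "treat": treat, "y": y})
data = pd.DataFrame(rows)

# Measurements (level 1) nested within cases (level 2): random intercept and
# random treatment effect per case.
model = smf.mixedlm("y ~ treat", data, groups="case", re_formula="~treat")
result = model.fit()
print(result.params["treat"])               # average treatment effect across cases
```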

Furthermore, to estimate the across-case effect sizes, the HPS (Hedges, Pustejovsky, and Shadish), or single-case educational design (SCEdD)-specific, mean difference index can be calculated. 52 This is a standardized mean difference index specifically designed for SCEdD data, with the aim of making it comparable to Cohen’s d from group-comparison designs. The standard deviation takes into account both within-participant and between-participant variability and is typically used to obtain an across-case estimator for a standardized change in level. The advantage of using the HPS across-case effect size estimator is that it is directly comparable with Cohen’s d for group-comparison research, thus enabling the use of Cohen’s (1988) benchmarks. 53

Valuable recommendations on SC data analyses have recently been provided. 54 , 55 They suggest that a specific SC study data-analytic technique can be chosen based on: (1) the study aims and the desired quantification (e.g., overall quantification, between-phase quantifications, randomization, etc.); (2) the data characteristics as assessed by visual inspection and the assumptions one is willing to make about the data; and (3) the researcher’s knowledge and computational resources. 54 , 55 Table 1 lists recommended readings and some commonly used resources related to the design and analysis of single-case studies.

Table 1. Recommended readings and resources related to the design and analysis of single-case studies, organized by topic:

  • General readings on single-case research design and analysis
  • Reversal design
  • Multiple baseline design
  • Alternating treatment design
  • Randomization
  • Analysis: visual analysis, Percentage of Nonoverlapping Data (PND), Nonoverlap of All Pairs (NAP), Improvement Rate Difference (IRD), Tau-U/piecewise regression, and hierarchical linear modeling (HLM)

Quality Appraisal Tools for Single-Case Design Research

Quality appraisal tools are important to guide researchers in designing strong experiments and conducting high-quality systematic reviews of the literature. Unfortunately, quality assessment tools for SC studies are relatively novel, ratings across tools demonstrate variability, and there is currently no “gold standard” tool. 56 Table 2 lists important SC study quality appraisal criteria compiled from the most common scales; when planning studies or reviewing the literature, we recommend readers consider these criteria. Table 3 lists some commonly used SC quality assessment and reporting tools and references to resources where the tools can be located.

Table 2. Summary of important single-case study quality appraisal criteria.

1. Design: The design is appropriate for evaluating the intervention.

2. Method details: Participants’ characteristics, selection method, and testing setting specifics are adequately detailed to allow future replication.

3. Independent variable: The independent variable (i.e., the intervention) is thoroughly described to allow replication; fidelity of the intervention is thoroughly documented; the independent variable is systematically manipulated under the control of the experimenter.

4. Dependent variable: Each dependent/outcome variable is quantifiable. Each outcome variable is measured systematically and repeatedly across time to ensure acceptable inter-assessor agreement of 0.80–0.90 (percent agreement) or ≥0.60 (Cohen’s kappa) on at least 20% of sessions (a computational sketch of these agreement indices follows this table).

5. Internal validity: The study includes at least three attempts to demonstrate an intervention effect at three different points in time or with three different phase replications. Design-specific recommendations: 1) for reversal designs, a study should have ≥4 phases with ≥5 points per phase; 2) for alternating intervention designs, a study should have ≥5 points per condition with ≤2 points per phase; 3) for multiple baseline designs, a study should have ≥6 phases with ≥5 points per phase to meet the WWC standards without reservations. Assessors are independent and blind to experimental conditions.

6. External validity: Experimental effects should be replicated across participants, settings, tasks, and/or service providers.

7. Face validity: The outcome measure should be clearly operationally defined, have a direct unambiguous interpretation, and measure the construct it was designed to measure.

8. Social validity: Both the outcome variable and the magnitude of change in the outcome due to the intervention should be socially important; the intervention should be practical and cost-effective.

9. Sample attrition: Sample attrition should be low and unsystematic, since loss of data in SC designs due to overall or differential attrition can produce biased estimates of the intervention’s effectiveness if that loss is systematically related to the experimental conditions.

10. Randomization: If randomization is used, the experimenter should make sure that: 1) equivalence is established at baseline, and 2) group membership is determined through a random process.
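As a computational illustration of the agreement indices named in criterion 4, the sketch below computes percent agreement and Cohen’s kappa for two assessors’ session-by-session ratings using scikit-learn; the ratings are illustrative only.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

assessor_1 = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 1])  # illustrative binary ratings
assessor_2 = np.array([1, 1, 0, 1, 1, 1, 1, 0, 1, 1])

percent_agreement = np.mean(assessor_1 == assessor_2)
kappa = cohen_kappa_score(assessor_1, assessor_2)
print(percent_agreement, kappa)  # compare against the 0.80-0.90 and >=0.60 thresholds
```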

Table 3. Quality assessment and reporting tools related to single-case studies.

  • What Works Clearinghouse Standards (WWC): Kratochwill, T.R., Hitchcock, J., Horner, R.H., et al. Institute of Education Sciences: What Works Clearinghouse: Procedures and standards handbook. Published 2010. Accessed November 20, 2016.
  • Quality indicators from Horner et al.: Horner, R.H., Carr, E.G., Halle, J., McGee, G., Odom, S., Wolery, M. The use of single-subject research to identify evidence-based practice in special education. Except Children. 2005;71(2):165–179.
  • Evaluative Method: Reichow, B., Volkmar, F., Cicchetti, D. Development of the evaluative method for evaluating and determining evidence-based practices in autism. J Autism Dev Disord. 2008;38(7):1311–1319.
  • Certainty Framework: Simeonsson, R., Bailey, D. Evaluating programme impact: Levels of certainty. In: Mitchell, D., Brown, R., eds. London, England: Chapman & Hall; 1991:280–296.
  • Evidence in Augmentative and Alternative Communication Scales (EVIDAAC): Schlosser, R.W., Sigafoos, J., Belfiore, P. EVIDAAC comparative single-subject experimental design scale (CSSEDARS). Published 2009. Accessed November 20, 2016.
  • Single-Case Experimental Design (SCED) Scale: Tate, R.L., McDonald, S., Perdices, M., Togher, L., Schulz, R., Savage, S. Rating the methodological quality of single-subject designs and n-of-1 trials: Introducing the Single-Case Experimental Design (SCED) Scale. Neuropsychol Rehabil. 2008;18(4):385–401.
  • Logan et al. Scales: Logan, L.R., Hickman, R.R., Harris, S.R., Heriza, C.B. Single-subject research design: Recommendations for levels of evidence and quality rating. Dev Med Child Neurol. 2008;50:99–103.
  • Single-Case Reporting Guideline In BEhavioural Interventions (SCRIBE): Tate, R.L., Perdices, M., Rosenkoetter, U., et al. The Single-Case Reporting guideline In BEhavioural interventions (SCRIBE) 2016 statement. J School Psychol. 2016;56:133–142.
  • Theory, examples, and tools related to multilevel data analysis: Van den Noortgate, W., Ferron, J., Beretvas, S.N., Moeyaert, M. Multilevel synthesis of single-case experimental data. Katholieke Universiteit Leuven web site.
  • Tools for computing between-cases standardized mean difference: Pustejovsky, J.E. scdhlm: A web-based calculator for between-case standardized mean differences (Version 0.2) [Web application].
  • Tools for computing NAP, IRD, Tau, and other statistics: Vannest, K.J., Parker, R.I., Gonen, O. Single Case Research: Web based calculators for SCR analysis (Version 1.0) [Web-based application]. College Station, TX: Texas A&M University. Published 2011. Accessed November 20, 2016.
  • Tools for obtaining graphical representations, means, trend lines, PND: Wright, J. Intervention Central. Accessed November 20, 2016.
  • Access to free Simulation Modeling Analysis (SMA) software: Borckardt, J.J. SMA Simulation Modeling Analysis: Time Series Analysis Program for Short Time Series Data Streams. Published 2006.

When an established tool is required for systematic review, we recommend use of the What Works Clearinghouse (WWC) Tool because it has well-defined criteria and is developed and supported by leading experts in the SC research field in association with the Institute of Education Sciences. 18 The WWC documentation provides clear standards and procedures to evaluate the quality of SC research; it assesses the internal validity of SC studies, classifying them as “Meeting Standards”, “Meeting Standards with Reservations”, or “Not Meeting Standards”. 1 , 18 Only studies classified in the first two categories are recommended for further visual analysis. Also, WWC evaluates the evidence of effect, classifying studies into “Strong Evidence of a Causal Relation”, “Moderate Evidence of a Causal Relation”, or “No Evidence of a Causal Relation”. Effect size should only be calculated for studies providing strong or moderate evidence of a causal relation.

The Single-Case Reporting Guideline In BEhavioural Interventions (SCRIBE) 2016 is another useful SC research tool developed recently to improve the quality of single-case designs. 57 SCRIBE consists of a 26-item checklist that researchers need to address while reporting the results of SC studies. This practical checklist allows for critical evaluation of SC studies during study planning, manuscript preparation, and review.

Single-case studies can be designed and analyzed in a rigorous manner that allows researchers to assess causal relationships among interventions and outcomes and to generalize their results. 2 , 12 These studies can be strengthened by incorporating replication of findings across multiple study phases, participants, settings, or contexts, and by using randomization of conditions or phase lengths. 11 There are a variety of tools that allow researchers to objectively analyze findings from SC studies. 56 While a variety of quality assessment tools exist for SC studies, they can be difficult to locate and utilize without experience, and different tools can provide variable results. The WWC quality assessment tool is recommended for those aiming to systematically review SC studies. 1 , 18

SC studies, like all types of study designs, have a variety of limitations. First, it can be challenging to collect at least five data points in a given study phase. This may be especially true when travel for data collection is difficult for participants, or during the baseline phase when delaying intervention may not be safe or ethical. Power in SC studies is related to the number of data points gathered for each participant, so it is important to avoid having a limited number of data points. 12 , 58 Second, SC studies are not always designed in a rigorous manner and, thus, may have poor internal validity. This limitation can be overcome by addressing key characteristics that strengthen SC designs ( Table 2 ). 1 , 14 , 18 Third, SC studies may have poor generalizability. This limitation can be overcome by including a greater number of participants, or units. Fourth, SC studies may require consultation from expert methodologists and statisticians to ensure proper study design and data analysis, especially to manage issues like autocorrelation and variability of data. 2 Fifth, while it is recommended to achieve a stable level and rate of performance throughout the baseline, human performance is quite variable, which can make this requirement challenging. Finally, the most important validity threat to SC studies is maturation. This challenge must be considered during the design process in order to strengthen SC studies. 1 , 2 , 12 , 58

SC studies can be particularly useful for rehabilitation research. They allow researchers to closely track and report change at the level of the individual. They may require fewer resources and, thus, can allow for high-quality experimental research, even in clinical settings. Furthermore, they provide a tool for assessing causal relationships in populations and settings where large numbers of participants are not accessible. For all of these reasons, SC studies can serve as an effective method for assessing the impact of interventions.

Acknowledgments

This research was supported by the National Institutes of Health, Eunice Kennedy Shriver National Institute of Child Health & Human Development (1R21HD076092-01A1, Lobo PI), and the Delaware Economic Development Office (Grant #109).

Some of the information in this manuscript was presented at the IV Step Meeting in Columbus, OH, June 2016.


Single-Case Experimental Design

Single-case experimental design, a versatile research methodology within psychology, holds particular significance in the field of school psychology. This article provides an overview of single-case experimental design, covering its definition, historical development, and key concepts. It delves into various types of single-case designs, including AB, ABA, and Multiple Baseline designs, illustrating their applications within school psychology. The article also explores data collection, analysis methods, and common challenges associated with this methodology. By highlighting its value in empirical research, this article underscores the enduring relevance of single-case experimental design in advancing the understanding and practice of school psychology.

Introduction

Single-case experimental design, a research methodology of profound importance in the realm of psychology, is characterized by its unique approach to investigating behavioral and psychological phenomena. Within this article, we will embark on a journey to explore the intricate facets of this research methodology and unravel its multifaceted applications, with a particular focus on its relevance in school psychology.

Single-case experimental design, often referred to as “N of 1” research, is a methodology that centers on the in-depth examination of individual subjects or cases. Unlike traditional group-based designs, this approach allows researchers to closely study and understand the nuances of a single participant’s behavior, responses, and reactions over time. The precision and depth of insight offered by single-case experimental design have made it an invaluable tool in the field of psychology, facilitating both clinical and experimental research endeavors.

One of the most compelling aspects of this research methodology lies in its applicability to school psychology. In educational settings, understanding the unique needs and challenges of individual students is paramount, and single-case experimental design offers a tailored and systematic way to address these issues. Whether it involves assessing the effectiveness of an intervention for a specific learning disability or studying the impact of a behavior modification program for a student with special needs, single-case experimental design equips school psychologists with a powerful tool to make data-driven decisions and individualized educational plans.

Throughout this article, we will delve into the foundations of single-case experimental design, exploring its historical evolution, key concepts, and core terminology. We will also discuss the various types of single-case designs, including AB, ABA, and Multiple Baseline designs, illustrating their practical applications within the context of school psychology. Furthermore, the article will shed light on the data collection methods and the statistical techniques used for analysis, as well as the ethical considerations and challenges that researchers encounter in single-case experiments.

In sum, this article aims to provide an in-depth understanding of single-case experimental design and its pivotal role in advancing knowledge in psychology, particularly within the field of school psychology. As we embark on this exploration, it is evident that single-case experimental design serves as a bridge between rigorous scientific inquiry and the real-world needs of individuals, making it an indispensable asset in enhancing the quality of psychological research and practice.

Understanding Single-Case Experimental Design

Single-Case Experimental Design (SCED), often referred to as “N of 1” research, is a research methodology employed in psychology to investigate behavioral and psychological phenomena with an emphasis on the individual subject as the primary unit of analysis. The primary purpose of SCED is to meticulously study the behavior, responses, and changes within a single participant over time. Unlike traditional group-based research, SCED is tailored to the unique characteristics and needs of individual cases, enabling a more in-depth understanding of the variables under investigation.

The historical background of SCED can be traced back to the early 20th century when researchers like B.F. Skinner pioneered the development of operant conditioning and experimental analysis of behavior. Skinner’s work laid the groundwork for single-case experiments by emphasizing the importance of understanding the functional relations between behavior and environmental variables. Over the decades, SCED has evolved and gained prominence in various fields within psychology, notably in clinical and school psychology. Its relevance in school psychology is particularly noteworthy, as it offers a systematic and data-driven approach to address the diverse learning and behavioral needs of students. School psychologists use SCED to design and assess individualized interventions, evaluate the effectiveness of specific teaching strategies, and make informed decisions about special education programs.

Understanding SCED involves familiarity with key concepts and terminology that underpin the methodology. These terms include:

  • Baseline: The initial phase of data collection where the participant’s behavior is measured before any intervention is introduced. Baseline data serve as a point of reference for assessing the impact of subsequent interventions.
  • Intervention: The phase in which a specific treatment, manipulation, or condition is introduced to the participant. The goal of the intervention is to bring about a change in the target behavior.
  • Dependent Variables: These are the behaviors or responses under investigation. They are the outcomes that researchers aim to measure and analyze for changes across different phases of the experiment.

Reliability and validity are critical considerations in SCED. Reliability refers to the consistency and stability of measurement. In SCED, it is crucial to ensure that data collection procedures are reliable, as any variability can affect the interpretation of results. Validity pertains to the accuracy and truthfulness of the data. Researchers must establish that the dependent variable measurements are valid and accurately reflect the behavior of interest. When these principles are applied in SCED, it enhances the scientific rigor and credibility of the research findings, which is essential in both clinical and school psychology contexts.

This foundation of key concepts and terminology serves as the basis for designing, conducting, and interpreting single-case experiments, ensuring that the methodology maintains high standards of precision and integrity in the pursuit of understanding individual behavior and psychological processes.

Types of Single-Case Experimental Designs

The AB Design is one of the fundamental single-case experimental designs, characterized by its simplicity and effectiveness. In an AB Design, the researcher observes and measures a single subject’s behavior during two distinct phases: the baseline (A) phase and the intervention (B) phase. During the baseline phase, the researcher collects data on the subject’s behavior without any intervention or treatment. This baseline data serve as a reference point to understand the natural or typical behavior of the individual. Following the baseline phase, the intervention or treatment is introduced, and data on the subject’s behavior are collected again. The AB Design allows for the comparison of baseline data with intervention data, enabling researchers to determine whether the introduced intervention had a noticeable impact on the individual’s behavior.

AB Designs find extensive application in school psychology. For instance, consider a scenario where a school psychologist wishes to assess the effectiveness of a time-management training program for a student with attention deficit hyperactivity disorder (ADHD). During the baseline phase, the psychologist observes the student’s on-task behavior in the absence of any specific time-management training. Subsequently, during the intervention phase, the psychologist implements the time-management program and measures the student’s on-task behavior again. By comparing the baseline and intervention data, the psychologist can evaluate the program’s efficacy in improving the student’s behavior.

The ABA Design is another prominent single-case experimental design characterized by the inclusion of a reversal (A) phase. In this design, the researcher initially collects baseline data (Phase A), introduces the intervention (Phase B), and then returns to the baseline conditions (Phase A). The ABA Design is significant because it provides an opportunity to assess the reversibility of the effects of the intervention. If the behavior returns to baseline levels during the second A phase, it suggests a strong relationship between the intervention and the observed changes in behavior.

In school psychology, the ABA Design offers valuable insights into the effectiveness of interventions for students with diverse needs. For instance, a school psychologist may use the ABA Design to evaluate a behavior modification program for a student with autism spectrum disorder (ASD). During the first baseline phase (A), the psychologist observes the student’s behavior patterns. Subsequently, in the intervention phase (B), a behavior modification program is implemented. If the student’s behavior shows positive changes, this suggests that the program is effective. Finally, during the second baseline phase (A), the psychologist can determine if the changes are reversible, which informs decisions regarding the program’s ongoing use or modification.
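A small extension of the same idea can check the reversal logic that gives the ABA Design its inferential strength: whether the behavior moves back toward its original baseline level when the intervention is withdrawn. The data, the helper function, and the 50% threshold below are illustrative assumptions, not a standard decision rule.

```python
# ABA-design sketch: does behavior return toward baseline when the intervention is withdrawn?
from statistics import mean

phase_a1 = [12, 15, 14, 13, 16]   # hypothetical frequency of the target behavior, first baseline
phase_b  = [7, 6, 5, 4, 4, 3]     # hypothetical intervention phase
phase_a2 = [10, 12, 13, 14, 13]   # hypothetical second baseline (withdrawal)

def reverted_toward_baseline(a1, b, a2, proportion=0.5):
    """Illustrative check: has at least `proportion` of the A1-to-B improvement been lost in A2?"""
    change_during_b = mean(a1) - mean(b)   # improvement observed during intervention
    loss_during_a2 = mean(a2) - mean(b)    # how much of that improvement disappeared on withdrawal
    return change_during_b > 0 and loss_during_a2 >= proportion * change_during_b

print("Phase means (A1, B, A2):",
      round(mean(phase_a1), 1), round(mean(phase_b), 1), round(mean(phase_a2), 1))
print("Behavior reverted toward baseline during withdrawal:",
      reverted_toward_baseline(phase_a1, phase_b, phase_a2))
```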

The Multiple Baseline Design is a versatile single-case experimental design that addresses situations, such as ethical concerns or logistical constraints, in which withdrawing an intervention is not possible or appropriate. In this design, researchers stagger the introduction of the intervention across multiple behaviors, settings, or individuals: each baseline continues until the intervention is introduced for that particular behavior, setting, or individual, at a different point in time for each. A cause-and-effect relationship is supported when change appears only after the intervention is introduced, and only in the specific behavior, setting, or individual receiving it.

Within school psychology, Multiple Baseline Designs offer particular utility when assessing interventions for students in complex or sensitive situations. For example, a school psychologist working with a student who displays challenging behaviors may choose to implement a Multiple Baseline Design. The psychologist can introduce a behavior intervention plan (BIP) for different target behaviors, such as aggression, noncompliance, and self-injury, at different times. By measuring and analyzing changes in behavior across these multiple behaviors, the psychologist can assess the effectiveness of the BIP and make informed decisions about its implementation across various behavioral concerns. This design is particularly valuable when ethical considerations prevent the reversal of an effective intervention, as it allows researchers to demonstrate the intervention’s impact without removing a beneficial treatment.
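The staggered logic of a multiple baseline design can be sketched as a simple schedule: each target behavior remains in baseline until its own, later start point for the intervention. The behaviors, session counts, and start points below are hypothetical and chosen only to mirror the example above.

```python
# Multiple-baseline-across-behaviors sketch: the intervention starts at a different
# session for each behavior, while the other behaviors remain in baseline.
TOTAL_SESSIONS = 15

# Hypothetical start session of the intervention phase for each target behavior.
intervention_start = {
    "aggression": 4,
    "noncompliance": 8,
    "self-injury": 12,
}

def phase_for(behavior: str, session: int) -> str:
    """Return which phase a given behavior is in at a given session."""
    return "B (intervention)" if session >= intervention_start[behavior] else "A (baseline)"

# Print a session-by-phase grid; change should appear only after each behavior's own start point.
header = "session".ljust(9) + "".join(b.ljust(18) for b in intervention_start)
print(header)
for session in range(1, TOTAL_SESSIONS + 1):
    row = str(session).ljust(9)
    row += "".join(phase_for(b, session).ljust(18) for b in intervention_start)
    print(row)
```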

Conducting and Analyzing Single-Case Experiments

In single-case experiments, data collection and measurement are pivotal components that underpin the scientific rigor of the research. Data are typically collected through direct observation, self-reports, or the use of various measuring instruments, depending on the specific behavior or variable under investigation. To ensure reliability and validity, researchers meticulously define and operationalize the target behavior, specifying how it will be measured. This may involve the use of checklists, rating scales, video recordings, or other data collection tools. In school psychology research, systematic data collection is imperative to make informed decisions about interventions and individualized education plans (IEPs). It provides school psychologists with empirical evidence to track the progress of students, assess the effectiveness of interventions, and adapt strategies based on the collected data.
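As one concrete illustration of operationalizing a target behavior, the sketch below summarizes a partial-interval recording session, a common direct-observation method in which an interval is scored 1 if the behavior occurred at any point during that interval. The observation data and function name are hypothetical.

```python
# Partial-interval recording sketch: each element is one observation interval,
# scored 1 if the target behavior occurred at any time during the interval, else 0.
def percent_intervals_with_behavior(intervals):
    """Convert interval scores to the percentage of intervals containing the behavior."""
    if not intervals:
        raise ValueError("No intervals recorded")
    return 100.0 * sum(intervals) / len(intervals)

# One hypothetical 10-minute observation split into 20 thirty-second intervals.
session_scores = [1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0]
print(f"Target behavior occurred in {percent_intervals_with_behavior(session_scores):.0f}% of intervals")
```

The resulting percentage for each session is the kind of single data point that is then graphed across baseline and intervention phases.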

Visual analysis is a core element of interpreting data in single-case experiments. Researchers plot the data in graphs, creating visual representations of the behavior across phases. By inspecting the graphed data, researchers examine features such as level, trend, variability, immediacy of the effect, overlap between phases, and consistency of data patterns across similar phases to judge whether the intervention had a noticeable effect.
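A minimal plotting sketch of the kind of graph used in visual analysis is shown below, using the hypothetical AB data from the earlier sketch and assuming the matplotlib library is installed. The vertical dashed line marks the phase change so that level, trend, and variability can be inspected within and across phases.

```python
# Visual-analysis sketch: plot hypothetical AB data with a phase-change line.
import matplotlib.pyplot as plt

baseline = [42, 38, 45, 40, 44]
intervention = [55, 61, 66, 70, 68, 72]
sessions = list(range(1, len(baseline) + len(intervention) + 1))

plt.figure(figsize=(7, 4))
plt.plot(sessions[:len(baseline)], baseline, "o-", label="Baseline (A)")
plt.plot(sessions[len(baseline):], intervention, "s-", label="Intervention (B)")
plt.axvline(x=len(baseline) + 0.5, linestyle="--", color="gray")  # phase change
plt.xlabel("Session")
plt.ylabel("Percent of intervals on task")
plt.title("Hypothetical AB-design data")
plt.legend()
plt.tight_layout()
plt.show()
```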

In addition to visual analysis, statistical methods are increasingly employed in single-case experiments to enhance the rigor of analysis. Effect size measures, such as standardized mean differences in the spirit of Cohen’s d or nonoverlap indices such as Tau-U, quantify the magnitude of change between the baseline and intervention phases, providing a quantitative summary of the treatment’s impact. Procedures such as randomization tests can also be used to evaluate whether the change in behavior across phases is larger than would be expected by chance, aiding in the determination of whether the intervention had a meaningful effect.
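The following sketch shows, for the hypothetical data used above, how two commonly reported quantitative summaries can be computed: a standardized mean difference in the spirit of Cohen’s d, and a simple nonoverlap index (Tau, the core pairwise comparison underlying Tau-U without its baseline-trend corrections). Both functions are illustrative implementations, not a reference library.

```python
# Effect-size sketch for an A-versus-B comparison (hypothetical data).
from statistics import mean, stdev

baseline = [42, 38, 45, 40, 44]
intervention = [55, 61, 66, 70, 68, 72]

def cohens_d(a, b):
    """Standardized mean difference using a pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2)
    return (mean(b) - mean(a)) / pooled_var ** 0.5

def tau_nonoverlap(a, b):
    """Pairwise nonoverlap: (improving pairs - deteriorating pairs) / all A-B pairs."""
    pairs = [(x, y) for x in a for y in b]
    improving = sum(y > x for x, y in pairs)
    deteriorating = sum(y < x for x, y in pairs)
    return (improving - deteriorating) / len(pairs)

print(f"Cohen's d (B vs. A): {cohens_d(baseline, intervention):.2f}")
print(f"Tau nonoverlap (B vs. A): {tau_nonoverlap(baseline, intervention):.2f}")
```

With these hypothetical data, every intervention observation exceeds every baseline observation, so Tau equals 1.0; real data sets typically show partial overlap and smaller values.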

Visual analysis and statistical methods complement each other, enabling researchers in school psychology to draw more robust conclusions about the efficacy of interventions. These methods are valuable in making data-driven decisions regarding students’ educational and behavioral progress.

Single-case experimental designs are not without their challenges and limitations. Researchers must grapple with issues such as the potential for confounding variables, limited generalizability to other cases, and the need for careful control of extraneous factors. In school psychology, these challenges are compounded by the dynamic and diverse nature of educational settings, making it essential for researchers to adapt the methodology to specific contexts and populations.

Moreover, ethical considerations loom large in school psychology research. Researchers must adhere to strict ethical guidelines when conducting single-case experiments involving students. Informed consent, confidentiality, and the well-being of the participants are paramount. Ethical considerations are especially critical when conducting research with vulnerable populations, such as students with disabilities or those in special education programs. The ethical conduct of research in school psychology is pivotal to maintaining trust and ensuring the welfare of students and their families.

Overall, the application of single-case experimental design in school psychology research is a powerful approach for addressing individualized educational and behavioral needs. By emphasizing systematic data collection, employing visual analysis and statistical methods, and navigating the inherent challenges and ethical considerations, researchers can contribute to the advancement of knowledge in this field while ensuring the well-being and progress of the students they serve.

In conclusion, this article has provided a comprehensive exploration of Single-Case Experimental Design (SCED) and its vital role within the domain of school psychology. Key takeaways from this article underscore the significance of SCED as a versatile and invaluable research methodology:

First and foremost, SCED is a methodological cornerstone for investigating individual behavior and psychological phenomena. Through meticulous observation and data collection, it enables researchers to gain deep insights into the idiosyncratic needs and responses of students in educational settings.

The significance of SCED in school psychology is pronounced. It empowers school psychologists to design and assess tailored interventions, evaluate the effectiveness of educational programs, and make data-driven decisions that enhance the quality of education for students with diverse needs. Whether it’s tracking progress, assessing the efficacy of behavioral interventions, or individualizing education plans, SCED plays an instrumental role in achieving these goals.

Furthermore, the article has illuminated three primary types of single-case experimental designs: AB, ABA, and Multiple Baseline. These designs offer the flexibility to investigate the effects of interventions and assess their reversibility when required. Such methods have a direct and tangible impact on the daily practices of school psychologists, allowing them to optimize support and educational strategies.

The importance of systematic data collection and measurement, the role of visual analysis and statistical methods in data interpretation, and the acknowledgment of ethical considerations in school psychology research have been underscored. These aspects collectively serve as the foundation of SCED, ensuring the integrity and reliability of research outcomes.

As we look toward the future, the potential developments in SCED are promising. Advances in technology, such as wearable devices and digital data collection tools, offer new possibilities for precise and efficient data gathering. Additionally, the integration of SCED with other research methodologies, such as mixed-methods research, holds the potential to provide a more comprehensive understanding of students’ educational experiences.

In summary, Single-Case Experimental Design is a pivotal research methodology that bridges the gap between rigorous scientific inquiry and the real-world needs of students in school psychology. Its power lies in its capacity to assess, refine, and individualize interventions and educational plans. The continued application and refinement of SCED in school psychology research promise to contribute significantly to the advancement of knowledge and the enhancement of educational outcomes for students of all backgrounds and abilities. As we move forward, the integration of SCED with emerging technologies and research paradigms will continue to shape the landscape of school psychology research, leading to more effective and tailored interventions for the benefit of students and the field as a whole.

References:

  • Barlow, D. H., & Nock, M. K. (2009). Why can’t we be more idiographic in our research? Perspectives on Psychological Science, 4(1), 19-21.
  • Cook, B. G., & Schirmer, B. R. (2003). What is N of 1 research? Exceptionality, 11(1), 65-76.
  • Cooper, J. O., Heron, T. E., & Heward, W. L. (2020). Applied behavior analysis (3rd ed.). Pearson.
  • Kazdin, A. E. (1982). Single-case research designs: Methods for clinical and applied settings. Oxford University Press.
  • Kratochwill, T. R., Hitchcock, J. H., Horner, R. H., Levin, J. R., Odom, S. L., Rindskopf, D. M., & Shadish, W. R. (2010). Single-case intervention research design standards. Remedial and Special Education, 31(3), 205-214.
  • Levin, J. R., Ferron, J. M., Kratochwill, T. R., Forster, J. L., Rodgers, M. S., Maczuga, S. A., & Chinn, S. (2016). A randomized controlled trial evaluation of a research synthesis and research proposal process aimed at improving graduate students’ research competency. Journal of Educational Psychology, 108(5), 680-701.
  • Morgan, D. L., & Morgan, R. K. (2009). Single-participant research design: Bringing science to managed care. Psychotherapy Research, 19(4-5), 577-587.
  • Ottenbacher, K. J., & Maas, F. (1999). The effect of statistical methodology on the single subject design: An empirical investigation. Journal of Behavioral Education, 9(2), 111-130.
  • Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Houghton Mifflin.
  • Sidman, M. (1960). Tactics of scientific research: Evaluating experimental data in psychology. Basic Books.
  • Vannest, K. J., Parker, R. I., Gonen, O., Adigüzel, T., & Bovaird, J. A. (2016). Single case research: web-based calculators for SCR analysis. Behavior Research Methods, 48(1), 97-103.
  • Wilczynski, S. M., & Christian, L. (2008). Applying single-subject design for students with disabilities in inclusive settings. Pearson.
  • Wong, C., Odom, S. L., Hume, K. A., Cox, A. W., Fettig, A., Kucharczyk, S., & Schultz, T. R. (2015). Evidence-based practices for children, youth, and young adults with autism spectrum disorder: A comprehensive review. Journal of Autism and Developmental Disorders, 45(7), 1951-1966.
  • Kratochwill, T. R., & Levin, J. R. (2018). Single-case research design and analysis: New directions for psychology and education. Routledge.
  • Hall, R. V., & Fox, L. (2015). The need for N of 1 research in special education. Exceptionality, 23(4), 225-233.
  • Shadish, W. R., & Sullivan, K. J. (2011). Characteristics of single-case designs used to assess intervention effects in 2008. Behavior Research Methods, 43(4), 971-980.
  • Campbell, D. T., & Stanley, J. C. (2015). Experimental and quasi-experimental designs for research. Ravenio Books.
  • Kazdin, A. E. (2011). Single-case research designs: Methods for clinical and applied settings (2nd ed.). Oxford University Press.
  • Therrien, W. J., & Bulawski, J. (2019). The use of single-case experimental designs in school psychology research: A systematic review. Journal of School Psychology, 73, 92-112.
  • Gavidia-Payne, S., Little, E., & Schell, G. (2018). Single-case experimental design: Applications in developmental and behavioral science. Routledge.

Single Case Research Design

Stefan Hunziker & Michael Blankenagel

This chapter addresses the peculiarities, characteristics, and significant fallacies of single-case research designs. A single case research design is a collective term for an in-depth analysis of a small non-random sample. The defining focus of this design is depth; this characteristic distinguishes case study research from other research designs that treat the individual case as a relatively insignificant and interchangeable aspect of a population or sample. Researchers also find relevant information on writing a single case research design paper and learn about typical methods used for this research design. The chapter closes by referring to overlapping and adjacent research designs.

Hunziker, S., Blankenagel, M. (2024). Single Case Research Design. In: Research Design in Business and Management. Springer Gabler, Wiesbaden. https://doi.org/10.1007/978-3-658-42739-9_8
