Enhancing Mathematical Problem-Solving Competence: A Single-Case Study on Response Prompting Intervention for Students with Borderline Intellectual Functioning
Abstract
In this single-case study, we examined the impact of a structured teaching technique, response prompting, on the problem-solving skills of four students with Borderline Intellectual Functioning (BIF). We utilized a multiple baseline design (AB) across participants to evaluate the efficacy of this method. The intervention was conducted by the class teacher in a sixth-grade classroom at a school dedicated to students with learning challenges. The participants, two girls and two boys, received eight to twelve problem-solving sessions that included response prompting. Our results show a marked improvement in the problem-solving performance of all students, evidenced by the successful completion of most tasks. The paper concludes with a critical analysis of these results and proposes directions for future research.
We encounter mathematics in many different ways in everyday life. For example, managing our budgets and personal finances, adjusting recipes in cooking and baking, comparing prices while shopping, organizing our daily schedules, and efficiently using various technologies all require calculation skills. Children and adolescents must develop robust mathematical competencies to integrate into society successfully. This process extends beyond merely comprehending basic concepts such as arithmetic, geometry, algebra, and probability and statistics in an academic context. It also encompasses the ability to translate these principles into practical skills that can be effectively utilized in real-world scenarios (Haigh, 2019; Yates, 2020).
One effective way to prepare students to apply their mathematical knowledge to everyday challenges is to teach specific, step-by-step strategies: structured approaches that make even highly demanding and complex cognitive processes more transparent and manageable. Such strategies emphasize breaking the path to a solution down into small steps, thereby enabling learners to tackle tasks that might otherwise seem insurmountable (Jitendra & Woodward, 2019; Thevenot, 2017; Verschaffel et al., 2020; Zhang, 2023).
Solving word problems is far from simple or trivial and necessitates considerable practice (Tolar et al., 2012). Fortunately, most students acquire the skills to solve even complex word problems, which involve multiple steps and an understanding of various mathematical concepts, during their early secondary school years. However, some do not. Students with developmental disabilities are particularly at risk of failure in this respect. Such conditions arise from genetic risks, critical prenatal environmental influences, or complications during birth that affect brain function, and they manifest in significant cognitive and intellectual delays (Hughes & Yakubova, 2019). Individuals with developmental disabilities are often limited in their ability to process and apply mathematical concepts, as they frequently face difficulties with attention, concentration, and comprehension. Thus, students with developmental disabilities often struggle with organizing knowledge, selecting appropriate problem-solving strategies, and maintaining focus on tasks (Sharp et al., 2023).
This paper reports on an intervention study conducted in Germany, focusing on the largest special needs group in the country: students designated as having “Förderschwerpunkt Lernen” (“Special Learning Needs”). This diagnosis refers to a developmental disability characterized by below-average cognitive skills that lead to low academic achievement but are not severe enough to be classified as intellectual impairment. Specifically, the condition is marked by an IQ between one and two standard deviations below the mean and consistently poor performance across the four core academic areas: reading, spelling, writing, and math (Lauth et al., in press; Streit, 2021). Internationally, this condition is sometimes referred to as “Borderline Intellectual Functioning” (BIF). Although the term is not yet widely used, Wieland and Zitman (2016) argued in their article, “It is time to bring borderline intellectual functioning back into the main fold of classification systems,” that a significant number of students meet the criteria outlined above, emphasizing the need for this categorization. Therefore, the term BIF will be used throughout this paper.
For students with severe academic difficulties, such as those with BIF, mathematical word problems are particularly challenging. Reusser and Stebler (1997) observed that these tasks are completed up to 30% less frequently or less effectively by such learners compared to straightforward arithmetic exercises. Fortunately, the knowledge base on how to effectively help students with developmental disabilities, including BIF, acquire the necessary competencies in solving word problems is growing. In their comprehensive meta-analysis, Lein et al. (2020) synthesized empirical evidence on the impact of relevant interventions for students with learning disabilities or those otherwise academically struggling, finding a moderate positive mean effect size (g = 0.56). The study identified several key factors that enhance the effectiveness of these strategies, including explicit instruction, frequent feedback, scaffolded support, visual representations, and practice and repetition.
Among all possible options, response prompting aligns especially well with these principles, providing structured and immediate guidance, which can be particularly effective in promoting mathematical word problem-solving skills among individuals with BIF (Schorcht et al., 2024). This error-reducing intervention facilitates learning success, especially for students facing academic challenges (Collins et al., 2018). Through the use of prompts, students are guided to respond within a defined “action framework” in which their behaviors are shaped by specific parameters, fostering personalized interaction. Response prompting is highly adaptive, allowing educators to customize questions and prompts to each individual’s level of understanding and to provide immediate, personalized feedback. This method actively engages learners, encouraging critical thinking and participation at their own pace (Wolery et al., 1992).
Response prompting has been used in the past with various populations: for example, with individuals with Autism Spectrum Disorder to teach communication and social skills (e.g., Kim, 2017), with children with Down syndrome to foster independent living routines (e.g., Cariveau & Brown, 2023), and with elderly individuals with dementia to help maintain independence (e.g., Harris et al., 2021). However, it seems reasonable to extend this technique beyond enhancing life and personal functioning competencies to promoting academic achievement as well. This is especially significant for students with BIF, as their challenges are primarily characterized by poor school performance. By addressing their specific learning gaps, response prompting could potentially support their educational development.
Several secondary analyses have evaluated the effectiveness of response prompting, including its impact on academic achievement. In a 2004 literature review, Morse and Schuster included data from 18 published studies, examining demographics, procedures, and outcomes. They considered individuals ranging from preschool-aged children to adults, primarily those with intellectual or learning disabilities, and their review suggests that response prompting is versatile and effective across various ages and challenges. Tekin-Iftar et al. (2018) documented in a meta-analysis that simultaneous prompting, a specific form of response prompting, is highly effective for teaching various skills to individuals with disabilities, given consistent implementation. Brown and Cariveau (2022) published a systematic review summarizing results from 11 articles on prompt delay and simultaneous prompting: both methods proved similarly effective, with simultaneous prompting often leading to fewer errors before mastery.
Despite the substantial knowledge regarding the effectiveness of response prompting across different target variables, the utility of this method for teaching mathematical word problem-solving skills, particularly to individuals with BIF, remains underexplored. A computer-based search of the databases Academic Search Ultimate, APA PsycINFO, ERIC, MEDLINE, and TOC Premier on June 28, 2024, using the terms “response prompting” AND “mathematics” OR “math” OR “math education” OR “mathematics education” in titles did not yield a single hit. Given this gap, we aimed to investigate the effectiveness of response prompting through a controlled single-case study involving four individuals with BIF at a grade level where proficiency in solving mathematical word problems is typically expected: sixth grade. Our research questions were as follows:
Will a response prompting intervention lead to a clear improvement in the ability of sixth-grade students with BIF in a German-speaking educational environment to solve text-based real-world problems?
How do the students perceive the effectiveness and acceptability of the response prompting intervention, as reflected in their feedback provided through a social validity questionnaire?
Methods
Participants and Setting
The study took place in a sixth-grade classroom at a special school catering to students with diverse learning needs, located in a rural small town about 25 miles from a major metropolitan area in Germany. The class teacher chose the participants according to the following criteria:
The students fell within the range for BIF, defined as possessing an IQ between the first and second standard deviation below the average,
They were required to have basic reading skills (defined as not scoring below the 25th percentile), but faced challenges with reading comprehension,
They needed to have foundational knowledge in mathematical skills such as addition and multiplication within the number range up to 100 (T-score not below 40), yet had significant difficulties in solving real-world math problems,
They had to possess a fair command of the subject-specific mathematical vocabulary, defined as not scoring below the 25th percentile, and
They had to be willing to participate in the study.
The Kaufman Assessment Battery for Children (KABC-II; Kaufman & Kaufman, 2004) was used to determine IQ. Applying a widely used German standardized math test (Basic Diagnostic Mathematics for Grades 4-8; Moser Opitz et al., 2010), we gauged each student’s proficiency in basic arithmetic within the number range up to 100 and ensured that the mathematics criterion was met. The extent of difficulties in solving real-world math problems was assessed using proven school-internal materials: at the beginning of the study, none of the students had been able to solve any of the five such tasks presented to them during the three weeks prior to the study’s start. The remaining information was obtained from school records, which included comprehensive documentation of each student’s academic performance, behavior reports, attendance records, results from standardized tests, and individualized education plans. The information related to the criteria for basic reading skills and command of subject-specific mathematical vocabulary was derived from these sources, ensuring a thorough and objective assessment process. However, the standardized tests were not always uniform, a circumstance reflecting the reality in German schools. Nevertheless, we assumed sufficient comparability for participant selection, as it would have been ethically problematic to subject the students to further diagnostics at this point.
Among the suitable candidates from the sixth grade, five individuals were initially selected. However, one was later excluded from the analysis due to excessive absenteeism. The final group included two boys (Student 1 and Student 2) and two girls (Student 3 and Student 4), who were all 12 years old. Each participant was a native German speaker and was officially diagnosed with BIF.
Our study took place within the regular classroom: while the response prompting instruction was being implemented, the other students engaged in quiet individual work, ensuring a predominantly calm environment that allowed for concentrated work without significant distractions. The class was seated at desks in rows, and each participating student worked individually at a group table with the class teacher.
Design
We adopted a single-subject multiple-baseline (AB) design across participants, as outlined by Horner et al. (2005). This approach included a baseline (A) phase followed by a response prompting intervention (B) phase. To permit causal inferences from the AB design, the commencement of the B phase was staggered over time, initiated after 3, 5, 6, and 7 days, respectively. One further student was scheduled to begin the intervention after a 4-day baseline period; however, as previously mentioned, this individual was excluded from the data analysis due to excessive absences. We determined the start time for each participant through random assignment. Evidence for the effectiveness of the treatment is derived from the emergence of enhanced problem-solving skills coinciding with the onset of the staggered intervention phase, suggesting that the intervention was instrumental in fostering these improvements.
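For illustration only, such a random assignment can be emulated with a minimal R sketch; the five planned baseline lengths are taken from the description above, while the seed and object names are our own additions:

```r
# Minimal sketch (base R): randomly assigning the five planned baseline
# lengths (in days) to the five initially selected students.
set.seed(1)  # added here only to make the illustration reproducible
baseline_days <- c(3, 4, 5, 6, 7)
assignment <- setNames(sample(baseline_days), paste("Student", 1:5))
assignment  # each student receives one randomly drawn start point
```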
Materials
For the performance assessment, 15 text-based (real-world) problems were created that had to meet the following criteria:
All tasks were presented to the participants as continuous text of nearly constant length (51-55 words), and none contained a direct arithmetic question.
Both fundamental operations of addition and multiplication were represented in each task, and the range of numbers for calculations was limited to 100.
The tasks were generated from the students’ real-world experiences and were logically coherent in their calculations.
We ensured that all problems presented were of equivalent difficulty. Each task was printed on a separate A4 sheet (8.27 × 11.69 inches), with calculation squares provided below. To measure the ability to solve text-based problems, we employed a rating scale modeled after Troia and Graham (2002). It comprised the following six statements: (1) “The text was read attentively, unknown words were looked up, and key points were highlighted”, (2) “The task was paraphrased in one’s own words”, (3) “The arithmetic operation was identified – the interrogative sentence matched the arithmetic operation”, (4) “The calculation was executed; presented as a math problem and solved accurately”, (5) “The response sentence was constructed and aligned with the arithmetic operation”, and (6) “The solution was verified for consistency (cross-referencing text analysis, question, computation, and response).” Depending on how well each subtask was performed, students could score between 0 and 5 points per statement, so total scores could theoretically range from 0 to 30.
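As a minimal illustration of this scoring logic (the six ratings below are invented for the example, not actual study data):

```r
# Hypothetical ratings (0-5 points) for the six statements of the scale;
# a session score is simply their sum, giving a theoretical range of 0-30.
ratings <- c(reading = 4, paraphrase = 3, operation = 5,
             calculation = 4, answer = 3, verification = 2)
stopifnot(length(ratings) == 6, all(ratings >= 0 & ratings <= 5))
sum(ratings)  # total session score, here 21 of 30
```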
A script for the response prompting intervention was also crafted, following the approach by Hudson et al. (2013). It detailed the sequence of steps and connected the phases of problem-solving with the prepared prompts, their respective explanations, and any related non-target information. For the approach utilizing this script, a range of additional text-based problems was devised, varying from relatively simple to more complex.
To gauge the extent to which the objectives, procedures, and results of an intervention are considered acceptable, relevant, and beneficial by participants, a brief social validity questionnaire was constructed. This included questions such as, “Did you enjoy the intervention?”, “Do you find solving text-based problems easier now?”, and “Would you be willing to participate in another program similar to this one?” Students could indicate their level of agreement on a five-point scale, where 1 meant “I do not agree at all” and 5 meant “I agree completely.”
Additionally, we developed a concise checklist to monitor treatment fidelity, including items like “Sessions were conducted in the designated quiet area,” “The time frame was adhered to,” and “At the end of the session, a performance check was conducted.” All points were formulated as operationally as possible to minimize subjective bias in the responses. During the implementation of the study, the teacher made a concerted effort to reliably follow each point on the checklist. (All materials are available upon request from the first author.)
Dependent Variable
At the end of each baseline and intervention session, the students were asked to work on one of the previously mentioned 15 text-based problems. Selection from this pool occurred randomly, and the teacher ensured that no student was given the same problem twice. Participants had 5 minutes to solve the problem presented to them. During this time, the class teacher closely monitored all activities and recorded, using the aforementioned rating scale, the extent to which each step was correctly executed.
An additional special educator was present throughout the sessions. His role was to observe the proceedings, monitor the measurement of performance, verify the recordings, and intervene if he considered a different evaluation necessary. This occurred approximately once in every ten individual ratings on the six-statement, five-point scale (corresponding to an initial agreement of roughly 90%). After each session, the class teacher and the special educator reflected on the ratings together and discussed any discrepancies until a consensus was reached. This process was designed to ensure the reliability of the measurements.
Procedures
Baseline and intervention sessions were conducted in the students’ regular classroom. Participants and their teacher gathered in a quiet area, partitioned from the rest of the class engaged in independent activities. The order in which students received individual attention from the teacher varied daily. Each session lasted approximately 20 minutes (with minor variations), taking place during the third school period, between 9:15 AM and 11:00 AM.
During the baseline phase, the teacher engaged the students in a game for 15 minutes. Afterward, the learners were expected to work on text-based math problems, which typically took about 5 minutes. The treatment conditions were similar to those during the baseline, with two notable differences:
Each session began with a roughly 3-minute review of the text problem that the child had completed the previous day. The teacher provided brief feedback, highlighting what was done well and identifying areas needing improvement (this brief review is considered an integral part of our response prompting procedure).
Instead of a game, she employed a response prompting procedure very similar to that used in the Hudson et al. (2013) study.
During the initial treatment session, the teacher reviewed the word problem students had completed the previous day as part of their performance assessment. She highlighted the strengths and identified areas for improvement, including explicit praise for their achievements throughout the assessment. Following this, she announced the intervention’s objective: “Today, you will learn how to solve word problems.” She presented a simple example from the collection and explained, “I will show you the first step in solving the task.” Immediately afterward (with no delay), the teacher presented an index card that read “Read the task.” She then demonstrated: “First, you need to read the problem carefully. Let me show you how.” A verbal prompt followed that included non-target information (e.g., “You can underline important details, like signal words such as ‘per,’ ‘each,’ ‘together…’”). After the teacher read the text and underlined the crucial information, she asked the students to do the same. Participants who began working within five seconds and finished within three minutes were commended, with the teacher emphasizing that this partial success was a result of their effort; it did not matter whether the students simply replicated the teacher’s method. If they did not show the desired behavior, they received corrective feedback on what they should do (“No, you must read the task carefully and highlight the important information”), and the explanation and modeling were repeated before they attempted the step again. Once the correct response was given, the teacher proceeded to the next step listed in Table 1, continuing until the time elapsed. Notably, no student ever failed to arrive at the correct response. Afterward, a text problem was given to assess performance, and the teacher then completed the treatment fidelity checklist.
The second lesson opened with a succinct evaluation of the text problem tackled in the previous day’s performance assessment, where the students were praised for their accomplishments. Subsequently, the steps outlined in Table 1 were applied to a new text problem. Upon finishing, the teacher reviewed the entire procedure with the students. To conclude, the mandatory performance assessment was conducted, and the treatment fidelity checklist was filled out.
In the third and subsequent sessions, which all commenced with praise and concluded with performance assessments, the teacher guided students through progressively more difficult text problems. Once a learner had flawlessly executed the routine three times in a row, the teacher scaled back her assistance, merely signaling the prompts listed in Table 1 (“Read the problem”, “Paraphrase the text”, “Develop a solution plan”, etc.). If a student encountered difficulties or made an error, the teacher provided structured feedback while progressively reducing her assistance. This continued until the end of the intervention sessions, with performance measured at the end of each lesson.
Procedural Fidelity
Before the implementation of the treatment, the classroom teacher received training on the response prompting procedure from the second author during a one-hour session. The special educator, previously mentioned as being continuously present and observant, was thoroughly informed about the content and purpose of the intervention. His role was to ensure that the implementation adhered to the protocols previously agreed upon by the classroom teacher and the second author, as outlined in the checklist described above. During the ongoing fidelity observations, adherence was systematically documented. This procedure ensured that the intervention was conducted in strict accordance with the established guidelines. There were no deviations from the planned procedure, meaning that treatment fidelity was maintained at 100%.
Social Validity
At the end of the study, the class teacher handed out the previously mentioned social validity questionnaire to the participants. They completed it independently and promptly returned it upon finishing. The responses were then analyzed by examining the students' ratings for each question on a five-point scale, where a rating of 1 indicated strong disagreement and a rating of 5 indicated strong agreement.
Results
The “scan” package (Wilbert, 2023) for the statistical computing environment R was utilized to compute descriptive statistics and overlap indices and to conduct regression analyses. Figure 1 displays the number of points each of the four participants was awarded for their attempts to solve the math problems within the allotted time. The visual analysis offers convincing evidence of a causal link between the intervention and the enhancement of performance: scores rose markedly with the onset of the intervention, while level, trend, and variability remained consistent within the baseline and intervention phases.


Figure 1. Performance Data in Phase A and B for All Participants.
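For transparency, the following minimal sketch shows how such single-case data can be prepared and inspected with scan; the session values are placeholders chosen only to match Student 1’s reported phase means, not our raw data:

```r
library(scan)

# Hypothetical scores for one participant: three baseline (A) sessions
# followed by twelve intervention (B) sessions. The placeholder values
# reproduce the reported phase means (A: 6.67, B: 22.42) but not the SDs.
student1 <- scdf(c(A = 7, 6, 7,
                   B = 15, 18, 20, 21, 22, 23, 24, 24, 25, 25, 26, 26),
                 name = "Student 1")
describe(student1)  # descriptive statistics per phase
plot(student1)      # standard A-B plot, as in Figure 1
```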
Notably, the four students achieved markedly higher scores on the math problems during the treatment phase than during the baseline period. On average, before the intervention, Student 1 scored 6.67 (SD = 0.58), Student 2 scored 14.00 (SD = 1.87), Student 3 scored 12.33 (SD = 3.20), and Student 4 scored 12.50 (SD = 1.76). During the intervention, Student 1 achieved an average score of 22.42 (SD = 2.84), Student 2 achieved 24.50 (SD = 2.64), Student 3 achieved 23.67 (SD = 4.27), and Student 4 achieved 26.50 (SD = 1.77). Consequently, from baseline to intervention, the performance increase was 236.13% for Student 1, 75.00% for Student 2, 91.97% for Student 3, and 112.00% for Student 4.
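These percentage gains follow directly from the phase means reported above, as this quick check illustrates:

```r
# Phase means as reported above (A = baseline, B = intervention).
mean_A <- c(6.67, 14.00, 12.33, 12.50)
mean_B <- c(22.42, 24.50, 23.67, 26.50)
round((mean_B - mean_A) / mean_A * 100, 2)
# [1] 236.13  75.00  91.97 112.00   (Students 1-4)
```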
Additionally, to gain further insights into the effectiveness of the intervention beyond descriptive analysis, overlap indices were calculated as presented in Table 2. These included the Non-overlap of All Pairs (NAP; Parker et al., 2011), the Percentage of Non-overlapping Data (PND; Scruggs et al., 1987), and Tau-U (Parker et al., 2011). The p-value for the PND was calculated following the methodology of Tarlow and Penland (2016). For Tau-U, we employed the formula that accounts for a trend in phase A (A vs. B + trend in B – trend in A). Strong and significant effect sizes were found for the NAP across all students. The same was true for the PND. As for the Tau-U, it indicated that all participants experienced a large magnitude of change.
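In scan, these indices are available as ready-made functions. A minimal call sketch, assuming the four cases have been combined into one multi-case object (the object name `students` is ours):

```r
# `students` is assumed to be a multi-case scdf object, e.g.
# students <- c(student1, student2, student3, student4)
nap(students)    # Non-overlap of All Pairs (Parker et al., 2011)
pnd(students)    # Percentage of Non-overlapping Data (Scruggs et al., 1987)
tau_u(students)  # includes the "A vs. B + Trend B - Trend A" variant
```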
To complete the visual and quantitative analyses, a regression model was computed for each child and collectively for all participants (level 1 and level 2 analyses), as shown in Table 3. A significant level effect was evident for all participants except Student 3. Although Student 3 did not show a substantial performance increase immediately after the first intervention session, she demonstrated notable improvements after the second and third sessions; such variability can occur, as learners are not always equally motivated or in the same psychological state. Nevertheless, her level also rose considerably after the start of the intervention, and a very pronounced, statistically significant improvement was observed in all other participants. The level 2 analysis revealed a statistically significant increase in overall performance, specifically reflecting the substantial improvement observed immediately after the onset of the treatment. Consequently, the response prompting intervention led to a pronounced enhancement in the students’ mathematical problem-solving performance.
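In scan, the level 1 and level 2 analyses correspond to the piecewise (plm) and hierarchical piecewise (hplm) regression functions; a minimal sketch, reusing the hypothetical objects introduced above:

```r
plm(student1)   # level 1: piecewise regression for a single case
hplm(students)  # level 2: hierarchical model across all four cases
```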
The social validity questionnaire was completed by all four students, and each gave the most favorable rating (5) in every category. This means, for example, that they unanimously believed they could now solve text-based problems better than before, found the time allocated appropriate, and felt adequately supported and frequently praised. Additionally, all of them indicated a willingness to participate in a similar program again.
Discussion
Main Findings
This study investigated the impact of a response prompting procedure on the problem-solving skills of four sixth-grade students with BIF. All participants exhibited substantial improvements that began with the onset of the intervention. As soon as the students were guided closely through response prompting, they were able to solve the text-based problems much more effectively than before. This was evident not only through visual inspection but also through the overlap measures used. A piecewise regression analysis revealed statistically significant advancements in performance levels for all participants except Student 3, whose gains were nonetheless visible in the data trend. Furthermore, the results of the hierarchical piecewise linear regression model, incorporating data from all students, were also substantial. All of this speaks to the high effectiveness of the response prompting method used in this study, and the positive outcomes are consistent with the highly positive feedback recorded in the social validity questionnaire.
Limitations
Our study has certain limitations. Firstly, even though efforts were made to maintain consistency in the difficulty level of text tasks for performance assessment, absolute certainty in this regard cannot be guaranteed. Nevertheless, given that we randomized the task presentation, we believe that any variations in task difficulty did not systematically influence the study’s results.
Furthermore, time constraints prevented us from conducting a follow-up survey, precluding any conclusions about the long-term effects of the intervention. Further research with more comprehensive designs is therefore essential, especially given the limited ability to generalize findings from isolated observations in individual case studies. It is worth noting, however, that response prompting procedures have been shown to produce maintenance effects in other experiments (Hudson et al., 2013; Pennington et al., 2014).
In conclusion, as with all controlled single-case studies, the results do not allow for generalized statements. Reliable conclusions about the effectiveness of response prompting in improving mathematical problem-solving competence can only be drawn from multiple replications across diverse geographical regions. At this stage, we therefore present only preliminary indications of the potential benefits of the intervention method examined in this study.
Practical Implications and Future Research
Prior research in the field of productive writing for students with severe learning difficulties has shown that employing the response prompting technique can lead to significant gains for students (Hudson et al., 2013). Moreover, this single-case study sheds some light on text comprehension skills, which are vital for identifying suitable problem-solving strategies (Pongsakdi et al., 2020).
Through detailed guidance during the complex processing stage, response prompting appeared to be effective in enhancing the participants’ ability to comprehend written text and convert it into mathematical models using appropriate strategies. Consequently, the participants demonstrated notable improvements in their capacity to transform written narratives into mathematical equations. These observations are in line with previous findings from multiple studies that have utilized response prompting methods with students with disabilities (Brown & Cariveau, 2022; Morse & Schuster, 2004; Pennington et al., 2014; Tekin-Iftar et al., 2018). Looking forward, future research should explore more varied task types and include learners identified with different special needs, enhancing the robustness and generalizability of the findings.
The adaptability and practicality of the response prompting method across diverse tasks make it a valuable tool for educators. Moreover, the participants’ responses confirmed the social validity of the intervention for students facing learning challenges. Considering the positive impact of motivational strategies on problem-solving in text-based contexts (Pongsakdi et al., 2019), the significance of such interventions is increasingly evident.
Conclusion
Despite the previously mentioned limitations, the response prompting intervention appeared to support the problem-solving abilities of four sixth graders with BIF in text-based math tasks. This study provides preliminary insights into the development of process-related competencies in similar student groups within German-speaking regions. While the findings suggest that response prompting may be beneficial in improving the quality of text-based problem-solving and in transferring these skills to real-world scenarios among students with BIF, these conclusions should be considered with caution due to the potential for subjectivity introduced by the study’s design. Consequently, to strengthen the reliability of these findings, further research is essential. Future studies should consider and address the aforementioned limitations when attempting to replicate and expand upon this study.
