Can’t Do or Won’t Do? A Go-To Diagnostic Tool

Posted on October 10, 2011


“He’s just not trying.”

How many of us have either heard or used this phrase referring to a student who is not performing as we expect him or her to do? Too often we assume that a student’s low performance is an issue of motivation, never taking into account that perhaps the student does not know how to perform the desired skill or behavior. Upon identifying that a student is not performing at an expected level, either in academic or behavioral areas, the first question that should be asked is whether it’s a skill or motivation deficit. Put more simply, is it a “can’t do” or a “won’t do” problem?

A can’t do/won’t do assessment is one of the most important diagnostic assessments we can perform. It should be a gateway assessment conducted with any student being considered for supplemental (Tier 2) or intensive (Tier 3) supports. Interventions for a skill deficit may be very different from those for a motivation deficit. A can’t do/won’t do assessment will guide a team or individual toward the right hypothesis about the source of the problem and the identification of appropriate interventions.

Witt & Beck (1999) outline the process for determining whether it’s a “can’t do” or “won’t do” problem and provide ideas for interventions depending on the source of the problem. The process for conducting this assessment is described here.

First, a student is identified as being at-risk through a screening process. This may be a probe or set of probes in reading (e.g., DIBELS, AIMSweb) or math (e.g., Monitoring Basic Skills Progress, mathfactcafe.com). A student who falls below a specified standard (e.g., norm, benchmark) is targeted for additional diagnostic assessment.

The student is administered the same probe that he/she received in the screening process. Using the same probe is important as it provides more confidence in attributing any change in performance to motivation and not to any differences that exist between probes.

The evaluator provides instructions to the student. The student is told what his or her previous score was and that beating that score earns a choice from a number of incentives (a treasure box, a list of reinforcers, etc.). It is important to make sure the incentives are motivating and developmentally appropriate. A ninth grade student may be motivated by free time or five minutes on a computer game, while a second grade student may be more motivated by a sticker or small toy. The probe is administered to the student and immediately scored. If the student’s score is better than the previous score, the student is given the incentive.

To determine whether it’s a can’t do or won’t do problem, the two scores are compared. An increase of 15-20% is recommended as a cutoff in this determination (Witt & Beck, 1999). For ease of calculation and consistency in decision-making, I recommend using 15%, although professional judgment is always important, taking into account other factors in the student’s history and performance. If a student’s score increases by 15% or more, it may indicate a motivational issue (won’t do): with the incentive in place, the student was able to perform at a significantly higher level. The change is calculated by subtracting the baseline score (b) from the score with incentive (i), dividing the difference by the baseline score (b), and multiplying the quotient by 100 to get a percentage*.

% change = ((i - b) / b) * 100

If a student’s score shows no increase, or increases by less than 15%, it may indicate a skill deficit (can’t do): even with an incentive in place, the student is unable to perform at a significantly higher level. If there is a significant increase in the score but the higher score is still below the cut score, the student may have both a skill deficit and a motivation deficit. Many students who have struggled over time in a subject may lose motivation to perform in that subject, so it is common to see students who can’t do and won’t do the work. Examples may help clarify.

Ellison, a 3rd grade student, obtained a median score of 60 on the DIBELS Next reading screener in the fall. The school team is using the DIBELS Next benchmark scores, which indicate that 70 words per minute is the fall benchmark for 3rd grade. When given the same passage with an incentive a week later, Ellison read 74 words per minute. This reflects an increase of 23% ([74 - 60] / 60 * 100). The school team determined this to be a “won’t do” problem and, as an intervention, provided Ellison with access to a mystery motivator for improved completion of work during instruction time and weekly progress-monitoring probes.

Monroe, a 5th grade student, obtained a score of 22 correct digits on a Monitoring Basic Skills Progress (1999) basic math computation probe in the winter. The school team used the 25th percentile, which in the winter of 5th grade is 23 correct digits. With an incentive on the same probe, Monroe obtained a score of 24 correct digits. This is an increase of 9% ([24 - 22] / 22 * 100). In this case, Monroe was technically above the cut score with an incentive, but there was not a significant increase (15% or more) in the score, so the team decided that this may be a “can’t do” problem. Using their professional judgment, knowledge of the student’s background, and the observation that he made very few errors but was not fluent in the math skills assessed, the school team decided to provide additional independent practice on the targeted math skills and to progress monitor him to make sure he continued to perform above the cut score.
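For teams that track these comparisons in a spreadsheet or script, the formula and the 15% decision rule can be sketched in a few lines of Python. This is only an illustrative sketch: the function names and the combined “can’t do and won’t do” label are my own, not terminology from Witt & Beck (1999), and professional judgment should always accompany the numeric rule.

```python
def percent_change(baseline, with_incentive):
    """Percent change from the baseline score (b) to the score with incentive (i)."""
    return (with_incentive - baseline) / baseline * 100

def classify(baseline, with_incentive, cut_score, threshold=15.0):
    """Apply the 15% decision rule to a pair of probe scores.

    baseline       -- score from the original screening probe
    with_incentive -- score on the same probe with an incentive in place
    cut_score      -- benchmark or norm the student is expected to reach
    """
    change = percent_change(baseline, with_incentive)
    if change >= threshold:
        # A significant jump with an incentive suggests motivation (won't do),
        # but if the better score is still below the benchmark, a skill
        # deficit may be present as well.
        if with_incentive >= cut_score:
            return "won't do"
        return "can't do and won't do"
    # Little or no improvement even with an incentive suggests a skill deficit.
    return "can't do"
```

Running the two examples above through this sketch reproduces the teams’ decisions: `classify(60, 74, 70)` returns “won’t do” and `classify(22, 24, 23)` returns “can’t do.”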

A can’t do/won’t do assessment is efficient and powerful. In just a few minutes, information is obtained that will drive your hypothesis regarding the student and the selection of an appropriate intervention. A student who “can’t do” the work may need more practice, may need easier work, or may need more help (i.e., instruction) learning the skill. Witt & Beck (1999) provide interventions and teaching strategies that are effective in supporting students who can’t do or won’t do the work. Examples of “won’t do” interventions include goal setting, self-monitoring, a mystery motivator, use of high-interest materials, and shorter practice sessions. “Can’t do” interventions may be more specific to the particular skill deficit (fluency, accuracy, comprehension, etc.). Examples include drill and overcorrection, appropriate prompts, response cards, self-monitoring and charting of performance, peer tutoring, flash cards, and cover, copy, and compare. The reader is referred to Witt & Beck (1999) for more information on these and other interventions.

With this simple tool, educators will be led down the right path toward providing appropriate interventions. Students will benefit from increased motivation and skill, leading to improved student outcomes.

*A fillable form for calculating the formula can be found at http://wiki.updc.org/groups/devinhealey/wiki/73c82/Can’t_Won’t_Do_Assessment_Form.html

Fuchs, L. S., Hamlett, C. L., & Fuchs, D. (1999). Monitoring Basic Skills Progress, Basic Math Manual. Austin, TX: PRO-ED.

Witt, J., & Beck, R. (1999). One Minute Academic Functional Assessment and Interventions: “Can’t” Do It…or “Won’t” Do It? Longmont, CO: Sopris West.

Author: Devin Healey, Program Specialist, Utah Personnel Development Center (UPDC)