AIMSweb Assessment Issues

AIMSweb Assessment Issues in an Urban/Suburban School District in Northeastern Kansas

Any time you begin a new assessment system, there will be "growing pains". In the district where I teach we had our share, but by the end of the second AIMSweb assessment session (Winter 2012), we felt like we had a pretty good handle on it for the next session (Spring 2012). Below, I'll discuss some of the issues we had and the solutions - or attempted solutions - we implemented.

First of all, let me describe our situation. We had a district-wide testing team, which was a first for our district. In fact, this was the first time our district had really implemented an assessment system at all; we had never used a universal screener district-wide before. Previously, each building tested its own students, usually with a formative or diagnostic test. Our team consisted of our building-level interventionists. Our instructional coaches played a role but, for the most part, didn't administer any assessments. They helped organize materials, served as "runners" to get kids to and from the testing area, helped make the schedule, and so on. (It should be noted that by "helped" I mean they basically did it all themselves.)

Our testing team received training in how to administer and score the AIMSweb measures over the course of two to three sessions. We basically followed the AIMSweb Training Workbook, used the video examples to practice scoring, and received information from presenters in person and via webcam. We didn't get training from certified AIMSweb trainers, and I think this made a difference. In my opinion, it would have been helpful to be trained by someone who had actually administered the measures; we had some pretty specific questions that couldn't be answered.

Our first session (Fall 2011) went pretty well considering we knew next to nothing going in. Our main issue was scoring. We didn't agree on what to do about the whole "does the answer have to be written in the blank or not" question on the MCAP (Math Concepts & Applications) and MComp (Math Computation) measures. The instructions are pretty explicit: according to AIMSweb, if the answer isn't in the blank, it is marked incorrect. However, there is a grey area. The standardized instructions read to the students for the MCAP specifically say to write the answer in the blank; the standardized instructions for the MComp don't say to write the answer in the blank at all. Unfortunately, what this meant for us was that some scorers counted those answers wrong and some didn't, so we didn't have consistent scoring the first time around.

The lesson we learned (I hope) is that you need to have those types of issues decided before you even administer the test. We talked about what it meant if a student didn't write the answer in the blank, and decided it meant the student couldn't follow instructions, not that he or she couldn't compute the problem correctly. We asked ourselves, "What are we trying to determine with the test?" We weren't trying to determine whether a student can follow directions, so it didn't matter if the answer was in the blank or not; we just needed to all score the same way. So we eventually decided not to count an answer wrong as long as it appeared somewhere in the box and was correct.

Another issue was that our testing team had no "team leader". We had someone we could call, but no one on site. In hindsight, it would have been helpful to have a "go to" person assigned or appointed to the group - one of the interventionists, or someone who doesn't do any testing. This would have saved us quite a bit of time when we had to figure out how we were going to score the math: that person could have made an executive decision or called someone to find out, and then there would have been no disagreement about what to do in a particular situation.

Another major issue we still have is getting the data out to the teachers and explaining what it means. We did eventually print parent report letters and talk about the results at conferences, but we found that what we really need is some AIMSweb training for the teachers: how to read the data, how to interpret and analyze it, and how to talk to parents about it. Some people aren't familiar with percentile ranks, norms, standardization, and so on.
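To make "percentile rank" concrete for teachers, here is a toy sketch in Python of what the term means - the percentage of scores in a norm sample at or below a given student's score. The norm sample and scores below are made up for illustration; this is not AIMSweb's actual norming procedure, just the basic idea behind the number on the report.

```python
def percentile_rank(score: float, norm_sample: list[float]) -> float:
    """Percentage of norm-sample scores at or below the student's score."""
    at_or_below = sum(1 for s in norm_sample if s <= score)
    return 100.0 * at_or_below / len(norm_sample)

# Hypothetical fall benchmark scores from a norm sample (made up).
norm_sample = [22, 35, 41, 47, 52, 58, 63, 70, 78, 90]

# A student scoring 52 is at or above 5 of the 10 sample scores,
# so this prints 50.0 - "at the 50th percentile".
print(percentile_rank(52, norm_sample))
```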

Progress monitoring is another area that hasn't been perfectly implemented. Some schools progress monitor once a week, some once every two weeks, some barely once a month. There is a lot to consider when progress monitoring. Some of the more important questions are:

- How often do you progress monitor?
- Should you progress monitor or strategically monitor a particular student?
- When progress monitoring, is it really necessary to "drill down" or "test backwards" until you find the level at which to monitor the student? (That, by the way, takes a long time.)
- How do you set the goals for the student, and what formula do you use? (See the sketch after this list.)
- What do you do if the student reaches his or her goal? Is the student automatically dismissed from intervention?
- What if the student isn't on a trajectory to meet the goal? Does he or she automatically go to tier 3 interventions?
- How many data points are necessary to make a decision about a student?
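For the goal-setting and trajectory questions, here is a minimal sketch of one common curriculum-based measurement approach: set the goal as the baseline score plus an expected weekly rate of improvement (ROI) times the number of weeks, then compare the slope of the student's actual weekly scores against that aim line. The ROI, week count, and scores below are hypothetical, and AIMSweb publishes its own norms and goal-setting guidance, so treat this only as an illustration of the arithmetic, not the official method.

```python
def set_goal(baseline: float, weekly_roi: float, weeks: int) -> float:
    """Project an end-of-period goal from a baseline score and a weekly ROI."""
    return baseline + weekly_roi * weeks

def trend_slope(scores: list[float]) -> float:
    """Ordinary least-squares slope of equally spaced weekly scores."""
    n = len(scores)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(scores) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, scores))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

def on_track(baseline: float, weekly_roi: float, weeks: int,
             scores: list[float]) -> bool:
    """Is the observed trend at least as steep as the aim line?"""
    aim_slope = (set_goal(baseline, weekly_roi, weeks) - baseline) / weeks
    return trend_slope(scores) >= aim_slope

# Hypothetical example: a baseline of 40, a goal of +1.5 per week over
# 18 weeks, and six weekly probes collected so far.
scores = [40, 42, 41, 45, 46, 48]
print(set_goal(40, 1.5, 18))          # 67.0
print(on_track(40, 1.5, 18, scores))  # True: trend slope (1.6) >= 1.5
```

A sketch like this also suggests an answer to the "how many data points" question: with only two or three probes, the fitted slope swings wildly, which is one reason most guidance calls for a run of several data points before making a decision.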

Hopefully, you will be able to get some of these questions answered before you begin your district-wide assessment system. It will save you a great deal of time and effort, and you will be able to focus on what matters: what to do with the students who are at risk according to the screener.

For more information regarding AIMSweb testing, go to www.aimsweb.com or visit Mark Davoren's website at http://www.readingandmathintervention.com

Mark Davoren is a reading and math interventionist in a northeastern Kansas urban/suburban school district. He has a Bachelor's Degree in Elementary Education and a Master's Degree in Curriculum & Instruction, and he is a Reading Specialist. Mark has taught for over 15 years and has a wide range of educational experience. His primary areas of expertise are reading intervention, teaching, training, data-based decision-making, and testing and assessment.
