At a recent conference I overheard an instructor boasting that her second-year students were all at the Intermediate High level of oral proficiency. Years of experience and a wealth of test data suggest that this was extremely unlikely. But it is easy to see how she came to this conclusion. She likely used a rubric tied to the ACTFL Oral Proficiency Scale to rate her students' performance on textbook-related tests. She compared the proverbial apples to oranges, describing the results of one kind of test with the proficiency scale of another. Because this is an error I see often, I believe it is important for language educators to understand the distinction.
Are your students 'road-ready'? It is not unusual to find second-year students correctly replying to items like the past-tense questions given on book tests. The questions are directly tied to familiar contexts in the unit being studied. Good students memorize the information and reproduce it well on the corresponding test. We can liken this to performing well on the knowledge portion of a driver's test. Anyone who has taught a teenager to drive will surely agree that simply passing the "written exam" does not qualify the young driver to take your car out alone. We must know whether the student can take what has been learned from the driver's manual and apply it on the road. How PROFICIENT is the student in real-life situations? The ACTFL/SOPI proficiency tests simulate real-life situations and are not based on any given textbook. Their corresponding proficiency scale describes a student's ability to perform in real-life situations, not to repeat what was learned in a recently studied chapter.
Let's look at a specific way I often see this demonstrated with language students: past-tense probes. This language skill requires extensive practice, and cannot be attained in the month or two dedicated to a given unit of the textbook. Typically, students do not perform as well with past-tense probes on proficiency tests because these items require them to apply their learning to a variety of real-life language tasks that are not specifically tied to a given text. While a novice student may be able to conjugate the verbs featured in a given unit, they cannot readily produce the proper verb tenses in connected speech samples.
The "written" test... Most achievement tests determine the percentage of errors students make; the focus is on error correction. With discrete-point questions, students fill in a blank, translate, or add an ending. There is usually only one correct answer. Teachers look for, and reward, perfection: 100%! Good job!
Achievement tests are important tools to guide instructors. By looking at the errors on achievement tests, instructors can focus on and reinforce their students' skills in the areas that are lacking, thus helping students build a strong foundation for learning.
Can you parallel park? Proficiency tests, on the other hand, are performance-based and allow students to demonstrate what they know and what they can do. Their outcomes are judged using a specific scale or rubric. The scale is applied over the student's entire academic life, not just a particular unit, semester, grade, or even degree. Students perform open-ended speaking or writing tasks and make expected errors: more at lower levels and ever fewer at higher levels. Unlike an "A" grade or percentage score on an achievement test, ratings on the ACTFL Proficiency Scale show students their increasing levels of proficiency from elementary school through graduate school. Results on the scale or rubric highlight a student's capabilities at that point in time and what they need to do to reach the next level. Teachers act as coaches to help students attain ever higher goals.
Proceed with caution... Educators should take care not to turn their proficiency test into an achievement test. If the exact same SOPI-type test is given year after year, it is possible that teachers will start to "teach the test" rather than "teach toward the test." When teachers teach specific test items, students will recall responses rather than apply what they have learned to create unique responses to new items. To help avoid this pitfall, schools can develop an item bank from which they select tasks to create their SOPI test each year. In this way, instructors are not burdened with designing a unique test each year; however, the test does not become practiced or stale. Even though you are using different items each year, drawing from an item bank also allows you to track performance data on specific tasks over the long term.
Choose the right vehicle... Both achievement and proficiency testing are useful, but they should be used for different purposes and at different times. The OWL Test Management System can be used to make both types of assessments easier to manage. OWL makes it easy to create items, develop item banks, create tests, score items, and rate speech and writing samples. The OWL Community Library includes samples of SOPI-type tests and task banks in several languages, as well as several rubrics tied to the ACTFL Scale. Perhaps most important, OWL allows you to collect and store the results data from these exams over the life of your program and the careers of your students, raters, and instructors. Deficits can be identified and improvements made in all aspects of your program. For students, an electronic portfolio of performance can be created to motivate them to achieve their language learning goals.
You can visit actfl.org or www.cal.org for specific information on proficiency testing, rating materials, and upcoming workshops.