CAPE was developed in the late 1990s as a collaborative effort between BYU professors and language departments. CAPE’s ongoing development is overseen by Dr. Jerry Larson, Director of the Humanities Technology and Research Support Center. Originally, CAPE was developed for Windows machines and delivered on CD. Since 2002, however, CAPE has evolved into webCAPE and is now administered completely online.
Spanish was the first language developed, followed by French, German, Russian, ESL, and Chinese. The development of CAPE requires significant involvement and research. Content has to be developed and reviewed, followed by rigorous testing to determine the level and significance of each question. The process must be repeated multiple times until each question is calibrated and weighted according to its difficulty.
Each language has a database of questions that range from 400 (Russian) to as many as 1000 (Spanish). Studies have shown that a student would have to take the exam approximately 6 times to begin to see repeated questions.
How CAPE works
After starting the CAPE exam, the student enters a password and responds to questions regarding his or her previous language experience to initiate a test record file.
Once the record identification information is entered, the computer prepares the student for the test. The first screen briefly explains that the student is to respond to multiple-choice questions by typing and confirming the letter of the correct answer. To ensure the student has understood the instructions, a sample test item is given. After this screen the actual test begins. The computerized adaptive placement exams are designed to provide individualized testing, identifying the student’s ability level with combinations of grammar, reading, and vocabulary questions.
As a student proceeds through the test, the computer selects and displays items based on his or her responses to previous items. The adaptive testing algorithm has been written so that the first six questions serve as “level checkers.” After the first six items, the test begins to “probe” in order to fine-tune the measurement, increasing or decreasing the difficulty by one level after each response. The test terminates when 1) the student incorrectly answers four questions at the same difficulty level, or 2) the student answers five questions at the highest difficulty level possible.
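The termination logic described above can be sketched in code. This is a minimal illustration, not CAPE’s actual implementation: the number of difficulty levels, the starting level, and the level-checker behavior are all assumptions made for the sketch, and the caller supplies a response model in place of a real student.

```python
def run_adaptive_test(answer_correctly, num_levels=10, level_checkers=6,
                      max_misses_per_level=4, max_top_level_answers=5):
    """Sketch of the adaptive loop: difficulty moves one level up or down
    after each response; the test ends after four misses at one level or
    after five questions at the highest level."""
    level = num_levels // 2      # assumed starting difficulty
    misses_at_level = {}         # difficulty level -> incorrect answers so far
    answered_at_top = 0
    asked = 0

    while True:
        asked += 1
        correct = answer_correctly(level)  # response model supplied by caller

        if asked <= level_checkers:
            # First six items act as "level checkers" (placeholder logic:
            # here they simply step the level like the probing phase).
            level = min(num_levels, level + 1) if correct else max(1, level - 1)
            continue

        if level == num_levels:
            answered_at_top += 1
            if answered_at_top >= max_top_level_answers:
                return level     # condition 2: five questions at the top level

        if correct:
            level = min(num_levels, level + 1)
        else:
            misses_at_level[level] = misses_at_level.get(level, 0) + 1
            if misses_at_level[level] >= max_misses_per_level:
                return level     # condition 1: four misses at the same level
            level = max(1, level - 1)
```

For example, a student who answers every item correctly climbs to the top level and finishes under condition 2, while one who misses every item drifts to the lowest level and finishes under condition 1.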
By requiring at least four misses at a given level, the test makes allowances for lucky guesses or inadvertent errors due to lack of concentration, nervousness, or other distractions. To avoid duplicate questions, each question’s index is flagged as it is used during the test. A sequential file is also created to record the student’s performance during the test. At the conclusion of the test, the computer displays the student’s performance level. The student then consults the placement chart (which is determined by the respective language department) that lists the ranges of performance levels corresponding to the various language courses in the curriculum. Thus, the student is immediately advised of the class that appears most suited to his or her ability level.
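The placement-chart lookup amounts to finding the score range that contains the student’s performance level. A minimal sketch follows; the score ranges and course names below are invented for illustration, since in practice each language department sets its own chart.

```python
# Hypothetical placement chart: (low, high, course) score ranges.
# Real cutoffs are determined by each language department.
PLACEMENT_CHART = [
    (0, 14, "Course 101"),
    (15, 29, "Course 102"),
    (30, 44, "Course 201"),
    (45, 100, "Course 202"),
]

def place_student(performance_level):
    """Return the course whose score range contains the performance level."""
    for low, high, course in PLACEMENT_CHART:
        if low <= performance_level <= high:
            return course
    raise ValueError(f"Level {performance_level} is outside the chart ranges")
```

Because the ranges are contiguous and non-overlapping, each performance level maps to exactly one recommended course.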
A survey of BYU students who had taken the CAPE showed that students with little or no computer experience strongly agreed that their limited computer experience had little or no effect on their performance on the exam.