ESL (Reading, Listening, Comprehension)
Exam results can be browsed on-line or retrieved automatically in bulk.
The result management web page allows recent exams to be selected by entering a date. The results can be sorted by date, name, score, exam language, or a user-specified status. Details of the exam, such as survey answers, recommended placement, and summary information about the questions seen, can be accessed by selecting one of the students listed. The basic link for results is webcape.org/results; you will need your account ID and password.
Also, once a list of students has been selected, the corresponding result details can be exported in plain text or in a form suitable for spreadsheets and databases.
It is also fairly simple to have a computer program that you create (or have created for you) retrieve exam results automatically and process them according to your needs. Automated retrieval is explained at: webcape.org/nwcresulthelp.html#autoexport
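For example, a retrieval script might build a query URL and then parse the plain-text export into records. Everything below (the endpoint path, parameter names, and tab-separated column layout) is a hypothetical sketch, not the documented interface; the help page above describes the actual parameters.

```python
import csv
import io
import urllib.parse

# Hypothetical query parameters; the real names are documented at
# webcape.org/nwcresulthelp.html#autoexport
query = urllib.parse.urlencode({"account": "YOUR_ID", "from": "2024-01-01"})
url = "https://webcape.org/results?" + query
# response_text = urllib.request.urlopen(url).read().decode()  # fetch step

# Parse a plain-text (tab-separated) export into records; this sample
# payload and its column layout are invented for illustration.
response_text = "name\tscore\tlanguage\nA. Student\t412\tSpanish\n"
records = list(csv.DictReader(io.StringIO(response_text), delimiter="\t"))
```

From here, `records` is a list of dictionaries keyed by column name, ready to load into a spreadsheet, database, or placement script.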
Due to the adaptive nature of the test, completion time varies. On average, most students take 20–25 minutes to complete the test.
The exam can be as short as five minutes. If a student answers the first 8–11 questions all correctly (or all incorrectly), the exam ends early: the student either has no knowledge of the language (a score of zero) or is above the testing capabilities of webCAPE. The test is designed to place students into the first 3–6 semesters. Why the range? Because placement is determined locally at each institution. The versatility of webCAPE allows it to fit the variety of language programs that exist. We strongly recommend that webCAPE be calibrated (customized) to your own institution.
It is possible for students of uneven ability to take longer than average. The problem is that uneven ability causes the system to thrash. In other words, the system tries to zero in on a score, but then, just as it is settling in on an ability level estimate, the student gives an answer that sends the system seeking a new target.
At this time the system ends the exam under two conditions: when confidence in the estimated ability level is high enough, or else when all questions at the estimated ability level are exhausted. Adding a time or question limit would reduce the accuracy level of the exams.
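The stopping behavior described above can be sketched in miniature. This toy loop is not webCAPE's actual algorithm (the scale, step sizes, and the shrinking step standing in for "confidence" are all invented), but it shows the two ending conditions: a sufficiently confident estimate, or an exhausted item pool. It also shows the thrashing effect: an answer that contradicts the current estimate pushes the target in the other direction.

```python
def run_adaptive_exam(student_answers_correctly, step=8.0, min_step=0.5, pool=30):
    """Toy adaptive loop (illustrative names and scale, not webCAPE's).

    Ends when the estimate is confident enough (step has shrunk below
    min_step) or when the item pool is exhausted.
    """
    ability = 50.0      # running ability estimate on an arbitrary scale
    asked = 0
    while step > min_step and asked < pool:
        correct = student_answers_correctly(ability)  # ask an item at this level
        ability += step if correct else -step         # move toward the student
        step *= 0.7                                   # tighten the estimate
        asked += 1
    return ability, asked
```

A consistent student (here modeled as answering correctly whenever the item sits below a true ability of 70) converges quickly; an erratic answer pattern keeps pushing the estimate back and forth, which is exactly the thrashing described above.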
ESL reading and listening usually take longer due to the large amount of material the student must read or listen to. If a student's ability is also uneven, expect much longer times.
One last note: since professors are experts with very high ability, they will always be able to finish a test quickly (even when answering incorrectly). To get a better feel for the adaptive nature, try answering the questions at the ability level of a first- or second-year student.
Students can retake the exam with exactly the same name, ID and email as a previous attempt by pressing the ‘Continue’ button at the bottom of the registration page. The result of both exams will be recorded and shown on the result page.
If they press the ‘Resume’ button instead, then they will see the results of the most recent completed attempt. If they have not completed the exam and press the ‘Resume’ button, then they will be given the option of continuing the exam in progress or starting a new attempt, either with different information or restarting with the same information.
We recommend establishing and publishing a policy for retakes that makes sense for your situation. Using the highest, lowest, average, first or last score are all viable approaches.
The default note on the student result report page suggests that a score near a cutoff may warrant another attempt. This is why multiple attempts are allowed.
With the external interface it is possible to authenticate students by ID number, which uniquely identifies each student. Multiple attempts can then be handled however you choose.
The only exam with an additional component is ESL, which includes a listening test.
The large number of students selected for the initial item calibration process were carefully chosen to represent the spectrum of ability levels from the novice to the typical, competent third semester college student. The results of the exam were found to correlate closely with the ability level determined by other means, such as standardized written tests and oral proficiency interviews.
Periodically the exam results are reevaluated by giving the test to BYU students taking classes at the various levels, and results of those trials show the test results remained indicative of placement level.
Schools are encouraged and expected to calibrate the cutoff scores in the placement table to match their own combination of curriculum, staff, and typical incoming student ability. After any calibration process, please send us a total of the tests used so that we can deduct them from your invoice.
The calibration procedure starts by giving the exam to some students who have just completed the language courses into which you want to place incoming students. It is best if ten or more students for each course are tested, although fewer may be used if the results are pooled over multiple semesters. It is also better, when possible, to select students who have taken the courses from different instructors.
The scores for these tests are then grouped and ordered by course level. The median score for each group is found, and the cutoff scores are set half-way between them. The high end cutoff score may be approximated from the results of the highest course tested, or, with a large enough sample, may be set between one and two standard deviations (depending on circumstances) above the average score for that course.
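As a sketch, assuming calibration scores are grouped by course in a dictionary, the computation above might look like this. The course numbers, the 1.5 standard deviation choice (a midpoint of the one-to-two range mentioned above), and the function name are all illustrative:

```python
from statistics import mean, median, stdev

def cutoffs_from_calibration(scores_by_course):
    """Sketch of the cutoff procedure described above.

    Each cutoff falls halfway between the median scores of adjacent
    course groups; the high-end cutoff is set 1.5 standard deviations
    above the mean of the highest course tested.
    """
    courses = sorted(scores_by_course)                     # e.g. 101, 102, 201
    medians = [median(scores_by_course[c]) for c in courses]
    cuts = [(lo + hi) / 2 for lo, hi in zip(medians, medians[1:])]
    top = scores_by_course[courses[-1]]
    cuts.append(mean(top) + 1.5 * stdev(top))              # high-end cutoff
    return cuts
```

The returned list reads in course order: scores below the first cutoff place into the lowest course, and scores above the last cutoff exceed the highest course tested.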
After you have determined the cutoff scores, go to webcape.org/admin and log in. Once there, click on Licenses, then on the language you are adjusting.
Occasionally you may wish to reevaluate and possibly change the cutoff scores by repeating the calibration process. This is done to account for shifts in typical incoming student ability, and for changes in teaching staff or curriculum.
WebCAPE is not currently compatible with most screen readers. The institution is expected to make accommodations for the student via other testing methods, just as is done for written tests.
BYU spent over 5 years and a great deal of research to develop and accurately calibrate the weight of each question. These questions and their weight are the value behind the exams. Therefore we cannot allow anyone to see the set of questions. Should the information inadvertently be leaked, the validity of the test would be compromised, which would be to the detriment of everyone who relies on the test for accurate placement.
There are many hundreds of possible questions for each language, and each question was exhaustively examined by a panel of language teaching and testing experts. Then each question’s performance was analyzed by statistical methods in several trials. Still, from time to time a question is removed when it is found that it is not performing up to expectations. Be sure to let us know if you have concerns about a specific question so we can look at it more closely.
Because of this, it is also not possible to modify and/or extend the exam questions. To do so without the proper rigorous testing would decrease the accuracy and validity of the exams.
There is an interface that allows you to program your own student authentication, data collection, and result management system on your own servers. Only the exam itself comes from our server. Link: webcape.org/eei