Judging Chinese Proficiency via WebCAPE
The Chinese reading CAPE is a joint effort of Brigham Young University’s Chinese Flagship Center and its Humanities Technology and Research Support Center. The test is based on the original CATRC (Computer Adaptive Test for Reading Chinese) developed by Ted Yao and Richard Chi. The items for this version of the test were calibrated on several hundred students of Chinese at universities throughout the U.S.
The CATRC CAPE is intended to assist in placing students into college-level Chinese language courses, from the lowest to the highest levels. The BYU Chinese Flagship Center is currently researching correlations between the CATRC and the ACTFL guidelines, as well as various Chinese placement tests such as the HSK, the Chinese government’s official test of Chinese as a second language. These correlations will be included in subsequent releases.
Frequently Asked Questions
Why does the test use both simplified and traditional characters?

Because the CATRC is designed to function as both a placement test and an achievement test, it uses both character sets by design. Lower-level items present prompts in both forms side by side. As learners are pushed higher, they are presented with prompts in one form or the other, on the premise that an advanced reader should be able to handle texts wherever they come from (say, the Mainland, Taiwan, or Hong Kong); otherwise an advanced reader might only be able to read material from one place. Most programs push students to read both forms by the third year, which begins to align with the advanced levels as reflected in the test.
Although it would be possible to offer a version of this test exclusively in one character set, such a test would measure something slightly different. We are considering an all-simplified or all-traditional version as a dedicated placement tool.
How is the test scored?

The original test approximated an ACTFL rating by rating each question on the ACTFL scale according to published descriptors; the level of question that the test taker consistently answered correctly represented the estimated rating. The new test was recalibrated in 2009 using a method of analysis called Item Response Theory (IRT) and placed on a continuous scale. This technique allows for greater accuracy in determining level of proficiency or placement. We are working to formally correlate test outcomes with known reading levels, with reference to ACTFL descriptors.
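To illustrate the general idea behind IRT scoring, the sketch below uses a two-parameter logistic (2PL) item response function and a simple grid search for the maximum-likelihood ability estimate. The item parameters and responses here are hypothetical, and this is not the actual WebCAPE item bank, model, or scoring code:

```python
import math

def prob_correct(theta, a, b):
    """2PL item response function: probability that a test taker with
    ability theta answers an item with discrimination a and difficulty b
    correctly."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def estimate_ability(items, responses):
    """Maximum-likelihood ability estimate via a simple grid search over
    theta in [-4, 4]. items is a list of (a, b) parameter pairs;
    responses holds 1 for correct and 0 for incorrect."""
    grid = [i / 100.0 for i in range(-400, 401)]

    def log_lik(theta):
        ll = 0.0
        for (a, b), r in zip(items, responses):
            p = prob_correct(theta, a, b)
            ll += math.log(p) if r else math.log(1.0 - p)
        return ll

    return max(grid, key=log_lik)

# Hypothetical item bank: (discrimination, difficulty), easiest to hardest.
items = [(1.2, -1.0), (1.0, 0.0), (0.8, 1.0), (1.5, 2.0)]
responses = [1, 1, 1, 0]  # correct on the easier items, wrong on the hardest

theta_hat = estimate_ability(items, responses)
print(round(theta_hat, 2))  # ability estimate on the latent IRT scale
```

Because every examinee is placed on the same latent scale regardless of which items they saw, an adaptive test can report one continuous score even though different test takers answer different questions.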
What does my score mean?

As with any placement test, these numbers mean different things to different programs. Depending on institutional standards, a 400, for example, might correspond to the beginning of the third semester or of some other semester. Also, because programs emphasize reading to different degrees, a 400 could correlate with a much higher level in a program focusing on speaking and listening. The test is meant to suggest placement, not to predict grades.
Plans for future updates include the following:
- Re-entry of the text in updated fonts
- Expanded number of items
- Correlation to ACTFL levels
- Possible all-traditional and all-simplified versions for placement-only use
Sample Results and Ranges (BYU)
- For technical problems, please contact the WebCAPE team at: firstname.lastname@example.org
- Questions on conceptual issues around the exam can be directed to Dr. Dana Bourgerie in the Department of Asian and Near Eastern Languages.
- For information on sales see the BYU Creative Works Catalogue or send an email to: email@example.com.