June 2010 Exam, Fourth AR Game
Posted: Tue Jun 08, 2010 11:13 am
After hearing about people's various reactions to the fourth game on the June 2010 exam, I feel that there are merits to both sides of the argument being discussed. I can't speculate on how LSAC is actually going to respond, but I'm bothered by the proposed solution of dropping the entire fourth game of the analytical reasoning section.
People taking this exam have different strengths and weaknesses. Some people cruise through the logical reasoning sections, some people excel on the reading comprehension section, and some people shine on the analytical reasoning section. The test taker's score derives from his or her performance on these three section types, and their contributions to the overall raw score (for the June 2010 exam, with 100 questions) are as follows:
LR: 50 questions, 50.0%
RC: 27 questions, 27.0%
AR: 23 questions, 23.0%
It is worth pointing out that analytical reasoning currently receives the least amount of love; that is, it makes the smallest contribution to the overall raw score. I don't recall how many questions were in the fourth game, but let's assume that there were six questions for the sake of argument. Consider how the raw score would be impacted if the fourth game were removed from scoring (resulting in 94 scored questions):
LR: 50 questions, 53.2%
RC: 27 questions, 28.7%
AR: 17 questions, 18.1%
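The percentage shift above can be sketched in a few lines of Python. The six-question figure for the fourth game is an assumption, as noted earlier; the section counts come from the June 2010 exam.

```python
# Raw-score contribution of each section type, before and after
# hypothetically dropping the fourth AR game (assumed 6 questions).
SECTIONS = {"LR": 50, "RC": 27, "AR": 23}
DROPPED_AR_QUESTIONS = 6  # assumption: size of the fourth game

def contributions(sections):
    """Return each section's share of the total raw score, in percent."""
    total = sum(sections.values())
    return {name: round(100 * count / total, 1)
            for name, count in sections.items()}

before = contributions(SECTIONS)
after = contributions({**SECTIONS,
                       "AR": SECTIONS["AR"] - DROPPED_AR_QUESTIONS})

print(before)  # {'LR': 50.0, 'RC': 27.0, 'AR': 23.0}
print(after)   # {'LR': 53.2, 'RC': 28.7, 'AR': 18.1}
```

Note that AR's share falls by nearly five percentage points while the other two sections' shares rise, which is the crux of the fairness concern below.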
If you're one of those people who are strong in one section but weak in another, you typically rely on a solid performance in the former to compensate for any deficiencies in the latter. If the proposed solution is implemented, it presents rather disheartening implications for people whose strength lies in the analytical reasoning section.
I believe the fairest solution would be to have the fourth game's difficulty reflected in the score conversion chart, instead of merely reducing the number of scored questions in the analytical reasoning section. I don't know too much about the equating process, but my lay understanding is that the intent is to strongly correlate raw scores with scaled scores (and thus provide scaled scores that are consistent and meaningful) while accounting for the variance in difficulty across tests.
For what it's worth, I do not believe that the fourth game was particularly difficult. However, if LSAC deems that the fourth game was atypical of most games' difficulty, I would definitely feel more comfortable with the solution that I've brought forth rather than the proposed solution of entirely dropping the fourth game from scoring.