New Low For February Bar takers

maxmartin
Posts: 611
Joined: Tue Nov 29, 2011 5:41 pm

Re: New Low For February Bar takers

Postby maxmartin » Mon Apr 17, 2017 10:53 am

happyhour1122 wrote:taking 25 questions out is just unfair. We had more opportunities to get it right!
Since we don't get any points taken off for wrong answers, I still think we had a better chance of gaining points by answering more questions. I don't see how this isn't going to make any difference.


Personally, my score dropped 9 points from July to this February administration.

happyhour1122
Posts: 1059
Joined: Wed Feb 08, 2017 5:08 pm

Re: New Low For February Bar takers

Postby happyhour1122 » Mon Apr 17, 2017 11:20 am

maxmartin wrote:[…] Personally, my score dropped 9 points from July to this February administration.


Oh, I didn't know you got your score. Did that 9-point drop fail you? (I hope not...)

maxmartin
Posts: 611
Joined: Tue Nov 29, 2011 5:41 pm

Re: New Low For February Bar takers

Postby maxmartin » Mon Apr 17, 2017 11:33 am

happyhour1122 wrote:[…] Oh, I didn't know you got your score. Did that 9-point drop fail you? (I hope not...)


I disclosed my result in the Feb results thread. Luckily, it did not fail me, and I used the same materials to prepare for both MBE exams.

happyhour1122
Posts: 1059
Joined: Wed Feb 08, 2017 5:08 pm

Re: New Low For February Bar takers

Postby happyhour1122 » Mon Apr 17, 2017 11:48 am

maxmartin wrote:[…] I disclosed my result in the Feb results thread. Luckily, it did not fail me, and I used the same materials to prepare for both MBE exams.


Congrats!

maxmartin
Posts: 611
Joined: Tue Nov 29, 2011 5:41 pm

Re: New Low For February Bar takers

Postby maxmartin » Mon Apr 17, 2017 11:57 am

happyhour1122 wrote:[…] Congrats!

Thank you. Good luck to you too.

JoeSeperac
Posts: 60
Joined: Thu Feb 16, 2017 3:30 pm

Re: New Low For February Bar takers

Postby JoeSeperac » Mon Apr 17, 2017 12:10 pm

InterAlia1961 wrote:Does anyone know what a 136.147 scaled score even means? As in, how many of the 200 questions were correct to end up with a given scaled score? Or even a raw score? I hate how confusing this all is, and I really have never given it any thought until recently. Thank you in advance.


The bar examiners don't want you to know your raw scores. For example, if you took the exam twice and averaged 8/25 correct on the Property MBE questions (32%), you might take a calculated risk and ignore studying Real Property on your 3rd attempt since you would get about 25% correct anyway just by guessing (this really is something only a low-ability examinee would do).

NY stopped releasing raw MBE scores in 2006 (although they inadvertently released them on 2013 score reports). I made a calculator that converts these raw NY MBE scores to scaled NY MBE scores:
http://www.seperac.com/zcalc-mbe-febjuly.php.

You can see that the scale varies for each exam and it is non-linear (e.g. a raw MBE score of 100 leads to a scaled MBE score of about 117 while a raw MBE score of 150 leads to a scaled MBE score of about 158). While this calculator doesn't reflect the MBE scale for other states, it is probably close.
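For a rough feel of that raw-to-scaled relationship, the two example points above can be joined with a straight line. This is only an illustration: the anchor values come from the post, the function name is made up, and the real scale is non-linear and recomputed for every administration.

```python
def approx_scaled(raw):
    """Hypothetical straight-line sketch through the two example points
    from the post (raw 100 -> ~117, raw 150 -> ~158). The real MBE scale
    is non-linear and differs by exam, so treat this as a toy."""
    slope = (158 - 117) / (150 - 100)  # 0.82 scaled points per raw point
    return 117 + slope * (raw - 100)

print(approx_scaled(100))  # 117.0
print(approx_scaled(125))  # roughly 137.5, midway between the two anchors
print(approx_scaled(150))  # roughly 158
```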

Having fewer graded MBE questions reduces the reliability of the MBE component of the exam, but it shouldn't affect pass rates, although pass rates will probably continue to decline as I explain here:
http://www.seperac.com/index.php#UPCOMING

Since only 175 MBE questions will be graded on the upcoming UBE exam, you want to answer a minimum of 110 of these graded questions correctly (63%). According to a 1997 study entitled Basic Concepts in Item and Test Analysis, "the ideal percentage of correct answers on a four-choice multiple-choice test is not 70-90%. According to Thompson and Levitov (1985), the ideal difficulty for such an item would be halfway between the percentage of pure guess (25%) and 100% (25% + {(100% - 25%)/2}). Therefore, for a test with 100 items with four alternatives each, the ideal mean percentage of correct items, for the purpose of maximizing score reliability, is roughly 63%."
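That 63% figure follows directly from the Thompson and Levitov rule quoted above; here is the arithmetic as a quick sketch (the function name is mine, not from any of these sources):

```python
def ideal_difficulty(n_choices):
    """Thompson & Levitov (1985) heuristic: the ideal proportion correct
    sits halfway between pure-guess accuracy and 100%."""
    guess = 1 / n_choices
    return guess + (1 - guess) / 2

p = ideal_difficulty(4)  # four-choice items, as on the MBE
print(p)                 # 0.625, i.e. roughly 63%
print(p * 175)           # 109.375 -> aim for about 110 of 175 graded questions
```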

If you failed the F17 UBE and want to know your raw scores, click here:
http://www.seperac.com/scoreform.php

JDNE
Posts: 23
Joined: Fri Mar 10, 2017 3:01 pm

Re: New Low For February Bar takers

Postby JDNE » Mon Apr 17, 2017 12:41 pm

Wow, @JoeSeperac, you've posted some good stuff. If I flunk Mass. I'll do it again in July. If I ever take a UBE state I will be in touch with you and take your course. Thanks.

ur_hero
Posts: 136
Joined: Sat Nov 19, 2016 6:52 pm

Re: New Low For February Bar takers

Postby ur_hero » Mon Apr 17, 2017 3:33 pm

happyhour1122 wrote:taking 25 questions out is just unfair. We had more opportunities to get it right!
Since we don't get any points taken off for wrong answers, I still think we had a better chance of gaining points by answering more questions. I don't see how this isn't going to make any difference.


Not sure I follow the logic of having "more opportunities to get it right"?

Fewer graded questions just means each one is weighted more heavily, right? I suppose a test with more questions may tend to find a more accurate average . . . but at a certain point I feel it's pretty accurate, and the difference between 175 and 190 graded questions doesn't seem all that significant.

I think the issue isn't the "opportunities to get it right," but rather whether the NCBE has the ability to manipulate what those 25 questions are in a way that affects either performance during the test or the apportioning of which ones count and the point values per question after the fact. It might all be fine and addressed in scaling, or it may help or hurt examinees - I just don't know.

Also, we may have seen performance go down this cycle by a percent, but it's impossible for us to know whether that's attributable to the quality of examinees, the difficulty of the test itself, or the change in "pretest" questions. Likely we'll be kept in the dark on much of the information needed to actually determine this, and in any case it's difficult to infer causation from correlation with so many competing variables that are hard to measure. Likely, examinees will continue to be blamed for being less qualified, dumber, less motivated, having lower LSATs, blah blah blah, etc.

happyhour1122
Posts: 1059
Joined: Wed Feb 08, 2017 5:08 pm

Re: New Low For February Bar takers

Postby happyhour1122 » Mon Apr 17, 2017 3:43 pm

ur_hero wrote:[…] Not sure I follow the logic of having "more opportunities to get it right"? Fewer graded questions just means each one is weighted more heavily, right? […]


If there is any way to rebut my assumption, PLEASE PLEASE PLEASE help me undo my logic that if we have 25 more questions -> we have more opportunities to get it right. I am already pissed (but can't do anything about it, so I'll just accept it, still bitter).

I think this logic comes from my practice questions.
Sometimes I would do Exam A, 75 questions. Sometimes I would do Exam B, 100 questions.
I get a higher score on Exam B. Why? Because I answered more questions.

HELP ME UNDO THIS LOGIC!! :(

Estecontre
Posts: 110
Joined: Wed Oct 12, 2016 6:03 pm

Re: New Low For February Bar takers

Postby Estecontre » Mon Apr 17, 2017 3:54 pm

happyhour1122 wrote:[…] If there is any way to rebut my assumption, PLEASE PLEASE PLEASE help me undo my logic that 25 more questions means more opportunities to get it right. […] HELP ME UNDO THIS LOGIC!! :(


Let me try: stop doing a straight swap. You're assuming everything else stays the same (the material, the difficulty of the questions, the testing conditions), and your math is too simple. Both Exam A and Exam B are worth 100 points total. The biggest difference is that each correct answer on Exam B is worth exactly one point, while each correct answer on Exam A is worth roughly 1.33 points (100/75). Yes, each question you miss on Exam A costs you more, but it works in the opposite direction too. Let's say you were shooting for a score of 80. You would need 80 correct answers to get that score on Exam B, while on Exam A you would need only 60.

Hope that helps undo your logic.
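The point-per-question arithmetic above can be sketched like this (the helper name and numbers are just for illustration, assuming both exams are scaled to 100 points):

```python
def correct_needed(target_points, n_questions, total_points=100):
    """Each question on an n-question exam worth total_points in all is
    worth total_points / n_questions, so hitting target_points takes
    target_points * n_questions / total_points correct answers."""
    return target_points * n_questions / total_points

print(correct_needed(80, 100))  # 80.0 correct on the 100-question exam
print(correct_needed(80, 75))   # 60.0 correct on the 75-question exam
```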

happyhour1122
Posts: 1059
Joined: Wed Feb 08, 2017 5:08 pm

Re: New Low For February Bar takers

Postby happyhour1122 » Mon Apr 17, 2017 3:57 pm

Estecontre wrote:[…] Hope that helps undo your logic.


The last sentence cleared it up.
You just passed your bar. :P

JoeSeperac
Posts: 60
Joined: Thu Feb 16, 2017 3:30 pm

Re: New Low For February Bar takers

Postby JoeSeperac » Mon Apr 17, 2017 4:44 pm

Fewer questions means lower reliability, so the examinees who may benefit from this change are the ones with lower MBE ability. For example, if you are playing someone in 1-on-1 basketball, the weaker player is better off agreeing to a short game against a stronger player (e.g. 7 points to win) because the weaker player has a better chance of getting lucky and winning. Conversely, if you are the stronger player, you want to agree to a longer game (e.g. 21 points to win) to give yourself the best chance at winning (b/c the longer the game, the more opportunities you have to demonstrate your higher-ability). However, the MBE component of the exam is so reliable to begin with that the change from 190 to 175 is probably a non-issue (e.g. the weaker basketball player probably has a negligibly better chance of winning by playing to 175 versus playing to 190).
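The basketball analogy is easy to check with a small simulation. Everything here is hypothetical: assume the weaker player wins each individual point with probability 0.45, independently, and see how often they win a short game versus a long one.

```python
import random

def weaker_player_win_rate(p_point, points_to_win, trials=100_000):
    """Monte Carlo estimate of how often the weaker player wins a
    race-to-N game, taking each point with probability p_point."""
    wins = 0
    for _ in range(trials):
        weak = strong = 0
        while weak < points_to_win and strong < points_to_win:
            if random.random() < p_point:
                weak += 1
            else:
                strong += 1
        wins += weak == points_to_win
    return wins / trials

random.seed(0)
short = weaker_player_win_rate(0.45, 7)
long_ = weaker_player_win_rate(0.45, 21)
print(short, long_)  # the upset chance shrinks as the game gets longer
```

Shorter games leave more room for luck, which is the reliability point: more scored items give the stronger examinee more chances to show their ability.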

jtp191
Posts: 20
Joined: Fri Oct 30, 2015 4:29 pm

Re: New Low For February Bar takers

Postby jtp191 » Mon Apr 17, 2017 5:09 pm

JoeSeperac wrote:Fewer questions means lower reliability, so the examinees who may benefit from this change are the ones with lower MBE ability. […] However, the MBE component of the exam is so reliable to begin with that the change from 190 to 175 is probably a non-issue.


So, using that logic, do you think the NCBE is trying to increase bar passage rates to pick up more of the people who may be just missing the cutoff?

jtp191
Posts: 20
Joined: Fri Oct 30, 2015 4:29 pm

Re: New Low For February Bar takers

Postby jtp191 » Mon Apr 17, 2017 5:11 pm

jtp191 wrote:[…] So, using that logic, do you think the NCBE is trying to increase bar passage rates to pick up more of the people who may be just missing the cutoff?


And if so, I wonder what the passage rates would have been had they left it at 190. Only the NCBE knows for sure.

JoeSeperac
Posts: 60
Joined: Thu Feb 16, 2017 3:30 pm

Re: New Low For February Bar takers

Postby JoeSeperac » Mon Apr 17, 2017 6:42 pm

I think that one of NCBE's most important jobs is to tell the bar examiners how this administration's pool of applicants compared to prior administrations. The bar examiners then scale the scores to reflect the knowledge level of that pool of examinees. Scaling is done to increase the reliability of the exam. Your individual score on a particular exam doesn't affect your final score – if you had the 50th best MBE score and 90th best Essay score before scaling, you will still have the 50th best MBE score and 90th best Essay score after scaling. However, to account for the lower reliability of the non-MBE components of the exam, these components are scaled to the mean MBE for that administration. For example, nationally, the average MBE scaled score on July exams (from 1974-2012) was 142.1 while the average MBE on February exams (from 1974-2012) was 136.7. For scaling, the lower the mean MBE, the lower the scale. The July 2009 mean MBE score in New York was approximately 143.7. If you had an average scaled score of 50 on the Essays/MPT on the July 2009 exam, your Essays/MPT Common Scaled Score would have been 708.5. The February 2009 mean MBE score in New York was approximately 128.7. If you had an average scaled score of 50 on the Essays/MPT on the February 2009 exam, your Essays/MPT Common Scaled Score would have been 650.4. This is a difference of 58.1 points. Since the Essays/MPT are 50% of your final score, this results in a 29 point difference in final score. This is why the February pass rates are lower. Put simply, the essays/MPT scores are scaled to the MBE because an examinee who scores a 50 in July is more knowledgeable than an examinee who scores a 50 in February (as demonstrated by the higher MBE average in July versus February) and deserves a higher essay grade. According to NCBE, "[s]caling the essays to the MBE is an essential step in ensuring that scores have a consistent meaning over time. When essay scores are not scaled to the MBE, they tend to remain about the same: for example, it is common for the average raw July essay score to be similar to the average February score even if the July examinees are known to be more knowledgeable on average than the February examinees. Using raw essay scores rather than scaled essay scores tends to provide an unintended advantage to some examinees and an unintended disadvantage to others." The Bar Examiner, May 2005 (see http://seperac.com/pdf/740205_testing.pdf) (emphasis added).

You can see this effect in the New York pass rates between July and February. In New York, over the past 5 years, for the demographic of First Timers (NY ABA Law Schools), there were 18,393 examinees who took the exam in July and 15,382 examinees passed the exam for an overall pass rate of 83.6%. In contrast, there were 1,601 First Timers (NY ABA Law Schools) who took the exam in February and 1,074 examinees passed the exam for an overall pass rate of 67.1%. These graduates of New York ABA Schools should possess a similar level of proficiency – accordingly you would expect consistent pass rates among these first time candidates between the July and February exams. However, between 2012-2016, there was a 16.5% difference in pass rates between July and February for First-Time examinees who are graduates of New York ABA Schools. Put simply, a high tide lifts all boats - the higher the MBE mean, the higher the scale.

So to answer your question, with the downward trend in the mean MBE, I expect there to be a downward trend in bar exam pass rates until at least 2019, but whether the change from 190 to 175 will reduce the impact of this, I don't know.
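The mechanics described above can be sketched with a toy linear scaling. The real NCBE procedure is more involved; the function and all numbers below are made up purely to show why the same raw essay score lands lower when the pool's MBE mean is lower.

```python
import statistics

def scale_essays_to_mbe(raw_essays, mbe_scaled):
    """Toy essay-to-MBE scaling: give the raw essay scores the same mean
    and spread as this administration's scaled MBE scores. Rank order is
    preserved -- only the scale moves with the MBE mean."""
    e_mean, e_sd = statistics.mean(raw_essays), statistics.pstdev(raw_essays)
    m_mean, m_sd = statistics.mean(mbe_scaled), statistics.pstdev(mbe_scaled)
    return [m_mean + (e - e_mean) / e_sd * m_sd for e in raw_essays]

raw = [40, 50, 60]                                # identical raw essays in both sittings
july = scale_essays_to_mbe(raw, [130, 144, 158])  # July-like pool, MBE mean 144
feb = scale_essays_to_mbe(raw, [122, 136, 150])   # February-like pool, MBE mean 136
print(july[1], feb[1])  # the same raw 50 scales to 144 in "July" but 136 in "February"
```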

jtp191
Posts: 20
Joined: Fri Oct 30, 2015 4:29 pm

Re: New Low For February Bar takers

Postby jtp191 » Mon Apr 17, 2017 7:44 pm

JoeSeperac wrote:[…] So to answer your question, with the downward trend in the mean MBE, I expect there to be a downward trend in bar exam pass rates until at least 2019, but whether the change from 190 to 175 will reduce the impact of this, I don't know.


Thank you Joe for all your insight into this, it really helps.

Jmazz88
Posts: 55
Joined: Mon Dec 10, 2012 10:34 am

Re: New Low For February Bar takers

Postby Jmazz88 » Mon Apr 17, 2017 8:18 pm

JoeSeperac wrote:[…] So to answer your question, with the downward trend in the mean MBE, I expect there to be a downward trend in bar exam pass rates until at least 2019, but whether the change from 190 to 175 will reduce the impact of this, I don't know.


As a first time NY February taker, this is certainly disheartening.

JDNE
Posts: 23
Joined: Fri Mar 10, 2017 3:01 pm

Re: New Low For February Bar takers

Postby JDNE » Mon Apr 17, 2017 8:51 pm

jtp191 wrote:Thank you Joe for all your insight into this, it really helps.

I didn't even know they had added 15 questions on top of the 10 unscored ones I was expecting until I read the threads here after taking the MBE. I can understand how having 12.5% of the questions not count makes it seem like the 25 unscored questions reduce our chances of getting lucky or guessing well, compared with earlier test-takers who faced only 10 unscored questions.

But, in the big picture, why count on getting lucky or making good guesses? We have to know the law for our careers, anyway, so why not just learn it, once and for all, for the Bar? That way nothing they can throw at us in 200 questions can whoop us.

My approach was to just study and understand the explanations to every one of the 2,000-plus practice MBE questions that I had. Every lawyer I know told me the best way to study was just to learn all of those practice questions and explanations. So I did, or at least I tried. Because of that approach, I can say that nothing on the MBE looked foreign or confusing. It was time-consuming, no doubt. But it was easier to do that than to try to figure out how the scaling would work in relation to some number of unscored questions. Best of all, it will help us serve our clients and employers well every day in our careers.

User avatar
PersistentAttorney
Posts: 159
Joined: Thu Oct 13, 2016 9:22 am

Re: New Low For February Bar takers

Postby PersistentAttorney » Tue Apr 18, 2017 6:02 am

JoeSeperac wrote: [quoted in full above]


Thank you very much for your insight into this. However, to me the essay scaling sounds very unfair. It clearly hurts examinees on the borderline between passing and failing: the same performance in a July administration would result in a pass because of the bigger boost the essays get from the generally higher MBE scale. Unless I am getting this wrong, the same performance has a better chance of passing in a July administration than in a February one. That is simply unfair.

TheJuryMustDie
Posts: 25
Joined: Mon Mar 20, 2017 12:27 pm

Re: New Low For February Bar takers

Postby TheJuryMustDie » Tue Apr 18, 2017 6:18 am

PersistentAttorney wrote: [quoted in full above]


Wouldn't one think the bar examiners should recognise that? The unfairness of the scaling method, if such a method is indeed adopted, could not be more apparent. No one should lose or gain anywhere near 15-25 points based only on whether they take the test in February or July.

jtp191
Posts: 20
Joined: Fri Oct 30, 2015 4:29 pm

Re: New Low For February Bar takers

Postby jtp191 » Tue Apr 18, 2017 8:09 am

TheJuryMustDie wrote: [quoted in full above]


It seems to me, after looking at the grading formula on the Seperac site, that it has mostly to do with where you score in relation to other bar takers, not necessarily with the July or February administration. Yes, July MBE scores are higher by a few points, so that will help, but if your essays are not near the mean, your score will be dragged down, possibly significantly, depending on how far you are from the mean in your jurisdiction. I think this formula assumes a 50-50 split of the MBE and MEE scores, though.
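
The weighting point can be made concrete with a hypothetical combination formula (actual jurisdiction formulas vary and are largely unpublished): with a 50-50 split, any swing in the scaled essay component moves the combined score by half as much.

```python
def combined(mbe_component, essay_component, essay_weight=0.5):
    """Hypothetical 50/50 weighting of the two scaled components."""
    return (1 - essay_weight) * mbe_component + essay_weight * essay_component

# Joe's example: a 58.1-point gap in the essay common scale
# (708.5 in July 2009 vs 650.4 in February 2009) becomes a
# ~29-point gap in the combined score at 50% essay weight.
gap = combined(600, 708.5) - combined(600, 650.4)
print(round(gap, 2))  # 29.05
```

The MBE component cancels in the subtraction, which is why only the essay weight (here 0.5) determines how much of the 58.1-point scaling gap survives into the final score.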

JoeSeperac
Posts: 60
Joined: Thu Feb 16, 2017 3:30 pm

Re: New Low For February Bar takers

Postby JoeSeperac » Tue Apr 18, 2017 9:49 am

TheJuryMustDie wrote: [quoted above]


I use 50 to illustrate the difference, but that is not what the average passing score ends up being for each exam. For example, in J15, you needed a score of 48.41 on an essay to have an exactly passing essay score on the J16 exam while for F16, you needed a score of 51.77 to have an exactly passing essay score on the F17 exam (meaning that if you scored a 48.41 in J15 and then a 48.41 in F16, your essay performance declined). But the lower scale will still affect you more in February – in looking at 4,000+ pre-UBE scores I received from examinees from February 2008 to present, the average July final score was 615.5 while the average February final score was 611.5, a difference of 4 points favoring July takers.

TheJuryMustDie
Posts: 25
Joined: Mon Mar 20, 2017 12:27 pm

Re: New Low For February Bar takers

Postby TheJuryMustDie » Wed Apr 19, 2017 5:03 am

JoeSeperac wrote: [quoted in full above]


Hi Joe, I understand. I just don't think it makes complete rational sense to scale the essays wholly to the MBE when countless variables can affect the mean of any given MBE administration. For example, I met candidates in New York who came to the exam only because they feared being sanctioned for repeated withdrawals, and who spent less than 20 minutes randomly bubbling their scantron sheets. Whatever those candidates score would obviously form part of the mean's calculation, wouldn't it? If the number of such candidates is large, scaling the essays to the MBE would seem an unfair penalty on the takers of that administration.

Can you please tell me whether the essays are scaled to the national MBE mean or to the jurisdiction's own MBE mean? And for the UBE states, must they all scale their essays to the MBE, or is the choice of scaling mechanism left to each jurisdiction's discretion?

Many thanks.
Last edited by TheJuryMustDie on Wed Apr 19, 2017 9:55 am, edited 1 time in total.

JoeSeperac
Posts: 60
Joined: Thu Feb 16, 2017 3:30 pm

Re: New Low For February Bar takers

Postby JoeSeperac » Wed Apr 19, 2017 9:32 am

TheJuryMustDie wrote: [quoted in full above]


It's my understanding that every jurisdiction determines its scale based on the MBE scores of the examinees in that jurisdiction rather than using a national scale. I also believe that every UBE jurisdiction scales the MEE/MPT scores to the MBE. It's hard to answer this definitively because the state bar examiners release very little about their methods - I can only make indirect observations from score reports.

As much as you may dislike scaling, it leads to a more reliable score. For example, prior to July 2012, Michigan scaled the essays to the MBE. Then, for the July 2012 exam, the Michigan Board of Law Examiners changed the grading formula and stopped scaling the raw essay points on the Michigan portion of the bar exam. From what I have read, the Michigan bar examiners felt the quality of the Michigan examinees' essays was diminishing while their MBE scores were improving, resulting in higher pass rates because the stronger MBE scores were coupled with the scaling of essay scores to the MBE. What was likely happening was that Michigan examinees were focusing on the MBE portion of the exam at the expense of the written portion. Since the written portion was scaled to the MBE using an equating method, the overall decline in essay performance was masked by the MBE equating. In one of the articles I read, the Michigan Board of Law Examiners complained that the examinees' essays were quite poor compared to prior years. Thus, the Michigan bar examiners stopped scaling their essays to the MBE in July 2012.

However, this produced a precipitous drop in the July 2012 Michigan pass rates (between 1995-2011 the July pass rate was about 75%, and then the July 2012 pass rate was 57% - the lowest it had ever been before that was 59% in 1995). What happened next is exactly what NCBE predicted: the July and February pass rates started to converge, even though the July pool of candidates was considered more knowledgeable (between 1995-2011 the July pass rate was about 75% while the February pass rate was about 67%; between 2012-2014, the July pass rate was about 61% while the February pass rate was about 62%). Without scaling, the 2012-2014 February Michigan examinees were passing at a higher rate than the July examinees, who should be more knowledgeable. In July 2014, Michigan flip-flopped and went back to scaling essays (likely to stop giving an unintended advantage to February examinees).

In the end, with or without scaling, a licensure exam is basically a form of economic protectionism - a bar exam passing score operates as an arbitrary limit on the number of attorneys licensed in a jurisdiction. For example, in a few states (Vermont, North Dakota, Alaska, and South Dakota), fewer than 100 examinees take the state's bar exam each year. With such a small group, one would think it possible for every candidate to be qualified to practice law (if minimal competency were indeed the measure). However, the 20-year average pass rate for these states was between 64% and 82%. In Alaska, the highest pass rate over the past ten years was 70%; in Vermont, 71%; in North Dakota, 83%; and in South Dakota, 94%.

maxmartin
Posts: 611
Joined: Tue Nov 29, 2011 5:41 pm

Re: New Low For February Bar takers

Postby maxmartin » Wed Apr 19, 2017 10:58 am

JoeSeperac wrote: [quoted in full above]

Thank you. Do you have any insight into California's scoring system, especially the new system coming this July?



