Those of you who have been reading my columns for many years (which probably means none of you) know that I’ve been a tad skeptical about studies. A committee somehow gets appointed, is told to figure out a problem, spends years “studying” the issue, and then comes back with more questions than answers. My guess is that it’s self-preservation. After all, if people who study issues found solutions, then there’d be no more jobs for people who study the issues that have been solved.
We had a fine example of this last week. The State Bar of California issued a press release announcing that a Committee of Bar Examiners staff memo had presented “two potential options” based on a study by a Ph.D.
“One option, drawing from findings in the study which validate the current pass line of 1440, is to make no change to the bar exam passing score at this time. A second option is to consider an interim passing score of 1414.”
In other words, do something or don’t do something.
I don’t want to sound like I’m bragging, but I’m pretty sure I could have come up with the same recommendations without doing any kind of studying.
Just do something or don’t!
But after all, this was just a press release. Maybe the release writer got it wrong. The staff recommendation and the result of the study couldn’t be that obvious.
The memo is pretty clear: “Retain the status quo – no change to the pass line” or “Adopt an interim revised cut score of 1414.”
And it calls for the standard study conclusion: Do more studies.
The Ph.D. study, though pretty weird, doesn’t actually recommend lowering the passing score (unless I’m missing something). It just says it could be lowered if you’re less worried about unqualified candidates passing the exam.
It also says that the passing score could be raised if you’re less worried about qualified candidates not passing the exam. The “staff,” however, chose to ignore this option.
The study, produced by something called ACS Ventures, isn’t even a study of students and their success. The way it’s described in the report, in excruciating detail, is that they got 20 lawyers and law professors to spend three days deciding whether a bunch of real test answers were not competent, competent, or highly competent.
Then they compared the results with actual scores on the test and set the recommended passing score at the score earned by test takers on the border between competent and not competent.
Think about this for a moment: A bunch of lawyers and professors looked at test answers to decide whether they represented passing grades.
Doesn’t that sound strangely like how tests are graded in the first place?
Shockingly, the result of all this was that the group of 20 came up with a passing score — i.e., the line between competent and not competent — that was almost exactly the same as the actual passing score for the exam.
Again, not to brag, but I could have guessed that without studying.
I don’t recommend reading most of the study unless you really enjoy graphs showing weird things like how the studiers rated the study or the race, age, and nominator breakdown of 20 people.
I do, however, recommend skipping to Appendix D at the very end, where we get a three-page list of comments from panelists. Some of my favorites:
“More time could easily be spent on the practice rating, but I doubt that it would make a difference in the outcome.”
“Not convinced this methodology is valid. Many of us clearly do not know some applicable law and these conclusions may therefore determine that incompetent answers amounting to malpractice are nevertheless passing/competent.”
“I would have liked to know ahead of time that I would be ‘grading’ 40 essays when I came in.”
“Snacks for end of day grading would help :)”
“We need breaks to stretch our bodies and we need to go outside, so our brains can get fresh air.”
I think we may have had a hostage situation.