The University of North Carolina is considering the adoption of an “Achievement Index” to supplement/replace the GPA, supposedly to combat “grade inflation.” The Achievement Index works like this:
The model, which is Bayesian, calculates “achievement index” scores for each student as latent variables that best explain the grade cutoffs for each class in the university. As a result, it captures several phenomena: (a) if a class is hard and full of very good students, then a high grade is more indicative of ability (and a low grade less indicative of lack of ability); (b) if a class is easy and full of poor students, then a high grade doesn’t mean much; (c) if a certain instructor always gives As then the grade isn’t that meaningful — though it’s more meaningful if the only people who take the class in the first place are the extremely bright, hard-working students. Your “achievement index” score thus reflects your actual grades as well as the difficulty level of the classes you have chosen.
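To make the intuition concrete, here is a toy sketch of the idea (not UNC's actual model, which is a full Bayesian latent-variable estimation): iteratively re-score each student so that a grade counts for more when the class average grade is low and the classmates' estimated scores are high. All student and course names here are made up.

```python
# Toy "achievement index": a grade is adjusted by (a) how far it sits above
# the class's average grade and (b) how strong the classmates are, measured
# by their current estimated scores. Iterating lets strength propagate
# across classes that share students. Hypothetical data, not the real model.

GRADE_POINTS = {"A": 4.0, "B": 3.0, "C": 2.0, "D": 1.0, "F": 0.0}

def achievement_index(transcripts, iterations=50):
    """transcripts: {student: {course: letter_grade}} -> {student: score}"""
    scores = {s: 0.0 for s in transcripts}
    # Build course rosters so we can look up each student's classmates.
    rosters = {}
    for student, courses in transcripts.items():
        for course in courses:
            rosters.setdefault(course, []).append(student)
    for _ in range(iterations):
        new_scores = {}
        for student, courses in transcripts.items():
            adjusted = []
            for course, grade in courses.items():
                classmates = rosters[course]
                avg_grade = sum(GRADE_POINTS[transcripts[t][course]]
                                for t in classmates) / len(classmates)
                avg_score = sum(scores[t] for t in classmates) / len(classmates)
                # Signal from this class: margin over the class average grade,
                # shifted up when the class was full of strong students.
                adjusted.append(GRADE_POINTS[grade] - avg_grade + avg_score)
            new_scores[student] = sum(adjusted) / len(adjusted)
        scores = new_scores
    return scores

# alice gets a B in a class full of strong students; bob gets an A in an
# easy class alongside a weak student. carol and dave link the two classes.
transcripts = {
    "alice": {"eng101": "B"},
    "bob":   {"easy101": "A"},
    "carol": {"eng101": "A", "math": "A"},
    "dave":  {"easy101": "A", "math": "C"},
}
scores = achievement_index(transcripts)
# alice (B in the hard class) ends up ranked above bob (A in the easy class),
# even though her GPA is lower -- exactly the phenomenon described above.
```

This captures points (a) and (c): alice's B outranks bob's A because her classmates prove strong elsewhere, while bob's A in a class where everyone gets As carries little information. The real model replaces this ad-hoc iteration with principled Bayesian inference over grade cutoffs, but the feedback loop between student scores and class difficulty is the same.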
I’m skeptical such a system would ever be adopted, for the simple reason that so few people would actually understand it. I read over the Primer on the Achievement Index (available here) and kind of understood it, but not very well, and I’ve taken graduate-level courses in statistics (well, okay, statistics for social sciences…and I’m still not very good at it). My point is that most of the students affected by the change, and probably most of the faculty, would not really “get it.” If it’s mystifying to most people, how much stock will they place in it? (Of course, one response would be to start requiring statistics classes at the high school level to ensure people would understand it…a long-overdue move if you ask me.)
Aside from that, however, I do really like one aspect of the “AI”: it rewards students who get decent grades in hard classes more than students who get great grades in easy classes. For example, see Figure 4 on page 5 of the Primer (PDF). When I was an undergraduate, a lot of my friends were in Engineering, and they worked much harder than I did for much lower grades, because their classes were harder and had a much wider grade distribution. Of course, you can say employers understand that when they look at a resume, but it would still be better to have some way of officially accounting for it at the university level.
Additionally, it gets at what grades really should do: account for the variation in student performance. When I grade papers, I find the best method is to read all the papers through once and simply sort them from best to worst. Then I go through again and try to find the cut-offs between papers. Obviously there are formal requirements that have to be met or not met to make a certain grade, but once that’s done it’s still surprisingly easy to look at two papers and say, “I can’t honestly say paper B deserves the same grade as paper A — it just wouldn’t be fair to paper A’s author.” At the U of M we have a plus/minus grading scale, which I like because it makes it easy to sort grades with finer distinctions than simply A, B or C (which I had at K-State as an undergrad). It doesn’t solve the problem of grading differences across classes and majors, though, as the AI claims to do.
All of these advantages, however, are distinct from the issue of grade inflation, which is apparently the motivating factor behind adopting the AI. While it’s conventional wisdom these days that grade inflation is a huge problem, is it really happening? This article by Freese, Artis and Powell (via Freese’s website) argues that most of the concern over grade inflation is based on myths that have little empirical evidence. For example, they find: