Think Pieces

Michael's thoughts on the auto industry, its products, and/or this website.

Comparison tests and "best pick" lists:
What are they good for?

Earlier this month Consumer Reports made headlines by announcing that, for the first time ever, Japanese cars had taken all ten spots on its "Top Picks" list. You'll find plenty of such lists out there. Similarly, the major car magazines conduct comparison tests. Visit any automotive forum and you'll find people overjoyed that their car won one or upset that the magazine made a mistake and picked the wrong car. Many people buy a car based on these lists and tests. But how useful are they?

The "best car" -- it always depends

People ask me all the time, "So, what's the best car?" My answer every time: "It depends on what you need and like." This isn't merely a cop-out. As I wrote in my piece on what makes a car a great car, different people have different priorities. Some want a seat that's easy to get in and out of, some want a seat that holds them tightly in turns. Some want a vehicle that is easy to maneuver through traffic, others want as large an interior as possible. There are so many possible criteria that this list could go on virtually forever.

These days there are few bad cars. But there are many bad cars for you.

Criteria and weights

Each of these organizations evaluates a set of criteria, then combines the scores using a set of weights. Most sets of criteria include the basics: acceleration, handling, braking, ride, noise, seat comfort, and so on. But they have their idiosyncrasies. Consumer Reports separately evaluates routine and emergency handling, and analyzes headlights. Car and Driver includes styling, posts separate scores for objective test results and subjective evaluations of steering and brake feel, and further includes a "gotta-have-it factor" and "fun to drive" score. The last two can be especially useful when the editors want to tilt the playing field.

The really tricky part is how individual scores are combined to form an overall score. Consumer Reports has never revealed the formula it uses to compute cars' overall road test scores.
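To make the mechanics concrete, here's a minimal sketch of how such a formula might work. Every category, weight, and score below is invented for illustration; it is not Consumer Reports' or Car and Driver's actual formula.

```python
# A minimal sketch of a weighted overall score, in the spirit of what the
# magazines do. All categories, weights, and scores here are hypothetical.

WEIGHTS = {
    "acceleration": 0.15,
    "handling":     0.20,
    "braking":      0.10,
    "ride":         0.15,
    "noise":        0.10,
    "seat comfort": 0.30,
}

def overall_score(scores):
    """Combine per-category scores (0-100) into one number using WEIGHTS."""
    return sum(WEIGHTS[category] * scores[category] for category in WEIGHTS)

sporty_car      = {"acceleration": 90, "handling": 95, "braking": 85,
                   "ride": 70, "noise": 65, "seat comfort": 75}
comfortable_car = {"acceleration": 75, "handling": 70, "braking": 80,
                   "ride": 90, "noise": 85, "seat comfort": 90}

print(overall_score(sporty_car))       # 80.5
print(overall_score(comfortable_car))  # 82.25 -- the "winner" under these weights
```

Shift enough weight from seat comfort to handling and the sporty car wins instead. The ranking is as much a product of the weights as of the cars.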

In selecting its "Top Picks," Consumer Reports first gathers all cars with average or better reliability and no poor crash test scores, then picks the one in this group with the highest road test score. Interestingly, this implies that even those who run Consumer Reports feel that, once a car has at least average reliability ratings and decent safety scores, what it is like to sit in and drive matters most.

October 2003: Car and Driver caves

Up until October 2003, C&D's writers evaluated the overall goodness of a car independently; overall scores were not calculated from the individual scores. But despite a note stating this method at the bottom of every score table, the magazine still received many letters telling the editors their math skills needed work. So that month they switched to the more common system of totaling up the individual scores.

I preferred the earlier system, as it implicitly acknowledged that determining the weights for the individual scores is bound to be an exercise in frustration. How much weight should rear seat room and comfort have compared to front seat room and comfort? Should this weight be the same in a minivan and a sports coupe? Can all important criteria even be included? Better to just let the skilled editors sort it all out in their own heads and guts.

Back in the old days the car that was the most fun to drive hardly ever lost. Which made sense: what could be more important to readers of a car magazine? This hasn't been the case since the change.

Puzzling verdict #1:
Jeep Grand Cherokee SRT8 vs. Chevrolet TrailBlazer SS

In March 2006, Car and Driver tested the Jeep Grand Cherokee SRT8 against the Chevrolet TrailBlazer SS. When I drove and reviewed these vehicles, I thought the Jeep was the clear winner. Yet the Chevrolet won the C&D comparison test.

This disagreement is easily explained: under the current C&D system a well-rounded vehicle can win over a fun-to-drive one. In this instance, the Chevrolet won by a mere two points, 226 to 224. Yet its "gotta-have-it" score was three points higher, which puzzles me a bit, since it's far easier to find a TrailBlazer SS sitting on dealer lots. The Jeep did score higher on "fun to drive," if only by a single point.

Other factors in the Chevrolet's victory: three extra points for its higher towing capacity, two extra points for rear seat room and comfort, and three extra points for its more voluminous cargo area. For people who are interested in these things, the Chevrolet could well be the better choice. But some people considering an ultra-high-performance SUV won't care about these things at all, or at least won't weight them as heavily as C&D's calculations do.

In the Jeep's favor, C&D included only "as-tested" prices in its calculations, so the two tied on price. In the real world, TrueDelta will show you that the Chevrolet is about $4,500 cheaper after adjusting for features.

Add it all up, and we have a systematically calculated yet dubious conclusion.

Puzzling verdict #2:
Honda Civic Si vs. Volkswagen GTI

Similarly, in a Civic Si vs. GTI comparison in the same issue the VW won. Perhaps the VW was more fun to drive? Nope--the Si outscored the GTI by a substantial margin--25 to 21--on that criterion. In handling and in steering feel the Si was also given a one-point edge. Add up the things I personally care the most about, and the Honda wins by six.

But toss in the other scores and the VW wins by five. What gives? "Gotta-have-it" is a tie this time, even though you'll have a much easier time finding a GTI available for a test drive. Instead, the VW earned a substantial six extra points for rear seat room and comfort, and this ultimately decided the contest. These criteria are weighted as heavily for these two-door sport compacts as they are in a test of family sedans. But does the typical buyer of these cars care about the rear seat?

All in all, the car that was the most fun to drive won only one of the four comparison tests in this issue. It turns out the "fun to drive" score is worth just under ten percent of the total. If I were setting the weights, it'd be at least a quarter of the total. For you, on the other hand, it might be worth only five percent, or nothing at all.
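As a rough check on that claim, here's what re-weighting would do to the Si vs. GTI verdict, using only the figures cited above (the 25-to-21 fun-to-drive scores and the GTI's five-point overall win). The 2.5x scale factor is my own assumption, meant to push fun to drive toward a quarter of the total; everything else is assumed unchanged.

```python
# A back-of-the-envelope re-weighting of C&D's Civic Si vs. GTI verdict.
# Published figures: fun-to-drive scores of 25 (Si) and 21 (GTI), and the
# GTI's 5-point overall win. The 2.5x multiplier is hypothetical.

si_fun, gti_fun = 25, 21   # published "fun to drive" scores
gti_margin = 5             # GTI's published overall winning margin

scale = 2.5                        # hypothetical: fun to drive counts ~2.5x as much
si_gain = si_fun * (scale - 1)     # extra points the Si picks up
gti_gain = gti_fun * (scale - 1)   # extra points the GTI picks up

new_margin = gti_margin - (si_gain - gti_gain)
print(new_margin)  # -1.0, i.e. the Si would now edge out the GTI
```

Change one weight and the "winner" changes. That's the whole problem.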

Solution: get out there and drive them all yourself

Where does all of this lead? These lists and comparison tests can be endlessly scrutinized, criticized, and debated. But the lists and tests aren't really the point. Finding the best car for your needs and wants is. And if you truly want to find this car, you simply cannot have someone else do the work for you.

Instead, you need to conduct your own comparison test. Test drive all possible contenders extensively. Published reviews and ratings can usefully highlight which strengths and weaknesses to look for. But they can only guide you, not make the decision for you. Whichever vehicle you like the best, assuming its safety and reliability ratings are acceptable, is for all intents and purposes the best car. No matter what Consumer Reports, Car and Driver, or even Michael Karesh says.

Thanks for reading.

Michael Karesh, TrueDelta

First posted: March 28, 2006
Last updated: November 16, 2006