Studies show that there’s not much rhyme, reason, or consistency to how judges and critics rate wines and decide who gets the gold medals. This means that we can basically tell this snobby “expert” breed to put a cork in it.
A WSJ story describes how wine ratings are badly flawed, incredibly subjective, and arguably meaningless. Nonetheless, higher ratings translate into higher prices—sometimes much, much higher prices. So if you’re paying more because a wine received a good rating or was awarded some medals, what, exactly, are you paying for?
Here’s an example of how shockingly little consistency there can be among critics:
A 1996 study in the Journal of Experimental Psychology showed that even flavor-trained professionals cannot reliably identify more than three or four components in a mixture, although wine critics regularly report tasting six or more. There are eight in this description, from The Wine News, as quoted on wine.com, of a Silverado Limited Reserve Cabernet Sauvignon 2005 that sells for more than $100 a bottle: “Dusty, chalky scents followed by mint, plum, tobacco and leather. Tasty cherry with smoky oak accents…” Another publication, The Wine Advocate, describes a wine as having “promising aromas of lavender, roasted herbs, blueberries, and black currants.” What is striking about this pair of descriptions is that, although they are very different, they are descriptions of the same Cabernet. One taster lists eight flavors and scents, the other four, and not one of them coincides.
There’s also not a whole lot of consistency when a critic tastes the same wine more than once:
The judges’ wine ratings typically varied by ±4 points on a standard ratings scale running from 80 to 100. A wine rated 91 on one tasting would often be rated an 87 or 95 on the next. Some of the judges did much worse, and only about one in 10 regularly rated the same wine within a range of ±2 points.
These figures, and much of the WSJ piece, are based on studies by Robert Hodgson, a retired statistician and owner of a small California winery. Guess what he discovered when analyzing the results of wine competitions? They too have little consistency:
The medals seemed to be spread around at random, with each wine having about a 9% chance of winning a gold medal in any given competition… The distribution of medals, he wrote, “mirrors what might be expected should a gold medal be awarded by chance alone.”
Now, you’d think that if a wine was really, really good, it would have a Michael Phelps-like track record in competitions. (Is that enough muddled sports metaphors?) But a wine that dominates one competition is fairly likely to come away with no awards at the next. Could the wine have been juicing? (Sorry, that’s the last awful mixed sports-beverage metaphor, I swear.) No. The critics are just making all of this stuff up.
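To see why a 9% gold-medal rate looks like pure luck, consider a quick back-of-the-envelope simulation. The numbers below (100 wines, 5 competitions each) are hypothetical illustrations, not Hodgson’s actual data; the only figure taken from the studies is the roughly 9% chance of gold at any given competition.

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

# Assumed-for-illustration parameters (only P_GOLD comes from the article)
P_GOLD = 0.09          # ~9% chance of gold at any given competition
N_WINES = 100          # hypothetical field of wines
N_COMPETITIONS = 5     # hypothetical number of competitions each enters

# If medals were handed out at random, how often would a wine win gold?
golds = [sum(random.random() < P_GOLD for _ in range(N_COMPETITIONS))
         for _ in range(N_WINES)]

any_winners = sum(g >= 1 for g in golds)      # won gold at least once
repeat_winners = sum(g >= 2 for g in golds)   # won gold two or more times

print(f"wines with at least one gold: {any_winners}")
print(f"wines with two or more golds: {repeat_winners}")
```

Under chance alone, the probability of a repeat gold after winning one is still just 9%, so a 91% chance of walking away empty-handed next time; roughly a third of wines pick up a gold somewhere, while repeat champions are rare. That is exactly the pattern Hodgson found in the real competition results.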
What it comes down to is that there’s little science to wine ratings. I’d liken wine critics to movie critics. Their reviews are based on who they are as much as on what they’re reviewing, and their opinions are highly subject to mood, sense of trendiness, style, and so on. Um, duh, it depends on one’s taste. If you know and trust the opinions of a favorite movie reviewer, then go see the movie he or she recommends. If you know and trust the opinions of a wine reviewer, like Internet phenom Gary Vaynerchuk (who has no formal training, by the way), or TIME’s own Joel Stein (who is not remotely a wine expert, but sure is game for doing some tastings in this video), then drink the wine that person recommends.
But the only way to tell if you, the consumer, truly like something is to watch it or taste it for yourself.