Any time you have a system with clear ‘winners’, like wines rated over 90 points, you have also created a system of ‘losers’. It’s double-edged: it helps some and hurts others. The perception that 90-plus points equals quality is truly a shallow one, as I could not even begin to count the great wines I have enjoyed that fell under the 90-point threshold. Unfortunately, perception is reality when it comes to a consumer’s willingness to put down his or her hard-earned cash in return for a winery’s goods. But the consumer is confused. Let me explain exactly why, and the way out, because a problem without a solution is like, well, Congress. And we all know where that’s taken us!
I liberally apply this theory to what I call the “middle 80”. Here I exclude the 10 percent of folks who are perfectly happy drinking box or jug wine and aren’t actively seeking a better wine experience. We also respectfully exclude the 10 percent who are already expert in their searches and not only know exactly what they like, but know where to find it. These two groups aren’t the ones that need the help – it’s the rest of us!
The information that is available and widely used by the “middle 80” is both confusing and poorly presented by retailers. It stems from various sources with not only differing views and rating processes, but incredibly poor use of those ratings at the retail end of things. It’s no secret that the raters’ (Wine Advocate, Wine Enthusiast, Wine Spectator) numbers and descriptors differ, so that is the first level of confusion. Then comes the whammy: the retail spin to move product. Wine is hardly unique in suffering marketing spin, but this is spin of the worst kind, and the end result is you lose the consumer. Say you go to a large retailer like Costco. The rating number displayed is typically that of whichever rater gave that wine the highest score, so you will rarely see all three on the same display. Granted, it is also highly possible that any particular wine was not rated by all three outfits. Then comments are placed below, yet we don’t know whose they are. Here are a couple of examples:
In this first example, the wine being offered for sale is a 2009 Cuvaison Chardonnay from Carneros (on the Napa side). There is absolutely no rating referencing this vintage year; instead there is a 2008 WS rating of 87 and a 2007 WE rating of 92. Folks, let me be clear here: the information shown tells you NOTHING AT ALL about the wine you are considering, because the wine for sale is of the 2009 vintage. In simple terms, two things are seriously wrong. First, the display lacks ANY RATING for the vintage actually being offered for sale. Second, it uses two different rating outfits for the two preceding years, which renders the data even less credible. And just whose comments are those underneath it?
This next example is a Louis Martini 2008 Cabernet Sauvignon. Here again, there is nothing on the vintage being sold, but at least we have a WA record for the two preceding years, though past performance is never a guarantee of future results. Again, I don’t know where the comments originate. My fear is that folks see the 90 points and go for it, not realizing that wine is a vintage-sensitive product. Also understand this isn’t about Costco so much as it is about how wine is represented in the marketplace. The hope here is that better tools, like WineMatch’s Wheel, are allowed to help folks select a wine. A simple vintage chart for major grape-growing regions would be more helpful than ratings alone, but even that has its imperfections: with proper vineyard management, great wines come out of challenging years!
If one or more of the rating outfits’ numbers are visibly absent, one of two scenarios exists. The first is that they simply did not rate the wine. The other is that they rated it lower, so the score gets conveniently omitted from the shelf-talker. When buying things, it’s important to get the truth and the whole truth, for that is what ultimately successful decisions are based upon. It’s truly not apples-to-apples, as these raters are all different outfits, each with a different way of doing things. It would therefore make infinitely more sense for retailers to contract with a single outfit so there is some degree of consistency. It’s also important to me to know when a wine was rated, as a point of reference, so I know whether I am buying it one month or two years after the rating. Knowing this lets me make a judgment call about the effects of aging in a retail-store environment. Wineries care for their wines differently; a wine is treated much like offspring, with the exception that it never has to be put through college!
So let’s say you give up on the multiple-rating-outfit model shown previously and head to BevMo instead. Here it’s mostly the world according to Wilfred Wong, yet another rater, who is also a hired tongue for BevMo and actively participates in wine selection and bottling/branding operations as well. I’m not sure many BevMo “Winery Partners” get scores below 90, so this does not appear too objective. It feels a bit like the fox guarding the henhouse. The bottom line: if you own both sides of a process, there is simply too much overlap and not enough separation for checks and balances to ensure true objectivity. I can’t buy into this model for that reason alone.
Folks, you have effectively lost the consumer! It’s important to keep things simple and to have checks and balances that sanitize the data. The amount of time I see folks spend trying to pick out a wine they will enjoy is excessive compared to their success rate. Surely I am not alone in my thinking or experience here…
By the way, you probably guessed it – the consumer was yours truly in 1997, before I started on my quest to create a better system.
I think I’ll call it WineMatch.