At the risk of upsetting every wine critic and judge out there, I set out to create a wine scoring system that matches my view of fine wine. I will include the scoring template at the end of this article for those who might be like-minded. Email me if you would like a self-calculating spreadsheet copy.
After professional Sommelier training (where scoring was discouraged), I was exposed to the WSET scoring method and to wine judging courses. Both used a variation of the UC Davis 20 Point Scoring System, and I was shocked at how poorly these systems separated amateur wines from premium ones. In these classes we scored fruit wines (cherry, blueberry, strawberry, etc.) and vitis labrusca and hybrid wines (Concord, Chambourcin, Catawba, etc.). These wines were nearly undrinkable to me, yet they received the same scores as a mediocre California Cabernet. The methodology and scoring systems taught in these classes were presented as appropriate for both amateur and fine wines. Yet away from class, these same people would explain that the systems were intended to score a wine only against a comparison of LIKE wines. That is not how I understood the training, and the public likely reads these scores the same way I did. This experience motivated me to build a scoring system that is weighted properly and can produce comparatively accurate scores for amateur, professional AND fine wines, without bias.
The Evaluation Criteria
First, it was necessary to determine what separates fine wine from other wines. In that evaluation, I arrived at the following characteristics that are under-represented in the UC Davis System: Balance, Complexity, Finish and Aging Potential. All of these measures are meant to be folded into the UC Davis “Quality” category, but to make the scores more comparatively accurate, I decided each of these characteristics needed its own point category. I then looked at what seemed to be weighted incorrectly in the UC Davis System and arrived at: Clarity, Color and Acidity. Four of twenty points for clarity and color is 20% of the score; that is weighted too heavily in favor of mediocre wines, since almost any competently made wine is clear and correctly colored. Acidity, at only 5% of the score, is not weighted heavily enough. I realized that if I reduced the points for clarity and color, increased the points for acidity, and added balance, complexity, finish and aging potential categories… I might be able to devise a scoring system that could properly measure a Concord wine (for example) and build an appropriate score against, say… an aged Bordeaux Grand Cru.
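To make the reweighting concrete, here is a small sketch of how such a template might tally a score. The point values below are hypothetical placeholders chosen for illustration (the actual allocations belong to the template itself), but they follow the logic described above: a single point for clarity and color combined, more weight on acidity, and dedicated categories for balance, complexity, finish and aging potential.

```python
# Hypothetical 20-point allocation, for illustration only; the actual
# template may assign different values to each category.
WEIGHTS = {
    "clarity_color": 1,    # reduced from UC Davis's 4 points
    "aroma": 3,
    "acidity": 2,          # increased from UC Davis's 1 point
    "flavor": 3,
    "balance": 3,          # new dedicated category
    "complexity": 3,       # new dedicated category
    "finish": 3,           # new dedicated category
    "aging_potential": 2,  # new dedicated category
}
assert sum(WEIGHTS.values()) == 20

def score(marks):
    """Tally a 20-point score from per-category marks given as
    fractions (0.0 to 1.0) of each category's maximum points."""
    return sum(WEIGHTS[cat] * frac for cat, frac in marks.items())

# A flawless-looking but simple Concord: full marks for clarity and
# color buy it very little under this weighting.
concord = {"clarity_color": 1.0, "aroma": 0.5, "acidity": 0.5,
           "flavor": 0.4, "balance": 0.4, "complexity": 0.2,
           "finish": 0.2, "aging_potential": 0.0}
print(round(score(concord), 1))  # 7.1 out of 20
```

Under the UC Davis weighting, that wine's perfect clarity and color alone would contribute 4 of 20 points; here they contribute only 1, so the score is driven by the fine-wine characteristics instead.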
A Wine Scoring Template
Now I was ready to put my scoring template together. Many media outlets still use the old Robert Parker 100 pt system, so I decided to add it to my template. I wanted the two systems to arrive at roughly equivalent scores, and I realized this could only be done if I started the 100 pt scale at 50 instead of 0. You will see what I mean below. The closer a wine came to the premium category, the better my 100 pt method seemed to arrive at an accurate score. It was the opposite with my 20 pt method, albeit much closer to reality than the UC Davis 20 pt method.
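One simple way to realize that "start at 50" idea is a linear mapping from the 20-point scale onto the 50-to-100 band. This is my own reading, sketched here as an assumption; the template may relate the two scales differently, especially since the two methods seem to diverge outside the premium range.

```python
def to_100_point(score_20):
    """Map a 0-20 score onto a Parker-style scale that starts at 50.
    A purely linear mapping is an assumption, not the template itself."""
    return 50.0 + (score_20 / 20.0) * 50.0

print(to_100_point(18.0))  # 95.0: a premium wine lands where expected
print(to_100_point(10.0))  # 75.0: a middling wine
```

Starting the scale at 50 means even an empty scoresheet reads 50, which matches how published 100-point scores are actually distributed.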
After that long explanation, here is my effort to build a scoring system that can evaluate both a poor blueberry wine and a Grand Cru Bordeaux – with the same template – accurately and with a logical, systematic approach.
In the past, my Somm training won out and I tried not to add scores to my tasting notes. In retrospect, I think that was mostly discomfort with the scoring systems available. I intend to use my scoring template going forward and hopefully develop consistency and comparative accuracy across my tasting notes.
I would be very interested in other opinions, both on the thinking that drove this creative process AND on the relative accuracy of this scoring system. I am also open to modifying aspects of it, if the changes fit within the logic model used to build it. Please feel free to leave your comments on this page. Thanks!