Music, at its core, is a collection of numbers; and so, in theory, if you were smart enough, you could develop an algorithm that takes every aspect of those numbers into account and distills them down to a single statistic. You could then use that one number to rank every piece of music, objectively, from top to bottom.
Now, of course, people would likely have some disagreements with the results, because there's no accounting for taste in these kinds of things, but you would have numbers, and if you felt confident in those numbers, you could show them to people and let the reaction happen.
However, if you were to put such a list together, and the top three songs of all time were listed as being by Nickelback, Jimmy Buffett, and the Dave Matthews Band, while Beethoven, the Beatles, and Miles Davis didn't make it into your top 100, it might be wise to throw your list in the garbage, light both the list and the computer that came up with it (and maybe your home, just to be safe) on fire, and never ever speak of it again.
This, however, was not the tack taken by Statistical Sports Consulting, which is run by two people (including an honest-to-god hockey coach) and claims to be a firm that aims to make tools that give clients a competitive edge. They annually release something called the Total Hockey Ratings, or “THoR,” which is meant to judge how many wins a player contributes to his team based on a number of factors.
These factors, as you might imagine, include “all of the on-ice action events when a player is on the ice as well as their linemates, their opponents and where their shift starts,” or put another way: “Everything on Extra Skater.” These factors are then weighted by how likely they are to lead to a goal, and added up over the course of a season. That's a lot of data, so you'd think the model would spit out some fairly credible rankings.
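For readers who like to see the mechanics, the general shape of this kind of model is simple enough to sketch: every on-ice event gets a weight (its estimated effect on goal probability), and a player's rating is the sum of those weights over a season, compared against a baseline. To be clear, the event types and weights below are hypothetical illustrations for the sake of the sketch, not THoR's actual categories or coefficients.

```python
# A rough sketch of a THoR-style aggregation. Every weight here is a
# made-up placeholder; the real model fits its coefficients from data.

# Hypothetical weights: estimated change in goal probability per event.
EVENT_WEIGHTS = {
    "shot_for": 0.05,
    "shot_against": -0.05,
    "faceoff_win": 0.01,
    "faceoff_loss": -0.01,
    "offensive_zone_start": 0.02,
    "defensive_zone_start": -0.02,
}

def rate_player(events, baseline_per_event=0.0):
    """Sum event weights over a season of on-ice events, relative to
    what a baseline (average/replacement) player would accumulate."""
    total = sum(EVENT_WEIGHTS.get(e, 0.0) for e in events)
    return total - baseline_per_event * len(events)

# A toy "season": 300 shots for, 250 against, 100 offensive-zone starts.
season = ["shot_for"] * 300 + ["shot_against"] * 250 + ["offensive_zone_start"] * 100
rating = rate_player(season)  # 15.0 - 12.5 + 2.0 = 4.5
```

The point of the sketch is that the output is only as good as the weights and the baseline; garbage in those two places produces exactly the kind of rankings discussed below.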
You'd be wrong. This statistical ranking, like WAR in baseball, is meant to show how much better a given forward or defenseman is than an “average player” — the study never presents an actual definition of its own in this regard, but other similar catch-all stats typically define a “replacement player” as a league-average call-up who plays the same position — and it is also terrible.
Ryan Suter was No. 1 on the list when taking into account all situations (even-strength, power play, shorthanded), and while that's a bit of a reach, there's no arguing he's not extremely valuable to the Minnesota Wild, who play him roughly a billion minutes a night.
If you don't accept Ryan Suter as the most valuable player in the league in comparison with a regular ol' replacement-level defenseman, then maybe you'll be befuddled by the rest of the top 10, which features Chris Kunitz, Andrei Markov, and Troy Brouwer. The idea that Brouwer would be considered the No. 8 player in the National Hockey League, by any reckoning, should have raised a lot of red flags that this is perhaps not the world's greatest statistical ranking of all time.
It should be noted, too, that the list only goes up to No. 250 (Brayden Schenn, one spot back of Matt Bartkowski, for some reason). Thus, we don't know the true stat of anyone below that level. Here's a couple of guys who are not in the Top 250 in the NHL, according to THoR: Claude Giroux and Alex Ovechkin.
Fortunately, most of the league's top 10 scorers made the Top 250, according to THoR. Tyler Seguin was No. 44, right behind another Tyler (Toffoli), and three ahead of Taylor Hall. Phil Kessel was 69th, only one spot back of Mats Zuccarello. Ducks teammates Corey Perry (87) and Ryan Getzlaf (103) both almost made the top 100, but the latter was seven spots back of Eric Gelinas. Jamie Benn was No. 113. Sidney Crosby checks in at 163, between Ryan O'Reilly and Ryan Smyth.
One ranking I actually buy here: Joe Pavelski at No. 17. That doesn't sound wrong, anyway. But look, you can sit here and make fun of this list all day long (Tomas Tatar ahead of Henrik Zetterberg, Mikhail Grabovski ahead of Evgeni Malkin, who by the way was 180th, and so on).
It is of course ludicrous, as is this sentence explaining one particular aspect of the ranking: “Of note is that Ovechkin (not in the Top 250) compensates somewhat for his really poor even strength play with PP play that makes him a replacement level player for this year though.”
Ovechkin, who had 27 even-strength goals this season (fifth in the league), is rated only as good as a replacement player at his position, and only because he scored a league-leading 24 power-play goals.
That's the only reason.
It's not so much how bad the rankings here are (again: extraordinarily bad!) as the fact that this gets to the heart of everything that's perceived to be wrong with the analytics movement in sports, and hockey in particular.
You know those people who say, “Put down the calculator and watch the games”? This is the garbage they're talking about. Anyone who would come up with this list and actually present it to the world as any kind of reasonable or objective analysis of the league is out of his mind.
There's no rational way to discuss a list which posits that if they had both played an 82-game season, Steven Stamkos would have been less valuable than Darren Helm this year.
Lots of people are doing very good work with advanced statistics that also involves watching the game. They keep track of Corsi or Fenwick, yes, but they do it with relation to how those numbers are impacted by zone entries, faceoffs, and so on. They watch the games meticulously, often multiple times over, in an effort to better understand the way the numbers we see are impacted by things which are not yet quantified. One day, it's all going to look primitive, but for now it's the best we have to look at the game more deeply than, “There was a fight or a big hit so that changed momentum.”
We've already learned so much.
But then, their very good data is lumped in by luddites (for reasons not worth getting into) with this kind of finding, which by the way somehow finished second at the 2013 MIT Sloan Sports Analytics Conference's research paper competition, presumably because there was only one other entry. And it brings the whole thing down, because this was obviously put together by people who in fact didn't watch the games, and seem not to have a basic understanding that the numbers their formula spits out aren't worth the paper they should be printed on for the express purpose of shredding them.
You can be sure that the people who put together THoR are terribly smart. One is a hockey coach and former professional player who has a degree in economics. The other is an associate professor of statistics. But they should be smart enough to know their product doesn't work like it's supposed to. Their mission, in a way, is to make the sport of hockey easier for everyone to understand, but all they've done is give critics more ammo to take down the whole analytics community.
Seems counterproductive to me, but then I think Crosby is a lot better than Chris Kunitz, so what do I know.