Advanced stats, metrics, and tracking data allow basketball observers to test conventional wisdom at a heretofore unseen rate. Basic precepts of the sport can be challenged, often for the better. The old guard often turns out to be correct, but at least we now have the opportunity to make sure that's the case.
One of the first such ideas to be tested was the hot hand theory, or the idea that a player can get hot (or in a zone, or whatever term you prefer) and carry a team offensively for a lengthy period of time. Most of these initial studies found the idea wanting: a player is no more likely to make a shot after a string of makes than at any other time, suggesting that these streaks are essentially random and not tied to any special quality held by the player. Deep thinkers like David Brooks and Larry Summers have promoted this view, as well.
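To see why those early studies read streaks as noise, here's a minimal simulation in Python. The shooter below is entirely made up, a 45 percent shooter whose makes are independent coin flips, so there is no hot hand built into the model at all:

```python
import random

random.seed(42)

# Hypothetical 45% shooter whose makes are independent coin flips --
# no "hot hand" exists in this model by construction.
shots = [1 if random.random() < 0.45 else 0 for _ in range(2000)]

# The classic test from the early studies: compare the make rate
# immediately after three straight makes with the overall make rate.
after_streak = [shots[i] for i in range(3, len(shots))
                if shots[i - 3:i] == [1, 1, 1]]

overall = sum(shots) / len(shots)
conditional = sum(after_streak) / len(after_streak)

print(f"overall: {overall:.3f}, after 3 makes: {conditional:.3f}")
```

Even with pure coin flips, streaks of makes show up, and the make rate after a streak hovers around the overall rate rather than above it. That is the pattern the debunkers found in real shooting data, and why they concluded the streaks were random.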
However, a new study provides evidence that these prior investigations were limited by the type of data available. From Zach Lowe for Grantland:
But the hot hand is quietly enjoying a bit of a renaissance, and that revival might peak this weekend in Boston, when a group of authors [John Ezekowitz, Andrew Bocskocsky, and Carolyn Stein] with access to new optical tracking data present a paper showing that the hot hand might actually exist. The paper comes on the heels of one study that showed a possible hot-hand effect on free throws, and a series of papers that essentially concluded it might be impossible to debunk the concept. Most players and coaches, including shooting coaches, have kept the faith in something like the “hot hand” even as the statistical proof against it has piled up.
The authors of the new paper, to be presented this weekend at the New England Symposium on Statistics in Sports, actually find themselves in agreement with much of the hot-hand debunking that came before them. But the fancy SportVU motion-tracking cameras allowed them to look for things previous researchers couldn’t — the exact location of each shot, the identity of the closest defender, how close that defender actually was to the shooter, the trajectory of the ball in the air, and lots of other things. All of that stuff allowed them to measure the difficulty of a shot in ways that were once impossible. [...]
But this is where the new study begins to break some new ground using that camera data. “We’re not saying those prior studies were wrong,” Ezekowitz says. “Just that they didn’t have enough access to information.” The three also found that defenders tended to creep a bit closer to guys who had made an unusually high percentage of their last five shots. The effect was often small; a shooter making one more shot than expected in a string of five resulted in defenders chopping only about half an inch of the usual distance between shooter and defender. But the effect increased as shooters hit more shots, and the effect was consistent; defenders on average were within four feet of shooters, so even a few inches might matter a lot.
Measuring difficulty was the key to the authors finding some kind of hot-hand effect. Previous studies either treated every shot equally, so that making a layup and missing a 3-pointer were basically coded as 1s and 0s, or could estimate shot difficulty only in a rough way — so rough a lot of studies couldn’t do much with it.
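The difficulty adjustment Lowe describes can be sketched in a few lines. The shots and per-shot probabilities below are invented for illustration; the actual paper derives expected make rates from SportVU features like shot location and defender distance:

```python
# Hedged sketch of difficulty-adjusted shooting: instead of coding
# shots as raw 1s and 0s, compare each make/miss against a per-shot
# expected make probability. All numbers here are hypothetical.

shots = [  # (made?, expected make probability for that shot)
    (1, 0.62), (1, 0.38), (0, 0.55), (1, 0.30), (1, 0.41),
]

raw_rate = sum(made for made, _ in shots) / len(shots)
expected_rate = sum(p for _, p in shots) / len(shots)

# "Heat" = how much better the player shot than the difficulty of his
# attempts would predict. A raw streak can hide that the shots were
# easy -- or, as here, reveal that they were unusually hard.
heat = raw_rate - expected_rate

print(f"raw: {raw_rate:.2f}, expected: {expected_rate:.2f}, heat: {heat:+.2f}")
```

With these made-up numbers, the player shot 0.80 on attempts that should have gone in about 45 percent of the time, so he looks hot in the adjusted sense as well. The old 1s-and-0s approach would have registered only the raw 0.80.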
Lowe goes into the methodology in great detail and speaks with several experts on the subject, but the general idea is that the hot hand exists as long as we recognize that players do not suddenly get better at shooting increasingly difficult shots when in that zone. A player can get in a rhythm, but it still behooves him to take good, high-percentage shots.
In a way, this conclusion doesn't contradict prior studies — Lowe quotes one researcher who says his anti-hot hand work at least proved that countereffects such as tighter defense and increased risk-taking may balance out a player feeling slightly more comfortable taking particular shots. Yet this take seems to gloss over exactly what made so many of the initial hot hand debunkings so frustrating. When Ezekowitz says, for instance, that prior hot hand studies weren't wrong, just incomplete, he overlooks that proponents of those studies were willing to compare those who disagreed with them to sasquatch conspiracy theorists. The issue was that the available data was treated as complete even as many of us pointed out the need for exactly the kind of data used in this new study.
The problem here is not in using data to debunk old myths — many legitimately important developments in the history of science have turned out to be wrong, in retrospect — but in assuming that the presently available best data always constitutes the best argument. As these metrics and tracking systems get better, they will continue to show us new things about the worlds we once presumed to understand. Sometimes, though, it's best to keep in mind that any piece of information depends on the context in which it's presented.