Fighting Biases: Avoiding the Trap (Pt. 1)
The first step to becoming an impartial scout is to understand the biases at play
In early December, a tweet went viral on Draft Twitter about the types of biases that are pertinent to scouting and need to be avoided as best as possible. Shout out to Mike De La Rosa for bringing this to our attention:
Years ago, we wrote our own piece about our four biggest inherent biases, a look at the skills we know we are skewed towards or against as scouts. What Mike’s graphic, and his subtle reminder in tweeting it out, does is force us to dig deeper into identifying those biases and the different factors beyond them.
Our goal at The Box and One is to elevate the conversation and be as mindful as possible of the factors that could lead us astray in our prospect evaluations. An article like this serves almost as our professional development session or cognitive workshop for the year.
Mike Gribanov, a long-time friend and international scout, added to this conversation with a thread of examples of these biases in action. Our intention is to add to that in more depth. Below is the full visual graphic that De La Rosa put forward:
We will give an example of each within the draft scouting world, forecast how to identify them before they happen, and find ways to avoid committing errors that could cost us, or a prospect, an important and accurate evaluation.
Anchoring Bias
First impressions mean everything. We can do a decent job combing through the small sample size phenomenon, something many of us are accustomed to fighting. But what happens when the sample size starts to level out? We’re left anchoring our opinions on the starting point for a player, seeing how they change in relation to our first impression of them.
One fantastic example of that might be Grant Williams of the Boston Celtics. Williams got off to a historically poor shooting start to his career, going 0-25 from 3-point range. Williams was a so-so shooter in college, but for NBA fans, the conversation around Grant has centered on his shooting improvement. While those first 21 games were pretty challenging, Williams is shooting 38% from 3-point range since then. That’s a 130-game sample, more than six times the size of his initial dud. To anchor our opinion of Williams, whom we continually see in the light of an “improving” shooter, is to attach a label to someone based not on who Grant is, but on the opinion of him we’re anchored to. Perhaps Michigan’s Caleb Houstan is having the same trajectory this season.
How can we identify those when they happen? The first and biggest task is to tether our analysis to changes that become evident on tape, not just statistical ones. It would be okay to call Williams an improving shooter if overhauled mechanics, tweaks to speed up his release or the like took place around the same time as his jump in percentages. At the very least, anchoring bias shows the importance of watching tape in figuring out how a prospect evolves, not just looking at what the numbers show.
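The arithmetic behind that anchor is worth seeing. The sketch below uses the article’s 0-for-25 start; the 650 later attempts and their exact 38% hit rate are hypothetical round numbers, not real shot logs. The point is how long a cold open drags a cumulative percentage after recent play tells a different story.

```python
# Illustrative sketch (not real shot logs): how a cold start anchors a
# cumulative percentage. The 0-for-25 start comes from the article; the
# 650 later attempts at a 38% clip are hypothetical round numbers.

cold_start = [0] * 25                # 0-for-25 from deep to open the career
later = [1] * 247 + [0] * 403        # 247 of 650 = exactly 38% afterward
career = cold_start + later

def pct(shots):
    """Field-goal percentage over a list of 1s (makes) and 0s (misses)."""
    return sum(shots) / len(shots)

print(f"career 3P%: {pct(career):.1%}")      # still dragged down by the anchor
print(f"post-start 3P%: {pct(later):.1%}")   # the larger, more current sample
```

Even with the later sample 26 times the size of the cold start, the career number sits more than a full point below the post-start number, which is roughly the gap between an average and a good NBA shooter.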
Availability heuristic
In some regard, the desire to depend heavily on the information that’s available is akin to anecdotal storytelling. Saying that Indianapolis is unsafe, for example, because you know one person who got mugged there avoids the use of data and hinges an opinion solely on personal experience — or those only one degree of separation away.
Quite frankly, finding every data point on a prospect isn’t realistic. Instead, we wind up gaining different levels of insight on each prospect, achieving a deeper look at some than others when doing background. I’ve coached the last few years in college in the Mid-Atlantic or East Coast. I also spent time coaching in Indiana. My background knowledge on pro prospects from these two regions, or ones who I happened to see while playing AAU, is a lot higher than on those from the West Coast. In some instances, conversations with mutual friends or coaches help color my background on them in different ways.
Most importantly, I get to see them in venues or forums outside of college basketball. Seeing Jaden Ivey go 0-8 on jump shots against my team when he was a freshman in high school, for example, is a combination of anchoring bias and availability heuristic. I anchor my opinion on him based on the poor shooting he demonstrated five years ago, and frame his shooting as a project or development from that point forward. I also look beyond the stats that are out there on college-reference, because there’s an 0-8 showing on top of them that is purely anecdotal: I don’t have a poor high school performance from other prospects to compare it to.
As best we can, we should all strive for as level a playing field on each prospect as possible. The more background knowledge we have on one guy while not digging deeper on others, the more out of balance we’re bound to be.
Bandwagon Effect
The key to combating the bandwagon effect really comes down to one’s own confidence to stand alone. Human nature is difficult to fight, and really all this means is that inherent in hiring any scout is trusting them to stand on their own and not go with the herd. There’s a level of self-confidence necessary, and that’s always rooted in doing the work. Confidence comes through putting in the time at whatever your craft is.
Regardless of how hard we try, we’ve fallen victim to the bandwagon effect in the past. In 2019, we were very low on Brandon Clarke out of Gonzaga. While mainstream outlets never really bought into Clarke, a movement on Twitter and some fringe publications tried to champion him as a top-five player in the draft class. By our own measures and evaluations, Clarke was an early second-round pick.
In one regard, we copped out and placed him 19th on our overall big board to split the difference. It’s a moment of weakness we showed that was indicative of hopping on the bandwagon: we didn’t want to be the only ones who missed on this guy, and saw an opportunity to play it safe without risking being totally wrong or going against our principles if we failed. We’ve learned from it since, but continued work to find our own confidence in our evaluations and spending less time reading or paying attention to other boards is really the biggest way to combat this effect. You can’t jump on a bandwagon if you aren’t anywhere near the road…
Blind Spot Biases
This is one we’ve written on before and still stands out as one that we need to combat harder than many of the others. In our inherent biases piece, we tried to bring those blind spots into the light by discussing the things we like the most and how that tends to lead us astray on prospects.
The first bullet point on our own inherent bias article: shooting is our favorite skill. It’s driven by my own role as a basketball player: I was a shooter. At an early age, I learned the roles and impact those specialists can have, quickly learned the minutiae of getting free as a shooter and moving without the ball, and value identifying those tricks while scouting. It’s led me to make a few errors in my overall rankings: Tyrell Terry over LaMelo Ball as a lead guard, Isaiah Joe over Isaac Okoro, or low overall rankings on non-shooting point guards on the whole.
Like driving a car, experience and consistently checking your side and rear-view mirrors help you feel confident there’s nothing still in your blind spot. In driver’s ed, they say you should check your mirrors every time there’s a change of scenery. We believe we need to do the same: every time there’s a change in data or another round of tiered rankings, we need to make sure those biases typically in our blind spot are out in the open so we don’t run anyone over when changing lanes.
Choice-Supportive Bias
What an important topic to discuss and dive into. What Harvard’s infographic describes as choice-supportive bias we term as “stick to your guns syndrome.” It’s a desire to believe that what we initially believed was right, and can continue to be right, despite evidence that might tell us we’re wrong. We stick to our guns because, quite frankly, we don’t want to be wrong. Sometimes it’s easier to try and ignore those signs, or simply not give them enough credence, than to look in the mirror and diagnose where we messed up.
To be frank, I think we see this more with players we championed pre-draft once they get into the NBA. We want to blame the circumstances, the team that’s working with them and not doing enough to develop them, and believe that a clean slate will vindicate our initial opinion. I caught myself doing this earlier in the year with Jalen Smith from the Phoenix Suns. Smith was drafted in the top ten, a reach according to many. Smith was a top-six player on our 2020 board, and barely a year into his career, the Suns declined his contract option, meaning they’ll cut bait and release him after two years.
My initial thought: blame the circumstances in Phoenix, the development staff botching how to handle such a uniquely-talented big man. I saw a tweet from his high school coach after the Suns’ decision and said to myself, “yeah, Coach Clatchey is right… Smith is a great kid and will find a way to be impactful in the future.” Yet continually doing so doesn’t give enough value to the mounting evidence: in 34 career games, Smith has shot 21% from 3-point range (as a stretch big), 25% in G-League action, has a negative assist-to-turnover ratio and hasn’t looked comfortable on an NBA floor yet. Clinging to hope that the next circumstance or opportunity he gets will see him thrive isn’t necessarily misguided, but it’s very supportive of the choice we made initially: to value Smith pre-draft and believe in his game.
Clustering Illusions
Seeing trends that don’t exactly exist, or searching for patterns in random events. To be honest, I wrestle with the inclusion of this one. On some level, rooting out this bias is as simple as understanding the difference between correlation and causation. In draft circles, we’re looking for causations: the reason behind a statistical output or the spark that lights the match.
In most cases, the distinction is pretty clear once you grasp the difference between correlation and causation. In others, where it’s a challenge to draw that line, the trap is very easy to fall into. On his Twitter thread, Gribanov mentioned a solid example here: looking at prospects in California, seeing several of them he likes or that succeed, and therefore ranking future prospects higher simply because they’re from California.
I don’t really think there’s a definitive way to know in many of these cases whether the data that comes out is random or not. Only time can tell, and even then there are exceptions to the rule that skew results. One example we fall into time and time again: Syracuse prospects don’t pan out well in the NBA. Sure, there’s a mountain of evidence that supports that claim, first-round picks who flame out and one outlier in Carmelo Anthony. But the result might be too easily writing off the few who have been good and outperformed our pre-draft evaluations.
The only data that is available is the performances of all those prior Orangemen in the NBA. That, in itself, can’t be an indicator of future success. But when betting on humans and making decisions based on information available, figuring out what is an illusion and what isn’t can only be found through time, as well as trial and error.
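A quick way to pressure-test a claim like “School X’s prospects bust” is to ask how often a run of busts would appear by pure chance. The sketch below does that with an exact binomial calculation; the 30% league-wide hit rate and the 10-pick sample are hypothetical round numbers, not real draft data.

```python
# A sketch of why a small cluster of "busts" from one school can be pure
# chance. The 30% hit rate and 10-pick sample are hypothetical numbers
# chosen for illustration, not real draft data.
from math import comb

def binom_pmf(k: int, n: int, p: float) -> float:
    """Exact probability of k hits in n independent picks at hit rate p."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

n, p = 10, 0.30
# Probability a school with perfectly average talent still produces
# one hit or fewer across ten picks:
p_looks_cursed = sum(binom_pmf(k, n, p) for k in (0, 1))
print(f"P(<=1 hit in {n} picks at a {p:.0%} hit rate): {p_looks_cursed:.1%}")
```

Under those assumptions, roughly one school in seven would look “cursed” despite having ordinary luck, which is why a decade of results from a single program is thin evidence for a causal story.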
Confirmation Bias
First impressions are formed, and opinions on evaluations flow from there. While there’s always danger in anchoring our future analysis to the first data point we find, how we engage in future scouting opportunities is really important. How do we fight the notion of trying to look for traits that reinforce what our eyes told us the first time?
The best way to do so is to try to look at games or players from different perspectives. One tool I’ve found useful is to ask myself “what would a person who saw Player X play for the first time leave this game thinking?”
Some scouts have physical forms and evaluations they have to submit when finished with their game notes. I strongly advocate for adding that question to the bottom of a scouting form — it helps eliminate confirmation bias. At the end of the year, when multiple game notes are compiled, you can draw conclusions from the totality of the answers to that question.
Conservatism Bias
Alright, hoopaholics… this is where we’re going to start showing some disagreement. Not in the virtue of conservatism bias as a worry to be aware of, but in where we set the bar to combat it. Conservatism bias is essentially a desire not to change your opinion, thinking that it’s too soon to do so based on the new information available. It’s where confirmation bias meets small sample sizes: is what we’re seeing really indicative of a change we have to make, or if we hold out a little longer will things return to the way we initially thought they were?
I’m a big believer in not overreacting to a small sample size. We could debate for years when a sample is no longer small, and whether that standard is different for different traits. For example, shooting over the course of a three-game stretch might be too short a time to draw a conclusion from, but you can generally tell if a player is an NBA-caliber athlete from that same stretch. If that’s the case, then conservatism bias on a micro level is really about knowing when to draw the line.
On a macro level, when major changes to the landscape happen (i.e., the G-League Ignite or Overtime Elite come onto the scene and churn out draftable prospects) we have to find ways to reinvent how we evaluate. The pandemic provided some major shifts to how the pre-draft process unfolds: more video, less in-person travel and finding ways to look at new factors statistically to replace what we don’t get from in-person settings. As the game is shifting, don’t let your scouting department or front office go back to the old ways simply because that’s “how we used to do it.” Innovate and embrace the changes to how we scout (or hire more remote scouts like me!)
Information Bias
Over-analysis is real. See: 2020 NBA Draft. We all spent SO much time scouting and preparing for those prospects that we nitpicked every little detail, dug deeper for more information and perhaps spent too much time in the weeds to the point where we lost sight of the entire yard.
The goal with gathering information is to help influence sound decisions. At some point, an abundance of information prevents a sound decision from taking place, instead forcing us to spend energy separating what’s actually important from what isn’t. In doing so, we inherently spend time thinking about data points that aren’t relevant instead of doing what we should be doing.
Synergy is a very dangerous tool in that regard. First off, player stats and metrics are dependent on video being properly logged and categorized, which it often is not. What’s the difference between a cut and a post-up, or a roll and a post-up? Sometimes not much, as one turns into the other. Basing decisions solely on those numbers, or diving so deeply into them, is flawed on its own. But bringing forward a 35% turnover rate on post-ups vs. a hard double on the left block? Sometimes that’s a little too far into the weeds.
Keep it simple. Come up with a formula or a baseline of the stats you value most. Then only look at those and stay away from individual research. All data can be helpful if contextualized the right way, but some data can also be harmful if you base evaluations on the weeds and not the grass in the yard.
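That “decide your baseline first, then only look at those stats” discipline can even be enforced mechanically. A minimal sketch, assuming hypothetical stat names and values (none of these keys or numbers come from a real data source):

```python
# A minimal sketch of "agree on a baseline of stats, then only look at
# those." All stat names and values below are hypothetical placeholders.

BASELINE_STATS = {"ts_pct", "three_pt_pct", "ast_to_ratio", "stl_plus_blk_rate"}

def baseline_view(full_stat_line: dict) -> dict:
    """Strip a scraped stat line down to the pre-agreed baseline metrics,
    so weeds-level splits never reach the evaluation stage."""
    return {k: v for k, v in full_stat_line.items() if k in BASELINE_STATS}

prospect = {
    "ts_pct": 0.58,
    "three_pt_pct": 0.36,
    "ast_to_ratio": 1.4,
    "post_up_vs_hard_double_tov": 0.35,  # weeds-level split: excluded by design
}
print(baseline_view(prospect))  # only the baseline metrics survive
```

The design choice is that the filter lives in code rather than in willpower: the weeds-level split is still collected, but it can’t sneak into the view you evaluate from.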
Ostrich Effect
When times are tough, don’t look. Close your eyes when things get scary and wait to open them until you’re on the other side…
As evaluators, that can be pretty stupid. We’re looking to construct narratives and draw conclusions from the whole, so ignoring any one part doesn’t really work. We’ve seen this before from some draft folks online: if you don’t look at the defense, Obi Toppin is a top-three talent in this class.
But… you have to look at the defense.
“Ignore the turnover, look at these handles from Kyrie Irving.”
But… the turnover and result is kind of important, too.
Too frequently when trying to make a point, the ones making it will unfairly try to turn our attention away from the negative and only look at the positive. When you see that, or hear those words, run in the opposite direction.
It’s one thing to say “while the turnover is worth discussing, the handles are encouraging for how functional he can be in other areas.” But telling someone to ignore or pay less attention to something is trying to get you to see an engineered outcome. Don’t fall into that trap.
Part two will be released soon, looking at biases 11-20 on the Business Insider infographic and their relation to NBA Draft Scouting.