2015 EP Series – The Challenges

“Consensus is a group discussion where everyone’s opinions are heard and understood, and a solution is created that respects those opinions. Consensus is not what everyone agrees to, nor is it the preference of the majority. Consensus results in the best solution that the group can achieve at the time.” – Wikipedia

In Part 1 of this series, I focused on evaluating prospects, specifically three important factors that I felt were sometimes overlooked or forgotten. I ended that article with a note that people will never agree on projecting prospects, and that is not a bad thing. None of us is a god. We are not all-knowing. We are projecting how humans will develop. Humans are not robots. They do not develop in predictable linear patterns. Ultimately, evaluating prospects comes down to making educated guesses about the future of over a hundred people each year, based on historical trends that are often disproved and that do not necessarily predict the future.

Take a look at any draft. There are hits. There are misses. There are second-rounders who have had long, productive careers and top-five picks who never made a positive impact and washed out of the league. Take a look at any past mock drafts. Take a look at any past big boards. Take a look at past statistical analyses. None of them are perfect. None of them are close to perfect. They each have their own hits and their own misses. Yet every year, millions of words are written about which prospects are going to hit, which will bust, who will be a sleeper, and on and on. In this article, I want to talk about just how difficult it is to project prospects with any degree of certainty.

“Looney might be the single hardest player in the draft to get a good read on right now. As I noted last week in my latest Big Board update, some teams have him ranked in the top 10, a few in the 30s.” – Chad Ford

There will always be disagreement as to how prospects should be ranked, regardless of whether a scouting, statistical, or combination approach is taken. There is simply very little objectivity to the process. Multiple people can watch the same player and come away with very different takes. I am sure that in reading articles or comments, you have seen at least a few people post a scouting-based opinion that you strongly disagreed with. It's inevitable. We are dealing with small sample sizes, and none of us has watched every minute of every game every prospect has played. Maybe you saw some of the good games, while somebody else saw some of the bad games. No prospect was good in every single game. And of those games, maybe only 25 were against good college-level competition, and in maybe only 3-5 of them was the prospect actually matched up against another prospect-caliber player.

While players who stay in school longer would theoretically give us a better read because of the larger sample size, strangely, it doesn't always work out that way. If a prospect makes a huge leap, we're left to figure out whether it's because they're now just older than the competition or whether they legitimately added new skills. If a prospect isn't as good as they were in previous years, we're left to figure out whether it was due to a role change, an injury, or whether they simply played over (or under) their heads for a year. This is really hard.

Just as the scouting perspective does not lend itself to identical evaluations, the statistical perspective creates significant disagreement as well. DraftExpress recently published an article with five different statistical models covering 75 prospects in this draft. Of those 75 prospects, at least two models disagreed with each other by 10 or more spots on 69 of them. The remaining 6 were all in the top 8, where the differences between each spot are magnified. It is not that one system is better than another; each of these systems was put together by intelligent people with the same goal in mind. It's just difficult to discern what to project and how to project it.
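
To put that disagreement in more concrete terms, here's a quick sketch of the kind of check implied above: for each prospect, take the widest gap between any two models' ranks and flag gaps of 10 or more spots. The names and ranks below are made up for illustration; they're not the actual DX numbers.

```python
from itertools import combinations

# Hypothetical ranks of three prospects under five different models
# (1 = best). These numbers are invented, not the actual DX models.
model_ranks = {
    "Prospect A": [3, 5, 2, 4, 3],
    "Prospect B": [12, 28, 9, 31, 17],
    "Prospect C": [40, 22, 55, 18, 36],
}

for name, ranks in model_ranks.items():
    # widest gap between any two models' ranks for this prospect
    max_gap = max(abs(a - b) for a, b in combinations(ranks, 2))
    verdict = "models disagree by 10+ spots" if max_gap >= 10 else "models roughly agree"
    print(f"{name}: max gap of {max_gap} spots ({verdict})")
```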

This is important to keep in mind. Sometimes, you'll see people asking "what do the advanced numbers say?" or "how does he rate analytically?". Those are unfair questions. They assume that there's only one way to project prospects, and that every other way is inferior. That simply isn't true. There are a ton of factors, and changing the weighting on any single one of them can significantly change the rankings. In my own system, the difference between 14th and 25th right now is 0.04. That's 12 players separated by about 3.5% in my system. I was able to find the numbers for the models in the DX article, and looking at them, that kind of separation is somewhat common. In case you're not sure, 3.5% is a really small difference. In some of these systems, that could be the difference between 10 and 10.5 points per game, or 36% from 3 and 37% from 3 (usually in the area of one made three-pointer), or 100 FTA vs. 120 FTA. We're talking about tiny differences played out over a sample size of 35 games.
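
Here's a toy example of how sensitive these rankings can be. Everything in it (names, stat lines, weights, the normalization) is made up, but it shows the mechanism: players whose composite scores sit within a few percent of each other can swap spots when you adjust a single weight.

```python
# Toy weighted model: small changes to the weights reorder players whose
# composite scores differ by only a couple percent. All names, stats,
# weights, and the normalization are invented for illustration.

players = {
    # name: (points per game, 3P%, free throw attempts on the season)
    "Player A": (10.0, 0.37, 120),
    "Player B": (10.5, 0.36, 100),
    "Player C": (9.5, 0.40, 110),
}

def score(stats, w_pts, w_3p, w_fta):
    pts, p3, fta = stats
    # crude normalization so all three inputs land on a 0-1 scale
    return w_pts * (pts / 20) + w_3p * p3 + w_fta * (fta / 200)

for weights in [(0.5, 0.3, 0.2), (0.7, 0.2, 0.1)]:
    ranked = sorted(players, key=lambda name: score(players[name], *weights),
                    reverse=True)
    # Player A tops the first ranking; shifting weight toward scoring
    # volume flips A and B even though their scores barely move
    print(weights, "->", [(n, round(score(players[n], *weights), 3)) for n in ranked])
```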

“Zach LaVine was selected to the Rising Stars challenge, won the 2015 Slam Dunk Championship and named to the NBA All-Rookie Second team. Most consider this to be a successful rookie season and a good draft selection by Minnesota. On the other hand, ‘analytics’ rate LaVine’s year as one of the worst in the entire NBA and a bad draft choice.” – Daniel Frank

This is another major problem in projecting: nobody can agree on what actually constitutes being a good basketball player. There is no accepted metric for what is "good". There is no fWAR or bWAR in basketball. It's not as simple as going back, inputting lists of which players performed best in the pros, and then seeing which college prospects most closely match those players. I can rattle off dozens of players who would spark major disagreement as to whether they were good or not last year. So two people could project which players will be good, but judge "good" by completely different criteria.

And right now, the NBA is evolving at an extremely rapid pace. What may have been good twenty, ten, or even five years ago may not be good going forward. That leaves projectors in a quandary: project against what has historically been successful, or project what the new NBA trends will be and look for players who fit them. Neither of these approaches is objectively right or wrong. They're simply two different ways of approaching the same problem.

“But wait, there’s more!” – Billy Mays

There’s additional confounding factors. There’s only 240 minutes a night available on an NBA team. 150+ of those minutes tend to go to the top five guys, with 90 more split between four others. 312 players played 800+ minutes this season. Only 15 of them were in the 2014 NBA draft class. Fifteen. Two of those did not attend college, so most statistical models didn’t try to project them. Three were undrafted. As a general rule of thumb, models work by looking at what has been successful in previous years and trying to find more of that.

Setting aside whether these players were any good in their minutes, that simply isn't many data points. But as you start to shrink the minutes minimum, you end up with players who didn't really play enough to generate any kind of reasonable sample size. If you want to limit it to class-of-2014 rookies who were good enough to crack the rotation for a playoff team, you get…Marcus Smart. Shrink the minutes limit to 700+ and you can add Markel Brown (and Jabari Parker, even though he only played 25 games). Shrink it to 500+ to get Cory Jefferson. K.J. McDaniels, who played over 25 minutes per game for the Sixers, got traded to the Rockets and barely played over 25 minutes total the rest of the season. There wasn't a single rookie who played more than 500 minutes for a playoff team over .500 this season.

The 2013 draft class really isn't faring much better. There were 18 players in the class of 2013 who came out of college and played 800+ minutes. Seven of them have played in the playoffs last year or this year, combining for 2 playoff games started (both by Matthew Dellavedova) and not a lot of minutes played. That means that we have two draft classes in a row in which the data basically says "top rookies aren't good enough to carry lower teams to the playoffs, and mid-level rookies aren't good enough to be more than 9th men at best for playoff teams". That is really unhelpful.

Take the lack of rookies getting minutes and combine it with the lack of rookies making an impact, and you're almost projecting blind at this point. Maybe we just had two bad years and this year's class will be better. Maybe players from 2013 and 2014 make the leap and give us a better idea of what we should be looking for in prospects. Or maybe impact players stay scarce, and we're looking at a few years of the handful of teams with truly elite talent dominating, with the boring playoffs that brings. Quite frankly, there's no way to tell at this point other than to just wait and see.

“If I wanted to kill myself, I would climb up your ego and jump down to your IQ level.” – Unknown

I didn’t write this article to point out that we’re probably all spending hours and hours of our life putting in a ton of effort into trying to predict the unpredictable when “everybody sucks” may be just as good a prediction and takes much less time. I wrote this article as a response to the amount of comically unwarranted insults I see thrown around in discussions about prospects. Projecting prospects is really hard. There’s no right or wrong way to do it, and no system that works even close to 100% of the time.

I started this article with a quote about consensus, because it appears to be a concept forgotten by many. Far too often, I see "he's ranked X on Chad Ford's Big Board" or "DX projects him to go X" used as an argument that a player deserves to be drafted at that spot, and those who argue that a player should go higher or lower are often insulted. But, as discussed above, Big Boards are often amalgamations of different opinions that don't reflect what any one person or team thinks but which generally reflect the midpoint of what multiple people or teams think. A player ranked 20th could be ranked #5 by a few sources and #35 by a few sources. That doesn't make those sources wrong, or dumb, or anything else. It just means that there's a wide variety of opinions on each player.
Even statistical models can strongly disagree with each other, but put some together and you can get a composite ranking. The fact that the individual models are each far off from the composite rank doesn't mean that the inputs are bad. That would be nonsensical. What it means is that there are multiple ways to judge and project a prospect, and so there are different results depending on how you do so.
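
For the curious, a simple composite of that kind can be built by averaging the rank each model assigns a player. The sketch below uses made-up names and ranks; note how a player can land 20th on the composite without any single model actually ranking him 20th.

```python
# Invented ranks from four hypothetical models (1 = best).
ranks_by_model = {
    "Player X": [5, 35, 18, 22],   # wide disagreement, composite 20.0
    "Player Y": [19, 21, 20, 24],  # tight agreement, composite 21.0
    "Player Z": [30, 8, 25, 15],   # wide disagreement, composite 19.5
}

# composite rank = average of the individual model ranks
composite = {name: sum(r) / len(r) for name, r in ranks_by_model.items()}

for name in sorted(composite, key=composite.get):
    print(f"{name}: composite {composite[name]:.1f}, model ranks {ranks_by_model[name]}")
```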

So, keep this lesson in mind: when somebody declares their love or hate of a prospect, or that a prospect is underrated or overrated, and you disagree with them, don’t insult them. Don’t go “look at the big boards, they’re smarter than you”. Rather, ask “that’s interesting, why do you feel that way?” You may learn something, or be given something new to think about when you evaluate prospects. Or, you may just continue thinking that they’re horribly off-base. But regardless of how you feel, remember that projecting prospects is really hard, and declaring with any kind of certainty that you are correct…well, people may disagree with you, but they shouldn’t insult you. Maybe they’ll just continue thinking you’re horribly off-base.