Do Certain NCAA Basketball Systems Generate NBA Stars More Often? (1 of 3)

Nicholas Canova

In April 2016, the University of North Carolina hosted its annual Sports Analytics Summit, featuring a series of excellent guest speakers, from Dean Oliver to Ken Pomeroy, as well as a case competition that challenged teams to analyze the effects of NCAA basketball systems on generating star NBA players. More specifically, the case challenged participants to answer the question "Are there certain types of systems (offensive and/or defensive) that work to best identify future NBA superstars?" Our team of four entered the competition, focusing specifically on the impact of offensive systems, and here we present our core analyses along with our thinking throughout the process.

Given the open-endedness of the challenge, we asked ourselves several initial questions, including (1) what constitutes an NBA superstar and a bust player, (2) how could we categorize NCAA basketball teams into different systems, and (3) what analyses or metrics might indicate that an NCAA player is more or less likely to become an NBA superstar or bust than is already expected for that player. We will address the majority of our work in detail over three short posts, highlighting some of the key assumptions in this first post. Looking at each of these three questions in detail should give a fairly thorough review of our overall analysis.

First, what constitutes an NBA superstar? We considered several metrics for classifying superstars, including a player's number of All-Star appearances, his box score stats (both their impressiveness and their consistency), and his performance in the NBA playoffs. Ultimately, however, we selected a player's total win shares (WS) over the first 4 years of his career as the sole metric for classifying a star player, which brings up a key feature of our analysis. Since an underlying focus of the analysis is helping teams identify NBA superstars (the case competition was hosted and judged by the Charlotte Hornets), we looked only at performance over the first 4 years of a player's career after being drafted, the period during which he is contractually tied to a team before reaching free agency. Mentions of total WS throughout the post should be read as a player's total WS over his first 4 years after being drafted. Since a player's likelihood of becoming a superstar is of course closely tied to his performance in those first 4 years, we did not see this focus as limiting. As for the cutoff, we selected 20 WS over a player's first 4 years. WS assesses a player's overall NBA value as the share of his team's wins he is accountable for, and it serves well for identifying superstar players.

[Tables: Stars and Busts, selected star and bust players from the 2006-2012 drafts]

Second, what constitutes an NBA bust? We found this question more challenging to quantify than the superstar question, believing we could not look at WS alone on an absolute basis. Think about it this way: is a 60th overall pick with 0 WS a greater or lesser bust than a 1st overall pick with 5 WS? (5 WS over 4 years is very low for a top-10 pick. Greg Oden, widely considered one of the NBA's premier busts, had 6.8 WS, whereas a star player such as Kevin Durant had 38.3 over this period.) As expected, we consider that 1st overall pick the bigger bust than the 60th pick, owing to the higher expectations placed on top draft picks. More specifically, we classified as a bust any player drafted in the top 20 overall, with fewer than 8 total WS, whose WS were more than 6 fewer than what would have been expected given their draft position. Both cutoffs, for superstar and for bust, may seem arbitrary, but we selected them such that roughly 5%-10% of all drafted players were classified as stars and busts, respectively. The tables above highlight several of the star and bust players taken in the 2006-2012 drafts, and the players included in each table pass a sanity check. Since this analysis requires 4 years of NBA WS data, we did not look at players drafted more recently than 2012, and we lacked certain data from before 2006.

The last item we'd like to highlight in this post is what is meant by "WS were more than 6 fewer than what would have been expected given their draft position." We refer to total WS in excess of expected WS as net WS, calculated as the difference between a player's actual WS and the WS expected given his draft position. The graph below shows the historical average number of win shares in a player's first 4 seasons at each draft position, with a line of best fit. We can use that line to estimate how many WS to expect from a player given his draft position; to over-perform his draft position, a player must earn more WS than the line estimates. Going back to our earlier example, 1st overall pick Greg Oden would be expected to earn (-5.789 * ln(1) + 24.2) = 24.2 WS, but he earned only 6.8, for a net WS of -17.4. As for Kevin Durant, his actual 38.3 WS against an expected 20.2 given his draft position resulted in a net WS of +18.1.
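To make the classification concrete, here is a minimal Python sketch of the net WS calculation and the star/bust cutoffs described above. The fitted coefficients are the ones quoted above; the player records are just the two examples from the text.

```python
import math

def expected_ws(pick):
    """Best-fit curve from the WS-vs-draft-pick graph: expected WS
    over a player's first 4 seasons, given draft position."""
    return -5.789 * math.log(pick) + 24.2

def net_ws(actual_ws, pick):
    return actual_ws - expected_ws(pick)

def classify(pick, actual_ws):
    """Apply the star and bust cutoffs described in this post."""
    if actual_ws >= 20:                     # star: 20+ WS over first 4 years
        return "star"
    if pick <= 20 and actual_ws < 8 and net_ws(actual_ws, pick) < -6:
        return "bust"                       # top-20 pick, < 8 WS, net WS below -6
    return "neither"

# The two examples from the text:
for name, pick, ws in [("Greg Oden", 1, 6.8), ("Kevin Durant", 2, 38.3)]:
    print(name, round(net_ws(ws, pick), 1), classify(pick, ws))
# Greg Oden -17.4 bust
# Kevin Durant 18.1 star
```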

[Figure: average WS over a player's first 4 seasons vs. draft pick, with line of best fit]

With this basic foundation laid down, in the next post we will begin our main clustering analysis of NCAA systems based on play types, and extend that analysis to the college systems of the players we've classified as stars and busts using the criteria above.



A fresh take on batting the pitcher eighth

Eli Shayer and Scott Powers

First-year Cubs manager Joe Maddon made headlines shortly after joining his new team this offseason when he asked Chicago’s analytics staff to investigate the effect of batting the pitcher eighth in the lineup[1], rather than in the standard nine-hole. Maddon had demonstrated an affinity for batting the pitcher eighth in the past when his Tampa Bay Rays played interleague games in National League ballparks, requiring that the pitcher be included in the batting order.

Through his first 17 games at the helm of the Chicago Cubs, Maddon has written his starting pitcher’s name in the eighth slot of his lineup card each time. Should Maddon continue this habit, at season’s end he will have slotted his pitcher eighth more often in his career than did any other manager since 1916 not named Tony LaRussa[2]. But it would take almost two more full seasons of managing an NL team beyond that in order to pass LaRussa, the modern-day champion of the strategy.

The most common argument in favor of moving the pitcher up one spot in the order is based on the value of having a position player batting last, right before the lineup turns over and the strongest batters get their hacks. By batting the pitcher ninth, the argument goes, the best hitters are less likely to have runners on base when they come to the plate. This effect must be balanced with the mild counter-effect that, over the course of a 162-game season, the no. 8 hitter will get something like 20 more plate appearances than the no. 9 hitter.

There are additional reasons to suspect that batting the pitcher eighth may be the better strategy. Maddon himself points out that after five or six innings, the pitcher's spot in the lineup is often filled by a pinch hitter, who may be a better batter than the worst-hitting position player in the starting lineup and certainly has the potential to be a better fit for the situation[3]. Sabermetricians have tackled this problem in the past: Mitchel Lichtman concluded that, while the answer depends on the lineup, it is often a toss-up between the two strategies[3], and John Beamer concluded that batting the pitcher eighth was better for the 2007 Cardinals[4].

Here we present the results of an original analysis to tackle the same question, based on simulation and using more recent data. Specifically, using 2014 National League data only, we estimate the probability of each possible outcome of a plate appearance for non-pitchers in each spot of the order, first through eighth. We estimate the same probabilities for pitchers and pinch hitters. Additionally, for each type of ball in play, we estimate the distribution of baserunner advancement, depending on the number of outs and the spot in the order of the baserunner. For example, with the leadoff hitter on second base and two outs, 81% of singles plated that runner while 15% of singles advanced the runner only to third base and 4% of singles resulted in the runner being thrown out. Those same fractions for a no. 4 hitter are 78%, 16% and 6%, respectively.
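As an illustration of this estimation step, here is a minimal Python sketch that tabulates baserunner advancement on singles from play-by-play data. The file and column names (`hit_type`, `start_base`, `outs`, `runner_slot`, `end_base`) are hypothetical stand-ins for whatever event data is available, not the actual dataset used in the analysis.

```python
import pandas as pd

# Hypothetical play-by-play data: one row per baserunner on each ball in play.
pbp = pd.read_csv("nl_2014_baserunners.csv")

# Singles with a runner starting on second base.
singles = pbp[(pbp["hit_type"] == "single") & (pbp["start_base"] == 2)]

# Where the runner ends up (4 = scored, 3 = third base, 0 = thrown out),
# broken down by outs and the runner's spot in the batting order.
advancement = (
    singles.groupby(["outs", "runner_slot"])["end_base"]
    .value_counts(normalize=True)
    .rename("share")
    .reset_index()
)
print(advancement.head())
```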

Equipped with these percentages, we simulated a large number (500,000, if you must know) of games each with the starting pitcher batting eighth and the pitcher batting ninth, varying the number of innings pitched by the starter from three to nine. The results are summarized in the table below. The important observation to take away from these results is that while some numbers are larger than others and these differences may be statistically significant due to the large number of simulations, there is no evidence of a strategically significant difference between the two lineups.

Average runs scored per game, by starter innings pitched (IP):

Pitcher IP     3       4       5       6       7       8       9
Pitcher 9th    3.4994  3.4972  3.4967  3.4924  3.4994  3.4997  3.4960
Pitcher 8th    3.4963  3.4990  3.4965  3.4999  3.4925  3.4966  3.5001
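For readers who want to experiment, below is a heavily simplified Python sketch of this style of lineup simulation. It is not our simulator: the outcome probabilities are made up, every position player shares one generic distribution, pinch hitters are ignored, and baserunners advance by a fixed number of bases rather than by the estimated advancement distributions described above.

```python
import random

# Per-PA outcome probabilities by batter type. Made-up, illustrative numbers;
# a real simulator would estimate probabilities for each lineup slot from data.
OUTCOMES = ["out", "walk", "single", "double", "triple", "hr"]
POSITION_PLAYER = [0.68, 0.08, 0.15, 0.05, 0.005, 0.035]
PITCHER = [0.85, 0.04, 0.09, 0.015, 0.002, 0.003]
ADVANCE = {"walk": 1, "single": 1, "double": 2, "triple": 3, "hr": 4}

def simulate_game(pitcher_slot, innings=9):
    """Simulate one game; return runs scored by the lineup."""
    weights = {slot: (PITCHER if slot == pitcher_slot else POSITION_PLAYER)
               for slot in range(1, 10)}
    runs, batter = 0, 1
    for _ in range(innings):
        outs, bases = 0, [0, 0, 0]          # runners on 1st, 2nd, 3rd
        while outs < 3:
            result = random.choices(OUTCOMES, weights[batter])[0]
            if result == "out":
                outs += 1
            else:
                # Crude advancement: batter and all runners move the same
                # number of bases.
                n = ADVANCE[result]
                new_bases = [0, 0, 0]
                for i, occupied in enumerate(bases):
                    if occupied:
                        if i + n >= 3:
                            runs += 1       # runner scores
                        else:
                            new_bases[i + n] = 1
                if n >= 4:
                    runs += 1               # home run: batter scores
                else:
                    new_bases[n - 1] = 1
                bases = new_bases
            batter = batter % 9 + 1
    return runs

N = 100_000
for slot in (8, 9):
    avg = sum(simulate_game(slot) for _ in range(N)) / N
    print(f"Pitcher batting {slot}th: {avg:.4f} runs per game")
```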

One problem with this approach for evaluating the strategy is that the simulator underestimates the run-scoring environment: an average of about 3.5 runs per game is lower than the 2014 National League average, so there is some room for improvement in the simulator. But our results are consistent with past findings, with the difference between the two lineups likely on the order of less than one run over the course of an entire season.

Given our findings, we suspect that the Cubs analytics staff came to a similar conclusion — that it doesn’t really matter whether the pitcher bats eighth or ninth — and gave Maddon the thumbs-up to do whatever his heart felt was right. At least, the Cubs’ lineups to this point this season have not been inconsistent with this hypothesis.

References

[1] Neil Finnell. Cubs researching benefits of batting the pitcher eighth in the lineup. Chicago Cubs Online. December 3, 2014.

[2] J.G. Preston. A history of pitchers not batting ninth, and the managers who did it most often. The J.G. Preston Experience. Accessed April 28, 2015.

[3] Richard Bergstrom. Baseball rarity: Cubs, Rockies hit pitchers in eighth slot. ESPN. April 10, 2015.

[4] John Beamer. Is LaRussa right to bat his pitcher in the eight slot? The Hardball Times. October 1, 2007.

New Leadership Elected

The Stanford Sports Analytics Club has held elections for its officer positions for the next year. The club will again be led by co-Presidents: Vihan Lakshman will continue to serve as co-President and will be joined by the newly elected Scott Powers. Sandy Huang will continue in his previous position as Financial Officer. Eli Shayer will serve as Blog Editor-in-Chief and Tech Officer.

Thank you to the previous leadership of the club for bringing about a tremendously successful first year of existence for SSAC. A special thank you goes to outgoing co-President John Sears, who co-founded SSAC last year and leaves the club in a great position to continue into the future.

The Frictional Cost of a Call to the Bullpen


Post by Eli Shayer and Scott Powers

It is well known that a starter’s performance tails off as he pitches deeper into a game. This drop off in results is attributed to facing the same batters multiple times, pitcher fatigue, and inconsistencies in mechanics. In this work, we examine reliever performance to see if there is an analogous effect.

Our study uses wOBA, a statistic developed by Tom Tango that measures the contribution of plate-appearance results toward run creation, in units of runs. When assessing a pitcher, a low wOBA allowed indicates strong performance, while a high wOBA indicates the opposite. Expected wOBA is derived from the batter's season wOBA. The figure below shows the difference between observed and expected wOBA for relievers, as a function of the number of batters faced, based on all batters faced by MLB relievers from 2000 to 2013.

For example, the value 1 along the x-axis corresponds to the first batter faced by relievers, the value 2 to the second batter faced, and so on. Because pitcher and batter handedness have a significant impact on the result, we split the results into separate curves for each possible handedness pairing. The "All Handedness" curve is the unweighted average of the four other curves.

[Figure: observed minus expected wOBA by number of batters faced, with one curve per handedness matchup plus an "All Handedness" average]

After the fourth or fifth batter faced (BF), the curves fluctuate greatly due to insufficient sample size, but they all show the same pattern at the beginning: on average, the wOBA of the first BF is 10 wOBA points (0.010) higher, relative to expectation, than the wOBA of the second BF. This difference scales to about 0.37 runs per 9 innings, because the average number of batters faced per 9 innings is about 37 (0.010 x 37 = 0.37).
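A curve like the one above could be computed along the following lines, assuming a hypothetical plate-appearance table with one row per PA, the reliever's batter-faced index (`bf_num`), the outcome's wOBA value (`woba_value`), the batter's season wOBA (`batter_season_woba`), and pitcher/batter handedness columns:

```python
import pandas as pd

pa = pd.read_csv("reliever_pa_2000_2013.csv")  # hypothetical file

# Observed minus expected wOBA for each plate appearance.
pa["woba_diff"] = pa["woba_value"] - pa["batter_season_woba"]

# Average difference by batters faced, split by handedness matchup.
curve = (
    pa.groupby(["pitcher_hand", "batter_hand", "bf_num"])["woba_diff"]
    .mean()
    .reset_index()
)
print(curve[curve["bf_num"] <= 5])
```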

Our proposed explanation of this frictional substitution cost is that pitchers need to feel out their pitches at full game effort before being completely game-ready. Warm-up pitches in the bullpen appear not to prepare a reliever sufficiently for entering the game, and relievers pay the price against their first batter.

What kind of relievers struggle most against the first batter faced?

While we account for batter skill by comparing results against the batter's expected results (and in doing so adjust for year), the results above do not account for pitcher ability. Pitchers who face more batters than average are relatively over-represented in the fifth-BF sample, while pitchers who face fewer batters than average are relatively over-represented in the first-BF sample.

To account for this source of bias, we define a reliever’s type based on the number of batters he faces in an average outing. The three categories were < 3.5 BF, 3.5 – 4 BF, and > 4 BF. These categories were derived from the distribution of average number of batters faced, which was centered at 3.5 – 4 BF, with long tails on either end. Using the same model as above, we made a similar graph for each category of reliever, included in the figure below.

[Figure: observed minus expected wOBA by batters faced, for each of the three reliever categories]
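The categorization step might look like the following sketch, continuing with the same hypothetical plate-appearance table and assuming `pitcher_id` and `appearance_id` columns:

```python
import pandas as pd

pa = pd.read_csv("reliever_pa_2000_2013.csv")  # hypothetical file

# Average batters faced per outing, for each reliever.
avg_bf = (
    pa.groupby(["pitcher_id", "appearance_id"]).size()
    .groupby("pitcher_id")
    .mean()
)

# Bucket relievers into the three categories used above.
reliever_type = pd.cut(
    avg_bf,
    bins=[0, 3.5, 4, float("inf")],
    labels=["< 3.5 BF", "3.5-4 BF", "> 4 BF"],
)
print(reliever_type.value_counts())
```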

Dividing the data to this granularity reduces the sample sizes enough that the noise masks the signal: none of the three graphs above shows a clear trend. However, one important observation is that the three groups of relievers do not differ significantly in overall performance across the first five batters faced. So we have assuaged concerns that the observed first-batter effect may be due to sampling bias.

How do relievers struggle against the first batter faced?

To try to understand exactly how reliever performance changes as they face more batters, we broke down the distribution of results by number of batters faced. The table below gives the percentage of plate appearances ending in each result.

[Table: percentage of plate appearances ending in each result, by number of batters faced]
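The breakdown itself is a simple cross-tabulation; here is a sketch under the same hypothetical data assumptions, with an `outcome` column holding labels such as "single" or "HR":

```python
import pandas as pd

pa = pd.read_csv("reliever_pa_2000_2013.csv")  # hypothetical file

# Percentage of plate appearances ending in each result,
# for each batter-faced index.
outcome_pct = pd.crosstab(pa["bf_num"], pa["outcome"], normalize="index") * 100
print(outcome_pct.round(1).head(10))
```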

The data in the table show that the mechanism behind relievers performing worse against their first batter faced is increased power: the first batter of a reliever's appearance hits fewer singles than typical, but hits an above-average proportion of doubles, triples, and home runs. Additionally, the peril of leaving a reliever in too long is clear when comparing the first few batters faced to the last few in the chart. In fact, by the 7th batter the first-batter effect is overtaken by a fatigue effect, beyond which reliever performance steadily worsens and falls below the level of the first batter faced.

What about the first batter faced in subsequent innings?

The final aspect of our analysis was to look at whether there is a first-batter effect for each inning, similar to the one we found for each appearance. Knowing that the first-batter effect exists for a reliever's first inning, we separated that effect from a potential first-batter-of-the-inning effect by looking exclusively at plate appearances thrown by relievers coming back out of the dugout after recording the final out of the previous inning. Pooling all innings besides the first by number of batters faced yields the figure below.

[Figure: observed minus expected wOBA by batters faced, within innings after the reliever's first]

The graph doesn’t show any notable patterns in the first several batters faced. There doesn’t appear to be an analogous first batter effect, and moreover the data shows an opposite result. There is an oddly consistent result in the fifth and sixth batters faced, which is a source of intrigue. Otherwise, the data doesn’t show an effect on the first batter of an inning, other than the first batter of the appearance as a whole.

Conclusion

We have shown that relievers struggle against the first batter they face, relative to expectation. The data were insufficient to identify which types of relievers suffer most from this effect, but we were able to identify that the increased wOBA against the first batter faced is driven by an increase in power: the proportion of doubles, triples, and home runs against the first BF is higher than would otherwise be expected when relievers enter a ballgame.

Intuitively, these results make sense. A reliever who has just entered the game could not be described as being “in rhythm.” These results suggest that there is an increased risk of such a reliever throwing a mistake pitch, resulting in extra bases. Perhaps, on average, the time spent warming up in the bullpen is insufficient for a reliever to be “game ready.”

The frictional cost we observed is equivalent to a difference of about 0.37 runs in ERA. So while much has been made of the value of using relievers, this effect is something managers need to take into account when managing their bullpens.

Something that we did not explore is whether relievers struggle more against the first batter faced when they have more or less forewarning that they will enter the game. This preparedness may be difficult to measure, but a possible surrogate would be an indicator of whether the reliever entered mid-inning. We leave this to future work.

Eli Shayer is an undeclared freshman from Anchorage, Alaska. He misses having snow available for cross country skiing.

Scott Powers is a PhD student in statistics and an analytics consultant to the Oakland Athletics. He plays catcher for the club baseball team and setter for the club volleyball team.

Contact Eli at eshayer ‘at’ stanford.edu and Scott at sspowers ‘at’ stanford.edu

Examining MLB Postseason Cluster Luck: or, Why the Playoffs Might Be a Crapshoot


Post by Vihan Lakshman

What role does luck play in baseball success? As one of the pioneering sports in quantitative analysis, our national pastime is now understood, in many respects, as a finely tuned game of numbers. But does that tell the whole story?

Many prominent baseball figures, including Billy Beane, have described the MLB playoffs as a "crapshoot," a roll of the dice that throws regular season success out the window. As Beane puts it, the teams that make the playoffs undoubtedly deserve to be there after a marathon 162-game regular season, but pure luck might be the ultimate factor behind who ends up hoisting the World Series trophy.

To explore this idea of postseason luck in more detail, we can examine the “cluster luck” of teams in the regular season and the postseason. First coined by Joe Peta in his book Trading Bases, cluster luck provides a numerical measure of a team’s fortune in stringing together hits.

Jonah Keri of Grantland explains the phenomenon of cluster luck with an example: “Say a team tallies nine singles in one game. If all of those singles occur in the same inning, the team would likely score seven runs; if each single occurs in a different inning, however, it’d likely mean a shutout.”

As a further example of very unfortunate cluster luck, consider this box score from Baseball-Reference of a 2005 meeting between Minnesota and Kansas City, in which the Twins tied a 1969 MLB record for the most hits in a game without scoring a run.

[Box score of the 2005 Twins-Royals game: a record-tying number of hits without a run]

Thus, if we use cluster luck as a tool to measure the respective fortunes of MLB teams in the regular season and the postseason, we might be able to shed some light on whether the playoffs are indeed a crapshoot, or whether there is, in fact, a correlation between regular season and postseason cluster luck, which would suggest that cluster luck may not be luck at all.

While the idea behind cluster luck makes intuitive sense, there is no clear-cut, standard method of calculating how well a team bunches hits together. In this analysis, I used the Base Runs formula, a model for predicting scoring that is considered among the most accurate sabermetric statistics for run estimation. For all playoff teams between 2007 and 2014, I calculated each club's regular season and postseason luck by determining its predicted run total from the Base Runs formula and subtracting that from the actual number of runs scored. A negative number indicates that a team scored fewer runs than expected and is hence "unlucky," while a positive score denotes "good luck" and specifies by how many runs a team exceeded our Base Runs prediction.
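For reference, here is a minimal Python sketch of one common, simple version of David Smyth's Base Runs estimator and the resulting cluster-luck number; variants of the formula differ across sources, so treat the coefficients as illustrative:

```python
def base_runs(ab, h, doubles, triples, hr, bb):
    """One common, simple variant of David Smyth's Base Runs estimator."""
    tb = h + doubles + 2 * triples + 3 * hr               # total bases
    a = h + bb - hr                                       # baserunners
    b = (1.4 * tb - 0.6 * h - 3 * hr + 0.1 * bb) * 1.02  # advancement
    c = ab - h                                            # outs
    d = hr                                                # runs that score themselves
    return a * b / (b + c) + d

def cluster_luck(actual_runs, **batting_totals):
    """Positive = scored more than the Base Runs estimate ('lucky')."""
    return actual_runs - base_runs(**batting_totals)

# Made-up season totals, purely for illustration:
print(cluster_luck(700, ab=5500, h=1400, doubles=280, triples=30, hr=150, bb=500))
```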

In examining the World Series winner from 2007-2014, we see that the vast majority of teams enjoyed positive cluster luck in the postseason.

[Table: regular season and postseason cluster luck for World Series winners, 2007-2014]

Perhaps what’s more surprising about this list is the overwhelming amount of negative cluster luck during the regular season, most notably on the part of the 2009 Yankees who finished at the bottom of MLB in regular season luck. This phenomenon can likely be explained by considering that teams who manage to win games in spite of bad luck might be the most talented. In addition, this table of World Series winners provides our first bit of evidence that there may not be a correlation between regular season and postseason cluster luck, affirming the theory of the playoffs as a crapshoot.

To test this idea in further detail, I conducted a simple linear regression examining postseason cluster luck versus regular season luck.

[Figure: postseason vs. regular season cluster luck, with fitted regression line]

Under the null hypothesis that the true slope of our linear regression is 0, we use a two-sided t-test to obtain a p-value of 0.6201, which is greater than our significance level of 0.1. Therefore, we fail to reject the null hypothesis; we cannot conclude that any relationship exists between postseason and regular season luck.

In our regression, we obtained an R² value of 0.003987, suggesting that regular season cluster luck explains virtually none of the variance in postseason luck.
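The regression itself is straightforward to reproduce with scipy, given one (regular season, postseason) cluster-luck pair per playoff team; the values below are made up purely for illustration:

```python
from scipy import stats

# Hypothetical cluster-luck pairs, one per playoff team, 2007-2014.
regular = [-10.2, 4.5, -3.1, 8.0, -6.7, 2.2, 5.9, -1.4]
postseason = [1.3, -0.8, 2.5, -1.1, 0.4, 3.0, -2.2, 0.9]

# The slope test is the two-sided t-test described above.
fit = stats.linregress(regular, postseason)
print(f"slope = {fit.slope:.4f}, p = {fit.pvalue:.4f}, R^2 = {fit.rvalue ** 2:.6f}")
```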

Ultimately, we found no evidence of a relationship between a team's luck in the regular season and in the playoffs, which is what one would expect if it were truly luck. Although we cannot conclude that no relationship exists, there might in fact be something to the intuitive notion that the playoffs are a crapshoot. Whether this news is comforting to perennial playoff disappointments like the A's, I can't say, but the question of whether luck can play such a huge role in determining legacies in sports is fascinating and definitely deserving of further exploration.

Vihan Lakshman is a junior from Savannah, GA studying mathematics. He also writes about football for The Stanford Daily and broadcasts sports for KZSU student radio. In his free time, he loves playing intramural sports and hopelessly rooting for the Atlanta Falcons to return to the Super Bowl.

Contact Vihan at vihan ‘at’ stanford.edu

The Importance of Having a High NBA Draft Pick


Post by Konstantinos Balafas

On October 21st, the NBA board of governors voted against reforming the NBA's draft lottery. A very good review of the proposed changes and potential ramifications can be found here, but the overarching theme of the league's proposal was limiting "tanking". The board of governors ended up rejecting the proposal and, while the argument made was that the changes would hurt small-market teams, the vote indicates that there are NBA GMs and owners who are (or may in the future be) willing to embrace a losing ideology for the reward of a high draft pick. That brings us to the million-dollar question: is tanking really worth it?

THE DATA

In an attempt to answer that question with numbers, names, and simple analysis, we gathered data on the "most successful" players since 2000 (from Wikipedia) and on teams' win/loss percentages since 1985 (from basketball-reference.com), the year the lottery system came into effect. For the purposes of this article, the "most successful" players are those elected to All-NBA and All-Star teams, as well as the starters for teams that played in the NBA Finals.

There are certain caveats to this analysis. As far as the players are concerned, traded picks, on draft night or otherwise, are not considered. So, for this analysis, Kobe Bryant is a Charlotte Hornets pick despite never playing a minute for them, and Jeff Green, as the #5 pick in 2007, is not credited with helping Boston achieve the best single-year turnaround in league history. As far as team performances are concerned, only the top pick of each team is considered, in order to simplify the analysis. That means that any effect Tristan Thompson (#4 pick, 2011) may have had for the Cleveland Cavaliers has been attributed to Kyrie Irving (#1 pick, 2011).

PICK DISTRIBUTIONS

As a first-pass analysis, we plotted the histograms of the draft picks for the aforementioned player categories, which are shown below. The histograms show a concentration of draft picks in the 1-10 range, which reinforces the intuitive belief that “good players are generally drafted high”.

[Figure 1: histograms of draft picks for All-NBA players, All-Stars, and NBA Finals starters]

It is worth noting that no player drafted lower than 10 has made the first All-NBA team since 2000. So far, the pick distributions shown indicate that it is indeed important for a team to have high draft picks and therefore tanking may indeed be a viable strategy for lottery teams. However, a (very) good player does not a good team make, or Kevin Love would still be plying his trade in Minnesota.

For that reason, let us explore the picks of the players that have started at least one game in the NBA Finals over the past 14 years. Figure 2 shows these picks for the NBA Champions (left) and the NBA Runners-up (right).

[Figure 2: draft picks of NBA Finals starters, champions (left) and runners-up (right)]

Again, the vast majority of the players were drafted in the lottery (picks 1-14). Interestingly enough, with the exception of the 2007-2011 interval and the '04 Pistons, there has been no NBA champion without a #1 pick. Even those exceptions had multiple top-10 picks. Still more indication that teams need lottery picks to contend for a title!

THE BIG-3 EFFECT

There is, however, an important parameter that has not yet been investigated. As the Miami Heat proved, the draft is not the only way to acquire former high draft picks and, subsequently, title rings. For that reason, Figure 3 shows the same histograms as Figure 1, except that different colors now distinguish players who achieved these honors with the team that drafted them from those who did so with a different one.

[Figure 3: draft pick histograms, colored by drafting team vs. different team]

Overall, there is no single clear trend in how draft picks split between the drafting team and a different team: top picks tend to stay (or be more successful) with the team that drafted them, while NBA Finals starting fives tend to be assembled in ways other than the draft.

DRAFT POSITION VS. IMPROVEMENT

So far, then, even if there is no clear answer on whether a team is justified in tanking, quite a bit of the data points that way. On the other hand, we have only looked at All-NBA teams, All-Star teams, and NBA finalists, which can be a tall order for a young kid who has just been drafted (unless your name is Tim Duncan, but more on that later). It is reasonable, then, to investigate the more short-term effect of draft picks.

[Figure 4: change in next-season W/L percentage vs. draft pick]

Generally, if a high draft pick were strongly correlated with success, we would expect teams with a high draft pick to exhibit a significant improvement the following year. In that case, the points in the top part of Figure 4 (teams with a high draft pick) would be clustered towards the right of the figure (large difference in W/L percentage), which is clearly not the case.

Maybe, then, one year is too short a time for a rookie to prove his worth? To control for that, we looked at the progression of win/loss percentage over the four years after a high draft pick; the four-year window was selected because that is the length of a rookie contract. Figure 5 shows the league-average difference in win/loss percentage against the number of years since the team had a lottery pick in the draft.

[Figure 5: league-average change in W/L percentage by years since lottery pick]

Based on the previous figure, it can be argued that a team will consistently improve over the four years after a lottery pick. Of course, there are many other factors that play a part, such as other roster moves, coaching changes, new draft picks etc., as well as the fact that this is the league average. Still, it is hard to make a strong case against tanking.
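The league averages behind Figure 5 could be computed along these lines, assuming a hypothetical table of team-seasons with columns `team`, `year`, `wl_pct`, and a `lottery_pick` flag:

```python
import pandas as pd

seasons = pd.read_csv("team_seasons.csv")  # hypothetical file
wl = seasons.set_index(["team", "year"])["wl_pct"]

# For every (team, year) with a lottery pick, collect the change in
# W/L percentage k seasons later, for k = 1..4.
picks = seasons.loc[seasons["lottery_pick"], ["team", "year"]]
diffs = {k: [] for k in range(1, 5)}
for team, year in picks.itertuples(index=False):
    for k in range(1, 5):
        if (team, year + k) in wl.index:
            diffs[k].append(wl[(team, year + k)] - wl[(team, year)])

for k, vals in diffs.items():
    print(f"{k} year(s) after a lottery pick: "
          f"avg W/L change = {sum(vals) / len(vals):+.3f}")
```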

Does that, then, mean that a couple of draft picks can turn a franchise around? Figure 6 shows a grid of teams and seasons. A blue square indicates that a particular team had a lottery pick at a particular year and a larger square corresponds to a higher pick.

[Figure 6: lottery picks by team and season; larger squares indicate higher picks]

It can be seen that lottery picks come in waves. It takes more than a few years for a team to accumulate enough talent (or assets) to go from lottery team to playoff contender. Once the team goes through that breakthrough, though, there’s a good chance it will stay that way for at least a few years.

FROM DRAFT PICKS TO GLORY, OR, THE TIM DUNCAN EFFECT

So, we saw that once a team has stockpiled enough high draft picks, it can break through the cycle of mediocrity; the Durant-Westbrook-led Thunder are living proof of that. Can that, though, lead a team to glory? The following figure shows the number of years since the last lottery pick for the NBA champions since 1985, and, by the looks of it, it usually takes 4-6 years from the last lottery pick to win a championship. So, not an immediate turnaround, but well within the realm of possibility that the team won the Larry O'Brien trophy thanks to its lottery picks.

[Figure 7: years since last lottery pick for NBA champions since 1985]

That is especially true in the case of one Timothy Theodore Duncan, who, as the last lottery pick of the San Antonio Spurs, has led them into a state of perpetual championship contention, with 5 rings and 0 lottery picks in the past 16 years. While Duncan's contribution is undeniable, there is also a lot to be said for the system he was drafted into, from the presence of a Hall of Famer like David Robinson and a Hall of Fame-caliber coach in Gregg Popovich to the scouting team that landed All-Stars like Tony Parker and Manu Ginobili with the 28th and 57th picks, respectively.

It is also worth noting that in the two cases of the quickest lottery-to-championship turnaround (one year between lottery and championship), the 2004 Pistons and the 2008 Celtics, neither draft pick contributed significantly to the team. Darko Milicic, the #2 pick in 2003, averaged 4.8 minutes in 32 games for the Pistons (1.8 minutes per game in 8 playoff games), while Jeff Green, the #5 pick in 2007, was traded to the Seattle SuperSonics. It could, however, be argued that Jeff Green did contribute to the Celtics' championship season, as he was part of the package that brought Ray Allen to Boston.

CONCLUSIONS

The first, and easiest, conclusion to be made here is that high draft picks tend to be good players. Secondly, it can be seen that players of that caliber are absolutely necessary for a team to challenge for a championship. Not only that, but, on average, a lottery pick will result in an improvement in win/loss percentage. Maybe not necessarily right away but at least within the lifespan of the rookie deal of said lottery pick. On the other hand, it is also demonstrated that it takes multiple high draft picks for a team to become a playoff contender, and that’s what it all boils down to. If a team is willing to suffer several years of mediocrity (to put it mildly) and accumulate a significant amount of talent through the draft, chances are that they will become a playoff (or even championship) contender. Like everything else, tanking takes commitment, but also has its rewards.

Konstantinos Balafas is finishing up his PhD on detecting damage from earthquakes. He grew up watching soccer and basketball and loves Steve Nash, Paolo Maldini and Bill Self.

Contact Konstantinos at balafas ‘at’ stanford.edu