Expected Goals (xG), Uncertainty, and Bayesian Goalies

All xG model code can be found on GitHub.

Expected Goals (xG) Recipe

If you’re reading this, you’re likely familiar with the idea behind expected goals (xG), whether from soccer analytics, early work done by Alan Ryder and Brian MacDonald, or current models by DTMAboutHeart and Asmean, Corsica, Moneypuck, or things I’ve put up on Twitter. Each model attempts to assign a probability of each shot being a goal (xG) given the shot’s attributes, like shot location, strength, shot type, preceding events, shooter skill, etc. There are also private companies supplementing these features with additional data (most importantly pre-shot puck movement on non-rebound shots and some sort of traffic/sight-line metric), but this data is not public, nor generated in real-time, so it will not be discussed here.[1]

To assign a probability (between 0% and 100%) to each shot, most xG models likely use logistic regression – a workhorse in many industry response models. As you can imagine, the critical aspect of an xG model, and any model, becomes feature generation – the practice of turning raw, unstructured data into useful explanatory variables. NHL play-by-play data requires plenty of preparation to properly train an xG model. I have made the following adjustments to date:

  • Adjust for recorded shot distance bias in each rink. This is done by taking the cumulative density function of shot distances in games where the team is away and applying that density function to shots in the team’s home rink, in case their home scorer is biased. For example (with totally made-up numbers), when Boston is on the road their games see 10% of shots within 5 feet of the goal, 20% of shots within 10 feet of the goal, etc. If at home in Boston only 10% of shots were within 10 feet of the goal, we might suspect that the scorer in Boston is systematically recording shots further away from the net than other rinks; the aggregated biases of 29 other data-recorders should be smaller than the bias of a single Boston data-recorder. We assume games with that team produce similar event coordinates both home and away, so we can transform the home distribution to match the away distribution. The figure below demonstrates how distributions can differ between home and away games, highlighting the probable bias of the Boston and NY Rangers scorers that season, which was adjusted for. Note we also don’t necessarily want to transform by an average, since the bias is not necessarily uniform across the spectrum of shot distances (a code sketch of this transformation follows the list below).
[Figure: home rink bias – No Place Like Home]

  • Figure out what events led up to the shot, what zone they took place in, and the time elapsed between these events and the eventual shot, while ensuring stoppages in play are caught.
  • Limit to just shots on goal. Misses include information, but, like shot distance, contain scorer bias. Some scorers are more likely to record a missed shot than others. Unlike shots on goal, where we have a recorded event that is merely biased, adjusting for misses would require ‘inventing’ occurrences in order to offset the biases in certain rinks, which seems dangerous. It’s best to ignore misses for now, particularly because the majority of my analysis focuses on goalies. Splitting the difference between misses caused by the goalie (perhaps through excellent positioning and a reputation for not giving up pucks through the body) and those caused by recorder bias seems like a very difficult task. Shots on goal test the goalie directly, hence they will be the focus for now.
  • Clean goalie and player names. Annoying but necessary – both James and Jimmy Howard make appearances in the data, and they are the same guy.
  • Determine the strength of each team (powerplay for or against, or whether the goaltender is pulled for an extra attacker). There is a tradeoff here. Coefficients for the interaction of strength states (i.e. modeling 5v4, 6v5, and 4v3 separately) pick up interesting interactions, but show significant instability from season to season. For example, 3v3 went from a penalty-box-filled improbability to a common occurrence to finish overtime games. Alternatively, shooter strength and goalie strength can be modeled separately; this is more stable but less interesting.
  • Determine the goaltender and shooter handedness and position from look-up tables.
  • Determine which end of the ice, and which coordinates (positive or negative), the home team is based in using the events recorded in any given period, and rink-adjust coordinates accordingly.
  • Calculate shot distance and shot angle. Determine which side of the ice the shot came from and whether or not it was from the shooter’s off-wing, based on handedness.
  • Tag shots as rushes or rebounds; for rebounds, calculate how far the puck travelled and its angular velocity from shot 1 to shot 2.
  • Calculate ‘shooting talent’ – a regressed version of shooting percentage using the Kuder-Richardson Formula 21, employed the same way as in DTMAboutHeart and Asmean‘s xG model (the formula is written out just after this list).
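
To make the first bullet concrete, here is a minimal sketch of the distance quantile-mapping in R. The vector names and the toy bias are illustrative only, not the production code from the GitHub repo.

```r
# Quantile-mapping sketch: transform home-recorded shot distances so their
# distribution matches the same team's road-game distribution.
adjust_home_distances <- function(home_dist, away_dist) {
  home_cdf <- ecdf(home_dist)          # empirical CDF of home-recorded distances
  percentiles <- home_cdf(home_dist)   # each home shot's percentile in that CDF
  # Read off the road-game distance at the same percentile
  quantile(away_dist, probs = percentiles, names = FALSE)
}

# Toy example: a home scorer who systematically adds ~5 feet
set.seed(42)
away_dist <- rgamma(1000, shape = 4, scale = 8)  # road-game shot distances
home_dist <- sample(away_dist) + 5               # same shots, biased recorder
adjusted  <- adjust_home_distances(home_dist, away_dist)
```

Because the transform matches entire distributions rather than shifting by an average, a bias that only affects, say, short-range shots gets corrected only where it occurs.

The ‘shooting talent’ bullet leans on the Kuder-Richardson Formula 21, which is worth writing out. Treating a shooter’s $n$ shots as $n$ pass/fail items, KR-21 estimates the reliability of raw shooting percentage as

$$r_{21} = \frac{n}{n-1}\left[1 - \frac{\bar{X}(n - \bar{X})}{n\,s_X^2}\right],$$

where $\bar{X}$ and $s_X^2$ are the mean and variance of goal totals across shooters. The regressed shooting talent is then the reliability-weighted blend $r_{21}\cdot\text{Sh\%}_{\text{shooter}} + (1 - r_{21})\cdot\text{Sh\%}_{\text{league}}$, which is my reading of how DTMAboutHeart and Asmean employ it.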

All of this is to say there is a lot going on under the hood; the results are reliant on the data being recorded, processed, adjusted, and calculated properly. Importantly, the cleaning and adjusting of the data will never be complete – there will always be issues that haven’t been discovered or adjusted for yet. There is no perfect xG model, nor is it possible to create one from the publicly available data, so it is important to concede that there will be some errors; the goal is to prevent systemic errors that might bias the model. But these models do add useful information that regular shot-attempt models cannot, creating results that are more robust and useful, as we will see.

Current xG Model

The current xG model does not use all of the developed features. Some didn’t contain enough unique information, perhaps overshadowed by other explanatory variables. Some might have been generated from sparse or inconsistent data. Hopefully, current features can be improved or new features created.

While the xG model will continue to be optimized to maximize out-of-sample performance, the discussion below captures a snapshot of the model. All cleanly recorded shots from 2007 to present are included, randomly split into 10 folds. Each of the 10 folds was used in turn as a testing dataset (checking to see if the model correctly predicted a goal or not by comparing it to actual goals) while the other 9 folds were used to train the model. In this way, all reported performance metrics consist of comparing model predictions on the unseen data in the testing dataset to what actually happened. This is known as k-fold cross-validation and is fairly common practice in data science.
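
The cross-validation setup, sketched in R with caret; the data frame `shots` and its column names are stand-ins for the real feature set, not the actual training script.

```r
library(caret)

# 10-fold cross-validated logistic regression for the xG model.
# `shots` is assumed to have a factor outcome `goal` ("yes"/"no") plus
# features like those described above; names here are illustrative.
ctrl <- trainControl(method = "cv", number = 10,
                     classProbs = TRUE,
                     summaryFunction = twoClassSummary)  # reports AUC as "ROC"

xg_fit <- train(goal ~ shot_distance + shot_angle + is_rebound + is_rush,
                data = shots,
                method = "glm", family = "binomial",
                metric = "ROC",
                trControl = ctrl)

xg_fit$results$ROC  # cross-validated AUC; the full model sits near 0.75
```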

When we rank-order the predicted xG from highest to lowest probability, we can compare the share of goals they account for to shots ordered randomly. This gives us a gains chart, a graphic representation of how well the model finds actual goals relative to selecting shots randomly. We can also calculate the Area Under the Curve (AUC), where 1 is a perfect model and 0.5 is a random model. Think of the random model in this case as shot-attempt measurement, treating all shots as equally likely to be a goal. The xG model has an AUC of about 0.75, which is good, and safely in between perfect and random. The most dangerous 25% of shots as selected by the model make up about 60% of actual goals. While there’s irreducible error and model limitations, in practice it is an improvement over unweighted shot attempts and accumulates meaningful sample size quicker than goals for and against.

[Figure: gains chart – Gains, better than random]

Hockey is also a zero-sum game. Goals (and expected goals) only matter relative to league average. Original iterations of the expected goal model, built on a decade of data, showed that goals were becoming dearer compared to what was expected. Perhaps goaltenders were getting better, or league data-scorers were recording events to make things look harder than they were, or defensive structures were impacting the latent factors in the model, or some combination of these explanations.

Without the means to properly separate these effects, each season receives its own weights for each factor. John McCool had originally discussed season-to-season instability of xG coefficients. Certainly this model contains some coefficient instability, particularly in the shot-type variables. But overall these magnitudes adjust to equate each season’s xG to actual goals. Predicting a 2017-18 goal would require additional analysis and smartly weighting past models.

[Figure: Coefficient Stability – Less volatile than goalies?]

xG in Action

Every shot has a chance of going in, ranging from next to zero to close to certainty. Each shot in the sample is there because the shooter believed there was some sort of benefit to shooting rather than passing or dumping the puck, so we don’t see a bunch of shots from the far end of the rink, for example. xG then assigns each shot a probability of being a goal, based on the explanatory variables generated from the NHL data listed above – shot distance, shot angle, whether the shot is a rebound, etc.

Modeling each season separately, total season xG will be very close to actual goals. This also grades goaltenders on a curve against other goaltenders each season. If you are stopping 92% of shots but others are stopping 93% (assuming the same quality of shots), then you are on average costing your team a goal every 100 shots. Over a 2,100-shot season workload that is about 21 extra goals against – roughly 7 points in the standings, assuming an extra 3 goals against cost a team 1 point. Using xG to measure goaltending performance makes sense because it puts each goalie on equal footing as far as what is expected, based on the information that is available.

We can normalize the number of goals prevented by the number of shots against to create a metric, Quality Rules Everything Around Me (QREAM): (Expected Goals – Actual Goals) per 100 shots. Splitting each goalie season into random halves allows us to look at the correlation between the two halves. A metric that captured 100% skill would have a correlation of 1: if a goaltender prevented 1 goal every 100 shots, we would expect to see that hold up in each random split. A completely useless metric would have an intra-season correlation of 0; picking numbers out of a hat would re-create that result. With that frame of reference, intra-season correlations for QREAM are about 0.4, compared to about 0.3 for raw save percentage. Pucks bounce, so we would never expect to see a correlation of 1; this lift is considered useful and significant.[2]
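
A sketch of that split-half test in R, assuming a per-shot data frame `shots` with goalie, season, the modeled xG value, and the goal outcome (names are placeholders):

```r
library(dplyr)
library(tidyr)

# Split each goalie season into two random halves and compute QREAM
# (expected goals minus actual goals, per 100 shots) in each half.
set.seed(7)
halves <- shots %>%
  group_by(goalie, season) %>%
  mutate(half = sample(c(1, 2), n(), replace = TRUE)) %>%
  group_by(goalie, season, half) %>%
  summarise(qream = 100 * (sum(xg) - sum(goal)) / n(), .groups = "drop") %>%
  pivot_wider(names_from = half, values_from = qream, names_prefix = "half")

cor(halves$half1, halves$half2, use = "complete.obs")  # ~0.4 for QREAM
```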

[Figure: intra-season correlations – Goalies doing the splits]

Crudely, each goal prevented is worth about 1/3 of a point in the standings. Estimating how many goals a goalie prevents compared to average then lets us compute how many points a goalie might create for or cost their team. However, a more sophisticated analysis might compare the goal support the goalie receives to the expected goals they face (a bucketed version of that analysis can be found here). Using a win probability model, the impact the goalie had on winning or losing can be framed as actual wins versus expected.

Uncertainty

Expected goals are also important because they begin to frame the uncertainty that goes along with goals, chance, and performance. What does the probability of a goal represent? Think of an expected goal as a coin weighted to represent the chance that shot is a goal. Historically, a shot from the blueline might end up a goal only 5% of the time. After 100 shots (or coin flips), will there be exactly 5 goals? Maybe, but maybe not. The same goes for a rebound in tight to the net with a goal probability of 50%. After 10 shots, we might not see 5 goals scored, like ‘expected.’ 5 goals is the most likely outcome, but anywhere from 0 to 10 is possible on only 10 shots (or coin flips).

We can see how actual goals and expected goals might deviate in small sample sizes, from game to game and even season to season. Luckily, we can use programs like R, Python, or Excel to simulate coin flips or expected goals. A goalie might face 1,000 shots in a season, giving up 90 goals. With historical data, each of those shots can be assigned a probability of being a goal. If the average probability of a goal is 10%, we expect the goalie to give up 100 goals. But using xG, there are other possible outcomes. Simulating 1 season based on expected goals might result in 105 goals against. Another simulation might be 88 goals against. We can simulate these same shots 1,000 or 10,000 times to get a distribution of outcomes based on expected goals and compare it to the actual goals.
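
In R the simulation is a few lines. Here `xg` is assumed to be the vector of per-shot goal probabilities for the ~1,000 shots this goalie actually faced:

```r
# Re-play the goalie's season 10,000 times, flipping each shot's weighted coin.
set.seed(29)
sim_goals <- replicate(10000, sum(rbinom(length(xg), size = 1, prob = xg)))

mean(sim_goals)                        # centers near sum(xg), ~100 here
quantile(sim_goals, c(0.025, 0.975))   # the spread random chance alone allows
mean(sim_goals <= 90)                  # share of simulated seasons where an
                                       # average goalie allows 90 or fewer
```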

In our example, the goalie possibly prevented 10 goals on 1,000 shots (100 xGA – 90 actual GA). But they also may have prevented 20 or prevented 0. With expected goals and simulations, we can begin to visualize this uncertainty. As the sample size increases, the uncertainty decreases but never evaporates. Goaltending is a simple position, but the range of outcomes, particularly in small samples, can vary due to random chance regardless of performance. Results can vary due to performance (of the goalie, teammates, or opposition) as well, and since we only have one season that actually exists, separating the two is painful. Embracing the variance is helpful and expected goals help create that framework.

It is important to acknowledge that results do not necessarily reflect talent, nor future or past results, and to incorporate that uncertainty into how we think about measuring performance. Expected goal models and simulations can help.

[Figure: simulated seasons – Hackey statistics]

Bayesian Analysis

Luckily, Bayesian analysis gives us a way to weight uncertainty and evidence. First, we set a prior – a probability distribution of expected outcomes. Brian MacDonald used mean Even Strength Save Percentage as a prior, the distribution of ESSV% of NHL goalies. We can do the same thing with Expected Save Percentage ((shots – xG) / shots), creating a unique prior distribution of outcomes for each goalie season depending on the quality of shots faced and the sample size we’d like to see. Once the prior is set, evidence (saves, in our case) is layered on to the prior, creating a posterior outcome.

Imagine a goalie facing 100 shots to start their career and, remarkably, making 100 saves. They face 8 total xG against, so we can set the prior Expected Save% as a distribution centered around 92%. The current evidence at this point is 100 saves on 100 shots, and Bayesian analysis will combine this information to create a posterior distribution.

Goaltending is a binary job (save/goal), so we can use a beta distribution to create a distribution of the goaltender’s expected (prior) and actual (evidence) save percentage between 0 and 1, much like a baseball player’s batting average will fall between 0 and 1. We also have to set the strength of the prior – how robust the prior is to the new evidence coming in (the shots and saves of the goalie in question). A weak prior would concede to evidence quickly; a hot streak to start a season or career might lead the model to think this goalie is a Hart candidate or future Hall-of-Famer! A strong prior would assume every goalie is average and require prolonged over- or under-achieving to convince the model otherwise. Possibly fair, but not revealing any useful information until it has been common knowledge for a while.
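
A beta-binomial sketch of this example, using the 92% prior from above and the 1,000-shot prior strength discussed below:

```r
# Beta prior centered on the Expected Save% of 92%, worth 1,000 shots
prior_strength <- 1000
alpha0 <- 0.92 * prior_strength          # "prior saves"
beta0  <- 0.08 * prior_strength          # "prior goals"

# Evidence: a perfect 100 saves on 100 shots
saves <- 100; shots_faced <- 100
alpha1 <- alpha0 + saves
beta1  <- beta0 + (shots_faced - saves)

alpha1 / (alpha1 + beta1)                # posterior mean save%, ~0.927
qbeta(c(0.025, 0.975), alpha1, beta1)    # 95% credible interval
```

Even a perfect 100-shot start only nudges the posterior mean from 92% to about 92.7% against a prior this strong.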

[Figure: Bayesian goalie – Priors plus Evidence]

More research is required, but I have set the default prior strength at the equivalent of 1,000 shots. Teams give up about 2,500 shots a season, so a 1A/1B-type goalie would exceed this threshold in most seasons. In my goalie compare app, the prior can be adjusted up or down as a matter of taste or curiosity. Future research would investigate what prior shot count minimizes season-to-season performance variability.

Every time a reported result activates your small-sample-size spidey senses, remember that Bayesian analysis is thoroughly unimpressed, dutifully collecting evidence, one shot at a time.

Conclusion

Perfect is often the enemy of the good. Expected goal models fail to completely capture the complex networks and inputs that create goals, but they do improve on current results-based metrics such as shot attempts by a considerable amount. Their outputs can be conceptualized by fans and players alike; everybody understands a breakaway has a better chance of being a goal than a point shot.

The math behind the model is less accessible, but people, particularly the young, are becoming more comfortable with prediction algorithms in their daily lives, from Spotify generating playlists to Amazon recommender systems. Coaches, players, and fans on some level understand that not all grade-A chances will result in a goal. So while out-chancing the other team in the short term is no guarantee of victory, doing it over the long term is a recipe for success. Removing some of the noise that goals contain, and the conceptual flaws of raw shot attempts, helps smooth the short-term disconnect between performance and results.

My current case study using expected goals is measuring goaltending performance, since it’s the simplest position – we don’t need to try to split credit between linemates. Looking at xGA – GA per shot captures more goalie-specific skill than save percentage and lends itself to outlining the uncertainty those results contain. Expected goals also allow us to create an informed prior that can be used in a Bayesian hierarchical model. This can quantify the interaction between evidence, sample size, and uncertainty.

Further research topics include predicting goalie season performance using expected goals and posterior predictive distributions.

____________________________________________

[1] Without private data or comprehensive tracking-data technology, analysts are only able to observe the outcomes of plays – most importantly goals and shots – but not really what created those results. A great analogy came from football (soccer) analyst Marek Kwiatkowski:

Almost the entire conceptual arsenal that we use today to describe and study football consists of on-the-ball event types, that is to say it maps directly to raw data. We speak of “tackles” and “aerial duels” and “big chances” without pausing to consider whether they are the appropriate unit of analysis. I believe that they are not. That is not to say that the events are not real; but they are merely side effects of a complex and fluid process that is football, and in isolation carry little information about its true nature. To focus on them then is to watch the train passing by looking at the sparks it sets off on the rails.

Armed with only ‘outcome data’ rather than comprehensive ‘input data’, most models will be best served by logistic regression. Logistic regression often bests complex models, generalizing better than many machine learning procedures. However, it will become important to lean on machine learning models as reliable ‘input’ data becomes available, in order to capture the deep networks of effects that lead to goal creation and prevention. Right now we only capture snapshots, so logistic regression should perform fine in most cases.

[2] Most people readily acknowledge some share of results in hockey is luck. Is the number closer to 60% (given the repeatable skill in my model is about 40%), or can it be reduced to 0% because my model is quite weak? The current model can be improved with more diligent feature generation and by adding key features like pre-shot puck movement and some sort of traffic metric. This is interesting because traditionally logistic regression models see diminishing marginal returns from adding more variables, so while I am missing 2 big factors in predicting goals, the intra-seasonal correlation might only go from 40% to 50%. However, deep learning networks that can capture deeper interactions between variables might see an outsized benefit from these additional ‘input’ variables (possibly capturing deeper networks of effects), pushing the correlation and skill capture much higher. I have not attempted to predict goals using deep learning methods to date.

Hockey Analytics, Strategy, & Game Theory

Strategic Snapshot: Isolating QREAM

I’ve recently attempted to measure goaltending performance by looking at the number of expected goals a goaltender faces compared to the goals they actually allow. Expected goals are ‘probabilistic goals’ based on what we have data for (which isn’t everything): if that shot were taken 1,000 times on the average goalie that made the NHL, how often would it be a goal? Looking at one shot there is variance – the puck either goes in or doesn’t – but over the course of a season, summing the expected goals gives a better idea of how the goaltender is performing, because we can adjust for the quality of shots they face, helping isolate their ‘skill’ in making saves. The metric, which I’ll refer to as QREAM (Quality Rules Everything Around Me), reflects goaltender puck-saving skill more than raw save percentage, showing more stability within goalie seasons.

[Figure: intra-season correlations – Goalies doing the splits]

Good stuff. We can then use QREAM to break down goalie performance by situations, tactical or circumstantial, to reveal actionable trends. Is goalie A better on shots from the left side or right side? Left shooters or right shooters? Wrist shots, deflections, etc? Powerplay? Powerplay, left or right side? etc. We can even visualise it, and create a unique descriptive look at how each goaltender or team performed.

This is a great start. The next step in confirming the validity of a statistic is looking at how it holds up over time. Is goalie B consistently weak on powerplay shots from the left side? Is that something that can be exploited by looking at the data? Predictivity is important to validate a metric, showing that it can be acted upon and some sort of result can be expected. Unfortunately, year-over-year trends by goalie don’t hold up in an actionable way. There might be a few persistent trends below, but nothing systemic that would be more prevalent than just luck. Why?

Game Theory (time for some)

In the QREAM example, predictivity is elusive because hockey is not static, and all players and coaches in question are optimizers trying their best to generate or prevent goals at any time. Both teams are constantly making adjustments, some strategic and some unconscious. As a data scientist, when I analyse 750,000 shots over 10 seasons, I only see what happened, not what didn’t happen. If, in one season, goalie A underperformed the average on shots from left shooters from the left side of the ice, that would show up in the data, but it would be noticed by players and coaches quicker and in a much more meaningful and actionable way (maybe it was the result of hand placement, lack of squareness, cheating to the middle, defenders who let up cross-ice passes from right to left more often than expected, etc.). The goalie and defensive team would also pick up on these trends and understandably compensate, maybe even slightly over-compensate, which would open up other options for attempting to score, which the goalie would adjust to, and so on, until the game reaches some sort of multi-dimensional equilibrium (actual game theory). If a systemic trend did continue, there’s a good chance that that goalie would be out of the league. Either way, trying to capture a meaningful, actionable insight from the analysis is much like trying to capture lightning in a bottle. In both cases, finding a reliable pattern in a game where both sides are constantly adjusting and counter-adjusting is very difficult.

This isn’t to say the analysis can’t be improved. My expected goal model has weaknesses and will always have limitations due to data and user error. That said, I would expect the insights of even a perfect model to be arbitraged away. More shockingly (since I haven’t looked at this in-depth, at all), I would expect the recent trend of NBA teams fading the use of mid-range shots to reverse in time as more teams counter it with personnel and tactics; then a smart team could probably exploit that set-up by employing slightly more mid-range shots, and so on, until a new equilibrium is reached. See you all at Sloan 2020.

Data On Ice

The role of analytics is to provide a new lens to look at problems and make better-informed decisions. There are plenty of examples of applications at the hockey management level to support this; data analytics have aided draft strategy and roster composition. But bringing advanced analytics to on-ice strategy will likely continue to chase the adjustments players and coaches are constantly making already. Even macro-analysis can be difficult once the underlying inputs are considered.

An analyst might look at strategies to enter the offensive zone, where you can either forfeit control (dump it in) or attempt to maintain control (carry or pass it in). If you watched a sizable sample of games across all teams and a few different seasons, you would probably find that you were more likely to score a goal if you tried to pass or carry the puck into the offensive zone than if you dumped it. Actionable insight! However, none of these plays occur in a vacuum – a true A/B test would have the offensive players randomise between dumping it in and carrying it. But the offensive player doesn’t randomise; they are making what they believe to be the right play at that time considering things like offensive support, defensive pressure, and the shift length of them and their teammates. In general, when they dump the puck, they are probably trying to make a poor position slightly less bad and get off the ice. A randomised attempted carry-in might be stopped and result in a transition play against. So the insight of not dumping the puck should be changed to ‘have the 5-player unit be in a position to carry the puck into the offensive zone,’ which encompasses more than a dump/carry strategy. In that case, this isn’t really an actionable, data-driven strategy, rather an observation. A player who dumps the puck more often likely does so because they struggle to generate speed and possession from the defensive zone, something that would probably be reflected in other macro-stats (i.e. the share of shots or goals they are on the ice for). The real insight is that the player probably has some deficiencies in their game. And this is where the underlying complexity of hockey begins to grate at macro-measures of hockey analysis: there are many little games within the game, player-level optimisation, and second-order effects that make capturing true actionable, data-driven insight difficult.[1]

It can be done, though in a round-about way. Like many, I support the idea of using (more specifically, testing) 4 or even 5 forwards on the powerplay. However, it’s important to remember that analysis showing a 4F powerplay to be more effective is more a representation of the personnel of the teams that elect to use that strategy than of the effectiveness of that particular strategy in a vacuum. And teams will work to counter it by maximising their chance of getting the puck and attacking the lone forward playing defence by increasing aggressiveness, which may be countered by a second defenseman, and so forth.

Game Theory (revisited & evolved)

Where analytics looks to build strategic insights on a foundation of shifting sand, there’s an equally interesting force at work – evolutionary game theory. Let’s go back to the example of the number of forwards employed on the powerplay: teams can use 3, 4, or 5 forwards. In game theory, we look for a dominant strategy first. While self-selected 4-forward powerplays are more effective, a team shouldn’t necessarily employ one when up by 2 goals in the 3rd period, since a marginal goal for is worth less than a marginal goal against. And because 4-forward powerplays, intuitively, are more likely to concede chances and goals against than 3F-2D, it’s not a dominant strategy. Neither are 3F-2D or 5F-0D.

Thought experiment. Imagine in the first season every team employed 3F-2D. In season 2, one team employs a 4F-1D powerplay 70% of the time; they would have some marginal success because the rest of the league is configured to oppose 3F-2D, and in season 3 the strategy replicates – more teams run 4F-1D, in line with evolutionary game theory. Eventually, say in season 10, more teams might run a 4F-1D powerplay than 3F-2D, and some even 5F-0D. However, penalty kills will also adjust to counter-balance, and the game will continue. There may or may not be an evolutionarily stable strategy; teams may be best served mixing strategies like you would playing rock-paper-scissors.[2] I imagine the proper strategy would depend on score state (primarily) and respective personnel.
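
Below is a toy replicator-dynamics version of that thought experiment in R. The payoff matrix is entirely invented for illustration – it just encodes a cycle where 4F beats 3F, 5F beats 4F, and 3F beats 5F:

```r
# Replicator dynamics for the powerplay thought experiment.
# payoff[i, j]: made-up payoff to strategy i against a league playing j.
payoff <- matrix(c(0.50, 0.48, 0.55,   # 3F-2D
                   0.55, 0.50, 0.45,   # 4F-1D
                   0.45, 0.55, 0.50),  # 5F-0D
                 nrow = 3, byrow = TRUE)

shares <- c(0.90, 0.08, 0.02)  # nearly everyone starts at 3F-2D
for (season in 1:50) {
  fitness <- as.vector(payoff %*% shares)             # each strategy vs. the league mix
  shares  <- shares * fitness / sum(shares * fitness) # winners replicate
}
round(shares, 3)  # with cyclical payoffs the mix keeps rotating rather than settling
```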

You can imagine a similar game representing the role of the first forward in on the forecheck. They can go for the puck or hit the defenseman – always going for the puck would let the defenseman get too comfortable, letting them make more effective plays, while always hitting would take the forward out of the play too often, conceding too much ice after a simple pass. The optimal strategy is likely randomising – say, hitting 20% of the time – factoring in gap, score, personnel, etc.

A More Robust (& Strategic) Approach

Even if it seems a purely analytics-driven strategy is difficult to conceive, there is an opportunity to take advantage of this knowledge. Time is a more robust test of on-ice strategies than p-values. Good strategies will survive and replicate; poor ones will (eventually and painfully) die off. Innovative ideas can be sourced from anywhere and employed in minor-pro affiliates, where the strategy’s effects can be quantified in a more controlled environment. Each organisation has hundreds of games a year in its control and can observe many more. Understanding that building an analytical case for a strategy may be difficult (coaches are normally sceptical of data, maybe intuitively for the reasons above), analysts can sell the merit of experimenting and measuring, giving the coach major ownership of what is tested. After all, it pays to be first in a dynamic game such as hockey. Bobby Orr changed the way blueliners played. New blocking tactics (and equipment) led to improved goaltending. Hall-of-Fame forward Sergei Fedorov was a terrific defenseman on some of the best teams of the modern era.[3] Teams will benefit from being the first to employ (good) strategies that other teams don’t see consistently and don’t devote considerable time preparing for.

The game can also improve using this framework. If leagues want to encourage goal scoring, they should encourage new tactics by incentivising goals. I would argue that the best and most sustainable way to increase goal scoring would be to award AHL teams 3 points for scoring 5 goals in a win. This would encourage offensive innovation and heuristics that would eventually filter up to the NHL level. Smaller equipment or bigger nets are susceptible to second-order effects. For example, good teams may slow down the game when leading (since the value of a marginal goal for is now worth less than a marginal goal against), making the on-ice product even less exciting. Incentives and innovation work better than micro-managing.

In Sum

The primary role of analytics in sport and business is to deliver actionable insights using the tools at their disposal, whether that is statistics, math, logic, or whatever. With current data, it is easier for analysts to observe results than to formulate superior on-ice strategies. Instead of struggling to capture the effect of strategy in biased data, they should use this to their advantage and look at these opportunities through the prism of game theory: test and measure, and let the best strategies bubble to the top. Even the best analysis might fail to pick up on some second-order effect, but thousands of shifts are less likely to be fooled. The data is too limited in many ways to paint the complete picture. A great analogy came from football (soccer) analyst Marek Kwiatkowski:

Almost the entire conceptual arsenal that we use today to describe and study football consists of on-the-ball event types, that is to say it maps directly to raw data. We speak of “tackles” and “aerial duels” and “big chances” without pausing to consider whether they are the appropriate unit of analysis. I believe that they are not. That is not to say that the events are not real; but they are merely side effects of a complex and fluid process that is football, and in isolation carry little information about its true nature. To focus on them then is to watch the train passing by looking at the sparks it sets off on the rails.

Hopefully, there will soon be a time when every event is recorded, and in-depth analysis can capture everything necessary to isolate things like specific goalie weaknesses, optimal powerplay strategy, or best practices on the forecheck. Until then, there are underlying forces at work that will escape detection. But it’s not all bad news: the best strategy is to innovate and measure. This may not be groundbreaking to the many innovative hockey coaches out there, but it can help focus the smart analyst, delivering something actionable.

____________________________________________

 

[1] Is hockey a simple or complex system? When I think about hockey and how best to measure it, this is a troubling question I keep coming back to. A simple system has a modest number of interacting components with clear relationships to other components: say, when you are trailing in a game, you are more likely to out-shoot the other team than you would otherwise. A complex system has a large number of interacting pieces that may combine to make these relationships non-linear and difficult to model or quantify. Say, when you are trailing, the pressure you generate will be a function of the time left in the game, respective coaching strategies, respective talent gaps, whether the home team is line-matching (presumably in their favor), in-game injuries or penalties (permanent or temporary), whether one or both teams are playing on short rest, the cumulative impact of physical play against each team, ice conditions, and so on.

Fortunately, statistics is such a powerful tool because a lot of these micro-variables even out over the course of a season, or possibly a game, to become net neutral. Students learning about gravitational force don’t need to worry about molecular forces within an object; the system (e.g. a block sliding on an inclined plane) can be separated from the complex and simplified. Making the right simplifying assumptions, we can do the same in hockey, but we do so at the risk of losing important information. More convincingly, we can also attempt to build out the entire state-space (e.g. different combinations of players on the ice) and use machine learning to find patterns linking the features to winning hockey games. This is likely being leveraged internally by teams (who can generate additional data) and/or professional gamblers. However, with machine learning techniques applied, there appeared to be a theoretical upper bound on single-game prediction of only about 62%. The rest, presumably, is luck. Even if this upper bound softens with more data, such as biometrics and player tracking, prediction in hockey will still be difficult.

It seems to me that hockey is suspended somewhere between the simple and the complex. On the surface, there’s a veneer of simplicity and familiarity, but perhaps there’s much going on underneath the surface that is important but can’t be quantified properly. On a scale from simple to complex, I think hockey is closer to complex than simple, but not as complex as the stock market, for example, where upside and downside are theoretically unlimited and not bound by the rules of a game or a set amount of time. A hockey game may be 60 on a scale of 0 (simple) to 100 (complex).

[2] Spoiler alert: if you perform the same thought experiment with rock-paper-scissors you arrive at the right answer – randomise between all 3, each 1/3 of the time – unless you are a master of psychology and can read those around you. This obviously has a closed-form solution, but I like visuals better:
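
For the record, the closed form falls out of indifference: with win = 1, loss = −1, tie = 0, a mix $(p_R, p_P, p_S)$ can only be an equilibrium if every pure reply earns the same expected payoff,

$$u(\text{rock}) = p_S - p_P, \qquad u(\text{paper}) = p_R - p_S, \qquad u(\text{scissors}) = p_P - p_R,$$

and setting these equal under $p_R + p_P + p_S = 1$ forces $p_R = p_P = p_S = 1/3$.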

[3] This likely speaks more to personnel than tactics; Fedorov could have been peerless. However, I think of football, where position changes are more common, i.e. a forgettable college receiver at Stanford switched to defence halfway through his college career and became a top player in the NFL, Richard Sherman. Julian Edelman was a college quarterback and is now a top receiver on the Super Bowl champions. Test and measure.

The Path to WAR*

*Wins-Above-Replacement-Like Algorithm-Based Rating

Dream On

The single-metric dream has existed in hockey analytics for some time now. The most relevant metric, WAR or Wins Above Replacement, represents an individual player’s contribution to the success of their team by attempting to quantify the number of goals they add over a ‘replacement-level’ player. More widely known in baseball, WAR in hockey is much tougher to delineate, but it has been attempted, most notably at the excellent, but now defunct, war-on-ice.com. The pursuit of a single, comprehensive metric has been attempted by Ryder, Awad, Macdonald, Schuckers and Curro, and Gramacy, Taddy, and Jensen.

Their desires and efforts are justified: a single metric, when properly used, can inform analyses of salaries, trades, roster composition, draft strategy, etc. Though it should be noted that WAR, or any single-number rating, is not a magic elixir, since it can fail to pick up important differences in skill sets or roles, particularly in hockey. There is also a risk that it is used as a crutch, which may be the case with any metric.

Targeting the Head

Prior explorations into answering the question have been detailed and involved, and rightfully so, aggregating and adjusting an incredible amount of data to create a single player-season value.[1] However, I will attempt to reverse-engineer a single metric based on in-season data from a project of mine.

For the 2015-16 season, the CrowdScout project aggregated the opinions of individual users. The platform uses the Elo formula, a memoryless algorithm that constantly adjusts each player’s score with new information – in this case, a user’s opinion, hopefully guided by the relevant on-ice metrics (provided to the user, see below). Hopefully, the validity of this project is closer to Superforecasting than the NHL awards, and it should be: the ‘best’ users or scouts are given increasingly more influence over the ratings, while the worst are marginalized.[2]
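
For reference, the Elo update at the heart of such a platform looks like the sketch below; the K-factor of 20 and the 400-point scale are the classic chess defaults, not necessarily what CrowdScout runs:

```r
# One Elo update: the 'winner' is the player the scout preferred.
elo_update <- function(r_winner, r_loser, k = 20) {
  expected <- 1 / (1 + 10 ^ ((r_loser - r_winner) / 400))  # P(preferred) implied by ratings
  delta <- k * (1 - expected)                               # small when the pick was expected
  c(winner = r_winner + delta, loser = r_loser - delta)
}

elo_update(1600, 1500)  # picking the favorite moves ratings only modestly
```

Memorylessness is the point: the current rating is the whole state, and each new judgment simply nudges it.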

The CrowdScout platform ran throughout the season, with over 100 users making over 32,000 judgments on players, creating a population of player ratings ranging from Sidney Crosby to Tanner Glass. The system has largely worked as intended, but it needs to continue to acquire an active, smart, and diverse user base – this will always be the case when trying to harness the ‘wisdom of the crowd.’ Hopefully, as more users sign up and smarter algorithms emphasize the opinions of the best, the Elo rating will come closer to answering the question posed to scouts as they are prompted to rank two players: if the season started today, which player would you choose if the goal were to win a championship?

[Figure: stamkosvkopitar – Let’s put our heads together]

Each player’s Elo is adjusted by the range of ratings within the population. The result, ranging from 0 to 100, generally passes the sniff test, at times missing on players due to too few or poor ratings. However, this player-level rating provides something more interesting – a target variable to build an empirical model from. Whereas WAR, in theory, is a cumulative metric representing incremental wins added by a player, the CrowdScout Score, in theory, represents a player’s value to a team pursuing a championship. Both are desirable outcomes, and neither will work perfectly in practice, but this is hockey analytics: we can’t let perfect get in the way of good.

Why is this analysis useful or interesting?

  1. Improve the CrowdScout Score – a predicted CrowdScout Score based on on-ice data could help identify misvalued players and reinforce properly valued players. In sum, a proper model would be superior to the rankings sourced from the inaugural season with a small group of scouts.
  2. Validate the CrowdScout Score – Is there a proper relationship between the CrowdScout Score and on-ice metrics? How large are the residuals between the predicted and actual scores? Can the CrowdScout Score or predicted score be reliably used in other advanced analyses? A properly constructed model that reveals a solid relationship between crowdsourced ratings and on-ice metrics would help validate the project. Can we go back in time to create a predicted score for past player seasons?
  3. Evaluate Scouts – The ability to reliably predict the CrowdScout Score from on-ice metrics can be used to measure the accuracy of each scout’s ratings in real-time. The current algorithm can only infer correctness in the future – time needs to pass to determine whether the scout has chosen a player preferred by the rest of the crowd. This could be the most powerful result, constantly increasing the influence of users whose ratings agree with the on-ice results. This, in turn, would increase the accuracy of the CrowdScout Score, leading to a stronger model and continuing a virtuous circle.
  4. Fun – Every sports fan likes a good top 10 list or something you can argue over.

Reverse Engineering the Crowd

We are lucky enough to have a shortcut to a desirable target variable: the end-of-season CrowdScout Score for each NHL player. We can then merge on over 100 player-level micro-stats and rate metrics for the 2015-16 season, courtesy of puckalytics.com. There are 539 skaters with at least 50 CrowdScout games and complete metrics. This dataset can be used to fit a model explaining the CrowdScout Score with on-ice data, whose output then gives each player a predicted CrowdScout Score from the same player-level on-ice data. Where the crowd may have failed to accurately gauge a player’s contribution to winning, the model can use additional information to create a better prediction.

The strength of any model is proper feature selection and the prevention of overfitting. Hell, with over 100 variables and over 500 players, you could explain the number of playoff-beard follicles with spurious statistical significance. To prevent this, I performed a couple of operations using the caret package in R.

  1. Find Linear Combinations of Variables – using the findLinearCombos function in caret, variables that were mathematically identical to a linear combination of another set of variables were dropped. For example, you don’t need to include goals, assists, and points, since points are simply goals plus assists.
  2. Recursive Feature Elimination – using the rfe function in caret and a 10-fold cross-validation control (10 subsets of data were considered when making each decision, and all decisions were based on the model’s performance on unseen, or holdout, data), the remaining 80-some skater variables were ranked from most powerful to least powerful. The RFE plot below shows the model’s strength peaking at 46 features, but most of the gains are achieved by about the 8 to 11 most important variables.
  3. Correlation Matrix – create a matrix to identify and remove features that are highly correlated with each other. The final model had the 11 variables listed below.

[Figures: RFE plot and correlation matrix (RFEcorr.matrix)]

The remaining variables were placed into a Random Forest model targeting each skater’s CrowdScout Score. Random Forest is a popular ensemble model[3]: it randomly subsets variables and observations (random) and creates many decision trees to explain the target variable (forest). Each observation, or player, is assigned a predicted score based on the aggregate results of the many decision trees.

Using the caret package in R, I created a Random Forest model controlled by 10-fold cross-validation – not necessarily to prevent overfitting, which is not a large concern with Random Forests, but to cycle through all the data and create predicted scores for each player. I gave the model the flexibility to try 5 different tuning combinations, allowing it to test the ideal number of variables randomly sampled at each split and the number of trees to use. The result was a very good fitting model, explaining over 95% of the CrowdScout Score out of sample. Note that is the variation explained; the variance explained was closer to 70%.
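
Stitched together, those steps look roughly like the following; `skaters` (one row per player, with the CrowdScout `score` and the numeric on-ice metrics) and the `sizes` grid are stand-ins rather than the exact script:

```r
library(caret)

X <- skaters[, setdiff(names(skaters), "score")]  # ~100 numeric on-ice metrics

# 1. Drop exact linear combinations (e.g. goals + assists = points)
combos <- findLinearCombos(as.matrix(X))
if (!is.null(combos$remove)) X <- X[, -combos$remove]

# 2. Recursive feature elimination under 10-fold CV
rfe_fit <- rfe(X, skaters$score, sizes = c(4, 8, 16, 32, 46),
               rfeControl = rfeControl(functions = rfFuncs, method = "cv", number = 10))

# 3. Drop survivors that are highly correlated with each other
keep <- predictors(rfe_fit)
too_cor <- findCorrelation(cor(X[, keep]), cutoff = 0.8)
if (length(too_cor) > 0) keep <- keep[-too_cor]

# Random Forest on the final variables, 10-fold CV, 5 tuning candidates
rf_fit <- train(x = X[, keep], y = skaters$score,
                method = "rf", tuneLength = 5,
                trControl = trainControl(method = "cv", number = 10))
```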

[Figure: RF.players]

Note the slope of the best-fit relationship between actual and predicted scores is a little less than 1. The model doesn’t want to credit the best players too much for their on-ice metrics, or penalize the worst players too much, but otherwise does a very good job.

[Figure: RF.VarImp]

Capped Flexibility

Let’s return to the original intent of the analysis. We can predict about 95% of the CrowdScout Score using vetted on-ice metrics. This suggests the score is reliable, but it doesn’t necessarily mean the CrowdScout Score is right. In fact, we can assume that the actual score is often wrong. How does a simpler model do? Using the same on-ice metrics in a Generalized Linear Model (GLM) performs fairly well out of sample, explaining about 70% of the variation. The larger error terms of the GLM represent larger deviations of the predicted score from the actual. While these larger deviations result in a poorer model fit, they may also contain some truth. The worse-fitting linear model has more flexibility to be wrong, perhaps allowing a more accurate prediction.

[Figure: GLM.players]

[Figure: GLM.VarImp]

[Figure: coefficients]

Note the potential interaction between TOI.GM and position

Residual Compare

How do the player-level residuals between the two models compare? They are largely the same directionally, but the GLM residuals are about double in magnitude. So, for example, the Random Forest model predicts Sean Monahan’s CrowdScout Score to be 64 instead of his current 60, giving a residual of +4 (residual = predicted – actual). Not to be outdone, the Generalized Linear Model doubles that residual, predicting a 68 score (+8 residual). It appears that both models generally agree, with the GLM being more likely to make a bold correction to the actual score.

[Figure: Residuals-Compares]

Conclusion

The development of an accurate single comprehensive metric to measure player impact will be an iterative process. However, it seems the framework exists to fuse human input and on-ice performance into something that can lend itself to more complex analysis. Our target variable was not perfect, but it provided a solid baseline for this analysis and will be improved. To recap the original intent of the analysis:

  1. Both models generally agree when a player is being overrated or underrated by the crowd, though by different magnitudes. In either case, the predicted score is directionally likely to be more accurate than the current score. This makes sense since we have more information (on-ice data). If it wasn’t obvious, it appears on-ice metrics can help improve the CrowdScout Score.
  2. That is fortunate, because our models fail to explain between 5% and 30% of the score and can vary from true ability. Some of the error will be justified, but often it will signal that the CrowdScout Score needs to adjust. Conversely, a beta project with relatively few users was able to create a comprehensive metric that can be mostly engineered and validated using on-ice metrics.
  3. Being able to calculate a predicted CrowdScout Score more accurate than the actual score gives the platform an enhanced ability to evaluate scouting performance in real-time. This will strengthen the virtuous circle of giving the best scouts more influence over Elo ratings, which will help create a better prediction model.
  4. Your opinion will now be held up against people, models, and your own human biases. Fun.

______________________________________________________

Huge thanks to asmean for contributing to this study, specifically advising on machine learning methods.

[1] The Wins Above Replacement problem is not unlike the attribution problem my Data Science marketing colleagues deal with. We know there was a positive event (a win or conversion), but how do we attribute that event to the inputs that contributed to it, be they hockey players or marketing channels? It’s definitely a problem I would love to circle back to.

[2] What determines the ‘best’ scout? Activity is one component, but picking players that continue to ascend is another. I actually have plans to make this algorithm ‘smarter’; a proper explanation is long overdue on my end.

[3] The CrowdScout platform and ensemble models have similar philosophies – they synthesize the results of models or opinions of users into a single score in order to improve their accuracy.

Advanced Goaltending Metrics

Preamble: The following is a paper I wrote while in college about 6 years ago. It is very theoretical, without understanding the realities of data quality in the real world. However, it still reflects my general attitude toward how goaltending performance should be measured, manifesting itself in my current Expected Goals model.

 

How new metrics concerning hockey’s most important position can offer critical insights into goaltender performance, development, and value.

 

Introduction

During the last 20 years, the goaltending position has changed more than any other position in hockey. Advances in equipment and training have raised the benchmark for expected goaltender performance. Teams promptly began investing in the position in the mid-90’s as a new breed of goaltender found success in the NHL. From 1994-2006 an average of almost 3 goaltenders were selected in the 1st round. Of these 37 highly touted goaltenders, none had won a Vezina trophy as of 2011. With this surprising lack of success, teams began to avoid using high draft picks on goaltenders—from 2007-2011 less than 1 goaltender was drafted in the 1st round annually.

Teams will continue to invest less in the goaltending position for a number of reasons. First, it is a matter of economics – the supply of good goaltenders has increased, decreasing their value. Initially, the demand for goaltenders drove their stock up, but teams eventually realized that they struggle to correctly value goaltending prospects. Subsequently, many of the league’s most successful goaltenders during this period were late-round picks. Outside of the legendary Martin Brodeur, the last 3 Vezina trophy winners were drafted in the 5th, 5th, and 9th rounds. In fact, in the last decade the only goaltenders to make the NHL 1st or 2nd All-Star teams that were drafted in the 1st round were Roberto Luongo and, of course, Martin Brodeur. Lastly, goaltenders appear to mature later, which means teams want to invest less in them, especially considering the new Collective Bargaining Agreement allows players to become free agents earlier. In summary, there are more good goaltenders, they are generally incorrectly valued, and teams are hesitant to develop goaltenders through the draft, preferring high-priced, experienced goaltenders.

These factors create a unique opportunity for teams that can properly value goaltenders. Goaltending is still a critical part of any team, but it can be acquired without giving up valuable assets. Goaltenders are generally selected later in the draft, exchanged for less than their intrinsic value via trade, or require no assets to acquire through free agency and from waivers. Solid NHL goaltending should ideally come at a friendly cap hit, since the premium for the highest-paid goaltenders is diminishing. Another trend is evident: some of the most successful teams are using strong backups throughout the regular season to complement their starters and gain a post-season advantage – since the 2005 lockout, the average Stanley Cup winning goalie has played fewer than 50 regular season games. Teams can no longer hope to find a franchise goaltender and maintain elite performance by locking them up to a rich, long-term contract without possessing the option of cheaper alternatives. The inability of teams to objectively understand the difference in performance between a goaltender with a $5 million salary and one with a $1.5 million salary is curious – goaltending is the only position in hockey whose performance can be measured in a largely empirical way, analogous to how baseball has managed to successfully employ advanced metrics to better measure player performance. Teams that could use goaltending metrics that more accurately evaluate goaltenders would have an enormous advantage to acquire and retain elite-level goaltending at an economical price.

The Estimated Save Percentage Index Model

The most common metric used to measure goaltending performance is save percentage: the number of saves as a percentage of total shots on goal. This metric is fundamentally flawed. To more accurately understand the quality of a particular goaltender, save percentage must be more sophisticated. This is possible because the goaltending position has two important prerequisites that make its performance the most quantifiable in hockey. First, the result is absolute: any shot on goal is either stopped or results in a goal. Second, the position is passive: the difficulty to the goaltender is generally dictated by the game in front of him, except for rebound control and puck handling, which are addressed later in the model.

The Expected Save Percentage (ES% Index) is a predictor of a goaltender’s success based on a number of inputs that assign an individual difficulty to each shot the goaltender faces. The inputs used in the model are shot location, puck visibility, and the rate at which the puck changes angle before or during the shot. The model assumes the goaltender has NHL-quality blocking width, positioning, lateral movement, and reflexes. Then, through an array of formulas, the model determines the expected save percentage for each shot on goal given the inputs. Once these expected save percentages are aggregated over a game, or over a season, we can see how the goaltender’s actual save percentage compares with the expected save percentage and compare them to their peers. The best goaltenders will consistently exceed the predicted save percentage whether they are facing 20 high-quality shots or 40 lower-quality shots. The Expected Save Percentage Index – the difference between real save percentage and expected save percentage – will measure the proficiency of the goaltender. The index can be tracked game-by-game and season-by-season. Since we are removing much of the fluctuation in team performance, we will have a much better idea of a goaltender’s consistency – an attribute critical to NHL success that can be lost in the potentially misleading statistics that are currently employed.
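
In the notation of my current expected goals work, the index is simply

$$\text{ES\% Index} = \underbrace{\frac{\text{saves}}{\text{shots}}}_{\text{actual SV\%}} - \underbrace{\left(1 - \frac{\sum_i p_i}{\text{shots}}\right)}_{\text{expected SV\%}},$$

where $p_i$ is the modeled goal probability of shot $i$. A positive index means the goaltender outperformed the difficulty of his workload.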

The inputs have been selected for simplicity and versatility. The most obvious is shot location – the closer the shot, the more likely it will be a goal. Assuming the average NHL shot is about 90 miles per hour and an NHL goaltender has a reaction time of .11 seconds, the expected save percentage increases greatly once the shot is from a distance greater than 15 feet. Inside of 15 feet, it assumes the goaltender can cover around 70%–80% of the net through size and positioning, and the distance model reflects this assumption. Location also allows the model to determine the shot angle and the net available to the shooter, two other factors that are automatically worked into the model. If applicable, visibility is a binary input determining whether the goaltender has a chance to see the puck. Again, since we are assuming NHL-quality goaltending, there is no ‘half-screen’ or ‘distraction.’ If the goaltender has an opportunity to see the puck, they are expected to gain a sightline to the puck. If they are completely screened, the expected save percentage is lowered as a function of the net available when the shot is taken – the better the angle, the more dangerous the screen. Lastly, the model factors in the rate of change in the angle of the puck when the shot is taken, if applicable. This way we can discount the expected save percentage if the shot is a one-timer, deke, passing play, or even a deflection, to better reflect the difficulty of the shot against. The model assumes NHL-quality lateral movement, edge control, and post-save recovery. At lower levels, where puck movement is slower, goaltenders will have to put up higher real save percentages to maintain an ES% Index that predicts NHL skills.

These inputs create an admittedly arbitrary, yet sophisticated, expected save percentage. The formulas can be retrofitted as more data is collected to move closer to a universally accurate expected save percentage—ideally the median ES% Index would be 0. The data can then be broken into three categories: shots with no screen or movement, shots that are screened, and shots where the puck is moving laterally as it is released. Breaking each shot into individual components will make it possible to track and eventually acquire objective data, replacing the placeholder formulas with actual NHL results. However, as it stands now, the expected save percentage is a benchmark, and it is the discrepancy between the realized and expected save percentage that will be the true measure of individual performance. Shot placement may seem like a troublesome omission from the model; however, since the model is built on aggregated averages, we can account for the complete distribution of shots put on net. NHL quality defense generally takes away time and space from shooters, limiting their ability to place the puck wherever they desire. Teams are not necessarily prone to giving up shots to a particular place in the net, but weaker teams are prone to giving up shots from more dangerous locations on the ice. In this way shot placement is indirectly built into the expected save percentage: on a shot from 10 feet out the shooter has a much greater chance of hitting a target, say high glove, than on a shot from 20 feet.

Win Contribution

The ES% Index measures goaltender performance in a vacuum, comparing actual performance to how we would expect him to perform in a given situation. However, the goaltender can influence the number of shots they face through rebound control and effective puck handling. Tracking these occurrences will allow the model to adjust the expected save percentage further. Easier than average shots that result in a rebound will lead to the successive shot not being factored into the model. This is analogous to saying the resulting shot should not have happened. Difficult shots that result in rebounds will take into consideration the difficulty of both shots when assigning expected save percentage to the potentially ‘preventable’ rebound shot. Whenever a goaltender handles the puck and it results in the puck directly clearing the zone, it will be assumed the goaltender prevented a shot a certain percentage of the time. By adding potential shots to, and removing preventable shots from, the actual shot total, we will have a good idea of how the goaltender is helping their team and influencing the game.

With the expected save percentage and expected shots against, we can manufacture an expected goals against for each game. We can compare expected goals against to the goal support the goaltender received and determine whether or not the goaltender should have won the game. If the game should have been won based on the actual goals for and expected goals against, but was not, this counts as a contributed loss. Conversely, if it was predicted the team should have lost, yet won, this counts as a contributed win. To remove the bias toward goaltenders on bad teams—who have more opportunity to register contributed wins—we can measure the number of potential contributed wins and losses and compare them to the actual contributed wins and losses.
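To make the win contribution logic explicit, a short R sketch (all totals hypothetical; tie games are ignored for simplicity):

# Expected goals against from the adjusted shot total, versus actual goal support
expected_sv  <- c(0.95, 0.90, 0.88, 0.97, 0.75, 0.92)  # hypothetical adjusted shots against
saved        <- c(1, 1, 1, 1, 0, 0)                    # 1 = save, 0 = goal
goal_support <- 2                                      # goals scored by the team

expected_goals_against <- sum(1 - expected_sv)   # 0.63 expected goals on these shots
actual_goals_against   <- sum(saved == 0)        # 2 goals against

should_have_won <- goal_support > expected_goals_against  # TRUE
actually_won    <- goal_support > actual_goals_against    # FALSE

contributed_loss <- should_have_won & !actually_won  # TRUE: a contributed loss
contributed_win  <- actually_won & !should_have_won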

How does this model predict future goaltending performance?

This analysis allows an NHL team to gain a concise, quantified measurement of goaltending performance across leagues and time. It will more accurately identify goaltending proficiency and consistency. It can be adjusted from league to league as the goaltender advances and will better predict future success as the database grows. The model automatically assumes each goaltender has NHL size, speed, and positioning, so if a goaltender can consistently perform better than their peers, they will likely continue to outperform them at higher levels. This can apply to a late round pick playing on a weak team in Europe or a college goaltender discredited for being on a strong defensive team. Since the ES% Index can be broken into components—stationary shots, screened shots, and moving shots—it will be easy to identify weaknesses that may be hidden by a specific team. For example, a goalie with poor lateral movement on a team that limits puck movement might perform well by traditional standards, but if the ES% Index on shots with puck movement is below average, chances are they will be exposed at the next level. There is a very real advantage to employing increasingly accurate goaltending metrics that other teams are not using to value goaltenders. The index also lends itself to in-depth analysis of goaltending prospects, opposition goaltenders, and even the performance of other players on the ice. While the ES% Index will likely have limitations, predicting the development and value of goaltenders has not improved during an era when the quality of goaltending has increased dramatically. Therefore, a more accurate metric will almost certainly improve the valuation of each goaltender and offer critical insights into their development.

Other Considerations

While advanced goaltending metrics can aid management decisions, they can also lend coaches a helpful perspective when preparing for games. The objective ES% Index will help explain some of the volatility in goaltender performance. Coaches do not always understand the subtleties of the position; their only concern lies in the proficiency of the goaltender in preventing goals—exactly the intent of the ES% Index. It can also be used as a pre-scout for opposing goaltenders. Situational success rates for each NHL goalie can be tracked through the season, offering a strategic advantage to the coaching staff and players. If an otherwise successful goaltender is performing below the norm on shots with puck movement, then this is a clear indication to move the puck before shooting. Ability can be judged based on data from an entire season rather than anecdotal observations. This is advantageous because the goaltending position is inconsistent by nature; one bad bounce or mental lapse can be the difference between a good game and a bad game. Watching a select few games of a goaltender makes it difficult to judge their true ability—no doubt part of the reason teams struggle to value goaltenders at the draft. The index can also complement scouting reports. If a scout sees a particular trend or weakness in a goaltender's game, there will be data available to verify or contradict the scout's claims.

Additionally, goaltender performance can influence the statistics of players at other positions. Both a defenseman playing in front of poor goaltending and a goal scorer who faced an unlikely sequence of superb goaltending are going to have their statistics skewed. Adjusting these statistics for goaltending performance will give management a clearer idea of why a certain player's statistics might be deviating from their expectations. For example, the model can be expanded to measure the difference between even-strength expected goals for and expected goals against for each player over the course of the game based on the data already being recorded. This type of analysis is separate from the ES% Index; however, having more accurate goaltending statistics would provide an organization another tool to properly evaluate players and put the absolute best product on the ice.

Conclusion

No statistical analysis can replace the comprehensive subjective evaluation performed by the most experienced hockey minds in the world. However, it can offer a fresh perspective and lend objective analysis to a position where contrarians can often be the most successful. The unorthodox goaltending styles of Tim Thomas and Dominik Hasek have remarkably won 8 of the last 17 Vezina trophies awarded. Not only were they drafted in the 9th and 10th rounds, respectively, they did not even become starting goaltenders until ages 32 and 29, despite their success outside of the NHL. Very few understood how they stopped the puck, but both men clearly prevented goals. It is my hope that employing more advanced goaltending metrics can remove the biases that exist and pinpoint goal prevention, the sole objective of a goaltender. Due to my extensive knowledge of the position as both a student and a coach, the model has been constructed to reflect the complex simplicity of the position—Where is the shot from? Can I see it? Can I reach my optimal position?—while deducing the existence of attributes that are critical to NHL success: size, speed, positioning, lateral movement, and consistency. For these reasons, the Expected Save Percentage Index and Win Contribution analysis manage to combine the qualitative and quantitative factors that are necessary to properly evaluate goaltenders, benefiting any team that employs these advanced metrics.

Goaltending—Game Theory, the Contrarian Position, and the Possibility of the Extreme

Preamble: The following is a paper I wrote while in college about 6 years ago. It takes a slightly different approach, with worse logic, than I employ now, likely reflecting my attitude at the time – a collegiate goaltender with the illusion of control (hence goals were likely unpredictable events, else I would have stopped them). I have softened on this thinking, but still think the recommendation holds: goaltenders can outperform the average by mixing strategies and adding an element of unpredictability to their game.

How goaltender strategy and understanding randomness in hockey can lend insight into the success of truly elite goaltenders.

Introduction

This paper outlines general strategies and philosophies behind goaltending, focusing on what makes great goaltenders great. Philosophy and goaltending make interesting partners—few athletic positions are continuously branded with a ‘style.’ Since such subjective labels are the norm for this position, I feel quite comfortable using the terms rather broadly in a philosophical analysis. I will use loose generalisations to formulate a big-picture view of the position—how it has evolved, the type of goaltender that has consistently risen above their peers during this evolution, and why. Using game theory and attempting to clearly label player strategies is, at times, clumsy. Addressing the impact of unquantifiable randomness in hockey does not provide much comfort either. However, the purpose is to encourage further thought on the subject, not to provide a numerical, concise answer. It is a question that deserves more thought, at both the professional (evaluation and scouting) and grass-root (development and training) level. The question: what makes a consistently great goaltender?

Game Theory—The Evolution of Goaltending Strategy

Passive ‘blocking’ tactics have become prevalent among goaltenders at all levels. The approach is simple and statistically successful. Like any strategy there are tradeoffs—the goaltender forfeits aggressiveness in order to force the shooter to make perfect shots to beat them. This ‘fated’ strategy exposes the goaltender to the extreme—most goals allowed are classified as ‘great plays’ or ‘lucky,’ certainly not the fault of the goaltender. However, there are other considerations. Shooters, no doubt, have adjusted their strategy based on this approach, further compromising passive goaltending. A disproportionate number of shooters will look to make ‘perfect’ shots—high and tight to the post against a blocking goaltender—despite the risk of missing the net entirely.

Historically, goaltenders did not have the luxury of light, protective equipment designed specifically to seal off any holes while in a butterfly position. Equipment lacking proper protection and effectiveness required goaltenders to spend the majority of the time on their feet while facing shots.

Player/Goaltender Interactions Then and Now

Game theory applications allow a crude analysis of the evolution of strategies between players and goaltenders. The numbers I use are arbitrary, however, they demonstrate an important strategic shift in goaltending tactics. First, let us assume that players have to decide whether to shoot high or low and always try to shoot for the posts. Simultaneously, goaltenders must choose to block or react.

In the age of primitive equipment, goaltenders were required to stand up most of the time to make saves. From here we can make three assumptions in this ‘game’ or ‘shot’: 1) While blocking, the goaltender's expected success rate was the same whether the shooter shot high or low. Since the ‘blocking’ tactic was simply standing up and challenging excessively when possible, it would not matter if the player shot high or low; the goaltender was simply covering the middle of the net. 2) While reacting, high shots were easier saves than low shots. Goaltenders generally stood up, which made reaching pucks with the hands easy and reaching pucks with the feet hard. 3) Goaltenders were still better off reacting than blocking on low shots, since players will always shoot for the posts.

We can then use the iterated elimination of dominated strategies technique to find a dominant strategy for each player. In this scenario, goaltenders are always more successful, on average, reacting than blocking. Since goaltenders will always react, shooters acknowledge they are generally better off shooting low than high (while this is just a fabricated example, the fact goaltenders survived without helmets might prove this). Regardless, this exercise demonstrates that goaltenders needed the ability to react to shots during this time. These strategies and the expected save percentages are displayed in the matrix below (Figure 1). Remember goaltenders want the highest save percentage strategy, while shooters want to find the lowest.
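For illustration, one arbitrary payoff matrix consistent with the three assumptions above (the specific numbers are mine; entries are the goaltender's expected save percentage):

Figure 1 (illustrative)    Shoot High    Shoot Low
Block                         .80           .80
React                         .90           .85

Reacting weakly dominates blocking for the goaltender, so blocking is eliminated; against a goaltender who always reacts, the shooter prefers to shoot low (.85 is a lower save percentage than .90).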


However, the game of hockey is not as simple as the pure simultaneous-move game we have set up. Offensive players are not shooting in a vacuum. They are often facing defensive pressure or limited to long distance shots, both circumstances limiting the ability of offensive players to accurately shoot the puck. If the goaltender believes his team will be able to limit the frequency of high shots to less than 50%, then the goaltender's expected save percentage while blocking is greater than their expected save percentage while reacting.

Advances in equipment then allowed the adoption of a new blocking tactic—the butterfly. By dropping to their knees and flaring out their legs, goaltenders were maximising their blocking surface area, particularly along the ice. Equipment was lighter, bigger, and increasingly conducive to the butterfly style, allowing goaltenders to perform at higher levels. Now the same simultaneous-move game described above began to increasingly favour the goaltender. Not only did the butterfly change the way goaltenders blocked, it changed the way they reacted. Goaltenders now tended to react from a butterfly base—dropping to their knees at the onset of the shot and reacting as they dropped. The effectiveness of the down game now meant shooters were always better off shooting high. In a pure game theory sense, this would suggest players would always shoot high, so goaltenders should still always react. These strategies and the new payoffs are displayed in Figure 2.
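Again for illustration, one arbitrary set of modern payoffs consistent with the description (my numbers):

Figure 2 (illustrative)    Shoot High    Shoot Low
Block                         .86           .96
React                         .90           .92

Shooting high is now the shooter's dominant strategy against either tactic, and against a shooter who always shoots high the goaltender still prefers reacting (.90 > .86). But these payoffs also show the earlier point about defensive pressure: if the share of high shots h falls below 50%, blocking's expected save percentage, .86h + .96(1 − h), overtakes reacting's, .90h + .92(1 − h).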


This suggests that goaltenders with a good defence, good blocking technique, and modern goaltending equipment are better off blocking. When a goaltender is said to be ‘playing the percentages,’ it means the goaltender routinely blocks the majority of the net and forces the shooter to make a perfect shot. This strategy has raised the average performance of goaltenders. However, in a zero-sum game such as hockey, simply maintaining a level of adequate performance will not increase the goaltender's absolute success, measured in wins and losses. The only way for a goaltender to positively impact their team is to exceed the average, which—as we will see—can be accomplished by defying the norm.

In conclusion, these strategic interactions did not create hard rules for goaltenders or shooters. However, the permeation of advanced tactics has heavily skewed the payoffs toward the goaltender. Goaltenders block more, and shooters shoot high as much as possible. An unspoken equilibrium has been created and maintained at all levels of hockey—thus altering the instinctive strategies employed by both groups.

The ‘Average’ Position

Goaltenders could now simplify their approach to the position while simultaneously out-performing their historical predecessors. The average NHL save percentage rose from 87.6% in 1982 to 91.6% in 2011.* This rise in success rate gives any goaltender little incentive to break the norm. Imagine an ‘average’ goaltender, posting a save percentage equivalent to the NHL average save percentage each year. The ‘average’ goaltender would put up better numbers each successive year. While they would be perceived to be more valuable—higher personal statistics mean a bigger contract, more starts, and a greater reputation—it is entirely conceivable that, despite their statistical improvement, they would not contribute to any more victories. If the goaltender at the other end of the ice is performing just as well as you (on average, of course), then the ‘average’ goaltender will not contribute any extra wins to their team compared to the year before. However, this effect would be difficult to observe over the course of a goaltender's career, and coaches and managers would become enamoured with ‘average’ goaltending, comparing it favourably to the recent past. The ‘success of mediocrity’ encouraged a simplified, safe, and ‘high-percentage’ approach to the position. If you looked like other goaltenders, played like other goaltenders, and performed like other goaltenders, there was little reason to worry about job security. In short, through the evolution of goaltending, goaltenders generally have had very little to gain from breaking the idyllic norm of how a goaltender should look or play. The implicit equilibrium between shooters and goaltenders has persisted across different eras—most recently centring around a ‘big butterfly, blocking’ game, resulting in historically superior statistics for the ‘average’ goaltender.

The Limits of Success

There is no doubt that the craft of goaltending is now significantly superior to the efforts that preceded it. Goaltenders today are bigger, faster, more athletic, and advanced technically. However, the quest to fulfil the requirement of ‘average’ will be an empty pursuit in absolute terms (wins and losses) for any goaltender. In order to avoid becoming ‘average’ the goaltender must deviate from the strategic equilibrium that primarily consists of large goaltenders simply ‘playing the percentages.’ While goaltenders can exceed the average by simply being even bigger, faster, and more athletic than their peers, this is becoming increasingly difficult. Not only will teams continue to draft goalies for these attributes, there are natural limits to how tall, fast, and coordinated a human being can be. Shooters will also continue to adjust. An extra 2” in height does not necessarily prevent a perfectly placed shot over or under the glove. Recall the oversimplified simultaneous-move game: shooters will always be better off shooting high and to the posts—when they have time. High-level shooters have evolved to target very specific areas of the net, preying on the predictability of the modern butterfly goalie. However, the shooter will not always have time to attempt the perfect shot, which means the goaltender can revert back to primarily blocking and mediocrity without being exposed.

The Contrarian Position

While the goaltender cannot change his physiology in order to exceed the average, they can (slowly) alter their approach to the game. Remember, the strategic interaction between the goaltender and shooter has become predictable. The goaltender will fill up as much net as possible, forcing the shooter to manufacture a perfect shot, while the shooter will attempt to comply. If a goaltender were to begin to mix strategies effectively and react some percentage of the time, they would be better off. The shooter has been trained to shoot high (that is their dominant strategy), and goaltenders are better off reacting to high shots than blocking with their arms pinned to their sides. Essentially, by mixing strategies when it is wise (when the simple block-react simultaneous-move model applies), the goaltender can increase their expected save percentage—and exceed the average.

To demonstrate this point we must move away from the abstract and the general, focusing on specific examples. A disproportionate amount of statistical success throughout the ‘butterfly’ era has been the work of unorthodox goaltenders. While an ‘unorthodox’ style has had a negative connotation in the conventional world of goaltending, it is the defectors that have broken through the limits reached by the big, butterfly goaltender. Sub-six-foot Tim Thomas recently broke the modern NHL save percentage record by willing himself to saves and largely defying established goaltending practice. The save percentage record previously belonged to Dominik Hasek. Like Thomas, Hasek was less than six feet tall and would consistently move toward the puck like no other goaltender in the game. For shooters with very clear, habitual objectives (shoot high glove, or low blocker just over the pad, or through the legs if the goaltender is sliding, etc.), facing these contrarians led to a historically low shooter success rate. These athletes effectively mixed their strategies between blocking and reacting (their own versions of these strategies, mind you) to keep shooters guessing. Their contrarian approach has been remarkably sustainable as well—Hasek and Thomas have combined to win 8 of the last 17 Vezina Trophies, despite their NHL careers only overlapping 3 years. By moving further away from the archetypal goaltender, both Thomas and Hasek exceeded the average considerably. It is exceeding the average that causes goaltenders to contribute to victories, the absolute measurement of success for any goaltender.

Consider the correlation between a unique approach and sustained success when assessing the careers of four Calder Trophy winning goaltenders: Ed Belfour, Martin Brodeur, Andrew Raycroft, and Steve Mason. Each began their NHL career in impressive fashion; however, two went on to become generational goaltenders, while the other two will struggle to equal their initial success. This may seem like an unfair comparison, but it is important to understand why it is unfair. Both Brodeur and Belfour maintained an elite level of play because they generally defied convention throughout their careers. Both played unique styles and were excellent puck handlers. When Belfour entered the league at the very start of the 1990s, his combination of athleticism, intensity, and an advanced understanding of positional play made him formidable. He mastered the butterfly before it was the standard—you could argue the success of Patrick Roy and Belfour helped create the current generation of ‘big, butterfly’ goaltenders. Brodeur has always been different—there has been no comparable goaltender to him throughout his career, just like Thomas or Hasek. He has been the most consistent and celebrated goaltender in NHL history without utilising the most common save tactic employed by his peers—he rarely drops into a true butterfly. Counter-intuitively, despite lacking a standard, universal save movement, he has also been remarkably consistent. Martin Brodeur has mixed his save selection strategies magnificently, preying on shooters programmed to shoot against predictable butterfly practitioners.

Now consider the other rookie standouts: Raycroft and Mason. It is difficult to distinguish their approach to the game from the approach of other ‘average’ professionals. Mason is taller than average and catches right, but he does not present a unique challenge to shooters. They are goaltenders with an average, ‘percentage-based’ approach to goaltending. There is nothing note-worthy about the way they play the position. Why the initial success? Both goaltenders likely overachieved (positive deviation from the average) due to a favourable situation and the vague element of surprise. Shooters would soon adjust to the subtleties in the young goaltenders’ games.* Personal weaknesses would become exploited and their performance regressed towards the mean. Their rookie years could have been duplicated by a number of other rookie goaltenders with similar skill and luck. Their ‘average’ size, skill set, and approach to the game have manifested themselves in an ‘average’ NHL career. An impressive beginning was nothing more than favourable luck and circumstance—their careers diverged significantly from other Calder-winning goaltenders. Goaltenders that masterfully mixed save selection strategies throughout their careers, by contrast, set the standard for consistency, longevity, and performance.

In conclusion, the modern equilibrium between goaltenders and shooters has been successfully disrupted by contrarians like Dominik Hasek, Tim Thomas, and Martin Brodeur. The rest have enjoyed the benefits of the ‘big, butterfly goaltender’ doctrine—stopping more pucks on average—but have gained little ground on other ‘average’ goaltenders. These goaltenders are playing a strategy that contributes little to their team because they are more susceptible to the extreme.

The Possibility of the Extreme—The Black Swan Save 

If contrarians exceed the average, it is important to understand how they can do it with remarkable consistency. I believe their unconventional style and willingness to react to shots leaves them better prepared to handle the possibility of the statistically unique shot—which I will call a ‘Black Swan’ opportunity.§ They can always use the butterfly tactic in situations that call for it, while the butterfly-reliant goaltenders struggle to improvise like contrarians. The ‘reaction’ strategy leaves them free to make the unconventional saves necessary to prevent Black Swans from becoming goals.

The position relies on instinct and split second decisions. Reactions and responses to defined situations are drilled into goalies from an increasingly young age. Long before these goaltenders are capable of playing in the NHL, they have generally mastered technical responses to certain, finite situations. Goaltenders may be trained very well to react predictably in trained circumstances, but this leaves the goaltender susceptible to the extreme—breeding mediocrity. In this case, the extreme or Black Swan shot is the result of 10 position players on the ice, moving at speeds up to 30 miles per hour, chasing an object that can move close to 100 miles per hour. Despite the simple objective and the definitive results of the goaltending position, every shot against has the potential to create an infinite number of complexities and permutations. A one-dimensional approach to the position—where the goaltender determines they are better off ‘playing the percentages’—offers the goaltender the opportunity to make a large number of saves, but it does not prepare the goaltender to react favourably to a Black Swan. The problem, then, is not maintaining a predictable level of performance—making the saves ‘you should make’—it is the ability to adjust to the unpredictable and the extreme in order to make a critical save. This is accomplished by reacting to shots a healthy percentage of the time.

The real objective of the goaltender is to give up fewer goals than the opposing goaltender. In a low scoring game such as hockey, it is likely one goal against will determine the outcome of any given game. Passively leaving the outcome up to chance is a mistake in my opinion. Aggressiveness and assertiveness are competitive qualities that are compromised by a predominantly butterfly style. By dropping into the butterfly the goaltender is surrendering to whatever unlikely or unlucky shot may occur. A great play, a seeing-eye shot, or an unlikely bounce—the ‘unlikely, undrilled’ occurrences that have the potential to win or lose games—happen randomly. The goaltender must be aggressive and decisive in order to adjust to these situations. These are the shots that cannot be replicated in repetitive drills; they require the creativity and instinctive reaction of a contrarian.

Goaltending—A Lesson in Randomness

The frequency of the Black Swan shot or goal against is erratic; they can happen at any time. There is little correlation between shots against and goals against on a game-by-game basis. If we assume the number of Black Swans a goaltender faces is roughly proportional to the number of goals given up*—generally, the more improbable shots faced, the more goals against—we counter-intuitively observe that Black Swans and the goals they cause occur randomly in a hockey game, largely independent of the number of shots against the goaltender. Taking the 10 busiest goaltenders of the 2010-2011 season, we see that their save percentage generally goes up as they receive more shots against. It does not matter whether the team gives up 20 shots or 40 shots; the random Black Swan occurrences that result in goals will happen just as frequently, regardless of the shots against. In outings where those goaltenders faced more than 40 shots, the average save percentage and shots against were 94.63% and 43.51, respectively. This implies these goaltenders gave up, on average, 2.33 goals per game when facing more than 40 shots (43.51 × (1 − .9463)). When these same goaltenders faced less than 20 shots, their save percentage was a paltry 82.17% on an average of 14.85 shots. This implies 2.64 goals against per outing where the goaltender faced less than 20 shots.§ Counter-intuitively, they fared worse while facing fewer than half as many shots.

The frequency of the ‘Black Swan’ occurrences that lead to goals appears to be largely independent of shots on goal. ‘Playing the percentages’ leaves every goaltender hopelessly exposed to random chance throughout the game. Goaltenders in the world's best league do no better in absolute terms when they face 20 shots than 40 shots. They are the same goaltenders; they just fall victim to circumstance and luck.

Simply ‘playing the percentages,’ with an emphasis on blocking from the butterfly, leaves the goaltender's fate up to pure chance. No goaltender can attempt to consistently out-perform their peers by playing the percentages—at least, not with certainty. Hoping to block 90% of the net while relying on your team to limit quality opportunities will result in mediocrity. The Black Swan events that lead to goals occur randomly, and just as frequently facing 15 shots as 50 shots. This has manifested itself in ‘average’ goaltenders’ performances fluctuating unpredictably from game to game and from season to season. In a game where random luck is prevalent, employing a strategy that struggles to adjust to the complexities of a game as dynamic as hockey will result in erratic and unexplainable outcomes.

The Challenge to the Contrarian

This creates a counter-intuitive result: the prototypical, ‘by the book’ goaltender will likely be subjected to greater fluctuations in performance, despite having the technical mastery of the position that suggests a level of control. Instead, it is the contrarian, with no attachment to the ‘proper’ way to make the save, that will achieve more consistent results. The improvisational nature of a Tim Thomas stick save may appear out of control, but his approach to the game will yield more consistent results. Aggressiveness and assertiveness allow the contrarian to make saves when there is no technical road map to reach the proper position on a Black Swan shot. Consider the attributes necessary to make an incredible save. Physical attributes vary among NHL goaltenders, but not by much. Height, agility, reflexes, and other critical skills for any professional goaltender will cluster around a certain standard. On the other hand, the mental approach to the game can vary between goaltenders by magnitudes. Goaltenders can become robust against the effects of Black Swans by having the creativity to reach pucks ‘technicians’ could not and the courage to abandon the perceived safety of the butterfly. Decreasing the effects of Black Swans would be huge, and no theoretical limitations (unlike physical limitations) exist. In a game containing the possibility of the extreme, it is the contrarian goaltender that will best be able to prevent goals against.

Leaving the safety of the ‘butterfly style’ can be dangerous for a goaltender. Coaches, managers, analysts, and peers will be quick to realise when a goal could have been stopped by a goaltender passively waiting in their butterfly. These ‘evaluators’ and ‘experts’ have subscribed to the ‘average’ goaltender paradigm for over a decade. After game 5 of the 2011 Stanley Cup Final, Roberto Luongo suggested that the only goal of the game against Tim Thomas would have been “an easy save for (him).” Proactively mixing save strategies does leave the contrarian potentially exposed to the unconventional goal against. Improbable, unconventional saves are great, but coaches and managers really only care about goals against. They can handle them if it was not the fault of the goalie—the perfect shot or improbable bounce that preys upon the passive butterfly goaltender. Just don't pass up the opportunity to make an easy save and get scored on, contend the experts (luckily, Thomas put together the greatest season of any goaltender in the modern game, so he got a pass). Playing the game freed from the ‘butterfly-first’ doctrine is a leap of faith, but it gives the goaltender the opportunity to contribute something positive to their team: wins.

Consider the great Martin Brodeur—the winningest goaltender in NHL history has often been discredited for playing behind strong defensive clubs while winning games and championships. However, random Black Swan chances have little regard for the number of shots against, as we have seen. So why does Martin Brodeur have the most victories of any goaltender in NHL history? I would give a large amount of credit to his ability to make the ‘key save’ on the unlikely chance against. These saves would not necessarily manifest themselves noticeably at the end of the game or in any statistically significant way—rather they are randomly distributed throughout the game, as Black Swans are. Remember that, while New Jersey has been traditionally strong defensively, they have averaged 16th in the league in scoring during Brodeur's tenure. With this inconsistent (and at times lethargic) goal support, Brodeur's win totals remained remarkably consistent. During his prime he recorded at least 37 victories in 11 consecutive seasons. The low scoring years required extreme focus and competency. Where the game could hinge on one great play or bad bounce, Brodeur preserved victory more than any contemporary by being vigilant against the Black Swan chances. You can make the argument the low shot totals (and the subsequent merely ‘good’ save percentage) led to him being overrated considering his absolute success. However, Black Swans are somewhat independent of shots against, and until his detractors understand how three ‘Brodeur-only saves’ were the difference in a 3-2 win in a game where New Jersey gave up only 23 shots, the winningest goaltender of all-time will continue to be regrettably underrated, except for where it counts. No statistical analysis can measure the increased importance of a save to preserve victory compared to a save without that pressure.

Conclusion

I felt it was important to actively think about the strategies that have permeated the goaltending position and the impact they have had on goaltending performance. It was also important to liberate my thinking from too much quantitative analysis, focusing instead on the qualitative relationships between goaltender strategy, the random nature of the position, the goaltenders that consistently exceed the norm, and the goaltenders that will always be products of circumstance. None of this could be done with traditional goaltender metrics; they do not even begin to consider the possibility of the Black Swan opportunity against. Traditional statistics can be manipulated to underrate the winningest goaltender of all-time. Winning is sport's sole objective, the goaltender always has some influence on winning, so goaltender wins are important. Traditional statistics lead to complacency with ‘average’ goaltending, which is goaltending that adds nothing to the bottom-line—winning. Leaving these statistical constraints behind can help clarify the connection between strategy and the contrarian, then between the contrarian and success.

Based on this philosophical analysis, I believe goaltenders should unsubscribe from the conventional goaltending handbook and aggressively mix their save selections, helping them remain robust against the inevitable Black Swan opportunities against them. This will allow them to exceed the ‘expected’ performance, and ultimately win more games.

____________________________________________

* A 4% increase in save percentage is significant; it is analogous to saying goaltenders gave up 48% more goals on the same number of shots in 1982 than in 2011 ((1 − .876) / (1 − .916) = .124 / .084 ≈ 1.48).

* While the butterfly style may be generic, each goaltender has relative strengths and weaknesses. NHL shooters will eventually expose these weaknesses unless the goaltenders can successfully vary their strategy (remain unpredictable).

In the ‘modern’ game-theory example, the goaltender would have to react the vast majority of the time to force the shooter to mix between shooting high and low (which is ideal for the goaltender). By doing so the goaltender can exert their influence on the shooter, as opposed to simply accepting that a great shot or lucky bounce will beat them.

§ A term borrowed from Nassim Nicholas Taleb and his book The Black Swan: The Impact of the Highly Improbable. Black Swans, named after the rare bird, represent the improbable and random occurrences in hockey and in life. Just because we cannot conceive of a particular challenge, nor have prepared for it, does not mean it will not happen. ‘Black Swans’ are unpredictable, can have a large impact (a goal), and are the result of an ecosystem that is far too complex to predict (10 players, a puck, and physics create infinite possibilities). Events are weakly explained after the fact (you held your glove too high) but in reality the causes are much deeper and impossible to predict.

* While I would argue some goaltenders are better equipped to handle ‘Black Swan’ opportunities against them, these difficult, unforeseen events will still be approximately proportionate to the number of goals they give up. NB: Tim Thomas is not included in this list.

This ‘extreme’ case happened 47 times out of the 677 games collectively played.

§ Many of these games saw the goaltender pulled, so the goals against is ‘per appearance’ rather than ‘per game.’ While it may be argued that these goaltenders just ‘didn't have it’ in these games, I would argue that more often they faced a cluster of bad luck and improbable chances against them. The total sample size is 60 games.

This attitude may explain the regression in Luongo's game over the last couple of seasons. He was once a 6’3 goaltender with freakishly long limbs who would reach pucks in unconventional and spectacular ways. Now he views himself as a pure positional goaltender who is better off on the goal line than aggressively attacking shots against him. Apparently it is better to look ‘good’ getting scored on multiple times than to look ‘bad’ getting scored on once.

The standard deviation is 10 places: New Jersey's scoring rank has been all over the place, both leading the league in goals for and finishing last in goals for.

CrowdScout Score and Salary – A Study in Market Value

It’s All Relative

In a salary cap league, how teams spend their finite budget has become very important to any present or future success.[1] The relative value of a contract is often more important than the absolute value of the contract. Within a very strict set of contract rules, teams will devote a share of their allotted cap space to a player at a price dependent on a number of market forces. The goal of this study is to estimate what that price should be, given some of those market forces, and compare it to the player's actual salary.

So, how do we go about determining the market rate?[2] First, it helps to make some simplifying assumptions – we expect the cap-hit or AAV (Average Annual Value of the contract) to be a function of:

  • Position – different positions are valued slightly differently. Any contract negotiation anchor would consist of comparables playing the same position.
  • Age – the NHL’s not-so-free labor market puts significant restrictions and limitations on young players’ earnings. Thus, any analysis looking at market rate should factor in age.
  • Skill / ability / comprehensive contribution to winning – the player’s perceived ability will determine market value. Unlike age and position, skill is extremely difficult to accurately gauge and forecast (since many deals are multi-year). This will pose the biggest obstacle to a clean quantitative analysis. Across all sports, teams consistently misvalue player ability, most notoriously over-valuing players and overpaying them.
  • Contract Length (Term) – There are different interactions between age, term, and AAV. A short contract length might signal less money (a ‘show me’ bridge contract) for a young RFA or more money (player trading longer term for higher AAV) for an older UFA. Data courtesy of generalfanager.com.
  • Projected Salary Cap at Contract Date – A $5M AAV contract signed in the summer of 2009 is not the same as a contract signed in the summer of 2016. Managers are forward-looking allocating a set percentage of their expected salary cap to a player rather than an absolute amount. Data courtesy of generalfanager.com.

Finding Value

To determine how each player's cap-hit stacks up against what we would expect, we must create a formula or algorithm to return each player's expected AAV. The difference between the expected AAV and actual AAV – the residual – signals the relative value of the cap-hit. Spending a million less than market forces would expect (or, more specifically, than our model would predict) allows the team to either save that money or invest it elsewhere.

A model can be built using the features discussed above, predicting AAV as a function of age, position, term, projected cap, and ability – the catch-all for talent or skill or whatever you like. But how do we comprehensively quantify ability, the age-old question?

One Feature to Rule Them All

My baseline method will be to use GAR (Goals Above Replacement) from war-on-ice.com to help predict salary. GAR is a notable attempt to assign numerical credit to players based on their team winning, which proves a decent proxy for ability. However, GAR or any ‘be all, end all’ stat has limitations – injuries interrupt the accumulation of goals above replacement, and defensive contributions are very difficult to quantify, among other things. No algorithm is omnipotent, but GAR is very helpful in attempting to answer this question.

In addition to GAR, I will use data collected from my project, CrowdScout Sports, designed to smartly aggregate user judgment. It has been in beta over the course of the 2015-16 season, with over 100 users making over 32,000 judgments on players relative to each other. With advanced metrics provided, a diversity of users, and the best forecasters gaining influence, I hope the data provides an increasingly reliable, comprehensive player rating metric. The rating is intended to answer the question posed to the user as they are prompted to rank two randomly chosen players – if the season started today, which player would you choose if the goal were to win a championship?[3]

Both metrics will be used as a proxy for ability when trying to explain AAV (data courtesy of generalfanager.com). Both metrics are designed not to be influenced by cap-hit, a necessity for the model to properly explain cap-hit.

GAR Linear Model

First, let’s explore the relationship between AAV and term, salary cap expectations, position, age, and ability using the GAR metric. A dataset containing player features at the onset of the 2014-15 season was assembled from 2014-2015 data[4] and the GAR model from war-on-ice.com. The AAV of the upcoming 2015-16 season (where the player was signed prior to the season) was targeted. Any incomplete records were removed. The age variable was transformed into a bucketed variable since there isn’t a linear relationship between age and AAV, rather different levels of pay by age (a bucketing sketch follows the list). The natural age buckets in relation to cap-hit are:

  • 18-21 – Entry Level Contract (ELC) players
  • 22-24 – A mix of ELCs, bridge contracts, and a few high fliers who get paid
  • 25-27 – RFA controlled, second contract players in their early prime
  • 28-31 – UFA contract years (likely higher cap-hit) but players likely to still be in their prime
  • 32-35 – UFA contract years with some expected decline in ability
  • Over 35 – Declining ability compounded with specific contract rules for 35 plus players
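For illustration, a minimal sketch of that bucketing in R (the data frame and column names are my assumptions, not the original code):

# Bucket a numeric age into the contract-driven groups above
players$age_group <- cut(players$age,
                         breaks = c(17, 21, 24, 27, 31, 35, Inf),
                         labels = c("18-21", "22-24", "25-27", "28-31", "32-35", "gt35"))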

The 924 remaining players were then split into 10 folds to cross-validate the Generalized Linear Model (GLM) – iteratively training on 90% of the data and testing out of sample on the remaining unseen 10%, then combining the 10 models. The cross-validated model is then used to score the original dataset – the coefficients from the GLM are multiplied by each player's individual variables – age (1/0 for each bucket), position (1/0 for each position), contract length, projected cap, and GAR. The outcome is the expected AAV.
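The output below has the shape of R's caret package; a minimal sketch that would produce this kind of output, under that assumption (the data frame and column names are mine):

library(caret)

set.seed(2016)
ctrl <- trainControl(method = "cv", number = 10)       # 10-fold cross-validation

glm_fit <- train(AAV ~ GAR + age_group + Pos + Contract.Length + Projected.Cap,
                 data = players, method = "glm", trControl = ctrl)

glm_fit$results                                        # cross-validated RMSE and R-squared
summary(glm_fit$finalModel)                            # coefficient table, as below

players$expected_AAV <- predict(glm_fit, players)      # score the original dataset
players$residual     <- players$expected_AAV - players$AAV  # positive = surplus value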

Resampling: Cross-Validated (10 fold)
Summary of sample sizes: 836, 837, 838, 838, 838, 836, ...
Resampling results:


     RMSE   Rsquared
  1.10514  0.7112609

                 Estimate  Std. Error  t value  Pr(>|t|)
(Intercept)     -3.109087    0.838919   -3.706    0.0002 ***
GAR              0.066895    0.005919   11.301   < 2e-16 ***
age_group21-24  -0.022097    0.165447   -0.134    0.8938
age_group24-28   0.473283    0.163387    2.897    0.0039 **
age_group28-31   0.812176    0.171130    4.746    0 ***
age_group31-35   1.078278    0.179754    5.999    0 ***
age_groupgt35    1.819195    0.217760    8.354    0 ***
PosD             0.129242    0.099992    1.293    0.1965
PosG             0.218529    0.140888    1.551    0.1212
PosW            -0.112796    0.095704   -1.179    0.2389
Contract Length  0.673488    0.021353   31.541   < 2e-16 ***
Projected.Cap    0.041061    0.011842    3.467    0.0006 ***

Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Our simple GLM explains about 71% of the variation in cap-hit. GAR, Contract Length, and Projected Cap are all strong positive predictors. Each age bucket is subsequently paid more. Of note, the 22-24 age bucket has the weakest age coefficient, since at that age some players are on their ELC while others have earned legitimate star contracts. In this model, position wasn't a significant predictor, although it signals defensemen and goaltenders probably go at a premium to centers, while wingers take a discount.

The player-level residuals (expected AAV less actual AAV, a positive value representing surplus value to the team) are plotted below. The model would be stronger but for some significant outliers – Jonathan Toews, Patrick Kane, Thomas Vanek, and Tyler Myers were all paid about $4M more than the model expected. Conversely, Duncan Keith, Roberto Luongo, and Marian Hossa were all underpaid by at least an expected $4M. Like most linear models, it had trouble predicting a non-normal target: the distribution of AAV values is skewed to the right, and the model struggled to pick up ‘extreme’ values. Transforming AAV into a log of AAV did not increase predictive power.

[Figure: player-level residuals, GAR linear model]

Crowd Wisdom

The next iteration of the GLM was run using the CrowdScout score as a proxy for ability. A few notes on the inclusion of this data:

  • What is this metric? It represents the relative strength of a player’s Elo rating compared to the entire population at the time of analysis. The Elo rating is the cumulative result of over 100 scouts selecting between two randomly generated (but generally similar) players some 32,000 times. Each of these selections feeds into an algorithm that adjusts each player’s score based on the prior probability of the match-up and the k-factor given to the user – the more active and historically accurate the user, the greater their influence (a sketch of this type of update follows the list).
  • I think skepticism should be applied to any analysis performed on data acquired through some level of effort by its owner. That said, the CrowdScout data is the result of my own engineering project and is intended to aid (fantasy) managerial decision-making, rather than provide advanced analytical insight. Any clean, methodologically tight analysis would be a bonus.
  • There is a concern about collinearity in this analysis – it is possible a subset of users associated higher salary with better ability, as opposed to the reverse. Conversely, an obviously overpaid player can be under-rated due to an emotional discounting of their ability. For the purpose of this analysis, we will assume the effects neutralize each other and that, in aggregate, AAV did not significantly impact the CrowdScout score.[5] There will obviously be a correlation between player score and AAV, but that does not imply causation.
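For reference, a generic Elo update of the kind described above, sketched in R (the function and numbers are illustrative; the production algorithm also scales k by each user's history):

# Update two Elo ratings after a head-to-head selection
elo_update <- function(winner, loser, k = 20) {
  expected <- 1 / (1 + 10 ^ ((loser - winner) / 400))  # prior probability the winner wins
  delta <- k * (1 - expected)                          # larger adjustment for upsets
  c(winner = winner + delta, loser = loser - delta)
}

elo_update(1510, 1560)  # the lower-rated player won, so both ratings move noticeably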

With the CrowdScout data, I kept all players from the 2015-16 season who had been judged at least 70 times, effectively dropping players who did not spend a significant amount of time on an NHL roster or who didn't receive many implied ratings from a diverse set of users. A dataset containing position, age bucket (same buckets as the GAR Linear Model) as of 10/1/2015[6], and CrowdScout score as of 5/25/2016 was constructed for 548 players. A model was then built cross-validating 10 folds from the data, testing each model on unseen, out of sample subsets.


Resampling: Cross-Validated (10 fold)
Summary of sample sizes: 494, 494, 493, 494, 492, 492, ...
Resampling results:


     RMSE   Rsquared
 1.039983  0.7632717

                       Estimate  Std. Error  t value      Pr(>|t|)
(Intercept)           -4.059971    0.997790   -4.069        0.0001 ***
CrowdScout Score       0.040156    0.002344   17.129       < 2e-16 ***
age_group21-24        -0.048386    0.387105   -0.125       0.90057
age_group24-28         0.716755    0.378693    1.893       0.05894 .
age_group28-31         1.148196    0.386094    2.974       0.00307 **
age_group31-35         1.530753    0.389492    3.930      0.000096 ***
age_groupgt35          2.283200    0.422219    5.408  0.0000000963 ***
PosD                  -0.151021    0.123711   -1.221       0.22272
PosG                  -0.050527    0.171222   -0.295       0.76803
PosW                  -0.126127    0.122439   -1.030       0.30342
Term                   0.474544    0.027242   17.419       < 2e-16 ***
Projected.Cap.K.Date   0.042666    0.013781    3.096       0.00206 **

Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

The same model methodology using CrowdScout score as a proxy for ability explains about three-quarters of the variation in AAV. Like the GAR model, ‘ability’ has a strong positive relationship with AAV. Pay increases with age: there are significant jumps in expected pay from 21-24 to 24-28, and then again as players hit unrestricted free agency at around 28. Goaltenders and wingers are likely expected to have their AAV discounted, all else equal, although the relationship isn't significant.

Using CrowdScout as a proxy for ability creates a better fitting model than using GAR. This is consistent with what we would expect to see, since the CrowdScout data doesn't have to worry about players missing games due to injury. This is a study into what we would expect players to be paid – rather than what players should be paid – therefore the CrowdScout score is very likely baking in some reputational assessments, leading to a stronger relationship with cap-hit. It's also possible that crowd wisdom is able to determine the impact defensive prowess has on comprehensive ability better than most public data.

Player-Level Residuals:

[Figure: player-level residuals, CrowdScout linear model]

Team-Level Residuals:

[Figure: team-level residuals, CrowdScout linear model]

This analysis also measures spending efficiency based on the 2015-16 AAV and end-of-season ability, because the CrowdScout Score was not available at the start of the season. However, we can create a predicted CrowdScout Score from the 2014-15 season to hold up against 2015-16 AAV, since teams can only act on past performance and project out.

Paid Against the Machine

The original goal of the analysis was to compare player cap-hit to expected cap-hit. A simple linear model explaining AAV as a function of age, position, term, projected cap at the time of the deal, and CrowdScout score does a good job predicting cap-hit. However, we can also explore additional modeling methods, increasing the depth of interactions between variables (e.g. age and draft year) and strengthening the predictive power. I will make an adjustment to the CrowdScout Score and use a machine learning model that can handle the additional interactions between features:

  • Predicted CrowdScout Score – As outlined here, the CrowdScout Score can be reliably predicted using on-ice metrics. I will score each player’s 2014-15 statistics from puckalytics.com with the GLM and Random Forest model and take the average of the predicted scores. This will replace the actual CrowdScout Score in the model, which can be biased.
  • Age (as of season start, 10/1/2015) – Move from a strictly bucketed age to a continuous age variable to help capture the different interactions. This would not work in a linear model; Jagr would mess everything up.
  • Contract Length – Length has proved to be a key explanatory variable. Data courtesy of generalfanager.com.
  • Projected Salary Cap at Contract Date – Also a key explanatory variable. Data courtesy of generalfanager.com.
  • Drafted Boolean – The interaction between whether the player was drafted or not, term, and age should help the model work out whether the player is on an ELC, 2nd contract, or UFA contract.

In order to handle interactions between the new variables in the model, an ensemble of regression trees will be used – the Random Forest algorithm. A Random Forest is an ensemble model, creating decision trees from randomized variables and subsets of observations, then considering each ‘tree’ when scoring or predicting an observation. The advantage of this algorithm is that it is extremely powerful. The disadvantage is that it is basically a black box; there are no clean, interpretable parameters to say ‘when all else is equal we expect a player moving from the 31-35 age group to over 35 to be paid about $500k more’ like in a GLM.
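A minimal caret sketch of this model (again, the data frame and feature names are my assumptions, not the original code):

library(caret)  # method = "rf" calls the randomForest package under the hood

set.seed(2016)
rf_fit <- train(AAV ~ Predicted.CrowdScout + Age + Contract.Length +
                      Projected.Cap + Drafted,
                data = players, method = "rf", ntree = 500,   # a 500-tree forest
                trControl = trainControl(method = "cv", number = 10))

rf_fit$results   # cross-validated RMSE and R-squared
varImp(rf_fit)   # which variables drive the splits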

A 500-tree model was able to bring the RMSE under 0.5, with an R2 of close to 0.95.

Despite the lack of coefficients, we can take a peek under the hood to check how important each variable is in the algorithm's decision-making.

[Figure: variable importance, AAV Random Forest model]

The CrowdScout Score and Term variables are the most important in the Random Forest model when explaining AAV. That is, when they are used to create a ‘tree’ or decision, they cumulatively reduce the sum of squared residuals more than the other variables. Age, which should work in tandem with draft history and term, was also important. Projected Cap had some influence, Draft History even less so. Team salary and position (consistent with the linear models) were the least important, having no influence in the enhanced model, and were dropped.

Note that when 2014-15 GAR was added to a dataset of non-rookie players and included in the Random Forest model, the importance of GAR was around that of age, and it did not increase the performance of the model.[7]

The Random Forest model still has trouble predicting very high cap-hits. For example, Patrick Kane and Jonathan Toews, with their AAV of $10.5M, are considered overpaid by over $1M when compared to market value – Toews slightly more, with a 78 predicted CrowdScout Score compared to Kane's 86. With a predicted CrowdScout score of 88, Alex Ovechkin makes $1.2M more than the model would predict. On the flip side, Justin Abdelkader was underpaid by about $2M in the Random Forest model last season. Interestingly, this summer he received a raise of almost the same amount. Patrick Eaves was also underpaid last year by over a million. He was notably underpaid in both GLM models, using Elo and GAR – sporting a healthy predicted CrowdScout score of 58 and a 2014-15 GAR of 13.8, he was a 31-year-old winger paid a paltry $1.15M. Other players making about a million less than predicted during the 2015-16 season were Morgan Rielly, Mattias Ekholm, and Kyle Okposo – all of whom received healthy raises this summer.

[Figure: player-level value vs. market (AAV.RF.Player)]

At a team level, the Islanders, Hurricanes, and Predators led the way in contracting players for less than market value last year. The Islanders received strong value from pending free agents Nielsen and Okposo. The Hurricanes had positive value across the board, Skinner excepted. The Predators are frugal by design, extracting value from their young defense. Note that this analysis fails to include goaltending, where Rinne and Ward would move each team down.

The Avalanche, Flames, and Rangers had the worst value from their contracts. Colorado has very few good contracts when compared to the market. The Flames had a few bad contracts on defense and did not receive any sort of bonus from having top players on ELCs. The Rangers were also pulled down by an overpaid defense.

Also note that the error terms here are small, and it wouldn’t take much to move a team up or down the rankings. It also demonstrates that the future is tough to predict and that few managers can avoid making salary allocation errors every now and then.
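Rolling the same residuals up to the team level is a one-liner in the hypothetical frame above (negative totals mean a team is paying less than the modeled market):

```python
# Team-level surplus value: sum of residuals per team ('team' is a hypothetical column).
team_value = df.groupby("team")["residual"].sum().sort_values()
print(team_value.head(3))   # most surplus value (paying under market)
print(team_value.tail(3))   # least surplus value (paying over market)
```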

[Figure: team-level value vs. market (AV.RF.Team)]

Conclusion

It is critical that NHL franchises effectively manage their salary cap in order to be viable. It appears a model can explain about 95% of the market for NHL talent. This feels about right: some deals are visibly off from the start, and some valuations will change with time, but most of the time teams and agents are in line with what the market would expect as a function of the player’s age, draft year, position, term, team salary, and ability. In this study, holding data from the CrowdScout project up against objective on-ice features appears to have provided a good proxy for ability.

The Random Forest model is quite strong, with 5% of contracts left unexplained. Some share of this is mis-valuation of the player by the market, some of it is inaccuracies in the CrowdScout rating and modeling, and some of it might be unexplainable (a discount to stay close to family, injury or character concerns, etc.). We are specifically interested in quantifying the first term – how teams might misvalue certain players. With a relatively small error term, it is possible the majority of these residuals are made up of the unquantifiable and that the majority of team-level differences are noise. Eye-balling teams in the top 5 and bottom 5 by spending efficiency passed the sniff test, but most managers and agents settle on deals that are in line with the league market.

Finally, it’s important to remember this is a study in what we expect a player’s cap-hit to be given market conditions, rather than what they should make in a free-market NHL. Players on ELCs often provide teams very good value relative to their contract, but in this analysis there is no bonus for production from ELCs, since player age and contract length often signal when players are likely to be on an ELC. The expected AAV is also calculated with perfect information at the start of the 2015-16 season, whereas real deals have to project future performance during contract discussions. That alternative analysis might be looked at in the near future; expect considerably larger error terms – longer timelines introduce more uncertainty.

It’s also important to remember that this analysis leans on ever-maturing data from the CrowdScout project. As expected, it contains enough reputational information to help build a stronger model than using GAR from war-on-ice.com as a proxy for ability. It is possible that this data contains systemic bias – if a higher salary caused the CrowdScout Score to be higher, rather than the two simply being correlated. A simple plot (below) suggests that the CrowdScout Score often differs from AAV, which is encouraging. Given that, I hope this unique dataset and model will prove helpful in evaluating contracts and cap management in the future.

Huge thanks to asmean for contributing to this study, specifically advising on machine learning methods.

[Figure: CrowdScout Score vs. AAV (AAVvScore)]

______________________________________________________

[1] If a team can consistently acquire and retain talented players who consistently play above their expected contract, they will be operating with a significant advantage. If your 24-year-old top-4 defenseman is signed at $4.5M AAV and most comparable players are averaging over $5M AAV, more depth or quality can be acquired elsewhere. If your mid-range starting goalie makes $6M and the goaltending market falls out and sees comparables average less than $5M, you are at a disadvantage. Easy enough.

[2] In absolute terms, that’s a very tough question. The NHL labor market is a long way from the economic-textbook-supply-meets-demand-free-efficient-market. There are salary floors, ceilings, team floors, team ceilings, bonuses, and rules regarding age and accrued seasons. Deals are often made with little certainty of future performance (read: teams are poor at forecasting individual player career arcs), and often see a trade-off between salary and duration. An efficient market this is not.

[3] A model is only as good as its target variable, and I believe any comprehensive analysis of ability should attempt to answer that question or one similar to it. Hockey is a goal-scoring contest first and foremost, but the ultimate goal (winning the championship) resembles a marathon of hockey games. This is a tricky distinction since it invites past winners to be overrated, when in alternative histories they did not win, thanks to luck. This is certainly a deeper philosophical question, but an analysis in market value should only care about results.

[4] 2015-2016 GAR has not been and will not be posted.

[5] As opposed to simply over-rating a player due to reputation and other biases. The system is designed to reward those users who have the foresight to forecast the declining ability of a player getting by on reputation alone. Some reputational bias will be present until a sizeable crowd of excellent forecasters exists.

[6] Presumably when most players were under contract for the 2015-16 season.

[7] [Figure: variable importance including GAR (varimp)]

Goaltending and Hockey Analytics – Linked by a Paradox?

There may be an interesting paradox developing within hockey. The working theory is that as advanced analysis and data-driven decision-making continue to gain traction within professional team operations and management, the effect of what can be measured as repeatable skill may be shrinking. The Paradox of Skill suggests as absolute skill levels rise, results become more dependent on luck than skill. As team analysts continue (begin) to optimize player deployment, development, and management there should theoretically be fewer inefficiencies and asymmetries within the market. In a hypothetical league of more equitable talent distribution, near perfect information and use of optimal strategies, team results would be driven more by luck than superior management.

Goaltenders Raising the Bar

Certainly forecasting anything, let alone still-evolving hockey analytics, is often a fool’s errand – so why discuss? Well, I believe that the paradox of skill has already manifested itself in hockey and actually provides a loose framework of how advanced analysis will become integrated into the professional game. Consider the rise of modern goaltending.

Absolute NHL goaltender ability has continually increased for the last 30 years. However, differential ability between goaltenders has tightened. It has become increasingly difficult to distinguish long-term, sustainable goaltender ability while variations in results are increasingly owed to random chance. Goalies appear ‘voodoo’ when attempting to measure results (read: ability + luck) using the data currently available – much like the paradox of skill would predict.[1] More advanced ways of measuring goaltending performance will be developed (say, controlling for traffic and angular velocity prior to release), but that will just further isolate and highlight the effect of luck.[2]
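To make the paradox concrete, here is an illustrative simulation (the talent spreads and shot count are made-up numbers, not estimates): as the true spread of goaltender ability tightens, the chance that one season of results correctly identifies the better goalie falls toward a coin flip.

```python
# Illustrative only: shrink the spread of true goalie talent and watch one
# season of results become less able to separate the better goalie from luck.
import numpy as np

rng = np.random.default_rng(0)
SHOTS = 1500  # roughly a starter's season workload (assumed)

def correct_ordering_rate(sd_talent, n_pairs=20000, mean_sv=0.915):
    """How often the truly better of two random goalies also posts the
    better observed save percentage over one simulated season."""
    a = rng.normal(mean_sv, sd_talent, n_pairs)
    b = rng.normal(mean_sv, sd_talent, n_pairs)
    obs_a = rng.binomial(SHOTS, np.clip(a, 0, 1)) / SHOTS
    obs_b = rng.binomial(SHOTS, np.clip(b, 0, 1)) / SHOTS
    return ((a > b) == (obs_a > obs_b)).mean()

for sd in (0.010, 0.005, 0.002):  # tightening talent distribution
    print(f"talent sd {sd}: correct ordering {correct_ordering_rate(sd):.2f}")
```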

Spot the Trend
Data courtesy of hockey-reference.com

Will well-managed teams create a similar paradox amongst competing professional teams in the future? Maybe. Consider that such a team would maximize the expected value of talent acquired, employ optimal on-ice strategies, and employ tactics to improve player development. Successful strategies could be reverse engineered and replicated, cascading throughout the league – in theory. Professional sports leagues are ‘copycat’ leagues and there is too much at stake not to adopt a superior strategy, despite a perceived coolness to new and challenging ideas.

Dominant Strategies

“I don’t care what you do, just stop the puck”

How did goaltending evolve to dominate the game of hockey? And what parallel pathways need to exist in hockey analytics to do the same?

  1. Advances in technology – equipment became lighter and more protective.[3] This allowed goaltenders to move better, develop superior blocking tactics (standing up vs. butterfly), cover more net, and worry less about catching a painful shot. The growth of hockey analytics has been dependent on web scraping, automation, and increasing processing power, and will soon come to rely on data derived from motion-tracking cameras. Barriers to entry and cost of resources are negligible, lending all fanalysts the opportunity to contribute to the game.
  2. Contributions from independent practitioners – The now-ubiquitous goaltending coach position is a relatively new one compared to most professional leagues. In the early 2000s, I was lucky enough to cross paths with innovative goaltending instructors who made new tactics, strategies, and training methods available to young goaltenders. Between their travel, camps, and clinics (and later their own development centers) they diffused innovative approaches to the position, setting the bar higher and higher for students. A few of these coaches went on to become NHL goalie coaches – effectively capturing a position that didn’t exist 30 years prior. Now goalie coaches cascade down to all levels of competitive hockey.[4] Similarly, the most powerful contributions to the hockey analytics movement have been by bright individuals exposing their ideas and studies to the judicious public. The best ideas were built upon and the rest (generally) discarded. Will hockey analytics evolve (read: become accepted widely among executives) faster than goaltending did? I don’t know – a goaltending career takes well over a decade to mature, but goalies play many games, providing feedback on new strategies rather quickly.[5] Comparatively, ideas develop quicker but might take longer to demonstrate their value – not only are humans hard-wired to reject new ideas, there are fewer managerial opportunities to prove a heavily data-driven approach to be a dominant strategy.
  3. Existence of a naïve acceptance – The art (and science) of goaltending is not especially well understood among many coaches, particularly with relative skill levels converging. However, managers and coaches do understand results. Early in my career, I had a coach who was only comfortable with stand-up goaltenders, his own formative experiences occurring when goaltenders predominately remained erect (in order to keep their poorly padded torso and head from constant danger). However, he saw a dominant strategy (more net coverage) and placed faith in my ability without a comprehensive understanding of, or comfort with, modern goaltending. Analytics will have to be accepted the same way – gradually, but built on demonstrated effectiveness. Not everyone is comfortable with statistics and probabilities, but like goaltenders, the job of analysts is to produce results. That means rigorous and actionable work that offers a superior strategy to the status quo. This will earn the buy-in of owners and senior management who understand that they can’t be at a competitive disadvantage.

Forecasting Futility

Clearly the arc of the analytics evolution will differ from the goaltender evolution, the primary reasons being:

  • Any sweeping categorization of a two-decade-plus ‘movement’ is prone to simplification and revisionist history.
  • While goaltending as a whole has improved substantially, incremental differences in ability still obviously exist between goaltenders. In the same way, not all analysts or teams of analysts will be created equal. A non-zero advantage in managerial ability may compound over time. However, the signal will likely be less significant than variation in luck over that extended timeframe. In both disciplines, that rising ability may give way to a paradox of not being able to decipher their respective skills, muddying the waters around results.
  • Goaltending results occur immediately and visibly. Fair or not, an outlier goaltender can be judged after a quarter of a season; managerial results will take longer to come to fruition. Not only that, we only observe one of many alternative histories for the manager, while we get to observe thousands of shots against a goaltender. Managerial decisions will almost always operate under a fog of uncertainty.

Alternatively, it is important to consider the distribution of athlete talent against that of those in the knowledge economy. Goaltenders are bound by normally distributed deviations in size, speed, and strength. Those limitations don’t exist for engineers and analysts, but they do operate in a more complex system, leaving most decisions subject to randomness. This luck is compounded by the negative feedback loops of the draft and salary cap; it is unlikely a masterfully designed team would permanently dominate, but it suggests some teams will hold an analytical advantage and the league won’t turn into some efficient-market-hypothesis-all-teams-50%-corsi-50%-goals-coin-flip game. But if a superstar analyst team could consistently and handily beat a market of 29 other very good analyst teams in a complex system, they should probably take their skills to another more profitable or impactful industry.

xkcd.com

Other Paradoxes of Analytics

Because these are confusing times we live in, I’d be remiss if I didn’t mention two other paradoxes of hockey analytics.

  • Thorough, rigorous work is often difficult to communicate and not easily understood by senior decision-makers. This is a problem in many data-intensive industries – analytical tools outpace the general understanding of how they work. It seems that (much like the goaltending framework available to us) once data-driven strategies are employed and succeed, all teams will be forced to buy in and trust that they have hired competent analysts who can deliver actionable insights from a complex question. Hopefully.

  • With more and more teams buying into analytics, some of the best work is taken private – in-house seemingly overnight, sometimes burying a lot of foundational work and data. That said, these issues are widely understood and there is a noble and concerted effort to maintain transparency and openness. We can only hope that these efforts are appreciated, supported, and replicated.

 

Final Thoughts

The best hockey analysis has borrowed empiricism and data-driven decision-making from the scientific method, creating an expectation that as hockey analytics gain influence at the highest levels, we (collectively) will know more about the game.[7] However, assuming the best hockey analysts end up influencing team behavior, it is possible much of the variation between NHL teams[8] will be random chance – making future predictive discoveries less likely and weakening the relationship of current discoveries.

Additionally, when it feels like the analytical approach to hockey is receiving unjustified push back or skepticism, it is important to remember that the goaltender evolution, initiated by fortuitous circumstance, eventually forced buy-ins from traditionalists by offering a superior approach and results. However, increasing absolute skill in a field can have unintended consequences – relative differences in skill will decrease, possibly causing results to become more dependent on luck than skill. Something to consider next time you try to make sense of the goaltender position.

 

[1] This is not to say all goalies in 2016 are of equal skill levels, but they are absolutely more talented than their ancestors and fall within a smaller range of abilities. That said, outside of a top 2 or 3 guys, the top 5-10 list of goalies is a game of musical chairs, quarter to quarter, season to season.

[2] Goaltenders don’t get a chance to ‘drive the play,’ so it is very important to control for external factors. This can’t be done comprehensively with current data. Even with complete data, it may be futile.

[3] And cooler, possibly attracting better athletes to the position, your author notwithstanding.

[4] Another feature of the paradox of rising skill levels: to fail to improve is the same as getting worse. Hence, employing a goalie coach is necessary in order to prevent a loss of competitiveness. The result: plenty of goalie coaches of varying ability, but likely without a strong effect on their goaltender’s performance. This likely causes some skepticism toward their necessity – probably a result of their own success, since they are indirectly represented by an individual whose immediate results might owe more to luck than to the incremental skill added by the goalie coach.

[5] For example, a strategy devised at 6 years old of lying across the goal line forcing other 6 year-olds to lift the puck proved to be inferior and was consequently dropped from my repertoire.

[7] Maybe even understanding the link between shot attempts and goals (you can read this sarcastically if you like).

[8] And other leagues that are able to track and provide accurate and useful data.

Re-Tooling the Rebuild – An Auction Based Entry Draft System

The Current Entry Draft System
 
In the NHL and NBA the annual entry drafts have become strange affairs. The lottery system (and its ever-changing weights) has at times encouraged fans of middling teams to urge their favorite team to underperform in order to have a higher probability of selecting a top prospect. This is known as tanking or, more euphemistically, re-building or re-tooling. Many rational fans suggest that if you are not going to win a championship (and stronger predictive analytics have helped clarify these probabilities), you might as well maximize the chance of adding top young (cost-controlled) talent. Under the current system, they aren’t wrong.
 
There are no easy solutions to such a problem because there are two opposing forces at play:
  1. The goal of the entry draft is to distribute new talent fairly throughout the league. Ideally, the worst teams should have an opportunity to draft the best talent, giving them an opportunity to compete in the future.
  2. The goal of the league is to maintain a competitive product throughout the season. In a world where the incentive to win is diminished, the league product and brand suffers.
 
A lottery makes some sense. Teams can lose on purpose, but that still doesn’t guarantee the top pick. Would ‘rebuilding’ teams strip down their roster and be satisfied with a top 5 pick? Probably. Would the same team completely throw a few games to increase the probability of drafting 1st overall by 5%? Probably not. Draft lotteries use randomness to uphold a general competitive balance within the league.

Tanking it to the extreme?


 
However, very few teams are happy with the current system. This is a function of dumb luck and perceived abuse of the entry draft system. Research suggests the value of a draft pick decays non-linearly. That is, the difference in value between the 1st overall and 2nd overall is greater than the difference between the 2nd overall and 3rd overall and so on. If you can’t compete, you are better off trying to maximize draft pick value by taking advantage of this non-linear curve.
 
With very few happy with the current system, alternatives have been suggested and have gained some traction (most recently the Gold Plan). However, year-to-year lotteries with 30 teams will never appear completely random to the human mind, so there will inevitably be annual disappointment with the system from all but one fanbase.
 
The Entry Draft Auction Proposal
 
The parameters below require more research, but more importantly would have to be sold to the owners and teams. An expanded rationalization and methodology can be found later, and a quick code sketch of the allocation math follows this list.
  • Each team receives a set amount of draft currency based on its finish during the regular season.
    • The worst team would receive 1,000 base draft units (to be coined Bettmans in the NHL). Rank-ordered from worst to best, each subsequent team would receive 10 fewer units, meaning the champion would receive 710 draft units.
    • Each team would have its base draft units adjusted by the z-score of its offensive production multiplied by 10. A team can receive more draft units than the team below it by out-scoring that team by a significant amount (approximately 23 goals in the NHL).
    • The team’s maximum bid is set to its base draft units plus the offensive-production adjustment. This prevents bottom teams from selling current assets in order to secure enough draft units to guarantee winning the bid for the top pick (this would be a terrible strategy unless there was a generational talent available, but still).
    • Draft units by year could be traded in absolute, share-of-total, or conditional amounts.
  • On draft day, each pick or draft slot is auctioned off in real-time at the draft. The number of draft slots available remains unchanged from the current system.
    • Bids of whole units are blindly and simultaneously submitted. Ties would go to the team with the fewest picks to that point, then to the team with the most slots elapsed since its last pick, else a re-auction between the tied teams.[1] Losing teams would lose no draft units. The winning team would lose their bid amount, or alternatively the value of the 2nd-highest bid.[2]
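As promised above, here is a minimal sketch of the allocation math in code. The function and constants are mine, pulled straight from the bullets above, and a 30-team league is assumed:

```python
# Sketch of the proposed allocation: base units fall 10 per standings spot,
# then are adjusted by 10 x the z-score of goals scored (league mean ~222, sd ~23).
def draft_units(standings_rank, goals_for, mean_gf=222.0, sd_gf=23.0):
    """standings_rank: 1 = worst team ... 30 = champion."""
    base = 1000 - 10 * (standings_rank - 1)
    adjustment = 10 * (goals_for - mean_gf) / sd_gf
    return base + adjustment  # this total is also the team's maximum bid

print(draft_units(1, 222))   # worst team, league-average offense -> 1000.0
print(draft_units(30, 245))  # champion, one sd above average -> 720.0
```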
 
Rationalizations & Methodology
 
Base Draft Unit Allocation
 
The initial allocation of base Draft Units requires research and agreement from all parties. I think there are different ways to do this but here is my framework.
 
A number of attempts have been made at creating an expected value of a draft pick in the NHL. I used research from Michael Schuckers, @DTMAboutHeart, and The Leafs Nation’s Chemmy’s adoption of Avs blogger Jibblescribbits’ work. Each of these models was indexed against the 1st overall pick (given a value of 1,000), providing an average draft value by pick indexed to the 1st overall.

Sources:
http://myslu.stlawu.edu/~msch/sports/Schuckers_NHL_Draft.pdf
http://theleafsnation.com/2011/3/16/on-relative-worth-of-draft-picks
http://donttellmeaboutheart.blogspot.com/2014/11/nhl-draft-pick-value-chart.html

In the NHL, the team with the top draft position can expect to recoup about twice as much talent as the championship team. The shape of the curve also suggests the value by position decays in a non-linear way. Should the difference in expected value received by the worst team and the 2nd-worst team be greater than the difference received by the 2nd-worst and 3rd-worst teams? Probably not in a fair system. Also consider that these positions were likely arranged by a lottery.
 
I would argue that a more equitable system would decay team-level draft value linearly. This can only be accomplished by assigning the granular draft units proposed above. The graphic below reinforces how this could be accomplished by distributing draft units.

Sources:
http://myslu.stlawu.edu/~msch/sports/Schuckers_NHL_Draft.pdf
http://theleafsnation.com/2011/3/16/on-relative-worth-of-draft-picks
http://donttellmeaboutheart.blogspot.com/2014/11/nhl-draft-pick-value-chart.html
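To make the linear-versus-non-linear contrast concrete, here is a toy comparison; the decay rate is invented purely for illustration (the real curves come from the research linked above):

```python
# Illustrative contrast: a hypothetical non-linear pick-value curve vs. the
# proposal's linear team-level allocation, both indexed to 1,000 for the top spot.
import numpy as np

spots = np.arange(1, 31)
nonlinear = 1000 * np.exp(-0.12 * (spots - 1))  # made-up decay rate
linear = 1000 - 10 * (spots - 1)                # proposed base units

for s in (1, 2, 3):
    print(s, round(nonlinear[s - 1]), linear[s - 1])
# The non-linear curve's 1st-to-2nd gap dwarfs its later gaps;
# the linear allocation spaces every team evenly.
```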

An auction system and an agreed-upon distribution of draft units would also allow the league to close the gap between the expected value received by the top and bottom teams. The worst team receiving twice the expected draft value seems excessive in the age of salary-cap-assisted parity. Expanded research might answer this question in a quantitative manner, but realistically the robustness of the research would take a backseat to buy-in among the 30 teams. The Auction Entry Draft proposal sells the idea of a more equitable and competitive league (and nobody envisions themselves being the worst team in the league), so it seems like there would be support for closing the gap.
 
I would suggest the championship team should receive 71% of the draft value of the worst team (see chart above). This is the result of easy math: each team receives 10 fewer base draft units than the team immediately below it in the standings. The formula could be expanded to account for ties among non-playoff teams, distributing units throughout the league from the worst to best teams in a linear fashion.
 
Bonus Draft Units for points scored
 
Yes, it is kind of video game-y, but there are 2 reasons I think it would be worthwhile:
  1. Add some noise to the system. If a generational player came along that you would trade your entire draft for, the last-place team couldn’t sit on their standing position and out-bid everyone; other bottom-feeders could outbid them by out-scoring them by the appropriate margins. Obviously, everyone is trying to score the maximum number of goals anyway; this just keeps teams honest.
  2. Incentivization of higher-scoring strategies – this is generally good for excitement.
Between 2007 and 2016 (excluding the lockout-shortened 2012-13 season), teams scored an average of 222 goals per season, with a standard deviation of 23. Full equation:
Adjusted Draft Units = Base Draft Units + 10 × (Goals For − 222) / 23
Below are the distributions of Goals For z-scores over the last 8 seasons. Roughly 68% of teams (those within one standard deviation of the mean) would not have their draft units adjusted by more than 10.
Data courtesy of hockey-reference.com

Alternatively, this calculation could use goal differential. Or there could be no adjustment at all; again, the concern would be that a team could theoretically tank and guarantee the right to draft a generational talent with their entire draft stock. Adding an adjustment would prevent this strategy.
 
Trading
 
Can’t mess with Trade Deadline Day. Teams can get even more creative since draft picks are no longer constrained by round and standing. Want to trade 100 draft units? Great. Trade for 10% of the other team’s base draft units? Cool. 500 draft units if they make the Cup final, 200 otherwise? Sign right here.
 
Limited Number of Auction Slots
 
This is more of a Players’ Association issue. A cap on the number of auctions keeps the number of drafted players the same or fewer. A case where draft units were not properly rationed and no teams were left to bid on the last few picks would generally be a good thing.
 
Real-Time Auction System
 
This is where I give pause. Entry drafts are high-profile events with lots on the line. The technology component would be critical; any failure would be embarrassing[3] and would require the right safeguards. Every team would have 30 seconds to submit a bid, the winner or a re-auction would be announced immediately, and a mandatory 30 more seconds would pass in order for any team with a technical objection to raise it with officials; then the winner would be on the clock to make their pick.
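For illustration, here is a sketch of how a single sealed-bid slot could be resolved under the tie-break rules proposed earlier (the data structures and team codes are hypothetical):

```python
# Resolve one sealed-bid draft slot: highest bid wins; ties go to the team with
# the fewest picks so far, then the longest wait since its last pick, else re-auction.
def resolve_slot(bids, picks_so_far, slots_since_last_pick):
    top = max(bids.values())
    tied = [t for t, b in bids.items() if b == top]
    if len(tied) > 1:
        fewest = min(picks_so_far[t] for t in tied)
        tied = [t for t in tied if picks_so_far[t] == fewest]
    if len(tied) > 1:
        longest = max(slots_since_last_pick[t] for t in tied)
        tied = [t for t in tied if slots_since_last_pick[t] == longest]
    if len(tied) > 1:
        return None  # still tied: re-auction among the tied teams
    winner = tied[0]
    return winner, bids[winner]  # or charge the 2nd-highest bid, per footnote [2]

print(resolve_slot({"NYI": 120, "CAR": 120, "NSH": 90},
                   {"NYI": 2, "CAR": 1, "NSH": 3},
                   {"NYI": 4, "CAR": 7, "NSH": 1}))
# -> ('CAR', 120): the fewest-picks-so-far rule breaks the tie
```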
 
Information Overload?
 
Entry drafts have traditionally been monkeys throwing darts at a dart board, right? So why add in another layer of complexity?
 
Well, economists love auctions because of their ability to imply value, particularly hard-to-calculate values (like the right to draft an unproven, underdeveloped teenager). An auction-based system would be a bonanza of implied information all while being highly entertaining.

It would also further encourage the operational analysis that has recently grown in sport. Drafts would be fueled by both computer models and high-drama gambles.[4] Data at the draft slot-team level could be made available to teams and the public, allowing for a unique look into the question: how do teams value draft picks? The trend is clear – advanced analytical methods are becoming the norm in sport, and this system would only accelerate that healthy trend.
 
Most teams would struggle to neatly quantify the value of a draft auction (factors could include, but are not limited to: talent currently on the board, total amount of draft units in circulation, current team draft units, and historical valuation of the draft slot), but it would be a beautiful mess of varying strategies with plenty of unforeseen events. Poorly managed teams would struggle under this configuration, but the signal is clear under an auction system: organizations must commit to competing annually, provide an exciting product, and leverage analytical methods.
 
Conclusion
 
The Entry Draft Auction would:
 
  • Remove the incentive to tank, distributing talent in a more equitable, linear way
  • Incentivize offensive strategies, increasing quality of product
  • Create a unique and highly entertaining experience, producing highly informative data
  • Create more granular and flexible trade blocks, helping facilitate trades and optimal talent distribution around the league

 


[1] Other tie breakers may be applied.
[2] This distinction would really only matter at the top of the draft, but it is important.
[3] I can’t think of any league bungling a technological roll out in recent memory. Nope.
[4]  There would be shades of Mike Ditka going all-in on Ricky Williams in 1999.

Welcome to Game Theory

Thanks for visiting the CrowdScout blog – Game Theory!

The CrowdScout platform was designed to automatically and elegantly aggregate the opinions of awesome fanalysts and create unique content – dynamic player rankings that can:
  • aid the decision-making of managers (fantasy or professional)
  • settle arguments that happen over cold ones (or not)
  • provide benchmarks for more advanced analysis, i.e. when determining which players are over/undervalued
  • identify scouts with the ability to be ahead of the curve in judging talent

The inaugural beta season was a great learning experience, and I have some exciting plans for season 2, but clearly the website hasn’t hit the critical mass to provide dynamic and self-sustaining content. To supplement the CrowdScout system, I’ll be throwing out some of my own thoughts in my Game Theory blog.

What is Game Theory (or what will it be)?

  • Hopefully delivers both qualitative and quantitative insights into sports (predominantly hockey) – the original idea behind the CrowdScout platform
  • Part thought experiment, part analysis. Some logic and some numbers
  • Whatever seems interesting and easy for me to write. If it is boring to write, I can’t imagine how bad it would be to read
  • Ideas a little different from the standard – meant to be critiqued. Ideas are stronger with more diverse input – one of the main principles behind CrowdScout
  • Potentially more advanced analysis, possibly combining my own proprietary CrowdScout data with public data

My Background

I’ve been lucky enough to live the lives of a collegiate hockey goaltender[1], an antitrust economist[2], and a data scientist. I plan on relying on the ensemble of my experiences rather than any single one – there are more interesting economists and statisticians discussing sports worthy of your time (I believe market forces have spoken on my goaltending abilities as well – unless the NHL is really serious about increasing goals). After finishing my college hockey career, I took some time away from being completely immersed in hockey while the hockey analytics community matured. When I decided I wanted to contribute, I thought it would be best to create something different – a platform able to combine analytic and traditional information in a meaningful way. I hope to do the same in the Game Theory blog.


[1] Full disclosure: I played Division III NESCAC hockey (only because there was no Division IV, as my coach liked to remind me)
[2] I also fell ass-backwards into antitrust economic consulting, advising during the most recent NHL lockout – a bittersweet, but very exciting experience