I’m working on a post for later in the day, and I found this interesting. Huge thanks to Scott Sereday of Basketball-Analysis.com for providing the data I analyzed.
Here are the NBA-wide expected and actual free-throw percentages between the 2005-06 and 2009-10 seasons based on the type of foul that preceded the attempts. The expected value is determined by using a weighted average free-throw percentage (because different players are sent to the line by shooting fouls than are sent to the line by non-shooting fouls).
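The weighted-average calculation can be sketched in a few lines of Python. The player names and numbers below are made up purely for illustration (they are not from Scott's data set); the point is that the expected percentage for a foul type depends on *which* players that foul type sends to the line, weighted by how many attempts each drew:

```python
# A minimal sketch of a weighted-average expected FT%, assuming we know
# each player's season FT% and his attempts drawn from a given foul type.
# All names and numbers are hypothetical, not from the actual data.

def expected_ft_pct(attempts_by_player, season_pct_by_player):
    """Expected FT% for one foul type: each player's season FT%,
    weighted by the attempts that foul type sent him to the line for."""
    total_attempts = sum(attempts_by_player.values())
    return sum(
        attempts * season_pct_by_player[player]
        for player, attempts in attempts_by_player.items()
    ) / total_attempts

# Hypothetical season percentages for two players.
season_pct = {"good_shooter": 0.88, "poor_shooter": 0.55}

# Shooting fouls split evenly; non-shooting fouls skew toward the good
# shooter (e.g. intentional fouls late in games target the ball-handler).
shooting     = {"good_shooter": 100, "poor_shooter": 100}
non_shooting = {"good_shooter": 180, "poor_shooter": 20}

print(round(expected_ft_pct(shooting, season_pct), 3))      # 0.715
print(round(expected_ft_pct(non_shooting, season_pct), 3))  # 0.847
```

Even with identical shooters available, the skewed attempt mix after non-shooting fouls pushes the expected percentage well above the shooting-foul baseline, which is the pattern the post is describing.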
I think the expected free-throw percentage after non-shooting fouls is so much higher than after shooting fouls because teams intentionally foul late in games. The gap suggests that leading teams are better at getting the ball into the hands of quality free-throw shooters than trailing teams are at steering their fouls toward poor free-throw shooters. That makes sense: the leading team can put its quality free-throw shooters on the court and leave its poor ones on the bench.
If intentional fouling largely explains the difference in expected free-throw percentage by foul type, then a disproportionate number of free throws after non-shooting fouls come late in games, when players are tired and more likely to struggle at the line. That would explain why the actual percentage after non-shooting fouls falls short of the expected percentage.
I have one other theory: players might benefit from the rhythm of having just taken a shot. Players often put up a shot anyway after the referee blows the whistle, and that habit could actually help them. If I were a player and drew a non-shooting foul that would send me to the free-throw line, I'd shoot the ball instead of handing it straight to the referee whenever possible. Better safe than sorry.