Accuracy vs. Explainability Tradeoff
This article, which tries to explain machine learning to statisticians, is fascinating, though I'm not sure I fully understand it, and the tone is a bit negative. Here's the simplest way I can think to put it.
If I have:
(b1 * x1) + (b2 * x2) + (b3 * x3) + a = y
is the statistician trying to minimize error in the estimated b values (inference on the coefficients), while the machine learning person is trying to minimize error in the predicted y values (prediction)?
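Here's a toy sketch of that distinction (the data, the "true" coefficients, and the split are all made up for illustration): fit a one-variable version of the model by ordinary least squares, then score it two ways — how close the estimated b is to the true coefficient, versus how well it predicts held-out y values.

```python
import random

# Toy "truth": y = 2*x + 1 plus noise (numbers are made up).
random.seed(0)
xs = [i / 10 for i in range(100)]
ys = [2 * x + 1 + random.gauss(0, 0.5) for x in xs]

# Split into training and held-out data.
train_x, test_x = xs[::2], xs[1::2]
train_y, test_y = ys[::2], ys[1::2]

# Ordinary least squares for y = b*x + a (closed form).
n = len(train_x)
mx = sum(train_x) / n
my = sum(train_y) / n
b = sum((x - mx) * (y - my) for x, y in zip(train_x, train_y)) / \
    sum((x - mx) ** 2 for x in train_x)
a = my - b * mx

# "Statistician's" question: how close is b-hat to the true coefficient 2?
coef_error = abs(b - 2)

# "ML person's" question: how well do we predict held-out y values?
test_mse = sum((b * x + a - y) ** 2
               for x, y in zip(test_x, test_y)) / len(test_x)

print(coef_error, test_mse)
```

Same fitted line, two different scorecards — which is roughly the split the article seems to be describing.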
And if you fit the sample "too correctly", the ML person will say you are overfitting: the model will do worse on new data that was not in the sample/training set.
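That "too correct" case can be shown with a pure-Python sketch (again with made-up data): a degree-7 polynomial pushed through 8 noisy points hits every training point exactly, yet between those points it can swing far away from the simple true relationship.

```python
import random

random.seed(1)

# Same toy "truth": y = 2*x + 1 plus noise, but only 8 training points.
train_x = list(range(8))
train_y = [2 * x + 1 + random.gauss(0, 0.5) for x in train_x]

def lagrange(xq, xs, ys):
    """Evaluate the degree-(n-1) interpolating polynomial through (xs, ys) at xq."""
    total = 0.0
    for j in range(len(xs)):
        term = ys[j]
        for k in range(len(xs)):
            if k != j:
                term *= (xq - xs[k]) / (xs[j] - xs[k])
        total += term
    return total

# The interpolating polynomial is "too correct": zero error on training data.
train_err = max(abs(lagrange(x, train_x, train_y) - y)
                for x, y in zip(train_x, train_y))
print(train_err)  # 0.0

# But halfway between the training points, compare it to the true y = 2*x + 1.
test_x = [x + 0.5 for x in range(7)]
test_mse = sum((lagrange(x, train_x, train_y) - (2 * x + 1)) ** 2
               for x in test_x) / len(test_x)
print(test_mse)
```

Zero training error, nonzero (and potentially large) error off the training points — that's overfitting in miniature.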
Is that it?