We explore an extension of James–Stein–type estimators that preserves their superiority as the sample size tends to infinity. Instead of shrinking a base estimator toward a fixed point, we shrink it toward a data-dependent point. We derive analytic expressions for the asymptotic risk and bias of James–Stein–type estimators shrunk toward a data-dependent point and prove that they have smaller asymptotic risk than the base estimator. Shrinking an estimator toward a data-dependent point turns out to be equivalent to combining two random variables using the James–Stein rule. We propose a general combination scheme that includes random combination (the James–Stein combination) and the usual nonrandom combination as special cases. As an example, we apply our method to combine the least absolute deviations estimator and the least squares estimator. Our simulation study indicates that the resulting combination estimators have desirable finite-sample properties when the errors are drawn from symmetric distributions. Finally, using stock return data, we present empirical evidence that the combination estimators have the potential to improve out-of-sample prediction in terms of both mean squared error and mean absolute error.
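To make the combination idea concrete, the following is a minimal sketch in Python of shrinking an ordinary least squares (OLS) estimate toward a least absolute deviations (LAD) estimate with a positive-part, James–Stein–style weight. The IRLS-based LAD solver `lad_fit`, the Mahalanobis-type distance, the shrinkage constant `c = p - 2`, and the positive-part rule are all illustrative assumptions for this sketch, not the paper's exact combination formula.

```python
import numpy as np

def lad_fit(X, y, n_iter=200, tol=1e-10, eps=1e-8):
    """Least absolute deviations via iteratively reweighted least squares (IRLS)."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]            # start from the OLS solution
    for _ in range(n_iter):
        w = 1.0 / np.maximum(np.abs(y - X @ beta), eps)    # IRLS weights for the L1 loss
        XtW = X.T * w                                      # equals X' diag(w)
        beta_new = np.linalg.solve(XtW @ X, XtW @ y)
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta

def js_combine(X, y, c=None):
    """Shrink the OLS estimate toward the LAD estimate using a positive-part
    James-Stein-style weight (an illustrative rule, not the paper's exact one)."""
    n, p = X.shape
    beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
    beta_lad = lad_fit(X, y)
    diff = beta_ols - beta_lad
    # Mahalanobis-type squared distance, scaled by the estimated OLS covariance.
    resid = y - X @ beta_ols
    sigma2 = resid @ resid / (n - p)
    cov_ols = sigma2 * np.linalg.inv(X.T @ X)
    d = float(diff @ np.linalg.solve(cov_ols, diff))
    if c is None:
        c = max(p - 2, 1)                                  # classical James-Stein constant
    w = max(0.0, 1.0 - c / d) if d > 0 else 0.0            # positive-part shrinkage weight
    return beta_lad + w * diff, w

# Example with heavy-tailed but symmetric errors, where LAD is competitive with OLS.
rng = np.random.default_rng(0)
n, p = 200, 5
X = rng.normal(size=(n, p))
y = X @ np.ones(p) + rng.standard_t(df=3, size=n)
beta_c, w = js_combine(X, y)
print("weight on the OLS direction:", round(w, 3))
print("combined estimate:", np.round(beta_c, 3))
```

The weight is random because it depends on the observed distance between the two estimates, which is the sense in which the James–Stein combination differs from a fixed, nonrandom convex combination; when the two estimates are far apart the weight approaches one and the combination stays close to OLS, and when they are close the combination is pulled toward LAD.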