A decision-directed learning strategy is presented to recursively estimate (i.e., track) the time-varying a priori distribution for a multivariate empirical Bayes adaptive classification rule. The problem is formulated by modeling the prior distribution as a finite-state vector Markov chain and using past decisions to estimate the time evolution of the state of this chain. The solution is obtained by implementing an exact recursive nonlinear estimator for the rate vector of a multivariate discrete-time point process representing the decisions. This estimator obtains the Doob decomposition of the decision process with respect to the σ-field generated by all past decisions and corresponds to the nonlinear least squares estimate of the prior distribution. Monte Carlo simulation results are provided to assess the performance of the estimator.
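The scheme described above can be illustrated with a minimal sketch. This is not the paper's exact estimator; it assumes a two-state chain of two-class priors, unit-variance Gaussian class-conditionals with hypothetical means `MU`, and an assumed transition matrix `TRANS`. The filter propagates a belief over the chain states and updates it by Bayes' rule using the likelihood of each decision (the point-process observation); the estimated prior is then the conditional mean over states, i.e., the least-squares estimate given past decisions.

```python
import math
import random

def gauss_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Hypothetical chain states, each a candidate prior over two classes.
STATE_PRIORS = [(0.8, 0.2), (0.2, 0.8)]
# Assumed transition matrix for the hidden Markov chain on prior states.
TRANS = [[0.95, 0.05], [0.05, 0.95]]
MU = (0.0, 2.0)  # assumed unit-variance Gaussian class means

def threshold(prior):
    # MAP threshold for N(mu0,1) vs N(mu1,1); shifts with the prior odds.
    return (MU[0] + MU[1]) / 2 + math.log(prior[0] / prior[1]) / (MU[1] - MU[0])

def decide(x, prior):
    return 1 if x > threshold(prior) else 0

def p_decide_one(state_prior, prior_used):
    # Probability the classifier outputs class 1 given the chain state,
    # marginalizing over the true class drawn from that state's prior.
    thr = threshold(prior_used)
    return sum(p * (1.0 - gauss_cdf(thr - mu)) for p, mu in zip(state_prior, MU))

def step(belief, decision, prior_used):
    # One recursion: propagate the belief through the chain, then apply
    # Bayes' rule with the decision likelihood (point-process observation).
    pred = [sum(belief[i] * TRANS[i][j] for i in range(2)) for j in range(2)]
    lik = []
    for s in range(2):
        p1 = p_decide_one(STATE_PRIORS[s], prior_used)
        lik.append(p1 if decision == 1 else 1.0 - p1)
    post = [pred[s] * lik[s] for s in range(2)]
    z = sum(post)
    return [p / z for p in post]

random.seed(0)
belief = [0.5, 0.5]  # initial belief over chain states
state = 0            # true (hidden) chain state
for t in range(500):
    # Evolve the true chain and draw a labeled observation.
    state = state if random.random() < TRANS[state][state] else 1 - state
    true_prior = STATE_PRIORS[state]
    cls = 0 if random.random() < true_prior[0] else 1
    x = random.gauss(MU[cls], 1.0)
    # Estimated prior = conditional mean over chain states.
    est = [sum(belief[s] * STATE_PRIORS[s][c] for s in range(2)) for c in range(2)]
    d = decide(x, est)          # decision made with the current prior estimate
    belief = step(belief, d, est)  # decision-directed update

print(belief)
```

The loop makes each classification with the current prior estimate and then feeds that decision back into the filter, which is the decision-directed aspect: no true labels are ever used after initialization.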
(c) 1987 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.