We propose an adaptive eye-tracking system for robust human-computer interaction in dynamically changing environments, based on the partially observable Markov decision process (POMDP). In our system, real-time eye-tracking optimization is tackled with a flexible world-context-model-based POMDP approach that requires less data and adaptation time than hard world-context-model approaches. The challenge is to partition the huge belief space into world-context models and to search for optimal control parameters within the current world-context model under real-time constraints. Offline learning determines multiple world-context models through image-quality analysis over the joint space of transition, observation, and reward distributions, and an approximate world-context model is balanced with online learning over a localized horizon. The online learning is formulated as dynamic parameter control with incomplete information under real-time constraints, and is solved with a real-time Q-learning approach. Extensive experiments on realistic videos have produced very encouraging results.
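The online-learning formulation above can be illustrated with a minimal tabular Q-learning sketch. All names here are assumptions for illustration, not the paper's method: states stand in for coarse image-quality bins, actions for discrete tracker parameter settings, and the reward for per-frame tracking accuracy feedback.

```python
import random

# Hypothetical setup: 3 image-quality states, 4 parameter settings.
N_STATES, N_ACTIONS = 3, 4
ALPHA, GAMMA, EPSILON = 0.2, 0.9, 0.1

# Toy environment: each state has one parameter setting that yields the
# highest tracking reward (a stand-in for eye-tracking accuracy).
BEST_ACTION = {0: 1, 1: 3, 2: 0}

def step(state, action):
    """Return (reward, next_state) for one simulated tracking frame."""
    reward = 1.0 if action == BEST_ACTION[state] else 0.0
    next_state = random.randrange(N_STATES)  # environment drifts randomly
    return reward, next_state

def train(steps=5000, seed=0):
    random.seed(seed)
    Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
    state = 0
    for _ in range(steps):
        if random.random() < EPSILON:                    # explore
            action = random.randrange(N_ACTIONS)
        else:                                            # exploit
            action = max(range(N_ACTIONS), key=lambda a: Q[state][a])
        reward, next_state = step(state, action)
        # Standard one-step Q-learning update.
        td_target = reward + GAMMA * max(Q[next_state])
        Q[state][action] += ALPHA * (td_target - Q[state][action])
        state = next_state
    return Q

Q = train()
policy = {s: max(range(N_ACTIONS), key=lambda a: Q[s][a]) for s in range(N_STATES)}
print(policy)
```

After training, the greedy policy selects the highest-reward parameter setting per state; in the real system this per-frame update is what permits adaptation within real-time constraints, since each step costs only one table update.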