Routing vehicles based on real-time traffic conditions has been shown to significantly reduce travel time, and hence cost, in high-volume traffic situations. However, transforming real-time traffic data into optimal route decisions is a computational challenge, in large part because of the sheer volume of data that could be valuable in route selection. The authors model the dynamic route determination problem as a Markov decision process (MDP) and present procedures for identifying traffic data that have no decision-making value. Such identification can be used to reduce the state space of the MDP, thereby improving its computational tractability. This reduction is achieved in two steps. The first is an a priori reduction, performed before the trip begins on a stationary deterministic network with upper and lower bounds on the cost functions. The second step reduces the state space further on the nonstationary stochastic road network as the trip progresses. The authors demonstrate the potential computational advantages of the introduced methods using actual data collected on a road network in southeast Michigan.
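The a priori reduction described above can be illustrated with a small sketch. This is not the authors' exact procedure; it is a minimal bound-based dominance test on a toy network, where each arc carries a lower and upper travel-cost bound. If some alternative at a decision node is better in the worst case than another successor is in the best case, the dominated successor (and real-time data about arcs reachable only through it) carries no decision-making value. All names and the network are hypothetical.

```python
import heapq

def cost_to_go(adj, dest, bound):
    # Dijkstra on the reversed graph: cost-to-go to dest using the
    # chosen per-arc bound ('lo' for optimistic, 'hi' for pessimistic).
    rev = {}
    for u, arcs in adj.items():
        for v, w in arcs.items():
            rev.setdefault(v, {})[u] = w[bound]
    dist = {dest: 0.0}
    pq = [(0.0, dest)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in rev.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

# Toy network: adj[u][v] = {'lo': ..., 'hi': ...} cost bounds on arc (u, v).
adj = {
    'A': {'B': {'lo': 1, 'hi': 2}, 'C': {'lo': 8, 'hi': 10}},
    'B': {'D': {'lo': 1, 'hi': 3}},
    'C': {'D': {'lo': 1, 'hi': 2}},
    'D': {},
}
lo = cost_to_go(adj, 'D', 'lo')  # optimistic cost-to-go to D
hi = cost_to_go(adj, 'D', 'hi')  # pessimistic cost-to-go to D

def dominated_successors(u):
    # Successor v is dominated at node u if some alternative v2 beats it
    # even in the worst case: hi-bound via v2 < lo-bound via v.
    # States describing traffic on dominated branches can be pruned.
    arcs = adj[u]
    out = []
    for v, w in arcs.items():
        best_case_v = w['lo'] + lo[v]
        best_alternative = min(
            (w2['hi'] + hi[v2] for v2, w2 in arcs.items() if v2 != v),
            default=float('inf'))
        if best_alternative < best_case_v:
            out.append(v)
    return out

print(dominated_successors('A'))  # → ['C']
```

Here the detour through C costs at least 9 while going through B costs at most 5, so real-time information about arc (C, D) cannot change the decision at A and the corresponding states can be dropped from the MDP.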
Number of pages: 12
Journal: IEEE Transactions on Intelligent Transportation Systems
Publication status: Published - September 2005
Bibliographical note (Funding Information):
Manuscript received July 30, 2004; revised November 15, 2004. This paper was supported by the Michigan Department of Transportation (MDOT) and the Alfred P. Sloan Foundation through the University of Michigan Trucking Industry Program (UMTIP). The Associate Editor for this paper was S. C. Wong.
All Science Journal Classification (ASJC) codes
- Automotive Engineering
- Mechanical Engineering
- Computer Science Applications