This paper presents a subspace-based acoustic factorization framework for transform-based adaptation in speech recognition. In the proposed method, adaptation transforms are projected onto factor-dependent low-rank subspaces in a way that decouples the combined extrinsic factors affecting the speech signal. Mismatch between the observed speech and the acoustic models is usually caused by multiple acoustic factors simultaneously, such as the speaker and the environment. Data-driven adaptation methods, such as constrained MLLR (CMLLR), compensate for all sources of mismatch jointly. In many scenarios, however, it is highly desirable to separate the sources of mismatch so that speaker and environment variability can be adapted to independently, which adds flexibility to the model adaptation framework. For example, a speaker transform obtained in one environment can be reused when the same speaker appears in a different environment, and an environment transform estimated during training, independently of speaker identity, can be applied to an unseen speaker at deployment. One way to achieve this factorization is to construct the sets of transforms so that they are orthogonal to each other, so that a change in one acoustic factor leaves the other factors intact. The proposed subspace approach provides a straightforward factor-analysis framework while allowing us to explicitly formulate the independence of the estimated factor transforms. A series of experiments on the Aurora 4 corpus validates the approach.
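As a rough illustration of the kind of factorization involved (the notation here is assumed for exposition and is not taken from the paper itself), factored CMLLR-style adaptation is often written as a cascade of per-factor affine feature transforms:

```latex
\hat{\mathbf{x}}_t
  = \mathbf{A}^{(s)} \left( \mathbf{A}^{(e)} \mathbf{x}_t + \mathbf{b}^{(e)} \right)
    + \mathbf{b}^{(s)},
```

where $(\mathbf{A}^{(s)}, \mathbf{b}^{(s)})$ denotes a speaker transform and $(\mathbf{A}^{(e)}, \mathbf{b}^{(e)})$ an environment transform. The goal of factorization is to estimate the two sets of parameters independently, so that either transform can be swapped out (e.g., reusing a speaker transform in a new environment) without re-estimating the other.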