The direct use of a deep convolutional neural network (CNN) in no-reference image quality assessment (NR-IQA) usually struggles to achieve good performance due to a lack of training data, a problem that can be alleviated by transfer learning. However, the final performance varies greatly depending on the similarity between the source and target tasks. In particular, IQA involves many kinds of distortion types, each requiring different kinds of features to predict visual quality. In this paper, to make the transferred model robust to various distortion types, we propose a Multiple-level Feature-based Image Quality Assessor (MFIQA) that considers multiple levels of features simultaneously. Through rigorous experiments, we demonstrate that MFIQA consistently yields state-of-the-art performance regardless of distortion type, covering both synthetic and authentic corruptions.
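To illustrate the general idea of combining multiple feature levels, the following is a minimal, hypothetical sketch: a small CNN backbone whose per-stage feature maps (standing in for low-, mid-, and high-level features) are globally pooled and concatenated before a linear head regresses a single quality score. The stage sizes, layer choices, and regression head are illustrative assumptions, not the actual MFIQA architecture described in the paper.

```python
import torch
import torch.nn as nn

class MultiLevelQualityRegressor(nn.Module):
    """Toy multi-level feature model for NR-IQA (illustrative only)."""

    def __init__(self):
        super().__init__()
        def stage(c_in, c_out):
            # One downsampling conv stage of the toy backbone.
            return nn.Sequential(
                nn.Conv2d(c_in, c_out, kernel_size=3, stride=2, padding=1),
                nn.BatchNorm2d(c_out),
                nn.ReLU(inplace=True))
        # Three stages standing in for low-/mid-/high-level features.
        self.stages = nn.ModuleList([stage(3, 16), stage(16, 32), stage(32, 64)])
        self.pool = nn.AdaptiveAvgPool2d(1)
        # Regress one quality score from the concatenated level features.
        self.head = nn.Linear(16 + 32 + 64, 1)

    def forward(self, x):
        feats = []
        for s in self.stages:
            x = s(x)
            feats.append(self.pool(x).flatten(1))  # (B, C) vector per level
        return self.head(torch.cat(feats, dim=1))  # (B, 1) quality score

model = MultiLevelQualityRegressor().eval()
with torch.no_grad():
    score = model(torch.randn(2, 3, 64, 64))
print(tuple(score.shape))  # (2, 1)
```

The design choice being sketched is that pooling features from every stage, rather than only the final one, lets the regressor draw on low-level statistics (useful for noise or blur) as well as high-level semantics (useful for authentic, content-dependent distortions).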