Recent progress in video style transfer has produced promising results with reduced flickering artifacts. However, existing algorithms mainly trade generality for efficiency, training one network per style example, and often work only on short video clips. In this work, we propose a video multi-style transfer (VMST) framework that enables fast, multi-style video transfer within a single network. Specifically, we design a multi-instance normalization block (MIN-Block) to learn multiple style examples and two ConvLSTM modules to enforce temporal consistency. The proposed algorithm generates temporally consistent video transfer results in different styles while keeping each stylized frame visually pleasing. Extensive experimental results show that the proposed method performs favorably against single-style models and against post-processing techniques that alleviate the flickering issue. A single model achieves as many as 120 stylization effects, and we show results on long videos consisting of thousands of frames.
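The multi-instance normalization idea named above can be illustrated as instance normalization with one affine parameter pair per style, selected by a style index. The following is a minimal NumPy sketch under that assumption; the class and variable names are hypothetical and do not reflect the paper's actual implementation:

```python
import numpy as np

class MultiInstanceNorm:
    """Illustrative sketch: instance norm with per-style affine parameters.

    Shared normalization statistics are computed per sample and channel;
    a style index selects which (gamma, beta) pair is applied. All names
    and shapes here are assumptions for illustration only.
    """

    def __init__(self, num_styles, num_channels, eps=1e-5):
        self.eps = eps
        # One scale/shift pair per style (trivially initialized here;
        # these would be learned in a real network).
        self.gamma = np.ones((num_styles, num_channels))
        self.beta = np.zeros((num_styles, num_channels))

    def __call__(self, x, style_id):
        # x: feature map of shape (N, C, H, W); normalize over H, W.
        mean = x.mean(axis=(2, 3), keepdims=True)
        var = x.var(axis=(2, 3), keepdims=True)
        x_hat = (x - mean) / np.sqrt(var + self.eps)
        g = self.gamma[style_id][None, :, None, None]
        b = self.beta[style_id][None, :, None, None]
        return g * x_hat + b

# Switching style_id applies a different affine transform to the same
# normalized features, i.e., a different stylization effect.
min_block = MultiInstanceNorm(num_styles=120, num_channels=4)
feats = np.random.randn(2, 4, 8, 8)
out = min_block(feats, style_id=3)
```

With this design, supporting many styles costs only one extra (gamma, beta) pair per style rather than one network per style.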