Automated and scalable approaches for understanding the semantics of places are critical to improving both existing and emerging mobile services. In this paper, we present CrowdSense@Place (CSP), a framework that exploits a previously untapped resource - opportunistically captured images and audio clips from smartphones - to link place visits with place categories (e.g., store, restaurant). CSP combines signals based on location and user trajectories (using WiFi/GPS) with various visual and audio place "hints" mined from opportunistic sensor data. Place hints include words spoken by people, text written on signs, and objects recognized in the environment. We evaluate CSP with a seven-week, 36-user experiment involving 1,241 places in five locations around the world. Our results show that CSP can classify places into a variety of categories with an overall accuracy of 69%, outperforming currently available alternative solutions.