An image is a very effective medium for conveying emotions, and many researchers have investigated the emotions of images using various extracted features. In this paper, we focus on two high-level features, the object and the background, and assume that the semantic information in an image is a good cue for predicting its emotion. An object is one of the most important elements that define an image, and our experiments show that in most cases there is a strong correlation between the objects in an image and its emotion. Even for the same object, the evoked emotion can differ slightly with the background, so we also use the semantic information of the background to improve prediction performance. By combining these different levels of features, we build an emotion-based feedforward deep neural network that produces the emotion values of a given image. The output emotion values in our framework are continuous values in a two-dimensional space (valence and arousal), which describe emotions more effectively than a small number of emotion categories. Experiments confirm the effectiveness of our network in predicting the emotions of images.
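The feature fusion described above can be illustrated with a minimal feedforward sketch: object and background feature vectors are concatenated and passed through a small network that regresses two continuous values (valence, arousal). All concrete details here — feature dimensions, layer sizes, activations, and the tanh-bounded output range — are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(x, w, b):
    """Affine layer: x @ w + b."""
    return x @ w + b

def relu(x):
    return np.maximum(x, 0.0)

# Hypothetical feature dimensions; the abstract does not specify them.
obj_dim, bg_dim, hidden = 512, 256, 128

# Randomly initialized weights stand in for trained parameters.
w1 = rng.normal(0.0, 0.01, (obj_dim + bg_dim, hidden))
b1 = np.zeros(hidden)
w2 = rng.normal(0.0, 0.01, (hidden, 2))
b2 = np.zeros(2)

def predict_emotion(object_feat, background_feat):
    """Fuse object and background features and regress (valence, arousal).

    tanh bounds each output to [-1, 1]; the actual output range and
    activation in the paper's network are not stated in the abstract.
    """
    x = np.concatenate([object_feat, background_feat])
    h = relu(dense(x, w1, b1))
    return np.tanh(dense(h, w2, b2))

valence, arousal = predict_emotion(rng.normal(size=obj_dim),
                                   rng.normal(size=bg_dim))
```

Using continuous (valence, arousal) outputs rather than a softmax over emotion categories lets the model express fine-grained differences, such as the same object appearing against different backgrounds.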