Researchers have developed an AI algorithm capable of recognizing emotions in works of art, taking a step toward creating machines with emotional intelligence.
The study was conducted by researchers at the Stanford Institute for Human-Centered Artificial Intelligence together with collaborators in France and Saudi Arabia. It focused on teaching computers not only to identify the objects in an image but also to understand how that image evokes emotional responses in people.
The work could pave the way for systems that interpret the emotional content of images far more deeply than today's computer vision.
The researchers chose art as the focus for their study because artists aim to elicit specific emotional reactions in viewers. The algorithm can classify artworks into one of eight emotional categories, including awe, amusement, fear, and sadness.
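As a rough illustration of that classification step, the sketch below sets up a conventional transfer-learning pipeline: a pretrained image backbone with its final layer swapped for an eight-way emotion head. The backbone choice and preprocessing are assumptions for illustration, not the study's own pipeline; the first four label names come from the article, and the rest follow the published ArtEmis label set.

```python
import torch.nn as nn
from torchvision import models, transforms

# Eight emotion categories: four are named in the article, the remainder
# are taken from the published ArtEmis label set.
EMOTIONS = ["awe", "amusement", "fear", "sadness",
            "contentment", "excitement", "anger", "disgust"]

def build_emotion_classifier(num_classes: int = len(EMOTIONS)) -> nn.Module:
    """Hypothetical transfer-learning setup: a pretrained ResNet-50
    backbone with its final layer replaced by an eight-way emotion head."""
    model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

# Standard ImageNet preprocessing for the pretrained backbone.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
```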
It goes beyond identifying the overall mood of a painting, pinpointing differing emotions within the same image. Additionally, the AI generates written captions that accurately describe the painting's content and explain the emotional response it evokes.
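One way to surface mixed emotions in a single painting is to keep the classifier's full output distribution rather than only its top label. The helper below is a hypothetical inference wrapper around the classifier sketched above, not the study's own evaluation code.

```python
import torch

def emotion_distribution(model, image_tensor, labels):
    """Return (emotion, probability) pairs sorted by probability, so that
    a painting evoking both awe and sadness surfaces both emotions."""
    model.eval()
    with torch.no_grad():
        logits = model(image_tensor.unsqueeze(0))        # add batch dimension
        probs = torch.softmax(logits, dim=-1).squeeze(0)
    return sorted(zip(labels, probs.tolist()), key=lambda pair: -pair[1])
```

Run on a preprocessed painting, this would return something like [("awe", 0.41), ("sadness", 0.27), ...], exposing secondary emotions alongside the dominant one.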
To train the AI, the researchers created a new dataset called ArtEmis, which includes 81,000 paintings found online and 440,000 emotional responses from more than 6,500 participants. These responses record how the artworks made viewers feel and include explanations for their emotional choices. The AI uses this data to train neural speakers: neural networks that generate written responses to visual art. These neural speakers express the emotions a painting evokes and explain, in natural language, the reasoning behind those emotions.
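A minimal sketch of what such a neural speaker might look like, assuming a common encoder-decoder captioning design: image features and an emotion embedding seed an LSTM decoder that generates the explanation text. The architecture, dimensions, and names here are illustrative assumptions, not the authors' exact model.

```python
import torch
import torch.nn as nn

class NeuralSpeaker(nn.Module):
    """Sketch of an ArtEmis-style neural speaker. Each training example
    pairs a painting with a chosen emotion and a free-text explanation
    of why the painting evokes it; the decoder learns to produce that
    explanation conditioned on the image and the emotion."""

    def __init__(self, vocab_size, feat_dim=2048, emb_dim=256,
                 hid_dim=512, num_emotions=8):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.emotion_embed = nn.Embedding(num_emotions, emb_dim)
        # Project image features plus the emotion embedding into the
        # decoder's initial hidden state.
        self.init_h = nn.Linear(feat_dim + emb_dim, hid_dim)
        self.lstm = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, img_feats, emotion_ids, tokens):
        ctx = torch.cat([img_feats, self.emotion_embed(emotion_ids)], dim=-1)
        h0 = torch.tanh(self.init_h(ctx)).unsqueeze(0)   # (1, batch, hid)
        c0 = torch.zeros_like(h0)
        out, _ = self.lstm(self.embed(tokens), (h0, c0))
        return self.out(out)                             # per-step vocab logits
```

At inference time, the same decoder would be run autoregressively, feeding each generated word back in until an end-of-sentence token is produced.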
Researchers believe this technology could serve as a tool for artists, helping them evaluate their work and refine their creative process. Beyond art, the project has broader implications: it could help bridge the gap between human psychology and artificial intelligence by enabling machines to understand emotions more deeply and communicate with emotional nuance.