Large language model (LLM) AI chatbots may be able to outperform the average human at a creative thinking task in which participants devise alternative uses for everyday objects (an example of divergent thinking), suggests a study published in Scientific Reports. However, the human participants with the highest scores still outperformed the best chatbot responses.
Divergent thinking is a type of thought process commonly associated with creativity that involves generating many different ideas or solutions for a given task. It is commonly assessed with the Alternate Uses Task (AUT), in which participants are asked to come up with as many alternative uses for an everyday object as possible within a short time period. The responses are scored for four different categories: fluency, flexibility, originality, and elaboration.
Mika Koivisto and Simone Grassini compared the responses of 256 human participants with those of three AI chatbots (ChatGPT3, ChatGPT4, and Copy.Ai) on AUTs for four objects: a rope, a box, a pencil, and a candle. The authors assessed the originality of the responses by rating them on semantic distance (how closely related the response was to the object's original use) and creativity.