Study finds users disclose more to AI chatbots introduced as human

“One of the most surprising findings was that participants disclosed more and felt more comforted by a chatbot introduced as a human, even though almost everyone knew they were still talking to a chatbot. This means the effect wasn’t driven by deception or by a belief that the chatbot was human, but rather by the framing itself: how the chatbot was introduced and named. That subtle change alone was enough to activate more social and affective responses. Therefore, people’s behaviour toward chatbots can be shaped not just by what the chatbot does, but by what they expect it to be, showing how powerful simple context cues are in guiding our interactions with AI.”

Not all the differences favored the chatbot presented as a human. Although participants disclosed less to Chatbot D12, they rated it as slightly friendlier. Participants’ replies to D12 also carried stronger sentiment, expressing more intense emotions, both positive and negative. Despite these differences, participants did not rate either chatbot as significantly more trustworthy, and both were rated similarly in terms of overall interaction quality.
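
The article does not specify how the researchers scored sentiment, but a minimal sketch of one common approach shows what “more intense emotions, both positive and negative” can mean operationally: score each reply and take the absolute magnitude, so strongly positive and strongly negative replies both register as high-sentiment. The sketch below uses NLTK’s off-the-shelf VADER analyzer and hypothetical example replies, assuming English text; it is not the study’s actual method.

```python
# Minimal sketch: quantifying the sentiment intensity of chat replies.
# Assumptions: English text, NLTK's VADER lexicon (not the study's method).
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon fetch
sia = SentimentIntensityAnalyzer()

# Hypothetical replies, not data from the study.
replies_d12 = [
    "I was so anxious about it, honestly it kept me up at night.",
    "That actually made me really happy, thank you!",
]
replies_human_framed = [
    "It was fine, I guess.",
    "Things have been okay lately.",
]

def mean_intensity(replies):
    """Average absolute compound score: high for strongly positive
    OR strongly negative text, low for neutral text."""
    scores = [abs(sia.polarity_scores(r)["compound"]) for r in replies]
    return sum(scores) / len(scores)

print("D12 intensity:", mean_intensity(replies_d12))
print("Human-framed intensity:", mean_intensity(replies_human_framed))
```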

“When a chatbot is framed more like a person, given a human name and introduced as a human, people tend to open up more, attribute social traits to it, and feel more comforted when speaking with it, even when they suspect it’s still a bot. But there’s a catch: when a ‘human-like’ chatbot doesn’t fully meet our social expectations, people judge it as less friendly or trustworthy. So, design cues that make chatbots feel human can encourage self-disclosure, but they need to be balanced with transparency and realistic expectations.”
