Even if an android’s appearance is so realistic that it could be mistaken for a human in a photograph, watching it move in person can feel unsettling. It can smile, frown, or display other familiar expressions, but it is difficult to find a consistent emotional state behind those expressions, leaving you unsure of what it is truly feeling and creating a sense of unease.
Until now, a “patchwork method” has been used to let robots that can move many parts of their face, such as androids, display facial expressions for extended periods. This method involves preparing multiple pre-arranged action scenarios and switching between them as needed, while ensuring that unnatural facial movements are excluded.
However, this approach poses practical challenges: complex action scenarios must be prepared in advance, noticeable unnatural movements must be minimized during transitions, and movements must be fine-tuned to subtly control the expressions conveyed.
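To make the idea concrete, a minimal sketch of such scenario switching might look like the following. This is purely illustrative, not taken from any actual android control software: the scenario names, actuator labels, values, and the easing step are all assumed for the example.

```python
import time

# Hypothetical pre-arranged "action scenarios": each is a sequence of
# (duration_seconds, actuator_targets) keyframes. Actuator names and
# values are illustrative only.
SCENARIOS = {
    "idle_smile": [
        (1.0, {"mouth_corner": 0.6, "eyelid": 0.2, "brow": 0.1}),
        (2.0, {"mouth_corner": 0.5, "eyelid": 0.3, "brow": 0.1}),
    ],
    "blink": [
        (0.3, {"mouth_corner": 0.5, "eyelid": 1.0, "brow": 0.1}),
        (0.3, {"mouth_corner": 0.5, "eyelid": 0.2, "brow": 0.1}),
    ],
}


def blend(a, b, t):
    """Linearly interpolate between two actuator poses (t in [0, 1])."""
    return {k: a[k] + (b[k] - a[k]) * t for k in a}


def play_scenario(name, current_pose, send_pose, transition_time=0.5, steps=10):
    """Play one pre-arranged scenario, easing in from the current pose so
    that switching between scenarios does not produce an abrupt jump."""
    first_pose = SCENARIOS[name][0][1]
    # Smooth the transition into the new scenario.
    for i in range(1, steps + 1):
        send_pose(blend(current_pose, first_pose, i / steps))
        time.sleep(transition_time / steps)
    # Replay the hand-authored keyframes.
    for duration, pose in SCENARIOS[name]:
        send_pose(pose)
        time.sleep(duration)
        current_pose = pose
    return current_pose  # pose to ease from when the next scenario starts
```

Even in this toy version, the burden described above is visible: every scenario and every transition must be authored and tuned by hand, and the library of scenarios grows with each new expression the android is expected to sustain.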