Essential to industries ranging from Hollywood computer-generated imagery to product design, 3D modeling tools often use text or image prompts to dictate different aspects of visual appearance, like color and form. While this makes sense as a first point of contact, these systems are still limited in their realism because they neglect something central to the human experience: touch.
Fundamental to the uniqueness of physical objects are their tactile properties, such as roughness, bumpiness, or the feel of materials like wood or stone. Existing modeling methods often require advanced computer-aided design expertise, and they rarely support the tactile feedback that can be crucial to how we perceive and interact with the physical world.
With that in mind, researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have created a new system for stylizing 3D models using image prompts, effectively replicating both visual appearance and tactile properties. Their research is published on the arXiv preprint server.