Study claims that we interact differently with various robots based on their physical characteristics


Remember the idiom “Don’t judge a book by its cover”? Well, according to an article in Psychology Today, researchers found that people will apparently judge a robot based on its physical appearance, assigning it human personality traits such as laziness or meanness.

First impressions of a person will lead to lasting opinions. Likewise, people cannot help but give human-like qualities to robots based on the machine’s looks, sound, and purpose. This tendency to humanize an object enables people to care about things that cannot reciprocate their feelings, like a car or a favorite instrument.

Anthropomorphism only gets you so far, though. Sometimes, an object fails to live up to the human characteristics being projected onto it.

One egregious example is “Clippy,” the infamous Microsoft Office Assistant. The googly-eyed electronic assistant was cute for all of five minutes before you started looking for ways to permanently delete it.

On the other end of the scale is the “uncanny valley.” This is when something non-human looks and/or acts so nearly human that it becomes creepy rather than endearing. (Related: Engineers develop a “soft robot” that uses responsive materials to swim through water.)

Humans develop preconceptions about robots based on the machines’ looks

Researchers have conducted more than a thousand studies on the social interactions between robots and humans. Most of these efforts focused on just one “social robot” at a time.

A Stanford University research team took a much broader approach, showing participants photographs of 342 different social robots. Each robot received a score based on the participants’ judgment of its personality.

The survey results were analyzed using the Stereotype Content Model. First devised by psychologist Susan Fiske and colleagues in 2002, this psychological model suggests that people around the world judge other people along two dimensions of personality: competence and warmth.

Psychologists use the Stereotype Content Model to understand how people develop “stereotypes” about the personalities of other people. The Stanford researchers believe this model can also be applied to robots.

They explained how a chatty and cute robot like R2-D2 has a high chance of being considered friendly and easy to approach. Megatron, on the other hand, is clearly something to steer clear of.

The Stanford researchers noted that humans classify robots in much the same way they attach personality traits to people during the stereotyping process.

Different robots show varying levels of competence and warmth

In the experiment, participants graded each robot according to its competence and warmth. A combination of these two scores generated four categories of personality for robots.

Robots that scored high in both competence and warmth were considered good social partners for humans. Their appearance was agreeable, and they gave the impression of being competent in their roles. Most robot designers aim for this combination.

Some robots did well on competence but failed to exude warmth. These were typically the big, strong-looking machines built to perform hard labor without needing rest or relaxation.

Units with low competence and high warmth were the cutest, cuddliest, and fuzziest robots. They often had large eyes and features reminiscent of human children. You wouldn’t trust them with a screwdriver, but only because you’d worry they would run off with it, trip, and hurt themselves.

Finally, there were robots that gave off neither competence nor warmth. Participants considered these units to be bums that didn’t care about humans or other robots. They looked very mechanical and seemed neither useful nor trustworthy.
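To make the four-way split concrete, here is a minimal Python sketch of how averaged competence and warmth ratings could be binned into the categories described above. The 0–100 scale, the midpoint cutoff, the category labels, and the sample robots are illustrative assumptions, not the Stanford team’s actual data or analysis.

```python
# Hypothetical sketch: sorting robots into the four warmth/competence
# quadrants described above. The 0-100 scale, the midpoint split, and the
# example robots/scores are illustrative assumptions, not the study's data.

def classify_robot(warmth: float, competence: float, midpoint: float = 50.0) -> str:
    """Map a robot's average warmth and competence ratings to a quadrant label."""
    if warmth >= midpoint and competence >= midpoint:
        return "good social partner"      # high warmth, high competence
    if warmth < midpoint and competence >= midpoint:
        return "cold but capable worker"  # low warmth, high competence
    if warmth >= midpoint and competence < midpoint:
        return "cute but helpless"        # high warmth, low competence
    return "neither warm nor competent"   # low warmth, low competence


if __name__ == "__main__":
    # Made-up example ratings for illustration only.
    sample_ratings = {
        "companion bot": (82, 74),
        "industrial arm": (21, 88),
        "plush toy bot": (90, 30),
        "bare chassis": (15, 22),
    }
    for name, (warmth, competence) in sample_ratings.items():
        print(f"{name}: {classify_robot(warmth, competence)}")
```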

You can find more stories about all kinds of robots at Robots.news.

Sources include:

PsychologyToday.com

IPFS.io

PsyCIQ.APA.org [PDF]


