In 1983, David Chambers asked a group of children to draw what they believed scientists to look like. He compared the children’s perception to reality and recorded his findings in the article “Stereotypic Images of the Scientist.”
Recently, researchers at Illinois State University repeated the test, this time substituting “scientist” with “robot.” Psychologist Corinne Zimmerman and engineer Kevin Devine presented 143 schoolchildren, between six and 10 years old, with the prompt, “Draw a picture of a robot doing something robots often do.”
What the children drew, according to Zimmerman, showed a “clear stereotype of robots.” The children drew robots that were square and autonomous, engaging in activities such as household chores and homework (keep dreaming, kids). Around 30 percent of the children drew robots engaged in “robo-boogieing.”
Twenty-nine of the children involved in the study were then pulled aside and taught what robots currently do in the “real world.” These lessons included a trip to see an industrial robot in action. Later asked to redraw their vision of a robot doing robot things, 28 of the children sketched robots more akin to the industrial mechanoid they had seen.
Three months later, those 29 children were asked again to complete a drawing. For the most part, the change in perception had stuck, and a majority of the children drew industrial, human-operated machines.
Zimmerman claimed that the lessons had shortened the distance between fantasy and reality for the children. She opined that the acceptance of realistic robots would “help students move into related careers.”
However, not everyone is excited about Zimmerman’s opinions on converging realism and creativity. Noel Sharkey, professor of artificial intelligence and robotics at the University of Sheffield in the UK, worried aloud about the “damping of ideas” and was doubtful whether acclimating children to the realistic robots of today would inspire them to build the robots of tomorrow.
Surely, the Wright brothers dreamed of wings before they ever flew. I would also be willing to bet that more aeronautical engineers were inspired by Buck Rogers than by the WWII rockets that laid much of the foundation for their work. Surely, the future is defined more by the impossible than the possible.
For that reason, I say don’t let the robots stop dancing.
(image via Metro.co.uk)
Scientists from the University of Hertfordshire recently unveiled Nao — the first robot allegedly capable of both developing and expressing emotions. This sensitive robot is the result of FEELIX Growing (Feel, Interact, Express), a project aimed at integrating robots socially into our society. According to Dr. Lola Cañamero, the computer scientist who is running the project, “Emotions foster adaptation to environment, so robots would be better at learning things.”
Nao is the emotional equivalent of a one-year-old child, showing emotion through non-verbal cues like posture and gestures rather than more advanced facial or verbal expression. Non-verbal cues from actual human beings, body language and physical distance in particular, are also what guide Nao’s reactions and feelings. The robot learns from human interactions, can remember faces and is programmed to form close bonds with people who treat it (him? Pronoun struggle.) with kindness. This basic understanding of human body language, along with a programmed set of basic rules about what’s “good” and “bad” for it, allows Nao to indicate how it’s feeling.
This originally made me think that Nao is just another basic robot, but then I found out that while the actions used to display each emotion are preprogrammed, Nao decides by itself which feeling to display, and when. (Robot agency!)
Hunching its shoulders when it’s sad and raising its arms for a hug when it’s happy, the robot really does emulate the physical expressions of a very young child. If frightened, Nao will only stop cowering in fear when soothed by gentle strokes on the head. Along with happiness, sadness and fright, Nao can also express anger, guilt, excitement and pride.
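The setup described above — preprogrammed displays, a learned sense of what’s “good” and “bad,” and the robot itself choosing which emotion to show — can be sketched as a toy model. To be clear, everything here (names, thresholds, the mood number) is invented for illustration and has nothing to do with the actual FEELIX code:

```python
# Toy emotion-display loop: stimuli tagged as good or bad shift an
# internal mood value, and the "robot" itself decides which
# preprogrammed display to run. All names and rules are invented.

GESTURES = {                      # preprogrammed displays, per the article
    "happy": "raise arms for a hug",
    "sad": "hunch shoulders",
    "frightened": "cower until stroked on the head",
    "neutral": "stand still",
}

class ToyNao:
    def __init__(self):
        self.mood = 0.0           # crude affect state, clamped to [-1, 1]

    def perceive(self, stimulus):
        """Update mood from a stimulus labelled good (+) or bad (-)."""
        delta = {"kind": 0.5, "harsh": -0.5, "sudden": -0.9}.get(stimulus, 0.0)
        self.mood = max(-1.0, min(1.0, self.mood + delta))

    def express(self):
        """The robot, not the programmer, picks the display to run."""
        if self.mood > 0.3:
            emotion = "happy"
        elif self.mood < -0.6:
            emotion = "frightened"
        elif self.mood < -0.2:
            emotion = "sad"
        else:
            emotion = "neutral"
        return emotion, GESTURES[emotion]

robot = ToyNao()
robot.perceive("kind")
print(robot.express())            # mood 0.5 -> happy display
robot.perceive("sudden")
robot.perceive("sudden")
print(robot.express())            # mood clamped at -1 -> frightened
```

The point of the sketch is the separation the article describes: the gestures are fixed in advance, but the mapping from experience to which gesture fires is the robot’s own.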
Beyond just being a novelty, Nao has several projected practical uses. The FEELIX team members in charge of creating Nao’s emotions believe that robots are absolutely going to act as human companions in the near future, and that responses from the robots will make it easier for humans to interact with them.
“If people can behave naturally around their robot companions, robots will be better-accepted as they become more common in our lives.”
In addition to being an ambassador for the ideal everyday companions of the future, one of the FEELIX project’s immediate aims is to provide 24-hour companionship for young children and the elderly in hospitals, along with support for their parents, carers, doctors and nurses. Nao would be capable of helping out with therapeutic aspects of their treatment, as well as providing companionship and supporting their emotional well-being.
I don’t think we’re anywhere close to the point where robots will replace actual human attention, but they could be great helpers when no one else is available. The public might not be ready for robot companions with a mind of their own, but the technology is here, it’s consistently improving, and it can’t be ignored.
(All seriousness aside, I think my favorite thing about Nao is that he happens to be an awesome dancer, bringing a whole new meaning to ‘The Robot’.)