Posted: 12 January 2018, 3:00 p.m. EST
Panelists: Moderator Scott Fouse, vice president, Advanced Technology Center, Lockheed Martin; Gerhard Grunwald, head of orbital robotics, Institute of Robotics and Mechatronics; Kris Kearns, senior adviser for autonomy research, Air Force Research Laboratory; Stefanie Tellex, assistant professor of computer science and assistant professor of engineering, Brown University
Lawrence Garrett, AIAA web editor
With computing power rapidly approaching that of the human brain, low-cost sensors widely available and autonomy improving dramatically, human-machine teams and robotics previously found only in science fiction are certain to improve many aspects of life over the next 25 years, a panel of experts said Jan. 12 during the “Serving Our Robot Overlords” session at the 2018 AIAA SciTech Forum in Kissimmee, Florida.
“Robots and humans will work effectively together and essentially be a far more effective team than either by themselves and truly demonstrate that the whole is greater than the sum of the parts,” said Scott Fouse, vice president of Lockheed Martin’s Advanced Technology Center.
For human-machine teaming technology to mature and provide humanity its full suite of benefits, communication is key, said Gerhard Grunwald, head of orbital robotics at the Institute of Robotics and Mechatronics in Germany.
Grunwald noted that human-machine communications are divided into two categories: direct and indirect.
“Indirect ones mean human and robot are not in the same location,” he explained, citing the example of a human based on Earth working with an International Space Station-based robot. “What is so important in space with telerobotics is that the human is in the loop.”
Grunwald said another challenge is the communication delay between Earth and space. He noted that the delay between Earth and the ISS is 20 to 30 milliseconds.
Because robots do not behave the same way in space as on Earth, safety tests are being conducted to “ensure robots don’t injure robots in space,” he said.
The U.S. Air Force is also working to overcome a number of challenges presented by human-machine teaming, with the aim of getting airmen and machines working together efficiently and effectively, said Kris Kearns, senior adviser for autonomy research at the Air Force Research Laboratory.
She said one of the Air Force’s primary challenges is to determine how to share decision-making between machines and humans: which decisions humans should always make and which ones are OK for machines to make on their own.
[Photo caption: Participants in the discussion “Serving Our Robot Overlords” Jan. 12 at the 2018 AIAA SciTech Forum in Kissimmee, Florida.]
The end goal, Kearns said, is to capitalize on the strengths of what artificial intelligence or an intelligent machine can provide as well as the strengths of airmen.
“This is the basis for which we are developing technologies to be able to have a relationship between the human and the machine so that we have calibrated trust,” Kearns said. “We want joint learning between the machine and the human.”
She said the Air Force is working to develop systems in which an intelligent machine performs more of the tasks required to operate an aircraft, such as takeoff and landing, collision avoidance or route planning.
However, Kearns said humans will always be in the loop when it comes to identifying targets or giving authority to kill.
Stefanie Tellex, assistant professor of computer science and assistant professor of engineering at Brown University, said now is a really exciting time in robotics because robots are starting to work.
Tellex imagines a future in which people talk to robots like they’re other people.
But what’s still needed, she said, is a model for “these language understanding modules to connect everything the robot can see and everything the robot can do.”
To be successful, Tellex cautioned, “We have to in some sense embrace failure.” She said robotics and autonomy are hard problems, and that the real world, aerospace included, is complicated.
“It’s critical ... to have the robot be able to identify, detect and recover from failures,” Tellex said. “We want the robot to be able to engage in corrective behavior.”