Nuances of Human-Machine Integration in Autonomous Systems panel at TSIS

By Kate Finkel, Staff Writer

Industry professionals participated in the discussion panel, “The Nuances of Human-Machine Integration in Autonomous Systems,” during the Training & Simulation Industry Symposium (TSIS) 2024 in Orlando, Florida, June 12.

The annual TSIS conference gives industry representatives opportunities to network and interact with Department of Defense procurement officials who specialize in training and simulation.

Emily Mills, Ph.D., research and development portfolio manager with Design Interactive, Inc., moderated the panel. She asked how industry experts can effectively train human operators to work alongside autonomous systems and ensure they understand those systems' capabilities and limitations.

“What we have to begin to think about is trust,” said Teresa Pace, Ph.D., algorithms and data architect with L3Harris Technologies. “With trust, we need to look at transparency, explainability, ethics and bias. We can’t just assume that AI [artificial intelligence] or an automated system will do something perfectly, so the way we can [build] trust is to educate our users through training courses, hands-on experience and various training exercises. Those are the things we really need to focus on to have a successful relationship between autonomous systems and human operators.”

According to the panel members, human collaboration with machines will become more important than ever as AI technology advances. They emphasized that human-machine collaboration combines the unique strengths of humans and AI systems to solve complex problems more effectively than either could alone, and that integrating both skill sets has the potential to transform any industry.

Mills later asked the panelists how they address the challenges they have experienced with AI models and how they build trust between users and autonomous systems.

“If you are using a computer application, usually you test what decision humans make first,” said Randy Allen, Ph.D., chief scientist at Lone Star Analysis. “Then [you] see what the AI is recommending [and] compare if they agree or not. If they do agree, that can go a long way to ensure trust between machines and humans.”
