The most recent Tech Grove Mixology focused on artificial intelligence (AI) and the metaverse. A panel of experts discussed relevant questions and concerns surrounding the development of AI.


To begin, the panel discussed where AI can make the most impact and how society gets there. Dr. Brian Stensrud, director of simulation at Soar Technology, Inc. (SoarTech), explained, “The metaverse is a persistent digital extension of our current reality, and there is a huge opportunity for digital things to live and be persistent along with us. Moving forward with the development of AI, the biggest impact is going to be how to exploit the various systems to live persistently within a metaverse and provide that level of activity for humans across a variety of different domains.” He noted this would provide real-time responses and presence around the world.


Dr. Keith Brawner of the U.S. Army SFC Paul Ray Smith Simulation & Training Technology Center (STTC) simply stated, “It’s always the same question, whether it is the metaverse, the holodeck, or any training system: What are you going to use it to do?” He elaborated that AI systems are quickly becoming synonymous with computer systems in general, and the number of systems that do not incorporate AI is rapidly diminishing. Ethically, the constraints on each system using AI depend on the nature of that system.


Another key topic of discussion was the need for a good balance of human-in-the-loop oversight within AI technology. The subject matter experts all agreed that as AI systems gain more memory and the ability to act proactively, human verification will still be needed.


Beth F. Wheeler Atkinson, the Naval Air Warfare Center Training Systems Division’s (NAWCTSD) senior research psychologist and lead of the BATTLE Lab, said, “There needs to be proper checks and balances to ensure that there is control and understanding of the decisions being made by the artificially intelligent capabilities.”


Additionally, the question of humans existing in two worlds was raised. She explained that checks and balances are needed because AI gives the user anonymity, shielding them from direct interaction and allowing them to act in ways they wouldn’t normally act in the real world.


David Bragg, National Security Program area lead and Professor of Practice at FLARE, University of Florida, continued the conversation on human-in-the-loop by discussing the shift to human-centric AI capabilities, where applications are developed in support of humans. Bragg made a key point on the balance of human responsibility. “It comes back to who has responsibility. If you employ a weapon system and make the decision on where to employ that system, who holds the responsibility if something goes wrong?” explained Bragg. He said the DoD recently updated Directive 3000.09, which now requires extensive approval before an AI system can be deployed.


Extensive data is essential for AI to function in the metaverse. A lack of sufficient or correct data can be addressed with synthetic data, where applicable. Mark Ashford, principal senior data scientist at the Air Force Agency for Modeling and Simulation, said, “The trick is to get to the point where the system replicates reality as closely as possible and you can create pixel-perfect data.” Data is created to mirror real-world scenarios to train the AI in the metaverse. And while extensive data will greatly improve the functionality of AI, it is even more important to have a window into the reasoning the AI uses to make its decisions.
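
To make that concrete (an illustrative sketch, not an example from the panel), synthetic data generation can be as simple as fitting statistics from a small real sample and then drawing a much larger artificial sample that mirrors them. The names and values below are hypothetical:

    import numpy as np

    rng = np.random.default_rng(seed=42)

    # Hypothetical "real" sample: a handful of measured approach speeds, in knots.
    real_speeds = np.array([135.2, 138.9, 132.4, 140.1, 136.7, 134.8])

    # Fit simple summary statistics to the real sample.
    mu, sigma = real_speeds.mean(), real_speeds.std(ddof=1)

    # Draw a much larger synthetic sample that mirrors those statistics,
    # standing in where real data is scarce or expensive to collect.
    synthetic_speeds = rng.normal(loc=mu, scale=sigma, size=10_000)

    print(f"real mean={mu:.1f}, synthetic mean={synthetic_speeds.mean():.1f}")

Production systems replicate far richer structure, up to the “pixel-perfect” imagery Ashford describes, but the principle of mirroring real-world characteristics is the same.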


In a similar vein, David Nelson, director of the Mixed Reality Lab at the University of Southern California’s Institute for Creative Technologies, is using existing data to build new scripts and characters. The project collects information from subject matter experts and victims to develop storylines, then uses that information to generate an amalgamation of new scripts and characters. Nelson explained, “We can have a million different scenarios developed instantaneously using AI.”
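
The scale Nelson describes follows from simple combinatorics: a modest set of story parameters multiplies into a huge number of distinct scenario skeletons for a generative model to flesh out. A minimal Python sketch, with categories invented purely for illustration:

    from itertools import product

    # Hypothetical story parameters distilled from expert interviews.
    settings   = ["checkpoint", "market", "command post", "convoy route"]
    characters = ["interpreter", "local elder", "medic", "patrol leader"]
    conflicts  = ["misunderstanding", "supply shortage", "injured civilian"]
    outcomes   = ["de-escalation", "escalation", "negotiated delay"]

    # Every combination is a distinct scenario skeleton.
    scenarios = [
        f"A {c} at the {s} faces a {x}, ending in {o}."
        for s, c, x, o in product(settings, characters, conflicts, outcomes)
    ]

    print(len(scenarios))  # 4 * 4 * 3 * 3 = 144 skeletons from just 14 inputs

Add a few more parameters, or let a generative model vary the wording of each skeleton, and the count quickly reaches the millions of scenarios Nelson mentions.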


According to Stensrud, “The key question is what data set is the AI system ultimately learning from and how is it applying that information to decision making.”

The subject matter experts all agreed that AI needs to be trainable and trustworthy. The panel concluded by discussing the confidence levels required to perform different tasks while teamed with AI.


“Explainability is the crucial piece of decision optimization and getting to the point you are most confident about the decision,” said Stensrud.
