The Training and Simulation Industry Symposium’s first panel discussion, “NTSA I/ITSEC 2023 Next Big Thing: Acquisition Insights for Generative AI [artificial intelligence] in Government Training and Simulation,” took place in Orlando, Florida, on June 21, 2023.


The “Next Big Thing” committee operates under the auspices of the National Training and Simulation Association, according to Bob Kleinhample, who made the initial introductions. The committee helps academia, industry and government prepare for new technologies and position themselves to take full advantage of those technologies as they emerge.


Richard Boyd, CEO of Ultisim, Inc., moderated the panel on generative AI, which featured Dr. Sean Conroy, mission unit specialist for Microsoft Federal, and Javier Fadul, chief innovation officer for HTX Labs.


Boyd began the conversation by referencing a “bump in the night” on Nov. 30, 2022, when OpenAI launched ChatGPT and made it free. He connected the event to what the technology community calls “the Singularity.” According to Boyd, the Singularity is the idea that humans build a technology so complex that it isn’t fully understood; that technology soon outstrips humans and, at a certain point, becomes difficult to predict.


Boyd clarified that he did not consider Nov. 30 to be “the” Singularity, but it was “a” singularity. He went on to outline his primary concerns about what humans and machines, respectively, should be doing with their efforts and attention.


“That balance is important to get right,” Boyd said. “If our adversaries do a better job than we do, then they’re going to prevail… and we won’t be competitive.”


Boyd said the 20th century was about the significance of the moving image, as it was the first time in human history when people could see recent and current events by viewing video footage. He stated the effects changed people physically, psychologically and societally.


“This [new] century is about modeling and simulation, and those organizations who get adept at modeling and simulating the future can not only predict the future better, but they can affect it,” Boyd said. “This is an incredibly powerful medium that we’re playing with, and it got a major upgrade on Nov. 30, 2022, with generative AI.”


AI is not like ordinary enterprise software, according to Boyd, who said it “keeps me up at night” because it is more intimate, more powerful, and there is no debugger. The popular industry phrase “move fast and break things” should not be applied to AI development, Boyd advised.


“We don’t understand how this stuff works, it’s not transparent to inquiry, and there is source code, but it doesn’t explain itself,” Boyd said.


Despite these concerns, panel members generally agreed on the benefits of properly harnessing AI. Fadul addressed the challenges of a transitioning workforce, specifically the “Silver Tsunami,” the large wave of experienced workers expected to retire in the coming years. This impending departure represents a significant loss of expertise across industries, but AI could be used to preserve that institutional knowledge and experience.


“There is a sense of urgency we felt to leverage these technologies to capture as much of that expertise as possible,” Fadul said. “To address some of those challenges, we developed a training platform to help organizations bring XR training technologies to their fields, to include asset management, digitization and digital twins… to create customized training scenarios using that subject matter expertise [and making it] available on tablets, PCs and laptops. Ultimately, this technology is about being able [to] aggregate data and present that information back to students and instructors to make better decisions… in how to prepare the workforce and the warfighter for upcoming challenges.”


Fadul noted that AI has been around for many years, but that recent advances have significantly changed its quality and the types of intelligent systems that can be created.


“We have to be very precise in our execution of these systems and there are a lot of potential compromises that we, as organizations, will have to balance,” Fadul said. “Ultimately, this will be about: Will these systems replace people or augment them?”


Conroy noted concerns about the “hallucinations” that AI systems have demonstrated. An AI hallucination is a confident conclusion stated by an AI that is either incorrect or not justified by the training data the AI was given.


“We’ve had some courses of action come out [of AI] that are completely outrageous, mostly in terms of human life and weapons deployment,” Conroy said. “We’ve been working to rein that in.”


Near the end of the talk, Boyd discussed the possibility of hackers and other bad actors introducing fake data into an AI’s training process, and he acknowledged that it is a difficult problem to solve.


“I’m not worried about this technology breaking out and doing something crazy, and I’m not worried about nuclear weapons,” Boyd emphasized, referencing the “Terminator” movies and the war-torn, dystopian future they present. “I worry a lot about what humans do with [AI] – all of this is a double-edged sword. These technologies are moving incredibly fast and coming up with countermeasures to understand what’s real and what isn’t is going to be a big problem.”


