GW Business Forum Propels Conversation on AI’s Impact and Trustworthiness


April 8, 2024

Dean Lach introducing panelists

As organizations across all sectors work to capitalize on the transformational benefits of artificial intelligence (AI), pressing questions about its risks and about public skepticism toward the emerging technology must be addressed. Corporate executives, business leaders, government workers, policymakers, and GW faculty, students, and alumni convened to discuss these questions at GW's second annual Business & Policy Forum, "Imagining the Future with AI." 

During the day-long event, expert panelists explored issues surrounding AI across industries including healthcare, real estate, financial services, cybersecurity, and the public sector. Panelists repeatedly highlighted public distrust of AI as a barrier to realizing its full potential. 

In healthcare, for example, Quimby Kaizer, Principal, Advisory at KPMG, said, “Leaders have a chasm between what’s possible with AI and the reality of skepticism with employees.”

GW’s President Ellen Granberg highlighted the importance of collaboration for addressing this gap in her opening remarks, saying, “Collaboration is at the heart of all progress, and that is why we’re dedicated to working across academia, industry, and government.”

During the forum, Granberg announced the launch of the GW Trustworthy AI (GW TAI) initiative and the initiative’s first corporate partner, SAIC, a Fortune 500 technology integrator. The collaboration with SAIC is an example of a public-private partnership that allows industry to leverage the research and technical expertise of academic scholars. GW’s research will help SAIC assess and inform policy using science and peer-reviewed evidence.

A key theme at the forum was that the data used to train AI systems is itself a source of public fear, because that data carries inherent biases. While AI in its current form can help solve many of today's pressing challenges, training data that is not representative of the diverse populations who use the technology risks exacerbating existing inequalities, particularly in healthcare. 

“The only way that you can really begin to address healthcare disparities or health inequities using AI is to make sure that you have good data that you’re training the models on in the first place,” said LaQuandra Nesbitt, executive director for the Center for Population Health Sciences and Health Equity. “Once you do that, you can begin to build toolkits that help navigate people into the right systems of care.”

Another key theme of the forum was the importance of balancing innovation and regulation to address these biases and mitigate the associated risks. Striking that balance remains an ongoing conversation, one that requires strong governance strategies at every level. 

“If we are too restrictive, then we’ll prevent these industries from flourishing and it will cut us out of the environment where this is the standard,” said Tiffany Moore, the senior vice president of Political and Industry Affairs at the Consumer Technology Association, on AI regulation in the global environment.

In the forum's final panel, experts zeroed in on the concept of trustworthy AI to chart a path toward greater public confidence in the technology. GW Engineering Dean John Lach introduced panelists from UL Research Institutes, the National Institute of Standards and Technology (NIST), and SAIC, who argued that building public trust requires developing and deploying AI systems in a manner that prioritizes ethics, human rights, and input and feedback from marginalized communities. 

“We’re hyper-focused on how do we use the technology to go change problems now, which changes our definition of trust, right?” said Andy Henson, senior vice president of SAIC’s Digital Innovation Factory. “It can’t be, it’s cool, it gives you Google-like results when societal impacts are on the line. It has to be, and this is a really simple definition of trust, but it has to work for the person that’s using it. It has to solve their problems, right? And that’s complex.”

GW Engineering students closed the discussion on trustworthy AI with a poster session at the forum's closing reception, presenting research that addresses the socio-technical questions of AI. These students are affiliated with GW TAI's core programs, the Institute for Trustworthy AI in Law and Society (TRAILS) and the Co-Design of Trustworthy AI in Systems (DTAIS), which allow them to collaboratively shape a future where AI enhances and empowers human workers and problem solvers.

Overall, the GW Business and Policy Forum, "Imagining the Future with AI," underscored that AI's transformative potential is shadowed by public skepticism and concern. GW hopes the forum inspires attendees to carry the day's discussions forward and to continue the conversation on the responsible development and deployment of AI.