Eyes on Trust: Gaze Prediction Powers Safer Extended Reality


December 9, 2025

Shu Hong and Rongqian Chen with their Best Poster Award

Extended reality (XR) combines augmented, mixed, and virtual reality to create immersive experiences that are rapidly transforming the way people learn, collaborate, and interact with each other. Yet, this next-generation technology also introduces security, privacy, and trust challenges due to the intimate connection between users, their devices, and their environments.

Every year, researchers gather at MobiHoc, the International Symposium on Theory, Algorithmic Foundations, and Protocol Design for Mobile Networks and Mobile Computing, to discuss advancements and challenges in computing and dynamic networks, including emerging areas such as XR. In 2025, MobiHoc organizers held the first workshop on enhancing cognitive security, privacy, and trust in XR systems, where members of GW Engineering’s Department of Electrical and Computer Engineering won a Best Poster Award for their research.

In XR, cognitive security means protecting a user’s mental state and attention patterns from misinterpretation or manipulation by the system. One major contributor to cognitive security risks is latency, the delay between a user’s movement and the system’s response. When latency increases, the system may struggle to accurately process and interpret eye-tracking data and attention cues, increasing users’ vulnerability to attacks.

Addressing this issue, postdoctoral researcher Shu Hong and second-year Ph.D. student Rongqian Chen developed a time-aware Long Short-Term Memory (LSTM) model, referred to as a “human digital twin,” trained on mixed reality (MR) data collected under controlled latency variations. By incorporating timing information, the model remains reliable even under fluctuating system latency.
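The article does not include the model’s implementation, but a minimal sketch can illustrate the time-aware idea: feeding each sample’s inter-arrival time into the LSTM alongside the raw eye-tracking features, so the recurrent state can account for fluctuating latency. Everything below, from the class name to the layer sizes, is an illustrative assumption rather than the authors’ code.

```python
import torch
import torch.nn as nn

class TimeAwareGazeLSTM(nn.Module):
    """Hypothetical time-aware LSTM gaze predictor (not the authors' code).

    Each step concatenates raw eye-tracking features with the time
    elapsed since the previous sample, letting the recurrent state
    compensate for irregular, latency-distorted sampling.
    """

    def __init__(self, feature_dim: int = 6, hidden_dim: int = 64, output_dim: int = 2):
        super().__init__()
        # +1 input channel carries the per-step time delta
        self.lstm = nn.LSTM(feature_dim + 1, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, output_dim)  # e.g. a 2D gaze target

    def forward(self, features: torch.Tensor, time_deltas: torch.Tensor) -> torch.Tensor:
        # features: (batch, seq_len, feature_dim) raw gaze/head signals
        # time_deltas: (batch, seq_len, 1) seconds between consecutive samples
        x = torch.cat([features, time_deltas], dim=-1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])  # predict the true gaze target

# Toy usage: 8 sequences of 30 samples with irregular sampling intervals
model = TimeAwareGazeLSTM()
features = torch.randn(8, 30, 6)
time_deltas = torch.rand(8, 30, 1) * 0.05
print(model(features, time_deltas).shape)  # torch.Size([8, 2])
```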

“Our ‘human digital twin’ predicts true gaze targets, even when latency distorts raw signals,” Hong said. “This motivates the later defense mechanism, reduces incorrect inferences, and supports safer, more trustworthy XR interactions.”

Hong and Chen led different tasks in the research process according to their expertise. Hong leveraged her skills in machine learning and network privacy and security to lead the model design, algorithm development, and robustness evaluation, while Chen applied his background in machine learning and human-machine interaction. Together, they collaborated with researchers at Pennsylvania State University (PSU), who built the data collection pipeline and configured experimental conditions based on their XR system testbed.

“Together, our teams’ complementary strengths ensured both strong modeling and realistic system testing,” Hong stated.

This modeling innovation is part of a larger endeavor, VeriPro, a project funded by the Defense Advanced Research Projects Agency (DARPA) and co-led by PSU and GW Engineering. VeriPro aims to develop tactical MR systems that safeguard human users and provide verifiable cognitive guarantees. The GW team focuses on modeling, including gaze prediction and perception models under cognitive attacks. At MobiHoc, Chen and Hong also presented a perception model, developed for this project, that protects against perception attacks.

“In MR systems, the seamless blending of digital and physical elements exposes users to cognitive attacks that manipulate their perception,” Chen said. “We propose a framework utilizing Vision-Language Models (VLMs), perception graphs, and statistical analysis to measure the precise impact of these distortions.”
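The article does not spell out how that measurement works. As a loose illustration, assuming a perception graph can be reduced to (subject, relation, object) triples extracted from a VLM’s scene description, a simple distortion statistic might compare the triples of a benign frame against those of an attacked one. The functions and example triples below are hypothetical.

```python
def perception_graph(triples):
    """Reduce a hypothetical perception graph to a set of
    (subject, relation, object) triples."""
    return {tuple(t) for t in triples}

def distortion_score(baseline, attacked):
    """Jaccard distance between two perception graphs:
    0.0 means identical perception, 1.0 means no overlap."""
    union = baseline | attacked
    if not union:
        return 0.0
    return 1.0 - len(baseline & attacked) / len(union)

# Benign scene as a VLM might describe it
benign = perception_graph([
    ("door", "left_of", "table"),
    ("sign", "reads", "EXIT"),
])
# Same scene after a hypothetical overlay attack relabels the sign
attacked = perception_graph([
    ("door", "left_of", "table"),
    ("sign", "reads", "ENTER"),
])
print(f"distortion: {distortion_score(benign, attacked):.2f}")  # distortion: 0.67
```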

Altogether, this work stands to strengthen the security of MR systems on the market while also supporting their use in critical operations. The Best Poster Award at MobiHoc 2025 highlights the significance of Hong and Chen’s LSTM model for safeguarding XR and supporting formal reasoning about cognitive security.

“We had a great experience at MobiHoc 2025 and appreciated the feedback and conversations with researchers across systems, networking, and security,” said Hong. “Presenting at the XR Security workshop helped us understand broader implications of our work, and receiving the Best Poster Award is both encouraging and motivating for our future research.”

“This recognition encourages further innovation in our work. We remain focused on leveraging machine learning and large language models to enhance XR systems, prioritizing improvements in intelligence, interactivity, and security,” Chen said.