CS faculty Qiang Zeng and PhD student Xiang Li recently presented their work, OneFlip, at USENIX Security '25, a premier security conference. OneFlip introduces a novel attack against AI systems: by flipping just a single bit among the billions of bits that encode a model's weights, it can inject a backdoor into the model. Unlike traditional AI attacks that require modifying the training data, OneFlip targets a deployed model during the inference stage, making it extremely stealthy and difficult to detect.
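To give a sense of why a single bit can matter so much, here is a minimal, hypothetical sketch (not the paper's actual method) showing how flipping one bit in the IEEE-754 float32 encoding of a model weight can change its value by many orders of magnitude:

```python
import struct

def flip_bit(value: float, bit: int) -> float:
    """Flip one bit (0 = least significant) in the float32 encoding of `value`."""
    (as_int,) = struct.unpack("<I", struct.pack("<f", value))
    as_int ^= 1 << bit
    (flipped,) = struct.unpack("<f", struct.pack("<I", as_int))
    return flipped

# Flipping the top exponent bit (bit 30) of a small weight
# turns 0.5 into 2**127 (about 1.7e38) -- a drastic change
# from a one-bit difference.
w = 0.5
print(flip_bit(w, 30))  # prints 1.7014118346046923e+38
```

This toy example only illustrates the sensitivity of floating-point weights to single-bit corruption; the attack itself involves carefully choosing which bit to flip so that the model behaves normally except on attacker-chosen inputs.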

Read more at: GMU News Feature