
Security of Artificial Intelligence

NLR researchers work collaboratively across cybersecurity and artificial intelligence (AI), focusing on energy security to strengthen the grid against evolving threats.

Ensuring the security of AI builds on NLR's expertise in explainable and trustworthy AI, formal verification, red teaming of cyberattack defenses, generative AI, and extreme events. NLR researchers are leading this emerging area of research to build reliable, secure energy systems and to solve evolving cybersecurity challenges.

Photo: Three NLR cybersecurity researchers stand in front of a wall-sized projection demonstrating the Advanced Research on Integrated Energy Systems (ARIES) platform.

Advanced Emulation for Cyber Evaluation

NLR's Advanced Research on Integrated Energy Systems (ARIES) platform provides unique simulation and emulation capabilities—including interfaces with real devices and infrastructure—to generate cyber-physical data at scale. The ARIES Cyber Range also provides emulation and virtualization capabilities for executing novel red team testing of AI models in a safe, isolated environment.
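As a rough illustration of what generating cyber-physical data at scale can involve, the Python sketch below synthesizes labeled voltage telemetry for many feeders, with false-data-injection windows mixed in. Everything here (the function name, signal model, and attack shape) is a hypothetical stand-in, not the ARIES platform's actual tooling.

import numpy as np

rng = np.random.default_rng(seed=0)

def simulate_feeder(n_steps=1440, attack=False):
    """Simulate one day of per-minute voltage telemetry (per unit) for a feeder.

    If attack is set, a false-data-injection bias ramps up over a random
    two-hour window, a common scenario for exercising detectors.
    """
    v = 1.0 + 0.01 * np.sin(np.linspace(0.0, 2.0 * np.pi, n_steps))  # daily swing
    v += rng.normal(0.0, 0.002, n_steps)                             # sensor noise
    labels = np.zeros(n_steps, dtype=int)
    if attack:
        start = rng.integers(0, n_steps - 120)
        v[start:start + 120] += np.linspace(0.0, 0.03, 120)  # injected bias
        labels[start:start + 120] = 1
    return v, labels

# Generate a labeled corpus, half of it under attack, for model development.
corpus = [simulate_feeder(attack=(i % 2 == 0)) for i in range(1000)]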

Detecting and Mitigating Cyberattacks

In a recent development in the security of AI, Lawrence Berkeley National Laboratory is leading the Mitigation via Analytics for Grid-Inverter Cybersecurity project in partnership with NLR, Siemens Corporate Technologies, Cornell Tech, and National Rural Electric Cooperative Association Research. Sponsored by the U.S. Department of Energy's Office of Cybersecurity, Energy Security, and Emergency Response, the project is developing secure AI/machine learning tools to both detect and mitigate cyberattacks on aggregations of distributed energy resources in electric power distribution systems and microgrids.
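One common pattern on the detection side of such tools is residual-based anomaly detection: forecast each measurement from recent history and flag points that deviate too far. The Python sketch below is a minimal, hypothetical version of that idea; it is not the project's algorithm, and the window, threshold, and robust scale estimate are all illustrative choices.

import numpy as np

def detect_anomalies(series, window=30, k=4.0):
    """Flag samples whose residual against a moving-average forecast exceeds
    k robust standard deviations estimated from the trailing window."""
    series = np.asarray(series, dtype=float)
    flags = np.zeros(series.shape, dtype=bool)
    for t in range(window, len(series)):
        hist = series[t - window:t]
        forecast = hist.mean()
        # Median absolute deviation, scaled to approximate a standard deviation.
        scale = 1.4826 * np.median(np.abs(hist - np.median(hist))) + 1e-9
        flags[t] = abs(series[t] - forecast) > k * scale
    return flags

# Demo: clean voltage telemetry with an abrupt false-data injection.
rng = np.random.default_rng(1)
v = 1.0 + rng.normal(0.0, 0.002, 1440)
v[600:720] += 0.02
print(f"flagged samples: {int(detect_anomalies(v).sum())}")

The median-based scale estimate keeps the threshold itself from being skewed by attacked samples that land inside the trailing window.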

This work includes development of attack detection and mitigation algorithms, a hardware-in-the-loop testbed and demonstration test plan, and a red team software test environment for AI models. Hardware-in-the-loop testing, algorithm refinement, and demonstration will follow in phase two.
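To make the hardware-in-the-loop idea concrete, here is a hypothetical closed-loop sketch: a first-order plant model stands in for an inverter, a dispatch signal drives it, and a mitigation rule falls back to a safe setpoint when a command violates an assumed physical ramp limit. A real testbed would replace the plant model with actual hardware; every constant below is an assumption.

import numpy as np

SAFE_SETPOINT_KW = 50.0

def plausible(cmd, prev, max_ramp=5.0):
    """Mitigation rule: reject commands that ramp faster than the assumed
    physical limit (5 kW per step here)."""
    return abs(cmd - prev) <= max_ramp

rng = np.random.default_rng(2)
power, prev_cmd = 50.0, 50.0
for step in range(100):
    cmd = 50.0 + 2.0 * np.sin(step / 10.0)   # benign dispatch signal
    if 40 <= step < 60:
        cmd = 0.0                             # spoofed shutdown command
    if not plausible(cmd, prev_cmd):
        cmd = SAFE_SETPOINT_KW                # mitigation: hold a safe setpoint
    prev_cmd = cmd
    power += 0.5 * (cmd - power) + rng.normal(0.0, 0.1)  # first-order plant
print(f"final output: {power:.1f} kW")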

As part of this project, NLR and Cornell Tech are developing a structured red teaming framework to assess flaws and vulnerabilities in AI models and improve the security of such systems against cybersecurity threats. The red teaming process—conducted in a controlled environment and in collaboration with AI developers—will apply adversarial methods to stress test each AI system, leading to more robust attack detection and mitigation algorithms.
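Adversarial stress testing of this kind often starts from gradient-based perturbations such as the fast gradient sign method (FGSM). In the hypothetical sketch below, an FGSM-style perturbation is applied to a toy logistic-regression detector to measure how small an input change can lower its confidence; the model, features, and epsilon are assumptions, not the project's framework.

import numpy as np

rng = np.random.default_rng(3)
w = rng.normal(size=16)   # weights of a trained detector (assumed given)
b = 0.0

def detect(x):
    """Return the detector's probability that x is an attack."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

x = rng.normal(size=16) + 0.5 * np.sign(w)  # sample the detector flags as attack
p_before = detect(x)

# FGSM: for cross-entropy loss with label y = 1 (attack), the input gradient
# is (p - 1) * w; stepping along its sign maximizes the loss and so drives
# the detector's confidence down.
grad = (p_before - 1.0) * w
eps = 0.3
x_adv = x + eps * np.sign(grad)
print(f"detector confidence: {p_before:.2f} -> {detect(x_adv):.2f}")

In practice, a red team would sweep eps and report the smallest perturbation that evades detection, which feeds directly back into hardening the detection and mitigation algorithms.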

Contact

Ryan King

Senior Scientist, Applied Mathematics, Computational Science

[email protected]
303-275-4182

Richard Macwan

Manager, Cyber-Resilience Science Group

[email protected]
303-275-4579



Last Updated Feb. 17, 2026