Purdue University researchers devise a “self-aware” program to thwart hacking attempts

It sounds like a scene from a spy thriller. An attacker gets through the IT defenses of a nuclear power plant and feeds it fake, realistic data, tricking its computer systems and personnel into thinking operations are normal. The attacker then disrupts the function of key plant machinery, causing it to misperform or break down. By the time system operators realize they’ve been duped, it’s too late, with catastrophic results.

The scenario isn’t fictional; it happened in 2010, when the Stuxnet virus was used to damage nuclear centrifuges in Iran. And as ransomware and other cyberattacks around the world increase, system operators worry more about these sophisticated “false data injection” strikes. In the wrong hands, the computer models and data analytics – based on artificial intelligence – that ensure smooth operation of today’s electric grids, manufacturing facilities, and power plants could be turned against themselves.

Purdue University’s Hany Abdel-Khalik has come up with a powerful response: to make the computer models that run these cyberphysical systems both self-aware and self-healing. Using the background noise within these systems’ data streams, Abdel-Khalik and his students embed invisible, ever-changing, one-time-use signals that turn passive components into active watchers. Even if an attacker is armed with a perfect duplicate of a system’s model, any attempt to introduce falsified data will be immediately detected and rejected by the system itself, requiring no human response.

“We call it covert cognizance,” said Abdel-Khalik, an associate professor of nuclear engineering and researcher with Purdue’s Center for Education and Research in Information Assurance and Security (CERIAS). “Imagine having a bunch of bees hovering around you. Once you move a little bit, the whole network of bees responds, so it has that butterfly effect. Here, if someone sticks their finger in the data, the whole system will know that there was an intrusion, and it will be able to correct the modified data.”
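As a rough illustration of that embedding idea, consider hiding a one-time keyed tag in the least-significant digits of each reading, where it is indistinguishable from sensor noise; a receiver holding the shared key can then reject any sample whose tag does not verify. This is a minimal sketch under assumed details (HMAC tags, integer readings in milli-units, a per-sample counter), not the actual covert cognizance scheme:

```python
import hmac
import hashlib

class WatermarkedChannel:
    """Minimal sketch: hide a one-time keyed tag in the low-order digits
    of a reading so falsified samples can be detected. Illustrative only;
    names and scheme are assumptions, not Purdue's implementation."""

    def __init__(self, key: bytes):
        self.key = key

    def _tag(self, reading_milli: int, counter: int) -> int:
        # One-time-use tag: tied to a shared secret, the reading itself,
        # and a per-sample counter, so replays and forgeries won't verify.
        msg = f"{counter}:{reading_milli}".encode()
        digest = hmac.new(self.key, msg, hashlib.sha256).digest()
        return int.from_bytes(digest[:2], "big") % 1000

    def embed(self, reading_milli: int, counter: int) -> int:
        # Stash the tag in three extra low-order digits, where it looks
        # like ordinary measurement noise.
        return reading_milli * 1000 + self._tag(reading_milli, counter)

    def verify(self, sample: int, counter: int) -> bool:
        reading_milli, tag = divmod(sample, 1000)
        return tag == self._tag(reading_milli, counter)

chan = WatermarkedChannel(key=b"shared-secret")
sample = chan.embed(reading_milli=72415, counter=0)  # e.g. 72.415 units
assert chan.verify(sample, counter=0)          # genuine sample accepted
assert not chan.verify(sample ^ 1, counter=0)  # corrupted tag is rejected
```

An attacker who alters a reading, even with a perfect copy of the plant model, cannot regenerate the matching tag without the secret key, so the falsified sample fails verification.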

Abdel-Khalik will be the first to say that he is a nuclear engineer, not a computer scientist. But today, critical infrastructure systems in energy, water, and manufacturing all use advanced computational techniques, including machine learning, predictive analytics, and artificial intelligence. Employees use these models to monitor readings from their machinery and verify that they are within normal ranges.

Trust through self-awareness

From studying the efficiency of reactor systems and how they respond to equipment failures and other disruptions, Abdel-Khalik grew familiar with the “digital twins” employed by these facilities: duplicate simulations of data-monitoring models that help system operators determine when true errors arise.
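In spirit, such a twin-based check is a residual test: flag any reading that strays from the twin’s prediction by more than normal sensor noise allows. A hypothetical sketch (the function name and threshold are illustrative, not the facilities’ actual logic):

```python
def twin_flags_error(measured: float, twin_predicted: float,
                     noise_sigma: float, k: float = 3.0) -> bool:
    """Flag a reading whose deviation from the digital twin's prediction
    exceeds k standard deviations of expected sensor noise (illustrative)."""
    return abs(measured - twin_predicted) > k * noise_sigma

# A reading inside the twin's noise band looks normal;
# a genuine fault drifts outside it.
assert not twin_flags_error(measured=300.4, twin_predicted=300.0, noise_sigma=0.5)
assert twin_flags_error(measured=305.0, twin_predicted=300.0, noise_sigma=0.5)
```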

But gradually he became interested in intentional, rather than accidental, failures, particularly what could happen when a malicious attacker has a digital twin of their own to work with. It’s not a far-fetched situation, as the simulators used to control nuclear reactors and other critical infrastructure can be easily acquired. There’s also the perennial risk that someone inside a system, with access to the control model and its digital twin, could attempt a sneak attack.
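The danger can be stated concretely: any defense that accepts readings close to a model’s prediction is blind to an attacker who owns a copy of that model, because the attacker can replay the model’s own output as fake sensor data. A hypothetical sketch (names and tolerance are assumptions for illustration):

```python
def accepted(measured: float, predicted: float, tol: float = 1.5) -> bool:
    # A plausibility check that trusts anything near the model's prediction.
    return abs(measured - predicted) <= tol

predicted = 300.0            # what the operator's model expects
injected = predicted + 0.2   # attacker's twin output plus fabricated "noise"
assert accepted(injected, predicted)  # falsified data passes unnoticed
```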

“Traditionally, your defense is as good as your knowledge of the model. If they know your model pretty well, then your defense can be breached,” said Yeni Li, a recent graduate from the group, whose Ph.D. research focused on the detection of such attacks using model-based methods.

Abdel-Khalik added, “Any type of system right now that is based on the control looking at information and making a decision is vulnerable to these types of attacks. If you have access to the data, and then you change the information, then whoever’s making the decision is going to be basing their decision on fake data.”