**Stealing AI Models: Researchers Find a Surprisingly Easy Way**

Artificial intelligence (AI) models have become increasingly valuable assets for companies and organizations. However, they can also be surprisingly easy to steal, as researchers at North Carolina State University recently demonstrated. In a new paper, the team describes a technique that captures the electromagnetic signature of a running neural network, effectively stealing the model’s architecture and hyperparameters.

The researchers used an electromagnetic probe, together with several pre-trained, open-source AI models, to analyze the radiation emitted by a Google Edge Tensor Processing Unit (TPU) while it was actively running. By comparing that electromagnetic field data to data captured while other AI models ran on the same type of chip, they were able to determine the target model’s architecture and specific characteristics, known as layer details, with “99.91% accuracy”.
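To make the capture step concrete, the sketch below shows one way a recorded electromagnetic trace could be split into per-layer signatures before any comparison is made. This is an illustrative sketch only, not the researchers’ code: the NumPy-based approach, the thresholds, the window size, and the function names are all assumptions.

```python
# Hypothetical sketch: segment a captured EM trace into per-layer signatures.
# Assumes the trace is a 1-D NumPy array of field samples; thresholds and
# window sizes are illustrative, not values from the paper.
import numpy as np

def segment_trace(trace: np.ndarray, window: int = 256, rel_threshold: float = 0.2):
    """Split a trace into segments separated by low-activity gaps."""
    # Smooth the signal power so per-layer bursts stand out against the noise floor.
    power = np.convolve(trace ** 2, np.ones(window) / window, mode="same")
    active = power > rel_threshold * power.max()

    segments, start = [], None
    for i, flag in enumerate(active):
        if flag and start is None:
            start = i                        # burst begins
        elif not flag and start is not None:
            segments.append(trace[start:i])  # burst ends -> one layer's signature
            start = None
    if start is not None:
        segments.append(trace[start:])
    return segments

# Example with a synthetic trace: three "layers" separated by quiet gaps.
rng = np.random.default_rng(0)
quiet = rng.normal(0, 0.01, 2000)
burst = lambda n: rng.normal(0, 1.0, n)
trace = np.concatenate([quiet, burst(3000), quiet, burst(5000), quiet, burst(1500), quiet])
print([len(s) for s in segment_trace(trace)])  # -> three segment lengths
```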

**How the Attack Works**

The attack begins with an electromagnetic probe that captures the radiation emitted while the AI model runs on the TPU. This radiation is then compared to data captured while other, known AI models run on the same type of chip. By analyzing this data, the researchers can determine the model’s architecture and hyperparameters, including the number of layers, the number of nodes in each layer, and the types of activation functions used.
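As a rough illustration of that comparison step, the following hypothetical sketch matches an unknown layer’s signature against a small library of reference signatures recorded from known layer configurations on the same type of chip. The correlation metric, the reference labels, and the function names are assumptions made for illustration; the paper’s actual extraction pipeline may differ.

```python
# Hypothetical sketch of the comparison step: match an unknown layer's EM
# signature against reference signatures recorded from known layer
# configurations on the same chip. Correlation-based matching is an
# assumption for illustration, not the paper's stated method.
import numpy as np

def normalized_correlation(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized correlation between two signatures of possibly different length."""
    n = min(len(a), len(b))
    a = (a[:n] - a[:n].mean()) / (a[:n].std() + 1e-12)
    b = (b[:n] - b[:n].mean()) / (b[:n].std() + 1e-12)
    return float(np.dot(a, b) / n)

def identify_layer(signature: np.ndarray, reference_db: dict) -> str:
    """Return the label of the best-matching known layer configuration.

    reference_db maps labels like "conv3x3_relu" to reference signatures
    captured while that layer ran on the same type of chip.
    """
    scores = {label: normalized_correlation(signature, ref)
              for label, ref in reference_db.items()}
    return max(scores, key=scores.get)

# Toy usage: the unknown signature is a noisy re-capture of the "conv3x3_relu" reference.
rng = np.random.default_rng(1)
ref_db = {
    "conv3x3_relu": rng.normal(0, 1, 4000),
    "dense_softmax": rng.normal(0, 1, 4000),
}
unknown = ref_db["conv3x3_relu"] + rng.normal(0, 0.1, 4000)
print(identify_layer(unknown, ref_db))  # -> "conv3x3_relu"
```

Repeating this classification layer by layer is what lets the captured radiation be turned back into a full description of the model’s architecture.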

**Concerns and Implications**

The ability to steal AI models with “99.91% accuracy” raises significant concerns about their security. As Ashley Kurian, a Ph.D. student at NC State and co-author of the paper, noted, the theft of an AI model is essentially the theft of proprietary information, since the model requires significant time and computing resources to develop. That could have serious consequences for companies that rely on AI models to generate revenue.

**Potential Applications**

While the researchers did not specifically test the feasibility of stealing AI models running on smartphones, Kurian speculated that it would be possible, albeit more challenging because of those devices’ compact designs. The technique could be used to steal AI models running on edge devices or servers, highlighting the need for better physical security measures.

**Response from Industry**

Security researchers have weighed in on the findings. Mehmet Sencan, a security researcher at the AI standards nonprofit Atlas Computing, noted that “anyone deploying their models on edge or in any server that is not physically secured would have to assume their architectures can be extracted through extensive probing”.

**FAQs**

Q: How do the researchers steal the AI models?
A: The researchers use an electromagnetic probe to capture the radiation emitted by the AI model while it is running on the TPU. They then compare this data to data captured while other AI models run on the same type of chip to determine the model’s architecture and hyperparameters.

Q: How accurate is the technique?
A: The researchers claim that they can determine the model’s architecture and hyperparameters with “99.91% accuracy”.

Q: Can this technique be used to steal AI models running on smartphones?
A: The researchers did not specifically test the feasibility of stealing AI models running on smartphones, but Kurian speculated that it would be possible, albeit more challenging due to the compact design of these devices.

Q: What are the implications of this technique?
A: The ability to steal AI models with “99.91% accuracy” raises significant concerns about the security of these models, as the theft of an AI model is essentially the theft of proprietary information. This could have serious consequences for companies that rely on AI models to generate revenue.

**Conclusion**

The researchers’ technique highlights the importance of physical security measures for protecting AI models. As AI plays an increasingly important role in our lives, it is essential to prioritize the security and integrity of these models. The findings of this study serve as a wake-up call for the AI community to take a closer look at the measures in place to protect its intellectual property.
