Post-quantum cryptography is advancing fast, but Ferhat Yaman wants us to start paying attention to the layer we rarely see: the hardware. As a researcher at AMD’s Product Security Office, Yaman works at the intersection of post-quantum cryptography, side-channel attacks, and AI privacy. His research shows that no matter how secure an algorithm is, if your hardware leaks, your secrets are already at risk.
In a recent episode of Shielded: The Last Line of Cyber Defense, Yaman joined host Johannes Lintzen to break down the practical realities of implementing quantum-safe systems in silicon. From side-channel analysis to AI model extraction, his work points to one message: building secure systems for a post-quantum world starts before your code even runs.
“I started working on this more than eight years ago,” he explains. His early research explored homomorphic encryption and AI privacy, two areas where computations can be done without decrypting the data itself. “That introduced me to lattice cryptography, and I saw that NIST was working on standardization. So I kept going.”
But when it comes to hardware, theoretical security isn’t enough. “If you design in software, it’s one thing,” he says. “If you design in hardware, you have to make sure you’re not leaking anything.” That’s where side-channel attacks come in, exploiting electromagnetic radiation or power consumption to extract cryptographic secrets. And even with post-quantum algorithms like Kyber or Dilithium, those risks are real.
“If we design a specific modular reduction algorithm for Kyber or Dilithium, we may figure out what kind of key is used during that operation,” he explains. “You can pivot that specific part and measure what kind of key is used during that phase.” These types of attacks don’t break the math. They break the physical behavior of the chip.
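To make the leakage concern concrete, here is a hedged illustration (not taken from Yaman's or AMD's code): a branchy modular reduction for Kyber's modulus q = 3329, whose data-dependent branch can show up in a timing or power trace, next to a branchless constant-time variant of the kind hardware designers prefer.

```python
Q = 3329  # Kyber's prime modulus

def reduce_branchy(a):
    # Data-dependent branch: execution differs depending on whether
    # a >= Q, which a power or timing trace can distinguish.
    return a - Q if a >= Q else a

def reduce_ct(a):
    # Branchless conditional subtraction for a in [0, 2*Q):
    # the arithmetic right shift yields an all-ones mask exactly
    # when a - Q is negative, silently re-adding Q in that case.
    r = a - Q
    return r + (Q & (r >> 31))
```

Both functions return identical values; the difference is only visible physically. Python offers no real constant-time guarantees, so this only sketches the idiom used in hardware and C implementations.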
And it’s not just cryptography that’s vulnerable. Yaman’s team also showed that AI models running on edge hardware like Google’s Edge TPU can leak structure and hyperparameters through electromagnetic signals. “You put that model in a self-driving car, and it’s running. Or you can just tap on it and start taking measurements, such as power consumption or EM emissions, because it’s your device. And then, layer by layer, you can extract how many nodes are in the network, how many layers, and what kind of layer it is. Whether it’s convolutional, fully connected, or linear, we can reveal all of that just by measuring.”
That’s not just a privacy issue. It’s an intellectual property risk for anyone shipping AI models in physical products. But the good news, Yaman says, is that these risks can be reduced if addressed early.
His team at AMD uses a mix of commercial and open-source tools like Keysight’s side-channel analysis suite, Ansys simulations, and ChipWhisperer to evaluate designs before tape-out. “If we can shift side-channel evaluations to pre-silicon, before taping out the chip, that makes the design more resilient to attacks. That’s what we do at AMD.”
He also emphasizes the importance of countermeasures such as masking, shuffling, and randomness insertion: techniques that make leakage harder to correlate with sensitive operations. “Each of them works differently. Some diminish the leakage, some make it harder to recognize, and some introduce randomness so attackers can’t match patterns. But you have to decide based on timing, area, and your threat model.”
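The idea behind masking can be shown in a toy first-order Boolean masking sketch (illustrative only, not a production countermeasure): the secret is split into two random shares, and computation proceeds on the shares without ever recombining them, so no single intermediate value correlates with the secret.

```python
import secrets

def mask(secret_byte):
    # Split the secret into two shares: secret = share ^ m.
    # Each share on its own is uniformly random and reveals nothing.
    m = secrets.randbits(8)
    return (secret_byte ^ m, m)

def masked_xor_const(shares, c):
    # Operate on the shares without recombining the secret:
    # XOR the public constant into one share only.
    a, m = shares
    return (a ^ c, m)

def unmask(shares):
    # Recombine only at the very end, ideally in a protected register.
    a, m = shares
    return a ^ m
```

Linear operations like XOR carry over to shares for free; nonlinear operations (such as an S-box or modular multiplication) require dedicated masked gadgets, which is where the timing and area costs Yaman mentions come from.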
One promising direction Yaman points to is hybrid cryptography systems that combine classical and post-quantum algorithms. AMD’s open-source Caliptra project does exactly that, giving users the option to switch between both. “It’s not just about replacing everything. You have to give that option to the user. You need flexibility.”
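Caliptra’s internals aren’t shown here, but a common hybrid construction derives the session key from both a classical and a post-quantum shared secret, so the result stays safe as long as either one holds. A minimal sketch using HKDF-SHA-256 over the concatenated secrets (the byte strings and function name are placeholders, not Caliptra’s API):

```python
import hashlib
import hmac

def hybrid_key(classical_ss: bytes, pq_ss: bytes,
               info: bytes = b"hybrid-kdf") -> bytes:
    # Concatenate both shared secrets: the derived key is
    # unpredictable as long as EITHER input stays secret.
    ikm = classical_ss + pq_ss
    # HKDF-Extract with an all-zero salt, then one Expand block.
    prk = hmac.new(b"\x00" * 32, ikm, hashlib.sha256).digest()
    return hmac.new(prk, info + b"\x01", hashlib.sha256).digest()
```

In practice the classical secret would come from, say, ECDH, and the post-quantum one from an ML-KEM decapsulation; the key point is that neither secret is used raw, and both feed a single KDF.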
The same is true for homomorphic encryption, which allows data to be processed while still encrypted. But despite over 15 years of research, it’s still not widely deployed. “We developed software, but it is still not practically doable. To do that, we have to accelerate our design and build new hardware based on the needs of homomorphic encryption.”
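To make the “compute on encrypted data” idea concrete, here is textbook Paillier with deliberately tiny primes, an additively homomorphic scheme where multiplying two ciphertexts adds the underlying plaintexts. This is a classroom sketch, not how production homomorphic encryption libraries work; real parameters are thousands of bits.

```python
import math
import random

def lcm(a, b):
    return a * b // math.gcd(a, b)

# Toy primes for illustration only; real keys use ~1536-bit primes.
p, q = 61, 53
n = p * q
n2 = n * n
g = n + 1
lam = lcm(p - 1, q - 1)
mu = pow(lam, -1, n)  # valid because (n+1)^lam mod n^2 = 1 + lam*n

def enc(m):
    # Ciphertext: g^m * r^n mod n^2, with r random and coprime to n.
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    # L(x) = (x - 1) / n recovers m*lam; multiply by mu = lam^-1 mod n.
    L = (pow(c, lam, n2) - 1) // n
    return (L * mu) % n
```

The homomorphic property is what makes the scheme useful: `dec((enc(a) * enc(b)) % n2) == a + b`, so a server can add encrypted values without ever seeing them. It is also what makes it slow, since every addition costs big-integer modular arithmetic, which is why Yaman argues for hardware purpose-built for these workloads.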
His team is now looking at how to apply techniques like quantization and pruning, commonly used in AI, to improve performance for encrypted workloads. “Even our applications have to evolve. We have to think differently and implement differently. The way we design for normal AI doesn’t work in this space.”
Throughout the conversation, one theme comes up again and again: early decisions matter. “Once you ship the chip, you can’t go back and fix it. So everything, from PQC to leakage mitigation to testing, has to be part of the design process from day one.”
The final takeaway? You can’t bolt on post-quantum security after the fact. If your hardware isn’t designed to be quiet, it doesn’t matter how strong your encryption is.
You can hear the full conversation with Ferhat Yaman on Shielded: The Last Line of Cyber Defense, available now on Apple Podcasts, Spotify, and YouTube Podcasts.
About Ferhat Yaman
Ferhat Yaman is a hardware security researcher at AMD’s Product Security Office, specializing in side-channel analysis, post-quantum cryptography, and AI privacy. His work spans from lattice cryptography and homomorphic encryption to electromagnetic leakage analysis and secure bootloader verification. He has contributed to projects including AMD’s Caliptra root-of-trust and has conducted pioneering research on AI model extraction from hardware accelerators. With a focus on real-world vulnerabilities and pre-silicon validation, Yaman’s work sits at the critical intersection of cryptographic innovation and practical implementation.
With the shift to post-quantum hardware security accelerating, Yaman’s message is clear: protecting systems requires more than new math; it demands early testing, layered defenses, and security built into the silicon itself.