“And it became much easier to become a cybercriminal because you don’t have to do all this hacking yourself.” – Geri Revay
Cybercrime has changed dramatically over the past decade. What once required deep technical expertise and extensive preparation can now be carried out through a structured ecosystem of services. Tools, infrastructure, and expertise that were once scarce are now available through underground marketplaces.
In this episode of Shielded: The Last Line of Cyber Defense, Geri Revay from Fortinet and Haon Park from AIM Intelligence explore how this shift is transforming both cyber attacks and cyber defense.
The conversation reveals how cybercrime has become industrialized, how artificial intelligence is accelerating the pace of attacks, and why the rapid deployment of AI systems is creating entirely new security risks.
Cybercrime Has Become a Supply Chain
Cyber attacks no longer rely on a single individual performing every step of an operation. Instead, the work is divided among specialized actors who focus on different stages of the attack process.
Some groups concentrate on gaining access to corporate networks. These initial access brokers sell that access to other attackers. Other groups develop ransomware tools that can be deployed against compromised systems. Some even specialize in negotiating ransom payments.
This division of labor lowers the barrier to entry for cybercriminals. An attacker no longer needs to develop malware, conduct reconnaissance, and manage payment negotiations alone. Each component of the attack chain can be purchased as a service. The result is a cybercrime ecosystem that operates more like a business supply chain than an isolated hacking operation.
AI Is Accelerating the Speed of Attacks
Artificial intelligence is adding another layer of acceleration to cybercrime. AI tools allow attackers to generate malware variants, craft phishing emails, and automate reconnaissance faster than ever before. Even tasks that previously required skilled developers can now be assisted by AI driven tools.
However, AI does not only benefit attackers.
Security teams collect enormous volumes of telemetry from networks, endpoints, and cloud infrastructure. AI systems can analyze this data to identify behavioral anomalies and emerging threats. Over time, this data advantage may strengthen defensive capabilities. The key difference lies in time horizons. Attackers may gain a short term advantage through speed, while defenders may gain a long term advantage through data.
Operational Technology Changes the Security Model
Cybersecurity strategies that work in traditional IT environments often do not translate directly into operational technology environments.
Industrial systems control physical infrastructure such as manufacturing equipment, energy systems, and transportation networks. In these environments, availability and safety are often more critical than protecting data.
Many devices remain in operation for decades and cannot be patched frequently. Even routine network scanning can disrupt sensitive systems.
Because of these constraints, organizations must rely on alternative security approaches such as monitoring, segmentation, and deception technologies rather than frequent updates.
AI Systems Are Becoming the Next Attack Surface
The rapid deployment of AI models and agents is introducing a new category of cybersecurity risk. Enterprises are deploying AI powered chatbots, internal assistants, and automated decision systems across their operations. These systems often have access to internal data, workflows, and business processes.
If attackers manipulate inputs or exploit vulnerabilities in these models, they may influence how the system behaves. This could lead to data exposure, operational disruption, or incorrect automated decisions.
To address these risks, Haon Park’s work focuses on automated AI red teaming. Instead of relying solely on human testers, AI driven attacker agents simulate large numbers of potential attacks to identify vulnerabilities before they are exploited.
When AI Moves Into the Physical World
One of the most significant emerging risks involves AI systems that interact with the physical world. Autonomous vehicles, drones, and robotics rely on multimodal inputs such as images, audio, and sensor data to interpret their environment. If attackers manipulate these signals, they may influence how the system behaves.
Unlike traditional cybersecurity incidents, failures in physical AI systems could result in real world consequences.
As AI systems move beyond software and into infrastructure, cybersecurity must expand to address risks that affect both digital and physical environments.
The Takeaway
The cybersecurity landscape is evolving in two parallel directions. On one side, cybercrime has become faster and more scalable through specialization and automation. On the other, organizations are deploying new technologies such as AI agents and autonomous systems that introduce entirely new security challenges.
Defending against these threats requires more than traditional security practices. It requires adapting security strategies to match the speed, scale, and complexity of modern cyber threats.
You can hear the full conversation with Geri Revay and Haon Park on Shielded: The Last Line of Cyber Defense, available now on Apple Podcasts, Spotify, and YouTube.
About the Guests
Geri Revay
Geri Revay is a Principal Security Researcher at Fortinet’s FortiGuard Labs. With more than fifteen years of experience in security research, ethical hacking, malware analysis, and penetration testing, he focuses on threat intelligence and advanced attack techniques that affect enterprises, governments, and critical infrastructure.
Haon Park
Haon Park is Co-Founder and CTO of AIM Intelligence. His work focuses on securing AI agents and enterprise AI systems through automated red teaming, policy driven guardrails, and continuous risk testing across multimodal AI systems.

