Abstract
Cybersecurity threats have evolved significantly from the early days of individual hackers experimenting independently. In a conversation at Mobile World Conference 2026, Geri Revay explains how cybercrime has matured into a structured and profitable ecosystem that resembles a business supply chain. Instead of one attacker performing every step of an intrusion, the work is now divided across specialized groups. Some actors focus on gaining initial access to corporate networks and then sell that access to others. Other groups build ransomware tools, while separate teams manage ransom negotiations or distribute stolen data.
This division of labor dramatically lowers the barrier to entry for cybercriminals. Attackers no longer need deep technical expertise to carry out an operation. Many tools and services can now be purchased directly from underground marketplaces. As a result, cybercrime has become more opportunistic, more scalable, and more accessible than it was even a few years ago.
Artificial intelligence accelerates these attacks, but defenders have access to AI-driven capabilities as well. Security teams already collect enormous amounts of telemetry through logs, network monitoring, and endpoint detection tools. AI systems can analyze this data to detect anomalies, identify emerging threats, and automate parts of the defensive workflow. Over time, this access to large datasets may give defenders a strategic advantage.
The conversation also explores how cybersecurity challenges differ between traditional IT environments and operational technology environments. Industrial systems often prioritize operational availability and safety above all else. Many devices run for decades and cannot easily be patched or modified. This creates a different security model where monitoring, segmentation, and deception technologies play a more important role than frequent system updates.
In a second conversation, Haon discusses work focused on automated AI red teaming. Instead of relying only on human testers, AI-driven attacker agents can simulate thousands of potential attacks against an AI model or service. This allows organizations to identify vulnerabilities earlier and test whether guardrails and policies are functioning correctly.
One of the most significant emerging risks involves physical AI systems. Autonomous vehicles, drones, and robotics rely on multimodal inputs such as images, audio, and sensor data to interpret their environment. If attackers manipulate these inputs, they may influence how the system behaves. As AI systems move from digital environments into the physical world, the consequences of security failures could extend beyond data breaches and into real-world harm.
Across both conversations, a consistent theme emerges. The cybersecurity landscape is expanding in both scale and complexity. Attackers are accelerating their operations through automation and specialization, while defenders must also learn how to secure the new technologies they are building. Organizations that fail to address AI related risks early may discover vulnerabilities that traditional security frameworks were never designed to handle.
What You’ll Learn:
- How cybercrime evolved from individual hackers to a structured ecosystem
- Why ransomware services and access brokers lowered the barrier to entry for attackers
- How artificial intelligence accelerates cyber attacks and defensive analysis
- Why defenders may gain long term advantages through data and telemetry
- How operational technology environments create unique security challenges
- Why enterprise AI systems introduce a new category of attack surface
- How automated AI red teaming identifies vulnerabilities faster than manual testing
- Why physical AI systems may create the next major cybersecurity risk
