Cybersecurity in the AI Age

Issue 12.3

As AI increasingly intersects with nearly every dimension of digital security, so too grows the awareness that conditions must be created for its secure use. As Space Hellas Group General Counsel Konstantinos Argyropoulos puts it, “there is an acceleration in the way AI interfaces with cybersecurity,” pointing to an emerging arms race in which malicious actors and defenders alike adopt increasingly automated tactics. Argyropoulos shared his thoughts on this during the CEE Legal Matters GC Summit 2025 in Prague.

Growing Influence in Cybersecurity

Argyropoulos observes that AI can radically change both offense and defense. “As for offense, we must be aware of the fact that AI is frequently used to update the mechanisms via which malicious code is created in a system,” he warns, noting how it automates hacking steps, from probing networks to generating ever more convincing phishing attempts. Yet he also underlines AI’s protective potential. “On defense, AI speeds up discovery and repair of software or system weaknesses,” he says, pointing to areas such as “malicious code, intrusion detection, and other types of anomalous activities.” This shift is fueling massive investment.
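The defensive use Argyropoulos describes, automated detection of anomalous activity, can be illustrated with a minimal sketch. This is a toy statistical detector, not any real security product: the host names, request counts, and threshold below are invented for illustration.

```python
# Toy anomaly detector: flags hosts whose request volume deviates
# sharply from the historical mean (z-score). Illustrative only;
# all data and the threshold are hypothetical.
from statistics import mean, stdev

def flag_anomalies(history, current, threshold=3.0):
    """Return hosts whose current request count sits more than
    `threshold` standard deviations above the historical average."""
    mu, sigma = mean(history), stdev(history)
    return {host: count for host, count in current.items()
            if sigma > 0 and (count - mu) / sigma > threshold}

# Baseline: typical requests-per-minute observed in the past.
history = [98, 102, 101, 99, 100, 103, 97]
# Current window: one host suddenly makes far more requests.
current = {"10.0.0.5": 101, "10.0.0.9": 480}
print(flag_anomalies(history, current))  # {'10.0.0.9': 480}
```

Real intrusion-detection systems use far richer features and learned models, but the principle is the same: establish a baseline of normal behavior and surface what diverges from it.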

Alphabet’s recent USD 32 billion acquisition of Wiz, a cloud security startup, underscores the urgency. Moreover, Argyropoulos points out that data centers proliferate globally – over 11,000 worldwide – and often integrate advanced AI functions, yet each expansion can open new security gaps. “AI is important in use and as a strategy, but we must identify the risks we need to mitigate.”

Furthermore, new forms of fraud highlight how AI seamlessly blends into criminal playbooks. “An advanced type of scam we call pig-butchering involves scammers ‘fattening’ the victim with illusions of trust,” Argyropoulos explains. “They spend weeks or months posing as friends, business partners, or romantic interests, then ‘slaughter’ them by draining their resources.” He says there are documented cases with losses in the millions, driven by social engineering “often powered by AI chatbots and manipulative digital personas.”

Another chilling example is the deepfake scam at a UK engineering company. “A finance worker transferred USD 25 million after being on what appeared to be a legitimate video call with the CFO,” Argyropoulos recounts. “But the malicious actor used deepfake technology, so all participants were synthetic creations.” In both cases, AI helped criminals appear alarmingly authentic, underscoring how easily corporate procedures can be circumvented when a convincing – and automated – deception slips into an organization’s workflow.

Corporate Environment and Legal Strategies

AI agents are also reshaping daily routines inside companies. “Today’s agents can be set to work indefinitely,” Argyropoulos continues, “researching, opening new software tools, even placing orders online – all without human intervention.” This near-autonomous functionality boosts efficiency but raises critical oversight questions. “We would do well to be well informed and gain experience in controlling AI agents,” he notes, warning that a poorly designed system might ignore ethical or compliance constraints when pursuing a goal.
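The oversight concern raised here is commonly addressed with a human-in-the-loop gate: agent actions above a risk threshold are held for human approval rather than executed automatically. The sketch below is a hypothetical illustration of that pattern, not any vendor’s API; the risk tiers and example actions are invented.

```python
# Hypothetical human-in-the-loop gate for an autonomous agent.
# Low-risk actions run automatically; anything riskier is queued
# for a human decision.
from dataclasses import dataclass, field
from enum import Enum

class Risk(Enum):
    LOW = 1      # e.g. read-only research
    MEDIUM = 2   # e.g. installing a new software tool
    HIGH = 3     # e.g. placing an order, transferring funds

@dataclass
class ApprovalGate:
    pending: list = field(default_factory=list)   # awaiting human review
    executed: list = field(default_factory=list)

    def submit(self, action: str, risk: Risk) -> str:
        if risk is Risk.LOW:
            self.executed.append(action)
            return "executed"
        self.pending.append(action)               # human must approve first
        return "held for approval"

    def approve(self, action: str) -> None:
        self.pending.remove(action)
        self.executed.append(action)

gate = ApprovalGate()
print(gate.submit("search vendor documentation", Risk.LOW))   # executed
print(gate.submit("wire USD 25,000,000", Risk.HIGH))          # held for approval
```

The design choice is deliberate: the agent never decides for itself which actions are high-risk at runtime; the classification is fixed by humans in advance, which is exactly the control that a deepfake-driven request would have to defeat.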

Meanwhile, encryption tools protect private communications but complicate the detection of malicious activities. “Around 2.5 billion people already use services like WhatsApp or Apple’s iMessage,” he explains, and an additional billion joined the ranks when Facebook Messenger introduced default encryption. Although these measures enhance privacy, they also obscure messages where criminals coordinate attacks. Argyropoulos foresees a coming wave of “quantum-safe” algorithms, pointing out that quantum computing breakthroughs could undermine existing encryption standards. In his view, “security is as much a matter of technology as it is of strategy,” and staying ahead means adapting proactively to evolving threats.

From a legal standpoint, Argyropoulos stresses that “organizations must plan for cyber threats well before one occurs.” He cites pre-negotiation efforts that map out how a company might respond if ransomware hits, including setting up incident-response protocols and designating points of contact. “When an attack actually occurs,” he continues, “you need experienced legal teams and crisis responders who can decide whether to pay ransom or attempt a workaround. That’s where specialized roles, like ransom negotiators, come into play.”
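Pre-negotiated protocols of the kind Argyropoulos describes are often captured as a machine-readable runbook, so that responders follow agreed steps rather than improvising mid-attack. A minimal sketch of such a structure follows; every contact, step, and name in it is an invented placeholder, not a recommendation.

```python
# Hypothetical incident-response runbook: designated points of contact
# and first-hour steps agreed before any attack occurs. All entries
# are invented placeholders.
RUNBOOK = {
    "ransomware": {
        "contacts": {
            "it_security": "soc@example.com",
            "legal": "legal-oncall@example.com",
            "negotiator": "ransom-negotiator@example.com",
        },
        "first_hour": [
            "isolate affected hosts from the network",
            "preserve logs and disk images for forensics",
            "notify legal counsel and the insurer",
            "decide with counsel: pay, negotiate, or restore from backup",
        ],
    },
}

def escalation_order(incident: str) -> list:
    """Return who to contact, in order, for a given incident type."""
    c = RUNBOOK[incident]["contacts"]
    return [c["it_security"], c["legal"], c["negotiator"]]

print(escalation_order("ransomware"))
```

Keeping the escalation order explicit is what makes specialized roles like the ransom negotiator reachable in minutes rather than discovered mid-crisis.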

Afterward, a post-incident review helps strengthen defenses – installing fresh anti-malware solutions, running penetration tests, and sharing details with colleagues. “It’s essential to share relevant information inside the organization, well before situations escalate into crises,” Argyropoulos says, calling collaboration key to minimizing damage. He also mentions algorithmic impact assessments as a “proactive accountability mechanism” that can reveal potential blind spots in AI-driven security tools.

However, none of these measures succeed without qualified professionals. “Many companies are turning to AI as a tool for their cybersecurity needs,” Argyropoulos observes, “but AI alone cannot fill the workforce gap.” Europe, for instance, is short by roughly 200,000 cybersecurity experts. “Beyond technology, cybersecurity is people,” he emphasizes. Even sophisticated systems require humans who know how to interpret data, shape guidelines, and make strategic calls under pressure.

Moreover, AI oversight can alter workplace dynamics. “If decisions might be overruled by AI, people can feel resentment or relief,” Argyropoulos notes. “We must understand how that psychological shift affects decision-making, especially when dealing with rapid, high-stakes choices like a cyberattack.” For him, having “a human in the loop” remains essential to guard against errors and consider broader ethical or legal consequences. Finally, as he puts it, “AI has become important in use and as a strategy – having identified risks, we now need to mitigate them.”

This article was originally published in Issue 12.3 of the CEE Legal Matters Magazine. If you would like to receive a hard copy of the magazine, you can subscribe here.