NordVPN’s CTO Marijus Briedis on dark web credential markets, the limits of breach monitoring, zero trust theatre, and whether the security industry is overstating its capabilities.
In 2025, IBM discovered more than 300,000 stolen ChatGPT credentials available for purchase on the dark web. In the same period, 1.8 billion credentials were stolen globally in just six months. These are not isolated incidents — they are symptoms of an industry adopting artificial intelligence faster than it can secure it, while attackers are using the same technology to strike with greater precision, scale, and sophistication than ever before.
Few people are better positioned to assess that reality than Marijus Briedis. As Chief Technology Officer at NordVPN since 2019, he has led the company’s technology vision across security, privacy, scalability, and user protection — a career grounded in government IT, freelance web development, and hands-on Linux administration long before cybersecurity became a boardroom conversation. He is a regular voice at international conferences and a contributor to sector-wide research on emerging threats and digital privacy.
He is also not given to reassurance for its own sake. In this conversation, Briedis addresses the questions the security industry too often deflects — on AI-driven crime, the theatre of zero trust compliance, the limits of breach monitoring, and what consumers should actually believe when companies tell them they are protected.
Full interview:
IBM found over 300,000 ChatGPT credentials for sale on the dark web in 2025. Are companies rushing into AI adoption faster than they can secure it — and creating their own next breach?
Companies are not necessarily creating their next breach by adopting AI, but many are definitely expanding their attack surface faster than they are securing it. The problem is not AI itself; it is the way businesses rush to plug it into daily workflows without applying the same discipline they would to any other critical system.
In this case, the exposed ChatGPT credentials were linked to infostealer malware on infected devices, not evidence that the AI platform itself had been breached. That is an important distinction, because it shows the real weakness is often much more familiar: compromised endpoints, poor credential hygiene, and weak access controls around new tools.
If companies bring AI into everyday business processes before securing employee devices, identities, and permissions, they are not creating a completely new problem; they are extending old security gaps into a new environment. AI should be treated like any other business-critical system, with the same standards for endpoint protection, strong authentication, conditional access, and clear rules around what data can be shared.
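To make that concrete, here is a minimal sketch of the kind of conditional-access gate Briedis describes, placed in front of an internal AI tool. It is illustrative only; the names, fields, and policy values are hypothetical, not NordVPN's implementation.

```python
# Illustrative sketch: gating an internal AI tool behind the same
# conditional-access checks applied to any other business-critical system.
# All names and policy values here are hypothetical.
from dataclasses import dataclass

@dataclass
class AccessContext:
    user_id: str
    mfa_verified: bool          # strong authentication completed
    device_managed: bool        # endpoint enrolled, patched, and monitored
    data_classification: str    # sensitivity of the data being sent

# Clear rules around what data may be shared with the model
ALLOWED_CLASSIFICATIONS = {"public", "internal"}

def may_use_ai_tool(ctx: AccessContext) -> bool:
    """Apply endpoint, identity, and data-sharing rules before granting access."""
    if not ctx.mfa_verified:
        return False  # strong authentication is non-negotiable
    if not ctx.device_managed:
        return False  # infostealers live on unmanaged endpoints
    return ctx.data_classification in ALLOWED_CLASSIFICATIONS

# Example: an unmanaged laptop is denied, regardless of a valid MFA login.
print(may_use_ai_tool(AccessContext("u123", True, False, "internal")))  # False
```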
With 1.8 billion credentials stolen in just six months, tools that alert users after exposure seem reactive by design. Is breach monitoring real security — or just a polished way to manage failure?
Breach monitoring is not fake security, but it is not enough on its own. It is a visibility tool, not a shield. When 1.8 billion credentials are stolen in just six months, the reality is that prevention will sometimes fail, and companies need a way to detect exposure quickly before stolen data is reused in account takeovers, fraud, or phishing. That does not make monitoring meaningless. It makes it part of a realistic security strategy.
The real problem starts when breach monitoring is treated as the strategy rather than as one layer within it. If a company waits until credentials appear in a dump before reacting, it is already behind. Real security means reducing the likelihood of theft in the first place through strong authentication, unique passwords, passkeys where possible, limited permissions, device security, and ongoing user education. Monitoring helps you manage the damage, but strong cyber hygiene helps you avoid becoming part of the damage statistics at all.
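For readers curious about the mechanics, this is roughly how credential-exposure checks work in practice. The sketch below queries the public Have I Been Pwned "Pwned Passwords" range API, which uses a k-anonymity scheme: only the first five characters of the password's SHA-1 hash ever leave the device. It illustrates the category of tooling Briedis describes, not any specific vendor's product.

```python
# Minimal sketch: check whether a password appears in known breach corpora
# via the Have I Been Pwned range API (k-anonymity: only a 5-char hash
# prefix is sent; the full hash never leaves the device).
import hashlib
import requests

def times_password_breached(password: str) -> int:
    """Return how many times a password appears in known breach dumps."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}",
                        timeout=10)
    resp.raise_for_status()
    # Response is one "HASH-SUFFIX:COUNT" pair per line for this prefix.
    for line in resp.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

# Detection is the reactive half; rotating the credential is the response.
if times_password_breached("correct horse battery staple") > 0:
    print("Exposed: rotate this credential and enable MFA or a passkey.")
```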
AI-powered cybercrime kits are now sold off the shelf. Are security vendors truly making customers safer with AI — or accelerating an arms race that attackers are winning?
AI is not changing the fundamental nature of cybersecurity; it is accelerating it. This has always been an arms race between attackers and defenders, and AI is simply increasing the speed, scale, and accessibility on both sides. That is why the question is not whether AI makes this race more intense, because it clearly does, but whether it is being used in a way that actually improves security.
AI can help reduce attack surfaces, strengthen detection, support faster response, and help organizations identify vulnerabilities before they are exploited. So AI is not only a tool for malicious actors to create more convincing scams and automate attacks more efficiently; it can also give defenders a better chance to prepare, adapt, and respond. The real difference lies in implementation. If AI is used thoughtfully to support real security outcomes, it can make customers safer. If it is used as a buzzword without improving the fundamentals, it only adds noise to an already fast-moving threat landscape.
Zero trust is increasingly mandated — but often implemented for compliance. Is it becoming the next box-ticking exercise rather than a meaningful shift in security?
Zero trust becomes meaningless when it is treated as a label rather than a real change in how access is controlled. The risk today is that many companies treat it as a compliance task: they update policies, add a tool, and assume the job is done, without actually reducing trust across their systems.
But zero trust only works when it changes daily security practices by forcing continuous verification, limiting unnecessary access, and making it harder for a single compromised account or device to lead to a much larger breach. If it does not materially change who can access what, under which conditions, and how suspicious behavior is detected, then it is not a security shift at all; it is just box-ticking with a more sophisticated name. So that is not a flaw in zero trust as a concept; it is a flaw in how it is adopted.
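A rough sketch of what that looks like in code: a per-request authorization decision that re-verifies identity, device posture, and least-privilege access every time, rather than trusting a login that happened hours ago. All names and policies here are hypothetical.

```python
# Illustrative sketch: zero trust as a per-request decision rather than a
# one-time login. Every call re-checks credentials, device posture, and
# whether this role actually needs this resource. Names are hypothetical.
from dataclasses import dataclass

@dataclass
class Request:
    user_role: str
    token_fresh: bool       # short-lived credential, recently re-verified
    device_compliant: bool  # posture checked on every request, not at login
    resource: str

LEAST_PRIVILEGE = {          # who can access what, under which conditions
    "engineer": {"ci-logs", "staging-db"},
    "support":  {"ticket-system"},
}

def authorize(req: Request) -> bool:
    """Continuous verification: trust nothing carried over from last time."""
    if not (req.token_fresh and req.device_compliant):
        return False  # a stale token or a drifting device ends the session
    return req.resource in LEAST_PRIVILEGE.get(req.user_role, set())

# A compromised support account still cannot reach the staging database.
print(authorize(Request("support", True, True, "staging-db")))  # False
```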
Most consumers don’t believe companies can defend against AI-driven threats. Are they wrong — or is the industry overstating its ability to protect them?
Consumers are not wrong to be skeptical because the industry often talks about AI as if it already has these threats under control, while in reality, the threat landscape is becoming more complex and difficult to manage. Companies can improve detection, automate response, and strengthen protection with AI, but that does not mean they can fully defend consumers from AI-driven threats, especially when attackers are also using the same technology to make scams more convincing, more personal, and harder to spot at scale.
Still, imperfect protection does not mean ineffective protection. Some security is always better than none, and defensive systems are evolving alongside these threats. AI is not a silver bullet, but it is becoming an important tool in helping companies detect, respond to, and contain risks more effectively.
As attackers shift from mass credential theft to targeted IP and algorithm theft, are consumer security tools — including VPNs — becoming less relevant?
A VPN was never meant to stop every type of cyberattack, and it will not save a company from a compromised developer endpoint or stolen internal access. What it still does is protect one important layer by reducing exposure on unsafe networks, helping secure online traffic in transit, and making routine online activity harder to intercept or profile. The mistake is expecting a consumer privacy tool to solve an enterprise theft problem. These are different risks.
NordVPN exited Russia rather than comply with state demands. Does taking that stance make companies like yours more exposed — and is the cost of principle rising in cybersecurity?
Yes, taking that stance can add pressure, but in cybersecurity, trust is often more valuable than short-term convenience. Companies that take a public stance may appear more exposed, but they are often operating with stronger security models precisely because they refuse to introduce systemic weaknesses. By contrast, complying with intrusive requirements can create structural risks that affect all users.


