
North Korean Hackers Automate Crypto Theft Using ChatGPT

Ashley Carter, Author at CoinMinutes. Updated March 24, 2026, 05:21 PM
North Korean state-backed hackers are using AI tools like ChatGPT to automate cryptocurrency theft, according to South Korean cybersecurity officials.

    Now it's machines, not masses of coders, that power state attacks - AI arms these efforts like never before, leaving the crypto industry racing just to keep pace.

    Worries about blockchain safety once fixated on "quantum computing" - a theoretical nightmare where machines get so strong they could break codes and wipe investors' crypto away overnight. Yet according to one of the industry's foremost cryptographers, that fear has been eclipsing a far more immediate and already-active danger. 

    Late in 2025, Kostas Kryptos Chalkias - co-founder and chief cryptographer at Mysten Labs (the development studio behind the Sui blockchain) - pointed out that large language models are closer to causing real harm than any quantum machine has been. With a doctorate in identity-based cryptography and over a decade of research into post-quantum algorithms, Chalkias's claim carries real weight, not least because he is not someone prone to headline-chasing alarmism.

    The Lazarus Group Enters the AI Era

    North Korea's Lazarus Group is hardly unknown to those who follow cryptocurrency. Backed by the government, this team of hackers stole vast sums of digital money over the last decade, with proceeds often funding missile programs while slipping past international sanctions. 2025, however, marked a definitive turning point in their operational model.

    The most striking example came in February 2025, when roughly $1.5 billion vanished from Bybit's systems in what federal agents - the FBI among them - attributed to hackers backed by North Korea. The single theft surpassed every earlier record, shaking trust in custodial security and institutional-grade platforms across major trading hubs worldwide. Other digital raids linked to Pyongyang added up too; analysts later tallied close to $2 billion taken by North Korea in 2025 alone.

    What changed wasn't the group's ambition but rather their toolkit. Behind the scenes, Chalkias points out, hackers from North Korea began weaving powerful large language models into almost every stage of their operations: reconnaissance (mapping target infrastructure), phishing (crafting convincing social-engineering messages), code analysis (scanning smart contracts for exploitable bugs), cross-chain exploitation (replicating successful attacks across different blockchain ecosystems), and even laundering (using pattern-recognition algorithms to trace and obscure liquidity paths through mixers and over-the-counter brokers).

    "AI is the best tool I've ever had as a white-hat hacker," Chalkias said. "And you can imagine what happens when it's in the wrong hands."

    Why AI Changes the Game Entirely

    To appreciate why this shift is so consequential, it helps to understand what previously limited the scale of state-sponsored crypto hacking. Launching sophisticated attacks on blockchain protocols and smart contracts historically required a relatively large team of highly trained engineers - people who could manually read and audit complex code, understand the subtle logic of decentralized finance (DeFi) protocols, and identify edge-case vulnerabilities that might be exploitable. That was an expensive, time-consuming bottleneck.

    AI removes that bottleneck almost entirely.

    An LLM can be instructed to analyze thousands of open-source smart contracts in minutes, flagging patterns that match known vulnerability types. When a new exploit is successfully deployed against one protocol, an AI system can immediately scan the broader ecosystem - across Ethereum, Solana, Sui, BNB Chain, and others - to find contracts with mirrored logic that share the same flaw. A human team doing this manually might take weeks or months. An AI model can do it in the time it takes to read this paragraph.
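    To make the "find one flaw, search everywhere" workflow concrete, here is a minimal, purely illustrative Python sketch. It flags contracts whose source makes an external call before updating a balance mapping - the classic reentrancy ordering bug. The contract snippets and the single regex heuristic are assumptions for illustration; real AI-assisted scanners reason far more deeply than any regular expression.

```python
import re

# Hypothetical heuristic: match a Solidity external call (.call{value: ...})
# that is followed later in the same source by a write to a balances mapping.
# That ordering (call before state update) is the classic reentrancy bug.
REENTRANCY_HINT = re.compile(
    r"\.call\{value:.*?\}\(.*?\);(?:(?!balances\[).)*?balances\[",
    re.DOTALL,
)

def scan_contracts(contracts: dict[str, str]) -> list[str]:
    """Return names of contracts whose source matches the suspect pattern."""
    return [name for name, src in contracts.items() if REENTRANCY_HINT.search(src)]

# Two toy contract bodies: one updates state before the external call
# (safe ordering), one after (suspect ordering).
SAFE = """
function withdraw(uint amount) external {
    balances[msg.sender] -= amount;
    (bool ok, ) = msg.sender.call{value: amount}("");
    require(ok);
}
"""

SUSPECT = """
function withdraw(uint amount) external {
    (bool ok, ) = msg.sender.call{value: amount}("");
    require(ok);
    balances[msg.sender] -= amount;
}
"""

print(scan_contracts({"VaultA": SAFE, "VaultB": SUSPECT}))  # -> ['VaultB']
```

    The point is not the regex itself but the economics: once any flaw is characterized, sweeping an entire ecosystem of open-source contracts for the same shape costs almost nothing.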

    The jump in effectiveness shows up clearly in reports from top cybersecurity groups. Teams at both Microsoft and Mandiant have noted sharp rises in North Korean hackers' reliance on artificial intelligence. These actors now craft phishing emails with machine-learning tools instead of older methods. One tactic involves deepfake videos doctored to look like real company leaders speaking, meant to trick staff. Another uses AI-generated job applications so North Korean agents can pose as Western developers; getting hired lets them walk straight into sensitive systems handling cryptocurrency technology without raising alarms.

    The Effect Across the Wider Crypto World

    Openness defines DeFi more than any other sector. Since it uses open-source code, meant to be seen by anyone, the system leans hard into visibility as a virtue. Yet that choice turns risky once tools powered by artificial intelligence start scanning each piece of logic, hunting gaps. 

    Smart contract oracles bring real-life prices and data into blockchain code, yet they have also historically been among the most exploited components in DeFi. With artificial intelligence now hunting flaws across different protocols at once, this risk becomes much bigger.
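    One common mitigation for oracle abuse is cross-checking any single quote against several independent feeds before acting on it. The sketch below is a hypothetical, simplified version of that idea - the feed values and the 2% tolerance are invented for illustration, and production oracle networks layer on far richer safeguards (TWAPs, staleness checks, signed attestations).

```python
from statistics import median

# Hypothetical defensive check: accept a price quote only if it sits within
# `tolerance` (relative) of the median of several independent reference feeds.
def accept_quote(quote: float, reference_feeds: list[float], tolerance: float = 0.02) -> bool:
    """Reject quotes that deviate sharply from the cross-feed median."""
    ref = median(reference_feeds)
    return abs(quote - ref) / ref <= tolerance

feeds = [2001.5, 1999.8, 2000.2]      # illustrative ETH/USD readings
print(accept_quote(2000.0, feeds))     # in line with the median -> True
print(accept_quote(2600.0, feeds))     # a manipulated spike -> False
```

    Redundancy of this kind raises the cost of an attack: instead of corrupting one data source, an adversary must move a majority of independent feeds at once.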

    Big trading platforms face a distinct kind of danger. The social engineering dimension of AI - deepfake impersonations, hyper-personalized phishing, synthetic employee identities - targets the human layer of security rather than the code layer. Evidence suggests the Bybit incident likely began with the manipulation of internal sign-off routines, which means team procedures and training matter just as much as firewalls and logs.

    Auditing practices must change as well. Traditional smart contract audits involve checking code once before release and then issuing a report. That model is increasingly inadequate in a world where new AI model versions can discover entirely new categories of vulnerability. Chalkias predicts regulators will soon mandate continuous, AI-aware security auditing for exchanges and smart-contract platforms: constant testing, triggered each time an advanced AI version is released.
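    The scheduling logic behind such continuous auditing can be sketched in a few lines. This is an assumed, minimal design - a contract is re-queued for review whenever either its source or the scanner (say, a new model version) changes, rather than being audited once at launch. The names and the fingerprint scheme are hypothetical.

```python
import hashlib

# Hypothetical continuous-audit trigger: fingerprint each contract together
# with the version of the tool that last reviewed it. A change in EITHER the
# source or the scanner version forces a fresh audit.
def fingerprint(source: str, scanner_version: str) -> str:
    """Combined digest of contract source and the reviewing tool's version."""
    return hashlib.sha256(f"{scanner_version}\n{source}".encode()).hexdigest()

def needs_reaudit(source: str, scanner_version: str,
                  last_seen: dict[str, str], name: str) -> bool:
    current = fingerprint(source, scanner_version)
    if last_seen.get(name) == current:
        return False                 # already reviewed in this exact state
    last_seen[name] = current        # record this (source, scanner) pairing
    return True

seen: dict[str, str] = {}
src = "contract Vault { /* ... */ }"
print(needs_reaudit(src, "model-v1", seen, "Vault"))  # first review -> True
print(needs_reaudit(src, "model-v1", seen, "Vault"))  # unchanged -> False
print(needs_reaudit(src, "model-v2", seen, "Vault"))  # new model -> True
```

    The key departure from the audit-once model is the second trigger: unchanged code still gets re-examined whenever a more capable scanner appears.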

    AI Both Helps and Harms

    Serious as the danger is, Chalkias doesn't paint AI as inherently malicious. What arms attackers so effectively arms defenders just as well. Security teams now use AI-driven systems to spot anomalous behavior, monitor transactions, check smart contracts for flaws, and gather threat intelligence around the clock. While the risks grow, the tools meant to stop harm are learning fast too.
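    The simplest form of the transaction monitoring mentioned above is statistical outlier detection on withdrawal sizes. The sketch below is an assumed stand-in, not any vendor's actual system: real deployments combine many signals (timing, destinations, device fingerprints, graph analysis), while a z-score on amounts merely illustrates the principle.

```python
from statistics import mean, stdev

# Hypothetical anomaly flag: a transaction is suspicious if it lies more than
# `threshold` standard deviations from the account's historical mean.
def flag_anomalies(history: list[float], new_txs: list[float],
                   threshold: float = 3.0) -> list[float]:
    """Return transactions that deviate sharply from past behavior."""
    mu, sigma = mean(history), stdev(history)
    return [tx for tx in new_txs if abs(tx - mu) > threshold * sigma]

past = [120.0, 95.0, 110.0, 105.0, 130.0, 90.0, 115.0]  # typical withdrawals
print(flag_anomalies(past, [108.0, 99.0, 5000.0]))       # -> [5000.0]
```

    A flagged transaction would then be held for a second factor or a human review - exactly the kind of layered check that pure code audits cannot provide.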

    Wallet providers, custodians, and exchanges stand stronger when they weave artificial intelligence directly into protection systems instead of waiting for old-style audits. Because of this shift, security firms focused on constant machine-powered oversight have started to emerge across the blockchain world. Investment moves toward these startups show big organizations truly believe dangers powered by AI are real.

    If attackers and defenders run similar AI systems, success leans toward whoever has cleaner data, sharper models, and faster iteration cycles - and governments with deep pockets, zero rules, and unlimited time tilt that field hard. Beating North Korea head-on by spending more isn't the fix. Instead, safety must be woven into design - baked into protocols, built inside wallets, threaded through audits - so protection stands firm before threats arrive.

    "Unless we build anti-AI defenses into everything we do," he warned, "we'll always be one step behind."

    Even as attackers grow smarter, storing private keys on hardware wallets that never touch the internet still blocks many break-ins. Be wary when strangers slide into your inbox offering quick jobs, big returns, or help fixing software - especially through chat apps like Discord or career networks where scammers run fake profiles.