What Happened: Axios, the Backbone of Modern JavaScript, Was Taken Over
In the early hours of March 31, 2026, hackers compromised the npm account of Jason Saayman — the primary maintainer of Axios, the most popular HTTP client in the JavaScript ecosystem with over 100 million weekly downloads and 400 million monthly installs.
Within minutes, two poisoned versions were pushed directly to the npm registry:
- `axios@1.14.1` — published at 00:21 UTC
- `axios@0.30.4` — published at 01:00 UTC
Both the 1.x and 0.x branches were poisoned within 39 minutes of each other. Any project using a standard caret range (`^1.14.0` or `^0.30.0`) would silently pull the compromised version on its next `npm install`.
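The caret behavior is why the poisoned patch releases spread automatically. A minimal sketch of caret matching (the `caretSatisfies` helper is hypothetical and simplified; real tooling should use the `semver` package, and note that real caret ranges on 0.x versions pin the minor, which this sketch ignores):

```javascript
// Simplified caret-range check: ^1.14.0 accepts any 1.x.y at or above 1.14.0,
// which is why a freshly published 1.14.1 is pulled in with no human review.
// (Hypothetical helper for illustration; npm itself uses the `semver` package.)
function caretSatisfies(version, range) {
  const [M, m, p] = range.replace(/^\^/, '').split('.').map(Number);
  const [vM, vm, vp] = version.split('.').map(Number);
  if (vM !== M) return false;   // caret never crosses a major version
  if (vm !== m) return vm > m;  // a newer minor is allowed
  return vp >= p;               // a newer patch is allowed
}

console.log(caretSatisfies('1.14.1', '^1.14.0')); // poisoned patch is accepted
console.log(caretSatisfies('2.0.0', '^1.14.0'));  // majors are excluded
```

Pinning an exact version (`"axios": "1.14.0"` with no caret) is what closes this window.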
Neither version appeared in Axios's GitHub releases. Neither carried an OpenID Connect (OIDC) package-origin attestation. Neither had a matching GitHub commit. All normal indicators that a release is legitimate were absent — and most automated pipelines never check.
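Pipelines can check these signals. A sketch of a CI step (GitHub Actions syntax) that fails the build when an installed package lacks a valid registry signature or, for packages that publish them, a provenance attestation; this assumes a recent npm (attestation verification via `npm audit signatures` requires npm 9.5+):

```yaml
# Sketch: verify signatures and provenance attestations before trusting a build.
- name: Verify dependency signatures and attestations
  run: |
    npm ci --ignore-scripts   # install exactly what the lockfile records, skip postinstall hooks
    npm audit signatures      # fail on missing/invalid registry signatures or attestations
```

`--ignore-scripts` is a useful extra guard here, since this attack (like many npm compromises) delivered its payload through a postinstall hook.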
Inside the Payload: A Cross-Platform Remote Access Trojan
The poisoned Axios versions added a single new dependency: `plain-crypto-js@4.2.1` — a typosquat of the legitimate `crypto-js` library, published just minutes before the Axios compromise under the email address nrwise@proton.me.
When installed, a `postinstall` hook executed a heavily obfuscated dropper (`setup.js`, 4,209 bytes) that used a two-layer encoding scheme — reversed Base64 followed by XOR with the key `OrDeR_7077` — to hide every critical string from static analysis.
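Once the key is known, the two-layer scheme is trivial to reverse. An illustrative decoder for analysts (a reconstruction of the scheme as described above, not the dropper's actual code):

```javascript
// Reconstruction of the described string obfuscation:
// decode = reverse the string, Base64-decode, XOR with the repeating key;
// encode runs the same steps in the opposite order.
const KEY = 'OrDeR_7077';

function xorWithKey(buf, key) {
  const out = Buffer.alloc(buf.length);
  for (let i = 0; i < buf.length; i++) {
    out[i] = buf[i] ^ key.charCodeAt(i % key.length);
  }
  return out;
}

function deobfuscate(blob, key = KEY) {
  const b64 = [...blob].reverse().join('');   // undo the string reversal
  return xorWithKey(Buffer.from(b64, 'base64'), key).toString('utf8');
}

// Encoder: handy for building detection tests against known plaintexts.
function obfuscate(text, key = KEY) {
  const b64 = xorWithKey(Buffer.from(text, 'utf8'), key).toString('base64');
  return [...b64].reverse().join('');
}
```

A round trip (`deobfuscate(obfuscate(s))`) returns the original string, which makes the scheme easy to unit-test against captured samples.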
The dropper detected the operating system and delivered a platform-specific RAT:
| Platform | Technique | Payload Path |
|---|---|---|
| macOS | AppleScript downloads a Mach-O binary mimicking Apple's daemon naming (`com.apple.act.mond`) | `/Library/Caches/com.apple.act.mond` |
| Windows | VBScript copies `powershell.exe` to `%PROGRAMDATA%\wt.exe` (disguised as Windows Terminal) to bypass EDR, then executes a hidden PowerShell script | `%TEMP%\6202033.ps1` |
| Linux | `curl` downloads a Python RAT and runs it detached via `nohup` | `/tmp/ld.py` |
Security researcher Joe Desimone of Elastic Security captured the macOS second-stage binary before the C2 server went offline. It was a fully functional C++ RAT capable of:
- Executing arbitrary commands via `/bin/sh` or AppleScript
- Injecting additional binaries — receives Base64-encoded executables, writes them to temp files, ad-hoc signs them, and launches them
- Enumerating the filesystem — recursively lists `/Applications`, `~/Library`, running processes, timezone, OS version, and boot time
- Beaconing every 60 seconds to the C2 at `sfrclak[.]com:8000`
After payload execution, the dropper deleted itself, removed the malicious package.json, and swapped in a clean copy — making post-incident forensics significantly harder.
Who's Behind It: Suspected North Korean APT
John Hultquist, chief analyst at Google's Threat Intelligence Group (GTIG), told BleepingComputer that the attack is linked to UNC1069, a North Korean actor known for targeting cryptocurrency exchanges, financial institutions, and venture capital funds.
The macOS RAT's internal name — `macWebT` — is a direct reference to malware attributed to BlueNoroff, a North Korean threat group specializing in financially motivated cyberattacks, previously documented by SentinelOne in 2023.
Charles Carmakal, CTO at Mandiant, warned that the attack "is broad and extends to other popular packages that have dependencies on it" and that "hundreds of thousands of stolen credentials" from recent supply chain incidents will lead to further compromises, crypto theft, and ransomware events.
The Bigger Picture: Supply Chain Attacks Are Accelerating
Axios is not an isolated incident. In March 2026 alone:
- Trivy — the most popular open-source vulnerability scanner — had its GitHub Actions tags force-updated and Docker Hub images poisoned with infostealers. Cisco confirmed stolen source code from a Trivy-linked breach.
- Telnyx Python SDK — compromised via TeamPCP to deliver credential-stealing malware hidden inside WAV audio files.
- LiteLLM — a poisoned security scanner was used to backdoor this popular AI proxy framework.
- 29+ npm packages hit by the CanisterWorm campaign with worm-enabled backdoors that use Internet Computer Protocol canisters for C2.
The ENISA (EU Agency for Cybersecurity) published a technical advisory in March 2026 specifically about the secure use of package managers — a clear signal that regulators recognize the severity of the problem.
Why This Matters for AI: Your LLM Pipeline Is a Supply Chain
Every cloud AI platform your company sends data to is a dependency — an external package in your data pipeline, just as opaque and just as vulnerable as a compromised npm module.
Consider the parallels:
- Axios attack: A maintainer's account was compromised, and malicious code was injected into a trusted package that millions of projects pulled automatically.
- Cloud AI risk: Your API keys, prompts, documents, and confidential business data flow through services operated by third parties. A single compromised employee account, misconfigured server, or rogue insider can expose everything — just like Anthropic accidentally leaked its own unreleased model details to a public database last month.
When your AI runs on someone else's infrastructure, you inherit their entire attack surface — their supply chain, their dependencies, their human errors.
How OpenGolin.AI Eliminates Supply Chain Data Exposure
OpenGolin.AI deploys entirely on your own servers. Your data never leaves your network. There is no API call to a third party. There is no maintainer account that can be hijacked to intercept your prompts or documents.
| Attack Vector | Cloud AI | OpenGolin.AI (On-Premise) |
|---|---|---|
| Dependency hijack (like Axios) | Your data transits compromised libraries in the provider's stack | Data stays on your servers — external supply chain never touches it |
| Maintainer account compromise | A single stolen token can serve malicious code to all users | You control the update cycle — pin, audit, and deploy when ready |
| Data exfiltration via RAT | API keys, conversations, and documents accessible to the provider | All data encrypted at rest, air-gapped from external networks |
| Transitive dependency cascade | Cloud providers inherit thousands of upstream dependencies you can't see | Frozen, audited container images with a known bill of materials |
With OpenGolin.AI, the key protections are architectural, not procedural:
- Air-gapped deployment — Models, RAG pipelines, and all user data run inside your firewall. No outbound API calls to third-party AI providers.
- Frozen container images — Each release is a Docker image with compiled, audited code. No `npm install` at deploy time, no live dependency resolution, no window for a hijacked package to slip in.
- Controlled update cycle — You decide when to pull a new version after reviewing changelogs and verifying signatures. Unlike `^1.14.0` pulling a poisoned version automatically, your production environment stays on the exact image you approved.
- Full governance layer — Role-based access, per-model permissions, usage audit logs, and data-retention policies that you configure — not a cloud provider's terms of service.
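The same pinning discipline can be enforced in any Node project. npm's `overrides` field (available since npm 8.3) forces every transitive occurrence of a package to a version you have vetted — a sketch, with the version shown purely as an illustrative known-good pin:

```json
{
  "overrides": {
    "axios": "1.14.0"
  }
}
```

Combined with `npm ci` in deployment, this removes live version resolution from the path to production.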
What Enterprise Teams Should Do Today
- Check your lockfiles immediately — Search for `axios@1.14.1`, `axios@0.30.4`, or any version of `plain-crypto-js`. If found, treat the system as compromised and rotate all credentials.
- Pin critical dependencies — Stop using caret ranges (`^`) for security-sensitive packages. Use exact versions and review every update manually.
- Audit your AI supply chain — If your team sends confidential data to a cloud AI API, ask: who controls the infrastructure? What happens when their supply chain is compromised? Can you verify the code running behind the API?
- Move sensitive AI workloads on-premise — For data that matters — legal documents, financial records, customer communications, intellectual property — the only way to guarantee it isn't exposed is to keep it on hardware you control.
- Monitor network egress — The Axios RAT beaconed to `sfrclak[.]com:8000` every 60 seconds. Egress monitoring would catch this immediately on a properly segmented network.
The Lesson: Trust Is a Dependency Too
The Axios attack is a masterclass in why implicit trust in the software supply chain is a liability. A single npm token — one stolen credential — was enough to push malware to millions of developers.
The same principle applies to your AI stack. Every API call to a cloud AI provider is an act of trust — trust that their infrastructure is secure, their employees are vetted, their dependencies are audited, and their incident response is faster than the attacker.
OpenGolin.AI removes that trust requirement entirely. Your models, your data, your servers, your rules. No supply chain to hijack. No third-party account to compromise. Just the AI capabilities your team needs, running on infrastructure you control.
When the next supply chain attack hits — and it will — the question won't be "were we affected?" It will be "was our data even reachable?" With on-premise AI, the answer is no.
