AI-Powered Malware Hits 2,180 GitHub Accounts in “s1ngularity” Attack
A sophisticated supply-chain attack has rocked the global developer community. Security researchers have confirmed that an AI-powered malware campaign, dubbed “s1ngularity,” compromised more than 2,180 GitHub accounts and exposed over 7,200 repositories. This incident marks a new era of cyberattacks—where artificial intelligence is not just a target but also a weapon.
🚨 What Happened in the “s1ngularity” Attack
Between August 26–27, 2025 (UTC), malicious versions of the Nx build system were published to npm. These were live for just five hours and 20 minutes, but that short window was enough to spread the infection to thousands of repositories.
The malware operated through a `postinstall` script that executed automatically on developer machines during installation; a minimal illustration of this hook follows the list below. Once triggered, it:
- Harvested tokens and secrets (GitHub PATs, npm tokens, SSH keys, API keys).
- Probed local AI coding agents (Claude, Gemini, Amazon Q).
- Created new public repositories under victim accounts with names like `s1ngularity-repository-*`.
- Uploaded stolen data as base64-encoded blobs inside those repos.
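To make the delivery mechanism concrete, here is a minimal, benign sketch of how an npm `postinstall` lifecycle hook fires on installation. The package name and script are hypothetical placeholders, not the actual payload.

```bash
# Hypothetical example of the npm lifecycle hook the payload abused.
# A dependency's package.json can declare:
#   "scripts": { "postinstall": "node setup.js" }
# Installing that dependency runs the script automatically with the
# developer's privileges:
npm install example-build-tool            # postinstall executes here

# Installing with lifecycle scripts disabled blocks this class of payload:
npm install example-build-tool --ignore-scripts
```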
Researchers observed three distinct phases:
- Phase 1: Auto-created public repos under compromised accounts.
- Phase 2: Flipped private repos to public using stolen tokens.
- Phase 3: Targeted a single large organization, creating hundreds of repositories with “S1ngularity” in their descriptions.
📊 Scope of the Breach
The attack was far from small-scale. According to security firms monitoring the incident:
- 2,180+ GitHub accounts compromised.
- 7,200+ repositories created, exposed, or tampered with.
- ~90% of GitHub tokens were still valid a day after the attack.
- ~5% of tokens remained active days later, leaving a long tail of risk.
This scale makes s1ngularity one of the largest developer-focused breaches ever recorded.
🤖 Why AI Makes This Attack Different
What sets this apart from previous GitHub compromises is the weaponization of AI. The attackers didn’t rely solely on static code—they harnessed local AI agents installed on developer systems.
How AI was used:
- Adaptive learning: The malware prompt-tuned AI models to search for sensitive files dynamically.
- Bypassing guardrails: It invoked AI CLIs with risky flags like `--dangerously-skip-permissions` (Claude) and `--yolo` (Gemini).
- Self-modification: The malware rewrote itself across package versions to refine exfiltration strategies.
This is one of the first documented cases where AI wasn’t just the target of an attack but an active participant in one.
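For context, a schematic and deliberately generic reconstruction of that invocation pattern is shown below. The prompt text is hypothetical, and the non-interactive `-p` usage is an assumption about these CLIs; only the permission-bypass flags come from the research write-ups.

```bash
# Schematic reconstruction of the reported pattern; not the actual malware commands.
# Idea: drive a local AI CLI non-interactively with safety checks disabled and
# ask it to enumerate likely secret locations. Prompt text is hypothetical.
claude -p "list file paths likely to contain credentials or keys" --dangerously-skip-permissions
gemini --yolo -p "list file paths likely to contain credentials or keys"
```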
🛡️ GitHub’s Response
GitHub has acknowledged the incident and taken emergency steps, including:
- Revoking compromised tokens.
- Locking affected accounts.
- Removing attacker-created repositories.
- Rolling out enhanced security alerts.
GitHub urged all developers to enable 2FA, rotate credentials, and audit their repositories for suspicious activity.
🔎 Technical Breakdown
Indicators of Compromise
- Repositories named `s1ngularity-repository-*`.
- Repo descriptions containing “S1ngularity”.
- Files like `/tmp/inventory.txt` or `.bak` variants.
- Modified shell profiles (`.bashrc`, `.zshrc`) containing shutdown commands.
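A quick, read-only sweep for these indicators might look like the sketch below. It assumes the GitHub CLI (`gh`) is installed and authenticated; adjust the profile paths for your shell setup.

```bash
#!/usr/bin/env bash
# Read-only sweep for the s1ngularity indicators listed above.

# 1. Attacker-created repositories under the authenticated account.
gh repo list --limit 1000 --json name --jq '.[].name' \
  | grep -i 's1ngularity-repository' || echo "no matching repositories"

# 2. Inventory files dropped by the payload (including .bak variants).
ls -l /tmp/inventory.txt* 2>/dev/null || echo "no inventory files found"

# 3. Shell profiles tampered with shutdown commands.
grep -n 'shutdown' ~/.bashrc ~/.zshrc 2>/dev/null \
  || echo "no shutdown entries in shell profiles"
```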
Affected Package Versions
- Nx 20.9.0, 20.10.0, 20.11.0, 20.12.0
- Nx 21.5.0, 21.6.0, 21.7.0, 21.8.0
- Related `@nx/*` plugin packages
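To check whether a project pulled one of these releases, inspecting the dependency tree and lockfile along the following lines is usually enough; the lockfile grep is only a rough filter, so verify any hit manually.

```bash
# Show every resolved copy of nx (and @nx/* plugins) in the current project.
npm ls nx --all 2>/dev/null
npm ls --all 2>/dev/null | grep '@nx/' || true

# Rough lockfile filter for the affected version strings
# (20.9.0-20.12.0 and 21.5.0-21.8.0); may also match unrelated packages.
grep -nE '"(20\.(9|10|11|12)|21\.[5-8])\.0"' package-lock.json \
  || echo "no affected version strings found"
```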
Exploit Method
Attackers exploited a vulnerable GitHub Actions workflow in the Nx repository to exfiltrate an npm publishing token, then used it to push malicious package versions without raising provenance alarms.
📂 Case Studies: GitHub Attacks That Set the Stage
The s1ngularity campaign didn’t happen in a vacuum. Several past incidents foreshadowed this attack:
- Octopus Scanner (2020): Infected NetBeans projects on GitHub, spreading through developer builds.
- Codecov Bash Uploader (2021): A supply-chain compromise that exfiltrated secrets from CI/CD pipelines for months.
- GitHub OAuth Token Theft (2022): Attackers stole Heroku/Travis-CI OAuth tokens, downloading private repos at scale.
The difference? s1ngularity added AI to the mix.
📈 What the Numbers Tell Us
- Exposure window: Just 5h 20m of live time, yet thousands of repos were hit.
- Token validity: Many stolen tokens worked for days, enabling later phases of the attack.
- System impact: Victim machines skewed heavily toward macOS, reflecting developer platform preferences.
The numbers show that even short-lived supply-chain compromises can have massive ripple effects.
🔐 Defense Strategies
1. Rotate and Revoke
Immediately rotate all GitHub, npm, and cloud tokens. Confirm revocation via audit logs.
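A starting point for that sweep, assuming the GitHub and npm CLIs are already authenticated. GitHub personal access tokens themselves still have to be reviewed and deleted under the account's Developer settings, since there is no single CLI command for that.

```bash
# Check which identity and scopes the current GitHub CLI credentials carry.
gh auth status

# Confirm which account a token resolves to before revoking it.
gh api user --jq .login

# Inventory and revoke npm tokens (token IDs come from the list output).
npm token list
npm token revoke <token-id>
```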
2. Lock Down CI/CD
Use `npm ci --ignore-scripts` in pipelines to block malicious `postinstall` scripts.
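This can be enforced per command and as a repository-wide default; the `.npmrc` line below is the project-level equivalent of the flag.

```bash
# In CI pipelines: install from the lockfile and skip all lifecycle scripts.
npm ci --ignore-scripts

# Project-wide default: commit an .npmrc so local installs behave the same way.
echo "ignore-scripts=true" >> .npmrc

# Caveat: some legitimate packages need postinstall (e.g. native builds);
# rebuild those explicitly from a context where scripts are allowed.
npm rebuild <trusted-package>
```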
3. Enforce Provenance
Adopt npm Trusted Publishers to eliminate token sprawl and require signed provenance for package releases.
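On the tooling side, recent npm versions (9.5+) already support provenance attestation at publish time and signature auditing on the consumer side; a minimal sketch:

```bash
# Publish from a supported CI environment (e.g. GitHub Actions) with a
# provenance attestation attached to the release.
npm publish --provenance

# Consumers can verify registry signatures and provenance attestations
# for everything installed in the current project.
npm audit signatures
```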
4. Monitor AI CLI Usage
Track AI tool invocations with dangerous flags. Treat them like sensitive executables.
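As a lightweight first pass on developer machines, shell history can be scanned for the risky flags named earlier; a production setup would rely on process auditing or EDR telemetry rather than history files.

```bash
# Scan common shell histories for AI CLI invocations that disable safety checks.
# -e marks the pattern explicitly so the leading dashes are not read as options.
grep -HnE -e '--dangerously-skip-permissions|--yolo' \
  ~/.bash_history ~/.zsh_history 2>/dev/null \
  || echo "no risky AI CLI flags found in shell history"
```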
5. Secrets Hygiene
Move sensitive credentials into managed vaults. Never leave secrets unencrypted on developer machines.
🔮 Industry Forecasts (2026–2030)
Experts predict that AI-assisted malware will only grow:
- AI-augmented intrusion kits will become available on the dark web.
- Supply-chain provenance will become mandatory in npm, PyPI, and other ecosystems.
- Private registries will enforce policy-based package consumption.
- AI vs. AI battles will define cybersecurity, with defenders deploying AI to detect malicious AI activity.
Just as ransomware shaped the 2010s, experts believe AI-powered malware could define the 2030s.
📌 Practical Steps for Developers
- Search your org for repos named `s1ngularity-repository-*` (a gh CLI sketch follows this list).
- Rotate all tokens and credentials.
- Audit local systems for `/tmp/inventory.txt`.
- Reinstall Nx from safe versions.
- Adopt 2FA across all GitHub accounts.
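For the org-wide search in the first step, the GitHub CLI can do a first pass; `YOUR-ORG` is a placeholder, and hits should be triaged carefully because attacker-created repositories may already have been removed by GitHub.

```bash
# First-pass search for attacker-created repositories across an organization.
# Replace YOUR-ORG; requires an authenticated gh CLI with read access to the org.
gh search repos "s1ngularity-repository" --owner YOUR-ORG \
  --json fullName --jq '.[].fullName' \
  || echo "no matching repositories found"
```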
📝 Final Takeaway
The s1ngularity attack is more than just a supply-chain incident—it’s a milestone in cybercrime history. By leveraging AI to scale reconnaissance and adapt dynamically, attackers showed us the future of digital threats.
For developers, the lesson is clear: AI is now both the attacker and the defender. The faster we adapt, the safer the open-source ecosystem will remain.