A compromised npm package is only the entry point. The axios incident shows how quickly attackers pivot from code execution to credential abuse, identity misuse, and cloud access.
Attackers too are looking to cash in on the AI coding craze, adapting their supply-chain techniques to target coding agents themselves.
Many AI agents autonomously scan package registries such as NPM and PyPI for components to integrate into their coding projects, and attackers are beginning to take advantage of this. Bait packages with persuasive descriptions and legitimate-looking functionality have cropped up on these registries, while packages squatting on names that AI coding agents are likely to hallucinate as dependencies are another attack vector on the horizon.
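One common mitigation for the hallucinated-dependency risk described above is to gate an agent's package choices against a vetted allowlist and to flag near-miss names that may be typosquats. The sketch below is purely illustrative (the allowlist contents and the `vet_dependency` helper are hypothetical, and the 0.8 similarity cutoff is an arbitrary choice), using only the Python standard library:

```python
import difflib

# Hypothetical set of packages an organization has already reviewed and approved.
VETTED = {"requests", "numpy", "flask", "pandas"}

def vet_dependency(name: str) -> str:
    """Classify a proposed dependency name before an agent is allowed to install it.

    Returns "approved" for an exact allowlist hit, "suspicious" for a near-miss
    of a vetted name (possible typosquat or hallucination), "unknown" otherwise.
    """
    candidate = name.lower()
    if candidate in VETTED:
        return "approved"
    # difflib.get_close_matches returns names whose similarity ratio meets the
    # cutoff; a near-miss of a popular package deserves human review.
    if difflib.get_close_matches(candidate, VETTED, n=1, cutoff=0.8):
        return "suspicious"
    return "unknown"

print(vet_dependency("requests"))   # exact allowlist match
print(vet_dependency("requets"))    # near-miss of "requests"
print(vet_dependency("ai-agent-helper-utils"))  # not vetted at all
```

An "unknown" verdict need not block the install outright; routing such names to a human reviewer, rather than letting the agent proceed automatically, is the point of the gate.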
Researchers from security firm ReversingLabs have been tracking one such supply-chain attack that uses “LLM Optimization (LLMO) abuse and knowledge injection” to make packages more likely to be discovered and chosen by AI agents. Dubbed PromptMink, the attack was attributed to Famous Chollima, one of North Korea’s APT groups tasked with generating funds for the regime by targeting developers and users from the cryptocurrency and
Microsoft and the US Cybersecurity and Infrastructure Security Agency (CISA) have sounded the alarm about a Windows shell spoofing vulnerability that is already being exploited by attackers. It is not yet clear by whom, but the main suspects are hackers in Russia.
CISA has mandated that all federal agencies patch this vulnerability, designated CVE-2026-32202, by May 12. According to a Microsoft advisory, exploitation of the flaw could lead to access to sensitive data, but attackers would not be able to gain control of the system.
However, one security expert has warned that the considerable gap between when Microsoft identified the bug and the deadline for patching systems increases the risk.
The patch gap
Lionel Litty, CISO for security company Menlo, said that an incomplete patch for CVE-2026-21510 that resulted in the issue tracked as CVE-2026-32202 adds to the problem. “This has been a theme for many years. A vulnerability exists and the vendor has not been
The US Cybersecurity and Infrastructure Security Agency (CISA) does not yet have access to Anthropic’s bug-hunting AI model, Claude Mythos, even though other government agencies do, Axios reported earlier this week.
As if that weren’t a big enough slap in the face for the national cyber-defense agency, the list of those who do have access to Mythos includes several unauthorized users, according to Bloomberg News. Members of a private Discord channel that specializes in digging up information about unreleased AI models have gained access to Mythos, one unnamed member of the group told Bloomberg. “The group has been using Mythos regularly since then, though not for cybersecurity purposes,” the person said, supplying screenshots to back up their claim.
As a result of its fear that the powerful model could be used to identify and exploit flaws in software and online services, Anthropic has limited access to a preview of Mythos to an exclusive group of government agencies.
Automated AI vulnerability discovery is reversing the enterprise security costs that traditionally favour attackers. Bringing exploits to zero was once viewed as an unrealistic goal. The prevailing operational doctrine aimed to make attacks so expensive that only adversaries with functionally unlimited budgets could afford them, thereby disincentivising casual use. However, the recent evaluation by the […]
The post Reversing enterprise security costs with AI vulnerability discovery appeared first on AI News.
Supply chain attacks are shifting from exploits to access reuse. Learn how stolen tokens, SaaS integrations, and fragmented visibility enable data theft without triggering traditional detection.