Microsoft and the US Cybersecurity and Infrastructure Security Agency (CISA) have sounded the alarm about a Windows shell spoofing vulnerability that is already being exploited by attackers. It is not yet clear by whom, but the main suspects are hackers based in Russia.
CISA has mandated that all federal agencies patch this vulnerability, designated CVE-2026-32202, by May 12. According to a Microsoft advisory, exploitation of the flaw could lead to access to sensitive data, but attackers would not be able to gain control of the system.
However, one security expert has warned that the considerable gap between when Microsoft identified the bug and the deadline for patching systems increases the risk.
The patch gap
Lionel Litty, CISO at security company Menlo, said the problem is compounded by the fact that CVE-2026-32202 stems from an incomplete patch for an earlier flaw, CVE-2026-21510. “This has been a theme for many years. A vulnerability exists and the vendor has not been