Microsoft has unveiled a new AI-driven vulnerability discovery system that identified 16 previously unknown Windows vulnerabilities, including four critical remote code execution flaws, in what security analysts say could mark a major shift in how software vulnerabilities are discovered and remediated.
The system, codenamed MDASH, was developed by Microsoft’s Autonomous Code Security team alongside the Windows Attack Research and Protection group.
The platform will enter private preview for enterprise customers next month, Microsoft said in a blog post announcing the system.
The vulnerabilities were patched as part of Microsoft’s May 12 Patch Tuesday release.
“Cyber defenders are facing an increasingly asymmetric battle,” Microsoft said in the blog post. “Attackers are using AI to increase the speed, scale, and sophistication of attacks.”
Critical Windows components affected
The four critical vulnerabilities affected core Windows components broadly deployed across enterprise environments.
Maybe I'm just punch drunk in my third week attending Musk v. Altman, but I have become very, very fond of Microsoft during the course of this trial. They don't want to be here any more than I do.
Their opening statement was honestly one of the most Microsoft things I've ever seen. More than anything else, it was an ad for Microsoft that listed their products in some detail. The general implication, from that statement, was that this trial was absurd, their involvement was absurd, but you, ladies and gentlemen of the jury, might still enjoy an Xbox game.
There's been a great deal of high drama on the stand, from Musk, his associates, and O …
Read the full story at The Verge.
OpenAI's renegotiated deal with Microsoft enhances financial predictability, potentially boosting investor confidence and market expansion.
The post OpenAI saves $97B through 2030 in renegotiated Microsoft deal appeared first on Crypto Briefing.
It feels like the world’s longest and most public divorce: In late April, Microsoft and OpenAI once again renegotiated the slow-motion breakup that has been playing out between the two over the last several years.
At first glance, it looks like a win-win. In the broadest terms, OpenAI gets more freedom to set its own course — it can sell its models to Microsoft competitors such as Amazon and Google, for example — while Microsoft gets a better revenue deal and first rights to the newest OpenAI technologies into the next decade.
But in truth, one company got a better deal than the other. Who came out ahead? To figure that out, we first need to look at the most important details of the new agreement.
A new deal after a lot of rancor
Keep in mind that this new agreement didn’t arise from thin air. It’s a direct result of Microsoft’s threats in March to sue OpenAI after it inked a $50 billion deal with Amazon that makes the latter company the only third-party cloud provider for OpenAI’s ent
Tests of how well 19 large language models (LLMs) perform complicated multi-step tasks have shown that they are both error-prone and, in many cases, unreliable.
The findings are contained in a preprint paper, LLMs Corrupt Your Documents When You Delegate, written by Microsoft researchers Philippe Laban, Tobias Schnabel, and Jennifer Neville, based on a benchmark they created called DELEGATE-52 that allowed them to simulate workflows that might be part of a knowledge worker’s tasks. The paper is currently under review.
They said that the benchmark contains 310 work environments across 52 professional domains, including coding, crystallography, genealogy, and music sheet notation. Each environment consists of real documents totaling around 15K tokens in length, plus five to 10 complex editing tasks that a user might ask an LLM to perform.
And, they stated in the paper’s abstract: “Our analysis shows that current LLMs are unreliable delegates: they introduce sparse but severe errors
A malicious Hugging Face repository that posed as an OpenAI release delivered infostealer malware to Windows machines and recorded about 244,000 downloads before removal, according to research from AI security firm HiddenLayer. The number of downloads may have been artificially inflated by the attackers to make the model seem more popular, so the extent of […]
The post Hugging Face hosted malicious software masquerading as OpenAI release appeared first on AI News.
A new paper out of arXiv this week describes an AI system that builds, improves, and deploys its own specialist agents. Here is what that actually means for engineers and technical teams.