OpenAI Launches Daybreak as AI Firms Expand Into Cybersecurity
OpenAI said its new Daybreak initiative uses AI to help companies identify software vulnerabilities and speed up cyber defense.
Recent advances in artificial intelligence have exposed new vulnerabilities that place every cyber system at risk of disruption, and cybersecurity defenders are simply not prepared.
JPMorgan has reclassified its AI spending from discretionary innovation to core infrastructure, placing it alongside data centers and cybersecurity in the bank’s budget and treating its $2bn annual outlay as non-negotiable.
The Chief AI Council is identifying pain points for agencies to understand and avoid, covering everything from AI and cybersecurity to IT procurement and privacy.
The IMF called for treating cybersecurity as a core stability issue as new AI tools let even unskilled attackers breach critical infrastructure.
Security researchers at Mozilla say Anthropic's Mythos has unearthed a wealth of high-severity bugs in Firefox.
As agencies deploy AI across their missions, the officials launching AI agents and the cybersecurity officials defending them need to work together, a Palo Alto AI technologist says.
Agreements with Microsoft, Google DeepMind and xAI focus largely on recognizing cybersecurity, biosecurity and chemical weapons risks. The US government has struck deals with Google DeepMind, Microsoft and xAI to review early versions of their new AI models before they are released to the public. The Center for AI Standards and Innovation (CAISI), part of the US Department of Commerce, announced the agreements on Tuesday, saying the review process would be key to understanding the capabilities of new and powerful AI models as well as to protecting US national security. These collaborations will help the federal government “scale (its) work in the public interest at a critical moment”, the agency said in a press release.
Over the years, enterprise IT execs have gotten frighteningly comfortable with having little control or visibility over mission-critical apps, from SaaS to cloud and even cybersecurity. But generative AI (genAI) and agentic systems are taking that problem to a new extreme, with vendors able to dumb down a system IT is paying billions for without so much as a postcard. It’s not necessarily that AI changes are made to boost profits or revenue. Even if we accept the vendor argument that such changes are in the customer’s interest, companies still need their systems to do on Thursday what they did on Tuesday, let alone what they did when the purchase order was signed. Alas, that is no longer the case. Consider a recent report from Anthropic that detailed a lengthy list of changes the company made to some of its AI offerings — including one that explicitly dumbed down answers — without asking or telling customers beforehand.