Increased AI model vetting for government use could set a precedent for broader industry safety standards, impacting tech and crypto sectors.
Binance recasts AI as core security infrastructure, saying 24+ initiatives and 100+ models have blocked $10.53B in risky funds from 2025 through Q1 2026. Binance’s latest security report portrays artificial intelligence not as a feature but as the backbone of…
Criminal groups and state-linked actors appear to be using commercial models to refine and scale up attacks
In just three months, AI-powered hacking has gone from a nascent problem to an industrial-scale threat, according to a report from Google.
The findings from Google’s threat intelligence group add to an intensifying global discussion about how the newest AI models are extremely adept at coding – and are becoming powerful tools for exploiting vulnerabilities in a broad array of software systems.
A malicious Hugging Face repository posing as an OpenAI release delivered infostealer malware to Windows systems and logged 244,000 downloads before being removed, raising fresh concerns about how enterprises source and validate AI models from public repositories.
The repository, named Open-OSS/privacy-filter, impersonated OpenAI’s legitimate Privacy Filter release, copied its model card almost word-for-word, and included a malicious loader.py file that fetched and executed credential-stealing malware on Windows hosts, AI security firm HiddenLayer said in a research advisory.
“The repository reached the #1 trending position on Hugging Face with approximately 244K downloads and 667 likes in under 18 hours, numbers that were almost certainly artificially inflated to make the repository appear legitimate,” the advisory added.
The incident highlights growing concerns that public AI model registries are emerging as a new software supply-chain risk for enterprises, particularly as developers…
Today on Uncanny Valley, we’re diving into recent reports that the Trump administration is considering an executive order that would establish some sort of federal oversight over new AI models.
Microsoft, Google DeepMind and Elon Musk’s xAI have offered to let the U.S. government access new AI models ahead of their general release, setting up a new phase in Silicon Valley’s often fractious relationship with Washington over AI threats. According to the latest report, the companies are offering models to U.S. officials in the name of security review, in the hope that government analysts can vet frontier AI systems for threats like cyberattacks and military misuse before they are exposed to developers, users and, inevitably, those who should have no business […]
With Apple's latest operating system updates, users will reportedly have their pick of which third-party AI models they want to use for a host of tasks.
Agreements with Microsoft, Google DeepMind and xAI focus largely on recognizing cybersecurity, biosecurity and chemical weapons risks
The US government has struck deals with Google DeepMind, Microsoft and xAI to review early versions of their new AI models before they are released to the public.
The Center for AI Standards and Innovation (CAISI), part of the US Department of Commerce, announced the agreements on Tuesday, saying the review process would be key to understanding the capabilities of new and powerful AI models as well as to protecting US national security. These collaborations will help the federal government “scale (its) work in the public interest at a critical moment”, the agency said in a press release.
OpenAI's newest default model for ChatGPT might not make stuff up as much. Hallucinations have been an ongoing problem for AI models, but OpenAI says its new GPT-5.5 Instant model has "significant improvements in factuality across the board."
The company claims that, based on "internal evaluations," GPT-5.5 Instant produced "52.5% fewer hallucinated claims" than its Instant model for GPT-5.3 "on high-stakes prompts covering areas like medicine, law, and finance." GPT-5.5 Instant also "reduced inaccurate claims by 37.3% on especially challenging conversations users had flagged for factual errors."
OpenAI also claims that GPT-5.5 Instant is …
President Donald Trump’s White House is contemplating whether the US government should be allowed to screen the most powerful AI models before they become available to the public, a significant shift from his previously laissez-faire approach to the AI industry. In the most recent story about White House AI model vetting, the debate boils down to whether the government should intervene before frontier systems with coding or cyber capabilities are distributed to the public. That’s not a subtle change. That is Washington asking whether the AI arms race has evolved to the stage where ‘ship it and see […]