Over the years, enterprise IT execs have gotten frighteningly comfortable having little control or visibility over mission-critical apps, from SaaS to cloud and even cybersecurity. But generative AI (genAI) and agentic systems are taking that problem to a new extreme, with vendors able to dumb down a system IT is paying billions for without so much as a postcard.
It’s not necessarily that AI changes are made to boost profits or revenue. Even if we accept the vendor argument that such changes are in the customer’s interest, companies still need their systems to do on Thursday what they did on Tuesday, let alone what they did when the purchase order was signed.
Alas, that is no longer the case.
Consider a recent report from Anthropic that detailed a lengthy list of changes the company made to some of its AI offerings — including one that explicitly dumbed down answers — without asking or telling customers beforehand.
The report describes various changes the Anthropic team made.
The Chief AI Council is identifying pain points for agencies to understand and avoid, spanning everything from AI and cybersecurity to IT procurement and privacy.
Teradata has launched its Autonomous Knowledge Platform, a new flagship offering that brings together data, analytics, AI development, agent orchestration, and governance across cloud, on-premises, and hybrid environments.
The target customer is an enterprise that has moved beyond testing AI assistants and is now asking harder questions: which data agents can use, what actions they can take, how much they will cost to run, and who is accountable when something goes wrong.
The company said the platform builds on its existing database engine and governance infrastructure, while adding new capabilities and more tightly integrating existing ones, including AI Studio, the Tera natural-language workspace, Tera Agents, Elastic Compute on Teradata Cloud, and the upcoming Teradata Factory for on-premises AI workloads.
Teradata is entering a competitive market with this. Snowflake, Databricks, Microsoft, Oracle, and Salesforce are all competing for the same enterprise AI platform customers.
As agencies deploy AI across their missions, officials launching AI agents and cybersecurity officials need to work together, a Palo Alto AI technologist says.
Agreements with Microsoft, Google DeepMind and xAI focus largely on recognizing cybersecurity, biosecurity and chemical weapons risks
The US government has struck deals with Google DeepMind, Microsoft and xAI to review early versions of their new AI models before they are released to the public.
The Center for AI Standards and Innovation (CAISI), part of the US Department of Commerce, announced the agreements on Tuesday, saying the review process would be key to understanding the capabilities of new and powerful AI models as well as to protecting US national security. These collaborations will help the federal government “scale (its) work in the public interest at a critical moment”, the agency said in a press release.
PRESS RELEASE — Applied Digital Corporation (NASDAQ: APLD), a designer, builder, and operator of high-performance, sustainably engineered data centers and colocation services for artificial intelligence, cloud, networking, and blockchain workloads, has announced the closing of a $300 million senior secured bridge facility led by Goldman Sachs. The facility is intended to fund the continued development […]
Microsoft and Google are adding new controls for AI agents, as enterprise IT teams try to keep up with tools that can access corporate data and act across business applications.
Microsoft’s Agent 365, made generally available for commercial customers on May 1, is designed to help organizations discover, govern, and secure AI agents, including those operating across Microsoft, third-party SaaS, cloud, and local environments.
Google’s new AI control center for Workspace, announced this week, focuses more specifically on giving administrators a centralized view of AI usage, security settings, data protection controls, and privacy safeguards within Workspace.
The timing reflects a shift in enterprise AI use. Many companies are no longer just testing chatbots, but are beginning to use agents that can reach corporate systems and carry out tasks on behalf of users.
Analysts said the shift changes how CIOs and CISOs should think about AI agents inside the enterprise.
Cybersecurity was already under strain before AI entered the stack. Now, as AI expands the attack surface and adds new complexity, the limits of legacy approaches are becoming harder to ignore. This session from MIT Technology Review’s EmTech AI conference explores why security must be rethought with AI at its core, not layered on afterward.