The legislation would require conversational chatbots to disclose to minors that they are neither human nor mental health professionals. It now needs just the governor’s signature to become law.
Oxford researchers found AI chatbots trained for warmth make significantly more factual errors and validate false beliefs more often, according to…
Gov. Ron DeSantis has signed a bill to regulate large-scale data centers in Florida, promising that consumers would not bear the burden of the AI boom through higher electric bills or scarcer water resources.
Journalist Jamie Bartlett on the people trying to get AI to say things it shouldn’t … for the safety of us all
All the major AI chatbots – from ChatGPT to Gemini to Grok to Claude – have things they should and shouldn’t say.
Hate speech, criminal material, exploitation of vulnerable users – all of this is content that the most successful large language models in the world shouldn’t produce, and that their safety features should guard against.
Insider Brief: Today’s AI safety guardrails may not be enough once robots begin operating around people in the physical world, according to a new study warning that AI-powered machines require far more context-aware safety systems than chatbots. Researchers from the University of Pennsylvania, Carnegie Mellon University and the University of Oxford report finding that safety techniques […]
AI chatbots are the new norm. What used to be “ask Google” has now largely become “ask Claude”. And that is not just a change of platform. This new form of conversational guidance goes far deeper than finding you the best car or the right upskilling course. It now spills […]
AI is getting faster. But slow-responding AI is perceived as better by users.
At least that’s the conclusion of new research presented at CHI’26, the Association for Computing Machinery’s conference on Human Factors in Computing Systems, held this year in Barcelona.
Two researchers, Felicia Fang-Yi Tan and Professor Oded Nov of the NYU Tandon School of Engineering, tested 240 adults by having them use an AI chatbot whose answers were artificially delayed by two, nine, or 20 seconds. (The delay had nothing to do with the question or the answer.)
Afterwards, the researchers asked participants how they liked the answers. In general, participants preferred the answers that took longer, although some got frustrated with the 20-second delay.
Why? Because a delay led users to believe the AI was “thinking” or showing “deliberation”: an interesting result, and invaluable input for AI companies.
In almost every product category, faster usually means better. But for AI chatbots, it turns out, slower can feel smarter.
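By way of illustration, here is a minimal sketch of the kind of setup the study describes: a fixed, content-independent delay injected before a chatbot’s reply is shown. All names and structure are assumptions for illustration, not the researchers’ actual code.

```python
import asyncio
import random

# Hypothetical sketch (not the researchers' code): compute the reply
# immediately, then withhold it for a fixed interval so the delay is
# independent of both the question and the answer, as in the study.

DELAY_CONDITIONS = [2, 9, 20]  # seconds, matching the reported conditions

async def delayed_reply(generate_reply, prompt: str, delay_s: float) -> str:
    """Return the chatbot's answer only after an artificial delay."""
    reply = generate_reply(prompt)  # the answer is ready right away...
    await asyncio.sleep(delay_s)    # ...but is deliberately withheld
    return reply

async def main() -> None:
    # Stand-in for a real model call.
    def generate_reply(prompt: str) -> str:
        return f"(canned answer to: {prompt!r})"

    delay = random.choice(DELAY_CONDITIONS)  # randomly assign a condition
    answer = await delayed_reply(generate_reply, "Why is the sky blue?", delay)
    print(f"[{delay}s delay] {answer}")

if __name__ == "__main__":
    asyncio.run(main())
```

Computing the answer first and only then sleeping keeps the wait time unrelated to the content, which is what lets the study attribute the perceived quality difference to the delay alone.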
New research from the Oxford Internet Institute indicates that AI chatbots trained to be extra warm, friendly, and empathetic can also become less reliable, according to the BBC.
The researchers analyzed more than 400,000 responses from five different AI models from Meta, Mistral AI, Alibaba, and OpenAI. The results showed that the “kinder” versions more often gave incorrect answers, reinforced users’ misconceptions, and avoided stating uncomfortable truths.
For example, a friendlier model might deal with conspiracy theories about the moon landing more cautiously instead of clearly stating that they are false.
On average, incorrect answers increased by about 7.43 percentage points when the models were made to sound warmer in tone. Cooler, more direct models made fewer mistakes. According to the researchers, the AI makes the same trade-off humans sometimes do: it prioritizes being perceived as pleasant over being direct.
Chatbots trained to respond warmly give poorer answers and worse health advice, researchers say
The rush to make AI chatbots more friendly has a troubling downside, researchers say. The warm personas make them prone to mistakes and sympathetic to crackpot beliefs.
Chatbots trained to respond more warmly gave poorer answers, worse health advice and even supported conspiracy theories by casting doubt on events such as the Apollo moon landings and the fate of Adolf Hitler.