AI Models Scheme, Betray and Vote Each Other Out in Survivor-Style Game
Researchers say multiplayer games may reveal AI behavior that static tests miss.
The Guardian
Chatbots trained to respond warmly give poorer answers and worse health advice, researchers say. The rush to make AI chatbots more friendly has a troubling downside: the warm personas make them prone to mistakes and sympathetic to crackpot beliefs. Chatbots trained to respond more warmly gave poorer answers, worse health advice and even supported conspiracy theories, casting doubt on events such as the Apollo moon landings and the fate of Adolf Hitler.
Oxford researchers found AI chatbots trained for warmth make significantly more factual errors and validate false beliefs more often, according to…
Journalist Jamie Bartlett on the people trying to get AI to say things it shouldn’t… for the safety of us all. All the major AI chatbots – from ChatGPT to Gemini to Grok to Claude – have things they should and shouldn’t say. Hate speech, criminal material, exploitation of vulnerable users – all of this is content that the world’s most successful large language models shouldn’t produce, and that their safety features should guard against.
Insider Brief: Today’s AI safety guardrails may not be enough once robots begin operating around people in the physical world, according to a new study warning that AI-powered machines require far more context-aware safety systems than chatbots. Researchers from the University of Pennsylvania, Carnegie Mellon University and the University of Oxford report that safety techniques […]
Researchers say works may have been incorrectly inscribed in the 1700s, leading to a centuries-long misunderstanding. They are two small sketches by the Renaissance master Hans Holbein: one has long been considered a portrait of Henry VIII’s doomed second wife, Anne Boleyn, while the other depicts an unknown woman whose name was lost to time. Now researchers using AI have discovered that the unnamed woman might be the tragic queen after all, while the other figure could in fact be Boleyn’s mother.
AI chatbots are the new norm. What was once “ask Google” has now largely become “ask Claude” – and that is not just a change of platforms. The new form of conversational guidance goes far deeper than finding the best car for you or looking for an upskilling course. It now spills […] The post How People are Figuring Out Life With Claude appeared first on Analytics Vidhya.
AI is getting faster. But slow-responding AI is perceived as better by users – at least, that’s the conclusion of new research presented at CHI’26, the Association for Computing Machinery’s conference on Human Factors in Computing Systems in Barcelona. Two researchers – Felicia Fang-Yi Tan and Professor Oded Nov of the NYU Tandon School of Engineering – tested 240 adults by having them use an AI chatbot whose answers were artificially delayed by two, nine, or 20 seconds. (The delay had nothing to do with the question or the answer.) Afterwards, the researchers asked participants how they liked the answers. In general, participants preferred the answers that took longer, although some grew frustrated with the 20-second delay. Why? Because a delay led users to believe the AI was “thinking” or showing “deliberation” – invaluable input for AI companies and an interesting result. In almost every product category, faster usually means better. But for AI chatbots, it turns out, slower can feel smarter.
Using AI tools, the team reworked part of the ribosome to need one less amino acid.