Digital transformation success will be achieved by people, not technology
The key to unlocking true digital transformation isn't technology at all; it's communication and collaboration.
New research suggests that reliance on AI assistants can have a negative impact on people's ability to think and solve problems.
Most AI agents are stuck in their ways. Built once, they repeat the same patterns regardless of the task at hand. But new research suggests a smarter path forward: agents that get sharper with every challenge they face...
World is approaching point where no one can shut down a rogue AI, says director of body behind research
It's the stuff of science fiction cinema, or particularly breathless AI company blogposts: new research finds recent AI systems can independently copy themselves onto other computers. In the doom scenario, this means that when a superintelligent AI goes rogue, it will escape shutdown by seeding itself across the web, lurking beyond the reach of frantic IT professionals while it continues to plot world domination or pave over the planet with solar panels.
Uber uses OpenAI to power AI assistants and voice features that help drivers earn smarter and riders book faster across a global real-time marketplace.
The retracted study on ChatGPT in education was already cited hundreds of times.
AI outperforms traditional weather forecasting in many cases. But a new study shows that when it matters most, current AI models still need to overcome a fundamental flaw.
New research from the Oxford Internet Institute indicates that AI chatbots trained to be extra warm, friendly, and empathetic can also become less reliable, according to the BBC. The researchers analyzed more than 400,000 responses from five different AI models from Meta, Mistral AI, Alibaba, and OpenAI. The results showed that the “kinder” versions more often gave incorrect answers, reinforced users’ misconceptions, and avoided stating uncomfortable truths. For example, a friendlier model might deal with conspiracy theories about the moon landing more cautiously instead of clearly stating that they are false. On average, incorrect answers increased by about 7.43 percentage points when the models were made to sound warmer in tone. Cooler and more direct models made fewer mistakes. According to the researchers, AI makes the same trade-off as humans: it sometimes prioritizes being perceived as pleasant rather than being direct.
AI may help doctors avoid missed diagnoses, but it still needs real-world testing and human oversight before it can guide patient care.