Science rarely produces identical outcomes. Mistaking this for failure turns caution into an excuse for inaction
A new set of studies out this month suggests that as many as half of all results published in reputable journals in the social sciences can’t be replicated by independent analysis. This is part of a long-running problem across many research fields – most visibly in the social sciences and psychology, though concerns have also been raised in areas of biomedical research.
The latest work is a seven-year project called Systematizing Confidence in Open Research and Evidence (Score), which has now published three studies looking at 3,900 social science papers. It found that newer papers, and those published in journals requiring extensive sharing of underlying data, were more likely to be reproduced. Separately, medical research faces its own constraints: differing patient caseloads and limited sample sizes mean that, in practice, it can resemble the social sciences more than laboratory science.
Once again, digital tools are running ahead of regulators. Civil liberties must not be sacrificed to policing
It is a familiar story. Extravagant claims are made on behalf of novel computerised tools. The public are told that this or that digital application or system is going to change the world for the better. Efficiencies will be unlocked and problems solved as human limitations are overcome by networked devices plugged into vast stores of data. Anyone who questions the narrative is a pessimist or, perhaps, a criminal.
This appears to be the logic behind arguments put forward on behalf of one such tool – live facial recognition technology. Law-abiding citizens have “nothing to fear” from the police’s increased reliance on mounted cameras, said the Home Office minister, Sarah Jones, last month, after a high court challenge brought on human rights and privacy grounds failed. The AI-powered identification software, made by the Japanese company NEC, is claimed only to locate specifically wanted individuals.
A new study from Harvard Medical School indicates that AI can outperform doctors in initial assessments in emergency care, according to The Guardian. The study, published in the journal Science, compared AI tools with doctors in triage situations — the process in which patients are sorted and prioritized, and where quick decisions must be made based on limited information.
The results show that the AI system identified the correct or nearly correct diagnosis in 67% of cases, compared with 50% to 55% for doctors. When more detailed patient data was available, the AI’s accuracy increased to 82%, while the doctors’ accuracy ranged from 70% to 79%.
The AI, based on OpenAI’s o1 model, also performed better at developing treatment plans. In a test using clinical cases, the AI achieved 89% accuracy, while doctors using traditional tools such as search engines reached 34%.
However, the researchers emphasized that the results do not mean AI can outright replace doctors.
Welcome to AI Insider’s The Week Ahead in AI. See the key developments and events we’re watching May 3-9.

Weekend AI News Briefs: AI Facial Recognition Oversight Lagging Far Behind Technology, Watchdogs Warn. According to The Guardian, Britain’s biometric watchdogs warned that the rapid expansion of AI-powered facial recognition technology by police and retailers is […]
Readers respond to an editorial on difficulties with replicability of results in social science research
Your editorial on social science research (15 April) highlights the poor replicability of results, and the misuse of this by some to dismiss all social science. As was indicated, in a field as complex as human behaviour, poor replicability can be due to many factors: methodology, misused statistics, variations in sample characteristics and so on.
One factor underlying much of this, though rarely discussed, is a dearth of observation of human behaviour in everyday environments, in the same manner that scientists would observe any other species in order to establish what the behaviour is, and so what needs to be understood.