Journalist Jamie Bartlett on the people trying to get AI to say things it shouldn’t … for the safety of us all
All the major AI chatbots – from ChatGPT to Gemini to Grok to Claude – have things they should and shouldn’t say.
Hate speech, criminal material, exploitation of vulnerable users – all of this is content that the world's most successful large language models shouldn't produce, and that their safety features should guard against.
French prosecutors said Wednesday that they have opened an investigation into Elon Musk and social media platform X over the distribution of child sexual abuse images, deepfakes, disinformation and alleged complicity in denying crimes against humanity linked to the platform’s artificial intelligence system, Grok.
A Norwegian researcher has identified an issue with Microsoft Edge’s Password Manager that could be a serious concern for businesses.
Tom Jøran Sønstebyseter Rønning found that passwords saved in the browser are held in plain text, meaning any PC within an organization, particularly a shared machine, is a potential risk.
In a post on X, Rønning explained that when users save passwords in Edge, the browser decrypts every credential at startup and keeps it resident in process memory, regardless of whether the user visits the site.
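The risk Rønning describes can be illustrated with a minimal sketch (this is not Edge's actual implementation, and the XOR "cipher" below is only a stand-in for the browser's real at-rest encryption): the difference between decrypting every credential at startup and keeping it resident, versus decrypting on demand and wiping the buffer after use.

```python
# Hypothetical sketch contrasting two credential-handling patterns.
# XOR with a fixed byte is a stand-in for real encryption; actual
# browsers use OS-level key stores (e.g. DPAPI on Windows).

KEY = 0x5A  # stand-in key for illustration only

def encrypt(plaintext: bytes) -> bytes:
    return bytes(b ^ KEY for b in plaintext)

def decrypt(blob: bytes) -> bytearray:
    """Return a mutable buffer so the caller can wipe it after use."""
    return bytearray(b ^ KEY for b in blob)

stored_blob = encrypt(b"hunter2")

# Risky pattern (what the finding describes): decrypt at startup and
# keep the plaintext resident in process memory for the app's lifetime.
resident_plaintext = decrypt(stored_blob)  # lingers until process exit

# Safer pattern: decrypt only when needed, use, then zero the buffer
# so the plaintext is not left sitting in process memory.
buf = decrypt(stored_blob)
credential = bytes(buf)       # use the credential here
for i in range(len(buf)):
    buf[i] = 0                # wipe plaintext from the buffer
```

Because Python strings are immutable they cannot be reliably wiped, which is why the sketch uses `bytearray`; the same concern applies to any language where decrypted secrets outlive their single use.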
Rønning’s finding was replicated by German IT publication Heise.de, which created and saved a password and found that, even after the browser had been closed and re-opened, the password could be found in plain text.
Microsoft has been nonchalant about the discovery. Norwegian website Itavisen.no said, “Rønning reported the discovery to Microsoft, and according to the company, the behavior is ‘by design’.”
Itavisen.no further said that Rønning …
In a federal courtroom in California on Thursday, Elon Musk testified that his own AI startup, xAI, has used OpenAI's models to improve its own.
The matter in question is model distillation, a common industry practice in which a larger AI model acts as a "teacher" of sorts, passing on knowledge to a smaller "student" model. Although it's often used legitimately, with a company using one of its own AI models to train another, it's also a practice sometimes used by smaller AI labs to try to get their models to mimic the performance of a larger competitor's model.
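The teacher-student idea can be sketched in a few lines (a toy illustration, not any lab's actual pipeline; the logits are made-up numbers standing in for real model outputs): the student is trained to match the teacher's temperature-softened output distribution by minimizing a KL-divergence loss.

```python
# Toy sketch of the distillation objective: the student's softened
# output distribution is pulled toward the teacher's via KL divergence.
import math

def softmax(logits, T=1.0):
    # Temperature T > 1 softens the distribution, exposing the
    # teacher's relative preferences among non-top answers.
    exps = [math.exp(x / T) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q):
    # KL(p || q): how far the student's distribution q is from
    # the teacher's distribution p.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

T = 2.0
teacher_logits = [3.0, 1.0, 0.2]   # made-up teacher outputs
student_logits = [2.5, 1.2, 0.4]   # made-up student outputs

p_teacher = softmax(teacher_logits, T)
q_student = softmax(student_logits, T)

# The distillation loss is scaled by T^2; gradient descent on the
# student's parameters would minimize it, nudging the student's
# outputs toward the teacher's.
loss = T ** 2 * kl_divergence(p_teacher, q_student)
```

In a real training loop this loss is computed over the model's full vocabulary at every token and backpropagated through the student network; the toy version only shows the shape of the objective.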
Asked on the stand whether he knew what model distillation …
Read the full story at The Verge.