Students receive $10,000 prizes from OpenAI for innovative use of artificial intelligence
The awards come as the first class to have ChatGPT access for all four years of college is about to graduate.
InfoWorld AI

Generative AI has transformed software development, enabling developers to write code at unprecedented speed. Tools such as GitHub Copilot, Amazon CodeWhisperer, and ChatGPT have become a normal part of how engineers do their work. I have experienced this firsthand, in roles ranging from leading engineering teams at Amazon to building large-scale invoicing and compliance platforms: both the enormous productivity gains and the equally serious risks that come with GenAI-assisted development. The productivity promise is compelling — developers who use AI coding assistants report gains of 15% to 55%. But that speed often carries hidden dangers. Without good guardrails, AI-generated software can open security holes, accumulate technical debt, and introduce bugs that are difficult to catch in traditional code reviews. According to McKinsey research, while G
If you have spent time using AI coding agents — GitHub Copilot, Claude Code, Gemini CLI — you have probably run into this situation: you describe what you want, the agent generates a block of code that looks correct, compiles, and then subtly misses the actual intent. This “vibe-coding” approach can work for quick prototypes […] The post Meet GitHub Spec-Kit: An Open Source Toolkit for Spec-Driven Development with AI Coding Agents appeared first on MarkTechPost.
OpenAI has introduced a Trusted Contact feature for ChatGPT that allows users to designate a friend or family member to receive automated alerts if conversations suggest self-harm risk. The company said human reviewers aim to assess safety notifications within one hour before deciding whether to contact the designated person via email, text, or in-app message. […]
Journalist Jamie Bartlett on the people trying to get AI to say things it shouldn’t … for the safety of us all All the major AI chatbots – from ChatGPT to Gemini to Grok to Claude – have things they should and shouldn’t say. Hate speech, criminal material, exploitation of vulnerable users – all of this is content that the most successful large language models in the world shouldn’t produce, that their safety features should guard against.
A lawsuit against the National Endowment for the Humanities drew wide attention for revealing how DOGE had used ChatGPT to cancel grants.
The company is expanding its efforts to protect ChatGPT users in cases where conversations may turn to self-harm.
The week leading up to Thanksgiving 2023 was the AI industry's biggest soap opera moment. OpenAI CEO Sam Altman was abruptly ousted from his role at the ChatGPT maker. The explanation? That Altman was "not consistently candid in his communications with the board." Now, via witness testimony and trial exhibits in Musk v. Altman, the public is getting a concrete look behind the scenes of that dramatic weekend for the first time, much of it centered on former CTO Mira Murati. It was a unique situation: the rollercoaster of a power play, which seemed to shift by the hour, unfolded in many ways publicly. The board's strikingly vague … Read the full story at The Verge.
A new study finds ChatGPT, Claude, Grok, and Perplexity all share user data with third-party ad trackers—sometimes even when you say no to cookies.