Some of the positions focus on AI-native development, data engineering and analytics, cloud-based engineering, agent and model development, prompt engineering, and new AI workflows.
Tech can scale cyber-attacks and defences alike, raising questions about private power, public risk and the future of a shared internet
Anthropic announced its latest AI model, Claude Mythos, this month but said it would not be released publicly, because it turns computers into crime scenes. The company claimed that it could find previously unknown “zero-day” flaws, exploit them and, in principle, chain these weaknesses together to take over major operating systems and web browsers. Mythos did so autonomously, writing code and obtaining privileges. The implications are significant. It’s like a burglar being able to target any building, get inside, unlock every door and empty every safe.
The Silicon Valley company has so far named 40 organisations as partners under Project Glasswing to help mount a defence – asking them to “patch” vulnerabilities before hackers get a chance to exploit them. All are American, sitting at the heart of the US-led digital system. Anthropic shared Mythos with
An image generated by ChatGPT Images 2.0. | Image: OpenAI
OpenAI is rolling out the latest version of its AI-powered image generator with new "thinking capabilities," allowing it to search the web to help it create multiple images from a single prompt. In a blog post, OpenAI says ChatGPT Images 2.0 can now create more "sophisticated" images, with improvements to its ability to follow instructions, preserve details of your choosing, and generate text.
It's powered by OpenAI's new GPT Image 2 model, with new thinking capabilities available to ChatGPT Plus, Pro, Business, and Enterprise subscribers. When a thinking model is selected, the chatbot's image generator can pull information from the web, cr …
Read the full story at The Verge.
As companies move from experimenting with AI agents to deploying them in production, one pattern becomes clear: capability without control is a liability.
Agents operate in long-running, stateful environments. They browse the web, read repositories, execute shell commands, call APIs and interact with internal systems. That power is transformative — and it meaningfully expands the attack surface.
In a recent interview, Jonathan Wall, CEO of Runloop, summarized the shift: “By default, agents should have access to very little. They need to do real work, but capabilities have to be layered on in a controlled way.” That framing reflects a broader industry reality: agent infrastructure must be designed around least privilege, explicit isolation and observable execution.
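The "access to very little by default" posture can be sketched in code. The example below is a minimal illustration of least-privilege capability gating for agent tools, not any specific framework's API; all names (`Capability`, `ToolRegistry`, `register`, `grant`) are hypothetical. Every tool starts with zero capabilities, and each capability must be layered on by an explicit operator decision before the tool can run.

```python
from enum import Enum, auto

class Capability(Enum):
    """Coarse-grained capabilities an agent tool might require (illustrative)."""
    READ_FS = auto()
    NET = auto()
    SHELL = auto()

class ToolRegistry:
    """Tools start with no capabilities; each must be granted explicitly."""
    def __init__(self):
        self._tools = {}       # name -> (callable, required capabilities)
        self._granted = set()  # capabilities the operator has enabled

    def register(self, name, func, requires):
        self._tools[name] = (func, frozenset(requires))

    def grant(self, cap):
        # Capabilities are layered on deliberately, never on by default.
        self._granted.add(cap)

    def run(self, name, *args):
        func, requires = self._tools[name]
        missing = requires - self._granted
        if missing:
            # Deny-by-default: refuse any call whose requirements are not met.
            raise PermissionError(
                f"{name} needs {sorted(c.name for c in missing)}"
            )
        return func(*args)

registry = ToolRegistry()
registry.register("list_dir", lambda p: f"ls {p}", requires={Capability.READ_FS})
registry.grant(Capability.READ_FS)
print(registry.run("list_dir", "/tmp"))  # allowed: READ_FS was granted
```

A tool registered with `requires={Capability.SHELL}` would raise `PermissionError` until `SHELL` is granted, making each expansion of the agent's power an explicit, auditable step rather than an implicit default.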
What follows is a practical control architecture for production agents.
The layered control model
A resilient agent deployment combines six explicit layers:
Strong runtime isolation with a microVM
Restrictive network policy w