Is an AI agent your new coworker? Make sure to lean into your humanness
Fight the FOBO (fear of becoming obsolete). Automated work still requires human judgement.
The Verge AI
Microsoft is launching a new AI agent inside Word that's specifically designed for legal teams. Legal Agent handles document edits, negotiation history, and complex documents to help legal teams with tasks like reviewing contracts. "Instead of relying on general AI models to interpret commands, the agent follows structured workflows shaped by real legal practice, managing clearly defined, repeatable tasks like reviewing contracts clause by clause against a playbook," explains Sumit Chauhan, corporate vice president of Microsoft's Office Product Group. The Legal Agent can also work with existing documents that have tracked changes. Read the full story at The Verge.
Perplexity has opened its Personal Computer feature to all Mac users through a new desktop app, bringing local AI agent capabilities beyond its previous Max subscriber waitlist. The tool extends Perplexity's cloud-based Computer product onto users' own devices, giving AI agents access to local files, native Mac applications, over 400 connectors, and the web.
Standard prompt attacks are merely the beginning. A structured framework to map and mitigate the backend attack vectors of agentic workflows. The post The AI Agent Security Surface: What Gets Exposed When You Add Tools and Memory appeared first on Towards Data Science.
OpenAI CEO Sam Altman and Microsoft CTO Kevin Scott. | Image: Getty Images When OpenAI was busy experimenting with AI-powered gaming bots, Microsoft CEO Satya Nadella and OpenAI CEO Sam Altman were in the early days of forming an AI partnership. Court documents from the ongoing Musk v. Altman trial have provided a rare look at the communications between Microsoft's top executives about investing in OpenAI and fears that the AI startup could "storm off to Amazon" and "shit-talk" Microsoft. Just days after OpenAI showed a bot beating a Dota 2 professional in the summer of 2017, Altman responded to Nadella's congratulations email with a proposal for a much bigger partnership with OpenAI to fund its next phase of AI research. Read the full story at The Verge.
TCI has cut its position in the tech giant from 10% to 1%
Leaders at the tech giant were skeptical of OpenAI—but wary of pushing it into the arms of Amazon, according to emails dating back to 2018.
MRC (Multipath Reliable Connection) is a new open networking protocol developed by OpenAI in partnership with AMD, Broadcom, Intel, Microsoft, and NVIDIA. It improves GPU networking performance and resilience in large-scale AI training clusters by spreading packets across hundreds of paths simultaneously and recovering from network failures in microseconds, enabling supercomputers with over 100,000 GPUs to be built using only two tiers of Ethernet switches. The post OpenAI Introduces MRC (Multipath Reliable Connection): A New Open Networking Protocol for Large-Scale AI Supercomputer Training Clusters appeared first on MarkTechPost.
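To make the core idea concrete, here is a minimal toy sketch of multipath packet spraying with failover. This is not the MRC protocol itself (its wire format and recovery mechanism are not described in the summary above); the class name and round-robin policy are illustrative assumptions, showing only the general technique of spreading one flow across many paths and steering around a failed one.

```python
class MultipathSender:
    """Toy model of multipath packet spraying with fast failover.

    Illustrative only -- NOT the actual MRC protocol. Real protocols
    detect failures via acks/timeouts measured in microseconds; here
    failure is reported explicitly for simplicity.
    """

    def __init__(self, num_paths: int):
        self.healthy = set(range(num_paths))  # paths currently usable
        self.next_idx = 0                     # round-robin cursor

    def mark_failed(self, path: int) -> None:
        # Remove a failed path; subsequent packets avoid it immediately.
        self.healthy.discard(path)

    def send(self, packet_id: int) -> int:
        # Spray packets round-robin over the currently healthy paths
        # and return the chosen path for this packet.
        alive = sorted(self.healthy)
        path = alive[self.next_idx % len(alive)]
        self.next_idx += 1
        return path
```

Usage: spraying eight packets over four paths, then failing one path mid-stream, shows later packets rerouting onto the surviving paths without any reconnection step.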
The Center for AI Standards and Innovation (CAISI), a division of the US Department of Commerce, has signed agreements with Google DeepMind, Microsoft, and xAI that would give the agency the ability to vet AI models from these organizations and others before they are made publicly available. According to a release from CAISI, which is part of the department's National Institute of Standards and Technology (NIST), it will "conduct pre-deployment evaluations and targeted research to better assess frontier AI capabilities and advance the state of AI security." The three join Anthropic and OpenAI, which signed similar agreements almost two years ago during the Biden administration, when CAISI was known as the US Artificial Intelligence Safety Institute. An August 2024 release about those agreements indicated that the institute planned to provide feedback to both companies on "potential safety improvements to their models, in close collaboration with its partners at the UK AI Safety Institute."