Microsoft's LinkedIn CEO, Ryan Roslansky, took on an expanded role at the company as head of Office last year, and he's now getting more responsibilities as part of the latest leadership reshuffle inside Microsoft. Sources tell me that the Microsoft Teams organization is moving to report to Roslansky, who will now lead a new Work Experiences Group at Microsoft.
The changes are part of a broader reshuffle triggered by Rajesh Jha, executive vice president of Microsoft's experiences and devices group, retiring from Microsoft after more than 35 years. Jha was responsible for the teams behind Windows, Office, Copilot, and Microsoft 365, and Micr …
Read the full story at The Verge.
OpenAI has shipped a Chrome extension for Codex, its AI coding agent, enabling it to complete browser-based tasks directly inside Google Chrome on macOS and Windows — including interacting with signed-in websites, using Chrome DevTools, and running multi-step workflows across browser tabs.
The post OpenAI Adds Chrome Extension to Codex, Letting Its AI Agent Access LinkedIn, Salesforce, Gmail, and Internal Tools via Signed-In Sessions appeared first on MarkTechPost.
OpenAI CEO Sam Altman and Microsoft CTO Kevin Scott. | Image: Getty Images
When OpenAI was busy experimenting with AI-powered gaming bots, Microsoft CEO Satya Nadella and OpenAI CEO Sam Altman were in the early days of forming an AI partnership. Court documents from the ongoing Musk v. Altman trial have provided a rare look at communications between Microsoft's top executives about investing in OpenAI, and at their fears that the AI startup could "storm off to Amazon" and "shit-talk" Microsoft.
Just days after OpenAI showed a bot beating a Dota 2 professional in the summer of 2017, Altman responded to Nadella's congratulations email with a proposal for a much bigger partnership with OpenAI to fund its next phase of AI resear …
Read the full story at The Verge.
AI is capable of mimicking a real person. The capability clearly exists, and in many cases the ethics of using it this way are just as clear. But increasingly, new applications are leading to ethically murky results.
The good
For example, the CEO of a company, or a politician, could use AI tools to create a clone — a chatbot plus an avatar, a digital twin — that can interact with people on their behalf. Silicon Valley is big on the idea: Meta’s Mark Zuckerberg and LinkedIn co-founder Reid Hoffman are working on, or have already created, digital twins of themselves.
Cloned politicians include Pakistan’s Imran Khan, who used an authorized voice clone to campaign from prison, and New York City Mayor Eric Adams, who used voice-cloned robocalls to speak with constituents in languages like Mandarin and Yiddish.
This kind of use case is probably ethical — as long as the people interacting know that they’re dealing with a digital clone and not a real person.
The bad
The f …
A LinkedIn feature that allows paid subscribers to view a list of visitors to their profile should be made available to all EU users free of charge to comply with the region’s General Data Protection Regulation (GDPR), a legal complaint launched by the None of Your Business (NOYB) digital rights group has claimed.
Filed this week in an Austrian court, the group’s complaint argues that LinkedIn’s ‘Who’s Viewed Your Profile’ feature contravenes Article 15 of the GDPR, which covers a data subject’s right of access to their own data.
NOYB has a history of taking on tech companies. In 2025, Google was hit with a €325 million ($381 million) fine by France’s privacy regulator, the CNIL, over its data collection and advertising practices, following a complaint by the group.
Contradictory policy
LinkedIn began offering users the ability to see who has viewed their profile around 2007, later turning this into a paywalled perk in a move that pre-dated the arrival of GDPR in 2018.
According to NOYB, this commercializati
MRC (Multipath Reliable Connection) is a new open networking protocol developed by OpenAI in partnership with AMD, Broadcom, Intel, Microsoft, and NVIDIA. It improves GPU networking performance and resilience in large-scale AI training clusters by spreading packets across hundreds of paths simultaneously, recovering from network failures in microseconds, and enabling supercomputers with more than 100,000 GPUs to be built using only two tiers of Ethernet switches.
The post OpenAI Introduces MRC (Multipath Reliable Connection): A New Open Networking Protocol for Large-Scale AI Supercomputer Training Clusters appeared first on MarkTechPost.
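MRC's actual wire format and recovery mechanics aren't detailed in the announcement, but the core multipath idea it describes — spraying the packets of one flow across many paths and shifting traffic around a failed path rather than stalling the flow — can be illustrated with a toy sketch. This is a hypothetical model, not the real protocol; the path count, class name, and failure handling here are assumptions for illustration only.

```python
class MultipathSprayer:
    """Toy illustration of multipath packet spraying (NOT the real MRC protocol).

    Packets of a single flow are sprayed round-robin across many paths;
    when a path is marked failed, subsequent packets immediately go out
    over the surviving paths instead of stalling the whole flow.
    """

    def __init__(self, num_paths: int):
        self.paths = list(range(num_paths))  # path identifiers
        self.failed = set()                  # paths known to be down
        self._next = 0                       # round-robin cursor

    def mark_failed(self, path: int) -> None:
        """Record a path failure so the sprayer stops using it."""
        self.failed.add(path)

    def send(self, packet) -> tuple:
        """Pick the next surviving path round-robin; return (path, packet)."""
        alive = [p for p in self.paths if p not in self.failed]
        if not alive:
            raise RuntimeError("no surviving paths")
        path = alive[self._next % len(alive)]
        self._next += 1
        return path, packet


sprayer = MultipathSprayer(num_paths=8)
first = [sprayer.send(i)[0] for i in range(8)]   # one packet lands on each path
sprayer.mark_failed(3)                            # simulate a link failure
after = [sprayer.send(i)[0] for i in range(7)]   # flow continues on the 7 survivors
```

In a real fabric the spraying and failover happen in the NIC at microsecond timescales, but the load-balancing-plus-fast-reroute shape is the same: no single path failure can stall the flow.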
The Center for AI Standards and Innovation (CAISI), a division of the US Department of Commerce, has signed agreements with Google DeepMind, Microsoft, and xAI that give the agency the ability to vet AI models from these and other organizations before they are made publicly available.
According to a release from CAISI, which is part of the department’s National Institute of Standards and Technology (NIST), it will “conduct pre-deployment evaluations and targeted research to better assess frontier AI capabilities and advance the state of AI security.”
The three join Anthropic and OpenAI, which signed similar agreements almost two years ago during the Biden administration, when CAISI was known as the US Artificial Intelligence Safety Institute.
An August 2024 release about those agreements indicated that the institute planned to provide feedback to both companies on “potential safety improvements to their models, in close collaboration with its partners at the UK AI Safety In