Insider Brief: Meta announced it is expanding the use of AI systems designed to identify underage users and automatically place suspected teens into stricter safety settings across Instagram and Facebook. The company said it is strengthening enforcement against users under 13 by using AI tools that analyze profiles, posts, captions and other account activity for […]
Memory shapes how humans think and how AI agents act. Without it, an agent only responds to the current input; with it, it can keep context, recall past actions, and reuse useful knowledge. AI memory spans short-term, episodic, semantic, and long-term memory, each with different design trade-offs around storage, retention, retrieval, and control. In this […]
The post Agent Memory Patterns in Cognitive Science and AI Systems appeared first on Analytics Vidhya.
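The memory tiers mentioned in the teaser above can be illustrated with a minimal sketch. This is a hypothetical example, not code from the Analytics Vidhya post: the `AgentMemory` class and its methods are invented names. It models short-term memory as a bounded recency buffer, episodic memory as an append-only event log, and semantic memory as keyed facts, showing the storage and retrieval trade-offs the post alludes to.

```python
# Hypothetical sketch of agent memory tiers; names are illustrative,
# not from any specific framework or from the post above.
from collections import deque

class AgentMemory:
    def __init__(self, short_term_size=5):
        # Short-term: only the most recent events, bounded like a context window.
        self.short_term = deque(maxlen=short_term_size)
        # Episodic: append-only log of everything the agent observed or did.
        self.episodic = []
        # Semantic: distilled facts, keyed for cheap direct lookup.
        self.semantic = {}

    def observe(self, event):
        self.short_term.append(event)   # old entries fall off automatically
        self.episodic.append(event)     # full history is retained

    def learn_fact(self, key, value):
        self.semantic[key] = value

    def recall(self, key=None):
        # Retrieval trade-off: keyed fact lookup vs. recent-context replay.
        if key is not None:
            return self.semantic.get(key)
        return list(self.short_term)

mem = AgentMemory(short_term_size=2)
mem.observe("user asked about pricing")
mem.observe("agent fetched price list")
mem.observe("user confirmed order")
mem.learn_fact("preferred_language", "en")
print(mem.recall())                      # only the 2 most recent events
print(mem.recall("preferred_language"))  # "en"
print(len(mem.episodic))                 # full history: 3 events
```

The design choice to keep separate stores reflects the trade-offs the post names: the short-term buffer is cheap but forgetful, the episodic log retains everything at growing storage cost, and the semantic map trades generality for fast, targeted retrieval.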
Plus: Meta officially kills encrypted Instagram DMs, the Trump administration targets “violent left wing extremists,” leaked documents reveal Russia's school for elite hackers, and more.
As it adapts to the artificial intelligence era, the company is pushing many of its 78,000 workers to use the technology, and preparing to lay some of them off.
AI is capable of mimicking a real person. That much is clear, and in some cases the ethics of using AI this way are clear as well. But increasingly, new applications are producing ethically murky results.
The good
For example, the CEO of a company, or a politician, could choose to create a clone using AI tools, creating a chatbot plus an avatar — a digital twin — that can interact with people on their behalf. Silicon Valley is big on the idea: Meta’s Mark Zuckerberg and LinkedIn co-founder Reid Hoffman are working on, or have already created, digital twins of themselves.
Cloned politicians include Pakistan’s Imran Khan, who used an authorized voice clone to campaign from prison, and New York City Mayor Eric Adams, who used voice-cloned robocalls to speak with constituents in languages like Mandarin and Yiddish.
This kind of use case is probably ethical — as long as the people interacting know that they’re dealing with a digital clone and not a real person.
The bad
The f
European Union member states and the European Parliament agreed early Thursday to push back the toughest deadlines under the bloc’s AI Act, giving enterprises more time to prepare for high-risk compliance.
Under the provisional deal between negotiators for the European Parliament and European Council, high-risk AI systems will face new deadlines of Dec. 2, 2027 for stand-alone systems and Aug. 2, 2028 for AI used in products covered by EU sectoral safety rules, a European Parliament statement said. The original deadline was Aug. 2, 2026.
The deal still needs formal adoption by both Parliament and Council before it enters into force. The co-legislators intend to complete that step before Aug. 2. Until they do, the original deadline applies as drafted.
“Today’s agreement on the AI Act significantly supports our companies by reducing recurring administrative costs,” Marilena Raouna, Cyprus’s deputy minister for European affairs, said in a statement from the Council, which is composed of