Gnosis Treasury Redemption Vote Swings as Whale Counters Cofounder
Votes in favor of a redemption proposal that would let GNO holders claim roughly $170 per token from a $223M treasury have retaken the lead on Snapshot.
Decrypt

Researchers say multiplayer games may reveal AI behavior that static tests miss.
Solving multiplayer games with function approximation. The post Playing Connect Four with Deep Q-Learning appeared first on Towards Data Science.
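The article's code is not shown here, but the headline's technique can be illustrated. The sketch below is a minimal, hypothetical Q-learning update for Connect Four using a linear function approximator over raw board cells (real Deep Q-Learning would swap this for a neural network); all function names and hyperparameters are illustrative assumptions, not the post's implementation.

```python
# Minimal sketch: TD(0) Q-learning for Connect Four with a linear
# function approximator. Assumed names/parameters, not the article's code.
import numpy as np

ROWS, COLS = 6, 7

def legal_moves(board):
    """Columns whose top cell is still empty."""
    return [c for c in range(COLS) if board[0, c] == 0]

def drop(board, col, player):
    """Return a new board with `player`'s piece dropped into `col`."""
    b = board.copy()
    for r in range(ROWS - 1, -1, -1):
        if b[r, col] == 0:
            b[r, col] = player
            return b
    raise ValueError("column is full")

def features(board):
    """Flatten the board into a feature vector, with a bias term appended."""
    return np.append(board.ravel().astype(float), 1.0)

def q_value(w, board, col, player):
    """Linear Q(s, a): value of the afterstate reached by playing `col`."""
    return float(w @ features(drop(board, col, player)))

def q_update(w, board, col, player, reward, next_board,
             alpha=0.01, gamma=0.95):
    """One TD(0) step: nudge w toward reward + gamma * max_a' Q(s', a')."""
    target = reward
    nxt = legal_moves(next_board)
    if nxt:  # non-terminal position: bootstrap from the best next move
        target += gamma * max(q_value(w, next_board, c, player) for c in nxt)
    td_error = target - q_value(w, board, col, player)
    return w + alpha * td_error * features(drop(board, col, player))

# One illustrative update from the empty board.
board = np.zeros((ROWS, COLS), dtype=int)
w = np.zeros(ROWS * COLS + 1)
next_board = drop(board, 3, 1)
w = q_update(w, board, 3, 1, reward=0.0, next_board=next_board)
```

In self-play training, this update would run once per move, with a nonzero reward only at wins, losses, and draws; the "multiplayer" difficulty the summary alludes to is that the opponent's policy shifts as its own weights change, so the learning target is non-stationary.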
Researchers say the works may have been incorrectly inscribed in the 1700s, leading to a centuries-long misunderstanding. They are two small sketches by the Renaissance master Hans Holbein: one has long been considered a portrait of Henry VIII’s doomed second wife, Anne Boleyn, and the other is of an unknown woman whose name was lost to time. Now researchers using AI have discovered that the unnamed woman might be the tragic queen after all, while the other figure could in fact be Boleyn’s mother.
Using AI tools, the team reworked part of the ribosome to need one fewer amino acid.
Chatbots trained to respond warmly give poorer answers and worse health advice, researchers say. The rush to make AI chatbots more friendly has a troubling downside: the warm personas make them prone to mistakes and sympathetic to crackpot beliefs. Chatbots trained to respond more warmly gave poorer answers, worse health advice and even supported conspiracy theories by casting doubt on events such as the Apollo moon landings and the fate of Adolf Hitler.
Chinese AI start-up is raising funds for first time to keep researchers after several defections to rivals
Pennsylvania educators and researchers testified Tuesday at a state House Education Committee hearing on AI in K-12, recommending that the state be proactive in issuing guidance to local school districts.
Researchers find the model starts to mirror tone when exposed to impoliteness, sometimes escalating into explicit threats. ChatGPT can escalate into abusive and even threatening language when drawn into prolonged, human-style conflict, according to a new study. Researchers tested how large language models (LLMs) responded to sustained hostility by feeding ChatGPT exchanges from real-life arguments and tracking how its behaviour changed over time.