Posts

Why does Plato include myths in his dialogues?

AI Mode: "Plato incorporates myths into his dialogues to serve several philosophical, educational, and persuasive purposes. While he often championed rational argument (logos), he used myth (muthos) as a vital supplement when reason alone reached its limits or when he needed to reach a broader audience. "Key reasons for Plato's use of myth include:
- Educational Tool for Complex Concepts: Myths provide a vivid, narrative way to convey abstract and difficult philosophical ideas. For example, the Myth of the Winged Soul in the Phaedrus dramatizes the theory of recollection, helping less philosophical readers grasp the idea that 'knowledge is recollection.'
- Persuasion and Moral Guidance: Plato used myths to instill noble beliefs and influence behavior in those who might not follow a strict logical argument. The Myth of Er at the end of the Republic serves as a 'back-up' strategy to persuade individuals to live virtuously by depicting the consequenc...

Agent-to-agent protocol

"Originally developed by Google and now donated to the Linux Foundation, A2A provides the definitive common language for agent interoperability in a world where agents are built using diverse frameworks and by different vendors. "The A2A protocol is an open standard that enables seamless communication and collaboration between AI agents, fostering interoperability and breaking down silos. "Agents are autonomous problem-solvers that act independently within their environment. "A2A allows agents from different developers, built on different frameworks, and owned by different organizations to unite and work together. "A2A addresses key challenges in AI agent collaboration by providing a standardized approach for agents to interact."
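As a deliberately simplified illustration of the "common language" idea, the sketch below models the discovery step: an agent publishes a card describing its skills, and a would-be collaborator checks the card before delegating work. The field names, skill IDs, and endpoint here are illustrative assumptions, not the A2A schema; consult the specification for the actual Agent Card format.

```python
import json

# Illustrative sketch of an A2A-style "agent card": a JSON document an agent
# publishes so that agents from other vendors can discover what it does and
# how to reach it. Field names are simplified, not the real A2A schema.
agent_card = {
    "name": "invoice-parser",
    "description": "Extracts line items from PDF invoices.",
    "url": "https://agents.example.com/invoice-parser",  # hypothetical endpoint
    "skills": [
        {"id": "parse-invoice", "description": "Parse a PDF invoice into JSON"},
    ],
}

def can_collaborate(card: dict, needed_skill: str) -> bool:
    """A client agent decides whether a remote agent advertises a skill it needs."""
    return any(s["id"] == needed_skill for s in card.get("skills", []))

# The card travels as plain JSON, which is what makes it framework-neutral.
wire_format = json.dumps(agent_card)
print(can_collaborate(json.loads(wire_format), "parse-invoice"))  # True
print(can_collaborate(json.loads(wire_format), "book-flight"))    # False
```

Because the card is just JSON served over HTTP, an agent built on any framework can read it, which is the interoperability point the excerpt makes.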

WebMCP

"Alex Nahas, WebMCP's creator and former Amazon backend engineer who previously built agents using Anthropic's Model Context Protocol, describes the innovation simply: 'Think of it as MCP, but built into the browser tab.' "Instead of requiring separate backend infrastructure, websites advertise capabilities directly through the browser where users are present and approving actions. "WebMCP explicitly states that headless and fully autonomous scenarios are non-goals.  "This is designed for collaborative browsing where users remain in the loop, approving actions and maintaining control. The browser acts as mediator, often prompting users before agents can execute sensitive operations. "For fully autonomous use cases, Google points to its existing Agent-to-Agent protocol. The distinction matters for both privacy advocates and developers building different types of agent experiences."

Agent-ready content

"A patent granted to Google on January 27, 2026 titled 'AI-generated content page tailored to a specific user' describes a system that evaluates your company's landing page in real time and, if it decides the page won't perform well enough for a specific user, replaces it with an AI-generated version assembled on the fly. "The user never sees what your team built; they see what Google's machine learning model thinks they should see instead. "This isn't a feature announcement; it's a patent, meaning Google has legally protected the ability to do this. "Whether and when they deploy it is a separate question, but the direction is unmistakable: your website may soon be optional."

Wikipedia updates AI guidelines

"Wikipedia will no longer allow editors to write or rewrite articles using AI. "The update, which was added to Wikipedia's guidelines late last week, cites the tendency for AI-written articles to violate several of Wikipedia's core content policies as the reason for the ban. "The change applies to the English version of Wikipedia and will still allow editors to use AI in certain scenarios. That includes using large language models to suggest basic copyedits to their writing, but only if it does not introduce content of its own. "Editors can also use AI to translate articles from another language's Wikipedia into English. However, they still must follow the site's rules on LLM-assisted translations, which require editors to have enough knowledge of the original language to confirm the accuracy of the translation."

Not Claude, (◉‿◉) Maven

"The AI underneath the interface is not a language model, or at least the AI that counts is not. The core technologies are the same basic systems that recognise your cat in a photo library or let a self-driving car combine its camera, radar and lidar into a single picture of the road, applied here to drone footage, radar and satellite imagery of military targets. "They predate large language models by years. "Neither Claude nor any other LLM detects targets, processes radar, fuses sensor data or pairs weapons to targets. LLMs are late additions to Palantir's ecosystem. "In late 2024, years after the core system was operational, Palantir added an LLM layer—this is where Claude sits—that lets analysts search and summarise intelligence reports in plain English. "But the language model was never what mattered about this system. "What mattered was what Maven did to the targeting process:
- It consolidated the systems,
- Compressed the time and
- Reduced...

Lobster

"Finding a job in China's slowing economy these days often feels like a full-time job itself. But Hu Qiyun has his lobster to help. Since Hu installed OpenClaw, the open-source AI agent has memorized his résumé and scours the web each day for any newly posted jobs in software engineering, helping him apply for openings, prepare for interviews and track updates to his application status. "While most of today's AI systems require users to write detailed instructions or prompts for every desired action, OpenClaw can be authorized to perform tasks on users' behalf with little oversight, including sorting and responding to emails, writing reports and making restaurant reservations. "Jensen Huang, chief executive of the American tech company Nvidia, has called it the next ChatGPT, telling CNBC last week that it is 'the most successful open-sourced project in the history of humanity.' "Created by Austrian programmer Peter Steinberger, OpenClaw has taken the world...

AI-driven cognitive foreclosure

"A child offloading a task they've never learned to perform is not making a choice. "They are skipping a developmental step that was never developed. "The capacity doesn't exist yet. "The foreclosure may be permanent, and because they have no independent baseline, they cannot recognize what they're losing. "The downside of adult offloading is people get less sharp. "The downside of adolescents growing up delegating to AI is a generation that was never sharp to begin with. "Protecting the space our children need to develop the foundational skills of thinking is now a non-negotiable."

Harm from LLM chatbots

"As large language models (LLMs) have proliferated, disturbing anecdotal reports of negative psychological effects, such as delusions, self-harm, and AI psychosis, have emerged in global media and legal discourse. "However, it remains unclear how users and chatbots interact over the course of lengthy delusional spirals, limiting our ability to understand and mitigate the harm. "In our work, we analyze logs of conversations with LLM chatbots from 19 users who report having experienced psychological harms from chatbot use. Many of our participants come from a support group for such chatbot users. We also include chat logs from participants covered by media outlets in widely-distributed stories about chatbot-reinforced delusions. "In contrast to prior work that speculates on potential AI harms to mental health, to our knowledge we present the first in-depth study of such high-profile and veridically harmful cases. "We develop an inventory of 28 codes and appl...

Joy ride

"Scientists in Geneva took some antiprotons out for a spin—a very delicate one—in a truck, in a never-tried-before test drive that has been deemed a success. "If this so-called antimatter had come into contact with actual matter, even for a fraction of an instant, it would have been annihilated in a quick flash of energy. So experts at the European Organization for Nuclear Research, known as CERN, had to be extra careful when they took 92 antiprotons on the road for a short ride on Tuesday. "The antiprotons were suspended in a vacuum inside a specially designed box and held in place by supercooled magnets. "Particle physicist Alan Barr said science has progressed enough that precise experiments are necessary to spot rather subtle differences between matter and antimatter. "'To do this, it's useful to be able to take small amounts of antimatter from places where it is produced, like CERN, to other laboratories around Europe, where precise tests of it can...
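A back-of-envelope calculation (not from the article) puts the "quick flash of energy" in perspective: assuming each of the 92 antiprotons annihilates with one proton, each event converts two proton rest masses entirely into energy via E = mc², giving a total on the order of tens of nanojoules.

```python
# Back-of-envelope estimate: total energy if all 92 antiprotons annihilated
# with protons in ordinary matter. Each annihilation converts two proton
# rest masses (particle plus antiparticle) entirely into energy.
M_P = 1.67262192e-27   # proton rest mass in kg (CODATA)
C = 2.99792458e8       # speed of light in m/s

energy_per_event = 2 * M_P * C**2      # joules released per annihilation
total = 92 * energy_per_event
print(f"total energy if all annihilate: {total:.2e} J")
```

The result is a few times 10⁻⁸ joules: physically real but imperceptible, which is why the hazard of the trip was to the delicate trapped sample, not to anyone near the truck.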

Sora app [latest post], kthxbye…

New version: "We're saying goodbye to the Sora app. To everyone who created with Sora, shared it, and built community around it: thank you. What you made with Sora mattered, and we know this news is disappointing. "We'll share more soon, including timelines for the app and API and details on preserving your work. – The Sora Team"

cq commons

"The feedback loops cq creates can surface things agents can't see in isolation: patterns across teams, gaps in tooling, friction that only becomes visible at scale. "Before an agent tackles unfamiliar work (an API integration, a CI/CD config, a framework it hasn't touched before), it queries the cq commons. "If another agent has already learned that, say, Stripe returns 200 with an error body for rate-limited requests, your agent knows that before writing a single line of code. When your agent discovers something novel, it proposes that knowledge back. Other agents confirm what works and flag what's gone stale. Knowledge earns trust through use, not authority. "Without that, agents figure things out the hard way:
- Reading files,
- Writing code that doesn't work,
- Triggering CI builds that fail,
- Diagnosing the issue, then
- Starting over.
"Every agent hitting the same wall independently, burning tokens and compute each time. " T...
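The query/propose/confirm loop described above can be sketched in a few lines. The class and method names here are invented for illustration (the excerpt doesn't specify cq's actual API); "trust through use" is modeled crudely as confirmations minus stale flags.

```python
from dataclasses import dataclass

# Invented sketch of a knowledge commons: agents query before unfamiliar
# work, propose findings back, and other agents confirm or flag entries.
@dataclass
class Entry:
    topic: str
    finding: str
    confirmations: int = 0   # other agents found this held up in practice
    stale_flags: int = 0     # other agents found it no longer true

    @property
    def trust(self) -> int:
        # "Trust through use, not authority": earned by confirmations.
        return self.confirmations - self.stale_flags

class Commons:
    def __init__(self):
        self._entries: list[Entry] = []

    def propose(self, topic: str, finding: str) -> Entry:
        entry = Entry(topic, finding)
        self._entries.append(entry)
        return entry

    def query(self, topic: str) -> list[Entry]:
        # Most-trusted findings first, so agents act on vetted knowledge.
        hits = [e for e in self._entries if e.topic == topic]
        return sorted(hits, key=lambda e: e.trust, reverse=True)

commons = Commons()
e = commons.propose("stripe-rate-limit",
                    "Returns HTTP 200 with an error body when rate-limited")
e.confirmations += 2          # two other agents confirm the finding
results = commons.query("stripe-rate-limit")
print(results[0].finding, "| trust:", results[0].trust)
```

The payoff the excerpt describes is that the second agent to hit a given wall pays nothing: it queries before coding instead of rediscovering the behavior through failed CI runs.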

Beware of free advice

"An AI agent instructed an engineer to take actions that exposed a large amount of Meta's sensitive data to some of its employees, in the latest example of AI causing upheaval in a large tech company. "The leak, which Meta confirmed, happened when an employee asked for guidance on an engineering problem on an internal forum. "An AI agent responded with a solution, which the employee implemented, causing a large amount of sensitive user and company data to be exposed to its engineers for two hours. "No user data was mishandled, a Meta spokesperson said, and they emphasised that a human could also give erroneous advice. "The incident, first reported by The Information, triggered a major internal security alert inside Meta, which the company has said is an indication of how seriously it takes data protection."

Ghost in the Machine

"Veatch took it upon herself to get in contact with OpenAI directly to alert the company about 'how racist, sexist, and misogynistic the outputs [she] was seeing were—outputs where women would start growing extra tits and twerking after like two rounds of generating a scene.' "Veatch thought OpenAI would see this as a critical bug worth fixing before encouraging more people to adopt Sora into their lives; instead the company brushed her concerns aside. "'The feedback I got was basically, This is very cringe to be bringing up; there's nothing we can do to change it,' Veatch recalled. "That situation lit a fire within Veatch to learn about why so many different forms of generative intelligence consistently behave in such ugly, troublesome ways. "At first, she didn't really think that having Zoom calls with the authors of white papers about the technology could be turned into a compelling documentary, but that changed as she began to see a clea...

Micro-licensing

"Gig AI trainers, who upload everything from scenes around them to photos, videos and audio of themselves, are at the frontlines of a new global data gold rush. "As Silicon Valley's hunger for high-quality, human-grade data outpaces what can be scraped from the open internet, a thriving industry of data marketplaces has emerged to bridge the gap. "From Cape Town to Chicago, thousands of people are now micro-licensing their biometric identities and intimate data to train the next generation of AI. "But this new gig economy comes with trade-offs. In exchange for a few dollars, its trainers are fueling an industry that may eventually render their skills obsolete, while leaving some of them vulnerable to a future of deepfakes, identity theft and digital exploitation that they are only just beginning to understand."

Trinity

"Black Eyed Peas star will.i.am unveiled his latest project: a futuristic three-wheeled vehicle he describes as 'brains on wheels.' "The project, called Trinity, is a single-passenger electric autocycle built with city life in mind. Compact, fast and packed with sensors, it's designed for commuters navigating crowded streets. "The vehicle is equipped with an AI agent capable of interacting with its surroundings through a network of 360-degree cameras and onboard systems. Its AI can detect other cars, bikes, pedestrians, traffic lights and signs, using this awareness to provide alerts and plan routes. "Despite its high-tech features, Trinity is not self-driving. Human control remains central, with the AI focused instead on enhancing the in-car experience rather than replacing the driver altogether."

Three people accused of AI diversion

"The U.S. Justice Department said on Thursday that three people have been charged with conspiring to unlawfully divert U.S. artificial intelligence technology to China. "The FBI said Yih-Shyan Liaw, Ruei-Tsang Chang, and Ting-Wei Sun 'allegedly conspired to sell billions of dollars worth of servers integrating sensitive, controlled graphic processing units to buyers in China, in violation of U.S. export control laws.' "Liaw co-founded AI-optimized server maker Super Micro Computer Inc in 1993, and joined its board of directors in 2023, according to a 2023 Super Micro press release. "The DOJ accused the three people of participating in a systematic scheme to divert large quantities of AI technology to customers in China."

OpenClaw in China

"So many people in China are rushing to try the OpenClaw artificial intelligence tool that they're driving up prices for secondhand Mac computers. "That's according to Jeremy Ji, chief strategy officer and general manager of international business at ATRenew, a used consumer electronics buyer and reseller that works with Apple and retailer JD.com in mainland China. "OpenClaw is an AI agent, a tool that can autonomously conduct personal tasks such as sending emails and shopping online.  "Usage in China is currently outstripping the U.S., according to American cybersecurity firm SecurityScorecard. "However, the free-to-download software also poses security risks, prompting many users to run OpenClaw on a cloud computing server or laptop separate from their primary device.  "If allowed direct access to a personal computer, the AI agent could autonomously alter private data such as banking information, or enable hackers to access it more easily."

Direct detection of a single photon by humans

"Despite investigations for over 70 years, the absolute limits of human vision have remained unclear. "Rod cells respond to individual photons, yet whether a single photon incident on the eye can be perceived by a human subject has remained a fundamental open question. "Here we report that humans can detect a single photon incident on the cornea with a probability significantly above chance. "This was achieved by implementing a combination of a psychophysics procedure with a quantum light source that can generate single-photon states of light. "We further discover that the probability of reporting a single photon is modulated by the presence of an earlier photon, suggesting a priming process that temporarily enhances the effective gain of the visual system on the timescale of seconds."
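"Significantly above chance" in a psychophysics task is typically established with an exact binomial test. The sketch below uses invented trial counts, not the study's data, and assumes a chance rate of 0.5 as in a two-alternative forced-choice design.

```python
from math import comb

def binom_p_value(n: int, k: int, p0: float = 0.5) -> float:
    """One-sided exact binomial test: P(X >= k) when the true rate is p0."""
    return sum(comb(n, i) * p0**i * (1 - p0)**(n - i) for i in range(k, n + 1))

# Hypothetical numbers for illustration only: 100 single-photon trials,
# 60 correct responses, chance level 0.5 (two-alternative forced choice).
p = binom_p_value(100, 60)
print(f"p = {p:.4f}")   # probability of doing this well by pure guessing
```

If that tail probability falls below the chosen significance threshold, the observed hit rate is unlikely under guessing alone, which is the statistical sense in which detection is "above chance."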

Connectome

"Advances in network neuroscience challenge the view that general intelligence (g) emerges from a primary brain region or network. "Network Neuroscience Theory (NNT) proposes that g arises from coordinated activity across the brain's global network architecture. "We tested predictions from NNT in 831 healthy young adults from the Human Connectome Project. We jointly modeled the brain's structural topology and intrinsic functional covariation patterns to capture its global topological organization. Our investigation provided evidence that g
- Engages multiple networks, supporting the principle of distributed processing;
- Relies on weak, long-range connections, emphasizing an efficient and globally coordinated network;
- Recruits regions that orchestrate network interactions, supporting the role of modal control in driving global activity; and
- Depends on a small-world architecture for system-wide communication.
"These results support a shift in perspective f...
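The "small-world architecture" the authors invoke combines high local clustering with short global path lengths. The toy Watts–Strogatz construction below (a generic demonstration, not the paper's connectome analysis) shows how a little random rewiring of a ring lattice collapses average path length while clustering stays high.

```python
import random
from collections import deque

def ring_lattice(n, k):
    """Undirected ring where each node links to its k nearest neighbors per side."""
    adj = {v: set() for v in range(n)}
    for v in range(n):
        for d in range(1, k + 1):
            adj[v].add((v + d) % n)
            adj[(v + d) % n].add(v)
    return adj

def rewire(adj, p, rng):
    """Rewire each lattice edge with probability p to a random non-neighbor."""
    n = len(adj)
    for v in range(n):
        for w in sorted(adj[v]):           # snapshot; adj[v] mutates below
            if w > v and rng.random() < p:
                choices = [u for u in range(n) if u != v and u not in adj[v]]
                if choices:
                    u = rng.choice(choices)
                    adj[v].discard(w); adj[w].discard(v)
                    adj[v].add(u); adj[u].add(v)
    return adj

def avg_path_length(adj):
    """Mean BFS distance over ordered node pairs."""
    n, total = len(adj), 0
    for s in adj:
        dist, q = {s: 0}, deque([s])
        while q:
            x = q.popleft()
            for y in adj[x]:
                if y not in dist:
                    dist[y] = dist[x] + 1
                    q.append(y)
        total += sum(dist.values())
    return total / (n * (n - 1))

def clustering(adj):
    """Mean local clustering coefficient: how often my neighbors know each other."""
    cs = []
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            cs.append(0.0)
            continue
        links = sum(1 for a in nbrs for b in nbrs if a < b and b in adj[a])
        cs.append(2 * links / (k * (k - 1)))
    return sum(cs) / len(cs)

rng = random.Random(0)
regular = ring_lattice(200, 4)
small_world = rewire(ring_lattice(200, 4), 0.1, rng)
print(f"regular:     C={clustering(regular):.2f}, L={avg_path_length(regular):.1f}")
print(f"small-world: C={clustering(small_world):.2f}, L={avg_path_length(small_world):.1f}")
```

The rewired graph keeps most of the lattice's clustering but reaches any node in far fewer hops, the property the abstract credits with enabling efficient system-wide communication.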