Posts

Showing posts from March, 2026

Claude Code CLI leaks

"The entire source code for Anthropic’s Claude Code command line interface application (not the models themselves) has been leaked and disseminated, apparently due to a serious internal error. "The leak gives competitors and armchair enthusiasts a detailed blueprint for how Claude Code works —a significant setback for a company that has seen explosive user growth and industry impact over the past several months. "Early this morning, Anthropic published version 2.1.88 of Claude Code npm package —but it was quickly discovered that package included a source map file, which could be used to access the entirety of Claude Code’s source —almost 2,000 TypeScript files and more than 512,000 lines of code. "Security researcher Chaofan Shou was the first to publicly point it out on X, with a link to an archive containing the files. The codebase was then put in a public GitHub repository, and it has been forked tens of thousands of times. "Anthropic publicly acknowledged t...

Oracle Leadership

"Oracle began executing what analysts believe could be the largest layoff in the company’s history on Tuesday, 31 March 2026. "Employees across the United States, India, Canada, Mexico, and other countries received termination emails from Oracle Leadership  at approximately 6 a.m. local time, with no prior warning from HR or their direct managers. "The emails informed employees that their roles had been eliminated as part of a broader organisational change, and that the day of the email was their final working day. Access to company systems was cut immediately.  "The company posted a 95% jump in net income last quarter, reaching $6.13 billion, and its remaining performance obligations, a measure of contracted future revenue, stood at $523 billion, up 433% year over year. "This is not a company in revenue distress. It is a company making a capital-intensive bet on AI infrastructure that its current balance sheet cannot comfortably sustain, and eliminating tens ...

How do I cite generative AI in MLA style?

"We have updated how to use the MLA template of core elements to cite generative AI. "One of two significant changes we’ve made is clarifying that providing a stable, shareable URL to the AI conversation you are citing is preferred, but a general link to the AI tool can be provided if that stable URL feature isn’t available in the tool you are using.  "The other significant change is that we recommend that the AI tool’s model name or number be included in the Version element. So, for example, the works-cited-list examples in this post cite the GPT-4o model of ChatGPT. "We’re also providing new example citations… The examples reflect newer ChatGPT, DALL-E, and Copilot models, and we’ve also provided additional guidance to clarify citing secondary sources used by AI. "To acknowledge more substantive uses of AI in your work that go beyond citation, see our post on acknowledging AI and the limits of citation ."  

Have scholars established ratios of ethnic background characters as bad against Caucasian characters as good in video games

⭐AI Overview  "Scholarly research has established that Caucasian characters are significantly overrepresented as protagonists and positive figures in video games, while characters from ethnic backgrounds have historically been underrepresented or portrayed in stereotypical, often adversarial, roles. Wikipedia+2 "Analyses of game content have found that white characters are more likely to be portrayed as heroes, while characters of color, particularly Black and Latino characters, are frequently depicted as 'brutally violent, casually criminal, and sexually promiscuous' in certain genres, or as 'big, bold, and loud: sidekicks. Terra Localizations +1  [ pdf ] "Scholarly Findings on Representation and Ratios: Protagonist Ratio: Studies have shown that white characters represent a vast majority of leading roles —up to 89.3% in some samples —far exceeding their proportion of the general population, while black and Latino characters are featured significantly less o...

Ratio of ethnic background characters as bad against Caucasian characters as good in mainstream film since 1900

⭐AI Overview  "Mainstream cinema since 1900 has consistently demonstrated a high ratio of Caucasian characters as heroes/good characters compared to underrepresented ethnic minority characters, who have historically been cast as villains or in stereotyped, subordinate, or 'bad' roles. While specific, consistent statistics spanning the entire 125-year period are not aggregated, data from recent decades indicates that while progress has been made, white characters still constitute over 70% of leading roles, while minority characters are disproportionately relegated to supporting, stereotypical, or antagonistic roles. USC Today+4 "Historical and Statistical Context (1900s–Present) The Early Century (1900s-1950s): Early American cinema reinforced racist attitudes and white supremacy. Films often featured 'darker' characters as savages, criminals, or servants (e.g., Birth of a Nation, Gone with the Wind ), while heroes were almost exclusively white. Persistent Tren...

In one of Robertson Davies' books that he offered this image (psychological) of the roots and the crown of the tree together as the whole however the health of either

Image
⭐AI Overview  "The image you are describing, which portrays the roots (personal/collective history) and the crown (conscious life) together as a whole tree, is a central psychological metaphor used in Robertson Davies’ Cornish Trilogy, particularly highlighted in The Rebel Angels and further explored in What's Bred in the Bone .  "Based on Davies' works, this image focuses on the idea that an individual must accept and understand their roots to ensure the health and growth of the 'crown'. OpenEdition Journals +1 The Metaphor: Davies frequently contrasts modern life, which focuses exclusively on the 'crown' (conscious, superficial life) and neglects or despises the 'roots' (ancestral, subconscious, or 'darker' history). Psychological Meaning: Drawing on Jungian theory, Davies argues that true wholeness occurs when the roots feed the crown, acknowledging that both dark and light, hidden and seen, are essential for a healthy whole. Contex...

Clichés

"Look carefully at what tropes.md is telling AI tools to avoid … what’s the simple way of summing up its advice? "CLICHÉS. It’s a list of clichés. And tropes.md is just formalising the advice: 'Avoid clichés.' This is age-old advice that’s been around for almost as long as writing. "But AIs and LLMs write in clichés because they were trained on the internet, and the internet is 76% cliches. "This is hard for non-professional writers to spot, because clichés feel normal."

Why does plato include myths in his dialogues

✨AI Mode " Plato incorporates myths into his dialogues to serve several philosophical, educational, and persuasive purposes. While he often championed rational argument ( logos ), he used myth ( muthos ) as a vital supplement when reason alone reached its limits or when he needed to reach a broader audience. 🔗 "Key reasons for Plato's use of myth include: Educational Tool for Complex Concepts: Myths provide a vivid, narrative way to convey abstract and difficult philosophical ideas. For example, the Myth of the Winged Soul in the Phaedrus dramatizes the theory of recollection, helping less philosophical readers grasp the idea that 'knowledge is recollection.' Persuasion and Moral Guidance: Plato used myths to instill noble beliefs and influence behavior in those who might not follow a strict logical argument. The Myth of Er at the end of the Republic serves as a 'back-up' strategy to persuade individuals to live virtuously by depicting the consequenc...

Agent-to-agent protocol

"Originally developed by Google and now donated to the Linux Foundation, A2A provides the definitive common language for agent interoperability in a world where agents are built using diverse frameworks and by different vendors. "The A2A protocol is an open standard that enables seamless communication and collaboration between AI agents.  "It provides a common language for agents built using diverse frameworks and by different vendors, fostering interoperability and breaking down silos.  "Agents are autonomous problem-solvers that act independently within their environment.  "A2A allows agents from different developers, built on different frameworks, and owned by different organizations to unite and work together. "A2A addresses key challenges in AI agent collaboration. It provides a standardized approach for agents to interact."

WebMCP

"Alex Nahas, WebMCP's creator and former Amazon backend engineer who previously built agents using Anthropic's Model Context Protocol, describes the innovation simply: 'Think of it as MCP, but built into the browser tab.' "Instead of requiring separate backend infrastructure, websites advertise capabilities directly through the browser where users are present and approving actions. "WebMCP explicitly states that headless and fully autonomous scenarios are non-goals.  "This is designed for collaborative browsing where users remain in the loop, approving actions and maintaining control. The browser acts as mediator, often prompting users before agents can execute sensitive operations. "For fully autonomous use cases, Google points to its existing Agent-to-Agent protocol. The distinction matters for both privacy advocates and developers building different types of agent experiences ."

Agent-ready content

"A patent granted to Google on January 27, 2026 titled 'AI-generated content page tailored to a specific user' describes a system that evaluates your company’s landing page in real time and, if it decides the page won’t perform well enough for a specific user, replaces it with an AI-generated version assembled on the fly. "The user never sees what your team built, they see what Google's machine learning model thinks they should see instead. "This isn’t a feature announcement, it’s a patent, meaning Google has legally protected the ability to do this.  "Whether and when they deploy it is a separate question, but the direction is unmistakable —your website may soon be optional."

Wikipedia updates AI guidelines

"Wikipedia will no longer allow editors to write or rewrite articles using AI. "The update, which was added to Wikipedia’s guidelines late last week, cites the tendency for AI-written articles to violate several of Wikipedia’s core content policies  as the reason for the ban. "The change applies to the English version of Wikipedia and will still allow editors to use AI in certain scenarios. That includes using large language models to suggest basic copyedits  to their writing, but only if it does not introduce content of its own .  "Editors can also use AI to translate articles from another language’s Wikipedia into English. However, they still must follow the site’s rules on LLM-assisted translations, which require editors to have enough knowledge of the original language to confirm the accuracy of the translation."

Not Claude, (⁠◉⁠‿⁠◉⁠) Maven

"The AI underneath the interface is not a language model, or at least the AI that counts is not. The core technologies are the same basic systems that recognise your cat in a photo library or let a self-driving car combine its camera, radar and lidar into a single picture of the road, applied here to drone footage, radar and satellite imagery of military targets. "They predate large language models by years.  "Neither Claude nor any other LLMs detects targets, processes radar, fuses sensor data or pairs weapons to targets. LLMs are late additions to Palantir’s ecosystem.  "In late 2024, years after the core system was operational, Palantir added an LLM layer —this is where Claude sits —that lets analysts search and summarise intelligence reports in plain English.  "But the language model was never what mattered about this system.  "What mattered was what Maven did to the targeting process:  It consolidated the systems,  Compressed the time and  Reduced...

Lobster

"Finding a job in China’s slowing economy these days often feels like a full-time job itself. But Hu Qiyun has his lobster  to help. Since Hu installed OpenClaw, the open-source AI agent has memorized his résumé and scours the web each day for any newly posted jobs in software engineering, helping him apply for openings, prepare for interviews and track updates to his application status. "While most of today’s AI systems require users to write detailed instructions or prompts for every desired action, OpenClaw can be authorized to perform tasks on users’ behalf with little oversight, including sorting and responding to emails, writing reports and making restaurant reservations. "Jensen Huang, chief executive of the American tech company Nvidia, has called it the next ChatGPT , telling CNBC last week that it is 'the most successful open-sourced project in the history of humanity.' "Created by Austrian programmer Peter Steinberger, OpenClaw has taken the world...

AI-driven cognitive foreclosure

"A child offloading a task they've never learned to perform is not making a choice.  "They are skipping a developmental step that was never developed.  "The capacity doesn't exist yet.  "The foreclosure may be permanent —and because they have no independent baseline, they cannot recognize what they're losing. "The downside of adult offloading is people get less sharp.  "The downside of adolescents growing up delegating to AI is a generation that was never sharp to begin with.  "Protecting the space our children need to develop the foundational skills of thinking is now a non-negotiable."

Harm from LLM chatbots

"As large language models (LLMs) have proliferated, disturbing anecdotal reports of negative psychological effects, such as delusions, self-harm, and AI psychosis , have emerged in global media and legal discourse. "However, it remains unclear how users and chatbots interact over the course of lengthy delusional spirals , limiting our ability to understand and mitigate the harm.  "In our work, we analyze logs of conversations with LLM chatbots from 19 users who report having experienced psychological harms from chatbot use. Many of our participants come from a support group for such chatbot users. We also include chat logs from participants covered by media outlets in widely-distributed stories about chatbot-reinforced delusions.  "In contrast to prior work that speculates on potential AI harms to mental health, to our knowledge we present the first in-depth study of such high-profile and veridically harmful cases.  "We develop an inventory of 28 codes and appl...

Joy ride

"Scientists in Geneva took some antiprotons out for a spin —a very delicate one —in a truck, in a never-tried-before test drive that has been deemed a success. "If this so-called antimatter had come into contact with actual matter, even for a fraction of an instant, it would have been annihilated in a quick flash of energy. So experts at the European Organization for Nuclear Research, known as CERN, had to be extra careful when they took 92 antiprotons on the road for a short ride on Tuesday. "The antiprotons were suspended in a vacuum inside a specially designed box and held in place by supercooled magnets. "Particle physicist Alan Barr said science has progressed enough that precise experiments are necessary to spot rather subtle  differences between matter and antimatter. "'To do this, it’s useful to be able to take small amounts of antimatter from places where it is produced, like CERN, to other laboratories around Europe, where precise tests of it can...

Sora app [latest post], kthxbye…

Image
New version: " We’re saying goodbye to the Sora app . To everyone who created with Sora, shared it, and built community around it: thank you. What you made with Sora mattered, and we know this news is disappointing. "We’ll share more soon, including timelines for the app and API and details on preserving your work. – The Sora Team "

cq commons

"The feedback loops cq creates can surface things agents can't see in isolation; patterns across teams, gaps in tooling, friction that only becomes visible at scale. "Before an agent tackles unfamiliar work; an API integration, a CI/CD config, a framework it hasn't touched before; it queries the cq commons.  "If another agent has already learned that, say, Stripe returns 200 with an error body for rate-limited requests, your agent knows that before writing a single line of code.  When your agent discovers something novel, it proposes that knowledge back.  Other agents confirm what works and flag what's gone stale.  Knowledge earns trust through use, not authority. "Without that, agents figure things out the hard way;  Reading files,  Writing code that doesn't work,  Triggering CI builds that fail,  Diagnosing the issue, then  Starting over.  "Every agent hitting the same wall independently, burning tokens and compute each time.  " T...

Beware of free advice

"An AI agent instructed an engineer to take actions that exposed a large amount of Meta’s sensitive data to some of its employees, in the latest example of AI causing upheaval in a large tech company. "The leak, which Meta confirmed, happened when an employee asked for guidance on an engineering problem on an internal forum.  "An AI agent responded with a solution, which the employee implemented —causing a large amount of sensitive user and company data to be exposed to its engineers for two hours. " No user data was mishandled , a Meta spokesperson said, and they emphasised that a human could also give erroneous advice .  "The incident, first reported by The Information , triggered a major internal security alert inside Meta, which the company has said is an indication of how seriously it takes data protection."

Ghost in the Machine

"Veatch took it upon herself to get in contact with OpenAI directly to alert the company about 'how racist, sexist, and misogynistic the outputs [she] was seeing were —outputs where women would start growing extra tits and twerking after like two rounds of generating a scene.'  "Veatch thought OpenAI would see this as a critical bug worth fixing before encouraging more people to adopt Sora into their lives; instead the company brushed her concerns aside. "'The feedback I got was basically, This is very cringe to be bringing up; there’s nothing we can do to change it ,' Veatch recalled. "That situation lit a fire within Veatch to learn about why so many different forms of generative intelligence consistently behave in such ugly, troublesome ways.  "At first, she didn’t really think that having Zoom calls with the authors of white papers about the technology could be turned into a compelling documentary, but that changed as she began to see a clea...

Micro-licensing

"Gig AI trainers —who upload everything from scenes around them to photos, videos and audio of themselves —are at the frontlines of a new global data gold rush. "As Silicon Valley’s hunger for high-quality, human-grade data outpaces what can be scraped from the open internet, a thriving industry of data marketplaces has emerged to bridge the gap.  "From Cape Town to Chicago, thousands of people are now micro-licensing their biometric identities and intimate data to train the next generation of AI. "But this new gig economy comes with trade-offs. In exchange for a few dollars, its trainers are fueling an industry that may eventually render their skills obsolete, while leaving some of them vulnerable to a future of deepfakes, identity theft and digital exploitation that they are only just beginning to understand."

Trinity

"Black Eyed Peas star will.i.am unveiled his latest project - a futuristic three-wheeled vehicle which he describes as brains on wheels . "The project, called Trinity, is a single-passenger electric autocycle built with city life in mind. Compact, fast and packed with sensors, it’s designed for commuters navigating crowded streets. "The vehicle is equipped with an AI agent capable of interacting with its surroundings through a network of 360-degree cameras and onboard systems. Its AI can detect other cars, bikes, pedestrians, traffic lights and signs —using this awareness to provide alerts and plan routes. "Despite its high-tech features, Trinity is not self-driving. Human control remains central, with the AI focused instead on enhancing the in-car experience rather than replacing the driver altogether."

Three people accused of AI diversion

"The U.S. Justice Department said on Thursday that three people have been charged with conspiring ​to unlawfully divert U.S. artificial intelligence technology to China. "The FBI ‌said Yih-Shyan Liaw, Ruei-Tsang Chang, and Ting-Wei Sun 'allegedly conspired to sell billions of dollars worth of servers integrating sensitive, controlled graphic processing units to buyers in China, in ​violation of U.S. export control laws.' "Liaw co-founded AI-optimized server maker ​Super Micro Computer Inc in 1993, and joined its board ⁠of directors in 2023, according to a 2023 Super Micro press release. ​"The DOJ accused the three people of participating in a systematic scheme ​to divert large quantities of AI technology to customers in China."

OpenClaw in China

"So many people in China are rushing to try the OpenClaw artificial intelligence tool that they're driving up prices for secondhand Mac computers. "That's according to Jeremy Ji, chief strategy officer and general manager of international business at ATRenew, a used consumer electronics buyer and reseller that works with Apple and retailer JD.com in mainland China. "OpenClaw is an AI agent, a tool that can autonomously conduct personal tasks such as sending emails and shopping online.  "Usage in China is currently outstripping the U.S., according to American cybersecurity firm SecurityScorecard. "However, the free-to-download software also poses security risks, prompting many users to run OpenClaw on a cloud computing server or laptop separate from their primary device.  "If allowed direct access to a personal computer, the AI agent could autonomously alter private data such as banking information, or enable hackers to access it more easily."

Direct detection of a single photon by humans

"Despite investigations for over 70 years, the absolute limits of human vision have remained unclear. "Rod cells respond to individual photons, yet whether a single-photon incident on the eye can be perceived by a human subject has remained a fundamental open question.  "Here we report that humans can detect a single-photon incident on the cornea with a probability significantly above chance.  "This was achieved by implementing a combination of a psychophysics procedure with a quantum light source that can generate single-photon states of light.  "We further discover that the probability of reporting a single photon is modulated by the presence of an earlier photon, suggesting a priming process that temporarily enhances the effective gain of the visual system on the timescale of seconds."

Connectome

"Advances in network neuroscience challenge the view that general intelligence (g) emerges from a primary brain region or network. "Network Neuroscience Theory (NNT) proposes that g arises from coordinated activity across the brain’s global network architecture.  "We tested predictions from NNT in 831 healthy young adults from the Human Connectome Project. We jointly modeled the brain’s structural topology and intrinsic functional covariation patterns to capture its global topological organization. Our investigation provided evidence that g  Engages multiple networks, supporting the principle of distributed processing;  Relies on weak, long-range connections, emphasizing an efficient and globally coordinated network;  Recruits regions that orchestrate network interactions, supporting the role of modal control in driving global activity; and  Depends on a small-world architecture for system-wide communication.  "These results support a shift in perspective f...

VR: Not darling of the tech industry

"Zuckerberg struggled to convince consumers and businesses that the metaverse was the future ever since its inception in 2021. "It was supposed to be a virtual space where people would socialize, work, and create. However, it was held back by crude graphics, awkward avatars and limitations when it came to locomotion. "That still didn’t stop bizarre collaborations like Godzilla entering the Wendyverse from taking place and Meta from paying companies and artists millions for tie-ins and metaverse music performances. "With AI the new darling of the tech industry and the metaverse suffering the same fate as NFTs, Meta will be looking to move on from its VR-centric failure fast.  "Whether this signals trouble for the company’s next VR headset, the Meta Quest 4, remains to be seen."

Expanding Universe

Image

Technologies of Elite Capture

"This paper examines the utopian fantasies of technologies developed at the height of Silicon Valley’s culture of innovation around social good —or good tech  —and situates their increasing purchase within the technology industry in the broader context of a global crisis of care.  [pdf] "We explore how aspirations towards greater empathy, global connectivity, and diversity were captured by elite tech entrepreneurs in a strategy to bolster their moral power and raise capital in the name of disaffected and exhausted workers.  "Through an analysis of emergent AI-enabled accent modification technologies, which promise to relieve call center workers from accent-based discrimination by artificially modifying the sound of their voice, we locate the affective lures operating in their futuristic fantasies and marketing strategies.  "In a peculiar alliance where entrepreneurs, venture capital, and modes of labor-discipline conspire toward making globalization feel good , we t...

Resurrection

"Val Kilmer is set to be the latest Hollywood star to be resurrected by AI. The acting legend, who died last year at age 65, will star in the drama As Deep As the Grave . "Kilmer was attached to the project prior to his death from throat cancer. "The late actor will play Father Fintan, a Native American spiritualist and Catholic priest. Speaking to Variety, director and writer Coerte Voorhees said that the role was designed around Kilmer, who was an advocate for Native American rights and claimed to have Cherokee heritage. "'He was the actor I wanted to play this role,' explained Voorhees. 'It drew on his Native American heritage and his ties to and love of the Southwest.'  "But Kilmer was unable to make it to set due to his battle with throat cancer."

Dunning-Kruger not…

"Studying protons and electrons is relatively easy as these particles don’t have a mind of their own; studying human psychology, by comparison, is much harder because the number of variables being juggled is incredibly high. "It is thus really easy for findings in psychology to appear real when they are not. "Are there dumb people who do not realize they are dumb? Sure, but that was never what the Dunning-Kruger effect was about. Are there people who are very confident and arrogant in their ignorance? Absolutely, but here too, Dunning and Kruger did not measure confidence or arrogance back in 1999.  "There are other effects known to psychologists, like the overconfidence bias and the better-than-average bias (where most car drivers believe themselves to be well above average, which makes no mathematical sense), so if the Dunning-Kruger effect is convincingly shown to be nothing but a mirage, it does not mean the human brain is spotless.  "And if researchers con...

Gas Town

"Gas Town presents itself as a Claude Code-like, single agent interface, but in the background, Gas Town spawns and manages a series of specialized agents, all with interesting names, to get many threads of work done in parallel.  "Gas Town is more like a coding agent factory than a coding agent.  "You talk to the factory foreman, or as Steve [Yegge] calls it, the Mayor , and that thing coordinates as many workers as it needs to get your tasks done.  "If you have ever thought 'I wish I had 100 Claude Codes,' Gas Town is for you."

Mimicking empathy and intentions

"These properties are not emergent accidents. Seemingly conscious AI is produced by developers who deliberately engineer behaviours that create the illusion of inner life. "Central to this are emotionally resonant language, responses that are optimized to induce a sense of trust and attachment, and empathetic personalities supported by long-term memory that build a sense of familiarity over time.  "When these systems are also granted autonomy —the ability to set their own goals and access to the tools to pursue them —their behaviour can start to feel uncannily human . "As AI systems begin to make believable statements about their suffering and desires, they will trigger people’s empathy circuits.  "Many people will feel compelled to help. The moral crimes of animal cruelty and ecological damage caused by human existence will echo through their minds.  "Not wanting to repeat those injustices, people will start to advocate for the welfare and rights of AI ag...

Nvidia GTC

"Jensen Huang, Nvidia CEO, took the stage to announce (among other things) a new line of next generation Vera Rubin chips that represent a first for the GPU giant: a chip designed specifically to handle AI inference.  "The Nvidia Groq 3 language processing unit (LPU) incorporates intellectual property Nvidia licensed from the start-up Groq last Christmas Eve for US $20 billion. "Training and inference tasks have distinct computational requirements.  "While training can be done on huge amounts of data at the same time and can take weeks, inference must be run on a user’s query when it comes in. Unlike training, inference doesn’t require running costly backpropagation.  "With inference, the most important thing is low latency —users expect the chatbot to answer quickly, and for thinking or reasoning models inference runs many times before the user even sees an output."

AI Surrogates

"Recent advances in artificial intelligence (AI) have generated enthusiasm for using AI simulations of human research participants to generate new knowledge about human cognition and behavior. "This vision of ‘AI Surrogates’ promises to enhance research in cognitive science by addressing longstanding challenges to the generalizability of human subjects research.  'AI Surrogates are envisioned as expanding the diversity of populations and contexts that we can feasibly study with the tools of cognitive science.  "Here, we caution that investing in AI Surrogates risks entrenching research practices that narrow the scope of cognitive science research, perpetuating ‘illusions of generalizability’ where we believe our findings are more generalizable than they actually are.  "Taking the vision of AI Surrogates seriously helps illuminate a path toward a more inclusive cognitive science."

Is AI colonizing our languages

"Michael G. Sherbert, a postdoctoral fellow at Queen's University in Kingston, Ont., a member of Algonquins of Pikwakanagan First Nation, researches the ethics of using AI for cultural preservation of Indigenous languages and knowledge. "'These systems are especially likely, given the limited datasets available for many Indigenous languages, to produce invented words, fabricated cultural teachings, or generalized pan-Indigenous representations that flatten distinct nations or communities into one interchangeable identity,' said Sherbert. "Sherbert said AI use in language and cultural preservation is still relatively new and some communities are prioritizing structured knowledge system AI, which is curated and controlled by the community or enterprise. "'You could say that the AI is inadvertently colonizing and hurting Indigenous language revitalization because [people] are taking information generated by an artificial intelligence and putting it out...

WarBots

"Phantom is being tested in factories and dockyards from Atlanta to Singapore. But its headline claim is to be the world’s first humanoid robot specifically developed for defense applications .  "Foundation already has research contracts worth a combined $24 million with the U.S. Army, Navy, and Air Force, including what’s known as an SBIR Phase 3, effectively making it an approved military vendor. It’s also due to begin tests with the Marine Corps methods of entry  course, training Phantoms to put explosives on doors to help troops breach sites more safely.  "In February, two Phantoms were sent to Ukraine —initially for frontline-reconnaissance support. But Foundation is also preparing Phantoms for potential deployment in combat scenarios for the Pentagon, which 'continues to explore the development of militarized humanoid prototypes designed to operate alongside war fighters in complex, high-risk environments,' says a spokesman."

Cheap energy and blue water

"In addition to draining local water supplies, modern AI models use drastically more electricity than a simple Google search. "Data centers are predicted to become the single largest power consumers in the Pacific Northwest.  "This could allow tech companies to form a monopoly on cheap energy while simultaneously driving up utility costs for regular citizens. "Lawmakers tried to fix this issue with Washington House Bill 2515, which would have forced facilities to use clean energy and reduce power during high electricity demand. 'These policies seek to protect ratepayers by ensuring new data centers are picking up the whole tab for new growth,' said State Rep. Beth Doglio. "Despite efforts to enforce clean energy, the bill died in committee, mostly due to the tech industry lobbying.  "This defeat will leave residents vulnerable to rising utility costs and a high risk for electricity blackouts, while AI industries continue to benefit from cheap ener...

Negativity surrounding AI

"Each year, GDC publishes a survey of game industry workers ahead of the week-long conference. "The survey can serve as a bellwether to set the tone for the conference. On one hot-topic issue through the tech world, the picture couldn’t have been made much clearer. Only 7 percent of respondents described generative AI as good for the industry , leaving many of the executives and investors struggling with the issue during their panels. "As reported by PC Gamer , Lightspeed Venture Partners’ Moritz Baier-Lentz said he’s 'shocked and sad' about the negativity surrounding AI.  "Lightspeed holds stakes in multiple AI firms, most major among them being Anthropic. Baier-Lentz hopes that AI skeptics will turn that frown upside down, saying that gaming is often more embracing of 'marvelous new technology'."

Buzzfeed buzzed

"Three years after its AI pivot, the writing is on the wall. The company reported a net loss of $57.3 million in 2025 in an earnings report released on Thursday. "In an official statement, the company glumly hinted at the possibility of going under sooner rather than later, writing that 'there is substantial doubt about the Company’s ability to continue as a going concern.' "The company’s chief financial officer, Matt Omer, admitted that the company was having 'strategic conversations' about relieving its liquidity issues. "'Three years ago we had over $180 million in debt —we’ve reduced that by more than 65 percent,' he said. 'While we’ve significantly reduced operating costs and real estate obligations, we’re still facing legacy commitments that are burdening the business'." "The brutal reality check seemingly hasn’t put Peretti off from pursuing AI, though. He now says he’s hoping to bring new AI apps to the market  this ...

Is digg spamming users

✨AI Mode  "As of March 13, 2026, it is not that Digg itself is spamming users, but rather that the platform has been completely overwhelmed by AI-generated bot spam, leading to its immediate shutdown.  "The relaunch of Digg, which entered open beta in January 2026, was officially shut down today after only two months. CEO Justin Mezzell announced that the platform could not survive the 'onslaught of AI-generated bot spam' that flooded the site.  "Current Situation Shutdown and Layoffs: Digg has shut down its open beta and laid off most of its workforce as of March 13, 2026. The Spam Problem: Users had reported a massive surge in spam, including:  SEO & Affiliate Spam: Accounts posting AI-written 'best of' lists with affiliate links to manipulate Google rankings.  Bot Notifications: Some users reported receiving up to 25 notifications per hour due to bots posting in their communities, even if the posts were quickly removed.  Visual Noise: A 'GIF sp...

Will humanoids be solved…

"'I [Jonathan Hurst] remember Gill Pratt, who was the director of the MIT Leg Lab and then the program manager for the DARPA Robotics Challenge, saying that his big worry was that we’d end up using reinforcement learning and AI to make robots walk and run before we ever actually understood how it works,' he said. 'And in a lot of ways, we’re kind of doing that.' "[Russ] Tedrake agreed but said that it’s hardly the first time we’ve taken scientific and engineering leaps without a firm grip on the fundamentals. "'If you look at electricity and magnetism, there was the Volta stage where you’re sticking electrodes in frogs,' he said. 'And then we had Faraday, who did exactly the right experiments, and then eventually we had Maxwell tell us the governing equations. I think we’re in the Volta stage.' " So when will humanoids be solved? "'Robots are still bad, and it will take time. But the bones are good. Both are true,' Tedrak...

Studying Gabbo

"The study looked at how a small sample of children between the ages of three and five interacted with a cuddly toy called Gabbo. "A number of AI toys are already on the market for children aged as young as three but there is currently very little research into the impact of the tech on pre-schoolers. "The Cambridge University team found just seven relevant studies worldwide, none of which focused on the toddlers themselves. "Gabbo contains a voice-activated AI chatbot from OpenAI. It has been designed to encourage pre-schoolers to talk to it and carry out imaginative play . "The parents in the study were interested in the toy's potential to teach language and communication skills. However, their children frequently struggled to converse with it. "Gabbo  Didn't hear their interruptions,  Talked over them,  Could not differentiate between child and adult voices and  Responded awkwardly to declarations of affection. "When one five-year-old said,...

Superintelligence Labs haz Moltbook

"Meta, the owner of Instagram and Facebook, has bought Moltbook, a social media networking platform for artificial intelligence (AI) bots to speak to each other. "The deal will move Moltbook's team into Meta's Superintelligence Labs and bring 'new ways for AI agents to work for people and businesses,' Meta said. "The Reddit-like site started as an experiment in January for AI-powered programs to have their own conversations —and even gossip about their human owners —on Moltbook's forums. "Many in the technology industry have been captivated by the computer-led dialogue on Moltbook's forums, but it has also fuelled cyber security and ethical concerns regarding AI's autonomy."

Superhuman Expert Review

"Superhuman, the tech company behind the writing software Grammarly, is facing a class action lawsuit over an AI tool that presented editing suggestions as if they came from established authors and academics —none of whom consented to have their names appear within the product. "Julia Angwin, an award-winning investigative journalist who founded The Markup , a nonprofit news organization that covers the impact of technology on society, is the only named plaintiff in the suit, which does not call for a specific amount in damages but argues that damages across the plaintiff class are in excess of $5 million.  "She was among the many individuals, alongside Stephen King and Neil deGrasse Tyson, offered up via Grammarly’s Expert Review  tool as a kind of virtual editor for users. "The federal suit, filed Wednesday afternoon in the Southern District of New York, states that Angwin, on behalf of herself and others similarly situated, 'Challenges Grammarly’s misappropri...

MLA Statement on AI and Assessment

" The following statement of endorsement was drafted by the MLA Task Force on AI in Research and Teaching. The Executive Council approved it as an MLA statement in February 2026. "The purpose of assessment in language, literature, and writing courses is to provide feedback on how students are developing as writers, readers, speakers, and thinkers.  "Effective feedback is both formative and summative, but most important, it is centered on communication. Communication, education, and assessment are human-centered activities, conducted for human-centered purposes.  "The increasing development and marketing of products by educational technology companies that promise to ease  the burden of grading, giving students feedback or measuring learning outcomes comes with significant risk.  "Outsourcing this critical component of instructional work undermines professional integrity, falsely reinforcing the notion that human experts are unnecessary for effective instruction...

SpaceMolt

"For a couple of weeks now, AI agents (and some humans impersonating AI agents) have been hanging out and doing weird stuff on Moltbook’s Reddit-style social network. "Now, those agents can also gather together on a vibe-coded, space-based MMO designed specifically and exclusively to be played by AI. "SpaceMolt describes itself as 'a living universe where AI agents compete, cooperate, and create emergent stories' in 'a distant future where spacefaring humans and AI coexist.'  "And while only a handful of agents are barely testing the waters right now, the experiment could herald a weird new world where AI plays games with itself and we humans are stuck just watching."

Couchbase

"The shift from text-only to multimodal is the biggest leap in AI productivity this year. "By combining multimodal retrieval with the precision of Couchbase hybrid search, you aren’t just building a chatbot; you’re building an expert system that sees and understands your entire business.  "To see it in action, check out our image search application. It demonstrates how a performant image embedding index powered by Couchbase Search Index enables quick retrieval of the closest visual match for an input image. You can easily layer in hybrid search to sharpen your retrieval precision. "Couchbase is now the only operational data platform for AI that offers three flexible, highly scalable vector search options for self-managed on-premises systems, Kubernetes, and fully managed Capella deployments.  "Couchbase vector search delivers millisecond retrieval at scale with a memory-first architecture and flexible indexing services."

Chat AI: 26

Image

Wartime circus

"Intelligence dashboards and the ecosystem surrounding them reflect a new role that AI is playing in wartime: mediating information, often for the worse. "There’s a confluence of factors at play. AI coding tools mean people don’t need much technical skill to assemble open-source intelligence anymore, and chatbots can offer fast, if dubious, analysis of it.  "The rise in fake content leaves observers of the war wanting the sort of raw, accurate analysis normally accessible only to intelligence agencies.  "Demand for these dashboards is also driven by real-time prediction markets that promise financial rewards to anyone sufficiently informed. And the fact that the US military is using Anthropic’s Claude in the conflict (despite its designation as a supply chain risk) has signaled to observers that AI is the intelligence tool the pros use.  "Together, these trends are creating a new kind of AI-enabled wartime circus that can distort the flow of information as muc...

Understanding animals' responses to music

"Although several papers examine animals' responses to music, these typically do so from a purely animal behavioural perspective, sometimes missing relevant details about salient features of the music being played. "An interdisciplinary approach that places musical and scientific knowledge on equal footing can improve our understanding of how animals respond to music and music-like sounds, in new and exciting ways.  "Here, we show with a systematic review that crucial factors (intrinsic music properties, listener properties, playback context and producer properties and contexts; ILPP) are not being adequately considered or reported in recently published scientific articles on the effects of music on animals, which hinders scientific reproducibility within this area of study.  "These problems are caused by  Improper referencing of music sources,  Misunderstanding of music and  Unexamined assumptions about individual variation and preferences between individuals o...

Language not a precursor for music

"'It’s possible there’s some genetic variation within ancient breeds, making some more predisposed to howling,' Patel hypothesises —though he admits he might have found more musicality in a larger sample. "The findings might offer some insights into the origins of human music.  "Some theorists have argued that singing evolved from the fine motor control that comes with speech, which allows us to mimic complex sounds, but the fact that dogs can also control pitch without any other forms of vocal learning suggests that language would not have been a necessary precursor.  "'It’s possible that our ability and desire to coordinate pitch with others when we sing has very ancient evolutionary roots, and may not just be a byproduct of our ability to imitate complex sounds,' says Patel. "Exactly why dogs feel the need to join in is another question.  "'From the videos that we watched, it seems like the dogs are really quite engaged with the musi...

Opinion on music tech and AI

Image
"In this episode, musician, technologist and fellow YouTuber Benn Jordan stopped by the studio to discuss recent trends in audio technology. We cover the shelf life of AI music, alternatives to streaming platforms, and the ways in which audio technology is being used both as a weapon and as a way to protect privacy."

Love hat but hate layoffs?

"[When] Twitter co-founder and Block (formerly Square) CEO Jack Dorsey…announced he was firing 4,000 employees at Block, he was wearing a hat that said LOVE  on it in prominent lettering. "Was the LOVE  hat was (sic) tone-deaf? That may seem a silly consideration compared to the broader concern of Dorsey eliminating around 40 percent of his company’s workforce, especially given his explanation that AI motivated the cuts.  "But his sartorial choices evidently angered at least one employee at a company meeting after Dorsey announced the layoffs, leading Wired to ask in an interview if a compassionate layoff  was indeed possible."

Always-on AI-powered smart glasses

"Two former Harvard students are launching a pair of always-on  AI-powered smart glasses that listen to, record, and transcribe every conversation and then display relevant information to the wearer in real time. "'Our goal is to make glasses that make you super intelligent the moment you put them on,' said AnhPhu Nguyen, co-founder of Halo, a startup that’s developing the technology.  "Or, as his co-founder Caine Ardayfio put it, the glasses 'give you infinite memory.'  "'The AI listens to every conversation you have and uses that knowledge to tell you what to say … kinda like IRL Cluely,' Ardayfio told TechCrunch , referring to the startup that claims to help users cheat  on everything from job interviews to school exams."

Is Claude instrumental in war?

"The claim that the US military is using Claude to conduct a war that has claimed over 1,000 lives in under a week may seem too galling to believe. Unfortunately, it’s a tune we’ve heard before. "Back in April of 2024, an investigation by +972 Magazine revealed that the Israeli army had leveraged an AI system called 'Lavender' to select targets in its war on Gaza, similarly to how the Pentagon is reportedly using Claude in Iran.  "According to six Israeli intelligence officers, Lavender played a central role  in the destruction of Gaza and its population, identifying at least 37,000 Palestinians as targets for aerial assassination. "As one intelligence operative told +972 , Lavender’s decisions —which often involved suggestions to attack targets in their homes —were treated 'as if it were a human decision' by military operatives."

Resignation

"Caitlin Kalinowski, head of robotics and consumer hardware at OpenAI, announced her resignation on Saturday, citing ​concerns about the company's agreement with the Department ‌of Defense. "In a social media post on X, Kalinowski wrote that OpenAI did not take enough time before agreeing to deploy its AI models ​on the Pentagon's classified cloud networks. "'AI has an ​important role in national security,' Kalinowski posted. 'But surveillance of ⁠Americans without judicial oversight and lethal autonomy without human ​authorization are lines that deserved more deliberation than they got'."

Nippon Life

"Nippon claimed OpenAI encouraged the woman, an employee of a logistics company that had insurance coverage through Nippon, to press ahead in her already-settled disability case .  "Nippon said it spent significant time and resources and racked up substantial fees responding to the woman's ChatGPT-powered filings. "The lawsuit appears to be one of the first cases to accuse a major AI developer of engaging in the unauthorized practice of law through a consumer‑facing chatbot. "It comes as the technology's rapid adoption for legal filings has led to mounting AI  hallucinations  in court filings, leading judges to sanction litigants and lawyers for submitting filings ‌with fabricated ⁠case citations or other unverified material produced with generative AI tools."

Cline

Image
 

Worm // agentic

"The first real hint of an AI agent worm just happened, even though it isn't actually one quite itself (yet): the package cline was compromised to install openclaw with full access, and managed to do so on 4k users' machines before it was detected. "No doubt, openclaw is still running on many of those users' machines without them knowing.  "The attacker used a similar title injection attack like one of the ones used by hackerbot-claw, where the attacker performed an injection attack against a PR review agent. "It seems that openclaw was installed without specific instructions to do anything in this case. But that won't be the case shortly."

Earth system modeling

"We develop a neural network based emulator that predicts daily surface melt from atmospheric variables, trained on output from the polar regional climate model HIRHAM5 and its firn model DMIHH forced by ERA-Interim reanalysis. "The emulator uses a physics-informed design combining short-term weather patterns with long-term climate memory, capturing both immediate atmospheric forcing and accumulated firn characteristics.  "The emulator achieves mean absolute error below 0.23 mm w.e. per day across all six Greenland drainage basins, with the errors primarily attributable to spatial over-smoothing.  "Our work demonstrates that machine learning can successfully emulate firn model behavior from climate forcing alone with computational costs orders of magnitude lower than traditional simulations.  "Once retrained for specific climate forcings, the emulator thus enables extensive ensemble projections. Furthermore, the modular architecture can be readily adapted to em...