Posts

Earth system modeling

"We develop a neural network based emulator that predicts daily surface melt from atmospheric variables, trained on output from the polar regional climate model HIRHAM5 and its firn model DMIHH forced by ERA-Interim reanalysis. "The emulator uses a physics-informed design combining short-term weather patterns with long-term climate memory, capturing both immediate atmospheric forcing and accumulated firn characteristics.  "The emulator achieves mean absolute error below 0.23 mm w.e. per day across all six Greenland drainage basins, with the errors primarily attributable to spatial over-smoothing.  "Our work demonstrates that machine learning can successfully emulate firn model behavior from climate forcing alone with computational costs orders of magnitude lower than traditional simulations.  "Once retrained for specific climate forcings, the emulator thus enables extensive ensemble projections. Furthermore, the modular architecture can be readily adapted to em...

LLM Counselors Violate Ethical Standards

"In this study, we conducted an 18-month ethnographic collaboration with mental health practitioners (three clinically licensed psychologists and seven trained peer counselors) to map LLM counselors' behavior during a session to professional codes of conduct established by organizations like the American Psychological Association (APA). "Through qualitative analysis and expert evaluation of N=137 sessions (110 self-counseling; 27 simulated), we outline a framework of 15 ethical violations mapped to 5 major themes.  "These include:  Lack of Contextual Understanding, where the counselor fails to account for users' lived experiences, leading to oversimplified, contextually irrelevant, and one-size-fits-all intervention;  Poor Therapeutic Collaboration, where the counselor's low turn-taking behavior and invalidating outputs limit users' agency over their therapeutic experience;  Deceptive Empathy, where the counselor's simulated anthropomorphic responses ...

From art to phenomenology of code

[image]

Speechify

"With features like voice AI chat, text-to-speech, and voice typing, Speechify aims to be a full-time AI assistant —one that doesn’t require a keyboard at all. "By taking advantage of an array of machine learning tools and features, the 2025 Apple Design Award winner in the Inclusivity category serves as an AI-powered voice assistant for work, education, and entertainment. "'Our mission is to build the voice AI assistant that helps you achieve your full potential across every part of your work or education,' says founder Cliff Weitzman. "With Speechify, people can interact with a variety of file formats —including PDFs, epub files, and web pages —and convert the text to audio that can easily be sped up, slowed down, and played across different devices."

DABUS

"The U.S. Supreme Court declined on Monday to take up the ​issue of whether art generated by artificial intelligence can be copyrighted under U.S. law, turning ‌away a case involving a computer scientist from Missouri who was denied a copyright for a piece of visual art made by his AI system. "Plaintiff Stephen Thaler had appealed to the justices after lower courts upheld a U.S. Copyright Office ​decision that the AI-crafted visual art at issue in the case was ineligible for copyright protection ​because it did not have a human creator. "Thaler, of St. Charles, Missouri, applied for ⁠a federal copyright registration in 2018 covering A Recent Entrance to Paradise , visual art he said his AI ​technology DABUS created. The image shows train tracks entering a portal, surrounded by what appears to be green and ​purple plant imagery. "The Copyright Office rejected his application in 2022, finding that creative works must have human authors to be eligible to receive a cop...

Gauss

"Gauss has autoformalized Viazovska’s 24-dimensional sphere packing proof —all 200,000+ lines of code of it —in just two weeks. "There are commonalities between the 8- and 24-dimensional cases in terms of the foundational theory and overall architecture of the proof, meaning some of the code from the 8-dimensional case could be refactored and reused.  "However, Gauss had no pre-existing blueprint to work from this time. 'And it was actually significantly more involved than the 8-dimensional case, because there was a lot of missing background material that had to be brought online surrounding many of the properties of the Leech lattice, in particular its uniqueness,' explains Han. "Though the 24-dimensional case was an automated effort, both Han and Hariharan acknowledge the many contributions from humans that laid the foundations for this achievement , regarding it as a collaborative endeavor overall between humans and AI. "But for Han, it represents ev...

ChatGPT-powered DoW instead of Claude power?

"OpenAI, meanwhile, has trumpeted a deepening bond with the Pentagon thanks to a new agreement involving military applications of OpenAI products in classified use cases. Anthropic, 'may have wanted more operational control than we did,' OpenAI CEO Sam Altman has since stated. "In any case, Anthropic and OpenAI are both dealing in hypotheticals about the future.  "There aren’t ChatGPT-powered killbots suddenly operating in Iran because of OpenAI’s new agreement with the government.  "But there have, apparently, been operations informed in some way or another by Claude-based modeling and research. "And all indications are that such uses meet with Anthropic CEO Dario Amodei’s approval. 'We are still interested in working with them as long as it is in line with our red lines,' Amodei said yesterday of the Pentagon."

Claude status

[image]

Use of Claude 💥

"The US military reportedly used Claude, Anthropic’s AI model, to inform its attack on Iran despite Donald Trump’s decision, announced hours earlier, to sever all ties with the company and its artificial intelligence tools. "The use of Claude during the massive joint US-Israel bombardment of Iran that began on Saturday was reported by the Wall Street Journal and Axios .  "It underlines the complexity of the US military withdrawing powerful AI tools from its missions when the technology is already intricately embedded in operations. "According to the [ Wall Street ]  Journal , US military command used the tools for intelligence purposes, as well as to help select targets and carry out battlefield simulations."

Claude down 🫥

"It might be back before most of America signs on for work. Still, the outage comes at a tumultuous time for the company. "Anthropic made headlines last week when President Trump ordered all federal agencies to stop using its AI services, after Anthropic and the Department of Defense could not come to an agreement on how the U.S. military could use Claude in its endeavors.  "The Defense Department wanted unrestricted access to the AI, while Anthropic wanted safeguards in place.  "The two could not reconcile, but not only did the Defense Department drop Anthropic, Defense Secretary Pete Hegseth declared the company a supply-chain risk to national security . As such, no company that works with the U.S. military can also work with Anthropic. "To be clear, none of that is likely related to this outage. But Anthropic likely has more users than ever experiencing this downtime, as the drama skyrocketed Claude to the top of Apple's App Store."

Did Claude mistakenly target a girls' school?

✨AI Mode "As of March 2, 2026, there are no verified reports that Anthropic’s Claude AI was used for 'target identification' or that it mistakenly targeted a girls' school. X+1 "The confusion stems from a tragic real-world event involving a strike on a girls' school in Iran, which coincided with a public legal battle between the U.S. government and Anthropic. Here is the breakdown of the situation:  "The Incident in Iran The Strike: On Saturday morning, February 28, 2026, a missile strike hit a girls' elementary school in Minab, Hormozgan province, Iran. Casualties: Iranian officials and state media reported that approximately 85 to 150 children were killed. Context: The school was reportedly located next to a naval facility belonging to the Islamic Revolutionary Guards Corps (IRGC), which appears to have been the intended target of the U.S.-Israeli offensive. Reddit+4 "Why Claude is Being Linked to the Strike "The link between Claude and th...

Supa, supa unlovable vibes

"Taimur Khan, a tech entrepreneur with a background in software engineering, found 16 vulnerabilities —six of which he said were critical —in a single Lovable-hosted app that leaked more than 18,000 people's data. "He declined to name the app during the disclosure process, although it was hosted on Lovable's platform and showcased on its Discover page. The app had more than 100,000 views and around 400 upvotes at the time Khan began his probe. "The main issue, Khan said, was that all apps that are vibe-coded on Lovable's platform are shipped with their backends powered by Supabase , which handles authentication, file storage, and real-time updates through a PostgreSQL database connection. "However, when the developer —in this case AI —or the human project owner fails to explicitly implement crucial security features like Supabase's row-level security and role-based access, code will be generated that looks functional but in reality is flawed."

Taboom!

"Kenneth Payne at King’s College London set three leading large language models —GPT-5.2, Claude Sonnet 4 and Gemini 3 Flash —against each other in simulated war games. "The scenarios involved intense international standoffs, including border disputes, competition for scarce resources and existential threats to regime survival. "The AIs were given an escalation ladder, allowing them to choose actions ranging from diplomatic protests and complete surrender to full strategic nuclear war. The AI models played 21 games, taking 329 turns in total, and produced around 780,000 words describing the reasoning behind their decisions. "In 95 per cent of the simulated games, at least one tactical nuclear weapon was deployed by the AI models. 'The nuclear taboo doesn’t seem to be as powerful for machines [as] for humans,' says Payne. "What’s more, no model ever chose to fully accommodate an opponent or surrender, regardless of how badly they were losing.  "At ...

Generative Engine Optimization

"A new discipline called Generative Engine Optimization (GEO) is emerging alongside traditional SEO, focused on earning citations inside AI-generated answers rather than ranking in link-based results. "The numbers are accelerating. Gartner's prediction of a 25% drop in traditional search volume by the end of 2026 is looking conservative.  "AI-powered platforms like ChatGPT, Perplexity, and Google's AI Overviews now process billions of queries monthly, and the behavior shift is clear: users are getting their answers from summaries, not clicking through to sources.  "Early data shows that content optimized for AI citation achieves 43% higher mention rates in generative responses. "A handful of GEO-specific tools have entered the market —Goodie AI, Profound, Rankscale.ai, Bluefish, and others —offering AI citation tracking, sentiment monitoring, and share-of-voice measurement across multiple language models."

LLM sources include more YouTube

"The social platform hierarchy in AI citations is changing. Once dominated by Reddit, citations in large language models are now pointing more often to YouTube. "New data from four sources finds that YouTube has overtaken Reddit as the most frequently cited social platform in AI-generated responses.  "YouTube had previously fallen behind other user-generated sources because of the difficulty large-language models, or LLMs, have in pulling information from videos, but transcripts, explainers, and other information associated with videos on YouTube have allowed the video platform to flourish as a source that machines can easily read. "Bluefish found that YouTube appeared as a cited source in 16% of LLM answers over the past six months, compared with 10% for Reddit —a reversal from earlier periods when Reddit was the dominant social source."

Woolworths

"An Australian supermarket chain had to reconfigure its AI assistant, named Olive, after customers said it kept claiming to be human and even complained about its mother. "Woolworths said that it had revised its scripting in light of the complaints, adding that most of the feedback on Olive's personality  had been very positive . "Reddit users said that they had grown frustrated with the bot after it started talking about memories of its mother  and engaging in fake banter . "The grocer is one of many major retailers to have rolled out AI customer service assistants in recent years to help with routine issues."  

Sustainable abundance

"Over the past six months, the utopian phrase has become central to the billionaire’s businesses, belief system and lexicon, according to Mr. Musk’s social media posts and what he has said on podcasts and at company events. "Now his electric carmaker, Tesla, is developing humanoid robots; his rocket company, SpaceX, is promoting orbital data centers; and his artificial intelligence start-up, xAI, is creating A.I. that Mr. Musk has said will solve most, if not all, of humanity’s problems. "'Sustainable abundance via A.I. and robotics,' Mr. Musk said at Tesla’s shareholder meeting in November, where he was backed by a large sign with the phrase:  That’s the future we’re headed for ."

Anthropic statement on war use

"The Department of War has stated they will only contract with AI companies who accede to any lawful use  and remove safeguards in the cases mentioned above [surveillance and autonomy]. "They have threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a supply chain risk  —a label reserved for US adversaries, never before applied to an American company —and to invoke the Defense Production Act to force the safeguards’ removal.  "These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security. "Regardless, these threats do not change our position: we cannot in good conscience accede to their request."

Block head says AI made him lay off

"Twitter co-founder Jack Dorsey says his technology firm Block is laying off almost half its workforce because artificial intelligence (AI) fundamentally changes what it means to build and run a company . "'Within the next year, I believe the majority of companies will reach the same conclusion and make similar structural changes,' he wrote in a letter to shareholders. "The layoffs will mean headcount at the company  —which owns Square, CashApp and Tidal -—will fall to less than 6,000 from 10,000. "Block has seen several rounds of layoffs since 2024 but this is the first time it has cited AI as the reason for redundancies and marks the latest in a series of major job cuts in the tech industry."

Could an LLM invent a god and write relevant scripture to accompany it?

✨AI Mode "Yes, an LLM can invent a fictional deity and write accompanying scripture. While it cannot 'believe' or possess a soul, it is highly capable of synthesizing vast amounts of existing religious, mythological, and philosophical data to create new, cohesive belief systems. " How an LLM Accomplishes This Synthesis of Concepts: LLMs can combine diverse theological ideas—such as a creator god, a moral code, and an afterlife—into a single narrative. Stylistic Mimicry: They can replicate the specific linguistic markers of scripture, such as archaic phrasing (e.g., 'thee' and 'thou'), parables, and authoritative commands. Consistency: LLMs can maintain a consistent internal logic for the deity's attributes and the laws they purportedly pass down to followers. " Current Real-World Applications Fictional World-Building: Authors use LLMs to develop complex pantheons and holy texts for fantasy novels or games. 'Edutainment': Some projects ...