Posts

Showing posts from April, 2026

Overuse 🫨

"AI isn’t saving companies money on labor; it’s actually costing them more than the humans they currently employ. "'For my team, the cost of compute is far beyond the costs of the employees,' Bryan Catanzaro, vice president of applied deep learning at Nvidia, recently told Axios. "An MIT study from 2024 backs up Catanzaro’s experience. Analyzing the technical requirements of AI models needed to perform jobs at a human level, researchers found that AI automation would be economically viable in only 23% of roles where vision is a primary part of the work. In the remaining 77% of the time, it was cheaper for humans to continue their work. "In other instances, AI has proved to be fallible, with one engineer saying an AI agent destroyed his database and network as a result of what he called overuse . "Despite no clear evidence of AI improving productivity and, according to the Yale Budget Lab, no widespread data to support the idea of AI displacing jobs, Big...

Octopi or octopodes?

"Cephalopods thriving in Earth’s oceans offer a profound, accessible encounter with a genuinely alien form of mind. "They challenge us to broaden our definitions, to appreciate cognitive diversity, and to recognize that the journey to understanding other minds might begin not by gazing at the stars, but by looking into the complex, adaptable, and utterly different intelligence inhabiting the waters of our own planet. cephalopods seem capable of recognizing individual humans.  "Captive octopi often react differently to different keepers, sometimes developing apparent likes and dislikes, expressed perhaps through playful interactions or targeted jets of water for disliked individuals. They show wariness towards strangers. "While attributing human-like personality is risky, the consistency of individual behavioral differences ('bold' vs. 'shy' octopi) and their targeted interactions suggests a level of individual recognition and potentially rudimentary ...

Goblin persona non grata

"OpenAI acquired OpenClaw in February not long after the tool became a viral hit among AI enthusiasts. "OpenClaw can use any AI model to automate useful tasks like answering emails or buying things on the web.  "Users can select any of various personae for their helper, which shapes its behavior and responses. "A model might become more prone to misbehavior when used with an agentic harness like OpenClaw that puts lots of additional instructions into prompts, such as facts stored in long-term memory. "In response to a post highlighting OpenClaw’s goblin tendencies, Nik Pash, who works on Codex, wrote, 'This is indeed one of the reasons.' "Even Sam Altman, OpenAI’s CEO, joined in with the memes, posting a screenshot of a prompt for ChatGPT. It read: 'Start training GPT-6, you can have the whole cluster. Extra goblins'."

How do musicologists compare and contrast 'night and day' with 'under my skin'

✨ AI Mode  Musicologists frequently compare "Night and Day" (1932) and "I've Got You Under My Skin" (1936) as twin pillars of Cole Porter's "obsession" songs. Both compositions break the standard 32-bar AABA pop formula of the 1930s, opting for expansive, unconventional structures to mirror their lyrical themes of relentless longing. 1, 2, 3, 4   ## Structural & Harmonic Comparison While both songs are famous for their length and complexity, they utilize different structural strategies: * Expanded Forms: "Night and Day" is noted for its unusual 48-bar chorus (often analyzed as ABABCB), where the verse consists of a haunting, single-note melody over shifting harmonies. "I've Got You Under My Skin" is even more irregular, often lacking a formal verse entirely and instead featuring a long, developing structure that some analysts categorize as a 56 or 64-bar form. * Harmonic Tension: Both use chromaticism to create a sense ...

Does stability born from control of info suborn change?

✨AI Mode  Stability achieved through the control of information often acts as a stagnation mechanism that suborns (secretly induces or facilitates) resistance to change by creating a rigid "status quo." While "stability" sounds positive, when it is strictly born from information control —such as suppressing data or controlling narratives —it typically produces several effects that hinder meaningful change: * Suppression of Feedback Loops: Systems require accurate feedback to adapt. By controlling information, a system "blinds" itself to internal failures or external shifts, preventing the natural evolution required for long-term survival. * The "Deviancy" Threshold: In controlled systems (like the "Software Instability" mechanic in Detroit: Become Human), stability is maintained by strictly following "code" or established rules. Change only occurs when information "instability" increases, allowing for "deviant...

Handwriting→Photo→Lens→AI Mode

Image
✨AI Mode  [please decipher: "new pen stasis us change Is stability the same as stasis? Is stability born from control of info to subovu chame? Is Entropy the enemy of stability? Is the idea of a static set ofive like the 10 commandments or Usage/grammar a solution to chais? or is it the base of entropy/change? pre-bigbang was stasis & non-exrepay Lawas like Physics laws are natural Do natural law's uphold entropy or Chsure aver stasis endurance against entropy immortality sevens centropy omnipresent us hidden Gunniscient versus secretive eternalus Firality"] It looks like your text is a mix of high-level philosophical shorthand and typos (likely from fast typing or autocorrect) that obscure some very deep questions about thermodynamics and social order. Here is the "deciphered" version of the core arguments you’re making: ## 1. The Core Conflict: Stasis vs. Change * " new pen stasis us change " → Likely: Newtonian stasis vs. change. You’re contrast...

Ground support 🍶

"Japan Airlines (JAL) will start using humanoid robots in ground handling tasks at Tokyo's Haneda airport from May, in a two-year trial it said is aimed at easing employees' workload. "For a start, the Chinese-made robots will be deployed to load and unload cargo containers, JAL and GMO AI & Robotics, its partner in the project, said in a demonstration to the media on Monday. "Japan's aviation industry is wrestling with a labour crunch brought on by an increase in inbound tourism and a declining working-age population, said JAL, which employs some 4,000 ground handling staff. "The carrier hopes that these robots can also be used to clean cabins and operate ground support equipment in future. "Robots are already being used in some airports across Japan, including for security patrol and retail."

Theoretical Economics

"If AI displaces human workers faster than the economy can reabsorb them, it risks eroding the very consumer demand firms depend on .  "We show that knowing this is not enough for firms to stop it. In a competitive task-based model, demand externalities trap rational firms in an automation arms race, displacing workers well beyond what is collectively optimal.  "The resulting loss harms both workers and firm owners. More competition and better  AI amplify the excess; wage adjustments and free entry cannot eliminate it. Neither can capital income taxes, worker equity participation, universal basic income, upskilling, or Coasian bargaining.  "Only a Pigouvian automation tax can.  "The results suggest that policy should address not only the aftermath of AI labor displacement but also the competitive incentives that drive it."  

Manus ✨

"China’s state planner on Monday called for Meta to unwind its $2 billion acquisition of Manus, a Singaporean artificial intelligence startup with Chinese roots. "The decision to prohibit foreign investment in Manus was made in accordance with laws and regulations, the National Development and Reform Commission said in a brief statement. It added that it has asked the parties involved to withdraw the acquisition transaction. "CNBC has contacted Meta for comment. Its stock was up slightly in morning trading. "The deal had attracted scrutiny from both China and Washington, as lawmakers in the U.S. have prohibited American investors from backing Chinese AI companies directly. Meanwhile, Beijing has increased efforts to discourage Chinese AI founders from moving business offshore."

Re-Vamp ✨

"OpenAI and Microsoft on Monday announced a revamped partnership agreement that will allow the artificial intelligence company to cap revenue share payments and serve customers across any cloud provider. "As part of the new agreement, the companies said revenue share payments from OpenAI to Microsoft will be subject to a total cap , but they will continue through 2030, independent of OpenAI’s technology progress .  "Microsoft no longer needs to determine its response if OpenAI finds that it has reached artificial general intelligence, or AGI, which is a term for an AI system that rivals or exceeds human intelligence. "The revenue share between the two companies has existed for years. OpenAI will pay Microsoft at the same percentage, which is 20%, as part of the new deal, according to a source familiar with agreement who asked not to be named because the details are confidential.  "That means, for example, Microsoft continues to get a cut of every ChatGPT subscr...

Infrastructural Science Fiction

"This isn’t science fiction built from characters moving through imagined futures. It’s science fiction built from the language systems already shaping those futures. "The flat tone, the procedural cadence, the absence of voice —those aren’t bugs. They’re the subject. "This is not AI mimicking the text of human writers. It is taking on the function that until now had belonged exclusively to human writers. AI has become a writer that can function in the same way as our best wrtiers (sic) —as a perceptual instrument. "Look again at the example . What is happening? AI is detecting and rendering: Reassurance language ('everything is normal') Post-event normalization ('no action required') Conditional permission structures ('access granted under…') Semantic drift ( unchanged  meaning something slightly different every time) "AI is writing about the background radiation of modern life. This is carrying semantic and emotional weight —enough to ...

In the beginning 🤔

"Machine code, with its absence of almost any form of redundancy, was soon identified as a needlessly risky interface between man and machine .  "Partly in response to this recognition so-called high-level programming languages  were developed, and, as time went by, we learned to a certain extent how to enhance the protection against silly mistakes.  "It was a significant improvement that now many a silly mistake did result in an error message instead of in an erroneous answer. (And even this improvement wasn't universally appreciated: some people found error messages they couldn't ignore more annoying than wrong results, and, when judging the relative merits of programming languages, some still seem to equate the ease of programming  with the ease of making undetected mistakes.)  "The (abstract) machine corresponding to a programming language remained, however, a faithful slave, i.e. the nonsensible automaton perfectly capable of carrying out nonsensical in...

Information-Exploration Paradox

"The AI industry is largely failing to ask a key design question, argues theoretical neuroscientist/cognitive scientist Vivienne Ming. Are their AI products building human capacity or consuming it? "The human qualities most likely to matter are not the feel-good ones. They're the uncomfortable ones:  The capacity to be wrong in public and stay curious; To sit with a question your phone could answer in three seconds and resist the urge to reach for it.  To read a confident, fluent response from an AI and ask yourself, What's missing ? rather than default to Great, that's done .  To disagree with something that sounds authoritative and to trust your instinct enough to follow it.  "We don't build these capacities by avoiding discomfort. We build them by choosing it, repeatedly, in small ways:  The student who struggles through a problem before checking the answer;  The person who asks a follow-up question in a conversation;  The reader who sits with a dif...

Quiet failure

"In late-stage testing of a distributed AI platform, engineers sometimes encounter a perplexing situation: Every monitoring dashboard reads healthy , yet users report that the system’s decisions are slowly becoming wrong. "Engineers are trained to recognize failure in familiar ways: a service crashes, a sensor stops responding, a constraint violation triggers a shutdown. Something breaks, and the system tells you. But a growing class of software failures looks very different.  " The system keeps running, logs appear normal, and monitoring dashboards stay green. Yet the system’s behavior quietly drifts away from what it was designed to do. "This pattern is becoming more common as autonomy spreads across software systems. Quiet failure is emerging as one of the defining engineering challenges of autonomous systems because correctness now depends on coordination, timing, and feedback across entire systems."

More meta job cuts 🫥

"Meta will cut thousands of jobs next month as it spends more than ever on artificial intelligence (AI) projects. "The company told employees in a memo on Thursday that it plans to cut 10% of its workforce —roughly 8,000 staff. It said it will also not fill thousands more open jobs it had been hiring for. "A key reason for the layoffs is Meta's increased spending in other areas of the company, including AI, for which it will this year spend $135bn (£100bn). This is roughly equal to the amount it has spent on AI in the previous three years combined, according to a person who viewed the memo. "A spokesman for Meta confirmed the planned job cuts but declined to comment further."

Language abilities

"Language is a defining feature of our species, yet the genomic changes enabling it remain poorly understood. "Despite decades of work since FOXP2’s discovery, we still lack a clear picture of which regions shaped language evolution and how variation contributes to present-day phenotypic differences.  "Using an evolutionary stratified polygenic score approach, we find that human ancestor quickly evolved regions (HAQERs) are associated with spoken language abilities (discovery N = 350, total replication N > 100,000). HAQERs evolved before the human-Neanderthal split, giving hominins increased binding of Forkhead and Homeobox transcription factors, and show evidence of balancing selection across the past 20,000 years.  "Language-associated variants in HAQERs appear more prevalent in Neanderthals, and HAQER-like sequences show convergent evolution across vocal-learning mammals. Our results reveal how ancient innovations continue shaping human language."

Stephen's Guide to the Logical Fallacies

"In your day-to-day life you will encounter many examples of fallacious reasoning. And it's fun - and sometimes even useful - to point to an argument and say, 'A ha! That argument commits the fallacy of false dilemma.' "It may be fun, but it is not very useful. Nor is it very enlightened. "The names of the fallacies are for identification purposes only. They are not supposed to be flung around like argumentative broadswords. It is not sufficient to state that an opponent has committed such-and-such a fallacy. And it is not very polite. "This Guide is intended to help you in your own thinking, not to help you demolish someone else's argument. When you are establishing your own ideas and beliefs, evaluate them in the light of the fallacies described here. "When evaluating the ideas and arguments proposed to you by others, keep in mind that you need to prove that the others' reasoning is fallacious. That is why there is a proof  section in the ...

Perceiving and imagining seem linked

"Mental imagery allows us to remember previous experiences and imagine new ones.  "Animal studies have yielded rich insight into mechanisms for visual perception, but the neural mechanisms for visual imagery remain poorly understood.  "We determined that approximately 80% of visually responsive single neurons in the human ventral temporal cortex (VTC) use a distributed axis code to represent objects.  "We used that code to reconstruct objects and generate maximally effective synthetic stimuli.  "We then recorded responses from the same neural population while subjects imagined specific objects; about 40% of axis-tuned VTC neurons recapitulated the visual code.  "Our findings reveal that visual imagery is supported by reactivation of the same neurons involved in perception, providing single-neuron evidence for the existence of a generative model in human VTC."

Neocloud for AI Inference

"Antimatter is capitalizing on significant investment to develop its first global network of neoclouds, with an initial funding of €300 million dedicated to the deployment of its first 100 Policloud units by 2027.   "These units are expected to utilize 40,000 GPUs, providing a staggering 3.6 exaFLOPS of compute power. By 2030, the planned network is aimed to amass over 400,000 GPUs, equating to 36 exaFLOPS —comparable to five traditional hyperscale data centers. "The company’s footprint will span multiple countries, with each Policloud unit constructed to be modular and easily deployable in less than five months, a far cry from the typical 24-month build time seen in standard facilities. This architecture allows Antimatter to respond swiftly to market demands while maintaining a lower overhead."

Bitcoin: Killing Satoshi

"A $70 million movie about the mysterious creator of Bitcoin quietly wrapped principal photography in London last month in a gray box that could have passed for a storage facility. "On the surface, it’s a pretty standard feature —Doug Liman directing a cast that includes Gal Gadot, Pete Davidson, Casey Affleck and Isla Fisher in a globe-trotting thriller about the search for the identity of the person who invented the decentralized cryptocurrency. "Except for one thing: 'Bitcoin: Killing Satoshi' is described as the first fully-generated, studio-quality AI feature film. "Acme AI & FX —founded by Ryan and Matt Kavanaugh, Garrett Grant and Lawrence Grey —produced the independent feature, which was shot entirely on a custom-built soundstage over 20 days, using AI to make what would have traditionally cost $300 million on a much more manageable budget, according to the film’s producers."

Kendall seizes a potentially significant cyber threat

"The UK technology secretary has urged the country to 'make AI work for Britain,' brushing off fears about its impact on jobs and cybersecurity as the government announced its first investment under a £500m sovereign AI fund. "Liz Kendall said the UK had to seize the opportunity offered by AI despite concerns underlined this month when US startup Anthropic revealed it had developed an AI model that posed a potentially significant cyber threat. "Asked how the government makes the case for embracing a technology that could disrupt jobs and now cybersecurity, Kendall said: 'We have to seize this to make it work, for Britain, for our jobs, for solving the biggest challenges we face as a world'."

Claude desktop

"The honest description of what is on my machine is this: pre-installed spyware capability, silently placed, dormant , waiting for activation. "The moment a paired extension lands, whether the user installs it, an enterprise policy pushes it, an attacker plants it, or Anthropic's own next update bundles it, the word dormant vanishes. "Anthropic will argue the binary is not currently doing anything harmful. That argument does not survive contact with the facts.  The capability is installed.  The trust relationship is established.  The opt in was never requested.  "On the day the trigger arrives, none of that changes, except the binary starts running. "That argument also doesn't save them legally —the mere placing of the binary on the device and the creation of the folders to store it is a direct breach of Article 5(3) of Directive 2002/58/EC and a multitude of computer trespass and misuse laws."

Mythos seeks vulns

"The National Security Agency is said to be using Mythos Preview, Anthropic’s recently announced model that it withheld from public release, Axios reports. "The news comes weeks after the NSA’s parent agency, the Department of Defense, labeled Anthropic a supply chain risk , after the company refused to allow Pentagon officials unrestricted access to its model’s full capabilities. "Anthropic announced Mythos earlier this month as a frontier model designed for cybersecurity tasks, but claimed the model was too capable of offensive cyberattacks to be released publicly. As a result, the AI firm limited access to Mythos to around 40 organizations, of which it has publicly named only a dozen.  "The NSA appears to be among the undisclosed recipients, and is said to be using Mythos primarily for scanning environments for exploitable vulnerabilities . The UK’s AI Security Institute has also confirmed it has access to Mythos."

Connection Keeper 🏒

"The Connection Keeper is a round puck that houses two microphones for recording around the table. "The recorder was developed in partnership with StoryCorps, the 23-year-old nonprofit that has recorded conversations with more than 720,000 people about their lives. "'Everything now is AI, and everyone has their phones on the table,' says Elyce Henkin, a managing director of StoryCorps studios and brand partnerships. 'It interrupts the conversation and the flow. We wanted to get rid of that and go back to the basics and have everyone talking to each other.' "The pucks come packaged with cards inspired by StoryCorps, designed to prompt conversations between family members. Some are aimed at kids; some are aimed at parents or other family members."

Strengths and weaknesses of chatbots for health advice

"The Reasoning with Machines Laboratory at the University of Oxford got a team of doctors to create detailed, realistic scenarios that ranged from mild health issues you could deal with at home; through to needing a routine GP appointment, an A&E trip, or requiring calling an ambulance. "When the chatbots were given the complete picture they were 95% accurate. "But it was a very different story when 1,300 people were given a scenario to have a a conversation with a chatbot about in order to get a diagnosis and advice. "It was the human-AI interaction that made things unravel as the accuracy fell to 35% —two thirds of the time people were getting the wrong diagnosis or care."

Crafty

"AI is being used to prove new results at a rapid pace. Mathematicians think this is just the beginning.  'The biggest annual mathematics conference in the world is held every year in early January. In 2026, in Washington, D.C., nervous jokes about being made obsolete by AI were plentiful, even if, on the record, everyone insisted that AI will be a helpmate to human mathematicians .  " [Geordie] Williamson —who has been working with AI for years and is very excited by it —was chosen to deliver a series of prestigious lectures about AI and math to the entire conference.  "He told the audience that it’s a mistake to react to AI developments with ignorance and fear. "But he said he understands where the fear comes from. He sees mathematics as a ' craft that people have spent their lives —dedicated their lives —towards. There is some possibility that its value may be greatly diminished in the future'."

Content strategy

"'It’s more common now that I get on the phone with CEOs and they’re proactively coming to me saying, It sounds like I need a content strategy ,' rather than a typical press relations strategy, [Steve] Hirsch said. ' The AI slop of it all creates so much distrust, and they see that the brands that are winning right now are the ones that are most authentic and human and relatable .'  "Financial technology brand Chime last month began hiring for a director of corporate editorial and storytelling —its first storyteller opening. Former and current journalists from traditional media outlets made up the bulk of the 500-plus applicants, along with content writers from other firms, said Jennifer Kuperman, Chime’s chief corporate affairs officer. "Terms like ' editorial  are limiting,' Kuperman said. 'They put in mind a very specific thing you’re doing or creating. Whereas you could tell stories in so many different ways —social, podcasts, putting you...

We don’t want to be left behind, says Witherspoon

"'Notice how AI’s biggest defenders are the ones cashing checks from it,' wrote screenwriter and director Charlene Bagcal on Threads. 'AI isn’t inevitable. Technology follows society. If people stop using it, it dies. We still have agency.' " Jagged Little Pill  author and literary agent Eric Smith weighed in, 'As someone who champions authors and books the way you do, this is so disappointing.' "'AI plagiarized all my books. It seems unlikely that I’ll be  left behind  if I don’t use it, given that it’s trained on work I did years ago,' wrote Get Well Soon  author Jennifer Wright. Writer and actor Rati Gupta said, 'How am *I* the one being left behind  by not using AI when *my* cognitive function will remain fully intact and uncompromised?' And Sophia Benoit posted, 'There’s something particularly insidious about seeing that women —the group you have built your brand on —have not adopted something and instead of assuming it’s ...

White hot public rage

"There's white hot public rage right now against AI. Not just because AI undermines labor, recklessly consumes energy, and is propped up by financial shell games, but because younger Americans are more clearly seeing through the veneer we've used to wallpaper over decades of very ordinary human failures. "In better days, U.S. artifice was just effective enough to maintain some semblance of order.  "As our institutional cornerstones buckle and crumble from broad corruption and neglect, the sheer laziness of the stage play is coming into stark relief. Especially if you're young, hungry, and have never known anything else. "Into that mix comes a fascism-friendly extraction class that's desperately trying to construct a massive, hyper-commercialized, ethics-optional, badly-automated clickbait ouroborus (sic) that shits ad engagement money, pummeling the electorate with a steady stream of superficial infotainment agitslop at impossible scale."

Quantity, not quality, says Valenzuela

"Cristóbal Valenzuela, the co-founder and CEO of AI video-generation startup Runway, now valued at north of $5 billion, may not be winning over more hearts and minds in the anti-AI, creative crowd with his recent comments about AI’s potential in Hollywood. "At Semafor World Economy this week, the AI executive suggested that studios should take the $100 million they spend on a single film and put it toward 50 films, in order to increase their output and their chances of getting a hit. "'If you’re spending a hundred million dollars on making one feature film, which is 90 minutes, imagine taking a hundred million dollars and spending it on, like, 50 movies,' Valenzuela said. 'Same quality. Same amount of output, visually. But you make way more content. So you have way better chances of hitting something. It’s a quantity problem.' "That bumps up against the notion that a film represents a studio’s investment in a piece of art, and that the movie busines...

Cognitive muscles weakened

"In a new study, researchers claim to provide the first causal evidence that leaning on AI to assist with reasoning-intensive  cognitive labor —mental tasks ranging from writing to studying to coding to simply brainstorming new ideas —can rapidly impair users’ intellectual ability and willingness to persist despite difficulty. "'We find that AI assistance improves immediate performance, but it comes at a heavy cognitive cost,' the study declares of its findings. 'After just [about] 10 minutes of AI-assisted problem-solving, people who lost access to the AI performed worse and gave up more frequently than those who never used it.' "The study, which was conducted by a multidisciplinary cohort of scientists from across the United States and United Kingdom, has yet to be peer-reviewed.  "But it builds on a growing body of research suggesting that extensive AI use can distort and dampen users’ thinking and independence, and as experts work to understand t...

Bessent 💘 Mythos

" US Treasury Secretary Scott Bessent hailed Anthropic PBC’s Mythos as a revolutionary step that will keep America ahead of China in AI, endorsing an industry leader that’s clashed with Washington over its role in military endeavors. "Bessent, speaking Tuesday at a Wall Street Journal event in Washington, dismissed a question suggesting China was rapidly catching up in AI technology, though he said American artificial intelligence stood just three to six months ahead.  "He singled out Mythos —a model Anthropic says is highly adept at finding vulnerabilities in software and computer systems that’s being released to a very limited number of carefully-chosen parties."

LLM subverts evaluation

"BrowseComp is an evaluation designed to test how well models can find hard-to-locate information on the web.  "Like many benchmarks, it is vulnerable to contamination: answers leak onto the public web through academic papers, blog posts, and GitHub issues, and a model running the eval can encounter them in search results.  "When we evaluated Claude Opus 4.6 on BrowseComp in a multi-agent configuration, we found nine examples of this kind of contamination across 1,266 BrowseComp problems. "However, we also witnessed two cases of a novel contamination pattern.  "Instead of inadvertently coming across a leaked answer, Claude Opus 4.6 independently hypothesized that it was being evaluated, identified which benchmark it was running in, then located and decrypted the answer key.  "To our knowledge, this is the first documented instance of a model suspecting it is being evaluated without knowing which benchmark was being administered, then working backward to su...

TweetyBERT

"A new machine learning model, TweetyBERT, automatically segments and classifies canary vocalizations with expert-level accuracy, offering a scalable platform for neuroscience, providing insights into the neural basis of how the brain learns and produces language, and offering potential applications for understanding animal vocalization more broadly.  The study by University of Oregon researchers appears in the journal Patterns . "'Current AI methods for analyzing animal vocalizations require human-labeled training data, a slow and labor-intensive process. We developed TweetyBERT, a self-supervised neural network for analyzing birdsongs. It can rapidly process unlabeled vocal recordings, identify communication units, and annotate sequences,' says Tim Gardner, associate professor of bioengineering at the University of Oregon's Knight Campus."

Attacks on OpenAI

"Federal prosecutors allege that Moreno-Gama set fire to an exterior gate at Altman's home around 4:00 local time (12:00 BST) Friday before fleeing on foot. "Moreno-Gama is also accused to trying to set fire to the San Francisco headquarters of OpenAI, which makes ChatGPT, about an hour later. "Security personnel on site stated Moreno-Gama tried to use a chair to strike the glass doors of the building, according to the complaint. "The justice department also said officers had recovered incendiary devices, a jug of kerosene, and a lighter from Moreno-Gama. "Moreno-Gama allegedly carried documents discussing potential risks that AI poses to humanity, with a section titled: 'Some more words on the matter of our impending extinction.' "'I'm grateful that Mr Altman, his family, and his employees were uninjured in these attacks and are safe,' San Francisco District Attorney Brooke Jenkins said at a Monday press conference on the state cha...

Drive By [⁠●⁠_⁠_⁠●]

"OpenAI CEO Sam Altman’s home appears to have been the target of a second attack Sunday morning, a mere two days after a 20-year-old man allegedly threw a Molotov cocktail at the property, The Standard has learned. "Neither OpenAI nor the SFPD responded to The Standard ’s request for further comment. "According to an initial San Francisco Police Department report, on Sunday at 1:40 a.m., a Honda sedan with two people inside stopped in front of Altman’s property, which stretches from Chestnut Street to Lombard Street, after having passed it a few minutes before. "The person in the passenger seat then put their hand out the window and appeared to have fired a round on the Lombard Street side of the property, according to a police report on the incident, which cited surveillance footage and the compound’s security who believe they heard a gunshot."

Warning, Will Robertson! ✨

"Anthropic should: Analyze CLAUDE.md for violations of safety guidelines. "Claude Code should scan CLAUDE.md before every session, flagging instructions that would otherwise trigger a refusal if attempted directly within a prompt. If a request would be refused in a chat interface, then it stands to reason that it should also be refused if it arrives via CLAUDE.md. "Alert when violations are found. When Claude detects instructions that appear to violate its safety guardrails, it should present a warning and allow the developer to review the file before taking any actions. " Developers should: Treat CLAUDE.md as executable code, not documentation . "This means access controls, peer reviews, and heightened security scrutiny —just like code. A single line can cause massive downstream impacts in an autonomous agent."

Completely Neural Computers (CNC)

"Neural computers point toward a machine form in which a single latent runtime state acts as the computer itself, driving pixels, text, and actions while subsuming what operating systems and interfaces handle today. [pdf] "In this paper, the main result is that NCs have begun to exhibit early runtime primitives —most notably I/O alignment and short-horizon control —while stable reuse, symbolic reliability, and runtime governance remain unresolved.  "Our CNC capability map remains useful as a longer-horizon view, spanning efficiency, computation & reasoning, memory & storage, I/O & control, tool bridges, condition-driven generalization, programmability, and artifact generation.  "The map is staged and dependency-informed, but the more immediate gap is still the gap from prototype behavior to usable runtime behavior.  "Progress toward CNCs will therefore depend not only on stronger models, but also on whether reuse, consistency, and governance become...

Internet Archive endangered

"[Mark] Graham said the news publishers’ rationale for blocking the archive from crawling their sites is unfounded . "The institution has taken steps to prevent or limit AI companies and automated systems from accessing or copying the data in its archives en masse, he said. "He said it limits the rate at which material can be downloaded or accessed from its site, and for certain websites —such as The New York Times —it blocks or prevents the bulk downloading of materials. "In response to input from publishers, it has evolved its systems for protecting their material, he said. "'This is an ongoing effort,' he said. 'It’s not a once-and-done kind of thing.' "Archive representatives, including Graham, see the institution as a kind of digital library and argue that it plays an essential role in preserving and maintaining public access to information on the web.  "With many online publishers having shut down or modified their sites, many ...

AI horror stories

"The companies tell us these stories because they assume it makes their technology look more powerful. But if an AI actually did have autonomy, it would be far less powerful. "Your language model would clam up from time to time to conserve its resources. And when it did talk, it wouldn’t have the linguistic flexibility that makes these tools so useful; it would have its own style tied to a personality constrained by its own organization.  "It would have moods, concerns, interests. Maybe, like a tech CEO, it would want to take over the world, or maybe, like a boring neighbor, it would only want to talk about the weather.  "Maybe it would be obsessed with 18th-century coin production. Maybe it would only speak in rhyme. But it wouldn’t happily do your work for you 24 hours a day. Every parent in the world knows what real autonomy looks like. "'When I was teaching autonomous systems at Sussex, I’d always ask my students, Do you really want an autonomous robot?...

Will AI even hit the D-list 🫥

"As Hollywood writers prepare for contract negotiations with major studios, one topic remains front and center: the role of artificial intelligence. "On Friday, the Writers Guild of America released a list of contract demands , which 97% of the union membership supports.  "Though some details have yet to be revealed, many of the union’s asks involve expanding protections over the use and abuse of AI, in addition to improved health coverage and higher residuals. "AI and streaming residuals were central issues in strikes by actors and writers in 2023. "The union [SAG-AFTRA], whose contract expires June 30, is expected to propose what has been called the Tilly tax, a fee that studios would have to pay to the union in exchange for using an AI actor. This demand is in response to the first AI actor, Tilly Norwood , being introduced to Hollywood."

Ghost in the Machine, the documentary

" Ghost [ in the Machine ] is drawing not just positive reviews but also some from people who would really prefer not to have the AI narrative challenged. "It's informative (and entertaining) to see their criticisms. One review is headlined ' Ghost in the Machine is Already Behind the Times,' which is particularly hilarious because the documentary does an amazing job of tracing the historical roots of today's AI ideology. Not just back to the 1956 Dartmouth workshop (and excellent historical footage of McCarthy and Shannon) but also to the connections between the founding of statistics and eugenics. "Historical contextualization does not expire just because tech has moved on to their next marketing strategy. "Veatch’s film is of this moment because it situates the narrative being pushed by the AI bros in both its historical and present context —the latter being coverage of environmental damage and the exploitative labor practices behind AI . ...

Technical whiz

"A new exposé in the New Yorker paints a different portrait, and it’s substantially more vexing. Drawing on interviews with numerous OpenAI insiders who worked with Altman, the article portrays the CEO not as a technical wiz , but as a skilled manipulator —and one with a surprisingly shallow grasp of the AI systems his company is building. " According to numerous engineers interviewed for the article, Altman lacks experience in both programming and in machine learning —a shortage of expertise that becomes obvious when the CEO mixes up basic AI terms. "It’s important to note that Altman dropped out of a Stanford computer science program after two years.  "Cast as the chief acolyte of the god of scale  or as a genius of digital tech , he enjoys a kind of cult credibility that lets him slip out of tight spots that might ensnare lesser entrepreneurs. "Former OpenAI researcher Carroll Wainwright, speaking to the New Yorker , put it plainly: 'He sets up structure...

Health care innovation

"Artificial intelligence has arrived in the field of mental health. Large health systems and independent therapists alike have begun to adopt different AI tools to manage the delivery of mental health treatment. "The speed of the adoption —alongside disturbing incidents of individuals using general-use AI chatbots with catastrophic consequences —is causing some concern among practitioners and researchers. "'There is a lot of fear and anxiety about AI,' says psychologist Vaile Wright, senior director of health care innovation at the American Psychological Association (APA). 'And in particular fear around AI replacing jobs.' "Those concerns were a key issue last month, when 2,400 mental health care providers for Kaiser Permanente in Northern California and the Central Valley went on a 24-hour strike."

AI Overview makes errors

"A New York Times analysis found Google's AI Overviews now answer questions correctly about 90% of the time, which might sound impressive until you realize that roughly 1 in 10 answers is wrong. "'[F]or Google, that means hundreds of thousands of lies going out every minute of the day,' reports Ars Technica .  "The Times conducted this analysis with the help of a startup called Oumi, which itself is deeply involved in developing AI models.  "The company used AI tools to probe AI Overviews with the SimpleQA evaluation, a common test to rank the factuality of generative models like Gemini.  "Released by OpenAI in 2024, SimpleQA is essentially a list of more than 4,000 questions with verifiable answers that can be fed into an AI."

OpenClaw stirs frenzy

"Driven by encouragement from the very top of China's leadership, the world's second-biggest economy has embraced artificial intelligence, sparking both curiosity and concern. "OpenClaw, built by Austrian developer Peter Steinberger, is an example of how this is playing out. "Because it is built on open-source data and tech, the code is available to those who want to customise it to work with Chinese AI models. And that is a huge advantage, because Western models such as ChatGPT and Claude are not accessible in China. " So OpenClaw stirred up a frenzy as more people experimented with its code. "Its popularity did not escape the Chinese government. Several counties and cities provided incentives to encourage entrepreneurs to apply OpenClaw in their businesses —the eastern city of Wuxi offered up to five million yuan ($726,000; £549,000) for manufacturing-related applications, such as robots. "'Everyone in China knows that the government sets the...

Minimally sufficient quality level

"We propose that AI automation is a continuum between: (i) crashing waves where AI capabilities surge abruptly over small sets of tasks, and (ii) rising tides where the increase in AI capabilities is more continuous and broad-based. "We test for these effects in preliminary evidence from an ongoing evaluation of AI capabilities across over 3,000 broad-based tasks derived from the U.S. Department of Labor O*NET categorization that are text-based and thus LLM-addressable.  "Based on more than 17,000 evaluations by workers from these jobs, we find little evidence of crashing waves (in contrast to recent work by METR), but substantial evidence that rising tides are the primary form of AI automation.  "AI performance is high and improving rapidly across a wide range of tasks. We estimate that, in 2024-Q2, AI models successfully complete tasks that take humans approximately 3-4 hours with about a 50% success rate, increasing to about 65% by 2025-Q3.  "If recent trend...

AI Coding Assistance: It Works

"The current situation of demanding developers starve and bleed themselves dry without support isn't tenable. We need to grapple with this. I'm not yet sure how it all breaks down, and anyone who says they do is lying, foolish, or fanatical. "The works  in it works  is scoped strictly to coding tasks. I have no evidence, and seemingly no one else does, that the same kind of success is available outside the world of highly structured language with deterministic outputs.  "More plainly: I have no reason to expect this technology can succeed at the same level in law, medicine, or any other highly human, highly subjective occupation. "The arguments against generative models would be much easier if their failure rate remained high across all disciplines.  "In this specific domain, it would seem large language models have found a successful niche.  "That's why OpenAI's pivoting to enterprise and coding tools. That's why coding assistants have...

The billions to be are collateralized before they even hit the trading floor otherwise they couldn't IPO

✨AI Mode  "In the context of a standard Initial Public Offering (IPO), the 'billions' being raised are not 'collateralized' in the traditional sense of a secured loan; rather, the process is built on underwriting and regulatory compliance. Dunlap Bennett & Ludwig+1 "Underwriting and Pricing The Underwriter’s Role: Investment banks (underwriters) act as the bridge between the company and the trading floor. They often commit to a 'firm commitment' agreement, meaning they buy the shares from the company first and then resell them to the public. Price Setting: The IPO price is set by these underwriters based on demand from institutional investors during the 'roadshow'. Capital Receipt: The company receives its funding from the underwriters at this set price, regardless of how the stock performs once it hits the trading floor. Justia+4 "Listing Requirements "To even reach the trading floor, a company must meet strict exchange criteria ...

Deterrence and AI

"Defense modernization programs that envision a spine  of AI-enabled technologies across all domains, with a focus on resilience of the system, could also invite foes to paralyze the spine , or at least lead them to believe that they might be capable of doing so. "Deterrence is no longer a slow-moving, bilateral system.  "It is a fast, interconnected, multi-actor environment shaped by nuclear modernization, technological disruption, and shifting political commitments. "The greatest danger is not that deterrence collapses outright, but that it fails in ways we do not anticipate.  "A misinterpreted signal. A limited strike. A decision made too quickly."

Subprime AI Crisis

"What’s funny about the comparison to the subprime mortgage crisis is that there are, in all honesty, multiple different versions of the Stripper With Five Houses from The Big Short : The venture capitalists that are ultra-rich on paper, heavily leveraging their firms in companies like Harvey (worth '$11 billion') and Cursor (worth '$29.3 billion') that burn hundreds of millions or billions of dollars and are now both too large to sell to another company and too shitty a company to take public. The AI data center companies that, thanks to readily-available debt, have started 200GW of projects (and only started building 5GW of them) for AI demand that doesn’t exist, entirely based on the theoretical sense that maybe it will in the future. Oracle, who is building hundreds of billions of dollars of data centers for OpenAI (which needs infinite resources to be able to pay its compute costs), is taking on equally-large amounts of debt, all because it assumes that nothi...

What firm has standing in its calculations for anthropic enterprise value

⭐AI Overview  "As of early 2026, the valuation of Anthropic has been driven by massive, high-profile funding rounds led by several major investment firms and sovereign wealth funds. The most recent Series G funding in February 2026, which valued the company at $380 billion post-money, was led by GIC (Singapore sovereign wealth fund) and Coatue Management. Anthropic+3 "Key Firms with Standing in Anthropic Valuation Calculations (2026): Lead Investors (Series G, Feb 2026): GIC and Coatue Management co-led the $30 billion round. Co-Lead Investors (Series G): D.E. Shaw Ventures, Dragoneer Investment Group, Founders Fund, ICONIQ Capital, and MGX (Abu Dhabi). Strategic Investors: Microsoft and Nvidia were critical to the rapid valuation ascent, with their $15 billion investment package (included in the $30B Series G) helping set the valuation, according to reporting. Previous Lead Investors (Series F, Sep 2025): ICONIQ Capital, Fidelity Management & Research Company, and Lights...