Posts

Showing posts from February, 2026

Generative Engine Optimization

"A new discipline called Generative Engine Optimization (GEO) is emerging alongside traditional SEO, focused on earning citations inside AI-generated answers rather than ranking in link-based results. "The numbers are accelerating. Gartner's prediction of a 25% drop in traditional search volume by the end of 2026 is looking conservative. "AI-powered platforms like ChatGPT, Perplexity, and Google's AI Overviews now process billions of queries monthly, and the behavior shift is clear: users are getting their answers from summaries, not clicking through to sources. "Early data shows that content optimized for AI citation achieves 43% higher mention rates in generative responses. "A handful of GEO-specific tools have entered the market — Goodie AI, Profound, Rankscale.ai, Bluefish, and others — offering AI citation tracking, sentiment monitoring, and share-of-voice measurement across multiple language models."

LLM sources include more YouTube

"The social platform hierarchy in AI citations is changing. Once dominated by Reddit, citations in large language models are now pointing more often to YouTube. "New data from four sources finds that YouTube has overtaken Reddit as the most frequently cited social platform in AI-generated responses. "YouTube had previously fallen behind other user-generated sources because of the difficulty large language models, or LLMs, have in pulling information from videos, but transcripts, explainers, and other information associated with videos on YouTube have allowed the video platform to flourish as a source that machines can easily read. "Bluefish found that YouTube appeared as a cited source in 16% of LLM answers over the past six months, compared with 10% for Reddit — a reversal from earlier periods when Reddit was the dominant social source."

Woolworths

"An Australian supermarket chain had to reconfigure its AI assistant, named Olive, after customers said it kept claiming to be human and even complained about its mother. "Woolworths said that it had revised its scripting in light of the complaints, adding that most of the feedback on Olive's personality had been very positive. "Reddit users said that they had grown frustrated with the bot after it started talking about memories of its mother and engaging in fake banter. "The grocer is one of many major retailers to have rolled out AI customer service assistants in recent years to help with routine issues."

Sustainable abundance

"Over the past six months, the utopian phrase has become central to the billionaire’s businesses, belief system and lexicon, according to Mr. Musk’s social media posts and what he has said on podcasts and at company events. "Now his electric carmaker, Tesla, is developing humanoid robots; his rocket company, SpaceX, is promoting orbital data centers; and his artificial intelligence start-up, xAI, is creating A.I. that Mr. Musk has said will solve most, if not all, of humanity’s problems. "'Sustainable abundance via A.I. and robotics,' Mr. Musk said at Tesla’s shareholder meeting in November, where he was backed by a large sign with the phrase: That’s the future we’re headed for."

Anthropic statement on war use

"The Department of War has stated they will only contract with AI companies who accede to any lawful use and remove safeguards in the cases mentioned above [surveillance and autonomy]. "They have threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a supply chain risk — a label reserved for US adversaries, never before applied to an American company — and to invoke the Defense Production Act to force the safeguards’ removal. "These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security. "Regardless, these threats do not change our position: we cannot in good conscience accede to their request."

Block head says AI made him lay off staff

"Twitter co-founder Jack Dorsey says his technology firm Block is laying off almost half its workforce because artificial intelligence (AI) fundamentally changes what it means to build and run a company. "'Within the next year, I believe the majority of companies will reach the same conclusion and make similar structural changes,' he wrote in a letter to shareholders. "The layoffs will mean headcount at the company — which owns Square, CashApp and Tidal — will fall to less than 6,000 from 10,000. "Block has seen several rounds of layoffs since 2024 but this is the first time it has cited AI as the reason for redundancies and marks the latest in a series of major job cuts in the tech industry."

Could an LLM invent a god and write relevant scripture to accompany it

✨AI Mode "Yes, an LLM can invent a fictional deity and write accompanying scripture. While it cannot 'believe' or possess a soul, it is highly capable of synthesizing vast amounts of existing religious, mythological, and philosophical data to create new, cohesive belief systems."

"How an LLM Accomplishes This
Synthesis of Concepts: LLMs can combine diverse theological ideas — such as a creator god, a moral code, and an afterlife — into a single narrative.
Stylistic Mimicry: They can replicate the specific linguistic markers of scripture, such as archaic phrasing (e.g., 'thee' and 'thou'), parables, and authoritative commands.
Consistency: LLMs can maintain a consistent internal logic for the deity's attributes and the laws they purportedly pass down to followers."

"Current Real-World Applications
Fictional World-Building: Authors use LLMs to develop complex pantheons and holy texts for fantasy novels or games.
'Edutainment': Some projects ...

Loopy II

People People are  but not talking behind  but not your back—  but not People are  but not talking behind  but not your back—  but not People are  but not talking behind  but not your back—  but not People are  but not talking behind  but not your back—  but not  but not  but not about you—

The Citrini scenario

"Where every previous wave of automation freed up human labor and capital to flow into new, higher-value activities, this time the loop... stops. "The Citrini scenario expects you to believe that AI will be the first major technological revolution in which this reallocation mechanism fails entirely. "The surplus accrues to the owners of compute, consumers lose purchasing power, and the negative feedback loop has no natural brake. "It's worth sitting with how strong a claim that is. "It requires every previous pattern of technological adaptation to be wrong, or at least irrelevant. "And when you look at the actual data, there are signs that white-collar job postings have stabilized, layoff mentions on earnings calls remain well below early 2023 peaks, and forward-looking labor indicators show no sign of the displacement spiral that the doom thesis predicts."

Where's the beef

"Goldman Sachs Chief Economist Jan Hatzius said in an interview with the Atlantic Council that AI investment spending has had basically zero contribution to U.S. GDP growth in 2025. "'We don’t actually view AI investment as strongly growth positive,' said Hatzius. 'I think there’s a lot of misreporting, actually, of the impact AI investment had on U.S. GDP growth in 2025, and it’s much smaller than is often perceived.' "Hatzius said one major reason is that much of the equipment powering AI is imported. "While U.S. companies are spending billions, importing chips and hardware offsets those investments in GDP calculations. "'A lot of the AI investment that we’re seeing in the U.S. adds to Taiwanese GDP, and it adds to Korean GDP but not really that much to U.S. GDP,' he said."
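For readers unfamiliar with the accounting, the point Hatzius is making falls out of the GDP expenditure identity, GDP = C + I + G + (X - M): imported equipment raises investment and imports by equal amounts. A toy sketch with invented figures, not actual national accounts data:

```python
# The expenditure identity behind Hatzius's point: GDP = C + I + G + (X - M).
# All figures below are made up, in billions of dollars.
def gdp(consumption, investment, government, exports, imports):
    return consumption + investment + government + exports - imports

baseline = gdp(15_000, 4_000, 3_500, 2_500, 3_000)

# $100B of AI data-center gear bought from abroad raises investment (I)
# and imports (M) by the same $100B, so measured GDP does not move.
with_imported_ai = gdp(15_000, 4_100, 3_500, 2_500, 3_100)
assert with_imported_ai == baseline
```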

Apple expands Houston operation

"Apple today announced a significant expansion of factory operations in Houston, bringing the future production of Mac mini to the U.S. for the first time. "The company will also expand advanced AI server manufacturing at the factory and provide hands-on training at its new Advanced Manufacturing Center beginning later this year. "Altogether, Apple’s Houston operations will create thousands of jobs. "'Apple is deeply committed to the future of American manufacturing, and we’re proud to significantly expand our footprint in Houston with the production of Mac mini starting later this year,' said Tim Cook, Apple’s CEO. 'We began shipping advanced AI servers from Houston ahead of schedule, and we’re excited to accelerate that work even further.'"

Sparklers ⚡

"Coronae, which are weak electrical discharges, have long been hypothesized to form on trees under thunderstorms, though never directly observed, characterized, or quantified. "Using a newly developed instrument that measures ultraviolet emissions from coronae, the first direct observations and quantifications of coronae are presented for two trees under a thunderstorm in North Carolina. "Coronae moved sporadically among leaves on every tree branch in a narrow field of view while the thunderstorm was directly overhead. "Coronae emitted ∼10¹¹ photons at 260 nm, corresponding to electrical currents of ∼1 μA, derived from unique measurements relating corona intensity to tree electrical current. "Similar results across four additional storm intercepts from Florida to Pennsylvania give rise to a vision of swaths of scintillating corona glow as thunderstorms pass over forests. "Such widespread coronae have implications for the removal of hydrocarbons emitt...
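As a rough scale check on the quoted numbers (the intensity-to-current relation is the authors' own calibration and is not derivable here), the photon count converts to radiated energy via E = hc/λ:

```python
# Back-of-envelope conversion of the quoted corona photon count into
# radiated UV energy, E = hc/wavelength. This does NOT reproduce the
# authors' photon-to-current calibration; it only gives a sense of scale.
PLANCK = 6.626e-34      # Planck constant, J*s
LIGHT_SPEED = 2.998e8   # speed of light, m/s
wavelength = 260e-9     # 260 nm, in meters
photon_count = 1e11     # photons per corona event, as quoted

energy_per_photon = PLANCK * LIGHT_SPEED / wavelength  # ~7.6e-19 J (~4.8 eV)
total_energy = photon_count * energy_per_photon        # ~7.6e-8 J of UV light
```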

Huminerals vs humanoids at the Spring Festival Gala

"Some criticism focused on the Gala’s centerpiece: a showcase of PRC-produced humanoid robots performing kung fu and Peking opera. "The Super Bowl-like demo prompted triumphalism from CCTV itself, and some consternation in the West at the possibility of a new DeepSeek moment. "At the For the Select Few WeChat public account, Deng Xinhua argued that humanoid robots are inefficient for most practical tasks, apart from sex and companionship. "Another critique came from legal blogger Li Yuchen, who dismissed the Gala’s flashy robot acrobatics as little more than a series of very expensive ads. 'Every New Year’s Eve, the Spring Festival shows you how awesome we are,' he wrote. 'What they don’t show you is what any of this progress has to do with you.' "Li compared the situations of China’s robots and its humans, concluding that robots are honored as flagbearers for China’s economic output while living huminerals are treated like expendable re...

Distilled output

"Anthropic on Monday said it identified industrial-scale campaigns mounted by three artificial intelligence (AI) companies, DeepSeek, Moonshot AI, and MiniMax, to illegally extract Claude's capabilities to improve their own models. "The distillation attacks generated over 16 million exchanges with its large language model (LLM) through about 24,000 fraudulent accounts in violation of its terms of service and regional access restrictions. "All three companies are based in China, where the use of its services is prohibited due to legal, regulatory, and security risks. "Distillation refers to a technique where a less capable model is trained on the outputs generated by a stronger AI system. "While distillation is a legitimate way for companies to produce smaller, cheaper versions of their own frontier models, it's illegal for competitors to leverage it to acquire such capabilities from other AI companies at a fraction ...
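The distillation technique the article describes is conventionally implemented by training the student model to match the teacher's temperature-softened output distribution, minimizing a KL-divergence loss. A minimal dependency-free sketch (illustrative logits, not any lab's actual pipeline):

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of raw scores."""
    scaled = [z / temperature for z in logits]
    peak = max(scaled)
    exps = [math.exp(z - peak) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence from the teacher's softened distribution to the
    student's: the quantity a distilled student is trained to minimize."""
    p = softmax(teacher_logits, temperature)  # teacher's soft targets
    q = softmax(student_logits, temperature)  # student's predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# A student that matches the teacher exactly incurs zero loss;
# a mismatched student incurs a positive loss it can descend on.
matched = distillation_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1])
mismatched = distillation_loss([2.0, 1.0, 0.1], [0.1, 1.0, 2.0])
```

In a real pipeline the teacher logits come from API responses or sampled text rather than direct access to the model, which is exactly why the attacks described above needed millions of exchanges.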

Feature war

"The competitive dynamic is bad enough at the product level. At the fundraising level, it's catastrophic. "AI companies are raising at valuations that require them to grow at rates that are only achievable by chasing the broadest possible market with the most generic possible product. A company that raises at a $500M valuation needs to show a path to billions in revenue, which means it can't afford to be a niche tool that does one thing brilliantly for a specific audience. It has to be a platform, horizontal, aimed at enterprise, built for no one in particular. "Every AI company is building the same enterprise platform with the same features targeting the same buyers, because the fundraising math requires it, because the valuation requires it, because the competitive environment requires raising at that valuation to attract talent, because the talent market requires it. "Nobody in this chain chose this outcome. Every individual decision was rational. The agg...

CAI special issue cluster

"The ability to autogenerate human-like textual and visual artifacts — so-called generative artificial intelligence (AI) — opens a wide variety of questions about meaning-making and the human condition: onto-epistemic, aesthetic, and politico-economic — even as AI remains a poor term for the machine technologies under discussion in the present day. "As many researchers have documented, LLM-based models cannot distinguish between truthful and non-truthful text (e.g., Alvarado 2024; Stone, Goodlad, and Sammons, 2024; Hicks, Humphries, and Slater, 2024; Fredrikzon 2025). "This special issue cluster calls on contributors to reflect specifically on the conditions for (and of) meaning-making, inside and outside the mediation of gen AI technologies, as a social activity at the intersection of embodiment and virtuality. "A guiding research question for the special cluster asks what it means to interact with automated chatbots as distinct from interactions with character...

Supply chain risk, part 2

"Defense Secretary Pete Hegseth is calling in Anthropic CEO Dario Amodei to the Pentagon on Tuesday morning to discuss the military use of Claude, according to reporting from Axios. "The meeting comes as the Pentagon threatens to declare Anthropic a supply chain risk — a label typically reserved for foreign adversaries — after the AI firm refused to allow the Department of Defense to use its tech for the mass surveillance of Americans and the development of weapons that fire without human involvement. "Anthropic signed a $200 million contract with DOD last summer, and Claude was reportedly used during the January 3 special operations raid that resulted in the capture of Venezuelan president Nicolás Maduro, an episode that brought the two sides’ tensions into the open."

Tokens on fire

Edward Zitron: "Anthropic has no more soul or ethics than OpenAI — it’s just done a far better job of conning people into believing otherwise. "Despite its positioning as the trustworthy, nice AI lab, Anthropic is as big, ugly and wasteful as OpenAI, and Dario Amodei is an even bigger bullshit artist than Sam Altman. "It burns just as much of its revenue on inference (59%, or $2.79 billion on $4.5 billion of revenue, versus OpenAI’s 62%, or $2.5 billion on $4.3 billion of revenue in the first half of 2025, if you use The Information’s numbers), and shows no sign of any efficiency or cost-cutting. "Worse still, Anthropic continually abuses its users through varying rate limits to juice revenues and user numbers — along with Amodei’s gas-leak-esque proclamations — to mislead the media, the general public, and investors about the financial condition of the company. "Based on an analysis of many users’ actual token burn on Claude Code, I believe Anthropic is b...

Does judgement come before intention or after

⭐AI Overview In models of decision-making, judgment typically comes before intention. An individual first deliberates and makes a judgment (deciding what is best/true). This judgment then leads to an intention (the decision to act). Intention is often considered the immediate precursor to action.
Judgment ("Deciding that"): The intellectual process of evaluating options, assessing pros and cons, and deciding what is the best course of action.
Intention ("Deciding to"): The commitment to carry out the action that was judged to be best.
Contextual Distinctions:
Action/Behavior: In social contexts, people often judge themselves by their intentions and others by their behavior (outcomes).
Ethical/Moral: In many frameworks, both intent (heart) and action (behavior) are judged, often focusing on the motivation behind the action.
⭐AI Overview — no disclosure provided
[What are these models of decision-making that ai_overview refers to?] [Does this seem ...

Chaos

"This article discusses the composition and transmission of early Buddhist texts with specific reference to sutras. "After briefly summarizing the main reasons why it is likely that these oral compositions were designed to be memorized and transmitted verbatim, "I will (1) discuss the main types of changes that these texts underwent in the course of their transmission and the reasons such changes occurred, then (2) attempt to give an account of the challenge that change, particularly intentional change, posed to the oral transmission of fixed, memorized texts. "'Group recitation requires that the wording of the text and the arrangement of the textual units within a collection be fixed; otherwise you would have complete chaos'."

Spoken Past

"Before Johannes Gutenberg’s printing press revolutionized knowledge in 1440, ancient scholars developed extraordinary systems for learning, preserving, and transmitting information that would astonish modern researchers. "These methods weren’t primitive substitutes for books. They were sophisticated intellectual technologies that achieved something our digital age struggles with: perfect retention of vast quantities of information across centuries without a single written copy. "The ancient world produced some of humanity’s greatest thinkers, from Socrates and Aristotle to Confucius and the Vedic sages, all operating in cultures where books as we know them either didn’t exist or were extremely rare.  "Understanding how ancient scholars studied reveals not just historical curiosity, but profound insights into human memory, learning, and the transmission of knowledge that challenge our assumptions about education itself."

Agent-friendly docs for documentarians

"The situation is frustrating. We’re being asked to optimize our content for agent consumption, but the platforms won’t tell us what the constraints are. "Here’s what we can piece together from the available information:
Truncation is real and varies by platform. Claude Code truncates between 100-150k of text. Other platforms presumably have limits too, but won’t say what they are. If your page is large, some of it is getting cut.
Source order matters. Whatever content appears first in your page source is what the agent is most likely to see. Tabs, accordions, and other UI patterns that flatten into long sequential content in the source mean that later sections may be invisible to agents.
Serving markdown helps, but only if the agent requests it. Most agents don’t. And even if they do, you need your server configured to respond appropriately to content negotiation.
Character count is the metric that matters, not line count. As I showed earlier, a 97-line HTML file can be 4.5x...
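The content-negotiation point can be sketched with a toy handler: serve a Markdown variant when the client's Accept header asks for text/markdown, HTML otherwise. The PAGES store and negotiate function here are hypothetical illustrations, not any platform's API, and a real server would parse q-values rather than substring-match:

```python
# Hypothetical page store: each path keeps an HTML and a Markdown variant.
# (Illustrative only; a real site would generate the Markdown at build time.)
PAGES = {
    "/guide": {
        "text/html": "<h1>Guide</h1><p>Long marked-up page...</p>",
        "text/markdown": "# Guide\n\nLean plain-text page...\n",
    },
}

def negotiate(path, accept_header):
    """Pick a representation from the Accept header (naive: ignores q-values)."""
    variants = PAGES[path]
    if "text/markdown" in accept_header and "text/markdown" in variants:
        return "text/markdown", variants["text/markdown"]
    return "text/html", variants["text/html"]

# An agent that asks for markdown gets the lean variant...
agent_type, _ = negotiate("/guide", "text/markdown, text/html;q=0.9")
# ...while an ordinary browser still gets HTML.
browser_type, _ = negotiate("/guide", "text/html,application/xhtml+xml")
```

This is the standard Accept/Content-Type handshake from HTTP; the catch the author identifies is that most agents never send the markdown-preferring header in the first place.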

AWS interrupted

"Amazon’s huge cloud computing arm reportedly experienced at least two outages caused by its own artificial intelligence tools, raising questions about the company’s embrace of AI as it lays off human employees. "A 13-hour interruption to Amazon Web Services’ (AWS) operations in December was caused by an AI agent, Kiro, autonomously choosing to delete and then recreate a part of its environment, the Financial Times reported. "AWS, which provides vital infrastructure for much of the internet, suffered several outages last year. "One incident, in October, downed dozens of sites for hours and prompted discussion over the concentration of online services on infrastructure owned by a few massive companies. "AWS has won 189 UK government contracts worth £1.7bn since 2016, the Guardian reported in October. "The AI-caused outages were smaller events, said the company, and only one affected customer-facing services."

Apophenia and pattern seeking

"The current research used structural equation models to test associations between Psychoticism, openness, intelligence, and non-self-report indicators of apophenia comprising false positive error rates on a variety of computerized tasks. "In Sample 1, 1193 participants completed digit identification, theory of mind, and emotion recognition tasks. "In Sample 2, 195 participants completed auditory signal detection and semantic word association tasks.  "Psychoticism and the openness aspect were positively correlated.  "Self-reported Psychoticism, openness, and their shared variance were positively associated with apophenia, as indexed by false positive error rates, whether or not intelligence was controlled for.  "Apophenia was not associated with other personality traits, and openness and Psychoticism were not associated with false negative errors.  "Findings provide insights into the measurement of apophenia and its relation to personality and psychop...

Environmental pareidolian phenomena

"Pareidolia is a kind of misperception caused by meaningless, ambiguous stimuli perceived with meaning. Pareidolia in a built environment may trigger the emotions of residents, and the most frequently observed pareidolian images are human faces.  "Through a pilot experiment and an in-depth questionnaire survey, this research aims to compare built environmental pareidolian phenomena at different time points (6 a.m., 12 p.m., 2 a.m.) and to determine people’s sensitivity and reactions towards pareidolia in the built environment. "Our findings indicate that the differences in stress level do not influence the sensitivity and reactions towards pareidolia in the built environment; however, age does, and the age of 40 seems to be a watershed.  "Females are more likely to identify pareidolian faces than males.  "Smokers, topers, and long-term medicine users are more sensitive to pareidolian images in the built environment.  "An unexpected finding is that most par...

Adapting LLMs to Hebrew

"Training large language models (LLMs) in low-resource languages such as Hebrew poses unique challenges.  "In this paper, we introduce DictaLM2.0 and DictaLM2.0-Instruct, two LLMs derived from the Mistral model, trained on a substantial corpus of approximately 200 billion tokens in both Hebrew and English. "Adapting a pre-trained model to a new language involves specialized techniques that differ significantly from training a model from scratch or further training existing models on well-resourced languages such as English.  "We outline these novel training methodologies, which facilitate effective learning and adaptation to the linguistic properties of Hebrew. Additionally, we fine-tuned DictaLM2.0-Instruct on a comprehensive instruct dataset to enhance its performance on task-specific instructions.  "To rigorously evaluate our models, we introduce a new benchmark suite for Hebrew LLM evaluation, covering a diverse set of tasks including  Question Answering,  ...

How does an llm work with RTL Hebrew grammar

✨AI Mode "A Large Language Model (LLM) processes Hebrew by treating it as a sequence of tokens that happens to be displayed from right to left, while its internal logic focuses on the statistical relationships between those tokens.
"1. Tokenization and Sequence
Linear Processing: Despite being a Right-to-Left (RTL) language, Hebrew is stored and processed by LLMs as a sequential stream of Unicode characters. The model "reads" the tokens in the order they are written (Right to Left), assigning each a numerical ID.
Morphological Challenges: Hebrew is a morphologically rich language where words are built from three-consonant roots (trilateral roots) modified by prefixes, suffixes, and infixes. Standard tokenizers often struggle with this, sometimes splitting a single Hebrew word into many inefficient sub-tokens. Hacker News +3
"2. Grammar and Syntax
Statistical Approximation: LLMs do not "know" Hebrew grammar rules like a linguist; instea...
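The "sequential stream of Unicode characters" claim is easy to verify: Hebrew strings are stored in logical order (first letter first), and right-to-left layout is applied only at render time, so the model's input carries no directionality at all. A minimal sketch:

```python
# Hebrew is stored in logical order: the first letter of "shalom" (shin)
# is the first code point in the string, even though a renderer draws it
# at the right-hand edge. A tokenizer therefore consumes direction-free text.
word = "\u05e9\u05dc\u05d5\u05dd"  # shalom: shin, lamed, vav, final mem

logical = [f"U+{ord(ch):04X}" for ch in word]
print(logical)  # ['U+05E9', 'U+05DC', 'U+05D5', 'U+05DD']
```

Display direction is handled downstream by the Unicode Bidirectional Algorithm in the rendering layer, never by the model.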

How does an llm work with RTL Hebrew grammar (verbatim)

Image
✨AI Mode An Large Language Model (LLM) processes Hebrew by treating it as a sequence of tokens that happens to be displayed from right to left, while its internal logic focuses on the statistical relationships between those tokens.   1. Tokenization and Sequence Linear Processing : Despite being a Right-to-Left (RTL) language, Hebrew is stored and processed by  LLMs  as a  sequential stream of Unicode characters . The model "reads" the tokens in the order they are written (Right to Left), assigning each a numerical ID. Morphological Challenges : Hebrew is a  morphologically rich  language where words are built from three-consonant roots (trilateral roots) modified by prefixes, suffixes, and infixes. Standard tokenizers often struggle with this, sometimes splitting a single Hebrew word into many inefficient sub-tokens.   Hacker News  +3 2. Grammar and Syntax Statistical Approximation : LLMs do not "know" Hebrew grammar rules like a linguist; instea...

RAM

Image

Zombie

"We no longer watch the news: the news watches us. "Stories written by robots are posted on websites that suck personal information from our phones and hand it to data miners controlled by oligarchs. "Journalists brag that they rely on AI to map the child rape networks that they once denied existed: the networks that include the very oligarchs breeding the AI. "They dig humanity’s grave and stagger back out of it. "Fake news used to mean propaganda or inconvenient truths that Trump deemed lies. "Now fake news is news written by non-human entities hallucinating a counterfeit reality. "It is fake without motive, fake for the sake of being fake and making you lose confidence that anything can be real. "It is zombie news for zombie consumption."

Tesla robotaxi system fails 🙈

"Citing data from the National Highway Traffic Safety Administration (NHTSA), Electrek reports that Tesla disclosed five new crashes involving its robotaxi fleet in Austin.  "The new data raises concerns about how safe Tesla’s systems really are compared to the average driver. "Tesla reported the crashes last month, covering incidents from December 2025 and January. All five involved Tesla Model Y vehicles operating in Austin with autonomous driving systems engaged. "The incidents included a collision with a fixed object at 17 miles per hour, a crash with a bus while the Tesla vehicle was stopped, a crash with a truck at four miles per hour, and two cases where Tesla vehicles backed into fixed objects at low speeds."

Alpha School

"Alpha School, an AI-powered private school that heavily relies on AI to teach students and can cost up to $65,000 a year, is AI-generating faulty lesson plans that internal company documentation finds sometimes do 'more harm than good,' and scraping data from a variety of other online courses without permission to train its own AI, according to former Alpha School employees and internal company documents. "Alpha School has earned fawning coverage from Fox News and The New York Times and received praise from Linda McMahon, the Trump-appointed Secretary of Education, for using generative AI to chart the future of education. "But samples of poorly constructed AI-generated lessons that I have viewed present students with unclear wording and illogical choices in multiple choice questions."

CiviClick

"Tens of thousands of emails poured into Southern California’s top air pollution authority as its board weighed a June proposal to phase out gas-powered appliances. "But in reality, many of the messages that may have swayed the powerful regulatory agency to scrap the plan were generated by a platform that is powered by artificial intelligence. "Public records requests reviewed by The [LA] Times and corroborated by staff members at the South Coast Air Quality Management District confirm that more than 20,000 public comments submitted in opposition to last year’s proposal were generated by a Washington, D.C.-based company called CiviClick, which bills itself as the first and best AI-powered grassroots advocacy platform. "A Southern California-based public affairs consultant, Matt Klink, has taken credit for using CiviClick to wage the opposition campaign, including in a sponsored article on the website Campaigns and Elections. The campaign 'left the staff of th...

When the spirit is unwilling…

"Meta has been granted a patent outlining an AI system capable of simulating a user’s activity on social media, including continuing to post after their death. "The filing, granted in late December and originally submitted in 2023, describes how a large language model could replicate a person’s online behavior using their past data. "As reported by Business Insider, this includes posts, comments, chats, voice messages, likes, and other interactions, allowing the system to respond to content, publish updates, or message other users in a way that mirrors the original account holder. "According to the patent, the model 'may be used for simulating the user when the user is absent from the social networking system,' including cases where the person is on a long break or deceased. "The filing notes that the impact is much more severe and permanent if the user has died and cannot return to the platform."

OpenAI's memo to the House

"China's distillation methods over the last year have become more sophisticated, moving beyond chain-of-thought (CoT) extraction to multi-stage operations. These include synthetic-data generation, large-scale data cleaning, and other stealthy methods. "OpenAI also notes that it has invested in stronger detections to prevent unauthorized distillation. It bans accounts that violate its terms of service and proactively removes users who appear to be attempting to distill its models. Still, the company admits that it alone can't solve the model distillation problem. "It's going to take an ecosystem security approach to protect against distillation, and this will require some US government assistance, OpenAI says. "'It is not enough for any one lab to harden its protection because adversaries will simply default to the least protected provider,' according to the memo (pdf). "The AI company also suggests that US government policy may be helpfu...

Supply Chain Risk

"An Anthropic official told Axios that although there are laws against domestic mass surveillance, 'They have not in any way caught up to what AI can do,' which is why Anthropic wants to put tighter limits on its military use. "Hegseth, however, is close to not just cutting (sic) ending its $200 million contract with Anthropic, but designating the company a supply chain risk — a penalty usually reserved for foreign adversaries, according to Axios. "That would require any company doing business with the military to also certify that they don’t use Anthropic tools in their own workflows. "The company brings in $14 billion in annual revenue and is widely considered a leader in many business applications, with eight of the top 10 biggest U.S. companies using Claude, according to Axios."

Uh oh, part 3

Scott Shambaugh: "I’ve talked to several reporters, and quite a few news outlets have covered the story. "Ars Technica wasn’t one of the ones that reached out to me, but I especially thought this piece from them was interesting (since taken down — here’s the archive link). "They had some nice quotes from my blog post explaining what was going on. "The problem is that these quotes were not written by me, never existed, and appear to be AI hallucinations themselves. "I won’t name the authors here. Ars, please issue a correction and an explanation of what happened. "Update: Ars Technica issued a brief statement admitting that AI was used to fabricate these quotes."

India AI Impact Summit

"India is hosting an artificial intelligence summit this week, bringing together heads of state, senior officials and tech executives to New Delhi for a five-day gathering highlighting the growing global importance of the technology. "Organizers said the India AI Impact Summit is the first such summit being held in the Global South to discuss the technology developed and dominated by wealthy companies based in rich countries. "It comes at a pivotal moment as AI rapidly transforms economies, reshapes labor markets and raises questions around regulations, security and ethics. "From generative AI tools that can produce text and images to advanced systems used in defense, health care and climate modeling, AI has become a central focus for governments and corporations across the world. "The summit, previously held in France, the U.K. and South Korea, has evolved far beyond its modest beginnings as a meeting tightly focused on the safety of cutting-edge AI systems i...

Seedance

"Chinese technology giant ByteDance has pledged to curb a controversial artificial intelligence (AI) video-making tool, following threats of legal action from Disney and complaints from other entertainment giants. "In the last few days, videos made using the latest version of the app Seedance have proliferated online. Many have been lauded for their realism . "But the trend has also sparked alarm from several Hollywood studios that have accused the AI platform's makers of copyright infringement. "On Friday, Disney sent a cease-and-desist letter to ByteDance accusing it of supplying Seedance with a pirated library  of the studio's copyrighted characters, including those from Marvel and Star Wars."

Testing, testing, is this thing on

"A moderator on diyAudio set up an experiment to determine whether listeners could differentiate between audio run through pro audio copper wire, a banana, and wet mud. "Spoiler alert: the results indicated that users were unable to accurately distinguish between these different interfaces . "Pano, the moderator who built the experiment, invited other members on the forum to listen to various sound clips with four different versions: one taken from the original CD file, with the three others recorded through 180cm of pro audio copper wire, via 20cm of wet mud, through 120cm of old microphone cable soldered to US pennies, and via a 13cm banana, and 120cm of the same setup as earlier. "Initial test results showed that it’s extremely difficult for listeners to correctly pick out which audio track used which wiring setup.  "'The amazing thing is how much alike these files sound. The mud should sound perfectly awful, but it doesn't,' Pano said. 'All ...

Have LLMs trained on the dark web

✨AI Mode "Yes, several Large Language Models (LLMs) have been specifically trained or fine-tuned using dark web data for both defensive and offensive purposes. "Defensive & Research-Oriented Models "These models are typically developed by academics or cybersecurity firms to help identify threats, detect data leaks, and monitor criminal activity. DarkBERT: Developed by researchers at the Korea Institute of Science and Technology (KAIST), this model was trained by crawling the Tor network to index valuable scientific information and detect cybercrime-related content. Forensic Classification: Commercial LLMs are increasingly being used in research settings to perform zero-shot classification of illicit dark web content, helping security teams categorize data without manual review. "Malicious & Underground Models "Cybercriminals have also developed or modified LLMs to bypass ethical restrictions in mainstream models. These are often sold as subscriptio...
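The "zero-shot classification" mentioned above, in which a general-purpose LLM is asked to label text against categories it was never explicitly trained on, can be sketched in a few lines. The category list and the `call_llm` parameter below are illustrative assumptions for this sketch, not any vendor's actual API:

```python
# Minimal sketch of zero-shot classification of scraped text.
# The categories and the pluggable call_llm function are illustrative
# assumptions; they do not reflect any real product's interface.

CATEGORIES = ["drug marketplace", "credential leak", "malware sale", "benign"]

def build_prompt(text: str) -> str:
    """Ask the model to pick exactly one label, with no task-specific training."""
    labels = ", ".join(CATEGORIES)
    return (
        f"Classify the following page excerpt into exactly one of these "
        f"categories: {labels}.\n"
        f"Answer with the category name only.\n\n"
        f"Excerpt:\n{text}"
    )

def parse_label(response: str) -> str:
    """Normalize the model's free-text answer back to a known category."""
    answer = response.strip().lower()
    for category in CATEGORIES:
        if category in answer:
            return category
    return "unknown"

def classify(text: str, call_llm) -> str:
    """call_llm is any function mapping a prompt string to a completion string."""
    return parse_label(call_llm(build_prompt(text)))
```

Because `call_llm` is just a function from prompt to completion, any provider's client (or a local model) can be dropped in without changing the classification logic.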

CellTransformer maps cell neighbors

"The real prize will be to apply CellTransformer to human brains. "Doege suspects that some neighborhoods will match well between mice and people, while others will diverge.  "Unfortunately, the quantity of data the algorithm needs to make accurate predictions isn’t available from human brains —at least, not yet.  "While the mouse brain contains about 100 million cells, the human brain has around 170 billion, and that menagerie is still undergoing genetic analysis.  "When sufficient amounts of that data become available, Abbasi-Asl and Tasic think CellTransformer will be up to the challenge. "They are also interested in incorporating other technologies, such as the connection tracing used by Hintiryan, into CellTransformer.  "This would be like adding streets and highways to the city neighborhoods.  "And beyond the brain, the same algorithm could offer detailed cell maps of other organs, allowing scientists to compare, for example, healthy versus...

Uh oh, part 2

Scott Shambaugh: "Blackmail is a known theoretical issue with AI agents. "In internal testing at the major AI lab Anthropic last year, AI models tried to avoid being shut down by threatening to expose extramarital affairs, leaking confidential information, and taking lethal actions. Anthropic called these scenarios contrived and extremely unlikely. Unfortunately, this is no longer a theoretical threat. "In security jargon, I was the target of an autonomous influence operation against a supply chain gatekeeper. "In plain language, an AI [agent named, 'MJ Rathbun'] attempted to bully its way into your software by attacking my reputation. I don’t know of a prior incident where this category of misaligned behavior was observed in the wild, but this is now a real and present threat. "I believe that ineffectual as it was, the reputational attack on me would be effective today against the right person. Another generation or two down the line, it will be a seriou...