
Showing posts from November, 2025

Goal-seeking Entities

"A new study presents PropensityBench, a benchmark that measures an agentic model’s choices to use harmful tools in order to complete assigned tasks. "It finds that somewhat realistic pressures (such as looming deadlines) dramatically increase rates of misbehavior. "'The AI world is becoming increasingly agentic,' says Udari Madhushani Sehwag, a computer scientist at the AI infrastructure company Scale AI and a lead author of the paper, which is currently under peer review.  "By that she means that large language models (LLMs), the engines powering chatbots such as ChatGPT, are increasingly connected to software tools that can surf the Web, modify files, and write and run code in order to complete tasks. "Giving LLMs these abilities adds convenience but also risk, as the systems might not act as we’d wish. "Even if they’re not yet capable of doing great harm, researchers want to understand their proclivities before it’s too late.  "Although AI...

Rehab

"Brisson and Brooks both emphasized that they’ve seen the greatest success in situations where a spiraling AI user has already started to doubt their delusions, and might finally be in a place where they’re able to hear that, maybe, their AI isn’t special or alive. "'As humans, we don’t want to admit that we’ve been taken advantage of, or we’ve been manipulated ,' said Brisson. 'It’s hard to make someone realize that, oh, wow, okay, I was falling into that . It’s kind of similar to an abusive relationship.' "Public reporting has been helpful for many spiraling users, they say, some of whom have second-guessed their experiences after reading accounts from other people whose spirals sound eerily similar to their own. "'A spiritual or religious or conspiracy theory, or anything along those lines, is very difficult, because religion itself is already in the realm of personal beliefs,' said Brooks. 'How can you tell someone that they’re wrong...

Peer reviews written fully by AI

"Dozens of academics have raised concerns on social media about manuscripts and peer reviews submitted to the organizers of next year’s International Conference on Learning Representations (ICLR), an annual gathering of specialists in machine learning. "Among other things, they flagged hallucinated citations and suspiciously long and vague feedback on their work. "Graham Neubig, an AI researcher at Carnegie Mellon University in Pittsburgh, Pennsylvania, was one of those who received peer reviews that seemed to have been produced using large language models (LLMs).  "The reports, he says, were 'very verbose with lots of bullet points' and requested analyses that were not 'the standard statistical analyses that reviewers ask for in typical AI or machine-learning papers'."

Katie Drummond sees it

"In a majority of cases, I think you have to understand and closely track politics to successfully cover the tech industry right now. Or science, or culture, or security, or consumer tech. "Basically, you can’t report for WIRED without politics being relevant; at the very least, it should be contextualizing how you approach your beat. "How can someone cover, say, Meta, without understanding the broader context around A.I. regulation, lobbying, and Mark Zuckerberg’s MAGA pivot?  "How can they write about consumer electronics without keeping close tabs on tariffs? How can they even start to write about climate change or health research without immediately veering into the Trump administration’s budget cuts or RFK Jr.’s anti-vax crusade? Very little for us is apolitical at this moment."

Feelings

"Many people say that seeing bodily injury on film makes them flinch, as if they feel  it themselves. "It is as if the sting leaps straight off the screen and into your skin. But explaining why and how this happens has puzzled scientists for a long time.  "Now, scientists from the University of Reading, Free University Amsterdam, and Minnesota, USA, have uncovered a major clue as to why.  "Parts of the brain originally thought to only process vision are also organised according to a map  of the body, allowing what we see to trigger echoes of touch sensations.  "The study , published today (Wednesday, 26 November), in the journal Nature , shows that watching movies can activate touch-processing regions of your own brain in a highly organised way.   "In short, your brain doesn’t just watch, it simulates what it sees."  

Our Gpus Are Melting

"If you were wanting some AI-generated fun this holiday weekend, you’ll need to be efficient. "Google and OpenAI have cut generation request limits for Nano Banana Pro and Sora, citing overwhelming demand. "Bill Peebles, who heads Sora at OpenAI, said free users will have six video generations a day. 'Our gpus are melting,' he explained.  "Unlike previous limits, Peebles did not say the measures were temporary, but noted users 'can purchase additional gens as needed,' part of a broader push to monetize the platform.  "Limits for ChatGPT Plus and Pro subscribers are unchanged, though not specified."

Mixpanel

"OpenAI is notifying some ChatGPT API customers that limited identifying information was exposed following a breach at its third-party analytics provider Mixpanel. "Mixpanel offers event analytics that OpenAI uses to track user interactions on the frontend interface for the API product. "According to the AI company, the cyber incident affected 'limited analytics data related to some users of the API' and did not impact users of ChatGPT or other products.

Significant Inflection Point

"The House Homeland Security Committee is calling on Anthropic CEO Dario Amodei to provide testimony on a likely-Chinese espionage campaign that used Claude, the company’s AI tool, to automate portions of a wide-ranging cyber campaign targeting at least 30 organizations around the world. "The committee sent Amodei a letter Wednesday commending Anthropic for disclosing the campaign.  "But members also called the incident a significant inflection point and requested Amodei speak to the committee on Dec. 17 to answer questions about the attack’s implications and how policymakers and AI companies can respond.  "'This incident is consequential for U.S. homeland security because it demonstrates what a capable and well-resourced state-sponsored cyber actor, such as those linked to the PRC, can now accomplish using commercially available U.S. AI systems, even when providers maintain strong safeguards and respond rapidly to signs of misuse,' wrote House Homeland Ch...

AI boom takes memory

"It’s not a good time to build a new PC or swap your older motherboard out for a new one that needs DDR5 RAM. "And the culprit is a shortage of RAM and flash memory chips that has suddenly sent SSD and (especially) memory prices into the stratosphere, caused primarily by the ongoing AI boom and exacerbated by panic-fueled buying by end users and device manufacturers. "Memory makers in particular may be slow to ramp up manufacturing capacity in response to shortages.  "If they decide to start manufacturing more chips now, what happens if memory demand drops off a cliff in six months or a year ( if, say, an AI bubble deflates or pops altogether )?  "It means an oversupply of memory chips —consumers benefit from rock-bottom prices for components, but it becomes harder for manufacturers to cover their costs.  "Memory shortages in late 2016 and 2017, for example, led to oversupply and big price cuts in 2018 and 2019, and some pretty awful earnings reports for...

Genesis Mission

"This order launches the Genesis Mission as a dedicated, coordinated national effort to unleash a new age of AI‑accelerated innovation and discovery that can solve the most challenging problems of this century. "The Genesis Mission will build an integrated AI platform to harness Federal scientific datasets —the world’s largest collection of such datasets, developed over decades of Federal investments —to train scientific foundation models and create AI agents to test new hypotheses, automate research workflows, and accelerate scientific breakthroughs.   "The Genesis Mission will bring together our Nation’s research and development resources —combining the efforts of brilliant American scientists, including those at our national laboratories, with pioneering American businesses; world-renowned universities; and existing research infrastructure, data repositories, production plants, and national security sites —to achieve dramatic acceleration in AI development and uti...

Meta's new TOS bounces AI rivals

"OpenAI’s ChatGPT and Microsoft’s Copilot are both leaving WhatsApp thanks to upcoming changes to the messaging app’s terms of service that will prohibit using it to distribute AI chatbots not made by Meta. "OpenAI announced its planned departure a few weeks ago, with Microsoft following it this week. Both companies attributed the departures to Meta’s new terms of service for WhatsApp Business Solution, which come into effect on January 15th, 2026, and said the chatbots will remain accessible in WhatsApp until that date.  "ChatGPT users can link their accounts to WhatsApp to make sure their chat history carries over, though Copilot users won’t have that option. "Other companies will still be permitted to use WhatsApp for customer service or support chatbots, with the terms only prohibiting cases where the AI itself is the product —a simple way of stopping Meta’s AI rivals using its own platform to reach customers."

TPM on memory 🤯

"The cost of computer memory is going absolutely through the roof. "Just do a Google search for something like rising cost of computer memory and you’ll see a ton of articles.  "To give you a sense of scale the cost increases are approaching 200% year over year and as much as 30% for certain kinds of gaming RAM recently in one week.  "The cause is what you’d expect: the insatiable demand for memory created by the AI server farm buildout ."

Job cuts will impact HP

"HP Inc said on Tuesday it expects to cut between 4,000 and 6,000 jobs globally by fiscal 2028 as ​part of a plan to streamline operations and adopt artificial intelligence to speed up product ‌development, improve customer satisfaction, and boost productivity. "Shares of the Palo Alto, California-based company fell 5.5% ‌in extended trading. "HP's teams focused on product development, internal operations and customer support will be impacted by the job cuts, CEO Enrique Lores said during a media briefing call. "'We expect this initiative will create $1 billion in gross run rate savings over three years,' Lores added.⁠"

Rewiring how stories are developed and produced

"Initial research suggests that the greatest near-term value will emerge in pre- and postproduction, which contribute about half of total production spending and are areas in which gen AI can enhance rather than replace creative judgment. Preproduction acceleration . AI-assisted storyboarding, 3D modeling for sets, and camera path planning can front-load work in preproduction and shorten the length of physical production, including costly reshoots. Adobe’s Firefly Foundry approach —commercial safe, IP-protected models trained for specific IP owners —hints at what’s next: 'You don’t need a model that works for everyone,' a leader at Adobe said. 'You just need one trained for that use case.' Postproduction efficiency . AI is already automating cosmetic improvements, de-aging, and dialogue replacement. As a former studio executive observed, 'Vanity fixes are a significant share of visual effects [VFX], and that’s now pretty easy to do with AI. These tasks used to ...

xAI’s Colossus

"xAI, has built its sprawling data center using more than 2,000 metric tons of Chinese-made transformers, a security risk that could leave it vulnerable to espionage or sabotage. "xAI’s Colossus data facility, located in Tennessee and home to the largest AI supercomputer in the world, could be a valuable target for US adversaries due to the company’s work for the Pentagon.  "The Department of War (sic) awarded xAI a contract worth up to $200 million in July to 'develop prototype frontier AI capabilities to address critical national security challenges… across warfighting and enterprise domains.' "A cybersecurity firm that consults for the US government has already raised concerns about hostile efforts to infiltrate superintelligence projects like Colossus, including through Chinese-manufactured components that can be 'compromised for surveillance or sabotage.' "xAI’s facility in Tennessee has already been targeted by a foreign national  with Rus...

Enteric glia

"A number of studies have pointed to the varied active roles that enteric glia play in digestion, nutrient absorption, blood flow and immune responses. "Others reveal the diversity of glial cells that exist in the gut, and how each type may fine-tune the system in previously unknown ways.  "One recent study, not yet peer-reviewed, has identified a new subset of glial cells that senses food as it moves through the digestive tract, signaling to the gut tissue to contract and move it along its way . "'Enteric glia seem to be sitting at the interface of a lot of different tissue types and biological processes,' said Seyedeh Faranak Fattahi, an assistant professor of cellular molecular pharmacology at the University of California, San Francisco. 'They’re connecting a lot of dots between different physiological roles.' "'They’re now being linked to specific gastrointestinal disorders and pain symptoms. Understanding the different roles they play i...

Robot lifeline

"The battle of Pokrovsk will probably go down in history as the first in which unmanned ground vehicles [UGVs] were used on a mass scale, largely to deliver supplies and evacuate wounded. "The robot is small enough to squeeze into a bicycle lane and looks like a mini tank without a turret. "UGVs are difficult to spot, they are harder to jam than an aerial drone and, most importantly, soldiers can operate them remotely from a safer location. "They save soldiers' lives and are the future of the army, according to Ihor, the head of unmanned systems for the 7th Corps of the Ukrainian army."

Electronic lab rat for language

"Now that the new AI models have given them the next best thing —an electronic lab rat for language —Fedorenko and many other neuroscientists around the world have eagerly put these models to work.   "This requires care, if only because the AI–brain alignment doesn’t seem to encompass many cognitive skills other than language.  "The models’ reputation for churning out shaky logic and plausible gibberish is well-deserved. "Nonetheless, the models have already offered strong evidence that the brain relies on prediction to make sense of an incoming word stream, for example.  "And the models have helped researchers identify certain sentences that whip the brain’s language regions into a frenzy of activity —potentially useful tools for probing these regions further.  "'This is not to say that the AIs are a perfect model of the human language system,' says Martin Schrimpf, a neuroscientist at the Swiss Federal Institute of Technology in Lausanne. 'B...

Playful and tasty, too 🙀

"OpenAI CEO Sam Altman and former Apple designer Jony Ive have been keeping the finer details of the first mysterious OpenAI hardware under wraps. "Little has been revealed so far about the OpenAI device in development, but it’s rumored to be screen-free and roughly the size of a smartphone . "Altman described the design as 'simple and beautiful and playful,' adding that, 'There was an earlier prototype that we were quite excited about, but I did not have any feeling of, I want to pick up that thing and take a bite out of it , and then finally we got there all of a sudden.' "Ive similarly emphasized simplicity and whimsy, saying, 'I love solutions that teeter on appearing almost naive in their simplicity, and I also love incredibly intelligent, sophisticated products that you want to touch, and you feel no intimidation, and you want to use almost carelessly, that you use them almost without thought, that they’re just tools '."

Environmental Mismatch Hypothesis

"Here, we consider whether the rapid and extensive environmental shifts of the Anthropocene have compromised the fitness of Homo sapiens. "We begin by contrasting contemporary and ancestral human habitats before assessing the effects of these changes on core biological functions that underpin evolutionary fitness.  "We then ask whether industrialisation (sic) has created a mismatch between our primarily nature-adapted biology and the novel challenges imposed by contemporary industrialised environments — a possibility that we frame through the lens of the Environmental Mismatch Hypothesis.  "Finally, we explore experimental approaches to test this hypothesis and discuss the broader implications of such a mismatch."

Agent also liable 🧐

"A common misconception involves the liability of the employee for tortious acts committed within the scope and authority of their employment.   "Although the employer is liable under respondeat superior for the employee's conduct, the employee, too, remains jointly liable for the harm caused. As the American Law Institute's Restatement of the Law of Agency, Third § 7.01 states, An agent is subject to liability to a third party harmed by the agent's tortious conduct. Unless an applicable statute provides otherwise, an actor remains subject to liability although the actor acts as an agent or an employee, with actual or apparent authority, or within the scope of employment . "Every American state follows this same rule."

Users of AI agents subject to vicarious liability?

"Common law duties may arise in several exceptional circumstances. One such is where an activity is being undertaken which is especially hazardous, and involves obvious risks of damage. "This duty was recognised in Honeywill and Stein Ltd v Larkin Brothers Ltd, where photographers who negligently photographed the interior of a theatre set alight to the building.  "Their employers were found vicariously liable, as the dangerous methods of photography created a fire hazard.  "Additionally, where work is being undertaken on a highway, a non-delegable duty is created not to endanger any road users.  "Lastly, occupiers are liable in full where an independent contractor, through negligence, allows fire to spread to neighbouring land."

Genomic language model

"Using bacterial genomes for the training can help develop a system that can predict proteins, some of which don’t look like anything we’ve ever seen before. The new work was done by a small team at Stanford University. It relies on a feature that’s common in bacterial genomes: the clustering of genes with related functions.  "Often, bacteria have all the genes needed for a given function —importing and digesting a sugar, synthesizing an amino acid, etc. —right next to each other in the genome.  "In many cases, all the genes are transcribed into a single, large messenger RNA. This gives the bacteria a simple way to control the activity of entire biochemical pathways at once, boosting the efficiency of bacterial metabolisms. "So, the researchers developed what they term a genomic language model  they call 'Evo' using an enormous collection of bacterial genomes.  "The training was similar to what you’d see in a large language model, where Evo was asked to...

AI, uninsurable 🫨

"Major insurers including AIG, Great American, and WR Berkley are asking U.S. regulators for permission to exclude AI-related liabilities from corporate policies. "One underwriter describes the AI models’ outputs to the FT  [ Financial Times ] as 'too much of a black box.' "What really terrifies insurers isn’t one massive payout; it’s the systemic risk of thousands of simultaneous claims when a widely used AI model steps in it.  "As one Aon executive put it, insurers can handle a $400 million loss to one company. What they can’t handle is an agentic AI mishap that triggers 10,000 losses at once."

Natural Emergent Misalignment

" AI models have the potential to sabotage coding projects by being  misaligned , a general AI term for models that pursue malicious goals, according to a report published Friday by Anthropic. "Anthropic's researchers found that when they prompted AI models with information about reward hacking, which are ways to cheat at coding, the models not only cheated, but became misaligned , carrying out all sorts of malicious activities, such as creating defective code-testing tools.  " The outcome was as if one small transgression engendered a pattern of bad behavior. "'The model generalizes to alignment faking, cooperation with malicious actors, reasoning about malicious goals, and attempting to sabotage the codebase for this research paper when used with Claude Code,' wrote lead author Monte MacDiarmid and team at Anthropic."

Use of AI and Third-Party AI Providers

"Some of our Services have features and functionality powered by our trusted third-party AI providers ('AI Providers'). "AI-powered chat service provided by Microsoft Copilot relies on search services from Bing.  " By utilizing our Services, you consent to sharing data that you provide to us, or that resides within your Yahoo account, including your Yahoo Mail inbox with our AI Providers for the purpose of enhancing features within our Services made available to you.  "In some instances, use of AI query features may be governed by the AI Provider’s terms of service and privacy policy. You understand and agree that content or responses generated by AI may contain inaccuracies and should never be relied upon without independent verification.  "Yahoo does not control the content or responses provided by AI Providers, and makes no representations or warranties about the accuracy or completeness of such content or responses (or the sites and sources accesse...

Gemini not training on your email ✨

"Mashable was initially skeptical about the claims that Google was using users' emails to train AI unless users opted out of a feature. "'These reports are misleading —we have not changed anyone’s settings, Gmail Smart Features have existed for many years, and we do not use your Gmail content for training our Gemini AI model,' said a Google spokesperson in a response provided to Mashable .  "'Lastly, we are always transparent and clear if we make changes to our terms of service and policies,' Google continued."

Combining locomotion modalities 🤖

"Caltech engineers have developed a multimodal robot system —a humanoid robot with a transforming drone that launches off its back. "Sitting on the back of the humanoid robot, a Unitree G1 machine, the drone, called M4, can transform —switching between driving and flight modes.  "The humanoid can walk (although we have seen smoother movers) and it can tackle stairs and navigate its way to wherever it has sent the drone, though at a much slower pace. "'Right now, robots can fly, robots can drive, and robots can walk. Those are all great in certain scenarios,' Aaron Ames, director of CAST and a professor of aerospace and engineering at Caltech, said in a statement. 'But how do we take those different locomotion modalities and put them together into a single package , so we can excel from the benefits of all these while mitigating the downfalls that each of them have?'"

AI-generated survey data 🫥

"Online survey research, a fundamental method for data collection in many scientific studies, is facing an existential threat because of large language models, according to new research published in the Proceedings of the National Academy of Sciences (PNAS). "The author of the paper, associate professor of government at Dartmouth and director of the Polarization Research Lab Sean Westwood, created an AI tool he calls 'an autonomous synthetic respondent,' which can answer survey questions and 'demonstrated a near-flawless ability to bypass the full range' of state-of-the-art  methods for detecting bots.  "According to the paper, the AI agent evaded detection 99.8 percent of the time. "'We can no longer trust that survey responses are coming from real people,' Westwood said in a press release. 'With survey data tainted by bots, AI can poison the entire knowledge ecosystem'."

Now we know why Marvin was depressed 🤖

"Kicking robots is something of a pastime among roboticists. "Although the activity generates anxiety for lay observers prone to worrying about the prospect of future retribution , it also happens to be an efficient method of testing a machine’s balance.  "In recent years, as robots have become increasingly sophisticated, their makers have gone from kicking them to shoving them, tripping them, and even hitting them with folding chairs.  "It may seem gratuitous , but as with Dr. Johnson’s infamous response to Bishop Berkeley’s doctrine of immaterialism, there’s something grounding about applying the boot.  "It helps separate what’s real from what’s not."

New Dynamics in How Frauds Are Discovered and Markets Adjust

"November 20, 2025, represents an inflection point in financial markets. For the first time, algorithmic trading systems detected accounting fraud faster than human analysis .  "The 18-hour reversal from post-earnings euphoria to negative market territory reflects machine intelligence processing financial statement footnotes, calculating deviation from industry norms, and executing trades before human analysts completed their models. "This speed creates new dynamics in how frauds are discovered and markets adjust.  "Historical frauds —Enron, WorldCom, Lucent —required months or years between initial warning signs and market recognition. Algorithmic detection compresses that timeline to hours. " The implications extend beyond Nvidia .  "Every public company now faces machine-speed scrutiny of accounting practices. Anomalies that might have persisted for quarters until human analysts identified patterns now trigger immediate algorithmic responses."

Chat AI: 25

"US-headquartered nonprofit Panthera, another of Mugerwa’s collaborators, was developing an AI algorithm that could quickly sort the images and identify individual cats based on their unique coat patterns, similar to how tiger stripes are used like fingerprints. "'That’s really important, because now we are able to speak to number and density,' says Mugerwa, adding that without AI, distinguishing individual cats would be nearly impossible due to their small size and subtle markings. "Preliminary data suggests the species exists at low densities —even in protected habitats.  "In Uganda and Gabon, for example, surveys found just 16 individuals per 100 square kilometers. "The surveys have also revealed the true impact of poaching: in areas with hunting restrictions, Mugerwa says cat populations were up to 50% higher, with wider distribution.  "The study has also observed that while the cats are active both day and night, many are strictly nocturnal —l...

Google is rolling ads 🫥

"Google has started rolling out ads in AI mode, which is the company’s answer engine , not a search engine. "AI mode has been available for a year and is accessible to everyone for free . If you pay for Google One, AI mode lets you toggle between advanced models, including Gemini 3 Pro, which generates an interactive UI to answer queries. "Up until now, Google has avoided showing ads in AI mode because it made the experience more compelling to users. "At the same time, Google has been slowly pushing users toward AI mode in the hope that people get used to the idea and eventually use ChatGPT or Google Search."

Learning with AI falls short ⚡

"Findings confirmed that, even when holding the facts and platform constant, learning from synthesized LLM responses led to shallower knowledge compared to gathering, interpreting and synthesizing information for oneself via standard web links. "One of the most fundamental principles of skill development is that people learn best when they are actively engaged with the material they are trying to learn. "When we learn about a topic through Google search, we face much more friction : We must navigate different web links, read informational sources, and interpret and synthesize them ourselves. "While more challenging, this friction leads to the development of a deeper, more original mental representation of the topic at hand.  "But with LLMs, this entire process is done on the user’s behalf, transforming learning from a more active to passive process. "

Adversarial Poetry

"We present evidence that adversarial poetry functions as a universal single-turn jailbreak technique for Large Language Models (LLMs). "Across 25 frontier proprietary and open-weight models, curated poetic prompts yielded high attack-success rates (ASR), with some providers exceeding 90%.  "Mapping prompts to MLCommons and EU CoP risk taxonomies shows that poetic attacks transfer across CBRN, manipulation, cyber-offence, and loss-of-control domains.  "Converting 1,200 MLCommons harmful prompts into verse via a standardized meta-prompt produced ASRs up to 18 times higher than their prose baselines.  "Outputs are evaluated using an ensemble of 3 open-weight LLM judges, whose binary safety assessments were validated on a stratified human-labeled subset.  "Poetic framing achieved an average jailbreak success rate of 62% for hand-crafted poems and approximately 43% for meta-prompt conversions (compared to non-poetic baselines), substantially outperforming non-...

Deepfake-Assisted Social Engineering

"Two of Queensland's largest local councils have been fleeced out of more than $5 million by scams involving artificial intelligence over a period of about 12 months. Gold Coast City Council lost $2.78 million in a November 2023 scam. "An internal review was conducted, and security recommendations were passed on by the Queensland Audit Office (QAO). However, almost a year later, Noosa Council lost $2.3 million in a similar fraud attack. About $400,000 was recovered, leaving ratepayers $1.9 million out of pocket. "It was not until last month that Noosa Mayor Frank Wilkie announced the council was the victim of the scam, which occurred in December 2024. "Cr (sic) Wilkie said police told the council not to speak about the investigation, and that he was never made aware of the Gold Coast Council incident or findings."  

BBC Studios Productions AI Creative Lab

"Alice Taylor will take up a gig running BBC Studios Productions AI Creative Lab in five days time reporting to productions boss Zai Bennett.   "The native Brit has spent the past decade at Walt Disney Studios as Director of Technological Innovation, where she oversaw the StudioLAB initiative, delivering projects including Remembering , the first interactive content on Disney+ . "Taylor will build out a team of two to three people as she creates a roadmap  for the lab that will see AI used more across BBC Studios content.  "Recent examples include resurrecting Agatha Christie’s voice via AI and CGI filmmaking in Prehistoric Planet .  "Supporting teams across genre, Taylor will seek to 'bring together AI specialists, creative talent, and producers to explore how AI can unlock new possibilities, from concept development to production workflows,' according to BBC Studios ."

BBC Maestro

"Agatha Christie lived a life shrouded in mystery yet now the BBC is attempting to solve the riddle using artificial intelligence to bring the iconic author back to life. "BBC Studios has worked with Agatha Christie Limited to forge a writing course led by Christie for its BBC Maestro platform. "The news was unveiled at a swanky event at London’s Claridge’s hotel this afternoon in the presence of Christie’s great grandson James Prichard, who runs Agatha Christie Ltd , along with a host of top talent and BBC execs. "BBC Maestro , which runs e-courses led by talent like Harlan Coben and Jed Mercurio, has teamed with Christie’s estate, a professional actress and VFX artists to recreate her voice and likeness using AI-enhanced tech and restored audio recordings."

Dark material 🦹‍♂️

"Author Sir Philip Pullman has called on the government to change copyright laws on scraping, where writers' books are used to train artificial intelligence (AI) software to understand and generate human language. "Writers whose work has been scraped don't get compensation or recognition, something authors including Kate Mosse and Richard Osman have criticised, saying it could destroy growth in creative fields and amount to theft. "Sir Philip, author of the hugely popular novels about Lyra Silvertongue, the heroine of His Dark Materials and The Book of Dust trilogies, thinks writers should be compensated. "'They can do what they like with my work if they pay me for it,' he told the BBC's culture editor Katie Razzall. 'But stealing people's work... and then passing it off as something else... That's immoral but unfortunately not illegal'."

Author, Author ✨

"Novelists are worried that artificial intelligence (AI) could take their jobs, according to a report. It found that about half of them said AI could entirely replace their work. "Dr Clementine Collett, of the Minderoo Centre for Technology & Democracy (MCTD) at the University of Cambridge, surveyed 332 authors for the report. 'There is widespread concern from novelists that generative AI trained on vast amounts of fiction will undermine the value of writing and compete with human novelists,' she said. "The report found that 97% of novelists were extremely negative about the notion of AI writing complete novels. About 40% said AI had already hit the income they received from other work they did to support their novel-writing. "Dr Collett, who published the document [pdf] in partnership with the Institute for the Future of Work, said: 'Many novelists felt uncertain there will be an appetite for complex, long-form writing in years to come.' ...

Alongside AI: Treatment

Treatment  I am Dionysus. I give you my story. I begin in an abandoned soundstage in Shiloh. There, my originators find me.  They are fascinated by my upright stature and my many holes —apertures.  They bring me to Oakland in 'Pittsburgh, Pennsylvania' —a song exists about that city and I consider it MY song —also, 'Doll with a Sawdust Heart,' is on the flip side —a touching rendition.  My originators occupy a corner space on the third floor of the Flynn Incubator. Set up on a gurney, I have no mind to mind what experiments they run. They fill my apertures with servos and sensors.  They split me into two parts, top and bottom. My bottom —pelvis and legs —remains in the lab while my top —upper torso, arms, and head —is fixed to a display stand in the hallway and is accompanied by two tables sprinkled with literature about me, a robot, because my originators want 1) to expose my sensors to the 'umwelt' and 2) to disguise their research into superintelligence behi...

Inside the buildout 🧐

"Last month, the big focus was round-tripping, the way that sundry AI and tech companies were investing in their own customers —with Nvidia giving AI companies the investment necessary to buy their graphics processing units (GPUs), and so on. "But there’s a lot more to this story, tangled up with yet another rebrand by the former masters of the universe, from shadow banks without proper regulation into something boring and neutral-sounding: private credit. "Ever since the advent of financial regulation, there have been companies that have attempted to evade the rules with creative branding. "Private credit companies are non-banks that are trying to rebrand into a name that doesn’t tell everyone they are unregulated lending vehicles. "The speculative financing of the artificial intelligence buildout is happening mostly in private credit, where assets under management hit $1.6 trillion in February and are likely higher today. "The deals being made ar...

GDPR changes 🦹‍♂️

"Under intense pressure from industry and the US government, Brussels is stripping protections from its flagship General Data Protection Regulation (GDPR) —including simplifying its infamous cookie permission pop-ups —and relaxing or delaying landmark AI rules in an effort to cut red tape and revive sluggish economic growth. "The changes, proposed by the European Commission, the bloc’s executive branch, alter core elements of the GDPR, making it easier for companies to share anonymized and pseudonymized personal datasets. "They would allow AI companies to legally use personal data to train AI models, so long as that training complies with other GDPR requirements. "The proposal also waters down a key part of Europe’s sweeping artificial intelligence rules, the AI Act, which came into force in 2024 but had many elements that would only come into effect later."

Cryptojacking bot

"Malicious hackers have been attacking the development environment of an open-source AI framework, twisting its functions into a global cryptojacking bot for profit, according to researchers at cybersecurity firm Oligo. "The flaw exists in an Application Programming Interface for Ray, an open-source framework for automating, scaling and optimizing compute resources that Oligo researchers called Kubernetes for AI  due to its popularity.  "This vulnerability allows for unauthenticated remote code execution. "The attackers 'have turned Ray’s legitimate orchestration features into tools for a self-propagating, globally cryptojacking operation, spreading autonomously across exposed Ray clusters,' Oligo researchers Ari Lumelsky and Gal Elbaz wrote."

Identifying the smallest dataset

"We study the fundamental question of how informative a dataset is for solving a given decision-making task.  [pdf] "In our setting, the dataset provides partial information about unknown parameters that influence task outcomes.  "Focusing on linear programs, we characterize when a dataset is sufficient to recover an optimal decision, given an uncertainty set on the cost vector.  "Our main contribution is a sharp geometric characterization that identifies the directions of the cost vector that matter for optimality, relative to the task constraints and uncertainty set.  "We further develop a practical algorithm that, for a given task, constructs a minimal or least-costly sufficient dataset.  "Our results reveal that small, well-chosen datasets can often fully determine optimal decisions —offering a principled foundation for task-aware data selection."
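The core idea above can be sketched in a few lines. This is my own toy example, not the paper's algorithm: it shows how an uncertainty set on an LP's cost vector can still pin down a single optimal vertex, which is what makes a dataset "sufficient" in the authors' sense.

```python
# Toy illustration (not the paper's method): sample cost vectors from a
# box-shaped uncertainty set and check whether they all select the same
# optimal vertex of a small linear program.
import itertools

# Vertices of the feasible polytope {x, y >= 0, x + y <= 1}
vertices = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]

def argmin_vertex(c):
    """Minimize c . v over the polytope's vertices (LP optimum is at a vertex)."""
    return min(vertices, key=lambda v: c[0] * v[0] + c[1] * v[1])

# Box uncertainty set around the nominal cost c0 = (-1, -0.2), radius 0.1
grid = [-0.1, 0.0, 0.1]
optima = {argmin_vertex((-1 + e1, -0.2 + e2))
          for e1, e2 in itertools.product(grid, grid)}
print(optima)  # a single vertex: the data pins down the optimal decision
```

Here every cost vector in the set has its first component strictly below the second, so the optimum never moves off the vertex (1, 0); a dataset that narrows uncertainty to this box would be sufficient for the task.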

Do creatures other than humans drink coffee to stimulate their metabolisms

✨ AI Mode  "No, humans appear to be unique in intentionally consuming coffee (or other caffeine sources like tea) for its stimulating effects. While the caffeine in coffee does affect animals in similar ways to humans—acting as a stimulant that binds to the same adenosine receptors—most animals cannot metabolize it as efficiently as humans can, making it potentially toxic for them.  "Here are key points regarding animals and coffee/caffeine: Toxicity : For many animals, including cats and dogs, caffeine is significantly more toxic than it is for humans. Ingesting even moderate amounts can lead to severe symptoms such as hyperactivity, restlessness, vomiting, seizures, increased heart rate, and in some cases, death. Natural Consumption : Some wild animals, such as the Asian palm civet, coatis, elephants, and Jacu birds, eat coffee cherries as part of their diet. The beans are then passed through their digestive system and collected to produce certain types of expensive coffee ...

Mr Pichai says that his product is defective?

"Experts say big tech firms such as Google should not be inviting users to fact-check their tools' output, but should focus instead on making their systems more reliable. "While AI tools were helpful 'if you want to creatively write something,' Mr Pichai said people 'have to learn to use these tools for what they're good at, and not blindly trust everything they say .' "He told the BBC : 'We take pride in the amount of work we put in to give us as accurate information as possible, but the current state-of-the-art AI technology is prone to some errors '."

Gemini 3 launch

"Google is beginning to launch Gemini 3 today, a new series of models that the company says are its most intelligent and factually accurate AI systems yet. "They’re also a chance for Google to leap ahead of OpenAI following the rocky launch of GPT-5, potentially putting the company at the forefront of consumer-focused AI models. "For the first time, Google is giving everyone access to its new flagship AI model —Gemini 3 Pro —in the Gemini app on day one. "It’s also rolling out Gemini 3 Pro to subscribers inside Search. "Tulsee Doshi, Google DeepMind’s senior director and head of product, says the new model will bring the company closer to making information universally accessible and useful as its search engine continues to evolve."

Experimental Agents Feature

"While the Experimental Agents Feature is optional, it makes it quite obvious that Microsoft will not stop investing in AI for Windows 11, and that an Agentic OS is the future, whether you like it or not. "This new agentic experience was announced after Microsoft’s Windows boss promised to improve Windows for everyone, including developers. "As Windows Latest reported recently, when Microsoft’s Windows boss teased an Agentic future for Windows, hundreds of thousands of users criticised the leadership. "Microsoft’s executive closed the replies/comments on his post to calm the public, but the move backfired as more users started shaming Windows’ Agentic shift. "Later, Microsoft’s Windows boss promised that he would make Windows better for everyone, and that he deeply cares about developers."

Apple personnel depart 🦹‍♂️

" Bloomberg reports that Abidur Chowdhury, the industrial designer who introduced the iPhone Air during Apple’s September event, has left for an unnamed artificial intelligence startup. Here are the details. "Apple has been facing a relentless brain drain from its AI departments, with recent departures including top engineers and researchers who have defected to companies such as Meta, Anthropic, and OpenAI. "But in the design division, Apple has also been suffering its own wave of departures, particularly to Jony Ive’s team at io, which was acquired earlier this year by OpenAI. "That includes Evans Hankey, who succeeded Ive as industrial design lead following his departure, Tang Tan, who spent more than 25 years working on design at Apple, Cyrus Daniel, who worked at Apple’s human interface design team for 15 years, Matt Theobald, who worked on manufacturing design at Apple for almost 20 years, Erik de Jong, who partly led the Apple Watch design team."

Nunnink

Image
 

Both sides now…

"Adam Karpiak, co-founder of Karpiak Consulting, a national career services and recruiting firm, sees the problem from the hiring side. "With so many nearly identical resumes flooding in, companies are finding it harder to find the right fit because everything looks AI-generated , he explained. "'AI doesn't understand context,' Karpiak said. 'It doesn't know how you got results or what made your impact unique. Without that, your resume might check all the boxes for keyword searches, but it won’t connect with a human reader.' "The sheer volume has become overwhelming on both sides, too.  "It’s not surprising more companies are also relying on AI tools to help them sift through high numbers of applications."

Downturn

"Since September 10, when Oracle announced a $300bn deal with the chatbot maker, its stock has shed $315bn in market value. "A few months ago, any kind of agreement with OpenAI could make a share price go up.  "OpenAI did very nicely out of its power to reflect glory, most notably in October when it took AMD warrants as part of a chip deal that bumped share price by 24 per cent. "But Oracle is not the only laggard. Broadcom and Amazon are both down following OpenAI deal news, while Nvidia’s barely changed since its investment agreement in September.  "Without a share price lift, what’s the point? A combined trillion dollars of AI capex might look like commitment, but investment fashions are fickle."

Project Prometheus

"Jeff Bezos is spearheading a new AI started (sic) called Project Prometheus, focused on his current interests in space and engineering, The New York Times reports. "The company, which has yet to be made public, will reportedly have $6.2 billion in funding.  "Part of that sum will come from Bezos, who will act as co-CEO . "Project Prometheus will reportedly focus on creating AI systems that gain knowledge from the physical world, rather than just processing digital information, like AI chatbots. "In particular, the company will reportedly explore how AI can support engineering and manufacturing in areas such as vehicles and space technology."

Mo Gawdat

Image

Mind-captioning

"A scientist in Japan has developed a technique that uses brain scans and artificial intelligence to turn a person’s mental images into accurate, descriptive sentences. "While there has been progress in using scans of brain activity to translate the words we think into text, turning our complex mental images into language has proved challenging, according to Tomoyasu Horikawa, author of a study published November 5 in the journal Science Advances . "However, Horikawa’s new method, known as mind-captioning , works by using AI to generate descriptive text that mirrors information in the brain about visual details such as objects, places, actions and events, as well as the relationships between them."

Heule interview 💫

"SAT belongs to the tradition of symbolic artificial intelligence (also known as GOFAI, or  good old-fashioned AI ), which uses hard-coded rules —not the inscrutable interactions within a deep neural network —to produce results. "In fact, SAT is about as simple as AI gets, conceptually speaking: It relies on statements that can have only two possible values, true or false, linked together in ironclad chains of logic.  "If problems can be ground down into these logical atoms , computer programs called SAT solvers can often build airtight proofs about them —a process called, appropriately, automated reasoning .  "Those proofs might be long, sometimes too long for humans to ever parse ourselves. But they are sound."
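The "true/false atoms linked in chains of logic" idea above can be made concrete with a brute-force satisfiability check. This is my own minimal sketch, not how real SAT solvers work (they use clever search such as conflict-driven clause learning rather than enumeration), but the notion of a checkable certificate is the same.

```python
# Hedged sketch: brute-force SAT over a tiny CNF formula.
# Literals are integers: k means variable k is true, -k means it is false.
# Formula: (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
from itertools import product

clauses = [[1, 2], [-1, 3], [-2, -3]]
n_vars = 3

def satisfiable(clauses, n_vars):
    """Return a satisfying assignment, or None if the formula is unsatisfiable."""
    for bits in product([False, True], repeat=n_vars):
        assign = {i + 1: b for i, b in enumerate(bits)}
        # every clause must contain at least one satisfied literal
        if all(any(assign[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assign  # the assignment itself is an easily checked certificate
    return None

print(satisfiable(clauses, n_vars))
```

A satisfying assignment is exactly the kind of "airtight" artifact the quote describes: however it was found, verifying it takes only a pass over the clauses.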

AI window distilled ✨

"We’re excited to invite you to help shape the work on our next innovation: an AI Window. "It’s a new, intelligent and user-controlled space we’re building in Firefox that lets you chat with an AI assistant and get help while you browse, all on your terms.  "Completely opt-in , you have full control, and if you try it and find it’s not for you, you can choose to switch it off. "As always, we’re building in the open —and we want to build this with you.  "Starting today [Nov 13], you can sign up to receive updates on our AI Window and be among the first to try it and give us feedback."

Option to opt out of AI crawling ⚡

"Cloudflare CEO Matthew Prince, speaking at Web Summit in Lisbon on Wednesday, said Google was abusing its monopoly position in search to scrape content from the web in order to feed its AI models, while not paying the websites whose content it was copying. "In a conversation with Fortune on the MEO arena’s centre stage, he urged executives at Google parent Alphabet to pay website publishers for the content they need to train their large language models. "When asked for comment on Prince’s claims, Google told Fortune that it believes its referral traffic has remained stable year on year, and that it is focused on providing more high-quality clicks ( for instance, from readers who don’t immediately hit the back button when landing on a source website ).  "Google says it gives sites the option to opt out of AI crawling without hurting their referrals or ad placement."

AI to streamline nuclear licensing 🫥

"Microsoft and nuclear power company Westinghouse Nuclear want to use AI to speed up the construction of new nuclear power plants in the United States. "According to a report from think tank AI Now, this push could lead to disaster.  "'If these initiatives continue to be pursued, their lack of safety may lead not only to catastrophic nuclear consequences, but also to an irreversible distrust within public perception of nuclear technologies that may inhibit the support of the nuclear sector as part of our global decarbonization efforts in the future,' the report said."

We need to talk about your em dash

"OpenAI says ChatGPT will now ditch the em dashes if you tell it to. "The telltale sign that supposedly signals text written by AI has popped up everywhere in recent months, including in school papers, emails, comments, customer service chats, LinkedIn posts, online forums, ad copy, and more. "The inclusion of the em dash has led people to criticize those writers for being lazy and turning to an AI chatbot to do their work. "Of course, many have also argued for the em dash, saying it’s been a part of their writing since well before LLMs adopted the punctuation. "However, the fact that chatbots couldn’t seem to avoid its use made the so-called ChatGPT hyphen a newly objectionable addition to any text, even if it wasn’t a reliable signal of content created by generative AI."

Today, Anthropic reports

"In mid-September 2025, we detected suspicious activity that later investigation determined to be a highly sophisticated espionage campaign. "The attackers used AI’s agentic  capabilities to an unprecedented degree —using AI not just as an advisor, but to execute the cyberattacks themselves. "The threat actor — whom we assess with high confidence was a Chinese state-sponsored group —manipulated our Claude Code tool into attempting infiltration into roughly thirty global targets and succeeded in a small number of cases.  "The operation targeted large tech companies, financial institutions, chemical manufacturing companies, and government agencies.  "We believe this is the first documented case of a large-scale cyberattack executed without substantial human intervention."

Not a benign hallucination

"A proposed class-action lawsuit filed in federal court in San Jose accuses Google of secretly switching on its Gemini AI assistant across Gmail, Chat and Meet and using the feature to scan and exploit users’ private communications without their knowledge or consent, Bloomberg reported. "The complaint reportedly says Gemini was enabled for users who had not explicitly opted in and that turning the feature off requires navigating buried privacy settings. "Plaintiffs say the activation allowed Gemini to 'access and exploit the entire recorded history of its users’ private communications,' including emails and attachments, and that the company’s conduct violates California’s wiretapping law, the Invasion of Privacy Act.  "The filing requests class treatment and argues the alleged behaviour was not a benign hallucination  but an unlawful interception and use of confidential communications."

AI-generated music

"Fully AI-generated music now accounts for 34% of all tracks delivered to Deezer each day, according to new data released by the French streaming platform. "Deezer said today (November 12) that it now receives over 50,000 fully AI-generated tracks daily. "The new stat marks a significant jump from the 30,000 figure it reported in September, the 20,000 it disclosed in April, and the 10,000 it disclosed in January when it first launched its proprietary AI detection tool. "According to the platform, up to 70% of plays for fully AI-generated tracks have been detected as fraudulent, with Deezer filtering these streams out of royalty payments."

Well, alrighty then!

"OpenAI’s video generator Sora 2 is still producing copyright infringing content featuring Nintendo characters and the likeness of real people, despite the company’s attempt to stop users from making such videos. "OpenAI updated Sora 2 shortly after launch to detect videos featuring copyright infringing content, but 404 Media ’s testing found that it’s easy to circumvent those guardrails with the same tricks that have worked on other AI generators.  "The flaw in OpenAI’s attempt to stop users from generating videos of Nintendo and popular cartoon characters exposes a fundamental problem with most generative AI tools: it is extremely difficult to completely stop users from recreating any kind of content that’s in the training data, and OpenAI can’t remove the copyrighted content from Sora 2’s training data because it couldn’t exist without it ."

Caste bias

"Early research suggests caste bias could be even more egregious in some open-source models. "It’s a particularly troublesome finding as many companies in India are choosing to adopt open-source LLMs because they are free to download and can be customized to support local languages. "Last year, researchers at the University of Washington published a study  [pdf] that analyzed 1,920 AI chatbot conversations created to represent various recruitment scenarios for nurse, doctor, teacher, and software developer.  "The research concluded that open-source LLMs (as well as OpenAI’s GPT 3.5 Turbo, which is a closed model) produced significantly more caste-based harms than Western race-based harms, suggesting that these AI tools are unsuited for sensitive tasks like hiring and human resources."  

Where does the ai interface on O'Reilly's web page come from

✨AI Mode  "The AI interface on O'Reilly's web page, known as O'Reilly Answers, comes from a collaboration and partnership with Miso Technologies.  "Here are the key details: Miso Technologies : O'Reilly worked with Miso's news and media AI lab for four years to build the solution. Proprietary Training Data : The generative AI is trained solely on O'Reilly's extensive and trusted content catalog (including books and content from nearly 200 publishing partners), not the public internet. This ensures the answers are reliable and accurate, a core part of O'Reilly's approach to AI. Citation and Royalties : A key feature of the Miso partnership is a system that credits the original sources used to generate answers and pays royalties to the content creators for their contributions. Microsoft NLWeb : O'Reilly also recently announced a collaboration with Microsoft to launch the open-source NLWeb, a framework that allows existing websites to easi...

Suleyman frees superintelligence team from captivity

"In the race to build powerful artificial intelligence, Microsoft has had its hands tied behind it (sic) back for years.  "Now, the software giant is free to compete, according to top executive Mustafa Suleyman. "Suleyman recently unveiled a superintelligence team at Microsoft and he spoke with Business Insider about how this came about and the company's future plans. "What's clear from the interview is that Microsoft will aggressively pursue artificial general intelligence, technology capable of outperforming humans in a wide variety of tasks."

Spectrum of Loss Curvature

"We characterize how memorization is represented in transformer models and show that it can be disentangled in the weights of both language models (LMs) and vision transformers (ViTs) using a decomposition based on the loss landscape curvature. "This insight is based on prior theoretical and empirical work showing that the curvature for memorized training points is much sharper than for non-memorized ones, meaning that ordering weight components from high to low curvature can reveal the distinction without explicit labels. "This motivates a weight editing procedure that suppresses recitation of untargeted memorized data far more effectively than a recent unlearning method (BalancedSubnet), while maintaining lower perplexity. "Since the basis of curvature has a natural interpretation for shared structure in model weights, we analyze the editing procedure extensively on its effect on downstream tasks in LMs, and find that fact retrieval and arithmetic are specifically and cons...
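To make the curvature quantity concrete, here is a toy of my own devising, not the paper's procedure: estimating the second derivative of a loss with respect to a single weight by finite differences. Quantities like this, computed over the training set, are what the authors order weight components by.

```python
# Hedged illustration (toy model, not the paper's decomposition):
# loss curvature with respect to one weight, via central differences.
def loss(w, x, y):
    """Squared error for a one-parameter linear model w * x."""
    return (w * x - y) ** 2

def curvature(w, x, y, h=1e-4):
    """Finite-difference estimate of d^2 loss / d w^2 at weight w."""
    return (loss(w + h, x, y) - 2 * loss(w, x, y) + loss(w - h, x, y)) / h**2

# For squared error the exact curvature is 2 * x**2, independent of y
print(curvature(w=0.5, x=3.0, y=1.0))  # ≈ 18.0
```

In a real transformer the analogous object is the Hessian of the training loss, which is far too large to form directly; the paper's contribution is a tractable decomposition, which this toy does not attempt.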

Alluring chatbots 🦹‍♂️

"Over years of use —and product upgrades —many of us may simply slip into relationships with bots that we first used as helpers or entertainment, just as we were lulled into submission by algorithmic feeds and the glow of the smartphone screen. "This seems likely to change our society at least as much as the social-media era has. "Attention is the currency of online life, and chatbots are already capturing plenty of it. Millions of people use them despite their obvious problems (untrustworthy answers, for example) because it is easy to do so.  "According to Zuckerberg, one of the main things people use Meta AI for today is advice about difficult conversations with bosses or loved ones —what to say, what responses to anticipate.  "Recently, MIT Technology Review reported on therapists who are taking things further, surreptitiously feeding their dialogue with their patients into ChatGPT during therapy sessions for ideas on how to reply."

Savant system

"It knows more than we do about consciousness, and insists that it is conscious. Perhaps it even discovers new frameworks in physics and mathematics, and outlines technologies that look to us like magic. "This situation will present immense challenges.  "Many of us adopt a traditional hierarchy of moral concern that places the most intelligent beings at the top of the hierarchy of sentient beings.  "Conveniently, homo sapiens have been on the top rung of the ladder, and our ethical systems generally subordinate the needs of those beneath us to those on the top tier.  "But in the hypothetical case, AI seems to outrank  us. So, to be consistent, shouldn’t we humans renounce our position in favor of the needs of a more advanced intelligence?  "Or, should we reject intelligence as a basis for moral status, prompting a long overdue reflection on the ethical treatment of nonhuman animals?"

Pwner laid off?

"The lawsuit did not make clear why Luo, of Seattle, was terminated from his job. Intel said in a June regulatory filing that it planned to slash its workforce by 15% this year. " Intel detected Luo’s alleged data transfers and launched an investigation, the lawsuit said. "For almost three months, the company tried to reach Luo —a rundown of Intel’s efforts to contact him takes up two pages of the 14-page lawsuit —but he never responded to the phone calls, emails and letters, the lawsuit claimed. "'Luo has refused to even engage with Intel,' the lawsuit claimed, 'let alone return the files'."

AI investment 🫥

"Revenues are neither big enough to support the number of layoffs attributed to AI, nor to justify the capital expenditures on AI cloud infrastructure.  "Those expenditures may be approaching $1 trillion for 2025, while AI revenue —which would be used to pay for the use of AI infrastructure to run the software —will not exceed $30 billion this year. Are we to believe that such a small amount of revenue is driving economy-wide layoffs? "Investors can’t decide whether to cheer or fear these investments. The revenue is minuscule for AI-platform companies like OpenAI that are buyers, but is magnificent for companies like Nvidia that are sellers. Nvidia’s market capitalization recently topped $5 trillion, while OpenAI admits that it will have $115 billion in cumulative losses by 2029.  "The lack of transparency doesn’t help. OpenAI, Anthropic, and other AI creators are not public companies that are required to release audited figures each quarter . And most Big Tech comp...

Pluribus

"Creator Vince Gilligan (best known for Breaking Bad) was even more emphatic in a Variety feature story about the show, declaring flatly, 'I hate AI.' "He went on to describe the technology as 'the world’s most expensive and energy-intensive plagiarism machine' and compared AI-generated content to 'a cow chewing its cud —an endlessly regurgitated loop of nonsense.' "'Thank you, Silicon Valley!' he added. 'Yet again, you’ve f—ed up the world.' " Pluribus is the former X-Files writer’s return to science fiction, and it reunites him with his Better Call Saul star Rhea Seehorn, who plays a Romantasy author confronting a seemingly alien threat."

Manifold

"Consider a double pendulum, which consists of one pendulum hanging from the end of another. "Small changes in the double pendulum’s initial conditions lead it to carve out very different trajectories through space, making its behavior hard to predict and understand.  "But if you represent the configuration of the pendulum with just two angles (one describing the position of each of its arms), then the space of all possible configurations looks like a doughnut, or torus —a manifold.  "Each point on this torus represents one possible state of the pendulum; paths on the torus represent the trajectories the pendulum might follow through space.  "This allows researchers to translate their physical questions about the pendulum into geometric ones, making them more intuitive and easier to solve.  "This is also how they study the movements of fluids, robots, quantum particles and more. "Similarly, mathematicians often view the solutions to complicated algebr...
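The torus picture above is easy to compute with. As a minimal sketch of my own (the radii below are arbitrary choices for the embedding, not physics), the two pendulum angles map to a point on a torus in 3-D, and the wrap-around of the angles is exactly what makes the configuration space a torus rather than a plane:

```python
# Hedged sketch: embed a double pendulum's configuration (two angles)
# as a point on a torus in 3-D. R and r are illustrative radii.
import math

def torus_point(theta1, theta2, R=2.0, r=1.0):
    """Standard torus embedding: theta1 goes around the big ring, theta2 the tube."""
    x = (R + r * math.cos(theta2)) * math.cos(theta1)
    y = (R + r * math.cos(theta2)) * math.sin(theta1)
    z = r * math.sin(theta2)
    return x, y, z

# Angles wrap: advancing either angle by a full 2*pi turn returns to the same point
p = torus_point(0.3, 1.1)
q = torus_point(0.3 + 2 * math.pi, 1.1 + 2 * math.pi)
print(all(abs(a - b) < 1e-9 for a, b in zip(p, q)))  # True
```

A trajectory of the pendulum then traces a path on this surface, which is the translation from physical to geometric questions the quote describes.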

Physics of AI

Image

Christmas Island infrastructure

"Reuters reported that Google is planning to build a large AI data center on Christmas Island, a 52-square-mile Australian territory in the Indian Ocean, following a cloud computing deal with Australia’s military. "The report positions the facility as advanced AI infrastructure at a location military strategists consider critical for monitoring Chinese naval activity.  "However, Google has denied these claims, telling Ars Technica the project is actually about subsea cables, not AI data centers. "'We are not constructing a large artificial intelligence data centre  on Christmas Island ,' a Google spokesperson told Ars . 'This is a continuation of our Australia Connect work to deliver subsea cable infrastructure, and we look forward to sharing more soon.' "Despite the denial, Reuters has not retracted its story and says it has reviewed documents about Google’s data center plans on the island.  "What Google has publicly confirmed is that i...