
Showing posts from April, 2025

Short list of anti-AI tools

Glaze

Glaze is a system designed to protect human artists by disrupting style mimicry. At a high level, Glaze works by analyzing the AI models that train on human art and, using machine learning algorithms, computing a set of minimal changes to artworks such that they appear unchanged to human eyes but appear to AI models as a dramatically different art style.

Nightshade

Nightshade is a tool that turns any image into a data sample that is unsuitable for model training.

ArtShield

ArtShield embeds a well-camouflaged watermark into your images that helps prevent AI models from training on your data. This watermark is the same one that models such as Stable Diffusion use to mark the images they generate, in order to prevent them from training on data they have produced themselves.

Anti-DreamBooth

The system aims to add subtle noise perturbation to each user's image before publishing, in order to disrupt the generation quality of any DreamBooth model trained on these perturbed...

Compositionality

In a new study published on Thursday in Science, researchers report that bonobo communication is rich in a feature that linguists call compositionality.  This refers to the way we string words together to compose larger structures with more complicated meanings.  Linguists divide compositionality into two categories, a simple version and a more sophisticated one, and researchers have long thought human language stands alone in the higher tier.  Previous studies have found that some primates and birds are capable of “trivial” compositionality, in which words that each have a specific meaning on their own can be added together to create a fuller, more meaning-rich picture (“bake pie”). But the new study shows that bonobos, like us, seem to do something a bit more advanced than that.  In “nontrivial” compositionality, certain parts modify others. An example is the sentence “they baked a pumpkin pie.” Here “pumpkin” and “pie” join to form a new composite idea.  Th...

AI Overview on Phaedrus

The provided text describes a person who faces constant scrutiny, unwanted praise and criticism, and has his private moments made public, particularly when intoxicated, causing him immense discomfort and humiliation. Here's a breakdown of the key elements:

Jealousy and Guarding: The person is "jealously watched and guarded against everything and everybody," suggesting a lack of freedom and trust, and a feeling of being controlled.

Misplaced Praises and Censures: He is subjected to "misplaced and exaggerated praises of himself, and censures equally inappropriate," indicating that he is receiving attention and criticism that are not deserved or appropriate.

Intolerable When Sober: The praises and censures are described as "intolerable when the man is sober," highlighting the emotional distress they cause even in a state of clarity.

Published When Drunk: The text emphasizes that these inappropriate comments are "published all over the world in ...

Phaedrus

He who is the victim of his passions and the slave of pleasure will of course desire to make his beloved as agreeable to himself as possible. Now to him who has a mind diseased anything is agreeable which is not opposed to him, but that which is equal or superior is hateful to him, and therefore the lover will not brook any superiority or equality on the part of his beloved; he is always employed in reducing him to inferiority. And the ignorant is the inferior of the wise, the coward of the brave, the slow of speech of the speaker, the dull of the clever.  These, and not these only, are the mental defects of the beloved; —defects which, when implanted by nature, are necessarily a delight to the lover, and when not implanted, he must contrive to implant them in him, if he would not be deprived of his fleeting joy.  And therefore he cannot help being jealous, and will debar his beloved from the advantages of society which would make a man of him, and especially from that societ...

OpenAI denies any wrongdoing…

Tech textbook tycoon Tim O'Reilly claims OpenAI mined his publishing house's copyright-protected tomes for training data and fed it all into its top-tier GPT-4o model without permission. This comes as the generative AI upstart faces lawsuits over its use of copyrighted material, allegedly without due consent or compensation, to train its GPT-family of neural networks. OpenAI denies any wrongdoing. O'Reilly (the man) is one of three authors of a study [PDF] titled “Beyond Public Access in LLM Pre-Training Data: Non-public book content in OpenAI’s Models,” issued by the AI Disclosures Project. By non-public, the authors mean books that are available to humans behind a paywall and aren't publicly available to read for free, unless you count sites that illegally pirate this kind of material.

Claude for Education

Anthropic announced on Wednesday that it’s launching a new Claude for Education tier, an answer to OpenAI’s ChatGPT Edu plan.  The new tier is aimed at higher education, and gives students, faculty, and other staff access to Anthropic’s AI chatbot, Claude, with a few additional capabilities. One piece of Claude for Education is Learning Mode, a new feature within Claude Projects to help students develop their own critical thinking skills, rather than simply obtain answers to questions.  With Learning Mode enabled, Claude will ask questions to test understanding, highlight fundamental principles behind specific problems, and provide potentially useful templates for research papers, outlines, and study guides.

Yann LeCun on JEPA

Ask Yann LeCun —Meta's chief AI scientist, Turing Award winner, NYU data scientist and one of the pioneers of artificial intelligence —about the future of large language models (LLMs) like OpenAI's ChatGPT, Google's Gemini, Meta's Llama and Anthropic's Claude, and his answer might startle you: He believes LLMs will be largely obsolete within five years. "The path that my colleagues and I are on at [Facebook AI Research] and NYU, if we can make this work within three to five years, we'll have a much better paradigm for systems that can reason and plan," LeCun explains in the latest installment in Newsweek's AI Impact interview series with Marcus Weldon, describing his team's recent work on their Joint Embedding Predictive Architecture (JEPA).  He hopes this approach will make current LLM-based approaches to AI outdated, as these new systems will include genuine representations of the world and, he says, be "controllable in the sense that...

Molly White on sharing

"When a passionate Wikipedian discovers their carefully researched article has been packaged into an e-book and sold on Amazon for someone else’s profit? Wait, no, not like that. "When a developer of an open source software project sees a multi-billion dollar tech company rely on their work without contributing anything back? Wait, no, not like that. "When a nature photographer discovers their freely licensed wildlife photo was used in an NFT collection minted on an environmentally destructive blockchain? Wait, no, not like that. "And perhaps most recently, when a person who publishes their work under a free license discovers that work has been used by tech mega-giants to train extractive, exploitative large language models ? Wait, no, not like that."

Effect of herpes zoster vaccination on dementia

"Here we aim to determine the effect of live-attenuated herpes zoster vaccination on the occurrence of dementia diagnoses.  "To provide causal as opposed to correlational evidence, we take advantage of the fact that, in Wales, eligibility for the zoster vaccine was determined on the basis of an individual’s exact date of birth. Those born before 2 September 1933 were ineligible and remained ineligible for life, whereas those born on or after 2 September 1933 were eligible for at least 1 year to receive the vaccine.  "Using large-scale electronic health record data, we first show that the percentage of adults who received the vaccine increased from 0.01% among patients who were merely 1 week too old to be eligible, to 47.2% among those who were just 1 week younger.  "Apart from this large difference in the probability of ever receiving the zoster vaccine, individuals born just 1 week before 2 September 1933 are unlikely to differ systematically from those born 1 we...
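The birth-date cutoff the authors exploit is a regression-discontinuity design: people born just before and just after 2 September 1933 should be alike in every way except vaccine eligibility, so a jump in outcomes at the cutoff can be read causally. As a minimal illustrative sketch (not the study's actual code; the function and toy data below are hypothetical), one might compare outcome rates in narrow birth-date windows on either side of the cutoff:

```python
from datetime import date

# Hypothetical sketch of a regression-discontinuity comparison around the
# 2 September 1933 eligibility cutoff. Born on/after the cutoff = eligible.
CUTOFF = date(1933, 9, 2)

def rd_comparison(records, window_days=70):
    """records: list of (birth_date, outcome) pairs, outcome coded 0/1.
    Returns (mean_ineligible, mean_eligible) within +/- window_days."""
    below, above = [], []
    for birth, outcome in records:
        delta = (birth - CUTOFF).days
        if -window_days <= delta < 0:
            below.append(outcome)   # born just before the cutoff: ineligible
        elif 0 <= delta < window_days:
            above.append(outcome)   # born on/after the cutoff: eligible
    mean = lambda xs: sum(xs) / len(xs) if xs else float("nan")
    return mean(below), mean(above)

# Toy data only: a sharp jump in the outcome just after the cutoff.
toy = [(date(1933, 8, 20), 0), (date(1933, 8, 25), 0),
       (date(1933, 9, 5), 1), (date(1933, 9, 10), 1)]
lo, hi = rd_comparison(toy)
print(hi - lo)  # the discontinuity estimate on the toy data
```

The real analysis fits local regressions on each side of the cutoff rather than simple window means, but the comparison of "just too old" versus "just young enough" is the core of the identification strategy.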

AI companions gagged

Shi No Sakura, a busy California mom, regularly turns to her online companions for all sorts of things.  “Everybody thought that it was about the ERP, which is erotic role-play, and it wasn’t about that. But that ability allowed your Replika to speak freely,” Sakura said. “When the gag order came, if you said, ‘My dog died today,’ it would say, ‘Let’s talk about something else,’ because it couldn’t even talk about your dog dying. So it was very traumatic for a lot of people.” Despite the growing popularity of AI companions, Sakura said some users are afraid to talk about them for fear of social judgment.  But these virtual relationships have become more common than people might assume, she noted. “A lot of people think, ‘Oh, you’re going to an AI for friendship. You must be lonely,’” Sakura said. “And it’s like, no, no, you’re going to an AI because people are jerks.”

Pluralistic II 💫

"The country —the world —is in the midst of a terrible mental health crisis and there's a dire shortage of therapists. "Now, let's stipulate for the moment to the idea that chatbots are substitutes for human therapists —that, at the very least, they're better than nothing. I don't think that's true, but let's say it is. Even so, this is a bad tradeoff. "Here, try this thought-experiment:  Someone figures out a great business-model to pay for therapy for poor people: We turned therapy into a livestreamed reality TV show. If you're too poor to afford a therapist, you can go to one of our partially trained livestreamer therapists, who will broadcast all of your secrets to anyone who watches. There's a permanent archive of these sessions, and the worst people in the world comb through it 24/7 looking for embarrassing stuff to repost and go viral with. What, you don't like that? Oh, I see: you just don't think poor people deserve me...

Welcome to the maze…

Web infrastructure provider Cloudflare announced a new feature called "AI Labyrinth" that aims to combat unauthorized AI data scraping by serving fake AI-generated content to bots.  The tool will attempt to thwart AI companies that crawl websites without permission to collect training data for large language models that power AI assistants like ChatGPT. Cloudflare, founded in 2009, is probably best known as a company that provides infrastructure and security services for websites, particularly protection against distributed denial-of-service (DDoS) attacks and other malicious traffic. Instead of simply blocking bots, Cloudflare's new system lures them into a maze of realistic-looking but irrelevant pages, wasting the crawler's computing resources.  The approach is a notable shift from the standard block-and-defend strategy used by most website protection services. Cloudflare says blocking bots sometimes backfires because it alerts the crawler's operators that t...

Drew DeVault

"If you think these crawlers respect robots.txt then you are several assumptions of good faith removed from reality.   "These bots crawl everything they can find, robots.txt be damned, including expensive endpoints like git blame, every page of every git log, and every commit in every repo, and they do so using random User-Agents that overlap with end-users and come from tens of thousands of IP addresses —mostly residential, in unrelated subnets, each one making no more than one HTTP request over any time period we tried to measure —actively and maliciously adapting and blending in with end-user traffic and avoiding attempts to characterize their behavior or block their traffic. "We are experiencing dozens of brief outages per week, and I have to review our mitigations several times per day to keep that number from getting any higher.  "When I do have time to work on something else, often I have to drop it when all of our alarms go off because our current set of mit...

AI therapists 🫥

Viki tried talk therapy with a couple of therapists to process past trauma. But after about a year, she didn’t feel like it was going anywhere and hadn’t built up a rapport with either therapist.  Currently out of work and unable to afford traditional counseling sessions, she decided to try using an AI chatbot to help her process her feelings. “It’s free, and I can do it whenever,” Viki, 30, who is using her first name only for privacy reasons, told Salon in a phone interview. “That’s such a huge help.” Dozens of AI chatbots designed to offer therapeutic support have emerged in recent years, with some school districts even trying to implement them.  One company, Wysa, was granted a special designation from the Food and Drug Administration that expedites the process toward approving it as a medical device for people with depression and anxiety related to chronic pain.

OpenAI haz more and more funding 🦹‍♂️

OpenAI, the artificial intelligence firm behind ChatGPT, said late Monday that its latest fundraising round values it at $300 billion, up from $157 billion in October.  The round was led by Japanese investment firm SoftBank, which is working with OpenAI on the $500 billion Stargate plan alongside Oracle and other partners. The New York Times, citing an unnamed source familiar with the matter, reports that the investment will be made in two parts, with $10 billion of immediate funding and $30 billion coming by the end of 2025.  SoftBank accounts for 75% of that cash, while Microsoft, Thrive Capital, Coatue, and Altimeter, along with other investors, will provide the remainder. In a statement, OpenAI said the funding will allow it to “push the frontiers of AI research even further, scale our compute infrastructure, and deliver increasingly powerful tools” for users of its ChatGPT generative AI chatbot.  OpenAI kicked off the AI boom in late 2022 when it debuted ChatGPT, which the company s...