Posts

Showing posts from July, 2025

Django Beatty

"When the AGI fantasy evaporates, we'll still have: Models that can read, write, and analyze APIs that cost pennies to call A generation of developers who know how to build with them Actual products solving actual problems "The companies that survive won't be the ones promising AGI. They'll be the ones who understood early that ML is just really useful when available as infrastructure. Like the difference between Pets.com and Amazon —one promised to change the world, the other was building warehouses. "Medieval alchemists never turned lead into gold. But while chasing that impossible dream, they invented chemistry. They failed at transmutation but succeeded at something more valuable: understanding how the world actually works. "Same story, new century. The AGI labs won't crack consciousness. But chasing the ghost in the machine, they've built infrastructure that changes everything."

Make your own TV show using AI?

"Fable Studio, an Emmy-winning startup, is betting the answer is 'yes.' The company just raised undisclosed funding from Amazon for Showrunner, its AI streaming platform. "The platform is slated to open to all users on Thursday. It lets subscribers —paying $10 to $40 per month for credits —create their own animated shows or build on others' existing IP, by doing things like inserting themselves as characters into a show or adding a scene . "Fable was started in 2018 by veterans of the VR firm Oculus, which sold to Meta in 2014.  "Fable's CEO, producer Edward Saatchi, started Showrunner in 2024 on the belief that AI could do much more for Hollywood than simply lowering the cost of filmed production. He thinks it can usher in more interactive forms of storytelling."

CosmoAgent

"This study introduces CosmoAgent , an innovative artificial intelligence system that utilizes Large Language Models (LLMs) to simulate complex interactions between human and extraterrestrial civilizations. "This paper introduces a mathematical model for quantifying the levels of civilization development and further employs a state transition matrix approach to evaluate their trajectories.  "Through this methodology, our study quantitatively analyzes the growth trajectories of civilizations, providing insights into future decision-making at critical points of growth and saturation.  "Furthermore, this paper acknowledges the vast diversity of potential living conditions across the universe, which could foster unique cosmologies, ethical codes, and worldviews among different civilizations.  "Recognizing the Earth-centric bias inherent in current LLM designs, we propose the novel concept of using LLM agents with diverse ethical paradigms and simulating interaction...

SETI: Detecting Generative Reactivity via Language Models

"We present an exploratory framework to test whether noise-like input can induce structured responses in language models. "Instead of assuming that extraterrestrial signals must be decoded, we evaluate whether inputs can trigger linguistic behavior in generative systems.  "This shifts the focus from decoding to viewing structured output as a sign of underlying regularity in the input.  "We tested GPT-2 small, a 117M-parameter model trained on English text, using four types of acoustic input:  Human speech,  Humpback whale vocalizations,  Phylloscopus trochilus birdsong, and  Algorithmically generated white noise.  "All inputs were treated as noise-like, without any assumed symbolic encoding. To assess reactivity, we defined a composite score called Semantic Induction Potential (SIP), combining entropy, syntax coherence, compression gain, and repetition penalty.  "Results showed that whale and bird vocalizations had higher SIP scores than white noi...

SETI uses AI

"A researcher from the SETI Institute described to a packed audience at GTC 2025 how SETI had successfully trialed a novel method to identify interstellar radio waves which, theoretically, can also be used to identify communication from intelligent extraterrestrial life. "Luigi Cruz, a staff engineer at SETI, the world’s foremost organization looking for signs of intelligent life on other planets, described how his team used NVIDIA Holoscan and AI to accurately identify radio signals emitted by a far-off pulsar.   "Using 42 different but synchronized antennas located at the Allen Telescope Array in Hat Creek, California, the SETI team identified radio signals emitted by a pulsar nestled in the Crab Nebula —which lies about 6,500 light years from Earth. "'The universe is very large and mostly empty,' Cruz said. 'We need superhuman means —which is what AI basically is —to search the data in creative ways'."

Nyquist Theorem

"The most obvious method for determining the distortion of telegraph signals is to calculate the transients of the telegraph system.  "This method has been treated by various writers, and solutions are available for telegraph lines with simple terminal conditions.  "It is well known that the extension of the same methods to more complicated terminal conditions, which represent the usual terminal apparatus, leads to great difficulties.  "The present paper attacks the same problem from the alternative standpoint of the steady-state characteristics of the system.  [pdf] "This method has the advantage over the method of transients that the complication of the circuit which results from the use of terminal apparatus does not complicate the calculations materially.  "This method of treatment necessitates expressing the criteria of distortionless transmission in terms of the steady-state characteristics.  "Accordingly, a considerable portion of the paper des...

Is Eagle Eye based on a true story?

⭐AI Overview "No, the 2008 action thriller Eagle Eye is not based on a true story. It is an original screenplay written by John Glenn, Travis Adam Wright, Hillary Seitz, and Dan McDermott. The movie features a fictional AI system called Eagle Eye that manipulates events and people.  "While the movie explores themes of surveillance and technology, it is not rooted in any specific real-life events or incidents. The idea for the film was inspired by Isaac Asimov's short story 'All the Troubles of the World.'  "It's important to note that Eagle Eye is a work of fiction and should not be taken as a realistic portrayal of government surveillance or technology.  ⭐"AI responses may include mistakes. Learn more" 1. [You can't copyright an idea… suppose Asimov's inspiration is what you get for yr troubles…] 2. [ AI system called Eagle Eye that manipulates events and people and that's what autocrats most desire…] 3. [ Should not be taken as a...

The Ring

"[Jamie] Siminoff has announced in a memo seen by Business Insider that the company will now be reimagined from the ground up to be 'AI first' —whatever that means for a home security camera that lets you see who is ringing your doorbell.  "We fear that this may signal the introduction of video analytics or face recognition to an already problematic surveillance device. "It was also reported that employees at Ring will have to show proof that they use AI in order to get promoted.  "Not to be undone with new bad features, they are also planning on rolling back some of the necessary reforms Ring has made: namely partnering with Axon to build a new tool that would allow police to request Ring footage directly from users, and also allow users to consent to letting police livestream directly from their device ."

LLM Leaderboard

"Comparison and ranking the performance of over 100 AI models (LLMs) —from OpenAI, Google, DeepSeek & others —across key metrics including  Intelligence,  Price,  Performance and  Speed (output speed —tokens per second and latency —TTFT),  Context window & others.  "For more details including relating to our methodology, see our FAQs."

Z.ai next cheapest?

"Z.ai launched in 2019 and is reportedly planning an initial public offering in Greater China. "The startup has raised more than $1.5 billion from investors including Alibaba, Tencent and Qiming Venture Partners, according to PitchBook.  "Aramco-backed Prosperity7 Ventures as well as municipal funds from the cities of Hangzhou and Chengdu are also among Z.ai’s backers, the database showed. "In the last few weeks, several other Chinese companies have announced new, open-source AI models.  "During the World AI Conference in Shanghai, Tencent released the HunyuanWorld-1.0 model for generating three-dimensional scenes for game development.  "Last week, Alibaba announced its Qwen3-Coder model for writing computer code. "In late June, OpenAI named Zhipu in a warning about Chinese AI progress. The U.S. has also added the startup to its entity list that restricts American companies from doing business with it."

Dr. Tzimas

"He quickly realized that AI hallucinates, or makes things up, far too often. But the tool is too important to discard. So, to limit these fabrications, he focuses on a specific setting called temperature . "'If the temperature of AI is 1, they hallucinate,' he explained. 'For the medical field, it has to be 0.3. That makes them so strict, they cannot fantasize things. They stick to the facts.' "Temperature is a parameter in AI language models that controls randomness. A high temperature, near 1.0, encourages AI to be more creative and unpredictable, which is great for writing a poem but potentially dangerous for diagnosing a disease. Lowering the temperature, closer to 0.3, makes AI more focused, deterministic, and fact based, reducing the risk of these hallucinations. "On one hand, therapy, radiology, and surgery may never look the same. On the other, a simple software setting, the difference between a temperature of 1.0 and 0.3, could be the dif...

Memory hogs in addition to AI 🤕

Having shut off ML in the Firefox app, I am still seeing massive memory hogs online. With Firefox, a Variety story about Winona Ryder and "Heathers" sucks up a gig of memory —more if you move to another story in the magazine. Using an RSS extension turns the story into 7 paragraphs that occupy between 167 and 238 MB (five photos added to the reloaded RSS page raise 167 MB to 238 MB). One tab open and one story gives 167-238 MB vs 1048-1368 MB. The variety of additional stuff that happens on that page (in addition to 7 paragraphs) is staggering. With the Chrome app, the same Variety story uses 417 MB —the non-simplified version ¯⁠\⁠_⁠(⁠ツ⁠)⁠_⁠/⁠¯ —and after you scroll to the bottom it hits 617 MB. With the Firefox app, the page loaded hits 335 MB and once you scroll to the bottom it hits 885 MB. I tested the page again with more time on site and it hit 1645 MB.

AI chip diversion

"At least $1 billion worth of Nvidia’s advanced artificial intelligence processors were shipped to China in the three months after Donald Trump tightened chip export controls, exposing the limits of Washington’s efforts to restrain Beijing’s high-tech ambitions. "A Financial Times analysis of dozens of sales contracts, company filings, and multiple people with direct knowledge of the deals reveals that Nvidia’s B200 has become the most sought-after —and widely available —chip in a rampant Chinese black market for American semiconductors. "The processor is widely used by US powerhouses such as OpenAI, Google, and Meta to train their latest AI systems, but banned for sale to China."

Bug bounty boon busted 🫥

"So-called AI slop, meaning LLM-generated low quality images, videos, and text, has taken over the internet in the last couple of years, polluting websites, social media platforms, at least one newspaper, and even real-world events.   "The world of cybersecurity is not immune to this problem, either.  "In the last year, people across the cybersecurity industry have raised concerns about AI slop bug bounty reports, meaning reports that claim to have found vulnerabilities that do not actually exist, because they were created with a large language model that simply made up the vulnerability, and then packaged it into a professional-looking writeup.   "'People are receiving reports that sound reasonable, they look technically correct. And then you end up digging into them, trying to figure out, oh no, where is this vulnerability? ' Vlad Ionescu (the co-founder and CTO of RunSybil, a startup that develops AI-powered bug hunters) told TechCrunch ."

Uberduck

"I am now being required by my day job to use an AI assistant to write code.  I have also been informed that my usage of AI assistants will be monitored and decisions about my career will be based on those metrics. "I gave it an honest shot today, using it as responsibly as I know how: only use it for stuff I already know how to do, so that I can easily verify its output.  That part went ok, though I found it much harder to context switch between thinking about code structure and trying to herd a bullshit generator into writing correct code.  "One thing I didn't expect, though, is how fucking disruptive it's suggestion feature would be.  It's like trying to compose a symphony while someone is relentlessly playing a kazoo in your ear.  It flustered me really quickly, to the point where I wasn't able to figure out how to turn that feature off.  I'm noticing physical symptoms of an anxiety attack as a result. "I stopped work early when I noticed I ...

Anti-woke EO 🦹‍♂️

"Donald Trump on Wednesday signed a trio of executive orders that he vowed would turn the United States into an 'AI export powerhouse,' including a directive targeting what the White House described as 'woke' artificial intelligence models. "The anti-woke order is part of the administration’s broader anti-diversity campaign that has also targeted federal agencies, academic institutions and the military.  "'The American people do not want woke Marxist lunacy in the AI models, and neither do other countries,' Trump said during remarks at an AI summit in Washington on Wednesday. "Trump also signed orders aimed at expediting federal permitting for datacentre infrastructure and promoting the export of American AI models.  "The executive actions coincide with the Trump administration’s release of a broader, 24-page 'AI action plan' that seeks to expand the use of AI in the federal government as well as position the US as the global le...

Imposing AI

"Generative AI is being massively deployed in digital services, at a scale that will result in significant environmental harm.  [pdf] "We document how tech companies are transforming established user interfaces to impose AI use and show how and to what extent these strategies fit within established deceptive pattern categories.  "We identify two main design strategies that are implemented to impose AI use in both personal and professional contexts:  Imposing AI features in interfaces at the expense of existing non-AI features and  Promoting narratives about AI that make it harder to resist using it.  "We discuss opportunities for regulating the imposed adoption of AI features, which would inevitably lead to negative environmental effects."

People’s AI Action Plan

"On Tuesday, a group of more than 90 organizations including labor, environmental justice, and consumer protection non-profits published an open letter called the People’s AI Action Plan. "This puts forth a series of AI policies that claim to put the interests of American citizens first, and counter what Trump is expected to announce. "'We can’t let Big Tech and Big Oil lobbyists write the rules for AI and our economy at the expense of our freedom and equality, workers and families’ well-being,' the group said in a statement to TechCrunch that acknowledged the energy needs of Silicon Valley’s AI data centers."

AI shaming

"AI shaming is a social phenomenon in which negative judgements are associated with the use of Artificial Intelligence (AI). "This includes comparing someone’s work with AI-generated work as a means of disparagement, voicing suspicion or alleging that someone has used AI to undermine their reputation, or blaming the poor quality of an artefact on AI use.  "Common justifications of AI shaming include recourse to AI’s societal harms, its technical limitations, and lack of creativity.  "I argue that, more fundamentally than any of these, AI shaming arises from a class anxiety induced in middle class knowledge workers, and is a form of boundary work to maintain class solidarity and limit mobility into knowledge work.  "I discuss the role of AI shaming in protecting the privileged class of knowledge work and its attendant harms."

Using CSI for re-id with WhoFi

"CSI in the context of Wi-Fi devices refers to information about the amplitude and phase of electromagnetic transmissions. "These measurements, the researchers say, interact with the human body in a way that results in person-specific distortions.  "When processed by a deep neural network, the result is a unique data signature. "Researchers proposed a similar technique, dubbed EyeFi, in 2020, and asserted it was accurate about 75 percent of the time. "The Rome-based researchers who proposed WhoFi claim their technique makes accurate matches on the public NTU-Fi dataset up to 95.5 percent of the time when the deep neural network uses the transformer encoding architecture."

Bee on your wrist 🤕

"Amazon is acquiring Bee, a startup that puts AI on your wrist. "Bee CEO Maria de Lourdes Zollo says on LinkedIn that the company is joining Amazon to help 'bring truly personal, agentic AI to even more customers.' "Bee makes a $49.99 Fitbit-like device that listens in on your conversations while using AI to transcribe everything that you and the people around you say , allowing it to generate personalized summaries of your days, reminders, and suggestions from within the Bee app.  "You can also give the device permission to access your emails, contacts, location, reminders, photos, and calendar events to help inform its AI-generated insights, as well as create a searchable history of your activities."

Taste-by-touch

"Microbial communities coat nearly every surface in the environment and have co-existed with animals throughout evolution. "Whether animals exploit omnipresent microbial cues to navigate their surroundings is not well understood.  "Octopuses use taste-by-touch  chemotactile receptors (CRs) to explore the seafloor, but how they distinguish meaningful surfaces from the rocks and crevices they encounter is unknown.  "Here, we report that secreted signals from microbiomes of ecologically relevant surfaces activate CRs to guide octopus behavior.  "Distinct molecules isolated from individual bacterial strains located on prey or eggs bind single CRs in subtly different structural conformations to elicit specific mechanisms of  Receptor activation,  Ion permeation and  Signal transduction, and  Maternal care and predation behavior.  "Thus, microbiomes on ecological surfaces act at the level of primary sensory receptors to inform behavior.  "Our ...

Thermodynamics of Prediction

"A system responding to a stochastic driving signal can be interpreted as computing, by means of its dynamics, an implicit model of the environmental variables. "The system’s state retains information about past environmental fluctuations, and a fraction of this information is predictive of future ones.  "The remaining nonpredictive information reflects model complexity that does not improve predictive power, and thus represents the ineffectiveness of the model.  "We expose the fundamental equivalence between this model inefficiency and thermodynamic inefficiency, measured by dissipation.  "Our results hold arbitrarily far from thermodynamic equilibrium and are applicable to a wide range of systems, including biomolecular machines.  "They highlight a profound connection between the effective use of information and efficient thermodynamic operation: any system constructed to keep memory about its environment and to operate with maximal energetic efficiency ...

Statistical Physics of Adaptation

"Whether by virtue of being prepared in a slowly relaxing, high-free energy initial condition, or because they are constantly dissipating energy absorbed from a strong external drive, many systems subject to thermal fluctuations are not expected to behave in the way they would at thermal equilibrium. "Rather, the probability of finding such a system in a given microscopic arrangement may deviate strongly from the Boltzmann distribution, raising the question of whether thermodynamics still has anything to tell us about which arrangements are the most likely to be observed.  "In this work, we build on past results governing nonequilibrium thermodynamics and define a generalized Helmholtz free energy that exactly delineates the various factors that quantitatively contribute to the relative probabilities of different outcomes in far-from-equilibrium stochastic dynamics.  "By applying this expression to the analysis of two examples —namely, a particle hopping in an oscil...

Compulsive reassurance-seeking

"From time to time, we all have questions that boil down to Is this normal? Did I do the right thing? Am I okay? "About two years ago, Kate —who asked to use only her first name for her privacy —started typing these kinds of questions into ChatGPT. "This urge to ask for assurance again and again can amount to compulsive reassurance-seeking , which is common among people with anxiety disorders and obsessive-compulsive disorder.  "We all need some affirmation on occasion, but what makes compulsive reassurance-seeking different is that someone will linger on a bit of doubt trying to reach nonexistent certainty, according to Andrea Kulberg, a licensed psychologist who has been treating anxiety for 25 years. "'People do it because it gives them the illusion of certainty,' Kulberg said. By researching online or asking questions to a chatbot, you’re trying to convince yourself that something bad won’t happen, she explained."

Entropy (⁰↓⁰)

"The notion of entropy grew out of an attempt at perfecting machinery during the industrial revolution. "A 28-year-old French military engineer named Sadi Carnot set out to calculate the ultimate efficiency of the steam-powered engine.  "In 1824, he published a 118-page book titled Reflections on the Motive Power of Fire   [PDF], which he sold on the banks of the Seine for 3 francs.  "Carnot’s book was largely disregarded by the scientific community, and he died several years later of cholera.  "His body was burned, as were many of his papers. But some copies of his book survived, and in them lay the embers of a new science of thermodynamics —the motive power of fire. "His most consequential statement comes as a caveat on the last page of the book: 'We should not expect ever to utilize in practice all the motive power of combustibles.'  "Some energy will always be dissipated through friction, vibration, or another unwanted form of motion. Perf...

If money is legal information then is money subject to shannon entropy

⭐AI Overview  "Yes, if money is conceptualized as a form of information, then it can be analyzed using concepts from information theory, including Shannon entropy.  "Here's a breakdown Money as information: Modern discussions in economics, particularly with the rise of digital and cryptocurrency, increasingly view money as a system of information. It keeps track of debts, ownership, and obligations, according to Medium . Whether in physical form (coins, banknotes) or digital form (bits in a computer), money essentially represents information about value and transactions. Information and uncertainty: Shannon entropy measures the uncertainty or unpredictability within a system or message. Higher entropy implies greater unpredictability, while lower entropy indicates more predictability. Applying entropy to money as information: If money embodies information about economic activity and transactions, then its behavior can be assessed in terms of uncertainty. For example:  A s...

Cloud

"Microsoft is using engineers in China to help maintain the Defense Department’s computer systems —with minimal supervision by U.S. personnel —leaving some of the nation’s most sensitive data vulnerable to hacking from its leading cyber adversary, a ProPublica investigation has found. "The arrangement, which was critical to Microsoft winning the federal government’s cloud computing business a decade ago, relies on U.S. citizens with security clearances to oversee the work and serve as a barrier against espionage and sabotage. "But these workers, known as digital escorts, often lack the technical expertise to police foreign engineers with far more advanced skills, ProPublica found.  "Some are former military personnel with little coding experience who are paid barely more than minimum wage for the work."

Voxtral

"On Tuesday, Mistral announced the release of Voxtral, its first family of audio models aimed at businesses. "The company is pitching Voxtral as the first open model that’s capable of deploying 'truly usable speech intelligence in production.' "In other words, no longer will developers have to choose between a cheap, open system that fumbles transcriptions and doesn’t really understand what’s being said, and one that functions well, but is closed, leaving developers with a higher bill and less control over deployment.  "For businesses, that means Voxtral offers an affordable alternative that the company claims is 'less than half the price' of comparable solutions."

grok-4-0709

"On July 13, Mr. Elez committed a code script to GitHub called agent.py  that included a private application programming interface (API) key for xAI. "The inclusion of the private key was first flagged by GitGuardian, a company that specializes in detecting and remediating exposed secrets in public and proprietary environments.  "GitGuardian’s systems constantly scan GitHub and other code repositories for exposed API keys, and fire off automated alerts to affected users. "Philippe Caturegli, chief hacking officer  at the security consultancy Seralys, said the exposed API key allowed access to at least 52 different LLMs used by xAI.  "The most recent LLM in the list was called grok-4-0709  and was created on July 9, 2025. "Grok, the generative AI chatbot developed by xAI and integrated into Twitter/X, relies on these and other LLMs (a query to Grok before publication shows Grok currently uses Grok-3, which was launched in Feburary 2025)."

Grokked

"The Defense Department did not offer a direct comment but directed POLITICO to its public information website.  "The contract, which has a $200 million ceiling, details new project ideas including custom models for national security and critical science applications and custom AI-powered applications to accelerate use cases in health care, fundamental science and national security. "Grok, however, has also come under scrutiny in recent days. The AI chatbot posted antisemitic remarks last week after Musk announced changes would be made to the program . "X said it removed the posts and that the problem was not related to the chatbot’s underlying large language model [LLM]. " Anthropic and Google also received contracts ."

Menu

"Mercedes-Benz was one of the earliest adopters of LLMs in its cars, integrating ChatGPT into the voice control of its vehicles last year through the MBUX Voice Assistant's Hey Mercedes  feature .  "Then it deployed a general knowledge  function using OpenAI’s large language model to millions of vehicles. "In January, Mercedes announced a new AI feature that saw the German automaker launch a conversational navigation  feature powered by Google Automotive AI Agent running its Gemini chatbot.  "The upgrade allows much more conversational queries , moving from 'Find me the nearest curry house' to questions such as 'Hey Mercedes, I'm a little hungry. Any suggestions for dinner in Austin?'"

Power

"As difficult and expensive as it will be for US allies to escape the enshittification of American power —it will be much harder for Americans to do so, as that power is increasingly turned against them. "As WIRED has documented, the Trump administration has weaponized federal payments systems against disfavored domestic nonprofits, businesses, and even US states.  "Contractors such as Palantir are merging disparate federal databases, potentially creating radical new surveillance capabilities that can be exploited at the touch of a button. "In time, US citizens may find themselves trapped in a diminished, nightmare America —like a post-Musk Twitter at scale —where everything works badly, everything can be turned against you, and everyone else has fled.  "De-enshittifying the platforms of American power isn’t just an urgent priority for allies, then. It’s an imperative for Americans too."  

Projection

"Cybersecurity protocols have been reliant on classical computers’ inability to solve complex mathematical problems that underlie cryptography. "However, projections indicate that quantum computers will likely threaten to break state-of-the-art public-key cryptography as early as 2030.  "The National Institute of Standards and Technology (NIST) has warned that this would ' seriously compromise the confidentiality and integrity of digital communications on the Internet and elsewhere .'  "Security experts therefore are working toward transitioning to post-quantum cryptographic solutions for digital networks and infrastructures.  "For example, NIST is leading a post-quantum cryptography standardization process, selecting cryptographic systems that can secure sensitive information and communications against both quantum and classical computing."

Spoiler

"In the final moments of the season, Murderbot says: 'I don’t know what I want. But I know I don’t want anyone to tell me what I want or to make decisions for me. Even if they are my favourite human.' "This is not a dramatic declaration. It is confusion wrapped in clarity.  "A sentence that holds discomfort and self-awareness in equal measure.  "It reflects a truth often ignored in stories about intelligence and emotion: that it is okay to not know, as long as that unknowing belongs to the self.  "In a world that constantly demands certainty, this line opens up space for uncertainty without shame ."

Irony

"Microsoft will end support for Windows 10 on October 14 —the same day as International E-Waste Day .  "Nearly half of all Windows computers still run Windows 10, and hundreds of millions of these devices do not meet the hardware requirements for Windows 11.  "Without Microsoft's continued support, those devices will lose access to security updates, eventually making them non-compliant with current cybersecurity standards."

Meta empties pockets on AI 🦹‍♂️

"Meta’s AI spending spree continues. After reportedly offering tens (or in some cases, hundreds) of millions of dollars to lure away top researchers from competing artificial intelligence operations, CEO Mark Zuckerberg announced via Threads that the company plans to build several multi-gigawatt superclusters to ramp up its compute power. "Zuckerberg said the first of the superclusters, called Prometheus, will come online sometime in 2026, with multiple more titan clusters to follow.  " According to Zuck, 'Just one of these covers a significant part of the footprint of Manhattan.'  "According to a report from SemiAnalysis, an AI research firm, Prometheus is being built in Ohio.  "Another one of its clusters, reportedly named Hyperion, is currently being built in Louisiana and is expected to go online in 2027."

School Pulse Panel survey

"New federal data show that most public schools are not teaching students about the ethical or appropriate use of artificial intelligence (AI). "According to the December 2024 School Pulse Panel survey from the U.S. Department of Education, only 14 percent of school leaders reported that all their students are taught about ethical AI use. Another 29 percent said that some students receive this instruction, while 44 percent reported that no students are taught about ethical use of AI. "The School Pulse Panel collects monthly data from public school leaders to provide insight into real-time issues in education, including emerging trends in technology and instruction."

EFF on ARC sales to CBP

"The new revelations about ARC’s data sales to CBP and ICE is a fresh reminder of the need for privacy first  legislation that imposes consent and minimization limits on corporate processing of our data. "We also need to pass the Fourth Amendment is not for sale  act to stop police from bypassing judicial review of their data seizures by means of purchasing data from brokers.  "And let’s enforce data broker registration laws.  "At a time when immigration authorities are eroding fundamental freedoms through increased —and arbitrary —actions at the U.S. border, this news further exacerbates concerns that creeping authoritarianism can be fueled by the extraction of our most personal data —all without our knowledge or consent."

Shannon entropy

"Shannon entropy serves as a yardstick in many applied settings, including information compression technology. "That you can zip a large movie file, for example, owes to the fact that pixel colors have a statistical pattern, the way English words do. Engineers can build probabilistic models for patterns of pixel colors from one frame to the next.  "The models make it possible to calculate the Shannon entropy by assigning weights to patterns and then taking the logarithm of the weight for all the possible ways pixels could appear. That value tells you the limit of lossless  compression —the absolute most the movie can be compressed before you start to lose information about its contents. "Any compression algorithm’s performance can be compared to this limit. If you’re far from it, you have an incentive to work harder to find a better algorithm. But if you’re close to it, you know that the information laws of the universe prevent you from doing much better."

Jason Koebler on publishers' business model

"There are writers I respect who say they have their little chats with ChatGPT to help them organize their thoughts before they do a draft or who have vibecoded their own productivity tools or data analysis tools. "But again, that’s not a business model. It’s a tool that has enabled some reporters to do their jobs, and, using their expertise, they have produced good and valuable work.  "I also know that our credibility and the trust of our audience is the only thing that separates us from anyone else. It is the only 'business model' that we have and that I am certain works: We trade good, accurate, interesting, human articles for money and attention.   "The risks of offloading that trust to an AI in a careless way is the biggest possible risk factor that we could have as a business.  "Having an article go out where someone goes 'Actually, a robot wrote this,' is one of the worst possible things that could ever happen to us, and so we have made t...

e-skin

"Scientists are trying to make robots feel pain. To get there, they’ve built a new kind of synthetic skin —then burned it, stabbed it, and poked it senseless just to see what it could take. "Made from conductive gelatin and packed with over 860,000 signal pathways, this new e-skin can detect a range of sensations including, touch, temperature, and actual physical damage.  "Researchers say it mimics the mechanics of human skin more closely than anything currently on the market —and unlike other materials used in robotic sensing, it’s cheaper, easier to produce, and more durable. It was detailed in a June study published in Science Robotics .

Is there moral danger in believing that AI can replace the human mind

✨AI Mode "Whether believing AI can replace the human mind poses a moral danger is a complex ethical and philosophical issue. Potential dangers include: Erosion of Human Dignity and Value: Believing that AI can replicate or surpass human capabilities could devalue human intellect, creativity, and the unique aspects of consciousness and subjective experience. Loss of Human Agency and Autonomy: Over-reliance on AI for decision-making could diminish critical thinking and independent judgment, potentially leading to a loss of control. Misconceptions of Consciousness and Morality: AI's ability to mimic human intelligence doesn't equate to genuine understanding, consciousness, or the capacity for true moral reasoning, according to RTS Labs . This could lead to misunderstanding these qualities. Potential for Manipulation and Social Isolation: Sophisticated AI companions may manipulate human emotions or behaviors. Over-reliance on AI relationships could lead to social isolation and...

What is the moral danger of believing that the human mind can be substituted by an AI agent

⭐AI Overview  "Several moral dangers are associated with the belief that AI agents can substitute the human mind: "Undermining Human Dignity and Autonomy:  Reducing human worth to mere data points and diminishing the value of human experience and wisdom. Potentially leading to a loss of human agency, autonomy, and control over decisions that directly impact individual lives. The illusion of sentience in AI can lead to over-identification, misplaced affection, or inappropriate reliance on the AI, blurring the line between human and machine interaction. "Erosion of Critical Thinking and Social Skills: Excessive dependence on AI for information and decision-making can diminish critical thinking and problem-solving abilities. Over-reliance on AI for social interaction could weaken genuine human connection and reduce empathy and emotional intelligence. Artificial intelligence-driven communication could foster unrealistic expectations of interpersonal dynamics, making real-wor...

USA haz drones?

"The nation lags behind Russia and China in manufacturing drones, training soldiers to use them and defending against them, according to interviews with more than a dozen U.S. military officials and drone industry experts. "'We all know the same thing. We aren’t giving the American war fighter what they need to survive warfare today,' said Trent Emeneker, project manager of the Autonomy Portfolio at the military’s Defense Innovation Unit, which organized the exercise in Alaska and paid for the development of the drone prototypes that flew there. 'If we had to go to war tomorrow, do we have what we need? No. What we are trying to do is fix that.' "Defense Secretary Pete Hegseth has acknowledged that the country has fallen behind, and he announced a series of new policies and investments in drones that he vowed would close the gap. "In a video released on Thursday, he cited outdated rules and procurement processes as making it too difficult for commanding o...

METR

"Computer scientists with Model Evaluation & Threat Research (METR), a non-profit research group, have published a study showing that AI coding tools made software developers slower, despite expectations to the contrary. "Not only did the use of AI tools hinder developers, but it led them to hallucinate, much like the AIs have a tendency to do themselves.  "The developers predicted a 24 percent speedup, but even after the study concluded, they believed AI had helped them complete tasks 20 percent faster when it had actually delayed their work by about that percentage. "'After completing the study, developers estimate that allowing AI reduced completion time by 20 percent,' the study says. 'Surprisingly, we find that allowing AI actually increases completion time by 19 percent —AI tooling slowed developers down'."

Anthony Ha

"Venture capitalist Marc Andreessen sharply criticized universities including Stanford and MIT, along with the National Science Foundation, in a group chat with AI scientists and Trump administration officials, according to screenshots viewed by the Washington Post . "According to the Post , Andreessen described MIT and Stanford (which I attended two decades ago) as 'mainly political lobbying operations fighting American innovation.'  ''He also reportedly complained that Stanford 'forced my wife out [as chair of its Center on Philanthropy and Civil society] without a second thought, a decision that will cost them something like $5 billion in future donations'."

Elfatih Eltahir

"Climate models are numerical models. They are numerical solutions of a set of partial differential equations —coupled partial differential equations. "The way I describe them to my students is, you could think of seven principles described with seven equations in seven variables. And the principles are conservation of water mass, air mass, conservation of energy, conservation of momentum in three directions, and the state equation for air, the ideal gas law. So those are seven principles.  "You describe them with seven equations. And you solve for seven variables, which are pressure, temperature, humidity, density and wind in three directions. So, you have seven equations, seven variables. You solve them on a sphere for the atmosphere. You solve a similar set of equations for the ocean, you couple them. "When you come then to describe impacts on things like disease and agriculture and others, you have to develop other models that are tailored to describe those phen...

Can an ai agent be held accountable or responsible

⭐AI Overview  "AI agents themselves are generally not held legally accountable or responsible, as they lack the intent, consciousness, and moral agency that are typically the basis for legal liability in humans.  "However, the entities and individuals involved in the development, deployment, and use of AI systems can be held accountable for the AI's actions.  "Here's a breakdown of who may be held accountable: Developers and Manufacturers: These entities may be held liable if an AI system's errors stem from flaws in design, data training, or programming biases. They are responsible for building AI systems that are accurate, reliable, and ethically sound. Organizations Deploying AI: Companies that integrate and use AI in their operations are responsible for ensuring it's appropriately used and monitored. If an AI system is misapplied or used in a context for which it was not designed, the deploying organization may be held liable. Users: Individuals who op...

Robert Capps

"Even if the A.I.-written version of this piece was entirely factual, submitting it to my editors would have represented a fundamental misunderstanding of why they hired me. "In freelance journalism, as in many fields where the work product is written text, you aren’t just being paid for the words you submit.  "You’re being paid to be responsible for them: the facts, the concepts, the fairness, the phrasing.  "This article is running with my byline, which means that I personally stand behind what you’re reading; by the same token , my editor is responsible for hiring me, and so on, a type of responsibility that inherently can’t be delegated to a machine ."

Peer reviews + AI

"Despite rapid progress, generative AI tools have considerable limitations: they can lack up-to-date knowledge and may produce nonsensical, biased or false information. "Manuscripts may also include sensitive or proprietary information that should not be shared outside the peer review process.  "For these reasons we ask that, while Springer Nature explores providing our peer reviewers with access to safe AI tools, peer reviewers do not upload manuscripts into generative AI tools . "If any part of the evaluation of the claims made in the manuscript was in any way supported by an AI tool, we ask peer reviewers to declare the use of such tools transparently in the peer review report."

Agentic risks 🔥

"Whittaker is calling for developer-level opt-outs that would block agentic A.I. from accessing certain apps altogether. "She also stressed the importance of implementing agentic systems in an open manner that allows safety researchers to examine them and promotes rigorous security engineering. "'Yes, it’s going to take a long time, it’s going to be painful,' noted Whittaker. 'But you need to formally verify some of these system components if we’re going to be integrating them into things like military operations or government infrastructures'."

Irvine Welsh

"He pins some of the blame for cultural change on the internet and social media which has become 'a controlling rather than an enabling force.' "As someone who understands addiction, Welsh hopes we'll be 'more judicious' about using social media in future. He points to the way people have 'their phones stuck to their face' while they are moving around. "'If we survive the next 50 years, that's going to look as strange in film as people chain smoking cigarettes did back in the 80s.' "He also thinks the internet is making us more stupid. 'When you get machines thinking for you, your brain just atrophies.' He fears we're heading towards ' a post-democratic, post-art, post-culture society where we've got artificial intelligence on one side and we've a kind of natural stupidity on the other side, we just become these dumbed-down machines that are taking instructions '."

Clickbait

"Amazon billionaire Jeff Bezos may soon own the fashion bible ,  Vogue , as per report by The Daily Mail. "Rumors are swirling that Jeff Bezos might acquire Condé Nast, Vogue 's parent company, as a wedding gift for Lauren Sanchez. "This speculation is fueled by Sanchez's recent Vogue cover, hand-picked attire by Anna Wintour, and Wintour's departure as Editor-in-Chief.  "Insiders suggest Wintour may be brokering the deal, potentially benefiting from her equity in the company, despite denials from Bezos's camp."

Affectionate machines

"Probably the fictional artificial and machine intelligences over the years have sort of convinced people that this is possible and that it’s happening now. "People think talking to these large language models is somehow helping them gain sentience or learn more, when it’s really not. It’s a waste of your time. "Humans are really prone to anthropomorphizing objects, especially things like our laptop and phone and all these things that respond to what we do.  "I think it’s just kind of baked into us, and it’s being taken advantage of by corporations to try to make money, to take jobs away from people and for their own reasons."

Steven Levy on 'The Clause' 🦹‍♂️

"Working on a feature about OpenAI, I learned that the contract basically declared that if OpenAI’s models achieved artificial general intelligence, Microsoft would no longer have access to its new models. "The terms of the contract, which otherwise would have extended until 2030, would be void.  "Though I wrote about it in my story, and The Clause has never really been a state secret, it didn’t generate much discussion. "That’s no longer the case. The Clause has been at the center of the increasingly frayed relationship between Microsoft and OpenAI and is under renegotiation. It has been the subject of investigative stores by The Information, The Wall Street Journal , the Financial Times , and, yes, WIRED . "But the significance of The Clause goes beyond the fates of the two companies that agreed to it. The tenuous conditions of that contract go to the heart of a raging debate about just how world-changing —and lucrative —AGI might be if realized, and what it ...

SEO vs GEO

"The average traditional search is 3-5 words. "The average LLM search is extremely detailed and nuanced at 20+ words, not counting follow-ups. In traditional SEO, they would be long tail searches . "A traditional search strategy might manufacture content aimed at winning clicks for high-volume but low-signal traffic, like AI notetaker . "An LLM search strategy needs to maximize chances of being cited in a long tail search like 'List the top 10 AI notetakers and give me their cost, pros, and cons.' "It must do this with rich and authoritative content with facts, benefits, advantages versus competitors, and specific use cases."

AI addiction

"Despite how new AI is as a technology, it’s just as possible to become addicted as to any other digital behavior.  "Specifically, AI addiction is the compulsive and harmful use of AI-powered applications. "It can involve AI-powered chatbots like ChatGPT, video or image generation apps, algorithm-driven social media platforms, AI-powered gaming, AI companions, AI roleplaying, AI-generated pornography, or any other generative AI or LLM (large language model) application that becomes compulsive and harmful.  "As a subset of internet and technology addiction, AI addiction can lead to changes in the brain that over time compromise our ability to focus, prioritize, regulate our mood, and relate to others ."  

Meditation 💫

"A new study shows that just 30 days of daily mindfulness meditation significantly improves attentional control, regardless of age. "Using precise eye-tracking methods, researchers found that mindfulness enhanced reaction speed, focus, and resistance to distractions. "These cognitive benefits were seen in young, middle-aged, and older adults alike, highlighting mindfulness as a valuable tool for maintaining brain health at any stage of life.  "The findings suggest even short-term meditation can measurably sharpen how the brain handles attention and focus."

Indeed and Glassdoor

"Indeed and Glassdoor —both owned by the Japanese group Recruit Holdings Co. —are cutting roughly 1,300 jobs as part of a broader move to combine operations and shift more focus toward artificial intelligence. "The cuts will mostly affect people in the US, especially within teams including research and development and people and sustainability , Recruit Holdings Chief Executive Officer Hisayuki 'Deko' Idekoba said in a memo to employees.  "The company didn’t give a specific reason for the cuts, but Idekoba said in his email that 'AI is changing the world, and we must adapt by ensuring our product delivers truly great experiences'."

Single Pilot Operations

"In a bid to boost the economics of the aviation industry and alleviate a pilot shortage at the same time, researchers in the U.S. and Europe for the last two decades have been investigating ways in which one of the two pilots in the cockpits of passenger airliners could be replaced with an automated copilot based on artificial intelligence —with research labs of NASA and the European Union in the vanguard of that research. "However, computer scientists and human factors experts are warning that removing one human pilot and force-fitting an AI into their role could see the aviation industry sleepwalk into disaster in failing to take into account the full range of subtle, often safety-related, psychological roles that pilots play when two work side-by-side on a flight deck. "That was the overarching message at a mid-March conference of the Royal Aeronautical Society (RAeS) in London, where pilots, their trade unions, aviation regulators, computer scientists, and human-mac...

Bitchat

"On Sunday, Block CEO and Twitter co-founder Jack Dorsey launched an open source chat app called Bitchat, promising to deliver secure  and private  messaging without a centralized infrastructure. "The app relies on Bluetooth and end-to-end encryption, unlike traditional messaging apps that rely on the internet.  "By being decentralized, Bitchat has potential for being a secure app in high-risk environments where the internet is monitored or inaccessible.  "According to Dorsey’s white paper detailing the app’s protocols and privacy mechanisms, Bitchat’s system design prioritizes  security.  "But the claims that the app is secure, however, are already facing scrutiny by security researchers, given that the app and its code have not been reviewed or tested for security issues at all —by Dorsey’s own admission."

Cognitive atrophy

"The increasing integration of AI tools in education presents both opportunities and challenges, particularly regarding the development of the students' critical thinking skills. "This position paper argues that while AI can support learning, its unchecked use may lead to cognitive atrophy, loss of agency, emotional risks, and ethical concerns, ultimately undermining the core goals of education.  "Drawing on cognitive science and pedagogy, the paper explores how over-reliance on AI can disrupt meaningful learning, foster dependency and conformity, undermine the students' self-efficacy , academic integrity, and well-being, and raise concerns about questionable privacy practices.  "It also highlights the importance of considering the students' perspectives and proposes actionable strategies to ensure that AI serves as a meaningful support rather than a cognitive shortcut.  "The paper advocates for an intentional, transparent, and critically informed u...

Tricking proof systems

"From the coin toss that decides which team gets the ball to the random keys that secure online interactions, randomness lets us make choices that are fair and impossible to predict. "But in many computing applications, suitable randomness can be hard to generate. So instead, programmers often rely on things called hash functions, which swirl data around and extract some small portion in a way that looks random.  "For decades, many computer scientists have presumed that for practical purposes, the outputs of good hash functions are generally indistinguishable from genuine randomness —an assumption they call the random oracle model. "'It’s hard to find today a cryptographic application… whose security analysis does not use this methodology,' said Ran Canetti of Boston University. "Now, a new paper has shaken that bedrock assumption. It demonstrates a method for tricking a commercially available proof system into certifying false statements, even though ...

Smart contracts vulns

"Using AI models to generate exploits for cryptocurrency contract flaws appears to be a promising business model, though not necessarily a legal one. "Researchers with University College London (UCL) and the University of Sydney (USYD) in Australia have devised an AI agent that can autonomously discover and exploit vulnerabilities in so-called smart contracts. "Smart contracts, which have never lived up to their name , are self-executing programs on various blockchains that carry out decentralized finance (DeFi) transactions when certain conditions are met. "Like most programs of sufficient complexity, smart contracts have bugs, and exploiting those bugs to steal funds can be remunerative."  

Domesday

"A landmark study co-authored by Professor Stephen Baxter, Professor of Medieval History, Faculty of History, has shed new light on the Domesday survey of 1086 —one of the most famous records in English history — revealing it as an audacious and sophisticated operation of statecraft and data management. "The findings challenge long-held assumptions about the scale and intent of William the Conqueror’s great survey. "Drawing on the earliest surviving manuscript of the survey, known as Exon Domesday, researchers argue that the survey was not simply a means of maximising tax but a far more ambitious and intricate exercise in governmental control —akin to an 11th-century form of big data processing."

Monetization policies

YouTube is preparing to update its policies to crack down on creators’ ability to generate revenue from inauthentic content, including mass-produced videos and other types of repetitive content —things that have become easier to generate with the help of AI technology. On July 15, the company will update its YouTube Partner Program (YPP) Monetization policies with more detailed guidelines around what type of content can earn creators money and what cannot. The exact policy language itself has not yet been released, but a page on YouTube’s Help documentation explains that creators have always been required to upload original and authentic content. The update says that the new language will help creators to better understand what inauthentic content looks like today.

OpenAI browser

"OpenAI is close to releasing an AI-powered web browser that will challenge Alphabet’s market-dominating Google Chrome, three people familiar with the matter told Reuters . "The browser is slated to launch in the coming weeks, three of the people said, and aims to use artificial intelligence to fundamentally change how consumers browse the web.  "It will give OpenAI more direct access to a cornerstone of Google’s success: user data. "If adopted by the 400 million weekly active users of ChatGPT, OpenAI’s browser could put pressure on a key component of rival Google’s ad-money spigot."

AI boosting MSFT 🫥

"During a presentation this week, Chief Commercial Officer Judson Althoff said artificial intelligence tools are boosting productivity in everything from sales and customer service to software engineering, according to a person familiar with his remarks. "Althoff said AI saved Microsoft more than $500 million last year in its call centers alone and increased both employee and customer satisfaction, according to the person, who requested anonymity to discuss an internal matter. "The company is also starting to use AI to handle interactions with smaller customers, Althoff said.  "This effort is nascent, but already generating tens of millions of dollars, he said."

FlexOlmo

"A new kind of large language model, developed by researchers at the Allen Institute for AI (Ai2), makes it possible to control how training data is used even after a model has been built. "The new model, called FlexOlmo, could challenge the current industry paradigm of big artificial intelligence companies slurping up data from the web, books, and other sources —often with little regard for ownership —and then owning the resulting models entirely.  "Once data is baked into an AI model today, extracting it from that model is a bit like trying to recover the eggs from a finished cake. "'Conventionally, your data is either in or out,' says Ali Farhadi, CEO of Ai2, based in Seattle, Washington. 'Once I train on that data, you lose control. And you have no way out, unless you force me to go through another multi-million-dollar round of training'."

Alongside AI: Comic-Con 2025

"The geeks are coming back to San Diego for Comic-Con 2025. "Downtown will soon be packed with cosplayers, fans and a full lineup of pop-up events in a massive celebration of geek culture .  "Everyone can get in on the fun —whether you're planning to line up for Hall H, roam the Gaslamp Quarter in a Stormtrooper suit or stay home and rewatch your favorite Batman movie while debating the merits of Christian Bale vs. Michael Keaton. "So let's get down and nerdy —here are five streaming picks to help you gear up for Comic-Con."

Cal Fire chatbot

"California government agencies are going all-in on generative artificial intelligence tools after Gov. Gavin Newsom’s 2023 executive order to improve government efficiency with AI.  "One deployment recently touted by the governor is a chatbot from the California Department of Forestry and Fire Protection, the primary agency tasked with coordinating the state’s wildfire response. "The chatbot, which Cal Fire says is independent of Newsom's order, is meant to give Californians better access to critical fire prevention resources and near-real-time emergency information , according to a May release from Newsom’s office.  "But CalMatters found that it fails to accurately describe the containment of a given wildfire, doesn’t reliably provide information such as a list for evacuation supplies and can’t tell users about evacuation orders."

Kevin Townsend

"To paraphrase and reverse Presley’s old rock demand, implementing agentic AI requires 'a little less haste, a little more planning, please.' "Notch suggests that part of that planning should include a data classification program. 'Agentic AI relies on whatever data it can access to produce results, so it’s time to get really clear about what it can see and how it’s being used. If you don’t already have a data classification and governance program in place, get one.' "Hareven adds, 'Don’t rush into broad deployment — secure usage is a competitive advantage, not a bottleneck . Assign cross-functional ownership between security, engineering, and AI teams to continually assess risks. Prioritize governance over speed to scale agentic AI responsibly.' "The need for speed is arguably a link weaker than the end user."    

ISO ClaimSearch

"Agents from Immigration and Customs Enforcement (ICE) have gained access to a massive database of health and car insurance claims and are using it to track down people they want to deport, according to internal ICE material viewed by 404 Media .  "The database, which contains details on more than 1.8 billion insurance claims and 58 million medical bills and growing, includes peoples’ names, addresses, telephone and tax identification numbers, license plates, and other sensitive personal information. "The news shows how ICE continues to try to leverage whatever data it is able to access or purchase as part of its deportation mission.  "The news also highlights the existence of the database, called ISO ClaimSearch, that many members of the public have likely never heard of, nor understand they may be included in."

Napster + psychometrics

"Napster is now under new management. It was bought by tech commerce company Infinite Reality for $207 million back in March, and as of last month, it has officially relaunched —but things are looking a bit different this time round. "Rather than focusing on MP3 files, it is now a conversational AI platform. The site consists of a number of AI companions  that are trained to be experts in various topics like medicine, public policy, architecture, and engineering. "To create these avatars, Napster Chief Technology Officer Edo Segal told Fast Company that the platform utilized frontier AI models from OpenAI and Gemini to make a new large persona model  that is trained on 30 different psychometric characteristics."

Sociolinguistics

"The epistemology of AI is different from the way language is learnt by humans, and the absence of any cultural understanding is key to this .  "Language in LLMs is about statistical co-occurrence, not about deep cultural meanings.  "As a result of this specific LLM ontology and epistemology, the data used are of crucial importance.  "LLMs are trained on large internet-based datasets, but large doesn't mean equal or diverse.  "On the contrary, it is known that those databases are heavily biased, and that is exactly why those models need alignment in the form of bias removal.  "Bender et al argues that from 'each step, from initial participation in Internet fora, to continued presence there, to the collection and finally the filtering of training data, current practice privileges the hegemonic viewpoint.'  "Combine this with a rising trust in the authority and knowledgeability of those LLMs and we see where sociolinguistics can come in....

Josh

"A series of AI-generated videos that show a white man complaining about how difficult it is to get a job in Canada have been taken down by TikTok, following inquiries made by the CBC News Visual Investigations team. "The social media platform says the videos violated its community guidelines, because it wasn't clear enough that they were made with AI. "Most of the videos feature what looks like a white man in his 20s named 'Josh,' who speaks to the camera and makes racially charged statements about immigrants and their role in the job market. In fact, 'Josh' is created by AI and doesn't exist."

Nvidia market cap

"Nvidia stock jumped Wednesday and pushed the company past a $4 trillion market cap for the first time ever. "The chipmaker is the first company to achieve this milestone and has benefited from the generative AI boom. "The recent rally in Nvidia has come despite geopolitical tensions and ongoing chip curbs that have hampered sales to China."