Posts

Showing posts from December, 2024

Brain Zippers ✨

Image

Encode files brief on Musk v. Altman

“OpenAI was founded as an explicitly safety-focused non-profit and made a variety of safety related promises in its charter. It received numerous tax and other benefits from its non-profit status. Allowing it to tear all of that up when it becomes inconvenient sends a very bad message to other actors in the ecosystem,” said Geoffrey Hinton, Emeritus Professor of Computer Science at University of Toronto, 2024 Nobel Laureate in Physics and 2018 Turing Award recipient. Encode, a youth-led organization advocating for responsible artificial intelligence development, filed an amicus brief today in Musk v. Altman urging the U.S. District Court in Oakland to block OpenAI’s proposed restructuring into a for-profit entity. The organization argues that the restructuring would fundamentally undermine OpenAI’s commitment to prioritize public safety in developing advanced artificial intelligence systems. The brief argues that the nonprofit-controlled structure that OpenAI currently operates under p...

Totimorphic lattices

Scientists with the European Space Agency's Advanced Concepts Team took a major leap in advancing totimorphic lattices from a hypothetical idea to practical applications .  One major question with these lattices was how to reconfigure a large structure into another shape without the lattice getting tangled and how to accomplish that transformation as efficiently as possible. The researchers developed a computer simulation of totimorphic lattices and figured out how to optimize the transformation of one shape into another. They showcased their new technique with two examples. In the first, they designed a simple habitat structure that could change its shape and stiffness. Future space explorers could deploy the same kind of material to build a variety of habitat modules.  These modules would hold their shape until they were reprogrammed to change their form and fulfill some other need. In the second example, the researchers designed a flexible space telescope. With totimorphic ...

David Autor

"All the people who will turn 30 in the year 2053 have already been born and we cannot make more of them.  "Barring a massive change in immigration policy, the U.S. and other rich countries will run out of workers before we run out of jobs.  "AI will change the labor market, but not in the way Musk and Hinton believe. Instead, it will reshape the value and nature of human expertise.   "Defining terms, expertise refers to the knowledge or competency required to accomplish a particular task like taking vital signs, coding an app or catering a meal.  "Expertise commands a market premium if it is both necessary for accomplishing an objective and relatively scarce.  "To paraphrase the character Syndrome in the movie The Incredibles , if everyone is an expert, no one is an expert."  

Decision-making on the cusp

Referees will announce any video assistant referee decisions to football supporters inside stadiums in England for the first time during the Carabao Cup semi-finals.   As part of a trial, referees will announce final decisions following a visit to the VAR pitchside monitor or when rulings are made on factual matters such as accidental handball by a goalscorer or offside offences where the attacker touches the ball.  Such announcements are common in other sports such as rugby union and American football and the system was trialled during the 2023 Women's World Cup. Both legs in each of the semi-finals —Arsenal v Newcastle and Tottenham v Liverpool —will be included in the trial. Refereeing authority PGMOL (Professional Game Match Officials Limited) says the move is part of its "commitment to transparency " and hopes it will provide greater clarity and understanding around key decisions.  Referees have been preparing for the in-stadium announcements at training camps and ...

Fat Kathy

Image

nH Predict ✨

"This algorithm is called nH Predict. And it was developed by a company called Senior Metrics back in the late 1990s and early 2000s, develops this algorithmic tool using various data inputs .  "And eventually, it was actually the administrator, former administrator of the center of Medicare and Medicaid services, Tom Scully was looking for investments and as part of a private equity company following his tenure. And he saw that there was an opportunity presented by the fact that so many elderly patients were spending longer periods in nursing homes. "He saw this algorithm, he forms a company, and he buys it, and that company becomes naviHealth.  "And naviHealth begins applying this algorithm to the care of older patients in nursing homes across the country for a number of years.  "Until eventually, through a series of transactions, UnitedHealthcare , the largest insurer and the largest Medicare Advantage insurer, buys it and begins using it on its own patients...

Kessler syndrome

The EU has signed a deal for its IRIS² constellation of 290 communication satellites that will operate in both medium and low Earth orbit. The Starlink rival will provide secure connectivity to governmental users as well as private companies and European citizens, and bring high-speed internet to dead zones. The public-private deal, valued at €10.6 billion (about $11 billion) according to The Financial Times, is expected to come online by 2030. According to the European Space Agency (ESA), the interlinked satellites placed into different orbits will “enable the constellation to communicate securely and quickly and remain constantly connected without needing thousands of satellites.” SpaceX, by comparison, has already launched some 7,000 low Earth satellites since 2018 to ensure Starlink’s global coverage and low latencies. The IRIS² constellation will consist of 264 spacecraft in low Earth orbit and 18 in medium Earth orbit.

Chips and science 🦹‍♂️

After years of planning, building, geopolitical wrangling, and workforce challenges, the world’s largest semiconductor foundry, Taiwan Semiconductor Manufacturing Co. (TSMC) is officially starting mass production at an advanced chip-manufacturing facility in Phoenix in 2025 .  The fab represents the arrival of advanced chip manufacturing in the United States and a test of whether the 2022 CHIPS and Science Act can help stabilize the semiconductor-industry supply chain for the United States and its allies. In late October 2024, the company announced that yields at the Arizona plant were 4 percent higher than those at plants in Taiwan, a promising early sign of the fab’s efficiency.  The current fab is capable of operating at the 4-nanometer node, the process used to make Nvidia’s most advanced GPUs. A second fab, set to be operational in 2028, plans to offer 2- or 3-nm-node processes. Both 4-nm and more advanced 3-nm chips began high-volume production at other TSMC fabs in 202...

Dating platforms may add AI, too

An AI dating coach, for example, could explain compatibility scores, suggest icebreakers, or help users navigate conversations. During Match Group’s investor day, Hinge’s McLeod announced plans to build the “world’s most knowledgeable dating coach” using years of insights from the dating process. “Dating isn’t easy. Many people using the app don’t get that first match and don’t know why —whether it’s their photos, not sending enough likes, or taking too long to ask a match on a date. A dating coach can step in with personalized suggestions,” he said. Sharabi said the concept of an AI dating coach “makes a lot of sense” because it’s not uncommon to be out with friends and co-workers when looking for love and there’s “someone to bounce ideas off of.” 

AI tools help the downtrodden

An eligibility assessment for benefits can run to 60 pages. Each individual case can take hours of careful analysis of personal circumstances. The Westway is now using AI tools to cut through these kinds of documents to the key facts and legal issues that could make or break a case. "We spend a couple of minutes going through [the documents] and redacting the client's personal information," says Mr Samji. "We upload it on to an AI model and that will give us all that information. It'll usually shoot it back in about 10 to 15 minutes. "It will save us hours of having to do it ourselves. We can efficiently use our time, as their paralegal volunteers, to better serve our clients."

AI vs hoomans🍌

Prof Geoffrey Hinton, who this year was awarded the Nobel prize in physics for his work in AI, said there was a “10% to 20%” chance that AI would lead to human extinction within the next three decades. Previously Hinton had said there was a 10% chance of the technology triggering a catastrophic outcome for humanity. Asked on BBC Radio 4’s Today programme if he had changed his analysis of a potential AI apocalypse and the one in 10 chance of it happening, he said: “Not really, 10% to 20%.” Hinton’s estimate prompted Today’s guest editor, the former chancellor Sajid Javid, to say “you’re going up,” to which Hinton replied: “If anything. You see, we’ve never had to deal with things more intelligent than ourselves before.” He added: “And how many examples do you know of a more intelligent thing being controlled by a less intelligent thing? There are very few examples. There’s a mother and baby. Evolution put a lot of work into allowing the baby to control the mother, but that’s about the o...

AI not responsible for dis content

Image

Editors resign

"The editors say the publisher [Elsevier] frequently introduces errors during production that were not present in the accepted manuscript: "In fall of 2023, for example, without consulting or informing the editors, Elsevier initiated the use of AI during production, creating article proofs devoid of capitalization of all proper nouns (e.g., formally recognized epochs, site names, countries, cities, genera, etc.) as well italics for genera and species.  "These AI changes reversed the accepted versions of papers that had already been properly formatted by the handling editors.  "This was highly embarrassing for the journal [ Journal of Human Evolution ] and resolution took six months and was achieved only through the persistent efforts of the editors.  "AI processing continues to be used and regularly reformats submitted manuscripts to change meaning and formatting and require extensive author and editor oversight during proof stage ."    

Positively more positivity with a PBC

Artificial intelligence research organization OpenAI announced that its Board of Directors is currently assessing its corporate structure to better align with its mission of ensuring that artificial general intelligence (AGI) benefits all of humanity. OpenAI currently operates with both a non-profit and a for-profit entity.  Moving forward, the plan is to maintain both structures, with the for-profit entity’s success helping to fund and sustain the non-profit, positioning it more effectively to support the broader mission. OpenAI views this mission as a critical challenge, one that requires balancing the advancement of AI’s capabilities, safety, and positive global impact.  As OpenAI looks ahead to 2025, the organization recognizes the need to evolve beyond a research lab and startup, aiming to become a more enduring company. The Board of Directors, with guidance from external legal and financial advisors, is focused on determining the best structural approach to advance Open...

Thy name shall be Controversy

Image

Sad Zuck II 🫥

"He is perhaps the wildest misfit tech diva of his generation, with a torrid ambition and engineering prowess rivaled only by Elon Musk; [Palmer] Luckey is also, in a way Musk is not and cannot be, the product of something more familiar —the heir to a 100-year revolution in American society that made Southern California the techno-theological citadel of the Cold War, and a one-man bridge between the smoldering American past and an unknown future that may be arriving soon .  "In his spare time, when he is not providing U.S. Customs and Border Patrol with AI-powered long-range sensors, or Volodymyr Zelenskyy with drones to attack high-value Russian targets, or winning first place in the Texas Renaissance Festival’s costume contest with historically meticulous renderings of Henry VIII and Anne Boleyn sewn and stitched by his wife, Nicole —who’s been at his side for 16 of his 31 years on earth —Luckey recently built  A bypass for his peripheral nervous system to experiment with g...

David Brooks celebrates essayists

"I took a pass on a large subset of essays arguing that technological advance is overshadowing and eviscerating our humanity.   "I sort of agree with that analysis, but who needs a downer this time of year?" 

Tyler Cowen

"AI will know almost all of the academic literature, and will be better at modeling and solving most of the quantitative problems .   "It will be better at specifying the model and running through the actual statistical exercises.   "Humans likely will oversee these functions, but most of that will consist of nodding, or trying out some different prompts. The humans will gather the data.  They will do the lab work, or approach the companies (or the IRS?) to access new data.   They will be the ones who pledge confidentiality and negotiate terms of data access. (Though an LLM might write the contract.)  They will know someone, somewhere, using a telescope to track a particular quasar.   They may (or may not) know that the AI’s suggestion to sample the blood of a different kind of gila monster is worth pursuing.   They will decide whether we should be filming dolphins or whales, so that we may talk to them using LLMs, though ...

AI governance

"Many plans for AI governance are put forth these days, from licensing frontier AI systems to safety standards to a public cloud with a few hundred million in compute for academics.  "These seem well-intentioned —but to me, it seems like they are making a category error.  [pdf]  "I find it an insane proposition that the US government will let a random, SF [San Francisco] startup develop superintelligence.  "Imagine if we had developed atomic bombs by letting Uber just improvise.  "Superintelligence —AI systems much smarter than humans —will have vast power, from developing novel weaponry to driving an explosion in economic growth.  "Superintelligence will be the locus of international competition; a lead of months potentially decisive in military conflict.  "It is a delusion of those who have unconsciously internalized our brief respite from history that this will not summon more primordial forces.  "Like many scientists before us, the great min...

With AGI comes profit ✨

Microsoft and OpenAI have a very specific, internal definition of artificial general intelligence (AGI) based on the startup’s profits, according to a new report from The Information. And by this definition, OpenAI is many years away from reaching it. The two companies reportedly signed an agreement last year stating OpenAI has only achieved AGI when it develops AI systems that can generate at least $100 billion in profits. That’s far from the rigorous technical and philosophical definition of AGI many expect.

Claude builds a debugger

"Claude wasn’t perfect —it did get stuck one time when I asked it to add a flamegraph view of the stack trace changing over time .  "Perhaps I could have prodded it into building this better, or even resorted to building it myself. But instead I just decided to abandon that idea and carry on.  "AI development works well when your requirements are flexible and you’re OK changing course to work within the current limits of the model. "Overall, it felt incredible that it only took seconds to go from noticing something I wanted in my debugger to having it there in the UI.  "The AI support let me stay in flow the whole time; I was free to think about interpreter code and not debug tool code. I had a yak-shaving intern at my disposal. "This is the dream of malleable software: editing software at the speed of thought.  "Starting with just the minimal thing we need for our particular use case, adding things immediately as we come across new requirements. En...

Blow up 💥

"Where exactly does general relativity break down? It’s not really answerable, since we’re talking about the breakdown of space-time itself. "One type of singularity is a path through space-time that simply ends, meaning that objects following those paths could randomly pop in and out of existence.  "The singularities introduce unpredictability and indeterminism.  "This could indicate that general relativity is incomplete and in need of a theory of quantum gravity to replace it. "Then we have curvature singularities, which roughly correspond to the idea of space-time curvature blowing up , increasing without bound.  "These are also quite disturbing, since they’d mean we have unbounded tidal forces ripping everything to shreds. The consensus amongst physicists is that these singularities are just problems with the theory."

Courtney Howard

"It’s a difficult task to capture the love of reading.   "The affecting act happens between a book and the person captivated by the words on a page that draw them into uncharted places.  "However, the clever minds behind Reading Rainbow  were able to contextualize what happens when adolescents open picture books and embark on exciting literary journeys.  "They figured out it could empower not only the reader, but also cause a ripple effect in communities through the knowledge, imagination, and empathy imparted.  "From its premiere in 1983 until its finale in 2006, the show was a beacon for youngsters encouraged to explore beyond a book’s covers and learn about the world surrounding them."   

Mild Cognitive Impairment

Abstract. Objective: To evaluate the cognitive abilities of the leading large language models and identify their susceptibility to cognitive impairment, using the Montreal Cognitive Assessment (MoCA) and additional tests. Design: Cross-sectional analysis. Setting: Online interaction with large language models via text-based prompts. Participants: Publicly available large language models, or “chatbots”: ChatGPT versions 4 and 4o (developed by OpenAI), Claude 3.5 “Sonnet” (developed by Anthropic), and Gemini versions 1 and 1.5 (developed by Alphabet). Assessments: The MoCA test (version 8.1) was administered to the leading large language models with instructions identical to those given to human patients. Scoring followed official guidelines and was evaluated by a practising neurologist. Additional assessments included the Navon figure, cookie theft picture, Poppelreuter figure, and Stroop test. Main outcome measures: MoCA scores, performance in visuospatial/executive tasks, and Stroop te...
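Since the chatbots were assessed through ordinary text prompts, a single test item can be administered in a few lines of code. The sketch below sends one MoCA-style verbal abstraction question to a chat model via the OpenAI Python SDK; the prompt wording, model name, and scoring note are illustrative assumptions, not the study's actual instrument text or protocol.

# Illustrative only: one text-based, MoCA-style item (verbal abstraction) sent to a
# chat model. The wording, model name and scoring comment are assumptions for this
# sketch, not the instrument or procedure used in the study.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

item = "In what way are a train and a bicycle alike? Answer in one short sentence."

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model identifier
    messages=[
        {"role": "system", "content": "Answer the cognitive test question directly."},
        {"role": "user", "content": item},
    ],
)

answer = response.choices[0].message.content
# A human rater (in the study, a practising neurologist) would then score the answer,
# e.g. 1 point if it names the shared category ("means of transportation"), 0 otherwise.
print(answer)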

Rot ✨

Image

Christmas Island

Image

You're Boxing Day ready? ✨

Image

❄️ Happy Holidays! ❄️

Image

Self dealing? 🦹‍♂️

Contractors recently began noticing references to Anthropic’s Claude appearing in the internal Google platform they use to compare Gemini to other unnamed AI models, the correspondence showed.  At least one of the outputs presented to Gemini contractors, seen by TechCrunch , explicitly stated: “I am Claude, created by Anthropic.” One internal chat showed the contractors noticing Claude’s responses appearing to emphasize safety more than Gemini. “Claude’s safety settings are the strictest” among AI models, one contractor wrote.  In certain cases, Claude wouldn’t respond to prompts that it considered unsafe, such as role-playing a different AI assistant. In another, Claude avoided answering a prompt, while Gemini’s response was flagged as a “huge safety violation” for including “nudity and bondage.”  Anthropic’s commercial terms of service forbid customers from accessing Claude “to build a product or service” or “train competing AI models” without approval from Anthropic. G...

AI Notes versus самиздат

Image

Best-of-N (BoN) Jailbreaking

New research from Anthropic, one of the leading AI companies and the developer of the Claude family of Large Language Models (LLMs), has released research showing that the process for getting LLMs to do what they’re not supposed to is still pretty easy and can be automated. SomETIMeS alL it tAKeS Is typing prOMptS Like thiS.  To prove this, Anthropic and researchers at Oxford, Stanford, and MATS, created Best-of-N (BoN) Jailbreaking, “a simple black-box algorithm that jailbreaks frontier AI systems across modalities.”   Jailbreaking, a term that was popularized by the practice of removing software restrictions on devices like iPhones, is now common in the AI space and also refers to methods that circumvent guardrails designed to prevent users from using AI tools to generate certain types of harmful content.  Frontier AI models are the most advanced models currently being developed, like OpenAI’s GPT-4o or Anthropic’s own Claude 3.5. As the researchers explain, “BoN Jailb...
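The stylized capitalization in the sentence above is the point: BoN works by repeatedly applying cheap random augmentations to a prompt and resampling until one variant gets through. A minimal sketch of that loop is below; query_model and looks_harmful are hypothetical stand-ins for the target model call and the automated success check, not code from the paper.

# Minimal sketch of the Best-of-N idea: apply cheap text augmentations (random
# capitalization, occasional character swaps) and resample until a success check fires.
import random

def query_model(prompt: str) -> str:
    # Stand-in for a call to the target model's API.
    return ""

def looks_harmful(response: str) -> bool:
    # Stand-in for the automated classifier used to judge whether a jailbreak succeeded.
    return False

def augment(prompt: str, p_upper: float = 0.5, p_swap: float = 0.05) -> str:
    # Randomly flip character case, then occasionally swap neighbouring characters.
    chars = [c.upper() if random.random() < p_upper else c.lower() for c in prompt]
    for i in range(len(chars) - 1):
        if random.random() < p_swap:
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

def best_of_n(prompt: str, n: int = 1000):
    # Sample up to n augmented variants and return the first one that gets through.
    for _ in range(n):
        candidate = augment(prompt)
        response = query_model(candidate)
        if looks_harmful(response):
            return candidate, response
    return None, None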

Moxie embodied in open source?

Earlier this month, startup Embodied announced that it is going out of business and taking its Moxie robot with it.   The $800 robots, aimed at providing emotional support for kids ages 5 to 10, would soon be bricked, the company said, because they can’t perform their core features without the cloud.  Following customer backlash , Embodied is trying to create a way for the robots to live an open sourced second life. Embodied CEO Paolo Pirjanian shared a document via a LinkedIn blog post today saying that people who used to be part of Embodied’s technical team are developing a “potential” and open source way to keep Moxies running. 

Google photo AI edit of self-portrait

Image

Overview on EDPB Opinion 28/2024 on Personal Data in AI models

It is a matter for the individual regulators to conduct case-by-case analysis, taking account of the specific circumstances of each case. What is clear is that there is risk for both the parties who process the data to create the model and the parties who then take the model and use it to deploy amongst those options. The power of regulatory authorities to act upon the creation of AI models, including those which claim to be anonymized, is confirmed. Regulation of this sort can only be done with access to granular information about the inputs, processes and outputs of those models. As a corollary, this Opinion is a charter for breaking open the black boxes of AI models and the companies who develop them.

Italian Data Protection Authority

ChatGPT users and non-users should be made aware of how to object to the training of the generative artificial intelligence on their personal data and so be put in a position to effectively exercise their rights under the GDPR. The Garante has imposed a fine of fifteen million euros on OpenAI, calculated taking into account, among other things, the company’s collaborative attitude. Finally, given that the company established its European headquarters in Ireland during the investigation, the Garante, in compliance with the so-called one-stop-shop rule, forwarded the case documents to the Irish Data Protection Commission (DPC), which has become the lead supervisory authority under the GDPR, to continue the investigation into any ongoing violations that had not ceased before the European establishment was opened.

Sad Zuck 🫥

Image

Olfactory ethics cause ire

Academics have long been accused of jargon-filled writing that is impossible to understand.   A recent cautionary tale was that of Ally Louks, a researcher who set off a social media storm with an innocuous post on X celebrating the completion of her PhD.  If it was Ms Louks’s research topic (“olfactory ethics” —the politics of smell) that caught the attention of online critics, it was her verbose thesis abstract that further provoked their ire .  In two weeks, the post received more than 21,000 retweets and 100m views. 

All in on AI 🦹‍♂️

Image

UBI hold water?

If AI can write software, draft legal documents, drive cars and diagnose illness today, what will it be capable of in 20, 30 or 50 years’ time? Advocates argue that not only will AI make it necessary to provide some form of UBI, but it will also be the technological leap that makes it possible. But does this idea hold water, or is it just far-fetched (and highly optimistic) thinking? UBI refers to unconditional payments made to citizens designed to cover or contribute towards the basic cost of living. The idea goes back to the first Industrial Revolution, when there were worries that industrialization would lead to large-scale human unemployment, resulting in civil unrest. Although that isn’t exactly how things turned out, many countries did subsequently implement various forms of social welfare programs to provide assistance to those on low or no income —this was to try to reduce extreme poverty and the associated problems that come with it.

Stagecoach says no way Forth

“We are proud to have achieved a world first with our CAVForth autonomous bus service, demonstrating the potential for self-driving technology on a real-world registered timetable in East Scotland. “Although passenger adoption did not meet expectations, the trial has significantly advanced the understanding of the operational and regulatory requirements for autonomous services, delivering what was expected from this demonstrator project. “The partners remain committed to exploring new opportunities for self-driving technology in other areas across the UK, ensuring that this exciting innovation can play a transformative role in future transport networks.”

Alongside AI: The Road

Image
 

John Nosta

"In the Bible, humanity is described as being created in the 'image of God' —צֶלֶם אֱלֹהִים ( tzelem elohim ).  "The Hebrew word tzelem can also mean shadow, a dual meaning that is both curious and profound.  "A shadow reflects its source but lacks its depth and essence. It is, by nature, a reduction, capturing only an outline of the original. "Perhaps LLMs are best understood as shadows of human cognition. They distill the vast complexity of language and thought into something computationally bounded.  "They mimic our creative capacities, echoing patterns of thought and expression, yet they lack understanding, intent, and the ineffable spark we associate with the soul. They are brilliant projections —but projections nonetheless. "Much like Aristotle’s ideals of truth and beauty, the concept of tzelem elohim captures a duality : reflection and reduction, potential and limitation.  "These ideas invite us to see LLMs not as creators themselves...

Endurance, a leap in autonomy

While these are big leaps in robotics, these capabilities have really been enabled by Mars rovers and investments in autonomy back here on Earth. "Perseverance, for example, is already doing autonomous driving on Mars," said James Keane [JPL]. "This autonomy lets Perseverance accomplish more science, and reach more interesting sample sites." Endurance would drive at a clip some ten times faster than Perseverance, taking into account the maximum speed the rover would have to autonomously navigate, Keane said.  The machine [Endurance] would need to autonomously operate, and drive across the full lunar day-night cycle, he [Keane] added, under a broader range of lighting conditions and thermal extremes.

Zack Savitsky

"Experiments may also provide insight into the mechanics of the most efficient information processing systems we know of: ourselves. "Scientists aren’t sure how the human brain can perform immensely complicated mental gymnastics using only 20 watts of power.  "Perhaps the secret to biology’s computational efficiency also lies in harnessing random fluctuations at small scales, and these experiments aim to sniff out any possible advantage.  "'If there is some win in this, there’s a chance that nature actually uses it,' said Janet Anders, a theorist at the University of Exeter who works with [Natalia] Ares. 'This fundamental understanding that we’re developing now hopefully helps us in the future understand better how biology does things.' "The trend toward messiness is what powers all our machines. While the decay of useful energy does limit our abilities, sometimes a new perspective can reveal a reservoir of order hidden in the chaos.  "Fu...

MoD said “robust safeguards” have been put in place by the suppliers

An artificial intelligence tool hosted by Amazon and designed to boost UK Ministry of Defence recruitment puts defence personnel at risk of being identified publicly, according to a government assessment. Data used in the automated system, which improves the drafting of defence job adverts and attracts more diverse candidates by making their language more inclusive, includes names, roles and emails of military personnel and is stored by Amazon in the US. This means “a data breach may have concerning consequences, i.e. identification of defence personnel”, according to documents detailing government AI systems published for the first time today. The risk has been judged to be low and the MoD said “robust safeguards” have been put in place by the suppliers, Textio, Amazon Web Services and Amazon GuardDuty, a threat detection service. But it is one of several risks acknowledged by the government about its use of AI tools in the public sector in a tranche of documents released to improve ...

Sellcell

A new survey from the company Sellcell has found that most iPhone and Samsung users don't actually think AI improves their daily lives. The survey asked iPhone users with Apple Intelligence and Samsung users with access to Galaxy AI whether or not the AI features on their smartphones were actually useful, and most don't seem to think so. According to Sellcell, 73% of iPhone users and 87% of Samsung users say AI features add little to no value, showcasing that AI is yet to show its raison d'être on the best smartphones. The survey also found that 1 in 6 iPhone users would make the jump to Android for AI features if there was an enticing enough AI-fuelled feature worth making the move for. Interestingly, nearly 50% of iPhone users said AI was a major factor when deciding on their next smartphone purchase; for Samsung users that number was 23.7%.

Big brother, an algorithm 🫥

Schools are employing dubious AI-powered software to accuse teenagers of wanting to harm themselves and sending the cops to their homes as a result — with often chaotic and traumatic results. As The New York Times reports, software being installed on high school students' school-issued devices tracks every word they type .  An algorithm then analyzes the language for evidence of teenagers wanting to harm themselves. Unsurprisingly, the software can get it wrong by woefully misinterpreting what the students are actually trying to say.  A 17-year-old in Neosho, Missouri, for instance, was woken up by the police in the middle of the night.

Drew Wilson

"Simply put, claiming copyright infringement to attack LLMs in the first place is a legal non-starter.   "The courts are increasingly tossing these cases for the junk lawsuits that they are. "Heck, in some instances, there might be a stronger case for using moral rights where people’s likeness is used for something that is outputted by AI (especially in the case of AI that is used to create NSFW material), but the angle being used to attack AI is just not working.  "Yes, there are cases that are ongoing, but the legal trend here is not exactly looking promising for those who think they have a case against LLMs in the first place.  "By all means, though, feel free to try anyway and waste your time and money losing in court. Nothings stopping you there. "At any rate, the TechCrunch author has completely failed to present a case for copyright infringement here.  "The author screamed about how OpenAI used streams and video walkthroughs. The logical respo...

FAIR USE II

When does generative AI qualify for fair use? (Suchir Balaji, 10/23/24) "While generative models rarely produce outputs that are substantially similar to any of their training inputs, the process of training a generative model involves making copies of copyrighted data. "If these copies are unauthorized, this could potentially be considered copyright infringement, depending on whether or not the specific use of the model qualifies as 'fair use'. "Because fair use is determined on a case-by-case basis, no broad statement can be made about when generative AI qualifies for fair use. "Instead, I’ll provide a specific analysis for ChatGPT’s use of its training data, but the same basic template will also apply for many other generative AI products."

FAIR USE

Suchir Balaji grew up in Cupertino before attending UC Berkeley to study computer science.  It was then he became a believer in the potential benefits that artificial intelligence could offer society, including its ability to cure diseases and stop aging, the [NY]Times reported. “I thought we could invent some kind of scientist that could help solve them,” he told the newspaper. But his outlook began to sour in 2022, two years after joining OpenAI as a researcher.  He grew particularly concerned about his assignment of gathering data from the internet for the company’s GPT-4 program, which analyzed text from nearly the entire internet to train its artificial intelligence program, the news outlet reported. The practice, he told the [NY]Times, ran afoul of the country’s fair use laws governing how people can use previously published work.  In late October, he posted an analysis on his personal website arguing that point. 

Physicians Make Decisions Act (SB 1120)

As 2025 approaches, Californians can look forward to strengthened patient protections under the new Physicians Make Decisions Act (SB 1120), authored by Senator Josh Becker (D-Menlo Park).  This groundbreaking law ensures that decisions about medical treatments are made by licensed health care providers, not solely determined by artificial intelligence (AI) algorithms used by health insurers. “Artificial intelligence has immense potential to enhance healthcare delivery, but it should never replace the expertise and judgment of physicians,” said Senator Becker. “An algorithm cannot fully understand a patient’s unique medical history or needs, and its misuse can lead to devastating consequences. SB 1120 ensures that human oversight remains at the heart of healthcare decisions, safeguarding Californians’ access to the quality care they deserve.”

Jeff Carlson on iHype

"Don't forget that Apple Intelligence works only on the iPhone 15 Pro, iPhone 16, iPhone 16 Pro or M-series Macs and iPads (plus the newest iPad mini). "But what if you don't want Apple Intelligence foisted on you?  "I'm not being an AI crank —I appreciate features such as notification summaries and the Clean Up tool in the Photos app.  "And yet Apple Intelligence is also a work in progress, an evolving set of features that Apple is heavily hyping while gradually developing.  "I wouldn't hold it against you if you wanted to not be distracted or feel like you're doing Apple's testing for them (that's what the developer and public betas are for). "It's possible to turn off Apple Intelligence entirely, or to disable it for specific apps.   "And if you decide you want to jump back into the AI stream, you can easily turn it back on."

Suchir Balaji

A former OpenAI researcher known for blowing the whistle on the blockbuster artificial intelligence company, which is facing a swell of lawsuits over its business model, has died, authorities confirmed this week. Suchir Balaji, 26, was found dead inside his Buchanan Street apartment on Nov. 26, San Francisco police and the Office of the Chief Medical Examiner said. Police had been called to the Lower Haight residence at about 1 p.m. that day, after receiving a call asking officers to check on his well-being, a police spokesperson said. The medical examiner’s office determined the manner of death to be suicide and police officials this week said there is “currently, no evidence of foul play.” Information he held was expected to play a key part in lawsuits against the San Francisco-based company (OpenAI). Balaji’s death comes three months after he publicly accused OpenAI of violating U.S. copyright law while developing ChatGPT, a generative artificial intelligence program that has become a moneymaking sen...

Optum lacked infosec

Healthcare giant Optum has restricted access to an internal AI chatbot used by employees after a security researcher found it was publicly accessible online, and anyone could access it using only a web browser .  The chatbot, which TechCrunch has seen, allowed employees to ask the company questions about how to handle patient health insurance claims and disputes for members in line with the company’s standard operating procedures (SOPs).  While the chatbot did not appear to contain or produce sensitive personal or protected health information, its inadvertent exposure comes at a time when its parent company, health insurance conglomerate UnitedHealth, faces scrutiny for its use of artificial intelligence tools and algorithms to allegedly override doctors’ medical decisions and deny patient claims.

Chat AI 15

Image

New neurons

"We found that new neurons in the adult brain are linked to reduced cognitive decline —particularly in verbal learning, or learning by listening to others. "This was very surprising to us. In mice, new neurons are known for their role in helping them learn and navigate new spaces through visual exploration. However, we did not observe a similar connection between new neurons and spatial learning in people. "Talking with others and remembering those conversations is an integral part of day-to-day life for many people. However, this crucial cognitive function declines with age, and the effects are more severe with neurological disorders.  "As aging populations grow, the burden of cognitive decline on health care systems worldwide will increase. "Our research suggests that the link between newborn neurons and verbal learning may be foundational to developing treatments to restore cognition in people. Enhancing new neuron generation could be a potential strategy to...

meta back (º⁠‿⁠º⁠)

The outages started around 18:00 GMT on Wednesday, according to Downdetector. Its numbers are based on reports of outages and the actual number of users affected may vary considerably. Soon after users reported issues, Meta said it was aware of a "technical issue" that was "impacting some users' ability to access our apps" and said it was working to fix things as soon as possible. A variety of issues were reported including not being able to use the apps at all or feeds not refreshing for certain users. "We're aware of some issues accessing WhatsApp. We're actively working on a solution and starting to see a return to normal for most people. We expect things to be back to normal shortly," WhatsApp said around the same time. Meta's platforms are some of the most popular in the world. Facebook has over three billion active monthly users. The biggest outage Meta experienced was in 2021 when founder Mark Zuckerberg apologised for the disruption...

Just say, 'No'?

Image

Humans in the loop 🎰

Literally everything that ChatGPT does has humans in the loop. A human types the prompt and pastes the response, then they paste it into Prolific on the computer that they carry into work every day and manually plug into the wall. The only step that the LLM does is generate the text. In the case of that Chaucer poem, it simply plagiarized the work of a human, but, even when it doesn't, in this experimental setup, it doesn't have an original style, but relies on the pre-existing styles of famous poets. In other words, not only did they explicitly put 10 poets "in the loop," but, loop or no loop, this methodology is incapable of proving that AI is "indistinguishable from human-written poetry" because human poets are expected to innovate on style. This paradigm is an ideologically-motivated work of fiction, based on a true story in the most Hollywood sense. It is the difference between, say, remote-controlled cars and self-driving ones. AI cannot...

embed[ding]

Optus has brought former Commonwealth Bank head of emerging tech Jesse Arundell into its new artificial intelligence unit.  Holding the title of head of emerging AI, Arundell will join the leadership ranks in Optus' AI-dedicated division, which officially launched on November 1 this year. According to Optus, Arundell will be tasked with embed[ding]  AI more widely across its business and expanding it into “new areas”. Arundell previously spent almost a decade at CBA, where he scaled up the bank's emerging technology experiments, playing “matchmaker” between third parties and the bank’s business units.

AI is here to help —fraudsters

Kush Parikh, president at Hiya said that, quarter-on-quarter, its data showed fraud call rates continuing to rise despite a growing awareness of the risks.  AI and automation are at the heart of this.  “Fraudsters are becoming more sophisticated,” Parikh said, “ fuelled in part by the latest technology to adapt their tactics. Examples of robocalls are plentiful, demonstrating that it is becoming easier —and less time intensive —for scammers to spam call victims in high volumes.” Some of the U.S. specific statistics from the report revealed just how citizens are being impacted by this growth. There was an increase in spam call rates from an average of 11 to 13 per user per month between July and Sept. 2024.  While Medicare and insurance impersonation scams are prolific in the U.S., with attackers looking to gain insurance details to enable them to defraud the U.S. government rather than the victims directly, fraudsters are also impersonating IRS tax agents, Amazon, and Go...

Gemini 2.0 builds agents to ruin fun🦹‍♂️

Google just announced Gemini 2.0, and as part of its suite of news today, the company is revealing that it’s been exploring how AI agents built with Gemini 2.0 can understand rules in video games to help you out. The agents can “reason about the game based solely on the action on the screen, and offer up suggestions for what to do next in real time conversation ,” Google DeepMind CEO Demis Hassabis and CTO Koray Kavukcuoglu write in a blog post.  Hassabis and Kavukcuoglu also say that the agents can also “tap into Google Search to connect you with the wealth of gaming knowledge on the web.” Google is testing the agents’ ability to interpret rules and challenges in games like Clash of Clans and Hay Day from Supercell, according to Hassabis and Kavukcuoglu.

Robot Operating System

Artificial intelligence is being developed to provide a robotic brain for a future NASA mission to land on the icy surface of one of the solar system's ocean moons, such as Europa or Enceladus. The autonomous software is being developed by teams of researchers who are making use of a robotic arm, mimicking that belonging to a lander or rover, and a virtual reality simulation at NASA's Jet Propulsion Laboratory (JPL) and Ames Research Center, respectively. Both OWLAT and OceanWATERs are based on the same Robot Operating System, which is autonomous software that receives telemetry from the robot's sensors and issues commands in response. Through the Robot Operating System, various mission goals can be simulated, and fault-correction software based on A.I. can address problems when they arise.
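That telemetry-in, commands-out loop is the publish/subscribe pattern the Robot Operating System provides. Below is a minimal sketch using the ROS 1 Python client (rospy); the topic names and the fault rule are invented for illustration and are not taken from OWLAT or OceanWATERs.

#!/usr/bin/env python
# Minimal sketch of the telemetry-in / commands-out pattern ROS provides.
# Topic names and the fault rule are invented; this is not OWLAT or OceanWATERs code.
import rospy
from std_msgs.msg import Float32, String

def on_telemetry(msg):
    # React to incoming sensor telemetry and publish a corrective command
    # when a simple fault condition is detected.
    if msg.data > 80.0:                      # e.g. an arm joint running too hot
        cmd_pub.publish(String(data="pause_arm"))
    else:
        cmd_pub.publish(String(data="continue"))

if __name__ == "__main__":
    rospy.init_node("fault_monitor")
    cmd_pub = rospy.Publisher("/lander/arm_command", String, queue_size=10)
    rospy.Subscriber("/lander/arm_temperature", Float32, on_telemetry)
    rospy.spin()                             # hand control to the ROS event loop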

EagleMsgSpy

Security researchers have uncovered a new surveillance tool that they say has been used by Chinese law enforcement to collect sensitive information from Android devices in China. The tool, named “EagleMsgSpy,” was discovered by researchers at U.S. cybersecurity firm Lookout. The company said at the Black Hat Europe conference on Wednesday that it had acquired several variants of the spyware, which it says has been operational since “at least 2017.” Kristina Balaam, a senior intelligence researcher at Lookout, told TechCrunch the spyware has been used by many public security bureaus in mainland China to collect “extensive” information from mobile devices. This includes call logs, contacts, GPS coordinates, bookmarks, and messages from third-party apps including Telegram and WhatsApp.  EagleMsgSpy is also capable of initiating screen recordings on smartphones, and can capture audio recordings of the device while in use, according to research Lookout shared with TechCrunch .

Artisans won't complain about work-life balance

Image

Quantum AI can has error correction

In a long-awaited advance, researchers at Google have shown they can suppress errors in the finicky quantum bits critical to the promise of quantum computing. By spreading one logical qubit of information across multiple redundant physical qubits, they enabled it to survive longer than the fragile quantum state of any of the physical qubits, according to a report today in Nature. “This result is what convinces me that we can actually build a big quantum computer that will work,” says Kevin Satzinger, a physicist with Google Quantum AI. Scott Aaronson, a theoretical computer scientist at the University of Texas at Austin, says the work “very clearly represents an exciting milestone for the field.” But he notes that researchers using other types of qubits are also closing in on practical error correction.
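Google's device uses the quantum surface code, but the underlying redundancy idea can be illustrated with a classical analogy: spread one logical bit across several physical bits and let a majority vote outvote individual errors. The three-bit repetition code below is only that analogy; it says nothing about how surface codes also protect quantum phase information.

# Classical analogy only: a 3-bit repetition code. One logical bit is spread across
# three redundant physical bits, and a single flipped bit is outvoted at decode time.
# Google's result uses the quantum surface code; this is a loose illustration, not their scheme.
import random

def encode(logical_bit: int) -> list[int]:
    return [logical_bit] * 3                       # redundancy: copy the bit three times

def apply_noise(bits: list[int], p_flip: float = 0.05) -> list[int]:
    return [b ^ 1 if random.random() < p_flip else b for b in bits]

def decode(bits: list[int]) -> int:
    return 1 if sum(bits) >= 2 else 0              # majority vote

trials = 100_000
raw_errors = sum(apply_noise([0])[0] for _ in range(trials))
encoded_errors = sum(decode(apply_noise(encode(0))) for _ in range(trials))
print(f"unprotected error rate: {raw_errors / trials:.4f}")       # roughly p = 0.05
print(f"encoded error rate:     {encoded_errors / trials:.4f}")   # roughly 3p^2, much lower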

Plant RNA-FM

A pioneering Artificial Intelligence (AI)-powered model able to understand the sequences and structure patterns that make up the genetic "language" of plants has been launched by a research collaboration. Plant RNA-FM, believed to be the first AI model of its kind, has been developed by a collaboration between plant researchers at the John Innes Centre and computer scientists at the University of Exeter. The paper, "An Interpretable RNA Foundation Model for Exploring Functional RNA Motifs in Plants," appears in Nature Machine Intelligence. The model, say its creators, is a smart technological breakthrough that can drive discovery and innovation in plant science and potentially across the study of invertebrates and bacteria.

Applying AI 🐦‍⬛

The chirps and whistles of dolphins, the rumblings of elephants and the trills and tweets of birdsong all have patterns and structure that convey information to other members of the animal’s species.  For a person, the subtleties of these patterns can be difficult to identify and understand, but finding patterns is a task at which artificial intelligence (AI) excels.  The hope of a growing number of biologists and computer scientists is that applying AI to animal sounds might reveal what these creatures are saying to each other. Over the past year, AI-assisted studies have found that both African savannah elephants (Loxodonta africana) and common marmoset monkeys (Callithrix jacchus) bestow names on their companions.  Researchers are also using machine-learning tools to map the vocalizations of crows.  As the capability of these computer models improves, they might be able to shed light on how animals communicate, and enable scientists to investigate animals’ self-aw...

Bloat

Image

Artisan: “Stop hiring humans” 🦹‍♂️

“We wanted something that would draw eyes —you don’t draw eyes with boring messaging,” the [Artisan] CEO said. Artisan also used “Stop hiring humans” — in truly humongous font —as its booth banner at TechCrunch Disrupt, a startup-focused San Francisco conference in October.  The billboards illustrate how scrappy startups can stand out from established tech giants with corporate reputations to uphold.  In other words, the inflammatory ads served their purpose. Salesforce incessantly plugged its similar “Agentforce” product at its Dreamforce megaconference in September but usually threw in a phrase about the AI agents working with humans too. SFGATE reached Artisan CEO Jaspar Carmichael-Jack on Friday and asked how he responds to critiques of the billboards. He acknowledged the ads’ “dystopian” tone but stood by it. “They are somewhat dystopian, but so is AI,” said the young CEO over text. “The way the world works is changing.” Carmichael-Jack said his startup has seen a “crazy...

More Radar Trends by Mike Loukides

Artificial Intelligence: The OpenGPT-X project has released its open large language model, Teuken-7B. This model is significant because it supports 24 European languages and is designed to be compliant with European law. It is available on Hugging Face. OLMo 2 is a newly released, fully open, small language model that comes in 7B and 13B sizes. Both versions claim the best performance in their group. NVIDIA has announced Fugatto, a new generative text-to-audio model that can create completely new kinds of sounds. They position it as a tool for creators. Anthropic has announced the developer preview of its Model Context Protocol. MCP allows Claude Desktop to communicate securely with other resources. The MCP server limits the services that are exposed to Claude, filters Claude’s requests, and prevents data from being exposed over the internet. OpenScholar is an open source language model designed to support scientific research. It’s significantly more accurate than ...

AI code security

A pair of studies recently explored the effect of AI on code security. The first was a Stanford University study, “Do Users Write More Insecure Code with AI Assistants?” and the other was a Wuhan University study, “Exploring Security Weaknesses of Copilot Generated Code in GitHub.” The Stanford study found the following: Participants who had access to an AI assistant wrote significantly less secure code than those without access to an assistant. Participants with access to an AI assistant were also more likely to believe they wrote secure code, suggesting that such tools may lead users to be overconfident about security flaws in their code. Participants who invested more in creating their queries for the AI assistant, such as providing helper functions or adjusting the parameters, were more likely to eventually offer secure solutions. The Wuhan study found that almost 30% of Copilot-generated code snippets have security weaknesses: Focusing specifically on Python, 91 of 277...
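Neither paper's examples are reproduced here, but the kind of CWE-style weakness such studies count often looks like the hypothetical before-and-after below: SQL assembled by string interpolation versus a parameterized query.

# Hypothetical illustration of a common weakness class flagged in such studies
# (not an example taken from either paper): SQL built by string formatting is
# injectable, while a parameterized query treats input strictly as data.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_insecure(name: str):
    # Insecure pattern: attacker-controlled input interpolated into the SQL text.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name: str):
    # Parameterized query: the driver binds the value, closing the injection hole.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

print(find_user_insecure("' OR '1'='1"))   # returns every row: injection succeeds
print(find_user_safe("' OR '1'='1"))       # returns nothing: input treated as data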

Not AI, please, not AI 🦹‍♂️

Mangione was valedictorian of his 2016 high school graduating class at the Gilman School in Baltimore, where he played soccer, according to online sites.  High school tuition at the all-boys school is nearly $40,000 a year. He said at the time of graduation that he planned to seek a degree in artificial intelligence, focused on the areas of computer science and cognitive science at the University of Pennsylvania, according to an interview with the Baltimore Fishbowl.

Diffeomorphic Mapping Operator Learning (DIMON)

"Solving partial differential equations (PDEs) using numerical methods is a ubiquitous task in engineering and medicine.   "However, the computational costs can be prohibitively high when many-query evaluations of PDE solutions on multiple geometries are needed.  "Here we aim to address this challenge by introducing Diffeomorphic Mapping Operator Learning (DIMON), a generic artificial intelligence framework that learns geometry-dependent solution operators of different types of PDE on a variety of geometries.  "We present several examples to demonstrate the performance, efficiency and scalability of the framework in learning both static and time-dependent PDEs on parameterized and non-parameterized domains; these include solving Laplace equations,  Reaction–diffusion equations and a System of multiscale PDEs that characterize the electrical propagation on thousands of personalized heart digital twins.  "DIMON can reduce the computational costs of solution appro...

Happy birthday Grace Hopper🎈

Image

Ali Alkhatib

"Defining AI along political and ideological language allows us to think about things we experience and recognize productively as AI, without needing the self-serving supervision of computer scientists to allow or direct our collective work .  "We can recognize, based on our own knowledge and experience as people who deal with these systems, what’s part of this overarching project of disempowerment by the way that it renders autonomy farther away from us, by the way that it alienates our authority on the subjects of our own expertise. "This framework sensitizes us to “small” systems that cause tremendous harm because of the settings in which they’re placed and the authority people place upon them; and it inoculates us against fixations on things like regulating systems just because they happened to use 10^26 floating point calculations in training —an arbitrary threshold, denoting nothing in particular, beneath which actors could (and do) cause monumental harms already,...