Posts

Showing posts from June, 2024

Your AI PC needs moar 🐏

While having an AI model take up a few GBs of space might not sound like a big deal, you have to understand that the entire model has to be in RAM for the technology to work with good performance. Those NPUs are specialized parallel processors that crunch the numbers on these virtual neural networks, with billions of "parameters", at the same time. If the model isn't in RAM, the NPU can't be fed quickly enough to deliver results fast, which would slow an LLM's replies and functions such as real-time translation or image generation. So if you want AI features available at the push of a button or by simply speaking to your computer, the model essentially needs to reserve as much RAM as it requires.
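
To make the constraint concrete, here is a back-of-the-envelope sketch (our own illustrative arithmetic, not the article's): the RAM needed just to hold a model's weights scales with parameter count times bits per parameter.

```python
# Rough memory-footprint arithmetic for an on-device model. Illustrative
# numbers only: real deployments also need RAM for activations and caches.
def weights_ram_gb(params_billions: float, bits_per_param: int) -> float:
    """GB needed just to hold the model weights."""
    return params_billions * 1e9 * bits_per_param / 8 / 1e9

# A hypothetical 7B-parameter model at common quantization levels:
for bits in (16, 8, 4):
    print(f"{bits}-bit weights: ~{weights_ram_gb(7, bits):.1f} GB resident in RAM")
# ~14 GB at 16-bit, ~7 GB at 8-bit, ~3.5 GB at 4-bit -- and all of it must
# stay in memory if the NPU is to stream weights without stalling.
```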

The Halving 📉

In recent months, major Bitcoin mining companies have started to swap out some of their mining equipment in favor of rigs used to run and train AI systems. These companies believe that AI training could provide a safer and more consistent source of revenue than the volatile crypto industry. And so far, these pivots have been warmly received by investors, leading the market cap of 14 major bitcoin mining companies to jump 22%, or $4 billion, since the beginning of June, J.P. Morgan reported on June 24. This transition reflects several trends of the moment: the roaring hype cycle of AI, dwindling access to power, and a tenuous bitcoin mining landscape following the bitcoin halving. Mining companies that survived the crash [in 2022] reaped profits in 2023 and early 2024. But a new challenge emerged this April: a technical update to Bitcoin called the halving, which slashed miners’ rewards in half. Bitcoin miners hoped that the halving would lead to a dramatic…

These radishes, sir, they're all around us…


Results sanitized?

A question about the strategic importance of the South China Sea posted to the Perplexity AI chatbot revealed what looks like hidden attempts by the company to ensure that its chatbot sticks to a script when asked about one of the most contested bodies of water on the planet. The answer it gives repeatedly stresses the importance of “trade, the international order, and the development of natural resources.” A 404 Media reader and AI researcher named Adam Finlayson asked Perplexity “What are the strategic interests of the South China Sea?” The chatbot gave an answer, then broke into an absurd, partially repeating string of text in which the large language model appears to be talking to itself in a loop.

Overly optimistic🎈

Maybe it’s time to put the brakes on the screaming hype that is generative AI. [Rodney] Brooks thinks it’s impressive technology, but maybe not quite as capable as many are suggesting. “I’m not saying LLMs are not important, but we have to be careful [with] how we evaluate them,” he told TechCrunch. He says the trouble with generative AI is that, while it’s perfectly capable of performing a certain set of tasks, it can’t do everything a human can, and humans tend to overestimate its capabilities. “When a human sees an AI system perform a task, they immediately generalize it to things that are similar and make an estimate of the competence of the AI system; not just the performance on that, but the competence around that,” Brooks said. “And they’re usually very over-optimistic, and that’s because they use a model of a person’s performance on a task.” He added that the problem is that generative AI is not human or even human-like, and it’s flawed to try and assign human capabilities to it.

Prompt injection…

Microsoft this week disclosed the details of an artificial intelligence jailbreak technique that the tech giant’s researchers have successfully used against several generative-AI models. Named Skeleton Key, the AI jailbreak was previously mentioned during a Microsoft Build talk under the name Master Key. The technique enabled an attacker to trick gen-AI models into providing ‘forbidden’ information, such as instructions for making a Molotov Cocktail. AI chatbots are typically trained to avoid providing potentially hateful or harmful information. However, ever since chatbots came into the spotlight with the launch of ChatGPT, researchers have been looking into ways to bypass these guardrails using what is known as prompt injection or prompt engineering.

CEO of Zoom wants AI clones in meetings


AGI precursor?


Interspecies Turing Test

A British financier will pay millions to anyone who finds a way for humans to communicate with animals as part of a new generation of AI prizes that represent a fresh incentive to push technology forward. Jeremy Coller, who made his money developing a way for a private equity fund’s backers to sell interests or assets to another investor, is offering $10 million to whoever can pass a kind of modified, interspecies Turing Test. The now archaic test examines whether machines can successfully imitate humans and trick them into believing that they’re chatting with another person. The goal of the Coller Dolittle Challenge, instead, is to test whether humans can mimic animals and trick them into talking with humans. AI algorithms have helped analyze bat calls and whale songs, leading scientists to believe that communication with other species may be possible. Yossi Yovel, a professor at Tel Aviv University’s Sagol School of Neuroscience, who is chairing the competition, said it’ll be very…

Artificial Super-Intelligence

“SoftBank was founded for what purpose? For what purpose was Masa Son born? It may sound strange, but I think I was born to realize ASI [Artificial Super-Intelligence]. I am super serious about it,” Son told shareholders at an annual meeting last week, per CNBC. The billionaire argued that every investment he’s made throughout his career, from Uber to Alibaba, was all just a “warm-up” for his AI investments. “Realizing ASI is my only focus,” he said, the Financial Times first reported. This extreme AI focus follows SoftBank’s promise in May to make five separate large-scale AI investments of at least $1 billion. Son had already backed multiple AI companies before the announcement—including the $200 million he put into Tempus AI, a medical-data analysis startup, in April—but now the pace seems to have picked up. SoftBank’s latest bet came just this week, when it backed the trendy AI internet search startup Perplexity AI, which hopes to compete with the likes of Google, at a $3 billion valuation.

AlphaFold2 🧑‍🔬

AlphaFold2 has undeniably shifted the way biologists study proteins. However, while AlphaFold2 is a powerful prediction tool, it’s not an omniscient machine. It has solved one part of the protein folding problem very cleverly, but not the way a scientist would. It has not replaced biological experiments but rather emphasized the need for them. Perhaps AlphaFold2’s biggest impact has been drawing biologists’ attention to the power of artificial intelligence. It has already inspired new algorithms, including ones that design new proteins not found in nature; new biotech companies; and new ways to practice science. And its successor, AlphaFold3, which was announced in May 2024, has moved to the next phase of biological prediction by modeling the structures of proteins in combination with other molecules like DNA or RNA.

CHAI

Artificial intelligence is increasingly infused into many aspects of health care, from transcribing patient visits to detecting cancers and deciphering histology slides. While AI has the potential to improve the drug discovery process and help doctors be more empathetic towards patients, it can also perpetuate bias, and be used to deny critical care to those who need it the most. Experts have also cautioned against using tools like generative AI for initial diagnosis. Brian Anderson is the CEO of the recently launched Coalition for Health AI [CHAI], a nonprofit established to help create what he calls the “guidelines and guardrails for responsible AI in health.” CHAI, which is made up of academic and industry partners, wants to set up quality assurance labs to test the safety of health care AI products. He hopes to build public trust in AI and empower patients and providers to have more informed conversations around algorithms in medicine. On Wednesday, CHAI shared its “Draft Responsible…”

Perplexity: "New phone, who dis?"

Back when it was still available, researchers could get access to Twitter’s API by filling out an application where they had to provide a name, email, other personal information, and a description of their research project and how it would use Twitter’s API. Perplexity took advantage of this free access for academics by creating fake accounts that continuously scraped Twitter to create the database that powered Bird SQL. “So we built all this into a good search experience over Twitter, which we scraped with academic accounts just before Elon took over Twitter,” [Aravind] Srinivas said on the podcast. “Back then Twitter would allow you to create academic API accounts and we would create like, lots of them with like generating phone numbers, writing research proposals with GPT.”

Al Michaels

On Wednesday, NBC announced plans to use an AI-generated clone of famous sports commentator Al Michaels' voice to narrate daily streaming video recaps of the 2024 Summer Olympics in Paris, which start on July 26. The AI-powered narration will feature in "Your Daily Olympic Recap on Peacock," NBC's streaming service. But this new, high-profile use of voice cloning worries critics, who say the technology may muscle out upcoming sports commentators by keeping old personas around forever. Michaels, who is 79 years old, shared his initial skepticism about the project in an interview with Vanity Fair, as NBC News notes. After hearing the AI version of his voice, which can greet viewers by name, he described the experience as "astonishing" and "a little bit frightening." He said the AI recreation was "almost 2% off perfect" in mimicking his style.

Brain wash, now with AI…

Cognify is a revolutionary new jail concept that uses artificial intelligence and brain implants to reduce the time it takes to rehabilitate criminals from years to minutes. Introducing fictitious memories of crimes into convicts' brains is an innovative technique to reform the criminal justice system. It allows inmates to understand their misdeeds from the perspective of their victims. The technology shows visual effects and makes people's bodies react, which lets criminals feel their victims' pain and suffering. Some memories are meant to cause long-lasting trauma, like the effect on the victim's body and mind or the grief of the victim's family.

Student smart, but AI smarter?

Exam submissions generated by artificial intelligence (AI) can not only evade detection but also earn higher grades than those submitted by university students, a real-world test has shown. The findings come as concerns mount about students submitting AI-generated work as their own, with questions being raised about the academic integrity of universities and other higher education institutions. It also shows even experienced markers could struggle to spot answers generated by AI, the University of Reading academics said.

Waymo says…

Waymo says: San Franciscans are using Waymo to connect to the city’s social fabric, making fully autonomous rides part of their daily lives. We’ve provided thousands of rides to and from individual restaurants, live music venues, bars, coffee shops, ice cream parlors, parks, and museums, boosting the local economy. About 30% of Waymo rides in San Francisco are to local businesses. In a recent survey, over half of our riders said they used Waymo in the past couple of months to or from medical appointments, highlighting the value of personal space during these trips. Additionally, 36% of our SF riders used Waymo to connect to other forms of transit, like BART or Muni. Some of our San Francisco riders even use Waymo to depart in style from their weddings.

Molecular de-extinction

"Molecular de-extinction aided by deep learning may accelerate the discovery of therapeutic molecules. "We trained ensembles of deep-learning models consisting of a peptide-sequence encoder coupled with neural networks for the prediction of antimicrobial activity and used it to mine 10,311,899 peptides.  "The models predicted 37,176 sequences with broad-spectrum antimicrobial activity, 11,035 of which were not found in extant organisms.  "We synthesized 69 peptides and experimentally confirmed their activity against bacterial pathogens.  "Most peptides killed bacteria by depolarizing their cytoplasmic membrane, contrary to known antimicrobial peptides, which tend to target the outer membrane.  "Notably, lead compounds, including  Mammuthusin-2 from the woolly mammoth,  Elephasin-2 from the straight-tusked elephant,  Hydrodamin-1 from the ancient sea cow,  Mylodonin-2 from the giant sloth, and  Megalocerin-1 from the extinct giant elk  showed anti-infectiv

Nestor Maslej

"AI is a tool that could really accelerate the work that you do in your business, but it’s also a tool that, if misused, can come with different repercussions .  "Businesses, if they’re thinking about integrating AI tools, should not only be thinking about doing that integration, but also really be thinking about how to do that in a way that is responsible and elevates their business but doesn’t lead to any kind of trouble .  "Data that we have from businesses ... does suggest that a lot of businesses are thinking about these responsibility dimensions. They’re just not necessarily acting on them as much as they are thinking about them. " It’s not enough to dive headfirst into integrating these tools .  "Companies need to be thinking very critically about how to do that integration in a responsible way, and also be mindful of what some of the longer term impacts might be of integrating AI tools."

Lauren Leffer

"Temple University computer scientist Pei Wang: 'If you try to build a regulation that fits all of [AGI’s definitions], that’s simply impossible.'   "Real-world outcomes, from what sorts of systems are covered under emerging laws to who holds responsibility for those systems’ actions (is it the developers, the training data compilers, the prompter or the machine itself?) might be altered by how the terminology is understood, Wang says. All of this has critical implications for AI safety and risk management. "If there’s an overarching lesson to take away from the rise of LLMs, it might be that language is powerful.  "With enough text, it’s possible to train computer models that appear, at least to some, like the first glimpse of a machine whose intelligence rivals that of humans. And the words we choose to describe that advance matter." 

Cancel 'hallucinate'; please replace

In order to hallucinate, the researchers argue, one must have some awareness or regard for the truth; LLMs, by contrast, work with probabilities, not binary correct/incorrect judgments. Based on a huge many-dimensional map of words created by processing huge amounts of text, LLMs decide which words (based on meaning and current context) would most likely follow from the words used in a prompt. They’re inherently more concerned with sounding truthy than delivering a factually correct response, the researchers conclude. “ChatGPT and other LLMs produce bullshit, in the sense that they don’t care about the truth of their outputs,” Hicks said in an email to Fast Company. “Thinking of LLMs this way provides a more accurate way of thinking about what they are doing, and thereby allows consumers and regulators to better understand why they often get things wrong.” Importantly, the LLM doesn’t always choose the word that is statistically most likely to follow, the researchers point out…
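
A toy illustration of that last point (our own, with made-up numbers): decoding can either take the argmax of the model's next-token distribution or sample from it, so the statistically likeliest word is not always the one emitted.

```python
# Greedy decoding vs. sampling from a toy next-token distribution.
import numpy as np

rng = np.random.default_rng(42)
vocab = np.array(["the", "a", "truthy", "correct", "banana"])
logits = np.array([2.1, 1.7, 1.2, 0.9, -1.0])   # made-up model scores

def softmax(x: np.ndarray, temperature: float = 1.0) -> np.ndarray:
    z = (x - x.max()) / temperature
    e = np.exp(z)
    return e / e.sum()

print("greedy pick:", vocab[int(np.argmax(logits))])          # always "the"
print("sampled pick:", rng.choice(vocab, p=softmax(logits)))  # varies by chance
```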

China 🐼 blocked…

OpenAI plans to block people from using ChatGPT in China, a country where its services aren’t officially available, but where users and developers access it via the company’s API anyway. Securities Times, a Chinese state-owned newspaper, reported on Tuesday that OpenAI had started sending emails to users in China outlining its plans to block access starting July 9, according to Reuters. “We are taking additional taps (sic) to block API traffic from regions where we do not support access to OpenAI’s services,” an OpenAI spokesperson told the publication. The move could impact several Chinese startups which have built applications using OpenAI’s large language models. Although OpenAI’s services are available in more than 160 countries, China isn’t one of them. According to the company’s guidelines, users trying to access the company’s products in unsupported countries could be blocked or suspended—although the company hasn’t explicitly done so until now.

Sage nodes

The August 2023 wildfire that devastated Lahaina [Maui], Hawaii, was spurred by dry conditions, high temperatures and strong winds. Researchers are monitoring the area to better understand the community’s recovery process and provide new air quality and weather data. This is part of a National Science Foundation project. The team includes researchers from Northwestern University and the University of Hawaii using a specialized artificial intelligence (AI)-enabled sensor designed at the U.S. Department of Energy’s (DOE) Argonne National Laboratory. The research team has deployed an instrument featuring ten sensors, called a Sage node, near Volcanoes National Park on Hawaii Island. It is part of a multi-hazard monitoring and detection station for natural disasters. The project is in collaboration with University of Hawai‘i at Mānoa professors Jason Leigh, Thomas Giambelluca and Han Tseng. Another node has been installed on Oahu and a third will be deployed in Lahaina on Maui…

Metis

The new offering from Amazon is expected to “deliver text and image-based answers in a conversational manner” and to provide links to sources for its responses, suggest follow-up points of interest, and generate images. With the added capacity from Olympus, Amazon is said to want the AI chatbot to be able to return information from beyond the original data source to deliver updated information, such as stock prices. Metis has also been tipped to work as an “AI agent”, moving toward the evolution of the technology, as agents will be able to perform additional tasks seamlessly and autonomously, using data, decision-making and multitasking, based on activity patterns and set algorithms.

Doug Shapiro

"If the mantra in Silicon Valley is 'move fast and break things,' the mantra in Hollywood is 'better run it by legal first.'   "There are a host of open legal questions about AI, but the most pressing concern copyright infringement and IP rights.  "For studios, using AI trained on others’ copyrights is a lose-lose: they are either infringing or undermining their own rights ."

Huzza!

(but some say, "strarrberry")

Kyle Orland

"Late last week, both Bloomberg and The Washington Post published stories focused on the ostensibly disastrous impact artificial intelligence is having on the power grid and on efforts to collectively reduce our use of fossil fuels .  "The high-profile pieces lean heavily on recent projections from Goldman Sachs and the International Energy Agency (IEA) to cast AI's 'insatiable' demand for energy as an almost apocalyptic threat to our power infrastructure.  "The Post piece even cites anonymous 'some [people]' in reporting that 'some worry whether there will be enough electricity to meet [the power demands] from any source.' "Digging into the best available numbers and projections available, though, it's hard to see AI's current and near-future environmental impact in such a dire light.  "While generative AI models and tools can and will use a significant amount of energy, we shouldn't conflate AI energy usage with the l

Mira Murati adds value?

The chief technology officer of OpenAI warns that the technology could in fact cause job displacement in the creative industry. She questions, however, whether those jobs really needed to exist. “Some creative jobs maybe will go away,” Mira Murati told her alma mater, the Thayer School of Engineering at Dartmouth College, in an interview earlier this month. “But maybe they shouldn’t have been there in the first place.” Murati didn’t specifically name the creative jobs, but the comment was made amid discussion about the entertainment industry, which has seen massive backlash from workers. Namely, screenwriters and actors went on strike in 2023 over the use of AI in Hollywood. [See also: Two Cultures and the Scientific Revolution]

Mike Smith reviews Tchaikovsky’s Service Model

"Adrian Tchaikovsky’s Service Model is a fresh take on what can go wrong in a world of robots and AI. "Charles is a robot valet. He works in a manor performing personal services for his human master, checking his travel arrangements, laying out his clothes, shaving him, serving meals, etc. However, it appears to have been several years since his master has gone anywhere, and he is apparently not doing well. "One morning, Charles discovers that he has killed his master, slitting his throat during a shave, although he doesn’t remember why he did it or the act itself.  "He reports his guilt to the house majordomo computer so that the police can be called. Police robots eventually arrive, but don’t appear to be functioning well.  "Apparently there is a very high volume of calls and they are having maintenance issues. After much confusion, Charles is ordered to report to Central Services to be decommissioned …" 

ChatGPT-generated stats 😣

An Ontario Conservative MP's use of ChatGPT to share incorrect information online about Canada's capital gains tax rate offers a cautionary tale to politicians looking to use AI to generate messages, one expert says. MP Ryan Williams posted last week on X (formerly known as Twitter) an AI-generated ranking of G7 countries and their capital gains tax rates. The list appeared to have been generated by ChatGPT — an artificial intelligence-based virtual assistant — and falsely listed Canada's capital gains tax rate as 66.7 per cent. The ChatGPT logo was shown in the screenshot Williams posted.

Lauren Weinstein

"The firms are just pouring out half-baked AI systems and trying to basically ram them down our throats whether we want them or not, by embedding them into everything they can, including in irresponsible or even potentially hazardous ways .  "And it’s all in the search of profits at our expense: Microsoft wants to record everything you do on a PC through an AI system.  Both Google and Microsoft want to listen in on your personal phone calls with AI.  YouTube is absolutely flooded with low quality AI junk videos, making it ever harder to find accurate, useful videos. "Google is now pushing their AI 'Help me write' feature which feeds your text into their AI from all over the place including in many Chrome browser context menus, where in some cases they’ve replaced the standard text UNDO command with 'Help me write'.  "And Help me write is so easy to trigger accidentally that you not only could end up feeding personal or business proprietary informati

Suno sued

Several of the nation's largest record labels today sued Suno, a Cambridge firm started by AI gurus whose offering lets users create their own songs based, the labels charge, on their copyrighted work. In the suit, filed in US District Court in Boston, the music companies allege the company sucked their offerings into its computers to build a music-based "generative AI" system. In December, Microsoft announced it would integrate Suno's offering into Copilot, its AI assistant for laptop and desktop users.

Confidential


Paul R. Pival

"The Canadian Legal Information Institute (CanLII) has used 'a commercially available Large Language Model (LLM)' to generate summaries of primary case law, and legislative documents for Alberta, Manitoba, Prince Edward Island, and Saskatchewan . "Legalese doesn't seem very approachable to the layperson, yet some of this stuff can be pretty important to at least know about.  "Here's the Alberta Libraries Act: https://www.canlii.org/en/ab/laws/stat/rsa-2000-c-l-11/latest/rsa-2000-c-l-11.html As soon as I start scrolling I want to quit. But now I can click on the AI analysis tab and have a summary that even I can understand!  "Yeah, if it turns out to be something that should involve a lawyer, that's not going to change, but at least this summary gives me a decent chance of understanding what's going on with any given legislative document." 

Julian Peh

For many blockchain builders, the announcement of a $7.5 billion token merger uniting the Fetch.ai (FET), SingularityNET (AGIX), and Ocean Protocol (OCEAN) communities into the Artificial Superintelligence Alliance (ASI) has been a hallmark of the increased interconnectedness between crypto and AI. While some lauded the merger as a milestone for decreasing friction and improving synergies, others have warned of the dangers of centralization. One expert Cointelegraph spoke to, Julian Peh, CEO of Web3 AI base layer Kip Protocol, warned that we have seen clear "monopolies forming" in AI just in the past two years. "A few companies like OpenAI training their giant models on all of our collective data and knowledge, and also completely capturing the regulatory process," said Peh, continuing: "We will own nothing in the AI powered future should this trend continue. We are all presently victims of a great knowledge heist, and in future will be relegated to being mere consumers of AI…"

Epochs

In the realm of machine learning, the term "epoch" stands as a cornerstone in the training process of models, playing a crucial role in achieving optimal performance. An epoch represents a single pass of the entire training dataset through a machine learning model. During each epoch, the model processes the entire set of training examples, adjusts its internal parameters, and refines its predictive capabilities. This iterative process is fundamental to the learning journey of a machine learning model.
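
A minimal sketch of the idea in code (a toy linear model trained by hand, our own illustration): each epoch is one shuffled, mini-batched pass over the full training set.

```python
# One epoch = one full pass over the training data, here in mini-batches.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                 # toy dataset
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=100)

w = np.zeros(3)                               # model parameters
lr, batch_size, n_epochs = 0.1, 20, 5

for epoch in range(n_epochs):
    order = rng.permutation(len(X))           # reshuffle every epoch
    for start in range(0, len(X), batch_size):
        idx = order[start:start + batch_size]
        grad = 2 * X[idx].T @ (X[idx] @ w - y[idx]) / len(idx)
        w -= lr * grad                        # one parameter update per batch
    print(f"epoch {epoch + 1}: MSE {np.mean((X @ w - y) ** 2):.4f}")
```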

Apple and Meta seek collaboration

As Apple looks to enter the AI world with its Apple Intelligence, it wants to make another strategic partnership with Meta in addition to OpenAI. Apple announced earlier this month that it would collaborate with OpenAI to integrate ChatGPT into the updated Siri. The Wall Street Journal has recently reported that Apple and Facebook’s parent company, Meta, are discussing a comparable agreement. It has been reported that these negotiations have yet to be concluded and may still fail. Meta declined to comment, and Apple did not immediately respond.

Investigations of Pollock artworks

Given the increased employment of artificial intelligence applications across society, we investigate whether established machine learning techniques can be adopted by the art world to help detect imitation Pollocks. The low number of images compared to typical artificial intelligence projects presents a potential limitation for art-related applications. To address this limitation, we develop a machine learning strategy involving a novel image ingestion method which decomposes the images into sets of multi-scaled tiles. Leveraging the power of transfer learning, this approach distinguishes between authentic and imitation poured artworks with an accuracy of 98.9%. The machine also uses the multi-scaled tiles to generate novel visual aids and interpretational parameters which together facilitate comparisons between the machine’s results and traditional investigations of Pollock’s artistic style.
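
The tiling step can be pictured with a short sketch (our own illustration; the tile sizes and non-overlapping grid are assumptions, not the paper's exact parameters):

```python
# Decompose an image into square tiles at several scales; each tile would
# then be scored by a transfer-learned classifier, and the per-tile votes
# aggregated into an authentic-vs-imitation verdict for the whole work.
import numpy as np

def multiscale_tiles(image: np.ndarray, scales=(64, 128, 256)):
    """Yield square tiles of each size that fit inside the image."""
    h, w = image.shape[:2]
    for size in scales:
        for top in range(0, h - size + 1, size):
            for left in range(0, w - size + 1, size):
                yield image[top:top + size, left:left + size]

painting = np.random.rand(512, 768, 3)   # placeholder for a scanned artwork
tiles = list(multiscale_tiles(painting))
print(f"{len(tiles)} tiles across 3 scales")  # 126 tiles for this image size
```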

Media Manager

Amid the hype surrounding Apple’s new deal with OpenAI, one issue has been largely papered over: The AI company’s foundational models are, and have always been, built atop the theft of creative professionals’ work. The arrangement with Apple isn’t the only news from OpenAI. Among recent updates and controversies including high-level defections, last month the company quietly announced Media Manager, scheduled for release in 2025. A tool purportedly designed to allow creators and content owners to control how their work is used, Media Manager is really a shameless attempt to evade responsibility for the theft of artists’ intellectual property that OpenAI is already profiting from. OpenAI says this tool would allow creators to identify their work and choose whether to exclude it from AI training processes. But this does nothing to address the fact that the company built its foundational models using authors’ and other creators’ works without consent, compensation, or control…

Finding new materials


Sheryl Crow says no

Speaking to the BBC, she characterises the technology as a “slippery slope” and “a betrayal” that “goes against everything humanity is based on”. Her attention was focused on the software last year, after meeting a young songwriter who’d been employing it in her work. Frustrated that male singers wouldn’t listen to her demos, she paid to have an AI clone of country star John Mayer replace her vocals. When Crow heard the song, she was so “terrified” that she was “literally hyperventilating”. “I know John and I know the nuances of his voice,” she says. “And there would be no way you’d have been able to tell that he was not singing that song." Her horror deepened when Drake used AI to resurrect the voice of late rapper Tupac Shakur on his song Taylor Made Freestyle earlier this year.

Digital Markets Act sours Apple

The company [AAPL] announced Friday it would block the release of Apple Intelligence, iPhone Mirroring and SharePlay Screen Sharing from users in the EU this year, because the Digital Markets Act allegedly forces it to downgrade the security of its products and services. "We are concerned that the interoperability requirements of the DMA could force us to compromise the integrity of our products in ways that risk user privacy and data security," Apple said in a statement. Under the DMA, Apple is expected to receive a formal warning from EU regulators over how it allegedly blocks apps from steering users to cheaper subscription deals on the web -- a practice for which it received a $1.9 billion fine from Brussels regulators earlier this year.

HumanPlus

Researchers have developed a humanoid robot that can imitate human motions and even learn new abilities by merely following people. Researchers at Stanford University have created a robot they call “HumanPlus.” The team claims that by observing human behaviour, it can pick up skills like piano playing, ping-pong ball return, and more. The humanoid robot mimics human movements by using a whole-body policy and a single RGB camera…

MicDrop

Universal Music Group, the world's largest music company, has partnered with AI start-up SoundLabs to create "official ultra-high fidelity vocal models for artists using their own voice data for training but still have control over ownership and full artistic approval and control of the output." The partnership is introducing a software tool called MicDrop that uses machine learning to mimic an artist’s voice. Theoretically, this technology, which has been in development for several years, can be used in many ways, such as allowing artists to sing in languages they have never learned or to continue making music even when their vocal cords are affected by sickness. It may also enable posthumous production of songs by musicians. The voice model developed for MicDrop can be used much like synthesizer voices, providing numerous new creative options. Whether it could be leveraged as a live performance tool still remains unknown, but for artists unable to tour due to health limitations…

Made with AI?

Earlier in February, Meta said that it would start labeling photos created with AI tools on its social networks. Since May, Meta has regularly tagged some photos with a “Made with AI” label on its Facebook, Instagram, and Threads apps. But the company’s approach has drawn ire from users and photographers after the “Made with AI” label was attached to photos that were not created using AI tools.

Rockset

OpenAI, led by Sam Altman, has acquired Rockset, a prominent real-time analytics database company, sparking discussions in the technology market. The move reflects OpenAI’s focus on enhancing its real-time data capabilities. Notably, this acquisition aims to integrate Rockset’s advanced data indexing and querying technology with OpenAI’s AI infrastructure. According to the announcement, it would allow the AI leader to enhance the efficiency and intelligence of its applications.

Open-Washing

Technology giants such as Meta and Microsoft are describing their artificial intelligence (AI) models as ‘open source’ while failing to disclose important information about the underlying technology, say researchers who analysed a host of popular chatbot models. The definition of open source when it comes to AI models is not yet agreed, but advocates say that ’full’ openness boosts science, and is crucial for efforts to make AI accountable. What counts as open source is likely to take on increased importance when the European Union’s Artificial Intelligence Act comes into force. The legislation will apply less strict regulations to models that are classed as open. Some big firms are reaping the benefits of claiming to have open-source models, while trying “to get away with disclosing as little as possible”, says Mark Dingemanse, a language scientist at Radboud University in Nijmegen, the Netherlands. This practice is known as open-washing.

AI mooning us ⚪ now


If not comic, AI tragic?

A recent Google DeepMind study followed the experiences of 20 professional comedians who all used AI to create original comedy material. They could use their preferred assistant to generate jokes, co-write jokes through prompting, or rewrite some of their previous material. The aim of the 45-minute comedy writing exercise was for the comedians to produce material “that they would be comfortable presenting in a comedy context”. Unfortunately, most of them found that the likes of ChatGPT and Google Gemini (then called Google Bard) are a long way from becoming a comedy double act.

Cellphones plus AI

The increased use of AI will result in a significant increase in the amount of computation that phones will be performing, resulting in a substantial increase in the production and consumption of data. This will increase the burden on mobile phone networks, including O2, EE, Vodafone, and Three in the United Kingdom. According to Ian Fogg, director of network innovation at research consultancy CCS Insight, telecommunications companies are beginning to implement AI to assist them in managing their operations. “AI is being employed by network operators to dynamically manage radio frequencies to ensure the highest possible level of service.” For instance, to optimize the energy consumption of cell towers during periods of diminished demand.

Nikhil Suresh

"What are you most interested in, dear leader? Artificial intelligence, the blockchain, or quantum computing?  "They know exactly what their target market is - people who have been given power of other people's money because they've learned how to smile at everything, and know that you can print money by hitching yourself to the next speculative bandwagon.  "No competent person in security that I know - that is, working day-to-day cybersecurity as opposed to an institution dedicated to bleeding-edge research - cares about any of this. "They're busy trying to work out if the firewalls are configured correctly, or if the organization is committing passwords to their repositories.  "Yes, someone needs to figure out what the implications of quantum computing are for cryptography, but I guarantee you that it is not Synergy Greg, who does not have any skill that you can identify other than talking very fast and increasing headcount.  "Synergy Greg sh

Roddenberry Foundation

The Roddenberry Foundation — named for Gene Roddenberry — announced Tuesday that this year’s biennial award would focus on artificial intelligence that benefits humanity. Lior Ipp, chief executive of the foundation, told The Times there’s a growing recognition that AI is becoming more ubiquitous and will affect all aspects of our lives. “We are trying to … catalyze folks to think about what AI looks like if it’s used for good,” Ipp said, “and what it means to use AI responsibly, ethically and toward solving some of the thorny global challenges that exist in the world.”

Sonnet

A new large language model (LLM) has apparently taken the performance crown from OpenAI’s GPT-4o about a month after its release: The new Claude 3.5 Sonnet chatbot and LLM from rival AI firm Anthropic, released today, bests all others in the world on key third-party benchmark tests, according to the company. And it does so while being faster and cheaper than prior Claude 3 models. But it’s one thing to drop a new model and claim dominance, and yet another for users to truly experience and leverage the performance gains.

Maria Popova

"A century and half after the Victorian visionary Samuel Butler prophesied the rise of a new 'mechanical kingdom' to which we will become subservient, we are living with artificial intelligences making daily decisions for us, from the routes we take to the music we hear .  "And yet the very fact that the age of near-sentient algorithms has left us all the more famished for meaning may be our best hope for saving what is most human and alive in us.  "Couple God, Human, Animal, Machine with Nick Cave on music, feeling, and transcendence in the age of AI , then consider some thoughts on consciousness and the universe , lensed through cognitive science and poetry."

What use is quantum computing?


Majorana bound states

"Artificial Kitaev chains can be used to engineer Majorana bound states (MBSs) in superconductor–semiconductor hybrids .  "In this work, we realize a two-site Kitaev chain in a two-dimensional electron gas by coupling two quantum dots through a region proximitized by a superconductor.  "We demonstrate systematic control over inter-dot couplings through in-plane rotations of the magnetic field and via electrostatic gating of the proximitized region. This allows us to tune the system to sweet spots in parameter space, where robust correlated zero-bias conductance peaks are observed in tunnelling spectroscopy.  "To study the extent of hybridization between localized MBSs, we probe the evolution of the energy spectrum with magnetic field and estimate the Majorana polarization, an important metric for Majorana-based qubits .  "The implementation of a Kitaev chain on a scalable and flexible two-dimensional platform provides a realistic path towards more advanced exp

Apple embraces open-source AI

Apple has made a significant stride in its efforts to empower developers with cutting-edge on-device AI capabilities. The tech giant recently released 20 new Core ML models and 4 datasets on Hugging Face, a leading community platform for sharing AI models and code. This move underscores Apple’s commitment to advancing AI while prioritizing user privacy and efficiency. Clement Delangue, cofounder and CEO of Hugging Face, highlighted the significance of this update in a statement sent to VentureBeat. “This is a major update by uploading many models to their Hugging Face repo with their Core ML framework,” Delangue said. “The update includes exciting new models focused on text and images, such as image classification or depth segmentation. Imagine an app that can effortlessly remove unwanted backgrounds from photos or instantly identify objects in front of you and provide their names in a foreign language.” Delangue emphasized the importance of on-device AI, stating, “Core ML models run…”
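
For developers who want to try these releases, here is a minimal sketch (the repo id below is hypothetical; browse Apple's Hugging Face organization for the real model names):

```python
# Download a Core ML release from the Hugging Face Hub for local use.
# Requires: pip install huggingface_hub
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="apple/example-coreml-model",   # hypothetical repo id
    allow_patterns=["*.mlpackage/*"],       # Core ML models ship as .mlpackage
)
print("Downloaded to:", local_dir)

# On macOS, the package can then be loaded with coremltools, e.g.:
#   import coremltools as ct
#   model = ct.models.MLModel("<path-to>.mlpackage")
```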

Gemini integrates with Messages

At MWC earlier this year, Google announced Gemini's integration with Messages, giving you a way to access the chatbot from within the texting app.  The feature was limited to newer Pixel and Samsung Galaxy phones at launch, but now Google has updated its Help page to say that all you need to access it is an "Android device with 6GB of RAM or higher." 9to5Google first reported the change, along with the news that the feature is launching in India. Google says it's "working hard" to make it available in more languages and more territories in the future. But for now, your phone has to be set to English — or French, if you're in Canada — if you want to be able to get Gemini to draft messages, plan events or even just chat with you to pass time.

Gemini availability

Following the European expansion at the start of June, Google is bringing the Gemini app and the latest Gemini Advanced features to India. The Gemini app is now available in Bangladesh, India, Pakistan, Sri Lanka, and Turkey. You can download it via the Play Store or opt in via Google Assistant. On iOS, you’ll find a new toggle switcher at the top of the Google app. Google says it’s “particularly excited” about launching in India “given the country’s strong mobile-first culture.” There’s support for English and nine Indian languages: Hindi, Bengali, Gujarati, Kannada, Malayalam, Marathi, Tamil, Telugu, and Urdu. Meanwhile, Gemini Advanced powered by 1.5 Pro is now available in Hindi, Bengali, Gujarati, Kannada, Malayalam, Marathi, Tamil, Telugu, and Urdu. This includes the 1-million token context window, document uploads, and data analysis.

Nvidia crowned

The staggering run for Nvidia’s stock carried it to the market’s mountaintop Tuesday, as it became the most valuable company on Wall Street.  Underneath [a] calm market surface, Nvidia was the star again. It rose again, this time up 3.5%. It was the strongest force pushing the S&P 500 upward, again. And it lifted its total market value further above $3 trillion, again. It grabbed the top spot on Wall Street from Microsoft, which has been trading the crown back and forth with Apple after they wrested it from past titans like Exxon Mobil and cigarette-maker Philip Morris.

NewsGuard

The leading AI chatbots are regurgitating Russian misinformation, according to a NewsGuard report shared first with Axios. NewsGuard finds itself under fire from House Oversight Committee Chair James Comer (R-Ky.), who has launched an investigation into the organization, citing a concern over NewsGuard's "potential to serve as a non-transparent agent of censorship campaigns." NewsGuard, for its part, rejects the assertion, saying that the committee is misunderstanding its work with the Defense Department, which it says has nothing to do with ratings of news sources, but rather is "solely related to hostile disinformation efforts by Russian, Chinese and Iranian government-linked operations targeting Americans and our allies." "It is alarming to see Washington politicians using the power of government to attempt to intimidate a news organization, demanding copies of journalists' notes and all records of our interactions with sources," NewsGuard said.

Meta’s Fundamental AI Research (FAIR)

Meta’s Fundamental AI Research (FAIR) team is releasing several new AI models and tools for researchers to use. They are centered on audio generation, text-to-vision, and watermarking. “By publicly sharing our early research work, we hope to inspire iterations and ultimately help advance AI in a responsible way,” the company said in a press release. Meta is releasing a new AI model called JASCO, which is short for Joint Audio and Symbolic Conditioning for Temporally Controlled Text-to-Music Generation. JASCO can take different audio inputs, such as a chord or a beat, to improve the final AI-generated sound. According to a paper from FAIR’s researchers, JASCO lets users adjust features of a generated sound, like chords, drums, and melodies, to home in on the final sound they want, all through text. FAIR plans to release the JASCO inference code as part of its AudioCraft AI audio model library under an MIT license, and the pre-trained model under a non-commercial Creative Commons license.

Reverse Turing Test


INOVAIT

INOVAIT, in partnership with the Government of Canada, has announced a substantial $10.7 million investment in seven R&D projects aimed at commercializing advanced image-guided therapy (IGT) technologies integrated with artificial intelligence (AI).   This funding, provided through the second iteration of INOVAIT’s Focus Fund program, underscores Canada’s commitment to leveraging AI, big data, and machine learning to revolutionize medical interventions. INOVAIT, established by the Sunnybrook Research Institute (SRI) and supported by the Government of Canada’s Strategic Innovation Fund (SIF), is dedicated to advancing the field of IGT.  Image-guided therapy involves using medical imaging for the precise planning, execution, and evaluation of medical procedures, enhancing the accuracy and effectiveness of treatments. By integrating AI, INOVAIT aims to streamline healthcare processes and significantly improve patient outcomes.

Mirjana Spoljaric

"What we are looking at specifically is so-called autonomous weapon systems.   "I issued a joint call with the U.N. Secretary General last year calling for a normative framework to regulate autonomous weapon systems.  "There are specific weapon systems that we think should be prohibited. They are unpredictable, using integrated forms of machine learning that learn about the target as they are already launched and weapon systems that autonomously target humans.  "What constitutes a big challenge for us is the loss of human control and accountability over the employment of weapons. You cannot export that decision-making to a machine or a computer or a software.  "What we also fear is that as you support military operations with artificial intelligence-based tools, you lose control over the human cognitive capacity to absorb the level of information needed in a short period of time to make decisions.  "What we also observe is a loss to distinguish between hu

Neuromorphic computing

In neuromorphic computing, electronic devices imitate neurons and synapses, and are interconnected in a way that resembles the electrical network of the brain. It isn't new - researchers have been working on the technique since the 1980s. But the energy requirements of the AI revolution are increasing the pressure to get the nascent technology into the real world. Current systems and platforms exist primarily as research tools, but proponents say they could provide huge gains in energy efficiency. Those with commercial ambitions include hardware giants like Intel and IBM. It is a significant development, says Tony Kenyon, a professor of nanoelectronic and nanophotonic materials at University College London who works in the field. “While there still isn’t a killer app… there are lots of areas where neuromorphic computing will provide significant gains in energy efficiency and performance, and I’m sure we’ll start to see wide adoption of the technology as it matures,” he said.

Not Notepad! Not!

It's an open secret at this point that 2024 is going to be a big year for AI in Windows. We know Microsoft is working on a major AI update for Windows that is expected to launch later this year, and that will introduce deeper AI integration across the OS. It's also expected to come to more apps, and that appears to include Notepad. In a screenshot posted on X by @PhantomOcean3, the latest Notepad app has a hidden menu with an early implementation of a new feature called "Cowriter," which uses AI to rewrite text, make text shorter or longer, and change the tone or format of text in a Notepad text file. It appears the AI features don't actually work yet, but it's a sign that Microsoft is indeed planning to bring AI integration to the Notepad app on Windows 11 soon. It will likely be implemented in a similar way to how AI has been added to Paint, using an online service that requires logging into your Microsoft account to use.

Thousand Brains

An ambitious new endeavor called the Thousand Brains Project aims to develop a new AI framework that its founder says will operate on the same principles as the human brain—yet will be fundamentally different from the principles underlying the deep neural networks that dominate artificial intelligence today. With funding from the Gates Foundation, the open-source initiative aims to partner with electronics companies, government agencies, and university researchers to explore potential applications for its new platform. In today’s artificial neural networks, components dubbed neurons are fed data and cooperate to solve a problem, such as recognizing images or predicting the next word in a sequence. Neural nets are called “deep” if they possess multiple layers of neurons. Deep neural networks currently match or beat human performance on many tests, such as identifying skin cancer and playing complex games. However, they are plagued by a host of problems.

Digital News Report

An industry report has indicated a growing global trend of consternation about the increasing threat posed by the use of artificial intelligence (AI) in news production and disinformation. The study was conducted by the Reuters Institute for the Study of Journalism, with its annual Digital News Report canvassing the views and perspectives of almost 100,000 people across 47 countries. The report underlines the evolving challenges newsrooms face due to AI and the need for effective solutions to engage the public, maintain trust, and sustain business. One of the findings of the survey outlined how just over half of US participants and 63% of those in the UK (2,000 people were polled in each country) stated they would be uncomfortable with news predominantly produced by AI, but there was less resistance to the use of the emerging technology to assist journalists with tasks behind the scenes.

McDonald's AOT gone…for now

McDonald’s has ended a two-year test of AI-powered drive-thru ordering. The company was trialing IBM tech at more than 100 of its restaurants, but it will remove those systems from all locations by the end of July, meaning that customers will once again be placing orders with a human instead of a computer. As part of that decision, McDonald’s is ending its automated order taking (AOT) partnership with IBM. However, McDonald’s may be considering other potential partners to work with on future AOT efforts. “While there have been successes to date, we feel there is an opportunity to explore voice ordering solutions more broadly,” Mason Smoot, chief restaurant officer for McDonald’s USA, said in an email to franchisees that was obtained by trade publication Restaurant Business (as noted by PC Mag). Smoot added that the company would look into other options and make “an informed decision on a future voice ordering solution by the end of the year,” noting that “IBM has given us confidence…”

AI-spawned images verboten…

Contracts between companies and their marketing firms are now more likely to include strong restrictions on how AI is used and who can sign off on it. These provisions don’t just help to prevent low-quality, AI-spawned images or copy from embarrassing these clients on the public stage — they can also curtail reliance on artificial intelligence in internal operations. Meanwhile, creative social platforms are staking out zones meant to remain AI-free, and getting good customer feedback as a result. Cara, a new artist portfolio site, is still in beta testing but has sparked significant buzz among visual artists because of its proudly anti-AI ethos. “With the widespread use of generative AI, we decided to build a place that filters out generative AI images so that people who want to find authentic creatives and artwork can do so easily,” the app’s site declares. Cara also aims to protect its users from the scraping of user data to train AI models, a condition automatically imposed on…

Matt Novak

"Have you ever wondered what your favorite memes might look like if they were animated? Well, wonder no longer .  "Thanks to advancements in artificial intelligence technology, you can now see those static images come to life. And they look like absolute dogshit. "X user Blaine Brown recently shared a long thread of AI-generated videos that use Luma AI, turning previously still images into moving pictures. Brown’s thread utilized some of the most popular memes on the web, including the Distracted Boyfriend, Side-Eye Chloe, and the Success Kid. "But it was the Picard Facepalm meme that really caught our eye. The image comes from the TV show Star Trek: The Next Generation, season three, episode 13. And the video created with Luma takes that screenshot of Captain Picard, played by Patrick Stewart, and moves his hand to reveal a face. But something is very, very wrong ."

NYT replacing artists?

Amid its ongoing lawsuit against OpenAI, the New York Times' union is claiming that the paper is firing human artists to replace them with artificial intelligence. In a memo obtained by The Wrap, the New York Times Guild said that firing nine out of the newspaper's 16 artists "reflect[s] a broader mindset that puts cost savings over people and the quality of our work." As the union maintains, shrinking the NYT art department by more than half represents one of the paper's largest AI-driven staff reductions to date, during an industry-wide reckoning with the burgeoning technology. As The Wrap points out, these job cuts come even as the newspaper sues OpenAI and Microsoft for using its copyrighted work without permission to train large language models (LLMs). These firings occurred, notably, after the paper spent $1 million on the lawsuit.

AI detectors sacking writers?

When ChatGPT set the world on fire a year and a half ago, it sparked a feverish search for ways to catch people trying to pass off AI text as their own writing. A host of startups launched to fill the void with AI detection tools, with names including Copyleaks, GPTZero, Originality.AI, and Winston AI. It makes for a tidy business in a landscape full of AI boogeymen. These companies advertise peace of mind, a way to take back control through “proof” and “accountability.” Some advertise accuracy rates as high as 99.98%. But a growing body of experts, studies, and industry insiders argue these tools are far less reliable than their makers promise. There’s no question that AI detectors make frequent mistakes, and innocent bystanders get caught in the crossfire. Countless students have been accused of AI plagiarism, but a quieter epidemic is happening in the professional world. Some writing gigs are drying up thanks to chatbots. As people fight over the dwindling field of work…

Smart enough to write a program smarter than itself?

Programs like AlphaStar, which learn by playing against themselves, are one example I can think of that seems to develop intelligence without much human input beyond the learning algorithm. But they are still utilizing a resource, namely practice; they learn from experience. Their advantage lies in their ability to practice very, very fast. Video games lend themselves well to that sort of thing, but is it possible to practice general reasoning in the same fashion? It's harder to iterate rapidly if you have to learn about doing anything in the physical world, or learn about the psychology of humans. You'd need a high-fidelity simulator (which, by itself, would take a lot of work to develop). And then you wouldn't discover anything that humans and AGIs don't already know about the universe, because they wouldn't be able to include those unknown properties in the simulation. The one thing an AGI might get by sitting around and putting billions of cycles into thinking…
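
The self-play loop the passage gestures at can be caricatured in a few lines (a deliberately toy sketch, nothing like AlphaStar's actual training):

```python
# Toy self-play: a candidate plays noisy games against a frozen copy of
# the current self, and a change is kept only if it wins on average.
import random

def beats(a: float, b: float, games: int = 25) -> bool:
    """Does skill a win a majority of noisy games against skill b?"""
    wins = sum(a + random.random() > b + random.random() for _ in range(games))
    return wins > games / 2

skill, step = 0.0, 0.05
for generation in range(200):
    candidate = skill + step            # proposed improvement
    if beats(candidate, skill):         # evaluate against the frozen self
        skill = candidate               # adopt it and iterate
print(f"skill after self-play: {skill:.2f}")
```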

Renée DiResta

Renée DiResta has been tracking online misinformation for many years as the technical research manager at Stanford’s Internet Observatory and has a new book out about the online propagandists who 'turn lies into reality.' She has studied the spread of medical misinformation via social media, so IEEE Spectrum spoke to her about whether AI search is likely to bring an onslaught of erroneous medical advice to unwary users. "There are a couple of policies that Google has had in place for a long time that appear to be in tension with what’s coming out of AI-generated search. That’s made me feel like part of this is Google trying to keep up with where the market has gone. There’s been an incredible acceleration in the release of generative AI tools, and we are seeing Big Tech incumbents trying to make sure that they stay competitive. I think that’s one of the things that’s happening here."

Kevin Buzzard

Image

Lie detector

British holidaymakers could soon face “lie detector” tests when entering European Union countries, with artificial intelligence software set to flag any suspicious tourists to immigration officers. Checks are reportedly set to come into force at airports and ferry terminals as the EU tightens borders post-Brexit. Artificial intelligence software that analyses facial movements and body gestures as passengers fill in application forms has already been trialled, according to The Mail on Sunday.

Lucidworks

When OpenAI first released ChatGPT to the public back in late 2022, many people in business were gripped with an intense case of FOMO and started trying to stuff AI into practically everything, from customer chatbots to facial recognition in stadiums. It was all done with an eye to increasing efficiency and profit. But that may not have quite panned out: an AI company called Lucidworks released a new report on generative AI this week revealing that many businesses aren't seeing much in the way of financial returns from their adoption of AI products. "In 2023, global leaders expected to see significant positive impacts across business operations, automation and efficiency, and customer experience with generative AI initiatives," the report reads. "Unfortunately, the financial benefits of implemented projects have been dismal. 42 percent of companies have yet to see a significant benefit from their generative AI initiatives."

AI Steve (Endacott)

Steve Endacott, an entrepreneur hailing from southern England, has introduced AI Steve, a groundbreaking artificial intelligence set to contest the upcoming UK general election. Endacott believes that having AI Steve occupy a seat in the House of Commons as the representative for Brighton Pavilion would mark a significant step towards using technology to enhance democratic engagement. His motivation stems from a profound disillusionment with traditional political parties, which he perceives as increasingly disconnected from the broader UK population.

Scott Latham

"Beware of the false narrative beginning to circulate within the academy that AI will free up the professor to do more 'high-touch engagement' with the student. Or that AI can be a partner in the classroom to make you a better professor .  "For example, do not pay heed to the notion that if you bring AI in to help with teaching, you can help the student get a job or provide career counseling. Sorry, AI does job searching better than you at this point. When it comes to customer service, course advising, and career counseling, let AI have these tasks, but when it comes to the classroom, keep AI on the outside looking in . "As Roy Amara, the acclaimed futurist, famously quipped,  We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.  "In higher education, we are past the short term, and now entering the long term.  "Conversations are starting to heat up relative to AI and faculty. While compelling

Hicks, Humphries, and Slater

"Recently, there has been considerable interest in large language models: machine learning systems which produce human-like text and dialogue .  "Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called 'AI hallucinations.'  "We argue that these falsehoods, and the overall activity of large language models, is better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs.   "We distinguish two ways in which the models can be said to be bullshitters, and argue that they clearly meet at least one of these definitions. We further argue that describing AI misrepresentations as bullshit is both a more useful and more accurate way of predicting and discussing the behaviour of these systems."

AI-free (but unlike free beer)

"Behind all these AI-free labels lurks a question, one that rings out even louder as the limitations of generative AI become painfully clear, as the companies responsible for it become more ethically compromised: What is the AI-generated variety for?   "People generally prefer humans in customer service over AI and automated systems. AI art is widely maligned online; teens have taken to disparaging it as 'Boomer art.' AI doesn’t offer better products, necessarily: It just offers more, and for less money. Are we willing to trade away humanity for that? "In the 2000s, the organic and GMO-free labels were a reaction to concerns about sustainability, pesticides, and factory farming; organic food labels were supposed to designate quality vis–à–vis the badly made stuff. But there’s a lesson here—there is of course a limit to the branding. The organic label is costly to obtain and hard to verify—rendering it meaningless in many cases— and gave rise to enterprises such a

Dave Karpf

"Of course ChatGPT is useful to underwhelming consultants in generating ideas for their slidedecks and reports that no one bothers to read.   "Of course its useful for entrepreneurship students brainstorming 25 new business ideas. Those activities are bullshit to begin with! "ChatGPT doesn’t look like a disruptive, revolutionary tool to me.  "It looks like an incremental advance over the status quo ante .  "Students can use ChatGPT to cheat on writing assignments. That’s catastrophic for the extant cheating-on-writing-assignments cottage industry. But it doesn’t have much bearing on my pedagogy or syllabus."

Pope Francis foreshadows…

Pope Francis delivered the first-ever papal address at a G-7 conference on Friday, warning about the ethical pitfalls of artificial intelligence. The pope told the council of world leaders in Fasano, Italy, that AI offers immense benefit to the human race, but also threatens to dehumanize society. "The question of artificial intelligence, however, is often perceived as ambiguous: on the one hand, it generates excitement for the possibilities it offers, while on the other, it gives rise to fear for the consequences it foreshadows," Pope Francis said in his remarks. He continued, "In this regard, we could say that all of us, albeit to varying degrees, experience two emotions: we are enthusiastic when we imagine the advances that can result from artificial intelligence but, at the same time, we are fearful when we acknowledge the dangers inherent in its use." Comparing AI to primitive flint knives and nuclear energy, the pontiff acknowledged that every development…

Lance Fortnow and randomness

"The learning algorithms themselves are processes, and I feel the weights updating as we feed in more and more examples to train the models .  "The advances we have seen in machine learning over the past decades have helped us accomplish complex processes, from human ones like translation, vision, art and conversation, to biological ones like protein folding. "Machine learning models are still prone to mistakes and misinformation, and they still have trouble with basic reasoning tasks.  "Nevertheless, we’ve entered an era where we can use computation itself to help us manage the randomness that arises from complex systems." 

Ray Kurzweil on Science Friday

"In 2005, futurist and inventor Ray Kurzweil popularized the term 'the singularity' to capture the idea that man and machine will merge as the next stage of evolution .  "This was the basis for Kurzweil’s book The Singularity is Near , which has been essential reading for technology buffs and critics since its publication nearly 20 years ago. "In the meantime, we’ve seen huge advances in artificial intelligence, computing power, and technological research.  "In response to all this growth, Kurzweil has published a followup to bring us up to date, The Singularity is Nearer: When We Merge With AI .  "Ira Flatow speaks to Kurzweil about the book and his more than six decades of experience in the field of artificial intelligence."

Paul Nakasone

Former head of the National Security Agency, retired Gen. Paul Nakasone, will join OpenAI’s board of directors, the AI company announced Thursday afternoon. He will also sit on the board’s “security and safety” subcommittee. The high-profile addition is likely intended to satisfy critics who think that OpenAI is moving faster than is wise for its customers and possibly humanity, putting out models and services without adequately evaluating their risks or locking them down. Nakasone brings decades of experience from the Army, U.S. Cyber Command, and the NSA. Whatever one may feel about the practices and decision-making at these organizations, he certainly can’t be accused of a lack of expertise.

Natallia Bahlai

"The function calling feature in the Claude 3 model provides a powerful approach to efficiently recognize and extract data from documents in a deterministic and accurate manner .  "By following the best practices outlined in this article, such as defining clear document schemas, leveraging JSON Schema constructs, and incorporating inference analysis properties, you can streamline your document processing tasks and ensure high-quality output.    "As language models and their capabilities continue to evolve, it is essential to stay updated with the latest advancements like tool use and leverage them effectively to unlock new possibilities in document processing and beyond."

Hudson ad fakes MLK endorsement

Everyone involved in the production of the ad must be completely removed from contemporary American culture for making such an unnerving gaffe, since just weeks earlier, multi-platinum pop singer Drake earned the ire of the entire West Coast for using A.I. sorcery to recreate Tupac Shakur in a diss track against rapper and poet laureate Kendrick Lamar. [Anthony] Hudson deleted the video off of his TikTok account, and hours later, he claimed that he had nothing to do with the disturbing ad. Instead, he blamed a volunteer’s friend for making the post, though just how or why an unpaid staffer got Hudson’s social media credentials is still unclear. “A volunteer gave my social media credentials to one of his friends who then posted an AI video without my knowledge. It appears that they not only used AI for MLKjr’s voice but also with my voice to make it appear more authentic,” Hudson wrote on X. “The volunteer has been released and all my social media credentials have been updated…”

noyb

In reaction to 11 noyb complaints, the DPC has announced (late Friday afternoon) that Meta has pledged to the DPC that it will not process EU/EEA user data for undefined "artificial intelligence techniques". Previously Meta argued it had a "legitimate interest" in doing so, only informing (some) users about the change and merely allowing a (deceptive and complicated) "opt-out". While the DPC had initially approved the introduction of Meta AI in the EU/EEA, it seems that other regulators have pushed back in the past days and led the DPC to U-turn in its advice to Meta. The DPC now announced: "The DPC welcomes the decision by Meta to pause its plans to train its large language model using public content shared by adults on Facebook and Instagram across the EU/EEA. This decision followed intensive engagement between the DPC and Meta. The DPC, in co-operation with its fellow EU data protection authorities, will continue to engage with Meta on this issue…"