Posts

Showing posts from March, 2024

Curating internal data…

A survey of 334 CDOs and data leaders conducted in the second half of 2023 (sponsored by Amazon Web Services and the MIT Chief Data Officer/Information Quality Symposium), together with a series of interviews with these executives, found that while they are as excited about generative AI as everyone else, they have much work to do to get ready for it. In terms of data preparedness in particular, companies have not yet created new data strategies or begun to manage their data in the ways necessary to make generative AI work for them. We’ll describe the results of the survey and what they suggest for next steps with data.

Zeyi Yang

"The team at MacroPolo, the think tank of the Paulson Institute, an organization that focuses on US-China relations, studied the national origin, educational background, and current work affiliation of top researchers who gave presentations and had papers accepted at NeurIPS, a top academic conference on AI. "Their analysis of the 2019 conference resulted in the first iteration of the Global AI Talent Tracker. They’ve analyzed the December 2022 NeurIPS conference for an update three years later. "I recommend you read the original report, which has a very well-designed infographic that shows the talent flow across countries. But to save you some time, I also talked to the authors and highlighted what I think are the most surprising or important takeaways from the new report. "Here are the four main things you need to know about the global AI talent landscape today."

TacticAI ⚽

A new AI assistant developed by Google DeepMind … can suggest tactics for soccer set-pieces that are even better than those created by professional club coaches. The system, called TacticAI, works by analyzing a dataset of 7,176 corner kicks taken by players for Liverpool FC, one of the biggest soccer clubs in the world. Corner kicks are awarded to an attacking team when the ball passes over the goal line after touching a player on the defending team. In a sport as free-flowing and unpredictable as soccer, corners—like free kicks and penalties—are rare instances in the game when teams can try out pre-planned plays. TacticAI uses predictive and generative AI models to convert each corner kick scenario—such as a receiver successfully scoring a goal, or a rival defender intercepting the ball and returning it to their team—into a graph, and the data from each player into a node on the graph, before modeling the interactions between each node. The work was published in Nature Communications.
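The graph encoding described above (each player a node, pairwise interactions as edges) can be sketched roughly as follows. This is only an illustrative toy, not TacticAI's actual schema: the class names, features, and example players are invented.

```python
# Illustrative sketch of encoding a corner-kick scenario as a graph,
# in the spirit of the player-as-node approach described above.
# All field names, features, and example players are invented.
from dataclasses import dataclass, field

@dataclass
class PlayerNode:
    player_id: str
    team: str          # "attacking" or "defending"
    x: float           # pitch position
    y: float
    vx: float          # velocity components
    vy: float

@dataclass
class CornerKickGraph:
    nodes: list
    edges: list = field(default_factory=list)  # (i, j) node-index pairs

    def connect_all(self):
        """Fully connect players so a model can learn every pairwise interaction."""
        n = len(self.nodes)
        self.edges = [(i, j) for i in range(n) for j in range(n) if i != j]

players = [
    PlayerNode("kicker", "attacking", 100.0, 0.0, 0.0, 0.0),
    PlayerNode("receiver", "attacking", 94.0, 30.0, 2.0, 1.0),
    PlayerNode("marker", "defending", 93.0, 31.0, -1.0, 0.5),
]
g = CornerKickGraph(players)
g.connect_all()
print(len(g.edges))  # 3 players -> 6 directed edges
```

A graph neural network would then pass messages along these edges to score outcomes (who receives, whether a shot results) or to generate adjusted player positions.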

Actress Replaced by AI

Stage and voice actress Sara Poyzer was unceremoniously informed that she had been replaced by artificial intelligence, the Mamma Mia star said Tuesday. Poyzer posted the text of an email message she received from an unnamed production company that said her voice services were no longer needed by the BBC as the broadcaster had given permission to use an AI-generated voice instead. “Sorry for the delay,” the message said. “We have had the approval from [the] BBC to use the AI-generated voice so we won’t need Sara anymore.”

Onavo

New documents from an ongoing class action suit against the company [Meta] over anticompetitive behavior reveal some of the specific ways it tackled rivals in recent years. One of them was using software made by a mobile data analytics company called Onavo in 2016 to access user activities on Snapchat, and eventually Amazon (AMZN) and YouTube, too. Meta, the parent of Instagram and Threads, currently claims a market cap of $1.24 trillion, while Snapchat’s parent company Snap is worth just $19 billion. Snapchat has 800 million monthly active users, which pale in comparison to Facebook’s three billion and Instagram’s two billion. Meta is one of the tech giants under antitrust investigation by the European Union. It’s one of the six companies (along with Apple (AAPL), Alphabet (GOOGL), Amazon, ByteDance and Microsoft (MSFT)) designated as “gatekeepers” in the digital economy under the E.U.’s newly enacted Digital Markets Act.

Voice Engine

OpenAI is offering limited access to a text-to-voice generation platform it developed called Voice Engine, which can create a synthetic voice based on a 15-second clip of someone’s voice. The AI-generated voice can read out text prompts on command in the same language as the speaker or in a number of other languages. “These small scale deployments are helping to inform our approach, safeguards, and thinking about how Voice Engine could be used for good across various industries,” OpenAI said in its blog post. Companies with access include the education technology company Age of Learning, visual storytelling platform HeyGen, frontline health software maker Dimagi, AI communication app creator Livox, and health system Lifespan.

Sally Kornbluth

"As generative AI evolves at an exceptionally rapid pace, MIT has a responsibility to help humanity pursue a future of AI innovation that is broadly beneficial and mitigates potential harm. A deep understanding of the societal impact of AI is a vital part of this effort, and MIT faculty have an extraordinary breadth of knowledge and insight to contribute. "Recognizing that keeping up with the pace of AI development requires our faculty to accelerate their research, in the fall of 2023 we funded seed grants for 27 faculty members to explore how generative AI will transform people’s lives and work. The resulting papers, 25 of which are published here, delve into questions and problems across an enormous range of disciplines, from healthcare to finance, climate to education, manufacturing to music. "This collection offers a glimpse into some of MIT’s most brilliant minds at work, weaving new ideas across fields, departments and schools."

BU ♥️ AI instead of teachers

Boston University administrators recommended that faculty members use generative AI tools in classrooms due to the BU Graduate Workers Union strike. In an email sent to faculty members on Wednesday that was seen by The Daily Beast, the BU Dean of Arts & Sciences Stan Sclaroff provided recommendations for staff to “manage course discussion sections and labs that are impacted by the BUGWU strike.” While most suggestions were tame, such as assigning readings and combining discussion sections, Sclaroff also “listed some creative ways in which, we have heard, some faculty are adapting their course formats and using technology to serve their students.” This included a recommendation that they, “Engage generative AI tools to give feedback or facilitate ‘discussion’ on readings or assignments.” “For some bewildering reason, they decided to throw in an extremely non-conventional and ultimately self-damaging suggestion that we just use ChatGPT to do the work,” said one faculty member.

UnScanny

By the end of the year, travelers should be able to refuse facial recognition scans at airport security screenings without fear it could delay or jeopardize their travel plans. That’s just one of the concrete safeguards governing artificial intelligence that the Biden administration says it’s rolling out across the US government, in a key first step toward preventing government abuse of AI. The move could also indirectly regulate the AI industry using the government’s own substantial purchasing power.

jrenaut…

"Let’s say I want to be an author. I’m into horror, so I read a bunch of horror books to hone my craft. I read a lot of Stephen King, and now a lot of what I write is pretty heavily influenced by his style.  "He’s a pretty successful author, so this isn’t really a bad thing. Now, no one would consider this copyright infringement, right? Every author ever is influenced by what they read. It’s one of the first things they teach you in writing class – go read more. "So what is the difference between me reading Stephen King and bits of his style creeping into mine, and an LLM reading EVERYONE, and bits of their style creeping into its writing?   "The only difference is volume. And there’s nothing in copyright law that says 'doing this once is fine, doing it 100,000 times is a violation'." 

Interbrain Synchrony

Dozens of recent experiments studying the brain activity of people performing and working together — duetting pianists, card players, teachers and students, jigsaw puzzlers and others — show that their brain waves can align in a phenomenon known as interpersonal neural synchronization, also known as interbrain synchrony. “There’s now a lot of research that shows that people interacting together display coordinated neural activities,” said Giacomo Novembre, a cognitive neuroscientist at the Italian Institute of Technology in Rome, who published a key paper on interpersonal neural synchronization last summer. The studies have come out at an increasing clip over the past few years — one as recently as last week — as new tools and improved techniques have honed the science and theory. They’re finding that synchrony between brains has benefits. It’s linked to better problem-solving, learning and cooperation, and even with behaviors that help others at a personal cost.

TritonGPT

Meet TritonGPT—UC San Diego’s specialized artificial intelligence-powered information and resource assistant. Next week, following the success of a two-month early user program, TritonGPT will enter the next phase of its campuswide launch.  The “second wave” pilot will grant access to campus employees in the Vice Chancellor-Chief Financial Officer area, with additional VC areas to be rolled in throughout April and May as IT Services moves toward making TritonGPT accessible to all staff and faculty in campus and health sciences units. So what exactly is this new tool? Picture “ChatGPT,” but uniquely trained to provide information and generate text that’s specific to UC San Diego.

AI fluffs CSR

Contact centers will always need humans because some calls require a level of empathy and understanding that AI just can’t provide. Additionally, AI will always require human oversight, and its increased usage will give rise to in-house AI Competency Centers, where knowledgeable agents will oversee AI operations and outputs. Ultimately, frontline AI voice assistants and experienced live agents will work hand-in-hand to deliver the best customer experience possible, driving high satisfaction scores and business success.

LAION-5B

LAION-5B is a really big, open-source dataset of images and text captions scraped from the internet, designed for large AI models. It was released in 2022 by LAION, a German non-profit organization. LAION-5B is what we call a "foundation dataset" for generative artificial intelligence. Training a model on LAION-5B is meant to give it a comprehensive representation of the world, to build a kind of vocabulary of things and concepts.
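Concretely, a dataset like this is just billions of (image URL, caption) records plus quality scores, which get filtered before training. A toy sketch of that filtering step follows; the records and field names are invented for illustration (real LAION-5B metadata ships as parquet files with billions of rows).

```python
# Toy sketch of filtering LAION-style metadata before training.
# Each record pairs an image URL with its scraped caption and an
# image-text similarity score. All records and thresholds are invented.
records = [
    {"url": "https://example.com/cat.jpg", "caption": "a tabby cat on a sofa", "similarity": 0.34},
    {"url": "https://example.com/blank.jpg", "caption": "", "similarity": 0.05},
    {"url": "https://example.com/dog.jpg", "caption": "golden retriever puppy", "similarity": 0.31},
]

def keep(rec, min_similarity=0.28):
    """Typical pre-training filter: drop empty captions and weak image-text matches."""
    return bool(rec["caption"]) and rec["similarity"] >= min_similarity

filtered = [r for r in records if keep(r)]
print(len(filtered))  # 2
```

At LAION's scale this same pass runs over billions of rows, which is why the quality of the scraping and scoring pipeline shapes everything a model trained on it learns.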

Electioneering

When it comes to AI possibly influencing elections, 2024 will be "ground zero," according to Hillary Clinton. This will be a huge election year, with more than four billion people on this planet eligible to vote in one poll or another. The output of generative AI in all this politics, at least, is expected to be unavoidable in 2024; deepfake images, falsified audio, and such software-imagined stuff are likely to be used in attempts to sway or put off voters, undermine people's confidence in election processes, and sow division. That's not to say nothing should be trusted, or that elections will be thrown. Instead, everyone should be mindful of artificial intelligence, what it can do, and how it can be misused.

IRL Fakes

A Telegram user who advertises their services on Twitter will create an AI-generated pornographic image of anyone in the world for as little as $10 if users send them pictures of that person. Like many other Telegram communities and users producing nonconsensual AI-generated sexual images, this user creates fake nude images of celebrities, including images of minors in swimsuits, but is particularly notable because it plainly and openly shows one of the most severe harms of generative AI tools: easily creating nonconsensual pornography of ordinary people. In one of the chat rooms in this Telegram channel, titled “IRL Fakes,” the person who creates the images posts real photographs of people, seemingly taken from their social media profiles, alongside their AI-generated porn.

AI here to help

In October, New York City announced a plan to harness the power of artificial intelligence to improve the business of government. The announcement included a surprising centerpiece: an AI-powered chatbot that would provide New Yorkers with information on starting and operating a business in the city. The problem, however, is that the city’s chatbot is telling businesses to break the law. Five months after launch, it’s clear that while the bot appears authoritative, the information it provides on housing policy, worker rights, and rules for entrepreneurs is often incomplete and in worst-case scenarios “dangerously inaccurate,” as one local housing policy expert told The Markup.

StealthMole

The company says it traces criminals using 255 billion analyzed data points from the dark web, deep web, and various hidden sources while leveraging advanced AI. Through AI and machine learning, it reportedly collects and connects data from hidden digital sources. This, in turn, aids governments and law enforcement in early risk mitigation and criminal tracking and supports businesses in cyber incident response and prevention. StealthMole founder Louis Hur stated: “StealthMole came about from a critical market gap I encountered while working in cybersecurity and white-hat hacking: a severe lack of data points and information networks specifically within Asia.” He added that data leaks, anonymized transactions, and cybercrimes were on the rise, demanding a better understanding of digital threats.

Who's on first but AI's out

The British Broadcasting Corporation (BBC) has decided not to incorporate artificial intelligence (AI) into the advertising of the iconic show Doctor Who. We reported earlier this month that David Housden, a senior executive with the BBC, had initially said that there is “a rich variety of content in the Whoniverse collection on iPlayer to test and learn with, and Doctor Who thematically lends itself to AI, which is a bonus.” Now the fans have spoken and the broadcaster has issued an official statement saying “We have no plans to do this again to promote Doctor Who.”

Claude winning winningly

On Tuesday, Anthropic's Claude 3 Opus large language model (LLM) surpassed OpenAI's GPT-4 (which powers ChatGPT) for the first time on Chatbot Arena, a popular crowdsourced leaderboard used by AI researchers to gauge the relative capabilities of AI language models. "The king is dead," tweeted software developer Nick Dobos in a post comparing GPT-4 Turbo and Claude 3 Opus that has been making the rounds on social media. "RIP GPT-4." Since GPT-4 was included in Chatbot Arena around May 10, 2023 (the leaderboard launched May 3 of that year), variations of GPT-4 have consistently been at the top of the chart until now, so its defeat in the Arena is a notable moment in the relatively short history of AI language models. One of Anthropic's smaller models, Haiku, has also been turning heads with its performance on the leaderboard.

Repetition bears fruit

Several big businesses have published source code that incorporates a software package previously hallucinated by generative AI. Not only that, but someone, having spotted this recurring hallucination, had turned that made-up dependency into a real one, which was subsequently downloaded and installed thousands of times by developers as a result of the AI's bad advice, we've learned. If the package had been laced with actual malware, rather than being a benign test, the results could have been disastrous. According to Bar Lanyado, security researcher at Lasso Security, one of the businesses fooled by AI into incorporating the package is Alibaba, which at the time of writing still includes a pip command to download the Python package huggingface-cli in its GraphTranslator installation instructions.
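One cheap defense against this kind of supply-chain trick is to refuse to auto-install any AI-suggested dependency that isn't already on a vetted list, flagging unknown names for human review instead. A minimal sketch, with an invented allowlist:

```python
# Defensive sketch: vet an AI-suggested package name against a locally
# maintained allowlist instead of trusting the suggestion outright.
# The allowlist contents here are invented for illustration.
KNOWN_GOOD = {"requests", "numpy", "huggingface_hub"}

def vet_suggestion(package: str) -> str:
    """Return 'install' for vetted names, 'review' for anything unknown."""
    # Normalize the way PyPI does: case-insensitive, '_' and '-' equivalent.
    normalized = package.lower().replace("_", "-")
    if normalized in {p.lower().replace("_", "-") for p in KNOWN_GOOD}:
        return "install"
    # Unknown name: could be real, could be a hallucination someone has
    # since registered as a real (possibly malicious) package.
    return "review"

print(vet_suggestion("huggingface_hub"))   # install
print(vet_suggestion("huggingface-cli"))   # review
```

The point is not the specific list but the posture: an AI's confident suggestion of a package name is a claim to verify, not an instruction to follow.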

Chief AI officers

The White House has announced the "first government-wide policy to mitigate risks of artificial intelligence (AI) and harness its benefits."  To coordinate these efforts, every federal agency must appoint a chief AI officer with "significant expertise in AI." Some agencies have already appointed chief AI officers, but any agency that has not must appoint a senior official over the next 60 days.  If an official already appointed as a chief AI officer does not have the necessary authority to coordinate AI use in the agency, they must be granted additional authority or else a new chief AI officer must be named.

Ray

Thousands of servers storing AI workloads and network credentials have been hacked in an ongoing attack campaign targeting a reported vulnerability in Ray, a computing framework used by OpenAI, Uber, and Amazon. The attacks, which have been active for at least seven months, have led to the tampering of AI models.  They have also resulted in the compromise of network credentials, allowing access to internal networks and databases and tokens for accessing accounts on platforms including OpenAI, Hugging Face, Stripe, and Azure.  Besides corrupting models and stealing credentials, attackers behind the campaign have installed cryptocurrency miners on compromised infrastructure, which typically provides massive amounts of computing power.  Attackers have also installed reverse shells, which are text-based interfaces for remotely controlling servers.

Galaxy AI coming

The trickling of Samsung's latest Galaxy AI features will begin on March 28, this Thursday, the company confirmed today. With the latest One UI 6.1 update, older Samsung devices including last year's Galaxy S23 series, Z Flip 5 and Z Fold 5, Tab S9 series, and more will gain access to key Galaxy AI features, such as Circle to Search, Chat Assist, and Generative Edit. "Galaxy AI puts our groundbreaking suite of AI tools in the palm of more users' hands in different form factors -- tailoring the mobile experience to their needs. Now, Samsung is bringing Galaxy AI features to even more users across the Galaxy ecosystem," the company said in a Tuesday press release.

Update on predictive AI

An artificial intelligence (AI) model has been developed to predict early mortality, among other things, but as interest in the app has risen, the founders have begun issuing stern warnings. The fortune-telling AI bot, named Life2vec, uses the sequences of life events to predict the future. This is helped by several years’ worth of registry datasets from Denmark, where the business originates from. Deep learning models analyze information related to health, education, income, occupation, address, and working hours to predict everything from major life events to personality nuances.

Streamlining AI development

Neural networks, regardless of their complexity or training method, follow a surprisingly uniform path from ignorance to expertise in image classification tasks. Researchers found that neural networks classify images by identifying the same low-dimensional features, such as ears or eyes, debunking the assumption that network learning methods are vastly different. This finding could pave the way for developing more efficient AI training algorithms, potentially reducing the significant computational resources currently required. The research, grounded in information geometry, hints at a more streamlined future for AI development, where understanding the common learning path of neural networks could lead to cheaper and faster training methods.

National Grid

Some studies have warned that the AI industry alone could consume as much energy as a country the size of the Netherlands by 2027. The sector has been enjoying an economic boom after the launch of chatbots like OpenAI's ChatGPT and image-making tools such as Midjourney. Concern over their energy use and demands on infrastructure has increased in recent years. Official data showed that in the Republic of Ireland, which is home to the European headquarters of several big tech firms such as Google and Facebook-parent Meta, data centres accounted for nearly a fifth of all electricity used in 2022.

AI PCs in the works

As part of its developer program, Intel is also offering an "AI PC development kit" centered on an Asus NUC Pro 14, a mini PC built around Intel's Meteor Lake silicon. Unfortunately for Intel, the first NPU suitable for powering Copilot locally may come from Qualcomm. The company's upcoming Snapdragon X processors, long seen as the Windows ecosystem's answer to Apple's M-series Mac chips, promise up to 45 TOPS. Rumors suggest that Microsoft will shift the consumer version of its Surface tablet to Qualcomm's chips after a few years of offering both Intel and Qualcomm options; Microsoft announced a Surface Pro update with Intel's Meteor Lake chips last week but is only selling it to businesses.

AI PCs

Intel executives, in a question-and-answer session with Tom's Hardware, said that Copilot will soon run locally on PCs. Company representatives also mentioned a 40 TOPS requirement for NPUs on next-gen AI PCs. Microsoft has been largely silent about its plans for AI PCs and even allowed Intel to officially announce Microsoft's new definition of an AI PC. Microsoft’s and Intel’s new co-developed definition states that an AI PC will have an NPU, CPU, GPU, Microsoft’s Copilot, and a physical Copilot key directly on the keyboard. PCs meeting those requirements are already shipping, but that is just the first wave of the AI PC initiative. Intel divulged future AI PC requirements in response to my questions about potential memory criteria.

Belgian continuous improvement

Researchers say they have harnessed the power of artificial intelligence to make brews even better. Prof Kevin Verstrepen, of KU Leuven university, who led the research, said AI could help tease apart the complex relationships involved in human aroma perception. “Beer – like most food products – contains hundreds of different aroma molecules that get picked up by our tongue and nose, and our brain then integrates these into one picture. However, the compounds interact with each other, so how we perceive one depends also on the concentrations of the others,” he said. Writing in the journal Nature Communications, Verstrepen and his colleagues report how they analysed the chemical makeup of 250 commercial Belgian beers of 22 different styles including lagers, fruit beers, blonds, West Flanders ales, and non-alcoholic beers.

Mind games we play 🤯

The first human recipient of a Neuralink brain implant has shared new details on his recovery and experience of living with the experimental assistive tech, which has allowed him a greater level of freedom and autonomy, including the ability to pull an all-nighter playing Sid Meier's Civilization 6.

SGE

Google's new AI-powered 'Search Generative Experience' algorithms recommend scam sites that redirect visitors to unwanted Chrome extensions, fake iPhone giveaways, browser spam subscriptions, and tech support scams. Earlier this month, Google began rolling out a new feature called Google Search Generative Experience (SGE) in its search results, which provides AI-generated quick summaries for search queries, including recommendations for other sites to visit related to the query.

Alexandra Gibson

"Based on our recent research experience, we believe theoretically-driven, qualitative social research is best equipped to detect and protect against AI interference. "There are additional implications for research. The threat of AI as an unwanted participant means researchers will have to work longer or harder to spot imposter participants. "Academic institutions need to start developing policies and practices to reduce the burden on individual researchers trying to carry out research in the changing AI environment."

Creation tale?

The Dartmouth Summer Research Project on Artificial Intelligence, held from 18 June through 17 August of 1956, is widely considered the event that kicked off AI as a research discipline. Organized by John McCarthy, Marvin Minsky, Claude Shannon, and Nathaniel Rochester, it brought together a few dozen of the leading thinkers in AI, computer science, and information theory to map out future paths for investigation. A group photo [shown here] captured seven of the main participants. When the photo was reprinted in Eliza Strickland’s October 2021 article “The Turbulent Past and Uncertain Future of Artificial Intelligence” in IEEE Spectrum, the caption identified six people, plus one “unknown.”

San Jose's AI searches the unhoused

For the last several months, a city at the heart of Silicon Valley has been training artificial intelligence to recognize tents and cars with people living inside in what experts believe is the first experiment of its kind in the United States. Last July, San Jose issued an open invitation to technology companies to mount cameras on a municipal vehicle that began periodically driving through the city’s district 10 in December, collecting footage of the streets and public spaces. The images are fed into computer vision software and used to train the companies’ algorithms to detect the unwanted objects, according to interviews and documents the Guardian obtained through public records requests. Some of the capabilities the pilot project is pursuing – such as identifying potholes and cars parked in bus lanes – are already in place in other cities. But San Jose’s foray into automated surveillance of homelessness is the first of its kind in the country, according to city officials.

Central Casting

In keeping with law enforcement’s grand tradition of taking antiquated, invasive, and oppressive technologies, making them digital, and then calling it innovation, police in the U.S. recently combined two existing dystopian technologies in a brand new way to violate civil liberties. A police force in California recently employed the new practice of taking a DNA sample from a crime scene, running this through a service provided by US company Parabon NanoLabs that guesses what the perpetrator’s face looked like, and plugging this rendered image into face recognition software to build a suspect list. Parts of this process aren't entirely new. On more than one occasion, police forces have been found to have fed images of celebrities into face recognition software to generate suspect lists. In one case from 2017, the New York Police Department decided its suspect looked like Woody Harrelson and ran the actor’s image through the software to generate hits.

Proverb 15: AI

All models are wrong, but some models are useful. —George Box  

ELVIS Act

Tennessee made history on Thursday, becoming the first U.S. state to sign off on legislation to protect musicians from unauthorized artificial intelligence impersonation. "Tennessee (sic) is the music capital of the world, & we're leading the nation with historic protections for TN artists & songwriters against emerging AI technology," Gov. Bill Lee announced on social media. The Ensuring Likeness Voice and Image Security Act, or ELVIS Act, is an updated version of the state's old right of publicity law. While the old law protected an artist's name, photograph or likeness, the new legislation includes AI-specific protections.

Atomic bytes

The land surrounding a nuclear power plant might not sound like prime real estate, but as more bit barns seek to trim costs, it's poised to become a rather hot commodity. All datacenters are energy-hungry but with more watt-greedy AI workloads on the horizon, nuclear power has fresh appeal, especially for hyperscalers. Such a shift in power also does wonders for greenwashing narratives around net-zero operations. While not technically renewable, nuclear power does have the benefit of being carbon-free, not to mention historically reliable — with a few notable exceptions of course. All of these are purported benefits cited by startup NE Edge, which has been fighting for more than a year to be able to build a pair of AI datacenters adjacent to a 2GW Millstone nuclear power plant in Waterford, Connecticut.

OneAPI

A coalition of tech companies that includes Qualcomm, Google, and Intel plans to loosen Nvidia’s chokehold by going after the chip giant’s secret weapon: the software that keeps developers tied to Nvidia chips. They are part of an expanding group of financiers and companies hacking away at Nvidia's dominance in AI. "We're actually showing developers how you migrate out from an Nvidia platform," Vinesh Sukumar, Qualcomm's head of AI and machine learning, said in an interview with Reuters.

ELIZA effect

The tendency to invest private feelings in a computer puzzled and concerned Weizenbaum, who worried that people’s internal reality might be replaced by that of the machine. Weizenbaum was also concerned by the extent to which computers “induce powerful delusional thinking in quite normal people” and strengthened notions of human beings as machines, by which rationality became associated with calculation. This became known as the “ELIZA effect,” the propensity for humans to ascribe understanding and intelligence to computer systems. Hofstadter (1995, p. 167) described it as “the susceptibility of people to read far more understanding than is warranted into strings of symbols – especially words – strung together by computers,” a compelling description written in 1995 that, nonetheless, accurately describes modern generative AI systems (e.g., ChatGPT).
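The "strings of symbols strung together by computers" that triggered all this were remarkably simple: ELIZA matched keywords and reflected the user's own words back as a question. A tiny sketch in that style (these two patterns are an invented subset, not Weizenbaum's original DOCTOR script):

```python
# A minimal ELIZA-style exchange: keyword matching plus pronoun
# reflection, with no understanding at all. The two rules below are an
# invented toy subset, not Weizenbaum's original script.
import re

# Word-level pronoun reflection, the core ELIZA trick.
REFLECT = {"my": "your", "i": "you", "me": "you", "am": "are", "your": "my"}

RULES = [
    (re.compile(r"\bi am (.*)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bi feel (.*)", re.IGNORECASE), "Tell me more about feeling {0}."),
]

def reflect(fragment: str) -> str:
    return " ".join(REFLECT.get(word.lower(), word) for word in fragment.split())

def eliza_reply(text: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(reflect(match.group(1).rstrip(".!?")))
    return "Please go on."  # fallback when no rule matches

print(eliza_reply("I am sad about my job"))
# -> Why do you say you are sad about your job?
```

That a few substitution rules like these were enough to make users confide in the program is exactly the effect Weizenbaum found so unsettling.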

YouTube cracks the whip

YouTube creators will be required to label when realistic-looking videos were made using artificial intelligence, part of a broader effort by the company to be transparent about content that could otherwise confuse or mislead users. When a user uploads a video to the site, they will see a checklist asking if their content makes a real person say or do something they didn’t do, alters footage of a real place or event, or depicts a realistic-looking scene that didn’t actually occur. The disclosure is meant to help prevent users from being confused by synthetic content amid a proliferation of new, consumer-facing generative AI tools that make it quick and easy to create compelling text, images, video and audio that can often be hard to distinguish from the real thing. Online safety experts have raised alarms that the proliferation of AI-generated content could confuse and mislead users across the internet.

Emad Mostaque

Stability AI founder and chief executive Emad Mostaque has stepped down from the top role and from the unicorn startup’s board, the buzzy firm said Friday night, making it the second hot AI startup to go through major changes this week. Stability AI, which has been backed by such investors as Lightspeed Venture Partners and Coatue Management, doesn’t have an immediate permanent replacement for the CEO role but has appointed its COO Shan Shan Wong and CTO Christian Laforte as interim co-CEOs, it said in a blog post. Stability AI, which has lost more than half a dozen key staff in recent quarters, said Mostaque is stepping down to pursue decentralized AI. In a series of posts on X, Mostaque opined that one can’t beat “centralized AI” with more “centralized AI,” referring to the ownership structure of top AI startups such as OpenAI and Anthropic.

Gefion

The owner of Novo Nordisk, the drugmaker that gave the world Ozempic and Wegovy, is funding a new supercomputer powered by Nvidia’s artificial intelligence technology with a key aim of discovering new medicines and treatments. The Novo Nordisk Foundation has awarded France’s Eviden a contract to build what the computing company says will be one of the world’s most powerful supercomputers, able to process vast amounts of data using AI. It should provide “unprecedented potential to accelerate groundbreaking scientific discoveries in areas such as drug discovery, disease diagnosis and treatment,” Cédric Bourrasset, Eviden’s head of quantum computing, said in a statement. The supercomputer is expected to be ready for pilot projects before the end of the year and will be housed in Denmark’s national center for AI innovation. Named Gefion, the supercomputer will be available for use by researchers from Denmark’s public and private sectors.

Nozick intuited which pill we would take in a Matrix

In 2016, [Frank] Hindriks and Igor Douven of Sorbonne University in France attempted to verify that intuition by surveying people's responses to the original thought experiment. They also asked whether participants would take an "experience pill" that operates similarly to the machine but allows the user to remain in the world, and a functioning pill that enhances the user's capabilities but not their perception of reality. "Our first major finding was that people actually do respond in this way, by and large," Hindriks confirms. "Overall, people are rather reluctant to go along with this scenario where they would be hooked up to an experience machine." In their study, about 70% of participants rejected the experience machine, as originally constructed by [Robert] Nozick.

Reviewing the AIs

TechCrunch will 'benchmark' AI systems (so you won't have to). Disclosure: "These systems are too general and are updated too frequently for evaluation frameworks to stay relevant, and synthetic benchmarks provide only an abstract view of certain well-defined capabilities. "Companies like Google and OpenAI are counting on this because it means consumers have no source of truth other than those companies’ own claims. "So even though our own reviews will necessarily be limited and inconsistent, a qualitative analysis of these systems has intrinsic value simply as a real-world counterweight to industry hype."

Facebook's AI art fascinates the aged

AI-made art isn’t evident to everyone: older users—generally those in Generation X and above—seem to be falling for these visuals en masse on social media. Recently, Facebook’s algorithm has been pushing wacky AI images onto users’ feeds to sell products and amass followings, according to a preprint paper posted on March 18 by researchers at Stanford University and Georgetown University. Take a look at the comment section of any of these AI-generated photos and you’ll find them filled with older users commenting that they’re “beautiful” or “amazing,” often adorning these posts with heart and prayer emojis. Why do older adults not only fall for these pages—but seem to enjoy them?

Transformer Eight

Approaching its seventh anniversary, the “Attention” paper has attained legendary status. The authors started with a thriving and improving technology—a variety of AI called neural networks—and made it into something else: a digital system so powerful that its output can feel like the product of an alien intelligence. Called transformers, this architecture is the not-so-secret sauce behind all those mind-blowing AI products, including ChatGPT and graphic generators such as Dall-E and Midjourney. [Noam] Shazeer now jokes that if he knew how famous the paper would become, he “might have worried more about the author order.” All eight of the signers are now microcelebrities (sic). “I have people asking me for selfies—because I’m on a paper!” says Llion Jones, who is (randomly, of course) name number five.

Love Letter Generator

In 1952, Christopher Strachey wrote a combinatory algorithm for the Manchester Mark 1 computer which could create love letters. The poems it generated have been seen as the first work of electronic literature and a queer critique of heteronormative expressions of love. Alan Turing's biographer Andrew Hodges dates the creation of the love letter generator, also known as M.U.C., to the summer of 1952, when Strachey was working with Turing, although [Jacob] Gaboury dates its creation to 1953. Hodges writes that while many of their colleagues thought M.U.C. silly, “It greatly amused Alan and Christopher Strachey – whose love lives, as it happened, were rather similar too.” Strachey was known to be gay. Although this appears to be the first work of computer-generated literature, the structure is similar to the nineteenth-century parlour game 'Consequences,' and the early twentieth-century surrealist game, exquisite corpse. The Mad Libs books were conceived around the same time as Strachey's generator.
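Strachey's "combinatory" method is simple enough to sketch in a few lines: fixed sentence templates with slots filled from word lists chosen at random. The Python below is an illustrative reconstruction, not Strachey's original code, and its short word lists are stand-ins for the much longer lists he reportedly drew from Roget's Thesaurus:

```python
import random

# Abbreviated, illustrative word lists; Strachey's were far longer.
ADJECTIVES = ["affectionate", "tender", "loving", "precious", "eager"]
NOUNS = ["desire", "devotion", "fancy", "heart", "longing"]
ADVERBS = ["keenly", "wistfully", "tenderly", "passionately"]
VERBS = ["adores", "cherishes", "treasures", "yearns for"]

def love_letter(seed=None):
    """Generate a five-sentence letter, Consequences/Mad Libs style."""
    rng = random.Random(seed)
    salutation = f"{rng.choice(ADJECTIVES).title()} {rng.choice(NOUNS).title()},"
    body = []
    for _ in range(5):
        # Pick randomly between two fixed templates and fill the slots --
        # the whole 'combinatory algorithm' in miniature.
        if rng.random() < 0.5:
            body.append(f"My {rng.choice(ADJECTIVES)} {rng.choice(NOUNS)} "
                        f"{rng.choice(ADVERBS)} {rng.choice(VERBS)} "
                        f"your {rng.choice(ADJECTIVES)} {rng.choice(NOUNS)}.")
        else:
            body.append(f"You are my {rng.choice(ADJECTIVES)} {rng.choice(NOUNS)}.")
    # The real program signed its letters "M.U.C." (Manchester University Computer).
    return "\n".join([salutation, *body, f"Yours {rng.choice(ADVERBS)},", "M.U.C."])

print(love_letter(seed=2024))
```

The fill-in-the-blank structure makes the kinship with 'Consequences' and Mad Libs concrete: all the apparent sentiment comes from random slot-filling over a tiny grammar.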

Matthew Guariglia

"Machine learning and algorithmic systems are useful tools whose potential we are only just beginning to grapple with—but we have to understand what these technologies are and what they are not. They are neither 'artificial' nor 'intelligent'—they do not represent an alternate and spontaneously-occurring way of knowing independent of the human mind. "People build these systems and train them to get a desired outcome. Even when outcomes from AI are unexpected, usually one can find their origins somewhere in the data systems they were trained on. Understanding this will go a long way toward responsibly shaping how and when AI is deployed, especially in a defense contract, and will hopefully alleviate some of our collective sci-fi panic. "This doesn’t mean that people won’t weaponize AI—and already are in the form of political disinformation or realistic impersonation. "But the solution to that is not to outlaw AI entirely, nor is it hand…

AI Character Generator

Just like any generative AI, an AI character generator uses prompts and machine learning to create a fresh image. Most apps use text-based prompts, taking information from the user and generating an image from that information. Some generative AI tools will also use image-based prompts, where users upload several variations of what they want and the AI tool combines them into a new image. This can be especially helpful for generating a stylized version of yourself, as you can add photos and ask for them to be remade in a pixel, graphic, or virtually any other style. While all generative AI tools operate on the same principles, not all apps use the same AI algorithms. That means different tools can produce different results, so it’s important to pick the one that fits your needs and preferences.

Ten Best AI Character Generating Apps

Mariella Moon

"If you're in the US, you might see a new shaded section at the top of your Google Search results with a summary answering your inquiry, along with links for more information. "That section, generated by Google's generative AI technology, used to appear only if you've opted into the Search Generative Experience (SGE) in the Search Labs platform. "Now, according to Search Engine Land, Google has started adding the experience on a 'subset of queries, on a small percentage of search traffic in the US.' "And that is why you could be getting Google's experimental AI-generated section even if you haven't switched it on."

Christopher Mims

"In experimenting with AI, my aim was to get a handle on the impact it will have on the 100 million 'knowledge workers' in the U.S.—not to mention 900 million elsewhere in the world. "That commitment included the research and writing of this column, which, for better or worse, would likely have taken a significantly different form without the help of AI. "I didn’t use AI to write any of the words you’re reading now, but it did shape my thinking."

Series Mania

AI spending is expected to crest above $13 billion by 2028, with the spread falling fairly evenly across analytics, development/delivery and customer experiences like personalization and discovery, media analysts announced at a Series Mania presentation on Thursday. However, the analysts do not anticipate the content-creation apocalypse that has underscored much AI coverage of late. Leading off a daylong series of panels that confronted those two troubling vowels on everyone’s mind from a panoply of industry perspectives, research directors from Omdia and Plum Research instead sought to give context – to assuage fears and misconceptions by framing machine learning more as a tool than as a weapon.

Replicator

The Pentagon is taking an “if you can’t beat them, join them” approach, launching an ambitious plan called Replicator to build thousands of cheap, replaceable (or attritable, in the Pentagon’s lexicon) drones, all in anticipation of a potential superpower conflict with China. Fielding fleets of drones at this scale is also likely to speed up the military’s adoption of artificial intelligence. “The only way that thousands of drones work is if you have some measure of autonomy in the drones,” said Paul Scharre, a former Defense Department official now with the Center for a New American Security (CNAS). “Because if you have thousands of systems of control, then you would need thousands of people operating them, and that’s a big personnel cost for the military.” Both sides in the Ukraine war claim to be using artificial intelligence to improve their drones’ performance. So far, any use has probably been limited, but the war has also accelerated development of these capabilities.

Open Interpreter

Open Interpreter, which started as an open-source implementation of ChatGPT’s code interpreter, is now joining the AI hardware arena. It just released O1, an open-source ecosystem for AI devices. Its long-term goal is to be the Linux of next-generation AI-first devices, and the first iteration of that vision is the O1 Light. It is a small device that you talk to; it thinks and does tasks on your computer. It can send texts, edit files, access the web, etc., but most importantly, learn new tasks.

UN landmark resolution

The UN General Assembly has adopted a landmark resolution on AI, aiming to promote the safe and ethical development of AI technologies worldwide . The resolution, co-sponsored by over 120 countries, was adopted unanimously by all 193 UN member states on 21 March. This marks the first time the UN has established global standards and guidelines for AI. The eight-page resolution calls for the development of “safe, secure, and trustworthy” AI systems that respect human rights and fundamental freedoms. It urges member states and stakeholders to refrain from deploying AI inconsistent with international human rights laws.

Chain-of-thought reasoning

Several teams have explored the power of chain-of-thought reasoning by using techniques from an arcane branch of theoretical computer science called computational complexity theory. It’s the latest chapter in a line of research that uses complexity theory to study the intrinsic capabilities and limitations of language models. These efforts clarify where we should expect models to fail, and they might point toward new approaches to building them. “They remove some of the magic,” said Dimitris Papailiopoulos, a machine learning researcher at the University of Wisconsin, Madison. “That’s a good thing.”

Saud + AI

The government of Saudi Arabia plans to create a fund of about $40 billion to invest in artificial intelligence, according to three people briefed on the plans — the latest sign of the gold rush toward a technology that has already begun reshaping how people live and work. In recent weeks, representatives of Saudi Arabia’s Public Investment Fund have discussed a potential partnership with Andreessen Horowitz, one of Silicon Valley’s top venture capital firms, and other financiers, said the people, who were not authorized to speak publicly. They cautioned that the plans could still change. The planned tech fund would make Saudi Arabia the world’s largest investor in artificial intelligence. It would also showcase the oil-rich nation’s global business ambitions as well as its efforts to diversify its economy and establish itself as a more influential player in geopolitics. The Middle Eastern nation is pursuing those goals through its sovereign wealth fund, which has assets of more than $…

Homo digitalis

AI will mesh human life with the digital world to create what [Toby] Walsh has called “Homo Digitalis” — a digitally enhanced human that lives between the real world and online spaces, with augmented and virtual reality becoming more and more a part of our daily experience. “The distinction between the two will be increasingly blurred,” says Walsh, author of 2062: The World That AI Made. “We will interact with people that we will never meet physically and spend more of our time in these connected places and outsource a lot of what we do to our digital devices.” [Ray] Kurzweil also predicts the merging of AI and humanity and he has a date for that too: 2045.

Vernor Vinge

The singularity concept postulates that AI will soon become superintelligent (sic), far surpassing humans in capability and bringing the human-dominated era to a close. While the concept of a tech singularity sometimes inspires negativity and fear, [Vernor] Vinge remained optimistic about humanity's technological future, as [David] Brin notes in his tribute: "Accused by some of a grievous sin—that of 'optimism'—Vernor gave us peerless legends that often depicted human success at overcoming problems... those right in front of us... while posing new ones! New dilemmas that may lie just ahead of our myopic gaze. He would often ask: 'What if we succeed? Do you think that will be the end of it?'" Vinge's concept heavily influenced futurist Ray Kurzweil, who has written about the singularity several times at length in books such as The Singularity Is Near in 2005. In a 2005 interview with the Center for Responsible Nanotechnology website, Kurzweil said…

Paul Ekwere

"Diving into the labyrinth of artificial intelligence’s history and its probable future is akin to embarking on a time-traveling escapade, where the line between science fiction and reality blurs faster than a quantum computer solving a Rubik’s cube. "Imagine, if you will, a world where machines not only perform tasks but learn, adapt, and evolve.  "A world where your toaster might one day outsmart you at chess, and your vacuum cleaner could pen a sonnet to rival Shakespeare. "Welcome, dear reader, to the thrilling, terrifying, and utterly captivating world of artificial intelligence."

Forestry

South Korea's Forest Service announced on Wednesday it plans to establish a real-time forest resource management system and an AI-based forest fire monitoring platform. The resource management system will rely on agricultural and forestry satellites. The country plans to establish a "National Forest Satellite Information Utilization Center" in July to utilize satellite data. The ministry promised that when combined, the satellite data and AI technology will be able to predict when trees and plants flower, and quickly assess damage caused by natural disasters.

Noland Arbaugh

Elon Musk's brain-chip startup Neuralink livestreamed on Wednesday its first patient implanted with a chip using his mind to play online chess. Noland Arbaugh, the 29-year-old patient who was paralyzed below the shoulder after a diving accident, played chess on his laptop and moved the cursor using the Neuralink device. The implant seeks to enable people to control a computer cursor or keyboard using only their thoughts. Arbaugh had received an implant from the company in January and could control a computer mouse using his thoughts, Musk said last month. "The surgery was super easy," Arbaugh said in the video streamed on Musk's social media platform X, referring to the implant procedure. "I literally was released from the hospital a day later. I have no cognitive impairments."

Microsoft AI (consumer facing products)

Microsoft has hired Mustafa Suleyman, the co-founder of Google’s DeepMind and chief executive of artificial intelligence start-up Inflection, to run a new consumer AI unit. Suleyman, a British entrepreneur who co-founded DeepMind in London in 2010, will report to Microsoft chief executive Satya Nadella, the company announced on Tuesday. He will launch a division of Microsoft that brings consumer-facing products including Microsoft’s Copilot, Bing, Edge and GenAI under one team called Microsoft AI. It is the latest move by Microsoft to capitalise on the boom in generative AI. It has invested $13bn in OpenAI, the maker of ChatGPT, and rapidly integrated its technology into Microsoft products.

Shrimp Jesus

"Facebook’s recommendation algorithms are promoting bizarre, AI-generated images being posted by spammers and scammers to an audience of people who mindlessly interact with them and perhaps don’t understand that they are not real, a new analysis by Stanford and Georgetown University researchers has found. "The researchers’ analysis aligns with what I have seen and experienced over the course of months of researching and reporting on these pages, many of which have found a novel way to link to off-platform, AI-generated 'news' sites that are littered with Google ads or which are selling low-quality products. "Last week the world was introduced to Shrimp Jesus, a series of AI-generated images in which Jesus is melded with a crustacean, and which have repeatedly gone viral on Facebook. "The images are emblematic of a specific type of AI image being used by spammers and scammers..."

Your Fitbit

"Google’s recent announcement about integrating Gemini (its AI model) into Fitbit devices sends a shudder up my arm. "Whatever 'personalized health insights' they provide won’t be as useful as you might hope. The private, confidential health data their AI consumes to tell me 'Stopping at the donut shop once a day' will be worth far more than the benefit. "Still, AI features like this could coach folks like me after analyzing my health data. This is an embryonic first step for AI, so to speak, and who knows what new action steps I could take with an AI’s insight."

OpenAI sorta open about training data

"It's deeply concerning that the Chief Technology Officer of the most 'important' AI company in the world can't answer a very basic question about training data. "Even when asked about training on data from Shutterstock, a company that OpenAI has a partnership with, Murati stammered, shook her head, and said that she would not 'go into the details of the data that was used, but it was publicly available or licensed data'... "Shortly after the interview, Stern adds, OpenAI shared that Shutterstock's data was used to train Sora's models."

AI a hot prospect

Even before Blackwell's debut, datacenter operators were already feeling the heat associated with supporting massive clusters of Nvidia's 700W H100. With twice the silicon filling out Nvidia's latest GPU, it should come as some surprise that it runs only a little hotter — or at least it can, given the ideal operating conditions. With the B100, B200, and GB200, the key differentiator comes down to power and performance rather than memory configuration. According to Nvidia, the silicon can actually operate between 700W and 1,200W, depending on the SKU and type of cooling used.
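The "runs only a little hotter" claim is easy to check with arithmetic: each Blackwell package pairs two dies, so even at the top of the 700–1,200W range the per-die draw stays below a single 700W H100. A quick sketch — the per-SKU wattages here are commonly reported press figures used as assumptions, since the article only gives the overall range:

```python
H100_W = 700          # single-die Hopper package power, per the article
BLACKWELL_DIES = 2    # Blackwell pairs two dies in one package

# Assumed package power per SKU (press reporting, not official spec sheets).
SKU_PACKAGE_W = {"B100": 700, "B200": 1000, "GB200 (per GPU)": 1200}

# Per-die power for each SKU.
per_die_w = {sku: w / BLACKWELL_DIES for sku, w in SKU_PACKAGE_W.items()}

for sku, w in SKU_PACKAGE_W.items():
    print(f"{sku}: {w} W package -> {per_die_w[sku]:.0f} W per die "
          f"({per_die_w[sku] / H100_W:.0%} of an H100)")
```

Under these assumptions, even the hottest-running GB200 configuration works out to 600W per die — less than one H100 — which is why doubling the silicon doesn't double the thermal problem per unit of compute.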