
Showing posts from May, 2024

Tribeca Film Festival

The upcoming Tribeca Film Festival is poised to break new ground with the debut of five short films created using OpenAI’s Sora model. This marks the first time the festival will showcase films generated through artificial intelligence, and the buzz surrounding this program is palpable. Sora is a revolutionary text-to-video model developed by OpenAI. It allows creators to generate video clips based on textual descriptions. This technology opens a treasure trove of possibilities for filmmakers, enabling them to bring their visions to life in ways never before imaginable. Jane Rosenthal, co-founder and CEO of Tribeca Enterprises, emphasizes the importance of storytelling in driving change. She highlights the diverse nature of storytelling, encompassing feature films, immersive experiences, and even AI-generated shorts. Her excitement for the creativity that will emerge from this collaboration with Tribeca alumni is infectious. This program, a joint effort by Tribeca Festival and OpenAI…

Llama becomes scam's ally

"You absolutely should not trust AI. It is a very powerful tool that makes mistakes," [David] Gerhard, who has studied artificial intelligence and social media, said in an interview. He likened it to "a hyper-intelligent eight-year-old that desperately wants to please you. It knows everything about everything, but it will give you a wrong answer rather than say, 'I don't know.'" The AI software that Meta uses in the Messenger app — where [Dave] Gaudreau went to confirm the fake support number — is called Llama 3. Like the better-known ChatGPT, it's what's known as a "large language model," Gerhard said. "Llama and other large language models are trained on everything. They're trained on all of the internet. And the problem there is that there's a lot of bad information on the internet," he said — which also means they don't have an understanding of truth. "They don't know what truth is or how to get to it,

Prisoner’s dilemma

Anthropic employees trade in metaphors: brain scanners, “grown” neural networks, races to both the top and the bottom. Amodei offers one more, comparing his decision not to release Claude in 2022 to the prisoner’s dilemma. In this famous game-theory experiment, two prisoners face a choice: betray the other for a chance at freedom, or stay silent and cooperate for a reduced sentence. If both betray, they each fare worse than if they’d cooperated. It’s a situation where individual incentives lead to worse collective outcomes—a dynamic Amodei sees playing out in the AI industry today. Companies taking risks are rewarded by the market, while responsible actions are punished. “I don’t want us to be in this impossible prisoner’s dilemma,” Amodei says. “I want to change the ecosystem so there is no prisoner’s dilemma, and everyone’s incentivized to do the right thing.”
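
To make the incentive structure concrete, here is a minimal Python sketch of the dilemma with hypothetical payoffs; the numbers are illustrative only, not anything Amodei cited.

    # Illustrative prisoner's-dilemma payoffs (years of sentence; lower is better).
    # The numbers are hypothetical, chosen only to exhibit the structure.
    PAYOFFS = {
        # (my move, opponent's move): (my sentence, opponent's sentence)
        ("cooperate", "cooperate"): (1, 1),    # both stay silent
        ("betray", "cooperate"): (0, 10),      # I go free, the other pays
        ("cooperate", "betray"): (10, 0),
        ("betray", "betray"): (5, 5),          # mutual betrayal: worse than mutual silence
    }

    def best_response(opponent_move):
        """Return the move that minimizes my own sentence, given the opponent's move."""
        return min(("cooperate", "betray"),
                   key=lambda my_move: PAYOFFS[(my_move, opponent_move)][0])

    for opp in ("cooperate", "betray"):
        print(f"If the other prisoner plays {opp}, my best response is {best_response(opp)}")
    # Betrayal is the best response either way, yet (betray, betray) costs (5, 5),
    # worse for both than (cooperate, cooperate) at (1, 1).

Swap "ship the risky model" for "betray" and the market dynamic Amodei describes has the same shape: the individually rational move leaves everyone worse off.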

More tools

As AI-generated images spread across entertainment, marketing, social media and other industries that shape cultural norms, The Washington Post set out to understand how this technology defines one of society’s most indelible standards: female beauty. Using dozens of prompts on three of the leading image tools — Midjourney, DALL-E and Stable Diffusion — The Post found that they steer users toward a startlingly narrow vision of attractiveness. Prompted to show a “beautiful woman,” all three tools generated thin women, without exception. Just 2 percent of the images showed visible signs of aging. More than a third of the images had medium skin tones, but only 9 percent had dark skin tones.

Tool (re)

Google announced on Thursday that it would refine and retool its summaries of search results generated by artificial intelligence, posting a blog post explaining why the feature was returning bizarre and inaccurate answers that included telling people to eat rocks or add glue to pizza sauce. The company will reduce the scope of searches that will return an AI-written summary. Google has added several restrictions on the types of searches that would generate AI Overview results, the company’s head of search, Liz Reid, said, as well as “limited the inclusion of satire and humor content”. The company is also taking action against what it described as a small number of AI Overviews that violate its content policies, which it said occurred in fewer than 1 in 7m unique search queries where the feature appeared. The AI Overviews feature, which Google released in the US this month, quickly produced viral examples of the tool misinterpreting information and appearing to use satirical sources…

Perplexity Pages

With Perplexity Pages, the unicorn is aiming to help users make reports, articles or guides in a visually appealing format. Free and paid users can find the option to create a page in the library section. They just need to enter a prompt, such as “Information about Sahara Desert,” for the tool to start creating a page. Users can select an audience type — beginner, advanced or anyone — to shape the tone of the generated text. Perplexity said its algorithms work to create a detailed article with different sections. You can ask the AI tool to rewrite or reformat any sections or even remove them. Plus, you can add a section by prompting the tool to write about a certain subtopic. Perplexity also helps you find and insert relevant media items such as images and videos. All these pages are publishable and also searchable through Google. You can share the link to these pages with other users. They can ask follow-up questions on the topic as well. What’s more, users can also turn their existing…

Protect musicians

Artificial Intelligence is one of the buzziest — and most rapidly changing — areas of the music business today. A year after the fake-Drake song signaled the technology’s potential applications (and dangers), industry lobbyists on Capitol Hill, like RIAA’s Tom Clees, are working to create guard rails to protect musicians — and maybe even get them paid. Meanwhile, entrepreneurs like Soundful’s Diaa El All and BandLab’s Meng Ru Kuok (who oversees the platform as founder and CEO of its parent company, Caldecott Music Group) are showing naysayers that AI can enhance human creativity rather than just replacing it. Technology and policy experts alike have promoted the use of ethical training data and partnered with groups like Fairly Trained and the Human Artistry Campaign to set a positive example for other entrants into the AI realm.

Foreign influence operations

OpenAI said Thursday that it has seen several foreign influence campaigns tap the power of its AI models to help generate and translate content, but has yet to see novel attacks enabled through its tools. Supercharging misinformation efforts has been seen as a key risk associated with generative AI, though it has been an open question just how the tools would be used and by whom. OpenAI said in a new report that it has seen its tools used by several existing foreign influence operations, including efforts based in Russia, China, Iran and Israel.

Target Speech Hearing

Noise-canceling headphones have gotten very good at creating an auditory blank slate. But allowing certain sounds from a wearer’s environment through the erasure still challenges researchers. The latest edition of Apple’s AirPods Pro, for instance, automatically adjusts sound levels for wearers — sensing when they’re in conversation, for instance — but the user has little control over whom to listen to or when this happens. A University of Washington team has developed an artificial intelligence system that lets a user wearing headphones look at a person speaking for three to five seconds to “enroll” them. The system, called “Target Speech Hearing,” then cancels all other sounds in the environment and plays just the enrolled speaker’s voice in real time even as the listener moves around in noisy places and no longer faces the speaker.

Protein Folding and the Limits of Critical AI Studies

"The successful deployment of transformers in protein folding, we argue, discloses what we consider a non-linguistic approach to token processing intrinsic to the architecture .  "We contend that through this non-linguistic processing, the transformer architecture carves out unique epistemological territory and produces a new class of knowledge, distinct from established domains. "We contend that our search for intelligent machines has to begin with the shape, rather than the place, of intelligence.  "Consequently, the emerging field of critical AI studies should take methodological inspiration from the history of science in its quest to conceptualize the contributions of artificial intelligence to knowledge-making, within and beyond the domain-specific sciences." 

Codestral

Mistral, the French AI startup backed by Microsoft and valued at $6 billion, has released its first generative AI model for coding, dubbed Codestral. Codestral, like other code-generating models, is designed to help developers write and interact with code. It was trained on over 80 programming languages, including Python, Java, C++ and JavaScript, explains Mistral in a blog post. Codestral can complete coding functions, write tests and “fill in” partial code, as well as answer questions about a codebase in English. Mistral describes the model as “open,” but that’s up for debate. The startup’s license prohibits the use of Codestral and its outputs for any commercial activities. There’s a carve-out for “development,” but even that has caveats: the license goes on to explicitly ban “any internal usage by employees in the context of the company’s business activities.” The reason could be that Codestral was trained partly on copyrighted content. Mistral didn’t confirm or deny this in the blog post…

Drug development

Identifying and accelerating drug development is big business. The costs in this industry are significant, and finding pathways to optimize using AI methods is top of mind in this fast-evolving industry. Deloitte found that the average cost of developing a new drug among the top 20 global biopharmas it studied rose 15% ($298 million) last year, to approximately $2.3 billion. That figure includes the average cost of developing a candidate from discovery through clinical trials to the market. Many biopharmaceutical companies are using AI to speed up drug development. For example, machine-learning models are trained using information about the protein or amino-acid sequence or 3D structure of previous drug candidates, and about properties of interest.
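
As a rough illustration of the kind of model described, here is a minimal Python sketch; the sequences, property values, and composition-based featurization are all hypothetical stand-ins for real assay data and real descriptors, with scikit-learn assumed available.

    # Minimal sketch of property prediction from amino-acid sequences.
    # Sequences, property values, and the composition featurization are
    # hypothetical stand-ins for real assay data and real descriptors.
    from collections import Counter
    from sklearn.ensemble import RandomForestRegressor

    AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

    def featurize(seq):
        """Represent a sequence by its amino-acid composition (20 fractions)."""
        counts = Counter(seq)
        return [counts.get(aa, 0) / len(seq) for aa in AMINO_ACIDS]

    # Hypothetical training set: (sequence, measured property of interest).
    train = [("MKTAYIAKQR", 0.42), ("GAVLIMFWPS", 0.15),
             ("DERKHQNSTY", 0.88), ("MKGAVDERLI", 0.51)]

    X = [featurize(seq) for seq, _ in train]
    y = [prop for _, prop in train]

    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
    print(model.predict([featurize("MKTAYGAVDE")]))  # property estimate for a new candidate

Real pipelines use far richer structural features and orders of magnitude more data, but the train-on-past-candidates, score-new-candidates loop is the same.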

AI in hiring

The darker side to using AI in hiring is that it can bypass potential candidates based on predetermined criteria that don’t necessarily take all of a candidate’s skills into account. And for job seekers, the technology can generate great-looking resumes, but often they’re not completely truthful when it comes to skill sets. “The use of AI in talent acquisition is particularly prevalent in tech hubs and innovative industries where the demand for skilled professionals is high,” said Rick Hammell, founder and CEO of Helios, a workforce management platform startup. “The benefits and potential problems associated with using AI to find talent have global implications.”

Neural engineering

Scientists at the University of California, San Francisco have developed a bilingual brain implant that uses artificial intelligence to help a stroke survivor communicate in Spanish and English for the first time. Nearly a dozen scientists from the university’s Center for Neural Engineering and Prostheses have worked for several years to design a decoding system that could turn the man's brain activity into sentences in both languages and display them on a screen. The study ultimately shows "the feasibility of a bilingual speech neuroprosthesis," or bilingual brain implant, and provides a glimpse into how this type of technology has the "potential to restore more natural communication" among bilingual speakers with paralysis…

Insurers cut costs 🩹

Health insurers are telling shareholders that they are ramping up the use of artificial intelligence and are hiring talent to implement the technology across their organizations. They say their artificial intelligence models can increase efficiency and “cut costs,” but they refused to discuss what models they’re using, how those models were developed, or exactly what they’re using them for. Centene, for instance, says it is “investing in artificial intelligence and machine learning technologies to improve the health of our members and contain rising healthcare costs.” The insurer, whose plans cover more than 28 million Americans, noted that its large footprint, particularly in government-sponsored programs, puts it “in a unique position to use data to develop models that predict a wide range of health outcomes.”

Atlantic succumbs

OpenAI announced deals with Vox Media and The Atlantic on Wednesday, adding to the growing number of news organizations taking money from the artificial-intelligence company in exchange for sharing their content. The deals come a week after OpenAI announced a similar tie-up with News Corp., which is controlled by the Murdoch family and owns the Wall Street Journal and the New York Post. Over the past several months, the AI company has signed deals with publishing companies that together represent more than 70 newspapers, news websites and magazines. As more people use OpenAI’s ChatGPT and other chatbots to find information, AI companies are trying to find ways to get the most up-to-date, helpful and accurate information into their products. AI models still often make up false information, so relying on news content from third parties is a way to increase the trustworthiness of AI answers. News organizations, for their part, are nervous that more people will use AI to get their news…

Helen Toner dishes on Altman's fibs

The larger problem in her mind was that "Sam started lying to other board members in order to try and push me off the board." "It was another example that just, like, really damaged our ability to trust him," she said. At that point, board members had already discussed whether Sam needed to be ousted. Toner said that Altman was always able to downplay "any individual case" and would provide "some kind of, like, innocuous sounding explanation of why it wasn’t a big deal, or misinterpreted or whatever." Over the course of years, however, "all four of us who hired him came to the conclusion that we just couldn’t believe things that Sam was telling us." "And that’s a completely unworkable place to be in as a board, but especially a board that is supposed to be providing independent oversight over the company, not just like, you know, helping the CEO to raise more money."

Malware by AI

A 25-year-old unemployed man from Kawasaki has been arrested for allegedly creating a computer virus by using interactive generative artificial intelligence available online. This is believed to be the first case in the nation related to the creation of viruses using generative AI systems. The Metropolitan Police Department arrested Ryuki Hayashi on Monday on suspicion of making electromagnetic records containing unauthorized commands. He is believed to have used his home computer and smartphone to combine information about creating malware, obtained by giving instructions to several generative AI systems in March last year, according to investigators. The crafted virus was designed to do things such as encrypt data on targeted systems and demand cryptocurrency as ransom. There have been no reports of damage caused by the virus, police said. According to the police, Hayashi admitted to the charge during questioning, saying, “I wanted to make money through ransomware…”

GPT-5

While nothing is yet known officially about what GPT-5 could bring, it’s thought that automation would be a central part of it. Rumors are circulating about improved ability in task delegation and greater accuracy in both paid and free versions. With greater emphasis being placed on the importance of safety in data training, as evidenced by the formation of the [Safety and Security] committee at OpenAI as well, it’s likely that GPT-5 will have in-built safety protocols and frameworks too. Speaking about future models in March, Altman previously refused to be drawn on what the model could entail or even what it would be called, only that it would be ‘smarter’.

Image-based disinformation

New research from Google researchers and several fact-checking organizations has found that most image-based disinformation is now AI-generated, but the way researchers collected their data suggests that the problem is even worse than they claim. The paper, first spotted by the Faked Up newsletter, measures the rise of AI-generated image-based disinformation by looking at what fact checkers at Snopes, PolitiFact, and other sites have claimed were image-based disinformation. Overall, the study looks at a total of 135,838 fact checks which date back to 1995, but the majority of the claims were created after 2016 and the introduction of ClaimReview…

Google Chromebook Plus

Google has announced a suite of new AI-powered features for its full range of Chromebook Plus devices. Google Chromebook Plus users can now have AI assistance on a range of tasks, including writing, ideation, editing, and more. The new features are designed for Chromebook Plus laptops, which start at $350. All Chromebooks will also have new non-AI tools and Google integrations as well. That includes easy set-up with an Android phone, a new built-in view of Google Tasks, seamless GIF screen recording, and a new Game Dashboard.

Paris Marx


ENOIK

ENOIK launched an arbitration court based on artificial intelligence in early 2024. The company received 6.4 million zloty (about 1.5 million euro) in 2014 from EU support funds, with a total investment cost of 8.4 million zloty (about 2 million euro in 2014). The court's website states that, after evidentiary proceedings, the system suggests a preliminary resolution to the arbitrator. The algorithm bases its decision on knowledge acquired from a sample of over half a million cases resolved by common courts. The arbitrator then has the chance to assess both the facts of the case and the algorithm's analysis before issuing a judgment in the case. Anna Cybulko, PhD and expert on Gender Equality Law at the European Equality Law Network, argues that humans often refrain from interfering with the results generated by algorithms due to the so-called overconfidence effect in AI tools: “We tend to use the given data as the binding data.” A second issue stemmed from the difficulty of verifying…

Chinese homework apps

The two most popular AI helpers in the U.S., as of May, are both Chinese-owned. One-year-old Question AI is the brainchild of the founders of Zuoyebang, a popular Chinese homework app that has raised around $3 billion in equity over the past decade. Gauth, on the other hand, was launched by TikTok parent ByteDance in 2019. Since its inception, Question AI has been downloaded six million times across Apple’s App Store and Google Play Store in the U.S., whereas its rival Gauth has amassed twice as many installs since its launch, according to data provided by market research firm SensorTower. (Both are published in the U.S. by Singaporean entities, a common tactic as Chinese tech receives growing scrutiny from the West.) The success of Chinese homework apps is a result of their concerted effort to target the American market in recent years. In 2021, China imposed rules to clamp down on its burgeoning private tutoring sector focused on the country’s public school curriculum. Many services…

Beijing funds chips

China has set up its third planned state-backed investment fund to boost its semiconductor industry, with a registered capital of 344 billion yuan ($47.5 billion), according to a filing with a government-run companies registry. The hundreds of billions of yuan invested in the sector puts into perspective President Xi Jinping's drive to achieve self-sufficiency for China in semiconductors. That commitment has taken on renewed urgency after the U.S. imposed a series of export control measures over the last couple of years, citing fears Beijing could use advanced chips to boost its military capabilities.

Kai-Fu Lee

“We need to encourage people to harness AI and use all the tools so that they can be the best that they can be,” he said. “Also, it’s a great guide to what things they can aspire to and what things are not worth following.” To be sure, Lee still believes that there is something unique about our humanity, saying people have souls while machines never will. “We have compassion and empathy. We have emotions and the ability to love. We have the ability to connect to other people and create trust and win trust.” In fact, more than any technical or business skill, the most important skill to have is being able to earn other people’s trust, which comes from authenticity, teamwork, sharing and having a high emotional quotient, he explained.

Stephen Findeisen

“What really upsets me is not that it’s bad (Rabbit R1), it’s that the core selling point of this product is built on a lie and that lie is the LAM [Rabbit’s ‘large action model’].” “I think the best thing you could say at this point about Jesse [Lyu, founder] and Rabbit is that they vastly over-promised and under-delivered, the worst you could say is (that) this is consumer fraud.” Coffeezilla punctuated his salvo with a claim that he spoke to an employee of Rabbit who, on the condition of anonymity, stated that “LAM as advertised does not exist.” The investigator reached out to the company for a final response, and the comments (shared on the video) included the statements “You are not interested in taking a balanced or objective approach, or in working with us in good faith” and “Rabbit stands behind its product”.

Patricia Reiners

AI might not reinvent the look and feel of tomorrow’s consumer tech, but it could power a reinvention of the software we use on our computers, tablets, and smartphones. “When you open your phone, there’s so much going on. All these apps, all these different notifications. It’s too much. It’s exhausting,” said Reiners. “There’s a lot of research that this shouldn’t be the way we interact with technology.” Reiners believes companies like Apple and Google will attempt to fulfill the promises made by Humane and Rabbit with simplified AI operating systems that predict what users need and automate common tasks. She notes that smartphones would prove easier to use if they presented users with fewer options after they unlock the device. Phones may even replace apps with automations controlled by an on-device AI agent. “The user doesn’t really need apps,” said Reiners. “They have a goal, what they want to do, and want to get it done. So, as designers and people who work in tech, we need to rethink…”

Imaging analysis

Red Hat, Inc., the world's leading provider of open source solutions, today announced that Boston Children’s Hospital is piloting imaging analysis using Red Hat OpenShift for artificial intelligence (AI). The collaboration between Red Hat and Boston Children’s Hospital, one of the leading pediatric hospitals in the nation, is advancing AI adoption in the hospital’s radiology department, with the promise of improving image quality and the speed and accuracy of image interpretation. Before Boston Children’s Hospital began piloting AI in radiology, quantitative measurements had to be done manually, which was a time-consuming task. Other, more complex image analyses were performed completely offline and outside of the clinical workflow. In a field where time is of the essence, the hospital is piloting Red Hat OpenShift via the ChRIS Research Integration Service, a web-based medical image platform. The AI application running in ChRIS on the Red Hat OpenShift foundation has the potential…

ICYMI: Honest Interview


Coming to Open Sauce 🤖


Face Watch

Lindsey Chiswick, director of intelligence for the Met: "It takes less than a second for the technology to create a biometric image of a person's face, assess it against the bespoke watchlist and automatically delete it when there is no match." The BBC spoke to several people approached by the police who confirmed that they had been correctly identified by the system; 192 arrests have been made so far this year as a result of it. But civil liberty groups are worried that its accuracy is yet to be fully established, and point to cases such as Shaun Thompson's. Mr Thompson, who works for youth-advocacy group Streetfathers, didn't think much of it when he walked by a white van near London Bridge in February. Within a few seconds, though, he was approached by police and told he was a wanted man. "That's when I got a nudge on the shoulder, saying at that time I'm wanted."

Meredith Whittaker, Winner of 2024 Helmut Schmidt Future Prize

"The current AI craze is a result of this toxic surveillance business model. It is not due to novel scientific approaches that–like the printing press–fundamentally shifted a paradigm .  "And while new frameworks and architectures have emerged in the intervening decade, this paradigm still holds: it’s the data and the compute that determine who “ wins ” and who loses.  "What we call AI today grew out of this toxic model – and must be understood primarily as a way of marketing the derivatives of mass surveillance and concentrated platform and computational power.  "Currently, there are only a small handful of firms – based in the US and China – who have the resources to create and deploy large-scale AI from start to finish. These are the cloud and platform monopolies – those that established themselves early on the backs of the surveillance business model.  "Everyone else is licensing infrastructure, scrambling for data, and struggling to find market fit without

Peter Cappelli

The backend work to build and sustain large language models (LLMs) may need more human labor than the effort saved up front. Plus, many tasks may not necessarily require the firepower of AI when standard automation will do. That's the word from Peter Cappelli, a management professor at the University of Pennsylvania Wharton School, who spoke at a recent MIT event. On a cumulative basis, generative AI and LLMs may create more work for people than alleviate tasks. LLMs are complicated to implement, and "it turns out there are many things generative AI could do that we don't really need doing," said Cappelli. While AI is hyped as a game-changing technology, "projections from the tech side are often spectacularly wrong," he pointed out. "In fact, most of the technology forecasts about work have been wrong over time." He said the imminent wave of driverless trucks and cars, predicted in 2018, is an example of rosy projections that have yet to come true.

Gigafactory

Elon Musk’s xAI has announced a partnership with Oracle to create a supercomputer. This new venture aims to support the advanced development of xAI’s AI model, Grok. This collaboration highlights Musk’s ambitious plans to enhance AI capabilities, signaling a potential shift in the technological landscape. The partnership between xAI and Oracle is designed to construct what has been termed a “Gigafactory of Compute.” This initiative will focus on building a supercomputer that will serve as the backbone for training and evolving Grok. In April, Musk aimed to secure $4 billion in funding at a $15 billion valuation for xAI. Investor interest surged, prompting Musk to increase the funding target to $6 billion at an $18 billion valuation. Furthermore, the funds are anticipated to expand xAI’s GPU count dramatically. The goal is to boost the number from approximately 10,000 to 100,000 GPUs. This expansion is not just a quantitative increase but a strategic enhancement to unify these GPUs…

Jennie Kermode

"Perhaps because of the distancing effect of the distortion, which immediately labels it as something separate from life, the film doesn’t have much of that uncanny valley effect that some people find repellant .  "It’s still a little too rough-hewn to be creepy in that way, but there’s a sensuousness about it, a sort of tacit eagerness to please, which is at time quite unsettling.  "Artistically, it promises magic and yet it is, in this form, obviously no more than a tool, a void which can never create in the absence of a human user.  "For want of a better term, one might say that it too patently lacks a soul . One might be tempted to ask it to describe in single words only the good things that come into its mind about its mother. "Despite this, the film has a certain charm, and at a crisp six minutes does not overstay its welcome."  Can We Really Know Anything About Carrots?  An experiment with artificial intelligence, looking at the world from a differe

Episodic memory

“This study provides strong evidence for episodic memory in Eurasian jays,” said Dr. Jonathon Crystal, a provost professor of psychological and brain sciences at Indiana University Bloomington who was not involved with the project. “If you can answer that unexpected question after incidental encoding, that becomes a strong argument that you can remember back in time to the earlier episode, which is at the heart of documenting episodic memory.” Crystal said that studies such as this one, which aim to identify animals’ abilities to form episodic memories, are important in part because of their potential role in the field of human memory research.

Neuroplasticity

Back in the 1970s — ancient history — you know, we didn’t have the tools that we needed in order to really understand the brain’s capacity for plasticity. These forms of neuroplasticity began to be worked out in the 1970s and 1980s. And then, really, with the advent of new kinds of basic neuroscience imaging, you can actually see these spines growing out of the dendrites, where they make the synaptic connections. You realize the brain is not static. It’s incredibly, unbelievably plastic. And it raises really fundamental ideas: it’s not just depression that we want to treat by harnessing neuroplasticity; there are opportunities to treat other disorders that we don’t treat as effectively as we should.

Dark Lord blows

In recent years, computer programmers have flocked to chatbots like OpenAI's ChatGPT to help them code, dealing a blow to places like Stack Overflow, which had to lay off nearly 30 percent of its staff last year. The only problem? A team of researchers from Purdue University presented research this month at the Computer-Human Interaction conference that shows that 52 percent of programming answers generated by ChatGPT are incorrect. That's a staggeringly large proportion for a program that people are relying on to be accurate and precise, underlining what other end users like writers and teachers are experiencing: AI platforms like ChatGPT often hallucinate totally incorrect answers out of thin air. For the study, the researchers looked over 517 questions on Stack Overflow and analyzed ChatGPT's attempt to answer them. "We found that 52 percent of ChatGPT answers contain misinformation, 77 percent of the answers are more verbose than human answers, and 78 percent…

Dark Lord adumbrated

OpenAI on Thursday backtracked on a controversial decision to, in effect, make former employees choose between signing a non-disparagement agreement that would never expire, or keeping their vested equity in the company. The internal memo, which was viewed by CNBC, was sent to former employees and shared with current ones. The memo, addressed to each former employee, said that at the time of the person’s departure from OpenAI, “you may have been informed that you were required to execute a general release agreement that included a non-disparagement provision in order to retain the Vested Units [of equity].”

Stefan Bauschard

"Why did Google decide to share the idea that one should eat rocks? Likely from a satirical article in The Onion that suggests it and is reprinted elsewhere. And, of course, the Google answer. "What is actually impressive is that Perplexity (or Opus, I chose Claude 3 Opus when running this query in Perplexity) was able to determine that the original Onion article is satirical, which points to a high degree of sophistication in the model(s). "AI scientists long held the belief that AI could never capture the nuances of language, which it clearly has (the example I always give is that it can pick-up on the nuances of how the word 'crane' is used in different contexts). But it cannot just capture the nuances of language, it can also understand satire ." 

Neil Druckmann

Naughty Dog studio boss Neil Druckmann seems considerably more upbeat about it [generative AI] than I am, however, saying in a new Sony interview that the advent of generative AI is "opening the door for us to take on more adventurous projects and push the boundaries of storytelling in games." Druckmann acknowledges fairly early on that the growing use of AI "does bring up some ethical issues we need to address," but then quickly moves to note that it also reduces "costs and technical hurdles," and is "truly empowering creators to bring their visions to life without the traditional obstacles." He also encourages up-and-coming content creators to master "fundamentals over tools," saying that tools slide into obsolescence quickly.

Tim Dickinson

"Google has launched a tool for many users called 'AI Overview' that uses machine learning to generate quick answers to user queries and delivers them as the top search result .  "AI Overview cannot be turned off, and it does not offer any warnings or disclosures about its limitations on the search page itself. "The early results have been problematic, to say the least — including spreading a bigoted conspiracy theory, and providing answers that not only include absurd misinformation, but could in fact be hazardous to human health or endanger people experiencing a mental health crisis." 

Evaluating AI generated code

"This study evaluates the capabilities of ChatGPT versions 3.5 and 4 in generating code across a diverse range of programming languages .  "Our objective is to assess the effectiveness of these AI models for generating scientific programs. To this end, we asked ChatGPT to generate three distinct codes:  a simple numerical integration,  a conjugate gradient solver, and  a parallel 1D stencil-based heat equation solver.  "The focus of our analysis was on the compilation, runtime performance, and accuracy of the codes.  "While both versions of ChatGPT successfully created codes that compiled and ran (with some help), some languages were easier for the AI to use than others (possibly because of the size of the training sets used). Parallel codes -- even the simple example we chose to study here -- also difficult for the AI to generate correctly."

Photon computing FTW

Moore’s law is already pretty fast. It holds that computer chips pack in twice as many transistors every two years or so, producing major jumps in speed and efficiency. But the computing demands of the deep learning era are growing even faster than that — at a pace that is likely not sustainable. The International Energy Agency predicts that artificial intelligence will consume 10 times as much power in 2026 as it did in 2023, and that data centers in that year will use as much energy as Japan. “The amount of [computing power] that AI needs doubles every three months,” said Nick Harris, founder and CEO of the computing-hardware company Lightmatter — far faster than Moore’s law predicts. “It’s going to break companies and economies.” One of the most promising ways forward involves processing information not with trusty electrons, which have dominated computing for over 50 years, but instead using the flow of photons, minuscule packets of light. Recent results suggest that, for certain…
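
The gap between those two doubling periods compounds quickly, as a quick back-of-the-envelope calculation in Python shows (using only the figures quoted above):

    # Compare Moore's-law growth (doubling roughly every 24 months) with the
    # quoted AI compute demand (doubling every 3 months) over four years.
    for months in (12, 24, 48):
        moore = 2 ** (months / 24)   # transistor-count growth factor
        demand = 2 ** (months / 3)   # compute-demand growth factor
        print(f"{months:2d} months: Moore x{moore:.1f} vs demand x{demand:,.0f}")
    # After 48 months Moore's law yields ~4x while demand grows ~65,536x,
    # which is the unsustainable gap described above.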

Magic Editor coming

Magic Editor initially launched as a Pixel 8 exclusive feature. But now, according to 9to5Google, the tool is finally coming to other models through Google Photos, something the tech company previously promised. Magic Editor allows users to change, resize, and remove parts of photographs. For example, users can circle an object in the image and move it over or change the lighting to a different time of day. “Using generative AI, this editor makes it easy to do complex photo edits with simple and intuitive actions, like repositioning your subject or turning the sky from gray to blue,” Google explained in a release last month.

Ollama AI models

The Ollama website has a long list of AI models you can install and try. For example, you can try Meta’s new Llama3 model, Microsoft’s Phi-3 Mini model, Google’s Gemma model, or Mistral AI’s Mistral model. Just use the same command, and specify the model you want to use. For example, you can use the following commands: ollama run llama3; ollama run phi3; ollama run gemma; ollama run mistral. In the future, we’ll probably see nicer graphical interfaces built on top of tools like Ollama. There’s already a great Open WebUI project that looks very similar to ChatGPT’s web interface, but requires Docker to install — it’s not quite as “click and run” as Ollama. Want something faster, or just want to skip the installation process? You can access many of these chatbots in your web browser with the HuggingChat tool. It’s a free web interface for accessing these chatbots. (But with HuggingChat, they’re running on the HuggingChat servers, not on your own PC.)
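
Once a model is running, Ollama also serves a local HTTP API (on port 11434 by default), so you can script it; here is a minimal Python sketch, assuming a default local install with llama3 already pulled:

    # Minimal sketch: query a locally running Ollama model over its HTTP API.
    # Assumes Ollama is serving on its default port (11434) and that the
    # llama3 model has already been pulled with "ollama run llama3".
    import json
    import urllib.request

    payload = json.dumps({
        "model": "llama3",
        "prompt": "In one sentence, what is a large language model?",
        "stream": False,  # ask for one JSON object instead of a token stream
    }).encode("utf-8")

    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])

This is the same local-only loop the graphical front ends like Open WebUI build on: everything stays on your own PC.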

New Mexico SOS

Toulouse Oliver’s office [SOS] has invested in promotional campaigns and online resources to combat misinformation (defined as false information shared without intention of harm), disinformation (deliberately misleading information) and malinformation (often defined as disinformation based on a kernel of true information but presented deceptively or exaggerated so as to mislead). At the office’s website, sos.nm.gov, a page styled “Rumor vs. Reality” addresses some common rumors and misunderstandings about elections and voting. This year, the office is focusing more attention on “AI-manipulated media that could distort the truth about the election and candidates,” Toulouse Oliver said in a news release earlier in May. “With the creation of deepfakes and other manipulated media through AI software, seeing is no longer believing.” Besides spreading awareness of fakery and ways to spot it, the campaign also promotes avenues for obtaining information about elections…

Kylie Robison

"Companies developing artificial intelligence are often quick to avoid taking accountability for their systems with an approach much like a parent with an unruly child — boys will be boys!   "These companies claim that they can’t predict what this AI will spit out, so really, it’s out of their control. "But for users, that’s a problem. Last year, Google said that AI was the future of search.  "What’s the point, though, if the search seems dumber than before?"

AI finds hidden galactic evolution clues in over 100 galaxies

Before using a neural network, you first have to train it. Unfortunately, as we've discussed, there aren't enough known neutral carbon absorbers to adequately do that. So, instead of using real data, the researchers generated a batch of 5 million fictitious spectra and used them to teach the neural network about what to look for: patterns often too subtle for a human eye to spot. Then, the researchers set their neural network loose on data from the Sloan Digital Sky Survey III. When they did so, they pinpointed neutral carbon absorbers in 107 galaxies previously not known to possess these features.
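
The train-on-synthetic-data trick is easy to demonstrate in miniature; here is a toy Python sketch of the idea, with fake Gaussian absorption dips in noisy continua and a small scikit-learn network standing in for the authors' model (everything here is illustrative; real C I absorbers are far subtler):

    # Toy version of training on fictitious spectra: inject weak Gaussian
    # "absorption dips" into noisy continua, then train a small classifier.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    wave = np.linspace(0.0, 1.0, 200)            # normalized wavelength grid

    def synthetic_spectrum(has_absorber):
        flux = 1.0 + 0.05 * rng.standard_normal(wave.size)  # noisy continuum
        if has_absorber:
            center = rng.uniform(0.2, 0.8)
            flux -= 0.1 * np.exp(-(((wave - center) / 0.01) ** 2))  # weak dip
        return flux

    labels = rng.integers(0, 2, size=5000)
    spectra = np.array([synthetic_spectrum(l) for l in labels])

    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
    clf.fit(spectra[:4000], labels[:4000])
    print("held-out accuracy:", clf.score(spectra[4000:], labels[4000:]))

The researchers' version does this at the scale of millions of simulated spectra, then applies the trained network to real survey data.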

Frankly, Scarlett…

OpenAI could face legal consequences for making a ChatGPT voice that sounds a lot like Scarlett Johansson — whether the company did so intentionally or not. And the fact that OpenAI’s CEO [Sam Altman] referenced those similarities? That only makes matters worse, intellectual property lawyers tell The Verge. “There are a few courses of actions she can take, but case law supports her position,” says Purvi Patel Albers, partner at the law firm Haynes Boone with a focus on trademarks and copyright.

Alexa wants a word ✨

Amazon is upgrading its decade-old Alexa voice assistant with generative artificial intelligence and plans to charge a monthly subscription fee to offset the cost of the technology, according to people with knowledge of Amazon’s plans. The Seattle-based tech and retail giant will launch a more conversational version of Alexa later this year, potentially positioning it to better compete with new generative AI-powered chatbots from companies including Google and OpenAI, according to two sources familiar with the matter, who asked not to be named because the discussions were private. Amazon’s subscription for Alexa will not be included in the $139-per-year Prime offering, and Amazon has not yet nailed down the price point, one source said.

It was against the law

A man who admitted to sending out robocalls mimicking President Joe Biden's voice on the day of the New Hampshire primary is now facing criminal charges. Five indictments have been returned against Steve Kramer, each related to a different alleged victim. He has been charged with bribing, intimidation and suppression. The robocalls went out to people across New Hampshire in January on the day of the first-in-the-nation primary. The calls used artificial intelligence to mimic the sound of Biden's voice and told listeners to save their vote for the November election. 

Qwen LLM discounts

China's top AI players have made enormous cuts to the price of their services. Alibaba Cloud confirmed to The Register that it has reduced the price it charges per 1,000 input tokens to its proprietary Qwen LLM, after the South China Morning Post reported the Chinese cloud leader introduced discounts of up to 97 percent. The Register understands that Alibaba Cloud's motive is to promote the growth of AI applications in China. Its discounts were reportedly introduced a day after competitor ByteDance introduced AI services that were vastly cheaper – measured in cost of input tokens – than Alibaba's offerings. The other big player in the Middle Kingdom, Baidu, responded to its rivals' moves by making access to some of its Ernie models entirely free.

Scanning the brains of LLMs

The European Union’s AI Act, for example, requires explainability for ‘high-risk AI systems’ such as those deployed for remote biometric identification, law enforcement or access to education, employment or public services. [Sandra] Wachter says that LLMs aren’t categorized as high-risk and might escape this legal need for explainability except in some specific use cases. But this shouldn’t let the makers of LLMs entirely off the hook, says [David] Bau, who takes umbrage over how some companies, such as OpenAI — the firm behind ChatGPT — maintain secrecy around their largest models. OpenAI told Nature it does so for safety reasons, presumably to help prevent bad actors from using details about how the model works to their advantage. Companies including OpenAI and Anthropic are notable contributors to the field of XAI. In 2023, for example, OpenAI released a study that used GPT-4, one of its most recent AI models, to try to explain the responses of an earlier model, GPT-2, at the neuron level…

HyperCycle

HyperCycle, which came into existence in October 2022, is the brainchild of CEO Toufi Saliba and Ben Goertzel, the founder of SingularityNET. The inspiration for creating HyperCycle was sparked during a conversation between the two at the global AGI summit in 2021. This innovative company is pioneering the development of a General Purpose Technology aimed at facilitating decentralised networks that enable AI-to-AI communication. The technology is crafted to efficiently scale, meeting the global surge in demand for AI integrations and applications. HyperCycle's mission revolves around leveraging advanced technologies to create a seamless, expansive platform for AI interactions, driving forward the future of artificial intelligence communication.

News Corp swallowed by Dark Lord

OpenAI and News Corp, the owner of The Wall Street Journal, MarketWatch, The Sun, and more than a dozen other publishing brands, have struck a multi-year deal to display news from these publications in ChatGPT, News Corp announced on Wednesday. OpenAI will be able to access both current as well as archived content from News Corp’s publications and use the data to further train its AI models. Neither company disclosed the terms of the deal, but a report in The Wall Street Journal estimated that News Corp would get $250 million over five years in cash and credits.

AI needs something completely different

The type of artificial intelligence that powers systems like OpenAI’s ChatGPT and Google’s Gemini will not be able to reach human levels of intelligence, Meta’s AI chief Yann LeCun told the Financial Times in an interview published Wednesday, giving an insight into how the tech giant plans to develop the technology moving forward just weeks after its plans to invest heavily spooked investors and erased hundreds of billions from its market value. LeCun said he and his roughly 500-strong team at Meta’s Fundamental AI Research lab are working to develop an entirely new generation of AI systems based on an approach called “world modeling,” where the system builds an understanding of the world around it like humans do and develops a sense of what would happen if something changes based off this. It could take up to 10 years to achieve human-level AI using the world modeling approach, LeCun predicted.

Mechanistic interpretability

A small subfield of A.I. research known as “mechanistic interpretability” has spent years trying to peer inside the guts of A.I. language models. The work has been slow going, and progress has been incremental. There has also been growing resistance to the idea that A.I. systems pose much risk at all. Last week, two senior safety researchers at OpenAI, the maker of ChatGPT, left the company amid conflict with executives about whether the company was doing enough to make its products safe. But this week, a team of researchers at the A.I. company Anthropic announced what they called a major breakthrough — one they hope will give us the ability to understand more about how A.I. language models actually work, and to possibly prevent them from becoming harmful. The team summarized its findings in a blog post called “Mapping the Mind of a Large Language Model.”

Show me the money…

Suno, a generative AI music company, has raised $125 million in its latest funding round, according to a post on the company’s blog. The AI music firm, which is one of the rare start-ups that can generate voice, lyrics and instrumentals together, says it wants to usher in a “future where anyone can make music.” Suno allows users to create full songs from simple text prompts. While most of its technology is proprietary, the company does lean on OpenAI’s ChatGPT for lyric and title generation. Free users can generate up to 10 songs per month, but with its Pro plan ($8 per month) and Premier plan ($24 per month), a user can generate up to 500 songs or 2,000 songs, respectively, on a monthly basis and are given “general commercial terms.”

Advanced Paste

Microsoft is adding a new Advanced Paste feature to PowerToys for Windows 11 that can convert your clipboard content on the fly with the power of AI. The new feature can help people speed up their workflows by doing things like copying code in one language and pasting it in another, although its best tricks require OpenAI API credits. Advanced Paste is included in PowerToys version 0.81 and, once enabled, can be activated with a special key command: Windows Key + Shift + V. That opens an Advanced Paste text window that offers paste conversion options including plaintext, markdown, and JSON. If you enable Paste with AI in the Advanced Paste settings, you’ll also see an OpenAI prompt where you can enter the conversion you want — summarized text, translations, generated code, a rewrite from casual to professional style, Yoda syntax, or whatever you can think to ask for.

Tabletop Handybot

Decently useful AI has been around for a little while now, and robotic arms have been around much longer. Yet somehow, we don’t have little robot helpers on our desks yet! Thankfully, [Yifei] is working towards that reality with Tabletop Handybot. What [Yifei] has developed is a robotic arm that accepts voice commands. The robot relies on a Realsense D435 RGB-D camera, which provides color vision with depth information as well. Grounding DINO is used for object detection on the RGB images. Segment Anything and Open3D are used for further processing of the visual and depth data to help the robot understand what it’s looking at. Meanwhile, voice commands are interpreted via OpenAI Whisper, which can feed prompts to ChatGPT for further processing. [Yifei] demonstrates his robot picking up markers on command, which is a pretty cool demo. With so many modern AI tools available, we’re getting closer to the ideal of robots that can understand and execute on general spoken instructions…
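
The overall flow is easier to see as code; below is a heavily simplified Python sketch of such a pipeline, where detect_object and grasp_at are hypothetical placeholders for the Grounding DINO / Segment Anything / Open3D perception stack and the arm driver, not [Yifei]'s actual API:

    # Heavily simplified sketch of a voice-commanded pick-up loop.
    # whisper is the real openai-whisper package; detect_object() and
    # grasp_at() are hypothetical placeholders, not the project's real code.
    import whisper

    def detect_object(label):
        """Placeholder: return an (x, y, z) position for the named object."""
        raise NotImplementedError("Grounding DINO + Segment Anything + Open3D")

    def grasp_at(position):
        """Placeholder: command the arm to pick up whatever is at position."""
        raise NotImplementedError("robot arm driver")

    model = whisper.load_model("base")                # speech-to-text model

    def handle_voice_command(audio_path):
        text = model.transcribe(audio_path)["text"]   # e.g. "pick up the marker"
        # The real project can send the text to ChatGPT to extract the target;
        # here we naively treat the last word as the object label.
        label = text.strip().strip(".").split()[-1]
        grasp_at(detect_object(label))                # perceive, then act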

YMMV


Errors were generated

An interdisciplinary journal says it will take “corrective actions” following a thorough investigation into a paper for which one author used ChatGPT to update the references. Krithika Srinivasan, an editor of Environment and Planning E: Nature and Space and a geographer at the University of Edinburgh, in Scotland, confirmed to Retraction Watch her journal is finalizing what actions need to be taken. After the probe concluded, Srinivasan says she submitted her recommendations to Sage, the journal’s publisher, who will take actions in line with their policy. What’s clear from the probe, she says, is that “none of the incorrect references in this paper were ‘fabricated’ in the sense of being made up or false.” She notes that the original manuscript was submitted to the journal with the correct references but “the errors were generated when one of the other authors (without the knowledge of the submitting author) used ChatGPT (instead of regular referencing software)…

But, but, my Holodeck?

SAG-AFTRA, the union representing thousands of actors and other media professionals, threw their support behind actress Scarlett Johansson after she expressed concerns over ChatGPT’s new voiced artificial intelligence (AI) assistant that she claims sounds “eerily similar” to her voice. “We share in her concerns and fully support her right to have clarity and transparency regarding the voice used in developing the Chat GPT-4o appliance ‘Sky,'” a SAG-AFTRA spokesperson wrote in a statement Tuesday. Johansson on Monday said OpenAI CEO Sam Altman previously spoke with her about voicing an AI assistant, but she declined. Last week, OpenAI released a demo of its “Sky” voice assistant, featured in its new AI model, GPT-4o. Johansson said Altman contacted her agent two days before the demo was released and asked her to reconsider, but “before we could connect, the system was out there.” “When I heard the released demo, I was shocked, angered and in disbelief that Mr. Altman would pursue a…”

Proverb 17: AI

We are supposed to be bored. It is a part of life. Learn to put up with it. —Kurt Vonnegut

Spiritual advisor 🎮

Microsoft Copilot will be embedded directly in video games, starting with Minecraft. Players will be able to use natural language to ask questions like "How do I craft a sword?" and the Copilot will search your chests and inventories for the necessary materials, or guide you to them if you don't have them. It will also explain how to craft the item, and so on, eliminating the need to alt-tab and read a website for Minecraft guides… Microsoft has made a rather large emphasis on privacy at this event as well, claiming that data used via these new AI PCs will remain on-device, and won't be uploaded to the cloud or used to train language models without consent.

Dell Technologies

Dell Technologies unveiled a new lineup of AI-enabled PCs featuring Qualcomm processors during an event in Las Vegas on Monday. The company also announced an upcoming server that will support Nvidia's latest chips, set to be released in the second half of 2024. The newly introduced AI-capable PCs are equipped with Qualcomm Snapdragon X series chips, which include neural processing units (NPUs) specifically designed for handling sophisticated AI tasks. This enhancement is expected to cater to the growing demand for high-performance computing in various sectors.

Llama drama

Checkmarx threat research team in a report shared with Hackread.com revealed the dangers posed by seemingly trusted AI models harboring backdoors. Dubbed “Llama drama,” the vulnerability impacts the llama_cpp_python package, potentially allowing attackers to execute arbitrary code and compromise data and operations. The vulnerability affects over 6,000 AI models on trusted platforms like Hugging Face, highlighting the need for AI platforms and developers to address supply chain security challenges. It is important to mention that the vulnerability was initially discovered by a cybersecurity researcher known by the handle @retr0reg on X (Twitter).

Cocreator

Microsoft Paint is getting new image generation powers with a new tool called Cocreator. Powered by "diffusion-based algorithms," Cocreator can generate images based on text prompts as well as your own doodles in the Paint app. The company has been experimenting with AI image generation in Paint for a while, and early versions of Cocreator have been available to developers and Windows Insiders since the fall. But with the introduction of Copilot+ PCs, the feature is now official. During a demo at its Surface event, the company showed off how Cocreator combines your own drawings with text prompts to create an image. There’s also a “creativity slider” that allows you to control how much you want AI to take over compared with your original art.

Recall

At a Build conference event on Monday, Microsoft revealed a new AI-powered feature called "Recall" for Copilot+ PCs that will allow Windows 11 users to search and retrieve their past activities on their PC. To make it work, Recall records everything users do on their PC, including activities in apps, communications in live meetings, and websites visited for research. Despite encryption and local storage, the new feature raises privacy concerns for certain Windows users. "Recall uses Copilot+ PC advanced processing capabilities to take images of your active screen every few seconds," Microsoft says on its website. "The snapshots are encrypted and saved on your PC’s hard drive. You can use Recall to locate the content you have viewed on your PC using search or on a timeline bar that allows you to scroll through your snapshots."

Donald Knuth

Donald Knuth, an algorithmist at Stanford, in recent years changed his mind—he “flipped the bit.” His intuition is that P does indeed equal NP, but that we’ll probably never be able to make use of that fact, practically speaking—because we won’t actually know any of the algorithms that happen to work. There are mind-boggling numbers of algorithms out there, he explains, but most of them are beyond our ken. So whereas some researchers might insist that no P = NP algorithm exists, Knuth contends that “it’s more likely that no polynomial-time algorithm will ever be embodied—actually written down as a program—by mere mortals.”

Scott Aaronson

"One might think that, once we know something is computable, how efficiently it can be computed is a practical question with little further philosophical importance. In this essay, I offer a detailed case that one would be wrong .  "In particular, I argue that computational complexity theory — the field that studies the resources (such as time, space, and randomness) needed to solve computational problems — leads to new perspectives on the nature of mathematical knowledge,  the strong AI debate, computationalism,  the problem of logical omniscience, Hume's problem of induction, Goodman's grue riddle,  "the foundations of quantum mechanics, economic rationality, closed timelike curves, and several other topics of philosophical interest.  "I end by discussing aspects of complexity theory itself that could benefit from philosophical analysis."

Lance Fortnow

"As an open mathematical problem, P vs. NP remains one of the most important ; it is listed on the Clay Mathematical Institute’s Millennium Problems (the organization offers a million-dollar bounty for the solution).  "I close the article by describing some new theoretical computer science results that, while not getting us closer to solving the P vs. NP question, show us that thinking about P vs. NP still drives much of the important research in the area."

Peak AI 🗻


Air gapped AI 🕵️

Microsoft, which has gone “all-in” on artificial intelligence, has developed a generative AI model designed expressly for U.S. intelligence services. Unlike other AI platforms, such as Microsoft’s own Copilot, this one will be “air gapped” and won’t require a potentially unsafe connection to the internet. Bloomberg notes, “It’s the first time a major large language model has operated fully separated from the internet… Most AI models, including OpenAI’s ChatGPT, rely on cloud services to learn and infer patterns from data, but Microsoft wanted to deliver a truly secure system to the US intelligence community.”

Shelly Palmer

There have been quite a few pundits, futurists, and normal people weighing in on how AI might end the world. The real risks AI poses may come not from super-intelligent machines with malevolent intentions but from a very human oversight: failing to account for the environmental and infrastructural costs of rapidly advancing technology. Ensuring that AI serves the common good requires not just technical and ethical guidelines for AI development but also a comprehensive approach to managing the resources it depends on.

Decontextualized language

We’ve spoken before about “decontextualized language” – language that takes us beyond the immediate context and moment, and how such language can take us beyond our own already delimited feelings and experiences, and into a realm of interpersonal and cultural thought, knowledge, and perspectives. This is the language of storybooks, of science, and – at its greatest extreme – of code. We begin teaching this form of language when we begin storytelling with our children, reading with them, and talking to them about books. It becomes increasingly dense and complex as we move into disciplinary study. There is some evidence that training LLMs on this specific form of language is more powerful – such as this study training a “tiny LLM” on children’s stories. And if you think about what LLMs have been getting trained on thus far – it’s corpora of written language, not training on conversations using everyday language.
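For the curious, the “tiny LLM on children’s stories” line of work is easy to poke at with open tooling. A minimal sketch, assuming the Hugging Face datasets/transformers libraries and the publicly released TinyStories dataset and model names (the repository IDs are assumptions worth verifying):

    from datasets import load_dataset
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # A corpus of simple, decontextualized children's stories
    stories = load_dataset("roneneldan/TinyStories", split="train[:100]")
    print(stories[0]["text"][:200])

    # A small model trained only on that corpus
    tok = AutoTokenizer.from_pretrained("roneneldan/TinyStories-33M")
    model = AutoModelForCausalLM.from_pretrained("roneneldan/TinyStories-33M")

    prompt = tok("Once upon a time,", return_tensors="pt")
    out = model.generate(**prompt, max_new_tokens=40)
    print(tok.decode(out[0], skip_special_tokens=True))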

Nil Gates

The Microsoft co-founder recently took to social media to tout Brave New Words: How AI Will Revolutionize Education (And Why That’s a Good Thing), which was published last week. The book was written by Sal Khan, the founder and CEO of education nonprofit Khan Academy, which is developing an experimental AI chatbot tutor called ‘Khanmigo.’ “If you’re passionate about education, you need to read this book,” Gates wrote on social media platform X. “Sal offers a compelling vision for harnessing AI to expand opportunity for all.” In the book, Khan writes about the potential for AI-powered systems — like ChatGPT, which powers Khanmigo — to “revolutionize the way we learn and teach” by assisting overworked teachers and tailoring lessons to individual students around the world. AI tutoring could help “close the education gap” with direct help for low-income students, even in developing countries, Gates noted last year on his “Unconfuse Me” podcast, in an episode featuring Khan.

Bleeping Computer RX

If you dislike Google AI Overviews, you can force Google to always show Web search results without AI summaries, videos, images, and other search features. To do this, change the search engine in your web browser to a specific Google entry point that triggers the "Web" filter by following these steps:

1. Open Google Chrome, click the three-dot menu in the top-right corner, and select Settings.
2. Scroll down to the "Search engine" section and click Manage search engines and site search.
3. Click Add next to Site search.
4. In the Add search engine dialog, enter a name for your search engine (e.g., "Google Web"). For a shortcut, enter a keyword to quickly use this search engine from the address bar (e.g., "Web").
5. Change the URL to {google:baseURL}/search?udm=14&q=%s.
6. Click Add.
7. Click the three-dot menu next to the new search engine you created and select Make Default.

The new search engine will now appear as the default under the Search engine section.
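The same udm=14 trick works anywhere you can construct a URL, not just in Chrome's settings. A small Python sketch — udm=14 is the "Web" filter parameter from the steps above; everything else is standard library:

    import webbrowser
    from urllib.parse import quote_plus

    def web_only_search(query: str) -> None:
        """Open a Google 'Web' search (no AI Overviews) in the default browser."""
        webbrowser.open(f"https://www.google.com/search?udm=14&q={quote_plus(query)}")

    web_only_search("llama_cpp_python vulnerability")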

Seoul Summit

“The last 12 months of AI progress were the slowest they’ll be for the foreseeable future,” the economist Samuel Hammond wrote in early May. Until now, “frontier” AI systems, the most powerful on the market, have largely been confined to simply handling text. Microsoft and Google have incorporated their offerings into their office products and given them the authority to carry out simple administrative functions upon request. But the next step of development is “agentic” AI: systems that can truly act to influence the world around them, from surfing the web to writing and executing code. Smaller AI labs have experimented with such approaches, with mixed success, putting commercial pressure on the larger companies to give their own AI models the same power. By the end of the year, expect the top AI systems not only to offer to plan a holiday for you, but to book the flights, hotels and restaurants, arrange your visa, and prepare and lead a walking tour of your destination.
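“Agentic” is less mysterious than it sounds: structurally, it is a loop in which a model proposes an action, host code executes it, and the observation is fed back. A deliberately toy sketch of that loop (the model is a stub here; a real system would call an LLM API and expose far more consequential tools, which is exactly the point of the safety debate):

    import datetime

    # Tools the "agent" may invoke — the host controls this surface.
    TOOLS = {
        "get_time": lambda _arg: datetime.datetime.now().isoformat(),
        "search_web": lambda arg: f"(stub) top result for {arg!r}",
    }

    def stub_model(history):
        """Stand-in for an LLM: decides the next action from the transcript."""
        if not any("get_time" in h for h in history):
            return ("call", "get_time", "")
        return ("finish", "Here is your answer:", history[-1])

    history = []
    while True:
        kind, a, b = stub_model(history)
        if kind == "finish":
            print(a, b)
            break
        observation = TOOLS[a](b)          # execute the proposed tool call
        history.append(f"{a} -> {observation}")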

Section 230

Adam Thierer, senior fellow at the nonprofit free-market think tank R Street, worries that innovation could be throttled if Congress doesn't extend Section 230 to cover AI tools. "As AI is integrated into more consumer-facing products, the ambiguity about liability will haunt developers and investors," he predicted. "It is particularly problematic for small AI firms and open-source AI developers, who could be decimated as frivolous legal claims accumulate." But John Bergmayer, legal director for the digital rights nonprofit Public Knowledge, said there are real concerns that AI answers could spell doom for many of the publishers and creators that rely on search traffic to survive — and which AI, in turn, relies on for credible information. From that standpoint, he said, a liability regime that incentivizes search engines to continue sending users to third-party websites might be "a really good outcome."

AI entails UBI

The computer scientist regarded as the “godfather of artificial intelligence” says the government will have to establish a universal basic income to deal with the impact of AI on inequality. Professor Geoffrey Hinton told BBC Newsnight that a benefits reform giving fixed amounts of cash to every citizen would be needed because he was “very worried about AI taking lots of mundane jobs”. “I was consulted by people in Downing Street and I advised them that universal basic income was a good idea,” he said. He said while he felt AI would increase productivity and wealth, the money would go to the rich “and not the people whose jobs get lost and that’s going to be very bad for society”. Professor Hinton is the pioneer of neural networks, which form the theoretical basis of the current explosion in artificial intelligence. 

Barba-Kay

The digital now inflects how we think—and how we think about thinking. While the “most urgent and proximate concern about AI is not the death of theory … [but] the possibility that AI will be deployed for military purposes so as to slip from our control,” Barba-Kay’s long-range concern is the redefinition of intelligence itself, with the artificial version ascendant. “By perfecting digital technology, we are in fact constructing a model of human intelligence, our perfect self. And the more we see ourselves in it (because we are able to), the more it becomes us, the more we become it.” Algorithmic, utilitarian and putatively neutral, AI represents “the ultimate expression of modern science’s founding gambit, which is to make practical effectiveness primary over metaphysical or philosophical meaning.” What AI cannot do is judge, deliberate or contemplate: AI cannot tell you whether to go to war or whom to marry; nor can it answer the question, quid sit deus?

Key Points: slippery slopes

Gannett, the media company that owns hundreds of newspapers in the US, is launching a new program that adds AI-generated bullet points at the top of journalists’ stories, according to an internal memo seen by The Verge. The AI feature, labeled “key points” on stories, uses automated technology to create summaries that appear below a headline. The bottom of articles includes a disclaimer, reading, “The Key Points at the top of this article were created with the assistance of Artificial Intelligence (AI) and reviewed by a journalist before publication. No other parts of the article were generated using AI.” The memo is dated May 14th and notes that participation is optional at this point.
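Gannett hasn't disclosed its tooling, but article-to-bullets summarization is a standard pattern in open libraries. A sketch with Hugging Face transformers (the model choice is illustrative, not Gannett's system):

    from transformers import pipeline

    summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

    article = """Gannett, the media company that owns hundreds of newspapers
    in the US, is launching a new program that adds AI-generated bullet points
    at the top of journalists' stories, according to an internal memo."""

    summary = summarizer(article, max_length=60, min_length=15, do_sample=False)

    # Per the memo, a journalist would review these before publication.
    for sentence in summary[0]["summary_text"].split(". "):
        print("•", sentence.strip())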

Hustle culture

Silicon Valley startup hustle culture is back, especially in Cerebral Valley, the Hayes Valley neighborhood of San Francisco that’s filled with early-stage AI startups, often founded and staffed by 20-somethings who make their companies their whole lives. Hustle culture went out of favor in the post-pandemic years, when people had moved away from both their offices and San Francisco. But hacker houses in San Francisco are popular again. And Cerebral Valley is its own cultural phenom, where those who believe in the future of AI (or fear it) live in such houses and go [to] the same parties. In the case of Exa Labs, the need for nap pods is a natural extension of its hacker-house history. Exa is a 10-person startup that was, until a few weeks ago, in such a house, where co-workers of tiny companies work and live together.

Discord's discord

Discord is alleged to be using machine learning to predict the gender and age group of its users, according to technically savvy users. X user DiscordPreviews found hints of Discord using machine learning (ML), an application of AI, to assign gender and age groups to users. They dug out the information from the social platform’s data packages, claiming the company has been doing so “since at least August 2022.” “The data can be found in the ‘activity/analytics/events-[…].json’ file of some Discord data packages, though the exact requirements are unknown,” writes DiscordPreviews. They also shared a screenshot of a JSON file that shows the age and gender assigned to an undefined user. The user in question is shown to be a male aged between 18 and 24.
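If you want to check your own data package, the described pattern is straightforward to scan for. A sketch that assumes the analytics files are newline-delimited JSON and that the relevant keys contain the word "predicted" — both assumptions drawn from the circulating screenshots, not from any Discord documentation:

    import glob
    import json

    # Discord data packages ship analytics as events-*.json files
    for path in glob.glob("package/activity/analytics/events-*.json"):
        with open(path, encoding="utf-8") as fh:
            for line in fh:                      # assumed: one JSON event per line
                try:
                    event = json.loads(line)
                except json.JSONDecodeError:
                    continue
                hits = {k: v for k, v in event.items() if "predicted" in k.lower()}
                if hits:
                    print(path, hits)            # e.g. predicted age/gender fields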

Whale clicks

Scientists have accomplished a whale of a feat. They’ve identified previously unknown complexity in whale communication by analyzing thousands of recorded sequences of sperm whale clicks with artificial intelligence. Variations in tempo, rhythm and length of the whales’ click sequences, called codas, weave a rich acoustic tapestry.  These variables hint that whales can combine click patterns in multiple ways, mixing and matching phrases to convey a broad range of information to one another. 
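The cited features map cleanly onto simple computations over click timestamps. A sketch of how one might derive them — the feature definitions are paraphrased from the published sperm whale coda work, so treat the exact formulations as assumptions:

    def coda_features(click_times):
        """Derive length, tempo, and rhythm from a coda's click timestamps (seconds)."""
        icis = [b - a for a, b in zip(click_times, click_times[1:])]  # inter-click intervals
        duration = click_times[-1] - click_times[0]
        return {
            "length": len(click_times),                  # number of clicks
            "tempo": duration,                           # overall duration of the coda
            "rhythm": [ici / duration for ici in icis],  # ICIs normalized to duration
        }

    # Example: a 5-click coda
    print(coda_features([0.00, 0.21, 0.40, 0.62, 0.80]))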

Sony says stop…

Sony Music Group (SMG) is in the process of sending out letters to what MBW understands to be 700 AI developers and music streaming services, declaring that it is “opting out” of having its content used in the training of AI. Also, any AI developer who wants to use SMG’s content will need explicit permission. The letter, obtained by MBW, also states that these companies may have already violated Sony Music’s copyrights. “Due to the nature of your operations and published information about your AI systems, we have reason to believe that you and/or your affiliates may already have made unauthorized uses (including TDM [text and data mining]) of SMG Content in relation to the training, development or commercialization of AI systems,” states the letter. 

Cash diffusion

Stability AI, the maker of the popular AI image generator Stable Diffusion, is in big financial trouble. As The Information reports, the startup is facing a severe cash crunch and is in talks to be sold off. It's a major fall from grace. During the early days of the AI race, former CEO Emad Mostaque raised $100 million for the venture at a $1 billion valuation in 2022. But the company has been bleeding cash ever since, with wages and expenses on computing power greatly exceeding revenue. According to The Information, the company generated less than $5 million in revenue in the first quarter of this year while losing more than $30 million. The ironically named venture is now reportedly sitting on $100 million worth of outstanding bills, suggesting the end of the once hyped-up AI startup could be nigh.

Council of Europe

The Council of Europe has adopted the first-ever international legally binding treaty aimed at ensuring respect for human rights, the rule of law, and democratic legal standards in the use of artificial intelligence (AI) systems. The treaty, which is also open to non-European countries, sets out a legal framework that covers the entire lifecycle of AI systems and addresses the risks they may pose, while promoting responsible innovation. The convention adopts a risk-based approach to the design, development, use, and decommissioning of AI systems, which requires carefully considering any potential negative consequences of using them.

Microsoft's carbon goals

When Microsoft Corp. pledged four years ago to remove more carbon than it emits by the end of the decade, it was one of the most ambitious and comprehensive plans to tackle climate change. Now the software giant's relentless push to be the global leader in artificial intelligence is putting that goal in peril. The Seattle-based company’s total planet-warming impact is about 30% higher today than it was in 2020, according to the latest sustainability report, published Wednesday. That makes getting to below zero by 2030 even harder than it was when the company announced its carbon-negative goal. To meet its goals, the software giant will have to make serious progress very quickly in gaining access to green steel and concrete and less carbon-intensive chips, said Brad Smith, president of Microsoft, in an exclusive interview with Bloomberg Green. “In 2020, we unveiled what we called our carbon moonshot. That was before the explosion in artificial intelligence,” he said.

AI Expo for National Competitiveness

The inaugural “AI Expo for National Competitiveness” was hosted by the Special Competitive Studies Project, better known as the “techno-economic” thinktank created by the former Google CEO and current billionaire Eric Schmidt. The conference’s lead sponsor was Palantir, a software company co-founded by Peter Thiel that’s best known for inspiring 2019 protests against its work with Immigration and Customs Enforcement (ICE) at the height of Trump’s family separation policy. Currently, Palantir is supplying some of its AI products to the Israel Defense Forces. The conference hall was also filled with booths representing the US military and dozens of its contractors, ranging from Booz Allen Hamilton to a random company that was described to me as Uber for airplane software.

UNK_SweetSpecter

Cybersecurity experts have recently uncovered a sophisticated cyber attack campaign targeting U.S.-based organizations involved in artificial intelligence (AI) projects. Targets have included organizations in academia, private industry, and government service. Known as UNK_SweetSpecter, the campaign utilizes the SugarGh0st remote access trojan (RAT) to infiltrate networks. In the past, SugarGh0st RAT has been used to target individuals in Central and East Asia; until now, it has not been widely deployed elsewhere. The specifics of the attack remain under investigation. However, it appears that attackers sent phishing emails with AI-themed lures, with the objective of persuading targets to open an attached ZIP archive.
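As a small defensive illustration: an attachment like the one described can be triaged without extracting it, since ZIP metadata is safe to read. A sketch using only the Python standard library (the suspicious-extension list is illustrative, not from the report):

    import zipfile

    # Extensions commonly abused in phishing archives (illustrative list)
    SUSPICIOUS = {".exe", ".dll", ".lnk", ".js", ".vbs", ".scr", ".bat", ".hta"}

    def triage(path: str) -> list[str]:
        """List archive members with risky extensions — without extracting anything."""
        flagged = []
        with zipfile.ZipFile(path) as zf:
            for name in zf.namelist():
                ext = "." + name.rsplit(".", 1)[-1].lower() if "." in name else ""
                if ext in SUSPICIOUS:
                    flagged.append(name)
        return flagged

    print(triage("attachment.zip"))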