Posts

Showing posts from January, 2024

Figure AI

Figure AI, a company that specializes in the development of humanoid robots, is planning to secure $500 million in a funding round led by Microsoft Corp. and OpenAI, according to Bloomberg. Although the tech giant reportedly will not provide the full amount, it is said to be investing approximately $95 million, alongside OpenAI, which will contribute $5 million. If the funding round is successful, the company could reach a pre-money valuation of $1.9 billion. However, the round is not yet finalized, and there is always the possibility of talks collapsing. Neither the potential investors nor Figure AI has commented on the situation.

Taonaw (ju ta ro)

"The right to privacy, which for me was always about owning content and the choice to share it, goes hand in hand with the right not to have AI bots scrape your content and spread it all over the Internet. "As I said before, the struggle for privacy is not just about not giving away our emails, phone numbers, articles and images. It’s the fact that we don’t have the option to say no anymore. No one bothers asking us - not really. "It’s been this way for so long that when the AI craze came along, it was simply assumed that scraping our content to train it is how things work."

Universal

TikTok is trying to build a music-based business without paying fair value for the music. On AI, TikTok is allowing the platform to be flooded with AI-generated recordings — as well as developing tools to enable, promote and encourage AI music creation on the platform itself — and then demanding a contractual right which would allow this content to massively dilute the royalty pool for human artists, in a move that is nothing short of sponsoring artist replacement by AI. Further, TikTok makes little effort to deal with the vast amounts of content on its platform that infringe our artists’ music, and it has offered no meaningful solutions to the rising tide of content adjacency issues, let alone the tidal wave of hate speech, bigotry, bullying and harassment on the platform. The only means available to seek the removal of infringing or problematic content (such as pornographic deepfakes of artists) is through the monumentally cumbersome and inefficient process which equates to th

Ethan Mollick

"The evidence for AI as a productivity booster has only grown, but many people are still not even trying to use AI (though that isn’t true for all populations - almost 100% of my students this semester reported using LLMs). "Maybe it was the fact that it took a little skill to use LLMs well. Maybe it was that people found using AI unnerving and gave up on it quickly. Maybe it was the fact that organizations frowned on AI use. It certainly didn’t help that the ubiquitous chatbot approach to AI hid a lot of its power, which was only revealed after hours of experimentation. "As a result, AI use was mostly something a small proportion of people in organizations engaged in, often keeping their applications secret to get the maximum value with minimum risk. "The extent to which AI was truly disruptive was hidden."

Neuralink

The startup's goal, the CEO wrote, is to allow somebody to control a "phone or computer, and through them almost any device, just by thinking." That kind of ability will amount to what Musk is calling "Telepathy," which will be Neuralink's first "product." "Imagine if Stephen Hawking could communicate faster than a speed typist or auctioneer," Musk suggested. "That is the goal."

Robot braille reader

Researchers have developed a robotic sensor that incorporates artificial intelligence techniques to read braille at speeds roughly double that of most human readers. The research team, from the University of Cambridge, used machine learning algorithms to teach a robotic sensor to quickly slide over lines of braille text. The robot was able to read the braille at 315 words per minute at close to 90% accuracy. Although the robot braille reader was not developed as an assistive technology, the researchers say the high sensitivity required to read braille makes it an ideal test in the development of robot hands or prosthetics with comparable sensitivity to human fingertips. The results are reported in the journal IEEE Robotics and Automation Letters.

Horizontal Integration

A promising approach to regulating AI is one that would focus on controlling access to the very data that is the lifeblood of AI development. Since data is behind the rise of horizontal integration as well as fuelling the growth and sophistication of AI systems, its concentration in the hands of a few entities can lead to monopolistic dominance. In short, it gives too much power to too few companies.

Proverb 10: AI

"If AI learns arithmetic by counting on its fingers, we are in deep doo doo."  —Fìero   

Rainier Club

“You can imagine building a universe in the last days leading up to an election that looks convincing,” said Ryan Calo, a University of Washington law professor and tech policy expert. “That’s the kind of thing that Jevin and I lose sleep over.” Calo spoke on a panel at the Rainier Club in Seattle on Monday with UW Information School professor Jevin West. They are co-founders of the UW’s Center for an Informed Public, which studies the spread of strategic misinformation. The quality of deepfakes and other AI-generated content will almost certainly get better and harder to detect, West said. This can also have an impact on legitimate content and how people could falsely claim artificiality — in other words, calling something fake when it’s actually real. Above all, West said he’s most concerned about “lowering levels of trust in our system” at a time when public trust in government is at historic lows.

Leaks not leeks

ChatGPT is leaking private conversations that include login credentials and other personal details of unrelated users, screenshots submitted by an Ars reader on Monday indicated. Two of the seven screenshots the reader submitted stood out in particular. Both contained multiple pairs of usernames and passwords that appeared to be connected to a support system used by employees of a pharmacy prescription drug portal. An employee using the AI chatbot seemed to be troubleshooting problems they encountered while using the portal.

That's Entertainment

Motion Picture Magazine is part of a larger operation called Loaded Media. The site bills itself as the “future of news. Boundless, bright, and beyond limits.” The website is full of AI-generated images backing up seemingly AI-generated text, all of it linking to each other like an eternal ouroboros. Like the Motion Picture Magazine site, it contains YouTube videos, advertisements, and Amazon affiliate links. It also features a gallery of a dozen different digital magazines covering topics ranging from the entertainment industry, to travel, to popular science. Those websites are also filled with apparently AI-generated content and images.

Quantum wrestles AI


Carpet weaving meets AI

Aby Mathew is chief operating officer at International Virtual Assistance, a computer software firm that specialises in analysing data. His company is training an artificial intelligence (AI) system to understand the talim code by showing it pictures of carpets and lines of talim code. The AI is still being developed and the process will still require a human to write the code, but Mr Mathew says it should speed up manufacturing by decoding the talim instructions for the weavers. "Weavers will be able to try out new patterns, update classic themes to suit contemporary tastes, and produce one-of-a-kind, custom carpets," says Mr Mathew.

MLA weighs in

In response to community needs, the CCCC and MLA’s AI and Writing Task Force has participated in conversations about the roles of AI and writing when considering the scholarship and creative activities undertaken by faculty. These might include grant applications/proposals, research articles, conference presentations, creative works, and other kinds of professional writing. As a first effort, the task force is sharing a set of preliminary guidelines and inviting community feedback. The aim is to offer provisional guidance for evaluating the use of AI in scholarship and creativity, including basic standards for the ethical use of these technologies. We have drafted these with two audiences in mind: scholars who are preparing materials that are subject to peer review by members of the scholarly community, and reviewers who are looking for guidance about how to approach submissions that have used AI tools as part of the process.

Biological Weapon Attack Planning

LLMs provided some “unfortunate outputs,” but they generally mirror information that’s already available online. This suggests that LLMs do not substantially increase the risks associated with biological weapon attack planning. “Overall, our findings on viability suggest that the tasks involved in biological weapon attack planning likely fall outside the existing capabilities of LLMs.” The study used two unspecified LLMs, and the researchers confirmed that one scored higher than the other. The researchers caution that the study is not conclusive and does not rule out the risk of biological attacks aided by LLMs; a more accurate assessment would require further testing, with more LLMs and more researchers.

Nita Farahany

"It’s been a whirlwind of a year. It’s been an exciting year. It’s been a bit of a terrifying year in many ways. "I think the rapid pace of technological changes in society and the urgent need for ethical and legal guidance [to match] the rapid pace of technological advancement in AI and neurotechnology has made it exciting and terrifying because I’m not sure we will get to a place where we can align the technology in ways that really maximize the benefit for humanity. "And so it’s been a year of me being on the road nonstop, missing my kids, but feeling like there is really important work to do."

AI not teh boss of me?

"Being a shitty boss" is the one AI application that companies are willing to increase their net spending on. No one buys an eyeball-monitoring AI so they can fire a manager. This is the one place where AI is there to augment, rather than replace, an employee. This makes AI-based bossware subtly different from other forms of Taylorism, the "scientific management" fad of the early 20th century that saw management consultants choreographing the postures and movements of workers to satisfy the aesthetic fetishes of their employers.

Silly hoomans

The estate of George Carlin has filed a federal lawsuit against the comedy podcast Dudesy for an hour-long comedy special sold as an AI-generated impression of the late comedian. But a representative for one of the podcast hosts behind the special now admits that it was actually written by a human. In the lawsuit, filed by Carlin manager Jerold Hamza in a California district court, the Carlin estate points out that the special, "George Carlin: I'm Glad I'm Dead" (which was set to "private" on YouTube shortly after the lawsuit was filed), presents itself as being created by an AI trained on decades' worth of Carlin's material. That training would, by definition, involve making "unauthorized copies" of "Carlin's original, copyrighted routines" without permission in order "to fabricate a semblance of Carlin’s voice and generate a Carlin stand-up comedy routine," according to the lawsuit.

Backfire

Researchers programmed various large language models (LLMs) — generative AI systems similar to ChatGPT — to behave maliciously. Then, they tried to remove this behavior by applying several safety training techniques designed to root out deception and ill intent. They found that regardless of the training technique or size of the model, the LLMs continued to misbehave. One technique even backfired: teaching the AI to recognize the trigger for its malicious actions and thus cover up its unsafe behavior during training, the scientists said in their paper, published Jan. 17 to the preprint database arXiv.

Too many fingers

Too many feet...

Code Quality

New GitHub Copilot Research Finds 'Downward Pressure on Code Quality': New research on the effect of AI-powered GitHub Copilot on software development cites some adverse results. The "Coding on Copilot" whitepaper from GitClear sought to investigate the quality and maintainability of AI-assisted code compared to what would have been written by a human. In other words: "Is it more similar to the careful, refined contributions of a Senior Developer, or more akin to the disjointed work of a short-term contractor?"

Robert Lemos

"The end users of LLMs typically do not have a lot of information on how providers collected and cleaned the data used to train their models, and the model developers typically conduct only a shallow evaluation of the data because the volume of information is just too vast. "This lack of visibility into how artificial intelligence (AI) makes decisions is the root cause of more than a quarter of risks posed by LLMs, according to a new report from the Berryville Institute of Machine Learning (BIML) that describes 81 risks associated with LLMs. "The goal of the report, 'An Architectural Risk Analysis of Large Language Models,' is to provide CISOs and other security practitioners with a way of thinking about the risks posed by machine learning (ML) and AI models, especially LLMs and the next-generation large multimodal models (LMMs), so they can identify those risks in their own applications, says Gary McGraw, co-founder of BIML."

Lawsuit

George Carlin’s estate is suing over the release of a comedy special that uses generative artificial intelligence to mimic the deceased comedian’s voice and style of humor. The lawsuit, filed in California federal court Thursday, accuses the creators of the special of using George Carlin’s entire body of work, five decades of comedy routines, without consent or compensation to train an AI chatbot, which wrote the episode’s script. It also takes issue with the use of his voice and likeness for promotional purposes.

Cancer Discovery

In a groundbreaking study published on January 18, 2024, in Cancer Discovery, scientists at University of California San Diego School of Medicine leveraged a machine learning algorithm to tackle one of the biggest challenges facing cancer researchers: predicting when cancer will resist chemotherapy. All cells, including cancer cells, rely on complex molecular machinery to replicate DNA as part of normal cell division. Most chemotherapies work by disrupting this DNA replication machinery in rapidly dividing tumor cells. While scientists recognize that a tumor's genetic composition heavily influences its specific drug response, the vast multitude of mutations found within tumors has made prediction of drug resistance a challenging prospect. The new algorithm overcomes this barrier by exploring how numerous genetic mutations collectively influence a tumor's reaction to drugs that impede DNA replication. Specifically, they tested their model on cervical cancer tumor

Alex Heath

"'We’ve come to this view that, in order to build the products that we want to build, we need to build for general intelligence,' Zuckerberg tells me in an exclusive interview. 'I think that’s important to convey because a lot of the best researchers want to work on the more ambitious problems.' "Here, Zuckerberg is saying the quiet part aloud. The battle for AI talent has never been more fierce, with every company in the space vying for an extremely small pool of researchers and engineers. "Those with the needed expertise can command eye-popping compensation packages to the tune of over $1 million a year. CEOs like Zuckerberg are routinely pulled in to try to win over a key recruit or keep a researcher from defecting to a competitor."

Robert Evans

"The more you dig into Andreessen’s theology, the more it starts to seem like a form of technocapitalist Christianity. AI is the savior, and in the case of devices like the Rabbit, it might literally become our own, personal Jesus. And who, you might ask, is God? "'We believe the market economy is a discovery machine, a form of intelligence — an exploratory, evolutionary, adaptive system,' Andreessen writes. "This is the prism through which these capitalists see artificial intelligence. This is why they are choosing to bring AGI into being. All of the jobs lost, all of the incoherent flotsam choking our internet, all of the Amazon drop shippers using ChatGPT to write product descriptions, these are but the market expressing its will.  "Artists must be plagiarized and children presented with hours of procedurally generated slop and lies on YouTube so that we can, one day, reach the promised land: code that can outthink a human being." 

Doubao

OpenAI halted ByteDance's account due to uncertainties surrounding the usage of GPT’s data. OpenAI explicitly forbids users from creating competing AI models by using the output generated by ChatGPT. ByteDance denied any wrongdoing. As reported by CNN, the company’s spokesperson claimed that their engineering team uses OpenAI’s GPT, along with other third-party models, to a very limited extent during the evaluation and testing process. According to the spokesperson, the company uses GPT’s API to power products and features in markets outside China. The company uses its own self-developed AI model to power a ChatGPT-like tool called Doubao, which is available only in China.

Human cognitive augmentation

This paper investigates human cognitive augmentation due to using ChatGPT by presenting the results of two experiments comparing responses created with ChatGPT to responses created without it. We find that using ChatGPT does not always result in cognitive augmentation and does not yet replace human judgement, discernment, and evaluation in certain types of tasks. In fact, ChatGPT was observed to mislead users, resulting in negative cognitive augmentation.

Weird chatbots

There are over three million AI chatbots, but OpenAI has already banned a handful of GPTs that it has deemed so strange, so taboo, that they should not be allowed to exist in its new world. We made sure to include some of those on this list. It was inevitable that folks were going to create some fairly strange robot assistants, given free rein, but now that the GPT Store is live, it’s out there for anyone to see. However, some of these GPTs are probably better left unseen.

Crypto companies

Companies that once serviced the boom in cryptocurrency mining are pivoting to take advantage of the latest data gold rush. Canadian company Hive Blockchain changed its name in July to Hive Digital Technologies and announced it was pivoting to AI. “Hive has been a pioneering force in the cryptocurrency mining sector since 2017. The adoption of a new name signals a significant strategic shift to harness the potential of GPU Cloud compute technology, a vital tool in the world of AI, machine learning and advanced data analysis, allowing us to expand our revenue channels with our Nvidia GPU fleet,” the company said in its announcement at the time.

AI not parrot 🦜

A theory developed by Sanjeev Arora of Princeton University and Anirudh Goyal, a research scientist at Google DeepMind, suggests that the largest of today’s LLMs are not stochastic parrots. The authors argue that as these models get bigger and are trained on more data, they improve on individual language-related abilities and also develop new ones by combining skills in a manner that hints at understanding — combinations that were unlikely to exist in the training data. This theoretical approach, which provides a mathematically provable argument for how and why an LLM can develop so many abilities, has convinced experts, including Hinton. And when Arora and his team tested some of its predictions, they found that these models behaved almost exactly as expected. From all accounts, they’ve made a strong case that the largest LLMs are not just parroting what they’ve seen before.

iOS 17.4 beta

Apple is widely expected to unveil major new artificial intelligence features with iOS 18 in June. Code found by 9to5Mac in the first beta of iOS 17.4 shows that Apple is continuing to work on a new version of Siri powered by large language model technology, with a little help from other sources. In fact, Apple appears to be using OpenAI’s ChatGPT API for internal testing to help the development of its own AI models. According to this code, iOS 17.4 includes a new SiriSummarization private framework that makes calls to OpenAI’s ChatGPT API, apparently for internal testing of its new AI features.

OpenAI's U-Turn

Wealthy tech entrepreneurs including Elon Musk launched OpenAI in 2015 as a nonprofit research lab that they said would involve society and the public in the development of powerful AI, unlike Google and other giant tech companies working behind closed doors.  In line with that spirit, OpenAI’s reports to US tax authorities have from its founding said that any member of the public can review copies of its governing documents, financial statements, and conflict of interest rules. But when WIRED requested those records last month, OpenAI said its policy had changed, and the company provided only a narrow financial statement that omitted the majority of its operations.

Savvy Humans

People who were more skeptical of human-caused climate change or the Black Lives Matter movement and who took part in conversation with a popular AI chatbot were disappointed with the experience but left the conversation more supportive of the scientific consensus on climate change or BLM. This is according to researchers studying how these chatbots handle interactions from people with different cultural backgrounds. Savvy humans can adjust to their conversation partners' political leanings and cultural expectations to make sure they're understood, but more and more often, humans find themselves in conversation with computer programs, called large language models, meant to mimic the way people communicate.

FTC Inquiry

Today, the Federal Trade Commission announced an inquiry, called a 6(b) study, into generative artificial intelligence investments and partnerships with major cloud service providers. The agency will review these corporate arrangements with AI providers to “build a better internal understanding of these relationships and their impact on the competitive landscape.” The FTC sent compulsory information requests to Alphabet, Amazon, Anthropic, Microsoft, and OpenAI, the maker of ChatGPT. Public Knowledge applauds this FTC action to investigate potential competitive concerns where Big Tech and AI meet and looks forward to reviewing the results of the study.

Emily Bender

“Who are they to be speaking for all of humanity?” asked Emily M. Bender, raising the question to the tech companies in a conversation with AIM. “The handful of very wealthy (even by American standards) tech bros are not in a position to understand the needs of humanity at large,” she bluntly argued. The vocal, straightforward, and candid computational linguist is not exaggerating as she calls out the likes of OpenAI. Currently, Sam Altman is trying to solve issues of humanity, including poverty, hunger, and climate catastrophes, through AI tools like ChatGPT, which was developed with the help of Kenyan sweatshops, has been sued for violating privacy laws, continues to pollute the internet, and is a source of misinformation.

Power demands

The International Energy Agency (IEA) anticipates substantial increases in energy consumption from crypto mining and artificial intelligence (AI) in the coming years, with projections suggesting a more than 30% surge by 2026. The IEA’s Electricity 2024 report outlines the expected growth in energy usage through 2026, highlighting that power generation, including for crypto mining purposes, remains a major contributor to carbon emissions but is leading the transition to net-zero emissions.

Data dumped?

The National Security Agency (NSA) has admitted to buying records from data brokers detailing which websites and apps Americans use, US Senator Ron Wyden (D-Ore.) revealed Thursday. This news follows Wyden's push last year that forced the FBI to admit that it was also buying Americans' sensitive data. Now, the senator is calling on all intelligence agencies to "stop buying personal data from Americans that has been obtained illegally by data brokers."

Copyright Office

Lobbyists for Microsoft, Google, and the music and news industries have asked to meet with Shira Perlmutter, the register of copyrights, and her staff. Thousands of artists, musicians and tech executives have written to the agency, and hundreds have asked to speak at listening sessions hosted by the office. The attention stems from a first-of-its-kind review of copyright law that the Copyright Office is conducting in the age of artificial intelligence. The technology — which feeds off creative content — has upended traditional norms around copyright, which gives owners of books, movies and music the exclusive ability to distribute and copy their works.

Alphabet Inc

The cloud computing arm of Alphabet Inc (GOOGL.O) said on Thursday it had formed a partnership with startup Hugging Face to ease artificial intelligence (AI) software development in the company's Google Cloud. Following the launch of generative AI tools focused on consumers from the likes of Microsoft (MSFT.O) and Google parent Alphabet, large and small businesses have become interested in deploying AI for their own purposes - such as boosting an existing product or serving an internal need.  The need to create AI software that is optimized for specific tasks and needs prompted the partnership.

Tay

A variety of sexually inappropriate and offensive AI images of Taylor Swift are making the rounds on X, formerly Twitter, to the disgust of many people on the platform. AI images are pictures generated through artificial intelligence software using a text prompt. This can be done without a person's consent. Users on the platform have raised fears about how easily AI can be used to post fake images, violating the subject's privacy. Some are also taking action to report the posts, or attempting to bury the issue as a trending topic.

Tim Harper

“The way in which China is beginning to administer its tactics indicates an increasing use of generative AI tools that are more sophisticated at getting the nuance of communication in another language correct,” said Tim Harper, a senior policy analyst for democracy and elections at the Washington-based Center for Democracy and Technology. “We talk a lot about the ways that AI democratizes the ability to spread mis- and disinformation, but at the nation-state level, where actors are using extremely sophisticated techniques with high budgets, we’re also seeing that tools are becoming much more sophisticated at micro-targeting toward specific groups.”

SAP

Technology companies are investing heavily in artificial intelligence, and some workers are already paying the price. SAP is the latest big tech player to cut jobs as it pours money into AI, with the German software giant announcing this week that it is investing more than $2 billion to integrate artificial intelligence into its business as part of what it called a "transformation program." At the same time, the company said Tuesday it plans to restructure 8,000 roles. Some of the workers will be laid off, while others will be re-trained to work with AI.

robots.txt

Are you a content creator or a blog author who generates unique, high-quality content for a living? Have you noticed that generative AI platforms like OpenAI or CCBot use your content to train their algorithms without your consent? Don’t worry! You can block these AI crawlers from accessing your website or blog by using the robots.txt file.
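As a rough sketch, a robots.txt at the root of your site might look like the following. GPTBot is the user-agent string OpenAI publishes for its web crawler and CCBot is Common Crawl's; check each crawler operator's documentation for current user-agent names, since these can change.

```txt
# Block OpenAI's crawler from the entire site
User-agent: GPTBot
Disallow: /

# Block Common Crawl's CCBot, whose archives are widely used as training data
User-agent: CCBot
Disallow: /

# All other crawlers may access the site normally
User-agent: *
Allow: /
```

Keep in mind that robots.txt is honored voluntarily: it asks well-behaved crawlers to stay away, but it does not technically prevent scraping.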

Proverb 9: AI

Rummaging in our souls, we often dig up something that ought to have lain there unnoticed. —Tolstoy

LLM creativity

In the field of natural language processing, the rapid development of large language models (LLMs) has attracted more and more attention. LLMs have shown a high level of creativity in various tasks, but the methods for assessing such creativity are inadequate. The assessment of LLM creativity needs to consider differences from humans, requiring multi-dimensional measurement while balancing accuracy and efficiency. This paper aims to establish an efficient framework for assessing the level of creativity in LLMs.

NCSC

Ransomware continues to be the most acute cyber threat faced by businesses and organizations in the UK – a global problem that artificial intelligence (AI) will further exacerbate, a new report published by the National Cyber Security Centre (NCSC) said. “AI is already being used in malicious cyber activity and will almost certainly increase the volume and impact of cyberattacks – including ransomware – in the near term,” said NCSC, which is part of the UK’s cyber spy agency GCHQ. The report concluded that AI lowers the barrier of entry to novice cybercriminals, hackers-for-hire, and hacktivists. 

Firefly

One of the big controversies in AI image generation is copyright. Models such as Stable Diffusion and Midjourney were trained on billions of images scraped from the internet without artists' permission. But Adobe had a ready-made solution at arm's reach: its own royalty-free image library, Adobe Stock, which contains over 200 million photos, videos, illustrations and 3D assets. However, while this might make Adobe Firefly commercially safe, some Adobe Stock contributors are not happy that they weren't consulted and still don't know how they'll be compensated.

GeoAI

The past decade has witnessed the rapid development of geospatial artificial intelligence (GeoAI), primarily due to ground-breaking achievements in deep learning and machine learning. A growing number of scholars from cartography have successfully demonstrated that GeoAI can accelerate previously complex cartographic design tasks and even enable cartographic creativity in new ways. Despite the promise of GeoAI, researchers and practitioners have growing concerns about the ethical issues of GeoAI for cartography.

EU AI

European Union lawmakers scrambling for the bloc to be a contender in the generative AI race are presenting a package of support measures aimed at charging up homegrown AI startups and scale-ups. Artificial intelligence technologies — and especially generative AI models, which are trained on very large data-sets and have capabilities such as being able to parse natural language and produce text, imagery or audio on demand — are being viewed as a key strategic area for the bloc’s future competitiveness. However, Commission officials concede lawmakers have been caught on the hop, somewhat, when it comes to compute infrastructure that’s fit for training such AIs.

Ferris State

Ann and Fry are AIs created by the Michigan-based university, which is enrolling them in courses. The project is a mix of researching artificial intelligence and online classrooms while getting a peek into a typical student’s experience. “As the student and external environment changes and evolves, we need to make sure we’re prepared and ready to deliver educational experiences that are just as impactful,” said Kasey Thompson, Ferris State’s special assistant to the president for innovation and entrepreneurship.

Job loss? or what?

In their study, the MIT researchers sought to move beyond what they characterize as “task-based” comparisons and assess how feasible it is that AI will perform certain roles — and how likely businesses are to actually replace workers with AI tech. Contrary to what one (including this reporter) might expect, the MIT researchers found that the majority of jobs previously identified as being at risk of AI displacement aren’t, in fact, “economically beneficial” to automate — at least at present.

Michael J Casey

"A path toward AI models running on a more decentralized system in which training data is used only if there is consent from its owners:   "For that we’ll need the kind of decentralized tracking approaches that blockchain could enable, both to give assurances to consenting owners that their data and content is being used as described and to ensure that vital information isn’t subject to AI-driven 'deep fake' tricks.  "We need a system of verification in which people can trust a censorship-resistant, open-source protocol rather than the promises of Big Tech that they’ll 'do the right thing'."

Job Help

A Seattle-area nonprofit helping disadvantaged job seekers around the world is adding generative artificial intelligence to the toolkit for its participants. The Global Mentorship Initiative is teaching college students and recent graduates how to use digital tools that are powered by technology including ChatGPT and GPT-4 to more easily and successfully land jobs, starting with the job search and application all the way through the interview process to sending thank you emails. 

Furby

The NSA has finally released a treasure trove of documents about the brief Furby panic of 1998 and 1999 at America’s top spy agency, in which it banned the toy from its offices as a potential spy device, discussed the toy’s ability to “learn” using an “artificial intelligent chip onboard” on an internal listserv, and ultimately was embarrassed by attention from the press after an employee leaked news of the ban to The Washington Post. The NSA’s interest in and concern with the spying capabilities of the Furby—the iconic furry robot toy—has been documented over the years by various news outlets, YouTube channels, and the Federal Aviation Administration (which banned Furby operation during takeoff and landing). But previous write-ups rely on a brief news story in the Washington Post from January 13, 1999 called “A TOY STORY OF HAIRY ESPIONAGE,” which noted that Furby had been banned from the NSA’s offices in Maryland in part because they were worried that NSA employees would

Conversation? or what?

Think of it like an early-in-their-career employee, or an intern, and think of yourself as its manager . When you ask it to do something for you, the quality of what you get back is a function of how you make the request.  And knowing how to prompt, says Jennifer Marsman, Principal Engineer in Microsoft’s Office of the Chief Technology Officer, “is the key to unlocking the power and potential of generative AI.”  Our research shows that these seven tips are the most important things to keep in mind when talking to Copilot.

Live? or what?

In the heart of London, a new kind of show is unfolding. Elvis Presley, the king of rock ‘n’ roll, is to take to the stage once more – not in flesh and blood, of course, but as a hologram. This spectacle, titled Elvis Evolution, is more than just a concert and offers a distinct experience from the likes of Abba’s digital avatars: it’s a testament to how artificial intelligence (AI) is reshaping our experience of music and performance.

Chrome (M121)

"Over the last few years, we’ve brought the latest machine learning and AI technologies into Chrome to make searching the web easier, safer and more accessible .  "We started with improving practical, everyday tasks, like helping you add real-time captions to videos, better detect malicious sites , manage permission prompts and generate the key points of a webpage . "Starting with today’s release of Chrome (M121), we're introducing experimental generative AI features to make it even easier and more efficient to browse — all while keeping your experience personalized to you."

Fool

Veteran Wedbush tech analyst Dan Ives said generative AI could spark the "fourth industrial revolution," adding "AI is the most transformational technology we have seen since the internet started to take shape." That's a bold view but one that's spreading through tech like wildfire.   Even the most conservative estimates suggest that generative AI could generate more than $1 trillion in incremental spending over the coming decade.  Companies positioned to profit from this paradigm shift will capture part of the windfall, enriching shareholders along the way.

David Colarusso

"I teach law students about technology, and recently I created the LIT Prompts browser extension to help them explore the use of Large Language Models (LLMs) and prompt engineering . You can download the extension for Firefox or for Chrome . "Every weekday for the next 10 weeks, I'll update this page with new prompt patterns and invite folks to play with them inside LIT Prompts.   "We'll be plugging into the technology that runs tools like ChatGPT, giving you a level of control that most people don't see. You won't have to do any 'coding,' but we will think like coders. I'm still suspicious of the term prompt engineering, esp. when used as a job title. The idea that an entire job could be built around 'talking' to an AI seems a bit too loosey goosey. So, we'll try to introduce some rigor.  "We'll aim to think about workflows and systems where writing prompts is just part of a larger endeavor. You'll be su

GDC

Game developers are overwhelmingly concerned about the ethics of using AI. The organizers of the Game Developers Conference (GDC) have released their annual State of the Game Industry survey , in which 84 percent of the 3,000-plus respondents said they were somewhat or very concerned about the ethics of using generative AI. The survey’s results elaborated on why developers are concerned, citing reasons that include the potential for AI to replace workers and exacerbate layoffs or expose developers to possible copyright infringement complaints.  Developers are also worried that AI programs could scrape data from their own games without their consent.

ElevenLabs

Within just two years of its inception, ElevenLabs, the AI voice startup founded by former Google and Palantir employees, has hit unicorn status. The company today announced it has raised $80 million in a series B round of funding, growing its valuation ten-fold to $1.1 billion. The investment has been co-led by existing investors Andreessen Horowitz (a16z), former GitHub CEO Nat Friedman and former Apple AI leader Daniel Gross, with participation from Sequoia Capital and SV Angel. It comes six months after the $19 million series A round that valued the company at about $100 million.

Seldon

As the market and Seldon have matured, it has become clear the value the Core and Alibi projects deliver is extensive, and no longer commensurate with the licensing and pricing model.   To enable us to deliver even more value across our products, have better control over how they are commercialized, and reduce the threat of others benefitting from our efforts, we have decided it is appropriate to change the license of Core and Alibi.  Our entry level serving product, MLServer, will remain open source under Apache 2.0. 

Tracy Durnell

"Invasive species disrupt ecosystems because they did not evolve in balance with the other species .  "Native species have adapted to fill specific niches, but the constraints they’ve accepted to fit that niche in the ecosystem do not also bind invasive species.  "Not limited by the same factors, they reproduce faster and crowd out the native species. Time and again, we’ve seen invasive species introduced to an ecosystem outcompete the more specialized native species, sometimes even driving them to extinction. "Likewise, generated imagery and text is not bound by human limitations of productivity. As generated material rapaciously populates the Internet, human-created artworks will be outcompeted by generated graphics on social media platforms by virtue of volume."

Austen on 'fine china'

Asked Austen about 'fine china' (instead of just 'China'): "Ah, fine china, a delightful subject for discussion, indeed! In my time, fine china was considered an essential part of a well-appointed household. It represented elegance, refinement, and a certain level of social status. "The artistry and craftsmanship that went into creating fine china were truly remarkable. Delicate porcelain or bone china, adorned with intricate patterns and delicate motifs, it was a sight to behold. Each piece was meticulously hand-painted and meticulously designed, their beauty enhancing any table setting. "Fine china served not only as a vessel for enjoying meals but also as decorative pieces. Displayed in the dining room or drawing room, it added a touch of sophistication and grace to the ambiance. Whether decadent tea sets, dainty dessert plates, or majestic dinner services, owning fine china was a matter of pride for many households. "Furthermore, fine china h

Virtual Kidnapping

Most concerning is the potential for ChatGPT and other AI tools to supercharge virtual kidnapping by making it easier for fraudsters to find the ideal victims.   Advertisers and marketers have for years been using “propensity modelling” techniques to get the right messages to the right people at the right time. Generative AI (GenAI) could help scammers to do the same, by searching for those individuals most likely to pay up if exposed to a virtual kidnapping scam.  They could also search for people within a specific geographical area, with public social media profiles and of a specific socio-economic background.

AI Consumption

A recent study found that training a large neural network with 175 billion parameters consumed 1287 MWh of electricity. It resulted in carbon emissions of 502 metric tons, equivalent to driving 112 gasoline-powered cars for a year. In the United States, data centers where AI models are trained are already major consumers of electricity, representing approximately 2% of the nation's total usage. These centers demand significantly more energy than standard office spaces, requiring 10 to 50 times more power per unit of floor area. Another study highlights the resource demands of AI models like ChatGPT, likening its water consumption to "drinking" a 500ml bottle of water for every 20-50 interactions it handles, with its successor, GPT-4, demonstrating an even higher demand.
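The equivalences quoted above are easy to sanity-check. A back-of-the-envelope sketch in Python, using only the figures as reported in the study:

```python
# Figures as reported above: training consumed 1287 MWh of electricity
# and emitted 502 metric tons of CO2, said to equal driving 112
# gasoline-powered cars for a year.
energy_mwh = 1287
emissions_t = 502
cars = 112

per_car_t = emissions_t / cars                     # implied tons CO2 per car per year
kg_co2_per_mwh = emissions_t * 1000 / energy_mwh   # implied grid carbon intensity

print(round(per_car_t, 2))    # ≈ 4.48 t CO2 per car per year
print(round(kg_co2_per_mwh))  # ≈ 390 kg CO2 per MWh
```

The implied figure of roughly 4.5 tons of CO2 per car per year is in line with typical passenger-vehicle estimates, so the comparison is internally consistent.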

Nightshade

It’s here: months after it was first announced, Nightshade, a new, free software tool allowing artists to “poison” AI models seeking to train on their works, is now available for artists to download and use on any artworks they see fit. Developed by computer scientists on the Glaze Project at the University of Chicago under Professor Ben Zhao, the tool essentially works by turning AI against AI. It makes use of the popular open-source machine learning framework PyTorch to identify what’s in a given image, then applies a tag that subtly alters the image at the pixel level so other AI programs see something totally different than what’s actually there.

dean dot bot kaput

The artificial intelligence company OpenAI banned the developer of a bot impersonating long-shot Democratic presidential hopeful Rep. Dean Phillips - the first action that the maker of ChatGPT has taken in response to what it sees as a misuse of its AI tools in a political campaign. Dean.Bot was the brainchild of Silicon Valley entrepreneurs Matt Krisiloff and Jed Somers, who had started a super PAC supporting Phillips (Minn.) ahead of the New Hampshire primary on Tuesday. The PAC had received $1 million from hedge fund manager Bill Ackman, the billionaire activist who led the charge to oust Harvard University president Claudine Gay.

George Carlin

A new George Carlin special is out. But it's not really George Carlin, the brilliant and audacious comedian who died in 2008 , just days after he was named to receive the Mark Twain Prize for American Humor. Of course, he is also remembered for uttering what are called the "Seven Dirty Words" in a routine the FCC considered indecent and said could not be broadcast without being beeped. This new so-called special is an AI production on YouTube presented by Dudesy, and calls itself a comedy AI. Dudesy calls the production, "George Carlin: I'm Glad I'm Dead," even as Dudesy's first words are, "What you're about to hear is not George Carlin..."

Mustafa Suleyman

At the World Economic Forum meeting at Davos this week, Suleyman said he thinks AI tech will soon reach the point where it could dream up a company, project-manage it, and successfully sell products.  This still-imaginary AI-ntrepreneur will certainly be able to do so by 2030. He's also sure that these AI powers will be "widely available" for "very cheap" prices, potentially even as open-source systems, meaning some aspects of these super smart AIs would be free. 

DeepAI's Austen on China

"China, a distant and fascinating land, has long piqued the interest of many, including myself. Alas, my knowledge of this vast empire is somewhat limited, as I have not had the opportunity to personally experience its wonders. "From the accounts I have heard and the books I have read, China is a place of great history and cultural richness. Its people have cultivated a society steeped in tradition and respect. Their values, such as filial piety and a reverence for elders, hold parallels to the virtues we esteem in our own Regency-era society. It warms my heart to think that such values are cherished beyond the borders of our familiar world. "The arts of China, too, have garnered much admiration. Calligraphy, with its graceful strokes and evocative expressions, is considered a true art form. The beauty of their porcelain, intricately decorated with scenes of life and nature, is renowned far and wide. How I would relish the chance to behold these treasures with my own eye

Synthetic Memories

How would you feel if an image-generating bot making use of artificial intelligence was able to visualize your memory for you? Welcome to the concept of synthetic memories. They aren’t photographs. They’re visual representations of what we can remember. And, using futuristic-sounding technology that’s actually available in the present, they can help us piece together elements of our past. 

Scrambled not shirred

The internet's steady fall into the AI-garbled dumpster continues. As Vice reports, a recent study conducted by researchers at the Amazon Web Services (AWS) AI Lab found that a "shocking amount of the web" is already made up of poor-quality AI-generated and translated content. The paper is yet to be peer-reviewed, but "shocking" feels like the right word. According to the study, over half — specifically, 57.1 percent — of all of the sentences on the internet have been translated into two or more other languages. The poor quality and staggering scale of these translations suggest that large language model (LLM)-powered AI models were used to both create and translate the material. The phenomenon is especially prominent in "lower-resource languages," or languages with less readily available data with which to more effectively train AI models. The translated material gets worse each time — and as a result, entire regions of the web are f

More Guidance

"Do you have large PDFs, Excel spreadsheets, CSV files or mountains of data you need to analyze quickly and effectively. "Using AI can quickly provide results you can use to track trends, opportunities and issues that may be happening in your business market or sector.  "Today in the fast-paced world of business technology and information, the ability to analyze data effectively is more crucial than ever.  "This guide will walk you through how to use ChatGPT for data analysis and research enabling you to enhance your data analysis skills, making the process more efficient and insightful. "As well as saving you precious time and resources..."

Guidance from Washington [pdf]

Artificial Intelligence (AI) is emerging rapidly across industries—including K–12 education.  [pdf] To support educators and education leaders in equitable and inclusive uses of AI in classrooms across Washington, the Office of Superintendent of Public Instruction (OSPI) presents this initial guidance, which emphasizes a human-centered approach to using this ever-evolving tool.

DPD

The delivery firm DPD has disabled part of its artificial intelligence (AI) powered online chatbot after a disgruntled customer was able to make it swear and criticise the company. Musician Ashley Beauchamp, 30, was trying to track down a missing parcel but was having no joy in getting useful information from the chatbot.  Fed up, he decided to have some fun instead and began to experiment to find out what the chatbot could do.  Beauchamp said this was when the “chaos started”.

Mr Chips

OpenAI CEO Sam Altman is reportedly seeking billions of dollars in capital to build out a network of AI chip fabs. Citing multiple unnamed sources familiar with the matter, Bloomberg said on Friday Altman has approached several outfits, including Abu Dhabi-based G42 and Japan's Softbank, to help make it happen. Microsoft, OpenAI's biggest champion, has reportedly shown interest in the project, which would fund the construction and operation of chip factories around the world to support growing demand for neural network accelerators. The goal of the program appears to be this: build enough assembly lines to ensure there is a healthy supply of AI processors to meet demand.

Safety of Autonomous Vehicles

“Simulations using Sayan’s algorithm show that the alignment [of an airplane prior to landing] does improve,” he [Dragos] said.   The next step, planned for later this year, is to employ these systems while actually landing a Boeing experimental airplane.  One of the biggest challenges, [Dragos] Margineantu noted, will be figuring out what we don’t know — “determining the uncertainty in our estimates” — and seeing how that affects safety. “Most errors happen when we do things that we think we know — and it turns out that we don’t.”

Chile's Search Plan

On Monday 15 January, at the inauguration of the “Congress of the Future” in Santiago, President Gabriel Boric stated that artificial intelligence, the theme of the 13th version of the conference, “will play an important role in the search for our missing detainees.”   He was referring to the Search Plan to find over 1,000 individuals who were victims of the Augusto Pinochet dictatorship (1973-1990), which his Administration presented on August 30, 2023, on the eve of the September 11 commemoration of the 50th anniversary of the coup d’état that ousted Salvador Allende , the socialist president.

Ray-Ban Meta

The reason all these wearables will fail to catch on as AI interface devices is that the glasses form factor is obviously superior.   The temple or arm of these glasses is perfectly positioned to drop high-quality audio into the ears that sounds great to the wearer and is close to silent to others nearby. And they're large enough to hold batteries, antennae and other electronic components.

Reading Coach

Microsoft today made Reading Coach, its AI-powered tool that provides learners with personalized reading practice, available at no cost to anyone with a Microsoft account. As of this morning, Reading Coach is accessible on the web in preview — a Windows app is forthcoming. And soon (in late spring), Reading Coach will integrate with learning management systems such as Canvas, Microsoft says.

AlphaFold

Researchers have used the protein-structure-prediction tool AlphaFold to identify hundreds of thousands of potential new psychedelic molecules — which could help to develop new kinds of antidepressant.   The research shows, for the first time, that AlphaFold predictions — available at the touch of a button — can be just as useful for drug discovery as experimentally derived protein structures, which can take months, or even years, to determine.

Sleeper Agents [pdf]

"Humans are capable of strategically deceptive behavior: behaving helpfully in most situations, but then behaving very differently in order to pursue alternative objectives when given the opportunity.  "If an AI system learned such a deceptive strategy, could we detect it and remove it using current state-of-the-art safety training techniques? To study this question, we construct proof-of-concept examples of deceptive behavior in large language models (LLMs).  [pdf] "For example, we train models that write secure code when the prompt states that the year is 2023, but insert exploitable code when the stated year is 2024. We find that such backdoor behavior can be made persistent, so that it is not removed by standard safety training techniques, including supervised fine-tuning, reinforcement learning, and adversarial training (eliciting unsafe behavior and then training to remove it).  "The backdoor behavior is most persistent in the largest models and in models trai

DRAM can haz upgrade?

A new research report, quoting Microsoft no less, says it might be time to bump up that baseline if you want to access the latest “AI” features, like the newly expanded Copilot tool in Windows. Market research firm TrendForce (hat tip to PC Gamer) reports that Microsoft is setting the “baseline for DRAM in AI PCs at 16GB.” That doesn’t appear to be a hard limit — no such requirement appears in Microsoft’s technical or business documentation for the tool, and we haven’t heard any reports of Copilot refusing to run on lower-spec PCs. Perhaps that’s not overly surprising, as the bulk of Copilot’s capabilities still rely on remote servers and services, including GPT and DALL-E. Hopefully new designs, featuring the super-skinny CAMM modules, might make laptop memory upgrades more common.

Fairly Trained

As creators and IP holders argue with generative AI companies over the correct protocol for using data to train generative AI systems, a new non-profit, Fairly Trained, is offering certifications to companies that train their generative AI models on “consented” data. “We believe consumers deserve to know which companies think creator consent is important and which don’t. So, we certify AI companies that don’t use any copyrighted work without a license,” the non-profit said on its homepage.

DermaSensor

Medtech company DermaSensor has secured FDA clearance for a handheld device, powered by artificial intelligence (AI), that can be used to detect skin cancer at the point of care. The eponymous device, the first FDA-cleared skin cancer device for primary care use, combines a light emission technology called elastic scattering spectroscopy (ESS), which measures the physical properties of tissue, with an AI algorithm to interpret the data. It can deliver a result within seconds when the optical probe on the device is applied to a suspect lesion on the skin, making it suitable for use by primary care physicians, according to DermaSensor. It is viewed as an addition to the visual assessment by a doctor that is the current standard in primary care.

SLIM

The engineers behind SLIM have used machine learning to give the spacecraft “smart eyes.”   Stored on its data disks are high resolution maps of the moon’s craters captured from lunar orbit by JAXA’s Kaguya mission and NASA’s Lunar Reconnaissance Orbiter spacecraft.  As SLIM flies toward the planned landing site, its cameras will start snapping photographs of the terrain below. SLIM’s onboard computer will then run a rapid image matching algorithm to locate the craters visible in the photographs on the lunar maps.  This allows the spacecraft to swiftly identify its location to a high degree of precision and then autonomously adjust course until it is exactly above the target landing site.
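The crater-matching step described above can be illustrated with a toy version of the general technique: brute-force normalized cross-correlation of a camera frame against a stored map. To be clear, this is only an illustrative sketch of image matching in general, not JAXA's actual flight algorithm, which is far faster and more sophisticated.

```python
import numpy as np

def locate(template: np.ndarray, lunar_map: np.ndarray) -> tuple[int, int]:
    """Find the (row, col) offset in `lunar_map` where `template`
    (the camera frame) best matches, via brute-force normalized
    cross-correlation. Illustrative only; real systems use fast
    feature- or FFT-based matching."""
    th, tw = template.shape
    mh, mw = lunar_map.shape
    t = template - template.mean()
    best_score, best_pos = -np.inf, (0, 0)
    for y in range(mh - th + 1):
        for x in range(mw - tw + 1):
            patch = lunar_map[y:y + th, x:x + tw]
            p = patch - patch.mean()
            denom = np.sqrt((t ** 2).sum() * (p ** 2).sum())
            if denom == 0:
                continue  # flat patch, no correlation defined
            score = (t * p).sum() / denom
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos
```

Once the frame is located on the map, the offset between the matched position and the planned landing site gives the course correction, which is the essence of what the article describes.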

ASU

OpenAI on Thursday announced its first partnership with a higher education institution. Starting in February, Arizona State University will have full access to ChatGPT Enterprise and plans to use it for coursework, tutoring, research and more. The partnership has been in the works for at least six months, since ASU Chief Information Officer Lev Gonick first visited OpenAI’s HQ; before that, university faculty and staff had already been using ChatGPT and other artificial intelligence tools, Gonick told CNBC in an interview.

Anthropic job

"As a product policy analyst, you will support our product offerings with a public policy perspective, serving as a bridge between our product team and the policy community.   "You will work cross functionally to monitor and comply with global regulations; create written materials outlining our responsible product approach (e.g.,policy briefs, white papers and government filings); and support product deployment.  "The role is highly cross functional and requires an ability to take initiative, work with diverse internal and external stakeholders, and execute in a fast paced environment on our highest priorities."

Bill Gates

“As we had [with] agricultural productivity in 1900, people were like ‘Hey, what are people going to do?’ In fact, a lot of new things, a lot of new job categories were created and we’re way better off than when everybody was doing farm work,” Gates said. “This will be like that.” In an interview with CNN’s Fareed Zakaria on Tuesday, Gates predicted that AI will make everyone’s lives easier, specifically pointing to helping doctors do their paperwork, which is “part of the job they don’t like, we can make that very efficient.” Since there isn’t a need for “much new hardware,” Gates said accessing AI will be over “the phone or the PC you already have connected over the internet connection you already have.”

Greenland

The study, published in the journal Nature, used artificial intelligence techniques to map more than 235,000 glacier end positions over the 38-year period, at a resolution of 120 metres. This showed the Greenland ice sheet had lost an area of about 5,000 sq km of ice at its margins since 1985, equivalent to a trillion tonnes of ice. The most recent update from a project that collates all the other measurements of Greenland’s ice found that 221bn tonnes of ice had been lost every year since 2003. The new study adds another 43bn tonnes a year, making the total loss about 30m tonnes an hour on average.
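The hourly figure follows directly from the annual totals quoted above; a quick check of the arithmetic:

```python
# Annual ice-loss figures as quoted above, in tonnes per year.
prior_loss = 221e9   # measured every year since 2003
new_loss = 43e9      # added by the new study

hourly = (prior_loss + new_loss) / (365 * 24)
print(round(hourly / 1e6, 1))  # ≈ 30.1 million tonnes per hour
```

So roughly 30 million tonnes an hour, matching the study's headline figure.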

Bing Chat

Microsoft is trying out another feature for Copilot which could prove controversial, allowing users to turn on personalization for the AI, tailoring its responses based on previous chats. Windows Latest discovered the feature in Copilot – which, despite being officially renamed, is still referred to as Bing Chat in some menus – and has had a play with it. When the option for personalization (in Settings) is turned on, the AI uses insights gleaned from your chat history to “make conversations unique to you,” the feature blurb states. Elsewhere, Microsoft mentions that it is recent conversations that are referred back to, although how far back it goes isn’t made clear.