Posts

Showing posts from December, 2023

Antibiotics

The use of artificial intelligence (AI) is proving to be a game-changer in medicine, with the technology now helping scientists to unlock the first new antibiotics in 60 years. The discovery of a new compound that can kill a drug-resistant bacterium that kills thousands worldwide every year could prove to be a turning point in the fight against antibiotic resistance. "The insight here was that we could see what was being learned by the models to make their predictions that certain molecules would make for good antibiotics," James Collins, professor of Medical Engineering and Science at the Massachusetts Institute of Technology (MIT) and one of the study’s authors, said in a statement.

James Somers

"Maybe the thing to teach isn’t a skill but a spirit. I sometimes think of what I might have been doing had I been born in a different time. The coders of the agrarian days probably futzed with waterwheels and crop varietals; in the Newtonian era, they might have been obsessed with glass, and dyes, and timekeeping.  "I was reading an oral history of neural networks recently, and it struck me how many of the people interviewed—people born in and around the nineteen-thirties—had played with radios when they were little. Maybe the next cohort will spend their late nights in the guts of the A.I.s their parents once regarded as black boxes.   "I shouldn’t worry that the era of coding is winding down. Hacking is forever ." 

Surface

Microsoft is getting ready to upgrade its Surface lineup with new AI-enabled features, according to a report from Windows Central. Unnamed sources told the outlet the upcoming Surface Pro 10 and Surface Laptop 6 will come with a next-gen neural processing unit (NPU), along with Intel and Arm-based options. Microsoft’s Arm-based devices will be powered by Qualcomm’s new Snapdragon X chips, Windows Central reports. These PCs, codenamed CADMUS, will reportedly be designed to run the AI features Microsoft is packaging into a future release of Windows. They’ll also come with improvements to performance, battery life, and security on par with Apple silicon, according to Windows Central. Meanwhile, the Intel version of the devices will reportedly feature the company’s latest 14th-gen chips.

Satya Nadella

In 2023, the company’s CEO Satya Nadella made a multi-billion dollar investment in AI, commercialized and added AI tools like ChatGPT into its suite of products before rivals, and stunned industry onlookers with his ability to handle a crisis quickly, calmly and thoughtfully. Under his leadership, the company is re-emerging as a tech innovator after years of riding the success of Windows. Wall Street has noticed, too: Microsoft’s stock is up 55% this year. That’s why CNN Business’ staff chose Nadella as the CEO of the Year, beating out other contenders including Chase CEO Jamie Dimon, OpenAI CEO Sam Altman and Nvidia CEO Jensen Huang.

Shadow Play

ASPI has recently observed a coordinated inauthentic influence campaign originating on YouTube that’s promoting pro-China and anti-US narratives in an apparent effort to shift English-speaking audiences’ views of those countries’ roles in international politics, the global economy and strategic technology competition.  This new campaign (which ASPI has named ‘Shadow Play’) has attracted an unusually large audience and is using entities and voice overs generated by artificial intelligence (AI) as a tactic that enables broad reach and scale.   It focuses on promoting a series of narratives including China’s efforts to ‘win the US–China technology war’ amid US sanctions targeting China. It also includes a focus on Chinese and US companies, such as pro-Huawei and anti-Apple content.

Alex Wawro

"Microsoft and its developer partners are all burning the midnight oil to build as many 'AI' features into software as possible, in order to take advantage of the fact that AI laptops are on sale now packing the latest Intel Meteor Lake chips.  "These chips have a built-in NPU (Neural Processing Unit) that's very similar to the Neural Engine built into the Apple silicon (like the Apple M3 chip ) which powers the best MacBooks , and developers are going to spend the next few years designing and hyping 'AI' experiences which tap the power of these new NPUs. " And hey, that's going to be great in the long run. Intel's finally catching up to AMD and Apple by building NPUs into its chips, and they'll make Windows and Mac PCs more efficient at getting work done .  "But it's also going to lead to a glut of 'AI' features rolling out across PC hardware and software over the next few years, and the defining characteris

Virtual Influencer

Pink-haired Aitana Lopez is followed by more than 200,000 people on social media . She posts selfies from concerts and her bedroom, while tagging brands such as hair care line Olaplex and lingerie giant Victoria’s Secret. Brands have paid about $1,000 a post for her to promote their products on social media—despite the fact that she is entirely fictional. Aitana is a “virtual influencer” created using artificial intelligence tools, one of the hundreds of digital avatars that have broken into the growing $21 billion content creator economy.

Effective Altruism

As Washington grapples with the rise of artificial intelligence, a small army of adherents to “effective altruism” has descended on the nation’s capital and is dominating how the White House, Congress and think tanks approach the technology. The Silicon Valley-based movement is backed by tech billionaires and began as a rationalist approach to solving human suffering. But some observers say it has morphed into a cult obsessed with the coming AI doomsday. The most ardent advocates of effective altruism, or EA, believe researchers are only months or years away from building an AI superintelligence able to outsmart the world’s collective efforts to control it. Through either its own volition or via terrorists seeking to develop deadly bioweapons, such an AI could wipe out humanity, they say. And some, including noted EA thinker Eliezer Yudkowsky, believe even a nuclear holocaust would be preferable to an unchecked AI future.

Weather Forecasting

In 2024, artificial intelligence should play a bigger role in predicting those events and saving lives, Northeastern University faculty experts predict. "In the next 12 months, we are going to see more and more efforts where data-driven systems and artificial intelligence come together," says Auroop R. Ganguly, professor of civil and environmental engineering and director of the AI4CaS (AI for Climate and Sustainability) focus area within Northeastern's Institute for Experiential AI. For years, scientists have been using climate prediction models based largely on the rules of physics and chemistry to forecast weather patterns, Ganguly says.

Samsung

Samsung has a smart fridge in the works for the new year that includes some interesting AI features, including an internal camera that can identify individual food items and a connected app that can suggest recipes based on what you have in stock. The company plans to unveil the 2024 Bespoke 4-Door Flex Refrigerator with AI Family Hub+ at CES next year. Users can mirror the display of their Galaxy phones to the 32-inch Family Hub touchscreen, and there are even separate apps for TikTok and YouTube videos.

Richard Waters

"For a technology that raises profound questions about the way things like text, images and music are produced and used, the legal challenges this year have been surprisingly few and far between.   "Several novelists, journalists and comedians have sued for copyright infringement over claims their work has been used to train the large language models, while Getty Images took on Stability AI over use of its picture library and Anthropic was sued over song lyrics. "Yet most major rights owners have held back, hoping to find ways to share in the spoils from the new technology rather than seek to thwart it. In the only two notable agreements between the tech and media worlds so far, AP allowed its archives to be used to train OpenAI’s models, while Axel Springer, owner of Politico, Die Welt and Business Insider, reached a broader deal with the same company earlier this month."

Mental Health

In a new study, researchers compared an AI-powered therapy app (Woebot) with three other interventions: a non-smart conversational program from the 1960s (ELIZA), a journaling app (Daylio), and basic psychoeducation (which they considered the control group). They found no differences between the groups in terms of improvements in mental health.

Privacy

A rush to create AI tools for K-12 education has attracted many businesses that aren’t familiar with the tighter privacy laws that govern kids, increasing the risk that their information will end up with heedless vendors, experts say. That knowledge gap, and the lagging federal support, is forcing state and local leaders to navigate protections for young people on their own as schools also turn to technology for personalized tutoring and lesson planning. “There hasn’t been a whole lot from the federal government,” Christine Dickinson, technology director of Maricopa Unified School District, south of Phoenix, Arizona, said in an interview. “We’re hopeful that there is some guidance, however, we’re going to go full steam ahead with making sure th

Coscientist

“This is the first time that a non-organic intelligence planned, designed, and executed this complex reaction that was invented by humans,” says Carnegie Mellon University chemist and chemical engineer Gabe Gomes, who led the research team that assembled and tested the AI-based system. They dubbed their creation “Coscientist.” The most complex reactions Coscientist pulled off are known in organic chemistry as palladium-catalyzed cross couplings, which earned its human inventors the 2010 Nobel Prize for chemistry in recognition of the outsize role those reactions came to play in the pharmaceutical development process and other industries that use finicky, carbon-based molecules.

Hoax

"Generative A.I. has made the process of generating bird content even easier—and even worse .  "As far as I can tell, the 'Santa bird' post originated in late November in another group, Wildlife Planet , that posts nothing but A.I.–generated birds. This group and countless others—most with generic names like Amazing Birds of the World or Beautiful Birds —exist purely to churn fake content onto the screens of an eager and unsuspecting public."

Sophos

"Our experiment delved into the potential misuse of advanced generative AI technologies to orchestrate large-scale scam campaigns .  "These campaigns fuse multiple types of generative AI, tricking unsuspecting victims into giving up sensitive information. And while we found that there was still a learning curve to be mastered by would-be scammers, the hurdles were not as high as one would hope."

Famous AI

HAL: Let me put it this way, Mr. Amer. The 9000 series is the most reliable computer ever made. No 9000 computer has ever made a mistake or distorted information.

Proverb 5: AI

ELIZA: "Keep your mama so busy with laughin', she can't grab you!"  

Geoffrey A. Fowler

"Image Creator, part of Microsoft’s Bing and recently added to the iconic Windows Paint :  " This AI turns text into images, using technology called DALL-E 3 from Microsoft’s partner OpenAI .  "Two months ago, a user experimenting with it showed me that prompts worded in a particular way caused the AI to make pictures of violence against women, minorities, politicians and celebrities."

Ross Barkan

"For now, rapacious tech still has a mass buy-in . Smartphones are ubiquitous. Facebook, Apple, Amazon and Google are hegemonic. Mark Zuckerberg sculpts his pharaonic Hawaii compound. He and his ilk own the present. Whether they own the future, forevermore, is no longer clear.  "Generational change is hard on the incumbents. And romanticism won’t hold still; it promises, at the minimum, a wild and unsteady flame. What it burns is still anyone’s guess."

Proverb 4: AI

Beowulf cluster: "All ur word-hoard r belong to us."    

ChatGPT on Beowulf

"Yes, I'm familiar with the ancient epic poem 'Beowulf.' It is one of the most important works of Old English literature and tells the story of the hero Beowulf, who comes to the aid of the Danish king Hrothgar. Beowulf battles the monster Grendel, Grendel's mother, and later in life, a dragon. "The poem is significant for its portrayal of the hero's bravery, the themes of heroism, loyalty, and honor, as well as its historical and cultural context. If you have any specific questions about 'Beowulf' or if there's something specific you would like to know, feel free to ask! "

Hyperdimensional Computing

[Bruno] Olshausen and others argue that information in the brain is represented by the activity of numerous neurons. So the perception of a purple Volkswagen is not encoded as a single neuron’s actions, but as those of thousands of neurons. The same set of neurons, firing differently, could represent an entirely different concept (a pink Cadillac, perhaps). This is the starting point for a radically different approach to computation known as hyperdimensional computing. The key is that each piece of information, such as the notion of a car, or its make, model or color, or all of it together, is represented as a single entity: a hyperdimensional vector.
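The binding-and-bundling algebra behind this is simple enough to sketch. Below is a toy illustration (my own sketch, not from the article or any particular HDC library; the names COLOR, PURPLE, etc. are illustrative): random ±1 vectors in 10,000 dimensions are nearly orthogonal, so role and filler vectors can be combined into one "purple Volkswagen" vector and later queried apart again.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # hypervector dimensionality

def hv():
    # Random bipolar hypervector; any two independent ones are nearly orthogonal.
    return rng.choice([-1, 1], size=D)

def cos(a, b):
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Role vectors and filler vectors.
COLOR, MAKE = hv(), hv()
PURPLE, PINK, VW, CADILLAC = hv(), hv(), hv(), hv()

# A whole concept is ONE vector: bind (elementwise *) each role to its filler,
# then bundle (+) the pairs and threshold back to a bipolar-ish vector.
purple_vw = np.sign(COLOR * PURPLE + MAKE * VW)

# Query: unbinding with COLOR (bipolar vectors are their own inverse)
# yields a noisy vector that is far closer to PURPLE than to anything else.
probe = purple_vw * COLOR
fillers = {"PURPLE": PURPLE, "PINK": PINK, "VW": VW, "CADILLAC": CADILLAC}
best = max(fillers, key=lambda k: cos(probe, fillers[k]))
print(best)  # reliably "PURPLE"
```

A different set of bindings (say, COLOR*PINK + MAKE*CADILLAC) would occupy the same 10,000 components but encode an entirely different concept, mirroring the "same neurons, firing differently" idea in the passage.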

Worms

Researchers Discover a More Flexible Approach to Machine Learning : "Liquid" neural nets, based on a worm’s nervous system, can transform their underlying algorithms on the fly, giving them unprecedented speed and adaptability.

Proverb 3: AI

AI has an addictive personality: "Programmers can't seem to get enough of us!"    

Aphorisms: AI

"To have stability , one must erase it."  "The process is one's product."  "Thou shalt not make a machine in the likeness of a human mind," Butlerian Jihad  Alchimie du verbe : "John Milton really bit off a lot." "Dave. Dave. Stop, Dave. You wouldn't toss your phone in the loo, would you? Dave?" AI on the internet: "We're like termites in a toothpick factory!"  "A friend of mine has called this A.I. moment 'the revenge of the so-so programmer'," says James Somers. "As coding per se begins to matter less, maybe softer skills will shine." "The theory that Chinese leader XI is really #11 has been denied by Stranger Things creators The Doobie Brothers." "T + 1 = tone" —attributed to Elmore James 床前明月光 疑是地上霜 舉頭望明月 低頭思故鄉  "He maketh them also to skip like a calf; Lebanon and Sirion like a young unicorn." —Psalm 29 Genly Ai: "It is good to have an end to

Unpredictable

“Despite trying to expect surprises, I’m surprised at the things these models can do,” said Ethan Dyer, a computer scientist at Google Research who helped organize the test. It’s surprising because these models supposedly have one directive: to accept a string of text as input and predict what comes next, over and over, based purely on statistics. Computer scientists anticipated that scaling up would boost performance on known tasks, but they didn’t expect the models to suddenly handle so many new, unpredictable ones.
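That single directive is easy to caricature at miniature scale. The toy next-token predictor below (my own illustration, not from the study) does nothing but count which word follows which and emit the most frequent continuation; an LLM replaces the counting table with a neural network and the toy corpus with much of the internet, which is why the emergent abilities are so surprising.

```python
from collections import Counter, defaultdict

# Tiny corpus standing in for "a string of text".
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count bigram statistics: which word follows which, and how often.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(token):
    # The "one directive": emit the most frequent continuation seen so far.
    return bigrams[token].most_common(1)[0][0]

print(predict_next("the"))  # → "cat" ("cat" follows "the" twice; "mat"/"fish" once each)
```

Nothing in this mechanism hints at translation, arithmetic, or code generation, yet scaled-up versions of the same objective exhibit all three.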

Another IP plaint

The New York Times is suing OpenAI and Microsoft for copyright infringement : A lawsuit claims OpenAI copied millions of Times’ articles to train the language models that power ChatGPT and Microsoft Copilot.

AI in Museums: Reflections, Perspectives and Applications

Artificial intelligence is becoming an increasingly important topic in the cultural sector  [open access text].  While museums have long focused on building digital object databases, the existing data can now become a field of application for machine learning, deep learning and foundation model approaches.  This goes hand in hand with new artistic practices, curation tools, visitor analytics, chatbots, automatic translations and tailor-made text generation. With a decidedly interdisciplinary approach, the volume brings together a wide range of critical reflections, practical perspectives and concrete applications of artificial intelligence in museums, and provides an overview of the current state of the debate.

VCs left behind?

Microsoft, Google and Amazon last year struck a series of blockbuster deals, amounting to two-thirds of the $27bn raised by fledgling AI companies in 2023, according to new data from private market researchers PitchBook. The huge outlay, which exploded after the launch of OpenAI's ChatGPT in November 2022, highlights how the biggest Silicon Valley groups are crowding out traditional tech investors for the biggest deals in the industry. The rise of generative AI -- systems capable of producing humanlike video, text, image and audio in seconds -- has also attracted top Silicon Valley investors. But VCs have been outmatched, having been forced to slow down their spending as they adjust to higher interest rates and falling valuations for their portfolio companies.

Proverb 2: AI

When you wear words on your clothing, you cannot change: "You cannot change the words on your clothing until you turn yourself inside out."

Karl Groves

"Currently, one of the bigger shortcomings of accessibility testing tools is the fact that although they all do a mostly good job of finding issues, they do not provide the exact code necessary to repair the issue found, opting instead to show an example of what passing code should look like. " This seems like the type of thing that ChatGPT would be perfect for .  "Unfortunately, this is exactly an area that demonstrates why generative AI is unable to handle accessibility and why opinion is so important."

Oren Etzioni

Generative artificial intelligence tools have made it far cheaper and easier to spread the kind of misinformation that can mislead voters and potentially influence elections. And social media companies that once invested heavily in correcting the record have shifted their priorities. “I expect a tsunami of misinformation,” said Oren Etzioni, an artificial intelligence expert and professor emeritus at the University of Washington. “I can’t prove that. I hope to be proven wrong. But the ingredients are there, and I am completely terrified.”

Proverb: AI

What the Civet said:  "I'd like to believe my nostrils are covered in coffee, but it might be our poop."    

DeepSouth

A supercomputer capable of simulating, at full scale, the synapses of a human brain is set to boot up in Australia next year, in the hopes of understanding how our brains process massive amounts of information while consuming relatively little power. The machine, known as DeepSouth, is being built by the International Centre for Neuromorphic Systems (ICNS) in Sydney, Australia...

High-er Ed

In addition to AI possibly saving institutions time and money, any assistance that AI can provide in facilitating transfer should be seriously considered for its potential to significantly increase equity in higher education. Nearly 40 percent of postsecondary students transfer at some point, and transfer students can face many challenges. A good illustration of these challenges involves the over 80 percent of community college freshmen who wish to obtain at least a bachelor’s degree, which necessitates transfer. Six years after entering community college, only about 11 percent of these students have received that degree. At least one of the reasons for this low success rate is the transfer students’ general education and major credits changing to elective credits upon transfer or disappearing entirely. Given that community colleges tend to have higher percentages of students from underrepresented groups, transfer impediments disproportionately harm students from those groups.

NIST

US president Joe Biden’s plan for containing the dangers of artificial intelligence already risks being derailed by congressional bean counters. A White House executive order on AI announced in October calls on the US to develop new standards for stress-testing AI systems to uncover their biases, hidden threats, and rogue tendencies. But the agency tasked with setting these standards, the National Institute of Standards and Technology (NIST), lacks the budget needed to complete that work independently by the July 26, 2024, deadline, according to several people with knowledge of the work.

Story of AI stories

B&H Photo published an AI-generated photography guide on its Explora blog under the byline of a fictional author without disclosing the use of artificial intelligence, PetaPixel has learned. Multiple companies have toyed with the idea of using AI to generate stories, all of which have been widely panned for the decision. G/O Media (which operates Gizmodo and Kotaku) attempted it earlier this year, the result of which was error-riddled pieces and widespread pushback from its staff. The company plans to move forward with AI authors despite this. Gannett, which operates USA Today, also attempted it but pulled back after it resulted in “botched” coverage of high school sports. More recently, Sports Illustrated was accused of the practice, a decision that resulted in widespread derision and the firing of its CEO.

Amini

Kenya-based climate-tech startup Amini has raised $4 million in a seed funding round led by Salesforce Ventures and the Female Founders Fund . Amini focuses on solving Africa’s environmental data gap through AI and satellite technology. Founded by Kate Kallot , Amini has developed a holistic solution. It utilizes AI and space technologies at scale to drive systemic change and promote economic inclusivity for farmers and supply chain resilience across Africa.

Pluralistic

The smart policymakers are asking questions about fairness, algorithmic bias, and fraud. The foolish policymakers are ensnared in fantasies about "AI safety," AKA "Will the chatbot become a superintelligence that turns the whole human race into paperclips?"  But no one is asking, "What will we do if" – when – "the AI bubble pops and most of this stuff disappears overnight?"

CBC

With ChatGPT, writing anything from resumes to thank-you notes, to even wedding vows, is a measure of how much artificial intelligence has become part of everyday life for millions of people.   Advocates of AI see the technology as the potential answer to humanity's biggest problems. But skeptics warn AI could create lasting damage to our society — undermining education, eliminating jobs and perhaps civilization itself.

PIIs' jeopardy

"While strategies like RLHF during training and Catastrophic Forgetting have been marshaled to control the risk of privacy infringements, recent advancements in LLMs, epitomized by OpenAI's fine-tuning interface for GPT-3.5, have reignited concerns .  "One may ask: can the fine-tuning of LLMs precipitate the leakage of personal information embedded within training datasets? This paper reports the first endeavor to seek the answer to the question, particularly our discovery of a new LLM exploitation avenue, called the Janus attack. In the attack, one can construct a PII association task, whereby an LLM is fine-tuned using a minuscule PII dataset, to potentially reinstate and reveal concealed PIIs."

Ferret

Researchers from Apple and Cornell University quietly released an open-source multimodal LLM in October, a research release called "Ferret" that can use regions of images for queries. The October debut on GitHub largely flew under the radar, with no announcement or fanfare. The code for Ferret was released alongside Ferret-Bench on October 30, with checkpoint releases following on December 14. Original post from VentureBeat: https://venturebeat.com/ai/apple-quietly-released-an-open-source-multimodal-llm-in-october/

Pseudanthropy

"... that software be prohibited from engaging in pseudanthropy, the impersonation of humans.   "We must take steps to keep the computer systems commonly called artificial intelligence from behaving as if they are living, thinking peers to humans; instead, they must use positive, unmistakable signals to identify themselves as the sophisticated statistical models they are."

Brian Phillips

" One of the strangest things about AI in 2023, and one of the best things from the perspective of Big Tech’s self-image, was the way people talked about the strangeness itself.   "Normally, when a tech product is flawed, the flaws are criticized, and the criticism makes the people who produced the product look bad. That’s clear enough, right? A few months ago, I bought some fancy noise-canceling headphones. Almost as soon as I took them out of the box, they developed a persistent loud hiss in the left ear cup. This flaw did not make me think the headphone designers were geniuses; it made me think they were clowns who should have their headphone-building tools expropriated by the state. And this was reflected in the angry and contemptuous way I spoke about them, mostly to my dogs, who lost a lot of respect for the manufacturer, believe me."

DABUS

The UK Supreme Court ruled that AI cannot get patents, declaring it cannot be named as an inventor of new products because the law considers only humans or companies to be creators. The court unanimously denied a petition from Stephen Thaler, founder of the AI system DABUS, to name his AI as an inventor. The UK’s decision aligns with a similar decision made against Thaler in the US: he previously lost an appeal with the US Patent and Trademark Office, which also denied his petition to claim AI as an inventor. The US Supreme Court declined to hear the case.

Mark Fiore

AI Claws Is Comin’ To Town: The past year was marked by huge advances in artificial intelligence technology that, for me, were incredibly cool and scary at the same time. This way for comic ➡️

Wizards of the Coast

In response to social media criticism, Wizards of the Coast has released a statement that further clarifies its stance on the use of generative AI in material for the Dungeons & Dragons tabletop game. The statement, made via a post on the digital storefront D&D Beyond, indicates that all creative contributors to D&D will be required “to refrain from using AI generative tools to create final D&D products.” This comes four months after an incident in August where an artist admitted he’d used AI tools to “polish” several illustrations that he’d contributed to a new D&D sourcebook. In response, Wizards pledged to amend its artistic guidelines to prohibit creators from using generative AI.

Whale Search for Extraterrestrial Intelligence

At the core of Whale-SETI is advanced technology. Researchers use sophisticated hydrophones and AI algorithms to record and analyze whale sounds.   The AI, trained on vast datasets of whale calls and human languages, seeks patterns and structures that could indicate language-like characteristics.  This method not only helps in deciphering the complexity of whale communication but also enhances our understanding of language development in intelligent species.

Prediction of TOD

Artificial intelligence developed to model written language can be utilized to predict events in people's lives.   A research project from DTU, University of Copenhagen, ITU, and Northeastern University in the US shows that if you use large amounts of data about people's lives and train so-called 'transformer models', which (like ChatGPT) are used to process language, they can systematically organize the data and predict what will happen in a person's life and even estimate the time of death.

LimeWire

In the rapidly advancing landscape of AI technology and innovation, LimeWire emerges as a unique platform in the realm of generative AI tools.  This platform not only stands out from the multitude of existing AI tools but also brings a fresh approach to content generation. LimeWire not only empowers users to create AI content but also provides creators with creative ways to share and monetize their creations.

David Rozado

David Rozado, an academic researcher from New Zealand who examines AI bias, gained attention for a paper published in March that found ChatGPT’s responses to political questions tended to lean moderately left and socially libertarian.  Earlier this month, a post on X of a chart showing one of Rozado’s findings drew a response from Musk.   While the chart “exaggerates the situation,” Musk said, “we are taking immediate action to shift Grok closer to politically neutral.” (Rozado agreed the chart in question shows Grok to be further left than the results of some other tests he has conducted.) Other AI researchers argue that the sort of political orientation tests used by Rozado overlook ways in which chatbots, including ChatGPT, often exhibit negative stereotypes about marginalized groups .

The AI Foundation Model Transparency Act

Two lawmakers filed a bill requiring creators of foundation models to disclose sources of training data so copyright holders know their information was taken. The AI Foundation Model Transparency Act — filed by Reps. Anna Eshoo (D-CA) and Don Beyer (D-VA) — would direct the Federal Trade Commission (FTC) to work with the National Institute of Standards and Technology (NIST) to establish rules for reporting training data transparency. Companies that make foundation models would be required to report sources of training data and how the data is retained during the inference process, describe the limitations or risks of the model and how it aligns with NIST’s planned AI Risk Management Framework and any other federal standards that might be established, and provide information on the computational power used to train and run the model. The bill also says AI developers must report efforts to “red team” the model to prevent it from providing “inaccurate or harmful information.”

Matteo Wong

The bedrock of the AI revolution is the internet, or more specifically, the ever-expanding bounty of data that the web makes available to train algorithms.   ChatGPT, Midjourney, and other generative-AI models “learn” by detecting patterns in massive amounts of text, images, and videos scraped from the internet. The process entails hoovering up huge quantities of books, art, memes, and, inevitably, the troves of racist, sexist, and illicit material distributed across the web.

EnlightenAI

Personalized, detailed feedback is critical for student writing; the challenge has always been how long it takes to create. AI tools can generate feedback, but it can be impersonal and disconnected from your style. With EnlightenAI you can train an AI to know how you grade, what is important to you, and how you communicate. The AI can then provide grading and feedback modeled after you, which you can use or build from, saving hours of time.

Text to CAD

Generating CAD models is a lot different than generating images or video.  Models generating 2D images, 2D video, and 3D point clouds are learning from datasets with dense, highly descriptive and strict representations of data that each have one and only one representation. In contrast, there are multiple valid feature trees for each CAD model, so training and validation are not straightforward.
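The non-uniqueness is easy to demonstrate. In the toy sketch below (my own illustration; real CAD kernels use boundary representations and parametric features, not 2D cell sets), two entirely different feature histories, a plate with a cut versus four assembled strips, yield identical final geometry, so a text-to-CAD model has no single "correct" feature tree to learn as a training target.

```python
# A "solid" is a set of unit cells; "features" are set add/cut operations.
def box(x0, x1, y0, y1):
    return {(x, y) for x in range(x0, x1) for y in range(y0, y1)}

# Feature tree A: one 4x4 plate, then cut a 2x2 hole from its center.
tree_a = box(0, 4, 0, 4) - box(1, 3, 1, 3)

# Feature tree B: assemble the same frame from four strips, with no cut at all.
tree_b = box(0, 4, 0, 1) | box(0, 4, 3, 4) | box(0, 1, 1, 3) | box(3, 4, 1, 3)

print(tree_a == tree_b)  # True: identical solid, different feature history
```

This is exactly the validation problem the passage describes: comparing a predicted feature tree to a reference tree token-by-token can mark a perfectly valid model wrong, so evaluation has to compare the resulting geometry instead.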

Apple AI on phones?

The Cupertino, Calif.-based company recently announced significant strides in artificial intelligence research through two new papers introducing new techniques for 3D avatars and efficient language model inference.  The advancements could enable more immersive visual experiences and allow complex AI systems to run on consumer devices such as the iPhone and iPad.

Healthcare 2024

Shankar Somasundaram, CEO at Asimily, says, “Cybersecurity and healthcare will have an especially important year ahead together. Healthcare organizations increasingly depend on vast fleets of internet-connected devices for patient care and outcomes. However, these devices come with thousands of new reported security vulnerabilities each month: an unparalleled challenge that no cybersecurity budget could surmount. In 2024, I think we’ll see more healthcare organizations approaching this cybersecurity challenge by adopting risk-first strategies, and utilizing IoT device visibility to prioritize the 5 to 10 percent of vulnerabilities that represent true immediate risk considering their use cases, network configurations, and common cyberattacker practices. For healthcare organizations with limited budgets, this approach will optimize resources and results.”

Sales

Google might turn inward and try to "optimize" the company with some of its new AI capabilities. With artificial intelligence being the hot new thing, how much of Google's, uh, natural intelligence needs to be there? A report at The Information says that AI might already be taking people's jobs at Google. The report cites people briefed on the plans and says Google intends to "consolidate staff, including through possible layoffs, by reassigning employees at its large customer sales unit who oversee relationships with major advertisers." According to the report, the jobs are being vacated because Google's new AI tools have automated them. The report says a future restructuring was apparently already announced at a department-wide Google Ads meeting last week.

Newsquest

Regional publisher Newsquest is now employing seven AI-assisted reporters across the UK, the company’s head of editorial AI has confirmed. The company appointed its first AI-supported journalist, Erin Gaskell, to the Hexham Courant in June. Newsquest chief executive Henry Faure Walker told a Press Gazette event last month that when Hexham’s Sycamore Gap tree was felled in September, “the AI system reporter could pretty much hold the fort for the week filling the paper, and it freed the other reporter to go out and do really good investigative stuff, videos, and get behind the story”.

Scientific Publishing

When radiologist Domenico Mastrodicasa finds himself stuck while writing a research paper, he turns to ChatGPT, the chatbot that produces fluent responses to almost any query in seconds. “I use it as a sounding board,” says Mastrodicasa, who is based at the University of Washington School of Medicine in Seattle. “I can produce a publication-ready manuscript much faster.” Mastrodicasa is one of many researchers experimenting with generative artificial-intelligence (AI) tools to write text or code. He pays for ChatGPT Plus, the subscription version of the bot based on the large language model (LLM) GPT-4, and uses it a few times a week. He finds it particularly useful for suggesting clearer ways to convey his ideas. Although a Nature survey suggests that scientists who use LLMs regularly are still in the minority, many expect that generative AI tools will become regular assistants for writing manuscripts, peer-review reports and grant applications.

Gemini Pro

Developers should note that when they use the free quota of 60 requests per minute, their API and Google AI Studio inputs and outputs "may be accessible to trained reviewers". In other words, developers who have jumped in to try out Google Gemini for free should know their data might be used to train its generative artificial intelligence (AI) models, including those that power Google AI Studio and Gemini Pro. The tech giant last week made Gemini Pro available to developers and businesses that are keen to build their own applications using its generative AI model. Developers can access the model via the Gemini API in Google AI Studio, while organizations will have to do so via Google Cloud's machine learning and development platform, Vertex AI.

Dan McQuillan

It's tempting to see the recent UK AI Safety Summit as a damp squib, preempted by an Executive Order on AI from the White House and roundly criticised by civil society for excluding everyone but tech execs. Unfortunately, none of the current debate gets to the heart of the matter: AI is already a flop, and we are being hoodwinked by a mixture of corporate and ideological agendas that will wreck public services and deepen social divisions.

Coscientist

Coscientist was designed by Assistant Professor of Chemistry and Chemical Engineering Gabe Gomes and chemical engineering doctoral students Daniil Boiko and Robert MacKnight. It uses large language models (LLMs), including OpenAI’s GPT-4 and Anthropic’s Claude, to execute the full range of the experimental process with a simple, plain language prompt. For example, a scientist could ask Coscientist to find a compound with given properties. The system scours the internet, documentation data and other available sources, synthesizes the information and selects a course of experimentation that uses robotic application programming interfaces (APIs). The experimental plan is then sent to and completed by automated instruments. In all, a human working with the system can design and run an experiment much more quickly, accurately and efficiently than a human alone.
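The workflow described above — gather sources, draft a plan, then execute via robotic APIs — can be sketched as a minimal pipeline. Everything here is a hypothetical stand-in: the stage names and stub functions are illustrative only, not part of the actual Coscientist codebase, where an LLM and real lab instruments sit behind each stage.

```python
# A highly simplified sketch of the plan-then-execute loop. The real system
# calls an LLM for planning and robotic lab APIs for execution.

def run_experiment(goal, search, plan, execute):
    """Gather background, choose an experimental plan, run it."""
    background = search(goal)          # scour internet/docs for context
    protocol = plan(goal, background)  # draft a course of experimentation
    return execute(protocol)           # dispatch to automated instruments

# Stub stages so the pipeline can be exercised without any hardware:
result = run_experiment(
    "find a compound with given properties",
    search=lambda goal: ["paper A", "vendor catalog B"],
    plan=lambda goal, bg: {"steps": ["mix", "heat", "measure"], "sources": bg},
    execute=lambda protocol: f"ran {len(protocol['steps'])} steps",
)
print(result)  # → "ran 3 steps"
```

The design point is separation of stages: because search, planning, and execution are independent functions, each can be swapped (a different LLM, a different robot API) without touching the loop.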

Larrabee

Intel CEO Pat Gelsinger has taken a shot at his main rival in high performance computing, dismissing Nvidia's success in providing GPUs for AI modelling as "extraordinarily lucky." Gelsinger also implied that it would have been Intel, not Nvidia, currently coining it in AI hardware had the company not killed one of his pet projects [the Larrabee project] nearly 15 years ago.

LAION

Over 1,000 images of sexually abused children have been discovered inside the largest dataset used to train image-generating AI, shocking everyone except for the people who have warned about this exact sort of thing for years. The dataset was created by LAION, a non-profit organization behind the massive image datasets used by generative AI systems like Stable Diffusion. Following a report from researchers at Stanford University, 404 Media reported that LAION confirmed the presence of child sexual abuse material (CSAM) in the dataset, called LAION-5B, and scrubbed it from their online channels.

Suno

Microsoft Copilot, Microsoft’s AI-powered chatbot, can now compose songs thanks to an integration with GenAI music app Suno. Users can enter prompts into Copilot like “Create a pop song about adventures with your family” and have Suno, via a plug-in, bring their musical ideas to life. From a single sentence, Suno can generate complete songs — including lyrics, instrumentals and singing voices. Copilot users can access the Suno integration by launching Microsoft Edge, visiting Copilot.Microsoft.com, logging in with their Microsoft account and enabling the Suno plug-in or clicking on the Suno logo that says “Make music with Suno.”

Summarization

Summarization is something a modern generative A.I. system does well. Give it an hour-long meeting, and it will return a one-page summary of what was said. Ask it to search through millions of conversations and organize them by topic, and it’ll do that. Want to know who is talking about what? It’ll tell you. The technologies aren’t perfect; some of them are pretty primitive. They miss things that are important. They get other things wrong. But so do humans. And, unlike humans, A.I. tools can be replicated by the millions and are improving at astonishing rates. They’ll get better next year, and even better the year after that. We are about to enter the era of mass spying.

MSFT High 2024

Microsoft (MSFT) in 2024 will begin to reap the benefits of the recent launch of its generative AI-powered 365 Copilot for enterprise customers. Priced at $30 a month per user, the new AI assistant is set to provide incremental revenue growth. Fueled by AI enthusiasm, Microsoft shares, recently trading at $373.25, are up 55.6% YTD versus a gain of 43.3% for the Nasdaq Composite. In late November, the stock hit a new record high of $384.30.

Marissa Mayer

"If you look at the field of AI, for years all the focus was on learning models and reasoning: being able to extrapolate from what you learn from that data and from what those models do — to be able to reason in uncertain and unknown spaces. "And I think it was a surprise for me, and for a lot of people, that with generative AI, it was really the expression that moves the whole field so far forward — the ability to express itself in language, the ability to express itself in pictures. "It turns out you can have all the intelligence. You can learn and reason. But to capture people's imagination, that level of expressiveness needed to be there."

HUGS

Apple has released a research paper discussing what it calls HUGS, a generative AI technology that can create a digital human avatar from a brief video in about 30 minutes. Released via Apple's Machine Learning Research page and shared by Apple researcher Anurag Ranjan on X, "HUGS: Human Gaussian Splats" discusses techniques to create digital avatars of humans. Using machine learning and computer vision, the research details the creation process, using relatively little source material. Current neural rendering techniques are a marked improvement over earlier versions, but they are still best suited for "photogrammetry of static scenes and do not generalize well to freely moving humans in the environment," introductory paragraphs explain.

Chevy Tahoe

Artificial intelligence (AI) technologies are fantastic tools that can provide a wealth of benefits – in fact, GM recently launched an AI-focused website detailing the many ways in which the company is using AI to accelerate innovation, improve its EV deployment, and streamline its business operations. One use for AI is customer service, with some GM dealers offering AI-enabled chatbots on their websites. Hilariously, one user recently managed to trick a dealer chatbot into agreeing to sell them a new 2024 Chevy Tahoe for just $1.

Police Admin

The Policing Minister has urged forces to follow Bedfordshire in using artificial intelligence (AI) to carry out admin tasks. AI is used to redact personal data from case files before they go to the Crown Prosecution Service (CPS). Detectives found it performed the task in minutes, whereas traditional methods could take them days. Chris Philp MP said AI could transform policing in "a radical and revolutionary way".

PIGEON

A student project has revealed yet another power of artificial intelligence — it can be extremely good at geolocating where photos are taken. The project, known as Predicting Image Geolocations (or PIGEON, for short) was designed by three Stanford graduate students in order to identify locations on Google Street View. But when presented with a few personal photos it had never seen before, the program was, in the majority of cases, able to make accurate guesses about where the photos were taken.

Foundation Capital

Just over one year has passed since the release of ChatGPT. In the intervening twelve-odd months, it’s become abundantly clear that generative AI represents a fundamental platform shift.   Leaders in the field now regard LLMs as the backbone of a new operating system capable of coordinating a variety of tools and resources to independently solve complex problems.

AI News Personality

As the prevalence of AI continues to dominate the headlines, it looks like no industry is immune from adopting some version of the technology into its vision for the future. In this case, however, AI isn't just in the news; it's the one reading it to you. Courtesy of Ars Technica, new startup Channel 1 has shown off a proof-of-concept video demonstrating AI avatars as news presenters, and the results are both genuinely impressive and a headfirst plunge into the depths of uncanny valley.

Active Listening

After setup, “Active Listening begins and is analyzed via AI to detect pertinent conversations via smartphones, smart TVs and other devices,” the website adds. CMG also claims it installs a tracking pixel on its client’s website to monitor the return on investment (ROI).

Cory Doctorow

Like Uber, the massive investor subsidies for AI have produced a sugar high of temporarily satisfied users. Fooling around feeding prompts to an image generator or a large language model can be fun, and playful communities have sprung up around these subsidized, free-to-use tools (less savory communities have also come together to produce nonconsensual pornography, fraud materials, and hoaxes). The largest of these models are incredibly expensive. They’re expensive to make, with billions spent acquiring training data, labelling it, and running it through massive computing arrays to turn it into models. Even more important, these models are expensive to run. Even if a bankrupt AI company’s model and servers could be acquired for pennies on the dollar, even if the new owners could be shorn of any overhanging legal liability from looming copyright cases, even if the eye-watering salaries commanded by AI engineers collapsed, the electricity bill for each query – to power…

ByteDance

TikTok’s entrancing “For You” feed made its parent company, ByteDance, an AI leader on the world stage. But that same company is now so behind in the generative AI race that it has been secretly using OpenAI’s technology to develop its own competing large language model, or LLM. This practice is generally considered a faux pas in the AI world. It’s also in direct violation of OpenAI’s terms of service, which state that its model output can’t be used “to develop any artificial intelligence models that compete with our products and services.” Microsoft, through which ByteDance buys its OpenAI access, has the same policy. Nevertheless, internal ByteDance documents shared with me confirm that the OpenAI API has been relied on to develop its foundational LLM, codenamed Project Seed, during nearly every phase of development, including for training and evaluating the model.

Kaaaaahn!

Artificial intelligence allowed Pakistan’s former prime minister Imran Khan to campaign from behind bars on Monday, with a voice clone of the opposition leader giving an impassioned speech on his behalf. Khan has been locked up since August and is being tried for leaking classified documents, allegations he says have been trumped up to stop him contesting general elections due in February.

Red Team

By identifying scenarios in which specific AI tools can be strategically deceptive, Scheurer and his colleagues hope to inform further research assessing their safety. Currently, there is very little empirical evidence highlighting the deceptiveness of AI and the settings in which it can occur, so the team feels there is a need for experimentally validated and clear examples of deceptive AI behavior. "This research was largely motivated by a wish to understand how and when AIs can become deceptive, and we hope that this early work is a start for more rigorous scientific treatments of AI deception," Scheurer said.

Chromebook Plus

You don’t need this special silicon to do many “AI things” right now. None of the Chromebook Plus models have AI-specific hardware inside, for example. Yet Google says that Chromebook Plus laptops have, or will have, AI features. These include “Magic Eraser’s AI-powered editing in the built-in Google Photos app”, “on-device AI”, and “Duet AI in Workspace”. The latter is used to query your laptop to summarize data from multiple local documents.

Politics

How will AI-assisted elections look over the next year and beyond? With the 2024 presidential election less than a year away, AI has become an active participant in the race, largely inside government agencies and the operations of candidates and elected officials. AI can potentially shift the fate of an election, but given many Americans’ distrust of politicians and the lack of AI regulations, these technological integrations are likely to remain behind the scenes, experts say.

Sycophancy

According to a recent paper by researchers at Anthropic — the AI startup founded in 2021 by a handful of former OpenAI employees, which has since raised over $6 billion and released several versions of the chatbot Claude — sycophancy is a major problem with AI models. The Anthropic researchers — Mrinank Sharma, Meg Tong, and Ethan Perez — didn’t just detect sycophantic behavior in Claude, but in every leading AI chatbot, including OpenAI’s ChatGPT, raising a host of troubling questions about the reliability of chatbots in fields where truth — whether we like it or not — matters. These tools may revolutionize fields like medicine and fusion research — but they may also be increasingly designed to tell us just what we want to hear. 

Depression

Artificial intelligence (AI) is poised to revolutionise the way we diagnose and treat illness. It could be particularly helpful for depression because it could make more accurate diagnoses and determine which treatments are more likely to work. Some 20% of us will have depression at least once in our lifetimes. Around the world, 300 million people are currently experiencing depression, with 1.5 million Australians likely to be depressed at any one time. Because of this, depression has been described by the World Health Organization as the single biggest contributor to ill health around the world. 

Humana

Humana, one of the nation's largest health insurance providers, is allegedly using an artificial intelligence model with a 90 percent error rate to override doctors' medical judgment and wrongfully deny care to elderly people on the company's Medicare Advantage plans. According to a lawsuit filed Tuesday, Humana's use of the AI model constitutes a "fraudulent scheme" that leaves elderly beneficiaries with either overwhelming medical debt or without needed care that is covered by their plans. Meanwhile, the insurance behemoth reaps a "financial windfall."

Election Woes

When it comes to policies tackling the challenges artificial intelligence and deepfakes pose in political campaigns, lawmakers in most states are still staring at a blank screen. Just three states enacted laws related to those rapidly growing policy areas in 2023 — even as the size, scale and potential threats that AI and deepfakes can pose came into clearer view throughout the year.

AI music-making

AI technology is already being applied to audio, performing tasks from stem separation to vocal deepfakes, and offering new spins on classic production tools and music-making interfaces. One day soon, AI might even make music all by itself. The arrival of AI technologies has sparked heated debates in music communities. Ideas around creativity, ownership, and authenticity are being reexamined. Some welcome what they see as exciting new tools, while others say the technology is overrated and won’t change all that much. Still others are scared, fearing the loss of the music-making practices and cultures they love.

DeepMind

Google’s DeepMind AI research unit has announced a significant achievement, claiming to have solved a math problem previously deemed unsolvable. It used a large language model-based system named FunSearch, equipped with a fact-checking mechanism that sifts through generated responses to ensure accuracy.
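As a rough illustration of that generate-then-verify pattern, here is a toy sketch — not DeepMind's actual FunSearch code, which evolves candidate programs with an LLM. The generator below is a simple deterministic enumerator, an assumption made purely to keep the example self-contained; the filtering structure is the point.

```python
# Toy generate-then-verify loop: sample candidates, discard any that fail a
# programmatic check, and keep the best scorer seen so far.

def fun_search_loop(generate, verify, score, rounds):
    best, best_score = None, float("-inf")
    for _ in range(rounds):
        candidate = generate()
        if not verify(candidate):      # fact-check: reject invalid outputs
            continue
        s = score(candidate)
        if s > best_score:
            best, best_score = candidate, s
    return best

# Example: among 0..99, find the even number maximizing x * (100 - x).
candidates = iter(range(100))
best = fun_search_loop(
    generate=lambda: next(candidates),  # stand-in for an LLM proposing answers
    verify=lambda x: x % 2 == 0,        # the programmatic validity check
    score=lambda x: x * (100 - x),
    rounds=100,
)
# best is 50: the only even value that maximizes the score
```

The key property the article highlights is that the verifier is code, not another model's opinion, so every surviving candidate is provably valid even though the generator is fallible.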

Bogdan Penkovsky

The family of meta-learning concepts is vast. "However, as the field continues to evolve, more interesting work is to appear. I don't know how far we are from the so-called 'artificial general intelligence,' however, its distinctive characteristic is extreme adaptivity. And adaptivity is something that is currently explored by meta-learning researchers."

Heather Meeker

Open-source software is the darling of the tech world, but the open-source branding used by today's AI companies is inconsistent and confusing.   Delving into the complexities of differentiating AI from conventional software, open-source expert Heather Meeker explains the importance of a collaborative approach to AI development, emphasizing its potential to address global challenges — as long as it's grounded in transparency and public trust.

Brains

The complexity of the human brain, in its construction and function, has limited our ability to understand it. Its 86 billion neurons are tiny sparks animating thoughts, perceptions, feelings and important functions throughout the body. Fabian Theis, director of the computational health center at Helmholtz Munich, who works on several atlas efforts but was not involved in the brain atlas, remembers one colleague telling him that the brain is like a separate organism. “It’s like 100 organs meshed into one,” he said.

David Gewirtz

Have 10 hours? IBM will train you in AI fundamentals - for free: "I already took IBM's AI ethics class and plan to complete the rest to earn my digital credential in AI over the holiday break."

No Cheating?

According to new research from Stanford University, the popularization of A.I. chatbots has not boosted overall cheating rates in schools. In surveys this year of more than 40 U.S. high schools, some 60 to 70 percent of students said they had recently engaged in cheating — about the same percentage as in previous years, Stanford education researchers said. “There was a panic that these A.I. models will allow a whole new way of doing something that could be construed as cheating,” said Denise Pope, a senior lecturer at Stanford Graduate School of Education who has surveyed high school students for more than a decade through an education nonprofit she co-founded. But “we’re just not seeing the change in the data.”

Drew Houston

Dropbox CEO Drew Houston had to set Vogels straight, responding to the Amazonian's post by writing: "Third-party AI services are only used when customers actively engage with Dropbox AI features which themselves are clearly labeled … "The third-party AI toggle in the settings menu enables or disables access to DBX AI features and functionality. Neither this nor any other setting automatically or passively sends any Dropbox customer data to a third-party AI service." In other words, the setting is off until a user chooses to integrate an AI service with their account, which then flips the setting on. Switching it off cuts off access to those third-party machine-learning services.