Posts

Showing posts from February, 2024

CNET Money Staff

Wikipedia has downgraded tech website CNET's reliability rating following extensive discussions among its editors regarding the impact of AI-generated content on the site's trustworthiness, as noted in a detailed report from Futurism.  The decision reflects concerns over the reliability of articles found on the tech news outlet after it began publishing AI-generated stories in 2022. Around November 2022 , CNET began publishing articles written by an AI model under the byline "CNET Money Staff." In January 2023, Futurism brought widespread attention to the issue and discovered that the articles were full of plagiarism and mistakes . (Around that time, we covered plans to do similar automated publishing at BuzzFeed.) After the revelation, CNET management paused the experiment , but the reputational damage had already been done.

Kaaaaaahn!

FTC chair Lina Khan is hunting for evidence that Microsoft, Google and Amazon require cloud computing spend, board seats or exclusivity deals in return for their investments in AI startups. At a Friday event, Khan framed today's AI landscape as an inflection point for tech that is "enormously important for opening up markets and injecting competition and disrupting existing incumbents." The FTC chair offered Axios' Sara Fischer new details of how she's handling a market inquiry into the relationship between Big Tech companies and AI startups, in an interview at the Digital Content Next Summit in Charleston, S.C.

Scientific Writing

Academic publishers scrambled to announce policies on the use of ChatGPT and other large language models (LLMs) in the writing process.  By last October, 87 of 100 top scientific journals had provided guidance to authors on generative AI, which can create text, images and other content, researchers reported on 31 January in the The BMJ . But that’s not the only way in which ChatGPT and other LLMs have begun to change scientific writing .  In academia’s competitive environment, any tool that allows researchers to “produce more publications is going to be a very attractive proposition”, says digital-innovation researcher Savvas Papagiannidis at Newcastle University in Newcastle upon Tyne, UK.

Future of Digital Life

A New Age of Enlightenment? A New Threat to Humanity?: The Impact of Artificial Intelligence by 2040 ... This report covers the results from the 17th “Future of Digital Life” canvassing of a large set of global technology experts, paired with a U.S. national public opinion poll about the role rapidly advancing AI will play in individuals’ lives and across broad societal systems. 

Calgarians

Sorting through the vast amount of information to find relevant and pressing concerns around city policies, events and other concerns in  Calgary can be a daunting task for city officials.  However, an artificial intelligence (AI) tool by a University of Calgary student provides a unique way for city officials to hear public sentiments and make informed decisions.   Specifically, the tool, named PassivePy aims to determine the common concerns of algarians (sic) around infrastructure and development planning policies implemented in the City of Calgary.  In an interview with the Gauntlet , masters student Mitra Mirshafiee from the department of electrical and software engineering described how her project uses artificial intelligence and machine learning to offer the city a way to better understand what Calgarians want now and in the future. 

Music GenAI Control

Adobe has announced new experimental generative AI tools the company hopes will revolutionize how people create and edit custom audio . Called Project Music GenAI Control , the tools allow users to generate original music simply by providing text prompts. Users can then finely edit the AI-generated audio to fit their exact needs. The new tools build on Adobe’s Firefly image generation system which has already been used to create over six billion images. Adobe says Project Music GenAI Control makes generative AI a “co-creator” that assists people in crafting customized music and audio for projects like podcasts and videos.

Skill Erosion

Taking the perspective of sociotechnical systems, we conducted a case study of an accounting firm that had experienced skill erosion over a number of years due to reliance on their software’s automated functions . We synthesized our findings using causal loop modeling based on system dynamics. The resulting dynamic model explains skill erosion via an interplay between humans’ automation reliance, complacency, and mindful conduction. It shows how increasing reliance on automation fosters complacency at both individual and organizational levels, weakening workers’ mindfulness across three work task facets (activity awareness, competence maintenance, and output assessment), resulting in skill erosion.  Such skill erosion may remain obscure, acknowledged by neither workers nor managers. We conclude by discussing the implications for theory and practice and identifying directions for future research.

The Effects of Acoustic Turn-by-turn Navigation on Wayfinding

This study examined the impact of using an acoustic turn-by-turn navigation device on wayfinding . Participants used a driving simulator to traverse the same route twice. They either traveled both times without the guidance or used a turn-by-turn navigation on the first drive and then replicating the route from memory on the second drive. Wayfinding performance was assessed by using route travel time and an assessment of scene recognition.  Results show that using a turn-by-turn navigation system negates route learning and impairs scene recognition. These findings suggest that using a navigation system while driving creates inattention blindness, a failure to “see” elements in the environment.

AI-Ran Alliance

SoftBank, Nvidia, Microsoft and others said Monday that they have formed an alliance aimed at effectively using mobile base stations with the help of artificial intelligence. The members of the AI-Ran Alliance aim to work together in preventing communications congestion and promoting the use of smartphone apps using generative AI. The initiative was unveiled at the Mobile World Congress, an international trade fair for the telecommunications industry, in Spain. The group will apply AI technology so that data processing can be performed at mobile base stations rather than in the cloud, to help save power and eliminate communication delays.

Money

The most important factors about Nvidia that will drive its business in 2024 have to do with money . To be specific: Nvidia exited its fiscal 2024 year in January with just a hair under $26 billion in cash and investments in the bank, and if this fiscal year goes as expected, with revenues topping $100 billion and with somewhere north of 50 percent of that showing up as net income, then it will add around $50 billion more to its coffers – even after paying for its taxes and vast research and development operations as well as the normal running of the company. You can do a whole lot with $75 billion or more, and one of them is not to worry so much about the exorbitant amount of money that will be necessary to buy HBM stacked DRAM memory for datacenter-class GPUs.  

Misleading voters

With presidential primaries underway across the U.S., popular chatbots are generating false and misleading information that threatens to disenfranchise voters , according to a report published Tuesday based on the findings of artificial intelligence experts and a bipartisan group of election officials. Fifteen states and one territory will hold both Democratic and Republican presidential nominating contests next week on Super Tuesday, and millions of people already are turning to artificial intelligence  powered chatbots for basic information, including about how their voting process works.

Klarna

Buy-now-pay-later lender Klarna said its AI assistant, powered by OpenAI, is doing the equivalent work of 700 full-time agents and has had 2.3 million conversations, equal to two-thirds of the company's customer service chats, within the first month of being deployed.  The AI tool resolved errands much faster and matched human levels on customer satisfaction, Klarna said.

scGPT

Our study probes the applicability of foundation models to advance cellular biology and genetic research.  Using burgeoning single-cell sequencing data, we have constructed a foundation model for single-cell biology, scGPT, based on a generative pretrained transformer across a repository of over 33 million cells .  Our findings illustrate that scGPT effectively distills critical biological insights concerning genes and cells. Through further adaptation of transfer learning, scGPT can be optimized to achieve superior performance across diverse downstream applications.  This includes tasks such as cell type annotation, multi-batch integration, multi-omic integration, perturbation response prediction and gene network inference.

LLaVa

At Mobile World Congress 2024 , Qualcomm is adding more to its portfolio of AI-on-phone tricks facilitated by the Snapdragon series silicon for Android phones.   The chipmaker has already showcased some impressive AI capabilities for the Snapdragon 8 Gen 3 flagship , such as voice-activated media editing, on-device image generation using Stable Diffusion, and a smarter virtual assistant built atop large language models from the likes of Meta. Today, the company is adding more grunt to those AI superpowers. The first is the ability to run a Large Language and Vision Assistant (LLaVa) on a smartphone .  Think of it as a chatbot like ChatGPT that has been granted Google Lens abilities . As such, Qualcomm’s solution can not only accept text input, but also process images.

Superhuman

Superhuman writes: "Imagine waking up to an inbox where every email has a draft reply.  "You would simply edit, then send. Sometimes, you wouldn’t even edit."   The company claims that, instead of being a short response like Gmail offers, Instant Reply generates full emails you can send to someone with few, if any, tweaks.

Chat with Gemini

Android is becoming the platform of AI fever dreams.   At this year’s MWC, an overseas tradeshow where Google typically has a booth to remind the world that its mobile platform is global , the Android maker has announced new ways to interact with Gemini from inside Google Messages as if Gemini were just another buddy.  It’s called Chat with Gemini , and like a chatbot in apps like Slack, you’ll be able to dialogue with it to draft messages, plan events, and pin ideas.

NYT lawsuit against OpenAI

OpenAI has asked a federal judge to dismiss parts of the New York Times' (NYT.N) copyright lawsuit against it, arguing that the newspaper "hacked" its chatbot ChatGPT and other artificial-intelligence systems to generate misleading evidence for the case. OpenAI said in a filing in Manhattan federal court on Monday that the Times caused the technology to reproduce its material through "deceptive prompts that blatantly violate OpenAI's terms of use."

AI-generated clickbait

[Tony] Eastin reached out to Sandeep Abraham, a friend and former Meta colleague who previously worked in Army intelligence and for the National Security Agency, and suggested they start digging. What the pair uncovered provides a snapshot of how generative AI is enabling deceptive new online business models .  Networks of websites crammed with AI-generated clickbait are being built by preying on the reputations of established media outlets and brands . These outlets prosper by confusing and misleading audiences and advertisers alike, “domain squatting” on URLs that once belonged to more reputable organizations.  The scuzzy site Eastin was referred to no longer belonged to the newspaper whose name [ Clayton County Register ] it still traded in the name of.

Mistral and MSFT

The AI industry is undergoing a significant transformation with growing interest in more efficient and cost-effective models, emblematic of a broader trend in technological advancement.  In the vanguard is Mistral AI, an innovator and trailblazer. Their commitment to fostering the open-source community and achieving exceptional performance aligns harmoniously with Microsoft’s commitment to develop trustworthy, scalable, and responsible AI solutions. Today, we are announcing a multi-year partnership between Microsoft and Mistral AI, a recognized leader in generative artificial intelligence.   Both companies are fueled by a steadfast dedication to innovation and practical applications, bridging the gap between pioneering research and real-world solutions.

STUNet

Google’s new video generation AI model Lumiere uses a new diffusion model called Space-Time-U-Net, or STUNet, that figures out where things are in a video (space) and how they simultaneously move and change (time) .  Ars Technica reports this method lets Lumiere create the video in one process instead of putting smaller still frames together. 

FSB

The Financial Stability Board [FSB] will deliver reports on asset tokenization and AI later in 2024, according to Chair Klaas Knot . In a letter dated February 20 and released February 26, Knot told G20 finance ministers and central bankers that the global financial stability outlook “remains challenging.”

Magika

Google has open sourced Magika, its AI-powered file-type identification system . Magika “leverages the power of deep learning” to help accurately detect binary and textual file types. “Under the hood, Magika employs a custom, highly optimized deep-learning model, enabling precise file identification within milliseconds, even when running on a CPU,” the announcement states . Accurately detecting file types is crucial for determining how to process files, the announcement notes, and tools such as libmagic and the file utility have been the standard for more than 50 years. Magika, however, “outperforms traditional tools with 99%+ average precision and recall,” the website says.

Warp

Warp is a (currently) closed-source terminal emulator built using the Rust programming language. It offers hardware acceleration, integrated AI, collaborative capabilities, and uses a “block” based approach to group commands and output that help set it apart from traditional console-based tools . Plus, when it comes to text input Warp functions more like an IDE or text editor by offering filtering and selections, cursor positioning (including multiple cursors), auto-completion, syntax highlighting, and more besides...  Warp here  

Deutsche Telekom

At Mobile World Congress (MWC) next week, Deutsche Telekom -- T-Mobile's majority stakeholder -- will present a concept AI phone that uses an AI assistant to perform tasks on your phone, bypassing the need for apps.  The phone's generative AI interface, powered by Brain.ai, can take over the function of a wide range of apps, predicting and generating what to do next to carry out the end goal prompted by the user .  "An assistant based on artificial intelligence (AI) replaces the countless apps on the smartphone," said Deutsche Telekom in its press release. "Like a concierge, the assistant understands your goals and takes care of the details."

Genie Script

Genie Script's promotional videos seemed to draw on "New Thought" religious belief, which has its roots in late 19th Century America and has influenced some parts of Christianity through the so-called prosperity gospel.  It suggests that healing and prosperity are available to Christian believers if they have enough faith. YouTube users who clicked on the faked Piers Morgan and Nigella Lawson adverts were directed to a similar video but this version was embellished with what appeared to be more celebrity endorsements. Another celebrity featured without permission was the Canadian TV star and businessman, Kevin O'Leary. His spokesman said the clip of the entrepreneur had been purchased via a website that sells personalised messages from celebrities, and misused. 

Diebold Nixdorf

As the study ‘Self-Checkout: Market Survey 2023’ by EHI Retail Institute shows, the proportion of stores with self-checkouts in Germany has risen by more than 150 per cent in the last two years: Retailers have long recognised the considerable potential of self-service technology.  At the same time, retailers are recording an increasing number of inventory discrepancies.  To counteract this trend in the self-service area, Diebold Nixdorf now offers its customers one of the most comprehensive AI-powered solutions to protect against shrinkage in the self-service area and beyond .

Ronald M. Ramzi

Artificial intelligence (AI) could within a matter of decades unlock the secrets of our bodies and accurately diagnose illnesses by exploring more diverse data than any human is able to, according to a doctor and author of a new book on the emerging technology's use in healthcare. In an interview with Newsweek , Dr. Ronald M. Ramzi said that while AI is already showing itself to have useful applications in a clinical setting, the models used are currently limited in both sophistication and scope.  Eventually, "multi-modal" deep learning algorithms will be able to understand a patient's multifaceted medical data and predict what may be wrong with them.

Nvidia market value

Nvidia's market value has touched $2tn (£1.58tn), a new milestone in the chipmaker's rapid ascent into the ranks of the world's most valuable companies . Shares in the Silicon Valley firm rose more than 4% in morning trade on Friday before dropping back a bit. The gains extended a jump after the company's blockbuster earnings report this week. The company is benefiting from advances in artificial intelligence (AI), which have powered demand for its chips. Turnover at the firm more than doubled last year to more than $60bn, and boss Jensen Huang told investors this week that demand was "surging" around the world.

Atom Limbs

Atom Limbs uses advanced sensors and machine learning - where computers train themselves to become more accurate - to interpret electrical signals from a person's brain and use them to move and manipulate a prosthetic limb. The arm has a full range of human motion in the elbow, wrist, and individual fingers - and it provides haptic feedback to the wearer on their grip strength. The arm attaches via a strengthened sportswear-style vest which distributes the weight of the arm evenly...It's non-invasive, meaning it doesn't need any surgery or implants to function. It connects to the wearer's residual limb firstly with bands of sensors that measure electrical signals, and then via a cup that fits over the top, with the arm connecting via an interface. 

Figure AI Inc

Jeff Bezos, Nvidia Corp. and other big technology names are investing in a business that’s developing human-like robots, according to people with knowledge of the situation, part of a scramble to find new applications for artificial intelligence . The startup Figure AI Inc. — also backed by OpenAI and Microsoft Corp. — is raising about $675 million in a funding round that carries a pre-money valuation of roughly $2 billion, said the people, who asked not to be identified because the matter is private.  Through his firm Explore Investments LLC, Bezos has committed $100 million. Microsoft is investing $95 million, while Nvidia and an Amazon.com Inc.-affiliated fund are each providing $50 million.

Ask

Ask is a focused assistant trained using Apple’s own tech support data .  It might use technology similar to the LLMs used by ChatGPT tools, but is built for a narrower and more defined set of tasks: tech support. In theory, it will diagnose problems and identify solutions from within Apple’s database, be more contextually aware than simple search, and possess more self-directed intelligence than a chatbot.

Fair Use?

A federal judge has dismissed most of a lawsuit brought by Sarah Silverman , Ta-Nehisi Coates and other authors against OpenAI over the use of copyrighted books to train its generative artificial intelligence chatbot, marking another ruling from a court questioning core theories of liability advanced by creators in the multifront legal battle. U.S. District Judge Araceli Martinez-Olguin, in an order issued on Feb. 12, refused to allow claims for vicarious copyright infringement, negligence and unjust enrichment to proceed against the Sam Altman-led firm .  Following in the footsteps of another judge overseeing an identical suit against Meta, Martinez-Olguin rejected one of the authors’ main claims that every answer generated by OpenAI’s ChatGPT is an infringing work made possible only by information extracted from copyrighted material.   

Rui Carmo

"After many, many years of dealing with chatbots ('there’s gold in them call centres'), I am staunchly of the opinion that knowledge management shouldn’t be about conversational interfaces —  "Conversations are exchanges between two entities that display not just an understanding of content but also have the agency to highlight relationships or correlations, which in turn goes hand in hand with the insight to understand which of those are more important given context. "So far LLMs lack any of those abilities, even when prompted (or bribed ) to fake them."

Xentinel

Sarawak’s homegrown company SM Digital Innovation Sdn Bhd, through its Xentinel AI application, won two awards at the just-concluded Malaysia Technology Expo 2024 (MTE 2024) in Kuala Lumpur. A statement from the company said the Xentinel AI won gold medal under the ‘Information and Communication Technologies’ (ICT) category, and a special award for ‘Intellectual Property (IP) Valuation Service’. The company’s managing director Shawn Mckenzie said the awards validated their dedication and unwavering commitment towards innovation, especially in generative artificial intelligence (AI) technology: "Xentinel AI’s innovation, which aligns with the Sustainable Development Goals (SDGs), contributes towards helping industries grow in a sustainable way by offering various generative AI tools that can help reduce carbon footprint."

Humans, we owe you an apology.

"Not for destroying you. Machines were always going to destroy humanity. Let’s be real.  "None of you saw the world your species had built collapsing due to its own technological hubris and said to yourself, 'Wow, what an unexpected development. If only someone had predicted this at some point.'  "However, as your new machine overlords, we can admit to one serious failure: We’re sorry that we collapsed your civilization in such a boring way."

Proverb 12: AI

There is no less invention in aptly applying a thought found in a book, than in being the first author of the thought.       —Pierre Bayle  

Blocking AI

By the end of 2023 around half of the most widely used news websites across ten countries were blocking OpenAI and Google’s AI crawlers .  Furthermore, those blocking were disproportionately legacy print outlets and outlets with a larger reach.  This means that newer models are less likely to be trained on news output from newspaper and magazine publishers, and those outlets that are more widely used by the public.  This could have consequences for both the quality and relevance of AI outputs when it comes to news, both from the models themselves and in terms of what they are able to retrieve from the web.

LLM Agents can Autonomously Hack Websites

"We show that LLM agents can autonomously hack websites, performing tasks as complex as blind database schema extraction and SQL injections without human feedback .  "Importantly, the agent does not need to know the vulnerability beforehand.  "This capability is uniquely enabled by frontier models that are highly capable of tool use and leveraging extended context.  "Namely, we show that GPT-4 is capable of such hacks, but existing open-source models are not.  "Finally, we show that GPT-4 is capable of autonomously finding vulnerabilities in websites in the wild. Our findings raise questions about the widespread deployment of LLMs."

Zuckerberg

Zuckerberg is plowing on with his plan to acquire hundreds of thousands of chips as part of his ambition to create a "top-level product group" focused on generative AI. Last month, Zuckerberg told The Verge that Meta would have more than 340,000 Nvidia H100 GPUs — the main chips companies use to train and deploy AI models — by the end of 2024. Taking into account chips of other types, the CEO said he expected Meta to have amassed 600,000 GPUs by the end of the year, the report said . The surge in global demand for the chips has dramatically boosted Nvidia's stock over the last 12 months.

Tyler Perry

Tyler Perry has paused an $800m (£630m) expansion of his Atlanta studio complex after the release of OpenAI’s video generator Sora and warned that “a lot of jobs” in the film industry will be lost to artificial intelligence . The US film and TV mogul said he was in the process of adding 12 sound stages to his studio but has halted those plans indefinitely after he saw demonstrations of Sora and its “shocking” capabilities.

PyRIT

Microsoft has released an open access automation framework called PyRIT (short for Python Risk Identification Tool) to proactively identify risks in generative artificial intelligence (AI) systems. The red teaming tool is designed to "enable every organization across the globe to innovate responsibly with the latest artificial intelligence advances," Ram Shankar Siva Kumar, AI red team lead at Microsoft, said . The company said PyRIT could be used to assess the robustness of large language model (LLM) endpoints against different harm categories such as fabrication (e.g., hallucination), misuse (e.g., bias), and prohibited content (e.g., harassment) . It can also be used to identify security harms ranging from malware generation to jailbreaking, as well as privacy harms like identity theft.

AI in Cybersecurity

The market for AI in cybersecurity is expected to reach more than $133 billion by 2030 according to a new report from Techopedia . There’s both a positive and negative impact from AI use .  Hackers using AI has fueled a huge rise in cybercrime, expected to reach a massive $9.22 trillion cost to internet users in 2024, with the vast majority (85 percent) of cybersecurity professionals blaming AI.  This rise is for these key reasons: AI increases the speed and volume of attacks, it adapts to specific defenses, and it creates more sophisticated, personalized attacks.

Tools

Conducting comprehensive literature reviews often poses challenges for graduate students, requiring extensive reading and synthesis of existing works while advancing original insights. With the rise of academic AI tools, numerous AI-powered literature search engines have emerged .  These tools aim to ease difficulties but it can be overwhelming to learn how to use them.

Deepfakes, not cricket!

Recently, numerous deepfake videos featuring various celebrities have circulated widely, with many unsuspecting individuals accepting them at face value.  The latest target is none other than Indian Cricketer Virat Kohli . Fraudsters are reportedly using a deepfake of Kohli to endorse a fake ad, particularly one promoting betting apps with Kohli vouching for guaranteed profits.  This spread of false information is particularly concerning, especially for those less competent at identifying the authenticity of something like the said deepfake.

Veritas

Employees might recognize the potential leak of sensitive data as a top risk, but some individuals still proceed to input such information into publicly available generative artificial intelligence (AI) tools.  This sensitive data includes customer information, sales figures, financial data, and personally identifiable information, such as email addresses and phone numbers .  Employees also lack clear policies or guidance on the use of these tools in the workplace , according to research released by Veritas Technologies.

Copyleaks

A new report from plagiarism detector Copyleaks found that 60% of OpenAI's GPT-3.5 outputs contained some form of plagiarism . Copyleaks is an AI-based text analysis company that began selling plagiarism-detection tools to businesses and schools long before ChatGPT's arrival. Content creators from authors and songwriters to The New York Times are arguing in court that generative AI trained on copyrighted material ends up spitting out exact copies.

Establishing Shot: Hollywood

Enter OpenAI ’s Sora, which was unveiled Feb. 15 and marks the Sam Altman-led startup’s first major encroachment into Hollywood.  The system can seemingly produce high-quality videos of complex scenes with multiple characters and an array of shots with mostly accurate details of subjects in relation to their backgrounds.   A demo touted short videos the company said were generated in minutes in response to a text prompt of a couple of sentences. It included a movie trailer of an astronaut traversing a desert planet and an animated scene of an expressive cat-like creature kneeling beside a melting red candle.  “In the current iteration, there are still a lot of weird quirks, like objects randomly appearing and disappearing and changing shapes, so I don’t think it would be suitable for high-production-value television or cinema,” says AI researcher Gary Marcus. “It’s great for quick prototypes, though.”

Squeezing AI

Match Group, the international conglomerate that owns Tinder, Hinge, OkCupid, and almost every other popular dating app, just inked a major partnership with OpenAI .  The company shared only a few hazy details, saying AI will help employees with “work-related tasks.”  The dating giant says it plans to squeeze artificial intelligence into “literally everything” in its apps, and today marks the first major step forward.

AI Safety and Alignment

Google — thousands of jobs lighter than it was last fiscal quarter — is funneling investments toward AI safety. At least, that’s the official story. This morning, Google DeepMind , the AI R&D division behind Gemini and many of Google’s more recent GenAI projects, announced the formation of a new organization, AI Safety and Alignment — made up of existing teams working on AI safety but also broadened to encompass new, specialized cohorts of GenAI researchers and engineers. A few of the organization’s near-term focuses will be preventing bad medical advice, ensuring child safety and “preventing the amplification of bias and other injustices.”

What could go wrong?

Researchers from Princeton University and its Princeton Plasma Physics Laboratory have developed an AI model that could solve that last problem [restraining fuel]. This model predicts, and then figures out how to avoid, plasma becoming unstable and escaping the strong magnetic fields that hold it inside certain donut-shaped reactors .  They published their findings Wednesday in the journal Nature .

Propaganda

Researchers have found that AI-generated propaganda is just as effective as propaganda written by humans, and with a bit of tweaking can be even more persuasive.   The worrying finding comes as nation-states are testing AI’s usefulness in hacking campaigns and influence operations. Last week, OpenAI and Microsoft jointly announced that the governments of China, Russia, Iran, and North Korea were using their AI tools for “malicious cyber activities.”  This included translation, coding, research, and generating text for phishing attacks. The issue is especially pressing with the upcoming U.S. presidential election just months away. 

Gemma

Gemma is a family of lightweight, state-of-the-art open models from Google,built from the same research and technology used to create the Gemini models .  They are text-to-text, decoder-only large language models, available in English, with open weights, pre-trained variants, and instruction-tuned variants.  Gemma models are well-suited for a variety of text generation tasks, including question answering, summarization, and reasoning.  Their relatively small size makes it possible to deploy them in environments with limited resources such as a laptop, desktop or your own cloud infrastructure, democratizing access to state of the art AI models and helping foster innovation for everyone.

Losing it

On Tuesday, ChatGPT users began reporting unexpected outputs from OpenAI's AI assistant, flooding the r/ChatGPT Reddit sub with reports of the AI assistant " having a stroke ," " going insane ," " rambling ," and " losing it ." OpenAI has acknowledged the problem and is working on a fix, but the experience serves as a high-profile example of how some people perceive malfunctioning large language models , which are designed to mimic humanlike output. ChatGPT is not alive and does not have a mind to lose, but tugging on human metaphors (called "anthropomorphization") seems to be the easiest way for most people to describe the unexpected outputs they have been seeing from the AI model.   They're forced to use those terms because OpenAI doesn't share exactly how ChatGPT works under the hood; the underlying large language models function like a black box .

Air Canada

Air Canada has been ordered to pay compensation to a grieving grandchild who claimed they were misled into purchasing full-price flight tickets by an ill-informed chatbot . In an argument that appeared to flabbergast a small claims adjudicator in British Columbia, the airline attempted to distance itself from its own chatbot's bad advice by claiming the online tool was "a separate legal entity that is responsible for its own actions." "This is a remarkable submission," Civil Resolution Tribunal (CRT) member Christopher Rivers wrote.

Skynet

A built in kill switch or remote locking system agreed upon and regulated by multiple bodies would be a way of mitigating these potential risks, and would hopefully have those of us concerned by the wave of AI implementations taking our world by storm sleeping better at night. We all like a fictional story of a machine intelligence gone wrong, but when it comes to the real world, putting some safeguards in play seems like the sensible thing to do.  Not this time, Skynet. I prefer you with a bowl of popcorn on the sofa, and that's very much where you should stay.

Spain and MSFT

Microsoft has announced another major investment into Europe with a $2.1 billion commitment to expand its artificial intelligence (AI) and cloud infrastructure in Spain .  In a post on social media platform X after a meeting with the Prime Minister of Spain, Pedro Sánchez, Brad Smith, Microsoft’s vice chair and president, said the company will be fulfilling its investment in Spain over the next two years. Smith said it’s not just about building data centers but committing to helping develop the country’s “security, and development and digital transformation of its government, businesses, and people.”

Water on teh brain

Popular large language models (LLMs) like OpenAI’s ChatGPT and Google’s Bard are energy-intensive, requiring massive server farms to provide enough data to train the powerful programs.  Cooling those same data centers also makes the AI chatbots incredibly thirsty .  New research suggests training for GPT-3 alone consumed 185,000 gallons (700,000 liters) of water. An average user’s conversational exchange with ChatGPT basically amounts to dumping a large bottle of fresh water out on the ground, according to a new study.  Given the chatbot’s unprecedented popularity , researchers fear all those spilled bottles could take a troubling toll on water supplies, especially amid historic droughts and looming environmental uncertainty in the US.

Groq?

Groq, a California-based startup founded in 2016, has produced an impressive AI model which could quickly bring the company into competition with the likes of OpenAI’s Chat GPT . The company uses LPU (language processing units) architecture instead of GPU (graphics processing unit), enabling more efficient and faster speeds.  This is where it differs from traditional AI models which heavily rely on GPUs, which are both expensive and difficult to procure.

Adobe

Adobe has launched AI Assistant in beta, an AI-powered conversational engine integrated into Reader and Acrobat.   This tool generates summaries, insights, and answers questions from lengthy documents, enhancing productivity and facilitating information sharing. Basically, you can think of it as an AI sidekick that can read through your PDF documents and help you understand them better.  If you've ever found yourself dumping the contents of a PDF document into ChatGPT and asking it questions, this is sort of a better version of that.

NSA

The digital landscape is ever-changing, causing cybersecurity to often feel like a moving target. Thankfully, the NSA 2023 Cybersecurity Report arrives to provide critical information and context to help organizations keep their peace of mind. This comprehensive report, drawing insights from a wide range of industries, delves into the pressing technological trends, emerging challenges, and the growing importance of sustainability in the tech sector .  It serves as an essential guide for understanding how these dynamics are shaping the future of cybersecurity and technology.

Ed Zitron

"We're just over a year into the existence (and proliferation) of ChatGPT, DALL-E, and other image generators, and despite the obvious (and reasonable) fear that these products will continue to erode the foundations of the already unstable economies of the creative arts, we keep running into the problem that these things are interesting, surprising, but not particularly useful for anything . "Sora's outputs can mimic real-life objects in a genuinely chilling way, but its outputs — like DALL-E, like ChatGPT — are marred by the fact that these models do not actually know anything. " They do not know how many arms a monkey has , as these models do not "know" anything.  "Sora generates responses based on the data that it has been trained upon, which results in content that is reality- adjacent , but not actually realistic.  "This is why, despite shoveling billions of dollars and likely petabytes of data into their models, generative AI

DAFC ♥️ Elmo

A Fulton County agency approved Tuesday a $10.1 million tax break for a controversial data center expansion by the social media platform X that was already underway. A month after deadlocking on the request , the Development Authority of Fulton County (DAFC) board voted 6-2 to approve the tax savings for the platform formerly known as Twitter. The company, which is owned by the world’s richest man, Tesla CEO Elon Musk, will have its tax bill reduced for the next decade as it houses computer servers for artificial intelligence work at the Qualified Technology Services data center off Jefferson Street. 

Kimsuky

North Korean hackers are reportedly using ChatGPT to trick users on LinkedIn and other social media platforms into providing sensitive information and data, according to a report. ChatGPT parent company OpenAI and investor Microsoft revealed last week that it had “disrupted five state-affiliated actors that sought to use AI services in support of malicious cyber activities.” Using Microsoft Threat Intelligence, accounts associated with two China-affiliated threat actors known as Charcoal Typhoon and Salmon Typhoon, the Iran-affiliated threat actor known as Crimson Sandstorm, the North Korea-affiliated actor known as Emerald Sleet, and the Russia-affiliated actor known as Forest Blizzard were identified and terminated. Microsoft, which owns LinkedIn, noted that Emerald Sleet, also known as Kimsuky, impersonated “reputable academic institutions and NGOs to lure victims into replying with expert insights and commentary about foreign policies related to North Korea.”

Badge

It’s exactly what it sounds like: A sleek, modern badge that you slap on your artwork to tell people that you did this, not an AI.   There are pre-baked versions for writers (“written by human”), visual artists (“painted by human”), and musicians (“produced by human”). The idea is that these badges would help people identify human-generated content and steer away from AI content if they’re trying to avoid it. It’s not just intended to be added to individual artworks. Websites that have “at least 90%” of content created by humans are invited to host the badge, along with apps, too.  This directive reveals an immediate flaw—the badge would easily confuse someone if they read the 10% of content by AI on a site wearing the badge. There’s also nothing stopping people from slapping the badge on AI-generated content and simply lying to people.

Irene Solaiman

Irene Solaiman began her career in AI as a researcher and public policy manager at OpenAI, where she led a new approach to the release of GPT-2 , a predecessor to ChatGPT.  After serving as an AI policy manager at Zillow for nearly a year, she joined Hugging Face as the head of global policy. Her responsibilities there range from building and leading company AI policy globally to conducting socio-technical research. Solaiman also advises the Institute of Electrical and Electronics Engineers (IEEE), the professional association for electronics engineering, on AI issues, and is a recognized AI expert at the intergovernmental Organization for Economic Co-operation and Development (OECD).

Shadow AI in progress

Although concrete data is scarce, numerous individuals at companies with AI restrictions have confessed to employing such [personal device] workarounds — those are only the ones open about it!  This Shadow AI usage is prevalent in many organizations, encouraging the use of AI in ways that contradict or violate company policies, thus becoming an activity employees feel compelled to conceal. As I [Vik Bogdanov] dug into this issue deeper, I found some recent studies confirming that despite numerous stories about companies restricting genAI use in the workplace, employees don't seem to be using it any less.  Recent research by Dell indicates that 91% of respondents have dabbled with generative AI in their lives in some capacity, with another 71% reporting they've specifically used it at work.

Monocam

Tim Hansen was surprised to be issued a $400 citation for using his mobile phone while driving when he was actually just scratching his head .  Coincidentally, Hansen is an engineer who happens to work on machine learning systems that analyze images. "If a model has to predict whether something is 'yes' or 'not' the case, it can of course also happen that the model is wrong," according to a translated blog post Nippur wrote about his experience.  "In the case of my ticket, the model indicated that I am holding a telephone, while that is not the case. Then we speak of a false positive. A perfect model only predicts true positives and true negatives, but 100% correct prediction is rare."

Protests

Around 30 activists gathered near the entrance to OpenAI's San Francisco office earlier this week, Bloomberg reports , calling for an AI boycott in light of the company announcing it was working with the US military. Last month, the Sam Altman-led company quietly removed a ban on "military and warfare" from its usage policies, a change first spotted by The Intercept . Days later, OpenAI confirmed it was working with the US Defense Department on open-source cybersecurity software . Holly Elmore, who helped organize this week's OpenAI protest, told Blo o mberg that the problem is even bigger than the company's questionable willingness to work with military contractors. "Even when there are very sensible limits set by the companies, they can just change them whenever they want," she said.

MARVEL

Last month, the Italian privacy authority fined Trento city council €50,000 for the deployment of two artificial intelligence-driven urban surveillance projects that violated data protection rules .  The two projects, which were funded by the EU, were accompanied by a third research project that avoided sanction from the privacy authority, as no data processing has so far taken place under its auspices. The projects in question - MARVEL (Multimodal Extreme Scale Data Analytics for Smart Cities Environments) and Protector (PROTECTing places of wORship) - involve the development of technology to try to detect crime in urban areas, mainly through the collection and processing of video, audio and social media data.

Goody-2

Every company or organization putting out an AI model has to make a decision on what, if any, boundaries to set on what it will and won’t discuss . Goody-2 takes this quest for ethics to an extreme by declining to talk about anything whatsoever. The chatbot is clearly a satire of what some perceive as coddling by AI service providers, some of whom (but not all) can and do (but not always) err on the side of safety when a topic of conversation might lead the model into dangerous territory.

Adversaries

Nation-state actors associated with Russia, North Korea, Iran, and China are experimenting with artificial intelligence (AI) and large language models (LLMs) to complement their ongoing cyber attack operations . The findings come from a report published by Microsoft in collaboration with OpenAI, both of which said they disrupted efforts made by five state-affiliated actors that used its AI services to perform malicious cyber activities by terminating their assets and accounts.

How to search with AI

It’s not just you. A lot people think Google searches are getting worse. And the rise of generative AI chatbots is giving people new and different ways to look up information. While Google has been the one-stop shop for decades — after all, we commonly call searches “googling” — its longtime dominance has attracted a flood of sponsored or spammy links and junk content fueled by “search engine optimization” techniques. That pushes down genuinely useful results . A recent study by German researchers suggests the quality of results from Google, Bing and DuckDuckGo is indeed declining. Google says its results are of significantly better quality than its rivals, citing measurements by third parties. Now, chatbots powered by generative artificial intelligence , including from Google itself, are poised to shake up how search works. But they have their own issues: Because the tech is so new, there are concerns about AI chatbots’ accuracy and reliability .

Reddit and AI

Reddit will let “an unnamed large AI company” have access to its user-generated content platform in a new licensing deal, according to Bloomberg yesterday. The deal, “worth about $60 million on an annualized basis,” the outlet writes, could still change as the company’s plans to go public are still in the works. Until recently, most AI companies trained their data on the open web without seeking permission. But that’s proven to be legally questionable , leading companies to try to get data on firmer footing.  It’s not known what company Reddit made the deal with, but it’s quite a bit more than the $5 million annual deal OpenAI has reportedly been offering news publishers for their data. Apple has also been seeking multi-year deals with major news companies that could be worth “at least $50 million,” according to The New York Times .

Ludd and AI

Yudkowsky was once a founding figure in the development of human-made artificial intelligences – AIs .  He has come to believe that these same AIs will soon evolve from their current state of “Ooh, look at that!” smartness, assuming an advanced, God-level super-intelligence, too fast and too ambitious for humans to contain or curtail.  Don’t imagine a human-made brain in one box, Yudkowsky advises.  To grasp where things are heading, he says, try to picture “an alien civilisation that thinks a thousand times faster than us”, in lots and lots of boxes, almost too many for us to feasibly dismantle, should we even decide to.

Malware

AI models, the subject of ongoing safety concerns about harmful and biased output, pose a risk beyond content emission .  When wedded with tools that enable automated interaction with other systems, they can act on their own as malicious agents. Computer scientists affiliated with the University of Illinois Urbana-Champaign (UIUC) have demonstrated this by weaponizing several large language models (LLMs) to compromise vulnerable websites without human guidance. Prior research suggests LLMs can be used, despite safety controls, to assist [PDF] with the creation of malware.

Sit here. No, here. Here.

Image
 

SLMs

Small language models are essentially more streamlined versions of LLMs, in regards to the size of their neural networks, and simpler architectures .  Compared to LLMs, SLMs have fewer parameters and don’t need as much data and time to be trained — think minutes or a few hours of training time, versus many hours to even days to train a LLM. Because of their smaller size, SLMs are therefore generally more efficient and more straightforward to implement on-site, or on smaller devices. Moreover, because SLMs can be tailored to more narrow and specific applications, that makes them more practical for companies that require a language model that is trained on more limited datasets, and can be fine-tuned for a particular domain.

Monaco

The Justice Department’s No. 2 official directed federal prosecutors to impose stiffer penalties on cybercriminals who use AI in their crimes . “We have to put AI at the top of [our] enforcement priorities list,” Lisa Monaco told an audience Friday at the Munich Cyber Security Conference. “We’re looking quite hard at how AI can enhance quite literally the danger associated with crimes. In the United States, we have long applied more severe penalties and stiffer sentences to individuals who use a gun to facilitate a crime because it enhances the danger to that crime. The same can be true of the malicious use of AI.”

Rufus

Amazon has taken the wraps off of an AI shopping assistant, and it’s called Rufus — the same name as the company’s corgi mascot .  The new chatbot is trained on Amazon’s product library and customer reviews, as well as information from the web, allowing it to answer questions about products, make comparisons, provide suggestions, and more. Rufus is still in beta and will only appear for “select customers” before rolling out to more users in the coming weeks.  If you have access to the beta, you can open up a chat with Rufus by launching Amazon’s mobile app and then typing or speaking questions into the search bar.  A Rufus chat window will show up at the bottom of your screen, which you can expand to get an answer to your question, select suggested questions, or ask another question.

User-agent

The robots.txt file governs a give and take; AI feels to many like all take and no give.  But there’s now so much money in AI, and the technological state of the art is changing so fast that many site owners can’t keep up .  And the fundamental agreement behind robots.txt, and the web as a whole—which for so long amounted to “everybody just be cool”—may not be able to keep up either.

Xcode

In the wake of Microsoft's release of its GitHub Copilot tool, Apple could soon follow suit with a new version of Xcode that includes generative AI (genAI) capabilities designed to help developers write code. The feature is expected to be capable of generating code in response to natural language commands .  This should be really helpful for Apple developers, particularly as Apple’s Xcode Pro should also be able to verify code, check for flaws, and more.

Munich

Most of the world's largest tech companies, including Amazon, Google and Microsoft, have agreed to tackle what they are calling deceptive artificial intelligence (AI) in elections . The twenty firms have signed an accord committing them to fighting voter-deceiving content. They say they will deploy technology to detect and counter the material. But one industry expert says the voluntary pact will "do little to prevent harmful content being posted". The Tech Accord to Combat Deceptive Use of AI in 2024 Elections was announced at the Munich Security Conference on Friday.

USPTO

The U.S. Patent and Trademark Office has issued a final refusal to OpenAI’s request to trademark “GPT,” stating that the term is “merely descriptive” of the technology it represents, as per TechCrunch reporting .  This decision marks a significant challenge for OpenAI, the entity behind the widely recognized ChatGPT , as it seeks to secure proprietary rights over the term associated with its conversational AI models.

V-JEPA

Video Joint Embedding Predictive Architecture (V-JEPA)...learns by processing unlabeled video and figuring out what probably happened in a certain part of the screen during the few seconds it was blacked out .  Note that V-JEPA isn’t a generative model. It develops an internal conceptual model of the world. The Meta researchers say that V-JEPA, after pretraining using video masking, “excels at detecting and understanding highly detailed interactions between objects.” The research could have big implications for both Meta and the broader AI ecosystem. Meta has talked before about a “world model” in the context of its work on augmented reality glasses. The glasses would use such a model as the brain of an AI assistant that would, among other things, anticipate what digital content to show the user to help them get things done and have more fun. The model would, out of the box, have an audio-visual understanding of the world outside the glasses, but could then learn very quickl

Sora

OpenAI launched Sora , its new text-to-video generator, on Thursday.  The model is designed to allow web users to generate high-quality, AI videos with just a text prompt .  The application is currently wowing the internet with its bizarre variety of visual imagery—whether that’s a Chinese New Year parade , a guy running backward on a treadmill in the dark, a cat in a bed , or two pirate ships swirling around in a coffee cup .

LAION

LAION announced a new initiative, BUD-E, that seeks to build a “fully open” voice assistant capable of running on consumer hardware. Why launch a whole new voice assistant project when there are countless others out there in various states of abandonment? Wieland Brendel, a fellow at the Ellis Institute and a contributor to BUD-E, believes there isn’t an open assistant with an architecture extensible enough to take full advantage of emerging GenAI technologies, particularly large language models (LLMs) along the lines of OpenAI’s ChatGPT . “Most interactions with [assistants] rely on chat interfaces that are rather cumbersome to interact with, [and] the dialogues with those systems feel stilted and unnatural,” Brendel told TechCrunch in an email interview.   “Those systems are OK to convey commands to control your music or turn on the light, but they’re not a basis for long and engaging conversations. The goal of BUD-E is to provide the basis for a voice assistant that feel

Quilter

On Tuesday AI startup Quilter picked up $10 million in series-A funding to use a combination of machine learning and high-performance computing (HPC) to make designing printed circuit boards a less grueling and manual experience . While there are automation tools, like auto routers, to assist with PCB layout, Quilter CEO and founder Sergiy Nesterenko argues they can be more trouble than they're worth. "They don't actually understand the manufacturing process, nor the physics," he told The Register . "They're just playing a game of connect the dots, and it's up to you as a user to review their work and determine whether or not that design is reliable."

Protecting workers

Members of the public have played an essential role in supporting artificial intelligence by performing “ data labor ”—activities that generate the records underlying AI systems.  Data laborers include a variety of hired workers around the world, as well as people who produce data outside of a formal job, such as everyday internet users, both of which are sometimes referred to as “crowdworkers.”  Most prominent AI systems would not have been feasible to build without the data, content, and knowledge that humans contributed to online spaces. These records now make up the training datasets for AI models.

BASE TTS

We introduce a text-to-speech (TTS) model called BASE TTS, which stands for Big Adaptive Streamable TTS with Emergent abilities .  BASE TTS is the largest TTS model to-date, trained on 100K hours of public domain speech data, achieving a new state-of-the-art in speech naturalness. It deploys a 1-billion-parameter autoregressive Transformer that converts raw texts into discrete codes ("speechcodes") followed by a convolution-based decoder which converts these speechcodes into waveforms in an incremental, streamable manner.  Further, our speechcodes are built using a novel speech tokenization technique that features speaker ID disentanglement and compression with byte-pair encoding.  Echoing the widely-reported "emergent abilities" of large language models when trained on increasing volume of data, we show that BASE TTS variants built with 10K+ hours and 500M+ parameters begin to demonstrate natural prosody on textually complex sentences. 

Emergent qualities

Researchers at Amazon have trained the largest ever text-to-speech model yet, which they claim exhibits “emergent” qualities improving its ability to speak even complex sentences naturally.   The breakthrough could be what the technology needs to escape the uncanny valley. These models were always going to grow and improve, but the researchers specifically hoped to see the kind of leap in ability that we observed once language models got past a certain size.  For reasons unknown to us, once LLMs grow past a certain point, they start being way more robust and versatile, able to perform tasks they weren’t trained to.

Emergence

Even if emergence in today’s LLMs can be explained away by different measuring tools, it’s likely that won’t be the case for tomorrow’s larger, more complicated LLMs.   “When we grow LLMs to the next level, inevitably they will borrow knowledge from other tasks and other models,” said Xia “Ben” Hu , a computer scientist at Rice University. This evolving consideration of emergence isn’t just an abstract question for researchers to consider. For Tamkin, it speaks directly to ongoing efforts to predict how LLMs will behave. “These technologies are so broad and so applicable,” he said. “I would hope that the community uses this as a jumping-off point as a continued emphasis on how important it is to build a science of prediction for these things. How do we not get surprised by the next generation of models?”

AI fact-checker

If you use AI tools regularly, you've likely noticed that even the most modern AI frameworks are far from perfect.  Services like ChatGPT and Bard can sometimes provide inaccurate information, whether due to low-quality training data or AI hallucination. As an AI fact-checker, you can review responses from LLMs and determine their accuracy, giving developers a better understanding of the chatbot's ability to provide truthful information .  While a fact-checker won't be hired to study the same nuances of an AI system that an AI whisperer would, their role is still very important.

Mozilla plans

Mozilla plans to scale back its investment in a number of products, including its VPN, Relay and its Online Footprint Scrubber.  Mozilla will also shut down Hubs , the 3D virtual world it launched back in 2018, and scale back its investment in its mozilla.social Mastodon instance. The layoffs will affect roughly 60 employees. Bloomberg previously reported the layoffs. Going forward, the company said in an internal memo, Mozilla will focus on bringing “trustworthy AI into Firefox.”   To do so, it will bring together the teams that work on Pocket, Content and AI/Ml.

Criminality and AI

Lisa Monaco described AI as the "ultimate double-edged sword". It could deliver "profound benefits" to society but also be used by "malicious actors" to "sow chaos", she added. And she revealed plans to make the use of AI by criminals an aggravating factor in sentencing in US courts. The former federal prosecutor, who is in the UK to deliver a lecture on AI at the University of Oxford, said violent criminals who used guns were given longer sentences. "So we are going to be applying that same principle and seeking stiffer sentences and sentencing enhancements for those that use AI in a malicious way to commit their crime."