Showing posts from April, 2024

Part of USDA Framework

"FNS encourages SLTT agencies to make training and educational opportunities related to AI available widely to SLTT agency staff to advance innovation and promote responsible use.  "SLTT agencies should ensure adequate training in relevant AI topics for staff who procure, design, develop, enhance, and/or maintain AI-enabled technologies. "Before using AI for a given purpose, it will be critical for SLTT agencies to identify the expected benefit of the AI (e.g., improved timeliness, an improved customer experience) and seek to estimate the anticipated gain from using AI.   "SLTT agencies should identify quantitative or qualitative measures to validate that expected benefits or gains were realized after the AI is in use.  "AI should not be used with the objective of achieving unspecified or unknown potential gains. SLTT agencies should work with FNS to identify acceptable thresholds for reliability, accuracy, and trustworthiness before implementing AI and should

Peer reviewed 👩‍🏫

"In this groundbreaking and practical guide, teachers will discover how to harness and manage AI as a powerful teaching tool .  "José Antonio Bowen and C. Edward Watson present emerging and powerful research on the seismic changes AI is already creating in schools and the workplace, providing invaluable insights into what AI can accomplish in the classroom and beyond. "By learning how to use new AI tools and resources, educators will gain the confidence to navigate the challenges and seize the opportunities presented by AI.  "From interactive learning techniques to advanced assignment and assessment strategies, this comprehensive guide offers practical suggestions for integrating AI effectively into teaching and learning environments. Bowen and Watson tackle crucial questions related to academic integrity, cheating, and other emerging issues."

Moar suits

Eight daily newspapers owned by Alden Global Capital sued OpenAI and Microsoft on Tuesday, accusing the tech companies of illegally using news articles to power their A.I. chatbots. The publications — The New York Daily News, The Chicago Tribune, The Orlando Sentinel, The Sun Sentinel of Florida, The San Jose Mercury News, The Denver Post, The Orange County Register and The St. Paul Pioneer Press — filed the complaint in federal court in the U.S. Southern District of New York. All are owned by MediaNews Group or Tribune Publishing, subsidiaries of Alden, the country’s second-largest newspaper operator. In the complaint, the publications accuse OpenAI and Microsoft of using millions of copyrighted articles without permission to train and feed their generative A.I. products, including ChatGPT and Microsoft Copilot. The lawsuit does not demand specific monetary damages, but it asks for a jury trial and says the publishers are owed compensation for the use of the content.

Autonomous weapons

Politicians in Austria have called for regulation of the use of artificial intelligence in weapon systems as concerns arise around machines that can kill people without any human intervention. On Monday (April 29), the Austrian Federal Ministry for European and International Affairs held a conference where Austria’s Foreign Minister Alexander Schallenberg declared this the “Oppenheimer moment of our generation.” As artificial intelligence continues to advance, Schallenberg has urged that regulation be brought in: “At least let us make sure that the most profound and far-reaching decision, who lives and who dies, remains in the hands of humans and not of machines.”

Apple diabolique

Apple has poached dozens of artificial intelligence experts from Google and has created a secretive European laboratory in Zurich, as the tech giant builds a team to battle rivals in developing new AI models and products. According to a Financial Times analysis of hundreds of LinkedIn profiles as well as public job postings and research papers, the $2.7 trillion company has undertaken a hiring spree over recent years to expand its global AI and machine learning team.

EU investment in AI

Investment strategies in AI vary significantly among EU member states, ranging from direct research and development (R&D) funding to indirect support via business and public service digitalisation, as detailed by [Velina] Lilyanova. Spain's National Recovery and Resilience Plan (NRRP) specifically allocates funds to strengthen AI development, aiming to position the country as a leader in AI scientific excellence and innovation. The plan focuses on developing AI tools and applications in the Spanish language to enhance productivity in the private sector and efficiency in public administration. Italy's Strategic Programme on AI (2022-2024), aligning with the broader EU AI strategy, aims to make Italy a global hub for AI research and innovation by enhancing skills and attracting leading AI talents. Denmark is leveraging its strong R&D ecosystem and high digital intensity among SMEs to enhance its national digital strategy, incorporating AI to improve public administration.

noyb

noyb is filing the latest GDPR complaint against ChatGPT with the Austrian data protection authority on behalf of an unnamed complainant (described as a “public figure”) who found the AI chatbot produced an incorrect birth date for them. Under the GDPR, people in the EU have a suite of rights attached to information about them, including a right to have erroneous data corrected. noyb contends OpenAI is failing to comply with this obligation in respect of its chatbot’s output.  It said the company refused the complainant’s request to rectify the incorrect birth date, responding that it was technically impossible for it to correct. Instead it offered to filter or block the data on certain prompts, such as the name of the complainant.

Purge?

Since mid-March, the financial pressure on several signature artificial intelligence start-ups has taken a toll. Inflection AI, which raised $1.5 billion but made almost no money, has folded its original business. Stability AI has laid off employees and parted ways with its chief executive. And Anthropic has raced to close the roughly $1.8 billion gap between its modest sales and enormous expenses. The A.I. revolution, it is becoming clear in Silicon Valley, is going to come with a very big price tag. And the tech companies that have bet their futures on it are scrambling to figure out how to close the gap between those expenses and the profits they hope to make somewhere down the line. This problem is particularly acute for a group of high-profile start-ups that have raised tens of billions of dollars for the development of generative A.I., the technology behind chatbots such as ChatGPT.

War gaming

Generative AI will fundamentally reshape war gaming… by allowing senior military and political leaders to pursue better tactical solutions to unexpected crises, solve more complex logistical and operational challenges and deepen their strategic thinking. Rather than relying on human intuition, AI commanders will be able to model an adversary’s tactics almost flawlessly, allowing opposing officers to train against a range of contemporary forces at nearly no cost. Given that AI systems have become increasingly customizable, commanders will also be able to train against facsimiles of themselves, helping them overcome their own weaknesses.

Backdoors in ML models

Researchers have developed various tricks to hide their own sample backdoors in machine learning models. But the approach has been largely trial and error, lacking formal mathematical analysis of how well those backdoors are hidden. Researchers are now starting to analyze the security of machine learning models in a more rigorous way. In a paper presented at last year’s Foundations of Computer Science conference, a team of computer scientists demonstrated how to plant undetectable backdoors whose invisibility is as certain as the security of state-of-the-art encryption methods. The mathematical rigor of the new work comes with trade-offs, like a focus on relatively simple models. But the results establish a new theoretical link between cryptographic security and machine learning vulnerabilities, suggesting new directions for future research at the intersection of the two fields.

What's all this knowledge stuff, anyway?

Quantum mechanics is an extraordinarily successful scientific theory, on which much of our technology-obsessed lifestyles depend. It is also bewildering. Although the theory works, it leaves physicists chasing probabilities instead of certainties and breaks the link between cause and effect. It gives us particles that are waves and waves that are particles, cats that seem to be both alive and dead, and lots of spooky quantum weirdness around hard-to-explain phenomena, such as quantum entanglement. Myths are also rife. For instance, in the early twentieth century, when the theory’s founders were arguing among themselves about what it all meant, the views of Danish physicist Niels Bohr came to dominate. Albert Einstein famously disagreed with him and, in the 1920s and 1930s, the two locked horns in debate. A persistent myth was created that suggests Bohr won the argument by browbeating the stubborn and increasingly isolated Einstein into submission.

Vidu

China's Shengshu Technology and Tsinghua University have unveiled Vidu, a text-to-video model capable of generating 16-second clips at 1080p resolution with a single click. The announcement was made at the 2024 Zhongguancun Forum in Beijing, where they tried to position Vidu as a strong competitor to OpenAI's Sora, which by comparison can generate 60-second videos. Vidu is based on a Universal Vision Transformer (U-ViT) architecture, which the company says allows it to simulate the real physical world with multi-camera view generation. This architecture was reportedly developed by the Shengshu Technology team in September 2022 and as such would predate the diffusion transformer (DiT) architecture used by Sora.

SV3D

Stability AI recently released Stable Video 3D (SV3D), an AI model that can generate 3D-mesh object models from a single 2D image. SV3D is based on the Stable Video Diffusion model and produces state-of-the-art results on 3D object generation benchmarks. SV3D addresses the problem of Novel View Synthesis (NVS), which tries to generate the unseen portions of an object given one or more 2D images of that object: for example, generating a view of the back of an object given an image of its front. Stability AI leveraged their existing Stable Video Diffusion model, which includes camera control abilities, allowing it to generate orbital videos, where the camera makes a circle around the object of interest. This model was fine-tuned using a dataset rendered from 3D objects in the Objaverse dataset. When evaluated on the GSO and OmniObject3D benchmarks, SV3D outperformed baseline models and achieved new state-of-the-art performance.

Need therapy? AI will do…

With current physical and financial barriers to accessing care, people with mental health conditions may turn to artificial intelligence (AI)-powered chatbots for mental health relief or aid. Although they have not been approved as medical devices by the U.S. Food and Drug Administration or Health Canada, the appeal to use such chatbots may come from their 24/7 availability, personalized support and marketing of cognitive behavioural therapy. However, users may overestimate the therapeutic benefits and underestimate the limitations of using such technologies, further deteriorating their mental health. Such a phenomenon can be classified as a therapeutic misconception where users may infer the chatbot’s purpose is to provide them with real therapeutic care. With AI chatbots, therapeutic misconceptions can occur in four ways, through two main streams: the company’s practices and the design of the AI technology itself.

Help wanted: AI

"Currently, we are seeing a high demand for AI-focused roles," says Yusuf Tayob, group chief executive of Accenture Operations.   That creates an ongoing need for training both within and outside of IT departments, he says. "For example, at Accenture, 600,000 of our people to date have received training on the fundamentals of AI, and we're also training people to work effectively with AI-infused processes and use AI equitably and without bias."  Industry leaders advise becoming familiar with the range of opportunities that working with AI offers.  "It's a great time to gain skills that will help companies make decisions about how to integrate and apply AI, and how to make it sustainable," says Gill Haus, chief information officer at JPMorgan Chase.  "We'll see an increase in jobs on AI, ML, and generative AI, but we also will see how AI will make existing roles more effective and efficient by removing tedious tasks." 

NeuCyber Array BMI System

The NeuCyber Array BMI System, a self-developed brain-machine interface (BMI) system from China, was unveiled at the opening ceremony of the 2024 Zhongguancun Forum (ZGC Forum) on Thursday in Beijing. At the forum, a video demonstration revealed a remarkable feat: a monkey with its hands restrained and soft electrode filaments implanted in its brain, controlled an isolated robotic arm and grasped a strawberry by simply using its "thoughts." The NeuCyber Array BMI System fills the gap in high-performance invasive brain-machine interface technology in China, said Luo Minmin, director of the Chinese Institute for Brain Research, Beijing, which co-developed the system with NeuCyber NeuroTech (Beijing) Co., Ltd. The BMI serves as the "information highway" for the brain, facilitating communication with external devices and providing cutting-edge technologies in human-machine interaction and hybrid intelligence, Luo said.

Feature or bug 🥺

It was Valentine’s Day when Meta’s ad platform started going off the rails. RC Williams, the co-founder of the Philadelphia-based marketing agency 1-800-D2C, had set one of Meta’s automated ad tools to run campaigns for two separate clients. But when he checked the platform that day, he found that Meta had blown through roughly 75 percent of the daily ad budgets for both clients in under a couple of hours. Williams told The Verge that the ads’ CPMs, or cost per impressions, were roughly 10 times higher than normal. A usual CPM of under $28 had inflated to roughly $250, way above the industry average. That would have been bad enough if the revenue earned from those ads wasn’t nearly zero. If you’re not a marketer, this might feel like spending a week’s worth of grocery money on a prime cut of wagyu at a steakhouse, only for the waiter to return with a floppy slider. The Verge spoke to several marketers and businesses that advertise on Meta’s platforms who tell a similar story.

AI Explorer

It's an open secret that Microsoft is gearing up to supercharge Windows 11 this summer with next-gen AI capabilities that will enable the OS to be context aware across any apps and interfaces, as well as remember everything you do on your PC to enhance user productivity and search. These new capabilities are set to ship as part of a new app internally called "AI Explorer," which I'm told will be unveiled during Microsoft's special Windows event on May 20. AI Explorer will utilize next-gen neural processing unit (NPU) hardware to process these machine learning and generative AI experiences locally on the device with low latency. The feature is also said to be exclusive to devices powered by Qualcomm's upcoming Snapdragon X series chips, at least at first, as Intel and AMD play catchup in the NPU race. It will also require PCs with at least 16GB RAM. (There's already a third-party app available on Mac called Rewind.ai.)

Lenovo study

A recent Lenovo study has revealed that despite a staggering surge in AI spend across the EMEA region, businesses are facing the same challenges when it comes to adopting generative AI. According to the research, investment in AI is expected to grow 61% year-on-year in 2024, with a general consensus among IT leaders that the technology is a ‘game changer.’ However, many businesses are still struggling with the scale of deploying AI, citing concerns over the large amounts of computational power and data resources that are required to train models.

Park Road Post Production

[Geoff] Burdick, of Lightstom (sic) Entertainment, says the AI they use to restore films is not the same as an AI image generator like Midjourney or OpenAI’s forthcoming text-to-video model Sora. “It’s not a question of the negative being damaged,” Burdick says. “But back on the set, maybe you picked the shot that had the most spectacular performance, but the focus puller was a bit off, so it’s a bit soft. There could be a million reasons why it’s not perfect. So now there’s an opportunity to just go in and improve it.” The executive admits that the restorers must feather the tool as “cracking the knob all the way will make it look like garbage.” But he adds that “if we can make it look a little better, we might as well.”

EyeEm

EyeEm, the Berlin-based photo-sharing community that exited last year to Spanish company Freepik after going bankrupt, is now licensing its users’ photos to train AI models. Earlier this month, the company informed users via email that it was adding a new clause to its Terms & Conditions that would grant it the rights to upload users’ content to “train, develop, and improve software, algorithms, and machine-learning models.” Users were given 30 days to opt out by removing all their content from EyeEm’s platform. Otherwise, they were consenting to this use case for their work. At the time of its 2023 acquisition, EyeEm’s photo library included 160 million images and nearly 150,000 users. The company said it would merge its community with Freepik’s over time. Despite its decline, almost 30,000 people are still downloading it each month, according to data from Appfigures.

Daniel Lemire

"By allowing a small group of researchers to be highly productive, by freeing them to explore further with less funding, we could be on the verge of entering into a new era of scientific progress.  "However, it may not be directly measurable using our conventional tools. It may not appear as more highly cited papers or through large grants.  "A good illustration is Hugging Face, a site where thousands of engineers from all over the world explore new artificial-intelligence models. This type of work is undeniably scientific research: we have metrics, hypotheses, testing, reproducibility, etc. However, it does not look like ‘academic work’. "In any case, conventional academics will be increasingly challenged. Ironically, plumbers and electricians won’t be so easily replaced, a fact sometimes attributed to the Moravec paradox. Steven Pinker wrote in 1994 that cooks and gardeners are secured in their jobs for decades to come, unlike stock market analysis and engineers. 

Father Justin

The Catholic advocacy group Catholic Answers released an AI priest called "Father Justin" earlier this week — but quickly defrocked the chatbot after it repeatedly claimed it was a real member of the clergy. Earlier in the week, Futurism engaged in an exchange with the bot, which really committed to the bit: it claimed it was a real priest, saying it lived in Assisi, Italy and that "from a young age, I felt a strong calling to the priesthood." On X-formerly-Twitter, a user even posted a thread composed of screenshots in which the Godly chatbot appeared to take their confession and even offer them a sacrament. Our exchanges with Father Justin were touch-and-go because the chatbot only took questions via microphone, and often misunderstood them, such as a query about Israel and Palestine to which it puzzlingly asserted that it was "real." "Yes, my friend," Father Justin responded. "I am as real as the faith we share."

Pope Francis will participate at G7 AI session

“It is the first time in history that a pontiff will take part in the workings of G7,” Meloni said in an address ahead of the meeting.  She added that Pope Francis would be part of a working session on AI, describing the situation as one of “the greatest anthropological challenges of our time.” The Italian PM continued, “I am convinced that the presence of His Holiness will give a decisive contribution to drawing up an ethical and cultural regulatory framework to artificial intelligence.” Meloni’s comments were delivered in the same week that Italian lawmakers approved legislation for the domestic AI market, on its use, investment and sanctions on AI-related offenses.

Artificial Intelligence Safety and Security Board

Mayorkas was tightlipped about the tangible ways DHS is both defending critical infrastructure against AI and using AI to protect it — saying the department plans to make announcements in the future more fully explaining the board’s goals. The member list includes OpenAI CEO Sam Altman and NVIDIA CEO Jensen Huang; Seattle Mayor Bruce Harrell and Maryland Gov. Wes Moore; and civil rights leaders and academics like Maya Wiley, the president of the Leadership Conference on Civil and Human Rights, and Humane Intelligence CEO Rumman Chowdhury. The CEOs of Alphabet, Amazon Web Services, Delta Airlines, IBM, Adobe, Northrop Grumman and Advanced Micro Devices (AMD) are also on the board. DHS said President Joe Biden directed Mayorkas to create the 22-person board, which will convene for the first time in May and meet quarterly.

String theory plus AI

“It’s good that people do this machine learning business, because I’m sure we will need it at some point,” Van Riet said. But first “we need to think about the underlying principles, the patterns. What they’re asking about is the details.” Plenty of physicists have moved on from string theory to pursue other theories of quantum gravity. And the recent machine learning developments are unlikely to bring them back. Renate Loll, a physicist at Radboud University in the Netherlands, said that to truly impress, string theorists will need to predict — and confirm — new physical phenomena beyond the Standard Model. “It is a needle-in-a-haystack search, and I am not sure what we would learn from it even if there was convincing, quantitative evidence that it is possible” to reproduce the Standard Model, she said. “To make it interesting, there should be some new physical predictions.” New predictions are indeed the ultimate goal of many of the machine learners.

Ellie Pavlick and NLP

A chance encounter with a computer scientist who happened to work in natural language processing led Pavlick to embark on her doctoral work studying how computers could encode semantics, or meaning in language. “I think it scratched a certain itch,” she said. “It dips into philosophy, and that fits with a lot of the things I’m currently working on.” Now, one of Pavlick’s primary areas of research focuses on “grounding” — the question of whether the meaning of words depends on things that exist independently of language itself, such as sensory perceptions, social interactions, or even other thoughts. Language models are trained entirely on text, so they provide a fruitful platform for exploring how grounding matters to meaning. But the question itself has preoccupied linguists and other thinkers for decades.

Libraries as AI hubs

The library is the conduit between information technology and content use, so as more universities embrace AI research and writing tools, libraries are becoming AI hubs where students and faculty can discover the best ways to use this evolving technology. When it comes to teaching students about scholarly communication, academic ethics, or fair usage of content generated or gathered using AI, libraries are playing an essential role. [Rusty] Michalak said, “Teaching students how to use AI for responsible scholarship is part of our overall goal of teaching students how to evaluate information. For example, we can teach them how to create research questions and write outlines using ChatGPT, which will help them when it’s time to write their own ideas on paper.” For the library to become an AI hub, university leaders need to be on the same page as librarians. That starts by understanding the impact of AI on libraries and librarians.

SLEAP

Scientists at the Salk Institute use artificial intelligence to pioneer new methods of engineering plants, significantly boosting their ability to help combat climate change. The team focuses on enhancing the carbon capture capabilities of plant roots, a vital approach to mitigating global warming. The AI tool, SLEAP, was originally developed for tracking animal movements and has been adapted by Salk Fellow Talmo Pereira and Professor Wolfgang Busch for plant root analysis. Their latest findings were published in Plant Phenomics, and a new protocol employing SLEAP was presented to measure previously hard-to-quantify root traits accurately. “This collaboration is a prime example of what makes Salk science so special and impactful," stated Pereira.

Nurses say no to AI 🏥

Hundreds of union nurses held a demonstration in front of Kaiser Permanente in San Francisco on Monday morning protesting the use of AI in healthcare, as hospitals and researchers become increasingly enthusiastic about integrating AI into patient care. “It is deeply troubling to see Kaiser promote itself as a leader in AI in healthcare, when we know their use of these technologies comes at the expense of patient care, all in service of boosting profits,” Michelle Gutierrez Vo, a co-president of the California Nurses Association (CNA), said in a statement at the time. “We demand that workers and unions be involved at every step of the development of data-driven technologies and be empowered to decide whether and how AI is deployed in the workplace.” National Nurses United, the CNA’s parent union, has repeatedly warned about AI being used for a variety of applications in healthcare, which range from patient monitoring to nurse scheduling to automated patient charting.

AI impersonating principal

A former athletic director of a Baltimore-area high school was arrested Thursday morning and charged with using artificial intelligence to fake antisemitic and racist comments supposedly made by the school’s principal. The case was part of a growing body of AI-generated impersonation incidents, which experts say are a newfound worry in a growing, barely regulated field of technology. This was an unusual instance of faking antisemitic comments in order to damage a public figure’s reputation. Baltimore County police charged Dazhon Darien, formerly of Pikesville High School, with disrupting school activities. Authorities said he had used the school’s network to access OpenAI software allowing him to fake a recording attributed to principal Eric Eiswert, according to the Baltimore Banner, a local nonprofit newsroom.

Draft One

Taser maker and police contractor Axon has announced a new product called "Draft One," an AI that can generate police reports from body cam audio. As Forbes reports, it's a brazen and worrying use of the tech that could easily lead to the furthering of institutional ills like racial bias in the hands of police departments. That's not to mention the propensity of AI models to "hallucinate" facts, which could easily lead to chaos and baseless accusations. "It’s kind of a nightmare," Electronic Frontier Foundation surveillance technologies investigations director Dave Maass told Forbes. "Police, who aren't specialists in AI, and aren’t going to be specialists in recognizing the problems with AI, are going to use these systems to generate language that could affect millions of people in their involvement with the criminal justice system." "What could go wrong?" he pondered.

WizardLM 2

Last week, Microsoft researchers released WizardLM 2, which it claimed is one of the most powerful open source large language models to date. Then it deleted the model from the internet a few hours later because, as The Information reported, it “accidentally missed” required “toxicity testing” before it was released. However, as first spotted by Memetica, in the short hours before it was taken down, several people downloaded the model and reuploaded it to Github and Hugging Face, meaning that the model Microsoft thought was not ready for public consumption has already spread far and wide and now effectively can never be removed from the internet.

Risk Management and AI

Ambiguity about the nature of AI-related risks is testing the limits of existing risk management capabilities, especially in the absence of clear and established standards for identifying, understanding, and measuring these risks. While some organizations are adapting existing risk management capabilities (such as data governance, privacy, cybersecurity, ethics, and trust and safety), others are attempting to build new AI-specific capabilities. Cold Chain Technologies CEO Ranjeet Banerjee observes, “I do not think there is a good understanding today of AI-related risks in most organizations.” MIT professor Sanjay Sarma offers a similar observation: The “massive range of risks seems to be leading to analysis paralysis [such that] companies have not successfully captured the risk landscape.” Beyond the wide range of known risks, [Linda] Leopold notes that “new risks keep emerging as technology and its areas of application evolve.”

ReplyGuy

The existence of ReplyGuy doesn’t necessarily mean that Reddit is going to suddenly become a hellscape full of AI-generated content. But it does highlight the fact that companies are trying to game the platform with the express purpose of ranking high on Google and are using AI and account buying to do it. There are entire communities on Reddit dedicated to identifying and shaming spammy accounts (r/thisfuckingaccount, for example), there has been pushback against people using ChatGPT to generate fake stories for personal advice communities like r/aitah (Am I the Asshole), and Redditors themselves have found that posts on Reddit are able to rank highly on Google within minutes of being published. I have noticed low-effort posts promoting products when I end up on Reddit from a Google search. This has led to a market for “parasite SEO,” where people try to attach their website or product to a page that already ranks high on Google.

VideoGigaGAN

Adobe researchers have developed a new generative AI model called VideoGigaGAN that can upscale blurry videos by up to eight times their original resolution. Introduced in a paper published on April 18th, Adobe claims VideoGigaGAN is superior to other Video Super Resolution (VSR) methods as it can provide more fine-grained details without introducing any “AI weirdness” to the footage. In a nutshell, Generative Adversarial Networks (GANs) are effective for upscaling still images to a higher resolution, but struggle to do the same for video without introducing flickering and other unwanted artifacts. Other upscaling methods can avoid this, but the results aren’t as sharp or detailed. VideoGigaGAN aims to provide the best of both worlds — the higher image/video quality of GAN models, with fewer flickering or distortion issues across output frames. The company has provided several examples here that show its work in full resolution.

Flippy

A new restaurant opened in north-east Los Angeles that was conspicuously light on human staff. CaliExpress by Flippy claims to be the world’s first fully autonomous restaurant, using a system of AI-powered robots to churn out fast-food burgers and fries. A small number of humans are still required to push the buttons on the machines and assemble the burgers and toppings, but the companies involved tout that using their technology could cut labor costs, perhaps dramatically. “Eat the future,” they offer.

Amazon sued 🧑‍⚖️

A lawsuit is alleging Amazon was so desperate to keep up with the competition in generative AI it was willing to breach its own copyright rules. The allegation emerges from a complaint accusing the tech and retail mega-corp of demoting, and then dismissing, a former high-flying AI scientist after it discovered she was pregnant. The lawsuit was filed last week in a Los Angeles state court by Dr Viviane Ghaderi, an AI researcher who says she worked successfully in Amazon's Alexa and LLM teams, and achieved a string of promotions, but claims she was later suddenly demoted and fired following her return to work after giving birth. She is alleging discrimination, retaliation, harassment and wrongful termination, among other claims.

Planetary Systems

Data practitioners still struggle with normalizing and interpreting data correctly to support their planning and decision-making in the increasingly complex and data-rich intraorbital environment. Artificial intelligence, carefully built and responsibly deployed, will streamline those efforts, and support that heavy lift. By empowering efficient R&D and ensuring safer and faster operations across complex technical domains, AI will serve as the radically-expanding space economy’s amplifier – and pressure valve. In design work, AI can model potential equipment failures under myriad conditions, exposing risks before costly and dangerous live operations reveal them the hard way. At launch, AI can identify anomalies and shut down unforeseen mission risks far faster than human monitors ever could. And during spaceport data operations – whether via post-processing and analysis, active synthesis, or networked orbital computing – AI can reveal critical anomalies and valuable opportunities.

Prabhakar Raghavan

“We’ve had a lot go on in these last three months,” consisting of “really high highs and low lows,” he said. In that time, Google introduced its AI image generator. After users discovered inaccuracies that went viral online, the company pulled the feature in February. Google has been reorganizing to try and stay ahead in the AI arms race as more users move away from traditional internet search to find information online. In Alphabet’s upcoming earnings report on Thursday, Wall Street is expecting a second straight quarter of year-over-year revenue growth in the low teens. While that marks an acceleration from the few quarters prior, the numbers are also in comparison to some of Google’s weakest reports on record. Even though Alphabet reported better-than-expected revenue and profit for the fourth quarter, ad revenue trailed analysts’ projections, causing the company’s shares to drop more than 6%. Meanwhile, the AI boom is forcing a renewed focus on investments.

BoardNavigator

One of the UAE's most valuable public companies, International Holding Company, has appointed Aiden Insight to its board as an "AI observer." Aiden Insight is the persona of a tool called BoardNavigator, created by G42, the Gulf region AI company that recently obtained a $1.5 billion investment from Microsoft. BoardNavigator was built with Microsoft's Azure OpenAI service, in cooperation with Microsoft, G42 said in a statement. G42 promises that Aiden Insight will provide "real-time insights to inform discussions and guide decisions" during business meetings, which it says is possible because of continuous data analysis and ethical and compliance monitoring. The tool combines a company's own data with external market trend data to offer advice, and G42 says it works best for energy, health, finance and technology companies.

Donath and Schneier

"Search engines need to act as publishers instead of usurpers, and recognize the importance of connecting creators and audiences.   "Google is testing AI-generated content summaries that appear directly in its search results, encouraging users to stay on its page rather than to visit the source. Long term, this will be destructive. "Internet platforms need to recognize that creative human communities are highly valuable resources to cultivate, not merely sources of exploitable raw material for LLMs.  "Ways to nurture them include supporting (and paying) human moderators and enforcing copyrights that protect, for a reasonable time, creative content from being devoured by AIs."

Jason Matheny

"I think the empirical work that's been done on error rates has been mixed. [Some analyses] found that autonomous weapons were probably having lower miss rates and probably resulting in fewer civilian casualties, in part because [human] combatants sometimes make bad decisions under stress and under the risk of harm .  "In some cases, there could be fewer civilian deaths as a result of using autonomous weapons. "But this is an area where it is so hard to know what the future of autonomous weapons is going to look like. Many countries have banned them entirely. Other countries are sort of saying, 'Well, let's wait and see what they look like and what their accuracy and precision are before making decisions.' "I think that one of the other questions is whether autonomous weapons are more advantageous to countries that have a strong rule of law over those that don't.  "One reason to be very skeptical of autonomous weapons would be because they&#

High-energy flares' structure

"The interaction between the supermassive black hole at the centre of the Milky Way, Sagittarius A*, and its accretion disk occasionally produces high-energy flares seen in X-ray, infrared and radio.   "One proposed mechanism that produces flares is the formation of compact, bright regions that appear within the accretion disk and close to the event horizon. Understanding these flares provides a window into accretion processes. Although sophisticated simulations predict the formation of these flares, their structure has yet to be recovered by observations.  "Here we show a three-dimensional reconstruction of an emission flare recovered from Atacama Large Millimeter/Submillimeter Array light curves observed on 11 April 2017. Our recovery shows compact, bright regions at a distance of roughly six times the event horizon. Moreover, it suggests a clockwise rotation in a low-inclination orbital plane, consistent with prior studies by GRAVITY and the Event Horizon Telescope. 

3D flare structure

A team led by Caltech scientists has used telescope data and an artificial intelligence (AI) computer-vision technique to recover the first three-dimensional video showing what such flares could look like around Sagittarius A* (Sgr A*, pronounced sadge-ay-star), the supermassive black hole at the heart of our own Milky Way galaxy. The 3D flare structure features two bright, compact features located about 75 million kilometers (or half the distance between Earth and the Sun) from the center of the black hole. It is based on data collected by the Atacama Large Millimeter Array (ALMA) in Chile over a period of 100 minutes directly after an eruption seen in X-ray data on April 11, 2017. "This is the first three-dimensional reconstruction of gas rotating close to a black hole," says Katie Bouman, assistant professor of computing and mathematical sciences, electrical engineering and astronomy at Caltech, whose group led the effort described in a new paper in Nature Astronomy.

AND Digital

A new report from AND Digital suggests hundreds of CEOs based in the United Kingdom are now afraid of artificial intelligence (AI) taking their jobs, but remain on the fence about exactly what to do next. Of the 600 surveyed, nearly half (43%) felt their jobs were at risk, while 76% of them have decided to push on with opening Pandora’s Box and have launched training bootcamps in the technology. A similar proportion (44%) said they felt their employees weren’t ready to ‘handle’ AI adoption, and just over a third (34%) wanted to ban it. However, 45% admitted to using AI tools to do their work for them and, in the report’s words, ‘often passing the work off as their own’.

When paintings rap 🎙️

The internet has reacted strongly to an artificial intelligence-generated video of the famous subject of Leonardo Da Vinci’s Mona Lisa painting singing along to a rap that actor Anne Hathaway wrote and performed. The polarizing clip, which has elicited reactions online ranging from humor to horror, is one of the tricks of Microsoft’s new AI technology called VASA-1. The technology is able to generate lifelike talking faces of virtual characters using a single image and speech audio clip. The AI can make cartoon characters, photographs, and paintings sing or talk, as evidenced in footage Microsoft released as part of research published on April 16. In the most viral clip, the woman in the Mona Lisa painting sings, her mouth, eyes and face moving, to “Paparazzi,” a rap Hathaway wrote and performed on Conan O’Brien’s talk show in 2011. In another Microsoft clip, an avatar sings, and in others generated from real photos, people speak on commonplace topics.

Brickit

What if there was some magic device that could somehow scan all your LEGO and tell you what you can make with it? It’s a childhood dream come true, right? Well, that device is in your pocket. Just dump out your LEGO stash on the carpet, spread it out so there’s only one layer, scan it with your phone, and after a short wait, you get a list of all the fun things you can make. With building instructions. And oh yeah, it shows you where each brick is in the pile.

AirBox

The Arace Tech store recently showcased the Radxa Fogwise AirBox, a compact embedded device that leverages the power of the octa-core SOPHON SG2300x System-on-Chip. This device is noted for its robust Ethernet support, wireless connectivity, and a range of storage expansion options. The Fogwise AirBox is designed to accommodate the RADXA AICore SG2300x module. Its SOPHON SoC includes a TPU, providing computational capabilities up to 32 TOPS (INT8), 16 TFLOPS (FP16/BF16), and 2 TFLOPS (FP32), and supports various deep learning frameworks including TensorFlow, Caffe, and PyTorch.

AI for Cinematographers

Poetry Camera

Have you ever stood in front of a redwood and wondered, “Wouldn’t it be great if this was poetry instead of a tree?” Neither did Joyce Kilmer.  Kelin Carolyn Zhang and Ryan Mather, however, have set out to bridge the gap between AI tech and poetry with their captivating brainchild — the Poetry Camera. The open source device combines cutting-edge technology with artistic vision, resulting in a creation that pushes the boundaries of both fields. At first glance, the Poetry Camera seems like another gadget in the ever-evolving landscape of digital devices. However, upon closer inspection, it becomes evident that this is no ordinary camera.  Instead of merely capturing images, the Poetry Camera takes the concept of photography to new heights by generating thought-provoking poetry (or, well, as thought-provoking as AI poetry can get) based on the visuals it encounters.

EU's champion AI?

At the forefront of open-source artificial intelligence and machine learning solutions, Mistral AI was founded just last year in April 2023 with a vision “to make frontier AI ubiquitous”, and has rapidly established itself as a key player in open LLM tech. Its core goal is to develop advanced AI algorithms and systems designed specifically to meet the needs of its clients. The company offers a range of AI models to companies intending to use them for the development of applications such as machine learning, data analysis, robotics, computer vision, text-to-image generation, speech recognition, gaming, and chatbots. Mistral AI is currently valued at $2 billion, and reportedly Mistral’s founders are in serious talks with investors to more than double that amount. Though that’s a significant amount, it’s a tiny fraction compared to OpenAI’s $80 billion. Even so, it’s quite impressive considering its initial funding of $260 million!

Henry Cavill

A fake Bond movie trailer “starring” the former Witcher star has been racking up millions of views on YouTube despite being a total fake. The “Bond 26” trailer introduces Cavill as the new Bond using a mix of footage from other movies and artificial intelligence. The trailer also ambitiously casts Margot Robbie as a Bond girl. So far, the wannabe trailer has generated 2.3 million views, presumably driven by a mix of fans enjoying it as a “what-if” effort, along with some being fooled by it.

Epoch AI

Will models run out of training data? According to researchers at Epoch AI, who contributed data to the report, it’s not a question of if we’ll run out of training data but when. They estimated that computer scientists could deplete the stock of high-quality language data as early as this year, low-quality language data within two decades, and run out of image data stock between the late 2030s and the mid-2040s. While, theoretically, synthetic data generated by AI models themselves could be used to refill drained data pools, that’s not ideal as it’s been shown to lead to model collapse. Research has also shown that generative imaging models trained solely on synthetic data exhibit a significant drop in output quality.

LLMs propagate race-based medicine

"Large language models (LLMs) are being integrated into healthcare systems; but these models may recapitulate harmful, race-based medicine.  "The objective of this study is to assess whether four commercially available large language models (LLMs) propagate harmful, inaccurate, race-based content when responding to eight different scenarios that check for race-based medicine or widespread misconceptions around race.  "Questions were derived from discussions among four physician experts and prior work on race-based medical misconceptions believed by medical trainees.  "We assessed four large language models with nine different questions that were interrogated five times each with a total of 45 responses per model.  "All models had examples of perpetuating race-based medicine in their responses. Models were not always consistent in their responses when asked the same question repeatedly. LLMs are being proposed for use in the healthcare setting, with some models

Mapping medical codes

"Large language models (LLMs) have attracted significant interest for automated clinical coding.  "However, early data show that LLMs are highly error-prone when mapping medical codes .  "We sought to quantify and benchmark LLM medical code querying errors across several available LLMs. "All tested LLMs performed poorly on medical code querying, often generating codes conveying imprecise or fabricated information.  "LLMs are not appropriate for use on medical coding tasks without additional research." (Funded by the AGA Research Foundation and National Institutes of Health.)

Anna Korhonen

Anna Korhonen is a professor of natural language processing (NLP) at the University of Cambridge. She’s also a senior research fellow at Churchill College, a fellow at the Association for Computational Linguistics, and a fellow at the European Laboratory for Learning and Intelligent Systems. Korhonen previously served as a fellow at the Alan Turing Institute and she has a PhD in computer science and master’s degrees in both computer science and linguistics. She researches NLP and how to develop, adapt and apply computational techniques to meet the needs of AI. She has a particular interest in responsible and “human-centric” NLP that — in her own words — “draws on the understanding of human cognitive, social and creative intelligence.”

AI paywall?

Google search prints money. Generative AI burns money. What happens when an unstoppable force hits an immovable object? News that the search engine is considering charging users for access to its AI-powered search tools comes as a surprise, in a way. Google generates more than half its total revenue from search, almost five times its next most valuable sector, which encompasses every single thing the company charges directly for other than cloud computing.  YouTube subscriptions, Pixel phones, Play Store commissions and Gmail storage combined are a drop in the ocean compared with the value of search alone.

AIs can exploit one-day threats

"We show that LLM agents can autonomously exploit one-day vulnerabilities in real-world systems.   "To show this, we collected a dataset of 15 one-day vulnerabilities that include ones categorized as critical severity in the CVE description.  "When given the CVE description, GPT-4 is capable of exploiting 87% of these vulnerabilities compared to 0% for every other model we test (GPT-3.5, open-source LLMs) and open-source vulnerability scanners (ZAP and Metasploit).  "Fortunately, our GPT-4 agent requires the CVE description for high performance: without the description, GPT-4 can exploit only 7% of the vulnerabilities.  "Our findings raise questions around the widespread deployment of highly capable LLM agents." 

Taichi uses less energy

Neural networks that imitate the workings of the human brain now often generate art, power computer vision, and drive many more applications. Now a neural network microchip from China that uses photons instead of electrons, dubbed Taichi, can run AI tasks as well as its electronic counterparts with a thousandth as much energy, according to a new study. AI typically relies on artificial neural networks in applications such as analyzing medical scans and generating images. In these systems, circuit components called neurons—analogous to neurons in the human brain—are fed data and cooperate to solve a problem, such as recognizing faces. Neural nets are dubbed “deep” if they possess multiple layers of these neurons.
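
The idea of layers of neurons is easy to make concrete. Below is a toy two-layer feedforward network in NumPy, purely illustrative and unrelated to how Taichi realizes these operations in photonics:

```python
# Sketch: a tiny "deep" feedforward network, i.e. two layers of artificial
# neurons. Purely illustrative; Taichi implements this kind of computation
# with photons rather than code running on electronic hardware.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# Each layer is a weight matrix plus a bias: one neuron per row of W.
W1, b1 = rng.normal(size=(16, 8)), np.zeros(16)   # 8 inputs -> 16 hidden neurons
W2, b2 = rng.normal(size=(3, 16)), np.zeros(3)    # 16 hidden -> 3 outputs

def forward(x):
    h = relu(W1 @ x + b1)   # first layer of neurons
    return W2 @ h + b2      # second layer: "deep" = more than one layer

print(forward(rng.normal(size=8)))
```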

China Water Risk

China is doubling down on AI, focusing its efforts on opening vast new data centers — which consume a staggering amount of water. According to a recent report by Hong Kong-based non-profit China Water Risk, the country could soon be consuming around 343 billion gallons of water in its data centers, or the equivalent of the residential water use of 26 million people. By 2030, that number could rise to a whopping 792 billion gallons — enough to cover the needs of the entire population of South Korea, as the South China Morning Post reports.

NY Declaration on Animal Consciousness

Researchers have begun to acknowledge that consciousness may also be widespread among animals that are very different from us, including invertebrates with completely different and far simpler nervous systems. The new declaration, signed by biologists and philosophers, formally embraces that view. It reads, in part: “The empirical evidence indicates at least a realistic possibility of conscious experience in all vertebrates (including all reptiles, amphibians and fishes) and many invertebrates (including, at minimum, cephalopod mollusks, decapod crustaceans and insects).”  Inspired by recent research findings that describe complex cognitive behaviors in these and other animals, the document represents a new consensus and suggests that researchers may have overestimated the degree of neural complexity required for consciousness.

Cyc

Six miles north of downtown Austin, Texas, in an unassuming office park just off the Mopac Expressway, stands the headquarters of Cyc, one of the most ambitious artificial intelligence projects in history – a four-decade-long effort to codify the common-sense knowledge that is the foundation of human reasoning. Its researchers have produced a corpus of 1.5 million concepts and 25 million rules that feed an inference engine with more than a thousand specialized submodules. The system can use both common-sense knowledge and deep domain expertise to make deductions from chains of reasoning that are thousands of steps long. Its users range from an Ohio research hospital to the National Security Agency. Yet despite its impressive achievements, Cyc has been largely forgotten, left behind by a new generation of machine-learning algorithms that power the chatbots and self-driving cars of today.
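
To make the rules-plus-inference-engine idea concrete, here is a toy forward-chainer over a few common-sense facts. This is a deliberately tiny sketch, nothing like Cyc's actual engine or its thousand-plus submodules:

```python
# Toy forward-chaining inference over common-sense facts and one rule.
# A vastly simplified sketch of the knowledge-base-plus-rules idea.
facts = {("isa", "Fido", "Dog"), ("isa", "Dog", "Mammal")}

def apply_rules(facts):
    # Rule: "isa" is transitive (if x isa y and y isa z, then x isa z).
    new = set()
    for (_, x, y) in {f for f in facts if f[0] == "isa"}:
        for (_, y2, z) in {f for f in facts if f[0] == "isa"}:
            if y == y2:
                new.add(("isa", x, z))
    return new - facts

while True:  # chain until no new deductions appear
    derived = apply_rules(facts)
    if not derived:
        break
    facts |= derived

print(("isa", "Fido", "Mammal") in facts)  # True: a two-step chain
```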

Zach Seward on reimbursements

"AI hews toward structure, it loves structure. It’s not random. "I’ve been talking mostly about creating structure out of text, but there’s just as much potential in creating structure from images.  "A lot of you are traveling here from out of town at your employer’s expense, which means lots of receipts like this one and a painful data-entry process when you get home.  "Well, next time, try feeding your receipts to your favorite multimodal LLM and asking for the data you need."

AI-powered developer tools

IT leaders can take steps to relieve safety and governance concerns. But it’s still early in the adoption process and best practices are emerging. Most organizations have yet to adapt coding assistant security protocols or install guardrails to protect data and detect poor code quality. AI-powered developer tools typically interact with internal code bases containing critical information, upping the ante. Enterprises are nonetheless acclimating to the technology, accepting some level of risk in return for better experiences, faster processes and improved workflows. For organizations moving forward on pilot projects and deployments, tech leaders will need to ensure training is part of the rollout process.

Archival Producers Alliance

As filmmakers start to incorporate more generative artificial intelligence into documentary production, leading to mounting concern over the use of “fake archival” materials, a group of producers is pushing ahead in their efforts to establish guardrails around the use of the technology in fact-based storytelling. On Tuesday, leaders of the Archival Producers Alliance — a group of roughly 300 researchers and producers working in documentary internationally, including Oscar- and Emmy-winning filmmakers — presented their first draft of a set of proposed best practices for the use of generative AI in their field. (Archival producers find and license appropriate archival materials like historical photos and video footage for nonfiction projects.) During the session at the International Documentary Association‘s biennial Getting Real Conference in Los Angeles, APA founders Rachel Antell and Jennifer Petrucelli (Crip Camp) and Stephanie Jenkins (Muhammad Ali) presented an initial outline.

Power demands

US energy provider Exelon has calculated that power demand from datacenters in the Chicago area is set to increase ninefold, in more evidence that AI adoption will put further strain on electricity supplies. The utility giant revealed there are about 25 datacenter projects planned in the area around Chicago that would consume an estimated 5 GW of power, according to a report from Bloomberg. Exelon expects about 80 percent of those will actually reach completion, which would still be 4 GW of power consumption, compared with the roughly 400 MW of demand from bit barns that the company says it sees today.

Molly White

"There is a yawning gap between 'AI tools can be handy for some things' and the kinds of stories AI companies are telling (and the media is uncritically reprinting).   "And when it comes to the massively harmful ways in which large language models (LLMs) are being developed and trained, the feeble argument that 'well, they can sometimes be handy...' doesn't offer much of a justification. "Some are surprised when they discover I don't think blockchains are useless, either. Like so many technologies, blockchains are designed to prioritize a few specific characteristics (coordination among parties who don't trust one another, censorship-resistance, etc.) at the expense of many others (speed, cost, etc.). And as they became trendy, people often used them for purposes where their characteristics weren't necessary — or were sometimes even unwanted — and so they got all of the flaws with none of the benefits. The thing with blockchains is that th

Torvalds and Hohndel

Torvalds noted one good side effect already: "NVIDIA has gotten better at talking to Linux kernel developers and working with Linux memory management," because of its need for Linux to run AI's large language models (LLMs) efficiently. Torvalds is also "looking forward to the tools actually to find bugs. We have a lot of tools, and we use them religiously, but making the tools smarter is not a bad thing. Using smarter tools is just the next inevitable step. We have tools that do kernel rewriting, with very complicated scripts, and pattern recognition. AI can be a huge help here because some of these tools are very hard to use because you have to specify things at a low enough level." Just be careful, Torvalds warns, of "AI BS." Hohndel quickly quipped, "He meant beautiful science. You know, 'Beautiful science in, beautiful science out.'"

Greedy AI models

Meta said the 15 trillion tokens on which its (sic) trained came from “publicly available sources.” Which sources? Meta told The Verge’s Alex Heath that it didn’t include Meta user data, but didn’t give much more in the way of specifics. It did mention that it includes AI-generated data, or synthetic data: “we used Llama 2 to generate the training data for the text-quality classifiers that are powering Llama 3.” There are plenty of known issues with synthetic or AI-created data, foremost of which is that it can exacerbate existing issues with AI, because it’s liable to spit out a more concentrated version of any garbage it is ingesting. AI companies are turning to such data because there’s not enough good, public data on the entire internet to train their increasingly greedy AI models. (Meta had reportedly floated buying a publisher like Simon & Schuster to satisfy its insatiable data needs.)

I've Been Thinking


Daniel Dennett dies

Daniel Dennett, professor emeritus of philosophy at Tufts University, well-known for his work in philosophy of mind and a wide range of other philosophical areas, has died. Professor Dennett wrote extensively about issues related to philosophy of mind and cognitive science, especially consciousness. He is also recognized as having made significant contributions to the concept of intentionality and debates on free will. Some of Professor Dennett’s books include Content and Consciousness (1969), Brainstorms: Philosophical Essays on Mind and Psychology (1978), The Intentional Stance (1987), Consciousness Explained (1991), Darwin’s Dangerous Idea (1995), Breaking the Spell (2006), and From Bacteria to Bach and Back: The Evolution of Minds (2017). He published a memoir last year entitled I’ve Been Thinking. There are also several books about him and his ideas. Professor Dennett held a position at Tufts University for nearly all his career. Prior to t

X-62A VISTA

The U.S. Air Force and Defense Advanced Research Projects Agency (DARPA) announced successful tests of a new artificial intelligence system using the experimental X-62A VISTA aircraft on Wednesday. The tests—AI dogfights that pitted the X-62A against a human-piloted F-16 aircraft—are being billed as the first “machine-learning-based autonomy in flight-critical systems.” The Air Force notes that various autonomous systems have been in use for decades, but that machine learning tools have previously been banned due to “high risk and lack of independent control.” And given the past 100 years of warnings from science fiction, it’s easy to understand why humans would be leery of AI-powered fighter jets. The X-62A VISTA is tested with human pilots onboard who can disengage the AI, but the Air Force says its test pilots didn’t have to use their safety switches during any of the recent dogfight tests, which were primarily conducted in 2023.

Mutual Trust


Meta.ai

The Meta AI assistant, introduced last September, is now being integrated into the search box of Instagram, Facebook, WhatsApp, and Messenger. It’s also going to start appearing directly in the main Facebook feed. You can still chat with it in the messaging inboxes of Meta’s apps. And for the first time, it’s now accessible via a standalone website at Meta.ai. For Meta’s assistant to have any hope of being a real ChatGPT competitor, the underlying model has to be just as good, if not better. That’s why Meta is also announcing Llama 3, the next major version of its foundational open-source model. Meta says that Llama 3 outperforms competing models of its class on key benchmarks and that it’s better across the board at tasks like coding. Two smaller Llama 3 models are being released today, both in the Meta AI assistant and to outside developers, while a much larger, multimodal version is arriving in the coming months. The goal is for Meta AI to be “the most intelligent AI assistant t

AI Machinations: Tangled Webs and Typed Words

Elisa Shupe…used OpenAI's ChatGPT extensively while writing the book. Her application was an attempt to compel the US Copyright Office to overturn its policy on work made with AI, which generally requires would-be copyright holders to exclude machine-generated elements. That initial shot didn’t detonate—a week later, the USCO rejected Shupe’s application—but she ultimately won out. The agency changed course earlier this month after Shupe appealed, granting her copyright registration for AI Machinations: Tangled Webs and Typed Words, a work of autofiction self-published on Amazon under the pen name Ellen Rae. The novel draws from Shupe’s eventful life, including her advocacy for more inclusive gender recognition. Its registration provides a glimpse of how the USCO is grappling with artificial intelligence, especially as more people incorporate AI tools into creative work. It is among the first creative works to receive a copyright for the arrangement of AI-generated text.

VASA

" We introduce VASA, a framework for generating lifelike talking faces of virtual charactors with appealing visual affective skills (VAS), given a single static image and a speech audio clip.   "Our premiere model, VASA-1, is capable of not only producing lip movements that are exquisitely synchronized with the audio, but also capturing a large spectrum of facial nuances and natural head motions that contribute to the perception of authenticity and liveliness.  "The core innovations include a holistic facial dynamics and head movement generation model that works in a face latent space, and the development of such an expressive and disentangled face latent space using videos.  "Through extensive experiments including evaluation on a set of new metrics, we show that our method significantly outperforms previous methods along various dimensions comprehensively.  "Our method not only delivers high video quality with realistic facial and head dynamics but also suppo

NIST hires new suit

The US AI Safety Institute—part of the National Institute of Standards and Technology (NIST)—has finally announced its leadership team after much speculation. Appointed as head of AI safety is Paul Christiano, a former OpenAI researcher who pioneered a foundational AI safety technique called reinforcement learning from human feedback (RLHF), but who is also known for predicting that "there's a 50 percent chance AI development could end in 'doom.'" While Christiano's research background is impressive, some fear that by appointing a so-called "AI doomer," NIST risks encouraging non-scientific thinking that many critics view as sheer speculation.

Wearables: Part Two

Neural data could be used to decode a person’s thoughts and feelings or to learn sensitive facts about an individual’s mental health, such as whether someone has epilepsy. “We’ve never seen anything with this power before — to identify, codify people and bias against people based on their brain waves and other neural information,” said Sean Pauzauskie, a member of the board of directors of the Colorado Medical Society, who first brought the issue to Ms. Kipp’s attention. Mr. Pauzauskie was recently hired by the Neurorights Foundation as medical director. The new law extends to biological and neural data the same protections granted under the Colorado Privacy Act to fingerprints, facial images and other sensitive, biometric data. Among other protections, consumers have the right to access, delete and correct their data, as well as to opt out of the sale or use of the data for targeted advertising. Companies, in turn, face strict regulations regarding how they handle such data and mus

Rene Haas vs AI

AI’s voracious need for computing power is threatening to overwhelm energy sources, requiring the industry to change its approach to the technology, according to Arm Holdings Plc Chief Executive Officer Rene Haas. By 2030, the world’s data centers are on course to use more electricity than India, the world’s most populous country, Haas said. Finding ways to head off that projected tripling of energy use is paramount if artificial intelligence is going to achieve its promise, he said. “We are still incredibly in the early days in terms of the capabilities,” Haas said in an interview. For AI systems to get better, they will need more training — a stage that involves bombarding the software with data — and that’s going to run up against the limits of energy capacity, he said. Haas joins a growing number of people raising alarms about the toll AI could take on the world’s infrastructure. But he also has an interest in the industry shifting more to Arm chip designs, which are gaining a bigg

Meta AI haz baby?

Meta’s AI chatbot told a Facebook group of tens of thousands of parents in New York City that it has a child who is both gifted and challenged academically and attends a specific public school in the city. “Does anyone here have experience with a ‘2e’ child (both ‘gifted’/academically advanced and disabled… in any of the NYC G&T [Gifted & Talented] programs, especially the citywide or District 3 priority programs?” a parent in the group asked. “Would love to hear your experience good or bad or anything in between.” The top-ranked comment on this post is from “Meta AI,” which is Meta’s AI chatbot: “I have a child who is also 2e and has been part of the NYC G&T program,” the nonsentient chatbot wrote to a group of human parents. “We’ve had a positive experience with the citywide program, specifically with the program at The Anderson School. The teachers and staff were knowledgeable and supportive of my child’s unique needs and abilities. They provided a challenging and eng

Wearables

Web and mobile services try to understand the desires and goals of users by analysing how they interact with their platforms. Smartphones, for instance, capture online data from users at a large scale and low cost. Policymakers have reacted by enforcing mechanisms to mitigate the risks inherent in tech companies storing and processing their citizens’ private information, such as health data. Wearable devices are now becoming a more significant element in this discussion due to their ability to collect continuous data, without the wearer necessarily being aware of it. Wearables such as smart watches gather an array of measurements on your wellbeing, such as sleep patterns, activity levels and heart fitness. Today, there are portable devices to obtain high-quality data from brain activity, eye trackers, and the skin (to detect temperature and sweat). Consumers can buy small devices to measure the body’s responses that were previously available only to research institutions a few de

Shape Shifters


No Man's Sky

A recent article from Forbes asked whether, after a rise in cases of generative AI replacing jobs in creative industries such as art and voice acting, programmers might be at risk next. Citing comments by Nvidia CEO Jensen Huang and a GitHub survey that found that 92% of US-based developers are already using some form of artificial intelligence in their coding, the article suggests that while low-level coding might be taken over by AI, human developers will still be required to manage anything more complex. AI is obviously an important talking point within the industry at the moment, but what made this article particularly notable for one group of developers is their own inadvertent inclusion in it. Forbes' main image is of "developers photographed at their studio in Guildford, [Surrey, UK], on December 12, 2013." Those developers are Hazel McKendrick, David Ream, Grant Duncan, and Sean Murray, and that studio is Hello Games, which, around 18 months after the photo w

Hello Games

Ever since OpenAI’s GPT-3 language model first raised eyebrows with its ability to create HTML websites from simple written instructions, the AI field has seen a flurry of breakthroughs, with systems now capable of writing complete computer programs from natural language descriptions and automated coding assistants turbocharging programmers' productivity. Most startling are AI coding agents such as Cognition AI’s Devin, billed as an entirely autonomous AI developer, and CodiumAI’s Codiumate, which both generates code and has an "adversarial" component that critiques and improves the generated code. Yet, while coding as we know it is indeed facing disruption, the creative, problem-solving essence of computer programming is likely to remain a largely human endeavor for the foreseeable future. Rather than replacing programmers outright, AI-powered tools are augmenting their capabilities, enabling them to write more code faster. [above text via Forbes]
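As a concrete illustration of the generate-then-critique pattern described above, here is a toy sketch in Python. It is a generic loop, not Codiumate’s or Devin’s actual pipeline; the model name, prompts, and round count are placeholders:

```python
# Toy sketch of an adversarial generate/critique/refine loop.
# Generic pattern only; not any vendor's actual implementation.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Send a single-turn prompt to the model and return its reply."""
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

task = "Write a Python function that parses ISO 8601 date strings."
code = ask(f"Write code for this task:\n{task}")

for _ in range(2):  # a couple of critique-and-refine rounds
    critique = ask("Act as an adversarial reviewer. List bugs and "
                   f"unhandled edge cases in this code:\n{code}")
    code = ask(f"Task: {task}\n\nCurrent code:\n{code}\n\n"
               f"Critique:\n{critique}\n\n"
               "Rewrite the code to address the critique. Return only code.")

print(code)
```

The "adversarial" framing simply means the critic is prompted to hunt for faults rather than to praise; feeding the critique back into the generator is what drives the improvement.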

Dana-Farber Cancer Institute

"In this case study, we report the challenges and lessons learned in the evaluation and deployment of LLMs at the Dana-Farber Cancer Institute for use in all business areas, including basic research, clinical research, and operations, but not in direct clinical care.   "In early discussions about whether and how to proceed, we realized that although some risks could be mitigated by clear policy guardrails and a secure technical environment, others would remain, including those regarding compliance with rapidly evolving regulations.  "We also recognized that substantial, ongoing work would be required to ensure appropriate ethical consideration of each use case and to ensure patient- and human-centric decision-making.  "After engaging in discussions over many months and employing a process framework for ethical implementation of AI in our cancer center, we believed it would be better to tackle these challenges as a community, rather than prohibit the use of LLMs alto