Posts

Hi, spy, you get AI 🫥

Israel’s military surveillance agency has used a vast collection of intercepted Palestinian communications to build a powerful artificial intelligence tool similar to ChatGPT that it hopes will transform its spying capabilities, an investigation by the Guardian can reveal. The joint investigation with Israeli-Palestinian publication +972 Magazine and Hebrew-language outlet Local Call has found Unit 8200 trained the AI model to understand spoken Arabic using large volumes of telephone conversations and text messages, obtained through its extensive surveillance of the occupied territories. According to sources familiar with the project, the unit began building the model to create a sophisticated chatbot-like tool capable of answering questions about people it is monitoring and providing insights into the massive volumes of surveillance data it collects.

Human expertise in key fields endangered by AI 🦹‍♂️

Anthropic has submitted a detailed set of AI policy recommendations to the White House’s Office of Science and Technology Policy (OSTP), calling for enhanced government oversight in artificial intelligence development. The company warns that advanced AI systems could surpass human expertise in key fields as early as 2026, and argues that without immediate intervention, the U.S. may not be prepared for the economic and security challenges such technology brings. However, as it urges federal regulators to take a more active role in AI governance, Anthropic has quietly removed multiple AI policy commitments it made under the Biden administration, raising questions about its evolving stance on self-regulation.

CamoGPT

The United States Army is employing a prototype generative artificial intelligence tool to identify references to diversity, equity, inclusion, and accessibility (DEIA) for removal from training materials in line with a recent executive order from President Donald Trump. Officials at the Army’s Training and Doctrine Command (TRADOC)—the major command responsible for training soldiers, developing leaders, and shaping the service’s guidelines, strategies, and concepts—are currently using the AI tool, dubbed CamoGPT, to “review policies, programs, publications, and initiatives for DEIA and report findings,” according to an internal memo reviewed by WIRED.

Vibe coding ✨

To many people, coding is about precision. It's about telling a computer what to do and having the computer perform those actions exactly, precisely, and repeatedly. With the rise of AI tools like ChatGPT, it's now possible for someone to describe a program in English and have the AI model translate it into working code without ever understanding how the code works. Former OpenAI researcher Andrej Karpathy recently gave this practice a name—vibe coding—and it's gaining traction in tech circles. The technique, enabled by large language models (LLMs) from companies like OpenAI and Anthropic, has attracted attention for potentially lowering the barrier to entry for software creation. But questions remain about whether the approach can reliably produce code suitable for real-world applications, even as tools like Cursor Composer, GitHub Copilot, and Replit Agent make the process increasingly accessible to non-programmers.
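To make the practice concrete, here is a minimal illustration of what a vibe-coding exchange might look like: the user writes only the plain-English prompt, and an LLM returns runnable code. The prompt and the function below are hypothetical, not taken from any real session or tool.

```python
# Hypothetical vibe-coding session. The user writes only the prompt;
# the function below is the kind of code an LLM might return.
#
# Prompt: "Write a function that takes a list of prices and a discount
# percentage, and returns the discounted prices rounded to 2 decimals."

def apply_discount(prices, discount_pct):
    """Return each price reduced by discount_pct percent, rounded to 2 dp."""
    factor = 1 - discount_pct / 100
    return [round(p * factor, 2) for p in prices]

print(apply_discount([19.99, 5.00, 100.00], 10))  # [17.99, 4.5, 90.0]
```

The user can confirm the output looks right without ever reading the list comprehension, which is precisely the point critics raise about the approach.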

AI will interfere

Canada's cyber intelligence agency is warning that countries including China, Russia and Iran will very likely lean on artificial intelligence to try to interfere in the upcoming federal election and mislead voters. In a report assessing threats to Canada's democratic process in the upcoming year, the Communications Security Establishment (CSE) said those known hostile actors are looking to use AI to fuel disinformation campaigns or launch hacking operations. While the 28-page document suggests the threats are real and evolving, CSE does stress that it believes it's very unlikely that AI-enabled activities will "fundamentally undermine the integrity of Canada's next general election."

Victor Cooper

"Gen-AI is primarily about achieving more with less. Our respective Gen-AI capabilities are designed to enhance productivity and significantly reduce the burden of manual labor. "We ensure that all Gen-AI results are clearly marked, allowing users to trace back to the original data that appeared as the indicator. "Additionally, there is always a human-in-the-loop (HITL) to review, accept, or disregard the AI-generated suggestions. Every report we produce is firmly grounded in robust evidence. "While we deliver efficiency, it remains the user's responsibility to thoroughly check any AI-generated suggestions or indicators to avoid hallucinations or incomplete results. "Terms like ‘partial’ or ‘incomplete’ could equally apply to scenarios where investigators, without the aid of AI, fail to examine all potential evidence. "Our Gen-AI tools are intended to automate certain manual tasks, but they do not absolve users of their responsibility to base con...

Assessing AI tools for libraries

"The popularization of artificial intelligence (AI) represents a significant business opportunity for private actors developing tools and services aimed at research and higher education. "Academic libraries are often at the receiving end of sales pitches for new tools and could benefit from guidance on how to assess them. "Libraries’ assessment of tools is a valuable service to library stakeholders, many of whom may not have sufficient time, the necessary competencies or the inclination to explore the landscape of innovations promising to support their information needs and research endeavours. "The main areas proposed for reflection concern tool purpose, design and technical aspects; information literacy, academic craftsmanship and integrity; and ethics and the political economy of AI. "This article offers concrete guidance concerning what to consider when assessing whether to adopt, endorse and/or invest in innovative information and research tools that mak...

Thunderforge

U.S. military commanders will use artificial intelligence tools to plan and help execute movements of ships, planes and other assets under a contract called “Thunderforge” led by start-up Scale AI, the company said Wednesday. The deal comes as the Defense Department and the U.S. tech industry are becoming more closely entwined. Scale will use AI tools from Microsoft and Google to help build Thunderforge, which is also being integrated into start-up weapons developer Anduril’s systems. The Thunderforge project aims to find ways to use AI to speed up military decision-making during peace and wartime. Commanders’ roles have become more challenging as military operations and equipment have become more complex and technology-centric, with missions involving drones and conventional forces spanning land, sea and air, as well as cyberattacks. Under the new contract, Scale will develop AI programs that commanders could ask for recommendations about how to most efficiently move...

AI weaponry

Firms like Google that are associated with developing these weapons might be too big to fail. As a consequence, even when there are clear instances of AI going wrong, they are unlikely to be held responsible. This lack of accountability creates a hazard, as it disincentivises learning and corrective actions. The “cosying up” of tech executives with US president Donald Trump only exacerbates the problem, as it further dilutes accountability. Society may be willing to accept mistakes, as with civilian casualties caused by drone strikes directed by humans. This tendency is something known as the banality of extremes: humans normalise even the more extreme instances of evil as a cognitive mechanism to cope. The alienness of AI reasoning may simply provide more cover for doing so. Rather than joining the race towards the development of AI weaponry, an alternative approach would be to work on a comprehensive ban on its development and use.

Terminator

Concerns about the potentially catastrophic introduction of artificial intelligence (AI) into nuclear weapons command, control and communication (NC3) systems have been raised by the former First Sea Lord and former Security Minister Lord West of Spithead. An AI expert told the Canary that the worst-case scenario for introducing AI into nuclear weapons command and control systems is a situation like the one which caused the apocalypse in the Terminator franchise. The Terminator films revolve around an event where the AI in control of the USA’s nuclear weapons system gains self-awareness, views its human controllers as a threat, and chooses to attempt to wipe out humanity.

AI + nuke + drone

The only way to maintain nuclear surety is direct, physical human control over nuclear weapons up until the point of a decision to carry out a nuclear strike. While the U.S. military would likely be extremely reluctant to place nuclear weapons onboard a drone aircraft or undersea vehicle, Russia is already developing such a system. The Poseidon, or Status-6, undersea autonomous uncrewed vehicle is reportedly intended as a second- or third-strike weapon to deliver a nuclear attack against the United States. How Russia intends to use the weapon is unclear, and could evolve over time, but an uncrewed platform like the Poseidon in principle could be sent on patrol, risking dangerous accidents. Other nuclear powers could see value in nuclear-armed drone aircraft or undersea vehicles as these technologies mature. Losing control of a nuclear-armed drone could cause nuclear weapons to fall into the wrong hands or, in the worst case, escalate a nuclear crisis.

Michael E. O’Hanlon

"Chinese President Xi Jinping and U.S. President Joe Biden agreed late in 2024 that artificial intelligence (AI) should never be empowered to decide to launch a nuclear war. "The groundwork for this excellent policy decision was laid over five years of discussions at the Track II U.S.-China Dialogue on Artificial Intelligence and National Security convened by the Brookings Institution and Tsinghua University’s Center for International Security and Strategy. "By examining several cases from the U.S.-Soviet rivalry during the Cold War, one can see what might have happened if AI had existed back in that period and been trusted with the job of deciding to launch nuclear weapons or to preempt an anticipated nuclear attack, and had been wrong in its decisionmaking. "Given the prevailing ideas, doctrines, and procedures of the day, an AI system trained on that information (perhaps through the use of many imaginary scenarios that reflected the current conventional wis...

Neutron Dance


Rather be dancin'

According to the Cox Report, as of 1999, the United States had never deployed a neutron weapon. The nature of this statement is not clear; it reads, "The stolen information also includes classified design information for an enhanced radiation weapon (commonly known as the "neutron bomb"), which neither the United States, nor any other nation, has ever deployed." However, the fact that neutron bombs had been produced by the US was well known at this time and part of the public record. Cohen suggests the report is playing with the definitions; while the US bombs were never deployed to Europe, they remained stockpiled in the US. In addition to the two superpowers, France and China are known to have tested neutron or enhanced radiation bombs. France conducted an early test of the technology in 1967 and tested an actual neutron bomb in 1980. China conducted a successful test of neutron bomb principles in 1984 and a successful test of a neutron bomb in 1988.

I'm not crazy! You're crazy!

Many members of these Green Light Teams believed the missions were near suicidal. One Green Light Team member, Louis Frank Napoli, said of the missions: "We were kamikaze pilots without the airplanes." Robert Deifel, another Green Light Team member, said of the missions: "There was no room for error... We had to be absolutely perfect." The risk was especially acute when it came to the mechanical timers on which these atomic devices could be set to detonate. The timers became less accurate, and thus riskier, the longer they were set for. Team members had been informed that the timers could go off up to eight minutes earlier than desired, or as much as thirteen minutes later than expected.

Just see whether it's true or not!

"It was only the test that was terrible. All the rest—Ah, yes, away with it!—even seemed ridiculous, if one looked at it that way: her traipsing off with Quantorzo like that, and my becoming worked up over the gross stupidity of those who believed me to be a usurer. "But how then? Had I been brought to this? To the point where I could no longer take anything seriously? "And the stab which I had experienced a short while back, which had led to that violent outburst? "That was all very well. But where was the stab? In me? If I touched myself, if I rubbed my hands together, if I said 'I'—but to whom was I saying it? For whose benefit? "I was alone. In all the world, I was alone. For myself, I was alone. And in the instantaneous shudder which now shot up to the roots of my hair, I knew eternity and all the frigidity of that infinite solitude. "To whom was I to say 'I'? Of what use to say 'I,' if one were to be at once caught up...

Techdirt

"There’s something important to understand about innovation. It doesn’t actually happen in a vacuum. "The reason Silicon Valley became Silicon Valley wasn’t because a bunch of genius inventors happened to like California weather. "It was because of a complex web of institutions that made innovation possible: courts that would enforce contracts (but not non-competes, allowing ideas to spread quickly and freely across industries), universities that shared research, a financial system that could fund new ideas, and laws that let people actually try those ideas out. "And surrounding it all: a fairly stable economy, stability in global markets, and (more recently) a strong belief in a global open internet."

AI malware

Van Andel’s digital unraveling began last February, when he downloaded free software from popular code-sharing site GitHub while trying out some new artificial intelligence technology on his home computer. The software helped create AI images from text prompts. It worked, but the AI assistant was actually malware that gave the hacker behind it access to his computer, and his entire digital life. The hacker gained access to 1Password, a password manager that Van Andel used to store passwords and other sensitive information, as well as “session cookies,” digital files stored on his computer that allowed him to access online resources including Disney’s Slack channel.

Differential privacy

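A minimal sketch of the core idea behind differential privacy, for readers unfamiliar with the term: a query over a dataset is perturbed with calibrated random noise so that any single individual's presence or absence has only a bounded effect on the output. The function names below are illustrative, assuming only the Python standard library; this is the textbook Laplace mechanism, not any specific product's implementation.

```python
import random

def laplace_noise(scale):
    # The difference of two iid exponentials with mean `scale`
    # follows a Laplace(0, scale) distribution.
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)

def private_count(records, predicate, epsilon):
    # Counting queries have sensitivity 1: adding or removing one record
    # changes the count by at most 1, so Laplace noise with scale 1/epsilon
    # yields epsilon-differential privacy for this query.
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1 / epsilon)

# Example: a noisy count of even numbers among 0..999 (true answer: 500).
print(private_count(range(1000), lambda x: x % 2 == 0, epsilon=0.5))
```

Smaller epsilon means stronger privacy but noisier answers; the released value here is typically within a few units of 500.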

Kean D. Birch: "Someone’s going to lose a lot of money."

"As the management guru Peter Drucker pointed out a long time ago, at a certain point in the innovation cycle, businesses stop investing and start reaping the returns on their investments. "Right now, though, Big Tech can see the endpoint coming to capture returns on their previous investments, but they can also see that while we wait for the next techno-economic paradigm, they can continue to reap further returns from the current dying paradigm by propping up generative AI as a way to also prop up their expensive asset base. "Unfortunately, the enormous sums invested in computing capacity for generative AI have to make an enormous return, or they’ll be wasted. "Jim Covello, Head of Global Equity Research at Goldman Sachs, argues that although AI looks like it’ll attract US $1 trillion in investment in the coming years, there really isn’t a US $1 trillion problem for it to solve. Someone’s going to lose a lot of money."

Anatta

According to Collins, the Suttas present the doctrine in three forms. First, they apply the "no-self, no-identity" investigation to all phenomena as well as any and all objects, yielding the idea that "all things are not-self" (sabbe dhamma anattā). Second, states Collins, the Suttas apply the doctrine to deny self of any person, treating conceit to be evident in any assertion of "this is mine, this I am, this is myself" (etam mamam eso 'ham asmi, eso me atta ti). Third, the Theravada texts apply the doctrine as a nominal reference, to identify examples of "self" and "not-self," respectively the Wrong view and the Right view; this third case of nominative usage is properly translated as "self" (as an identity) and is unrelated to "soul," states Collins. The first two usages incorporate the idea of soul.

Teletransporter

Parfit argues that any criteria we attempt to use to determine sameness of person will be lacking, because there is no further fact. What matters, to Parfit, is simply "Relation R": psychological connectedness, including memory, personality, and so on. Parfit continues this logic to establish a new context for morality and social control. He holds that it is morally wrong for one person to harm or interfere with another person, and that it is incumbent on society to protect individuals from such transgressions. That accepted, it is a short extrapolation to conclude that it is also incumbent on society to protect an individual's "Future Self" from such transgressions; tobacco use could be classified as an abuse of a Future Self's right to a healthy existence.

Time Travel

Some philosophers have argued that there exists a connection between the nature of the self and the nature of time. Vincent Conitzer argues that the vertiginous question is related to A series and B series theories of time, and that arguments in favor of the A-theory of time are more effective as arguments for the combined position of both A-theory being true and the "I" being metaphysically privileged from other perspectives. Caspar Hare has made similar arguments with the theories of egocentric presentism and perspectival realism, of which several other philosophers have written reviews.