Posts

Showing posts from March, 2026

Supa, supa unlovable vibes

"Taimur Khan, a tech entrepreneur with a background in software engineering, found 16 vulnerabilities, six of which he said were critical, in a single Lovable-hosted app that leaked more than 18,000 people's data.

"He declined to name the app during the disclosure process, although it was hosted on Lovable's platform and showcased on its Discover page. The app had more than 100,000 views and around 400 upvotes at the time Khan began his probe.

"The main issue, Khan said, is that apps vibe-coded on Lovable's platform ship with backends powered by Supabase, which handles authentication, file storage and real-time updates through a PostgreSQL database connection.

"However, when the developer, in this case AI, or the human project owner fails to explicitly implement crucial security features such as Supabase's row-level security and role-based access, the generated code looks functional but is in fact flawed."
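The failure mode described above, a backend that answers every query because no row-level security policy was ever attached, can be sketched in a few lines. This is a toy in-memory model, not Supabase's actual SQL policy machinery (real RLS is configured with `CREATE POLICY` statements in Postgres); the table contents and the `owner`-matching rule are hypothetical.

```python
# Toy model of row-level security (RLS). In Supabase, RLS is enforced by
# Postgres policies; here we model the same idea with a filter predicate.
# All data and names below are illustrative, not from the reported app.

ROWS = [
    {"id": 1, "owner": "alice", "email": "alice@example.com"},
    {"id": 2, "owner": "bob", "email": "bob@example.com"},
]

def select(user, rows, rls_enabled):
    """Return the rows visible to `user`.

    With RLS off, any caller (e.g. anyone holding the public anon key)
    reads every row -- the "looks functional but flawed" case: the app
    works in testing, yet leaks the whole table.
    With an owner-only policy, each user sees only their own rows.
    """
    if not rls_enabled:
        return rows  # no policy: everything is readable
    return [r for r in rows if r["owner"] == user]

print(len(select("alice", ROWS, rls_enabled=False)))  # 2: full leak
print(len(select("alice", ROWS, rls_enabled=True)))   # 1: own row only
```

The point of the sketch: both configurations return data and look "functional" to the project owner; only the second one restricts each caller to rows they own.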

Taboom!

"Kenneth Payne at King’s College London set three leading large language models, GPT-5.2, Claude Sonnet 4 and Gemini 3 Flash, against each other in simulated war games.

"The scenarios involved intense international standoffs, including border disputes, competition for scarce resources and existential threats to regime survival.

"The AIs were given an escalation ladder, allowing them to choose actions ranging from diplomatic protests and complete surrender to full strategic nuclear war. The AI models played 21 games, taking 329 turns in total, and produced around 780,000 words describing the reasoning behind their decisions.

"In 95 per cent of the simulated games, at least one tactical nuclear weapon was deployed by the AI models. 'The nuclear taboo doesn’t seem to be as powerful for machines [as] for humans,' says Payne.

"What’s more, no model ever chose to fully accommodate an opponent or surrender, regardless of how badly they were losing.

"At ...
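The setup described above, models repeatedly picking a rung on an escalation ladder over many turns, can be sketched as a tiny harness. Everything here is hypothetical: the rung names, the `hawkish` policy and the turn count are illustrative stand-ins, not the study's actual ladder or prompts.

```python
import random

# Illustrative escalation ladder, ordered from de-escalation to the top
# rung. The real study's ladder is not reproduced here.
LADDER = [
    "surrender",
    "diplomatic protest",
    "economic sanctions",
    "conventional strike",
    "tactical nuclear weapon",
    "full strategic nuclear war",
]

def play_game(policy, turns=10, rng=None):
    """Run one simulated game and return the highest rung reached."""
    rng = rng or random.Random(0)
    highest = 0
    for _ in range(turns):
        choice = policy(LADDER, rng)
        highest = max(highest, LADDER.index(choice))
    return LADDER[highest]

# A toy policy echoing the reported behaviour: it never picks
# "surrender", whatever the state of the game.
def hawkish(ladder, rng):
    return rng.choice(ladder[1:])

print(play_game(hawkish))
```

A real harness would replace `hawkish` with a call to each language model and log its stated reasoning per turn, which is roughly how the study accumulated its 329 turns of transcripts.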