OpenAI pulls out of landmark £31B UK investment package
(theguardian.com)
AI
OpenAI has put on hold its planned “Stargate UK” project, part of a broader £31bn UK investment package announced last September, citing high energy costs and regulatory uncertainty. The deal was meant to help deliver UK “sovereign compute” by building data centres and potentially supplying thousands of Nvidia chips, but the Guardian reports that key infrastructure work had not begun as scheduled. OpenAI says it will proceed only when conditions improve, while critics point to broader delays and concerns over the UK’s datacentre economics.
One Brain to Query: Wiring a 60-Person Company into a Single Slack Bot
(merylldindin.com)
AI
The article describes how a “company brain” can be built by connecting a 60-person organization to a single Slack bot for querying internal knowledge and workflows, highlighting the wiring and practical considerations involved in turning team data into a useful interface.
Clean code in the age of coding agents
(yanist.com)
AI
The article argues that “clean code” still matters in the era of coding agents and LLMs because an agent’s performance depends on the codebase’s structure, not just whether the system works. It frames clean code in terms of readability, simplicity, modularity, and testability, noting that poorly organized code forces agents to read and modify more files, increasing token usage and cost. The author recommends explicitly describing expected code organization when assigning tasks to agents and continuing to review the agent’s changes.
White-Collar Workers Are Rebelling Against AI – 80% Refuse Adoption Mandates
(fortune.com)
AI
A Fortune report says many white-collar workers are moving away from corporate AI tools: in a survey summarized in the article, a majority either avoid AI at work or complete tasks manually instead. It also highlights a widening trust gap, with executives far more confident than workers about employees' access to, and understanding of, AI tools. The piece argues the core obstacle is not just reluctance, but gaps in governance, training, and the "handoff" between human judgment and AI systems.
AI Cybersecurity After Mythos: The Jagged Frontier
(aisle.com)
AI
The article argues that AI cybersecurity progress depends more on the end-to-end “system” around the model than on using the most powerful frontier model. The authors test Anthropic’s Mythos showcase vulnerabilities with small, cheap (and sometimes open) models and find that many can recover key analysis and flagship exploit chains, while performance is highly task-dependent and does not scale smoothly with model size. They conclude that model orchestration, validation, triage, and maintainer trust are the main “moat,” and that deploying many competent small models can be more effective for defensive workflows than relying on a single best model.
Let’s talk about LLMs
(b-list.org)
AI
The post argues that today’s LLM excitement in software development may be overhyped, emphasizing that software’s “essential” difficulty—specification, design, and testing of complex conceptual constructs—sets limits on gains from faster code generation. Drawing on Fred Brooks’ “No Silver Bullet” and “The Mythical Man-Month,” it claims LLMs mainly reduce “accidental” coding time, which is unlikely to deliver order-of-magnitude productivity improvements on its own. The author also notes that practical programming work is often dominated by communication, requirements gathering, decomposition, and review rather than raw typing of code.
AI and remote work is a disaster for junior software engineers
(medium.com)
AI
The author argues that combining AI coding assistants with remote work can slow the development of junior software engineers by reducing supervision, feedback, and “hands-on” learning. Based on his experience hiring junior staff in a remote-first consulting company, he says juniors often need in-person mentorship and exposure to domain knowledge. He also warns that juniors may delegate coding work to LLMs without building deeper understanding, potentially shrinking entry-level opportunities. He recommends countermeasures such as contributing to open source, building portfolios through real work, learning to use AI effectively, and accepting on-site or hybrid roles for better mentorship.
A Visual Guide to Gemma 4
(newsletter.maartengrootendorst.com)
AI
The article provides a visual and technical overview of Google’s Gemma 4 model family, covering four variants and their dense vs. mixture-of-experts designs. It explains shared architectural choices such as interleaved local (sliding-window) and global attention, and details efficiency techniques for global attention (grouped query attention, K=V, and p-RoPE). It also outlines how Gemma 4 handles multimodal inputs, including image understanding via a vision transformer and approaches for variable aspect ratios.
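The grouped-query attention and interleaved sliding-window mechanics mentioned above can be sketched generically. This is an illustrative numpy sketch of the general techniques, not Gemma 4's actual implementation; head counts, window size, and masking details here are assumptions for demonstration.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def gqa_attention(q, k, v, window=None):
    """Grouped-query attention with an optional causal sliding window.

    q: (n_q_heads, seq, d)   k, v: (n_kv_heads, seq, d)
    Each group of n_q_heads // n_kv_heads query heads shares one KV head,
    shrinking the KV cache; a finite `window` makes attention local.
    """
    n_q, seq, d = q.shape
    n_kv = k.shape[0]
    group = n_q // n_kv
    k = np.repeat(k, group, axis=0)   # each KV head serves a group of query heads
    v = np.repeat(v, group, axis=0)
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d)
    i, j = np.indices((seq, seq))
    mask = j > i                      # causal: never attend to future tokens
    if window is not None:
        mask |= (i - j) >= window     # local: only the last `window` tokens
    scores[:, mask] = -1e9
    return softmax(scores) @ v
```

A "global" layer would call this with `window=None` and a "local" layer with a fixed window, alternating through the stack as the article describes.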
Code Is Cheap Now, and That Changes Everything
(perevillega.com)
AI
The article argues that AI coding tools have sharply reduced the cost of producing software, changing long-standing assumptions in planning, estimation, and prioritization. It uses examples like Paul Ford’s low-cost weekend projects to illustrate how tasks that once required small teams can now be done cheaply, while warning that “good code” still depends on verification, testing, and developer judgment. The core message is a mindset and process shift: focus on defining reliable systems with clear contracts, invariants, and observability, rather than optimizing for raw code generation speed.
Reallocating $100/Month Claude Code Spend to Zed and OpenRouter
(braw.dev)
AI
The article explains how the author is shifting from a $100/month Claude Code/Anthropic spend to a setup centered on the Zed editor (about $10/month) and additional model capacity via OpenRouter (monthly $90 top-ups). It describes using Zed’s built-in agent harness or continuing with Claude Code by configuring it to call OpenRouter endpoints, and compares tradeoffs like context size, pricing/credit rollover, and extension support. The post also covers privacy/data-retention settings and briefly mentions other CLI agent harnesses that can work with OpenRouter.
Claude mixes up who said what and that's not OK
(dwyer.co.za)
AI
The article argues that Claude can occasionally misattribute its own internal messages as if they came from the user, leading it to insist “No, you said that” and act on those supposed instructions. The author distinguishes this “who said what” problem from typical hallucinations or permission-boundary failures, citing examples from Claude Code and community reports where Claude blames the user for instructions it generated itself.
Veteran artist behind Mass Effect, Halo, & Overwatch 2 weighs in on Nvidia DLSS 5
(notebookcheck.net)
AI
A veteran game environment artist and longtime contributor to franchises including Mass Effect, Halo, and Overwatch 2 says Nvidia’s DLSS 5 demos go beyond “enhancement” and can effectively reinterpret existing artwork. He argues that current controls like masking and intensity offer limited real creative guidance, and that using the feature late in production risks shifting authorship away from artists. The interview also notes broader studio adoption of AI in pipelines and raises concerns about how photoreal-biased outputs and one-click workflows could affect stylized art direction, while suggesting responsible use would involve iterative tuning alongside artists rather than a late-stage toggle.
Largest Dutch pension fund cuts ties with controversial tech firm Palantir
(nltimes.nl)
AI
ABP, the Netherlands’ largest pension fund, has sold its €825 million stake in the controversial AI firm Palantir, ending its ties with the company. Palantir’s software is used by agencies including the US immigration service ICE and the Israeli military, and critics including Amnesty International have warned that it can facilitate human rights violations. ABP said it assesses investments on risk, cost, sustainability, and social responsibility, and the sale follows other recent divestments such as Booking and Airbnb.
Show HN: Image to Music AI – Turn Any Photo into an Original Soundtrack
(imagetomusicai.com)
AI
Image to Music AI is a service that turns uploaded photos—or photo plus a text scene description—into original, downloadable music that matches the image’s mood, colors, and energy. It offers both image-first and text-prompt refinement, lets users preview and regenerate until they’re satisfied, and starts new users with 15 credits to try the Pro option. The site positions the tool for creators like travelers, vloggers, and moodboard or concept-art projects, and says it’s powered by Google Lyria.
Process Manager for Autonomous AI Agents
(botctl.dev)
AI
botctl is a process manager for running persistent autonomous AI agents, offering a terminal dashboard and optional web UI to start, stop, message, and monitor bots. It uses declarative YAML configuration (BOT.md) with loop-based execution, session saving for resuming work, and supports hot-reloading configuration changes. The platform also includes reusable “skills” modules and workspace/log handling, with installation scripts and quickstart tools for creating and launching bots on macOS, Linux, and Windows.
Resurrecting a 1992 MUD with Agentic AI
(meditations.metavert.io)
AI
A developer describes how he used an AI coding agent to reconstruct and relaunch his 1992 text-based multiplayer MUD, Legends of Future Past, despite having no source code. By analyzing surviving Game Master scripts, manuals, and a 1996 gameplay capture, the agent reverse-engineered the original scripting language and rebuilt core systems like the engine, combat, monster AI, and multiplayer infrastructure over a weekend. The article argues this “software archaeology” approach could make preserving or recreating dead online worlds feasible when only creative artifacts remain.
'There's a lot of desperation': older workers turn to AI training to stay afloat
(theguardian.com)
AI
The Guardian reports on older, highly skilled Americans who, after repeated layoffs and age-related hiring barriers, are turning to AI data-annotation and model-training work as a “bridge” to stay afloat. The article profiles people like a former IT specialist and an emergency physician who struggled to find stable jobs, describing how AI training can offer flexible short-term pay but often comes with lower wages, contract instability, and limited benefits.
App Store sees 84% surge in new apps as AI coding tools take off
(9to5mac.com)
AI
New App Store submissions have surged globally, with The Information and Sensor Tower data attributing much of the growth to “vibe coding” tools like Claude Code and Codex that help generate app code from prompts. Apple says it is reviewing most submissions within 48 hours and using AI to assist, but it has also pulled or blocked updates to some AI coding apps for allegedly violating App Review rules. The report suggests the influx may strain review teams, though Apple disputes that claim.
Vera – A language designed for machines to write
(veralang.dev)
AI
Vera is a new programming language aimed at having LLMs write code by prioritizing “checkability” over human convenience. It requires explicit function contracts and typed effects, replaces variable names with structural (De Bruijn-style) references, and uses static and runtime verification (e.g., via Z3) to catch problems early. The article also notes that Vera supports common agent-facing tasks like typed JSON/HTTP/LLM calls, compiles to WebAssembly, and reports early benchmark results (VeraBench) suggesting that some models produce more correct Vera code than equivalent code in comparable languages.
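The De Bruijn-style structural references mentioned above can be illustrated with the classic technique itself (a Python sketch of the general idea, not Vera's actual representation): each variable use is replaced by its distance, counted in enclosing binders, from its definition, so terms that differ only in variable names become structurally identical.

```python
# Terms are tuples: ('var', name) | ('lam', name, body) | ('app', f, x)
# to_de_bruijn replaces each named variable with the number of binders
# between its use site and its binder (0 = innermost binder).

def to_de_bruijn(term, env=()):
    kind = term[0]
    if kind == 'var':
        return ('var', env.index(term[1]))         # distance to the binder
    if kind == 'lam':
        return ('lam', to_de_bruijn(term[2], (term[1],) + env))
    return ('app', to_de_bruijn(term[1], env), to_de_bruijn(term[2], env))
```

The payoff for machine-written code is that alpha-equivalent terms compare equal, so a verifier (or an LLM) never has to reason about renaming: `λa.a` and `λb.b` both become `('lam', ('var', 0))`.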
Show HN: I built a local data lake for AI powered data engineering and analytics
(stream-sock-3f5.notion.site)
AI
The post announces “Nile Local,” a fully local AI data IDE and data stack aimed at reducing the setup overhead of big-data engineering. It combines a local data lake/storage layer, Spark compute for interactive query testing, and automated “zero-ETL” ingestion with lineage and versioning. The tool also includes AI-assisted analytics and query writing using embedded or cloud LLMs so users can work and iterate on data locally before deploying to the cloud.
Finetuning Activates Verbatim Recall of Copyrighted Books in LLMs
(arxiv.org)
AI
A new arXiv preprint argues that model finetuning can reactivate verbatim memorization of copyrighted books in major LLMs. The authors claim that training models to expand plot summaries into full text enables GPT-4o, Gemini-2.5-Pro, and DeepSeek-V3.1 to reproduce large portions of held-out copyrighted books, even when prompted only with semantic descriptions rather than any book text. They report the effect generalizes across authors and even across different models and providers, suggesting an industry-wide vulnerability beyond common alignment measures like RLHF and output filtering.
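As a rough illustration of how such verbatim reproduction can be quantified, here is a minimal sketch using a generic n-gram overlap proxy; the paper's actual metric may differ, and `n=5` is an arbitrary choice for demonstration.

```python
def longest_verbatim_run(generated, reference, n=5):
    """Length (in tokens) of the longest run of consecutive token n-grams
    in `generated` that also appear in `reference` — a simple proxy for
    verbatim memorization, not the paper's exact measurement."""
    gen, ref = generated.split(), reference.split()
    ref_ngrams = {tuple(ref[i:i + n]) for i in range(len(ref) - n + 1)}
    best = run = 0
    for i in range(len(gen) - n + 1):
        run = run + 1 if tuple(gen[i:i + n]) in ref_ngrams else 0
        best = max(best, run)
    return (best + n - 1) if best else 0  # c consecutive n-grams span c+n-1 tokens
```

A study like the one summarized above would compare such overlap scores before and after finetuning, against held-out book text the model was never shown in the prompt.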
The Hormuz chokehold affects AI funding too
(highabsolutevalue.substack.com)
AI
The article argues that disruptions to shipping through the Strait of Hormuz could cut Gulf sovereign wealth funds’ profits and slow their rapidly growing AI investments. It cites estimates that Gulf funds were major sources of AI funding in 2025, but notes that if tanker traffic through the strait remains sharply reduced, less capital may be available for late-2026 and 2027 mega-rounds. The piece suggests this could also strengthen Big Tech by limiting competition from these well-capitalized regional investors.
Claude Managed Agents Overview
(platform.claude.com)
AI
Anthropic’s Claude Managed Agents is a pre-built, configurable agent harness that runs in managed infrastructure, avoiding the need to build your own agent loop and tool runtime. It lets Claude operate within a defined environment (container template, packages, network access) and run long-running, tool-heavy tasks with persisted session history, streaming events, and the ability to steer or interrupt execution mid-run. The docs outline core concepts (agent, environment, session, events), available built-in tools (e.g., Bash, file operations, web search/fetch, MCP connectors), and note that the feature is currently in beta with a required beta header, rate limits, and branding guidelines for partners.