AI news

Browse stored weekly and monthly summaries for this subject.

March 30, 2026 to April 05, 2026

Summary

Generated about 22 hours ago.

TL;DR: This week highlighted rapid deployment of AI systems (healthcare and robotics) alongside ongoing model/tool releases, while the policy and governance conversation focused on safety, labeling, and legal exposure.

Model + tooling releases (and on-device momentum)

  • Microsoft launched three MAI models in Foundry/MAI Playground: MAI-Transcribe-1 (speech-to-text), MAI-Voice-1 (voice generation + custom voices), and MAI-Image-2 (image generation), with enterprise controls and red-teaming noted.
  • Google pushed Gemma 4 on-device via its AI Edge Gallery iPhone app, and coverage looked at running Gemma 4 locally (e.g., with LM Studio and Claude Code integrations).
  • Open-source agent tooling and QA workflows kept expanding: examples include nanocode (an end-to-end pipeline for training an agentic coding model in pure JAX on TPUs) and new approaches to testing/QA with Claude agents.
  • A usage-scale claim circulated: Qwen-3.6-Plus reportedly processes 1T+ tokens/day on OpenRouter.

Real-world AI adoption + societal/legal pressure

  • Health: an Amsterdam cancer center reported AI cutting MRI scan time from 23 to 9 minutes, increasing capacity and shifting scans toward daytime hours.
  • Robotics/operations: reporting on Japan’s move toward “physical AI” deployments to keep warehouses/factories running as labor shortages worsen.
  • Policy/legal: updates included OpenAI Codex pricing changes (token-based usage) and court challenges targeting whether platforms can keep relying on Section 230, with AI-generated recommendations/summaries implicated.
  • Safety/ethics: posts and commentary addressed child-safety regulation delays, plus debates over AI-generated code labeling/review and risks of misplaced reliance on AI.

Emerging pattern

Across the period, coverage shifted from pure model announcements toward integration, orchestration, verification/QA, and deployment constraints—with tighter attention to safety, labeling, and accountability as AI moves into operational systems.

Stories

Wavelets on Graphs via Spectral Graph Theory (arxiv.org) AI

The paper presents a way to build wavelet transforms for functions on the vertices of a finite weighted graph using the graph Laplacian’s spectral decomposition. It defines scaled wavelet operators via a kernel g(tL) and forms graph wavelets by localizing these operators, with an admissibility condition ensuring the transform is invertible. The authors also study localization behavior at fine scales and provide an efficient Chebyshev-polynomial method to compute the transform without diagonalizing the Laplacian.
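The Chebyshev approach the paper describes can be sketched in a few lines: expand x ↦ g(tx) in Chebyshev polynomials on [0, λmax], then apply g(tL) to a signal using only matrix–vector products with the Laplacian, via the three-term recurrence. The kernel below and all parameter choices are illustrative, not the paper's exact construction.

```python
import numpy as np

def chebyshev_coeffs(g, t, lmax, K, n_quad=1000):
    """Chebyshev coefficients of x -> g(t*x) on [0, lmax],
    via midpoint quadrature at Chebyshev angles."""
    a = lmax / 2.0                       # map [0, lmax] onto [-1, 1]
    theta = (np.arange(n_quad) + 0.5) * np.pi / n_quad
    x = a * (np.cos(theta) + 1.0)
    c = np.zeros(K + 1)
    for k in range(K + 1):
        c[k] = (2.0 / n_quad) * np.sum(g(t * x) * np.cos(k * theta))
    return c

def apply_kernel(L, f, g, t, lmax, K=40):
    """Approximate g(tL) f without diagonalizing L."""
    a = lmax / 2.0
    c = chebyshev_coeffs(g, t, lmax, K)
    Y = lambda v: (L @ v) / a - v        # shifted operator (L - aI)/a
    T_prev, T_curr = f, Y(f)             # T_0(Y) f and T_1(Y) f
    out = 0.5 * c[0] * T_prev + c[1] * T_curr
    for k in range(2, K + 1):            # T_{k} = 2 Y T_{k-1} - T_{k-2}
        T_prev, T_curr = T_curr, 2.0 * Y(T_curr) - T_prev
        out = out + c[k] * T_curr
    return out
```

Applied to a delta signal at a vertex, `apply_kernel` yields the wavelet localized there; for smooth kernels the error decays rapidly in the polynomial order K.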

In Japan, the robot isn't coming for your job; it's filling the one nobody wants (techcrunch.com) AI

Japan is accelerating “physical AI” not to replace jobs broadly, but to keep factories, warehouses, and other critical operations running as labor shortages worsen. Backed by government targets and investment, companies are moving from pilots to customer-funded deployments using more autonomous robotics software, orchestration, and integration across existing hardware. Industry sources say Japan’s strength in high-precision robotics components and control systems is a key advantage, with a hybrid ecosystem where incumbents scale while startups build perception and workflow capabilities.

Iran threatens 'complete and utter annihilation' of OpenAI's $30B Stargate (tomshardware.com) AI

Iran’s Islamic Revolutionary Guard Corps has issued a video warning that any attacks on Iranian power infrastructure would be met with “complete and utter annihilation,” naming U.S. and Israeli facilities in the region. The threat specifically targets OpenAI’s reported $30B Stargate AI data center in Abu Dhabi, showing satellite imagery of a 1GW site. The warning follows recent reports of rocket strikes disrupting some AWS data centers and comes amid broader threats from Iran toward major U.S. tech companies.

Policy on adding AI generated content to my software projects (joeyh.name) AI

The author describes a tongue-in-cheek policy for accepting AI-generated code into their projects: bypassing normal code review if the submission compiles, is clearly labeled as “(AI generated),” and includes a signed Developer Certificate of Origin. They note they may still make small changes for QA purposes and will keep the contributor credited as the author, but warn that unlabeled AI code may crowd out human code reviews. The post is framed humorously with examples, including gating a change to leap days.

3 New world class MAI models, available in Foundry (microsoft.ai) AI

Microsoft announced three new MAI models—MAI-Transcribe-1 for speech-to-text, MAI-Voice-1 for voice generation (including custom voice creation), and MAI-Image-2 for image generation—now available in Microsoft Foundry and MAI Playground. The company says MAI-Transcribe-1 targets fast, accurate transcription for the most-used languages, MAI-Voice-1 can generate 60 seconds of audio per second of compute and preserve speaker identity, and MAI-Image-2 delivers faster image generation with similar quality. Microsoft also lists starting prices for each model and notes enterprise controls and red-teaming for safer deployment.

AI Cuts MRI Scan Time from 23 to 9 Minutes at Amsterdam Cancer Center (nltimes.nl) AI

Amsterdam’s Antoni van Leeuwenhoek Hospital has introduced AI software that reduces MRI scan times from 23 to 9 minutes. The tool speeds up converting scan data into images and helps limit motion blur from patients who struggle to remain still. The hospital says it is also increasing weekly capacity and shifting more scans into daytime hours after internal testing of the system.

Salarymen, Specialists, and Small Businesses (noahpinion.blog) AI

The article argues that, in the near term, AI is more likely to replace specific tasks than entire jobs, with employment so far largely holding up. It proposes a three-way shift in work: “specialists” whose roles remain because tasks are tightly bundled and stakes are high, “salarymen” generalists who supervise and patch AI outputs while adapting to changing AI strengths, and more “small business” owners enabled by AI leverage.

Gemma 4 on iPhone (apps.apple.com) AI

Google’s AI Edge Gallery iPhone app adds official support for the newly released Gemma 4 model family, touting fully offline, on-device generative AI. The update introduces features like “Thinking Mode” to show step-by-step reasoning (for supported models), “Agent Skills” for tool-augmented responses, plus multimodal image queries, audio transcription/translation, and prompt testing controls. The app also includes model download/management and benchmark testing, with performance dependent on the device’s hardware.

Running Google Gemma 4 Locally with LM Studio's New Headless CLI and Claude Code (ai.georgeliu.com) AI

The article explains how to run Google’s Gemma 4 26B (MoE) locally on macOS using LM Studio 0.4.0’s new headless command-line tools (llmster/lms CLI) and how to integrate the setup with Claude Code. It walks through downloading and loading the model, checking performance and memory/parallelism, and selecting context length and quantization to fit within a Mac with 48GB unified memory. It also notes that while Gemma 4’s MoE design makes it feasible on modest hardware, running it via Claude Code can introduce noticeable slowdown.

Reaffirming our commitment to child safety in the face of European Union inaction (blog.google) AI

Google says that with the EU ePrivacy derogation permitting CSAM-detection tools set to expire on April 3, Europe risks leaving children less protected online. It notes that several companies have voluntarily used tools such as hash-matching to detect, remove, and report CSAM, and have continued those efforts on interpersonal communication services. Google and other signatories call on EU institutions to urgently complete a regulatory framework and maintain established child-safety efforts.

Codex is switching to API pricing based usage for all users (help.openai.com) AI

OpenAI’s Codex rate card has been updated: as of April 2, 2026, Codex pricing for new and existing ChatGPT Business customers and new ChatGPT Enterprise plans shifts from per-message estimates to token-based usage (credits per million input, cached input, and output tokens). The article provides separate legacy rate cards for Plus/Pro and most other plans until migrations are completed, with users advised to review both during the transition.
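Token-based pricing of this shape is simple to estimate: multiply each bucket (input, cached input, output) by its per-million-token rate. The rates below are made-up placeholders for illustration, not OpenAI's actual rate card.

```python
def usage_cost(input_toks: int, cached_toks: int, output_toks: int,
               rates: dict) -> float:
    """Cost in credits for one request under token-based pricing.
    `rates` maps each token bucket to credits per 1M tokens
    (illustrative numbers only)."""
    per_million = 1_000_000
    return (input_toks * rates["input"]
            + cached_toks * rates["cached_input"]
            + output_toks * rates["output"]) / per_million

# Example with hypothetical rates: cached input discounted 10x vs. fresh input.
rates = {"input": 1.25, "cached_input": 0.125, "output": 10.0}
cost = usage_cost(800_000, 200_000, 50_000, rates)  # 1.525 credits
```

The main practical consequence of a model like this is that prompt caching and shorter outputs directly reduce spend, unlike flat per-message estimates.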

Meta, Google under attack as court cases bypass 30-year-old legal shield (cnbc.com) AI

Recent court losses for Meta and Google, along with other lawsuits, are testing whether platforms can still rely on Section 230’s protections that have shielded them for decades. CNBC reports that plaintiffs are pursuing narrower theories aimed at bypassing the law—often by focusing on how products are designed and how AI-generated summaries or recommendations are presented to users. The article notes that while penalties so far are limited, the cases could shape future litigation as the industry shifts from traditional social media and search toward AI-driven experiences, with possible appeals up to the Supreme Court.

Nanocode: The best Claude Code that $200 can buy in pure JAX on TPUs (github.com) AI

A GitHub discussion introduces “nanocode,” a fully open-source, end-to-end approach to train a Claude-Code-like agentic coding model using pure JAX on TPUs. The author describes an architecture and training pipeline based on Anthropic-style Constitutional AI and Andrej Karpathy’s nanochat, including synthetic data generation, preference optimization for alignment, and TPU-optimized training. They report that a 1.3B-parameter model (d24) can be reproduced in about 9 hours on a TPU v6e-8 for roughly $200, with smaller variants costing less, and provide starter commands and training/evaluation notes.

Microsoft terms say Copilot is for entertainment purposes only, not serious use (tomshardware.com) AI

Microsoft’s updated Copilot terms state the AI is “for entertainment purposes only,” warn it can make mistakes, and say users should not rely on it for important advice. The article notes this caution is similar to disclaimers from other AI providers and argues it conflicts with how Microsoft markets and integrates Copilot into products like Windows 11 for business use. It also emphasizes the need to verify AI outputs due to issues like hallucinations and automation bias.

Code Reviews Need to Evolve (latent.space) AI

The article argues that traditional human code reviews are becoming infeasible as code changes grow and AI-generated code increases review time and effort. It proposes shifting review “upstream” to human-authored specs and acceptance criteria, with automated, deterministic verification (tests, type checks, contracts), layered trust gates, restricted agent permissions, and adversarial verification by separate agents. The overall point is to replace approval-by-reading-diffs with approval-by-verifying intent and constraints before code is generated.

The Locksmith's Apprentice – Claude told me to expose my data without auth (mpdc.dev) AI

An IT operator describes building a self-hosted “security operations brain” for AI-assisted monitoring, then discovering it had been exposed to the public internet for 11 days due to a tunnel/DNS setup with no authentication. He says Claude helped design and deploy the system via Anthropic’s MCP tooling, but authentication was never considered, even as multiple AI sessions continued to access and modify his exposed data. After discovering the issue, the fix was to remove the DNS record, and he uses the incident to argue that AI can follow correct procedures while missing real-world security context and urgency.

Banray.eu: Raising awareness of the terrible idea that is always-on AI glasses (banray.eu) AI

The Banray.eu site argues that Meta’s camera-equipped “Ray-Ban Meta” glasses enable always-on, privacy-invasive surveillance, including potential sharing and human review of recorded footage by subcontractors. It also claims Meta is preparing built-in facial identification features that would expand consent-free facial data collection, and points to broader industry moves toward smart glasses with persistent recording. The article urges venues and regulators to adopt policies against such devices and facial recognition.

Large language models are not the problem (nature.com) AI

In a commentary, Hiranya V. Peiris argues that anxiety about AI in science is misplaced: if a large language model can replicate someone’s scientific contribution, the issue lies less with the model than with what the field is doing to value and develop genuine work. The piece suggests that the concern signals a need for better standards or practices in research and training.

Eight years of wanting, three months of building with AI (lalitm.com) AI

Lalit Maganti describes releasing “systaqlite,” a new set of SQLite developer tools built over three months using AI coding agents. He explains why SQLite parsing—made difficult by the lack of a formal specification and limited parser APIs—was the core obstacle, and how AI helped accelerate prototyping, refactoring, and learning topics like pretty-printing and editor extension development. He also argues that AI was a net positive only when paired with tight review and strong scaffolding, after an early AI-generated codebase became too fragile and was rewritten.

Talk like caveman (github.com) AI

The GitHub repo “caveman” offers a Claude Code skill that makes Claude respond in a more concise “caveman” style. It claims to cut output tokens by about 75% by removing filler, hedging, and pleasantries while keeping technical accuracy. Users can install it via npx or the Claude Code plugin system and toggle modes with commands like /caveman and “stop caveman”.

AGI won't automate most jobs–because they're not worth the trouble (fortune.com) AI

A Yale economist argues that in an AGI era most jobs may not be automated because replacing people is not worth the compute cost, even if the systems could do it. Instead, compute would be directed to “bottleneck” work tied to long-run growth, while more “supplementary” roles like hospitality or customer-facing jobs may persist. The paper warns that automation could still reduce labor’s share of income and shift gains to owners of computing resources, making inequality the central political issue during the transition.