TL;DR: April 7 centered on AI’s expanding capabilities (models, agents, video editing), alongside mounting scrutiny—accuracy issues, privacy/security risks, and societal/policy concerns.
Models, tooling & agents broaden
- Google open-sourced Scion, an experimental multi-agent orchestration testbed that runs “deep agents” as isolated concurrent processes.
- Meta released VOID, an open-source video-editing pipeline (built on CogVideoX) that removes an object from a video along with the interactions it caused.
- Zhipu AI released GLM-5.1, emphasizing improvements on long-horizon tasks.
- Community and tooling activity included multimodal fine-tuning of Gemma 4 on Apple Silicon, plus agent-harness tooling (Meta-agent) that iterates on evaluations using live traces.
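The "isolated concurrent processes" pattern attributed to Scion can be sketched generically with the standard library; this is a minimal illustration of the idea, not Scion's actual API, and all names here are hypothetical:

```python
import multiprocessing as mp


def run_agent(name: str, task: str, out: mp.Queue) -> None:
    # Each agent runs in its own OS process, so a crash or runaway
    # loop in one agent cannot corrupt the others' in-memory state.
    result = f"{name} finished: {task}"  # placeholder for real agent work
    out.put((name, result))


def orchestrate(tasks: dict[str, str]) -> dict[str, str]:
    # Fan out one process per agent, then collect results via a queue.
    out: mp.Queue = mp.Queue()
    procs = [mp.Process(target=run_agent, args=(n, t, out))
             for n, t in tasks.items()]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    return dict(out.get() for _ in procs)


if __name__ == "__main__":
    print(orchestrate({"planner": "draft plan", "critic": "review plan"}))
```

Process isolation trades higher per-agent overhead for fault containment, which is the usual rationale when agents execute untrusted or long-running tool calls.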
Safety, reliability & policy pressure rise
- Trail of Bits audited Meta’s WhatsApp “Private Inference” (TEEs) and found 28 issues, stressing that privacy depends on deployment details like input validation and attestation freshness.
- Ars Technica reported tests suggesting Google AI Overviews are wrong ~10% of the time (Google disputed the benchmark framing).
- Anthropic detailed Claude Mythos Preview in a system card and launched Project Glasswing to repurpose the model for defensive cybersecurity.
- Broader commentary included concerns about AI-written work, labor impacts, and calls for stronger oversight (e.g., WSJ opinion on AI risks; AP on typewriters to deter cheating).