April 2026 TLDR Setup for Ollama and Gemma 4 26B on a Mac mini (gist.github.com) AI
The gist provides a step-by-step guide for running Ollama on an Apple Silicon Mac mini, pulling the Gemma 4 12B model, and configuring it to start automatically with the model preloaded and kept warm. It includes commands to verify whether the model is running on the GPU or CPU, a launch agent that periodically "warms" the model, and the OLLAMA_KEEP_ALIVE setting that prevents the model from being unloaded after inactivity. It also notes relevant Ollama updates, such as the MLX backend, and summarizes key memory considerations for a 24GB system.
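The pieces the summary describes can be sketched as a few shell commands. This is a hedged illustration, not the gist's exact script: the model tag `gemma4:12b` is a hypothetical placeholder (the actual tag isn't given in the summary), and it assumes a standard Ollama install listening on the default port 11434.

```shell
# Pull the model (tag is a placeholder — substitute the real one).
ollama pull gemma4:12b

# Check what's loaded and where it runs; "100% GPU" in the
# PROCESSOR column means the model is fully offloaded to Metal.
ollama ps

# Keep loaded models resident indefinitely (-1 = never unload).
# launchctl setenv makes the variable visible to GUI-launched agents,
# including the Ollama app, across this login session.
launchctl setenv OLLAMA_KEEP_ALIVE -1

# The kind of periodic "warm" request a launch agent could fire
# (e.g. via a StartInterval key in a ~/Library/LaunchAgents plist):
# an empty-prompt generate call loads the model without producing text.
curl -s http://localhost:11434/api/generate \
  -d '{"model": "gemma4:12b", "keep_alive": -1}'
```

Setting `OLLAMA_KEEP_ALIVE` globally and sending an occasional warm request are belt-and-braces: either alone should keep the model in memory, but the periodic ping also catches the case where the server restarts and comes back empty.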
April 03, 2026 16:36
Source: Hacker News