Parameter Count Is the Worst Way to Pick a Model on 8GB VRAM

I've been running local LLMs on an RTX 4060 8GB for six months. Qwen2.5-32B, Qwen3.5-9B/27B/35B-A3B, BGE-M3: all crammed through Q4_K_M quantization. One thing I can say with certainty: parameter count is the worst metric for model selection.

Online comparisons rank models by size ("32B gives this quality," "7B gives that"), and benchmarks like MMLU and HumanEval publish rankings ordered by parameter count. But those assume abundant VRAM. On 8GB, parameter count fails to predict the actual experience.

This article covers three rules I derived from real measurements, plus a decision framework for model selection on 8GB VRAM. All data comes from my previous benchmark articles.

Rule 1: Fitting in VRAM ≠ Running Fast

When you hit the 8GB wall, the first instinct is "VRAM usage is X GB, so it fits" (roughly the arithmetic sketched below). But VRAM usage and speed have no linear relationship.
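For reference, here is what that "does it fit" check usually amounts to. This is a minimal back-of-envelope sketch, not how I measured anything: the ~4.85 bits-per-weight figure for Q4_K_M, the architecture numbers in the example, and the fixed overhead constant are all assumptions for illustration.

```python
# Back-of-envelope "does it fit" estimate for a quantized GGUF model.
# Illustrative only: the ~4.85 bits/weight for Q4_K_M, the architecture
# numbers, and the overhead constant are assumptions, not measured values.

def estimate_vram_gib(
    params_b: float,           # parameters, in billions
    bits_per_weight: float,    # ~4.85 for Q4_K_M, 16 for fp16
    n_layers: int,             # transformer layers
    n_kv_heads: int,           # KV heads (grouped-query attention)
    head_dim: int,             # dimension per attention head
    context_len: int,          # context window you actually allocate
    kv_bytes: int = 2,         # fp16 KV-cache entries
    overhead_gib: float = 0.8, # CUDA context + compute buffers (assumed)
) -> float:
    weights = params_b * 1e9 * bits_per_weight / 8                           # bytes for weights
    kv_cache = 2 * n_layers * n_kv_heads * head_dim * context_len * kv_bytes # K and V caches
    return (weights + kv_cache) / 1024**3 + overhead_gib

# Example: a hypothetical 7.6B model at Q4_K_M with an 8K context
print(f"{estimate_vram_gib(7.6, 4.85, 32, 8, 128, 8192):.1f} GiB")  # ~6.1 GiB: "fits" on 8GB
```

Even when this math says a model fits, it says nothing about tokens per second. The Qwen3.5 three-model comparison made this painfully clear:

Model VRAM Speed GPU Utilizatio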