Docker Model Runner: Running Machine Learning Models with Docker
Understanding How to Choose Model Variants
Tags are essential for identifying and selecting the right model variant for your needs. On Docker Hub, model tags follow a structured naming convention that encodes the parameter count and the quantization format, for example 360M-Q4_K_M.
The SmolLM2 family is published in multiple variants that differ in model size, quantization method, memory footprint, and runtime performance.
The number of parameters is the primary factor that defines a model's capacity and baseline capability. This is typically indicated directly in the image tag. In addition to parameter count, each variant applies a different numerical representation (precision or quantization), which significantly affects memory usage and inference speed.
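For example, in a tag such as ai/smollm2:360M-Q4_K_M, the 360M segment is the parameter count and Q4_K_M is the quantization format (a 4-bit K-quant in the GGUF/llama.cpp naming scheme, where F16 means unquantized 16-bit floats and Q2_K a very aggressive 2-bit quantization). A quick way to see the practical effect is to pull two quantizations of the same model and compare their on-disk sizes. The commands below are a minimal sketch, assuming Docker Model Runner is enabled in Docker Desktop and the docker model CLI plugin is available in your version:

```
# Pull two quantizations of the same 360M model to compare their footprint
docker model pull ai/smollm2:360M-F16      # 16-bit floats: largest, most precise
docker model pull ai/smollm2:360M-Q4_K_M   # 4-bit K-quant (medium): much smaller on disk

# List local models and their sizes
docker model list
```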
For SmolLM2, the available variants include:
ai/smollm2:135M-F16
ai/smollm2:135M-Q2_K
ai/smollm2:135M-Q4_0
ai/smollm2:135M-Q4_K_M
ai/smollm2:360M-F16
ai/smollm2:360M-Q4_0
ai/smollm2:360M-Q4_K_M
The 135M variants are the smaller of the two parameter sizes and have the lightest footprint, while the 360M variants trade more memory and compute for greater baseline capability. An example of pulling and running a chosen variant follows the list.
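Once you have settled on a variant, you reference it by its full tag. The following is a minimal sketch, again assuming Docker Model Runner is enabled and the docker model subcommands are available in your Docker version:

```
# Pull the chosen variant by its full tag
docker model pull ai/smollm2:360M-Q4_K_M

# Send a one-off prompt to the local model (omit the prompt for interactive chat)
docker model run ai/smollm2:360M-Q4_K_M "Explain quantization in one sentence."

# Remove a variant you no longer need to free disk space
docker model rm ai/smollm2:135M-Q2_K
```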