# Local LLM. Chapter 1
Hello there! 🖖

## Recap

- Tool: llama.cpp
- OS: Windows

1. [Hugging Face Models](https://huggingface.co/models)
2. App: llama.cpp
3. Model: SmolVLM-500M-Instruct-GGUF
4. Win + R -> `winget install llama.cpp`
5. CMD -> `llama-cli -hf ggml-org/SmolVLM-500M-Instruct-GGUF:Q8_0`

### Cleanup

1. `winget list llama.cpp`
2. `winget uninstall --id ggml.llamacpp`

## Step-by-Step Implementation

### Quick Way to Run llama.cpp on Windows

1. Navigate to [Hugging Face Models](https://huggingface.co/models).
2. App: llama.cpp
3. Model: SmolVLM-500M-Instruct-GGUF
4. Click "Use this model" -> "llama.cpp".

A modal window will appear with instructions on how to install llama.cpp and a command to run the selected model. Use WinGet to install and run:

```powershell
# Press Win + R and type powershell (or use Terminal/CMD)

# 1. Install llama.cpp via Windows Package Manager
winget install llama.cpp

# 2. Download and run the model directly from Hugging Face in the console
llama-cli -hf ggml-org/SmolVLM-500M-Instruct-GGUF:Q8_0
```

...
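Beyond the interactive `llama-cli` session, the llama.cpp install also ships `llama-server`, which exposes an OpenAI-compatible HTTP API. As a hedged sketch (the port `8080` is llama-server's usual default, and the exact response shape is assumed to follow the OpenAI chat-completions format), you could talk to a locally running model from Python with nothing but the standard library:

```python
import json
import urllib.request

# Assumption: llama-server was started locally, e.g.
#   llama-server -hf ggml-org/SmolVLM-500M-Instruct-GGUF:Q8_0
# and is listening on its default port 8080.
SERVER_URL = "http://localhost:8080/v1/chat/completions"


def build_chat_request(prompt: str, max_tokens: int = 64) -> dict:
    """Build an OpenAI-style chat-completions payload for llama-server."""
    return {
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }


def ask(prompt: str) -> str:
    """Send a single prompt to the local server and return the reply text."""
    payload = json.dumps(build_chat_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        SERVER_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Assumed OpenAI-compatible response layout: choices[0].message.content
    return body["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(ask("Describe llama.cpp in one sentence."))
```

This is a sketch, not part of the walkthrough above; the CLI route shown earlier is all you need for a quick test.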