<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>PrivateLLM on The .NET + LM Sandbox</title><link>https://lmcorner.net/tags/privatellm/</link><description>Recent content in PrivateLLM on The .NET + LM Sandbox</description><generator>Hugo -- 0.150.1</generator><language>en-us</language><lastBuildDate>Mon, 23 Mar 2026 23:54:03 +0100</lastBuildDate><atom:link href="https://lmcorner.net/tags/privatellm/index.xml" rel="self" type="application/rss+xml"/><item><title>Local LLM. Chapter 1</title><link>https://lmcorner.net/posts/local-llm/</link><pubDate>Mon, 23 Mar 2026 23:54:03 +0100</pubDate><guid>https://lmcorner.net/posts/local-llm/</guid><description>&lt;h3 id="hello-there-"&gt;Hello there! 🖖&lt;/h3&gt;
&lt;h3 id="recap"&gt;Recap&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Tool:&lt;/strong&gt; llama.cpp&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;OS:&lt;/strong&gt; Windows&lt;/li&gt;
&lt;/ul&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;1. Open https://huggingface.co/models
2. App: llama.cpp
3. Model: SmolVLM-500M-Instruct-GGUF
4. Win + R -&amp;gt; powershell -&amp;gt; winget install llama.cpp
5. CMD -&amp;gt; llama-cli -hf ggml-org/SmolVLM-500M-Instruct-GGUF:Q8_0
/// Cleanup
1. winget list llama.cpp
2. winget uninstall --id ggml.llamacpp
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id="step-by-step-implementation"&gt;Step-by-Step Implementation&lt;/h2&gt;
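&lt;p&gt;One note on the cleanup step: &lt;code&gt;winget uninstall&lt;/code&gt; removes the llama.cpp binaries, but not the GGUF file downloaded with &lt;code&gt;-hf&lt;/code&gt;. llama.cpp honors the &lt;code&gt;LLAMA_CACHE&lt;/code&gt; environment variable for its download cache; the path below is the usual Windows default, so verify it on your machine before deleting anything. A rough sketch (PowerShell):&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;# Models downloaded with -hf are cached here by default (assumption — check LLAMA_CACHE)
dir $env:LOCALAPPDATA\llama.cpp
# Delete the cached model once you no longer need it
del $env:LOCALAPPDATA\llama.cpp\*.gguf
&lt;/code&gt;&lt;/pre&gt;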
&lt;h3 id="quick-way-to-run-llamacpp-on-windows"&gt;Quick Way to Run llama.cpp on Windows&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;Navigate to &lt;a href="https://huggingface.co/models"&gt;Hugging Face Models&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;App:&lt;/strong&gt; llama.cpp&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Model:&lt;/strong&gt; SmolVLM-500M-Instruct-GGUF
&lt;img alt="Hugging Face Model Select" loading="lazy" src="https://lmcorner.net/posts/local-llm/hf-1.png"&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;ol start="2"&gt;
&lt;li&gt;Click &amp;ldquo;Use this model&amp;rdquo; -&amp;gt; &amp;ldquo;llama.cpp&amp;rdquo;&lt;/li&gt;
&lt;/ol&gt;
&lt;ul&gt;
&lt;li&gt;A modal window will appear with instructions on how to install llama.cpp and a command to run the selected model.
&lt;img alt="Hugging Face Model App Select" loading="lazy" src="https://lmcorner.net/posts/local-llm/hf-2.png"&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;ol start="3"&gt;
&lt;li&gt;Use WinGet to install and run:&lt;/li&gt;
&lt;/ol&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;# Press Win + R and type powershell (or use Terminal/CMD)
# 1. Install llama.cpp via Windows Package Manager
winget install llama.cpp
# 2. Download and run the model directly from Hugging Face in the console
llama-cli -hf ggml-org/SmolVLM-500M-Instruct-GGUF:Q8_0
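# 3. (Optional) A few llama-cli flags worth knowing — taken from llama.cpp&amp;rsquo;s help,
#    but double-check with llama-cli --help on your install:
#    -p "Your prompt"   run a one-shot prompt instead of interactive chat
#    -c 4096            context window size in tokens
#    -ngl 99            offload layers to the GPU (GPU-enabled builds only)
# 4. (Optional) llama-server serves the same model over an OpenAI-compatible
#    HTTP API, by default at http://127.0.0.1:8080
llama-server -hf ggml-org/SmolVLM-500M-Instruct-GGUF:Q8_0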
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;&lt;img alt="llama-cli downloading and running the model in the console" loading="lazy" src="https://lmcorner.net/posts/local-llm/hf-3.png"&gt;&lt;/p&gt;</description></item></channel></rss>