How to Install Deepseek AI Models Locally on Your Desktop | Run Deepseek R1 Model with LM Studio

Step 1: Search for LM Studio

Open your browser and go to Google.com.

Type "lm studio" in the search bar and hit Enter.

Step 2: Click on the Official Link

From the search results, click the official website link:
LM Studio – Discover, download, and run local LLMs

Website: https://lmstudio.ai

Step 3: Download LM Studio for Windows

On the official LM Studio website, you will see different download options.
Click the button: Download LM Studio for Windows (v0.3.9 at the time of writing; the version number may be newer).

Step 4: Select Installation Option

Anyone who uses this computer (all users) – requires administrator rights

Only for me – installs it for your user account only

Click Next to proceed.

Step 5: Choose Install Location

The setup shows the default install path.

Click Browse to change it (optional).

Click Install to begin the installation.

Step 6: Complete the Setup

LM Studio has been installed.

Click Finish to close the setup.

Step 7: Launch LM Studio

Once setup is complete, LM Studio will open.

You'll see the welcome screen: "Experiment & develop with LLMs locally on your computer."

Click “Get your first LLM →” to start using LM Studio.

Step 8: Download Your First Local LLM

You will see a suggested model: DeepSeek R1 Distill (Qwen 7B).

Or click “Skip onboarding” (top-right) if you want to explore manually later.

Step 9: Start Using LM Studio

After opening LM Studio, you’ll see the message “No model loaded” at the top.

Once a model is loaded, this area becomes the chat interface.

Step 10: Select and Load a Model

Click on “Select a model to load” at the top center.

This will open the model list.

Step 11: Download the Model

From the list, select DeepSeek R1 Distill (Qwen 7B).

Click the Download button (approx. 4.68 GB).

The download will begin and show progress in the Downloads panel. Wait for the download to complete before loading the model.

Once downloaded, the model will be ready for use in your local LM Studio!
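Before starting a roughly 4.68 GB download, it is worth confirming you have enough free disk space. A minimal Python sketch (the helper name and the 1 GB headroom figure are illustrative assumptions, not part of LM Studio):

```python
import shutil

def enough_space_for_model(path=".", model_gb=4.68, headroom_gb=1.0):
    """Check whether the drive containing `path` has room for the
    model download plus some headroom for temporary files."""
    free_gb = shutil.disk_usage(path).free / 1024**3  # bytes -> GiB
    return free_gb >= model_gb + headroom_gb

print(enough_space_for_model())  # True if the drive has roughly 5.7 GB free
```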

Step 12: Start Prompting Your LLM

Once the model (e.g., DeepSeek R1 Distill Qwen 7B) is loaded, you’ll see its name at the top bar.

Type your question or command (prompt) in the input box below. For example: “Create a YouTube video script on AI”

Click Send — the model will now respond locally on your system!
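Besides the chat window, LM Studio can also expose the loaded model through a local, OpenAI-compatible server (enabled from its Developer/server section, listening on http://localhost:1234/v1 by default). A minimal sketch of prompting the model from Python, assuming that server is running and a model is loaded; the model identifier string shown here is an assumption, so check the name LM Studio reports for your download:

```python
import json
import urllib.request

BASE_URL = "http://localhost:1234/v1"  # LM Studio's default local server address

def build_chat_payload(model, prompt, temperature=0.7):
    """Build an OpenAI-style chat-completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }

def ask_local_llm(prompt, model="deepseek-r1-distill-qwen-7b"):
    """Send the prompt to the locally running model and return its reply.
    The model identifier is an assumed example."""
    payload = build_chat_payload(model, prompt)
    req = urllib.request.Request(
        BASE_URL + "/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Example (requires the local server to be running):
# print(ask_local_llm("Create a YouTube video script on AI"))
```

Everything stays on your machine: the request never leaves localhost.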

Step 13: Enjoy Powerful Offline AI!

LM Studio is now fully functional with the DeepSeek R1 model.

You can now generate content, ask questions, write code, and more — all offline! The best part? Your data stays secure and local, with no internet required.

Final Note

Congratulations!
You can now enjoy hassle-free LLM access with #InstallerGuru – Installation made easy.