The easiest way to test DeepSeek in a local sandbox on your Ubuntu 24.04 LTS installation is Ollama. Ollama is an open-source tool that simplifies downloading, running, and managing large language models such as DeepSeek on your own machine.
Here’s a step-by-step guide:
Prerequisites:
- Ubuntu 24.04 LTS installed on your machine.
- A stable internet connection for downloading the model.
- At least 8GB of RAM (16GB or more recommended for larger models); see the quick check after this list.
- Basic familiarity with the terminal.
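Before you start, you can confirm your machine meets these requirements with a few standard commands (nothing here assumes more than a default Ubuntu install):
free -h    # total and available RAM
df -h ~    # free disk space; the 7B model download alone is several GB
nproc      # number of CPU cores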
Installation Steps:
1. Install Python and Git
It’s good practice to update your system before installing new packages:
sudo apt update && sudo apt upgrade -y
Ensure you have Python 3.8 or higher, pip, and Git installed:
sudo apt install python3 python3-pip git -y
python3 --version
pip3 --version
git --version
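On a stock Ubuntu 24.04 install, python3 --version should report Python 3.12.x, which comfortably satisfies the 3.8+ requirement.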
2. Install Ollama
Download and install Ollama using the following command:
curl -fsSL https://ollama.com/install.sh | sh
Start and enable the Ollama service:
sudo systemctl start ollama
sudo systemctl enable ollama
Verify the installation:
ollama --version
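The installer registers Ollama as a systemd service that listens on port 11434 by default, so another quick health check is to query its local REST API (the endpoint below is part of Ollama's documented API):
curl http://localhost:11434/api/version
If the service is running, this returns a small JSON payload containing the installed version.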
3. Download and Run the DeepSeek Model
Use the following command to download and run the deepseek-r1:7b model:
ollama run deepseek-r1:7b
This will download the model and start an interactive terminal session where you can enter prompts.
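Inside the interactive session, type /bye to exit. If you just want a one-shot test instead of an interactive session, ollama run also accepts a prompt as an argument (the prompt text below is only an example):
ollama run deepseek-r1:7b "Summarize what a systemd service is in one paragraph."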
4. Verify the Model Installation
To view installed models:
ollama list
You should see deepseek-r1:7b in the list.
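Since Ollama exposes its API locally, you can also send prompts to the model without the interactive shell. Here is a minimal sketch against the documented /api/generate endpoint (the model tag is the one used in this guide; the prompt is just an example):
curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1:7b",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
With "stream": false the complete response arrives as one JSON object; omit it and the tokens stream back line by line.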
Optional: Install a Web UI for a More User-Friendly Experience
While you can interact with DeepSeek in the terminal, a web interface such as Open WebUI (formerly Ollama WebUI) makes the experience more intuitive.
1. Install Dependencies and Create a Virtual Environment
sudo apt install python3-venv -y
python3 -m venv ~/open-webui-venv
source ~/open-webui-venv/bin/activate
2. Install Open WebUI
pip install open-webui
3. Start the Web UI Server
open-webui serve
Access the Web UI at: http://localhost:8080
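If port 8080 is already taken on your machine, recent Open WebUI releases accept a flag to change it; run open-webui serve --help to confirm the exact options your installed version supports, then, for example:
open-webui serve --port 3000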
4. Interact with DeepSeek
In the Web UI, choose deepseek-r1:7b from the model dropdown and begin interacting in a chat interface.
Optional: Enable Web UI on System Boot
1. Create a systemd Service File
sudo nano /etc/systemd/system/open-webui.service
Paste the following (replace your_username with your actual Ubuntu username):
[Unit]
Description=Open WebUI Service
After=network.target
[Service]
User=your_username
WorkingDirectory=/home/your_username/open-webui-venv
ExecStart=/home/your_username/open-webui-venv/bin/open-webui serve
Restart=always
Environment="PATH=/home/your_username/open-webui-venv/bin"
[Install]
WantedBy=multi-user.target
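The Ollama installer registers its own unit (ollama.service), so you may want the Web UI to start only after Ollama is available. Assuming that unit name, one way to express the ordering is to adjust the [Unit] section's After= line and add a Wants= line:
After=network.target ollama.service
Wants=ollama.service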
2. Reload Systemd and Enable the Service
sudo systemctl daemon-reload
sudo systemctl enable open-webui.service
sudo systemctl start open-webui.service
3. Check Service Status
sudo systemctl status open-webui.service
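If the status shows a failure, the journal is the first place to look:
sudo journalctl -u open-webui.service -n 50 --no-pager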
Important Considerations:
- Hardware Requirements: Running LLMs is resource-heavy. A CUDA-capable NVIDIA GPU significantly improves performance; without one, inference falls back to the CPU and is much slower.
- Model Sizes: DeepSeek-R1 is available in multiple sizes (1.5B, 7B, 14B, 32B, 70B). The larger the model, the more memory and compute it needs; see the example after this list for switching to a smaller variant.
- Offline Use: After the initial model download, you can use Ollama offline.
- Virtual Environments: Recommended for isolating dependencies, especially for the Web UI.
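For example, if the 7B model strains your hardware, you can pull one of the smaller distilled variants (the exact tags available are listed on the Ollama model page for deepseek-r1):
ollama pull deepseek-r1:1.5b
ollama run deepseek-r1:1.5b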
By following these steps, you should be able to install DeepSeek and start testing it in a local sandbox on your Ubuntu 24.04 LTS system using Ollama. Check the official Ollama and DeepSeek documentation for the latest updates and advanced features.