After DeepSeek Coder was announced, promising impressive performance and quality in code generation, I was curious to try it out locally. I found a YouTube video that explained a little of how to install deepseek-coder-v2 with shell-gpt and ollama/litellm. However, in the video the author mentioned that he was having trouble getting it to work correctly locally, and in the end he just resorted to using the platform API.
I wasn't able to do any further testing locally at the time because my 16GB RAM laptop couldn't handle the deepseek-coder-v2 models, which require at least 32GB of RAM. However, I now find myself with 32GB of RAM available, so I started looking again into getting deepseek-coder-v2 to work locally with ollama, litellm, and shell-gpt. And I have finally succeeded! Here's how I did it on Ubuntu 24.04 in WSL2 on Windows 11, with Python 3.12:
Install ollama
curl -fsSL https://ollama.com/install.sh | sh
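The install script normally sets ollama up as a background service, but on WSL2 (depending on whether systemd is enabled) the server may not be running yet. A quick check, with a manual fallback:
ollama --version
# if the server isn't responding, start it in the background
ollama serve &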
Install the deepseek-coder-v2 model
ollama pull deepseek-coder-v2
(this can take a while so go grab a coffee!)
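Before wiring it into shell-gpt, it's worth confirming the model actually runs under ollama on its own: ollama list shows what has been pulled, and ollama run sends a one-off prompt.
ollama list
ollama run deepseek-coder-v2 "write a hello world one-liner in Python"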
Install shell-gpt with litellm
pipx install "shell-gpt[litellm]"
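(The quotes around the extras bracket are needed in zsh, which is why I've included them above.) On Ubuntu 24.04, pipx itself is available via apt if you don't already have it, and a version check confirms sgpt landed on your PATH:
sudo apt install pipx
pipx ensurepath
sgpt --version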
Get a DeepSeek API key
- Register and log in to the DeepSeek open platform
- Go to the API keys menu and click on Create API Key
- Enter a name for the key in the pop-up (e.g. Shell GPT)
- Copy the generated API key and save it somewhere you can easily retrieve it
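If you want the key retrievable later without it sitting in your shell history, one option (the path is just my choice, not anything shell-gpt requires) is a permissions-restricted file:
# paste the key into the file in the editor, then lock it down
nano ~/.deepseek_api_key
chmod 600 ~/.deepseek_api_key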
Customize the Shell GPT configuration
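sgpt normally writes this file the first time it runs; if you're editing it before that, create the directory first so nano can save the file:
mkdir -p ~/.config/shell_gpt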
nano ~/.config/shell_gpt/.sgptrc
Configure with these values:
CHAT_CACHE_PATH=/tmp/chat_cache
CACHE_PATH=/tmp/cache
CHAT_CACHE_LENGTH=100
CACHE_LENGTH=100
REQUEST_TIMEOUT=60
DEFAULT_MODEL=ollama/deepseek-coder-v2
DEFAULT_COLOR=magenta
ROLE_STORAGE_PATH=~/.config/shell_gpt/roles
DEFAULT_EXECUTE_SHELL_CMD=false
DISABLE_STREAMING=false
CODE_THEME=dracula
OPENAI_FUNCTIONS_PATH=~/.config/shell_gpt/functions
OPENAI_USE_FUNCTIONS=true
SHOW_FUNCTIONS_OUTPUT=false
API_BASE_URL=http://127.0.0.1:11434
PRETTIFY_MARKDOWN=true
USE_LITELLM=true
SHELL_INTERACTION=true
OS_NAME=auto
SHELL_NAME=auto
OPENAI_API_KEY=deepseek-api-key-here
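The DEFAULT_MODEL value is what routes requests through litellm's ollama provider. You can also override the model per invocation with sgpt's --model flag, which makes it easy to compare the local model against the hosted API (the hosted model string below is my assumption based on litellm's provider/model naming, not something from the original setup):
# local, through ollama
sgpt --model ollama/deepseek-coder-v2 "say hi"
# hosted DeepSeek platform via litellm (model name assumed)
sgpt --model deepseek/deepseek-coder "say hi"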
Make sure to:
- set DEFAULT_MODEL to ollama/deepseek-coder-v2
- set API_BASE_URL to http://127.0.0.1:11434 (or to the IP:PORT your ollama instance is running on; you can sanity-check the endpoint with the curl call just after this list)
- set USE_LITELLM to true
- paste your DeepSeek API key as the value of the OPENAI_API_KEY setting
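A quick way to confirm that API_BASE_URL points at a live ollama instance is to hit the server's tags endpoint, which lists the models it has pulled:
curl http://127.0.0.1:11434/api/tags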
You should now be able to run sgpt hello and get an answer from your local deepseek-coder-v2!
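From there the other shell-gpt modes work against the local model too: --code restricts the output to code only, and --shell generates a shell command and offers to run it.
sgpt --code "binary search in python"
sgpt --shell "show the five largest files in this directory"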