Deploying the DeepSeek Large Model with Ollama


05-14 12:00

1. Install Ollama

curl -fsSL https://ollama.com/install.sh | sh

2. Download or run the model

ollama pull deepseek-r1:70b

ollama run deepseek-r1:70b
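
Besides the interactive chat that `ollama run` opens, the model can also be prompted non-interactively. A minimal sketch (the prompt text is only an example; the actual calls are commented out because they need the ollama daemon running):

```shell
# Model tag used throughout this guide.
MODEL="deepseek-r1:70b"

# One-shot prompt instead of the interactive REPL
# (needs the ollama daemon, so shown commented out):
# ollama run "$MODEL" "Explain what a systemd unit file is."

# List locally downloaded models to confirm the pull succeeded:
# ollama list

echo "target model: $MODEL"
```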

3. Edit the service configuration

vim /etc/systemd/system/ollama.service


Contents:

[Service]
User=ollama
Group=ollama
ExecStart=/usr/local/bin/ollama serve
Restart=always
RestartSec=3
Environment="PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin"
# Use GPUs 2 and 3 (systemd does not support trailing comments on directive lines,
# so comments must go on their own line)
Environment="CUDA_VISIBLE_DEVICES=2,3"
# Listen on all interfaces so the API is reachable externally
Environment="OLLAMA_HOST=0.0.0.0"
# Custom model storage path (optional)
Environment="OLLAMA_MODELS=/data/ollama/models"
StandardOutput=file:/var/log/ollama-deepseek.log
StandardError=file:/var/log/ollama-deepseek.log

4. Reload and restart after editing

sudo systemctl daemon-reload

sudo systemctl restart ollama

5. Check the service status

sudo systemctl status ollama

6. View the logs

journalctl -u ollama.service -xe

7. Configure firewall rules

sudo firewall-cmd --zone=public --add-port=11434/tcp --permanent

sudo firewall-cmd --reload

sudo firewall-cmd --list-ports | grep 11434  # confirm 11434 appears in the open-port list
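
With `OLLAMA_HOST=0.0.0.0` set and port 11434 open, the API can be reached from another machine over HTTP. A minimal sketch of calling Ollama's `/api/generate` endpoint; the host IP and prompt are placeholders, and the actual request is commented out because it requires a reachable server:

```shell
# Hypothetical remote host; replace with your server's IP.
HOST="192.168.1.100"

# JSON body for Ollama's /api/generate endpoint.
# "stream": false returns a single JSON object instead of a token stream.
BODY='{"model": "deepseek-r1:70b", "prompt": "Why is the sky blue?", "stream": false}'

# Send the request (needs a reachable ollama server, so shown commented out):
# curl -s "http://${HOST}:11434/api/generate" \
#      -H "Content-Type: application/json" \
#      -d "$BODY"

echo "endpoint: http://${HOST}:11434/api/generate"
```

The response JSON carries the model's answer in its `response` field.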
