Category | Details |
---|---|
CUDA | 12.4 |
Python | 3.10 |
OS | Ubuntu 22.04 |
ktransformers | 0.2.2rc2 |
Component | Model/Spec |
---|---|
CPU | Intel Xeon E5-2686 v4 |
Motherboard | JINGSHA X99 D8i |
RAM | 256GB |
GPU | NVIDIA RTX 3080M (16GB VRAM) |
Install system dependencies
Update the APT mirror sources:

```bash
sudo vim /etc/apt/sources.list
```

Write in the mirror entries:

```
deb http://mirrors.aliyun.com/ubuntu/ jammy main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ jammy main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ jammy-security main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ jammy-security main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ jammy-updates main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ jammy-updates main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ jammy-proposed main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ jammy-proposed main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ jammy-backports main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ jammy-backports main restricted universe multiverse
```
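After saving the file, refresh the package index so the new mirrors take effect:

```bash
sudo apt update
```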
Switch pip to the Aliyun mirror:

```bash
pip config set global.index-url https://mirrors.aliyun.com/pypi/simple/
```
Install Miniconda and add the Tsinghua conda mirrors:

```bash
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
chmod +x Miniconda3-latest-Linux-x86_64.sh
./Miniconda3-latest-Linux-x86_64.sh
conda init
source ~/.bashrc
source ~/miniconda3/bin/activate
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main/
conda config --set show_channel_urls yes
```
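To confirm the mirror configuration took effect, print the active channel list:

```bash
conda config --show channels
```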
```bash
nvidia-smi  # check whether the NVIDIA driver is already installed
```

If no driver shows up, download one from the official site (https://developer.nvidia.com) or search Bilibili for a driver installation tutorial.
Install the CUDA 12.4 toolkit:

```bash
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-ubuntu2204.pin
sudo mv cuda-ubuntu2204.pin /etc/apt/preferences.d/cuda-repository-pin-600
wget https://developer.download.nvidia.com/compute/cuda/12.4.0/local_installers/cuda-repo-ubuntu2204-12-4-local_12.4.0-550.54.14-1_amd64.deb
sudo dpkg -i cuda-repo-ubuntu2204-12-4-local_12.4.0-550.54.14-1_amd64.deb
sudo cp /var/cuda-repo-ubuntu2204-12-4-local/cuda-*-keyring.gpg /usr/share/keyrings/
sudo apt-get update
sudo apt-get -y install cuda-toolkit-12-4
```
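If nvidia-smi found no driver earlier, the local repo installed above can also provide one; the cuda-drivers metapackage is NVIDIA's usual route (a sketch, assuming the repo added above; reboot afterwards):

```bash
# Optional: skip if nvidia-smi already reports a working driver
sudo apt-get -y install cuda-drivers
sudo reboot
```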
Edit the `~/.bashrc` file and append:

```bash
export PATH=$PATH:/usr/local/cuda/bin
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib64:/usr/lib/x86_64-linux-gnu
export CUDA_PATH=/usr/local/cuda
```

Apply the configuration and verify the toolkit:

```bash
source ~/.bashrc
nvcc -V  # should report release 12.4
```
Create a Python 3.10 environment
```bash
sudo apt update && sudo apt upgrade -y  # update the system
sudo apt-get install -y build-essential cmake ninja-build libnuma-dev git
```
```bash
conda create --name ktransformers python=3.10
conda activate ktransformers
```
Install key dependencies
```bash
conda install -c conda-forge libstdcxx-ng
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124
pip3 install packaging ninja cpufeature numpy
```
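Before building ktransformers, it is worth checking that PyTorch actually sees the GPU:

```bash
# Should print the torch version followed by True
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```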
If you have a Git account, set your username and email first; if not, you can also download the matching release directly from the project page.
```bash
git clone https://github.com/kvcache-ai/ktransformers.git
cd ktransformers
git submodule init && git submodule update
```
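To pin the exact release used in this guide (0.2.2rc2 from the table above), check out the matching tag; the tag name below is an assumption, so list the available tags first if it fails:

```bash
git tag                  # list available release tags
git checkout v0.2.2rc2   # assumed tag name for release 0.2.2rc2
git submodule update --init --recursive
```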
```bash
# Build and install (if your machine supports NUMA)
export USE_NUMA=1
bash install.sh
```
```bash
# Pick the wheel matching your CUDA and PyTorch versions (example: CUDA 12.4, Python 3.10)
pip install https://github.com/mjun0812/flash-attention-prebuild-wheels/releases/download/v0.0.5/flash_attn-2.6.3+cu124torch2.6-cp310-cp310-linux_x86_64.whl
```
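A quick import check confirms the prebuilt wheel matches your environment:

```bash
python -c "import flash_attn; print(flash_attn.__version__)"
```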
Download the DeepSeek-R1 config files

https://huggingface.co/deepseek-ai/DeepSeek-R1/tree/main

Download every file except the `.safetensors` weights and put them in the `DeepSeek-R1-config` directory.
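If you prefer the command line to clicking through the web page, here is a sketch using huggingface-cli from the huggingface_hub package (the exclude pattern and target directory are choices to adapt):

```bash
pip install huggingface_hub
# Fetch config/tokenizer files only, skipping the full-precision weights
huggingface-cli download deepseek-ai/DeepSeek-R1 --exclude "*.safetensors" --local-dir ./DeepSeek-R1-config
```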
Download the GGUF model files

Given network restrictions in mainland China, you can download from a mirror site instead: browse to the DeepSeek-R1-GGUF repo below and open the DeepSeek-R1-UD-Q2_K_XL directory. The files are large, so a download accelerator such as Xunlei (Thunder) VIP helps.

https://hf-mirror.com/unsloth/DeepSeek-R1-GGUF/tree/main/DeepSeek-R1-UD-Q2_K_XL
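The mirror also works from the command line by pointing huggingface-cli at hf-mirror.com via the HF_ENDPOINT variable (the mechanism the mirror documents); a sketch:

```bash
export HF_ENDPOINT=https://hf-mirror.com
# Download only the Q2_K_XL quantization directory
huggingface-cli download unsloth/DeepSeek-R1-GGUF --include "DeepSeek-R1-UD-Q2_K_XL/*" --local-dir ./DeepSeek-R1-GGUF
```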
Run a local chat test:

```bash
python -m ktransformers.local_chat \
  --model_path ./DeepSeek-R1-config \
  --gguf_path ./DeepSeek-R1-GGUF \
  --cpu_infer 16 \
  --max_new_tokens 1000 \
  --force_think \
  --use_flash_attn
```
Start the API server:

```bash
# Multi-GPU configurations are supported, and --optimize_config_path allows
# finer-grained VRAM offload settings
PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True python3.10 ktransformers/server/main.py \
  --model_path /home/sean/DeepSeek-R1-GGUF/DeepSeek-R1-config/ \
  --gguf_path /home/sean/DeepSeek-R1-GGUF/DeepSeek-R1-UD-Q2_K_XL/ \
  --model_name unsloth/DeepSeek-R1-UD-Q2_K_XL \
  --cpu_infer 16 \
  --max_new_tokens 2000 \
  --cache_lens 32768 \
  --total_context 32768 \
  --cache_q4 true \
  --temperature 0.6 \
  --top_p 0.95 \
  --optimize_config_path ktransformers/optimize/optimize_rules/DeepSeek-V3-Chat.yaml \
  --force_think \
  --use_cuda_graph \
  --host 0.0.0.0 \
  --port 6688
```
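Once the server is up, you can smoke-test it over HTTP. ktransformers exposes an OpenAI-style chat completions endpoint; the path and payload below are a hedged sketch, so adjust them to your version:

```bash
curl http://localhost:6688/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "unsloth/DeepSeek-R1-UD-Q2_K_XL",
        "messages": [{"role": "user", "content": "Hello, who are you?"}]
      }'
```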
Official tutorial: https://github.com/kvcache-ai/ktransformers/blob/main/doc/en/install.md