Quick Start with LLaMA-Factory: FAQ on Private Deployment and Efficient Fine-Tuning of Llama3
Founder
2024-11-06 07:06:25

Preface

A previous post covered privately deploying the Llama3 model on the SCNet supercomputing internet platform using a heterogeneous AI accelerator card (64 GB VRAM, PCIe), including LoRA fine-tuning, inference, and weight merging for the Llama3-8B-Instruct model. For details, see the earlier post: Quick Start with LLaMA-Factory: Private Deployment and Efficient Fine-Tuning of Llama3 (Sugon supercomputing internet platform, heterogeneous DCU accelerator).

Since I ran into quite a few problems while debugging, this post records them as an FAQ. It offers approaches to solutions rather than a single definitive fix.

1. References

Sugon Supercomputing Internet Platform SCNet: the domestic heterogeneous DCU accelerator

Getting Started with Local Deployment and Efficient Fine-Tuning of Llama3

2. Important Notes

When you hit package conflicts, pip install --no-deps -e . resolves the vast majority of them.
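For example, a minimal sketch of how this is typically used from the LLaMA-Factory source directory (the path below is illustrative):

# From the root of the LLaMA-Factory checkout: install the project
# itself in editable mode, but skip dependency resolution so pip does
# not touch the platform-pinned DCU builds of torch/vllm/lmdeploy
# already present in the environment.
cd /path/to/LLaMA-Factory
pip install --no-deps -e .

The trade-off is that pip no longer verifies the project's requirements, so you must make sure the key packages (transformers, peft, and so on) already satisfy the versions LLaMA-Factory checks at startup.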

3. FAQ

Q: ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.

ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
lmdeploy 0.1.0-git782048c.abi0.dtk2404.torch2.1. requires transformers==4.33.2, but you have transformers 4.43.3 which is incompatible.
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
transformers 4.33.2 requires tokenizers!=0.11.3,<0.14,>=0.11.1, but you have tokenizers 0.15.0 which is incompatible.
vllm 0.3.3+git3380931.abi0.dtk2404.torch2.1 requires transformers>=4.38.0, but you have transformers 4.33.2 which is incompatible.

Cause: the first error demands transformers==4.33.2; after installing that version, the second error appears, demanding transformers>=4.38.0, which contradicts the first. In other words, lmdeploy and vllm pin mutually incompatible transformers ranges.

Fix: see the FAQs below for how this was eventually resolved. A quick way to see exactly which installed packages pin transformers is sketched next.
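To understand a circular conflict like this, it helps to list every installed package that depends on transformers, side by side with the range each one pins. A minimal sketch using the third-party pipdeptree tool (not part of the original workflow, added here as an assumption about tooling):

# install the inspection tool
pip install pipdeptree
# show every installed package that depends on transformers,
# together with the version range each one requires
pipdeptree --reverse --packages transformers

Once the conflicting pins are visible together, you can decide which package (here, lmdeploy) to upgrade or replace.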

Q: pip._vendor.packaging.version.InvalidVersion: Invalid version: '0.1.0-git782048c.abi0.dtk2404.torch2.1.'

ERROR: Exception:
Traceback (most recent call last):
  File "/opt/conda/envs/llama3/lib/python3.10/site-packages/pip/_internal/cli/base_command.py", line 105, in _run_wrapper
    status = _inner_run()
  File "/opt/conda/envs/llama3/lib/python3.10/site-packages/pip/_internal/cli/base_command.py", line 96, in _inner_run
    return self.run(options, args)
  File "/opt/conda/envs/llama3/lib/python3.10/site-packages/pip/_internal/cli/req_command.py", line 67, in wrapper
    return func(self, options, args)
  File "/opt/conda/envs/llama3/lib/python3.10/site-packages/pip/_internal/commands/install.py", line 483, in run
    installed_versions[distribution.canonical_name] = distribution.version
  File "/opt/conda/envs/llama3/lib/python3.10/site-packages/pip/_internal/metadata/pkg_resources.py", line 192, in version
    return parse_version(self._dist.version)
  File "/opt/conda/envs/llama3/lib/python3.10/site-packages/pip/_vendor/packaging/version.py", line 56, in parse
    return Version(version)
  File "/opt/conda/envs/llama3/lib/python3.10/site-packages/pip/_vendor/packaging/version.py", line 202, in __init__
    raise InvalidVersion(f"Invalid version: '{version}'")
pip._vendor.packaging.version.InvalidVersion: Invalid version: '0.1.0-git782048c.abi0.dtk2404.torch2.1.'

(llama_factory_torch) root@notebook-1813389960667746306-scnlbe5oi5-50216:/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory# pip install tokenizers==0.13
Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
Collecting tokenizers==0.13
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/cc/67/4c05eb8cbe8d20e52f5f47a9c591738d8cbc2a29e918813b7fcc431ec3db/tokenizers-0.13.0-cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (7.0 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 7.0/7.0 MB 37.4 MB/s eta 0:00:00
WARNING: Error parsing dependencies of lmdeploy: Invalid version: '0.1.0-git782048c.abi0.dtk2404.torch2.1.'
WARNING: Error parsing dependencies of mmcv: Invalid version: '2.0.1-gitc0ccf15.abi0.dtk2404.torch2.1.'
Installing collected packages: tokenizers
  Attempting uninstall: tokenizers
    Found existing installation: tokenizers 0.15.0
    Uninstalling tokenizers-0.15.0:
      Successfully uninstalled tokenizers-0.15.0
ERROR: Exception:
Traceback (most recent call last):
  File "/opt/conda/envs/llama3/lib/python3.10/site-packages/pip/_internal/cli/base_command.py", line 105, in _run_wrapper
    status = _inner_run()
  File "/opt/conda/envs/llama3/lib/python3.10/site-packages/pip/_internal/cli/base_command.py", line 96, in _inner_run
    return self.run(options, args)
  File "/opt/conda/envs/llama3/lib/python3.10/site-packages/pip/_internal/cli/req_command.py", line 67, in wrapper
    return func(self, options, args)
  File "/opt/conda/envs/llama3/lib/python3.10/site-packages/pip/_internal/commands/install.py", line 483, in run
    installed_versions[distribution.canonical_name] = distribution.version
  File "/opt/conda/envs/llama3/lib/python3.10/site-packages/pip/_internal/metadata/pkg_resources.py", line 192, in version
    return parse_version(self._dist.version)
  File "/opt/conda/envs/llama3/lib/python3.10/site-packages/pip/_vendor/packaging/version.py", line 56, in parse
    return Version(version)
  File "/opt/conda/envs/llama3/lib/python3.10/site-packages/pip/_vendor/packaging/version.py", line 202, in __init__
    raise InvalidVersion(f"Invalid version: '{version}'")
pip._vendor.packaging.version.InvalidVersion: Invalid version: '0.1.0-git782048c.abi0.dtk2404.torch2.1.'

Cause: the lmdeploy version string is the problem. The locally built wheel reports its version as '0.1.0-git782048c.abi0.dtk2404.torch2.1.' (note the trailing dot), which is not a valid PEP 440 version, so pip's parser raises InvalidVersion while processing any subsequent install.

Fix: see the FAQs below; the durable fix is replacing this lmdeploy build with a properly versioned wheel. If you need pip working immediately, a stopgap is sketched below.
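As a hedged stopgap (an assumption, not part of the original steps), you can edit the Version field in the package's dist-info metadata so that it parses as a PEP 440 local version; the corrected version string and the dist-info directory name below are illustrative and must be adjusted to what you actually find:

# locate the metadata of the offending distribution
ls /opt/conda/envs/llama3/lib/python3.10/site-packages/ | grep -i lmdeploy
# rewrite the invalid version (note the trailing dot) into a PEP 440
# local version; substitute the dist-info directory name from the ls output
sed -i 's/^Version: .*/Version: 0.1.0+git782048c.abi0.dtk2404.torch2.1/' \
    "/opt/conda/envs/llama3/lib/python3.10/site-packages/<lmdeploy dist-info dir>/METADATA"

This only quiets pip's parser; the real fix is installing a correctly versioned lmdeploy wheel, as described in the next FAQ.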

Q: Version-compatibility problems between packages

(llama_factory) root@notebook-1813389960667746306-scnlbe5oi5-17811:/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory# pip install -r requirements.txt
...
Installing collected packages: pydub, websockets, urllib3, tomlkit, shtab, semantic-version, scipy, ruff, importlib-resources, ffmpy, docstring-parser, aiofiles, tyro, sse-starlette, tokenizers, gradio-client, transformers, trl, peft, gradio
  Attempting uninstall: websockets
    Found existing installation: websockets 12.0
    Uninstalling websockets-12.0:
      Successfully uninstalled websockets-12.0
  Attempting uninstall: urllib3
    Found existing installation: urllib3 1.26.13
    Uninstalling urllib3-1.26.13:
      Successfully uninstalled urllib3-1.26.13
  Attempting uninstall: tokenizers
    Found existing installation: tokenizers 0.15.0
    Uninstalling tokenizers-0.15.0:
      Successfully uninstalled tokenizers-0.15.0
  Attempting uninstall: transformers
    Found existing installation: transformers 4.38.0
    Uninstalling transformers-4.38.0:
      Successfully uninstalled transformers-4.38.0
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
lmdeploy 0.1.0-git782048c.abi0.dtk2404.torch2.1. requires transformers==4.33.2, but you have transformers 4.43.3 which is incompatible.

Cause: lmdeploy 0.1.0-git782048c.abi0.dtk2404.torch2.1. conflicts with transformers, requiring transformers==4.33.2. Since the LLaMA-Factory project requires transformers>=4.41.2, the sensible move is to upgrade lmdeploy to match the newer transformers.

Fix: search the 光合社区 (the Hygon DCU developer community) for a newer lmdeploy build and install it. Taking lmdeploy-0.2.6+das1.1.git6ba90df.abi1.dtk2404.torch2.1.0-cp310-cp310-manylinux_2_31_x86_64.whl as an example, try installing lmdeploy-0.2.6:

root@notebook-1813389960667746306-scnlbe5oi5-17811:~# pip list | grep lmdeploy
lmdeploy                       0.1.0-git782048c.abi0.dtk2404.torch2.1.
(llama_factory) root@notebook-1813389960667746306-scnlbe5oi5-17811:/public/home/scnlbe5oi5/Downloads/cache# pip install lmdeploy-0.2.6+das1.1.git6ba90df.abi1.dtk2404.torch2.1.0-cp310-cp310-manylinux_2_31_x86_64.whl
...
Installing collected packages: shortuuid, tokenizers, transformers, peft, lmdeploy
  Attempting uninstall: tokenizers
    Found existing installation: tokenizers 0.19.1
    Uninstalling tokenizers-0.19.1:
      Successfully uninstalled tokenizers-0.19.1
  Attempting uninstall: transformers
    Found existing installation: transformers 4.43.3
    Uninstalling transformers-4.43.3:
      Successfully uninstalled transformers-4.43.3
  Attempting uninstall: peft
    Found existing installation: peft 0.12.0
    Uninstalling peft-0.12.0:
      Successfully uninstalled peft-0.12.0
  Attempting uninstall: lmdeploy
    Found existing installation: lmdeploy 0.1.0-git782048c.abi0.dtk2404.torch2.1.
    Uninstalling lmdeploy-0.1.0-git782048c.abi0.dtk2404.torch2.1.:
      Successfully uninstalled lmdeploy-0.1.0-git782048c.abi0.dtk2404.torch2.1.
Successfully installed lmdeploy-0.2.6+das1.1.git6ba90df.abi1.dtk2404.torch2.1.0 peft-0.9.0 shortuuid-1.0.13 tokenizers-0.15.2 transformers-4.38.1

lmdeploy-0.2.6 installed successfully with no errors, but transformers was downgraded to transformers-4.38.1.

Restarting the service surfaced a new problem:

(llama_factory) root@notebook-1813389960667746306-scnlbe5oi5-17811:/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory# python src/webui.py \
>     --model_name_or_path "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/models/Meta-Llama-3-8B-Instruct" \
>     --template llama3 \
>     --infer_backend vllm \
>     --vllm_enforce_eager
Traceback (most recent call last):
  File "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/src/webui.py", line 17, in <module>
    from llamafactory.webui.interface import create_ui
  File "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/src/llamafactory/__init__.py", line 38, in <module>
    from .cli import VERSION
  File "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/src/llamafactory/cli.py", line 21, in <module>
    from . import launcher
  File "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/src/llamafactory/launcher.py", line 15, in <module>
    from llamafactory.train.tuner import run_exp
  File "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/src/llamafactory/train/tuner.py", line 25, in <module>
    from ..hparams import get_infer_args, get_train_args
  File "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/src/llamafactory/hparams/__init__.py", line 20, in <module>
    from .parser import get_eval_args, get_infer_args, get_train_args
  File "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/src/llamafactory/hparams/parser.py", line 45, in <module>
    check_dependencies()
  File "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/src/llamafactory/extras/misc.py", line 82, in check_dependencies
    require_version("transformers>=4.41.2", "To fix: pip install transformers>=4.41.2")
  File "/opt/conda/envs/llama_factory/lib/python3.10/site-packages/transformers/utils/versions.py", line 111, in require_version
    _compare_versions(op, got_ver, want_ver, requirement, pkg, hint)
  File "/opt/conda/envs/llama_factory/lib/python3.10/site-packages/transformers/utils/versions.py", line 44, in _compare_versions
    raise ImportError(
ImportError: transformers>=4.41.2 is required for a normal functioning of this module, but found transformers==4.38.1. To fix: pip install transformers>=4.41.2

Fix: upgrade transformers, which surfaced another problem:

(llama_factory) root@notebook-1813389960667746306-scnlbe5oi5-17811:/public/home/scnlbe5oi5/Downloads/cache# pip install -U transformers
...
Installing collected packages: tokenizers, transformers
  Attempting uninstall: tokenizers
    Found existing installation: tokenizers 0.15.2
    Uninstalling tokenizers-0.15.2:
      Successfully uninstalled tokenizers-0.15.2
  Attempting uninstall: transformers
    Found existing installation: transformers 4.38.1
    Uninstalling transformers-4.38.1:
      Successfully uninstalled transformers-4.38.1
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
lmdeploy 0.2.6+das1.1.git6ba90df.abi1.dtk2404.torch2.1.0 requires transformers<=4.38.1,>=4.33.0, but you have transformers 4.43.3 which is incompatible.
Successfully installed tokenizers-0.19.1 transformers-4.43.3

Cause: lmdeploy 0.2.6 conflicts with transformers, requiring transformers<=4.38.1,>=4.33.0. Since the LLaMA-Factory project requires transformers>=4.41.2, keep upgrading lmdeploy to match the transformers version.

Fix: upgrade lmdeploy:

(llama_factory) root@notebook-1813389960667746306-scnlbe5oi5-17811:/public/home/scnlbe5oi5/Downloads/cache# pip install -U lmdeploy
...
Installing collected packages: nvidia-nccl-cu12, nvidia-curand-cu12, nvidia-cuda-runtime-cu12, nvidia-cublas-cu12, lmdeploy
  Attempting uninstall: lmdeploy
    Found existing installation: lmdeploy 0.2.6+das1.1.git6ba90df.abi1.dtk2404.torch2.1.0
    Uninstalling lmdeploy-0.2.6+das1.1.git6ba90df.abi1.dtk2404.torch2.1.0:
      Successfully uninstalled lmdeploy-0.2.6+das1.1.git6ba90df.abi1.dtk2404.torch2.1.0
Successfully installed lmdeploy-0.5.2.post1 nvidia-cublas-cu12-12.5.3.2 nvidia-cuda-runtime-cu12-12.5.82 nvidia-curand-cu12-10.3.6.82 nvidia-nccl-cu12-2.22.3

lmdeploy-0.5.2.post1 installed successfully with no errors.

Restarting the service surfaced a new problem:

(llama_factory) root@notebook-1813389960667746306-scnlbe5oi5-17811:/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory# python src/webui.py --model_name_or_path "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/models/Meta-Llama-3-8B-Instruct" --template llama3 --infer_backend vllm --vllm_enforce_eager
Traceback (most recent call last):
  File "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/src/webui.py", line 17, in <module>
    from llamafactory.webui.interface import create_ui
  File "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/src/llamafactory/__init__.py", line 38, in <module>
    from .cli import VERSION
  File "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/src/llamafactory/cli.py", line 21, in <module>
    from . import launcher
  File "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/src/llamafactory/launcher.py", line 15, in <module>
    from llamafactory.train.tuner import run_exp
  File "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/src/llamafactory/train/tuner.py", line 25, in <module>
    from ..hparams import get_infer_args, get_train_args
  File "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/src/llamafactory/hparams/__init__.py", line 20, in <module>
    from .parser import get_eval_args, get_infer_args, get_train_args
  File "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/src/llamafactory/hparams/parser.py", line 45, in <module>
    check_dependencies()
  File "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/src/llamafactory/extras/misc.py", line 85, in check_dependencies
    require_version("peft>=0.11.1", "To fix: pip install peft>=0.11.1")
  File "/opt/conda/envs/llama_factory/lib/python3.10/site-packages/transformers/utils/versions.py", line 111, in require_version
    _compare_versions(op, got_ver, want_ver, requirement, pkg, hint)
  File "/opt/conda/envs/llama_factory/lib/python3.10/site-packages/transformers/utils/versions.py", line 44, in _compare_versions
    raise ImportError(
ImportError: peft>=0.11.1 is required for a normal functioning of this module, but found peft==0.9.0. To fix: pip install peft>=0.11.1

Fix: install peft==0.11.1:

(llama_factory) root@notebook-1813389960667746306-scnlbe5oi5-17811:/public/home/scnlbe5oi5/Downloads/cache# pip install peft==0.11.1
...
Installing collected packages: peft
  Attempting uninstall: peft
    Found existing installation: peft 0.12.0
    Uninstalling peft-0.12.0:
      Successfully uninstalled peft-0.12.0
Successfully installed peft-0.11.1

peft-0.11.1 installed successfully with no errors.

Restarting the service surfaced a new problem:

(llama_factory) root@notebook-1813389960667746306-scnlbe5oi5-17811:/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory# python src/webui.py --model_name_or_path "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/models/Meta-Llama-3-8B-Instruct" --template llama3 --infer_backend vllm --vllm_enforce_eager
[2024-07-31 15:23:04,562] [INFO] [real_accelerator.py:158:get_accelerator] Setting ds_accelerator to cuda (auto detect)
Traceback (most recent call last):
  File "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/src/webui.py", line 17, in <module>
    from llamafactory.webui.interface import create_ui
  File "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/src/llamafactory/__init__.py", line 38, in <module>
    from .cli import VERSION
  File "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/src/llamafactory/cli.py", line 22, in <module>
    from .api.app import run_api
  File "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/src/llamafactory/api/app.py", line 21, in <module>
    from ..chat import ChatModel
  File "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/src/llamafactory/chat/__init__.py", line 16, in <module>
    from .chat_model import ChatModel
  File "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/src/llamafactory/chat/chat_model.py", line 26, in <module>
    from .vllm_engine import VllmEngine
  File "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/src/llamafactory/chat/vllm_engine.py", line 37, in <module>
    from vllm.sequence import MultiModalData
ImportError: cannot import name 'MultiModalData' from 'vllm.sequence' (/opt/conda/envs/llama_factory/lib/python3.10/site-packages/vllm/sequence.py)

For the fix, see the FAQ below.

Q: ImportError: cannot import name 'MultiModalData' from 'vllm.sequence'

With a fresh checkout of the code, running api.py (or webui.py) fails with the same error: ImportError: cannot import name 'MultiModalData' from 'vllm.sequence' (/usr/local/lib/python3.10/dist-packages/vllm/sequence.py). See LLaMA-Factory issue #3645.

ImportError: cannot import name 'MultiModalData' from 'vllm.sequence' 

Cause: the installed vllm is either too new or too old; at this point the LLaMA-Factory project required at least vllm==0.4.3.
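A quick way to confirm which case you are in is to compare the installed version against the project's pin (a minimal sketch; exactly which file carries the pin depends on your LLaMA-Factory revision):

# print the installed vllm version
python -c "import vllm; print(vllm.__version__)"
# search the checkout for the vllm requirement it enforces at startup
grep -rn "vllm" setup.py requirements.txt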

Fix: taking the too-new case as the example, downgrade vllm from vllm==0.5.0 to vllm==0.4.3, which surfaced a new problem:

(llama_factory) root@notebook-1813389960667746306-scnlbe5oi5-17811:/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory# pip install vllm==0.4.3
...
Installing collected packages: nvidia-ml-py, triton, nvidia-nvtx-cu12, nvidia-nvjitlink-cu12, nvidia-nccl-cu12, nvidia-curand-cu12, nvidia-cufft-cu12, nvidia-cuda-runtime-cu12, nvidia-cuda-nvrtc-cu12, nvidia-cuda-cupti-cu12, nvidia-cublas-cu12, llvmlite, lark, joblib, interegular, distro, diskcache, cmake, cloudpickle, nvidia-cusparse-cu12, nvidia-cudnn-cu12, numba, prometheus-fastapi-instrumentator, openai, nvidia-cusolver-cu12, lm-format-enforcer, torch, xformers, vllm-flash-attn, outlines, vllm
  Attempting uninstall: triton
    Found existing installation: triton 2.1.0+git3841f975.abi0.dtk2404
    Uninstalling triton-2.1.0+git3841f975.abi0.dtk2404:
      Successfully uninstalled triton-2.1.0+git3841f975.abi0.dtk2404
  Attempting uninstall: nvidia-nccl-cu12
    Found existing installation: nvidia-nccl-cu12 2.22.3
    Uninstalling nvidia-nccl-cu12-2.22.3:
      Successfully uninstalled nvidia-nccl-cu12-2.22.3
  Attempting uninstall: nvidia-curand-cu12
    Found existing installation: nvidia-curand-cu12 10.3.6.82
    Uninstalling nvidia-curand-cu12-10.3.6.82:
      Successfully uninstalled nvidia-curand-cu12-10.3.6.82
  Attempting uninstall: nvidia-cuda-runtime-cu12
    Found existing installation: nvidia-cuda-runtime-cu12 12.5.82
    Uninstalling nvidia-cuda-runtime-cu12-12.5.82:
      Successfully uninstalled nvidia-cuda-runtime-cu12-12.5.82
  Attempting uninstall: nvidia-cublas-cu12
    Found existing installation: nvidia-cublas-cu12 12.5.3.2
    Uninstalling nvidia-cublas-cu12-12.5.3.2:
      Successfully uninstalled nvidia-cublas-cu12-12.5.3.2
  Attempting uninstall: torch
    Found existing installation: torch 2.1.0+git00661e0.abi0.dtk2404
    Uninstalling torch-2.1.0+git00661e0.abi0.dtk2404:
      Successfully uninstalled torch-2.1.0+git00661e0.abi0.dtk2404
  Attempting uninstall: xformers
    Found existing installation: xformers 0.0.25+gitd11e899.abi0.dtk2404.torch2.1
    Uninstalling xformers-0.0.25+gitd11e899.abi0.dtk2404.torch2.1:
      Successfully uninstalled xformers-0.0.25+gitd11e899.abi0.dtk2404.torch2.1
  Attempting uninstall: vllm
    Found existing installation: vllm 0.3.3+git3380931.abi0.dtk2404.torch2.1
    Uninstalling vllm-0.3.3+git3380931.abi0.dtk2404.torch2.1:
      Successfully uninstalled vllm-0.3.3+git3380931.abi0.dtk2404.torch2.1
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
lmdeploy 0.5.2.post1 requires torch<=2.2.2,>=2.0.0, but you have torch 2.3.0 which is incompatible.
lmdeploy 0.5.2.post1 requires triton<=2.2.0,>=2.1.0; sys_platform == "linux", but you have triton 2.3.0 which is incompatible.
Successfully installed cloudpickle-3.0.0 cmake-3.30.1 diskcache-5.6.3 distro-1.9.0 interegular-0.3.3 joblib-1.4.2 lark-1.1.9 llvmlite-0.43.0 lm-format-enforcer-0.10.1 numba-0.60.0 nvidia-cublas-cu12-12.1.3.1 nvidia-cuda-cupti-cu12-12.1.105 nvidia-cuda-nvrtc-cu12-12.1.105 nvidia-cuda-runtime-cu12-12.1.105 nvidia-cudnn-cu12-8.9.2.26 nvidia-cufft-cu12-11.0.2.54 nvidia-curand-cu12-10.3.2.106 nvidia-cusolver-cu12-11.4.5.107 nvidia-cusparse-cu12-12.1.0.106 nvidia-ml-py-12.555.43 nvidia-nccl-cu12-2.20.5 nvidia-nvjitlink-cu12-12.5.82 nvidia-nvtx-cu12-12.1.105 openai-1.37.1 outlines-0.0.34 prometheus-fastapi-instrumentator-7.0.0 torch-2.3.0 triton-2.3.0 vllm-0.4.3 vllm-flash-attn-2.5.8.post2 xformers-0.0.26.post1

Fix: downgrade torch from torch 2.3.0 to torch 2.1.0, which surfaced a new problem:

(llama_factory) root@notebook-1813389960667746306-scnlbe5oi5-17811:/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory# pip install torch==2.1.0
...
Installing collected packages: triton, nvidia-nccl-cu12, torch
  Attempting uninstall: triton
    Found existing installation: triton 2.3.0
    Uninstalling triton-2.3.0:
      Successfully uninstalled triton-2.3.0
  Attempting uninstall: nvidia-nccl-cu12
    Found existing installation: nvidia-nccl-cu12 2.20.5
    Uninstalling nvidia-nccl-cu12-2.20.5:
      Successfully uninstalled nvidia-nccl-cu12-2.20.5
  Attempting uninstall: torch
    Found existing installation: torch 2.3.0
    Uninstalling torch-2.3.0:
      Successfully uninstalled torch-2.3.0
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
vllm 0.4.3 requires torch==2.3.0, but you have torch 2.1.0 which is incompatible.
vllm-flash-attn 2.5.8.post2 requires torch==2.3.0, but you have torch 2.1.0 which is incompatible.
xformers 0.0.26.post1 requires torch==2.3.0, but you have torch 2.1.0 which is incompatible.
Successfully installed nvidia-nccl-cu12-2.18.1 torch-2.1.0 triton-2.1.0

Fix: downgrade vllm from vllm 0.4.3 to vllm 0.4.2, which surfaced a new problem:

(llama_factory) root@notebook-1813389960667746306-scnlbe5oi5-17811:/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory# pip install vllm==0.4.2
...
Installing collected packages: vllm-nccl-cu12, triton, nvidia-nccl-cu12, tiktoken, torch, lm-format-enforcer, vllm
  Attempting uninstall: triton
    Found existing installation: triton 2.1.0
    Uninstalling triton-2.1.0:
      Successfully uninstalled triton-2.1.0
  Attempting uninstall: nvidia-nccl-cu12
    Found existing installation: nvidia-nccl-cu12 2.18.1
    Uninstalling nvidia-nccl-cu12-2.18.1:
      Successfully uninstalled nvidia-nccl-cu12-2.18.1
  Attempting uninstall: tiktoken
    Found existing installation: tiktoken 0.7.0
    Uninstalling tiktoken-0.7.0:
      Successfully uninstalled tiktoken-0.7.0
  Attempting uninstall: torch
    Found existing installation: torch 2.1.0
    Uninstalling torch-2.1.0:
      Successfully uninstalled torch-2.1.0
  Attempting uninstall: lm-format-enforcer
    Found existing installation: lm-format-enforcer 0.10.1
    Uninstalling lm-format-enforcer-0.10.1:
      Successfully uninstalled lm-format-enforcer-0.10.1
  Attempting uninstall: vllm
    Found existing installation: vllm 0.4.3
    Uninstalling vllm-0.4.3:
      Successfully uninstalled vllm-0.4.3
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
lmdeploy 0.5.2.post1 requires torch<=2.2.2,>=2.0.0, but you have torch 2.3.0 which is incompatible.
lmdeploy 0.5.2.post1 requires triton<=2.2.0,>=2.1.0; sys_platform == "linux", but you have triton 2.3.0 which is incompatible.
Successfully installed lm-format-enforcer-0.9.8 nvidia-nccl-cu12-2.20.5 tiktoken-0.6.0 torch-2.3.0 triton-2.3.0 vllm-0.4.2 vllm-nccl-cu12-2.18.1.0.4.0

Fix: downgrade vllm from vllm 0.4.2 to vllm 0.4.1, which surfaced a new problem:

(llama_factory) root@notebook-1813389960667746306-scnlbe5oi5-17811:/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory# pip install vllm==0.4.1
...
Installing collected packages: triton, nvidia-nccl-cu12, torch, xformers, vllm
  Attempting uninstall: triton
    Found existing installation: triton 2.3.0
    Uninstalling triton-2.3.0:
      Successfully uninstalled triton-2.3.0
  Attempting uninstall: nvidia-nccl-cu12
    Found existing installation: nvidia-nccl-cu12 2.20.5
    Uninstalling nvidia-nccl-cu12-2.20.5:
      Successfully uninstalled nvidia-nccl-cu12-2.20.5
  Attempting uninstall: torch
    Found existing installation: torch 2.3.0
    Uninstalling torch-2.3.0:
      Successfully uninstalled torch-2.3.0
  Attempting uninstall: xformers
    Found existing installation: xformers 0.0.26.post1
    Uninstalling xformers-0.0.26.post1:
      Successfully uninstalled xformers-0.0.26.post1
  Attempting uninstall: vllm
    Found existing installation: vllm 0.4.2
    Uninstalling vllm-0.4.2:
      Successfully uninstalled vllm-0.4.2
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
vllm-flash-attn 2.5.8.post2 requires torch==2.3.0, but you have torch 2.2.1 which is incompatible.
Successfully installed nvidia-nccl-cu12-2.19.3 torch-2.2.1 triton-2.2.0 vllm-0.4.1 xformers-0.0.25

Fix: downgrade vllm-flash-attn from vllm-flash-attn 2.5.8.post2 to vllm-flash-attn 2.5.6, which surfaced a new problem:

(llama_factory) root@notebook-1813389960667746306-scnlbe5oi5-17811:/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory# pip install vllm-flash-attn==2.5.6
...
Installing collected packages: triton, nvidia-nccl-cu12, torch, vllm-flash-attn
  Attempting uninstall: triton
    Found existing installation: triton 2.2.0
    Uninstalling triton-2.2.0:
      Successfully uninstalled triton-2.2.0
  Attempting uninstall: nvidia-nccl-cu12
    Found existing installation: nvidia-nccl-cu12 2.19.3
    Uninstalling nvidia-nccl-cu12-2.19.3:
      Successfully uninstalled nvidia-nccl-cu12-2.19.3
  Attempting uninstall: torch
    Found existing installation: torch 2.2.1
    Uninstalling torch-2.2.1:
      Successfully uninstalled torch-2.2.1
  Attempting uninstall: vllm-flash-attn
    Found existing installation: vllm-flash-attn 2.5.8.post2
    Uninstalling vllm-flash-attn-2.5.8.post2:
      Successfully uninstalled vllm-flash-attn-2.5.8.post2
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
vllm 0.4.1 requires torch==2.2.1, but you have torch 2.1.2 which is incompatible.
xformers 0.0.25 requires torch==2.2.1, but you have torch 2.1.2 which is incompatible.
Successfully installed nvidia-nccl-cu12-2.18.1 torch-2.1.2 triton-2.1.0 vllm-flash-attn-2.5.6

Fix: downgrade vllm from vllm 0.4.1 to vllm 0.4.0:

(llama_factory) root@notebook-1813389960667746306-scnlbe5oi5-17811:/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory# pip install vllm==0.4.0
...
Installing collected packages: xformers, vllm
  Attempting uninstall: xformers
    Found existing installation: xformers 0.0.25
    Uninstalling xformers-0.0.25:
      Successfully uninstalled xformers-0.0.25
  Attempting uninstall: vllm
    Found existing installation: vllm 0.4.1
    Uninstalling vllm-0.4.1:
      Successfully uninstalled vllm-0.4.1
Successfully installed vllm-0.4.0 xformers-0.0.23.post1

vllm 0.4.0 installed successfully with no errors.
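In hindsight, the torch/triton/vllm seesaw above can be avoided by installing the whole mutually compatible set in a single resolver pass. A hedged sketch using the versions the logs above converged on (these pins come from this specific environment; on a different DCU software stack the compatible set may differ):

# install the compatible set at once so pip resolves it jointly,
# instead of letting each separate install drag torch back and forth
pip install "vllm==0.4.0" "vllm-flash-attn==2.5.6" "torch==2.1.2" "triton==2.1.0" "xformers==0.0.23.post1"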

Restarting the service surfaced a new problem:

(llama_factory) root@notebook-1813389960667746306-scnlbe5oi5-17811:/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory# python src/webui.py \
>     --model_name_or_path "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/models/Meta-Llama-3-8B-Instruct" \
>     --template llama3 \
>     --infer_backend vllm \
>     --vllm_enforce_eager
No ROCm runtime is found, using ROCM_HOME='/opt/dtk'
/opt/conda/envs/llama_factory/lib/python3.10/site-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: 'libc10_hip.so: cannot open shared object file: No such file or directory'If you don't plan on using image functionality from `torchvision.io`, you can ignore this warning. Otherwise, there might be something wrong with your environment. Did you have `libjpeg` or `libpng` installed before building `torchvision` from source?
  warn(
[2024-07-31 15:52:48,647] [INFO] [real_accelerator.py:158:get_accelerator] Setting ds_accelerator to cuda (auto detect)
Traceback (most recent call last):
  File "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/src/webui.py", line 17, in <module>
    from llamafactory.webui.interface import create_ui
  File "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/src/llamafactory/__init__.py", line 38, in <module>
    from .cli import VERSION
  File "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/src/llamafactory/cli.py", line 22, in <module>
    from .api.app import run_api
  File "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/src/llamafactory/api/app.py", line 21, in <module>
    from ..chat import ChatModel
  File "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/src/llamafactory/chat/__init__.py", line 16, in <module>
    from .chat_model import ChatModel
  File "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/src/llamafactory/chat/chat_model.py", line 26, in <module>
    from .vllm_engine import VllmEngine
  File "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/src/llamafactory/chat/vllm_engine.py", line 29, in <module>
    from vllm import AsyncEngineArgs, AsyncLLMEngine, RequestOutput, SamplingParams
  File "/opt/conda/envs/llama_factory/lib/python3.10/site-packages/vllm/__init__.py", line 4, in <module>
    from vllm.engine.async_llm_engine import AsyncLLMEngine
  File "/opt/conda/envs/llama_factory/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 12, in <module>
    from vllm.engine.llm_engine import LLMEngine
  File "/opt/conda/envs/llama_factory/lib/python3.10/site-packages/vllm/engine/llm_engine.py", line 16, in <module>
    from vllm.model_executor.model_loader import get_architecture_class_name
  File "/opt/conda/envs/llama_factory/lib/python3.10/site-packages/vllm/model_executor/model_loader.py", line 10, in <module>
    from vllm.model_executor.models.llava import LlavaForConditionalGeneration
  File "/opt/conda/envs/llama_factory/lib/python3.10/site-packages/vllm/model_executor/models/llava.py", line 11, in <module>
    from vllm.model_executor.layers.activation import get_act_fn
  File "/opt/conda/envs/llama_factory/lib/python3.10/site-packages/vllm/model_executor/layers/activation.py", line 9, in <module>
    from vllm._C import ops
ImportError: libcuda.so.1: cannot open shared object file: No such file or directory

For the fix, see the FAQ below.

Q: ImportError: libcuda.so.1: cannot open shared object file: No such file or directory

ImportError: libcuda.so.1: cannot open shared object file: No such file or directory 

Search for the libcuda.so.1 file:

(llama_factory) root@notebook-1813389960667746306-scnlbe5oi5-17811:/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory# find / -name "libcuda.so.1"
find: '/proc/1/map_files': Operation not permitted
find: '/proc/13/map_files': Operation not permitted
find: '/proc/45/map_files': Operation not permitted
find: '/proc/116/map_files': Operation not permitted
find: '/proc/118/map_files': Operation not permitted
find: '/proc/120/map_files': Operation not permitted
find: '/proc/121/map_files': Operation not permitted
find: '/proc/5527/map_files': Operation not permitted
find: '/proc/5529/map_files': Operation not permitted
find: '/proc/5531/map_files': Operation not permitted
find: '/proc/6148/map_files': Operation not permitted
find: '/proc/24592/map_files': Operation not permitted
find: '/proc/24970/map_files': Operation not permitted
find: '/proc/24971/map_files': Operation not permitted

Cause: the file cannot be found anywhere on the system; presumably a vllm build/version problem. The vllm wheels pulled from PyPI are CUDA builds, while this machine is a DCU environment with no NVIDIA driver, hence no libcuda.so.1.
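A quicker check than find is to ask the dynamic linker which runtime libraries are actually registered (a minimal sketch; the HIP-side library names are assumptions based on the ROCm-style DCU toolchain):

# list the shared libraries the linker knows about and look for a
# CUDA or HIP runtime; on a DCU machine only HIP-side entries appear
ldconfig -p | grep -Ei 'libcuda|libamdhip|hip'

If libcuda.so.1 is absent, any CUDA-built wheel (such as the vllm builds from PyPI) cannot work on this machine.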

Fix: reinstall vllm 0.4.3, which surfaced a new problem:

(llama_factory) root@notebook-1813389960667746306-scnlbe5oi5-17811:/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory# pip install vllm==0.4.3
...
Installing collected packages: triton, nvidia-nccl-cu12, torch, lm-format-enforcer, xformers, vllm-flash-attn, vllm
  Attempting uninstall: triton
    Found existing installation: triton 2.1.0
    Uninstalling triton-2.1.0:
      Successfully uninstalled triton-2.1.0
  Attempting uninstall: nvidia-nccl-cu12
    Found existing installation: nvidia-nccl-cu12 2.18.1
    Uninstalling nvidia-nccl-cu12-2.18.1:
      Successfully uninstalled nvidia-nccl-cu12-2.18.1
  Attempting uninstall: torch
    Found existing installation: torch 2.1.2
    Uninstalling torch-2.1.2:
      Successfully uninstalled torch-2.1.2
  Attempting uninstall: lm-format-enforcer
    Found existing installation: lm-format-enforcer 0.9.8
    Uninstalling lm-format-enforcer-0.9.8:
      Successfully uninstalled lm-format-enforcer-0.9.8
  Attempting uninstall: xformers
    Found existing installation: xformers 0.0.23.post1
    Uninstalling xformers-0.0.23.post1:
      Successfully uninstalled xformers-0.0.23.post1
  Attempting uninstall: vllm-flash-attn
    Found existing installation: vllm-flash-attn 2.5.6
    Uninstalling vllm-flash-attn-2.5.6:
      Successfully uninstalled vllm-flash-attn-2.5.6
  Attempting uninstall: vllm
    Found existing installation: vllm 0.4.0
    Uninstalling vllm-0.4.0:
      Successfully uninstalled vllm-0.4.0
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
lmdeploy 0.5.2.post1 requires torch<=2.2.2,>=2.0.0, but you have torch 2.3.0 which is incompatible.
lmdeploy 0.5.2.post1 requires triton<=2.2.0,>=2.1.0; sys_platform == "linux", but you have triton 2.3.0 which is incompatible.
Successfully installed lm-format-enforcer-0.10.1 nvidia-nccl-cu12-2.20.5 torch-2.3.0 triton-2.3.0 vllm-0.4.3 vllm-flash-attn-2.5.8.post2 xformers-0.0.26.post1

Cause: lmdeploy 0.5.2.post1 conflicts with torch, requiring torch<=2.2.2,>=2.0.0 while the current version is torch 2.3.0; it likewise conflicts with triton, requiring triton<=2.2.0,>=2.1.0 while the current version is triton 2.3.0. In theory lmdeploy should be upgraded to match torch, but lmdeploy is already at its latest version, so try downgrading lmdeploy instead.
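After each install step, pip check gives a one-shot list of every unsatisfied pin in the environment, which is easier than reading the tail of each install log (the example output line is illustrative, based on the conflict reported above):

# report every installed package whose declared requirements are
# currently broken, e.g.:
#   lmdeploy 0.5.2.post1 has requirement torch<=2.2.2,>=2.0.0, but you have torch 2.3.0.
pip check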

Fix: downgrade lmdeploy from lmdeploy 0.5.2.post1 to lmdeploy 0.5.0, which surfaced a new problem:

(llama_factory) root@notebook-1813389960667746306-scnlbe5oi5-17811:/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory# pip install lmdeploy==0.5.0
...
Installing collected packages: triton, nvidia-nccl-cu12, torch, lmdeploy
  Attempting uninstall: triton
    Found existing installation: triton 2.3.0
    Uninstalling triton-2.3.0:
      Successfully uninstalled triton-2.3.0
  Attempting uninstall: nvidia-nccl-cu12
    Found existing installation: nvidia-nccl-cu12 2.20.5
    Uninstalling nvidia-nccl-cu12-2.20.5:
      Successfully uninstalled nvidia-nccl-cu12-2.20.5
  Attempting uninstall: torch
    Found existing installation: torch 2.3.0
    Uninstalling torch-2.3.0:
      Successfully uninstalled torch-2.3.0
  Attempting uninstall: lmdeploy
    Found existing installation: lmdeploy 0.5.2.post1
    Uninstalling lmdeploy-0.5.2.post1:
      Successfully uninstalled lmdeploy-0.5.2.post1
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
vllm 0.4.3 requires torch==2.3.0, but you have torch 2.2.2 which is incompatible.
vllm-flash-attn 2.5.8.post2 requires torch==2.3.0, but you have torch 2.2.2 which is incompatible.
xformers 0.0.26.post1 requires torch==2.3.0, but you have torch 2.2.2 which is incompatible.

Fix: upgrade torch from torch 2.2.2 to torch 2.3.0, which surfaced a new problem:

(llama_factory) root@notebook-1813389960667746306-scnlbe5oi5-17811:/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory# pip install torch==2.3.0
...
Installing collected packages: triton, nvidia-nccl-cu12, torch
  Attempting uninstall: triton
    Found existing installation: triton 2.2.0
    Uninstalling triton-2.2.0:
      Successfully uninstalled triton-2.2.0
  Attempting uninstall: nvidia-nccl-cu12
    Found existing installation: nvidia-nccl-cu12 2.19.3
    Uninstalling nvidia-nccl-cu12-2.19.3:
      Successfully uninstalled nvidia-nccl-cu12-2.19.3
  Attempting uninstall: torch
    Found existing installation: torch 2.2.2
    Uninstalling torch-2.2.2:
      Successfully uninstalled torch-2.2.2
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
lmdeploy 0.5.0 requires torch<=2.2.2,>=2.0.0, but you have torch 2.3.0 which is incompatible.
lmdeploy 0.5.0 requires triton<=2.2.0,>=2.1.0; sys_platform == "linux", but you have triton 2.3.0 which is incompatible.
Successfully installed nvidia-nccl-cu12-2.20.5 torch-2.3.0 triton-2.3.0

Fix: upgrade lmdeploy from lmdeploy 0.5.0 to lmdeploy 0.5.1:

(llama_factory) root@notebook-1813389960667746306-scnlbe5oi5-17811:/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory# pip install lmdeploy==0.5.1
...
Installing collected packages: lmdeploy
  Attempting uninstall: lmdeploy
    Found existing installation: lmdeploy 0.5.0
    Uninstalling lmdeploy-0.5.0:
      Successfully uninstalled lmdeploy-0.5.0
Successfully installed lmdeploy-0.5.1

lmdeploy-0.5.1 installed successfully with no errors.

Restarting the service surfaced a new problem:

(llama_factory) root@notebook-1813389960667746306-scnlbe5oi5-17811:/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory# python src/webui.py \
>     --model_name_or_path "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/models/Meta-Llama-3-8B-Instruct" \
>     --template llama3 \
>     --infer_backend vllm \
>     --vllm_enforce_eager
Traceback (most recent call last):
  File "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/src/webui.py", line 17, in <module>
    from llamafactory.webui.interface import create_ui
  File "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/src/llamafactory/__init__.py", line 38, in <module>
    from .cli import VERSION
  File "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/src/llamafactory/cli.py", line 21, in <module>
    from . import launcher
  File "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/src/llamafactory/launcher.py", line 15, in <module>
    from llamafactory.train.tuner import run_exp
  File "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/src/llamafactory/train/tuner.py", line 25, in <module>
    from ..hparams import get_infer_args, get_train_args
  File "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/src/llamafactory/hparams/__init__.py", line 20, in <module>
    from .parser import get_eval_args, get_infer_args, get_train_args
  File "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/src/llamafactory/hparams/parser.py", line 45, in <module>
    check_dependencies()
  File "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/src/llamafactory/extras/misc.py", line 85, in check_dependencies
    require_version("peft>=0.11.1", "To fix: pip install peft>=0.11.1")
  File "/opt/conda/envs/llama_factory/lib/python3.10/site-packages/transformers/utils/versions.py", line 111, in require_version
    _compare_versions(op, got_ver, want_ver, requirement, pkg, hint)
  File "/opt/conda/envs/llama_factory/lib/python3.10/site-packages/transformers/utils/versions.py", line 44, in _compare_versions
    raise ImportError(
ImportError: peft>=0.11.1 is required for a normal functioning of this module, but found peft==0.9.0. To fix: pip install peft>=0.11.1

Fix: upgrade peft to 0.11.1:

(llama_factory) root@notebook-1813389960667746306-scnlbe5oi5-17811:/public/home/scnlbe5oi5/Downloads/cache# pip install peft==0.11.1
...
Installing collected packages: peft
  Attempting uninstall: peft
    Found existing installation: peft 0.9.0
    Uninstalling peft-0.9.0:
      Successfully uninstalled peft-0.9.0
Successfully installed peft-0.11.1

peft-0.11.1 installed successfully with no errors.

Restarting the service surfaced a new problem:

(llama_factory) root@notebook-1813389960667746306-scnlbe5oi5-17811:/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory# python src/webui.py --model_name_or_path "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/models/Meta-Llama-3-8B-Instruct" --template llama3 --infer_backend vllm --vllm_enforce_eager
No ROCm runtime is found, using ROCM_HOME='/opt/dtk'
/opt/conda/envs/llama_factory/lib/python3.10/site-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: 'libc10_hip.so: cannot open shared object file: No such file or directory'If you don't plan on using image functionality from `torchvision.io`, you can ignore this warning. Otherwise, there might be something wrong with your environment. Did you have `libjpeg` or `libpng` installed before building `torchvision` from source?
  warn(
[2024-07-31 16:58:35,443] [INFO] [real_accelerator.py:158:get_accelerator] Setting ds_accelerator to cuda (auto detect)
gradio_share: False
Running on local URL:  http://127.0.0.1:7860

Could not create share link. Missing file: /opt/conda/envs/llama_factory/lib/python3.10/site-packages/gradio/frpc_linux_amd64_v0.2.

Please check your internet connection. This can happen if your antivirus software blocks the download of this file. You can install manually by following these steps:

1. Download this file: https://cdn-media.huggingface.co/frpc-gradio-0.2/frpc_linux_amd64
2. Rename the downloaded file to: frpc_linux_amd64_v0.2
3. Move the file to this location: /opt/conda/envs/llama_factory/lib/python3.10/site-packages/gradio

For the fix, see the FAQ below.

Q: Could not create share link. Missing file: /PATH/TO/gradio/frpc_linux_amd64_v0.2

[Gradio] Could not create share link


Could not create share link. Missing file: /opt/conda/envs/llama_factory_torch/lib/python3.11/site-packages/gradio/frpc_linux_amd64_v0.2.

Please check your internet connection. This can happen if your antivirus software blocks the download of this file. You can install manually by following these steps:

1. Download this file: https://cdn-media.huggingface.co/frpc-gradio-0.2/frpc_linux_amd64
2. Rename the downloaded file to: frpc_linux_amd64_v0.2
3. Move the file to this location: /opt/conda/envs/llama_factory_torch/lib/python3.11/site-packages/gradio
# Fix
# 1. Download the file
wget https://cdn-media.huggingface.co/frpc-gradio-0.2/frpc_linux_amd64
# 2. Rename it
mv frpc_linux_amd64 frpc_linux_amd64_v0.2
# 3. Move it to the gradio package directory
cp frpc_linux_amd64_v0.2 /opt/conda/envs/llama_factory_torch/lib/python3.10/site-packages/gradio
# 4. Make it executable
chmod +x /opt/conda/envs/llama_factory_torch/lib/python3.10/site-packages/gradio/frpc_linux_amd64_v0.2

Q: Could not create share link. Please check your internet connection or our status page

Could not create share link. Please check your internet connection or our status page: https://status.gradio.app 

Fix: make the frpc_linux_amd64_v0.2 file executable:

chmod +x /opt/conda/envs/llama_factory_torch/lib/python3.11/site-packages/gradio/frpc_linux_amd64_v0.2 
