I'm new to AI models, but I'm not new to Python, so I'm sure I can be of help.
Describe the bug
I tried to set up the model locally and followed the instructions in the README. At first, everything went smoothly, until I tried this command, and then a traceback occurred:
Traceback (most recent call last):
  File "/usr/local/bin/torchrun", line 8, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 348, in wrapper
    return f(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/torch/distributed/run.py", line 901, in main
    run(args)
  File "/usr/local/lib/python3.10/site-packages/torch/distributed/run.py", line 892, in run
    elastic_launch(
  File "/usr/local/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 133, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/usr/local/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 229, in launch_agent
    master_addr, master_port = _get_addr_and_port(rdzv_parameters)
  File "/usr/local/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 169, in _get_addr_and_port
    master_addr, master_port = parse_rendezvous_endpoint(endpoint, default_port=-1)
  File "/usr/local/lib/python3.10/site-packages/torch/distributed/elastic/rendezvous/utils.py", line 104, in parse_rendezvous_endpoint
    raise ValueError(
ValueError: The hostname of the rendezvous endpoint '/deepseek/DeepSeek-V3-Demo:29500' must be a dot-separated list of labels, an IPv4 address, or an IPv6 address.
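For anyone hitting the same error, here is a minimal sketch (my own, not from the README) that calls the same parse_rendezvous_endpoint helper shown in the traceback. The "localhost:29500" value is just an example of a well-formed host:port endpoint; the path-style string is the one torchrun rejected above:

from torch.distributed.elastic.rendezvous.utils import parse_rendezvous_endpoint

# A well-formed rendezvous endpoint is "host:port" and parses into a (host, port) pair.
host, port = parse_rendezvous_endpoint("localhost:29500", default_port=-1)
print(host, port)  # localhost 29500

# A filesystem path is not a valid hostname, so the same ValueError is raised
# that appears at the bottom of the traceback.
try:
    parse_rendezvous_endpoint("/deepseek/DeepSeek-V3-Demo:29500", default_port=-1)
except ValueError as exc:
    print(exc)

So it looks like torchrun is treating the model path as the rendezvous endpoint, which is why the hostname check fails.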
To Reproduce
Follow the instructions in the README until this step, then run my command instead.
Expected behavior
I am able to chat with DeepSeek-V3 locally.
Screenshots
Additional context
I'm using a WSL Linux subsystem on Windows 11, so it's kind of like using a virtual machine.
I'm not sure whether this is a DeepSeek issue or not.