Error With Docker #2535
Comments
Looks like the arguments you are sending are meant for the "main" program, while the Docker image you are using has tools.sh as its entrypoint. Try building .devops/main-cuda.Dockerfile instead.
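For example, building that image would look roughly like this (the tag name is only an example):

    docker build -t local/llama.cpp:main-cuda -f .devops/main-cuda.Dockerfile .

That image runs the main binary directly, so the flags on your command line go straight to it instead of to tools.sh.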
If, as @JohanAR suggests, you check and are running the right container, see if removing the double quotes from around $arg2 on line 18 of tools.sh and rebuilding the container helps. (Reason: main in the output above was invoked, but it looks like it was given a single argument made of all the flags and values you supplied glued together; removing the double quotes would be a hacky way of letting the shell split the contents of arg2 back out into individual arguments and values.)
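To illustrate the problem, the relevant part of tools.sh follows roughly this pattern (a paraphrased sketch, not the exact file contents):

    # first argument picks the tool, the rest is meant for that tool
    arg1="$1"
    shift
    arg2="$@"            # remaining flags collapsed into one string
    ...
    ./main "$arg2"       # quoted: main sees a single argument, e.g. "-m /models/... -p ... -n 512"

Because "$arg2" is quoted, word splitting never happens, so main receives one long unknown argument, which matches the error message further down.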
Ahh, I missed the "--run" argument.
There is a PR to replace the shell script with a Python one so that the argument parsing is more robust: #1686. Maybe it should be merged now?
To get it working, I had to replace "$arg2" with "$@" in tools.sh and then rebuild; the arg2 assignment can then be skipped entirely.
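In other words, roughly this change (a sketch of the idea, not the exact diff):

    # before: the remaining flags are glued into one string and passed as a single argument
    arg2="$@"
    ./main "$arg2"

    # after: forward the remaining arguments individually; arg2 is no longer needed
    ./main "$@"

"$@" expands to each remaining positional parameter as a separate word, so main receives -m, the model path, -p, the prompt, and so on as individual arguments.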
This should allow passing multiple arguments to containers with the full image that are using the tools.sh frontend. Fix from #2535 (comment)
Thanks, @DKAndreasen, that seems to do the trick. I made a PR.
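With that fix in the image, a full-image invocation along these lines should forward each flag to main separately (the paths and model file here are just examples):

    docker run -v /path/to/models:/models ghcr.io/ggerganov/llama.cpp:full --run -m /models/7B/ggml-model-q4_0.bin -p "Hello" -n 64

tools.sh still picks the tool from the first argument (--run), and everything after it reaches main unchanged.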
I was running it with this docker command:
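(essentially the full-image example from the README; the host mount path is a placeholder, but the arguments are the ones echoed back in the error below:)

    docker run -v /path/to/models:/models ghcr.io/ggerganov/llama.cpp:full --run -m /models/Llama-2-7B-Chat-GGML/llama-2-7b-chat.ggmlv3.q4_0.bin -p "Building a website can be done in 10 simple steps:" -n 512 --n-gpu-layers 1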
It returns this error:
usage: ./main [options]
options:
-h, --help show this help message and exit
-i, --interactive run in interactive mode
--interactive-first run in interactive mode and wait for input right away
-ins, --instruct run in instruction mode (use with Alpaca models)
--multiline-input allows you to write or paste multiple lines without ending each in '\'
-r PROMPT, --reverse-prompt PROMPT
halt generation at PROMPT, return control in interactive mode
(can be specified more than once for multiple prompts).
--color colorise output to distinguish prompt and user input from generations
-s SEED, --seed SEED RNG seed (default: -1, use random seed for < 0)
-t N, --threads N number of threads to use during computation (default: 16)
-p PROMPT, --prompt PROMPT
prompt to start generation with (default: empty)
-e process prompt escape sequences (\n, \r, \t, \', \", \\)
--prompt-cache FNAME file to cache prompt state for faster startup (default: none)
--prompt-cache-all if specified, saves user input and generations to cache as well.
not supported with --interactive or other interactive options
--prompt-cache-ro if specified, uses the prompt cache but does not update it.
--random-prompt start with a randomized prompt.
--in-prefix-bos prefix BOS to user inputs, preceding the --in-prefix string
--in-prefix STRING string to prefix user inputs with (default: empty)
--in-suffix STRING string to suffix after user inputs with (default: empty)
-f FNAME, --file FNAME
prompt file to start generation.
-n N, --n-predict N number of tokens to predict (default: -1, -1 = infinity)
-c N, --ctx-size N size of the prompt context (default: 512)
-b N, --batch-size N batch size for prompt processing (default: 512)
-gqa N, --gqa N grouped-query attention factor (TEMP!!! use 8 for LLaMAv2 70B) (default: 1)
-eps N, --rms-norm-eps N rms norm eps (TEMP!!! use 1e-5 for LLaMAv2) (default: 5.0e-06)
--top-k N top-k sampling (default: 40, 0 = disabled)
--top-p N top-p sampling (default: 0.9, 1.0 = disabled)
--tfs N tail free sampling, parameter z (default: 1.0, 1.0 = disabled)
--typical N locally typical sampling, parameter p (default: 1.0, 1.0 = disabled)
--repeat-last-n N last n tokens to consider for penalize (default: 64, 0 = disabled, -1 = ctx_size)
--repeat-penalty N penalize repeat sequence of tokens (default: 1.1, 1.0 = disabled)
--presence-penalty N repeat alpha presence penalty (default: 0.0, 0.0 = disabled)
--frequency-penalty N repeat alpha frequency penalty (default: 0.0, 0.0 = disabled)
--mirostat N use Mirostat sampling.
Top K, Nucleus, Tail Free and Locally Typical samplers are ignored if used.
(default: 0, 0 = disabled, 1 = Mirostat, 2 = Mirostat 2.0)
--mirostat-lr N Mirostat learning rate, parameter eta (default: 0.1)
--mirostat-ent N Mirostat target entropy, parameter tau (default: 5.0)
-l TOKEN_ID(+/-)BIAS, --logit-bias TOKEN_ID(+/-)BIAS
modifies the likelihood of token appearing in the completion,
i.e. --logit-bias 15043+1 to increase likelihood of token ' Hello',
or --logit-bias 15043-1 to decrease likelihood of token ' Hello'
--grammar GRAMMAR BNF-like grammar to constrain generations (see samples in grammars/ dir)
--grammar-file FNAME file to read grammar from
--cfg-negative-prompt PROMPT
negative prompt to use for guidance. (default: empty)
--cfg-scale N strength of guidance (default: 1.000000, 1.0 = disable)
--rope-freq-base N RoPE base frequency (default: 10000.0)
--rope-freq-scale N RoPE frequency scaling factor (default: 1)
--ignore-eos ignore end of stream token and continue generating (implies --logit-bias 2-inf)
--no-penalize-nl do not penalize newline token
--memory-f32 use f32 instead of f16 for memory key+value (default: disabled)
not recommended: doubles context memory required and no measurable increase in quality
--temp N temperature (default: 0.8)
--perplexity compute perplexity over each ctx window of the prompt
--hellaswag compute HellaSwag score over random tasks from datafile supplied with -f
--hellaswag-tasks N number of tasks to use when computing the HellaSwag score (default: 400)
--keep N number of tokens to keep from the initial prompt (default: 0, -1 = all)
--chunks N max number of chunks to process (default: -1, -1 = all)
--mlock force system to keep model in RAM rather than swapping or compressing
--no-mmap do not memory-map model (slower load but may reduce pageouts if not using mlock)
--numa attempt optimizations that help on some NUMA systems
if run without this previously, it is recommended to drop the system page cache before using this
see #1437
-ngl N, --n-gpu-layers N
number of layers to store in VRAM
-ts SPLIT --tensor-split SPLIT
how to split tensors across multiple GPUs, comma-separated list of proportions, e.g. 3,1
-mg i, --main-gpu i the GPU to use for scratch and small tensors
-lv, --low-vram don't allocate VRAM scratch buffer
-mmq, --mul-mat-q use experimental mul_mat_q CUDA kernels instead of cuBLAS. TEMP!!!
Reduces VRAM usage by 700/970/1430 MiB for 7b/13b/33b but prompt processing speed
is still suboptimal, especially q2_K, q3_K, q5_K, and q6_K.
--mtest compute maximum memory usage
--export export the computation graph to 'llama.ggml'
--verbose-prompt print prompt before generation
--lora FNAME apply LoRA adapter (implies --no-mmap)
--lora-base FNAME optional model to use as a base for the layers modified by the LoRA adapter
-m FNAME, --model FNAME
model path (default: models/7B/ggml-model.bin)
--simple-io use basic IO for better compatibility in subprocesses and limited consoles
error: unknown argument: -m /models/Llama-2-7B-Chat-GGML/llama-2-7b-chat.ggmlv3.q4_0.bin -p Building a website can be done in 10 simple steps: -n 512 --n-gpu-layers 1
Environment and Context
Please provide detailed information about your computer setup. This is important in case the issue is not reproducible except under certain specific conditions.
$ lscpu
$ uname -a
Failure Information (for bugs)
Please help provide information about the failure if this is a bug. If it is not a bug, please remove the rest of this template.
Steps to Reproduce
Please provide detailed steps for reproducing the issue. We are not sitting in front of your screen, so the more detail the better.
Failure Logs
Please include any relevant log snippets or files. If it works under one configuration but not under another, please provide logs for both configurations and their corresponding outputs so it is easy to see where behavior changes.
Also, please try to avoid using screenshots if at all possible. Instead, copy/paste the console output and use GitHub's markdown to cleanly format your logs for easy readability.