[KUNLUNXIN] minor fix for llama3-8b, remove cpu-initialization
w4yne committed Oct 4, 2024
1 parent ceb9e7a commit c6c25fd
Showing 1 changed file with 1 addition and 1 deletion.
@@ -15,7 +15,7 @@ VENDOR_ARGS=" \
     --use-mcore-models \
     --use-flash-attn \
     --disable-bias-linear \
-    --use-cpu-initialization --hidden-dropout 0 --attention-dropout 0 \
+    --hidden-dropout 0 --attention-dropout 0 \
     --no-async-tensor-model-parallel-allreduce --no-gradient-accumulation-fusion
 "
