
bitsandbytes No longer supported on Windows. #278

Closed
gerwintmg opened this issue Jul 19, 2023 · 12 comments · Fixed by #410

Comments

@gerwintmg

gerwintmg commented Jul 19, 2023

When I updated my local repo I saw that the dependency for bitsandbytes was updated.
But unfortunately, per the bitsandbytes requirements, it no longer supports Windows.

How does this impact Windows support for lit-gpt?

Is there anything I can do to keep lit-gpt running on my Windows install?

@carmocca
Contributor

What error are you getting?

We install bitsandbytes in our CI's Windows jobs and it seems to install. There are no bitsandbytes tests because they require a GPU. https://github.com/Lightning-AI/lit-gpt/actions/runs/5601352941/jobs/10245095265#step:4:359

If this became a problem, we could drop it from the base requirements.txt and have quantization users install it on demand. But first I'd like to understand why it fails on your system and not our CI.
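
A minimal sketch of what that on-demand pattern could look like (hypothetical code, not what lit-gpt ships today):

# Hypothetical: import bitsandbytes lazily, only when quantization is
# requested, so the base requirements.txt never needs it.
def _require_bitsandbytes():
    try:
        import bitsandbytes as bnb  # CUDA-only optional dependency
    except ImportError as err:
        raise ImportError(
            "Quantization requires bitsandbytes: pip install bitsandbytes"
        ) from err
    return bnb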

@gerwintmg
Author

gerwintmg commented Jul 20, 2023

Well, as I am using an NVIDIA GeForce RTX 3080 (laptop) with 16 GB VRAM, I am a little short on memory if I would like to try some fine-tuning.
In my search for a solution I found #275.

Trying to run this pull request, I got an error ending with:

        CUDA Setup failed despite GPU being available. Please run the following command to get more information:

        python -m bitsandbytes

        Inspect the output of the command and see if you can locate CUDA libraries. You might need to add them
        to your LD_LIBRARY_PATH. If you suspect a bug, please take the information from python -m bitsandbytes
        and open an issue at: https://github.com/TimDettmers/bitsandbytes/issues

Well, it's a pull request, but it seems the error is relevant to the underlying library. So I went back to the main branch of lit-gpt and tried to test the underlying library.

Running "python -m bitsandbytes" gave the same error.

If I understand you correctly, lit-gpt only supports Windows when no GPU is used?

Thinking about this, I decided to run:

 python chat/base.py --checkpoint_dir 'D:\\Projects\\lit-gpt\\checkpoints\\tiiuae\\falcon-7b' --quantize bnb.nf4

This gave me the result:

Loading model 'D:\\Projects\\lit-gpt\\checkpoints\\tiiuae\\falcon-7b\\lit_model.pth' with {'org': 'tiiuae', 'name': 'falcon-7b', 'block_size': 2048, 'vocab_size': 50254, 'padding_multiple': 512, 'padded_vocab_size': 65024, 'n_layer': 32, 'n_head': 71, 'n_embd': 4544, 'rotary_percentage': 1.0, 'parallel_residual': True, 'bias': False, 'n_query_groups': 1, 'shared_attention_norm': True, '_norm_class': 'LayerNorm', 'norm_eps': 1e-05, '_mlp_class': 'GptNeoxMLP', 'intermediate_size': 18176, 'condense_ratio': 1}
False
The following directories listed in your path were found to be non-existent: {WindowsPath('/Users/gtamm/miniconda3/envs/litGPT/lib'), WindowsPath('C')}
C:\Users\gtamm\miniconda3\envs\litGPT\Lib\site-packages\bitsandbytes\cuda_setup\main.py:166: UserWarning: C:\Users\gtamm\miniconda3\envs\litGPT did not contain ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] as expected! Searching further paths...
  warn(msg)
CUDA_SETUP: WARNING! libcudart.so not found in any environmental path. Searching in backup paths...
The following directories listed in your path were found to be non-existent: {WindowsPath('/usr/local/cuda/lib64')}
DEBUG: Possible options found for libcudart.so: set()
CUDA SETUP: PyTorch settings found: CUDA_VERSION=118, Highest Compute Capability: 8.6.
CUDA SETUP: To manually override the PyTorch CUDA version please see:https://github.com/TimDettmers/bitsandbytes/blob/main/how_to_use_nonpytorch_cuda.md
CUDA SETUP: Loading binary C:\Users\gtamm\miniconda3\envs\litGPT\Lib\site-packages\bitsandbytes\libbitsandbytes_cuda118.so...
argument of type 'WindowsPath' is not iterable
CUDA SETUP: Problem: The main issue seems to be that the main CUDA runtime library was not detected.
CUDA SETUP: Solution 1: To solve the issue the libcudart.so location needs to be added to the LD_LIBRARY_PATH variable
CUDA SETUP: Solution 1a): Find the cuda runtime library via: find / -name libcudart.so 2>/dev/null
CUDA SETUP: Solution 1b): Once the library is found add it to the LD_LIBRARY_PATH: export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:FOUND_PATH_FROM_1a
CUDA SETUP: Solution 1c): For a permanent solution add the export from 1b into your .bashrc file, located at ~/.bashrc
CUDA SETUP: Solution 2: If no library was found in step 1a) you need to install CUDA.
CUDA SETUP: Solution 2a): Download CUDA install script: wget https://github.com/TimDettmers/bitsandbytes/blob/main/cuda_install.sh
CUDA SETUP: Solution 2b): Install desired CUDA version to desired location. The syntax is bash cuda_install.sh CUDA_VERSION PATH_TO_INSTALL_INTO.
CUDA SETUP: Solution 2b): For example, "bash cuda_install.sh 113 ~/local/" will download CUDA 11.3 and install into the folder ~/local
Traceback (most recent call last):
  File "D:\Projects\lit-gpt\chat\base.py", line 297, in <module>
    CLI(main)
  File "C:\Users\gtamm\miniconda3\envs\litGPT\Lib\site-packages\jsonargparse\_cli.py", line 85, in CLI
    return _run_component(component, cfg_init)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\gtamm\miniconda3\envs\litGPT\Lib\site-packages\jsonargparse\_cli.py", line 147, in _run_component
    return component(**cfg)
           ^^^^^^^^^^^^^^^^
  File "D:\Projects\lit-gpt\chat\base.py", line 157, in main
    with fabric.init_module(empty_init=True), quantization(quantize):
  File "C:\Users\gtamm\miniconda3\envs\litGPT\Lib\contextlib.py", line 137, in __enter__
    return next(self.gen)
           ^^^^^^^^^^^^^^
  File "D:\Projects\lit-gpt\lit_gpt\utils.py", line 54, in quantization
    from quantize.bnb import Linear4bit
  File "D:\Projects\lit-gpt\quantize\bnb.py", line 16, in <module>
    import bitsandbytes as bnb
  File "C:\Users\gtamm\miniconda3\envs\litGPT\Lib\site-packages\bitsandbytes\__init__.py", line 6, in <module>
    from . import cuda_setup, utils, research
  File "C:\Users\gtamm\miniconda3\envs\litGPT\Lib\site-packages\bitsandbytes\research\__init__.py", line 1, in <module>
    from . import nn
  File "C:\Users\gtamm\miniconda3\envs\litGPT\Lib\site-packages\bitsandbytes\research\nn\__init__.py", line 1, in <module>
    from .modules import LinearFP8Mixed, LinearFP8Global
  File "C:\Users\gtamm\miniconda3\envs\litGPT\Lib\site-packages\bitsandbytes\research\nn\modules.py", line 8, in <module>
    from bitsandbytes.optim import GlobalOptimManager
  File "C:\Users\gtamm\miniconda3\envs\litGPT\Lib\site-packages\bitsandbytes\optim\__init__.py", line 6, in <module>
    from bitsandbytes.cextension import COMPILED_WITH_CUDA
  File "C:\Users\gtamm\miniconda3\envs\litGPT\Lib\site-packages\bitsandbytes\cextension.py", line 20, in <module>
    raise RuntimeError('''
RuntimeError:
        CUDA Setup failed despite GPU being available. Please run the following command to get more information:

        python -m bitsandbytes

        Inspect the output of the command and see if you can locate CUDA libraries. You might need to add them
        to your LD_LIBRARY_PATH. If you suspect a bug, please take the information from python -m bitsandbytes
        and open an issue at: https://github.com/TimDettmers/bitsandbytes/issues

I hope this is helpful.

@Andrei-Aksionov
Collaborator

In the beginning of the output you can see:

CUDA_SETUP: WARNING! libcudart.so not found in any environmental path. Searching in backup paths...

And bitsandbytes shows:

CUDA Setup failed despite GPU being available ....

So definitely something is messed up with CUDA. Maybe try recreating the env and reinstalling the packages from scratch.

@gerwintmg
Author

In the beginning of the output you can see:

CUDA_SETUP: WARNING! libcudart.so not found in any environmental path. Searching in backup paths...

And bitsandbytes shows:

CUDA Setup failed despite GPU being available ....

So definitely something is messed up with CUDA. Maybe try recreating the env and reinstalling the packages from scratch.

That was one of my first thoughts, but looking at:

CUDA SETUP: Loading binary C:\Users\gtamm\miniconda3\envs\litGPT\Lib\site-packages\bitsandbytes\libbitsandbytes_cuda118.so...
argument of type 'WindowsPath' is not iterable

and knowing CUDA inference without quantization works, I believe the issue is somewhere else.
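
The TypeError itself comes from bitsandbytes' cuda_setup path handling, which appears to run a substring-style "in" check against a Path object; "in" needs a string or another container, and a Path is neither. A minimal reproduction (PureWindowsPath is used so it runs on any OS; on an actual Windows install the message names 'WindowsPath'):

from pathlib import PureWindowsPath

env_path = PureWindowsPath(r"C:\Users\gtamm\miniconda3\envs\litGPT")
try:
    "libcudart.so" in env_path  # membership test falls back to iteration
except TypeError as err:
    print(err)  # argument of type 'PureWindowsPath' is not iterable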

I have also tried to set environment variables like:

$env:BNB_CUDA_VERSION=122; $env:LD_LIBRARY_PATH= $env:CUDA_PATH_V12_2; $env:CUDA_VERSION=122

But even then it stops with the same error, with a different CUDA version.

@Andrei-Aksionov
Collaborator

Well, this is definitely an issue with bitsandbytes on Windows:
bitsandbytes-foundation/bitsandbytes#30

Plus there is another issue with similar error:
bitsandbytes-foundation/bitsandbytes#32

So indeed, it looks like bnb is not supported on Windows for now, at least by the bnb team.
Someone in the comments provided their own solution:
bitsandbytes-foundation/bitsandbytes#30 (comment)
So I recommend taking a look at the users' comments in issue #30 and trying some of them.
Unfortunately, I don't have a Windows machine.

@Andrei-Aksionov
Collaborator

bnb can be installed on all three platforms: macOS, Windows, and Linux.
But it can run only on a CUDA device, which eliminates macOS. It uses custom CUDA libraries that are compiled only for Linux, which eliminates Windows.
As a result, it can be installed anywhere but will run only on Linux. This should be mentioned in the corresponding .md file.
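
A guard along these lines (hypothetical helper, not part of lit-gpt) would make the constraint explicit:

import sys
import torch

def bitsandbytes_usable() -> bool:
    # Per the above: bnb installs everywhere, but its prebuilt CUDA
    # kernels only load on Linux with a CUDA device present.
    return sys.platform == "linux" and torch.cuda.is_available()

if bitsandbytes_usable():
    import bitsandbytes as bnb
else:
    raise RuntimeError("bitsandbytes quantization needs Linux with a CUDA GPU")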

@gerwintmg
Author

I hope a solution can be found for bitsandbytes.

Following the issues in your previous comment, I came to https://github.com/ShanGor/bitsandbytes-windows:

Why try to build it on Windows? Because my RTX4070 laptop version only has 8G memory. Linux cannot load the large model without enough GPU memory. But Windows has the "Shared GPU Memory" concept to share RAM to GPU, which can make our job continue.

Building a custom version of bitsandbytes is not something I'm looking forward to, so I'm going to try WSL2 and hope that works for me.

If I can help in any way to get this to work on Windows, please let me know :)

@patrickhwood
Contributor

Following the issues in your previous comment, I came to https://github.com/ShanGor/bitsandbytes-windows:

Why try to build it on Windows? Because my RTX4070 laptop version only has 8G memory. Linux cannot load the large model without enough GPU memory. But Windows has the "Shared GPU Memory" concept to share RAM to GPU, which can make our job continue.

Not sure where they got the idea that shared memory isn't available on Linux, as it was available there first (https://developer.nvidia.com/blog/unified-memory-cuda-beginners/). The Windows drivers are actually missing some features for full support of unified memory (automatic memory upload/download on page fault). Unified memory is used in the QLoRA paper: https://arxiv.org/pdf/2305.14314

@gerwintmg
Author

For future Windows users: WSL2 is unfortunately not the solution, at least not for me.

I installed CUDA on WSL2.

# Works
python chat/base.py --checkpoint_dir '/mnt/d/Projects/lit-gpt/checkpoints/tiiuae/falcon-7b' 
# Does not work
python chat/base.py --checkpoint_dir '/mnt/d/Projects/lit-gpt/checkpoints/tiiuae/falcon-7b' --quantize bnb.nf4

Result:

python chat/base.py --checkpoint_dir '/mnt/d/Projects/lit-gpt/checkpoints/tiiuae/falcon-7b' --quantize bnb.nf4
/home/gerwintmg/miniconda3/envs/lit-gpt/lib/python3.11/site-packages/pydantic/_migration.py:282: UserWarning: `pydantic.utils:Representation` has been removed. We are importing from `pydantic.v1.utils:Representation` instead.See the migration guide for more details: https://docs.pydantic.dev/latest/migration/
  warnings.warn(
Loading model '/mnt/d/Projects/lit-gpt/checkpoints/tiiuae/falcon-7b/lit_model.pth' with {'org': 'tiiuae', 'name': 'falcon-7b', 'block_size': 2048, 'vocab_size': 50254, 'padding_multiple': 512, 'padded_vocab_size': 65024, 'n_layer': 32, 'n_head': 71, 'n_embd': 4544, 'rotary_percentage': 1.0, 'parallel_residual': True, 'bias': False, 'n_query_groups': 1, 'shared_attention_norm': True, '_norm_class': 'LayerNorm', 'norm_eps': 1e-05, '_mlp_class': 'GptNeoxMLP', 'intermediate_size': 18176, 'condense_ratio': 1}
False
/home/gerwintmg/miniconda3/envs/lit-gpt/lib/python3.11/site-packages/bitsandbytes/cuda_setup/main.py:166: UserWarning: /home/gerwintmg/miniconda3/envs/lit-gpt did not contain ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] as expected! Searching further paths...
  warn(msg)
The following directories listed in your path were found to be non-existent: {PosixPath('unix')}
CUDA_SETUP: WARNING! libcudart.so not found in any environmental path. Searching in backup paths...
DEBUG: Possible options found for libcudart.so: {PosixPath('/usr/local/cuda/lib64/libcudart.so')}
CUDA SETUP: PyTorch settings found: CUDA_VERSION=118, Highest Compute Capability: 8.6.
CUDA SETUP: To manually override the PyTorch CUDA version please see:https://github.com/TimDettmers/bitsandbytes/blob/main/how_to_use_nonpytorch_cuda.md
CUDA SETUP: Loading binary /home/gerwintmg/miniconda3/envs/lit-gpt/lib/python3.11/site-packages/bitsandbytes/libbitsandbytes_cuda118.so...
libcusparse.so.11: cannot open shared object file: No such file or directory
CUDA SETUP: Something unexpected happened. Please compile from source:
git clone https://github.com/TimDettmers/bitsandbytes.git
cd bitsandbytes
CUDA_VERSION=118 make cuda11x
python setup.py install
Traceback (most recent call last):
  File "/mnt/d/Projects/lit-gpt/chat/base.py", line 297, in <module>
    CLI(main)
  File "/home/gerwintmg/miniconda3/envs/lit-gpt/lib/python3.11/site-packages/jsonargparse/_cli.py", line 85, in CLI
    return _run_component(component, cfg_init)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/gerwintmg/miniconda3/envs/lit-gpt/lib/python3.11/site-packages/jsonargparse/_cli.py", line 147, in _run_component
    return component(**cfg)
           ^^^^^^^^^^^^^^^^
  File "/mnt/d/Projects/lit-gpt/chat/base.py", line 157, in main
    with fabric.init_module(empty_init=True), quantization(quantize):
  File "/home/gerwintmg/miniconda3/envs/lit-gpt/lib/python3.11/contextlib.py", line 137, in __enter__
    return next(self.gen)
           ^^^^^^^^^^^^^^
  File "/mnt/d/Projects/lit-gpt/lit_gpt/utils.py", line 54, in quantization
    from quantize.bnb import Linear4bit
  File "/mnt/d/Projects/lit-gpt/quantize/bnb.py", line 16, in <module>
    import bitsandbytes as bnb
  File "/home/gerwintmg/miniconda3/envs/lit-gpt/lib/python3.11/site-packages/bitsandbytes/__init__.py", line 6, in <module>
    from . import cuda_setup, utils, research
  File "/home/gerwintmg/miniconda3/envs/lit-gpt/lib/python3.11/site-packages/bitsandbytes/research/__init__.py", line 1, in <module>
    from . import nn
  File "/home/gerwintmg/miniconda3/envs/lit-gpt/lib/python3.11/site-packages/bitsandbytes/research/nn/__init__.py", line 1, in <module>
    from .modules import LinearFP8Mixed, LinearFP8Global
  File "/home/gerwintmg/miniconda3/envs/lit-gpt/lib/python3.11/site-packages/bitsandbytes/research/nn/modules.py", line 8, in <module>
    from bitsandbytes.optim import GlobalOptimManager
  File "/home/gerwintmg/miniconda3/envs/lit-gpt/lib/python3.11/site-packages/bitsandbytes/optim/__init__.py", line 6, in <module>
    from bitsandbytes.cextension import COMPILED_WITH_CUDA
  File "/home/gerwintmg/miniconda3/envs/lit-gpt/lib/python3.11/site-packages/bitsandbytes/cextension.py", line 20, in <module>
    raise RuntimeError('''
RuntimeError:
        CUDA Setup failed despite GPU being available. Please run the following command to get more information:

        python -m bitsandbytes

        Inspect the output of the command and see if you can locate CUDA libraries. You might need to add them
        to your LD_LIBRARY_PATH. If you suspect a bug, please take the information from python -m bitsandbytes
        and open an issue at: https://github.com/TimDettmers/bitsandbytes/issues

I am not really getting any wiser from this error message, but I hope it can help.

@Andrei-Aksionov
Collaborator

I usually don't use conda, but back in the day I remember installing this package:
https://anaconda.org/anaconda/cudatoolkit
But I installed it for TensorFlow. PyTorch should be distributed with CUDA, so I'm not sure why it cannot find libcudart.

Btw have you tried what the error message suggests?

CUDA SETUP: To manually override the PyTorch CUDA version please see:https://github.com/TimDettmers/bitsandbytes/blob/main/how_to_use_nonpytorch_cuda.md

@gerwintmg
Author

I usually don't use conda, but back in the day I remember installing this package: https://anaconda.org/anaconda/cudatoolkit But I installed it for TensorFlow. PyTorch should be distributed with CUDA, so I'm not sure why it cannot find libcudart.

Btw have you tried what the error message suggests?

CUDA SETUP: To manually override the PyTorch CUDA version please see:https://github.com/TimDettmers/bitsandbytes/blob/main/how_to_use_nonpytorch_cuda.md

Let's try.

export BNB_CUDA_VERSION=122
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/
python chat/base.py --checkpoint_dir '/mnt/d/Projects/lit-gpt/checkpoints/tiiuae/falcon-7b' --quantize bnb.nf4

Resulted in:

/home/gerwintmg/miniconda3/envs/lit-gpt/lib/python3.11/site-packages/pydantic/_migration.py:282: UserWarning: `pydantic.utils:Representation` has been removed. We are importing from `pydantic.v1.utils:Representation` instead.See the migration guide for more details: https://docs.pydantic.dev/latest/migration/
  warnings.warn(
Loading model '/mnt/d/Projects/lit-gpt/checkpoints/tiiuae/falcon-7b/lit_model.pth' with {'org': 'tiiuae', 'name': 'falcon-7b', 'block_size': 2048, 'vocab_size': 50254, 'padding_multiple': 512, 'padded_vocab_size': 65024, 'n_layer': 32, 'n_head': 71, 'n_embd': 4544, 'rotary_percentage': 1.0, 'parallel_residual': True, 'bias': False, 'n_query_groups': 1, 'shared_attention_norm': True, '_norm_class': 'LayerNorm', 'norm_eps': 1e-05, '_mlp_class': 'GptNeoxMLP', 'intermediate_size': 18176, 'condense_ratio': 1}
/home/gerwintmg/miniconda3/envs/lit-gpt/lib/python3.11/site-packages/bitsandbytes/cuda_setup/main.py:106: UserWarning:

================================================================================
WARNING: Manual override via BNB_CUDA_VERSION env variable detected!
BNB_CUDA_VERSION=XXX can be used to load a bitsandbytes version that is different from the PyTorch CUDA version.
If this was unintended set the BNB_CUDA_VERSION variable to an empty string: export BNB_CUDA_VERSION=
If you use the manual override make sure the right libcudart.so is in your LD_LIBRARY_PATH
For example by adding the following to your .bashrc: export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:<path_to_cuda_dir/lib64
Loading CUDA version: BNB_CUDA_VERSION=122
================================================================================


  warn((f'\n\n{"="*80}\n'
>> Prompt: Hello
>> Reply: ,
I have been using kafka .NET client to consume kafka topics . But most of the time am receiving the following exception.
Any idea on how to avoid this exception ?
Error logs are available too.
Fatal error accessin

Thanks, this seems to work for inference. WSL2 on Windows seems to work :)

@gerwintmg
Author

As inference combined with quantization is working, I decided to test my original objective.
Unfortunately, it failed.

These are my results:

python finetune/adapter.py --checkpoint_dir '/mnt/d/Projects/lit-gpt/checkpoints/tiiuae/falcon-7b' --out_dir out/adapter/falcon-7b-finetuned-aplaca --data_dir "/mnt/d/Projects/lit-gpt/data/alpaca" --quantize bnb.int8
/mnt/d/Projects/lit-gpt.qlora/lit-gpt/finetune/adapter.py:297: JsonargparseDeprecationWarning:
    Only use the public API as described in https://jsonargparse.readthedocs.io/en/stable/#api-reference.
    Importing from jsonargparse.cli is kept only to avoid breaking code that does not correctly use the public
    API. It will no longer be available from v5.0.0.

  from jsonargparse.cli import CLI
/home/gerwintmg/miniconda3/envs/lit-gpt/lib/python3.11/site-packages/pydantic/_migration.py:282: UserWarning: `pydantic.utils:Representation` has been removed. We are importing from `pydantic.v1.utils:Representation` instead.See the migration guide for more details: https://docs.pydantic.dev/latest/migration/
  warnings.warn(
Using bfloat16 Automatic Mixed Precision (AMP)
{'eval_interval': 600, 'save_interval': 1000, 'eval_iters': 100, 'log_interval': 1, 'devices': 1, 'learning_rate': 0.003, 'batch_size': 64.0, 'micro_batch_size': 4, 'gradient_accumulation_iters': 16.0, 'epoch_size': 50000, 'num_epochs': 5, 'max_iters': 62500, 'weight_decay': 0.02, 'warmup_steps': 1562.0}
Global seed set to 1337
Loading model '/mnt/d/Projects/lit-gpt/checkpoints/tiiuae/falcon-7b/lit_model.pth' with {'org': 'tiiuae', 'name': 'falcon-7b', 'block_size': 2048, 'vocab_size': 50254, 'padding_multiple': 512, 'padded_vocab_size': 65024, 'n_layer': 32, 'n_head': 71, 'n_embd': 4544, 'rotary_percentage': 1.0, 'parallel_residual': True, 'bias': False, 'n_query_groups': 1, 'shared_attention_norm': True, '_norm_class': 'LayerNorm', 'norm_eps': 1e-05, '_mlp_class': 'GptNeoxMLP', 'intermediate_size': 18176, 'condense_ratio': 1, 'adapter_prompt_length': 10, 'adapter_start_layer': 2}
/home/gerwintmg/miniconda3/envs/lit-gpt/lib/python3.11/site-packages/bitsandbytes/cuda_setup/main.py:106: UserWarning:

================================================================================
WARNING: Manual override via BNB_CUDA_VERSION env variable detected!
BNB_CUDA_VERSION=XXX can be used to load a bitsandbytes version that is different from the PyTorch CUDA version.
If this was unintended set the BNB_CUDA_VERSION variable to an empty string: export BNB_CUDA_VERSION=
If you use the manual override make sure the right libcudart.so is in your LD_LIBRARY_PATH
For example by adding the following to your .bashrc: export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:<path_to_cuda_dir/lib64
Loading CUDA version: BNB_CUDA_VERSION=122
================================================================================


  warn((f'\n\n{"="*80}\n'
Number of trainable parameters: 1,365,330
Number of non trainable parameters: 7,217,189,760
Global seed set to 1337
Validating ...
Recommend a movie for me to watch during the weekend and explain the reason.
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
Recommend a movie for me to watch during the weekend and explain the reason.

### Response:

#### ![img](img/3.jpg)

> I think that you would enjoy watching "The Terminator" because it has a good storyline. I loved the scene with the train blowout and the scene from the bar where you meet Kyle Reese. The last scene in the hotel room is also very exciting.
## #### ![img](img/4.jpg)

> I think that you should watch "Truman Show" because it is a
Estimated TFLOPs: 384.08
Measured TFLOPs: 369.65
Traceback (most recent call last):
  File "/mnt/d/Projects/lit-gpt.qlora/lit-gpt/finetune/adapter.py", line 299, in <module>
    CLI(setup)
  File "/home/gerwintmg/miniconda3/envs/lit-gpt/lib/python3.11/site-packages/jsonargparse/_cli.py", line 85, in CLI
    return _run_component(component, cfg_init)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/gerwintmg/miniconda3/envs/lit-gpt/lib/python3.11/site-packages/jsonargparse/_cli.py", line 147, in _run_component
    return component(**cfg)
           ^^^^^^^^^^^^^^^^
  File "/mnt/d/Projects/lit-gpt.qlora/lit-gpt/finetune/adapter.py", line 75, in setup
    fabric.launch(main, data_dir, checkpoint_dir, out_dir, quantize)
  File "/home/gerwintmg/miniconda3/envs/lit-gpt/lib/python3.11/site-packages/lightning/fabric/fabric.py", line 805, in launch
    return self._wrap_and_launch(function, self, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/gerwintmg/miniconda3/envs/lit-gpt/lib/python3.11/site-packages/lightning/fabric/fabric.py", line 887, in _wrap_and_launch
    return to_run(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/gerwintmg/miniconda3/envs/lit-gpt/lib/python3.11/site-packages/lightning/fabric/fabric.py", line 892, in _wrap_with_setup
    return to_run(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^
  File "/mnt/d/Projects/lit-gpt.qlora/lit-gpt/finetune/adapter.py", line 115, in main
    train(fabric, model, optimizer, train_data, val_data, checkpoint_dir, out_dir, speed_monitor)
  File "/mnt/d/Projects/lit-gpt.qlora/lit-gpt/finetune/adapter.py", line 171, in train
    logits = model(input_ids, max_seq_length=max_seq_length, lm_head_chunk_size=128)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/gerwintmg/miniconda3/envs/lit-gpt/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1522, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/gerwintmg/miniconda3/envs/lit-gpt/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1531, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/gerwintmg/miniconda3/envs/lit-gpt/lib/python3.11/site-packages/lightning/fabric/wrappers.py", line 116, in forward
    output = self._forward_module(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/gerwintmg/miniconda3/envs/lit-gpt/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1522, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/gerwintmg/miniconda3/envs/lit-gpt/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1531, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/mnt/d/Projects/lit-gpt.qlora/lit-gpt/lit_gpt/adapter.py", line 102, in forward
    x, *_ = block(x, (cos, sin), max_seq_length)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/gerwintmg/miniconda3/envs/lit-gpt/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1522, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/gerwintmg/miniconda3/envs/lit-gpt/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1531, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/mnt/d/Projects/lit-gpt.qlora/lit-gpt/lit_gpt/adapter.py", line 148, in forward
    h, new_kv_cache, new_adapter_kv_cache = self.attn(
                                            ^^^^^^^^^^
  File "/home/gerwintmg/miniconda3/envs/lit-gpt/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1522, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/gerwintmg/miniconda3/envs/lit-gpt/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1531, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/mnt/d/Projects/lit-gpt.qlora/lit-gpt/lit_gpt/adapter.py", line 232, in forward
    y = self.scaled_dot_product_attention(q, k, v, mask=mask)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/mnt/d/Projects/lit-gpt.qlora/lit-gpt/lit_gpt/model.py", line 273, in scaled_dot_product_attention
    return torch.nn.functional.scaled_dot_product_attention(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: handle_0 INTERNAL ASSERT FAILED at "../c10/cuda/driver_api.cpp":15, please report a bug to PyTorch.

I'm now closing this issue, as I believe my current error has to do with the currently unmerged pull request #275.
