
Difficulties installing dependencies and making it work! #9

Open

Sehaba95 opened this issue Jun 16, 2023 · 9 comments

@Sehaba95

Hi! I have been struggling to run the project on my machine:

OS: Ubuntu 22.04
GPU: GeForce RTX 3080
Python: 3.8

I spent a few days getting it to work! Here I will share how I solved it, step by step.

@Sehaba95
Author

I solved the installation issues as follows:

  1. Create a new virtual environment in the "Building-GAN" folder, and activate it:
    conda create -n "myenv" python=3.8
    conda activate myenv

  2. Export the path of the project:
    export PYTHONPATH="${PYTHONPATH}:/path/to/Building-GAN/"

  3. Install Nvidia driver:
    sudo apt install nvidia-driver-470 nvidia-settings nvidia-prime

  4. Install PyTorch 1.8.0 for CUDA 11.1:
    pip install torch==1.8.0+cu111 torchvision==0.9.0+cu111 torchaudio==0.8.0 -f https://download.pytorch.org/whl/torch_stable.html

  5. Install PyTorch Geometric for PyTorch 1.8.0 and CUDA 11.1:
    pip install --no-index torch-scatter -f https://pytorch-geometric.com/whl/torch-1.8.0+cu111.html
    pip install --no-index torch-sparse -f https://pytorch-geometric.com/whl/torch-1.8.0+cu111.html
    pip install --no-index torch-cluster -f https://pytorch-geometric.com/whl/torch-1.8.0+cu111.html
    pip install --no-index torch-spline-conv -f https://pytorch-geometric.com/whl/torch-1.8.0+cu111.html
    pip install torch-geometric==1.6.2

This worked for me! Hope it works for you too! A quick sanity check of the environment is sketched below.
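If the steps above complete without errors, a short check like this (a minimal sketch, not part of the repository) should import the PyG extensions cleanly and confirm the CUDA build:

    # sanity check: run inside the activated environment
    import torch
    import torch_scatter, torch_sparse, torch_cluster, torch_spline_conv
    import torch_geometric

    print("torch:", torch.__version__)                      # expected: 1.8.0+cu111
    print("CUDA available:", torch.cuda.is_available())     # expected: True
    print("torch_geometric:", torch_geometric.__version__)  # expected: 1.6.2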

@Graana128

Hi @Sehaba95 @chinyich @kaihungc1993.
How can I use this code in Google Colab? I am encountering the following dependency error:
in <cell line: 7>()
5 # from Data.GraphConstructor import GraphConstructor
6 import matplotlib.pyplot as plt
----> 7 from torch_scatter import scatter, scatter_max
8 from util import gumbel_softmax, softmax_to_hard
9 from util_graph import find_max_out_program_index

/usr/lib/python3.10/ctypes/__init__.py in __init__(self, name, mode, handle, use_errno, use_last_error, winmode)
372
373 if handle is None:
--> 374 self._handle = _dlopen(self._name, mode)
375 else:
376 self._handle = handle

OSError: /usr/local/lib/python3.10/dist-packages/torch_scatter/_version_cpu.so: undefined symbol: _ZN3c1017RegisterOperatorsD1Ev

@Sehaba95
Author

You have a broken installation of torch-scatter; you need to reinstall it. You can find more details about your error in this link.
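In my experience this kind of undefined-symbol error means torch-scatter was built against a different torch build than the one installed. A small sketch like the one below (my own suggestion, not from the project docs) prints the wheel index matching the installed torch, so you can reinstall torch-scatter from it:

    import torch

    cuda = torch.version.cuda                  # e.g. "11.1", or None on CPU-only builds
    tag = "cpu" if cuda is None else "cu" + cuda.replace(".", "")
    base = torch.__version__.split("+")[0]     # e.g. "1.8.0"
    print(f"pip install --force-reinstall torch-scatter "
          f"-f https://data.pyg.org/whl/torch-{base}+{tag}.html")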

@asadrizvi64

asadrizvi64 commented Mar 6, 2024

Hey @Sehaba95 @chinyich @kaihungc1993,
could you please look into these dependencies? The error isn't helping me much at the moment.

PS D:\work\graana\Building-GAN> python inference.py
Traceback (most recent call last):
File "D:\work\graana\Building-GAN\inference.py", line 15, in
from torch_geometric.data import DataLoader, Batch
File "C:\Users\HP.pyenv\pyenv-win\versions\3.9.0\lib\site-packages\torch_geometric_init_.py", line 2, in
import torch_geometric.nn
File "C:\Users\HP.pyenv\pyenv-win\versions\3.9.0\lib\site-packages\torch_geometric\nn_init_.py", line 2, in
from .data_parallel import DataParallel
File "C:\Users\HP.pyenv\pyenv-win\versions\3.9.0\lib\site-packages\torch_geometric\nn\data_parallel.py", line 5, in
from torch_geometric.data import Batch
File "C:\Users\HP.pyenv\pyenv-win\versions\3.9.0\lib\site-packages\torch_geometric\data_init_.py", line 1, in
File "C:\Users\HP.pyenv\pyenv-win\versions\3.9.0\lib\site-packages\torch_geometric\data\data.py", line 8, in
from torch_sparse import coalesce, SparseTensor
File "C:\Users\HP.pyenv\pyenv-win\versions\3.9.0\lib\site-packages\torch_sparse_init_.py", line 15, in
torch.ops.load_library(importlib.machinery.PathFinder().find_spec(
File "C:\Users\HP.pyenv\pyenv-win\versions\3.9.0\lib\site-packages\torch_ops.py", line 104, in load_library
ctypes.CDLL(path)
File "C:\Users\HP.pyenv\pyenv-win\versions\3.9.0\lib\ctypes_init_.py", line 374, in init
self._handle = _dlopen(self._name, mode)
d' (or one of its dependencies). Try using the full path with constructor syntax.

The only difference between your installation and mine is that you're using Python 3.8 with CUDA, while I'm on Python 3.9 with CPU only. Below is the list of installed library versions, followed by a sketch of a compatibility check for the extension wheels.

PS D:\work\graana\Building-GAN> pip list
Package Version


absl-py 0.9.0
aio-pika 8.2.3
aiofiles 23.2.1
aiogram 2.25.2
aiohttp 3.8.6
aiohttp-retry 2.8.3
aiormq 6.4.2
aiosignal 1.3.1
altair 5.2.0
annotated-types 0.6.0
anyio 4.2.0
APScheduler 3.9.1.post1
ase 3.22.1
asgiref 3.7.2
astunparse 1.6.3
async-timeout 4.0.3
attrs 22.1.0
autobahn 23.6.2
Automat 22.10.0
Babel 2.9.1
beautifulsoup4 4.12.3
bidict 0.22.1
blis 0.7.11
boto3 1.34.2
botocore 1.34.2
CacheControl 0.12.14
cachetools 5.3.2
catalogue 2.0.10
certifi 2023.11.17
cffi 1.16.0
channels 4.0.0
charset-normalizer 3.3.2
click 8.1.7
clip 1.0
cloudpathlib 0.16.0
cloudpickle 2.2.1
cmake 3.28.3
colorama 0.4.6
colorclass 2.2.2
coloredlogs 15.0.1
colorhash 1.2.1
confection 0.1.4
confluent-kafka 2.3.0
constantly 23.10.4
cryptography 41.0.7
cycler 0.12.1
cymem 2.0.8
daphne 4.0.0
dask 2022.10.2
Django 4.2.8
django-cors-headers 4.3.1
djangorestframework 3.14.0
dnspython 2.3.0
docopt 0.6.2
exceptiongroup 1.2.0
fastapi 0.109.0
fbmessenger 6.0.0
ffmpy 0.3.1
filelock 3.13.1
fire 0.5.0
flatbuffers 23.5.26
fonttools 4.46.0
frozenlist 1.4.1
fsspec 2023.12.2
ftfy 6.1.3
future 0.18.3
gast 0.4.0
gdown 5.0.1
gensim 4.3.2
google-auth 2.25.2
google-auth-oauthlib 1.0.0
google-pasta 0.2.0
googledrivedownloader 0.4
gradio 3.43.1
gradio_client 0.5.0
greenlet 3.0.2
grpcio 1.60.0
h11 0.14.0
h5py 3.10.0
httpcore 1.0.2
httptools 0.6.1
httpx 0.26.0
huggingface-hub 0.20.1
humanfriendly 10.0
hyperlink 21.0.0
idna 3.6
importlib-metadata 7.0.0
importlib-resources 6.1.1
incremental 22.10.0
install 1.3.5
isodate 0.6.1
jax 0.4.23
Jinja2 3.1.2
jmespath 1.0.1
joblib 1.2.0
jsonpickle 3.0.2
jsonschema 4.17.3
keras 2.12.0
kiwisolver 1.4.5
langcodes 3.3.0
libclang 16.0.6
llvmlite 0.42.0
locket 1.0.0
magic-filter 1.0.12
Markdown 3.5.1
markdown-it-py 3.0.0
MarkupSafe 2.1.3
matplotlib 3.5.3
mattermostwrapper 2.2
mdurl 0.1.2
ml-dtypes 0.3.1
mpmath 1.3.0
msgpack 1.0.7
multidict 5.2.0
murmurhash 1.0.10
networkx 2.6.3
nltk 3.8.1
numba 0.59.0
numpy 1.24.4
oauthlib 3.2.2
opencv-python 4.9.0.80
opt-einsum 3.3.0
orjson 3.9.12
packaging 20.9
pamqp 3.2.1
pandas 2.2.0
partd 1.4.1
Pillow 10.1.0
pip 24.0
pluggy 1.3.0
portalocker 2.8.2
preshed 3.0.9
prompt-toolkit 3.0.28
protobuf 4.23.3
psutil 5.9.8
psycopg2-binary 2.9.9
pyasn1 0.5.1
pyasn1-modules 0.3.0
pycparser 2.21
pydantic 2.6.0
pydantic_core 2.16.1
pydot 1.4.2
pydub 0.25.1
Pygments 2.17.2
PyJWT 2.8.0
pykwalify 1.8.0
pymongo 4.3.3
pyOpenSSL 23.3.0
pyparsing 3.1.1
pyreadline3 3.4.1
pyrsistent 0.20.0
python-crfsuite 0.9.9
python-dateutil 2.8.2
python-engineio 4.8.0
python-louvain 0.16
python-multipart 0.0.6
python-socketio 5.10.0
pytz 2022.7.1
pywin32 306
PyYAML 6.0.1
questionary 1.10.0
randomname 0.1.5
rasa 3.6.15
rasa-sdk 3.6.2
rdflib 7.0.0
redis 4.6.0
regex 2022.10.31
requests 2.31.0
requests-oauthlib 1.3.1
requests-toolbelt 1.0.0
rich 13.7.0
rocketchat-API 1.30.0
rsa 4.9
ruamel.yaml 0.17.21
ruamel.yaml.clib 0.2.8
ruff 0.1.14
s3transfer 0.9.0
safetensors 0.4.1
sanic 21.12.2
Sanic-Cors 2.0.1
sanic-jwt 1.8.0
sanic-routing 0.7.2
scikit-learn 1.1.3
scipy 1.11.4
semantic-version 2.10.0
sentence-transformers 2.2.2
sentencepiece 0.1.99
sentry-sdk 1.14.0
service-identity 23.1.0
setuptools 69.0.2
shellingham 1.5.4
simple-websocket 1.0.0
six 1.16.0
sklearn-crfsuite 0.3.6
slack-sdk 3.26.1
smart-open 6.4.0
sniffio 1.3.0
soupsieve 2.5
spacy 3.7.2
spacy-legacy 3.0.12
spacy-loggers 1.0.5
SQLAlchemy 1.4.50
sqlparse 0.4.4
srsly 2.4.8
starlette 0.35.1
structlog 23.2.0
structlog-sentry 2.0.3
sympy 1.12
tabulate 0.9.0
tarsafe 0.0.4
tensorboard 2.12.3
tensorboard-data-server 0.7.2
tensorboardX 2.6.2.2
tensorflow 2.12.0
tensorflow-estimator 2.12.0
tensorflow-hub 0.13.0
tensorflow-intel 2.12.0
tensorflow-io-gcs-filesystem 0.31.0
termcolor 2.4.0
terminaltables 3.1.10
thinc 8.2.2
threadpoolctl 3.2.0
tokenizers 0.15.0
tomlkit 0.12.0
toolz 0.12.0
torch 1.8.0+cpu
torch-cluster 1.5.9
torch-geometric 1.6.2
torch-scatter 2.0.8
torch-sparse 0.6.12
torch-spline-conv 1.2.1
torchaudio 0.8.0
torchvision 0.9.0+cpu
tqdm 4.66.1
transformers 4.36.2
twilio 8.2.2
Twisted 23.10.0
twisted-iocpsupport 1.0.4
txaio 23.1.1
typer 0.9.0
typing_extensions 4.9.0
typing-utils 0.1.0
tzdata 2023.3
tzlocal 5.2
ujson 5.9.0
urllib3 2.1.0
uvicorn 0.27.0.post1
wasabi 1.1.2
wcwidth 0.2.12
weasel 0.3.4
webexteamssdk 1.6.1
websockets 10.4
Werkzeug 3.0.1
wheel 0.42.0
wrapt 1.14.1
wsproto 1.2.0
yarl 1.9.4
zipp 3.17.0
zope.interface 6.1
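For reference, here is the quick check I would run (just a sketch, not the project's code) to see whether the CPU extension wheels load against torch 1.8.0+cpu; if the import fails with the DLL error above, reinstalling them from the matching CPU wheel index may help:

    import torch
    print("torch build:", torch.__version__)    # expected: 1.8.0+cpu

    try:
        import torch_scatter, torch_sparse      # the imports inference.py pulls in
        print("PyG extensions load correctly")
    except OSError as err:
        print("extension failed to load:", err)
        print("try: pip install --force-reinstall torch-scatter torch-sparse "
              "-f https://data.pyg.org/whl/torch-1.8.0+cpu.html")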

@Graana128

Graana128 commented Mar 14, 2024 via email

@Sehaba95
Author

You have to downgrade Python to 3.8, and it should work!

@Graana128

Graana128 commented Mar 15, 2024

You have to downgrade Python to 3.8, and it should work!

Please let me know if you have a solution for this error.

CUDA not available, using CPU.
Namespace(b1=0.5, b2=0.999, batch_size=8, comment='0', cuda='0', d_lr=0.0001, eval_period=20, far_weight=0.0, g_lr=0.0001, gan_loss='WGANGP', gp_lambda=10.0, if_curriculum=False, latent_dim=128, lp_hinge_margin=1.0, lp_loss_fun='hinge', lp_sample_size=20, lp_similarity_fun='cos', lp_weight=0.0, n_cpu=8, n_critic_d=1, n_critic_g=5, n_critic_p=5, n_epochs=1000, noise_dim=32, plot_period=10, program_layer=4, raw_dir='Data/6types-raw_data', test_size=4000, tr_weight=0.0, train_data_dir='Data/6types-processed_data', train_size=96000, variation_eval_id1=96018, variation_eval_id2=96010, variation_num=25, voxel_layer=12)
Total 120000 data: 96000 train / 4000 test
Data/6types-processed_data\data096018.pt
C:\Users\92332\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\utils\data\dataloader.py:474: UserWarning: This DataLoader will create 8 worker processes in total. Our suggested max number of worker in current system is 4 (cpuset is not taken into account), which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
CUDA not available, using CPU.
Namespace(b1=0.5, b2=0.999, batch_size=8, comment='0', cuda='0', d_lr=0.0001, eval_period=20, far_weight=0.0, g_lr=0.0001, gan_loss='WGANGP', gp_lambda=10.0, if_curriculum=False, latent_dim=128, lp_hinge_margin=1.0, lp_loss_fun='hinge', lp_sample_size=20, lp_similarity_fun='cos', lp_weight=0.0, n_cpu=8, n_critic_d=1, n_critic_g=5, n_critic_p=5, n_epochs=1000, noise_dim=32, plot_period=10, program_layer=4, raw_dir='Data/6types-raw_data', test_size=4000, tr_weight=0.0, train_data_dir='Data/6types-processed_data', train_size=96000, variation_eval_id1=96018, variation_eval_id2=96010, variation_num=25, voxel_layer=12)
Total 120000 data: 96000 train / 4000 test
Data/6types-processed_data\data096018.pt
C:\Users\92332\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\utils\data\dataloader.py:474: UserWarning: This DataLoader will create 8 worker processes in total. Our suggested max number of worker in current system is 4 (cpuset is not taken into account), which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Traceback (most recent call last):
File "", line 1, in
File "C:\Users\92332\AppData\Local\Programs\Python\Python38\lib\multiprocessing\spawn.py", line 116, in spawn_main
exitcode = _main(fd, parent_sentinel)
File "C:\Users\92332\AppData\Local\Programs\Python\Python38\lib\multiprocessing\spawn.py", line 125, in _main
prepare(preparation_data)
File "C:\Users\92332\AppData\Local\Programs\Python\Python38\lib\multiprocessing\spawn.py", line 236, in prepare
_fixup_main_from_path(data['init_main_from_path'])
File "C:\Users\92332\AppData\Local\Programs\Python\Python38\lib\multiprocessing\spawn.py", line 287, in _fixup_main_from_path
main_content = runpy.run_path(main_path,
File "C:\Users\92332\AppData\Local\Programs\Python\Python38\lib\runpy.py", line 262, in run_path
return _run_module_code(code, init_globals, run_name,
File "C:\Users\92332\AppData\Local\Programs\Python\Python38\lib\runpy.py", line 95, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "C:\Users\92332\AppData\Local\Programs\Python\Python38\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "D:\GAN\Building-GAN-master\inference.py", line 104, in
evaluate(test_data_loader, generator, args.raw_dir, viz_dir, follow_batch, device_ids, number_of_batches=n_batches,trunc=trunc_num)
File "D:\GAN\Building-GAN-master\util_eval.py", line 84, in evaluate
for i, g in enumerate(data_loader):
File "C:\Users\92332\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\utils\data\dataloader.py", line 355, in iter
return self._get_iterator()
File "C:\Users\92332\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\utils\data\dataloader.py", line 301, in _get_iterator
return _MultiProcessingDataLoaderIter(self)
File "C:\Users\92332\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\utils\data\dataloader.py", line 914, in init
w.start()
File "C:\Users\92332\AppData\Local\Programs\Python\Python38\lib\multiprocessing\process.py", line 121, in start
self._popen = self._Popen(self)
File "C:\Users\92332\AppData\Local\Programs\Python\Python38\lib\multiprocessing\context.py", line 224, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "C:\Users\92332\AppData\Local\Programs\Python\Python38\lib\multiprocessing\context.py", line 326, in _Popen
return Popen(process_obj)
File "C:\Users\92332\AppData\Local\Programs\Python\Python38\lib\multiprocessing\popen_spawn_win32.py", line 45, in init
prep_data = spawn.get_preparation_data(process_obj._name)
File "C:\Users\92332\AppData\Local\Programs\Python\Python38\lib\multiprocessing\spawn.py", line 154, in get_preparation_data
_check_not_importing_main()
File "C:\Users\92332\AppData\Local\Programs\Python\Python38\lib\multiprocessing\spawn.py", line 134, in _check_not_importing_main
raise RuntimeError('''
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.

    This probably means that you are not using fork to start your
    child processes and you have forgotten to use the proper idiom
    in the main module:

        if __name__ == '__main__':
            freeze_support()
            ...

    The "freeze_support()" line can be omitted if the program
    is not going to be frozen to produce an executable.
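In other words, the fix the message is asking for is to keep the code that creates the DataLoader workers under a __name__ guard, or to set the worker count (the n_cpu option in the Namespace output above, which presumably feeds num_workers) to 0 so that no child processes are spawned. A minimal, self-contained sketch of the idiom (not the repo's actual code):

    import torch
    from torch.utils.data import DataLoader, TensorDataset

    def main():
        dataset = TensorDataset(torch.arange(16).float())
        # num_workers=0 would avoid spawning worker processes on Windows entirely
        loader = DataLoader(dataset, batch_size=4, num_workers=2)
        for (batch,) in loader:
            print(batch)

    if __name__ == '__main__':
        main()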

@asadrizvi64

asadrizvi64 commented Mar 18, 2024

@Sehaba95 sorry to bother you again, I'm running it on:

OS: Windows 10
GPU: GeForce GTX 1080ti 8GB
Python: 3.8

Running inference.py crashes VS Code, probably because my system doesn't meet the project requirements. I just wanted to ask what the inputs are when running inference, and whether you could upload a video demo of running the project.

@Sehaba95
Author

To run inference.py, I just followed what is written in the README. To understand how it works in more detail, you can read through the inference.py script!

@asadrizvi64 I am not the author of this project! I just shared how I made it work, so that anyone who wants to use or test this project can get started faster!
