PyTorch Lightning Model Question #107
Comments
Hi @calvinp0! Thank you for your feedback!

You need to define the […]. However, if you use […]:

🚧 Our implementation for regression is still unstable, but we have made progress in the soon-to-come 0.2.1 version that we will merge in the following days. Reach out and raise issues if you have other questions or concerns. 🚧

To read if you want to keep your LightningModule: you won't have the computation of the metrics that come with the RegressionRoutine; you can directly use the […].

In any case, don't hesitate to give us more details here or to contact us through Discord. (written with @alafage)
Thanks @o-laurent! In the end, I decided to create a simple DataModule:

```python
from typing import List, Optional

import lightning.pytorch as pl
from torch.utils.data import DataLoader

# SmilesDataset and collate_molgraph_dataset are defined elsewhere in this project.


class DataModule(pl.LightningDataModule):
    def __init__(self, data_dir: str, features_generator: List[str], batch_size: int,
                 num_workers: int, persistent_workers: bool = False):
        super().__init__()
        self.data_dir = data_dir
        self.features_generator = features_generator
        self.batch_size = batch_size
        self.num_workers = num_workers
        self.persistent_workers = persistent_workers

    def prepare_data(self):
        """Called only once, and on only one GPU; used for any data download or preparation steps."""
        print("Preparing data...")

    def setup(self, stage: Optional[str] = None):
        """Call in SmilesDataset.

        Need to consider if splitting via scaffolding or 5-fold etc.
        Multiple GPU.
        """
        self.data = SmilesDataset(f'{self.data_dir}/delaney-processed.csv',
                                  features_generator=self.features_generator)
        if stage == 'fit' or stage is None:
            self.train_data = self.data.get_split('train')
            self.val_data = self.data.get_split('val')
            self.test_data = self.data.get_split('test')
        if stage == 'test':
            self.test_data = self.data.get_split('test')

    def train_dataloader(self):
        return DataLoader(self.train_data, batch_size=self.batch_size,
                          num_workers=self.num_workers, collate_fn=collate_molgraph_dataset,
                          persistent_workers=self.persistent_workers)

    def val_dataloader(self):
        return DataLoader(self.val_data, batch_size=self.batch_size,
                          num_workers=self.num_workers, collate_fn=collate_molgraph_dataset,
                          persistent_workers=self.persistent_workers)

    def test_dataloader(self):
        return DataLoader(self.test_data, batch_size=self.batch_size,
                          num_workers=self.num_workers, collate_fn=collate_molgraph_dataset,
                          persistent_workers=self.persistent_workers)
```

I ask because I have attempted to follow the tutorial here, whilst using my own model and dataset, but I now get an error when I run this code:

```python
from lightning.pytorch import Trainer

trainer = Trainer(max_epochs=5)  # , enable_progress_bar=False)
trainer.fit(model=routine, datamodule=data_module)
```

and receive this error:
Actually, I discovered the issue was calling […].
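As an aside on the DataModule shown earlier: the custom `collate_fn` passed to each DataLoader only needs to map a list of dataset items to a batch. A minimal sketch of that contract, using a toy dataset and an illustrative `collate_pairs` (not the real `collate_molgraph_dataset`, whose logic lives in the user's project):

```python
import torch
from torch.utils.data import DataLoader, Dataset

# Illustrative collate_fn: DataLoader hands it a list of dataset items
# and expects a batched result back.
def collate_pairs(batch):
    xs, ys = zip(*batch)  # batch is a list of (features, target) pairs
    return torch.stack(xs), torch.stack(ys)

class ToyDataset(Dataset):
    """Stand-in for SmilesDataset: 10 samples of 4 features and 1 target."""
    def __init__(self, n: int = 10):
        self.x = torch.randn(n, 4)
        self.y = torch.randn(n, 1)

    def __len__(self):
        return len(self.x)

    def __getitem__(self, i):
        return self.x[i], self.y[i]

loader = DataLoader(ToyDataset(), batch_size=4, collate_fn=collate_pairs)
xb, yb = next(iter(loader))  # xb: (4, 4), yb: (4, 1)
```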
Hi @calvinp0, thanks for the details! We could add a comment to advise users to use […]. Please don't hesitate to let us know if we can help you in any way.
Hi @o-laurent, yes, I think it would be great to add that advice for future users. I will try to use the TUTrainer, thanks! On that note (and please tell me if I should open another thread in Discussions), is there a tutorial or any information on the Monte Carlo Dropout wrapper? https://github.com/ENSTA-U2IS-AI/torch-uncertainty/blob/main/torch_uncertainty/models/wrappers/mc_dropout.py
Hi again @calvinp0, thanks! We'll find a place to highlight this when we improve the documentation. We can create a discussion thread or chat on Discord if you have more specific questions. Otherwise, I've just slightly improved the wrapper, its documentation, and the MC-Dropout tutorial on the dev branch. NB: since the modified version of the tutorial is not yet pushed to main, our website's tutorial page remains outdated.
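For readers waiting on the updated tutorial, the core idea behind an MC-Dropout wrapper can be sketched in plain PyTorch. This is a generic illustration, not TorchUncertainty's actual wrapper or its API: dropout layers are kept stochastic at inference time, several forward passes are stacked, and the spread across samples serves as an uncertainty estimate.

```python
import torch
import torch.nn as nn

class MCDropoutSketch(nn.Module):
    """Minimal MC-Dropout idea: keep Dropout layers active at inference
    and stack several stochastic forward passes."""
    def __init__(self, model: nn.Module, num_samples: int = 16):
        super().__init__()
        self.model = model
        self.num_samples = num_samples

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Force every Dropout submodule into train mode so it stays stochastic
        # even when the surrounding model is in eval mode.
        for m in self.model.modules():
            if isinstance(m, nn.Dropout):
                m.train()
        # Shape: (num_samples, batch, out_features).
        return torch.stack([self.model(x) for _ in range(self.num_samples)])

# Toy regressor with a Dropout layer (illustrative architecture).
base = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Dropout(0.2), nn.Linear(32, 1))
base.eval()
mc = MCDropoutSketch(base, num_samples=32)
preds = mc(torch.randn(4, 8))            # (32, 4, 1)
mean, std = preds.mean(0), preds.std(0)  # predictive mean and per-point spread
```

The real wrapper in `torch_uncertainty/models/wrappers/mc_dropout.py` handles more cases (e.g. last-layer dropout, ensemble-style outputs), so treat this only as the underlying mechanism.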
Hi!
I have built my model using PyTorch Lightning, so it has the training_step, validation_step, etc. methods. I attempted to follow the tutorial here: https://torch-uncertainty.github.io/auto_tutorials/tutorial_der_cubic.html#gathering-everything-and-training-the-model
But it errors with
NotImplementedError: Module [CMPNNModel] is missing the required "forward" function
(which I guess may be obvious). So does this mean that to use this package I will need to change my model from a PyTorch Lightning one to a plain Torch one? Or have I done something incorrect? Thank you!
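For context on this error: a `LightningModule` subclasses `nn.Module`, so the model does not need to be rewritten; it only needs a `forward` method in addition to the `*_step` hooks, since that is what external callers invoke via `self(...)`. A minimal sketch of the pattern (the class body is hypothetical, not the real CMPNNModel; a plain `nn.Module` is used here so the example stays self-contained, but the same fix applies to a `LightningModule`):

```python
import torch
from torch import nn

class CMPNNModelSketch(nn.Module):
    """Illustrative model with both a forward() and a Lightning-style hook."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(16, 32), nn.ReLU())
        self.head = nn.Linear(32, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # This is the method the NotImplementedError complains about:
        # it must exist for other modules/routines to call the model.
        return self.head(self.encoder(x))

    def training_step(self, batch, batch_idx):
        # The Lightning-style hook can simply reuse forward via self(...).
        x, y = batch
        return nn.functional.mse_loss(self(x), y)
```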