Are we doing cross validation right? #30

Open
colinsauze opened this issue Dec 9, 2021 · 1 comment

Comments

@colinsauze
Member

In the neural networks section we tell people to do a 4-way cross-validation and, once those folds are complete, to retrain on the entire dataset. Is this really the right way to do things? In other neural net frameworks (e.g. Keras) you can do multiple iterations of training, so that final retraining isn't needed: the training at each split continues from where the previous one left off. Can scikit-learn behave like this?
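For context, scikit-learn's `MLPClassifier` can in fact continue training across calls, via `warm_start=True` or `partial_fit()`. A minimal sketch (`load_digits` and the hyperparameters are just placeholders, not the lesson's actual setup):

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# warm_start=True: each call to fit() resumes from the current weights
# instead of re-initialising the network.
clf = MLPClassifier(hidden_layer_sizes=(50,), max_iter=50,
                    warm_start=True, random_state=0)
for _ in range(4):
    clf.fit(X_train, y_train)  # continues where the last fit() stopped

# partial_fit() runs a single pass and likewise keeps the learned state;
# classes= is required on the first call.
clf2 = MLPClassifier(hidden_layer_sizes=(50,), random_state=0)
for _ in range(5):
    clf2.partial_fit(X_train, y_train, classes=np.unique(y))
```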

@MaciejWas

Hi,

> the training at each split will continue from where the previous one left off

I'm not sure I understand you correctly, but I believe that in CV each iteration is supposed to be independent of the rest. There might be some tricks Keras uses to minimize computation; could you give the name of the specific function in Keras or another framework?

I think the example is fine as long as the metrics on the CV results look good.
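To make the independence point concrete, here is a minimal sketch of the pattern I have in mind (placeholder dataset and hyperparameters, not the lesson's code):

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
clf = MLPClassifier(hidden_layer_sizes=(50,), max_iter=300, random_state=0)

# cross_val_score clones the estimator for every split, so each of the
# 4 folds trains an independent model from fresh initial weights.
scores = cross_val_score(clf, X, y, cv=4)
print("per-fold accuracy:", scores)

# If the fold scores look acceptable, do one final fit on all the data;
# that refit is deliberately separate from the CV runs.
clf.fit(X, y)
```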
