Timeout values should be configurable in NAPALM profiles #2390

Closed
lunkwill42 opened this issue Apr 8, 2022 · 0 comments · Fixed by #2460
Labels
CNaaS Related to the CNaaS activity enhancement netconf

Comments

@lunkwill42 (Member)
Is your feature request related to a problem? Please describe.

Users report problems completing PortAdmin configuration changes on Juniper switch stacks. Committing the configuration on a Juniper stack apparently takes longer the more switches the stack contains, and PortAdmin reports timeout errors.

The NAPALM (and/or PyEZ) library appears to default to a 60-second timeout, but nowhere in NAV is this configurable. Reportedly, this can be too short even for stacks of only four switches.

Describe the solution you'd like

An explicit timeout value should be configurable in NAPALM management profiles. This would let users increase the timeout when they experience slow responses, and also let them maintain separate NAPALM profiles for "fast" and "slow" devices, if they want.
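As a rough illustration of the idea, a per-profile timeout could be read from the profile's configuration and passed to the NAPALM driver constructor, which accepts a `timeout` argument. This is only a sketch: the profile field names here are assumptions, not NAV's actual schema.

```python
def effective_timeout(profile, default=60):
    """Pick the profile's explicit timeout if set, else NAPALM's 60 s default.

    `profile` is a plain dict standing in for a NAV management profile;
    the "timeout" key is an assumed field name for illustration.
    """
    value = profile.get("timeout")
    return int(value) if value else default


def open_device(hostname, profile):
    """Open a NAPALM connection using the profile's configured timeout."""
    # Lazy import so the timeout-selection logic above is usable on its own.
    from napalm import get_network_driver

    driver = get_network_driver(profile["driver"])  # e.g. "junos"
    device = driver(
        hostname=hostname,
        username=profile["username"],
        password=profile["password"],
        timeout=effective_timeout(profile),  # per-profile instead of hardcoded
    )
    device.open()
    return device
```

A "slow devices" profile would then simply set a larger timeout (say, 180 seconds) than the default profile, without affecting other devices.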

Describe alternatives you've considered

The timeout could be a global option in portadmin.conf, but then the user could not differentiate between classes of devices, and changing it would require shell access and a web server restart.
