Netbox plugin: synchronising migrated VMs deletes information instead of changing it #6135
Comments
We just lost a bunch of custom-added data on our virtual machines because of this. The VM should be updated and have its assigned cluster changed, not the whole object deleted and re-created. VMs should not be deleted and re-created in Netbox when they are migrated; simply updating the assigned cluster is all that is needed.
@ITJamie I recently wrote to Vates support, as we have an XOA support contract with them. I can highly recommend this – they have great response times and offer good solutions. The response was:
So I guess this will be resolved soon :)
Has this bug been addressed?
It is in progress... probably by the end of the month (June 2023) 🤞🏻
Fixes #6038, Fixes #6135, Fixes #6024, Fixes #6036
See https://xcp-ng.org/forum/topic/6070
See zammad#5695
See https://xcp-ng.org/forum/topic/6149
See https://xcp-ng.org/forum/topic/6332

Complete rewrite of the plugin. Main functional changes:
- Synchronize VM description
- Fix duplicated VMs in Netbox after disconnecting one pool
- Migrating a VM from one pool to another keeps VM data added manually
- Fix largest IP prefix being picked instead of smallest
- Fix synchronization not working if some pools are unavailable
- Better error messages
Describe the bug
If you migrate a VM from one cluster to another and then sync all changes to Netbox, the plugin re-creates the VM in Netbox (#6038) and deletes the "old" instance of the VM. This also deletes every piece of information that was entered manually.
To Reproduce
Expected behavior
When the migration is done, the VM should not be deleted but updated, matched on its UUID field, so that its parent pool (or, as it is called in Netbox, its cluster) is changed to the new pool. I expect the UUID to actually be unique, and Netbox to reflect the actual state of XOA after synchronisation.
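For illustration only (this is not the plugin's actual implementation; the URL, token and the assumption that the Xen VM UUID is stored in a NetBox custom field named `uuid` are all hypothetical), a minimal TypeScript sketch of that expected flow against the NetBox REST API could look like this:

```ts
// Minimal sketch of the expected behaviour, NOT the plugin's actual code:
// look the VM up in NetBox by its Xen UUID and PATCH its cluster instead
// of deleting and re-creating the object. The URL, the token and the idea
// that the UUID lives in a custom field named "uuid" are assumptions.
const NETBOX_URL = 'https://netbox.example.com' // placeholder
const NETBOX_TOKEN = '0123456789abcdef' // placeholder

const headers = {
  Authorization: `Token ${NETBOX_TOKEN}`,
  'Content-Type': 'application/json',
}

// Change the cluster of an existing NetBox VM, matched by its UUID.
async function moveVmToCluster(vmUuid: string, newClusterId: number): Promise<void> {
  // NetBox exposes custom fields as cf_<name> query parameters.
  const lookup = await fetch(
    `${NETBOX_URL}/api/virtualization/virtual-machines/?cf_uuid=${encodeURIComponent(vmUuid)}`,
    { headers }
  )
  const { results } = (await lookup.json()) as { results: Array<{ id: number }> }
  if (results.length !== 1) {
    throw new Error(`expected exactly one VM with UUID ${vmUuid}, found ${results.length}`)
  }

  // PATCH only the cluster field: comments, tags, custom fields and any
  // other manually entered data on the object are left untouched.
  const res = await fetch(
    `${NETBOX_URL}/api/virtualization/virtual-machines/${results[0].id}/`,
    { method: 'PATCH', headers, body: JSON.stringify({ cluster: newClusterId }) }
  )
  if (!res.ok) {
    throw new Error(`NetBox PATCH failed: ${res.status} ${await res.text()}`)
  }
}
```

The key point of the sketch is the PATCH on the existing object rather than a DELETE followed by a POST, which is what currently destroys the manually added data.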