After building a node from scratch with Docker a few times, I've noticed a possible reason why the VDP manager service doesn't publish itself or become accessible right away. This is with the defaults, not changing any of the default VDP settings; see below for the workaround that resolved the port deployment issue for me.
The easy-provider configuration process can repeatedly publish the local VDP on an interface IP address that doesn't exist in the container. Occasionally the published address happens to match the container's actual interface at deployment time, but I found this rarely happens.
The following demonstrates the issue and a current workaround:
In this test the Docker container is run in node mode, and "Easy-provider" sets up 100.66.0.3 as the manager-url (this can change with each deployment of the container, which is normal), with endpoint proxies on the same IP for ports 8887 and 8880.
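A quick way to confirm what Easy-provider actually published is to dump the VDP from inside the container. This is only a sketch: the command name is quoted as used in this report, and I'm assuming the manager URL appears in its output; exact CLI syntax may differ by version.

```sh
# Inside the container: dump the current VDP and look for the advertised
# manager URL (assuming it appears in the output; exact syntax may vary).
show-vdp | grep -i manager
# In this run the advertised URL was https://100.66.0.3:8881
```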
Thereafter I check the container's local lvpnc_30925fcc adapter, which has an IP address of 100.66.0.4.
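Standard iproute2 tooling is enough to check the adapter address; the container name lvpn-node below is illustrative, not the real one from this deployment.

```sh
# From the host: inspect the lvpnc_30925fcc adapter inside the running
# container ("lvpn-node" is a placeholder container name).
docker exec lvpn-node ip -4 addr show dev lvpnc_30925fcc
# -> inet 100.66.0.4/... (does not match the published 100.66.0.3)
```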
I then manually re-run generate-vdp with all the same details, only updating the IP address to match the local lvpnc_30925fcc adapter IP. The VDP manager then becomes accessible.
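For reference, the re-run looked roughly like the sketch below. The flag name is hypothetical rather than copied from the CLI help, so treat it as an illustration of the idea, not the exact invocation.

```sh
# Re-generate the VDP with the same details, pointing the manager at the
# adapter IP that actually exists in the container. The --manager-url flag
# name is hypothetical; check the generate-vdp help for the real option.
generate-vdp --manager-url https://100.66.0.4:8881
```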
Without manually re-binding to a valid adapter IP, important commands such as push-vdp fail, since the manager is bound to 100.66.0.3 (https://100.66.0.3:8881), which doesn't exist in the container.
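The failure is easy to reproduce from inside the container with curl; the -k flag is there on the assumption that the manager's HTTPS endpoint uses a certificate curl won't trust by default.

```sh
# Inside the container: the advertised manager URL is unreachable because
# 100.66.0.3 is not assigned to any interface here.
curl -k https://100.66.0.3:8881/    # fails (no route / connection refused)
# After re-running generate-vdp with the real adapter IP:
curl -k https://100.66.0.4:8881/    # responds
```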
Checking via show-vdp showed the correct updates in the output, and I could access the VDP successfully from inside the container. A push-vdp also now reflects the correction.
The issue above is, however, possibly a non-issue for external users who only read the VDP configuration from outside the container, since Docker will pass an inbound session to any adapter in the container that has the listening port open; it isn't explicitly locked down. In testing, using a public IP or FQDN for the VDP manager and accessing it from outside the container on TCP port 8881 appears to work.
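A likely explanation, and this is my assumption, is that the manager process listens on all addresses and merely advertises the wrong one; that would be easy to confirm with ss, if it's available in the image:

```sh
# List TCP listeners inside the container ("lvpn-node" is a placeholder).
# If the manager shows up as 0.0.0.0:8881 or *:8881, Docker's published
# port will reach it regardless of the bogus advertised IP.
docker exec lvpn-node ss -ltn | grep 8881
```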