
alibabacloud slb: support mapping the same TCP and UDP port, e.g. 8000/TCPUDP #197

Open
wants to merge 6 commits into base: master
Conversation

gaopeiliang (Contributor)

Sometimes we want to map the same SLB port for both UDP and TCP to a container, much like a "Host" network.

@@ -144,8 +144,11 @@ func initLbCache(svcList []corev1.Service, minPort, maxPort int32, blockPorts []
var ports []int32
for _, port := range getPorts(svc.Spec.Ports) {
if port <= maxPort && port >= minPort {
newCache[lbId][port] = true
ports = append(ports, port)
value, ok := newCache[lbId][port]
Member

You cannot redeploy a service that is not controlled by kruise-game but uses the same port.

Contributor Author

By design, the cache only stores lbId + port; it has no protocol attribute.

When 8000/TCPUDP is used, the cache holds just lbId:8000, which maps to two service port items, tcp:8000 and udp:8000.

When the cache is re-initialized, listing tcp:8000 and udp:8000 both resolves to the same lbId:8000 entry, so the repeat has to be skipped;

otherwise, with 8000/TCPUDP and 9000/TCP configured, the mapping comes out wrong.
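The re-init behavior described above can be sketched in a few lines of Go. This is a simplified stand-in, not the actual initLbCache code: `dedupePorts` and the flat `seen` map are hypothetical names; the real cache is keyed by lbId and port.

```go
package main

import "fmt"

// dedupePorts mimics the re-init loop: the cache key is lbId + port only
// (no protocol attribute), so tcp:8000 and udp:8000 hit the same entry
// and the second occurrence must be skipped.
func dedupePorts(listed []int32) []int32 {
	seen := map[int32]bool{} // stands in for newCache[lbId]
	var ports []int32
	for _, port := range listed {
		if seen[port] {
			continue // same port already recorded for the other protocol
		}
		seen[port] = true
		ports = append(ports, port)
	}
	return ports
}

func main() {
	// tcp:8000, udp:8000 (i.e. 8000/TCPUDP), then tcp:9000
	fmt.Println(dedupePorts([]int32{8000, 8000, 9000})) // [8000 9000]
}
```

Without the `seen` check, port 8000 would be collected twice, which is exactly the mis-mapping described for the 8000/TCPUDP, 9000/TCP case.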

Contributor Author

All the services here are filtered by SlbIdLabelKey; aren't they all controlled by kruise-game?

Member

I don't think setting the map entry to true twice is a problem. And a service carrying SlbIdLabelKey may have been deployed by the user directly, so not all of them are controlled by kruise-game.

gaopeiliang (Contributor Author) commented Jan 27, 2025

  1. Setting the cache map entry to true twice is not a problem, but appending to podAllocate twice may be. For example, with 8000/TCPUDP,9000/TCP configured, the podAllocate first created is lb:8000,9000; if podAllocate is re-initialized with two appends, the result is lb:8000,8000,9000, which causes an error when the service is resynced.

  2. Filtering all services by SlbIdLabelKey, whether managed by kruise-game or not, and recording their LB ports in the cache so they are not allocated again is correct, I think; a podAllocate item for a service not controlled by kruise-game is simply never used.
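Point 1 can be illustrated with a small Go sketch. This is a hypothetical simplification of initLbCache, not the actual kruise-game code: `allocate`, `svcPort`, and the string formatting are stand-ins (the real code uses util.Int32SliceToString).

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// svcPort mimics one entry of svc.Spec.Ports: with 8000/TCPUDP the same
// port number appears twice, once for TCP and once for UDP.
type svcPort struct {
	port     int32
	protocol string
}

// allocate builds the podAllocate value "lbId:port1,port2,..." while
// recording each port in the cache. Without the cache check, re-initializing
// 8000/TCPUDP plus 9000/TCP would yield "lb:8000,8000,9000".
func allocate(lbId string, svcPorts []svcPort, cache map[int32]bool) string {
	var ports []string
	for _, p := range svcPorts {
		if cache[p.port] {
			continue // already appended for the other protocol
		}
		cache[p.port] = true
		ports = append(ports, strconv.Itoa(int(p.port)))
	}
	return lbId + ":" + strings.Join(ports, ",")
}

func main() {
	cache := map[int32]bool{}
	svcPorts := []svcPort{
		{8000, "TCP"}, {8000, "UDP"}, // 8000/TCPUDP
		{9000, "TCP"},
	}
	fmt.Println(allocate("lb", svcPorts, cache)) // lb:8000,9000
}
```

Skipping the duplicate keeps the re-initialized podAllocate identical to the one created on first allocation, so the later service resync sees a consistent value.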

Member

I got your point

chrisliu1995 (Member) commented Jan 26, 2025

Are you already in our dingtalk community group(44862615)? @gaopeiliang

gaopeiliang (Contributor Author)

> Are you already in our dingtalk community group(44862615)? @gaopeiliang

I will join.

}
}
if len(ports) != 0 {
newPodAllocate[svc.GetNamespace()+"/"+svc.GetName()] = lbId + ":" + util.Int32SliceToString(ports, ",")
log.Infof("svc %s/%s allocate slb %s ports %v", svc.Namespace, svc.Name, lbId, ports)
Member

The log is unnecessary, because podAllocate is printed once initialization finishes.

Contributor Author

Yes.

chrisliu1995 (Member)
> Are you already in our dingtalk community group(44862615)? @gaopeiliang

> I will join.

I have added your DingTalk account; you can check.
