feat: support multiple kubernetes clusters discovery #7895
Conversation
@spacewander @tokers @tzssangglass I think the code part is ready for review; the documentation part will be submitted later. Please help review it, many thanks.
apisix:
  node_listen: 1984
  config_center: yaml
  enable_admin: false
We don't need to disable admin when config_center is yaml. Many existing yaml-related tests have this configuration, but we can remove it from the new test.
I found a one-line script to remove them:
grep -lP 'config_center: yaml\n\s+enable_admin: false' t/*/*.t |xargs -r sed -i -r -z 's/\n\s+enable_admin: false\n/\n/g'
not_match = {
type = "array",
items = {
service = {
We can save some repeated fields with something like service = xxx.service?
Maybe not a good idea, because service has a default value in single mode but no default value in multiple mode.
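(For reference, a minimal Lua sketch of how a shared definition could still cope with the default-value difference; the table names here (service_schema, single_mode_schema, multiple_mode_schema) are hypothetical, not the PR's actual code.)

```lua
local core = require("apisix.core")

-- hypothetical shared definition of the service sub-schema
local service_schema = {
    type = "object",
    properties = {
        schema = { type = "string", enum = { "http", "https" }, default = "https" },
        host = { type = "string" },
        port = { type = "string" },
    },
}

-- single mode keeps the defaults
local single_mode_schema = {
    type = "object",
    properties = {
        service = service_schema,
    },
}

-- multiple mode reuses the same definition but strips the default
local multi_service_schema = core.table.deepcopy(service_schema)
multi_service_schema.properties.schema.default = nil

local multiple_mode_schema = {
    type = "array",
    items = {
        type = "object",
        properties = {
            id = { type = "string" },
            service = multi_service_schema,
        },
    },
}
```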
apisix/discovery/kubernetes/init.lua
Outdated
@@ -29,10 +29,9 @@ local core = require("apisix.core")
local util = require("apisix.cli.util")
local local_conf = require("apisix.core.config_local").local_conf()
local informer_factory = require("apisix.discovery.kubernetes.informer_factory")
local setmetatable = setmetatable
Better to move the setmetatable together with pcall and the others.
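(For illustration, a sketch of the convention being requested: all localized builtins grouped together at the top of the module, before the requires. The extra names beyond setmetatable and pcall are only examples.)

```lua
-- builtins localized together near the top of the module
local error        = error
local pcall        = pcall
local setmetatable = setmetatable
local ipairs       = ipairs
local pairs        = pairs

-- module requires come after the localized builtins
local core = require("apisix.core")
local util = require("apisix.cli.util")
local local_conf = require("apisix.core.config_local").local_conf()
local informer_factory = require("apisix.discovery.kubernetes.informer_factory")
```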
apisix/discovery/kubernetes/init.lua
Outdated
local pattern = "^(.*)/(.*/.*):(.*)$" -- id/namespace/name:port_name
local match = ngx.re.match(service_name, pattern, "jo")
if not match then
    core.log.info("get unexpected upstream service_name: ", service_name)
We should use an error-level log?
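(A sketch of the suggested change, in the context of the function shown in the diff above; the early return is only illustrative.)

```lua
local pattern = "^(.*)/(.*/.*):(.*)$" -- id/namespace/name:port_name
local match = ngx.re.match(service_name, pattern, "jo")
if not match then
    -- error level makes a malformed upstream service_name visible by default
    core.log.error("get unexpected upstream service_name: ", service_name)
    return nil
end
```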
apisix/discovery/kubernetes/init.lua
Outdated
local id = match[1]
local endpoint_dict = ctx[id]
if not endpoint_dict then
    core.log.info("id not exis")
Suggested change:
- core.log.info("id not exis")
+ core.log.info("id not exist")
apisix/discovery/kubernetes/init.lua
Outdated
local endpoint_dict = ngx.shared["kubernetes-" .. id]
if not endpoint_dict then
    error(core.table.concat {
I would recommend using string.format, which is more common.
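(A sketch of what that could look like in the context of the diff above; the message text is illustrative, not the PR's actual wording.)

```lua
local endpoint_dict = ngx.shared["kubernetes-" .. id]
if not endpoint_dict then
    -- one string.format call instead of concatenating a table of fragments
    error(string.format("failed to get ngx.shared dict: kubernetes-%s", id))
end
```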
See apache#7895 (comment) Signed-off-by: spacewander <[email protected]>
See apache#7895 (comment) Signed-off-by: spacewander <[email protected]>
properties = {
token = {
type = "string",
namespace_selector = {
What about reusing the namespace_selector schema?
OK, I'll try to fully optimize the schema.
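(As an illustration of that optimization, a minimal Lua sketch; the table names are hypothetical, not the PR's final schema.)

```lua
-- declared once...
local namespace_selector_schema = {
    type = "object",
    properties = {
        equal     = { type = "string" },
        not_equal = { type = "string" },
        match     = { type = "array", items = { type = "string" } },
        not_match = { type = "array", items = { type = "string" } },
    },
}

-- ...and referenced from both the single-cluster and the multi-cluster schema
local single_mode_schema = {
    type = "object",
    properties = { namespace_selector = namespace_selector_schema },
}

local multiple_mode_schema = {
    type = "array",
    items = {
        type = "object",
        properties = { namespace_selector = namespace_selector_schema },
    },
}
```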
function _M.init_worker()
    local discovery_conf = local_conf.discovery.kubernetes
We should update config-default.yaml to add the id attribute.
Sorry, I didn't understand what you mean, @tzssangglass.
here:
apisix/conf/config-default.yaml
Lines 307 to 336 in 2a44ba6
# kubernetes:
#   service:
#     schema: https                          # apiserver schema, options [http, https], default https
#     host: ${KUBERNETES_SERVICE_HOST}       # apiserver host, options [ipv4, ipv6, domain, environment variable], default ${KUBERNETES_SERVICE_HOST}
#     port: ${KUBERNETES_SERVICE_PORT}       # apiserver port, options [port number, environment variable], default ${KUBERNETES_SERVICE_PORT}
#   client:
#     # serviceaccount token or path of serviceaccount token_file
#     token_file: ${KUBERNETES_CLIENT_TOKEN_FILE}
#     # token: |-
#     #   eyJhbGciOiJSUzI1NiIsImtpZCI6Ikx5ME1DNWdnbmhQNkZCNlZYMXBsT3pYU3BBS2swYzBPSkN3ZnBESGpkUEEif
#     #   6Ikx5ME1DNWdnbmhQNkZCNlZYMXBsT3pYU3BBS2swYzBPSkN3ZnBESGpkUEEifeyJhbGciOiJSUzI1NiIsImtpZCI
#   # kubernetes discovery plugin support use namespace_selector
#   # you can use one of [equal, not_equal, match, not_match] filter namespace
#   namespace_selector:
#     # only save endpoints with namespace equal default
#     equal: default
#     # only save endpoints with namespace not equal default
#     #not_equal: default
#     # only save endpoints with namespace match one of [default, ^my-[a-z]+$]
#     #match:
#     #- default
#     #- ^my-[a-z]+$
#     # only save endpoints with namespace not match one of [default, ^my-[a-z]+$ ]
#     #not_match:
#     #- default
#     #- ^my-[a-z]+$
#   # kubernetes discovery plugin support use label_selector
#   # for the expression of label_selector, please refer to https://kubernetes.io/docs/concepts/overview/working-with-objects/labels
#   label_selector: |-
#     first="a",second="b"
There is only the single_mode_nodes configuration here.
OK, I get your point. This part will be submitted together with the documentation.
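(For reference, a hypothetical shape of the multi-cluster entry that the updated config-default.yaml and documentation might describe; the ids and environment variable names below are made up.)

```yaml
discovery:
  kubernetes:
    - id: release      # becomes the prefix in upstream service_name: id/namespace/name:port_name
      service:
        schema: https
        host: ${KUBERNETES_RELEASE_SERVICE_HOST}
        port: ${KUBERNETES_RELEASE_SERVICE_PORT}
      client:
        token_file: ${KUBERNETES_CLIENT_TOKEN_FILE}
    - id: debug
      service:
        schema: https
        host: ${KUBERNETES_DEBUG_SERVICE_HOST}
        port: ${KUBERNETES_DEBUG_SERVICE_PORT}
      client:
        token_file: ${KUBERNETES_CLIENT_TOKEN_FILE}
      namespace_selector:
        equal: default
```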
A discussion: in the current implementation, single cluster and multiple clusters are implemented separately in the code. Could we support only multiple clusters, so that the original single cluster is simply a multi-cluster configuration containing one cluster?
That makes me think it could mean two things. 1: Abandon compatibility with the current single-cluster configuration; users would have to manually modify their previous configuration after the upgrade, otherwise it cannot be used. 2: Stay compatible with the current single-cluster configuration, but modify the code to treat the single cluster as a multi-cluster configuration with only one array element. I think you may mean 2. However, there is no real benefit, and it brings a performance loss.
core.log.error("list_watch failed, kind: ", handle.kind,
               ", reason: ", "RuntimeException", ", message : ", status)
"RuntimeException" does not look like APISIX-style logs.
But I think it is Kubernetes style.
People view these as APISIX logs, and we don't adopt the log style of every new ecosystem we integrate with.
I think it's best practice to keep things as they are. In the List-Watch process, there are many error messages that may appear in Kubernetes responses, and all of them are in this format (Reason and Message). It's not necessary to rewrite the raw error message content into "APISIX style".
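(For example, a watch error returned by the apiserver is a Status object whose reason and message fields are what the log line above forwards; the values below are illustrative.)

```json
{
  "kind": "Status",
  "apiVersion": "v1",
  "status": "Failure",
  "message": "too old resource version: 12345 (67890)",
  "reason": "Expired",
  "code": 410
}
```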
apisix/cli/ops.lua
Outdated
if enabled_discoveries["kubernetes"] then

    local kubernetes_conf = yaml_conf.discovery["kubernetes"]
    if not sys_conf["discovery_shared_dict"] then
Use the plural: discovery_shared_dicts.
I mean 1.
It's persuasive. Can you talk more about
Maybe this needs more people to join the discussion
Additional id detection and table field access are required in the multiple_mode_nodes function, so obviously there is a certain performance loss.
@zhixiongdu027
I will ASAP; it may take three or four days.
I see the lua_shared_dict kubernetes default is 1m. If there are many services across multiple clusters, it may not be enough, but it also cannot be set very large. How can we evaluate the memory usage of a service?
Please don't worry. In multi-cluster mode, each cluster uses its own ngx.shared.DICT. In addition, you can use namespace_selector and label_selector to filter endpoint information and reduce memory usage.
Thanks. I wonder whether the community will support apisix-seed service discovery for Kubernetes discovery in the future?
@wolgod
Thanks. Will there be a 2.15.x release that supports multiple Kubernetes clusters discovery? Because APISIX 2.x can't switch to version 3 seamlessly.
No, functional PRs are not back-ported.
Description
Fixes:
#7569 -> Support multiple kubernetes clusters
#7199, #7185 -> Remove possible extra spaces in client.token
Checklist
Todo
Doc ...