When k8s present, allow filtering of containers by namespace #2285

Merged — 1 commit merged into master from mike/k8s-ns-in-container-view on Mar 16, 2017

Conversation

ekimekim (Contributor):

To facilitate this, we change the existing behaviour of updateFilters: instead of setting k8s topologies'
filters to [namespace, managed], it now appends the namespace filter to any existing filters. This lets it
apply to both k8s and container topologies without overwriting existing container filters. The managed
filter is instead set in the static topology definition.

This does, however, have the side effect that the ordering of the namespace filter and the managed filter
in k8s topologies is reversed, so it reads:

Show Unmanaged | Hide Unmanaged
foo | bar | default | baz | All Namespaces

instead of:

foo | bar | default | baz | All Namespaces
Show Unmanaged | Hide Unmanaged

Fixes #1504
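
A minimal sketch of the append-based behaviour described above; updateTopologyFilters and APITopologyOptionGroup come from the diff excerpts below, while the topology type name, its Options field, and the body are assumptions rather than the merged code:

	// Sketch only: append the given filter groups to a topology's existing
	// options instead of replacing them, preserving any container filters.
	func updateTopologyFilters(t topologyDesc, filters []APITopologyOptionGroup) topologyDesc {
		t.Options = append(t.Options, filters...) // Options is a hypothetical field name
		return t
	}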

@ekimekim ekimekim requested a review from 2opremio February 28, 2017 02:42
Inline review comments (now hidden) referenced this code, which applies the k8s filters to the matching topologies only when k8s info is present:

	if len(ns) > 0 {
		// We only want to apply k8s filters if any k8s info is present
		for i, t := range topologies {
			if t.id == podsID || t.id == servicesID || t.id == deploymentsID || t.id == replicaSetsID {
				topologies[i] = updateTopologyFilters(t, []APITopologyOptionGroup{
					kubernetesFilters(ns...), unmanagedFilter,
				})
			}
		}
	}

Another inline review comment referenced this change to the IsNamespace filter, which now also checks the Docker pod-namespace label:

	@@ -293,7 +293,14 @@ func IsNotPseudo(n report.Node) bool {
	 // IsNamespace checks if the node is a pod/service in the specified namespace
	 func IsNamespace(namespace string) FilterFunc {
	 	return func(n report.Node) bool {
	-		gotNamespace, _ := n.Latest.Lookup(kubernetes.Namespace)
	+		try_keys := []string{kubernetes.Namespace, docker.LabelPrefix + "io.kubernetes.pod.namespace"}
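
The rest of the new IsNamespace body is not shown in the excerpt above; a minimal sketch of how the lookup over try_keys plausibly proceeds (the loop, the fallthrough to false, and treating Lookup's second return as an ok bool are assumptions, not the merged code):

	// Sketch only: try each candidate key until one yields a namespace value.
	func IsNamespace(namespace string) FilterFunc {
		return func(n report.Node) bool {
			try_keys := []string{kubernetes.Namespace, docker.LabelPrefix + "io.kubernetes.pod.namespace"}
			for _, key := range try_keys {
				if gotNamespace, ok := n.Latest.Lookup(key); ok {
					return gotNamespace == namespace
				}
			}
			// No namespace information on the node; it does not match.
			return false
		}
	}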

2opremio (Contributor):

@ekimekim ping

@ekimekim ekimekim force-pushed the mike/k8s-ns-in-container-view branch from 92621b4 to bc6e030 Compare March 16, 2017 21:20
@ekimekim ekimekim force-pushed the mike/k8s-ns-in-container-view branch from bc6e030 to b01e890 Compare March 16, 2017 21:21
ekimekim (Contributor, Author):

ptal

@ekimekim ekimekim merged commit 76ddc75 into master Mar 16, 2017
@ekimekim ekimekim deleted the mike/k8s-ns-in-container-view branch March 16, 2017 21:56
ekimekim added a commit that referenced this pull request Mar 17, 2017
…iner-view"

This reverts commit 76ddc75, reversing
changes made to 3ade293.

We are rolling this back for now because it causes a bug where sub-topologies
end up with ~3000 repeated copies of the k8s filters, causing performance issues client-side.
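
A plausible mechanism for that duplication, sketched here as an assumption rather than the actual Scope code, is that the namespace filter group gets appended on every report-driven rebuild of the topology registry without checking whether it is already present; a guard of roughly this shape would avoid unbounded growth:

	// Sketch only (hypothetical helper): append a filter group at most once,
	// using a hypothetical ID field to detect an already-present group.
	func appendFilterOnce(existing []APITopologyOptionGroup, group APITopologyOptionGroup) []APITopologyOptionGroup {
		for _, g := range existing {
			if g.ID == group.ID {
				return existing
			}
		}
		return append(existing, group)
	}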
ekimekim added a commit that referenced this pull request Mar 17, 2017
Revert "Merge pull request #2285 from weaveworks/mike/k8s-ns-in-container-view"
ekimekim added a commit that referenced this pull request Mar 20, 2017