
In case of kubernetes integration detected return manifest in standalone agent layout instead of policy #114439

Conversation

MichaelKatsoulis
Contributor

Summary

After discussions in #110408, this PR makes the following change:

  • In the add agent flyout, if a K8s integration is detected, the standalone-mode policy is the K8s DaemonSet manifest rather than the general agent policy, and the instructions to install the agent become kubectl apply -f filename.yaml rather than ./elastic-agent install.

In case no K8s integration is detected in the policy: (screenshots: policy_no_k8s, agent_start)

In case a K8s integration is detected: (screenshots: configmap, daemonset, kubectl_apply_command)

Checklist

@MichaelKatsoulis MichaelKatsoulis requested a review from a team as a code owner October 11, 2021 08:35
@botelastic botelastic bot added the Team:Fleet Team label for Observability Data Collection Fleet team label Oct 11, 2021
@elasticmachine
Contributor

Pinging @elastic/fleet (Team:Fleet)

@MichaelKatsoulis MichaelKatsoulis added enhancement New value added to drive a business result Team:Fleet Team label for Observability Data Collection Fleet team and removed Team:Fleet Team label for Observability Data Collection Fleet team labels Oct 11, 2021
@@ -75,7 +75,7 @@ export async function getFullAgentPolicy(
     id: agentPolicy.id,
     outputs: {
       ...outputs.reduce<FullAgentPolicy['outputs']>((acc, output) => {
-        acc[getOutputIdForAgentPolicy(output)] = transformOutputToFullPolicyOutput(output);
+        acc[getOutputIdForAgentPolicy(output)] = transformOutputToFullPolicyOutput(output, standalone);
Contributor Author

@nchaulet I think this was forgotten, maybe as part of #111002, and this resulted in standalone always being false by default. As a result, ES_USERNAME and ES_PASSWORD weren't added to the outputs section of the policy.
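A minimal sketch (with hypothetical, simplified names) of why this bug was easy to miss: when a function takes an optional flag defaulting to false, call sites that were never updated keep compiling and silently take the default.

```typescript
// Hypothetical simplification of the transform: the optional
// `standalone` flag defaults to false, so a caller that forgets
// to forward it always gets the non-standalone output.
function transformOutput(output: { hosts: string[] }, standalone = false) {
  return standalone
    ? { hosts: output.hosts, username: '${ES_USERNAME}', password: '${ES_PASSWORD}' }
    : { hosts: output.hosts };
}

// Forgotten call site: compiles fine, credentials never emitted.
const legacyCall = transformOutput({ hosts: ['http://localhost:9200'] });
// Fixed call site: flag forwarded explicitly.
const fixedCall = transformOutput({ hosts: ['http://localhost:9200'] }, true);
```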

@MichaelKatsoulis
Contributor Author

cc @mukeshelastic

@MichaelKatsoulis MichaelKatsoulis added the release_note:skip Skip the PR/issue when compiling release notes label Oct 11, 2021
import { DownloadStep, AgentPolicySelectionStep } from './steps';
import type { BaseProps } from './types';

type Props = BaseProps;

const RUN_INSTRUCTIONS = './elastic-agent install';

export const elasticAgentPolicy = '';
Member

seems unused?

Contributor Author

Yes, you are right.

if (!pkg) {
return;
}
if (pkg.name === 'kubernetes') {
Member

return;
}
let found = false;
(agentPol.package_policies as PackagePolicy[]).forEach(({ package: pkg }) => {
Member

I think this whole block could probably be simplified using .some (https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/some), with something like setIsK8s((agentPol.package_policies as PackagePolicy[]).some(({ package: pkg }) => pkg?.name === 'kubernetes') ? 'true' : 'false')
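The suggested .some() detection can be isolated as a pure helper; this is a sketch with the package-policy shape reduced to what the check needs, not the merged code.

```typescript
// Assumed minimal shape of a package policy for this check.
interface PackagePolicy {
  package?: { name: string };
}

// True if any package policy in the agent policy is the kubernetes
// integration; .some() short-circuits on the first match, replacing
// the forEach-plus-found-flag pattern.
function hasK8sIntegration(packagePolicies: PackagePolicy[]): boolean {
  return packagePolicies.some(({ package: pkg }) => pkg?.name === 'kubernetes');
}
```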

export const StandaloneInstructions = React.memo<Props>(({ agentPolicy, agentPolicies }) => {
const { getHref } = useLink();
const core = useStartServices();
const { notifications } = core;

const [selectedPolicyId, setSelectedPolicyId] = useState<string | undefined>(agentPolicy?.id);
const [fullAgentPolicy, setFullAgentPolicy] = useState<any | undefined>();
const [isK8s, setIsK8s] = useState<string | undefined>('isLoading');
Member

Using true and false as strings is a little error-prone; what do you think of using these types?
useState<'IS_LOADING' | 'IS_KUBERNETES' | 'IS_NOT_KUBERNETES'>('IS_LOADING');
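The benefit of the union type over 'true'/'false'/'isLoading' strings, sketched outside of React: a typo or a forgotten case becomes a compile-time error instead of a silent runtime mismatch. The helper name is illustrative only.

```typescript
// Three-state union instead of ad-hoc string values.
type K8sStatus = 'IS_LOADING' | 'IS_KUBERNETES' | 'IS_NOT_KUBERNETES';

// Exhaustive switch: the compiler knows every member of K8sStatus is
// handled, so adding a fourth state later forces this code to change.
function describeStatus(status: K8sStatus): string {
  switch (status) {
    case 'IS_LOADING':
      return 'detecting integrations...';
    case 'IS_KUBERNETES':
      return 'kubernetes integration found';
    case 'IS_NOT_KUBERNETES':
      return 'no kubernetes integration';
  }
}
```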

}
}, [fullAgentPolicy, isK8s]);

function policyMsg() {
Member

To be consistent with how we do this in other parts of Fleet, and to avoid creating a function, you could do it like:

const policyMsg = isK8s === 'true'
  ? (<FormattedMessage ... />)
  : (<FormattedMessage ... />);

and later use it like
<>{policyMsg}</>

return;
}
setYaml(fullAgentPolicy);
setRunInstructions('kubectl apply -f elastic-agent.yml');
Member

Nitpick, but maybe move these two commands into constants at the top of the file: KUBERNETES_RUN_INSTRUCTIONS and STANDALONE_RUN_INSTRUCTIONS.

Member

you probably do not need a state

const runInstructions = isK8s === 'true' ? KUBERNETES_RUN_INSTRUCTIONS : STANDALONE_RUN_INSTRUCTIONS;
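Combining the two suggestions above (hoisted constants, and deriving the command instead of storing it in state) might look like this sketch; the derivation is a pure function of isK8s, so it can never get out of sync with the detection state.

```typescript
// Constants hoisted to the top of the file, as suggested.
const KUBERNETES_RUN_INSTRUCTIONS = 'kubectl apply -f elastic-agent.yml';
const STANDALONE_RUN_INSTRUCTIONS = './elastic-agent install';

// No extra useState: recomputed on every render directly from isK8s.
function runInstructionsFor(isK8s: string): string {
  return isK8s === 'true' ? KUBERNETES_RUN_INSTRUCTIONS : STANDALONE_RUN_INSTRUCTIONS;
}
```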

if (typeof fullAgentPolicy === 'object') {
return;
}
setYaml(fullAgentPolicy);
Member

You probably do not need setYaml either; the way I would implement this, useMemo returns a value and computes it only when its dependencies change.

So you could do

const yaml = useMemo(() => {
  return fullAgentPolicyToYaml(fullAgentPolicy);
}, [fullAgentPolicy]);

Contributor Author

This is very tricky. The reason I did it like this is that I didn't want this useMemo to always return a value for the yaml. There are unfortunately some race conditions where isK8s === true but fullAgentPolicy hasn't been updated yet (it remains an object instead of a string), and also the opposite.
The way I did it, those cases are skipped (setYaml does not run, so the value of yaml remains as it was before).
If I go with the const yaml = useMemo(() => { return fullAgentPolicyToYaml(fullAgentPolicy); } approach, then I always have to return a value, which in those cases I shouldn't.

Member

In this case you should probably use useEffect instead of useMemo
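The guard the author describes — keep the previous yaml while fullAgentPolicy is still in its raw object form — can be isolated as a pure helper, which is easy to test independently of the hook that calls it. The helper name is hypothetical.

```typescript
// Returns the yaml to display: while the policy is still the raw
// object (not yet serialized to a string), keep the previous value
// instead of rendering a half-updated state.
function nextYaml(fullAgentPolicy: unknown, previousYaml: string): string {
  if (typeof fullAgentPolicy !== 'string') {
    return previousYaml;
  }
  return fullAgentPolicy;
}
```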

const body = fullAgentConfigMap;
const headers: ResponseHeaders = {
'content-type': 'text/x-yaml',
'content-disposition': `attachment; filename="elastic-agent.yml"`,
Member

Do we need to generate a different filename for the manifest?


};

const configMapYaml = fullAgentConfigMapToYaml(fullAgentConfigMap, safeDump);
const updateMapHosts = configMapYaml.replace('http://localhost:9200', '{ES_HOST}');
Member

It will not be localhost:9200 in any environment other than your local dev one. We should probably allow passing this as an option to getFullAgentPolicy, or we can rely on the user setting the correct value in the Fleet settings and remove that line.

Contributor Author

Yes, I thought of that too. I decided to pass it as an option to getFullAgentPolicy. The problem is that outputs is an array and the hosts of each output is also an array. I'm not sure how multiple outputs and multiple hosts per output in a policy can be configured in the UI, though. I can just change the first one of each array.
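What the author describes — rewriting only the first host of each output — could be sketched like this; the shape of the outputs object is assumed and the helper name is hypothetical.

```typescript
// Assumed minimal shape of one output entry in the full policy.
interface PolicyOutput {
  hosts: string[];
}

// Replace only the first host of each output with a placeholder,
// leaving any additional hosts untouched.
function replaceFirstHosts(
  outputs: Record<string, PolicyOutput>,
  placeholder: string
): Record<string, PolicyOutput> {
  return Object.fromEntries(
    Object.entries(outputs).map(([id, output]) => [
      id,
      { ...output, hosts: [placeholder, ...output.hosts.slice(1)] },
    ])
  );
}
```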

Member

@MichaelKatsoulis Normally it's the responsibility of the user to configure these hosts correctly in Fleet, so I think we should not replace it here.

@MichaelKatsoulis
Contributor Author

@nchaulet Thanks for your review. I updated the PR after your comments

Member

@nchaulet nchaulet left a comment

One last small comment, but after this 🚀

@MichaelKatsoulis
Contributor Author

@elasticmachine merge upstream

@kibanamachine
Contributor

💛 Build succeeded, but was flaky


Test Failures

Kibana Pipeline / general / X-Pack API Integration Tests.x-pack/test/api_integration/apis/search/session·ts.apis search search session touched time updates when you poll on an search

Link to Jenkins

Standard Out

Failed Tests Reporter:
  - Test has not failed recently on tracked branches

[00:00:00]     │
[00:00:00]       └-: apis
[00:00:00]         └-> "before all" hook in "apis"
[00:00:00]         └-: search
[00:00:00]           └-> "before all" hook in "search"
[00:00:03]           └-: search session
[00:00:03]             └-> "before all" hook for "should fail to extend a nonexistent session"
[00:00:03]             └-> should fail to extend a nonexistent session
[00:00:03]               └-> "before each" hook: global before each for "should fail to extend a nonexistent session"
[00:00:03]               │ proc [kibana] [2021-10-18T11:18:08.041+00:00][ERROR][plugins.dataEnhanced.data_enhanced] [object Object]
[00:00:03]               └- ✓ pass  (90ms)
[00:00:03]             └-> should sync search ids into not persisted session
[00:00:03]               └-> "before each" hook: global before each for "should sync search ids into not persisted session"
[00:00:03]               │ debg Waiting up to 5000ms for searches persisted into session...
[00:00:03]               │ proc [kibana] [2021-10-18T11:18:08.199+00:00][ERROR][plugins.dataEnhanced.data_enhanced] [object Object]
[00:00:04]               │ debg --- retry.waitForWithTimeout error: expected 200 "OK", got 404 "Not Found"
[00:00:04]               │ proc [kibana] [2021-10-18T11:18:08.774+00:00][ERROR][plugins.dataEnhanced.data_enhanced] [object Object]
[00:00:04]               │ debg --- retry.waitForWithTimeout failed again with the same message...
[00:00:05]               │ info [o.e.c.m.MetadataMappingService] [node-01] [.kibana_8.0.0_001/Mte-v5quQ1-QuiejopzyEw] update_mapping [_doc]
[00:00:05]               └- ✓ pass  (1.2s)
[00:00:05]             └-> should complete session when searches complete
[00:00:05]               └-> "before each" hook: global before each for "should complete session when searches complete"
[00:00:05]               │ debg Waiting up to 5000ms for searches persisted into session...
[00:00:05]               │ debg --- retry.waitForWithTimeout error: expected [] to contain 'FmloNUl1a3VQUk9TM2Rqa3V2NTVaMncbX0dUWGVOYV9RSlNvTWdha0VBeEF4UToyMjcz'
[00:00:06]               │ info [o.e.c.m.MetadataMappingService] [node-01] [.ds-.logs-deprecation.elasticsearch-default-2021.10.18-000001/o_EORlIWQn-KjL0eGKYh4Q] update_mapping [_doc]
[00:00:08]               │ info [o.e.c.m.MetadataMappingService] [node-01] [.kibana_8.0.0_001/Mte-v5quQ1-QuiejopzyEw] update_mapping [_doc]
[00:00:16]               │ debg Waiting up to 5000ms for searches eventually complete and session gets into the complete state...
[00:00:16]               └- ✓ pass  (11.4s)
[00:00:16]             └-> touched time updates when you poll on an search
[00:00:16]               └-> "before each" hook: global before each for "touched time updates when you poll on an search"
[00:00:16]               │ debg Waiting up to 20000ms for search session created...
[00:00:16]               │ proc [kibana] [2021-10-18T11:18:20.818+00:00][ERROR][plugins.dataEnhanced.data_enhanced] [object Object]
[00:00:17]               │ proc [kibana] [2021-10-18T11:18:21.419+00:00][ERROR][plugins.dataEnhanced.data_enhanced] [object Object]
[00:00:20]               └- ✖ fail: apis search search session touched time updates when you poll on an search
[00:00:20]               │      Error: expected '2021-10-18T11:18:21.795Z' to be below 2021-10-18T11:18:21.795Z
[00:00:20]               │       at Assertion.assert (/dev/shm/workspace/parallel/18/kibana/node_modules/@kbn/expect/expect.js:100:11)
[00:00:20]               │       at Assertion.lessThan.Assertion.below (/dev/shm/workspace/parallel/18/kibana/node_modules/@kbn/expect/expect.js:336:8)
[00:00:20]               │       at Function.lessThan (/dev/shm/workspace/parallel/18/kibana/node_modules/@kbn/expect/expect.js:531:15)
[00:00:20]               │       at Context.<anonymous> (test/api_integration/apis/search/session.ts:438:65)
[00:00:20]               │       at runMicrotasks (<anonymous>)
[00:00:20]               │       at processTicksAndRejections (node:internal/process/task_queues:96:5)
[00:00:20]               │       at Object.apply (/dev/shm/workspace/parallel/18/kibana/node_modules/@kbn/test/target_node/functional_test_runner/lib/mocha/wrap_function.js:87:16)
[00:00:20]               │ 
[00:00:20]               │ 

Metrics [docs]

Public APIs missing comments

Total count of every public API that lacks a comment. Target amount is 0. Run node scripts/build_api_docs --plugin [yourplugin] --stats comments for more detailed information.

id before after diff
fleet 1114 1119 +5

Async chunks

Total size of all lazy-loaded chunks that will be downloaded as the user navigates the app

id before after diff
fleet 634.8KB 636.5KB +1.7KB

Page load bundle

Size of the bundles that are downloaded on every page load. Target size is below 100kb

id before after diff
fleet 105.8KB 106.2KB +402.0B
Unknown metric groups

API count

id before after diff
fleet 1214 1219 +5

History

To update your PR or re-run it, just comment with:
@elasticmachine merge upstream

Member

@nchaulet nchaulet left a comment

🚀

@MichaelKatsoulis MichaelKatsoulis merged commit fa183ff into elastic:master Oct 18, 2021
@kibanamachine
Contributor

Friendly reminder: Looks like this PR hasn’t been backported yet.
To create backports run node scripts/backport --pr 114439 or prevent reminders by adding the backport:skip label.

@kibanamachine kibanamachine added the backport missing Added to PRs automatically when the are determined to be missing a backport. label Oct 20, 2021
@kibanamachine
Contributor

Friendly reminder: Looks like this PR hasn’t been backported yet.
To create backports run node scripts/backport --pr 114439 or prevent reminders by adding the backport:skip label.

@nchaulet
Member

@MichaelKatsoulis Looks like this PR has not been backported to 7.x and 7.16

@nchaulet nchaulet added v8.0.0 auto-backport Deprecated - use backport:version if exact versions are needed and removed backport missing Added to PRs automatically when the are determined to be missing a backport. labels Oct 21, 2021
@kibanamachine
Contributor

💚 Backport successful

Branch: 7.16

This backport PR will be merged automatically after passing CI.

kibanamachine pushed a commit to kibanamachine/kibana that referenced this pull request Oct 21, 2021
…one agent layout instead of policy (elastic#114439)

* In case of kubernetes integartion detected return manifest in standalone agent layout instead of policy
kibanamachine added a commit that referenced this pull request Oct 21, 2021
…one agent layout instead of policy (#114439) (#115953)

* In case of kubernetes integartion detected return manifest in standalone agent layout instead of policy

Co-authored-by: Michael Katsoulis <[email protected]>
@mlunadia mlunadia changed the title from "In case of kubernetes integartion detected return manifest in standalone agent layout instead of policy" to "In case of kubernetes integration detected return manifest in standalone agent layout instead of policy" Jul 5, 2022
Labels
auto-backport Deprecated - use backport:version if exact versions are needed enhancement New value added to drive a business result release_note:skip Skip the PR/issue when compiling release notes Team:Fleet Team label for Observability Data Collection Fleet team v7.16.0 v8.0.0
5 participants