
[Elastic Agent] when Agent is stopped, Metricbeat & Filebeat are not stopped #19522

Closed
EricDavisX opened this issue Jun 30, 2020 · 4 comments · Fixed by #19567
Labels: bug, Ingest Management:beta1 (Group issues for ingest management beta1)
@EricDavisX (Contributor):

@mdelapenya found this while helping the Agent team write a smoke test, testing on the latest 8.0 containers (the Agent container is custom-built so it can be exercised more thoroughly).

Steps to reproduce:

1. Load up Agent and deploy it to Fleet.
2. Stop the Agent process: the two Beats processes are still running.

See his CentOS 7 log output:
e2e-testing git:(148-fleet-scenarios) $> docker exec ingest-manager_elastic-agent_1 ps
PID TTY TIME CMD
1 ? 00:00:00 tail
105 ? 00:00:05 elastic-agent
119 ? 00:00:00 metricbeat
151 ? 00:00:00 filebeat
163 ? 00:00:00 metricbeat
185 ? 00:00:00 ps
➜ e2e-testing git:(148-fleet-scenarios) $> docker exec ingest-manager_elastic-agent_1 kill 105
➜ e2e-testing git:(148-fleet-scenarios) $> docker exec ingest-manager_elastic-agent_1 ps
PID TTY TIME CMD
1 ? 00:00:00 tail
119 ? 00:00:00 metricbeat
151 ? 00:00:00 filebeat
163 ? 00:00:00 metricbeat
196 ? 00:00:00 ps
(Two further ps runs moments later showed the same three Beats processes still running.)
After manually killing the elastic-agent process, metricbeat and filebeat are not stopped.

@EricDavisX EricDavisX added the Ingest Management:beta1 Group issues for ingest management beta1 label Jun 30, 2020
@botelastic botelastic bot added the needs_team Indicates that the issue/PR needs a Team:* label label Jun 30, 2020
@EricDavisX EricDavisX added Team:Ingest Management and removed needs_team Indicates that the issue/PR needs a Team:* label labels Jun 30, 2020
@elasticmachine (Collaborator):

Pinging @elastic/ingest-management (Team:Ingest Management)

@mdelapenya (Contributor):

Adding context: the runtime Docker image is centos:7, where I installed the Agent following the guide: https://www.elastic.co/guide/en/ingest-management/7.8/elastic-agent-installation.html

$ curl -L -O https://artifacts.elastic.co/downloads/beats/elastic-agent/elastic-agent-7.8.0-linux-x86_64.tar.gz
$ tar xzvf elastic-agent-7.8.0-linux-x86_64.tar.gz
# add the agent binary to PATH (the symlink must point at the extracted binary, not the tarball)
$ ln -s /elastic-agent-7.8.0-linux-x86_64/elastic-agent /usr/local/bin/elastic-agent
$ elastic-agent enroll http://kibana:5601 $token -f
$ elastic-agent run

@ph ph added the bug label Jun 30, 2020
@mdelapenya (Contributor) commented Jul 1, 2020:

More on this: when running elastic-agent on CentOS, I got this process list:

[root@e0b3382bbe0f /]# ps aux
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root         1  0.0  0.0   4412   676 ?        Ss   18:52   0:00 tail -f /dev/null
root        38  3.3  0.2 850328 23560 ?        Ssl  18:52   0:05 elastic-agent run
root        56  0.4  0.7 429032 64320 ?        Sl   18:52   0:00 /elastic-agent-8.0.0-SNAPSHOT-linux-x86_64/data/install/filebeat-8.0.0-SNAPSHOT-linux-x86_64/filebeat -E setup.ilm.enabled=false -E setup.template.enabled=false -E management.mode=x-pack-fleet -E management.enabled=true -E logging.level=debug -E logging.
root        69  0.6  1.0 676580 85808 ?        Sl   18:52   0:01 /elastic-agent-8.0.0-SNAPSHOT-linux-x86_64/data/install/metricbeat-8.0.0-SNAPSHOT-linux-x86_64/metricbeat -E setup.ilm.enabled=false -E setup.template.enabled=false -E management.mode=x-pack-fleet -E management.enabled=true -E logging.level=debug -E logg
root        81  1.8  0.7 707568 65012 ?        Sl   18:52   0:03 /elastic-agent-8.0.0-SNAPSHOT-linux-x86_64/data/install/filebeat-8.0.0-SNAPSHOT-linux-x86_64/filebeat -E setup.ilm.enabled=false -E setup.template.enabled=false -E management.mode=x-pack-fleet -E management.enabled=true -E logging.level=debug -E logging.
root        93  0.4  0.9 471776 81060 ?        Sl   18:52   0:00 /elastic-agent-8.0.0-SNAPSHOT-linux-x86_64/data/install/metricbeat-8.0.0-SNAPSHOT-linux-x86_64/metricbeat -E setup.ilm.enabled=false -E setup.template.enabled=false -E management.mode=x-pack-fleet -E management.enabled=true -E logging.level=debug -E logg
root       157  0.0  0.0  11840  3116 pts/0    Ss   18:52   0:00 bash
root       338  0.0  0.0  51768  3440 pts/0    R+   18:55   0:00 ps aux

For some reason, I see both metricbeat and filebeat duplicated. Is that intended?

@ghost commented Sep 2, 2020:

Bug conversion: created 1 new test case for this ticket.

5 participants