[BUG] [Filebeat] [AKS/EKS] No logs from K8s apps in Elasticsearch #1866

Closed
to-bar opened this issue Nov 19, 2020 · 3 comments
to-bar commented Nov 19, 2020

Describe the bug
Pod logs are not shipped by Filebeat (deployed in K8s) to Elasticsearch.

To Reproduce
Steps to reproduce the behavior:

  1. Deploy Epiphany on AKS/EKS cluster (k8s_as_cloud_service: true).
  2. Browse Filebeat index in Elasticsearch.

Expected behavior
Pod logs should be present in ES (with K8s fields such as kubernetes.pod.name).
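One way to verify the expected behavior (a sketch; `filebeat-*` is Filebeat's default index pattern and may differ per deployment — run in Kibana Dev Tools or via curl):

```
GET /filebeat-*/_search
{
  "size": 1,
  "query": {
    "exists": { "field": "kubernetes.pod.name" }
  }
}
```

In the buggy state this query returns no documents carrying the `kubernetes.*` metadata fields.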

Config files

kind: epiphany-cluster
title: Epiphany cluster Config
name: azepi
provider: any
specification:
  name: azepi
  admin_user:
    name: operations
    key_path: /home/epiuser/vms_rsa
  cloud:
    k8s_as_cloud_service: true
  components:
    repository:
      count: 1
      machines:
        - default-tb-module-tests-all-0
    kubernetes_master:
      count: 0
    kubernetes_node:
      count: 0
    logging:
      count: 1
      machines:
        - default-tb-module-tests-all-1
    monitoring:
      count: 0
    kafka:
      count: 0
    postgresql:
      count: 1
      machines:
        - default-tb-module-tests-all-2
    load_balancer:
      count: 0
    rabbitmq:
      count: 0
---
kind: configuration/feature-mapping
title: Feature mapping to roles
name: azepi
provider: any
specification:
  roles_mapping:
    repository:
      - repository
      - image-registry
      - firewall
      - filebeat
      - node-exporter
      - applications
---
kind: infrastructure/machine
name: default-tb-module-tests-all-0
provider: any
specification:
  hostname: tb-module-tests-all-0
  ip: 51.104.175.13
---
kind: infrastructure/machine
name: default-tb-module-tests-all-1
provider: any
specification:
  hostname: tb-module-tests-all-1
  ip: 51.104.175.88
---
kind: infrastructure/machine
name: default-tb-module-tests-all-2
provider: any
specification:
  hostname: tb-module-tests-all-2
  ip: 51.104.172.112
---
kind: configuration/applications
title: "Kubernetes Applications Config"
name: default
specification:
  applications:
    - name: rabbitmq
      enabled: true
      image_path: rabbitmq:3.8.3
      use_local_image_registry: false
      service:
        name: rabbitmq-cluster
        port: 30672
        management_port: 31672
        replicas: 2
        namespace: queue
      rabbitmq:
        plugins:
          - rabbitmq_management
          - rabbitmq_management_agent
        policies:
          - name: ha-policy2
            pattern: ".*"
            definitions:
              ha-mode: all
        custom_configurations:
          - name: vm_memory_high_watermark.relative
            value: 0.5
        cluster:
    - name: auth-service
      enabled: true
      image_path: jboss/keycloak:9.0.0
      use_local_image_registry: false
      service:
        name: as-testauthdb
        port: 30104
        replicas: 2
        namespace: namespace-for-auth
        admin_user: auth-service-username
        admin_password: PASSWORD_TO_CHANGE
      database:
        name: auth-database-name
        user: auth-db-user
        password: PASSWORD_TO_CHANGE

Cloud Environment (please complete the following information):

  • AKS
  • EKS

Additional context
In core\src\epicli\data\common\ansible\playbooks\roles\filebeat\templates\custom-chart-values.yml.j2 there is a condition for the K8s input:
{% if 'kubernetes_master' in group_names or 'kubernetes_node' in group_names %}
which is not met, since there is neither a kubernetes_master nor a kubernetes_node group in the inventory file.


to-bar commented Dec 4, 2020

After solving this issue, the fix should be backported to v0.8 branch.

@to-bar to-bar added the priority/high Task with high priority label Dec 4, 2020
@mkyc mkyc modified the milestones: S20201217, S20201231 Dec 4, 2020

ar3ndt commented Dec 14, 2020

Looks like an easy fix. As mentioned in the additional context, it should be enough to add another 'or' condition checking the k8s_as_cloud_service flag.
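A minimal sketch of such an amended condition in custom-chart-values.yml.j2 (the bare variable name k8s_as_cloud_service and its default are assumptions; the flag's actual path in the generated Ansible vars may differ):

```jinja
{% if 'kubernetes_master' in group_names
      or 'kubernetes_node' in group_names
      or k8s_as_cloud_service | default(false) %}
```

The `default(false)` filter keeps the template from failing on inventories where the flag is not defined at all.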

@sk4zuzu sk4zuzu self-assigned this Dec 23, 2020
@przemyslavic przemyslavic self-assigned this Jan 4, 2021
@mkyc mkyc modified the milestones: S20201231, S20210114 Jan 4, 2021
@przemyslavic (Collaborator) commented:

Fixed. Now logs from containers are visible in Kibana.

[screenshots: container logs visible in Kibana]

@plirglo plirglo closed this as completed Jan 5, 2021