
Haproxy refactor #1586

Merged · 16 commits merged into hitachienergy:develop from feature/haproxy-upgrade on Sep 10, 2020
Conversation


@sk4zuzu sk4zuzu commented Aug 26, 2020

No description provided.

@sk4zuzu sk4zuzu self-assigned this Aug 26, 2020
@sk4zuzu sk4zuzu force-pushed the feature/haproxy-upgrade branch from 14c8a15 to dd8ef95 on August 31, 2020 07:45
@sk4zuzu sk4zuzu marked this pull request as ready for review September 1, 2020 07:23
@sk4zuzu sk4zuzu changed the title Haproxy refactor and upgrade Haproxy refactor Sep 1, 2020

sk4zuzu commented Sep 1, 2020

This PR is already getting too big, so the epicli upgrade part will be handled in a separate PR. So far only the epicli apply part has been implemented.

Features included:

  • docker-based haproxy https://hub.docker.com/_/haproxy ✔️
  • lightweight docker image extraction tool replacing the docker export command (see the sketch after this list) ✔️
  • runc / systemd managed reusable haproxy container (not using docker) ✔️
  • reimplementation of the kubernetes load balancer service ✔️
  • reimplementation of the "load_balancer" feature ✔️
  • identical code for all Linux distros we support ✔️
  • offline package ✔️
  • fully working logging (kibana) and monitoring (grafana) code for the "load_balancer" feature ✔️
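
The extraction tool's internals aren't shown in this thread; as a rough sketch of the idea (image tag and commands are illustrative assumptions, not the PR's code), flattening a saved image tarball into a root filesystem looks like:

```sh
# Illustrative only: a real tool would also honor layer ordering and whiteout files.
docker save haproxy:alpine -o haproxy-image.tar   # or ship the tarball in the offline package
mkdir -p rootfs
for layer in $(tar -tf haproxy-image.tar | grep 'layer.tar$'); do
    tar -xOf haproxy-image.tar "$layer" | tar -xC rootfs
done
```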

Suggestions for testing:

  • offline / online modes for each distro Ubuntu/RHEL/CentOS (epicli apply)
  • verify that epicli apply is idempotent and restarts / reloads the systemd service in reaction to config changes (see the sketch after this list)
  • deploy with monitoring and logging enabled and verify that metrics and logs are being collected
  • backup / recovery scenarios for each distro Ubuntu/RHEL/CentOS (epicli backup, epicli recovery)
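
A minimal sketch of the idempotency check (the cluster file name, admin user, load balancer IP and service name are assumptions taken from the example config below):

```sh
epicli apply -f cluster.yml    # first run: full deployment
epicli apply -f cluster.yml    # second run: should report no changes
# change e.g. the haproxy frontend port in cluster.yml, then:
epicli apply -f cluster.yml
ssh centos@10.40.2.12 'systemctl status haproxy'   # service should have been reloaded/restarted
```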

Example config for "any" provider:

kind: epiphany-cluster
title: "Epiphany cluster Config"
name: any4
specification:
  name: any4
  admin_user:
    name: centos
    key_path: /workspaces/epiphany/core/src/epicli/clusters/id_rsa
  components:
    kubernetes_master:
      count: 1
      machines:
        - default-k8s-master1
    kubernetes_node:
      count: 1
      machines:
        - default-k8s-node1
    load_balancer:
      count: 1
      machines:
        - default-k8s-master3
    logging:
      count: 1
      machines:
        - default-k8s-node2
    monitoring:
      count: 1
      machines:
        - default-k8s-node3
provider: any
---
kind: configuration/shared-config
title: Shared configuration that will be visible to all roles
name: default
specification:
  use_ha_control_plane: true
  promote_to_ha: false
provider: any
---
kind: configuration/haproxy
title: "HAProxy"
name: default
specification:
  logs_max_days: 60
  self_signed_certificate_name: self-signed-fullchain.pem
  self_signed_private_key_name: self-signed-privkey.pem
  self_signed_concatenated_cert_name: self-signed-test.tld.pem
  haproxy_log_path: "/var/log/haproxy.log"
  stats:
    enable: true
    bind_address: 127.0.0.1:9000
    uri: "/haproxy?stats"
    user: operations
    password: your-haproxy-stats-pwd
  frontend:
    - name: https_front
      port: 443
      https: true
      backend:
      - http_back1
  backend: # example backend config below
    - name: http_back1
      server_groups:
      - kubernetes_node
      # servers: # Definition of servers that host the application.
      # - name: "node1"
      #   address: "epiphany-vm1.domain.com"
      port: 30104
provider: any
---
kind: infrastructure/machine
provider: any
name: default-k8s-master1
specification:
  hostname: z1a1
  ip: 10.40.2.10
---
kind: infrastructure/machine
provider: any
name: default-k8s-master2
specification:
  hostname: z1a2
  ip: 10.40.2.11
---
kind: infrastructure/machine
provider: any
name: default-k8s-master3
specification:
  hostname: z1a3
  ip: 10.40.2.12
---
kind: infrastructure/machine
provider: any
name: default-k8s-node1
specification:
  hostname: z1b1
  ip: 10.40.2.20
---
kind: infrastructure/machine
provider: any
name: default-k8s-node2
specification:
  hostname: z1b2
  ip: 10.40.2.21
---
kind: infrastructure/machine
provider: any
name: default-k8s-node3
specification:
  hostname: z1b3
  ip: 10.40.2.22
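
For reference, the frontend/backend section above would presumably be templated into haproxy stanzas along these lines (illustrative only; the certificate path is an assumption, the server address comes from default-k8s-node1 above):

```
frontend https_front
    bind *:443 ssl crt /etc/ssl/haproxy/self-signed-test.tld.pem   # assumed cert location
    default_backend http_back1

backend http_back1
    # one server line per host in the kubernetes_node server group
    server z1b1 10.40.2.20:30104 check
```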

@sk4zuzu sk4zuzu mentioned this pull request Sep 4, 2020
@przemyslavic (Collaborator) commented

/azp run

@mkyc mkyc left a comment

Why are you extracting haproxy from docker image instead of building it from sources?


sk4zuzu commented Sep 10, 2020

> Why are you extracting haproxy from docker image instead of building it from sources?

IMO, taking into account that we support offline mode, it's actually less harmful to use docker images (alpine-based) than to add yet another set of build-time packages to the already existing 14 GiB of stuff.

It's not just the extraction: I use runc, a much lighter container runtime, which works well with systemd services.
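
For illustration, this is roughly how runc pairs with systemd (unit name, bundle path and container id below are assumptions, not necessarily what this PR ships):

```ini
[Unit]
Description=HAProxy in a runc container
After=network-online.target

[Service]
# the bundle directory holds config.json plus the rootfs extracted from the image
ExecStart=/usr/local/bin/runc run --bundle /var/lib/haproxy-runc haproxy
# SIGHUP reload is what the official haproxy image documents for config changes
ExecReload=/usr/local/bin/runc kill haproxy HUP
Restart=always

[Install]
WantedBy=multi-user.target
```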

@mkyc mkyc left a comment

Cool

@sk4zuzu sk4zuzu merged commit 093c627 into hitachienergy:develop Sep 10, 2020