Running on Kubernetes #2

Closed · akihikokuroda opened this issue Oct 18, 2024 · 14 comments
Labels: enhancement (New feature or request)

Comments
@akihikokuroda commented Oct 18, 2024

Description:
For users who already have Kubernetes experience, Kubernetes is much easier to work with than Docker Compose, and it is a better fit for anyone who wants to run their own instance of the stack.

Desired solution
Provide the necessary manifest files for a Kubernetes cluster, along with detailed instructions for some recommended Kubernetes environments.

@psschwei (Contributor)

Kompose might be helpful here
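
For illustration: `kompose convert -f docker-compose.yml` emits a Deployment and a Service per Compose service. A rough sketch of the kind of Deployment it might generate for a hypothetical `api` service (image and port are placeholders):

```yaml
# Hypothetical excerpt of kompose output for a Compose service named "api";
# the actual output depends on the compose file.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
  labels:
    io.kompose.service: api   # kompose labels resources after the compose service
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: api
  template:
    metadata:
      labels:
        io.kompose.service: api
    spec:
      containers:
        - name: api
          image: example/api:latest   # placeholder image
          ports:
            - containerPort: 3000     # placeholder port
```

Output of this shape could then be reworked into the Helm templates discussed below.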

@akihikokuroda (Author)

Please assign this to me if this is a good feature to implement.

akihikokuroda changed the title from "Question: Running on Kubernetes" to "Running on Kubernetes" on Oct 25, 2024
@ph11be commented Oct 28, 2024

This would be ideal for us (we need to deploy onto a cluster running services behind a firewall, which we can't reach from a deployment outside the cluster).

I did try Compose (which I have used for similar setups in the past), but it did not work out of the box, so a more bespoke approach is probably needed. A Helm chart would be great, but even a set of YAMLs for k8s/OpenShift deployment would be good.

@jezekra1 (Collaborator)

We should provide a helm chart with all the external dependencies optional so that you can swap them out in the cloud.

Bitnami charts are a good starting point.
https://github.com/bitnami/charts/blob/main/bitnami/mlflow/Chart.yaml#L16

@akihikokuroda are you still up for it?
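
As a sketch of that pattern (the keys below follow common Bitnami-style conventions and are assumptions, not a final schema), each bundled dependency would get an `enabled` toggle plus matching `external*` settings so a managed cloud instance can be swapped in:

```yaml
# Illustrative values.yaml fragment; key names are assumptions based on
# Bitnami conventions, not the chart's actual schema.
mongodb:
  enabled: true            # set to false to use an external MongoDB
externalMongodb:
  uri: ""                  # e.g. a managed MongoDB connection string

redis:
  enabled: true
externalRedis:
  host: ""
  port: 6379

milvus:
  enabled: true
externalMilvus:
  host: ""
  port: 19530
```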

@akihikokuroda (Author)

Yes, I am. There are multiple options for Kubernetes deployment: a set of vanilla Kubernetes YAML files, Kustomize, Helm, or a Kubernetes operator. Is Helm our choice? Does anyone have a preference?

@akihikokuroda (Author)

If we use Helm, we can put the core bee services into the main chart and the other dependencies into subcharts, so that we can replace the dependency components easily.
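
A minimal sketch of that layout, assuming Bitnami subcharts; the chart name and version ranges are placeholders:

```yaml
# Illustrative Chart.yaml for the main chart. Each `condition` flag lets a
# subchart be disabled in favour of an external service configured via values.
apiVersion: v2
name: bee-stack            # hypothetical chart name
version: 0.1.0
dependencies:
  - name: mongodb
    version: "16.x.x"      # placeholder version range
    repository: https://charts.bitnami.com/bitnami
    condition: mongodb.enabled
  - name: redis
    version: "20.x.x"      # placeholder version range
    repository: https://charts.bitnami.com/bitnami
    condition: redis.enabled
  # a Milvus dependency could be wired up the same way
```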

@ph11be commented Oct 28, 2024

Agreed, Helm would give us the flexibility (through subcharts) to easily swap the dependencies. It also allows easy substitution of configuration options through a values file.

@planetf1 (Contributor) commented Oct 30, 2024

++ agree. I also find Compose awkward, and it can become difficult to maintain as configurations get more complex, though many people still prefer it, especially if they don't have k8s skills. In a previous project (part of which involved a tutorial/getting-started environment, i.e. users' first experience with the stack) I moved from Compose to k8s (Helm) for all the above reasons, but there was then a constant stream of k8s questions and issues, usually about the differences between various k8s setups (just as we have Docker vs Podman with Compose) and users' lack of skills. Sometimes these got a little tricky.

My preference would probably be for a Helm chart, as it's an easier 'packaging'. However, it's not so good at ops beyond day 0, so if any reconfiguration/control is needed an operator would be even better; but that's typically a lot more work, including the design of the custom resources to model it. I'd therefore be inclined to leave that as a future option.

Happy to help out with any k8s setup/testing as needed. We should have something that works in a typical 'desktop' environment (kind / Podman Desktop (minikube?) / Rancher Desktop, etc.) as well as in a cloud (internal or external); perhaps OpenShift is sufficient there. Just mentioning this because permissions (ids/root), resources, etc. typically cause pain...

@akihikokuroda (Author)

I'm making a very primitive helm chart now. I'll make a [WIP] PR in a day or two.

@akihikokuroda (Author)

WIP PR: #14

@jezekra1 (Collaborator) commented Nov 4, 2024

Hey, I'm looking at the PR and it's a good start, thanks @akihikokuroda!

We'd like to align it with the Kubernetes configuration we use for deployment so that we have a single, unified way to deploy the application :)

This means:

  • Add support for external infrastructure (Mongo, Redis, Milvus)
  • Properly handle seeders and migrations - make them optional init containers / Helm hooks (a minimal hook sketch follows this list)
  • Standardize the values.yml format to follow Bitnami conventions
  • Proper secret management - support production secrets as well as the dummy auth currently used locally
  • Add support for the production UI image (pre-built with different NEXT build arguments)
  • Proper scaling configuration and support for worker pods (the RUN_BULLMQ_WORKERS API configuration)
  • Ingress configuration & local port-forwarding
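
To make the seeders/migrations item concrete, here is a minimal sketch of a migration Job run as a Helm hook; the values flag, image, command, and secret name are placeholders rather than the actual configuration:

```yaml
# Illustrative templates/migrations-job.yaml; everything application-specific
# here is a placeholder.
{{- if .Values.migrations.enabled }}
apiVersion: batch/v1
kind: Job
metadata:
  name: bee-migrations              # a real chart would use a fullname helper
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-weight": "0"
    "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
spec:
  backoffLimit: 3
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: example/bee-api:latest          # placeholder image
          command: ["npm", "run", "migrate"]     # placeholder command
          envFrom:
            - secretRef:
                name: bee-api-secrets            # placeholder secret
{{- end }}
```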

Since this requires specific knowledge of the existing internal deployments and application internals, I'll take over the PR and make the necessary changes.

It might take some time, so in the meantime we'll be maintaining the docker-compose version of the stack.

cc @matoushavlena

@akihikokuroda (Author)

@jezekra1 Would you make this issue an epic/story and create issues for each item, so that others can contribute some pieces?

@jezekra1 (Collaborator) commented Nov 4, 2024

Sure! I created the epic here:
#27

mmurad2 added the "enhancement (New feature or request)" label on Nov 5, 2024
@jezekra1 (Collaborator) commented Nov 8, 2024

Closing this as a duplicate of #27.

jezekra1 closed this as completed on Nov 8, 2024