[Proposal][Discuss] Gitea Cluster #13791
The modules/tasks task will need to be refactored to have an easy interface: |
Some kind of git storage layer would be needed IMHO (something like GitLab has) |
I would focus on tasks, since git data via shared storage works quite well at the moment |
It is, but in fact it's expensive. So a distributed git data storage layer will still be a necessary feature of Gitea in the future. |
+1. A safe distributed/concurrent Gitea is surely the highest priority from a user point of view, as off-the-shelf options for distributed SQL databases and distributed file systems are readily available. |
Roadmap:
- master election: done by the DBMS (whichever instance gets its SQL select-update query in first becomes master)
- ~7 message types
- message communication: some sort of https://nats.io/, https://activemq.apache.org/cross-language-clients, ... over DB, Redis, ...?
|
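A minimal sketch of the DBMS-based master election item above, assuming a hypothetical `cluster_leader` table on a PostgreSQL backend; the instance whose atomic UPDATE matches first wins the lease:

```go
package main

import (
	"database/sql"
	"time"

	_ "github.com/lib/pq" // assumption: PostgreSQL backend
)

// tryBecomeLeader implements "whoever gets the select-update in first wins".
// It assumes a hypothetical single-row table:
//
//	CREATE TABLE cluster_leader (id INT PRIMARY KEY, node TEXT, lease_until TIMESTAMPTZ);
//	INSERT INTO cluster_leader VALUES (1, '', now());
//
// The UPDATE is atomic, so exactly one instance's WHERE clause matches and
// that instance holds leadership until its lease expires or is renewed.
func tryBecomeLeader(db *sql.DB, node string, lease time.Duration) (bool, error) {
	res, err := db.Exec(`
		UPDATE cluster_leader
		   SET node = $1, lease_until = now() + ($2 * interval '1 second')
		 WHERE id = 1 AND (node = $1 OR lease_until < now())`,
		node, int64(lease.Seconds()))
	if err != nil {
		return false, err
	}
	n, err := res.RowsAffected()
	return n == 1, err
}
```

Each instance would call this on a ticker; the current winner renews its lease and runs master-only work (e.g. cron), while the losers act as followers.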
Interesting discussion. I think this started back in 2017 in #2959. There needs to be recognition of two cluster use cases, Load Balancing and High Availability (HA), with two types of location configuration: local and remote. The more distant the cluster participant, the more data shifts from synchronous (near real-time) to delayed, creating a spectrum of data synchronization quality levels from highly consistent to eventually consistent. The technologies picked should be able to operate at a distance as well as on local premises without reconfiguration. Secure communication via tunneling and certificate-based authentication between nodes should also be considered.

The "tricky part" is figuring out where to put the replication. Since Gitea supports multiple databases, and each employs a different and incompatible replication mechanism, a formalized middleware layer is likely required to replicate data. A middleware replication layer also allows different database backends (e.g. PostgreSQL and MySQL) to provide transparent replication. Replication will need some type of lockout strategy for check-in/out and zip operations during replication activity. The options are:
With remote-site load balancing, it is possible to have check-in collisions causing inconsistencies. The use cases that cause these conditions:
I hope this helps with some of your design decisions. PS: don't forget config file change pushes. |
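To make the formalized middleware layer proposed above more concrete, here is a hypothetical sketch of what its contract could look like in Go; all names are invented for illustration:

```go
package replication

import "context"

// Change is one logical write captured at the middleware layer,
// independent of the underlying database (PostgreSQL, MySQL, ...).
type Change struct {
	Seq     uint64 // monotonically increasing sequence number
	Table   string
	Payload []byte // serialized row data, backend-agnostic
}

// Replicator is a hypothetical middleware contract: every write passes
// through Apply locally and is streamed to peers, which replay it on
// their own (possibly different) database backend.
type Replicator interface {
	// Apply records a local change and forwards it to remote peers.
	Apply(ctx context.Context, c Change) error

	// Subscribe replays changes from a peer starting at sequence seq,
	// letting a node that was offline catch up (eventual consistency).
	Subscribe(ctx context.Context, peer string, seq uint64) (<-chan Change, error)

	// Lock implements the lockout strategy mentioned above, blocking
	// check-in/out operations on a repo while replication is in flight.
	Lock(ctx context.Context, repoID int64) (unlock func(), err error)
}
```

Because peers exchange backend-agnostic Change records rather than raw WAL/binlog entries, a PostgreSQL node could in principle replicate to a MySQL node, which is the "transparent replication" property described above.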
We probably also need some kind of Git repository access layer, so that repositories could be distributed across the cluster with local storage |
Just want to contribute my own experience using Gitea for the last couple of years. Our first attempt was to run dockerized Gitea in kube, with the storage back end provided by NFS. We rely on the kube healthcheck to restart an unresponsive Gitea instance, which can run on any tainted host managed by kube. This solves the reliability issue somewhat, though there is a period of unavailability while the container restarts. Our v2 setup swaps out NFS for Ceph CSI in kube. R/W performance improves dramatically. We also use the S3 compat layer in Ceph to store LFS data. My most pressing desire for v3 is HA. We can be less ambitious and work on a single local cluster first. There could be a dedicated pod for running cron tasks, so Gitea can concentrate on doing git and webserver stuff. We could also use S3 exclusively for storage, for its sync capabilities. |
Do you have any hints for moving from NFS to Ceph CSI? I'd like to test out the performance. I already use S3 (MinIO) for all other Gitea storage. |
Will there be concurrency problems when using Ceph CSI, since there is no file lock protection? |
@piamo no. Only a single instance of Gitea runs at any one time, so no locking is necessary. The appropriate Ceph volume is auto-mounted on whichever host the Gitea container runs on. So yeah, my setup is not HA, just resilient to host failure. |
@imacks But if two or more concurrent requests try to change the same repo, a lock is still necessary. |
I think one immediate step for Gitea would be to allow limiting instances to read-only operations and disabling cron, to somewhat achieve high availability. Many parts can already be deployed in an HA way:
What we need right now is to allow disabling cron jobs; then Gitea can be deployed in a cluster with ReadWriteMany storage for git objects. To support ReadWriteOnce storage, the files need to be replicated by Gitea instead of the storage provider. Then Gitea must have a read-only mode, and those replicas need to pull changes from the master instance. In this case, the read-only operations should be identified so that a load balancer can route traffic properly.

After we have done the above, we could try to find a leader election protocol so that a replica can be promoted to master if the master is down. This would be the second step. Only after we have done that can we start to split cron jobs across multiple instances. I think this is more complicated than the first two steps. |
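As a sketch of the read-only mode described above: a follower instance could refuse write methods at the HTTP layer, so that a request misrouted by the load balancer cannot mutate state. The `isReadOnly` flag is an assumption for illustration, not existing Gitea code:

```go
package main

import "net/http"

// readOnlyGuard lets safe methods pass through and refuses write methods.
// A load balancer that routes by method would normally send only reads to
// a replica; this middleware is the defense-in-depth for misrouted writes.
func readOnlyGuard(isReadOnly func() bool, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if isReadOnly() {
			switch r.Method {
			case http.MethodGet, http.MethodHead, http.MethodOptions:
				// read-only traffic: serve locally from the replica
			default:
				// writes (POST/PUT/DELETE, including git push over
				// HTTP, which uses POST) must go to the master
				http.Error(w, "instance is read-only", http.StatusServiceUnavailable)
				return
			}
		}
		next.ServeHTTP(w, r)
	})
}
```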
Just FYI, we have an active WIP for a Gitea HA setup in the helm chart going on right now: https://gitea.com/gitea/helm-chart/pulls/437 It is based on Postgres-HA, an RWX file system and redis-cluster. The only remaining true issue is duplicated cron executions. The worst case is that both instances do the same thing at the exact same moment and therefore crash. Maybe implementing a random offset/sleep could help in the first place, to at least ensure proper functionality? All jobs would still be executed redundantly, but it would at least allow us to make some initial progress. |
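A minimal sketch of the random offset idea in Go; the scheduler call in the comment is illustrative. Note that jitter only shrinks the collision window, it does not eliminate it, so it is a stopgap until real distributed locking lands:

```go
package main

import (
	"math/rand"
	"time"
)

// withJitter delays a cron job by a random offset before running it, so two
// replicas firing at the same wall-clock instant rarely overlap. This makes
// simultaneous execution unlikely but does not prevent it, which is why a
// proper distributed lock is still the real fix.
func withJitter(maxOffset time.Duration, job func()) func() {
	return func() {
		time.Sleep(time.Duration(rand.Int63n(int64(maxOffset))))
		job()
	}
}

// Hypothetical usage: wrap a nightly task with up to 5 minutes of jitter.
// scheduler.Schedule("@midnight", withJitter(5*time.Minute, syncMirrors))
```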
There are in fact still some locks besides cron that need to be refactored, see #22176 |
I don't know what the "docker's duplicate insert bug" is here, and all the other points are also somewhat unclear in terms of severity; I think we need to check and find out in the end. To test all of them, we need a (functional) HA cluster first to test on. I can provide an instance for testing if needed. Are you interested @wxiaoguang @lunny? I could also give you access to the k8s namespace so you can explore the pods yourself. On the other hand, I wonder if this could also be set up and tested using the project funds? A terraform setup which destroys everything again after testing is not a big deal, and the helm chart logic for an HA setup is ready. |
I think most problems here are obvious at the code level. Maybe we will find more when we start testing. Thank you for your idea about the testing infrastructure; when we need it, we can discuss it. But for now, there are so many problems that maybe we should begin by starting some discussions or sending some PRs. |
I am interested; however, I have a quite long TODO list and many new PRs, so I don't think I have the bandwidth at the moment. |
I didn't check everything in the code so far, but I think something like https://github.com/hibiken/asynq could help with the cron issues? For the shared repo access, I was actually wondering why not try to abstract that, e.g. with an S3-compatible storage, and use something like redlock to synchronize access to repositories. I'd even assume concurrent reads should be fine? It's only about consistency when writing to a repository (presumably)? |
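For the redlock suggestion, here is a sketch using the go-redsync library (a Go redlock implementation), assuming Redis is already part of the deployment; the `repo-lock:<id>` key scheme and the `withRepoLock` helper are made up for illustration:

```go
package main

import (
	"fmt"

	"github.com/go-redsync/redsync/v4"
	"github.com/go-redsync/redsync/v4/redis/goredis/v9"
	goredislib "github.com/redis/go-redis/v9"
)

// withRepoLock serializes writes to one repository across all Gitea
// instances via a Redis-backed distributed mutex. Concurrent reads need
// no lock; only writers contend for the per-repo key.
func withRepoLock(rs *redsync.Redsync, repoID int64, write func() error) error {
	mutex := rs.NewMutex(fmt.Sprintf("repo-lock:%d", repoID))
	if err := mutex.Lock(); err != nil {
		return err // another instance currently holds the lock
	}
	defer mutex.Unlock()
	return write()
}

func main() {
	client := goredislib.NewClient(&goredislib.Options{Addr: "localhost:6379"})
	rs := redsync.New(goredis.NewPool(client))
	_ = withRepoLock(rs, 42, func() error {
		// ... update refs in repository 42 ...
		return nil
	})
}
```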
In #28958 I've started a distributed implementation for the internal notifier, whereby events such as
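A minimal sketch of how a distributed notifier could broadcast events between instances over Redis pub/sub; the `Event` shape and channel name are assumptions, not necessarily what #28958 implements:

```go
package main

import (
	"context"
	"encoding/json"
	"log"

	"github.com/redis/go-redis/v9"
)

// Event is a hypothetical wire format for notifier events.
type Event struct {
	Type   string `json:"type"` // e.g. "repo.push", "issue.update"
	RepoID int64  `json:"repo_id"`
}

// broadcast publishes a notifier event so that every Gitea instance in the
// cluster, each subscribed to the same channel, can react to it.
func broadcast(ctx context.Context, rdb *redis.Client, e Event) error {
	b, err := json.Marshal(e)
	if err != nil {
		return err
	}
	return rdb.Publish(ctx, "gitea:notifier", b).Err()
}

// listen consumes events published by peers and dispatches them locally.
func listen(ctx context.Context, rdb *redis.Client) {
	sub := rdb.Subscribe(ctx, "gitea:notifier")
	for msg := range sub.Channel() {
		var e Event
		if err := json.Unmarshal([]byte(msg.Payload), &e); err != nil {
			log.Printf("bad event: %v", err)
			continue
		}
		// dispatch to local notifier handlers here
		log.Printf("event %s for repo %d", e.Type, e.RepoID)
	}
}
```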
How does a Gitea deployment scale? A Gitea cluster should resolve part of that question.
Currently, when running several Gitea instances that share a database and git storage, there are still things that need to be resolved.
comment by @wxiaoguang
- ExclusivePool, which is also in-process now. Use a global lock instead of NewExclusivePool to allow distributed locks between multiple Gitea instances: #31813 (based on "Introduce globallock as distributed locks": #31908)
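As a rough picture of what moving from the in-process ExclusivePool to a global lock means for call sites; the `Locker` interface below is an approximation for illustration, see #31908 for the actual API:

```go
package main

import "context"

// Locker abstracts over lock backends so the same call sites work both
// in-process (single instance) and distributed (e.g. Redis-backed). This
// is the idea behind #31908; the exact Gitea API may differ.
type Locker interface {
	// Lock blocks until the key is held and returns a release function.
	Lock(ctx context.Context, key string) (release func(), err error)
}

// Before: ExclusivePool serialized work per key inside one process only:
//   pool.CheckIn(key); defer pool.CheckOut(key)
// After: the same pattern, but the Locker can be backed by Redis so the
// mutual exclusion holds across every instance in the cluster.
func updateRepo(ctx context.Context, l Locker, repoKey string) error {
	release, err := l.Lock(ctx, repoKey)
	if err != nil {
		return err
	}
	defer release()
	// ... mutate the repository safely ...
	return nil
}
```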