Use a single badgerdb instance #283
To be honest, I have no experience using multiple badger instances in a single process. On the other hand, with a single instance we will need to double-check that keys are not conflicting.
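A single instance can sidestep key conflicts if each subsystem gets its own key prefix. Here is a minimal sketch of that idea, assuming the ipfs go-datastore and go-ds-badger packages (paths and key names are illustrative, and newer go-datastore releases also thread a context.Context through Put/Get):

```go
package main

import (
	"fmt"
	"log"

	ds "github.com/ipfs/go-datastore"
	"github.com/ipfs/go-datastore/namespace"
	badger "github.com/ipfs/go-ds-badger"
)

func main() {
	// One Badger instance backing the whole node.
	store, err := badger.NewDatastore("/tmp/node-data", &badger.DefaultOptions)
	if err != nil {
		log.Fatal(err)
	}
	defer store.Close()

	// Namespaced views over the same instance: every key is
	// transparently prefixed, so the key spaces cannot collide.
	tmStore := namespace.Wrap(store, ds.NewKey("/tendermint"))
	ipldStore := namespace.Wrap(store, ds.NewKey("/ipld"))

	if err := tmStore.Put(ds.NewKey("height"), []byte("42")); err != nil {
		log.Fatal(err)
	}
	// "/ipld/height" is a different physical key than "/tendermint/height".
	if _, err := ipldStore.Get(ds.NewKey("height")); err == ds.ErrNotFound {
		fmt.Println("no conflict between namespaces")
	}
}
```

This would keep a single set of Badger goroutines and one disk footprint while preserving logically separate stores.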
We do use multiple DBs within ll-core itself. We should look at cleaning up ll-core before merging the ipfs and state stores.
@marbar3778, can you please point to the places where we use multiple DBs in ll-core?
@tzdybal, I have had experience using multiple Badgers at the same time. I needed somewhat independent, namespaced KV storages, but that came with a lot of overhead: Badger's spawned goroutines ate CPU, RAM was bloated, and every instance left its own disk footprint, so there was overhead on every dimension. Unfortunately, I don't have any real numbers logged.
Ah, I was thinking about the light client and the node. Within a single node there is a single instance of badger.
IMO, we should also consider settling on the same version as IPFS. Currently, ipfs uses v1. There are plans to switch to v2: ipfs/kubo#6818. We could of course also write a v3 datastore (if one does not already exist) and use it across the board. It is not entirely clear to me why we picked v3, tbh, or whether it is an official release or rather a development version. The badger docs on how to choose a version do not even mention v3 (currently): https://github.com/dgraph-io/badger#choosing-a-version. Either way, this does not have high priority right now, but it's something to keep an eye on.
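For reference, badger follows Go module versioning, so the major lines live at distinct import paths and can even coexist within one module while code is being migrated:

```go
import (
	badgerv1 "github.com/dgraph-io/badger"    // v1.x line (what ipfs uses today)
	badgerv2 "github.com/dgraph-io/badger/v2" // v2.x line
	badgerv3 "github.com/dgraph-io/badger/v3" // v3.x line
)
```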
Out of curiosity, I did some small research.
Yeah, in our case migration to v3 should be easy. AFAIK, badger changes its major version only when the on-disk data format changes, which would require a special migration for existing users. In our case, we don't have any existing data, so we are good to go.
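If an on-disk migration ever did become necessary, badger's backup/restore stream could bridge major versions. A rough, untested sketch, assuming the v1.6+ and v3 APIs (paths are illustrative):

```go
package main

import (
	"bytes"
	"log"

	badgerv1 "github.com/dgraph-io/badger"
	badgerv3 "github.com/dgraph-io/badger/v3"
)

func main() {
	// Open the existing v1 database.
	oldDB, err := badgerv1.Open(badgerv1.DefaultOptions("/tmp/db-v1"))
	if err != nil {
		log.Fatal(err)
	}
	defer oldDB.Close()

	// Stream a full backup (since = 0) into memory; for large
	// databases, write to a file instead.
	var buf bytes.Buffer
	if _, err := oldDB.Backup(&buf, 0); err != nil {
		log.Fatal(err)
	}

	// Replay the backup into a fresh v3 database.
	newDB, err := badgerv3.Open(badgerv3.DefaultOptions("/tmp/db-v3"))
	if err != nil {
		log.Fatal(err)
	}
	defer newDB.Close()
	if err := newDB.Load(&buf, 16); err != nil { // 16 = max pending writes
		log.Fatal(err)
	}
}
```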
We will move the ipfs storage story to a separate repo, so there is no need to deal with this inside this repo. If anything, we should make pruning after the unbonding period the default (if that is not already the case).
Currently, we are using two stores: one for tendermint-related data and one for IPLD-related data.
original discussion: #211 (comment)
ref: #182