Getting error (can't replace a newer value with an older) when publishing IPNS #6683
@nk-the-crazy I don't think what you're trying to do is supported, although perhaps it should be. I'd look at https://discuss.ipfs.io/t/name-publish-error-cant-replace-a-newer-value-with-an-older-value/3823/8 and #5816 for more information. Your backup essentially functions the same as if IPNS allowed multiple users to publish with the same key, since restoring on the same machine is equivalent to starting up a second machine with the same key. Unfortunately, for reasons you can read about in the linked issues and other GitHub issues, it does not. This is what's leading to your problems. Is there any more information you can give me to replicate your scenario? When you restore from your backup, do you immediately …

**Possible solution idea**

@Stebalien what do you think about making the datastore keys for IPNS and the DHT harmonize (either make the DHT use …)

---
We intentionally split these:

The issue was expiration: we started deleting IPNS records after they expired and, in doing so, ended up deleting the record we were using to republish IPNS records. What we should do is: …
|
I just ran into the same issue. Looks like there's an edge case where IPFS loses track of the correct information. It should print a warning in this case, but should still push a new version (somehow). I was using … But even after deactivating pubsub I'm not able to push anything new. Also, I couldn't cancel the subscriptions on pubsub either. The error on the command was …

The console printed a stack trace after a nil pointer dereference: …

---
@RubenKelevra it's not clear from your post whether your issue is the same as this one or a different one. Could you clarify which version of IPFS you're using, and whether you are publishing from multiple nodes or otherwise manually manipulating the IPFS datastore (e.g. backups) like the OP did?

---
Just looked into this issue with @aschmahmann. This was a bug, but it has since been fixed.

---
I'm sorry, next time I'll be more clear!

I use the latest version (binary from the Arch Linux repo).

A single instance of IPFS is publishing; I've even had just one key generated and wanted to push a second time.

Nope, I had previously run a manual … Then I rebooted the machine. Afterwards I couldn't publish with the old key. Trying to remove the subscription showed the error mentioned above. Then I started IPFS without pubsub to test whether I could publish without pubsub, which also failed. I decided to remove the key. Then I restarted IPFS with pubsub again and it showed no subscriptions. I generated a new key, which worked flawlessly.

---
Would be nice. I ran into the same issue as @nk-the-crazy, since I deleted my datastore and now want to publish a new IPNS record. I guess I now have to wait 96 hours (the timeout of the older version?). Additionally, this would allow different peers to update the IPNS record from different positions in the network, which makes sense for any redundant setup.

---
It sounds like what you're asking for here (as opposed to with your datastore issue) is really about third-party republishing of IPNS keys (essentially #1958). It'd be great to refactor namesys (#6537) and make some API changes to make this possible, since nothing in the protocol actually prevents third-party republishing.

---
Nope, that's completely different: #1958 asks for the ability to republish already-available data. I was just noting that if you have multiple servers, you probably want to publish with the same IPNS key from different servers, especially in a cluster configuration where multiple servers can alter the data.

The issue is that there's already a namesys item floating around; I've got the valid key, but my database just doesn't know the current counter number. Instead of asking the network which values are currently floating around, my node just publishes the default number. So I would need to publish updates until I exceed the counter value already present in the network to make any changes.

I solved the issue for now by generating new keys and publishing new IPNS records with them.
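The key-rotation workaround described above uses only stock commands (`ipfs key gen` and the `--key` option of `ipfs name publish`); the key name and CID below are placeholders:

```shell
# Generate a fresh key; the old key's sequence counter in the network no
# longer matters, because IPNS records are scoped per key.
ipfs key gen --type=ed25519 my-new-key

# Publish under the new key; replace <cid> with your content's CID.
ipfs name publish --key=my-new-key /ipfs/<cid>
```

Note that this changes the IPNS name itself, so anyone following the old name must be told the new one; it is a recovery of publishing ability, not of the old address.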
The implication of "cluster" is that you have some sort of consensus among the machines you are running. Without consensus your machines could publish values that clobber each other. There's nothing in an IPNS record that prevents you from doing this multi-machine publishing, but this definitely seems more like a feature for applications building on ipfs (e.g. ipfs-cluster) instead of go-ipfs itself.
👍. We should definitely make it easier to recover from these errors than forcing a key rotation. Also, I may be mistaken but I don't think that deleting your datastore (as opposed to just gc-ing all stored data) is actually supported. |
Okay, maybe an example helps to convey my point: Server A writes a file to the cluster, waits for the consensus to confirm that this is the new head, and publishes an IPNS record. Server B writes a new version of the file to the cluster and waits for the consensus to confirm that this is the new head. But Server B now cannot publish a new IPNS record, despite having the private key: if …

There should be an option to query the DHT for the currently highest value of an IPNS record, and the ability to specify the version value. I don't think there's a need for an application to fiddle around in the DHT and patch the local IPFS database. This should be solved by IPFS itself.

---
You could do that, but the DHT isn't like a single database you can query. What happens if there's a partition, or some other network anomaly? Now server B thinks the latest version is 5, when it's really 7, and therefore its publish (version 6) doesn't actually get propagated to the network.

Doing this definitely allows users to shoot themselves in the foot by just messing with the values. This doesn't necessarily mean I'm opposed to it, but if this functionality were exposed it would definitely need to come with sufficient warning lights.
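The error in this thread comes down to the record-replacement rule discussed above: a record whose sequence number is not strictly greater than the stored one is rejected. A minimal sketch of that rule, with illustrative names (this is not the actual go-ipfs validator, which on equal sequence numbers also compares validity windows):

```go
package main

import "fmt"

// acceptRecord sketches the IPNS replacement rule behind the error in this
// issue: a new record is accepted only if its sequence number is strictly
// greater than the stored one. (Illustrative; not the real go-ipfs API.)
func acceptRecord(storedSeq, newSeq uint64) (bool, error) {
	if newSeq <= storedSeq {
		return false, fmt.Errorf(
			"can't replace a newer value with an older value (stored seq %d, new seq %d)",
			storedSeq, newSeq)
	}
	return true, nil
}

func main() {
	// Server B restored an old datastore and tries to publish seq 6 while
	// the network already holds seq 7 -> rejected, which is this error.
	ok, err := acceptRecord(7, 6)
	fmt.Println(ok, err != nil) // false true

	// Publishing with a higher sequence number succeeds.
	ok, err = acceptRecord(7, 8)
	fmt.Println(ok, err == nil) // true true
}
```

This is also why a restored backup fails: the restored datastore holds a stale sequence number, so the node signs and publishes records the rest of the network (and its own stored record) rejects as older.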
This is biting me as well, because I am trying to move the publisher from one server to another. Can we add a … Also, is there any way to work around this in the short term?
I hit this issue again today, after converting a badgerds to a flatfs datastore... Is there any chance to add a flag to ignore this issue and bump the version up? |
Description
After restoring the data directory (.ipfs) from a zipped archive created earlier, we're unable to publish IPNS with the default key.

Getting error: …

(IPFS crashed suddenly, so we removed the current data directory and restored it from a zip archive that was created 2-3 days ago. After that, we're unable to publish. This kind of restore operation had actually been done several times before with no problems.)