Hello,
As far as I understand, if we want to get the balance of a shielded address, the front-end runs a shielded-sync-like process by fetching all blocks from the RPC. That won't work after a hard fork anyway.
Instead of doing that, wouldn't it be better to assume that the connected RPC is already fully shielded-synced and simply query the balance from it? I don't know whether this is possible, but it would be nice if it worked this way.
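To make the idea concrete, here is a minimal sketch of the two approaches. All of the type and function names below are illustrative assumptions, not real namada-interface or Namada SDK APIs:

```ts
// Illustrative sketch only -- these types and functions are assumptions, not actual APIs.

type Block = { height: bigint; maspTxs: Uint8Array[] };

interface ChainRpc {
  latestHeight(): Promise<bigint>;
  block(height: bigint): Promise<Block>;
  // Hypothetical endpoint this issue asks for: the node keeps its own
  // shielded-sync state and answers balance queries directly.
  shieldedBalance?(viewingKey: string): Promise<bigint>;
}

// Current behaviour: the front-end walks every block and trial-decrypts
// MASP notes with the viewing key. This means hundreds of thousands of
// requests, and it breaks across a hard fork because one RPC cannot serve
// blocks from both before and after the fork height.
async function balanceViaClientSync(
  rpc: ChainRpc,
  decryptNotes: (block: Block, viewingKey: string) => bigint,
  viewingKey: string,
): Promise<bigint> {
  let balance = 0n;
  const tip = await rpc.latestHeight();
  for (let h = 1n; h <= tip; h++) {
    balance += decryptNotes(await rpc.block(h), viewingKey);
  }
  return balance;
}

// Proposed behaviour: assume the connected RPC is already shielded-synced
// and ask it for the balance in a single round trip.
async function balanceViaSyncedRpc(
  rpc: ChainRpc,
  viewingKey: string,
): Promise<bigint> {
  if (!rpc.shieldedBalance) {
    throw new Error("RPC does not expose a shielded balance query");
  }
  return rpc.shieldedBalance(viewingKey);
}
```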
Reasons:
It is not possible to pull blocks from before the hard fork from the new RPC; an old RPC must be used for blocks up to the hard-fork height.
Fetching hundreds of thousands of blocks in the front-end is tedious.
If you want to test it, here is my shielded-synced RPC address for the Shielded Expedition: https://rpc.namascan.com/
Thanks, have a nice day!
@hkey0 @Rigorously Thank you for the messages. If the protocol team thinks caching is not a good idea, there isn't much we can do about it; I don't feel competent enough in this area to have a strong opinion. What I can tell you is that we are having internal discussions about improving shielded-sync/balance queries. The problem is recognized and we are working on solution(s). :) I will leave this open for the time being; if there is more info in the future, I will let you know here.