
Pubsub flood due to same message propagated multiple times #524

Closed
stbrody opened this issue Feb 25, 2023 · 8 comments

stbrody commented Feb 25, 2023

Incident report from 3Box Labs (Ceramic) Team

Incident summary

For the last several days, the Ceramic pubsub topic has been experiencing a flood of messages far beyond our usual load. We log every pubsub message we receive on the nodes we run, and analyzing those logs with LogInsights shows that we are receiving messages with the exact same `seqno` multiple times - a single message can show up upwards of 15 times in an hour. During normal operation we do not see seqnos repeated like this. The dramatic increase in the number of messages that need processing is putting excess load on our nodes and causing major performance problems, even with as much caching and de-duplication as we can do at our layer.

Evidence of the issue

Graph of our incoming pubsub message activity showing how the number of messages spiked way up a few days ago. The rate before 2/20 was our normal, expected amount of traffic:
[Screenshot: incoming pubsub message volume over time, spiking after 2/20]

AWS LogInsights Query demonstrating how the majority of this increased traffic is due to seeing the same message (with the same seqno) re-delivered multiple times. Before the spike we never saw a msg_count greater than 2.

[Screenshots: LogInsights query results showing the same seqno delivered many times within an hour]

Steps to reproduce

Connect to the gossipsub topic `/ceramic/mainnet`. Observe the messages that come in and keep track of how many times you see a message with each `seqno`. Over the span of an hour you will see the same message, with the same `seqno`, delivered multiple times.
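One way to observe this is to subscribe to the topic with go-libp2p-pubsub and count deliveries per `seqno`. This is a minimal sketch only; peer discovery and connecting to nodes already in the Ceramic mesh are omitted.

```go
package main

import (
	"context"
	"encoding/hex"
	"fmt"
	"log"

	"github.com/libp2p/go-libp2p"
	pubsub "github.com/libp2p/go-libp2p-pubsub"
)

func main() {
	ctx := context.Background()

	// Bare host for illustration; a real reproduction must also connect
	// to peers that are already subscribed to the Ceramic topic.
	h, err := libp2p.New()
	if err != nil {
		log.Fatal(err)
	}
	defer h.Close()

	ps, err := pubsub.NewGossipSub(ctx, h)
	if err != nil {
		log.Fatal(err)
	}

	topic, err := ps.Join("/ceramic/mainnet")
	if err != nil {
		log.Fatal(err)
	}
	sub, err := topic.Subscribe()
	if err != nil {
		log.Fatal(err)
	}

	// Count how many times each seqno is delivered to this node.
	counts := make(map[string]int)
	for {
		msg, err := sub.Next(ctx)
		if err != nil {
			log.Fatal(err)
		}
		key := hex.EncodeToString(msg.GetSeqno())
		counts[key]++
		if counts[key] > 1 {
			fmt.Printf("seqno %s delivered %d times\n", key, counts[key])
		}
	}
}
```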

Historical context

We have seen this happen before; in fact, it's happened to us several times over the last year, and we've reported it to PL multiple times. You can see our original report here (at the time we were still using js-ipfs): https://github.com/libp2p/js-libp2p/issues/1043. When this happened again after we had migrated to go-ipfs, we reported it again, this time on Slack: https://filecoinproject.slack.com/archives/C025ZN5LNV8/p1661459082059149?thread_ts=1661459082.059149&cid=C025ZN5LNV8

We have since discovered a bug in how go-libp2p-pubsub maintained the seenMessage cache and worked to get a fix into kubo 0.18.1: #502

We have updated our nodes to 0.18.1, but of course we have no direct control over which versions of ipfs/kubo the rest of the nodes on the Ceramic network are running. Even if the above bugfix would resolve the issue once every single node on the network upgraded, we have no real way to enforce that and no idea how long it will be (if ever) before no older ipfs nodes are participating in our pubsub topic. There is also the possibility of a malicious node connecting to our pubsub topic and publishing a large volume of bogus messages (or re-broadcasting valid messages). So no matter what, we need a way to respond to incidents like this that goes beyond "get your users to upgrade to the newest kubo and pray that the problem goes away", which is what we have been told every time we have reported this issue so far.

Our request from Protocol Labs

This is an extremely severe incident that has affected us multiple times over the last year. It strikes without warning and leaves our network crippled. Every previous time this happened it cleared up on its own within a day or so, but this one has been going on for 5 days now without letting up. We need some way to respond to incidents like this, and to potential malicious attacks in the future where someone intentionally floods our network with pubsub traffic.

So our questions for PL are:

  • What short term options can we take on the nodes that we operate, or that we can tell our community of node operators to take, to resolve this immediate issue that is currently affecting our production network?
  • Do you have any tools for inspecting the p2p network that would let us identify which node(s) are the source of the issue? If we knew that the issue was because of one problematic node that was, for instance, running a very old version of ipfs or running on very underpowered hardware, we could potentially reach out to them directly and get them to upgrade or take down their node. Or perhaps we could tell existing node operators to block connections from that problematic peer.
  • What is the recommended way in general to respond to nodes that (either intentionally through malice or accidentally through a bug) spam a pubsub topic with bogus or re-broadcast messages?
  • What additional steps can we take going forward to prepare our network to be more resilient to issues like this in the future?

Thank you for your time and attention to this important issue!

-Spencer, Ceramic Protocol Engineer

p-shahi added the kind/bug (a bug in existing code, including security flaws) and need/maintainer-input (needs input from the current maintainer(s)) labels Feb 25, 2023
vyzo (Collaborator) commented Feb 25, 2023

Do you have any sort of message validation running in your network?

It seems that this is the real root of your problem.

stbrody (Author) commented Feb 25, 2023

> Do you have any sort of message validation running in your network?

Sorry @vyzo, can you clarify what you mean by this? Is this a gossipsub feature we need to enable? If you mean application-level message validation, we already validate that messages conform to the expected schema, and we even have caches around most of the behaviors that are triggered when applying messages. All of these messages are valid; they are just getting delivered multiple times.

vyzo (Collaborator) commented Feb 25, 2023

Caches, like the seen cache, are just that... caches - not a security feature or sufficient replay protection at scale.

Pubsub, the library, supports an elaborate application-level validation scheme. This allows blockchains to intervene and reject or ignore messages at the application level using eventual consistency models or message pool protection rules (factoring in origin peers in transaction topics), which is why this problem does not generally arise there.

You absolutely must provide a validator to the library if you want to scale.
If you are using it through ipfs, which does not let you instantiate a validator afaict, please work with the ipfs maintainers to provide the necessary api for you.

This is not a problem that can be solved with a bandaid, and the pubsub library cannot do more than it already does...

The best we can do is give you an api for writing validators, and we already provide that.

Please use it.

(edit: fixed some typos)
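For reference, registering such a validator with go-libp2p-pubsub looks roughly like the sketch below. `isWellFormed` and `alreadyApplied` are hypothetical application-level helpers, not part of the library; this assumes direct use of the library rather than going through kubo.

```go
package main

import (
	"context"

	pubsub "github.com/libp2p/go-libp2p-pubsub"
	"github.com/libp2p/go-libp2p/core/peer"
)

// registerCeramicValidator attaches an application-level validator to the
// Ceramic topic. The validator runs before a message is delivered locally
// or forwarded to other peers.
func registerCeramicValidator(ps *pubsub.PubSub) error {
	return ps.RegisterTopicValidator("/ceramic/mainnet",
		func(ctx context.Context, from peer.ID, msg *pubsub.Message) pubsub.ValidationResult {
			if !isWellFormed(msg.Data) {
				// Reject penalizes the peer that propagated the message.
				return pubsub.ValidationReject
			}
			if alreadyApplied(msg.Data) {
				// Ignore drops the message without forwarding and without penalty.
				return pubsub.ValidationIgnore
			}
			// Accept delivers the message locally and forwards it to the mesh.
			return pubsub.ValidationAccept
		},
	)
}

// Hypothetical application checks; a real implementation would parse the
// Ceramic message schema and consult local state.
func isWellFormed(data []byte) bool   { return len(data) > 0 }
func alreadyApplied(data []byte) bool { return false }
```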

vyzo (Collaborator) commented Feb 25, 2023

Note that your validator needs to go beyond merely syntactic validation.

At the very minimum you need to have a concept of a per-peer nonce if you don't have a natural filter for your topic (such a filter could be block height or a miner/validator permit, for instance).
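As an illustration of that suggestion, here is a rough sketch of a per-peer nonce check inside a go-libp2p-pubsub validator. It assumes the application embeds a monotonically increasing, publisher-chosen nonce in each payload; `decodeNonce` and the nonce encoding are hypothetical.

```go
package main

import (
	"context"
	"encoding/binary"
	"errors"
	"sync"

	pubsub "github.com/libp2p/go-libp2p-pubsub"
	"github.com/libp2p/go-libp2p/core/peer"
)

// nonceValidator remembers the highest nonce accepted per original publisher.
type nonceValidator struct {
	mu     sync.Mutex
	latest map[peer.ID]uint64
}

func newNonceValidator() *nonceValidator {
	return &nonceValidator{latest: make(map[peer.ID]uint64)}
}

func (v *nonceValidator) Validate(ctx context.Context, _ peer.ID, msg *pubsub.Message) pubsub.ValidationResult {
	// msg.GetFrom() is the original publisher, not the peer that relayed it.
	from, err := peer.IDFromBytes(msg.GetFrom())
	if err != nil {
		return pubsub.ValidationReject
	}
	nonce, err := decodeNonce(msg.Data)
	if err != nil {
		return pubsub.ValidationReject
	}

	v.mu.Lock()
	defer v.mu.Unlock()
	if nonce <= v.latest[from] {
		// Replayed or stale message: drop it without forwarding.
		return pubsub.ValidationIgnore
	}
	v.latest[from] = nonce
	return pubsub.ValidationAccept
}

// decodeNonce is a placeholder for schema-specific parsing; here it reads a
// big-endian uint64 prefix purely for illustration.
func decodeNonce(data []byte) (uint64, error) {
	if len(data) < 8 {
		return 0, errors.New("payload too short for nonce")
	}
	return binary.BigEndian.Uint64(data[:8]), nil
}
```

Such a validator could then be registered with `ps.RegisterTopicValidator("/ceramic/mainnet", newNonceValidator().Validate)`, as in the earlier sketch.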

stbrody (Author) commented Feb 25, 2023

Interesting. Our model is a bit different from a standard blockchain, but I can definitely work with the team to think about what additional types of validator rules would make sense for our use case. The per-peer nonce seems like a reasonable and simple place to start. I'm really surprised that kubo doesn't provide that level of simple validation out of the box, though. In any case, I'll open an issue with kubo to see what we can do there.

So that seems like a very useful direction to pursue for the answer to my fourth question - "what can we do to protect ourselves in the future?". Using more libp2p-level validators, either through kubo configuration or by moving away from kubo and using libp2p directly, seems like a good path to investigate. Thank you for that information!

But I'm still stuck on what we can do in the short term to mitigate the issue we are actively experiencing at the moment, i.e. my first and second questions in my original post. Any suggestions on how we can get back to a stable state would be very much appreciated!

vyzo (Collaborator) commented Feb 26, 2023

Yes, please do work with the ipfs people on the validator interface.

I am not sure what you can do to stop your current "diameter exceeds timecache boundaries" flood, short of shutting down your topic for enough time for things to quiet down. And this would only be temporary; the storm might come back.

Maybe restarting with a very long timecache window (assuming this is a natural occurrence and not a replay attack) could help.
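For nodes that embed go-libp2p-pubsub directly, the seen-messages window can be widened at construction time, as in the sketch below; the 30-minute value is illustrative only, not a recommendation, and recent kubo releases expose a similar knob via the `Pubsub.SeenMessagesTTL` config field.

```go
package main

import (
	"context"
	"log"
	"time"

	"github.com/libp2p/go-libp2p"
	pubsub "github.com/libp2p/go-libp2p-pubsub"
)

func main() {
	ctx := context.Background()

	h, err := libp2p.New()
	if err != nil {
		log.Fatal(err)
	}
	defer h.Close()

	// Widen the window during which duplicate message IDs are remembered,
	// instead of the default of roughly two minutes.
	ps, err := pubsub.NewGossipSub(ctx, h,
		pubsub.WithSeenMessagesTTL(30*time.Minute),
	)
	if err != nil {
		log.Fatal(err)
	}
	_ = ps // join topics and register validators as usual
}
```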

vyzo (Collaborator) commented Feb 26, 2023

See ipfs/kubo#9665 (comment) for a possible quick solution to the current predicament.

It will require patching ipfs, however, and longer term we will need to allow multiple validators per topic in pubsub (the internals are ready for it; it is just a matter of user api, and we can do this quickly when the need arises).

p-shahi removed the kind/bug and need/maintainer-input labels Feb 26, 2023
p-shahi (Member) commented Feb 26, 2023

closing in favor of ipfs/kubo#9665
