
Closed DHT Based Routing #10

Open

Stebalien opened this issue Apr 9, 2019 · 13 comments

@Stebalien (Member) commented Apr 9, 2019

Libp2p currently relies on a fully p2p Kademlia DHT. Unfortunately, even if we get it to the point where it behaves optimally, it still won't scale enough to support our content routing needs.

Proposal: Implement a "closed" DHT with known members. Note: This is often called a "Distributed KV Store".

Unlike our Kademlia DHT, this system would:

  1. Be permissioned instead of permissionless: joining as a DHT "server" would require either human or blockchain consensus.
  2. Have a fixed, well-known routing table, fetched when initially connecting to the DHT.

Motivations:

  • 1-RTT lookup
  • batch put
  • long-lived routing records

Performing a 1-RTT lookup and/or a batch put requires a known routing table. The round-trips in traditional p2p DHTs all come from discovering this routing table along the way.
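For concreteness, a minimal Go sketch of what that could look like, assuming the client holds the full, consensus-agreed member list (all types and names here are hypothetical, not an existing libp2p API):

```go
package closeddht

import (
	"crypto/sha256"
	"encoding/binary"
)

// PeerID stands in for a libp2p peer identity.
type PeerID string

// RoutingTable is the fixed, well-known member list, fetched once when
// first connecting to the closed DHT.
type RoutingTable struct {
	NumShards int
	Replicas  map[int][]PeerID // shard index -> replica servers
}

// shardFor maps a key onto the fixed keyspace partition.
func (rt *RoutingTable) shardFor(key []byte) int {
	h := sha256.Sum256(key)
	return int(binary.BigEndian.Uint64(h[:8]) % uint64(rt.NumShards))
}

// Lookup is where the single round trip comes from: the client already
// knows which servers own the shard, so it can query a replica directly
// instead of discovering the routing table hop by hop.
func (rt *RoutingTable) Lookup(key []byte) []PeerID {
	return rt.Replicas[rt.shardFor(key)]
}
```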

A known routing table requires some form of consensus on the members of this routing table. That's where the trust comes in.

The last part of this is long-lived routing records. The Internet Archive has ~400e9 files, which equates to at least 35TiB of "provider records". However, the IA isn't adding 400e9 files per day. Given stable nodes that can be trusted to keep long-lived records alive, the IA wouldn't have to keep re-broadcasting old records to keep them alive.
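(Back-of-envelope, assuming roughly 100 B per provider record: 400e9 records × ~100 B ≈ 40 TB ≈ 36 TiB of records to store, in line with the "at least 35TiB" above.)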


Notes:

  • Really, we may want to treat large services like the IA as "trackers". However, the current system won't even scale for smaller services.
  • We may also want to integrate payment for large users to prevent abuse but we can probably punt on that for now.

CC @Kubuxu, IIRC you already proposed something kind of like this but I couldn't find the proposal.

CC @obo20 as this is really important for pinning services.

@Stebalien (Member Author)

Specifically, I'm imagining a typical sharded KV architecture where the keyspace is broken up into shards and then each shard has some number of replica nodes.
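As an illustration of one way the shard-to-replica assignment could be derived deterministically from the agreed member list (rendezvous hashing is just one option; none of these names come from an actual implementation):

```go
package closeddht

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
	"sort"
)

// assignReplicas deterministically picks `replication` members per shard
// using rendezvous (highest-random-weight) hashing, so every node that
// knows the member list computes the same assignment with no extra
// coordination. Assumes replication <= len(members).
func assignReplicas(members []string, numShards, replication int) map[int][]string {
	table := make(map[int][]string, numShards)
	for shard := 0; shard < numShards; shard++ {
		ranked := make([]string, len(members))
		copy(ranked, members)
		sort.Slice(ranked, func(i, j int) bool {
			return weight(shard, ranked[i]) > weight(shard, ranked[j])
		})
		table[shard] = ranked[:replication]
	}
	return table
}

// weight gives each (shard, member) pair a pseudo-random but
// deterministic score.
func weight(shard int, member string) uint64 {
	h := sha256.Sum256([]byte(fmt.Sprintf("%d/%s", shard, member)))
	return binary.BigEndian.Uint64(h[:8])
}
```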

@Kubuxu (Member) commented Apr 9, 2019

My proposal was to build something like this based on Koorde, where the rules for joining as a full DHT node would be strict and, thanks to Koorde's structure, it would be possible to, for example, cleanly hand over records when some node has to go offline.

Information about Koorde: https://github.com/libp2p/research-dht/issues/9

@raulk (Member) commented Apr 9, 2019

Out of curiosity. Putting aside the permissioning via consensus element, what's the difference between this and something like a KV store with sharding implemented via consistent hashing, à la Cassandra but with a KV model? Why would one pick a closed DHT versus a distributed database?

@Kubuxu (Member) commented Apr 9, 2019

> Why would one pick a closed DHT versus a distributed database?

One of the primary reasons I would raise is that we know a DHT can scale to 100s or 1000s of nodes. This isn't as clear in the case of distributed databases. Also, I think it would be easier for a number of unrelated parties to run DHT nodes in a permissioned DHT network than to run a single distributed database cluster.

@raulk (Member) commented Apr 9, 2019

That makes sense. We do not require a private network between these nodes, right? Then I'd add that the solution would need to be fit for adversarial environments. It would also need to be self-healing upon departures, and collusion-resistant if replicas are decided deterministically and known to everybody.

@Stebalien (Member Author)

> what's the difference between this and something like a KV store with sharding implemented via consistent hashing, à la Cassandra but with a KV model? Why would one pick a closed DHT versus a distributed database?

That's basically what I'm suggesting. (I'm using DHT literally to mean distributed hash table, not implying any kind of p2p-ness.)

@aschmahmann

TLDR: Relying on specific centralized parties in the short term is ok, but nothing closed please 🙏🙏🙏

While I can be onboard with voluntarily run supernodes (e.g. relays, rendezvous, etc.) that help us move from a centralized world towards a distributed one, a closed network feels like a step in the wrong direction.

A simple issue that comes up is that if the system is "permissioned" then some organization is granting the permissions. That organization is now potentially subject to the whims of political and legal pressure to censor certain data on its network. Additionally, there are other issues, such as less attention and focus going to the open networks. @jimpick told me that apparently this occurred with Dat, where their DHT was broken for a while but nobody really noticed because they were defaulting to their centralized DNS solution.

I'm fairly certain there are other tradeoffs we can make, aside from closing/permissioning the system, that will get us similar results. Some straw-man suggestions include:

  • Having separate opt-in DHTs (as described by @Stebalien in DHT 2.0, ipfs/notes#291 (comment)) with extra network assumptions (e.g. willingness to keep long-lived records and to have high availability)
  • Requiring the specific DHT to be pretty densely connected and using some CRDT-type structure to assemble a total order on DHT server nodes. Then use those nodes to back a DHT that operates similarly to a classical hash table.
  • Add 2-3 copies of the data on the network and require the DHT server nodes to probabilistically check/refresh the other copies in the network based on how reliable the other nodes have been in the past (see the sketch after this list).
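A rough sketch of what that third suggestion could look like; the probability curve and the score range are invented purely for illustration:

```go
package refresh

import "math/rand"

// checkProbability says how often we audit a copy held by a peer:
// unreliable peers get checked almost every cycle, reliable peers
// only occasionally. reliability is a score in [0, 1].
func checkProbability(reliability float64) float64 {
	const min, max = 0.05, 1.0
	return max - (max-min)*reliability
}

// maybeAudit decides, for one record and one remote copy, whether to
// verify it this cycle and re-publish the record if the check fails.
func maybeAudit(reliability float64, verify func() bool, republish func()) {
	if rand.Float64() >= checkProbability(reliability) {
		return // trust the peer this round
	}
	if !verify() {
		republish()
	}
}
```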

Overall, while it's nice that a permissioned system would help alleviate classic DHT issues like poisoning and Sybil attacks, it feels like kicking the can on the entire premise of a distributed/p2p network. I would bet that in our current environment people are more likely to mooch free network resources (like the DHT) than to try to break everything (Sybil attacks/intentional poisoning to evict DHT records). If so, then just giving people the ability to opt in to donating large amounts of resources to the network would probably leave us with a smaller, faster DHT that is still open.

@hsanjuan

@lanzafame this sounds close to our discussions.

We thought that having a general DHT for Everything is suboptimal, and that the general DHT should just be used for service-specific DHT discovery. That seems to be pretty much what is proposed here: ipfs/notes#291 (comment).

@Stebalien (Member Author)

@aschmahmann

Ok, I missed an important point here: I'm not suggesting we replace our DHT with a closed DHT, I'm suggesting we add a secondary fast-path DHT.

> Having separate opt-in DHTs (as described by @Stebalien ipfs/notes#291 (comment)) with extra network assumptions (e.g. willingness to keep long-lived records and to have high availability)

That's basically what this is, except that you can't just show up and claim to be reliable. We need these nodes to actually be reliable; that's why this is "closed". However, that doesn't mean we have to have a single organization deciding who can participate (we can have the members of the DHT decide who can join).

> Add 2-3 copies of the data on the network and require the DHT server nodes to probabilistically check/refresh the other copies in the network based on how reliable the other nodes have been in the past.

We already put 20 copies. DHT nodes should probably rebalance, but that can get really expensive.


The real issue is that we simply can't match the performance/reliability guarantees of a centralized system using a fully open DHT built from nodes that aren't completely reliable.

  • To have 1-RTT lookups, we need to know the routing table. There are some fully p2p DHTs that support this but the communication complexity gets pretty bad with network churn.
  • Billions of content routing records won't work on any DHT without persistence: 1B routing records is ~100GiB of bandwidth, so the IA would need 36TiB of bandwidth per day (minimum).
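(The arithmetic behind those numbers, assuming ~100 B per provider record: 1e9 records × 100 B ≈ 100 GB ≈ 93 GiB per publish, and 400e9 × 100 B ≈ 40 TB ≈ 36 TiB to re-publish the IA's full record set, which without persistence has to happen every record-expiry interval, roughly daily today.)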

@vyzo commented Apr 12, 2019

We could use a process similar to what Tor uses for its relays.

@vyzo commented Apr 12, 2019

See https://blog.torproject.org/lifecycle-new-relay

@Stebalien (Member Author)

Follow-up: @jbenet noted that an incentivized approach would have the same effect while being more decentralized. Basically:

  1. Joining would require staking something of value (e.g., cryptocurrency).
  2. Users would pay for access to the system (not for every access, but some kind of amortized payment to reduce overhead).
  3. Members that fail to store something or respond to queries would lose their stake.

The tricky part would be proving that a member of this DHT isn't responding to queries, but this isn't insoluble.
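To make the shape of such a mechanism concrete, a toy sketch; all names, fields, and amounts are invented, and this says nothing about how the misbehavior proof itself would work:

```go
package staking

import "errors"

// Member is a DHT server that has bonded stake to join.
type Member struct {
	ID    string
	Stake uint64 // locked on join; partially burned on misbehavior
}

var ErrInsufficientStake = errors.New("deposit below minimum bond")

// Join admits a node only if it bonds at least minStake.
func Join(id string, deposit, minStake uint64) (*Member, error) {
	if deposit < minStake {
		return nil, ErrInsufficientStake
	}
	return &Member{ID: id, Stake: deposit}, nil
}

// Slash burns part of a member's stake once a proof of a dropped record
// or an unanswered query is accepted. Producing that proof is the hard
// part noted above.
func Slash(m *Member, penalty uint64) {
	if penalty > m.Stake {
		penalty = m.Stake
	}
	m.Stake -= penalty
}
```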

@yiannisbot (Contributor)

It looks like there is consensus that a single DHT is not an option, especially for the long term. That said, I see value in a hierarchical structure, which would help in many ways, including performance in terms of delivery delay, time to first byte, flow-completion time, etc. In such a setup, the top-level (and maybe also some lower-level, but not leaf) resolution system(s) (DHT or similar) would effectively be pointers to lower-layer "services", rather than to content as such.

If we assume the above, rather vague, setup, I would support an incentivised approach to securing a place in the top-level resolution system. I would reasonably expect that not everyone would want a place in the top-level resolution system (i.e., to be reachable globally) - consider a classroom/conference/smart-city environment where content is of interest only locally and hosts are not interested in making it globally reachable. Those that want to make content globally reachable would have to stake resources. Staking would take into account volume of data, replication/redundancy for availability, performance guarantees, etc., and would also depend on the topology: the further up the hierarchy the content/service is advertised, the higher the stake.

In this case, we would still need mechanisms to prove that nodes do not misbehave and that they forward requests in time.
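One way to picture the two-step resolution being described, with entirely hypothetical types:

```go
package hierarchy

// TopLevel never stores provider records for content directly; it only
// answers "which lower-layer service is authoritative for this key?".
type TopLevel interface {
	ResolveService(contentKey []byte) (serviceAddr string, err error)
}

// Service is a lower-layer resolution system (a regional DHT, a
// classroom-scale network, a tracker, ...) that returns actual providers.
type Service interface {
	ResolveProviders(contentKey []byte) ([]string, error)
}

// Resolve walks the hierarchy: one lookup to find the responsible
// service, then one (local, cheaper) lookup within it.
func Resolve(top TopLevel, dial func(addr string) (Service, error), key []byte) ([]string, error) {
	addr, err := top.ResolveService(key)
	if err != nil {
		return nil, err
	}
	svc, err := dial(addr)
	if err != nil {
		return nil, err
	}
	return svc.ResolveProviders(key)
}
```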
