
OnionCat4 discussion notebook #34

Open
rahra opened this issue Jul 17, 2021 · 4 comments
Labels
question Further information is requested

Comments

@rahra
Owner

rahra commented Jul 17, 2021

This is a collection of open questions for OnionCat4, so that I don't forget them. And of course, it is open for discussion!

OnionCat4 is developed in the branch hsv3lookup.

  • Which port should the DNS service use? Currently udp6/8060; would udp6/53 be better?
  • How many queries should the resolver do in parallel? And to which hosts? Active sessions 1st?
  • What should happen when the TTL of a host expires? Re-lookup?
  • Which and how many of the hosts in the hosts db should be queried for new entries?
  • Should the internal hosts db be saved regularly and restored after a reboot?
  • Shall zone transfers be supported?
  • Is $sysconfdir/etc/tor/onioncat.hosts a good location for the hosts file? (or better /etc/onioncat/hosts, or /etc/onioncat.hosts, or...)
@rahra rahra added the question Further information is requested label Jul 17, 2021
@aight8

aight8 commented Jul 21, 2021

I don't know the exact state of the v3 lookup mechanism.
However, I want to write down some personal notes/keywords:

random 3-digit PIN (simple protection against cluster passphrase guessing) + custom passphrase -> seed bytes -> hierarchical deterministic ed25519 key
-> master pub key is shared -> deterministically derive all pub keys -> generate onion IDs (n = 10)
-> master prv key required at init of a node
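
To illustrate, the derivation chain above could be sketched roughly like this. This is a hypothetical sketch, not OnionCat code: the KDF choice, the derivation tag, and the function names are all assumptions (a real design would use a proper KDF such as scrypt or Argon2, and an ed25519 library to turn each 32-byte seed into a keypair and onion ID).

```python
import hashlib
import hmac

def derive_master_seed(pin: str, passphrase: str) -> bytes:
    """PIN + passphrase -> 32-byte master seed.
    (Plain SHA-256 here only for illustration; use a real KDF.)"""
    return hashlib.sha256((pin + ":" + passphrase).encode()).digest()

def derive_child_seed(master_seed: bytes, index: int) -> bytes:
    """Deterministically derive the 32-byte ed25519 seed for node `index`.
    An ed25519 private key is just 32 seed bytes, so HMAC-SHA512
    truncated to 32 bytes yields a valid per-node seed."""
    tag = b"onioncat-node-%d" % index  # derivation tag is an assumption
    return hmac.new(master_seed, tag, hashlib.sha512).digest()[:32]

# n = 10 node keys, all recoverable from PIN + passphrase alone
master = derive_master_seed("123", "custom passphrase")
child_seeds = [derive_child_seed(master, i) for i in range(10)]
```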

-> node tries to connect to (n) nodes at bootstrap; if any connection succeeded -> join cluster, otherwise -> create new
-> on cluster enter (new node): node receives joined node count + used IDs (e.g.: 0,1,2,5 / nodes 3,4 went offline), maxNodeID = 5, my new node ID = 6
-> on cluster enter (with existing ID): same as a new node, but advertise own node with its specific ID; have to prove to the cluster (entrypoint node) that I have the prv key of the node with this ID (sign something)
-> on cluster create: nothing (node ID = 0)
-> a node can send heartbeats to the cluster; the node is removed from the DHT (active nodes) after a timeout (e.g. 30s)
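
The ID-assignment rule implied by the notes above (new node takes one past the highest ID ever used, so IDs of offline nodes are never reassigned) can be sketched as follows. The function name is illustrative, not part of any implementation:

```python
def assign_new_node_id(used_ids):
    """A joining node receives the set of IDs ever used and takes
    max + 1, so IDs of temporarily offline nodes (e.g. 3 and 4 in the
    example above) are never handed out twice."""
    return max(used_ids, default=-1) + 1

# Example from the notes: IDs 0,1,2,5 in use, nodes 3,4 went offline.
assert assign_new_node_id({0, 1, 2, 5}) == 6
# First node of a fresh cluster gets ID 0.
assert assign_new_node_id(set()) == 0
```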

-> onion services are created ad hoc via the Tor control port

this represents a lightweight P2P application (acting as control app/discovery/registrar)

used technologies:
ed25519, tor, a p2p stack with dht

@rahra
Owner Author

rahra commented Jul 22, 2021

I think I got your idea. But how would you find the "initial contact"? By distributing the master key to your set of OnionCat nodes?

I'm already working on an article describing what I'm working on with this V3 lookup mechanism ;)

@aight8

aight8 commented Jul 22, 2021

By providing the passphrase once on-site for every node, during the node's bootstrap phase. It is used to generate the pubkeys at m/*. The node then tries to access the cluster by resolving the first n nodes and connecting to one of them. Once it has connected to any of them, my node is part of the Tor P2P network. (My node receives: all online nodes (index list) + the last node index ever used; the next higher one becomes my node's index.) The bootstrap now generates the private key for that index from the generated master key, an ed25519 prv key which is my onion address, and discards the master key for security reasons. (With the master key I could impersonate every node in the network, so one bad node could do anything.)
The tap interface could map then like:
192.168.100.[node-index] -> m/[node-index]
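
A minimal sketch of that mapping, assuming the 192.168.100.0/24 subnet named above (the subnet and function name are illustrative assumptions):

```python
import ipaddress

def node_ip(index: int) -> str:
    """Map node index (key path m/[index]) to its fixed TAP address
    in 192.168.100.0/24."""
    base = ipaddress.IPv4Address("192.168.100.0")
    return str(base + index)

# Node with index 6 (the new node from the earlier example):
assert node_ip(6) == "192.168.100.6"
```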

As an improvement, the node cluster could theoretically ensure that a node at m/0 is always available. If it is not, just publish that one; don't care whether it gets republished. m/0 could be a cluster entry node or just a DHT that provides indexes of online nodes (for example beyond 10, when the first 10 are offline). Though that improvement goes pretty far...

ocat -p "with-this-passphrase-im-part-of-the-network"                      (6th form)

Okay, cool. I am curious!

@rahra
Owner Author

rahra commented Jul 24, 2021

Sounds good. Although it also sounds like a lot of work, for which I don't have time any more, at least at the moment.
But this is an open source project and you have my full support, both for writing a paper or design draft on that and of course later on in a possible implementation.
I think what still needs a little more attention is the issue with the master key and the possibility of a rogue OnionCat node.
What I did for the moment is pretty straightforward: no crypto, just DNS lookups within the network. Stand by for my explanation.
