connector: use tcp socket for communicating with connectors #682
Conversation
(force-pushed from 67e17a3 to 18ea9fe)
(force-pushed from 18ea9fe to 808a659)
go/connector/run.go (Outdated)

if err != nil {
	return fmt.Errorf("splitting socket address: %w", err)
}
cmdStdin.Write([]byte(fmt.Sprintf("tcp %s:%s\n", DockerHostInternal, port)))
Would it be possible to provide this as an argument to the connector rather than sending it over stdin? I'm not sure it's needed as a "ready" signal here, since the section above waits to receive a connection on the socket before doing anything with it. Using an argument instead of stdin seems simpler to me, but it's not an absolute must-do (especially if it doesn't work).
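For illustration only, a minimal sketch of the argument-based alternative, assuming the runtime launches the connector via `docker run`; the `--network-address` flag name and image name are hypothetical and not part of this PR:

```go
// Sketch: pass the address to dial as a command-line argument when starting the
// connector container, instead of writing "tcp host:port\n" to its stdin.
// The --network-address flag is a hypothetical name for illustration.
package main

import (
	"fmt"
	"os/exec"
)

func startConnector(image, host, port string) (*exec.Cmd, error) {
	cmd := exec.Command(
		"docker", "run", "--rm", image,
		fmt.Sprintf("--network-address=tcp://%s:%s", host, port),
	)
	if err := cmd.Start(); err != nil {
		return nil, fmt.Errorf("starting connector %s: %w", image, err)
	}
	return cmd, nil
}

func main() {
	// host.docker.internal stands in for the DockerHostInternal constant above.
	cmd, err := startConnector("ghcr.io/example/source-connector", "host.docker.internal", "9000")
	if err != nil {
		panic(err)
	}
	_ = cmd.Wait()
}
```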
A couple of general thoughts/questions:
Combined with Johnny's point last week about having a "Ready" message on connector stdout, it feels like there could be a coherent migration plan along the lines of:
An issue with READY is that […]

Another option, slightly less elegant but simpler (and more transferable to Firecracker?), would be to use an image label which marks that the container supports TCP transport and identifies the specific port on which it'll be listening. The runtime looks for this label and port-forwards it in the […]
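A rough sketch of how the runtime might read such a label, assuming it shells out to `docker inspect`; the label name `dev.connector.network-port` is purely hypothetical:

```go
// Sketch: discover the connector's advertised port from an image label.
// The label name dev.connector.network-port is a hypothetical example.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func advertisedPort(image string) (string, error) {
	out, err := exec.Command(
		"docker", "inspect",
		"--format", `{{index .Config.Labels "dev.connector.network-port"}}`,
		image,
	).Output()
	if err != nil {
		return "", fmt.Errorf("inspecting image %s: %w", image, err)
	}
	port := strings.TrimSpace(string(out))
	if port == "" {
		return "", fmt.Errorf("image %s does not advertise a TCP port label", image)
	}
	return port, nil
}

func main() {
	port, err := advertisedPort("ghcr.io/example/source-connector")
	if err != nil {
		panic(err)
	}
	fmt.Println("connector listens on port", port)
}
```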
I'm not sure I yet see the supporting reason for having the connector listen on a port rather than the runtime (other than it being more intuitive, though I personally didn't have that sense).
One thought: I think it would make connector development & testing simpler. It would be nice to be able to start up a connector container and interact with it via its TCP socket, as opposed to needing to start a TCP server before starting the container and then interacting with the connector through that TCP server.
Another reason is the security model: we don't want to give untrusted code the capability to dial out to trusted code. The trusted code should dial into the untrusted sandbox.
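To illustrate the development/testing point above, a sketch of exercising a connector directly over its published TCP socket; the port number and the request/response framing are assumptions, not the connector's actual protocol:

```go
// Sketch: talk to a connector container directly over TCP, e.g. after starting
// it with `docker run -p 9000:9000 <connector-image>`. The port and the
// newline-delimited request/response framing are illustrative assumptions.
package main

import (
	"bufio"
	"fmt"
	"net"
)

func main() {
	conn, err := net.Dial("tcp", "127.0.0.1:9000")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Send one request and read one response line.
	fmt.Fprintln(conn, `{"spec":{}}`)
	resp, err := bufio.NewReader(conn).ReadString('\n')
	if err != nil {
		panic(err)
	}
	fmt.Print(resp)
}
```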
(force-pushed from a3e771d to 0a18b43)
Okay, now:
There is still no network-tunnel binary, but with this we can start migrating simple Airbyte connectors, as well as our own connectors, by updating our libraries. We can meanwhile work on the network-tunnel binary and, once that's in, migrate the advanced connectors which need the tunnel.
LGTM
🤔 There would be multiple connectors running, so we'll have to account for this. Is it possible for us not to port-forward, but instead to have prior knowledge of the container name (with DNS resolution provided by Docker) or of the container IP, which can be dialed directly at the advertised port?
I will look into this to see if it's possible 🤔
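A sketch of the direct-dial idea from the comment above, under the assumption that the runtime and the connector containers share a user-defined Docker network, so container names resolve via Docker's embedded DNS; the network and container names used here are hypothetical:

```go
// Sketch: dial a connector by its container name on a shared user-defined
// Docker network, avoiding port-forwarding entirely. Assumes the connector was
// started with something like:
//   docker run --network connectors-net --name source-postgres <image>
// and that the runtime is attached to the same network.
package main

import (
	"fmt"
	"net"
	"time"
)

func dialConnector(containerName, port string) (net.Conn, error) {
	addr := net.JoinHostPort(containerName, port)
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		return nil, fmt.Errorf("dialing connector at %s: %w", addr, err)
	}
	return conn, nil
}

func main() {
	conn, err := dialConnector("source-postgres", "9000")
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	fmt.Println("connected to", conn.RemoteAddr())
}
```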
Thank you all for your comments,
(force-pushed from 1a2c77f to d862dd2)
Description:
Workflow steps:
(How does one use this feature, and how has it changed)
Documentation links affected:
(list any documentation links that you created, or existing ones that you've identified as needing updates, along with a brief description)
Notes for reviewers:
(anything that might help someone review this PR)