Add "limiter" support to database service #9087
Conversation
Force-pushed from 0d5940e to 14b93a7
Right now the limit is implemented only in the db/app agent, and the connection passed to
func: svc/db/Server.HandleConnection(conn net.Conn)
is actually part of the reverse tunnel connection established from the db/app agent to the Teleport proxy.
Thus the conn.RemoteAddr().String()
call in the reverse tunnel connection handler will return the Teleport proxy's reverse tunnel address instead of the actual remote DB client address.
https://github.com/gravitational/teleport/blob/master/lib/reversetunnel/agent.go#L382
func (a *Agent) run() {
	// Reverse tunnel connection to the Teleport proxy.
	conn, err := a.connect()
	...
	err = a.processRequests(conn)
}

func (a *Agent) processRequests(conn *ssh.Client) error {
	...
	t := &transport{
		...
		sconn:  conn.Conn,
		server: a.Server,
		...
	}
	go t.start()
}

func (p *transport) start() {
	...
	case LocalNode:
		p.server.HandleConnection(sshutils.NewChConn(p.sconn, p.channel))
}
where p.server.HandleConnection, in the case of the db agent, is svc/db/Server.HandleConnection.
If we want to limit external db/app users based on IP, the limit should be implemented in the proxy app/db handlers:
teleport/lib/srv/db/proxyserver.go
Line 55 in 5cd7c3c
type ProxyServer struct {
@smallinsky @jakule We have the client IP in the identity, so it should be available to app/db agents for connections coming out of the reverse tunnel: https://github.com/gravitational/teleport/blob/v8.0.0/lib/tlsca/ca.go#L126. We should limit by this instead of RemoteAddr. @jakule Also, please verify this with a real cluster (in addition to unit tests).
Just curious what address shows up as the client IP. Is it the public IP that the server receives (for a regular production setup)?
I think that right now the identity clientIP is not filled in for db and app certs, and is only used in the MFA flow: teleport/lib/auth/grpcserver.go, line 2085 in c5b2da1.
@r0mant I think @smallinsky is right. I also don't see clientIP being set anywhere in the DB module. Should I set that field and use it for the limiting?
@jakule Yeah, I think we should do it. I would also check what the current behavior is, to make sure. I.e., for connections coming out of the reverse tunnel, what is the remote addr? It should be that of the proxy, but it wouldn't hurt to verify.
The client IP is the remote address of the connecting client (tsh, psql, etc.) from the perspective of the server. What it is exactly depends a lot on the network configuration and on how the client's connection reaches the server. If the client and server are on the same network, in most cases it would just be the client machine's IP address. If, say, the client is on a private network and dials a Teleport server on a different network through a gateway, it would most likely be the NAT'd address of the gateway. If there's some sort of load balancer in front of the Teleport proxy, then the client address will be the IP of the load balancer, unless Proxy Protocol is enabled on the LB. @jakule It would be nice to test this scenario with proxy protocol as well once we make this change (the database proxy service supports proxy protocol, so it should just be a matter of using something like an AWS NLB, or even HAProxy locally).
Thanks for the explanation! The reason I was asking is that many corporations use VPNs, so their clients may share a few (or even one) public IPs.
Force-pushed from 40f258f to 64e3e6a
@r0mant As we discussed, I reduced the changes here to DB only. I tested the rate limiting in a few different configurations (including Postgres and MongoDB). As you also asked, I checked proxy protocol with Nginx and was able to see the real IP address.
Force-pushed from 3f39ec6 to 0480908
Initial implementation.
Move the app limiter logic to HTTP handler.
Improve documentation. Minor refactoring.
Add test to cover the previously incorrect scenario.
Force-pushed from 1bb3bc3 to 6b76aa3
The logic LGTM, though I have left some minor comments and a question about whether limiter.NewLimiter
should be used instead of limiter.NewConnectionsLimiter
to support rate limiting.
Add MySQL to limiter test.
Send a correct error to a connected client when a connection limit is hit.
… it inside engine itself.
lib/srv/db/server.go
Outdated
defer func() {
	if r := recover(); r != nil {
		s.log.Warnf("Recovered while handling DB connection from %v: %v.", clientConn.RemoteAddr(), r)
		err = trace.BadParameter("failed to handle client connection")
	}
	if err != nil {
		engine.SendError(err)
	}
}()
Should this be moved up, right after the engine is created, so that if an error happens during monitorConn or InitializeConnection, it would also be sent to the client properly?
As I mentioned below, an error cannot be sent before InitializeConnection() is called.
lib/srv/db/server.go
Outdated
@@ -689,7 +699,36 @@ func (s *Server) handleConnection(ctx context.Context, conn net.Conn) error {
 		return trace.Wrap(err)
 	}

-	err = engine.HandleConnection(ctx, sessionCtx, conn)
+	if err := engine.InitializeConnection(clientConn, sessionCtx); err != nil {
Is this separate InitializeConnection method needed? Can't we set clientConn on the engine, e.g. in the dispatch method when creating an engine? Or is it needed to perform some initialization actions (e.g. handleStartup for Postgres), without which an error won't be sent properly?
@r0mant Answering your comments. InitializeConnection() is needed (mainly for Postgres) to initialize the connection with the client; otherwise the communication blocks, as the client and the DB agent try to send data at the same time. Because of that, we cannot send an error that comes from InitializeConnection(), as the connection is not ready.
I think the best I can do is to move InitializeConnection() into dispatch(), so that after an engine is created the connection is ready and we're able to send a message.
This PR implements DB limiting (part of #7893).
Rate/connection-limiting can be enabled by overriding the values in a configuration YAML:
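A sketch of what such an override might look like, assuming the connection_limits section of teleport.yaml with max_connections and max_users fields; verify the exact field names against the Teleport configuration reference:

```yaml
teleport:
  connection_limits:
    # Maximum number of concurrent connections per client.
    max_connections: 1000
    # Maximum number of simultaneously connected users.
    max_users: 250
```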
Example error reported when a connection to Postgres has been rejected: