Capsicum #481
Conversation
The test failure is not caused by this PR.
Force-pushed from e882f4a to fabd41c.
I've rebased it on the latest master to fix the merge conflicts and fix the tests.
Force-pushed from fabd41c to 3b1cf36.
Force-pushed from 3b1cf36 to 5ccb7dc.
Rebased.
OK, so there is quite a bit to go through here... Is this intended to work on FreeBSD while resulting in a no-op on other OSes?
Yes, exactly.
I'm not done reviewing yet; I still need to go through why pasv_listener is needed, for instance, but I wanted to give you feedback now.
Thanks a lot for taking the time to contribute. This indeed looks promising.
When I run …
I think you got TLS to work, right?
I get a bunch of warnings on the master branch, too. Do you?
Yes.
No, it works fine on my side. I'm on Rust 1.72.0 and macOS.
Ahh, it's a nightly thing. I'll fix the warnings that are branch-specific.
A couple of questions / comments:
let storage = Mutex::new(Some(Filesystem::new(std::env::temp_dir())));
let sgen = Box::new(move || storage.lock().unwrap().take().unwrap());
let server: Server<Filesystem, User> = libunftp::ServerBuilder::with_authenticator(sgen, auth)
    .pasv_listener(control_sock.local_addr().unwrap().ip())
Why exactly is it needed to create the passive listener here? Is the IP somehow going to be different from the parent's?
Prior to this PR, the passive listener socket would be created in the PASV command handler. However, that doesn't happen until after we enter capsicum mode, and you can't create new sockets once already in capsicum mode. So we need to preallocate here.
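To make the ordering concrete, here is a minimal sketch (not this PR's code; the address and the prints are illustrative) of preallocating the passive socket and only then entering capability mode via the capsicum crate:

use std::net::TcpListener;

fn main() -> std::io::Result<()> {
    // Preallocate the passive-mode socket while socket creation is still allowed.
    let pasv_listener = TcpListener::bind("127.0.0.1:0")?;
    println!("passive port: {}", pasv_listener.local_addr()?.port());

    // After this call no new sockets (or other global-namespace resources)
    // can be created; only descriptors that already exist remain usable.
    #[cfg(target_os = "freebsd")]
    capsicum::enter().expect("failed to enter capability mode");

    // A PASV handler would later accept on the preallocated listener.
    let (_conn, peer) = pasv_listener.accept()?;
    println!("data connection from {}", peer);
    Ok(())
}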
OK, makes sense.
There is a problem with this implementation: the passive port is allocated once, and the server keeps listening on the same port for the whole session. During a longer control session, the open port would be detectable by an attacker, who could easily perform a 'port stealing' attack by connecting to the open port before the legitimate client does.
This could perhaps be fixed by implementing a mechanism similar to the one used for the proxy protocol: the PASV code would request the passive port via a channel.
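As a rough sketch of such a channel-based handoff (not this PR's code; under Capsicum the broker task would itself have to bind through cap_bind(3) or run outside capability mode):

use tokio::net::TcpListener;
use tokio::sync::{mpsc, oneshot};

// The PASV handler sends a reply channel; a broker task that is still able
// to bind answers with a freshly bound listener on an ephemeral port.
type PasvRequest = oneshot::Sender<TcpListener>;

async fn pasv_broker(mut rx: mpsc::Receiver<PasvRequest>) {
    while let Some(reply) = rx.recv().await {
        let listener = TcpListener::bind("0.0.0.0:0").await.expect("bind failed");
        let _ = reply.send(listener);
    }
}

// Called from the PASV command handler.
async fn request_pasv_listener(tx: &mpsc::Sender<PasvRequest>) -> TcpListener {
    let (reply_tx, reply_rx) = oneshot::channel();
    tx.send(reply_tx).await.expect("broker task is gone");
    reply_rx.await.expect("broker dropped the request")
}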
I think we can do it more easily than that. We don't need to listen on the socket early; just create it. So we can create the socket but only call listen during Pasv::handle_nonproxy_mode. Would that suffice?
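A minimal sketch of that bind-now/listen-later split, using the socket2 crate (the PR may implement it differently):

use socket2::{Domain, Socket, Type};
use std::net::{SocketAddr, TcpListener};

// Before entering capability mode: create and bind the socket, but don't listen yet.
fn prealloc_pasv_socket(addr: SocketAddr) -> std::io::Result<Socket> {
    let sock = Socket::new(Domain::for_address(addr), Type::STREAM, None)?;
    sock.bind(&addr.into())?;
    Ok(sock)
}

// From the PASV handler: only now start listening on the preallocated socket.
fn start_listening(sock: Socket) -> std::io::Result<TcpListener> {
    sock.listen(128)?;
    Ok(sock.into())
}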
That indeed would reduce the problem. It would still be best if the port is chosen randomly for each PASV invocation, so that the port is not 'owned' by the session (which would lead to higher saturation of the "connection pool") and so that the next listening port isn't predictable for an attacker who can somehow observe these connections.
Hmm. I can try. The restriction is that we can't open new sockets once we enter capability mode. Is it possible to re-bind an already-listening socket to a new port?
Good one... perhaps after closing the connection? I don't know.
We should keep intact the behavior that:
1. Client issues PASV.
2. Server picks a random port in the specified range and starts listening.
3. Client issues a data command and connects to the passive port.
4. Server accepts the connection; the passive port is now free to be listened on again after the next PASV from any other client.
Step 4 is important, and the time between steps 3 and 4 is short. With a small pool of ports, it could become a bottleneck if step 4 doesn't release the port.
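For illustration only, a sketch of that per-PASV allocation in the ordinary (non-Capsicum) case; the use of the rand crate and the unbounded retry loop are simplifications:

use rand::Rng;
use std::net::TcpListener;

// Pick a random port from the configured passive range until bind succeeds.
// Dropping the returned listener right after the data connection has been
// accepted (step 4) releases the port back to the pool.
fn bind_random_passive(range: std::ops::RangeInclusive<u16>) -> std::io::Result<TcpListener> {
    let mut rng = rand::thread_rng();
    loop {
        let port = rng.gen_range(range.clone());
        match TcpListener::bind(("0.0.0.0", port)) {
            Ok(listener) => return Ok(listener),
            Err(_) => continue, // port already in use; try another one
        }
    }
}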
Does this solution actually work on FreeBSD? Were you able to connect to the server and perform some uploads / downloads, etc.?
Curious about your use case, by the way, if you feel like sharing.
Yes, I can. I invoke it like this:

[
    {
        "username": "alice",
        "password": "Wonderland",
        "home": "/tmp/alice"
    },
    {
        "username": "bob",
        "password": "dylan",
        "home": "/usr/home/bob"
    }
]

And regarding my use case, I'm working on a backup and recovery product. The recovery path can work over FTP/SSL, and multiple customers' data may be stored within the same server, so a vulnerability in the FTP server could theoretically allow one customer to read another's data. Using Capsicum to isolate each FTP session gives us an extra layer of security. And since recovery usually involves a small number of connections each transferring a large amount of data, we don't mind the small performance penalty entailed by the per-connection fork/exec.
Cool use case!
Take an IpAddr argument instead of SocketAddr
When a process runs in capability mode, as with Capsicum, it loses all access to global namespaces. So, for example, functions like open() don't work; openat() must be used instead. This commit replaces all global filesystem accesses in libunftp-sbe-fs with relative accesses, making it compliant with capability mode. The new cap_fs module is a 1:1 replacement for tokio::fs. The code could be simplified, and some runtime allocations removed, by inlining this module into libunftp-sbe-fs::Filesystem. However, I've left it separate for now to minimize diffs.
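The cap_fs module is this PR's own code, but as a rough illustration of the openat()-style pattern it follows, this is what directory-relative access looks like with the cap-std crate (which the PR does not use; the path is made up):

use cap_std::ambient_authority;
use cap_std::fs::Dir;

fn main() -> std::io::Result<()> {
    // Open a directory handle once, while ambient (global-namespace) access
    // is still available, e.g. before entering capability mode.
    let root = Dir::open_ambient_dir("/var/ftp", ambient_authority())?;

    // All later access is relative to that handle, i.e. openat()-style;
    // absolute paths and ".." escapes are rejected.
    let data = root.read("incoming/upload.txt")?;
    println!("{} bytes", data.len());
    Ok(())
}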
Capsicum is a capabilities model in which processes lack any access to global namespaces. For example, they can't use open(); they can only use openat() instead. libunftp needs several modifications to be compatible with this model:
* The listening socket needed by the PASV command must be preallocated. Only one socket is needed, because the connection helper is exec'ed for every session.
* The storage backend needs an "enter" method that can descend into a subdirectory of its current root. This will come in a future commit.
* The server must have an optional connection_helper program that it will exec to handle each connected socket. This is necessary so that users with different home directories can be handled by processes that are jailed to their specific home directories. It's necessary to exec, rather than merely fork, because Tokio uses kqueue, and a kqueue is not inherited by children.
* The TLS stuff must be set up earlier, before the connection_helper enters capability mode. This will come in a future commit.
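To sketch the connection_helper handoff described above (the helper path and --fd flag are made up, and this is not the PR's code): the accepted control socket must survive the exec, so its close-on-exec flag is cleared and its descriptor number is passed to the child:

use std::net::TcpStream;
use std::os::fd::IntoRawFd;
use std::process::{Child, Command};

fn spawn_helper(conn: TcpStream) -> std::io::Result<Child> {
    // Take ownership of the raw fd so it stays open across exec.
    let fd = conn.into_raw_fd();

    // Rust opens sockets with FD_CLOEXEC set; clear it so the helper inherits the fd.
    unsafe {
        let flags = libc::fcntl(fd, libc::F_GETFD);
        libc::fcntl(fd, libc::F_SETFD, flags & !libc::FD_CLOEXEC);
    }

    // The helper rebuilds a TcpStream from the inherited descriptor, enters
    // capability mode, and serves this one session.
    Command::new("/usr/local/libexec/unftp-helper")
        .arg("--fd")
        .arg(fd.to_string())
        .spawn()
}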
* Add an optional "home" method to User.
* Add an "enter" method to StorageBackend. It descends into a subdirectory of the current root.
* After successful login, call StorageBackend::enter if the user has a home directory. Note that the whole process will effectively be jailed to the user's home directory, so it won't be possible to reauthenticate as another user.
Unlike DefaultUser, it has a home directory.
Split Server into Server and ServerBuilder. The latter has a build() method that creates the former. This gives the user the ability to do things after the Server is fully constructed (including the TLS setup) but before entering the Server's main loop, for example entering capability mode. This change also makes the TLS setup apply to Server::service.
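A hedged sketch of the flow this split enables; with_fs as a ServerBuilder constructor and an async build() are assumptions about this PR's API, while listen() is the existing main-loop entry point:

#[tokio::main]
async fn main() {
    // Fully construct the server, including any TLS setup, ...
    let server = libunftp::ServerBuilder::with_fs(std::env::temp_dir())
        .build()
        .await
        .expect("failed to build server");

    // ... then do per-platform work before the main loop, e.g. enter capability mode ...
    #[cfg(target_os = "freebsd")]
    capsicum::enter().expect("failed to enter capability mode");

    // ... and only then start serving.
    server.listen("127.0.0.1:2121").await.expect("server error");
}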
But don't enter Capsicum mode except on FreeBSD.
* Remove vim directives.
So that unftp-auth-jsonfile won't need to know it.
This reverts commit bddf254.
by deferring the asynchronous part until build().
When preallocating a socket for Pasv, bind it to a port early, but don't call listen() until the actual PASV command arrives.
In capability mode, bind(2) is forbidden. Instead of preallocating a passive listener port as an early version of this PR did, supply a callback that can be used to bind the passive port. For a capability-mode server, this callback will use cap_bind(3).
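For shape only, a sketch of what such a bind callback could look like; the BindFn alias and plain_binder are illustrative, and a capability-mode build would instead supply a closure that goes through cap_bind(3) (e.g. via capsicum-net):

use std::net::{SocketAddr, TcpListener};

// The server calls this whenever it needs a socket for the passive port.
type BindFn = Box<dyn Fn(SocketAddr) -> std::io::Result<TcpListener> + Send + Sync>;

// Outside capability mode a plain bind(2) is fine.
fn plain_binder() -> BindFn {
    Box::new(|addr| TcpListener::bind(addr))
}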
@robklg all rebased!
Otherwise the per-connection helper processes used by Capsicum would never exit.
I missed this in Storage::metadata, leading to commands like SIZE failing in capability mode.
@asomers this now depends on a specific rev in capsicum-net and capsicum-rs. Any chance we can make it depend on a released crate?
Yes, @robklg I'll ask the maintainer of Capsicum to make a release.
That would be great. Thanks!
Capsicum is a security technology that restricts processes from accessing any global namespaces. After entering capsicum mode, a process may no longer use a syscall like open(); instead it's restricted to openat().
This PR:
Fixes #475