
mptcp: Do not race forced-closure with packet reception
It is possible that we call inet_csk_prepare_forced_close() - which
calls bh_unlock_sock() - on a socket that is already in the ehash table
because it went through tcp_v4_syn_recv_sock.

That is a problem: the moment we unlock, incoming packets may be
processed on the socket, because this socket does not have the
mpc-flag set.

So that means we can end up processing an incoming packet while at
the same time being inside tcp_done(). Only bad things can happen
there...
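
To make the ordering concrete, here is a rough sketch of the two call
patterns (an illustration only, not the exact code; see the diff below
for the real mptcp_icsk_forced_close()):

	/* Old pattern: inet_csk_prepare_forced_close() unlocks the socket
	 * (it calls bh_unlock_sock()) before the caller runs tcp_done(),
	 * so softirq packet processing can race with the teardown.
	 */
	inet_csk_prepare_forced_close(sk);
	tcp_done(sk);	/* the socket is already unlocked here */

	/* New pattern: tcp_done() runs while the socket lock is still
	 * held; bh_unlock_sock() and the final sock_put() happen last.
	 */
	mptcp_icsk_forced_close(sk);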

I'm hunting down a weird kernel WARNING in __inet_hash_connect(), and
the current best guess is that this is the problem, as it can cause
all kinds of racy behavior.

[858163.658887] ------------[ cut here ]------------
[858163.661126] WARNING: CPU: 24 PID: 7966 at net/ipv4/inet_hashtables.c:740 __inet_hash_connect+0x419/0x440
[...]

Fixes: 1f2d951 ("Fix error-handling")
Signed-off-by: Christoph Paasch <[email protected]>
Signed-off-by: Matthieu Baerts <[email protected]>
(cherry picked from commit 1d5fc78)
Signed-off-by: Matthieu Baerts <[email protected]>
(cherry picked from commit fe3634a)
Signed-off-by: Matthieu Baerts <[email protected]>
cpaasch authored and matttbe committed Jun 27, 2022
1 parent a0ff38f commit b800164
Showing 1 changed file with 28 additions and 6 deletions.
net/mptcp/mptcp_ctrl.c
@@ -1110,6 +1110,28 @@ static const struct tcp_sock_ops mptcp_sub_specific = {
.set_cong_ctrl = __tcp_set_congestion_control,
};

+/* Inspired by inet_csk_prepare_forced_close */
+static void mptcp_icsk_forced_close(struct sock *sk)
+{
+	/* The problem with inet_csk_prepare_forced_close is that it unlocks
+	 * before calling tcp_done. That is fine for sockets that are not
+	 * yet in the ehash table. But for us we already are there. Thus,
+	 * if we unlock we run the risk of processing packets while inside
+	 * tcp_done() and friends. That can cause all kind of problems...
+	 */
+
+	/* The below has to be done to allow calling inet_csk_destroy_sock */
+	sock_set_flag(sk, SOCK_DEAD);
+	percpu_counter_inc(sk->sk_prot->orphan_count);
+	inet_sk(sk)->inet_num = 0;
+
+	tcp_done(sk);
+
+	/* sk_clone_lock locked the socket and set refcnt to 2 */
+	bh_unlock_sock(sk);
+	sock_put(sk);
+}
+
static int mptcp_alloc_mpcb(struct sock *meta_sk, __u64 remote_key,
__u8 mptcp_ver, u32 window)
{
@@ -1372,8 +1394,7 @@ static int mptcp_alloc_mpcb(struct sock *meta_sk, __u64 remote_key,
kmem_cache_free(mptcp_sock_cache, master_tp->mptcp);
master_tp->mptcp = NULL;

-	inet_csk_prepare_forced_close(master_sk);
-	tcp_done(master_sk);
+	mptcp_icsk_forced_close(master_sk);
return -EINVAL;

err_inherit_port:
@@ -2121,8 +2142,7 @@ static int __mptcp_check_req_master(struct sock *child,

if (mptcp_create_master_sk(meta_sk, mtreq->mptcp_rem_key,
mtreq->mptcp_ver, child_tp->snd_wnd)) {
-		inet_csk_prepare_forced_close(meta_sk);
-		tcp_done(meta_sk);
+		mptcp_icsk_forced_close(meta_sk);

return -ENOBUFS;
}
@@ -2336,9 +2356,11 @@ struct sock *mptcp_check_req_child(struct sock *meta_sk,
/* Drop this request - sock creation failed. */
inet_csk_reqsk_queue_drop(meta_sk, req);
reqsk_queue_removed(&inet_csk(meta_sk)->icsk_accept_queue, req);
-		inet_csk_prepare_forced_close(child);
-		tcp_done(child);

+		mptcp_icsk_forced_close(child);

bh_unlock_sock(meta_sk);

return meta_sk;
}
