author	Paolo Abeni <pabeni@redhat.com>	2020-11-19 11:46:03 -0800
committer	Jakub Kicinski <kuba@kernel.org>	2020-11-20 15:33:25 -0800
commit	ea4ca586b16ff2eb6157fe13969eb72d2403a3a1 (patch)
tree	8d17cb274f9432777f1f69fe0ee729b714cc345a /block/bfq-iosched.c
parent	fa3fe2b150316b294f2c662653501273ff25bba8 (diff)
mptcp: refine MPTCP-level ack scheduling
Sending a timely MPTCP-level ack is somewhat difficult when the insertion into the msk receive queue is performed by the worker. It needs a TCP-level dup-ack to notify the MPTCP-level ack_seq increase, as both the TCP-level ack seq and the rcv window are unchanged.

We can actually avoid processing incoming data with the worker, and let the subflow or recvmsg() send acks as needed.

When recvmsg() moves the skbs inside the msk receive queue, the msk space is still unchanged, so tcp_cleanup_rbuf() could end up skipping TCP-level ack generation. Anyway, when __mptcp_move_skbs() is invoked, a known amount of bytes is going to be consumed soon: we update the rcv wnd computation taking them into account.

Additionally, we need to explicitly trigger tcp_cleanup_rbuf() when recvmsg() consumes a significant amount of the receive buffer.

Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
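[Illustrative note, not part of the commit] A minimal, self-contained sketch of the last point: track how much of the receive buffer recvmsg() has consumed since the last ack pass, and force an ack/window update once that amount becomes significant. This is plain userspace C, not the kernel patch; the names (rbuf_state, should_cleanup_rbuf, account_recvmsg) and the 1/4 threshold are hypothetical and only illustrate the idea described above.

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Hypothetical per-connection receive-buffer bookkeeping. */
struct rbuf_state {
	size_t rcvbuf;	/* total receive buffer size */
	size_t copied;	/* bytes consumed by recvmsg() since the last ack pass */
};

/* Consider the consumed amount "significant" once it exceeds a quarter
 * of the receive buffer (arbitrary threshold, for illustration only). */
static bool should_cleanup_rbuf(const struct rbuf_state *rs)
{
	return rs->copied > rs->rcvbuf / 4;
}

static void account_recvmsg(struct rbuf_state *rs, size_t bytes)
{
	rs->copied += bytes;
	if (should_cleanup_rbuf(rs)) {
		/* In the kernel this is roughly where tcp_cleanup_rbuf()
		 * would be triggered on a subflow so the peer sees the
		 * updated window / MPTCP-level ack. Here we just log. */
		printf("ack pass after %zu of %zu bytes consumed\n",
		       rs->copied, rs->rcvbuf);
		rs->copied = 0;
	}
}

int main(void)
{
	struct rbuf_state rs = { .rcvbuf = 64 * 1024, .copied = 0 };

	/* Simulate a few recvmsg() calls of varying size. */
	account_recvmsg(&rs, 4096);
	account_recvmsg(&rs, 8192);
	account_recvmsg(&rs, 16384);	/* crosses the 1/4 threshold */
	return 0;
}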
Diffstat (limited to 'block/bfq-iosched.c')
0 files changed, 0 insertions, 0 deletions