It looks like a detail, but it's critical if we're dealing with
somebody, such as near-future self, using TCP_REPAIR to migrate TCP
connections in the guest or container.

The last packet sent from the 'source' process/guest/container
typically reports a small window, or zero, because the
guest/container hadn't been draining it for a while.

The next packet, appearing as the target sets TCP_REPAIR_OFF on the
migrated socket, is a keep-alive (also called "window probe" in CRIU
or TCP_REPAIR-related code), and it comes with an updated window
value, reflecting the pre-migration "regular" value.

If we ignore it, it might take a while/forever before we realise we
can actually restart sending.

Fixes: 238c69f9af45 ("tcp: Acknowledge keep-alive segments, ignore them for the rest")
Signed-off-by: Stefano Brivio <sbrivio(a)redhat.com>
---
 tcp.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/tcp.c b/tcp.c
index af6bd95..2addf4a 100644
--- a/tcp.c
+++ b/tcp.c
@@ -1664,8 +1664,10 @@ static int tcp_data_from_tap(const struct ctx *c, struct tcp_tap_conn *conn,
 		tcp_send_flag(c, conn, ACK);
 		tcp_timer_ctl(c, conn);
 
-		if (p->count == 1)
+		if (p->count == 1) {
+			tcp_tap_window_update(conn, ntohs(th->window));
 			return 1;
+		}
 
 		continue;
 	}
-- 
2.43.0
On Tue, Feb 11, 2025 at 08:50:51PM +0100, Stefano Brivio wrote:
> It looks like a detail, but it's critical if we're dealing with
> somebody, such as near-future self, using TCP_REPAIR to migrate TCP
> connections in the guest or container.
> 
> The last packet sent from the 'source' process/guest/container
> typically reports a small window, or zero, because the
> guest/container hadn't been draining it for a while.
> 
> The next packet, appearing as the target sets TCP_REPAIR_OFF on the
> migrated socket, is a keep-alive (also called "window probe" in CRIU
> or TCP_REPAIR-related code), and it comes with an updated window
> value, reflecting the pre-migration "regular" value.
> 
> If we ignore it, it might take a while/forever before we realise we
> can actually restart sending.
> 
> Fixes: 238c69f9af45 ("tcp: Acknowledge keep-alive segments, ignore them for the rest")
> Signed-off-by: Stefano Brivio <sbrivio(a)redhat.com>

Reviewed-by: David Gibson <david(a)gibson.dropbear.id.au>

Although...

> ---
>  tcp.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
> 
> diff --git a/tcp.c b/tcp.c
> index af6bd95..2addf4a 100644
> --- a/tcp.c
> +++ b/tcp.c
> @@ -1664,8 +1664,10 @@ static int tcp_data_from_tap(const struct ctx *c, struct tcp_tap_conn *conn,
>  		tcp_send_flag(c, conn, ACK);
>  		tcp_timer_ctl(c, conn);
>  
> -		if (p->count == 1)
> +		if (p->count == 1) {

... not really this patch, but this condition seems wrong to me.
IIUC it's attempting to detect the last packet in the batch, which
isn't necessarily the same thing as the _only_ packet in the batch.
Admittedly, it probably will be for a keep-alive, but I'm having a
hard time convincing myself it absolutely has to be.

Should this maybe be (i + 1 == p->count) instead?

> +			tcp_tap_window_update(conn, ntohs(th->window));
>  			return 1;
> +		}
> 
>  		continue;
>  	}

-- 
David Gibson (he or they)	| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you, not the other way
				| around.
http://www.ozlabs.org/~dgibson
On Wed, 12 Feb 2025 11:42:54 +1100
David Gibson <david(a)gibson.dropbear.id.au> wrote:

> On Tue, Feb 11, 2025 at 08:50:51PM +0100, Stefano Brivio wrote:
> > It looks like a detail, but it's critical if we're dealing with
> > somebody, such as near-future self, using TCP_REPAIR to migrate TCP
> > connections in the guest or container.
> > 
> > The last packet sent from the 'source' process/guest/container
> > typically reports a small window, or zero, because the
> > guest/container hadn't been draining it for a while.
> > 
> > The next packet, appearing as the target sets TCP_REPAIR_OFF on the
> > migrated socket, is a keep-alive (also called "window probe" in
> > CRIU or TCP_REPAIR-related code), and it comes with an updated
> > window value, reflecting the pre-migration "regular" value.
> > 
> > If we ignore it, it might take a while/forever before we realise we
> > can actually restart sending.
> > 
> > Fixes: 238c69f9af45 ("tcp: Acknowledge keep-alive segments, ignore them for the rest")
> > Signed-off-by: Stefano Brivio <sbrivio(a)redhat.com>
> 
> Reviewed-by: David Gibson <david(a)gibson.dropbear.id.au>
> 
> Although...
> 
> > ---
> >  tcp.c | 4 +++-
> >  1 file changed, 3 insertions(+), 1 deletion(-)
> > 
> > diff --git a/tcp.c b/tcp.c
> > index af6bd95..2addf4a 100644
> > --- a/tcp.c
> > +++ b/tcp.c
> > @@ -1664,8 +1664,10 @@ static int tcp_data_from_tap(const struct ctx *c, struct tcp_tap_conn *conn,
> >  		tcp_send_flag(c, conn, ACK);
> >  		tcp_timer_ctl(c, conn);
> >  
> > -		if (p->count == 1)
> > +		if (p->count == 1) {
> 
> ... not really this patch, but this condition seems wrong to me.
> IIUC it's attempting to detect the last packet in the batch, which
> isn't necessarily the same thing as the _only_ packet in the batch.

No, not really, I just want to select one-packet batches on purpose.
If a keep-alive is part of a batch 1. it's not a keep-alive and 2. it
would probably need a more complicated handling which I hadn't really
time to think about.

See previous discussion on this:
https://archives.passt.top/passt-dev/Zz01CDMNyFN-Ze68@zatzit

> Admittedly, it probably will be for a keep-alive, but I'm having a
> hard time convincing myself it absolutely has to be.

It is, because it makes no sense to batch keep-alives...

> Should this maybe be (i + 1 == p->count) instead?

-- 
Stefano